It’s OK, My eval is Sandboxed (No It’s Not)

The idea of using eval has always been an interesting debate. Instead of writing logic which accounts for possibly hundreds of different scenarios, creating a string with the correct JavaScript and then executing it dynamically is a much simpler solution. This isn’t a new approach to programming and is commonly seen in languages such as SQL (stored procedures vs. dynamically generated statements). On one hand, it can save a developer an immense amount of time writing and debugging code. On the other, its power is something which can be abused because of its high execution privileges in the browser.

The question is, “should it ever be used?” It would technically be safe if there were a way of securing all the code it evaluates, but that limits its effectiveness and goes against its dynamic nature. So is there a balance point where using it is secure, yet flexible enough to warrant the risk?

For example purposes, we’ll use the following piece of code to show the browser has been successfully exploited: alert('Be sure to drink your Ovaltine.'); If the browser is able to execute that code, then restricting the use of eval failed.

In the most obvious example, where nothing is sanitized, executing the alert is trivial:
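A minimal sketch of what that looks like (the userInput variable is an illustrative stand-in for whatever the attacker supplies):

    // nothing is checked; the input goes straight into eval
    var userInput = "alert('Be sure to drink your Ovaltine.')";
    eval(userInput); // the alert fires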

eval will treat any input as code and execute it. So what if eval is restricted to only executing input which evaluates to a complete statement, such as assigning a value to a variable?
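Something along these lines (again, the variable names are illustrative):

    // the input is forced into an assignment, so only a "complete statement" runs
    var userInput = "alert('Be sure to drink your Ovaltine.')";
    eval("var total = " + userInput + ";"); // the alert still fires; total is undefined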

Nope, this still successfully executes. In JavaScript every function call produces a value (undefined when nothing is explicitly returned), so calling alert and assigning its undefined result to total is perfectly valid.

What about forcing a conversion to a number?
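For example, wrapping the input in a numeric conversion inside the evaluated string (parseFloat is used here purely for illustration; the original may have converted differently):

    var userInput = "alert('Be sure to drink your Ovaltine.')";
    eval("var total = parseFloat(" + userInput + ");"); // the alert fires; total is NaN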

This still executes as well, because the alert fires when the expression is evaluated; its return value is merely converted to a string and then parsed into a number (yielding NaN).

The following does stop the alert from firing:
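For instance, converting the input to a number before it ever reaches eval (a sketch, not the only way to write it):

    var userInput = "alert('Be sure to drink your Ovaltine.')";
    var total = eval(parseFloat(userInput)); // parseFloat returns NaN, so nothing executes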

But this is rather pointless, because eval isn’t necessary. It’s much easier to assign the value to the total variable directly.

What about overriding the global function alert with a local function?
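Roughly like this (runUserCode is a hypothetical wrapper used throughout these sketches):

    function runUserCode(userInput) {
      // shadow the global alert inside this scope
      var alert = function () { /* swallow the call */ };
      eval(userInput); // a bare alert(...) now hits the local stub
    }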

This does work for the current scenario. It shadows the global alert function with the local one, but it doesn’t solve the problem. The alert function can still be called explicitly from the window object itself.
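Using the sketch above, a payload like this slips right past the local stub:

    runUserCode("window.alert('Be sure to drink your Ovaltine.')");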

With this in mind, it is possible to remove any reference to window (or alert for that matter) in the code string before executing.
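An illustrative version of that sanitizer:

    function runUserCode(userInput) {
      // strip the dangerous identifiers before executing
      var cleaned = userInput.replace(/window/g, '').replace(/alert/g, '');
      eval(cleaned);
    }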

This works when the word ‘window’ appears intact, but the following code still executes successfully:
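For example (the exact payload is illustrative; the point is that the forbidden words are split apart):

    // neither "window" nor "alert" appears as a contiguous word
    runUserCode("eval('win' + 'dow.ale' + 'rt(\"Be sure to drink your Ovaltine.\")')");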

Since ‘win’ and ‘dow’ are separated, the replacement never finds a match. The attack works by using the first eval to join the execution code back together while the second executes it. Since replace is used to remove the window reference, it’s also possible to do the same thing to eval like so:
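Extending the illustrative sanitizer:

    function runUserCode(userInput) {
      var cleaned = userInput
        .replace(/window/g, '')
        .replace(/alert/g, '')
        .replace(/eval/g, ''); // now eval is stripped as well
      eval(cleaned);
    }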

That stops the code from working, but it doesn’t stop this:
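One of many possible bypasses; self is a standard browser alias for window, and the method name is pieced together so none of the stripped words appear intact:

    runUserCode("self['ale' + 'rt']('Be sure to drink your Ovaltine.')");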

It is possible to keep accounting for different scenarios, whittling down the attack vectors, but this gets extremely complicated and cumbersome. Furthermore, using eval opens up other scenarios besides direct execution which may not be accounted for. Take the following example:
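A sketch of that kind of payload (again avoiding the stripped words):

    // nothing visibly malicious executes; JSON.parse is quietly replaced with eval
    runUserCode("JSON.parse = self['ev' + 'al'];");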

This code bypasses the replace sanitations, but its goal isn’t to execute malicious code directly. Its goal is to replace JSON.parse with eval; depending on the application, later code may assume malicious input is blocked, because JSON.parse doesn’t natively execute rogue code.

Take the following example:
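A sketch of how the follow-on attack could play out (the variable names and values are made up for illustration):

    // elsewhere in the application, code that trusts JSON.parse
    var stored = 'alert("Be sure to drink your Ovaltine.")'; // attacker-supplied "JSON"
    var user = JSON.parse(stored);        // JSON.parse is now eval, so the alert fires
    var config = JSON.parse('{"id": 1}'); // real JSON now throws a parse error,
                                          // but the rogue code has already run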

The code does throw an exception at the end due to invalid parsing, but that isn’t a problem for the attacker, because eval has already executed the rogue code. The eval statement was used to perform a lateral attack through functions which are assumed not to execute harmful instructions.

Server Side Validation

A great deal of the time, systems validate user input on the server, trying to ensure harmful content is never stored in the system. This is a smart idea, because scrubbing data before storing it means code that later reads the data shouldn’t have to worry about executing something it shouldn’t (you really can’t rely on this assumption, but it is a good start in protecting against attacks). With eval, this creates a false sense of security, because a language like C# does not handle strings the same way that JavaScript does. For example:
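An illustrative sketch of the C# side (the variable names are made up):

    // first example: the plain word is found and removed
    var clean1 = "window.alert('Be sure to drink your Ovaltine.')".Replace("window", "");

    // second example: the same identifier written with Unicode escapes slips through
    // (the @ keeps the C# compiler from converting \u0077 to 'w' at compile time)
    var clean2 = @"\u0077indow.alert('Be sure to drink your Ovaltine.')".Replace("window", "");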

In the first example, the C# code successfully removed the word ‘window’, but in the second it could not, because the word was written as Unicode escape sequences, which JavaScript still interprets as executable code. (In order to test the Unicode escapes in C#, you need to place an @ symbol in front of the string so that it is treated exactly as written; without it, the C# compiler converts the escapes.) Worse yet, JavaScript accepts identifiers which mix plain text and Unicode escapes, making it even harder to search for and replace potentially harmful values.
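On the JavaScript side, the surviving text is still perfectly executable (fromServer is an illustrative variable name):

    // the stored text literally contains \u0077indow, not window,
    // but eval resolves the escape to the identifier window
    var fromServer = "\\u0077indow.alert('Be sure to drink your Ovaltine.')";
    eval(fromServer); // the alert fires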

Assuming the dynamic code passed into eval is completely sanitized, and there is no possibility of executing rogue code, it should be safe to use. The problem is that it’s most likely not sanitized, and at best it’s completely sanitized for now.

Under The Mattress (or Compiled Code) is Not a Good Place to Hide Passwords

The question comes up from time to time about whether it is secure to store passwords in code. Ultimately, it’s probably a bad idea strictly from a change management perspective, because you are most likely going to need to change the password at some point in the future. Furthermore, passwords stored in compiled code are easy to retrieve if someone ever gets hold of the assembly.

Using the following code as an example:
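A minimal sketch of the kind of code in question (the class, method, and password value are made up):

    public class LookForPasswords
    {
        // the constant is stored as plain text in the compiled assembly
        private const string Password = "Sup3rS3cret!";

        public static bool Authenticate(string attempt)
        {
            return attempt == Password; // a decompiler such as ILSpy shows the value immediately
        }
    }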

ILSpy Showing Password

So what about storing the password in some secure location and loading it into memory? This requires the attacker to take more steps to achieve the same goal, but it is still not impossible. In order to acquire the contents in memory (assuming the attacker can’t just attach a debugger to the running assembly), something will have to force the program to move the memory contents to a file for analysis.

Mark Russinovich wrote a utility called ProcDump which easily does the trick. Look for the name of the process (my process is named LookForPasswords) in Task Manager and run the following command:
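A command along these lines (-ma tells ProcDump to write a full memory dump):

    procdump -ma LookForPasswords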

This creates a file akin to LookForPasswords.exe_140802_095325.dmp and it contains all the memory information of the running process. To access the file contents you can either use Visual Studio or something like WinDbg.

WinDbg:
Open Dump File

After you open the dump file, you’ll need to load SOS.dll to access information about the .NET runtime environment in WinDbg.
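For a .NET 4 process that is typically:

    .loadby sos clr

(For the 2.0/3.5 runtime it would be .loadby sos mscorwks.)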

LoadBySos

Once this is loaded, you can search the dump file for specific object types. So to get the statistics on strings (System.String):
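With the SOS extension loaded, that looks like:

    !dumpheap -type System.String -stat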

String Statistics

This command will display a lot of information: the method table for the type, how many instances exist, how much memory they take up, and so on. What you need to know is where the string data itself lives in memory. To get the details of a specific object:
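In SOS that is typically a two-step process; <object address> stands in for an address taken from the first command’s output:

    !dumpheap -type System.String
    !do <object address>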

For example:

Show String List

Show Memory Address

Show Offset

In a string object, the actual character data is located at the object’s memory address plus the offset of the first character (which is c here). You can see it by dumping the memory at that location.
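One way to do that is with WinDbg’s du (display Unicode) command, where <object address> again stands in for an address reported by the earlier commands:

    du <object address>+c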

Doing this for each string in the program would be rather tedious and time-consuming, considering most applications are significantly larger than the example application. WinDbg solves this issue with its .foreach command, which loops through all the string objects and prints out their contents:
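Something like the following (the -short option trims !dumpheap’s output down to just the object addresses so they can be fed to !do):

    .foreach (str {!dumpheap -type System.String -short}) {!do str}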

Show all strings

To mitigate the attack of dumping a program’s memory, Microsoft added the System.Security.SecureString datatype in .NET 2.0. Although effective, it has some drawbacks, mainly that to use it effectively you have to work with pinned objects, and doing this requires checking the “Allow unsafe code” flag on the project.
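A rough sketch of the sort of usage involved (the buffer contents are made up; this only compiles with unsafe code enabled):

    using System;
    using System.Security;

    class Program
    {
        static unsafe void Main()
        {
            char[] chars = { 'p', 'a', 's', 's' }; // plain text, only briefly in memory
            fixed (char* pChars = chars)           // pin the buffer
            {
                using (var secure = new SecureString(pChars, chars.Length))
                {
                    // hand the SecureString to whatever needs the credential
                }
            }
            Array.Clear(chars, 0, chars.Length);   // wipe the plain-text copy
        }
    }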

Unsafe Compile

Most organizations won’t allow unsafe code, which makes using SecureString pretty much pointless for them. With this in mind, the safest route for securing information is not to have it in memory at all; this removes the problem entirely. If it must reside in memory, then you can at least encrypt it while it’s stored there. This won’t solve every problem (if the unencrypted contents ever existed in memory, they may still be recoverable), but it will at least reduce the possibility of the information being stolen.

The contents for the above example can be located on GitHub