It’s OK, My eval is Sandboxed (No It’s Not)

The idea of using eval has always been an interesting debate. Instead of writing logic that accounts for possibly hundreds of different scenarios, creating a string with the correct JavaScript and then executing it dynamically is a much simpler solution. This isn’t a new approach to programming, and it is commonly seen in languages such as SQL (stored procedures vs. dynamically generated statements). On one hand, it can save a developer an immense amount of time writing and debugging code. On the other, its power is something that can be abused because of its high execution privileges in the browser.

The question is, “should it ever be used?” It would technically be safe if there were a way of securing all the code it evaluates, but this limits its effectiveness and goes against its dynamic nature. So, is there a balance point where using it is secure, but also flexible enough to warrant the risk?

For example purposes, we’ll use the following piece of code to show that the browser has been successfully exploited: alert('Be sure to drink your Ovaltine.'); If the browser is able to execute that code, then restricting the use of eval has failed.

In the most obvious example, where nothing is sanitized, executing the alert is trivial:
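
A minimal sketch (userInput stands in for whatever string reaches the page):

    var userInput = "alert('Be sure to drink your Ovaltine.');";
    eval(userInput); // the alert fires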

eval will treat any input as code and execute it. So what if eval is restricted to only executing input which will correctly evaluate to a complete statement?
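
A sketch of what that restriction might look like, with the result assigned to a variable:

    var total = eval(userInput); // the alert still fires; total is simply assigned undefined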

Nope, this still successfully executes. In JavaScript, all functions return something (alert returns undefined), so calling alert and assigning its result to total is perfectly valid.

What about forcing a conversion to a number?
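
Perhaps something along these lines:

    var total = parseFloat(eval(userInput)); // the alert fires during evaluation; only its return value is parsed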

This also still executes, because the alert function fires when the string is evaluated; only its return value is converted to a string and then parsed as a number.

The following does stop the alert from firing:
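
For example, converting the input before eval ever sees it (a sketch):

    var total = eval(parseFloat(userInput)); // parseFloat reduces the input to a number (or NaN), so no code survives to execute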

But this is rather pointless, because eval isn’t necessary. It’s much easier to assign the value to the total variable directly.

What about overriding the global function alert with a local function?
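
A sketch of the shadowing approach:

    (function () {
        var alert = function (message) { /* swallow the call */ };
        eval(userInput); // a bare alert(...) now resolves to the local no-op
    })();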

This does work for the current scenario. It shadows the global alert function with the local one but doesn’t solve the problem. The alert function can still be called explicitly from the window object itself.
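
For example, input along these lines still fires:

    eval("window.alert('Be sure to drink your Ovaltine.');"); // window.alert sidesteps the local override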

With this in mind, it is possible to remove any reference to window (or alert, for that matter) from the code string before executing it.
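
A sketch of that sanitization, stripping the word window as described below:

    eval(userInput.replace(/window/g, ''));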

This works when the word ‘window’ appears intact, but the following code still executes successfully:
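
One form the bypass can take (a reconstruction, not the original sample):

    var userInput = "eval('win' + 'dow' + \".alert('Be sure to drink your Ovaltine.');\")";
    eval(userInput.replace(/window/g, '')); // the replace finds nothing to remove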

Since ‘win’ and ‘dow’ are separated, the replacement does not find the word. The code works by using the first eval to join the execution code together while the second executes it. Since replace is used to remove the window references, it’s also possible to do the same thing to eval, like so:
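
Extending the sanitizer in the same fashion:

    eval(userInput.replace(/window/g, '').replace(/eval/g, ''));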

That stops the code from working, but it doesn’t stop this:
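
The original sample isn’t preserved here, but one possibility in the same spirit splits eval as well and reaches it through bracket notation (assuming the eval call runs in a non-strict global context, where this is the window object):

    var userInput = "this['ev' + 'al']('win' + 'dow' + \".alert('Be sure to drink your Ovaltine.');\")";
    eval(userInput.replace(/window/g, '').replace(/eval/g, '')); // neither replace finds anything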

It is possible to keep accounting for different scenarios, whittling down the different attack vectors, but this gets extremely complicated and cumbersome. Furthermore, using eval opens up other scenarios besides direct execution which may not be accounted for. Take the following example:
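
A sketch of the kind of payload described below, again avoiding the literal words window and eval:

    var userInput = "JSON.parse = this['ev' + 'al'];";
    eval(userInput.replace(/window/g, '').replace(/eval/g, '')); // nothing visibly malicious runs yet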

This code bypasses the replace sanitizations, and its goal wasn’t to execute malicious code. Its goal is to replace JSON.parse with eval; depending on the application, the surrounding code might assume that malicious content is blocked, because JSON.parse doesn’t natively execute rogue code.

Take the following example:
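
The original sample isn’t preserved, but here is a sketch consistent with the description, assuming JSON.parse has already been swapped for eval as above:

    // later, elsewhere in the application
    var data = JSON.parse("alert('Be sure to drink your Ovaltine.');"); // valid JavaScript, so the alert fires
    var settings = JSON.parse('{"total": 5}'); // eval can't handle a bare object literal and throws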

The code does throw an exception at the end due to invalid parsing, but that isn’t a problem for the attacker, because eval has already executed the rogue code. The eval statement was used to perform a lateral attack against functions which are assumed not to execute harmful instructions.

Server Side Validation

Much of the time, systems validate user input on the server, trying to ensure harmful information is never stored in the system. This is a smart idea, because sanitizing input before storing it means code that later reads the data doesn’t have to verify it isn’t executing something it shouldn’t (you really shouldn’t, and can’t, rely on this assumption alone, but it is a good start in protecting against attacks). With eval, this can create a false sense of security, because a language like C# does not handle strings the same way that JavaScript does. For example:
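
A sketch in C# (the payload strings are illustrative):

    // the obvious payload is caught
    string input = "window.alert('Be sure to drink your Ovaltine.');";
    string clean = input.Replace("window", string.Empty); // leaves a harmless ".alert(...)"

    // the same payload written as Unicode escape sequences sails through
    string encoded = @"\u0077\u0069\u006e\u0064\u006f\u0077.alert('Be sure to drink your Ovaltine.');";
    string clean2 = encoded.Replace("window", string.Empty); // finds nothing, yet JavaScript still reads it as window.alert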

In the first example, the C# code successfully removed the word ‘window’, but in the second, it failed, because the payload was written as Unicode escape sequences, which JavaScript happily interprets as executable instructions. (In order to test the Unicode characters in C#, you need to place an @ symbol in front of the string so that it is treated exactly as written. Without it, the C# compiler will convert the escapes to their character equivalents.) Worse yet, JavaScript can interpret strings which are a mixture of plain text and Unicode escapes, making it even more difficult to search for and replace potentially harmful values.

Assuming the dynamic code passed into eval is completely sanitized, and there is no possibility of executing rogue code, it should be safe to use. The problem is that it’s most likely not sanitized, and at best it’s completely sanitized for now.

I’m out of Range? You’re out of Range!

In IIS there are several different options allowing you to control the behavior of an application, and with all of these settings Microsoft attempts to validate that the entered values are within the accepted ranges. Unfortunately, if you are updating multiple settings at once, it’s not always clear which entry causes an issue, and it seems that Windows key stores can become corrupt, preventing valid updates to the username and password.

[Screenshot: Modify Settings]

[Screenshot: Alert Message]


While updating the username and password in the configuration section, IIS responds with the “Value does not fall within the expected range” error.

[Screenshot: Set Credentials]

The system allows the credentials to be set to any of the built-in accounts, but it wouldn’t set them to a user requiring a password.

[Screenshot: Default password]

IIS stores its configuration data in an XML file located in the System32\Inetsrv\Config folder, allowing you to see what values it actually stores after an update. This is important, because it shows which entered values are transformed, encrypted, etc., and provides insight into what might be happening:

[Screenshot: Running Tasks]

To view it, you’ll need administrative access to the folder, and if you are running a 64-bit operating system, you’ll need a 64-bit application to open the file. 64-bit Windows systems have the File System Redirector, which silently redirects any 32-bit application trying to access the System32 directory to SysWOW64, as the System32 directory is reserved for 64-bit applications. This means that if a 32-bit text editor (which, unfortunately, most are) tries to open the ApplicationHost.config used by IIS, it will be redirected to a different config file, and any changes made to it won’t be reflected in IIS (unless you’ve specifically installed the 32-bit version of IIS). Notepad is 64-bit, so it can view and modify the file.

If IIS has any application pools running under specific users, the application host file will have a node looking similar to this:
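
A trimmed sketch of that node (the pool name, account, and encrypted blob are illustrative):

    <applicationPools>
      <add name="MyAppPool" autoStart="true">
        <processModel identityType="SpecificUser" userName="MYDOMAIN\ServiceAccount"
                      password="[enc:IISWASOnlyAesProvider:...encrypted data...:enc]" />
      </add>
    </applicationPools>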

It shows the username and password the application pool uses, and the encryption provider used to store the password securely. By default, IIS uses IISWASOnlyAesProvider to encrypt application pool passwords, and it is this encryption that can cause the “Value does not fall within the expected range” error when the key store becomes corrupt.

Microsoft has a tool, AppCmd, in the C:\Windows\System32\inetsrv directory which can quickly help test if this is the problem. Running the following command will display all the information concerning application pools, including the usernames and decrypted passwords.
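
For example, from an elevated prompt:

    C:\Windows\System32\inetsrv\appcmd list apppool /text:*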

[Screenshot: appcmd output showing the username and password]

If IIS has a problem with encryption/decryption, then it will show up like this:

[Screenshot: appcmd output with no password]

A side note about security

What you have just seen is a very easy way for someone to gain usernames and passwords for potentially very high-level accounts. If an IIS server is compromised and the account is an Active Directory account, the attacker now has a domain account with which to do further damage. If that account has elevated privileges on other machines, or even worse, is a domain administrator, the attacker now has access to those as well. (Remember, SQL Server allows the use of domain accounts to designate access.)

Furthermore, the attacker doesn’t need access to the server if the ApplicationHost.config is available elsewhere. IIS allows multiple servers to use a Shared Configuration. If the share housing the file is available on the network with relaxed security, then all an attacker needs to do is steal the file and use another machine with the AppCmd tool to read its contents.

Although not recommended, modifying the ApplicationHost.config file directly and not encrypting the password works:
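
A sketch of the unencrypted form (the account and password are placeholders):

    <processModel identityType="SpecificUser" userName="MYDOMAIN\ServiceAccount" password="PlainTextPassword" />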

This doesn’t correct the issue, as the encryption is still broken, but it does allow that application pool to run unhindered, and fixing the encryption is relatively simple. It is possible to recreate the AES encryption keys, but it’s much easier to import a copy from an existing machine. Microsoft provides the Aspnet_Regiis tool with the .NET Framework, located here: C:\Windows\Microsoft.NET\Framework64\v4.0.30319\.
The following command exports the keys to the temp directory in a file named AESKeys.xml.
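
A sketch of the export, using the default container name described below (-pri includes the private key, which the import requires):

    aspnet_regiis -px "iisWasKey" "C:\temp\AESKeys.xml" -pri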

The -px switch nominates the key container to export. By default it is iisWasKey, but in case it has been changed, IIS designates the key container it uses in the ApplicationHost.config here:
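
The relevant section looks something like this (type and session key trimmed):

    <configProtectedData>
      <providers>
        <add name="IISWASOnlyAesProvider" type="..." keyContainerName="iisWasKey"
             useMachineContainer="true" sessionKey="..." />
      </providers>
    </configProtectedData>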

To import it, move the file to the machine in question and run the same command with the -pi switch instead:
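
    aspnet_regiis -pi "iisWasKey" "C:\temp\AESKeys.xml"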

If IIS is configured with the defaults, this should fix the issue with minimal fuss. If it has been configured with different key stores, etc., it may be slightly more difficult to update, but the process should be largely the same.

Under The Mattress (or Compiled Code) is Not a Good Place to Hide Passwords

The question comes up from time to time about storing passwords in code and whether it is secure. Ultimately, it’s probably a bad idea strictly from a change management perspective, because you are most likely going to need to change the password at some point in the future. Furthermore, passwords stored in compiled code are easy to retrieve if someone ever gets hold of the assembly.

Using the following code as an example:
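
A sketch of such a program (the name and password literal are hypothetical):

    internal class Program
    {
        // hard-coded credential; exactly what a decompiler will surface
        private const string Password = "SuperSecret123!";

        private static void Main()
        {
            System.Console.WriteLine("Connecting with the embedded password...");
        }
    }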

[Screenshot: ILSpy showing the password]

So what about storing the password in some secure location and loading it into memory? This requires the attacker to take more steps to achieve the same goal, but it is still not impossible. In order to acquire the contents of memory (assuming the attacker can’t just attach a debugger to the running assembly), something will have to force the program to dump its memory contents to a file for analysis.

Mark Russinovich wrote a utility called ProcDump which easily does the trick. Look for the name of the process (my process is named LookForPasswords) in Task Manager and run the following command:
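
A sketch of the command (-ma asks ProcDump for a full memory dump):

    procdump -ma LookForPasswords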

This creates a file akin to LookForPasswords.exe_140802_095325.dmp, and it contains all the memory information of the running process. To access the file contents you can use either Visual Studio or something like WinDbg.

WinDbg:
[Screenshot: Open Dump File]

After you open the dump file, you’ll need to load SOS.dll to access information about the .NET runtime environment in WinDbg.
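
For a .NET 4 process, the usual command is (for .NET 2.0, the module is mscorwks rather than clr):

    .loadby sos clr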

[Screenshot: LoadBySos]

Once this is loaded, you can search the dump file for specific object types. To get statistics on strings (System.String), use the SOS dumpheap command:
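
    !dumpheap -stat -type System.String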

[Screenshot: String Statistics]

This command will display a lot of information: the method table for the type, how many instances exist, how much memory they occupy, etc. What you need to know is where the string data itself lives in memory. To list the individual instances of a specific type, pass the method table address from the first column of the -stat output:
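
    !dumpheap -mt <method table address>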

For example:

[Screenshot: Show String List]

[Screenshot: Show Memory Address]

[Screenshot: Show Offset]

In a string object, the actual character data is located at the object’s memory address plus an offset (which is c here). You can see this by examining a specific String object with WinDbg’s du command, which displays memory as Unicode characters:
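
    du <memory address>+c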

or, substituting a hypothetical address for the one shown in the screenshots:
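
    du 0141f77c+c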

Doing this for each string in the program would be rather tedious and time-consuming, considering most applications are significantly larger than the example application. WinDbg solves this issue with the .foreach command, which loops through all the string objects and prints out their contents.
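
A sketch of that loop, using dumpheap’s -short option to emit bare addresses:

    .foreach (addr {!dumpheap -type System.String -short}) {!DumpObj ${addr}}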

[Screenshot: Show all strings]

To address the issue of attacks that force a memory dump, Microsoft added the System.Security.SecureString datatype in .NET 2.0. Although effective, it has some drawbacks, mainly that to use it effectively you have to work with pinned objects, and doing so requires checking the unsafe flag on the project.

[Screenshot: Unsafe Compile]

Most organizations won’t allow unsafe code execution, which makes using SecureString pretty much pointless. With this in mind, the safest route for securing information is to not have it in memory at all; this removes the problem entirely. If it must reside in memory, then at least encrypt it while it’s stored there. This won’t solve every problem (if unencrypted contents ever existed in memory, they may still be recoverable), but it will at least reduce the possibility of the data being stolen.

The contents for the above example can be found on GitHub.