Archives April 2020

Exploiting JD bugs in crypto contexts to achieve RCE and tampering with Java applets

A little over two months ago at Red Timmy Security we released EnumJavaLibs, a tool that discovers which libraries are available in the classpath of a remote Java application.

The idea behind the tool is interesting enough that it would be a shame not to talk about it. It came from an assessment conducted a year and a half ago, which led to the discovery of a Java deserialization vulnerability in a critical application.

As we are under NDA for that project, we cannot divulge the name of the vendor or the affected product. However, we can describe the architecture in generic terms and give you something juicy to learn from.

General architecture

The solution under analysis was a web application written with GWT that could be accessed only after TLS mutual authentication performed by means of an HSM (Hardware Security Module) device. The authentication process required a Java applet.

The Google Web Toolkit (GWT) framework was used for both the server and client components. Separate from the GWT implementation, as already said, there was a Java applet, downloaded by the user's browser while interacting with the front-end web application before authentication. It was used solely to communicate, client side, with the hardware token where a private key was stored.

Basically, the GWT server application and the GWT client component were in direct communication. The GWT client also spoke to the Java applet, which in turn talked to the hardware token to perform operations such as data signing, needed during the authentication process. The result was finally transmitted to the server by the GWT client. The following diagram should clarify the generic architecture.

But let’s start from scratch.

The authentication process

When the hardware token is plugged in, a PIN is typed in to unlock the device and the login can start. At this point the authentication process is initiated and a Login request to the server is issued (see screenshot below).

The Login request is very interesting because it contains an ASN.1 DER-encoded stream (starting with "MII[..]" in the POST body) wrapped in another layer of Base64 encoding. Why don't we decode it with the "openssl" binary and look inside?
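In our case something along these lines does the job (assuming the Base64 blob from the POST body has been saved to a file named "login.b64"; the file name is of course ours):

    $ openssl base64 -d -A -in login.b64 -out login.der
    $ openssl asn1parse -inform DER -in login.der -i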

Line 62 reveals the presence of a Java serialized object (the sequence "ACED0005") containing what seems to be a random number, a session ID and a timestamp. Maybe it will be deserialized on the server side? To test our hypothesis, we replace the object at line 62 with something else (a payload generated with "ysoserial") and submit it to the server. However, the procedure does not seem to work. But why? The answer we are looking for arrives after we decide to decompile and analyse the Java applet too.

The analysis confirmed that we were right: at line 62 there is a Java serialized object, but line 1591 contains an RSA-SHA256 digital signature which protects the Java object itself from tampering. Now it is clear why the server is not collaborating: in order to inject our own Java serialized payload at line 62 we need a way to bypass that digital signature. One approach might be to extract the private key from the hardware token, but that would probably keep us busy for a long time. We opted instead for the "collaborative" approach. The Java applet cannot access the private key directly, however it can ask the HSM (through a specially exposed interface) to sign an arbitrary value and return it to the applet. The Java applet could be modified to act as our proxy, but there is another problem: it is digitally signed too, and with the latest versions of the JRE, Java refuses to run a tampered applet even when all the security checks are set to the minimum.

Bypassing the Java checks and opening our way to JD

This is how we bypassed it in nine steps and managed to inject an arbitrary serialized payload:

  • The Java serialized object that holds the attacker payload is generated with “ysoserial” or similar tools.
  • The applet is downloaded from the web server, decompiled and its code modified in order to read from disk the custom Java object generated at point 1, just before it is digitally signed.
  • All the signature files inside "appletfile.jar/META-INF" are deleted, making the applet unsigned (see the example command right after this list).
  • The Java applet is now signed again, but this time with a self-signed certificate:

      $ keytool -genkey -keyalg RSA -alias myKey -keystore myKeystore -validity 360

      $ jarsigner -keystore myKeystore -verbose appletfile.jar myKey

  • The security settings in the Java configuration are set to "High" (the lowest value available). The target website is also added to the exception list and the certificate generated at step 4 is imported as a "Secure Site CA". This weakens and circumvents the security controls of the local Java installation; there are multiple other ways to achieve the same result, so feel free to follow any procedure you are comfortable with.
  • We browse the target website as usual for the first time; the "appletfile.jar" is downloaded and stored by Java in its cache.
  • We then locate the jar file inside the cache directory and replace it with the modified and re-signed applet.
  • Now we close the browser and open it again, visiting the target website for the second time. Java will complain about the modified applet, but all the security warnings popping up can be safely ignored.
  • Upon performing the login, our custom serialized Java object is signed and sent to the server.
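As an aside on step 3, one quick way to strip the signature files from the jar (assuming the standard "zip" utility is available) is:

    $ zip -d appletfile.jar "META-INF/*.SF" "META-INF/*.RSA" "META-INF/*.DSA"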

Logging in of course fails, but the Java deserialization attack has already been delivered at this point. In our specific case we noticed that the server's reply would change based on what we stored inside our object. For example, if we serialize an empty "String" object, the reply shows the code XXXX5:

HTTP/1.1 200 OK
Connection: close

//EX[4,0,3,'WdZtoTv',2,0,1,["com.xxxxxxxxxxxxxxxx.xxxxClientException/1437829146","java.util.Date/3385151746","XXXX5","Your operation failed due to an internal error. "],0,7]

If instead we serialize a nonexistent object (one not in the classpath of the remote Java application), a similar text message is received in the reply, but with the code "XXXX9" instead. In simple terms, the application is behaving as an oracle! From here came the idea of creating EnumJavaLibs: since the server responds with two different error codes, this can be leveraged to find out which libraries are in its classpath and customize the attack, instead of blindly generating payloads.
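To give an idea of what such a probe looks like, here is a minimal sketch that serializes an empty String and prints it Base64-encoded, ready to be placed at line 62 and signed (the class name is ours, nothing vendor-specific):

    import java.io.ByteArrayOutputStream;
    import java.io.ObjectOutputStream;
    import java.util.Base64;

    public class Probe {
        public static void main(String[] args) throws Exception {
            // Serialize a harmless probe object (an empty String). Swapping it for an
            // instance of another class tells us whether that class exists server side.
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
                oos.writeObject("");
            }
            byte[] probe = bos.toByteArray();   // starts with the magic bytes AC ED 00 05
            System.out.println(Base64.getEncoder().encodeToString(probe));
        }
    }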

Due to the limited amount of time left to close the investigation, and in order to confirm whether what we had discovered was actually a Java deserialization vulnerability or something else, we asked the vendor for permission to look at the server-side code. Our request was finally accepted, and this is what we found.

On the server side, in the part of the code handling the incoming login request, the stream obtained from "pkcs7.getContentStream()" is fed to a "readObject()" call that performs the deserialization. This confirms the criticality of the vulnerability.

As can be seen from the two code snippets above, exception "XXXX9" is thrown when "readObject()" fails, i.e. when anything goes wrong during deserialization. Exception "XXXX5" is instead thrown when the object that is read is not an instance of LoginRequest, a check that happens after deserialization and therefore does not prevent the attack. Of course bad things can only be done if a valid gadget chain is available (there was no look-ahead protection implemented, by the way), but it is not the goal of this blog post to talk about that.
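For reference, look-ahead protection boils down to validating the class names in the stream before the object graph is rebuilt. A minimal sketch of the idea (the allow-list entries are purely illustrative, not the vendor's classes):

    import java.io.IOException;
    import java.io.InputStream;
    import java.io.InvalidClassException;
    import java.io.ObjectInputStream;
    import java.io.ObjectStreamClass;
    import java.util.Set;

    public class LookAheadObjectInputStream extends ObjectInputStream {
        // Only these classes may be deserialized; anything else is rejected up front.
        private static final Set<String> ALLOWED =
                Set.of("com.example.LoginRequest", "java.util.Date");

        public LookAheadObjectInputStream(InputStream in) throws IOException {
            super(in);
        }

        @Override
        protected Class<?> resolveClass(ObjectStreamClass desc)
                throws IOException, ClassNotFoundException {
            if (!ALLOWED.contains(desc.getName())) {
                throw new InvalidClassException(desc.getName(), "unauthorized deserialization attempt");
            }
            return super.resolveClass(desc);
        }
    }

Wrapping the incoming stream in such a class and calling "readObject()" on it rejects gadget chains before any of their code runs.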

Conclusion

Nice experience! As a general recommendation, do not trust user input when performing deserialization and implement look-ahead protection to prevent an unexpected object from being deserialized.

From an attacker's standpoint, be aware that Java serialized objects can be hidden inside other data structures, such as certificates. Also, digital signatures on Java applets only protect against an attacker who swaps an applet for a malicious one in transit, in which case the client gets warnings. They do NOT protect against an attacker tampering with the applet running on their own system: the attacker can simply ignore the warnings.

Finally, if you have an application acting as an oracle when sending (malformed) serialized objects, try EnumJavaLibs. Below are some references in case you want to do a deep dive into the topic. And don't forget to follow us on twitter: https://twitter.com/redtimmysec

References:

https://www.owasp.org/index.php/Top_10-2017_A8-Insecure_Deserialization

https://www.ibm.com/developerworks/library/se-lookahead/index.html

Hey! Red Timmy Security will be at Blackhat Las Vegas this year too! Check out our trainings!

Practical Web Application Hacking Advanced 1-2 August and 3-4 August 2020.

How to hack a company by circumventing its WAF for fun and profit – part 2

So far, one of the most successful and most visited posts on our blog has been this one. Following readers' interest in the topic, we have decided to publish more content like it. What we will do periodically in this space is recount a few stories drawn from circumstances that really happened. We had a lot of fun living them first-hand during penetration testing sessions, red team operations or well-deserved bug bounties. We hope you will too.

The “single-encoded-byte” bypass

Some time ago, we were contacted to revalidate an RCE web vulnerability that could be triggered by visiting a URL similar to this:

https://victim.com/appname/MalPath/something?badparam=%rce_payload%

The application was hosted behind a BIG-IP device with the ASM (Application Security Manager) module enabled, a layer 7 web application firewall (WAF) available on F5's BIG-IP platforms. To react immediately and not leave the solution exposed to the internet, the network engineers decided to deploy a custom WAF rule to protect the endpoint until a formal fix was introduced in the application's code. Our role consisted of investigating the effectiveness and robustness of this rule, which looked something like this:

Its meaning was: if an HTTP GET request was intercepted (line 2) and the lowercase-converted URI contained both the strings "/malpath/" and "badparam=" (lines 3 and 4), the request would first be logged (line 7) and then blocked by returning HTTP error 404 to the client (line 8).

Indeed, after deploying the custom WAF rule, visiting the URL:

https://victim.com/appname/MalPath/something?badparam=%rce_payload%

…returns the 404 HTTP error, meaning that the rule is working and the WAF is doing its job properly.

However, by reading the F5 documentation we discovered that the directive “HTTP::uri” (lines 3 and 4) returns the URI of the request without first fully decoding it! So if we visit the URL:

https://victim.com/appname/%4dalPath/something?badparam=%rce_payload%

…with “%4d” being the URL-encoded form of the character ‘M’, this is what happens:

  • The WAF intercepts the GET request. The URI it carries, "/appname/%4dalPath/something?badparam=%rce_payload%", is first converted entirely to lowercase by the "tolower" function in the WAF rule, becoming "/appname/%4dalpath/something?badparam=%rce_payload%".
  • The URI is then searched for the strings "/malpath/" and "badparam=". However, the string "/%4dalpath/" we sent does not match the string "/malpath/" in the WAF rule (line 3), which is therefore not triggered.
  • Once the WAF control is bypassed, the HTTP request reaches the backend server, where URI canonicalization and decoding take place. The request, its URI now converted from "/appname/%4dalPath/something?badparam=%rce_payload%" to "/appname/MalPath/something?badparam=%rce_payload%", is passed to the target web application, which serves it.
  • The "%rce_payload%" is now executed and produces its effects (opening a reverse shell, invoking an operating system command, etc.).

The lesson learnt of the day: always ensure the URI is fully decoded before comparing and checking it. A complete lack of URI decoding is very bad. The use of the "contains" keyword in a WAF rule is even worse.
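To make the mismatch concrete, here is the rule's check rendered in Java purely for illustration (the real rule is an F5 iRule; the payload is shortened to a placeholder). The strings are compared against the raw, still percent-encoded URI:

    public class NaiveWafCheck {
        // Mimics the rule: lowercase the raw URI and look for the two strings.
        static boolean blocked(String rawUri) {
            String u = rawUri.toLowerCase();
            return u.contains("/malpath/") && u.contains("badparam=");
        }

        public static void main(String[] args) {
            System.out.println(blocked("/appname/MalPath/something?badparam=payload"));   // true: blocked
            System.out.println(blocked("/appname/%4dalPath/something?badparam=payload")); // false: bypassed
        }
    }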

Partial URL decoding = full bypass

A couple of days later we were called back again to carry out the revalidation of the revalidation (lol) as the fix in the application code was taking more time than expected.

This time visiting both the URLs:

https://victim.com/appname/MalPath/something?badparam=%rce_payload%

and

https://victim.com/appname/%4dalPath/something?badparam=%rce_payload%

returned the 404 error page. In simple words, the WAF blocked us.

Then we attempted to double-encode the letter "M" of "/MalPath/" with the sequence "%254d". The final URL to visit was:

https://victim.com/appname/%254dalPath/something?badparam=%rce_payload%

Nothing. The web application firewall blocked this request too. The reimplemented WAF rule now looked something like this:

Here, lines 15 to 24 are equivalent to lines 1-10 of the previous version of the rule. This time, however, the strings "/malpath/" and "badparam=" are checked against a variable called "$decodeUri" (lines 17 and 18) rather than against "HTTP::uri" as before. What is this variable and where does it come from? Lines 1 to 14 define its content. In simple terms, "$decodeUri" is just the URI of the request ("[HTTP::uri]" at line 2), URL-decoded at most "max" times (with "max" defined at line 4) through a "while" loop (lines 9-13) that uses a counter variable ("cnt", declared at line 5). The loop is interrupted when there are no more encoded characters left or when it has iterated "max" times ("cnt" being incremented at each iteration).

To better understand what's going on here, it may help to browse this page, write the fourfold percent-encoded letter "M", i.e. the sequence "%252525254d", into the text box and press the "Decode" button four times (see picture below), pausing each time to see what happens. Have you seen it? This is exactly what the WAF rule above is doing with our URI.
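The same behaviour can be sketched in Java (again, just an illustration of the logic, not the actual iRule; a hypothetical "max" of 4 is assumed):

    import java.net.URLDecoder;
    import java.nio.charset.StandardCharsets;

    public class DecodeLoop {
        // URL-decode the URI repeatedly, until nothing changes or "max" rounds are done.
        static String decodeUri(String uri, int max) {
            int cnt = 0;
            String decoded = uri;
            while (cnt < max) {
                String next = URLDecoder.decode(decoded, StandardCharsets.UTF_8);
                if (next.equals(decoded)) {
                    break;              // no encoded characters left
                }
                decoded = next;
                cnt++;
            }
            return decoded;
        }

        public static void main(String[] args) {
            // After 4 rounds "%252525254d" has only been reduced to "%4d", so a
            // contains("/malpath/") check on the result still does not match.
            System.out.println(decodeUri("/appname/%252525254dalPath/something", 4));
        }
    }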

So to bypass it, we have simply visited the following URL:

https://victim.com/appname/%252525254dalPath/something?badparam=%rce_payload%

On reception the WAF decodes the URI as follows:

  • (1st iteration of the loop):
    /appname/%2525254dalPath/something?badparam=%rce_payload%
  • (2nd iteration of the loop):
    /appname/%25254dalPath/something?badparam=%rce_payload%
  • (3rd iteration of the loop):
    /appname/%254dalPath/something?badparam=%rce_payload%
  • (4th iteration of the loop):
    /appname/%4dalPath/something?badparam=%rce_payload%

When the WAF rule exits the loop, the request's URI is set to "/appname/%4dalPath/something?badparam=%rce_payload%". Again, as in the previous case, the string "/malpath/" (line 17 of the WAF rule) does not match the "/%4dalpath/" in the URI, so the WAF lets the request pass through. The control is bypassed once more and the effect is again the execution of "%rce_payload%". GAME OVER for the second consecutive time.

One could think: "Ok, but this is a stupid mistake! Just fire these monkeys pretending to be security guys!". It is not that simple: the "-normalized" option, which undoubtedly simplifies the life of network engineers, is only available in the most recent versions of iRules, leaving the creation and deployment of a custom rule, a notoriously error-prone process, as the only option. However, it is still surprising how many sources out there publicly advertise these flawed examples, even in the official F5 documentation (see the screenshot below).

For copy & paste supporters, this kind of "code snippet" is like honey. Don't get us wrong: we have nothing against copy & paste, except when it is practised in cybersecurity without understanding what is being copied. In such circumstances copy & paste has countless times been the root cause of serious security vulnerabilities.

But let’s switch back to our main story! After the identification of the new issue the fix was quick (see the added part below).

Basically, right after the "while" loop (lines 9-13 in the picture at the beginning of this section), the WAF rule was updated with an "if" branch. At the end of the loop it checks whether the counter variable "cnt" equals the "max" variable. If it does, it means the "while" loop was interrupted while there were still characters left to decode in the URI. As we don't want to process a request that still contains encoded characters, the WAF rule responds with a 404 error, preventing the activation of the malicious payload.
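Mapped onto the earlier Java sketch, the fix amounts to treating an exhausted iteration budget as a reason to reject the request (again only an illustration of the logic, not the real iRule):

    import java.net.URLDecoder;
    import java.nio.charset.StandardCharsets;

    public class StrictDecodeLoop {
        // Returns null when the URI still contains encoded characters after "max"
        // rounds; the WAF rule translates that condition into a 404 response.
        static String decodeUriStrict(String uri, int max) {
            int cnt = 0;
            String decoded = uri;
            while (cnt < max) {
                String next = URLDecoder.decode(decoded, StandardCharsets.UTF_8);
                if (next.equals(decoded)) {
                    return decoded;     // fully decoded within the budget
                }
                decoded = next;
                cnt++;
            }
            return null;                // budget exhausted: block the request
        }

        public static void main(String[] args) {
            // Single encoding is decoded fine (and the later contains check catches it)...
            System.out.println(decodeUriStrict("/appname/%4dalPath/something", 4));
            // ...while the quadruple-encoded trick now exhausts the budget and is rejected.
            System.out.println(decodeUriStrict("/appname/%252525254dalPath/something", 4));
        }
    }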

Wrong assumptions … any gumption?

Most misconfigurations and bypass techniques at the WAF level are not linked to technical problems or bugs in the device itself, but to wrong assumptions in the logic of its rules.

Our story could have had a happy ending at this point. Two revalidations had already revealed some flaws in the web application firewall's configuration and finally a pretty tough WAF rule had been implemented, decently decoding the URI. To make it short, at some point somebody realized that the same vulnerability could also be exploited with an HTTP POST request. The WAF rule only protected against GET requests, so a new modification was needed. After the umpteenth change we were called back once more to verify that this time everything was working as expected.

To cover both exploitation cases (GET and POST), the network engineers decided to apply a fix by deleting line 16 of the WAF rule:

if ([HTTP::method] equals "GET") [...]

Without that line, the check on the URI would be performed on any HTTP request, regardless of the method specified. It could have been an elegant solution, if it had worked… but can you spot the problem here? Take a few seconds before answering. Well, when the URL below is browsed:

https://victim.com/appname/MalPath/something?badparam=%rce_payload%

the following HTTP request is issued:

GET /appname/MalPath/something?badparam=%rce_payload% HTTP/1.1
[...]

If, after discovering that the GET method is blocked at the WAF level, an attacker attempted to circumvent it via POST, the following request would be issued instead:

POST /appname/MalPath/something HTTP/1.1
[other HTTP headers]

badparam=%rce_payload%

Would the "%rce_payload%" be triggered this time, given all the premises above? Yes, it would. Why? The answer is in lines 17 and 18 of the WAF rule, shown below once more.

Let's compare the content of the attacker's POST request with the logic of the custom rule. At line 17, the fully decoded and lowercase-converted URI contains the string "/malpath/": true. Line 18, however, stems from an incorrect assumption: the parameters of a POST request reside in the HTTP body, not in the query string! Line 18 of the WAF rule checks for the presence of the string "badparam=" in the "$decodedUri" variable, which for the attacker's POST request contains only "/appname/MalPath/something". In other words, the WAF rule is not checking for the string "badparam=" where it should, so the rule is bypassed.
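In Java terms (again an illustration only, not the actual rule), the failing condition looks like this:

    public class PostBypass {
        public static void main(String[] args) {
            // For the attacker's POST request, the URI seen by the rule is only the
            // path: "badparam" now lives in the body, which the rule never inspects.
            String decodedUri = "/appname/MalPath/something".toLowerCase();
            System.out.println(decodedUri.contains("/malpath/"));  // true  (line 17)
            System.out.println(decodedUri.contains("badparam="));  // false (line 18) -> rule bypassed
        }
    }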

Officially, the revalidation of the revalidation of the revalidation was the last round of checks needed to be confident enough that the implemented WAF rule provided the desired level of protection against this web vulnerability. It took one week to tune the custom WAF rule and then, a few days later, it was removed because in the meantime the application code had been patched 😀

If it helps to repeat it once more: WAF virtual patching is fine if you need a bit of extra time before fixing a serious vulnerability, but it must not be abused. One week of exposure to RCE is a huge window for motivated attackers. The only way to sleep peacefully is to solve your problems at the code level and patch, patch, patch as soon as possible! It would probably have been worth dedicating all resources to patching the application code from the beginning rather than struggling to implement a working WAF rule by trial and error.

That's all for today. Other stories like this will be published in the future. So stay tuned and continue to follow us!

Hey! Red Timmy Security will be at Blackhat Las Vegas this year too! Check out our trainings!

Practical Web Application Hacking Advanced 1-2 August and 3-4 August 2020.

Cloud pentesting in Azure: one access key to rule them all

As cloud computing is on the rise, more of our penetration testing efforts will shift towards services hosted in the cloud. Testing a web app for vulnerabilities is no different whether it's hosted on-premises, in the cloud or in your basement. What does change is how the application interacts with other services. Take the database, for example: no longer located on your own SQL server but in the cloud as well. The storage solution used by Azure is called Azure Storage, and in this article we explore its weaknesses.

Azure Storage

The type of data that can be stored in an Azure Storage account is wide-ranging. You should think of it as the cloud replacement for file servers. Some examples:

  • Logs
  • Virtual machine hard disk images
  • Backups
  • Documents and files

There are multiple ways in which a user or an application can access this data. One of them is by use of a key.

By default, every storage account has two keys that provide full access to its data. They are called the primary key and the secondary key, although they both provide the same level of access. Multiple services using the storage account share these keys, so we are dealing with:

  • A full access key
  • Shared between services
  • Inconvenient to replace

It's a scary thought. To view these keys, we have to browse the Azure Portal, as can be seen in the image below.

The access keys are accessible to the Azure administrator by browsing to the storage account settings.

Attacker value

It’s immediately clear that this is an attractive lure for an attacker: lots of data and shared keys that give full access.

So where do we find this key? The first place to look is in source code and configuration files. Developers need to write code to access the storage, and might place the key inside their code (instead of the much more secure option, the Azure Key Vault). Storage account keys are 64-byte values, encoded in Base64.

Because code bases can be large, you can start by searching for usage of these classes and libraries:

  • Microsoft.WindowsAzure.Storage.Auth (.NET)
  • com.microsoft.azure.storage.StorageCredentials (Java)
  • AzureStorage.createBlobService (JavaScript)

In config files, you can try to grep on:

  • StorageConnectionString
  • StorageServiceKeys
  • StorageAccountKey
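Since a storage account key is 64 random bytes Base64-encoded, it always appears as an 88-character string ending in "==", so a rough regex search can complement the grep on those names (the pattern and file extensions below are just a suggestion):

    $ grep -rnoE "[A-Za-z0-9+/]{86}==" --include="*.config" --include="*.json" --include="*.xml" .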

For example, you can see from this GitHub screenshot that many repositories expose such keys within code. It’s not unthinkable that the same is possible for the assignment you are working on.

Azure storage keys can be found on GitHub
Searching all GitHub repositories for the string "StorageConnectionString" returns more than 35,000 results. Take for example this recently updated config file exposing the storage key.

Developer tools

A second way of finding these keys is to obtain them from the tools that developers use to access Azure Storage. For this scenario, you will need a foothold on a developer's machine. A number of tools are frequently used for this purpose:

  • Azure Storage Explorer
  • CloudXplorer
  • TableXplorer
  • Redgate Azure Explorer

Having access to these tools, it is often straightforward to find the keys: they are usually set in the preferences or configuration section of the application.

Using the Azure Storage key

Once you have obtained the key, you should start identifying the type of storage that is accessible. Azure distinguishes between blob data, table data, queues and files. How exactly to determine this is out of scope for this article, but I want to share with you one quick and easy way of accessing files.

By default, Azure file shares are reachable via SMB 3.0 on port 445, with only one account, which has Full Control permissions. None of this is configurable. You can simply establish an SMB session using the key as the password:

net use * \\<storage_account_name>.file.core.windows.net\<share_name> /USER:AZURE\<storage_account_name> <primary_key>

Just by using the primary key we get full access to the files stored in Azure Storage.
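The same key also unlocks programmatic access, for example through the legacy Azure Storage SDK for Java mentioned earlier. A hedged sketch (account name and key are placeholders) that lists the blob containers the key gives access to could look like this:

    import com.microsoft.azure.storage.CloudStorageAccount;
    import com.microsoft.azure.storage.blob.CloudBlobClient;
    import com.microsoft.azure.storage.blob.CloudBlobContainer;

    public class ListContainers {
        public static void main(String[] args) throws Exception {
            // Build a connection string from the obtained account name and key.
            String conn = "DefaultEndpointsProtocol=https;"
                        + "AccountName=<storage_account_name>;"
                        + "AccountKey=<primary_key>;"
                        + "EndpointSuffix=core.windows.net";
            CloudStorageAccount account = CloudStorageAccount.parse(conn);
            CloudBlobClient client = account.createCloudBlobClient();
            // Enumerate every blob container reachable with the key.
            for (CloudBlobContainer container : client.listContainers()) {
                System.out.println(container.getName());
            }
        }
    }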

SAS tokens

Next to keys, there is another interesting access method penetration testers should be aware of: SAS (Shared Access Signature) tokens. As can be seen in the image below, they can be created with more restrictions than the storage keys, which give full access. The Azure administrator can restrict the data type, set a time limit and even restrict the IP addresses from which the data may be accessed.

No file level restrictions can be made for SAS tokens created in the Azure Portal

This makes them far less powerful than storage keys; however, if you find them, they may still be useful. One thing that can't be restricted from the Azure Portal is exactly which objects can be accessed with the SAS token: it simply provides access to all files (or all blobs, all queues, etc.).

It is possible to restrict access to a single file, for example, but for that the SAS token would have to be generated via PowerShell, which means more effort for the administrator and is thus less likely to be the case.

Conclusion

Hunting for storage access keys and tokens should be a top priority for penetration testers and red teamers conducting exercises on Azure environments. Access to storage means access to files, logs and virtual machine hard drive images, and will in almost any case mean game over for the target.

Join us at Blackhat Las Vegas this year with our Practical Web Application Hacking Advanced course on 1-2 August and 3-4 August! We look forward to seeing you there!

Meanwhile, follow us on twitter: https://twitter.com/redtimmysec