With the difficult period of the COVID-19 pandemic still ongoing, some Red Timmy collaborators have lost their jobs. Because bills don’t pay themselves, some of us have decided to keep the wolf from the door by investing time in bug bounty programs while waiting for better times.
This post is a bit different from the others published so far on this blog. We thought it would be interesting to share our failures rather than our successes (which are too easy to talk about), because from such mistakes one can learn how to avoid them in the future.
If you follow us on Twitter (hey, if you don’t, please subscribe here) and have wondered why we started firing off tweets like this in recent weeks…
… it should become a little clearer after reading this post. We have three stories to share with you, so let’s start.
1 – No clear indication of targets in scope, no effort.
The prevailing attitude of security researchers is to report mainly vulnerabilities in web applications. That is obvious, considering how pervasive web technologies are. Our initial idea, instead, was to focus on the infrastructure, excluding from the to-do list any service running on ports 80/443 (HTTP), where the competition is normally higher.
We therefore chose two ISPs that offered bug bounty programs and scanned their IPv4 address space looking for common services. On various subnets we noticed the presence of Cisco routers with telnet enabled. Some service banners clearly indicated that the system was property of the ISP we were scanning. Others, in the same subnets, were anonymous (no indication of the owner) but showed something like this:
At this point we decided to go for the low-hanging fruit and test for the presence of default credentials. Incredibly, out of hundreds of scanned systems, for each provider we found some devices that could be accessed with the default username “cisco” and password “cisco” (as the banner in the screenshot above advised).
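A default-credential sweep of this kind can be sketched in a few lines of Python. This is our illustrative reconstruction, not the scanner we actually used: `looks_logged_in` is a naive heuristic of ours, and a real telnet client would also have to handle IAC option negotiation, which is skipped here.

```python
import socket

# The pair the banner itself suggested
DEFAULT_CREDS = [("cisco", "cisco")]

def looks_logged_in(output: str) -> bool:
    """Heuristic: a Cisco EXEC prompt ends with '>' (level 1) or '#' (level 15),
    while a failed login re-prompts for a username or reports a failure."""
    last = output.strip().splitlines()[-1] if output.strip() else ""
    return last.endswith(">") or last.endswith("#")

def try_default_login(host: str, user: str, password: str,
                      port: int = 23, timeout: float = 5.0) -> bool:
    """Connect over telnet and attempt a single username/password pair.
    Simplified: real telnet sessions need IAC option negotiation, omitted here."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.settimeout(timeout)
        buf = b""
        try:
            buf += s.recv(1024)                      # banner + 'Username:' prompt
            s.sendall(user.encode() + b"\r\n")
            buf += s.recv(1024)                      # 'Password:' prompt
            s.sendall(password.encode() + b"\r\n")
            buf += s.recv(1024)                      # EXEC prompt or failure message
        except socket.timeout:
            pass
        return looks_logged_in(buf.decode(errors="replace"))
```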
After authenticating, we verified that the access level granted with these credentials was sometimes level 1 (read-only user) and sometimes level 15 (admin)!
As a read-only user we could dump the current device status and configuration. For example, it was possible to enumerate the machines active in the network, including establishing the internal IP subnetting in use:
Protocol Address Age (min) Hardware Addr Type Interface
Internet xxx.xxx.xxx.114 27 0000.0000.0333 ARPA FastEthernet4.1125
Internet xxx.xxx.xxx.115 - 0022.557f.77de ARPA FastEthernet4.1125
Internet 192.168.1.1 - 0022.547f.a7d4 ARPA Vlan1
Internet 192.168.1.3 115 7479.f309.6079 ARPA Vlan1
Internet 192.168.1.4 116 48cd.6ecc.643c ARPA Vlan1
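A dump like the one above is trivial to post-process for enumeration. A small sketch of a parser for `show ip arp` output (our own helper, written purely for illustration):

```python
def parse_show_ip_arp(output: str):
    """Parse Cisco 'show ip arp' output into a list of dicts.
    Columns: Protocol, Address, Age (min), Hardware Addr, Type, Interface.
    An age of '-' marks the router's own interfaces."""
    entries = []
    for line in output.splitlines():
        fields = line.split()
        if len(fields) != 6 or fields[0] != "Internet":
            continue  # skip the header row and anything unexpected
        entries.append({
            "address": fields[1],
            "age_min": None if fields[2] == "-" else int(fields[2]),
            "mac": fields[3],
            "interface": fields[5],
        })
    return entries
```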
Or again… use the device as a bounce host and attack other networks and systems from there. The victim would log the router’s IP address as the origin of the attack, potentially causing reputational damage to the company owning the hacked device. In the example below we connected to google.com from the router using the telnet command:
PROMPT>telnet www.google.com 80
Translating "www.google.com"…domain server (xx.xxx.xxx.xx) [OK]
Trying www.google.com (22.214.171.124, 80)… Open
HEAD / HTTP/1.0 <-- OUR REQUEST
HTTP/1.0 200 OK <-- GOOGLE RESPONSE
Content-Type: text/html; charset=ISO-8859-1
Considering the last two findings, a malicious agent would have the ability to attack internal machines and services as well. In addition, with the highest privilege level (15), the device itself is totally under the attacker’s control.
For instance, with the command “show running-config” the configuration was exported, and we noticed:
ip http server
ip http port 65455
That is, a web server listening on TCP port 65455 where one can authenticate with the same default “cisco” credentials. And the number of vulnerabilities in this panel was in line with the rest of the environment.
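Verifying that the same credentials are accepted by the management panel takes only a stdlib HTTP request. A minimal sketch (the host value is a placeholder; the port is the one exposed by the configuration above):

```python
import base64
import urllib.request

def build_basic_auth(user: str, password: str) -> str:
    """Build the value of an HTTP Basic Authorization header."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return f"Basic {token}"

def probe_panel(host: str, port: int = 65455, timeout: float = 5.0) -> int:
    """Send a single authenticated HEAD request to the management panel
    and return the HTTP status code. A 200 with 'cisco'/'cisco' confirms
    the default credentials also work over HTTP."""
    req = urllib.request.Request(f"http://{host}:{port}/", method="HEAD")
    req.add_header("Authorization", build_basic_auth("cisco", "cisco"))
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return resp.status
```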
With level 15 privileges, not only can the firmware/configuration be exported, it can be modified too, meaning the device could be flashed with our own code.
At that point we had enough on the plate to submit a report. We did so, and a few days later, surprise surprise, we discovered our findings were not applicable to the selected bug bounty programs: the IPs we had targeted were used by customers of the providers, not the providers themselves, so we were not eligible for a bounty.
What was the mistake? Well, a good bug bounty policy normally includes a clear indication of what is in scope. Especially for web application bug bounties, the scope is usually very detailed, with explicit mention of the URL(s) and hostnames eligible for a prize. The same cannot be said for programs where infrastructure is also involved. We discovered at our expense that the line is much blurrier here.
For example, in addition to the precise indication of websites, domains and URLs in scope, the written policy of one of the two providers we had targeted literally stated:
“In general any vulnerability that could realistically place the online security of XXXXXXX at risk is in scope and might be rewarded.“
That includes not only web targets! However, such an open-ended sentence was not accompanied by a list or range of in-scope IP addresses for the infrastructural part of the program. Under these circumstances, by scanning the entire IP range, the risk is obviously to hack devices belonging to the provider’s customers and not the provider itself. Even when, as in our case, precautions are taken to avoid that! Indeed, by leveraging the whois protocol, we had previously discarded the IP ranges known to be assigned to retail or business users, focusing only on the rest. For example, with whois, the description fields of the routers/devices we had identified as vulnerable looked like this:
OrgName: XXXXXXXX Corp.
Address: ABC Avenue
Address: Attn: IP Management
VVVVVV Services, Inc. XXXSERVICE-MANAGEMENT-xx-xx-xxx-0 (NET-xx-xx-xxx-0-1) xx.xx.xxx.0 - xx.xx.xxx.xxx
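Our whois pre-filtering can be summarized by a sketch like the following. The keyword lists are crude heuristics of our own (and, as the triage outcome showed, heuristics that can mislead), and the code assumes a `whois` command-line client is installed:

```python
import ipaddress
import subprocess

# Keywords we (perhaps naively) treated as hints of ISP-internal infrastructure,
# as opposed to ranges delegated to retail/business customers.
INTERNAL_HINTS = ("service-management", "ip management", "infrastructure")
CUSTOMER_HINTS = ("customer", "retail", "business", "dsl", "broadband")

def classify_whois(record: str) -> str:
    """Roughly classify a whois record as 'internal', 'customer' or 'unknown'
    based on keyword hints found in its description fields."""
    text = record.lower()
    if any(hint in text for hint in CUSTOMER_HINTS):
        return "customer"
    if any(hint in text for hint in INTERNAL_HINTS):
        return "internal"
    return "unknown"

def classify_ip(ip: str) -> str:
    """Run the system 'whois' client against an address and classify the result."""
    ipaddress.ip_address(ip)  # validate the input before shelling out
    record = subprocess.run(["whois", ip], capture_output=True, text=True).stdout
    return classify_whois(record)
```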
In our imagination, “Service Management” and “IP Management” were traceable to internal subnets of the ISPs. Instead, something completely different emerged once the report triaging step was completed. Lesson learned: never again spend energy on hosts/IPs not explicitly indicated as in scope.
Small side note. One of the devices with default credentials (actually a router) turned out to be owned by the largest hydroelectric power plant company in the world (sic!), operating in a strange country and, unfortunately for us, without an established bug bounty program. It is a Chinese company. Chinese hackers are known to be fearsome and highly skilled; conversely, Chinese giants operating in the most disparate industrial sectors seem to be as easy to hack as butter! Indeed, this was not an isolated case in which we ended up with a foothold inside the infrastructure of such companies. Just a curious piece of nonsense 😀
2 – Read the rules of the program twice
0-day exploits and bug bounty programs do not get along. Normally the use of this kind of code is banned from bug bounty programs. That is why, when a security researcher discovers a 0-day vulnerability, they contact the vendor (not the vulnerable company running the program) in order to get a patch out first. Then, when the fix is released, two weeks or one month after the availability of the patch (this depends on the specific terms of the program), they cross their fingers and hope that the affected company has not fixed the issue yet. If so, the report is submitted. In no other way would a monetary award be paid.
But even without putting 0-day bugs at the center of the discussion, there are so many systems out there affected by known vulnerabilities! Often a security bulletin exists for a product, but no exploit code to leverage the vulnerability. Based on this consideration, the approach we decided to adopt was: start from publicly known flaws for which there is no public exploit yet.
For this purpose we monitored the networks of different bug bounty program owners and selected a very popular product, common to most of them, used by businesses to filter spam and malicious emails. We studied the security bulletins freely available for this product and fell in love with a critical DoS which, if exploited, would paralyze the service and block all incoming/outgoing email. Normally such security advisories do not include many technical details, so a certain amount of R&D effort is required. Hence we:
- procured a vulnerable version of the virtual appliance to test,
- analyzed its components,
- found how to trigger the bug of interest,
- developed an exploit,
- tested the exploit rigorously in our lab (we did not want to harm the target) and confirmed it worked.
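For the last step above, a cheap guard makes sure the PoC can never be pointed at anything outside the lab by mistake. A sketch with a hypothetical lab subnet:

```python
import ipaddress

# Subnet of the local lab where the vulnerable appliance was deployed
# (hypothetical range, used purely for illustration).
LAB_NET = ipaddress.ip_network("192.168.56.0/24")

def assert_lab_target(target: str) -> None:
    """Refuse to run the exploit against anything outside the lab network.
    A cheap safeguard against typos that would point the PoC at a production host."""
    if ipaddress.ip_address(target) not in LAB_NET:
        raise RuntimeError(f"{target} is not in the lab network {LAB_NET}, aborting")

def is_lab_target(target: str) -> bool:
    """Boolean wrapper: True only for valid addresses inside the lab subnet."""
    try:
        assert_lab_target(target)
        return True
    except (RuntimeError, ValueError):
        return False
```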
After completing all the steps above, we created a report, attached a PoC and submitted everything to the entities concerned. A few hours later, a bitter surprise: our submission was rejected because denial of service attacks, albeit launched against critical business infrastructure, were not contemplated by the bug bounty programs we had chosen to adhere to. Almost all the policies explicitly and clearly stated something like this:
“Out of scope: Any activity that could lead to the disruption of our service e.g. denial of service attacks.“
The adopted approach (starting from known vulnerabilities in commonly used products for which a public exploit does not exist yet) has proven valid in general, because in other circumstances it earned us a bounty. Anyway, for this particular case the lesson learned is that the rules and policies of a bug bounty program must be read twice before engaging in any exploitation venture.
3 – Shoot in the pile: how we hacked a bank by mistake
As already highlighted, the starting point for some security researchers is to select one or more bug bounty programs first and then try to find as many vulnerabilities as possible in the systems in scope. Other researchers prefer the approach “discover a 0-day -> notify the vendor -> wait a few weeks for the fix to be released -> cross fingers -> submit a report to the vulnerable company running a bug bounty program”. A neglected methodology (which we at Red Timmy Security refer to as “shoot in the pile”) consists in choosing a CVE number first and then checking whether any of the systems that can be compromised with that bug are linked to an existing bounty program.
Having developed over the past months a few PoCs for high-impact known vulnerabilities with no public exploit available, we thought we would scan the entire internet and see how many hackable systems were covered by a bug bounty program.
In addition, we implemented for the adopted exploit(s) a test mode able to tell us whether or not a host is vulnerable without sending code over the network that could be considered too malicious. A few hours after the start of the scan, we noticed an entry similar to this in the output log files:
IP - host.somebank.domain -> [VULNERABLE]
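In practice, such a non-intrusive “test mode” often boils down to a version fingerprint instead of the real payload. A minimal sketch, with entirely hypothetical version numbers (the real check depends on the product and the advisory):

```python
# Hypothetical example: assume the advisory says versions prior to 9.7.2 are
# affected, so "test mode" only fingerprints the advertised version string
# instead of firing the actual exploit payload.
FIXED_VERSION = (9, 7, 2)

def parse_version(banner: str):
    """Extract a dotted version number like '9.5.1' from a service banner."""
    for token in banner.replace("/", " ").split():
        parts = token.split(".")
        if len(parts) == 3 and all(p.isdigit() for p in parts):
            return tuple(int(p) for p in parts)
    return None

def is_vulnerable(banner: str) -> bool:
    """True when the banner advertises a version older than the fixed one."""
    version = parse_version(banner)
    return version is not None and version < FIXED_VERSION
```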
We are talking about a real bank, with a Bank Identifier Code assigned, but unfortunately there is no mention of any public bug bounty program on their websites/socials. We then decided to contact them to check for the existence of a private program. However, regardless of that, we still intended to responsibly report the flaw even in the absence of a prize. After all, this is exactly the type of bank that is ideal for frauds like the one against the Bank of Bangladesh that occurred 4-5 years ago (more info: That Insane, $81M Bangladesh Bank Heist? Here’s What We Know).
Obviously, with only a log file entry in hand, we could not seriously think of opening a ticket with the bank. A piece of information that cannot be retrieved from the public internet is needed as proof. Of course we wanted to get it with the least invasive payload possible. We went for the exfiltration of the hostname configured on the vulnerable system:
Another question was: who should we contact? On the internet there is no trace of a SOC or PSIRT team for this bank. Nor did we receive any response to our email to the info@ address listed on their web portal. We therefore decided to attempt an approach on Twitter, sending a private message which finally got an answer:
Their first answer did not look promising. At this point we tried to explain better who we are and why we were contacting them… but our interlocutor disappeared. A couple of days later we made a last attempt:
Huh? Advice? Assistance? You know what? Keep your system as it is, and good luck 😀
Beyond the misadventure with this bank (still vulnerable today), the lesson learned is that the “shoot in the pile” approach only makes sense if you own an up-to-date database of all the in-scope assets/hosts for all the bug bounty programs (or at least the ones you are interested in). Otherwise the risk is to spend a lot of time manually verifying whether every single vulnerable host captured by your scanner is in scope for some program.
Well, going back to the opening screenshot above, what did the companies publicly contacted via Twitter answer? As a friend of ours would say… nothing, absolutely nothing. Businesses without a bug bounty program tend to be suspicious when contacted for security reasons. Even when they responded to our initial message, they were so scared (or perhaps just uneducated on the matter) that the simple request for the email address of their security team was enough to make them stop interacting with us. In the few cases in which we managed to reach the security team of a vulnerable company, their internal incident handling policies were severely lacking, regardless of the area of business. We recently had another example of this while dealing with one of the unnamed businesses contacted during the recent FortiSIEM scan campaign (read the story here), even though they were a cybersecurity company and a more disciplined approach would have been expected!
On the other side, unfortunately, there are still too few bug bounty programs in circulation giving a monetary incentive to security researchers, especially in the financial, banking, oil & gas, power supply and automotive sectors. Those seem to be the most vulnerable places at the moment. Had there been bug bounty programs in such sectors, we would already have become rich and closed up shop.
In general, the most difficult part of reporting a security incident to a company without a well-established policy is finding security contacts to start a discussion with. When these are lacking, you are forced to interact with non-technical people. And the more you strive to help somebody with no security background secure the systems they expose on the internet, the higher the level of distrust you get in exchange.
It is for this reason that, except in rare cases (selected at our discretion, anyway), we have decided to stop responsibly reporting RCEs to companies without a bug bounty program or security policy in place. No matter how prestigious they are. No matter what our log files capture. We will just ignore them and move on, shutting up the internal voice telling us: little Red Timmy, help these poor guys… help them.
There is a new doctrine in town… the “No-mercy disclosure policy”. This policy says that …
“if an internet system is vulnerable and the company owning it does not run a bug bounty program nowadays, sooner or later a thing called providence or fate will take care of making a crypto currency miner appear in that system” – Anonymous
And little Red Timmy believes in both fate and providence 🙂