Challenges in the always-moving cloud

One thing that is radically different between testing on-premises and cloud environments is that the attack surface is much more dynamic. Because of the scaling features of the cloud, assets and services may be spun up and torn down at any moment. This makes it harder for defenders to define the attack surface, and it also requires pen testers to approach mapping differently.

One of the more reliable ways to tackle this is to use the API of the cloud provider to fetch an overview of everything that is currently up and running. In Azure, one could use the Resource Graph to programmatically query attributes of resources such as IP addresses and open ports. Taking this a step further, it’s possible to continuously monitor the environment to produce a real-time identification of assets.
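As a minimal sketch, assuming the azure-identity and azure-mgmt-resourcegraph Python packages and an identity with read access to the subscription, a Resource Graph query for public IP addresses could look like the following (the subscription ID is a placeholder; open ports would come from a similar query against network security group rules):

```python
# Minimal sketch: query Azure Resource Graph for public IP address resources.
# Assumes azure-identity and azure-mgmt-resourcegraph are installed and that the
# credential has Reader access; the subscription ID is a placeholder.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resourcegraph import ResourceGraphClient
from azure.mgmt.resourcegraph.models import QueryRequest, QueryRequestOptions

client = ResourceGraphClient(DefaultAzureCredential())

# KQL query: list public IP address resources and the address assigned to them.
request = QueryRequest(
    subscriptions=["<subscription-id>"],  # placeholder
    query=(
        "Resources"
        " | where type =~ 'microsoft.network/publicipaddresses'"
        " | project name, resourceGroup, ip = properties.ipAddress"
    ),
    options=QueryRequestOptions(result_format="objectArray"),  # rows as dicts
)

result = client.resources(request)
for row in result.data:
    print(row["name"], row["resourceGroup"], row.get("ip"))
```

Running this periodically (or subscribing to activity log events) is one way to approach the continuous, real-time asset identification mentioned above.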

The previous approach requires credentials to access the API. From a completely black-box or Red Team perspective, it is much harder to enumerate the environment. In this case we’ll have to work from FQDNs, references to resources (e.g. storage, key vaults, databases) and other static identifiers.
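As a hypothetical sketch of working from static identifiers: take candidate names (company name, product names, strings leaked in documentation or client-side code) and check whether the corresponding Azure service FQDNs resolve. The candidate names and suffix list below are assumptions for the example, not an exhaustive mapping:

```python
# Sketch: black-box discovery of Azure resources via predictable FQDN patterns.
# Candidate names are hypothetical; extend the suffix list for other services.
import socket

CANDIDATES = ["acme", "acme-prod", "acmebackup"]   # hypothetical target names
SUFFIXES = [
    ".blob.core.windows.net",    # storage accounts
    ".vault.azure.net",          # key vaults
    ".database.windows.net",     # SQL servers
    ".azurewebsites.net",        # app services / function apps
]

for name in CANDIDATES:
    for suffix in SUFFIXES:
        fqdn = name + suffix
        try:
            socket.getaddrinfo(fqdn, None)
            print("[+] exists:", fqdn)
        except socket.gaierror:
            pass  # does not resolve, so the resource (probably) does not exist
```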

There is also the risk of attacking a resource that belongs to a different customer when we target a dynamic IP that at one point belonged to us: the IP might have been reassigned between scan time and exploitation time. Therefore, it’s advisable to either disregard dynamic IPs or keep the time between scanning and exploitation as short as possible.
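One way to enforce that short window, sketched below under the assumption that the target is reachable via an FQDN, is to re-resolve the name immediately before exploitation and abort if it no longer points at the address we scanned (the FQDN and IP are placeholders):

```python
# Sketch: verify a scanned IP still belongs to the target right before exploitation.
# The FQDN and scanned IP below are placeholders for illustration.
import socket

def still_ours(fqdn: str, scanned_ip: str) -> bool:
    """Return True only if the FQDN currently resolves to the IP we scanned."""
    current_ips = {info[4][0] for info in socket.getaddrinfo(fqdn, None)}
    return scanned_ip in current_ips

if still_ours("app.example.com", "20.50.60.70"):   # placeholders
    print("IP unchanged since scan time, proceeding")
else:
    print("IP reassigned since scan time, aborting to avoid hitting another tenant")
```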

New risks

From a software engineering perspective, there are big differences between cloud providers in how things are done. This may cause a developer who is used to working with one provider to make insecure choices on another, because they carry over assumptions that no longer hold (e.g. on AWS and Azure, serverless functions are executed via the API gateway, while on GCP they are exposed to the internet).

In your cloud penetration test, think outside the box and incorporate everything that is ‘linked’ to the cloud – there might be human or programmatic access that you can abuse. Examples:

  • Office solutions (document hosting / sharing, email)
  • VPN
  • CI/CD environment
  • Source code repos/management (e.g. BitBucket)
  • Remote management (RDP/SSH)

Because web apps are getting heavier on the client side, we also see more often that client-side logic talks directly to cloud provider resources. We can spot these endpoints with the browser’s developer tools, looking for storage URLs, lambda functions, container service references, and so on.
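A rough sketch of that hunt, assuming the JavaScript bundles have already been saved to a local directory (the directory name and URL patterns are illustrative, not complete):

```python
# Sketch: grep downloaded JavaScript bundles for references to cloud resources.
# Patterns cover a few common endpoint formats; extend as needed.
import re
from pathlib import Path

PATTERNS = {
    "Azure blob storage": re.compile(r"[a-z0-9]+\.blob\.core\.windows\.net"),
    "AWS S3 bucket":      re.compile(r"[a-z0-9.-]+\.s3[.-][a-z0-9-]*\.?amazonaws\.com"),
    "AWS Lambda URL":     re.compile(r"[a-z0-9-]+\.lambda-url\.[a-z0-9-]+\.on\.aws"),
    "API Gateway":        re.compile(r"[a-z0-9]+\.execute-api\.[a-z0-9-]+\.amazonaws\.com"),
}

for bundle in Path("downloaded_js").glob("*.js"):   # placeholder directory
    text = bundle.read_text(errors="ignore")
    for label, pattern in PATTERNS.items():
        for match in set(pattern.findall(text)):
            print(f"{bundle.name}: {label}: {match}")
```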

Certain web app attacks will be more common in the cloud. Take SSRF, for example. The trend towards microservices means there are more routes between servers, and each route that an attacker can influence creates a potential SSRF. Second, we have more proxy components in the cloud; multiple proxies that parse requests inconsistently increase the risk of proxy attacks. We will cover both of these attacks in detail in our BlackHat course.
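To make the SSRF risk concrete, here is an intentionally vulnerable sketch (Flask and the route name are assumptions for the example, not taken from any real application) of a pattern that shows up often in cloud web apps: the server fetches a user-supplied URL, which lets an attacker reach internal services or the instance metadata endpoint.

```python
# Illustrative, intentionally vulnerable example of SSRF in a cloud web app.
from flask import Flask, request
import urllib.request

app = Flask(__name__)

@app.route("/fetch-preview")
def fetch_preview():
    # VULNERABLE: the URL comes straight from the attacker, with no allow-list.
    url = request.args.get("url", "")
    with urllib.request.urlopen(url) as resp:   # server-side request
        return resp.read()

# e.g. /fetch-preview?url=http://169.254.169.254/latest/meta-data/
# makes the server query the cloud metadata service on the attacker's behalf.
```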

Conclusion

Performing penetration tests in cloud environments can be challenging. The classic approach of enumerating the complete network and then moving on to discovering vulnerabilities doesn’t work well in an environment where resources are dynamically created and services can be instantly moved from one region to another. If we can leverage the API of the cloud provider, this can help us get an up-to-date snapshot of the assets. For Red Teaming exercises, we have to rely on static identifiers and do most of our mapping at the application level.