AWS Promises to Scan for Misconfigured Servers

Like many recent data breaches, Capital One’s was not the result of a vulnerability in Amazon’s cloud infrastructure, but of how the financial giant had configured its systems. Even though Amazon Web Services wasn’t to blame, the company said that going forward, it will warn customers when it detects problems with how they have configured their systems and applications.

In a letter responding to Sen. Ron Wyden (D-Ore.), who had asked for clarification on how the attack unfolded, Amazon Web Services reiterated that it succeeded because the attacker got in through a misconfigured web application firewall and obtained overly broad permissions. Specifically, Wyden wanted to know whether a “server-side request forgery (SSRF) vulnerability” had been exploited to trick misconfigured servers into revealing information.
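
To make the class of flaw concrete: an SSRF bug typically lives in application code that fetches a URL supplied by the caller, which an attacker can point at internal-only endpoints such as the EC2 instance metadata service. The sketch below is illustrative Python only, not Capital One’s or AWS’s actual code, and assumes the requests library is available.

```python
# Illustrative SSRF sketch (hypothetical code, not any real deployment).
# A proxy endpoint that fetches whatever URL the caller supplies can be
# pointed at internal-only services, such as the EC2 instance metadata
# endpoint, which returns temporary IAM credentials.
import ipaddress
import socket
from urllib.parse import urlparse

import requests


def fetch_vulnerable(url: str) -> str:
    # No validation: "url" could be http://169.254.169.254/latest/meta-data/...
    return requests.get(url, timeout=5).text


def fetch_hardened(url: str) -> str:
    # One common mitigation: resolve the host and refuse loopback, private,
    # and link-local addresses, so internal endpoints are unreachable.
    host = urlparse(url).hostname
    if not host:
        raise ValueError("missing host")
    addr = ipaddress.ip_address(socket.gethostbyname(host))
    if addr.is_loopback or addr.is_private or addr.is_link_local:
        raise ValueError(f"refusing to fetch internal address {addr}")
    return requests.get(url, timeout=5).text
```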

“As Capital One outlined in their public announcement, the attack occurred due to a misconfiguration error at the application layer of a firewall installed by Capital One, exacerbated by permissions set by Capital One that were likely broader than intended,” wrote AWS CISO Stephen Schmidt. He added, “As discussed above, SSRF was not the primary factor in the attack.”

SSRF was only one of several ways the attacker could potentially have accessed data after getting past the firewall and into the environment, Schmidt told Wyden.

As for how AWS makes sure customers are protecting their data, the company provides “clear guidance on both the importance and necessity of protecting” systems from different attacks, including SSRF, Schmidt said. Customers have “documentation, how-to guides, and professional services” to set up the web application firewall correctly. There are also “guidance and tools” to help customers set the right level of permissions for different resources.

If the organization took a defense-in-depth approach and had multiple layers of protection “with intentional redundancies,” an attacker would not be able to get far enough to steal the data even after getting past the WAF, the letter said.

“Even if a customer misconfigures a resource, if the customer properly implements a ‘least privilege’ policy, there is relatively little an actor has access to once they are authenticated,” Schmidt said.
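
Schmidt’s point about least privilege can be made concrete with a small sketch. Assuming boto3 and entirely hypothetical bucket, prefix, and role names, a scoped-down policy might grant read access to a single prefix rather than broad access to every bucket:

```python
# Sketch of a "least privilege" inline policy (hypothetical names throughout):
# the role can only read objects under one prefix of one bucket, instead of
# holding s3:* on everything.
import json

import boto3

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject"],
        "Resource": "arn:aws:s3:::example-app-data/reports/*",
    }],
}

iam = boto3.client("iam")
iam.put_role_policy(
    RoleName="example-app-role",
    PolicyName="read-reports-only",
    PolicyDocument=json.dumps(policy),
)
```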

Misconfiguration is a Big Issue

One of the biggest misconceptions about moving application workloads to the cloud is that the service provider takes care of everything related to security. The reality is that both the cloud service provider and the organization have roles to play to keep systems and data secure. When it comes to cloud infrastructure such as Amazon Web Services, Microsoft Azure, and Google Cloud, the service providers are responsible for the physical data centers and the server hardware the virtual machines run on. This is why the providers took the first pass at protecting their infrastructure from the Meltdown and Spectre flaws, for example.

The organization, on the other hand, is responsible for protecting the virtual machines and the applications by deploying the necessary controls, virtual appliances, and other security defenses. It is up to the organization to deploy a WAF, to encrypt the data, and to restrict user-level permissions.
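
As a rough sketch of what those customer-side responsibilities can look like in practice, the snippet below uses boto3 to turn on default encryption and block public access for a hypothetical S3 bucket; a real deployment would also cover the WAF and IAM permissions.

```python
# Two customer-side controls from the shared-responsibility split, applied to
# a hypothetical bucket: default encryption at rest and a public-access block.
import boto3

s3 = boto3.client("s3")
bucket = "example-app-data"  # hypothetical bucket name

# Encrypt new objects at rest by default.
s3.put_bucket_encryption(
    Bucket=bucket,
    ServerSideEncryptionConfiguration={
        "Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "aws:kms"}}]
    },
)

# Refuse public ACLs and public bucket policies outright.
s3.put_public_access_block(
    Bucket=bucket,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```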

Some cloud providers make it easier to have those controls than others. While AWS has a comprehensive set of security tools, many of them are not enabled by default. Schmidt listed several AWS security services, including Macie, which automatically discovers and classifies sensitive data stored in S3 buckets and then warns of anomalous attempts to access it; and GuardDuty, which alerts on unusual Application Programming Interface (API) calls. The “Well-Architected Review” service inspects the customer’s technology architecture and gives feedback on whether the customer is following best practices.

Most of these services—and many others—require administrators to consult the aforementioned documentation and guidance to deploy them into their AWS environment. Some other public cloud providers, by contrast, enable security controls by default for customers.
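
For example, because GuardDuty is not turned on by default, enabling it is an explicit administrative step. Here is a minimal sketch with boto3; an organization-wide rollout would repeat this in every region and account.

```python
# Enable GuardDuty in one region (minimal sketch; credentials and region
# configuration assumed to be in place).
import boto3

guardduty = boto3.client("guardduty", region_name="us-east-1")
response = guardduty.create_detector(Enable=True)
print("GuardDuty detector:", response["DetectorId"])
```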

Providers Stepping In

Many recent cloud-based data breaches are the result of mistakes made by the organization, such as not restricting who can access Amazon Simple Storage Service (S3) buckets, or accidentally committing application tokens and credentials to public GitHub code repositories. In each of these instances, the providers aren’t at fault, but there have been so many of these incidents in recent months that many service providers are beginning to act preemptively, detecting the mistakes before they turn into breaches.

Schmidt told Wyden that AWS has started scanning public IP addresses to identify potentially problematic configurations. While AWS can’t tell whether the customer’s server is actually configured incorrectly, if something doesn’t look right, the company will “err on the side of over-communicating” and warn the customer. AWS will “push harder” for customers to use Macie and GuardDuty, as well as “redouble our efforts” to help customers set restricted permissions, Schmidt said.
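
AWS has not published how its scanning works, but a customer can approximate the same idea from inside their own account. The sketch below, assuming boto3, flags security groups that accept inbound traffic from anywhere, one common configuration that “doesn’t look right.”

```python
# Customer-side approximation of a misconfiguration scan (not AWS's actual
# scanner): list security groups that allow inbound traffic from 0.0.0.0/0.
import boto3

ec2 = boto3.client("ec2")
for page in ec2.get_paginator("describe_security_groups").paginate():
    for group in page["SecurityGroups"]:
        for rule in group.get("IpPermissions", []):
            if any(r.get("CidrIp") == "0.0.0.0/0" for r in rule.get("IpRanges", [])):
                print(f"{group['GroupId']} open to the internet on ports "
                      f"{rule.get('FromPort', 'all')}-{rule.get('ToPort', 'all')}")
```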

What AWS is doing is similar to how GitHub and GitLab scan public code repositories to see if any secrets—application tokens, credentials, SSH private keys, or other sensitive information—were committed alongside the code. The repository providers alert the organizations that issued those secrets so they can be revoked, and warn the customer. For example, if someone accidentally committed an AWS API key into the code, the scanning service would notify Amazon so the key could be revoked before it could be abused.
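
A rough sketch of how that kind of scan can work: walk a repository’s files and flag strings shaped like AWS access key IDs, which use the documented “AKIA” prefix followed by 16 uppercase letters or digits. Real scanners cover many more secret formats and verify hits before alerting anyone.

```python
# Toy secret scanner: flag anything that looks like an AWS access key ID.
# Production scanners match many more patterns and validate matches first.
import pathlib
import re

AWS_KEY_ID = re.compile(r"\bAKIA[0-9A-Z]{16}\b")


def scan_repo(root: str) -> None:
    for path in pathlib.Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for match in AWS_KEY_ID.finditer(text):
            print(f"possible AWS access key ID in {path}: {match.group()}")


scan_repo(".")
```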

The cloud is secure, but if organizations don’t know how to securely configure their networks, applications, and data, there is a problem. Organizations will benefit from better security in the cloud, but that presupposes they are taking care of their responsibilities. Providers are now offering some of that hand-holding to make sure the basics are taken care of.