
Critical Kubernetes Bug Gives Anyone Full Admin Privileges


The privilege escalation flaw in the popular container orchestration system Kubernetes is a bad one, as it lets any user gain full administrator privileges on any compute node in a Kubernetes cluster.

"Not only can this user [attacker] potentially steal sensitive data or inject malicious code, but the user can also bring down production applications and services from within an organization's firewall," Red Hat said in a video describing the flaw.

The vulnerability scored a whopping 9.8 out of 10 on the Common Vulnerability Scoring System (CVSS) because the attack can be executed over the network, requires no special privileges, and is low in complexity.

Originally developed by Google and released as an open-source project in 2014, Kubernetes manages Linux applications running inside containers. The application containers are organized into pods, nodes (physical or virtual machines), and clusters. Each node has an agent called Kubelet that handles communication with the master. Multiple nodes form a cluster, managed by a master that coordinates activities like scaling, scheduling, or updating applications. A Kubernetes cluster can have hundreds, even thousands, of nodes.
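Those pieces are easy to see in a live cluster. The commands below are standard kubectl calls and assume kubectl is already configured to talk to a cluster:

```
# List the nodes (machines) that make up the cluster
kubectl get nodes

# List pods across all namespaces, along with the node each one runs on
kubectl get pods --all-namespaces -o wide
```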

A user with exec, attach, or portforward privileges on a pod (which is pretty much anyone, since those privileges are granted to normal users by default) can escalate to cluster-admin and gain access to any container in the pod and all the information inside. The user could access all secrets, pods, environment variables, lists of running pod/container processes, and persistent volumes, Red Hat said in its security advisory.
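A quick way to check whether a given account holds those risky privileges is kubectl's built-in authorization check. This is a minimal sketch; the username dev-user is hypothetical:

```
# Does this user have the subresource permissions the flaw requires?
kubectl auth can-i create pods/exec --as=dev-user
kubectl auth can-i create pods/attach --as=dev-user
kubectl auth can-i create pods/portforward --as=dev-user
```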

"In default configurations, all users (authenticated and unauthenticated) are allowed to perform discovery API calls that allow this escalation," Jordan Liggitt, a staff software engineer at Google and a member of the Kubernetes product security team, wrote in an advisory on Github (CVE-2018-1002105).

An authenticated user can also send specially crafted network requests to the Kubernetes application programming interface (API) server and establish a connection to a backend server. The API server’s job is to determine whether requests are valid and to instruct other components to carry out the valid ones. With the flaw, the API server is tricked into connecting to the backend server as itself, with the highest level of permissions, rather than as the user. Once the connection is established, the user can send arbitrary requests, authenticated with the API server’s Transport Layer Security (TLS) credentials, directly to the backend server. That includes any API request against the kubelet API of the node where a targeted pod is running, such as listing all pods on the node, running commands inside pods, and getting the output of those commands.
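To make the mechanics concrete, a normal kubectl exec is itself one of these upgraded, proxied connections through the API server. The sketch below shows the legitimate flow the flaw abuses; the pod name mypod is hypothetical, and this is illustrative, not an exploit:

```
# An ordinary exec call; the API server upgrades the connection and
# proxies it to the kubelet on the pod's node using its own TLS credentials
kubectl exec mypod -- id

# Roughly the raw request kubectl sends under the hood:
#   POST /api/v1/namespaces/default/pods/mypod/exec?command=id&stdout=true
#   Connection: Upgrade
#   Upgrade: SPDY/3.1
# The flaw left this upgraded channel open to further, attacker-chosen
# requests against the kubelet API.
```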

It’s one of the most serious bugs ever found in Kubernetes, and the impact across multiple industries would be massive.

"This is a big deal. Not only can this actor steal sensitive data or inject malicious code, but they can also bring down production applications and services from within an organization's firewall," said Red Hat’s general manager of cloud platforms Ashesh Badani.

Patch Production Clusters

There is really no way to address such a serious flaw, which affects all prior versions, other than to update Kubernetes. The fixes are in Kubernetes v1.10.11, v1.11.5, v1.12.3, and v1.13.0-rc.1. Earlier releases in those lines are vulnerable, as are the older v1.0.x through v1.9.x lines, which are no longer supported.
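The first step is knowing what the cluster is running. A minimal check, assuming kubectl access:

```
# Print the client and server versions; compare the server version
# against the fixed releases listed above
kubectl version --short
```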

The Google Kubernetes Engine (GKE) has been upgraded to non-vulnerable versions. Microsoft said the Azure Kubernetes Service "has patched all affected clusters by overriding the default Kubernetes configuration to remove unauthenticated access to the entrypoints [Kubernetes commands] that exposed the vulnerability." For Red Hat administrators, the patches are installed automatically if auto-updates are enabled; otherwise, they have to be installed manually.

There aren’t any palatable mitigations if patching isn’t an option. Deployments that use the admin account for everything aren’t affected by the flaw, but using the admin account this way is a terrible idea, especially in production environments. Development installations of minikube already run everything as admin, but they shouldn’t be used in production anyway. Another option is to suspend use of aggregated API servers and remove the pod exec, attach, and portforward permissions from users who should not have full access to the kubelet API, but that is a very disruptive step.
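For those forced down the permissions route, the sketch below shows the audit-and-rebind idea under stated assumptions: the namespace dev, role pod-reader, and user dev-user are all hypothetical, and any existing permissive bindings for the user would still need to be removed separately:

```
# Create a role that can view pods but lacks the exec/attach/portforward
# subresources
kubectl create role pod-reader --verb=get,list,watch --resource=pods -n dev

# Bind the restricted role to the user in that namespace
kubectl create rolebinding dev-user-pods --role=pod-reader --user=dev-user -n dev

# Verify the dangerous permission is gone
kubectl auth can-i create pods/exec --as=dev-user -n dev
```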

The Kubernetes community was able to deliver the patch promptly after receiving the report from Darren Shepard, chief architect and co-founder of Rancher Labs (which delivers production-ready Kubernetes clusters). However, the availability of the patch isn’t enough. Many organizations struggle with timely patching, not because they are negligent or unaware of the issues, but because patching is a time-intensive effort that can disrupt business operations. Organizations rely on Kubernetes to run intensive production workloads, which means any potential downtime or hiccup would significantly impact the business. That doesn’t mean don’t patch—just that it’s important to recognize the challenges different deployments face when applying the fix.

“What if your production systems are running specialized integration points or workloads that the patch affects adversely? Or what if applying the patch inadvertently causes a performance hit to a production system or, worse, downtime?” Red Hat’s Badani asked. This is why vendors who work with Kubernetes—for example, Red Hat's OpenShift Container Platform uses Kubernetes for orchestrating and managing containers—need to provide more than just a patch. The assistance should include documentation and strategies to help customers “assess how they are affected, what systems are affected, and why (or even why not) they should apply the fixes,” Badani said.

Compounding the seriousness of the flaw is the fact that exploiting it is virtually undetectable. The unauthorized requests would not generate any messages in the Kubernetes API server audit logs or the server log, because they would be made over established connections. The network requests would appear in the kubelet or aggregated API server logs, but they would be "indistinguishable from correctly authorized and proxied requests via the Kubernetes API server," Liggitt said.
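For teams that want to look anyway, the kubelet logs on each node are where those proxied requests land. A hedged sketch for a systemd-managed node (unit names and log locations vary by distribution), keeping in mind that malicious requests will look identical to legitimate ones:

```
# Scan kubelet logs on a node for exec/attach/portforward activity
journalctl -u kubelet | grep -iE 'exec|attach|portforward'
```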

It's not known at this point whether the vulnerability was previously found and exploited maliciously. From an enterprise IT perspective, the assumption should be that it was, and that secrets have been exposed; keys may need to be rotated, for example. Addressing this vulnerability involves more than just patching, as it may require pulling out the incident response playbook.
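Mechanically, rotating a Kubernetes secret is simple, though every workload consuming it must be restarted to pick up the new value. A minimal sketch; the secret name db-credentials and its contents are hypothetical:

```
# Replace a secret that may have been read through the kubelet API
kubectl delete secret db-credentials
kubectl create secret generic db-credentials \
  --from-literal=password='NEW-VALUE'
```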

Image credit: Guillaume Bolduc on Unsplash