Building Honeypots with vcluster and Falco: Episode II

This is part two in our series on building honeypots with Falco, vcluster, and other assorted open source tools. For the previous installment, see Building honeypots with vcluster and Falco: Episode I.

When Last We Left Our Heroes

In the previous article, we discussed high-interaction honeypots and used vcluster to build an intentionally-vulnerable SSH server inside its own cluster so it couldn't hurt anything else in the environment when it got owned. Then, we installed Falco on the host and proceeded to attack the SSH server, watching the Falco logs to see the appropriate rule trigger when we read /etc/shadow.

This is all great, but it's just a start. This time around, we'll be adding more functionality to our honeypot so we can react to what's happening inside it. Some of these additional pieces will also lay down the infrastructure for adding more functionality down the road.

We'll be going beyond the basics, and this is where things start to get fun.

Our Shortcomings

The setup from the previous article had two major shortcomings. There are a few more, but we'll get to those later.

First, the previous iteration of our honeypot had to be run directly on an OS sitting on an actual piece of hardware. This is of limited utility, since it really doesn't scale well unless we want to stand up an army of hardware to support our eventual sprawl of honeypot bits. At the time, this was the only way we could do this with Minikube and Falco, since the Falco of yore didn't have the kernel modules we needed to do otherwise. Fortunately, this is no longer the case. We can now take a more cloud-native approach and build this on an EC2 instance in AWS, and everything will be satisfactory. To the cloud!

NOTE: We're going to be building a honeypot which is, by definition, an intentionally vulnerable system. We won't have much in the way of monitoring built out just yet, so we don't suggest that you expose it to the internet.

Second, the old honeypot didn't do much other than complain into the Falco logs when we went poking around in the pod's sensitive files. This, we can also fix. We're going to be using Falcosidekick and Falco Talon to make our honeypot actually do something when we go tripping Falco rules.

Response Engines

Response engine is a term often used in the context of EDR (Endpoint Detection and Response), SIEM (Security Information and Event Management), SOAR (Security Orchestration, Automation, and Response), and XDR (Extended Detection and Response). See EDR vs. XDR vs. SIEM vs. MDR vs. SOAR for more information.

It's a component that executes an automated response to security threats. That is exactly the tool we need in this case.

When we trip one of the Falco rules by interacting with our honeypot, we need to take automatic action. In our particular case, we're going to be shutting down the pod that the attackers have owned so we can spin a clean one back up in its place. We'll be using a tool called Falco Talon for this. We're also going to include another tool, Falcosidekick, which will give us some additional flexibility down the road to do other things in response to the events that happen in our environment.

Falcosidekick

Falcosidekick is a great tool that enables us to connect Falco up to many other interesting bits and pieces. We can use it to perform monitoring and alerting, ship logs off to different tools, and all sorts of other things. This is the glue piece that we'll use to send events to Falco Talon.
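For a sense of what that glue looks like, the helm flag we'll pass later (falcosidekick.config.webhook.address) boils down to a Sidekick webhook output configured roughly like this. This is a sketch for illustration only; the helm chart generates the actual config for us:

# Illustrative sketch of the Sidekick webhook output the helm chart will configure
webhook:
  address: "http://falco-talon:2803"   # Falco Talon's listener, installed later in this article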

Falco Talon

Falco Talon is the piece that will be performing the actual responses to the Falco rules that get tripped. Talon has its own internal set of rules that defines which Falco rules it should respond to and what it should do when they're triggered.
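The general shape of a Talon rule, abstracted from the examples we'll look at later in this article (the angle-bracket placeholders are ours):

- name: <arbitrary Talon rule name>
  match:
    rules:
      - <exact name of the Falco rule to react to>
  action:
    name: <response to execute, e.g., kubernetes:terminate>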

Getting Our Hands Dirty

Let's jump right in and build some things.

This time around, we'll be building our honeypot on an Ubuntu Server 22.04 t3.xlarge EC2 instance on AWS. You could go with a smaller instance, but there is a point at which the instance won't have sufficient resources for everything to spin up. Very small instances, such as the t2.micro, will almost certainly not have sufficient horsepower for everything to function properly.

In theory, you should be able to build this on any of the similar cloud services and have it work, as long as you have all the proper application bits in place.

As a prerequisite, you will need to have installed the following tools, at the noted version or higher:

The rest we'll install as we work through the process.

Fire Up Minikube

1 – First we want to start up minikube using the docker driver. We'll see it go through its paces and download a few dependencies.

21 – Next, we'll enable the ingress addon for minikube. This will allow us to reach the SSH server that we'll be installing shortly.

$ minikube start --vm-driver=docker

😄  minikube v1.32.0 on Ubuntu 22.04
✨  Using the docker driver based on user configuration
📌  Using Docker driver with root privileges
👍  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
💾  Downloading Kubernetes v1.28.3 preload ...
    > preloaded-images-k8s-v18-v1...:  403.35 MiB / 403.35 MiB  100.00% 51.69 M
🔥  Creating docker container (CPUs=2, Memory=3900MB) ...
🐳  Preparing Kubernetes v1.28.3 on Docker 24.0.7 ...
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔗  Configuring bridge CNI (Container Networking Interface) ...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🔎  Verifying Kubernetes components...
🌟  Enabled addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

$ minikube addons enable ingress

💡  ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
    ▪ Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
    ▪ Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
    ▪ Using image registry.k8s.io/ingress-nginx/controller:v1.9.4
🔎  Verifying ingress addon...
🌟  The 'ingress' addon is enabled



Install Falco

1 – Next, we need to add the falcosecurity helm repo so we can access the helm chart for Falco.

4 – Once we have the repo added, we'll update to get the latest chart.

11 – We'll use kubectl to create a namespace for Falco to live in. We'll also use this same namespace later for Sidekick and Talon.

14 – Now, we'll kick off the Falco install. You'll notice here that we have a few extra arguments: to disable buffering for the Falco logs so we get events more quickly, install Sidekick during the Falco install, enable the web UI, and set up the outgoing webhook for Sidekick to point at the URL where Talon will shortly be listening.

$ helm repo add falcosecurity https://falcosecurity.github.io/charts
"falcosecurity" has been added to your repositories

$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "falcosecurity" chart repository
...Successfully got an update from the "securecodebox" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈Happy Helming!⎈

$ kubectl create namespace falco
namespace/falco created

$ helm install falco falcosecurity/falco --namespace falco \
--set tty=true \
--set falcosidekick.enabled=true \
--set falcosidekick.webui.enabled=true \
--set falcosidekick.config.webhook.address="http://falco-talon:2803"
NAME: falco
LAST DEPLOYED: Wed Dec  0 19:38:38 2023
NAMESPACE: falco
STATUS: deployed
REVISION: 1
NOTES:
Falco agents are spinning up on each node in your cluster. After a few
seconds, they will start monitoring your containers looking for
security issues.


No further action should be required.


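Before moving on, a quick sanity check doesn't hurt. The Falco, Sidekick, and Sidekick UI pods should all reach a Running state (pod names will differ in your environment):

$ kubectl get pods -n falco

If you'd like to poke at the web UI we just enabled, port-forwarding its service should get you there. The service name and port below assume the chart defaults:

$ kubectl port-forward svc/falco-falcosidekick-ui -n falco 2802:2802 &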

💡Note: If you want to dig deeper into Falco, take a look at the course Falco 101.

Update the Falco Rules

Later on, we'll be setting up a port forward for the SSH server so we can reach it. Falco is going to be vocal about this, and it will trigger the "Redirect STDOUT/STDIN to Network Connection in Container" rule a LOT, which will make it difficult to see the rule we actually care about in the Falco logs, as well as send a number of extra events to Talon. Let's just disable that rule.

If you'd like to examine the rule we're disabling, you can find it in the Falco rules repo here.

1 – We're going to make a temporary file to hold our rule modification, into which we will insert a customRules section.

2 – Next, we'll add the override.yaml.

3 – Then, the existing rule from the Falco rules file that we're going to override.

4 – And, tell Falco that we want to disable it.

6 – Then, we'll use helm to upgrade Falco and feed it the file we made, telling it to reuse the rest of the values it previously had.

21 – Finally, we'll kill off the existing Falco pods so we get new ones with the rule disabled in their rulesets.

echo "customRules:" > /tmp/customrules.yaml
echo "  override.yaml: |-" >> /tmp/customrules.yaml
echo "    - rule: Redirect STDOUT/STDIN to Network Connection in Container" >> /tmp/customrules.yaml
echo "      enabled: false" >> /tmp/customrules.yaml

$ helm upgrade falco falcosecurity/falco --namespace falco --values /tmp/customrules.yaml --reuse-values
Release "falco" has been upgraded. Happy Helming!
NAME: falco
LAST DEPLOYED: Wed Dec  0 23:56:23 2023
NAMESPACE: falco
STATUS: deployed
REVISION: 2
NOTES:
Falco agents are spinning up on each node in your cluster. After a few
seconds, they will start monitoring your containers looking for
security issues.


No further action should be required.

$ kubectl delete pods -n falco -l app.kubernetes.io/name=falco
pod "falco-94wsk" deleted



Install Falco Talon

Now let's install Falco Talon.

1 – Since it's currently an alpha, Talon isn't published in the standard helm repos. We'll clone the Talon repo from GitHub to get a copy of the helm chart.

12 – If we take a quick look at the Talon repo, we can see the helm chart for it, as well as a couple of YAML files that hold its configuration. We'll be altering the rules.yaml in the next set of steps.

16 – Now, a quick helm install of Talon into the falco namespace alongside Falco and Sidekick.

$ git clone https://github.com/Issif/falco-talon.git /tmp/falco-talon

Cloning into '/tmp/falco-talon'...
remote: Enumerating objects: 1599, done.
remote: Counting objects: 100% (744/744), done.
remote: Compressing objects: 100% (349/349), done.
remote: Total 1599 (delta 473), reused 565 (delta 338), pack-reused 855
Receiving objects: 100% (1599/1599), 743.58 KiB | 2.81 MiB/s, done.
Resolving deltas: 100% (866/866), done.

 
$ ls /tmp/falco-talon/deployment/helm/
Chart.yaml  rules.yaml  templates  values.yaml


$ helm install falco-talon /tmp/falco-talon/deployment/helm --namespace falco

NAME: falco-talon
LAST DEPLOYED: Thu Dec  0 00:01:53 2023
NAMESPACE: falco
STATUS: deployed
REVISION: 1
TEST SUITE: None


Update the Talon Rules and Configuration

As we discussed earlier, we need to set up the rules for Talon separately. Let's take a quick peek at what we have in the rules.yaml now.

1 – Each rule in the file is designated with '- name' and we have a few examples to look at.

21 – This is a rule along the lines of what we want to replicate, though we can drop the parameters section.

$ cat /tmp/falco-talon/deployment/helm/rules.yaml

- name: Rule Labelize
  match:
    rules:
      - Terminal shell in container
    output_fields:
      - k8s.ns.name!=kube-system
  action:
    name: kubernetes:labelize
    parameters:
      labels:
        suspicious: "true"
- name: Rule NetworkPolicy
  match:
    rules:
      - "Outbound Connection to C2 Servers"
  action:
    name: kubernetes:networkpolicy
  before: true
- name: Rule Terminate
  match:
    rules:
      - "Outbound Connection to C2 Servers"
  action:
    name: kubernetes:terminate
    parameters:
      ignoreDaemonsets: true
      ignoreStatefulsets: true



This will work very similarly to how we edited the Falco rules earlier.

1 – We'll echo a series of lines into the /tmp/falco-talon/deployment/helm/rules.yaml file. We need to name the Talon rule (this is an arbitrary name), tell it which Falco rule we want to match against (this is the specific name of the Falco rule), and then tell it what action we want it to take on a match. In this case, we'll be terminating the pod. The resulting rule is shown right after the commands below.

15 – We need to comment out one of the outputs in the values.yaml in the Talon chart directory while we're in here, since we won't be configuring a Slack alert. If we didn't do this, it wouldn't hurt anything, but we'd see an error later in the Talon logs.

17 – Once again, we'll do a helm upgrade and point at our updated files. Note that we are NOT using the --reuse-values argument to tell helm to keep the rest of the existing settings this time. If we did, our changes to the values.yaml would not be included.

27 – Then, we need to kill the existing pods to refresh them.

$ echo -e '' >> /tmp/falco-talon/deployment/helm/rules.yaml

$ echo -e '- name: Sensitive file opened' >> /tmp/falco-talon/deployment/helm/rules.yaml

$ echo -e '  match:' >> /tmp/falco-talon/deployment/helm/rules.yaml

$ echo -e '    rules:' >> /tmp/falco-talon/deployment/helm/rules.yaml

$ echo -e '      - "Read sensitive file untrusted"' >> /tmp/falco-talon/deployment/helm/rules.yaml

$ echo -e '  action:' >> /tmp/falco-talon/deployment/helm/rules.yaml

$ echo -e '    name: kubernetes:terminate' >> /tmp/falco-talon/deployment/helm/rules.yaml
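Those echoes append a rule to the end of rules.yaml that reads:

- name: Sensitive file opened
  match:
    rules:
      - "Read sensitive file untrusted"
  action:
    name: kubernetes:terminate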

$ sed -i 's/^\s*-\s*slack/  # - slack/' /tmp/falco-talon/deployment/helm/values.yaml

$ helm upgrade falco-talon /tmp/falco-talon/deployment/helm --namespace falco

Launch "falco-talon" has been upgraded. Joyful Helming!
NAME: falco-talon
LAST DEPLOYED: Thu Dec  0 00:10:28 2023
NAMESPACE: falco
STATUS: deployed
REVISION: 2
TEST SUITE: None

$ kubectl delete pods -n falco -l app.kubernetes.io/name=falco-talon

pod "falco-talon-5bcf97655d-gvkv9" deleted
pod "falco-talon-5bcf97655d-wxr4g" deleted



Install vcluster

So that we can run our SSH server in isolation, we'll download vcluster and set it up.

1 – Here, we'll set an environment variable to fish the latest vcluster version out of the GitHub repository.

3 – Now, we'll use that environment variable to construct the download URL.

5 – We'll use curl to download the file and move it to /usr/local/bin.

11 – Now, let's check the vcluster version to make sure we got everything installed properly.

14 – We'll finish up by creating a vcluster namespace for everything to live in.

$ LATEST_TAG=$(curl -s -L -o /dev/null -w %{url_effective} "https://github.com/loft-sh/vcluster/releases/latest" | rev | cut -d'/' -f1 | rev)

$ URL="https://github.com/loft-sh/vcluster/releases/download/${LATEST_TAG}/vcluster-linux-amd64"

$ curl -L -o vcluster "$URL" && chmod +x vcluster && sudo mv vcluster /usr/local/bin;
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100 61.4M  100 61.4M    0     0  80.7M      0 --:--:-- --:--:-- --:--:--  194M

$ vcluster version
vcluster version 0.18.0

$ kubectl create namespace vcluster
namespace/vcluster created



Install the SSH Server in vcluster

Now that we have vcluster working, we can get our target SSH server installed.

1 – We'll start off by creating a virtual cluster named ssh in the vcluster namespace. It's also important to note that we have now switched contexts to the ssh cluster.

14 – Now, we'll create a namespace called ssh inside our virtual cluster.

17 – We'll add the securecodebox repo so we can get the chart for the SSH server.

20 – And, do a quick update to pull the latest chart.

27 – Here, we'll use helm to install the intentionally vulnerable SSH server.

42 – Last, we'll disconnect from the vcluster, which will switch our context back to minikube.

$ vcluster create ssh -n vcluster

05:36:45 info Detected local kubernetes cluster minikube. Will deploy vcluster with a NodePort & sync real nodes
05:36:45 info Create vcluster ssh...
05:36:45 info execute command: helm upgrade ssh /tmp/vcluster-0.18.0.tgz-1681152849 --kubeconfig /tmp/2282824298 --namespace vcluster --install --repository-config='' --values /tmp/654191707
05:36:46 done Successfully created virtual cluster ssh in namespace vcluster
05:36:46 info Waiting for vcluster to come up...
05:37:11 info Stopping docker proxy...
05:37:21 info Starting proxy container...
05:37:21 done Switched active kube context to vcluster_ssh_vcluster_minikube
- Use `vcluster disconnect` to return to your previous kube context
- Use `kubectl get namespaces` to access the vcluster

$ kubectl create namespace ssh
namespace/ssh created

$ helm repo add securecodebox https://charts.securecodebox.io/
"securecodebox" already exists with the identical configuration, skipping

$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "falcosecurity" chart repository
...Successfully got an update from the "securecodebox" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈Happy Helming!⎈

$ helm install my-dummy-ssh securecodebox/dummy-ssh --version 3.4.0 --namespace ssh \
--set global.service.type="nodePort"

NAME: my-dummy-ssh
LAST DEPLOYED: Fri Dec  0 05:38:10 2023
NAMESPACE: ssh
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Demo SSH Server deployed.

Note this should be used for demo and test purposes.
Don't expose this to the Internet!

$ vcluster disconnect

05:38:19 info Successfully disconnected from vcluster: ssh and switched back to the original context: minikube


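If you ever lose track of which cluster you're pointed at while bouncing between the virtual cluster and minikube, a quick check (a standard kubectl command, included here as an optional aside) never hurts:

$ kubectl config current-context
minikube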

Test Everything Out

Okay! Now we have everything built. Let's give it a test.

You may recall the vcluster reference diagram from the previous article:

[vcluster reference diagram]

This will be helpful to keep in mind when visualizing the architecture as we work through this.

1 – Let's take a quick look at the pods in the vcluster namespace. We can see our SSH server here, called my-dummy-ssh-7955bc99c8-mwqxg-x-ssh-x-ssh. We'll make a note of that name for future reference.

10 – Here, we'll set up a port forward to expose the SSH server (see the note at the end of this list about the $SSH_SERVICE variable).

18 – Now, we'll kick off the rest of the events by using sshpass to SSH into the server and read the /etc/shadow file. Right now we're doing this manually, so we don't strictly need sshpass, but we're going to be automating this later and we'll need it then.

22 – Here, we can see the contents of the file.
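One note before the transcript: the port forward below references $SSH_SERVICE, which the commands shown never set; the automation script at the end of this article takes care of that. If you're following along by hand, something like this (our assumption, adjust to match your environment) will populate it with the name of the synced service:

$ SSH_SERVICE=$(kubectl get svc -n vcluster -o name | grep my-dummy-ssh | cut -d'/' -f2)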

$ kubectl get pods -n vcluster

NAME                                           READY   STATUS    RESTARTS   AGE
coredns-68bdd584b4-dwmms-x-kube-system-x-ssh   1/1     Running   0          4m43s
my-dummy-ssh-7955bc99c8-mwqxg-x-ssh-x-ssh      1/1     Running   0          3m42s
ssh-0                                          1/1     Running   0          5m7s

$ sleep 30

$ kubectl port-forward svc/"$SSH_SERVICE" 5555:22 -n vcluster & 

[1] 1196783
$ Forwarding from 127.0.0.1:5555 -> 22
Forwarding from [::1]:5555 -> 22

$ sleep 10

$ sshpass -p "THEPASSWORDYOUCREATED" ssh -o StrictHostKeyChecking=no -p 5555 \
root@127.0.0.1 "cat /etc/shadow"

Handling connection for 5555
root:$6$hJ/W8Ww6$pLqyBWSsxaZcksn12xZqA1Iqjz.15XryeIEZIEsa0lbiOR9/3G.qtXl/SvfFFCTPkElo7VUD7TihuOyVxEt5j/:18281:0:99999:7:::
daemon:*:18275:0:99999:7:::
bin:*:18275:0:99999:7:::
sys:*:18275:0:99999:7:::
sync:*:18275:0:99999:7:::
games:*:18275:0:99999:7:::
man:*:18275:0:99999:7:::
lp:*:18275:0:99999:7:::
mail:*:18275:0:99999:7:::
news:*:18275:0:99999:7:::
uucp:*:18275:0:99999:7:::
proxy:*:18275:0:99999:7:::
www-data:*:18275:0:99999:7:::
backup:*:18275:0:99999:7:::
list:*:18275:0:99999:7:::
irc:*:18275:0:99999:7:::
gnats:*:18275:0:99999:7:::
nobody:*:18275:0:99999:7:::
systemd-timesync:*:18275:0:99999:7:::
systemd-network:*:18275:0:99999:7:::
systemd-resolve:*:18275:0:99999:7:::
systemd-bus-proxy:*:18275:0:99999:7:::
_apt:*:18275:0:99999:7:::
sshd:*:18281:0:99999:7:::


Checking the Logs


Let's see what all happened as a result of our attack against the SSH server.

1 – We'll set up an environment variable to find the Falco pod for us and hold its location.

3 – Now, let's look at those logs. The bits at the beginning are from Falco spinning up. Incidentally, we can see the override file that we created earlier being loaded here.

18 – This is the meaty bit. In the output, we can see "Warning Sensitive file opened for reading by non-trusted program (file=/etc/shadow)," which is exactly what we did when we poked at the SSH server.

22 – Now, let's look at the Talon logs. Here, we'll put a one-liner together that will find the Talon pods and fetch the logs for us. Note that there are two Talon pods, and what we want could be in either of them, so we'll grab the logs from both. You can see that the output is interleaved from both of them.

30 – Here, we can see the Falco event coming through to Talon.

32 – And here we got a match against the Talon rule we created earlier.

33 – Here is the action from the Talon rule being executed.

$ FALCO_POD=$(kubectl get pods -n falco -l app.kubernetes.io/name=falco -o=jsonpath='{.items[*].metadata.name}')

$ kubectl logs "$FALCO_POD" -n falco

Defaulted container "falco" out of: falco, falcoctl-artifact-follow, falco-driver-loader (init), falcoctl-artifact-install (init)
Fri Dec  0 05:33:49 2023: Falco model: 0.36.2 (x86_64)
Fri Dec  0 05:33:49 2023: Falco initialized with configuration file: /and so forth/falco/falco.yaml
Fri Dec  0 05:33:49 2023: Loading guidelines from file /and so forth/falco/falco_rules.yaml
Fri Dec  0 05:33:49 2023: Loading guidelines from file /and so forth/falco/guidelines.d/override.yaml
Fri Dec  0 05:33:49 2023: The chosen syscall buffer dimension is: 8388608 bytes (8 MBs)
Fri Dec  0 05:33:49 2023: Beginning well being webserver with threadiness 4, listening on port 8765
Fri Dec  0 05:33:49 2023: Loaded occasion sources: syscall
Fri Dec  0 05:33:49 2023: Enabled occasion sources: syscall
Fri Dec  0 05:33:49 2023: Opening 'syscall' supply with Kernel module

<snip>

{"hostname":"falco-wchsq","output":"18:39:24.133546875: Warning Sensitive file opened for reading by non-trusted program (file=/etc/shadow gparent=sshd ggparent=containerd-shim gggparent=<NA> evt_type=open user=root user_uid=0 user_loginuid=0 process=cat proc_exepath=/bin/cat parent=sshd command=cat /etc/shadow terminal=0 exe_flags=O_RDONLY container_id=0f044393375b container_image=securecodebox/dummy-ssh container_image_tag=v1.0.0 container_name=k8s_dummy-ssh_my-dummy-ssh-7955bc99c8-mxshb-x-ssh-x-ssh_vcluster_e10eeedf-7ad2-4a7e-8b73-b7713d6537da_0 k8s_ns=vcluster k8s_pod_name=my-dummy-ssh-7955bc99c8-mxshb-x-ssh-x-ssh)","priority":"Warning","rule":"Read sensitive file untrusted","source":"syscall","tags":["T1555","container","filesystem","host","maturity_stable","mitre_credential_access"],"time":"2023-12-08T18:39:24.133546875Z", "output_fields": {"container.id":"0f044393375b","container.image.repository":"securecodebox/dummy-ssh","container.image.tag":"v1.0.0","container.name":"k8s_dummy-ssh_my-dummy-ssh-7955bc99c8-mxshb-x-ssh-x-ssh_vcluster_e10eeedf-7ad2-4a7e-8b73-b7713d6537da_0","evt.arg.flags":"O_RDONLY","evt.time":43012267506,"evt.type":"open","fd.name":"/etc/shadow","k8s.ns.name":"vcluster","k8s.pod.name":"my-dummy-ssh-7955bc99c8-mxshb-x-ssh-x-ssh","proc.aname[2]":"sshd","proc.aname[3]":"containerd-shim","proc.aname[4]":null,"proc.cmdline":"cat /etc/shadow","proc.exepath":"/bin/cat","proc.name":"cat","proc.pname":"sshd","proc.tty":0,"user.loginuid":0,"user.name":"root","user.uid":0}}

<snip>

$ kubectl get pods -n falco -l app.kubernetes.io/name=falco-talon -o=jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}' | xargs -I {} kubectl logs {} -n falco

2023-12-00T05:33:41Z INF init action_category=kubernetes
2023-12-00T05:33:41Z INF init notifier=k8sevents
2023-12-00T05:33:41Z INF init notifier=slack
2023-12-00T05:33:41Z INF init result="4 rules have been successfully loaded"
2023-12-00T05:33:41Z INF init result="watch of rules enabled"
2023-12-00T05:33:41Z INF init result="Falco Talon is up and listening on 0.0.0.0:2803"
2023-12-00T05:44:46Z INF event output="05:44:46.118305822: Warning Sensitive file opened for reading by non-trusted program (file=/etc/shadow gparent=sshd ggparent=containerd-shim gggparent=<NA> evt_type=open user=root user_uid=0 user_loginuid=0 process=cat proc_exepath=/bin/cat parent=sshd command=cat /etc/shadow terminal=0 exe_flags=O_RDONLY container_id=1536aa9c45c2 container_image=securecodebox/dummy-ssh container_image_tag=v1.0.0 container_name=k8s_dummy-ssh_my-dummy-ssh-7955bc99c8-mwqxg-x-ssh-x-ssh_vcluster_21bdc319-5566-41ee-8a64-d8b7628e5937_0 k8s_ns=vcluster k8s_pod_name=my-dummy-ssh-7955bc99c8-mwqxg-x-ssh-x-ssh)" priority=Warning rule="Read sensitive file untrusted" source=syscall trace_id=79db4b47-0112-4a22-8068-e171702e018a
2023-12-00T05:44:46Z INF match action=kubernetes:terminate rule="Sensitive file opened" trace_id=79db4b47-0112-4a22-8068-e171702e018a
2023-12-00T05:44:46Z INF action Namespace=vcluster Pod=my-dummy-ssh-7955bc99c8-mwqxg-x-ssh-x-ssh action=kubernetes:terminate event="05:44:46.118305822: Warning Sensitive file opened for reading by non-trusted program (file=/etc/shadow gparent=sshd ggparent=containerd-shim gggparent=<NA> evt_type=open user=root user_uid=0 user_loginuid=0 process=cat proc_exepath=/bin/cat parent=sshd command=cat /etc/shadow terminal=0 exe_flags=O_RDONLY container_id=1536aa9c45c2 container_image=securecodebox/dummy-ssh container_image_tag=v1.0.0 container_name=k8s_dummy-ssh_my-dummy-ssh-7955bc99c8-mwqxg-x-ssh-x-ssh_vcluster_21bdc319-5566-41ee-8a64-d8b7628e5937_0 k8s_ns=vcluster k8s_pod_name=my-dummy-ssh-7955bc99c8-mwqxg-x-ssh-x-ssh)" rule="Sensitive file opened" status=success trace_id=79db4b47-0112-4a22-8068-e171702e018a
2023-12-00T05:44:46Z INF notification action=kubernetes:terminate notifier=k8sevents rule="Sensitive file opened" status=success trace_id=79db4b47-0112-4a22-8068-e171702e018a
2023-12-00T05:33:41Z INF init action_category=kubernetes
2023-12-00T05:33:41Z INF init notifier=k8sevents
2023-12-00T05:33:41Z INF init notifier=slack
2023-12-00T05:33:41Z INF init result="4 rules have been successfully loaded"
2023-12-00T05:33:41Z INF init result="watch of rules enabled"
2023-12-00T05:33:41Z INF init result="Falco Talon is up and listening on 0.0.0.0:2803"

Now, let's go take a peek at the cluster and see what happened as a result of our efforts. As we noted earlier, the name of the SSH server pod was my-dummy-ssh-7955bc99c8-mwqxg-x-ssh-x-ssh.

1 – Let's get the pods again from the vcluster namespace. Now, we can see that the name of the SSH server pod is my-dummy-ssh-7955bc99c8-k8jgl-x-ssh-x-ssh. Success!

8 – We'll take a look at the events in the vcluster namespace and grep for my-dummy-ssh to find the bits we care about.

14 – Here, we can see the new SSH server pod my-dummy-ssh-7955bc99c8-k8jgl-x-ssh-x-ssh being started up.

20 – We can see the owned pod my-dummy-ssh-7955bc99c8-mwqxg-x-ssh-x-ssh being killed off.

$ kubectl get pods -n vcluster

NAME                                           READY   STATUS    RESTARTS   AGE
coredns-68bdd584b4-dwmms-x-kube-system-x-ssh   1/1     Running   0          9m11s
my-dummy-ssh-7955bc99c8-k8jgl-x-ssh-x-ssh      1/1     Running   0          95s
ssh-0                                          1/1     Running   0          9m35s

$ kubectl get events -n vcluster | grep my-dummy-ssh

113s        Normal    falco-talon:kubernetes:terminate:success   pod/my-dummy-ssh-7955bc99c8-mwqxg-x-ssh-x-ssh      Status: success...
113s        Normal    Scheduled                                  pod/my-dummy-ssh-7955bc99c8-k8jgl-x-ssh-x-ssh      Successfully assigned vcluster/my-dummy-ssh-7955bc99c8-k8jgl-x-ssh-x-ssh to minikube
112s        Normal    Pulled                                     pod/my-dummy-ssh-7955bc99c8-k8jgl-x-ssh-x-ssh      Container image "docker.io/securecodebox/dummy-ssh:v1.0.0" already present on machine
112s        Normal    Created                                    pod/my-dummy-ssh-7955bc99c8-k8jgl-x-ssh-x-ssh      Created container dummy-ssh
112s        Normal    Started                                    pod/my-dummy-ssh-7955bc99c8-k8jgl-x-ssh-x-ssh      Started container dummy-ssh
8m28s       Normal    Scheduled                                  pod/my-dummy-ssh-7955bc99c8-mwqxg-x-ssh-x-ssh      Successfully assigned vcluster/my-dummy-ssh-7955bc99c8-mwqxg-x-ssh-x-ssh to minikube
8m27s       Normal    Pulling                                    pod/my-dummy-ssh-7955bc99c8-mwqxg-x-ssh-x-ssh      Pulling image "docker.io/securecodebox/dummy-ssh:v1.0.0"
8m18s       Normal    Pulled                                     pod/my-dummy-ssh-7955bc99c8-mwqxg-x-ssh-x-ssh      Successfully pulled image "docker.io/securecodebox/dummy-ssh:v1.0.0" in 9.611s (9.611s including waiting)
8m17s       Normal    Created                                    pod/my-dummy-ssh-7955bc99c8-mwqxg-x-ssh-x-ssh      Created container dummy-ssh
8m16s       Normal    Started                                    pod/my-dummy-ssh-7955bc99c8-mwqxg-x-ssh-x-ssh      Started container dummy-ssh
113s        Normal    Killing                                    pod/my-dummy-ssh-7955bc99c8-mwqxg-x-ssh-x-ssh      Stopping container dummy-ssh


And there we have it, end to end. Here's what we did:

  • Attacked the SSH server pod
  • Tripped the 'Read sensitive file untrusted' rule in Falco
  • Used a webhook from Falcosidekick to send the events over to Falco Talon
  • Tripped the 'Sensitive file opened' rule in Falco Talon
  • Terminated the offending pod

And Now with Slightly More Automation

All of the above involved quite a few moving parts. Wouldn't it be nice if we could just run a script to do all of this? Yes, yes it would. Fortunately, we can do just that.

In the Sysdig TRT GitHub repo, pull down the minhoney.sh file. You'll want to set it executable. To fire up the honeypot, simply run the script with the --buildit argument:

$ ./minhoney.sh --buildit

To take everything back down again, run the script again with the --burnit argument.

$ ./minhoney.sh --burnit
NOTE: When run with --burnit, the script will attempt to do some cleanup of things that may cause problems with future runs. It will uninstall anything in helm, kill off everything in minikube, and delete everything out of /tmp that the current user has permissions to delete. It is NOT recommended that you run this on anything other than a system or instance built for this specific purpose. Don't say we didn't warn you, 'cause we totally warned you.

That's All (for Now), Folks

If we take a step back to look at everything we have explained, there we have it, end to end:

  • Attack the SSH server pod
  • Trip the 'Read sensitive file untrusted' rule in Falco
  • Use a webhook from Falcosidekick to send the events over to Falco Talon
  • Trip the 'Sensitive file opened' rule in Falco Talon
  • Terminate the offending pod

In the next part of this series, we'll add a few more pieces to this. Logging and alerting would be nice, as well as more automation to set everything up. We'll also scale this up with some more targets to attack.

For the previous episode with the basics, see Building honeypots with vcluster and Falco: Episode I.
