
Unknown container runtime on some kubernetes clusters #3003

Closed
NDStrahilevitz opened this issue Apr 18, 2023 · 12 comments · Fixed by #4155

Comments

@NDStrahilevitz
Collaborator

Description

  1. Create a containerd gke cluster
  2. Run tracee with -f e=cgroup_mkdir,container_create
  3. Observe that cgroup paths are of the form kubepods/<besteffort|burstable>/podXXXX/<container_id>
  4. Observe that container_create events have the runtime argument set to unknown

Output of tracee -v:

Tracee version "v0.13.1"

Output of uname -a:

Linux gke-enforcer-overhea-prometheus-node--49423563-kggk 5.15.0-1027-gke #32-Ubuntu SMP Tue Jan 24 11:53:18 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux

Additional details

@NDStrahilevitz
Collaborator Author

Currently my best idea is to detect on startup whether tracee is running on a Kubernetes node.
Then, using the command tr \\0 ' ' < /proc/"$(pgrep kubelet)"/cmdline, we can determine the runtime socket used by kubelet.
We can then add logic to the containers package that sets a default Kubernetes runtime and uses it whenever a cgroup path matches the pattern kubepods/<besteffort|burstable>/podXXXX/<container_id>.

If anyone has a simpler idea, please comment here; otherwise I will go with this solution as a start.
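The cmdline-parsing step above can be sketched in Go. This is an illustrative sketch, not Tracee's actual code: `runtimeEndpoint` is a hypothetical helper that scans the NUL-separated contents of /proc/&lt;pid&gt;/cmdline for --container-runtime-endpoint, handling both `--flag=value` and `--flag value` forms.

```go
package main

import (
	"fmt"
	"strings"
)

// runtimeEndpoint extracts the value of --container-runtime-endpoint from a
// kubelet command line whose arguments are NUL-separated, as read from
// /proc/<pid>/cmdline. Hypothetical helper for illustration only.
func runtimeEndpoint(cmdline []byte) string {
	args := strings.Split(string(cmdline), "\x00")
	const flag = "--container-runtime-endpoint"
	for i, a := range args {
		if strings.HasPrefix(a, flag+"=") {
			return strings.TrimPrefix(a, flag+"=")
		}
		if a == flag && i+1 < len(args) {
			return args[i+1]
		}
	}
	return ""
}

func main() {
	// Abbreviated example cmdline, NUL-separated as in /proc.
	cmdline := []byte("/home/kubernetes/bin/kubelet\x00--v=2\x00" +
		"--container-runtime-endpoint=unix:///run/containerd/containerd.sock\x00" +
		"--registry-qps=10\x00")
	fmt.Println(runtimeEndpoint(cmdline))
}
```

A unix:// endpoint found this way could then be mapped to the corresponding runtime (containerd, cri-dockerd, cri-o) when the kubepods pattern matches.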

@NDStrahilevitz NDStrahilevitz modified the milestones: v0.14.0, v0.15.0 May 2, 2023
@NDStrahilevitz NDStrahilevitz modified the milestones: v0.15.0, v0.16.0 May 24, 2023
@yanivagman yanivagman modified the milestones: v0.16.0, v0.17.0 May 31, 2023
@NDStrahilevitz
Collaborator Author

Waiting for the process tree to be added.
Then I can query kubelet from the process tree and resolve the default Kubernetes runtime from there.

@geyslan
Member

geyslan commented Oct 18, 2023

Well, things seem to have changed in GKE since this issue was opened. Below are analyses of two environments (cos_containerd and ubuntu_containerd).

cos_containerd

env

uname -a

Linux geyslan@gke-cluster-1-default-pool-29cfd713-gdzs 5.15.109+ #1 SMP Fri Jun 9 10:57:30 UTC 2023 x86_64 Intel(R) Xeon(R) CPU @ 2.20GHz GenuineIntel GNU/Linux

tr \\0 ' ' < /proc/"$(pgrep kubelet)"/cmdline

/home/kubernetes/bin/kubelet --v=2 --cloud-provider=external --experimental-mounter-path=/home/kubernetes/containerized_mounter/mounter --cert-dir=/var/lib/kubelet/pki/ --kubeconfig=/var/lib/kubelet/kubeconfig --max-pods=110 --volume-plugin-dir=/home/kubernetes/flexvolume --node-status-max-images=25 --container-runtime-endpoint=unix:///run/containerd/containerd.sock --runtime-cgroups=/system.slice/containerd.service --registry-qps=10 --registry-burst=20 --config /home/kubernetes/kubelet-config.yaml --pod-sysctls=net.core.somaxconn=1024,net.ipv4.conf.all.accept_redirects=0,net.ipv4.conf.all.forwarding=1,net.ipv4.conf.all.route_localnet=1,net.ipv4.conf.default.forwarding=1,net.ipv4.ip_forward=1,net.ipv4.tcp_fin_timeout=60,net.ipv4.tcp_keepalive_intvl=60,net.ipv4.tcp_keepalive_probes=5,net.ipv4.tcp_keepalive_time=300,net.ipv4.tcp_rmem=4096 87380 6291456,net.ipv4.tcp_s

Event Output

kubectl logs -f tracee-glphv -n tracee-system | jq '{ eventName: .eventName, args: .args }'

For the following commands:

kubectl apply -f pod.yaml

{
  "eventName": "cgroup_mkdir",
  "args": [
    {
      "name": "cgroup_id",
      "type": "u64",
      "value": 11031
    },
    {
      "name": "cgroup_path",
      "type": "const char*",
      "value": "/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5f2261e2_c7c4_4d8c_b264_25fc19151247.slice/cri-containerd-d238fcfd99d54dc7fbb14b5b9b183d05d68a41f9ac460c886aba30c41a0466d5.scope"
    },
    {
      "name": "hierarchy_id",
      "type": "u32",
      "value": 0
    }
  ]
}
{
  "eventName": "container_create",
  "args": [
    {
      "name": "runtime",
      "type": "const char*",
      "value": "containerd"
    },
    {
      "name": "container_id",
      "type": "const char*",
      "value": "d238fcfd99d54dc7fbb14b5b9b183d05d68a41f9ac460c886aba30c41a0466d5"
    },
    {
      "name": "ctime",
      "type": "unsigned long",
      "value": 1697667965255697000
    },
    {
      "name": "container_image",
      "type": "const char*",
      "value": "docker.io/library/nginx:latest"
    },
    {
      "name": "container_image_digest",
      "type": "const char*",
      "value": "docker.io/library/nginx:latest"
    },
    {
      "name": "container_name",
      "type": "const char*",
      "value": "my-container"
    },
    {
      "name": "pod_name",
      "type": "const char*",
      "value": "my-pod"
    },
    {
      "name": "pod_namespace",
      "type": "const char*",
      "value": "default"
    },
    {
      "name": "pod_uid",
      "type": "const char*",
      "value": "5f2261e2-c7c4-4d8c-b264-25fc19151247"
    },
    {
      "name": "pod_sandbox",
      "type": "bool",
      "value": false
    }
  ]
}

docker run --rm -it hello-world:latest

{
  "eventName": "cgroup_mkdir",
  "args": [
    {
      "name": "cgroup_id",
      "type": "u64",
      "value": 11670
    },
    {
      "name": "cgroup_path",
      "type": "const char*",
      "value": "/system.slice/docker-ee6fc54ba2af7eb3978948a22b843fb35e8644001da43f483c465e6fd28e9fe2.scope"
    },
    {
      "name": "hierarchy_id",
      "type": "u32",
      "value": 0
    }
  ]
}
{
  "eventName": "container_create",
  "args": [
    {
      "name": "runtime",
      "type": "const char*",
      "value": "docker"
    },
    {
      "name": "container_id",
      "type": "const char*",
      "value": "ee6fc54ba2af7eb3978948a22b843fb35e8644001da43f483c465e6fd28e9fe2"
    },
    {
      "name": "ctime",
      "type": "unsigned long",
      "value": 1697664457594386200
    },
    {
      "name": "container_image",
      "type": "const char*",
      "value": "hello-world:latest"
    },
    {
      "name": "container_image_digest",
      "type": "const char*",
      "value": "hello-world@sha256:88ec0acaa3ec199d3b7eaf73588f4518c25f9d34f58ce9a0df68429c5af48e8d"
    },
    {
      "name": "container_name",
      "type": "const char*",
      "value": "gallant_wing"
    },
    {
      "name": "pod_name",
      "type": "const char*",
      "value": ""
    },
    {
      "name": "pod_namespace",
      "type": "const char*",
      "value": ""
    },
    {
      "name": "pod_uid",
      "type": "const char*",
      "value": ""
    },
    {
      "name": "pod_sandbox",
      "type": "bool",
      "value": false
    }
  ]
}

sudo ctr image pull docker.io/library/hello-world:latest
sudo ctr run --rm docker.io/library/hello-world:latest hello-container

// container_create isn't triggered
{
  "eventName": "cgroup_mkdir",
  "args": [
    {
      "name": "cgroup_id",
      "type": "u64",
      "value": 11550
    },
    {
      "name": "cgroup_path",
      "type": "const char*",
      "value": "/default/hello-container"
    },
    {
      "name": "hierarchy_id",
      "type": "u32",
      "value": 0
    }
  ]
}

ubuntu_containerd

env

uname -a

Linux gke-cluster-2-default-pool-0de6e28f-8424 5.15.0-1036-gke #41-Ubuntu SMP Wed Jun 7 04:23:11 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux

tr \\0 ' ' < /proc/"$(pgrep kubelet)"/cmdline

/home/kubernetes/bin/kubelet --v=2 --cloud-provider=external --experimental-mounter-path=/home/kubernetes/containerized_mounter/mounter --cert-dir=/var/lib/kubelet/pki/ --kubeconfig=/var/lib/kubelet/kubeconfig --max-pods=110 --volume-plugin-dir=/home/kubernetes/flexvolume --node-status-max-images=25 --container-runtime-endpoint=unix:///run/containerd/containerd.sock --runtime-cgroups=/system.slice/containerd.service --registry-qps=10 --registry-burst=20 --config /home/kubernetes/kubelet-config.yaml --pod-sysctls=net.core.somaxconn=1024,net.ipv4.conf.all.accept_redirects=0,net.ipv4.conf.all.forwarding=1,net.ipv4.conf.all.route_localnet=1,net.ipv4.conf.default.forwarding=1,net.ipv4.ip_forward=1,net.ipv4.tcp_fin_timeout=60,net.ipv4.tcp_keepalive_intvl=60,net.ipv4.tcp_keepalive_probes=5,net.ipv4.tcp_keepalive_time=300,net.ipv4.tcp_rmem=4096 87380 6291456,net.ipv4.tcp_syn_retries=6,net.ipv4.tcp_tw_reuse=0,net.ipv4.tcp_wmem=4096 16384 4194304,net.ipv4.udp_rmem_min=4096,net.ipv4.udp_wmem_min=4096,net.ipv6.conf.all.disable_ipv6=1,net.ipv6.conf.default.accept_ra=0,net.ipv6.conf.default.disable_ipv6=1,net.netfilter.nf_conntrack_generic_timeout=600,net.netfilter.nf_conntrack_tcp_be_liberal=1,net.netfilter.nf_conntrack_tcp_timeout_close_wait=3600,net.netfilter.nf_conntrack_tcp_timeout_established=86400 --cgroup-driver=systemd --pod-infra-container-image=gke.gcr.io/pause:3.8@sha256:880e63f94b145e46f1b1082bb71b85e21f16b99b180b9996407d61240ceb9830

Event Output

kubectl logs -f tracee-glphv -n tracee-system | jq '{ eventName: .eventName, args: .args }'

For the following commands:

kubectl apply -f pod.yaml

{
  "eventName": "cgroup_mkdir",
  "args": [
    {
      "name": "cgroup_id",
      "type": "u64",
      "value": 10778
    },
    {
      "name": "cgroup_path",
      "type": "const char*",
      "value": "/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd38e8673_9c07_45d5_afc9_00c4b231298d.slice/cri-containerd-cc8dfcf4431e1b1485e2cd1f59c4a36a27a3d2e4bb53d0fef558adb87bb26cd4.scope"
    },
    {
      "name": "hierarchy_id",
      "type": "u32",
      "value": 0
    }
  ]
}
{
  "eventName": "container_create",
  "args": [
    {
      "name": "runtime",
      "type": "const char*",
      "value": "containerd"
    },
    {
      "name": "container_id",
      "type": "const char*",
      "value": "cc8dfcf4431e1b1485e2cd1f59c4a36a27a3d2e4bb53d0fef558adb87bb26cd4"
    },
    {
      "name": "ctime",
      "type": "unsigned long",
      "value": 1697667647556392200
    },
    {
      "name": "container_image",
      "type": "const char*",
      "value": ""
    },
    {
      "name": "container_image_digest",
      "type": "const char*",
      "value": ""
    },
    {
      "name": "container_name",
      "type": "const char*",
      "value": ""
    },
    {
      "name": "pod_name",
      "type": "const char*",
      "value": ""
    },
    {
      "name": "pod_namespace",
      "type": "const char*",
      "value": ""
    },
    {
      "name": "pod_uid",
      "type": "const char*",
      "value": ""
    },
    {
      "name": "pod_sandbox",
      "type": "bool",
      "value": false
    }
  ]
}

docker run --rm -it hello-world:latest

{
  "eventName": "cgroup_mkdir",
  "args": [
    {
      "name": "cgroup_id",
      "type": "u64",
      "value": 8915
    },
    {
      "name": "cgroup_path",
      "type": "const char*",
      "value": "/system.slice/docker-7a9fc4c6d3676d57ecf74a72dff1d6b06231bef6d23997a0eb1eb28b22524a70.scope"
    },
    {
      "name": "hierarchy_id",
      "type": "u32",
      "value": 0
    }
  ]
}
{
  "eventName": "container_create",
  "args": [
    {
      "name": "runtime",
      "type": "const char*",
      "value": "docker"
    },
    {
      "name": "container_id",
      "type": "const char*",
      "value": "7a9fc4c6d3676d57ecf74a72dff1d6b06231bef6d23997a0eb1eb28b22524a70"
    },
    {
      "name": "ctime",
      "type": "unsigned long",
      "value": 1697665168308888800
    },
    {
      "name": "container_image",
      "type": "const char*",
      "value": "hello-world:latest"
    },
    {
      "name": "container_image_digest",
      "type": "const char*",
      "value": "hello-world@sha256:88ec0acaa3ec199d3b7eaf73588f4518c25f9d34f58ce9a0df68429c5af48e8d"
    },
    {
      "name": "container_name",
      "type": "const char*",
      "value": "priceless_feistel"
    },
    {
      "name": "pod_name",
      "type": "const char*",
      "value": ""
    },
    {
      "name": "pod_namespace",
      "type": "const char*",
      "value": ""
    },
    {
      "name": "pod_uid",
      "type": "const char*",
      "value": ""
    },
    {
      "name": "pod_sandbox",
      "type": "bool",
      "value": false
    }
  ]
}

sudo ctr image pull docker.io/library/hello-world:latest
sudo ctr run --rm docker.io/library/hello-world:latest hello-container

// container_create isn't triggered
{
  "eventName": "cgroup_mkdir",
  "args": [
    {
      "name": "cgroup_id",
      "type": "u64",
      "value": 9785
    },
    {
      "name": "cgroup_path",
      "type": "const char*",
      "value": "/default/hello-container"
    },
    {
      "name": "hierarchy_id",
      "type": "u32",
      "value": 0
    }
  ]
}

@geyslan
Member

geyslan commented Oct 18, 2023

In summary: for both cos_containerd and ubuntu_containerd, the runtime is correctly detected when the container is created by Kubernetes or Docker, but not when it is created by ctr. So I consider there to be no obvious problem at the moment. If another GKE environment is not compliant, please reopen this issue, specifying which environment it is and how to reproduce the problem.

Closing as non-reproducible on GKE.

@geyslan
Member

geyslan commented Oct 19, 2023

Considering other environments/platforms besides GKE, here are the results:

aks

env

az aks show --resource-group ROCINANTE --name ROCI --query kubernetesVersion -o tsv

1.26.6

kubectl get nodes -o wide -n tracee-system

NAME                                STATUS   ROLES   AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
aks-agentpool-39237752-vmss000000   Ready    agent   32m   v1.26.6   10.224.0.5    <none>        Ubuntu 22.04.3 LTS   5.15.0-1049-azure   containerd://1.7.5-1

RESULT

  • runtime containerd (correct) ✅
  • args (correct but container_name is empty) ✅ ⛔
Event Output
{
  "eventName": "cgroup_mkdir",
  "args": [
    {
      "name": "cgroup_id",
      "type": "u64",
      "value": 9540
    },
    {
      "name": "cgroup_path",
      "type": "const char*",
      "value": "/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9b9d24d5_38a0_4800_870e_18471c07e8f8.slice/cri-containerd-b69314304aa728ab173865dfef6e15a7500a7c373c865d4f8461dd8038b96bbe.scope"
    },
    {
      "name": "hierarchy_id",
      "type": "u32",
      "value": 0
    }
  ]
}
{
  "eventName": "container_create",
  "args": [
    {
      "name": "runtime",
      "type": "const char*",
      "value": "containerd"
    },
    {
      "name": "container_id",
      "type": "const char*",
      "value": "b69314304aa728ab173865dfef6e15a7500a7c373c865d4f8461dd8038b96bbe"
    },
    {
      "name": "ctime",
      "type": "unsigned long",
      "value": 1697741372622129400
    },
    {
      "name": "container_image",
      "type": "const char*",
      "value": "mcr.microsoft.com/oss/kubernetes/pause:3.6"
    },
    {
      "name": "container_image_digest",
      "type": "const char*",
      "value": "mcr.microsoft.com/oss/kubernetes/pause:3.6"
    },
    {
      "name": "container_name",
      "type": "const char*",
      "value": ""
    },
    {
      "name": "pod_name",
      "type": "const char*",
      "value": "my-pod"
    },
    {
      "name": "pod_namespace",
      "type": "const char*",
      "value": "tracee-system"
    },
    {
      "name": "pod_uid",
      "type": "const char*",
      "value": "9b9d24d5-38a0-4800-870e-18471c07e8f8"
    },
    {
      "name": "pod_sandbox",
      "type": "bool",
      "value": true
    }
  ]
}

kind

env

kind version

kind v0.20.0 go1.19.7 linux/amd64

tr \\0 ' ' < /proc/"$(pgrep kubelet)"/cmdline

/usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///run/containerd/containerd.sock --node-ip=172.19.0.2 --node-labels= --pod-infra-container-image=registry.k8s.io/pause:3.9 --provider-id=kind://docker/kind/kind-control-plane --runtime-cgroups=/system.slice/containerd.service

RESULT

  • runtime containerd (correct) ✅
  • args (no) ⛔
Event Output
{
  "eventName": "cgroup_mkdir",
  "args": [
    {
      "name": "cgroup_id",
      "type": "u64",
      "value": 20922
    },
    {
      "name": "cgroup_path",
      "type": "const char*",
      "value": "/system.slice/docker-832aa69574e4b66dfe297fa8960e68843dddcb5673d9dad0cc996722aa2540dd.scope/kubelet.slice/kubelet-kubepods.slice/kubelet-kubepods-besteffort.slice/kubelet-kubepods-besteffort-poda7b57a44_a1df_463c_be01_d6f4a3312eef.slice/cri-containerd-6ac9b7af1d5b7453529baa5efdd3e44c8a0eda2407ff36be675e853264ac8f04.scope"
    },
    {
      "name": "hierarchy_id",
      "type": "u32",
      "value": 0
    }
  ]
}
{
  "eventName": "container_create",
  "args": [
    {
      "name": "runtime",
      "type": "const char*",
      "value": "containerd"
    },
    {
      "name": "container_id",
      "type": "const char*",
      "value": "6ac9b7af1d5b7453529baa5efdd3e44c8a0eda2407ff36be675e853264ac8f04"
    },
    {
      "name": "ctime",
      "type": "unsigned long",
      "value": 1697735931618706475
    },
    {
      "name": "container_image",
      "type": "const char*",
      "value": ""
    },
    {
      "name": "container_image_digest",
      "type": "const char*",
      "value": ""
    },
    {
      "name": "container_name",
      "type": "const char*",
      "value": ""
    },
    {
      "name": "pod_name",
      "type": "const char*",
      "value": ""
    },
    {
      "name": "pod_namespace",
      "type": "const char*",
      "value": ""
    },
    {
      "name": "pod_uid",
      "type": "const char*",
      "value": ""
    },
    {
      "name": "pod_sandbox",
      "type": "bool",
      "value": false
    }
  ]
}

minikube

env

minikube version

minikube version: v1.31.2
commit: fd7ecd9c4599bef9f04c0986c4a0187f98a4396e-dirty

tr \\0 ' ' < /proc/"$(pgrep kubelet)"/cmdline

/var/lib/minikube/binaries/v1.27.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.193

RESULT

  • runtime (unknown) ⛔
  • args (no) ⛔
Event Output
{
  "eventName": "cgroup_mkdir",
  "args": [
    {
      "name": "cgroup_id",
      "type": "u64",
      "value": 778
    },
    {
      "name": "cgroup_path",
      "type": "const char*",
      "value": "/kubepods/besteffort/pod19e4fb1e-cc64-4935-b592-b24b6d1ab689/d1e16407868a862baa8f72ad841ad8e27cd1a61433adfc5ae6b5c5acc7b05750"
    },
    {
      "name": "hierarchy_id",
      "type": "u32",
      "value": 2
    }
  ]
}
{
  "eventName": "container_create",
  "args": [
    {
      "name": "runtime",
      "type": "const char*",
      "value": "unknown"
    },
    {
      "name": "container_id",
      "type": "const char*",
      "value": "d1e16407868a862baa8f72ad841ad8e27cd1a61433adfc5ae6b5c5acc7b05750"
    },
    {
      "name": "ctime",
      "type": "unsigned long",
      "value": 1697731959307002554
    },
    {
      "name": "container_image",
      "type": "const char*",
      "value": ""
    },
    {
      "name": "container_image_digest",
      "type": "const char*",
      "value": ""
    },
    {
      "name": "container_name",
      "type": "const char*",
      "value": ""
    },
    {
      "name": "pod_name",
      "type": "const char*",
      "value": ""
    },
    {
      "name": "pod_namespace",
      "type": "const char*",
      "value": ""
    },
    {
      "name": "pod_uid",
      "type": "const char*",
      "value": ""
    },
    {
      "name": "pod_sandbox",
      "type": "bool",
      "value": false
    }
  ]
}

@rafaeldtinoco
Contributor

If there isn't an issue for mapping the supported runtimes (I believe there was one at some point), then I would recommend creating one with a more consumable table, since these comments are too long and make it hard to summarize the final picture. Up to you!

@geyslan
Member

geyslan commented Oct 20, 2023

Here's a tabular summary of the above analysis.

| Platform/env | args | args missing values | runtime | --container-runtime-endpoint | cgroup_path |
|---|---|---|---|---|---|
| GKE (cos_containerd) | ✅ | - | containerd | unix:///run/containerd/containerd.sock | /kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5f2261e2_c7c4_4d8c_b264_25fc19151247.slice/cri-containerd-d238fcfd99d54dc7fbb14b5b9b183d05d68a41f9ac460c886aba30c41a0466d5.scope |
| GKE (ubuntu_containerd) | ⛔ | all | containerd | unix:///run/containerd/containerd.sock | /kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd38e8673_9c07_45d5_afc9_00c4b231298d.slice/cri-containerd-cc8dfcf4431e1b1485e2cd1f59c4a36a27a3d2e4bb53d0fef558adb87bb26cd4.scope |
| AKS | ✅ ⛔ | container_name | containerd | | /kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9b9d24d5_38a0_4800_870e_18471c07e8f8.slice/cri-containerd-b69314304aa728ab173865dfef6e15a7500a7c373c865d4f8461dd8038b96bbe.scope |
| Kind | ⛔ | all | containerd | unix:///run/containerd/containerd.sock | /system.slice/docker-832aa69574e4b66dfe297fa8960e68843dddcb5673d9dad0cc996722aa2540dd.scope/kubelet.slice/kubelet-kubepods.slice/kubelet-kubepods-besteffort.slice/kubelet-kubepods-besteffort-poda7b57a44_a1df_463c_be01_d6f4a3312eef.slice/cri-containerd-6ac9b7af1d5b7453529baa5efdd3e44c8a0eda2407ff36be675e853264ac8f04.scope |
| Minikube | ⛔ | all | unknown | unix:///var/run/cri-dockerd.sock | /kubepods/besteffort/pod19e4fb1e-cc64-4935-b592-b24b6d1ab689/d1e16407868a862baa8f72ad841ad8e27cd1a61433adfc5ae6b5c5acc7b05750 |
| microk8s | | - | unknown | unix:///run/containerd/containerd.sock | /kubepods/burstable/pod63a1179e-5810-4719-860e-4aeeafe24743/016a33241f9385b12e1c4f2fa7a95d7458a5ada3a6a5b15d77b7a76d9972a665 |
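The path shapes in the summary suggest a simple classifier. The sketch below is hypothetical (the regexes and function names are illustrative, not taken from Tracee's containers package): cri-containerd-*.scope paths map to containerd, docker-*.scope paths to docker, and the plain kubepods/&lt;besteffort|burstable&gt;/... layout yields unknown, which is exactly the case this issue is about.

```go
package main

import (
	"fmt"
	"regexp"
)

// Illustrative patterns matching the cgroup path shapes observed above.
var (
	criContainerd = regexp.MustCompile(`cri-containerd-([0-9a-f]{64})\.scope`)
	dockerScope   = regexp.MustCompile(`docker-([0-9a-f]{64})\.scope`)
	plainKubepods = regexp.MustCompile(`kubepods/(?:besteffort|burstable)/pod[0-9a-f-]+/([0-9a-f]{64})`)
)

// classify maps a cgroup path to a (runtime, container ID) pair.
// Order matters: Kind paths contain both a docker-*.scope prefix and a
// cri-containerd-*.scope suffix, so the CRI match must win.
func classify(cgroupPath string) (runtime, containerID string) {
	if m := criContainerd.FindStringSubmatch(cgroupPath); m != nil {
		return "containerd", m[1]
	}
	if m := dockerScope.FindStringSubmatch(cgroupPath); m != nil {
		return "docker", m[1]
	}
	if m := plainKubepods.FindStringSubmatch(cgroupPath); m != nil {
		// cgroupfs-driver kubelets (Minikube, microk8s): the engine is not
		// encoded in the path, hence "unknown" unless a default is assumed.
		return "unknown", m[1]
	}
	return "unknown", ""
}

func main() {
	r, id := classify("/kubepods/besteffort/pod19e4fb1e-cc64-4935-b592-b24b6d1ab689/d1e16407868a862baa8f72ad841ad8e27cd1a61433adfc5ae6b5c5acc7b05750")
	fmt.Println(r, id)
}
```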

@itaysk itaysk removed this from the v0.19.0 milestone Oct 25, 2023
@geyslan
Member

geyslan commented Oct 27, 2023

I was using jq for all tests, and it has a bad reputation for messing up output, so I need to redo them all by just grepping the output. Reopening and moving this to v0.20.0.

@geyslan geyslan reopened this Oct 27, 2023
@geyslan geyslan added this to the v0.20.0 milestone Oct 27, 2023
@rafaeldtinoco
Contributor

rafaeldtinoco commented Nov 1, 2023

Using microk8s: it uses containerd (unix:///run/containerd/containerd.sock), and the sysfs cgroup container entry is

/sys/fs/cgroup/kubepods/burstable/pod63a1179e-5810-4719-860e-4aeeafe24743/016a33241f9385b12e1c4f2fa7a95d7458a5ada3a6a5b15d77b7a76d9972a665

For microk8s, I can register the default location for the socket:

/snap/microk8s/current/bin/ctr -a /var/snap/microk8s/common/run/containerd.sock -n k8s.io containers ls

And I believe microk8s container enrichment would start working.

@NDStrahilevitz
Collaborator Author

NDStrahilevitz commented Nov 5, 2023

Kind - it looks like Kind creates an overlay mount that includes the original containerd socket, but for some reason running ctr inside the container and outside (sudo ctr c ls and sudo ctr namespace ls) gives different results. I doubt they run a separate containerd instance (if so, how does the same socket communicate with different instances?), so I'm not sure what causes it. Here there is no need to fix runtime recognition, only the enricher interaction.
microk8s - runtime not identified due to the pattern in the OP. As @rafaeldtinoco pointed out, the socket path is different.
Minikube (on docker) - correctly identifies the runtime (due to the recognized pattern); again there seems to be a duplicate runtime inside the container which tracee can't access for enrichment.

Overall we have two issues here:

  1. The unrecognized pattern described above (should it automatically be assigned containerd? Can it occur in cri-o environments? Even docker?). Perhaps the best solution would be to add a "k8s" runtime value, agnostic to its internal container engine, which would use a k8s-aware enricher. This was actually one of the original approaches (using the CRI directly), but CRI output differs between containerd and cri-o, which led to the current solution. Maybe kubectl should be used.
  2. Enrichers stop working for runtimes running inside containers (which do not mount the original socket; but even in Kind, which seems to mount it, the limitation exists). Perhaps on container creation we should search the container's filesystem for the same socket list we use in autodetection (this should be an easy check: get the container's merged overlay fs, append the autodiscovery list, and stat the sockets as before).
     I am not convinced tracking through k8s directly is preferable: just as we have the issue with enrichers, we would need to detect new nodes on the system at all times (microk8s and minikube running at the same time? If we have only vanilla k8s directly on the system, the enrichment issue is just detecting the runtime from patterns, not the enricher itself).

@josedonizetti
Contributor

This bug also happens on microk8s -> #3003

@yanivagman yanivagman modified the milestones: v0.20.0, v0.21.0 Feb 7, 2024
@yanivagman yanivagman modified the milestones: v0.21.0, v0.22.0 Apr 18, 2024
@NDStrahilevitz
Copy link
Collaborator Author

Decision: we will consider this path a containerd path. TBD whether it shows up in further environments.
