Falco 0.38.0 - k8s specific fields are not populated any more #3243
Thanks for getting back to me @incertum! As soon as I run Falco 0.38.0, these k8s.* fields are never populated, always N/A, while the container fields are always populated. Crictl output (issued on the k8s node):
crictl output with containerd socket
So I tried the following without success:
As soon as I roll back to 0.37.1 with default args:
the fields are populated again. So is there any way I can debug this in detail within the falco container? Or maybe some work is to be done to officially support cri-dockerd as a CRI interface? |
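For context, explicitly pointing Falco at the cri-dockerd socket from a DaemonSet would look roughly like the sketch below. This is not the configuration that was tried above (that part of the comment did not survive); the image tag, socket path, and host mount are assumptions based on cri-dockerd's usual defaults.

```yaml
# Hypothetical Falco DaemonSet pod-spec excerpt: pass the cri-dockerd socket
# explicitly via --cri and disable async CRI lookups while debugging.
spec:
  containers:
    - name: falco
      image: falcosecurity/falco:0.38.0
      args:
        - /usr/bin/falco
        - --cri
        - /host/var/run/cri-dockerd.sock   # assumed mount point inside the container
        - --disable-cri-async
      volumeMounts:
        - name: cri-dockerd-socket
          mountPath: /host/var/run/cri-dockerd.sock
  volumes:
    - name: cri-dockerd-socket
      hostPath:
        path: /var/run/cri-dockerd.sock    # cri-dockerd's default socket on the host
```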
Thanks for providing the crictl output. The labels are there.
The regression you mention seems puzzling. Plus, you also have 2 runtimes running, right? We definitely touched the container engines during the last releases. Maybe the regression is something very subtle with respect to the container engine type and/or the fact that you run these 2? Maybe it enters the docker container engine logic now and not the CRI logic, even though you pass the --cri socket. IDK yet. Btw, we never tested it with cri-dockerd. Not sure we can fix this for the immediate next Falco 0.38.1 patch release, because we are always very careful when touching the container engines as they can break easily. CC @falcosecurity/falco-maintainers. Edit: In addition, the new […]
Likely we'd need to compile the source and sprinkle more debug statements here and there, but you can try running with the libs logger in debug mode for sure. Your runtime-endpoint: unix:///run/containerd/containerd.sock |
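For reference, enabling the libs logger is a falco.yaml switch; a minimal sketch follows (severity values and exact keys should be double-checked against the falco.yaml shipped with your Falco version):

```yaml
# falco.yaml excerpt: turn on verbose logging from the underlying libs,
# which includes the container-engine / CRI lookup debug lines.
libs_logger:
  enabled: true
  severity: debug   # very noisy; enable only while troubleshooting
```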
Installed
It also can't be decoupled from the docker service.
The lines above are the libs logger debug lines, so you should be able to get similar logs when enabling the libs logger. Maybe the fact that it worked before was a lucky accident, and now that we have cleaned up the code a bit, it doesn't work anymore. Let me check now what would need to be done to support this scenario. |
@networkhell I opened a WIP PR. The issue was that the cgroups layout for docker was not supported by our internal CRI container engine. However, right now we would do lookups against both the docker and cri-dockerd sockets ... We need a design discussion among the maintainers to see how we can best support cri-dockerd.
|
@incertum thank you for your efforts regarding this issue 🙂
Let me provide some information that may help you with your design discussion:
Kubernetes deprecated docker as a container runtime as of version 1.20 and removed the dockershim in version 1.24 in favor of CRI-compliant runtimes.
cri-dockerd is maintained outside of Kubernetes by Docker/Mirantis as a CRI interface for the Docker runtime, so it is still possible to use Kubernetes with docker as the runtime.
I guess most cloud providers have already dropped docker as a runtime, but like me there are a lot of users that run k8s on-prem and need to stick with docker, at least for a while, for various reasons.
So I would fully understand if you decide not to add support for this “deprecated” runtime when used alongside Kubernetes. But on the other hand, maybe a lot of users rely on the combination of docker and Kubernetes.
You also mentioned that it seems I am running two runtimes (cri-dockerd and containerd). The reason for this is that docker uses containerd for some operations, so it is a docker dependency. But with its default configuration the CRI interface of containerd is disabled, so it can't be queried by crictl and can't be used as the CRI interface for kubelet.
I hope this helps a little. Please let me know if you need any further information.
|
Thanks for the additional info. I believe we should support cri-dockerd, because we also support docker, and who knows, maybe it becomes more relevant in the future. We just need to check and find a way to make sure we do not look up the same container from 2 sockets, that's all. It's not like it involves lots of code changes. @leogr thoughts? However, it definitely wouldn't be part of the next patch release. |
I have no idea if this may become relevant. We should dig into it.
Totally agree. Let's see in falcosecurity/libs#1907 and target it for libs 0.18 (Falco 0.39) |
@leogr I believe exposing container engines configs in falco.yaml […]. Basically, if we have explicit […]. We have a few options:
Defaults will of course remain the same. |
Totally agree 👍
I prefer 2 over 1. Anyway, we will still have the issue that it's not easy to use lists with […]. The option 3 might be:

    container_engines:
      docker:
        enabled: true
      cri:
        enabled: true
        cri: ["/run/containerd/containerd.sock", "/run/crio/crio.sock", "/run/k3s/containerd/containerd.sock"]
        disable-cri-async: false

That being said, I believe it's time to open a dedicated issue for this :) |
I may like option 3; it seems shorter. Yes, let me open a dedicated issue. |
/milestone 0.39.0 |
Once (1) falcosecurity/libs#1907 and (2) #3266 are merged, you could test the master falco container with the following config. The important part would be to disable docker.
|
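The exact config suggested here did not survive in this thread; purely as an illustration, a hypothetical falco.yaml excerpt following the option-3 shape sketched above might look like the following (the key names are taken from that sketch and may differ from the final implementation, and the cri-dockerd socket path is an assumption):

```yaml
# Hypothetical falco.yaml excerpt: disable the docker engine so containers
# are only resolved through the CRI engine (cri-dockerd in this setup).
container_engines:
  docker:
    enabled: false                       # avoid duplicate lookups via the docker socket
  cri:
    enabled: true
    cri: ["/var/run/cri-dockerd.sock"]   # assumed cri-dockerd socket path
    disable-cri-async: false
```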
@incertum I had the chance to test the current master/main images just now. But unfortunately the problem is not solved.
Additions to Falco config:
Sample log output:
As you can see in the logs, neither k8s fields nor container-specific fields are populated. So if you agree, I would opt to re-open this issue. |
/reopen |
@leogr: Reopened this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. |
I believe we need to bump libs in falco first. We will ping you here when we do it the next time. |
Hey @networkhell, we just released Falco 0.39.0-rc3, which should become the final Falco 0.39.0 release in a week. |
I tested just now with the suggested version and the results are not promising... Full falco config:
Log output:
Environment is basically unchanged.
Sorry for the bad news. |
:D doesn't matter, thanks for testing indeed! /milestone 0.40.0 |
Hey @networkhell, first of all, thanks for your feedback. I just have a question about the following:
Was it a short-lived container? 🤔 |
Hi @leogr,
I tested this with one of the falco containers itself. I guess the pod's uptime was something between 3 and 10 minutes before my tests. What is your definition of short-lived? ;-) |
Hey @networkhell, have you considered using the k8s-metacollector for Kubernetes metadata? |
@alacuku not yet - could it help to populate basic fields coming from the runtime such as container_name or k8s_pod_name? If yes, I will give it a try. |
@networkhell, at the following link you can find the |
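In rough terms, the metacollector approach means deploying the k8s-metacollector in the cluster and loading the k8smeta plugin in Falco, which exposes k8smeta.* fields. A minimal sketch of the Falco side follows; the collector hostname, port, and the exact init_config keys are assumptions to be verified against the k8smeta plugin documentation.

```yaml
# Hypothetical falco.yaml excerpt: load the k8smeta plugin, which enriches
# events with k8smeta.* fields fetched from the k8s-metacollector service.
plugins:
  - name: k8smeta
    library_path: libk8smeta.so
    init_config:
      collectorHostname: k8s-metacollector.metacollector.svc   # assumed service address
      collectorPort: 45000                                      # assumed default port
      nodeName: my-node-name   # node Falco runs on, usually injected per deployment
load_plugins: [k8smeta]
```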
May #2700 (comment) be related? 🤔 |
@alacuku thanks for the hint. I guess this will work for us if we adjust our alerts to match the metacollector fields. @leogr @incertum in the meantime we decided to drop support for the docker runtime in our production k8s clusters in favour of containerd, so this issue will not really bother us any more in the future. But more importantly, I will lose access to my testing environments set up with this combination of tools within the next two weeks. So if nobody else cares about k8s + docker as a runtime, feel free to close this issue. |
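Adjusting alerts to the metacollector mostly means swapping the k8s.* output fields for their k8smeta.* counterparts. A hypothetical custom rule using those fields is shown below; the rule name and condition are illustrative and not taken from the default ruleset.

```yaml
# Illustrative rule: report pod and namespace via the k8smeta plugin fields
# instead of the runtime-derived k8s.* fields that show up as <NA> here.
- rule: Shell Spawned In Container (k8smeta output)
  desc: A shell was spawned in a container with an attached terminal.
  condition: evt.type = execve and evt.dir = < and container.id != host and proc.name in (bash, sh, zsh) and proc.tty != 0
  output: >
    Shell spawned in a container (user=%user.name container_id=%container.id
    container_name=%container.name pod=%k8smeta.pod.name ns=%k8smeta.ns.name)
  priority: NOTICE
```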
After upgrading to falco 0.38.0, some k8s-specific fields are not populated any more, e.g. k8s.ns.name and k8s.pod.name.
Environment is k8s 1.28.6 with the following runtime components:
Deploy falco 0.38.0 via manifest with default config. Trigger any alert that contains k8s-specific output fields, e.g. spawn a shell in a container.
When a rule is triggered I expect the relevant fields to be populated from the container runtime. But the k8s.* fields are missing after the upgrade to 0.38.0.
14:37:02.679088469: Notice A shell was spawned in a container with an attached terminal (evt_type=execve user=root user_uid=1000 user_loginuid=-1 process=bash proc_exepath=/usr/bin/bash parent=runc command=bash terminal=34816 exe_flags=0 container_id=ce69f7e51afe container_image=harbor.***/hub.docker.com-proxy/library/python container_image_tag=3.12-slim container_name=k8s_***-service-python_***-oauth-service-5995bb9788-fllrf_management_b8968793-8b38-42fd-b2cf-1681edb9f99e_0 k8s_ns=<NA> k8s_pod_name=<NA>)
Environment
k8s on premise (kubespray 2.24.1)
Linux k8s-master01vt-nbg6.senacor-lbb.noris.de 6.1.0-21-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.90-1 (2024-05-03) x86_64 GNU/Linux
Kubernetes Manifest files from example Repo