
failed to create containerd task: OCI runtime create failed: memory.kmem.limit_in_bytes: operation not supported: unknown #3777

Closed
mfalkvidd opened this issue Feb 17, 2023 · 3 comments
Labels
inactive · kind/support (Question with a workaround)

Comments

@mfalkvidd

Summary

The pod is stuck in CrashLoopBackOff. `kubectl describe pod` shows the following reason:

failed to create containerd task: OCI runtime create failed: container_linux.go:370: starting container process caused: process_linux.go:326: applying cgroup configuration for process caused: failed to write 1 to memory.kmem.limit_in_bytes: write /sys/fs/cgroup/memory/kubepods/.../memory.kmem.limit_in_bytes: operation not supported: unknown

What Should Happen Instead?

The pod should start.

Reproduction Steps

This started happening after a reboot. My guess is that it was caused by the switch to a new kernel version: 5.19.0-32-generic

Linux 5.19.0-32-generic #33~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Mon Jan 30 17:03:34 UTC 2 x86_64 x86_64 x86_64 GNU/Linux

22.04.1 LTS (Jammy Jellyfish)
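Not part of the original report, but a quick sketch of how one might inspect the node's cgroup setup in a case like this. Newer kernels dropped the legacy cgroup v1 kernel-memory limit interface, which matches the "operation not supported" write error above; the paths below are standard Linux locations, not taken from the report.

```shell
# Sketch: inspect the cgroup setup on the node.
uname -r                              # running kernel version
stat -fc %T /sys/fs/cgroup/           # "cgroup2fs" => cgroup v2 unified hierarchy
# On cgroup v1, check whether the legacy kmem limit file still exists;
# on newer kernels writes to it fail with "operation not supported".
ls /sys/fs/cgroup/memory/memory.kmem.limit_in_bytes 2>/dev/null \
  || echo "memory.kmem.limit_in_bytes not available"
```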

Introspection Report

inspection-report-20230217_092820.tar.gz

Can you suggest a fix?

Commenting out all resource directives (limits and requests) in the Helm chart and running `helm upgrade` allows the pod to start.
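A sketch of that workaround; the release and chart names below are hypothetical, since the report does not name the chart. The idea is to clear the pod's resource block via a values override so the runtime never writes a kmem limit.

```shell
# Hypothetical release/chart names; override the chart's resource
# directives with an empty block, then upgrade the release.
cat > no-resources.yaml <<'EOF'
resources: {}
EOF
command -v helm >/dev/null \
  && helm upgrade my-release ./my-chart -f no-resources.yaml \
  || echo "helm not installed here; run the upgrade on the cluster node"
```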

Are you interested in contributing a fix?

@neoaggelos
Contributor

@mfalkvidd Was this fixed by changing something in particular in the Helm chart configuration? I believe the same would have happened just by recreating the pod from scratch (e.g. with a `kubectl rollout restart` of the deployment/daemonset/etc.).

Not sure if there is anything we can do about this on the MicroK8s side; perhaps you might have better luck creating an issue in the containerd repository?

@neoaggelos neoaggelos added the kind/support Question with a workaround label Mar 16, 2023
@mfalkvidd
Author

@neoaggelos I fixed it by commenting out all resource directives (limits and requests) in the Helm chart and running `helm upgrade`. I do not know whether `kubectl rollout restart` would have worked instead.
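The untried alternative from the previous comment would have looked something like this; the deployment name is hypothetical.

```shell
# Hypothetical deployment name; recreates the pods from scratch
# without touching the chart's resource directives.
command -v kubectl >/dev/null \
  && kubectl rollout restart deployment/my-deployment \
  || echo "kubectl not installed here; run the restart on the cluster node"
```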


stale bot commented Feb 9, 2024

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@stale stale bot added the inactive label Feb 9, 2024
@stale stale bot closed this as completed Mar 11, 2024