Description
We are seeing high memory usage from cAdvisor on some nodes.
There are no OOM kills, and the memory is freed after some time; here are two pods from the same cluster.
The spikes are higher after 05/31 only because we increased the pod's memory limit, but cAdvisor continues to consume all available memory.
Details
Pods are placed on different node groups, separated by load type with affinity. Otherwise they are identical amd64 instances started from the same image.
cAdvisor versions we tested:
v0.47.2
v0.49.1

kubectl version:
Client Version: v1.29.2
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.27.13-eks-3af4770
cAdvisor params:
- --update_machine_info_interval=5m
- --housekeeping_interval=15s
- --max_housekeeping_interval=15s
- --event_storage_event_limit=default=0
- --event_storage_age_limit=default=0
- --containerd=/run/containerd/containerd.sock
- --disable_root_cgroup_stats=false
- --store_container_labels=false
- --whitelisted_container_labels=io.kubernetes.container.name,io.kubernetes.pod.name,io.kubernetes.pod.namespace
- --disable_metrics=percpu,tcp,udp,referenced_memory,advtcp,memory_numa,hugetlb
- --enable_load_reader=true
- --docker_only=true
- --profiling=true
go tool pprof -http=:8082 cadvisor-zd6t4-heap.out
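For context, a minimal sketch of how such a heap profile can be captured before opening it with pprof as above; the port (8080), the namespace, and the pod name are assumptions, and /debug/pprof/ is only served because --profiling=true is set in the params:

# fetch a heap profile from the running cAdvisor pod (namespace, pod name, and port are assumed)
kubectl -n kube-system port-forward pod/cadvisor-zd6t4 8080:8080 &
curl -s http://localhost:8080/debug/pprof/heap > cadvisor-zd6t4-heap.out
go tool pprof -http=:8082 cadvisor-zd6t4-heap.out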
@thunderbird86, can you try code from #3561?
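A minimal sketch of one way to build a test image from #3561 for such a rollout; the GitHub pull-request ref syntax is standard, while the Dockerfile path, image tag, and deployment step are assumptions:

# check out the PR branch and build a test image (Dockerfile path and tag are assumed)
git clone https://github.com/google/cadvisor.git && cd cadvisor
git fetch origin pull/3561/head:pr-3561 && git checkout pr-3561
docker build -t cadvisor:pr-3561 -f deploy/Dockerfile .
# then point the cAdvisor DaemonSet image at cadvisor:pr-3561 and redeploy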
Thanks @iwankgb, I've rolled it out and will need a couple of days to gather statistics.
Hi @iwankgb, I've made a few attempts and it doesn't help. Here is the last week: