Description

Pods can consume all the available capacity on a node by default. This is a problem because nodes typically run quite a few system daemons that power the OS (sshd, udev, etc.) as well as Kubernetes itself. Unless resources are set aside for these system daemons, pods and system daemons compete for resources, leading to resource starvation issues on the node.
Without RAM and other resources set aside, the kubelet will happily hand all of it out to pods, and when we then try to SSH in to debug why the node has become slow and unstable, we may not be able to.
It is recommended to configure the kubelet Node Allocatable feature based on the workload density on each node.

Implementation

More info about this can be found here:
https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/
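As a minimal sketch (the values below are placeholders, not recommendations, and need to be tuned per node based on workload density), the reservations can be set through kubelet flags; if the cluster runs k3s, they can be passed through its --kubelet-arg option:

# Placeholder values; tune per node.
# --kube-reserved reserves capacity for Kubernetes daemons (kubelet, container runtime),
# --system-reserved reserves capacity for OS daemons (sshd, udev, ...),
# --eviction-hard sets the thresholds at which the kubelet starts evicting pods.
kubelet \
  --kube-reserved=cpu=250m,memory=500Mi,ephemeral-storage=1Gi \
  --system-reserved=cpu=250m,memory=500Mi \
  --eviction-hard=memory.available<200Mi,nodefs.available<10%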
More

Node Allocatable = Node Capacity - kube-reserved - system-reserved - eviction-threshold

For example, a node with 8Gi of memory, 500Mi of kube-reserved, 500Mi of system-reserved, and a 200Mi hard eviction threshold advertises roughly 6.8Gi of allocatable memory.

On my VDC Kubernetes cluster, describing any node gives something like this; note that Node Allocatable is the same as Node Capacity, because we don't leave any room for system daemons:
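(The actual describe output is not pasted here; the snippet below is an illustration with made-up values of what the relevant part of kubectl describe node looks like when nothing is reserved, i.e. Allocatable equals Capacity.)

Capacity:
  cpu:      4
  memory:   8156072Ki
  pods:     110
Allocatable:
  cpu:      4
  memory:   8156072Ki
  pods:     110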
Could this be the reason for something like the following? Note the RESTARTS value:
root@zosv2-04:/sandbox/code/github/threefoldtech/js-sdk# kubectl get pods -A
NAMESPACE     NAME                                     READY   STATUS    RESTARTS   AGE
...
kube-system   local-path-provisioner-7ff9579c6-mgwnn   1/1     Running   60         10d
...
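A quick way to check whether those restarts are resource-related (a sketch, using the pod from the output above) is to look at the container's last terminated state, which reports OOMKilled when the container was killed for memory:

# Prints the reason the container last terminated, e.g. OOMKilled or Error.
kubectl -n kube-system get pod local-path-provisioner-7ff9579c6-mgwnn \
  -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'
# Or inspect the events and last state in full:
kubectl -n kube-system describe pod local-path-provisioner-7ff9579c6-mgwnn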