NFD will remove and re-add node labels if nfd-worker pod is deleted (and re-created by the nfd-worker DS) #1752
Comments
I like the first part of the idea, but regarding the second part:

This bit is intended to handle updates of the nfd-worker DS selectors/affinity/tolerations, where nfd-worker pods may get removed from some nodes in the cluster. This can be an additional improvement (separate PR?), as I'm not sure how often this will happen. Here's what I was thinking re the GC flow for this case:

Other ideas are welcome :)
I would prefer to go for something like a finalizer on the NodeFeature object.

How would this work? Who adds/removes the finalizer?

Yeah, you are right. After thinking about it, your idea is the right approach.
I think we can split this issue into 2 action items:
1. Set the NodeFeature owner references so that deleting the nfd-worker pod no longer cascades to the NodeFeature (and its labels).
2. Extend nfd-gc to clean up NodeFeature objects that are left behind (orphaned NFs / nodes no longer targeted by the DS).

First PR to address this issue:
I was exploring this very issue in the v0.16 cycle but didn't come up with any good solution (lack of bandwidth). I was pondering three possible solutions, two of which have been mentioned here:
From these, 2) isn't viable (AFAIU), as the finalizer will only delay the deletion of the NF, but you cannot undelete it when the new worker pod comes up. For 1) I did some prototyping, but the code ended up hairy (at least in my hands), and this felt like a probable source of caveats and other problems. So maybe 3) would be the least problematic solution. For the possible GC improvements (if we need/want them), could we exploit the owner refs for that, too? E.g. see if the owner pod of the NF exists; if not, mark the NF as orphaned. On the next GC round, if the NF is still orphaned (and the owner pod UID hasn't changed), delete the NF. Thoughts?
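A minimal Go sketch of that two-round flow, assuming a hypothetical annotation (`nfd.node.kubernetes.io/orphaned-owner-uid`) as the orphan mark; none of this is existing nfd-gc code:

```go
package gc

import (
	"context"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// Hypothetical annotation used as the orphan mark between GC rounds.
const annOrphanedOwnerUID = "nfd.node.kubernetes.io/orphaned-owner-uid"

// gcRound runs one GC pass for a single NodeFeature-like object and reports
// whether it should be deleted on this round.
func gcRound(ctx context.Context, cs kubernetes.Interface, nf metav1.Object) (bool, error) {
	var podRef *metav1.OwnerReference
	refs := nf.GetOwnerReferences()
	for i := range refs {
		if refs[i].Kind == "Pod" {
			podRef = &refs[i]
			break
		}
	}
	if podRef == nil {
		return false, nil // no pod owner to check
	}

	anns := nf.GetAnnotations()
	_, err := cs.CoreV1().Pods(nf.GetNamespace()).Get(ctx, podRef.Name, metav1.GetOptions{})
	switch {
	case err == nil:
		// Owner pod exists: clear any stale orphan mark, keep the object.
		delete(anns, annOrphanedOwnerUID)
		return false, nil
	case !apierrors.IsNotFound(err):
		return false, err
	case anns[annOrphanedOwnerUID] == string(podRef.UID):
		// Second consecutive round with the same missing owner UID: delete.
		return true, nil
	default:
		// First round: only mark as orphaned; a recreated worker pod would
		// rewrite the owner ref with a new UID and void the mark.
		if anns == nil {
			anns = map[string]string{}
		}
		anns[annOrphanedOwnerUID] = string(podRef.UID)
		nf.SetAnnotations(anns)
		return false, nil
	}
}
```

The UID comparison covers the race where a new worker pod re-adopts the NodeFeature between rounds.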
Hey @marquiz! If we set the NF ownerReference to the nfd-worker DS, then if the DS is deleted (e.g. helm uninstall of NFD), k8s will garbage-collect the NF objects, which is what we want. If the user updates the daemonset (e.g. changes the pod template node affinity via helm upgrade), the worker DS will be updated and pods will get re-created, potentially running on different nodes. In this case the owner of the NF is still the same DS. Is this not the case?
This would rely on having two owner references, both the DS and the Pod.
a) If the DS is deleted, then both the DS and the Pod are gone -> the NF will be GC'd by Kubernetes.
b) If only the Pod is gone but the DS remains -> the NF will not be GC'd by Kubernetes. However, nfd-gc could detect this situation and GC the NF.
Makes sense?
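As an illustration, a Go sketch of what such dual owner references could look like (a hypothetical helper, not current NFD code; `ds` and `pod` would come from an informer or the downward API):

```go
package worker

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/utils/ptr"
)

// ownerRefsFor builds both owner references: the DaemonSet ref keeps the
// NodeFeature alive across pod restarts (k8s GC only deletes a dependent
// once ALL of its owners are gone), while the Pod ref lets nfd-gc detect
// orphans. At most one reference may have Controller=true.
func ownerRefsFor(ds *appsv1.DaemonSet, pod *corev1.Pod) []metav1.OwnerReference {
	return []metav1.OwnerReference{
		{
			APIVersion: "apps/v1",
			Kind:       "DaemonSet",
			Name:       ds.Name,
			UID:        ds.UID,
		},
		{
			APIVersion:         "v1",
			Kind:               "Pod",
			Name:               pod.Name,
			UID:                pod.UID,
			Controller:         ptr.To(true),
			BlockOwnerDeletion: ptr.To(true),
		},
	}
}
```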
Yes it does, thx.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
What happened:
NFD will remove any node labels associated with the NodeFeature of a specific node if the nfd-worker pod on that node gets deleted.
After the pod is deleted it gets re-created, which recreates the NodeFeature CR for the node, and the labels come back (same goes for annotations and extendedResources).
Workloads that rely on such labels in their nodeSelector/affinity will get disrupted, as they will be removed and re-scheduled.
This happens because nfd-worker creates the NodeFeature CR with an OwnerReference pointing to its own pod [1].

[1] node-feature-discovery/pkg/nfd-worker/nfd-worker.go, line 716 at commit 0418e7d
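For context, the cited code boils down to something like this (a paraphrased sketch, not the literal line; the pod name/UID env vars are an assumption, typically injected via the downward API):

```go
package worker

import (
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
)

// podOwnerRef paraphrases the current behaviour: the NodeFeature is owned
// by the worker's own (ephemeral) pod, so deleting the pod cascades to the
// NodeFeature and, in turn, to the node labels derived from it.
func podOwnerRef() metav1.OwnerReference {
	return metav1.OwnerReference{
		APIVersion: "v1",
		Kind:       "Pod",
		Name:       os.Getenv("POD_NAME"), // assumed downward-API env var
		UID:        types.UID(os.Getenv("POD_UID")),
	}
}
```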
What you expected to happen:
At the end I'd expect the labels to not get removed if the nfd-worker pod gets restarted.
Going further into the details, I'd expect the NodeFeature CR to not be deleted when the pod is deleted.
This can be achieved by setting the owner reference to the nfd-worker daemonset, which is not as ephemeral as the pods it creates.
In addition, to deal with redeploying the daemonset with different selectors/affinity/tolerations, the gc component can be extended to clean up NodeFeature objects for nodes that are not intended to run nfd-worker pods (see the sketch below).
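A hedged sketch of how that gc extension could decide which nodes are still intended to run nfd-worker, reusing the scheduling helper that the DaemonSet controller itself relies on (the function name is made up; taints/tolerations are ignored for brevity):

```go
package gc

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	"k8s.io/component-helpers/scheduling/corev1/nodeaffinity"
)

// nodeWantsWorker reports whether the nfd-worker DaemonSet would place a pod
// on the given node, judging by nodeSelector and required node affinity in
// the pod template. NodeFeature objects on nodes where this returns false
// could be garbage-collected after a DS redeploy.
func nodeWantsWorker(ds *appsv1.DaemonSet, node *corev1.Node) (bool, error) {
	// Build a dummy pod from the DS pod template and reuse the scheduler's
	// node-affinity matcher (it also honours spec.nodeSelector).
	dummyPod := &corev1.Pod{Spec: ds.Spec.Template.Spec}
	return nodeaffinity.GetRequiredNodeAffinity(dummyPod).Match(node)
}
```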
How to reproduce it (as minimally and precisely as possible):
Delete an nfd-worker pod and watch the node's feature labels get removed, then re-added once the DaemonSet re-creates the pod.
Anything else we need to know?:
Environment:
- Kubernetes version (use `kubectl version`): 1.30 (but will reproduce in any)
- OS (e.g. `cat /etc/os-release`): N/A
- Kernel (e.g. `uname -a`): N/A