Fix resuming JobSet after restoring PodTemplate (by Jobs update) #640
Conversation
Force-pushed from 6ec5015 to 3113392.
Force-pushed from 3113392 to 7125907.
Can #625 be closed now in favor of this one?
@@ -51,6 +52,11 @@ const (
 	CoordinatorKey = "jobset.sigs.k8s.io/coordinator"
 )

+var (
+	// the legacy names are no longer defined in the api, only in k/k pkg/apis/batch
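For orientation, here is a minimal, hypothetical sketch of what such a `var` block might contain, assuming it only needs to spell out the legacy, un-prefixed label names now that they are no longer exported by the batch API; the package and variable names are illustrative, not the PR's actual code.

```go
// Hypothetical sketch, not the actual diff: the legacy, un-prefixed Job labels
// are no longer exported by k8s.io/api/batch/v1, so their string values have
// to be duplicated locally if the controller wants to handle them.
package constants

var (
	legacyControllerUidLabel = "controller-uid"
	legacyJobNameLabel       = "job-name"
)
```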
Can we not introduce the legacy labels now? I think we can remove the original ones now that 1.27 is out of support.
I think it is better to copy them, because the Job controller still sets them.
Here is an example Pod template created on a 1.30.2 cluster:
batch.kubernetes.io/controller-uid: c46c3b0d-76d8-427b-bda6-a9cb06c77cfd
batch.kubernetes.io/job-name: sample-job-a-lv6wd
controller-uid: c46c3b0d-76d8-427b-bda6-a9cb06c77cfd
job-name: sample-job-a-lv6wd
If they are not present on the original Job template (null), I wouldn't copy them to the newJob template.
Moreover, we still expect them in the Job validation: https://github.com/kubernetes/kubernetes/blob/60c4c2b2521fb454ce69dee737e3eb91a25e0535/pkg/apis/batch/validation/validation.go#L139-L146
I still think there is no reason to use them in JobSet, even if they are used in the Job API. We won't ever drop them in the kube API, but they are really for public consumption.
Note that I don't insert or generate them; I just copy them over from the old template to the new one if they are already present on the old.
I'm also not sure this will pass validation; let me test.
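To make the "copy only if present" behaviour concrete, here is a minimal sketch, assuming only the four label keys from the example pod template above are involved; the function and variable names (`restorePodTemplateLabels`, `jobPodTemplateLabels`) are made up for illustration and are not the PR's actual code.

```go
package main

import (
	"fmt"

	batchv1 "k8s.io/api/batch/v1"
)

// jobPodTemplateLabels are the label keys from the example pod template above:
// the batch.kubernetes.io/ prefixed forms plus the legacy un-prefixed ones.
var jobPodTemplateLabels = []string{
	"batch.kubernetes.io/controller-uid",
	"batch.kubernetes.io/job-name",
	"controller-uid",
	"job-name",
}

// restorePodTemplateLabels copies the listed labels from the old Job's pod
// template to the new one, but only when the old template actually carries
// them; nothing is generated when a label is absent.
func restorePodTemplateLabels(oldJob, newJob *batchv1.Job) {
	oldLabels := oldJob.Spec.Template.Labels
	if oldLabels == nil {
		return
	}
	for _, key := range jobPodTemplateLabels {
		value, ok := oldLabels[key]
		if !ok {
			continue
		}
		if newJob.Spec.Template.Labels == nil {
			newJob.Spec.Template.Labels = map[string]string{}
		}
		newJob.Spec.Template.Labels[key] = value
	}
}

func main() {
	oldJob := &batchv1.Job{}
	oldJob.Spec.Template.Labels = map[string]string{
		"batch.kubernetes.io/job-name": "sample-job-a-lv6wd",
		"job-name":                     "sample-job-a-lv6wd",
	}
	newJob := &batchv1.Job{}
	restorePodTemplateLabels(oldJob, newJob)
	// Only the two labels present on the old template are copied.
	fmt.Println(newJob.Spec.Template.Labels)
}
```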
> We should not need to expose the details of those labels and copy all labels.
For now I don't see another way of fixing the bug. Either:
1. copy over the Job labels, or
2. delete the Jobs and recreate them, letting the API server default all the fields and labels.

This is partly why I prefer (2.), and why I implemented it initially in #625. However, I understand that updating the Jobs might be preferred, as deleting and recreating is a bigger change.
My point is that 1 could be done just by copying ALL labels from the previous job.
> So could we copy all the labels from the old Job without looping over the special managedByJob ones?
Consider the following transitions by Kueue: resume (RF1) -> suspend -> resume (RF2). It is possible that resuming on RF2 does not add the same labels as RF1 did. If we copied all labels, we would always carry over the stale ones from RF1.
Indeed, from a practical standpoint this is mostly a problem for nodeSelectors, where Kueue may want to choose a new set of nodeSelectors for RF2. I would be OK with accepting this issue for labels, as it does not cause much practical trouble for Kueue currently, but it would be a "known issue" and may hit us one day.
> My point is that 1 could be done just by copying ALL labels from the previous job.
Seems like a race condition :). Consider the scenario I describe here: #640 (comment). The code would then remain problematic if RF1 adds a label that is not expected in RF2, because with this approach we would never get rid of it.
It will remain problematic in some corner cases and hard to debug. It will also be surprising that we restore nodeSelectors but not labels.
I synced with @kannon92 on Slack and there is a simpler solution, which I implement in #644. It does not fully cover the case of "restoring" a PodTemplate, but it should cover most cases, where the admission to RF2 overrides the previous values.
That is needed anyway as the first step, and it is part of this PR. We can return to these bits if it turns out they are really needed.
LGTM. Small nits but code looks good. /assign @danielvegamyhre @ahg-g
Force-pushed from 7125907 to 80333ad.
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: mimowo. The full list of commands accepted by this bot can be found here. Needs approval from an approver in each of these files.
Force-pushed from 80333ad to 4b96257.
@mimowo: Closed this PR.
What type of PR is this?
/kind bug
What this PR does / why we need it:
Which issue(s) this PR fixes:
Fixes #624
Special notes for your reviewer:
This approach relies on updating the Jobs on resume, rather than deleting the Jobs on suspend.
An alternative implementation was done in #625.
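As a rough illustration of the update-on-resume idea (not the PR's actual reconciliation code), the sketch below unsuspends an existing Job and restores its pod template nodeSelector in a single update, assuming a controller-runtime client; Kubernetes allows updating the scheduling directives of a suspended Job that has not yet started. The function name and the choice of fields are illustrative only.

```go
// Rough sketch of update-on-resume: instead of deleting Jobs on suspend and
// recreating them later, the existing Job object is mutated in place when the
// JobSet is resumed. Only the suspend flag and the pod template's nodeSelector
// are touched here; the real controller reconciles more fields.
package controller

import (
	"context"

	batchv1 "k8s.io/api/batch/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

func resumeJob(ctx context.Context, c client.Client, job *batchv1.Job, nodeSelector map[string]string) error {
	suspend := false
	job.Spec.Suspend = &suspend
	// Restore or override the pod template's nodeSelector before unsuspending.
	job.Spec.Template.Spec.NodeSelector = nodeSelector
	return c.Update(ctx, job)
}
```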
Does this PR introduce a user-facing change?