
Fix resuming JobSet after restoring PodTemplate (by Jobs update) #640

Closed
wants to merge 1 commit

Conversation


@mimowo commented Aug 6, 2024:

What type of PR is this?

/kind bug

What this PR does / why we need it:

Which issue(s) this PR fixes:

Fixes #624

Special notes for your reviewer:

This approach relies on updating the Jobs on resume, rather than deleting the Jobs on suspend.
An alternative implementation was done in #625.

Does this PR introduce a user-facing change?

Allow restoring the PodTemplate on suspend, and fix resuming a JobSet after restoring
the PodTemplate. This fixes the integration with Kueue to support evicting workloads
and re-admitting them in another ResourceFlavor (with different nodeSelectors, or without nodeSelectors).
This is achieved by updating the Jobs when resuming the JobSet.
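
As a rough illustration of the approach described in the release note, here is a minimal Go sketch of the resume-by-update flow (not this PR's actual code; the helper name and the exact set of mutated fields are assumptions). It relies on Kubernetes allowing mutation of node scheduling directives on a Job while it is suspended:

    import (
        "context"

        batchv1 "k8s.io/api/batch/v1"
        corev1 "k8s.io/api/core/v1"
        "k8s.io/utils/ptr"
        "sigs.k8s.io/controller-runtime/pkg/client"
    )

    // resumeJobWithTemplate is a hypothetical helper: while the child Job is
    // still suspended, push the JobSet's current pod template values down to
    // it, then unsuspend it so the Job controller creates pods with the
    // updated template.
    func resumeJobWithTemplate(ctx context.Context, c client.Client, job *batchv1.Job, tmpl corev1.PodTemplateSpec) error {
        job.Spec.Template.Spec.NodeSelector = tmpl.Spec.NodeSelector
        job.Spec.Template.Spec.Tolerations = tmpl.Spec.Tolerations
        job.Spec.Suspend = ptr.To(false)
        return c.Update(ctx, job)
    }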

@k8s-ci-robot added the labels do-not-merge/work-in-progress (indicates that a PR should not merge because it is a work in progress), kind/bug (categorizes issue or PR as related to a bug), and cncf-cla: yes (indicates the PR's author has signed the CNCF CLA) on Aug 6, 2024.
@k8s-ci-robot added the label size/L (denotes a PR that changes 100-499 lines, ignoring generated files) on Aug 6, 2024.
netlify bot commented Aug 6, 2024:

Deploy Preview for kubernetes-sigs-jobset canceled.

Latest commit: 4b96257
Latest deploy log: https://app.netlify.com/sites/kubernetes-sigs-jobset/deploys/66b64518c250bd0008c80fd5

@mimowo changed the title from "WIP: Fix resuming JobSet after restoring PodTemplate (by Jobs update)" to "Fix resuming JobSet after restoring PodTemplate (by Jobs update)" on Aug 6, 2024.
@k8s-ci-robot removed the do-not-merge/work-in-progress label on Aug 6, 2024.
@mimowo commented Aug 6, 2024:

/cc @danielvegamyhre @kannon92

@danielvegamyhre commented:

Can #625 be closed now in favor of this one?

@danielvegamyhre danielvegamyhre self-assigned this Aug 6, 2024
The review discussion below is on this diff hunk:

    @@ -51,6 +52,11 @@ const (
        CoordinatorKey = "jobset.sigs.k8s.io/coordinator"
    )

    var (
        // the legacy names are no longer defined in the api, only in k/k pkg/apis/batch
A reviewer (Contributor) commented:

Can we not introduce the legacy labels now? I think we can remove the original ones now that 1.27 is out of support.

@mimowo (author) commented:

I think it is better to copy them, because the Job controller still sets them.

Here are example labels from a Pod template created on a 1.30.2 cluster:

        batch.kubernetes.io/controller-uid: c46c3b0d-76d8-427b-bda6-a9cb06c77cfd
        batch.kubernetes.io/job-name: sample-job-a-lv6wd
        controller-uid: c46c3b0d-76d8-427b-bda6-a9cb06c77cfd
        job-name: sample-job-a-lv6wd

If they are not present on the original Job template (null), I wouldn't copy them to the new Job template.
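
For concreteness, a minimal Go sketch of the copy-if-present idea being discussed (hypothetical helper and variable names; not this PR's actual code):

    import corev1 "k8s.io/api/core/v1"

    // The labels the Job controller manages on pod templates, including the
    // legacy unprefixed names shown above.
    var jobControllerLabels = []string{
        "batch.kubernetes.io/controller-uid",
        "batch.kubernetes.io/job-name",
        "controller-uid", // legacy
        "job-name",       // legacy
    }

    // copyJobControllerLabels copies each managed label from the old template
    // to the new one, but only if the old template already has it.
    func copyJobControllerLabels(oldTmpl, newTmpl *corev1.PodTemplateSpec) {
        for _, key := range jobControllerLabels {
            if value, ok := oldTmpl.Labels[key]; ok {
                if newTmpl.Labels == nil {
                    newTmpl.Labels = map[string]string{}
                }
                newTmpl.Labels[key] = value
            }
        }
    }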

A reviewer (Contributor) commented:

I still think there is no reason to use them in JobSet even if they are used in the Job API. We won't ever drop them in the kube API, but they are really for public consumption.

@mimowo (author) commented Aug 9, 2024:

Note that I don't insert or generate them; I just copy them over from the old template to the new one, if they are already present on the old.

I'm also not sure this will pass validation, let me test.

@mimowo (author) commented Aug 9, 2024:

We should not need to expose the details of those labels, nor to copy all labels.

For now I don't see another way of fixing the bug; either:

  1. copy over the Job labels, or
  2. delete the Jobs and recreate them, letting the API server default all the fields and labels (sketched below).

This is partly why I prefer (2), which I implemented initially in #625. However, I understand that updating the Jobs might be preferred, as deleting and recreating is a bigger change.
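
A minimal Go sketch of option (2), the #625-style approach (hypothetical helper; the delete-propagation policy and error handling are assumptions):

    import (
        "context"

        batchv1 "k8s.io/api/batch/v1"
        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "sigs.k8s.io/controller-runtime/pkg/client"
    )

    // deleteChildJobs deletes the child Jobs on suspend so that, on resume,
    // the reconciler recreates them from the JobSet template and the API
    // server re-defaults all fields and labels.
    func deleteChildJobs(ctx context.Context, c client.Client, jobs []*batchv1.Job) error {
        for _, job := range jobs {
            err := c.Delete(ctx, job, client.PropagationPolicy(metav1.DeletePropagationBackground))
            if err != nil && !apierrors.IsNotFound(err) {
                return err
            }
        }
        return nil
    }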

A reviewer (Contributor) commented:

My point is that (1) could be done just by copying ALL labels from the previous Job.

@mimowo (author) commented Aug 9, 2024:

So could we copy all the labels from the old Job, without looping over the special managed-by-Job ones?

Consider the following transitions by Kueue: resume (RF1) -> suspend -> resume (RF2).

It is possible that resuming on RF2 does not add the same labels as RF1 did. If we copied all labels, then we would always carry over the stale ones from RF1.

Indeed, from a practical standpoint, this is mostly a problem for nodeSelectors, where Kueue may want to choose a new set of nodeSelectors for RF2. I would be OK with accepting this issue for labels, as it does not have much practical impact on Kueue currently, but it would be a "known issue" and may hit us one day.
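
A toy Go example of the trailing-label scenario above (the label key is purely hypothetical):

    package main

    import "fmt"

    func main() {
        // Labels on the Job after resume in RF1; "example.com/flavor" is an
        // illustrative, hypothetical key.
        rf1Labels := map[string]string{"example.com/flavor": "rf1"}

        // "Copy ALL labels" on the next resume: the stale RF1 value survives
        // into RF2 because nothing ever removes it.
        rf2Labels := map[string]string{}
        for k, v := range rf1Labels {
            rf2Labels[k] = v
        }
        fmt.Println(rf2Labels) // map[example.com/flavor:rf1]
    }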

@mimowo (author) commented:

> My point is that (1) could be done just by copying ALL labels from the previous Job.

Seems like a race condition :). Consider the scenario I describe in #640 (comment): the code will remain problematic if RF1 adds a label that is not expected in RF2, because with this approach we would never get rid of it.

It would remain problematic in some corner cases and hard to debug. It would also be surprising that we restore nodeSelectors, but not labels.

@mimowo (author) commented:

I synced with @kannon92 on Slack, and there is a simpler solution, which I implement in #644. It does not fully cover the case of "restoring" a PodTemplate, but it should cover most cases, where the admission to RF2 overrides the previous values.

That part is needed anyway as the first step, and it is part of this PR. We can return to these bits if they are really shown to be needed.

Review thread on test/e2e/e2e_test.go (outdated, resolved).
@kannon92 commented Aug 9, 2024:

LGTM. Small nits but code looks good.

/assign @danielvegamyhre @ahg-g

@k8s-ci-robot commented:

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: mimowo
Once this PR has been reviewed and has the lgtm label, please ask for approval from ahg-g. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@mimowo commented Aug 9, 2024:

/close
After syncing with @kannon92, we decided it is better to start with something simple, which already covers most use cases, so I opened another PR: #644

@k8s-ci-robot commented:

@mimowo: Closed this PR.

In response to this:

> /close
> After syncing with @kannon92, we decided it is better to start with something simple, which already covers most use cases, so I opened another PR: #644

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

Labels: cncf-cla: yes, kind/bug, size/L

Successfully merging this pull request may close these issues:

Allow to mutate PodTemplate when suspending a JobSet and support resuming such JobSet (#624)

5 participants