KEP-5307 Initial KEP for container restart policy #5308
Conversation
yuanwang04 commented on May 16, 2025
- One-line PR description: Initial KEP for container restart policy
- Issue link: Container restart rules to customize the pod restart policy #5307
- Other comments: Discussion link https://docs.google.com/document/d/13fQu343OBEM2ICHLXfWHrApmzskd4nY3xWI27EbMJyI/edit?tab=t.0
we may need to implement as described here:
https://github.com/kubernetes/enhancements/issues/3329#issuecomment-1571643421

```
restartPolicy: Never
```
@dchen1107 I remember you had a concern with having a `Never` pod whose container's restart count keeps increasing. How strong is this concern? Strong enough to introduce `restartPolicy: Custom`?
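A minimal sketch of the scenario in question, assuming the rule-based field names floated later in this thread (`restartPolicy: Custom`, `restartRules`, and their sub-fields are illustrative, not a settled API): the pod declares `restartPolicy: Never`, yet one container keeps restarting in place, so its `restartCount` grows while the pod-level policy still reads `Never`.

```yaml
# Hypothetical example; restartPolicy: Custom and restartRules are illustrative names.
apiVersion: v1
kind: Pod
metadata:
  name: never-pod-restarting-container
spec:
  restartPolicy: Never            # pod level: containers are not restarted...
  containers:
  - name: trainer
    image: registry.k8s.io/e2e-test-images/busybox:1.29
    restartPolicy: Custom         # ...except this one, which follows its own rules
    restartRules:
    - action: Restart             # restart in place on a retriable exit code,
      exitCodes:                  # so status.containerStatuses[].restartCount keeps
        operator: In              # increasing even though the pod-level policy is Never
        values: [42]
```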
- name: myapp1
  # the default behavior is inherited from the Pod’s restartPolicy
  restartPolicy: Custom
  # pod-level API for specifying container restart rules
This API is at the container level it seems.
Yes, just mentioning it here since there is some discussion around whether we need to introduce another pod-level restart policy: #5308 (comment)
A big concern: once we introduce a pod-level restart policy, how would it interact with the Job failure policy? Maybe not today.
The container restart will still follow the exponential backoff to avoid excessive resource consumption due to restarts.

## Design Details
How does this interact with PodFailurePolicy?
Added discussion around Job's PodFailurePolicy. Basically, this shouldn't affect how Job handles pod failures. Job only checks container exit codes (for PodFailurePolicy) after the Pod has finished (and restartPolicy=Never). However, if the container is restarted by the proposed container restart policy, the pod will still be Running, and the Job controller will not kick in.
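To make that interaction concrete, here is a hedged sketch; the container-level `restartPolicy: Custom` / `restartRules` fields follow the shape discussed in this PR and are not final. The Job's `podFailurePolicy` is only evaluated once a pod reaches the `Failed` phase, while the container-level rule keeps the pod `Running` through retriable exits:

```yaml
# Illustrative only; the restartRules shape mirrors this PR's proposal and may change.
apiVersion: batch/v1
kind: Job
metadata:
  name: training-job
spec:
  backoffLimit: 3
  podFailurePolicy:
    rules:
    - action: FailJob             # evaluated only for pods in the Failed phase
      onExitCodes:
        operator: In
        values: [1]
  template:
    spec:
      restartPolicy: Never        # podFailurePolicy requires restartPolicy: Never
      containers:
      - name: trainer
        image: registry.k8s.io/e2e-test-images/busybox:1.29
        restartPolicy: Custom     # proposed container-level override
        restartRules:
        - action: Restart         # retriable exit codes are restarted in place,
          exitCodes:              # so the pod stays Running and the Job controller
            operator: In          # never evaluates podFailurePolicy for it
            values: [42]
```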
containers:
- name: myapp1
  # the default behavior is inherited from the Pod’s restartPolicy
  restartPolicy: Custom
I would suggest considering alternative names, maybe:
- "RuleBased"
- "Conditional"

I'm leaning toward "RuleBased" to match the "restartRules" name.
  # the default behavior is inherited from the Pod’s restartPolicy
  restartPolicy: Custom
  # pod-level API for specifying container restart rules
  restartRules:
I would suggest proposing the full API, similarly as here, and providing this YAML as an example.
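For example, a fuller (still hypothetical) spelling on a container might look like the sketch below; every field name (`restartRules`, `action`, `exitCodes`, `operator`, `values`) is an assumption based on the discussion in this thread, not the final API:

```yaml
# Hypothetical full API shape; all field names below are illustrative, not final.
apiVersion: v1
kind: Pod
metadata:
  name: restart-rules-example
spec:
  restartPolicy: Never
  containers:
  - name: myapp1
    image: registry.k8s.io/e2e-test-images/busybox:1.29
    restartPolicy: Custom         # or "RuleBased" / "Conditional", per the naming discussion
    restartRules:
    - action: Restart             # restart in place when the exit code matches
      exitCodes:
        operator: In
        values: [42, 43]
    # when no rule matches, behavior falls back to the pod-level restartPolicy
```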
- Allow the Pod with the restartPolicy=Custom to keep restarting a container
Suggested change:
- Introduce an API which allows a container to keep restarting

I guess at this point we don't need to provide the shape of the API.
#### Story 1

The Pod with two containers - one is the main container and one is the sidecar
I would suggest orienting the "Story" from the perspective of Kubernetes users. For example: as an ML researcher, I'm creating long-running AI/ML workloads. Pod failures in such workloads are unavoidable for various reasons, but in many cases they are recoverable. I would like to avoid re-scheduling the workload, as this consumes a significant amount of time, and only restart the failed container "in-place".
Feel free to rephrase as you see fit.
may have containers restarted and have the restart count higher than 0. This may also affect how Job `podFailurePolicy` interacts with pod failures.
Actually, I think this approach is fully transparent to the Job controller, because the Job controller only matches Pods in a terminal phase. It is mentioned here:
enhancements/keps/sig-apps/3329-retriable-and-non-retriable-failures/README.md, lines 902 to 903 in fc234c1:
When matching a failed pod against Job pod failure policy, it is important that the pod is actually in the terminal phase (`Failed`), to ensure their state is
This works with Job `podFailurePolicy` without any changes on Job API. Currently, Job only checks for `podFailurePolicy` after the Pod has finished running. Kubelet restarting the container of the Pod will not change the Pod's status. This is the ideal improvement where the Job configured `podFailurePolicy`
I suggest avoiding phrases like "ideal improvement"; let's leave it to the reviewers :)
The proposal is to implement a simple API with the very limited set of allowed values to [Container](https://github.com/kubernetes/kubernetes/blob/master/pkg/apis/core/types.go#L2528) under k8s.io/apis/core. The shape of API is informed by some future improvements
Suggested change:
The proposal is to extend the Pod's [container API](https://github.com/kubernetes/kubernetes/blob/master/pkg/apis/core/types.go#L2528) to allow restarting containers based on the container's end state, e.g. exit code.

I would suggest avoiding adjectives like "simple API" or "very limited set". Let's leave it to the readers / reviewers to assess.
we may need to implement as described here:
https://github.com/kubernetes/enhancements/issues/3329#issuecomment-1571643421
Actually, this reference is slightly different from the current proposal. One difference is that the reference assumes "OnFailure". Instead of Design Details, I would propose moving it to the Notes and Caveats section.
# The following PRR answers are required at alpha release
# List the feature gate name and the components for which it must be enabled
feature-gates:
  - name: MyFeature
ContainerRestartPolicy?
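If that name is adopted, the kep.yaml stanza would presumably look something like the following; the gate name and the component list are assumptions until the KEP settles them:

```yaml
# Assumed values; the final gate name and components are up to the KEP authors.
feature-gates:
  - name: ContainerRestartPolicy
    components:
      - kube-apiserver
      - kubelet
```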
# The following PRR answers are required at beta release
metrics:
  - my_feature_metric
Cleanup. This is alpha, so IIUC there's no need for a metric yet.
@yuanwang04, thank you for the work; I like this proposal. AFAIK this approach is fully compatible with the Job's podFailurePolicy (at least if I'm not missing something), because when the Pod's restartPolicy is Never, the Job's podFailurePolicy only analyzes pods which reach the "Failed" phase. Here, the pods avoid reaching the Failed phase. Once they do reach it, they will be matched against podFailurePolicy, which may decide to recreate the entire pod.