Fix k8s_drain runs into timeout with pods from stateful sets. #793
Conversation
Build succeeded. ✔️ ansible-galaxy-importer SUCCESS in 4m 29s
Build succeeded. ✔️ ansible-galaxy-importer SUCCESS in 5m 00s
Build succeeded (gate pipeline). ✔️ ansible-galaxy-importer SUCCESS in 4m 14s
Merged commit fca0dc0 into ansible-collections:main
Backport to stable-3: 💚 backport PR created ✅ Backported as #807 🤖 @patchback
Backport to stable-5: 💚 backport PR created ✅ Backported as #808 🤖 @patchback
SUMMARY
Fixes #792.
The function `wait_for_pod_deletion` in k8s_drain never checks on which node a pod is actually running:

```python
try:
    response = self._api_instance.read_namespaced_pod(
        namespace=pod[0], name=pod[1]
    )
    if not response:
        pod = None
    time.sleep(wait_sleep)
```

This means that if a pod is successfully evicted and restarted with the same name on a new node, `k8s_drain` does not notice and assumes the original pod is still running. This is the case for pods that are part of a StatefulSet.
ISSUE TYPE
Bugfix Pull Request
COMPONENT NAME
k8s_drain
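A minimal sketch of the kind of check this fix introduces, independent of the module's internals. It assumes the Kubernetes Python client convention that a pod's scheduled node is exposed as `response.spec.node_name` and that `read_namespaced_pod` raises an exception once the pod is deleted; the function name, its return value, and the fake API used below are illustrative, not the module's actual code:

```python
import time


def wait_for_pods_to_leave_node(api_instance, pods, node, timeout, wait_sleep):
    """Return True once every (namespace, name) pair in *pods* is gone
    from *node*, or False if *timeout* seconds elapse first.

    The key point of the fix: a pod that reappears under the same name
    on a *different* node (typical for StatefulSet pods) counts as
    drained, instead of being mistaken for the original pod.
    """
    deadline = time.time() + timeout
    for namespace, name in pods:
        while True:
            try:
                response = api_instance.read_namespaced_pod(
                    namespace=namespace, name=name
                )
            except Exception:  # the real client raises ApiException (404) when deleted
                break
            # Deleted, or rescheduled onto another node: both mean "drained".
            if response is None or response.spec.node_name != node:
                break
            if time.time() >= deadline:
                return False
            time.sleep(wait_sleep)
    return True
```

With this check, a StatefulSet pod such as `web-0` that is evicted from the drained node and immediately recreated with the same name elsewhere no longer keeps the loop waiting until the timeout.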