[Backport release-1.31] Applier manager improvements #5171

Open
wants to merge 7 commits into base: release-1.31
Conversation

@k0s-bot commented Oct 29, 2024

Automated backport to release-1.31, triggered by a label in #5062.

twz123 and others added 7 commits October 29, 2024 21:28
The map is only ever used in the loop to create and remove stacks, so it
doesn't need to be stored in the struct. This ensures that there can't
be any racy concurrent accesses to it.

Signed-off-by: Tom Wieczorek <[email protected]>
(cherry picked from commit ba547ed)
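
As an illustration of the change described above, here is a minimal Go sketch of keeping the stack map local to the loop goroutine instead of storing it on the struct. The `manager`, `stack`, and channel names are assumptions, not the actual k0s code:

```go
package applier

import "context"

// Illustrative types only; the real k0s applier manager differs.
type stack struct{ name string }

type manager struct {
	// Note: no stacks map field here. The map below lives only inside the
	// loop goroutine, so concurrent access is impossible by construction.
}

func (m *manager) runLoop(ctx context.Context, added, removed <-chan string) {
	stacks := make(map[string]*stack) // local to this goroutine

	for {
		select {
		case <-ctx.Done():
			return
		case name := <-added:
			if _, ok := stacks[name]; !ok {
				stacks[name] = &stack{name: name}
			}
		case name := <-removed:
			delete(stacks, name)
		}
	}
}
```
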
The only reason these channels get closed is if the watcher itself gets
closed. This happens only when the method returns, which in turn only
happens when the context is done. In this case, the loop has already
exited without a select on a potentially closed channel. So the branches
that checked for closed channels were effectively unreachable at runtime.

Signed-off-by: Tom Wieczorek <[email protected]>
(cherry picked from commit db5e0d2)
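
A hedged sketch of the situation described in this commit, with assumed names and fsnotify standing in as the event source: the watcher's channels are only closed by `Close()`, which runs after the loop has returned, so `value, ok := <-ch` branches guarding against closed channels can never fire while the loop is still running.

```go
package applier

import (
	"context"
	"log"

	"github.com/fsnotify/fsnotify"
)

// watch drains the watcher's channels until the context is done. Events and
// Errors are only closed by Close(), which runs after this function returns,
// so there is no need for "value, ok := <-ch" branches that test for closure.
func watch(ctx context.Context, w *fsnotify.Watcher) {
	defer w.Close() // the channels get closed here, after the loop has exited

	for {
		select {
		case <-ctx.Done():
			return // the only way out of the loop
		case ev := <-w.Events:
			log.Printf("event: %v", ev)
		case err := <-w.Errors:
			log.Printf("watch error: %v", err)
		}
	}
}
```
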
Rename cancelWatcher to stop and wait until the newly added stopped
channel is closed. Also, add a stopped channel to each stack to do the
same for each stack-specific goroutine.

Signed-off-by: Tom Wieczorek <[email protected]>
(cherry picked from commit 402c728)
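
A minimal sketch of the stop/stopped pattern described above, with illustrative names rather than the real k0s code: `Stop` cancels the context and then blocks until the goroutine closes its `stopped` channel.

```go
package applier

import "context"

// watcher pairs a cancel function ("stop") with a channel that is closed
// once the watcher goroutine has fully exited ("stopped").
type watcher struct {
	stop    context.CancelFunc
	stopped chan struct{}
}

// startWatcher runs fn in its own goroutine and returns a handle that can
// stop it and wait for it to finish.
func startWatcher(fn func(ctx context.Context)) *watcher {
	ctx, cancel := context.WithCancel(context.Background())
	w := &watcher{stop: cancel, stopped: make(chan struct{})}

	go func() {
		defer close(w.stopped) // signals that the goroutine is done
		fn(ctx)
	}()

	return w
}

// Stop cancels the watcher's context and blocks until its goroutine has
// exited, so callers know no work is still in flight afterwards.
func (w *watcher) Stop() {
	w.stop()
	<-w.stopped
}
```
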
Cancel the contexts with a cause. Add this cause to the log statements
when exiting loops. Rename bundlePath to bundleDir to reflect the fact
that it is a directory, not a file.

Signed-off-by: Tom Wieczorek <[email protected]>
(cherry picked from commit edb105c)
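
A small sketch of cancelling with a cause and logging it on loop exit, using Go's `context.WithCancelCause` and `context.Cause`; the surrounding names are assumptions, not the actual k0s code.

```go
package applier

import (
	"context"
	"errors"
	"log"
)

var errManagerStopping = errors.New("applier manager is stopping")

// stoppableContext derives a context that is cancelled with a descriptive
// cause instead of a bare "context canceled".
func stoppableContext(parent context.Context) (context.Context, func()) {
	ctx, cancel := context.WithCancelCause(parent)
	return ctx, func() { cancel(errManagerStopping) }
}

// runLoop exits when its context is done and logs the cancellation cause.
func runLoop(ctx context.Context, work <-chan func()) {
	for {
		select {
		case <-ctx.Done():
			log.Printf("exiting loop: %v", context.Cause(ctx))
			return
		case fn := <-work:
			fn()
		}
	}
}
```
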
Exit the loop on error and restart it after a one-minute delay to allow
it to recover in a new run. Also replace the bespoke retry loop for
stacks with the Kubernetes client's wait package.

Signed-off-by: Tom Wieczorek <[email protected]>
(cherry picked from commit 404c6cf)
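
One possible shape of this change, sketched with the Kubernetes `k8s.io/apimachinery/pkg/util/wait` package; `wait.UntilWithContext` and `wait.PollUntilContextCancel` are assumptions about which helpers fit, not necessarily the ones used in the commit.

```go
package applier

import (
	"context"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

const restartDelay = time.Minute

// runForever restarts runOnce whenever it returns, waiting restartDelay
// between attempts so a persistent failure cannot turn into a hot loop.
func runForever(ctx context.Context, runOnce func(context.Context) error) {
	wait.UntilWithContext(ctx, func(ctx context.Context) {
		_ = runOnce(ctx) // an error ends this attempt; the next starts after restartDelay
	}, restartDelay)
}

// applyStack retries the apply via the wait package until it succeeds or the
// context is cancelled, instead of using a hand-rolled retry loop.
func applyStack(ctx context.Context, apply func(context.Context) error) error {
	return wait.PollUntilContextCancel(ctx, 5*time.Second, true,
		func(ctx context.Context) (bool, error) {
			return apply(ctx) == nil, nil // retry on failure, stop on success
		})
}
```
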
Seems to be a remnant from the past.

Signed-off-by: Tom Wieczorek <[email protected]>
(cherry picked from commit c2beea7)
Signed-off-by: Ethan Mosbaugh <[email protected]>
(cherry picked from commit 11b3197)