feat: Eviction controller #942
Conversation
(Branch updated: 0d52af4 to e910804, then c1e80cd to 0b76225.)
Added some comments, PTAL 🙏
Arvind and I had a discussion about some of the finer points; just to scribble down some of the thoughts/points we went through here:
(Resolved review threads, now outdated, on pkg/controllers/clusterresourceplacementeviction/controller_intergration_test.go.)
Just a few minor comments; LGTM ;)
```go
klog.V(2).InfoS("Found more than one cluster resource binding for cluster targeted by eviction",
	"clusterResourcePlacementEviction", evictionName, "clusterResourcePlacement", eviction.Spec.PlacementName)
markEvictionInvalid(&eviction, evictionInvalidMultipleCRBMessage)
```
What happens if one binding is deleting while the other is bound/scheduled?
Marking the eviction as invalid for now; the user can retry once the deleting binding is removed.
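To make the behavior discussed in this thread easier to follow, here is a minimal, self-contained sketch of that validation step. It is not the PR's actual code: the stand-in types, field names, and the validateBindings helper are assumptions for illustration, with only markEvictionInvalid and evictionInvalidMultipleCRBMessage borrowed from the snippet above.

```go
package main

import "fmt"

// Minimal stand-in types; the real API lives in the fleet placement v1beta1 package.
type binding struct {
	targetCluster string
	deleting      bool
}

type eviction struct {
	placementName string
	clusterName   string
	invalidReason string
}

const evictionInvalidMultipleCRBMessage = "more than one cluster resource binding targets the cluster"

func markEvictionInvalid(e *eviction, msg string) { e.invalidReason = msg }

// validateBindings mirrors the behavior discussed above: zero matching bindings or
// more than one mark the eviction invalid; exactly one binding is actionable.
func validateBindings(e *eviction, bindings []binding) (*binding, bool) {
	var matched []*binding
	for i := range bindings {
		if bindings[i].targetCluster == e.clusterName {
			matched = append(matched, &bindings[i])
		}
	}
	switch len(matched) {
	case 0:
		markEvictionInvalid(e, "no cluster resource binding found for the target cluster")
		return nil, false
	case 1:
		return matched[0], true
	default:
		// e.g. one binding deleting while another is bound/scheduled: invalid for now,
		// the user retries after the deleting binding is gone.
		markEvictionInvalid(e, evictionInvalidMultipleCRBMessage)
		return nil, false
	}
}

func main() {
	e := &eviction{placementName: "test-crp", clusterName: "member-1"}
	bindings := []binding{
		{targetCluster: "member-1", deleting: true}, // being deleted
		{targetCluster: "member-1"},                 // bound/scheduled
	}
	if _, ok := validateBindings(e, bindings); !ok {
		fmt.Println("eviction invalid:", e.invalidReason)
	}
}
```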
(Resolved review threads, now outdated, on pkg/controllers/clusterresourceplacementeviction/controller_test.go.)
I've taken a closer look at the implementation of the K8s disruption controller and given it a try in a real environment:
(I was always under the impression that the K8s PDB is a selector-based mechanism; I even have some vivid recollections of this from previous on-call experiences. That wasn't right at all: though the API certainly suggests so, under the hood it basically assumes single ownership per pod, with support for multiple owners protected by one PDB, and I'm not sure how useful that case would be. Well, you learn something new every day 😑) To summarize the model we could have if we would like to keep things analogous to the K8s model:
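While the summarized model itself is not captured here, the budget arithmetic the K8s disruption controller relies on is simple, and a Fleet-level analogue computed over bindings instead of pods could reuse it. The sketch below is illustrative only; the disruptionsAllowed helper is an assumption, not Fleet or Kubernetes code, and it covers only an absolute minAvailable budget.

```go
package main

import "fmt"

// disruptionsAllowed follows the arithmetic the K8s disruption controller uses for
// absolute minAvailable budgets: allowed = currentHealthy - desiredHealthy (here
// desiredHealthy is simply minAvailable), floored at zero.
func disruptionsAllowed(minAvailable, currentHealthy int) int {
	allowed := currentHealthy - minAvailable
	if allowed < 0 {
		return 0
	}
	return allowed
}

func main() {
	// With 5 healthy placements and minAvailable=4, exactly one eviction may proceed;
	// once only 4 remain healthy, further evictions are blocked.
	fmt.Println(disruptionsAllowed(4, 5)) // 1
	fmt.Println(disruptionsAllowed(4, 4)) // 0
}
```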
Description of your changes
Fixes #
I have:
Run make reviewable to ensure this PR is ready for review.

How has this code been tested
UTs and ITs
Special notes for your reviewer
Changes to the rollout controller need to be present before E2E tests can be added.