Change kube-vip LB IP pool logic #3637
Comments
Untested: giantswarm/cluster-vsphere#266
I made some small changes to the PR and now several IPs can be assigned.
The standard test was green when I ran it last week. However, as of now all providers fail and the reason is related to some
Thanks a lot! I'll test this manually as the E2E tests won't be testing what we're changing here.
I created some random LB services on a test cluster and we see the expected behavior.
OK, that's good. I think for now we can stick with this being an immutable field and keep 3 as the default like you suggested here, as it would be more work than it's worth to make it a dynamic value. WDYT?
Changes were merged.
Current setup
- globalippool in the cluster values: here
- ipadressclaim using this pool, which blocks a single ipaddress: here
- cidrGlobal set to that single IP in the WC: here

I don't get why we do that. It means we have a 10-IP IPAM pool in the MC but the WC has a 1-IP pool 🤷. So the first load balancer will work fine (most often nginx) while subsequent LBs will be stuck on pending.
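To make the mismatch concrete, here is a rough sketch of the two sides as I understand them. Everything below is illustrative: the pool name, addresses, and API version are assumptions, not copied from the actual MC or the cluster-vsphere chart.

```yaml
# MC side (illustrative): a pool with ten usable LB IPs, managed by the
# in-cluster IPAM provider. Name, addresses, and API version are assumptions.
apiVersion: ipam.cluster.x-k8s.io/v1alpha2
kind: GlobalInClusterIPPool
metadata:
  name: lb-ip-pool
spec:
  addresses:
    - 10.1.2.10-10.1.2.19     # 10 IPs reserved for WC load balancers
  prefix: 24
  gateway: 10.1.2.1
---
# WC side (illustrative): the cluster values only carry the single claimed
# address, so kube-vip ends up with a one-IP "pool":
#   cidrGlobal: "10.1.2.10/32"
```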
Proposition
- Collect the actual IP pool from the LB IP pool in the MC.
- Patch the HelmRelease with a range-global instead (see the sketch below).
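A minimal sketch of what that could mean on the WC side, assuming the HelmRelease values end up in the kube-vip-cloud-provider ConfigMap (the name and namespace below are the upstream defaults; the exact values key in our chart may differ, and the addresses are hypothetical):

```yaml
# Assumed target state: instead of a single /32 in cidr-global, the WC gets the
# whole range that the MC pool reserves for it.
apiVersion: v1
kind: ConfigMap
metadata:
  name: kubevip               # upstream kube-vip-cloud-provider default
  namespace: kube-system
data:
  range-global: 10.1.2.10-10.1.2.19   # hypothetical range taken from the MC pool
```

kube-vip-cloud-provider understands both cidr-global and range-global keys, so switching the chart to render a range should not need changes on the kube-vip side.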
EDIT: Actually, now I remember. We use an IPAddressClaim to block the IP on the MC side. If we set a range in the WC, it could (potentially) be set in multiple WCs.
Ideally we would need something like an ipaddressrangeclaim, or create a claim for each IP. Somehow.
In the meantime, the solution could be: keep the ipAddressClaim in the MC and add an interface in the cluster chart to add the IP to the CPI HelmRelease.
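For reference, a rough sketch of what such a per-IP claim on the MC side could look like. The resource name, namespace, and API version are assumptions (use whatever the MC's CAPI version serves); the poolRef points at the global pool from the earlier sketch.

```yaml
# Hypothetical claim in the MC that "blocks" one IP from the shared pool;
# the resolved address would then be wired into the WC's cluster values
# via the interface in the cluster chart.
apiVersion: ipam.cluster.x-k8s.io/v1beta1
kind: IPAddressClaim
metadata:
  name: mywc-lb-0             # hypothetical: one claim per WC (or per LB IP)
  namespace: org-acme         # hypothetical MC namespace
spec:
  poolRef:
    apiGroup: ipam.cluster.x-k8s.io
    kind: GlobalInClusterIPPool
    name: lb-ip-pool          # the MC pool from the earlier sketch
```

Claiming individual IPs this way keeps the MC as the single source of truth and avoids the cross-WC overlap concern above, at the cost of more plumbing in the cluster chart.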