
Change kube-vip LB IP pool logic #3637

Closed
vxav opened this issue Aug 14, 2024 · 7 comments
Assignees: anvddriesch, vxav
Labels: team/bigmac Team BigMac, team/rocket Team Rocket

Comments


vxav commented Aug 14, 2024

Current setup

  • We specify a globalippool in the cluster values: here
  • We create an IPAddressClaim using this pool, which reserves a single IP address: here
  • We collect that IP from the cluster and edit the CPI HelmRelease to create a one-IP pool with cidrGlobal: here

I don't get why we do that. It means we have a 10-IP IPAM pool in the MC but the WC has a 1-IP pool 🤷. So the first load balancer will work fine (most often nginx) while subsequent LBs will be stuck in Pending.
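
For reference, a rough sketch of the resources involved today (API versions, the claim name and the non-listed fields are from memory / illustrative and may not match exactly what the chart renders):

# MC side: the pool defined from the cluster values (10 usable IPs)
apiVersion: ipam.cluster.x-k8s.io/v1alpha2
kind: GlobalInClusterIPPool
metadata:
  name: cassino-svc-lb-ips
spec:
  addresses:
    - 10.109.147.221-10.109.147.230
  prefix: 24            # illustrative
  gateway: 10.109.147.1 # illustrative
---
# MC side: a single IPAddressClaim that reserves exactly one address from that pool
apiVersion: ipam.cluster.x-k8s.io/v1beta1
kind: IPAddressClaim
metadata:
  name: cassino-svc-lb-ip-0   # hypothetical name
spec:
  poolRef:
    apiGroup: ipam.cluster.x-k8s.io
    kind: GlobalInClusterIPPool
    name: cassino-svc-lb-ips

The single address resolved by that claim is what ends up as the one-entry cidrGlobal in the CPI HelmRelease.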

Proposal
Collect the actual IP range from the LB IP pool in the MC:

> kg globalinclusterippools.ipam.cluster.x-k8s.io cassino-svc-lb-ips -ojsonpath='{.spec.addresses[0]}'
10.109.147.221-10.109.147.230

And patch the HelmRelease with a range-global entry instead.
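
Concretely, the rendered kube-vip cloud-provider config in the WC would then carry the whole range rather than a single address, along these lines (sketch only; kube-vip's cloud provider reads these keys from its kubevip ConfigMap, and the exact wiring through the CPI HelmRelease values may differ):

apiVersion: v1
kind: ConfigMap
metadata:
  name: kubevip
  namespace: kube-system
data:
  # today: a single-address pool, e.g.
  # cidr-global: 10.109.147.221/32
  # proposed: the full range from the MC pool
  range-global: 10.109.147.221-10.109.147.230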

EDIT: Actually, now I remember. We use an IPAddressClaim to block the IP on the MC side. If we set a range in the WC, the same addresses could (potentially) be handed out in multiple WCs.
Ideally we would need something like an "IPAddressRangeClaim", or to create a claim for each IP. Somehow.
In the meantime, the solution could be:

  • Add an IPAddressClaim in the MC and add an interface in the cluster chart to add the IP to the CPI HelmRelease.
  • Add a field in the cluster values to specify the number of IPs to take from the global pool for the WC pool, so we can iterate to create that number of IPAddressClaims and patch the CPI HelmRelease with the joined IPs (see the sketch below).
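
Sketch of what that second option could look like (the values key is a hypothetical name and the joined CIDRs are illustrative; kube-vip's cloud provider accepts comma-separated entries):

# Cluster values (hypothetical field name):
serviceLoadBalancerIPCount: 3

# Resulting value patched into the CPI HelmRelease after creating three
# IPAddressClaims and joining the resolved single-IP CIDRs:
cidrGlobal: 10.109.147.221/32,10.109.147.222/32,10.109.147.223/32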
@vxav vxav added this to Roadmap Aug 14, 2024
@vxav vxav converted this from a draft issue Aug 14, 2024

vxav commented Aug 14, 2024

Untested: giantswarm/cluster-vsphere#266

@anvddriesch anvddriesch self-assigned this Aug 19, 2024
@architectbot architectbot added the team/bigmac Team BigMac label Aug 19, 2024
@anvddriesch

I made some small changes to the PR and now several IPs can be assigned.

@anvddriesch

The standard test was green when I ran it last week. However, as of now all providers fail, and the reason is related to some already-exists issue with cert-exporter ConfigMaps in the default apps.
This leads me to believe that it's related to the default-apps changes still being made.

@vxav vxav self-assigned this Aug 28, 2024
@architectbot architectbot added the team/rocket Team Rocket label Aug 28, 2024

vxav commented Aug 28, 2024

Thanks a lot! I'll test this manually, as the E2E tests won't be testing what we're changing here.

@anvddriesch

I created some random LB services on a test cluster and we see the expected behavior.
Three single-IP CIDRs are added to the HelmRelease for LB IP space.
Creating four LB services in a row resulted in these three IPs being assigned to the first three, and the fourth one not getting an assignment.
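
For reference, the test amounts to creating plain LoadBalancer Services like this (sketch, names illustrative) and watching which ones get an external IP:

apiVersion: v1
kind: Service
metadata:
  name: lb-test-1   # lb-test-1 ... lb-test-4
spec:
  type: LoadBalancer
  selector:
    app: lb-test
  ports:
    - port: 80
      targetPort: 8080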


vxav commented Aug 29, 2024

OK, that's good. I think for now we can stick with this being an immutable field and keep 3 as the default, like you suggested here, as it would be more work than it's worth to make it a dynamic value. WDYT?

@anvddriesch

Changes were merged.

@anvddriesch anvddriesch moved this from In Progress ⛏️ to Validation ☑️ in Roadmap Sep 2, 2024
@github-project-automation github-project-automation bot moved this from Validation ☑️ to Done ✅ in Roadmap Sep 3, 2024