Expected Behavior
Services of type LoadBalancer should not have their VIPs advertised from the masters. Instead, there should probably be a separate daemonset running on the workers for allocating these IPs.
Current Behavior
I am using kube-vip to advertise IPs for Services of type LoadBalancer, so I deployed a second daemonset for that purpose. However, the IPs ended up being picked up by kube-vip-ds, the daemonset that manages the VIP for the control plane. This is because the kube-vip daemonset template contains the following logic:
# svc_enable -> From docs: Enables kube-vip to watch Services of type LoadBalancer
- name: svc_enable
  value: "{{ 'true' if kube_vip_lb_ip_range is defined else 'false' }}"
Since the control-plane kube-vip runs on the masters, this does not seem like the desirable behavior when you are trying to allocate VIPs for Services of type LoadBalancer. A better solution could be to take this logic out of the control-plane kube-vip and, when kube_vip_lb_ip_range is given, deploy a second daemonset with a node selector targeting the workers; a rough sketch of such a manifest is below.
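For illustration only, this is roughly what a worker-only services daemonset could look like. The name kube-vip-svc-ds, the kube-vip service account, and the affinity rule are assumptions (nothing this playbook currently creates); svc_enable and cp_enable mirror kube-vip's documented environment variables.

# Sketch only: a second kube-vip daemonset that watches Services and is kept off the control plane
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-vip-svc-ds              # hypothetical name
  namespace: kube-system
spec:
  selector:
    matchLabels:
      name: kube-vip-svc-ds
  template:
    metadata:
      labels:
        name: kube-vip-svc-ds
    spec:
      hostNetwork: true
      serviceAccountName: kube-vip   # assumes the existing kube-vip RBAC
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: node-role.kubernetes.io/control-plane
                    operator: DoesNotExist   # schedule on workers only
      containers:
        - name: kube-vip
          image: ghcr.io/kube-vip/kube-vip:{{ kube_vip_tag_version }}
          args: ["manager"]
          env:
            - name: svc_enable       # watch Services of type LoadBalancer
              value: "true"
            - name: cp_enable        # do not manage the control-plane VIP here
              value: "false"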
Steps to Reproduce
Set kube_vip_lb_ip_range in group_vars/all.yml
Create a Service of type LoadBalancer (a minimal test manifest is sketched below) and see the allocated IP land on the masters
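A minimal Service like the following is enough to trigger the allocation; the name and selector are placeholders. Once it receives an address from kube_vip_lb_ip_range, that address shows up on a master's interface rather than on a worker.

apiVersion: v1
kind: Service
metadata:
  name: lb-test                      # placeholder name
spec:
  type: LoadBalancer
  selector:
    app: lb-test                     # placeholder selector
  ports:
    - port: 80
      targetPort: 80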
Context (variables)
Operating system: Ubuntu 22
Hardware: Bare metal
Variables Used
all.yml
cilium_iface: "eth0"cilium_mode: "native"# native when nodes on same subnet or using bgp, else set routedcilium_tag: "v1.16.0"# cilium version tagcilium_hubble: true # enable hubble observability relay and ui#flannel_iface: ""#calico_iface: ""#calico_ebpf: ""#calico_cidr: ""#calico_tag: ""kube_vip_tag_version: "v0.8.2"kube_vip_lb_ip_range: "10.100.22.100-10.100.22.116"# kube_vip_cloud_provider_tag_version: "main"#metal_lb_speaker_tag_version: ""#metal_lb_controller_tag_version: ""#metal_lb_ip_range: ""
Possible Solution
The simplest solution would be to document this better in group_vars/all.yml with a small warning like:
# Warning - IPs will be allocated on the kube-vip-ds running on the masters
kube_vip_lb_ip_range: "10.100.22.100-10.100.22.116"
There isn't anything inherently wrong with running services on the masters, but it is generally not advised, right?
Also, as mentioned, another solution could be to trigger the deployment of a second daemonset on the workers when kube_vip_lb_ip_range is set (a hypothetical task is sketched below), or to remove kube-vip's ability to watch Services of type LoadBalancer entirely and let users run their own daemonset if that is what they are trying to do.
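On the Ansible side, gating that deployment could be as simple as a conditional task. The template name and destination below are assumptions (they presume a k3s-style auto-deploy manifests directory), not the repo's actual layout:

- name: Template kube-vip services DaemonSet for workers
  ansible.builtin.template:
    src: kube-vip-svc-ds.yaml.j2     # hypothetical template
    dest: /var/lib/rancher/k3s/server/manifests/kube-vip-svc-ds.yaml
  when: kube_vip_lb_ip_range is defined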
This wouldn't have been a problem for me if I had not been advised against using BGP with Cilium for now (my original plan), since that feature is still a little early (there are documented bugs, and keeping IPs is pretty mission critical). So I went down the kube-vip route instead, as I was also told MetalLB and Cilium don't play nicely together. Anyway, happy to submit a PR, especially if the agreed solution ends up being just better docs.