Creating LoadBalancer service blocks API server IP #1685
Comments
Out of curiosity, what is the address of the kube-apiserver in your kubectl config file? If this is a DNS name, please resolve it first before sending it.
Hi, thanks for the reply. The API server is addressed with https://[fdcd:0cd1:5dd1:28::]:6443 everywhere, including all kubeconfigs; no DNS involved.
It seems the API server is blocked because the node IP is added to the ipset.
Good sleuthing! I got distracted by another problem that I found with IPv6 while I was looking into this one. I'll try to track down the code path for that after I see if I can reproduce this. Out of curiosity, do you see that IP (the one of the node) listed in the endpoints?
The node's IP is listed in the endpoint of the kubernetes service.
I'll see if I can collect some logs from the kube-router pod, if that helps.
I'm having a hard time getting logs because the API server is unreachable, but I've dug up some older logs and noticed this excerpt where kube-router adds the node IP (see the attached kube-router log excerpt 1).
Still trying to figure this one out... If your kube controller node IP is getting added to that set, then traffic to it on anything other than a known service port will be rejected. Confirming this is happening should be as simple as looking at the packet counters on the KUBE-ROUTER-SERVICES chain. When I tested this locally, this is what I saw.

Before adding the node's IPv6 address to the set:

```
# ip6tables -nvL KUBE-ROUTER-SERVICES
Chain KUBE-ROUTER-SERVICES (1 references)
 pkts bytes target  prot opt in  out  source  destination
    0     0 ACCEPT  58   --  *   *   ::/0    ::/0   /* allow icmp echo requests to service IPs */ ipv6-icmptype 128
    0     0 ACCEPT  58   --  *   *   ::/0    ::/0   /* allow icmp ttl exceeded messages to service IPs */ ipv6-icmptype 3
    0     0 ACCEPT  58   --  *   *   ::/0    ::/0   /* allow icmp destination unreachable messages to service IPs */ ipv6-icmptype 1
    0     0 ACCEPT  0    --  *   *   ::/0    ::/0   /* allow input traffic to ipvs services */ match-set inet6:kube-router-svip-prt dst,dst
    0     0 REJECT  0    --  *   *   ::/0    ::/0   /* reject all unexpected traffic to service IPs */ ! match-set inet6:kube-router-local-ips dst reject-with icmp6-port-unreachable
```

After adding the controller node's IPv6 address to the set:

```
# ip6tables -nvL KUBE-ROUTER-SERVICES
Chain KUBE-ROUTER-SERVICES (1 references)
 pkts bytes target  prot opt in  out  source  destination
    0     0 ACCEPT  58   --  *   *   ::/0    ::/0   /* allow icmp echo requests to service IPs */ ipv6-icmptype 128
    0     0 ACCEPT  58   --  *   *   ::/0    ::/0   /* allow icmp ttl exceeded messages to service IPs */ ipv6-icmptype 3
    2   256 ACCEPT  58   --  *   *   ::/0    ::/0   /* allow icmp destination unreachable messages to service IPs */ ipv6-icmptype 1
    0     0 ACCEPT  0    --  *   *   ::/0    ::/0   /* allow input traffic to ipvs services */ match-set inet6:kube-router-svip-prt dst,dst
    2   160 REJECT  0    --  *   *   ::/0    ::/0   /* reject all unexpected traffic to service IPs */ ! match-set inet6:kube-router-local-ips dst reject-with icmp6-port-unreachable
```

So the question becomes: how does your node's address end up in that set?

After looking into it some more, I noticed that this set gets populated with the local addresses of the IPVS services on the node. This is probably not the best logic, but it has been this way for years. I'm wondering if there might be something else on your system that is creating an IPVS service for your node's IP address? Can you list the IPVS services on your node?

I'm going to assume that there will be an IPVS service listed for your node's address. Unfortunately, IPVS services don't have names, so it might be a bit of a guess to figure out what is creating that. But maybe the port will let us know? Specifically, if that port relates to a Kubernetes service, then maybe kube-router is somehow errantly creating it? If not, then maybe some other part of your OS or setup is creating it?
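Editorial aside: the accept/reject behavior of the two non-ICMP rules above can be modeled in a few lines of Python. This is an illustrative sketch, not kube-router code: the set contents are invented, the function name is hypothetical, and it assumes the packet has already been steered into the KUBE-ROUTER-SERVICES chain.

```python
def service_chain_verdict(dst_ip, dst_port, svip_prt, local_ips):
    """Toy model of the last two rules in KUBE-ROUTER-SERVICES.

    svip_prt  -- (ip, port) pairs, like the inet6:kube-router-svip-prt ipset
    local_ips -- plain addresses, like the inet6:kube-router-local-ips ipset
    (The three ICMP ACCEPT rules are omitted for brevity.)
    """
    # "allow input traffic to ipvs services"
    if (dst_ip, dst_port) in svip_prt:
        return "ACCEPT"
    # "reject all unexpected traffic to service IPs"
    if dst_ip not in local_ips:
        return "REJECT"
    return "RETURN"  # fall through to the rest of the INPUT chain

# Invented example data: one LoadBalancer VIP serving port 1337.
svip_prt = {("fdcd:cd1:5dd1:28::100", 1337)}
local_ips = set()

# Traffic to the service VIP on its port is accepted...
print(service_chain_verdict("fdcd:cd1:5dd1:28::100", 1337, svip_prt, local_ips))  # ACCEPT
# ...but API-server traffic to the node address, once it lands in this chain,
# matches neither set and hits the REJECT rule -- the symptom reported above.
print(service_chain_verdict("fdcd:0cd1:5dd1:28::1", 6443, svip_prt, local_ips))  # REJECT
```

This also shows why the fix under discussion matters: whichever component adds the node address to the sets consulted by this chain changes the verdict for port 6443 traffic.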
If this is indeed not a kube-router-created IPVS service, then the following fix would resolve the issue you're experiencing: #1699

As part of that pipeline, it should build a test container that you could try out as a shortcut, to see if it just magically resolves this issue. The container created from the pipeline was:
What happened?
After creating a service with `type` set to `LoadBalancer`, the Kubernetes API server becomes unreachable.

What did you expect to happen?
API server accessibility is unaffected.
How can we reproduce the behavior you experienced?
Steps to reproduce the behavior:
1. Create a LoadBalancer service (`kubectl create service loadbalancer example --tcp=1337:1337`)
2. Try to reach the API server (`kubectl get nodes`), or run any other command that contacts it

System Information:
- kube-router version (`kube-router --version`): v2.1.2
- kubectl version (`kubectl version`): v1.28.10

Logs, other output, metrics
- `ipset list` output before creating the service (API server is reachable)
- `ipset list` output after creating the service (API server is unreachable)
- `ip6tables -L` output before creating the service (API server is reachable)
- `ip6tables -L` output after creating the service (API server is unreachable)

fdcd:0cd1:5dd1:28::1 is the address of the API server.
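Editorial note: given the before/after `ipset list` attachments listed above, a small script can pinpoint exactly which address appeared after the service was created. This is a hypothetical helper; the sample outputs below are invented stand-ins for the real attachments.

```python
def ipset_members(output: str) -> set[str]:
    """Extract member entries from `ipset list` output.

    Members appear after a "Members:" line, one per line, until the next
    blank line or the "Name:" header of the following set.
    """
    members = set()
    in_members = False
    for line in output.splitlines():
        stripped = line.strip()
        if stripped == "Members:":
            in_members = True
            continue
        if in_members:
            if not stripped or stripped.startswith("Name:"):
                in_members = False
                continue
            members.add(stripped.split()[0])  # drop per-entry options like "timeout 0"
    return members

# Invented sample data modeled on the sets discussed in this issue.
before = """Name: inet6:kube-router-local-ips
Members:
fdcd:0cd1:5dd1:28::100
"""
after = """Name: inet6:kube-router-local-ips
Members:
fdcd:0cd1:5dd1:28::100
fdcd:0cd1:5dd1:28::1
"""
print(ipset_members(after) - ipset_members(before))  # -> {'fdcd:0cd1:5dd1:28::1'}
```

Running this against the real before/after attachments would confirm whether the API server's address is the entry that shows up only after the LoadBalancer service is created.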