missing /32 FIB entries #1614
Comments
Hey Giles, Contiv-VPP is actually NOT installing any static routes for these. From where are you trying to ping the remote BVI interfaces? From VPP? Do you use some specific source interface? Apart from ping not working, do you see any issue with communication between the nodes? You know that the ping utility on VPP has many issues...
Interesting, but I'm not sure why e.g. worker2 gets a route for worker1. And yeah, I was trying to ping from VPP using loop0 as the source. The reason I started looking at this was that we had a broken cluster (workers unable to reach etcd, IIRC) and that was the only difference I could see in the FIBs. Will dig some more...
Oh yes, so it was the vxlanCIDR addresses.
Well, if the issue was that workers were unable to reach Contiv-ETCD, you may need to look at a different place. Contiv-ETCD uses a NodePort service, so it relies on kube-proxy to do the NAT; the traffic then goes via the management node inter-connection rather than via VPP (to the master node's management IP from the workers). The reason for that is that the agents need to be able to connect to ETCD even before the CNI starts working (before VPP is running and configured). Maybe you are hitting this issue? #1430
BTW, I am confused about why, when, and how VPP installs those /32 entries.
It seems like Contiv-VPP adds a /32 FIB entry in VRF1 on a new node for each existing node in the cluster (by default those are in 192.168.30.0/24).
But it also seems that the existing nodes don't get updated. So, for example, the master node in my cluster only has the /24 plus a local /32, the first worker I start also has a /32 for the master, and the next worker gets that plus a /32 for the first worker - and so on.
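The asymmetry described above can be sketched as a small simulation. This is only an illustration of the reported behavior, not Contiv-VPP's actual code; the node IPs and data structures are hypothetical:

```python
def build_fibs(node_ips, subnet="192.168.30.0/24"):
    """Simulate the reported behavior: each node joining the cluster
    installs the /24, its own local /32, and a /32 for every node that
    already existed -- but the FIBs of existing nodes are never updated."""
    fibs = {}
    joined = []  # nodes that existed before the current one
    for ip in node_ips:
        fibs[ip] = {subnet, f"{ip}/32"} | {f"{peer}/32" for peer in joined}
        joined.append(ip)
    return fibs

# Hypothetical cluster: master, worker1, worker2 (in join order).
fibs = build_fibs(["192.168.30.1", "192.168.30.2", "192.168.30.3"])
# The master ends up with only the /24 plus its own /32, while
# worker2 has /32s for the master, worker1, and itself.
```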
So I'm seeing pings drop when destined for 192.168.30.0/24 addresses that have no matching /32, but things behind those addresses (e.g. pod IPs) seem to be OK (it looks like the FIB entry for the /24 resolves only to ARP, whereas the FIB entry for the IPAM network resolves to the correct next hop, including MAC addresses etc.).
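The lookup behavior above is just longest-prefix matching: when the /32 is missing, the destination falls back to the less specific /24, which here resolves only to ARP. A minimal sketch with Python's `ipaddress` module (the prefixes and next-hop strings are made-up examples):

```python
import ipaddress

# Hypothetical FIB: the /24 resolves only to ARP (incomplete),
# while the /32 carries a concrete next hop.
fib = {
    ipaddress.ip_network("192.168.30.0/24"): "arp (unresolved)",
    ipaddress.ip_network("192.168.30.1/32"): "via vxlan, mac aa:bb:cc:00:00:01",
}

def lookup(dst):
    """Longest-prefix match: the most specific matching prefix wins."""
    addr = ipaddress.ip_address(dst)
    matches = [p for p in fib if addr in p]
    return fib[max(matches, key=lambda p: p.prefixlen)] if matches else None

print(lookup("192.168.30.1"))  # hits the /32 -> concrete next hop
print(lookup("192.168.30.2"))  # no /32 -> falls back to the /24, ARP only
```

This matches the symptom: destinations with a /32 forward fine, while destinations covered only by the /24 stall on ARP resolution.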