Docs update for the proposed HA solution of node-local DNS in IPVS mode of kube-proxy #323
Any ideas would be welcome :-)
For IPVS mode, the HA solution is to run 2 replicas, as described in the KEP.
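For illustration, a minimal sketch of what a two-replica setup could look like. The DaemonSet names, image tag, link-local IPs, and flag spellings below are illustrative assumptions, not taken verbatim from the KEP; verify the flags against your node-cache version:

```yaml
# Two node-cache DaemonSets, each listening on its own link-local IP.
# Only one replica manages the interface/iptables rules, so the two
# replicas do not fight over rule ownership.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-local-dns-primary   # hypothetical name
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: node-local-dns-primary
  template:
    metadata:
      labels:
        k8s-app: node-local-dns-primary
    spec:
      hostNetwork: true
      dnsPolicy: Default
      containers:
      - name: node-cache
        image: registry.k8s.io/dns/k8s-dns-node-cache:1.22.20  # assumed tag
        args: ["-localip", "169.254.20.10", "-conf", "/etc/Corefile"]
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-local-dns-secondary  # hypothetical name
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: node-local-dns-secondary
  template:
    metadata:
      labels:
        k8s-app: node-local-dns-secondary
    spec:
      hostNetwork: true
      dnsPolicy: Default
      containers:
      - name: node-cache
        image: registry.k8s.io/dns/k8s-dns-node-cache:1.22.20
        # -setupiptables=false is an assumed way to keep rule management
        # in a single replica; check the flag in your node-cache version
        args: ["-localip", "169.254.20.11", "-conf", "/etc/Corefile", "-setupiptables=false"]
```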
I see this got removed from the KEP when two KEPs were merged in kubernetes/enhancements#2487
Pavithra -- can you send a PR to add it back to the KEP?
Also -- it probably needs documentation if this is a recommended setup...
Yes, already created - kubernetes/enhancements#2592
I agree we need better documentation of this. It isn't necessarily the recommended setup, since it uses twice the resources and requires managing conflicts: only one replica should handle interface/iptables management.
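To make the fallback behavior concrete: pods can list both replicas as nameservers via kubelet's `clusterDNS`, so the resolver falls through to the second replica when the first does not respond. A minimal sketch, assuming the two illustrative link-local IPs from the DaemonSet sketch above:

```yaml
# KubeletConfiguration fragment; the two IPs are the illustrative
# link-local addresses from the sketch above. Pods' /etc/resolv.conf
# will list both nameservers, so the libc resolver retries against
# the second replica if the first is down.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
clusterDNS:
  - 169.254.20.10
  - 169.254.20.11
```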
@prameshj: Regarding your comment:
Are there known issues where the two replicas will step on each other? If so, can you point me to them? When I look at the code, it seems idempotent and thread-safe. Is it not sufficient to simply pass --skipteardown to both replicas?
I missed this comment.. apologies for the super late reply. You could pass --skipteardown to both replicas, but then the iptables rules and the nodelocaldns interface would need to be torn down manually. This applies mostly to cases where nodelocaldns is being disabled.

It is probably not a huge issue in IPVS mode (where only a link-local IP is used) if cleanup is skipped, since no other service uses that same link-local IP. So even if the iptables rules lead nowhere, nothing will break once pods have switched back to using the kube-dns service IP.

For upgrades, since only one replica upgrades at a time, --skipteardown on both replicas should be OK. However, if the kube-dns service VIP is reused for nodelocaldns (in order to cleanly fall back to kube-dns when nodelocaldns is down/disabled), then skipping cleanup will blackhole DNS traffic.
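As a concrete sketch of the upgrade-safe variant described above (container spec fragment; flag spelling and image tag are assumptions, verify against your node-cache version):

```yaml
# Both replicas keep -skipteardown so a restarting/upgrading pod leaves the
# shared nodelocaldns interface and iptables rules in place for the replica
# that is still serving. Caveat from the comment above: if the kube-dns
# service VIP is reused as the listen IP, disabling nodelocaldns with
# teardown skipped will blackhole DNS traffic until the interface and
# rules are removed by hand.
containers:
- name: node-cache
  image: registry.k8s.io/dns/k8s-dns-node-cache:1.22.20  # assumed tag
  args: ["-localip", "169.254.20.10", "-conf", "/etc/Corefile", "-skipteardown=true"]
```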
The proposed HA solution for node-local DNS has been given here, but it will not work in IPVS mode of kube-proxy. I want to know whether there are plans to support the HA solution in IPVS mode of kube-proxy.
Looking forward to your reply!!