
CoreDNS currently only supports Deployment mode, but would you consider supporting DaemonSet mode as well? #86

Closed
kahirokunn opened this issue Dec 15, 2022 · 11 comments · May be fixed by #173

Comments

@kahirokunn

kahirokunn commented Dec 15, 2022

CoreDNS itself consumes only a small amount of resources, depending on the environment. By running it as a DaemonSet, the following benefits can be expected:

  1. When combined with Topology Aware Hints, DNS traffic stays in-zone, which can save on cross-zone network transfer costs (see the Service sketch after this list).
  2. The availability of CoreDNS increases.
  3. CoreDNS failures are contained at the node level rather than the cluster level.
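
As a rough illustration of point 1: Topology Aware Hints are enabled per Service. A minimal sketch, assuming the usual kube-dns Service name and namespace (the annotation key differs between Kubernetes versions):

apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    # newer clusters use service.kubernetes.io/topology-mode: Auto instead
    service.kubernetes.io/topology-aware-hints: auto
spec:
  selector:
    k8s-app: kube-dns
  ports:
    - name: dns
      port: 53
      protocol: UDP
    - name: dns-tcp
      port: 53
      protocol: TCP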
@kahirokunn
Author

I am not sure which setup comes out ahead: a CoreDNS DaemonSet, or CoreDNS combined with a nodelocaldns configuration.

@hagaibarel
Collaborator

Running CoreDNS as a DaemonSet isn't a use case we currently support in this chart. If you're looking for DNS caching or performance gains, nodelocaldns + CoreDNS is a better approach (see the docs).
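
For reference, nodelocaldns runs a small per-node cache that answers from memory and forwards misses to the central CoreDNS Service. A simplified sketch of its ConfigMap, following the upstream node-local-dns layout (the bind address and the __PILLAR__ placeholder are substituted at install time):

apiVersion: v1
kind: ConfigMap
metadata:
  name: node-local-dns
  namespace: kube-system
data:
  Corefile: |
    cluster.local:53 {
        errors
        # per-node cache; no kubernetes plugin, so no API server connection
        cache {
            success 9984 30
            denial 9984 5
        }
        # link-local address each node listens on
        bind 169.254.20.10
        # cache misses go to the in-cluster CoreDNS Service
        forward . __PILLAR__CLUSTER__DNS__ {
            force_tcp
        }
    }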

@dudicoco

@hagaibarel can you please elaborate why nodelocaldns + CoreDNS is a better approach?
It is a much more complicated solution:
nodelocaldns + CoreDNS + DNS autoscaler
vs.
a single CoreDNS DaemonSet

@chrisohaver
Member

chrisohaver commented Apr 10, 2023

Nodelocal DNS instances are lighter weight and less resource intensive because they don't maintain a k8s API connection.
In a CoreDNS Daemonset deployment, every instance maintains a k8s API connection, which takes up more memory and adds more load to the k8s API per instance.
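
For context, it is the kubernetes plugin in the default Corefile that opens and maintains that watch against the API server. A simplified sketch (not this chart's exact default config):

apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        # this plugin watches Services and EndpointSlices via the API server,
        # so every replica (or every node, when run as a DaemonSet) holds a connection
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
        }
        forward . /etc/resolv.conf
        cache 30
    }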

@dudicoco

@chrisohaver looking at our production CoreDNS pods (with DNS autoscaling), the memory footprint of each is on average 20MB, which is pretty insignificant.
What's the measured load on the k8s API server with a DaemonSet? The kube-proxy and CNI plugin DaemonSets maintain a k8s API connection and there are no issues there.

@dudicoco

@chrisohaver @hagaibarel can we reopen the issue?
I still cannot see a convincing argument for using an overly complex nodelocaldns setup.

@hagaibarel
Collaborator

Hi @dudicoco,

I have yet to see a compelling argument for why we should support this model in this chart. My initial arguments are still valid, IMO:

  • Upstream CoreDNS isn't suitable to be used as-is for NodeLocal DNS. While it's true that NodeLocal DNS is based on CoreDNS, it always requires some adaptations or modifications.
  • This is out of scope for this chart and its intended purposes.

Given that, I don't see a reason to reopen this issue.

@dudicoco

There's an ongoing discussion upstream, in an issue that I've opened: kubernetes/dns#594

Once conclusions are reached there, I'll update this issue with the information.

@vaskozl

vaskozl commented Jan 10, 2024

From the previously rejected MR:

A daemonset doesn't support rolling upgrades and as such could lead to service disruptions

This is now supported and works great: https://kubernetes.io/docs/tasks/manage-daemon/update-daemon-set/

spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # bring up the replacement pod on a node before
      maxUnavailable: 0  # removing the old one (DaemonSet maxSurge needs a recent Kubernetes release)

Furthermore, internalTrafficPolicy: Local is also now available.
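
A minimal sketch of that on the cluster DNS Service (names assumed; with a CoreDNS DaemonSet there is an endpoint on every node, so local-only routing is safe):

apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
spec:
  # send DNS queries only to the CoreDNS pod on the client's own node
  internalTrafficPolicy: Local
  selector:
    k8s-app: kube-dns
  ports:
    - name: dns
      port: 53
      protocol: UDP
    - name: dns-tcp
      port: 53
      protocol: TCP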

This makes it much simpler to deploy CoreDNS as a DS. For me the benefits are:

  • no iptables/kube-proxy interaction/requirement
  • no duplicate coredns config full of __PILLAR__ variables

@gecube

gecube commented Sep 20, 2024

I'd like to reopen this ticket. The use case is very simple:

  • As a DevOps engineer, I want to deploy CoreDNS onto the control-plane nodes (on bare metal).
  • Right now I am forced to use other solutions,
  • or use a Deployment with anti-affinity and topology spread constraints, and adjust the replica count manually.

So I would be happy to have a DaemonSet that scales with the number of control-plane nodes and schedules CoreDNS only onto them (via a nodeSelector); see the sketch below.
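
A rough sketch of the scheduling constraints meant here, assuming the common upstream label and taint names (they differ between distributions and older releases):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: coredns
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
    spec:
      # run only on control-plane nodes, one instance per node
      nodeSelector:
        node-role.kubernetes.io/control-plane: ""
      tolerations:
        - key: node-role.kubernetes.io/control-plane
          operator: Exists
          effect: NoSchedule
      containers:
        - name: coredns
          image: coredns/coredns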

@gecube

gecube commented Sep 20, 2024

@vaskozl thanks for the clarification, totally agree. I think we need to elaborate on the issue and re-open it. I think the PR would be almost trivial. And yes, for backward compatibility there would be a flag controlling the workload type, defaulting to Deployment.
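
A purely hypothetical sketch of such a values.yaml toggle (the key name and values are illustrative only, not an existing option of this chart):

# hypothetical values.yaml toggle; not an existing option of this chart
mode: deployment     # default: keep rendering a Deployment as today
# mode: daemonset    # opt-in: render a DaemonSet instead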
