deprecate this module in favour of puppet/k8s #642
Comments
This is something we have discussed too! It would be good to get some wider opinions from the community on the topic.
Ping. How do we move forward with this? :)
I think this is probably a good move, but I need to remind everyone that, as a Supported module that Enterprise customers consume, it would need business analysis and a significant deprecation period.
puppet/k8s looks promising, but it seems to me that it still lacks documentation and tests, e.g. how the PuppetDB discovery works, certificate management, renewal, etc. It also lacks a release history that would give the impression of a maintained module.
What are the reasons the k8s module is better than this one? I have not found them yet. We are running several (10+) k8s clusters in production for our customers with this module, and it is working very well for us. It follows the standard setup using kubeadm; see also https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/. We started with managed k8s clusters around five years ago (first with kubeadm and Ansible), so we know how to manage them using packages from the OS (CentOS and/or Ubuntu). The k8s module is, in my opinion, not good enough for production. I have some remarks about it, though maybe not everything is correct, because I only checked it for 30 minutes.
I think the k8s module can work for k8s clusters for development purposes (built from scratch and not using OS packages), but not for k8s clusters that are in production use by customers every day (like ours). So we will not switch to the k8s module. I think it would be better to deprecate the k8s module in favour of this one.
Whilst this module is rusty and, from first impressions, not ready out of the box for the latest versions of k8s and Cilium, the k8s module looks significantly less mature than this one. I'd hope to see development work focused here rather than any deprecation of this module.
Another happy user of this module here. It is far from perfect and might not even be a typical Puppet module, but it does the job for us. We have a couple of clusters deployed with it (one PROD with 512 cores / 1 TB of RAM, one DEV), and so far so good. Originally the clusters were deployed with Kubernetes 1.20, calico-tigera, cri_containerd, and runc on RHEL. Today we run Kubernetes 1.26, and all the upgrades have been done via Puppet with this module.
I'm also running a k8s cluster with this module and would like to stick with this one; it has better docs and much better options to work with.
Use Case
Hi, at Config Management Camp this year we discussed the future of the puppetlabs/kubernetes module. puppetlabs/kubernetes uses many exec resources and is hard to use because we need to generate the config outside of Puppet. The puppet/k8s module has proper types and providers. I proposed deprecating puppetlabs/kubernetes in favour of puppet/k8s.
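To make the exec-versus-provider distinction above concrete, here is a hypothetical sketch. The resource titles, parameters, and the `kubectl_apply` type name are illustrative assumptions and are not verified against either module's actual API:

```puppet
# Exec-based style: Puppet shells out to a command and only sees
# success/failure. Idempotency depends on hand-written guards like
# "unless", and Puppet cannot report what actually changed.
exec { 'kubeadm-init':
  command => 'kubeadm init --config /etc/kubernetes/kubeadm.yaml',
  path    => ['/usr/bin', '/bin'],
  unless  => 'test -f /etc/kubernetes/admin.conf',
}

# Type-and-provider style: desired state is declared as structured data,
# so Puppet can compare current vs. desired state and report diffs.
kubectl_apply { 'example-namespace':
  kubeconfig  => '/root/.kube/config',
  api_version => 'v1',
  kind        => 'Namespace',
  name        => 'example-namespace',
}
```

The practical difference is that the second form participates in Puppet's resource model (noop runs, change reporting, dependency ordering on attributes) rather than being an opaque shell command.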
Describe the Solution You Would Like
I would like to deprecate puppetlabs/kubernetes
Describe Alternatives You've Considered
Additional Context
I don't think it makes sense to maintain multiple kubernetes modules and my impression is that puppet/k8s has the better code quality.