-
You may want to have a look at Flux2. Here is an example repo for managing releases across clusters without duplicating the objects; it uses Kustomize overlays to patch the HelmRelease values: https://github.com/fluxcd/flux2-kustomize-helm-example
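To make the pattern concrete, here is a minimal sketch of such an overlay (the `apps/staging` path, the `podinfo` HelmRelease and the values shown are illustrative, not copied from the example repo): the base defines the HelmRelease once, and each environment's overlay only patches what differs.

```yaml
# apps/staging/kustomization.yaml -- illustrative overlay: reuse the base
# HelmRelease and override only the environment-specific values.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: staging
resources:
  - ../base/podinfo
patches:
  - target:
      kind: HelmRelease
      name: podinfo
    patch: |
      apiVersion: helm.toolkit.fluxcd.io/v2beta1
      kind: HelmRelease
      metadata:
        name: podinfo
      spec:
        values:
          replicaCount: 2
```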
-
One observation: isn't the namespace (let's say …) … Is this a use case for https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/#setting-cross-cutting-fields? I guess it can be done with the …
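The cross-cutting fields from that docs page would look roughly like this (a sketch; the resource names are illustrative): the `namespace` set in `kustomization.yaml` is applied to every resource it lists, so each environment only has to state it once.

```yaml
# kustomization.yaml -- illustrative: `namespace` and `commonLabels` are
# cross-cutting fields applied to every listed resource.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: staging
commonLabels:
  env: staging
resources:
  - deployment.yaml
  - service.yaml
```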
-
For www.keptn.sh we use a "reference" version of the artefacts and then create individual versions for the stages. These versions are managed by the control plane.
-
There is no such thing in https://github.com/fluxcd/flux2-kustomize-helm-example: each env is a dedicated cluster, and adding a cluster means adding a Kustomize overlay.
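In that repo's layout, wiring a cluster to its overlay is done with a Flux Kustomization along these lines (a sketch; the path and names are approximate, not copied from the repo), so adding a cluster essentially means adding one such file plus the matching `apps/<env>` overlay.

```yaml
# clusters/staging/apps.yaml -- approximate sketch of a per-cluster entry
# point: the Flux Kustomization reconciles the staging overlay from Git.
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: apps
  namespace: flux-system
spec:
  interval: 10m0s
  sourceRef:
    kind: GitRepository
    name: flux-system
  path: ./apps/staging
  prune: true
```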
-
At JPL, I addressed this problem with my gitops-like process in our current v1.11 K8S environment. For CI, I use Tekton to automate the build & publishing of images. For CD, I use cue to import the relevant K8S schemas we depend on (services, deployments, ...).

In practice, this means that we also have a gitops-like process where we have a git repo for the generic deployment component definitions plus a git repo for each deployment environment (i.e., 5 repos). For all 6 repos, cue effectively allows us to verify, at compile time, that the constraints are satisfiable. For the environment-specific repos, cue also produces a solution to these constraints, which it exports as yaml files that we apply to the cluster. I haven't yet applied the flux operator to sync the cue-generated solutions with the actual state of an environment in k8s.

I hope this brief summary gives enough of a sense of the power that gitops + cue brings to address this problem.
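To give a flavour of the compile-time checking described above, here is a minimal CUE sketch (the `#PodInfo` definition, field names and values are hypothetical, not taken from the JPL repos): the shared definition constrains what an environment may set, and `cue vet` / `cue export --out yaml` only succeed, emitting the YAML to apply, if the environment's concrete values satisfy those constraints.

```cue
package deploy

// Shared, generic constraints (would live in the "reference" repo).
#PodInfo: {
	namespace:    string
	replicaCount: int & >=1 & <=10
	image:        string & =~"^ghcr.io/"
}

// Environment-specific concrete values (would live in an environment repo).
// Evaluation fails at "compile time" if any constraint above is violated.
staging: #PodInfo & {
	namespace:    "staging"
	replicaCount: 2
	image:        "ghcr.io/stefanprodan/podinfo:6.5.4"
}
```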
-
Following https://github.com/fluxcd/helm-operator-get-started I noticed that configuration is duplicated. In a Terraform approach, podinfo.yaml would be defined once and have variable placeholders (`var.namespace`, `var.replica_count`) plus a separate tfvars file for each environment. Within the repo there is duplication between the podinfo.yaml files: if I want to change the chart path I have to do it in two places, and similarly for the git repo location etc. There may be different repo layouts that better support this; if so, documenting them and having that as an output from this working group would be greatly appreciated.
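For comparison, that Terraform pattern would look roughly like the sketch below (the file names, the `podinfo.yaml.tpl` template and the variable names are illustrative): the manifest is written once as a template, and each environment contributes only a small tfvars file.

```hcl
# variables.tf -- the only values that differ per environment
variable "namespace"     { type = string }
variable "replica_count" { type = number }

# main.tf -- podinfo.yaml is defined once as a template and rendered with
# the environment's variables (illustrative sketch).
resource "kubernetes_manifest" "podinfo" {
  manifest = yamldecode(templatefile("${path.module}/podinfo.yaml.tpl", {
    namespace     = var.namespace
    replica_count = var.replica_count
  }))
}

# staging.tfvars -- one small file per environment:
#   namespace     = "staging"
#   replica_count = 2
```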