Template for provisioning GKE + Flux #18
For flux to apply a manifest on a remote cluster, a k8s secret with the remote cluster's `kubeConfig` is needed. Another way is to "construct" a `kubeConfig` from the cluster's connection details. To get around the concerns mentioned above, we can use a hybrid of the above two approaches.
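For reference, this is roughly how flux targets a remote cluster: a `Kustomization` can point at a secret holding the remote `kubeConfig` via `spec.kubeConfig.secretRef`. A minimal sketch follows; the secret name `remote-cluster-kubeconfig`, the path, and the source name are illustrative assumptions, not from this thread:

```yaml
# Sketch: flux Kustomization applying manifests to a remote cluster.
# Assumes a secret "remote-cluster-kubeconfig" exists in flux-system
# with the remote cluster's kubeconfig stored under the key "value".
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: remote-workloads
  namespace: flux-system
spec:
  interval: 10m
  path: ./clusters/remote
  prune: true
  sourceRef:
    kind: GitRepository
    name: flux-system
  kubeConfig:
    secretRef:
      name: remote-cluster-kubeconfig  # secret containing the kubeConfig
      key: value
```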
An alternative is to do away with bootstrapping flux as per the above-mentioned method entirely.
So after some internal discussions, it seems like it'd be more useful to have flux reconciled against the client repository, as opposed to the previously proposed approaches of having it reconciled to a central place. The reason for this is that anything beyond GCP resources should be considered application-specific tooling. Having multiple flux instances from different remote clusters reconciling to a single flux repo increases the maintenance complexity for our team. Plus, it'd be difficult to "customize" the flux deployment on a remote cluster if there's ever a need for that; for example, one project might need a flux component that others don't.

The solution then is to have each flux instance reconcile to its own client repo. However, as noted previously, that means configuring deploy keys. One way to approach this is with a crossplane composition that creates the remote k8s cluster and then uses a kubernetes job to bootstrap flux against the client repo.

Fundamentally, this approach is similar to ANZ's Google Next demo. However, instead of GitHub Actions we'd use a kubernetes job within a crossplane composition, due to the security concerns around GitHub's location. Moreover, the demo uses token auth, i.e., one highly privileged token would need access to all the repositories in the org and would be stored as a k8s secret in each cluster that uses this template. On the other hand, a k8s job that's part of the crossplane composition can run a shell script that performs the bootstrapping steps (see the sketch after the example below). The job runs on the management cluster; therefore, the highly privileged org-wide token only lives in the management cluster.

The end result might look something like this:

```yaml
apiVersion: dip.phac.gc.ca/v1beta1
kind: XFluxGKE
metadata:
  name: alpha
spec:
  name: alpha # cluster name on gcp
  projectId: phsp-fb3a2b560a617fbf # project id where the cluster would be created
  xnetwork: # network config for the cluster
    networkId: projects/phsp-fb3a2b560a617fbf/global/networks/alpha-vpc
    subnetworkId: projects/phsp-fb3a2b560a617fbf/regions/northamerica-northeast1/subnetworks/alpha-vpc
  repoName: cpho-phase2 # repo name that resolves to ssh://git@github.com/PHACDataHub/<repoName>
```
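To make the bootstrapping step concrete, below is a minimal sketch of the kind of kubernetes job the composition could run. Everything here is illustrative: the image tag, secret names (`github-org-token`, `remote-kubeconfig`), and mount paths are assumptions; the deploy-key registration uses GitHub's standard `POST /repos/{owner}/{repo}/keys` endpoint.

```yaml
# Sketch only: a bootstrap Job the composition could create on the
# management cluster. Secret names and the image are assumptions,
# not part of the proposal above.
apiVersion: batch/v1
kind: Job
metadata:
  name: flux-bootstrap-alpha
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: bootstrap
          image: ghcr.io/fluxcd/flux-cli:v2.1.2  # hypothetical pinned tag
          env:
            - name: GITHUB_TOKEN  # org-wide token, lives only on the management cluster
              valueFrom:
                secretKeyRef: { name: github-org-token, key: token }
          volumeMounts:
            - { name: kubeconfig, mountPath: /tmp/kubeconfig, readOnly: true }
          command: ["/bin/sh", "-c"]
          args:
            - |
              set -eu
              # Remote cluster's kubeConfig; the secret key "kubeconfig" is an assumption.
              export KUBECONFIG=/tmp/kubeconfig/kubeconfig
              REPO=cpho-phase2
              # 1. Generate a deploy key for the client repo.
              ssh-keygen -t ed25519 -N "" -f /tmp/identity
              # 2. Register it via the GitHub API as a read-only deploy key.
              curl -sf -X POST \
                -H "Authorization: Bearer ${GITHUB_TOKEN}" \
                -H "Accept: application/vnd.github+json" \
                "https://api.github.com/repos/PHACDataHub/${REPO}/keys" \
                -d "{\"title\":\"flux\",\"key\":\"$(cat /tmp/identity.pub)\",\"read_only\":true}"
              # 3. Install flux on the remote cluster and wire it to the client repo.
              flux install
              flux create secret git flux-system \
                --url="ssh://git@github.com/PHACDataHub/${REPO}" \
                --private-key-file=/tmp/identity
              flux create source git flux-system \
                --url="ssh://git@github.com/PHACDataHub/${REPO}" \
                --branch=main --secret-ref=flux-system
              flux create kustomization flux-system \
                --source=GitRepository/flux-system --path="./" --prune=true
      volumes:
        - name: kubeconfig
          secret: { secretName: remote-kubeconfig }
```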
A cluster resource from the crossplane gcp provider can be used in conjunction with `XProject` and `XNetwork` resources to initialize a new project space with the desired networking setup and a GKE autopilot cluster.

Flux can be installed on the remote cluster by leveraging the `kubeConfig` exported by the previously created cluster resource and `provider-kubernetes` to apply the management repo's flux sync manifest that has a `kubeConfig` reference.

This implies that the remote cluster's flux deployment will be in sync with (managed by) the management cluster, and the application team would be responsible for configuring GitOps on their repo, i.e., a new `GitRepository` resource pointing to their repo. An upside to this is that we don't need to worry about access or bootstrapping deploy keys on the client / application repo.

In addition to this, the cluster can be added to the fleet-monitoring project to provide centralized monitoring of all clusters.
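As a sketch of that wiring, `provider-kubernetes` could apply the flux sync `Kustomization` on the remote cluster through an `Object` resource whose `ProviderConfig` pulls credentials from the cluster's connection secret. The names below (`alpha`, `alpha-kubeconfig`, the path) are assumptions for illustration:

```yaml
# Sketch: ProviderConfig authenticating provider-kubernetes against the
# remote cluster using the connection secret written by the crossplane
# cluster resource (secret name/key are assumptions).
apiVersion: kubernetes.crossplane.io/v1alpha1
kind: ProviderConfig
metadata:
  name: alpha
spec:
  credentials:
    source: Secret
    secretRef:
      namespace: crossplane-system
      name: alpha-kubeconfig
      key: kubeconfig
---
# Object wrapping the flux sync Kustomization, so the management
# cluster manages the remote flux deployment.
apiVersion: kubernetes.crossplane.io/v1alpha2
kind: Object
metadata:
  name: alpha-flux-sync
spec:
  providerConfigRef:
    name: alpha
  forProvider:
    manifest:
      apiVersion: kustomize.toolkit.fluxcd.io/v1
      kind: Kustomization
      metadata:
        name: flux-system
        namespace: flux-system
      spec:
        interval: 10m
        path: ./clusters/alpha
        prune: true
        sourceRef:
          kind: GitRepository
          name: flux-system
```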
This would probably include a template on both the crossplane and backstage sides.