feat: add hub and spoke docs #910

Merged 1 commit on Dec 10, 2024
46 changes: 46 additions & 0 deletions backstage-resources/docs/hub-and-spoke/application-deployment.md
@@ -0,0 +1,46 @@
# Application Deployment - DRAFT

Deploying team applications is managed by the team-onboarding chart. Teams currently have two options:

* create their own application definitions in the app-of-apps Git repo themselves
* create only application-specific GitOps repos and let a platform AppSet SCM generator discover these repos and create the application definitions

In both options, the application is restricted to the team's specific Argo CD project defined in the team-onboarding chart.
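For the second option, the repo discovery could be sketched with an Argo CD ApplicationSet using an SCM provider generator. This is a minimal sketch only; the team name, organization, and repo naming pattern are hypothetical and not taken from the kubriX charts:

```
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: team-a-gitops-repos            # hypothetical name
  namespace: argocd
spec:
  generators:
    - scmProvider:
        github:
          organization: example-org            # hypothetical organization
        filters:
          - repositoryMatch: team-a-.*-gitops  # hypothetical naming convention
  template:
    metadata:
      name: '{{ repository }}'
    spec:
      project: team-a                          # the team's restricted Argo CD project
      source:
        repoURL: '{{ url }}'
        targetRevision: '{{ branch }}'
        path: .
      destination:
        server: https://kubernetes.default.svc
        namespace: 'team-a-{{ repository }}'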

## Project definition

In a hub-and-spoke topology, the project definition in the team-onboarding chart needs different destination rules:

Single instance:

```
destinations:
  - name: in-cluster
    namespace: {{ .name }}-*
    server: https://kubernetes.default.svc
  - name: in-cluster
    namespace: adn-{{ .name }}
    server: https://kubernetes.default.svc
```

Hub & Spoke:

```
destinations:
  # do not allow to deploy team applications on the hub
  - name: !in-cluster
    namespace: {{ .name }}-*
    server: https://kubernetes.default.svc
  # only allow to deploy team app-of-apps or appsets on the hub in the special "adn" namespace (application-definition-namespace)
  - name: in-cluster
    namespace: adn-{{ .name }}
    server: https://kubernetes.default.svc
  # all other clusters are allowed as long as the target namespace starts with the teams name
  - name: *
    namespace: {{ .name }}-*
    server: https://kubernetes.default.svc
```
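To illustrate these rules, an Application like the following would be accepted on the hub. This is a sketch only; the team name and repo URL are hypothetical:

```
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: team-a-app-of-apps
  namespace: argocd
spec:
  project: team-a
  source:
    repoURL: https://example.com/team-a/gitops.git   # hypothetical repo
    targetRevision: main
    path: apps
  # allowed by the second rule: app-of-apps definitions
  # may only target the adn namespace on the hub
  destination:
    name: in-cluster
    namespace: adn-team-a
```

The same Application with destination `name: in-cluster` and `namespace: team-a-frontend` would be denied by the first rule, while `name: spoke-1` with `namespace: team-a-frontend` would match the wildcard rule.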

See the allow and deny rules in the Argo CD project documentation: https://argo-cd.readthedocs.io/en/latest/user-guide/projects/#managing-projects


149 changes: 149 additions & 0 deletions backstage-resources/docs/hub-and-spoke/hub-and-spoke-basics.md
@@ -0,0 +1,149 @@
# Hub & Spoke Architecture (kubriX prime Feature)

In topologies where a central management cluster manages many workload clusters,
we speak of so-called "Hub and Spoke" topologies.

The "hub" is the central management cluster (or control-plane cluster), and the "spokes" are the workload clusters where your custom applications run.

![image](../img/hub-and-spoke-topology-1.png)

The following kubriX components can run on the central management cluster:

* kubriX delivery
* kubriX security
* kubriX observability
* kubriX portal

At least "kubriX delivery" is essential on the hub, since it is responsible for deploying all apps to the spokes.
It is of course perfectly valid to deploy e.g. kubriX portal and kubriX observability on a separate control-plane cluster.
However, for the sake of simplicity, let's assume every kubriX control-plane component runs on the same control-plane cluster.

The following components are installed on the spokes (via kubriX delivery on the hub):

* your custom application workload
* some kubriX platform services like cert-manager, ingress-nginx, external-secrets
* kubriX security agents like falco, kyverno, ...
* kubriX observability agents like k8s-monitoring (alloy, ...)

## Deployment of kubriX platform services on the hub

This works the same way as in a single-instance topology:
define your bricks in your values file in the folder "platform-apps/target-chart", bootstrap your platform, and let Argo CD manage your platform apps.
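Such a hub values file could look roughly like this. This is a sketch only; the exact list key and brick names depend on your actual target-chart values file:

```
# hypothetical excerpt of a platform-apps/target-chart values file
apps:
  - name: argocd
  - name: cert-manager
  - name: ingress-nginx
  - name: backstage
```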

## Deployment of kubriX platform services on the spokes

To set up the spoke deployments, perform the following steps.

### Add spoke-appset app to your platform

In your "platform-apps/target-chart" values file, add the following stanza:

```
# include app when you want to deploy apps to spoke clusters in a hub-and-spoke architecture
- name: spoke-appset
  destinationNamespaceOverwrite: argocd
```

The values of this spoke-appset need to define a (cluster) generator for the AppSet and default valueFiles (see the example values files).
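For instance, the `default` values that spoke applications later reference as `.Values.default.repoURL` and `.Values.default.targetRevision` could be sketched as follows. The repo URL and file names are placeholders, not taken from the kubriX repos:

```
# hypothetical defaults applied to every spoke application
default:
  repoURL: https://example.com/kubrix-platform.git   # placeholder
  targetRevision: main
  valueFiles:
    - values.yaml
```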

With this, an Argo CD ApplicationSet gets deployed which creates an App-of-Apps for each spoke cluster according to the generator.
This App-of-Apps is defined in the spoke-applications chart as follows.

### Define spoke-applications

The values file of "platform-apps/charts/spoke-applications" is very similar to the "platform-apps/target-chart" values file:
you define a list of applications which should be deployed to all spokes. The semantics are the same as in the target-chart.
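As a sketch, such a spoke-applications values file might list entries like the following. The app names are illustrative; the actual key structure follows your target-chart:

```
# hypothetical excerpt of a spoke-applications values file
apps:
  - name: cert-manager
  - name: external-secrets
  - name: falco
```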

## Propagating spoke specific values to spoke applications

The values files for each spoke application are defined in the `.default.valueFiles` array, which is the same for every spoke cluster.
Therefore, with values files alone there would be no way to set spoke-specific values.

To achieve this, define spoke-specific properties as labels in your [cluster secret](https://argo-cd.readthedocs.io/en/stable/operator-manual/declarative-setup/#clusters).

The value of these labels can then be used in the `valuesObject` attribute of your spoke application.

### Example

Ingress label in cluster secret:

```
apiVersion: v1
kind: Secret
metadata:
  name: spoke-cluster-1    # hypothetical cluster name
  labels:
    # required so Argo CD recognizes this secret as a cluster secret
    argocd.argoproj.io/secret-type: cluster
    spoke.kubrix.io/ingress-domain: staging.kubrix.cloud
```

spoke-appset values file:

Use this label in your generator attribute and the value in your parameters attribute:

```
generator:
  - clusters:
      selector:
        matchExpressions:
          - key: name
            operator: NotIn
            values:
              - in-cluster
      values:
        ingressDomain: '{{index .metadata.labels "spoke.kubrix.io/ingress-domain"}}'

parameters:
  - name: default.repoURL
    value: '{{ .Values.default.repoURL }}'
  - name: default.targetRevision
    value: '{{ .Values.default.targetRevision }}'
  - name: destinationServer
    value: '{{`{{.server}}`}}'
  - name: destinationClusterName
    value: '{{`{{.name}}`}}'
  - name: ingressDomain
    value: '{{`{{.values.ingressDomain}}`}}'
```

Now the `ingressDomain` value gets propagated to your spoke-applications and can be used in the `valuesObject` of the applications list:

```
- name: falco
  valuesObject:
    falco:
      falcosidekick:
        webui:
          ingress:
            hosts:
              - host: falco.{{ .Values.ingressDomain }}
                paths:
                  - path: /
                    pathType: Prefix
            tls:
              - secretName: falco-server-tls
                hosts:
                  - falco.{{ .Values.ingressDomain }}
  annotations:
    argocd.argoproj.io/compare-options: ServerSideDiff=true
  helmOptions:
    skipCrds: true
  syncOptions:
    - ServerSideApply=true
```

## Overall composition of Apps-of-Apps and AppSets in Hub & Spoke

At first, it can be quite confusing how all these pieces work together.
So let's have a look at the overall composition of App-of-Apps, AppSets, and apps, and where they get deployed to.

![image](../img/hub-and-spoke-topology-2.png)

The spoke-appset is part of the bootstrap-app. This spoke-appset creates an ApplicationSet 'spoke-applications',
which creates a spoke-applications App-of-Apps for each spoke.

While the grey applications get deployed on the Hub, the yellow and green applications get deployed on the corresponding spokes.







3 changes: 3 additions & 0 deletions backstage-resources/mkdocs.yaml
Expand Up @@ -28,3 +28,6 @@ nav:
- Testing the platform: platform-testing.md
- Mimir Runbooks: runbooks/mimir.md
- Loki Runbooks: runbooks/loki.md
- Hub & Spoke (kubriX prime):
- Hub & Spoke Architecture: hub-and-spoke/hub-and-spoke-basics.md
- Deploying custom apps to spokes: hub-and-spoke/application-deployment.md