[velero] Chart ux improvements #138

Open · wants to merge 23 commits into base: main · Changes from all commits
9 changes: 9 additions & 0 deletions README.md
@@ -20,6 +20,15 @@ You can then run `helm search repo vmware-tanzu` to see the charts.

TBD

### Running Tests

To run the unit tests in this repository, install the helm-unittest plugin and run:

```sh
helm plugin install https://github.com/quintush/helm-unittest
helm unittest charts/velero
```

## License

[Apache 2.0 License](./LICENSE).
2 changes: 1 addition & 1 deletion charts/velero/Chart.yaml
@@ -2,7 +2,7 @@ apiVersion: v1
appVersion: 1.5.2
description: A Helm chart for velero
name: velero
version: 2.13.6
version: 2.14.0
home: https://github.com/vmware-tanzu/velero
icon: https://cdn-images-1.medium.com/max/1600/1*-9mb3AKnKdcL_QD3CMnthQ.png
sources:
39 changes: 15 additions & 24 deletions charts/velero/README.md
@@ -24,7 +24,6 @@ The default configuration values for this chart are listed in values.yaml.

See Velero's full [official documentation](https://velero.io/docs/v1.5/basic-install/). More specifically, find your provider in the Velero list of [supported providers](https://velero.io/docs/v1.5/supported-providers/) for specific configuration information and examples.


#### Using Helm 3

First, create the namespace: `kubectl create namespace <YOUR NAMESPACE>`
@@ -38,19 +37,15 @@ Specify the necessary values using the --set key=value[,key=value] argument to h
```bash
helm install vmware-tanzu/velero --namespace <YOUR NAMESPACE> \
--set-file credentials.secretContents.cloud=<FULL PATH TO FILE> \
--set configuration.provider=<PROVIDER NAME> \
--set configuration.backupStorageLocation.name=<BACKUP STORAGE LOCATION NAME> \
--set configuration.backupStorageLocation.bucket=<BUCKET NAME> \
--set configuration.backupStorageLocation.config.region=<REGION> \
--set configuration.volumeSnapshotLocation.name=<VOLUME SNAPSHOT LOCATION NAME> \
--set configuration.volumeSnapshotLocation.config.region=<REGION> \
--set provider=<PROVIDER NAME> \
--set backupStorageLocation.name=<BACKUP STORAGE LOCATION NAME> \
--set backupStorageLocation.bucket=<BUCKET NAME> \
--set backupStorageLocation.config.region=<REGION> \
--set volumeSnapshotLocation.name=<VOLUME SNAPSHOT LOCATION NAME> \
--set volumeSnapshotLocation.config.region=<REGION> \
--set image.repository=velero/velero \
--set image.tag=v1.5.1 \
--set image.pullPolicy=IfNotPresent \
--set initContainers[0].name=velero-plugin-for-aws \
--set initContainers[0].image=velero/velero-plugin-for-aws:v1.1.0 \
--set initContainers[0].volumeMounts[0].mountPath=/target \
--set initContainers[0].volumeMounts[0].name=plugins \
--generate-name
```

@@ -66,7 +61,7 @@ helm install vmware-tanzu/velero --namespace <YOUR NAMESPACE> -f values.yaml --g
If a value needs to be added or changed, you may do so with the `upgrade` command. An example:

```bash
helm upgrade vmware-tanzu/velero <RELEASE NAME> --namespace <YOUR NAMESPACE> --reuse-values --set configuration.provider=<NEW PROVIDER>
helm upgrade vmware-tanzu/velero <RELEASE NAME> --namespace <YOUR NAMESPACE> --reuse-values --set provider=<NEW PROVIDER>
```

#### Using Helm 2
@@ -90,19 +85,15 @@ Specify the necessary values using the --set key=value[,key=value] argument to h
```bash
helm install vmware-tanzu/velero --namespace <YOUR NAMESPACE> \
--set-file credentials.secretContents.cloud=<FULL PATH TO FILE> \
--set configuration.provider=aws \
--set configuration.backupStorageLocation.name=<BACKUP STORAGE LOCATION NAME> \
--set configuration.backupStorageLocation.bucket=<BUCKET NAME> \
--set configuration.backupStorageLocation.config.region=<REGION> \
--set configuration.volumeSnapshotLocation.name=<VOLUME SNAPSHOT LOCATION NAME> \
--set configuration.volumeSnapshotLocation.config.region=<REGION> \
--set provider=aws \
--set backupStorageLocation.name=<BACKUP STORAGE LOCATION NAME> \
--set backupStorageLocation.bucket=<BUCKET NAME> \
--set backupStorageLocation.config.region=<REGION> \
--set volumeSnapshotLocation.name=<VOLUME SNAPSHOT LOCATION NAME> \
--set volumeSnapshotLocation.config.region=<REGION> \
--set image.repository=velero/velero \
--set image.tag=v1.5.1 \
--set image.pullPolicy=IfNotPresent \
--set initContainers[0].name=velero-plugin-for-aws \
--set initContainers[0].image=velero/velero-plugin-for-aws:v1.1.0 \
--set initContainers[0].volumeMounts[0].mountPath=/target \
--set initContainers[0].volumeMounts[0].name=plugins
--set image.pullPolicy=IfNotPresent
```
@jenting (Collaborator), Aug 17, 2020:

I like this idea that the user does not need to specify the plugin images anymore, and I think the code below is a bit redundant in my usage.

    --set initContainers[0].volumeMounts[0].mountPath=/target \	
    --set initContainers[0].volumeMounts[0].name=plugins

However, we have a use case where we mount two plugin images, one for AWS and the other for CSI, so our command is:

    --set initContainers[0].name=velero-plugin-for-aws \
    --set initContainers[0].image=velero/velero-plugin-for-aws:v1.1.0 \
    --set initContainers[0].volumeMounts[0].mountPath=/target \
    --set initContainers[0].volumeMounts[0].name=plugins \
    --set initContainers[1].name=velero-plugin-for-csi \
    --set initContainers[1].image=velero/velero-plugin-for-csi:v0.1.1 \
    --set initContainers[1].volumeMounts[0].mountPath=/target \
    --set initContainers[1].volumeMounts[0].name=plugins

With this PR, I think our use case can't be fulfilled. But I'd rather see if it's possible to remove the redundant volumeMounts[0] and have the chart generate them automatically, which becomes:

    --set initContainers[0].name=velero-plugin-for-aws \
    --set initContainers[0].image=velero/velero-plugin-for-aws:v1.1.0 \
    --set initContainers[1].name=velero-plugin-for-csi \
    --set initContainers[1].image=velero/velero-plugin-for-csi:v0.1.1

@kav (Author):

You can add additional initContainers as before; the one for the provider will be added automatically. At present you'd add (assuming aws is the provider):

    --set initContainers[0].name=velero-plugin-for-csi \
    --set initContainers[0].image=velero/velero-plugin-for-csi:v1.1.0 \
    --set initContainers[0].volumeMounts[0].mountPath=/target \
    --set initContainers[0].volumeMounts[0].name=plugins

Are you using both because the other provider is in a backup storage location or snapshot location? Can we just add initContainers for all the various providers across the values file and leave initContainers as a standard extension point for fallback?

I'd recommend against automounting plugins at /target for all initContainers as that is a general extension point. If you really want something similar, perhaps it's worth adding a list of plugin images to add over and above the provider one? Something like

    additionalPlugins:
    - name: velero-plugin-for-csi
      image: velero/velero-plugin-for-csi:v0.1.1

which then creates an initContainer entry with the volume mount.

@jenting (Collaborator), Aug 18, 2020:

> Are you using both because the other provider is in a backup storage location or snapshot location? Can we just add initContainers for all the various providers across the values file and leave initContainers as a standard extension point for fallback?

Yes, we're using S3-compatible storage (MinIO), which requires velero-plugin-for-aws, and we also deploy ceph-csi to take volume snapshots via the Kubernetes volume snapshot CRDs with velero-plugin-for-csi.

> I'd recommend against automounting plugins at /target for all initContainers as that is a general extension point. If you really want something similar, perhaps it's worth adding a list of plugin images to add over and above the provider one? Something like

>     additionalPlugins:
>     - name: velero-plugin-for-csi
>       image: velero/velero-plugin-for-csi:v0.1.1
>
> which then creates an initContainer entry with the volume mount.

Sounds good. Personally I'd like this approach; then we could remove the initContainers in values.yaml.

Any feedback on this? cc @carlisia @nrb @ashish-amarnath

@kav (Author):

I added initContainers for each provider configured at the various levels in the values, so if you have different providers for configuration, backupStorageLocation, and snapshotLocation, it will add all the plugins.

It looks like only the configuration.provider has a secret injected into the primary container. Is that a concern? Should we be refactoring the credentials to allow for multiple providers here as well?
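For illustration, a minimal values sketch of that behavior (the gcp provider here is a hypothetical second provider; the deployment template collects the distinct providers with `compact | uniq` and adds a plugin initContainer per distinct provider):

```yaml
# Hypothetical values with providers set at different levels.
provider: aws                # top-level default provider
backupStorageLocation:
  provider: aws              # same provider, deduplicated by `uniq`
volumeSnapshotLocation:
  provider: gcp              # a second provider adds a second plugin
```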

@kav (Author):

Also worth mentioning: I removed the "data" in helpers and put it into values; it dumbs the logic down nicely and moves the images to standard patterns. This is nicer as it makes the images Flux HelmOperator friendly, and I'm sure it has other similar benefits.
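For context, the standard image pattern referred to here is the plain values map the chart now consumes via the helpers (a sketch; `digest` is optional and, per `velero.image-from-values`, takes precedence over `tag`):

```yaml
image:
  repository: velero/velero
  tag: v1.5.1              # ignored if digest is set
  # digest: sha256:...     # if set, rendered as repository@digest instead of repository:tag
  pullPolicy: IfNotPresent
```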

Contributor:

Velero installs plugins as initContainers with the expectation that they will be copied to the correct mountpoint for invocation; all plugins should end up in the same directory for them to work properly at runtime.
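A sketch of that mechanism as it appears in the rendered pod spec (volume and mount names match this chart; the plugin image is an example):

```yaml
# Each plugin initContainer copies its binary into the shared
# emptyDir volume, which the server container mounts at /plugins.
initContainers:
- name: velero-plugin-for-aws
  image: velero/velero-plugin-for-aws:v1.1.0
  volumeMounts:
  - mountPath: /target        # plugin images copy themselves here
    name: plugins
containers:
- name: velero
  volumeMounts:
  - name: plugins
    mountPath: /plugins       # the server loads plugins from here
volumes:
- name: plugins
  emptyDir: {}
```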

@kav (Author), Aug 18, 2020:

Shouldn't be a problem. They are all copied to /target in the plugins volume as expected. As you mention re the secret, though, we'll only have one set of credentials, right?

Collaborator:

@kav sorry for the late reply.

I'd prefer to change the installation command from

helm install \
  ... \
  --set initContainers[0].name=velero-plugin-for-aws \
  --set initContainers[0].image=velero/velero-plugin-for-aws:v1.1.0 \
  --set initContainers[0].volumeMounts[0].mountPath=/target \
  --set initContainers[0].volumeMounts[0].name=plugins \
  --set initContainers[1].name=velero-plugin-for-csi \
  --set initContainers[1].image=velero/velero-plugin-for-csi:v0.1.1 \
  --set initContainers[1].volumeMounts[0].mountPath=/target \
  --set initContainers[1].volumeMounts[0].name=plugins

to

helm install \
  ... \
  --set plugins=velero/velero-plugin-for-aws:v1.1.0,velero/velero-plugin-for-csi:v0.1.1

Then the installation command is aligned with `velero install`.

WDYT?

@kav (Author), Aug 29, 2020:

Sounds great to me! I wrote a reply discussing the merits of that versus a YAML list, then realized it's a single split call, erased the reply, and implemented support for both.

Updated, we now support

    --set plugins=velero/velero-plugin-for-aws:v1.1.0,velero/velero-plugin-for-csi:v0.1.1
and

    plugins:
    - velero/velero-plugin-for-aws:v1.1.0
    - velero/velero-plugin-for-csi:v0.1.1

and even

    plugins:
    - velero/velero-plugin-for-aws:v1.1.0
    - repository: velero/velero-plugin-for-csi
      digest: sha256:60d47fd25216f13073525823a067eab223d12e695d4b41e480aa3ff13a58c916
      pullPolicy: Always

Also added unit tests for all of that using https://github.com/quintush/helm-unittest
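A sketch of what one of those tests could look like with helm-unittest (this suite is illustrative, not copied from the PR; it assumes the chart's default `pluginImages` entry for aws resolves to an image whose repository ends in `velero-plugin-for-aws`):

```yaml
# charts/velero/tests/deployment_plugins_test.yaml (hypothetical)
suite: deployment plugin initContainers
templates:
  - deployment.yaml
tests:
  - it: adds the provider plugin as an initContainer
    set:
      provider: aws
    asserts:
      - equal:
          path: spec.template.spec.initContainers[0].name
          value: velero-plugin-for-aws
```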


##### Option 2) YAML file
@@ -118,7 +109,7 @@ helm install vmware-tanzu/velero --namespace <YOUR NAMESPACE> -f values.yaml
If a value needs to be added or changed, you may do so with the `upgrade` command. An example:

```bash
helm upgrade vmware-tanzu/velero <RELEASE NAME> --reuse-values --set configuration.provider=<NEW PROVIDER>
helm upgrade vmware-tanzu/velero <RELEASE NAME> --reuse-values --set provider=<NEW PROVIDER>
```

## Upgrading
36 changes: 36 additions & 0 deletions charts/velero/ci/test-values-back-compat.yaml
@@ -0,0 +1,36 @@
installCRDs: true

# Set provider name and backup storage location bucket name
configuration:
  provider: aws
  backupStorageLocation:
    bucket: velero
    config:
      region: us-west-1
      profile: test
  volumeSnapshotLocation:
    provider: aws
    config:
      bucket: velero
      region: us-west-1

# Set a service account so that the CRD clean up job has proper permissions to delete CRDs
serviceAccount:
  server:
    name: velero

schedules:
  mybackup:
    labels:
      myenv: foo
    schedule: "0 0 * * *"
    template:
      ttl: "240h"
      includedNamespaces:
      - foo

# Whether or not to clean up CustomResourceDefinitions when deleting a release.
# Cleaning up CRDs will delete the BackupStorageLocation and VolumeSnapshotLocation instances, which would have to be reconfigured.
# Backup data in object storage will _not_ be deleted, however Backup instances in the Kubernetes API will.
# Always clean up CRDs in CI.
cleanUpCRDs: true
33 changes: 16 additions & 17 deletions charts/velero/ci/test-values.yaml
@@ -1,33 +1,32 @@
 installCRDs: true
 
 # Set provider name and backup storage location bucket name
-configuration:
-  provider: aws
-  backupStorageLocation:
-    bucket: velero
-    config:
-      region: us-west-1
-      profile: test
-  volumeSnapshotLocation:
-    provider: aws
-    config:
-      bucket: velero
-      region: us-west-1
+provider: aws
+backupStorageLocation:
+  bucket: velero
+  config:
+    region: us-west-1
+    profile: test
+volumeSnapshotLocation:
+  config:
+    bucket: velero
+    region: us-west-1
+
+# Set a service account so that the CRD clean up job has proper permissions to delete CRDs
+serviceAccount:
+  server:
+    name: velero
 
 schedules:
-  mybackup:
+- name: mybackup
   labels:
     myenv: foo
   schedule: "0 0 * * *"
   template:
     ttl: "240h"
     includedNamespaces:
     - foo
-
-# Set a service account so that the CRD clean up job has proper permissions to delete CRDs
-serviceAccount:
-  server:
-    name: velero

# Whether or not to clean up CustomResourceDefinitions when deleting a release.
# Cleaning up CRDs will delete the BackupStorageLocation and VolumeSnapshotLocation instances, which would have to be reconfigured.
41 changes: 30 additions & 11 deletions charts/velero/templates/_helpers.tpl
@@ -79,34 +79,53 @@ Create the Restic priority class name.
Create the backup storage location name
*/}}
{{- define "velero.backupStorageLocation.name" -}}
{{- with .Values.configuration.backupStorageLocation -}}
{{ default "default" .name }}
{{- end -}}
{{ coalesce .Values.configuration.backupStorageLocation.name .Values.backupStorageLocation.name "default" }}
{{- end -}}

{{/*
Create the backup storage location provider
*/}}
{{- define "velero.backupStorageLocation.provider" -}}
{{- with .Values.configuration -}}
{{ default .provider .backupStorageLocation.provider }}
{{- end -}}
{{ coalesce .Values.configuration.backupStorageLocation.provider .Values.backupStorageLocation.provider .Values.configuration.provider .Values.provider }}
{{- end -}}

{{/*
Create the volume snapshot location name
*/}}
{{- define "velero.volumeSnapshotLocation.name" -}}
{{- with .Values.configuration.volumeSnapshotLocation -}}
{{ default "default" .name }}
{{- end -}}
{{ coalesce .Values.configuration.volumeSnapshotLocation.name .Values.volumeSnapshotLocation.name "default" }}
{{- end -}}

{{/*
Create the volume snapshot location provider
*/}}
{{- define "velero.volumeSnapshotLocation.provider" -}}
{{- with .Values.configuration -}}
{{ default .provider .volumeSnapshotLocation.provider }}
{{ coalesce .Values.configuration.volumeSnapshotLocation.provider .Values.volumeSnapshotLocation.provider .Values.configuration.provider .Values.provider }}
{{- end -}}

{{- define "velero.image-from-values" -}}
{{- if kindIs "string" . -}}
{{- . }}
{{- else -}}
{{- if .digest -}}
{{- .repository }}@{{ .digest }}
{{- else -}}
{{- .repository }}:{{ .tag }}
{{- end -}}
{{- end -}}
{{- end -}}
{{- define "velero.pull-policy-from-values" -}}
{{- if kindIs "string" . -}}
{{ "IfNotPresent" -}}
{{- else -}}
{{ .pullPolicy -}}
{{- end -}}
{{- end -}}

{{- define "velero.name-from-values" -}}
{{- if kindIs "string" . -}}
{{ splitList "@" . | first | splitList ":" | first | splitList "/" | last -}}
{{- else -}}
{{ splitList "/" .repository | last -}}
{{- end -}}
{{- end -}}
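Taken together, these helpers normalize both plugin-image forms. A sketch of how they resolve, using the string and map forms from the discussion above (the comments show the expected helper output, not chart source):

```yaml
plugins:
- velero/velero-plugin-for-aws:v1.1.0        # string form
- repository: velero/velero-plugin-for-csi   # map form
  digest: sha256:60d47fd25216f13073525823a067eab223d12e695d4b41e480aa3ff13a58c916
  pullPolicy: Always

# velero.image-from-values:       string -> velero/velero-plugin-for-aws:v1.1.0
#                                 map    -> velero/velero-plugin-for-csi@sha256:60d47f...
# velero.pull-policy-from-values: string -> IfNotPresent (default)
#                                 map    -> Always
# velero.name-from-values:        string -> velero-plugin-for-aws
#                                 map    -> velero-plugin-for-csi
```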
2 changes: 1 addition & 1 deletion charts/velero/templates/backupstoragelocation.yaml
@@ -13,7 +13,7 @@ metadata:
helm.sh/chart: {{ include "velero.chart" . }}
spec:
provider: {{ include "velero.backupStorageLocation.provider" . }}
{{- with .Values.configuration.backupStorageLocation }}
{{- with coalesce .Values.configuration.backupStorageLocation .Values.backupStorageLocation }}
objectStorage:
bucket: {{ .bucket }}
{{- with .prefix }}
64 changes: 42 additions & 22 deletions charts/velero/templates/deployment.yaml
@@ -1,5 +1,7 @@
{{- if .Values.configuration.provider -}}
{{- $provider := .Values.configuration.provider -}}
{{- if or .Values.provider .Values.configuration.provider -}}
{{- $providers := list .Values.provider .Values.backupStorageLocation.provider .Values.volumeSnapshotLocation.provider .Values.configuration.backupStorageLocation.provider .Values.configuration.volumeSnapshotLocation.provider | compact | uniq -}}
{{- $provider := first $providers -}}
{{- $useSecret := or .Values.credentials.existingSecret (or .Values.credentials.secretContents .Values.credentials.extraEnvVars) -}}
apiVersion: apps/v1
kind: Deployment
metadata:
@@ -49,12 +51,8 @@ spec:
{{- end }}
containers:
- name: velero
{{- if .Values.image.digest }}
image: "{{ .Values.image.repository }}@{{ .Values.image.digest }}"
{{- else }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
{{- end }}
imagePullPolicy: {{ .Values.image.pullPolicy }}
image: {{ include "velero.image-from-values" .Values.image }}
imagePullPolicy: {{ include "velero.pull-policy-from-values" .Values.image }}
{{- if .Values.metrics.enabled }}
ports:
- name: monitoring
@@ -64,26 +62,26 @@
- /velero
args:
- server
{{- with .Values.configuration }}
{{- with .backupSyncPeriod }}
{{- with .Values }}
{{- with coalesce .configuration.backupSyncPeriod .backupSyncPeriod }}
- --backup-sync-period={{ . }}
{{- end }}
{{- with .resticTimeout }}
{{- with coalesce .configuration.resticTimeout .resticTimeout }}
- --restic-timeout={{ . }}
{{- end }}
{{- if .restoreOnlyMode }}
{{- if coalesce .configuration.restoreOnlyMode .restoreOnlyMode }}
- --restore-only
{{- end }}
{{- with .restoreResourcePriorities }}
{{- with coalesce .configuration.restoreResourcePriorities .restoreResourcePriorities }}
- --restore-resource-priorities={{ . }}
{{- end }}
{{- with .features }}
{{- with coalesce .configuration.features .features }}
- --features={{ . }}
{{- end }}
{{- with .logLevel }}
{{- with coalesce .configuration.logLevel .logLevel }}
- --log-level={{ . }}
{{- end }}
{{- with .logFormat }}
{{- with coalesce .configuration.logFormat .logFormat }}
- --log-format={{ . }}
{{- end }}
{{- if .defaultVolumesToRestic }}
@@ -97,7 +95,7 @@ spec:
volumeMounts:
- name: plugins
mountPath: /plugins
{{- if .Values.credentials.useSecret }}
{{- if or .Values.credentials.secretContents .Values.credentials.extraEnvVars }}
- name: cloud-credentials
mountPath: /credentials
- name: scratch
@@ -116,7 +114,7 @@
fieldPath: metadata.namespace
- name: LD_LIBRARY_PATH
value: /plugins
{{- if .Values.credentials.useSecret }}
{{- if $useSecret }}
{{- if eq $provider "aws" }}
- name: AWS_SHARED_CREDENTIALS_FILE
value: /credentials/cloud
@@ -131,7 +129,7 @@
value: /credentials/cloud
{{- end }}
{{- end }}
{{- with .Values.configuration.extraEnvVars }}
{{- with coalesce .Values.configuration.extraEnvVars .Values.extraEnvVars }}
{{- range $key, $value := . }}
- name: {{ default "none" $key }}
value: {{ default "none" $value }}
@@ -146,15 +144,37 @@
key: {{ default "none" $key }}
{{- end }}
{{- end }}
{{- if .Values.initContainers }}
initContainers:
{{- $plugins := list -}}
{{- if kindIs "string" .Values.plugins -}}
{{- $plugins = splitList "," .Values.plugins -}}
{{- else -}}
{{- $plugins = .Values.plugins -}}
{{- end -}}
{{- range $providers -}}
{{- $plugins = append $plugins (pluck . $.Values.pluginImages | first) }}
{{- end }}
{{- range $pluginImage := $plugins }}
- name: {{ include "velero.name-from-values" $pluginImage }}
image: {{ include "velero.image-from-values" $pluginImage }}
imagePullPolicy: {{ include "velero.pull-policy-from-values" $pluginImage }}
volumeMounts:
- mountPath: /target
name: plugins
{{- end }}
{{- if .Values.initContainers }}
{{- toYaml .Values.initContainers | nindent 8 }}
{{- end }}
{{- end }}


volumes:
{{- if .Values.credentials.useSecret }}
{{- if $useSecret }}
- name: cloud-credentials
secret:
secretName: {{ include "velero.secretName" . }}
items:
- key: {{ .Values.credentials.existingSecretKey }}
path: cloud
{{- end }}
- name: plugins
emptyDir: {}
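End to end, the deployment templating above turns the `plugins` value and the detected providers into plugin initContainers. A sketch of the rendered output, assuming `provider: aws`, one extra CSI plugin, and a `pluginImages` default mapping aws to `velero/velero-plugin-for-aws:v1.1.0` (that default is not shown in this diff):

```yaml
# values (hypothetical)
provider: aws
plugins:
- velero/velero-plugin-for-csi:v0.1.1

# rendered Deployment (excerpt): user plugins come first, then one
# entry per distinct provider is appended from pluginImages
initContainers:
- name: velero-plugin-for-csi
  image: velero/velero-plugin-for-csi:v0.1.1
  imagePullPolicy: IfNotPresent
  volumeMounts:
  - mountPath: /target
    name: plugins
- name: velero-plugin-for-aws
  image: velero/velero-plugin-for-aws:v1.1.0   # assumed pluginImages default
  imagePullPolicy: IfNotPresent
  volumeMounts:
  - mountPath: /target
    name: plugins
```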