Updating the Kyma backup documentation (#18739)
* kyma-18738 documentation update

* Update 10-backup-kyma.md

* Updated unsupported velero section

* Update 10-backup-kyma.md

* Update 10-backup-kyma.md

* Update 10-backup-kyma.md

* Apply suggestions from code review

Co-authored-by: Iwona Langer <[email protected]>

* Update docs/04-operation-guides/operations/10-backup-kyma.md

Co-authored-by: Iwona Langer <[email protected]>

---------

Co-authored-by: Iwona Langer <[email protected]>
AribaOpensource and IwonaLanger authored Aug 14, 2024
1 parent 28a8298 commit dcc1288
Showing 1 changed file with 21 additions and 16 deletions.

docs/04-operation-guides/operations/10-backup-kyma.md
@@ -25,10 +25,9 @@ Taking volume snapshots is possible thanks to [Container Storage Interface (CSI)
 
 You can create on-demand volume snapshots manually, or set up a periodic job that takes snapshots automatically.
 
-## Back Up Resources Using Velero
-
-You can back up and restore individual resources manually or automatically with Velero. For more information, read the [Velero documentation](https://velero.io/docs/).
-Be aware that a full backup of a Kyma cluster isn't supported. Start with the existing Kyma installation and restore specific resources individually.
+## Back Up Resources Using Third-Party Tools
+>[!WARNING]
+> Third-party tools like Velero are not currently supported. These tools may have limitations and might not fully support automated cluster backups. They often require specific access rights to cluster infrastructure, which may not be available in Kyma's managed offerings, where access rights to the infrastructure account are restricted.
 
 ## Create On-Demand Volume Snapshots

@@ -46,7 +45,7 @@ If you want to provision a new volume or restore the existing one, create on-dem
 - for Azure: `disk.csi.azure.com`
 
    ```yaml
-   apiVersion: snapshot.storage.k8s.io/v1beta1
+   apiVersion: snapshot.storage.k8s.io/v1
    kind: VolumeSnapshotClass
    metadata:
      annotations:
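
The hunk above shows only the first lines of the manifest. For orientation, a complete VolumeSnapshotClass on the v1 API might look as follows; this is a sketch, not part of the commit, using the Azure driver from the list above, with the metadata name and deletionPolicy as illustrative assumptions:

```yaml
# Sketch of a full VolumeSnapshotClass on snapshot.storage.k8s.io/v1.
# The driver is the Azure value listed above; the name and deletionPolicy
# are illustrative assumptions, not taken from the commit.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: azure-snapshot-class   # hypothetical name
driver: disk.csi.azure.com
deletionPolicy: Delete         # drop the backing snapshot when the object is deleted
```
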
@@ -59,7 +58,7 @@ If you want to provision a new volume or restore the existing one, create on-dem
 2. Create a VolumeSnapshot resource:
 
    ```yaml
-   apiVersion: snapshot.storage.k8s.io/v1beta1
+   apiVersion: snapshot.storage.k8s.io/v1
    kind: VolumeSnapshot
    metadata:
      name: snapshot
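
Once such a VolumeSnapshot is ready, it can serve as the data source of a new PersistentVolumeClaim, which covers the "provision a new volume or restore the existing one" case mentioned in the hunk header. A minimal sketch, with the claim name, size, and access mode as illustrative assumptions:

```yaml
# Sketch: restore a volume by provisioning a new PVC from the snapshot
# named "snapshot" (created in step 2 above). Name, size, and access mode
# are illustrative assumptions, not taken from the commit.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: restored-pvc
spec:
  dataSource:
    name: snapshot                     # the VolumeSnapshot from the step above
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi                    # must be at least the source volume's size
```
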
@@ -144,7 +143,7 @@ subjects:
 - kind: ServiceAccount
   name: volume-snapshotter
 ---
-apiVersion: batch/v1beta1
+apiVersion: batch/v1
 kind: CronJob
 metadata:
   name: volume-snapshotter
@@ -156,6 +155,7 @@ spec:
   template:
     spec:
       serviceAccountName: volume-snapshotter
+      restartPolicy: Never
       containers:
       - name: job
         image: europe-docker.pkg.dev/kyma-project/prod/tpi/k8s-tools:v20231026-aa6060ec
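
Two related fixes meet in the last two hunks: the CronJob moves from batch/v1beta1, which Kubernetes removed in v1.25, to the stable batch/v1 API, and the pod template gains restartPolicy: Never, since Job pods only accept Never or OnFailure rather than the default Always. A minimal skeleton showing where these fields sit; the schedule and command are illustrative assumptions, not shown in this part of the diff:

```yaml
# Skeleton of a batch/v1 CronJob; schedule and command are illustrative
# assumptions, not taken from the commit.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: volume-snapshotter
spec:
  schedule: "0 2 * * *"            # assumption: daily at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: volume-snapshotter
          restartPolicy: Never     # required for Job pods: Never or OnFailure
          containers:
          - name: job
            image: europe-docker.pkg.dev/kyma-project/prod/tpi/k8s-tools:v20231026-aa6060ec
            command: ["/bin/sh", "-c", "echo snapshot logic from the diff goes here"]
```
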
@@ -165,16 +165,15 @@ spec:
         - |
           # Create volume snapshot with random name.
           RANDOM_ID=$(openssl rand -hex 4)
           cat <<EOF | kubectl apply -f -
-          apiVersion: snapshot.storage.k8s.io/v1beta1
+          apiVersion: snapshot.storage.k8s.io/v1
           kind: VolumeSnapshot
           metadata:
             name: volume-snapshot-${RANDOM_ID}
             namespace: {NAMESPACE}
             labels:
-              "job": "volume-snapshotter"
-              "name": "volume-snapshot-${RANDOM_ID}"
+              job: volume-snapshotter
+              name: volume-snapshot-${RANDOM_ID}
           spec:
             volumeSnapshotClassName: {SNAPSHOT_CLASS_NAME}
             source:
@@ -192,14 +191,20 @@ spec:
               fi
               if [[ "${i}" -lt "${attempts}" ]]; then
-                echo "Volume snapshot is not yet ready to use, let's wait ${retryTimeInSec} seconds and retry. Attempts ${i} of ${attempts}."
+                echo "Volume snapshot [volume-snapshot-${RANDOM_ID}] is not yet ready to use, let's wait ${retryTimeInSec} seconds and retry. Attempts ${i} of ${attempts}."
               else
-                echo "Volume snapshot is still not ready to use after ${attempts} attempts, giving up."
+                echo "Volume snapshot [volume-snapshot-${RANDOM_ID}] is still not ready to use after ${attempts} attempts, giving up."
                 exit 1
               fi
               sleep ${retryTimeInSec}
             done
-            # Delete old volume snapshots.
-            kubectl delete volumesnapshot -n {NAMESPACE} -l job=volume-snapshotter,name!=volume-snapshot-${RANDOM_ID}
+            # Retain only the last $total_snapshot_count snapshots.
+            total_snapshot_count=1
+            snapshots_to_delete=$(kubectl get volumesnapshot -n {NAMESPACE} -l job=volume-snapshotter -o=jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}' | sort -r | tail -n +$(($total_snapshot_count + 1)))
+            if [ -n "$snapshots_to_delete" ]; then
+              echo "Deleting old snapshots: $snapshots_to_delete"
+              echo "$snapshots_to_delete" | xargs -n 1 kubectl -n {NAMESPACE} delete volumesnapshot
+            else
+              echo "No snapshots to delete, keeping the last $total_snapshot_count snapshots."
+            fi
    ```
