K8SPSMDB-1237: add incremental backups #1836
base: main
Conversation
@pooknull please add an example into https://github.com/percona/percona-server-mongodb-operator/blob/main/deploy/cr.yaml
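A possible shape for such an example is sketched below. This is only an illustration, not the PR's actual cr.yaml change: the `type` values come from this PR, while the task names, schedules, and `storageName` are hypothetical placeholders following the existing `deploy/cr.yaml` conventions.

```yaml
backup:
  enabled: true
  tasks:
    # Hypothetical scheduled tasks; only the `type` values are from this PR.
    - name: weekly-incremental-base
      enabled: true
      schedule: "0 0 * * 0"
      type: incremental-base   # starts a new incremental chain
      storageName: s3-us-west
    - name: daily-incremental
      enabled: true
      schedule: "0 0 * * 1-6"
      type: incremental        # builds on the latest base in the chain
      storageName: s3-us-west
```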
```go
Type:     t,
IncrBase: cr.Spec.Type == backup.BackupTypeIncrementalBase,
```
Maybe it is worth having here:

```go
cmd := ctrl.Cmd{
	Cmd: ctrl.CmdBackup,
	Backup: &ctrl.BackupCmd{
		Name:             name,
		Type:             cr.Spec.BackupType(),
		IncrBase:         cr.Spec.IsBackupTypeIncrementalBase(),
		Compression:      cr.Spec.Compression,
		CompressionLevel: compLevel,
	},
}
```
And having inside the spec some functions like the following:

```go
func (p PerconaServerMongoDBBackupSpec) IsBackupTypeIncrementalBase() bool
func (p PerconaServerMongoDBBackupSpec) BackupType() defs.BackupType
```
Mainly because this logic belongs to the backup spec type rather than to the backup and cmd creation itself.
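A minimal sketch of what those helpers might look like, following the reviewer's suggested signatures. The types here are trimmed stand-ins (the real `defs.BackupType` comes from PBM, and the real spec has more fields); the mapping of `incremental-base` back to a plain incremental type is an assumption about how the base flag is carried separately in the backup command.

```go
package main

import "fmt"

// BackupType mirrors PBM's defs.BackupType for this sketch.
type BackupType string

const (
	LogicalBackup     BackupType = "logical"
	IncrementalBackup BackupType = "incremental"
	// Operator-side pseudo-type: an incremental backup that starts a new chain.
	BackupTypeIncrementalBase BackupType = IncrementalBackup + "-base"
)

// PerconaServerMongoDBBackupSpec is trimmed to the field this sketch needs.
type PerconaServerMongoDBBackupSpec struct {
	Type BackupType
}

// IsBackupTypeIncrementalBase reports whether the spec asks for a base
// incremental backup.
func (p PerconaServerMongoDBBackupSpec) IsBackupTypeIncrementalBase() bool {
	return p.Type == BackupTypeIncrementalBase
}

// BackupType maps the operator-side pseudo-type back to the plain
// incremental type that PBM understands; the "base" flag travels
// separately in the backup command (IncrBase).
func (p PerconaServerMongoDBBackupSpec) BackupType() BackupType {
	if p.Type == BackupTypeIncrementalBase {
		return IncrementalBackup
	}
	return p.Type
}

func main() {
	spec := PerconaServerMongoDBBackupSpec{Type: BackupTypeIncrementalBase}
	fmt.Println(spec.BackupType(), spec.IsBackupTypeIncrementalBase())
}
```

This keeps the `ctrl.Cmd` construction free of type-comparison logic, as the comment suggests.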
pkg/psmdb/backup/pbm.go (outdated):

```go
const (
	BackupTypeIncrementalBase defs.BackupType = defs.IncrementalBackup + "-base"
)
```
We can also move this type to pkg/apis/psmdb/v1/perconaservermongodbbackup_types.go, since it is more related to:

```go
type PerconaServerMongoDBBackupSpec struct {
	...
	// +kubebuilder:validation:Enum={logical,physical,incremental}
	Type defs.BackupType `json:"type,omitempty"`
}
```
```diff
@@ -17,7 +17,7 @@ type PerconaServerMongoDBBackupSpec struct {
 	Compression      compress.CompressionType `json:"compressionType,omitempty"`
 	CompressionLevel *int                     `json:"compressionLevel,omitempty"`

-	// +kubebuilder:validation:Enum={logical,physical}
+	// +kubebuilder:validation:Enum={logical,physical,incremental}
```
Don't we need to add `incremental-base` here?
```sh
desc 'Testing on not sharded cluster'

echo "Creating PSMDB cluster"
```
Please use the `log` function instead of `echo`.
```diff
@@ -17,7 +17,7 @@ type PerconaServerMongoDBBackupSpec struct {
 	Compression      compress.CompressionType `json:"compressionType,omitempty"`
 	CompressionLevel *int                     `json:"compressionLevel,omitempty"`

-	// +kubebuilder:validation:Enum={logical,physical,incremental}
+	// +kubebuilder:validation:Enum={logical,physical,incremental,incremental-base}
```
Please regenerate the CRDs.
```diff
@@ -43,18 +43,18 @@ run_recovery_check() {
 	local base=${3:-true}

 	wait_restore "${backup_name}" "${cluster}" "requested" "0" "3000"
-	echo
+	log ""
```
We can continue using `echo` for empty lines.
The operator has always supported multiple storages, but it didn't have native support for multiple backup storages until PBM v2.6.0. The operator was reconfiguring PBM every time the user selected a storage for their backups/restores different from the previous one. This caused long wait periods, especially with storages containing lots of backups, due to the resync operation. Another limitation was forcing users to have only one backup storage if they wanted to enable point-in-time recovery: with multiple storages, PBM would upload oplog chunks to whichever storage was last used by a backup/restore, which made consistent recovery impossible. PBM v2.6.0 added native support for multiple storages, and these changes introduce it to our operator:

* The user can have one main storage in the PBM configuration. Any other storages can be added as profiles. The main storage can be found with:

  ```
  kubectl exec cluster1-rs0-0 -c backup-agent -- pbm config
  ```

  This commit introduces a new field `main` in the storage spec:

  ```
  storages:
    s3-us-west:
      main: true
      type: s3
      s3:
        bucket: operator-testing
        credentialsSecret: cluster1-s3-secrets
        region: us-west-2
  ```

  If the user has only one storage configured in `cr.yaml`, the operator automatically uses it as the main storage. If more than one storage is configured, exactly one of them must have `main: true`; the user can't have more than one storage with `main: true`. If the user changes the main storage in `cr.yaml`, the operator configures PBM with the new storage and starts a resync.

* Any other storage in `cr.yaml` is added to PBM as a profile. The user can see profiles using the CLI:

  ```
  kubectl exec cluster1-rs0-0 -c backup-agent -- pbm profile list
  ```

  When the user adds a new profile to `cr.yaml`, the operator adds it to PBM but won't start a resync.
  **`pbm config --force-resync` only starts a resync for the main storage.** To manually resync a profile:

  ```
  kubectl exec cluster1-rs0-0 -c backup-agent -- pbm profile sync <storage-name>
  ```

  If the user starts a restore using a backup in a storage configured as a profile, the operator starts a resync operation for the profile and blocks the restore until the resync finishes.

  Note: profiles are also called external storages in the PBM documentation.

If the user has multiple storages in `cr.yaml` and changes the main storage between them, the operator:

1. configures PBM with the new main storage;
2. adds the old main storage as a profile;
3. deletes the profile for the new main storage.

If the user configures `backupSource` in backups/restores:

* If `cr.yaml` has no storages configured, the operator configures PBM with the storage data in the `backupSource` field. This storage effectively becomes the main storage until the user adds a storage to `cr.yaml`. After a storage is configured, the PBM configuration is overwritten and the `backupSource` storage is gone.
* If `cr.yaml` has a storage configured, the operator adds the `backupSource` storage as a profile.

* Oplog chunks are only uploaded to the main storage. The user can use any backup as the base backup for point-in-time recovery.
* Incremental backup chains all need to be stored in the same storage. TBD after #1836 is merged.

---

Other significant changes in operator behavior:

* The operator now automatically configures PBM on a fresh cluster. Before these changes, PBM was not configured until the user started a backup/restore after deploying a fresh cluster. Now PBM is configured directly with the main storage in `cr.yaml`, and a resync is started in the background.
* There's a new field in the `PerconaServerMongoDB` status: `backupConfigHash`. The operator maintains a hash of the current PBM configuration and reconfigures PBM if the hash changes. Fields in `spec.backup.pitr` are excluded from the hash calculation; they are handled separately.
* If the `PerconaServerMongoDB` resource is annotated with `percona.com/resync-pbm=true`, the operator starts a resync operation for both the main storage and the profiles. Resyncs for profiles are started with the equivalent of `pbm profile sync --all`. These resync operations run in the background and do not block the reconciliation.
* If a backup that has the `percona.com/delete-backup` finalizer is deleted, the operator only deletes its oplog chunks if it is in the main storage.

commit: 9b5d933
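The `backupConfigHash` idea above can be sketched as follows. This is an illustrative reconstruction, not the operator's actual code: the `pbmConfig` struct and its fields are hypothetical stand-ins, and PITR settings are simply left out of the hashed struct, mirroring the note that `spec.backup.pitr` is handled separately.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"encoding/json"
	"fmt"
)

// pbmConfig stands in for the operator's generated PBM configuration;
// the field names are illustrative. PITR settings are deliberately not
// part of this struct, so changing them never triggers a reconfigure.
type pbmConfig struct {
	MainStorage string            `json:"mainStorage"`
	Profiles    map[string]string `json:"profiles,omitempty"`
}

// configHash returns a stable hash of the PBM configuration, the same
// idea as the backupConfigHash field in the PerconaServerMongoDB status.
// encoding/json sorts map keys, so the output is deterministic.
func configHash(cfg pbmConfig) (string, error) {
	b, err := json.Marshal(cfg)
	if err != nil {
		return "", err
	}
	sum := sha256.Sum256(b)
	return hex.EncodeToString(sum[:]), nil
}

func main() {
	// Hash stored in status vs. hash of the currently desired config.
	oldHash, _ := configHash(pbmConfig{MainStorage: "s3-us-west"})
	curHash, _ := configHash(pbmConfig{MainStorage: "s3-eu-central"})
	if oldHash != curHash {
		fmt.Println("PBM config changed: reconfigure and store the new hash")
	}
}
```

Comparing one hash instead of the whole configuration keeps the status object small and makes "did anything change?" a single string comparison per reconcile.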
https://perconadev.atlassian.net/browse/K8SPSMDB-1237
DESCRIPTION
This PR introduces support for incremental physical backups.

New backup types for the field `.spec.type` have been added to the `PerconaServerMongoDBBackup` resource:

* `incremental`: makes an incremental backup
* `incremental-base`: makes a full incremental backup

CHECKLIST

Jira

* Does the Jira ticket have the proper statuses for documentation (`Needs Doc`) and QA (`Needs QA`)?

Tests

* Are OpenShift compare files changed for E2E tests (`compare/*-oc.yml`)?

Config/Logging/Testability