On transient failure in velero csi plugin, the volumesnapshot is getting deleted without updating the object store #8116
@Lyndon-Li I don't think the design will be able to address this issue. Velero doesn't patch the VSC during the finalizing phase, and that is still not covered as part of design #8063.
@soumyapattnaik
I don't think this matches the line you pasted. Generally, Velero removes the VolumeSnapshot to make sure it doesn't impact the actual snapshot in the storage provider when the resource is removed; I don't think it deliberately removes it when a transient error occurs. Could you double check?
vmware-tanzu/velero-plugin-for-csi#177 (comment) The VolumeSnapshot deletion code is used to purge unneeded VolumeSnapshots after PVC data backup; it is not used for error handling:

```go
if backup.Status.Phase == velerov1api.BackupPhaseFinalizing || backup.Status.Phase == velerov1api.BackupPhaseFinalizingPartiallyFailed {
	p.Log.WithField("Backup", fmt.Sprintf("%s/%s", backup.Namespace, backup.Name)).
		WithField("BackupPhase", backup.Status.Phase).Debugf("Clean VolumeSnapshots.")
	util.DeleteVolumeSnapshot(vs, *vsc, backup, snapshotClient.SnapshotV1(), p.Log)
	return item, nil, "", nil, nil
}
```
Per #8063 (comment) I think we have agreement that it can cover the finalizing phase.
@reasonerjt - on any transient failure at https://github.com/vmware-tanzu/velero-plugin-for-csi/blob/e8f7af4b65f0ed6c69d340aefe2257dc25cd013f/internal/backup/volumesnapshot_action.go#L98C2-L98C157, execution goes into the code at https://github.com/vmware-tanzu/velero-plugin-for-csi/blob/e8f7af4b65f0ed6c69d340aefe2257dc25cd013f/internal/backup/volumesnapshot_action.go#L100. For my customer, the Get failed with a TLS handshake error, and then the following log was printed: Deleting Volumesnapshot XX/XXXX :: {"cmd":"/plugins/velero-plugin-for-csi"}. Also, from our ARM traces I could see that the disk snapshot gets cleaned up for this VS. For the other VSes, where the Get calls succeeded, execution went into line https://github.com/vmware-tanzu/velero-plugin-for-csi/blob/e8f7af4b65f0ed6c69d340aefe2257dc25cd013f/internal/backup/volumesnapshot_action.go#L104, as pointed out by you above.
I see. The error happened while waiting for VolumeSnapshot.Status.ReadyToUse to turn true.
The error was a transient one, where the call was not reaching the API server because of a TLS handshake timeout. The VS and VSC were present during the Get call.
By a transient error of TLS handshake timeout, do you mean the Velero pod lost its connection with the kube-apiserver? That could cause Velero's client to fail to read the VS and VSC. If so, this issue is also related to the request for a retry mechanism for kube-apiserver calls.
Yes, correct. Building retry logic here would help: in my customer's case this failure was observed for only one VS and VSC, while the other two VS/VSC pairs had no issues. Could you please share the issue # where the retry mechanism with the kube-apiserver is being discussed?
A retry mechanism was discussed there, although it may not cover your case. Could you give more information about why the kube-apiserver didn't work temporarily? |
The setup belongs to one of our customers, so I am not sure why the kube-apiserver didn't work temporarily.
For one of our customers it's due to API server SSL certificate rotation, which means TLS wouldn't work temporarily.
@soumyapattnaik It's a valid issue. |
This issue is stale because it has been open 60 days with no activity. Remove stale label or comment or this will be closed in 14 days. If a Velero team member has requested log or more information, please provide the output of the shared commands. |
unstale |
This issue is stale because it has been open 60 days with no activity. Remove stale label or comment or this will be closed in 14 days. If a Velero team member has requested log or more information, please provide the output of the shared commands. |
What steps did you take and what happened:
In the finalizing phase today, we do a Get on the VolumeSnapshot. If it fails due to a transient failure such as a TLS handshake timeout, the Velero CSI plugin deletes the VolumeSnapshot and VolumeSnapshotContent.
https://github.com/vmware-tanzu/velero-plugin-for-csi/blob/e8f7af4b65f0ed6c69d340aefe2257dc25cd013f/internal/backup/volumesnapshot_action.go#L104
Post delete, the backup controller re-uploads the backup tarball:
velero/pkg/backup/backup.go
Line 756 in 1ec52be
But it does not update the CSI-related artifacts in the object store. Because of this, there is a mismatch between what is in the object store and what is actually backed up. This has led to another issue in Velero: #7979.
What did you expect to happen:
The expectation is that if the snapshot is cleaned up, the corresponding entry should also be removed from the object store. Also, for transient errors Velero should have a retry mechanism that at least retries the Get operation, instead of failing the operation upfront.
The following information will help us better understand what's going on:
If you are using velero v1.7.0+:
Please use
velero debug --backup <backupname> --restore <restorename>
to generate the support bundle and attach it to this issue. For more options, please refer to velero debug --help
If you are using earlier versions:
Please provide the output of the following commands (Pasting long output into a GitHub gist or other pastebin is fine.)
kubectl logs deployment/velero -n velero
velero backup describe <backupname>
or kubectl get backup/<backupname> -n velero -o yaml
velero backup logs <backupname>
velero restore describe <restorename>
or kubectl get restore/<restorename> -n velero -o yaml
velero restore logs <restorename>
Anything else you would like to add:
Environment:
- Velero version (use velero version):
- Velero features (use velero client config get features):
- Kubernetes version (use kubectl version):
- OS (e.g. from /etc/os-release):

Vote on this issue!
This is an invitation to the Velero community to vote on issues, you can see the project's top voted issues listed here.
Use the "reaction smiley face" up to the right of this comment to vote.