
CSI Artifacts are not patched in object store after Finalizing Phase #7979

Open

anshulahuja98 opened this issue Jul 4, 2024 · 3 comments

anshulahuja98 (Collaborator) commented Jul 4, 2024

What steps did you take and what happened:

In the Finalizing phase today, the backup controller re-uploads the backup tarball (see `buildFinalTarball(tr *tar.Reader, tw *tar.Writer, updateFiles map[string]FileForArchive) error`), but it does not update the CSI-related artifacts in the object store. The CSI gzip files contain the VolumeSnapshotContent, VolumeSnapshot, etc.

In the CSI plugin BIAv2 implementation, Velero cleans up the VolumeSnapshot and recreates the VolumeSnapshotContent after the backup enters the Finalizing phase:

```go
// Make the VolumeSnapshotContent static
vsc.Spec.Source = snapshotv1api.VolumeSnapshotContentSource{
	SnapshotHandle: vsc.Status.SnapshotHandle,
}
```

Given this behavioural gap in Velero, the object store is never updated with the recreated VolumeSnapshotContent, because the CSI artifacts are not re-uploaded during finalization.

This has led to other behavioural issues in Velero, as highlighted in issue #7978.

What did you expect to happen:

The following information will help us better understand what's going on:

If you are using velero v1.7.0+:
Please use velero debug --backup <backupname> --restore <restorename> to generate the support bundle and attach it to this issue. For more options, refer to velero debug --help.

If you are using earlier versions:
Please provide the output of the following commands (Pasting long output into a GitHub gist or other pastebin is fine.)

  • kubectl logs deployment/velero -n velero
  • velero backup describe <backupname> or kubectl get backup/<backupname> -n velero -o yaml
  • velero backup logs <backupname>
  • velero restore describe <restorename> or kubectl get restore/<restorename> -n velero -o yaml
  • velero restore logs <restorename>

Anything else you would like to add:

Environment:

  • Velero version (use velero version):
  • Velero features (use velero client config get features):
  • Kubernetes version (use kubectl version):
  • Kubernetes installer & version:
  • Cloud provider or hardware configuration:
  • OS (e.g. from /etc/os-release):

Vote on this issue!

This is an invitation to the Velero community to vote on issues, you can see the project's top voted issues listed here.
Use the "reaction smiley face" up to the right of this comment to vote.

  • 👍 for "I would like to see this bug fixed as soon as possible"
  • 👎 for "There are more important bugs to focus on right now"
@reasonerjt (Contributor) commented
@anshulahuja98 @blackpiglet
This is essentially the root cause of #7978, right?
I see we are discussing whether we can skip uploading the VSC to the BSL and modify the deletion/restore process. If we can reach agreement, this is a good candidate for v1.15 IMO.

@anshulahuja98 (Collaborator, Author) commented

Yes @reasonerjt, this is just the root-cause bug item.

And yes, I am in favour of removing the dependency on VSC for the various flows; we can plan for 1.15.

@anshulahuja98 (Collaborator, Author) commented

#7978 (comment)
Link to another explanation of the issue.
