[WIP] Include total transfer time in DataUpload/DataDownload columns. #8128

Draft. Wants to merge 4 commits into base: main.
3 changes: 3 additions & 0 deletions config/crd/v1/bases/velero.io_podvolumebackups.yaml
@@ -199,6 +199,9 @@ spec:
bytesDone:
format: int64
type: integer
skippedBytes:
format: int64
type: integer
totalBytes:
format: int64
type: integer
3 changes: 3 additions & 0 deletions config/crd/v1/bases/velero.io_podvolumerestores.yaml
@@ -191,6 +191,9 @@ spec:
bytesDone:
format: int64
type: integer
skippedBytes:
format: int64
type: integer
totalBytes:
format: int64
type: integer
4 changes: 2 additions & 2 deletions config/crd/v1/crds/crds.go

Large diffs are not rendered by default.

19 changes: 19 additions & 0 deletions config/crd/v2alpha1/bases/velero.io_datadownloads.yaml
@@ -45,6 +45,15 @@ spec:
jsonPath: .status.node
name: Node
type: string
- description: Elapsed time of the actual transfer, eventually completion time
- start time
jsonPath: .status.elapsedTransferTime
name: Elapsed Time
type: string
- description: Actual bytes/second moved onto the cluster
jsonPath: .status.throughput
name: Throughput bytes/sec
type: integer
name: v2alpha1
schema:
openAPIV3Schema:
@@ -144,6 +153,9 @@ spec:
format: date-time
nullable: true
type: string
elapsedTransferTime:
description: ElapsedTransferTime is the total time taken for the actual transfer of data (completion time - start time)
type: string
message:
description: Message is a message about the DataDownload's status.
type: string
@@ -171,6 +183,9 @@ spec:
bytesDone:
format: int64
type: integer
skippedBytes:
format: int64
type: integer
totalBytes:
format: int64
type: integer
@@ -182,6 +197,10 @@ spec:
format: date-time
nullable: true
type: string
throughput:
description: Throughput is the rate, in bytes per second, at which data was transferred onto the cluster
format: int64
type: integer
type: object
type: object
served: true
30 changes: 28 additions & 2 deletions config/crd/v2alpha1/bases/velero.io_datauploads.yaml
@@ -46,6 +46,17 @@ spec:
jsonPath: .status.node
name: Node
type: string
- description: Elapsed time of actual transfer, eventually 'completion time' -
'start time'.
jsonPath: .status.elapsedTransferTime
name: Elapsed Time
type: string
- description: Actual bytes/second moved off cluster, ignoring skipped/cached
bytes due to incremental hashing
format: int64
jsonPath: .status.throughput
name: Throughput bytes/sec
type: integer
name: v2alpha1
schema:
openAPIV3Schema:
@@ -159,6 +170,11 @@ spec:
as a result of the DataUpload.
nullable: true
type: object
elapsedTransferTime:
description: |-
ElapsedTransferTime is the total amount of time it took to complete
this DataUpload (completion time - start time).
type: string
message:
description: Message is a message about the DataUpload's status.
type: string
@@ -183,13 +199,17 @@ spec:
type: string
progress:
description: |-
Progress holds the total number of bytes of the volume and the current
number of backed up bytes. This can be used to display progress information
Progress holds the total number of bytes of the volume, the current
number of backed up bytes, and the number of bytes skipped due to
any incremental caching. This can be used to display progress information
about the backup operation.
properties:
bytesDone:
format: int64
type: integer
skippedBytes:
format: int64
type: integer
totalBytes:
format: int64
type: integer
@@ -207,6 +227,12 @@ spec:
format: date-time
nullable: true
type: string
throughput:
description: |-
Throughput is the rate of actual bytes transferred off-cluster during
this DataUpload. It is equivalent to: (doneBytes - skippedBytes) / elapsedTime
format: int64
type: integer
type: object
type: object
served: true
4 changes: 2 additions & 2 deletions config/crd/v2alpha1/crds/crds.go

Large diffs are not rendered by default.

3 changes: 3 additions & 0 deletions pkg/apis/velero/shared/data_move_operation_progress.go
@@ -26,4 +26,7 @@ type DataMoveOperationProgress struct {

// +optional
BytesDone int64 `json:"bytesDone,omitempty"`

// +optional
SkippedBytes int64 `json:"skippedBytes,omitempty"`
}
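The new SkippedBytes field feeds the throughput formula described later in this diff, (doneBytes - skippedBytes) / elapsedTime, which treats skipped (cached) bytes as not actually transferred. A minimal sketch of that numerator, using a local copy of the struct above rather than the real Velero package:

```go
package main

import "fmt"

// DataMoveOperationProgress mirrors the struct in
// pkg/apis/velero/shared/data_move_operation_progress.go after this change.
type DataMoveOperationProgress struct {
	TotalBytes   int64 `json:"totalBytes,omitempty"`
	BytesDone    int64 `json:"bytesDone,omitempty"`
	SkippedBytes int64 `json:"skippedBytes,omitempty"`
}

// TransferredBytes returns the bytes actually moved over the network,
// excluding bytes skipped due to incremental caching.
func TransferredBytes(p DataMoveOperationProgress) int64 {
	return p.BytesDone - p.SkippedBytes
}

func main() {
	p := DataMoveOperationProgress{TotalBytes: 200, BytesDone: 150, SkippedBytes: 50}
	fmt.Println(TransferredBytes(p)) // 100
}
```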
10 changes: 10 additions & 0 deletions pkg/apis/velero/v2alpha1/data_download_types.go
@@ -115,6 +115,14 @@ type DataDownloadStatus struct {
// Node is name of the node where the DataDownload is processed.
// +optional
Node string `json:"node,omitempty"`

// ElapsedTransferTime is the total time taken for the actual transfer of data (completion time - start time).
// +optional
ElapsedTransferTime metav1.Duration `json:"elapsedTransferTime,omitempty"`

// Throughput is the rate, in bytes per second, at which data was transferred onto the cluster.
// +optional
Throughput int64 `json:"throughput,omitempty"`
}

// TODO(2.0) After converting all resources to use the runtime-controller client, the genclient and k8s:deepcopy markers will no longer be needed and should be removed.
@@ -130,6 +138,8 @@ type DataDownloadStatus struct {
// +kubebuilder:printcolumn:name="Storage Location",type="string",JSONPath=".spec.backupStorageLocation",description="Name of the Backup Storage Location where the backup data is stored"
// +kubebuilder:printcolumn:name="Age",type="date",JSONPath=".metadata.creationTimestamp",description="Time duration since this DataDownload was created"
// +kubebuilder:printcolumn:name="Node",type="string",JSONPath=".status.node",description="Name of the node where the DataDownload is processed"
// +kubebuilder:printcolumn:name="Elapsed Time",type="string",JSONPath=".status.elapsedTransferTime",description="Elapsed time of the actual transfer, eventually completion time - start time"
// +kubebuilder:printcolumn:name="Throughput bytes/sec",type="integer",JSONPath=".status.throughput",description="Actual bytes/second moved onto the cluster"

// DataDownload acts as the protocol between data mover plugins and data mover controller for the datamover restore operation
type DataDownload struct {
17 changes: 15 additions & 2 deletions pkg/apis/velero/v2alpha1/data_upload_types.go
@@ -135,15 +135,26 @@ type DataUploadStatus struct {
// +nullable
CompletionTimestamp *metav1.Time `json:"completionTimestamp,omitempty"`

// Progress holds the total number of bytes of the volume and the current
// number of backed up bytes. This can be used to display progress information
// Progress holds the total number of bytes of the volume, the current
// number of backed up bytes, and the number of bytes skipped due to
// any incremental caching. This can be used to display progress information
// about the backup operation.
// +optional
Progress shared.DataMoveOperationProgress `json:"progress,omitempty"`

// Node is name of the node where the DataUpload is processed.
// +optional
Node string `json:"node,omitempty"`

// ElapsedTransferTime is the total amount of time it took to complete
// this DataUpload (completion time - start time).
// +optional
ElapsedTransferTime metav1.Duration `json:"elapsedTransferTime,omitempty"`

// Throughput is the rate of actual bytes transferred off-cluster during
// this DataUpload. It is equivalent to: (doneBytes - skippedBytes) / elapsedTime
// +optional
Throughput int64 `json:"throughput,omitempty"`
}

// TODO(2.0) After converting all resources to use the runtime-controller client,
@@ -160,6 +171,8 @@ type DataUploadStatus struct {
// +kubebuilder:printcolumn:name="Storage Location",type="string",JSONPath=".spec.backupStorageLocation",description="Name of the Backup Storage Location where this backup should be stored"
// +kubebuilder:printcolumn:name="Age",type="date",JSONPath=".metadata.creationTimestamp",description="Time duration since this DataUpload was created"
// +kubebuilder:printcolumn:name="Node",type="string",JSONPath=".status.node",description="Name of the node where the DataUpload is processed"
// +kubebuilder:printcolumn:name="Elapsed Time",type="string",JSONPath=".status.elapsedTransferTime",description="Elapsed time of actual transfer, eventually 'completion time' - 'start time'."
// +kubebuilder:printcolumn:name="Throughput bytes/sec",type="integer",format="int64",JSONPath=".status.throughput",description="Actual bytes/second moved off cluster, ignoring skipped/cached bytes due to incremental hashing"

// DataUpload acts as the protocol between data mover plugins and data mover controller for the datamover backup operation
type DataUpload struct {
2 changes: 2 additions & 0 deletions pkg/apis/velero/v2alpha1/zz_generated.deepcopy.go

Some generated files are not rendered by default.

3 changes: 3 additions & 0 deletions pkg/controller/data_download_controller.go
@@ -486,6 +486,9 @@

original := dd.DeepCopy()
dd.Status.Progress = shared.DataMoveOperationProgress{TotalBytes: progress.TotalBytes, BytesDone: progress.BytesDone}
dd.Status.ElapsedTransferTime = metav1.Duration{

[CI: Run Linter Check failure on line 489: composites: k8s.io/apimachinery/pkg/apis/meta/v1.Duration struct literal uses unkeyed fields (govet)]
time.Since(dd.Status.StartTimestamp.Time),
}

if err := r.client.Patch(ctx, &dd, client.MergeFrom(original)); err != nil {
log.WithError(err).Error("Failed to update restore snapshot progress")
11 changes: 10 additions & 1 deletion pkg/controller/data_upload_controller.go
@@ -442,7 +442,12 @@
log.Info("Data upload completed")
r.metrics.RegisterDataUploadSuccess(r.nodeName)
}

du.Status.ElapsedTransferTime = metav1.Duration{

[CI: Run Linter Check failure on line 446: composites: k8s.io/apimachinery/pkg/apis/meta/v1.Duration struct literal uses unkeyed fields (govet)]
du.Status.CompletionTimestamp.Sub(du.Status.StartTimestamp.Time),
}

}

[CI: Run Linter Check failure on line 450: unnecessary trailing newline (whitespace)]

func (r *DataUploadReconciler) OnDataUploadFailed(ctx context.Context, namespace, duName string, err error) {
defer r.dataPathMgr.RemoveAsyncBR(duName)
@@ -539,7 +544,11 @@
}

original := du.DeepCopy()
du.Status.Progress = shared.DataMoveOperationProgress{TotalBytes: progress.TotalBytes, BytesDone: progress.BytesDone}
du.Status.Progress = shared.DataMoveOperationProgress{TotalBytes: progress.TotalBytes, BytesDone: progress.BytesDone, SkippedBytes: progress.SkippedBytes}
du.Status.ElapsedTransferTime = metav1.Duration{

[CI: Run Linter Check failure on line 548: composites: k8s.io/apimachinery/pkg/apis/meta/v1.Duration struct literal uses unkeyed fields (govet)]
time.Since(du.Status.StartTimestamp.Time),
}
du.Status.Throughput = (progress.BytesDone - progress.SkippedBytes) / int64(du.Status.ElapsedTransferTime.Seconds())

if err := r.client.Patch(ctx, &du, client.MergeFrom(original)); err != nil {
log.WithError(err).Error("Failed to update progress")
2 changes: 1 addition & 1 deletion pkg/controller/pod_volume_backup_controller.go
@@ -279,7 +279,7 @@ func (r *PodVolumeBackupReconciler) OnDataPathProgress(ctx context.Context, name
}

original := pvb.DeepCopy()
pvb.Status.Progress = veleroapishared.DataMoveOperationProgress{TotalBytes: progress.TotalBytes, BytesDone: progress.BytesDone}
pvb.Status.Progress = veleroapishared.DataMoveOperationProgress{TotalBytes: progress.TotalBytes, BytesDone: progress.BytesDone, SkippedBytes: progress.SkippedBytes}

if err := r.Client.Patch(ctx, &pvb, client.MergeFrom(original)); err != nil {
log.WithError(err).Error("Failed to update progress")
2 changes: 1 addition & 1 deletion pkg/datapath/file_system.go
@@ -236,7 +236,7 @@ func (fs *fileSystemBR) StartRestore(snapshotID string, target AccessPoint, uplo
// UpdateProgress which implement ProgressUpdater interface to update progress status
func (fs *fileSystemBR) UpdateProgress(p *uploader.Progress) {
if fs.callbacks.OnProgress != nil {
fs.callbacks.OnProgress(context.Background(), fs.namespace, fs.jobName, &uploader.Progress{TotalBytes: p.TotalBytes, BytesDone: p.BytesDone})
fs.callbacks.OnProgress(context.Background(), fs.namespace, fs.jobName, &uploader.Progress{TotalBytes: p.TotalBytes, BytesDone: p.BytesDone, SkippedBytes: p.SkippedBytes})
}
}

2 changes: 1 addition & 1 deletion pkg/uploader/kopia/progress.go
@@ -98,7 +98,7 @@ func (p *Progress) EstimatedDataSize(fileCount int, totalBytes int64) {
// UpdateProgress which calls Updater UpdateProgress interface, update progress by third-party implementation
func (p *Progress) UpdateProgress() {
if p.outputThrottle.ShouldOutput() {
p.Updater.UpdateProgress(&uploader.Progress{TotalBytes: p.estimatedTotalBytes, BytesDone: p.processedBytes})
p.Updater.UpdateProgress(&uploader.Progress{TotalBytes: p.estimatedTotalBytes, BytesDone: p.processedBytes, SkippedBytes: p.cachedBytes})
}
}

7 changes: 4 additions & 3 deletions pkg/uploader/types.go
@@ -57,10 +57,11 @@ type SnapshotInfo struct {
Size int64 `json:"Size"`
}

// Progress which defined two variables to record progress
// Progress defines three fields used to record progress
type Progress struct {
TotalBytes int64 `json:"totalBytes,omitempty"`
BytesDone int64 `json:"doneBytes,omitempty"`
TotalBytes int64 `json:"totalBytes,omitempty"`
BytesDone int64 `json:"doneBytes,omitempty"`
SkippedBytes int64 `json:"skippedBytes,omitempty"`
}

// UploaderProgress which defined generic interface to update progress