GITHUB-ISSUE-1847: Fix Azure binlog upload for large files #1848

Open · wants to merge 2 commits into main
Conversation

@dcaputo-harmoni (Contributor) commented Oct 16, 2024:

Fixes #1847

@egegunes (Contributor):
@dcaputo-harmoni tests are failing because GKE 1.27 was removed. I'll ping you to ask for a rebase after #1849 is merged.

@dcaputo-harmoni (Contributor, Author):
@egegunes No problem, just let me know when

@JNKPercona (Collaborator):

| Test name | Status |
| --- | --- |
| affinity-8-0 | passed |
| auto-tuning-8-0 | passed |
| cross-site-8-0 | passed |
| demand-backup-cloud-8-0 | passed |
| demand-backup-encrypted-with-tls-8-0 | failure |
| demand-backup-8-0 | passed |
| haproxy-5-7 | passed |
| haproxy-8-0 | passed |
| init-deploy-5-7 | passed |
| init-deploy-8-0 | passed |
| limits-8-0 | failure |
| monitoring-2-0-8-0 | passed |
| one-pod-5-7 | passed |
| one-pod-8-0 | passed |
| pitr-8-0 | passed |
| pitr-gap-errors-8-0 | failure |
| proxy-protocol-8-0 | passed |
| proxysql-sidecar-res-limits-8-0 | passed |
| pvc-resize-5-7 | passed |
| pvc-resize-8-0 | passed |
| recreate-8-0 | passed |
| restore-to-encrypted-cluster-8-0 | passed |
| scaling-proxysql-8-0 | passed |
| scaling-8-0 | passed |
| scheduled-backup-5-7 | passed |
| scheduled-backup-8-0 | failure |
| security-context-8-0 | passed |
| smart-update1-8-0 | passed |
| smart-update2-8-0 | passed |
| storage-8-0 | passed |
| tls-issue-cert-manager-ref-8-0 | passed |
| tls-issue-cert-manager-8-0 | passed |
| tls-issue-self-8-0 | passed |
| upgrade-consistency-8-0 | passed |
| upgrade-haproxy-5-7 | passed |
| upgrade-haproxy-8-0 | passed |
| upgrade-proxysql-5-7 | failure |
| upgrade-proxysql-8-0 | failure |
| users-5-7 | passed |
| users-8-0 | passed |
| validation-hook-8-0 | passed |
We ran 41 out of 41 tests.

commit: d86af6e
image: perconalab/percona-xtradb-cluster-operator:PR-1848-d86af6e9

The change constructs explicit upload options and passes them to UploadStream instead of nil:

```go
uploadOption := azblob.UploadStreamOptions{
	Concurrency: 4,
}
_, err := a.client.UploadStream(ctx, a.container, objPath, data, &uploadOption)
```
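For context, here is a minimal, self-contained sketch of calling UploadStream with explicit options via the azblob SDK (github.com/Azure/azure-sdk-for-go/sdk/storage/azblob). The connection-string environment variable, container, and blob names below are hypothetical stand-ins, not values from this PR:

```go
package main

import (
	"context"
	"log"
	"os"

	"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob"
)

func main() {
	// Hypothetical connection details; the operator wires these up differently.
	client, err := azblob.NewClientFromConnectionString(
		os.Getenv("AZURE_STORAGE_CONNECTION_STRING"), nil)
	if err != nil {
		log.Fatal(err)
	}

	// Example local binlog file to stream; the name is illustrative only.
	f, err := os.Open("binlog_000001")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// Passing explicit options instead of nil lets several blocks upload in
	// parallel, so a large stream finishes sooner and is less likely to hit
	// the surrounding context deadline.
	opts := azblob.UploadStreamOptions{
		Concurrency: 4,
	}
	if _, err := client.UploadStream(context.Background(),
		"binlog-container", "binlog_000001", f, &opts); err != nil {
		log.Fatal(err)
	}
}
```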
Contributor:
@dcaputo-harmoni Hi! Thank you for your contribution.
Is it possible to make this value configurable? I believe it would be super useful for all our users.

@dcaputo-harmoni (Author):

It's possible, I just don't have the bandwidth to do it at the moment. Given that it's currently broken for large files, I would suggest we release as is to fix the issue, then open a feature request / issue to track the configurability, and I'll get that in down the line.

Contributor:

I see, time is always the problem.
We can take over this PR and make the value configurable.

@nmarukovich (Contributor) left a review comment:

Make the concurrency value configurable.
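A minimal sketch of one way the value could be made configurable, assuming a hypothetical UPLOAD_CONCURRENCY environment variable; neither this variable nor the helper exists in the PR, and the operator would more likely expose the knob through the backup storage spec:

```go
import (
	"os"
	"strconv"
)

// uploadConcurrency returns the stream-upload concurrency, read from the
// hypothetical UPLOAD_CONCURRENCY environment variable, falling back to
// the value this PR hardcodes.
func uploadConcurrency() int {
	const defaultConcurrency = 4
	if v := os.Getenv("UPLOAD_CONCURRENCY"); v != "" {
		if n, err := strconv.Atoi(v); err == nil && n > 0 {
			return n
		}
	}
	return defaultConcurrency
}
```

The call site would then become `azblob.UploadStreamOptions{Concurrency: uploadConcurrency()}`.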


Successfully merging this pull request may close these issues.

Large PITR binlog uploads to Azure storage timeout with "Context deadline exceeded"
5 participants