New error entries with 1.14.0-rc.1 #7892
Comments
@Dannhausen Are you using AWS S3?
Yes, but a self-hosted private version.
Experiencing the same problem with the final 1.14.0 and AWS plugin 1.10.0 with OVH S3 storage. Is there any news?
I confirm the same behaviour on a fresh installation of Velero in a brand new EKS cluster (v1.30) with an AWS S3 bucket.
Updated to 1.14.1 with Helm chart version 7.1.5; still seeing this issue.
I also face these errors on GCP with velero 1.14.1, plugin 1.10.1, chart 7.2.1:
What is the stable version matrix for the GCP plugin and Velero? I followed the guidance from the GCP plugin docs, but 1.14.x and 1.10.x don't seem to work.
velero 1.14.1 and gcp-plugin 1.10.1 are working fine for me, with the exception of the occasional error message as indicated above; the backups are being made at the right moments.
We are experiencing the same issue:

velero plugin for aws (k get pods -o json | jq '.items[].spec.initContainers[].image'):
"velero/velero-plugin-for-aws:v1.11.0"

velero server version (k exec -it velero-6c8c466c59-4xg4l -- /velero version):
Defaulted container "velero" out of: velero, velero-plugin-for-aws (init)
Client:
    Version: v1.15.0
    Git commit: 1d4f1475975b5107ec35f4d19ff17f7d1fcb3edf-dirty
Server:
    Version: v1.15.0

velero helm chart version (helm list):
NAME    NAMESPACE  REVISION  UPDATED                               STATUS    CHART         APP VERSION
velero  velero     3         2024-12-16 18:35:35.619604 +0200 EET  deployed  velero-8.1.0  1.15.0

velero error:
time="2024-12-17T08:55:43Z" level=error msg="error encountered while scanning stdout" backup-storage-location=velero/default cmd=/plugins/velero-plugin-for-aws controller=backup-storage-location error="read |0: file already closed" logSource="pkg/plugin/clientmgmt/process/logrus_adapter.go:90"
Yeah, it does work in GKE; all backup processes were perfect, but the error persists. Can we ignore this error as I update the cluster version over time?
What steps did you take and what happened:
We did an update to version 1.14.0-rc.1 in our developer stage. After that update we got several log entries like this:
time="2024-06-14T00:44:51Z" level=error msg="error encountered while scanning stdout" backupLocation=velero/default cmd=/plugins/velero-plugin-for-aws controller=backup-sync error="read |0: file already closed" logSource="pkg/plugin/clientmgmt/process/logrus_adapter.go:90"
Our backups are running normally so far.
What did you expect to happen:
No new unknown error messages
Environment:
velero version:
Client:
    Version: v1.13.2
    Git commit: 4d961fb
Server:
    Version: v1.14.0-rc.1
velero client config get features: not set
kubectl version:
Server Version: version.Info{Major:"1", Minor:"26", GitVersion:"v1.26.15+vmware.1"}

Vote on this issue!
This is an invitation to the Velero community to vote on issues. You can see the project's top-voted issues listed here.
Use the "reaction smiley face" up to the right of this comment to vote.