Describe the bug
When I run the tool from an environment where I can directly access the Kubernetes API via the internet, it works like a charm.
However, if I run it from another environment where I have to communicate with the API via an HTTP proxy (still the same cluster), the k8s job that is supposed to perform the migration is never created, and the tool terminates with the error below.
The values.yaml is identical to the one on the system where it works. Setting the proxy in the shell via environment variables (both https_proxy and HTTPS_PROXY, to cover case sensitivity) does not help either. Simple kubectl commands that interact with the cluster do work through the proxy.
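For completeness, this is roughly the proxy setup that was attempted before invoking pv-migrate; the proxy URL is a placeholder, not the real one:

# set the proxy for the current shell (both spellings, since tools differ in which they read)
$ export https_proxy=http://proxy.example.com:3128
$ export HTTPS_PROXY=http://proxy.example.com:3128
# kubectl works through the proxy from this shell, pv-migrate does not
$ kubectl -n playground get pvc
$ pv-migrate --source www-web-0 --dest www-web-0-migration --source-namespace playground --dest-namespace playground --helm-values values.yaml --dest-delete-extraneous-files --strategies mnt2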
Is there a way to solve the problem?
To Reproduce
$ kubectl -n playground get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
www-web-0 Bound pvc-a7b96ed6-83eb-4e93-9f19-c3b37add3989 1Gi RWO partition1 37m
www-web-0-migration Bound pvc-878a156c-cae5-4332-9e87-0e746c3d7c4f 1Gi RWO partition2 35m
$ pv-migrate --source www-web-0 --dest www-web-0-migration --source-namespace playground --dest-namespace playground --helm-values values.yaml --dest-delete-extraneous-files --strategies mnt2
Jul 17 10:09:59.463 INF 🚀 Starting migration
Jul 17 10:09:59.463 INF ❕ Extraneous files will be deleted from the destination
Jul 17 10:09:59.667 INF 💭 Attempting migration source=playground/www-web-0 dest=playground/www-web-0-migration strategies=mnt2
Jul 17 10:09:59.667 INF 🚁 Attempt using strategy source=playground/www-web-0 dest=playground/www-web-0-migration attempt_id=bedea strategy=mnt2
Jul 17 10:11:59.823 INF 🧹 Cleaning up source=playground/www-web-0 dest=playground/www-web-0-migration attempt_id=bedea strategy=mnt2
Jul 17 10:11:59.923 INF ✨ Cleanup done source=playground/www-web-0 dest=playground/www-web-0-migration attempt_id=bedea strategy=mnt2
Jul 17 10:11:59.930 WRN 🔶 Migration failed with this strategy, will try with the remaining strategies source=playground/www-web-0 dest=playground/www-web-0-migration attempt_id=bedea strategy=mnt2 error="failed to wait for job completion: failed to wait for pod: timed out waiting for the condition"
Error: migration failed: all strategies failed for this migration
Expected behavior
Create the migration job within Kubernetes, bind both PVCs and start migrating.
Version
Server Kubernetes version: v1.28.11
Client Kubernetes version: v1.27.1
Source and destination container runtimes: containerd://1.6.31
pv-migrate version and architecture: v2.0.1 - windows_x86_64
Installation method: binary download
Source and destination PVC type, size and accessModes: ReadWriteOnce, 1Gi, csi -> ReadWriteOnce, 1Gi, csi
Hard to tell what the issue is. Please consider passing the --skip-cleanup flag and, after the failure, inspect the logs of both the source and destination pods. They can give some insight.
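For example, something along these lines; the pod name is a placeholder, the actual names of the pods created for the attempt are visible via kubectl get pods:

# re-run the same migration but keep the failed resources around for inspection
$ pv-migrate --source www-web-0 --dest www-web-0-migration --source-namespace playground --dest-namespace playground --helm-values values.yaml --dest-delete-extraneous-files --strategies mnt2 --skip-cleanup
# list the pods left behind by the failed attempt
$ kubectl -n playground get pods
# inspect their logs and events, e.g.:
$ kubectl -n playground logs <migration-pod-name>
$ kubectl -n playground describe pod <migration-pod-name>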