With a poor network, the copy often fails. A large image has to download all of its layers on every attempt, so every attempt fails.
I need a way to cache the already-downloaded layers so that I don't have to download them again when I retry the copy after a failure.
A skopeo copy [--all --preserve-digests] docker://… dir: will create a local copy, which can then be pushed elsewhere. But for that, the whole image download needs to succeed, because a write to dir: wipes the previous contents.
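A sketch of that two-step workflow (the image names and the /tmp/myimage path are placeholders, not from the original report):

```shell
# Download the whole image into a local directory. This must
# succeed in one go: re-running the copy wipes the directory's
# previous contents, so a partial download is not reusable.
skopeo copy --all --preserve-digests \
    docker://registry.example.com/myimage:latest \
    dir:/tmp/myimage

# Once the download has completed, push the local copy to the
# final destination; no further downloads are needed.
skopeo copy --all --preserve-digests \
    dir:/tmp/myimage \
    docker://registry.internal.example.com/myimage:latest
```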
Alternatively, you can run a temporary registry on localhost (perhaps in a container); that path does work a layer at a time, so after a failure the copy restarts at the failed layer (that layer itself is re-transferred from its start).
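One way to sketch the local-registry approach (the registry:2 image, port 5000, and the image names are illustrative assumptions):

```shell
# Run a throwaway registry on localhost, e.g. as a container.
podman run -d -p 5000:5000 --name staging-registry registry:2

# Stage the image through it. Re-running this command after a
# failure skips the layers the local registry already holds.
skopeo copy --all --dest-tls-verify=false \
    docker://registry.example.com/myimage:latest \
    docker://localhost:5000/myimage:latest

# Then copy from the local registry to the final destination.
skopeo copy --all --src-tls-verify=false \
    docker://localhost:5000/myimage:latest \
    docker://registry.internal.example.com/myimage:latest
```

The --dest-tls-verify=false / --src-tls-verify=false flags are needed because a plain localhost registry serves HTTP, not TLS.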
In recent versions, downloads from registries have limited resume logic, so certain rare kinds of network interruption don't abort the whole layer download. Uploads, however, must happen a whole layer at a time, so an interrupted upload can only be restarted from scratch.