RFE: Squash images down to a single UID/GID when pulling #3589
Comments
it is being addressed here: containers/storage#387
@jwflory could you build a podman using that version of containers/storage and see if it solves your problem?
Can you please include the output you get?
@jwflory You need to set a mount option for it. There should be no change to podman once it is vendored in: `--storage-opt overlay.ignore_chown_errors=true` ... Then if container storage fails to chown, the error will be ignored, meaning the file will be stored with the default root UID inside the container.
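For reference, the option mentioned above can also be made permanent in the rootless storage configuration once the patched containers/storage is vendored in. This is a sketch; the exact section and value format should be verified against the containers-storage.conf(5) man page for your version:

```toml
# ~/.config/containers/storage.conf (rootless) -- sketch, verify against
# containers-storage.conf(5) for your containers/storage version
[storage.options]
# Ignore chown errors during layer extraction; files whose owner cannot be
# represented in the user namespace are stored owned by the single mapped ID.
ignore_chown_errors = "true"
```

Equivalently, pass `--storage-opt overlay.ignore_chown_errors=true` on the podman command line as in the comment above.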
Any chance to get the PR merged? I'm currently having lots of problems due to the image squashing issue.
@hypery2k we are still testing it, and it will hopefully be merged soon. Even with that in place, it is still suggested to use multiple IDs inside of a user namespace, as that doesn't require using a modified image.
Totally with you, but I didn't get it working on OpenShift 3.11.
@giuseppe @rhatdan This is what I currently get with the c/storage patch. The Podman package itself I am using includes up to commit ade0d87:
INFO[0000] running as rootless
DEBU[0000] Initializing boltdb state at /tmp/jflory-libpod/libpod/bolt_state.db
DEBU[0000] Using graph driver vfs
DEBU[0000] Using graph root /tmp/jflory-libpod
DEBU[0000] Using run root /run/user/43228
DEBU[0000] Using static dir /tmp/jflory-libpod/libpod
DEBU[0000] Using tmp dir /run/user/43228/libpod/tmp
DEBU[0000] Using volume path /tmp/jflory-libpod/volumes
DEBU[0000] Set libpod namespace to ""
DEBU[0000] [graphdriver] trying provided driver "vfs"
DEBU[0000] Initializing event backend journald
DEBU[0000] parsed reference into "[vfs@/tmp/jflory-libpod+/run/user/43228:overlay.ignore_chown_errors=true]registry.fedoraproject.org/fedora:latest"
Trying to pull registry.fedoraproject.org/fedora:latest...
DEBU[0000] reference rewritten from 'registry.fedoraproject.org/fedora:latest' to 'registry.fedoraproject.org/fedora:latest'
DEBU[0000] Trying to pull "registry.fedoraproject.org/fedora:latest"
DEBU[0000] Using registries.d directory /etc/containers/registries.d for sigstore configuration
DEBU[0000] Using "default-docker" configuration
DEBU[0000] No signature storage configuration found for registry.fedoraproject.org/fedora:latest
DEBU[0000] Looking for TLS certificates and private keys in /etc/docker/certs.d/registry.fedoraproject.org
DEBU[0000] GET https://registry.fedoraproject.org/v2/
DEBU[0000] Ping https://registry.fedoraproject.org/v2/ status 200
DEBU[0000] GET https://registry.fedoraproject.org/v2/fedora/manifests/latest
DEBU[0000] Using blob info cache at /home/jfloryintern/.local/share/containers/cache/blob-info-cache-v1.boltdb
DEBU[0000] Source is a manifest list; copying (only) instance sha256:25019b36e1f368ac8e263b3ccbc1e0f9f542286dfae9a889b863259b5e284437
DEBU[0000] GET https://registry.fedoraproject.org/v2/fedora/manifests/sha256:25019b36e1f368ac8e263b3ccbc1e0f9f542286dfae9a889b863259b5e284437
DEBU[0000] IsRunningImageAllowed for image docker:registry.fedoraproject.org/fedora:latest
DEBU[0000] Using default policy section
DEBU[0000] Requirement 0: allowed
DEBU[0000] Overall: allowed
DEBU[0000] Downloading /v2/fedora/blobs/sha256:47c865867d25e2a8bb593e329d73a7109164b37140848ffa6105a8a40b05555f
DEBU[0000] GET https://registry.fedoraproject.org/v2/fedora/blobs/sha256:47c865867d25e2a8bb593e329d73a7109164b37140848ffa6105a8a40b05555f
Getting image source signatures
DEBU[0000] Manifest has MIME type application/vnd.docker.distribution.manifest.v2+json, ordered candidate list [application/vnd.docker.distribution.manifest.v2+json, application/vnd.docker.distribution.manifest.v1+prettyjws, application/vnd.oci.image.manifest.v1+json, application/vnd.docker.distribution.manifest.v1+json]
DEBU[0000] ... will first try using the original manifest unmodified
DEBU[0000] Downloading /v2/fedora/blobs/sha256:fdc986defe1731a1baf2f3720228be2992307c5e1eb817c3b146daafed7cea40
DEBU[0000] GET https://registry.fedoraproject.org/v2/fedora/blobs/sha256:fdc986defe1731a1baf2f3720228be2992307c5e1eb817c3b146daafed7cea40
DEBU[0001] Detected compression format gzip
DEBU[0001] Using original blob without modification
Copying blob fdc986defe17 done
DEBU[0004] No compression detected
DEBU[0004] Using original blob without modification
Copying config 47c865867d done
Writing manifest to image destination
Storing signatures
DEBU[0005] Start untar layer
ERRO[0005] Error while applying layer: ApplyLayer exit status 1 stdout: stderr: there might not be enough IDs available in the namespace (requested 0:35 for /usr/libexec/utempter): lchown /usr/libexec/utempter: invalid argument
ERRO[0005] Error pulling image ref //registry.fedoraproject.org/fedora:latest: Error committing the finished image: error adding layer with blob "sha256:fdc986defe1731a1baf2f3720228be2992307c5e1eb817c3b146daafed7cea40": ApplyLayer exit status 1 stdout: stderr: there might not be enough IDs available in the namespace (requested 0:35 for /usr/libexec/utempter): lchown /usr/libexec/utempter: invalid argument
Failed
ERRO[0005] error pulling image "registry.fedoraproject.org/fedora:latest": unable to pull registry.fedoraproject.org/fedora:latest: unable to pull image: Error committing the finished image: error adding layer with blob "sha256:fdc986defe1731a1baf2f3720228be2992307c5e1eb817c3b146daafed7cea40": ApplyLayer exit status 1 stdout: stderr: there might not be enough IDs available in the namespace (requested 0:35 for /usr/libexec/utempter): lchown /usr/libexec/utempter: invalid argument
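To illustrate the failure above: in a rootless user namespace with only one ID mapped, chowning a file to any other owner (here 0:35 for /usr/libexec/utempter) fails with EINVAL. "Squashing" means recording every file as owned by the single mapped ID instead. This is a minimal Python sketch of that idea with hypothetical helper names, not podman's actual code:

```python
# Sketch: applying layer entries when only a single UID/GID (0:0) is mapped.
# In a rootless namespace with one mapped ID, lchown(path, 0, 35) raises
# EINVAL; "squashing" means storing the file as owned by 0:0 instead.

def apply_layer(entries, ignore_chown_errors=True):
    """entries: list of (path, uid, gid). Returns {path: (uid, gid)} as stored."""
    stored = {}
    for path, uid, gid in entries:
        if (uid, gid) != (0, 0):          # only 0:0 is mapped in the namespace
            if not ignore_chown_errors:
                raise OSError(22, f"lchown {path}: invalid argument")  # EINVAL
            uid, gid = 0, 0               # squash ownership to the mapped ID
        stored[path] = (uid, gid)
    return stored

layer = [("/usr/bin/ls", 0, 0), ("/usr/libexec/utempter", 0, 35)]
print(apply_layer(layer))  # both entries end up owned by 0:0
```

With `ignore_chown_errors=False` the sketch raises the same "invalid argument" error shown in the log; with it enabled, the pull proceeds and ownership is flattened.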
Seeing the same error.
It is working for me with this command
I noticed you are using
@rhatdan Can you upload the podman binary to Google Drive or Dropbox? I want to give it a try and am having issues building the PR.
CentOS/RHEL would be great.
Struggling with the VFS patch right now...
@giuseppe Could you help me with how to do this on OpenShift 3.11? Is there a mailing list or chat to discuss this further?
So what is the solution on RHEL/CentOS 7.7?
Probably nothing until podman 1.6 arrives.
This issue had no activity for 30 days. In the absence of activity or the "do-not-close" label, the issue will be automatically closed within 7 days.
@rhatdan, this is working now, isn't it?
Should arrive in RHEL/CentOS 7.8 with podman 1.6.*
Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind feature

Description

In environments where user namespaces are not possible (#3478), the `--uidmap` flag is useful for running a container with a single UID/GID as a workaround for these scenarios (more context in #3561). Currently, `podman pull` fails if there are not enough UIDs/GIDs allocated on the system. This is a chicken-and-egg scenario: I cannot pull containers to the host even if I will use `--uidmap 0:0:1` when running the container.

@mheon mentioned adding support for Podman to tell containers/storage to squash an image down to a single UID/GID if they are not allocated on the system. This would enable us to pull any image, vs. building custom images with a manual `chown` step added in the build instructions.

For what it is worth, I did bind-mount an NFS path in as a volume and read/wrote to it in the container with the `--uidmap` flag in Podman v1.4.5-dev, so this use case is already possible in `podman run`. I have not tested with GPFS yet though because of this issue.

Any thoughts?
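The `--uidmap` triples use the same container-ID:host-ID:length semantics as the kernel's /proc/PID/uid_map (see user_namespaces(7)). A small Python sketch (a hypothetical helper, not podman code) of why `--uidmap 0:0:1` leaves UID 35 unrepresentable, which is the chicken-and-egg problem described above:

```python
# Sketch of uid_map lookup: each mapping is (container_start, host_start, length).
# With --uidmap 0:0:1 only container UID 0 exists, so chowning a layer file
# to any other owner (e.g. 0:35 for /usr/libexec/utempter) cannot succeed.

def map_id(container_id, mappings):
    """Return the host ID for container_id, or None if it is unmapped."""
    for cstart, hstart, length in mappings:
        if cstart <= container_id < cstart + length:
            return hstart + (container_id - cstart)
    return None

single = [(0, 0, 1)]                       # --uidmap 0:0:1
ranged = [(0, 0, 1), (1, 100000, 65536)]   # typical rootless subuid setup

print(map_id(0, single))   # 0 is mapped
print(map_id(35, single))  # None: a layer file owned by UID 35 has no owner
print(map_id(35, ranged))  # mapped through the subordinate ID range
```

With a subordinate ID range configured (the multiple-ID setup suggested earlier in the thread), every ordinary image UID resolves to a host ID and no squashing is needed.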