
No-cache not working for --mount=type=cache when mode is specified #5305

Closed
jackblackevo opened this issue Jul 27, 2024 · 13 comments · Fixed by #5306

jackblackevo commented Jul 27, 2024

Description

I've encountered inconsistent behavior when using --mount=type=cache in a Dockerfile. When the mode parameter is not explicitly specified, the cache mount doesn't seem to work correctly, despite the documentation stating that the default mode is 0755. However, when I explicitly use --mount=type=cache,mode=0755, it works as expected.

Reproduce

  1. Create an npm project
  2. Run npm install hello
  3. Create a Dockerfile with the following content:
    FROM node:20.16.0-alpine3.20
    
    COPY package.json package-lock.json* ./
    
    RUN --mount=type=cache,target=/root/.npm npm cache verify
    
    RUN --mount=type=cache,target=/root/.npm \
      npm install --cache=/root/.npm --prefer-offline --verbose
    
    RUN --mount=type=cache,target=/root/.npm npm cache verify
  4. Run docker build --progress=plain --no-cache . twice
  5. Observe that the cache doesn't seem to be working correctly

Expected behavior

The cache mount should work correctly with the default mode (0755), as per the documentation.

It should work the same as:

FROM node:20.16.0-alpine3.20

COPY package.json package-lock.json* ./

RUN --mount=type=cache,mode=0755,target=/root/.npm npm cache verify

RUN --mount=type=cache,mode=0755,target=/root/.npm \
  npm install --cache=/root/.npm --prefer-offline --verbose

RUN --mount=type=cache,mode=0755,target=/root/.npm npm cache verify

docker version

Client:
 Version:           27.1.1
 API version:       1.46
 Go version:        go1.21.12
 Git commit:        6312585
 Built:             Tue Jul 23 19:55:52 2024
 OS/Arch:           linux/amd64
 Context:           default

Server: Docker Desktop  ()
 Engine:
  Version:          27.1.1
  API version:      1.46 (minimum version 1.24)
  Go version:       go1.21.12
  Git commit:       cc13f95
  Built:            Tue Jul 23 19:57:19 2024
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.7.19
  GitCommit:        2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41
 runc:
  Version:          1.1.13
  GitCommit:        v1.1.13-0-g58aa920
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

docker info

Client:
 Version:    27.1.1
 Context:    default
 Debug Mode: false
 Plugins:
  buildx: Docker Buildx (Docker Inc.)
    Version:  v0.16.1-desktop.1
    Path:     /usr/local/lib/docker/cli-plugins/docker-buildx
  compose: Docker Compose (Docker Inc.)
    Version:  v2.29.1-desktop.1
    Path:     /usr/local/lib/docker/cli-plugins/docker-compose
  debug: Get a shell into any image or container (Docker Inc.)
    Version:  0.0.34
    Path:     /usr/local/lib/docker/cli-plugins/docker-debug
  dev: Docker Dev Environments (Docker Inc.)
    Version:  v0.1.2
    Path:     /usr/local/lib/docker/cli-plugins/docker-dev
  extension: Manages Docker extensions (Docker Inc.)
    Version:  v0.2.25
    Path:     /usr/local/lib/docker/cli-plugins/docker-extension
  feedback: Provide feedback, right in your terminal! (Docker Inc.)
    Version:  v1.0.5
    Path:     /usr/local/lib/docker/cli-plugins/docker-feedback
  init: Creates Docker-related starter files for your project (Docker Inc.)
    Version:  v1.3.0
    Path:     /usr/local/lib/docker/cli-plugins/docker-init
  sbom: View the packaged-based Software Bill Of Materials (SBOM) for an image (Anchore Inc.)
    Version:  0.6.0
    Path:     /usr/local/lib/docker/cli-plugins/docker-sbom
  scout: Docker Scout (Docker Inc.)
    Version:  v1.11.0
    Path:     /usr/local/lib/docker/cli-plugins/docker-scout

Server:
 Containers: 0
  Running: 0
  Paused: 0
  Stopped: 0
 Images: 9
 Server Version: 27.1.1
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Using metacopy: false
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Cgroup Version: 1
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41
 runc version: v1.1.13-0-g58aa920
 init version: de40ad0
 Security Options:
  seccomp
   Profile: unconfined
 Kernel Version: 5.15.153.1-microsoft-standard-WSL2
 Operating System: Docker Desktop
 OSType: linux
 Architecture: x86_64
 CPUs: 24
 Total Memory: 31.3GiB
 Name: docker-desktop
 ID: 9ac84166-3dc4-4d9e-b820-a7e13e5c2e6c
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 HTTP Proxy: http.docker.internal:3128
 HTTPS Proxy: http.docker.internal:3128
 No Proxy: hubproxy.docker.internal
 Labels:
  com.docker.desktop.address=unix:///var/run/docker-cli.sock
 Experimental: false
 Insecure Registries:
  hubproxy.docker.internal:5555
  127.0.0.0/8
 Live Restore Enabled: false

WARNING: No blkio throttle.read_bps_device support
WARNING: No blkio throttle.write_bps_device support
WARNING: No blkio throttle.read_iops_device support
WARNING: No blkio throttle.write_iops_device support
WARNING: daemon is not using the default seccomp profile

Additional Info

No response

thaJeztah (Member) commented:

Seems related to the issue I was trying in this PR in BuildKit;

And this comment; #4936 (comment) - not sure if we can apply a similar hack in dockerd, as it may affect permissions elsewhere?

cc @tonistiigi @crazy-max

polarathene (Contributor) commented Aug 20, 2024

Ignore, see next comment

FWIW, I could not reproduce any difference from running it twice, but it is unclear whether the reproduction is meant to exhibit the failure.

Adding mode=0755 made no difference for me (my docker environment info can be found here, v26.1.1 on WSL2 / Docker-Desktop):

Build output
$ docker build --progress plain --no-cache .

#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile.node
#1 transferring dockerfile: 354B done
#1 DONE 0.0s

#2 [internal] load metadata for docker.io/library/node:alpine
#2 DONE 0.0s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [stage-0 1/5] FROM docker.io/library/node:alpine
#4 CACHED

#5 [internal] load build context
#5 transferring context: 104B done
#5 DONE 0.0s

#6 [stage-0 2/5] COPY node/package.json node/package-lock.json* ./
#6 DONE 0.0s

#7 [stage-0 3/5] RUN --mount=type=cache,target=/root/.npm npm cache verify
#7 0.885 Cache verified and compressed (~/.npm/_cacache)
#7 0.886 Content verified: 0 (0 bytes)
#7 0.886 Index entries: 0
#7 0.886 Finished in 0.077s
#7 DONE 0.9s

#8 [stage-0 4/5] RUN --mount=type=cache,target=/root/.npm   npm install --cache=/root/.npm --prefer-offline --verbose
#8 0.585 npm verbose cli /usr/local/bin/node /usr/local/bin/npm
#8 0.586 npm info using npm@10.8.2
#8 0.586 npm info using node@v22.6.0
#8 0.591 npm verbose title npm install
#8 0.591 npm verbose argv "install" "--cache" "/root/.npm" "--prefer-offline" "--loglevel" "verbose"
#8 0.591 npm verbose logfile logs-max:10 dir:/root/.npm/_logs/2024-08-20T06_12_06_997Z-
#8 0.594 npm verbose logfile /root/.npm/_logs/2024-08-20T06_12_06_997Z-debug-0.log
#8 0.967 npm verbose stack Error: Tracker "idealTree" already exists
#8 0.967 npm verbose stack     at #onError (/usr/local/lib/node_modules/npm/node_modules/@npmcli/arborist/lib/tracker.js:84:11)
#8 0.967 npm verbose stack     at Arborist.addTracker (/usr/local/lib/node_modules/npm/node_modules/@npmcli/arborist/lib/tracker.js:26:20)
#8 0.967 npm verbose stack     at #buildDeps (/usr/local/lib/node_modules/npm/node_modules/@npmcli/arborist/lib/arborist/build-ideal-tree.js:753:10)
#8 0.967 npm verbose stack     at Arborist.buildIdealTree (/usr/local/lib/node_modules/npm/node_modules/@npmcli/arborist/lib/arborist/build-ideal-tree.js:181:28)
#8 0.967 npm verbose stack     at async Promise.all (index 1)
#8 0.967 npm verbose stack     at async Arborist.reify (/usr/local/lib/node_modules/npm/node_modules/@npmcli/arborist/lib/arborist/reify.js:131:5)
#8 0.967 npm verbose stack     at async Install.exec (/usr/local/lib/node_modules/npm/lib/commands/install.js:150:5)
#8 0.967 npm verbose stack     at async Npm.exec (/usr/local/lib/node_modules/npm/lib/npm.js:207:9)
#8 0.967 npm verbose stack     at async module.exports (/usr/local/lib/node_modules/npm/lib/cli/entry.js:74:5)
#8 0.967 npm error Tracker "idealTree" already exists
#8 0.971 npm verbose cwd /
#8 0.971 npm verbose os Linux 5.15.123.1-microsoft-standard-WSL2
#8 0.971 npm verbose node v22.6.0
#8 0.971 npm verbose npm  v10.8.2
#8 0.971 npm verbose exit 1
#8 0.971 npm verbose code 1
#8 0.972 npm error A complete log of this run can be found in: /root/.npm/_logs/2024-08-20T06_12_06_997Z-debug-0.log
#8 ERROR: process "/bin/sh -c npm install --cache=/root/.npm --prefer-offline --verbose" did not complete successfully: exit code: 1
------
 > [stage-0 4/5] RUN --mount=type=cache,target=/root/.npm   npm install --cache=/root/.npm --prefer-offline --verbose:
0.967 npm verbose stack     at async Npm.exec (/usr/local/lib/node_modules/npm/lib/npm.js:207:9)
0.967 npm verbose stack     at async module.exports (/usr/local/lib/node_modules/npm/lib/cli/entry.js:74:5)
0.967 npm error Tracker "idealTree" already exists
0.971 npm verbose cwd /
0.971 npm verbose os Linux 5.15.123.1-microsoft-standard-WSL2
0.971 npm verbose node v22.6.0
0.971 npm verbose npm  v10.8.2
0.971 npm verbose exit 1
0.971 npm verbose code 1
0.972 npm error A complete log of this run can be found in: /root/.npm/_logs/2024-08-20T06_12_06_997Z-debug-0.log
------
Dockerfile.node:8
--------------------
   7 |
   8 | >>> RUN --mount=type=cache,target=/root/.npm \
   9 | >>>   npm install --cache=/root/.npm --prefer-offline --verbose
  10 |
--------------------
ERROR: failed to solve: process "/bin/sh -c npm install --cache=/root/.npm --prefer-offline --verbose" did not complete successfully: exit code: 1

No difference with docker-container driver when using docker buildx build.

That said, I was not able to reproduce the BuildKit 0.13.2 - 0.15.0 bug reports for COPY either 🤷‍♂️ (the docker driver is BuildKit 0.13.2, and I tried with docker-container set to that version). Perhaps I didn't reproduce it correctly.


4. Run docker build --progress=plain --no-cache . twice
5. Observe that the cache doesn't seem to be working correctly

A bit more clarity on the expectation here would be helpful.

You run the build with --no-cache, which in my experience not only bypasses the layer cache for the image build but also wipes cache mounts.

polarathene (Contributor) commented Aug 20, 2024

TL;DR:

So @jackblackevo observed:

  • Only with the docker builder driver using BuildKit from 0.13.2 and before 0.15.2 is a umask of 022 applied (EDIT: given the update above, this may be inaccurate and should be verified against a docker driver offering BuildKit 0.15.2+).
    • The default 755 for the target path is set. --no-cache will discard the cache mount contents with each build.
    • With the mode option set, the umask effect is more apparent (773 => 751).
  • With either the docker or docker-container builder drivers:
    • If any of the mode, uid, or gid cache mount options are present, the data will not be discarded with the --no-cache build option. Hence the repeated-build inconsistency observed against their --no-cache expectation?

Reproduction

Here is a reliable reproduction example that isn't dependent upon external state (since that Node.js build example was failing).

FROM alpine
RUN --mount=type=cache,target=/example,mode=773 <<HEREDOC
  echo "Octal permissions are: $(stat -c %a /example)"
  ls /example
  touch /example/i-will-persist-if-mode-is-set
HEREDOC
# Sanity check, that file should only persist here and not above on subsequent builds with `--no-cache`:
RUN --mount=type=cache,target=/example,mode=773 \
  ls /example

# Add the --builder arg as described below to observe the difference in build outputs:
docker buildx build --load --no-cache --progress plain .

You will observe with the above:

  • --builder default (docker driver, at least with BuildKit 0.13.2) will apply the umask 022 and output octal 751, not 773.
  • --builder bk-13-2 (docker-container driver with BuildKit 0.13.2; see the builder-creation sketch below) does not reproduce the same behaviour as the docker driver.
  • Other BuildKit versions with the docker-container driver, such as 0.15.2, behave the same as 0.13.2 does with docker-container.
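
For reference, a docker-container builder pinned to a specific BuildKit release (like the bk-13-2 instance named above) can be created as follows; this is a sketch, and the moby/buildkit image tag is just an example:

# Create a named docker-container builder running a pinned BuildKit version:
docker buildx create --name bk-13-2 \
  --driver docker-container \
  --driver-opt image=moby/buildkit:v0.13.2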

More interestingly, regardless of the docker vs docker-container driver: when the mode parameter is present, the cache mount persists instead of being cleared by --no-cache. Without it, you will notice the cache mount is always cleared by --no-cache.

The build output will indicate the cache mount persistence with this additional info:

#5 [internal] settings cache mount permissions
#5 CACHED

#4936 references a line that uses all of these options as inputs, which might explain their relation to this behaviour:

return st.File(llb.Mkdir("/cache", mode, llb.WithUIDGID(uid, gid)), llb.WithCustomName("[internal] settings cache mount permissions"))

Presumably a bug? (I would actually like this flexibility to retain cache mounts with --no-cache)


Each builder instance maintains its own cache mount. The mode contributes to an implicit cache key, but it is unclear why it breaks --no-cache expectations. Setting an explicit id did not persist the cache with --no-cache the way mode does.
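
For reference, an explicit id on a cache mount looks like the following sketch (the id value and target here are arbitrary examples):

RUN --mount=type=cache,id=example-cache,target=/example \
  ls /example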

I assume this inconsistency is related to the COPY --chmod behaviour (although mode= here has stricter validation).

TBH, I would prefer this distinction of not clearing cache mounts when I only want a clean image build, especially since cache mounts can be shared across Dockerfile builds, so I like this behaviour. A separate control to temporarily ignore cache mounts would make more sense than discarding them (which was unexpected for me when I first encountered it).

jackblackevo (Author) commented:

Sorry, my reproduction method was mixed with npm-related information, which prevented the issue from being presented simply. I referred to your Dockerfile and made some modifications to demonstrate the problem I encountered.

Reproduce

FROM alpine

RUN --mount=type=cache,target=/example <<HEREDOC
  echo "Octal permissions are: $(stat -c %a /example)"
  ls /example # The files should be listed during the second build.
  touch /example/i-should-persist
HEREDOC

RUN --mount=type=cache,target=/example \
  ls /example

First run docker build --progress=plain --no-cache .

#5 [stage-0 2/3] RUN --mount=type=cache,target=/example <<HEREDOC (echo "Octal permissions are: $(stat -c %a /example)"...)
#5 0.217 Octal permissions are: 755
#5 DONE 0.2s

#6 [stage-0 3/3] RUN --mount=type=cache,target=/example   ls /example
#6 0.396 i-should-persist
#6 DONE 0.4s
Full output
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 322B done
#1 DONE 0.0s

#2 [internal] load metadata for docker.io/library/alpine:latest
#2 DONE 0.9s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [stage-0 1/3] FROM docker.io/library/alpine:latest@sha256:0a4eaa0eecf5f8c050e5bba433f58c052be7587ee8af3e8b3910ef9ab5fbe9f5
#4 resolve docker.io/library/alpine:latest@sha256:0a4eaa0eecf5f8c050e5bba433f58c052be7587ee8af3e8b3910ef9ab5fbe9f5 0.0s done
#4 CACHED

#5 [stage-0 2/3] RUN --mount=type=cache,target=/example <<HEREDOC (echo "Octal permissions are: $(stat -c %a /example)"...)
#5 0.217 Octal permissions are: 755
#5 DONE 0.2s

#6 [stage-0 3/3] RUN --mount=type=cache,target=/example   ls /example
#6 0.396 i-should-persist
#6 DONE 0.4s

#7 exporting to image
#7 exporting layers 0.1s done
#7 exporting manifest sha256:c0b0223d57a59638875bbf424dd55f42dd48482b020a4cf65a7c04803d4bbb77 done
#7 exporting config sha256:7e8501e6376c71812f3c44caf51740ace6bc8d16883501cee9bb1b50fa89a759 done
#7 exporting attestation manifest sha256:36666be511f8705a8037b6bf7b575fb59367012e4a69a83b91df484e7b26f99f 0.0s done
#7 exporting manifest list sha256:0f232347c2828b027c1f939a4dea132f40968a6ad6d5f3ba65582950c07e97ef done
#7 naming to moby-dangling@sha256:0f232347c2828b027c1f939a4dea132f40968a6ad6d5f3ba65582950c07e97ef done
#7 unpacking to moby-dangling@sha256:0f232347c2828b027c1f939a4dea132f40968a6ad6d5f3ba65582950c07e97ef done
#7 DONE 0.1s

Second run docker build --progress=plain --no-cache .

#5 [stage-0 2/3] RUN --mount=type=cache,target=/example <<HEREDOC (echo "Octal permissions are: $(stat -c %a /example)"...)
#5 0.213 Octal permissions are: 755
#5 DONE 0.2s

#6 [stage-0 3/3] RUN --mount=type=cache,target=/example   ls /example
#6 0.345 i-should-persist
#6 DONE 0.4s
Full output
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 322B done
#1 DONE 0.0s

#2 [internal] load metadata for docker.io/library/alpine:latest
#2 DONE 0.9s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [stage-0 1/3] FROM docker.io/library/alpine:latest@sha256:0a4eaa0eecf5f8c050e5bba433f58c052be7587ee8af3e8b3910ef9ab5fbe9f5
#4 resolve docker.io/library/alpine:latest@sha256:0a4eaa0eecf5f8c050e5bba433f58c052be7587ee8af3e8b3910ef9ab5fbe9f5 0.0s done
#4 CACHED

#5 [stage-0 2/3] RUN --mount=type=cache,target=/example <<HEREDOC (echo "Octal permissions are: $(stat -c %a /example)"...)
#5 0.213 Octal permissions are: 755
#5 DONE 0.2s

#6 [stage-0 3/3] RUN --mount=type=cache,target=/example   ls /example
#6 0.345 i-should-persist
#6 DONE 0.4s

#7 exporting to image
#7 exporting layers 0.1s done
#7 exporting manifest sha256:8778cffbb808d216a378ebe024b8016b6ea940b8b1d62bcd0c3c806d30e08b32 done
#7 exporting config sha256:8bb2ab898e12464c7479e4403d6b495f2c12fdf93e64176bb28ef40a97aa340b done
#7 exporting attestation manifest sha256:69752bcfb3be6f251001fcf7bbcd028ce7819378dca9ba6a8e99b155ebe86c64 0.0s done
#7 exporting manifest list sha256:f58283825ceb734866cb28c8b658600df081f651d6855981c94cf5f910be2b0f done
#7 naming to moby-dangling@sha256:f58283825ceb734866cb28c8b658600df081f651d6855981c94cf5f910be2b0f done
#7 unpacking to moby-dangling@sha256:f58283825ceb734866cb28c8b658600df081f651d6855981c94cf5f910be2b0f done
#7 DONE 0.1s

Expected behavior

The cache mount should work correctly with the default mode (0755), as per the documentation.

It should work the same as:

FROM alpine

RUN --mount=type=cache,mode=0755,target=/example <<HEREDOC
  echo "Octal permissions are: $(stat -c %a /example)"
  ls /example # The files should be listed during the second build.
  touch /example/i-should-persist
HEREDOC

RUN --mount=type=cache,mode=0755,target=/example \
  ls /example

First run docker build --progress=plain --no-cache .

#6 [stage-0 2/3] RUN --mount=type=cache,mode=0755,target=/example <<HEREDOC (echo "Octal permissions are: $(stat -c %a /example)"...)
#6 0.208 Octal permissions are: 755
#6 DONE 0.2s

#7 [stage-0 3/3] RUN --mount=type=cache,mode=0755,target=/example   ls /example
#7 0.397 i-should-persist
#7 DONE 0.4s
Full output
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 342B done
#1 DONE 0.0s

#2 [internal] load metadata for docker.io/library/alpine:latest
#2 DONE 0.9s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] settings cache mount permissions
#4 CACHED

#5 [stage-0 1/3] FROM docker.io/library/alpine:latest@sha256:0a4eaa0eecf5f8c050e5bba433f58c052be7587ee8af3e8b3910ef9ab5fbe9f5
#5 resolve docker.io/library/alpine:latest@sha256:0a4eaa0eecf5f8c050e5bba433f58c052be7587ee8af3e8b3910ef9ab5fbe9f5 0.0s done
#5 CACHED

#6 [stage-0 2/3] RUN --mount=type=cache,mode=0755,target=/example <<HEREDOC (echo "Octal permissions are: $(stat -c %a /example)"...)
#6 0.208 Octal permissions are: 755
#6 DONE 0.2s

#7 [stage-0 3/3] RUN --mount=type=cache,mode=0755,target=/example   ls /example
#7 0.397 i-should-persist
#7 DONE 0.4s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:9e1c5b35379b5b148f66016c5a2a01a22d0f15dd51126951eebbcf794f5afbde done
#8 exporting config sha256:9463528d8b827adf20638e0a990b8bb423d6c25a292f56358402988d46388156 done
#8 exporting attestation manifest sha256:2e35f0d403adac720049241ad1ce7593874c854b778229dc6433458537a8ffb9 0.0s done
#8 exporting manifest list sha256:31d24f1a4b482ec5ccc9af1be977b944119530d0e6c0b4d17b2901bb84db294c done
#8 naming to moby-dangling@sha256:31d24f1a4b482ec5ccc9af1be977b944119530d0e6c0b4d17b2901bb84db294c done
#8 unpacking to moby-dangling@sha256:31d24f1a4b482ec5ccc9af1be977b944119530d0e6c0b4d17b2901bb84db294c done
#8 DONE 0.1s

Second run docker build --progress=plain --no-cache .

Please note #6 0.211 i-should-persist:

#6 [stage-0 2/3] RUN --mount=type=cache,mode=0755,target=/example <<HEREDOC (echo "Octal permissions are: $(stat -c %a /example)"...)
#6 0.211 Octal permissions are: 755
#6 0.211 i-should-persist
#6 DONE 0.2s

#7 [stage-0 3/3] RUN --mount=type=cache,mode=0755,target=/example   ls /example
#7 0.314 i-should-persist
#7 DONE 0.3s
Full output
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 342B done
#1 DONE 0.0s

#2 [internal] load metadata for docker.io/library/alpine:latest
#2 DONE 0.9s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] settings cache mount permissions
#4 CACHED

#5 [stage-0 1/3] FROM docker.io/library/alpine:latest@sha256:0a4eaa0eecf5f8c050e5bba433f58c052be7587ee8af3e8b3910ef9ab5fbe9f5
#5 resolve docker.io/library/alpine:latest@sha256:0a4eaa0eecf5f8c050e5bba433f58c052be7587ee8af3e8b3910ef9ab5fbe9f5 0.0s done
#5 CACHED

#6 [stage-0 2/3] RUN --mount=type=cache,mode=0755,target=/example <<HEREDOC (echo "Octal permissions are: $(stat -c %a /example)"...)
#6 0.211 Octal permissions are: 755
#6 0.211 i-should-persist
#6 DONE 0.2s

#7 [stage-0 3/3] RUN --mount=type=cache,mode=0755,target=/example   ls /example
#7 0.314 i-should-persist
#7 DONE 0.3s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:25ffeebae1668915e10545e8bf3d5f0a03f01d801efea57c34acbe9b63636bd9 done
#8 exporting config sha256:afa660594db693bfd8577729640d69d0f805dfb08e9caf9f958eb8e347b80ed8 done
#8 exporting attestation manifest sha256:3079f5f105ea8dd30f0c4ec6747f05edb664338d5886746f0bb487b6efd26fdd 0.0s done
#8 exporting manifest list sha256:02425dd91320b3f293af3ea01d44253463b671f7259bdd461c18128ff352912a done
#8 naming to moby-dangling@sha256:02425dd91320b3f293af3ea01d44253463b671f7259bdd461c18128ff352912a done
#8 unpacking to moby-dangling@sha256:02425dd91320b3f293af3ea01d44253463b671f7259bdd461c18128ff352912a done
#8 DONE 0.1s

polarathene (Contributor) commented:

to demonstrate the problem I encountered.

Thanks for putting together the expected vs actual outcomes 👍

I have only ever seen --no-cache discard the cache mount. I agree with you; I would prefer --no-cache to only discard stages/layers of the Dockerfile, and not cache mounts, which can be shared across builds / Dockerfiles.

Inspecting the cache mounts, we can observe that the ones with the mode/uid/gid options added persist with the same ID, while without those options --no-cache will prune the cache mount and a new one with a different ID will take its place:

# List available cache mounts:
docker builder du --builder default --filter 'type=exec.cachemount' --verbose

While you can remove any of those via prune by leveraging --filter, there is no equivalent for --no-cache; the separate --no-cache-filter option only filters stages.
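
As a sketch, mirroring the filter syntax used above, the cache mounts alone can be pruned without touching the rest of the build cache:

# Prune only cache mounts, leaving the regular layer cache intact:
docker builder prune --filter 'type=exec.cachemount'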

In the buildx 0.16.2 release (July 2024), the --filter syntax added support for negation, which might be useful as --filter 'type!=exec.cachemount', but it might be too late for --no-cache-filter to support that. I can understand wanting to temporarily ignore a cache mount (rather than discard the previous one during a --no-cache build), as well as the more common expectation of it persisting with --no-cache.

I suppose the outcome depends on what --no-cache is generally expected to imply for an image build. That may be to ignore the cache mount, but I doubt it was ever expected to discard it completely.

tonistiigi (Member) commented:

@thaJeztah I assume this is related to host umask leaking to files created by dockerd. I think it should use the host umask like buildkitd.
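
(For context on the arithmetic: a umask clears the masked bits from the requested mode, i.e. mode & ~umask, which lines up with the 773 => 751 observation above:)

# Requested mode 0773 under umask 022 yields 0751:
$ printf '%o\n' $(( 0773 & ~0022 ))
751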

polarathene (Contributor) commented Aug 22, 2024

related to host umask

That is a separate issue with the mode option when used with the docker builder driver.


The main issue here is with the mount options mode, uid, and gid, and it affects both the docker and docker-container builder drivers:

  • When any of them is present, the cache mount is persisted with --no-cache (desired).
  • When none of those options are present, --no-cache will prune/replace the existing cache mount (undesirable).

Also note that any change in those options' values will create a new cache mount (unclear if that is a bug), even when the target / id is the same.
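
A minimal sketch of that last point (the id and target are arbitrary examples; only uid differs between the two mounts):

RUN --mount=type=cache,id=shared,target=/example,uid=0 touch /example/from-root
# The following ls may come up empty, since the changed uid resolves to a separate cache mount:
RUN --mount=type=cache,id=shared,target=/example,uid=1000 ls /example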

tonistiigi (Member) commented:

@polarathene Expected behavior is that the cache mounts that would be used by the build are removed if --no-cache is set (#1092).

polarathene (Contributor) commented:

Expected behavior is that the cache mounts that would be used by the build are removed if --no-cache is set

Ok thanks :)

The behaviour is still inconsistent when those mount options are present, along with the id option no longer uniquely identifying a mount. I assume those are bugs.


Below is for reference, regarding related concerns on the topic.

From the PR description:

No-cache doesn't really work in combination with cache mounts as it is solver level and caches instructions. But the behavior is non-obvious from the user perspective.
So if no-cache is used in a run operation that uses cache mounts, all cache volumes with the matching id are released before the build even starts now.

From associated issue it resolves: docker/buildx#109 (comment)

This is a bit tricky cause cache mounts are global and not tied to a specific build request. I see how this can be confusing though so hopefully there is a solution.

From related issue: moby/moby#38255 (comment)

RUN --mount=type=cache got somehow corrupt and i can't seem to clean it...
I was expecting --no-cache to avoid using the host cache and regenerate it... which it doesn't

That corruption concern is still being expressed in 2024, as there is no proper support for querying the appropriate cache mount to prune.

As stated, --no-cache does not clear the cache mount under certain conditions I detailed in my previous comment, regardless of the builder driver. It is already inconsistent in behaviour.

Running a build with --no-cache should not prune the mount for the very reason that it can be valuable as a global cache mount shared across Dockerfiles.

  • --no-cache should allow for the possibility to ignore the cache mount and substitute a temporary one that is discarded after the build.
  • For running with --no-cache while discarding the build layers but not the cache mounts, there is the 2019 --no-cache-filter request (support only landed in Nov 2021, for filtering by stage; see the usage sketch after this list): Add --no-cache-filter to disable cache per target and per cache mount #1213
  • Proper pruning support (such as adding id or last-used metadata to --filter) handles the explicit-removal case, or a user can use the existing --filter support to nuke all cache mounts. At the very least, a JSON output option for builder du would let users build flexible filters.
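
A usage sketch for --no-cache-filter as it landed (the stage names deps and build are hypothetical):

# Invalidate the layer cache only for the named stages, keeping everything else cached:
docker buildx build --no-cache-filter=deps --no-cache-filter=build .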

tonistiigi (Member) commented:

--no-cache should allow for the possibility to ignore the cache mount and substitute with a temporary one that is discarded after the build.

That is not the behavior of --no-cache (with or without cache mounts). --no-cache ignores the previous cache, and the new cache generated by the build is the one used by subsequent builds.

jackblackevo (Author) commented:

@polarathene Expected behavior is that the cache mounts that would be used by the build are removed if --no-cache is set #1092.

Where can I find the relevant documentation? I was reading about the --no-cache option in the Docker docs, but it seems to only mention the layer cache. Thank you!

polarathene (Contributor) commented Aug 22, 2024

That is not the behavior of --no-cache (with or without cache-mounts).

I understand that, hence the mention of the --no-cache-filter request from 2019 to have the ability to discard only the layer cache. Temporarily replacing the cache mounts with others is easy enough by changing the id option, and this could be done via an ARG / --build-arg as a workaround, so as not to destroy a global cache that would otherwise be wastefully lost to a --no-cache run.
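
A minimal sketch of that workaround (names are illustrative, and build-arg expansion in RUN --mount options is assumed to be supported by the Dockerfile frontend in use):

ARG CACHE_ID=default
RUN --mount=type=cache,id=example-${CACHE_ID},target=/example \
  ls /example

Passing a fresh value, e.g. docker buildx build --build-arg CACHE_ID=tmp1 --no-cache ., would point the build at a new, empty cache mount while leaving the original untouched.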

Presently, --no-cache does not do what you say it should when the mode, uid, or gid options are set. Do you want me to raise a new bug report specifically about that, or is the one here sufficient?

@tonistiigi tonistiigi transferred this issue from moby/moby Sep 5, 2024
@tonistiigi tonistiigi changed the title from "Inconsistent behavior of --mount=type=cache when mode is not explicitly specified" to "No-cache not working for --mount=type=cache when mode is specified" Sep 5, 2024
@thompson-shaun thompson-shaun added this to the v0.future milestone Sep 6, 2024
thompson-shaun (Collaborator) commented Sep 6, 2024

👋 @polarathene. We've transferred this issue to the BuildKit repo and will be tracking it here for now. Thanks!

#5306 opened.
