
Run image extension does not install RPMs that are already installed in the builder #1285

Open
yters opened this issue Feb 6, 2024 · 5 comments
Labels
status/awaiting-response Further information is requested type/bug Something isn't working

Comments


yters commented Feb 6, 2024

Summary

When a run.Dockerfile extension uses `RUN dnf install` to add RPMs to the run image, any RPM that is already installed in the builder image is skipped and never installed in the run image. This is a problem because the run image does not actually contain the RPMs that are present in the builder image.
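
For illustration, the "already installed" behavior can be seen directly (a hypothetical check, assuming the builder image built in the steps below): because python3.11 is already installed in the builder image, microdnf finds nothing to install.

$ docker run --rm --user root <path>/build-base-python3.11:debug microdnf install python3.11
...
Nothing to do.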


Reproduction

Steps
  1. Create a build image named <path>/build-base-python3.11:debug from the following Dockerfile.
FROM registry.access.redhat.com/ubi8-minimal:8.8

ARG STACK_ID="debug"

ENV CNB_USER_ID=${CNB_UID:-1000}
ENV CNB_GROUP_ID=${CNB_GID:-1000}

RUN microdnf update
RUN microdnf install python3.11

RUN groupadd cnb --gid ${CNB_GROUP_ID} 
RUN useradd --uid ${CNB_USER_ID} --gid ${CNB_GROUP_ID} -m -s /bin/bash cnb 

USER cnb
LABEL io.buildpacks.stack.id=${STACK_ID}
ENV CNB_STACK_ID=${STACK_ID}
ENV CNB_USER_ID=${CNB_UID:-1000}
ENV CNB_GROUP_ID=${CNB_GID:-1000}
  2. Create a run image named <path>/run-base:debug from the following Dockerfile.
FROM registry.access.redhat.com/ubi8-minimal:8.8

ARG STACK_ID="debug"

ENV APP_USER_ID=${APP_UID:-31460}
ENV APP_GROUP_ID=${APP_GID:-31460}

LABEL io.buildpacks.stack.id=${STACK_ID}
ENV CNB_STACK_ID=${STACK_ID}
  3. Create an extension with the following files, named <path>/extension-dnf-install:debug.
    extension.toml
api = "0.10"

[extension]
  id = "dnf-install"
  name = "dnf python3.11 RPM installer"
  description = "Extension that installs python3.11 RPM in the run image."
  version = "debug"

[[targets]]
os = "linux"
arch = "amd64"

generate/run.Dockerfile

ARG base_image
FROM ${base_image}

USER root

RUN microdnf install python3.11

ARG user_id
USER \${user_id}
  4. Create a buildpack from the hello-world example buildpack, named <path>/buildpack-hello-world:debug.
  5. Create a builder with the following builder.toml, named <path>/builder-python3.11:debug.
[[extensions]]
uri = "<path>/extension-dnf-install:debug"
id = "dnf-install"

[[order-extensions]]
[[order-extensions.group]]
id = "dnf-install"
version = "debug"

[[buildpacks]]
uri = "<path>/buildpack-hello-world:debug"
id  = "hello-world"

[[order]]
[[order.group]]
id = "hello-world"
version = "debug"

[stack]
id = "debug"
run-image = "<path>/run-base:debug"
build-image = "<path>/build-base-python3.11:debug"
  6. Use the lifecycle phases in the builder-python3.11:debug image to create an image. Create the following files in the current directory.
    config/run.toml
[[images]]
image = "<path>/run-base:debug"

source/hello.sh

#!/bin/sh

echo hello

Then run the following commands to execute the lifecycle phases.

export image=<path>/builder-python3.11:debug
docker run --user root -v ~/.docker:/workspace/dockerconfig -v $(pwd)/source:/workspace/source -v $(pwd)/config:/workspace/config -it --rm $image /bin/bash
# The following commands are run inside the container started by the preceding `docker run` command.
mkdir /kaniko
export run_image=<path>/stack-rhel8-run/amd64:debug
export image=<path>/test:debug
DOCKER_CONFIG=/workspace/dockerconfig CNB_PLATFORM_API=0.12 /cnb/lifecycle/analyzer -cache-image=$image -uid=1000 -gid=1000 -run-image $run_image $image
su cnb
CNB_PLATFORM_API=0.12 CNB_EXPERIMENTAL_MODE=silent /cnb/lifecycle/detector -app=/workspace/source/ -run=/workspace/config/run.toml
exit
DOCKER_CONFIG=/workspace/dockerconfig CNB_PLATFORM_API=0.12 /cnb/lifecycle/restorer -cache-image=$image -uid=1000 -gid=1000
mkdir /layers/extended
DOCKER_CONFIG=/workspace/dockerconfig CNB_PLATFORM_API=0.12 CNB_EXPERIMENTAL_MODE=silent /cnb/lifecycle/extender -app=/workspace/source -kind=run -extended=/layers/extended

You'll see a result like this.

$ export image=<path>/builder-python3.11:debug
$ docker run --user root -v ~/.docker:/workspace/dockerconfig -v $(pwd)/source:/workspace/source -v $(pwd)/config:/workspace/config -it --rm $image /bin/bash
[root@102b97e9b6a4 layers]# mkdir kaniko                                                                                     
[root@102b97e9b6a4 layers]# rm -rf kaniko/               
[root@102b97e9b6a4 layers]# mkdir /kaniko                
[root@102b97e9b6a4 layers]# export run_image=<path>/stack-rhel8-run/amd64:debug
[root@102b97e9b6a4 layers]# export image=<path>/test:debug
[root@102b97e9b6a4 layers]# DOCKER_CONFIG=/workspace/dockerconfig CNB_PLATFORM_API=0.12 /cnb/lifecycle/analyzer -cache-image=$image -uid=1000 -gid=1000 -run-image $run_image $image
Timer: Analyzer started at 2024-02-06T17:49:04Z         
Image with name "<path>/test:debug" not found
Timer: Analyzer ran for 1.438062316s and ended at 2024-02-06T17:49:05Z
[root@102b97e9b6a4 layers]# su cnb                                                                                           
[cnb@102b97e9b6a4 layers]$ CNB_PLATFORM_API=0.12 CNB_EXPERIMENTAL_MODE=silent /cnb/lifecycle/detector -app=/workspace/source/ -run=/workspace/config/run.toml
Warning: Platform requested experimental feature 'Dockerfiles' 
Timer: Detector started at 2024-02-06T17:49:11Z                                                                              
dnf-install debug
hello-world debug                                                                                                            
Timer: Detector ran for 4.60913ms and ended at 2024-02-06T17:49:11Z
Timer: Generator started at 2024-02-06T17:49:11Z        
Timer: Generator ran for 841.845µs and ended at 2024-02-06T17:49:11Z
[cnb@102b97e9b6a4 layers]$ exit
exit
[root@102b97e9b6a4 layers]# DOCKER_CONFIG=/workspace/dockerconfig CNB_PLATFORM_API=0.12 /cnb/lifecycle/restorer -cache-image=$image -uid=1000 -gid=1000
Timer: Restorer started at 2024-02-06T17:49:21Z          
Layer cache not found                                                                                                        
Timer: Restorer ran for 1.114690805s and ended at 2024-02-06T17:49:22Z
[root@102b97e9b6a4 layers]# mkdir /layers/extended                                                                           
[root@102b97e9b6a4 layers]# DOCKER_CONFIG=/workspace/dockerconfig CNB_PLATFORM_API=0.12 CNB_EXPERIMENTAL_MODE=silent /cnb/lifecycle/extender -app=/workspace/source -kind=run -extended=/layers/extended
INFO[0000] Built cross stage deps: map[]                
INFO[0000] Executing 0 build triggers                   
INFO[0000] Building stage 'base@sha256:79db881f1fa031623c738a9f0cf7a3dce4ceb2e0183c17d6364bd5b3cf0b3f02' [idx: '0', base-idx: '-1']
INFO[0000] Cmd: USER                                     
INFO[0000] Checking for cached layer oci:/kaniko/cache/layers/cached:eba5636260b56b6f0c94d54b9a764036297f933fa03d198068c41c6e723c055a...
INFO[0000] No cached layer found for cmd RUN microdnf install python3.11 
INFO[0000] Cmd: USER                                     
INFO[0000] Unpacking rootfs as cmd RUN microdnf install python3.11 requires it. 
INFO[0000] Skipping unpacking as no commands require it. 
INFO[0000] USER root                                     
INFO[0000] Cmd: USER                                     
INFO[0000] No files changed in this command, skipping snapshotting. 
INFO[0000] RUN microdnf install python3.11              
INFO[0000] Initializing snapshotter ...                 
INFO[0000] Taking snapshot of full filesystem...        
INFO[0000] Cmd: /bin/sh                                  
INFO[0000] Args: [-c microdnf install python3.11]       
INFO[0000] Util.Lookup returned: &{Uid:0 Gid:0 Username:root Name:root HomeDir:/root} 
INFO[0000] Performing slow lookup of group ids for root 
INFO[0000] Running: [/bin/sh -c microdnf install python3.11] 

(microdnf:72): librhsm-WARNING **: 17:49:30.110: Found 0 entitlement certificates

(microdnf:72): librhsm-WARNING **: 17:49:30.111: Found 0 entitlement certificates
Nothing to do.
INFO[0000] Taking snapshot of full filesystem...        
INFO[0000] ARG user_id                                   
INFO[0000] Pushing layer oci:/kaniko/cache/layers/cached:eba5636260b56b6f0c94d54b9a764036297f933fa03d198068c41c6e723c055a to cache now
INFO[0000] No files changed in this command, skipping snapshotting. 
INFO[0000] USER \${user_id}                              
INFO[0000] Cmd: USER                      
INFO[0000] No files changed in this command, skipping snapshotting. 
INFO[0000] Skipping push to container registry due to --no-push flag 
Warning: The original user ID was app but the final extension left the user ID set to ${user_id}.
Timer: Extender ran for 965.770627ms and ended at 2024-02-06T17:49:30Z

Current behavior

In the extend phase, when it is time to install python3.11, the log shows:

INFO[0000] Running: [/bin/sh -c microdnf install python3.11] 
Nothing to do.

despite python3.11 not being installed in the run image.

Expected behavior

I expect python3.11 to be installed in the extended run image, since it is not present in the base run image.

Context

lifecycle version

Lifecycle version is 0.17.2.

platform version(s)
$ pack report
Pack:
  Version:  0.32.1+git-b14250b.build-5241
  OS/Arch:  linux/amd64

Default Lifecycle Version:  0.17.2

Supported Platform APIs:  0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0.10, 0.11, 0.12

Config:
  experimental = true
  lifecycle-image = "buildpacksio/lifecycle:0.17.1"
  layout-repo-dir = "/home/<redacted>/.pack/layout-repo"
$ docker info
Client: Docker Engine - Community
 Version:    25.0.1
 Context:    default
 Debug Mode: false                                            
 Plugins:                                                     
  buildx: Docker Buildx (Docker Inc.)
    Version:  v0.12.1
    Path:     /usr/libexec/docker/cli-plugins/docker-buildx
  compose: Docker Compose (Docker Inc.)
    Version:  v2.24.2
    Path:     /usr/libexec/docker/cli-plugins/docker-compose
                                                              
Server:           
 Containers: 35     
  Running: 1         
  Paused: 0  
  Stopped: 34               
 Images: 51
 Server Version: 25.0.1                          
 Storage Driver: overlay2
  Backing Filesystem: btrfs
  Supports d_type: true
  Using metacopy: false
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: systemd
 Cgroup Version: 2
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local splunk syslog
 Swarm: inactive
 Runtimes: runc io.containerd.runc.v2
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: a1496014c916f9e62104b33d1bb5bd03b0858e59
 runc version: v1.1.11-0-g4bccb38
 init version: de40ad0
 Security Options:
  seccomp
   Profile: builtin
  cgroupns
 Kernel Version: 6.6.13-100.fc38.x86_64
 Operating System: Fedora Linux 38 (Workstation Edition)
 OSType: linux
 Architecture: x86_64
 CPUs: 16
 Total Memory: 31.03GiB
 Name: fedora
 ID: 6a61b820-9dc5-4e48-9607-b2df128276e9
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false
@yters yters added status/triage type/bug Something isn't working labels Feb 6, 2024
natalieparellano (Member) commented Feb 6, 2024

@yters, looking at your code above, I think the issue is that you are trying to extend the run image in the context of the build image. So in this snippet:

export image=<path>/builder-python3.11:debug
docker run --user root -v ~/.docker:/workspace/dockerconfig -v $(pwd)/source:/workspace/source -v $(pwd)/config:/workspace/config -it --rm $image /bin/bash
...

Where $image is the builder image, there is only one `docker run` instruction. There should be at least three: one for analyze/detect/restore/extend-build that uses $image, one for extend-run that uses $run_image, and one for export (after both extend phases have completed) that uses $image. You would mount the same directories across all three invocations. If it's helpful, here is how we do it in pack.
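
A rough sketch of those three invocations, assuming the same volume mounts as in the reproduction above (the lifecycle commands run inside each container are elided; this is an outline, not a verified script):

# 1. analyze / detect / restore / extend (build kind), inside the builder image
docker run --user root -v ~/.docker:/workspace/dockerconfig -v $(pwd)/source:/workspace/source -v $(pwd)/config:/workspace/config -it --rm $image /bin/bash

# 2. extend (run kind), inside the run image; the lifecycle binaries must be made available there
docker run --user root -v ~/.docker:/workspace/dockerconfig -v $(pwd)/source:/workspace/source -v $(pwd)/config:/workspace/config -it --rm $run_image /bin/bash

# 3. export, back inside the builder image, after both extend phases have completed
docker run --user root -v ~/.docker:/workspace/dockerconfig -v $(pwd)/source:/workspace/source -v $(pwd)/config:/workspace/config -it --rm $image /bin/bash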

You may be interested in this (soon to be implemented) RFC which would make it easier to share things across the builder image and the run image: buildpacks/rfcs#301

@natalieparellano natalieparellano added status/awaiting-response Further information is requested and removed status/triage labels Feb 6, 2024

yters commented Feb 7, 2024

Thanks for the response.

The `docker run` command starts a terminal inside the builder container, and the lifecycle phase commands are run inside that container. I don't see how using the run image instead would work, since the run image doesn't have the lifecycle commands:

$ docker run --user root -it --rm <path>/run-base:debug /bin/sh
sh-4.4# ls /cnb/lifecycle
ls: cannot access '/cnb/lifecycle': No such file or directory

If I were to use the pack command to extend the run image instead of invoking the lifecycle phase commands directly, should this work? I.e., would the RPMs in the builder image no longer conflict with the run image?
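
For concreteness, I mean something like the following (image names taken from the reproduction above; the exact flags are my guess at the usual pack invocation):

$ pack build <path>/test:debug --builder <path>/builder-python3.11:debug --path source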

natalieparellano (Member) commented

> the run image doesn't have the lifecycle commands

Ah yes, you are right. In the case of pack we download the lifecycle layer from the "lifecycle image" and append that to the run image before performing run image extension. You could accomplish something similar with volume mounts for local testing.
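
For example, something along these lines (a rough local-testing sketch, not how pack does it internally; the image names follow the reproduction above):

# copy /cnb/lifecycle out of the builder image, then mount it into the run image container
docker create --name tmp-builder <path>/builder-python3.11:debug
docker cp tmp-builder:/cnb/lifecycle ./lifecycle
docker rm tmp-builder
docker run --user root -v $(pwd)/lifecycle:/cnb/lifecycle -v ~/.docker:/workspace/dockerconfig -v $(pwd)/source:/workspace/source -v $(pwd)/config:/workspace/config -it --rm <path>/run-base:debug /bin/bash
# /cnb/lifecycle/extender -kind=run ... should now be runnable inside this container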


yters commented Feb 8, 2024

Thanks, that's a good idea. I'll test it out, and close this ticket if successful.

natalieparellano (Member) commented

Any further update here? Can we close this issue?
