Switch default base image, adjust other base images #7

Open
wants to merge 1 commit into base: main
216 changes: 108 additions & 108 deletions .github/workflows/build.yml
@@ -18,8 +18,8 @@ jobs:
version-changed: ${{ steps.version-metadata.outputs.changed }}
new-version: ${{ steps.version-metadata.outputs.newVersion }}
steps:
- uses: actions/checkout@v4
- uses: Quantco/ui-actions/version-metadata@v1
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- uses: Quantco/ui-actions/version-metadata@cd71d2a0e30b25569f6d723e57acca83347e58fc # v1.0.18
id: version-metadata
with:
file: Dockerfile
@@ -39,131 +39,131 @@ jobs:
fail-fast: false
matrix:
base-image:
- debian:bookworm-slim # 12
- debian:bookworm # 12
- debian:bullseye-slim # 11
- debian:bullseye # 11
- ubuntu:noble # 24.04
- ubuntu:mantic # 23.10
- ubuntu:jammy # 22.04
- ubuntu:focal # 20.04
- nvidia/cuda:12.6.3-base-ubuntu24.04
- nvidia/cuda:12.6.3-base-ubuntu22.04
- nvidia/cuda:12.6.1-base-ubuntu24.04
- nvidia/cuda:12.6.1-base-ubuntu22.04
- nvidia/cuda:12.3.1-base-ubuntu22.04
- nvidia/cuda:12.3.1-base-ubuntu20.04
- nvidia/cuda:12.2.2-base-ubuntu22.04
- nvidia/cuda:12.2.2-base-ubuntu20.04
- nvidia/cuda:12.1.1-base-ubuntu22.04
- nvidia/cuda:12.1.1-base-ubuntu20.04
- nvidia/cuda:11.8.0-base-ubuntu22.04
- nvidia/cuda:11.8.0-base-ubuntu20.04
- nvidia/cuda:11.7.1-base-ubuntu22.04
- nvidia/cuda:11.7.1-base-ubuntu20.04
- nvidia/cuda:11.6.2-base-ubuntu20.04
- nvidia/cuda:11.4.3-base-ubuntu20.04
- nvidia/cuda:11.3.1-base-ubuntu20.04
- nvidia/cuda:11.2.2-base-ubuntu20.04
# https://hub.docker.com/_/debian
- debian:bookworm-slim # 12
- debian:bookworm # 12
- debian:bullseye-slim # 11
- debian:bullseye # 11
# https://hub.docker.com/_/ubuntu
- ubuntu:plucky # 25.04
- ubuntu:oracular # 24.10
- ubuntu:noble # 24.04
- ubuntu:jammy # 22.04
- ubuntu:focal # 20.04
# https://hub.docker.com/r/nvidia/cuda
- nvidia/cuda:12.6.3-base-ubuntu24.04
- nvidia/cuda:12.6.3-base-ubuntu22.04
- nvidia/cuda:12.6.3-base-ubuntu20.04
- nvidia/cuda:12.3.1-base-ubuntu22.04
- nvidia/cuda:12.3.1-base-ubuntu20.04
- nvidia/cuda:12.2.2-base-ubuntu22.04
- nvidia/cuda:12.2.2-base-ubuntu20.04
- nvidia/cuda:12.1.1-base-ubuntu22.04
- nvidia/cuda:12.1.1-base-ubuntu20.04
steps:
- name: Checkout source
uses: actions/checkout@v4
- name: Set image variables
id: image-variables
env:
IMAGE: ${{ matrix.base-image }}
run: |
import os
- name: Checkout source
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- name: Set image variables
id: image-variables
env:
IMAGE: ${{ matrix.base-image }}
run: |
import os

base_image = "${{ matrix.base-image }}"
if base_image.startswith("nvidia/cuda"):
code_names = {
"22.04": "jammy",
"20.04": "focal",
"23.10": "mantic",
"24.04": "noble",
}
ubuntu_version_number = base_image.split("-ubuntu")[-1]
base_tag = base_image.split(":")[-1]
cuda_version = base_tag.split("-")[0]
tag = f"{code_names[ubuntu_version_number]}-cuda-{cuda_version}"
platforms = "linux/amd64,linux/arm64"
else:
tag = base_image.split(":")[-1]
platforms = "linux/amd64,linux/arm64"
is_default = "true" if base_image == "${{ env.DEFAULT_BASE_IMAGE }}" else "false"
base_image = "${{ matrix.base-image }}"
if base_image.startswith("nvidia/cuda"):
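# Map the Ubuntu release number in the CUDA tag to its code name,
# e.g. nvidia/cuda:12.6.3-base-ubuntu24.04 -> tag "noble-cuda-12.6.3"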
code_names = {
"22.04": "jammy",
"20.04": "focal",
"24.04": "noble",
"24.10": "oracular",
"25.05": "plucky"
}
ubuntu_version_number = base_image.split("-ubuntu")[-1]
base_tag = base_image.split(":")[-1]
cuda_version = base_tag.split("-")[0]
tag = f"{code_names[ubuntu_version_number]}-cuda-{cuda_version}"
platforms = "linux/amd64,linux/arm64"
else:
tag = base_image.split(":")[-1]
platforms = "linux/amd64,linux/arm64"
is_default = "true" if base_image == "${{ env.DEFAULT_BASE_IMAGE }}" else "false"

GITHUB_OUTPUT = os.environ["GITHUB_OUTPUT"]
with open(GITHUB_OUTPUT, "a") as f:
f.write(f"tag={tag}\n")
f.write(f"platforms={platforms}\n")
f.write(f"is-default={is_default}\n")
shell: python
- name: Get docker metadata
id: metadata
uses: docker/metadata-action@369eb591f429131d6889c46b94e711f089e6ca96
with:
images: |-
ghcr.io/modular/magic
flavor: latest=false
# latest
# base-image
# major.minor.patch
# major.minor.patch-base-image
tags: |
GITHUB_OUTPUT = os.environ["GITHUB_OUTPUT"]
with open(GITHUB_OUTPUT, "a") as f:
f.write(f"tag={tag}\n")
f.write(f"platforms={platforms}\n")
f.write(f"is-default={is_default}\n")
shell: python
- name: Get docker metadata
id: metadata
uses: docker/metadata-action@369eb591f429131d6889c46b94e711f089e6ca96 # v5.6.1
with:
images: |-
ghcr.io/modular/magic
flavor: latest=false
# latest
# base-image
# major.minor.patch
# major.minor.patch-base-image
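# e.g. for the default base image and version 0.x.y: latest, noble, 0.x.y, 0.x.y-noble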
tags: |
type=raw,value=latest,priority=1000,enable=${{ steps.image-variables.outputs.is-default }}
type=raw,value=${{ steps.image-variables.outputs.tag }},priority=900
type=semver,pattern={{version}},enable=${{ steps.image-variables.outputs.is-default }},value=${{ needs.version.outputs.new-version }},priority=800
type=semver,pattern={{version}}-${{ steps.image-variables.outputs.tag }},value=${{ needs.version.outputs.new-version }},priority=500
- name: Setup docker buildx
uses: docker/setup-buildx-action@6524bf65af31da8d45b59e8c27de4bd072b392f5
- name: Login to GHCR
uses: docker/login-action@9780b0c442fbb1117ed29e0efdff1e18412f7567
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Build Docker images
id: build
uses: docker/build-push-action@48aba3b46d1b1fec4febb7c5d0c644b249a11355
with:
# provenance: false is needed to avoid unknown/unknown os/arch on ghcr
# see: https://github.com/docker/build-push-action/issues/820
provenance: false
platforms: ${{ steps.image-variables.outputs.platforms }}
push: ${{ needs.version.outputs.push == 'true' }}
build-args: |-
BASE_IMAGE=${{ matrix.base-image }}
tags: ${{ steps.metadata.outputs.tags }}
labels: ${{ steps.metadata.outputs.labels }}
- uses: actions/upload-artifact@v4
with:
name: ${{ steps.image-variables.outputs.tag }}
path: ${{ steps.metadata.outputs.bake-file }}
- name: Run tests
# buildx does not support outputting the image so we need to pull it and run tests
if: needs.version.outputs.push == 'true'
run: |
docker images
docker run --rm ghcr.io/modular/magic:${{ needs.version.outputs.new-version }}-${{ steps.image-variables.outputs.tag }} magic --version
docker run --rm ghcr.io/modular/magic:${{ needs.version.outputs.new-version }}-${{ steps.image-variables.outputs.tag }} sh -c "mkdir /app && cd /app && magic init && magic add python && magic run python --version"
- name: Image digest
run: echo ${{ steps.build.outputs.digest }}
- name: Setup docker buildx
uses: docker/setup-buildx-action@6524bf65af31da8d45b59e8c27de4bd072b392f5 # v3.8.0
- name: Login to GHCR
uses: docker/login-action@9780b0c442fbb1117ed29e0efdff1e18412f7567 # v3.3.0
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Build Docker images
id: build
uses: docker/build-push-action@b32b51a8eda65d6793cd0494a773d4f6bcef32dc # v6.11.0
with:
# provenance: false is needed to avoid unknown/unknown os/arch on ghcr
# see: https://github.com/docker/build-push-action/issues/820
provenance: false
platforms: ${{ steps.image-variables.outputs.platforms }}
push: ${{ needs.version.outputs.push == 'true' }}
build-args: |-
BASE_IMAGE=${{ matrix.base-image }}
tags: ${{ steps.metadata.outputs.tags }}
labels: ${{ steps.metadata.outputs.labels }}
- uses: actions/upload-artifact@65c4c4a1ddee5b72f698fdd19549f0f0fb45cf08 # v4.6.0
with:
name: ${{ steps.image-variables.outputs.tag }}
path: ${{ steps.metadata.outputs.bake-file }}
- name: Run tests
# buildx does not support outputting the image so we need to pull it and run tests
if: needs.version.outputs.push == 'true'
run: |
docker images
# Test the magic binary is available
docker run --rm ghcr.io/modular/magic:${{ needs.version.outputs.new-version }}-${{ steps.image-variables.outputs.tag }} magic --version
# Test end-to-end magic workflow
docker run --rm ghcr.io/modular/magic:${{ needs.version.outputs.new-version }}-${{ steps.image-variables.outputs.tag }} sh -c "mkdir /app && cd /app && magic init && magic add python && magic run python --version"
# Test magic global binaries are in PATH
docker run --rm ghcr.io/modular/magic:${{ needs.version.outputs.new-version }}-${{ steps.image-variables.outputs.tag }} sh -c "magic global install rsync && rsync --version"
- name: Image digest
run: echo ${{ steps.build.outputs.digest }}

release:
needs: [version, build]
runs-on: ubuntu-22.04
runs-on: ubuntu-latest
permissions:
contents: write
if: needs.version.outputs.push == 'true'
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- name: Push ${{ needs.version.outputs.new-version }} tag
run: |
git tag ${{ needs.version.outputs.new-version }}
git push origin ${{ needs.version.outputs.new-version }}
- name: Create release
uses: softprops/action-gh-release@v2
uses: softprops/action-gh-release@c95fe1489396fe8a9eb87c0abf8aa5b2ef267fda # v2.2.1
with:
generate_release_notes: true
tag_name: ${{ needs.version.outputs.new-version }}
12 changes: 6 additions & 6 deletions README.md
@@ -15,10 +15,10 @@ docker pull ghcr.io/modular/magic:latest

There are different tags for different base images available:

- `latest` - based on `ubuntu:jammy`
- `latest` - based on `ubuntu:noble`
- `focal` - based on `ubuntu:focal`
- `bullseye` - based on `debian:bullseye`
- `jammy-cuda-12.2.2` - based on `nvidia/cuda:12.2.2-jammy`
- `noble-cuda-12.6.3` - based on `nvidia/cuda:12.6.3-base-ubuntu24.04`
- ... and more
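
For example, to pull the default image or a specific variant (tag names follow the list above):

```sh
# default image (currently based on ubuntu:noble)
docker pull ghcr.io/modular/magic:latest

# Debian 12 based image
docker pull ghcr.io/modular/magic:bookworm
```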

## Usage with shell-hook
@@ -65,17 +65,17 @@ There are images based on `ubuntu`, `debian` and `nvidia/cuda` available.

### Ubuntu

The `ubuntu:jammy` (22.04) based image is the default base image. It is used for the `latest` and `0.x.y` tag.
The [`ubuntu:noble`](https://hub.docker.com/_/ubuntu) (24.04) based image is the default base image. It is used for the `latest` and `0.x.y` tags.

There are also images based on `ubuntu:focal` (20.04), `ubuntu:mantic` (23.10) and `ubuntu:noble` (24.04) available.
There are also images based on `ubuntu:focal` (20.04), `ubuntu:jammy` (22.04), `ubuntu:oracular` (24.10) and `ubuntu:plucky` (25.04) available.
These images use the tags `focal`, `0.x.y-focal`, ...

### Debian

Images based on `debian:bullseye`, `debian:bullseye-slim` (11), `debian:bookworm` and `debian:bookworm-slim` (12) are available.
Images based on [`debian:bullseye`](https://hub.docker.com/_/debian), `debian:bullseye-slim` (11), `debian:bookworm` and `debian:bookworm-slim` (12) are available.

These images have the tags `bullseye`, `0.x.y-bullseye`, ...

### NVIDIA/CUDA

Images based on `nvidia/cuda` are available using the tags `cuda-<cuda-version>-jammy`, `cuda-<cuda-version>-focal`, `0.x.y-cuda-<cuda-version>-jammy`, ...
Images based on [`nvidia/cuda`](https://hub.docker.com/r/nvidia/cuda) are available using the tags `jammy-cuda-<cuda-version>`, `focal-cuda-<cuda-version>`, `0.x.y-jammy-cuda-<cuda-version>`, ...
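
For example, the CUDA 12.6.3 image based on `ubuntu:noble`:

```sh
docker pull ghcr.io/modular/magic:noble-cuda-12.6.3
```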

Review comment:
ROCM? rocm/dev-ubuntu-22.04:{rocm-version}

Collaborator (author) replied:
We don't technically need ROCm base images, because ROCm doesn't have the license redistribution restriction that CUDA has, which forces us to use their base images. We can instead install the ROCm packages directly from conda-forge rather than assuming they are already on the system.
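
A rough sketch of that approach (the conda-forge package name here is illustrative and may differ):

```sh
# Illustrative only: install a ROCm package from conda-forge inside the existing image
docker run --rm ghcr.io/modular/magic:latest sh -c \
  "mkdir /app && cd /app && magic init && magic add rocm-device-libs"
```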

4 changes: 2 additions & 2 deletions example/Dockerfile
@@ -1,4 +1,4 @@
FROM ghcr.io/modular/magic:0.4.0 AS build
FROM ghcr.io/modular/magic:latest AS build

# copy source code, pixi.toml and pixi.lock to the container
COPY . /app
@@ -14,7 +14,7 @@ RUN magic shell-hook -e prod > /shell-hook.sh
# extend the shell-hook script to run the command passed to the container
RUN echo 'exec "$@"' >> /shell-hook.sh

FROM ubuntu:22.04 AS production
FROM ubuntu:24.04 AS production

# only copy the production environment into prod container
# please note that the "prefix" (path) needs to stay the same as in the build container
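
Assuming the rest of the example Dockerfile stays as in the repository, the multi-stage image can be built and exercised roughly like this (image name and command are illustrative):

```sh
docker build -t magic-example ./example
docker run --rm magic-example python --version
```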