Dockerfile.ubi: get rid of --link flags for COPY operations #3

Merged
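
The change itself is mechanical: each of the seven COPY instructions in Dockerfile.ubi loses its --link flag. --link is a BuildKit feature (Dockerfile syntax 1.4 and later) that writes the copied files into their own independent layer instead of layering them on top of the current filesystem. The PR description isn't shown here, so the motivation is an assumption, but the usual reason to drop the flag is compatibility with build environments that reject it. A minimal before/after sketch, using the first affected instruction:

# Before: --link puts /opt/vllm in an independent layer, so BuildKit can
# reuse the cached copy even when earlier layers change; the flag is only
# understood by BuildKit with the dockerfile:1.4 frontend or newer.
COPY --from=python-install --link /opt/vllm /opt/vllm

# After: a plain COPY is accepted by any builder; the trade-off is that the
# copy is re-executed whenever a preceding layer is invalidated.
COPY --from=python-install /opt/vllm /opt/vllm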
14 changes: 7 additions & 7 deletions Dockerfile.ubi
@@ -43,7 +43,7 @@ RUN curl -fsSL -o ~/miniforge3.sh -O "https://github.com/conda-forge/miniforge/
 ## Python Base #################################################################
 FROM base as python-base
 
-COPY --from=python-install --link /opt/vllm /opt/vllm
+COPY --from=python-install /opt/vllm /opt/vllm
 
 ENV PATH=/opt/vllm/bin/:$PATH
 
@@ -132,7 +132,7 @@ RUN ldconfig /usr/local/cuda-12.2/compat/
 ## Development #################################################################
 FROM cuda-devel AS dev
 
-COPY --from=python-torch-base --link /opt/vllm /opt/vllm
+COPY --from=python-torch-base /opt/vllm /opt/vllm
 ENV PATH=/opt/vllm/bin/:$PATH
 
 # install build and runtime dependencies
@@ -260,9 +260,9 @@ FROM base AS vllm
 
 WORKDIR /vllm-staging
 # COPY files from various places into a staging directory
-COPY --link vllm vllm
-COPY --from=build --link /workspace/vllm/*.so vllm/
-COPY --from=gen-protos --link /workspace/vllm/entrypoints/grpc/pb vllm/entrypoints/grpc/pb
+COPY vllm vllm
+COPY --from=build /workspace/vllm/*.so vllm/
+COPY --from=gen-protos /workspace/vllm/entrypoints/grpc/pb vllm/entrypoints/grpc/pb
 
 # custom COPY command to use umask to control permissions and grant permissions
 # to the group
@@ -281,7 +281,7 @@ FROM cuda-runtime AS vllm-openai
 WORKDIR /workspace
 
 # Create release python environment
-COPY --from=python-torch-base --link /opt/vllm /opt/vllm
+COPY --from=python-torch-base /opt/vllm /opt/vllm
 ENV PATH=/opt/vllm/bin/:$PATH
 
 RUN --mount=type=cache,target=/root/.cache/pip \
@@ -301,7 +301,7 @@ RUN --mount=type=bind,from=flash-attn-builder,src=/usr/src/flash-attention-v2,ta
     pip3 install /usr/src/flash-attention-v2/*.whl --no-cache-dir
 
 # vLLM will not be installed in site-packages
-COPY --from=vllm --link /workspace/ ./
+COPY --from=vllm /workspace/ ./
 
 # Triton needs a CC compiler
 RUN microdnf install -y gcc \
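
The "custom COPY command" mentioned in the staging comment sits outside these hunks, so it is unaffected by this PR. Assuming the staging layout shown above, a umask-controlled copy typically looks something like the following hypothetical sketch (the repo's actual command may differ):

# hypothetical sketch, not the repo's actual command: with umask 002 the
# group-write bit is not stripped from files created during the copy
RUN umask 002 && mkdir -p /workspace && cp -R /vllm-staging/vllm /workspace/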