Add example of helm chart for vllm deployment on k8s #9199

Merged: 90 commits, Dec 10, 2024

Commits
f847283
add chart helm example with related github workflow
mfournioux Nov 28, 2024
66efcb8
add chart labels in service object
mfournioux Nov 28, 2024
3502310
Merge branch 'vllm-project:main' into add_chart_helm_example
mfournioux Nov 29, 2024
d025994
add automatic vllm cpu docker image build
mfournioux Nov 29, 2024
80bd125
add buildx for vllm cpu docker image build
mfournioux Nov 29, 2024
b37a995
correct buildx command for vllm cpu docker image build
mfournioux Nov 29, 2024
1c24885
correct image repository for helm install
mfournioux Nov 29, 2024
348aa49
add debug pod status command during helm install
mfournioux Nov 29, 2024
26c7c6b
add debug pod status command during helm install
mfournioux Nov 29, 2024
a587472
add debug pod status command during helm install
mfournioux Nov 29, 2024
902c15c
add debug pod status command during helm install
mfournioux Nov 29, 2024
867eef2
remove debug pod status command during helm install
mfournioux Nov 29, 2024
8169780
add debug get pods command
mfournioux Nov 29, 2024
cadc7be
remove shm-size argument from docker build
mfournioux Nov 29, 2024
0f48176
add debug init container command
mfournioux Nov 29, 2024
c7ba597
bug on latest cpu vllm docker image build, remove build in order to t…
mfournioux Nov 29, 2024
5d5937b
Merge branch 'vllm-project:main' into add_chart_helm_example
mfournioux Dec 2, 2024
63ee666
add buildx command for vllm cpu docker image build
mfournioux Dec 2, 2024
473f16e
add request argument for curl test command
mfournioux Dec 2, 2024
4a04c72
add debug deployment container command
mfournioux Dec 2, 2024
202fc63
add debug deployment container command
mfournioux Dec 2, 2024
8ce6521
increase timeout for debug deployment container command
mfournioux Dec 2, 2024
f15d994
add debug command on curl test
mfournioux Dec 2, 2024
2c6bc8b
add debug command on curl test
mfournioux Dec 2, 2024
2a03f0c
add debug command on curl test
mfournioux Dec 2, 2024
d63d29e
add debug command on curl test
mfournioux Dec 2, 2024
f39feef
add debug command on curl test
mfournioux Dec 2, 2024
ec23081
increase vllm rpc timeout
mfournioux Dec 2, 2024
742da50
increase vllm rpc timeout
mfournioux Dec 2, 2024
45d5d98
remove the value fix of shm size during docker build
mfournioux Dec 2, 2024
ad23934
increase kv cache space
mfournioux Dec 2, 2024
3e2d501
add debug options for curl test
mfournioux Dec 2, 2024
1434cfa
add dtype argument for vllm command
mfournioux Dec 2, 2024
0988d86
remove ressources limits for helm install
mfournioux Dec 2, 2024
7846ba2
restore kv cache specification for vllm deployment
mfournioux Dec 2, 2024
4e2f531
add debug options for curl test
mfournioux Dec 2, 2024
3f73dd9
add debug options for curl test
mfournioux Dec 2, 2024
dc68113
increase VLLM_RPC_TIMEOUT for vllm deployment
mfournioux Dec 2, 2024
816a971
add debug options for curl test
mfournioux Dec 2, 2024
6db44b9
Merge branch 'vllm-project:main' into add_chart_helm_example
mfournioux Dec 2, 2024
3a95f5c
add debug mode for vllm deployment
mfournioux Dec 2, 2024
5a4cbf0
add debug argument for vllm deployment
mfournioux Dec 3, 2024
3509d12
update setting value arguments for vllm deployment
mfournioux Dec 3, 2024
fbc3dc9
update setting value arguments for vllm deployment
mfournioux Dec 3, 2024
1013952
update build argument for vllm docker image build
mfournioux Dec 3, 2024
bcbee96
remove debug argument from vllm deployment
mfournioux Dec 3, 2024
bef565c
Merge branch 'vllm-project:main' into add_chart_helm_example
mfournioux Dec 3, 2024
7f8147e
test on dtype for debug for vllm deployment
mfournioux Dec 3, 2024
5b48fee
remove build args for cpu docker image build
mfournioux Dec 3, 2024
b4858ac
update python version for setup action
mfournioux Dec 3, 2024
e555c03
Merge branch 'vllm-project:main' into add_chart_helm_example
mfournioux Dec 3, 2024
22546c5
add commit hasg for github action versions and rename workflow
mfournioux Dec 4, 2024
516d30d
remove dtype argument for vllm serving command from values file
mfournioux Dec 4, 2024
f765d4c
Merge branch 'vllm-project:main' into add_chart_helm_example
mfournioux Dec 4, 2024
07eb755
update documentation file path
mfournioux Dec 4, 2024
be2c1ea
add dtype argument in vllm deployment for debug
mfournioux Dec 4, 2024
12bac88
rename github workflow and update table format on doc file
mfournioux Dec 4, 2024
0f6a6ce
add caching management for docker image
mfournioux Dec 4, 2024
0f45ea4
test registry cache management
mfournioux Dec 4, 2024
f450ca7
restore inline cache management with context argument
mfournioux Dec 4, 2024
c0108fc
remove cache management
mfournioux Dec 4, 2024
92bb1c7
remove cache management
mfournioux Dec 4, 2024
9836ec3
restore initial docker build command
mfournioux Dec 4, 2024
fb179e6
restore build command
mfournioux Dec 4, 2024
9e2af93
correct missing github action
mfournioux Dec 4, 2024
19e6f1c
restore build command
mfournioux Dec 4, 2024
fd1f830
remove setup buildx action
mfournioux Dec 4, 2024
3549861
correct format doc errors
mfournioux Dec 4, 2024
bf27179
correct format doc errors
mfournioux Dec 4, 2024
dcca35d
correct format doc errors
mfournioux Dec 4, 2024
499cc39
update documentation
mfournioux Dec 4, 2024
b422f09
update documentation
mfournioux Dec 4, 2024
610e1d8
update documentation
mfournioux Dec 4, 2024
002ed4d
update documentation
mfournioux Dec 4, 2024
0002a31
update documentation
mfournioux Dec 4, 2024
d98e8c4
update documentation
mfournioux Dec 4, 2024
93562cf
update documentation
mfournioux Dec 4, 2024
8479f9f
correct malformed table error on rst file
mfournioux Dec 4, 2024
3914276
rename image file used in rst file
mfournioux Dec 4, 2024
9cc1fc2
Merge branch 'vllm-project:main' into add_chart_helm_example
mfournioux Dec 5, 2024
58c7f96
correct malformed table error on rst file
mfournioux Dec 5, 2024
435d4dd
correct malformed table error on rst file
mfournioux Dec 5, 2024
40e481a
correct malformed table error on rst file
mfournioux Dec 5, 2024
9fc0593
Merge branch 'vllm-project:main' into add_chart_helm_example
mfournioux Dec 5, 2024
b270d3c
Merge branch 'vllm-project:main' into add_chart_helm_example
mfournioux Dec 6, 2024
a65aa6b
Merge branch 'vllm-project:main' into add_chart_helm_example
mfournioux Dec 6, 2024
1634fec
Merge branch 'vllm-project:main' into add_chart_helm_example
mfournioux Dec 8, 2024
0fe290d
Merge branch 'vllm-project:main' into add_chart_helm_example
mfournioux Dec 9, 2024
77f6675
Merge branch 'vllm-project:main' into add_chart_helm_example
mfournioux Dec 9, 2024
911f78e
Fix doc format
DarkLight1337 Dec 10, 2024
81 changes: 81 additions & 0 deletions .github/workflows/lint-and-deploy.yaml
@@ -0,0 +1,81 @@
name: Lint and Deploy Charts

on: pull_request

jobs:
  lint-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
        with:
          fetch-depth: 0

      - name: Set up Helm
        uses: azure/setup-helm@fe7b79cd5ee1e45176fcad797de68ecaf3ca4814 # v4.2.0
        with:
          version: v3.14.4

      # Python is required because ct lint runs Yamale and yamllint, which require Python.
      - uses: actions/setup-python@0b93645e9fea7318ecaed2b359559ac225c90a2b # v5.3.0
        with:
          python-version: '3.13'

      - name: Set up chart-testing
        uses: helm/chart-testing-action@e6669bcd63d7cb57cb4380c33043eebe5d111992 # v2.6.1
        with:
          version: v3.10.1

      - name: Run chart-testing (lint)
        run: ct lint --target-branch ${{ github.event.repository.default_branch }} --chart-dirs examples/chart-helm --charts examples/chart-helm

      - name: Setup minio
        run: |
          docker network create vllm-net
          docker run -d -p 9000:9000 --name minio --net vllm-net \
            -e "MINIO_ACCESS_KEY=minioadmin" \
            -e "MINIO_SECRET_KEY=minioadmin" \
            -v /tmp/data:/data \
            -v /tmp/config:/root/.minio \
            minio/minio server /data
          export AWS_ACCESS_KEY_ID=minioadmin
          export AWS_SECRET_ACCESS_KEY=minioadmin
          export AWS_EC2_METADATA_DISABLED=true
          mkdir opt-125m
          cd opt-125m && curl -O -Ls "https://huggingface.co/facebook/opt-125m/resolve/main/{pytorch_model.bin,config.json,generation_config.json,merges.txt,special_tokens_map.json,tokenizer_config.json,vocab.json}" && cd ..
          aws --endpoint-url http://127.0.0.1:9000/ s3 mb s3://testbucket
          aws --endpoint-url http://127.0.0.1:9000/ s3 cp opt-125m/ s3://testbucket/opt-125m --recursive

      - name: Create kind cluster
        uses: helm/kind-action@0025e74a8c7512023d06dc019c617aa3cf561fde # v1.10.0

      - name: Build the Docker image vllm cpu
        run: docker buildx build -f Dockerfile.cpu -t vllm-cpu-env .

      - name: Configuration of docker images, network and namespace for the kind cluster
        run: |
          docker pull amazon/aws-cli:2.6.4
          kind load docker-image amazon/aws-cli:2.6.4 --name chart-testing
          kind load docker-image vllm-cpu-env:latest --name chart-testing
          docker network connect vllm-net "$(docker ps -aqf "name=chart-testing-control-plane")"
          kubectl create ns ns-vllm

      - name: Run chart-testing (install)
        run: |
          export AWS_ACCESS_KEY_ID=minioadmin
          export AWS_SECRET_ACCESS_KEY=minioadmin
          helm install --wait --wait-for-jobs --timeout 5m0s --debug --create-namespace --namespace=ns-vllm test-vllm examples/chart-helm -f examples/chart-helm/values.yaml --set secrets.s3endpoint=http://minio:9000 --set secrets.s3bucketname=testbucket --set secrets.s3accesskeyid=$AWS_ACCESS_KEY_ID --set secrets.s3accesskey=$AWS_SECRET_ACCESS_KEY --set resources.requests.cpu=1 --set resources.requests.memory=4Gi --set resources.limits.cpu=2 --set resources.limits.memory=5Gi --set image.env[0].name=VLLM_CPU_KVCACHE_SPACE --set image.env[1].name=VLLM_LOGGING_LEVEL --set-string image.env[0].value="1" --set-string image.env[1].value="DEBUG" --set-string extraInit.s3modelpath="opt-125m/" --set-string 'resources.limits.nvidia\.com/gpu=0' --set-string 'resources.requests.nvidia\.com/gpu=0' --set-string image.repository="vllm-cpu-env"

      - name: curl test
        run: |
          kubectl -n ns-vllm port-forward service/test-vllm-service 8001:80 &
          sleep 10
          CODE="$(curl -v -f --location http://localhost:8001/v1/completions \
            --header "Content-Type: application/json" \
            --data '{
              "model": "opt-125m",
              "prompt": "San Francisco is a",
              "max_tokens": 7,
              "temperature": 0
            }'):$CODE"
          echo "$CODE"
1 change: 1 addition & 0 deletions docs/source/index.rst
@@ -82,6 +82,7 @@ Documentation
serving/openai_compatible_server
serving/deploying_with_docker
serving/deploying_with_k8s
serving/deploying_with_helm
serving/deploying_with_nginx
serving/distributed_serving
serving/metrics
253 changes: 253 additions & 0 deletions docs/source/serving/deploying_with_helm.rst
@@ -0,0 +1,253 @@
.. _deploying_with_helm:

Deploying with Helm
===================

A Helm chart to deploy vLLM for Kubernetes

Helm is a package manager for Kubernetes. It helps you deploy vLLM on k8s and automate the deployment of vLLM Kubernetes applications. With Helm, you can deploy the same framework architecture with different configurations to multiple namespaces by overriding variable values.

This guide will walk you through the process of deploying vLLM with Helm, including the necessary prerequisites, the steps for the helm install, and documentation on the architecture and values file.

Prerequisites
-------------
Before you begin, ensure that you have the following:

- A running Kubernetes cluster
- NVIDIA Kubernetes Device Plugin (`k8s-device-plugin`), available at https://github.com/NVIDIA/k8s-device-plugin/
- Available GPU resources in your cluster
- An S3 bucket hosting the model to be deployed
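
Before installing, you may want to confirm that the cluster actually advertises schedulable GPU resources. Assuming the NVIDIA device plugin is running, one quick check is:

.. code-block:: console

   kubectl get nodes -o custom-columns=NAME:.metadata.name,GPU:.status.allocatable.nvidia\.com/gpu

Each node should report a non-empty GPU count; ``<none>`` means the device plugin has not registered the ``nvidia.com/gpu`` resource on that node.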

Installing the chart
--------------------

To install the chart with the release name `test-vllm`:

.. code-block:: console

   helm upgrade --install --create-namespace --namespace=ns-vllm test-vllm . -f values.yaml --set secrets.s3endpoint=$ACCESS_POINT --set secrets.s3bucketname=$BUCKET --set secrets.s3accesskeyid=$ACCESS_KEY --set secrets.s3accesskey=$SECRET_KEY
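
Once the release is deployed, you can check that the pods come up and exercise the OpenAI-compatible endpoint. The service name and model name below follow the chart's CI workflow (`test-vllm-service`, `opt-125m`); adjust them to your own release and model:

.. code-block:: console

   kubectl get pods --namespace=ns-vllm
   kubectl -n ns-vllm port-forward service/test-vllm-service 8001:80 &
   curl --location http://localhost:8001/v1/completions \
     --header "Content-Type: application/json" \
     --data '{"model": "opt-125m", "prompt": "San Francisco is a", "max_tokens": 7}'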

Uninstalling the Chart
----------------------

To uninstall the `test-vllm` deployment:

.. code-block:: console

   helm uninstall test-vllm --namespace=ns-vllm

The command removes all the Kubernetes components associated with the
chart **including persistent volumes** and deletes the release.

Architecture
------------

.. image:: architecture_helm_deployment.png

Values
------

.. list-table:: Values
   :widths: 25 25 25 25
   :header-rows: 1

* - Key
- Type
- Default
- Description
* - autoscaling
- object
- {"enabled":false,"maxReplicas":100,"minReplicas":1,"targetCPUUtilizationPercentage":80}
- Autoscaling configuration
* - autoscaling.enabled
- bool
- false
- Enable autoscaling
* - autoscaling.maxReplicas
- int
- 100
- Maximum replicas
* - autoscaling.minReplicas
- int
- 1
- Minimum replicas
* - autoscaling.targetCPUUtilizationPercentage
- int
- 80
- Target CPU utilization for autoscaling
* - configs
- object
- {}
- Configmap
* - containerPort
- int
- 8000
- Container port
* - customObjects
- list
- []
- Custom Objects configuration
* - deploymentStrategy
- object
- {}
- Deployment strategy configuration
* - externalConfigs
- list
- []
- External configuration
* - extraContainers
- list
- []
- Additional containers configuration
* - extraInit
- object
- {"pvcStorage":"1Gi","s3modelpath":"relative_s3_model_path/opt-125m", "awsEc2MetadataDisabled": true}
- Additional configuration for the init container
* - extraInit.pvcStorage
- string
- "50Gi"
- Storage size of the PVC used to store the model downloaded from S3
* - extraInit.s3modelpath
- string
- "relative_s3_model_path/opt-125m"
- Path of the model in the S3 bucket that hosts the model weights and config files
* - extraInit.awsEc2MetadataDisabled
- boolean
- true
- Disables the use of the Amazon EC2 instance metadata service
* - extraPorts
- list
- []
- Additional ports configuration
* - gpuModels
- list
- ["TYPE_GPU_USED"]
- Type of GPU used
* - image
- object
- {"command":["vllm","serve","/data/","--served-model-name","opt-125m","--host","0.0.0.0","--port","8000"],"repository":"vllm/vllm-openai","tag":"latest"}
- Image configuration
* - image.command
- list
- ["vllm","serve","/data/","--served-model-name","opt-125m","--host","0.0.0.0","--port","8000"]
- Container launch command
* - image.repository
- string
- "vllm/vllm-openai"
- Image repository
* - image.tag
- string
- "latest"
- Image tag
* - livenessProbe
- object
- {"failureThreshold":3,"httpGet":{"path":"/health","port":8000},"initialDelaySeconds":15,"periodSeconds":10}
- Liveness probe configuration
* - livenessProbe.failureThreshold
- int
- 3
- Number of consecutive probe failures after which Kubernetes considers the container not alive
* - livenessProbe.httpGet
- object
- {"path":"/health","port":8000}
- Configuration of the Kubelet HTTP request to the server
* - livenessProbe.httpGet.path
- string
- "/health"
- Path to access on the HTTP server
* - livenessProbe.httpGet.port
- int
- 8000
- Name or number of the port to access on the container, on which the server is listening
* - livenessProbe.initialDelaySeconds
- int
- 15
- Number of seconds after the container has started before the liveness probe is initiated
* - livenessProbe.periodSeconds
- int
- 10
- How often (in seconds) to perform the liveness probe
* - maxUnavailablePodDisruptionBudget
- string
- ""
- Pod disruption budget configuration
* - readinessProbe
- object
- {"failureThreshold":3,"httpGet":{"path":"/health","port":8000},"initialDelaySeconds":5,"periodSeconds":5}
- Readiness probe configuration
* - readinessProbe.failureThreshold
- int
- 3
- Number of consecutive probe failures after which Kubernetes considers the container not ready
* - readinessProbe.httpGet
- object
- {"path":"/health","port":8000}
- Configuration of the Kubelet HTTP request to the server
* - readinessProbe.httpGet.path
- string
- "/health"
- Path to access on the HTTP server
* - readinessProbe.httpGet.port
- int
- 8000
- Name or number of the port to access on the container, on which the server is listening
* - readinessProbe.initialDelaySeconds
- int
- 5
- Number of seconds after the container has started before the readiness probe is initiated
* - readinessProbe.periodSeconds
- int
- 5
- How often (in seconds) to perform the readiness probe
* - replicaCount
- int
- 1
- Number of replicas
* - resources
- object
- {"limits":{"cpu":4,"memory":"16Gi","nvidia.com/gpu":1},"requests":{"cpu":4,"memory":"16Gi","nvidia.com/gpu":1}}
- Resource configuration
* - resources.limits."nvidia.com/gpu"
- int
- 1
- Number of GPUs used
* - resources.limits.cpu
- int
- 4
- Number of CPUs
* - resources.limits.memory
- string
- "16Gi"
- Memory limit
* - resources.requests."nvidia.com/gpu"
- int
- 1
- Number of GPUs used
* - resources.requests.cpu
- int
- 4
- Number of CPUs
* - resources.requests.memory
- string
- "16Gi"
- Memory request
* - secrets
- object
- {}
- Secrets configuration
* - serviceName
- string
-
- Service name
* - servicePort
- int
- 80
- Service port
* - labels.environment
- string
- test
- Environment name
* - labels.release
- string
- test
- Release name
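
Tying the values above together, a small overrides file can adapt the chart to a CPU-only test deployment. The file name and the concrete values below are illustrative (they mirror the settings used by the chart's CI workflow), not required defaults:

.. code-block:: yaml

   # values-cpu-test.yaml (hypothetical overrides for a CPU-only test)
   image:
     repository: "vllm-cpu-env"
     tag: "latest"
   resources:
     requests:
       cpu: 1
       memory: 4Gi
       "nvidia.com/gpu": 0
     limits:
       cpu: 2
       memory: 5Gi
       "nvidia.com/gpu": 0
   extraInit:
     s3modelpath: "opt-125m/"

Pass it at install time with an additional `-f values-cpu-test.yaml` after the chart's own `values.yaml`.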
6 changes: 6 additions & 0 deletions examples/chart-helm/.helmignore
@@ -0,0 +1,6 @@
*.png
.git/
ct.yaml
lintconf.yaml
values.schema.json
/workflows
21 changes: 21 additions & 0 deletions examples/chart-helm/Chart.yaml
@@ -0,0 +1,21 @@
apiVersion: v2
name: chart-vllm
description: Chart vllm

# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application

# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.0.1

maintainers:
- name: mfournioux
3 changes: 3 additions & 0 deletions examples/chart-helm/ct.yaml
@@ -0,0 +1,3 @@
chart-dirs:
  - charts
validate-maintainers: false