diff --git a/docs/source/distributions/index.md b/docs/source/distributions/index.md
index ee7f4f23cd..1f766e75e8 100644
--- a/docs/source/distributions/index.md
+++ b/docs/source/distributions/index.md
@@ -14,7 +14,12 @@ Another simple way to start interacting with Llama Stack is to just spin up a co
 
 **Conda**:
 
-Lastly, if you have a custom or an advanced setup or you are developing on Llama Stack you can also build a custom Llama Stack server. Using `llama stack build` and `llama stack run` you can build/run a custom Llama Stack server containing the exact combination of providers you wish. We have also provided various templates to make getting started easier. See [Building a Custom Distribution](building_distro) for more details.
+If you have a custom or advanced setup, or you are developing on Llama Stack, you can also build a custom Llama Stack server. Using `llama stack build` and `llama stack run` you can build and run a custom Llama Stack server containing the exact combination of providers you wish. We have also provided various templates to make getting started easier. See [Building a Custom Distribution](building_distro) for more details.
+
+
+**Kubernetes**:
+
+If you have built a container image and want to deploy it in a Kubernetes cluster instead of starting the Llama Stack server locally, see the [Kubernetes Deployment Guide](kubernetes_deployment) for more details.
 
 
 ```{toctree}
@@ -25,4 +30,5 @@ importing_as_library
 building_distro
 configuration
 selection
+kubernetes_deployment
 ```
diff --git a/docs/source/distributions/kubernetes_deployment.md b/docs/source/distributions/kubernetes_deployment.md
new file mode 100644
index 0000000000..6cca2bc476
--- /dev/null
+++ b/docs/source/distributions/kubernetes_deployment.md
@@ -0,0 +1,207 @@
+# Kubernetes Deployment Guide
+
+Instead of starting the Llama Stack and vLLM servers locally, we can deploy them in a Kubernetes cluster.
+In this guide, we'll use a local [Kind](https://kind.sigs.k8s.io/) cluster and a vLLM inference service in the same cluster for demonstration purposes.
+
+First, create a local Kubernetes cluster via Kind:
+
+```bash
+kind create cluster --image kindest/node:v1.32.0 --name llama-stack-test
+```
+
+Start the vLLM server as a Kubernetes Pod and Service:
+
+```bash
+cat </tmp/test-vllm-llama-stack/Containerfile.llama-stack-run-k8s <
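
---

The heredoc that the truncated `cat` command in the new guide would apply is cut off in this chunk. For orientation only, a minimal sketch of the kind of vLLM Deployment and Service manifest such a guide typically applies might look like the following; the resource names, image tag, model, and replica count here are illustrative assumptions, not the manifest from this PR:

```yaml
# Illustrative sketch only -- assumed names/image/model, not the PR's actual manifest.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vllm-server            # assumed name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: vllm
  template:
    metadata:
      labels:
        app: vllm
    spec:
      containers:
      - name: vllm
        image: vllm/vllm-openai:latest                          # assumed image tag
        args: ["--model", "meta-llama/Llama-3.2-1B-Instruct"]   # assumed model
        ports:
        - containerPort: 8000  # vLLM's default OpenAI-compatible API port
---
apiVersion: v1
kind: Service
metadata:
  name: vllm-server            # assumed name
spec:
  selector:
    app: vllm
  ports:
  - port: 8000
    targetPort: 8000
```

Applied with `kubectl apply -f -` (as in a `cat <<EOF | kubectl apply -f -` heredoc), a manifest along these lines would expose vLLM inside the cluster at a URL such as `http://vllm-server:8000/v1`, which is the sort of endpoint a Llama Stack run configuration would point its remote vLLM inference provider at.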