LiteLLM Proxy in Kubernetes and questions #1705
-
Replying to myself.
Not sure why it's implemented this way. The litellm command understands the --config parameter and uses the config file when one is provided.
According to the data model I found in the code, the database is used at least to store API keys and some user information. A hint for everyone who reads this after me and wants to run it in Kubernetes: you will probably want to use a Kubernetes Secret for the DB password.
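To make that hint concrete, here is a minimal sketch of keeping the DB credentials in a Secret and injecting them into the LiteLLM deployment. All names, the connection string, and the image tag are illustrative assumptions, not taken from the thread:

```yaml
# Hypothetical sketch: a Secret holding the database connection string,
# injected into the LiteLLM container's environment. Names are assumptions.
apiVersion: v1
kind: Secret
metadata:
  name: litellm-db
type: Opaque
stringData:
  DATABASE_URL: postgresql://litellm:changeme@postgres.default.svc.cluster.local:5432/litellm
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: litellm
spec:
  replicas: 1
  selector:
    matchLabels:
      app: litellm
  template:
    metadata:
      labels:
        app: litellm
    spec:
      containers:
        - name: litellm
          image: ghcr.io/berriai/litellm-database:main-latest  # tag is an assumption
          envFrom:
            - secretRef:
                name: litellm-db   # makes DATABASE_URL available to the proxy
```

This way the password never appears in the Deployment manifest itself, only in the Secret.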
-
Hi @chrbrnracn, thanks for this. Can we hop on a call to better understand your problem? I'm unable to repro this on my side. Sharing my Calendly for your convenience: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
-
I'm trying to run Ollama and LiteLLM in Kubernetes. Ollama runs in a separate deployment and does not expose any external ports, only an internal Service. For now, Ollama is intentionally kept separate.
Now I want to run LiteLLM Proxy as an OpenAI-API-compatible frontend for use with IDE code-assist plugins (e.g. continue.dev).
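A setup like this would typically be wired up in the proxy's config file, pointing at the in-cluster Ollama Service. A minimal sketch, where the model name and the Service DNS name are assumptions:

```yaml
# Hypothetical LiteLLM proxy config: expose an Ollama-hosted model
# through the OpenAI-compatible API.
model_list:
  - model_name: codellama          # name clients (e.g. continue.dev) will request
    litellm_params:
      model: ollama/codellama      # provider/model as Ollama knows it
      api_base: http://ollama.default.svc.cluster.local:11434  # internal Service DNS
```

Clients then talk to the LiteLLM Service on its OpenAI-style /v1 endpoints, and the proxy forwards to Ollama internally.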
I'm slightly confused by the documentation.
The "Docker" and "Deploying LiteLLM Proxy" docs have examples of how to deploy with a database.
I couldn't find any documentation on the differences between the two images. From the code, it looks like litellm (Dockerfile?) pulls in an additional config file by default, while litellm-database (Dockerfile.database?) does not (judging by the CMD).
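Since the thread above confirms the litellm command accepts --config, one way around a CMD that doesn't pass a config is to mount one and override the container arguments yourself. A sketch of the relevant pod-spec fragment, with all names and the port being assumptions:

```yaml
# Hypothetical fragment: mount a ConfigMap and override the image's CMD
# so the litellm-database image also picks up a config file.
    spec:
      containers:
        - name: litellm
          image: ghcr.io/berriai/litellm-database:main-latest  # tag is an assumption
          args: ["--config", "/app/config.yaml", "--port", "4000"]
          volumeMounts:
            - name: config
              mountPath: /app/config.yaml
              subPath: config.yaml
      volumes:
        - name: config
          configMap:
            name: litellm-config   # holds the config.yaml shown elsewhere
```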
"Admin UI"
The docs point to Google SSO, Microsoft SSO, or a hard-coded username/password.
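For the hard-coded-credentials option, my understanding (an assumption worth verifying against the current docs) is that the proxy reads UI_USERNAME and UI_PASSWORD from the environment, which you can again back with a Secret:

```yaml
# Assumption: the Admin UI's hard-coded-credentials option is configured via
# UI_USERNAME / UI_PASSWORD environment variables; verify against the docs.
          env:
            - name: UI_USERNAME
              valueFrom:
                secretKeyRef:
                  name: litellm-ui-auth   # hypothetical Secret name
                  key: username
            - name: UI_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: litellm-ui-auth
                  key: password
```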
Thanks in advance for any help.