Currently, KubeAI only supports initial scale-from-zero for the Ollama backend. Autoscaling for vLLM is implemented by scraping metrics directly from the vLLM Pods; ideally we would implement the same process for Ollama.
Waiting on metrics support (ollama/ollama#3144) to land in the upstream project.
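
For illustration, a minimal sketch of what pod-level scraping could look like once Ollama exposes a Prometheus-format `/metrics` endpoint. This is an assumption about the eventual upstream interface, not the actual KubeAI implementation: the metric name `ollama_requests_running` and the Pod address are hypothetical placeholders.

```go
// Hypothetical sketch: scrape a single Pod's Prometheus-format /metrics
// endpoint and extract one gauge, similar in spirit to how KubeAI
// aggregates load from vLLM Pods. The metric name is a placeholder;
// the real name depends on what ollama/ollama#3144 ends up exposing.
package main

import (
	"fmt"
	"net/http"

	"github.com/prometheus/common/expfmt"
)

// scrapeActiveRequests fetches a Pod's /metrics endpoint and returns
// the summed value of the named gauge across its label sets.
func scrapeActiveRequests(podAddr, metricName string) (float64, error) {
	resp, err := http.Get("http://" + podAddr + "/metrics")
	if err != nil {
		return 0, err
	}
	defer resp.Body.Close()

	var parser expfmt.TextParser
	families, err := parser.TextToMetricFamilies(resp.Body)
	if err != nil {
		return 0, err
	}
	mf, ok := families[metricName]
	if !ok {
		return 0, fmt.Errorf("metric %q not found", metricName)
	}
	var total float64
	for _, m := range mf.GetMetric() {
		total += m.GetGauge().GetValue()
	}
	return total, nil
}

func main() {
	// Placeholder Pod address; Ollama listens on 11434 by default.
	n, err := scrapeActiveRequests("10.0.0.12:11434", "ollama_requests_running")
	if err != nil {
		fmt.Println("scrape failed:", err)
		return
	}
	fmt.Println("active requests:", n)
}
```

An autoscaler would run this per Pod on an interval and average the results to drive a scale-up/scale-down decision, which is the general pattern the vLLM path already follows.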