I'm using an AWS Bedrock model with LiteLLM for LibreChat, and I get an error whenever the chat tries to generate a reply to a prompt. As I understand it, the problem lies in the LiteLLM configuration.

Error:

[handleAbortError] AI response error; aborting request: 400 {"error":"chat_completion: Invalid model name passed in model=llama3-70b"}
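For context, my understanding is that the LiteLLM proxy only accepts model names declared under `model_list` in its config, so `llama3-70b` would need an entry roughly like the sketch below (based on the LiteLLM quick-start docs; the Bedrock model ID and region are example values, not necessarily what I have):

```yaml
model_list:
  - model_name: llama3-70b    # the name LibreChat passes in the request
    litellm_params:
      model: bedrock/meta.llama3-70b-instruct-v1:0   # example Bedrock model ID
      aws_region_name: us-east-1                     # example region
```

Here is my librechat.yaml: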
```yaml
version: 1.0.8
cache: true
interface:
  # Privacy policy settings
  privacyPolicy:
    externalUrl: 'https://librechat.ai/privacy-policy'
    openNewTab: true
  # Terms of service
  termsOfService:
    externalUrl: 'https://librechat.ai/tos'
    openNewTab: true
registration:
  socialLogins: ["discord", "facebook", "github", "google", "openid"]
endpoints:
  custom:
    - name: "Lite LLM"
      # A placeholder - otherwise it becomes the default (OpenAI) key.
      # The real key is provided in each "model" block within "litellm/litellm-config.yaml".
      apiKey: "sk-from-config-file"
      # See the required changes above in the "Start LiteLLM Proxy Server" step.
      baseURL: "http://host.docker.internal:4000"
      # A "default" model to start new users with; "fetch" pulls the rest of the
      # available models from LiteLLM. The choice is more or less irrelevant -
      # just pick one you have defined in LiteLLM.
      models:
        default: ["llama3-8b", "llama3-70b", "llama2-70b"]
        fetch: true
      titleConvo: true
      titleModel: "llama3-8b"
      summarize: false
      summaryModel: "llama3-8b"
      forcePrompt: false
      modelDisplayLabel: "Lite LLM"
```
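In case it's useful for debugging, LibreChat's `fetch: true` pulls the model list from the proxy's OpenAI-compatible models endpoint, so the names it sends should match what the proxy reports there (assuming the proxy is reachable at the baseURL above and uses the same key):

```bash
curl http://host.docker.internal:4000/v1/models \
  -H "Authorization: Bearer sk-from-config-file"
```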
The documentation I referred to:

1. https://litellm.vercel.app/docs/proxy/quick_start (litellm-config)
2. https://github.com/aws-samples/bedrock-litellm/blob/main/litellm/proxy_config.yaml (librechat)

Can you tell me what could be wrong?