[BUG] Wrong ollama embedding endpoint #494
Comments
The examples we provide follow the Ollama OpenAI-compatible API specs. Please use the Test connection feature to make sure the Ollama connection is working properly for both the LLM and embedding models.
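For anyone who wants to verify this outside the app, a quick sanity check against Ollama's OpenAI-compatible route might look like the sketch below; the /v1/embeddings path and the nomic-embed-text model name are assumptions on my part, not something confirmed in this thread:
# Hypothetical check of the OpenAI-compatible embeddings route; substitute whichever embedding model you actually pulled.
curl http://localhost:11434/v1/embeddings -d '{
"model": "nomic-embed-text",
"input": "Why is the sky blue?"
}'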
Same issue here with ollama -v
+1
Is this just an issue with Ollama v0.4?
Call from within Kotaemon app docker runtime:
Seems fine ...
Description
Hi, I think you are calling the wrong endpoint for local embeddings with Ollama when I use the settings from your instructions here.
According to the official Ollama API documentation here, the endpoint that should be called is:
http://localhost:11434/api/embed
but Kotaemon calls http://localhost:11434/api/embeddings instead.
The following works:
curl http://localhost:11434/api/embed -d '{
"model": "",
"input": "Why is the sky blue?"
}'
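For what it's worth, the newer /api/embed route also appears to accept a list of strings under "input", which is handy when embedding documents in batches; this is a sketch based on my reading of the Ollama docs, with the model name left blank to match the examples above:
# Assumed batching form of /api/embed: "input" given as an array of strings.
curl http://localhost:11434/api/embed -d '{
"model": "",
"input": ["Why is the sky blue?", "Why is grass green?"]
}'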
The following does not:
curl http://localhost:11434/api/embeddings -d '{
"model": "",
"input": "Why is the sky blue?"
}'
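If I read the Ollama API docs correctly, the older /api/embeddings route expects a "prompt" field rather than "input", which may be why a request shaped like the one above comes back without a usable embedding; a call to the legacy route would presumably need to look like this instead:
# Assumed shape of the legacy /api/embeddings request ("prompt" instead of "input").
curl http://localhost:11434/api/embeddings -d '{
"model": "",
"prompt": "Why is the sky blue?"
}'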
There is also a problem that the UI does not show any notification about these issues, so one has to look into the logs. It would be great if the error reporting could be a bit more explicit.
Reproduction steps
Screenshots
Logs
No response
Browsers
No response
OS
No response
Additional information
No response