llama3 ollama version errors #277
Did your problem get solved? I tried Gemma2 and it doesn't work either.
No, unfortunately I'm still having the same problem, still waiting for some sort of response.
Hey, sorry for the late reply. Can you share the logs, if possible?
Are you using admin/api/chat or the playground (UI)?
I'm using the playground UI.
Hey, delete the current model, then go to Admin > Application and turn on the option to fetch Ollama models dynamically.
Got it to work, thank you!!
I've seen online that Ollama works with Gemma2, but nothing mentioning any Ollama version of llama3. I want to create a chatbot based on my local version of llama3, but every time I upload my Ollama version of llama3, no matter what the embedding method is, I never receive a proper response. Whenever I try to use the playground chat, I get the message "There was an error processing your request", and whether or not I have data sources uploaded does not seem to affect this issue.
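One way to narrow down whether the error comes from the playground or from the model itself is to call the local Ollama server's REST API directly (Ollama listens on port 11434 by default and exposes a `/api/chat` endpoint). A minimal sketch, assuming Ollama is running locally with llama3 pulled; the helper names here are illustrative, not part of any project API:

```python
# Sketch: query a local Ollama server directly, bypassing the playground UI.
# Assumes `ollama serve` is running and `ollama pull llama3` has been done.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/chat"  # Ollama's default local endpoint


def build_chat_request(model: str, prompt: str) -> dict:
    """Build the JSON payload Ollama's /api/chat endpoint expects."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # request one complete JSON reply instead of a stream
    }


def ask(model: str, prompt: str) -> str:
    """Send the prompt to the running Ollama server and return the reply text."""
    payload = json.dumps(build_chat_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["message"]["content"]


# Example (requires a running Ollama server):
#   print(ask("llama3", "Say hello in one word."))
```

If this direct call succeeds but the playground still fails, the problem is in how the application resolves the model name rather than in Ollama itself, which is consistent with the fix above of enabling dynamic model fetching.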