[Bug]: No model on ollama was used to answer #2595
Comments
To address the issue with the Ollama dialogue model not being invoked and the SSL error you're encountering, let's tackle these separately:
These steps should help resolve the issues you're facing. If the problem persists, providing more details about your setup, including Python version and operating system, could offer more clues for further assistance.
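As a first check that the backend can actually reach the local Ollama server (and to surface any SSL or connection errors on that hop), a probe like the one below can be run from inside the backend container. The base URL is an assumption taken from the Docker setup mentioned later in this thread; adjust it to your own deployment.

```python
# Minimal connectivity check against a local Ollama server.
# The base URL is an assumption (host.docker.internal works from a container
# on Docker Desktop; adjust to your own setup).
import requests

OLLAMA_API_BASE_URL = "http://host.docker.internal:11434"

try:
    # /api/tags lists the models available on the Ollama server
    resp = requests.get(f"{OLLAMA_API_BASE_URL}/api/tags", timeout=10)
    resp.raise_for_status()
    models = [m["name"] for m in resp.json().get("models", [])]
    print("Ollama reachable, models available:", models)
except requests.exceptions.SSLError as exc:
    # An SSL error here usually means the URL was configured as https://
    # although the local Ollama server only speaks plain HTTP.
    print("SSL error talking to Ollama:", exc)
except requests.exceptions.RequestException as exc:
    print("Could not reach Ollama:", exc)
```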
2024-05-15 05:12:32 Traceback (most recent call last):
How can I set brain_settings?
To correctly set up and use your locally deployed Ollama model with Quivr, point the backend at your local Ollama server (see the sketch below).
This setup should direct your application to use the locally deployed Ollama model. If you encounter any issues or need further assistance, ensure that the model name and API base URL are correctly specified as per your local deployment details.
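In practice this comes down to the backend reading the Ollama base URL from its settings. A minimal sketch of what that looks like, assuming a pydantic-style settings class like the one quoted later in this thread (the class and file names may differ in your Quivr version):

```python
# Sketch of a pydantic-style settings class that reads OLLAMA_API_BASE_URL
# from the environment. The class name is an assumption; in pydantic v1,
# BaseSettings is imported from pydantic instead of pydantic_settings.
from pydantic_settings import BaseSettings


class BrainSettings(BaseSettings):
    # Falls back to the Docker-host address when the env var is not set
    ollama_api_base_url: str = "http://host.docker.internal:11434"


settings = BrainSettings()  # picks up OLLAMA_API_BASE_URL if it is exported
print(settings.ollama_api_base_url)
```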
"ou are incorrect in what you said. Which specific file should I specify the default response model in? |
To specify the default response model for your locally deployed Ollama model, you should set the OLLAMA_API_BASE_URL in your environment configuration (the backend .env file).
This setting ensures that your application communicates correctly with the Ollama model. For more detailed configuration, refer to the Ollama Integration Guide.
I have correctly set OLLAMA_API_BASE_URL; how do I specify the model for the conversation?
To specify the model for the conversation after setting OLLAMA_API_BASE_URL, assign the model to the user or brain using its `ollama/`-prefixed name, e.g. `ollama/llama3` (see the sketch below).
Remember, these steps are specific to configuring Quivr to use Ollama models for conversation handling. Ensure your Ollama service is running and accessible at the URL specified in OLLAMA_API_BASE_URL.
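For reference, when a conversation request is handed to an Ollama-backed model, the model is typically addressed with an `ollama/` prefix and the API base pointed at your local server. The snippet below only illustrates that naming convention using LiteLLM; it is not Quivr's actual call site, and the model name and URL are taken from comments later in this thread.

```python
# Illustration of the "ollama/<model>" naming convention with LiteLLM.
# Standalone sketch: the model name and base URL come from this thread,
# not from Quivr's real call site.
from litellm import completion

response = completion(
    model="ollama/llama3",  # note the ollama/ prefix
    api_base="http://host.docker.internal:11434",
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)
```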
I have assigned some Ollama models to users, but they can only select models in the brain; there is no model selection in the dialogue interface. Also, no Ollama models are invoked during the conversation.
To address the issue of users only being able to select models in the brain and not in the dialogue interface, and to ensure Ollama models are invoked during conversations, follow these steps:
These steps are designed to enable model selection in the dialogue interface and ensure the correct Ollama models are invoked during conversations. For detailed guidance, refer to the Ollama Integration Guide and the example configuration files in the repository.
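If the chat model selector only offers the models a user has been granted, the per-user list has to include the Ollama entries explicitly. Below is a sketch of updating that list directly, assuming Quivr's Supabase store and a user_settings table with a models column; table, column, and filter names are assumptions based on the comment further down in this thread.

```python
# Sketch: grant a user the Ollama models so they show up in the chat
# model selector. Table, column, and filter names are assumptions.
import os

from supabase import create_client

supabase = create_client(os.environ["SUPABASE_URL"], os.environ["SUPABASE_SERVICE_KEY"])

supabase.table("user_settings").update(
    {"models": ["ollama/llama3", "ollama/mistral"]}
).eq("user_id", "<the-user-uuid>").execute()
```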
2024-05-15 10:55:50 INFO: 172.20.0.1:53992 - "GET /brains/0602d8e3-f73a-4f8d-b41f-4df00fd8d471/ HTTP/1.1" 200 OK
I can modify the default embedding model; what I want to know is how to modify the default dialogue model, and how users can then switch their dialogue models.
To modify the default dialogue model and enable users to switch between dialogue models, follow these steps:
This approach addresses the issue of the 'llama2' model not being found and lets users switch between different dialogue models during their sessions. If you encounter any specific error messages during this process, those logs could provide further insight into the problem.
In fact, I have already assigned a custom model to the user, but llama2 is still the only model invoked during question answering.
To address the issue where your custom model is not being invoked and "llama2" is used instead, consider the following steps:
If after these checks the issue remains unresolved, it might indicate a more complex problem within the Quivr codebase or configuration that requires further investigation.
I am having the same issue, and I have been stuck here for more than a few days.
Below is the error log:
I am having the same issue too.
I updated `ollama_api_base_url: str = "http://host.docker.internal:11434"` in backend/model/settings.py, and then updated the models in the user_settings table to "ollama/llama3".
Here I created a new web1 container in Docker and mapped QUIVR_DOMAIN=http://localhost:3088/.
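With that base URL in place, it can help to confirm that llama3 actually answers at that address before testing through the UI. A minimal probe of Ollama's generate endpoint, assuming host.docker.internal resolves from wherever you run it:

```python
# Quick check that "llama3" is pulled and answering at the configured URL.
import requests

resp = requests.post(
    "http://host.docker.internal:11434/api/generate",
    json={"model": "llama3", "prompt": "ping", "stream": False},
    timeout=60,
)
# A 404 here usually means the model has not been pulled (ollama pull llama3).
resp.raise_for_status()
print(resp.json()["response"])
```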
Similar problem here; see the last two errors in the log above. How do I change llama2 to llama3?
Thanks for your contributions, we'll be closing this issue as it has gone stale. Feel free to reopen if you'd like to continue the discussion.
What happened?
I set up the Ollama model, so why doesn't it use the locally deployed Ollama model to answer? The Ollama link is valid, because the Ollama embedding model works. However, the Ollama dialogue model is not invoked correctly; instead, the remote ms-marco-TinyBERT-L-2-v2 is called.
Relevant log output