
[Bug]: How can I configure Quivr to use a local Ollama model? I can't find a config example and there is no env.example file #3457

Open
lijinxinKing opened this issue Nov 6, 2024 · 5 comments

Comments

@lijinxinKing

What happened?

How can I configure Quivr to use a local Ollama model? I can't find a config example and there is no env.example file.



lijinxinKing added the bug label on Nov 6, 2024

dosubot bot commented Nov 6, 2024

Hey @lijinxinKing! I'm here to help you with any bugs, questions, or contributions. Let's dive into this together!

To configure Quivr to use a local Ollama model, follow these steps:

  1. Install Ollama: Download and install Ollama from their website.

  2. Run Ollama in the Background: Use the command below to start Ollama:

    ollama run llama2
  3. Run Migrations: Move the migration file and reset the database:

    mv supabase/migrations/local_20240107152745_ollama.sql supabase/migrations/20240107152745_ollama.sql
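    # "supabase db reset" wipes the local database and re-applies all migrations, including the Ollama one above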
    supabase db reset
  4. Add Ollama Model to Quivr: Update the user_settings table in Supabase to include the Ollama models:

    [
      "ollama/llama2",
      "ollama/mistral"
    ]

    If you want to add the models for your user only, add them to the models column of your row in the user_settings table. If you want to make them the default for all new users, delete all rows in the table so the defaults are re-applied:

    DELETE FROM user_settings;
  5. Configure .env File: Update your .env file to activate the Ollama API by setting the OLLAMA_API_BASE_URL:

    # LOCAL
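    # host.docker.internal lets the backend container reach an Ollama server running on the Docker host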
    OLLAMA_API_BASE_URL=http://host.docker.internal:11434
  6. Integrate with Quivr: Ensure that the QuivrRAG class in quivr_rag.py is configured to use the Ollama model. The ChatLiteLLM class should be instantiated with the api_base parameter set to the OLLAMA_API_BASE_URL if the model starts with "ollama". This is handled in the _create_llm method of the QuivrRAG class.

Additionally, you may need to update the embedding dimension in .env.example to match the model's requirements and modify certain files to use ChatOllama instead of ChatOpenAI. Rebuild the Docker image and Supabase after making these changes [1][2].
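For reference, here is a minimal sketch of the conditional described in step 6, written as a standalone factory function shaped roughly like QuivrRAG._create_llm; the actual code in quivr_rag.py may differ, and the create_llm name here is only illustrative:

    import os

    from langchain_community.chat_models import ChatLiteLLM

    def create_llm(model: str, temperature: float = 0.1, max_tokens: int = 1000) -> ChatLiteLLM:
        # Route "ollama/..." models to the local Ollama server; other models
        # fall through to LiteLLM's default provider resolution.
        api_base = None
        if model.startswith("ollama"):
            # Set in .env, e.g. OLLAMA_API_BASE_URL=http://host.docker.internal:11434
            api_base = os.environ.get("OLLAMA_API_BASE_URL")
        return ChatLiteLLM(
            model=model,
            temperature=temperature,
            max_tokens=max_tokens,
            api_base=api_base,
        )

    # Usage: llm = create_llm("ollama/llama2")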

To continue talking to Dosu, mention @dosu.

@wonderu

wonderu commented Nov 7, 2024

Your @dosubot is useless and harmful! It proposes to delete a table.

@gamingflexer

@admins where is the main repo that we can install by following https://docs.quivr.app/install?

@gamingflexer

@wonderu @lijinxinKing check this

stephane-lbf commented last month
@caishanli if this can help: I was a bit lost too, because Quivr's team is working on a refactor.

So for now, you can do this:

git clone https://github.com/QuivrHQ/quivr.git
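# check out the last commit on main made before 2024-10-20, i.e. before the refactor (this leaves you in a detached HEAD state)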
git rev-list -n1 --before=2024-10-20 main | xargs git checkout
Then just follow the README 👍🏼
