Ollama doc update, with good news #908
Comments
Hey @PieBru, how about helping us by contributing these updates here: https://docs.gptr.dev/docs/gpt-researcher/llms/llms
Also, it would be great if you can help with this issue: #904
How are you running it with Ollama? With Docker, or something else?
I run Ollama in an LXC with GPU passthrough and beefy LLM storage on a separate NVMe. It's all on an old notebook running Proxmox VE 8.2. I mostly followed this guide: https://fileflows.com/docs/guides/linux/proxmox-lxc-nvidia
For anyone who may be interested, here is my GPTR config that works fully locally with SearXNG and Ollama:

```bash
export DOC_PATH=./my-docs
export RETRIEVER=searx
export LLM_PROVIDER=ollama
```
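If it helps others reproduce this, a fuller local setup might look roughly like the sketch below. The variable names `SEARX_URL`, `OLLAMA_BASE_URL`, `FAST_LLM`, `SMART_LLM`, and `EMBEDDING`, the `ollama:` prefix, and the example addresses are my assumptions rather than settings quoted from this thread, so please check the GPT Researcher LLM docs linked above for the exact names your version expects.

```bash
# Fully local setup sketch: SearXNG for retrieval, Ollama for the LLMs.
# NOTE: variable names and addresses below are assumptions; verify against the GPTR docs.
export DOC_PATH=./my-docs
export RETRIEVER=searx
export SEARX_URL=http://192.168.1.50:8080         # your SearXNG instance (hypothetical address)
export LLM_PROVIDER=ollama
export OLLAMA_BASE_URL=http://192.168.1.51:11434  # your Ollama instance (hypothetical address)
export FAST_LLM=ollama:llama3.2                   # fast model for summarization tasks
export SMART_LLM=ollama:llama3.1                  # smart model for report writing
export EMBEDDING=ollama:nomic-embed-text          # local embeddings served by Ollama
```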
Interesting. I have a SearXNG server running and copied your config settings, but with my own IP and port, and I still get 'no content for query'.
Is your feature request related to a problem? Please describe.
The doc refers to Ollama with the mixtral model.
Describe the solution you'd like
Update the doc.
Describe alternatives you've considered
I tested llama3.2 (FAST) and llama3.1 (SMART) and I confirm they work for general needs.
Also, llama3.2 works reasonably well as the SMART model, so the whole system can run pretty well locally on a GPU with 4 GB of VRAM.
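To make the suggested doc update concrete, the swap away from mixtral might look like the short sketch below. The variable names and the `ollama:` prefix are assumptions on my part and should be checked against the current GPTR config docs before copying into the documentation.

```bash
# Suggested doc example: small local models that fit a 4 GB VRAM GPU.
# NOTE: variable names and the "ollama:" prefix are assumptions; verify against the docs.
export FAST_LLM=ollama:llama3.2    # tested, works for general needs
export SMART_LLM=ollama:llama3.1   # tested; llama3.2 also works reasonably well here
```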