Not working out of the box! #117
Comments
Two big steps: first, run Ollama with a model (install Ollama; you can use open-webui to manage it); second, run the Docker instance of LLocalSearch. A rough sketch of both steps is below.
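A minimal sketch of those two steps on a Linux host, assuming the official Ollama install script and the LLocalSearch repository URL (the model tag is only an example):

```bash
# Step 1: install Ollama natively and pull a model (llama3.1 is just an example tag)
curl -fsSL https://ollama.com/install.sh | sh
ollama pull llama3.1

# Step 2: clone LLocalSearch and start its containers
git clone https://github.com/nilsherzig/LLocalSearch.git
cd LLocalSearch
docker compose up -d
```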
You need to set these values correctly - please review OLLAMA_GUIDE.md.
I am quite frustrated - I have followed all the instructions to the letter and am still getting:
I have updated the .env file to include:
I have updated line 20 in docker-compose to: 3001:80
I am not running Ollama in Docker, but am instead using the Windows install. When I navigate to http://host.docker.internal:11434/, Ollama is running. I have set the environment variable OLLAMA_HOST to the value 0.0.0.0.
Would appreciate any advice, as I have run out of solutions.
If you're not using Docker for Ollama, update the .env to reflect that. Don't use the Docker-related settings for Ollama if you're running it without Docker. A sketch of what that entry might look like is below.
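For reference, a minimal sketch of the Ollama-related entry when Ollama runs natively on Windows and LLocalSearch runs in Docker. The variable name OLLAMA_HOST is an assumption on my side; confirm the exact key against OLLAMA_GUIDE.md and the example .env in the repository.

```env
# Point the LLocalSearch containers at the Ollama instance running on the Windows host.
# The variable name is an assumption; check OLLAMA_GUIDE.md for the exact key.
OLLAMA_HOST=http://host.docker.internal:11434
```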
Thank you for responding @Arnaud3013, much appreciated. Unfortunately it's still not working - but I have made progress. I can now see why pointing to Docker in .env would not work. With the following, I am getting this error:
When I navigate to it, Ollama responds.
Here is my docker-compose:
Here is .env:
Am I obviously doing something wrong? I think LLocalSearch is speaking to Ollama, but they are having a hard time understanding one another.
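One way to narrow down where the conversation breaks is to check that Ollama answers both from the host and from inside the compose network. This is a hedged sketch: the service name `backend` and the availability of wget inside that container are assumptions, not taken from the repository.

```bash
# From the host: Ollama's HTTP API should list the installed models
curl http://localhost:11434/api/tags

# From inside the compose network (service name and wget are assumptions; adjust to your setup)
docker compose exec backend wget -qO- http://host.docker.internal:11434/api/tags
```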
Hi, thought I would come back here and post the settings that I used to get this to work.
Here is .env:
Here is docker-compose.yaml:
I also want to point out that I followed @pmancele's suggestions for container network communication in this post: #116. Their code snippet added ports to docker-compose.
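Their original snippet is not reproduced here, but as a hedged sketch of the kind of docker-compose changes discussed in this thread (service names and structure are assumptions, not copied from the repository):

```yaml
services:
  backend:
    environment:
      - OLLAMA_HOST=http://host.docker.internal:11434
    extra_hosts:
      - "host.docker.internal:host-gateway"   # lets the container reach Ollama on the host (Linux)
  frontend:
    ports:
      - "3001:80"   # host port 3001 -> container port 80, as mentioned above
```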
Did you have great success with your exchanges, once it was working?
I did have success, thank you. I have been trialling several models, and most recently this model has worked well: Reader LM. Unfortunately, Phi3.5 often gets stuck in a loop. I have had decent success with larger models too, like Llama3.1 and Mistral-Nemo, but often the output is not in Markdown, which produces an error. What model do you recommend?
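If anyone wants to compare the same models, the tags below are the ones published in the Ollama library (availability may change; check ollama.com/library):

```bash
# Pull the models mentioned above and list what is installed
ollama pull reader-lm
ollama pull phi3.5
ollama pull llama3.1
ollama pull mistral-nemo
ollama list
```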
The current GitHub repository does not seem to bundle any Ollama engine, and no quick installation document is provided.
A clear and concise description of the prerequisites, along with an Ollama installation and configuration document, would help. A better approach could be to also embed the Ollama install script inside this repository for Docker.
Currently the instructions are of little use, as the project does not work out of the box.
Please elaborate on the Ollama part. I do not have any instance on my machine, and if I get the latest one from Docker Hub with docker pull ollama/ollama, it is not working.
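For what it's worth, pulling the image alone is not enough: the container also has to be started with the API port exposed and a model pulled inside it. A hedged sketch, following the commands documented for the ollama/ollama image (the model tag and container name are just examples):

```bash
# Start the Ollama container with its API port published and models persisted in a volume
docker run -d --name ollama -p 11434:11434 -v ollama:/root/.ollama ollama/ollama

# Pull a model inside the running container (llama3.1 is only an example tag)
docker exec -it ollama ollama pull llama3.1

# Verify the API answers from the host
curl http://localhost:11434/api/tags
```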