Not working out of the box! #117

Open
NSP-0123456 opened this issue Aug 27, 2024 · 8 comments

Labels
enhancement New feature or request

Comments

@NSP-0123456

The current GitHub repository does not seem to embed any Ollama engine, and no quick installation document is provided.

A clear and concise description of the prerequisites, plus an Ollama installation and configuration document, would help. A better approach could be to also embed the Ollama install script for Docker inside this repository.

Currently the instructions are not usable as-is, because the project does not work out of the box.

Please elaborate on the Ollama part. I do not have any instance on my machine, and if I pull the latest one from Docker Hub with docker pull ollama/ollama, it still does not work.

NSP-0123456 added the enhancement (New feature or request) label Aug 27, 2024
@Arnaud3013

Arnaud3013 commented Sep 3, 2024

There are two big steps: the first is running Ollama with a model (install Ollama, use open-webui to manage it); the second is running a Docker instance of LLocalSearch.

1 -> install Ollama (ollama)
2 -> with Docker, install open-webui (open-webui) with these commands in a shell (cmd):
git clone https://github.com/open-webui/open-webui
cd open-webui
docker run -d --network=host -v open-webui:/app/backend/data -e OLLAMA_BASE_URL=http://127.0.0.1:11434 --name open-webui --restart always ghcr.io/open-webui/open-webui:main
Go to http://localhost:3000/
At the bottom left, open the admin panel.
Go to Settings -> in the Models section, enter mistral:v0.3 and click the download icon on the right.
You are now set up with Ollama. If you want another model, go to https://ollama.com/ and search for it, then pick the tag you want (like llama3.1:8b, but that one didn't work well). You can also pull the model straight from the Ollama CLI; see the sketch just below.
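
As an alternative to downloading the model through open-webui, here is a minimal sketch of pulling it straight from the Ollama CLI, assuming Ollama is installed and its server is listening on the default port 11434:

# verify the Ollama server is reachable (it answers "Ollama is running")
curl http://127.0.0.1:11434/
# pull the model used in the steps above, then list what is installed
ollama pull mistral:v0.3
ollama list
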
Now LLocalSearch:
Go back to the main folder in the shell (cd ..)
git clone https://github.com/nilsherzig/LLocalSearch
cd LLocalSearch
Edit docker-compose.yaml to change the port on line 20: change '3000:80' to '3001:80' (a sketch of the change follows these steps).
docker-compose up -d
Go to http://localhost:3001/chat/new
At the top right, you should be able to select your Mistral model under "The agent chain is using the model ''".
Close the dialog.
It should be working -> try asking something.
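
A sketch of the port change mentioned above; the exact service name and line number depend on the docker-compose.yaml version in the repository, so treat it as illustrative:

# docker-compose.yaml (illustrative excerpt; the frontend service name is an assumption)
services:
  frontend:
    ports:
      - '3001:80'   # was '3000:80'; 3001 frees port 3000, which open-webui is using in this setup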

@gardner

gardner commented Sep 4, 2024

You want to set: OLLAMA_HOST

Please review OLLAMA_GUIDE.md
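
For example, a minimal sketch of setting it in the project's .env; the value shown assumes the LLocalSearch backend runs in Docker while Ollama runs on the host, so adjust it to your setup (and review OLLAMA_GUIDE.md as suggested above):

# .env -- example value only; use the address at which your Ollama instance is actually reachable
OLLAMA_HOST=http://host.docker.internal:11434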

@pcrothers91

pcrothers91 commented Sep 24, 2024

I am quite frustrated. I have followed all the instructions to the letter and am still getting:

Model nomic-embed-text:v1.5 does not exist and could not be pulled: Post "http://0.0.0.0:11434/api/pull": dial tcp 0.0.0.0:11434: connect: connection refused

I have updated the .env file to include: OLLAMA_HOST=host.docker.internal:11434

I have updated line 20 in docker-compose to: 3001:80

I am not running Ollama in Docker; I am using the Windows install. When I navigate to http://host.docker.internal:11434/, Ollama reports that it is running.

I have set the environment variable OLLAMA_HOST to the value 0.0.0.0.

I would appreciate any advice, as I have run out of things to try.
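
For anyone following along, a sketch of what that Windows-side setting can look like, assuming PowerShell and the standard Ollama tray app; Ollama has to be restarted afterwards so it picks up the variable:

# PowerShell -- make the Windows Ollama install listen on all interfaces (what OLLAMA_HOST=0.0.0.0 does)
setx OLLAMA_HOST "0.0.0.0"
# restart Ollama (quit it from the tray icon and start it again), then confirm it still answers locally
curl.exe http://localhost:11434/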

@Arnaud3013

Arnaud3013 commented Sep 24, 2024

If you're not using Docker for Ollama, update the env to reflect that.
Maybe OLLAMA_HOST=http://127.0.0.1:11434
or
OLLAMA_HOST=localhost:11434

Don't use Docker-related settings for Ollama if it is running without Docker.
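
As a sketch, the two suggested values in .env; note that both assume the backend process can actually reach the host's loopback interface (true when it runs directly on the host or with --network=host, but not from inside an ordinary bridge-network container, where 127.0.0.1 is the container itself):

# .env -- pick one; both point at an Ollama instance listening on the host's loopback
OLLAMA_HOST=http://127.0.0.1:11434
# OLLAMA_HOST=localhost:11434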

@pcrothers91

pcrothers91 commented Sep 24, 2024

Thank you for responding @Arnaud3013, much appreciated.

Unfortunately it's still not working, but I have made progress. I can now see why pointing to Docker in .env would not work.

With the following, I am getting this error:

Model nomic-embed-text:v1.5 does not exist and could not be pulled: Post "http://127.0.0.1:11434/api/pull": dial tcp 127.0.0.1:11434: connect: connection refused

When I navigate to http://127.0.0.1:11434/api/pull, Ollama responds with 404 page not found.

Here is docker-compose:

services:
  backend:
    image: nilsherzig/llocalsearch-backend:latest
    environment:
      - OLLAMA_HOST=${OLLAMA_HOST:-http://127.0.0.1:11434}
      - CHROMA_DB_URL=${CHROMA_DB_URL:-http://chromadb:8000}
      - SEARXNG_DOMAIN=${SEARXNG_DOMAIN:-http://searxng:8080}

Here is .env:

OLLAMA_HOST=http://127.0.0.1:11434
MAX_ITERATIONS=30
CHROMA_DB_URL=http://chromadb:8000
SEARXNG_DOMAIN=http://searxng:8080
SEARXNG_HOSTNAME=localhost

Am I obviously doing something wrong? I think LLocalSearch is speaking to Ollama, but they are having a hard time understanding one another.
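
Since the backend itself runs in a container, 127.0.0.1 there refers to the container rather than the Windows host, which would explain the connection refused. One quick way to check whether the host's Ollama is reachable from inside Docker at all is a throwaway curl container, sketched here (curlimages/curl is just a convenient image with curl preinstalled; the --add-host flag is only needed on Linux, as Docker Desktop resolves host.docker.internal on its own):

# expects the reply "Ollama is running"
docker run --rm --add-host=host.docker.internal:host-gateway curlimages/curl -s http://host.docker.internal:11434/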

@pcrothers91

pcrothers91 commented Sep 28, 2024

Hi, I thought I would come back here and post the settings that I used to get this working:

Here is .env

OLLAMA_HOST=http://host.docker.internal:11434
MAX_ITERATIONS=30
CHROMA_DB_URL=http://chromadb:8000
SEARXNG_DOMAIN=http://searxng:8080
SEARXNG_HOSTNAME=localhost

Here is docker-compose.yaml

  backend:
    build:
      context: ./backend
      dockerfile: Dockerfile.dev
    environment:
      - OLLAMA_HOST=${OLLAMA_HOST:-host.docker.internal:11434}
      - CHROMA_DB_URL=${CHROMA_DB_URL:-http://chromadb:8000}
      - SEARXNG_DOMAIN=${SEARXNG_DOMAIN:-http://searxng:8080}
      - EMBEDDINGS_MODEL_NAME=${EMBEDDINGS_MODEL_NAME:-nomic-embed-text:v1.5}
      - VERSION=${VERSION:-dev}

I also want to point out that I followed @pmancele's suggestions for container network communication in this post: #116

See their code snippet adding ports to docker-compose below:

@@ -39,6 +39,8 @@ services:
       - SETGID
       - SETUID
       - DAC_OVERRIDE
+    ports:
+      - '6379:6379'

   searxng:
     image: docker.io/searxng/searxng:latest
@@ -60,6 +62,8 @@ services:
       options:
         max-size: '1m'
         max-file: '1'
+    ports:
+      - '8080:8080'
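
One note for anyone reproducing this on a Linux host: host.docker.internal only resolves automatically under Docker Desktop (Windows/macOS), so on Linux the backend service would also need an extra_hosts entry, roughly as in this sketch:

  backend:
    extra_hosts:
      - 'host.docker.internal:host-gateway'   # map the name to the Docker host's gateway IP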

@Arnaud3013

Did you have great success with your exchanges once it was working?
Did you get good answers? If yes, with which model?

@pcrothers91

pcrothers91 commented Oct 1, 2024

I did have success, thank you.

I have been trialling several models, and most recently this model has worked well - Reader LM

Unfortunately Phi3.5 often gets stuck in a loop.

I have had decent success with larger models too, like Llama3.1 and Mistral-Nemo, but the output is often not in Markdown, which produces an error.

What model do you recommend?
