# Minimal LangChain llama3 chat

Requires Python 3.8+ and a recent Ollama release.

## Pull Ollama models

```shell
ollama pull llama3
```

Pull any other models you want to use the same way.

## Start the Ollama model server

The port is hardcoded in `chat.py`; if you change it, change it in both places.

```shell
OLLAMA_HOST=127.0.0.1:5151 ollama serve
```

## Start the chat server

```shell
python -m venv venv
. ./venv/bin/activate
pip install -r requirements.txt
python chat.py
```

Note: you can change `LLM_MODEL = "llama3"` in `chat.py` to any model supported by Ollama (https://ollama.com/library) and it should still work.
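
The actual `chat.py` is not shown here, but a minimal sketch of what it might look like — assuming the `langchain-ollama` integration package and the hardcoded port from the serve command above — is:

```python
# Hypothetical sketch of chat.py; names and structure are assumptions,
# not the repository's actual code.
LLM_MODEL = "llama3"            # any model from https://ollama.com/library
OLLAMA_HOST = "127.0.0.1:5151"  # must match the OLLAMA_HOST used for `ollama serve`


def ollama_base_url(host: str) -> str:
    """Build the base URL the LangChain client expects from a host:port string."""
    return f"http://{host}"


def build_chat():
    # Imported lazily so the constants above are usable without langchain installed.
    from langchain_ollama import ChatOllama

    return ChatOllama(model=LLM_MODEL, base_url=ollama_base_url(OLLAMA_HOST))


def repl():
    """Simple read-eval-print chat loop against the local Ollama server."""
    chat = build_chat()
    while True:
        reply = chat.invoke(input("> "))
        print(reply.content)
```

Swapping `LLM_MODEL` or the port only requires editing the two constants at the top.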