An LLM-based contextual question-answering chatbot
- Install Ollama
- Start the Ollama server
- Pull the model:

```shell
ollama pull llama3.2:1b
```
- Set the OpenAI API key:

```shell
export OPENAI_API_KEY=<YOUR_OPENAI_API_KEY>
```

- Set the default model to LLAMA3_2_1B:

```shell
export MODEL=LLAMA3_2_1B
```
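The environment variables above configure the app at startup. A minimal sketch of how such settings are typically read in Python — the variable names come from the steps above, but the fallback default and the way the app actually consumes them are assumptions:

```python
import os

# MODEL selects the backend model; LLAMA3_2_1B matches the value
# exported above. Falling back to it when unset is an assumption.
model = os.environ.get("MODEL", "LLAMA3_2_1B")

# The OpenAI key is only required when an OpenAI-backed model is
# selected; with a local Ollama model it may be left unset.
openai_key = os.environ.get("OPENAI_API_KEY")

print(model)
```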
Install the dependencies and start the server:

```shell
pip install poetry && poetry install --no-dev
poetry run uvicorn app.main:app --reload --host 0.0.0.0 --port 8000
```
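Once the server is listening on port 8000, it can be queried over HTTP. A sketch of building such a request with only the standard library, assuming a JSON POST API; the `/ask` path and the `question` field are hypothetical — check the running server's API documentation for the real routes:

```python
import json
import urllib.request

BASE_URL = "http://localhost:8000"  # matches the uvicorn/docker commands

def build_question_request(path: str, question: str) -> urllib.request.Request:
    """Build a JSON POST request for the chatbot server.

    The path and payload field names are hypothetical; consult the
    app's own API docs for the real ones.
    """
    body = json.dumps({"question": question}).encode("utf-8")
    return urllib.request.Request(
        BASE_URL + path,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_question_request("/ask", "What does the document say about pricing?")
print(req.full_url)  # http://localhost:8000/ask
```

Sending the request (e.g. with `urllib.request.urlopen(req)`) requires the server from the previous step to be running.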
To run the server with Docker, follow these steps:
- Build the Docker image:

```shell
docker build -t doculoom .
```
- Start the server:

```shell
docker run -it --rm -p 8000:8000 doculoom
```