katanaml/llm-rag-invoice-cpu

Invoice data processing with Llama2 13B LLM RAG on Local CPU

YouTube: Invoice Data Processing with Llama2 13B LLM RAG on Local CPU


Quickstart

The RAG pipeline runs on LlamaCPP, Haystack, and Weaviate.

  1. Download the Llama2 13B model; see models/model_download.txt for the download link.
  2. Start the local Weaviate DB with Docker:

docker compose up -d

  3. Install the requirements:

pip install -r requirements.txt

  4. Copy text PDF files to the data folder.
  5. Run the script to convert the text into vector embeddings and save them in the Weaviate vector store:

python ingest.py
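Before embedding, ingestion typically splits the PDF text into overlapping chunks so each vector keeps local context. A minimal sketch of that chunking step, assuming a simple character-window splitter (the repository itself relies on Haystack's preprocessing, so the function name and parameters here are illustrative):

```python
# Illustrative chunking step that precedes embedding; a character-window
# splitter is assumed, not the repository's actual Haystack preprocessor.

def chunk_text(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into overlapping chunks so each embedding keeps local context."""
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
    return chunks

if __name__ == "__main__":
    sample = "Invoice number: INV-001. " * 20  # 500 characters
    chunks = chunk_text(sample)
    print(len(chunks))  # number of overlapping chunks to embed
```

Each chunk is then embedded and written to Weaviate, so a question about an invoice field can later be matched against the chunk that contains it.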

  6. Run the script to process the data with the Llama2 13B LLM RAG pipeline and return the answer:

python main.py "What is the invoice number value?"
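Conceptually, the query step retrieves the chunks most similar to the question's embedding and stuffs them into the LLM prompt. A toy in-memory sketch of that retrieve-then-generate flow, where the similarity search and prompt template are assumptions (the repository uses Haystack with Weaviate and LlamaCPP, not this code):

```python
# Toy sketch of retrieve-then-generate; Weaviate performs the real vector
# search and LlamaCPP generates the answer, so everything here is illustrative.
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def retrieve(query_vec: list[float], docs: list[tuple[list[float], str]], top_k: int = 2) -> list[str]:
    """Rank (embedding, text) pairs by similarity to the query embedding."""
    ranked = sorted(docs, key=lambda d: cosine_similarity(query_vec, d[0]), reverse=True)
    return [text for _, text in ranked[:top_k]]

def build_prompt(question: str, contexts: list[str]) -> str:
    """Stuff the retrieved chunks into the prompt passed to the LLM."""
    joined = "\n".join(contexts)
    return (
        "Answer using only the context below.\n\n"
        f"Context:\n{joined}\n\nQuestion: {question}\nAnswer:"
    )

if __name__ == "__main__":
    docs = [([1.0, 0.0], "Invoice number: INV-001"), ([0.0, 1.0], "Total: 100 EUR")]
    contexts = retrieve([0.9, 0.1], docs, top_k=1)
    print(build_prompt("What is the invoice number value?", contexts))
```

The LLM then answers from the stuffed context only, which is what keeps extraction grounded in the invoice text rather than the model's prior knowledge.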

About

Data extraction with LLM on CPU
