
A full-stack FastAPI application with LlamaIndex integrated.


KrishantSethia/llama-index-fastapi

 
 


A local-knowledge-base-augmented LLM able to serve millions of users, built on top of LlamaIndex, FastAPI, and MongoDB.


  • If a user asks a question, the bot first tries to match it against the local knowledge base and answer from there.
  • The local knowledge base is a CSV file of question/answer pairs, which is embedded (vectorized) by LlamaIndex on the first run.
  • If no good match is found, the bot calls OpenAI's ChatGPT API for the answer and inserts the new question/answer pair into the index, so the bot can answer a similar question from the local knowledge base next time (see the sketch after this list).
  • If the question is not relevant to the topic (in our case, golf), the bot calls OpenAI's ChatGPT API to get the answer.
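
A minimal sketch of this answer flow in Python; the callables, the similarity cutoff, and their signatures are hypothetical stand-ins for the project's actual components, not its real API:

from typing import Callable, Optional, Tuple

SIMILARITY_THRESHOLD = 0.85  # assumed cutoff for a "good" local match

def answer_question(
    question: str,
    query_local_index: Callable[[str], Optional[Tuple[str, float]]],
    ask_chatgpt: Callable[[str], str],
    insert_qa_pair: Callable[[str, str], None],
    is_on_topic: Callable[[str], bool],
) -> str:
    """Answer from the local knowledge base if possible, else fall back to ChatGPT."""
    if is_on_topic(question):
        match = query_local_index(question)  # (answer, similarity score) or None
        if match is not None:
            local_answer, score = match
            if score >= SIMILARITY_THRESHOLD:
                return local_answer  # good local match: serve it from the CSV-backed index
        # On-topic but no good match: ask ChatGPT and cache the new pair
        # so a similar question can be answered locally next time.
        chatgpt_answer = ask_chatgpt(question)
        insert_qa_pair(question, chatgpt_answer)
        return chatgpt_answer
    # Off-topic question (here the topic is golf): just ask ChatGPT.
    return ask_chatgpt(question)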

When asking a question in the knowledge base

(screenshot)

When asking a question that is not relevant to the topic

(screenshot)

More details

  • The bot uses FastAPI as the web framework, LlamaIndex as the search engine, and MongoDB for metadata storage.
  • During the first run, the CSV file is ingested and embedded by LlamaIndex into a vector store, and the metadata is stored in MongoDB.
  • The bot uses https://api.openai.com/v1/embeddings for embeddings; it is very cheap and performs well.
  • The bot uses https://api.openai.com/v1/chat/completions to ask ChatGPT for answers; gpt-3.5-turbo is used as the default model (a minimal request sketch follows this list).
  • Concurrency is naturally supported.
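
For reference, a minimal sketch of calling these two endpoints directly with the requests library. The project itself goes through LlamaIndex and the OpenAI client; the embedding model name below is an assumption, while gpt-3.5-turbo is the stated default:

import os
import requests

OPENAI_API_KEY = os.environ["OPENAI_API_KEY"]
HEADERS = {"Authorization": f"Bearer {OPENAI_API_KEY}"}

def embed(text: str) -> list[float]:
    """Call the embeddings endpoint (model name assumed, not taken from the project)."""
    resp = requests.post(
        "https://api.openai.com/v1/embeddings",
        headers=HEADERS,
        json={"model": "text-embedding-ada-002", "input": text},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["data"][0]["embedding"]

def ask_chatgpt(question: str) -> str:
    """Call the chat completions endpoint with the README's default model."""
    resp = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers=HEADERS,
        json={
            "model": "gpt-3.5-turbo",
            "messages": [{"role": "user", "content": question}],
        },
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]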

Next steps

  • Currently the bot only supports single-turn question answering; chat support is planned.
  • Use OpenAI's Assistants API as the search engine (I have already tried it, but it is not as good as LlamaIndex at the moment).
  • More test cases.

Development

  • Set up the environment
export OPENAI_API_KEY=your_openai_api_key
virtualenv -p python3.9 env
source env/bin/activate
pip install -r requirements.txt
  • Run the application locally
PYTHONPATH=. python app/launch.py
PYTHONPATH=. python app/utils/api-docs/extract_openapi.py app.main:app --out openapi.yaml
python app/utils/api-docs/swagger_html.py < openapi.yaml > swagger.html
python app/utils/api-docs/redoc_html.py < openapi.yaml > redoc.html
  • Test cases (for local tests)
    • Write test cases in /app/tests/test_*.py (a minimal example follows).
    • All local test cases must pass before committing.
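
A minimal local test sketch using FastAPI's TestClient. The /ask endpoint and its request/response shape are assumptions for illustration, not necessarily the project's actual API; the app.main import matches the app.main:app reference in the commands above:

# app/tests/test_ask.py
from fastapi.testclient import TestClient

from app.main import app  # the FastAPI instance, as referenced by app.main:app above

client = TestClient(app)

def test_ask_returns_answer():
    # Hypothetical endpoint and payload; adjust to the project's real routes.
    response = client.post("/ask", json={"question": "What is a birdie in golf?"})
    assert response.status_code == 200
    assert "answer" in response.json()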

Reference
