A repository for learning LangChain by building a generative AI application.
This is a web application that uses Pinecone as a vectorstore and answers questions about LangChain (sourced from the official LangChain documentation). A sketch of the retrieval flow is shown below the tech stack.
Client: Streamlit
Server Side: LangChain 🦜🔗
Vectorstore: Pinecone 🌲
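The core of the application is a retrieval-augmented generation flow: the user's question is embedded, similar documentation chunks are fetched from Pinecone, and an LLM answers using those chunks as context. The sketch below is a minimal illustration, assuming the classic LangChain and pinecone-client v2 APIs; the index name langchain-doc-index, the run_llm function name, and the extra PINECONE_ENVIRONMENT variable are placeholders, not guaranteed to match the actual code.

import os
import pinecone
from langchain.chat_models import ChatOpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.chains import RetrievalQA
from langchain.vectorstores import Pinecone

# Placeholder index name; use whatever index you created in the Pinecone console.
INDEX_NAME = "langchain-doc-index"

def run_llm(query: str) -> dict:
    # Older pinecone clients also need an environment/region setting (assumed variable).
    pinecone.init(
        api_key=os.environ["PINECONE_API_KEY"],
        environment=os.environ["PINECONE_ENVIRONMENT"],
    )
    embeddings = OpenAIEmbeddings()  # uses OPENAI_API_KEY from the environment
    docsearch = Pinecone.from_existing_index(index_name=INDEX_NAME, embedding=embeddings)
    chat = ChatOpenAI(temperature=0)
    qa = RetrievalQA.from_chain_type(
        llm=chat,
        chain_type="stuff",  # stuff the retrieved chunks directly into the prompt
        retriever=docsearch.as_retriever(),
        return_source_documents=True,
    )
    return qa({"query": query})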
To run this project, you will need to add the following environment variables to your .env file (a loading sketch follows the list):
PINECONE_API_KEY
OPENAI_API_KEY
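How the keys are loaded depends on the code, but a common pattern is to read the .env file with python-dotenv at startup; the snippet below is a minimal sketch under that assumption.

import os
from dotenv import load_dotenv

load_dotenv()  # reads .env from the project root into the process environment

pinecone_api_key = os.environ["PINECONE_API_KEY"]
openai_api_key = os.environ["OPENAI_API_KEY"]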
Clone the project
git clone https://github.com/AviTewari/Knowledge-Retrieval-Assistant-using-LLM.git
Go to the project directory
cd Knowledge-Retrieval-Assistant-using-LLM
Download LangChain Documentation
mkdir langchain-docs
wget -r -A.html -P langchain-docs https://api.python.langchain.com/en/latest
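Before the app can answer questions, the downloaded pages have to be split, embedded, and upserted into a Pinecone index (for example via a separate ingestion script). The sketch below shows one way to do this with the classic LangChain loaders; the index name langchain-doc-index, the chunk sizes, and the PINECONE_ENVIRONMENT variable are assumptions, not taken from this repository. ReadTheDocsLoader additionally requires beautifulsoup4.

import os
import pinecone
from langchain.document_loaders import ReadTheDocsLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Pinecone

pinecone.init(
    api_key=os.environ["PINECONE_API_KEY"],
    environment=os.environ["PINECONE_ENVIRONMENT"],  # assumed extra variable for older pinecone clients
)

raw_docs = ReadTheDocsLoader("langchain-docs").load()  # parse the wget-mirrored HTML pages
splitter = RecursiveCharacterTextSplitter(chunk_size=600, chunk_overlap=50)
chunks = splitter.split_documents(raw_docs)
Pinecone.from_documents(chunks, OpenAIEmbeddings(), index_name="langchain-doc-index")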
Install dependencies
pipenv install
Start the Streamlit server
streamlit run main.py
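main.py is the Streamlit entry point: it renders a prompt box and forwards each question to the retrieval helper. The sketch below is only a guess at its shape, assuming a run_llm helper like the one sketched earlier; the core module name is hypothetical.

import streamlit as st
from core import run_llm  # hypothetical module holding the run_llm sketch above

st.header("LangChain Documentation Helper")
prompt = st.text_input("Prompt", placeholder="Ask me anything about LangChain...")

if prompt:
    with st.spinner("Generating response..."):
        response = run_llm(query=prompt)
        st.write(response["result"])  # RetrievalQA returns the answer under "result"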
To run tests, run the following command
pipenv run pytest .
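The repository's actual tests may differ, but a minimal pytest example against the assumed run_llm helper could look like this (it needs valid API keys and a populated index, so it is an integration-style test):

from core import run_llm  # hypothetical module holding the run_llm sketch above

def test_run_llm_returns_an_answer():
    response = run_llm(query="What is a LangChain chain?")
    assert "result" in response
    assert len(response["result"]) > 0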