This Streamlit application is an interactive chatbot showcasing your professional background and qualifications. Leveraging large language models (LLMs) and a Retrieval Augmented Generation (RAG) framework, it delivers accurate and context-aware answers based on your resume and other data stored in a CSV file. The chatbot utilizes a FAISS vector store for efficient information retrieval and the OpenAI GPT-3.5-turbo model for answer generation. Streamlit provides a user-friendly interface for seamless interaction.
- RAG-based: Employs a Retrieval Augmented Generation approach, ensuring responses are both relevant and grounded in the information provided.
- Interactive Q&A: Enables users to ask questions and receive informative answers directly derived from the CSV data.
- Conversation History: Retains conversations in MongoDB Atlas for later review and analysis, enhancing user engagement.
- Knowledge Awareness: Gracefully handles situations where the chatbot lacks the information to answer a question, ensuring a smooth user experience.
- Question Suggestions: Offers prompts to guide users and encourage meaningful conversations, promoting deeper exploration of your qualifications.
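For readers curious how these pieces fit together, here is a minimal sketch of such a retrieval chain, assuming recent `langchain`, `langchain-openai`, and `langchain-community` packages; the actual wiring in `app.py` (prompt templates, chunking, history handling) may differ.

```python
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain_community.vectorstores import FAISS
from langchain.chains import ConversationalRetrievalChain

# Build a tiny in-memory vector store from a few sample facts; the real app
# indexes the CSV/PDF data instead (see the notes further down).
embeddings = OpenAIEmbeddings()
vectorstore = FAISS.from_texts(
    ["I have five years of experience as a data scientist.",
     "My strongest skills are Python, NLP, and LangChain."],
    embeddings,
)

llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)
chain = ConversationalRetrievalChain.from_llm(
    llm=llm,
    retriever=vectorstore.as_retriever(search_kwargs={"k": 2}),
)

result = chain.invoke({"question": "How many years of experience do you have?",
                       "chat_history": []})
print(result["answer"])
```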
This is my ResumeGPT demo website: https://art-career-bot.streamlit.app/
This is a detailed post about the logic behind this chatbot and an explanation of how it works: www.artkreimer.com/
- Python 3.9 or higher
- OpenAI API key
- MongoDB Atlas account
- Streamlit account to host the app
- Fork my repository and rename it to your desired name
- Clone the repository:
git clone https://github.com/{username}/{yourGPT}.git && cd {yourGPT}
- Create and activate a virtual environment:
python3 -m venv venv
source venv/bin/activate
- Install the required dependencies:
pip install -r requirements.txt
- Update the following files:
  - Replace all instances of the name `Art Kreimer` or `Art` with your name and nickname in the following files: `app.py`, `templates/template.json`
  - Update `about_me.csv` in the `data` folder with relevant questions about you (an example of the CSV layout is sketched after these steps)
- Add the `OPENAI_API_KEY` to your shell environment variables:
export OPENAI_API_KEY="your_openai_api_key"
- Run the Streamlit application:
streamlit run app.py
- The application will open in your default web browser. The first run takes a bit longer because it indexes the CSV file and stores the embeddings in a FAISS vector store.
- Ask questions about your background and qualifications, and the chatbot will provide relevant responses.
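The exact column layout of `about_me.csv` depends on the repository you forked; a plausible question/answer layout (the rows below are purely hypothetical) might look like this:

```csv
question,answer
"What is your current role?","Data scientist focusing on NLP and LLM applications."
"What are your strongest technical skills?","Python, LangChain, and building RAG pipelines."
"What kind of position are you looking for?","A senior machine learning engineer role."
```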
- Once you have committed all your files to GitHub, create an account on Streamlit.io, preferably with your GitHub account.
- Click on the `New app` option.
- Input your `OPENAI_API_KEY` and `mongodB_pass` using the `Advanced settings...` option (a sketch of reading these secrets at runtime follows below).
- Click on the `Deploy` button.
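As a rough sketch, the deployed app can read those values via `st.secrets` and use the MongoDB password to log conversation turns to Atlas. The cluster URI, database, and collection names below are placeholders for illustration, not the app's actual configuration.

```python
import streamlit as st
from pymongo import MongoClient

# Secrets entered under "Advanced settings..." on Streamlit Cloud.
openai_api_key = st.secrets["OPENAI_API_KEY"]
mongodb_pass = st.secrets["mongodB_pass"]

# Connect to an Atlas cluster; this URI is hypothetical.
client = MongoClient(
    f"mongodb+srv://app-user:{mongodb_pass}@cluster0.example.mongodb.net/?retryWrites=true&w=majority"
)
conversations = client["resume_gpt"]["conversations"]

# Store one question/answer pair for later review and analysis.
conversations.insert_one({
    "question": "What is your strongest skill?",
    "answer": "Building RAG pipelines with LangChain.",
})
```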
You can also deploy the app on Hugging Face Spaces; more documentation on that is available here.
- The application uses a FAISS index to store the CSV and PDF data embeddings. If the index file (`faiss_index`) does not exist, it will be created automatically, which takes several minutes while all embeddings are generated. A sketch of this indexing step is shown below.
- The CSV data file path is set in the `data_source` variable, and the PDF resume file path is set in the `pdf_source` variable.
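A minimal sketch of this first-run indexing, assuming LangChain's `CSVLoader`/`PyPDFLoader` and OpenAI embeddings; the loader choices and file names are assumptions, not necessarily what `app.py` does.

```python
import os
from langchain_openai import OpenAIEmbeddings
from langchain_community.vectorstores import FAISS
from langchain_community.document_loaders import CSVLoader, PyPDFLoader

data_source = "data/about_me.csv"   # CSV data file path
pdf_source = "data/resume.pdf"      # PDF resume file path (hypothetical name)
embeddings = OpenAIEmbeddings()

if os.path.exists("faiss_index"):
    # Reuse the saved index on subsequent runs.
    vectorstore = FAISS.load_local(
        "faiss_index", embeddings, allow_dangerous_deserialization=True
    )
else:
    # First run: embed every CSV row and PDF page, then persist the index.
    docs = CSVLoader(file_path=data_source).load() + PyPDFLoader(pdf_source).load()
    vectorstore = FAISS.from_documents(docs, embeddings)
    vectorstore.save_local("faiss_index")
```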
This project is highly influenced by the repository created by Art Kreimer and depends on the following libraries and tools:
- Streamlit for building the web application
- LangChain for integrating the language model and retrieval chain
- OpenAI API for the language model
- FAISS for the vector database
- MongoDB Atlas for storing the conversation history
This project is licensed under the MIT License.