LLM_LangChain_ChatBot is a Retrieval-Augmented Generation (RAG) tutorial that shows how to build a chatbot that answers queries based on the content of a provided dataset. Unlike traditional models that respond only from pre-learned information, a RAG chatbot dynamically retrieves information from a document corpus to generate answers.
Work through the Jupyter Notebook to understand the complete workflow of building a context-aware chatbot with RAG:
- Document Loading: Importing text from URLs.
- Document Splitting: Breaking down documents into manageable chunks.
- Vector Storing: Embedding document chunks for efficient retrieval.
- Retrieval: Fetching relevant text based on user queries.
- Question Answering: Generating responses using a large language model.
- Interactive Widgets: Experimenting with different queries interactively.
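To make the splitting and retrieval steps above concrete, here is a minimal sketch in plain Python. It is not the notebook's LangChain code: it stands in a toy word-frequency embedding and cosine similarity for the real embedding model and vector store, and all function names (`split_document`, `embed`, `retrieve`) are illustrative, not part of any library.

```python
# Toy illustration of the RAG splitting/embedding/retrieval steps.
# Uses only the standard library; a real pipeline would use an
# embedding model and a vector store instead of word counts.
import math
import re

def split_document(text, chunk_size=60, overlap=15):
    """Break a document into overlapping character chunks (Document Splitting)."""
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
    return chunks

def embed(text):
    """Embed text as a word-frequency vector (stand-in for a real embedding)."""
    vec = {}
    for word in re.findall(r"\w+", text.lower()):
        vec[word] = vec.get(word, 0) + 1
    return vec

def cosine(a, b):
    """Cosine similarity between two sparse frequency vectors."""
    dot = sum(a[w] * b.get(w, 0) for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, k=1):
    """Return the k chunks most similar to the query (Retrieval)."""
    qv = embed(query)
    return sorted(chunks, key=lambda c: cosine(qv, embed(c)), reverse=True)[:k]

doc = ("LangChain helps build chatbots. "
       "A vector store indexes embedded chunks. "
       "Retrieval finds the chunks most relevant to a query.")
chunks = split_document(doc)
best = retrieve("Which chunks are relevant to a query?", chunks)
```

In the notebook, the retrieved chunks would then be passed to the language model as context for the Question Answering step; this sketch stops at retrieval.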
Clone the repository and install the required dependencies:
```shell
pip install -r requirements.txt
```
Run the Jupyter Notebook `LLM_LangChain_ChatBot.ipynb` to follow the tutorial. The notebook guides you through each step of the process, from loading data to interacting with the chatbot.
This project is licensed under the MIT License - see the LICENSE file for details.