This project demonstrates how to run and manage models locally using Ollama by creating an interactive UI with Streamlit.
The app has a page for running chat-based models (a minimal sketch of such a page follows the feature list below).
- Interactive UI: Utilize Streamlit to create a user-friendly interface.
- Local Model Execution: Run your Ollama models locally without the need for external APIs.
- Real-time Responses: Get real-time responses from your models directly in the UI.
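
The features above boil down to a simple loop: Streamlit renders the chat interface and keeps the conversation in session state, while the Ollama Python client streams the model's reply from the local server. The snippet below is a minimal sketch of that loop rather than the repository's actual code; it assumes the `streamlit` and `ollama` packages are installed, and the model name `llama3.2` is only a placeholder for any model you have pulled locally.

```python
# Minimal sketch of a Streamlit chat page backed by a local Ollama model.
# Not the repository's code -- assumes `streamlit` and `ollama` are installed
# and that the Ollama server is running locally.
import ollama
import streamlit as st

MODEL_NAME = "llama3.2"  # placeholder: use any model you have pulled locally

st.title("Chat with a local Ollama model")

# Keep the conversation in session state so it survives Streamlit reruns.
if "messages" not in st.session_state:
    st.session_state.messages = []

# Replay the conversation so far.
for message in st.session_state.messages:
    with st.chat_message(message["role"]):
        st.markdown(message["content"])

# Read the next prompt and stream the model's reply back into the UI.
if prompt := st.chat_input("Ask something..."):
    st.session_state.messages.append({"role": "user", "content": prompt})
    with st.chat_message("user"):
        st.markdown(prompt)

    with st.chat_message("assistant"):
        stream = ollama.chat(
            model=MODEL_NAME,
            messages=st.session_state.messages,
            stream=True,
        )
        # st.write_stream accepts a generator of text chunks and returns the full reply.
        reply = st.write_stream(chunk["message"]["content"] for chunk in stream)
    st.session_state.messages.append({"role": "assistant", "content": reply})
```

Streaming the response through `st.write_stream` is what makes replies appear in the UI token by token instead of all at once.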
Before running the app, ensure you have Python installed on your machine. Then, clone this repository and install the required packages using pip:
```bash
git clone "https://github.com/20481A5450/Chat_With_LLM's.git"
cd "Chat_With_LLM's"
pip install -r requirements.txt
```
To start the app, run the following command in your terminal:
```bash
streamlit run 01_💬_Chat_Demo.py
```
Navigate to the URL provided by Streamlit in your browser to interact with the app.
Note: Make sure Ollama is installed and running on your system, and that you have pulled at least one model (e.g. `ollama pull <model-name>`).
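
Before launching the app, you can quickly confirm that the Ollama server is reachable and see which models are already available. The snippet below is an optional sanity check, not part of the repository; it assumes Ollama is serving on its default local address and that the `requests` package is installed.

```python
# Optional sanity check: is the local Ollama server up, and which models are pulled?
# Assumes Ollama's default local address; not part of the repository.
import requests

OLLAMA_URL = "http://localhost:11434"  # Ollama's default local endpoint

try:
    response = requests.get(f"{OLLAMA_URL}/api/tags", timeout=5)
    response.raise_for_status()
    models = response.json().get("models", [])
    if models:
        print("Locally available models:")
        for model in models:
            print(f"  - {model['name']}")
    else:
        print("Ollama is running, but no models are pulled yet (try `ollama pull <model-name>`).")
except requests.exceptions.RequestException as exc:
    print(f"Could not reach the Ollama server at {OLLAMA_URL}: {exc}")
```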
Interested in contributing to this app? Great! I welcome contributions from everyone.

Got questions or suggestions? Feel free to open an issue or submit a pull request.
👏 Kudos to the Ollama team for their efforts in making open-source models more accessible!