Chat with large language models (LLMs) right from your phone using this amazing Telegram bot.
- Real-time Response Streaming: The bot streams responses sentence by sentence, providing a conversational and dynamic user experience (a minimal sketch of this pattern follows the feature list).
- Multiple Chat Modes: Using `/mode`, you can choose a chat mode such as Coding Assistant, Travel Guide, or Movie Expert.
- Inline Keyboard for Model Selection: Easily choose models using interactive inline keyboard buttons within Telegram.
- Fully Dockerized Bot: Easy deployment and management through Docker, ensuring seamless integration and scalability.
- Asynchronous Messaging: Supports asynchronous interactions for a smoother user experience.
- Log Management in MongoDB: Efficient log management, including storing queries and model responses in MongoDB.
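To illustrate how the streaming and asynchronous pieces fit together, here is a minimal sketch using aiogram 3.x and Ollama's streaming `/api/generate` endpoint. It is not the bot's actual source: the handler, the helper function, and the sentence-boundary heuristic are illustrative assumptions.

```python
import asyncio
import json
import os

import aiohttp
from aiogram import Bot, Dispatcher
from aiogram.types import Message

# Configuration comes from the same environment variables described in the setup table below.
BOT_TOKEN = os.environ["BOT_TOKEN"]
OLLAMA_URL = (
    f"http://{os.getenv('OLLAMA_BASE_URL', 'localhost')}:"
    f"{os.getenv('OLLAMA_CUSTOM_PORT', '11434')}"
)
MODEL = os.getenv("OLLAMA_DEFAULT_MODEL", "dolphin-mistral")

bot = Bot(BOT_TOKEN)
dp = Dispatcher()


async def ollama_stream(prompt: str):
    """Yield text chunks from Ollama's streaming /api/generate endpoint."""
    payload = {"model": MODEL, "prompt": prompt, "stream": True}
    async with aiohttp.ClientSession() as session:
        async with session.post(f"{OLLAMA_URL}/api/generate", json=payload) as resp:
            async for line in resp.content:          # newline-delimited JSON chunks
                chunk = json.loads(line)
                if chunk.get("done"):
                    break
                yield chunk.get("response", "")


@dp.message()
async def chat(message: Message) -> None:
    sent = await message.answer("...")               # placeholder that gets edited in place
    buffer, last_edit = "", ""
    async for piece in ollama_stream(message.text or ""):
        buffer += piece
        # Edit on sentence boundaries rather than on every token to respect Telegram rate limits.
        if piece.endswith((".", "!", "?", "\n")) and buffer != last_edit:
            await sent.edit_text(buffer)
            last_edit = buffer
    if buffer and buffer != last_edit:
        await sent.edit_text(buffer)                 # flush whatever is left at the end


if __name__ == "__main__":
    asyncio.run(dp.start_polling(bot))
```

Editing a single placeholder message at sentence boundaries keeps the reply feeling live without hitting Telegram's edit rate limits.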
The bot is currently in its early stages of development, with many more exciting features planned for the future. Here's what's on the roadmap:
- Model Usage Analytics: Track usage statistics such as questions asked, response time, and tokens consumed.
- Personalized Character: Create your own custom personalized character (e.g., MyGF, MyCodeAssistant, etc.).
- Customizable Model System Prompts: Personalize system prompts with custom messages for each model.
- Download and Query Any Ollama Model: Download any Ollama model locally and interact with it directly via the bot.
- Voice Input with Real-Time Response: Ask questions using voice commands and get real-time streaming responses.
- Conversation History Management: Store, manage, and search past conversations for easy reference and continuity.
- Multi-Language Support: Communicate with the bot in multiple languages with automatic detection and translation.
- Real-Time Web Search (Future Feature): Integration of real-time web search to retrieve live information using agents.
This project is built using the following technologies:
- Python with Aiogram 3.x as the Telegram bot framework
- Ollama for running LLMs locally
- MongoDB for storing queries and model responses
- Docker and Docker Compose for deployment
- Poetry for dependency management
Before installation, make sure to set the necessary environment variables:
| Step | Description |
|---|---|
| Rename `.env.example` to `.env` | Rename the `.env.example` file to `.env`. |
| Set `BOT_TOKEN` | Replace `#Your BOT TOKEN without Double Quotes` with your actual bot token. |
| Set `MONGO_URI` | Replace `#Your Mongo DB URI (Required Compulsory)` with your actual MongoDB URI. |
| Set `OLLAMA_BASE_URL` | Set `OLLAMA_BASE_URL` to `localhost` or any custom IP/domain. |
| Set `OLLAMA_DEFAULT_MODEL` | Set `OLLAMA_DEFAULT_MODEL` to your default model, e.g., `dolphin-mistral`. |
| Set `OLLAMA_CUSTOM_PORT` | Set `OLLAMA_CUSTOM_PORT` to `11434` (the default Ollama port). |
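After these steps, the completed `.env` should look roughly like the example below. All values are placeholders for illustration; the token format and database name are made up, so replace them with your own.

```env
# Placeholder values - replace with your own
BOT_TOKEN=123456789:AAFexampleTokenFromBotFather
MONGO_URI=mongodb://localhost:27017/ollama_bot
OLLAMA_BASE_URL=localhost
OLLAMA_DEFAULT_MODEL=dolphin-mistral
OLLAMA_CUSTOM_PORT=11434
```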
- Install Poetry using the following command:

  ```bash
  curl -sSL https://install.python-poetry.org | python3 -
  ```

- After Poetry is installed, install the project dependencies:

  ```bash
  poetry install
  ```

- Run the bot:

  ```bash
  poetry run python main.py
  ```
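Since downloading models through the bot is still a roadmap item, the configured default model needs to be available locally before the bot can answer. Assuming Ollama is installed and running, you can pull it with the Ollama CLI:

```bash
ollama pull dolphin-mistral
```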
Note: This project currently works on Linux.
If you'd prefer an easier deployment, you can use Docker with the following steps:
- Clone the repository and navigate to the project directory.
- Use Docker Compose to build and start the bot:

  ```bash
  docker compose up --build
  ```
Docker will automatically install all the dependencies and set up the environment for you. Once the containers are up and running, the bot will be ready for use.
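For orientation, a Compose setup for this kind of stack typically wires together the bot, MongoDB, and Ollama along the lines of the sketch below. This is only an illustration, not the repository's actual `docker-compose.yml`; the service names and images are assumptions.

```yaml
# Illustrative sketch only - see the repository's docker-compose.yml for the real configuration.
services:
  ollama:
    image: ollama/ollama
    ports:
      - "11434:11434"
  mongo:
    image: mongo
    ports:
      - "27017:27017"
  bot:
    build: .
    env_file: .env        # the same .env prepared in the setup section
    depends_on:
      - ollama
      - mongo
```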
This project is open for contributions! Whether you're interested in fixing bugs, adding new features, or improving documentation, you're more than welcome to join the development process.
- Fork this repository.
- Create a new branch for your feature or bug fix.
- Submit a pull request, and include a description of your changes.
We encourage you to get involved and help shape the future of this project!
- Ollama
- Aiogram 3.x
- chatgpt-telegram-bot: inspired by Karfly's work on this project
- ruecat/ollama-telegram
This project is released under the terms of the GPL 2.0 license. For more information, see the LICENSE file included in the repository.