🚀 Talk to your local LLM models in human voice and get responses in realtime!
⚠️ Active Development Alert! ⚠️ We're cooking up something amazing! While the core functionality is taking shape, some features are still in the oven. Perfect for experiments, but maybe hold off on that production deployment for now! 🚀
echoOLlama is a cool project that lets you talk to AI models using your voice, just like you'd talk to a real person! 🗣️
Here's what makes it special:
- 🎤 You can speak naturally and the AI understands you
- 🤖 It works with local AI models (through Ollama), so your data stays private
- ⚡ Super-fast responses in real time
- 🔊 The AI talks back to you with a natural voice
- 🔌 Works just like OpenAI's API, but with your own models
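Because the API mirrors OpenAI's, any OpenAI-style client should work once you point it at your local server. A minimal sketch of what such a request might look like (the base URL, port, and model name here are assumptions, not confirmed defaults; check your own config):

```python
import json
import urllib.request

# Hypothetical local endpoint -- adjust host/port to match your echoOLlama setup.
BASE_URL = "http://localhost:8000/v1"

# An OpenAI-style chat completion payload; any model you've pulled
# into Ollama can be named here (model name is an assumption).
payload = {
    "model": "llama3",
    "messages": [
        {"role": "user", "content": "Hello!"},
    ],
    "stream": True,  # stream tokens back in real time
}

request = urllib.request.Request(
    f"{BASE_URL}/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
# urllib.request.urlopen(request) would send it; the call is omitted here
# so the sketch runs without a live server.
```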
Think of it like having a smart assistant that runs completely on your computer. You can have natural conversations with it, ask questions, get help with tasks - all through voice! And because it uses local AI models, you don't need to worry about your conversations being stored in the cloud.
Perfect for developers who want to:
- Build voice-enabled AI applications
- Create custom AI assistants
- Experiment with local language models
- Have private AI conversations
- ✅ Connection handling and session management
- ✅ Real-time event streaming
- ✅ Redis-based session storage
- ✅ Basic database interactions
- ✅ OpenAI compatibility layer
- ✅ Core WebSocket infrastructure
- 🚧 Message processing pipeline (In Progress)
- 🤖 Advanced response generation with client events
- 🎯 Function calling implementation with client events
- 🎤 Audio transcription service connection with client events
- 🗣️ Text-to-speech integration with client events
- 📊 Usage analytics dashboard
- 🔐 Enhanced authentication system
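The "client events" above are the JSON messages a client sends over the WebSocket. A sketch of building such events (the event names here follow OpenAI's Realtime API conventions; echoOLlama's actual event names and fields may differ):

```python
import json
import uuid

def make_event(event_type: str, **fields) -> str:
    """Build a realtime client event as a JSON string.

    Event types like "session.update" and "response.create" follow
    OpenAI's Realtime API naming; this project's names may differ.
    """
    event = {"event_id": f"evt_{uuid.uuid4().hex[:8]}", "type": event_type}
    event.update(fields)
    return json.dumps(event)

# Ask the server to start generating a response (text + audio).
create = make_event("response.create", response={"modalities": ["text", "audio"]})

# Update session settings, e.g. voice and transcription model.
update = make_event(
    "session.update",
    session={"voice": "alloy", "input_audio_transcription": {"model": "whisper-1"}},
)
```

Each event carries a unique `event_id` so the server's replies can be correlated back to the request that triggered them.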
- Real-time Chat 💬
  - Streaming responses via WebSockets
  - Multi-model support via Ollama
  - Session persistence
  - 🎤 Audio Transcription (faster-whisper)
  - 🗣️ Text-to-Speech (OpenedAI/Speech)
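Streaming responses arrive as a sequence of small delta chunks that the client stitches together into the full reply. A minimal sketch of that consumer-side logic (the chunk shape is an assumption modeled on OpenAI-style streaming deltas, not the project's confirmed wire format):

```python
def assemble_stream(chunks):
    """Concatenate streamed delta chunks into the full reply text.

    Each chunk is assumed to look like {"delta": {"content": "..."}},
    mirroring OpenAI-style streaming; the real format may differ.
    """
    parts = []
    for chunk in chunks:
        content = chunk.get("delta", {}).get("content")
        if content:
            parts.append(content)
    return "".join(parts)

# Simulated stream, as it might arrive over the WebSocket.
stream = [
    {"delta": {"content": "Hello"}},
    {"delta": {"content": ", "}},
    {"delta": {"content": "world!"}},
    {"delta": {}},  # e.g. a final chunk carrying only a finish reason
]
print(assemble_stream(stream))  # -> Hello, world!
```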
- Coming Soon 🚀
  - 🧠 Function Calling System
  - 📊 Advanced Analytics
  - ⚡ Lightning-fast response times
  - 🚦 Built-in rate limiting
  - 📈 Usage tracking ready
  - ⚖️ Load balancing for scale
  - 🎯 100% OpenAI API compatibility
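Built-in rate limiting is commonly done with a token bucket per client: each request spends a token, and tokens refill at a fixed rate. A self-contained sketch of the idea (an illustration only, not the project's planned implementation):

```python
import time

class TokenBucket:
    """Allow bursts up to `capacity` requests, refilling `rate` tokens/second."""

    def __init__(self, capacity: float, rate: float, clock=time.monotonic):
        self.capacity = capacity
        self.rate = rate
        self.tokens = capacity
        self.clock = clock          # injectable clock makes testing deterministic
        self.last = clock()

    def allow(self) -> bool:
        now = self.clock()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# With a fake clock we can step time manually.
t = [0.0]
bucket = TokenBucket(capacity=2, rate=1.0, clock=lambda: t[0])
print(bucket.allow(), bucket.allow(), bucket.allow())  # True True False
t[0] = 1.0  # one second later: one token has refilled
print(bucket.allow())  # True
```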
Click on the image to view the interactive version on Excalidraw!
- 🚀 FastAPI - Lightning-fast API framework
- ⚡ Redis - Blazing-fast caching & session management
- 🐘 PostgreSQL - Rock-solid data storage
- 🦙 Ollama - Local LLM inference
- 🎤 faster-whisper - Speech recognition (coming soon)
- 🗣️ OpenedAI TTS - Voice synthesis (coming soon)
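Redis keeps session state keyed by session ID with a TTL, so stale sessions expire on their own. The pattern can be sketched with a plain dict standing in for Redis (the `session:{id}` key layout is an assumption for illustration):

```python
import json
import time

class SessionStore:
    """Toy stand-in for a Redis session store: set/get with expiry."""

    def __init__(self, clock=time.monotonic):
        self._data = {}
        self._clock = clock

    def set(self, session_id: str, session: dict, ttl: float = 3600.0):
        # Redis equivalent: SET session:{id} {json} EX {ttl}
        key = f"session:{session_id}"
        self._data[key] = (json.dumps(session), self._clock() + ttl)

    def get(self, session_id: str):
        key = f"session:{session_id}"
        entry = self._data.get(key)
        if entry is None:
            return None
        payload, expires = entry
        if self._clock() >= expires:
            del self._data[key]     # lazy expiry, like a TTL'd Redis key
            return None
        return json.loads(payload)

store = SessionStore()
store.set("abc123", {"model": "llama3", "voice": "alloy"})
print(store.get("abc123"))
```

Serializing sessions to JSON keeps the stand-in faithful to Redis, which stores strings rather than Python objects.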
- Clone & Setup 📦

  ```bash
  git clone https://github.com/iamharshdev/EchoOLlama.git
  cd EchoOLlama
  python -m venv .venv
  source .venv/bin/activate  # or `.venv\Scripts\activate` on Windows
  pip install -r requirements.txt
  ```
- Environment Setup ⚙️

  ```bash
  cp .env.example .env
  # Update .env with your config - check .env.example for all options!
  make migrate  # create the database and apply migrations
  ```
- Launch Time 🚀

  ```bash
  # Fire up the services
  docker-compose up -d

  # Start the API server
  uvicorn app.main:app --reload
  ```
Got ideas? Found a bug? Want to contribute? Check out our CONTRIBUTING.md guide and become part of something awesome! We love pull requests! 🎉
- 🟢 Working: Connection handling, session management, event streaming
- 🟡 In Progress: Message processing, response generation
- 🔴 Planned: Audio services, function calling, analytics
MIT Licensed - Go wild! See LICENSE for the legal stuff.
Built with ❤️ by the community, for the community
PS: Star ⭐ us on GitHub if you like what we're building!