
🦙 echoOLlama: A real-time voice AI platform powered by local LLMs. Features WebSocket streaming, voice interactions, and OpenAI API compatibility. Built with FastAPI, Redis, and PostgreSQL. Perfect for private AI conversations and custom voice assistants.


🦙 echoOLlama: A reverse-engineered take on OpenAI's Realtime API

🌟 Talk to your local LLMs in a natural voice and get responses in real time!

🦙 EchoOLlama banner

โš ๏ธ Active Development Alert! โš ๏ธ

We're cooking up something amazing! While the core functionality is taking shape, some features are still in the oven. Perfect for experiments, but maybe hold off on that production deployment for now! ๐Ÿ˜‰

🎯 What's echoOLlama?

echoOLlama lets you talk to AI models with your voice, just like you'd talk to a real person! 🗣️

Here's what makes it special:

  • 🎤 You can speak naturally and the AI understands you
  • 🤖 It works with local AI models (through Ollama) so your data stays private
  • ⚡ Super fast responses in real time
  • 🔊 The AI talks back to you with a natural voice
  • 🔄 Works just like OpenAI's API but with your own models

Think of it like having a smart assistant that runs completely on your computer. You can have natural conversations with it, ask questions, get help with tasks - all through voice! And because it uses local AI models, you don't need to worry about your conversations being stored in the cloud.

Perfect for developers who want to:

  • Build voice-enabled AI applications
  • Create custom AI assistants
  • Experiment with local language models
  • Have private AI conversations
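Because the server speaks the OpenAI wire format, existing OpenAI-style clients can simply point at a local base URL. As a minimal sketch of what such a request body looks like (the base URL, model name, and message contents here are illustrative assumptions, not values documented by the project):

```python
import json

# Hypothetical local endpoint for an OpenAI-compatible server.
BASE_URL = "http://localhost:8000/v1"

# A chat request in the standard OpenAI-compatible shape.
payload = {
    "model": "llama3",          # illustrative local model name
    "stream": True,             # request token-by-token streaming
    "messages": [
        {"role": "system", "content": "You are a helpful voice assistant."},
        {"role": "user", "content": "What's the weather like on Mars?"},
    ],
}

# Serialize exactly as it would be sent in the POST body.
body = json.dumps(payload)
print(body)
```

Any client library that lets you override the API base URL should be able to send a body like this unchanged.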

🎉 What's Working Now:

  • ✅ Connection handling and session management
  • ✅ Real-time event streaming
  • ✅ Redis-based session storage
  • ✅ Basic database interactions
  • ✅ OpenAI compatibility layer
  • ✅ Core WebSocket infrastructure
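The event stream follows the Realtime-API convention of typed JSON frames pushed over a WebSocket. A hedged sketch of what such server events look like (the event names `session.created` and `response.text.delta` follow OpenAI's Realtime API; the field values and helper function are illustrative):

```python
import json

def make_event(event_type: str, **fields) -> str:
    """Serialize a server event as a single JSON text frame."""
    return json.dumps({"type": event_type, **fields})

# A session-lifecycle event and a streaming text chunk, as the server
# might push them to a connected client.
session_created = make_event("session.created", session={"id": "sess_123"})
text_delta = make_event("response.text.delta", delta="Hello")

# A client demultiplexes frames by their "type" field.
for frame in (session_created, text_delta):
    event = json.loads(frame)
    print(event["type"])
```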

🚧 On the Roadmap:

  • 📝 Message processing pipeline (in progress)
  • 🤖 Advanced response generation with client events
  • 🎯 Function calling implementation with client events
  • 🔊 Audio transcription service connection with client events
  • 🗣️ Text-to-speech integration with client events
  • 📊 Usage analytics dashboard
  • 🔐 Enhanced authentication system

🌟 Features & Capabilities

🎮 Core Services

  • Real-time Chat 💬

    • Streaming responses via WebSockets
    • Multi-model support via Ollama
    • Session persistence
    • 🎤 Audio transcription (faster-whisper)
    • 🗣️ Text-to-speech (OpenedAI/Speech)
  • Coming Soon 🔜

    • 🔧 Function Calling System
    • 📊 Advanced Analytics
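Session persistence here is Redis-backed. As a rough sketch of the pattern (an in-memory dict stands in for Redis so the example is self-contained; the key prefix, TTL, and state fields are assumptions, not the project's actual schema):

```python
import json
import time
from typing import Optional

class SessionStore:
    """Dict-backed stand-in for Redis SETEX/GET-style session storage."""

    def __init__(self) -> None:
        # key -> (expiry timestamp, JSON-encoded state)
        self._data: dict[str, tuple[float, str]] = {}

    def save(self, session_id: str, state: dict, ttl: int = 3600) -> None:
        # Mirrors SETEX: store serialized state with an expiry time.
        self._data[f"session:{session_id}"] = (time.time() + ttl, json.dumps(state))

    def load(self, session_id: str) -> Optional[dict]:
        entry = self._data.get(f"session:{session_id}")
        if entry is None or entry[0] < time.time():
            return None  # missing or expired, exactly like a lapsed Redis key
        return json.loads(entry[1])

store = SessionStore()
store.save("abc", {"model": "llama3", "history": []})
print(store.load("abc"))
```

Swapping the dict for a real `redis` client would keep the same save/load surface while adding cross-process persistence.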

๐Ÿ› ๏ธ Technical Goodies

  • โšก Lightning-fast response times
  • ๐Ÿ”’ Built-in rate limiting
  • ๐Ÿ“ˆ Usage tracking ready
  • โš–๏ธ Load balancing for scale
  • ๐ŸŽฏ 100% OpenAI API compatibility
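Rate limiting of this kind is commonly implemented as a token bucket. A minimal sketch, assuming one bucket per client; the class name and the rate/capacity numbers are illustrative, not echoOLlama's actual implementation:

```python
import time

class TokenBucket:
    """Allow short bursts up to `capacity`, refilling `rate` tokens/second."""

    def __init__(self, rate: float, capacity: int) -> None:
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)   # start full
        self.last = time.monotonic()

    def allow(self) -> bool:
        # Refill based on elapsed time, capped at the bucket's capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# Burst of three back-to-back requests against a 2-token bucket:
bucket = TokenBucket(rate=1, capacity=2)
results = [bucket.allow() for _ in range(3)]
print(results)  # the first two pass; the third is throttled
```

In a gateway, each client key (API key or session ID) would get its own bucket, with rejected calls answered by HTTP 429.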

๐Ÿ—๏ธ System Architecture

echoOLlama

Click on the image to view the interactive version on Excalidraw!

💻 Tech Stack Spotlight

🎯 Backend Champions

  • 🚀 FastAPI - Lightning-fast API framework
  • 📦 Redis - Blazing-fast caching & session management
  • 🐘 PostgreSQL - Rock-solid data storage

🤖 AI Powerhouse

  • 🦙 Ollama - Local LLM inference
  • 🎤 faster_whisper - Speech recognition (coming soon)
  • 🗣️ OpenedAI TTS - Voice synthesis (coming soon)

🚀 Get Started in 3, 2, 1...

  1. Clone & Setup 📦
     git clone https://github.com/iamharshdev/EchoOLlama.git
     cd EchoOLlama
     python -m venv .venv
     source .venv/bin/activate  # or `.venv\Scripts\activate` on Windows
     pip install -r requirements.txt

  2. Environment Setup ⚙️
     cp .env.example .env
     # Update .env with your config - check .env.example for all options!
     make migrate  # create db and apply migrations

  3. Launch Time 🚀
     # Fire up the services
     docker-compose up -d

     # Start the API server
     uvicorn app.main:app --reload

๐Ÿค Join the EchoOLlama Family

Got ideas? Found a bug? Want to contribute? Check out our CONTRIBUTING.md guide and become part of something awesome! We love pull requests! ๐ŸŽ‰

💡 Project Status Updates

  • 🟢 Working: Connection handling, session management, event streaming
  • 🟡 In Progress: Message processing, response generation
  • 🔴 Planned: Audio services, function calling, analytics

📜 License

MIT Licensed - Go wild! See LICENSE for the legal stuff.


Built with 💖 by the community, for the community

PS: Star ⭐ us on GitHub if you like what we're building!
