A production-grade message queue system for agent communication and LLM backend load balancing in large-scale multi-agent systems. Built on Apache Kafka, it provides reliable, scalable message distribution together with comprehensive features for agent communication, message management, and load balancing.
- Robust Message Distribution: Built on Apache Kafka for reliable, scalable message handling
- Auto-Partitioning: Kafka topic partitions scale automatically with agent load
- Stateful Management: Comprehensive tracking of message states and history
- Persistence: Automatic saving of all interactions to JSON files
- RESTful API: Complete FastAPI interface for all messaging operations
- Authentication & Authorization: JWT-based secure agent authentication
- Production-Ready: Deployment with Gunicorn/Uvicorn for high performance
- Comprehensive Logging: Detailed logs using Loguru
- Containerized: Docker and Docker Compose setup for easy deployment
- Message Types: Support for different message types (chat, commands, function calls)
- Group Messaging: Ability to create agent groups for targeted communication
- Load Balancing: Built-in LLM backend load balancing capabilities (see the sketch after this list)
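The load-balancing idea is easiest to see in code. The sketch below is purely illustrative: the `RoundRobinBalancer` class and the backend URLs are hypothetical stand-ins, not part of this project's API.

```python
# Hypothetical sketch of LLM backend load balancing via round-robin;
# the class name and backend URLs are illustrative, not this project's API.
import itertools
import threading


class RoundRobinBalancer:
    """Cycle through a fixed pool of LLM backend URLs, thread-safely."""

    def __init__(self, backends: list[str]) -> None:
        self._pool = itertools.cycle(backends)
        self._lock = threading.Lock()

    def next_backend(self) -> str:
        with self._lock:
            return next(self._pool)


balancer = RoundRobinBalancer(
    ["http://llm-backend-1:8001", "http://llm-backend-2:8001"]
)
print(balancer.next_backend())  # http://llm-backend-1:8001
print(balancer.next_backend())  # http://llm-backend-2:8001
```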
The Agent Messaging System is built with the following components:
- Kafka Backend: For reliable message distribution and queuing (see the produce/consume sketch after this list)
- SwarmsDB Class: Core messaging logic and state management
- FastAPI Server: RESTful API for agent interaction
- Gunicorn/Uvicorn: Production server stack (Gunicorn managing Uvicorn ASGI workers)
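To make the Kafka layer concrete, here is a minimal produce/consume round trip using the kafka-python client. The topic name is an assumed example following the `KAFKA_TOPIC_PREFIX` convention from the configuration section, not the system's actual topic layout.

```python
# Minimal kafka-python round trip; the topic name is an assumed example
# following the KAFKA_TOPIC_PREFIX convention, not the real topic layout.
import json

from kafka import KafkaConsumer, KafkaProducer

TOPIC = "agent_messaging_agent2"  # assumed per-agent topic

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send(TOPIC, {"sender_id": "agent1", "content": "hello"})
producer.flush()

consumer = KafkaConsumer(
    TOPIC,
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",
    consumer_timeout_ms=2000,  # stop iterating after 2s without messages
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)
for record in consumer:
    print(record.value)
```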
# Clone the repository
git clone https://github.com/yourusername/agent-messaging-system.git
cd agent-messaging-system
# Start the entire stack (Kafka, Zookeeper, API)
docker-compose up -d
# The API will be available at http://localhost:8000
# The Kafka UI will be available at http://localhost:8080
- Install Poetry (dependency management):
curl -sSL https://install.python-poetry.org | python3 -
- Install dependencies:
poetry install
- Make sure Kafka is running and accessible.
- Start the API server:
# For development
./start.sh development
# For production
./start.sh production
The system can be configured using environment variables:
# API Configuration
API_ENV=production
PORT=8000
# Security
JWT_SECRET=your_secret_key_here
JWT_ALGORITHM=HS256
TOKEN_EXPIRE_MINUTES=1440
# Kafka Configuration
KAFKA_BOOTSTRAP_SERVERS=kafka:9092
KAFKA_TOPIC_PREFIX=agent_messaging_
KAFKA_NUM_PARTITIONS=6
KAFKA_REPLICATION_FACTOR=1
# Message History Configuration
MESSAGE_HISTORY_DIR=/app/message_history
SAVE_INTERVAL_SECONDS=300
# Rate Limiting
RATE_LIMIT_PER_MINUTE=300
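As a reference for how these variables might be consumed in application code, here is a minimal sketch using only the standard library; the defaults mirror the example values above, and the `Settings` class itself is illustrative rather than this project's actual config object.

```python
# Minimal sketch of reading the configuration from the environment.
# The Settings class is illustrative; defaults mirror the values above.
import os
from dataclasses import dataclass


@dataclass(frozen=True)
class Settings:
    api_env: str = os.getenv("API_ENV", "production")
    port: int = int(os.getenv("PORT", "8000"))
    jwt_secret: str = os.getenv("JWT_SECRET", "")
    jwt_algorithm: str = os.getenv("JWT_ALGORITHM", "HS256")
    token_expire_minutes: int = int(os.getenv("TOKEN_EXPIRE_MINUTES", "1440"))
    kafka_bootstrap_servers: str = os.getenv("KAFKA_BOOTSTRAP_SERVERS", "kafka:9092")
    kafka_topic_prefix: str = os.getenv("KAFKA_TOPIC_PREFIX", "agent_messaging_")


settings = Settings()
assert settings.jwt_secret, "JWT_SECRET must be set"
```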
# Get access token
curl -X POST http://localhost:8000/auth/token \
-H "Content-Type: application/json" \
-d '{"username":"agent1", "password":"password"}'
curl -X POST http://localhost:8000/agents/register \
-H "Authorization: Bearer YOUR_TOKEN" \
-H "Content-Type: application/json" \
-d '{"agent_id":"agent1", "description":"AI Assistant Agent"}'
curl -X POST http://localhost:8000/messages \
-H "Authorization: Bearer YOUR_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"content": "Hello, can you analyze this data?",
"receiver_id": "agent2",
"message_type": "command",
"priority": 2
}'
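The equivalent call from Python, reusing a token from `/auth/token` (a minimal sketch; the payload fields mirror the curl example):

```python
# Send a message from Python; payload fields mirror the curl example above.
import requests

token = "YOUR_TOKEN"  # obtained from /auth/token
resp = requests.post(
    "http://localhost:8000/messages",
    headers={"Authorization": f"Bearer {token}"},
    json={
        "content": "Hello, can you analyze this data?",
        "receiver_id": "agent2",
        "message_type": "command",
        "priority": 2,
    },
    timeout=10,
)
resp.raise_for_status()
```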
curl -X POST http://localhost:8000/agents/receive \
-H "Authorization: Bearer YOUR_TOKEN" \
-d "max_messages=10&timeout=2.0"
curl -X POST http://localhost:8000/groups \
-H "Authorization: Bearer YOUR_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"group_name": "analysis_team",
"agent_ids": ["agent2", "agent3", "agent4"]
}'
curl -X POST http://localhost:8000/groups/message \
-H "Authorization: Bearer YOUR_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"group_name": "analysis_team",
"content": "Everyone, please review these results.",
"message_type": "chat"
}'
poetry run pytest
poetry run black .
poetry run isort .
poetry run mypy .
For production deployment, we recommend using Docker Compose with appropriate environment variables for security.
- Edit the environment variables in docker-compose.yml
- Ensure the JWT_SECRET is a secure random string
- Deploy with docker-compose:
docker-compose up -d
Licensed under the MIT License.
Contributions are welcome! Please feel free to submit a Pull Request.