This project demonstrates how to autoscale RabbitMQ consumer applications using Kubernetes-based Event-Driven Autoscaler (KEDA). KEDA allows you to dynamically scale your application based on metrics from various event sources, such as RabbitMQ queue length.
In this project, we'll set up a RabbitMQ server, deploy a RabbitMQ consumer application to Kubernetes, configure KEDA to monitor the RabbitMQ queue length, and observe autoscaling behavior based on the queue load.
- Kubernetes cluster (e.g., KIND, Minikube, GKE, AKS, EKS)
- RabbitMQ server (can be installed locally or in a cloud environment)
- Docker (for building container images)
- kubectl (Kubernetes command-line tool)
- Helm (optional; used here to install KEDA)
- Setup Kubernetes Cluster using KIND:
```bash
kind create cluster --name keda-cluster
```
- Install KEDA: Set up Kubernetes-based Event-Driven Autoscaler (KEDA) on your Kubernetes cluster. You can use Helm to install KEDA easily.
```bash
helm repo add kedacore https://kedacore.github.io/charts
helm repo update
helm install keda kedacore/keda --namespace keda-cluster --create-namespace
```
- Set up RabbitMQ Server: Install and configure a RabbitMQ server by following the instructions on the official RabbitMQ website. This project uses Kali Linux (Debian-based, bullseye release).
To start the RabbitMQ server and enable the management console, use the commands below:
```bash
# start rabbitmq-server
sudo systemctl start rabbitmq-server
# enable rabbitmq-server on boot
sudo systemctl enable rabbitmq-server
# enable the management console
rabbitmq-plugins enable rabbitmq_management
```
Open the management console at http://localhost:15672/
- Log in with username guest and password guest
- Go to the Queues and Streams tab
- Enter a name for the queue and create it
- Deploy RabbitMQ Producer: Use a RabbitMQ producer application to send messages to the RabbitMQ queue. You can find an example producer script in this repository. (rabbitmq_producer.py)
```bash
pip3 install pika
python3 rabbitmq_producer.py
```
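A minimal sketch of what a pika-based producer like rabbitmq_producer.py can look like. The queue name, payload format, and helper names here are illustrative assumptions, not taken from the repo script; adjust them to match the queue you created in the management console.

```python
import json

QUEUE_NAME = "demo-queue"  # assumed queue name; use the one you created


def make_payloads(count):
    """Build simple JSON payloads to publish (illustrative format)."""
    return [json.dumps({"task_id": i}) for i in range(count)]


def publish(payloads, host="localhost"):
    """Open a connection, declare the queue, and publish each payload."""
    import pika  # imported here so the payload helper works without pika installed

    connection = pika.BlockingConnection(pika.ConnectionParameters(host=host))
    channel = connection.channel()
    # durable queue + persistent messages so work survives a broker restart
    channel.queue_declare(queue=QUEUE_NAME, durable=True)
    for body in payloads:
        channel.basic_publish(
            exchange="",
            routing_key=QUEUE_NAME,
            body=body,
            properties=pika.BasicProperties(delivery_mode=2),
        )
    connection.close()
```

Running `publish(make_payloads(100))` against a local broker fills the queue, which is what later drives KEDA to scale the consumer out.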
- Deploy RabbitMQ Consumer: Deploy a RabbitMQ consumer application to Kubernetes. This application will consume messages from the RabbitMQ queue.
```bash
kubectl apply -f rabbitmq-consumer-deployment.yaml -n keda-cluster
```
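A sketch of what the consumer Deployment can look like, assuming an image built from the repo's Dockerfile and pushed to a registry. The image name and environment values are illustrative placeholders, not taken from the repo manifest; the app=rabbitmq-consumer label matches the selector used in the monitoring step below.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rabbitmq-consumer
  namespace: keda-cluster
  labels:
    app: rabbitmq-consumer
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rabbitmq-consumer
  template:
    metadata:
      labels:
        app: rabbitmq-consumer
    spec:
      containers:
        - name: rabbitmq-consumer
          image: <your-registry>/rabbitmq-consumer:latest  # placeholder image
          env:
            - name: RABBITMQ_HOST          # assumed env var read by the consumer
              value: "<rabbitmq-host>"     # placeholder broker address
```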
- Configure KEDA ScaledObject: Create a ScaledObject resource in Kubernetes to configure KEDA to monitor the RabbitMQ queue length and trigger autoscaling based on queue metrics.
```bash
kubectl apply -f rabbitmq-scaledobject.yaml -n keda-cluster
```
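A sketch of a ScaledObject using KEDA's rabbitmq trigger, assuming the Deployment above and a queue named demo-queue; the host URL, queue name, and replica bounds are illustrative placeholders rather than the repo's actual values.

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: rabbitmq-scaledobject
  namespace: keda-cluster
spec:
  scaleTargetRef:
    name: rabbitmq-consumer   # the consumer Deployment to scale
  minReplicaCount: 0          # scale to zero when the queue is empty
  maxReplicaCount: 10
  triggers:
    - type: rabbitmq
      metadata:
        protocol: amqp
        queueName: demo-queue                              # assumed queue name
        mode: QueueLength
        value: "5"                                         # messages per replica
        host: amqp://guest:guest@<rabbitmq-host>:5672/     # placeholder; prefer a TriggerAuthentication in production
```

With mode QueueLength and value "5", KEDA targets roughly one consumer replica per 5 queued messages, between the min and max replica counts.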
- Test Autoscaling: Send messages to the RabbitMQ queue and observe the autoscaling behavior of your RabbitMQ consumer application.
```bash
python3 rabbitmq_test.py
```
- Monitor Autoscaling: Monitor the application pods and scaling events:
```bash
kubectl get pods -l app=rabbitmq-consumer -n keda-cluster
kubectl describe scaledobject rabbitmq-scaledobject -n keda-cluster
```
- rabbitmq_producer.py: Example Python script for sending messages to the RabbitMQ queue.
- rabbitmq_consumer_deployment.yaml: Kubernetes deployment YAML file for deploying the RabbitMQ consumer application.
- rabbitmq_scaledobject.yaml: YAML file defining the ScaledObject resource for configuring KEDA autoscaling.
- Dockerfile: Dockerfile for building the RabbitMQ consumer application container image.
- README.md: Project README file providing an overview, setup instructions, and other details.
- KEDA Documentation
- RabbitMQ Documentation
- Kubernetes Documentation
- Docker Documentation
- Docker image
This project is licensed under the MIT License.