Special thanks to @Yoolean
This repository seeks to provide:
- a production-worthy Kafka setup for reproducing errors and load testing
- an end-to-end monitoring system for Kafka
Install all the monitoring tools and the Kafka cluster at once:

```shell
./install-all.sh
```

Uninstall everything at once:

```shell
./uninstall-all.sh
```
Only tested on Amazon Linux 2 EC2 instances.
- Recommend at least t2.xlarge (4 vCPU, 16 GB).
- Minikube itself recommends t2.medium, but this setup runs too many pods and too much load for a t2.medium to handle.
```shell
git clone https://github.com/joyfulbean/myset
cd myset
./joyful_shell.sh
./updaterc.sh
```

From now on, `kubectl` is shortened to `kp`.
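A minimal sketch of the kind of shortcut `updaterc.sh` presumably adds to the shell rc file (the exact contents are defined in the myset repo; this function is illustrative only):

```shell
# Hypothetical shell function equivalent to the `kp` shortcut:
# it simply forwards all arguments to kubectl.
kp() { kubectl "$@"; }
```

Usage: `kp get pods -n kafka` instead of `kubectl get pods -n kafka`.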
```shell
# install minikube
git clone https://github.com/joyfulbean/myset
cd myset
./minikubeset.sh

# start minikube
sudo su -
minikube start --driver=none --kubernetes-version=v1.23.0 --force

# start minikube dashboard
kubectl proxy --address='0.0.0.0' --disable-filter=true &
minikube dashboard --url
```

Minikube Dashboard URL: `http://ec2-ip:8001/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/`
Due to a recent minikube issue, the Kubernetes version must be specified explicitly (hence `--kubernetes-version=v1.23.0` above).
```shell
git clone https://github.com/joyfulbean/kube-kafka-monitoring.git
cd kube-kafka-monitoring
kubectl apply -k rbac-namespace-default
kubectl apply -k zookeeper/
kubectl apply -k kafka/
```

Want to apply a single file instead? Use `kubectl apply -f`, and update the directory's kustomization to add any new file.
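Adding a new file to a directory means listing it in that directory's `kustomization.yaml`; a hedged sketch (the resource names here are placeholders, not the repo's actual entries):

```yaml
# kustomization.yaml — illustrative only; resource names are placeholders.
resources:
  - some-existing-resource.yml
  - my-new-resource.yml   # newly added file in this directory
```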
- `./rbac-namespace-default`
  - creates the `kafka` and `monitoring` namespaces and the cluster roles
- `./zookeeper`
  - uses a persistent volume for ZooKeeper; change `zookeeper.properties` in `zoo-config.yml` and the number of ZooKeeper replicas in `pzoo.yml`
- `./kafka`
  - change `broker.properties` in `broker-config.yml` and the number of Kafka replicas in `kafka.yml`
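For orientation, broker settings of this kind usually live in a ConfigMap; a hedged sketch of the shape `broker-config.yml` might take (the keys shown are standard Kafka broker properties, and the values are examples, not the repo's actual configuration):

```yaml
# Illustrative excerpt only; see broker-config.yml in the repo for real values.
kind: ConfigMap
apiVersion: v1
metadata:
  name: broker-config
  namespace: kafka
data:
  broker.properties: |
    log.retention.hours=168
    num.partitions=3
    default.replication.factor=3
```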
Out-of-cluster access is possible through `broker-outside-svc`.

- ZooKeeper outside access is not provided in this repo; you need to create it yourself.
- ZooKeeper inside access is available: use `zookeeper:2181`.
- Kafka outside access is available: use `(ec2-public-ip):32400`, `(ec2-public-ip):32401`, and so on, one NodePort per broker.
- Kafka internal access is available: use `bootstrap.kafka:9092`.
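Since each broker gets its own NodePort starting at 32400, an external bootstrap-server list can be assembled mechanically; a small sketch (the IP is a placeholder, and `BROKERS` must match the replica count in `kafka.yml`):

```shell
# Build the external bootstrap-server list for clients outside the cluster.
# One NodePort per broker, starting at 32400 (per broker-outside-svc).
EC2_IP=203.0.113.10   # placeholder: your EC2 public IP
BROKERS=3             # placeholder: number of Kafka replicas
BOOTSTRAP=""
for i in $(seq 0 $((BROKERS - 1))); do
  BOOTSTRAP="${BOOTSTRAP:+$BOOTSTRAP,}${EC2_IP}:$((32400 + i))"
done
echo "$BOOTSTRAP"
# A client would then use it like:
#   kafka-console-producer.sh --bootstrap-server "$BOOTSTRAP" --topic test
```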
```shell
kubectl apply -k cmak
kubectl apply -k linkedin-burrow/
kubectl apply -k prometheus-exporter
kubectl apply -k prometheus/
kubectl --namespace kafka patch statefulset kafka --patch "$(cat prometheus-exporter/jmx-exporter/kafka-jmx-exporter-patch.yml)"
kubectl apply -k grafana/
```
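The `kubectl patch` step above applies a strategic-merge patch that adds a JMX-exporter sidecar to the Kafka StatefulSet. Purely for orientation, such a patch typically has the following shape; the container name, image, and port below are assumptions, not the repo's actual values:

```yaml
# Illustrative patch shape only; the real file is
# prometheus-exporter/jmx-exporter/kafka-jmx-exporter-patch.yml.
spec:
  template:
    spec:
      containers:
        - name: jmx-exporter            # assumed container name
          image: example/jmx-exporter   # assumed image
          ports:
            - containerPort: 5556       # assumed exporter port
```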
- cmak
  - reference for CMAK
  - create topics here and manage the cluster
  - to add a cluster, use the ZooKeeper cluster address: `zookeeper.kafka:2181`
  - Dashboard URL: `(ec2-ip):32336`
- burrow
  - reference for Burrow
  - monitors consumer lag
  - check the rule set here
  - Dashboard URL: `(ec2-ip):32337`
  - Metric URL: `(ec2-ip):32339/metrics`
- prometheus
  - reference for Prometheus
  - check the collected metrics here
  - to add more targets, add them under `targets` in `prometheus-config.yml`
  - Dashboard URL: `(ec2-ip):32334`
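A hedged sketch of what adding a scrape target to `prometheus-config.yml` might look like (the job name and address are placeholders; the repo's file defines its own scrape jobs):

```yaml
# Illustrative scrape job; append to the existing scrape_configs list.
scrape_configs:
  - job_name: my-new-exporter               # placeholder job name
    static_configs:
      - targets:
          - "my-exporter.monitoring:9100"   # placeholder host:port
```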
- prometheus-exporter
  - reference for kafka-exporter
    - collects Kafka metrics
    - to add more servers, add args in `kafka-exporter-deploy.yml`
    - Metric URL: `(ec2-ip):30055/metrics`
    - Grafana Dashboard ID: 7589
  - reference for node exporter
    - collects host server metrics
    - Metric URL: `(ec2-ip):30088/metrics`
    - Grafana Dashboard ID: 1860
  - reference for kube-state-metrics exporter
    - collects pod metrics
    - Grafana Dashboard ID: 6417
  - reference for KMinion
    - collects Kafka consumer lag, cluster, and topic metrics
    - Metric URL: `(ec2-ip):30077/metrics`
    - Grafana Dashboard ID for topics: 14013
    - Grafana Dashboard ID for consumer groups: 14014
    - Grafana Dashboard ID for clusters: 14012
  - reference for jmx-exporter
    - collects JMX metrics
    - Metric URL: `(ec2-ip):32000/metrics`
    - Grafana Dashboard ID for JMX: 11131
- grafana-dashboard
  - visualizes the metrics collected in Prometheus
  - reference for kube grafana job
  - Dashboard URL: `(ec2-ip):32335`
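The kafka-exporter listed above takes one `--kafka.server` argument per broker in `kafka-exporter-deploy.yml`; a hedged sketch (the broker DNS names are assumptions based on the usual headless-service naming, not the repo's actual values):

```yaml
# Illustrative container args only; see kafka-exporter-deploy.yml for real ones.
containers:
  - name: kafka-exporter
    args:
      - "--kafka.server=kafka-0.broker.kafka:9092"   # assumed broker DNS name
      - "--kafka.server=kafka-1.broker.kafka:9092"
```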
Succeeded: publishing from a local JMeter to EC2, and consuming from a local machine from EC2.
Failed: subscribing from a local JMeter to EC2.

```shell
kubectl apply -k pub-sub
```

This creates a topic and runs a simple pub-sub test to check whether internal Kafka communication works.
- Kafka
- Kubernetes: latest
- JMeter