Kafkometry is a lightweight Apache Kafka metric visualizer created using Svelte/SvelteKit.
- Live monitoring of key Apache Kafka metrics
- Metric component customization via Grafana dashboards
- Authentication via Google OAuth
- A lightweight, user-friendly UI built with Svelte/SvelteKit
Kafkometry currently tracks the following cluster metrics:
- Active Connections
- Partition Count
- Successful Authentications
- Bytes Sent
- Records Received
- Bytes Received
Currently, the flow of data in our application is mapped by the diagram above. It begins with a Kafka cluster hosted on Confluent Cloud, where a Datagen connector produces mock messages and events to the cloud-hosted cluster. Confluent Cloud provides its own Confluent Cloud Metrics API, which exposes cluster metrics at an HTTP endpoint. Prometheus runs with a prometheus.yml file configured to scrape that endpoint at a set interval. We then add our local Prometheus instance as a data source in Grafana, which makes the data Prometheus scraped from the cloud cluster available for visualization. Finally, we customize and configure Grafana dashboards and embed them into our frontend application via iframes.
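For reference, a minimal prometheus.yml along these lines points Prometheus at the Metrics API export endpoint. The cluster ID (`lkc-xxxxx`), scrape interval, and key/secret values below are placeholders to fill in with your own:

```yaml
# prometheus.yml (sketch): scrape the Confluent Cloud Metrics API export endpoint.
# lkc-xxxxx and the key/secret values are placeholders for your own cluster and credentials.
scrape_configs:
  - job_name: confluent-cloud
    scrape_interval: 1m        # keep this modest; the Metrics API is rate-limited
    scrape_timeout: 1m
    scheme: https
    metrics_path: /v2/metrics/cloud/export
    basic_auth:
      username: <CLOUD_API_KEY>
      password: <CLOUD_API_SECRET>
    params:
      resource.kafka.id:
        - lkc-xxxxx
    static_configs:
      - targets:
          - api.telemetry.confluent.cloud
```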
As of launch, our product demo is set up with local instances of Prometheus and Grafana, configured via YAML files to connect to our Confluent Cloud cluster using a Confluent Cloud API key and secret. To run this demo on their own machines, users currently need to:
- Host their clusters on Confluent Cloud
- Configure a Metrics Viewer Role
- Generate their own Cloud API Key and Secret
- Install and run their own local, configured Prometheus instance
- Create a Grafana Cloud account and select Prometheus as a data source (for a local Grafana instance, see the provisioning sketch after this list)
- Fork and clone this repo
- Customize and embed their own Grafana dashboards
- Run `npm install` and `npm run dev`
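For a local Grafana instance like the one in our launch setup, the Prometheus data source can alternatively be provisioned from a YAML file instead of being clicked through in the UI. A minimal sketch, assuming Prometheus is running on its default port:

```yaml
# grafana/provisioning/datasources/prometheus.yaml (sketch)
# Assumes a local Prometheus instance on its default port 9090.
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://localhost:9090
    isDefault: true
```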
In its current state, there are a lot of steps the user must complete to get Kafkometry up and running. Going forward, the Kafkometry team hopes to abstract many of these steps away to create a more seamless and intuitive user experience. We've considered providing the configuration YAML files needed to scrape from OUR Confluent Cloud cluster, so that users running their own instances of Prometheus and Grafana would not have to create any accounts; but that still requires users to install Prometheus on their own machines, and it would be insecure to publish a prometheus.yml and Grafana configs containing our Cloud API key and secret along with our Grafana credentials. Additionally, the Confluent Cloud Metrics API rate-limits how often its endpoints can be scraped, which prevents the application from receiving real-time data. There's gotta be a better way!
For our next big patch, we have been working on containerizing our application with Docker! For demo purposes, we plan to spin up a containerized Kafka cluster instead of hosting our cluster on Confluent Cloud, sidestepping the request rate limits imposed by the Confluent Cloud Metrics API. We also plan to switch from Grafana to Chart.js to design and render our own metric visualizations for a superior user experience. With this containerized solution, users can run the application from images using a docker-compose.yaml (that we will provide) with a single `docker-compose up` command, instead of downloading, configuring, and running their own Prometheus instance, and without needing to create their own Grafana account and dashboards.
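As a rough sketch of what that compose file could look like (service names, images, ports, and environment values below are illustrative placeholders, not the final configuration we will ship):

```yaml
# docker-compose.yaml (sketch) -- all images, ports, and settings are
# illustrative placeholders until the containerized patch ships.
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:7.4.0
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
  kafka:
    image: confluentinc/cp-kafka:7.4.0
    depends_on: [zookeeper]
    ports: ["9092:9092"]
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1   # single-broker demo cluster
  prometheus:
    image: prom/prometheus
    volumes: ["./prometheus.yml:/etc/prometheus/prometheus.yml"]
    ports: ["9090:9090"]
  kafkometry:
    build: .                    # the SvelteKit frontend
    ports: ["3000:3000"]
    depends_on: [prometheus]
```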
| Name | GitHub |
|---|---|
| Benjamin Dunn | |
| Mitch Gruen | |
| Alwin Zhao | |
| Vincent Do | |