Feat: Add Kafka integration for Parseable server #936 (#1047)
base: main
Conversation
CLA Assistant Lite bot: All contributors have signed the CLA ✍️ ✅
I have read the CLA Document and I hereby sign the CLA
use tracing::{debug, info};

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct KafkaConfig {
I wouldn't want to serialize this back into a file, but I would love to extract it directly from the env vars using clap. Can we do that? Refer to S3Config.
Same for all other config types that have to be extracted from env vars.
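A minimal sketch of what the reviewer is suggesting: derive clap's `Parser` on `KafkaConfig` so each field is populated from CLI flags or environment variables instead of being deserialized from a file, the way `S3Config` is handled. The flag names, env var names (`P_KAFKA_*`), and fields below are illustrative assumptions, not Parseable's actual configuration surface.

```rust
// Hypothetical sketch; requires clap with the "derive" and "env" features.
// Field/flag/env names are illustrative, not the PR's actual values.
use clap::Parser;

#[derive(Debug, Clone, Parser)]
pub struct KafkaConfig {
    /// Comma-separated list of Kafka bootstrap servers.
    #[arg(long = "kafka-bootstrap-servers", env = "P_KAFKA_BOOTSTRAP_SERVERS")]
    pub bootstrap_servers: String,

    /// Consumer group id used by the connector.
    #[arg(long = "kafka-group-id", env = "P_KAFKA_GROUP_ID", default_value = "parseable")]
    pub group_id: String,

    /// Topics to consume from.
    #[arg(long = "kafka-topics", env = "P_KAFKA_TOPICS", value_delimiter = ',')]
    pub topics: Vec<String>,
}
```

With this shape, `Serialize`/`Deserialize` can be dropped from the struct entirely, since the config is only ever read from the environment, never written back out.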
Would you want to contribute something else?
I see useful stuff like backpressure handling, which we can use to augment the changes added in #1021.
@nitisht As someone with production experience working with Kafka on both the client and broker sides, I was surprised to see the previous PR merged, as it overlaps with the work I was pursuing. While I may have missed the chance to open a draft PR earlier due to time constraints, I'd still love the opportunity to enhance and refine this feature further. My goal is to contribute to a more robust and efficient solution. I believe there is significant room for improvement, particularly in areas like backpressure, error handling, retrying, parallel processing, consumer rebalancing, and protection from data loss. I'd be happy to collaborate on the existing implementation to make it more solid. Let me know how I can best support these efforts.
This is great to hear, @hippalus. We're more than happy to support you in this. Would you change this PR to add these features instead?
Of course, @nitisht! As I mentioned in the PR description, I will implement the current TODOs and address the PR comments that @de-sh made. Then let's evaluate it.
…le for rdkafka dependencies. Implement retrying for consumer.recv() to handle temporary Kafka unavailability.
Implement KafkaMetricsCollector to collect and expose Kafka client and broker metrics. Refactor ParseableServer.init(..) and connectors::init(..).
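The retry behavior described in the commit message (retrying `consumer.recv()` on temporary Kafka unavailability) can be sketched with a small, std-only helper. This is a hypothetical illustration of the approach, not the PR's code: `retry_with_backoff`, the retry bound, and the delay values are all assumptions, and the real implementation would wrap rdkafka's async `recv()` rather than a synchronous closure.

```rust
use std::{thread, time::Duration};

/// Retry `op` up to `max_retries` additional times with exponential backoff.
/// Hypothetical sketch of the recv() retry approach; names and bounds are
/// illustrative, not the PR's actual values.
fn retry_with_backoff<T, E>(
    max_retries: u32,
    initial_delay: Duration,
    mut op: impl FnMut() -> Result<T, E>,
) -> Result<T, E> {
    let mut delay = initial_delay;
    let mut attempt = 0;
    loop {
        match op() {
            Ok(v) => return Ok(v),
            // Out of retries: surface the last error to the caller.
            Err(e) if attempt >= max_retries => return Err(e),
            Err(_) => {
                attempt += 1;
                thread::sleep(delay);
                // Double the delay each attempt (exponential backoff).
                delay = delay.saturating_mul(2);
            }
        }
    }
}

fn main() {
    // Simulate a consumer whose broker is unavailable for the first two calls.
    let mut calls = 0;
    let result = retry_with_backoff(5, Duration::from_millis(1), || {
        calls += 1;
        if calls < 3 { Err("broker unavailable") } else { Ok("message") }
    });
    assert_eq!(result, Ok("message"));
    assert_eq!(calls, 3);
    println!("succeeded after {calls} attempts");
}
```

A production version would also cap the maximum delay and distinguish retryable errors (broker down, transport failures) from fatal ones (auth errors, unknown topic), which should fail fast instead of looping.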
Fixes #936
Description
This pull request implements the Kafka connector for Parseable, introducing better stream management, modularizing the code, and integrating Prometheus metrics. It also lays the groundwork for dynamic configuration.
How It Works
Partition Management
Rebalance Handling
Stream Processing
Metrics
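The backpressure idea raised in the review discussion, where a slow consumer throttles the producer instead of letting messages pile up unboundedly, can be illustrated with a minimal std-only sketch. This is a hypothetical analogy, not the PR's implementation: a bounded channel whose `send` blocks when the buffer is full, which is the same shape of backpressure a bounded in-process queue between the Kafka poll loop and the stream writer would provide.

```rust
use std::{sync::mpsc, thread};

fn main() {
    // Bounded channel with capacity 2: send() blocks once two messages are
    // in flight, back-pressuring the producer until the consumer catches up.
    let (tx, rx) = mpsc::sync_channel::<u64>(2);

    let producer = thread::spawn(move || {
        for i in 0..10 {
            tx.send(i).unwrap(); // blocks when the buffer is full
        }
        // Dropping tx closes the channel, ending the consumer's iterator.
    });

    // The consumer drains messages in order; no message is lost or dropped.
    let received: Vec<u64> = rx.iter().collect();
    producer.join().unwrap();

    assert_eq!(received, (0..10).collect::<Vec<_>>());
    println!("received {} messages", received.len());
}
```

In the connector itself the same effect is typically achieved by pausing partition fetches or bounding the in-flight batch between the consumer and the processing stage, so slow downstream storage slows consumption rather than exhausting memory.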
TODO
This PR has: