Addon: bootstrap.kafka service with current brokers as endpoint #52
Conversation
I.e., clients can connect to any Kafka broker through this service and get back the list of actual broker DNS names to talk to. This solves the problem of maintaining the BOOTSTRAP string.
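For reference, a minimal sketch of what such a service could look like; the name, namespace, and `app: kafka` selector here are assumptions for illustration, not necessarily what this PR adds:

```yaml
# Hypothetical bootstrap service: round-robins initial client connections
# across all broker pods matching the selector. Names/labels are assumptions.
apiVersion: v1
kind: Service
metadata:
  name: bootstrap
  namespace: kafka
spec:
  ports:
  - name: broker
    port: 9092
  selector:
    app: kafka
```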
Force-pushed from 583fcaf to fabb292.
The more I mess with bootstrap.servers strings the better I like this idea. For example, in production you'd want a couple of brokers, but that causes error messages in scaled-down clusters like #44. Let's merge it, as existing bootstrap settings will work as before.
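To illustrate the difference (assuming the service ends up reachable as `bootstrap` in the `kafka` namespace), clients would get one stable entry instead of an enumerated broker list:

```properties
# Hypothetical client config: a single stable DNS name via the service...
bootstrap.servers=bootstrap.kafka:9092

# ...instead of a per-broker list that must be kept in sync with cluster size:
# bootstrap.servers=kafka-0.broker.kafka.svc.cluster.local:9092,kafka-1.broker.kafka.svc.cluster.local:9092
```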
Actually, I will have to revert the test change in this PR and merge only the service. I've tested it during #84, but I need to merge this to master without conflicts with the ongoing test refactoring.
[…] and trust alarms on Under-replicated Partitions to let us know when something is really wrong. Do clients actually care about readiness? The bootstrap service (#52) definitely will, which is good. The `broker` service that the StatefulSet manifest depends on for naming (https://github.com/Yolean/kubernetes-kafka/blob/v2.1.0/50kafka.yml#L7) does not set `publishNotReadyAddresses`. Clients will bootstrap, get the individual DNS names of the brokers, resolve those addresses, and connect directly to the pods.
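For context, a sketch of what setting that field on the headless `broker` service could look like; the names mirror the repo's conventions but are assumptions here:

```yaml
# Sketch: a headless service that also publishes addresses of not-ready pods,
# so per-broker DNS names resolve even before readiness probes pass.
# Assumed `broker`/`kafka` naming; not a verbatim copy of the repo's manifest.
apiVersion: v1
kind: Service
metadata:
  name: broker
  namespace: kafka
spec:
  clusterIP: None                # headless: gives pods stable DNS names
  publishNotReadyAddresses: true
  ports:
  - port: 9092
  selector:
    app: kafka
```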
This reverts commit fabb292.
The test should still pass if any single broker is down.
This looks useful, but I'm hesitant to merge it to master due to #21, which we dealt with by removing the service in #30.
I can't figure out how to test, let alone enforce, that this service is only used for the initial connection and not for actual consumption or production. Clients should discover brokers through this service, but not suffer from its round-robin nature. A danger is that when developing with 1 replica (as in #44, for example) there is no round-robin and thus no such issues show up.
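One way to at least observe (if not enforce) the post-bootstrap behavior is to ask the service for cluster metadata and check which broker names come back; a sketch using kafkacat, assuming the service is reachable as `bootstrap.kafka:9092` from inside the cluster:

```sh
# Query cluster metadata through the bootstrap service. The broker list in
# the output shows the individual addresses clients will connect to directly.
kafkacat -b bootstrap.kafka:9092 -L
```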