When deploying Kafka clusters with fewer than three nodes, the default value of 3 for offsets.topic.replication.factor prevents users from reading any values from topics.
Writing works fine if topics are created with few enough partitions, or are auto-created, since auto-creation takes the available broker count into account for the replication factor.
But on the first read, Kafka internally tries to create the __consumer_offsets topic, which by default requires a replication factor of 3. Until this topic has been created, no read requests are served.
The broker simply keeps logging:
kafka [2023-05-05 15:20:39,078] INFO [Admin Manager on Broker 1001]: Error processing create topic request CreatableTopic(name='__consumer_offsets', numPartitions=50, replicationFactor=3, assignments=[], configs=[CreateableTopicConfig(name='compression.type', value='producer'), CreateableTopicConfig(name='cleanup.policy', value='compact'), CreateableTopicConfig(name='segment.bytes'
kafka org.apache.kafka.common.errors.InvalidReplicationFactorException: Replication factor: 3 larger than available brokers: 1.
Possible solutions are:
- documentation - this is not strictly speaking a bug, but it is also not nice
- catch it during validation in the operator and log an error
- automatically set this in the config when deploying fewer than 3 brokers
...
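For a single-broker deployment, a minimal workaround sketch is to align the replication-related broker defaults with the actual broker count in server.properties (the transaction-log settings are included here as an assumption, since they hit the same problem on small clusters):

```properties
# Single-broker cluster: the internal __consumer_offsets topic cannot be
# replicated 3x, so lower the replication-related defaults to 1.
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
```

How these properties are injected depends on the deployment mechanism (operator, container environment variables, or a plain config file).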
Technical question: let's say a customer starts with 1 node, and we set the setting to 1 as well. The customer later scales up to 5 nodes, and we set it to 3. What will happen to existing topics? They will keep the original setting of 1, correct?
This should change the replication factor to three at that point.
I'd have to test it to be 100% sure, but I am fairly confident that this setting is actively monitored by Kafka, since it only applies to one specific topic.
Just to make sure I don't create confusion: this setting only applies to the __consumer_offsets topic. It is not the default replication factor setting that applies to all new topics created without an explicitly specified replication factor.
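To make that distinction concrete, these are the two separate broker settings being contrasted, shown with their stock Kafka defaults:

```properties
# Applies ONLY to the internal __consumer_offsets topic (Kafka default: 3)
offsets.topic.replication.factor=3

# Applies to auto-created topics and topics created without an explicit
# replication factor (Kafka default: 1)
default.replication.factor=1
```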