ReactorKafkaBinder - not respecting back pressure (identical vanilla ReactorKafka consumer does) #2957
Comments
KafkaBinder is not a reactive source or target, so it simply will not honor some of the reactive features (back pressure is one of them). That is by definition. It is an imperative binder adapted to work with reactive functions - nothing more.
Tagging @sobychacko
@olegz This is using the Reactive Kafka Binder (i.e. the one in spring-cloud-stream-binder-kafka-reactive), not the old imperative one. Are you saying this doesn't honour back pressure either? To be clear - my issue is that the ReactorKafkaBinder in Spring Cloud Stream does not exhibit the same back pressure behaviour as when using a reactive KafkaReceiver directly (even though the Reactive Kafka Binder uses it internally).
No, I just wanted to clarify. With the reactive binder we need to take a look, hence tagging @sobychacko.
Ok - thanks. I'll have a stab at debugging it now.
If I update
Then I get the behaviour I want (if that's in any way useful?).
OK. I might see the value of the
according to its Javadocs. |
Hi @artembilan, thanks for your response. Whilst I see your point about
Open to suggestions...
Closed in favor of spring-projects/spring-integration#9215. Thank you for your contribution!
Keep in mind to override the Spring Integration version in the application, since the one Spring Cloud Stream brings in is what is currently available via Boot (which is not the snapshot version right now) - e.g. via the spring-integration.version property that Boot's dependency management exposes.
Expected behaviour
Parity in how back pressure is handled when using the SCS ReactorKafkaBinder and ReactorKafka directly.
If this isn't expected, guidance on how to achieve parity would be appreciated.
Actual behaviour
Consider two consumer implementations performing identical tasks: a simulation of some work followed by a WebClient call.
Using Reactor Kafka Directly
max.poll.records=1 for both, to keep things simple.
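The repro's code isn't reproduced here, so below is a minimal sketch of the kind of pipeline described: KafkaReceiver.receive() with concatMap providing the one-at-a-time demand. The group id, WebClient base URL and /work endpoint are placeholders, not taken from the repro.

```java
import java.time.Duration;
import java.util.List;
import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.springframework.web.reactive.function.client.WebClient;

import reactor.core.publisher.Mono;
import reactor.kafka.receiver.KafkaReceiver;
import reactor.kafka.receiver.ReceiverOptions;
import reactor.kafka.receiver.ReceiverRecord;

public class ReactorKafkaDirectConsumer {

    public static void main(String[] args) {
        Map<String, Object> props = Map.of(
                ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092",
                ConsumerConfig.GROUP_ID_CONFIG, "demo", // placeholder group id
                ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 1,
                ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class,
                ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);

        ReceiverOptions<String, String> options = ReceiverOptions
                .<String, String>create(props)
                .subscription(List.of("test"));

        WebClient webClient = WebClient.create("http://localhost:8080"); // placeholder endpoint

        KafkaReceiver.create(options)
                .receive()
                // concatMap requests one element at a time; the receiver honours
                // that demand and pauses the consumer as necessary
                .concatMap(record -> handle(record, webClient)
                        .doOnSuccess(v -> record.receiverOffset().acknowledge()))
                .blockLast();
    }

    private static Mono<Void> handle(ReceiverRecord<String, String> record, WebClient client) {
        return Mono.delay(Duration.ofMillis(500)) // simulate some work
                .then(client.post().uri("/work").bodyValue(record.value())
                        .retrieve().toBodilessEntity())
                .then();
    }
}
```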
Using SCS Reactor Kafka Binder
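Again, a hedged sketch of the equivalent Spring Cloud Stream reactive function rather than the repro's exact code; the function name consume, the binding keys in the comment, and the endpoint are assumptions.

```java
import java.time.Duration;
import java.util.function.Function;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;
import org.springframework.web.reactive.function.client.WebClient;

import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

// Assumed application.yml wiring (not from the original issue):
//   spring.cloud.function.definition: consume
//   spring.cloud.stream.bindings.consume-in-0.destination: test
//   spring.cloud.stream.kafka.binder.consumer-properties.max.poll.records: 1
@SpringBootApplication
public class ScsReactiveConsumerApp {

    public static void main(String[] args) {
        SpringApplication.run(ScsReactiveConsumerApp.class, args);
    }

    @Bean
    public Function<Flux<String>, Mono<Void>> consume() {
        WebClient webClient = WebClient.create("http://localhost:8080"); // placeholder endpoint
        // Identical pipeline to the direct Reactor Kafka version: concatMap
        // demands one element at a time, yet with the binder many records
        // are emitted into the flux upfront.
        return flux -> flux
                .concatMap(value -> Mono.delay(Duration.ofMillis(500)) // simulate some work
                        .then(webClient.post().uri("/work").bodyValue(value)
                                .retrieve().toBodilessEntity()))
                .then();
    }
}
```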
In Reactor Kafka (example 1) we see behaviour in line with the back pressure requirements: one record is emitted at a time, and the consumer pauses as necessary.
In Spring Cloud Stream, using the ReactorKafkaBinder, this isn't the case.
Here, hundreds of records are emitted to the sink.
This causes problems if a rebalance occurs during a period of heavy load, as the pipeline can contain hundreds of pending records.
We'd need to set an intolerably high maxDelayRebalance to get through them all, or handle lots of duplicates.
Logs resembling the below are visible during a rebalance.
Presumably something in the binder/channel implementation is causing this?
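For reference, the maxDelayRebalance mentioned above is a ReceiverOptions setting in recent Reactor Kafka versions. A sketch of where it would be tuned, with an illustrative value only:

```java
import java.time.Duration;
import java.util.List;
import java.util.Map;

import reactor.kafka.receiver.ReceiverOptions;

class RebalanceDelayExample {

    static ReceiverOptions<String, String> options(Map<String, Object> consumerProps) {
        // maxDelayRebalance bounds how long Reactor Kafka delays a rebalance so
        // that in-flight records can finish; with hundreds of pending records
        // it would need to be set intolerably high.
        return ReceiverOptions.<String, String>create(consumerProps)
                .maxDelayRebalance(Duration.ofSeconds(120)) // illustrative value only
                .subscription(List.of("test"));
    }
}
```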
Repro
See here for a complete repro.
Requires Kafka running on localhost:9092 and a topic called "test".
A Producer class in test will send 100 messages.
demo.reactor.DemoReactorApp - Pure Reactor Kafka example
demo.streams.DemoStreamsApplication - Spring Cloud Streams example
DEBUG logging has been enabled for the ConsumerEventLoop for emit visibility.
Environment details
Java 21
Boot 3.2.2
SCS: 4.1.0
Reactor Kafka: 1.3.22
Loosely related issue here.