Is `Reader.consume()` supposed to throw on underflow? #311
Comments
Hey @bittrance, please read #287 and the associated issue to see if that explains why I added the throw. Then we can discuss how we want to either add support for your specific use case or just fix the existing implementation.
To give a little more detail, I'm trying to use xk6-kafka to evaluate Strimzi-operated Kafka. The killer feature is that k6 with xk6-kafka can give me both a reasonably performant producer/consumer pair and Prometheus metrics (via xk6-prometheus). I'm using chaostoolkit to construct scenarios that kill brokers and inject network partitions while monitoring the Prometheus metrics generated by k6, to ensure that the producer and consumer can continue to operate. In such scenarios, interaction with Kafka can legitimately pause for tens of seconds but then resume. I don't want to set …
I had a look and I think the deadline case should be treated separately. TBH, I find the reasoning in #287 a little strange. The deadline is technically an error condition, in that the underlying read returns an error, but from the caller's perspective, hitting the deadline while waiting for messages is expected behavior rather than a failure.

As an aside, one indication that the current approach is problematic is that we will report stats saying we received messages that the caller of `consume()` never actually saw.

I created a PR here bittrance#1 to show what it would look like to change the current behavior. (Because it depends on #312, I cannot create it against upstream without including those commits; once we have decided what to do with #312, I can create a proper PR against this repo.) I understand if you do not want to change the current default behavior; we could introduce a flag to opt into the new behavior.
@bittrance Makes sense. Using a flag to keep backward compatibility will avoid breaking changes in other people's scripts.
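For concreteness, here is a minimal sketch, in Go against segmentio/kafka-go (which xk6-kafka builds on), of what such a flag could look like. The `ConsumeConfig` struct and the `ReturnOnDeadline` field are hypothetical names for illustration, not the actual xk6-kafka API:

```go
package kafka

import (
	"context"
	"errors"
	"time"

	kafkago "github.com/segmentio/kafka-go"
)

// ConsumeConfig is a hypothetical configuration for this sketch; the real
// xk6-kafka API may differ.
type ConsumeConfig struct {
	Limit            int
	Timeout          time.Duration
	ReturnOnDeadline bool // hypothetical opt-in flag for the new behavior
}

// consume reads up to cfg.Limit messages. With ReturnOnDeadline set, hitting
// the context deadline returns the messages collected so far instead of an
// error; otherwise the current throw-on-error behavior is preserved.
func consume(reader *kafkago.Reader, cfg ConsumeConfig) ([]kafkago.Message, error) {
	ctx, cancel := context.WithTimeout(context.Background(), cfg.Timeout)
	defer cancel()

	messages := make([]kafkago.Message, 0, cfg.Limit)
	for len(messages) < cfg.Limit {
		msg, err := reader.ReadMessage(ctx)
		if err != nil {
			if errors.Is(err, context.DeadlineExceeded) && cfg.ReturnOnDeadline {
				// Underflow is not an error in this mode: hand back whatever
				// arrived before the deadline so the script can inspect it.
				return messages, nil
			}
			return nil, err // default: propagate the error as before
		}
		messages = append(messages, msg)
	}
	return messages, nil
}
```

With the flag unset, existing scripts see the current behavior unchanged; setting it turns an underflow into a normal, inspectable result.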
Reading xk6-kafka/reader.go (line 347 at 9ccf3fe): if `consume({limit: 10})` reads fewer than 10 messages, it will throw an error, meaning we cannot inspect the messages that were actually received. There are scenarios where I don't know the exact number of messages that will arrive. This is particularly true when verifying/debugging messages over several partitions, since the client tends to return batches of messages from one partition at a time; to be sure to receive a representative sample, I need to read a largish number of messages.

However, the parameter is named `limit`, which makes it sound like reading fewer messages than the limit would be normal, so perhaps the intention is that the context deadline should not be treated as an error case; that the current behavior is in fact a bug?

If the current behavior of erroring on underflow is intended behavior, I would like to contribute PR(s) that:

- rename `limit` to e.g. `at_least` or some similar name

If it is a bug, I'll PR a fix. How does that sound?
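To make the described behavior concrete, here is a minimal Go sketch (illustrative only, not the actual reader.go implementation) of a read loop of this shape, where the context deadline firing before `limit` messages arrive surfaces as an error and the messages already collected are discarded:

```go
package kafka

import (
	"context"
	"fmt"
	"time"

	kafkago "github.com/segmentio/kafka-go"
)

// consumeUpTo reads until `limit` messages have arrived. Any error, including
// the context deadline firing first, aborts the call, so the messages read so
// far are never returned to the caller.
func consumeUpTo(reader *kafkago.Reader, limit int, timeout time.Duration) ([]kafkago.Message, error) {
	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()

	messages := make([]kafkago.Message, 0, limit)
	for len(messages) < limit {
		msg, err := reader.ReadMessage(ctx)
		if err != nil {
			// Underflow: the deadline hit before `limit` was reached; the
			// caller gets an error and never sees the collected messages.
			return nil, fmt.Errorf("consume failed after %d message(s): %w", len(messages), err)
		}
		messages = append(messages, msg)
	}
	return messages, nil
}
```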