# Respect batch size limits when flushing queue #417
## Problem
I noticed that observations are being dropped by the client. Looking into it further, I realized this happens when we exceed the batch size limit. The `LangfuseCore#processQueueItems` method separates queued items into those that should be processed and those that should be skipped. The problem is that the `flush` method doesn't respect that split even though it invokes `processQueueItems`, so it can send an oversized batch to the server. I'm fairly sure the server enforces batch size limits, which is why we're seeing dropped observations.
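For context, the split works roughly like this. This is a minimal sketch, not the actual langfuse-core implementation; the `QueueItem` shape and the `maxBatchItems`/`maxBatchBytes` parameters are assumptions for illustration:

```ts
interface QueueItem {
  body: string; // serialized event payload (shape assumed for this sketch)
}

// Take items from the front of the queue until adding the next one would
// exceed either the item-count or the byte-size limit.
function processQueueItems(
  queue: QueueItem[],
  maxBatchItems: number,
  maxBatchBytes: number
): { processedItems: QueueItem[]; remainingItems: QueueItem[] } {
  const processedItems: QueueItem[] = [];
  let batchBytes = 0;

  let i = 0;
  for (; i < queue.length; i++) {
    const itemBytes = queue[i].body.length;
    if (
      processedItems.length >= maxBatchItems ||
      batchBytes + itemBytes > maxBatchBytes
    ) {
      break; // everything from here on is skipped for this flush
    }
    processedItems.push(queue[i]);
    batchBytes += itemBytes;
  }

  return { processedItems, remainingItems: queue.slice(i) };
}
```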
## Changes

I updated the `flush` method to consume the results of `processQueueItems`: it now sends only the `processedItems` and re-queues the `remainingItems` for the next flush. There is a possibility that the queue could grow without bound; if that's an issue, my suggestion would be to implement a fixed-size queue instead.
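The shape of the fix, building on the sketch above (again illustrative; the constants, the class name, and `sendBatch` are placeholders, not the real langfuse-core API):

```ts
// Illustrative limits only; the real values come from langfuse-core.
const MAX_BATCH_ITEMS = 100;
const MAX_BATCH_BYTES = 2_500_000;

class LangfuseCoreSketch {
  private queue: QueueItem[] = [];

  async flush(): Promise<void> {
    // Split the queue instead of sending it wholesale.
    const { processedItems, remainingItems } = processQueueItems(
      this.queue,
      MAX_BATCH_ITEMS,
      MAX_BATCH_BYTES
    );

    // Re-queue whatever didn't fit so the next flush picks it up.
    this.queue = remainingItems;

    // Send only the items that fit within the batch limits.
    await this.sendBatch(processedItems);
  }

  private async sendBatch(items: QueueItem[]): Promise<void> {
    // Network call elided in this sketch.
  }
}
```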
## Release info

**Sub-libraries affected**

- langfuse-core

**Bump level**

**Libraries affected**

**Changelog notes**