Help with flushing behavior #5093
Replies: 3 comments · 1 reply
-
Hey @maxfriedrich! I'm a bot here to help you with bugs, questions, and becoming a contributor. Let's dive into this Langfuse flushing issue together! The behavior you're experiencing comes down to the handler's `flush()` method. Here's how you can use it in Python: `cb.flush()`. This method will attempt to send the batch immediately; in case of network issues, it will log an error and retry the batch without throwing an exception [1]. To continue talking to Dosu, mention @dosu.
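As a concrete illustration of the pattern described above (a minimal sketch, assuming the Langfuse v2 Python SDK's LangChain `CallbackHandler` with credentials read from the environment; not the original snippet from this thread):

```python
# Minimal sketch, assuming the Langfuse v2 Python SDK's LangChain integration.
from langfuse.callback import CallbackHandler

# The handler queues events; a background worker thread sends them in batches.
cb = CallbackHandler()  # expects LANGFUSE_PUBLIC_KEY / LANGFUSE_SECRET_KEY / LANGFUSE_HOST in the environment

# ... run a chain or LLM call with callbacks=[cb] here ...

# Ask the handler to send whatever is currently queued.
cb.flush()
```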
-
Hi @maxfriedrich - yes, this is the intended behavior. Our flush logic is a classic size-limit-or-timeout logic, i.e. we flush either when the timeout (flush_interval) elapses or when the batch reaches the configured size (flush_at). Could you explain what exactly you are trying to achieve? Maybe I can help with a solution.
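For context, here is a generic sketch of that size-limit-or-timeout pattern. The parameter names mirror the handler's flush_at / flush_interval settings, but this is illustrative only and not Langfuse's actual worker code; `send()` is a stand-in for the real upload:

```python
import queue
import time

def send(batch):
    # Stand-in for the actual HTTP upload of a batch of events.
    print(f"flushing {len(batch)} events")

def worker(q: queue.Queue, flush_at: int = 15, flush_interval: float = 0.5) -> None:
    """Flush whenever the batch is full OR the flush interval has elapsed."""
    batch = []
    deadline = time.monotonic() + flush_interval
    while True:
        timeout = max(0.0, deadline - time.monotonic())
        try:
            batch.append(q.get(timeout=timeout))
        except queue.Empty:
            pass
        if len(batch) >= flush_at or time.monotonic() >= deadline:
            if batch:
                send(batch)
                batch = []
            deadline = time.monotonic() + flush_interval
```

A queue like this drains on its own schedule, which is why a long flush_interval delays sending even after the producer is done.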
-
Thanks! We're running on AWS Lambda, and the state we want to trace is rather large; it often needs to be truncated. We set traces=5 to help with function timeouts, but it's still not ideal. For example, this is how it looks in Datadog with the default flush_at / flush_interval:

[Datadog trace screenshot]

The greenish boxes on top are the httpx requests made by Langfuse. The red empty boxes also only occur when the Langfuse callback is configured; at this stage the state is the largest (this is retrieval before reranking, i.e. before cutting off a lot of results). I think it has to do with the large state being truncated and put in a queue. I'm not 100% sure whether that happens in Langfuse or in our code, but because of the Python GIL, other threads can't proceed. To me this looks like too much tracing, and it's also making the application code a little bit slower because of the GIL, right?

I tried to move it out of the main execution stage by increasing flush_at / flush_interval so the user doesn't have to wait, but then this happens:

[Datadog trace screenshot]

which is annoying in a serverless environment; a "force flush" would really help there. If you're interested, I can share an example of our big state at this stage privately. I think @matthiaslau also reached out about this; he worked on the same project. Thank you!
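A common workaround in this situation is to flush explicitly at the end of the Lambda handler. A sketch, under the assumption that the Langfuse v2 CallbackHandler accepts the same flush_at / flush_interval keyword arguments as the Langfuse client; `run_chain` is a hypothetical application function:

```python
# Sketch: drain the event queue before the Lambda invocation ends
# (assumes the Langfuse v2 Python SDK; run_chain is hypothetical).
from langfuse.callback import CallbackHandler

cb = CallbackHandler(flush_at=15, flush_interval=0.5)  # assuming these kwargs are accepted as on the Langfuse client

def handler(event, context):
    try:
        return run_chain(event, callbacks=[cb])  # hypothetical application code
    finally:
        # Block until queued events are sent before Lambda freezes the sandbox.
        cb.flush()
```

Blocking on the flush inside the handler keeps the upload within the billed invocation instead of leaving events queued when the execution environment is frozen.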
-
Hi, I'm experimenting with Langfuse flushing settings.
In this example, I would have expected `cb.flush()` to send a "force flush" signal to the worker thread so it finishes early. Instead, it waits one full flush interval before flushing and then another before the Python process exits. Is this the intended behavior? Is there a way to "force flush" everything that's left in the queue immediately?
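A minimal sketch of the kind of example being described, with illustrative settings and assuming the Langfuse v2 LangChain CallbackHandler:

```python
# Illustrative repro sketch (settings are arbitrary, not the original example).
import time
from langfuse.callback import CallbackHandler

cb = CallbackHandler(flush_at=100, flush_interval=10)  # large batch, long interval

# ... run a chain with callbacks=[cb], producing a handful of events ...

start = time.time()
cb.flush()  # expectation: queued events are sent immediately, not after flush_interval
print(f"flush() returned after {time.time() - start:.1f}s")
```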