Description
I have a somewhat large log file that I parse and turn into metrics. This process produces 5 different metrics, each with roughly 5,000 data points.
When exported over gRPC, those metrics amount to about 20 MB of data, but the collector is configured to accept payloads of at most 4 MB.
Since this is just one case for one log file, and log files can be arbitrarily large, it is not reasonable to keep asking our collector provider to raise the maximum allowed payload size to fit our demands.
To avoid that, I implemented a workaround that calls meterProvider.ForceFlush() after every N lines, roughly as sketched below.
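A minimal sketch of what I'm doing (instrument name, file name, `extractKey`, and the batch size N are made up for illustration; endpoint configuration is omitted):

```go
package main

import (
	"bufio"
	"context"
	"log"
	"os"

	"go.opentelemetry.io/otel/attribute"
	"go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc"
	api "go.opentelemetry.io/otel/metric"
	sdkmetric "go.opentelemetry.io/otel/sdk/metric"
)

func main() {
	ctx := context.Background()

	// OTLP/gRPC exporter; endpoint and credentials omitted here.
	exp, err := otlpmetricgrpc.New(ctx)
	if err != nil {
		log.Fatal(err)
	}
	mp := sdkmetric.NewMeterProvider(
		sdkmetric.WithReader(sdkmetric.NewPeriodicReader(exp)),
	)
	defer mp.Shutdown(ctx)

	meter := mp.Meter("logparser")
	lines, err := meter.Int64Counter("parsed.lines") // hypothetical instrument
	if err != nil {
		log.Fatal(err)
	}

	f, err := os.Open("big.log") // hypothetical input file
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	const flushEvery = 10_000 // hypothetical N
	scanner := bufio.NewScanner(f)
	for n := 1; scanner.Scan(); n++ {
		// extractKey stands in for the real parsing; each distinct key
		// becomes its own attribute set, i.e. its own data point.
		lines.Add(ctx, 1, api.WithAttributes(
			attribute.String("key", extractKey(scanner.Text()))))

		if n%flushEvery == 0 {
			// Pushes the current data points to the exporter, but does
			// not reset the SDK's aggregation state.
			if err := mp.ForceFlush(ctx); err != nil {
				log.Printf("flush failed: %v", err)
			}
		}
	}
}

// extractKey is a placeholder for the real log parsing.
func extractKey(line string) string { return line }
```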
However, calling ForceFlush does not empty the metrics/data-point buffer, so the number of data points only ever grows, no matter how often ForceFlush is called.
Because the number of data points keeps growing regardless of whether publishing ever succeeds, the application is stuck in an error loop until all memory is consumed by the unsent metrics.
From what little I could find about this issue, this seems to be the intended behavior and I am expected to do something else to prevent it, but I can't figure out how to ensure the buffer is empty after each ForceFlush call, or which functions I was actually supposed to call.
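For reference, the closest knob I could find is the exporter's temporality selector. A minimal sketch, assuming (unverified on my side) that delta temporality is what stops already-exported series from being resent on every collection; whether it also frees the per-series memory, I don't know:

```go
package main

import (
	"context"
	"log"

	"go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc"
	sdkmetric "go.opentelemetry.io/otel/sdk/metric"
	"go.opentelemetry.io/otel/sdk/metric/metricdata"
)

func main() {
	ctx := context.Background()

	// Use delta temporality for every instrument kind, so each export
	// should only carry data recorded since the previous collection.
	exp, err := otlpmetricgrpc.New(ctx,
		otlpmetricgrpc.WithTemporalitySelector(
			func(sdkmetric.InstrumentKind) metricdata.Temporality {
				return metricdata.DeltaTemporality
			},
		),
	)
	if err != nil {
		log.Fatal(err)
	}
	mp := sdkmetric.NewMeterProvider(
		sdkmetric.WithReader(sdkmetric.NewPeriodicReader(exp)),
	)
	defer mp.Shutdown(ctx)
	_ = mp // meter creation and recording as in the sketch above
}
```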
Please feel free to correct any misconceptions on my side (if possible, with a link to the documentation).
Environment
OS: Linux (Alpine)
Architecture: x86_64
Go Version: 1.23
opentelemetry-go version: v1.34
Steps To Reproduce
Have a large log file with high enough cardinality that parsing it results in roughly 5,000 data points per metric
ForceFlush the metrics halfway through the file and note the number of data points exported
Do the same again at the end of the file
Roughly the same number of data points should be exported in both cases; however, the second call exports about twice as many data points as the first (see the repro sketch after these steps)
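For anyone trying this without a collector, here is a minimal sketch of the same doubling using a ManualReader instead of the gRPC exporter (instrument name and cardinality are made up; the counts in the comments are what I would expect under the default cumulative temporality):

```go
package main

import (
	"context"
	"fmt"

	"go.opentelemetry.io/otel/attribute"
	api "go.opentelemetry.io/otel/metric"
	sdkmetric "go.opentelemetry.io/otel/sdk/metric"
	"go.opentelemetry.io/otel/sdk/metric/metricdata"
)

func main() {
	ctx := context.Background()
	reader := sdkmetric.NewManualReader()
	mp := sdkmetric.NewMeterProvider(sdkmetric.WithReader(reader))
	meter := mp.Meter("repro")
	ctr, _ := meter.Int64Counter("parsed.lines") // hypothetical instrument

	record := func(from, to int) {
		for i := from; i < to; i++ {
			// Each distinct attribute set becomes its own data point.
			ctr.Add(ctx, 1, api.WithAttributes(attribute.Int("id", i)))
		}
	}
	points := func() int {
		var rm metricdata.ResourceMetrics
		_ = reader.Collect(ctx, &rm)
		sum := rm.ScopeMetrics[0].Metrics[0].Data.(metricdata.Sum[int64])
		return len(sum.DataPoints)
	}

	record(0, 5000) // "first half of the file"
	fmt.Println("first collect:", points()) // expected: 5000

	record(5000, 10000) // "second half", all new attribute sets
	// With the default cumulative temporality the first 5000 series are
	// retained and re-exported alongside the new ones.
	fmt.Println("second collect:", points()) // observed: 10000
}
```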
Expected behavior
Calling ForceFlush halfway through the logs should export about the same number of data points as calling it at the end