We are using statsd-exporter in our integrations. Is there any way to send histogram data to Grafana from Python code?
We are using the statsd library in Python.
We use statsd-exporter with VictoriaMetrics as the data store for our metrics.
I added the mapping section, as mentioned here, to our integrations.yaml file.
In the Python code, I extended the statsd library to send histograms. The "h" type here sends only metric_sum and metric_count, similar to timers. I was expecting buckets to be emitted for the metric name as defined in the mapping. Can someone help me out here?
My end goal is to use histogram_quantile() in Grafana, which I think expects precomputed buckets.
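For context, this is roughly how I extended the client. A minimal sketch, assuming the plain UDP statsd wire format; the host, port, and metric names are placeholders for our actual setup:

```python
import socket

class HistogramClient:
    """Sends statsd 'h' (histogram) events over UDP.

    Host/port default to a local statsd-exporter; adjust as needed.
    """

    def __init__(self, host="127.0.0.1", port=9125, prefix=None):
        self.addr = (host, port)
        self.prefix = prefix
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    def histogram(self, stat, value, rate=1):
        # statsd line format: <name>:<value>|h[|@<sample rate>]
        if self.prefix:
            stat = f"{self.prefix}.{stat}"
        payload = f"{stat}:{value}|h"
        if rate < 1:
            payload += f"|@{rate}"
        self.sock.sendto(payload.encode("ascii"), self.addr)
```

Called as, e.g., `HistogramClient(prefix="app").histogram("request_size", 512)`, which puts `app.request_size:512|h` on the wire.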
Can you share your configuration? In particular, what is your observer_type and your histogram_options? What is the result on /metrics after you send a few events? You can also run the exporter with the debug log level to observe what exactly happens on the wire.
In the exporter, you configure the buckets (and/or native histogram scaling factor) and these events are counted into the appropriate bucket. It seems (but I'm not 100% sure from your description) that you are receiving the events, but do not have buckets configured.
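For reference, a mapping along these lines should produce `_bucket` series in addition to `_sum` and `_count` (the metric name and bucket boundaries here are illustrative, not taken from your config):

```yaml
mappings:
  - match: "app.request_size"
    name: "app_request_size_bytes"
    observer_type: histogram
    histogram_options:
      buckets: [64, 256, 1024, 4096, 16384]
```

With buckets exposed on /metrics, a Grafana query such as `histogram_quantile(0.95, sum by (le) (rate(app_request_size_bytes_bucket[5m])))` becomes possible.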