Replies: 2 comments 2 replies
-
Have a look here: https://github.com/open-telemetry/opentelemetry-collector/tree/main/config/confighttp
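A minimal sketch of how those client-side connection settings can be applied, assuming the otlphttp exporter (key names as documented in the confighttp README; the endpoint and values are placeholders):

```yaml
exporters:
  otlphttp:
    endpoint: https://backend.example.com:4318   # placeholder backend
    timeout: 30s                   # per-request timeout
    # Connection-pool tuning, passed through to Go's http.Transport:
    max_idle_conns: 100            # total idle connections kept in the pool
    max_idle_conns_per_host: 10    # idle connections kept per backend host
    max_conns_per_host: 0          # 0 = no per-host limit (Go's http.Transport zero value)
    idle_conn_timeout: 90s         # how long idle connections are kept open
```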
-
You mean increasing MaxConnsPerHost [int]? That is a client parameter, isn't it, so it applies to exporters only?
And what is the default? I have not found any information on it.
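For context, this is where I assume it would live, i.e. only under an exporter's HTTP client settings rather than under a receiver; and judging from Go's http.Transport docs, the zero value means no per-host limit at all:

```yaml
exporters:
  otlphttp:
    endpoint: https://backend.example.com:4318   # placeholder
    max_conns_per_host: 50   # unset/0 = unlimited connections per host (Go http.Transport zero value)
```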
-
Could you please direct me to the documentation that explains how to configure and tune the number of connections, worker threads, and queue size (if applicable) for an OpenTelemetry Collector, in particular also for HTTP receivers?
In our load testing environment, which is built on Azure Container Apps, we've identified an optimal point at 50 requests per second per node. This configuration utilizes neither significant CPU nor RAM resources. However, when we attempt to handle a higher volume of requests beyond this threshold, we observe an increase in error rates.
It appears that the OpenTelemetry collector's connection handling might be limited by default settings. We aim to make better use of the available CPU and RAM resources, potentially by choosing larger virtual machines, so we can efficiently manage our expected workloads without the need to scale to hundreds of nodes.
Furthermore, when a larger number of clients opens more connections while keeping the same overall throughput (which we currently simulate with a small number of nodes), the pressure to handle more requests per collector pod only increases.
Could you provide guidance or point us towards the relevant documentation for adjusting and fine-tuning these settings for optimal performance in our OpenTelemetry Collector setup, for both HTTP and gRPC receivers?
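For reference, this is roughly what I have pieced together so far; the key names come from the exporter sending_queue, batch processor, and configgrpc documentation, while the endpoints and numbers are just placeholders on my side, so please correct me if any of it is off:

```yaml
receivers:
  otlp:
    protocols:
      http:
        endpoint: 0.0.0.0:4318        # HTTP server; Go serves one goroutine per connection, no worker-pool knob that I could find
      grpc:
        endpoint: 0.0.0.0:4317
        max_concurrent_streams: 100   # cap on concurrent streams per gRPC connection

processors:
  batch:
    send_batch_size: 8192             # batch before exporting to reduce per-request overhead
    timeout: 200ms

exporters:
  otlphttp:
    endpoint: https://backend.example.com:4318   # placeholder backend
    sending_queue:
      enabled: true
      num_consumers: 10               # workers draining the queue, the closest thing to "worker threads" I found
      queue_size: 5000                # items buffered before data starts being dropped

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp]
```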