JMX exporter high memory usage. #929
I'm running a standalone JMX exporter alongside Kafka Connect in a Kubernetes pod to expose Kafka Connect metrics to Prometheus, using JMX exporter version 0.20. Kafka has approximately 9000 topics, and I've allocated 5G of memory to the pod. As soon as the JMX exporter starts, memory utilization surges to 100%, the pod is terminated, and Kafka Connect stops with it. I identified the high-cardinality metrics and tried to disable them with the configuration below, but I'm still receiving all of the metrics I attempted to disable.

excludeObjectNames: ["kafka.log:type=Log, name=Size,*"......]

Could you please guide me on reducing memory usage for the JMX exporter and disabling unnecessary metrics? Also, let me know if any further information is needed.
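For context, a minimal configuration sketch along these lines may help; the endpoint, object-name patterns, and rule below are illustrative assumptions, not the reporter's actual settings. One detail worth checking: excludeObjectNames entries are matched as JMX ObjectName patterns, where key names are taken literally, so the space after the comma in "type=Log, name=Size" above may be why the exclusion never matches.

```yaml
# Hypothetical standalone JMX exporter config -- endpoint and patterns are
# illustrative assumptions, not taken from the reporter's setup.
hostPort: "localhost:9999"   # JMX endpoint of the Kafka Connect JVM (assumed)
lowercaseOutputName: true

# Allowlisting only the MBean domains you need usually saves more memory
# than exclusions, because non-matching MBeans are never queried at all.
includeObjectNames:
  - "kafka.connect:*"
  - "kafka.consumer:*"
  - "kafka.producer:*"

# Shown to illustrate the corrected pattern syntax: no space after the
# comma, since ObjectName keys such as "name" are matched literally.
excludeObjectNames:
  - "kafka.log:type=Log,name=Size,*"

rules:
  # Keep rules narrow; a catch-all such as 'pattern: ".*"' forces the
  # exporter to process every attribute of every collected MBean.
  - pattern: "kafka.connect<type=connect-worker-metrics>([^:]+):"
    name: "kafka_connect_worker_metrics_$1"
```

Since the standalone jmx_prometheus_httpserver runs in its own JVM, its heap can also be capped independently of Kafka Connect, so a metrics blow-up affects the exporter rather than the Connect worker, e.g. (port and config file name assumed):

```
java -Xmx512m -jar jmx_prometheus_httpserver-0.20.0.jar 5556 exporter-config.yaml
```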
@harshal-choudhari-9393 Can you provide your JMX Exporter YAML configuration file?
Sure. Here is the YAML for the JMX exporter.
@harshal-choudhari-9393 Can you provide your solution/resolution, for future reference?