The application and the metrics endpoint run fine, but after a few hours the agent starts throwing the following error. Once the error appears, memory usage keeps increasing until it consumes all of the heap and the process terminates with an OutOfMemoryError, which causes the AMQ container to restart.
Sep 18, 2024 5:39:18 AM io.prometheus.metrics.exporter.httpserver.HttpExchangeAdapter sendErrorResponseWithStackTrace
SEVERE: The Prometheus metrics HTTPServer caught an Exception while trying to send the metrics response.
java.io.IOException: Broken pipe
at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
at sun.nio.ch.IOUtil.write(IOUtil.java:65)
at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:470)
at sun.net.httpserver.Request$WriteStream.write(Request.java:391)
at sun.net.httpserver.ChunkedOutputStream.writeChunk(ChunkedOutputStream.java:125)
at sun.net.httpserver.ChunkedOutputStream.write(ChunkedOutputStream.java:87)
at sun.net.httpserver.PlaceholderOutputStream.write(ExchangeImpl.java:444)
at java.util.zip.GZIPOutputStream.finish(GZIPOutputStream.java:168)
at java.util.zip.DeflaterOutputStream.close(DeflaterOutputStream.java:238)
at io.prometheus.metrics.exporter.common.PrometheusScrapeHandler.handleRequest(PrometheusScrapeHandler.java:68)
at io.prometheus.metrics.exporter.httpserver.MetricsHandler.handle(MetricsHandler.java:43)
at com.sun.net.httpserver.Filter$Chain.doFilter(Filter.java:79)
at sun.net.httpserver.AuthFilter.doFilter(AuthFilter.java:83)
at com.sun.net.httpserver.Filter$Chain.doFilter(Filter.java:82)
at sun.net.httpserver.ServerImpl$Exchange$LinkHandler.handle(ServerImpl.java:675)
at com.sun.net.httpserver.Filter$Chain.doFilter(Filter.java:79)
at sun.net.httpserver.ServerImpl$Exchange.run(ServerImpl.java:647)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Terminating due to java.lang.OutOfMemoryError: Java heap space
The application is deployed on ARO (Azure Red Hat OpenShift).
Any idea what might be causing this?
It defaults to 10 seconds, which is the default for the ServiceMonitor.
It varies between 1 and 1.2 MB.
Not sure what you mean by development environment. Right now the issue is in one of our test environments; we have tried deploying it in another cluster and the results are the same.
Yes, around 2-3 hours of broken pipe exceptions, after which the OutOfMemoryError occurs.
@anadinema the exporter receives the HTTP request, scrapes the MBeans (buffering the complete response), and then tries to send the HTTP response.
Sep 18, 2024 5:39:18 AM io.prometheus.metrics.exporter.httpserver.HttpExchangeAdapter sendErrorResponseWithStackTrace
SEVERE: The Prometheus metrics HTTPServer caught an Exception while trying to send the metrics response.
java.io.IOException: Broken pipe
... typically indicates that the socket was closed by the client. This can happen if the scrape takes longer than the client scrape timeout.
I would try increasing the timeout. I believe the default is 10 seconds.
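With the Prometheus Operator, that timeout is set per endpoint on the ServiceMonitor. A minimal sketch of raising it is shown below; the name, namespace, labels, and port are placeholders, and since Prometheus requires scrapeTimeout to be no longer than interval, the interval is raised as well.

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: amq-jmx-exporter        # placeholder name
  namespace: amq                # placeholder namespace
spec:
  selector:
    matchLabels:
      app: amq-broker           # placeholder; must match the labels on the metrics Service
  endpoints:
    - port: metrics             # placeholder port name on the Service
      path: /metrics
      interval: 60s
      scrapeTimeout: 50s        # raised from the 10s default; must not exceed interval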
We have a setup with Red Hat ActiveMQ (version 5.11.0) running, with the jmx_exporter javaagent (version 1.0.1) attached to it.
Following is the config for the same:
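A rough sketch of what such a javaagent attachment and exporter config typically look like; the jar path, port, and rules below are placeholders rather than the values from this setup.

# JVM flag on the broker (jar path, port, and config path are placeholders):
#   -javaagent:/opt/jmx/jmx_prometheus_javaagent-1.0.1.jar=9404:/opt/jmx/config.yaml

# /opt/jmx/config.yaml
startDelaySeconds: 0
lowercaseOutputName: true
lowercaseOutputLabelNames: true
rules:
  # Catch-all rule; a real config would usually scope this to the ActiveMQ MBeans
  - pattern: ".*"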