Getting IO exception: Broken pipe, when binding the jmx_javaagent on activemq #999

Open
anadinema opened this issue Sep 18, 2024 · 3 comments

Comments

@anadinema

anadinema commented Sep 18, 2024

We have a setup with Red Hat ActiveMQ (version 5.11.0) running, with the jmx_exporter javaagent (version 1.0.1) attached to it.

Here is the exporter config:

---
startDelaySeconds: 0
ssl: false
lowercaseOutputName: false
lowercaseOutputLabelNames: false
rules:
  - pattern: ".*"
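
For context, the agent is attached to the broker JVM via a -javaagent option in the container spec. A simplified sketch of that wiring is below; the env var name, jar path, port 9404, and config path are illustrative placeholders rather than the exact values from our deployment:

containers:
  - name: amq-broker
    env:
      # Placeholder wiring: attach the jmx_exporter javaagent on port 9404
      # using the config shown above.
      - name: JAVA_OPTS
        value: "-javaagent:/opt/jmx_exporter/jmx_prometheus_javaagent-1.0.1.jar=9404:/opt/jmx_exporter/config.yaml"
    ports:
      - name: http
        containerPort: 9404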

The application and the metrics endpoint run fine, but after a few hours it starts throwing the following error. Once the error appears, memory usage keeps increasing until all of the heap is consumed, the process terminates with an OutOfMemoryError, and the AMQ container restarts.

Sep 18, 2024 5:39:18 AM io.prometheus.metrics.exporter.httpserver.HttpExchangeAdapter sendErrorResponseWithStackTrace
SEVERE: The Prometheus metrics HTTPServer caught an Exception while trying to send the metrics response.
java.io.IOException: Broken pipe
	at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
	at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
	at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
	at sun.nio.ch.IOUtil.write(IOUtil.java:65)
	at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:470)
	at sun.net.httpserver.Request$WriteStream.write(Request.java:391)
	at sun.net.httpserver.ChunkedOutputStream.writeChunk(ChunkedOutputStream.java:125)
	at sun.net.httpserver.ChunkedOutputStream.write(ChunkedOutputStream.java:87)
	at sun.net.httpserver.PlaceholderOutputStream.write(ExchangeImpl.java:444)
	at java.util.zip.GZIPOutputStream.finish(GZIPOutputStream.java:168)
	at java.util.zip.DeflaterOutputStream.close(DeflaterOutputStream.java:238)
	at io.prometheus.metrics.exporter.common.PrometheusScrapeHandler.handleRequest(PrometheusScrapeHandler.java:68)
	at io.prometheus.metrics.exporter.httpserver.MetricsHandler.handle(MetricsHandler.java:43)
	at com.sun.net.httpserver.Filter$Chain.doFilter(Filter.java:79)
	at sun.net.httpserver.AuthFilter.doFilter(AuthFilter.java:83)
	at com.sun.net.httpserver.Filter$Chain.doFilter(Filter.java:82)
	at sun.net.httpserver.ServerImpl$Exchange$LinkHandler.handle(ServerImpl.java:675)
	at com.sun.net.httpserver.Filter$Chain.doFilter(Filter.java:79)
	at sun.net.httpserver.ServerImpl$Exchange.run(ServerImpl.java:647)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)

Terminating due to java.lang.OutOfMemoryError: Java heap space

The application is deployed on ARO (Azure Red Hat OpenShift).

Any idea what might be causing this?

@dhoard
Collaborator

dhoard commented Sep 20, 2024

@anadinema

  1. What application is scraping the JMX exporter?
  2. What is the scrape interval?
  3. What size is the JMX exporter response content?
  4. Can you reproduce this in a development environment?
  5. Do you start experiencing the broken pipe exception before the OutOfMemoryError?
    • For example, the broken pipe exception starts occurring 10 or 20 minutes before the OutOfMemoryError.

@anadinema
Author

@dhoard

  1. The scraping is done by a Kubernetes ServiceMonitor. Here is how the config looks:
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: example-sm
  namespace: amq-namespace
spec:
  selector:
    matchLabels:
      app: amq
  endpoints:
    - port: 'http'
      path: '/metrics'
  2. It defaults to 10 seconds, which is the default for the ServiceMonitor.
  3. It varies between 1 and 1.2 MB.
  4. Not sure what you mean by a development environment. Right now the issue is in one of our test environments, and we have tried another cluster with the same results.
  5. Yes, the broken pipe exceptions start around 2–3 hours before the OutOfMemoryError appears.

@dhoard
Collaborator

dhoard commented Oct 25, 2024

@anadinema the exporter receives the HTTP request, scrapes the MBeans (buffering the complete response), and then tries to send the HTTP response.

Sep 18, 2024 5:39:18 AM io.prometheus.metrics.exporter.httpserver.HttpExchangeAdapter sendErrorResponseWithStackTrace
SEVERE: The Prometheus metrics HTTPServer caught an Exception while trying to send the metrics response.
java.io.IOException: Broken pipe

... typically indicates that the socket was closed by the client. This can happen if the scrape takes longer than the client scrape timeout.

I would try increasing the timeout. I believe the default is 10 seconds.
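
If the scraper is a Prometheus Operator ServiceMonitor (as in your config above), both the scrape interval and the scrape timeout can be set per endpoint. A sketch based on your ServiceMonitor follows; the 60s/30s values are only examples, and scrapeTimeout must not exceed interval:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: example-sm
  namespace: amq-namespace
spec:
  selector:
    matchLabels:
      app: amq
  endpoints:
    - port: 'http'
      path: '/metrics'
      interval: 60s        # how often Prometheus scrapes this endpoint (example value)
      scrapeTimeout: 30s   # per-scrape timeout; must be <= interval (example value)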
