
Monitoring

Vladimir Kotal edited this page Oct 15, 2020 · 26 revisions

General

The monitoring endpoints do not go through authorization checks, nor are they restricted to localhost.

The meters are deliberately not tracked per project: first, to avoid a metric cardinality explosion; second, to avoid leaking private information (given the above).

Keep in mind that the meter names are still volatile; they will stabilize over time.

Web application

/metrics/prometheus serves metrics in the Prometheus format. If the web application is running at https://foo.bar/source/, the URL for the metrics will be https://foo.bar/source/metrics/prometheus
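As a sketch, the metrics URL is simply the web application base URL with /metrics/prometheus appended (the base URL below reuses the example above):

```shell
# Derive the metrics URL from the web application base URL.
BASE_URL='https://foo.bar/source'
echo "${BASE_URL}/metrics/prometheus"
# Fetch it against a running instance (no authorization needed, see General):
# curl -s "${BASE_URL}/metrics/prometheus"
```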

Insert this snippet into /etc/prometheus/prometheus.yml:

  - job_name: opengrok
    metrics_path: '/source/metrics/prometheus'
    static_configs:
      # replace with actual server name and port (defaults to HTTP)
      - targets: ['localhost:8080']

and reload the Prometheus configuration. The web application metrics will become available.
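Two common ways to trigger the reload, sketched here under assumptions about a typical Prometheus setup (adjust process name and address to your deployment):

```shell
# Option 1: signal the running prometheus process to re-read its configuration.
# kill -HUP "$(pidof prometheus)"
# Option 2: use the HTTP reload endpoint (only available when Prometheus was
# started with the --web.enable-lifecycle flag):
# curl -X POST http://localhost:9090/-/reload
```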

Notable metrics start with:

  • jvm
  • authorization
  • requests

Indexer

The indexer metrics are exported in StatsD format.

Configuration

Here is an example of a complete read-only configuration:

<?xml version="1.0" encoding="UTF-8"?>
<java version="11.0.4" class="java.beans.XMLDecoder">
 <object class="org.opengrok.indexer.configuration.Configuration" id="Configuration0">

  <void property="statsdConfig">
    <void property="port">
      <int>8125</int>
    </void>
    <void property="host">
      <string>localhost</string>
    </void>
    <void property="flavor">
      <object class="java.lang.Enum" method="valueOf">
        <class>io.micrometer.statsd.StatsdFlavor</class>
        <string>ETSY</string>
      </object>
    </void>
  </void>

 </object>
</java>
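A sketch of how such a configuration might be passed to the indexer, assuming a typical invocation (the file path, source root and data root below are illustrative, not prescribed):

```shell
# Pass the read-only configuration to the indexer via its -R option.
# java -jar lib/opengrok.jar -R /var/opengrok/etc/readonly.xml \
#     -s /var/opengrok/src -d /var/opengrok/data
```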

Configurable options:

name     type                 value
port     int                  UDP port number
host     String               hostname
flavor   StatsdFlavor (enum)  type of the StatsD export

The set of Micrometer built-in meters is the same as for the web application.

The StatsD export is set up with buffered output and sent via UDP to the host/port from the configuration. Even with buffering enabled, this can generate heavy traffic.
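The ETSY StatsD flavor uses a simple line protocol, one metric per datagram. As a rough sketch (the metric name and value below are made up for illustration, not actual indexer meter names):

```shell
# ETSY StatsD line protocol: <name>:<value>|<type> (here "g" for gauge).
# Name and value are illustrative only.
METRIC='jvmMemoryUsed.area.heap.id.Eden_Space.statistic.value:1048576|g'
echo "$METRIC"
# To actually send it over UDP to the configured host/port, bash can use /dev/udp:
# echo -n "$METRIC" > /dev/udp/localhost/8125
```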

If the indexer is run in per-project mode, a projects tag is added to all the metrics, with a value containing the project names separated by commas.

Example

To use StatsD with Prometheus, run the statsd-exporter with Docker like so:

#!/bin/bash

docker run --name=prom-statsd-exporter \
    -p 9123:9102 \
    -p 8125:8125/udp \
    -v "$PWD/mapping.yml:/tmp/mapping.yml" \
    prom/statsd-exporter \
        --statsd.mapping-config=/tmp/mapping.yml \
        --statsd.listen-udp=:8125 \
        --web.listen-address=:9102

The configuration in mapping.yml can look like this:

mappings:
    # usage:
    #   jvmMemoryUsed.area.nonheap.id.Compressed_Class_Space.statistic.value:XYZ
  - match: "jvmMemoryUsed.area.*.id.*.statistic.value"
    name: "jvmMemoryUsed"
    labels:
        area: "$1"
        id: "$2"

This maps the StatsD metric names to native Prometheus metric names with labels.
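For instance, with the mapping above, a StatsD datagram whose name matches the pattern would be translated roughly as follows (the value 123 is illustrative):

```
jvmMemoryUsed.area.nonheap.id.Compressed_Class_Space.statistic.value:123|g
  →  jvmMemoryUsed{area="nonheap", id="Compressed_Class_Space"} 123
```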

The Prometheus config snippet can look like this:

  - job_name: 'statsd'
    static_configs:
      - targets: ['localhost:9123']
        labels: {'host': 'localhost'}