This is a fork of Cloudflare's alertmanager2es with a new Go project layout. It uses the official Elasticsearch client and supports authenticating to Elasticsearch.
alertmanager2es receives HTTP webhook notifications from AlertManager and inserts them into an Elasticsearch index for searching and analysis. It runs as a daemon.
The alerts are stored in Elasticsearch as alert groups.
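For reference, a webhook notification for one alert group looks roughly like this (the field values below are invented for illustration):

{
  "version": "4",
  "groupKey": 7297782894450125000,
  "status": "firing",
  "receiver": "alertmanager2es",
  "groupLabels": { "alertname": "ExampleAlert" },
  "commonLabels": { "alertname": "ExampleAlert", "severity": "warning" },
  "commonAnnotations": { "summary": "An example alert" },
  "externalURL": "https://alertmanager.example.com",
  "alerts": [
    {
      "status": "firing",
      "labels": { "alertname": "ExampleAlert", "instance": "host1.example.com:9100" },
      "annotations": { "summary": "An example alert" },
      "startsAt": "2020-06-01T12:00:00Z",
      "endsAt": "0001-01-01T00:00:00Z"
    }
  ]
}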
Usage:
alertmanager2es [OPTIONS]
Application Options:
--debug debug mode [$DEBUG]
-v, --verbose verbose mode [$VERBOSE]
--log.json Switch log output to json format [$LOG_JSON]
--elasticsearch.address= ElasticSearch urls [$ELASTICSEARCH_ADDRESS]
--elasticsearch.username= ElasticSearch username for HTTP Basic Authentication
[$ELASTICSEARCH_USERNAME]
--elasticsearch.password= ElasticSearch password for HTTP Basic Authentication
[$ELASTICSEARCH_PASSWORD]
--elasticsearch.apikey= ElasticSearch base64-encoded token for authorization; if set, overrides
username and password [$ELASTICSEARCH_APIKEY]
--elasticsearch.index= ElasticSearch index name (placeholders: %y for year, %m for month and %d
for day) (default: alertmanager-%y.%m) [$ELASTICSEARCH_INDEX]
--bind= Server address (default: :9097) [$SERVER_BIND]
Help Options:
-h, --help Show this help message
It can be useful to see which alerts fired over a given time period, and perform historical analysis of when and where alerts fired. Having this data can help:
- tune alerting rules
- understand the impact of an incident
- understand which alerts fired during an incident
It might have been possible to configure Alertmanager to send the alert groups to Elasticsearch directly, were it not for the fact that Elasticsearch does not support unsigned integers at the time of writing. Alertmanager uses an unsigned integer for the groupKey field, which alertmanager2es converts to a string.
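As an illustration only (this is a sketch, not the project's actual code; the helper name is hypothetical), the conversion could be done in Go roughly like this:

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
)

// groupKeyToString decodes a webhook payload and rewrites a numeric
// groupKey as a string so Elasticsearch can index it.
func groupKeyToString(payload []byte) (map[string]interface{}, error) {
	dec := json.NewDecoder(bytes.NewReader(payload))
	dec.UseNumber() // keep full uint64 precision; plain Unmarshal would round via float64

	var doc map[string]interface{}
	if err := dec.Decode(&doc); err != nil {
		return nil, err
	}
	if gk, ok := doc["groupKey"].(json.Number); ok {
		doc["groupKey"] = gk.String()
	}
	return doc, nil
}

func main() {
	doc, err := groupKeyToString([]byte(`{"groupKey": 18446744073709551615}`))
	if err != nil {
		panic(err)
	}
	fmt.Printf("%T %v\n", doc["groupKey"], doc["groupKey"]) // string 18446744073709551615
}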
- alertmanager2es will not capture silenced or inhibited alerts; the alert notifications stored in Elasticsearch will closely resemble the notifications received by a human.
- Kibana does not display arrays of objects well (the alert groupings use an array), so you may find some irregularities when exploring the alert data in Kibana. We have not found this to be a significant limitation, and it is possible to query alert labels stored within the array.
To use alertmanager2es, you'll need:
- an Elasticsearch cluster
- Alertmanager 0.6.0 or above
To build alertmanager2es, clone the repository and run the make targets:
git clone https://github.com/webdevops/alertmanager2elasticsearch
cd alertmanager2elasticsearch
make vendor
make build
alertmanager2es is configured using commandline flags. It is assumed that alertmanager2es has unrestricted access to your Elasticsearch cluster.
alertmanager2es does not perform any authentication of incoming webhook requests.
Run ./alertmanager2es --help to view the configurable commandline flags.
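For example, a hypothetical invocation against a local cluster (all values below are placeholders):

./alertmanager2es \
  --elasticsearch.address=http://localhost:9200 \
  --elasticsearch.username=alertmanager \
  --elasticsearch.password=changeme \
  --elasticsearch.index='alertmanager-%y.%m' \
  --bind=:9097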
Next, configure Alertmanager to deliver webhook notifications to alertmanager2es. Add a receiver:

receivers:
- name: alertmanager2es
  webhook_configs:
  - url: https://alertmanager2es.example.com/webhook
By omitting a matcher, this route will match all alerts:

- receiver: alertmanager2es
  continue: true
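Putting the two fragments together, a minimal routing section might look like the sketch below; team-default is a placeholder standing in for your existing receiver:

route:
  receiver: team-default
  routes:
  - receiver: alertmanager2es
    continue: true

receivers:
- name: team-default
- name: alertmanager2es
  webhook_configs:
  - url: https://alertmanager2es.example.com/webhook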
Apply this Elasticsearch template before you configure alertmanager2es to start sending data:
{
  "template": "alertmanager-2*",
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 1,
    "index.refresh_interval": "10s",
    "index.query.default_field": "groupLabels.alertname"
  },
  "mappings": {
    "_default_": {
      "_all": {
        "enabled": false
      },
      "properties": {
        "@timestamp": {
          "type": "date",
          "doc_values": true
        }
      },
      "dynamic_templates": [
        {
          "string_fields": {
            "match": "*",
            "match_mapping_type": "string",
            "mapping": {
              "type": "string",
              "index": "not_analyzed",
              "ignore_above": 1024,
              "doc_values": true
            }
          }
        }
      ]
    }
  }
}
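One way to apply it, assuming the template above is saved as alertmanager-template.json and an Elasticsearch version that still supports the legacy _template API is reachable at localhost:9200:

curl -XPUT -H 'Content-Type: application/json' \
  'http://localhost:9200/_template/alertmanager' \
  -d @alertmanager-template.json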
We rotate our index once a month, since there's not enough data to warrant daily rotation in our case. Therefore our index name looks like:
alertmanager-2020.06
alertmanager2es will return an HTTP 500 (Internal Server Error) if it encounters a non-2xx response from Elasticsearch. If Elasticsearch is down, alertmanager2es therefore responds to Alertmanager with an HTTP 500. No retries are made, as Alertmanager has its own retry logic.
Both the HTTP server exposed by alertmanager2es and the HTTP client that connects to Elasticsearch have read and write timeouts of 10 seconds.
alertmanager2es exposes Prometheus metrics on /metrics.
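To verify, fetch the endpoint on the default bind address (:9097; adjust if you changed --bind):

curl -s http://localhost:9097/metrics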
Example Kibana query:

alerts.labels.alertname:"Disk_Likely_To_Fill_Next_4_Days"
Pull requests, comments and suggestions are welcome.
Please see CONTRIBUTING.md for more information.