This is an all-in-one tool to help you visualize and report on Bull! It runs as a Docker container that you can spin up for local development or host wherever you see fit. The core goal of this project is to provide realtime integration of your Bull queues with existing Bull tooling...without needing to write any custom code. The following is automatically included:
- Automatic discovery of your Bull queues (just point this at your Redis instance)
- Automatic configuration of Prometheus metrics for each discovered queue
- Configurable UI support to visualize Bull queues (supported: `arena`, `bull-board`, `bull-master`)
- Sentry error reporting (just pass the `SENTRY_DSN` environment variable)
- Elastic ECS logging when `NODE_ENV` is set to `production`
- Bundled `oauth2_proxy` if you want to restrict access (disabled by default)
You can test it out with Docker by running the command below (if you want to access something running on your host machine rather than inside the Docker network, you can use the special hostname `host.docker.internal`):
```bash
docker run -d --rm \
  --name bull-monitor \
  -e "NODE_ENV=development" \
  -e "REDIS_HOST=host.docker.internal" \
  -e "REDIS_PORT=6001" \
  -e "PORT=3000" \
  -e "BULL_WATCH_QUEUE_PREFIXES=bull" \
  -e "UI=bull-master" \
  -p 3000:3000 \
  ejhayes/nodejs-bull-monitor:latest
```
To use with Docker Compose, add the following to `docker-compose.yml`:
```yaml
bull-monitor:
  image: ejhayes/nodejs-bull-monitor:latest
  ports:
    - 3000:3000
  environment:
    REDIS_HOST: <your redis host>
    REDIS_PORT: <your redis port>
    BULL_WATCH_QUEUE_PREFIXES: bull
    PORT: 3000
    UI: bull-master
```
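If you don't already have a Redis instance in your compose file, a minimal sketch that also starts one might look like the following (the `redis` service name and image tag are illustrative assumptions, not part of this project):

```yaml
services:
  redis:
    image: redis:6-alpine # assumption: any Redis instance your Bull queues already use will work
    ports:
      - 6379:6379
  bull-monitor:
    image: ejhayes/nodejs-bull-monitor:latest
    ports:
      - 3000:3000
    environment:
      REDIS_HOST: redis # resolves to the redis service defined above
      REDIS_PORT: 6379
      BULL_WATCH_QUEUE_PREFIXES: bull
      PORT: 3000
      UI: bull-master
```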
Then run `docker-compose up bull-monitor`. Assuming no issues, the following paths are available:
| Path | Description |
|---|---|
| `/metrics` | Prometheus metrics |
| `/health` | Health endpoint (always returns HTTP 200 with `OK` text) |
| `/docs` | Swagger UI |
| `/docs-json` | Swagger JSON definition |
| `/queues` | Bull UI (currently `arena`, `bull-board`, or `bull-master`) |
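If you deploy with Docker Compose, the `/health` endpoint can be wired into a container health check. This is only a sketch and assumes a shell with BusyBox `wget` is available inside the image (not verified here):

```yaml
bull-monitor:
  image: ejhayes/nodejs-bull-monitor:latest
  healthcheck:
    test: ['CMD-SHELL', 'wget -q -O - http://localhost:3000/health || exit 1'] # assumes wget exists in the image
    interval: 30s
    timeout: 5s
    retries: 3
```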
The following environment variables are supported:
| Environment Variable | Required | Default Value | Description |
|---|---|---|---|
| `ALTERNATE_PORT` | | `8081` | If the oauth2 proxy is enabled, bull-monitor itself runs on this port instead |
| `REDIS_HOST` | x | `null` | Redis host (IMPORTANT: must be the same Redis instance that stores Bull jobs!) |
| `REDIS_PORT` | x | `null` | Redis port |
| `REDIS_PASSWORD` | | `null` | Redis password |
| `REDIS_DB` | | `0` | Redis database index to use (see `options.db` in the docs) |
| `UI` | | `bull-board` | UI to use (supported: `arena`, `bull-board`, `bull-master`) |
| `BULL_WATCH_QUEUE_PREFIXES` | | `bull` | Bull prefixes to monitor (globs like `prefix*` are supported) |
| `BULL_COLLECT_QUEUE_METRICS_INTERVAL_MS` | | `60000` | How often queue metrics are gathered |
| `COLLECT_NODEJS_METRICS` | | `false` | Collect NodeJS metrics and expose them via Prometheus |
| `COLLECT_NODEJS_METRICS_INTERVAL_MS` | | `60000` | How often to calculate NodeJS metrics (if enabled) |
| `REDIS_CONFIGURE_KEYSPACE_NOTIFICATIONS` | | `true` | Automatically configures Redis keyspace notifications (typically not enabled by default). IMPORTANT: bull-monitor will NOT work without keyspace notifications configured. |
| `LOG_LABEL` | | `bull-monitor` | Log label to use |
| `LOG_LEVEL` | | `info` | Log level to use (supported: `debug`, `error`, `info`, `warn`) |
| `NODE_ENV` | | `production` | Node environment (use `development` for colorized logging) |
| `OAUTH2_PROXY_*` | | `null` | See the OAuth2 Proxy docs for more details |
| `PORT` | | `3000` | Port to use |
| `SENTRY_DSN` | | `null` | Sentry DSN to send errors to (disabled if not provided) |
| `USE_OAUTH2_PROXY` | | `0` | Enable the oauth2 proxy (anything other than `1` will disable it) |
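For example, a Docker Compose `environment` block for a password-protected Redis with NodeJS metrics enabled might look like this sketch (hostnames and credentials are placeholders):

```yaml
environment:
  REDIS_HOST: redis.internal # placeholder hostname
  REDIS_PORT: 6379
  REDIS_PASSWORD: changeme # placeholder password
  BULL_WATCH_QUEUE_PREFIXES: bull* # globs are supported
  COLLECT_NODEJS_METRICS: 'true'
  COLLECT_NODEJS_METRICS_INTERVAL_MS: 30000
  UI: bull-board
  PORT: 3000
```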
To get started with local development:
```bash
npm install
npm run services:start
npm run start:dev
```
If you want to run the tests:
```bash
npm run test
npm run test:e2e
```
To build the container (it will be built and tagged as `ejhayes/nodejs-bull-monitor`):

```bash
npm run ci:build
```
A test script is included so you can try creating and/or processing Bull jobs. Examples:

```bash
# create a queue and add jobs to it (no processing)
npm run generate:create

# process queue jobs only
npm run generate:process

# create and process jobs
npm run generate
```
The default behavior of `npm run generate` is to:

- Create the `MyBullQueue` queue if it doesn't exist.
- Add a dummy job every `10` milliseconds.
- Add a worker with concurrency `15` that processes up to `200` jobs per `1` second (jobs retried up to `4` times).
- Configure each job to take up to `200` milliseconds. Jobs can fail randomly.
See `./test.ts` for more details.
NOTE: metrics are available at the `/metrics` endpoint.
For each queue that is created, the following metrics are automatically tracked:

| Metric | Type | Description |
|---|---|---|
| `jobs_completed_total` | gauge | Total number of completed jobs |
| `jobs_failed_total` | gauge | Total number of failed jobs |
| `jobs_delayed_total` | gauge | Total number of delayed jobs |
| `jobs_active_total` | gauge | Total number of active jobs |
| `jobs_waiting_total` | gauge | Total number of waiting jobs |
| `jobs_active` | counter | Jobs active |
| `jobs_waiting` | counter | Jobs waiting |
| `jobs_stalled` | counter | Jobs stalled |
| `jobs_failed` | counter | Jobs failed |
| `jobs_completed` | counter | Jobs completed |
| `jobs_delayed` | counter | Jobs delayed |
| `job_duration` | summary | Processing time for completed/failed jobs |
| `job_wait_duration` | summary | Duration spent waiting for a job to start |
| `job_attempts` | summary | Number of attempts made before a job completed/failed |
The following labels are available:
| Label Name | Description |
|---|---|
| `queue_prefix` | Queue prefix |
| `queue_name` | Queue name |
| `job_name` | Job name |
| `status` | Job status (choices: `completed`, `failed`) |
| `error_type` | Error type (uses the error class name) |
Things to note about these metrics:
- Queue metrics are GLOBAL, not worker specific.
- Gauge metrics (`*_total`) are refreshed every 60 seconds. To change this you'll need to set the `BULL_COLLECT_QUEUE_METRICS_INTERVAL_MS` environment variable to another value.
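As a quick illustration of how these metrics can be used, here is a sketch of a Prometheus alerting rule that fires when a queue keeps failing jobs. The alert name and thresholds are arbitrary examples, not part of this project:

```yaml
groups:
  - name: bull-monitor-example # illustrative rule group name
    rules:
      - alert: BullQueueFailuresHigh # illustrative alert name
        expr: rate(jobs_failed[5m]) > 1 # jobs_failed is the counter exposed at /metrics
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: 'Queue {{ $labels.queue_name }} is failing jobs'
```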
You can visualize your queue metrics in Grafana! There are several pieces that need to be running for this to work:
- `bull-monitor` - this utility must be running (and the `/metrics` endpoint should be accessible)
- `prometheus` - you need to be running Prometheus and have it configured to scrape `bull-monitor` (see the scrape config sketch below)
- `grafana` - Grafana needs to be set up and configured to use Prometheus as a data source
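A minimal Prometheus scrape config sketch, assuming `bull-monitor` is reachable at `bull-monitor:3000` from wherever Prometheus runs (the job name and target are assumptions to adjust for your deployment):

```yaml
scrape_configs:
  - job_name: bull-monitor # illustrative job name
    metrics_path: /metrics
    scrape_interval: 30s
    static_configs:
      - targets:
          - bull-monitor:3000 # assumption: point this at your bull-monitor host/port
```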
If you want to play around with a local setup of this:
```bash
# start services
npm run services:start
npm run start:dev

# generate/process dummy data
npm run generate
```
You can now go to http://localhost:3001/dashboard/import and load the following dashboards:

| Grafana Dashboard Name | Grafana ID | Description |
|---|---|---|
| Queue Overview | 14538 | High level overview of all monitored Bull queues |
| Queue Specific | 14537 | Queue specific details |
There are 3 options currently available for UIs: `bull-board`, `arena`, and `bull-master`.
From: https://github.com/felixmosh/bull-board#readme. This is the default UI. If you want to be explicit, set the `UI` environment variable to `bull-board`.
From: https://github.com/hans-lizihan/bull-master. To use this UI you'll need to set the `UI` environment variable to `bull-master`.
From: https://github.com/bee-queue/arena. To use this UI you'll need to set the `UI` environment variable to `arena`.
You can restrict access to bull-monitor using the built-in OAuth2 proxy. To enable it you'll need the following environment variables at a minimum:

- `USE_OAUTH2_PROXY` (must be set to `1`)
- `OAUTH2_PROXY_REDIRECT_URL` (this is what the OAuth provider will redirect to)
- `OAUTH2_PROXY_CLIENT_ID`
- `OAUTH2_PROXY_SECRET_ID`
Many other configuration options are possible. See the OAuth2 Proxy documentation for more information.
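For example, a Docker Compose `environment` block enabling the proxy might look like the following sketch (all values are placeholders; the full set of `OAUTH2_PROXY_*` options comes from the OAuth2 Proxy documentation):

```yaml
environment:
  USE_OAUTH2_PROXY: '1'
  PORT: 3000
  ALTERNATE_PORT: 8081 # bull-monitor itself runs here when the proxy is enabled
  OAUTH2_PROXY_REDIRECT_URL: https://bull-monitor.example.com/oauth2/callback # placeholder URL
  OAUTH2_PROXY_CLIENT_ID: your-client-id # placeholder
  OAUTH2_PROXY_SECRET_ID: your-client-secret # placeholder
```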
- This is intended as a back office monitoring solution. You should not expose it publicly
- This is currently intended to run as a single process and should not be scaled horizontally (future todo item)
You can spin up a full local development environment by running:
```bash
# start services
npm run services:start
npm run start:dev
```
The following services are available (and automatically configured) at these locations:
- Grafana UI: http://localhost:3001
- Prometheus: http://localhost:3002
- SMTP (Mailhog): http://localhost:3003 (username: `test`, password: `test`)
- Redis: `localhost:6001`
- SMTP Server (used by Grafana Alerts): `localhost:6002` (no auth required, no encryption)
When you are done you can get rid of everything with:
```bash
npm run services:remove

# OR if you want to stop without removing
npm run services:stop
```
See the roadmap for ideas on how to improve this project.
Thanks goes to these wonderful people (emoji key):
Eric Hayes
This project follows the all-contributors specification. Contributions of any kind welcome!