Dragonfly with S3 Backups (GHCR-ready, Coolify-friendly)

A minimal wrapper image around docker.dragonflydb.io/dragonflydb/dragonfly:latest that adds a small entrypoint to configure Dragonfly's backup flags through environment variables, including saving snapshots to AWS S3 or any S3-compatible object storage (e.g., MinIO). It exists mainly to work around S3 configuration issues that can come up when running Dragonfly under Coolify.

  • Base image: docker.dragonflydb.io/dragonflydb/dragonfly:latest
  • Exposes port: 6379
  • Health check: built-in redis-cli ping (Dragonfly is Redis-compatible)
  • Publishes to GHCR via GitHub Actions

⚠️ Production note: Dragonfly's S3 cloud storage is a preview feature (per the upstream docs), and this project is intended primarily for testing purposes.

Features

  • S3 backups via --dir s3://bucket[/optional/prefix]
  • Scheduled snapshots via --snapshot_cron "CRON"
  • Automatic load on startup and save on shutdown
  • Configurable via environment variables (no need to rebuild for config changes)

Environment variables

  • S3_BUCKET_URL: S3 URL where snapshots are stored, e.g. s3://my-bucket/dragonfly/snapshots. If unset, DIR or /data is used.
  • DIR: Local or remote directory for snapshots (fallback if S3_BUCKET_URL not set). Defaults to /data.
  • SNAPSHOT_CRON: Cron expression for automatic snapshots (e.g. 0 0 * * *).
  • DBFILENAME: Filename base for snapshots (defaults to dump-{timestamp}; Dragonfly adds .dfs).
  • S3_ENDPOINT: Endpoint for S3-compatible services (e.g. minio.example.com:9000).
  • S3_USE_HTTPS: true or false (default true).
  • S3_SIGN_PAYLOAD: true or false (default true).
  • S3_EC2_METADATA: true to enable EC2 metadata credentials (default false).
  • VMODULE: Dragonfly vmodule setting, e.g. *=3.
  • LOGTOSTDERR: true to log to stderr (default true).
  • REDIS_PASSWORD: Password for authentication (adds --requirepass flag). Coolify-compatible.
  • EXTRA_FLAGS: Extra flags appended verbatim to the dragonfly command (advanced).

Authentication and region are taken from standard AWS providers (set as needed):

  • AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_SESSION_TOKEN (optional)
  • AWS_REGION
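
For example, REDIS_PASSWORD and EXTRA_FLAGS can be combined in a single run. This is only an illustration; the password and the --maxmemory value are placeholders:

docker run -it --rm \
  -e REDIS_PASSWORD=change-me \
  -e EXTRA_FLAGS="--maxmemory=512mb" \
  -p 6379:6379 \
  ghcr.io/automations-project/dragonfly-coolify-s3:latest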

Quick start

Run locally with S3:

docker run -it --rm \
  -e S3_BUCKET_URL="s3://my-dragonfly-backups/dragonfly/snapshots" \
  -e AWS_ACCESS_KEY_ID=... \
  -e AWS_SECRET_ACCESS_KEY=... \
  -e AWS_REGION=us-east-1 \
  -e SNAPSHOT_CRON="0 0 * * *" \
  -p 6379:6379 \
  ghcr.io/automations-project/dragonfly-coolify-s3:latest
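
Once a snapshot has been written (on the cron schedule or at shutdown), the files should appear under the configured prefix. With the AWS CLI installed, you can check, for example:

aws s3 ls s3://my-dragonfly-backups/dragonfly/snapshots/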

Run locally with MinIO:

docker run -it --rm \
  -e S3_BUCKET_URL="s3://dragonfly-backups" \
  -e S3_ENDPOINT="minio.local:9000" \
  -e S3_USE_HTTPS=false \
  -e AWS_ACCESS_KEY_ID=minio \
  -e AWS_SECRET_ACCESS_KEY=minio123 \
  -e AWS_REGION=us-east-1 \
  -e SNAPSHOT_CRON="*/30 * * * *" \
  -p 6379:6379 \
  ghcr.io/automations-project/dragonfly-coolify-s3:latest

Local disk (no S3):

docker run -it --rm \
  -e DIR=/data \
  -e SNAPSHOT_CRON="*/5 * * * *" \
  -v dragonfly-data:/data \
  -p 6379:6379 \
  ghcr.io/automations-project/dragonfly-coolify-s3:latest
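
With any of the above running, you can verify connectivity and trigger an on-demand snapshot with redis-cli (Dragonfly is Redis-compatible; add -a "$REDIS_PASSWORD" if authentication is enabled):

redis-cli -h 127.0.0.1 -p 6379 ping
redis-cli -h 127.0.0.1 -p 6379 save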

Coolify deployment

  • Image: ghcr.io/automations-project/dragonfly-coolify-s3:latest
  • Ports: expose 6379
  • Persistent storage: mount a volume to /data (used when not writing to S3 and for internal transient files)
  • Environment: set at least
    • S3_BUCKET_URL (or DIR)
    • SNAPSHOT_CRON (e.g. 0 0 * * *)
    • AWS creds/region as needed (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_REGION)
    • Optional: S3_ENDPOINT, S3_USE_HTTPS, S3_SIGN_PAYLOAD, DBFILENAME

Coolify will pass the environment variables to the container; the entrypoint translates them into Dragonfly CLI flags.
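
If you deploy a compose file in Coolify instead, the same configuration can be expressed roughly as follows (a minimal sketch; the bucket URL, credentials, and volume name are placeholders):

services:
  dragonfly:
    image: ghcr.io/automations-project/dragonfly-coolify-s3:latest
    ports:
      - "6379:6379"
    environment:
      S3_BUCKET_URL: "s3://my-dragonfly-backups/dragonfly/snapshots"
      SNAPSHOT_CRON: "0 0 * * *"
      AWS_ACCESS_KEY_ID: "..."
      AWS_SECRET_ACCESS_KEY: "..."
      AWS_REGION: "us-east-1"
    volumes:
      - dragonfly-data:/data

volumes:
  dragonfly-data: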

How it works

The image copies docker-entrypoint.sh and uses it as the ENTRYPOINT. The script builds a dragonfly command line based on env vars:

  • --dir uses S3_BUCKET_URL if set, otherwise DIR, otherwise /data.
  • --snapshot_cron is added if SNAPSHOT_CRON is set.
  • --dbfilename defaults to dump-{timestamp}.
  • S3 options are mapped from envs: --s3_endpoint, --s3_use_https, --s3_sign_payload, --s3_ec2_metadata.
  • Extra flags can be injected via EXTRA_FLAGS.

If you pass explicit args to the container (e.g. docker run ... dragonfly --help), they are respected and the wrapper does not override them.
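
The actual script ships in the repository as docker-entrypoint.sh; the following is only a simplified sketch of the mapping described above, not the exact script:

#!/bin/sh
# Simplified sketch of the env-to-flag mapping (not the script shipped in the image).

# Respect explicit arguments: "docker run ... dragonfly --help" runs unchanged.
if [ "$#" -gt 0 ]; then
  exec "$@"
fi

# The snapshot filename defaults to dump-{timestamp}; Dragonfly expands the macro.
[ -n "$DBFILENAME" ] || DBFILENAME='dump-{timestamp}'

# Build the dragonfly command, preferring S3_BUCKET_URL, then DIR, then /data.
set -- dragonfly "--dir=${S3_BUCKET_URL:-${DIR:-/data}}" "--dbfilename=$DBFILENAME"

# Optional flags are only added when the corresponding variable is set.
[ -n "$SNAPSHOT_CRON" ]  && set -- "$@" "--snapshot_cron=$SNAPSHOT_CRON"
[ -n "$S3_ENDPOINT" ]    && set -- "$@" "--s3_endpoint=$S3_ENDPOINT"
[ -n "$S3_USE_HTTPS" ]   && set -- "$@" "--s3_use_https=$S3_USE_HTTPS"
[ -n "$REDIS_PASSWORD" ] && set -- "$@" "--requirepass=$REDIS_PASSWORD"
# ...the remaining S3_*, VMODULE, and LOGTOSTDERR variables map the same way.

# EXTRA_FLAGS is appended verbatim (word splitting is intentional).
exec "$@" $EXTRA_FLAGS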

Notes and caveats

  • S3 backup/restore is a preview feature in Dragonfly. Validate it against your storage implementation before relying on it.
  • Ensure the container has network access to the S3 endpoint and that the region and credentials are correct.
  • The base image is the official Dragonfly image. If your environment lacks a POSIX shell, you can embed flags directly in the Dockerfile by overriding ENTRYPOINT with static flags instead of using the wrapper.

License

This repository packages the upstream Dragonfly binary under its license; see the upstream project for details. The files added here are provided under this repository's license.
