Hi there,

I am searching for a replacement for volumerize (https://github.com/blacklabelops/volumerize), since that project is now deprecated (blacklabelops/volumerize#59).

In particular, I am looking for a service that I can reuse as a repeatable template across different docker-compose stacks. My main goal is minimal maintenance effort, so downtime during backups is fine for me. That is why I loved volumerize's ability to automatically stop the services under backup and start them again afterwards: it is a very generic way to ensure data consistency across the services in a stack.

So, would it be possible to add this feature to your backup service as well? With that in place, I think your service could be a drop-in replacement for a project with >2m Docker pulls and 320 GitHub stars.

This should already be supported. We supply a Docker flavour: just mount the Docker socket into it (or a socket proxy, for better security) and use the magic job configuration env vars to stop the service before the backup and start it again afterwards (the backup itself runs at stage 300 daily and stage 500 weekly by default).
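For reference, here is a minimal sketch of what that could look like in a compose file. The image names, service names, volume paths, and the `JOB_<stage>_WHAT` / `JOB_<stage>_WHEN` variable pattern are assumptions based on the description above (jobs run in ascending stage order, with the daily backup itself at stage 300), so check the service's README for the exact names:

```yaml
# Hypothetical docker-compose stack: an "app" service plus the backup
# service described above. Image names and env var names are illustrative.
services:
  app:
    image: my-app:latest        # the service whose data gets backed up
    container_name: app         # fixed name so the backup job can stop/start it
    volumes:
      - app-data:/data

  backup:
    image: backup-service:docker   # the Docker flavour mentioned above
    environment:
      # Jobs run in ascending stage order; the backup itself is assumed to
      # run at stage 300 (daily), so stop the app before it and start after.
      JOB_200_WHAT: docker stop app
      JOB_200_WHEN: daily
      JOB_400_WHAT: docker start app
      JOB_400_WHEN: daily
    volumes:
      - app-data:/mnt/backup/src:ro                 # data to back up (path illustrative)
      - /var/run/docker.sock:/var/run/docker.sock   # or point at a socket proxy instead

volumes:
  app-data:
```

Because the stop and start commands are just ordinary jobs scheduled around the backup stage, the same template can be copied into every stack and only the service name needs to change.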