Fatal: unable to create lock in backend: repository is already locked by PID 55 on docker by root (UID 0, GID 0) #171
Comments
Looks like this is the spot where it's failing, line 94 (lines 93 to 94 in 65e361d).
1.6.0 doesn't appear to have this issue and makes it past the forget step. I'll stay on that version until I hear about a fix.
restic/restic#3491 may be related?
Same issue here, no idea how to downgrade though once the repo is in the newer state.
Not using b2 myself. Anyone want to try out the
Has there been any progress on this issue? Checking my restic logs today, I realized my backups haven't been working for several weeks and have been failing with this error. Following the comment above, I pulled version
Based on restic/restic#2736, it appears that the current guidance is basically to run restic unlock manually.
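For reference, a minimal sketch of that manual workaround; the repository URL, password file and container name below are placeholders, not values from this thread:

```sh
# Inspect and clear locks by hand before the next scheduled backup.
export RESTIC_REPOSITORY="b2:my-bucket:my-repo"     # placeholder
export RESTIC_PASSWORD_FILE=/root/.restic-password  # placeholder

restic list locks   # show the IDs of the locks currently present
restic unlock       # remove locks that restic considers stale

# If the backup container is still running, the same can be done inside it
# (container name is a placeholder):
docker exec -it resticker_backup restic unlock
```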
@littlegraycells Yes, manually unlocking is still the suggested advice. To be more precise, I would suggest always having a monitoring solution for backups (which you do not seem to have), for example by sending emails in error cases (as shown in the documentation) or, even better, by using something like Healthchecks to make sure failures are not missed. (I can warmly recommend the latter; you can also host it yourself!) For me, with about 15 different servers, this procedure works very well. That said, I can see that an auto-unlock solution as proposed in the restic issue could work, but that should be implemented there.
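As a sketch of that kind of monitoring, a backup run can be wrapped in a Healthchecks-style ping; the check UUID and the /data source path are placeholders, and hc-ping.com can be replaced by the URL of a self-hosted instance:

```sh
# Report each backup run to a Healthchecks check so that failing or
# missing runs raise an alert. The UUID below is a placeholder.
HC_URL="https://hc-ping.com/00000000-0000-0000-0000-000000000000"

if restic backup /data; then
  curl -fsS -m 10 --retry 3 "$HC_URL"        # signal success
else
  curl -fsS -m 10 --retry 3 "$HC_URL/fail"   # signal failure
fi
```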
@djmaze Thanks. I do run a self-hosted version of healthchecks.io currently. Would you recommend running the
Hey @djmaze, that's an interesting comment. I use Uptime Kuma to observe services directly, as well as containers and their healthcheck status. I did not know that Healthchecks can be self-hosted! Regardless, what I found challenging with resticker is:
Do you have a solution to this? Cheers!
That's what
Hmm, we could implement this in resticker. But if you are using Healthchecks, you could also solve it by just pinging Healthchecks using
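A sketch of what that could look like when wired into the backup container itself; the image name and the POST_COMMANDS_* variable names are assumptions here, so check the resticker README for the exact hooks your version supports:

```sh
# Let the backup container notify Healthchecks after each run.
# Image name and environment variable names are assumptions; the
# Healthchecks UUID and the paths are placeholders.
docker run -d --name resticker_backup \
  -e RESTIC_REPOSITORY="b2:my-bucket:my-repo" \
  -e RESTIC_PASSWORD_FILE=/run/secrets/restic-password \
  -e RESTIC_BACKUP_SOURCES=/data \
  -e BACKUP_CRON="0 3 * * *" \
  -e POST_COMMANDS_SUCCESS="curl -fsS -m 10 https://hc-ping.com/<uuid>" \
  -e POST_COMMANDS_FAILURE="curl -fsS -m 10 https://hc-ping.com/<uuid>/fail" \
  -v /path/to/data:/data:ro \
  -v /path/to/secrets:/run/secrets:ro \
  mazzolino/restic
```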
If you have only one host using the repository, this might make sense, but if there is more than one (as is the case e.g. when running prunes on a bigger server, like I do), in my opinion that is too dangerous. (I could agree with a solution which automatically removes locks that are, say, more than 24 hours old. But as I said, I would prefer this to be solved upstream.)
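To make that trade-off concrete, here is a rough sketch of such a time-based unlock, purely as an illustration of the idea rather than an existing restic or resticker feature. It assumes jq and GNU date are available and RESTIC_REPOSITORY / RESTIC_PASSWORD are set, and it carries exactly the risk described above if another host is legitimately holding a long-running lock:

```sh
#!/bin/sh
# Remove all repository locks if any lock is older than 24 hours.
# Illustration only -- forcibly removing a live lock can damage the repo.
set -eu
now=$(date -u +%s)
for id in $(restic list locks --no-lock); do
  created=$(restic cat lock "$id" --no-lock | jq -r '.time')
  age=$(( now - $(date -u -d "$created" +%s) ))
  if [ "$age" -gt 86400 ]; then
    echo "lock $id is ${age}s old, removing all locks"
    restic unlock --remove-all
    break
  fi
done
```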
Well, currently resticker is completely unusable for many people because of the issue detailed above. Every time it tries to back up, it goes into an infinite loop trying to lock the repo.
@razaqq Well, afaics there is still no reproducible test case. As another workaround, you could also remove
I am also running into this issue. Running restic unlock via pre_commands also doesn't seem to work for me. If I unlock the repository from another machine, it starts backing up again for a while, only to get locked again.
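For readers unfamiliar with that workaround: the pre-command approach amounts to running unlock right before each backup, roughly equivalent to the shell sequence below (the /data source path is a placeholder). Note that a plain restic unlock only removes locks that restic considers stale, which may be why it does not help here:

```sh
# Rough equivalent of an "unlock before backup" pre-command.
# Only stale locks are removed; a lock still being refreshed by a
# live process elsewhere survives this and the backup still fails.
restic unlock || true
restic backup /data
```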
@thierrybla It would help if you could identify the original container / job that the lock came from. In your example the lock is quite old; maybe it was a prune which did not finish (because of lack of memory or similar)?
It should not be lack of memory; I am running 128 GB of RAM and it's nowhere near full at any time.
I just discovered that I have had a stale lock for a while now. I use the
Running resticker latest, trying to back up both my Docker volumes and a folder in my home directory to Backblaze. I also use Immich, so I dump the Immich database with the before command and exclude some folders I don't want backed up.
docker-compose (running as a portainer stack):
log:
Any ideas on why it fails after the backup is complete? I can even see the repo in Backblaze. It seems like the step it's failing on is scheduling the cron task. Any help would be greatly appreciated, thanks!