restic repo initialised twice when starting up docker with both backup/prune #48
How is this even possible? Initialization happens outside of the backup or prune code, in the shared entrypoint. Also, whatever led to the issue might be resolved after PR #47 is merged. Would you be able to test the new logic with your setup? (Btw. I recognize your profile pic from some years ago, not sure from where 😄)
I have a docker stack with two services, `backup` and `prune`. They both managed to initialise the repo at the same time and put it into an invalid state. This has been reported here. (BTW - I think I recognise your handle also - openhab perhaps?!)
Hey! Which part of the issue behind your link talks about initialization? The main thing I get from the issue is that "this kind of issue" is normally linked to the storage and less to restic. Can you verify this happens with the new logic and with simultaneous startup? Let's clarify first that this is indeed a real issue; if so, we need to discuss whether initialization should only happen with `backup`. Right. openHAB, of course! :)
Just the comment that talked about that error being caused by the repo being initialised twice (somehow). When I got this error I was searching for relevant posts and came across that one. I noticed when checking the docker logs that both of my services had initialised the repo. The next time either of them attempted to do anything, or I tried to access the repo manually (via the restic CLI), I got the same error. I was following the example in https://github.com/djmaze/resticker/blob/master/docker-swarm.example.yml.
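For reference, here is a minimal sketch of the kind of stack being described, loosely modelled on the linked docker-swarm.example.yml (the image name is resticker's published image; the cron variable names and repository values are placeholder assumptions, so check the real example file):

```yaml
version: "3.7"

services:
  backup:
    image: mazzolino/restic
    environment:
      RESTIC_REPOSITORY: s3:https://storage.example.com/bucket  # placeholder
      RESTIC_PASSWORD: example-password                         # placeholder
      BACKUP_CRON: "0 2 * * *"                                  # assumed variable name

  # Deployed at the same moment as "backup", which is what produces the
  # double-initialization race described above.
  prune:
    image: mazzolino/restic
    environment:
      RESTIC_REPOSITORY: s3:https://storage.example.com/bucket
      RESTIC_PASSWORD: example-password
      PRUNE_CRON: "30 2 * * *"                                  # assumed variable name
```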
@ThomDietrich the 2 different containers (backup & prune) starting at the same time would indeed cause initialization to happen twice. When I tried it, all my repos were already initialized.
@zoispag that makes sense. Thanks. Also: in the example yml files the containers start with a 30min difference. For the purpose of an example I would suggest making it 12 hours.
Pruning will start with a 30min difference. The containers will start at the same time if they are in the same docker-compose file.
Of course. Obviously I was talking about the cron job definition :)
That sounds reasonable. Let's do it.
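Concretely, the suggested change to the example would look something like this (keeping the assumed variable names from the sketch above):

```yaml
environment:
  BACKUP_CRON: "0 2 * * *"   # backup at 02:00
  PRUNE_CRON: "0 14 * * *"   # prune 12 hours later, at 14:00
```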
I just realized that, at least in a swarm deployment, multiple initialization can also occur when the backup service runs with multiple replicas. In my Docker swarm deployments, I sometimes make use of separate "one-shot" initialization services. They are just started once and then exit. On swarm, this is possible using a restart policy with `condition: none`. So we could have a separate `init` command for this.
Of course, when using Docker compose, it would be enough to add an explicit `depends_on`, as sketched below.
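A sketch of how both variants could look; the `init` service name and its command are hypothetical, while `restart_policy.condition: none` (swarm) and `depends_on` (plain compose) are standard options:

```yaml
services:
  # Hypothetical one-shot service: initializes the repo once, then exits.
  init:
    image: mazzolino/restic
    command: init            # assumed command, not an existing resticker mode
    deploy:
      restart_policy:
        condition: none      # swarm: never restart this service after it exits

  backup:
    image: mazzolino/restic
    depends_on:              # honored by plain docker-compose, ignored by swarm
      - init
```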
Before we discuss this any further: Is there any reason why restic should even allow this? Instead of building a workaround here, maybe we can help come up with an improvement in restic itself!?
Moved to #49
Would be okay for me. Then we should at least give a hint about this in the documentation (as you thankfully already began in #49).
What about removing the auto-initialisation and requiring the user to do this step manually (with suitable documentation/instructions)?
I don't think that's a good idea. I do rely on resticker initializing the repos for me. On the other hand, we could add a mild delay to the prune container on startup, making it wait a minute before attempting initialization.
I don't really like either idea. Initializing the repo manually yourself means you also have to set up all the SSH keys, credentials etc. locally, which is prone to errors. The 1 minute wait, on the other hand, would rather be an ugly workaround and does not protect when deploying multiple backup containers simultaneously. Ideally, this should be solved on the restic side. Or, for a more short-term solution, we could have a separate `init` command.
I like the carved-out `init` command. Let's talk details. To summarize the solution: remove automatic initialization from the `backup` and `prune` containers, offer a dedicated `init` command instead, and document how to run it (manually, or as a one-shot service).
In other words: the solution is a PR that is 10% code and 90% documentation changes. What do you think?
If init is removed from both backup/prune containers, I would definitely need an ENV variable to turn auto-init on. I am using this as part of my automation. On the other hand, does it make sense to init a repo just to prune it? It should already exist. We can remove init from prune altogether. (This still does not solve the issue of multiple backup containers on swarm.)
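Putting the last few comments together, the entrypoint logic could look roughly like this. This is a sketch only, not resticker's actual entrypoint, and `INIT_REPOSITORY` is a made-up variable name; `restic cat config` is used as the initialization probe, since it fails on an uninitialized repository:

```sh
#!/bin/sh
set -e

repo_initialized() {
    # "restic cat config" exits non-zero when the repository does not exist yet
    restic cat config >/dev/null 2>&1
}

case "$1" in
    backup)
        # Auto-init stays available, but only here and only when opted in
        if [ "${INIT_REPOSITORY:-false}" = "true" ] && ! repo_initialized; then
            restic init
        fi
        ;;
    prune)
        # Never initialize from the prune container; bail out early instead
        if ! repo_initialized; then
            echo "Repository not initialized yet, nothing to prune."
            exit 0
        fi
        ;;
esac
```

Note that a check-then-init sequence like this does not fully remove the race between multiple simultaneous backup containers, which is why the one-shot init service sketched earlier remains relevant on swarm.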
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Not stale, just didn't have the time to work on it yet
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Hello,
I think what @ThomDietrich summed up here is a good approach.
I also agree with @zoispag that init shouldn't be done when pruning, as the repo should already be initialised. So the optional env variable would only have an effect in backup containers. For Docker Swarm, @djmaze mentioned an initialisation service, which seems like the cleanest way to handle multiple backup services to me. It could also be used for Docker Compose to have only a single file, which can be better for a quickstart. @ThomDietrich Have you already worked on this issue? It doesn't seem too complex to me, so I can help if needed.
Triggered by a linked project, I will most probably take some time to work on this issue this week. I will respond to your thoughts and suggestions later. Thanks for the push! @djmaze could you please reopen the issue? Thanks
Yes, please reopen this bug - I hit it every time I want to init a repo for a new host where both backup and prune containers are initializing at the same time.
Oh, well, it seems I overlooked the previous comment. Sorry
Any news on this? Right now, every new host added to our backup needs manual intervention; it would be great to have this fixed.
Sorry for the late answer. If no one is on this, I will try to find some time to implement it in the upcoming days.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
I faced this issue while trying to set up restic on a new host with an existing repo; now my repo is broken and not usable anymore.
I just had the same (running 1.7.0) and temporarily added a workaround. Maybe this needs a built-in fix.
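If the lost code spans in the comment above described a startup delay (an assumption, based on the earlier delay discussion in this thread), the workaround may have looked roughly like this; the entrypoint path is a guess:

```yaml
prune:
  image: mazzolino/restic
  # Hypothetical delay workaround: give the backup container a head start
  # before prune touches the repository ("/entry.sh" is a guessed path).
  entrypoint: ["/bin/sh", "-c", "sleep 60 && exec /entry.sh prune"]
```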
I noticed when I started my docker swarm stack, which included both `backup` and `prune`, that it attempted to (and succeeded in) initialising the same repo twice. This put the repo in a bad state, and any attempts to run a backup/prune resulted in an error.
Can we either only attempt to initialise during `backup`? In `prune`, can we check if the repo is initialised, and if not then exit early (clearly nothing to do)?