fix delete local backup #18
Conversation
Could you give an example of how to reproduce this (i.e. the configuration you are using)? I did not see this behavior when testing. It's also kind of strange, as the change should be doing the very same thing (the lookup remains unchanged; it's only a different command that does the removal now). See: https://unix.stackexchange.com/a/205539/437580
Sure, this is the docker-compose: To test, I place backups older than 2 days and set cron to run every minute so it runs fast. The backup is created, and after 1 minute the deletion starts, but suddenly all the files are deleted. Then it runs again and creates the backup; if I add other old files, the same thing happens. Changing the command to find $target* -type f -mtime $BACKUP_RETENTION_DAYS -exec rm -rf '{}' ';' makes all the problems disappear; only old files are deleted.
I cannot reproduce this locally using your configuration. I see a backup file getting created every minute, and the pruning part will log: When you say you change the command, how exactly do you do that? Do you build the Docker image locally and use it in your setup, or do you change the script inside the running container? Do you keep any non-backup files inside of
Ah no, wait, that's interesting: when I look at the backup files, it seems to accidentally delete every other file: I would assume this is some race condition between the one-minute schedule and the one-minute leeway. Let me try running this on a non-conflicting real-world schedule and see if it also happens.
Yeah, when I use a leeway value that does not match the cron schedule, I do not see any such races. Could you try changing your setup accordingly and check whether your files are being kept around as well?
The proper fix for this problem would probably be some sort of mutex/lock that makes sure the script cannot run multiple times in parallel. I doubt it affects many real-world configurations, though.
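A minimal sketch of such a lock, assuming flock(1) from util-linux is available in the image; the lock path and the backup/prune body are illustrative placeholders, not part of the project:

```shell
#!/bin/sh
# Illustrative sketch: serialize script runs with flock(1) (util-linux).
# LOCK_FILE is a hypothetical path; the backup/prune body is elided.
LOCK_FILE="${TMPDIR:-/tmp}/backup.lock"

(
  # -n: give up immediately if another invocation already holds the lock,
  # so an overlapping cron run skips instead of racing the pruning step.
  flock -n 9 || { echo "another backup run is in progress, skipping"; exit 0; }

  echo "lock acquired, running backup and prune"
  # ... create the backup and prune old files here ...
) 9>"$LOCK_FILE"
```

The lock is tied to file descriptor 9 of the subshell, so it is released automatically when the subshell exits, even if the backup fails midway.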
I actually just ran into the inverse issue, where backups would not be deleted as the
So this seems to be the problem here: https://unix.stackexchange.com/questions/194863/delete-files-older-than-x-days/205539#comment902736_205539
Which is ... unexpected. I fixed this in 1.8.3; let's see what's next on the agenda.
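The linked answer boils down to operand order: find evaluates its expression left to right, so an action placed before the tests runs on every file it visits. A small sketch of the difference, assuming GNU findutils and coreutils, with illustrative file names:

```shell
#!/bin/sh
# Demonstrate why the position of -delete matters in a find expression.
tmp=$(mktemp -d)
touch "$tmp/new.tar.gz"
touch -d '10 days ago' "$tmp/old.tar.gz"

# Action before the tests: -delete fires before -type/-mtime are checked,
# so everything under $tmp is removed, new backups included.
find "$tmp" -mindepth 1 -delete -type f -mtime +7 2>/dev/null
echo "files left after wrong order: $(ls "$tmp" | wc -l)"

# Recreate the files and put the tests first: only the old backup matches.
touch "$tmp/new.tar.gz"
touch -d '10 days ago' "$tmp/old.tar.gz"
find "$tmp" -mindepth 1 -type f -mtime +7 -delete
echo "files left after right order: $(ls "$tmp")"
```

With the tests first, only old.tar.gz is removed and new.tar.gz survives, which matches the behavior the -exec variant restored.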
The race condition also remains, but I would wait to see if anyone is actually bitten by it. Thanks for your input.
So far it seems to work fine. Thanks for the assistance.
Changed
find $target* -delete -type f -mtime $BACKUP_RETENTION_DAYS
to
find $target* -type f -mtime $BACKUP_RETENTION_DAYS -exec rm -rf '{}' ';'
Now it deletes files correctly; the old command deleted everything.
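As a side note on the retention test itself (not something this thread changes): with GNU find, -mtime N matches files whose age, rounded down to whole 24-hour periods, is exactly N, while -mtime +N matches only strictly older files. A sketch with illustrative file names:

```shell
#!/bin/sh
# Sketch of GNU find's -mtime rounding (illustrative file names).
tmp=$(mktemp -d)
touch -d '50 hours ago' "$tmp/two-days.tar.gz"   # age floors to 2 days
touch -d '5 days ago'   "$tmp/five-days.tar.gz"  # age floors to 5 days

find "$tmp" -type f -mtime 2     # matches only two-days.tar.gz
find "$tmp" -type f -mtime +2    # matches only five-days.tar.gz
```

Depending on intent, a retention setting of 2 may therefore want -mtime +2 (everything older than 2 full days) rather than -mtime 2 (only files exactly 2 days old).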