This may be an RTFM situation, I accept…
I have multiple schedules going to my S3 bucket so that I can segregate offsite backups.
I have been doing one of these for a while with a .sh file that, after the backup is run, zips the folder with a timestamp as its name, then uses the s3cmd command line tool to do the upload, and then uses Pushover via curl to notify me of success, time taken and file size.
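Roughly along these lines (the paths, bucket name and Pushover token/user below are placeholders, not my real values):

```bash
#!/bin/bash
# Sketch of the nightly job: zip the backup folder, upload with s3cmd, notify via Pushover.

BACKUP_DIR="/path/to/backup/folder"          # placeholder
STAMP=$(date +%Y-%m-%d_%H%M)
ZIP_FILE="/path/to/staging/backup_${STAMP}.zip"
BUCKET="s3://my-offsite-bucket/daily/"       # placeholder

START=$(date +%s)

# Zip the whole backup folder, including the externally stored container files
zip -r -q "$ZIP_FILE" "$BACKUP_DIR"

# Upload with s3cmd (assumes ~/.s3cfg is already configured)
s3cmd put "$ZIP_FILE" "$BUCKET"

ELAPSED=$(( $(date +%s) - START ))
SIZE=$(du -h "$ZIP_FILE" | cut -f1)

# Pushover notification with size and time taken
curl -s \
  --form-string "token=APP_TOKEN" \
  --form-string "user=USER_KEY" \
  --form-string "message=Offsite backup uploaded: ${SIZE} in ${ELAPSED}s" \
  https://api.pushover.net/1/messages.json

# Remove the local zip once it has been uploaded
rm -f "$ZIP_FILE"
```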
What I would love is to somehow use the build zip in the process, since we are happy that we don't need individual files and folders full of externally stored files, and it would make navigating through the offsite backups easier.
Ideal workflow would be:
- The backup schedule saves files to a folder (keeping 0), but is treated as a build, so the result is zipped to a specific location.
- Either completion of that task triggers the move to offsite, or the move is scheduled at a reasonable time after the build should have completed.
I may have asked this before, but that was at a much earlier stage and Otto is now much more capable!
Otto 3 had the option to store zips, and we could maybe add that back. Here are the two reasons we haven't yet:
1. Zipping is resource intensive. It can take a long time, and it impacts the server during that process.
2. We wanted to support restoring files directly from offsite backups, which we just shipped or are about to ship. You can now restore a file from an offsite backup to your server with a single click. Having those files in zips means we couldn't do that: we would have to download the entire zip to the server, unzip it, restore the one file, then throw out the rest of the files.
We still may add it back some day. But these are the issues we need to solve to do it.
Trade-offs: single-click restore from offsite vs. zips. We chose to do single-click restores first.
Hope that helps you understand our thinking around this.
Good response… will carry on with current method for now.
It feels like, because we have multiple schedules, this could be an option on the schedule at the Otto end.
In the specific case that uses this the most, we are less concerned about one-click restore and more that the whole day is archived so we can go back to that day (which includes masses of external PDF files, although I am working on some yearly archiving to OneDrive at the moment).
I also get that zipping is slow… however, my Wasabi management file then deletes whole days of backups once the 60-day retention period is past (we keep each 01 and 14 of the month), so that action is simplified rather than having to delete folders recursively (it may be just swings and roundabouts). But I know for sure that the directory listing you get back from the API is based on the ETag of individual files, so over that period the file count could go up by a factor of 10.
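(The retention logic is roughly this, shown as a shell sketch rather than how I actually do it from the FileMaker management file; the bucket name and file naming are placeholders matching the script above.)

```bash
#!/bin/bash
# Sketch of the 60-day retention clean-up against zips named backup_YYYY-MM-DD_HHMM.zip.
BUCKET_PREFIX="s3://my-offsite-bucket/daily/"   # placeholder
CUTOFF=$(date -d "60 days ago" +%Y-%m-%d)       # GNU date; use `date -v-60d +%Y-%m-%d` on macOS

s3cmd ls "$BUCKET_PREFIX" | while read -r line; do
  key=$(echo "$line" | awk '{print $4}')
  # Pull the YYYY-MM-DD stamp out of the key name and grab the day of month
  stamp=$(echo "$key" | grep -oE '[0-9]{4}-[0-9]{2}-[0-9]{2}')
  day=${stamp:8:2}

  # Keep the 01 and 14 of each month; delete anything else older than the cutoff
  if [[ -n "$stamp" && "$stamp" < "$CUTOFF" && "$day" != "01" && "$day" != "14" ]]; then
    s3cmd del "$key"
  fi
done
```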
Would it work for you if we zipped each file with its container data and sent up a folder with a single zip per FileMaker file? That way you could download a single file per fmp12 file, we could still do restores of a single file, and it would save on space. It would still take a bit to send up the files and would put a load on the server, but if you opt into it that seems fair.
Would that get you closer to what you want, or is a single zip file the ideal situation still?
Definitely…
The file that I do this with (which has lots of external containers) is handled using zip and s3cmd, and it does take a lot of time, so it is started at around 22:30 so that there are no users on.
They all go into one bucket, but with a naming strategy that includes date/time, so sorting matches the upload order.
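Something like this, just for illustration (the bucket and prefix are made up):

```bash
# Because the "backup_" prefix is constant and the date/time is ISO-style,
# an alphabetical sort of the key listing matches chronological upload order.
s3cmd put "backup_$(date +%Y-%m-%d_%H%M).zip" s3://my-offsite-bucket/daily/
```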
OK, awesome. If we do a single zip per file, that will work great for this application and for saving space. I'll adjust our list to reflect this. Thanks for the feedback, John!