S3 store zip rather than fmp12

This may be an RTFM situation, I accept…
I have multiple schedules going to my S3 so that I can segregate offsite backups.

I have been doing one of these for a while with a .sh file that, after the backup has run, zips the folder with a timestamp as its name, uses the s3cmd command-line tool to do the upload, and then uses Pushover via curl to report success, time taken, and file size.
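For anyone wanting to copy the approach, here is a minimal sketch of that script. The paths, bucket name, and Pushover token/user key are placeholders, and it assumes s3cmd has already been configured with your S3 credentials:

```bash
#!/bin/bash
# Minimal sketch: zip the backup folder, upload with s3cmd, notify via Pushover.
# BACKUP_DIR, the bucket path, and the Pushover credentials are placeholders.
BACKUP_DIR="/path/to/backup/folder"
STAMP=$(date +%Y-%m-%d_%H%M%S)
ZIP="/tmp/backup_${STAMP}.zip"

START=$(date +%s)
zip -rq "$ZIP" "$BACKUP_DIR"                    # zip with timestamp as the name
s3cmd put "$ZIP" "s3://my-bucket/offsite/"      # upload to the offsite bucket
ELAPSED=$(( $(date +%s) - START ))
SIZE=$(du -h "$ZIP" | cut -f1)

# Pushover notification with success, time taken, and file size
curl -s \
  --form-string "token=APP_TOKEN" \
  --form-string "user=USER_KEY" \
  --form-string "message=Offsite backup ${STAMP}: OK in ${ELAPSED}s, ${SIZE}" \
  https://api.pushover.net/1/messages.json

rm -f "$ZIP"                                    # tidy up the local zip
```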

What I would love is to somehow use the build zip in the process, as we are happy that we don't need individual files and folders full of externally stored files, to make navigating the offsite backups easier.

Ideal workflow would be:

  1. Backup schedule saves files to a folder (keeping 0), but treated as a build so it is zipped to a specific location.
  2. Either completion of that task triggers the move to offsite, or the move is scheduled a reasonable time after the build should have completed (a rough stopgap sketch follows).
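In the meantime, one way to approximate the "trigger after the build completes" half outside Otto is a cron job that uploads the newest zip only once it has stopped growing. The build folder and bucket path below are assumptions about where things land:

```bash
#!/bin/bash
# Rough sketch: upload the newest build zip once it appears finished.
# BUILD_DIR and the bucket path are assumptions; run this from cron.
BUILD_DIR="/path/to/otto/builds"
NEWEST=$(ls -t "$BUILD_DIR"/*.zip 2>/dev/null | head -n 1)
[ -z "$NEWEST" ] && exit 0                     # nothing to do yet

# Treat the zip as complete if its size is unchanged for 30 seconds.
SIZE1=$(stat -c %s "$NEWEST" 2>/dev/null || stat -f %z "$NEWEST")
sleep 30
SIZE2=$(stat -c %s "$NEWEST" 2>/dev/null || stat -f %z "$NEWEST")

if [ "$SIZE1" = "$SIZE2" ]; then
    s3cmd put "$NEWEST" "s3://my-bucket/offsite/"
fi
```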

I may have asked this before, but that was at a much earlier stage and Otto is now much more capable!

Otto 3 had the option to store zips, and we could maybe add that back. Here are the two reasons we haven't yet.

  1. Zipping is resource intensive. It can take a long time. And it impacts the server during that process.
  2. We wanted to support restoring files directly from offsite backups, which we just shipped or are about to ship. You can now, with a single click, restore a file from an offsite backup to your server. Having those files in zips means we couldn't do that. We would have to download the entire zip to the server, unzip it, restore the one file, then throw out the rest of the files.

We still may add it back some day. But these are the issues we need to solve to do it.

Trade-offs: single-click restore from offsite vs. zips. We chose to do single-click restores first.

Hope that helps you understand our thinking around this.

Todd

Good response… will carry on with current method for now.
It feels like, because we have multiple schedules, this could be an option on each schedule at the Otto end.

In the specific case that uses this most, we are less concerned about one-click restore and more that the whole day is archived so we can go back to that day (which includes masses of external PDF files, although I am working on some yearly archiving to OneDrive at the moment).
I also get that zipping is slow… however, my Wasabi management file then deletes whole days of backups once the 60-day retention period is past (we keep each 01 and 14 of the month), so that action is simplified rather than having to delete folders recursively (maybe just swings and roundabouts). But I know for sure that the directory listing you get back from the API is based on the ETag of individual files, so over that period the file count could go up by a factor of 10.
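That retention pass is roughly this shape, assuming one day-stamped folder per day under the offsite prefix (e.g. offsite/2024-05-03/); the bucket name and layout are placeholders:

```bash
#!/bin/bash
# Sketch of the 60-day retention pass: delete whole day-folders older than
# 60 days, unless the day is the 01 or 14 of the month, which we keep.
# Bucket name and the YYYY-MM-DD folder layout are assumptions.
CUTOFF=$(date -d "60 days ago" +%Y-%m-%d 2>/dev/null || date -v-60d +%Y-%m-%d)

s3cmd ls "s3://my-bucket/offsite/" | awk '{print $NF}' | while read -r PREFIX; do
    DAY=$(basename "$PREFIX")                  # e.g. 2024-05-03
    DOM=${DAY##*-}                             # day of month
    # YYYY-MM-DD sorts lexicographically, so a string compare works here.
    if [[ "$DAY" < "$CUTOFF" && "$DOM" != "01" && "$DOM" != "14" ]]; then
        s3cmd del --recursive "$PREFIX"        # drop the whole day in one call
    fi
done
```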