Our production server has around 185 GB of container data: photos from inspections at customers’ homes. The vast majority of these are historical data that we want to keep. We add maybe 25-100 photos per day, and these are typically under a MB each as we resize on insertion.
I ran my first offsite backup using Otto to an AWS S3 bucket, and due to the number of files it took about 6 hours. The main file for the DB is about 4.5 GB.
How would you suggest handling offsite backups? I assume that a zip would push faster, as it’s a single file, but I believe a zip is size limited?
Are you sending up the container data using the “Send additional container data folders” setting, or is it being backed up with the database itself?
If you are using the “Send additional container data folders” setting, that does a sync up when you run the offsite backups, so it will not send everything every time. However, if you are sending all of the container data every time (wowee, that’s a lot of data), that will always take a while.
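To illustrate what that sync does conceptually, here’s a minimal sketch (this is not OttoFMS’s actual code; the bucket name, prefix, and local path are hypothetical, and it assumes boto3 with AWS credentials configured): compare what’s already in the bucket against the local folder, and upload only what’s new or changed.

```python
# Conceptual sketch of a "sync up" to S3 -- NOT OttoFMS's actual implementation.
# Assumes boto3 is installed and AWS credentials are configured.
# Bucket name, prefix, and local path below are hypothetical examples.
import os
import boto3

s3 = boto3.client("s3")
BUCKET = "my-offsite-backups"        # hypothetical
PREFIX = "container1/"               # hypothetical
LOCAL_ROOT = "/path/to/Container1"   # hypothetical

# Build a map of keys already in the bucket and their sizes.
existing = {}
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=BUCKET, Prefix=PREFIX):
    for obj in page.get("Contents", []):
        existing[obj["Key"]] = obj["Size"]

# Upload only files that are missing or whose size changed.
for dirpath, _dirs, files in os.walk(LOCAL_ROOT):
    for name in files:
        local_path = os.path.join(dirpath, name)
        key = PREFIX + os.path.relpath(local_path, LOCAL_ROOT).replace(os.sep, "/")
        if existing.get(key) != os.path.getsize(local_path):
            s3.upload_file(local_path, BUCKET, key)
```

Since almost all of your 185 GB is historic and unchanging, a sync like this only has to push the handful of new photos each day.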
While a zip would push faster, and zips are not technically size-limited (the ZIP64 format lifts the old 4 GB cap), I’m not sure it would actually decrease the total time the offsite takes to send, as creating the zip takes some time as well. Any gains you get from a smaller file size would likely be offset by the creation time. If you are more worried about the space on your S3 location, a zip might help. Note that the zip setting does not apply if you are backing up the container data with the external container data folders rather than with the database.
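As a back-of-envelope illustration with made-up numbers (every rate here is an assumption, and since the photos are already-compressed JPEGs the size reduction from zipping would be small anyway):

```python
# Toy "zip then upload" vs "upload directly" comparison.
# All rates and the compression ratio are assumptions, not measurements.
data_mb = 185 * 1024   # ~185 GB of container data
zip_mb_s = 100         # assumed zip creation throughput
upload_mb_s = 11       # assumed upload throughput
ratio = 0.97           # JPEGs barely shrink when zipped (assumption)

direct_h = data_mb / upload_mb_s / 3600
zipped_h = (data_mb / zip_mb_s + data_mb * ratio / upload_mb_s) / 3600
print(f"direct: {direct_h:.1f} h, zip+upload: {zipped_h:.1f} h")
# direct: ~4.8 h, zip+upload: ~5.2 h -- the zip step eats the size gain
```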
I think my recommendation would be to back up the container data to the offsite using the external container data folders (assuming you’re already using those folders). This keeps the total size of all files on your S3 location relatively small, while ensuring you have all the files you might need stored. It will push up new files but not delete files that might have been removed on the local server.
I am using the “include container folder 1” and 2. I must have missed the “additional container data” setting.
If I’m understanding you correctly, that would mean that in future backups to the S3 bucket, only the container data that didn’t exist in the previous day’s backup gets sent? I would prefer that.
Sorry, the “include container folder 1” and 2 settings were the ones I was talking about; I just forgot what the exact text was.
And yes, in future backups to S3 using the include container folder option, it will only send up new or changed files. If you look at the structure in the offsite location (you can do this in the OttoFMS file browser) you should see a “dbs” folder and a “container1” and “container2” folder. The container folders will contain all the container files, while the dbs folder will contain each individual backup of the .fmp12 files.
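So the offsite layout looks roughly like this (the timestamped backup folder names here are just examples):

```
offsite-location/
├── dbs/
│   ├── 2024-05-01_2300/   ← one folder per backup run (.fmp12 files)
│   └── 2024-05-02_2300/
├── container1/            ← single synced copy of container files
└── container2/
```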
No worries. At first I assumed that was what you meant, but then I started second guessing myself haha
I did set the number of backups to keep at 0, assuming that, like FMS does, it just writes the additional containers and swaps out the main files each time.
As you noted, it’s a lot of data, and it isn’t getting smaller. The log file shows “Sending backup complete. 266400 files, totalling 194.32 GB, sent in 17860.8 seconds”.
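For reference, rough back-of-envelope math on that log line:

```python
# Back-of-envelope math on the quoted log line.
files, gb, seconds = 266_400, 194.32, 17_860.8

print(seconds / 3600)        # ≈ 5.0 hours total
print(gb * 1024 / seconds)   # ≈ 11.1 MB/s average throughput
print(files / seconds)       # ≈ 14.9 files per second
```

If my math is right, that’s only about 15 files per second, so the per-file overhead seems to matter more than raw bandwidth.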
Is it sending the container data to the “dbs” folder as well as to the “container” folders in your S3 location? It’s possible that you have it set up to back up the container data with your .fmp12 files, so it’s sending twice?
It’s interesting you mention the duplicates. I let it run last night, hoping it would be faster; however, as you’ll see in this screenshot, it did double. So, what am I doing wrong?
It’s hard to answer that without seeing what files got pushed. If you’re sending the external container folders, they should be populating in your S3 location in a “container1” or “container2” folder. Is there also container data in the RC_Data_FMS folder inside the individual backup folders? If so, you could turn off the backup of the external container folders in the FMS Admin Console; that would ensure the container data only gets pushed up to the container folders and not duplicated in each nightly backup.
Normally, if you have the option turned on to send the container data separately, I would expect to see a line in the otto-info.log about the container data getting sent, which I do not. Could you send a screenshot of your Offsite Schedule config?
Yes, OttoFMS does behave slightly differently when you set the backups to keep to 0. It will rename the created backup folder to include a timestamp before it pushes it up to the offsite location. Then it will push the files up and delete the local version of the backup. The idea is that you could run a backup every night without keeping any of them on your server; disk space is only used while the backup is running and sending, so you can keep the disk space requirements relatively low.
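In rough Python terms, the flow looks like this (a conceptual sketch of the behavior described above, not OttoFMS’s actual code; the path and the upload helper are hypothetical):

```python
# Conceptual sketch of the "keep 0 backups" offsite flow described above.
# NOT OttoFMS's actual code; the path and upload helper are hypothetical.
import shutil
from datetime import datetime
from pathlib import Path

def push_to_offsite(folder: Path) -> None:
    """Hypothetical stand-in for the actual push to the offsite location."""
    ...

backup_dir = Path("/opt/FileMaker/Backups/offsite_run")  # hypothetical path

# 1. Rename the freshly created backup folder to include a timestamp.
stamped = backup_dir.with_name(f"{backup_dir.name}_{datetime.now():%Y-%m-%d_%H%M}")
backup_dir.rename(stamped)

# 2. Push the renamed folder up to the offsite location.
push_to_offsite(stamped)

# 3. Delete the local copy so disk is only used while the job runs.
shutil.rmtree(stamped)
```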
For the files you are sending, are the container data files stored in the Additional Container Data folders, or are they stored in the RC_Data_FMS folders next to the file in the Databases folder?
Ah, that is why it’s sending them each time. The “Include Container Data Folder” settings cover the additional container data folders set up in FileMaker. The RC_Data_FMS folders are always included with each backup, so in your case it will send a copy of the container data every time, since the container data is stored alongside the databases.
If you want to store the container data separately (and not send it every time), I would store it in the additional container data folders rather than alongside the databases.
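To make the difference concrete, the two layouts look roughly like this (folder names are illustrative):

```
Databases/
├── MyFile.fmp12
└── RC_Data_FMS/
    └── MyFile/      ← alongside the database: re-sent with every backup

AdditionalContainerData/
└── MyFile/          ← additional container data folder: synced once
```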
Yes, FileMaker gives you an option to specify a folder (or two) outside of the Databases folder to store external container data. Check out the doc here. If you’re using this external folder, OttoFMS can send it to offsites separately from the databases, and it will only keep a single copy rather than one copy per backup run.
Ahhhh. Ok. Reading the document (thank you for the link), I see that it’s different from just externally stored containers. Interesting.
For now I will be rolling with the setup we have, as I don’t want to risk issues. However, we’re planning a large dev change with the main file, and I will for sure roll this into those plans.
Thank you so much for your guidance and unexpected responsiveness.