@toddgeist We have come a long way since I started in 1990 with FileMaker II on a Macintosh SE with 1 MB of memory and no hard drive, haven’t we…? One floppy disk for the OS and one for FileMaker, including the database.
We connected a few computers over LocalTalk. Sorting records could take the better part of an hour, but we were years ahead of our competitors, who did everything by hand.
These days we work with “print management”, and the high-res PDFs for professional printing can be quite big – up to 1 GB each. Moving them into our FM system has been a great improvement: everything that relates to a “project” lives in the system where all team members can see it, and there are no stray copies on the account manager’s local drive or a network disk, which makes it very easy to remove everything after a full year.
Given our process and how long we keep projects in the system, I expect the amount of data to grow to 1 TB, possibly 1.5 TB, within the next few years – but that won’t be a problem “in the future”, will it…?
PDFs are already compressed, so there’s little benefit in compressing them again, unless we wanted a single file containing the full backup. Handling external containers separately sounds better to me.
@kduval No, currently we are doing standard backups within FMS, including container data. I’ve been looking at other ways of doing this, such as ChronoSync, but if Otto has a solution, that sounds like the best option for me.
Our server is in a data center that is monitored 24/7, so it’s pretty secure, but one should always have a second backup at another location. I’m thinking this should be at least weekly, but it could be daily. Hopefully – and “likely” – it will never be needed.
What I would like is to keep doing the full backups from FMS to the attached backup disk (including container data), and then add the offsite backup through Otto – but there I would like to handle the container data as you explained above, so that only new files are transferred to S3.
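For what it’s worth, the “only new files” idea boils down to a name-based sync: look at what is already present at the destination and copy only the files that are missing. This is just a minimal sketch of that concept, using a local folder to stand in for the S3 bucket – the function name and layout here are made up for illustration, not how Otto actually does it:

```python
import shutil
from pathlib import Path

def sync_new_files(container_dir: Path, offsite_dir: Path) -> list[str]:
    """Copy files from container_dir to offsite_dir only when a file
    with the same name is not already there. Returns the names copied."""
    offsite_dir.mkdir(parents=True, exist_ok=True)
    already_there = {p.name for p in offsite_dir.iterdir()}
    copied = []
    for src in sorted(container_dir.iterdir()):
        if src.is_file() and src.name not in already_there:
            shutil.copy2(src, offsite_dir / src.name)  # preserves timestamps
            copied.append(src.name)
    return copied
```

Run it daily and only the files added since the last run get transferred – which is exactly why this scheme depends on the source files keeping stable names between runs.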
I understand how nice it is to have your PDFs in the database, or closely connected to it. It is very convenient.
However, I think storing gigabytes of file data in any database system is a bad idea. It makes almost every DevOps task harder, and it is risky. There are other ways to link files to database records that are much safer and much easier to maintain.
You are correct that we can’t do it the way you want, because each time you make a backup you create all-new container files. The S3 sync process sees those all as new files, so it will send them all.
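To make the problem concrete: when a backup writes fresh copies of every container file, any sync that compares names or timestamps re-sends everything, even though the content is mostly unchanged. Comparing content hashes instead would show that. This toy sketch (hypothetical names, not any product’s actual implementation) only reports files whose bytes haven’t been seen before:

```python
import hashlib
from pathlib import Path

def new_content(backup_dir: Path, seen_hashes: set[str]) -> list[Path]:
    """Return files in backup_dir whose content hash has not been seen
    before, updating seen_hashes in place. Fresh copies of identical
    files are skipped, even when their names or timestamps changed."""
    new_files = []
    for f in sorted(backup_dir.iterdir()):
        if not f.is_file():
            continue
        digest = hashlib.sha256(f.read_bytes()).hexdigest()
        if digest not in seen_hashes:
            seen_hashes.add(digest)
            new_files.append(f)
    return new_files
```

The trade-off is that hashing means reading every file on every run, which is its own cost at hundreds of gigabytes – part of why name-based sync against the live container folder, rather than the backup, is the more practical route.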
@toddgeist You are right. I bumped into one of those risks after adding this 8–9 years ago: files in interactive containers started misbehaving. I asked around, but it seemed nobody at Claris had heard of it.
I have since found out the reason: the default cache in FM is “only” 128 MB. If you work with larger files (in interactive containers), FM will keep re-fetching the file(s). We even managed to bring down the server when two people looked at the same heavy files at the same time. While developing this I had, of course, only tested with some smaller PDFs, which worked just fine…
Trade-off: no more interactive containers.
Next challenge: FM can’t produce a thumbnail from a print-ready PDF – the “thumbnail” ends up just as heavy as the original. So now we create the thumbnails outside of FM and import them: we show a thumbnail, but can download or send the actual file. So far it has worked well.
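That external thumbnail step is essentially one call to a PDF rasterizer. As a hedged sketch – the tool actually used in our pipeline may differ – here is how it could look with Poppler’s `pdftoppm`. The helper names are hypothetical, and the command builder is kept separate from the call so it can be tested without Poppler installed:

```python
import shutil
import subprocess
from pathlib import Path

def thumbnail_cmd(pdf_path: Path, out_prefix: Path, width: int = 200) -> list[str]:
    """Build a pdftoppm command rendering page 1 of a PDF to a small PNG
    (pdftoppm appends a page-number suffix to out_prefix, e.g. -1.png)."""
    return [
        "pdftoppm", "-png",
        "-f", "1", "-l", "1",        # first page only
        "-scale-to-x", str(width),   # thumbnail width in pixels
        "-scale-to-y", "-1",         # -1 keeps the aspect ratio
        str(pdf_path), str(out_prefix),
    ]

def make_thumbnail(pdf_path: Path, out_prefix: Path) -> None:
    """Render the thumbnail; raises if Poppler is not installed."""
    if shutil.which("pdftoppm") is None:
        raise RuntimeError("pdftoppm not found on PATH")
    subprocess.run(thumbnail_cmd(pdf_path, out_prefix), check=True)
```

The resulting small PNG is what gets imported into the container field, while the original print-ready PDF stays untouched for download.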