Downloading Backups via API

Hello,

Is it possible to download a backup (listed within the FMS Backups folder in the File Manager) as a .zip archive using the Developer API? If not, what would be the recommended approach for downloading a FileMaker backup as a .zip archive, initiated from a separate Node backend app? Thanks in advance!

Hey Nick,

There is not a way to get a signed download link from the Developer API, so at the moment there is no way to automate getting that link from a separate Node process.

Are you trying to download the backups to a different machine automatically? I could see a couple of ways to handle this automation:

  1. Set up an offsite backup to S3 and download the files from S3 programmatically (this is the only directly supported way to get the files off the server with OttoFMS; see the sketch after this list)
  2. Manually set up a remote location that uses FTP as its destination via the rclone config file. This method is completely untested by the team at Proof+Geist, but it is theoretically possible with rclone. The only snag may be setting up the schedule in the OttoFMS console, as the UI assumes you are using an S3 location. We are planning to introduce true support for this option in the future. Check out the RClone FTP Docs for details on the config options you would need to set.
  3. We could add an API to OttoFMS that returns signed download links. We are planning to introduce a feature that lets you get these links from the File Browser in the UI, and it would not be a huge stretch to include an API that does the same.
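
To sketch option 1: once the offsite backup is in S3, a Node backend can pull a backup file down with the AWS SDK. This is a minimal sketch assuming the AWS SDK v3 (`@aws-sdk/client-s3`); the region, bucket name, and key layout are placeholders to swap for your own offsite configuration.

```ts
import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3";
import { createWriteStream } from "node:fs";
import { pipeline } from "node:stream/promises";
import type { Readable } from "node:stream";

// Placeholder region and names throughout: use your own offsite settings.
const s3 = new S3Client({ region: "us-east-1" });

async function downloadBackupObject(bucket: string, key: string, dest: string): Promise<void> {
  const { Body } = await s3.send(new GetObjectCommand({ Bucket: bucket, Key: key }));
  // In Node, Body is a readable stream; pipe it to disk rather than buffering in memory.
  await pipeline(Body as Readable, createWriteStream(dest));
}

// The key layout depends on how your offsite backup schedule is configured.
await downloadBackupObject("my-offsite-backups", "backups/Daily_2024-01-01/Customers.fmp12", "./Customers.fmp12");
```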

What is your use case for downloading the files to a separate machine?

-Kyle

Edit: I hit send too early, whoops

Hey Kyle,

Thanks for the prompt response! We’re trying to let our clients download their FileMaker backups as a .zip package (via a web portal we manage). I watched the network tab in devtools while using the OttoFMS UI, but didn’t see the /downloadZip endpoint listed in the Developer API.

I’ve actually tried the Offsite Backups feature, but this would require the web app to download all the objects from the S3 backup and zip them on the backend (which, depending on the size of the FileMaker database plus container data, could take a while and a lot of temporary disk space). Is it possible to have Otto zip the backup contents prior to uploading to S3?

Zipping the files after they are uploaded to S3 was a little hacky and hard to manage. Since S3 doesn’t really have the concept of “directories”, downloading and zipping a particular backup (using the backup “folder” name as a key prefix) was pretty complicated.
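
For reference, here is roughly the shape of what we tried, as a sketch: list everything under the backup’s key prefix and stream each object into a zip. This assumes the AWS SDK v3 and the archiver package, and the region, bucket, and prefix names are placeholders.

```ts
import { S3Client, ListObjectsV2Command, GetObjectCommand } from "@aws-sdk/client-s3";
import { createWriteStream } from "node:fs";
import type { Readable } from "node:stream";
import archiver from "archiver";

const s3 = new S3Client({ region: "us-east-1" }); // placeholder region

// Zip every object under a backup "folder", which in S3 is really just a key prefix.
async function zipBackup(bucket: string, prefix: string, dest: string): Promise<void> {
  const zip = archiver("zip");
  zip.pipe(createWriteStream(dest));

  // ListObjectsV2 returns at most 1000 keys per page, so follow the continuation token.
  let token: string | undefined;
  do {
    const page = await s3.send(
      new ListObjectsV2Command({ Bucket: bucket, Prefix: prefix, ContinuationToken: token })
    );
    for (const obj of page.Contents ?? []) {
      const { Body } = await s3.send(new GetObjectCommand({ Bucket: bucket, Key: obj.Key! }));
      // Store each object under its path relative to the backup folder.
      zip.append(Body as Readable, { name: obj.Key!.slice(prefix.length) });
    }
    token = page.NextContinuationToken;
  } while (token);

  await zip.finalize();
}
```

It works, but it pulls the whole backup through the web app’s disk and memory, which is exactly what we’d like to avoid.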

Any advice would be much appreciated!

Hmm, yeah, that is an interesting idea. The APIs we are using for the file browser are not intended to be used outside of OttoFMS and the Cloud Console, and they have a couple of things that would be difficult to work with outside of those circumstances.

On the point about the offsite backups being zipped: we are planning to add support for zipping backups before they get sent up. We’re currently discussing whether to zip individual files and container data separately and send up a folder of zips, or to zip the entire backup folder and send it up as a single archive.

I think it would be reasonable to get a signed download URL from the OttoFMS Developer API, so I’ll add an API for it in the next version. It will probably take in an array of folder names, which OttoFMS will use to build the path, and send back a URL that you can redirect the user to in order to download the files. Does that sound like what you’re looking for?
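
From your side it might look something like this. To be clear, this endpoint does not exist yet, so the path, payload, auth header, and response shape below are all hypothetical sketches of the planned design:

```ts
// Hypothetical sketch: the endpoint, payload, and response shape are guesses
// at a planned OttoFMS API that does not exist yet.
async function getBackupDownloadUrl(folders: string[]): Promise<string> {
  const res = await fetch("https://your-server.example/otto/api/backups/download-url", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OTTO_API_KEY}`, // hypothetical auth scheme
    },
    // e.g. ["Daily_2024-01-01"]; OttoFMS would build the backup path from these.
    body: JSON.stringify({ folders }),
  });
  if (!res.ok) throw new Error(`Failed to get download URL: ${res.status}`);
  const { url } = (await res.json()) as { url: string };
  return url; // redirect the user here to start the download
}
```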

-Kyle

Side note: congratulations on being post #1000 on the Ottomatic Community!

I love the idea of zipping files before they get sent up to S3. Just to offer my two cents: a single .zip archive per backup would be ideal, at least for my use case. 99% of the time I’d want all the FileMaker databases (and their respective container data) in one .zip, as opposed to one .zip per database, if that’s the choice in question.

I also really like the idea of signed URLs for the downloads, as long as the API gives me some level of discoverability to “list” the directories that can then be signed and downloaded. For my own use case I’d still prefer a single .zip file on S3, but the signed URLs would be useful as well.

Thanks!

I also have a use case for getting a URL to download a backup. Essentially, the client needs a way to download a backup (or a consolidated backup that we generate by caching data) for use in the field. Sync isn’t directly an option right now, though it would be the ideal solution eventually.