I have been tasked with testing OttoFMS and OttoDeploy, with the idea that we could replace our existing migration system for our client. In theory it should work great, but in my early testing things do not look so good. Our test environment is in AWS: two t3.2xlarge (8 vCPU / 32 GB RAM) Ubuntu servers, each with a 100 GB boot drive, a 400 GB data drive, and a 500 GB backup drive. This was all done to roughly replicate the production environment, which is on premises. Also matching the local machines is an 8 GB swap file, because Todd said you need it, so I matched what the actual production servers have.
Copies of the production files were transferred from S3 backups to our BigTest machine in the sky. That comes to about 53 files and 188 GB worth of stuff. The plan was to time a migration of a few files (30 GB, maybe?) from the BigStaging server in the sky, to compare against what we are doing in real life, back on premises.
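(For what it is worth, the transfer step itself was uneventful. A minimal sketch of how one backup could be pulled down with boto3; the bucket name, key, and destination path are hypothetical placeholders, not our actual names.)

```python
# Minimal sketch: pull one production backup file down from S3.
# Bucket, key, and destination path are hypothetical placeholders.
import boto3

s3 = boto3.client("s3")
s3.download_file(
    Bucket="example-prod-backups",        # hypothetical bucket name
    Key="fms-backups/Customers.fmp12",    # hypothetical object key
    Filename="/data/transfer/Customers.fmp12",
)
```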
My first failed test was to try to "deploy" all the files from BigTest to BigStage. This failed miserably. Through further testing I discovered that it didn't like the 30 GB files at all. I then tried smaller files, and it still failed.
Finally I thought I would try a sub-1 GB file, and that worked, rather nicely in fact. Next up was a single 2 GB file. As of this writing, based on checking htop, it looks like all the work on the first machine (BigTest, in this case) is done, but the zip file is still growing in /backupfolder/OttoFMS/inbox/build_blahblahblah
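(To put a number on "still growing", this is roughly the kind of polling I have been doing by hand: watch the build zip's size and print an effective write rate. A minimal sketch; the path is the placeholder from above, not a real build name.)

```python
# Minimal sketch: poll a growing file's size and print effective throughput.
# The path is a placeholder matching the inbox folder named above; the
# actual build directory/file name will differ.
import os
import time

PATH = "/backupfolder/OttoFMS/inbox/build_blahblahblah/build.zip"  # hypothetical
INTERVAL_SECONDS = 60

previous_size = os.path.getsize(PATH)
while True:
    time.sleep(INTERVAL_SECONDS)
    current_size = os.path.getsize(PATH)
    rate_mib_per_s = (current_size - previous_size) / INTERVAL_SECONDS / 1024 / 1024
    print(f"{current_size / 1024**3:.2f} GiB total, {rate_mib_per_s:.2f} MiB/s")
    previous_size = current_size
```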
It has been over two hours, and it's only at 1.6 GB of 2.3 GB.
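(Just to spell out what that rate means, here is the back-of-the-envelope arithmetic from the figures above, assuming the rate stays roughly constant.)

```python
# Back-of-the-envelope throughput: 1.6 GB written in a bit over two hours.
written_gb = 1.6
total_gb = 2.3
elapsed_hours = 2.0  # "over two hours", so this is a lower bound

rate_mb_per_s = written_gb * 1024 / (elapsed_hours * 3600)
remaining_hours = (total_gb - written_gb) / (written_gb / elapsed_hours)
print(f"~{rate_mb_per_s:.2f} MB/s, ~{remaining_hours:.1f} h to go")
# -> roughly 0.23 MB/s, and almost another hour for the last 0.7 GB
```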
Of course this is unacceptable, but I am more than willing to confess my ignorance on such matters, so here I am. What am I missing? Or is OttoFMS/OttoDeploy simply not capable of dealing with anything over 1 GB?