But in this case, it’s the older, no-longer-supported version that is turbo-charged.
I suspect that the pre-19.6 DMT was able to use more memory. I think they may have added what they considered to be safety measures to protect against using all the memory on a machine.
I still think your file should be migrate-able with the current DMT, and if it isn’t, it might be helpful to let Claris know.
Does this problem file EVER complete the migration phase in your tests?
Todd
My last test was with the recovered file, and basically, it runs right to the end and hangs. That said, this is exactly what happened when I ran the migration with the FDM command line on a control computer. I essentially shut it down after a day. Note that I was then able to open the file, with the “Consistency Check” alert.
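For anyone following along, the command-line run was a standard FMDataMigration invocation. A minimal sketch, with placeholder paths, accounts, and passwords rather than my actual setup:

```
# Minimal sketch of an FMDataMigration run. Paths, account names,
# and passwords below are placeholders. -v prints verbose progress.
./FMDataMigration \
  -src_path /migrations/Billings.fmp12 \
  -src_account admin -src_pwd "secret" \
  -clone_path /migrations/Billings_Clone.fmp12 \
  -clone_account admin -clone_pwd "secret" \
  -target_path /migrations/Billings_migrated.fmp12 \
  -v
```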
So, yesterday I ran another recovery on that test file, and I’m uploading it to the test server to see if we have success. This time I will leave the memory cranked up and see if it works.
Of course, doing three recoveries on a 30GB file takes a lot of downtime, so we are working on another possibility: recovering the clone until it’s pure (although I already did this 7 times), then running the migration on Midge (where it “works”), and then recovering the result to see if it’s clean. (Not my plan, but if it works, great; if it fails, they won’t ask me to do it again.)
So it sounds like this one file just won’t migrate with the FileMaker 19 and later Data Migration Tool, but it will migrate with the FileMaker 18 version. This may have something to do with corruption, but we don’t know.
I think if you were willing to report your findings to Claris, they might be able to figure out what it is, and maybe improve the latest versions of the tool, or tell you how to fix your file. It might be worth a shot.
Let us know if you’d like us to help facilitate that.
Todd
Do you have the recovery log from Billings?
So I got it to migrate. YAY! With Otto, WOOO HOOO!
The procedure that worked for me was:
- Recover (6 hours). [Then try, and fail, to migrate.]
- Recover without indexes (about 30 min!). [Fail to migrate with the command line.]
- Recover again (another 6 hours). [Soup up the server, give Otto another try.]
This was done on the data file, and even after doing the third recovery I still get “WARNING: problems were detected while recovering…” blah blah blah
But it seems to have been sufficient. When I popped the file back up to my test server (with the help of S3 Transfer Acceleration), souped up the server for more horsepower, and re-ran the migration, it was happily done in 3 1/2 hours.
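(Side note for anyone else shuffling multi-gigabyte files around: here’s a sketch of switching on Transfer Acceleration with the AWS CLI. The bucket name is a placeholder, and the bucket itself needs acceleration enabled before the fast endpoint does anything.)

```
# One-time: enable Transfer Acceleration on the bucket
# (bucket name is a placeholder).
aws s3api put-bucket-accelerate-configuration \
  --bucket my-test-bucket \
  --accelerate-configuration Status=Enabled

# Tell the CLI to use the accelerate endpoint, then copy as usual.
aws configure set default.s3.use_accelerate_endpoint true
aws s3 cp Billings.fmp12 s3://my-test-bucket/Billings.fmp12
```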
@dude2copy Yes, I have the recovery logs, all three of them, but I’m not the best person for reading said logs. I even got confused by the dang timestamps. There really aren’t a lot of errors recorded, but a lead developer suggests they could be from custom functions that no longer exist, or old plugins. This is an old file that has been through a lot.
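One thing that did help me skim them: assuming the usual tab-delimited Recover.log layout (timestamp, filename, error code, message), something like this pulls out just the lines with a non-zero error code:

```
# Show only Recover.log lines whose error-code column is non-zero.
# Assumes tab-delimited columns: timestamp, filename, error, message;
# adjust $3 if your log's columns differ.
awk -F'\t' '$3 != 0' Recover.log
```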
My purpose is to research and test Otto. Along the way, I’ve been able to find a couple of bugs, and even figure out how to optimize my choice of instance type for doing a migration. I just happened to run across a file that has personal issues and then tried to pass them off on me.
Cool. Yeah, I think your testing drew attention to a couple of things the team thought could be improved, and we were able to ship those. I would highly recommend that your team take a hard look at the recovery logs and see what’s ailing the file. We have a lot of confidence in OttoFMS and OttoDeploy, but they cannot, ultimately, deal with underlying database problems or corruption issues. Keep us posted on your journey. If there is something we can do to make it easier or better, we want to know.
Best.
Did you check to see if you have a Recovery Blob table?
As recorded earlier in this thread, there were two recovery tables that were removed from both the donor (clone) file from staging and the data file. One of the recovery tables had 48 records with two fields, one containing little icons.