For the past few days, our deployments have stopped completing. They get stuck during migration on tables that should only take a few seconds, and I have had to cancel the deployment after 15 minutes or more.
At first, it was an empty, unchanged table that I had to abort several times. After we deleted it, the deployment got stuck at another table.
Deployments to a test system work. I will compress the database file in the target system and hope that this solves the problem.
This could be completely normal if a very large table needed to be migrated in record mode.
The Data Migration Tool examines the changes that were made to each table and determines whether the table can be migrated in “block” mode or “record” mode. It does this on its own as part of the migration. If it determines that a table needs to go into “record” mode and there are a lot of records, this can take a long time, because each record is moved over one by one. “Block” mode just shoves the data into the file, so it is very fast.
If you look in the migration log, you will find an entry for each table as it begins. It will show whether it was processed in record mode or in block mode.
If this is the case, once you get through this longer migration, the next migration will likely go back to the faster block mode, unless you make more changes to that same table that kick it once again into record mode.
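If you want to double-check the per-table mode without reading the whole log by eye, something like this rough Python sketch could pull out the entries. The exact log wording is an assumption on my part, so adjust the pattern to whatever your log actually says:

```python
import re
import sys

# Rough sketch: scan a Data Migration Tool log and report which mode
# each table was processed in. The exact log phrasing is an assumption;
# tweak the pattern to match what your log actually contains.
MODE_PATTERN = re.compile(r"(?P<table>\S+).*\b(?P<mode>block|record)\s*mode", re.IGNORECASE)

def summarize(log_path: str) -> None:
    with open(log_path, encoding="utf-8", errors="replace") as log:
        for line in log:
            match = MODE_PATTERN.search(line)
            if match:
                print(f"{match.group('table')}: {match.group('mode').lower()} mode")

if __name__ == "__main__":
    summarize(sys.argv[1])
```

Point it at the migration log from the deployment and it prints one line per table it finds, which makes it easy to spot anything that went into record mode.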
Thanks for your quick response. It was block mode for each, and as written, the first table was empty and the second one had <100 records. And neither table had been changed at all…
I compressed the database, and at the moment it is getting stuck again at the same table (27 fields, 57 records) as before.
Glad you got through it. It will be interesting to see what happens next time.
There are really just three or four reasons that a migration slows down:
1. A large table gets kicked into record mode. Based on your response, that isn’t what is happening here.
2. The migration was set to reindex a large table and recalculate stored calcs.
3. Your server is maxed out. The Data Migration Tool is subject to the constraints of your server. Each migration will grab 2 GB of memory, and it will take every bit of CPU it can get its hands on. If your server is tiny or is getting hammered by other things, then your migration will slow way down. People run into this when they use burstable T-type AWS servers; we don’t recommend them for this reason. (See the quick headroom check sketched after this list.)
4. The only other thing we have ever seen interfere with the data migration is corruption. Usually that just causes the migration to fail, but I suppose it is possible it could just slow it down.
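If you suspect reason 3, here is a rough Python sketch for checking whether the server has headroom before you kick off a deployment. It uses psutil and treats the ~2 GB per migration figure above as the bar; the 80% CPU cutoff is just my own arbitrary threshold for “already busy”:

```python
import psutil

# Quick sanity check before starting a migration: the tool wants roughly
# 2 GB of memory per concurrent migration (per the note above) and as
# much CPU as it can get.
MEMORY_NEEDED_BYTES = 2 * 1024 ** 3  # ~2 GB per concurrent migration

def server_has_headroom(concurrent_migrations: int = 1) -> bool:
    mem = psutil.virtual_memory()
    cpu_load = psutil.cpu_percent(interval=1)  # sample CPU over 1 second
    enough_memory = mem.available >= MEMORY_NEEDED_BYTES * concurrent_migrations
    cpu_not_hammered = cpu_load < 80  # arbitrary cutoff for "already busy"
    return enough_memory and cpu_not_hammered

if __name__ == "__main__":
    print("Looks OK to migrate" if server_has_headroom() else "Server looks constrained")
```

It won’t tell you anything about the migration itself, but it will catch the case where the box is simply starved before you start.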
@jhofmann I may be making an obvious comment, but be aware that a log entry is created only AFTER a table is processed. This kept me confused as well. So the migration is not “stuck” on the table in your latest log entry, but on the one after it, which is taking longer… This is just how the FileMaker Data Migration Tool works, and Otto builds on top of it.
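To make that concrete, here is a tiny Python sketch of the idea: the table actually being worked on is the one after the last table the log has recorded. The table list and names here are just made-up examples:

```python
# Tiny illustration of the point above: the log only records a table
# AFTER it finishes, so the table currently being migrated is the one
# after the last logged entry, not the logged one itself.
def table_in_progress(tables_in_order: list[str], last_logged: str | None) -> str | None:
    if last_logged is None:
        return tables_in_order[0] if tables_in_order else None
    idx = tables_in_order.index(last_logged)
    return tables_in_order[idx + 1] if idx + 1 < len(tables_in_order) else None

# Example: the log shows "Contacts" finished last, so "Invoices" is the
# table the migration is actually spending its time on.
print(table_in_progress(["Contacts", "Invoices", "Orders"], "Contacts"))  # -> Invoices
```

So when a deployment looks stuck, check which table comes after the last one in the log before deciding which table to blame.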