Hopefully this is a stupid question with an easy answer (but thankfully I caught myself before I did the exact opposite of what I want to do). So, I have successfully migrated schema from a staging server to Production, and it all came out perfectly. I then manually copied those new production files back to staging so that staging would be fully up to date.
But wait… now I’m being asked to take the DATA on Production and migrate it into the files on Staging.
Then I realized that this is not how OttoDeploy/OttoFMS works. I also realized that I can probably address this by creating a build or some such.
I believe you’re looking for the Refresh Staging guide from our docs. Check out the video posted on that page for an example, as well as the other deployment patterns listed there.
Yep, that was exactly what I was looking for, and kind of what I was thinking. The problem is that there are people above me in my office who didn’t understand why OttoFMS couldn’t do that kind of process as easily as doing it the other way. Also, at least I didn’t do a reverse migration that then blew everything up.
In this case, I’ll need to do a single-server migration, which is a new thing for me. I’ll name our cleaned staging files something different and place copies of the data files on staging. Yay.
It has been a while, and now I’m returning to this question about a Reverse Migration or staging refresh.
Here’s our situation: the client has TWO, not three, servers: a Production server and a Staging server. (There is a Development server, but it does not factor into this migration/refresh.) We need to move Production data into the Staging schema.
There are 53 files which total about 200GB.
The two servers are both Ubuntu running FMServer 2023 with plenty of memory and processing power. Both servers are on site, not in the cloud.
I know that I will break up the 53 files into multiple sub-deployments so that if something fails along the way, we don’t lose all the other work done.
I know I can’t just outright replace the files on staging as a first step because I need that schema… Great… This leads me to “BUILDS.” I’ve never created a build and put it anywhere, so I need to explore this. Any guidance on this topic would be helpful for me.
The documentation indicates that the build files can be hosted on a website somewhere, but I don’t have access to websites for the client. BUT, I do have access to a network drive. So if OttoFMS can pull from the network drive, then this might just work.
But… how big can those builds be? I assume that if I created a build from the full 200GB of files, that would be problematic.
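For my own sanity, here is a quick back-of-the-envelope on what moving a build that size would mean over our on-site network. The link speed and overhead factor below are just my assumptions, not measurements:

```python
# Rough estimate of how long a build of a given size takes to move over the network.
# Link speed and overhead factor are assumptions; adjust for your environment.
build_size_gb = 200            # total size of the build
link_speed_gbps = 1.0          # assumed gigabit link between the on-site servers
efficiency = 0.7               # assumed real-world overhead (protocol, disk, compression)

effective_gb_per_sec = (link_speed_gbps / 8) * efficiency   # ~0.0875 GB/s
transfer_seconds = build_size_gb / effective_gb_per_sec
print(f"~{transfer_seconds / 60:.0f} minutes per full transfer")  # roughly 38 minutes
```

So each full fetch of a 200GB build would be on the order of half an hour to an hour on our hardware, which is why the question of how often the build gets pulled matters to me.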
-------------What I will Probably Do------------
Until I can figure out the proper way to get this done in the client’s on-site environment, and given that the developer will be pushing to get it done ASAP, I will probably go to the test environment I built in their AWS account: copy the S3 production backups directly onto my BigTest server, upload a set of clones from Staging to their AWS BigStage server in the sky, and run it like a normal migration there. Then I’ll copy all the files from BigTest to S3, download them to their Staging server, and re-host.
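Roughly, that first copy step would look something like the sketch below. This is just my own illustration using boto3; the bucket name, key prefix, and local path are placeholders for the client’s environment, and the aws CLI would do the same job:

```python
# Rough sketch of pulling the production backups from S3 onto BigTest.
# Bucket, prefix, and local path are placeholders for my environment.
import os
import boto3

BUCKET = "client-fms-backups"            # hypothetical bucket name
PREFIX = "Backups/Latest/"               # hypothetical key prefix for the backup set
LOCAL_DIR = "/opt/FileMaker/incoming"    # hypothetical landing directory on BigTest

s3 = boto3.client("s3")
paginator = s3.get_paginator("list_objects_v2")

for page in paginator.paginate(Bucket=BUCKET, Prefix=PREFIX):
    for obj in page.get("Contents", []):
        key = obj["Key"]
        if not key.endswith(".fmp12"):
            continue                      # only pull the FileMaker files
        dest = os.path.join(LOCAL_DIR, os.path.basename(key))
        print(f"downloading {key} -> {dest}")
        s3.download_file(BUCKET, key, dest)
```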
Hey Fred, while builds would help with this, you can also do it without them! There is an alternate version of the “Refresh Staging” deployment pattern which we call the “Refresh Dev” pattern. It sounds like exactly what you need, as it moves the production files back to a server and migrates the existing files on that server to the newly copied files. You can think of it as a staging refresh with an added step of replacing the existing files with the newly created files. We have a guide on it in the docs.
As for your questions about builds, there is no limit on the size of a build, but it will be fetched for every sub-deployment that uses it, so the limitations of network speed come into play. If you’re planning on doing different deployments for subsets of files, I would match the builds to those subsets. The nice thing about a build for large migrations like this is that we don’t have to recreate the copies of the files if you need to rerun the migration (unless you want a new copy of the file). This means that rerunning it a couple of times to work out an error or two has much less effect on your production server.
There are a couple of ways for you to use the build. You can simply create one on your staging server, and then set that as your source. The build gets stored on the server and can be pulled from a different server. In your case you would be migrating from that build on the same server, so the build should not even need to go over the network, and it should just get grabbed and used.
For your curiosity’s sake, the FileMaker HTTP folder on the server counts as a web server for our purposes here, and you can put a build into it to be used by a different server. To use a build from a URL you need to make sure you keep the folder structure and files that are present in the OttoFMS Outbox on the web server where you place the build.
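If you do go that route, the copy itself is just mirroring the build’s folder out of the Outbox into the web-served folder. Here’s a minimal sketch; both paths are assumptions (they vary by platform and install), and you may need to adjust permissions so the web server user can read the copied files:

```python
# Minimal sketch: mirror a build folder from the OttoFMS Outbox into the
# FileMaker HTTP folder so another server can fetch it over HTTP.
# Both paths and the build name below are assumptions; verify them on your install.
import shutil
from pathlib import Path

OUTBOX = Path("/opt/FileMaker/FileMaker Server/Data/OttoFMS/outbox")   # assumed Outbox location
HTTP_ROOT = Path("/opt/FileMaker/FileMaker Server/HTTPServer/htdocs")  # assumed web root

build_name = "my_build"                          # placeholder build folder name
src = OUTBOX / build_name
dest = HTTP_ROOT / "otto-builds" / build_name    # keep the same folder structure under the web root

shutil.copytree(src, dest, dirs_exist_ok=True)
print(f"copied {src} -> {dest}")
# Note: the web server user must be able to read everything under dest.
```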
tldr: the refresh dev pattern should get you what you need. A pre-build is likely a good idea but not required, and you don’t need to move it to an external web server (unless you want to)
The one situation I would be careful of is if your Production server is a Windows server. There are limitations in the FileMaker IIS web server on Windows which make large file transfers like this extraordinarily slow. To get around the issue you can move the build into the HTTP folder, or move it to the destination server manually. We have some docs on this problem if you want to know more. We are planning on releasing a better workaround for IIS in our next version of OttoFMS.
Thanks Fred, let me know if you have any questions, I know that was a lot of information to throw at you all at once haha.
If I follow this procedure correctly, the following occurs:
1. All 53 files are copied down to the staging/dev server (I’m assuming to the actual data drive, not the Otto inbox?), but they are all renamed so as not to overwrite the existing files.
2. Do a standard single-server migration, moving the schema from the original files to the production copies (I assume this happens in the Otto inbox, so make sure there is a lot of room available): BigFile.fmp12 --schema--> BigFile_staging.fmp12
3. Flip the files around. Basically, overwrite the original files with the staging files by selecting “replace” and then renaming the staging files: BigFile_staging.fmp12 is renamed to BigFile.fmp12, obliterating the original BigFile.fmp12.
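Just to make sure I have that flip step straight in my head, here is roughly what it amounts to on disk. This is illustration only: OttoFMS does this as part of the deployment, and the databases path and the _staging suffix are just my placeholders:

```python
# Illustration only: what step 3 ("flip the files around") amounts to on disk.
# OttoFMS handles this during the deployment; the path and suffix are placeholders.
from pathlib import Path

DATA_DIR = Path("/opt/FileMaker/FileMaker Server/Data/Databases")  # assumed databases folder
SUFFIX = "_staging"

for staged in sorted(DATA_DIR.glob(f"*{SUFFIX}.fmp12")):
    original = staged.with_name(staged.name.replace(f"{SUFFIX}.fmp12", ".fmp12"))
    # The staged file takes the original's name, overwriting (obliterating) the original.
    staged.replace(original)
    print(f"{staged.name} -> {original.name}")
```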
I kind of get this. I don’t like having to rename everything twice, but it would do the job. And, of course, we are dealing with 200GB of fun.
That being said, let’s go back to the whole build thing. I like the idea of having a build set up. Although this might not be so easy with the production data (it takes time), once done, we could let the production server go back to work. I also like the idea of creating the dev/staging schema files, because then we could run a simple 1. replace files, 2. migrate from the build… or even pull from two builds.
To explore this, I created a clone build on my test staging server in AWS. The creation was fine, and I have the files in the outbox, but I can’t map those for a deployment, so I tried copying them to the FileMaker HTTP folder and that gloriously failed as well. I mucked around with some permissions, but no joy.
The documentation seems to indicate that another server could get to the outbox, but I’m not seeing how to get that accomplished.
Now, in the AWS environment I could potentially use an S3 bucket, but I’m trying to avoid ye ol’ public.
An important point: I’m not a web guy. Used to be, a long time ago, but no more.
A deployment can pull from the outbox as the source. In OttoDeploy you’ll select the server where the build is stored as the source, and then the build itself (instead of a just-in-time build). You’ll need to make sure advanced options are turned on to see the build select option.
The Outbox is essentially the default build location that OttoFMS pulls from if you tell it that a build is on a different FileMaker server running OttoFMS.
If you make a build on the staging server, you would be able to do the deployment as a two-step deployment, with the build on the staging server as an alternate source on your second sub-deployment. Essentially it would allow you to use the build in place of your dev server in the “Refresh Staging” pattern. That’s honestly a great alternative to the three-step process of the refresh dev pattern, as you don’t have any files to clean up once the deployment is complete.
Ah I see, so the build is your source, so it will need to be on your source server. In your case it sounds like you’ll be using two builds, one on the Big Test server and one on the Big Stage server. The build of your clones from Big Stage will be your source for your second sub-deployment, while the build of the copies from prod will be your source for the first sub-deployment.
OK, I think I see where the problem may be. When I created the first “Staging_Clones_1” set of files, I did not get a “download” option when all was said and done; I closed the window after it looked like it was done.
NO NO, that wasn’t it
I created another build on the real staging server of just three files. This one worked.
So I went back to my staging server and created yet another build (with less compression) and waited for it to show me the download dialog.
This all worked, but I still wasn’t seeing the builds, nor the other builds that were on that server from very early testing.
OK, I came to find out it’s just my use of the Environment tag on the servers (created before there was a custom option). DOH. “BigTest” is marked as “STG” or staging, and BigStage is marked as “DEV” or development. So I kept looking at the server with the orange STG instead of the green DEV.
Yesterday I was able to create a full stack build in my test environment (BigTest). This only took about 1 hour 40 min. I was also able to create a complete clone stack on the staging server (BigStage). Then I went about creating a deployment.
Part one is, of course, to replace all the files on BigStage with the super stack from BigTest. It was really annoying having to switch each of the 53 files to “replace”, but that part went well.
Part two is the migration: now I want to pull from the local inbox on BigStage to get the clones that I created… Cool. I had to change all 53 files to “migrate”, but there was a problem. Something was not right. When I went to review and deploy, I saw the following error: “Source file does not exist”.
It looks like you’ve managed to find a bug that I did not know about! Just found this one in our code, and it’s an annoying one. Looks like during the validation we are using the wrong build ID to fetch the build files (only when using an alternate source with a build). I’ll get a fix put together for this.
For now, you can get around the issue by creating a different build on your staging server with the same build ID as you are using for your default build source at the top. Essentially this would be creating a new copy of the Staging_Clones_2 build but with your other build ID. You don’t need to use that different build ID in your sub-deployment, but when the validation runs it should see that you have the source files properly. Sorry for the inconvenience there, I’ll get that fixed in our next version of OttoDeploy.
So I think that’s four bugs I’ve found; happy to help improve the product. I’ll be waiting for the update.
In the meantime, I created a new clone build but named it the same as the other build (which was at the top), “super_stack.” But this time, when I attempted to map the migration, I got “Cannot deploy a file onto itself.”
I figure I can wait for the bug fix, but in the meantime, a feature request: with so many files involved in this migration, it would be great if one could set ALL the files to the same operation at once, rather than having to click, drop down, click for each file.
In the big scheme of things this probably isn’t really a problem, but I figured I would share.
This pattern you came up with is pretty great. It’s way simpler than what we came up with. We have some stuff coming out soon that is going to make your pattern even easier.
Here’s another suggestion for a potential Otto-future-feature. One of the things our developers want me to do (because, leave it to the server guy) is to use the FMDeveloperTool to rename allllllllllll the files and keep them related. That way they can have both the old version and the new version at the same time.
It would be great if OttoDeploy/FMS had an ability to build that out. Just saying.
For what it’s worth, when I tested the --rename feature on a small set of files it worked great… not so much with 53; still working on it.
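To keep track of all 53 files, I’m just generating the old-to-new name mapping up front so the rename can be driven and verified in batches. This is only my bookkeeping sketch; the folder and the version suffix are placeholders, and the actual FMDeveloperTool invocation follows its own documentation:

```python
# Bookkeeping only: build the old -> new name mapping for all 53 files so the
# rename can be driven (and verified) in batches. The folder and the version
# suffix are placeholders for how I'm naming things.
import csv
from pathlib import Path

SOURCE_DIR = Path("/opt/FileMaker/FileMaker Server/Data/Databases")  # assumed location
SUFFIX = "_v2"                                                       # placeholder new-version suffix

files = sorted(SOURCE_DIR.glob("*.fmp12"))
mapping = [(f.name, f"{f.stem}{SUFFIX}.fmp12") for f in files]

with open("rename_map.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["old_name", "new_name"])
    writer.writerows(mapping)

print(f"{len(mapping)} files mapped; first few: {mapping[:3]}")
```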
We have the rename feature on the roadmap; it’s been requested by a couple of people. I suspect it will be in one of our next two larger feature releases.
Both of the bugs you pointed out earlier in this thread (the issue with it not finding the right build and with it not allowing the migration on the same server) should be fixed in OttoDeploy version 1.2.6. I’ll also be updating our docs to present your approach of using builds ahead of time as a solution for the Refresh Dev pattern. Thank you!!