It’s an interesting idea. I can think of a few use cases for this. But could you say more about the automated nightly builds benefit? Now that OttoFMS has recurring scheduled deployments, what would this do differently?
If I understand the recurring schedule right, it would download the build – let’s say – every night and perform the migration regardless of whether it has already run successfully before.
On the other hand, I would like to be able to push a new version whenever it is ready, tested, and sometimes accepted – maybe every day, maybe once a week. All PRODs would look for a new build every night and, if there is one, perform the migration without any further interaction.
I first thought of a feature that would orchestrate individual deployments (with a sub-deployment type = ‘alias’), and maybe SimpleQ, which I haven’t tried so far, could be of assistance here. But I like the idea of decoupling the PROD instances by letting them pull their updates – a bit like the Sparkle framework.
Yes, that is correct. Most people who use recurring deployments run the exact same deployment every night.
We are pretty far along an auto-updating FileMaker application path. If you look at OttoDeploy, for example, it knows how to update itself. But that is done at the application level, not the server level.
We don’t use the presence of the manifest file as the signal, but a separate JSON file that tracks which version can be updated to which other versions. This is similar to how Sparkle does it.
We are planning to incorporate this into the build process so that a build can be published to a URL with a versions.json or something similar.
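To make the idea concrete, here is a minimal sketch of what a Sparkle-style update check against such a versions.json might look like. The schema isn’t published yet, so every key name (`builds`, `version`, `updatableFrom`, `url`) is an assumption for illustration only:

```python
import json

# Hypothetical versions.json shape -- all keys are assumptions,
# not the actual OttoFMS schema.
versions_json = json.loads("""
{
  "builds": [
    {"version": "1.4.0",
     "updatableFrom": ["1.3.0", "1.3.1"],
     "url": "https://example.com/builds/app-1.4.0"}
  ]
}
""")

def find_update(installed_version, manifest):
    """Return the first build the installed version can update to, else None."""
    for build in manifest["builds"]:
        if installed_version in build["updatableFrom"]:
            return build
    return None

# A PROD server on 1.3.0 would find the 1.4.0 build and trigger a deployment;
# one on 0.9.0 (not in any updatableFrom list) would do nothing.
build = find_update("1.3.0", versions_json)
```

The point of the `updatableFrom` list is exactly what the post above describes: the file tracks which versions can be updated to which other versions, so a server skips builds it can’t migrate from.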
I can see how something like this might be useful at the server level. But there are a lot of details. For example, sub-deployments are atomic: they either succeed or revert, and if they fail they can abort the rest of the deployment. If some succeed and some don’t, should that be counted as finished? Should new identical deployments fail? If so, how do you retry the others?
The build does not define the deployment. The build defines the source. You could do lots of different deployments or sub-deployments with the same build. This is a key part of our support for multi-tenant use cases: the same build is used for many sub-deployments, one for each set of customer files on a server.
Given all that, I feel like the application files should know what version they are and be able to check for updates and trigger deployments.
You are absolutely right that a proper auto-update function should be part of the application, especially if it is a vertical solution – but for all the other cases, I would love to have OttoFMS provide a simple way. I originally wanted to ask during the “pass build information inside a clone” hack the other day whether we could somehow store a custom payload in the manifest that could then be passed as a parameter to the post-deployment script.
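Something like the following is what I have in mind – a free-form block in the manifest that OttoFMS would hand to the post-deployment script verbatim. The `customPayload` key and its contents are entirely hypothetical:

```python
import json

# Hypothetical deployment manifest with a free-form "customPayload" block.
# Neither the key name nor its contents exist in OttoFMS today.
manifest = {
    "buildName": "myapp-2024-06-01",
    "customPayload": {
        "releaseNotesUrl": "https://example.com/notes",
        "migration": "full",
    },
}

def post_deployment_parameter(manifest):
    """Serialize the payload so a FileMaker post-deployment script
    could read it with JSONGetElement."""
    return json.dumps(manifest.get("customPayload", {}))

param = post_deployment_parameter(manifest)
```

The appeal is that the deployment pipeline stays generic: whatever the build author puts in the payload arrives in the script untouched, and a manifest without the block just yields an empty parameter.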
What do you think of the idea of having ‘expert settings’ in the form of a simple text field where additional deployment settings can be made using JSON/YAML or simple INI-style text? Experimental or very specific features, for field-testing new ones, without a GUI for each setting, and without any guarantees or support.
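A minimal sketch of how such a free-text field could be parsed on the server side, accepting either JSON or INI-style `key = value` lines. The setting names in the examples are made up; nothing here reflects actual OttoFMS options:

```python
import json

def parse_expert_settings(text):
    """Parse an 'expert settings' text field: try JSON first,
    then fall back to simple key=value lines. Unknown keys would
    simply be ignored by the deployment engine."""
    text = text.strip()
    if not text:
        return {}
    try:
        return json.loads(text)
    except json.JSONDecodeError:
        settings = {}
        for line in text.splitlines():
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                settings[key.strip()] = value.strip()
        return settings
```

Keeping the field schemaless is what makes it cheap: a new experimental flag costs nothing in the GUI, and graduating it later just means adding a real setting and ignoring the text-field key.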
I am pretty sure that there are already a ton of interesting ideas that fall into that ‘yes, but…’ category.