Add lost configuration checking to deploy command. #5718
base: 12.x
Conversation
Hi, thanks for the PR, but I think this is the wrong approach.
on the other system
So this PR would fail the recommended workflow. How else are contrib modules supposed to install other modules that they now depend on in a new version? This didn't fundamentally change since my presentation at devdays in Seville (slides 39 and 40: https://www.slideshare.net/nuvoleweb/advanced-configuration-management-with-config-split-et-al )
I had a small chat with @weitzman and I think it would be helpful to illustrate why I think failing on a perfectly ok scenario is not the right solution. Imagine I maintain a contrib module and your site is using it.
@bircher I totally get the scenario you describe. Maybe "fail" is the wrong approach but ideally we want to somehow notify or flag users with automated checks and workflows that an update hook has produced config changes that need to be exported.
Yeah, I think this is still valuable - we need to decide on behavior. My initial thought is to add an option
I'm late to the party, but wanted to chime in quick too. Thank you all for keeping this moving and the good discussion. 🙏🏼 @bircher those are both valid scenarios that concern me too. In fact it is scenarios and workflows just like the ones you described that this is meant to help address. The problem is that no matter what state we end with after database updates won't matter, as we will lose whatever (config) changes were made in the update hooks when config is imported. So the net result is that the new UI module will be enabled and then subsequently disabled. This check would simply flag that and allow the user to react to the issue, which would likely entail following the exact workflow you outlined above.
That sounds like a good plan to me. Should I add that to this PR?
You are correct that the check assumes that we would want to export/commit any config changes made in an update hook into code. When that is not the case the scenario you outlined is an issue. However, I would assume that most of the time we would want to commit changes to configuration made by an update hook to code, so adding a flag that allows this to succeed in those cases seems like a good compromise to me. If this or other similar scenarios do occur more often than I'm thinking they do, then we do need to rethink this though.
I'm not sure I'm following this scenario. 🤔 If update hooks are the first step in any deployment, how would the module already be enabled?
@weitzman did you forget to push that up to this branch? I don't see any new commits. 👀
Pushed. Had some trouble with the GitHub CLI.
The new changes look great to me FWIW. ✅ Thank you for adding that. 🙏🏼
I think this would be a tricky change to make - every deployment that changes config outside of a hook_update_N or post update will need to use the new flag. I'm not sure that this makes sense to me. In my experience deployments are not purely module and core updates, but often contain new features built from config, like new fields/content types, views, etc.
I believe this is one of the scenarios that this change intends to highlight so that these changes can be exported to configuration. Is there a scenario where configuration changes are made via an update hook that you don't want to export to config on disk? Maybe I do not understand.
Saw this come up on Drupal slack. I am also concerned about this change. Even if individual commits don't mix exporting database updates back to the codebase with other config changes, at my workplace we often deploy multiple commits/concerns up all at once to at least some environments such as Production, due to deploying to them less than to QA/lower environments. Having the deploy behavior simply change - particularly in the middle of the 12.x lifecycle? - would be highly disruptive. I get that the "phpass scenario" is a big problem. (And I've already seen it happen at my own workplace.) However, putting overall constraints on what kind of configuration package can be deployed up seems problematic for all the reasons already outlined. If nothing else, seems to me it would be appropriate to turn the option the other way around: default to a warning and have a flag to opt in to throwing an exception.
A good example is the Scheduler module. There were changes between the 1.x and 2.x modules to form widget display. The update hook has some logic to automatically add or update form displays accordingly. That hook outputs a note that it is making its automated best judgment regarding these form display changes, and I believe it in fact encourages the reader to double check its work and make changes if necessary, e.g. unhiding or hiding Scheduler widgets if one disagrees with the update hook's decision.
That sounds good to me.
This really helped me understand the concern better. Thank you for sharing this. I have not encountered an update hook like that before. |
If Scheduler is "updating form displays" in an update hook and then
I'm thinking of the following scenario:
If I then commit the above and run a deployment process that uses
These changes would not flag that. Let me try to explain below. This PR does the following:
For the scenario above, you wouldn't encounter this at all because you exported your configuration, which would then be imported, and then there wouldn't be any additional subsequent diff between the active configuration state and the tracked configuration state.
This PR is seeking to catch scenarios where you did not export your changes and flag for you that a relative diff exists in active vs tracked configuration which can often mean that configuration was not exported after an update hook. It doesn't care where the changes came from as long as you export configuration to remove the diff between active and tracked configuration. Perhaps some illustrated examples would help us all understand this more clearly. 🤞🏼 For the sake of this example, each color could be a line in a config file or just some discrete piece of config. Failure:
Success:
Ideally there is no 🟡 diff either, but we often don't work in ideal environments. So this change allows for those differences to continue to exist without causing issues. Does that help clear this up?
I just added an explanation to my comment above clarifying my example, as I realized it might not be as clear as it was in my head. 🙈
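To make the comparison concrete, here is a small self-contained sketch of the check being discussed (the config names and state labels are hypothetical, not taken from the PR): it compares the config names reported by `config:status` before and after database updates, and treats any name that only appears afterwards as a change the subsequent import would revert.

```php
<?php

// Hypothetical snapshots of `drush config:status --format=json` output,
// keyed by config name. Values describe the kind of difference.
$originalConfigDiff = [
  'views.view.content' => 'Different', // pre-existing drift (the 🟡 case), tolerated
];
$newConfigDiff = [
  'views.view.content' => 'Different',
  'system.site' => 'Different',        // appeared after update hooks ran
];

// Config names that appeared only after database updates: these are the
// changes that a config import would silently revert.
$lostConfig = array_diff(
  array_keys($newConfigDiff),
  array_keys($originalConfigDiff),
);

echo implode(', ', $lostConfig), "\n"; // system.site
```

Pre-existing drift present in both snapshots never appears in `$lostConfig`, which is why the 🟡 diff above does not trigger the failure.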
```php
// Record the state of configuration after database updates.
$process = $manager->drush($self, ConfigCommands::STATUS, [], $configStatusOptions);
$process->mustRun();
$newConfigDiff = $process->getOutputAsJson();

// Check for new changes to active configuration that would be lost
// during the subsequent configuration import.
if ($originalConfigDiff !== $newConfigDiff) {
  $configDiff = array_diff(
    array_keys($newConfigDiff),
    array_keys($originalConfigDiff),
  );
  $this->logger()->warning(dt('The following config changes will be lost during config import: :config', [
    ':config' => implode(', ', $configDiff),
  ]));
  if (!$options['ignore-mismatched-config']) {
    throw new \RuntimeException('Update hooks altered config that is about to be reverted during config import. Use --ignore-mismatched-config to bypass this error. Aborting.');
  }
}

$this->logger()->success("Config import start.");
$process = $manager->drush($self, ConfigImportCommands::IMPORT, [], $redispatchOptions);
$process->mustRun($process->showRealtime());
```
This does not track what you explained in your last comment.
Here the second diff is created after the update hook has run but before the config import.
The pull request does not do what you describe here. Also, your cases with the colours are not entirely correct, because you cannot do a config export during the deployment. Success would be:
But I am arguing that this is still wrong! Because I might have removed 🟢 on purpose from the "tracked config" during site building after I ran the update hooks locally.
Hi all. There are workflows that use tools for automated dependency updates (like Renovate bot) that automatically update modules and, while those module updates still run a set of automated tests on the consumer projects, not having an explicit failure would not notify the dev team. As a result, those module updates will automatically go into production (in cases where an auto-patching workflow is used). With a hard fail, the dev team would not need to read auto-update logs but would instead see the update failures. So, whatever the outcome of this discussion is - whether to have a flag or not - could you please consider using a non-zero exit code on failure. Thank you.
This looks good to me. I see that by default it is set to throw an error in case of a mismatch. We can keep the ignore-mismatched-config flag set to true by default so it works as it currently does, unless someone wants to use this feature and set the flag to false.

I am working on a way to validate missing config exports in CI and came up with the following commands, but they seem fragile. If the internal workings of any of these commands change, it will stop working:

```shell
ddev drush updatedb --no-cache-clear --yes
ddev drush config:import --yes
ddev drush cache:rebuild --yes
ddev drush deploy:hook --yes
ddev drush config:status   # if this outputs any changes, there are missing exports
```

This PR seems to fix the issue. Any update on whether it will be merged or not?
This replaces #5713
Motivation
Current State
I added an update hook that simply did this and it failed as expected. ✅
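For readers unfamiliar with the scenario, a hypothetical update hook of the kind being tested here (the exact hook used is not shown in this thread) would alter active configuration like this:

```php
/**
 * Hypothetical example: an update hook that changes active configuration.
 * Unless the change is exported afterwards, the next config import reverts it,
 * which is exactly the mismatch this PR is designed to detect.
 */
function mymodule_update_9001() {
  \Drupal::configFactory()
    ->getEditable('system.site')
    ->set('slogan', 'Changed by an update hook')
    ->save();
}
```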
I tested this code on the 11.x branch and then ported it to 12.x without being able to 100% test it as I don't have time to scaffold up an environment that is 12.x friendly, but it looks like it will work.
If someone else could test this and help 100% validate it that would be great.
I also added logging to help add visibility to the config items affected. Hopefully we are all on board with that and find that valuable. Happy to adjust as needed though.