
How do I opt-in for 3-way merge? #385

Closed
redbaron opened this issue Dec 8, 2021 · 7 comments
Labels
discussion This issue is not a bug or feature and a conversation is needed to find an appropriate resolution

Comments

@redbaron
Contributor

redbaron commented Dec 8, 2021

Hi, I read #58 and appreciate the reasoning you gave there. There is, however, a problem I can't resolve, hence this question.

First, to specify a conflict resolution rule I need to know exactly where a conflict might happen. Given that neither the app author nor the app deployer might be aware of manual changes made to cluster resources, it is hard for them to anticipate conflicts and provide conflict resolution configuration.

Second, assuming one would like to opt in to 3-way merge, how does one get it working in practice? Let's take a simple ConfigMap as an example: what would the rebaseRule config look like to get 3-way merge working? More precisely:

  • if a key was added manually, or an existing key was modified manually and did not change between the last applied config and the new config, keep that key in the ConfigMap while applying any other changes from the new config
  • if a manually added key also appears in the new config, or an existing key was modified manually and also changed between the last applied config and the new config, generate a conflict error, unless the manually provided value is exactly the same as in the new config.

An example of this behaviour would be:

Deploy the following manifest:

data:
  key1: val1

Edit the ConfigMap manually via kubectl, adding a new key:

data:
  key1: val2
  key2: val2   # added manually via kubectl

Deploy a new manifest:

data:
  key1: val1
  key3: val3

Expected result in the cluster:

data:
  key1: val1
  key2: val2
  key3: val3

I tried the following config, with no success:

rebaseRules:
- path: [data]  # also tried [data, {allIndexes: true}] which resulted in panic
  type: copy
  sources: [new, existing]
  resourceMatchers:
  - apiVersionKindMatcher: {apiVersion: v1, kind: ConfigMap}
@redbaron redbaron added the carvel triage This issue has not yet been reviewed for validity label Dec 8, 2021
@rohitagg2020
Contributor

Hi,

I am looking into the issue and will try to come up with a solution, which will include a ytt rebase rule.

@redbaron
Contributor Author

redbaron commented Dec 8, 2021

There are more limitations of rebaseRule; for instance, it doesn't support listMap, that is, a YAML list whose elements are keyed by one of their properties. For example, a list of containers is keyed by container name, and there can't be two containers with the same name.
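To illustrate, a listMap in this sense is a list like the container list below, where elements are identified by their name field rather than by list position (the names and images here are just placeholders):

spec:
  template:
    spec:
      containers:
      - name: app          # elements keyed by "name", not by index
        image: app:v2
      - name: sidecar      # a second element with the same name would be invalid
        image: envoy:v1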

Thinking about it more, it looks like server-side apply and its conflict resolution framework are a better approach. I opened #388 with a suggestion to start working on it.

@cppforlife
Contributor

There are more limitations of rebaseRule; for instance, it doesn't support listMap, that is, a YAML list whose elements are keyed by one of their properties

ytt rebase rules can do any kind of logic since it's a full-blown language, so it's definitely possible. in fact, that's how service account secret names are merged together in the default rebase rules (not saying it's pretty as a first cut of this feature, but it works): https://carvel.dev/kapp/docs/latest/config/#rebaserules
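roughly, that kind of rule has the shape below (a sketch rather than the exact built-in rule, assuming the overlayContractV1 data values data.values.existing / data.values.new from the rebaseRules docs, and using struct.decode from @ytt:struct to work with plain dicts and lists):

rebaseRules:
- ytt:
    overlayContractV1:
      overlay.yml: |
        #@ load("@ytt:data", "data")
        #@ load("@ytt:overlay", "overlay")
        #@ load("@ytt:struct", "struct")

        #@ def merged_secrets():
        #@   # secrets declared in the new config, plus any secrets only present on the cluster
        #@   new = struct.decode(data.values.new).get("secrets", []) or []
        #@   existing = struct.decode(data.values.existing).get("secrets", []) or []
        #@   new_names = [s["name"] for s in new]
        #@   return new + [s for s in existing if s["name"] not in new_names]
        #@ end

        #@overlay/match by=overlay.all
        ---
        #@overlay/match missing_ok=True
        #@overlay/replace
        secrets: #@ merged_secrets()
  resourceMatchers:
  - apiVersionKindMatcher: {apiVersion: v1, kind: ServiceAccount}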

@cppforlife
Contributor

cppforlife commented Dec 8, 2021

it looks like server-side apply and its conflict resolution framework are a better approach

kapp tries to provide a guarantee that what gets applied results in the same outcome regardless of what happened on the server. in the server-side apply world, for example, keys that are set by something else on the server side will never be unset by a future apply that does not touch that section. this definitely "fights" against the general philosophy of kapp. but let's take that convo to the other thread you started.

@cppforlife
Contributor

Given that neither the app author nor the app deployer might be aware of manual changes made to cluster resources

i'm actually curious about this statement... how come folks don't know where conflicts are? it sounds like for your use case it's ok if cluster resources drift from the desired config?

@redbaron
Contributor Author

redbaron commented Dec 8, 2021

i'm actually curious about this statement... how come folks don't know where conflicts are? it sounds like for your use case it's ok if cluster resources drift from the desired config?

Someone comes in and adds an env var, let's say to enable logging or tune some JVM params; on the next deploy this var is removed, because the deployed manifests never specified it. For a redeploy to keep such vars as-is, a rebase config needs to be provided (either by the CI/CD deploy process or the app author), but it is hard for them to know in advance where a conflict might occur.

it sounds like for your use case it's ok if cluster resources drift from the desired config?

exactly. I don't expect the deploy tool to be authoritative on how a resource is actually going to look in the cluster. It is already the case that many actors modify Kubernetes resources: HPA changes replicas, cloud controllers add load balancer annotations to Services and Ingresses, mutating admission controllers inject sidecar containers, human operators adjust manifests to resolve production issues, etc. These modifications need to be preserved, and 3-way merge gives that for free with no extra configuration required.
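For illustration, even just the HPA-managed replicas case needs its own dedicated rebase rule, something like the sketch below (assuming sources is an ordered preference list, as described in the rebaseRules docs), and every other drifting field would need a similar rule:

rebaseRules:
- path: [spec, replicas]
  type: copy
  sources: [existing, new]   # prefer the value already set in the cluster, fall back to the new config
  resourceMatchers:
  - apiVersionKindMatcher: {apiVersion: apps/v1, kind: Deployment}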

@renuy renuy added discussion This issue is not a bug or feature and a conversation is needed to find an appropriate resolution and removed carvel triage This issue has not yet been reviewed for validity labels Dec 9, 2021
@cppforlife
Contributor

exactly. I don't expect the deploy tool to be authoritative on how a resource is actually going to look in the cluster

i see, makes sense as a dedicated use case.

on a separate note, i ended up playing around with a ytt rebase rule:
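something along these lines, as a rough sketch for the configmap case above (again assuming the overlayContractV1 data values data.values.existing / data.values.new and struct.decode from @ytt:struct), meant as a starting point rather than a drop-in rule:

rebaseRules:
- ytt:
    overlayContractV1:
      overlay.yml: |
        #@ load("@ytt:data", "data")
        #@ load("@ytt:overlay", "overlay")
        #@ load("@ytt:struct", "struct")

        #@ def merged_data():
        #@   existing = struct.decode(data.values.existing).get("data", {}) or {}
        #@   new = struct.decode(data.values.new).get("data", {}) or {}
        #@   result = {}
        #@   result.update(existing)  # keep keys added or changed on the cluster
        #@   result.update(new)       # keys from the new config win where both define them
        #@   return result
        #@ end

        #@overlay/match by=overlay.all
        ---
        #@overlay/match missing_ok=True
        #@overlay/replace
        data: #@ merged_data()
  resourceMatchers:
  - apiVersionKindMatcher: {apiVersion: v1, kind: ConfigMap}

note this only covers the "keep manually added keys" half of what you described: raising a conflict when both the cluster and the new config change the same key would need extra logic (e.g. a fail(...) call plus access to the last applied copy), and keys removed from the config would never be deleted from the cluster since they survive via existing.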

since it sounds like you were looking for a general solution (configmap data i'm assuming was just an example), i'm going to close this issue in favor of #388, which would address your use case more generally.
