add conflictsWith to provider schema #2084

Merged: 11 commits, May 30, 2023
Conversation

@BBBmau (Contributor) commented Apr 21, 2023

Description

This resolves the issue that arises when two conflicting Kubernetes configurations are supplied to the provider at once.

Relates to #1141 and updates the original PR.
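
For context, the change uses the ConflictsWith field provided by the terraform-plugin-sdk/v2 helper/schema package. Below is a minimal sketch of what such a schema looks like; the attribute lists are illustrative, not the exact diff from this PR:

package kubernetes

import (
	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
)

// providerSchemaSketch shows how ConflictsWith marks provider arguments
// as mutually exclusive: setting "host" together with "config_path" (or
// "config_paths") fails validation. Illustrative only.
func providerSchemaSketch() map[string]*schema.Schema {
	return map[string]*schema.Schema{
		"host": {
			Type:          schema.TypeString,
			Optional:      true,
			ConflictsWith: []string{"config_path", "config_paths"},
		},
		"config_path": {
			Type:          schema.TypeString,
			Optional:      true,
			ConflictsWith: []string{"host", "config_paths"},
		},
		"config_paths": {
			Type:          schema.TypeList,
			Elem:          &schema.Schema{Type: schema.TypeString},
			Optional:      true,
			ConflictsWith: []string{"host", "config_path"},
		},
	}
}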

Acceptance tests

  • Have you added an acceptance test for the functionality being added?
  • Have you run the acceptance tests on this branch?

Output from acceptance testing:

$ make testacc TESTARGS='-run=TestAccXXX'

...

Release Note

Release note for CHANGELOG:

...

References

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

@BBBmau BBBmau marked this pull request as ready for review May 4, 2023 00:14
@BBBmau BBBmau requested a review from a team as a code owner May 4, 2023 00:14
@aquarhead commented Jun 2, 2023

While I totally agree this change is technically correct, the old behaviour has a useful side effect.

What I've been doing for years is to only set host in the provider configuration:

provider "kubernetes" {
  host = "..."
}

The actual auth is configured through env vars (e.g. I use the KUBE_CONFIG_PATH and KUBE_CTX env vars). If the kube cluster the auth/context refers to is different from the host, the provider errors out, preventing things from being deployed to the wrong kube cluster by accident. (The error is actually certificate-related, but once you know what it means it serves the purpose well.)

This is useful in the sense that it lets the Terraform code enforce the exact cluster a config is meant for, while allowing auth information to be configured in potentially different ways, not just between developers but between local and CI as well (which, iirc, was certainly the case at my previous job). (And yes, we did once use the wrong config on the wrong cluster, hence I looked around for a way to prevent it.)

This is now impossible due to:

│ "host": conflicts with config_path

I'm a bit upset that this was not released as a major version. Even though I'll be the first to admit I was abusing the behaviour, I still think the ability to enforce the correct cluster without hardcoding the auth has some merit.

I would like to propose an alternative solution: instead of using conflictsWith directly, could we try to merge all the different config options and error out if anything mismatches? In my use case this would give a useful error message like "host has conflict with context", but only when the wrong context is loaded.
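
A minimal sketch of that merge-and-compare idea, assuming the provider could resolve a host from the loaded kubeconfig context; the names here are hypothetical, not code from this provider:

package kubernetes

import "fmt"

// validateMergedHost errors only when an explicitly configured host
// disagrees with the host resolved from the kubeconfig context, rather
// than erroring whenever both sources are present. Hypothetical sketch.
func validateMergedHost(explicitHost, contextHost string) error {
	if explicitHost != "" && contextHost != "" && explicitHost != contextHost {
		return fmt.Errorf("host %q conflicts with host %q from the loaded context", explicitHost, contextHost)
	}
	return nil
}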


Before you ask: I run EKS and I'm aware of the exec config method. But in general I'd like the Terraform code itself not to dictate how authentication is done. Also, if the provider is fully configured inline, Terraform could end up operating on a different cluster than kubectl does, which can be quite confusing; it happens a lot that you deploy through Terraform and then check things through kubectl. I think it's nice to configure both at the same time, e.g. by pointing to a context.


I want to stress again that I totally see where this change is coming from, but I wanted to explain my (ab)use case. I do believe it would be helpful, especially for people operating many kube clusters, to be able to enforce the correct cluster in code while keeping the auth flexible. (Maybe we could introduce a host_match, or probably certificate_match would be more correct? 🤔)

I will totally understand if it doesn't happen, but I do wish this change were reverted in this major version and other options explored instead, or introduced only in the next major version. We do not use Terraform lockfiles because of a whole slew of other issues, and in general we would like to use the latest providers (locked to major version).

@Pell17 commented Jun 2, 2023

Breaking change for us. It would have been nice if this had initially been a Warning instead of an Error, giving us some time to "fix" our configuration. We usually auto-upgrade providers to the newest minor version, expecting nothing to break, as SemVer requires.

@Nuru commented Jun 2, 2023

I have a few problems with this.

As others have mentioned, this is a breaking change in a minor version release, which violates SemVer, and has caused a lot of disruption with no notice.

The documentation has not been updated to indicate what conflicts with what. For example, does username conflict with client_key?

This change removes our ability to override environment variables. In #1175, @dak1n1 explained that v2 of this provider consciously avoided taking configuration from the KUBECONFIG environment variable to avoid having it direct Terraform to the wrong cluster accidentally. (This provider uses KUBE_CONFIG_PATH and KUBE_CONFIG_PATHS instead.) In our modules, we want to further ensure the environment variables are not used, and we have been doing that by setting config_path = "". This change breaks that.

After we fix the breakage by removing config_path, when both host and KUBE_CONFIG_PATH are set we get this confusing error message:

│ Error: Conflicting configuration arguments
│ 
│   with provider["registry.terraform.io/hashicorp/kubernetes"],
│   on providers.tf line 14, in provider "kubernetes":
│  14:   host                   = local.eks_cluster_endpoint
│ 
│ "host": conflicts with config_path
╵

(Confusing because config_path is not present in the providers block. A more helpful error would be "host conflicts with KUBE_CONFIG_PATH".)

Suggestion

I believe the better path forward is to arrange the configuration options in a hierarchy, rather than cause them to conflict. Ideally the provider would use configuration provided directly in the provider block in preference to environment variables, and not complain (or perhaps issue a warning) if the environment variables provide a different configuration than what is specified in the provider block. This would allow the provider to support KUBECONFIG as has been requested more than once (#1973, #1175, and perhaps more) while avoiding the problem that environment variables override specifically configured authentication. But perhaps that is best deferred until v3 of this provider.
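
To illustrate the hierarchy idea, here is a minimal sketch in which an explicit provider-block value always wins and the environment only fills gaps; the helper name is hypothetical, though KUBE_CONFIG_PATH is the variable the provider actually reads:

package kubernetes

import "os"

// resolveConfigPath applies the proposed precedence: a value set in the
// provider block wins outright, and the environment variable is only a
// fallback, so the two sources can never "conflict". Hypothetical sketch.
func resolveConfigPath(fromProviderBlock string) string {
	if fromProviderBlock != "" {
		return fromProviderBlock
	}
	return os.Getenv("KUBE_CONFIG_PATH")
}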

Until v3 is ready, please revert this change and discuss backwards compatible improvements instead.

jrhouston added commits that referenced this pull request on Jun 5, 2023
@jrhouston (Collaborator) commented
Thank you for your thoughtful responses @aquarhead @Pell17 @Nuru – we are going to revert this change for now. We will put out a hotfix release once the above PR is merged.

@ingnil commented Jun 5, 2023

This change also does not take the values of the parameters into account. I just ran into a conflict between cluster_ca_certificate and insecure despite the latter being set to false.
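
This is because ConflictsWith fires on attributes being set at all, not on their values. A minimal sketch of a value-aware alternative that could run during provider configuration; this is a hypothetical helper, not code from this provider:

package kubernetes

import (
	"fmt"

	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
)

// checkInsecureConflict only reports a conflict when insecure is
// actually true, so an explicit insecure = false alongside
// cluster_ca_certificate would pass. Hypothetical sketch.
func checkInsecureConflict(d *schema.ResourceData) error {
	if d.Get("insecure").(bool) && d.Get("cluster_ca_certificate").(string) != "" {
		return fmt.Errorf(`"cluster_ca_certificate" cannot be used with insecure = true`)
	}
	return nil
}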

Successfully merging this pull request may close these issues:

Provider allows mutually-exclusive configuration options