
Source config_path provider argument from $KUBECONFIG env var #1175

Open
drewgingerich opened this issue Feb 24, 2021 · 14 comments

@drewgingerich

drewgingerich commented Feb 24, 2021

Description

Can the config_path provider argument be sourced from $KUBECONFIG? The kubectl command uses KUBECONFIG by default so it makes sense to me that this provider would look in the same place. I think this could provide a smoother integration with other tools via consistent defaults.

My specific use-case is with GitLab CI. GitLab CI has a nice k8s cluster integration that automatically exports the KUBECONFIG env var for each CI job. Because this provider looks for KUBE_CONFIG_PATH, I have to add some extra logic to re-export the kube config path. This sounds simple, but is made a bit more complex by what I'd consider a bug in GitLab CI: adding

variables:
   KUBE_CONFIG_PATH: $KUBECONFIG

to my .gitlab-ci.yml actually exports the contents of the kube config file rather than its path. The solution I've found is to add a script to the container running the job that properly exports KUBE_CONFIG_PATH before running Terraform.
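For reference, a minimal sketch of that workaround script, assuming the job image has a POSIX shell and that GitLab's KUBECONFIG holds the path to the generated kubeconfig file:

#!/bin/sh
# Hypothetical wrapper baked into the CI job image: map GitLab's KUBECONFIG
# (a file path) onto the variable this provider actually reads.
set -eu

export KUBE_CONFIG_PATH="${KUBECONFIG}"

# Forward whatever Terraform command the job asked for, e.g. plan or apply.
exec terraform "$@"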

To maintain backward compatibility, sourcing from KUBECONFIG would be in addition to sourcing from KUBE_CONFIG_PATH. KUBE_CONFIG_PATH could have priority to help prevent surprises.

I'm wondering if this has been looked at before. The choice of KUBE_CONFIG_PATH certainly feels intentional. If so, why was it decided not to use KUBECONFIG?

If this sounds reasonable, I can work on it and submit a PR.

References

Also posted at: hashicorp/terraform-provider-helm#687

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment
@dak1n1
Contributor

dak1n1 commented Feb 25, 2021

Thanks for writing up this issue! It was an intentional decision with the 2.0 release to remove the automatic reading of KUBECONFIG, since many users were having issues with the provider targeting the wrong cluster when KUBECONFIG was read automatically. However, this is something we're open to revisiting if there is enough support from the community.

I think we could mitigate the original issue mentioned in #909 by setting an order of precedence that prefers explicit configuration over implicit configurations (such as in the case of specifying a static configuration in the provider block, and having KUBECONFIG present on the system at the same time). In a case like that, we could use the static configuration, and only default to KUBECONFIG when no other configuration is specified. There is some initial work to support that by adding ConflictsWith to the conflicting settings, which will fix some of the configuration issues users have hit since the 2.0 release. But please do 👍 this issue if anyone in the community supports reading KUBECONFIG again.

@drewgingerich
Author

Hi @dak1n1,

Thanks for the explanation. I can see how it would be easy to get bitten if KUBECONFIG is used automatically, and your move away from it makes sense to me.

Having to set KUBE_CONFIG_PATH is not a big deal and would work fine for me if it weren't for the GitLab bug I encountered. I've opened an issue there about it. Which is all to say, the solution to my problem should probably be found on the GitLab side of things.

If people do like the idea of being able to read KUBECONFIG, maybe an opt-in bool argument, mutually exclusive with config_path and config_paths, could work? That way it's easy, and people know what they're getting into.

@moseskt

moseskt commented Jun 2, 2021

Following

@Bobonium

Bobonium commented Aug 21, 2021

Hey @dak1n1 ,

since this has been open for some time I'd like to give my two cents as someone who's been using Terraform for multiple years with multiple different providers.

Personally I don't understand the change in the 2.0 release at all, since it introduced even more inconsistency with the Kubernetes backend and with how basically all other Terraform providers operate.

I think the biggest reason to change this back is consistency across the existing official providers. Looking at the big three:

Here, for example, is what is documented for AWS credentials:

The AWS provider offers a flexible means of providing credentials for authentication. The following methods are supported, in this order, and explained below:

    Static credentials
    Environment variables
    Shared credentials/configuration file
    CodeBuild, ECS, and EKS Roles
    EC2 Instance Metadata Service (IMDS and IMDSv2)

Credential lookup also works the same way for the corresponding backend. In the case of AWS this makes the Terraform setup super simple, because at most you configure an assume_role as part of your provider config. It also makes it super easy for new people to get started with AWS and Terraform: as long as they've set up their AWS CLI, they can run Terraform as well.

With the latest Kubernetes changes, I always need to take some extra minutes to get new people started with it, and every single one so far has asked me why Terraform insists on using a separate environment variable.

Lastly, even if we disregard that other providers follow a different authentication pattern, it's painful from a local-execution point of view as well. I'm working with quite a few clusters, and for unrelated reasons it's easier for me to manage this with multiple kubeconfig files and contexts (think different teams with multiple instances, where each team has its own file and each file has an entry for dev/staging/prod). My normal procedure in the past would have been to simply switch the KUBECONFIG env variable to switch cluster connections, double-check I'm working on the correct cluster with the kubectl CLI, and then execute Terraform.
Now I can't trust the output of the kubectl command anymore when I need to run something outside of CI/CD, because it's completely separate from the connection Terraform uses, and there's no way I can guarantee they match.
This can lead to some very weird behavior when combined with the Kubernetes backend, because Terraform reads state from cluster A (based on KUBECONFIG) and tries to compare it with the resources on cluster B. So far I've never broken anything this way, but it's only a matter of time until I or someone else does so accidentally.
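To make that workflow concrete, here is roughly what the local procedure looks like today (a sketch only; the file and context names are placeholders):

# Point kubectl at one team's kubeconfig and pick the target context.
export KUBECONFIG="$HOME/.kube/team-a.yaml"
kubectl config use-context staging

# Sanity check: which cluster is kubectl talking to?
kubectl config current-context

# Since 2.0 the provider ignores KUBECONFIG, so the check above says nothing
# about the cluster Terraform will target unless this is also exported:
export KUBE_CONFIG_PATH="$KUBECONFIG"
terraform plan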

@omkensey

Count me in (with my "Kubernetes community member" hat on) as favoring reading implicit kubeconfig, ideally using the same semantics and with the same ordering that are defined for kubectl doing so:

  • an explicitly defined path (config_path / KUBE_CONFIG_PATH in this case) is read as a single file
  • KUBECONFIG is read as a list of paths which are merge-read and updated (if necessary) using the logic defined in the documentation for kubectl config (see the shell sketch after this list)
  • if neither of the above is set, ~/.kube/config is read as a single file
  • if none of the above is set, the compiled-in defaults are used
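For reference, the KUBECONFIG merge behavior in the second bullet can be exercised directly with kubectl (a sketch; the file names are placeholders):

# KUBECONFIG may hold a colon-separated list of files; kubectl merges them,
# with files earlier in the list winning for any duplicate keys.
export KUBECONFIG="$HOME/.kube/config:$HOME/.kube/team-a.yaml"

# Show the effective merged configuration.
kubectl config view

# Flatten the merge into a single self-contained file, e.g. for a tool
# that only accepts one path.
kubectl config view --flatten > /tmp/merged-kubeconfig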

@timothyclarke

@dak1n1 I've got to agree with the other comments above.
Using an environment variable other than KUBECONFIG is an anti-pattern. Having read #909, I cannot see how the description there leads to the proposed changes.

My $0.02 is that Terraform within a CI/CD platform should be the first-class citizen. In that environment you should have a VERY good idea of the environment and strive to ensure it is isolated from any action happening concurrently. In some CI platforms that means running inside containers; in others it means copying files into working directories, e.g. cp ${KUBECONFIG} ${PWD}/.kubeconfig && export KUBECONFIG="${PWD}/.kubeconfig" for the copy method. Either way, KUBECONFIG should be in a VERY known state.
I can see some situations (such as writing modules) where the work is not done in a well-managed environment. In that setting the proposal from @omkensey provides the best balance.

@lindhe
Contributor

lindhe commented Jun 9, 2022

Wow, this is pretty surprising behavior. It took a lot of debugging and trial-and-error before I understood how this really works. Considering that most other Kubernetes clients (kubectl, helm, Lens, etc.) work basically out of the box with KUBECONFIG, I'd say this provider should follow the usual pattern too.

@github-actions

Marking this issue as stale due to inactivity. If this issue receives no comments in the next 30 days it will automatically be closed. If this issue was automatically closed and you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. This helps our maintainers find and focus on the active issues. Maintainers may also remove the stale label at their discretion. Thank you!

@github-actions github-actions bot added the stale label Jun 10, 2023
@Bobonium

Not stale; still an issue, especially when introducing the provider to new users.

@github-actions github-actions bot removed the stale label Jun 10, 2023
@github-actions

Marking this issue as stale due to inactivity. If this issue receives no comments in the next 30 days it will automatically be closed. If this issue was automatically closed and you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. This helps our maintainers find and focus on the active issues. Maintainers may also remove the stale label at their discretion. Thank you!

@github-actions github-actions bot added the stale label Jun 10, 2024
@lindhe
Contributor

lindhe commented Jun 10, 2024

I think it's an important issue to keep open.

@github-actions github-actions bot removed the stale label Jun 10, 2024
@lindhe
Contributor

lindhe commented Sep 10, 2024

@giosueDelgado You can follow an issue by clicking the Subscribe button (on the right-hand side on desktop; at the bottom of the page on mobile). That way you do not notify everyone in the thread that you are following the issue.

@giosueDelgado

Yes,

@giosueDelgado You can follow an issue by clicking the Subscribe button (on the right-hand side on desktop; at the bottom of the page on mobile). That way you do not notify everyone in the thread that you are following the issue.

Hello @lindhe, you are right! I updated the comment to be more complete.

@giosueDelgado

Are there any updates about this?

At the moment the only workaround I've found is to use the local file, but it doesn't always work as expected...
So this morning I preferred to migrate to this type of configuration, but it was not so easy to get this information into our pipeline:

provider "kubernetes" {
  host = var.kube_host

  client_certificate     = base64decode(var.client_certificate)
  client_key             = base64decode(var.client_key)
  cluster_ca_certificate = base64decode(var.cluster_ca_certificate)

}
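In case it helps anyone making the same migration, here is one way to pull those values out of an existing kubeconfig in a pipeline (a sketch, assuming kubectl is available, the kubeconfig authenticates with client certificates, and kube_host, client_certificate, client_key and cluster_ca_certificate are declared as input variables of the root module):

# Extract connection details from the active kubeconfig and pass them to
# Terraform as input variables. The *-data fields are already base64-encoded,
# which matches the base64decode() calls in the provider block above.
export TF_VAR_kube_host="$(kubectl config view --raw --minify -o jsonpath='{.clusters[0].cluster.server}')"
export TF_VAR_cluster_ca_certificate="$(kubectl config view --raw --minify -o jsonpath='{.clusters[0].cluster.certificate-authority-data}')"
export TF_VAR_client_certificate="$(kubectl config view --raw --minify -o jsonpath='{.users[0].user.client-certificate-data}')"
export TF_VAR_client_key="$(kubectl config view --raw --minify -o jsonpath='{.users[0].user.client-key-data}')"

terraform plan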
