
changing resource name causes destroy/create conflict on terraform apply #1793

Closed
williamohara opened this issue Jul 27, 2022 · 6 comments

@williamohara

williamohara commented Jul 27, 2022

Terraform Version, Provider Version and Kubernetes Version

Terraform version: 1.2.6
Kubernetes provider version: 2.12.1
Kubernetes version: 1.22.11

Affected Resource(s)

kubernetes_deployment and kubernetes_deployment_v1 - I am guessing that it happens to any resource that relies on only using the namespace/name identifier for importing
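
For reference, a sketch of what that identifier looks like when importing, using the deployment from this issue (`default/tenant-auth-ui` is its namespace/name ID):

```sh
# the deployment is identified to Terraform only by its "<namespace>/<name>" ID
terraform import kubernetes_deployment_v1.tenant_auth_ui_deploy default/tenant-auth-ui
```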

Terraform Configuration Files

```hcl
terraform {
  required_providers {
    azurerm = {
      source = "hashicorp/azurerm"
      version = ">= 2.26"
    }
    
    mysql = {
      source = "petoju/mysql"
      version = "3.0.7"
    }

    kubernetes = {
      source = "hashicorp/kubernetes"
      version = "2.12.1"
    }
  }
  required_version = ">= 0.14.9"
  backend "azurerm" {
    resource_group_name  = "redacted"
    storage_account_name = "redacted"
    container_name       = "redacted"
    key                  = "<redacted>.tfstate"
  }
}

#connects to the Database server to run user setup
provider "mysql" {
  endpoint = "${data.azurerm_mysql_server.subscripify_mysql_serv.fqdn}:3306"
  username = "${data.azurerm_mysql_server.subscripify_mysql_serv.administrator_login}@${data.azurerm_mysql_server.subscripify_mysql_serv.name}"
  password = "${data.terraform_remote_state.infra.outputs.infra_values.dbAdminPw}"
  tls      = true
}


provider "azurerm" {
  features {}
}

provider "kubernetes" {
  host = data.azurerm_kubernetes_cluster.core_cluster.kube_config[0].host
  username = data.azurerm_kubernetes_cluster.core_cluster.kube_config[0].username
  password = data.azurerm_kubernetes_cluster.core_cluster.kube_config[0].password
  client_certificate = base64decode(data.azurerm_kubernetes_cluster.core_cluster.kube_config[0].client_certificate)
  client_key = base64decode(data.azurerm_kubernetes_cluster.core_cluster.kube_config[0].client_key)
  cluster_ca_certificate = base64decode(data.azurerm_kubernetes_cluster.core_cluster.kube_config[0].cluster_ca_certificate)
}

provider "helm" {
  kubernetes {
    host = data.azurerm_kubernetes_cluster.core_cluster.kube_config[0].host
    username = data.azurerm_kubernetes_cluster.core_cluster.kube_config[0].username
    password = data.azurerm_kubernetes_cluster.core_cluster.kube_config[0].password
    client_certificate = base64decode(data.azurerm_kubernetes_cluster.core_cluster.kube_config[0].client_certificate)
    client_key = base64decode(data.azurerm_kubernetes_cluster.core_cluster.kube_config[0].client_key)
    cluster_ca_certificate = base64decode(data.azurerm_kubernetes_cluster.core_cluster.kube_config[0].cluster_ca_certificate)
  }
}

resource "azurerm_resource_group" "repo_rg" {
  name = data.terraform_remote_state.infra.outputs.infra_values.tenant_infra_resource_group_name
  location = data.terraform_remote_state.infra.outputs.infra_values.location
  tags = {
    "repo" = data.terraform_remote_state.infra.outputs.infra_values.tenant_infra_repo_tag
  }
}

data "terraform_remote_state" "infra" {
backend = "azurerm"
config = {
    resource_group_name = "<redacted>"
    storage_account_name = "redacted"
    container_name       = "redacted"
    key                  = "redacted.redacted-redacted.tfstate"
 }

}


data "azurerm_kubernetes_cluster" "core_cluster" {
  name = data.terraform_remote_state.infra.outputs.infra_values.k8_cluster_name
  resource_group_name = data.terraform_remote_state.infra.outputs.infra_values.core_infra_resource_group_name
}
resource "kubernetes_deployment_v1" "tenant_auth_ui_deploy2" {
  wait_for_rollout = true
  metadata {
    name = "tenant-auth-ui"
    labels = {
      app = "tenant-auth-ui"
    }
  
  }

  spec {
    replicas = 1

    selector {
      match_labels = {
        app = "tenant-auth-ui"
      }
    }

    template {
      metadata {
        labels = {
          app = "tenant-auth-ui"
        }
      }

      spec {
        automount_service_account_token = false
        enable_service_links = false
        container {
          image = "subscripifycontreg.azurecr.io/tenant-auth-ui:latest"
          name  = "tenant-auth-ui"


          port {
            container_port = 3000
            protocol = "TCP"
          }
        }
      }
    }
  }
}
```

Debug Output

```
kubernetes_deployment_v1.tenant_auth_ui_deploy: Destroying... [id=default/tenant-auth-ui]
kubernetes_deployment_v1.tenant_auth_ui_deploy2: Creating...
kubernetes_manifest.secret_provider_class_kratos: Modifying...
kubernetes_deployment_v1.tenant_auth_ui_deploy2: Creation complete after 3s [id=default/tenant-auth-ui]
kubernetes_manifest.secret_provider_class_kratos: Modifications complete after 1s
kubernetes_deployment_v1.tenant_auth_ui_deploy: Still destroying... [id=default/tenant-auth-ui, 10s elapsed]
kubernetes_deployment_v1.tenant_auth_ui_deploy: Still destroying... [id=default/tenant-auth-ui, 20s elapsed]
kubernetes_deployment_v1.tenant_auth_ui_deploy: Still destroying... [id=default/tenant-auth-ui, 30s elapsed]
kubernetes_deployment_v1.tenant_auth_ui_deploy: Still destroying... [id=default/tenant-auth-ui, 40s elapsed]
...
kubernetes_deployment_v1.tenant_auth_ui_deploy: Still destroying... [id=default/tenant-auth-ui, 9m20s elapsed]
kubernetes_deployment_v1.tenant_auth_ui_deploy: Still destroying... [id=default/tenant-auth-ui, 9m30s elapsed]
kubernetes_deployment_v1.tenant_auth_ui_deploy: Still destroying... [id=default/tenant-auth-ui, 9m40s elapsed]
kubernetes_deployment_v1.tenant_auth_ui_deploy: Still destroying... [id=default/tenant-auth-ui, 9m50s elapsed]

│ Error: Deployment (default/tenant-auth-ui) still exists



Releasing state lock. This may take a few moments...
```

Steps to Reproduce

  1. create a deployment using kubernetes_deployment_v1, naming the resource anything you want - in my case I named mine "kubernetes_deployment_v1" "tenant_auth_ui_deploy"
  2. run terraform apply to deploy it to your cluster
  3. after the initial deployment is successful, rename the resource to any name you want - in my case I renamed it "kubernetes_deployment_v1" "tenant_auth_ui_deploy2" (note: do not change the name of the deployment or any other metadata)
  4. run terraform apply again

Expected Behavior

I would have expected my deployment to be deleted completely and a new one that looks exactly like the old one to be deployed in its place - once that is done, I expect terraform to complete.

Actual Behavior

It looks like Terraform sent the request to delete the old resource and the request to create the new one at the same time. Since both map to the same object name in Kubernetes, Terraform is never able to see the old deployment get deleted - the new deployment has the exact same name in Kubernetes even though the resource name is different in Terraform.

Important Factoids

Running on Azure AKS

Community Note

The best thing to do is to destroy the resource completely using a targeted destroy and then apply again after renaming it.
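
For reference, a sketch of that workaround with the resource address used in this issue (adjust the address to your own configuration):

```sh
# 1. Destroy only the existing deployment while the old resource address is still in the config
terraform destroy -target=kubernetes_deployment_v1.tenant_auth_ui_deploy

# 2. Rename the resource block in the configuration, then create the deployment again
terraform apply
```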

@github-actions github-actions bot removed the bug label Jul 27, 2022
@arybolovlev
Contributor

Hi @williamohara,

Namespace + object name (in the case of namespace-scoped resources), or just the object name (in the case of cluster-scoped resources), acts as the unique resource identifier within Kubernetes. When you have two Terraform resources with different names that both try to manage the same Kubernetes resource, the error message you observed is expected.

If your goal is to change the name of a Terraform resource, then do it and just fix the state file with terraform state mv, or use -target to control Terraform's behavior, etc.
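
For example (a sketch using the resource addresses from this issue - adjust them to your configuration):

```sh
# Tell Terraform the existing Deployment is now tracked under the new address,
# so the next plan shows no changes instead of a destroy/create pair
terraform state mv \
  kubernetes_deployment_v1.tenant_auth_ui_deploy \
  kubernetes_deployment_v1.tenant_auth_ui_deploy2
```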

I hope it helps.

@williamohara
Author

It does, but you don't think it's a bug? Or at least a feature request? Changing resource names is a common use case, no?

@arybolovlev
Contributor

No, I don't think this is a bug. If we are talking about changing a Terraform resource name, then Terraform will treat this as if you deleted the resource and added a new one. Terraform relies on the state file:

  • Imagine you have resource.name in your code and in the state file
  • Then you rename it to resource.name2 in your code
  • Terraform will assume that you have deleted resource.name, since it is in the state file but not in the code
  • Terraform will assume that you have added resource.name2, since it is in your code but not in the state file

Keeping that in mind, you can help Terraform and use terraform state mv to rename the resource in the state file too. In that case, Terraform won't notice any difference after you rename the resource in both your code and the state file.

Here is a quote about state mv from the link I shared in my previous comment:

You can use terraform state mv in the less common situation where you wish to retain an existing remote object but track it as a different resource instance address in Terraform, such as if you have renamed a resource block or you have moved it into a different module in your configuration.

This is exactly your case.
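
A declarative alternative, not mentioned in this thread, is a moved block (available since Terraform 1.1, so it works on the reporter's 1.2.6): it records the rename in the configuration so Terraform updates the state itself instead of planning a destroy/create pair. A minimal sketch with the addresses from this issue:

```hcl
# Placed anywhere in the configuration after renaming the resource block;
# the next apply moves the state entry to the new address instead of replacing the object.
moved {
  from = kubernetes_deployment_v1.tenant_auth_ui_deploy
  to   = kubernetes_deployment_v1.tenant_auth_ui_deploy2
}
```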

@yuriy-yarosh

yuriy-yarosh commented Jul 31, 2022

I have been ranting about this a bit.
There are other similar issues here as well, for example #1782.

Most of the existing DevOps communities are in denial about this design issue, with the follow-up excuse of using Terragrunt or Terraspace instead to overcome it.

This issue is very widespread and has affected many enterprises... and it is very similar to the multi-stage build design issue Docker had a long time ago (there were multiple spikes developed, like Grammarly's rocker and rocker-compose).

Terragrunt is not a silver bullet either and is not suitable for certain deployment scenarios.
I haven't looked into Terraspace much yet.

@williamohara
Author

I can see that as a design choice for terraform, @arybolovlev.

I would say that, for the coding experience, just as a resource name is arbitrary, it should be easy to change arbitrarily. As a developer building new infra, I may decide that I need to work on nomenclature to add some structure to my resource names. This feels like just one more step I would have to do.

As a design choice I agree - it's important to know exactly what is happening as I deploy using terraform, and that the behavior is consistent across all resources (including k8s resources): rename = delete and then replace with a new resource. This makes sense because it is a process that can be consistently applied across all resource types, and behavioral consistency is key!

For this one, I actually did expect a complete destruction and replacement of my resource - but I was hoping that terraform would be smart enough to manage it all for me, maybe by completely destroying all resources that need to be destroyed before applying new ones. It may make apply time longer, but the snafu I found myself in would have been avoided. I was running terraform in a pipeline, and it was a hassle to go in and cancel the run after I realized it was taking forever. I could see some logic that compares resource properties to ensure tidy destruction of the old resource when the new one looks similar. Just an idea.

@github-actions

github-actions bot commented Sep 2, 2022

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Sep 2, 2022