volume - config_map is being recreated at every apply #2339

Open
TrimPeachu opened this issue Nov 14, 2023 · 4 comments
@TrimPeachu
Terraform Version, Provider Version and Kubernetes Version

Terraform version: 1.4.6
Kubernetes provider version: v2.23.0
Kubernetes version: 1.27.3-gke.100

Terraform Configuration Files

resource "kubernetes_deployment_v1" "kube-test" {
...
        volume_mount {
          name       = "configs-secrets"
          mount_path = var.configs_secrets_dir
          read_only  = true
        }

        volume {
          name = "configs-secrets"
          projected {
            sources {
              secret {
                name     = kubernetes_secret.job-secrets.metadata[0].name
                optional = false

                dynamic "items" {
                  for_each = var.k8s-secrets

                  content {
                    key  = items.value
                    path = "${items.value}/${items.value}.json"
                  }
                }
              }

              config_map {
                name     = kubernetes_config_map.job-configs.metadata[0].name
                optional = true 

                dynamic "items" {
                  for_each = setunion(toset(var.k8s-required-configmaps))

                  content {
                    key  = items.key
                    path = "${items.key}/${items.key}.json"
                  }
                }
              }
            }
          }
        }
      }
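For context, with a secret or config key such as "test", the dynamic "items" blocks above expand to static blocks of this shape (a sketch inferred from the plan output below):

        items {
          key  = "test"
          path = "test/test.json"
        }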

Expected Behavior

Running terraform apply again should show no changes, since nothing in the configuration was modified.

Actual Behavior

 # kubernetes_deployment_v1.kube-test will be updated in-place
  ~ resource "kubernetes_deployment_v1" "kube-test" {
        id               = "default/kube-test"
        # (1 unchanged attribute hidden)

      ~ spec {
            # (5 unchanged attributes hidden)

          ~ template {
              ~ spec {
                    # (12 unchanged attributes hidden)

                  ~ volume {
                        name = "configs-secrets"

                      ~ projected {
                            # (1 unchanged attribute hidden)

                          ~ sources {
                              + config_map {
                                  + name     = "job-configs"
                                  + optional = true

                                  + items {
                                      + key  = "test"
                                      + path = "test/test.json"
                                    }
                                 ...
                                }
                           - sources {
                              - config_map {
                                  - name     = "job-configs" -> null
                                  - optional = false -> null

                                  - items {
                                      - key  = "test" -> null
                                      - path = "test/test.json" -> null
                                    }
                                ...

At first this issue seemed similar to #1835, but if I have understood it correctly, that problem was caused by the service account name, which does not apply here.

TrimPeachu added the bug label Nov 14, 2023
@alexsomesan (Member)

@TrimPeachu is this resource part of a module?
How is the value of var.k8s-required-configmaps being set? Is kubernetes_config_map.job-configs also dependent on that value?

@TrimPeachu (Author)

Hi @alexsomesan ,

This resource is not part of a module.

var.k8s-required-configmaps is defined as follows:

variable "k8s-required-configmaps" {
  default = [
    "test.A",
    "test.B"
  ]
}
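Note that setunion() with a single argument, as used in the volume's for_each above, is effectively a no-op: it just returns the set it is given, as a quick terraform console check shows:

> setunion(toset(["test.A", "test.B"]))
toset([
  "test.A",
  "test.B",
])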

And correct, kubernetes_config_map.job-configs also depends on var.k8s-required-configmaps:

resource "kubernetes_config_map" "job-configs" {
  provider = kubernetes

  metadata {
    name = "job-configs"
  }

  data = merge(
    { for config in var.k8s-required-configmaps : config => file("${var.jobs_configs_dir}/${config}.json") },
    { for config in local.k8s-optional-configmaps : config => file("${var.jobs_configs_dir}/${config}.json") if fileexists("${var.jobs_configs_dir}/${config}.json") }
  )
}
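With the default value above, the merge() builds one flat map keyed by config name, roughly like this (the file contents here are placeholders, plus any optional configs whose files exist):

data = {
  "test.A" = "<contents of test.A.json>"
  "test.B" = "<contents of test.B.json>"
}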

However, upon testing, the same unwanted result occurs even if I use something like this:

resource "kubernetes_config_map" "job-configs" {
  provider = kubernetes

  metadata {
    name = "job-configs"
  }

  data = {
    "test1" = "testA"
    "test2" = "testB"
    "test3" = "testC"
  }
}

@TrimPeachu (Author)

Hi @alexsomesan,
is the information I have provided sufficient, or is there something more I can provide so you are able to look into this issue?

Thanks :))

@ergonab

ergonab commented Sep 23, 2024

Not sure how issues are handled here, but this seems like a duplicate of #1358
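For anyone affected in the meantime, one possible stopgap (untested here, and note that it also hides legitimate edits to the volume configuration) is to have Terraform ignore the diff on the volume block:

resource "kubernetes_deployment_v1" "kube-test" {
  # ... existing configuration ...

  lifecycle {
    # Suppress the perpetual in-place update on the projected volume.
    # Caveat: this ignores intentional changes to any volume block as well.
    ignore_changes = [
      spec[0].template[0].spec[0].volume,
    ]
  }
}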
