instancegroup kubelet changes detected #884

Open
baldey-nz opened this issue Jan 15, 2023 · 9 comments
@baldey-nz

baldey-nz commented Jan 15, 2023

We have recently moved to v1.25.2 of the provider (originally at v1.23.5), and now every time we run a plan we see changes in the kubelet settings for each of our instance groups.

For example, with no code changes, a plan shows us:

```
  # module.kubernetes_kops_aws_cluster_awsuse2["sbx-awsuse2"].kops_instance_group.nodepools["workloads_r6i-us-east-2c"] will be updated in-place

~ resource "kops_instance_group" "nodepools" {
      id                           = "sbx-awsuse2.k8s.local/workloads_r6i-us-east-2c"
      name                         = "workloads_r6i-us-east-2c"
    ~ revision                     = 9 -> 10
      # (29 unchanged attributes hidden)

  ~ kubelet {
      ~ node_labels                         = {
          - "cluster"                      = "sbx-awsuse2" -> null
          - "kubernetes.io/role"           = "node" -> null
          - "node-role.kubernetes.io/node" = "" -> null
          - "nodepool"                     = "workloads_r6i" -> null
          - "region_code"                  = "awsuse2" -> null
          - "stage"                        = "sbx" -> null
        }
      ~ taints                              = [
          - "nodepool=workloads_r6i:NoSchedule",
        ]
        # (37 unchanged attributes hidden)

      - anonymous_auth {
          - value = false -> null
        }

      - cpu_cfs_quota {
          - value = false -> null
        }
    }

    # (1 unchanged block hidden)
}

```

And this also triggers the updater, e.g.:

```
  # module.kubernetes_kops_aws_cluster_awsuse2["sbx-awsuse2"].kops_cluster_updater.updater will be updated in-place

  ~ resource "kops_cluster_updater" "updater" {
        id               = "sbx-awsuse2.k8s.local"
      ~ keepers          = {
          ~ "workloads_r6i-us-east-2c" = "9" -> "10"
            # (7 unchanged elements hidden)
        }
      ~ revision         = 12 -> 13
        # (2 unchanged attributes hidden)

        # (3 unchanged blocks hidden)
    }

```

Applying the plan doesn't seem to actually change anything; the next plan shows exactly the same changes.

We have managed to work around this by using a lifecycle block, but this is a bit of a hack:

```
lifecycle {
  ignore_changes = [kubelet]
}
```
With that set and no code changes, a plan returns the expected message:

No changes. Your infrastructure matches the configuration.
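For reference, here is roughly how the workaround sits in the instance group resource (everything around the lifecycle block is an illustrative placeholder, not our exact configuration):

```
resource "kops_instance_group" "nodepools" {
  cluster_name = "sbx-awsuse2.k8s.local"
  name         = "workloads_r6i-us-east-2c"
  role         = "Node"
  machine_type = "r6i.2xlarge"
  min_size     = 1
  max_size     = 10
  subnets      = ["us-east-2c"]

  # Hack: ignore the whole kubelet block so the provider/kOps-populated
  # fields stop producing a diff on every plan.
  lifecycle {
    ignore_changes = [kubelet]
  }
}
```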

Any ideas on what we can do here? This wasn't happening to us on v1.23.5 and, as mentioned, we haven't changed any code around kops_instance_group or kubelet. Thanks.

@baldey-nz
Author

We have tried running a plan with v1.25.3 of the provider, but we get the same outcome as with v1.25.2.

@argoyle
Contributor

argoyle commented Jan 16, 2023

This sounds very similar to what I saw when making #792, but after that change I haven't seen it. I'm running 1.25.3.

@argoyle
Contributor

argoyle commented Jan 16, 2023

Those changes were part of 1.25.2 though.

@eddycharly
Owner

Hi @baldey-nz, did you apply the changes once, and does it still show a diff?

@dkulchinsky

dkulchinsky commented Jan 16, 2023

> Hi @baldey-nz, did you apply the changes once, and does it still show a diff?

Hi @eddycharly, thanks for taking a look at this.

We tried applying the changes; unfortunately, we still see the same diff show up in a subsequent plan.

As mentioned by @baldey-nz, our initial provider version was v1.23.5; we then upgraded to v1.25.2 and also tried v1.25.3.

@baldey-nz
Author

Hi @eddycharly. I can confirm that on both v1.25.2 and v1.25.3, once the changes have been applied, a subsequent plan still shows a diff.

@dkulchinsky

dkulchinsky commented Jan 16, 2023

@eddycharly, I'm a bit confused about how the kubelet block in kops_instance_group can be a Computed attribute.

At least before #792, we could set per-instance-group kubelet settings using this block, and indeed it looks like the instance groups affected by this issue all have this:

```
kubelet {
  allowed_unsafe_sysctls = local.sysctls_allowlist
}
```

which we only set on 2 of the 4 instance groups we have in each cluster.
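For reference, that local is just a list of sysctl names, something like (values purely illustrative, not our exact list):

```
locals {
  sysctls_allowlist = [
    "net.core.somaxconn",
    "net.ipv4.ip_local_port_range",
  ]
}
```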

I think if kubelet is a Computed attribute now, then it cannot accept inputs from users, but then again, it should, right? Or I could be misunderstanding some of the mechanics here.

@argoyle
Contributor

argoyle commented Jan 16, 2023

It should still use the user-supplied values, but I think kOps sets some default values, and those are probably the changes you see on each apply. I got the kubelet changes even if I didn't specify any values, hence my change to make them computed. Unless @eddycharly can come up with something smart to merge kOps default values with user-specified values, I would say your best bet for now is to add the "full" spec as the diff suggests, since kOps will set those values anyway.
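For example, based on the diff posted earlier, pinning the kOps-populated values in the config might look roughly like this (values copied from that plan output, so this may not be the complete set):

```
  kubelet {
    node_labels = {
      "cluster"                      = "sbx-awsuse2"
      "kubernetes.io/role"           = "node"
      "node-role.kubernetes.io/node" = ""
      "nodepool"                     = "workloads_r6i"
      "region_code"                  = "awsuse2"
      "stage"                        = "sbx"
    }

    taints = ["nodepool=workloads_r6i:NoSchedule"]

    anonymous_auth {
      value = false
    }

    cpu_cfs_quota {
      value = false
    }
  }
```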

@dkulchinsky

Thanks @argoyle. For now we had to add the kubelet attribute to a lifecycle ignore_changes list.

@eddycharly any thoughts on how this could be addressed in the provider?
