
Provider produces inconsistent final plan when creating a network policy #2549

Closed
avinode-amagdy opened this issue Jul 15, 2024 · 5 comments

@avinode-amagdy

avinode-amagdy commented Jul 15, 2024

Terraform Version, Provider Version and Kubernetes Version

Terraform version: 1.5.7
Kubernetes provider version: ~> 2.25
Kubernetes version: 1.28.5

Affected Resource(s)

  • kubernetes_network_policy

Terraform Configuration Files

resource "kubernetes_network_policy" "allow_k8s_api_egress" {
  metadata {
    name      = "${var.prefix}-allow-k8s-api-egress"
    namespace = var.namespace
    labels    = var.k8s_labels
  }
  spec {
    policy_types = ["Egress"]
    pod_selector {
      match_labels = var.pod_selector_labels
    }
    # Kubernetes API access via endpoint, used on clusters with a `kubenet`
    # network_plugin.
    egress {
      dynamic "ports" {
        for_each = flatten(data.kubernetes_endpoints_v1.kubernetes.subset.*.port)
        content {
          port     = ports.value.port
          protocol = ports.value.protocol
        }
      }
      dynamic "to" {
        for_each = flatten(data.kubernetes_endpoints_v1.kubernetes.subset.*.address)
        content {
          ip_block {
            cidr = "${to.value.ip}/32"
          }
        }
      }
    }
    # Kubernetes API access via service, used on clusters with an `azure`
    # network_plugin.
    egress {
      dynamic "ports" {
        for_each = data.kubernetes_service.kubernetes.spec[0].port
        content {
          port     = ports.value.port
          protocol = ports.value.protocol
        }
      }
      dynamic "to" {
        for_each = distinct(concat(
          data.kubernetes_service.kubernetes.spec[0].cluster_ips,
          [data.kubernetes_service.kubernetes.spec[0].cluster_ip],
        ))
        content {
          ip_block {
            cidr = "${to.value}/32"
          }
        }
      }
    }
  }
}

Debug Output

Panic Output

│ Error: Provider produced inconsistent final plan
│ 
│ When expanding the plan for
│ module.k8s_api_egress.kubernetes_network_policy.allow_k8s_api_egress
│ to include new values learned so far during apply, provider
│ "registry.terraform.io/hashicorp/kubernetes" produced an invalid new value
│ for .spec[0].egress[0].ports: block count changed from 0 to 1.
│ 
│ This is a bug in the provider, which should be reported in the provider's
│ own issue tracker.
╵
╷
│ Error: Provider produced inconsistent final plan
│ 
│ When expanding the plan for
│  module.k8s_api_egress.kubernetes_network_policy.allow_k8s_api_egress
│ to include new values learned so far during apply, provider
│ "registry.terraform.io/hashicorp/kubernetes" produced an invalid new value
│ for .spec[0].egress[0].to: block count changed from 0 to 1.
│ 
│ This is a bug in the provider, which should be reported in the provider's
│ own issue tracker.
╵

Steps to Reproduce

The problem seems to happen because of using a dynamic block inside the network policy resource.

I'm not sure how to reproduce it separately because it's part of a larger project.

Expected Behavior

What should have happened?
The configuration should apply without any issues on the first run.

Actual Behavior

What actually happened?
I get the error above when I deploy the cluster for the first time, but when I run the configuration again it applies successfully.

Important Factoids

The cluster is deployed on Azure Kubernetes Service (AKS)

References

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment
@BBBmau
Contributor

BBBmau commented Jul 16, 2024

Hi @avinode-amagdy, based on your description this seems to come from using a data source as your for_each input:

      dynamic "ports" {
        for_each = flatten(data.kubernetes_endpoints_v1.kubernetes.subset.*.port)
        content {
          port     = ports.value.port
          protocol = ports.value.protocol
        }
      }

Since this would be the first run of the config, the data source gives you zero elements. On a second apply it then returns results, since by that point the underlying resources have been created.

One solution that comes to mind is adding depends_on to your network_policy and taking the information from the resource itself instead of from a data source. This ensures that the resource has been successfully provisioned before the network_policy resource is attempted.
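
A rough sketch of that suggestion, using hypothetical names (azurerm_kubernetes_cluster.this and its api_server_addresses attribute stand in for whatever actually provisions your cluster; they are not taken from your config):

resource "kubernetes_network_policy" "allow_k8s_api_egress" {
  # metadata block as in the original config

  spec {
    policy_types = ["Egress"]
    pod_selector {
      match_labels = var.pod_selector_labels
    }
    egress {
      dynamic "to" {
        # Feed the block from an attribute of the managed resource instead of
        # a data source, so the values are resolved within the same run.
        for_each = azurerm_kubernetes_cluster.this.api_server_addresses # hypothetical attribute
        content {
          ip_block {
            cidr = "${to.value}/32"
          }
        }
      }
    }
  }

  # Make the ordering explicit: the cluster must exist before the policy.
  depends_on = [azurerm_kubernetes_cluster.this]
}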

Let me know if this makes sense; I can close this issue afterwards.

@avinode-amagdy
Author

avinode-amagdy commented Jul 17, 2024

Thank you for your comment @BBBmau. I already tried your suggestion and it did not change the output.
I also checked the dependency graph of this network policy against the current state, and this was the output:

$ terragrun0.37.8 graph --type plan
...
...

                "[root] module.k8s_api_egress.kubernetes_network_policy.allow_k8s_api_egress (expand)" -> "[root] module.k8s_api_egress.data.kubernetes_endpoints_v1.kubernetes (expand)"
                "[root] module.k8s_api_egress.kubernetes_network_policy.allow_k8s_api_egress (expand)" -> "[root] module.k8s_api_egress.data.kubernetes_service.kubernetes (expand)"
                "[root] module.k8s_api_egress.kubernetes_network_policy.allow_k8s_api_egress (expand)" -> "[root] module.external_dns.module.k8s_api_egress.var.k8s_labels (expand)"
                "[root] module.k8s_api_egress.kubernetes_network_policy.allow_k8s_api_egress (expand)" -> "[root] module.k8s_api_egress.var.namespace (expand)"
                "[root] module.k8s_api_egress.kubernetes_network_policy.allow_k8s_api_egress (expand)" -> "[root] module.k8s_api_egress.var.pod_selector_labels (expand)"
                "[root] module.k8s_api_egress.kubernetes_network_policy.allow_k8s_api_egress (expand)" -> "[root] module.external_dns.module.k8s_api_egress.var.prefix (expand)"

...
...

The graph did not change after adding depends_on.
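
To illustrate why a resource-level depends_on makes no difference here: both data sources are already implicit dependencies of the resource through the expressions in spec, so the graph gains no new edges (a sketch of the idea, not our exact change):

resource "kubernetes_network_policy" "allow_k8s_api_egress" {
  # ... metadata and spec unchanged ...

  # Redundant: both data sources are already referenced inside spec, so they
  # already appear as dependencies in the plan graph shown above.
  depends_on = [
    data.kubernetes_endpoints_v1.kubernetes,
    data.kubernetes_service.kubernetes,
  ]
}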

@BBBmau
Contributor

BBBmau commented Jul 18, 2024

Hello again! I did some investigating but couldn't come up with a proper reproduction where the inconsistency occurs. I see that you're using a module called k8s_api_egress; can you provide more information about the module in use? Something must not be set correctly. @avinode-amagdy

@avinode-amagdy
Author

Hi there, we tried to reproduce the issue separately but never could, because the problem only happens within a larger context.

We found the solution in the recommendation from this article:
https://itnext.io/beware-of-depends-on-for-modules-it-might-bite-you-da4741caac70

This article also summarises the problem well:
https://medium.com/hashicorp-engineering/creating-module-dependencies-in-terraform-0-13-4322702dac4a

The problem, at a high level, was that module.k8s_api_egress has a depends_on on another Terraform module. Inside it, data.kubernetes_endpoints_v1.kubernetes was therefore not read during the planning phase (because of how depends_on works on modules), which caused the inconsistency when the plan was applied.

Closing this issue because it turned out to be a misconfiguration and not a bug.
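
A sketch of the change, with hypothetical module wiring (module.some_other_module and its namespace output are placeholders, not our real names):

# Before: module-level depends_on forces everything inside the module,
# including its data sources, to wait until apply time, so their results
# are unknown while planning.
module "k8s_api_egress" {
  source     = "./modules/k8s_api_egress"
  # ...
  depends_on = [module.some_other_module] # hypothetical upstream module
}

# After: drop the module-level depends_on and express the dependency through
# explicit values instead, so the data sources can be read at plan time.
module "k8s_api_egress" {
  source    = "./modules/k8s_api_egress"
  # ...
  namespace = module.some_other_module.namespace # hypothetical output
}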

@BBBmau
Contributor

BBBmau commented Aug 13, 2024

Thanks for sharing what you discovered!
