Provider produces inconsistent final plan when creating a network policy #2549
Hi @avinode-amagdy, based on your description, it seems to be coming from wanting to use a data source in your dynamic `ports` block:

```hcl
dynamic "ports" {
  for_each = flatten(data.kubernetes_endpoints_v1.kubernetes.subset.*.port)
  content {
    port     = ports.value.port
    protocol = ports.value.protocol
  }
}
```

Since this would be the first run of the config, you'll be returned with … A solution that comes to mind is adding … Let me know if this makes sense; I can close this issue afterwards.
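For illustration, here is one general way to avoid plan-time-unknown values in a `for_each` — this is my own sketch, not necessarily the suggestion elided above. Feeding the dynamic block from values that are known at plan time (here a hypothetical static local instead of cluster data) keeps the plan consistent on a fresh cluster:

```hcl
# Illustrative only: the port list is pinned statically instead of being
# read from the cluster, so it is fully known when the plan is created.
locals {
  apiserver_ports = [
    { port = 443, protocol = "TCP" },
  ]
}

# Inside the network policy's ingress/egress block:
dynamic "ports" {
  for_each = local.apiserver_ports
  content {
    port     = ports.value.port
    protocol = ports.value.protocol
  }
}
```

The trade-off is that the ports are no longer discovered from the cluster, so this only fits when they are stable and known ahead of time.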
Thank you for your comment @BBBmau, I tried your suggestion already and it did not change the output, and the graph did not change by adding …
Hello again! I did some investigating but couldn't come up with a proper reproduction where the inconsistency occurs. I do see that you're using a module called …
Hi there, we were trying to reproduce the issue separately but we never could, because the problem happens within a larger context. We found the solution in the recommendation in this article: … Also, this article summarises the problem pretty well: … The problem was (on a high level) that we had … Closing this issue because it turned out to be a misconfiguration and not a bug.
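For readers hitting the same error: one common configuration shape that produces it (an illustration only — the thread does not show the reporter's exact setup) is a data source with `depends_on` pointing at a managed resource. Terraform then defers the data read to apply time, so everything derived from it is unknown during the first plan:

```hcl
# Illustrative sketch; resource names are hypothetical.
data "kubernetes_endpoints_v1" "kubernetes" {
  metadata {
    name      = "kubernetes"
    namespace = "default"
  }

  # depends_on on a data source forces its read to apply time on the
  # first run, making subset.*.port unknown while planning — which is
  # exactly when a dynamic block built from it can trigger the
  # "inconsistent final plan" error.
  depends_on = [azurerm_kubernetes_cluster.this]
}
```

Removing the unnecessary `depends_on` (or otherwise letting the data source be read at plan time) makes the first apply behave the same as subsequent ones.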
Thanks for sharing what you discovered!
### Terraform Version, Provider Version and Kubernetes Version

### Affected Resource(s)

### Terraform Configuration Files

### Debug Output

### Panic Output

### Steps to Reproduce
The problem seems to happen because of using a dynamic block inside the network policy resource.
I'm not sure how to reproduce it separately because it's part of a larger project.
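The shape described above can be sketched as a minimal, hypothetical configuration (names and namespaces are illustrative, not taken from the actual project):

```hcl
# Hypothetical minimal config of the shape described in this issue.
data "kubernetes_endpoints_v1" "kubernetes" {
  metadata {
    name      = "kubernetes"
    namespace = "default"
  }
}

resource "kubernetes_network_policy_v1" "example" {
  metadata {
    name      = "allow-apiserver"
    namespace = "default"
  }

  spec {
    pod_selector {}
    policy_types = ["Egress"]

    egress {
      dynamic "ports" {
        # On a freshly created cluster this data is not known at plan
        # time, which is what can trigger the inconsistent-plan error.
        for_each = flatten(data.kubernetes_endpoints_v1.kubernetes.subset.*.port)
        content {
          port     = ports.value.port
          protocol = ports.value.protocol
        }
      }
    }
  }
}
```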
### Expected Behavior

What should have happened?
The configuration should be applied without any issues on the first run.
### Actual Behavior

What actually happened?
I get the error above when I try to deploy the cluster for the first time, but when I run the configuration again it passes successfully.
### Important Factoids

The cluster is deployed on Azure Kubernetes Service (AKS).
### References

### Community Note