gke-cluster v19.0.0 replacing autopilot cluster #1126
@apichick @danielmarzini do you have a clue on what's happening here?
Just FYI, here's the diff:

```diff
diff --git a/modules/gke-cluster/main.tf b/modules/gke-cluster/main.tf
index bc94dd37..f4b86bf6 100644
--- a/modules/gke-cluster/main.tf
+++ b/modules/gke-cluster/main.tf
@@ -48,7 +48,18 @@ resource "google_container_cluster" "cluster" {
   enable_autopilot = var.enable_features.autopilot ? true : null
   # the default nodepool is deleted here, use the gke-nodepool module instead
-  # node_config {}
+  # default nodepool configuration based on a shielded_nodes variable
+  node_config {
+    dynamic "shielded_instance_config" {
+      for_each = var.enable_features.shielded_nodes ? [""] : []
+      content {
+        enable_secure_boot          = true
+        enable_integrity_monitoring = true
+      }
+    }
+  }
+
+
   addons_config {
     dynamic "dns_cache_config" {
@@ -131,7 +142,7 @@ resource "google_container_cluster" "cluster" {
     dynamic "resource_limits" {
       for_each = var.cluster_autoscaling.mem_limits != null ? [""] : []
      content {
-        resource_type = "cpu"
+        resource_type = "memory"
         minimum       = var.cluster_autoscaling.mem_limits.min
         maximum       = var.cluster_autoscaling.mem_limits.max
       }
```
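As an aside, the second hunk fixes what looks like a copy-paste bug: the memory limits block was emitting resource_type = "cpu". A sketch of how the corrected cluster_autoscaling limits might read, where cpu_limits is assumed here as the CPU counterpart of mem_limits and is not taken from the diff:

```hcl
# Sketch only: both limit blocks inside cluster_autoscaling, each gated on its
# own variable. var.cluster_autoscaling.cpu_limits is an assumed counterpart.
cluster_autoscaling {
  dynamic "resource_limits" {
    for_each = var.cluster_autoscaling.cpu_limits != null ? [""] : []
    content {
      resource_type = "cpu"
      minimum       = var.cluster_autoscaling.cpu_limits.min
      maximum       = var.cluster_autoscaling.cpu_limits.max
    }
  }
  dynamic "resource_limits" {
    for_each = var.cluster_autoscaling.mem_limits != null ? [""] : []
    content {
      resource_type = "memory" # was mistakenly hard-coded to "cpu" before the fix
      minimum       = var.cluster_autoscaling.mem_limits.min
      maximum       = var.cluster_autoscaling.mem_limits.max
    }
  }
}
```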
I think we might need to skip node_config if the autopilot bool is set; see the sketch below.
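For illustration, one way that guard might look (a sketch only, not necessarily the fix that landed in the module): wrap node_config itself in a dynamic block keyed on the same autopilot flag, so Autopilot clusters never get a node_config at all.

```hcl
# Sketch: render node_config only for standard (non-Autopilot) clusters.
resource "google_container_cluster" "cluster" {
  # ...other arguments as in the module...
  enable_autopilot = var.enable_features.autopilot ? true : null

  dynamic "node_config" {
    # empty collection when autopilot is on, so the block is not rendered
    for_each = var.enable_features.autopilot ? [] : [""]
    content {
      dynamic "shielded_instance_config" {
        for_each = var.enable_features.shielded_nodes ? [""] : []
        content {
          enable_secure_boot          = true
          enable_integrity_monitoring = true
        }
      }
    }
  }
}
```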
@joeheaton can you try with the updated module?
Looks like that solved it! Thanks @ludoo |
Awesome! Thanks for flagging this! |
Between daily-2022.11.11 and v19.0.0, gke-cluster has started replacing Autopilot clusters without any config change. This doesn't happen immediately, but re-running Terraform the next day causes a replace. I've confirmed this behaviour twice now: no other changes, just ran Terraform, waited a day, ran it again, and it forced a replace. I've included Terraform output below from running a plan against an existing cluster.

daily-2022.11.11 behaviour:

v19.0.0 behaviour: Fabric is trying to add node_config and node_pool attributes.
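To make the setup concrete, here is a minimal, hypothetical sketch of an Autopilot cluster defined through this module; the module source path, project, name, and location are placeholders (not from the issue), and other required inputs such as networking are omitted:

```hcl
# Hypothetical invocation, not the reporter's actual configuration.
module "autopilot_cluster" {
  source     = "./fabric/modules/gke-cluster" # placeholder path
  project_id = "my-project"                   # placeholder
  name       = "autopilot-cluster"            # placeholder
  location   = "europe-west1"                 # placeholder
  enable_features = {
    autopilot = true # the flag the module maps to enable_autopilot
  }
  # other required inputs (VPC/subnet, etc.) omitted for brevity
}
```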