Module gke-cluster with "forces replacement" due to deletion of default node pool #1275
Comments
Info: I have no node pools enabled at the moment, just a plain control plane.

Thanks for reporting this; we might be at the point where we split the gke module into a standard one and an autopilot one. Julio and I are discussing this right now.
I am still having this issue with the split standard gke module.

Ok, let's look into this. Thanks for reporting!

@vkaukeano-flexion can you paste here the module configuration, so we can reproduce it exactly?
@ludoo, sorry, I am unable to provide configurations due to a contract. However, I am using the nodepool and service-account modules with the cluster-standard module. When I set the enable secure boot flag to false, it no longer destroys and recreates the cluster with every apply.
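For reference, on the google provider the secure boot toggle lives under a node pool's `node_config`. A minimal sketch of where that flag sits (resource names and values here are illustrative, not the reporter's configuration):

```hcl
resource "google_container_node_pool" "nodes" {
  name     = "workers" # illustrative
  cluster  = google_container_cluster.cluster.id
  location = "europe-west1"

  node_config {
    machine_type = "e2-standard-4"

    # Changing enable_secure_boot alters an immutable node_config
    # attribute, so toggling it forces the node pool to be recreated.
    shielded_instance_config {
      enable_secure_boot          = false
      enable_integrity_monitoring = true
    }
  }
}
```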
One example of a similar issue from one of our colleagues:

```hcl
module "cluster" {
  source     = "github.com/GoogleCloudPlatform/cloud-foundation-fabric/modules/gke-cluster-standard"
  project_id = var.project_id
  name       = "cluster-${var.name}"
  location   = var.region
  vpc_config = {
    network    = google_compute_network.network.self_link
    subnetwork = google_compute_subnetwork.subnet.self_link
    secondary_range_names = {
      pods     = "pods"
      services = "services"
    }
    master_authorized_ranges = {
      rfc1918_10_8 = "10.0.0.0/8"
    }
    master_ipv4_cidr_block = "192.168.0.0/28"
  }
  enable_features = {
    dataplane_v2      = true
    workload_identity = true
    mesh_certificates = true
  }
  private_cluster_config = {
    enable_private_endpoint = true
    master_global_access    = true
  }
  labels = {
    environment = var.name
  }
}
```
My versions:
I used the Foundation Fabric gke-cluster module to create a simple GKE cluster without the Autopilot feature.
After successfully provisioning the resources with the first `terraform apply`, a subsequent `terraform plan` reports a diff that forces replacement of the cluster, due to the `spot` and `preemptible` options on the default node pool.
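For context, the upstream pattern that makes this class of diff possible: the cluster deletes its default node pool right after creation, so any attribute the provider still tracks on that pool can later surface as a forced replacement. A minimal sketch with the plain google provider (names and values are illustrative):

```hcl
resource "google_container_cluster" "cluster" {
  name     = "example"
  location = "europe-west1"

  # Create the smallest possible default node pool, then delete it;
  # workloads run on separately managed google_container_node_pool
  # resources. Attributes still tracked under node_config for the
  # deleted pool are what plans can flag as "forces replacement".
  remove_default_node_pool = true
  initial_node_count       = 1
}
```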
In the Foundation Fabric `modules/gke-cluster/main.tf` I see a `lifecycle` block whose `ignore_changes` list covers individual `node_config` attributes of the default node pool. Adding `node_config[0].preemptible` to the list doesn't help. However, rewriting the list to ignore `node_config` as a whole works perfectly and solves the problem.
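A sketch of that rewrite, assuming the usual `ignore_changes` pattern on `google_container_cluster` (the exact entries in the module's list may differ):

```hcl
resource "google_container_cluster" "cluster" {
  name     = "example"
  location = "europe-west1"

  remove_default_node_pool = true
  initial_node_count       = 1

  lifecycle {
    # Listing single nested attributes, e.g.
    #   node_config[0].preemptible
    # still leaves other tracked attributes free to diff; ignoring
    # the whole node_config block suppresses the forced replacement.
    ignore_changes = [node_config]
  }
}
```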
I am wondering why `node_config` is created here at all. The reason is that it comes from a conditional `dynamic "node_config"` block in the module, so negating that condition also helped. Is this a bug in `gke-cluster/main.tf`, or am I doing something wrong in my module configuration?
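For illustration, a conditional `dynamic "node_config"` block of the shape being discussed could look like the following; the condition and the `service_account` attribute are hypothetical stand-ins, not the module's exact code:

```hcl
resource "google_container_cluster" "cluster" {
  name     = "example"
  location = "europe-west1"

  remove_default_node_pool = true
  initial_node_count       = 1

  # The inner block is only rendered when for_each yields an
  # element; negating the condition drops node_config entirely,
  # so the provider stops diffing the deleted default pool.
  dynamic "node_config" {
    for_each = var.node_service_account != null ? [""] : []
    content {
      service_account = var.node_service_account # hypothetical variable
    }
  }
}
```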