Community Note
- Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
- Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
- If you are interested in working on this issue or have submitted a pull request, please leave a comment. If the issue is assigned to the "modular-magician" user, it is either in the process of being autogenerated, or is planned to be autogenerated soon. If the issue is assigned to a user, that user is claiming responsibility for the issue. If the issue is assigned to "hashibot", a community member has claimed the issue already.
Description
When a non-autoscaling node pool (one with a fixed number of nodes defined in `node_count`) has its `node_count` changed, the provider calls the `SetSize` API (here: https://github.com/terraform-providers/terraform-provider-google/blob/1bc6bdacb86b656a8d2740e34b3ace25b8c0ce34/google/resource_container_node_pool.go#L633).
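For illustration, a minimal pool of this kind might look like the following sketch (the resource and cluster names are hypothetical):

```hcl
# Hypothetical fixed-size pool: lowering node_count (e.g. from 3 to 1)
# makes the provider issue a SetSize call for the new value.
resource "google_container_node_pool" "example" {
  name       = "fixed-pool"
  cluster    = "example-cluster"
  node_count = 3
}
```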
In the case of a scale-down, the nodes are removed in an orderly fashion: GKE removes them one by one, giving the pods time to be rescheduled elsewhere.
In the case of a scale-up, GKE adds the new nodes quickly.
However, when an autoscaling node pool is resized (i.e. changing `min_node_count` or `max_node_count`), the node pool isn't resized at all.
For example: if I have a node pool with min: 5 / max: 10 and change it to min: 1 / max: 2, I'm likely to end up with well over 2 nodes (at least 5).
In theory the GKE-operated autoscaler may eventually reduce the size of the pool, but that isn't guaranteed; it depends on the workloads running on the pool and on what other pools exist in the cluster.
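As a sketch of that scenario (names are hypothetical), changing the autoscaling block from min 5 / max 10 to the values below only updates the limits; no `SetSize` call is made, so the pool can keep running 5 or more nodes:

```hcl
resource "google_container_node_pool" "example" {
  name    = "autoscaling-pool"
  cluster = "example-cluster"

  # Previously: min_node_count = 5, max_node_count = 10.
  # Applying this change updates the autoscaling limits but never resizes
  # the pool, which may still be running 5+ nodes despite the new max of 2.
  autoscaling {
    min_node_count = 1
    max_node_count = 2
  }
}
```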
It would be helpful if Terraform resized the pool to fit the new constraints:
- When the new max is lower than the old min, call `SetSize` with the new max
- When the new min is higher than the old max, call `SetSize` with the new min
This will make it easier to shuffle workloads around the cluster while performing maintenance.
Affected Resource(s)
- `google_container_node_pool`