Hi Team,
There is a scenario where many of our containers may take more than 30 seconds to start. In other words, 30 seconds will probably not be enough for new replicas to come up when the VMs receive the preemption signal.
Is it possible to increase the draining_timeout_when_node_expired_ms value to 45 seconds to solve this problem?
Hi,
I have the same issue.
According to the documentation (https://cloud.google.com/compute/docs/instances/preemptible#preemption-process): Compute Engine sends a preemption notice to the instance in the form of an ACPI G2 Soft Off signal. You can use a shutdown script to handle the preemption notice and complete cleanup actions before the instance stops.
If the instance does not stop after 30 seconds, Compute Engine sends an ACPI G3 Mechanical Off signal to the operating system.
I don't think there is any way to override this 30-second deadline after the ACPI G2 signal.
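For reference, this is the kind of shutdown script the GCE documentation is talking about. It is a minimal sketch, not something from this repository: it assumes kubectl is installed on the node with credentials to drain it, and that the Kubernetes node name equals the instance hostname. It checks the documented instance/preempted metadata key and, if the VM is being preempted, drains the node so replacement pods start as early as possible inside the 30-second window.

```python
# Minimal sketch of a GCE shutdown script for preemptible nodes.
# Assumptions: kubectl is on the node and allowed to drain it, and the
# Kubernetes node name matches the instance hostname.
import socket
import subprocess
import urllib.request

METADATA_URL = ("http://metadata.google.internal/computeMetadata/v1/"
                "instance/preempted")


def was_preempted() -> bool:
    """Ask the GCE metadata server whether this VM is being preempted."""
    req = urllib.request.Request(METADATA_URL,
                                 headers={"Metadata-Flavor": "Google"})
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode().strip().upper() == "TRUE"


def drain_node(node: str) -> None:
    """Evict pods from this node so replacements are scheduled elsewhere."""
    subprocess.run(
        ["kubectl", "drain", node,
         "--ignore-daemonsets",
         "--delete-emptydir-data",   # older kubectl uses --delete-local-data
         "--grace-period=20",
         "--timeout=25s"],           # must finish inside the 30 s window
        check=False,
    )


if __name__ == "__main__":
    if was_preempted():
        drain_node(socket.gethostname())
```

The drain has to finish well inside the 30 seconds, which is why the grace period and timeout are kept short; containers that need longer to start or shut down still won't fit, which is the concern raised in the original question.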
I reduced downtime by running multiple replicas together with pod anti-affinity.
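To illustrate that mitigation, here is a minimal sketch (not from this repository) of a Deployment with several replicas and a required pod anti-affinity rule, so that no two replicas share a node and a single preemption never takes the whole service down. The name web, the app=web label, and the nginx image are placeholders; the sketch assumes PyYAML is installed and prints YAML that could be piped to kubectl apply -f -.

```python
# Sketch of a Deployment that spreads replicas across nodes via pod
# anti-affinity; names, labels, and the image are placeholders.
import yaml

deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "web"},
    "spec": {
        # More than one replica, so one preempted node never removes them all.
        "replicas": 3,
        "selector": {"matchLabels": {"app": "web"}},
        "template": {
            "metadata": {"labels": {"app": "web"}},
            "spec": {
                "affinity": {
                    "podAntiAffinity": {
                        # Hard rule: never schedule two replicas on the same node.
                        "requiredDuringSchedulingIgnoredDuringExecution": [{
                            "labelSelector": {"matchLabels": {"app": "web"}},
                            "topologyKey": "kubernetes.io/hostname",
                        }],
                    },
                },
                "containers": [{"name": "web", "image": "nginx:1.25"}],
            },
        },
    },
}

# Emit YAML, e.g. `python gen.py | kubectl apply -f -`.
print(yaml.safe_dump(deployment, sort_keys=False))
```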