Description
Online you can find plenty of examples (even in the official docs) showing how to use the local-volume-provisioner in combination with PersistentVolumeClaims in a StatefulSet.
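For context, the setup in question looks roughly like this: a local PV pinned to a single node via `nodeAffinity` (the name, path, and hostname below are made up for illustration):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-local-pv        # hypothetical name
spec:
  capacity:
    storage: 100Gi
  accessModes: ["ReadWriteOnce"]
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /mnt/disks/vol1       # example path
  nodeAffinity:                 # pins the PV to one specific node
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values: ["worker-1"]  # if this node vanishes, the PV is unusable
```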
All works fine until a node goes away and your cloud provider brings up a new one, whether due to an issue on their side or because you are replacing nodes while upgrading K8s.
What happens in this case is that the PVC stays bound to a PV that no longer exists, which prevents the pod in the StatefulSet from starting until you manually delete the PVC. This behaviour makes sense: there is no way for Kubernetes to know whether the node was shut down for maintenance and will come back later, or whether it is gone forever.
However, I'd like the node to simply be assumed dead, because I never reboot nodes intentionally; I just roll the cluster. If the pod can be scheduled on another node, I know with certainty that the original node was replaced (due to my affinity settings).
Is there any official way of dealing with this, or any config option I'm overlooking?
I can write a job that takes care of this, but surely others must have hit this issue?!
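For reference, the core logic of such a cleanup job could be sketched like this. This is only a sketch, not official tooling: `stale_claims` is a hypothetical function, and the objects are assumed to be shaped like the JSON you get from `kubectl get pv,pvc -o json`:

```python
def stale_claims(pvs, pvcs):
    """Return (namespace, name) of PVCs whose backing PV is gone or unhealthy.

    When a local PV's node is replaced, the PV is typically deleted or left
    in a failed state, while the PVC still records the old volume in
    spec.volumeName (and may show phase "Lost").
    """
    # PVs that are still usable or legitimately in use.
    healthy = {
        pv["metadata"]["name"]
        for pv in pvs
        if pv["status"]["phase"] in ("Available", "Bound")
    }
    stale = []
    for pvc in pvcs:
        bound_to = pvc["spec"].get("volumeName")
        if (
            bound_to
            and pvc["status"]["phase"] in ("Bound", "Lost")
            and bound_to not in healthy
        ):
            stale.append((pvc["metadata"]["namespace"], pvc["metadata"]["name"]))
    return stale
```

A job built around this would list PVs and PVCs via the API, then delete each stale PVC so the StatefulSet controller can recreate it and let the pod schedule onto a surviving node.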