Hi Team,
I have a 3-node Kubernetes cluster where I'm installing an etcd cluster with persistence enabled. I have noticed that sometimes, when upgrading the cluster with new etcd changes, one of the instances goes into the CrashLoopBackOff state:
NAMESPACE   NAME                           READY   STATUS             RESTARTS          AGE
voltha      voltha-etcd-cluster-client-0   1/1     Running            0                 14h
voltha      voltha-etcd-cluster-client-1   1/1     Running            0                 14h
voltha      voltha-etcd-cluster-client-2   0/1     CrashLoopBackOff   173 (4m31s ago)   14h
Each etcd instance is associated with a PersistentVolumeClaim and a PersistentVolume. To recover from this state, I have to delete the PV associated with the `voltha-etcd-cluster-client-2` instance and restart the `voltha-etcd-cluster-client-2` pod.
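For reference, this is roughly the recovery procedure I follow (a sketch; the PVC name below is an assumption based on the usual StatefulSet `<claim>-<pod>` naming pattern, and `<pv-name>` is a placeholder for the bound volume shown by `kubectl get pv`):

```shell
# Find the PVC and PV bound to the crashing member.
kubectl -n voltha get pvc
kubectl get pv

# Delete the PVC for the crashing member (assumed claim name; adjust
# to whatever the previous command actually shows).
kubectl -n voltha delete pvc data-voltha-etcd-cluster-client-2

# If the PV's reclaim policy is Retain, delete the released PV too.
kubectl delete pv <pv-name>

# Restart the pod; the StatefulSet recreates it, a fresh volume is
# provisioned, and the member resyncs its data from the leader.
kubectl -n voltha delete pod voltha-etcd-cluster-client-2
```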
My question is: is it safe to delete the PersistentVolume, with the assurance that no data is lost and the data is up-to-date? I don't want to end up in a situation where I lose data or where the data is stale.
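Before deleting anything, I check that the two healthy members agree on the data (a sketch; this assumes `etcdctl` is available inside the pods, as it normally is in etcd images):

```shell
# Query status across all cluster endpoints from a healthy member.
# Matching raftIndex values on the healthy members indicate they hold
# the same committed data, so the crashed member's volume holds nothing
# unique once quorum is intact.
kubectl -n voltha exec voltha-etcd-cluster-client-0 -- \
  etcdctl endpoint status --cluster -w table
```

Is that check sufficient, or is there something else I should verify first?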
Any help would be greatly appreciated.
Thanks,
Abhay