A pod (from a Kubernetes Job) is stuck in an "Unknown" status. If I run `kubectl delete pod <podname>` I get "<podname> deleted", but the pod is still there, in an Unknown state.
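For reference, a sketch of the delete commands involved; the force-delete variant with `--grace-period=0 --force` is listed only as a possible workaround I have not verified:

```
# Regular delete: reports "<podname> deleted", but the pod stays in Unknown
kubectl delete pod <podname>

# Possible workaround (unverified here): force delete, skipping the grace period
kubectl delete pod <podname> --grace-period=0 --force
```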
Kubernetes Version: 1.7.3
kubectl version:
Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.3", GitCommit:"029c3a408176b55c30846f0faedf56aae5992e9b", GitTreeState:"clean", BuildDate:"2017-02-22T10:12:27Z", GoVersion:"go1.8", Compiler:"gc", Platform:"darwin/amd64"}
Calling `kubectl logs` shows the logs from the container if I have not deleted the pod. If I delete the pod, calling `kubectl logs` on the pod that remains returns nothing.
Calling `kubectl describe pod` will sometimes return the usual description of the pod; other times it returns nothing at all. I haven't noticed which sequence of steps causes one outcome over the other. The exact commands are sketched below.
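(The pod name is a placeholder; `--previous` is an extra worth noting, though I haven't confirmed it changes anything here.)

```
# Shows logs only while the pod has not been deleted
kubectl logs <podname>

# Might be worth trying for a restarted container (unverified here)
kubectl logs <podname> --previous

# Sometimes returns the full description, sometimes nothing
kubectl describe pod <podname>
```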
I run several thousand short-lived processes per day via Jobs and very rarely hit this. But when I do, the pods linger for days in an Unknown status. This looks very much like a bug, unless there is better documentation explaining what the Unknown status means.
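In case it helps narrow things down, here is a sketch of what I can capture the next time a pod gets stuck (node and job names are placeholders; my understanding is that Unknown usually means the kubelet on the pod's node stopped reporting its status, but I haven't confirmed that is what is happening here):

```
# Full pod object, including status.reason, conditions, and the node it ran on
kubectl get pod <podname> -o yaml

# Check whether that node is still Ready / reachable
kubectl get node <nodename>
kubectl describe node <nodename>

# The owning Job, to see whether it was marked complete despite the stuck pod
kubectl describe job <jobname>
```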