Agreed with all that; the console needs to handle that use case better. It's currently quite confusing when things don't start up correctly.
It's a little challenging though: when an app fails to start, kubernetes keeps recreating and deleting docker containers, which makes things a little harder on the UI side.
There are a couple of pending issues in this area:
Using the 'oc logs' or 'docker logs' commands on the CLI is useful too.
We've a few helper scripts here:
I find these two handy:
oc-log rcName
oc-bash rcName
Where rcName is the name of the RC - the script then finds the first pod ID and uses that.
The 'oc-bash' does an 'oc exec' on the first container in the first pod to give you an interactive bash shell - like doing 'docker exec' - which is handy if you want to look around the file system etc.
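Roughly what those helpers boil down to (just a sketch - the real scripts may differ, and this assumes pods are named after the RC in the usual rcName-xxxxx style):

#!/bin/bash
# sketch of oc-log: tail the log of the first pod created by an RC
rcName=$1
# find the first pod whose name starts with the RC name
pod=$(oc get pods | grep "^${rcName}-" | head -1 | awk '{print $1}')
oc logs $pod

# sketch of oc-bash: same pod lookup, then open an interactive shell
# in its first container (like 'docker exec -it ... bash')
oc exec -it $pod -- bash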
We should probably have a 'diagnosing failing pods' page on the website!
On point 2), the only k8s option is to scale the RC down to zero - but you're right, there would then be no pods to view.
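From the CLI that's just the usual scale command, e.g.:

oc scale rc rcName --replicas=0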
Being able to set a maximum restart count in k8s would be nice - e.g. try 3 times or so, then stop recreating pods. I'm not sure there's a REST API to find the last failed pod if k8s stops making pods after 3 attempts, though - but for this case it'd be awesome to show a failed state on the Controllers tab with a link to the last pod's log.
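In the meantime, a rough CLI workaround for spotting the dying pods (assuming the RC's pods can be found via a label like name=rcName - adjust to whatever selector the RC really uses; podName below is whatever the first command reports) is:

# list the pods for the RC and check the STATUS / RESTARTS columns
oc get pods -l name=rcName
# then inspect the suspect pod's events and last termination state
oc describe pod podName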
I wonder if there's anything else we can do to help?