A few comments on the Fabric8 console...


S C NG

Aug 19, 2015, 10:34:57 PM
to fabric8
I can deploy my own Camel application to my Fabric8 Vagrant box on my local PC. I can see it on the "Apps" tab, but the application is not functioning as expected: on the "Pods" tab I can see that it keeps restarting itself. Obviously something went wrong, so I want to perform two actions:

1) Check the log file for troubleshooting... I can go to the "Pods" tab > select my application pod > click the "Logs" button to open the Kibana dashboard, which shows the log in table form and allows searching. But I wonder if there could be a view of the raw log file, or a link to download it. I can get it by ssh-ing into the Vagrant box, but in an environment with restricted access (e.g. production) I can only depend on the console to check the log.
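Where CLI access to the cluster is available, a small wrapper can stand in for the missing download link. A minimal sketch, assuming the `oc` client is installed and logged in; the pod name in the comment is made up:

```shell
#!/bin/sh
# Sketch: dump a pod's raw log to a local file for offline troubleshooting.
# Assumes the `oc` CLI is available; pod names here are hypothetical.

save_pod_log() {
  pod="$1"
  out="${2:-$pod.log}"
  if [ -z "$pod" ]; then
    echo "usage: save_pod_log <pod-name> [output-file]" >&2
    return 1
  fi
  # For a pod stuck in a restart loop, the interesting output is usually
  # in the PREVIOUS container instance, hence --previous:
  oc logs --previous "$pod" > "$out"
}

# save_pod_log my-camel-app-1-abcde   # hypothetical pod name
```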


2) Pause/stop the application (the pod) to break the infinite restart loop... On the UI I can only find the "Run" and "Delete" pod buttons; there is no option to "Suspend" or "Pause" a running pod. In the end I had to go to the "Apps" tab and resize the number of pods to 0. That works as a workaround, but it is not suitable for all scenarios. In particular, once the pod count is reduced to 0, the failing pod no longer appears on the Pods page, so I can no longer check its logs from the console to troubleshoot.
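For the record, the same resize-to-zero workaround is scriptable from the CLI. A hedged sketch, assuming the `oc` client; the RC name passed in is whatever `oc get rc` shows for the app:

```shell
#!/bin/sh
# Sketch: "pause" an app by scaling its replication controller to zero.
# Kubernetes has no pod-level pause; scaling the RC to 0 is the closest
# equivalent (presumably what the console's resize does as well).

pause_app() {
  rc="$1"
  if [ -z "$rc" ]; then
    echo "usage: pause_app <rc-name>" >&2
    return 1
  fi
  oc scale rc "$rc" --replicas=0
}

resume_app() {
  # bring the app back with one replica (or pass a count as $2)
  oc scale rc "$1" --replicas="${2:-1}"
}
```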

Just a few feelings and comments from my short time experimenting with the console.

S C NG

Aug 19, 2015, 11:40:11 PM
to fabric8
3) One more thing: after I fixed the issue and redeployed the app, it ran successfully without errors. But when I go to Pods > click the arrow button "Open a new window and connect to this container", a new browser tab opens but it is only a blank page. I remember that in an older version (2.1.11) the hawtio page with the Camel dashboard was shown...

[Inline image attachment]

James Strachan

Aug 20, 2015, 2:02:21 AM
to S C NG, fabric8
Agreed with all that; the console needs to improve to deal with that use case better. It's currently quite confusing when stuff doesn't start up correctly.

It's a little challenging: when an app fails to start, Kubernetes keeps recreating and deleting Docker containers, which makes things harder for the UI.

There are a couple of pending issues in this area:

Using the 'oc logs' or 'docker logs' commands on the CLI is useful too.

We've a few helper scripts here:

I find these two handy:

oc-log rcName
oc-bash rcName

Where rcName is the name of the replication controller (RC); the script then finds the first pod ID and uses that.

The 'oc-bash' script does an 'oc exec' on the first container in the first pod to give you an interactive bash shell, like doing 'docker exec', which is handy if you want to look around the file system, etc.
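For readers without the helper scripts to hand, the idea can be re-sketched in a few lines. This is not the actual oc-log/oc-bash source, just a guess at the shape: find the first pod whose name starts with the RC name in `oc get pods` output, then act on it (pod and RC names below are hypothetical):

```shell
#!/bin/sh
# Sketch of the oc-log / oc-bash idea. Assumes the `oc` CLI; the parsing
# of `oc get pods` (pod name in column 1, header on line 1) is an
# assumption about the output format.

first_pod() {
  # Reads `oc get pods` output on stdin; prints the first pod name that
  # starts with the given RC name, skipping the header line.
  awk -v rc="$1" 'NR > 1 && index($1, rc) == 1 { print $1; exit }'
}

oc_log() {
  pod=$(oc get pods | first_pod "$1")
  oc logs "$pod"
}

oc_bash() {
  pod=$(oc get pods | first_pod "$1")
  # interactive shell inside the first container, like `docker exec -it`
  oc exec -it "$pod" -- bash
}
```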

We should probably have a 'diagnosing failing pods' page on the website!

On point 2) the only k8s option is to scale the RC down to zero, but you're right that there would then be no pods left to view.

Setting a maximum restart count in k8s would be nice, e.g. try 3 times and then stop recreating pods. I'm not sure there's a REST API to find the last failed pod if k8s stops making pods after 3 attempts, but for this case it would be awesome to show a failed state on the Controllers tab with a link to the last pod's log.
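The client side of that idea can be sketched in a few lines of shell. To be clear, this is NOT a built-in k8s feature; the threshold, the column position, and the pod name are all illustrative assumptions:

```shell
#!/bin/sh
# Sketch of a client-side "maximum restart count": give up on a pod once
# its RESTARTS count passes a threshold. The threshold and the parsing
# of `oc get pods` output are assumptions, not an existing k8s feature.

MAX_RESTARTS=3

should_give_up() {
  restarts="$1"
  [ "$restarts" -ge "$MAX_RESTARTS" ]
}

# Against a live cluster this might be driven like so (hypothetical pod
# name; RESTARTS assumed to be column 4 of `oc get pods`):
#   restarts=$(oc get pods | awk '$1 == "my-app-1-abcde" { print $4 }')
#   if should_give_up "$restarts"; then
#     echo "pod keeps crashing; see: oc logs --previous my-app-1-abcde"
#   fi
```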

I wonder if there's anything else we can do to help?
--
You received this message because you are subscribed to the Google Groups "fabric8" group.
To unsubscribe from this group and stop receiving emails from it, send an email to fabric8+u...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


--
James
-------
Red Hat

Twitter: @jstrachan
Email: james.s...@gmail.com
hawtio: http://hawt.io/

Open Source DevOps and Integration
