[JIRA] (JENKINS-59781) sshCommand fails to flush full log on failure


jiangtyd@gmail.com (JIRA)

Oct 14, 2019, 4:33:03 PM
to jenkinsc...@googlegroups.com
Damien J created an issue
 
Jenkins / Bug JENKINS-59781
sshCommand fails to flush full log on failure
Issue Type: Bug
Assignee: Naresh Rayapati
Components: ssh-steps-plugin
Created: 2019-10-14 20:32
Priority: Minor
Reporter: Damien J

I have a Jenkins job where I'm running Ansible through sshCommand:

sshCommand remote: remote, command: "ansible-playbook lb.yml"
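
For reference, the `remote` here is the ssh-steps connection map. A minimal definition might look like this (host, user, and key path below are placeholders, not the actual values from my job):

```groovy
// Hypothetical remote definition for the ssh-steps plugin; all values are placeholders.
def remote = [:]
remote.name = 'lb-server'
remote.host = 'server.example.com'
remote.user = 'deploy'
remote.identityFile = '/home/jenkins/.ssh/id_rsa'
remote.allowAnyHosts = true

sshCommand remote: remote, command: "ansible-playbook lb.yml"
```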

However, if the last step of this Ansible playbook fails, the plugin throws an exception but doesn't print the error logs for that final step, even though running it normally in a shell would print additional logs.

Jenkins log:

TASK [lb : Configure nginx conf] ******************************************
changed: [server]
TASK [lb : Restart nginx] *************************************************
[Pipeline] }
[Pipeline] // withCredentials
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Also: hudson.remoting.Channel$CallSiteStackTrace: Remote call to JNLP4-connect connection from <server-ip>
        at hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1741)
        at hudson.remoting.UserRequest$ExceptionResponse.retrieve(UserRequest.java:356)
        at hudson.remoting.Channel.call(Channel.java:955)
        at org.jenkinsci.plugins.sshsteps.steps.CommandStep$Execution.run(CommandStep.java:72)
        at org.jenkinsci.plugins.sshsteps.util.SSHStepExecution.lambda$start$0(SSHStepExecution.java:84)
...
org.hidetake.groovy.ssh.session.BadExitStatusException: Command returned exit status 2: ansible-playbook lb.yml

Shell log:

TASK [lb : Configure nginx conf] ******************************************
changed: [server]
TASK [lb : Restart nginx] *************************************************
fatal: [server]: FAILED! => {"changed": false, "msg": "Unable to start service nginx: Job for nginx.service failed because the control process exited with error code.\nSee \"systemctl status nginx.service\" and \"journalctl -xe\" for details.\n"}
	to retry, use: --limit @/ansible/lb.retry

PLAY RECAP ****************************************************************
server   : ok=1   changed=0    unreachable=0    failed=1   

I checked, and all of these log lines go to stdout.

These additional log lines are necessary for debugging the issue (the current workaround is to execute the Ansible playbook directly from the node, which isn't great), so it would be great to flush them before throwing this BadExitStatusException.

 
This message was sent by Atlassian Jira (v7.13.6#713006-sha1:cc4451f)

jiangtyd@gmail.com (JIRA)

Oct 14, 2019, 4:34:03 PM
to jenkinsc...@googlegroups.com
Damien J updated an issue
Change By: Damien J
I have a Jenkins job where I'm running Ansible through sshCommand:

{{sshCommand remote: remote, command: "ansible-playbook lb.yml"}}

However, if the last step of this Ansible playbook fails, the plugin throws an exception, but doesn't print the error logs for that final step. Running it normally in a shell would print additional logs.

Jenkins log:
{noformat}

TASK [lb : Configure nginx conf] ******************************************
changed: [server]
TASK [lb : Restart nginx] *************************************************
[Pipeline] }
[Pipeline] // withCredentials
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Also: hudson.remoting.Channel$CallSiteStackTrace: Remote call to JNLP4-connect connection from <server-ip>
        at hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1741)
        at hudson.remoting.UserRequest$ExceptionResponse.retrieve(UserRequest.java:356)
        at hudson.remoting.Channel.call(Channel.java:955)
        at org.jenkinsci.plugins.sshsteps.steps.CommandStep$Execution.run(CommandStep.java:72)
        at org.jenkinsci.plugins.sshsteps.util.SSHStepExecution.lambda$start$0(SSHStepExecution.java:84)
... (additional stack trace lines omitted)
org.hidetake.groovy.ssh.session.BadExitStatusException: Command returned exit status 2: ansible-playbook lb.yml
{noformat}
Shell log:
{noformat}

TASK [lb : Configure nginx conf] ******************************************
changed: [server]
TASK [lb : Restart nginx] *************************************************
fatal: [server]: FAILED! => {"changed": false, "msg": "Unable to start service nginx: Job for nginx.service failed because the control process exited with error code.\nSee \"systemctl status nginx.service\" and \"journalctl -xe\" for details.\n"}
to retry, use: --limit @/ansible/lb.retry

PLAY RECAP ****************************************************************
server   : ok=1   changed=0    unreachable=0    failed=1   
{noformat}

I checked, and all of these log lines go to stdout.

These additional log lines are necessary for debugging the issue (the current workaround is to execute the Ansible playbook directly from the node, which isn't great), so it would be great to flush them before throwing this BadExitStatusException.

jiangtyd@gmail.com (JIRA)

Oct 14, 2019, 4:34:04 PM
to jenkinsc...@googlegroups.com
Damien J updated an issue
I have a Jenkins job where I'm running Ansible through sshCommand:

{{sshCommand remote: remote, command: "ansible-playbook lb.yml"}}

However, if the last step of this Ansible playbook fails, the plugin throws an exception, but doesn't print the error logs for that final step. Running it normally in a shell would print additional logs.

naresh.rayapati@gmail.com (JIRA)

Oct 14, 2019, 6:27:03 PM
to jenkinsc...@googlegroups.com
Naresh Rayapati commented on Bug JENKINS-59781
 
Re: sshCommand fails to flush full log on failure

Damien J Thank you for reporting this. This is a known behavior; the workaround is to wait a couple of seconds at the end of the pipeline so the logs can catch up. For more info, refer to the comment on this older Jira issue: https://issues.jenkins-ci.org/browse/JENKINS-57765

Let me know if you are still facing the issue after applying this workaround. Thank you again.

naresh.rayapati@gmail.com (JIRA)

Oct 23, 2019, 9:12:02 PM
to jenkinsc...@googlegroups.com

damien@addepar.com (JIRA)

Oct 29, 2019, 3:36:02 PM
to jenkinsc...@googlegroups.com

Hi Naresh Rayapati, sorry for the late response; I've been focused on some other things in the past week.

I again had a failure in the deploy, so I tried again.

Adding the post block

    post {
        failure {
            sleep 5
        }
    }

to the end of my pipeline does, in fact, display the error.

This is not ideal, but should get the job done for now...
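
For anyone on scripted pipeline, the same workaround can presumably be expressed with try/finally so the short wait also runs on failure (a sketch only; the 2-second delay is a guess at how long the plugin's log handler needs to drain):

```groovy
// Scripted-pipeline sketch of the same workaround: always give the
// plugin's log output a moment to catch up before the build finishes.
node {
    try {
        sshCommand remote: remote, command: "ansible-playbook lb.yml"
    } finally {
        sleep 2   // let buffered ssh-steps output reach the build log
    }
}
```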

christian.ciach@gmail.com (JIRA)

Mar 31, 2020, 8:45:02 AM
to jenkinsc...@googlegroups.com

This issue applies not only to failures but to all logging from this plugin (even its own status logging, like "Executing command on name[server]: mycommand sudo: false").

Looking at the code, it seems to me that the likely issue is a missing flush()-call to the CustomLogHandler (or maybe the rootLogger) at the end of this method: https://github.com/jenkinsci/ssh-steps-plugin/blob/c35e7db86e975b257343179fd4f08dd0af8cbb42/src/main/groovy/org/jenkinsci/plugins/sshsteps/SSHService.groovy#L55

That said, is it really a good idea to register a new CustomLogHandler every time something gets logged?
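
To illustrate the suspected mechanics (a minimal java.util.logging sketch; `BufferingHandler` is a stand-in I made up, not the plugin's actual CustomLogHandler): a handler that buffers records loses its tail unless someone calls flush(), and re-adding a handler per message piles up handlers on the shared logger:

```groovy
import java.util.logging.Handler
import java.util.logging.Level
import java.util.logging.LogRecord
import java.util.logging.Logger

// Buffering handler standing in for the plugin's CustomLogHandler.
class BufferingHandler extends Handler {
    final List<String> buffer = []
    void publish(LogRecord record) { buffer << record.message }
    void flush() { buffer.each { println it }; buffer.clear() }
    void close() { flush() }
}

def logger = Logger.getLogger('org.hidetake')
def handler = new BufferingHandler()
logger.addHandler(handler)

logger.log(Level.INFO, 'Command returned exit status 2')
// Without an explicit flush() before the exception propagates,
// that last record would stay in the buffer and never reach the build log:
handler.flush()
logger.removeHandler(handler)  // forgetting this leaks one handler per call
```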


christian.ciach@gmail.com (JIRA)

Mar 31, 2020, 8:51:10 AM
to jenkinsc...@googlegroups.com
Christian Ciach edited a comment on Bug JENKINS-59781
This issue applies not only to failures but to all logging from this plugin (even its own status logging, like {{Executing command on name[server]: mycommand sudo: false}}).

Looking at the code, it seems to me that the likely issue is a missing flush()-call to the CustomLogHandler (or maybe the rootLogger) at the end of this method: [https://github.com/jenkinsci/ssh-steps-plugin/blob/c35e7db86e975b257343179fd4f08dd0af8cbb42/src/main/groovy/org/jenkinsci/plugins/sshsteps/SSHService.groovy#L55]

That said, is it really a good idea to register a new CustomLogHandler every time something gets logged? The rootLogger is most likely static, so adding new handlers for each and every message will eventually lead to a ton of CustomLogHandlers that are probably never garbage collected.

christian.ciach@gmail.com (JIRA)

Mar 31, 2020, 10:04:03 AM
to jenkinsc...@googlegroups.com
Christian Ciach edited a comment on Bug JENKINS-59781
This issue applies not only to failures but to all logging from this plugin (even its own status logging, like {{Executing command on name[server]: mycommand sudo: false}}).

Looking at the code, it seems to me that the likely issue is a missing flush()-call to the CustomLogHandler (or maybe the rootLogger) at the end of this method: [https://github.com/jenkinsci/ssh-steps-plugin/blob/c35e7db86e975b257343179fd4f08dd0af8cbb42/src/main/groovy/org/jenkinsci/plugins/sshsteps/SSHService.groovy#L55]

That said, is it really a good idea to register a new CustomLogHandler every time something gets logged? The rootLogger is most likely static, so adding new handlers for each and every message will eventually lead to a ton of CustomLogHandlers that are probably never garbage collected.


Edit: I also wonder why this plugin logs its own messages through the "org.hidetake" logger instead of using its own logger.
