Remote command executed on localhost instead of node


Francesco Mazzi

May 7, 2021, 9:38:05 AM
to rundeck-discuss
Hello, I'm testing a simple new job that runs a remote command on a node. I configured the node with password authentication and defined a new job (workflow->step->command) with a mount operation; in the Nodes tab I selected "Dispatch to Nodes".
When I start the job it fails. The execution log suggests the job runs on the node, but when I check /var/log/secure on the Rundeck server I see this:

May  7 14:08:26 xxxxx runuser: pam_unix(runuser-l:session): session opened for user rundeck by (uid=0)
May  7 14:13:00 xxxxx sudo: [lsass-pam] [module:pam_lsass]LsaPamGetCurrentPassword failed [error code: 49919]
May  7 14:13:00 xxxxx sudo: [lsass-pam] [module:pam_lsass]pam_sm_authenticate error [login:rundeck][error code:49919]
May  7 14:13:00 xxxxx sudo: rundeck : command not allowed ; TTY=unknown ; PWD=/var/lib/rundeck ; USER=root ; COMMAND=/bin/mount -t cifs xxxxxxxxxxxx

Output log:

[workflow] Begin execution: node-first
preparing for sequential execution on 1 nodes
Executing command on node: xxxxx, NodeEntryImpl{tags=[], attributes={nodename=xxxxx, hostname=xxxxx, osFamily=unix, sudo-password-storage-path=keys/xxxxx, sudo-command-enabled=true, ssh-authentication=password, username=xxxxx, tags=}, project='null'}

So why is the command executed on localhost as the rundeck user instead of on the remote node? Of course, I already checked that the remote node's IP is different from the server's IP.

Any ideas? Thank you.

rac...@rundeck.com

May 7, 2021, 10:01:57 AM
to rundeck-discuss
Hello Francesco,

Is that the full job output?

Could you share your job definition and the node entry so we can take a look? (Please hide or change any potentially sensitive information.)

Greetings!

Francesco Mazzi

May 10, 2021, 4:20:12 AM
to rundeck-discuss
This is the job definition:

<joblist>
  <job>
    <defaultTab>nodes</defaultTab>
    <description></description>
    <dispatch>
      <excludePrecedence>true</excludePrecedence>
      <keepgoing>false</keepgoing>
      <rankOrder>ascending</rankOrder>
      <successOnEmptyNodeFilter>false</successOnEmptyNodeFilter>
      <threadcount>1</threadcount>
    </dispatch>
    <executionEnabled>true</executionEnabled>
    <id>96303e53-08e8-4c4c-8ba4-f757577eb591</id>
    <loglevel>INFO</loglevel>
    <name>xxxxxx</name>
    <nodeFilterEditable>false</nodeFilterEditable>
    <nodefilters>
      <filter>xxxxxxx</filter>
    </nodefilters>
    <nodesSelectedByDefault>true</nodesSelectedByDefault>
    <plugins />
    <scheduleEnabled>true</scheduleEnabled>
    <sequence keepgoing='false' strategy='node-first'>
      <command>
        <exec>sudo /bin/mount -t cifs xxxxxxxx</exec>
      </command>
    </sequence>
    <uuid>96303e53-08e8-4c4c-8ba4-f757577eb591</uuid>
  </job>
</joblist>

This is the node entry:

<project>
<node name="xxxxxxx"
  osFamily="unix"
  username="xxxxxx"
  hostname="192.168.xxxxx"
  ssh-authentication="password"
  sudo-command-enabled="true"
  sudo-password-storage-path="keys/xxxxxxx"
  />
</project>

Rundeck 3.3.10
Thank you

Francesco Mazzi

May 10, 2021, 4:32:23 AM
to rundeck-discuss
This is the full job output:

[workflow] Begin execution: node-first
preparing for sequential execution on 1 nodes
Executing command on node: xxxxxx, NodeEntryImpl{tags=[], attributes={nodename=xxxxx, hostname=192.168.xxxxx, osFamily=unix, sudo-password-storage-path=keys/xxxxx, sudo-command-enabled=true, ssh-authentication=password, username=xxxxxx, tags=}, project='null'}
[wf:25d25a68-a4e3-4849-86a1-d6a8508e708a] Start EngineWorkflowExecutor
[wf:25d25a68-a4e3-4849-86a1-d6a8508e708a] Update conditional state: {before.step.1=true, after.step.1=false}
[wf:25d25a68-a4e3-4849-86a1-d6a8508e708a] start conditions for step [1]: []
[wf:25d25a68-a4e3-4849-86a1-d6a8508e708a] skip conditions for step [1]: [(step.1.completed == 'true')]
[wf:25d25a68-a4e3-4849-86a1-d6a8508e708a] Update conditional state: {after.step.2=false, before.step.2=true}
[wf:25d25a68-a4e3-4849-86a1-d6a8508e708a] start conditions for step [2]: [(after.step.1 == 'true')]
[wf:25d25a68-a4e3-4849-86a1-d6a8508e708a] skip conditions for step [2]: [(step.2.completed == 'true')]
[wf:25d25a68-a4e3-4849-86a1-d6a8508e708a] Update conditional state: {before.step.3=true, after.step.3=false}
[wf:25d25a68-a4e3-4849-86a1-d6a8508e708a] start conditions for step [3]: [(after.step.2 == 'true')]
[wf:25d25a68-a4e3-4849-86a1-d6a8508e708a] skip conditions for step [3]: [(step.3.completed == 'true')]
[wf:25d25a68-a4e3-4849-86a1-d6a8508e708a] Update conditional state: {before.step.4=true, after.step.4=false}
[wf:25d25a68-a4e3-4849-86a1-d6a8508e708a] start conditions for step [4]: [(after.step.3 == 'true')]
[wf:25d25a68-a4e3-4849-86a1-d6a8508e708a] skip conditions for step [4]: [(step.4.completed == 'true')]
[wf:25d25a68-a4e3-4849-86a1-d6a8508e708a] Update conditional state: {before.step.5=true, after.step.5=false}
[wf:25d25a68-a4e3-4849-86a1-d6a8508e708a] start conditions for step [5]: [(after.step.4 == 'true')]
[wf:25d25a68-a4e3-4849-86a1-d6a8508e708a] skip conditions for step [5]: [(step.5.completed == 'true')]
[wf:25d25a68-a4e3-4849-86a1-d6a8508e708a] Create rule engine with rules: RuleEngine{ruleSet=[Rule: Conditions([java.util.function.Predicate$$Lambda$733/527848366@3f0f6f36]) => DataState{state={step.2.skip=true}}, Rule: Conditions([(step.any.flowcontrol.halt == 'true')]) => DataState{state={workflow.done=true}}, Rule: Conditions([java.util.function.Predicate$$Lambda$733/527848366@2714f17b]) => DataState{state={step.1.skip=true}}, Rule: Conditions([(after.step.2 == 'true')]) => DataState{state={step.3.start=true}}, Rule: Conditions([java.util.function.Predicate$$Lambda$733/527848366@660c86aa]) => DataState{state={step.4.skip=true}}, Rule: Conditions([(workflow.keepgoing == 'false'), (step.any.state.failed == 'true')]) => DataState{state={workflow.done=true}}, Rule: Conditions([java.util.function.Predicate$$Lambda$733/527848366@2c00c3ef]) => DataState{state={step.5.skip=true}}, Rule: Conditions([]) => DataState{state={step.1.start=true}}, Rule: Conditions([(after.step.4 == 'true')]) => DataState{state={step.5.start=true}}, Rule: Conditions([java.util.function.Predicate$$Lambda$733/527848366@68326f86]) => DataState{state={step.3.skip=true}}, Rule: Conditions([(after.step.3 == 'true')]) => DataState{state={step.4.start=true}}, Rule: Conditions([(after.step.1 == 'true')]) => DataState{state={step.2.start=true}}]}
[wf:25d25a68-a4e3-4849-86a1-d6a8508e708a] Create workflow engine with state: StateLogger{state=DataState{state={job.url=http://rundeck.xxxxx:4440/project/xxxxx/execution/follow/18745, job.id=96303e53-08e8-4c4c-8ba4-f757577eb591, job.retryPrevExecId=0, after.step.2=false, job.loglevel=DEBUG, after.step.3=false, node.os-name=, after.step.1=false, job.retryInitialExecId=0, after.step.4=false, node.hostname=192.168.xxxxx, after.step.5=false, node.os-family=unix, job.user.name=admin, node.tags=, before.step.3=true, node.description=, before.step.4=true, before.step.5=true, node.username=xxxxxx, job.name=xxxxxxx, node.ssh-authentication=password, job.successOnEmptyNodeFilter=false, job.executionType=user, node.sudo-password-storage-path=keys/xxxxx, node.sudo-command-enabled=true, job.filter=xxxxxx, job.serverUUID=67166003-d200-4e7c-a2d6-b51a3a629858, job.wasRetry=false, job.project=xxxx, before.step.1=true, before.step.2=true, job.username=admin, node.os-arch=, job.retryAttempt=0, workflow.id=25d25a68-a4e3-4849-86a1-d6a8508e708a, node.os-version=, workflow.keepgoing=false, job.execid=18745, node.name=xxxxxx, job.serverUrl=http://rundeck.xxxxxxx:4440/, job.threadcount=1}}}
[wf:25d25a68-a4e3-4849-86a1-d6a8508e708a] Begin: Workflow begin
[wf:25d25a68-a4e3-4849-86a1-d6a8508e708a] WillProcessStateChange: state changes: init
[wf:25d25a68-a4e3-4849-86a1-d6a8508e708a] Update conditional state: {workflow.state=started}
[wf:25d25a68-a4e3-4849-86a1-d6a8508e708a] Update conditional state: {step.1.start=true}
[wf:25d25a68-a4e3-4849-86a1-d6a8508e708a] DidProcessStateChange: applied state changes and rules (changed? true): init - StateLogger{state=DataState{state={job.url=http://rundeck.xxxxx:4440/project/xxxxx/execution/follow/18745, job.id=96303e53-08e8-4c4c-8ba4-f757577eb591, job.retryPrevExecId=0, after.step.2=false, job.loglevel=DEBUG, after.step.3=false, node.os-name=, after.step.1=false, workflow.state=started, job.retryInitialExecId=0, after.step.4=false, node.hostname=192.168.xxxxx, after.step.5=false, node.os-family=unix, job.user.name=admin, node.tags=, before.step.3=true, node.description=, before.step.4=true, before.step.5=true, node.username=xxxx, job.name=xxxxx, node.ssh-authentication=password, job.successOnEmptyNodeFilter=false, job.executionType=user, node.sudo-password-storage-path=keys/xxxxxx, node.sudo-command-enabled=true, job.filter=xxxxxx, job.serverUUID=67166003-d200-4e7c-a2d6-b51a3a629858, job.wasRetry=false, job.project=xxxx, before.step.1=true, step.1.start=true, before.step.2=true, job.username=admin, node.os-arch=, job.retryAttempt=0, workflow.id=25d25a68-a4e3-4849-86a1-d6a8508e708a, node.os-version=, workflow.keepgoing=false, job.execid=18745, node.name=xxxx, job.serverUrl=http://rundeck.xxxxxx:4440/, job.threadcount=1}}}
[wf:25d25a68-a4e3-4849-86a1-d6a8508e708a] WillRunOperation: operation starting: Step{stepNum=1, label='null'}
[wf:25d25a68-a4e3-4849-86a1-d6a8508e708a] LoopProgress: Pending(5) => run(1), skip(0), remain(4)
[workflow] Begin step: 1,NodeDispatch
1: Workflow step executing: CommandItem{command=[8 words]}
preparing for sequential execution on 1 nodes
Executing command on node: xxxxx, NodeEntryImpl{tags=[], attributes={nodename=xxxx, hostname=192.168.xxxxx, osFamily=unix, sudo-password-storage-path=keys/xxxx, sudo-command-enabled=true, ssh-authentication=password, username=xxxx, tags=}, project='null'}
[workflow] beginExecuteNodeStep(xxxxxx): NodeDispatch: CommandItem{command=[8 words]}
using charset: null
Current OS is Linux
Adding reference: ant.PropertyHelper
Project base dir set to: /var/lib/rundeck
Setting environment variable: RD_JOB_ID=96303e53-08e8-4c4c-8ba4-f757577eb591
Setting environment variable: RD_JOB_USERNAME=admin
Setting environment variable: RD_NODE_HOSTNAME=192.168.xxxx
Setting environment variable: RD_JOB_PROJECT=xxxxx
Setting environment variable: RD_JOB_NAME=xxxxx
Setting environment variable: RD_NODE_OS_ARCH=
Setting environment variable: RD_NODE_OS_VERSION=
Setting environment variable: RD_NODE_NAME=xxxxx
Setting environment variable: RD_JOB_THREADCOUNT=1
Setting environment variable: RD_JOB_RETRYATTEMPT=0
Setting environment variable: RD_NODE_SUDO_PASSWORD_STORAGE_PATH=keys/xxxx
Setting environment variable: RD_JOB_USER_NAME=admin
Setting environment variable: RD_NODE_SSH_AUTHENTICATION=password
Setting environment variable: RD_JOB_LOGLEVEL=DEBUG
Setting environment variable: RD_NODE_OS_NAME=
Setting environment variable: RD_JOB_SERVERUUID=67166003-d200-4e7c-a2d6-b51a3a629858
Setting environment variable: RD_NODE_OS_FAMILY=unix
Setting environment variable: RD_JOB_EXECID=18745
Setting environment variable: RD_NODE_USERNAME=xxxx
Setting environment variable: RD_NODE_SUDO_COMMAND_ENABLED=true
Setting environment variable: RD_NODE_TAGS=
Setting environment variable: RD_JOB_RETRYPREVEXECID=0
Setting environment variable: RD_JOB_SERVERURL=http://rundeck.xxxxx:4440/
Setting environment variable: RD_JOB_EXECUTIONTYPE=user
Setting environment variable: RD_JOB_WASRETRY=false
Setting environment variable: RD_JOB_SUCCESSONEMPTYNODEFILTER=false
Setting environment variable: RD_JOB_RETRYINITIALEXECID=0
Setting environment variable: RD_NODE_DESCRIPTION=
Setting environment variable: RD_JOB_FILTER=xxxxx
Executing '/bin/sh' with arguments: '-c' 'sudo /bin/mount -t cifs xxxxxx' The ' characters around the executable and arguments are not part of the command.
Execute:Java13CommandLauncher: Executing '/bin/sh' with arguments: '-c' 'sudo /bin/mount -t cifs xxxxxx' The ' characters around the executable and arguments are not part of the command.
We trust you have received the usual lecture from the local System
Administrator. It usually boils down to these three things:
#1) Respect the privacy of others.
#2) Think before you type.
#3) With great power comes great responsibility.
sudo: no tty present and no askpass program specified
Setting project property: 1620390185672.node.xxxxx.LocalNodeExecutor.result -> 1
Result: 1
Failed: NonZeroResultCode: Result code was 1
[workflow] finishExecuteNodeStep(xxxxx): NodeDispatch: NonZeroResultCode: Result code was 1
1: Workflow step finished, result: Dispatch failed on 1 nodes: [xxxx: NonZeroResultCode: Result code was 1 + {dataContext=MultiDataContextImpl(map={ContextView(step:1, node:xxxxx)=BaseDataContext{{exec={exitCode=1}}}, ContextView(node:xxxxx)=BaseDataContext{{exec={exitCode=1}}}}, base=null)} ]
[workflow] Finish step: 1,NodeDispatch
[wf:25d25a68-a4e3-4849-86a1-d6a8508e708a] OperationFailed: operation completed, success? false: OperationCompleted(identity=[1], stepNum=1, newState=DataState{state={step.1.result.failedNodes=xxxxx, step.1.completed=true, step.any.state.failed=true, before.step.1=false, step.1.state=failure, after.step.1=true}}, stepResultCapture=StepResultCapture{stepResult=Dispatch failed on 1 nodes: [xxxxx: NonZeroResultCode: Result code was 1 + {dataContext=MultiDataContextImpl(map={ContextView(step:1, node:xxxxx)=BaseDataContext{{exec={exitCode=1}}}, ContextView(node:xxxxx)=BaseDataContext{{exec={exitCode=1}}}}, base=null)} ], stepSuccess=false, statusString='null', controlBehavior=null, resultData=MultiDataContextImpl(map={ContextView(step:1, node:xxxxx)=BaseDataContext{{exec={exitCode=1}}}, ContextView(node:xxxxx)=BaseDataContext{{exec={exitCode=1}}}}, base=null)}, success=false)
[wf:25d25a68-a4e3-4849-86a1-d6a8508e708a] WillProcessStateChange: state changes: [1]
[wf:25d25a68-a4e3-4849-86a1-d6a8508e708a] Update conditional state: {step.1.result.failedNodes=xxxxx, step.1.completed=true, step.any.state.failed=true, before.step.1=false, step.1.state=failure, after.step.1=true}
[wf:25d25a68-a4e3-4849-86a1-d6a8508e708a] Update conditional state: {workflow.done=true, step.1.start=true, step.2.start=true, step.1.skip=true}
[wf:25d25a68-a4e3-4849-86a1-d6a8508e708a] DidProcessStateChange: applied state changes and rules (changed? true): [1] - StateLogger{state=DataState{state={job.url=http://rundeck.xxxxx:4440/project/xxxxx/execution/follow/18745, workflow.done=true, job.retryPrevExecId=0, job.loglevel=DEBUG, node.os-name=, node.hostname=192.168.xxxx, node.tags=, job.name=xxxxx, step.1.state=failure, node.ssh-authentication=password, job.successOnEmptyNodeFilter=false, node.sudo-password-storage-path=keys/xxxx, node.sudo-command-enabled=true, job.serverUUID=67166003-d200-4e7c-a2d6-b51a3a629858, step.1.completed=true, job.wasRetry=false, job.project=xxxx, step.1.start=true, job.username=admin, workflow.id=25d25a68-a4e3-4849-86a1-d6a8508e708a, node.os-version=, workflow.keepgoing=false, step.1.skip=true, job.id=96303e53-08e8-4c4c-8ba4-f757577eb591, step.any.state.failed=true, after.step.2=false, after.step.3=false, after.step.1=true, step.2.start=true, workflow.state=started, job.retryInitialExecId=0, after.step.4=false, after.step.5=false, node.os-family=unix, job.user.name=admin, step.1.result.failedNodes=xxxx, before.step.3=true, node.description=, before.step.4=true, before.step.5=true, node.username=xxxxx, job.executionType=user, job.filter=xxxx, before.step.1=false, before.step.2=true, node.os-arch=, job.retryAttempt=0, job.execid=18745, node.name=xxxxx, job.serverUrl=http://rundeck.xxxxx:4440/, job.threadcount=1}}}
[wf:25d25a68-a4e3-4849-86a1-d6a8508e708a] WorkflowEndState: Workflow end state reached.
[wf:25d25a68-a4e3-4849-86a1-d6a8508e708a] WillShutdown: Workflow engine shutting down (interrupted? false)
[wf:25d25a68-a4e3-4849-86a1-d6a8508e708a] IncompleteOperations: Some operations were not run: 4
[wf:25d25a68-a4e3-4849-86a1-d6a8508e708a] Complete: Workflow complete: [Step{stepNum=1, label='null'}: OperationCompleted(identity=[1], stepNum=1, newState=DataState{state={step.1.result.failedNodes=xxxxx, step.1.completed=true, step.any.state.failed=true, before.step.1=false, step.1.state=failure, after.step.1=true}}, stepResultCapture=StepResultCapture{stepResult=Dispatch failed on 1 nodes: [xxxxx: NonZeroResultCode: Result code was 1 + {dataContext=MultiDataContextImpl(map={ContextView(step:1, node:xxxxx)=BaseDataContext{{exec={exitCode=1}}}, ContextView(node:xxxxx)=BaseDataContext{{exec={exitCode=1}}}}, base=null)} ], stepSuccess=false, statusString='null', controlBehavior=null, resultData=MultiDataContextImpl(map={ContextView(step:1, node:xxxxx)=BaseDataContext{{exec={exitCode=1}}}, ContextView(node:xxxxx)=BaseDataContext{{exec={exitCode=1}}}}, base=null)}, success=false)]
[wf:25d25a68-a4e3-4849-86a1-d6a8508e708a] Step [5] did not run. start conditions: [(after.step.4 == 'true')], skip conditions: [(step.5.completed == 'true')]
[wf:25d25a68-a4e3-4849-86a1-d6a8508e708a] Step [3] did not run. start conditions: [(after.step.2 == 'true')], skip conditions: [(step.3.completed == 'true')]
[wf:25d25a68-a4e3-4849-86a1-d6a8508e708a] Step [2] did not run. start conditions: [(after.step.1 == 'true')], skip conditions: [(step.2.completed == 'true')]
[wf:25d25a68-a4e3-4849-86a1-d6a8508e708a] Step [4] did not run. start conditions: [(after.step.3 == 'true')], skip conditions: [(step.4.completed == 'true')]
[workflow] Finish execution: node-first: [Workflow result: , step failures: {1=Dispatch failed on 1 nodes: [xxxx: NonZeroResultCode: Result code was 1 + {dataContext=MultiDataContextImpl(map={ContextView(step:1, node:xxxxx)=BaseDataContext{{exec={exitCode=1}}}, ContextView(node:xxxxx)=BaseDataContext{{exec={exitCode=1}}}}, base=null)} ]}, Node failures: {xxxxx=[NonZeroResultCode: Result code was 1 + {dataContext=MultiDataContextImpl(map={ContextView(step:1, node:xxxxx)=BaseDataContext{{exec={exitCode=1}}}, ContextView(node:xxxxx)=BaseDataContext{{exec={exitCode=1}}}}, base=null)} ]}, status: failed]
[Workflow result: , step failures: {1=Dispatch failed on 1 nodes: [xxxxx: NonZeroResultCode: Result code was 1 + {dataContext=MultiDataContextImpl(map={ContextView(step:1, node:xxxxx)=BaseDataContext{{exec={exitCode=1}}}, ContextView(node:xxxxx)=BaseDataContext{{exec={exitCode=1}}}}, base=null)} ]}, Node failures: {xxxxx=[NonZeroResultCode: Result code was 1 + {dataContext=MultiDataContextImpl(map={ContextView(step:1, node:xxxxx)=BaseDataContext{{exec={exitCode=1}}}, ContextView(node:xxxxx)=BaseDataContext{{exec={exitCode=1}}}}, base=null)} ]}, status: failed]
Execution failed: 18745 in project xxxxx: [Workflow result: , step failures: {1=Dispatch failed on 1 nodes: [xxxxx: NonZeroResultCode: Result code was 1 + {dataContext=MultiDataContextImpl(map={ContextView(step:1, node:xxxxx)=BaseDataContext{{exec={exitCode=1}}}, ContextView(node:xxxxx)=BaseDataContext{{exec={exitCode=1}}}}, base=null)} ]}, Node failures: {xxxxxx=[NonZeroResultCode: Result code was 1 + {dataContext=MultiDataContextImpl(map={ContextView(step:1, node:xxxxx)=BaseDataContext{{exec={exitCode=1}}}, ContextView(node:xxxxx)=BaseDataContext{{exec={exitCode=1}}}}, base=null)} ]}, status: failed]

On Friday, May 7, 2021 at 4:01:57 PM UTC+2, rac...@rundeck.com wrote:

rac...@rundeck.com

May 10, 2021, 10:39:14 AM
to rundeck-discuss

Hi Francesco,

Could you try adding the ssh-password-storage-path attribute to your node? (It should point to the user password stored in Key Storage; take a look at this.)

<?xml version="1.0" encoding="UTF-8"?>
<project>
    <node name="node00" 
    description="node00"
    osArch="amd64"
    osFamily="unix" 
    osVersion="3.10.0-862.11.6.el7.x86_64"
    username="test" 
    hostname="192.168.33.20" 
    ssh-authentication="password" 
    ssh-password-storage-path="keys/node00"
    sudo-command-enabled="true"
    sudo-password-storage-path="keys/node00"/>
</project>
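
If the password isn't in Key Storage yet, you can upload it through the Key Storage API. A minimal sketch, where the hostname, token variable, and the secret itself are placeholders and the path must match the node attributes above:

# Hypothetical example: store the node user's password at keys/node00.
# POST creates a new entry (PUT overwrites an existing one); the content
# type tells Rundeck to store the payload as a password.
curl -X POST \
  -H "X-Rundeck-Auth-Token: $RD_TOKEN" \
  -H "Content-Type: application/x-rundeck-data-password" \
  --data-binary "node-user-password" \
  "http://rundeck.example.com:4440/api/11/storage/keys/node00"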

In addition, use the following node filter in your job's node filter definition: name: your_node_name.

I tested with the following one:

- defaultTab: nodes
  description: ''
  executionEnabled: true
  id: 96303e53-08e8-4c4c-8ba4-f757577eb591
  loglevel: INFO
  name: Example
  nodeFilterEditable: false
  nodefilters:
    dispatch:
      excludePrecedence: true
      keepgoing: false
      rankOrder: ascending
      successOnEmptyNodeFilter: false
      threadcount: '1'
    filter: 'name: node00'
  nodesSelectedByDefault: true
  plugins:
    ExecutionLifecycle: null
  scheduleEnabled: true
  sequence:
    commands:
    - exec: sudo cat /etc/shadow
    keepgoing: false
    strategy: node-first
  uuid: 96303e53-08e8-4c4c-8ba4-f757577eb591

Hope it helps!

Francesco Mazzi

May 10, 2021, 11:46:16 AM
to rundeck...@googlegroups.com
I made these modifications but nothing changed.
Thank you.


rac...@rundeck.com

May 10, 2021, 11:58:55 AM
to rundeck-discuss

In addition, your log output shows sudo: no tty present and no askpass program specified; please take a look at this, this, and this.
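
For reference, once the job really reaches the remote node, that message usually means requiretty is enabled in the remote sudoers. A minimal sketch of the usual workaround, assuming a placeholder login user named nodeuser:

# /etc/sudoers on the remote node (edit with visudo).
# "nodeuser" is a placeholder for the SSH login user Rundeck connects as.
Defaults:nodeuser !requiretty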

Regards!

Francesco Mazzi

May 10, 2021, 2:40:21 PM
to rundeck...@googlegroups.com
That's expected, because in my case Rundeck connects to localhost instead of the node, and on localhost this user has no TTY and isn't in sudoers.
I think I'm missing something simple, but I don't know what.

Francesco Mazzi

May 11, 2021, 6:32:01 AM
to rundeck-discuss
I ran more tests and created a new test job with two workflow steps: the first is a remote command (whoami) and the second is an inline script (whoami).
The result: the remote command works, but it's executed on localhost and returns rundeck; the inline script is executed on the node, but it returns this error:

chmod: cannot access ‘/tmp/27-18854-xxxx-dispatch-script.tmp.sh’: No such file or directory

I think this is because it runs as the rundeck user instead of the one defined on the node, while the file's permissions belong to the node user.
So now I have two questions:

1) Why is the command executed on localhost while the inline script is executed on the node? What's the difference?
2) Why is the inline script executed as the rundeck user?

Thank you

rac...@rundeck.com

May 11, 2021, 7:30:11 PM
to rundeck-discuss
Hi Francesco,

If you have set the job to dispatch to a remote node, both steps should be executed on the remote node (similar to my example).

What kind of Rundeck instance are you using (RPM, DEB, WAR, Docker)?
Are you using another node model source in your project?
Could you share a screenshot of the job output so we can take a look?

Could you retry following this?

Thanks!

Francesco Mazzi

May 13, 2021, 8:43:28 AM
to rundeck-discuss
I'm using Rundeck 3.3.10 on CentOS 7.9, installed from RPM. It isn't a fresh installation; it's an old version 2 that has been upgraded many times.
This is the only node model source in the project.
I tried executing the command "pwd" on both localhost and the node; this is the result:
[screenshot: rundeck.png — job output showing /var/lib/rundeck as the working directory for both nodes]
As you can see, the command seems to be executed on localhost for both nodes (there is no /var/lib/rundeck folder on the node).

Francesco Mazzi

May 18, 2021, 6:44:22 AM
to rundeck-discuss
Are there any other tests I can try? Thank you.

rac...@rundeck.com

May 18, 2021, 9:50:19 AM
to rundeck-discuss
Hi Francesco.

1. You can disable the node cache and try again, just for testing (see the snippet below).
2. Check carefully all node sources and verify that you're not duplicating the localhost.
3. Also, try recreating the remote node from scratch following this step-by-step guide.
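
A minimal sketch of the cache toggle for step 1, using Rundeck's project-level node cache setting in project.properties:

# project.properties -- disable the node resource-model cache (testing only)
project.nodeCache.enabled=false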

Regards.

Francesco Mazzi

May 19, 2021, 4:45:17 AM
to rundeck-discuss
I finally solved it: in Project Settings, "Edit Configuration", the "Default Node Executor" was set to "Local". I changed it to "SSH" and now it works. I knew it was something trivial.
Thanks for the support.
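
For anyone hitting the same issue, the GUI change above corresponds roughly to these entries in the project configuration (provider names assume the built-in JSch SSH plugins that ship with Rundeck 3.3):

# project.properties -- dispatch over SSH instead of running locally
service.NodeExecutor.default.provider=jsch-ssh
# matching SCP file copier for script steps
service.FileCopier.default.provider=jsch-scp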
