groovy directory listing strangeness, node tag predicate not being honored.


Timothy Wojtaszek

Feb 20, 2015, 4:01:35 PM
to jenkins...@googlegroups.com
In my cross-platform Workflow build script, I am trying to iterate over directories, and I ran into some behavior I don't understand with Groovy and the File object.  My first attempt used new File(path).eachDir {...}: it works in the Script Console, but not in a job, where I simply got an empty list even though the same code in the Script Console gave the expected array.  Then I tried new File(path).listFiles(), and that worked in a job on OS X.  Moving to Linux, it failed, and I had to fall back to sh "ls ${path}" and parse the output to get the strings.

If that isn't odd enough, my node() blocks are predicated on a tag, but I observe my darwin machine's directory listing being printed by my linux-tagged branch (output below the workflow script).

Anything obviously wrong?  The directory listing works with the shell workaround, but the tag predicates not being honored is a bit concerning.

Jenkins 1.598
OS X 10.10.2

java version "1.8.0_31"

Ubuntu 12.04
java 1.6.0_34

-- workflow script, change tag as appropriate

def linuxtag = "linux_amd64"
def darwintag = "darwin_amd64"

def getDirsSH(root) {
  def dirs = []

  sh "ls ${root} > clib_list.txt"
  def lines = readFile('clib_list.txt').split("\r?\n")
  lines.each { dirs << it }
  return dirs
}

def getDirsEach(root) {
  def dirs = []
  new File(root).eachDir {  dirs << it.name  }
  return dirs
}

def getDirsList(root) {
  def dirs = []
  new File(root).listFiles().each {  dirs << it.name  }
  return dirs
}

def getDirs(root, plat) {
  println "-- SH ${plat} " + getDirsSH(root)

  println "-- List ${plat}" + getDirsList(root)

  println "-- Each ${plat}" +  getDirsEach(root)
}
def platforms = [
  linuxtag:  { node(linuxtag)  { getDirs('/tmp', linuxtag)  } },
  darwintag: { node(darwintag) { getDirs('/tmp', darwintag) } }
]

node() {
  parallel platforms
}

-- OUTPUT
-- NOTE: my darwin machine is hosting a few slaves, so the /tmp/slave2 and /tmp/slave3 directories should appear in all outputs for darwin.  They don't.  Furthermore, they _are_ appearing in the output of my linux-tagged job?

Started by user tim
Running: Allocate node : Start
Running on Slave4 in /home/tester/Jenkins/workspace/wf-test
Running: Allocate node : Body : Start
Running: Execute sub-workflows in parallel : Start
Running: Parallel branch: linuxtag
Running: Parallel branch: darwintag
Running: Allocate node : Start
Running on Slave4 in /home/tester/Jenkins/workspace/wf-test@2
Running: Allocate node : Start
Running on Slave2 in /tmp/slave2/workspace/wf-test
Running: Allocate node : Body : Start
Running: Allocate node : Body : Start
Running: Shell Script
[wf-test@2] Running shell script
Running: Shell Script
+ ls /tmp
[wf-test] Running shell script
+ ls /tmp
Running: Read file from workspace
Running: Print Message
-- SH linux_amd64 [hsperfdata_tester, jffi5202641646477505413.tmp, jna--877169473, ssh-DqUiHK1589, ssh-kWoGZKsM1644, vagrant-puppet-3]
Running: Print Message
-- List linux_amd64[.keystone_install_lock, .vbox-timwojtaszek-ipc, com.apple.launchd.370pvnOGoZ, com.apple.launchd.6qgCPTwPRV, com.apple.launchd.7HpNisaxk2, com.apple.launchd.EtuALQ56t5, com.apple.launchd.i9VlHFzS1P, com.apple.launchd.vcaQbm4un1, com.apple.launchd.Wgio1P47ZU, com.apple.launchd.whfRiz1zjn, com.apple.launchd.Xv0xkmvFqi, KSOutOfProcessFetcher.0.r55jifrBu08ZlGAfPLYXKgYad4c=, KSOutOfProcessFetcher.502.r55jifrBu08ZlGAfPLYXKgYad4c=, meraki_wifi_loc.log, slave2, slave3]
Running: Print Message
-- Each linux_amd64[.vbox-timwojtaszek-ipc]
Running: Allocate node : Body : End
Running: Allocate node : End
Running: Execute sub-workflows in parallel : Body : End
Running: Read file from workspace
Running: Print Message
-- SH darwin_amd64 [KSOutOfProcessFetcher.0.r55jifrBu08ZlGAfPLYXKgYad4c=, KSOutOfProcessFetcher.502.r55jifrBu08ZlGAfPLYXKgYad4c=, com.apple.launchd.370pvnOGoZ, com.apple.launchd.6qgCPTwPRV, com.apple.launchd.7HpNisaxk2, com.apple.launchd.EtuALQ56t5, com.apple.launchd.Wgio1P47ZU, com.apple.launchd.Xv0xkmvFqi, com.apple.launchd.i9VlHFzS1P, com.apple.launchd.vcaQbm4un1, com.apple.launchd.whfRiz1zjn, meraki_wifi_loc.log, slave2, slave3]
Running: Print Message
-- List darwin_amd64[.keystone_install_lock, .vbox-timwojtaszek-ipc, com.apple.launchd.370pvnOGoZ, com.apple.launchd.6qgCPTwPRV, com.apple.launchd.7HpNisaxk2, com.apple.launchd.EtuALQ56t5, com.apple.launchd.i9VlHFzS1P, com.apple.launchd.vcaQbm4un1, com.apple.launchd.Wgio1P47ZU, com.apple.launchd.whfRiz1zjn, com.apple.launchd.Xv0xkmvFqi, KSOutOfProcessFetcher.0.r55jifrBu08ZlGAfPLYXKgYad4c=, KSOutOfProcessFetcher.502.r55jifrBu08ZlGAfPLYXKgYad4c=, meraki_wifi_loc.log, slave2, slave3]
Running: Print Message
-- Each darwin_amd64[.vbox-timwojtaszek-ipc]
Running: Allocate node : Body : End
Running: Allocate node : End
Running: Execute sub-workflows in parallel : Body : End
Running: Execute sub-workflows in parallel : End
Running: Allocate node : Body : End
Running: Allocate node : End
Running: End of Workflow
Finished: SUCCESS

Jesse Glick

Feb 26, 2015, 12:01:53 PM
to jenkins...@googlegroups.com
On Friday, February 20, 2015 at 4:01:35 PM UTC-5, Timothy Wojtaszek wrote:
  new File(root).eachDir {  dirs << it.name  }

You cannot use `java.io.File` methods from a Workflow script if you are using slaves, since the script always runs on the master. If you need to inspect files beyond what `readFile`/`writeFile` provides, use an `sh` script or the like.
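For the directory iteration above, one way to follow this advice is to push the filtering into the shell command itself, so only directory names come back to the script. This is an untested sketch (the node label is illustrative); the `find` flags used here (`-mindepth`, `-maxdepth`, `-type d`) behave the same way in GNU findutils and in the BSD find shipped with OS X, unlike parsing bare `ls` output, which also picks up regular files:

```groovy
node('linux_amd64') {
  // List only first-level directories under /tmp, one name per line;
  // basename strips the leading path, leaving bare directory names.
  sh 'find /tmp -mindepth 1 -maxdepth 1 -type d -exec basename {} \\; > clib_list.txt'
  // readFile reads from the slave's workspace, so the contents come back
  // over the remoting channel rather than being read on the master.
  def dirs = readFile('clib_list.txt').trim().split("\r?\n") as List
  echo "dirs: ${dirs}"
}
```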

Timothy Wojtaszek

Feb 26, 2015, 1:13:00 PM
to jenkins...@googlegroups.com
Thank you.  I didn't fully appreciate that the workflow executes on the master even within a node block.  Just to be clear, this means that for a node block, the master executing the workflow is effectively communicating with the slave for each command; the node block simply sets the context such as machine, cwd, etc.


Cheers,
-tim

Jesse Glick

Feb 26, 2015, 3:20:38 PM
to jenkins...@googlegroups.com
On Thursday, February 26, 2015 at 1:13:00 PM UTC-5, Timothy Wojtaszek wrote:
this means that for a node block, the master executing the workflow is effectively communicating with the slave for each command.

Correct, each `sh` step, etc. means a transaction starts over the remoting channel to the slave.
 
the node block simply sets the context such as machine, cwd, etc.

Specifically it waits in the queue (just like a top-level job) for an available slave; allocates an executor slot when it is scheduled; locks a workspace; sets the CWD and relevant environment variables; then runs its body with that context.
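That sequence can be annotated on a minimal example. This is a sketch against the Workflow 1.x step names; the label and command are illustrative:

```groovy
node('linux_amd64') {        // queue wait -> executor slot -> workspace lock -> CWD/env
  sh 'uname -a > info.txt'   // each step is a round-trip to the slave's workspace
  echo readFile('info.txt')  // the file content streams back over the channel
}                            // workspace and executor slot are released here
// Any plain Groovy between the steps (loops, string handling, java.io.File)
// still executes on the master, which is why eachDir saw the master's /tmp.
```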