os.exec error: too many open files


Pablo Rozas Larraondo

Mar 7, 2016, 7:00:48 AM
to golang-nuts
Hello gophers,

Inspired by the parallel walker proposed in "The Go Programming Language" book, I've modified the du3 example (https://github.com/adonovan/gopl.io/blob/master/ch8/du3/main.go) to introduce an os/exec call (I want to inspect files using an external command-line program).

Adding any exec command causes the ioutil.ReadDir(dir) call to fail with: "too many open files".

I cannot understand why just adding an exec.Command("echo", filename).Run() line, which is synchronous, makes the whole program fail.

Here is a link to a gist containing the modified version, where the only difference from the original example is the exec command at line 96: https://gist.github.com/monkeybutter/455f08747402973150ce

This program uses a semaphore to limit concurrency, and I get the error even when limiting it to a single goroutine.

Any ideas of what is causing the error?

Thanks for your help,
Pablo

Tamás Gulácsi

Mar 7, 2016, 8:26:21 AM
to golang-nuts
ulimit -n ?

Just limit the number of concurrent execs!

For example, use a buffered chan struct{} pre-filled with 8 tokens: pull a token synchronously before each exec.Command call, and put the token back after it finishes.

Or push the filenames into a channel and have n goroutines pull from it and exec each one.

Dave Cheney

Mar 7, 2016, 4:58:39 PM
to golang-nuts
os/exec will consume at least three file descriptors per exec (one each for stdin, stdout and stderr). OS X by default only permits a ulimit of 256 open files. You can raise this limit, but it's probably a better idea to implement a semaphore to limit the number of child processes.

Thanks

Dave

Pablo Rozas-Larraondo

Mar 7, 2016, 4:59:51 PM
to golang-nuts
Thanks Tamás for your response. As far as I can tell, my posted example already limits the number of concurrent exec calls with a semaphore, in a very similar way to what you are proposing. The example limits execution to 20 concurrent goroutines, but I can also reproduce the error with the limit set to just one.

ulimit -n is set to 2560 in my system, which is far above the number of concurrent goroutines that I'm using.

Sorry, I might be missing something obvious here but I can't figure out what it is...

Cheers,
Pablo 

Pablo Rozas-Larraondo

Mar 7, 2016, 6:01:07 PM
to golang-nuts
Thanks Dave. I think I now understand what's happening. I'm limiting access to the dirents() function with a semaphore, but not to the subsequent exec.Command() call. Since one calls the other, I assumed that limiting concurrency on the first would also bound the second. But entering dirents() is very fast, while exec.Command() is much slower, so the child processes accumulate and produce the "too many open files" error.

Thanks for your help. Lesson learnt: don't make assumptions when writing concurrent code, and test every single step!

Cheers,
Pablo

Pablo Rozas-Larraondo

Mar 7, 2016, 6:23:43 PM
to golang-nuts
In case someone is interested in the solution, I've created a gist with a working concurrent file walker that makes exec.Command() calls:


The solution was quite straightforward once I understood the problem: acquire the semaphore in the dirents() function and release it only when the function doing the exec work has finished.

Cheers,
Pablo
