Building in parallel?


Roscoe Bartlett

Dec 15, 2015, 6:14:09 PM
to Spack
Hello Spack users and devs,

I can't find any documentation about how to get Spack to build individual packages with more than one process.  Most build systems allow you to specify -j<N> (e.g. <N> = 8) to build in parallel, and that makes a *huge* difference in how long it takes to build software.

-Ross




Kevin Huck

Dec 15, 2015, 6:24:51 PM
to Spack, Roscoe Bartlett
Ross -

The builds are parallel by default:


"By default, Spack will invoke make() with a -j <njobs> argument, so that builds run in parallel. It figures out how many jobs to run by determining how many cores are on the host machine. Specifically, it uses the number of CPUs reported by Python’s multiprocessing.cpu_count().  If a package does not build properly in parallel, you can override this setting by adding parallel = False to your package.”

Thanks -
Kevin


--
Kevin Huck
Performance Research Lab
University of Oregon



François Bissey

Dec 15, 2015, 6:27:54 PM
to sp...@googlegroups.com
I noticed that, and I would have preferred it to be adjustable.
Using all the available cores assumes you have a dedicated build
machine with only one job running on it, or that the build will
be very short.

Francois


Matthew LeGendre

Dec 15, 2015, 6:31:16 PM
to François Bissey, sp...@googlegroups.com

You can override the make parallelism with the -j option to spack install.
So:

spack install -j 4 <spec>

-Matt

Roscoe Bartlett

Dec 15, 2015, 7:28:21 PM
to Spack
It's a bad idea to use all of the cores by default, since that does not take into account the current load on the machine.  Ninja also runs in parallel automatically, but at least it looks at the machine load and tries not to overload the machine.  In my opinion, it would be better not to build in parallel at all than to silently take all of the cores; I don't know of any other tool that does that.  Instead, Spack should make the -jN option obvious and use it in all of the examples.

Todd Gamblin

Dec 15, 2015, 7:43:21 PM
to Roscoe Bartlett, Spack
Ross:

MacPorts, Homebrew, and GNU parallel all use ncpus by default.  I don't think using -jN all over the place would help much with simplifying the documentation, but we could at least move this higher up into the basic usage guide.

Better, I think, would be to change the default policy to something more reasonable, like "half the cores".  Any suggestions for how many cores is reasonable?  It's hard to come up with a good heuristic for shared login nodes, but I would rather not make the default slow.
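For concreteness, that kind of heuristic is a one-liner on top of the same cpu_count() call the docs already mention; the divisor below is just the suggested placeholder, not a decided policy:

    import multiprocessing

    def default_build_jobs():
        # "Half the cores" policy: never drop below one job.
        return max(1, multiprocessing.cpu_count() // 2)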

Another thing that needs to be added is a config file setting to change the default.

-Todd



Roscoe Bartlett

Dec 16, 2015, 12:09:40 PM
to Spack, bartlet...@gmail.com
Okay, I found the documentation via 'spack help install' and found:

  -j JOBS, --jobs JOBS  Explicitly set number of make jobs. Default is #cpus.

I guess that is good enough.

Thanks,

-Ross

Lee, Greg

Dec 16, 2015, 12:42:25 PM
to Roscoe Bartlett, Spack

Todd,

I think it would be wise to have a config file setting. I will add that some compilers require checking out a license, and a highly parallel build can produce spurious failures due to license starvation. These licenses are also often shared, so parallel builds are not friendly to other users of the licensed compiler. These end up being policy/social issues, but keep them in mind when designing the config file settings. It may be worth making the limits per-compiler if possible.

-Greg

Todd Gamblin

Dec 16, 2015, 2:41:53 PM
to Lee, Greg, Roscoe Bartlett, Spack
Greg:

This is useful, thanks.  If there are others with opinions, please send them along.  I'll look at this for the next release.

Adam Stewart

Apr 12, 2017, 3:38:45 PM
to Spack
Hello everyone,

I know this is an old issue, but I just submitted a pull request that lets you configure the default level of parallelism used by Spack. Check out https://github.com/LLNL/spack/pull/3812 if you're interested.
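If memory serves, the knob that PR adds is a build_jobs entry in Spack's config.yaml; treat the exact key name and file location below as assumptions and check the PR itself, but it looks roughly like:

    # ~/.spack/config.yaml (user scope) -- assumed key name, see the PR above
    config:
      build_jobs: 4    # cap every package build at 'make -j 4'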

Adam