numba thread limit?

Dustin Moore

Jan 25, 2016, 10:33:27 AM
to Numba Public Discussion - Public
Is there a way to limit the number of threads used by the 'parallel' target?

Thanks,

Stanley Seibert

Jan 25, 2016, 11:04:02 AM
to Numba Public Discussion - Public
Not currently, but we absolutely should add this.  Although we do not use OpenMP for our thread pool (I wish we could), I'm wondering if it would be useful for Numba's parallel target to respect the OMP_NUM_THREADS environment variable, just like MKL does.
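Roughly what I have in mind is reading the variable once at startup to size the pool. A rough sketch of that behavior in plain Python (the default_pool_size helper below is hypothetical, not an existing Numba API):

import multiprocessing
import os

def default_pool_size():
    # Hypothetical helper: pick the worker count for the 'parallel'
    # target, honoring OMP_NUM_THREADS the way MKL does.
    value = os.environ.get("OMP_NUM_THREADS")
    if value is not None:
        try:
            n = int(value)
            if n > 0:
                return n
        except ValueError:
            pass  # ignore a malformed setting and fall back
    # Default: one worker per CPU core.
    return multiprocessing.cpu_count()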


Leopold Haimberger

Jan 25, 2016, 11:23:14 AM
to Numba Public Discussion - Public
Good that you mention this issue. I have tried several environment variables as well but have not yet raised it as an issue.
Yes, obeying the OMP_NUM_THREADS setting would be a good solution.

Best regards,

Leo

Stanley Seibert

Jan 25, 2016, 11:27:25 AM
to Numba Public Discussion - Public
Just to make sure I'm clear on the semantics: Do other applications check OMP_NUM_THREADS only at startup, or before starting each parallel operation? I assume the former, but want to check...

Kevin Sheppard

Jan 25, 2016, 11:29:02 AM
to Numba Public Discussion - Public
For NumPy it is read only at first load. There are more complex methods to set thread limits in NumPy (at least for some BLAS), but changing environment variables afterwards has no effect.
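To illustrate the load-time semantics with a small, self-contained sketch (this is not real NumPy or BLAS internals, just the pattern): a library that captures the variable at import time will not see later changes to the environment, so the limit has to be set before the library loads.

import os

# Set the limit *before* the library captures it, mimicking how
# MKL/OpenMP-backed libraries read it at load time.
os.environ["OMP_NUM_THREADS"] = "4"

# What a hypothetical library does once, at load time:
_NUM_THREADS = int(os.environ.get("OMP_NUM_THREADS", "1"))

def parallel_work():
    # Uses the value captured above; later edits to os.environ
    # do not change it.
    return _NUM_THREADS

os.environ["OMP_NUM_THREADS"] = "8"  # too late: ignored
print(parallel_work())               # prints 4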

Stanley Seibert

Jan 25, 2016, 11:34:18 AM
to Numba Public Discussion - Public
I opened this as an issue on GitHub so people can track when we implement it:

https://github.com/numba/numba/issues/1655

nov...@gmail.com

Jan 26, 2016, 8:51:04 AM
to Numba Public Discussion - Public
Out of interest, how come OpenMP is out of the question?

Stanley Seibert

Jan 26, 2016, 8:56:59 AM
to Numba Public Discussion - Public
As far as we're aware, OpenMP runtimes don't (yet?) offer a standard way for other languages to interact with the thread pool (acquire threads, return them to the pool when execution is complete, and so on). If Numba generated C code that we then compiled with an OpenMP-capable compiler, this would be no problem. However, since we generate LLVM IR, which sits below the compiler, we are at the mercy of undocumented interfaces intended for internal use.

That said, there are other LLVM-based languages (like Rust and Julia) that I'm sure would also like to cooperate with OpenMP. If they haven't already figured out a solution, I'm sure someone will eventually. (Any pointers to projects that show how LLVM should interact with OpenMP are welcome!)

