OpenMP


Zachary 42!

Nov 30, 2020, 6:46:59 PM11/30/20
to dea...@googlegroups.com
Hi everyone,

I am wondering if deal.II has any OpenMP capabilities or if they would be easily incorporated?

I know the deal.II project uses TBB for multithreading, but I would like to run multithreaded code on computers that may not have TBB installed.

Cheers,

Zachary

Zachary Streeter

Nov 30, 2020, 8:05:28 PM11/30/20
to deal.II User Group
Hi everyone,

I would mainly like a good old `#pragma omp for` to parallelize for-loops.  Should I use a "workstream" for this?  Can someone point me in the right direction within deal.II to look for these capabilities?  I imagine all the classics like scan, reduction, etc., must be in this project.

I hope this added more clarity.

Thank you,

Zachary

Wolfgang Bangerth

Nov 30, 2020, 8:12:42 PM11/30/20
to dea...@googlegroups.com
On 11/30/20 4:46 PM, Zachary 42! wrote:
> I am wondering if deal.II has any OpenMP capabilities or if they would be easily incorporated?
>
> I know the deal.II project uses TBB for multithreading but I would like to run multithreaded on computers that may not have TBB.

Zachary,
You are of course free to use OpenMP in your own application code. deal.II
just won't parallelize under the hood if you don't have the TBB installed.

We plan on replacing the TBB with TaskFlow for the next release, another
external library that, however, builds only on C++14 features and is,
consequently, independent of the target platform. This has already been
merged, but we haven't yet converted all of the places where we currently use
the TBB -- any help is of course very welcome!

The TaskFlow version in the repo is 2.5, but it will have to be updated at
some point to 2.7. There is a draft pull request for this here:
https://github.com/dealii/dealii/pull/10990
but there is a bug for which it's not yet clear to us how to work around it.

Best
W.

--
------------------------------------------------------------------------
Wolfgang Bangerth email: bang...@colostate.edu
www: http://www.math.colostate.edu/~bangerth/

Wolfgang Bangerth

Nov 30, 2020, 8:18:05 PM11/30/20
to dea...@googlegroups.com
On 11/30/20 6:05 PM, Zachary Streeter wrote:
>
> I mainly would like a good old ```#pragma omp for``` to parallelize
> for-loops.  Should I use a "workstream" for this?  Can someone point me in the
> right direction within deal.II to look for these capabilities?  I imagine all
> the classics like scan and reduction, etc., must be in this project.

The problem with #pragma omp for is that, almost universally, it only helps
you parallelize the innermost loops. But these loops are also almost
universally short and breaking them up into 16 or 64 threads leads to so many
synchronization points that it's not worth it on machines with substantial
core counts.

What you need to do instead is to parallelize the outermost loops -- say, the
loop over all cells when assembling a linear system. Here, you do orders of
magnitude more work per thread, and consequently have orders of magnitude
fewer synchronization points. But #pragma omp for can't express the
complexities of these kinds of loops: The counter over the elements may
actually be an iterator of class type (not supported by OpenMP), the
variables that are thread-local or shared are not just simple C-style objects
but classes themselves with complex semantics, etc. As a consequence, we've
always tried to avoid using OpenMP for parallelization -- the only efficient
way to do it is using systems such as WorkStream or task-based parallelism.

Zachary Streeter

Nov 30, 2020, 8:23:42 PM11/30/20
to deal.II User Group
Ah, I see; that makes perfect sense.

I've been slightly exposed to TaskFlow and it's interesting y'all are planning on using it.  I will look more into this project!

Thank you,

Zachary