How to set material id with MPI


Phạm Ngọc Kiên

Jul 17, 2019, 9:46:05 PM
to deal.II User Group
Hi colleagues,
I am trying to write code that finds a subset of cells and sets their material id.
The code runs correctly with 1 processor.
However, when testing with more than 1 processor, the code does the wrong thing.
This is because each processor only owns a subset of the cells of a distributed triangulation.
Is there a way to address this issue in deal.II?
Thank you very much.

Best regards,
Kien

Daniel Arndt

Jul 17, 2019, 10:58:16 PM
to dea...@googlegroups.com
Kien,

It is impossible for us to tell what is going wrong with this little information. Please provide us with some more details.
What does the part of the code that is responsible for setting the material id look like?
Are you trying to set the material id on all cells or only on the locally owned (or locally relevant) ones?
How do you notice that the code produces wrong results?

Best,
Daniel

Wolfgang Bangerth

Jul 18, 2019, 4:47:33 AM
to dea...@googlegroups.com
On 7/17/19 7:46 PM, Phạm Ngọc Kiên wrote:
> I am trying to write code that finds a subset of cells and sets their material id.
> The code runs correctly with 1 processor.
> However, when testing with more than 1 processor, the code does the wrong thing.
> This is because each processor only owns a subset of the cells of a distributed
> triangulation.
> Is there a way to address this issue in deal.II?

In addition to Daniel's questions, take a look at the documentation of the
parallel::distributed::Triangulation class. It talks about similar issues with
boundary ids. I would imagine that setting material ids poses similar
challenges, and has similar solutions.

Best
W.

--
------------------------------------------------------------------------
Wolfgang Bangerth email: bang...@colostate.edu
www: http://www.math.colostate.edu/~bangerth/

Phạm Ngọc Kiên

Aug 23, 2019, 1:58:27 AM
to dea...@googlegroups.com
Hi colleagues,
I have a question about parallel::distributed::Triangulation.
When two cells share an edge but live on two different MPI processes, how can I choose only one of the two cells containing the common edge?
I think I have to set the material id for the cell on the first process and then tell the other one not to set it.
However, I don't know how to send a cell iterator in MPI communication.

Could you please help me to address this issue?
Thank you very much.

Best regards,
Kien



On Thu, Jul 18, 2019 at 17:47 Wolfgang Bangerth <bang...@colostate.edu> wrote:

Daniel Arndt

Aug 23, 2019, 8:48:46 AM
to dea...@googlegroups.com
Kien,

> Hi colleagues,
> I have a question about parallel::distributed::Triangulation.
> When two cells share an edge but live on two different MPI processes, how can I choose only one of the two cells containing the common edge?
> I think I have to set the material id for the cell on the first process and then tell the other one not to set it.
> However, I don't know how to send a cell iterator in MPI communication.

Just set the material id on both processes. That is much simpler and likely faster.
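
A minimal sketch of what that could look like (not from the original message; in_target_region stands for whatever criterion is used, and it must only depend on information every process has locally, such as the cell center):

#include <deal.II/base/point.h>
#include <deal.II/distributed/tria.h>

// Hypothetical criterion: anything that can be evaluated from data available
// on every process that sees the cell, e.g. the cell center.
template <int dim>
bool in_target_region(const dealii::Point<dim> &p)
{
  return p[0] > 0.5; // placeholder condition
}

template <int dim>
void set_material_ids(dealii::parallel::distributed::Triangulation<dim> &tria)
{
  // Set the id on locally owned and ghost cells alike, so that both sides of
  // a process boundary agree without any explicit communication.
  for (const auto &cell : tria.active_cell_iterators())
    if (cell->is_locally_owned() || cell->is_ghost())
      if (in_target_region<dim>(cell->center()))
        cell->set_material_id(1);
}

Since every process evaluates the same criterion, the cells on both sides of the shared edge end up with consistent material ids.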

Best,
Daniel

Wolfgang Bangerth

Aug 23, 2019, 12:29:02 PM
to dea...@googlegroups.com
On 8/22/19 11:58 PM, Phạm Ngọc Kiên wrote:
>
> I have a question about parallel::distributed::Triangulation.
> When two cells share an edge but live on two different MPI processes,
> how can I choose only one of the two cells containing the common edge?

Is your goal to make sure that only one of the two processors does some work
on these edges? If that's the case, then you need a "tie breaker" -- for
example, if the subdomain id of a locally owned cell is lower than the
subdomain of a neighboring ghost cell, then the current processor does the
work. If the locally owned cell's subdomain id is larger, then the neighboring
processor is in charge of the edge.
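
A sketch of such a tie breaker (not from the original message; it assumes the neighboring cells are on the same refinement level, so that the neighbor iterator is an active cell):

#include <deal.II/base/geometry_info.h>
#include <deal.II/distributed/tria.h>

template <int dim>
void work_on_interface_faces(
  dealii::parallel::distributed::Triangulation<dim> &tria)
{
  for (const auto &cell : tria.active_cell_iterators())
    if (cell->is_locally_owned())
      for (unsigned int f = 0; f < dealii::GeometryInfo<dim>::faces_per_cell; ++f)
        if (!cell->at_boundary(f))
          {
            const auto neighbor = cell->neighbor(f);

            // Tie breaker: if the neighbor is a ghost cell with a smaller
            // subdomain id, the neighboring process is in charge of this face.
            if (neighbor->is_ghost() &&
                neighbor->subdomain_id() < cell->subdomain_id())
              continue;

            // ...otherwise this process does the work on the face/edge here...
          }
}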

Phạm Ngọc Kiên

Aug 29, 2019, 8:32:01 PM
to dea...@googlegroups.com
Dear all,
I think we have two ways to do this.
The first one is the way Prof. Wolfgang Bangerth suggested.
The second one is to load the grid into a serial Triangulation on every processor, set the material ids there, and then copy from that Triangulation into the parallel::distributed::Triangulation.
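
A rough sketch of that second approach (not from the original message; the Gmsh file name and the material criterion are placeholders):

#include <deal.II/distributed/tria.h>
#include <deal.II/grid/grid_in.h>
#include <deal.II/grid/tria.h>

#include <fstream>

template <int dim>
void make_distributed_grid(
  dealii::parallel::distributed::Triangulation<dim> &tria)
{
  // Every process reads the full coarse mesh into a serial Triangulation.
  dealii::Triangulation<dim> serial_tria;
  dealii::GridIn<dim>        grid_in;
  grid_in.attach_triangulation(serial_tria);
  std::ifstream input("mesh.msh"); // placeholder file name
  grid_in.read_msh(input);

  // Set the material ids while the whole coarse mesh is visible everywhere.
  for (const auto &cell : serial_tria.active_cell_iterators())
    if (cell->center()[0] > 0.5) // placeholder criterion
      cell->set_material_id(1);

  // The material ids are carried over into the distributed triangulation.
  tria.copy_triangulation(serial_tria);
}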

When I run the code on my computer, it takes a lot of time for p4est to load the grid.
The grid loading step is more time consuming than solving the system of equations on a mesh of about 100,000 cells.

I would like to thank you very much for your help.
Best,
Kien

On Sat, Aug 24, 2019 at 01:29 Wolfgang Bangerth <bang...@colostate.edu> wrote:

Wolfgang Bangerth

Aug 29, 2019, 11:35:55 PM
to dea...@googlegroups.com
On 8/29/19 6:31 PM, Phạm Ngọc Kiên wrote:
>
> When I run the code on my computer, it takes a lot of time for p4est to load
> the grid.
> The grid loading step is more time consuming than solving the system of
> equations on a mesh of about 100,000 cells.

It *shouldn't* take that long, but who knows what exactly is happening with
such big meshes. Do you think you can create a small testcase that
demonstrates this? It should really only consist of the code to read the mesh,
and the file with the mesh itself.
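
A minimal testcase of the kind asked for might look like the following sketch (not part of the original exchange; the mesh file name is a placeholder):

#include <deal.II/base/mpi.h>
#include <deal.II/base/timer.h>
#include <deal.II/distributed/tria.h>
#include <deal.II/grid/grid_in.h>

#include <fstream>
#include <iostream>

int main(int argc, char *argv[])
{
  dealii::Utilities::MPI::MPI_InitFinalize mpi(argc, argv, 1);

  dealii::parallel::distributed::Triangulation<3> tria(MPI_COMM_WORLD);
  dealii::GridIn<3> grid_in;
  grid_in.attach_triangulation(tria);

  dealii::Timer timer;                 // starts running on construction
  std::ifstream input("mesh.msh");     // placeholder: the mesh that is slow to read
  grid_in.read_msh(input);
  timer.stop();

  if (dealii::Utilities::MPI::this_mpi_process(MPI_COMM_WORLD) == 0)
    std::cout << "Reading the mesh took " << timer.wall_time() << " s\n";
}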

Phạm Ngọc Kiên

Sep 2, 2019, 10:49:03 PM
to dea...@googlegroups.com
Dear Prof. Wolfgang Bangerth,
As the file was too large, I am sending it again as a compressed file in the attachment.
I am sorry for my mistake.
Best regards,
Kien

On Tue, Sep 3, 2019 at 10:50 Phạm Ngọc Kiên <ngockie...@gmail.com> wrote:
Dear Prof. Wolfgang Bangerth,
The attachment contains my code and the mesh for the grid loading test.
I think that running the code on a single computer might take longer than running it on a cluster.

I would like to thank you very much for your great guidance.
Best regards,
Kien

On Fri, Aug 30, 2019 at 12:35 Wolfgang Bangerth <bang...@colostate.edu> wrote:
loading-test.tar.xz

Richard Schussnig

Sep 4, 2019, 2:31:44 AM
to deal.II User Group
Hi Pham!
From your description I do not really see why you are doing this specifically, so maybe consider the following:
I assume you are flagging cells' material ids on the locally owned part based on some custom condition, let's say some stress or function that you cannot formulate in the global coordinate system, so that you cannot decide on the material id simply using cell->center() or similar strategies.

The workaround I came up with is as follows:
Every cell is only flagged by its owning processor, and the flag then needs to be communicated to the other parts of the p::d::Triangulation where those elements are ghost cells.
As a consequence, a ghost cell would otherwise have different material ids depending on the side from which it is viewed, which is what we want to get rid of.

So first, create the simplest possible discontinuous (!) function space, e.g. FE_DGQ with degree 0 (in my case I already had such a space in use).

Then you loop over the locally owned cells and simply assemble the material id, similar to assembling e.g. the right-hand side. (Here the discontinuity of the function space comes into play, since you do not want to mix contributions from different elements; with a continuous space you would have to restrict yourself to the interior nodes, which only exist for at least second-order elements.)
 
After that, you of course compress and copy into a ghosted vector, meaning that you now also have access to the vector entries on ghost cells, which contain your material ids.

So finally, you loop over locally owned AND ghost cells and set the material id on both from the vector, which is now accessible on the cells you need.
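
A rough sketch of these steps, assuming Trilinos vectors are available (all names are placeholders; this is only an illustration of the idea, not code from the original post):

#include <deal.II/base/index_set.h>
#include <deal.II/distributed/tria.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/dofs/dof_tools.h>
#include <deal.II/fe/fe_dgq.h>
#include <deal.II/lac/trilinos_vector.h>

#include <vector>

template <int dim>
void synchronize_material_ids(
  dealii::parallel::distributed::Triangulation<dim> &tria)
{
  // One degree of freedom per cell, discontinuous.
  dealii::FE_DGQ<dim>     fe(0);
  dealii::DoFHandler<dim> dof_handler(tria);
  dof_handler.distribute_dofs(fe);

  const dealii::IndexSet owned_dofs = dof_handler.locally_owned_dofs();
  dealii::IndexSet       relevant_dofs;
  dealii::DoFTools::extract_locally_relevant_dofs(dof_handler, relevant_dofs);

  // Write the material id of each locally owned cell into a distributed vector.
  dealii::TrilinosWrappers::MPI::Vector id_vector(owned_dofs, MPI_COMM_WORLD);
  std::vector<dealii::types::global_dof_index> dof_index(fe.dofs_per_cell);
  for (const auto &cell : dof_handler.active_cell_iterators())
    if (cell->is_locally_owned())
      {
        cell->get_dof_indices(dof_index);
        id_vector(dof_index[0]) = cell->material_id();
      }
  id_vector.compress(dealii::VectorOperation::insert);

  // Copy into a ghosted vector so the values can be read on ghost cells too.
  dealii::TrilinosWrappers::MPI::Vector ghosted_ids(owned_dofs,
                                                    relevant_dofs,
                                                    MPI_COMM_WORLD);
  ghosted_ids = id_vector;

  // Finally set the material id on the ghost cells from the communicated data
  // (locally owned cells already carry the correct id).
  for (const auto &cell : dof_handler.active_cell_iterators())
    if (cell->is_ghost())
      {
        cell->get_dof_indices(dof_index);
        cell->set_material_id(
          static_cast<dealii::types::material_id>(ghosted_ids(dof_index[0])));
      }
}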

The above approach might not be the fastest one, but if you can reuse an existing space it might not be too bad.
If someone reading this sees a flaw I am currently not aware of, please let me know! I am new to both C++ and deal.II ;)

Kind regards & good luck coding that up!
Richard