build_patches(degree) for higher order dg elements in parallel


Justin Kauffman

Dec 13, 2016, 9:46:22 AM
to deal.II User Group
Hi all:

I am writing an HDG code and trying to parallelize it in a fashion similar to step-40. I am having a problem with data_out.build_patches(degree). In step-40 the argument is the default value of zero, but I eventually want to run this code for higher order dg elements. I believe that the problem in parallel is that build_patches is trying to subdivide all cells for each processor (even the ghost_cells) instead of just the locally_owned_cells. I get the following error output:

An error occurred in line <934> of file </include/deal.II/grid/tria_iterator.h> in function
    Accessor& dealii::TriaRawIterator<Accessor>::operator*() [with Accessor = dealii::CellAccessor<2, 2>]
The violated condition was:
    Accessor::structure_dimension!=Accessor::dimension || state() == IteratorState::valid
Additional information:
    You tried to dereference a cell iterator for which this is not possible. More information on this iterator: level=-2, index=-2, state=invalid

Stacktrace:
-----------
#0  dealii::TriaRawIterator<dealii::CellAccessor<2, 2> >::operator*()
#1   dealii::DataOutFaces<2, dealii::DoFHandler<2, 2> >::next_face(std::pair<dealii::TriaIterator<dealii::CellAccessor<2, 2> >, unsigned int> const&)
#2   dealii::DataOutFaces<2, dealii::DoFHandler<2, 2> >::build_patches(dealii::Mapping<2, 2> const&, unsigned int)
#3   dealii::DataOutFaces<2, dealii::DoFHandler<2, 2> >::build_patches(unsigned int)

I am using deal.II version 8.5.0-pre. In step-51, build_patches takes the dg element degree as the argument; this works when running on a single processor, but breaks down when running on multiple processors. Any help would be greatly appreciated.
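For reference, my output routine looks roughly like the following sketch (names such as output_face_results, ghosted_solution, and u_hat are placeholders, and the vector type is whatever ghosted parallel vector the program uses, e.g. one of the Trilinos or PETSc wrappers as in step-40):

#include <deal.II/base/mpi.h>
#include <deal.II/base/utilities.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/numerics/data_out_faces.h>

#include <fstream>

// Sketch of a step-40-style output routine for face data. 'VectorType' is
// assumed to be a ghosted parallel vector (e.g. TrilinosWrappers::MPI::Vector).
template <int dim, typename VectorType>
void output_face_results(const dealii::DoFHandler<dim> &dof_handler,
                         const VectorType              &ghosted_solution,
                         const unsigned int             degree,
                         const MPI_Comm                 mpi_communicator)
{
  dealii::DataOutFaces<dim> data_out_faces;
  data_out_faces.attach_dof_handler(dof_handler);
  data_out_faces.add_data_vector(ghosted_solution, "u_hat");

  // 'degree' subdivisions per face, as in step-51; this is the call that
  // fails for me on more than one processor.
  data_out_faces.build_patches(degree);

  // One output file per processor, step-40 style.
  const unsigned int rank =
    dealii::Utilities::MPI::this_mpi_process(mpi_communicator);
  std::ofstream output(("faces-" +
                        dealii::Utilities::int_to_string(rank, 4) +
                        ".vtu").c_str());
  data_out_faces.write_vtu(output);
}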

Thank you,
- Justin

Wolfgang Bangerth

Dec 13, 2016, 12:00:18 PM
to Justin Kauffman, deal.II User Group
Justin,
Is it really working with zero subdivisions? step-40 only uses the DataOut class, not the DataOutFaces class, so that may not be the best comparison. I don't recall whether the latter class works in parallel at all, regardless of the number of subdivisions.
Best
Wolfgang



Sent from Samsung tablet.

Justin Kauffman

Dec 13, 2016, 12:13:28 PM
to deal.II User Group, jak...@gmail.com, bang...@colostate.edu
Wolfgang,

Upon further investigation you are correct. If I go beyond 4 processors I get the same error even with the default zero subdivisions. 

Thank you for your response.  
- Justin

Wolfgang Bangerth

Dec 14, 2016, 2:06:07 AM
to Justin Kauffman, deal.II User Group
On 12/13/2016 10:13 AM, Justin Kauffman wrote:
>
> Upon further investigation you are correct. If I go beyond 4 processors I get
> the same error even with the default zero subdivisions.

Justin -- I suspect that the class simply does not support parallel
triangulations. Do you have any suspicion that it may?

In any case, the error message should be better. If you want, create a minimal
testcase that shows the problem and we can at least find the place where one
should add a more meaningful error message.

Best
W.

--
------------------------------------------------------------------------
Wolfgang Bangerth email: bang...@colostate.edu
www: http://www.math.colostate.edu/~bangerth/

Justin Kauffman

Dec 14, 2016, 12:22:13 PM
to deal.II User Group, jak...@gmail.com, bang...@colostate.edu
Wolfgang,

I've attached a small test case that simply initializes a parallel vector and then attempts to write it to file via DataOutFaces. The finite element used only lives on the faces of the cells, so DataOutFaces is appropriate.

This test case provided me with some insight into my problem. First, DataOutFaces can support parallel triangulations. I've also attached a figure showing the subdomains for the test (note that this comes from grouping the individual per-processor output files in Paraview, not from writing a subdomain field as part of the output). The other thing this test case showed me is to be careful about the problems I run on multiple processors. I am still new to parallel computing, and this problem arose because I was not careful: my triangulation did not have enough cells to partition the domain properly for a higher number of processors (6 or 12). Increasing the number of cells removed the error and the code ran without issue.

This is shown in the attached code as well. I ran the code on 4, 6 and 12 processors for two different levels of refinement. The finer mesh ran without problems; the coarser mesh produced the errors from my first post in this thread for 6 and 12 processors.
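In outline, the test case does roughly the following (a condensed sketch, not the attached test.cc verbatim; the refinement level, element degree, field name, and Trilinos vector type here are just placeholders for what I actually used):

#include <deal.II/base/index_set.h>
#include <deal.II/base/mpi.h>
#include <deal.II/base/utilities.h>
#include <deal.II/distributed/tria.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/dofs/dof_tools.h>
#include <deal.II/fe/fe_face.h>
#include <deal.II/grid/grid_generator.h>
#include <deal.II/lac/trilinos_vector.h>
#include <deal.II/numerics/data_out_faces.h>

#include <fstream>

int main(int argc, char *argv[])
{
  using namespace dealii;
  Utilities::MPI::MPI_InitFinalize mpi_initialization(argc, argv, 1);

  const int dim = 2;
  parallel::distributed::Triangulation<dim> triangulation(MPI_COMM_WORLD);
  GridGenerator::hyper_cube(triangulation);
  // Too few global refinements is what triggered the error for me on 6 and
  // 12 processors; a finer mesh (more refinements) runs cleanly.
  triangulation.refine_global(5);

  // An element that lives only on the faces of the cells.
  FE_FaceQ<dim>   fe(2);
  DoFHandler<dim> dof_handler(triangulation);
  dof_handler.distribute_dofs(fe);

  // A ghosted parallel vector, initialized to zero, just so there is
  // something to write out.
  IndexSet locally_relevant_dofs;
  DoFTools::extract_locally_relevant_dofs(dof_handler, locally_relevant_dofs);
  TrilinosWrappers::MPI::Vector solution(dof_handler.locally_owned_dofs(),
                                         locally_relevant_dofs,
                                         MPI_COMM_WORLD);

  // Write the face data, one file per processor.
  DataOutFaces<dim> data_out_faces;
  data_out_faces.attach_dof_handler(dof_handler);
  data_out_faces.add_data_vector(solution, "u_hat");
  data_out_faces.build_patches(fe.degree);

  const unsigned int rank = Utilities::MPI::this_mpi_process(MPI_COMM_WORLD);
  std::ofstream output(("faces-" + Utilities::int_to_string(rank, 4) +
                        ".vtu").c_str());
  data_out_faces.write_vtu(output);
}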

Thank you for your time and helping me get to the bottom of my issue. I'm glad we were able to figure it out and that parallel support had already been extended to DataOutFaces.

- Justin
test.cc
test_output_12processors.png

Wolfgang Bangerth

Dec 15, 2016, 12:39:26 PM
to Justin Kauffman, deal.II User Group

Justin,

> I've attached a small test case that simply initializes a parallel vector
> and then attempts to write it to file via DataOutFaces. The finite element
> used only lives on the faces of the cells, so DataOutFaces is appropriate.
>
> This test case provided me with some insight into my problem. First,
> DataOutFaces can support parallel triangulations. I've also attached a
> figure showing the subdomains for the test (note that this comes from
> grouping the individual per-processor output files in Paraview, not from
> writing a subdomain field as part of the output). The other thing this
> test case showed me is to be careful about the problems I run on multiple
> processors. I am still new to parallel computing, and this problem arose
> because I was not careful: my triangulation did not have enough cells to
> partition the domain properly for a higher number of processors (6 or 12).
> Increasing the number of cells removed the error and the code ran without
> issue.
>
> This is shown in the attached code as well. I ran the code on 4, 6 and 12
> processors for two different levels of refinement. The finer mesh ran
> without problems; the coarser mesh produced the errors from my first post
> in this thread for 6 and 12 processors.

I tried to run your code on 6 and 12 processors, but it finishes without an
error. With the code you sent, do you just have to do
mpirun -np 12 ./test
to trigger the error on your machine?

What version of deal.II are you using? I (re-)discovered this patch
https://github.com/dealii/dealii/pull/2054
that is part of deal.II 8.4, I believe.

Justin Kauffman

Dec 15, 2016, 1:50:49 PM
to deal.II User Group, jak...@gmail.com, bang...@colostate.edu
Wolfgang,

I am using deal.II 8.5.0-pre, so the patch is in my version. I forgot to change my value of repetitions for my triangulation to take the vector arguments. The updated code produces the error on the coarse mesh, but works as desired on the finer mesh. Since I am still new to parallel computing, I was trying to run on a mesh that was too coarse for 6 and 12 processors.
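Concretely, the mesh setup change I mean is roughly the following (a sketch only; the repetition counts and corner points are placeholders, not the values from the attached test.cc):

#include <deal.II/base/point.h>
#include <deal.II/distributed/tria.h>
#include <deal.II/grid/grid_generator.h>

#include <vector>

// Sketch of the corrected coarse-mesh setup: pass a vector of repetitions so
// that the coarse mesh already has enough cells to be partitioned among all
// processes. The numbers below are placeholders.
void make_coarse_mesh(dealii::parallel::distributed::Triangulation<2> &triangulation)
{
  std::vector<unsigned int> repetitions(2);
  repetitions[0] = 8;
  repetitions[1] = 8;
  dealii::GridGenerator::subdivided_hyper_rectangle(triangulation,
                                                    repetitions,
                                                    dealii::Point<2>(0., 0.),
                                                    dealii::Point<2>(1., 1.));
}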

Sorry for having the wrong code the first time. This version will show the error on the coarse mesh.

- Justin 
test.cc

Wolfgang Bangerth

Dec 16, 2016, 12:41:51 AM
to Justin Kauffman, deal.II User Group

Justin,

> I am using deal.II 8.5.0-pre, so the patch is in my version. I forgot to
> change my value of repetitions for my triangulation to take the vector
> arguments. The updated code produces the error on the coarse mesh, but
> works as desired on the finer mesh. Since I am still new to parallel
> computing, I was trying to run on a mesh that was too coarse for 6 and 12
> processors.
>
> Sorry for having the wrong code the first time. This version will show the
> error on the coarse mesh.

Great, thanks. The fix was pretty simple:
https://github.com/dealii/dealii/pull/3676