How to do DoFRenumbering::component_wise for multilevel grid


yy.wayne

Nov 27, 2022, 9:14:02 PM
to deal.II User Group
Hi,

I get an error from compute_component_wise with level=0. I reduced the problem to a minimal working example, where I write a mesh and read it back to create a dense coarse level. The error shows up even though there is only one triangulation level. I assume the write & read of the mesh is causing this bug, since the other parts are similar to dealii/tests/multigrid/renumbering_06.cc.

Snipaste_2022-11-28_10-11-52.png
component_wise_mpi.cc
CMakeLists.txt

Wolfgang Bangerth

Nov 27, 2022, 10:26:04 PM
to dea...@googlegroups.com
On 11/27/22 19:14, 'yy.wayne' via deal.II User Group wrote:
>
> I get an error from compute_component_wise with level=0. I reduced the problem
> to a minimal working example, where I write a mesh and read it back to create a dense
> coarse level. The error shows up even though there is only one triangulation level. I
> assume the write & read of the mesh is causing this bug, since the other parts are similar
> to dealii/tests/multigrid/renumbering_06.cc.

Writing the mesh just outputs the active cells, and loses the hierarchy that
resulted from refinement. When you read it back in, you get a mesh with just
one level.

You need a way to write and read the mesh that preserves the refinement
levels. You can do this either via serialization or via the
PersistentTriangulation class.

Best
W.
--
------------------------------------------------------------------------
Wolfgang Bangerth email: bang...@colostate.edu
www: http://www.math.colostate.edu/~bangerth/


yy.wayne

Nov 27, 2022, 10:48:55 PM
to deal.II User Group
Thanks for your quick reply.

I wrote the mesh this way in order to create a refined coarse mesh. I tried generating a distributed mesh in serial, but it produced other bugs when refined further, so I decided to write out a mesh generated in serial and read it back.
I'll look into the PersistentTriangulation class then, thank you.

Best,
Wayne

yy.wayne

Nov 27, 2022, 11:09:43 PM
to deal.II User Group
Hi Prof. Bangerth,

Sorry, I forgot to mention that it only breaks when running with MPI; a serial run is fine. The PersistentTriangulation class may not address this error.
The intention of writing & reading a mesh here is to create a not-too-coarse coarse grid (so it is refined several times before output).
I read the output grid and expect all cells to be on level = 0. Preserving the multigrid structure is not desired in this case.

Does using a plain GridOut destroy the mesh structure even though I intend to discard the multigrid information?

Best,
Wayne

Wolfgang Bangerth

Nov 27, 2022, 11:49:15 PM
to dea...@googlegroups.com
On 11/27/22 21:09, 'yy.wayne' via deal.II User Group wrote:
>
> Sorry, I forgot to mention that *it only breaks when running with MPI; a serial
> run is fine.* The PersistentTriangulation class may not address this error.
> The intention of writing & reading a mesh here is to create a not-too-coarse coarse
> grid (so it is refined several times before output).
> I read the output grid and expect all cells to be on level = 0. Preserving the
> multigrid structure is not desired in this case.

Oh, I misread your message then. What is the error you observe?

Looking at your code, when you have this...

  if (Utilities::MPI::this_mpi_process(MPI_COMM_WORLD) == 0)
    {
      Triangulation<dim> tria_coarse;
      GridGenerator::hyper_shell(tria_coarse,
                                 center,
                                 inner_r,
                                 outer_r,
                                 n_cells);
      tria_coarse.refine_global(3);

      // write and re-read the mesh, so it becomes the coarse mesh
      std::cout << "write mesh" << std::endl;
      GridOut grid_out;
      grid_out.set_flags(GridOutFlags::Vtu(true));
      std::ofstream out("coarse_mesh.vtk");
      grid_out.write_vtk(tria_coarse, out);
      out.close();
    }

  { // read coarse grid
    tria.clear();
    std::cout << "read mesh" << std::endl;
    GridIn<dim> gridin;
    gridin.attach_triangulation(tria);
    std::ifstream fin("coarse_mesh.vtk");
    gridin.read_vtk(fin);
    fin.close();
    std::cout << "read mesh done" << std::endl;
  }

...then you will need a MPI_Barrier between the two blocks to make sure that
process 1 doesn't start reading the file before process 0 has completed
writing it.
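
Concretely, the barrier would sit between the two blocks like this (a minimal sketch reusing the names from your snippet; the write block is abbreviated):

  // Rank 0 writes the coarse mesh to disk, as in the snippet above.
  if (Utilities::MPI::this_mpi_process(MPI_COMM_WORLD) == 0)
    {
      // ... GridOut code as above ...
    }

  // Make sure no process starts reading before rank 0 has
  // finished writing the file.
  MPI_Barrier(MPI_COMM_WORLD);

  // Now every process can safely read the mesh back in.
  {
    tria.clear();
    GridIn<dim> gridin;
    gridin.attach_triangulation(tria);
    std::ifstream fin("coarse_mesh.vtk");
    gridin.read_vtk(fin);
  }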

yy.wayne

Nov 27, 2022, 11:59:59 PM
to deal.II User Group
The error is from compute_component_wise on level = 0: the result it returns doesn't equal dof_handler.n_dofs(level).
I added an MPI_Barrier between the first and second block, but the error is not eliminated.
Snipaste_2022-11-28_12-53-02.png

yy.wayne

Nov 28, 2022, 12:16:06 AM
to deal.II User Group
I think I've located the error.
The bug does not come from the mesh, but from FESystem. For tests/multigrid/renumbering_06.cc, if the finite element on line 90 is changed to 2 components, we get the same error.

Wolfgang Bangerth

Nov 28, 2022, 11:30:15 PM
to dea...@googlegroups.com

> The error is from compute_component_wise on level = 0: the result it returns
> doesn't equal dof_handler.n_dofs(level).
> I added an MPI_Barrier between the first and second block, but the error is not eliminated.
> Snipaste_2022-11-28_12-53-02.png

I do not see this error. Are you running in debug mode? And with how many MPI
processes?

As a side note: The detour by writing the data to a file and then reading it
again is very inefficient, in particular if you have many MPI processes. If
you want to use GridOut/GridIn, let the former write the data into an object
of type std::ostringstream, then use Utilities::MPI::broadcast() to send the
string you get out of std::ostringstream to the other processes, and give it
to a std::istringstream object. Then use GridIn on the std::istringstream
object. This way, no data transfer to or from disk is necessary -- it all
happens in memory.
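
A sketch of that in-memory round trip, assuming a recent deal.II that provides Utilities::MPI::broadcast() for std::string (variable names are taken from the snippet earlier in the thread):

  std::string mesh_data;
  if (Utilities::MPI::this_mpi_process(MPI_COMM_WORLD) == 0)
    {
      Triangulation<dim> tria_coarse;
      GridGenerator::hyper_shell(tria_coarse, center, inner_r, outer_r, n_cells);
      tria_coarse.refine_global(3);

      // Write the mesh into a string instead of a file on disk.
      std::ostringstream out;
      GridOut grid_out;
      grid_out.write_vtk(tria_coarse, out);
      mesh_data = out.str();
    }

  // Send the serialized mesh from rank 0 to all other processes.
  mesh_data = Utilities::MPI::broadcast(MPI_COMM_WORLD, mesh_data, /*root_process=*/0);

  // Every process reads the mesh back from memory.
  std::istringstream in(mesh_data);
  GridIn<dim> gridin;
  gridin.attach_triangulation(tria);
  gridin.read_vtk(in);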

Best
Wolfgang

yy.wayne

Nov 29, 2022, 1:17:00 AM
to deal.II User Group
I was debugging in debug mode. I just changed the build to release mode and the error is gone! Why does running in debug mode cause this error?

As for the side note, that's very helpful! I was worried about the cost of writing & reading the mesh but didn't know which techniques I could use.
That's exactly what I want. Thank you.

Best,
Wayne

Wolfgang Bangerth

Nov 29, 2022, 11:15:20 AM
to dea...@googlegroups.com
On 11/27/22 22:16, 'yy.wayne' via deal.II User Group wrote:
> The bug does not come from the mesh, but from FESystem. For
> tests/multigrid/renumbering_06.cc, if the finite element on line 90 is changed to
> 2 components, we get the same error.

I can reproduce this now. I don't know whether that function has ever been
tested for parallel programs with more than one finite element component. You
are using it to renumber the multigrid level DoFs, but that is not common for
multigrid. What is the reason why you are renumbering level DoFs? Do you
actually need the function call?


> I was debugging in debug mode. Just changed make to release mode and the
> error gone! Why running in debug mode causes this error?

The error is always there, but it is only checked in debug mode. If your
program does not run in debug mode, you shouldn't run it in release mode --
nothing you compute this way should be considered reliable.

yy.wayne

Nov 29, 2022, 7:20:00 PM
to deal.II User Group
Well, it takes several steps to explain why I renumber on the coarsest level.
1) For a multigrid problem, I want to solve the coarsest grid directly using TrilinosWrappers::SolverDirect. The system has 2 components (real part and imaginary part). On the finer levels it is constructed matrix-free, so it can only work with BlockVectors. However, TrilinosWrappers::SolverDirect requires a Vector, therefore I transform the (Trilinos) BlockVectors into Vectors;
2) It works fine for fe_degree 1 or 2 because the DoFs are arranged point by point, say 0&1 on node 0, 2&3 on node 1. But when fe_degree goes to 3 or higher, some nodes are not arranged that way (in the rightmost picture below, DoFs 0-7 are laid out as expected, but from 8, 9 onward the layout breaks). So I chose to renumber the DoFs component-wise so that the DoF layout is consistent. It actually works then.
DoF.png
I'm solving a complex indefinite Helmholtz problem, so it is hard to solve the coarsest level iteratively.
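
For reference, the level renumbering I am calling looks like this (a sketch; it assumes a DoFHandler for which both active and level DoFs have been distributed):

  // Distribute both active and multigrid-level DoFs.
  dof_handler.distribute_dofs(fe);
  dof_handler.distribute_mg_dofs();

  // Renumber the DoFs on the coarsest level component-wise, so that all
  // real-part DoFs come before all imaginary-part DoFs.
  DoFRenumbering::component_wise(dof_handler, /*level=*/0);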

Best,
Wayne

Wolfgang Bangerth

Dec 2, 2022, 6:49:09 PM
to dea...@googlegroups.com
On 11/29/22 17:20, 'yy.wayne' via deal.II User Group wrote:
>
> Well, it takes several steps to explain why I renumber on the coarsest level.
> 1) For a multigrid problem, I want to solve the coarsest grid directly using
> TrilinosWrappers::SolverDirect. The system has 2 components (real part and
> imaginary part). On the finer levels it is constructed matrix-free, so it can
> only work with BlockVectors. However, TrilinosWrappers::SolverDirect requires a
> Vector, therefore I transform the (Trilinos) BlockVectors into Vectors;
> 2) It works fine for fe_degree 1 or 2 because the DoFs are arranged point by
> point, say 0&1 on node 0, 2&3 on node 1. But when fe_degree goes to 3 or
> higher, some nodes are not arranged that way (in the rightmost picture below, DoFs
> 0-7 are laid out as expected, but from 8, 9 onward the layout breaks). So I chose
> to renumber the DoFs component-wise so that the DoF layout is consistent. It
> actually works then.

I see. Yes, what you try to do makes sense then.

Like I said in my previous email, it is uncommon to do what you're doing and
so it is entirely possible that the function you're trying to use has never
been used in the context you're looking for. You might have to take a look at
how these renumbering functions are implemented and fix the implementation --
that shouldn't be too difficult given that none of them is very complicated,
and we would of course be excited to get a patch with the fixes!

The second option you have is to use deal.II functions to query which
component a DoF corresponds to. If I understand you correctly, you're trying
to select which DoF belongs to which vector component, and that is easy if you
renumber them by component but not so easy without because it is no longer the
case that co-located DoFs have successive indices. But there are functions in
DoFTools that allow you to query which DoFs belong to which vector component,
and maybe that helps you with the selection and copying around.
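
A sketch of this second option, using DoFTools::extract_dofs() with a ComponentMask (this assumes a 2-component system and a deal.II version in which extract_dofs() returns an IndexSet; for level DoFs there is the analogous DoFTools::extract_level_dofs()):

  // Select the DoFs belonging to the first (e.g. real) component.
  const ComponentMask real_mask(std::vector<bool>{true, false});
  const IndexSet real_dofs = DoFTools::extract_dofs(dof_handler, real_mask);

  // Likewise for the second (imaginary) component.
  const ComponentMask imag_mask(std::vector<bool>{false, true});
  const IndexSet imag_dofs = DoFTools::extract_dofs(dof_handler, imag_mask);

  // These index sets can then drive the copy between the block vector
  // and the monolithic vector handed to TrilinosWrappers::SolverDirect.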