Help with TriangulationDescription and parallel::fullydistributed::Triangulation

Rishabh Saxena

Dec 9, 2024, 10:46:25 AM
to deal.II User Group

Dear deal.II Community,

I am working on a project where I am trying to generate a parallel mesh using TriangulationDescription with parallel::fullydistributed::Triangulation. My goal is to create a triangulation from a structured grid based on a matrix of material IDs.

I have implemented the following code to generate the mesh:

template <int dim>
void LaplaceProblem<dim>::generate_mesh_from_matrix(int Nx, int Ny, int Nz)
{
    std::vector<Point<dim>> vertices;
    std::vector<TriangulationDescription::CellData<dim>> cells;
    std::vector<unsigned int> subdomain_ids;

    unsigned int vertex_counter = 0;
    std::map<std::tuple<unsigned int, unsigned int, unsigned int>, unsigned int> vertex_map;

    // Generate vertices
    for (unsigned int i = 0; i <= Nx; ++i)
        for (unsigned int j = 0; j <= Ny; ++j)
            for (unsigned int k = 0; k <= Nz; ++k)
            {
                Point<dim> p(i, j, k);
                vertices.push_back(p);
                vertex_map[std::make_tuple(i, j, k)] = vertex_counter++;
            }

    // Generate cells
    for (unsigned int i = 0; i < Nx; ++i)
        for (unsigned int j = 0; j < Ny; ++j)
            for (unsigned int k = 0; k < Nz; ++k)
            {
                int material_id = mat_matrix[j + k * Ny][i];
                if (material_id == _mat_tau_1)
                {
                    TriangulationDescription::CellData<dim> cell;

                    cell.vertices = {vertex_map[std::make_tuple(i, j, k)],
                                     vertex_map[std::make_tuple(i + 1, j, k)],
                                     vertex_map[std::make_tuple(i, j + 1, k)],
                                     vertex_map[std::make_tuple(i + 1, j + 1, k)],
                                     vertex_map[std::make_tuple(i, j, k + 1)],
                                     vertex_map[std::make_tuple(i + 1, j, k + 1)],
                                     vertex_map[std::make_tuple(i, j + 1, k + 1)],
                                     vertex_map[std::make_tuple(i + 1, j + 1, k + 1)]};

                    cell.material_id = material_id;
                    cells.push_back(cell);
                    subdomain_ids.push_back(Utilities::MPI::this_mpi_process(_mpi_communicator));
                }
            }

    TriangulationDescription::Description<dim, dim> triangulation_description;
    triangulation_description.vertices = vertices;
    triangulation_description.cells = cells;
    triangulation_description.subdomain_ids = subdomain_ids;

    triangulation_pft.create_triangulation(triangulation_description);

    pcout << "Number of active cells: " << triangulation_pft.n_active_cells() << std::endl;
    pcout << "Number of vertices: " << triangulation_pft.n_vertices() << std::endl;
}

When I compile the code, I get the following errors:

/home/saxenar/local/programs/tortuosity_github/source/tortuosity_parallel_FDM_discriptor_MPI.cc:957:26: error: 'struct dealii::TriangulationDescription::CellData<3>' has no member named 'vertices'
/home/saxenar/local/programs/tortuosity_github/source/tortuosity_parallel_FDM_discriptor_MPI.cc:967:26: error: 'struct dealii::TriangulationDescription::CellData<3>' has no member named 'material_id'
/home/saxenar/local/programs/tortuosity_github/source/tortuosity_parallel_FDM_discriptor_MPI.cc:981:31: error: 'struct dealii::TriangulationDescription::Description<3, 3>' has no member named 'vertices'
/home/saxenar/local/programs/tortuosity_github/source/tortuosity_parallel_FDM_discriptor_MPI.cc:982:31: error: 'struct dealii::TriangulationDescription::Description<3, 3>' has no member named 'cells'
/home/saxenar/local/programs/tortuosity_github/source/tortuosity_parallel_FDM_discriptor_MPI.cc:983:31: error: 'struct dealii::TriangulationDescription::Description<3, 3>' has no member named 'subdomain_ids'

It seems I am misunderstanding how to use TriangulationDescription::CellData and TriangulationDescription::Description.

Could you please guide me on the correct way to set the vertices, cells, and subdomain IDs manually without relying on the utility functions? I want to fully understand how to use the classes and set the data correctly for parallel::fullydistributed::Triangulation.

Thank you for your time and support.

Best regards,
Rishabh

Wolfgang Bangerth

Dec 10, 2024, 8:03:29 AM
to dea...@googlegroups.com
On 12/9/24 08:46, Rishabh Saxena wrote:
>                   cell.vertices = {vertex_map[std::make_tuple(i, j, k)],

>[...]

> When I compile the code, I get the following errors:
>
> /home/saxenar/local/programs/tortuosity_github/source/
> tortuosity_parallel_FDM_discriptor_MPI.cc:957:26: error: 'struct
> dealii::TriangulationDescription::CellData<3>' has no member named 'vertices'

Rishabh:
the error message says that TriangulationDescription::CellData simply does not
have member variables 'vertices', 'material_id', etc., and that is true:

https://dealii.org/developer/doxygen/deal.II/structTriangulationDescription_1_1CellData.html
The general description of the class also explains why it does not need to
store the geometric information you are trying to describe.
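
For orientation, here is a rough sketch of the members those classes do provide, pieced together from the doxygen documentation (mpi_communicator, my_rank, and coarse_cell_id are placeholder names, not part of the interface):

    // The coarse-mesh geometry goes into the Description object via the
    // ordinary dealii::CellData<dim>, which is the struct that has
    // 'vertices' and 'material_id':
    TriangulationDescription::Description<dim, dim> description;
    description.comm = mpi_communicator;
    description.cell_infos.resize(1);             // one vector of cells per level

    description.coarse_cell_vertices = vertices;  // std::vector<Point<dim>>

    dealii::CellData<dim> coarse_cell;
    coarse_cell.vertices    = {0, 1, 2, 3, 4, 5, 6, 7}; // 8 vertex indices in 3d
    coarse_cell.material_id = 1;
    description.coarse_cells.push_back(coarse_cell);
    description.coarse_cell_index_to_coarse_cell_id.push_back(coarse_cell_id);

    // TriangulationDescription::CellData<dim> only stores ownership
    // information for a cell identified through its CellId:
    TriangulationDescription::CellData<dim> cell_info;
    cell_info.id                 = CellId(coarse_cell_id, {}).template to_binary<dim>();
    cell_info.subdomain_id       = my_rank;
    cell_info.level_subdomain_id = my_rank;
    description.cell_infos[0].push_back(cell_info);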

I believe that Peter Munch posted a piece of code here on this forum a while
ago that creates an xyz-uniform fully distributed triangulation like the one
you are setting up here. You might want to search through the archive and see
if you can find it!

Best
WB

Praveen C

Dec 10, 2024, 8:07:55 AM
to Deal. II Googlegroup
I have this on my todo list also, but haven't gotten around to it.

This example is what I plan to use to understand this type of tria


best
praveen

Rishabh Saxena

Dec 10, 2024, 10:25:36 AM
to dea...@googlegroups.com

Dear Prof. Wolfgang,

Thank you for your guidance regarding the use of TriangulationDescription::CellData and for suggesting Peter Munch's work on creating fully distributed triangulations. It was very insightful, and I reviewed the example provided in the repository:
https://github.com/dealii/dealii/blob/master/tests/fullydistributed_grids/create_manually_01.cc 

Based on this reference, I implemented a method to generate a fully distributed triangulation for a small microstructure with dimensions 20x20x20.

template <int dim>
void LaplaceProblem<dim>::generate_mesh_from_matrix(int Nx, int Ny, int Nz)
{
  // Prepare the construction data object
  TriangulationDescription::Description<dim, dim> construction_data;
  construction_data.comm = _mpi_communicator;
  construction_data.cell_infos.resize(1); // Level 0 cell info

  // Define the number of subdivisions and domain size
  std::array<unsigned int, dim> n_subdivision = {{static_cast<unsigned int>(Nx),
                                                  static_cast<unsigned int>(Ny),
                                                  static_cast<unsigned int>(Nz)}};

  //std::array<unsigned int, dim> n_subdivision = {{2,2,2}};
  std::array<double, dim> sizes = {{1.0, 1.0, 1.0}}; // Assume a unit cube domain for simplicity

  // Partition the mesh along the x-axis
  const unsigned int mpi_rank = Utilities::MPI::this_mpi_process(_mpi_communicator);
  const unsigned int mpi_size = Utilities::MPI::n_mpi_processes(_mpi_communicator);
  const auto partitioning_x = Utilities::create_evenly_distributed_partitioning(mpi_rank, mpi_size, n_subdivision[0]);

  if (!partitioning_x.is_empty())
  {
    const int start_x_mpi = partitioning_x.nth_index_in_set(0);
    const int end_x_mpi   = partitioning_x.nth_index_in_set(partitioning_x.n_elements() - 1);

    // Print the locally owned x-range
    std::cout << "start_x: " << start_x_mpi << std::endl;
    std::cout << "end_x: " << end_x_mpi << std::endl;

    // Generate vertices
    std::vector<Point<dim>> vertices;
    std::map<std::tuple<int, int, int>, unsigned int> local_vertex_map;
    unsigned int vertex_counter = 0;

    for (int i = std::max(0, start_x_mpi - 1); i <= std::min<int>(n_subdivision[0], end_x_mpi + 1); ++i)
      for (int j = 0; j <= n_subdivision[1]; ++j)
        for (int k = 0; k <= n_subdivision[2]; ++k)
        {
          Point<dim> p(sizes[0] / n_subdivision[0] * i,
                       sizes[1] / n_subdivision[1] * j,
                       sizes[2] / n_subdivision[2] * k);
          vertices.push_back(p);
          local_vertex_map[std::make_tuple(i, j, k)] = vertex_counter++;
        }

    construction_data.coarse_cell_vertices = vertices;

    // Generate local cells
    for (int i = start_x_mpi; i <= end_x_mpi; ++i)
      for (int j = 0; j < n_subdivision[1]; ++j)
        for (int k = 0; k < n_subdivision[2]; ++k)
        {
          int material_id = mat_matrix[j + k * n_subdivision[1]][i];
          if (material_id == _mat_tau_1)
          {
            CellData<dim> coarse_cell;
            coarse_cell.vertices[0] = local_vertex_map[std::make_tuple(i, j, k)];
            coarse_cell.vertices[1] = local_vertex_map[std::make_tuple(i + 1, j, k)];
            coarse_cell.vertices[2] = local_vertex_map[std::make_tuple(i, j + 1, k)];
            coarse_cell.vertices[3] = local_vertex_map[std::make_tuple(i + 1, j + 1, k)];
            if (dim == 3)
            {
              coarse_cell.vertices[4] = local_vertex_map[std::make_tuple(i, j, k + 1)];
              coarse_cell.vertices[5] = local_vertex_map[std::make_tuple(i + 1, j, k + 1)];
              coarse_cell.vertices[6] = local_vertex_map[std::make_tuple(i, j + 1, k + 1)];
              coarse_cell.vertices[7] = local_vertex_map[std::make_tuple(i + 1, j + 1, k + 1)];
            }

            coarse_cell.material_id = material_id;
            construction_data.coarse_cells.push_back(coarse_cell);

            // Map the coarse cell index
            unsigned int cell_id = i + j * n_subdivision[0] + k * n_subdivision[0] * n_subdivision[1];
            construction_data.coarse_cell_index_to_coarse_cell_id.push_back(cell_id);

            // Fully distributed cell data
            TriangulationDescription::CellData<dim> cell_data;
            cell_data.id = CellId(cell_id, {}).template to_binary<dim>();
            cell_data.subdomain_id = mpi_rank;
            cell_data.level_subdomain_id = 0;

            construction_data.cell_infos[0].push_back(cell_data);
          }
        }
  }

  // Create the fully distributed triangulation
  triangulation_pft.create_triangulation(construction_data);

  // DataOut to handle distributed meshes
  DataOut<dim> data_out;
  data_out.attach_triangulation(triangulation_pft);

  // Add dummy data to ensure mesh output is generated
  Vector<double> dummy_solution(triangulation_pft.n_active_cells());
  for (unsigned int i = 0; i < dummy_solution.size(); ++i)
    dummy_solution[i] = 0.0;
  data_out.add_data_vector(dummy_solution, "dummy");

  // Build the output
  data_out.build_patches();

  // Output VTU file
  std::string filename = "mesh-" + Utilities::int_to_string(mpi_rank, 4) + ".vtu";
  std::ofstream output(filename);
  data_out.write_vtu(output);
}



The VTU files generated for each processor appear correct and are divided as expected. However, I am encountering an issue during the DoF distribution phase.
Specifically, the number of DoFs increases as I use more processors. For instance, using a single processor results in 7918 DoFs, while with two processors, the count rises to 8272, and with three processors, it increases further to 8537.

This suggests that there might be an overlapping of cell faces or an issue with ghost cells being incorrectly accounted for during the DoF distribution. I suspect the problem lies in the handling of shared faces between processors or inconsistencies in cell ownership. I am currently reviewing the partitioning logic and cell indexing within TriangulationDescription::Description.
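
As a quick consistency check (only a sketch; dof_handler is assumed here to be the DoFHandler built on triangulation_pft), I am comparing the global cell count with the DoF count:

    pcout << "global active cells: " << triangulation_pft.n_global_active_cells()
          << ", dofs: " << dof_handler.n_dofs() << std::endl;
    // If the cell count is independent of the number of ranks but n_dofs()
    // still grows, the DoFs on the subdomain interfaces are created once per
    // rank instead of being shared, i.e. the triangulation does not know
    // about the cells of the neighboring subdomains.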

If you have any further suggestions or recommendations to address this issue, they would be greatly appreciated. Thank you once again for your support, and I look forward to your feedback.

Best regards,
Rishabh


--
Thanks & Regards

Rishabh Saxena
Helmut Schmidt University || University of the Federal Armed Forces Hamburg
Applied Mathematics  
Holstenhofweg 85
22 043 Hamburg
Germany

Office: H 1, Room No. 1225
Attachments: mesh-0000.vtu, mesh-0001.vtu, mesh-0002.vtu

Peter Munch

Dec 10, 2024, 3:35:10 PM
to deal.II User Group
Hi Rishabh,

To me it looks like you don't set up the ghost cells properly. Note that I loop over the local cells plus a single layer of ghost cells: https://github.com/dealii/dealii/blob/b2108aabb2ba014b6d35029db19034bded6f135a/tests/fullydistributed_grids/create_manually_01.cc#L70-L74. Once this is fixed, DoFs between subdomains will not be duplicated, so the number of DoFs should stay constant.
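
Transferred to your x-wise partitioning, this would look roughly as follows (only a sketch; start_x_mpi, end_x_mpi, n_subdivision, and mpi_rank are the names from your code):

    // Owned cells live in [start_x_mpi, end_x_mpi]; additionally create one
    // layer of cells on each side and mark it with the neighboring rank.
    for (int i = std::max(0, start_x_mpi - 1);
         i <= std::min<int>(n_subdivision[0] - 1, end_x_mpi + 1); ++i)
      {
        // ... build coarse_cell and cell_data as before ...
        cell_data.subdomain_id = (i < start_x_mpi) ? mpi_rank - 1 :
                                 (i > end_x_mpi)   ? mpi_rank + 1 : mpi_rank;
      }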

Say hi to Thomas and the others!

Best,
Peter

Rishabh Saxena

Dec 11, 2024, 9:11:44 AM
to dea...@googlegroups.com

Hi Peter,

Great to hear from you, and thank you so much for pointing this out! I revisited the implementation and corrected the setup for ghost cells as you suggested. Now, the DoFs between subdomains are handled correctly, and the total number of DoFs remains consistent. Your observation was spot on—thanks for catching that!

One question: can I set the boundary_id as usual, or do I have to add it in construction_data?
I am attaching the boundary_ids implementation along with generate_mesh_from_matrix.

template <int dim>
void LaplaceProblem<dim>::set_boundary_ids(int Nx, int Ny, int Nz)
{
  // Boundary IDs
  const types::boundary_id inlet_boundary_id = 0;
  const types::boundary_id outlet_boundary_id = 1;

  // Iterate over all active cells
  for (const auto &cell : triangulation_pft.active_cell_iterators())
  {
    // Only modify locally owned cells
    if (cell->is_locally_owned())
    {
      for (unsigned int face = 0; face < GeometryInfo<dim>::faces_per_cell; ++face)
      {
        // Check if the face is at the inlet (x = 0)
        if (std::abs(cell->face(face)->center()[0]) < 1e-12) // Near x = 0
        {
          cell->face(face)->set_boundary_id(inlet_boundary_id);
        }
        // Check if the face is at the outlet (x = Nx)
        else if (std::abs(cell->face(face)->center()[0] - Nx) < 1e-12) // Near x = Nx
        {
          cell->face(face)->set_boundary_id(outlet_boundary_id);
        }
      }
    }
  }
}


// This version with ghost cells is the important one
template <int dim>
void LaplaceProblem<dim>::generate_mesh_from_matrix(int Nx, int Ny, int Nz)
{
  // Prepare the construction data object
  TriangulationDescription::Description<dim, dim> construction_data;
  construction_data.comm = _mpi_communicator;
  construction_data.cell_infos.resize(1); // Level 0 cell info

  // Define the number of subdivisions and domain size
  std::array<unsigned int, dim> n_subdivision = {{static_cast<unsigned int>(Nx),
                                                  static_cast<unsigned int>(Ny),
                                                  static_cast<unsigned int>(Nz)}};

  std::array<double, dim> sizes = {{1.0, 1.0, 1.0}}; // Assume a unit cube domain for simplicity

  // Partition the mesh along the x-axis
  const unsigned int mpi_rank = Utilities::MPI::this_mpi_process(_mpi_communicator);
  const unsigned int mpi_size = Utilities::MPI::n_mpi_processes(_mpi_communicator);
  const auto partitioning_x = Utilities::create_evenly_distributed_partitioning(mpi_rank, mpi_size, n_subdivision[0]);

  if (!partitioning_x.is_empty())
  {
    const int start_x_mpi = partitioning_x.nth_index_in_set(0);
    const int end_x_mpi   = partitioning_x.nth_index_in_set(partitioning_x.n_elements() - 1);

    // Generate vertices
    std::vector<Point<dim>> vertices;
    std::map<std::tuple<int, int, int>, unsigned int> local_vertex_map;
    unsigned int vertex_counter = 0;

    for (int i = std::max(0, start_x_mpi - 1); i <= std::min<int>(n_subdivision[0], end_x_mpi + 1); ++i)
      for (int j = 0; j <= n_subdivision[1]; ++j)
        for (int k = 0; k <= n_subdivision[2]; ++k)
        {
          Point<dim> p(sizes[0] / n_subdivision[0] * i,
                       sizes[1] / n_subdivision[1] * j,
                       sizes[2] / n_subdivision[2] * k);
          vertices.push_back(p);
          local_vertex_map[std::make_tuple(i, j, k)] = vertex_counter++;
        }

    construction_data.coarse_cell_vertices = vertices;

    // Generate local cells and a single layer of ghost cells
    for (int i = std::max(0, start_x_mpi - 1); i <= std::min<int>(n_subdivision[0] - 1, end_x_mpi + 1); ++i)
      for (int j = 0; j < n_subdivision[1]; ++j)
        for (int k = 0; k < n_subdivision[2]; ++k)
        {
          int material_id = mat_matrix[j + k * n_subdivision[1]][i];
          if (material_id == _mat_tau_1)
          {
            CellData<dim> coarse_cell;
            coarse_cell.vertices[0] = local_vertex_map[std::make_tuple(i, j, k)];
            coarse_cell.vertices[1] = local_vertex_map[std::make_tuple(i + 1, j, k)];
            coarse_cell.vertices[2] = local_vertex_map[std::make_tuple(i, j + 1, k)];
            coarse_cell.vertices[3] = local_vertex_map[std::make_tuple(i + 1, j + 1, k)];
            if (dim == 3)
            {
              coarse_cell.vertices[4] = local_vertex_map[std::make_tuple(i, j, k + 1)];
              coarse_cell.vertices[5] = local_vertex_map[std::make_tuple(i + 1, j, k + 1)];
              coarse_cell.vertices[6] = local_vertex_map[std::make_tuple(i, j + 1, k + 1)];
              coarse_cell.vertices[7] = local_vertex_map[std::make_tuple(i + 1, j + 1, k + 1)];
            }

            coarse_cell.material_id = material_id;
            construction_data.coarse_cells.push_back(coarse_cell);

            // Map the coarse cell index
            unsigned int cell_id = i + j * n_subdivision[0] + k * n_subdivision[0] * n_subdivision[1];
            construction_data.coarse_cell_index_to_coarse_cell_id.push_back(cell_id);

            // Fully distributed cell data
            TriangulationDescription::CellData<dim> cell_data;
            cell_data.id = CellId(cell_id, {}).template to_binary<dim>();
            cell_data.subdomain_id = (i >= start_x_mpi && i <= end_x_mpi) ? mpi_rank : (i < start_x_mpi ? mpi_rank - 1 : mpi_rank + 1);
            cell_data.level_subdomain_id = 0;

            construction_data.cell_infos[0].push_back(cell_data);
          }
        }
  }

  // Create the fully distributed triangulation
  triangulation_pft.create_triangulation(construction_data);
}


However, I’ve run into a new issue with the solution. Somehow, I’m getting a zero solution despite following the approach in step 40. I suspect that the system might not be assembling properly.
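
For reference, a few cheap checks I am planning to run to narrow this down (only a sketch; system_matrix, system_rhs, and dof_handler are the step-40 names and may be called differently in my code):

    pcout << "||rhs|| = " << system_rhs.l2_norm()
          << ", ||A||_F = " << system_matrix.frobenius_norm() << std::endl;

    // Count the boundary faces that actually received the inlet/outlet ids:
    unsigned int n_inlet = 0, n_outlet = 0;
    for (const auto &cell : dof_handler.active_cell_iterators())
      if (cell->is_locally_owned())
        for (const auto &face : cell->face_iterators())
          if (face->at_boundary())
            {
              if (face->boundary_id() == 0) ++n_inlet;
              if (face->boundary_id() == 1) ++n_outlet;
            }
    pcout << "inlet faces: " << Utilities::MPI::sum(n_inlet, _mpi_communicator)
          << ", outlet faces: " << Utilities::MPI::sum(n_outlet, _mpi_communicator)
          << std::endl;
    // A zero right-hand side together with homogeneous boundary values gives
    // a zero solution, so if these come out as zero the issue is in the
    // boundary ids or the forcing term rather than in the solver.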

Do you have any insights or advice on working with parallel fully distributed meshes? I would really appreciate your thoughts.


Best Regards
Rishabh 


Peter Munch

Dec 11, 2024, 9:24:39 AM
to deal.II User Group

Rishabh Saxena

Dec 11, 2024, 9:45:09 AM
to dea...@googlegroups.com
I tried it this way as well, but it still doesn't work.

If you have any suggestions, they would be helpful.



// Fully distributed cell data
TriangulationDescription::CellData<dim> cell_data;
cell_data.id = CellId(cell_id, {}).template to_binary<dim>();
cell_data.subdomain_id = (i >= start_x_mpi && i <= end_x_mpi) ? mpi_rank : (i < start_x_mpi ? mpi_rank - 1 : mpi_rank + 1);
cell_data.level_subdomain_id = cell_data.subdomain_id;

// Assign boundary IDs for inlet and outlet
if (i == 0)
  cell_data.boundary_ids.emplace_back(0, 1); // Inlet boundary
else if (i + 1 == n_subdivision[0])
  cell_data.boundary_ids.emplace_back(1, 2); // Outlet boundary

construction_data.cell_infos[0].push_back(cell_data);

Nils Margenberg

Dec 11, 2024, 11:09:03 AM
to deal.II User Group
Hey there,
admittedly I didn't look closely into this, but one of the things I'm sceptical about is the mesh you are using, Rishabh (remember our discussion?). In your mesh there are cells that don't share a face but only a line or a single vertex, so the interior of your Triangulation is possibly disconnected. I don't know if this is what you want, or how deal.II handles it.
Usually one of the first sentences you write when you define a PDE problem is "Let Omega be a bounded domain with [...] boundary". A domain is a non-empty, open, and connected set. The triangulation you posted here is not the triangulation of a domain.