typename DoFHandler<dim>::active_cell_iterator cell = dof_handler.begin_active(),
                                               endc = dof_handler.end();
unsigned int cell_number = 0;
for (; cell != endc; ++cell, ++cell_number)
  if (cell->is_locally_owned() && cell_number == 4)
    std::cout << "CELL NUMBER 4 VERTEX 1: " << cell->vertex(0) << std::endl;
Thanks for your suggestions.
Generally, in FEM the elements are numbered as in the picture below:
I understand, of course, that the cells (elements) in deal.II are distributed in parallel, but it should still be possible to identify the geometrical position and the ID of the elements and their corresponding nodes.
For instance the following works:
typename parallel::distributed::Triangulation<dim>::active_cell_iterator
  cell = triangulation.begin_active(),
  endc = triangulation.end();
for (; cell != endc; ++cell)
  std::cout << "CELL NUMBER: " << cell->id() << std::endl;
Of course, the locally-owned check is missing here, but is it correct to access the cells like this?
I am a bit confused, since this is a rather easy task that should not be such a big "deal" in deal.II.
Best regards,
S. A. Mohseni
typename DoFHandler<dim>::active_cell_iterator cell = dof_handler.begin_active(),
                                               endc = dof_handler.end();
unsigned int c = 0;
for (; cell != endc; ++cell, ++c)
  material_id(c) = static_cast<int>(cell->material_id());
std::vector<double> displacement_norms, loaded_nodes, loaded_elements;
double max_displacement;
int max_loaded_node, max_loaded_element;

typename DoFHandler<dim>::active_cell_iterator cell = dof_handler.begin_active(), endc = dof_handler.end();
std::vector<unsigned int> local_dof_indices(fe.dofs_per_cell);
unsigned int cell_number = 0;
for (; cell != endc; ++cell, ++cell_number) {
  pcout << "CELL ID: " << cell_number << std::endl; // this works surprisingly.
  cell->get_dof_indices(local_dof_indices); // Output or accessing this won't work I assume.
  for (unsigned int i = 0, j = 1; i < fe.dofs_per_cell; i += 2, j += 2) {
    unsigned int v = i / dim; // vertex belonging to this x/y DoF pair
    double displacement_x = locally_relevant_solution(local_dof_indices[i]);
    double displacement_y = locally_relevant_solution(local_dof_indices[j]);
    double displacement_norm = sqrt(displacement_x * displacement_x + displacement_y * displacement_y);
    loaded_nodes.push_back(cell->vertex_index(v));
    loaded_elements.push_back(cell_number);
    displacement_norms.push_back(displacement_norm);
  }
}
max_displacement = *max_element(displacement_norms.begin(), displacement_norms.end());
const auto max_index = distance(displacement_norms.begin(),
                                max_element(displacement_norms.begin(), displacement_norms.end()));
// Node and element number of maximal displacements
max_loaded_node = loaded_nodes[max_index];
max_loaded_element = loaded_elements[max_index];
Seyed -- no, that's exactly the opposite of what you can do. "Locally owned"
means precisely that this is the kind of data the current processor has access
to. You cannot access data that is *not locally owned* (with the exception of
ghost elements).
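In other words, the usual pattern is to guard every cell access with such a check. A minimal sketch (using the dof_handler from your code) would be:
typename DoFHandler<dim>::active_cell_iterator cell = dof_handler.begin_active(),
                                               endc = dof_handler.end();
for (; cell != endc; ++cell)
  if (cell->is_locally_owned())
    {
      // only here is it safe to read this cell's data on the current processor
      // (ghost cells additionally allow reading ghosted vector entries)
    }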
What I understood so far is that the Utilities::MPI::max() function computes, for each entry of my vector, the maximum over all processors. This means that if I have a vector of 20 entries and each entry has 8 values distributed over 8 cores, for instance, I again get 20 values as a result, but this time they are the maxima and are processor-independent.
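If I read the documentation correctly, what I mean in code is roughly this (just a sketch; mpi_com is my communicator and the vector contents are made up):
std::vector<double> local_values(20, 0.); // each processor fills its own 20 entries
std::vector<double> maxima(local_values.size());
Utilities::MPI::max(local_values, mpi_com, maxima); // entry-wise maximum over all processors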
Hence, my question now is: how am I able to check where the maximum value comes from? Is it possible to output, for each entry, the processor number from which the maximum value originates?
My aim is to find the node at which the maximum value occurs. This is somewhat cumbersome in parallel, since I have to store information about the current CPU core, cell ID, and node number, then use the distance function shown previously and relate the position of the maximum value to the geometry information I stored.
Thank you. Does your approach also apply to MPICH? Since I am using MPICH, I wonder if this command can be used there as well. Regarding this struct variable, is there an example in the deal.II tutorial steps? Or do you have a C++ example I can learn from?
To be honest, I am not yet very familiar with MPI itself and barely understand deal.II's parallelism ;)
I also have another idea: how about I find the maximum value regardless of its position and processor core, and then loop again over all cells and compare that maximum with the value stored in each cell? Would this be a very inefficient approach, or is looping over cells usually computationally cheap in deal.II?
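Something like the following is what I have in mind (just a sketch; it reuses displacement_norms, cell, endc, and mpi_com from my code above):
const double local_max =
  displacement_norms.empty() ? 0. : *max_element(displacement_norms.begin(), displacement_norms.end());
const double global_max = Utilities::MPI::max(local_max, mpi_com);

for (cell = dof_handler.begin_active(); cell != endc; ++cell)
  if (cell->is_locally_owned())
    {
      // recompute the per-vertex norms here and compare them against
      // global_max (up to a small tolerance) to recover cell and vertex
    }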
But is it necessary to #include <mpi.h>? Otherwise deal.II won't recognize functions such as MPI_Comm_rank(MPI_COMM_WORLD, &myrank) or MPI_Reduce. Or is this implemented in Utilities::MPI::something?
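What I found so far in the documentation (please correct me if I am wrong) is that deal.II wraps the most common calls; local_value below is just a placeholder:
#include <deal.II/base/mpi.h> // should also make the raw MPI functions available when deal.II is built with MPI

const unsigned int my_rank = Utilities::MPI::this_mpi_process(mpi_com); // corresponds to MPI_Comm_rank
const unsigned int n_ranks = Utilities::MPI::n_mpi_processes(mpi_com);  // corresponds to MPI_Comm_size
const double       total   = Utilities::MPI::sum(local_value, mpi_com); // an all-reduce over all processors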
int disp_size = triangulation.n_vertices();
struct
{
  double value;
  int    rank;
} in[disp_size], out[disp_size];
typename DoFHandler<dim>::active_cell_iterator cell = dof_handler.begin_active(), endc = dof_handler.end();
std::vector<unsigned int> local_dof_indices(fe.dofs_per_cell);
unsigned int cell_number = 0;
for (; cell != endc; ++cell, ++cell_number) {
  if (cell->is_locally_owned()) {
    cell->get_dof_indices(local_dof_indices);
    for (unsigned int i = 0, j = 1; i < fe.dofs_per_cell; i += 2, j += 2) {
      unsigned int v = i / dim;
      double displacement_x = locally_relevant_solution(local_dof_indices[i]);
      double displacement_y = locally_relevant_solution(local_dof_indices[j]);
      double displacement_norm = sqrt(displacement_x * displacement_x + displacement_y * displacement_y);
      in[cell->vertex_index(v)].value = displacement_norm;
      in[cell->vertex_index(v)].rank = Utilities::MPI::this_mpi_process(mpi_com);
    }
  }
}
MPI_Reduce(in, out, disp_size, MPI_DOUBLE_INT, MPI_MAXLOC, 0, mpi_com);
std::vector<double> maximum_disp_norms(disp_size);
std::vector<int> disp_ranks(disp_size), disp_nodes(disp_size);
double max_displacement;
int max_index, max_rank, max_node_id;
if (Utilities::MPI::this_mpi_process(mpi_com) == 0) {
  for (int i = 0; i < disp_size; ++i) {
    maximum_disp_norms[i] = out[i].value;
    disp_ranks[i] = out[i].rank;
    disp_nodes[i] = i;
    if (out[i].value > 1e10) {
      maximum_disp_norms[i] = 0;
      disp_ranks[i] = 0;
    }
  }
  // Maximum norm of displacements
  max_displacement = *max_element(maximum_disp_norms.begin(), maximum_disp_norms.end());
  max_index = distance(maximum_disp_norms.begin(),
                       max_element(maximum_disp_norms.begin(), maximum_disp_norms.end()));
  max_rank = disp_ranks[max_index];
  max_node_id = disp_nodes[max_index];
}
// Maximum displacement position
cell = dof_handler.begin_active();
endc = dof_handler.end();
cell_number = 0;
for (; cell != endc; ++cell, ++cell_number) {
  if (cell->is_locally_owned()) {
    // This is just testwise
    if (Utilities::MPI::this_mpi_process(mpi_com) == max_rank)
      std::cout << Utilities::MPI::this_mpi_process(mpi_com) << std::endl;

    for (unsigned int v = 0; v < GeometryInfo<dim>::vertices_per_cell; ++v) {
      if (cell->vertex_index(v) == max_node_id) {
        // std::cout << "VERTEX ID: " << cell->vertex_index(v) << " WITH COORDINATES: " << cell->vertex(v) << std::endl;
      }
    }
  }
}
Extra question: can we store variables or copy them to all processors? I am filling a variable on a locally owned cell, which currently lives on rank 3. My other ranks, especially the root rank 0, then have no clue about the values that were set. Hence, there must be a way to copy data to the other processors, or to let them know about its content?
- n_vertices() only gives you the number of vertices for one process, not the global number. In particular, you cannot rely on it being the same on all processes. Furthermore, you (probably) don't initialize all the values of your struct; this might be the reason for the large numbers you are observing.
int disp_size = triangulation.n_vertices();
struct
{
  double value = 0;
  int    rank  = 0;
} in[disp_size], out[disp_size];
if (this_mpi_process == max_rank) {
  for (unsigned int i = 0; i < n_mpi_processes; ++i)
    if (i != max_rank)
      MPI_Send(&position, 1, MPI_DOUBLE, i, 0, mpi_com);
} else {
  MPI_Recv(&position, 1, MPI_DOUBLE, max_rank, 0, mpi_com, MPI_STATUS_IGNORE);
  printf("Process %d received number from process %d\n", this_mpi_process, max_rank);
}
MPI_Bcast(&position, 1, MPI_DOUBLE, max_rank, mpi_com);
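For completeness, a variant of the reduction above that avoids the extra send/receive step: MPI_Allreduce with MPI_MAXLOC gives every process both the maximum and the rank that owns it, so max_rank is known everywhere without a separate broadcast. A sketch, assuming mpi_com and a locally computed local_norm_max:
struct
{
  double value;
  int    rank;
} local_max, global_max;
local_max.value = local_norm_max; // largest displacement norm found on this process
local_max.rank  = Utilities::MPI::this_mpi_process(mpi_com);

MPI_Allreduce(&local_max, &global_max, 1, MPI_DOUBLE_INT, MPI_MAXLOC, mpi_com);
// afterwards every process knows global_max.value and global_max.rank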