A data structure for distributed storage of some cell "average"

vachanpo...@gmail.com

unread,
May 27, 2021, 12:20:49 AM5/27/21
to deal.II User Group
Dear All,

I have a quantity which takes a single value in each cell; let me call this an "average". From every cell, I want to be able to access the averages of the neighbouring cells.

What data structure can I use?

Since I don't have mesh refinement, I was thinking of mapping this average to the cell index and using a parallel vector for storage. However, LA::MPI::Vector probably will not work because cell numbering is not contiguous within a subdomain. Also, I couldn't find any function that can renumber the cells in a way that would enable the use of LA::MPI::Vector.

In the worst case, I can make a regular LA::MPI::Vector (using dofs) and set the average to all dofs in a cell. I hope there is an alternative.

Thanking in anticipation,
Vachan

Wolfgang Bangerth

unread,
May 27, 2021, 9:31:12 AM5/27/21
to dea...@googlegroups.com
You are looking for cell->global_active_cell_index(), which is documented like so:

/**
* Return a globally unique cell index for the current cell,
* assuming it is not artificial. The value is identical to
* active_cell_index() if the cell is part of a serial
* triangulation.
*
* In the context of parallel triangulations, locally-owned cells
* are enumerated contiguously within each subdomain of the
* mesh. This ensures that the index returned by this function can
* be used as the index into vectors with a total of
* Triangulation::n_global_active_cells() entries, and for which
* every process stores a contiguous part. If such a cell-data
* vector has been set up with
* parallel::TriangulationBase::global_active_cell_index_partitioner(),
* the index returned by this function can then be used to access
* the correct vector entry.
*/
types::global_cell_index
global_active_cell_index() const;
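For illustration, a minimal sketch of how this index can be used to store one value per cell (the `triangulation`/`mpi_communicator` variable names and `compute_average()` are placeholders, and this assumes deal.II 9.2 or later with a parallel::distributed::Triangulation):

```cpp
// Sketch only: fill a vector with one entry per globally active cell.
// "compute_average" is a placeholder for whatever per-cell computation you do.
const auto partitioner =
  triangulation.global_active_cell_index_partitioner().lock();

// Each process stores a contiguous part of this globally sized vector.
LinearAlgebra::distributed::Vector<double> averages(
  partitioner->locally_owned_range(),
  partitioner->ghost_indices(),
  mpi_communicator);

for (const auto &cell : triangulation.active_cell_iterators())
  if (cell->is_locally_owned())
    averages[cell->global_active_cell_index()] = compute_average(cell);

// After this, the averages of ghost (neighbouring, non-owned) cells are
// readable on each process as well.
averages.update_ghost_values();
```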

Best
W.

--
------------------------------------------------------------------------
Wolfgang Bangerth email: bang...@colostate.edu
www: http://www.math.colostate.edu/~bangerth/

vachanpo...@gmail.com

unread,
May 28, 2021, 1:08:17 AM5/28/21
to deal.II User Group
Wolfgang,

That is exactly what I need! But unfortunately I currently use version 9.1.1 and I think this was introduced in 9.2.0 (https://github.com/dealii/dealii/commit/0ec6aa676a27956a07f513585f407a954678ae52#diff-ca64fff9c161a88a3b72134cb89731e3bf3ce3b758ff386ef120d07e24312c65).

Is there any other alternative (that I can use on 9.1.1)? Otherwise I will have to update the installation.

Wolfgang Bangerth

unread,
May 28, 2021, 4:51:12 PM5/28/21
to dea...@googlegroups.com
On 5/27/21 11:08 PM, vachanpo...@gmail.com wrote:
>
> That is exactly what I need! But unfortunately I currently use version 9.1.1
> and I think this was introduced in 9.2.0
> (https://github.com/dealii/dealii/commit/0ec6aa676a27956a07f513585f407a954678ae52#diff-ca64fff9c161a88a3b72134cb89731e3bf3ce3b758ff386ef120d07e24312c65).
>
> Is there any other alternative (that I can use on 9.1.1)? Otherwise I will
> have to update the installation.

If you wait for a week or two, you can update to 9.3 (or just install the
pre-release snapshot that will, for all practical purposes, be identical).
Sticking with the old release would require you to write work-arounds for
functionality that exists today, and to stick with these work-arounds for the
indefinite future.

vachan potluri

unread,
May 28, 2021, 11:14:01 PM5/28/21
to dea...@googlegroups.com
Ok, right. Will have a new installation then!

--
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see https://groups.google.com/d/forum/dealii?hl=en
---
You received this message because you are subscribed to the Google Groups "deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email to dealii+un...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/dealii/b1e38738-eead-c4e9-0530-95e099a923d6%40colostate.edu.

vachanpo...@gmail.com

unread,
Jun 8, 2021, 6:18:58 AM6/8/21
to deal.II User Group
If I want to add such a vector to DataOut, will the regular DataOut::add_data_vector() work? Or is something else required to be done?

Wolfgang Bangerth

unread,
Jun 8, 2021, 9:54:13 AM6/8/21
to dea...@googlegroups.com
On 6/8/21 4:18 AM, vachanpo...@gmail.com wrote:
> If I want to add such a vector to DataOut, will the regular
> DataOut::add_data_vector() work? Or is something else required to be done?

Yes, DataOut::add_data_vector() can take two kinds of vectors:
* Ones that have as many entries as there are DoFs
* Ones that have as many entries as there are active cells
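As an illustrative sketch of the second kind (the variable names here are made up for the example):

```cpp
// One process-local entry per active cell.
Vector<double> cell_averages(triangulation.n_active_cells());
for (const auto &cell : triangulation.active_cell_iterators())
  cell_averages[cell->active_cell_index()] = 1.0; // some per-cell value

DataOut<dim> data_out;
data_out.attach_dof_handler(dof_handler);
// Stating the type explicitly avoids ambiguity if n_active_cells == n_dofs:
data_out.add_data_vector(cell_averages, "average",
                         DataOut<dim>::type_cell_data);
data_out.build_patches();
```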

vachan potluri

unread,
Jun 8, 2021, 10:02:17 AM6/8/21
to dea...@googlegroups.com
Thank you :).

vachan potluri

unread,
Jun 11, 2021, 2:10:37 AM6/11/21
to dea...@googlegroups.com
Hello,

I am having an issue in using DataOut for such a vector in a parallel run. Below is an MWE which captures my problem. I am encountering a segmentation fault (signal 11).

#include <deal.II/base/mpi.h>
#include <deal.II/base/utilities.h>
#include <deal.II/distributed/tria.h>
#include <deal.II/grid/tria_accessor.h>
#include <deal.II/grid/grid_generator.h>
#include <deal.II/lac/generic_linear_algebra.h>
#include <deal.II/fe/fe_dgq.h>
#include <deal.II/fe/mapping_q_generic.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/numerics/data_out.h>

#include <fstream>

using namespace dealii;
namespace LA
{
  using namespace ::dealii::LinearAlgebraPETSc;
}

int main(int argc, char **argv)
{
  Utilities::MPI::MPI_InitFinalize mpi_initialization(argc, argv, 1);

  constexpr int dim = 3;

  MPI_Comm mpi_comm(MPI_COMM_WORLD);

  parallel::distributed::Triangulation<dim> triang(mpi_comm);
  GridGenerator::subdivided_hyper_cube(triang, 5);

  const std::shared_ptr<const Utilities::MPI::Partitioner> cell_partitioner =
    triang.global_active_cell_index_partitioner().lock();
  LA::MPI::Vector vec(cell_partitioner->locally_owned_range(), mpi_comm);
  LA::MPI::Vector gh_vec(cell_partitioner->locally_owned_range(),
                         cell_partitioner->ghost_indices(),
                         mpi_comm);

  DoFHandler<dim> dof_handler(triang);
  FE_DGQ<dim> fe(2);
  dof_handler.distribute_dofs(fe);

  for (const auto &cell : dof_handler.active_cell_iterators())
    {
      if (!cell->is_locally_owned())
        continue;

      vec[cell->global_active_cell_index()] = cell->global_active_cell_index();
    }
  vec.compress(VectorOperation::insert);

  gh_vec = vec;

  DataOut<dim> data_out;
  DataOutBase::VtkFlags flags;
  flags.write_higher_order_cells = true;
  data_out.set_flags(flags);

  data_out.attach_dof_handler(dof_handler);

  data_out.add_data_vector(gh_vec, "x");

  data_out.build_patches(MappingQGeneric<dim>(2),
                         2,
                         DataOut<dim>::CurvedCellRegion::curved_inner_cells);

  std::ofstream proc_file(
    "output" +
    Utilities::int_to_string(Utilities::MPI::this_mpi_process(mpi_comm), 2) +
    ".vtu");
  data_out.write_vtu(proc_file);
  proc_file.close();

  if (Utilities::MPI::this_mpi_process(mpi_comm) == 0)
    {
      std::vector<std::string> filenames;
      for (unsigned int i = 0; i < Utilities::MPI::n_mpi_processes(mpi_comm);
           i++)
        filenames.emplace_back("output" + Utilities::int_to_string(i, 2) +
                               ".vtu");

      std::ofstream master_file("output.pvtu");
      data_out.write_pvtu_record(master_file, filenames);
      master_file.close();
    }
  return 0;
}


I suspect I am not partitioning the ghosted vector properly, which causes a mismatch between what is available and what DataOut wants. Am I missing any other step in between?

Thanking in anticipation

vachanpo...@gmail.com

unread,
Jun 14, 2021, 4:40:41 AM6/14/21
to deal.II User Group
Hello,

I have a small question and I think the issue lies here. Are these two function calls equivalent?

ghosted_vector.reinit(
  /*locally owned indices*/,
  /*ghost indices only*/,
  mpi_communicator
);

ghosted_vector.reinit(
  /*locally owned indices*/,
  /*relevant indices (owned + ghost)*/,
  mpi_communicator
);

The program runs as expected in serial execution. I am using the first version here, and maybe that is what causes the crash in parallel execution.

Any reference to a use case of p::d::Triangulation::global_active_cell_index_partitioner() would be very helpful :) !

Thanks

Wolfgang Bangerth

unread,
Jun 14, 2021, 6:54:29 PM6/14/21
to dea...@googlegroups.com
On 6/11/21 12:09 AM, vachan potluri wrote:
>
> I am having an issue in using DataOut for such vector in a parallel process. I
> am attaching a MWE which captures my problem. I am encountering a segmentation
> fault (signal 11).

The problem is a mismatch of expectations. Like in many other places, when you
pass a cell-based vector to DataOut, it assumes that on every process, the
vector has one entry for each active cell of the triangulation -- i.e., on
every process it is a *local* vector -- rather than a distributed vector with
one entry for each globally active cell. In other words, for cell-based
vectors, we ignore the fact that the computation might be parallel.

vachan potluri

unread,
Jun 15, 2021, 1:15:05 AM6/15/21
to dea...@googlegroups.com
> The problem is a mismatch of expectations. Like in many other places, when you
> pass a cell-based vector to DataOut, it assumes that on every process, the
> vector has one entry for each active cell of the triangulation -- i.e., on
> every process it is a *local* vector -- rather than a distributed vector with
> one entry for each globally active cell. In other words, for cell-based
> vectors, we ignore the fact that the computation might be parallel.

Thank you for the response; looks like I didn't do my homework properly :P. Copying the MPI vector into a Vector<float> of size triang.n_active_cells(), and adding that local vector to DataOut instead, works.

Thanks again!
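For the record, the fix described above might look like this (a sketch reusing the variable names from the earlier MWE):

```cpp
// Copy the distributed, ghosted cell vector into a process-local vector
// with one entry per active cell, which is what DataOut expects.
Vector<float> local_avg(triang.n_active_cells());
for (const auto &cell : dof_handler.active_cell_iterators())
  if (cell->is_locally_owned())
    local_avg[cell->active_cell_index()] =
      gh_vec[cell->global_active_cell_index()];

data_out.add_data_vector(local_avg, "x");
```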
