Problems in handling the userdata of Distributed Triangulation


Jack

Nov 3, 2016, 1:22:59 AM
to deal.II User Group

I add userdata to each cell using cell->user_pointer. It seems that the userdata of some cells is lost when those cells change during the refinement and coarsening procedure. How can I make sure that the refined children inherit the userdata from their parent cell (and a coarsened parent cell from its children)?

And how can I obtain the userdata of ghost cells (i.e., the userdata of a neighbor cell that happens to belong to another process rather than the present one) during parallel distributed computing?


Thanks very much!!

Jean-Paul Pelteret

Nov 3, 2016, 2:06:48 AM
to deal.II User Group
Dear Jack,

You'll be happy to know that this functionality has recently been added to the developer version of deal.II via the CellDataStorage and TransferableQuadraturePointData classes. There are also a couple of usage examples in the testsuite, namely the files "<DEAL_II_SOURCE_DIR>/tests/base/quadrature_point_data*.cc".
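In brief, the intended usage looks roughly like this (only a sketch; MyQData, triangulation, projection_fe, mass_quadrature and quadrature are placeholder names, and the tests above are the authoritative examples):

  #include <deal.II/base/quadrature_point_data.h>

  // The per-quadrature-point data; deriving from
  // TransferableQuadraturePointData and implementing pack/unpack is what
  // allows the data to be moved around during refinement and repartitioning:
  class MyQData : public TransferableQuadraturePointData
  {
  public:
    double value;

    virtual unsigned int number_of_values() const
    {
      return 1;
    }
    virtual void pack_values(std::vector<double> &values) const
    {
      values[0] = value;
    }
    virtual void unpack_values(const std::vector<double> &values)
    {
      value = values[0];
    }
  };

  CellDataStorage<typename parallel::distributed::Triangulation<dim>::cell_iterator,
                  MyQData> data_storage;

  // Attach one data object per quadrature point to each locally owned cell:
  for (const auto &cell : triangulation.active_cell_iterators())
    if (cell->is_locally_owned())
      data_storage.initialize(cell, quadrature.size());

  // To keep the data alive across mesh adaptation, project it to a (scalar)
  // continuous field, refine/coarsen, and project back:
  parallel::distributed::ContinuousQuadratureDataTransfer<dim, MyQData>
    data_transfer(projection_fe, mass_quadrature, quadrature);

  data_transfer.prepare_for_coarsening_and_refinement(triangulation, data_storage);
  triangulation.execute_coarsening_and_refinement();
  data_transfer.interpolate();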

Regards,
J-P

Jack

Nov 3, 2016, 3:24:11 AM
to deal.II User Group

Dear Jean-Paul,

Thanks so much for your suggestions. I'm really happy about this new development.

 

I understand that using the CellDataStorage and TransferableQuadraturePointData classes is better than user_pointer when the mesh requires coarsening and refining.

Yet I do not understand how I can read the data of a neighboring quadrature point that happens to lie in a ghost cell. Could you give me some clues?

Thanks!

Denis Davydov

Nov 3, 2016, 4:01:34 AM
to deal.II User Group
Hi Jack,

Could you please elaborate on why you need to read the data on ghost cells? The CellDataStorage and TransferableQuadraturePointData classes are designed for assembly,
where we always loop over locally owned cells and calculate the local matrices to be assembled. If the ownership of a cell changes during coarsening/refinement,
the quadrature data will be shipped to the new owner.

However, when designing these classes we did not consider their usage with ghost cells.
You could theoretically have some data attached to ghost cells, but I am not sure you would be able to synchronize it easily.
Namely, when the locally owned data belonging to some processor changes, you would need to ship it to the other processors
for which this cell is a ghost cell. That is not implemented, if I recall correctly.

Cheers,
Denis.

Jack

Nov 3, 2016, 5:12:38 AM
to deal.II User Group

Dear Denis,

 

Thanks for your replies.

I do not have data attached to ghost cells as such, but I have a variable that changes with time, and its update depends on data from the neighboring cells.

For example,

const double value1 = 1, value2 = -1;

for (cell = tr.begin_active(); cell != tr.end(); ++cell)
  if (cell->is_locally_owned())
    {
      bool flag = false;

      // get_data() returns one shared_ptr per quadrature point:
      std::vector<std::shared_ptr<MyData> > qpd =
        data_storage.template get_data<MyData>(cell);

      for (unsigned int q = 0; q < n_q_points; ++q)
        if (qpd[q]->value > 0)
          {
            flag = true;
            break;
          }

      if (!flag)
        {
          // Find the qpd values of the neighbor cells and check them all;
          // if one of them is positive, then flag = true.
        }

      // Set the qpd values of the present cell:
      for (unsigned int q = 0; q < n_q_points; ++q)
        qpd[q]->value = (flag ? value1 : value2);

      ...
    }


Thanks!

Denis Davydov

Nov 3, 2016, 5:32:46 AM
to dea...@googlegroups.com
I see. In serial mode everything is doable, because you can go from the current cell to its neighbors to their quadrature data and do whatever you need.
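For instance, the neighbor lookup you left as a comment could look roughly like this in serial (a sketch reusing the data_storage, MyData and flag names from your snippet; note that in parallel this breaks exactly when the neighbor is a ghost cell, since no data was ever initialized there):

  for (unsigned int f = 0; f < GeometryInfo<dim>::faces_per_cell; ++f)
    if (!cell->at_boundary(f) && cell->neighbor(f)->active())
      {
        // Quadrature-point data of the neighbor behind face f:
        const std::vector<std::shared_ptr<MyData> > qpd_neighbor =
          data_storage.template get_data<MyData>(cell->neighbor(f));

        for (unsigned int q = 0; q < n_q_points; ++q)
          if (qpd_neighbor[q]->value > 0)
            {
              flag = true;
              break;
            }
      }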

For the MPI case you would still need to somehow ship the data to ghost cells, because at some point a neighbor of a locally owned cell will be a ghost cell.

I think it is doable by looking closer at p::d::Tria. Essentially, you need a function which takes the parallel triangulation and the quadrature-point data (including that of ghost cells) and synchronizes the data at ghost cells within the MPI communicator, by getting it from the processors that own those cells and thereby the data. Do you want to give it a try and create a work-in-progress pull request on GitHub? One could start with a simple unit test of 4 cells in 2D and 2 MPI cores.
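Something like the following signature, which is purely illustrative (no such function exists in deal.II at the moment, and all names here are made up):

  // Hypothetical helper: after the locally owned data has changed, fetch
  // the quadrature-point data of all ghost cells from their owners.
  template <int dim, typename DataType>
  void
  exchange_quadrature_data_on_ghosts
  (const parallel::distributed::Triangulation<dim> &triangulation,
   CellDataStorage<typename parallel::distributed::Triangulation<dim>::cell_iterator,
                   DataType>                       &data_storage);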

P.S. Essentially, I would say it all comes down to the fact that your problem is non-local.

Cheers,
Denis

On Nov 3, 2016, at 10:12, Jack <yche...@gmail.com> wrote:


Denis Davydov

Nov 3, 2016, 6:17:31 AM
to dea...@googlegroups.com
Hi Jack,

One more approach:

you first loop over all locally owned cells and determine the local flag (true/false).
Then you still need to ship this flag to the ghost cells on the other MPI processes.
Having done that, you can do your assembly as usual; see the sketch below.
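For a single flag per cell, one cheap way to ship it is to piggyback on a piecewise-constant finite element field, since ghosted parallel vectors already know how to update their ghost entries. A rough sketch (triangulation, mpi_communicator and local_flag() are placeholder names for your own setup):

  #include <deal.II/dofs/dof_handler.h>
  #include <deal.II/dofs/dof_tools.h>
  #include <deal.II/fe/fe_dgq.h>
  #include <deal.II/lac/la_parallel_vector.h>

  FE_DGQ<dim>     fe_flag(0);                    // exactly one DoF per cell
  DoFHandler<dim> flag_dof_handler(triangulation);
  flag_dof_handler.distribute_dofs(fe_flag);

  IndexSet locally_relevant_dofs;
  DoFTools::extract_locally_relevant_dofs(flag_dof_handler, locally_relevant_dofs);

  LinearAlgebra::distributed::Vector<double> flags(flag_dof_handler.locally_owned_dofs(),
                                                   locally_relevant_dofs,
                                                   mpi_communicator);

  // Each locally owned cell writes its own flag...
  std::vector<types::global_dof_index> dof_index(1);
  for (const auto &cell : flag_dof_handler.active_cell_iterators())
    if (cell->is_locally_owned())
      {
        cell->get_dof_indices(dof_index);
        flags(dof_index[0]) = local_flag(cell) ? 1. : -1.;
      }

  // ...and after this call the flag is also readable on ghost cells:
  flags.update_ghost_values();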

Kind regards,
Denis 

Jack

Nov 3, 2016, 6:23:41 AM
to deal.II User Group

Hi Denis,

Thanks.

Yes, that's right. For distributed MPI parallel computing this is a real problem.

I have been learning deal.II for less than three months, so I'm still new to it. Also, I am not working in numerical mathematics, so it is difficult for me to solve such a problem, but I will try. If I figure it out, I will post it to the forum. If you make some progress on this problem, please let me know.

Thanks so much.


Cheers,


Jack

Wolfgang Bangerth

Nov 3, 2016, 10:54:44 AM
to dea...@googlegroups.com
On 11/03/2016 03:32 AM, Denis Davydov wrote:
> [...]

Just for reference, Rene Gassmoeller implemented something similar for
particles recently: There, we store particles on every locally owned cell, but
there are times when one also wants to access particles on ghost cells. This
is implemented here:
https://github.com/geodynamics/aspect/pull/1216
https://github.com/geodynamics/aspect/pull/1253

It's not terribly complicated because one just has to gather the information
on cells that are ghosts on some other processor, and then send this
information to all relevant processors.

Best
W.

--
------------------------------------------------------------------------
Wolfgang Bangerth email: bang...@colostate.edu
www: http://www.math.colostate.edu/~bangerth/

Jack

Nov 3, 2016, 9:00:54 PM
to deal.II User Group, bang...@colostate.edu

Dear Prof. Bangerth,

Many thanks for your suggestions and help!

I will try this following your ASPECT references. I hope I can figure it out smoothly.

Thanks again!

Regards,

Jack


On Thursday, November 3, 2016 at 10:54:44 PM UTC+8, Wolfgang Bangerth wrote:

Jack

Nov 8, 2016, 9:06:07 PM
to deal.II User Group, bang...@colostate.edu
Hi everyone,

I'm not familiar with ASPECT; could someone tell me how to get the data of ghost cells?

In ASPECT, I found that obtaining ghost particles is implemented through the following functions
in ~/aspect/source/particle/world.cc:

template <int dim>
std::map<types::subdomain_id,unsigned int>
World<dim>::get_subdomain_id_to_neighbor_map() const
{
}


template <int dim>
void
World<dim>::exchange_ghost_particles()
{
}


template <int dim>
void
World<dim>::send_recv_particles
(const std::vector<std::vector<std::pair<types::LevelInd,Particle<dim> > > > &send_particles,
 std::vector<std::pair<types::LevelInd,Particle<dim> > >                     &received_particles)
{
}


Presently, I just have simple data (for example, an object of std::vector<valuetype> on each cell), which is not as complex as a particle. How can I realize this easily?

Thanks very much!


Maybe I can modify the following code directly:


   template <int dim>
    std::map<types::subdomain_id, unsigned int>
    World<dim>::get_subdomain_id_to_neighbor_map() const
    {
      std::map<types::subdomain_id, unsigned int> subdomain_id_to_neighbor_map;
      const std::set<types::subdomain_id> ghost_owners = this->get_triangulation().ghost_owners();
      std::set<types::subdomain_id>::const_iterator ghost_owner = ghost_owners.begin();

      for (unsigned int neighbor_id=0; neighbor_id<ghost_owners.size(); ++neighbor_id,++ghost_owner)
        {
          subdomain_id_to_neighbor_map.insert(std::make_pair(*ghost_owner,neighbor_id));
        }
      return subdomain_id_to_neighbor_map;
    }




 
    template <int dim>
    void
    World<dim>::exchange_ghost_particles()
    {
      TimerOutput::Scope timer_section(this->get_computing_timer(), "Particles: Exchange ghosts");

      // First clear the current ghost_particle information
      ghost_particles.clear();

      const std::map<types::subdomain_id, unsigned int> subdomain_to_neighbor_map(get_subdomain_id_to_neighbor_map());

      std::vector<std::vector<std::pair<types::LevelInd, Particle<dim> > > > ghost_particles_by_domain(subdomain_to_neighbor_map.size());
      std::vector<std::set<unsigned int> > vertex_to_neighbor_subdomain(this->get_triangulation().n_vertices());

      typename DoFHandler<dim>::active_cell_iterator
      cell = this->get_dof_handler().begin_active(),
      endc = this->get_dof_handler().end();
      for (; cell != endc; ++cell)
        {
          if (cell->is_ghost())
            for (unsigned int v=0; v<GeometryInfo<dim>::vertices_per_cell; ++v)
              vertex_to_neighbor_subdomain[cell->vertex_index(v)].insert(cell->subdomain_id());
        }

      cell = this->get_triangulation().begin_active();
      for (; cell != endc; ++cell)
        {
          if (!cell->is_ghost())
            {
              std::set<unsigned int> cell_to_neighbor_subdomain;
              for (unsigned int v=0; v<GeometryInfo<dim>::vertices_per_cell; ++v)
                {
                  cell_to_neighbor_subdomain.insert(vertex_to_neighbor_subdomain[cell->vertex_index(v)].begin(),
                                                    vertex_to_neighbor_subdomain[cell->vertex_index(v)].end());
                }

              if (cell_to_neighbor_subdomain.size() > 0)
                {
                  const std::pair< const typename std::multimap<types::LevelInd,Particle <dim> >::iterator,
                        const typename std::multimap<types::LevelInd,Particle <dim> >::iterator>
                        particle_range_in_cell = particles.equal_range(std::make_pair(cell->level(),cell->index()));
                  for (std::set<types::subdomain_id>::const_iterator domain=cell_to_neighbor_subdomain.begin();
                       domain != cell_to_neighbor_subdomain.end(); ++domain)
                    {
                      const unsigned int neighbor_id = subdomain_to_neighbor_map.find(*domain)->second;
                      ghost_particles_by_domain[neighbor_id].insert(
                        ghost_particles_by_domain[neighbor_id].end(),
                        particle_range_in_cell.first,
                        particle_range_in_cell.second);
                    }
                }
            }
        }

      std::vector<std::pair<types::LevelInd, Particle<dim> > > received_ghost_particles;
      send_recv_particles(ghost_particles_by_domain,
                          received_ghost_particles);

      ghost_particles.insert(received_ghost_particles.begin(),
                             received_ghost_particles.end());
    }
I'm not clear about particles.equal_range().
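If I understand it correctly, equal_range() returns the pair of iterators bounding all entries of the std::multimap with the given key, i.e. all particles that live on one cell, roughly like this (a sketch using the names from the code above):

  // All particles stored for the cell identified by (level, index):
  const types::LevelInd key = std::make_pair(cell->level(), cell->index());
  const auto particle_range = particles.equal_range(key);

  for (auto it = particle_range.first; it != particle_range.second; ++it)
    {
      const Particle<dim> &particle = it->second;
      // ... use the particle on this cell ...
    }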
The send_recv_particles function below is also difficult for me to understand:

  template <int dim>
    void
    World<dim>::send_recv_particles(const std::vector<std::vector<std::pair<types::LevelInd,Particle <dim> > > > &send_particles,
                                    std::vector<std::pair<types::LevelInd, Particle<dim> > >                     &received_particles)
    {
      // Determine the communication pattern
      const std::vector<types::subdomain_id> neighbors (this->get_triangulation().ghost_owners().begin(),
                                                        this->get_triangulation().ghost_owners().end());
      const unsigned int n_neighbors = neighbors.size();

      Assert(n_neighbors == send_particles.size(),
             ExcMessage("The particles to send to other processes should be sorted into a vector "
                        "containing as many vectors of particles as there are neighbor processes. This "
                        "is not the case for an unknown reason. Contact the developers if you encounter "
                        "this error."));

      unsigned int n_send_particles = 0;
      for (unsigned int i=0; i<n_neighbors; ++i)
        n_send_particles += send_particles[i].size();

#if DEAL_II_VERSION_GTE(8,5,0)
      const unsigned int cellid_size = sizeof(CellId::binary_type);
      const unsigned int particle_size = property_manager->get_particle_size() + integrator->get_data_size() + cellid_size;
#else
      const unsigned int particle_size = property_manager->get_particle_size() + integrator->get_data_size();
#endif

      // Determine the amount of data we will send to other processors
      std::vector<unsigned int> n_send_data(n_neighbors);
      std::vector<unsigned int> n_recv_data(n_neighbors);

      std::vector<unsigned int> send_offsets(n_neighbors);
      std::vector<unsigned int> recv_offsets(n_neighbors);

      // Allocate space for sending and receiving particle data
      std::vector<char> send_data(n_send_particles * particle_size);
      void *data = static_cast<void *> (&send_data.front());

      for (types::subdomain_id neighbor_id = 0; neighbor_id < n_neighbors; ++neighbor_id)
        {
          send_offsets[neighbor_id] = reinterpret_cast<std::size_t> (data) - reinterpret_cast<std::size_t> (&send_data.front());

          // Copy the particle data into the send array
          typename std::vector<std::pair<types::LevelInd,Particle <dim> > >::const_iterator
          cell_particle = send_particles[neighbor_id].begin(),
          end_particle = send_particles[neighbor_id].end();
          for (; cell_particle != end_particle; ++cell_particle)
            {
#if DEAL_II_VERSION_GTE(8,5,0)
              const typename parallel::distributed::Triangulation<dim>::cell_iterator cell (&this->get_triangulation(),
                                                                                            cell_particle->first.first,
                                                                                            cell_particle->first.second);
              const CellId::binary_type cellid = cell->id().template to_binary<dim>();
              memcpy(data, &cellid, cellid_size);
              data = static_cast<char *>(data) + cellid_size;
#endif
              cell_particle->second.write_data(data);
              data = integrator->write_data(data, cell_particle->second.get_id());
            }
          n_send_data[neighbor_id] = reinterpret_cast<std::size_t> (data) - send_offsets[neighbor_id] - reinterpret_cast<std::size_t> (&send_data.front());
        }

      // Notify other processors how many particles we will send
      std::vector<MPI_Request> n_requests(2*n_neighbors);
      for (unsigned int i=0; i<n_neighbors; ++i)
        MPI_Irecv(&(n_recv_data[i]), 1, MPI_INT, neighbors[i], 0, this->get_mpi_communicator(), &(n_requests[2*i]));
      for (unsigned int i=0; i<n_neighbors; ++i)
        MPI_Isend(&(n_send_data[i]), 1, MPI_INT, neighbors[i], 0, this->get_mpi_communicator(), &(n_requests[2*i+1]));
      MPI_Waitall(2*n_neighbors,&n_requests[0],MPI_STATUSES_IGNORE);

      // Determine how many particles and data we will receive
      unsigned int total_recv_data = 0;
      for (unsigned int neighbor_id=0; neighbor_id<n_neighbors; ++neighbor_id)
        {
          recv_offsets[neighbor_id] = total_recv_data;
          total_recv_data += n_recv_data[neighbor_id];
        }

      // Set up the space for the received particle data
      std::vector<char> recv_data(total_recv_data);

      // Exchange the particle data between domains
      std::vector<MPI_Request> requests(2*n_neighbors);
      unsigned int send_ops = 0;
      unsigned int recv_ops = 0;

      for (unsigned int i=0; i<n_neighbors; ++i)
        if (n_recv_data[i] > 0)
          {
            MPI_Irecv(&(recv_data[recv_offsets[i]]), n_recv_data[i], MPI_CHAR, neighbors[i], 1, this->get_mpi_communicator(),&(requests[send_ops]));
            send_ops++;
          }

      for (unsigned int i=0; i<n_neighbors; ++i)
        if (n_send_data[i] > 0)
          {
            MPI_Isend(&(send_data[send_offsets[i]]), n_send_data[i], MPI_CHAR, neighbors[i], 1, this->get_mpi_communicator(),&(requests[send_ops+recv_ops]));
            recv_ops++;
          }
      MPI_Waitall(send_ops+recv_ops,&requests[0],MPI_STATUSES_IGNORE);

      // Put the received particles into the domain if they are in the triangulation
      const void *recv_data_it = static_cast<const void *> (&recv_data.front());

#if !DEAL_II_VERSION_GTE(8,5,0)
      const std::vector<std::set<typename Triangulation<dim>::active_cell_iterator> >
      vertex_to_cells(GridTools::vertex_to_cell_map(this->get_triangulation()));
      const std::vector<std::vector<Tensor<1,dim> > > vertex_to_cell_centers(vertex_to_cell_centers_directions(vertex_to_cells));

      std::vector<unsigned int> neighbor_permutation;
#endif

      while (reinterpret_cast<std::size_t> (recv_data_it) - reinterpret_cast<std::size_t> (&recv_data.front()) < total_recv_data)
        {
#if DEAL_II_VERSION_GTE(8,5,0)
          CellId::binary_type binary_cellid;
          memcpy(&binary_cellid, recv_data_it, cellid_size);
          const CellId id(binary_cellid);
          recv_data_it = static_cast<const char *> (recv_data_it) + cellid_size;

          const typename parallel::distributed::Triangulation<dim>::active_cell_iterator cell = id.to_cell(this->get_triangulation());
          const Particle<dim> recv_particle(recv_data_it,property_manager->get_particle_size());
          recv_data_it = integrator->read_data(recv_data_it, recv_particle.get_id());
#else
          Particle<dim> recv_particle(recv_data_it,property_manager->get_particle_size());
          recv_data_it = integrator->read_data(recv_data_it, recv_particle.get_id());
          const std::pair<const typename parallel::distributed::Triangulation<dim>::active_cell_iterator,
                Point<dim> > current_cell_and_position =
                  GridTools::find_active_cell_around_point<> (this->get_mapping(),
                                                              this->get_triangulation(),
                                                              recv_particle.get_location());
          typename parallel::distributed::Triangulation<dim>::active_cell_iterator cell = current_cell_and_position.first;
          recv_particle.set_reference_location(current_cell_and_position.second);

          // GridTools::find_active_cell_around_point can find a different cell than
          // particle_is_in_cell if the particle is very close to the boundary
          // therefore, we might get a cell here that does not belong to us.
          // But then at least one of its neighbors belongs to us, and the particle
          // is extremely close to the boundary of these two cells. Look in the
          // neighbor cells for the particle.
          if (!cell->is_locally_owned())
            {
              const unsigned int closest_vertex = get_closest_vertex_of_cell(cell,recv_particle.get_location());
              const unsigned int closest_vertex_index = cell->vertex_index(closest_vertex);
              Tensor<1,dim> vertex_to_particle = recv_particle.get_location() - cell->vertex(closest_vertex);
              vertex_to_particle /= vertex_to_particle.norm();

              const unsigned int n_neighbor_cells = vertex_to_cells[closest_vertex_index].size();

              neighbor_permutation.resize(n_neighbor_cells);
              for (unsigned int i=0; i<n_neighbor_cells; ++i)
                neighbor_permutation[i] = i;

              std::sort(neighbor_permutation.begin(),
                        neighbor_permutation.end(),
                        std_cxx11::bind(&compare_particle_association<dim>,
                                        std_cxx11::_1,
                                        std_cxx11::_2,
                                        std_cxx11::cref(vertex_to_particle),
                                        std_cxx11::cref(vertex_to_cell_centers[closest_vertex_index])));

              // Search all of the cells adjacent to the closest vertex of the previous cell
              // Most likely we will find the particle in them.
              for (unsigned int i=0; i<n_neighbor_cells; ++i)
                {
                  try
                    {
                      typename std::set<typename Triangulation<dim>::active_cell_iterator>::const_iterator neighbor_cell = vertex_to_cells[closest_vertex_index].begin();
                      std::advance(neighbor_cell,neighbor_permutation[i]);
                      const Point<dim> p_unit = this->get_mapping().transform_real_to_unit_cell(*neighbor_cell,
                                                                                                recv_particle.get_location());
                      if (GeometryInfo<dim>::is_inside_unit_cell(p_unit))
                        {
                          cell = *neighbor_cell;
                          recv_particle.set_reference_location(p_unit);
                          break;
                        }
                    }
                  catch (typename Mapping<dim>::ExcTransformationFailed &)
                    {}
                }
            }
#endif

          const types::LevelInd found_cell = std::make_pair(cell->level(),cell->index());

          received_particles.push_back(std::make_pair(found_cell, recv_particle));
        }

      AssertThrow(recv_data_it == &recv_data.back()+1,
                  ExcMessage("The amount of data that was read into new particles "
                             "does not match the amount of data sent around."));
    }

Thanks so much!

Best 

Jack

Jack

Nov 21, 2016, 1:28:26 AM
to deal.II User Group, bang...@colostate.edu
Hi,

I found a better way to realize this: using an MPI distributed vector, just like the finite element solution vector. I save the values in each cell, DoF by DoF, using cell->set_dof_values(), and then read them back using cell->get_dof_values(). The latter function can read the DoF values at the inner partition boundary as well. The only problem is that at a few DoFs the values read back differ from the values that were set (I'm not sure whether these DoFs are on the partition boundary or elsewhere), but I can check these DoF values and adjust them to the right ones.
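The core of it looks roughly like this (a sketch; fe, dof_handler, locally_owned_dofs, locally_relevant_dofs and mpi_communicator come from my usual setup, and I use the Trilinos wrappers). I suspect the few wrong values come from DoFs shared between neighboring cells: with a continuous element, set_dof_values() from two adjacent cells writes to the same global entry, so one of the two writes wins; a discontinuous element such as FE_DGQ would avoid the sharing altogether.

  // Write the per-cell data into a distributed vector, cell by cell:
  TrilinosWrappers::MPI::Vector cell_data(locally_owned_dofs, mpi_communicator);
  Vector<double> local_values(fe.dofs_per_cell);

  for (const auto &cell : dof_handler.active_cell_iterators())
    if (cell->is_locally_owned())
      {
        // ... fill local_values with the data of this cell ...
        cell->set_dof_values(local_values, cell_data);
      }
  cell_data.compress(VectorOperation::insert);

  // Copy into a ghosted vector so that ghost entries become readable:
  TrilinosWrappers::MPI::Vector ghosted_data(locally_owned_dofs,
                                             locally_relevant_dofs,
                                             mpi_communicator);
  ghosted_data = cell_data;

  // Now the values can be read on ghost cells as well:
  for (const auto &cell : dof_handler.active_cell_iterators())
    if (cell->is_ghost())
      cell->get_dof_values(ghosted_data, local_values);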



On Wednesday, November 9, 2016 at 10:06:07 AM UTC+8, Jack wrote: