Particles and field interpolation error


Franco Milicchio

Jul 15, 2020, 12:18:55 PM
to deal.II User Group
Dear all,

I am playing with 9.2 and particles, but I've encountered a weird problem. If I remove a particle, the interpolated field on particle locations yields a wrong answer.

My code does (now) everything by hand:

  // Here be dragons...

  Point<2> p_0(0.1,0.5);
  auto ref_cell_0 = GridTools::find_active_cell_around_point(dof_handler, p_0);
  Point<2> ref_p_0 = StaticMappingQ1<2>::mapping.transform_real_to_unit_cell(ref_cell_0, p_0);

  Point<2> p_1(0.5,0.5);
  auto ref_cell_1 = GridTools::find_active_cell_around_point(dof_handler, p_1);
  Point<2> ref_p_1 = StaticMappingQ1<2>::mapping.transform_real_to_unit_cell(ref_cell_1, p_1);

  Point<2> p_2(0.9,0.5);
  auto ref_cell_2 = GridTools::find_active_cell_around_point(dof_handler, p_2);
  Point<2> ref_p_2 = StaticMappingQ1<2>::mapping.transform_real_to_unit_cell(ref_cell_2, p_2);

  particle_handler.insert_particle(Particles::Particle<2>(p_0, ref_p_0, 0), ref_cell_0);
  particle_handler.insert_particle(Particles::Particle<2>(p_1, ref_p_1, 1), ref_cell_1);
  particle_handler.insert_particle(Particles::Particle<2>(p_2, ref_p_2, 2), ref_cell_2);

  particle_handler.update_cached_numbers();

  std::cout << "Printing particle id, location and reference location" << std::endl;
  for(auto &p:particle_handler)
  {
    std::cout << "id: " << p.get_id() << "  loc: " << p.get_location() << "  ref_loc: " << p.get_reference_location() << std::endl;
  }

  field_on_particles.reinit(particle_handler.n_global_particles());
  Particles::Utilities::interpolate_field_on_particles(dof_handler, particle_handler, field, field_on_particles);

  std::cout << "Printing field value on particle location" << std::endl;
  for(int pp = 0; pp< particle_handler.n_global_particles(); pp++)
  {
      std::cout << "fluid id: " << pp << " value: " << field_on_particles[pp] << std::endl;
  }

  // ************************
  // NOW REMOVE A PARTICLE...
  // ************************

  for(auto itr=particle_handler.begin(); itr != particle_handler.end(); itr++)
  {
    if(itr->get_id()==1) particle_handler.remove_particle(itr);
  }

  particle_handler.update_cached_numbers();


  // ***********************
  // ...AND VALUES ARE WRONG
  // ***********************

  std::cout << "Printing particle id, location and reference location after removing particle" << std::endl;

  for(auto itr_2=particle_handler.begin(); itr_2 != particle_handler.end(); itr_2++)
  {
    std::cout << "id: " << itr_2->get_id() << "  loc: " << itr_2->get_location() << "  ref_loc: " << itr_2->get_reference_location() << std::endl;
  }

  field_on_particles.reinit(particle_handler.n_global_particles());
  Particles::Utilities::interpolate_field_on_particles(dof_handler, particle_handler, field, field_on_particles);


  std::cout << "Printing field value on particles location after removing particle" << std::endl;

  for(int ppp = 0; ppp < particle_handler.n_global_particles(); ppp++)
  {
      std::cout << "fluid id: " << ppp << " value: " << field_on_particles[ppp] << std::endl;
  }

The scalar field is generated from the X coordinate, so its range is [0, 1], and as you can see, before the removal the field is perfect. Afterwards, though, I get weird numbers (forgive the ugly code and the useless reinit):


Number of active cells: 1024
Number of degrees of freedom: 1089
Printing particle id, location and reference location
id: 0  loc: 0.1 0.5  ref_loc: 0.2 1
id: 1  loc: 0.5 0.5  ref_loc: 1 1
id: 2  loc: 0.9 0.5  ref_loc: 0.8 1
Printing field value on particles location
fluid id: 0 value: 0.1   <--------------- OK!
fluid id: 1 value: 0.5   <--------------- OK!
fluid id: 2 value: 0.9   <--------------- OK!
Printing particle id, location and reference location after removing particle
id: 0  loc: 0.1 0.5  ref_loc: 0.2 1
id: 2  loc: 0.9 0.5  ref_loc: 0.8 1
Printing field value on particles location after removing particle
fluid id: 0 value: 0.1   <---------------------- STILL OK, I HOPE IT IS
fluid id: 1 value: 0.    <---------------------- ????

Am I using particles in a way they're not intended to be used? Any hints are really welcome!

Thanks!
   Franco

PS. I am attaching a ZIP file with a minimal working code adapting step-3 of the tutorial.


step-3.zip

Luca Heltai

Jul 15, 2020, 1:47:58 PM
to dea...@googlegroups.com
Franco,

The interpolated field says that the field value is zero (on the line above your arrow). This is how it is documented: if a particle is removed, its interpolated value is left unchanged in the target vector. Zero in your case. 

Luca


franco.m...@gmail.com

Jul 16, 2020, 7:47:32 AM
to deal.II User Group
Thanks, Luca.

I have read the documentation; that's why I was skeptical of my own code: I still think I am doing it wrong.

As far as I understand from the documentation, particles should be removed from the handler after updating the cache, and thus the interpolated field should not contain the old, removed one. Unfortunately I don't see any mention of "removal" in Particles::Utilities, but I could be overlooking something here.

Another question related to the removal is more of a worry. If I remove a particle, will any iterators be invalidated? If so, and say I need to remove a lot of particles, what is the best way to do this without too much overhead? As I understand it, I have to remove particles one at a time (is that right?) and call a cache update at the end of the removal process.

Thanks!
    Franco

luca.heltai

Jul 16, 2020, 2:39:00 PM
to Deal.II Users
Ciao Franco,

ParticleHandler was built with the intention of “losing” particles whenever they fall out of the domain. This is the default behaviour of the ParticleHandler, and unless you store a pointer to an existing particle somewhere, you should not be in trouble.

If you iterate over the particles using the ParticleHandler, you should be safe. Particles::Utilities::interpolate_field_on_particles() cannot know how many particles are actually there, so it asks the ParticleHandler for get_next_free_particle_index(), which should not change, even if you drop some particles.

The way the interpolator works is by looping *through* the ParticleHandler (again, on the safe side) and deciding what to store where based on the current particle id. Unless you renumber and reset all particle ids between the removal and the interpolation, this will remain consistent.
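Concretely, something like this should stay consistent after a removal (just a sketch for your serial, scalar-field case, so n_comps is 1; untested, reusing the names from your snippet):

  // Size the result by the highest particle index ever handed out, not by the
  // current number of particles; slots of removed particles simply keep their
  // previous value (zero after a fresh reinit()).
  field_on_particles.reinit(particle_handler.get_next_free_particle_index());
  Particles::Utilities::interpolate_field_on_particles(dof_handler, particle_handler, field, field_on_particles);

  // Read the result back through the particle ids, not through a plain
  // 0 .. n_global_particles()-1 loop:
  for (const auto &p : particle_handler)
    std::cout << "id: " << p.get_id() << "  value: " << field_on_particles[p.get_id()] << std::endl;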

Luca.

franco.m...@gmail.com

Jul 20, 2020, 11:48:29 AM
to deal.II User Group
Dear Luca,

thanks for your answer. I've moved along, but I am still encountering problems with the interpolation. This seems to be a problem with the way I use PETSc.

So, as far as I understand, I need to initialize a PETSc vector of length n_global_particles * components (in my case, 2, the space dimension), and locally each process should handle n_locally_owned_particles * components. 

First, just a simple question: does n_global_particles work correctly on processes that own no particles? It seems to return zero when no particles are locally owned, but the documentation, as I understand it, says it should yield the same value on all processes.

The code I am using is this:

void Step3::setup_system()
{
    GridTools::partition_triangulation(n_mpi_processes, triangulation);
    dof_handler.distribute_dofs(fe);
    DoFRenumbering::subdomain_wise(dof_handler);
 
    const std::vector<IndexSet> locally_owned_dofs_per_proc = DoFTools::locally_owned_dofs_per_subdomain(dof_handler);
    const IndexSet locally_owned_dofs = locally_owned_dofs_per_proc[this_mpi_process];

    field.reinit(locally_owned_dofs, mpi_communicator);

    VectorTools::interpolate(dof_handler, initial_function, field);

    // Particles

    particle_handler.initialize(triangulation, StaticMappingQ1<2>::mapping);

    std::vector<Point<2>> particles_to_insert;

    for (const auto &cell : dof_handler.active_cell_iterators())
    {
        if (cell->subdomain_id() == this_mpi_process)
        {
            if(cell->center()(1) < 0.03)
            {
                particles_to_insert.emplace_back(cell->center());
                std::cout << "MPI_rank: "<< this_mpi_process  << "   cell_center: " << cell->center() << std::endl;
            }
        }
    }
    particle_handler.insert_particles(particles_to_insert);
    particle_handler.update_cached_numbers();

    // n_global_particles returns zero when no particles are locally owned: compute the real number
    int total_particles = Utilities::MPI::sum(particle_handler.n_global_particles(), mpi_communicator);

    for(int i = 0; i < n_mpi_processes; i++)
    {
        if(this_mpi_process == i)
            std::cout << "MPI_process: " << i
                      << "     locally_owned_particles: " << particle_handler.n_locally_owned_particles()
                      << "     total_particles: " << total_particles
                      << "     n_global_particles: " << particle_handler.n_global_particles()
                      << std::endl;
    }

    field_on_particles.reinit(mpi_communicator, total_particles*2 , particle_handler.n_locally_owned_particles()*2);
    field_on_particles.compress(VectorOperation::add); // unnecessary?

    for(int i = 0; i < n_mpi_processes; i++)
    {
        if(this_mpi_process == i)
                std::cout   << "VECTOR"
                            << "MPI_process: " << i
                            << "     vec.size: " << field_on_particles.size()
                            << "     vec.local_size: " << field_on_particles.local_size()
                            << std::endl;
    }

    Particles::Utilities::interpolate_field_on_particles(dof_handler, particle_handler, field, field_on_particles);
    
}

As you can see, I am following Particles::Utilities::interpolate_field_on_particles, whose documentation says "the size of the vector must be n_locally_owned_particles times the n_components"; the total size, I guess, should be n_global_particles * components. I might be completely wrong, though.

The output I am getting is this, followed by the MPI abort:

/usr/bin/mpirun -np 4 cmake-build-debug/step-3
MPI_rank: 0   cell_center: 0.015625 0.015625
MPI_rank: 0   cell_center: 0.046875 0.015625
MPI_rank: 0   cell_center: 0.078125 0.015625
MPI_rank: 0   cell_center: 0.109375 0.015625
MPI_rank: 0   cell_center: 0.140625 0.015625
MPI_rank: 0   cell_center: 0.171875 0.015625
MPI_rank: 0   cell_center: 0.203125 0.015625
MPI_rank: 0   cell_center: 0.234375 0.015625
MPI_rank: 0   cell_center: 0.265625 0.015625
MPI_rank: 0   cell_center: 0.296875 0.015625
MPI_rank: 0   cell_center: 0.328125 0.015625
MPI_rank: 0   cell_center: 0.359375 0.015625
MPI_rank: 2   cell_center: 0.390625 0.015625
MPI_rank: 2   cell_center: 0.421875 0.015625
MPI_rank: 2   cell_center: 0.453125 0.015625
MPI_rank: 2   cell_center: 0.484375 0.015625
MPI_rank: 2   cell_center: 0.515625 0.015625
MPI_rank: 2   cell_center: 0.546875 0.015625
MPI_rank: 2   cell_center: 0.578125 0.015625
MPI_rank: 2   cell_center: 0.609375 0.015625
MPI_rank: 2   cell_center: 0.640625 0.015625
MPI_rank: 2   cell_center: 0.671875 0.015625
MPI_rank: 2   cell_center: 0.703125 0.015625
MPI_rank: 2   cell_center: 0.734375 0.015625
MPI_rank: 2   cell_center: 0.765625 0.015625
MPI_rank: 2   cell_center: 0.796875 0.015625
MPI_rank: 2   cell_center: 0.828125 0.015625
MPI_rank: 2   cell_center: 0.859375 0.015625
MPI_rank: 2   cell_center: 0.890625 0.015625
MPI_rank: 2   cell_center: 0.921875 0.015625
MPI_rank: 2   cell_center: 0.953125 0.015625
MPI_rank: 2   cell_center: 0.984375 0.015625
MPI_process: 1     locally_owned_particles: 0     total_particles: 32     n_global_particles: 0
MPI_process: 0     locally_owned_particles: 12     total_particles: 32     n_global_particles: 12
MPI_process: 3     locally_owned_particles: 0     total_particles: 32     n_global_particles: 0
MPI_process: 2     locally_owned_particles: 20     total_particles: 32     n_global_particles: 20
VECTORMPI_process: 3     vec.size: 64     vec.local_size: 0
VECTORMPI_process: 1     vec.size: 64     vec.local_size: 0
VECTORMPI_process: 0     vec.size: 64     vec.local_size: 24
VECTORMPI_process: 2     vec.size: 64     vec.local_size: 40

The error is the following:

--------------------------------------------------------
An error occurred in line <220> of file </media/murer/murer/Programmi/dealii_9.2.0_MPI_P4est/include/deal.II/particles/utilities.h> in function
    void dealii::Particles::Utilities::interpolate_field_on_particles(const dealii::DoFHandler<dim, spacedim>&, const dealii::Particles::ParticleHandler<dim, spacedim>&, const InputVectorType&, OutputVectorType&, const dealii::ComponentMask&) [with int dim = 2; int spacedim = 2; InputVectorType = dealii::PETScWrappers::MPI::Vector; OutputVectorType = dealii::PETScWrappers::MPI::Vector]
The violated condition was:
    static_cast<typename ::dealii::internal::argument_type<void( typename std::common_type<decltype(interpolated_field.size()), decltype(particle_handler.get_next_free_particle_index() * n_comps)>::type)>::type>(interpolated_field.size()) == static_cast<typename ::dealii::internal::argument_type<void( typename std::common_type<decltype(interpolated_field.size()), decltype(particle_handler.get_next_free_particle_index() * n_comps)>::type)>::type>(particle_handler.get_next_free_particle_index() * n_comps)
Additional information:
    Dimension 64 not equal to 24.

Stacktrace:
-----------
#0  cmake-build-debug/step-3: void dealii::Particles::Utilities::interpolate_field_on_particles<2, 2, dealii::PETScWrappers::MPI::Vector, dealii::PETScWrappers::MPI::Vector>(dealii::DoFHandler<2, 2> const&, dealii::Particles::ParticleHandler<2, 2> const&, dealii::PETScWrappers::MPI::Vector const&, dealii::PETScWrappers::MPI::Vector&, dealii::ComponentMask const&)
#1  cmake-build-debug/step-3: Step3::setup_system()
#2  cmake-build-debug/step-3: Step3::run()
#3  cmake-build-debug/step-3: main
--------------------------------------------------------

Calling MPI_Abort now.
To break execution in a GDB session, execute 'break MPI_Abort' before running. You can also put the following into your ~/.gdbinit:
  set breakpoint pending on
  break MPI_Abort
  set breakpoint pending auto
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
with errorcode 255.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------

--------------------------------------------------------
An error occurred in line <220> of file </media/murer/murer/Programmi/dealii_9.2.0_MPI_P4est/include/deal.II/particles/utilities.h> in function
    void dealii::Particles::Utilities::interpolate_field_on_particles(const dealii::DoFHandler<dim, spacedim>&, const dealii::Particles::ParticleHandler<dim, spacedim>&, const InputVectorType&, OutputVectorType&, const dealii::ComponentMask&) [with int dim = 2; int spacedim = 2; InputVectorType = dealii::PETScWrappers::MPI::Vector; OutputVectorType = dealii::PETScWrappers::MPI::Vector]
The violated condition was:
    static_cast<typename ::dealii::internal::argument_type<void( typename std::common_type<decltype(interpolated_field.size()), decltype(particle_handler.get_next_free_particle_index() * n_comps)>::type)>::type>(interpolated_field.size()) == static_cast<typename ::dealii::internal::argument_type<void( typename std::common_type<decltype(interpolated_field.size()), decltype(particle_handler.get_next_free_particle_index() * n_comps)>::type)>::type>(particle_handler.get_next_free_particle_index() * n_comps)
Additional information:
    Dimension 64 not equal to 40.

Stacktrace:
-----------
#0  cmake-build-debug/step-3: void dealii::Particles::Utilities::interpolate_field_on_particles<2, 2, dealii::PETScWrappers::MPI::Vector, dealii::PETScWrappers::MPI::Vector>(dealii::DoFHandler<2, 2> const&, dealii::Particles::ParticleHandler<2, 2> const&, dealii::PETScWrappers::MPI::Vector const&, dealii::PETScWrappers::MPI::Vector&, dealii::ComponentMask const&)
#1  cmake-build-debug/step-3: Step3::setup_system()
#2  cmake-build-debug/step-3: Step3::run()
#3  cmake-build-debug/step-3: main
--------------------------------------------------------

Calling MPI_Abort now.
To break execution in a GDB session, execute 'break MPI_Abort' before running. You can also put the following into your ~/.gdbinit:
  set breakpoint pending on
  break MPI_Abort
  set breakpoint pending auto
[murerpc:11716] 1 more process has sent help message help-mpi-api.txt / mpi-abort
[murerpc:11716] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages

Process finished with exit code 255

I have tried many solutions, even initializing the vector like the field solution, so that I have excess space, but to no avail.

What am I missing?

Thanks for any help you can provide!
   Franco

luca.heltai

Jul 20, 2020, 1:21:02 PM
to Deal.II Users
Dear Franco,

what type of triangulation are you using? If you look at the code that is run with `update_cached_numbers`

https://github.com/dealii/dealii/blob/9e50f3ac04d088a9d2ffa74a81f08348e7f98a65/source/particles/particle_handler.cc#L173

you will see that the numbers are updated (just as you are doing manually) for parallel::distributed::Triangulation objects, and otherwise the triangulation is treated as if it was a serial one.

I suspect you are using a parallel::shared::Triangulation object.

The ParticleHandler class was built with parallel::distributed::Triangulation objects in mind, and only recently was made to work with serial triangulations.

You have spotted a bug in the library, in the sense that ParticleHandler should throw an exception when the Triangulation is parallel::shared, but not parallel::distributed. Since all triangulations are derived from the serial one, ParticleHandler thinks your triangulation is serial, and therefore gets inconsistent numbers across the processors.

I think there are only a few places that we would need to change to make the code run also in your case, but to be on the safe side, I’d switch to parallel::distributed::Triangulation (if you can), otherwise I can assist you in opening a pull request with the required changes.
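The switch itself is small; roughly (a sketch, assuming 2d and a p4est-enabled build):

  #include <deal.II/distributed/tria.h>

  parallel::distributed::Triangulation<2> triangulation(mpi_communicator);

  GridGenerator::hyper_cube(triangulation, 0, 1);
  triangulation.refine_global(5);

  // With a fully distributed triangulation there is no need for
  // GridTools::partition_triangulation() or DoFRenumbering::subdomain_wise():
  // each process stores only its own part of the mesh, and
  // ParticleHandler::update_cached_numbers() exchanges the particle counts,
  // so n_global_particles() agrees on all processes.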

Luca.


franco.m...@gmail.com

Jul 21, 2020, 11:14:36 AM
to deal.II User Group
Thanks for your hint; the parallel distributed triangulation now works and the global particle numbers are correct. However, it crashes when interpolating the field on the particles.

This is how I initialize the field vector:

    GridGenerator::hyper_cube(triangulation, 0, 1);
    triangulation.refine_global(5);
    dof_handler.distribute_dofs(fe);
    IndexSet fluid_owned_dofs;
    fluid_owned_dofs = dof_handler.locally_owned_dofs();
    field.reinit(fluid_owned_dofs, MPI_COMM_WORLD);
    VectorTools::interpolate(dof_handler, initial_function, field);

Something is wrong, since it seems I cannot get the output VTU correctly:

--------------------------------------------------------
An error occurred in line <1215> of file </var/folders/8z/hlb6vc015qjggytkxn84m6_c0000gn/T/heltai/spack-stage/spack-stage-dealii-9.2.0-yv7ehpigjxtbrhqi7y4menbu532z6rxi/spack-src/include/deal.II/lac/petsc_vector_base.h> in function
    void dealii::PETScWrappers::VectorBase::extract_subvector_to(const ForwardIterator, const ForwardIterator, OutputIterator) const [ForwardIterator = const unsigned int *, OutputIterator = double *]
The violated condition was:
    index >= static_cast<unsigned int>(begin) && index < static_cast<unsigned int>(end)
Additional information:
    This exception -- which is used in many places in the library -- usually indicates that some condition which the author of the code thought must be satisfied at a certain point in an algorithm, is not fulfilled. An example would be that the first part of an algorithm sorts elements of an array in ascending order, and a second part of the algorithm later encounters an element that is not larger than the previous one.

There is usually not very much you can do if you encounter such an exception since it indicates an error in deal.II, not in your own program. Try to come up with the smallest possible program that still demonstrates the error and contact the deal.II mailing lists with it to obtain help.


The field on particle locations is interpolated as follows:

    IndexSet locally_owned_tracer_particle_coordinates;
    locally_owned_tracer_particle_coordinates = particle_handler.locally_relevant_ids().tensor_product(complete_index_set(2));

    field_on_particles.reinit(locally_owned_tracer_particle_coordinates, MPI_COMM_WORLD);

    //field_on_particles.reinit(MPI_COMM_WORLD, total_particles*2 ,particle_handler.n_locally_owned_particles()*2);

    MPI_Barrier(MPI_COMM_WORLD);
    for(int i = 0; i < n_mpi_processes; i++)
    {
        if(this_mpi_process == i)
            std::cout   << "VECTOR FIELD ON PARTICLES"
                        << "MPI_process: " << i
                        << "     vec.size: " << field_on_particles.size()
                        << "     vec.local_size: " << field_on_particles.local_size()
                        << std::endl;
    }
    MPI_Barrier(MPI_COMM_WORLD);

    for(int i = 0; i < n_mpi_processes; i++)
    {
        if(this_mpi_process == i)
            std::cout   << "VECTOR FIELD"
                        << "MPI_process: " << i
                        << "     vec.size: " << field.size()
                        << "     vec.local_size: " << field.local_size()
                        << "     vec.range from: " << field.local_range().first << "  to: " << field.local_range().second
                        << std::endl;
    }
    Particles::Utilities::interpolate_field_on_particles(dof_handler, particle_handler, field, field_on_particles);

The interpolation error is about accessing an element outside the local range. As before, I am probably initializing the vectors incorrectly:

MPI_rank: 0   cell_center: 0.015625 0.015625
[...]
MPI_rank: 1   cell_center: 0.984375 0.015625
MPI_process: 0     locally_owned_particles: 16     total_particles: 32     n_global_particles: 32
MPI_process: 1     locally_owned_particles: 16     total_particles: 32     n_global_particles: 32
MPI_process: 2     locally_owned_particles: 0     total_particles: 32     n_global_particles: 32
MPI_process: 3     locally_owned_particles: 0     total_particles: 32     n_global_particles: 32
id: 0    location: 0.015625 0.015625
[...]
VECTOR FIELD ON PARTICLESMPI_process: 1     vec.size: 64     vec.local_size: 32
VECTOR FIELD ON PARTICLESMPI_process: 3     vec.size: 64     vec.local_size: 0
VECTOR FIELD ON PARTICLESMPI_process: 2     vec.size: 64     vec.local_size: 0
VECTOR FIELD ON PARTICLESMPI_process: 0     vec.size: 64     vec.local_size: 32
VECTOR FIELDMPI_process: 0     vec.size: 2178     vec.local_size: 578     vec.range from: 0  to: 578
VECTOR FIELDMPI_process: 1     vec.size: 2178     vec.local_size: 544     vec.range from: 578  to: 1122
VECTOR FIELDMPI_process: 2     vec.size: 2178     vec.local_size: 544     vec.range from: 1122  to: 1666
VECTOR FIELDMPI_process: 3     vec.size: 2178     vec.local_size: 512     vec.range from: 1666  to: 2178
libc++abi.dylib: terminating with uncaught exception of type dealii::PETScWrappers::internal::VectorReference::ExcAccessToNonlocalElement: 
--------------------------------------------------------
An error occurred in line <104> of file </var/folders/8z/hlb6vc015qjggytkxn84m6_c0000gn/T/heltai/spack-stage/spack-stage-dealii-9.2.0-yv7ehpigjxtbrhqi7y4menbu532z6rxi/spack-src/source/lac/petsc_vector_base.cc> in function
    PetscScalar dealii::PETScWrappers::internal::VectorReference::operator double() const
The violated condition was: 
    (index >= static_cast<size_type>(begin)) && (index < static_cast<size_type>(end))
Additional information: 
    You tried to access element 218 of a distributed vector, but only elements 578 through 1121 are stored locally and can be accessed.
--------------------------------------------------------


Again, I think I am not understanding how these vectors should be initialized. I also tried to initialize them by giving the total size and the local one, but I still get errors.

Any hints are greatly appreciated.
    Franco

PS. Sorry for the double post; Google has conveniently put the "post" button right under the font selection button...

luca.heltai

Jul 22, 2020, 2:15:30 AM
to Deal.II Users
Dear Franco,

you need read access to the locally active dofs, not just the locally owned dofs.

Take a look at how things are done in step-70:

the vector that is used to interpolate the field on the particles is a read-only ghosted vector, that contains the locally relevant dofs


locally_relevant_solution.reinit(fluid_owned_dofs,
                                 fluid_relevant_dofs,
                                 mpi_communicator);


initialized right after it has been computed:

solver.solve(system_matrix, solution, system_rhs, P);
constraints.distribute(solution);
locally_relevant_solution = solution;

The particle interpolation needs read access to all dofs that belong to active cells. Those may not be owned by the current mpi process, and therefore you need a ghosted vector.

Particles::Utilities::interpolate_field_on_particles(
  fluid_dh,
  tracer_particle_handler,
  locally_relevant_solution,
  tracer_particle_velocities,
  fluid_fe->component_mask(FEValuesExtractors::Vector(0)));
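
For completeness, fluid_relevant_dofs in step-70 is obtained along these lines (a sketch, using the step-70 names):

  fluid_owned_dofs = fluid_dh.locally_owned_dofs();
  DoFTools::extract_locally_relevant_dofs(fluid_dh, fluid_relevant_dofs);

so the reinit() above gives a ghosted vector holding the locally owned entries plus read-only copies of the ghost entries it needs.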

Best,
Luca.

franco.m...@gmail.com

Jul 22, 2020, 9:51:41 AM
to deal.II User Group
Aha! That was the missing piece, thanks, you've solved the problem!

Thanks!
    Franco
