Application of inverse mass matrix in matrix-free context using cell_loop instead of scale()


Maxi Miller

Jan 18, 2020, 6:57:24 AM
to deal.II User Group
In step-48 the inverse mass matrix is applied by storing the inverse diagonal entries in a vector and calling the function scale() on the result vector, as in the following code:
        data.cell_loop(&LaplaceOperator::local_apply_cell,
                       this,
                       dst,
                       src,
                       true);
        computing_times[0] += timer.wall_time();
        timer.restart();

        dst.scale(inv_mass_matrix);
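
For context, the inv_mass_matrix vector used there is filled once at setup, roughly as follows (a sketch from memory of step-48, adapted to the template parameters of the code above and assuming the MatrixFree object was built with the (degree + 1)-point Gauss-Lobatto quadrature of that tutorial; see step-48 itself for the exact code):

        // Build the diagonal of the mass matrix by integrating the constant
        // function 1 against all test functions; with the Gauss-Lobatto basis
        // of step-48 the mass matrix is diagonal, so this is exact. Then
        // invert the locally owned entries in place.
        data.initialize_dof_vector(inv_mass_matrix);
        FEEvaluation<dim, degree, degree + 1, 1, Number> fe_eval(data);
        for (unsigned int cell = 0; cell < data.n_macro_cells(); ++cell)
          {
            fe_eval.reinit(cell);
            for (unsigned int q = 0; q < fe_eval.n_q_points; ++q)
              fe_eval.submit_value(make_vectorized_array<Number>(1.), q);
            fe_eval.integrate(true, false); // values only, no gradients
            fe_eval.distribute_local_to_global(inv_mass_matrix);
          }
        inv_mass_matrix.compress(VectorOperation::add);
        for (unsigned int k = 0; k < inv_mass_matrix.local_size(); ++k)
          inv_mass_matrix.local_element(k) =
            (inv_mass_matrix.local_element(k) > 1e-15 ?
               1. / inv_mass_matrix.local_element(k) :
               1.);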



Now I would like to do the same, but use a cell_loop instead of scale(). Thus, I created an additional function called "local_apply_inverse_mass_matrix":
    template <int dim, int degree, int n_points_1d>
    void LaplaceOperator<dim, degree, n_points_1d>::local_apply_inverse_mass_matrix(
      const MatrixFree<dim, Number> &                   data,
      LinearAlgebra::distributed::Vector<Number> &      dst,
      const LinearAlgebra::distributed::Vector<Number> &src,
      const std::pair<unsigned int, unsigned int> &     cell_range) const
    {
      (void)data;
      (void)cell_range;
      dst = src;
      dst.scale(inv_mass_matrix);
    }
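
(As an aside: a genuinely cell-local variant, which never touches ghost entries, could be built on MatrixFreeOperators::CellwiseInverseMassMatrix from <deal.II/matrix_free/operators.h>. A sketch, untested, assuming the MatrixFree object provides a (degree + 1)-point Gauss quadrature for this FEEvaluation:)

    template <int dim, int degree, int n_points_1d>
    void LaplaceOperator<dim, degree, n_points_1d>::local_apply_inverse_mass_matrix(
      const MatrixFree<dim, Number> &                   data,
      LinearAlgebra::distributed::Vector<Number> &      dst,
      const LinearAlgebra::distributed::Vector<Number> &src,
      const std::pair<unsigned int, unsigned int> &     cell_range) const
    {
      FEEvaluation<dim, degree, degree + 1, 1, Number> phi(data);
      MatrixFreeOperators::CellwiseInverseMassMatrix<dim, degree, 1, Number>
                                             inverse(phi);
      AlignedVector<VectorizedArray<Number>> coefficients(phi.n_q_points);

      for (unsigned int cell = cell_range.first; cell < cell_range.second; ++cell)
        {
          phi.reinit(cell);
          phi.read_dof_values(src);
          // The inverse JxW values of the current cell batch define the
          // coefficients of the local inverse mass matrix ...
          inverse.fill_inverse_JxW_values(coefficients);
          // ... which is then applied in place to the cell dof values.
          inverse.apply(coefficients, 1,
                        phi.begin_dof_values(), phi.begin_dof_values());
          phi.set_dof_values(dst);
        }
    }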

When running the scale()-based function above, though, using
        LinearAlgebra::distributed::Vector<Number> tmp(src);

        data.initialize_dof_vector(tmp);
        data.initialize_dof_vector(dst);
        data.cell_loop(&LaplaceOperator::local_apply_cell,
                       this,
                       tmp,
                       src,
                       true);
        computing_times[0] += timer.wall_time();
        timer.restart();

        data.cell_loop(&LaplaceOperator::local_apply_inverse_mass_matrix,
                       this,
                       dst,
                       tmp,
                       true);
        computing_times[1] += timer.wall_time();

        computing_times[3] += 1.;


I get the error
An error occurred in line <3338> of file </opt/dealii/include/deal.II/matrix_free/matrix_free.h> in function
    void dealii::internal::VectorDataExchange<dim, Number, VectorizedArrayType>::compress_start(unsigned int, VectorType&) [with VectorType = dealii::LinearAlgebra::distributed::Vector<double>; typename std::enable_if<(dealii::internal::has_compress_start<VectorType>::value && dealii::internal::has_exchange_on_subset<VectorType>::value), VectorType>::type* <anonymous> = 0; int dim = 2; Number = double; VectorizedArrayType = dealii::VectorizedArray<double, 4>]
The violated condition was:
    vec.has_ghost_elements() == false
Additional information:
    You are trying to use functionality in deal.II that is currently not implemented. In many cases, this indicates that there simply didn't appear much of a need for it, or that the author of the original code did not have the time to implement a particular case. If you hit this exception, it is therefore worth the time to look into the code to find out whether you may be able to implement the missing functionality. If you do, please consider providing a patch to the deal.II development sources (see the deal.II website on how to contribute).

Stacktrace:
-----------
#0  ./MF_FES_RK4-Test: void dealii::internal::VectorDataExchange<2, double, dealii::VectorizedArray<double, 4> >::compress_start<dealii::LinearAlgebra::distributed::Vector<double, dealii::MemorySpace::Host>, (dealii::LinearAlgebra::distributed::Vector<double, dealii::MemorySpace::Host>*)0>(unsigned int, dealii::LinearAlgebra::distributed::Vector<double, dealii::MemorySpace::Host>&)
#1  ./MF_FES_RK4-Test: void dealii::internal::compress_start<2, dealii::LinearAlgebra::distributed::Vector<double, dealii::MemorySpace::Host>, double, dealii::VectorizedArray<double, 4>, (dealii::LinearAlgebra::distributed::Vector<double, dealii::MemorySpace::Host>*)0>(dealii::LinearAlgebra::distributed::Vector<double, dealii::MemorySpace::Host>&, dealii::internal::VectorDataExchange<2, double, dealii::VectorizedArray<double, 4> >&, unsigned int)
#2  ./MF_FES_RK4-Test: dealii::internal::MFWorker<dealii::MatrixFree<2, double, dealii::VectorizedArray<double, 4> >, dealii::LinearAlgebra::distributed::Vector<double, dealii::MemorySpace::Host>, dealii::LinearAlgebra::distributed::Vector<double, dealii::MemorySpace::Host>, Step40::LaplaceOperator<2, 2, 4>, true>::vector_compress_start()
#3  /opt/dealii/lib/libdeal_II.g.so.9.2.0-pre: dealii::internal::MatrixFreeFunctions::MPICommunication::execute()
#4  /opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64_lin/gcc4.7/libtbb_debug.so.2:
#5  /opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64_lin/gcc4.7/libtbb_debug.so.2:
#6  /opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64_lin/gcc4.7/libtbb_debug.so.2:
#7  /opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64_lin/gcc4.7/libtbb_debug.so.2:
#8  /opt/intel/compilers_and_libraries_2019.5.281/linux/tbb/lib/intel64_lin/gcc4.7/libtbb_debug.so.2:
#9  /lib64/libpthread.so.0:
#10  /lib64/libc.so.6: clone

Why is that happening, and how can I fix it?
Thanks!

peterrum

Jan 18, 2020, 8:01:11 AM
to deal.II User Group
Hi Maxi,

I guess I am not the right person to explain the reason for that assert. But what is happening is that by calling scale() on the whole vector inside the cell loop you are messing with the ghost values, which prevents the compress step.

You should operate on the locally owned entries only. What you might want to check out are the new `operation_after_loop` callbacks provided by the cell_loop functions (on the master branch): they tell you which ranges of local vector entries have been finished, so that you can then apply, e.g., the inverse mass matrix diagonal to them.
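
For illustration, a usage sketch (untested; the two lambdas receive half-open ranges [start, end) of locally owned vector entries, and the identifiers are those from your code above):

        data.cell_loop(&LaplaceOperator::local_apply_cell,
                       this,
                       dst,
                       src,
                       // operation_before_loop: takes over the role of the
                       // zero_dst_vector flag and zeroes each owned entry
                       // right before the loop first writes to it
                       [&](const unsigned int start_range,
                           const unsigned int end_range) {
                         for (unsigned int i = start_range; i < end_range; ++i)
                           dst.local_element(i) = 0.;
                       },
                       // operation_after_loop: these entries are final, so the
                       // diagonal of the inverse mass matrix can be applied
                       // while the data is still in cache
                       [&](const unsigned int start_range,
                           const unsigned int end_range) {
                         for (unsigned int i = start_range; i < end_range; ++i)
                           dst.local_element(i) *= inv_mass_matrix.local_element(i);
                       });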

Peter

Maxi Miller

Jan 18, 2020, 8:41:20 AM
to deal.II User Group
I.e., I should apply the elementwise multiplication with the inverse mass matrix vector as a post-processing function?

peterrum

Jan 18, 2020, 11:12:34 AM
to deal.II User Group

Maxi Miller

Jan 18, 2020, 12:05:49 PM
to deal.II User Group
I tried implementing it as
        data.cell_loop(&LaplaceOperator::local_apply_cell,
                       this,
                       dst,
                       src,
                       // true,
                       [&](const unsigned int start_range, const unsigned int end_range) {
                         for (unsigned int i = start_range; i < end_range; ++i)
                           dst.local_element(i) = 0;
                       },
                       [&](const unsigned int start_range, const unsigned int end_range) {
                         for (unsigned int i = start_range; i < end_range; ++i)
                           dst.local_element(i) = dst.local_element(i) * inv_mass_matrix.local_element(i);
                       });

but the result was not correct. Thus, I assume I have to do something differently?
Thanks!

Daniel Arndt

Jan 18, 2020, 12:42:04 PM
to dea...@googlegroups.com
Maxi,

As usual, it is much easier to help if you provide a complete minimal example and say how the result differs from what you expect.
Does it only scale certain vector entries? Are the results correct when running with one MPI process?

Best,
Daniel


Maxi Miller

Jan 19, 2020, 7:50:58 AM
to deal.II User Group
Hi,
I attached a working MWE. Switching between the approach suggested above and scale() is done by changing the variable use_scale_function. The output contains both the expected solution and the obtained solution.
My approach for replacing scale() with a local loop is
            data.cell_loop(&LaplaceOperator::local_apply_cell,
                           this,
                           dst,
                           src,
                           [&](const unsigned int start_range, const unsigned int end_range) {
                             for (unsigned int i = start_range; i < end_range; ++i)
                               dst.local_element(i) = 0;
                           },
                           [&](const unsigned int start_range, const unsigned int end_range) {
                             for (unsigned int i = start_range; i < end_range; ++i)
                               dst.local_element(i) = dst.local_element(i) * inv_mass_matrix.local_element(i);
                           });

I expect my result to follow the solution function exp(-2 * pi^2 * t) * sin(pi * x) * sin(pi * y). While that works for the scale() approach, the result of the integrated approach does not change from time step to time step. Currently I am only running with a single MPI process, after encountering issues when running with several processes.
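
(Sanity check, assuming the MWE solves the heat equation: the quoted function is indeed the separable solution on the unit square, since

\[
u(x,y,t) = e^{-2\pi^2 t}\,\sin(\pi x)\,\sin(\pi y)
\quad\Rightarrow\quad
\partial_t u = -2\pi^2\,u = -(\pi^2+\pi^2)\,u = \Delta u,
\]

so a correct scheme must visibly damp the solution in every time step.)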
Thanks!
main.cpp

Maxi Miller

Jan 20, 2020, 12:43:50 PM
to deal.II User Group
Found a bug in the library, reported here: https://github.com/dealii/dealii/issues/9405; that is why I got wrong results.