Error in SolverFGMRES-related destructor


SebG

Nov 22, 2017, 11:42:46 AM
to deal.II User Group
Dear deal.II users and developers,

I modified and extended the step-57 tutorial to switch between Newton and Picard iterations. If I increase a model parameter (the Reynolds number), I get the following error.

--------------------------------------------------------
An error occurred in line <101> of file </home/sg/Downloads/dealii-8.5.1/include/deal.II/lac/vector_memory.templates.h> in function
    dealii::GrowingVectorMemory<VectorType>::~GrowingVectorMemory() [with VectorType = dealii::BlockVector<double>]
The violated condition was:
    current_alloc == 0
Additional information:
    Destroying memory handler while 1 objects are still allocated.

Stacktrace:
-----------
#0  /usr/local/deal.ii-8.5.1/lib/libdeal_II.g.so.8.5.1: dealii::GrowingVectorMemory<dealii::BlockVector<double> >::~GrowingVectorMemory()
#1  ./navier-stokes: dealii::SolverFGMRES<dealii::BlockVector<double> >::~SolverFGMRES()
#2  ./navier-stokes: navierstokes::NavierStokesProblem<2>::solve(bool)
#3  ./navier-stokes: navierstokes::NavierStokesProblem<2>::newton_iteration(unsigned int const&)
#4  ./navier-stokes: navierstokes::NavierStokesProblem<2>::nonlinear_solve()
#5  ./navier-stokes: navierstokes::NavierStokesProblem<2>::run()
#6  ./navier-stokes: main
--------------------------------------------------------

My solve method looks as follows:

template<int dim>
void NavierStokesProblem<dim>::solve(const bool use_homogeneous_constraints)
{
    TimerOutput::Scope t(computing_timer, "solve");

    const ConstraintMatrix &constraints_used =
            use_homogeneous_constraints ? homogeneous_constraints : constraints;

    BlockVector<double> &solution_reference =
            use_homogeneous_constraints ? newton_update : solution;

    const double tol = parameters.inexact_lin_solve ?
            parameters.tau * std::pow(system_rhs.l2_norm(), 2) :
            1e-6 * system_rhs.l2_norm();

    unsigned int n_iterations_A;
    unsigned int n_iterations_S;

    SolverControl solver_control(100, tol);
    SolverFGMRES<BlockVector<double>> solver(solver_control);

    computing_timer.enter_subsection ("solve - set-up preconditioner A");
    SparseILU<double> precondition_A;
    {
        SparseILU<double>::AdditionalData data;

        precondition_A.initialize(system_matrix.block(0,0),
                data);
    }
    computing_timer.leave_subsection();

    computing_timer.enter_subsection ("solve - set-up preconditioner S");
    PreconditionSSOR<SparseMatrix<double>> precondition_S;
    {
        PreconditionSSOR<SparseMatrix<double>>::AdditionalData data;
        data.relaxation = 1.2;

        precondition_S.initialize(pressure_mass_matrix, data);
    }
    computing_timer.leave_subsection();

    const bool solve_A = true;
    const LinearSolvers::BlockSchurPreconditioner<SparseILU<double>, PreconditionSSOR<SparseMatrix<double>>>
    block_preconditioner(
            system_matrix,
            pressure_mass_matrix,
            precondition_A,
            precondition_S,
            solve_A);

    constraints_used.set_zero(solution_reference);

    solver.solve(system_matrix, solution_reference, system_rhs, block_preconditioner);
    n_iterations_A = block_preconditioner.n_iter_A;
    n_iterations_S = block_preconditioner.n_iter_S;

    std::cout << std::endl
            << "   Number of FGMRES iterations: "
            << solver_control.last_step() << std::endl
            << "   Total number of iterations used for approximation of A inverse: "
            << n_iterations_A << std::endl
            << "   Total number of iterations used for approximation of S inverse: "
            << n_iterations_S << std::endl
            << std::endl;

    constraints_used.distribute(solution_reference);

}

Does anyone have an idea why the destruction of the SolverFGMRES object throws an error? Can this be related to the fact that I am using a reference to a BlockVector and not the BlockVector directly?

Apart from using a different (and maybe not optimal) preconditioner, the solve method should in principle do the same thing as in step-57.

Best wishes,
Sebastian

Timo Heister

Nov 22, 2017, 6:11:19 PM
to dea...@googlegroups.com
Sebastian,

Could it be that GMRES is not converging within the given number of
iterations? You could try putting a try/catch block around the
::solve() call, or you can output the residuals in each iteration
(set the log_history parameter of SolverControl and call
deallog.depth_console(10); at program start).
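
Something along these lines (just a sketch, using the names from your
solve() method above; log_history is the third argument of the
SolverControl constructor):

// at program start, e.g. at the top of main():
dealii::deallog.depth_console(10);

// in solve(): log the residual history and catch a possible
// convergence failure instead of letting the exception escape
SolverControl solver_control(100, tol,
                             /*log_history=*/true,
                             /*log_result=*/true);
SolverFGMRES<BlockVector<double>> solver(solver_control);

try
{
    solver.solve(system_matrix, solution_reference,
                 system_rhs, block_preconditioner);
}
catch (const SolverControl::NoConvergence &exc)
{
    std::cerr << "FGMRES failed to converge in " << exc.last_step
              << " steps; last residual: " << exc.last_residual
              << std::endl;
    throw;
}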



--
Timo Heister
http://www.math.clemson.edu/~heister/

Uwe Köcher

Nov 23, 2017, 5:45:47 AM
to deal.II User Group
Dear Sebastian,

please follow the instructions from Timo and additionally include an object of
dealii::SolverFGMRES<>::AdditionalData and set the maximum number of allowed
temporary vectors to something smaller than the default (I think the default is
30 vectors).

I think the allocation problem comes from your system (i.e. you don't have
enough memory to allocate more vectors; in particular, FGMRES needs to store 2
full vectors with each iteration before restarting).
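
For example (just a sketch; if I remember correctly the parameter is called
max_basis_size and defaults to 30):

SolverFGMRES<BlockVector<double>>::AdditionalData additional_data;
additional_data.max_basis_size = 10;  // keep fewer temporary vectors than the default 30

SolverFGMRES<BlockVector<double>> solver(solver_control, additional_data);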

Kind regards
  Uwe

Wolfgang Bangerth

Nov 24, 2017, 12:27:17 AM
to dea...@googlegroups.com
On 11/22/2017 09:42 AM, SebG wrote:
>
> Does anyone have an idea why the destruction of the SolverFGMRES object throws
> an error? Can this be related to the fact that I am using a reference to a
> BlockVector and not the BlockVector directly?

Hard to tell, but in contrast to my esteemed colleagues, I think this may be a
bug in our code. Can you post a complete code that we can run to reproduce
this? Or if you don't want to post it publicly, can you send it to me by
personal email?

Best
W.

--
------------------------------------------------------------------------
Wolfgang Bangerth email: bang...@colostate.edu
www: http://www.math.colostate.edu/~bangerth/

SebG

Nov 24, 2017, 6:47:42 AM
to deal.II User Group
Dear all,

Of course I am willing to share my code; you can find it in the attached file. The parameter file is configured such that the GrowingVectorMemory error occurs. I think the error is due to a convergence failure of SolverGMRES inside the method BlockSchurPreconditioner::vmult, where the convection-diffusion system (the (0,0) block) is solved with GMRES and ILU preconditioning.
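
Schematically, the inner solve in vmult does something like this (only a sketch
with simplified names, not the exact code from the attachment):

// inside BlockSchurPreconditioner::vmult(dst, src):
// approximate the inverse of the (0,0) convection-diffusion block
// by an ILU-preconditioned GMRES solve
SolverControl inner_control(1000, 1e-6 * src.block(0).l2_norm());
SolverGMRES<Vector<double>> inner_gmres(inner_control);

inner_gmres.solve(system_matrix.block(0,0),  // convection-diffusion block
                  dst.block(0),              // velocity part of the result
                  src.block(0),              // velocity part of the input
                  precondition_A);           // the SparseILU set up in solve()
n_iter_A += inner_control.last_step();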

I investigated the behaviour of the preconditioner further. If the Reynolds number is decreased to, say, 100, the iterative solver for the convection-diffusion system converges. I am not an expert, but does ILU preconditioning not work for larger Reynolds numbers? I thought ILU is robust (though expensive) and should be a good first choice.

As a second approach, I used a direct solver (SparseDirectUMFPACK) for the convection-diffusion matrix, as in step-57. In this case, the issue with GMRES and ILU does not occur, and the FGMRES method converges for moderate Reynolds numbers of Re=200. However, for Re=400 convergence is not achieved anymore. I guess this is due to the bad approximation of the Schur complement by the pressure mass matrix. In step-57, using the pressure mass matrix somehow also works when solving the system for higher Reynolds numbers. Is this due to the Augmented Lagrange approach?

Best wishes,
Sebastian
Attachments: default_parameters.prm, navier-stokes.cc

Timo Heister

Nov 24, 2017, 8:40:32 AM
to dea...@googlegroups.com
> I am not an expert, but does ILU preconditioning not work for larger
> Reynolds numbers? I thought ILU is robust (though expensive) and should
> be a good first choice.

ILU(0) is not a robust preconditioner. It typically behaves well for
mass-matrix-like systems, but not for anything else.

> anymore. I guess this is due to the bad approximation of the Schur
> complement by the pressure mass matrix. In step-57, using the pressure mass
> matrix somehow also works when solving the system for higher Reynolds
> numbers. Is this due to the Augmented Lagrange approach?

Of course. That is the whole point of AL. Without it, the mass matrix
is only a good approximation if the viscosity term is dominant (small
Re).
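
(For reference, if I remember the step-57 write-up correctly: with the
grad-div/AL term with parameter gamma, the inverse of the Schur complement is
approximated, up to sign conventions, by

    S^{-1}  ≈  (nu + gamma) M_p^{-1}

so for large gamma the quality of the pressure mass matrix approximation no
longer depends on the Reynolds number.)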

Wolfgang Bangerth

Nov 27, 2017, 11:41:37 AM
to dea...@googlegroups.com

Seb,

> of course I am willing to share my code. You can find it in the file
> attached. The parameter file is configured such that the
> GrowingVectorMemory error occurs.

Thanks. I've tried this with the current development version of deal.II
and the error disappears. I'm pretty sure I know why this is so -- the
solver does not converge, so it throws an exception, and that leads to a
vector that had been allocated not being freed (because the exception
bypasses the memory_pool.free(...) call). In the next step, the
memory pool object is destroyed and complains that a vector that had
been allocated was never freed, and that's why you get that error
before anything else. (Anything else = the convergence error.)

I fixed this a while back in the development version, though. Probably here:
https://github.com/dealii/dealii/pull/4953

So with the current version, I only get to see the error about
non-convergence:

An error occurred in line <1052> of file
</home/fac/f/bangerth/p/deal.II/1/install/include/deal.II/lac/solver_gmres.h>
in function
void dealii::SolverGMRES<VectorType>::solve(const MatrixType&,
VectorType&, const VectorType&, const PreconditionerType&) [with
MatrixType = dealii::SparseMatrix<double>; PreconditionerType =
dealii::SparseILU<double>; VectorType = dealii::Vector<double>]
The violated condition was:
iteration_state == SolverControl::success
Additional information:
Iterative method reported convergence failure in step 1000. The residual
in the last step was 0.0018243.

[...]


> I think the error is due to a
> convergence failure of SolverGMRES inside the method
> BlockSchurPreconditioner::vmult. In this method the convection-diffusion
> system ((0,0)-block) is solved with GMRES and ILU-preconditioning.
>
> I investigated the behaviour of the preconditioner further. If the
> Reynolds number is decreased to, say, 100, the iterative solver for the
> convection-diffusion system converges. I am not an expert, but does
> ILU preconditioning not work for larger Reynolds numbers? I thought ILU
> is robust (though expensive) and should be a good first choice.

No -- at least not with the default settings. For high-Re cases, you
need to fill in more off-diagonal entries in the ILU for it to be good.
There is a recent discussion of exactly this issue on the mailing list.
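
Something like the following (just a sketch; if I remember the field names
correctly, extra_off_diagonals and strengthen_diagonal are the relevant
members of SparseILU<double>::AdditionalData):

SparseILU<double>::AdditionalData data;
data.extra_off_diagonals = 2;    // allow fill-in beyond the matrix's own sparsity pattern
data.strengthen_diagonal = 0.1;  // optionally strengthen the diagonal as well

precondition_A.initialize(system_matrix.block(0,0), data);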


> As a second approach, I used a direct solver (SparseDirectUMFPACK) for
> the convection-diffusion matrix like in step-57. In this case, the issue
> with GMRES and ILU does not occur. Then, the FGMRES method converges for
> moderate Reynolds numbers of Re=200. However, for Re=400 convergence is
> not achieved anymore. I guess this is due to the bad approximation of
> the Schur complement by the pressure mass matrix. In step-57, using the
> pressure mass matrix somehow also works when solving the system for
> higher Reynolds numbers. Is this due to the Augmented Lagrange approach?

Probably.

SebG

Dec 5, 2017, 1:01:07 PM
to deal.II User Group
Dear Timo, dear Wolfgang,

thank you for your replies. Lesson learned: ILU with the original sparsity pattern is not a good idea for the convection-diffusion block. I added the grad-div term so that I can just use the pressure mass matrix for the Schur complement.

Furthermore, I modified my code to switch the solver for the convection-diffusion block between SparseDirectUMFPACK and a GMRES solver with AMG preconditioning. Right now I use the class PreconditionAMG from the TrilinosWrappers namespace with a W-cycle and a Gauss-Seidel smoother (10 sweeps).
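
Roughly, the setup looks like this (a simplified sketch, not verbatim from the
attached code):

TrilinosWrappers::PreconditionAMG                  precondition_A;
TrilinosWrappers::PreconditionAMG::AdditionalData  amg_data;
amg_data.elliptic        = false;           // the (0,0) block is non-symmetric
amg_data.w_cycle         = true;            // W-cycle instead of the default V-cycle
amg_data.smoother_type   = "Gauss-Seidel";  // ML smoother name
amg_data.smoother_sweeps = 10;

precondition_A.initialize(system_matrix.block(0,0), amg_data);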

It works quite well for very low Reynolds number flows. However, if the Reynolds number is increased, the GMRES solver requires more and more iterations, see the files attached. My guess is that the reason is the lack of stabilization such as streamline diffusion, which would prevent oscillations on coarse grids (see e.g. the book of Elman, Silvester and Wathen, chapters 6 and 7).

Can you confirm my guess from your experience, or do I just need to tune the settings of the AMG preconditioner?

Best wishes,
Sebastian
Attachments: solverlog_Re001_AMG.txt, solverlog_Re100_AMG.txt

Wolfgang Bangerth

Dec 5, 2017, 3:54:37 PM
to dea...@googlegroups.com
On 12/05/2017 11:01 AM, SebG wrote:
>
> It works quite well for very low Reynolds number flows. However, if the
> Reynolds number is increased, the GMRES solver requires more and more
> iterations, see the files attached. My guess is that the reason is the
> lack of stabilization such as streamline diffusion, which would prevent
> oscillations on coarse grids (see e.g. the book of Elman, Silvester and
> Wathen, chapters 6 and 7).

The solver doesn't particularly care about whether the solution is
oscillatory or not (that's a question for the *discretization*, not the
solver). But if you increase the Reynolds number, the top-left block of
the matrix deviates more and more from the SPD state that you have for
Re=0. Now, AMG implementations are particularly good if you have an SPD
matrix, but they deteriorate for non-symmetric, advection-dominated
problems such as the ones you get for high Re. This would explain why
you need more and more iterations -- your preconditioner just isn't
very good any more.

You are discovering all of the problems that make solving the N-S
equations in monolithic fashion so difficult. That's also one of the
reasons why projection schemes are so popular: they only need to solve
SPD problems implicitly.