Iterative Linear Solver Tolerance

Jonathan Russ

Sep 13, 2019, 7:06:10 PM
to deal.II User Group
Hello -

I have a simple question I was hoping to get advice on, related to setting the linear solver tolerance for an iterative solver. In the tutorial examples (and in most of the codes I have seen and written), the solver tolerance is some fraction of the initial right hand side 2-norm (e.g., tol = 1.0e-8 * rhs.l2_norm()). However, I'm questioning this strategy for a nonlinear problem that uses Newton's method to solve the nonlinear system. As the Newton iterations progress, the norm of the right hand side (i.e., the norm of the residual) decreases substantially. For a number of problems that I am trying to solve, this causes my tolerance to drop below machine precision at some point (if this happens, I take 1e-15 to be the tolerance).
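
For concreteness, here is the pattern I mean, as a minimal sketch in the style of the tutorial programs (system_matrix, system_rhs, solution, and preconditioner are placeholders, not my actual code):

#include <deal.II/lac/solver_cg.h>
#include <deal.II/lac/solver_control.h>
#include <deal.II/lac/vector.h>

// The stopping tolerance is taken relative to the initial
// right hand side norm, as in the tutorials:
SolverControl solver_control(1000, 1.0e-8 * system_rhs.l2_norm());
SolverCG<Vector<double>> solver(solver_control);
solver.solve(system_matrix, solution, system_rhs, preconditioner);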

I would like to have a tolerance-setting strategy that allows me to have a reasonable and consistent number of iterations for each linear solve required in a Newton increment. Due to this issue, the number of linear solver iterations in a single Newton increment can vary drastically (~10 to ~600), even though the properties of the matrix (like the condition number, which I have checked) are not changing very much (I am using the AMG preconditioner in Trilinos and also the one in PETSc).

Does anyone have a better strategy for setting the linear solver tolerance, other than trial and error (or simply a different strategy that I could try)? From what I remember, the iterative solvers in deal.II check the 2-norm of the residual against the tolerance, but please correct me if I am wrong.

Thank you in advance for any comments or advice,
Jonathan

Wolfgang Bangerth

Sep 14, 2019, 12:25:15 AM
to dea...@googlegroups.com

Jonathan,

> I have a simple question I was hoping to get advice on, related to
> setting the linear solver tolerance for an iterative solver. In the
> tutorial examples (and in most of the codes I have seen and written),
> the solver tolerance is some fraction of the initial right hand side
> 2-norm (e.g., tol = 1.0e-8 * rhs.l2_norm()). However, I'm questioning
> this strategy for a nonlinear problem that uses Newton's method to solve
> the nonlinear system. As the Newton iterations progress, the norm of the
> right hand side (i.e., the norm of the residual) decreases
> substantially. For a number of problems that I am trying to solve, this
> causes my tolerance to drop below machine precision at some point (if
> this happens, I take 1e-15 to be the tolerance).

This isn't quite the right way to see it. I believe you are thinking that
1e-16 is the machine precision, but that doesn't mean that 1e-16 is the
*smallest number a machine can represent*. It's not an *absolute* precision,
but a *relative* precision. A different way to see this is to say that in
double precision, the processor stores numbers with approximately 16
significant decimal digits.

So if the norm of your right hand side (the residual of your nonlinear
problem) is already at 1e-10, then there is nothing wrong with requiring that
the linear solver gets the residual down to 1e-18 = 1e-8 * 1e-10. All you're
requiring is that the individual entries of the (linear) residual b-Ax are, on
average, 10^8 times smaller than the individual entries of the right hand side
b. That's fine.
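
To see this concretely, here is a minimal standalone check (plain C++, nothing deal.II specific) showing that 1e-18 is nowhere near the smallest representable double:

#include <cstdio>
#include <limits>

int main()
{
  // Machine epsilon is a *relative* precision, about 2.2e-16 for double...
  std::printf("epsilon  = %g\n", std::numeric_limits<double>::epsilon());
  // ...while the smallest positive normalized double is about 2.2e-308:
  std::printf("smallest = %g\n", std::numeric_limits<double>::min());
  // So a tolerance of 1e-8 * 1e-10 = 1e-18 is perfectly representable;
  // what matters is only its size *relative* to the right hand side norm.
  std::printf("tol      = %g\n", 1e-8 * 1e-10);
  return 0;
}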


> I would like to have a tolerance-setting strategy that allows me to
> have a reasonable and consistent number of iterations for each linear
> solve required in a Newton increment.

No, that's not actually what you want. What you really want (or *should* want
;-) ) is a strategy in which you solve the first few Newton steps inexactly
(you're still far from the solution, so who cares whether you compute the
Newton update with great accuracy!) and gradually tighten the tolerance of the
linear solver as your Newton iteration gets you closer and closer to the
solution. The key word to search for in the literature is "Eisenstat-Walker
algorithm". The canonical reference for this method is this:

@article{eiwa96,
  author  = {Stanley C. Eisenstat and Homer F. Walker},
  title   = {Choosing the Forcing Terms in an Inexact {N}ewton Method},
  journal = {SIAM Journal on Scientific Computing},
  volume  = {17},
  number  = {1},
  pages   = {16--32},
  year    = {1996},
  doi     = {10.1137/0917003},
  url     = {http://dx.doi.org/10.1137/0917003}
}

The price you will pay is that the number of linear iterations goes up as you
get closer to the solution. But the way you should really see this is that you
are saving yourself iterations in the early Newton steps.
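
To illustrate, here is a minimal sketch of the "Choice 2" forcing term from that paper; the gamma and alpha values are the commonly suggested defaults, and none of this is code from deal.II itself:

#include <algorithm>
#include <cmath>

// Returns eta_k, the relative tolerance for the linear solve in Newton
// step k: the solver is asked to reduce the residual to eta_k * ||F_k||.
double forcing_term(const double residual_norm,     // ||F_k||
                    const double old_residual_norm, // ||F_{k-1}||
                    const double old_eta)           // eta_{k-1}
{
  const double gamma   = 0.9;
  const double alpha   = 2.0;
  const double eta_max = 0.9; // loosest tolerance ever allowed

  // Choice 2: eta_k = gamma * (||F_k|| / ||F_{k-1}||)^alpha
  double eta = gamma * std::pow(residual_norm / old_residual_norm, alpha);

  // Safeguard: if the previous tolerance was still loose, do not let
  // eta drop too abruptly (this avoids over-solving early steps).
  const double safeguard = gamma * std::pow(old_eta, alpha);
  if (safeguard > 0.1)
    eta = std::max(eta, safeguard);

  return std::min(eta, eta_max);
}

One would then build the SolverControl for Newton step k with tolerance forcing_term(...) * residual.l2_norm() instead of a fixed 1e-8 factor.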

Best
Wolfgang

--
------------------------------------------------------------------------
Wolfgang Bangerth email: bang...@colostate.edu
www: http://www.math.colostate.edu/~bangerth/

Jonathan Russ

Sep 14, 2019, 3:25:15 PM
to deal.II User Group
Professor Bangerth -

Thank you very much for your helpful reply! I was thinking that adding up a large vector of numbers at or below machine precision would accumulate a lot of numerical roundoff error, but I see what you mean. The Eisenstat-Walker algorithm looks like exactly what I want. Thank you again.

Jonathan