>
> Ivan,
>
> This would be great. I'll send you a separate message. Examples in particular are in high demand.
>
> As for CG, you're correct, rz should be checked. I always found Matlab's version strange in that they check == instead of within some tolerance. I'll add a ticket to make sure the stopping criteria handles these cases. Also, the recomputation of r=b-Ax is a good idea. I'm guessing that all of our krylov methods could be a little more robust in this sense. I'm working on this now in fact as I'm ripping out a separate iterative solvers/preconditioners package.
>
Hi Luke,
If you are rewriting the iterative solvers package, I'd suggest adding
one more method: (preconditioned) simple iteration. This is *the* test
for preconditioner optimality. Scientists/students (like me) who want
to test the quality of their preconditioner could find it very useful,
since unlike Krylov methods, SI takes forever to converge even if just
one or two eigenvalues of the preconditioned matrix are close to zero.
What I mean by simple iteration is:
B*(x_{k+1} - x_k) = b - A*x_k,
where B^-1 is the preconditioner (in PyAMG notation, B^-1 = M). The
stopping criterion is that the B^-1-norm of r_k = b - A*x_k be
<= tol times the B^-1-norm of the right-hand side b. This of course
assumes B is symmetric and positive definite.
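A minimal sketch of what I mean, for dense NumPy arrays (the function name and signature are mine, not PyAMG's):

```python
import numpy as np

def simple_iteration(A, b, M=None, x0=None, tol=1e-8, maxiter=500):
    """Preconditioned simple (Richardson) iteration: a hypothetical sketch.

    Solves A x = b via B (x_{k+1} - x_k) = r_k, where M = B^{-1} is the
    preconditioner, applied here as a matrix (identity if None). Stops
    when the B^{-1}-norm of r_k drops below tol times the B^{-1}-norm
    of b, assuming B is symmetric positive definite.
    """
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    apply_M = (lambda r: r) if M is None else (lambda r: M @ r)

    bnorm = np.sqrt(np.dot(b, apply_M(b)))   # B^{-1}-norm of b
    r = b - A @ x
    z = apply_M(r)                            # z = B^{-1} r_k
    for k in range(maxiter):
        rnorm = np.sqrt(np.dot(r, z))         # B^{-1}-norm of r_k
        if rnorm <= tol * bnorm:
            return x, k
        x = x + z                             # x_{k+1} = x_k + B^{-1} r_k
        r = b - A @ x
        z = apply_M(r)
    return x, maxiter
```

Note this converges only when the spectrum of B^{-1}A lies in (0, 2), which is exactly why it exposes poor preconditioners that CG would hide.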
Actually, speaking of stopping criteria, another useful addition would
be an 'exact' parameter for the CG/simple iteration (any solver)
routines. When the user knows the exact solution of the system (and is
just testing the quality of a preconditioner), they could supply
exact != None, so the solver could use it to compute the *true* error
in whatever norm is appropriate (e.g. the A-norm), rather than some
approximation to it via a norm of the residual (e.g. the B^-1-norm).
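The helper this would need is tiny; a sketch (the name `a_norm_error` is hypothetical):

```python
import numpy as np

def a_norm_error(A, x, exact):
    """True error in the A-norm: ||x - exact||_A = sqrt(e^T A e).

    Hypothetical helper for a solver loop where the user supplies
    `exact` (the known solution), so preconditioner quality can be
    judged against the true error rather than a residual norm.
    """
    e = x - exact
    return np.sqrt(np.dot(e, A @ e))
```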
Here is my code for CG and SI:
http://pastie.org/1194773 . It is not
exactly clean, but it works well for my tests and illustrates my ideas.
The part where I handle rs < 0 has to be rethought, but you could just
ignore it. Also, pardon my asserts (so far I've been the only user of
my code, so I didn't care enough to raise proper exceptions).
And another useful thing to add to your solvers would be a verbosity
parameter. When the matrix is large (several million unknowns),
iterations can take seconds and solver routines minutes. It would be
nice if the user had some easy way to see that something *is actually
happening* (callbacks are not appropriate for such a simple use case;
besides, it's not clear what parameters to pass to the callback). For
example, in my code, if verb == True, on each iteration I print the
current residual, the relative residual, the true error norm (if it's
available), and the reduction factor.
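Something like this per-iteration progress line is all I mean (names and format are just my sketch):

```python
def report(k, rnorm, bnorm, prev_rnorm=None, err=None):
    """Hypothetical progress line printed each iteration when verb=True:
    residual, relative residual, reduction factor, and true error if known."""
    line = f"iter {k:4d}  resid {rnorm:.3e}  rel {rnorm / bnorm:.3e}"
    if prev_rnorm is not None:
        line += f"  factor {rnorm / prev_rnorm:.3f}"
    if err is not None:
        line += f"  err {err:.3e}"
    print(line)
```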
> For the condition number estimates, these were quick ones so there's certainly room for expansions. If we can get some more robust estimates and general routines in there, that would be great. This part is probably secondary though.
>
Yes, this is definitely secondary, but here is my Lanczos code:
http://pastie.org/1194784 . It is not numerically robust (I've
implemented just the straightforward Lanczos algorithm, not the
restarted one), but at least it handles the preconditioned case. Of
course it works only for symmetric, real matrices (for now my focus is
on the symmetric case).
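For reference, the straightforward (non-restarted, unpreconditioned) version is just a few lines; this sketch of mine returns the extreme Ritz values, whose ratio estimates cond(A):

```python
import numpy as np

def lanczos_extremes(A, m=30, rng=None):
    """Plain Lanczos sketch for a real symmetric matrix A (dense).

    Runs up to m steps and returns the smallest and largest Ritz
    values of the tridiagonal matrix T, so cond(A) ~ lmax / lmin.
    Not numerically robust: no reorthogonalization, no restarts.
    """
    n = A.shape[0]
    rng = np.random.default_rng(0) if rng is None else rng
    v = rng.standard_normal(n)
    v /= np.linalg.norm(v)
    alpha, beta = [], []
    w = A @ v
    a = np.dot(v, w)
    alpha.append(a)
    w = w - a * v
    v_prev = v
    for _ in range(1, m):
        b_j = np.linalg.norm(w)
        if b_j < 1e-12:          # Krylov space exhausted (breakdown)
            break
        v_new = w / b_j
        beta.append(b_j)
        w = A @ v_new - b_j * v_prev
        a = np.dot(v_new, w)
        alpha.append(a)
        w = w - a * v_new
        v_prev = v_new
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    ritz = np.linalg.eigvalsh(T)
    return ritz[0], ritz[-1]
```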
Btw, here's how I check the symmetry and positive definiteness of my
matrices:
http://pastie.org/1194791 . Similar routines could be useful
in PyAMG...
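The idea is simple; a dense sketch (my names, not PyAMG's): symmetry via the norm of A - A^T, and positive definiteness via an attempted Cholesky factorization, which fails exactly when the matrix is not SPD:

```python
import numpy as np

def is_symmetric(A, tol=1e-12):
    """Check symmetry via ||A - A^T|| relative to ||A|| (dense sketch)."""
    return np.linalg.norm(A - A.T) <= tol * max(1.0, np.linalg.norm(A))

def is_positive_definite(A):
    """Check positive definiteness by attempting a Cholesky factorization."""
    try:
        np.linalg.cholesky(A)
        return True
    except np.linalg.LinAlgError:
        return False
```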
> I think your suggestions here are very appropriate for this list. There is not a developers list specifically, but if there are additional details you could always email me separately and we'll go from there.
I'm glad I can help. Btw, I'd prefer to send patches for things like
what I've just pointed out, but the code in the Google SVN repository
looks old to me... What would be reasonable in the future: should I
generate patches against it, or do you have a repository with more
recent code?
>
> Luke
Ivan