I was just running some test problems with scipy.sparse.linalg.cg and
things seem to work out fine.
To compare preconditioners, I'd like to get more detailed information
about the convergence history, but I haven't been able to figure out
how to, e.g., get the number of performed iterations or the (relative)
residual for each of those steps. Compare this with
http://www.mathworks.com/help/techdoc/ref/pcg.html.
Any hints?
Cheers,
Nico
> [...] Any hints?
Use the 'callback' argument.
>>> help(scipy.sparse.linalg.cg)
...
callback : function
    User-supplied function to call after each iteration. It is called
    as callback(xk), where xk is the current solution vector.
...
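For example, a minimal sketch of collecting the iterates (the tridiagonal
test system here is made up purely for illustration):

import numpy as np
import scipy.sparse
import scipy.sparse.linalg

# Small symmetric positive definite test system (made up for illustration).
n = 100
diags = np.array([-np.ones(n), 2.0 * np.ones(n), -np.ones(n)])
A = scipy.sparse.spdiags(diags, [-1, 0, 1], n, n).tocsr()
b = np.ones(n)

iterates = []
def callback(xk):
    # cg calls this once per iteration with the current solution vector
    iterates.append(xk.copy())

x, info = scipy.sparse.linalg.cg(A, b, callback=callback)
print('performed %d iterations' % len(iterates))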
That works alright, I guess.
What I do right now is create a global list that's filled up as
the callback function is called, after which I go ahead and plot it.
Using a global variable here seems somewhat ugly to me -- might there
be a more elegant solution?
Cheers,
Nico
I.e., you can make a class like
In [1]: class Foo:
   ...:     def __call__(self, args):
   ...:         print 'Bar'
   ...:

In [2]: f = Foo()

In [4]: f(1)
Bar
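Applied to the CG case, such a callable can carry the history as instance
state instead of a global (a minimal sketch; the A and b from the earlier
example are assumed to be defined):

import numpy as np
import scipy.sparse.linalg

class ResidualHistory:
    """Callback that records the residual norm at each iteration."""
    def __init__(self, A, b):
        self.A = A
        self.b = b
        self.norms = []
    def __call__(self, xk):
        # note: one extra matrix-vector product per call
        self.norms.append(np.linalg.norm(self.b - self.A.dot(xk)))

cb = ResidualHistory(A, b)
x, info = scipy.sparse.linalg.cg(A, b, callback=cb)
print('%d iterations' % len(cb.norms))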
regards,
Sebastian
Use variables from the outer scope:
import scipy.sparse.linalg

def doit(M, b):
    residuals = []
    def callback(xk):
        # called by cg once per iteration with the current iterate xk
        residuals.append(M*xk - b)
    sol, info = scipy.sparse.linalg.cg(M, b, callback=callback)
    return residuals
http://docs.python.org/tutorial/classes.html#python-scopes-and-namespaces
http://www.saltycrane.com/blog/2008/01/python-variable-scope-notes/
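Usage would then look something like this (a sketch, assuming the A and b
from the earlier example and numpy imported as np):

residuals = doit(A, b)
resnorms = [np.linalg.norm(r) for r in residuals]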
> def callback(xk):
>     residuals.append(M*xk - b)
This approach works nicely from a code-beautification point of view,
thanks again for the hint.
There's a major drawback, though, which may not be easily fixed: To
calculate the residual, one has to compute an extra matrix-vector
multiplication *per step*, which effectively increases the runtime of
the CG algorithm by a factor of two (!). Ideally -- and classically --
the residual is retrieved directly from the CG method itself.
Now obviously, the callback method does not provide for residual
vector input, but I'm not quite sure what the reason for that is. In
the numerical computation world, this would be considered a major
deficiency.
Cheers,
Nico
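One way to see the doubling is to count matrix-vector products by wrapping
the matrix in a scipy.sparse.linalg.LinearOperator (a minimal sketch with a
made-up test system; the counting wrapper is purely illustrative):

import numpy as np
import scipy.sparse
import scipy.sparse.linalg

n = 100
diags = np.array([-np.ones(n), 2.0 * np.ones(n), -np.ones(n)])
A = scipy.sparse.spdiags(diags, [-1, 0, 1], n, n).tocsr()
b = np.ones(n)

matvecs = [0]
def counted_matvec(v):
    matvecs[0] += 1
    return A.dot(v)

A_op = scipy.sparse.linalg.LinearOperator(A.shape, matvec=counted_matvec,
                                          dtype=A.dtype)

resnorms = []
def callback(xk):
    # recomputing the residual costs one extra matvec per iteration
    resnorms.append(np.linalg.norm(b - A_op.matvec(xk)))

x, info = scipy.sparse.linalg.cg(A_op, b, callback=callback)
# roughly two matvecs per iteration instead of one
print('%d matvecs for %d iterations' % (matvecs[0], len(resnorms)))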
That can quite likely be fixed easily by making a small change to SciPy.
An interesting point here is that the `gmres` routine, on the other hand,
passes only the residual norm to the callback, so there would also be
some other things to harmonize here.
Is there some other information that would be useful in the callback? I
can think of: residual vector, residual 2-norm, and current iterate --
what else?
If you have time, please file a bug report,
http://projects.scipy.org/scipy/newticket
so that we won't forget this issue.
--
Pauli Virtanen
Quick question: why is cg marked as deprecated?

Class Docstring:
    scipy.linalg.cg is DEPRECATED!! -- use scipy.sparse.linalg.cg instead
    Use Conjugate Gradient iteration to solve A x = b
While I perfectly understand that in production work, the typical use
cases require sparse matrices, there's still plenty of value for a
dense implementation of CG in scipy, I would think (especially given
how sparse matrices are a little harder to use for a beginner than
plain dense numpy arrays).
Am I missing something obvious?
Thanks,
f
There is no dense implementation. The one that was under scipy.linalg is
the same as what is in scipy.sparse.linalg -- I guess the misleading
location was a reason for moving it.
Indeed, :)
__all__ = ['bicg', 'bicgstab', 'cg', 'cgs', 'gmres', 'qmr']

# Deprecated on January 26, 2008
from scipy.sparse.linalg import isolve
from numpy import deprecate

for name in __all__:
    oldfn = getattr(isolve, name)
    oldname = 'scipy.linalg.' + name
    newname = 'scipy.sparse.linalg.' + name
    newfn = deprecate(oldfn, oldname=oldname, newname=newname)
    exec(name + ' = newfn')
Ah, thanks :) In that case it might be worth changing the message and
warning from:
/usr/lib/python2.6/dist-packages/numpy/lib/utils.py:108: DeprecationWarning:
scipy.linalg.cg is deprecated, use scipy.sparse.linalg.cg
  warnings.warn(str1, DeprecationWarning)
to simply indicating that the *name* is deprecated, but the actual
function lives in the sparse library. Since the function already
accepts dense matrices just fine, indicating this should be enough.
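For what it's worth, a minimal sketch confirming that the sparse solver
accepts a plain dense array:

import numpy as np
import scipy.sparse.linalg

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])    # dense symmetric positive definite matrix
b = np.array([1.0, 2.0])
x, info = scipy.sparse.linalg.cg(A, b)
print(x)    # info == 0 signals convergence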
I'm happy to make the changes if you agree.
Cheers,
f
Those functions are removed in 0.9, so I guess we're OK without changes.
Pauli
Ah, even better, nothing to do :)
Cheers,
f
A related idea:
Is it possible, or is there a trick, to tell an optimizer to stop from
a callback function?
I have never used callback functions, so I don't know what's possible.
I have two use cases in mind:
Stata, for example, switches optimizers during the estimation of a
statistical model. It's possible to limit the number of iterations
and restart with a different optimizer, but it might be useful to
switch depending on the state of the optimization.
fmin_bfgs doesn't look very robust; quite often it goes off into
neverland, and I can wait quite some time until I can restart with a
different optimizer. It would be useful in this case if we could
inspect what's going on, kill it automatically, and switch to
something more robust.
Josef
Raising an exception will at least bail out from the optimizer.
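A minimal sketch of that pattern (the exception class, the stopping test,
and the switch to fmin_powell are all illustrative assumptions):

import numpy as np
import scipy.optimize

class StopOptimization(Exception):
    pass

def f(x):
    return np.sum((x - 3.0)**2)

last_xk = [None]
def callback(xk):
    last_xk[0] = xk
    # a real divergence test would go here
    if np.linalg.norm(xk) > 1e3:
        raise StopOptimization

x0 = np.zeros(4)
try:
    x = scipy.optimize.fmin_bfgs(f, x0, callback=callback)
except StopOptimization:
    # bail out of fmin_bfgs and restart a more robust optimizer
    # from the last recorded iterate
    x = scipy.optimize.fmin_powell(f, last_xk[0])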