"Bruno Luong" <b.l...@fogale.findmycountry> wrote in message <ki8r3a$frd$1...@newscl01ah.mathworks.com>...
Bruno, thanks for the tip on [A; eye(size(x,1))] \ [b; c*x_0]! Is there more information on this?
I ran extensive tests showing that, for severely ill-conditioned matrices, my algorithm significantly outperforms both plain backslash (\) and the extended backslash
[A; eye(size(x,1))] \ [b; c*x_0];
However, for a sparse matrix, such as the Harwell-Boeing test matrix "west0479" with cond(A) = 1.4244e+12, the simple backslash \ gives by far the best results.
But even in the west0479 case my algorithm converges to the true solution; there is simply no reason to use it when plain backslash is much faster.
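For anyone following along without MATLAB, here is a minimal NumPy sketch of the two solves being compared on the hilb(25) test used below. Note that in least squares the stacked system [A; I] x ~ [b; c*x_0] minimizes ||A*x - b||^2 + ||x - c*x_0||^2; the variable names are mine, not from the original code.

```python
import numpy as np

n = 25
# Hilbert matrix built directly (severely ill-conditioned)
i = np.arange(n)
A = 1.0 / (i[:, None] + i[None, :] + 1)
x_true = np.ones(n)          # x = vector of ones, as in the test below
b = A @ x_true               # right-hand side b = A*x
c = 1e-9
x0 = np.zeros(n)             # x_0 = zero vector

# plain solve -- analogue of MATLAB's square backslash A \ b
x_plain = np.linalg.solve(A, b)

# "extended backslash": least-squares solve of [A; I] x ~ [b; c*x0],
# i.e. minimize ||A x - b||^2 + ||x - c*x0||^2
A_ext = np.vstack([A, np.eye(n)])
b_ext = np.concatenate([b, c * x0])
x_ext = np.linalg.lstsq(A_ext, b_ext, rcond=None)[0]

err_plain = np.linalg.norm(x_true - x_plain)
err_ext = np.linalg.norm(x_true - x_ext)
```

On my understanding, the stacked solve damps the tiny-singular-value components that make the plain solve blow up, which is why its error stays moderate while the plain solve's error is large.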
Here is a sample of the test results: x is the exact solution, x_0 the initial guess, and x1 the computed solution.
In this example x_0 is the zero vector, and I tried parameter values from
c = 1e-3 down to c = 1e-12.
Test case: hilb(25), cond(A) = 1.9684e+18, x = vector of ones, right-hand side b = A*x.
I also tried x_0 = x and got similar results.
Successive approximations for the extended backslash were looped as:
x_n = [A; eye(size(x,1))] \ [b; c*x_(n-1)]
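The loop above can be sketched in NumPy as follows (a hypothetical translation; the function name is mine). Each pass is a least-squares solve whose normal equations are (A'A + I) x = A'b + c*x_prev, so on a well-conditioned matrix the iteration is a simple contraction with a computable fixed point:

```python
import numpy as np

def extended_backslash_iter(A, b, x0, c, iters):
    """Sketch of the loop  x_n = [A; I] \\ [b; c*x_(n-1)].

    Each pass solves  minimize ||A x - b||^2 + ||x - c*x_prev||^2,
    whose normal equations are  (A'A + I) x = A'b + c*x_prev.
    """
    n = A.shape[1]
    A_ext = np.vstack([A, np.eye(n)])
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(iters):
        b_ext = np.concatenate([b, c * x])
        x = np.linalg.lstsq(A_ext, b_ext, rcond=None)[0]
    return x

# toy check on a well-conditioned A = 2*I: each pass gives
# 5*x_n = 2*b + c*x_(n-1), so the fixed point is x = 2*b / (5 - c)
A = 2.0 * np.eye(2)
b = np.array([2.0, 4.0])
x = extended_backslash_iter(A, b, np.zeros(2), c=0.5, iters=50)
```

The contraction factor on the toy problem is c/5, so 50 iterations land essentially on the fixed point; on ill-conditioned matrices the iteration can stall far from the exact solution, which matches the hilb(25) numbers below.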
For hilb(25):
Simple backslash (\) :
max(norm(x-x1)) = 110.7277
Extended backslash [A; eye(size(x,1))] \ [b; c*x_0] 0 iterations:
max(norm(x-x1)) = 2.6627
Extended backslash [A; eye(size(x,1))] \ [b; c*x_0] 500 iterations:
max(norm(x-x1)) = 2.6890
Extended backslash [A; eye(size(x,1))] \ [b; c*x_0] 5,000 iterations:
max(norm(x-x1)) = 2.6890 (does not improve)
My algorithm running 500 successive approximations:
max(norm(x-x1)) = 0.0811
My algorithm running 1,000 successive approximations:
max(norm(x-x1)) = 0.0573
My algorithm running 5,000 successive approximations:
max(norm(x-x1)) = 0.0488
In the above example we can see that the extended backslash
[A; eye(size(x,1))] \ [b; c*x_0]
does not converge to the exact solution; it stalls after a few iterations.
My algorithm, on the other hand, keeps converging. The same held for other hilb(n) matrices, and the higher the dimension n, the greater my algorithm's advantage.
I also compared performance on the discretization of a first-kind Fredholm integral equation with kernel K and right-hand side g given by
K(s,t) = exp(s*cos(t)) , g(s) = 2*sinh(s)/s ,
with integration intervals s in [0, pi/2], t in [0, pi], and n = 200.
The exact solution is x(t) = sin(t).
Reference: M. L. Baart, "The use of auto-correlation for pseudo-rank determination in noisy ill-conditioned linear least-squares problems", IMA J. Numer. Anal. 2 (1982), 241-247.
Discretized by the Galerkin method with orthonormal box functions;
one integration is exact, the other is done by Simpson's rule.
Per Christian Hansen, IMM, 09/16/92.
cond(A) = 1.6196e+18, x_0 = zero vector, and
c = 1e-09, x = true solution, x1 = computed solution.
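For reference, a crude NumPy discretization of this test problem can be sketched with the midpoint rule (a stand-in for Hansen's Galerkin discretization; node and weight choices here are mine):

```python
import numpy as np

n = 200
# midpoint nodes: t in [0, pi], s in [0, pi/2]
t = (np.arange(n) + 0.5) * np.pi / n
s = (np.arange(n) + 0.5) * (np.pi / 2) / n

# A[i, j] ~ K(s_i, t_j) * dt  with  K(s, t) = exp(s*cos(t))
A = (np.pi / n) * np.exp(np.outer(s, np.cos(t)))

x_true = np.sin(t)           # exact solution x(t) = sin(t)
b = A @ x_true               # discretized right-hand side

# sanity check: analytically (substitute u = cos(t)),
# integral over [0, pi] of exp(s*cos(t))*sin(t) dt = 2*sinh(s)/s = g(s)
g = 2.0 * np.sinh(s) / s
```

The midpoint rule reproduces g(s) to a few digits at n = 200, so the discretized system is a faithful, and still severely ill-conditioned, version of the integral equation.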
Simple backslash (\):
max(norm(x-x1)) = 199.0953
Extended backslash [A; eye(size(x,1))] \ [b; c*x_0] 0 iterations:
max(norm(x-x1)) = 0.7065
Extended backslash [A; eye(size(x,1))] \ [b; c*x_0] 5,000 iterations:
max(norm(x-x1)) = 0.7065 (not a typo)
My algorithm running 5,000 iterations:
max(norm(x-x1)) = 0.0369 (a 19-fold improvement)
Would these results be considered "novel" by the math community?
Thanks for your help,
Dan