Pseudoinverse in MATLAB

Elias Kyriakides

Mar 27, 2001, 10:46:51 PM
Dear friends,

Does anybody know how to get the actual code for the pseudoinverse in
MATLAB? I am really interested to see the whole thing plus the Singular
Value Decomposition (SVD) and find out how it works.

I would be really grateful if somebody has it or knows how I can get it.

Thanks in advance,
Elias

dave y.

Mar 27, 2001, 11:59:47 PM
Just enter 'type pinv'.

dave y.

Mustafa K. Atlihan

Mar 28, 2001, 8:52:55 AM
Earlier versions of Matlab used LINPACK (or LAPACK) as their source base for
matrix manipulations. LINPACK is still considered to be a fairly decent
implementation, but I am not sure how different Matlab's internal code is
from its LINPACK counterparts. The source code for all LINPACK routines
should be available at http://www.netlib.org/linpack or Netlib's official
ftp site.

Regards
MKA

"Elias Kyriakides" <elias_ky...@hotmail.com> wrote in message
news:3AC15EAA...@hotmail.com...

Pete Boettcher

Mar 28, 2001, 9:24:04 AM
Elias Kyriakides <elias_ky...@hotmail.com> writes:

> Does anybody know how to get the actual code for the pseudoinverse in
> MATLAB? I am really interested to see the whole thing plus the Singular
> Value Decomposition (SVD) and find out how it works.
>
> I would be really grateful if somebody has it or knows how I can get it.

type pinv

Or open the file $MATLABROOT/toolbox/matlab/matfun/pinv.m
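If you don't know where your installation lives, the standard lookup command prints the full path:

which pinv    % prints the full path to pinv.m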


-PB

Elias Kyriakides

Mar 28, 2001, 12:22:16 PM
Yes, but there is still the question about the SVD. This is an internal
function, and I am more interested in that.

Cleve Moler

Mar 28, 2001, 12:32:07 PM
In article <3AC21DC7...@hotmail.com>,

Hi --

There is really only one SVD algorithm -- the original due to Golub
and Kahan, with improvements by Wilkinson and Reinsch. It is described
in the books by Golub and Van Loan and by Stewart. I believe that
Numerical Recipes has some code. Before version 6, MATLAB used a C
implementation of the LINPACK code. Now we use LAPACK's Fortran
subroutine DGESVD. But all of these are basically the same idea --
orthogonal reduction to bidiagonal form, and then QR iteration to
reduce the bidiagonal to diagonal.
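
For readers who just want the flavor, here is a minimal sketch in current MATLAB syntax of building a pseudoinverse on top of svd. This is not MATLAB's actual source; the default tolerance below only mirrors what the pinv documentation describes:

function X = pinv_sketch(A)
% Illustrative pseudoinverse via SVD -- a sketch, not pinv.m itself.
[U, S, V] = svd(A, 'econ');            % economy-size SVD: A = U*S*V'
s = diag(S);
tol = max(size(A)) * eps(max(s));      % drop singular values below a tolerance
r = sum(s > tol);                      % numerical rank
X = V(:, 1:r) * diag(1 ./ s(1:r)) * U(:, 1:r)';  % invert only the kept part
end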

-- Cleve Moler

Daniel Vasilaky

Mar 13, 2013, 9:55:05 PM
Elias Kyriakides <elias_ky...@hotmail.com> wrote in message <3AC15EAA...@hotmail.com>...
The SVD approach of pinv() is unstable.
I developed an algorithm that solves the problem. It's based on Tikhonov regularization.
I will make the derivation available as a PDF file; it's part of my PhD thesis. But if you need it now, send me an email: dvasilakyATgmail.com
For now, I will give two examples and just the code for you to try out on your problem.


%Example 1:
format long;
A = [1 0; 0 0; 0 0];
b = [1; 1; 1];
pA = pinv(A);
x = pA*b                    % x = [1; 0]
dA = [1 0; 0 10e-6; 0 0];   % note: 10e-6 = 1e-5
pdA = pinv(dA);
x1 = pdA*b                  % x1 = [1; 1e5]: a 1e-5 change in A moves x by 1e5

%Here is the code. To try it out, set your matrix to D in my code.
%*************start code*********************
D = A;
cst = .000000001;   %regularization parameter, similar to tol
[r,c] = size(D);    %get the number of rows and columns of D
Y = eye(r);
Q(:,1) = D(:,1);
I = eye(r);         %identity matrix of dimension r-by-r
sum = zeros([r,r]); %initialize sum to an r-by-r matrix of zeros (note: shadows the built-in sum)
for k = 1:c-1,
    sum = sum + Q(:,k)*Q(:,k)'/(cst + D(:,k)'*Q(:,k));
    Q(:,k+1) = (I - sum)*D(:,k+1);
end
for j = 1:r,
    S = zeros([r,1]);
    M(c,j) = (Q(:,c)'*Y(:,j))/(cst + D(:,c)'*Q(:,c));
    for i = 1:c-1,
        S = S + M(c-i+1,j)*D(:,c-i+1);
        dm = 1/(cst + D(:,c-i)'*Q(:,c-i));
        M(c-i,j) = (dm*Q(:,c-i)')*(Y(:,j) - S);
    end
end
%*************end code*******

x3 = M*b
%Example 2: ill-conditioned Hilbert matrix
%OK, A was rank-deficient; how about an ill-conditioned example where MATLAB's pinv() gives
%nonsense but my algorithm works?
%Use pinv() to solve Dx = b and Dx = b1, where b1 = b + db;
%then use my algorithm to solve Dx = b1 and look at the results.
%Please try this:

format long;
D = hilb(12);
x = ones(12,1); % create a solution of 1s
b = D*x;        % b is the RHS and we know the solution x
db=[ %perturb b by db
.00000000000001;
.00000000000001;
.00000000000002;
-.0000000000001;
.00000000000001;
.00000000000001;
.00000000000002;
-.0000000000001;
.00000000000001;
.00000000000001;
.00000000000002;
-.0000000000001;
];
b1=b+db;
clear Q M sum;      %needed if Example 1 was run in the same session (different dimensions)
cst = .000000001;   %regularization parameter, similar to tol
[r,c] = size(D);    %get the number of rows and columns of D
Y = eye(r);
Q(:,1) = D(:,1);
I = eye(r);         %identity matrix of dimension r-by-r
sum = zeros([r,r]); %initialize sum to an r-by-r matrix of zeros
for k = 1:c-1,
    sum = sum + Q(:,k)*Q(:,k)'/(cst + D(:,k)'*Q(:,k));
    Q(:,k+1) = (I - sum)*D(:,k+1);
end
for j = 1:r,
    S = zeros([r,1]);
    M(c,j) = (Q(:,c)'*Y(:,j))/(cst + D(:,c)'*Q(:,c));
    for i = 1:c-1,
        S = S + M(c-i+1,j)*D(:,c-i+1);
        dm = 1/(cst + D(:,c-i)'*Q(:,c-i));
        M(c-i,j) = (dm*Q(:,c-i)')*(Y(:,j) - S);
    end
end
x4 = M*b1

Bjorn Gustavsson

Mar 14, 2013, 5:21:17 AM
"Daniel Vasilaky" wrote in message <khralp$bgq$1...@newscl01ah.mathworks.com>...
> Elias Kyriakides <elias_ky...@hotmail.com> wrote in message <3AC15EAA...@hotmail.com>...
> > Dear friends,
> >
> > Does anybody know how to get the actual code for the pseudoinverse in
> > MATLAB? I am really interested to see the whole thing plus the Singular
> > Value Decomposition (SVD) and find out how it works.
> >
> > I would be really grateful if somebody has it or knows how I can get it.
> >
> > Thanks in advance,
> > Elias
> >
> The SVD approach of pinv() is unstable.
> I developed an algorithm that solves the problem. It's based on Tikhonov regularization.
>
If one goes down that route, I think the appropriate way is to first call SVD explicitly; from there one can proceed with Tikhonov, damped SVD, truncated SVD, or any other regularization, and one can use generalized SVD if some pseudo-norm is preferable for stabilizing the inverse. I don't see how one can present a single regularization as the _right_ one for all cases: when one gets to such problems there is always some implicit (or explicit) knowledge of what type of regularization is appropriate that has to be taken into account.
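
For concreteness, the truncated-SVD variant of this explicit-SVD route can be written in a few lines (a sketch; A, b, and a relative truncation tolerance tol are assumed given):

[U, S, V] = svd(A, 'econ');
s = diag(S);
k = sum(s > tol * s(1));                            % keep the k largest singular values
x_tsvd = V(:, 1:k) * ((U(:, 1:k)' * b) ./ s(1:k));  % TSVD solution of A*x = b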

Daniel Vasilaky

Mar 14, 2013, 3:00:06 PM
"Bjorn Gustavsson" <bj...@irf.se> wrote in message <khs4qd$li1$1...@newscl01ah.mathworks.com>...
No one disagrees that there is no silver bullet when it comes to ill-conditioned systems.
However, truncated SVD is a blunt instrument for handling ill-conditioned systems, not to mention its computational cost. One can only truncate at singular values. Truncated SVD (TSVD), to my knowledge, cannot accommodate prior knowledge. I did not post the extended algorithm that includes prior information (Bayesian), but I will if there is interest.
SVD is as useful as it is beautiful; I just want to make the point that it's time to move on and perhaps overload pinv(), much as backslash \ was overloaded, dispatching to the appropriate regularization, as was pointed out above.

Greg Heath

Mar 14, 2013, 3:45:05 PM
"Daniel Vasilaky" wrote in message <khralp$bgq$1...@newscl01ah.mathworks.com>...
> Elias Kyriakides <elias_ky...@hotmail.com> wrote in message <3AC15EAA...@hotmail.com>...
-----SNIP

> The SVD approach of pinv() is unstable.
-----SNIP

> %Example 1:
> format long;
> A = [1 0;0 0;0 0];
> b=[1;1;1];
> pA = pinv(A);
> x = pA*b
> dA = [1 0;0 10e-6;0 0];
> pdA = pinv(dA);
> x1 = pdA*b

That does NOT demonstrate that the SVD approach of pinv() is unstable.
I think you are slightly confused about what pinv really needs to satisfy:

help pinv

B = [ 1 0 ; 0 1e-5 ; 0 0 ]
pinvB = pinv(B)
% displayed results:
% pinvB   = [ 1 0 0 ; 0 1e5 0 ]
% B*pinvB = [ 1 0 0 ; 0 1 0 ; 0 0 0 ]
% pinvB*B = [ 1 0 ; 0 1 ]

% the four Moore-Penrose conditions:
check1 = max(max(abs(B - B*pinvB*B)))         % 1.694e-21
check2 = max(max(abs(pinvB - pinvB*B*pinvB))) % 1.456e-11
check3 = max(max(abs(pinvB*B - (pinvB*B)')))  % 0
check4 = max(max(abs(B*pinvB - (B*pinvB)')))  % 0

Hope this helps.

Greg

Bruno Luong

Mar 14, 2013, 4:15:07 PM
Of interest to you:

http://www.mathworks.fr/matlabcentral/fileexchange/25453-pseudo-inverse

Including customized regularization if needed.

Bruno

Bjorn Gustavsson

Mar 15, 2013, 11:37:07 AM
"Bruno Luong" <b.l...@fogale.findmycountry> wrote in message <khtb4b$aqc$1...@newscl01ah.mathworks.com>...
...and for the more general versions of inverse problems and their regularizations:

http://www.mathworks.com/matlabcentral/fileexchange/52-regtools

Daniel Vasilaky

Mar 15, 2013, 11:11:12 PM
"Bjorn Gustavsson" <bj...@irf.se> wrote in message <khvf73$o39$1...@newscl01ah.mathworks.com>...
%Instead of "unstable," what I should have said is that pinv() is a discontinuous function of the data.
format long;
A = [1 0;0 0;0 0];
b=[1;1;1];
pA = pinv(A);
x = pA*b
dA = [1 0;0 10e-6;0 0];
pdA = pinv(dA);
x1 = pdA*b

max(max(abs(A-dA))) %1.0e-05
max(abs(x-x1)) %1.0e+05


For the posted code, setting D = dA and cst = 0.001 gives
x2 =
0.999000999000999
0.009999999000000
max(abs(x-x2)) % 0.0099999990

The extended algorithm, not yet posted, is independent of the regularization parameter cst.
The extended algorithm is not discussed in http://www.mathworks.fr/matlabcentral/fileexchange/25453-pseudo-inverse

Bruno Luong

Mar 16, 2013, 4:55:07 AM
"Daniel Vasilaky" wrote in message <ki0nsg$q7e$1...@newscl01ah.mathworks.com>...
> "Bjorn Gustavsson" <bj...@irf.se> wrote in message
> %Instead of unstable, what I should have said is that the pinv() is a discontinuous function of the data.

It is misleading to say PINV is the problem; the cause is that we are dealing with an ill-posed system

A*x = y.

No one claims PINV is the best approach to solving an ill-posed problem; it is actually a bad one. This has been well known for decades (a century?).

PINV has a strict mathematical definition.

The main use of PINV is to provide the x that minimizes the 2-norm |x|
subject to |A*x - y| = min_z |A*z - y|.

It does NOT provide a solution x that is stable with respect to perturbations of y and A. It can be applied to many things, not exclusively to linear ill-posed systems.
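
As a toy illustration of that definition (my example, with an exactly rank-deficient A): both pinv and backslash minimize the residual, but pinv also minimizes |x|.

A = [1 1; 2 2; 3 3];     % rank 1, so the least-squares solution is not unique
b = [1; 2; 3];
x_pinv = pinv(A)*b       % [0.5; 0.5], the minimum 2-norm solution
x_bs   = A\b             % a basic solution (MATLAB warns about rank deficiency)
[norm(A*x_pinv - b), norm(A*x_bs - b)]  % equal residuals (0 here)
[norm(x_pinv), norm(x_bs)]              % pinv's solution has the smaller norm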

Bruno

Bruno Luong

Mar 16, 2013, 5:13:09 AM
"Daniel Vasilaky" wrote in message <ki0nsg$q7e$1...@newscl01ah.mathworks.com>...
The FEX submission is, after all, my own implementation. I'm not aware of your "extended" method, or of its value.

A good way to get your algorithm better known is to publish it in a scientific paper, and to make the code properly implemented and available to others (through, e.g., the FEX).

Bruno

Greg Heath

Mar 16, 2013, 8:28:15 AM
"Daniel Vasilaky" wrote in message <ki0nsg$q7e$1...@newscl01ah.mathworks.com>...
> "Bjorn Gustavsson" <bj...@irf.se> wrote in message <khvf73$o39$1...@newscl01ah.mathworks.com>...
> > "Bruno Luong" <b.l...@fogale.findmycountry> wrote in message <khtb4b$aqc$1...@newscl01ah.mathworks.com>...
> > > Of your interests
> > >
> > > http://www.mathworks.fr/matlabcentral/fileexchange/25453-pseudo-inverse
> > >
> > > Including customized regularization if needed.
> > >
> > > Bruno
> > >
> > ...and for the more general versions of inverse problems and their regularizations:
> >
> > http://www.mathworks.com/matlabcentral/fileexchange/52-regtools
>
>
> %Instead of unstable, what I should have said is that the pinv() is a discontinuous function of the data.

Instead of unstable, what you should have said is that pinv() is a discontinuous function of the data, GIVEN A FIXED TOLERANCE, tol.

Can you say that your method is a continuous function of the data, GIVEN A FIXED TOLERANCE, cst?

Hope this helps.

Greg

Daniel Vasilaky

Mar 18, 2013, 11:51:06 AM
"Greg Heath" <he...@alumni.brown.edu> wrote in message <ki1ogv$l9g$1...@newscl01ah.mathworks.com>...
Thank you all for your comments!
Is my algorithm a continuous function of the data for a fixed perturbation parameter?
In theory, yes; in practice one has to deal with round-off error.
Here is what the algorithm can do:
Let A be an m-by-n matrix where m >= n (for m = n, A ill-conditioned; for m > n, A'A ill-conditioned).
Consider the equation Ax = b, let x_0 be some initial prior solution, and let c > 0 be a perturbation parameter in ||Ax - b|| + c||x - x_0||. The extended algorithm produces a sequence x_0, x_1, x_2, ... which converges exponentially fast to the solution of
Ax = b for m = n, and for m > n to the solution of A'Ax = A'b (the normal equations), independent of the chosen initial values x_0 and c > 0.
Simple Tikhonov is the special case where x_0 is set to the zero vector, but zero is usually not the best choice in most applications.
Of course a well-chosen x_0 and c > 0 will reduce the number of successive approximations needed, but convergence is always guaranteed. Each successive approximation takes mn flops. A reasonable stopping rule for the successive approximations is "STOP as soon as the relative residual error is greater than the prior residual error," i.e., stop if
||Ax_i - b||/||b|| > ||Ax_(i-1) - b||/||b||.
This basically says STOP if round-off error starts to make things worse. This may be overkill, because experiments show that one can stop much sooner.
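
For reference, a generic iterated-Tikhonov loop with that stopping rule (my sketch, not the poster's algorithm; A, b, a prior x0, and c > 0 are assumed given) looks like:

n = size(A, 2);
x = x0;                      % the prior solution x_0
prev = inf;
for i = 1:5000
    xi = [A; sqrt(c)*eye(n)] \ [b; sqrt(c)*x];  % minimizes ||A*z-b||^2 + c*||z-x||^2
    res = norm(A*xi - b) / norm(b);             % relative residual
    if res > prev, break; end                   % stop once round-off makes it worse
    x = xi;
    prev = res;
end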

Loosely speaking, the proof and derivation of the above result reduce Richard Bellman's functional recurrence equation to a recurrence involving approximate Householder-type projections, i.e.,
(I - vv'/(c + v'v)), tempered by the parameter c > 0. The recurrence relation turns out to be a generalization of the Gram-Schmidt process, similar to but not the same as Householder orthogonalization. This in turn produces an approximate QR factorization of A which solves the extended perturbed problem. The orthogonalization is done once, which takes approximately 2mn^2 flops.
It is possible to speed things up by Björck's modified Gram-Schmidt (MGS) trick.
The extended algorithm works well in my anomaly-detection and anatomical-image-part-reconstruction algorithms and has been accepted for publication in a biomedical engineering journal, but there it is only referenced, not discussed. The derivation of the algorithm is quite tricky and the linear algebra is laborious, but the convergence proof is not difficult. Not being from the math community, I don't know which journal to submit it to that might accept it. Any suggestions would be greatly appreciated.

Bruno Luong

Mar 19, 2013, 12:55:06 AM
"Daniel Vasilaky" wrote in message <ki7d5a$r3s$1...@newscl01ah.mathworks.com>...
> "Greg Heath" <he...@alumni.brown.edu> wrote in message <ki1ogv$l9g$1...@newscl01ah.mathworks.com>...

> Thank you all for your comments!
> Is my algorithm a continuous function of the data for a fixed perturbation parameter?
> In theory, yes, and in practice one has to deal with round off error.
> Here is what the algorithm can do:
> Let A be an m-by-n matrix where m >= n. (for m=n A ill-conditioned, for m>n A’A ill-conditioned.)
> Consider the equation Ax = b and let x_0 be some initial prior solution and c>0 a perturbation parameter in ||Ax – b|| + c||x – x_0||.

This can be solved by backslash:

[A; eye(size(x0,1))] \ [b; c*x0]

This penalization is well known; among its fields of use are climate and weather forecasting.
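
With a sqrt(c) scaling on the identity block, the augmented solve minimizes ||A*z - b||^2 + c*||z - x0||^2 exactly, and agrees with the Tikhonov normal equations up to round-off (a quick check on toy data of my choosing):

A = hilb(8); b = A*ones(8,1);                   % small ill-conditioned test problem
x0 = zeros(8,1); c = 1e-6;
x_aug = [A; sqrt(c)*eye(8)] \ [b; sqrt(c)*x0];  % augmented-system form
x_ne  = (A'*A + c*eye(8)) \ (A'*b + c*x0);      % normal-equations form
norm(x_aug - x_ne)                              % small, up to round-off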

>The extended algorithm produces a sequence x_0, x_1, x_2, …….. which converges exponentially fast to the solution of
> Ax = b for m=n and for m > n to the solution of A’Ax = A’b (normal equations) independent of the chosen initial values x_0 and c > 0.
> Simple Tikhonov is a special case when x_0 is set to the zero vector, but zero is usually not the best choice in most applications.
> Of course a well-chosen x_0 and c > 0 will reduce the number of successive approximations needed, but the convergence is always guaranteed. Each successive approximation takes mn flops. A reasonable stopping rule for successive approximations is: “STOP as soon as the relative residual error is greater than the prior residual error, “ i.e. , stop if
> ||Ax_i – b||/||b|| > ||Ax_(i-1) – b||/||b||.
> This basically says STOP if round off error starts to make things worse. This may be an overkill because experiments show that one can stop much sooner.

Using conjugate gradients, it would probably converge even faster.

>
> Loosely speaking, the proof and derivation of the above result reduces Richard Bellman’s functional recurrence equation to a recurrence equation of what appears to be a recurrence equation involving approximate Householder type projections, i.e.,
> (I – vv’/(c +v’v)), tempered by parameter c>0. The recurrence relation turns out to be a generalization of the Gram-Schmidt process similar to but not the same as the Householder orthogonalization. This in turn produces an approximate QR factorization of A which solves the extended perturbed problem. The orthogonalization is done once which takes approximately 2mn^2 flops.

This is, again, well-known material.

> It is possible to speed things up by Bjock’s modified Gram-Schmidt (MGS) trick.
> The extended algorithm works well in my anomaly detection and reconstruction of anatomical image parts algorithms and has been accepted for publication in a biomedical engineering journal. But the above extended algorithm is only referenced there and not discussed in the publication. The derivation of the algorithm is quite tricky and the liner algebra is laborious but the convergence proof is not difficult. Not being from a math community I don’t know which journal to submit it to that may possibly accept it. Any suggestions would be greatly appreciated.

Please make sure there is something really novel in your algorithm before working on the publication.

Bruno

Daniel Vasilaky

Mar 21, 2013, 1:45:07 PM
"Bruno Luong" <b.l...@fogale.findmycountry> wrote in message <ki8r3a$frd$1...@newscl01ah.mathworks.com>...
Bruno, thanks for the tip on [A; eye(size(x,1))] \ [b; c*x_0]! Is there more information on this?

I ran extensive tests which show that for severely ill-conditioned matrices my algorithm significantly outperforms both backslash \ and the extended backslash
[A; eye(size(x,1))] \ [b; c*x_0].
However, for a sparse matrix such as the Harwell-Boeing test matrix "west0479", with cond(A) = 1.4244e+12, the simple backslash \ gives by far the best results.
But even in the "west0479" case my algorithm converges to the true solution, though there is little reason to use it when backslash \ is much faster.
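
(west0479 ships with MATLAB, so this comparison is easy to reproduce; condest returns a 1-norm condition estimate rather than the exact cond(A):)

load west0479            % loads the sparse matrix variable west0479
A = west0479;
condest(A)               % on the order of 1e12, consistent with the figure above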

Here is a sample of the test results: x is the actual solution, x_0 the initial solution, and x1 the computed solution.
In this example x_0 = the zero vector, and I tried various parameter values from
c = 1e-3 to c = 1e-12.

Test case hilb(25), cond(A) = 1.9684e+18, x = vector of 1s, RHS: b = A*x.
Also tried x_0 = x and got similar results.
Successive approximations for the extended backslash were looped based on:
x_n = [A; eye(size(x,1))] \ [b; c*x_(n-1)]
For hilb(25):
Simple backslash (\) :
max(norm(x-x1)) = 110.7277

Extended backslash [A; eye(size(x,1))] \ [b; c*x_0] 0 iterations:
max(norm(x-x1)) = 2.6627

Extended backslash [A; eye(size(x,1))] \ [b; c*x_0] 500 iterations:
max(norm(x-x1)) = 2.6890

Extended backslash [A; eye(size(x,1))] \ [b; c*x_0] 5,000 iterations:
max(norm(x-x1)) = 2.6890 (does not improve)

My algorithm running 500 successive approximations:
max(norm(x-x1)) = 0.0811

My algorithm running 1,000 successive approximations:
max(norm(x-x1)) = 0.0573

My algorithm running 5,000 successive approximations:
max(norm(x-x1)) = 0.0488

In the above example we can see that the extended backslash
[A; eye(size(x,1))] \ [b; c*x_0]
does not converge to the exact solution; it stays put after a few iterations.
On the other hand, my algorithm always converges. This turned out to be the case for other hilb(n) matrices, and the higher the dimension n, the greater the outperformance by my algorithm.

I also compared the performances on a
discretization of a first-kind Fredholm integral equation with kernel K and right-hand side g given by
K(s,t) = exp(s*cos(t)), g(s) = 2*sinh(s)/s, with integration intervals s in [0,pi/2], t in [0,pi], and n = 200.
The solution is given by x(t) = sin(t).
Reference: M. L. Baart, "The use of auto-correlation for pseudo-
rank determination in noisy ill-conditioned linear least-squares
problems", IMA J. Numer. Anal. 2 (1982), 241-247.
Discretized by the Galerkin method with orthonormal box functions;
one integration is exact, the other is done by Simpson's rule.
Per Christian Hansen, IMM, 09/16/92.
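
This is the baart test problem from Hansen's Regularization Tools (linked earlier in the thread); assuming regtools is on the path, it is generated by:

[A, b, x] = baart(200);  % discretized kernel, right-hand side, and exact solution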

cond(A) = 1.6196e+18, x_0 = the zero vector, and
c = 1e-09; x = true solution, x1 = computed solution

Simple backslash (\):
max(norm(x-x1)) = 199.0953

Extended backslash [A; eye(size(x,1))] \ [b; c*x_0], 0 iterations:
max(norm(x-x1)) = 0.7065

Extended backslash [A; eye(size(x,1))] \ [b; c*x_0], 5,000 iterations:
max(norm(x-x1)) = 0.7065 (not a typo)

My algorithm running 5,000 iterations:
max(norm(x-x1)) = 0.0369 (a 19-fold improvement)

Would these results be considered "novel" by the math community?

Thanks for your help,
Dan

Bjorn Gustavsson

Mar 27, 2013, 8:42:07 AM
"Daniel Vasilaky" wrote in message <kifgv3$h42$1...@newscl01ah.mathworks.com>...
What you call "the extended backslash" is the matrix-algebraic formulation of the Tikhonov/minimum-length solution, which obviously will not improve with iterations.

Once you're dealing with really ill-conditioned matrices A and right-hand sides B that come from observations, you have to stop comparing the output of your algorithm with some "exact solution": there will be noise in B that should preferably not propagate into the solution. What I'm trying to say is that a "better match" is not what is desired, but rather a "statistically correct match" given the uncertainties in B.
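
One classical way to make "statistically correct" operational is Morozov's discrepancy principle: choose the regularization parameter so the residual matches the noise level instead of pushing it toward zero. A sketch (A, b, and an estimate noise_level of ||db|| are assumed given):

n = size(A, 2);
for c = logspace(-16, 0, 33)                      % sweep c from small to large
    xc = [A; sqrt(c)*eye(n)] \ [b; zeros(n, 1)];  % Tikhonov with prior x0 = 0
    if norm(A*xc - b) >= noise_level, break; end  % stop once residual reaches the noise
end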

Daniel Vasilaky

Apr 29, 2022, 7:05:50 PM
Daniel Vasilaky is on the web at http://daniel-vasilaky.brandyourself.com.

Daniel Vasilaky

Jun 2, 2022, 6:56:41 PM
Daniel Vasilaky is online at http://daniel-vasilaky.brandyourself.com

Daniel Vasilaky

Jun 17, 2022, 5:34:50 PM
Daniel Vasilaky is at https://datahack.reddit.com

Daniel Vasilaky

Jun 19, 2022, 4:00:50 PM
Daniel Vasilaky is on linkedin.com

Daniel Vasilaky

Jun 27, 2022, 1:53:04 PM
Daniel Vasilaky is at deviantart.com

Daniel Vasilaky

Jun 28, 2022, 7:21:58 PM
Daniel Vasilaky posted his photography at www.pinterest.com

Daniel Vasilaky

Jul 6, 2022, 7:10:39 PM
On Tuesday, March 27, 2001 at 7:46:51 PM UTC-8, Elias Kyriakides wrote:
> Dear friends,
> Does anybody know how to get the actual code for the pseudoinverse in
> MATLAB? I am really interested to see the whole thing plus the Singular
> Value Decomposition (SVD) and find out how it works.
> I would be really grateful if somebody has it or knows how I can get it.
> Thanks in advance,
> Elias
Daniel Vasilaky posted a photo at flickr.com

E U

Jan 22, 2024, 7:42:13 PM
composed by Daniel Vasilaky (fouriere...@gmail.com)