When projecting onto a vector space, I often use inv(A'*A) in deriving the projection matrix. However, with A = [1 0 0; 0 1 0; 0 0 1; 0 0 0] and B = 2*A (the example from Strang's Linear Algebra, 4th ed., p. 215), I get an InexactError() when invoking inv(B'*B), even though inv(A'*A) works fine (A'*A is the 3x3 identity matrix).
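For reference, here is a minimal reproduction of what I'm seeing (the exact behavior may depend on the Julia version):

```julia
# Example from Strang, Linear Algebra, 4th ed., p. 215
A = [1 0 0; 0 1 0; 0 0 1; 0 0 0]   # 4x3 Matrix{Int}
B = 2 * A                           # still a Matrix{Int}

inv(A' * A)   # A'*A is the 3x3 identity (Int); this inverts fine
inv(B' * B)   # B'*B has 4 on the diagonal (Int); this throws InexactError() for me
```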
When B is an array of Float64 (using B = 2.0 * A), the matrix inversion works: the returned array is approximately 0.25 * I (the identity matrix).
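For completeness, a sketch of the Float64 version that works on my end (float() is just an assumed explicit alternative to writing 2.0 * A):

```julia
# Same example, with the entries promoted to Float64 up front
Bf = 2.0 * A                 # 2.0 * Int matrix promotes to Matrix{Float64}
inv(Bf' * Bf)                # ≈ 0.25 * I, as expected

# An explicit conversion also works for me
inv(float(B)' * float(B))
```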
I saw another post where a type conversion (Int64 -> Float64) was required in some matrix-operation context (not inversion, though). Is this type conversion a required step before matrix inversion?
Thanks