If you want to solve Ax=b and you already have A in factored form, why
do you need to compute A = L*L' ? Are you trying to verify the
correctness of the factorization? It looks like what you really want
is a sparse triangular solve, which Taucs certainly does have.
Anyway, I don't think Taucs has a standalone sparse matrix-matrix
multiply routine, but it's not hard to find implementations elsewhere.
I can find one for you if you're interested, but first you should
figure out whether you really need to compute L*L'.
mfh
It should not be too difficult to multiply two sparse matrices. I do not
have code at hand, though.
The question is purely practical: which matrix is easier to factorize, A
or L? Could you please describe the properties of the L matrix? Is it
symmetric? Is it triangular? From what area does your problem come?
If it is easier to factorize L, then you could first solve
L y = b
and then
L' x = y
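In code, the two solves are just forward and back substitution. A minimal dense sketch of my own (illustrative only, not TAUCS code; a sparse solver would loop over the stored nonzeros of each column instead of all entries):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Solve L y = b (forward substitution) for lower-triangular L,
// then L' x = y (back substitution), giving the solution of (L*L') x = b.
std::vector<double> solve_llt(const std::vector<std::vector<double>>& L,
                              const std::vector<double>& b) {
    const int n = static_cast<int>(b.size());
    std::vector<double> y(n), x(n);
    for (int i = 0; i < n; ++i) {          // L y = b
        double s = b[i];
        for (int j = 0; j < i; ++j) s -= L[i][j] * y[j];
        y[i] = s / L[i][i];
    }
    for (int i = n - 1; i >= 0; --i) {     // L' x = y (note the transposed access L[j][i])
        double s = y[i];
        for (int j = i + 1; j < n; ++j) s -= L[j][i] * x[j];
        x[i] = s / L[i][i];
    }
    return x;
}
```

Each solve touches every stored entry of L once, so the cost is linear in the number of nonzeros of L in the sparse case.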
Evgenii
Evgenii is right -- if you have A already factored, then it's better to
solve the two sparse triangular systems to get x than to compute A,
factor A all over again, and then solve for x. When you solve for x
you have to solve the two sparse triangular systems anyway. But I
didn't think to ask (as Evgenii wisely did) whether the L matrix is
actually a Cholesky factor of A or whether it's just some arbitrary
sparse matrix.
mfh
The choice of L as notation confused me: one normally writes A = L*L'
for the Cholesky factorization of a symmetric positive definite matrix
A. In that case, L is lower triangular. But if your L comes from some
other source (like an optimization problem, perhaps?), then it may not
be triangular.
> I have found a program to multiply two sparse
> matrices, but in this program the matrix is not expressed as
> taucs_ccs_matrix, so it would be more convenient if Taucs had this function.
> It seems that it doesn't.
CCS ("Compressed Column Storage") is a standard sparse matrix storage
format. You can learn about this and other formats in the book
"Templates for the Solution of Sparse Linear Systems," which is
available for free at the following URL:
http://www.netlib.org/linalg/html_templates/Templates.html
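To give a feel for the format, here is a small sketch of the CCS layout (the array and function names here are my own illustration; taucs_ccs_matrix holds the same three arrays, though its field names may differ):

```cpp
#include <cassert>
#include <vector>

// CCS stores three arrays: the nonzero values column by column, the row
// index of each value, and offsets marking where each column starts.
// For the 3x3 matrix
//   [1 0 2]
//   [0 3 0]
//   [4 0 5]
// scanned column by column:
const std::vector<double> values = {1, 4, 3, 2, 5}; // column-major nonzeros
const std::vector<int>    rowind = {0, 2, 1, 0, 2}; // row index of each value
const std::vector<int>    colptr = {0, 2, 3, 5};    // col j occupies [colptr[j], colptr[j+1])

// Look up entry (i, j) by scanning the slice of column j.
double ccs_get(int i, int j) {
    for (int k = colptr[j]; k < colptr[j + 1]; ++k)
        if (rowind[k] == i) return values[k];
    return 0.0; // not stored, i.e. a structural zero
}
```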
mfh
The problem you may encounter is that L*L' can be dense even for a
sparse L. If this is not the case, please have a look at my sample code at
http://matrixprogramming.com/Sparse/SparseMatrix.cpp
where I use my own sparse matrix class to perform the multiply you need
and then convert the result to the TAUCS format. I should say that I
have not debugged the code; if you give me an example of L, I could do it.
If you have questions about the code, please ask.
You may also want to look at GMM++
http://home.gna.org/getfem/gmm_intro
I think it has what you need. Its drawback is that it does not support
symmetric storage, or at least that was the case some time ago. That is,
it will presumably store L*L' as an unsymmetric matrix.
>>> CCS ("Compressed Column Storage") is a standard sparse matrix storage
>> format. You can learn about this and other formats in the book
>> "Templates for the Solution of Sparse Linear Systems," which is
>> available for free at the following URL:
>>
>> http://www.netlib.org/linalg/html_templates/Templates.html
>>
> I will read this URL.
A good description of formats for sparse matrices is also in SPARSKIT
http://www-users.cs.umn.edu/~saad/software/SPARSKIT/sparskit.html
Hi Chen,
I have made an example at
http://matrixprogramming.com/Sparse/SparseMatrixExample.cpp
I have assumed that the matrix above is unsymmetric. I found a small bug
in the function multiply - the first two loops were wrong - but now it
seems to work. I have done the computations with dense matrices to check
the result, and this is what I obtain:
Original Matrix A
1 0 1 0
0.5 0.8 0 0
0 0.6 0.7 0
0 0 0.3 0.2
A*A^t by multiplication of dense matrices
2 0.5 0.7 0.3
0.5 0.89 0.48 0
0.7 0.48 0.85 0.21
0.3 0 0.21 0.13
A*A^t by multiplication of sparse matrices, only half was computed
2 0.5 0.7 0.3
0 0.89 0.48 0
0 0 0.85 0.21
0 0 0 0.13
Please have a look at the code. I do not know your level of C++; if you
find the code too difficult, we can go through it step by step. I would
suggest:
1) To understand the dense matrix (class Matrix) and the multiplication
of dense matrices.
2) To understand the class SparseMatrix - how to work with it. See the
initialization and convert2Matrix.
3) Sparse multiply.
Along the way it may be good to take smaller pieces of code and discuss
them.
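The dense check above is just the triple loop C[i][j] = sum_k A[i][k]*A[j][k]. A standalone sketch (my own illustration, not the code from the site) that reproduces the table:

```cpp
#include <cassert>
#include <cmath>

// Dense A*A^t for a 4x4 matrix: C[i][j] = sum_k A[i][k] * A[j][k].
// Note the second factor is indexed A[j][k], which transposes it.
void dense_aat(const double A[4][4], double C[4][4]) {
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j) {
            C[i][j] = 0.0;
            for (int k = 0; k < 4; ++k)
                C[i][j] += A[i][k] * A[j][k];
        }
}
```

Since A*A^t is symmetric, a sparse code only needs to compute the upper (or lower) half, which is why the sparse result above shows zeros below the diagonal.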
Best wishes,
Evgenii
Do you mean distributing executables compiled with -MD? It is a weird
problem, but it is actually not too difficult. On the other hand, you
can use gcc under Cygwin. In any case it is good to use a modern C++
compiler, as it does not make sense to invest effort in making code
work with VC98. That way you would also miss out on modern features of C++.
Look at the nonzero pattern of the input matrices and see whether
L*L^T results in a nearly-dense matrix. If so, try reordering the
unknowns to increase sparsity in the product matrix.
mfh
Please note that in MS VC the safe iterators are turned on by default
and even -O2 does not change it. And with safe iterators Visual C++ is
incredibly slow.
Please follow the two links
Numerics: Visual C++ vs. g++
optimization in VC++ Express Edition 2005 Options
at the bottom of this page:
http://matrixprogramming.com/Tools/CompileLinkVC.html
I am not sure I have understood your question. If you use TAUCS, then
it is written in C, and in that case -O2 is enough. I am not sure
whether you could gain more with other flags in VC.
> On Jan 17, 9:24 am, Evgenii Rudnyi <use...@rudnyi.ru> wrote:
>>>> 2. Calculate L*LT: 19.003 seconds
>> Please note that in MS VC the safe iterators are turned on by default
>> and even -O2 does not change it. And with safe iterators Visual C++ is
>> incredibly slow.
>>
>> Please follow the two links
>>
>> Numerics: Visual C++ vs. g++
>> optimization in VC++ Express Edition 2005 Options
>>
>> at the bottom of this page
>>
>> http://matrixprogramming.com/Tools/CompileLinkVC.html
Here I meant C++ code. In that case, to achieve efficiency it is
necessary to turn off the safe iterators in VC++ with -D_SECURE_SCL=0;
-O2 alone does not do it!
Wait, is L the Cholesky factor of A? Then why do you need to factorize
again after computing L*L^T ?
mfh
Could you please write how much time the factorization takes after the
multiplication?
Well, if the factorization is so fast, then it makes sense to invest
time in the matrix multiplication. You could start with dense matrices
and an optimized BLAS; that may already be faster than a non-optimized
sparse multiply. Please have a look at
http://matrixprogramming.com/MatrixMultiply/
I have heard about the Sparse BLAS but have never tried it. It could be
a good starting point.
Once more: my code was a pretty naive implementation of sparse multiply;
so far I have not needed it myself. One should just look at how it is
done in the libraries.
I would start with the Sparse BLAS - see
Sparse Basic Linear Algebra Subprograms (BLAS) Library
Please have a look. I have not tried it yet, but I would expect it to be
good.
The Sparse BLAS interface doesn't support sparse matrix * sparse
matrix, does it? The matrix-matrix multiply it supports was meant for
(sparse matrix) * (dense matrix) + (dense matrix), where the dense
matrices are supposed to be tall and thin (this is for iterative
methods with multiple right-hand sides).
mfh
Thank you for the information. I should confess that I have not read the
docs carefully; I just saw "matrix-matrix product" and thought that was
it.
Well, the sparse matrix - sparse matrix multiply seems to be a
challenge. If I needed it, I would probably start with the matrix-vector
product from the Sparse BLAS.
Do you know how people do it?
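For what it is worth, the CCS matrix-vector product itself is short; a minimal sketch of my own (not Sparse BLAS code, just the scatter loop over columns):

```cpp
#include <cassert>
#include <vector>

// y = A*x for A stored in CCS: walk each column j and scatter the
// contribution values[k] * x[j] into y at row rowind[k].
std::vector<double> ccs_matvec(int nrows,
                               const std::vector<int>& colptr,
                               const std::vector<int>& rowind,
                               const std::vector<double>& values,
                               const std::vector<double>& x) {
    std::vector<double> y(nrows, 0.0);
    for (std::size_t j = 0; j + 1 < colptr.size(); ++j)
        for (int k = colptr[j]; k < colptr[j + 1]; ++k)
            y[rowind[k]] += values[k] * x[j];
    return y;
}
```

One naive way to build a sparse product column by column is to apply this to each column of the second factor, though a real implementation would work on the sparse pattern directly.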
I may even have a code for it buried somewhere in the aforementioned
"Sparse Matrix Converter":
http://bebop.cs.berkeley.edu/smc/
I haven't used it since 2006, so I can't say if it still works. (It's
not really supposed to be in the SMC, but I just left it in there.) I
haven't optimized it at all beyond getting the sparse algorithm right.
The usual applications of sparse matrix-matrix multiplication are
algebraic multigrid (where really what you want is a sparse triple
product) and optimization problems (where often what you really want
is A^T D A for sparse A and diagonal D). It may be handy for checking
a sparse factorization as well, but there are much cheaper ways of
checking the factorization (just solve some linear systems and see if
you get the right answer).
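For example, such a check can apply L^T and then L to a test vector and compare against A*x, without ever forming L*L^T. A small dense sketch under that idea (the factor_residual helper is hypothetical, my own illustration):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// Returns max_i |(L * (L^T x))_i - (A x)_i| for a test vector x.
// If L really is the Cholesky factor of A, this is ~0 (up to roundoff),
// and it costs only two matrix-vector products instead of a full L*L^T.
double factor_residual(const std::vector<std::vector<double>>& L,
                       const std::vector<std::vector<double>>& A,
                       const std::vector<double>& x) {
    const int n = static_cast<int>(x.size());
    std::vector<double> t(n, 0.0);
    for (int i = 0; i < n; ++i)               // t = L^T * x
        for (int j = 0; j < n; ++j) t[i] += L[j][i] * x[j];
    double worst = 0.0;
    for (int i = 0; i < n; ++i) {             // compare (L*t)[i] with (A*x)[i]
        double lhs = 0.0, rhs = 0.0;
        for (int j = 0; j < n; ++j) { lhs += L[i][j] * t[j]; rhs += A[i][j] * x[j]; }
        worst = std::max(worst, std::fabs(lhs - rhs));
    }
    return worst;
}
```

Running it with a few random vectors x gives high confidence in the factorization at a fraction of the cost of the explicit product.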
mfh
I plan to try Mark's code for sparse matrix multiplication. Well, I also
have a couple of ideas on how it could be done that I would like to try.
Could someone post a matrix for the test?