How many cores do you have? Are you using the precompiled Mac binaries for 0.2pre? Those have multi-threaded BLAS disabled because it was crashing. I believe that the sparse solver itself is not parallelized, but the underlying BLAS calls are.
If you're seeing a 10x slowdown compared to x=A\b in MATLAB, it could be due to several reasons. You might be using the wrong solver: MATLAB's x=A\b is a metasolver that picks a solver based on the matrix properties, and Julia might be picking the wrong one. Or you might not be using a multicore BLAS. I do all my flops in the BLAS in x=A\b, except for extremely sparse matrices (for which I use a non-BLAS-based solver). With the BLAS, x=A\b can reach up to 50% of the theoretical peak of a multicore machine, depending on the problem (25% is more typical).
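The metasolver idea above can be sketched roughly as follows. This is not MATLAB's actual dispatch logic (which inspects many more properties, and uses CHOLMOD/UMFPACK for the sparse case); it is just a minimal illustration in Python/SciPy of "pick a cheaper factorization when the matrix properties allow it". The function name `backslash` is hypothetical.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve, lu_factor, lu_solve

def backslash(A, b):
    """Toy polyalgorithm for x = A \\ b (dense case only)."""
    # If A is symmetric, try Cholesky first: roughly half the
    # flops of LU, and it doubles as a positive-definiteness test.
    if np.allclose(A, A.T):
        try:
            c = cho_factor(A)
            return cho_solve(c, b)
        except np.linalg.LinAlgError:
            pass  # symmetric but not positive definite; fall through
    # General fallback: LU with partial pivoting.
    lu, piv = lu_factor(A)
    return lu_solve((lu, piv), b)
```

A real metasolver would also check for triangular, Hessenberg, and banded structure before factorizing, and would dispatch to sparse solvers when the matrix is stored sparsely.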
If you link to the standard Fortran BLAS, you will typically see a 10x slowdown compared to x=A\b in MATLAB.
Toss me the matrix and I'll give it a try. You can upload it to http://www.cise.ufl.edu/dropbox/www .
(I'm the author of most of the sparse solvers in x=A\b in MATLAB).
thanks,
Tim
Hi, I am trying to port my MATLAB program to Julia. The for loop is about 25% faster, but the backslash is about 10 times slower. It seems that in MATLAB the backslash is parallelized automatically. Is there any plan in Julia to do this? BTW, the matrix I am solving is sparse and symmetric.
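For a sparse symmetric system like the one described above, the choice of solver matters as much as parallelism: a general sparse LU works, but a solver that exploits symmetry can be substantially cheaper. As a small, hedged illustration in Python/SciPy (the thread itself is about MATLAB and Julia; the solver names here are SciPy's):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve, cg

# Example system: 1-D Laplacian, sparse, symmetric positive definite.
n = 100
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format='csc')
b = np.ones(n)

# General-purpose route: sparse LU (SuperLU), ignores symmetry.
x_direct = spsolve(A, b)

# Symmetry-aware route: conjugate gradients, valid because A is SPD.
x_iter, info = cg(A, b)  # info == 0 means converged
```

A direct sparse Cholesky (CHOLMOD, as used by MATLAB's backslash for SPD matrices) would be the closest analogue to what the polyalgorithm should pick here; SciPy does not ship one, so CG stands in for the symmetric path.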
We should do the polyalgorithm that Tim pointed out. Perhaps best to have an issue for this.
-viral
I believe that the University of Florida owns the copyright, and they would lose licensing revenue. I would love it too if we could have these under the MIT licence, but it may not be a realistic expectation.
Looking at the paper is the best way to go. Jiahao has already produced the pseudo code in the issue, and we do similar things in our dense \.
-viral