Simple Conjugate Gradients coded in Julia is much faster than cholfact


Eduardo Lenz

May 31, 2015, 9:37:23 PM5/31/15
to julia...@googlegroups.com
Hi.

One of my students is solving some large sparse systems (more than 20K equations). The coefficient matrix
is symmetric and positive definite, and very sparse (about 1% nonzero elements in some cases).

After playing around a little bit with cholfact, we decided to compare its timing against a very simple
implementation of the conjugate gradient method with diagonal scaling.

The code is in

https://gist.github.com/CodeLenz/92086ba37035fe8d9ed8#file-gistfile1-txt
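
In rough outline (the gist has the actual code, and the details there may differ), the idea is just CG with a diagonal (Jacobi) preconditioner, something like:

function dcgc(A, b, tol)
    # Conjugate gradients with diagonal (Jacobi) scaling:
    # equivalent to PCG with M = diag(A).
    n  = length(b)
    Mi = 1.0 ./ diag(A)        # inverse of the diagonal preconditioner
    x  = zeros(n)
    r  = copy(b)               # residual for the initial guess x = 0
    z  = Mi .* r               # preconditioned residual
    p  = copy(z)
    rz = dot(r, z)
    for it = 1:n
        Ap    = A * p
        alpha = rz / dot(p, Ap)
        x    += alpha * p
        r    -= alpha * Ap
        norm(r) <= tol * norm(b) && break
        z     = Mi .* r
        rznew = dot(r, z)
        p     = z + (rznew / rz) * p
        rz    = rznew
    end
    return x
end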

For example, the solution of Ax = b for

julia> A = sprand(10000,10000,0.01); A = A'+A; A=A+100*rand()*speye(10000,10000)

takes 16 seconds with cholfact(A) and only 600 milliseconds (!) with DCGC (tol = 1e-10).
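
For anyone who wants to reproduce the comparison, the timing harness is essentially something like (b chosen so the exact solution is known):

julia> b = A * ones(10000);
julia> @time cholfact(A) \ b;
julia> @time dcgc(A, b, 1e-10);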

Also, as expected, the memory consumption with CG is very low, allowing the solution
of very large systems.

The same pattern is observed for different levels of sparsity and for different random matrices.

I would like to thank the Julia developers for such an amazing tool!



Andreas Noack

Jun 1, 2015, 9:05:18 AM6/1/15
to julia...@googlegroups.com
I think the chosen matrix has very good convergence properties for iterative methods, but I agree that iterative methods are very useful to have in Julia. There are already quite a few implementations in


I'm not sure if these methods cover the one you chose, so you could have a look and see if there is something to contribute there.

Eduardo Lenz

Jun 1, 2015, 9:12:01 AM6/1/15
to julia...@googlegroups.com
Hi Andreas... It's me again :0)

It's just an example... I agree that it is quite unfair :0)

But we are solving very large 3D homogenization problems (basically, 3D finite elements) and the
difference is still very impressive, especially when you consider the amount of memory needed to
solve such systems with cholfact compared to this iterative method.

This code is a very basic modification of the traditional CG method, where the coefficient
matrix is scaled in order to reduce the spread between the largest and smallest
eigenvalues. I am really amazed by the speed of sparse multiplications in Julia... it's
very fast!
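
Concretely, the scaling replaces A by D^(-1/2) A D^(-1/2) with D = diag(A), so the scaled matrix has a unit diagonal and typically a much smaller eigenvalue spread. Something like this (an illustration, not the gist code):

julia> d  = diag(A);
julia> Dh = spdiagm(1.0 ./ sqrt(d));   # Dh = D^(-1/2)
julia> As = Dh * A * Dh;               # unit diagonal, better conditioned
julia> # solve As*y = Dh*b, then recover x = Dh*y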

Dominique Orban

Jun 4, 2015, 9:31:24 AM6/4/15
to julia...@googlegroups.com
FWIW, on your example, here's the time taken by the CG implemented in Krylov.jl (https://github.com/optimizers/Krylov.jl):

julia> A = sprand(10000,10000,0.01); A = A'+A; A=A+100*rand()*speye(10000,10000);
julia> b = A * ones(10000);
julia> cg(A, b, rtol=1.0e-10, atol=1.0e-10);
julia> @time cg(A, b, rtol=1.0e-10, atol=1.0e-10);
elapsed time: 0.047595623 seconds (1136352 bytes allocated)


Preallocation doesn't save much time in this case, but quite a bit of memory:

julia> using LinearOperators
julia> Ap = zeros(10000);
julia> op = LinearOperator(10000, Float64, p -> A_mul_B!(1.0, A, p, 0.0, Ap))
julia> cg(op, b, rtol=1.0e-10, atol=1.0e-10);
julia> @time cg(op, b, rtol=1.0e-10, atol=1.0e-10)
elapsed time: 0.0450219 seconds (241552 bytes allocated)


This CG doesn't support preconditioning, but that would be quite trivial to add.

Eduardo Lenz

Jun 8, 2015, 2:53:56 PM6/8/15
to julia...@googlegroups.com
Hi Dominique.

I was not aware of this library. Thanks!

Viral Shah

Jun 14, 2015, 6:02:16 AM6/14/15
to julia...@googlegroups.com, eduardo...@gmail.com
FWIW, the Julia sparse matrix-vector multiplication is very simple. I expect that further speedups are possible by using libraries such as OSKI or MKL. In an ideal world, we would have autotuning kernels in Julia itself.
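
For the curious, a simplified version of the CSC matrix-vector kernel looks roughly like the sketch below (illustrative only, not the actual Base implementation):

function spmv(A::SparseMatrixCSC{Float64,Int}, x::Vector{Float64})
    y = zeros(A.m)
    for j = 1:A.n                               # loop over columns
        xj = x[j]
        for k = A.colptr[j]:(A.colptr[j+1]-1)   # stored entries in column j
            y[A.rowval[k]] += A.nzval[k] * xj
        end
    end
    return y
end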

-viral