Why does QR beat LU for sparse, square matrices?


Jonas Kersulis

Apr 25, 2016, 4:49:40 PM
to julia-users
Can someone explain why qrfact is faster and requires less memory than lufact for square, sparse matrices? LU is less computationally complex than QR, and with decent pivoting LU should be able to preserve sparsity better than QR, so I'm confused by what I'm seeing in practice.

The plots below compare computation time, memory allocation, and garbage collection time for the two factorization methods. I generated them by factorizing sprand sparse matrices. The top plot shows results for matrices with 10% nonzeros; the bottom plot shows results for matrices with 50% nonzeros. The methods seem to converge in memory performance as density increases, but LU loses to QR in both cases.
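For reference, a minimal sketch of this kind of benchmark (illustrative only; written with current Julia's `lu`/`qr` from the standard libraries, which correspond to the `lufact`/`qrfact` discussed in this thread, and with a small diagonal shift added to guard against an accidentally singular random draw):

```julia
using SparseArrays, LinearAlgebra

# Factor a random square sparse matrix both ways and compare timings.
n = 1_000
A = sprand(n, n, 0.10) + 10.0I        # 10% nonzeros, shifted diagonal

@time F_lu = lu(A)                    # multifrontal LU (UMFPACK)
@time F_qr = qr(A)                    # multifrontal QR (SuiteSparseQR)

# Sanity check: both factorizations solve the same linear system.
b = rand(n)
println(norm((F_lu \ b) - (F_qr \ b)))
```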

I have looked through the paper describing the multifrontal QR algorithm Julia calls, but I don't understand it fully. Before I spend more time studying it, I figured I would see if someone here knows the secret sauce that helps it beat LU.


Miles Lubin

Apr 26, 2016, 10:30:10 AM
to julia-users
Hey Jonas,

I don't have an answer to this, but if you're looking for state-of-the-art performance for sparse linear algebra, I'd recommend using a proprietary library like PARDISO.

Miles

Steven G. Johnson

Apr 26, 2016, 12:05:00 PM
to julia-users


On Monday, April 25, 2016 at 4:49:40 PM UTC-4, Jonas Kersulis wrote:
Can someone explain why qrfact is faster and requires less memory than lufact for square, sparse matrices? LU is less computationally complex than QR, and with decent pivoting LU should be able to preserve sparsity better than QR, so I'm confused by what I'm seeing in practice.

50% nonzeros is not sparse enough to exploit sparsity effectively, and even 10% nonzeros is probably too much.  None of the sparse algorithms are really going to work particularly well in such cases; you are better off with dense matrices.    I'm not sure comparing the sparse LU and QR algorithms in such cases is particularly meaningful.

A more realistic application of sparse matrices would be something like discretizing a PDE on a 2d mesh.  A 100x100 grid with nearest-neighbor interactions is a 10000x10000 matrix, with about 5*10000 nonzero entries, corresponding to 5/10000 = 0.05% sparsity.
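To make that arithmetic concrete, here is one standard way to assemble such a matrix (a 5-point finite-difference Laplacian built from the 1D stencil via Kronecker products) and check its density; the helper name is made up for illustration:

```julia
using SparseArrays, LinearAlgebra

# 5-point Laplacian on an n-by-n grid: Kronecker sum of the 1D
# tridiagonal stencil with itself. (Illustrative helper function.)
function laplacian2d(n)
    D = spdiagm(-1 => fill(-1.0, n - 1),
                 0 => fill( 2.0, n),
                 1 => fill(-1.0, n - 1))
    Id = sparse(1.0I, n, n)
    kron(Id, D) + kron(D, Id)
end

A = laplacian2d(100)               # 10000 x 10000 matrix
density = nnz(A) / length(A)       # about 5/10000, i.e. ~0.05%
```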

(There is a famous Wilkinson quote: "A sparse matrix is any matrix with enough zeros that it is worthwhile to take advantage of them."   You don't have enough zeros to be worth exploiting.)

Steven G. Johnson

Apr 26, 2016, 12:06:20 PM
to julia-users


On Tuesday, April 26, 2016 at 12:05:00 PM UTC-4, Steven G. Johnson wrote:
A more realistic application of sparse matrices would be something like discretizing a PDE on a 2d mesh.  A 100x100 grid with nearest-neighbor interactions is a 10000x10000 matrix, with about 5*10000 nonzero entries, corresponding to 5/10000 = 0.05% sparsity.

(Note also that the sparsity is 5/n^2 for an nxn grid, so it gets more sparse as the matrix gets larger.)

Kristoffer Carlsson

Apr 26, 2016, 12:40:50 PM
to julia-users
For a 200 x 200 grid with a 2D elasticity PDE problem, I get 2.4 seconds for lufact and 8.2 seconds for qrfact. The resulting stiffness matrix has a density of ~0.00022.

Kristoffer Carlsson

Apr 26, 2016, 12:46:29 PM
to julia-users
For completeness, cholesky factorization takes about 0.7 seconds.

Steven G. Johnson

Apr 26, 2016, 1:27:44 PM
to julia-users
I should also mention that the timings can be quite different (probably worse) for an sprand matrix than for a matrix that comes from discretizing a mesh, even with the same number of nonzeros.    How well sparse-direct algorithms work depends a lot on the structure of the sparsity pattern.

Jonas Kersulis

Apr 26, 2016, 6:55:08 PM
to julia-users
Good point. The matrices I actually want to factorize have about 1% nonzeros. I changed the density parameter to that value and re-ran the experiment. Now LU beats QR.



Jonas Kersulis

Apr 26, 2016, 6:57:10 PM
to julia-users
Thanks for the tip! I'll try this out.