Julia non-negative least squares nnls is 100x slower than R nnls


David C Cohen

Jul 11, 2014, 12:28:15 PM
to julia...@googlegroups.com
Hi,

I understand that Julia and R nnls do not use the same algorithm. A 400x300 problem on my machine takes about 0.02 s with R's nnls and about 2 s with Julia's nnls.

The documentation for nnls is rather poor:

?nnls
nnls (generic function with 2 methods)

Am I missing something here, or is this implementation meant to be slow?

Keith Campbell

Jul 11, 2014, 2:09:47 PM
to julia...@googlegroups.com
Implementation details can make a very big difference in the performance of Julia code.  You can see some considerations at http://docs.julialang.org/en/latest/manual/performance-tips/.  

Also, if you post your test code, perhaps as a Gist, that will make it easier for folks to provide useful feedback.
cheers,
Keith

Kevin Squire

Jul 11, 2014, 2:31:30 PM
to julia...@googlegroups.com
It would also be useful to know how you are timing your code. 
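For example, a minimal timing sketch (A, y, and the solve call here are placeholders for your actual setup) that runs the function once before measuring, so JIT compilation isn't counted:

```julia
# Placeholder problem data; substitute your own A and y.
A = rand(400, 300)
y = rand(400)

solve() = A \ y      # stand-in for the actual nnls call being timed

solve()              # first call includes JIT compilation
@time solve()        # second call measures steady-state runtime
```

If the ~2 s figure came from a single first call, a large part of it may just be compilation.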

Cheers,
  Kevin 

Sam L

Jul 12, 2014, 3:56:02 PM
to julia...@googlegroups.com
Base Julia does not have an implementation of NNLS, but there is a version in Optim.jl. Is that what you are using? I assume so, because that one was the easiest one for me to find.

The issue with the documentation is that there is currently no standard way of adding package documentation to the in-REPL doc system. Optim.nnls is documented on the package's GitHub page.

As you say, R and Julia use different algorithms for nnls. R's is the Lawson-Hanson algorithm, which is tailored to the nnls problem; from what I can tell, they're using the version from netlib, possibly with a few changes. The version in Optim.jl applies a more general box-constrained minimization algorithm. Optim.nnls was also printing a lot of deprecation warnings, which really slowed things down. I just fixed that; Pkg.checkout("Optim") should give you the fixed version.

Some timings:

R:
> system.time(for (i in 1:100) {nnls(A, y)})
   user  system elapsed
  1.036   0.017   1.054

julia Optim.nnls:
julia> function test() for i in 1:100; Optim.nnls(A,y) end end
test (generic function with 1 method)

julia> test(); @time test()
elapsed time: 12.383192097 seconds (2330486480 bytes allocated, 13.29% gc time)

So after fixing the deprecation warnings, Julia is only about 12x slower. It's possible that the difference could be explained by the different choice of algorithms, but I'm not sure.

It would be interesting to see how an optimized implementation of Lawson-Hanson (or one of the variants on the Wikipedia page) does. (I tried to implement the algorithm from the pseudocode on Wikipedia, but I don't think it is correct. I've seen Lawson-Hanson pseudocode in a textbook somewhere, but I don't recall where.)
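For the curious, here is a naive sketch of the Lawson-Hanson active-set iteration translated from the textbook pseudocode; the function name, tolerance, and iteration cap are my own choices, and this is a starting point rather than a tuned replacement for the netlib Fortran:

```julia
# Naive sketch of the Lawson-Hanson active-set algorithm for
# min ||A*x - y||^2 subject to x .>= 0. Tolerances and the iteration
# cap are arbitrary choices, not taken from the netlib code.
function nnls_lh(A::AbstractMatrix, y::AbstractVector;
                 tol = 1e-10, maxiter = 3 * size(A, 2))
    n = size(A, 2)
    x = zeros(n)
    P = falses(n)                     # passive set: variables free to move
    w = A' * (y - A * x)              # negative gradient of the objective
    iter = 0
    while any(.!P) && maximum(w[.!P]) > tol && iter < maxiter
        iter += 1
        # bring the constrained variable with the largest gradient into P
        j = findall(.!P)[argmax(w[.!P])]
        P[j] = true
        s = zeros(n)
        s[P] = A[:, P] \ y            # unconstrained LS on the passive set
        # inner loop: step back toward x until the passive solution is feasible
        while any(P) && minimum(s[P]) <= 0
            Q = P .& (s .<= 0)
            alpha = minimum(x[Q] ./ (x[Q] .- s[Q]))
            x .+= alpha .* (s .- x)
            P[x .<= tol] .= false     # variables driven to zero leave P
            s .= 0
            s[P] = A[:, P] \ y
        end
        x = s
        w = A' * (y - A * x)
    end
    return x
end
```

No claim that this matches R's implementation in speed; it spends most of its time in the repeated least-squares solves, which the Fortran code updates incrementally instead of recomputing.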

Oliver Lylloff

Jul 14, 2014, 3:56:44 AM
to julia...@googlegroups.com
Hello,

I've been meaning to have a look at this but have not had the time. Here's a subgradient projection approach to the nnls problem [Nocedal & Wright, "Numerical Optimization"].
It's not a fair comparison between Optim.nnls and the subgradient projection algorithm at the moment, but it should work as a discussion starter. I'm not an R user, so if anyone wants to do a comparison, be my guest. The problem, however, is that there is no convergence criterion in the subgradient projection algorithm, only a maximum number of iterations.
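As a rough illustration of the kind of stopping rule that could replace the fixed iteration count, here is a projected-gradient sketch; the function name, step size, and tolerance are my own choices, not taken from the code above:

```julia
using LinearAlgebra

# Sketch of projected gradient descent for min ||A*x - y||^2 with x .>= 0,
# stopping when the projected step stalls (an approximate KKT test) rather
# than after a fixed number of iterations.
function nnls_pg(A, y; tol = 1e-8, maxiter = 10_000)
    t = 1 / opnorm(A)^2              # 1/L, L = Lipschitz const. of the gradient
    x = zeros(size(A, 2))
    for _ in 1:maxiter
        g = A' * (A * x - y)         # gradient of 0.5*||A*x - y||^2
        xnew = max.(x .- t .* g, 0)  # gradient step, then project onto x .>= 0
        # converged when the projected step is negligible
        norm(xnew - x) <= tol * max(norm(x), 1) && return xnew
        x = xnew
    end
    return x
end
```

Since the objective is a smooth quadratic, the plain gradient works here; a subgradient is only needed for nonsmooth objectives.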

Best regards,
Oliver