Robust Inner Products

Christoph Ortner

Feb 1, 2015, 6:37:13 AM
to julia...@googlegroups.com

I was delighted to find that Julia has robust summation algorithms implemented in Base.

In my code I also need a robust inner product. Is that implemented as well? I couldn't find it, but it is easy enough to write myself (trivially sum_kbn(x.*y), though this allocates an extra temporary array).
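
For illustration, a minimal allocation-free sketch of what I mean (the name dot_kbn is made up, Float64-like element types are assumed, and only the summation is compensated, not the rounding of the individual products):

    # Neumaier (KBN) compensated summation of the products x[i]*y[i],
    # avoiding the temporary array created by sum_kbn(x .* y).
    function dot_kbn(x, y)
        length(x) == length(y) || throw(DimensionMismatch("x and y must have equal length"))
        s = zero(promote_type(eltype(x), eltype(y)))   # running sum
        c = zero(s)                                    # compensation term
        @inbounds for i = 1:length(x)
            p = x[i] * y[i]
            t = s + p
            # accumulate the low-order bits lost in computing s + p
            c += abs(s) >= abs(p) ? ((s - t) + p) : ((p - t) + s)
            s = t
        end
        return s + c
    end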

But are there other similar summation-type algorithms that would benefit from this treatment, and is there any interest in this?

Christoph

Simon Byrne

Feb 1, 2015, 8:22:03 AM
to julia...@googlegroups.com
If you wanted to implement such an algorithm, you would need to robust-ify the multiplication as well, using a "two-product" style algorithm: this paper goes into a lot of detail:

Alternatively, you could use full double-double arithmetic: see
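
For illustration, a rough sketch of the "two-product" route, in the spirit of Ogita, Rump & Oishi's Dot2 algorithm, using fma to recover the exact rounding error of each product (the names two_sum, two_prod and dot2 are illustrative, not an existing API; assumes non-empty Float64 vectors):

    # Error-free transformation of an addition (Knuth's TwoSum).
    function two_sum(a, b)
        s = a + b
        z = s - a
        return s, (a - (s - z)) + (b - z)
    end

    # Error-free transformation of a product (barring over/underflow).
    function two_prod(a, b)
        p = a * b
        return p, fma(a, b, -p)
    end

    # Compensated ("Dot2"-style) dot product.
    function dot2(x, y)
        p, s = two_prod(x[1], y[1])
        @inbounds for i = 2:length(x)
            h, r = two_prod(x[i], y[i])
            p, q = two_sum(p, h)
            s += q + r            # accumulate all the error terms
        end
        return p + s
    end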

Simon

Christoph Ortner

Feb 1, 2015, 10:20:29 AM
to julia...@googlegroups.com
thanks for the suggestions; indeed, Rump's paper is my main reference for these things :)

Simon Byrne

Feb 1, 2015, 10:38:26 AM
to julia...@googlegroups.com
I realise I didn't actually answer your question: I can't speak as to whether it will be accepted in Base (you will probably have to open an issue or pull request to start a discussion), but at the very least it would be useful to have it in a package somewhere. If you don't want to create your own, I'd be happy to incorporate it into DoubleDouble.jl.

Simon

Christoph Ortner

Feb 1, 2015, 3:23:13 PM
to julia...@googlegroups.com
many thanks - I will implement it first and then see where it could go.
    Christoph


Steven G. Johnson

Feb 2, 2015, 11:59:19 AM
to julia...@googlegroups.com
It might be nice to submit a patch to OpenBLAS to make their dot functions use pairwise summation; this is almost as accurate as KBN summation but with negligible performance penalty (for a large base case), so it should be possible to put together an attractive pull request.

For Base, currently sum_kbn requires an AbstractArray; for use-cases like this it would be nice to support a variant of the form sum_kbn(function, iterator), so you could do

     sum_kbn(pair -> conj(pair[1])*pair[2], zip(a, b))

in order to compute the kbn dot product of a and b.
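
A rough sketch of what such a variant could look like (the signature is just the one proposed here, not something in Base; written for real Float64-valued terms to keep it short):

    # Neumaier-compensated summation of f applied to each element of an
    # arbitrary iterator.
    function sum_kbn(f, itr)
        s = 0.0
        c = 0.0
        for el in itr
            v = f(el)
            t = s + v
            c += abs(s) >= abs(v) ? ((s - t) + v) : ((v - t) + s)
            s = t
        end
        return s + c
    end

    # e.g. for real vectors a and b:
    # sum_kbn(p -> p[1]*p[2], zip(a, b))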

Christoph Ortner

Feb 3, 2015, 4:48:20 AM
to julia...@googlegroups.com

On Monday, 2 February 2015 16:59:19 UTC, Steven G. Johnson wrote:
It might be nice to submit a patch to OpenBLAS to make their dot functions use pairwise summation; this is almost as accurate as KBN summation but with negligible performance penalty (for a large base case), so it should be possible to put together an attractive pull request.

interesting - but wouldn't performance suffer, or does nobody care about Level-1 BLAS performance? (this is really outside my domain)

 
For Base, currently sum_kbn requires an AbstractArray; for use-cases like this it would be nice to support a variant of the form sum_kbn(function, iterator), so you could do

     sum_kbn(pair -> conj(pair[1])*pair[2], zip(a, b))

in order to compute the kbn dot product of a and b.

For my own applications I really need something much better than pairwise summation, which has error ~ \eps log(n) \sum |x[i]|, so I will try this, but I'm afraid your syntax goes over my head. I suppose this would require rewriting sum_kbn first?

Thanks, Christoph

 

Steven G. Johnson

Feb 3, 2015, 6:53:36 PM
to julia...@googlegroups.com


On Tuesday, February 3, 2015 at 4:48:20 AM UTC-5, Christoph Ortner wrote:

On Monday, 2 February 2015 16:59:19 UTC, Steven G. Johnson wrote:
It might be nice to submit a patch to OpenBLAS to make their dot functions use pairwise summation; this is almost as accurate as KBN summation but with negligible performance penalty (for a large base case), so it should be possible to put together an attractive pull request.

interesting - but wouldn't performance suffer, or does nobody care about Level-1 BLAS performance? (this is really outside my domain)

No, there is no performance penalty.  (Or rather, the performance penalty can easily be made 1% or less…basically unmeasurable.)   That is the whole advantage of pairwise summation — you get nearly the accuracy of Kahan summation (with only a logarithmic penalty) with nearly the performance of naive summation.   That's why pairwise sums are the default algorithm for Julia's sum function (and cumsum etc.).
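
As a toy illustration only (not the actual OpenBLAS kernel, and with an arbitrary, untuned base-case size), a pairwise-summed dot product looks like:

    # Pairwise (recursive) dot product: naive accumulation below the base
    # case, recursive splitting above it.
    function dot_pairwise(x, y, lo=1, hi=length(x))
        n = hi - lo + 1
        if n <= 128                      # arbitrary base-case size
            s = zero(promote_type(eltype(x), eltype(y)))
            @inbounds @simd for i = lo:hi
                s += x[i] * y[i]
            end
            return s
        end
        mid = lo + div(n, 2) - 1
        return dot_pairwise(x, y, lo, mid) + dot_pairwise(x, y, mid + 1, hi)
    end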

Steven G. Johnson

Feb 3, 2015, 7:04:13 PM
to julia...@googlegroups.com
On Tuesday, February 3, 2015 at 4:48:20 AM UTC-5, Christoph Ortner wrote: 
For my own applications I really need something much better than pairwise summation, which has error ~ \eps log(n) \sum |x[i]|, so I will try this, but I'm afraid your syntax goes over my head. I suppose this would require rewriting sum_kbn first?

My understanding is that every summation algorithm (in bounded precision) has error that grows as sum |x[i]| multiplied by some prefactor, including compensated/Kahan summation.   For naive summation the prefactor grows as O(n), for pairwise the prefactor is O(log n), and for Kahan the prefactor is O(1).

See, e.g., Higham's "The accuracy of floating point summation" article (SIAM J. Sci. Comput., 1993), equation (3.11), for the forward error bound on compensated summation.

The basic reason for this is that the forward error is constrained by the condition number of the summation function (which is a property of the mathematical function in infinite precision, not a property of floating-point arithmetic or of any particular algorithm).

If you need more accuracy than that, you need to go to arbitrary precision (either via BigFloat or via something more fancy/adaptive like Shewchuk's algorithms).
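
For reference, the standard forward-error bounds all share the same shape; roughly, to first order in \eps (the precise constants are in Higham's article):

    |S_computed - \sum_i x[i]|  <=  C_n \eps \sum_i |x[i]|

        C_n ~ n - 1       naive recursive summation
        C_n ~ log2(n)     pairwise summation
        C_n ~ 2           compensated (Kahan) summation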

Christoph Ortner

Feb 4, 2015, 10:00:46 AM
to julia...@googlegroups.com
There are algorithms with errors along the lines of
     n (log(n) \eps)^i
(where I assumed that |x_i| ~ 1 and \sum x_i ~ 1), which is what I was looking into. With these I can go to n >> \eps^{-1}.

But maybe something like BigFloat would be more practical. I'll look into that as well - thanks.
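
(For reference, a minimal BigFloat fallback could be as simple as the sketch below; dot_big is just an illustrative name, and rounding the result back to Float64 at the end is one of several possible choices.)

    # Accumulate the dot product in BigFloat (256-bit precision by default)
    # and round once at the end.  Much slower than a compensated algorithm,
    # but the accuracy is limited only by the chosen precision.
    function dot_big(x, y)
        s = big(0.0)
        for i = 1:length(x)
            s += big(x[i]) * big(y[i])
        end
        return convert(Float64, s)
    end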

Christoph