Newsgroups: comp.lang.functional
From: Jerzy Karczmarczuk <karc...@info.unicaen.fr>
Date: Wed, 31 Oct 2001 13:02:29 +0100
Local: Wed, Oct 31 2001 7:02 am
Subject: Non-local arithmetic. Was: Help! Which language is it?
Jón Fairbairn comments on the query about a language dealing with "probabilistic
numbers":

> David Phillip Oster <os...@ieee.org> writes:
> > Yes, it does sound like interval analysis, but interval analysis is the
> > simple cousin of the language I'm looking for. [. . .] In the
> > language I'd heard about, you could specify the exact shape of the
> > probability distribution curve.

> This is surely something that one could achieve fairly easily in a
> strongly-typed functional language like Haskell? Represent the
> number-analogues using functions and compute with those.
> That might be a bit slow, but it would get the principle down. There
> are probably more efficient ways of representing things, and laziness
> might come in useful.

I am not sure about that last point.

Actually, I did it: a long time ago I implemented an arithmetic package
like that, just for fun.
Let's forget about the annoying problems I had lifting arithmetic
(+, *, etc.) to functional objects in Haskell; enough to say that Num
required Eq, and comparing functions was, hm... not so kosher.
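For illustration, here is a minimal sketch (my guess at the kind of pointwise lifting involved, not the actual package) of arithmetic on functions. Under Haskell 98, Num had Eq and Show superclasses, which is exactly the "not so kosher" part; in GHC 7.4 and later those superclasses are gone and the instance compiles without tricks:

```haskell
-- Lift arithmetic pointwise to functions: (f + g) x = f x + g x.
-- A hypothetical sketch, not the original package.
instance Num b => Num (a -> b) where
  f + g       = \x -> f x + g x
  f - g       = \x -> f x - g x
  f * g       = \x -> f x * g x
  negate f    = negate . f
  abs f       = abs . f
  signum f    = signum . f
  fromInteger = const . fromInteger   -- literals become constant functions
```

With this instance, `sin + cos` is itself a function, and `(sin + cos) 0` evaluates to `1.0`.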

If f, g are probability distributions, do you know what you have to do in order
to compute f+g? You have to compute the convolution integral of f and g.
Multiplication is even worse. Division? If the divisor's support includes zero,
the result blows up and you end up with an infinite interval. Plenty of rather
costly numerics. Laziness won't help you. It is easier with constant intervals,
where, with the conservative approach to interval combination, you take unions...
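To see where the cost comes from, here is a toy sketch (a hypothetical representation: densities of independent variables sampled on a uniform integer grid). Adding the variables means convolving the density lists, which is quadratic work for every single `+`:

```haskell
-- Probabilities at grid points 0, 1, 2, ...  (hypothetical representation)
type Dist = [Double]

-- Density of X+Y is the discrete convolution of the densities of X and Y.
-- Note the cost: O(n*m) multiplications per addition.
convolve :: Dist -> Dist -> Dist
convolve f g =
  [ sum [ f !! j * g !! (k - j)
        | j <- [0 .. k], j < length f, k - j < length g ]
  | k <- [0 .. length f + length g - 2] ]
```

For example, adding two fair coins: `convolve [0.5, 0.5] [0.5, 0.5]` gives `[0.25, 0.5, 0.25]` — and every further arithmetic operation repeats this kind of work.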

That's why, from the practical point of view, interval arithmetic is not so
popular. The intervals grow rather fast with every operation performed, and
a few operations suffice to produce results that are not very significant.
(I won't be categorical; my friends hate intervals, but the organizers of this
conference:

will tell you that intervals will save the world, and that the critics are
simply not competent enough.)
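The growth is easy to demonstrate with a toy interval type (a sketch of standard conservative interval arithmetic, not any particular library):

```haskell
-- Closed interval [lo, hi]; conservative (outward-bounding) arithmetic.
data Interval = Iv Double Double deriving (Eq, Show)

instance Num Interval where
  Iv a b + Iv c d = Iv (a + c) (b + d)
  Iv a b - Iv c d = Iv (a - d) (b - c)
  Iv a b * Iv c d = Iv (minimum ps) (maximum ps)   -- all endpoint products
    where ps = [a*c, a*d, b*c, b*d]
  negate (Iv a b) = Iv (negate b) (negate a)
  abs iv@(Iv a b)
    | a >= 0    = iv
    | b <= 0    = negate iv
    | otherwise = Iv 0 (max (negate a) b)
  signum (Iv a b) = Iv (signum a) (signum b)
  fromInteger n   = Iv (fromInteger n) (fromInteger n)

width :: Interval -> Double
width (Iv a b) = b - a
```

Starting from `Iv 0.9 1.1` (width 0.2), `x*x` already has width about 0.4 and `x*x*x` about 0.6: the uncertainty compounds at every step, which is the growth complained about above.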

You can do better, of course. For example, postulate that all distributions
*are* Gaussian, and construct the best Gaussian fit even when you divide
those distributions, neglecting the fact that in principle you might divide
by zero. Then the variances add, and the standard deviations, which are the
square roots of the variances, grow more slowly.
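A sketch of that Gaussian idea (hypothetical: first-order error propagation for multiplication is one common choice, not necessarily what the old package did):

```haskell
-- Every quantity is forced to be Gaussian: just a mean and a variance.
data Gauss = G { mean :: Double, var :: Double } deriving (Eq, Show)

instance Num Gauss where
  G m1 v1 + G m2 v2 = G (m1 + m2) (v1 + v2)   -- variances add (independence)
  G m1 v1 - G m2 v2 = G (m1 - m2) (v1 + v2)
  G m1 v1 * G m2 v2 =                          -- first-order propagation
    G (m1 * m2) (m2*m2*v1 + m1*m1*v2)
  negate (G m v)    = G (negate m) v
  abs    (G m v)    = G (abs m) v              -- crude, ignores sign flips
  signum (G m _)    = G (signum m) 0
  fromInteger n     = G (fromInteger n) 0      -- exact constants

stdev :: Gauss -> Double
stdev = sqrt . var
```

Adding independent errors with standard deviations 3 and 4 gives standard deviation 5, not 7 — the slower, square-root growth mentioned above.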

I can't say much about the usability of such a system; I didn't even have the
courage to propose it to a workshop, although I gave the stuff to my physicist
friends, suggesting that it might be useful for automated error analysis...

And anyway, I found out that lazily delaying all those convolutions, etc.
does no good; it just clogs the memory with thunks.

==
Oh. George Russel proposes the article by Kahan; I wanted to cite it, and also
a talk given at the last PPDP, but all of that concerns rounding errors rather
than a priori fuzzy numbers, whose indeterminacy is much bigger than the
rounding bits.

There are many articles about probabilistic intervals. The latest I found is here:

the only problem (yours!) is that you would have to read Cyrillic...

Jerzy Karczmarczuk
Caen, France