
Are floating-point numbers rational numbers?


David W. Cantrell

Apr 1, 2000, 3:00:00 AM
For previous discussions on this topic between Jim Carr and me,
please refer to our posts toward the end of the parent
thread "Division by zero =".

In article <8c39sd$k0h$1...@news.fsu.edu>, j...@dirac.scri.fsu.edu
(Jim Carr) wrote:
>In article <8b6qea$f6g$1...@nnrp1.deja.com>
>David W. Cantrell <DWCan...@aol.com> writes:
>>
>>Even supposing that we ignore both +Inf and -Inf and the NaNs,
>>the remaining floating-point numbers are not rational.
>
> All of the ones I use are ratios of integers.

Perhaps it's understandable that you think of them that way. I
never said that it's unreasonable to think of such floating-
point numbers as _approximating_ rational numbers. Certainly,
given a specific such floating-point number, its printed
representation in binary or decimal form looks exactly like a
standard representation of a rational number in such form.

>>If you are interested in continuing this discussion, think
>>about the definitions of "floating-point number" and "rational
>>number" next.
>
> I thought of them first.

Good. Then it should be obvious to you that the definitions are
very different. But perhaps their difference by itself would not
necessarily mean that there was any _essential_ difference
between such floating-point numbers and rational numbers.

>>Then you must ask yourself "If two sets of
>>numbers are defined in different ways, what would justify
>>saying that one set is in essence the same as the other?"
>
> How about if one of them satisfies the definition of the other?

I'm not quite sure what you mean by "satisfies". Could you
please give the definitions and then show precisely why you
think that "one of them satisfies the definition of the other",
assuming you think that to be the case?

Regards,
David Cantrell



Dave Seaman

Apr 1, 2000, 3:00:00 AM
In article <37c23eec...@usw-ex0105-040.remarq.com>,
David W. Cantrell <dwcantrel...@aol.com.invalid> wrote:
>For previous discussions on this topic between Jim Carr and me,
>please refer to our posts toward the end of the parent
>thread "Division by zero =".

>In article <8c39sd$k0h$1...@news.fsu.edu>, j...@dirac.scri.fsu.edu
>(Jim Carr) wrote:
>>In article <8b6qea$f6g$1...@nnrp1.deja.com>
>>David W. Cantrell <DWCan...@aol.com> writes:

>>>Even supposing that we ignore both +Inf and -Inf and the NaNs,
>>>the remaining floating-point numbers are not rational.

>> All of the ones I use are ratios of integers.

>never said that it's unreasonable to think of such floating-
>point numbers as _approximating_ rational numbers. Certainly,
>given a specific such floating-point number, its printed
>representation in binary or decimal form looks exactly like a
>standard representation of a rational number in such form.

There is a natural injection from the floating point numbers into the
rationals, but it is not a homomorphism. The rationals obey the
associative law, but the floating point numbers do not.
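Seaman's non-associativity claim is easy to check in any IEEE 754 environment; a minimal sketch in Python (binary64 doubles), not part of the original post:

```python
# Floating-point addition is not associative: the same three doubles,
# grouped differently, round to different results.
left = (0.1 + 0.2) + 0.3
right = 0.1 + (0.2 + 0.3)
assert left != right  # 0.6000000000000001 vs 0.6
```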

--
Dave Seaman dse...@purdue.edu
Amnesty International calls for new trial for Mumia Abu-Jamal
<http://mojo.calyx.net/~refuse/mumia/021700amnesty.html>

John Savard

Apr 1, 2000, 3:00:00 AM
On Sat, 01 Apr 2000 06:04:02 -0800, David W. Cantrell
<dwcantrel...@aol.com.invalid> wrote, in part:

>Perhaps it's understandable that you think of them that way. I
>never said that it's unreasonable to think of such floating-
>point numbers as _approximating_ rational numbers. Certainly,
>given a specific such floating-point number, its printed
>representation in binary or decimal form looks exactly like a
>standard representation of a rational number in such form.

Each number that can be represented exactly by a floating-point number
in a typical computer is indeed a rational number. The converse,
however, is not the case.

For a computer that is decimal instead of binary, the floating-point
numbers that are possible are numbers like

23.4167, 0.123456 * 10^-27, 8.59612 * 10^32

and all of them are either integers or rational numbers with some
power of 10 in the denominator.

On today's binary computers, therefore, all values which are exactly
represented by floating point values are either integers or rational
numbers with some power of 2 in the denominator.

(Of course, even given this restricted definition, the converse is
still not true; the number of digits, and the size of the power, are
also limited.)
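Savard's point can be checked directly in Python, whose `float.as_integer_ratio` returns the exact rational value of a stored double (a sketch assuming IEEE binary64, not from the original post):

```python
# Every finite binary float is exactly an integer over a power of 2.
num, den = (0.1).as_integer_ratio()
assert den == 2 ** 55          # the denominator is a power of two
assert (num, den) != (1, 10)   # so the stored value is near, but not, 1/10
assert (0.5).as_integer_ratio() == (1, 2)  # 0.5, by contrast, is exact
```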

John Savard (teneerf <-)
http://www.ecn.ab.ca/~jsavard/index.html

E. Robert Tisdale

Apr 1, 2000, 3:00:00 AM
Dave Seaman wrote:

> The rationals obey the associative law,
> but the floating point numbers do not.

You are confused.
Whether or not operations on floating point numbers
are associative is a property of computer programming languages
not computer arithmetic itself. In general,
an optimizing compiler may reorder the operations
in an expression as long as the result is at least as accurate.

You should not presume that any set of fixed precision
floating point number is closed under the usual operations
of floating point arithmetic. Most floating point units
include extended precision representations
for temporary results in expression evaluation.


Dave Seaman

Apr 1, 2000, 3:00:00 AM
In article <38E625C1...@netwood.net>,
E. Robert Tisdale <ed...@netwood.net> wrote:
>Dave Seaman wrote:

>> The rationals obey the associative law,
>> but the floating point numbers do not.

>You are confused.
>Whether or not operations on floating point numbers
>are associative is a property of computer programming languages
>not computer arithmetic itself. In general,
>an optimizing compiler may reorder the operations
>in an expression as long as the result is at least as accurate.

No, you are confused. I am not talking about programming languages or
about optimizing compilers. The fact that compilers sometimes assume
associativity even when it does not apply is beside the point. That
assumption would not be a problem if it were not for the fundamental fact
that floating point arithmetic is not associative in the first place.
That is a property of floating point arithmetic itself, not of compilers
or of programming languages.

>You should not presume that any set of fixed precision
>floating point number is closed under the usual operations
>of floating point arithmetic. Most floating point units
>include extended precision representations
>for temporary results in expression evaluation.

I did not presume anything. I simply pointed out that the natural
injection is not a homomorphism, and I provided one of several ways to
demonstrate this. I could also have pointed out that multiplicative
inverses do not always exist, and that the cancellation laws do not hold.

Extended precision arithmetic is irrelevant. Extended precision is still
finite precision, and the mapping is still not a homomorphism.
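Seaman's remark about missing multiplicative inverses is easy to exhibit; a sketch in Python (binary64), not part of the original post, showing that some small integers x have no float y with x*y == 1:

```python
# In the rationals every nonzero x has an exact inverse; in floating point,
# x * fl(1/x) can round to something other than 1.
no_exact_inverse = [x for x in range(1, 100)
                    if float(x) * (1.0 / float(x)) != 1.0]
assert no_exact_inverse  # nonempty in binary64 (49 is a classic example)
```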

E. Robert Tisdale

Apr 1, 2000, 3:00:00 AM
Dave Seaman wrote:

Oh, I am sorry. I thought that this discussion was relevant
to determining how many angles can stand on the head of a pin.


David W. Cantrell

Apr 1, 2000, 3:00:00 AM
In article <8c57b7$r...@seaman.cc.purdue.edu>,
a...@seaman.cc.purdue.edu (Dave Seaman) wrote:
>In article <37c23eec...@usw-ex0105-040.remarq.com>,
>David W. Cantrell <dwcantrel...@aol.com.invalid> wrote:
>>For previous discussions on this topic between Jim Carr and me,
>>please refer to our posts toward the end of the parent
>>thread "Division by zero =".

>There is a natural injection from the floating point numbers
>into the rationals, but it is not a homomorphism. The
>rationals obey the associative law, but the floating point
>numbers do not.

Of course, Dave. I mentioned the fact that floating-point
addition is not associative in my earlier posts mentioned above.
Jim was apparently already familiar with that fact and many
others which clearly show that floating-point numbers behave
differently from rational numbers. I believe that he contends
that we should be talking about just the numbers themselves,
rather than about how they behave. I, on the other hand, contend
that it normally makes little sense to compare just sets
of "numbers" themselves, as divorced from the appropriate number
_systems_.

Regards,
David C.

Dave Seaman

Apr 1, 2000, 3:00:00 AM
In article <38E62DE3...@netwood.net>,
E. Robert Tisdale <ed...@netwood.net> wrote:
>Oh, I am sorry. I thought that this discussion was relevant
>to determining how many angles can stand on the head of a pin.

Acute angles, or obtuse?

Dave Seaman

Apr 1, 2000, 3:00:00 AM
In article <04152673...@usw-ex0104-031.remarq.com>,
David W. Cantrell <dwcantrel...@aol.com.invalid> wrote:

>Of course, Dave. I mentioned the fact that floating-point
>addition is not associative in my earlier posts mentioned above.
>Jim was apparently already familiar with that fact and many
>others which clearly show that floating-point numbers behave
>differently from rational numbers. I believe that he contends
>that we should be talking about just the numbers themselves,
>rather than about how they behave. I, on the other hand, contend
>that it normally makes little sense to compare just sets
>of "numbers" themselves, as divorced from the appropriate number
>_systems_.

Ok, but I thought I was pointing out that one of the systems does not
"satisfy the definition of the other." The rationals are a field, but
the floating-point numbers are not even a ring. They are not even a
subring.

E. Robert Tisdale

Apr 1, 2000, 3:00:00 AM
What are floating-point numbers?

x = m*b^e

where the exponent e is an integer,
the base b is an integer greater than 1
and the mantissa m is a fraction
such that 1/2 <= |m| < 1.
A finite precision radix b mantissa m
may have a sign and magnitude representation

m = s*|m| = s*\sum_{i=1}^{p} d_i*b^{-i}

or a two's complement representation

m = s + \sum_{i=1}^{p} d_i*b^{-i}

where the sign s is in {-1, +1},
the precision p is a positive integer
and digit d_i is an integer such that
-b <= d_i < +b for signed digit representations or
0 <= d_i < +b for unsigned digit representations.
Mixed radix representations for the mantissa m
are possible but not commonly used.
Variable finite precision floating-point arithmetic
is common but no physical representation
of infinite precision floating-point (real) numbers
is possible.
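The sign-and-magnitude form above, with b = 2, is exactly what the standard library's frexp exposes; a Python sketch (not part of the original post) decomposing doubles into mantissa and exponent:

```python
import math

# Decompose doubles as x = m * 2**e with 1/2 <= |m| < 1, matching the
# definition above for base b = 2.
for x in [0.75, -3.5, 1.0e10]:
    m, e = math.frexp(x)
    assert 0.5 <= abs(m) < 1.0
    assert x == m * 2.0 ** e  # the reconstruction is exact
```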

Scientific notation is an example of variable precision
decimal floating-point and is often used as
an _approximate_ representation of real numbers.
Most computer architectures implement
fixed-precision binary floating-point arithmetic.
The meaning of computer arithmetic
is determined by the floating-point architecture
and/or the computer programming language.

So, to answer your question,
Yes, infinite precision floating-point numbers
are real numbers and can, of course,
represent all of the rational numbers as well.
But when people talk about floating-point numbers,
it is usually in the context of computer arithmetic and
they usually mean finite precision floating-point numbers
which can only represent a tiny subset of the rational numbers.
There are industry standards for fixed precision
floating-point computer arithmetic
but no strict mathematical definition applies.


David W. Cantrell

Apr 1, 2000, 3:00:00 AM
In article <8c5gc8$r...@seaman.cc.purdue.edu>,
a...@seaman.cc.purdue.edu (Dave Seaman) wrote:
>In article <04152673...@usw-ex0104-031.remarq.com>,
>David W. Cantrell <dwcantrel...@aol.com.invalid> wrote:
>
>>Of course, Dave. I mentioned the fact that floating-point
>>addition is not associative in my earlier posts mentioned
>>above.
>>Jim was apparently already familiar with that fact and many
>>others which clearly show that floating-point numbers behave
>>differently from rational numbers. I believe that he contends
>>that we should be talking about just the numbers themselves,
>>rather than about how they behave. I, on the other hand,
>>contend that it normally makes little sense to compare just
>>sets of "numbers" themselves, as divorced from the appropriate
>>number _systems_.
>
>Ok, but I thought I was pointing out that one of the systems
>does not "satisfy the definition of the other." The rationals
>are a field, but the floating-point numbers are not even a
>ring. They are not even a subring.

Exactly right. But as far as I can tell, Jim wants to think, not
of systems, but of the sets of numbers _per_se_, with no
consideration of how the numbers behave! I have been trying, to
no avail so far, to convince him that this makes little sense.

I would be very much surprised if you and I have any
disagreement in this thread. Indeed, one of the reasons that I
started a new thread, moving the discussion from "Division by
zero =", was to attract the attention of like-minded people,
such as you, so that Jim wouldn't think I was the only person in
the world who thought that floating-point numbers are not
rational numbers!

Dave Seaman

Apr 1, 2000, 3:00:00 AM
In article <099fc684...@usw-ex0102-015.remarq.com>,
David W. Cantrell <dwcantrel...@aol.com.invalid> wrote:

>>Ok, but I thought I was pointing out that one of the systems
>>does not "satisfy the definition of the other." The rationals
>>are a field, but the floating-point numbers are not even a
>>ring. They are not even a subring.

I'm not exactly sure what I meant by that last sentence, but I was
thinking about the fact that the natural injection obviously can't be a
ring homomorphism. More than that, it doesn't even behave like a ring
homomorphism (allowing for the fact that its domain is not a ring).

>Exactly right. But as far as I can tell, Jim wants to think, not
>of systems, but of the sets of numbers _per_se_, with no
>consideration of how the numbers behave! I have been trying, to
>no avail so far, to convince him that this makes little sense.

>I would be very much surprised if you and I have any
>disagreement in this thread. Indeed, one of the reasons that I
>started a new thread, moving the discussion from "Division by
>zero =", was to attract the attention of like-minded people,
>such as you, so that Jim wouldn't think I was the only person in
>the world who thought that floating-point numbers are not
>rational numbers!

I did not think you and I had any disagreement. I was merely attempting
(perhaps unsuccessfully) to shed a different light on the discussion.

Ed Hook

Apr 1, 2000, 3:00:00 AM
In article <38E625C1...@netwood.net>,
"E. Robert Tisdale" <ed...@netwood.net> writes:
|> Dave Seaman wrote:

|> > The rationals obey the associative law,
|> > but the floating point numbers do not.

|> You are confused.

I'd say that Dave is exactly correct ...

|> Whether or not operations on floating point numbers
|> are associative is a property of computer programming languages

I'm not aware of any programming languages that
address this question at all.

|> not computer arithmetic itself. In general,

The lack of associativity in floating point
arithmetic is *absolutely* a property of
floating point arithmetic. The hardware has
to deliver an answer to every question
posed to it, even if the _true_ answer (the one
that you get operating in the rational field)
cannot be represented by the hardware. That's
the basic problem and it's inherent in the
design of floating point arithmetic.



|> an optimizing compiler may reorder the operations
|> in an expression as long as the result is at least as accurate.

Many times, the optimizer makes no such promise
(since it can't really _know_ that the reordered
computation will be "at least as accurate" -- that
may well be data-dependent).

That's why "stable" numerical algorithms are so
much in demand -- you don't get wildly different
answers when asking the _same_ question ...



|> You should not presume that any set of fixed precision
|> floating point number is closed under the usual operations
|> of floating point arithmetic. Most floating point units
|> include extended precision representations
|> for temporary results in expression evaluation.

Ahhhh, but ... The whole _point_ is that the
floating point numbers that your computer is
manipulating *are* "closed under the usual
operations of floating point arithmetic".

--
Ed Hook | Copula eam, se non posit
MRJ Technology Solutions, Inc. | acceptera jocularum.
NAS, NASA Ames Research Center | I can barely speak for myself, much
Internet: ho...@nas.nasa.gov | less for my employer

Virgil

Apr 1, 2000, 3:00:00 AM

>>
>> How about if one of them satisfies the definition of the other?
>
>I'm not quite sure what you mean by "satisfies". Could you
>please give the definitions and then show precisely why you
>think that "one of them satisfies the definition of the other",
>assuming you think that to be the case?


Assuming that you will accept that both floating point and rational
numbers are special cases of real numbers, the "value" of every floating
point number has a representation as a quotient of integers, although
the reverse is not true.

If you believe that floats are not all rational, can you give me an
example of a floating point number which is not rational?

--
Virgil
vm...@frii.com

David W. Cantrell

Apr 1, 2000, 3:00:00 AM
In article <vmhjr-B36D94....@news.frii.com>, Virgil
<vm...@frii.com> wrote:
>
>>>How about if one of them satisfies the definition of the
>>>other?
>>
>>I'm not quite sure what you mean by "satisfies". Could you
>>please give the definitions and then show precisely why you
>>think that "one of them satisfies the definition of the other",
>>assuming you think that to be the case?
>
>Assuming that you will accept that both floating point and
>rational numbers are special cases of real numbers,

Of course I do not accept that! Not only are floating-point
numbers defined very differently from real numbers (i.e.,
floating-point numbers are not defined as equivalence classes of
Cauchy sequences of rationals or as Dedekind cuts), but - and
this is what is important - they behave differently from real
numbers!

>the "value" of every floating point number has a representation
>as a quotient of integers, although the reverse is not true.

I'd feel more comfortable if you'd said "nominal value". Anyway,
their representations do indeed look like rational numbers. But
so what?

>If you believe that floats are not all rational, can you give
>me an example of a floating point number which is not rational?

Sorry, but I consider this to be almost a meaningless question.
But, since I doubt that response will satisfy you, pick any
floating-point number you wish (other than one of the infinities
or NaNs). I would say that it is not a rational number (because
it does not behave like a rational number), despite the fact
that its nominal value certainly makes it appear to be a
rational number.

David Cantrell

David W. Cantrell

Apr 1, 2000, 3:00:00 AM
In article <38E6511D...@netwood.net>, "E. Robert Tisdale"
<ed...@netwood.net> wrote:
[snip]

>Variable finite precision floating-point arithmetic
>is common but no physical representation
>of infinite precision floating-point (real) numbers
>is possible.
>
>Scientific notation is an example of variable precision
>decimal floating-point and is often used as
>an _approximate_ representation of real numbers.
>Most computer architectures implement
>fixed-precision binary floating-point arithmetic.

It is also an attempt to _approximate_ real number arithmetic. I
have contended, since this discussion began (which was back in
the thread "Division by zero ="), that floating-point number
systems (like those conforming to IEEE 754) are designed to
attempt to _approximate_ the affinely extended real number
system.

>The meaning of computer arithmetic
>is determined by the floating-point architecture
>and/or the computer programming language.
>
>So, to answer your question,
>Yes, infinite precision floating-point numbers
>are real numbers and can, of course,
>represent all of the rational numbers as well.

(I thought "infinite precision floating-point numbers" was an
oxymoron!) Perhaps I am inadequately familiar with the
literature, but I have never heard of "infinite precision
floating-point numbers". Regardless, the discussion does not
concern such, but rather floating-point number systems which
have actually been implemented on digital computers.

>But when people talk about floating-point numbers,
>it is usually in the context of computer arithmetic and
>they usually mean finite precision floating-point numbers
>which can only represent a tiny subset of the rational numbers.

Well, they can only _approximate_ a tiny subset of the rational
numbers, so to speak.

>There are industry standards for fixed precision
>floating-point computer arithmetic
>but no strict mathematical definition applies.

Indeed, there is an internationally accepted standard, which I
mentioned previously.

Regards,

The Kellys

Apr 1, 2000, 3:00:00 AM
I am joining this thread late. Is there a good reference for the definition
you have in mind for floating-point numbers?

>
> Sorry, but I consider this to be almost a meaningless question.
> But, since I doubt that response will satisfy you, pick any
> floating-point number you wish (other than one of the infinities
> or NaNs). I would say that it is not a rational number (because
> it does not behave like a rational number), despite the fact
> that its nominal value certainly makes it appear to be a
> rational number.
>

David W. Cantrell

Apr 1, 2000, 3:00:00 AM
In article <8c650...@enews2.newsguy.com>, "The Kellys"
<adam...@colla.com> wrote:
>I am joining this thread late. Is there a good reference for
>the definition you have in mind for floating-point numbers?

For greatest convenience, look earlier in this thread for a post
by E. Robert Tisdale. He gave a definition which, at least at
first glance, looked good to me. Of course, I suppose that the
ultimate reference is the internationally accepted standard for
floating-point arithmetic (originally called IEEE 754, but now
also referred to as IEC something-or-other). I hope I'm wrong,
but I don't think it's available on the web.

Cheers,

cartoaje

Apr 1, 2000, 3:00:00 AM
In article <8c5bq1$r...@seaman.cc.purdue.edu>,
a...@seaman.cc.purdue.edu (Dave Seaman) wrote:

>>Oh, I am sorry. I thought that this discussion was relevant
>>to determining how many angles can stand on the head of a pin.
>
>Acute angles, or obtuse?

There was a typographical error in his post. I believe he meant
"angels."

The question of how many angels can dance on the head of a pin
was popular in phylosophical circles in the middle ages. I
believe the usual answer was an infinite number, since angels
have no substance.

Mihai

David W. Cantrell

Apr 1, 2000, 3:00:00 AM
In article <192a5ea0...@usw-ex0106-045.remarq.com>,
cartoaje <mcartoaj...@mat.ulaval.ca.invalid> wrote:
>In article <8c5bq1$r...@seaman.cc.purdue.edu>,
>a...@seaman.cc.purdue.edu (Dave Seaman) wrote:
>
>>>Oh, I am sorry. I thought that this discussion was relevant
>>>to determining how many angles can stand on the head of a pin.
>>
>>Acute angles, or obtuse?
>
>There was a typographical error in his post. I believe he meant
>"angels."

Perhaps you're being a bit obtuse. My guess was that the
original writer was trying to make a-cute variation, appropriate
for sci.math, on the standard medieval question. But, of course,
it could have been just a typogarphical error, I suppose.

>The question of how many angels can dance on the head of a pin
was popular in philosophical circles in the Middle Ages. I
>believe the usual answer was an infinite number, since angels
>have no substance.

Yes, but exactly _which_ infinite number?

Cheers,
David C.

David W. Cantrell

Apr 1, 2000, 3:00:00 AM
In article <FsDHz...@cwi.nl>, "Peter L. Montgomery" <Peter-
Lawrence....@cwi.nl> wrote:
>In article <8c57b7$r...@seaman.cc.purdue.edu>
>a...@seaman.cc.purdue.edu (Dave Seaman) writes:
>
>>There is a natural injection from the floating point numbers
>>into the rationals, but it is not a homomorphism. The
>>rationals obey the associative law, but the floating point
>>numbers do not.
>
>Huh? What are the mappings of NaN (not a number) and
>+-infinity? IEEE floating point has both +0 and -0 -- are their
>images different under your `natural injection'?

In the post which began this thread, the infinities and NaNs
were excluded from discussion. Of course, the point which you
make about the two "equal yet different" zeros is well taken. In
fact, I had thought of bringing it up myself, but had supposed
that, since the signed infinities were excluded from the
discussion, it would only be fair to exclude their reciprocals
as well.

Regards,
David Cantrell

Peter L. Montgomery

Apr 2, 2000, 4:00:00 AM
In article <8c57b7$r...@seaman.cc.purdue.edu>
a...@seaman.cc.purdue.edu (Dave Seaman) writes:

>There is a natural injection from the floating point numbers into the
>rationals, but it is not a homomorphism. The rationals obey the
>associative law, but the floating point numbers do not.

Huh? What are the mappings of NaN (not a number) and +-infinity?
IEEE floating point has both +0 and -0 -- are their images different
under your `natural injection'?
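The two IEEE zeros make Montgomery's point concrete: they compare equal, so any map into the rationals must send both to 0 and cannot distinguish them. A Python sketch (binary64), not part of the original post:

```python
import math

# +0.0 and -0.0 compare equal but carry different signs.
assert 0.0 == -0.0
assert math.copysign(1.0, 0.0) == 1.0
assert math.copysign(1.0, -0.0) == -1.0
```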

--
E = m c^2. Einstein = Man of the Century. Why the squaring?

Peter-Lawren...@cwi.nl Home: San Rafael, California
Microsoft Research and CWI

Dave Seaman

Apr 2, 2000, 4:00:00 AM
In article <FsDHz...@cwi.nl>,
Peter L. Montgomery <Peter-Lawren...@cwi.nl> wrote:
>In article <8c57b7$r...@seaman.cc.purdue.edu>
>a...@seaman.cc.purdue.edu (Dave Seaman) writes:

>>There is a natural injection from the floating point numbers into the
>>rationals, but it is not a homomorphism. The rationals obey the
>>associative law, but the floating point numbers do not.

> Huh? What are the mappings of NaN (not a number) and +-infinity?
>IEEE floating point has both +0 and -0 -- are their images different
>under your `natural injection'?

The context of the thread had already excluded the special cases. The
actual statement I was responding to was that "all the floating point
numbers are rational", and I was attempting to make that statement
precise, while also demonstrating its shortcomings.

V-man

Apr 2, 2000, 4:00:00 AM

On 1 Apr 2000, Ed Hook wrote:

> In article <38E625C1...@netwood.net>,
> "E. Robert Tisdale" <ed...@netwood.net> writes:
> |> Dave Seaman wrote:
>

> |> > The rationals obey the associative law,
> |> > but the floating point numbers do not.

I don't see why they don't. Can you give an example?

thanks

cartoaje

Apr 2, 2000, 4:00:00 AM
also sprach David W. Cantrell,

>Yes, but exactly _which_ infinite number?

I should have written, "an infinity of angels, since they have
no substance."

Now for the question, "what is the highest cardinal a set of
angels simultaneously dancing on the head of a pin can have?" I
don't believe medieval science would have understood that.

Mihai

Johannes H Andersen

Apr 2, 2000, 4:00:00 AM

V-man wrote:
>
> On 1 Apr 2000, Ed Hook wrote:
>
> > In article <38E625C1...@netwood.net>,
> > "E. Robert Tisdale" <ed...@netwood.net> writes:
> > |> Dave Seaman wrote:
> >
> > |> > The rationals obey the associative law,
> > |> > but the floating point numbers do not.
>
> I don't see why they don't. can yo give an example?
>
> thanks

Assume for the argument a 7 digit floating point representation.
Normally the accuracy is not exactly 7 digits because of the binary
representation in the computer.

a = 5*10^-7, b = 5*10^-7, c = 1

(a+b)+c = 1.000001, but a+(b+c) = 1 !

Johannes
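Johannes's 7-digit example can be reproduced with Python's decimal module, which rounds half-to-even like most floating-point hardware (an illustrative sketch, not part of the original post):

```python
from decimal import Decimal, getcontext

# Reproduce the 7-digit example above: the two groupings disagree.
getcontext().prec = 7
a = b = Decimal("5e-7")
c = Decimal(1)
assert (a + b) + c == Decimal("1.000001")
assert a + (b + c) == Decimal(1)  # b + c rounds back to 1.000000 at 7 digits
```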

William L. Bahn

Apr 2, 2000, 4:00:00 AM
But it also follows then that floating point numbers obey virtually no laws.
In many implementations, a*b != b*a - not guaranteed. -(-a) != a - not
guaranteed.

Johannes H Andersen wrote in message <38E728AC...@madasafish.com>...

John Savard

Apr 2, 2000, 4:00:00 AM
On Sat, 01 Apr 2000 14:38:47 -0800, David W. Cantrell
<dwcantrel...@aol.com.invalid> wrote, in part:

>Of course I do not accept that! Not only are floating-point
>numbers defined very differently from real numbers (i.e.,
>floating-point numbers are not defined as equivalence classes of
>Cauchy sequences of rationals or as Dedekind cuts), but - and
>this is what is important - they behave differently from real
>numbers!

It is true that the set of floating point numbers isn't the same as
the set of real numbers.

>I'd feel more comfortable if you'd said "nominal value". Anyway,
>their representations do indeed look like rational numbers. But
>so what?

Certain rational numbers are what they represent.

>Sorry, but I consider this to be almost a meaningless question.

You know exactly what it means. Pick a floating-point number whose
nominal value is irrational.

>I would say that it is not a rational number (because
>it does not behave like a rational number), despite the fact
>that its nominal value certainly makes it appear to be a
>rational number.

Well, you can certainly say that, but that is not the sense in which
anyone was likely to understand your original question. Floating-point
objects in computers are used to represent real numbers; the real
numbers which they are able to represent exactly are all rational.
That is the only sense in which floating-point numbers can be
'rational' or 'not rational'; of course they form a set with
operations that isn't even a ring.

Johannes H Andersen

Apr 2, 2000, 4:00:00 AM

"William L. Bahn" wrote:
>
> But it also follows then that floating point numbers obey virtually no laws.
> In many implementations, a*b != b*a - not guaranteed. -(-a) != a - not
> guaranteed.

Quite right. (you mean a*b = b*a - not guaranteed)

I once worked on one of the first CRAY-1s; it did not obey a*b = b*a, but
this was fixed in a later version. I suspect that symmetric multiply should
be standard now.

Testing a floating point condition a >= 0. is also dubious, because there
can be both a 0. and a -0. !

Johannes

Jeremy Boden

Apr 2, 2000, 4:00:00 AM
In article <vmhjr-B36D94....@news.frii.com>, Virgil
<vm...@frii.com> writes

>
>>>
>>> How about if one of them satisfies the definition of the other?
>>
>>I'm not quite sure what you mean by "satisfies". Could you
>>please give the definitions and then show precisely why you
>>think that "one of them satisfies the definition of the other",
>>assuming you think that to be the case?
>
>
>Assuming that you will accept that both floating point and rational
>numbers are special cases of real numbers, the "value" of every floating
>point number has a representation as a quotient of integers, although
>the reverse is not true.
>
>If you believe that floats are not all rational, can you give me an
>example of a floating point number which is not rational?
>
The problem does not lie in representing floats as rationals but in
representing rationals as floating point.

For example (for a sufficiently large n),

(1/2 + 1/2^n) + 1/2^n = 1/2
whilst
1/2 + (1/2^n + 1/2^n) > 1/2

All these numbers are rational, but the nature of floating point
arithmetic means that rounding (truncation (?) ) errors often occur.

Of course, if I should know the value of 'n' then I can use extra
precision, but unless I do I can't guarantee that a floating point
computation is accurate.
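In IEEE binary64 the inequality above can be realized with n = 54, since 2^-54 is exactly half an ulp of 1/2 and ties round to even (a sketch, not part of the original post):

```python
# n = 54: 2**-54 is half an ulp of 0.5, so adding it once rounds back down,
# but the pre-added pair 2**-54 + 2**-54 = 2**-53 is a full ulp and survives.
t = 2.0 ** -54
assert (0.5 + t) + t == 0.5   # each addition ties and rounds back to 0.5
assert 0.5 + (t + t) > 0.5    # grouped first, the increment is kept
```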

--
Jeremy Boden

David W. Cantrell

unread,
Apr 2, 2000, 4:00:00 AM4/2/00
to
In article <38e73811...@news.ecn.ab.ca>,

jsa...@tenMAPSONeerf.edmonton.ab.ca (John Savard) wrote:
>On Sat, 01 Apr 2000 14:38:47 -0800, David W. Cantrell
><dwcantrel...@aol.com.invalid> wrote, in part:
>
>>Of course I do not accept that! Not only are floating-point
>>numbers defined very differently from real numbers (i.e.,
>>floating-point numbers are not defined as equivalence classes
>>of Cauchy sequences of rationals or as Dedekind cuts), but -
>>and this is what is important - they behave differently from
>>real numbers!
>
>It is true that the set of floating point numbers isn't the
>same as the set of real numbers.

Of course! But that is not what is important! The set of
equivalence classes of Cauchy sequences of rationals isn't the
same as the sets of Dedekind cuts either.

>>Sorry, but I consider this to be almost a meaningless question.
>
>You know exactly what it means.

Let me suggest that your telepathic powers can use a bit of fine
tuning. I most assuredly did consider it "to be almost a
meaningless question".

>Pick a floating-point number whose nominal value is irrational.

Of course, excluding the infinities and NaNs as we have already
done, the nominal value assigned to any floating-point number is
rational. This is patently true. I can't imagine anyone (who is
sane enough to know that 0.999...=1) arguing otherwise.

>>I would say that it is not a rational number (because
>>it does not behave like a rational number), despite the fact
>>that its nominal value certainly makes it appear to be a
>>rational number.
>
>Well, you can certainly say that, but that is not the sense in
>which anyone was likely to understand your original question.

Really? If you'll glance through this thread, you'll see that
several people understood the question in exactly the sense I
intended. Moreover, I find it hard to believe that most
mathematicians would understand it otherwise. I didn't call the
thread "Are the nominal values of floating-point numbers
rational?" after all.

>Floating-point objects in computers are used to represent real
>numbers;

Floating-point arithmetic is used in an attempt to
_approximate_ affinely extended real number arithmetic.

>the real numbers which they are able to represent exactly are
>all rational.

No, although the nominal values of the normal floating-point
objects are certainly rational. In other words, we could,
depending on context, equally well consider "+1.0" to be an
exact representation of either a rational number or of a nominal
value of a floating-point number. But that floating-point number
is not the same as that rational number! Why? Because they
_behave_ differently. If things behave differently, they are not
the same. In a nutshell, that is why floating-point numbers are
not rational numbers.

>That is the only sense in which floating-point numbers can be
>'rational' or 'not rational'; of course they form a set with
>operations that isn't even a ring.

Yes, it isn't even a ring.

David Cantrell

William L. Bahn

unread,
Apr 2, 2000, 4:00:00 AM4/2/00
to

Johannes H Andersen wrote in message <38E74397...@madasafish.com>...

>
>
>"William L. Bahn" wrote:
>>
>> But it also follows then that floating point numbers obey virtually no
laws.
>> In many implementations, a*b != b*a - not guaranteed. -(-a) != a - not
>> guaranteed.
>
>Quite right. (you mean a*b = b*a - not guaranteed)

Actually, I meant it pretty much the way I wrote it. In words, it
would have been: in general, 'a' times 'b' is not equal to 'b' times 'a' -
at least, that's not guaranteed. But I see your point. As it turns out, both
are correct, because inequality is not guaranteed, either.

That makes me wonder - what is the probability that each law does yield the
desired result?

cartoaje

unread,
Apr 2, 2000, 4:00:00 AM4/2/00
to
also sprach Johannes H Andersen,

>Testing a floating point condition: a >= 0. is also dubious
because there
>can be a 0. and a -0. !

In the ANSI/IEEE standard 754-1985, "comparison ignores the sign
of 0 (i. e., regards +0 as equal to -0)."

Mihai

Neil W Rickert

unread,
Apr 2, 2000, 4:00:00 AM4/2/00
to
I suggest that "floating point numbers" are not actually numbers at
all. They are, strictly speaking, floating point representations of
real numbers. That they do not obey the associative law is simply
because the process by which we make these representations is
imperfect, and the representation system we use is imperfect.


Barry Schwarz

unread,
Apr 2, 2000, 4:00:00 AM4/2/00
to
On Sun, 2 Apr 2000 00:58:49 -0500, V-man <v_me...@alcor.concordia.ca>
wrote:

>> |> > The rationales obey the associative law,
>> |> > but the floating point numbers do not.
>
>I don't see why they don't. Can you give an example?

Let n be the number of significant digits in a floating point number.
Let b be the base that number is represented in (usually binary but
not an issue).

In math b^(2n) - (b^(2n) + 1) = (b^(2n) - b^(2n)) - 1 = -1.

In floating point b^(2n) - (b^(2n) + 1) evaluates to 0
while (b^(2n) - b^(2n)) - 1 evaluates to -1.
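With b = 2 and IEEE doubles, any power of two at or beyond the 53-bit significand width shows the effect; 2^53 suffices. A Python sketch:

```python
big = 2.0 ** 53   # doubles carry 53 significant bits, so big + 1 is not representable

# In exact arithmetic both groupings give -1; in floating point they differ.
a = big - (big + 1.0)   # big + 1.0 rounds back to big, so this is 0.0
b = (big - big) - 1.0   # every intermediate result is exact, so this is -1.0

print(a)   # 0.0
print(b)   # -1.0
```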


<<Remove the del for email>>

Jeff Stuart

unread,
Apr 2, 2000, 4:00:00 AM4/2/00
to
Take a look at any numerical analysis book.
Every book from which I have taught contains
an example of three numbers that when added
from smallest to largest yield a different floating
point number than when added from largest to
smallest. In fact, when convenient, floating point
numbers should be added from smallest to largest
because this sum is usually more accurate than the
sum in reverse order.

In order to make the example easier to follow, I
am using three digit base ten arithmetic rather
than 23-bit binary, but the behavior is the same.
All following digits are rounded.

fl(0.103) + fl(0.103) + fl(1.00)

(0.103 + 0.103) + 1.00 = (0.206) + 1.00 round first sum before summing
= 0.206 + 1.00
= (1.206) round
= 1.21
fl(sum) = 1.21


fl(1.00) + fl(0.103) + fl(0.103)

(1.00 + 0.103) + 0.103 = (1.103) + 0.103 round first sum before summing
= 1.10 + 0.103
= (1.203) round
= 1.20

fl(reverse sum) = 1.20
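The three-digit example above can be replayed directly with Python's decimal module, which lets the working precision be set to three significant digits:

```python
from decimal import Decimal, getcontext

getcontext().prec = 3   # three significant decimal digits, round-half-even

x, y, z = Decimal("0.103"), Decimal("0.103"), Decimal("1.00")

small_first = (x + y) + z   # 0.206 + 1.00 = 1.206 -> rounds to 1.21
large_first = (z + x) + y   # 1.00 + 0.103 = 1.10 (rounded), + 0.103 -> 1.20

print(small_first)   # 1.21
print(large_first)   # 1.20
```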

I hope this helps.

Jeff Stuart
Department of Mathematics
University of Southern Mississippi
Hattiesburg, MS 39406-5045


E. Robert Tisdale

unread,
Apr 3, 2000, 3:00:00 AM4/3/00
to
cartoaje wrote:

> There was a typographical error in his post.
> I believe he meant "angels."

Yes, I meant both acute and obtuse angels.


Nico Benschop

unread,
Apr 3, 2000, 3:00:00 AM4/3/00
to
William L. Bahn wrote:
>
> [... on FLP arithmetic and associative, commutative laws ...]

>
> That makes me wonder -
> what is the probability that each law does yield the desired result?
>

My guess would be, that the assoc/cmt laws will be satisfied
whenever the operands and the result are representable precisely
(i.e. without rounding, truncation, overflow or underflow).

This has little to do with probabilities, it's all deterministic,
apart from an occasional alpha-particle messing-up things;-)
--
Ciao, Nico Benschop

David W. Cantrell

unread,
Apr 3, 2000, 3:00:00 AM4/3/00
to
In article <8c8dgo$9...@ux.cs.niu.edu>, Neil W Rickert

<ricke...@cs.niu.edu> wrote:
>I suggest that "floating point numbers" are not actually numbers
>at all.

That is a reasonable suggestion. However, in the absence of any
generally accepted definition of "number", I'm sure there would
be those who would still wish to refer to floating-point objects
as numbers.

Regards,
David Cantrell

David W. Cantrell

unread,
Apr 3, 2000, 3:00:00 AM4/3/00
to
In article <TheG4.1639$X21.85691@bgtnsc05-
news.ops.worldnet.att.net>, "Spleen Splitter" <Spleen*no
spam*Spli...@hotmail.com> wrote:

>Floating point numbers are indeed rational numbers.

Precisely how do you justify your assertion?

spam*Splitter@hotmail.com Spleen Splitter

unread,
Apr 4, 2000, 3:00:00 AM4/4/00
to
Ack!

Floating point numbers are indeed rational numbers.

It is the floating point _operations_ that cause heck.

As the hardware designer related to me when I sucked wind
about his remainder being negative: "It's easier that way."

cheers,

Spleen Splitter

"David W. Cantrell" <dwcantrel...@aol.com.invalid> wrote in message news:16deffca...@usw-ex0104-033.remarq.com...


> In article <8c8dgo$9...@ux.cs.niu.edu>, Neil W Rickert
> <ricke...@cs.niu.edu> wrote:
> >I suggest that "floating point numbers" are not actually numbers
> >at all.
>
> That is a reasonable suggestion. However, in the absence of any
> generally accepted definition of "number", I'm sure there would
> be those who would still wish to refer to floating-point objects
> as numbers.
>
> Regards,

spam*Splitter@hotmail.com Spleen Splitter

unread,
Apr 4, 2000, 3:00:00 AM4/4/00
to
A representation by exponent and mantissa is clearly a rational number.

Problems are manifested in the wiles of the operators on such representations
that are said to represent those of the real number system. In other words,
a floating point unit s*x mathematically: loss of associativity being the
aforementioned biggie.

cheers,

Spleen Splitter

"David W. Cantrell" <dwcantrel...@aol.com.invalid> wrote in message news:0fb00b7d...@usw-ex0104-028.remarq.com...


> In article <TheG4.1639$X21.85691@bgtnsc05-
> news.ops.worldnet.att.net>, "Spleen Splitter" <Spleen*no
> spam*Spli...@hotmail.com> wrote:
>

> >Floating point numbers are indeed rational numbers.
>

> Precisely how do you justify your assertion?
>

David W. Cantrell

unread,
Apr 4, 2000, 3:00:00 AM4/4/00
to
In article <OPhG4.35371$pK3.738691@bgtnsc04-

news.ops.worldnet.att.net>, "Spleen Splitter" <Spleen*no
spam*Spli...@hotmail.com> wrote:
>A representation by exponent and mantissa is clearly a rational
>number.

No, although I'll agree that such gives a floating-point number
the _appearance_ of a rational number, so to speak. But
appearances can, as in this case, be deceiving! If things behave
differently, then they are not the same.
Note the adage: "If it walks like a duck and quacks like a duck,
it's a duck." In other words, behavior [rather than mere
appearance] is the true key to identification.

>Problems are manifested in the whiles of the operators on such
>representatations that are said to represent those of the real
>number system. In other words, a floating point unit s*x
>mathematically: Loss of associativity as the aforementioned
>biggie.

OK, so since rational numbers are associative under addition,
while floating-point numbers are not, the two types of numbers
behave differently. Thus, they are not the same.

Cheers,

Dik T. Winter

unread,
Apr 4, 2000, 3:00:00 AM4/4/00
to
In article <0e7fd3a4...@usw-ex0102-084.remarq.com> David W. Cantrell <dwcantrel...@aol.com.invalid> writes:
> OK, so since rational numbers are associative under addition,
> while floating-point numbers are not, the two types of numbers
> behave differently. Thus, they are not the same.

I would just say that the operations are different...
--
dik t. winter, cwi, kruislaan 413, 1098 sj amsterdam, nederland, +31205924131
home: bovenover 215, 1025 jn amsterdam, nederland; http://www.cwi.nl/~dik/

milo gardner

unread,
Apr 4, 2000, 3:00:00 AM4/4/00
to
Dear group,


I have not read all 41 discussions here. Maybe someone did mention
that this problem, in the form of:

1 = 1/2 + 1/4 + 1/8 + 1/16 + 1/32 + 1/64 + ...

was clearly known as an infinite series prior to 2100 BC, such that
rational numbers could not generally be written exactly. Pyramid
builders from Egypt knew this fact, and before 2100 BC began a
duplation method to write finite series for any n/p or n/pq number.

The RMP lists the traditional duplation method as if it were the
only ancient method. However, by reading the EMLR, also purchased
by Henry Rhind in the late 1850s (and, strangely, not read until
1927), one finds that its 26 rational number finite series used the
following form 11 times:

n/pq = n/A x A/pq

where A = 5, 25.

The most interesting case is

1/8 = 1/25 x 25/8
    = 1/5 x 25/40
    = 1/5 x (3/5 + 1/40)
    = 1/5 x (1/5 + 1/3 + 1/15 + 1/40)
    = 1/25 + 1/15 + 1/75 + 1/200

an out-of-order series that cannot have been computed in any manner
other than the one stated above.
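The out-of-order series for 1/8 can be checked exactly with Python's fractions module, which performs the same unlimited-precision rational arithmetic the scribes did by hand:

```python
from fractions import Fraction as F

# The EMLR-style decomposition: 1/8 = 1/25 + 1/15 + 1/75 + 1/200
series = [F(1, 25), F(1, 15), F(1, 75), F(1, 200)]
print(sum(series) == F(1, 8))        # True

# And the intermediate step: 1/8 = 1/5 x (1/5 + 1/3 + 1/15 + 1/40)
inner = F(1, 5) + F(1, 3) + F(1, 15) + F(1, 40)
print(F(1, 5) * inner == F(1, 8))    # True
```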


You may ask, why is this important?


Well, the RMP 2/nth table used a general rule to write


2/pq = 2/A x A/pq, with A = (p + 1)


There were there exceptions to this rule, two following the

more optimal series (smaller last term) where A = (p + q)

was used for 2/35 and 2/91.

The last exception, 2/95, was simply a mod 5 version of the

RMP 2/p rule, that was used in EVERY CASE, following the

form,


2/p = 1/A + (2A -p)/Ap

where A was a highly composite number, chosen from the range

p/2 < A < p


with the divisors of A being uniquely used to find (2A -p)

by applying an LCM - red auxiliary - rule (to reference

an Ahmes notation).
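The identity behind the 2/p rule is plain algebra, 1/A + (2A - p)/(Ap) = (p + 2A - p)/(Ap) = 2/p for any A, and can be spot-checked with exact rationals. The choice A = 4 for p = 7 below is my own illustrative pick, not a value taken from the RMP:

```python
from fractions import Fraction as F

def two_over_p(p, A):
    """Ahmes-style split 2/p = 1/A + (2A - p)/(A*p); exact for any A."""
    return F(1, A) + F(2 * A - p, A * p)

# Example: p = 7 with the composite choice A = 4 gives 2/7 = 1/4 + 1/28.
print(two_over_p(7, 4) == F(2, 7))      # True
print(F(1, 4) + F(1, 28) == F(2, 7))    # True
```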


That is to say, this method of always converting infinite

series to finite series, without rounding off, was used by

Greeks, Romans, Byzantines and many others, until our base 10

decimal system began to erase a history of this subject only

400 years ago.

Greeks, Hibeh Papyrus, writing in 300 BC wrote in an n/45 table

using a generalized form,


n/pq = 1/A = (nA -pq)/Apq


showing that Greeks did not invent number theory, as classical

scholars continue to 'keep their eyes wide shut' today.


Thanks for allowing this little 'eye-opener' to gain a little

21st century band width. Hultsh was correct in 1895 when he

suggested a historical solution to the RMP 2/p series. Is it

about time that computer experts begin to link their method

of binary arithmetic to its oldest known 'forefathers' and

offer classical scholars another chance to properlu read

historical mathematical documents, as written?


Regards to all,

Milo Gardner
Sacrmento, CA

(once a military cryptanalyst, finding the simple 2/pq and n/pq rules
of ancient documents very easy to read, but very hard to sell ----
on the internet and especially in the academic community - that
wishes to follow the 1920's closed books of DE SMITH and others.).

milo gardner

unread,
Apr 4, 2000, 3:00:00 AM4/4/00
to

David W. Cantrell

unread,
Apr 4, 2000, 3:00:00 AM4/4/00
to
In article <FsHqL...@cwi.nl>, "Dik T. Winter"

<Dik.W...@cwi.nl> wrote:
>In article <0e7fd3a4...@usw-ex0102-084.remarq.com> David
>W. Cantrell <dwcantrel...@aol.com.invalid> writes:
>>OK, so since rational numbers are associative under
>>addition, while floating-point numbers are not, the two types
>>of numbers behave differently. Thus, they are not the same.
>
>I would just say that the operations are different...

If so, then what would justify saying that floating-point numbers
are rational numbers? They certainly aren't defined the same way.
Floating-point numbers are defined as certain strings of bits (as
in IEEE 754) while rational numbers are defined as certain
equivalence classes of ordered pairs of integers (as in many math
texts).

But, of course, sometimes things that are defined differently can
still be considered to be "essentially the same". For example,
Dedekind cuts and equivalence classes of Cauchy sequences of
rationals might on the surface seem to be very different.
And certainly they are actually different. But, with respect to
the field operations for example, they behave the same. That's
what's important. Thus, I'm happy to call both of them by the
name "real numbers".

Similarly, despite the fact that floating-point and rational
numbers are defined differently, I'd still be happy to say that
floating-point numbers "are" rational numbers, so to speak, if
they behaved like rational numbers. But, as we all know, they
don't. So I must say that floating-point numbers are not rational
numbers.

But, if you have a good counterargument, Dik, let's hear it!

Regards,

V-man

unread,
Apr 4, 2000, 3:00:00 AM4/4/00
to David W. Cantrell
I'm not the person who wrote that, but you could always write floating
point numbers with a fraction. Can you find a floating point number that
can't be represented by a fraction?

floating point numbers exist in the world of computers (the physical
universe). They have limited precision and that is all.

For example, if you try to save an irrational number such as sqrt(2) into
a computers memory as a float number, it gets rationalized. Do you agree
on that?

V-man

On Mon, 3 Apr 2000, David W. Cantrell wrote:

> In article <TheG4.1639$X21.85691@bgtnsc05-


> news.ops.worldnet.att.net>, "Spleen Splitter" <Spleen*no
> spam*Spli...@hotmail.com> wrote:
>

> >Floating point numbers are indeed rational numbers.
>
> Precisely how do you justify your assertion?
>

David W. Cantrell

unread,
Apr 4, 2000, 3:00:00 AM4/4/00
to
In article
<Pine.OSF.4.10.100040...@alcor.concordia.ca>,

V-man <v_me...@alcor.concordia.ca> wrote:
>I'm not the person who wrote that, but you could always write
>floating point numbers with a fraction. Can you find a floating
>point number taht can't be represented by a fraction.

No I can't, in the sense that all floating-point numbers have
"representations" which do indeed look like rational numbers. But
things can look alike and yet not be the same!

>For example, if you try to save an irrational number such as
>sqrt(2) into a computers memory as a float number, it gets
>rationalized. Do you agree on that?

Loosely speaking, sure. But to be precise, when Sqrt(2) is
"converted" to a floating-point number, the result is just that,
namely a floating-point number, which is not exactly the same
thing as any standard type of number in pure mathematics. And of
course, I'd normally expect the behavior of that floating-point
number to more closely match the behavior of a rational number
than of Sqrt(2).

Cheers,

Dann Corbit

unread,
Apr 4, 2000, 3:00:00 AM4/4/00
to
"David W. Cantrell" <dwcantrel...@aol.com.invalid> wrote in message
news:0cb96020...@usw-ex0104-033.remarq.com...

They are a subset of the rational numbers. They lack all the important
properties of the actual set (e.g., they are not closed under addition,
subtraction, multiplication or division). However, the only thing that you
can represent in a floating point number corresponds directly to a rational
number. In fact, that exact rational number can be calculated and
displayed.

Just because they lack the properties of the rational numbers does not mean
that they are not a subset of them.
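In Python, that exact rational value is directly accessible. For instance, the double nearest to 0.1 is not 1/10 but a nearby dyadic rational:

```python
from fractions import Fraction

# Every finite double is exactly an integer over a power of two.
num, den = (0.1).as_integer_ratio()
print(num, den)                           # 3602879701896397 36028797018963968
print(den == 2 ** 55)                     # True
print(Fraction(0.1) == Fraction(1, 10))   # False: 0.1 only approximates 1/10
```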
--
C-FAQ: http://www.eskimo.com/~scs/C-faq/top.html
"The C-FAQ Book" ISBN 0-201-84519-9
C.A.P. Newsgroup http://www.dejanews.com/~c_a_p
C.A.P. FAQ: ftp://38.168.214.175/pub/Chess%20Analysis%20Project%20FAQ.htm

John Savard

unread,
Apr 4, 2000, 3:00:00 AM4/4/00
to
milog...@juno.com (milo gardner) wrote, in part:

>Is it
>about time that computer experts begin to link their method
>of binary arithmetic to its oldest known 'forefathers' and
>offer classical scholars another chance to properlu read
>historical mathematical documents, as written?

While the unit fractions of the ancient Egyptians, as described in the
Rhind Mathematical Papyrus, are of some interest as a topic of
recreational mathematics, and the Egyptians and others did indeed make
some impressive mathematical accomplishments before the Greeks,

it is also still true that it was the Greeks who, by looking at the
subject in a more systematic and theoretical manner, laid the greatest
part of the foundation on which our current level of mathematical
progress is based, and

it is also true that simpler and easier to understand and manipulate
notations, such as decimal numeration, also assist mathematicians as
well.

Hence, I am not surprised that unit fractions are felt to be of a very
limited interest.

John Savard (jsavard<at>ecn<dot>ab<dot>ca)
http://www.ecn.ab.ca/~jsavard/crypto.htm

David W. Cantrell

unread,
Apr 4, 2000, 3:00:00 AM4/4/00
to
In article <5jqG4.1243$4b3.1315@client>, "Dann Corbit"
<dco...@solutionsiq.com> wrote:
>They are a subset of the rational numbers.

For this to be true, certain bit strings would literally have to
be certain equivalence classes of ordered pairs of integers.
Clearly, such is not the case.

>They lack all the important properties of the actual set (e.g.,
>they are not closed under addition, subtraction, multiplication
>or division). However, the only thing that you can represent in
>a floating point number corresponds directly to a rational
>number. In fact, that exact rational number can be calculated
>and displayed.

Absolutely correct! But having such a "direct correspondence"
does not mean that a floating-point number _is_ a rational number
at all! (Having a one-to-one correspondence between the elements
of set A and the elements of a subset of set B, we cannot say
that A must be a subset of B.)

>Just because they lack the properties of the rational numbers
>does not mean that they are not a subset of them.

That's certainly true. What it does mean is that no "isomorphism"
can exist between the (ordinary) floating-point numbers and a
subset of the rationals. Thus, the floating-point numbers are not
"in essence" like rational numbers. And, since floating-point
numbers are also not literally rational numbers, what could


justify saying that floating-point numbers are rational numbers?

Nothing, in my opinion.

Regards,

jonathan miller

unread,
Apr 4, 2000, 3:00:00 AM4/4/00
to
Dave Seaman wrote:

> There is a natural injection from the floating point numbers into the
> rationals, but it is not a homomorphism. The rationals obey the


> associative law, but the floating point numbers do not.

No they (the rationals, that is) don't. Addition over the rationals
obeys the associative law.

By the way, if I write "1+2=4", which of 1, 2, and 4 do you consider to
be not a number?

Assuming you answer this the way I expect you will, why do you give the
floating-point algorithm for addition (that isn't addition, since it
doesn't obey the addition laws) more credit than you give me?

Jon Miller


jonathan miller

unread,
Apr 4, 2000, 3:00:00 AM4/4/00
to
"David W. Cantrell" wrote:

> Why? Because they _behave_ differently. If things behave differently,
> they are not the same. In a nutshell, that is why floating-point
> numbers are not rational numbers.

I'm still confused as to why the conclusion isn't just that the floating-point
implementation of addition is flawed. If a student gives an incorrect answer,
we just mark it wrong, we don't say he's working with an "alternative number
system". Why give the machine (or even IEEE) more credit?

So we know the machine sometimes gives the wrong answer and it's up to the user
to figure it out and work around it. So what? Why invent another class of
numbers?

Jon Miller


Dave Seaman

unread,
Apr 4, 2000, 3:00:00 AM4/4/00
to
In article <38EA462D...@nashville.com>,

jonathan miller <jonatha...@nashville.com> wrote:
>Dave Seaman wrote:

>> There is a natural injection from the floating point numbers into the
>> rationals, but it is not a homomorphism. The rationals obey the
>> associative law, but the floating point numbers do not.

>No they (the rationals, that is) don't. Addition over the rationals
>obeys the associative law.

The rationals are a field. Fields obey the associative laws (two of
them, in fact).

>By the way, if I write "1+2=4", which of 1, 2, and 4 do you consider to
>be not a number?

Insufficient information. At least one of the five symbols must have a
nonstandard meaning, but it is impossible to say which.

>Assuming you answer this the way I expect you will, why do you give the
>floating-point algorithm for addition (that isn't addition, since it
>doesn't obey the addition laws) more credit than you give me?

What are the addition laws? Do ordinal numbers obey the addition laws?

The word "number" is not a precisely defined term in mathematics. We
know what "natural numbers" and "rational numbers" and "ordinal numbers"
are, but we don't really know what "numbers" are. I am not prepared to
rule out the possibility that "floating point numbers" might be numbers
of some sort, but they are not rational numbers.

--
Dave Seaman dse...@purdue.edu
Amnesty International calls for new trial for Mumia Abu-Jamal
<http://mojo.calyx.net/~refuse/mumia/021700amnesty.html>

Lynn Killingbeck

unread,
Apr 4, 2000, 3:00:00 AM4/4/00
to
(snip)

Just tacking this on to the latest posting in this long thread...

Would it take all the fun out of this thread, if the participants
actually _defined_ what they mean by 'rational number'? I poked my nose
into Hardy and Wright, and did not see a definition. Ditto in Ore, no
definition. Knuth, in 1.2.1 has "A rational number is the ratio
(quotient) of two integers, p/q, where q is positive." Into a mixed
number_theory/programming book, and I find "If a,b are elements of Z
with b not equal to 0, then a/b is a rational number." (Doing my best
ASCII-ization of the math symbols.) That's already two different
definitions! And, by either of these, floating point numbers are
rational. I did not see anything, in any book, that defined rationals in
terms of IEEE or other representations. Neither of these two definitions
includes _operations_ on the numbers as part of the definition.

As long as some of the participants include the _operations_ in their
definition, and other participants do not include the operations, there
clearly will be no agreement about whether or not floating point numbers
(in general, let alone specifically IEEE formats) are rational.

Just 2 cents worth from a bored observer...

Lynn Killingbeck

David W. Cantrell

unread,
Apr 4, 2000, 3:00:00 AM4/4/00
to
In article <38EA48B8...@nashville.com>, jonathan miller

<jonatha...@nashville.com> wrote:
>"David W. Cantrell" wrote:
>
>>Why? Because they _behave_ differently. If things behave
>>differently, they are not the same. In a nutshell, that is why
>>floating-point numbers are not rational numbers.
>
>I'm still confused as to why the conclusion isn't just that the
>floating-point implementation of addition is flawed.

Assuming that you're dealing with floating-point numbers, how
could addition be improved? (Any brilliant ideas should be
directed to IEEE.) Such addition has its inherent limitations,
yes; but it seems a bit unfair to describe it as flawed.

>If a student gives an incorrect answer, we just mark it wrong,
>we don't say he's working with an "alternative number system".
>Why give the machine (or even IEEE) more credit?

Because they're doing the best they can under the imposed
constraints?

>So we know the machine sometimes gives the wrong answer and it's
>up to the user to figure it out and work around it.

Again, I don't think that's really a fair statement. If the
machine follows the rules of floating-point arithmetic, then its
answer is correct for that arithmetic.
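"Correct for that arithmetic" has a precise meaning under IEEE 754: each basic operation returns the representable value nearest the exact result. This can be verified from Python by redoing a sum in exact rational arithmetic:

```python
from fractions import Fraction

computed = 0.1 + 0.2                     # one IEEE double addition
exact = Fraction(0.1) + Fraction(0.2)    # exact sum of the two stored operands

# The machine result is the double closest to the exact rational sum...
print(computed == float(exact))   # True
# ...even though it differs from the double closest to 3/10.
print(computed == 0.3)            # False
```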

>So what? Why invent another class of numbers?

Huh? I'm certainly not proposing a new class of numbers.

spam*Splitter@hotmail.com Spleen Splitter

unread,
Apr 5, 2000, 3:00:00 AM4/5/00
to
"David W. Cantrell" <dwcantrel...@aol.com.invalid> wrote in message news:091cb23a...@usw-ex0104-033.remarq.com...

> In article <5jqG4.1243$4b3.1315@client>, "Dann Corbit"
> <dco...@solutionsiq.com> wrote:
> >"David W. Cantrell" <dwcantrel...@aol.com.invalid> wrote
> >in message news:0cb96020...@usw-ex0104-033.remarq.com...
> >> In article <FsHqL...@cwi.nl>, "Dik T. Winter"
> >><Dik.W...@cwi.nl> wrote:
> >>>In article <0e7fd3a4...@usw-ex0102-084.remarq.com>
> >>>David W. Cantrell <dwcantrel...@aol.com.invalid> writes:
> >>>>OK, so since rational numbers are associative under
> >>>>addition, while floating-point numbers are not, the two
> >>>>types of numbers behave differently. Thus, they are not the
> >>>>same.
> >> >

> >> >I would just say that the operations are different...
> >>
> >>If so, then what would justify saying that floating-point
> >>numbers are rational numbers? They certainly aren't defined the
> >>same way. Floating-point numbers are defined as certain strings
> >>of bits (as in IEEE 754) while rational numbers are defined as
> >>certain equivalence classes of ordered pairs of integers (as in
> >>many math texts).
> >>
> >>But, of course, sometimes things that are defined differently
> >>can still be considered to be "essentially the same". For
> >>example, Dedekind cuts and equivalence classes of Cauchy
> >>sequences of rationals might on the surface seem to be very
> >>different. And certainly they are actually different. But, with
> >>respect to the field operations for example, they behave the
> >>same. That's what's important. Thus, I'm happy to call both of
> >>them by the name "real numbers".

Actually one can construct a mathematical universe in which the Cauchy
Reals and the Dedekind Reals are not the same (but not the one in which
most folks think).

> >>
> >>Similarly, despite the fact that floating-point and rational
> >>numbers are defined differently, I'd still be happy to say
> >>that floating-point numbers "are" rational numbers, so to
> >>speak, if they behaved like rational numbers. But, as we all

> >>know, they don't. So I must say that floating-point numbers are
> >>not rational numbers.
> >>


> >> But, if you have a good counterargument, Dik, let's hear it!
> >
> >They are a subset of the rational numbers.
>
> For this to be true, certain bit strings would literally have to
> be certain equivalence classes of ordered pairs of integers.
> Clearly, such is not the case.

To over-pontificate, the IEEE floating point numbers can have a NaN
represented (Not-a-Number), so there's a slight difference from the
Rationals.

>
> >They lack all the important properties of the actual set (e.g.,
> >they are not closed under addition, subtraction, multiplication
> >or division). However, the only thing that you can represent in
> >a floating point number corresponds directly to a rational
> >number. In fact, that exact rational number can be calculated
> >and displayed.
>
> Absolutely correct! But having such a "direct correspondence"
> does not mean that a floating-point number _is_ a rational number
> at all! (Having a one-to-one correspondence between the elements
> of set A and the elements of a subset of set B, we cannot say
> that A must be a subset of B.)

It appears that in your discourse your usage of "is" means
equality. So you assert that the bit-string representation of a floating-
point number can never be the same thing as an equivalence class of
certain pairs of integers.

Unfortunately, it seems to follow from your line of reasoning that
the integers are not rational numbers since an integer is not an
equivalence class of certain pairs of integers. Next, the rationals
are not reals since the elements of the equivalence classes are
totally different.

I daresay we are on a dangerous course here.

>
> >Just because they lack the properties of the rational numbers
> >does not mean that they are not a subset of them.
>
> That's certainly true. What it does mean is that no "isomorphism"
> can exist between the (ordinary) floating-point numbers and a
> subset of the rationals. Thus, the floating-point numbers are not
> "in essence" like rational numbers. And, since floating-point
> numbers are also not literally rational numbers, what could
> justify saying that floating-point numbers are rational numbers?
> Nothing, in my opinion.

This should really go over well with Kahan, who designed that bloody IEEE
standard (and just think of the nightmares he has given hardware designers,
like Intel's). These nightmares continue in the programmers who try
to use 'em, and most programmers are not numerical analysts.

To get the semantics right: a non-NaN floating point number is a rational
number, but the floating point system doesn't come close to the full majesty
of the real number system. Nonetheless, the floating point system is a
reasonable approximation.

After all, some engineer somewhere sure thought floating point numbers were
rational enough when he ran SPICE to determine the nature of the electronic
circuits that brought your message forth.

That seems like good enough reason...

cheers,

Spleen Splitter


>
> Regards,

David W. Cantrell

unread,
Apr 5, 2000, 3:00:00 AM4/5/00
to
In article <NuyG4.12791$TM.8...@bgtnsc06-news.ops.worldnet.att.net>,

"Spleen Splitter" <Spleen*no spam*Spli...@hotmail.com> wrote:
> "David W. Cantrell" <dwcantrel...@aol.com.invalid> wrote in
> message news:091cb23a...@usw-ex0104-033.remarq.com...

> > Absolutely correct! But having such a "direct correspondence"
> > does not mean that a floating-point number _is_ a rational number
> > at all! (Having a one-to-one correspondence between the elements
> > of set A and the elements of a subset of set B, we cannot say
> > that A must be a subset of B.)
>
> It appears that in your discourse that your usage of "is" means
> equality. So you assert the bit-string representation of a floating
> number can never be the same thing as an equivalence class of certain
> pairs of integers.
>
> Unfortunately, it seems to follow from your line of reasoning that
> the integers are not rational numbers since an integer is not an
> equivalence class of certain pairs of integers. Next, the rationals
> are not reals since the elements of the equivalence classes are
> totally different.
>
> I daresay we are on a dangerous course here.

I daresay that this is precisely the standardly used course, whether you
like it or not. The reason that it is not dangerous is that the
appropriate isomorphisms can be established. For example, following
this commonly used course, although the integers Z are not a subset of
the rational numbers Q, we can establish an isomorphism between the
system (Z, +, *, 0, 1) and the system (QZ, +, *, 0, 1) where QZ is a
certain subset of Q. The elements of QZ might be called "integer"
rationals, perhaps. This is sometimes expressed by saying that "the
integers are isomorphic to a subset of the rationals". However, such a
statement is actually rather imprecise for our discussion here because
it makes reference merely to the sets, rather than to the algebraic
structures involving them.

The important thing to note is that any such isomorphism between
floating-point structures and rational structures is impossible.

> > >Just because they lack the properties of the rational numbers
> > >does not mean that they are not a subset of them.
> >
> > That's certainly true. What it does mean is that no "isomorphism"
> > can exist between the (ordinary) floating-point numbers and a
> > subset of the rationals. Thus, the floating-point numbers are not
> > "in essence" like rational numbers. And, since floating-point
> > numbers are also not literally rational numbers, what could
> > justify saying that floating-point numbers are rational numbers?
> > Nothing, in my opinion.
>
> This should really go over well with Khan who designed that bloody
> IEEE standard

His name is Kahan, actually. But I am not criticizing floating-point
arithmetic. Considering the constraints of a digital computer etc.,
that arithmetic may, for all I know, do close to an optimal job of
approximating affinely extended real arithmetic.

> To get the semantics right: a non-NaN floating point number is a
> rational number,

Rather, any non-NaN, non-Inf floating point number clearly "corresponds
with", but is not actually the same as, some rational number.

>but the floating point system doesn't come close to the full majesty
>of the real number system. Nonetheless, the floating point system is a
>reasonable approximation.

Presumably so, given the confines of a digital computer.

Cheers,
David Cantrell


Sent via Deja.com http://www.deja.com/
Before you buy.

Dik T. Winter

Apr 5, 2000
In article <38EA48B8...@nashville.com> jonathan miller <jonatha...@nashville.com> writes:
> "David W. Cantrell" wrote:
> > Why? Because they _behave_ differently. If things behave differently, they
> > are not the same. In a nutshell, that is why floating-point numbers are
> > not rational numbers.
>
> I'm still confused as to why the conclusion isn't just that the
> floating-point implementation of addition is flawed.

Not in IEEE. If the result cannot be represented exactly, a nearby result
is returned and a flag (INEXACT) is set.

Dik T. Winter

Apr 5, 2000
In article <0cb96020...@usw-ex0104-033.remarq.com> David W. Cantrell <dwcantrel...@aol.com.invalid> writes:
> In article <FsHqL...@cwi.nl>, "Dik T. Winter"
> <Dik.W...@cwi.nl> wrote:
> >I would just say that the operations are different...
>
> If so, then what would justify saying that floating-point numbers

> are rational numbers? They certainly aren't defined the same way.
> Floating-point numbers are defined as certain strings of bits (as
> in IEEE 754) ...

Not in the books I read. There floating-point numbers are defined by
a formula: m * r ^ e, with r the radix, m the mantissa and e the exponent.
There are limits on the values for m and e, but in many floating-point
systems (though not in IEEE) the numbers actually do form equivalence
classes through the pair of numbers [m,e]: [m*r,e] == [m,e+1], etc.
The actual implementation in bitstrings, or whatever strings, comes later,
and is not even spelled out in full in IEEE. (In addition, there may be
some special representations: NaNs, infinities, indeterminates, etc.)

The operations on floating-point numbers (though spelled +, -, *
and /) are not actually the operations expected, and many numerical
mathematicians do represent them as the symbol with an enclosing
circle to clearly distinguish them. (See for instance texts by
J.H. Wilkinson.)
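Dik's m * r ^ e description is easy to check on a real machine. A small Python sketch, assuming IEEE 754 doubles (radix r = 2, 53-bit significand); the value 6.25 is arbitrary:

```python
import math

x = 6.25
m, e = math.frexp(x)           # x == m * 2**e, with 0.5 <= m < 1
M = int(m * 2**53)             # scale to an integer mantissa (53 bits)
E = e - 53
assert x == M * 2.0**E         # x is exactly M * 2**E: a ratio of integers

# Dik's equivalence classes, with radix r = 2:
# [M*r, E] names the same number as [M, E+1].
assert (M * 2) * 2.0**E == M * 2.0**(E + 1)
```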

David W. Cantrell

Apr 5, 2000
In article <FsJou...@cwi.nl>, "Dik T. Winter"

<Dik.W...@cwi.nl> wrote:
>In article <0cb96020...@usw-ex0104-033.remarq.com> David
>W. Cantrell <dwcantrel...@aol.com.invalid> writes:
> > In article <FsHqL...@cwi.nl>, "Dik T. Winter"
> > <Dik.W...@cwi.nl> wrote:
> > >I would just say that the operations are different...
> >
>>If so, then what would justify saying that floating-point
>>numbers are rational numbers? They certainly aren't defined the
>>same way. Floating-point numbers are defined as certain strings
>>of bits (as in IEEE 754) ...
>
>Not in the books I read. There floating-point numbers are
>defined by a formula: m * r ^ e, with r the radix, m the
>mantissa and e the exponent.

Sure, that's fine (although I've always tended to think of a
floating-point number as being quite literally what is stored in
the computer).

>There are limits on the values for m and e, but in many
>floating-point systems (though not in IEEE) the numbers actually
>do form equivalence classes through the pair of numbers [m,e]:
>[m*r,e] == [m,e+1], etc. The actual implementation in
>bitstrings, or whatever strings, comes later, and is not even
>spelled out in full in IEEE.

Yes, you're right I think.

>(In addition, there may be some special representatione: NaN's,
>infinities, indetermineds, etc.)
>
>The operations on floating-point numbers (though spelled +, -, *
>and /) are not actually the operations expected, and many
>numerical mathematicians do represent them as the symbol with an
>enclosing circle to clearly distinguish them.

Of course. And this is a good idea.

Spleen Splitter
Apr 6, 2000

"David W. Cantrell" <david_w_...@my-deja.com> wrote in message news:8cf91p$iej$1...@nnrp1.deja.com...
> In article <NuyG4.12791$TM.8...@bgtnsc06-news.ops.worldnet.att.net>,

> "Spleen Splitter" <Spleen*no spam*Spli...@hotmail.com> wrote:
> > "David W. Cantrell" <dwcantrel...@aol.com.invalid> wrote in
> > message news:091cb23a...@usw-ex0104-033.remarq.com...

>
> > > Absolutely correct! But having such a "direct correspondence"
> > > does not mean that a floating-point number _is_ a rational number
> > > at all! (Having a one-to-one correspondence between the elements
> > > of set A and the elements of a subset of set B, we cannot say
> > > that A must be a subset of B.)
> >
> > It appears that in your discourse that your usage of "is" means
> > equality. So you assert the bit-string representation of a floating
> > number can never be the same thing as an equivalence class of certain
> > pairs of integers.
> >
> > Unfortunately, it seems to follow from your line of reasoning that
> > the integers are not rational numbers since an integer is not an
> > equivalence class of certain pairs of integers. Next, the rationals
> > are not reals since the elements of the equivalence classes are
> > totally different.
> >
> > I daresay we are on a dangerous course here.
>
> I daresay that this precisely the standardly used course, whether you
> like it or not. The reason that it is not dangerous is that the
> appropriate isomorphisms can be established. For example, following
> this commonly used course, although the integers Z are not a subset of
> the rational numbers Q, we can establish an isomorphism between the
> system (Z, +, *, 0, 1) and the system (QZ, +, *, 0, 1) where QZ is a
> certain subset of Q. The elements of QZ might be called "integer"
> rationals, perhaps. This is sometimes expressed by saying that "the
> integers are isomorphic to a subset of the rationals". However, such a
> statement is actually rather imprecise for our discussion here because
> it makes reference merely to the sets, rather than to the algebraic
> structures involving them.
>
> The important thing to note is that any such isomorphism between
> floating-point structures and rational structures is impossible.

Isomorphisms are fine and wonderful, and I encourage the usage of them.
However, your technique of reasoning about the constituents of a set
(namely your sticking to elements of equivalence classes).

Clearly, there's a large number of the floating point numbers, along
with the attendant perfidious floating point operation, that map
perfectly into the rationals.

You seem to be insisting that because not every 'corner' is perfect
(ahem, rounding et al) that floating point numbers are hopelessly
incapable of mimicking or 'being' a rational number. Do you have
any idea over what subset of F x F that the operations are perfect?

Your argument seems equivalent then to saying that if I took the set
of rational numbers and tossed out most of them, then what was left
was not rational numbers.

So from my point of view one can perfectly say: consider the set of
rationals consisting of the mantissa and exponent in specified ranges,
and call these the floating point numbers. The typical operations
yield their typical result as long as the result is in this specified
set, otherwise do something special.

So just where along the way did these rational numbers lose their
rationality?

>
> > > >Just because they lack the properties of the rational numbers
> > > >does not mean that they are not a subset of them.
> > >
> > > That's certainly true. What it does mean is that no "isomorphism"
> > > can exist between the (ordinary) floating-point numbers and a
> > > subset of the rationals. Thus, the floating-point numbers are not
> > > "in essence" like rational numbers. And, since floating-point
> > > numbers are also not literally rational numbers, what could
> > > justify saying that floating-point numbers are rational numbers?
> > > Nothing, in my opinion.
> >
> > This should really go over well with Khan who designed that bloody
> > IEEE standard
>

> His name is Kahan, actually. But I am not criticizing floating-point
> arithmetic. Considering the constraints of a digital computer etc.,
> that arithmetic may, for all I know, do close to an optimal job of
> approximating affinely extended real arithmetic.

You sure haven't given much justification that floating point numbers
aren't rational numbers. The above doesn't seem to help your case either.

>
> > To get the semantics right: a non-NaN floating point number is a
> > rational number,
>

> Rather, any non-NaN, non-Inf floating point number clearly "corresponds
> with", but is not actually the same as, some rational number.

lol...I always thought of an infinity as not a rational number...

Jussi Piitulainen

Apr 6, 2000
"Spleen Splitter" <Spleen*no spam*Spli...@hotmail.com> writes:

> Your argument seems equivalent then to saying that if I took the set
> of rational numbers and tossed out most of them, then what was left
> was not rational numbers.

I think, if you took a subset of rational numbers and closed them
under addition, subtraction, multiplication and division (bar zero),
then he might agree to call the resulting set a set of rational
numbers: they would behave like rationals, commuting and associating
and having additive and multiplicative inverses.
--
Jussi
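Jussi's closure condition can be tried directly. A small Python sketch using exact Fraction arithmetic, with an arbitrary two-element starting set, showing how quickly a finite set grows when closed under the field operations:

```python
from fractions import Fraction

def close_once(S):
    """One round of closing S under +, -, * and / (barring zero)."""
    out = set(S)
    for a in S:
        for b in S:
            out |= {a + b, a - b, a * b}
            if b != 0:
                out.add(a / b)
    return out

S = {Fraction(1, 2), Fraction(1)}
S = close_once(S)
print(sorted(S))    # the two-element set already has seven members
```

Iterating keeps growing the set; apart from trivial cases like {0}, a finite subset of Q is essentially never closed under all four operations, which is why no finite floating-point set can qualify.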

David W. Cantrell

Apr 6, 2000
In article <EwXG4.2207$4A1.102566@bgtnsc05-news.ops.worldnet.att.net>,
"Spleen Splitter" <Spleen*no spam*Spli...@hotmail.com> wrote:

>Isomorphisms are fine and wonderful, and I enourage the usage
>of them. However, your technique of reasoning about the
>constituents of a set (namely your sticking to elements of
>equivalence classes).

Huh...what about it?

>Clearly, there's a large number of the floating point numbers,
>along with the attendant perfidious floating point operation,
>that map perfectly into the rationals.

ALL non-Inf, non-NaN floating-point numbers may be said to "map
perfectly into the rationals", so to speak. The problem with
your statement is the condition "along with the attendant
perfidious floating point operation[s]". Due to the perfidy of
the operations, such a perfect mapping is impossible if the set
of floating-point numbers is sufficiently large to be useful for
ordinary scientific computations.

>You seem to be insisting that because not every 'corner' is
>perfect (ahem, rounding et al) that floating point numbers are
>hopelessly incapable of mimicing or 'being' a rational number.

You've misunderstood my position. Floating-point numbers are
certainly capable of approximating the behavior of rational
numbers; I've never said otherwise. But imperfectly mimicking
something and actually behaving precisely like that something
are very different.

>Do you have any idea over what subset of F x F that the
>operations are perfect?

Perhaps you should say subsets, rather than subset. But, to
answer your question more-or-less, look at the last sentence in
my second paragraph above this one.

>Your argument seems equivalent then to saying that if I took
>the set of rational numbers and tossed out most of them, then
>what was left was not rational numbers.

In a sense, you're quite right. Numbers, of whatever sort, are
properly dealt with in systems. Any alteration to the
characteristics of the system destroys that system; even the
smallest tear in the fabric causes the whole piece to unravel,
so to speak.

>So from my point of view one can perfectly say: consider the
>set of rational consisting of the mantissa and exponent in
>specified ranges, and call these the floating point numbers.
>The typical operations yield their typical result as long as
>the result is in this specified set,

But alas, this is not true for any useful floating-point
systems. I had certainly thought that you were aware of this
problem.

>otherwise do something special.
>
>So just where along the way did these rational numbers lose
>their rationality?

Is that a rational question? (Actually, my answer is implied by
my second paragraph above.)

bri...@eisner.decus.org

Apr 6, 2000
In article <05a1f794...@usw-ex0103-018.remarq.com>, David W. Cantrell <dwcantrel...@aol.com.invalid> writes:
> In article <EwXG4.2207$4A1.102566@bgtnsc05-

>>So from my point of view one can perfectly say: consider the
>>set of rational consisting of the mantissa and exponent in
>>specified ranges, and call these the floating point numbers.
>>The typical operations yield their typical result as long as
>>the result is in this specified set,
>
> But alas, this is not true for any useful floating-point
> systems. I had certainly thought that you were aware of this
> problem.

Let M be the set of "model numbers" exactly represented in a particular
floating point model. It will be a finite subset of the reals and,
for any typical implementation will be a finite subset of the rationals.

Let F(x) be the mapping function from M to the floating point numbers.

I read the paragraph you are responding to as calling for:

For all x, y in M, if x+y in M then F(x) "+" F(y) = F(x+y)
For all x, y in M, if x-y in M then F(x) "-" F(y) = F(x-y)
For all x, y in M, if x*y in M then F(x) "*" F(y) = F(x*y)
For all x, y in M, if x/y in M then F(x) "/" F(y) = F(x/y)

I believe this requirement is met by most, if not all, widely used
floating point systems. I seem to recall that the Ada specification
(ANSI 1815A) calls for this behavior as one of the requirements
for a compliant floating point implementation.

But I could be wrong. Feel free to educate me.

John Briggs bri...@eisner.decus.org
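Briggs's requirement is easy to spot-check for IEEE doubles. A minimal Python sketch (the particular values are just illustrations) where x, y and the exact result are all model numbers, so the machine operation agrees with exact rational arithmetic:

```python
# When x, y and the exact result are all representable doubles,
# correct rounding guarantees F(x) "+" F(y) == F(x+y), and likewise
# for the other three operations.
assert 0.5 + 0.25 == 0.75
assert 1.75 - 0.5 == 1.25
assert 3.0 * 0.25 == 0.75
assert 1.5 / 0.5 == 3.0
print("all four operations exact on these model numbers")
```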

David W. Cantrell

Apr 6, 2000
In article <2000Apr6.091851.1@eisner>, bri...@eisner.decus.org
wrote:
>In article <05a1f794...@usw-ex0103-018.remarq.com>, David
>W. Cantrell <dwcantrel...@aol.com.invalid> writes:
>>In article <EwXG4.2207$4A1.102566@bgtnsc05-

>>>So from my point of view one can perfectly say: consider the
>>>set of rational consisting of the mantissa and exponent in
>>>specified ranges, and call these the floating point numbers.
>>>The typical operations yield their typical result as long as
>>>the result is in this specified set,
>>
>> But alas, this is not true for any useful floating-point
>> systems. I had certainly thought that you were aware of this
>> problem.
>
>Let M be the set of "model numbers" exactly represented in a
>particular floating point model. It will be a finite subset of
>the reals and, for any typical implementation will be a finite
>subset of the rationals.
>
>Let F(x) be the mapping function from M to the floating point
>numbers.
>
>I read the paragraph you are responding to as calling for:
>
>For all x, y in M, if x+y in M then F(x) "+" F(y) = F(x+y)
>For all x, y in M, if x-y in M then F(x) "-" F(y) = F(x-y)
>For all x, y in M, if x*y in M then F(x) "*" F(y) = F(x*y)
>For all x, y in M, if x/y in M then F(x) "/" F(y) = F(x/y)

I read the paragraph as calling for more than that! But you may
well have interpreted "The typical operations yield their typical
result as long as the result is in this specified set" as its
author intended; I really don't know.

When I gave my response, I was considering something like the
following scenario.

Thinking of adding rational numbers, we would say that
1.00 + 0.104 + 0.104 + 0.102 = 1.31
Note that I have used only "typical operations" above.

Now for convenience (following Jeff Stuart's response earlier in
this thread), let's do the same sort of calculation using
three-significant-digit base-ten arithmetic
1.00 + 0.104 + 0.104 + 0.102 =
1.10 + 0.104 + 0.102 =
1.20 + 0.102 =
1.30

So this result does not agree with 1.31, although 1.31 is indeed
"in this specified set" of three-significant-digit base-ten
numbers. To me, it seems that I have shown that "the typical
operations" do not necessarily "yield their typical result" even
though "the result is in this specified set".
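The same calculation can be reproduced with Python's decimal module set to three significant digits (the module stands in for the hypothetical base-ten machine; it is not something from the thread itself):

```python
from decimal import Decimal, getcontext

getcontext().prec = 3                # three significant decimal digits

total = Decimal("1.00")
for term in ("0.104", "0.104", "0.102"):
    total += Decimal(term)           # each partial sum is rounded to 3 digits

print(total)                         # 1.30, although the exact sum is 1.31
```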

Spleen Splitter
Apr 11, 2000

"David W. Cantrell" <dwcantrel...@aol.com.invalid> wrote in message news:05a1f794...@usw-ex0103-018.remarq.com...
> In article <EwXG4.2207$4A1.102566@bgtnsc05-

> news.ops.worldnet.att.net>, "Spleen Splitter" <Spleen*no
> spam*Spli...@hotmail.com> wrote:
>
> >Isomorphisms are fine and wonderful, and I enourage the usage
> >of them. However, your technique of reasoning about the
> >constituents of a set (namely your sticking to elements of
> >equivalence classes).
>
> Huh...what about it?
>

...is flawed, to complete my sentence.

You can certainly ascertain equality in such a way, but not isomorphisms,
or commutative diagrams in general.

So if you want to say that there is no isomorphic embedding of the floating
point system into the rational number system, that's fine, and there's no
argument from anybody on this planet.

But that's a system consisting of the floating point numbers, and the floating
point operations, not just the floating point numbers alone. So it sounds
like we agree.

> >Clearly, there's a large number of the floating point numbers,
> >along with the attendant perfidious floating point operation,
> >that map perfectly into the rationals.
>

> ALL non-Inf, non-NaN floating-point numbers may be said to "map
> perfectly into the rationals", so to speak. The problem with
> your statement is the condition "along with the attendant
> perfidious floating point operation[s]". Due to the perfidy of
> the operations, such a perfect mapping is impossible if the set
> of floating-point numbers is sufficiently large to be useful for
> ordinary scientific computations.

I don't see a problem in my statement. I do see a problem that most
scientists and engineers have no clue how to do numerical analysis.
Plenty of filters have failed in the transition of modeling from
floating point arithmetic to fixed point arithmetic inside the actual
filter, just as planets move poorly when a junior programmer tries
his hand at celestial mechanics by naively using floating point ops.

>
> >You seem to be insisting that because not every 'corner' is
> >perfect (ahem, rounding et al) that floating point numbers are
> >hopelessly incapable of mimicing or 'being' a rational number.
>

> You've misunderstood my position. Floating-point numbers are
> certainly capable of approximating the behavior of rational
> numbers; I've never said otherwise. But imperfectly mimicing
> something and actually behaving precisely like that something
> are very different.
>

> >Do you have any idea over what subset of F x F that the
> >operations are perfect?
>

> Perhaps you should say subsets, rather than subset. But, to
> answer your question more-or-less, look at the last sentence in
> my second paragraph above this one.
>

> >Your argument seems equivalent then to saying that if I took
> >the set of rational numbers and tossed out most of them, then
> >what was left was not rational numbers.
>

> In a sense, you're quite right. Numbers, of whatever sort, are
> properly dealt with in systems. Any alteration to the
> characteristics of the system destroys that system; even the
> smallest tear in the fabric causes the whole piece to unravel,
> so to speak.

I'll certainly agree that the _systems_ are different, but I'll
still argue that most floating point numbers are rationals, and
that numerical analysis should be the glue between the two so that
computations are still effective.

>
> >So from my point of view one can perfectly say: consider the
> >set of rational consisting of the mantissa and exponent in
> >specified ranges, and call these the floating point numbers.
> >The typical operations yield their typical result as long as
> >the result is in this specified set,
>

> But alas, this is not true for any useful floating-point
> systems. I had certainly thought that you were aware of this
> problem.

It is true plenty of the time. I have had to help build some
floating point adders and multipliers, and a lot of corner
cases create nightmares for the hardware designer.

And, no, I have never worked for Intel :)

>
> >otherwise do something special.
> >
> >So just where along the way did these rational numbers lose
> >their rationality?
>

> Is that a rational question? (Actually, my answer is implied by
> my second paragraph above.)

Your point appears simply to be that if one says "floating point number"
you interpret that to mean "floating point system". You appear
to be reluctant to distinguish between the two, while I am not.
In any event, not terribly elucidating on the fundamental natures of
any of the above mentioned number systems.

cheers,

Spleen Splitter


David W. Cantrell

Apr 11, 2000
I hope that my response below will answer not only the questions
posed here by Spleen Splitter but also those posed in a previous
post by Lynn Killingbeck.

In article <hbKI4.1617$fV.125695@bgtnsc05-news.ops.worldnet.att.net>,
"Spleen Splitter" <Spleen*no spam*Spli...@hotmail.com> wrote:

>So if you want to say that there is no isomorphic embedding of
>the floating point system into the rational number system,

>that's fine, and there's no argument from anybody on this planet.
>
>But that's a system consisting of the floating point numbers,


>and the floating point operations, not just the floating point
>numbers alone. So it sounds like we agree.

Yes, we agree so far.

>I'll certainly agree that the _systems_ are different, but I'll
>still argue that most floating point numbers are rationals,

So I must then ask:
Precisely what is the basis of your argument that the non-Inf,
non-NaN floating-point numbers should be regarded as actually
being rational numbers?

I hope your argument is not based merely on the fact that
certain representations can be used to specify precisely either
some rational number or some floating-point number. For example,
given some particular floating-point system, "0.5" does specify
a certain floating-point number precisely; and of course, "0.5"
also specifies a certain rational number precisely. But this
does not allow us to conclude that that floating-point number
and that rational number are actually the same. They might
simply have the same name, "0.5", and yet not actually be the
same thing at all. ("Bat" is a name for both a type of wooden
stick and for a type of flying mammal, but this obviously
doesn't allow us to conclude that a certain type of wooden stick
is a certain type of flying mammal.)

>Your point appears simply to be that if one says "floating
>point number" that you interpret that to mean "floating point
>system".

Yes; if one says "floating-point number", then one is speaking
of an object *within* some floating-point system (consisting of
numbers, relations, and operations). Similarly, if one
says "rational number", then one is speaking of an object
*within* the rational number system (consisting of numbers,
relations, and operations).

>You appear to be reluctant of distinguishing between the two,
>while I am not.

I am indeed reluctant to consider numbers of any sorts outside
the contexts of their appropriate systems! While I won't go
quite as far as saying that it is complete nonsense to do so, I
do say that it makes little sense to do so. Why do I have this
attitude? Well, let me ask you: What good are numbers if you
can't do anything with them (other than just using them as
labels, just as one might use letters of the alphabet)? My
answer, by the way, would be that numbers are essentially
useless (as numbers, as opposed to mere labels) if you can't do
anything with them! And note that, removed from their systems,
you can't really do anything with them, since you have no
operations on or relations between the numbers. Thus, even if
the objects called floating-point numbers and those called
rational numbers were defined in precisely the same way, it
would make essentially no difference to this discussion, in my
opinion, because within their respective systems, these types of
numbers behave differently. It is the behavior of numbers which
truly defines them. The actual objects which are the numbers
themselves are simply vehicles which can be used, in conjunction
with operations and relations, to define behavior.

If things don't behave in the same way, they are not the same
things. On the other hand, if things behave the same in all
respects, then for all practical purposes they may be considered
to be the same, even if they are actually defined
differently. "If it walks like a duck and quacks like a duck,
it's a duck." I would extend this adage by saying: If it merely
*looks* like a duck, be careful because it might not really be a
duck!

In a sense, floating-point numbers *look* like rational numbers;
but they don't behave like them. Thus, floating-point numbers
are not actually rational numbers.
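A short Python illustration of "looks like but doesn't behave like" (the values are arbitrary): the same expressions evaluated with doubles and with exact rationals:

```python
from fractions import Fraction

# Floating point: the familiar textbook failures.
print(0.1 + 0.2 == 0.3)                          # False
print((0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3))    # False: "+" not associative

# Rational arithmetic: behaves exactly as the axioms promise.
F = Fraction
print(F(1, 10) + F(2, 10) == F(3, 10))           # True
```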

Dann Corbit

Apr 11, 2000
Is the following a subset of the rational numbers?
S={1/2, 1/4, 1/8, 0, 2/1, 4/1, 8/1}
And if it is, does this set have the same properties as the set of rational
numbers?

You seem to be arguing that since the set of computer floating point numbers
is not the set of rational numbers {or does not share their properties},
then floating point cannot be a subset of the rational numbers. Certainly,
the set S above has very, very few of the properties of the set of rational
numbers. And yet each element of the set is a member of the rational
numbers.

Is an element of computer floating point not a rational number simply
because one subset from which it is taken does not have the properties of R?

If we examine the properties of any subset, we are certainly not guaranteed
to have the same properties as the superset.

That argument seems irrational to me.
;-)

Floating point numbers clearly are rational numbers. But just as obviously,
they do not have the properties of R. They are simply contained in it.

Jeff Stuart

Apr 12, 2000
If I take a new Lexus sedan and encase it in five tons of cement,
it is arguably still an automobile, but it certainly no longer behaves
like one. The problem with stating the floating point numbers are
just a subset of the rationals is akin to arguing that my Lexus in
a cement block is still a car. Sure, it has tires, an engine, and a
steering wheel, but can you drive it around? In the same manner,
the numbers in the set of floating point numbers are individually
clearly rational numbers, however, you cannot "drive" them around
using the rules of rational arithmetic. It seems to me that few
posters on this discussion claim that the individual floating point
numbers are rationals, rather, they are concerned that when
"driven" around, they do not behave like rational numbers.
NO ONE cares about individual floating point numbers, just as
almost no one would care about a Lexus in a five ton block of
cement. Since floating point numbers are only useful when they
are being "driven", i.e., used in computations, it seems to be
foolish to focus on them individually.

Jeff Stuart

Dann Corbit wrote in message ...

David W. Cantrell

Apr 12, 2000
In article <DoSI4.2047$rH.3625@client>, "Dann Corbit"

<dco...@solutionsiq.com> wrote:
>Is the following a subset of the rational numbers?
>S={1/2, 1/4, 1/8, 0, 2/1, 4/1, 8/1}

Well, it certainly is a perfectly reasonable way of
*representing* a subset of Q, the set of rational numbers.

>And if it is, does this set have the same properties of the set
>of rational numbers?

Of course, you and I both know that it does not.

>You seem to be arguing that since the set of computer floating

>point numbers is not the set of rational numbers {or does not
>share their properties}, then floating point cannot be a subset
>of the rational numbers.

That's not my argument. If it were, then I would have to say that
your set S above cannot be reasonably construed as being a subset
of Q.

>Certainly, the set S above has very, very few of the properties
>of the set of rational numbers. And yet each element of the set
>is a member of the rational numbers.
>
>Is and element of computer floating point not a rational number
>simply because one subset from which it is taken does not have
>the properties of R?

(I suppose you meant Q, rather than R.) Anyway, certainly the
elements in any subset of Q are rational numbers, even though the
properties of that subset (such as say, being finite, perhaps)
don't match the properties of Q itself.

>If we examine the properties of any subset, we are certainly not
>guaranteed to have the same properties as the superset.

Right.

>That argument seems irrational to me.
>;-)
>
>Floating point numbers clearly are rational numbers.

No! Clearly if you look at how these types of numbers are
*defined*, you'll see that floating-point numbers are not
actually rational numbers. But please realize that I consider
this fact to be almost irrelevant to this discussion (for reasons
concerning behavior, as mentioned in my previous post).

>But just as obviously, they do not have the properties of R.
>They are simply contained in it.

No, the set of floating-point numbers is not a subset of Q. If
you need to see a standard definition of "rational number", try
looking at the FAQ site for this newsgroup. As far as the
definition of floating-point number goes, an internationally
accepted standard, IEEE 754, says that a floating-point number is
"A bit-string..." Rational numbers are not defined as
bit-strings.

David W. Cantrell

Apr 12, 2000
In article <uoA$iiFp$GA.306@cpmsnbbsa04>, "Jeff Stuart"

<1123...@email.msn.com> wrote:
>If I take a new Lexus sedan and encase it in five tons of
>cement, it is arguably still an automobile, but it certainly no
>longer behaves like one. The problem with stating the floating
>point numbers are just a subset of the rationals is akin to

>arguing that my Lexus in a cement block is still a car.

Thanks, Jeff, for putting the argument on a "concrete" basis!

>Sure, it has tires, an engine, and a steering wheel, but can you
>drive it around? In the same manner, the numbers in the set of
>floating point numbers are individually clearly rational
>numbers,

Technically, I must disagree with this. (See my recent response
to Dann, for example.) But the important thing, as you clearly
realize, is the behavior of the numbers, rather than the
particular types of abstract objects which they are.

>however, you cannot "drive" them around using the rules of
>rational arithmetic. It seems to me that few posters on this
>discussion claim that the individual floating point
>numbers are rationals, rather, they are concerned that when
>"driven" around, they do not behave like rational numbers.
>NO ONE cares about individual floating point numbers, just as
>almost no one would care about a Lexus in a five ton block of
>cement. Since floating point numbers are only useful when they
>are being "driven", i.e., used in computations, it seems to be
>foolish to focus on them individually.

Yes indeed!

Dann Corbit

Apr 12, 2000
"David W. Cantrell" <dwcantrel...@aol.com.invalid> wrote in message
news:0d0f8bc8...@usw-ex0102-015.remarq.com...
[snip]
There are two different arguments in motion, and clearly we have a case of
apples and oranges.
Just as you have already agreed that:

S={1/2, 1/4, 1/8, 0, 2/1, 4/1, 8/1}

is an acceptable representation of a subset of the rational numbers, in a
similar way, every floating point number is *in fact* a ratio of two
integers {ignoring NAN, INF values.}

Whether we represent them as a bit string or two numbers written on paper
with a line between them, is irrelevant, as I am sure you realize.

However, I must firmly agree with your premise that floating point numbers
do not *behave* like rational numbers just as the subset S (above) does not
behave like rational numbers. But I don't think any computer scientist
would have reasonably tried to argue that they do.

So then, what are we really arguing about? I'm not sure. ;)

In any case, why don't we just settle on:

"Computer floating point representations generally consist of a subset of
the ratio of two integral values which do not behave as rational numbers but
are a proper subset, ignoring INF and NAN types."

Dann Corbit

Apr 12, 2000
"Jeff Stuart" <1123...@email.msn.com> wrote in message
news:uoA$iiFp$GA.306@cpmsnbbsa04...

> If I take a new Lexus sedan and encase it in five tons of cement,
> it is arguably still an automobile, but it certainly no longer behaves
> like one. The problem with stating the floating point numbers are
> just a subset of the rationals is akin to arguing that my Lexus in
> a cement block is still a car. Sure, it has tires, an engine, and a

> steering wheel, but can you drive it around? In the same manner,
> the numbers in the set of floating point numbers are individually
> clearly rational numbers, however, you cannot "drive" them around

> using the rules of rational arithmetic. It seems to me that few
> posters on this discussion claim that the individual floating point
> numbers are rationals, rather, they are concerned that when
> "driven" around, they do not behave like rational numbers.
> NO ONE cares about individual floating point numbers, just as
> almost no one would care about a Lexus in a five ton block of
> cement. Since floating point numbers are only useful when they
> are being "driven", i.e., used in computations, it seems to be
> foolish to focus on them individually.

You are wrong. We must understand exactly what individual representations
allow in order to have any chance of understanding how they behave.

By the same argument, the set T = {0, 1/2, 1/3, 1/4, ...} does not
contain rational numbers.
Will you be able to explain its properties without being able to describe an
element of the set?

We both know that this set is not like the set of rational numbers, but does
share *some* of its properties. Once we can clearly define what an
arbitrary element looks like (and perhaps more importantly what it *can't*
look like) then we can say how the set behaves, given enough study.

The ability to describe an element is crucial to understand many sorts of
sets. Perhaps all sets. You'd need a real mathematician to clear that one
up, though.

David W. Cantrell

Apr 12, 2000
In article <_23J4.1006$0B1.1942@client>, "Dann Corbit"

<dco...@solutionsiq.com> wrote:
>"David W. Cantrell" <dwcantrel...@aol.com.invalid> wrote
>in message news:0d0f8bc8...@usw-ex0102-015.remarq.com...

>Just as you have already agreed that:
>
> S={1/2, 1/4, 1/8, 0, 2/1, 4/1, 8/1}
>

>is an acceptable representation of a subset of the rational
>numbers,

True. But thankfully, just as I did, you said that it is a
*representation* of such a subset. To elaborate, although "1/2"
certainly represents a rational number, it is not a rational
number itself. The rational number represented as "1/2" is an
equivalence class whose members are normally thought of as
certain ordered pairs of integers (as in the FAQ site for this
newsgroup, see
http://www.cs.unb.ca/~alopez-o/math-faq/node16.html#SECTION00325000000000000000).
Perhaps here it would be more convenient to
think of the members of the equivalence class as being formal
fractions instead. If so, the rational number represented as
"1/2" is the set (equivalence class) whose members are 1/2,
-1/(-2), 2/4, -2/(-4), 3/6, -3/(-6), ... Please note that, by
defining this rational number in this way, we have built into the
definition itself that the rational number represented as "1/2"
is the same as that represented as "-3/(-6)" and so on.

>in a similar way, every floating point number is *in fact* a
>ratio of two integers {ignoring NAN, INF values.}

Every such floating-point number does have a representation (such
as 0.5 or, if you prefer, 1/2) which is identical to a
representation which is also appropriate for a rational number.
But having identical representations does not necessarily mean
that the objects represented are the same! In the case under
discussion, the objects are different. If the floating-point
number which you would call "1/2" were the same as the rational
number of that same name, then implicit in the definition of that
floating-point number itself (without reference to any
operations!) would be the fact that the fractions 1/2 and -3/(-6)
represent the same floating-point number. But I can't see how the
bit-string defining that floating-point number could possibly
imply such a fact.

>Whether we represent them as a bit string or two numbers written
>on paper with a line between them, is irrelevant, as I am sure
>you realize.

Well, if a rational number were actually nothing but "two numbers
written on paper with a line between them", then we would be
having a different discussion.

>In any case, why don't we just settle on:
>
>"Computer floating point representations generally consist of a
>subset of the ratio of two integral values which do not behave
>as rational numbers but are a proper subset, ignoring INF and
>NAN types."

Unfortunately, I have yet to see how I could settle on any
statement that would imply that the set of floating-point numbers
is actually a subset of Q.

Dann Corbit

Apr 12, 2000
"David W. Cantrell" <dwcantrel...@aol.com.invalid> wrote in message
news:0308537a...@usw-ex0104-028.remarq.com...

> In article <_23J4.1006$0B1.1942@client>, "Dann Corbit"
> <dco...@solutionsiq.com> wrote:
> >"David W. Cantrell" <dwcantrel...@aol.com.invalid> wrote

Why should the bit string representation have to imply anything? In what
way does '1/2' imply that all ratios where the denominator is exactly twice
the numerator are equivalent? It is an invisible theory behind the
representation, and not contained in the symbol itself. Similarly, a ratio
of two integers (or, more to the point, any such ratio that can be
calculated) will map to the same internal representation as well. Consider
the following C program:

#include <limits.h>
#include <stdlib.h>
#include <stdio.h>

#define BITMASK(b) (1 << ((b) % CHAR_BIT))
#define BITSLOT(b) ((b) / CHAR_BIT)
#define BITTEST(a, b) ((a)[BITSLOT(b)] & BITMASK(b))

void dump_bits(double);

int main(void)
{
    int i;
    double num, den, ratio;

    for (i = 1; i < 25; i++) {
        num = rand() + 1;   /* increment to prevent divide by zero */
        den = num + num;
        ratio = num / den;
        printf("numerator = %f, denominator = %f, ratio = %f\n",
               num, den, ratio);
        printf("bit dump = ");
        dump_bits(ratio);
    }
    return 0;
}

void dump_bits(double ratio)
{
    int i;
    char *p = (char *) &ratio;

    /* Print the bits of the double, least significant byte first. */
    for (i = 0; i < (int) (sizeof(double) * CHAR_BIT); i++) {
        if (BITTEST(p, i))
            printf("1");
        else
            printf("0");
    }
    putchar('\n');
}

And the output:

C:\tmp>foobar
numerator = 42.000000, denominator = 84.000000, ratio = 0.500000
bit dump = 0000000000000000000000000000000000000000000000000000011111111100
numerator = 18468.000000, denominator = 36936.000000, ratio = 0.500000
bit dump = 0000000000000000000000000000000000000000000000000000011111111100
numerator = 6335.000000, denominator = 12670.000000, ratio = 0.500000
bit dump = 0000000000000000000000000000000000000000000000000000011111111100
numerator = 26501.000000, denominator = 53002.000000, ratio = 0.500000
bit dump = 0000000000000000000000000000000000000000000000000000011111111100
numerator = 19170.000000, denominator = 38340.000000, ratio = 0.500000
bit dump = 0000000000000000000000000000000000000000000000000000011111111100
numerator = 15725.000000, denominator = 31450.000000, ratio = 0.500000
bit dump = 0000000000000000000000000000000000000000000000000000011111111100
numerator = 11479.000000, denominator = 22958.000000, ratio = 0.500000
bit dump = 0000000000000000000000000000000000000000000000000000011111111100
numerator = 29359.000000, denominator = 58718.000000, ratio = 0.500000
bit dump = 0000000000000000000000000000000000000000000000000000011111111100
numerator = 26963.000000, denominator = 53926.000000, ratio = 0.500000
bit dump = 0000000000000000000000000000000000000000000000000000011111111100
numerator = 24465.000000, denominator = 48930.000000, ratio = 0.500000
bit dump = 0000000000000000000000000000000000000000000000000000011111111100
numerator = 5706.000000, denominator = 11412.000000, ratio = 0.500000
bit dump = 0000000000000000000000000000000000000000000000000000011111111100
numerator = 28146.000000, denominator = 56292.000000, ratio = 0.500000
bit dump = 0000000000000000000000000000000000000000000000000000011111111100
numerator = 23282.000000, denominator = 46564.000000, ratio = 0.500000
bit dump = 0000000000000000000000000000000000000000000000000000011111111100
numerator = 16828.000000, denominator = 33656.000000, ratio = 0.500000
bit dump = 0000000000000000000000000000000000000000000000000000011111111100
numerator = 9962.000000, denominator = 19924.000000, ratio = 0.500000
bit dump = 0000000000000000000000000000000000000000000000000000011111111100
numerator = 492.000000, denominator = 984.000000, ratio = 0.500000
bit dump = 0000000000000000000000000000000000000000000000000000011111111100
numerator = 2996.000000, denominator = 5992.000000, ratio = 0.500000
bit dump = 0000000000000000000000000000000000000000000000000000011111111100
numerator = 11943.000000, denominator = 23886.000000, ratio = 0.500000
bit dump = 0000000000000000000000000000000000000000000000000000011111111100
numerator = 4828.000000, denominator = 9656.000000, ratio = 0.500000
bit dump = 0000000000000000000000000000000000000000000000000000011111111100
numerator = 5437.000000, denominator = 10874.000000, ratio = 0.500000
bit dump = 0000000000000000000000000000000000000000000000000000011111111100
numerator = 32392.000000, denominator = 64784.000000, ratio = 0.500000
bit dump = 0000000000000000000000000000000000000000000000000000011111111100
numerator = 14605.000000, denominator = 29210.000000, ratio = 0.500000
bit dump = 0000000000000000000000000000000000000000000000000000011111111100
numerator = 3903.000000, denominator = 7806.000000, ratio = 0.500000
bit dump = 0000000000000000000000000000000000000000000000000000011111111100
numerator = 154.000000, denominator = 308.000000, ratio = 0.500000
bit dump = 0000000000000000000000000000000000000000000000000000011111111100

As you can see, these equivalent ratios map to equivalent values.

> >Whether we represent them as a bit string or two numbers written
> >on paper with a line between them, is irrelevant, as I am sure
> >you realize.
>
> Well, if a rational number were actually nothing but "two numbers
> written on paper with a line between them", then we would be
> having a different discussion.
>
> >In any case, why don't we just settle on:
> >
> >"Computer floating point representations generally consist of a
> >subset of the ratio of two integral values which do not behave
> >as rational numbers but are a proper subset, ignoring INF and
> >NAN types."
>
> Unfortunately, I have yet to see how I could settle on any
> statement that would imply that the set of floating-point numbers
> is actually a subset of Q.

If this is true, then S is not a subset of Q either.

Spleen Splitter <spam*Splitter@hotmail.com>

Apr 13, 2000
"Dann Corbit" <dco...@solutionsiq.com> wrote in message news:o73J4.1031$0B1.1325@client...

Actually, I agree with David Cantrell in quite a large number of ways,
but I'm intent on extracting the difference in our thinking :)

I most assuredly agree with his contention that you should approach
"floating point numbers" as only having context in the entirety of the
system for which they were designed. This approach is the only one
of any long term value across systems of all types. For example, it's
clear that floating point codes are not stable across various implementations
of floating point units: helicopters fly in one simulation, but crash in
another run of the same code on a different type of machine.

Still, I have some difficulty in absorbing the entirety of his particular
argument. For example, I readily concede that floating point numbers are
but a poor mimicry of the full stunning power of the rationals (much less
for the reals and those poor saps stooping to use floating point representations
of pi, e, sqrt(2),...not even optical computers will help form a better
representation...)

So if the floating point numbers are not rationals, then:

o There are no natural numbers on a computer.
o There are no integers on a computer.

And this simply because addition is not closed for either number system.

So there you have it. You can only have Z8, Z16, Z32, maybe Z64 (i.e., those
with modular arithmetic) and those wacky floating point numbers on a computer.

So David Cantrell is most assuredly correct in these particulars, and not
just about the floating point numbers on a computer, but _all_ numerical
representations of natural numbers, integers, rationals, reals et al on any
machine with a finite 'tape' (sorry Turing...you actually get to have the
rationals...just not the reals...).

But my thinking works a bit differently, particularly when I try to use
a computer to crank on some problem for which it was only generically
constructed. When I need to use the natural numbers for indexing
into an array, I have to make some assumptions about the implementation,
such as an upper bound. Similarly for the integers when, say, using them as
lattice points in some n-dimensional space. The rationals (and yes, the reals)
get a tad bit trickier on a machine, and there are more sharp edges to
watch out for than just some bounding problem with the other two. Sometimes
I even use the modular arithmetic precisely because it is modular. In all
cases I am assuming a connection between the two types of number systems
that I am in control of, in that I understand the mapping between the two
so that planets don't move halfway across the sky in a heartbeat. In other
words, I have to bolt down the straps in my code to do exception checking
to make sure I don't wander off the track.

However, I've had plenty of numerical analysis, have built floating point
units, have done plenty of numerical coding from celestial mechanics to
timing analysis and parasitic extraction of electronic circuits, and still
have trouble getting everything to work. Definitely not work for the
numerically timid. From my own tortured experience I will most assuredly
argue that the floating point unit is a tough beast to tame to get it to
look remotely like the real numbers.

There is still nonetheless a manageable connection between the two, and for
that reason I believe that I can maintain that floating point numbers are
rationals (or reals), as I can manage them to my satisfaction. This is
just the same as my thinking with regards to the natural numbers or the
integers on a machine, just more complicated is all. So I'm willing to
let David Cantrell maintain that the two systems are distinct, but I won't
let that disturb my managing complexity between the realms by my making
meaningful identifications of representations in both.

Mapping of the reals--since many scientists and engineers deal with continuous
quantities--onto the floating point numbers is often only the first step in
getting some algorithm to work. The digital filters in cell phones, for
example, typically use fixed point arithmetic, which has yet again more
differences from the reals, and even from their own accursed brethren, the floating
point numbers. This still does not stop folks from making identifications
between all three said realms, and the cell phone does work--at least the
ones you buy.

There are those nearly apocryphal floating point units that maintain an upper
and lower bound through each calculation, essentially two floating point numbers
representing each rational. Such 'interval arithmetic' is a bit more fascinating
than just the single floating point representation (but hard to convince management
to build...maybe somebody will slip one in on some VLIW machine that has multiple
fpu's...waitaminute...where's that fpga/PCI board I had around here...)

> The ability to describe an element is crucial to understand many sorts of
> sets. Perhaps all sets. You'd need a real mathematician to clear that one
> up, though.

lol...I'm a real mathematician...no...that's not good...I'm a rational mathematician...
nah...that's corny...no wait...I got it...I'm a _categorical_ mathematician...
you might consider thinking in an alternate foundation of mathematics than
clunky old 19th century set theory....go for the gusto...go categorical... :)

cheers,

Spleen Splitter


Milo Gardner

Apr 13, 2000
Spleen Splitter raised several valid and interesting points. I too
support David C's point of view, from time to time. However, generally
algorithmic numbers in the form of floating-point numbers are far too
awkward. Ancients working in the 500 AD Akhmim Papyrus, decoded by
Kevin Brown with respect to 4/17 and 8/17 table members used the
binomial product form,

n/p = (1/Q + 1/p)(1/u + 1/v) + 1/c

that allows c to be negative number (as others have also suggested
on this thread). Rational numbers are powerful even outside of our
modern algorithmic point of view. The ancients stressed algebraic
identities by mentally selecting a term, such as 1/Q, u, v, c, here and
there, opening up the parameters in interesting ways.

For additional details, the sci.math archive dated 22 April 1996
"Egyptian Fractions, 31/311 in four terms" may be of interest.

Regards to all,


Milo Gardner

David W. Cantrell

Apr 13, 2000
In article <nq7J4.2243$0B1.3456@client>,

"Dann Corbit" <dco...@solutionsiq.com> wrote:
> "David W. Cantrell" <dwcantrel...@aol.com.invalid> wrote in
> message news:0308537a...@usw-ex0104-028.remarq.com...

> Why should the bit string representation have to imply anything?

It shouldn't have to. (You're just talking about a representation,
after all.)

> In what way does '1/2' imply that all ratios where the denominator is
> exactly twice the numerator are equivalent?

It doesn't. (You're just talking about a representation, rather than
what it actually represents in some particular context.)

> It is an invisible theory behind the representation, and not contained
> in the symbol itself.

Yes indeed. In the context of rational numbers, the representation "1/2"
does conceal "an invisible theory", so to speak. (Isn't such concealment
of messy details one of the reasons that we humans like to use
representations?) Furthermore, in a different context, if we are
speaking of floating-point numbers for example, then the representation
"1/2" would also conceal a substantially different "invisible theory".

> Similarly, a ratio of two integers (or, more to the point, any such
> ratio that can be calculated) will map to the same internal
> representation as well.

I've never questioned that (and thus I've snipped your long
demonstration).

> As you can see, these equivalent ratios map to equivalent values.
>
> > >Whether we represent them as a bit string or two numbers written
> > >on paper with a line between them, is irrelevant, as I am sure
> > >you realize.
> >
> > Well, if a rational number were actually nothing but "two numbers
> > written on paper with a line between them", then we would be
> > having a different discussion.
> >
> > >In any case, why don't we just settle on:
> > >
> > >"Computer floating point representations generally consist of a
> > >subset of the ratio of two integral values which do not behave
> > >as rational numbers but are a proper subset, ignoring INF and
> > >NAN types."
> >
> > Unfortunately, I have yet to see how I could settle on any
> > statement that would imply that the set of floating-point numbers
> > is actually a subset of Q.
>
> If this is true, then S is not a subset of Q either.

S may *equally well* be taken to *represent precisely* either a subset
of the set of floating-point numbers on some computer system or a subset
of Q. But, of course, "{1/2, 1/4, 1/8, 0, 2/1, 4/1, 8/1}" is not a
subset of Q any more than the word "apple" has nutritive value.

Regards,
