
[Slightly OT]: More on ints and floats


Tim Daneliuk

7 Apr 2003, 18:03:53
OK, I don't want to resurrect another interminable discourse on ints,
floats, and how they are/ought/can be handled in the language. But I
have a sort of theoretical mathematical question which the whole
business brought to mind.

As I understand it, integers and floats are distinct mathematical
entities. A colleague of mine claims that, insofar as we use them in
computing, ints are merely a proper subset of floats. He furthermore
asserts that (again as regards computing) the distinction between
them was made purely as a practical matter because floating point
arithmetic was historically computationally expensive. He argues that
any place one can use an int these days (with cheap FP hardware), one
could use a float, zero-extended to the precision of the machine, and get
equivalent computational results. Say the hardware supported 4 digits of
precision. He is arguing that:


3/4.0000 is equivalent to 3.0000/4.0000

(Never mind the old Python modulo vs. division debate.)

In effect he is saying that, unless there is a practical
performance/cost issue at hand, there is no real reason to
differentiate between ints and floats in practical programming
problems.

As a matter of 'pure' mathematics, I argued that ints and floats are
very different critters. My argument (which is no doubt formally very
weak) is that the integer 3 and the float 3.0000 are different because
of the precision problem. For instance, the integer 3 exists at a single
invariant point on the number line, but 3.0000 represents all numbers
from 2.99995 through 3.00004.

Could one of the genius mathematicians here bring some clarity to this
discussion, please?

TIA,
--
----------------------------------------------------------------------------
Tim Daneliuk tun...@tundraware.com
PGP Key: http://www.tundraware.com/PGP/TundraWare.PGP.Keys.txt

Chad Netzer

7 Apr 2003, 18:55:43
On Mon, 2003-04-07 at 15:03, Tim Daneliuk wrote:
> He argues that
> any place one can use an int these days (with cheap FP hardware), one
> could use a float, zero-extended to the precision of the machine, and get
> equivalent computational results. Say the hardware supported 4 digits of
> precision. He is arguing that:
>
>
> 3/4.0000 is equivalent to 3.0000/4.0000

If you have a 32 bit int and a 32 bit float, the ints are NOT a proper
subset of the floats. You need at least a 64 bit double to have the
same granularity as a 32 bit int. If you use 64 bit ints, then even a
"long double" may not have 64 bits of integer granularity. If you try
to use a float as a file pointer, or a double as a 64 bit long file
pointer, you WILL run into a whole lot of problems on large data sets.
See below:

$ python
Python 2.2.2 (#1, Mar 21 2003, 23:01:54)
>>> import sys
>>> import Numeric as Num
>>>
>>> a = Num.array( sys.maxint, Num.Float32 )
>>> b = Num.array( 1, Num.Float32 )
>>>
>>> sys.maxint
2147483647
>>> a
2147483648.0
>>> a == sys.maxint
0
>>> b
1.0
>>> a-b == sys.maxint
0
>>> a-b
2147483648.0
>>> a-b-b
2147483648.0

So, subtracting 1 from a large valued float does not change its value
(and in fact subtracting 64 from this same float will not change it).
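
The same effect shows up without Numeric; here is a minimal sketch using
only the standard struct module to force a value through a 32-bit float
(the helper name to_float32 is just for illustration):

import struct

def to_float32(x):
    # Round-trip a number through a 32-bit IEEE 754 float.
    return struct.unpack('f', struct.pack('f', x))[0]

n = 2**31 - 1                      # largest 32-bit signed int (sys.maxint on a 32-bit build)
f = to_float32(n)
print(f)                           # 2147483648.0 -- already off by one
print(f == n)                      # False
print(to_float32(f - 1) == f)      # True: subtracting 1 doesn't change it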


That is because you (typically) need a 64 bit double to have all the
granularity of a 32 bit int. And although modern CPUs have made much
progress on doing fast float/double operations, ints are still typically
faster. Often there are more integer execution units than float units
in the chip itself (i.e. more integer operations can be retired than
floats or doubles in the same amount of time). With "vector" floating
point processors, the difference in speed is not as cut and dried, but
the precision problem still exists.


So, I'd say your friend is basically wrong.


Steve Holden

7 Apr 2003, 21:03:25
"Tim Daneliuk" <tun...@tundraware.com> wrote in message
news:fjss6b....@boundary.tundraware.com...

> OK, I don't want to resurrect another interminable discourse on ints,
> floats, and how they are/ought/can be handled in the language. But I
> have a sort of theoretical mathematical question which the whole
> business brought to mind.
>
> As I understand it, integers and floats are distinct mathematical
> entities. A colleague of mine claims that, insofar as we use them in
> computing, ints are merely a proper subset of floats. He furthermore
> asserts that (again as regards computing) the distinction between
> them was made purely as a practical matter because floating point
> arithmetic was historically computationally expensive. He argues that
> any place one can use an int these days (with cheap FP hardware), one
> could use a float, zero-extended to the precision of the machine, and get
> equivalent computational results. Say the hardware supported 4 digits of
> precision. He is arguing that:
>
>
> 3/4.0000 is equivalent to 3.0000/4.0000
>
> (Never mind the old Python modulo vs. division debate.)
>
> In effect he is saying that, unless there is a practical
> performance/cost issue at hand, there is no real reason to
> differentiate between ints and floats in practical programming
> problems.
>
Well, that depends on how each is represented. Suppose you had a platform
with 64-bit integers and 64-bit floats. It would be impossible to represent
every integer as a floating-point number because the mantissa in the floats
will be shorter than the integer precision.
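
To make that concrete, here is a minimal sketch (Python's regular floats
are 64-bit IEEE doubles with a 53-bit mantissa, so integer exactness stops
at 2**53):

n = 2**53
print(float(n) == float(n + 1))    # True: n and n+1 collapse to the same double
print(float(n + 1) - float(n))     # 0.0: the +1 was lost in the conversion
print(float(n + 2) == n + 2)       # True: 2**53 + 2 is representable (the spacing is now 2)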

Of course, now that we are moving towards infinite-precision integers (with
hidden widening to long when required), this problem will always exist.

> As a matter of 'pure' mathematics, I argued that ints and floats are
> very different critters. My argument (which is no doubt formally very
> weak) is that the integer 3 and the float 3.0000 are different because
> of the precision problem. For instance, the integer 3 exists at a single
> invariant point on the number line, but 3.0000 represents all numbers
> from 2.99995 through 3.00004.
>

As a matter of pure mathematics the integers are a subset of the reals,
which are a subset of the complex numbers.

Unfortunately pure mathematics and floating-point arithmetic only tend to
intersect in the brains of people like Tim Peters.

> Could one of the genius mathematicians here bring some clarity to this
> discussion, please?
>

So I guess it's the timbot you want to hear from then, and not me :-)

regards
--
Steve Holden http://www.holdenweb.com/
Python Web Programming http://pydish.holdenweb.com/pwp/
Did you miss PyCon DC 2003? Would you come to PyCon DC 2004?

Terry Reedy

7 Apr 2003, 21:08:52

"Tim Daneliuk" <tun...@tundraware.com> wrote in message
news:fjss6b....@boundary.tundraware.com...
> OK, I don't want to resurrect another interminable discourse on ints,
> floats, and how they are/ought/can be handled in the language. But I
> have a sort of theoretical mathematical question which the whole
> business brought to mind.

> As I understand it, integers and floats are distinct mathematical
> entities. A colleague of mine claims that, insofar as we use them in
> computing, ints are merely a proper subset of floats.

These are two different metaphysical (and quasi-religious) viewpoints.
The entanglement of this issue with that of Python's representation of
int division contributed some unnecessary 'heat' to the int
division debate. (And its removal helped lead to its resolution.)

In my view, counts (0, 1, 2, ...) are the basic numbers -- whether
considered to be primitive (axiomatic) or derived (from functions,
sets, or logic values taken as protonumbers). They are the possible
answers to the question "How many (discrete units)?" Integers are
pairs (differences) of counts, just as rationals are pairs (ratios) of
integers, and complex Xs are pairs of Xs (with yet a third set of
operation definitions). To me, the fact that the parent set can be
matched with a subset of the daughter set does not make it identical
to or 'merely' that subset.
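
A toy sketch of that construction (purely illustrative), with an integer
represented as a pair (a, b) of counts standing for a - b:

def add(x, y):
    # (a, b) + (c, d) = (a + c, b + d)
    return (x[0] + y[0], x[1] + y[1])

def same_integer(x, y):
    # (a, b) and (c, d) denote the same integer iff a + d == b + c,
    # which avoids ever subtracting counts.
    return x[0] + y[1] == x[1] + y[0]

minus_two = (0, 2)
three = (5, 2)
print(same_integer(add(minus_two, three), (1, 0)))   # True: -2 + 3 == 1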

I am aware that in set theory, isomorphism between sets is usually
considered to mean identity. That is why there is (considered to be)
only 'one' empty set. On the other hand, a) number theory does not
have to be governed by the same rules as set theory and b)
'isomorphism' is partly a matter of definition. I could claim that
the fact, for instance, that count 2 does not have a square root while
real 2.0 does makes them not 'fully' isomorphic. This gets to the CS
view that types are defined by the operations allowed on the instances
(at least roughly put).

There is also a third (non-Platonic) view: numbers are purely a human
invention. Thus, decisions about identity are arbitrary, and hence
arguments about the 'truth' of such identities are at least somewhat
nonsensical. My argument above, then, would only make sense if recast
as one about pragmatics.

Take your pick.

Terry J. Reedy


Steven Taschuk

7 Apr 2003, 23:09:03
Quoth Tim Daneliuk:
[...]

> As I understand it, integers and floats are distinct mathematical
> entities. A colleague of mine claims that, insofar as we use them in
> computing, ints are merely a proper subset of floats. [...]

Not if both use the same amount of storage.

For example, with four decimal digits to work with, the usual
unsigned int representation can represent 9999. But with a
floating point representation in which one digit is dedicated to
the exponent, 9999 cannot be represented; it is between the two
representable numbers 9.99e3 and 1.00e4.

> [...] He furthermore
> asserts that (again as regards computing) the distinction between
> them was made purely as a practical matter because floating point
> arithmetic was historically computationally expensive. He argues that
> any place one can use an int these days (with cheap FP hardware), one
> could use a float, zero-extended to the precision of the machine, and get
> equivalent computational results. [...]

This may be true if you assume that overflow and underflow never
occur, and that error never accumulates to significant proportions.

But it is easy to construct counterexamples. Working again with
my notional four-decimal-digit computer, unsigned ints can
correctly compute
9980 + 0017 = 9997
while the floating point representation has rounding error for the
same computation:
9.98e3 + 1.70e1 = 9.99e3
(or 1.00e4, depending on rounding mode).
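
For what it's worth, this notional machine is easy to simulate with the
decimal module found in later Pythons (a sketch: three significant digits
of precision, matching the representation above):

from decimal import Decimal, Context

ctx = Context(prec=3)                        # a toy "four digit" float format

print(ctx.plus(Decimal(9999)))               # 1.00E+4: 9999 itself is not representable
print(ctx.add(Decimal(9980), Decimal(17)))   # 1.00E+4: rounding error; exact ints give 9997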

Perhaps your friend's notion of a "practical programming problem"
is one in which the precision needed is much less than the
precision available. (Lots of problems do indeed have this
property; but lots do not.)

[...]


> As a matter of 'pure' mathematics, I argued that ints and floats are
> very different critters. My argument (which is no doubt formally very
> weak) is that the integer 3 and the float 3.0000 are different because
> of the precision problem. For instance, the integer 3 exists at a single
> invariant point on the number line, but 3.0000 represents all numbers
> from 2.99995 through 3.00004.
>
> Could one of the genius mathematicians here bring some clarity to this
> discussion, please?

I'm hardly a genius mathematician, but here are a few comments.

First, I disagree that the int 3 represents exactly the integer 3.
If int 3 is the result of wrapping int addition, it can be said to
represent some integer congruent to 3 modulo 2^w (where w is the
word size in bits). If the int 3 is the result of a floor
division (that is, round-to-negative-infinity division of ints)
then it can be said to represent some number in [3,4). (This is
exactly analogous to rounding error with floats.)

There are at least two ways to approach these situations formally.

One way is to describe computer addition, etc., as operators
directly on real numbers. For example, you could define the int
addition operator (+) by
x (+) y := (floor(x) + floor(y)) mod 10^4
In this way of thinking, the int 3 and the float 3.0 are not
distinct; the difference is actually between the operators 'int
addition' and 'float addition'.
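
A direct transcription of that operator, as a minimal sketch:

import math

def int_add(x, y, word=10**4):
    # x (+) y := (floor(x) + floor(y)) mod 10^4
    return (int(math.floor(x)) + int(math.floor(y))) % word

print(int_add(3.0, 4.2))     # 7: the fractional part is discarded before adding
print(int_add(9980, 17))     # 9997: exact, no rounding
print(int_add(9999, 1))      # 0: wraparound, the integer analogue of overflow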

Another way is to describe the computer values as entities
distinct from real numbers, and to define computer addition, etc.,
as operators on these entities, with a separate function which
maps from these entities to real numbers. In this way of
thinking, ints and floats may be different kinds of entities.
(The operators are also distinct in this view.)

In any case, the important practical and theoretical question is
this: if you do a computer calculation and get a value x, what can
you infer about the number you would have gotten if you'd done
real arithmetic? For such questions, whether you use fixed-point
or floating-point representations, what precision the
representations offer, and so forth, are vitally important points.

--
Steven Taschuk stas...@telusplanet.net
"Its force is immeasurable. Even Computer cannot determine it."
-- _Space: 1999_ episode "Black Sun"

Tim Peters

7 Apr 2003, 21:45:24
[Steve Holden]
> ...

> As a matter of pure mathematics the integers are a subset of the reals,
> which are a subset of the complex numbers.
>
> Unfortunately pure mathematics and floating-point arithmetic only tend to
> intersect in the brains of people like Tim Peters.

The relationship can be formalized easily enough, but it's not so useful to
do so unless you work in that field: computer floats are a peculiarly lumpy
and finite subset of the rational numbers, but the computer-float basic
operations (+ - * /) "usually" deliver different results than the same-named
operations on rationals. The float operations can be defined precisely, but
the definitions are difficult to work with and reason about. Knuth wasn't
the only one to note that many a good mathematician has given up in
frustration after trying to rigorously analyze just a few lines of
floating-point code.

For that reason I don't think there's much value in conflating computer ints
and computer floats: the identities the latter fail to satisfy are
endlessly surprising to people. Like addition and multiplication aren't
associative for floats, cancellation laws are shot all to hell for floats
(x+y == x+z doesn't imply y==z for floats), and so on. This makes reasoning
hard.
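
For illustration, both failures are a one-liner apiece (any similar
values would do):

a, b, c = 0.1, 0.2, 0.3
print((a + b) + c == a + (b + c))   # False: addition is not associative

x, y, z = 1e30, 1.0, 2.0
print(x + y == x + z)               # True, even though...
print(y == z)                       # ...False: cancellation fails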

OTOH, 754 float arithmetic has an inexact flag, set whenever the result of a
basic float operation differs from what the rational operation on the same
inputs would deliver. Computer languages (including Python) overwhelmingly
failed to give users sane ways to get at this flag (among others). If you
could get at it, it would be a lot easier to use floats for speed and
automatically detect when results became approximations.

Note that the IBM proposal for decimal arithmetic:

http://www2.hursley.ibm.com/decimal/

doesn't distinguish between floats and integers. It's an interesting mix
where the number of significant digits isn't unbounded, but where the
maximum is user-specified. Make that big enough, and you can safely use the
same arithmetic for floating and integer calculations. Figuring out how big
"big enough" is in advance can be tricky, though. Setting the max # of
digits to 999999999 is a good trick -- provided you don't care how long
division takes <wink>.
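
The decimal module that later landed in Python follows this spec, so the
user-settable precision and the inexact flag can be watched directly; a
sketch (the precision values 40 and 80 are arbitrary):

from decimal import Decimal, getcontext, Inexact

ctx = getcontext()
ctx.prec = 40

a = Decimal(10**30 + 1)        # construction from an int is exact
b = a * a                      # needs 61 digits, so it must be rounded
print(ctx.flags[Inexact])      # True: the result is an approximation

ctx.clear_flags()
ctx.prec = 80                  # "big enough" for this calculation
b = a * a
print(ctx.flags[Inexact])      # False: integer-valued arithmetic stayed exact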


Lulu of the Lotus-Eaters

7 Apr 2003, 22:07:57
Tim Daneliuk <tun...@tundraware.com> wrote previously:

|As I understand it, integers and floats are distinct mathematical
|entities. A colleague of mine claims that, insofar as we use them in
|computing, ints are merely a proper subset of floats.

Floats are peculiar creatures. They really are not much like Real or
Rational numbers, even though it is often convenient to pretend they
are. The trick about floating point numbers is that although they are
extremely useful for representing real-life (fractional) quantities,
operations on them do not obey the arithmetic rules we learned in middle
school: associativity, transitivity, commutativity; moreover, many very
ordinary-seeming numbers can be represented only approximately with
floating point numbers. For example:

>>> 1./3
0.33333333333333331
>>> .3
0.29999999999999999
>>> 7 == 7./25 * 25
0
>>> 7 == 7./24 * 24
1

Tim knows this, of course, but it is worth emphasizing. Moreover, a
quote the Timbot found for this list a couple years ago is worth
reiterating:

Many serious mathematicians have attempted to analyze a
sequence of floating point operations rigorously, but found
the task so formidable that they have tried to be content
with plausibility arguments instead.
-- Donald Knuth (_The Art of Computer Programming_, Third Edition,
Addison-Wesley, 1997; ISBN: 0201896842, vol. 2, p. 229)

Yours, Lulu...

--
Keeping medicines from the bloodstreams of the sick; food from the bellies
of the hungry; books from the hands of the uneducated; technology from the
underdeveloped; and putting advocates of freedom in prisons. Intellectual
property is to the 21st century what the slave trade was to the 16th.

L.C.

8 Apr 2003, 02:28:37
I would ask a different question. What exactly would be the BENEFIT
of doing everything in FP?

I believe programming usually deals with practical performance/cost
issues, so discarding that to make a point makes the issue sort
of academic anyway.

Integers are more efficient in storage space, as they do not have to
store an exponent. They are also precise, as you mentioned. Floats
have the bad habit of not fitting properly into our binary formats and
thus propagating errors even within their precision range (try
representing 0.3 or 0.1).

After all, they make hammers in different sizes for a reason.

--
L.C. (Laurentiu C Badea)

Tim Daneliuk <tun...@tundraware.com> wrote in message news:<fjss6b....@boundary.tundraware.com>...

Tim Daneliuk

8 Apr 2003, 12:50:08

As a side note, this question was ultimately motivated by an entirely
different sort of "pure math" question: Does (integer) 1 = 1.0?
Engineers, physicists, and computer scientists all would
say "no" because of the precision problem with the latter number.
I am unclear, though, on whether a mathematician understands
1.0 to mean 1.0000000... (a single number) or the _neighborhood_
around 1 with one digit precision.


Thanks to all who took the time to respond...

Cameron Laird

8 Apr 2003, 13:14:31
In article <kauu6b....@boundary.tundraware.com>,
Tim Daneliuk <tun...@tundraware.com> wrote:
.
.

.
>As a side note, this question was ultimately motivated by an entirely
>different sort of "pure math" question: Does (integer) 1 = 1.0?
>Engineers, physicists, and computer scientists all would
>say "no" because of the precision problem with the latter number.
>I am unclear, though, on whether a mathematician understands
>1.0 to mean 1.0000000... (a single number) or the _neighborhood_
>around 1 with one digit precision.
.
.
.
I'm a (former) mathematician. I've always taken "1.0"
to be a local, linguistic act, requiring me to infer
the speaker's context. "1.0" means different things in
the mouths of different people.

The open interval (0.95, 1.05) is one of the plausible
interpretations. It's not among the first half-dozen
likely to come to my mind, though; maybe in the top
ten ...
--

Cameron Laird <Cam...@Lairds.com>
Business: http://www.Phaseit.net
Personal: http://phaseit.net/claird/home.html

Michael Hudson

8 Apr 2003, 13:27:07
Tim Daneliuk <tun...@tundraware.com> writes:

> I am unclear, though, on whether a mathematician understands 1.0 to
> mean 1.0000000... (a single number) or the _neighborhood_ around 1
> with one digit precision.

I'm a mathematician, albeit not a numerical analyst, and I don't think
considering a float as the interval of real numbers to which it is the
closest floating point approximation is a particularly helpful point
of view.

Floats are perfectly normal rational numbers; it's the *operations*
which are weird.

Cheers,
M.

--
The rapid establishment of social ties, even of a fleeting nature,
advance not only that goal but its standing in the uberconscious
mesh of communal psychic, subjective, and algorithmic interbeing.
But I fear I'm restating the obvious. -- Will Ware, comp.lang.python

Lulu of the Lotus-Eaters

8 Apr 2003, 14:03:05
Tim Daneliuk <tun...@tundraware.com> wrote previously:
|I am unclear, though, on whether a mathematician understands
|1.0 to mean 1.0000000... (a single number) or the _neighborhood_
|around 1 with one digit precision.

It depends what type of mathematician (and what mathematical context).
"Pure" math deals with approximations and confidence levels less often
than does applied math. So in those areas, mathematicians are likely to
have a first thought of Real or Rational numbers. In statistics,
however, I assume every mathematician would read 1.0 as both a number
and a precision.

Yours, Lulu...

--
mertz@ _/_/_/_/ THIS MESSAGE WAS BROUGHT TO YOU BY: \_\_\_\_ n o
gnosis _/_/ Postmodern Enterprises \_\_
.cx _/_/ \_\_ d o
_/_/_/ IN A WORLD W/O WALLS, THERE WOULD BE NO GATES \_\_\_ z e


Lulu of the Lotus-Eaters

8 Apr 2003, 14:09:29
|Floats are perfectly normal rational numbers; it's the *operations*
|which are weird.

Aren't you being slightly disingenuous here?

Any numbers are more-or-less only meaningful in terms of the operations
they enter into. The Peano integers, or the Cauchy or Dedekind Reals,
are not really interesting because of the beauty of their construction,
but rather because it is relatively easy to define operations on them
that match our intuitions about how arithmetic is -supposed- to behave.
The operations are the whole point of the entities.

Likewise, a float isn't *really* a number at all. It's really just a
pattern of bits, such as '01000001010000010100000101000001'. Maybe
those bits are a way of representing the string "AAAA", or maybe they
are a way of representing a -thing- that enters into operations that act
(a little bit) like addition, multiplication, etc. Just being a bit
pattern doesn't make it a *number* at all.
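
For illustration, those particular 32 bits really can be read either way
(a sketch using struct, big-endian for clarity):

import struct

bits = b'AAAA'                        # the bit pattern 01000001 01000001 01000001 01000001
print(struct.unpack('>i', bits)[0])   # 1094795585, read as a 32-bit int
print(struct.unpack('>f', bits)[0])   # ~12.078, read as a 32-bit float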

Greg Ewing (using news.cis.dfn.de)

8 Apr 2003, 21:27:41
L.C. wrote:
> I would ask a different question. What exactly would be the BENEFIT
> of doing everything in FP ?

Simpler hardware and instruction set?

On the Burroughs 6000 series, integers were just denormalised
floats, and there was only one set of ADD, SUB etc. instructions
which would take any combination of ints, floats and doubles
and Do The Right Thing. (There were tag bits which distinguished
floats from doubles, but not ints from floats -- they were
both just "numbers".)

Not sure whether it actually made the hardware any simpler
(Burroughs hardware tended, in fact, to be insanely complicated)
but Burroughs obviously thought it was a good idea.

--
Greg Ewing, Computer Science Dept,
University of Canterbury,
Christchurch, New Zealand
http://www.cosc.canterbury.ac.nz/~greg

Bengt Richter

8 Apr 2003, 21:26:51
On Tue, 08 Apr 2003 17:14:31 -0000, cla...@lairds.com (Cameron Laird) wrote:

>In article <kauu6b....@boundary.tundraware.com>,
>Tim Daneliuk <tun...@tundraware.com> wrote:
> .
> .
> .
>>As a side note, this question was ultimately motivated by an entirely
>>different sort of "pure math" question: Does (integer) 1 = 1.0?
>>Engineers, physicists, and computer scientists all would
>>say "no" because of the precision problem with the latter number.

IMO there is no "precision problem" with the unique abstract number
normally associated with "1.0" or "1" or "ord('\x01')" or "I" or "one"
or "ein" or "uno" or "+1e0" or ... ;-)

>>I am unclear, though, on whether a mathematician understands
>>1.0 to mean 1.0000000... (a single number) or the _neighborhood_
>>around 1 with one digit precision.
> .
> .
> .
>I'm a (former) mathematician. I've always taken "1.0"
>to be a local, linguistic act, requiring me to infer
>the speaker's context. "1.0" means different things in
>the mouths of different people.
>

Well put. Of course, any sign whose inferred purpose was to
get your attention can be substituted for "1.0" in the above,
generalizing from speech to include other communicative acts ;-)

>The open interval (0.95, 1.05) is one of the plausible
>interpretations. It's not among the first half-dozen
>likely to come to my mind, though; maybe in the top
>ten ...

Half open [0.95, 1.05) or [1.0, 2.0) seem more useful to me
if the intent is to imply intervals, but a single exact number
is the interpretation I prefer (and is in any case necessary
as a reference for the definition of the intervals).

Which brings me to my point: IMO the distinction between
abstract entities vs. their various concrete representations
is crucial to communicating clearly about ... well, just about
anything.

E.g., concretely, you can use your hands to represent floating
point .01 to 99. very simply: Use your fingers to represent
the digits in BCD, and expose one thumb to indicate the decimal
point position. Or if you like binary, you can take advantage
of your opposable thumb to put one between selected fingers
for .00000001 through 10000000. base 2. Or you can use various
squiggly marks made visible to the communicatee. Or you can tap
out a pattern on someone's knee, say in morse code, or do the same
with audible beeps, or hand someone a tray with several bowls
of pebbles, etc., etc. Not to mention intermediate hidden (to
human senses) physical representations that play a role in
storage and transmission of numeric information, such as in computers.

The point is, a single abstract entity can have any number of
representations. "Floating point number" can thus be interpreted
to indicate a selected state of one of the many possible concrete
floating point representation mechanisms, together with the
exact abstract numerical value represented.

If you want to refer to an exact interval related to the exact value,
that requires additional information, which you can supply implicitly
or explicitly by context (cf Cameron's comment above). IMO a single
exact abstract value typically retains its primacy as an interpretation
of the representation, whatever additional definitions extend relative
to that through some special convention of interpretation and implicit
information.

Another point that may become clearer if one thinks about abstractions
vs representations is that computers necessarily implement concrete
transformations of _representations_ -- i.e., they do not directly implement
abstract mathematical operations. Pouring the contents of two bowls of pebbles
into a single empty bowl can represent integer addition, but when a bowl
overflows, that is a special effect of the _implementation_ of addition, not of
addition as an abstract integer operation.

The special effects encountered in transforming floating point representations
to implement abstract mathematical operations are also mostly properties of
the representation mechanisms, not of the abstract math, but they happen at
many points _throughout_ the interval of real numbers that contains the domain
of operands and range of results with exact floating point representations. The
special effects don't just happen at the extremes, as with integers.
This makes it trickier to think about.

The special effects derive from the necessity of choosing a representation
of an exact number as a result where a representation of the exact mathematical
result may not be available. Often it _is_ available, and the result _is_ an exact
representation of the mathematically correct result, but often, too, that's not possible
for a particular representation mechanism.

I.e., the representations of two numbers in floating point can correspond exactly
to intended abstract operands, but it is possible that the implementation of addition
will transform the FPU state to yield a resulting FPU representation (which necessarily
has a specific exact abstract numerical interpretation) whose abstract value is not
identical to the expected mathematically correct value in the abstract realm.
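
Concretely, a two-line sketch: both operands below are exact doubles, yet
only the first sum has an exactly representable result:

print(1.5 + 2.25 == 3.75)    # True: the abstract sum is itself representable
print(1e16 + 1.0 == 1e16)    # True: the abstract sum 10**16 + 1 is not,
                             # so the implementation rounds back to 1e16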

[OT] BTW, when thinking about such realities as wars and foreign populations, or immediate
surroundings, including one's body, etc., it is also interesting to note that there are
representation issues involved. I.e., (IMO) one's subjective experience of reality is an experience
of one's state of mind -- however that arose -- and thus is an experience of a _representation_
of reality, not of reality itself[1]. And perception of reality has a context, like perception of "1.0".

People are so full of context ;-) Otherwise how could they disagree so violently about reality?
(Of course the above reflects my internal representations of reality and results from mechanisms
for transforming those representations ;-)

[1] Except insofar as the state of mind itself is a part of reality, of course.

Regards,
Bengt Richter

James Gregory

9 Apr 2003, 14:47:32
On Tue, 2003-04-08 at 18:27, Michael Hudson wrote:
> Tim Daneliuk <tun...@tundraware.com> writes:
>
> > I am unclear, though, on whether a mathematician understands 1.0 to
> > mean 1.0000000... (a single number) or the _neighborhood_ around 1
> > with one digit precision.
>
> I'm a mathematician, albeit not a numerical analyst, and I don't think
> considering a float as the interval of real numbers to which it is the
> closest floating point approximation is a particularly helpful point
> of view.

To my mind, 1.0 is a member of R, 1 is a member of Z. But I do crazy
things like think of "1/2" as the number that yields 1 when multiplied
by 2. Not 0.5. Nor the quantity of apple you get when sharing an apple
with one other person.

But it's context dependent, and that's one of the properties that having
all "numbers" as objects affords you. It means that my software can tell
if a particular number is a member of GF(5) (The Galois Field of 5
elements), and then it can infer that "1/2" actually means 3 (3 * 2 = 6,
6 % 5 = 1, as per the above definition).
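
For illustration, a brute-force sketch of that inference (fine for a
field this small; the names are made up):

p = 5
half = None
for x in range(p):
    if (2 * x) % p == 1:     # "1/2" is whatever gives 1 when multiplied by 2
        half = x
        break
print(half)                  # 3
print((half * 2) % p)        # 1, as per the definition above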

What does 1/2 mean to you in Z? Does it mean the same thing in R?

(Z is the set of all integers, R is the set of all reals in case it
wasn't clear)

James.

Michael Hudson

9 Apr 2003, 06:13:41
Lulu of the Lotus-Eaters <me...@gnosis.cx> writes:

> |Floats are perfectly normal rational numbers; it's the *operations*
> |which are weird.
>
> Aren't you being slightly disingenuous here?

Maybe. But you've chopped the bit of my post where I was making my
real point: floating point numbers (in the computer sense) are not best
thought of as intervals of real numbers.

> Any numbers are more-or-less only meaningful in terms of the operations
> they enter into. The Peano integers, or the Cauchy or Dedekind Reals,
> are not really interesting because of the beauty of their construction,

Well, in the case of the reals, the construction is of no interest
because there is only one model up to unique isomorphism...

And you're talking to the wrong mathematician: I do not relish the
Bourbaki-style games of building all of mathematics up from axiomatic
set theory. I'll grudgingly grant that it may be important that it
can be done, but all attempts to show me the constructions will be
studiously ignored.

> but rather because it is relatively easy to define operations on them
> that match our intuitions about how arithmetic is -supposed- to behave.
> The operations are the whole point of the entities.

Well, yeah, but there's a perfectly well defined, injective map from

{CS-floats} -> Q

but it doesn't commute with the arithmetic operations. That's all I
meant.
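
With the fractions module of later Pythons you can watch that map fail to
commute; a small sketch:

from fractions import Fraction

a, b = 0.1, 0.2
# Map each float to its exact rational value, then add...
exact_sum = Fraction(a) + Fraction(b)
# ...versus add the floats first, then map the (rounded) result.
float_sum = Fraction(a + b)
print(exact_sum == float_sum)        # False: the map doesn't commute with +
print(float_sum - exact_sum)         # the tiny discrepancy, as an exact rational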

> Likewise, a float isn't *really* a number at all. It's really just a
> pattern of bits, such as '01000001010000010100000101000001'. Maybe
> those bits are a way of representing the string "AAAA", or maybe they
> are a way of representing a -thing- that enters into operations that act
> (a little bit) like addition, multiplication, etc. Just being a bit
> pattern doesn't make it a *number* at all.

Well, be like that then :-) It depends what you mean by "float". I
always try very hard to know which category I'm working in. Is {-1,
1} just a random subset of the set of integers? No, it's the group of
units of the initial object of the category of commutative rings!

This is one thing Grothendieck got absolutely right: one should
always, always, always consider the relative situation (i.e. think
about morphisms rather than maps). So a float is a bit pattern such
as '01000001010000010100000101000001', but to be a float it has to
come with an arrow that explains its interpretation.

Cheers,
M.

--
ZAPHOD: Listen three eyes, don't try to outwierd me, I get stranger
things than you free with my breakfast cereal.
-- The Hitch-Hikers Guide to the Galaxy, Episode 7

Michael Hudson

9 Apr 2003, 06:14:41
James Gregory <ja...@anchor.net.au> writes:

> On Tue, 2003-04-08 at 18:27, Michael Hudson wrote:
> > Tim Daneliuk <tun...@tundraware.com> writes:
> >
> > > I am unclear, though, on whether a mathematician understands 1.0 to
> > > mean 1.0000000... (a single number) or the _neighborhood_ around 1
> > > with one digit precision.
> >
> > I'm a mathematician, albeit not a numerical analyst, and I don't think
> > considering a float as the interval of real numbers to which it is the
> > closest floating point approximation is a particularly helpful point
> > of view.
>
> To my mind, 1.0 is a member of R, 1 is a member of Z. But I do crazy
> things like think of "1/2" as the number that yields 1 when multiplied
> by 2. Not 0.5. Nor the quantity of apple you get when sharing an apple
> with one other person.
>
> But it's context dependent, and that's one of the properties that having
> all "numbers" as objects affords you. It means that my software can tell
> if a particular number is a member of GF(5) (The Galois Field of 5
> elements), and then it can infer that "1/2" actually means 3 (3 * 2 = 6,
> 6 % 5 = 1, as per the above definition).
>
> What does 1/2 mean to you in Z?

Nothing.

> Does it mean the same thing in R?

Well, no :-)

Cheers,
M.

--
The above comment may be extremely inflamatory. For your
protection, it has been rot13'd twice.
-- the signature of "JWhitlock" on slashdot

Alex Martelli

12 Apr 2003, 16:08:53
Cameron Laird wrote:
...

> I'm a (former) mathematician. I've always taken "1.0"
> to be a local, linguistic act, requiring me to infer
> the speaker's context. "1.0" means different things in
> the mouths of different people.

I'm an engineer, and I fully concur with you (once you
generalize "different people" to include the SAME people
speaking ``with different hats on'', so to speak).

When I speak about a given software program, and I assert
that "1.0 is much better than 1.1, 'cause they added far
more bugs than features in the upgrade", I expect it to be
quite clear that I'm using "release numbers" -- strange
beasts that can well have more than one decimal point
(e.g., the interlocutor might well object "That's unfair,
you're really comparing 1.0.3 with 1.1.0!"...;-).

In another context, though, I might well use "1.0" as being
distinct from "1.00" because the latter implies I've measured
"it" (whatever "it" we're talking about) "to two decimal
places" (i.e., in such a context saying "the length is
1.00" DOES mean I claim it's between 0.995 and 1.005,
while if I said "the length is 1.0" I'd be making a far
less precise claim). That's perhaps the sense in which
some people say that a floating-point number "stands
for an interval" (?).

In another context yet, I might well be using "1.0" as
quite equivalent to "float(1)" (after all, "1.0" IS the
form in which both str and repr render float(1)...:-).
I'd be likely to use that in just about any programming
language because I never remember what languages let me
skip the trailing zero AND "1." is less readable anyway;-).


Alex

Martin Maney

12 Apr 2003, 23:13:28
Lulu of the Lotus-Eaters <me...@gnosis.cx> wrote:
> Floats are peculiar creatures.

The best description I've ever seen was this classic: floats are like a
pile of sand; every time you move one you lose a little sand and pick up
a little dirt.
