On Fri, Oct 3, 2014 at 7:34 AM, kcrisman <
kcri...@gmail.com> wrote:
>> I ran exactly into this some time ago while sanity-checking some
>> high-precision MPFR computations with the results of Wolfram Alpha (which
>> also wraps real literals into its own, I presume exact, representation).
>>
>
> Default precision for reals is 53 bits, if I recall correctly, so the extra
> zeros are "assumed".
It's not the *default* precision, but the minimum precision. There is
no default. The number of bits of precision is determined by the
number of digits you enter. However, it would be dumb if 1.5 had
way too few bits of precision, so the precision is determined by the
number of digits entered, with a minimum of 53 bits.
sage: 1.2908240834908209348208340283482394820384028340823048203480238.prec()
206
sage: 1.5.prec()
53
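The rule above can be sketched in plain Python. This is an approximation of what Sage does (roughly log2(10) bits per significant digit, floored at 53), not Sage's actual parser code:

```python
from math import ceil, log2

def literal_precision(literal: str, min_bits: int = 53) -> int:
    """Approximate the bit precision Sage assigns to a real literal:
    about log2(10) ~ 3.32 bits per digit, with a floor of min_bits."""
    digits = literal.replace(".", "").lstrip("-").lstrip("0") or "0"
    return max(min_bits, ceil(len(digits) * log2(10)))

print(literal_precision("1.5"))  # 53 -- the floor kicks in
print(literal_precision(
    "1.2908240834908209348208340283482394820384028340823048203480238"))  # 206
```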
We print all the trailing zeros to be more explicit about the
precision. Compare:
sage: 1.5
1.50000000000000
sage: float(1.5)
1.5
In Python all floats have the same precision, so you get no extra
information by seeing trailing zeros. In Sage you do:
sage: RealField(300)(1.5)
1.50000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
sage: 1.5
1.50000000000000
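On an IEEE-754 platform (which is essentially every platform CPython runs on), the fixed precision of Python floats is visible directly:

```python
import sys

# Every Python float is an IEEE-754 double with a 53-bit mantissa,
# no matter how many digits appear in the literal.
print(sys.float_info.mant_dig)  # 53

# Extra digits are silently rounded away at parse time:
print(float("1.2908240834908209348208340283482394820384028340823048203480238"))
```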
And no discussion about precision is complete without mentioning
RealIntervalField:
sage: RealIntervalField(300)(1.5)
1.5000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000?
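The trailing "?" signals that the printed value is an enclosure: the true result lies somewhere in a stored interval. Here is a minimal sketch of that idea in plain Python using the decimal module with outward rounding; it is only an illustration, not Sage's MPFI-based RealIntervalField implementation:

```python
from decimal import Context, Decimal, ROUND_FLOOR, ROUND_CEILING

# Keep [lo, hi] and round lo down, hi up, so the true mathematical
# result is always enclosed. Low precision makes the rounding visible.
down = Context(prec=5, rounding=ROUND_FLOOR)
up = Context(prec=5, rounding=ROUND_CEILING)

def iadd(a, b):
    """Add intervals a = (lo, hi) and b = (lo, hi) with outward rounding."""
    return (down.add(a[0], b[0]), up.add(a[1], b[1]))

# 1/3 is not exactly representable, so the two endpoints differ:
third = (down.divide(Decimal(1), Decimal(3)),
         up.divide(Decimal(1), Decimal(3)))

acc = (Decimal(0), Decimal(0))
for _ in range(3):
    acc = iadd(acc, third)
print(acc)  # a tight interval guaranteed to contain 1
```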
-- William
>
>>
>> On 3 October 2014 11:42, Volker Braun <
vbrau...@gmail.com> wrote:
>>>
>>> Real literals are wrapped in their own type on the Sage command line,
>>> they are not hardware-floats (like in plain Python):
>>>
>>> sage: 0.001
>>> 0.00100000000000000
>>> sage: type(_)
>>> <type 'sage.rings.real_mpfr.RealLiteral'>
>>
>>
>> I am curious, why the extra zeroes in that representation in the second
>> line?
>
--
William Stein
Professor of Mathematics
University of Washington
http://wstein.org
wst...@uw.edu