Question about Arbitrary Precision Logarithms And Number of Correct Digits


ref...@uncg.edu

unread,
Oct 2, 2014, 8:28:15 PM10/2/14
to sage-...@googlegroups.com
Dear All,

I appreciate any help that I can get with this question.  Thank you in advance.

I've been converting some of the algorithms that I wrote using Sage into C.  I noticed that MPFR and Sage do not appear to match when taking logarithms of certain values at arbitrary precision.  As an example, consider the following lines in Sage:

RR = RealField(1000)
x = RR(.001)
a = log(x)

When I do this, Sage says that:

a=-6.90775527898213705205397436405309262280330446588631892809998370290271782903205744070799161526879489502590335212685874590022857639524842026999886210729634506844872162497666404253139968447869959558518051592689613319788653849009866686309465966023963100242321272982309546514680294481817443885821320066631

To compare with MPFR, I compile and run the attached code (which I believe does the same thing as above) from the command line:

gcc -Wall -W -Werror logs.c -o logs -lm -lmpfr -lgmp
./logs

The output of the program is:

-6.90775527898213703123729265233140770652732943274746853066220338998698620110531195719818129463092278351819792580811173018723242118875465470783068399942176413540835294300881628169088830077469356711717015036454074809229573906590734510694077140442685761387606643232451253919766892057408972733372702726501595

When comparing the digits though, it appears that only around the first 16 or so digits are the same (roughly double precision).  To see this, I put the value that Sage returns directly above the value that MPFR returns, so you can spot where the digits start to disagree:

-6.90775527898213705205397436405309262280330446588631892809998370290271782903205744070799161526879489502590335212685874590022857639524842026999886210729634506844872162497666404253139968447869959558518051592689613319788653849009866686309465966023963100242321272982309546514680294481817443885821320066631

-6.90775527898213703123729265233140770652732943274746853066220338998698620110531195719818129463092278351819792580811173018723242118875465470783068399942176413540835294300881628169088830077469356711717015036454074809229573906590734510694077140442685761387606643232451253919766892057408972733372702726501595

Does anyone see why the values would be so different?  I thought that Sage called MPFR to find these values.

Thanks,

Rick

P.S. I'm using Sage version 5.12, gcc version 4.7.3, and as far as I know I'm using the newest version of MPFR.


logs.c

François Bissey

unread,
Oct 2, 2014, 8:32:45 PM10/2/14
to sage-...@googlegroups.com
On Thu, 02 Oct 2014 17:28:15 ref...@uncg.edu wrote:
> When comparing the digits though, it appears that only around the first 16
> or so digits are the same.

That's where you can infer that something has been converted to, or left
as, a double rather than an MPFR real number.

Francois

ref...@uncg.edu

unread,
Oct 2, 2014, 8:35:49 PM10/2/14
to sage-...@googlegroups.com
Francois,

Thank you for your reply.  That's what I figured also, but where do the two programs differ?

Thanks,

Rick

François Bissey

unread,
Oct 2, 2014, 8:44:50 PM10/2/14
to sage-...@googlegroups.com
I don't really know at this stage; I am not familiar enough with that area
of Sage to say whether your "a" will be defined over RR, a standard
NumPy real, or something else.

Francois


ref...@uncg.edu

unread,
Oct 2, 2014, 9:27:30 PM10/2/14
to sage-...@googlegroups.com
Francois,

Never mind, I think I figured it out.  Apparently, Sage initializes 0.001 in MPFR using the fact that 0.001 = 1/1000.  Using the appropriate set functions in MPFR fixes the issue.  Attached is the code that makes Sage and MPFR agree.

If I treat 0.001 as a float or double in MPFR, the value MPFR returns does not agree with Sage.  If instead I treat 0.001 as 1/1000, initialize the rational with mpq, and then take the logarithm, the output agrees.

This makes sense, I guess: 0.001 is not exactly representable in base 2, which causes the issue.

Thanks,

Rick
logs.c

John Cremona

unread,
Oct 3, 2014, 4:19:06 AM10/3/14
to SAGE devel
As an indication that the Sage output was the correct one, you can add the line

sage: exp(a)

and see

0.00100000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000

John
> --
> You received this message because you are subscribed to the Google Groups
> "sage-devel" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to sage-devel+...@googlegroups.com.
> To post to this group, send email to sage-...@googlegroups.com.
> Visit this group at http://groups.google.com/group/sage-devel.
> For more options, visit https://groups.google.com/d/optout.

Jeroen Demeyer

unread,
Oct 3, 2014, 4:34:11 AM10/3/14
to sage-...@googlegroups.com
On 2014-10-03 03:27, ref...@uncg.edu wrote:
> Francois,
>
> Never mind, I think I figured it out. Apparently, sage initializes .001
> in MPFR using the fact that .001 = 1/1000. Using the appropriate set
> methods in MPFR fixes the issue. Attached is the code that makes Sage
> and MPFR agree.

Alternatively, use
mpfr_set_str(x, "0.001", 0, MPFR_RNDN)

Volker Braun

unread,
Oct 3, 2014, 5:42:07 AM10/3/14
to sage-...@googlegroups.com
Real literals are wrapped in their own type on the Sage command line; they are not hardware floats (as in plain Python):

sage: 0.001
0.00100000000000000
sage: type(_)
<type 'sage.rings.real_mpfr.RealLiteral'>

On Friday, October 3, 2014 2:27:30 AM UTC+1, ref...@uncg.edu wrote:
> Apparently, if I treat .001 as a float or double in MPFR the value MPFR returns does not agree with Sage.  If I treat .001=1/1000 though, and initialize the rational with mpq, then take the logarithm the output will agree.
>
> This makes sense I guess.  Perhaps .001 is not exactly representable in base 2 which causes the issue.

Francesco Biscani

unread,
Oct 3, 2014, 10:13:47 AM10/3/14
to sage-...@googlegroups.com
I ran into exactly this some time ago while sanity-checking some high-precision MPFR computations against the results of Wolfram Alpha (which, I presume, also wraps real literals in its own exact representation).

On 3 October 2014 11:42, Volker Braun <vbrau...@gmail.com> wrote:
> Real literals are wrapped in their own type on the Sage command line, they are not hardware-floats (like in plain Python):
>
> sage: 0.001
> 0.00100000000000000
> sage: type(_)
> <type 'sage.rings.real_mpfr.RealLiteral'>

I am curious, why the extra zeroes in that representation in the second line?

kcrisman

unread,
Oct 3, 2014, 10:34:00 AM10/3/14
to sage-...@googlegroups.com
> I ran exactly into this some time ago while sanity-checking some high-precision MPFR computations with the results of Wolfram Alpha (which also wraps real literals into its own, I presume exact, representation).


Default precision for reals is 53 bits, if I recall correctly, so the extra zeros are "assumed".

William A Stein

unread,
Oct 3, 2014, 11:11:21 AM10/3/14
to sage-...@googlegroups.com
On Fri, Oct 3, 2014 at 7:34 AM, kcrisman <kcri...@gmail.com> wrote:
>> I ran exactly into this some time ago while sanity-checking some
>> high-precision MPFR computations with the results of Wolfram Alpha (which
>> also wraps real literals into its own, I presume exact, representation).
>>
>
> Default precision for reals is 53 bits, if I recall correctly, so the extra
> zeros are "assumed".

It's not the *default* precision, but the minimum precision.  There is
no default: the number of bits of precision is determined by the
number of digits you enter.  However, it would be dumb if 1.5 had
way too few bits of precision, so there is a minimum of 53 bits.

sage: 1.2908240834908209348208340283482394820384028340823048203480238.prec()
206
sage: 1.5.prec()
53

We print all the trailing zeros to be more explicit about the
precision.  Compare:

sage: 1.5
1.50000000000000
sage: float(1.5)
1.5

In Python all floats have the same precision, so you get no extra
information from seeing trailing zeros.  In Sage you do:

sage: RealField(300)(1.5)
1.50000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
sage: 1.5
1.50000000000000

And no discussion about precision is complete without mentioning
RealIntervalField:

sage: RealIntervalField(300)(1.5)
1.5000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000?

-- William




--
William Stein
Professor of Mathematics
University of Washington
http://wstein.org
wst...@uw.edu

kcrisman

unread,
Oct 3, 2014, 11:55:06 AM10/3/14
to sage-...@googlegroups.com
>> Default precision for reals is 53 bits, if I recall correctly, so the extra
>> zeros are "assumed".
>
> It's not the *default* precision, but the minimum precision.  There is
> no default.  The number of bits of precision is determined by the
> number of digits you enter.


Thanks for that clarification - I was only thinking of the types of (short) input the OP was mentioning, but naturally this is quite right. 

rjf

unread,
Oct 4, 2014, 11:40:42 AM10/4/14
to sage-...@googlegroups.com
You can more easily check the accuracy of log by using floats that are exactly representable in binary, like 1, 0.5, 0.25, ..., if that is your objective.

0.1, read in and converted to the nearest binary double-precision float, is approximately

1.000000000000000055511151231257827021181583404541e-1

For your information, in Mathematica one can get extra precision by adding enough zeros (more than 16 or so), and in my opinion the design of the significance arithmetic there is deeply flawed.  I hope that Sage and
SymPy do not take Mathematica as a guide for what the user should see.

RJF

