Clearly there are some numerical noise issues, which need to be investigated but
are probably trivial. What is odd, though, is that the "Got" value lacks the leading zero.
Has anyone seen this sort of problem before, where a leading zero is missing?
Any ideas of a likely cause?
You wrote on 4 August 2010 at 14:49:47:
> sage: maxima('asinh(1.0)')
> Clearly there are some numerical noise issues, which need to be
> investigated but are probably trivial.
I think these are more than trivial issues. You can't require maxima
or any other numerical program to return the same results up to the 16th
digit with every combination of CPU, compiler, and optimization settings.
You can get 0.......543 when you compile with -O1, and 0.......999
when you compile with -O2. Well, you can try to make the results the same
down to the last bit, but that is hard to achieve even with optimization disabled.
The right move is to compare results using only 10-14 leading digits
(depending on the problem and its stability properties). However, I don't
know how to do this with the doctest framework. Python tries to output
numbers at full precision, and there is no way to tell the doctest
framework to compare decimal fractions using only N leading digits.
With best regards,
Yes, there is. The doctest below will compare only the digits listed:
If you look through the Sage library, you'll see lots of places where
we use the triple-dot "wildcard" to ignore the last few digits of a
numerical result.
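As a plain-Python illustration (outside Sage), the wildcard is doctest's standard ELLIPSIS option; the helper function and its doctest here are made up for the demonstration:

```python
import doctest

def asinh_example():
    """Only the stable leading digits are written out; "..." swallows
    whatever noisy tail this platform happens to produce.

    >>> import math
    >>> math.asinh(1.0)  # doctest: +ELLIPSIS
    0.88137358701954...
    """

# Run just this docstring's examples and count failures.
runner = doctest.DocTestRunner(verbose=False)
results = runner.run(doctest.DocTestFinder().find(asinh_example)[0])
print(results.failed)  # 0
```

The `# doctest: +ELLIPSIS` directive enables the option for that single example, so the rest of the docstring is still compared exactly.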
You wrote on 5 August 2010 at 20:17:11:
>> (depending on problem and its stability properties). However I don't
>> know how to do it using doctest framework. Python tries to output
>> numbers with full precision, and there is no way to tell doctest
>> framework to compare decimal fractions using only N leading digits.
> Yes, there is. The doctest below will compare only the digits listed:
> sage: maxima('asinh(1.0)')
Thanks! I hadn't noticed this ELLIPSIS option while reading doctest's
documentation. Everything becomes a bit simpler now :)
Yes, that's what I mean by trivial. But the missing leading zero is odd.
With all the numerical noise issues I've seen in Sage, the three dots
solve it. So if we expect
we can change that to
and the test will pass.
However, what if we get
That's very close to 1, but not a single digit is the same. I'm not
sure how one would handle that case.
You wrote on 5 August 2010 at 20:44:53:
> With all the numerical noise issues I've seen in Sage, the three dots
> solve it. So if we expect
> but get
> we can change that to
> and the test will pass.
> However, what if we get
> That's very close to 1, but not a single digit is the same. I'm not
> sure how one would handle that case.
Hmmm... I hadn't thought about this situation yet. We definitely can't
solve this problem with any kind of regular expression. One possible
solution is to round the data before printing. So both 1.00000000000001
and 0.99999999999999 will become 1.000000.
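For instance, formatting to six decimal places collapses both noisy values to the same string:

```python
# Both noisy results print identically once rounded to six decimals:
print('%.6f' % 1.00000000000001)   # 1.000000
print('%.6f' % 0.99999999999999)   # 1.000000
```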
As for me (testing the interaction between ALGLIB and Sage), I can write a
function which prints arrays/matrices rounded to a specified number of
digits. Such doctests will look like
> sage: a = sqrt(2)
> sage: my_own_print(a,4)
But it will be more test than doc (less human-readable). I don't know
whether it is worth using beyond the Sage wrapper for ALGLIB.
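A minimal sketch of such a helper; the name my_own_print comes from the doctest sample above, but the implementation here is hypothetical:

```python
def my_own_print(x, digits):
    """Print a number, or a list/tuple of numbers, rounded to `digits`
    decimal places, so the printed output is insensitive to noise in
    the trailing digits."""
    if isinstance(x, (list, tuple)):
        print([round(float(v), digits) for v in x])
    else:
        print(round(float(x), digits))

my_own_print(2 ** 0.5, 4)                                # 1.4142
my_own_print([1.00000000000001, 0.99999999999999], 6)    # [1.0, 1.0]
```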
> Hmmm... I hadn't thought about this situation yet. We definitely can't
> solve this problem with any kind of regular expression. One possible
> solution is to round the data before printing. So both
> 1.00000000000001 and 0.99999999999999 will become 1.000000.
...however, we can still have problems when rounding X=0.000499999 to
three digits. With the original X we get 0.000, but with a perturbation
as small as 0.000000002 we round to 0.001.
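The boundary effect is easy to reproduce in plain Python:

```python
x = 0.000499999
y = x + 0.000000002    # a perturbation of 2e-9 crosses the 0.0005 boundary
print(round(x, 3))     # 0.0
print(round(y, 3))     # 0.001
```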
My automated Python-ALGLIB tests just calculate the difference between
the desired and actual answers, but they are not doctests; they are just
automatically generated Python scripts. We can't calculate differences
that way within a doctest.
Thank you. That makes perfect sense.
BTW, do you have any ideas why the second failure at #9099 might occur?
sage -t -long devel/sage/sage/symbolic/expression.pyx
Why should the zero be missing in the case observed on Solaris x86?
In other words, why do we see .8813735870195429 instead of
0.8813735870195429? The exact value of the last digit is unimportant,
and one clearly can't expect to get all digits spot on, but one would
hope that a zero would be printed when it's needed.
I've seen a couple of approaches used in this case.
First, simply change the doctest. For example, if you're testing a
numerical root-finder, use a polynomial such as x^2-x, whose roots 0
and 1 print exactly.
Second, include the test code in the doctest, turning the output into a
tolerance check:
sage: abs(foo() - 1) < 1e-12
True
This is less preferred, because while it keeps the doctest useful as a
test, it can reduce its value as documentation (especially if the test
code is more complicated than the above).
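In plain Python terms the pattern looks like this; foo here is a stand-in for whatever numerical routine is under test:

```python
import math

def foo():
    """A numerical routine whose doctest asserts a tolerance instead of
    matching digits textually.

    >>> abs(foo() - 1) < 1e-12
    True
    """
    return math.sin(math.pi / 2)   # equals 1.0 up to rounding

# The same check the doctest performs, usable outside doctest too:
assert abs(foo() - 1) < 1e-12
```

The doctest now passes regardless of whether the platform produces 0.99999999999999 or 1.00000000000001, at the cost of hiding the actual value from the reader.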
Well, it seems likely that the issue is within maxima (either in
maxima itself or in the Common Lisp implementation). To verify that
the problem has nothing to do with Sage, you can use "sage -maxima"
and try the test case on both machines:
cwitty@red-spider:~/sage$ ./sage -maxima
;;; Loading #P"/home/cwitty/sage/local/lib/ecl/defsystem.fas"
;;; Loading #P"/home/cwitty/sage/local/lib/ecl/cmp.fas"
;;; Loading #P"/home/cwitty/sage/local/lib/ecl/sysfun.lsp"
Maxima 5.20.1 http://maxima.sourceforge.net
using Lisp ECL 10.2.1
Distributed under the GNU Public License. See the file COPYING.
Dedicated to the memory of William Schelter.
The function bug_report() provides bug reporting information.
(Don't forget the semicolon on the "asinh(1.0);" line; without that,
maxima will wait forever for more input.)
If the results do differ with "sage -maxima", I guess the next step
would be to report the problem to the maxima mailing list or the bug
tracker (bearing in mind that the problem may actually be in the
Common Lisp implementation we use).
I've never done this, but the most logical thing to me seems to be to look at
the absolute magnitude of the relative error in all cases except when the
expected value is 0. But I guess in cases where the numbers are not close to 1,
or other special cases one could think up, the current system is probably the
best option.
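For reference, Python's math.isclose (added in Python 3.5, long after this thread) implements exactly that combination: relative error by default, with an absolute tolerance as the fallback for expected values near zero:

```python
import math

# Relative error absorbs noise in the trailing digits:
print(math.isclose(0.99999999999999, 1.00000000000001, rel_tol=1e-10))  # True
# Against an expected value of exactly 0, relative error is useless...
print(math.isclose(1e-17, 0.0, rel_tol=1e-10))                          # False
# ...so an absolute tolerance serves as the fallback:
print(math.isclose(1e-17, 0.0, abs_tol=1e-12))                          # True
```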
Carl, thank you for your input.
I tried what you suggested, and see the result you show above on the Solaris 10
SPARC system, and a similar result, but lacking the leading zero, on the Solaris
10 x86 system.
I've stuck the full outputs at
and have emailed both the Maxima and ECL mailing lists.
Unfortunately, we are not running the latest versions of either ECL or Maxima,
so I expect I'm likely to be asked "do you see this in the latest version?"
IIRC, there were some problems when we tried to update ECL and Maxima recently,
so it's not as simple as one might like to determine this.
That would be nice. But it's a bit tricky to write, since the doctest
check happens solely on the basis of string comparison; to do
something clever like looking at relative error would require some
sort of marking in the expected output to turn on the relative-error
check, and some sort of parser to find the numbers in the
expected/found outputs, and possibly some sort of marking on a
per-number basis in the expected output to say what relative error is
allowed on that number. (If the expected output is 1.23456*x^10000,
with a relative error allowed of 1e-3, you probably don't want to
allow 1.23456*x^10001 !)
I would be in favor of something like this, if we could decide on a
set of markings in the expected output that didn't interfere too much
with the documentation aspect; but nobody has written it yet.
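A rough sketch of such a checker, without any marking syntax: extract the floats from both strings, require everything else to match exactly, and compare the numbers by relative error. All names here are hypothetical; this is not part of Sage's doctest framework.

```python
import math
import re

# Matches decimal floats like 1.23456 or -4.5e-3 (but not bare integers).
FLOAT_RE = re.compile(r'[-+]?\d+\.\d+(?:[eE][-+]?\d+)?')

def outputs_match(expected, got, rel_tol=1e-10):
    """Return True if the two output strings agree, treating embedded
    floating-point literals as equal when they are within rel_tol.
    Integer literals (like the exponent in 1.23456*x^10000) still have
    to match exactly, as argued above."""
    # The non-numeric skeletons must be identical; this also forces the
    # two strings to contain the same number of floats.
    if FLOAT_RE.sub('#', expected) != FLOAT_RE.sub('#', got):
        return False
    return all(math.isclose(float(a), float(b), rel_tol=rel_tol)
               for a, b in zip(FLOAT_RE.findall(expected),
                               FLOAT_RE.findall(got)))

print(outputs_match('0.99999999999999', '1.00000000000001'))   # True
print(outputs_match('1.23456*x^10000', '1.23456*x^10001'))     # False
```

Note how the second example is rejected: the differing exponents live in the non-numeric skeleton, so the relative-error allowance never applies to them.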
The three-dots notation is much, much stupider than this... it is just
a wildcard that will match an arbitrary string of characters (the
equivalent of .* as a regex, or * as shell globbing). And we didn't
even write it; it's a standard part of the Python doctest system.
And the three-dot notation is much prettier to read, too.
Personally, I just choose doctests whose results are not subject to
the 1.0000000000-vs.-0.99999999999 kind of noise, unless I want to
make the point that the function nails the result exactly (in which
case 1.000000000 is a good doctest for that).