Yes!
I can confirm this behaviour. The problem doesn't seem to be restricted to
__pow__, i.e. replacing ^i by products also gives slower results when using
libSingular. However, mod p it seems libSingular wins. Hence my guess: we are
using different memory allocation functions for GMP, and these functions might
be slower for Singular's use case?
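For reference, here is one way to reproduce both observations in a single
session (just a sketch; the ring over QQ and the prime 32003 are my guesses,
not necessarily Simon's original setup):
sage: P.<x,y,z> = PolynomialRing(QQ)
sage: time v = (x + y + z)^100      # libSingular over QQ: the slow case
sage: Pp.<a,b,c> = PolynomialRing(GF(32003))
sage: time w = (a + b + c)^100      # libSingular mod p
sage: aa = singular(a); bb = singular(b); cc = singular(c)
sage: time ww = (aa + bb + cc)^100  # same computation mod p via pexpect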
Cheers,
Martin
Singular uses omalloc for memory management. This library was developed
by Olaf Bachmann specifically for Singular's use case. This blog post
has some slides about it:
http://singular-team.blogspot.com/2011/09/sicsa2011.html
I don't think we can use omalloc in Sage since it is not thread-safe.
Using GMP with two different memory allocators is not an option either,
since GMP's allocation functions are set globally for the whole process.
So I am not sure if this problem can be solved easily.
On the other hand, any help to make omalloc thread-safe would be much
appreciated. :) Interested parties should contact
libsingular-devel@googlegroups.
Cheers,
Burcin
Yes, and the observed slowdown might be related to us not using omalloc for
GMP integers.
This doesn't make a difference for the case discussed here: libSingular uses
the same optimisation as Singular via pexpect, i.e. we translate between GMP
integers and the small in-place integers when getting/setting coefficients. So
the only difference I can think of is the memory manager. Back in the days of
Sage 1.4 (!) I compared various malloc replacements:
http://wiki.sagemath.org/MallocReplacements
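That the translation happens exactly at the get/set boundary is visible from
Sage: reading a coefficient hands back an ordinary Sage element (a sketch; a
ring over QQ is chosen only for illustration):
sage: P.<x,y,z> = PolynomialRing(QQ)
sage: f = (x + y + z)^2
sage: c = f.coefficients()[0]   # translated out of Singular's representation on access
sage: type(c)
<type 'sage.rings.rational.Rational'>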
Sure, but it's still worrisome that a computation is slower in libSingular
than in Singular. If it's down to the different memory manager, then there
isn't much we can (or will) do about it, in which case it's at least good to
know: e.g. computing a GB using pexpect might be faster if there is only one
common root.
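Something along these lines would let one check that (just a sketch; Katsura
is my stand-in for a system with finitely many common roots):
sage: P.<x0,x1,x2,x3> = PolynomialRing(QQ)
sage: I = sage.rings.ideal.Katsura(P)
sage: time gb = I.groebner_basis('libsingular:std')   # in-process libSingular
sage: Is = singular(I)
sage: time gbs = Is.groebner()                        # via pexpect; compare wall times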
> Sure, but it's still worrisome that a computation is slower in libSingular
> than in Singular.
That is not what is happening here. libSingular is definitely not
converting the output to Sage data structures. The Sage polynomials
x, y, z are simply Cython-level wrappers around a pure Singular
data structure.
sage: type(x)
<type 'sage.rings.polynomial.multi_polynomial_libsingular.MPolynomial_libsingular'>
I'm pretty surprised by this benchmark.
The timing difference on my computer is even larger:
sage: time test1(x,y,z)
Time: CPU 4.89 s, Wall: 4.89 s
sage: time test2(x,y,z)
Time: CPU 0.15 s, Wall: 1.73 s
It really comes down to this (following exactly the code that Simon posted):
sage: time v = (x+y+z)^100
Time: CPU 0.23 s, Wall: 0.23 s
sage: xx=singular(x);yy=singular(y);zz=singular(z)
sage: time v = (xx+yy+zz)^100
Time: CPU 0.00 s, Wall: 0.06 s
Why is exponentiation dramatically faster in case 2 than in case 1? Note
that the *wall time* is what matters in both cases, by the way.
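One way to take both interfaces out of the equation is to time inside
Singular itself, using its built-in timer (a sketch; the timer reports
elapsed ticks):
sage: singular.eval('ring r = 0,(x,y,z),dp;')
sage: singular.eval('int t = timer;')
sage: singular.eval('poly v = (x+y+z)^100;')
sage: singular.eval('timer - t;')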
-- William
> A good benchmark would be e.g. computing the dimension of some ideal
> where there is not much output.
> Also, timing finally finished:
> sage: timeit('test1(*P.gens())')
> 5 loops, best of 3: 2.2 s per loop
> sage: timeit('test2(*P.gens())')
> 5 loops, best of 3: 1.49 s per loop
> sage: timeit('test3(*P.gens())')
> 5 loops, best of 3: 32.3 s per loop
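Such a dimension benchmark could look like this (just a sketch; Cyclic-6 is
an arbitrary choice of an ideal with little output):
sage: P = PolynomialRing(QQ, 6, 'x')
sage: I = sage.rings.ideal.Cyclic(P)
sage: time d = I.dimension()      # libSingular
sage: Is = singular(I)
sage: time ds = Is.std().dim()    # pexpect: dim expects a standard basis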
I think we should compare which function the Singular interpreter calls with
the one we call, and also see whether we get the same performance if we switch
to omalloc (I'm not suggesting we switch to omalloc, just using it for
testing).
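At the Python level one can at least confirm that all the time is spent in a
single arithmetic call rather than in wrapper overhead (a sketch; identifying
the Singular-internal functions themselves would need a C-level profiler such
as gprof or oprofile):
sage: P.<x,y,z> = PolynomialRing(QQ)
sage: %prun v = (x + y + z)^100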