I'm putting this thread on [sage-devel] because it isn't a support question
anymore. First, some background:
On Friday 05 September 2008, phil wrote on [sage-support]:
> I have a matrix that is composed of multivariate polynomial
> entries. I want to compute its determinant. The problem is that it
> is very slow or runs out of memory. For example,
> R.<x,y> = QQ[]
> C = random_matrix(R,10,10)
> Cdet = C.determinant() # this line takes a long time
>
> If you have more variables, it will run out of memory instead (on a 32
> bit installation).
>
> Is there a more efficient way to do this? Would using symbolic
> expressions then coercing back to the polynomial ring be better?
My reply was on [sage-support] too:
> Here's a workaround:
>
> sage: R.<x,y> = QQ[]
> sage: C = random_matrix(R,8,8)
> sage: %time d = C.determinant()
> CPU times: user 2.64 s, sys: 0.00 s, total: 2.65 s
> Wall time: 2.67 s
>
> sage: %time d2 = R(C._singular_().det())
> CPU times: user 0.04 s, sys: 0.01 s, total: 0.05 s
> Wall time: 0.15 s
>
> sage: d2 == d
> True
and eventually I provided a patch which is going to be in 3.1.2.
I also wrote on [sage-support]:
> One thing that really puzzles me is that Magma doesn't seem to scale well
> w.r.t. this particular benchmark.
>
> sage: R.<x,y> = QQ[]
> sage: C = random_matrix(R,10,10)
>
> sage: %time d = C.determinant() # going to be in 3.1.2
> CPU times: user 0.34 s, sys: 0.00 s, total: 0.34 s
> Wall time: 0.34 s
>
> sage: CM = magma(C)
> sage: t = magma.cputime(); d2 = CM.Determinant(); magma.cputime(t)
> 0.59999999999999998
>
> sage: C = random_matrix(R,14,14)
> sage: %time d = C.determinant()
> CPU times: user 2.58 s, sys: 0.00 s, total: 2.58 s
> Wall time: 2.60 s
> sage: CM = magma(C)
> sage: t = magma.cputime(); d2 = CM.Determinant(); magma.cputime(t)
> 27.84 # note that Magma also eats lots and lots of memory
>
> sage: C = random_matrix(R,15,15)
> sage: %time d = C.determinant()
> CPU times: user 4.49 s, sys: 0.00 s, total: 4.49 s
> Wall time: 4.55 s
> sage: CM = magma(C)
> sage: t = magma.cputime(); d2 = CM.Determinant(); magma.cputime(t)
> 68.590000000000003
>
> sage: C = random_matrix(R,16,16)
> sage: %time d = C.determinant()
> CPU times: user 6.98 s, sys: 0.00 s, total: 6.98 s
> Wall time: 7.00 s
> sage: CM = magma(C)
> sage: t = magma.cputime(); d2 = CM.Determinant(); magma.cputime(t)
> 168.41
> sage: magma(d) == d2
> True
>
> sage: R.<x,y> = GF(32003)[]
> sage: C = random_matrix(R,16,16)
> sage: %time d = C.determinant()
> CPU times: user 0.78 s, sys: 0.00 s, total: 0.78 s
> Wall time: 0.92 s
> sage: CM = magma(C)
> sage: t = magma.cputime(); d2 = CM.Determinant(); magma.cputime(t)
> 64.920000000000002
> sage: magma(d) == d2
> True
>
> So I wonder if Singular's implementations are just really good or if Magma
> is just particularly bad at this particular benchmark. I have no feeling
> for how fast these things should be.
and finally, on Tuesday 09 September 2008, phil wrote on [sage-support]:
> Thanks for the tip. After making that change, Sage no longer crashes
> with an out of memory error on 64 bit Debian. However, I may need to
> make additional changes. The computation is still going after 3
> days. Memory usage is slowly increasing and is now at 6.4 GB.
>
> The entries of the matrix are composed of coefficients extracted from
> some matrix operations on 4 3x3 matrices so there are 38 variables in
> the polynomial ring. Specifically, if anyone is wondering, I am
> trying to compute left hand side of equation 7 in "Five point motion
> estimation made easy" by Li and Hartley (http://users.rsise.anu.edu.au/
> ~hongdong/new5pt_cameraREady_ver_1.pdf).
> The solver given on Li's webpage uses Maple within Matlab to compute
> the determinant at runtime, after the coefficients are given as
> numbers in the problem. I want to pre-compute the determinant with
> the coefficients represented by symbolic constants. That way, at run
> time, all you need to do is evaluate the expression, replacing the
> constants with their numerical values.
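Phil's precompute-then-evaluate plan can be illustrated on a toy 2x2 case
(a sketch with made-up names, not his actual 26x26 / 38-variable problem):
the determinant is expanded symbolically once, offline, so that run time
involves only arithmetic, no elimination over a polynomial ring.

```python
# Toy illustration of precompute-then-evaluate (names are hypothetical):
# offline, expand det([[a, b], [c, d]]) symbolically to a*d - b*c;
# at run time, evaluating the expanded expression is plain arithmetic.

def det2_expanded(a, b, c, d):
    # the precomputed symbolic determinant of [[a, b], [c, d]]
    return a * d - b * c

# run-time evaluation: substitute the numeric coefficients
print(det2_expanded(1.0, 2.0, 3.0, 4.0))  # prints -2.0
```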
Thoughts?
Martin
--
name: Martin Albrecht
_pgp: http://pgp.mit.edu:11371/pks/lookup?op=get&search=0x8EF0DC99
_www: http://www.informatik.uni-bremen.de/~malb
_jab: martinr...@jabber.ccc.de
I agree that this is a question of which algorithm is used.
Singular uses two algorithms:
poly smCallDet(ideal I)
which uses Bareiss
and
poly singclap_det(matrix m)
which seems to be a multi-modular approach. The heuristic to choose between
the two is implemented in smCheckDet.
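For reference, the Bareiss method mentioned above is fraction-free Gaussian
elimination: each update is divided exactly by the previous pivot, so all
intermediate entries stay in the base ring, which is why it also works over
polynomial rings. Here is a minimal sketch over the integers (an
illustration only, not Singular's actual smCallDet code):

```python
def bareiss_det(M):
    """Determinant of a square integer matrix via Bareiss
    fraction-free elimination; all divisions are exact."""
    n = len(M)
    A = [row[:] for row in M]   # work on a copy
    sign = 1
    prev = 1                    # pivot from the previous step
    for k in range(n - 1):
        if A[k][k] == 0:
            # find a row with a nonzero pivot and swap it in
            for i in range(k + 1, n):
                if A[i][k] != 0:
                    A[k], A[i] = A[i], A[k]
                    sign = -sign
                    break
            else:
                return 0        # whole column is zero: singular matrix
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                # 2x2 minor divided exactly by the previous pivot
                A[i][j] = (A[i][j] * A[k][k] - A[i][k] * A[k][j]) // prev
            A[i][k] = 0
        prev = A[k][k]
    return sign * A[n - 1][n - 1]
```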
How fast is Giac for this particular computation?
Cheers,
Are these some kind of benchmark(et)ing examples? Where can I find the input
matrices? Sorry, never heard that name before ... which isn't that
surprising, since I never thought about this computation before the bug
report on [sage-support]. :-)
Okay, found it:
# the original benchmark is over ZZ, but that's really slow in Sage
---------------
P.<x1,x2,x3,x4,x5> = PolynomialRing(QQ, 5)
M = MatrixSpace(P, 26)
w = [ [ 1, 1, 1, 7, x4, 12, x3, 17, x2, 22, x1 ],
[ 2, 2, 1, 8, x4, 13, x3, 18, x2, 23, x1 ],
[ 3, 3, 1, 9, x4, 14, x3, 19, x2, 24, x1 ],
[ 4, 4, 1, 10, x4, 15, x3, 20, x2, 25, x1 ],
[ 5, 5, 1, 26, 1, 1, 0, 1, 0, 1, 0 ],
[ 6, 2, x5, 6, 1, 12, x3, 17, x2, 22, x1 ],
[ 7, 3, x5, 7, 1, 13, x3, 18, x2, 23, x1 ],
[ 8, 4, x5, 8, 1, 14, x3, 19, x2, 24, x1 ],
[ 9, 5, x5, 9, 1, 15, x3, 20, x2, 25, x1 ],
[10, 10, 1, 26, 1, 1, 0, 1, 0, 1, 0 ],
[11, 2, x5, 7, x4, 11, 1, 17, x2, 22, x1 ],
[12, 3, x5, 8, x4, 12, 1, 18, x2, 23, x1 ],
[13, 4, x5, 9, x4, 13, 1, 19, x2, 24, x1 ],
[14, 5, x5, 10, x4, 14, 1, 20, x2, 25, x1 ],
[15, 15, 1, 26, 1, 1, 0, 1, 0, 1, 0 ],
[16, 2, x5, 7, x4, 12, x3, 16, 1, 22, x1 ],
[17, 3, x5, 8, x4, 13, x3, 17, 1, 23, x1 ],
[18, 4, x5, 9, x4, 14, x3, 18, 1, 24, x1 ],
[19, 5, x5, 10, x4, 15, x3, 19, 1, 25, x1 ],
[20, 20, 1, 26, 1, 1, 0, 1, 0, 1, 0 ],
[21, 2, x5, 7, x4, 12, x3, 17, x2, 21, 1 ],
[22, 3, x5, 8, x4, 13, x3, 18, x2, 22, 1 ],
[23, 4, x5, 9, x4, 14, x3, 19, x2, 23, 1 ],
[24, 5, x5, 10, x4, 15, x3, 20, x2, 24, 1 ],
[25, 25, 1, 26, 1, 1, 0, 1, 0, 1, 0 ],
[26, 1, x5, 6, x4, 11, x3, 16, x2, 21, x1 ] ]
m = M.matrix()
for i in range(0,26):
    for j in range(0,5):
        m[i, (w[i][2*j+1])-1] = w[i][2*j+2]
tinit = cputime()
qqq = m.determinant()
print "M1 =", cputime(tinit), "(SAGE)";
del P, M, w, m, qqq
---------------
This takes 0.001 seconds on my notebook, Magma is in the same ballpark, i.e.
the example is way too small for a comparison.
M2 (again, over QQ) takes 2.2 seconds in Sage 3.1.2.rc2 and 2.02 seconds in
Magma (over ZZ).
Cheers,
Martin
It turns out that here Sage/Singular does worse than Magma:
O1 (det1) = 0.720 (MAGMA)
O1 (det2) = 1.550 (MAGMA)
O1 = 2.420 (MAGMA)
O2 = 2.110 (MAGMA)
factor = 1.180 (MAGMA)
O1 (det1) = 41.457698 (Sage)
O1 (det2) = 58.837056 (Sage)
O1 = 78.369086 (Sage)
O2 = 4.439325 (Sage)
factor = ????? (didn't wait till it finished)
Thanks for bringing those benchmarks to my attention; it really helps me
get an idea of how these things are supposed to behave.
> (at least the det and gcd parts).
> I guess you had to turn to QQ to call singular via your patch,
> correct?
Yep.
Flamebait. Discouraging. Useless, etc.
Bernard, I think your suggestion to more and more try actually
using multiple algorithms in parallel and terminating the slower
one should be pursued further. To what extent have you tried
this already with giac, and how has it gone?
William
What is "memory interference"?
William
--
William Stein
Associate Professor of Mathematics
University of Washington
http://wstein.org
The user interface to Magnus is structured to use multiple procedures
at the same time. Magnus is a special CAS for infinite group theory.
Most of the procedures are not guaranteed to terminate so the idea
used was to run the procedures in parallel. The interface allows you
to choose from the applicable procedures and to specify the percentage
of CPU to give to each procedure. If one procedure does succeed it
"poisons" the other procedures. Magnus is available on Sourceforge.
You might find some interesting ideas from its unique lab workbench
front end (known as the zero-learning curve interface) by Gilbert
Baumslag.
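The run-in-parallel-and-poison pattern described above can be sketched with
Python's multiprocessing module (an illustration under my own assumptions,
not how Magnus or Sage actually implements it): start each candidate
procedure in its own process, take the first answer off a shared queue, and
terminate the rest.

```python
import multiprocessing as mp
import time

def _worker(fn, args, q):
    # run one candidate procedure and report its result
    q.put(fn(*args))

def first_result(procedures, args):
    """Run each procedure in its own process; return the first result
    and terminate ("poison") the slower processes."""
    q = mp.Queue()
    procs = [mp.Process(target=_worker, args=(fn, args, q))
             for fn in procedures]
    for p in procs:
        p.start()
    result = q.get()       # blocks until the fastest procedure answers
    for p in procs:
        p.terminate()      # poison the others
        p.join()
    return result

# stand-ins for two algorithms: one slow, one fast
def slow_square(x):
    time.sleep(10)         # an algorithm that never wins the race
    return x * x

def fast_square(x):
    return x * x           # the algorithm that finishes first
```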
A similar mechanism could be built using the Sage notebook.
Tim Daly