SAGE 2.8.5 finally adds a method for LLL lattice reduction to the
Matrix_integer_dense class. For now it only wraps NTL, but we actually have
two more implementations available in SAGE which should be conveniently
wrapped soon. However, as ticket
http://trac.sagemath.org/sage_trac/ticket/723
points out, they are all blown away by MAGMA.
sage: a = random_matrix(ZZ,200)
sage: time b=a.lll() # NTL, delta = 3/4
CPU times: user 10.63 s, sys: 0.03 s, total: 10.66 s
Wall time: 10.67
sage: m = magma(a)
sage: time b = m.LLL() # MAGMA, delta = 3/4
CPU times: user 0.00 s, sys: 0.00 s, total: 0.00 s
Wall time: 1.72
sage: p = pari(a)
sage: time b = p.qflll(1) # Pari
CPU times: user 10.47 s, sys: 0.03 s, total: 10.51 s
Wall time: 10.61
sage: g = gap(a)
sage: time b = g.LLLReducedBasis() # GAP
killed after two minutes
The relevant MAGMA documentation is at
http://www.msri.org/about/computing/docs/magma/html/text833.htm
The relevant PARI documentation is at
The relevant NTL documentation is at:
http://www.shoup.net/ntl/doc/LLL.txt
Also, Google came up with
http://www.math.uu.nl/people/vdkallen/lllimplementations.html
which could be relevant or not, I don't know.
The point of this mail is to ask you -- dear LLL wizards -- (a) why Magma is
so much faster and (b) what we can do about it.
Thoughts?
Martin
--
name: Martin Albrecht
_pgp: http://pgp.mit.edu:11371/pks/lookup?op=get&search=0x8EF0DC99
_www: http://www.informatik.uni-bremen.de/~malb
_jab: martinr...@jabber.ccc.de
Thanks for wrapping NTL's LLL. But NTL only has integer LLL, where
the input is a basis for the lattice in Z^n, while for some
applications it is also useful to have a version which takes as input
the Gram matrix over R (i.e. any positive definite symmetric real
matrix). For example that is what is useful for reducing bases of
elliptic curves. Pari has that but NTL does not.
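To make the distinction concrete: a Gram-matrix interface takes G = B*B^T (any symmetric positive definite matrix) rather than the basis B itself. A minimal plain-Python sketch of that relationship (illustrative only, not the Sage or Pari API):

```python
# Illustrative sketch: the Gram matrix G of a basis B has
# G[i][j] = <b_i, b_j>, the pairwise inner products of the rows.
# A Gram-matrix LLL (like Pari's qflllgram) takes G as input instead of B,
# so it also accepts forms with no given integer basis.

def gram_matrix(B):
    """Return the Gram matrix of the row vectors in B."""
    n, m = len(B), len(B[0])
    return [[sum(B[i][k] * B[j][k] for k in range(m))
             for j in range(n)] for i in range(n)]

B = [[5, 0], [3, 1]]
G = gram_matrix(B)
assert G == [[25, 15], [15, 10]]   # symmetric positive definite
```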
John
--
John Cremona
Damien provides C code under the GPL; it can be found on his web page:
http://perso.ens-lyon.fr/damien.stehle/english.html
It might take some art to decide on optimal parameters, but linking it into
SAGE should provide the same asymptotic performance as in Magma.
--David
Thanks! From what I can see it is probably best to:
(first) try the FP variants of NTL's LLL (I overlooked those before)
(then) try Damien's fast.c implementation; the proved version also seems
interesting, given the discussions on this list.
These use real arithmetic "internally", I think, but do not allow real
input data as in my previous posting.
> (then) try Damien's fast.c implementation, also the proved version seems
> interesting, given the discussions on this list.
>
Certainly! This would be very good.
John
--
John Cremona
Yes, it seems so. I am not very familiar with GP/Pari: which function
implements what you want (qflllgram)? It is probably quite easy to wrap it
for you once I know what to call.
David, many thanks for pointing this out!! It's extremely interesting.
I hope somebody (e.g., Martin?) can build Damien's code and
do some benchmarks very soon.
-- William
Right now I am wrapping NTL's FP variants. The reason NTL was beaten so badly
is that MAGMA uses FP variants while NTL's LLL uses exact arithmetic.
However, MAGMA can do FP by default because they have a provable FP
implementation (via Damien).
So I am wrapping the NTL stuff first to see how fast we are. Then I'll take a
look at Damien's stuff.
Btw. If a user deliberately chooses LLL_FP and the global proof == True, shall
we raise an exception?
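A hypothetical sketch of what such a guard could look like; the `proof` flag handling, the exception type, and the algorithm strings below are assumptions for illustration, not actual Sage API:

```python
# Made-up guard sketch: reject fast-but-approximate NTL variants when the
# user has asked for proven results. Names are illustrative only.
APPROXIMATE = {"NTL:LLL_FP", "NTL:LLL_QP", "NTL:LLL_XD", "NTL:LLL_RR"}

def check_algorithm(algorithm, proof):
    """Raise if an approximate variant conflicts with proof=True."""
    if proof and algorithm in APPROXIMATE:
        raise ValueError(
            "algorithm %r is approximate; use the exact variant or "
            "pass proof=False" % algorithm)

check_algorithm("NTL:LLL", proof=True)          # exact variant: fine
try:
    check_algorithm("NTL:LLL_FP", proof=True)   # approximate + proof: rejected
    raised = False
except ValueError:
    raised = True
assert raised
```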
John
On 9/21/07, Martin Albrecht <ma...@informatik.uni-bremen.de> wrote:
--
John Cremona
So, after I understood a bit better what makes LLL algorithms fast, here is
what I came up with so far.
I wrapped the approximate LLL versions of NTL. Joel, thank you a thousand
times for making it possible to strip one layer from the NTL interface: I
wrote 18 Pyrex interfaces (for 9 overloaded C++ functions) for NTL's LLL
alone, and I didn't even cover everything!
From what I read, MAGMA also uses approximate variants (overall they have 8
LLL implementations according to the manual), but they can do so because they
can prove that the result is LLL reduced. That should carry over to Damien's
provable GPL implementation.
However, I am not entirely sure what the result from the floating point
variants in NTL is. Shoup writes: "Routines are provided for lattice basis
reduction, including both exact-arithmetic variants (slow but sure) and
floating-point variants (fast but only approximate)." What I do not
understand is whether the FP variants can return a basis that is not LLL
reduced without raising an exception. I would appreciate input from somebody
more experienced here.
As an example consider the example from the MAGMA documentation:
sage: B = MatrixSpace(IntegerRing(), 50, 51)(0)
sage: for i in range(50): B[i,0] = ZZ.random_element(2**10000)
sage: for i in range(50): B[i,i+1] = 1;
Trying the fastest one:
sage: B.LLL(algorithm="NTL:LLL_FP")
LLL_FP: numbers too big...use LLL_XD
...
<type 'exceptions.RuntimeError'>
We try what NTL suggested:
sage: time B.LLL(algorithm="NTL:LLL_XD")
50 x 51 dense matrix over Integer Ring
CPU time: 43.55 s, Wall time: 44.73 s
So, can we be sure now that this is correct?
MAGMA's time:
sage: M = magma(B)
sage: time C = M.LLL()
Wall: 15.53 s
Also, MAGMA seems to use a heuristic to choose which LLL implementation out
of the eight is best (and they say it is not very good), so I wonder if we
should also make a choice if algorithm==None (it defaults to the exact
version for now).
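For what it's worth, such a default choice could start very simply; the thresholds and strings below are entirely made up for illustration (this is not MAGMA's heuristic and not Sage code):

```python
# Made-up dispatch sketch: choose an NTL variant from the bit size of the
# entries, falling back to exact arithmetic when a proven result is needed.
def choose_lll_variant(max_bits, proof=True):
    if proof:
        return "NTL:LLL"       # exact arithmetic, slow but sure
    if max_bits <= 50:
        return "NTL:LLL_FP"    # double precision floats
    if max_bits <= 500:
        return "NTL:LLL_XD"    # extended-exponent doubles
    return "NTL:LLL_RR"        # arbitrary precision floats

assert choose_lll_variant(10, proof=False) == "NTL:LLL_FP"
assert choose_lll_variant(10000, proof=False) == "NTL:LLL_RR"
assert choose_lll_variant(10) == "NTL:LLL"
```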
So NTL is still behind, but not as badly as before. I assume qflllgram() for
John is next on the LLL list, and then I'll have to look into Damien's GPL
code.
Martin
PS: #723 updated.
EXAMPLES:
sage: M = Matrix(ZZ, 2, 2, [5,3,3,2]) ; M
[5 3]
[3 2]
sage: U = M.lllgram(); U
[-1 1]
[ 1 -2]
sage: U.transpose() * M * U
[1 0]
[0 1]
This is only for integer matrices, although the code should work as is
for matrices with real coefficients. Meanwhile, you can actually use
pari's qflllgram from sage (just use something like
pari(M).qflllgram()).
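The identity from the lllgram example above can be checked with a few lines of plain Python:

```python
# Verify U^T * M * U for the Gram matrix M = [[5,3],[3,2]] and the
# transformation U from the lllgram example; the reduced Gram matrix
# here happens to be the identity.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

M = [[5, 3], [3, 2]]
U = [[-1, 1], [1, -2]]
assert matmul(matmul(transpose(U), M), U) == [[1, 0], [0, 1]]
```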
Gonzalo
John
--
John Cremona