LLL


Martin Albrecht

Sep 21, 2007, 5:55:24 AM9/21/07
to Sage Development
Hi everyone,

SAGE 2.8.5 finally adds a method for LLL lattice reduction to the
Matrix_integer_dense class. For now it only wraps NTL, but we actually have
two more implementations available in SAGE which will be conveniently
wrapped soon. However, as ticket

http://trac.sagemath.org/sage_trac/ticket/723

points out, they are all blown away by MAGMA.

sage: a = random_matrix(ZZ,200)
sage: time b=a.lll() # NTL, delta = 3/4
CPU times: user 10.63 s, sys: 0.03 s, total: 10.66 s
Wall time: 10.67

sage: m = magma(a)
sage: time b = m.LLL() # MAGMA, delta = 3/4
CPU times: user 0.00 s, sys: 0.00 s, total: 0.00 s
Wall time: 1.72

sage: p = pari(a)
sage: time b = p.qflll(1) # Pari
CPU times: user 10.47 s, sys: 0.03 s, total: 10.51 s
Wall time: 10.61

sage: g = gap(a)
sage: time b = g.LLLReducedBasis() # GAP
killed after two minutes

The relevant MAGMA documentation is at

http://www.msri.org/about/computing/docs/magma/html/text833.htm

The relevant PARI documentation is at

http://pari.math.u-bordeaux.fr/dochtml/html.stable/Vectors,_matrices,_linear_algebra_and_sets.html#qflll

The relevant NTL documentation is at:

http://www.shoup.net/ntl/doc/LLL.txt

Also, Google came up with

http://www.math.uu.nl/people/vdkallen/lllimplementations.html

which could be relevant or not, I don't know.

The point of this mail is to ask you -- dear LLL wizards -- (a) why Magma is
so much faster and (b) what we can do about it.

Thoughts?
Martin

--
name: Martin Albrecht
_pgp: http://pgp.mit.edu:11371/pks/lookup?op=get&search=0x8EF0DC99
_www: http://www.informatik.uni-bremen.de/~malb
_jab: martinr...@jabber.ccc.de

John Cremona

Sep 21, 2007, 7:43:19 AM9/21/07
to sage-...@googlegroups.com
Implementing LLL well is a black art. I don't know enough to say
why the Magma one is so much better, but at least part of the reason
may be that the experts in LLL are using Magma for their development
work. Hence, once everyone in the world uses Sage, ...

Thanks for wrapping NTL's LLL. But NTL only has integer LLL, where
the input is a basis for the lattice in Z^n, while for some
applications it is also useful to have a version which takes as input
the Gram matrix over R (i.e. any positive definite symmetric real
matrix). For example, that is what is useful when reducing bases of
elliptic curves. Pari has that but NTL does not.

John


--
John Cremona

David Kohel

Sep 21, 2007, 9:00:40 AM9/21/07
to sage-devel
LLL and LLLGram in Magma V2.13 were written by Damien Stehle,
using his asymptotically better algorithm. The previous version was
not even mathematically correct (ad hoc "improvements" or
post-processing could destroy the LLL reduction condition).

Damien provides C code under the GPL, which can be found on his web page:

http://perso.ens-lyon.fr/damien.stehle/english.html

It might take some art to decide on optimal parameters, but linking it
into SAGE should provide the same asymptotic performance as in Magma.
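[As an aside, the LLL reduction condition mentioned above can be checked
exactly over the rationals. A minimal sketch in plain Python -- the helper
names are made up and delta = 3/4 matches the benchmarks earlier in the
thread:]

```python
from fractions import Fraction

def gram_schmidt(B):
    """Exact Gram-Schmidt orthogonalisation over the rationals.

    Returns the orthogonal vectors B* and the mu coefficients."""
    n = len(B)
    Bstar, mu = [], [[Fraction(0)] * n for _ in range(n)]
    for i in range(n):
        v = [Fraction(x) for x in B[i]]
        for j in range(i):
            # mu[i][j] = <b_i, b*_j> / <b*_j, b*_j>
            num = sum(Fraction(B[i][k]) * Bstar[j][k] for k in range(len(v)))
            den = sum(x * x for x in Bstar[j])
            mu[i][j] = num / den
            v = [v[k] - mu[i][j] * Bstar[j][k] for k in range(len(v))]
        Bstar.append(v)
    return Bstar, mu

def is_lll_reduced(B, delta=Fraction(3, 4)):
    """True iff B satisfies size reduction and the Lovasz condition."""
    Bstar, mu = gram_schmidt(B)
    n = len(B)
    norms = [sum(x * x for x in bs) for bs in Bstar]
    # size reduction: |mu[i][j]| <= 1/2 for all j < i
    if any(abs(mu[i][j]) > Fraction(1, 2) for i in range(n) for j in range(i)):
        return False
    # Lovasz: ||b*_i||^2 >= (delta - mu[i][i-1]^2) * ||b*_{i-1}||^2
    return all(norms[i] >= (delta - mu[i][i - 1] ** 2) * norms[i - 1]
               for i in range(1, n))
```

[Feeding an FP variant's output through such a check, at exact-arithmetic
cost, is one way to detect a result whose reduction condition was destroyed.]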

--David

Martin Albrecht

Sep 21, 2007, 9:19:14 AM9/21/07
to sage-...@googlegroups.com

Thanks! From what I can see it is probably best to:

(first) try the FP variants of NTL's LLL first (I overlooked those before)
(then) try Damien's fast.c implementation; the proved version also seems
interesting, given the discussions on this list.

John Cremona

Sep 21, 2007, 9:57:37 AM9/21/07
to sage-...@googlegroups.com
On 9/21/07, Martin Albrecht <ma...@informatik.uni-bremen.de> wrote:
>
> On Friday 21 September 2007, David Kohel wrote:
> > Since LLL and LLLGram in Magma V2.13 are written by Damien Stehle,
> > using his asymptotically better algorithm. The previous version was
> > not
> > even mathematically correct (adhoc "improvements" or post-processing
> > could destroy the LLL reduction condition).
> >
> > Damien provides C code under GPL and can be found on his web page:
> >
> > http://perso.ens-lyon.fr/damien.stehle/english.html
> >
> > It might take some art to decide on optimal parameters, but linking it
> > into
> > SAGE should provide the same asymptotic performance as in Magma.
> >
> > --David
>
> Thanks! From what I can see it is probably best to:
>
> (first) try the FP variants of NTL's LLL first (I overlooked those before)

These use real arithmetic "internally", I think, but do not allow real
input data as in my previous posting.

> (then) try Damien's fast.c implementation, also the proved version seems
> interesting, given the discussions on this list.
>

Certainly! This would be very good.

John


--
John Cremona

Martin Albrecht

Sep 21, 2007, 10:10:00 AM9/21/07
to sage-...@googlegroups.com
> > (first) try the FP variants of NTL's LLL first (I overlooked those
> > before)
>
> These use real arithmetic "internally", I think, but do not allow real
> input data as in my previous posting.

Yes, it seems so. I am not very familiar with GP/Pari: which function
implements what you want (qflllgram?)? It is probably quite easy to wrap it
for you once I know what to call.

William Stein

Sep 21, 2007, 3:44:34 PM9/21/07
to sage-...@googlegroups.com

David, many thanks for pointing this out!! It's extremely interesting.

I hope somebody (e.g., Martin?) can build Damien's code and
do some benchmarks very soon.

-- William

Martin Albrecht

Sep 21, 2007, 3:54:41 PM9/21/07
to sage-...@googlegroups.com
> I hope somebody (e.g., Martin?) can build Damien's code and
> do some benchmarks very soon.

Right now I am wrapping NTL's FP variants. The reason NTL was beaten so badly
is that MAGMA uses FP variants while NTL's LLL uses exact arithmetic.
However, MAGMA can do FP by default, because they have a provable FP
implementation (via Damien).

So I am wrapping the NTL stuff first to see how fast we are. Then I'll take a
look at Damien's stuff.

Btw., if a user deliberately chooses LLL_FP while the global proof flag is
True, shall we raise an exception?

John Cremona

Sep 21, 2007, 5:02:45 PM9/21/07
to sage-...@googlegroups.com
qflllgram is the gp function I use. It takes as input the gram matrix
and outputs the unimodular transform.

John



--
John Cremona

Martin Albrecht

Sep 21, 2007, 7:40:13 PM9/21/07
to sage-...@googlegroups.com
> (first) try the FP variants of NTL's LLL first (I overlooked those before)
> (then) try Damien's fast.c implementation, also the proved version seems
> interesting, given the discussions on this list.

So, after I understood a bit better what makes LLL algorithms fast, here is
what I came up with so far.

I wrapped the approximate LLL versions of NTL. Joel, thank you 1000 times for
making it possible to strip one layer from the NTL interface; I wrote
18 Pyrex interfaces (for 9 overloaded C++ functions) for NTL's LLL alone. And
I didn't cover everything!

From what I read, MAGMA also uses approximate variants (overall they have 8
LLL implementations, according to the manual), but they can do so because
they can prove that the result is LLL reduced. The same should hold for
Damien's provable GPL implementation.

However, I am not entirely sure what the result from the floating-point
variants in NTL is. Shoup writes: "Routines are provided for lattice basis
reduction, including both exact-arithmetic variants (slow but sure) and
floating-point variants (fast but only approximate)." What I do not
understand is whether the FP variants can return a basis that is not LLL
reduced without raising an exception. I would appreciate input from somebody
more experienced here.

As an example, consider the one from the MAGMA documentation:

sage: B = MatrixSpace(IntegerRing(), 50, 51)(0)
sage: for i in range(50): B[i,0] = ZZ.random_element(2**10000)
sage: for i in range(50): B[i,i+1] = 1
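[For readers without Sage at hand, the shape of that test matrix -- one
column of huge random integers next to an identity block -- can be sketched
in plain Python. The function name is made up and the parameters here are
scaled down from the 50 x 51, 10000-bit case above:]

```python
import random

def knapsack_style_matrix(n=50, bits=10000, seed=42):
    """n x (n+1) matrix: column 0 holds random integers below 2**bits,
    and columns 1..n form an identity block (row i has a 1 in column i+1)."""
    rng = random.Random(seed)
    B = [[0] * (n + 1) for _ in range(n)]
    for i in range(n):
        B[i][0] = rng.randrange(2 ** bits)
        B[i][i + 1] = 1
    return B
```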

Trying the fastest one:

sage: B.LLL(algorithm="NTL:LLL_FP")
LLL_FP: numbers too big...use LLL_XD
...
<type 'exceptions.RuntimeError'>

We try what NTL suggested:

sage: time B.LLL(algorithm="NTL:LLL_XD")
50 x 51 dense matrix over Integer Ring
CPU time: 43.55 s, Wall time: 44.73 s

So, can we be sure now that this is correct?

MAGMA's time:

sage: M = magma(B)
sage: time C = M.LLL()
Wall: 15.53 s

Also, MAGMA seems to use a heuristic to choose which LLL implementation out
of the eight is best (and they say it is not very good), so I wonder if we
should also make a choice if algorithm is None (it defaults to the exact
version for now).

So NTL is still behind, but not as badly as before. I assume qflllgram() for
John is next on the LLL list, and then I'll have to look into Damien's GPL
code.

Martin

PS: #723 updated.

Gonzalo Tornaria

Sep 21, 2007, 10:47:53 PM9/21/07
to sage-...@googlegroups.com
Check out Matrix_integer_dense.lllgram, e.g.

EXAMPLES:
sage: M = Matrix(ZZ, 2, 2, [5,3,3,2]) ; M
[5 3]
[3 2]
sage: U = M.lllgram(); U
[-1 1]
[ 1 -2]
sage: U.transpose() * M * U
[1 0]
[0 1]

This is only for integer matrices, although the code should work as is
for matrices with real coefficients. Meanwhile, you can actually use
pari's qflllgram from sage (just use something like
pari(M).qflllgram()).
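[The identity in the session above -- U.transpose() * M * U giving the Gram
matrix of the reduced basis, here the identity -- is easy to verify by hand
or in a few lines of plain Python:]

```python
def mat_mul(A, B):
    """Naive matrix product of two small integer matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

G = [[5, 3], [3, 2]]    # the Gram matrix M from the example
U = [[-1, 1], [1, -2]]  # the transform returned by lllgram

# det(U) = (-1)(-2) - (1)(1) = 1, so U is unimodular
assert U[0][0] * U[1][1] - U[0][1] * U[1][0] == 1

# U^T * G * U is the identity, matching the Sage session above
print(mat_mul(mat_mul(transpose(U), G), U))  # [[1, 0], [0, 1]]
```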

Gonzalo

John Cremona

Sep 22, 2007, 5:22:08 AM9/22/07
to sage-...@googlegroups.com
For more on what NTL does, the documentation is quite good but
otherwise try asking Victor Shoup directly. I have done that in the
past and he has been helpful.

John


--
John Cremona
