Unimodular transformation matrix of LLL algorithm


Santanu Sarkar

Sep 27, 2020, 8:35:17 AM
to sage-...@googlegroups.com

Dear all,
   I have a matrix M1 with integer entries, with 90 rows and 6 columns.
After applying the LLL algorithm to M1, I get M2 = M1.LLL(). I want the
corresponding unimodular transformation matrix T such that T*M1 = M2.
One can find a T with T = M2*M1.pseudoinverse() or T = M1.solve_left(M2),
but the determinant of that T is 0, i.e., T.det() = 0.
I want T to be unimodular, i.e., T.det() = ±1.
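
For concreteness, a minimal sketch of this (using a random matrix as a
stand-in for the actual M1):

sage: M1 = random_matrix(ZZ, 90, 6)   # stand-in for the actual M1
sage: M2 = M1.LLL()
sage: T = M1.solve_left(M2)           # one particular solution of T*M1 == M2
sage: T*M1 == M2
True
sage: T.det()                         # this particular T has rank at most 6
0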

Best regards,
Santanu

Samuel Lelievre

Sep 27, 2020, 10:05:59 AM
to sage-devel
There is a ticket for that with some hints on how to do it.

- Sage Trac ticket 25191
  Add flag for returning LLL transformation matrix

Hopefully someone can expand on the hints, or even
push a branch to the ticket.

Martin R. Albrecht

Sep 27, 2020, 11:15:33 AM
to sage-...@googlegroups.com
Hi there,

This should do the trick:

sage: from fpylll import *
sage: A = random_matrix(ZZ, 6, 90)
sage: U = IntegerMatrix.identity(6)
sage: B = LLL.reduction(IntegerMatrix.from_matrix(A), U).to_matrix(matrix(ZZ, 6, 90))
sage: U = U.to_matrix(matrix(ZZ, 6, 6))
sage: B == U*A
True
sage: abs(U.det())
1
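
The same recipe, adapted to the 90-row, 6-column shape from the original
question (a sketch with a random stand-in for M1; the transformation matrix
is then 90x90), would presumably be:

sage: from fpylll import IntegerMatrix, LLL
sage: M1 = random_matrix(ZZ, 90, 6)   # stand-in for the actual M1
sage: U = IntegerMatrix.identity(90)
sage: M2 = LLL.reduction(IntegerMatrix.from_matrix(M1), U).to_matrix(matrix(ZZ, 90, 6))
sage: T = U.to_matrix(matrix(ZZ, 90, 90))
sage: T*M1 == M2
True
sage: abs(T.det())
1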

Cheers,
Martin
--

_pgp: https://keybase.io/martinralbrecht
_www: https://malb.io

Santanu Sarkar

Sep 27, 2020, 1:43:45 PM
to sage-...@googlegroups.com
Dear Martin,
    Thank you so much. It works!
Can we make it faster?
It took 17 seconds for my problem, but
M1.LLL() took only 3 seconds. Of course, I understand we
are also computing the extra matrix U.

Thanks again for your help. 

Regards,
Santanu 




Dima Pasechnik

Sep 27, 2020, 2:20:28 PM
to sage-devel


To make it faster, one needs to do some Cython programming, as suggested by
the Trac ticket mentioned above.


Nils Bruin

Sep 27, 2020, 3:42:00 PM
to sage-devel
You could do the same thing as you do with Gaussian elimination to track the row operations: augment the matrix with an identity matrix.
In order for the augmentation to not affect your LLL reduction, you'd want to multiply your original matrix by a large constant, so that the augmented coordinates do not affect the norms significantly.


Santanu Sarkar

Sep 27, 2020, 4:48:03 PM
to sage-...@googlegroups.com
Dear Nils,
   Thank you so much for your comments.  
I considered the matrix E=[I,M1], where I is the identity matrix.
The reduction of E took 100 seconds, so I am not
gaining any advantage.

Regards,
Santanu 



Nils Bruin

Sep 27, 2020, 10:57:17 PM
to sage-devel



Try [10^b*M1,I] with b = 50 or so (it depends on your problem what a "large" number is). I'm not promising that it's faster, but it's worth a try. If you do not scale M1, then I would expect you're getting back different answers, unless all the vectors in the span of M1 already have very large norm.
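
A rough sketch of this scale-and-augment trick (the random stand-in for M1
and the block slicing are illustrative assumptions, not code from this
thread):

sage: b = 50                                   # what counts as "large" depends on M1's entries
sage: M1 = random_matrix(ZZ, 90, 6)            # stand-in for the actual M1
sage: E = (10^b * M1).augment(identity_matrix(90))
sage: R = E.LLL()                              # reduce the scaled-and-augmented matrix
sage: T = R[:, 6:]                             # the identity block has recorded the row operations
sage: abs(T.det())                             # only unimodular row operations were applied
1
sage: M2 = (R[:, :6] / 10^b).change_ring(ZZ)   # T*M1, with the scaling undone
sage: M2 == T*M1
True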

I now also see that you're looking at a very "overgenerated" lattice (the span of 90 vectors in ZZ^6?). Such a lattice almost surely has a very small covolume. I'd expect that just an HNF is actually quite fast to compute. The transformation matrix in such a case is also highly non-unique. It's not so surprising that keeping track of it is very much more expensive in this case (although I'm not sure it explains the full factor that you see).

I'd expect that something like 8 combinations of the 90 generating vectors already generate the same lattice. Now you're looking at LLL-reducing an 8x6 matrix rather than a 90x6 matrix!
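
For what it's worth, a quick sketch of the HNF route (the random stand-in
for M1 and the hermite_form options used here are my assumptions):

sage: M1 = random_matrix(ZZ, 90, 6, x=-2000, y=2000)  # stand-in for the actual M1
sage: H = M1.hermite_form(include_zero_rows=False)    # HNF: a small generating set (6 rows here)
sage: H.row_module() == M1.row_module()               # same lattice as the 90 original rows
True
sage: B = H.LLL()                                     # reduce a tiny matrix instead of a 90x6 one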



 


Santanu Sarkar

Sep 28, 2020, 8:34:46 AM
to sage-...@googlegroups.com
Dear Nils,
    Thanks a lot for your help. 

Yes, 8 vectors generate the same lattice. How can we use this information to
find a matrix U such that U*M1 = M1.LLL() with det(U) = ±1? The following
code for finding coordinate vectors is very slow in the actual problem.

from sage.modules.free_module_integer import IntegerLattice
A = random_matrix(ZZ, 3, 3, x=-2, y=2)
L = IntegerLattice(A, lll_reduce=False)
v = vector([-1, 0, 0])
print(L.coordinates(v))


Please help me. 

Regards,
Santanu 
 

Martin R. Albrecht

Sep 28, 2020, 11:31:11 PM
to sage-...@googlegroups.com
Hi there,

Sage is using FP(y)LLL under the hood, so there shouldn't be that much of a performance difference. As far as I can see, both Sage and FPyLLL have the same defaults, so it's not clear to me why you are seeing this gap.
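
One way to rule out parameter differences would be to pin delta and eta
explicitly on both sides and time the two calls on the same input; the
values below are illustrative choices, not necessarily the actual defaults:

sage: from fpylll import IntegerMatrix, LLL
sage: A = random_matrix(ZZ, 90, 6, x=-2000, y=2000)   # stand-in for M1
sage: tt = cputime(); B1 = A.LLL(delta=0.99, eta=0.501); print(cputime(tt))
sage: M = IntegerMatrix.from_matrix(A)
sage: U = IntegerMatrix.identity(90)
sage: tt = cputime(); _ = LLL.reduction(M, U, delta=0.99, eta=0.501); print(cputime(tt))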

Santanu Sarkar

Sep 30, 2020, 6:02:14 AM
to sage-...@googlegroups.com
Dear Martin,
  Kindly see the following code.

from fpylll import IntegerMatrix, LLL

row = 160
col = 100
A = random_matrix(ZZ, row, col, x=-2000, y=2000)
U = IntegerMatrix.identity(row)

# LLL via fpylll, tracking the transformation matrix U
tt = cputime()
B = LLL.reduction(IntegerMatrix.from_matrix(A), U).to_matrix(matrix(ZZ, row, col))
print(cputime(tt))

# LLL via Sage's built-in method, without U
tt = cputime()
A = A.LLL()
print(cputime(tt))

Lattice reduction takes 49 seconds in the first case and 7 seconds in the second case. 
Maybe I am missing something. 

Best regards,
Santanu 

Martin R. Albrecht

Oct 2, 2020, 2:38:24 AM
to sage-...@googlegroups.com
Seems like the cost of doing business, the entries of U are pretty big:

sage: print(RR(log(abs(U[0][0]),2)))
1277.66401749439
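
If U is pulled back into a Sage integer matrix first, e.g. via
U.to_matrix(matrix(ZZ, row, row)) with the names from the benchmark above,
one rough way to gauge its overall size:

sage: max(e.nbits() for e in U.list())   # largest entry of U, in bits
sage: sum(e.nbits() for e in U.list())   # total size of all entries of U, in bits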

Dima Pasechnik

Oct 2, 2020, 5:39:30 AM
to sage-devel
On Fri, Oct 2, 2020 at 7:38 AM 'Martin R. Albrecht' via sage-devel
<sage-...@googlegroups.com> wrote:
>
> Seems like the cost of doing business, the entries of U are pretty big:

Could it be mostly the overhead of treating U as a part of the input?
After all, LLL produces a sequence of (sparse?) linear transformations
applied to the input; doesn't fplll provide a facility to keep track of them?
NTL does offer such an option in its LLL implementation, and while
its default LLL is slower, U is computed quite fast:
the computation that returns U is only about twice as long as the one
without returning U.

sage: fl=flatten(list(map(list,list(A))))
sage: M=ntl.mat_ZZ(row,col,fl)
sage: tt=cputime(); rr=M.LLL(); cputime(tt)
18.279667000000018
sage: M=ntl.mat_ZZ(row,col,fl)
sage: tt=cputime(); rr=M.LLL(return_U=True); cputime(tt)
40.468421000000035
sage: tt=cputime(); rr=A.LLL(); cputime(tt)
7.265420000000063

Dima