fplll cuda enumeration


si...@pohmann.de

Dec 20, 2020, 1:42:50 PM
to fplll...@googlegroups.com, jens.zu...@uni-passau.de
Hello,

the library for CUDA enumeration is finished. I now plan to transfer ownership to fplll and create a pull request against the main library to update the README. Is this OK with everyone?

Best regards,
Simon Pohmann

Simon Pohmann

Jan 4, 2021, 2:42:13 PM
to fplll-devel
Hi all,

here is an update on what Martin and I have discussed so far:

Mainly, we plan to write the tests so that most parts can be exercised using only a CPU, since no CUDA device will be available for CI. Apart from this, we have finished a benchmark of enumeration on knapsack matrices, and I am currently working on one with pruning.

Here is the complete discussion:

Simon Pohmann writes:
> Hi Martin,
>
> I completely understand that these questions are important :). Here is what I have decided so far:
>
>> - I assume we cannot do continuous testing on this library, since no one will give us a CUDA card in the cloud to do so. Or can CUDA be emulated for testing? I'd really like to move beyond "but it works on my machine".
>
> Complete testing is probably impossible if we do not have a GPU available; AFAIK CUDA cannot be emulated. However, I have written most of the code so that it also runs on a CPU (single-threaded), as this helped a lot with debugging. It would therefore be possible, without too much effort, to write tests that also run on a system without a GPU. Of course, we will not be able to test synchronization etc. with that approach, but the logic itself should be testable. Apart from that, I have already tested the code on two machines, but I will not be able to provide one for CI.

Okay, that’s at least something.

>> - What commitments can you give to maintaining the library, i.e. dealing with bug reports, feature requests, etc.?
>
> Hard to say in the long term, but for the next few years I will definitely be able to provide at least bug fixes and small feature additions.

That’s fine. No need to commit for life, but it’s good to have some support at least for the near future.

>> - How does the performance compare to a single-core CPU, an X-core CPU, and other CUDA implementations, both without and with pruning, i.e. in a realistic setting? If you need CPU cores to benchmark, I can provide those, but I don't have a CUDA card on those servers.
>
> Well, the University of Passau has provided me with a CUDA server, but I do not have root privileges, and none of fplll's dependencies are installed (neither autotools nor GMP/MPFR; nvcc, make and gcc are present, however). I tried copying binaries and generated Makefiles, but neither worked. If you have an idea how I could get fplll running on that machine, it would help a lot. Apart from this, is there a way to execute just the enumeration with pruning and LLL preprocessing in fplll? AFAIK fplll -a svp never uses pruning, which is what I have used for the current benchmarks.

That’s annoying. We could maybe ask Marc to run benchmarks on his system? I’ll also check with my IT department whether we can set something up here.

But if you can make the installation work: The easiest way to run enumeration with preprocessing is to call BKZ::svp_reduction() on your basis:

https://github.com/fplll/fplll/blob/master/fplll/bkz.cpp#L275

Cheers,
Martin

> Merry Christmas and a happy new year,
> Simon
>
> Martin R. Albrecht wrote:
>> Hi Simon,
>>
>> I think we should have public benchmarks for this, or, more generally, here are some questions I have about the library:
>>
>> - What commitments can you give to maintaining the library, i.e. dealing with bug reports, feature requests, etc.?
>>
>> - I assume we cannot do continuous testing on this library, since no one will give us a CUDA card in the cloud to do so. Or can CUDA be emulated for testing? I'd really like to move beyond "but it works on my machine".
>>
>> - How does the performance compare to a single-core CPU, an X-core CPU, and other CUDA implementations, both without and with pruning, i.e. in a realistic setting? If you need CPU cores to benchmark, I can provide those, but I don't have a CUDA card on those servers.
>>
>> Don't take these questions the wrong way: I want to see this library get the exposure it deserves but such questions should be answered to make that happen.
>>
>> Cheers,
>> Martin
>>
>> Simon Pohmann writes:
>> > Hi Martin,
>> >
>> > I have asked in fplll-devel.
>> >
>> > Additionally, the first benchmark is finished now; one with pruning will follow soon :). The long running times in higher dimensions meant I measured only four lattices per dimension, so the distribution is quite chaotic. Still, I have attached a plot in case you want to use the data as soon as possible.
>> >
>> > Best Regards,
>> > Simon
>> >
>> > Martin R. Albrecht wrote:
>> >> Hi Simon,
>> >>    
>> >> Yep, sending an e-mail to the Google group would be good.
>> >>
>> >> I assume those dimension-60 enumerations are without pruning? Those are nice synthetic benchmarks. It would be super cool, though, if you could also provide benchmarks with pruning and preprocessing, as we do in BKZ. This would give real-world performance.
>> >>
>> >> Cheers,
>> >> Martin
>> >>
>> >> Simon Pohmann writes:
>> >> > Hi Martin,
>> >> >
>> >> >> What's the plan here in more detail? A new project https://github.com/fplll/enum-cuda ?
>> >> >
>> >> > This was what I had in mind, yes. I believe someone proposed this during the fplll days, and it definitely would help with visibility :)
>> >> >
>> >> >> If yes, we should ask on fplll-devel.
>> >> >
>> >> > So I should post the question to the Google Groups forum? The performance report will sadly need a few more days; executing many enumerations in 60 dimensions takes its time, whatever implementation is used...
>> >> >
>> >> > Best Regards,
>> >> > Simon
>> >> >
>> >> > Martin R. Albrecht wrote:
>> >> >> Hi Simon,
>> >> >>
>> >> >> Oh, cool!
>> >> >>
>> >> >> What's the plan here in more detail? A new project https://github.com/fplll/enum-cuda ?
>> >> >>
>> >> >> If yes, we should ask on fplll-devel. I assume you're proposing this move for greater visibility?
>> >> >>
>> >> >> Cheers,
>> >> >> Martin
>> >> >>
>> >> >> Simon Pohmann writes:
>> >> >> > Hi Martin,
>> >> >> >
>> >> >> > the library for CUDA enumeration is finished (at least a first, hopefully stable version). I now plan to transfer ownership to fplll and create a pull request for the main library to add a reference to the README. Is there some important point I have forgotten?
>> >> >> > A short, preliminary report will follow soon :)
>> >> >> >
>> >> >> > Best regards,
>> >> >> > Simon Pohmann