#4965


Martin Albrecht

Jan 11, 2009, 5:16:43 PM
to sag...@googlegroups.com
Hi [sage-nt],

I've wrapped FLINT's zmod_poly_t for Z/nZ[x], where n is word-sized. The
speed-up is very nice, so it was worth the effort ... well, except that I have
a bug I can't figure out.

I have failures in

sage/schemes/hyperelliptic_curves/hyperelliptic_padic_field.py

and

sage/schemes/elliptic_curves/ell_number_field.py

neither of which I really know very well. My problem is identifying the
function in Polynomial_zmod_flint (which I wrote) that gives wrong results.

I'd greatly appreciate some input from someone who's more familiar with those
files. I guess it is something rather simple, but I can't see it right now
(possibly due to sleep deprivation).
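In case it helps the hunt: one way to localize this sort of bug is a randomized cross-check against a naive reference implementation. The sketch below is plain Python with hypothetical helper names (not Sage's actual API); it checks the ring identity f*(g + h) = f*g + f*h, and substituting the fast wrapper for `ref_mul` one operation at a time usually pinpoints the function that disagrees.

```python
import random

def ref_mul(f, g, n):
    """Reference schoolbook product of coefficient lists f, g in Z/nZ[x]."""
    if not f or not g:
        return []
    h = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            h[i + j] = (h[i + j] + a * b) % n
    return h

def ref_add(f, g, n):
    """Reference sum of coefficient lists f, g in Z/nZ[x]."""
    out = [0] * max(len(f), len(g))
    for i, a in enumerate(f):
        out[i] = a
    for i, b in enumerate(g):
        out[i] = (out[i] + b) % n
    return out

# Randomized consistency check: any correct multiplication must be
# distributive over addition.  Swap the wrapper under test in for
# ref_mul (or ref_add) to see which operation breaks the identity.
random.seed(0)
n = 2**31 - 1
for _ in range(100):
    f = [random.randrange(n) for _ in range(random.randrange(1, 20))]
    g = [random.randrange(n) for _ in range(random.randrange(1, 20))]
    h = [random.randrange(n) for _ in range(random.randrange(1, 20))]
    lhs = ref_mul(f, ref_add(g, h, n), n)
    rhs = ref_add(ref_mul(f, g, n), ref_mul(f, h, n), n)
    assert lhs == rhs
```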

The relevant ticket is:

http://trac.sagemath.org/sage_trac/ticket/4965

Cheers,
Martin

--
name: Martin Albrecht
_pgp: http://pgp.mit.edu:11371/pks/lookup?op=get&search=0x8EF0DC99
_otr: 47F43D1A 5D68C36F 468BAEBA 640E8856 D7951CCF
_www: http://www.informatik.uni-bremen.de/~malb
_jab: martinr...@jabber.ccc.de

John Cremona

Jan 11, 2009, 5:30:59 PM
to sag...@googlegroups.com
I'll try to look at the second of those, since it seems to crop up in
code which I wrote. But I am extremely busy at the moment, so I cannot
promise anything very quickly.

John


Craig Citro

Jan 11, 2009, 11:48:46 PM
to sag...@googlegroups.com
> I've wrapped FLINT's zmod_poly_t for Z/nZ[x] where n is word sized. The
> speed-up is very nice so it was worth the effort ... well, except that I have
> a bug I can't figure out.
>

So this is totally orthogonal to what you're asking, but have you
tried any speed comparisons against zn_poly?

-cc

Burcin Erocal

Jan 12, 2009, 5:29:02 AM
to sag...@googlegroups.com
Hi,

AFAIK, FLINT 1.1 uses zn_poly for zmod_poly_t.

It's easier to use zmod_poly_t from FLINT, since this way we get gcds,
factorization, etc., all in the same interface.


Cheers,

Burcin

Martin Albrecht

Jan 12, 2009, 6:37:35 AM
to sag...@googlegroups.com

Note that Sage ships FLINT 1.0.10, so we don't have zn_poly via FLINT yet. I
could hack a simple function

Polynomial_zmod_flint.multiply_zn_poly

which would call zn_poly. As far as I understand, the formats are very
compatible anyway (arrays of ulongs), or am I missing something?

Btw. David, what are your feelings about zn_poly in Sage? Shall we use it via
FLINT, or is there any benefit from the stand-alone version?

Cheers,
Martin

PS: The speed-up is mainly due to a tighter wrapper (the old one had two
layers of Python/Cython, went via ntl.ZZ_px, and always used ntl.ZZ_px and
never lzz_px).

David Harvey

Jan 12, 2009, 10:29:31 AM
to sage-nt


On Jan 12, 6:37 am, Martin Albrecht <m...@informatik.uni-bremen.de>
wrote:

> which would call zn_poly. As far as I understand, the formats are very
> compatible anyway (arrays of ulongs), or am I missing something?
>
> Btw. David what are your feelings about zn_poly in Sage. Shall we use it via
> FLINT or is there any benefit from the stand alone version?

zn_poly currently does not have a complete interface, so I don't think
it makes much sense to use it directly yet. In particular there are no
division or gcd functions. FLINT apparently supplies those, so at
least you get a complete interface that way.

I eventually plan to include a zn_poly_t type in zn_poly which wraps
zn_array_*, provides automatic memory management etc. More object-
oriented. I am also planning a packed array type, so e.g. modulus = 3
doesn't use 64 bits per coefficient. This will be transparent for a
program using zn_poly_t. But that won't happen for a while yet.
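As an illustration of the saving such a packed type buys (a plain-Python sketch of the idea, not zn_poly's actual layout): with a 2-bit-per-coefficient packing, modulus 3 fits 32 coefficients into each 64-bit word.

```python
def pack(coeffs, bits, word_bits=64):
    """Pack coefficients, each fitting in `bits` bits, into words."""
    per_word = word_bits // bits
    words = []
    for i in range(0, len(coeffs), per_word):
        w = 0
        for j, c in enumerate(coeffs[i:i + per_word]):
            w |= c << (j * bits)   # coefficient j sits bits j*bits .. (j+1)*bits-1
        words.append(w)
    return words

def unpack(words, length, bits, word_bits=64):
    """Recover `length` coefficients from a packed word array."""
    per_word = word_bits // bits
    mask = (1 << bits) - 1
    out = []
    for w in words:
        for j in range(per_word):
            if len(out) == length:
                return out
            out.append((w >> (j * bits)) & mask)
    return out

# Modulus 3 needs only 2 bits per coefficient: 32 coefficients per
# 64-bit word instead of 1 per word, a 32x space saving.
coeffs = [i % 3 for i in range(100)]
packed = pack(coeffs, 2)
assert unpack(packed, 100, 2) == coeffs
assert len(packed) == 4   # ceil(100 / 32) words instead of 100
```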

david

Bill Hart

Jan 12, 2009, 12:28:40 PM
to sage-nt
Hi all, a couple of things to note.

1) FLINT only uses zn_poly for FLINT 1.1 fmpz_poly stuff at present
(note *NOT* zmod_poly at this stage - though this is certainly still
planned).

2) zn_poly is certainly faster than zmod_poly, and FLINT *should* use
zn_poly to make things easy and faster for Sage. I don't know what the
exact differences in speed are, but I know it is 40% in some medium-sized
cases, with diminishing returns for larger problems. For small polynomials
(where FLINT will likely also get improved eventually) it may be a factor
of 2 or 3 at present; I simply don't know. David may be able to answer
this. Of course, after Python/Cython overhead this becomes a smaller
difference.

3) There are some problems with "just using" zn_poly.

3a) zn_poly doesn't offer a packed type yet. To rival Magma, or at
least get the best performance possible, this seems very important.
Note, FLINT 1.1 doesn't have a packed type yet either. FLINT 2 does,
but that won't be available for almost a year.

3b) It is difficult to see how to use zn_poly in FLINT. FLINT needs
to use the same packed array type for Z/nZ polynomials, matrices over
Z/nZ and also for exponent vectors for multivariable polynomials (both
multipolynomials over Z and Z/pZ). The problem is, zn_poly is a
separate library, not part of FLINT, and it certainly doesn't do all
these other things. The only way I can see to use zn_poly at present
is to convert from FLINT format to zn_poly format and back again *when
that will be faster than using FLINT directly*. I can do that, but
this is all turning out to be much more complicated than I hoped. It
doesn't help that the format for packed arrays in FLINT is still not
fully decided.

4) zn_poly's eventual packed arrays may differ from those eventually
used in FLINT. At this stage it looks like I need a packed array
format which stores things as packed arrays of 8, 16, 32 or 64 bit
limbs, or as doubles and packed doubles, i.e. it must provide numerous
packed formats and convert between them all automatically. It's a
difficult problem to solve.

4a) One doesn't want to have to rewrite every algorithm to handle
all the different kinds of packed format one uses. At the same time,
one doesn't want to wrap all these formats in some kind of easy-to-use
uber-wrapper, as then the time taken to access any individual
coefficient becomes way too high, and you lose most of the benefit of
having a packed array in the first place. I believe using iterators
over a very thin structure may be optimal in both cases, and I've been
experimenting with that (and finding out about its pitfalls).
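To make the iterator idea concrete (a plain-Python sketch of the approach, not FLINT's or zn_poly's code): if algorithms are written once against a coefficient iterator, the same code runs unchanged over any packed width.

```python
def packed_iter(words, length, bits, word_bits=64):
    """Yield `length` coefficients from a packed word array, one at a time."""
    per_word = word_bits // bits
    mask = (1 << bits) - 1
    seen = 0
    for w in words:
        for j in range(per_word):
            if seen == length:
                return
            yield (w >> (j * bits)) & mask
            seen += 1

def scalar_mul(it, c, p):
    """Generic scalar multiplication mod p, written once against any iterator."""
    return [c * a % p for a in it]

# The same algorithm runs unchanged over a 2-bit and a 16-bit packing.
words2 = [0b01_00_10_01]        # coefficients 1, 2, 0, 1 packed 2 bits apart
assert scalar_mul(packed_iter(words2, 4, 2), 2, 3) == [2, 1, 0, 2]
words16 = [(5 << 16) | 7]       # coefficients 7, 5 packed 16 bits apart
assert scalar_mul(packed_iter(words16, 2, 16), 3, 11) == [10, 4]
```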

4b) In some cases one wants to pack into entries of 8 bits. One can
use an array of chars for that. Whilst this is fast, on some machines
the order of the bytes within limbs will be different from that on
others. This is fine if you are doing scalar operations, where the
order of bytes within limbs doesn't matter, but it is worthless for
algorithms where the order is important. In the experimental packed
vector format in FLINT 2 there is a compile-time option to choose
between arrays of chars and actually packing 8-bit entries into limbs
using arithmetic operations (which preserve arithmetic order in limbs).
The problem is that the packing option is slower than the array option.
I haven't had time to solve this problem yet, or even to analyse all
the algorithms I am going to want to implement and convince myself
that it can all be done with the faster arrays of chars. Probably it
is not possible.
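The byte-order issue in 4b can be demonstrated directly (a small Python sketch): reading the same char array as a full limb gives host-dependent values, whereas arithmetic packing pins the limb value down regardless of endianness.

```python
import struct

entries = bytes([1, 2, 3, 4, 5, 6, 7, 8])   # eight 8-bit entries as a char array

# Reinterpreting the char array as a 64-bit limb gives different values
# depending on the host's byte order:
little = struct.unpack('<Q', entries)[0]
big = struct.unpack('>Q', entries)[0]
assert little == 0x0807060504030201
assert big == 0x0102030405060708
assert little != big   # limb-level arithmetic on the char layout is not portable

# Packing arithmetically (entry i shifted up 8*i bits) fixes the limb value
# independently of host byte order, preserving arithmetic order in limbs:
arith = sum(c << (8 * i) for i, c in enumerate(entries))
assert arith == 0x0807060504030201
```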

4c) On some machines uint8_t, uint16_t, uint32_t may not all be
available. I don't know; I haven't checked. I think C99 only requires
the interface to provide them when they are available as native formats
on the machine hardware. uint64_t is not required on 32-bit machines; I
know that for sure.

4d) On some machines one absolutely *must* pack one or more entries
into a double for optimum performance over some problem ranges. Any
basic zmod_t format needs to take that into account, whether used for
polynomials or not.
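For context on why 4d works at all (a hedged sketch of the general trick, not FLINT's or zn_poly's actual code): a double's 53-bit significand represents products of sub-2^26 values exactly, which is what makes double-based modular arithmetic viable.

```python
p = 40961                     # a word-sized modulus (a 16-bit prime)
a, b = 12345, 33333

# A double has a 53-bit significand, so the product of two values below
# 2^26 is represented exactly in floating point:
prod = float(a) * float(b)
assert prod == a * b          # exact, since a*b < 2**53

# Reduction via a floating-point quotient; the truncated quotient can be
# off by one, so a single correction step is needed:
q = float(int(prod / p))
r = prod - q * p
if r >= p:
    r -= p
if r < 0:
    r += p
assert int(r) == (a * b) % p
```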

In short, this problem is waaaay complicated and just using zn_poly
for polynomial multiplication over Z/pZ is not necessarily going to be
easy.

All I can offer at present is the following:

a) Wrap FLINT and we'll eventually sort it out. FLINT will offer much
higher level stuff than zn_poly. Eventually we'll figure out how to
milk all the speed from zn_poly in FLINT.

b) I want to offer people the option of compiling FLINT without
zn_poly. So FLINT will still have to have its own code for everything.
It just may not be as highly optimised as David's stuff. A compile
time option will allow people to link against zn_poly for maximum
performance. Perhaps one way around the problem will be to have two
versions of zmod_poly, one which wraps zn_poly and uses the zn_poly
packed format, the other of which uses FLINT native code and FLINT's
own format. A conversion will have to be performed for the algorithms
which need to convert. But this would be an absolute nuisance when it
comes to implementing multipolynomial stuff over Z/pZ, so I am
reluctant to do things this way.

*IF* David and I could agree on a common low level packed format which
was sufficiently capable of dealing with all the problems in FLINT
that it is needed for, then this problem could be avoided altogether.
But I have to say that the different emphases of FLINT and zn_poly
make this already a tall order. I reckon we are going to end up just
converting to zn_poly format and back again when it is faster to do
so.

Anyhow, it seems like now is a good time to begin discussing these
issues. Maybe David and I can hack something out between us. I think I
understand the problems much better now than I did before, having had
a basic go at implementing all of the above in FLINT experimentally
using the packed vector format I came up with. But at a minimum I need
to have a packed double vector format as well.

David, any comments??

Bill.

John Cremona

Jan 12, 2009, 12:31:34 PM
to sag...@googlegroups.com
Is this number theory? Just wondering....

John


William Stein

Jan 12, 2009, 12:48:45 PM
to sag...@googlegroups.com
On Mon, Jan 12, 2009 at 9:31 AM, John Cremona <john.c...@gmail.com> wrote:
>
> Is this number theory? Just wondering....
>
> John

:-) I always wonder about that. It's all about FLINT = "Fast Library
for Number Theory", so that suggests it's number theory. It's also
about replacing functionality in Sage that currently uses NTL =
"Number Theory Library". I think working with the integers modulo n
is number theory, since it's usually chapter 1 or 2 of any basic book
on number theory. It's certainly the sort of thing one could imagine
seeing talks on at any ANTS (Algorithmic Number Theory Symposium)
conference. So I think this is *computational* number theory.

On the other hand, my gut instinct is that it isn't number theory, any
more than linear algebra over GF(p) is number theory.

-- William
--
William Stein
Associate Professor of Mathematics
University of Washington
http://wstein.org

David Harvey

Jan 12, 2009, 4:40:01 PM
to sage-nt
On Jan 12, 12:28 pm, Bill Hart <goodwillh...@googlemail.com> wrote:

> David, any comments??

Nope. Don't really have the time for it right now.

david

Bill Hart

Jan 12, 2009, 5:50:22 PM
to sage-nt
It's really what I call "core arithmetic", not number theory as such.
We don't really have a sage-core list. But I have a problem with
generating too many lists. Sage development becomes too fractured.
People only sign up to a limited number of lists and eventually lists
can lose momentum if the number of active members drops too low.

It looks like we can't really have a meaningful discussion about this
topic for now anyway. We'll have to come back to it when David has
some time. Maybe sage-devel is a better place for it when the time
comes.

Bill.


Bill Hart

Jan 12, 2009, 6:23:36 PM
to sage-nt
I need to add something to my original response here, which is not at
all clear on reading it.

1 and 2 in my list are arguments for using zn_poly *now* in FLINT (for
zmod_poly), i.e. the FLINT 1.x series. This should be done/will be
done, as soon as I find the time to do it. Officially I am currently
working on MPIR and on a theta function project for which I am
travelling to Florida in about a month.

3 and following are about the issues for FLINT 2.0 with using zn_poly.
Those issues do not need to be resolved right now. They don't affect
whether or not I use zn_poly in the FLINT 1.x series.

Bill.