You received this message because you are subscribed to the Google Groups "sage-nt" group.
Are you planning to use log tables for speed? That's why Givaro is so
fast. It doesn't seem like you mention anything like that on your
page.
> If someone wants to work on this (I can't right now), feel free to get in
> touch with me. My main suggestion would be to try to incorporate the model
> from sage.rings.polynomials.polynomial_template.
When you copy code out to make your new code, please, please think
hard and benchmark each function. The code in
sage.rings.polynomials.polynomial_template has some annoying
inefficiencies due to lack of very careful benchmarking and use of
Cython, which surprised me last summer... (Of course, one can make
this sort of complaint about most code...)
--
William Stein
Professor of Mathematics
University of Washington
http://wstein.org
Going up to 2^20 would only use a few MB, since the memory usage is
O(field cardinality). You just store logs of all elements, but you
still have to add them. It's a 1xn table, not an nxn table.
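To make the memory argument concrete, here is a minimal Python sketch of multiplication via a precomputed log table over a prime field, illustrating the O(field cardinality) storage (one 1xn log table plus one antilog table) rather than an nxn multiplication table. The function names (`build_log_tables`, `mul`) and the use of GF(7) with generator 3 are illustrative assumptions, not Givaro's actual API.

```python
# Illustrative sketch (not Givaro's API): multiplication in GF(p) via
# precomputed log/antilog tables, so each product costs one addition
# modulo p-1 and two table lookups. Storage is O(p), not O(p^2).

def build_log_tables(p, g):
    """Precompute log and antilog tables for GF(p)*, g a generator mod p."""
    log = [0] * p          # log[x] = k such that g^k == x (mod p)
    exp = [0] * (p - 1)    # exp[k] = g^k mod p
    x = 1
    for k in range(p - 1):
        exp[k] = x
        log[x] = k
        x = (x * g) % p
    return log, exp

def mul(a, b, p, log, exp):
    """Multiply a, b in GF(p) using the tables; zero is handled separately."""
    if a == 0 or b == 0:
        return 0
    return exp[(log[a] + log[b]) % (p - 1)]

log, exp = build_log_tables(7, 3)   # 3 generates the multiplicative group of GF(7)
assert mul(4, 5, 7, log, exp) == (4 * 5) % 7
```

As the message above notes, addition does not benefit from this representation: the logs of a sum are unrelated to the logs of the summands, which is where Zech-logarithm-style tricks (or plain modular addition) come in.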
I wrote to the Givaro author just now and he wrote back to say that
Givaro already supports bigger fields, though he's not really
answering Kiran's question (which includes F_q, and emphasizes that
the speed must be optimal, since non-optimal speed is already in Sage
for big fields):
"Dear William,
ZpzDom<Std64> should work with larger primes.
(as Modular<double> from LinBox should work with primes up to roughly 2^26)
Best, Jean-Guillaume Dumas."
>> > If someone wants to work on this (I can't right now), feel free to get
>> > in
>> > touch with me. My main suggestion would be to try to incorporate the
>> > model
>> > from sage.rings.polynomials.polynomial_template.
>>
>> When you copy code out to make your new code, please, please think
>> hard and benchmark each function. The code in
>> sage.rings.polynomials.polynomial_template has some annoying
>> inefficiencies due to lack of very careful benchmarking and use of
>> Cython, which surprised me last summer... (Of course, one can make
>> this sort of complaint about most code...)
>
> That surprises me as well. Do you think the inefficiencies are inherent in
> the general approach, or would a better implementation address the problem?
The general approach is fine, I think. It's entirely an issue of
taking even more care in the implementation.
> Once I actually start working on it, I would be interested in talking to you
> about specific inefficiencies that you've noticed. And I'll try to
> benchmark things as I go.
Cool.
> David
"Also, note that
GFqDom<long long> is limited by the memory (of size 3p) and precomputations
(on most architectures should nowadays go around 2^22).
So ZpzDom<Std64> is the only one to go up to 32 bits primes (except of
course ZpzDom<Integer> which only has GMP as limits).
Note also that if extension fields more than prime fields are required
Extension<> will use the largest possible base field for polynomial
coefficients.
Best,
--
Jean-Guillaume Dumas."
PARI now has specialized code to deal with finite fields. This code is
currently not used in Sage because it is a recent addition to PARI. So
wrapping PARI's new finite field code would be a clear improvement.
This would be quite a big project though.
Can you give an example of a (simple) computation which demonstrates this
performance difference? Just to see whether using PARI's finite field
code would make a difference.
Jeroen.
It's striking:
sage: k.<a> = GF(3^9); x = k.random_element(); y = k.random_element()
sage: timeit('x*y'); timeit('x+y')
625 loops, best of 3: 219 ns per loop
625 loops, best of 3: 221 ns per loop
sage: k.<a> = GF(3^11); x = k.random_element(); y = k.random_element()
sage: timeit('x*y'); timeit('x+y')
625 loops, best of 3: 14 µs per loop
625 loops, best of 3: 13.5 µs per loop
sage: 14/.219
63.9269406392694
# Convert a Sage finite field element to a PARI finite field element
sage: def pari_ffelt(x):
....:     parix = pari(x)
....:     return parix.lift().subst(parix.variable(), "ffgen((%s).mod)" % parix)
# Sage (= old PARI) vs. new PARI
sage: k.<a> = GF(3^11); x=k.random_element(); y=k.random_element();
sage: px = pari_ffelt(x); py = pari_ffelt(y);
sage: timeit('x*y'); timeit('x+y'); timeit('x^1000')
625 loops, best of 3: 23 µs per loop
625 loops, best of 3: 17 µs per loop
625 loops, best of 3: 314 µs per loop
sage: timeit('px*py'); timeit('px+py'); timeit('px^1000')
625 loops, best of 3: 3.16 µs per loop
625 loops, best of 3: 1.06 µs per loop
625 loops, best of 3: 33 µs per loop
# Givaro vs. new PARI
sage: k.<a> = GF(3^10); x=k.random_element(); y=k.random_element();
sage: px = pari_ffelt(x); py = pari_ffelt(y);
sage: timeit('x*y'); timeit('x+y'); timeit('x^1000')
625 loops, best of 3: 217 ns per loop
625 loops, best of 3: 221 ns per loop
625 loops, best of 3: 1.41 µs per loop
sage: timeit('px*py'); timeit('px+py'); timeit('px^1000')
625 loops, best of 3: 3.14 µs per loop
625 loops, best of 3: 1.03 µs per loop
625 loops, best of 3: 30.2 µs per loop
Jeroen.