This Fateman benchmark is implemented in:
https://github.com/bluescarni/piranha/blob/5fe6e71e17192295b6c9e1d7fc23f29ada1d868c/tests/fateman2.cpp
and it takes just 8.4s on my 4-core machine:
$ time ./fateman2_perf
Running 1 test case...
8.404437s wall, 32.150000s user + 0.050000s system = 32.200000s CPU (383.1%)
*** No errors detected
Freeing MPFR caches.
Setting shutdown flag.
real 0m8.662s
user 0m32.383s
sys 0m0.068s
True, it's implemented in parallel, but it also prints the total CPU
time summed over all cores, which is 32.2s (32.2/8.4 is roughly 3.8,
hence the 383.1% figure). You can of course also force it to run on a
single core:
$ time ./fateman2_perf 1
Running 1 test case...
29.264847s wall, 29.240000s user + 0.050000s system = 29.290000s CPU (100.1%)
*** No errors detected
Freeing MPFR caches.
Setting shutdown flag.
real 0m29.519s
user 0m29.465s
sys 0m0.062s
Then it takes 29.2s.
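
For reference, fateman2 computes (if I read it right) the usual Fateman
product f*(f+1) with f = (1 + x + y + z + t)^30. Here is a tiny SymPy sketch
of the same shape of computation, just so it's clear what is being timed --
this is not Piranha code, and I use a much smaller exponent, since SymPy is
far slower:

from sympy import symbols, expand, Poly

x, y, z, t = symbols("x y z t")
n = 5  # the runs above presumably use n = 30; kept tiny here

f = expand((1 + x + y + z + t) ** n)   # dense: every monomial up to degree n appears
g = expand(f * (f + 1))                # the actual benchmark product
print(len(Poly(g, x, y, z, t).terms()))  # number of terms in the result
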
I CCed Francesco; I hope I ran it correctly. I am using
https://github.com/bluescarni/piranha/commit/5fe6e71e17192295b6c9e1d7fc23f29ada1d868c.
I am not quite sure which representation Piranha uses internally.
> * Dense representation
> * Nemo could be a lot faster
> * Currently Nemo uses the classical algorithm for three generic rings over
> flint's fmpz_poly (which is used for R) and binary exponentiation. But this
> challenge is more fun if other algorithms are allowed. So long as the
> representation remains dense.
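
Just to make sure I understand the recursive dense setup described above,
here is a toy Python sketch. This is definitely not Nemo's code (Nemo uses
flint's fmpz_poly at the bottom of the tower, while I fake everything with
plain lists and ints, and the names add/mul/power are mine): a polynomial is
a coefficient list whose entries are either integers or coefficient lists
for the next ring up, multiplied with the classical schoolbook algorithm and
raised to a power by repeated squaring.

def add(a, b):
    """Addition in the tower: ints add directly, dense lists add coefficient-wise."""
    if isinstance(a, int) and isinstance(b, int):
        return a + b
    a = a if isinstance(a, list) else [a]
    b = b if isinstance(b, list) else [b]
    n = max(len(a), len(b))
    a = a + [0] * (n - len(a))
    b = b + [0] * (n - len(b))
    return [add(u, v) for u, v in zip(a, b)]

def mul(a, b):
    """Classical (schoolbook) multiplication in the tower."""
    if isinstance(a, int) and isinstance(b, int):
        return a * b
    a = a if isinstance(a, list) else [a]
    b = b if isinstance(b, list) else [b]
    res = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            res[i + j] = add(res[i + j], mul(ai, bj))
    return res

def power(f, n):
    """Binary exponentiation: O(log n) multiplications instead of n - 1."""
    result = 1
    while n:
        if n & 1:
            result = mul(result, f)
        f = mul(f, f)
        n >>= 1
    return result

# f = 1 + x + y + z + t stored dense and recursively: the outer list is in t,
# its coefficients are polynomials in z, and so on down to Z[x] at the bottom.
f = [[[[1, 1], 1], 1], 1]
p = power(f, 5)            # (1 + x + y + z + t)^5 by repeated squaring
q = mul(p, add(p, 1))      # p * (p + 1), the Fateman product
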
> Any predictions on the rate of improvement in this computation over the next
> few years in Sage? In Nemo? I have a prediction.
We are in the process of integrating it into CSymPy
(https://github.com/sympy/csympy) to do fast polynomial manipulation.
On Wed, Aug 20, 2014 at 2:05 PM, Bill Hart <goodwi...@googlemail.com> wrote:
> You did the same thing as Ondrej. You used machine words instead of bignums.
> And I already said Julia does that *at the console* just as fast *with
> machine words*. A point people seem to miss consistently when it suits them!
>
> The importance of microbenchmarks like this is not to compute the answer as
> fast as you can, but to use them as a means of identifying and improving
> performance of systems as a whole.
>
> My point is that this can be done extremely fast *with* bignums, typed at
> the console. And that does matter, regardless of what you think. It
> exercises:
>
> * bignums
> * loops
> * the language itself
> * garbage collection
>
> And it was the first of three offerings. I notice no one even bothered to
> touch the other two cases. Why? Because they are not microbenchmarks!
Because I didn't have time to dive into them.
> Cython could be formidable if it had a jit and parametric types. But feel
> free to ignore progress.
There is @cython.compile and cython.inline. It's a bit "heavy" for a
JIT (yeah, it shells out to a standard C compiler the first time) but
probably technically meets the definition (and takes advantage of
knowing, for example, runtime argument type).
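
If I remember the API right, usage looks roughly like this (it needs the
cython package and a working C compiler; the first call pays the compilation
cost, later calls with the same argument types hit the cache):

import cython

# cython.inline: compile a snippet, specialising on the types of the
# values passed in.
print(cython.inline("return a + b", a=1, b=2))   # -> 3

# cython.compile: compile a whole function lazily; the generated code is
# specialised using the runtime argument types, as noted above.
@cython.compile
def plus(a, b):
    return a + b

print(plus(3, 4))        # -> 7, compiled on first call
print(plus(1.5, 2.5))    # -> 4.0, works for other argument types too
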
>
> The best time I am aware of for closed source software is with exponent 40
> instead of 30, and the time is 33.9s on 12 cores. Perhaps you can try
> Piranha with exponent 40 and see how it compares. It doesn't scale very well
> with the number of cores, but unfortunately I don't have timings for 4 cores.
Which code?
Exponent 40 took almost half an hour on 4 cores:
1684.955474s wall, 6629.100000s user + 2.840000s system =
6631.940000s CPU (393.6%)
It's weird that it scales so badly as the exponent increases.
>
> The best times I managed with my own code were apparently about 3.4s with
> exponent 20 on a single core at 2.2GHz. Again perhaps you can time Piranha
> and see how long it takes.
Exponent 20:
$ ./fateman1_perf
Running 1 test case...
0.550569s wall, 1.830000s user + 0.010000s system = 1.840000s CPU (334.2%)
$ ./fateman1_perf 1
Running 1 test case...
1.680002s wall, 1.670000s user + 0.010000s system = 1.680000s CPU (100.0%)
So 0.55s on 4 cores and 1.68s on 1 core.
Ondrej
I do apologise. It takes time T using flint for Z[x]. The authors of flint
are William Hart, Fredrik Johansson, Sebastian Pancratz, David Harvey, Andy
Novocin and dozens of other authors. The generic arithmetic is part of the
Sage library, written in Cython and Python, and contains code written by
authors too numerous to list, but including William Stein, David Harvey,
Robert Bradshaw....
Ondrej
You specify the path where things get installed; no root access is needed.
I don't have root access to most of my computers.
Sent from my mobile phone.