I tried naive things like setting x and y to be integers of a certain type and then running, for example,

sage: %timeit x^y

but I always get "The slowest run took 59.81 times longer than the fastest. This could mean that an intermediate result is being cached."
This makes sense, but I'm not sure what else to try. Individual "%time x^y" statements seem to show no difference between ZZ and numpy.int, for example, which puzzles me (overhead?). I hit exactly the same issue when defining the factorial via
fac = lambda n: 1 if n == 0 else n*fac(n-1)
sage: %time _=fac(500)
CPU times: user 397 µs, sys: 0 ns, total: 397 µs
Wall time: 344 µs
sage: %time _=factorial(500)
CPU times: user 14 µs, sys: 6 µs, total: 20 µs
Wall time: 21.9 µs
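The same gap shows up outside Sage with plain Python, which suggests it is per-call interpreter overhead (about 500 Python-level function calls per result) rather than the integer type; a minimal sketch, using math.factorial as the built-in stand-in:

```python
import math
import timeit

# Recursive lambda, as above; depth 501 stays under Python's default
# recursion limit of 1000.
fac = lambda n: 1 if n == 0 else n * fac(n - 1)

# Both compute the same value...
assert fac(500) == math.factorial(500)

# ...but the recursive version pays for ~500 interpreted calls each time.
t_rec = timeit.timeit(lambda: fac(500), number=100)
t_lib = timeit.timeit(lambda: math.factorial(500), number=100)
print("recursive: %.1f us/call, math.factorial: %.1f us/call"
      % (t_rec * 1e4, t_lib * 1e4))
```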
So here is my question: does anybody know of a basic test/piece of code that would illustrate the difference in speed between various types of integers and/or floats?
sage: numpy.int(2) ^ numpy.int(32)
4294967296
sage: numpy.int(2) ^ numpy.int(64)
18446744073709551616L  # note the trailing L, marking a Python long
sage: numpy.int32(2) ^ numpy.int32(32)
/Applications/SageMath/src/bin/sage-ipython:1: RuntimeWarning: overflow encountered in int_scalars
#!/usr/bin/env python
0
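The wraparound behind that warning can be mimicked in plain Python; a sketch that reduces a value into the signed 32-bit range (the helper wrap32 is mine, not numpy's):

```python
def wrap32(x):
    """Reduce x to the signed 32-bit range, mimicking numpy.int32 overflow."""
    x &= 0xFFFFFFFF                      # keep only the low 32 bits
    return x - 0x100000000 if x >= 0x80000000 else x

print(wrap32(2 ** 32))   # 0, matching the numpy.int32 result above
print(2 ** 32)           # 4294967296: a Python int just keeps growing
print(wrap32(2 ** 31))   # -2147483648: sign bit set, so it wraps negative
```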
sage: %time testfunc(ZZ(1234123), ZZ(1234123), operator.add)
CPU times: user 69.3 ms, sys: 787 µs, total: 70.1 ms
Wall time: 70.8 ms
sage: %time testfunc(int(1234123), int(1234123), operator.add)
CPU times: user 42.3 ms, sys: 631 µs, total: 42.9 ms
Wall time: 43.5 ms
sage: %time testfunc(numpy.int(1234123), numpy.int(1234123), operator.add)
CPU times: user 44 ms, sys: 538 µs, total: 44.5 ms
Wall time: 45 ms
sage: %time testfunc(numpy.int32(1234123), numpy.int32(1234123), operator.add)
CPU times: user 74.2 ms, sys: 822 µs, total: 75 ms
Wall time: 77.7 ms
-- numpy.int32 or numpy.int64: like "int" initially, but arithmetic
works mod 2^32 or 2^64, with an overflow warning when wraparound
happens. No increase in speed, for general reasons which I will just
call "overhead" for lack of a better understanding. (Still good for
numpy functions, obviously.)
On Sat, Dec 3, 2016 at 10:53 PM, Ralf Stephan <> wrote:
"Both ZZ and numpy use libgmp internally"
No, ZZ uses libgmp (actually MPIR, which is a fork of GMP), and numpy here uses Python's ints/longs. Python's int/long type is arbitrary precision, despite the confusing naming, but it only implements relatively naive algorithms (schoolbook, Karatsuba, etc.), not the asymptotically fast Fourier-transform methods that GMP (and MPIR) implement and highly optimize. So Sage's ZZ will start beating the pants off of Python (and numpy) when the numbers get large.
Example – try this with various values of “digits” and you’ll see ZZ being arbitrarily faster than Python’s longs:
def bench(digits):
    print "digits =", digits
    global n, m, n1, m1, n2, m2  # timeit uses the global namespace
    n = ZZ.random_element(10^digits)
    m = ZZ.random_element(10^digits)
    %timeit n*m
    n1 = int(n)
    m1 = int(m)
    %timeit n1*m1
    import numpy
    n2 = numpy.int(n)
    m2 = numpy.int(m)
    %timeit n2*m2
bench(10)
bench(100)
bench(1000)
bench(10000)
bench(100000)
bench(1000000)
# At 1 million digits (on this test machine), ZZ is 46.5 times
# faster than Python ints: 870 / 18.7 ≈ 46.5
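The Python side of this crossover can be sensed without Sage at all: CPython switches from schoolbook to Karatsuba multiplication above a size threshold, so the cost grows roughly like d^1.585 in the digit count d, which GMP's FFT-based methods eventually leave far behind. A rough timing sketch (absolute numbers are machine-dependent):

```python
import random
import timeit

# Rough sketch: Python-int multiplication cost vs. operand size.
# CPython uses schoolbook multiplication for small ints and Karatsuba
# (~d**1.585) for large ones; neither matches GMP's FFT-based methods.
for digits in (1000, 10000, 100000):
    n = random.randrange(10 ** digits)
    m = random.randrange(10 ** digits)
    t = timeit.timeit(lambda: n * m, number=20) / 20
    print("digits = %-7d %.2e s per multiply" % (digits, t))
```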