
Going past the float size limits?


jimmy.mu...@gmail.com

Oct 26, 2007, 6:35:54 PM
Hello all
It would be great if I could make a number that can go beyond current
size limitations. Is there any sort of external library that can have
infinitely huge numbers? Way way way way beyond say 5x10^350 or
whatever it is?

I'm hitting that "inf" boundary rather fast and I can't seem to work
around it.

Thanks!

Diez B. Roggisch

Oct 26, 2007, 6:55:21 PM

Chris Mellon

Oct 26, 2007, 6:56:05 PM
to pytho...@python.org

What in the world are you trying to count?

Guilherme Polo

Oct 26, 2007, 6:57:33 PM
to jimmy.mu...@gmail.com, pytho...@python.org
2007/10/26, jimmy.mu...@gmail.com <jimmy.mu...@gmail.com>:

> Hello all
> It would be great if I could make a number that can go beyond current
> size limitations. Is there any sort of external library that can have
> infinitely huge numbers? Way way way way beyond say 5x10^350 or
> whatever it is?

Check the decimal module

>
> I'm hitting that "inf" boundary rather fast and I can't seem to work
> around it.
>
> Thanks!
>

> --
> http://mail.python.org/mailman/listinfo/python-list
>


--
-- Guilherme H. Polo Goncalves
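The decimal suggestion above can be sketched like this (modern Python 3 syntax; the 0.002625 value is the product of the probabilities quoted later in the thread). Decimal's default context allows exponents far beyond the float range, so the running product never underflows to zero:

```python
from decimal import Decimal, getcontext

getcontext().prec = 20          # 20 significant digits is plenty here

p = Decimal("0.002625")         # 0.35 * 0.30 * 0.25 * 0.10, exactly
product = Decimal(1)
for _ in range(1000):           # far past where plain floats hit 0.0
    product *= p

print(product)                  # tiny but nonzero, on the order of 1E-2581
```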

J. Cliff Dyer

Oct 26, 2007, 6:58:32 PM
to jimmy.mu...@gmail.com, pytho...@python.org
hmm.

Python 2.5.1 (r251:54863, Apr 18 2007, 08:51:08) [MSC v.1310 32 bit
(Intel)] on win32
<snip>
>>> 5 * (10 ** 700)
50000000000000000000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000000000
000000000000000000000000000000000000000000000000000L


Do you really need more than 700 places of precision? Once your numbers
are that large, surely you can use integer math, right? (FYI, 5 * (10 **
10000) works just as well.)

Cheers,
Cliff

Matt McCredie

Oct 26, 2007, 7:00:36 PM
to jimmy.mu...@gmail.com, pytho...@python.org


You have a couple of options.
1. Use long if that is appropriate for your data; longs can be as large
as you want (eventually you will reach memory constraints, but that
isn't likely).
2. There is a decimal type, which is based on long (I think) and can
have a decimal portion.

to use longs:
x = 5 * 10**350

to use decimal:
import decimal
x = decimal.Decimal("5e350")

You will probably want to read up on the decimal module.

Matt

jimmy.mu...@gmail.com

Oct 26, 2007, 7:29:34 PM
On Oct 26, 6:56 pm, "Chris Mellon" <arka...@gmail.com> wrote:

The calculation looks like this

A = 0.35
T = 0.30
C = 0.25
G = 0.10

and then I basically continually multiply those numbers together. I
need to do it like 200,000+ times but that's nuts. I can't even do it
1000 times or the number rounds off to 0.0. I tried taking the inverse
of these numbers as I go but then it just shoots up to "inf".
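Both failure modes he describes are easy to reproduce with plain floats (a hypothetical loop in modern Python 3 syntax, using the probabilities above):

```python
A, T, C, G = 0.35, 0.30, 0.25, 0.10

# Multiplying the probabilities underflows to exactly 0.0
# long before 1000 iterations (doubles bottom out near 5e-324).
product = 1.0
for _ in range(1000):
    product *= A * T * C * G
print(product)        # 0.0

# Multiplying the reciprocals overflows instead (doubles top out near 1.8e308).
inverse = 1.0
for _ in range(1000):
    inverse *= 1.0 / (A * T * C * G)
print(inverse)        # inf
```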

Hrvoje Niksic

Oct 26, 2007, 7:54:58 PM
jimmy.mu...@gmail.com writes:

> The calculation looks like this
>
> A = 0.35
> T = 0.30
> C = 0.25
> G = 0.10
>
> and then I basically continually multiply those numbers together. I
> need to do it like 200,000+ times but that's nuts. I can't even do it
> 1000 times or the number rounds off to 0.0. I tried taking the inverse
> of these numbers as I go but then it just shoots up to "inf".

>>> import gmpy
>>> A = gmpy.mpf('0.35')
>>> B = gmpy.mpf('0.30')
>>> C = gmpy.mpf('0.25')
>>> D = gmpy.mpf('0.10')
>>> result = gmpy.mpf(1)
>>> for n in xrange(200000):
... result *= A
... result *= B
... result *= C
... result *= D
...
>>> result
mpf('7.27023409768722186651e-516175')

It's reasonably fast, too. The above loop took a fraction of a second
to run on an oldish computer.

Grant Edwards

Oct 26, 2007, 8:00:12 PM

>> What in the world are you trying to count?
>
> The calculation looks like this
>
> A = 0.35
> T = 0.30
> C = 0.25
> G = 0.10

The bases in DNA?

> and then I basically continually multiply those numbers together. I
> need to do it like 200,000+ times but that's nuts.

Exactly. It sure looks like what you're doing is nuts.

> I can't even do it 1000 times or the number rounds off to 0.0.
> I tried taking the inverse of these numbers as I go but then
> it just shoots up to "inf".

Can you explain what it is you're trying to calculate?

--
Grant Edwards
gra...@visi.com

Stéphane Larouche

Oct 26, 2007, 8:03:26 PM
to pytho...@python.org
<jimmy.musselwhite <at> gmail.com> writes:

> The calculation looks like this
>
> A = 0.35
> T = 0.30
> C = 0.25
> G = 0.10
>
> and then I basically continually multiply those numbers together. I
> need to do it like 200,000+ times but that's nuts. I can't even do it
> 1000 times or the number rounds off to 0.0. I tried taking the inverse
> of these numbers as I go but then it just shoots up to "inf".

I suggest you add the logarithm of those numbers.

Stéphane
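Summing logarithms as suggested is the standard trick here; a minimal sketch in modern Python 3 syntax, using the probabilities quoted above:

```python
import math

probs = [0.35, 0.30, 0.25, 0.10]

# Add log10 of each factor instead of multiplying the factors directly;
# the running total stays a modest negative float and never underflows.
log_total = 0.0
for _ in range(200000):
    for p in probs:
        log_total += math.log10(p)

# Recover scientific notation: value == mantissa * 10**exponent
exponent = math.floor(log_total)
mantissa = 10.0 ** (log_total - exponent)
print("%.4fe%d" % (mantissa, exponent))   # close to 7.2702e-516175
```

This agrees with the gmpy result posted elsewhere in the thread, at a fraction of the cost.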

jimmy.mu...@gmail.com

Oct 26, 2007, 8:07:34 PM
On Oct 26, 8:03 pm, Stéphane Larouche <stephane.larou...@polymtl.ca>
wrote:

Well I'd add the logarithms if it was me that made the algorithm. I
don't think I understand it all that well. My professor wrote it out
and I don't want to veer away and add the logs of the values because I
don't know if that's the same thing or not.

mensa...@aol.com

Oct 26, 2007, 8:21:04 PM

As mentioned elsewhere, gmpy is a possible solution. You can do
the calculations with unlimited precision rationals without
introducing any rounding errors and then convert the final
answer to unlimited precision floating point without ever
hitting 0 or inf:

>>> import gmpy
>>> A = gmpy.mpq(35,100)
>>> b = A**200000
>>> c = gmpy.mpf(b)
>>> gmpy.fdigits(c)
'4.06321735803245162316e-91187'
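The same exact-rational idea works without gmpy, using the standard library's fractions module (which builds on Python's unlimited-precision ints). A sketch reproducing the calculation above, estimating the decimal exponent from the exact numerator and denominator:

```python
import math
from fractions import Fraction

A = Fraction(35, 100)            # reduces to exactly 7/20, no rounding
b = A ** 200000                  # exact rational; cannot underflow

# math.log10 accepts arbitrarily large ints, so the exact terms
# give the decimal exponent directly.
exponent = math.log10(b.numerator) - math.log10(b.denominator)
print(exponent)                  # about -91186.39, i.e. ~4.06e-91187
```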

Steven D'Aprano

Oct 26, 2007, 9:35:28 PM
On Fri, 26 Oct 2007 16:29:34 -0700, jimmy.musselwhite wrote:

> On Oct 26, 6:56 pm, "Chris Mellon" <arka...@gmail.com> wrote:
>> On 10/26/07, jimmy.musselwh...@gmail.com <jimmy.musselwh...@gmail.com>
>> wrote:
>>
>> > Hello all
>> > It would be great if I could make a number that can go beyond current
>> > size limitations. Is there any sort of external library that can have
>> > infinitely huge numbers? Way way way way beyond say 5x10^350 or
>> > whatever it is?
>>
>> > I'm hitting that "inf" boundary rather fast and I can't seem to work
>> > around it.
>>
>> What in the world are you trying to count?
>
> The calculation looks like this
>
> A = 0.35
> T = 0.30
> C = 0.25
> G = 0.10
>
> and then I basically continually multiply those numbers together. I need
> to do it like 200,000+ times but that's nuts.


Because this is homework, I'm not going to give you the answer. But I
will give you *almost* the answer:

(A*T*C*G)**200000 = ?.03?10875*10**-51?194

Some of the digits in the above have been deliberately changed to
question marks, to keep you honest.

> I can't even do it 1000
> times or the number rounds off to 0.0. I tried taking the inverse of
> these numbers as I go but then it just shoots up to "inf".

Hint:

If we multiply A*T*C*G, we get 0.002625.

That's the same as 0.2625 with a (negative) scale factor of 2.

Another hint: for calculations of this nature, you can't rely on floating
point because it isn't exact. You need to do everything in integer maths.

A third hint: don't re-scale the product too often.

Enough clues?

--
Steven.
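One way to act on those hints (a sketch, not necessarily the intended answer): keep an exact integer mantissa with a power-of-ten scale factor, and re-scale only when the mantissa gets long. Python 3 syntax; the 2625 * 10**-6 decomposition comes from the hint above.

```python
# Track the value as mantissa * 10**exponent, with mantissa an exact int.
mantissa = 1
exponent = 0

for _ in range(200000):
    mantissa *= 2625            # 0.002625 == 2625 * 10**-6
    exponent -= 6
    if mantissa >= 10 ** 40:    # re-scale only occasionally, per the hint
        drop = len(str(mantissa)) - 20
        mantissa //= 10 ** drop # keep ~20 significant digits
        exponent += drop

digits = len(str(mantissa))
print("%.4fe%d" % (mantissa / 10 ** (digits - 1), exponent + digits - 1))
```

The truncation on re-scaling loses only ~1e-19 relative precision each time, so the leading digits come out matching the exact gmpy answer posted earlier.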

Hendrik van Rooyen

Oct 27, 2007, 4:24:41 AM
to pytho...@python.org
<jimmy.mu...@gmail.com> wrote:

Yeah right. Nuts it is.

0.35*0.3*0.25*0.1 is approximately a third of a third of a
quarter of a tenth, or more precisely 2.625 parts in a thousand.

So after the second set of multiplies, you have about 6.89 parts in a million,
and then about 18 parts in a billion after the third, and so on - the exponent
drops by between 2 and 3 on every iteration.

So 0.002625**200000 is a number so small that it's about as close as
you can practically get to bugger-all, as it is less than 10 ** -400000,
and more than 10**-600000.

Now I have heard rumours that there are approximately 10**80 elementary
particles in the universe, so this is much less than one of them, even if my
rumour is grossly wrong.

A light year is of the order of 9.46*10**18 millimetres, and no human has ever
been that far away from home. Call it 10**19 for convenience. So your number
slices the last millimetre in a light year into more than 10**399981 parts.

Have you formulated the problem that you are trying to solve properly?

- Hendrik

Carl Banks

Oct 27, 2007, 5:28:05 AM
On Oct 26, 8:00 pm, Grant Edwards <gra...@visi.com> wrote:

> On 2007-10-26, jimmy.musselwh...@gmail.com <jimmy.musselwh...@gmail.com> wrote:
>
> >> What in the world are you trying to count?
>
> > The calculation looks like this
>
> > A = 0.35
> > T = 0.30
> > C = 0.25
> > G = 0.10
>
> The bases in DNA?
>
> > and then I basically continually multiply those numbers together. I
> > need to do it like 200,000+ times but that's nuts.
>
> Exactly. It sure looks like what you're doing is nuts.
>
> > I can't even do it 1000 times or the number rounds off to 0.0.
> > I tried taking the inverse of these numbers as I go but then
> > it just shoots up to "inf".
>
> Can you explain what it is you're trying to calculate?

Looks pretty obviously to be a calculation of the probability that
bases in the given ratios would randomly form a particular sequence.

What use this would be I cannot tell.


Carl Banks

Steven D'Aprano

Oct 27, 2007, 9:12:29 AM
On Sat, 27 Oct 2007 10:24:41 +0200, Hendrik van Rooyen wrote:

> So 0.002625**200000 is a number so small that it's about as close as you
> can practically get to bugger-all, as it is less than 10 ** -400000, and
> more than 10**-600000.

If you read the rest of the thread, you'll see I give a much more
accurate estimate. It's approaching 10**-520000.


> Now I have heard rumours that there are approximately 10**80 elementary
> particles in the universe, so this is much less than one of them, even
> if my rumour is grossly wrong.
>
> A light year is of the order of 9.46*10**18 millimetres, and no human
> has ever been that far away from home. Call it 10**19 for convenience.
> So your number slices the last millimetre in a light year into more than
> 10**399981 parts.

Numbers like 10**520000 (the reciprocal of the product found) are
perfectly reasonable if you're dealing with (say) permutations.
Admittedly, even something of the complexity of Go only has about 10**150
possible moves, but Go is simplicity itself compared to (say) Borges'
Library of Babel or the set of all possible genomes.

It's not even what mathematicians call a "large number" -- it can be
written using ordinary notation of powers. For large numbers that can't
be written using ordinary notation, see here:

http://en.wikipedia.org/wiki/Large_number
http://www.scottaaronson.com/writings/bignumbers.html

For instance, Ackermann's Sequence starts off quite humbly:

2, 4, 27 ...

but the fourth item is 4**4**4**4 (which has roughly 10**154 digits) and the fifth
can't even be written out in ordinary mathematical notation.


Calculating numbers like 10**520000 or its reciprocal is also a very good
exercise in programming. Anyone can write a program to multiply two
floating point numbers together and get a moderately accurate answer:

product = X*Y # yawn

But multiplying 200,000 floating point numbers together and getting an
accurate answer somewhere near 10**-520000 requires the programmer to
actually think about what they're doing. You can't just say:

A,T,C,G = (0.35, 0.30, 0.25, 0.10)
product = map(operator.mul, [A*T*C*G]*200000)

and expect to get anywhere.

Despite my fear that this is a stupid attempt by the Original Poster's
professor to quantify the old saw about evolution being impossible
("...blah blah blah hurricane in a junk yard blah blah Concorde blah blah
blah..."), I really like this homework question.

--
Steven.

mensa...@aol.com

Oct 27, 2007, 11:04:58 AM
On Oct 27, 8:12 am, Steven D'Aprano <st...@REMOVE-THIS-cybersource.com.au> wrote:
> On Sat, 27 Oct 2007 10:24:41 +0200, Hendrik van Rooyen wrote:
> > So 0.002625**200000 is a number so small that its about as close as you
> > can practically get to bugger-all, as it is less than 10 ** -400000, and
> > more than 10**-600000
>
> If you read the rest of the thread, you'll see I give a much more
> accurate estimate. It's approaching 10**-520000.
>
> > Now I have heard rumours that there are approximately 10**80 elementary
> > particles in the universe, so this is much less than one of them, even
> > if my rumour is grossly wrong.
>
> > A light year is of the order of 9.46*10**18 millimetres, and no human
> > has ever been that far away from home. Call it 10**19 for convenience.
> > So your number slices the last millimetre in a light year into more than
> > 10**399981 parts.
>
> Numbers like 10**520000 (the reciprocal of the product found) is a
> perfectly reasonable number if you're dealing with (say) permutations.
> Admittedly, even something of the complexity of Go only has about 10**150
> possible moves, but Go is simplicity itself compared to (say) Borges'
> Library of Babel or the set of all possible genomes.

And numbers of that size needn't be intractable.
The run time to generate a Collatz sequence is
logarithmic to the starting number. A number with
53328 digits only takes about 2.5 million iterations.
Of course, you can't test ALL the numbers of that
size, but often we're researching only certain types,
such as the ith kth Generation Type [1,2] Mersenne
Hailstone:

Closed form: Type12MH(k,i)
Find ith, kth Generation Type [1,2] Mersenne Hailstone
using the closed form equation

2**(6*((i-1)*9**(k-1)+(9**(k-1)-1)/2+1)-1)-1

2**5-1 generation: 1
2**29-1 generation: 2
2**245-1 generation: 3
2**2189-1 generation: 4
2**19685-1 generation: 5
2**177149-1 generation: 6
2**1594325-1 generation: 7
2**14348909-1 generation: 8
2**129140165-1 generation: 9
2**1162261469-1 generation:10

1.141 seconds

Generation 10 has over a billion bits or >300
million digits. I had to stop there because an
exponent of 32 bits gives an "outrageous exponent"
error.

The closed form formula only works for a very
specific type of number. The research goal was
to come up with a generic algorithm that works
with any Type and see if the algorithm obtains
the same results:

Verify Type12MH Hailstones:
Find ith, kth Generation Type (xyz) Hailstone
using the non-recursive equation

(gmpy.divm(y**(k-1)-prev_gen[2],y-x,y**(k-1))/
y**(k-2))*xyz[1]**(k-1)+prev_gen[3]

where i=((hailstone-geni(k,1,xyz))/(xyz[1]**k))+1

2**5-1 generation: 1
2**29-1 generation: 2
2**245-1 generation: 3
2**2189-1 generation: 4
2**19685-1 generation: 5
2**177149-1 generation: 6
2**1594325-1 generation: 7
2**14348909-1 generation: 8
2**129140165-1 generation: 9
2**1162261469-1 generation:10

4.015 seconds

There are legitimate uses for such large numbers
and Python's ability to handle this was what got
me interested in using Python in the first place.

>
> It's not even what mathematicians call a "large number" -- it can be
> written using ordinary notation of powers. For large numbers that can't
> be written using ordinary notation, see here:
>

> http://en.wikipedia.org/wiki/Large_number
> http://www.scottaaronson.com/writings/bignumbers.html

Hendrik van Rooyen

Oct 28, 2007, 3:44:17 AM
to pytho...@python.org
"Steven D'Aprano" <stev....e.com.au> wrote:

> Calculating numbers like 10**520000 or its reciprocal is also a very good
> exercise in programming. Anyone can write a program to multiply two
> floating point numbers together and get a moderately accurate answer:
>
> product = X*Y # yawn
>
> But multiplying 200,000 floating point numbers together and getting an
> accurate answer somewhere near 10**-520000 requires the programmer to
> actually think about what they're doing. You can't just say:
>
> A,T,C,G = (0.35, 0.30, 0.25, 0.10)
> product = map(operator.mul, [A*T*C*G]*200000)
>
> and expect to get anywhere.
>
> Despite my fear that this is a stupid attempt by the Original Poster's
> professor to quantify the old saw about evolution being impossible
> ("...blah blah blah hurricane in a junk yard blah blah Concorde blah blah
> blah..."), I really like this homework question.

yes it got me going too - and I even learned about the gmpy stuff - so it was
a bit of a Good Thing...

- Hendrik

Paul Rubin

Oct 28, 2007, 3:22:13 PM
Steven D'Aprano <st...@REMOVE-THIS-cybersource.com.au> writes:
> You can't just say:
> product = map(operator.mul, [A*T*C*G]*200000)
> and expect to get anywhere.

from math import log


A,T,C,G = (0.35, 0.30, 0.25, 0.10)

c,m = divmod(200000*log(A*T*C*G,10), 1)
print "%.3fe%d"%(m, int(c))

Paul Rubin

Oct 28, 2007, 3:26:19 PM
Paul Rubin <http://phr...@NOSPAM.invalid> writes:
> c,m = divmod(200000*log(A*T*C*G,10), 1)
> print "%.3fe%d"%(m, int(c))

Barf. Left as exercise: fix the error.
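The error, presumably: m is the fractional part of the logarithm, so the printed mantissa should be 10**m, and divmod's floor behavior must be kept so the mantissa lands in [1, 10). A fixed sketch in modern Python 3 syntax:

```python
from math import floor, log10

A, T, C, G = (0.35, 0.30, 0.25, 0.10)

logval = 200000 * log10(A * T * C * G)
e = floor(logval)                 # integer exponent (floor, toward -inf)
m = 10 ** (logval - e)            # mantissa in [1, 10)
print("%.3fe%d" % (m, e))         # prints 7.270e-516175
```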


Fredrik Johansson

Oct 28, 2007, 5:50:10 PM
to pytho...@python.org, jimmy.mu...@gmail.com
jimmy.musselwhite at gmail.com wrote:
> Hello all
> It would be great if I could make a number that can go beyond current
> size limitations. Is there any sort of external library that can have
> infinitely huge numbers? Way way way way beyond say 5x10^350 or
> whatever it is?

> I'm hitting that "inf" boundary rather fast and I can't seem to work
> around it.

> Thanks!

mpmath (http://code.google.com/p/mpmath/) supports nearly unlimited
exponents (as large as fit in your computer's memory); much larger
than gmpy. It is also essentially compatible with regular floats with
respect to rounding, function support, etc, if you need that.

Fredrik
