Thanks,
Srini
It's been compiled for 64-bit processors, so it uses 64-bit pointers
and 64-bit small integers, and I would expect it to work only with C
extension libraries that were also compiled for 64-bit CPUs. In short,
"64-bit" means the same thing here that it means for any other program.
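For example, you can ask the running interpreter for its pointer width
directly (a quick check, using only the standard library):

```python
import struct
import sys

# Width of a C pointer in this build: 8 on a 64-bit Python, 4 on 32-bit.
pointer_size = struct.calcsize("P")

# Equivalent check: sys.maxsize exceeds 2**32 only on 64-bit builds.
is_64bit = sys.maxsize > 2**32

print(pointer_size, is_64bit)
```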
Cheers,
Chris
--
I have a blog:
http://blog.rebertia.com
A 64-bit build of Python doesn't necessarily mean that the small integer
type is 64-bit, too. Python uses the C type long on all platforms, and on
Windows sizeof(long) == 4 (four bytes) on both 32-bit and 64-bit versions
of Windows.
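This difference is easy to observe from Python itself. On 64-bit Windows
(an LLP64 platform) long stays 4 bytes while pointers are 8 bytes, whereas
on 64-bit Linux and macOS (LP64) both are 8. A minimal check with the
struct module:

```python
import struct

long_size = struct.calcsize("l")     # size of the C type long
pointer_size = struct.calcsize("P")  # size of a C pointer

# On 64-bit Windows (LLP64): long_size == 4, pointer_size == 8.
# On 64-bit Linux/macOS (LP64): both are 8.
# On any 32-bit platform: both are 4.
print(long_size, pointer_size)
```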
Christian
While Chris' answer is correct, it doesn't show the consequences
of using a 64-bit Python. Primarily, these are:
- strings, Unicode objects, lists, dicts, and tuples can have more than
2**31 elements.
- you can load 64-bit DLLs into the Python process, and 64-bit
applications (such as a 64-bit Apache or IIS process) can load
the Python interpreter into their address spaces.
- you need a 64-bit operating system to run Python
The first item is only relevant if
a) you have that much data that you want to put into a single
container, and
b) you have that much memory to keep the entire container in
memory. For a list with 2**31 elements, you need 16 GiB of
memory to represent the list alone, not counting the actual
data (e.g. for a list of 2**31 Nones). For a dict, you need
more than 48 GiB for the dict alone. For a byte string,
2 GiB is enough to get past the 2**31 element boundary.
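Those figures follow from simple pointer arithmetic. Assuming 8-byte
pointers on a 64-bit build, the list and string sizes work out to:

```python
GiB = 2 ** 30
POINTER_BYTES = 8          # assumption: CPython on a 64-bit platform

elements = 2 ** 31

# A list stores one pointer per element, so the pointer array alone is:
list_pointers = elements * POINTER_BYTES
assert list_pointers == 16 * GiB   # the 16 GiB figure for the list

# A byte string stores one byte per element, so 2**31 elements is:
string_bytes = elements * 1
assert string_bytes == 2 * GiB     # 2 GiB reaches the 2**31 boundary
```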
I just noticed that this description is not completely correct:
in a 32-bit process, the upper size of collections is actually
smaller than 2**31. For a list, you can have only up to 2**30
elements in the list; on many operating systems, only 2**29.
For a dict, the maximum number of elements is even smaller,
around 250 million. So you would need a 64-bit Python already
to get past these boundaries.
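The 32-bit ceilings come from the same arithmetic, applied to a 4 GiB
address space (or, on many operating systems, only 2 GiB of user address
space per process) with 4-byte pointers:

```python
GiB = 2 ** 30
POINTER_BYTES = 4                  # assumption: a 32-bit build

# Full 4 GiB address space: at most 2**30 list-element pointers fit.
assert (4 * GiB) // POINTER_BYTES == 2 ** 30

# With only 2 GiB of user address space (common on 32-bit OSes):
assert (2 * GiB) // POINTER_BYTES == 2 ** 29
```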
Regards,
Martin