The difference is mostly about the size of a pointer/reference. On a 64-bit machine, you can reference an address in a 64-bit address range (giving you 2^64 bytes of addressable memory). On 32-bit you can only address 2^32 bytes (= 4 GB). If you look at current computers, it is obvious why the world is moving to 64-bit: 32-bit can't easily address all the RAM anymore.
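You can see this concretely with a small Python sketch (`ctypes` reports the pointer width of whatever interpreter build you run it on, so the output depends on your machine):

```python
import ctypes

# Width of a native pointer in the running interpreter:
# 4 bytes (32 bits) on a 32-bit build, 8 bytes (64 bits) on a 64-bit one.
bits = ctypes.sizeof(ctypes.c_void_p) * 8
print(f"{bits}-bit pointers can address {2 ** bits} bytes")
```

On a 64-bit build this reports 2^64 bytes; on a 32-bit build, 2^32 bytes (4 GB).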
On other architectures the differences between 64 and 32 bit are less obvious. For example the Nintendo 64 (remember that?) was a 64 bit machine but most of its code was 32 bit. So in that case 64 bit served more as a marketing trick.
The currently accepted answer is generally correct but not specifically so. There really isn't a single thing called a "32-bit CPU" or a "64-bit CPU" - that's a description that refers to only one small part of the architecture of the CPU. In particular, it refers to the number of address selection lines between the CPU and the memory, i.e. the so-called address space available for memory operations.
In the days of yore, when people used to sit down and wire-wrap the connections between a processor and the memory, you would have had to run either 32 or (theoretically, because it didn't exist at the time) 64 wires between the CPU and the memory controller to specify which memory address you wanted to access. For example, let's say we have a 2-bit memory architecture: sending 00 would select address 0, 01 would select address 1, 10 would select address 2, and 11 would select address 3. These 2 bits give us 2^2 bytes of RAM (4 bytes).
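The 2-bit bus above can be sketched as a toy address decoder (illustrative only - real hardware does this with logic gates, not strings):

```python
# Toy 2-bit address bus: each bit pattern on the wires selects one of 2^2 cells.
memory = ["byte0", "byte1", "byte2", "byte3"]

def select(wires: str) -> str:
    # "00" -> address 0, "01" -> 1, "10" -> 2, "11" -> 3
    return memory[int(wires, 2)]

print(select("00"))  # byte0
print(select("11"))  # byte3
```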
If you take a 32-bit CPU and you add on 32 more wires between the CPU and the memory controller so that you're magically able to support more memory, you now have a "64-bit CPU" that can run 32-bit code or 64-bit code. What does this mean and how does this happen? Well, let's take our 2-bit CPU from the earlier part of this answer and add another wire, turning it into a 3-bit CPU, taking us from 4 bytes to 2^3 or 8 bytes of RAM.
Existing "2-bit" code will run, setting the values of the last 2 wires as indicated above (00-11). We'll wire the extra connection to be zero by default, so when the 2-bit code runs and selects 00, it's actually selecting 000, and when it selects 11, it's actually selecting 011. Easy.
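That zero-extension is easy to model: the old 2-bit address keeps its value but now lives in the bottom half of the 3-bit space (again a toy sketch, not real hardware):

```python
OLD_BITS, NEW_BITS = 2, 3

def zero_extend(addr: int) -> int:
    # The new high wire defaults to 0, so a legacy address is unchanged:
    # old 11 becomes new 011, still selecting the same cell.
    assert 0 <= addr < 2 ** OLD_BITS
    return addr

print(format(zero_extend(0b11), "03b"))  # 011
# Every legacy address lands in the lower half of the new 8-byte space.
print(all(zero_extend(a) < 2 ** NEW_BITS // 2 for a in range(2 ** OLD_BITS)))  # True
```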
Now a programmer wants to write "native" 3-bit code and writes her software to take advantage of the extra address space. She tells the CPU that she knows what she's doing and that she'll take manual control of the new, extra wire. Her software knows about the extra wire and correctly sends out 000-111, giving her full access to the range of memory supported by this new CPU architecture.
But that's not how it has to happen. In fact, that's normally not how things happen. When 64-bit CPUs were first introduced (and there were many), they all went with entirely new architectures/designs. They didn't just tack on an additional 32 wires and say "here you go, this is a 64-bit CPU you can use in 32-bit or 64-bit mode," but rather they said "This is our new CPU and it only takes programming in this entirely new machine language, behaves in this entirely new way, solves a bazillion different problems far more elegantly than the old x86/i386 32-bit CPUs ever did, and it's a native 64-bit architecture. Have fun."
That was the story of the Intel Itanium, now famously known as the "Itanic" because of how massively it sank. It was supposed to usher in the new 64-bit era, and it was something to behold: a whole new explicitly-parallel (EPIC) instruction format, huge caches, a 64-bit address space, tons of registers - super exciting, super cool, and super hard to convince everyone to recompile or rewrite 20 years of legacy code for. This was back when AMD and Intel were actually competing, and AMD had the brilliant idea of saying "let's forget all this 'solve all the world's problems' business and just add 32 more wires to the i386 and make a 32-bit-compatible 64-bit CPU" - and the x86_64 architecture was born.
In fact, if you look at the kernel names and sources for major operating systems (Linux, Windows, the BSDs, etc.) you'll find them littered with references to AMD64 CPUs and the AMD64 architecture. AMD came up with a winning strategy to get everyone to switch over to the 64-bit world while preserving compatibility with 32-bit applications, in a way that a 32-bit OS could run on 64-bit hardware, or 32-bit applications could run on a 64-bit OS on 64-bit hardware. Intel followed suit sooner rather than later with their "Intel EM64T" architecture (which was basically identical to AMD64), and x86_64 won out while the Itanic and others like MIPS64 and DEC's Alpha disappeared from the desktop/server market.
tl;dr: amd64 aka x86_64 CPUs are designed to be compatible with both 32- and 64-bit kernels and code, but most other 64-bit CPUs are decidedly not backwards compatible in the same way. A 32-bit CPU can access at most 4 GiB of memory, while a 64-bit CPU can access a stunning 16 EiB (16 × 1024^6 bytes, or 4 billion times as much memory as 4 GiB).
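Those tl;dr numbers check out with plain arithmetic:

```python
GIB = 1024 ** 3  # gibibyte
EIB = 1024 ** 6  # exbibyte

print(2 ** 32 // GIB)      # 4: the 32-bit limit is 4 GiB
print(2 ** 64 // EIB)      # 16: the 64-bit limit is 16 EiB
print(2 ** 64 // 2 ** 32)  # 4294967296, i.e. ~4 billion times as much
```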
Both a 32-bit and a 64-bit OS can run on a 64-bit processor, but only a 64-bit OS can use the full power of the 64-bit processor (larger registers, more instructions) - in short, it can do more work in the same time. A 32-bit processor supports only a 32-bit Windows OS.
Unless you're doing computationally intensive tasks, you won't notice a difference between the 32-bit and 64-bit versions of your OS. I am running Windows 7 Home Premium 64-bit and haven't had a problem yet in terms of getting something to run. Windows does a great job of running 32-bit software.
From a pure performance perspective, the answer depends on the apps you're running. The 64-bit instructions are more efficient, but the memory pointers are larger, meaning less of your working set will fit in cache. On average the two effects cancel each other out, but there are cases where one or the other dominates.
The longer answer (correcting some misconceptions in other answers): Everyone here mentions the advantage of 64-bit systems as being able to use more than 4 GiB of RAM. Since PAE was introduced into most kernels, 32-bit systems can handle more RAM just fine.

I'd also advise anyone stumbling on this question that 32-bit x86 is not maintained as well these days (circa 2015+). There's a lot of software that's written for AMD64 only. I think Ubuntu dropped 32-bit in the last LTS, and Debian is one of the few distributions that still supports it (because Debian supports even dead/dying platforms - which 32-bit x86 is).

Also, consider that nearly every OS is multi-arch (both lib and lib32), so 32-bit software works fine without a significant performance hit. 64-bit software cannot run on a 32-bit system, but 32-bit software can run on a 64-bit system (provided the developers aren't like the PCSX2 team, who expect the package maintainers to create their own 64-bit fork or put up with it being 32-bit-only /rant). Anyway, for better or worse, it's a consideration.
The take-home: the situation is in reverse now. 64-bit x86 is the de facto standard and 32-bit is becoming deprecated. While PAE allows 32-bit machines to use more than 4 GiB of RAM, it's advisable to use an AMD64 (x86_64, 64-bit) OS, because your 32-bit stuff will still run just fine, but so will 64-bit stuff.
Windows programs are limited to 2 GiB max. Using PAE you can address more memory overall, but each program is still limited to 2 GiB. (Read: you can run multiple 2 GiB programs, e.g. three 2 GiB programs, but not a single 5 GiB one.)
For a notebook, which likely has less than 4 GB of memory, 64-bit Windows would be overkill. The smart move is to run 32-bit Windows, which also makes it more likely that all drivers work and that most software runs.
As long as you have driver support, I would suggest 64-bit Windows as well. You can try it and see if your apps perform any differently. Generally it's been my experience that 64-bit Windows multitasks a lot better. I migrated a friend of mine, who is a big gamer, from 32-bit to 64-bit, and he was able to go from having 2 WoW clients open (with framerate issues) to 4 with no issues. Others who have switched at my office have seen no real difference in performance running office apps.
Your friend is a poor techie. Unless you have more than 3GB of RAM, there's no reason to use 64bit, and your processor will handle 32bit just as well - there's no rule saying 32bit processors are better at 32bit tasks.
I have been trying to read up on 32-bit and 64-bit processors. My understanding is that a 32-bit processor (like x86) has registers 32 bits wide. I'm not sure what that means. So it has special "memory spaces" that can store integer values up to 2^32?
All calculations take place in the registers. When you're adding (or subtracting, or whatever) variables in your code, they get loaded from memory into the registers (if they're not already there - but while you can declare any number of variables, the number of registers is limited). So, having larger registers allows you to perform "larger" calculations in the same time. Not that this size difference matters much in practice for regular programs (I, at least, rarely manipulate values larger than 2^32), but that is how it works.
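To make that concrete, here is a sketch that emulates a 32-bit register add by masking (Python's own integers are arbitrary-precision, so the wrap-around has to be simulated):

```python
MASK32 = 2 ** 32 - 1  # a 32-bit register holds values 0 .. 2^32 - 1

def add32(a: int, b: int) -> int:
    # Emulate an add in a 32-bit register: bits above bit 31 are lost.
    return (a + b) & MASK32

print(add32(2 ** 32 - 1, 1))  # 0: the result overflowed the register
# A 64-bit register holds the true sum in a single operation:
print((2 ** 32 - 1 + 1) & (2 ** 64 - 1))  # 4294967296
```

A 32-bit CPU would need multiple instructions (carry handling across two registers) to get the same result a 64-bit register produces in one step.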
Also, certain registers are used as pointers into your memory space and hence limits the maximum amount of memory that you can reference. A 32-bit processor can only reference 2^32 bytes (which is about 4 GB of data). A 64-bit processor can manage a whole lot more obviously.
Since the microprocessor needs to talk to other parts of the computer to get and send data (memory, the data bus, the video controller, etc.), those parts must in theory also support 64-bit data transfers. However, for practical reasons such as compatibility and cost, the other parts might still talk to the microprocessor in 32 bits. This happened in the original IBM PC, whose 8088 microprocessor was capable of 16-bit execution internally while it talked to the other parts of the computer in 8 bits, for reasons of cost and compatibility with existing parts.