32-bit X86 Or X64


Adriana Gowen

Aug 5, 2024, 9:45:47 AM
to penpironew
In computer architecture, 32-bit computing refers to computer systems with a processor, memory, and other major system components that operate on data in 32-bit units.[1][2] Compared to smaller bit widths, 32-bit computers can perform large calculations more efficiently and process more data per clock cycle. Typical 32-bit personal computers also have a 32-bit address bus, permitting up to 4 GB of RAM to be accessed, far more than previous generations of system architecture allowed.[3]
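
As a quick check of that 4 GB figure: 32 address lines give 2^32 distinct byte addresses, which is exactly 4 GiB. A one-line confirmation in Java (used here only because the JVM comes up later in this thread):

    // 2^32 byte addresses; a long is needed so the value doesn't overflow
    long addressable = 1L << 32;                                     // 4,294,967,296 bytes
    System.out.println(addressable / (1024 * 1024 * 1024) + " GiB"); // prints "4 GiB"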

32-bit designs have been used since the earliest days of electronic computing, in experimental systems and then in large mainframe and minicomputer systems. The first hybrid 16/32-bit microprocessor, the Motorola 68000, was introduced in the late 1970s and used in systems such as the original Apple Macintosh. Fully 32-bit microprocessors such as the HP FOCUS, Motorola 68020 and Intel 80386 were launched in the early to mid-1980s and became dominant by the early 1990s. This generation of personal computers coincided with and enabled the first mass adoption of the World Wide Web. While 32-bit architectures are still widely used in specific applications, the PC and server market has moved on to 64 bits with x86-64 since the mid-2000s, with installed memory often exceeding the 4 GB limit of 32-bit addressing even on entry-level computers. The latest generation of mobile phones has also switched to 64 bits.


The world's first stored-program electronic computer, the Manchester Baby, used a 32-bit architecture in 1948, although it was only a proof of concept and had little practical capacity. It held only 32 32-bit words of RAM on a Williams tube, and had no addition operation, only subtraction.


Memory, as well as other digital circuits and wiring, was expensive during the first decades of 32-bit architectures (the 1960s to the 1980s).[4] Older 32-bit processor families (or simpler, cheaper variants thereof) could therefore have many compromises and limitations in order to cut costs. This could be a 16-bit ALU, for instance, or external (or internal) buses narrower than 32 bits, limiting memory size or demanding more cycles for instruction fetch, execution or write back.


However, the opposite is often true for newer 32-bit designs. For example, the Pentium Pro processor is a 32-bit machine, with 32-bit registers and instructions that manipulate 32-bit quantities, but the external address bus is 36 bits wide, giving an address space larger than 4 GB, and the external data bus is 64 bits wide, primarily in order to permit a more efficient prefetch of instructions and data.[7]


On the x86 architecture, a 32-bit application normally means software that usually (though not necessarily) uses the 32-bit linear address space (or flat memory model) possible with the 80386 and later chips. In this context, the term came about because DOS, Microsoft Windows and OS/2[9] were originally written for the 8088/8086 or 80286, 16-bit microprocessors with a segmented address space where programs had to switch between segments to reach more than 64 kilobytes of code or data. As this is quite time-consuming in comparison to other machine operations, performance may suffer. Furthermore, programming with segments tends to become complicated; special far and near keywords or memory models had to be used (with care), not only in assembly language but also in high-level languages such as Pascal, compiled BASIC, Fortran, C, etc.


The 80386 and its successors fully support the 16-bit segments of the 80286, but also support segments with 32-bit address offsets (using the new 32-bit width of the main registers). If the base address of all 32-bit segments is set to 0, and segment registers are not used explicitly, the segmentation can be forgotten and the processor appears to have a simple linear 32-bit address space. Operating systems like Windows or OS/2 can run both 16-bit (segmented) programs and 32-bit programs; the former capability exists for backward compatibility, while the latter is usually intended for new software development.


In digital images, 32-bit usually refers to the RGBA color space; that is, 24-bit truecolor images with an additional 8-bit alpha channel. Other image formats also specify 32 bits per pixel, such as RGBE.
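
In practice those four 8-bit channels are packed into a single 32-bit integer. A minimal sketch in Java; the ARGB byte order shown matches java.awt.image.BufferedImage.TYPE_INT_ARGB, though other APIs order the channels differently:

    // Pack four 8-bit channels (each 0-255) into one 32-bit int, ARGB order
    static int pack(int a, int r, int g, int b) {
        return (a << 24) | (r << 16) | (g << 8) | b;
    }

    // Recover the alpha channel; the mask keeps only the low 8 bits
    static int alpha(int argb) {
        return (argb >>> 24) & 0xFF;
    }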


In digital images, 32-bit sometimes refers to high-dynamic-range imaging (HDR) formats that use 32 bits per channel, a total of 96 bits per pixel. 32-bit-per-channel images are used to represent values brighter than what sRGB color space allows (brighter than white); these values can then be used to more accurately retain bright highlights when either lowering the exposure of the image or when it is seen through a dark filter or dull reflection.
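
A small numeric illustration of why that extra range matters, sketched in Java (the sample values are made up): a 32-bit float channel can hold a highlight several times brighter than white and only clamp it at display time, whereas an 8-bit channel has already thrown that information away.

    float hdrRed   = 6.0f;   // a specular highlight, far brighter than white (1.0)
    float exposure = 0.25f;  // darken the image, e.g. to simulate a dull reflection

    // HDR: the highlight detail survives the exposure change (6.0 * 0.25 = 1.5,
    // still rendered as pure white, as the oil-slick example below describes)
    float adjustedHdr = hdrRed * exposure;

    // Low dynamic range: the source was clamped to 1.0 when stored, so the
    // same operation yields 0.25 -- a dull grey instead of a bright highlight
    float adjustedLdr = Math.min(hdrRed, 1.0f) * exposure;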


For example, a reflection in an oil slick is only a fraction of that seen in a mirror surface. HDR imagery allows for the reflection of highlights that can still be seen as bright white areas, instead of dull grey shapes.


Using the Input Data tool I tried to connect to our 32-bit Oracle OCI database and I got the error message "OCILogon2 Error: ORA-12514: TNS:listener does not currently know of service requested in connect descriptor". What does this error mean? How do I get connected to an Oracle database?
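
ORA-12514 means the listener itself was reached but has no registered service with the name the connect descriptor asked for, so the usual fix is to correct the service name (on the server, lsnrctl status lists what is actually registered) rather than anything bitness-related. For comparison, a hedged sketch of a service-name connection from plain JDBC; the host, port, service name and credentials below are placeholders:

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class OracleConnect {
        public static void main(String[] args) throws Exception {
            // The "/orclpdb" after the port is a SERVICE NAME (note the slash);
            // a colon there would mean a SID. Asking for a service the listener
            // has never registered is exactly what raises ORA-12514.
            String url = "jdbc:oracle:thin:@//dbhost.example.com:1521/orclpdb";
            try (Connection conn = DriverManager.getConnection(url, "user", "password")) {
                System.out.println("Connected to " + conn.getMetaData().getDatabaseProductVersion());
            }
        }
    }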


Yes to the first question and no to the second question; it's a virtual machine. Your problems are probably related to unspecified changes in library implementation between versions. Although it could be, say, a race condition.


There are some hoops the VM has to go through. Notably, references are treated in class files as if they took the same space as ints on the stack, while double and long values take up two of those slots. For instance fields, there's some rearrangement the VM usually goes through anyway. This is all done (relatively) transparently.
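
That slot accounting is visible in the class-file format itself. For a method like the hypothetical one below, javap -v reports the local variables occupying the slot numbers noted in the comments, and the numbers are identical on 32- and 64-bit JVMs:

    static void slots(int i, long l, Object ref) {
        // In this method's local variable table:
        //   i   occupies slot 0    (ints take one slot)
        //   l   occupies slots 1-2 (longs and doubles always take two)
        //   ref occupies slot 3    (one slot, however wide the real pointer is)
        double d = i + l;          // d lands in slots 4-5
    }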


Also, some 64-bit JVMs use "compressed oops". Because objects are aligned on 8- or 16-byte boundaries, the low three or four bits of every address are always zero (although a "mark" bit may be stolen for some algorithms). This allows 32-bit reference data (therefore using half as much bandwidth, and therefore faster) to address heaps of 35 or 36 bits, i.e. 32 or 64 GB, on a 64-bit platform.
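
The decoding step behind that is just a shift: the always-zero alignment bits are dropped when a reference is stored and restored when it is used. A sketch of the idea in Java (this models what the JVM does internally; it isn't something application code can actually perform):

    // With 8-byte alignment, a 32-bit compressed oop shifted left by 3
    // spans 2^(32+3) bytes = 32 GB of heap (16-byte alignment: shift 4, 64 GB)
    static long decode(int compressedOop) {
        // treat the stored 32 bits as unsigned, then restore the alignment bits
        return (compressedOop & 0xFFFFFFFFL) << 3;
    }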


The Java JNI requires OS libraries of the same bitness as the JVM. If you attempt to build something that depends, for example, on IESHIMS.DLL (which lives in %ProgramFiles%\Internet Explorer), you need to take the 32-bit version when your JVM is 32-bit and the 64-bit version when your JVM is 64-bit. Likewise for other platforms.
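
If you need to pick the matching library at run time, the JVM will tell you its own pointer width. A sketch, assuming hypothetical library names; the sun.arch.data.model property is HotSpot-specific, with os.arch as a rougher fallback:

    // "sun.arch.data.model" is "32" or "64" on HotSpot JVMs
    String model = System.getProperty("sun.arch.data.model",
                                      System.getProperty("os.arch"));
    // mylib32/mylib64 are placeholder names for your own native library
    System.loadLibrary(model.contains("64") ? "mylib64" : "mylib32");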


"If you compile your code on an 32 Bit Machine, your code should only run on an 32 Bit Processor. If you want to run your code on an 64 Bit JVM you have to compile your class Files on an 64 Bit Machine using an 64-Bit JDK."


You know, dinosaurs are still alive... And since I forced my computer to install a 32-bit Tableau I got access to SAP, but since I have a 64-bit Alteryx there is no ongoing communication between SAP and Alteryx :(


I'm currently working with VIs developed in 32-bit LabVIEW. Roughly 60% of the machines where I work are 32-bit, but many are being progressively changed to 64-bit. Because I'm on a 64-bit kernel, I was instructed to install 64-bit LabVIEW.


I guess this can be answered by asking: is there a difference in 'bitness' before compiling? I tried opening a VI developed in 32-bit in my 64-bit LabVIEW, and a VI meant to set the display was not compatible with my 64-bit version of LabVIEW. If I CAN develop VIs in 64-bit but compile them on 32-bit, perhaps we could use a Conditional Disable structure to load bit-dependent VIs depending on the current kernel the .exe is running on?


However, if you are just making 32-bit executables in the end, just install 32-bit LabVIEW. I actually do not even bother with 64-bit LabVIEW, even though all of my machines are 64-bit, just because there are a lot of toolkits that have not been ported to 64-bit yet. 64-bit machines run the 32-bit executables with no issues.


The VIs/source code are compatible, so yes, you can do that. A caveat regarding toolkits and DLLs, though, as they'd also need to match the bitness. If the program is plain/basic LV, it should work fine.


Who gave these "instructions"? Nobody here would instruct you to install 64-bit LabVIEW just because you're on a 64-bit OS, so this might be some random blabbering from some uninformed IT guy. Go back and ask for the reason behind the "recommendation".


Unless you need the memory space of 64-bit LabVIEW for massive data volumes (and also have sufficient RAM to support it!), it is highly recommended to use 32-bit LabVIEW on 64-bit Windows. There are still drivers and toolkits that are not supported under 64-bit, so you save yourself a lot of potential headaches by sticking with 32-bit LabVIEW. 32-bit LabVIEW actually runs better on 64-bit Windows, because there it can use a full 4 GB of memory.




It was a misunderstanding of the differences between the 32-bit and 64-bit versions of LabVIEW, as well as general curiosity about transitioning to 64-bit development. I promise you, it was not a random IT guy.


This topic comes up very often with beginners. I think NI should create a kind of "warning" pop-up information window, so whenever someone tries to download or install LabVIEW x64, a clear notice appears stating that unless you really know why you need x64 LV (large RAM usage, etc.), you should just install the 32-bit one...
