In computer architecture, 8-bit integers or other data units are those that are 8 bits wide (1 octet). Also, 8-bit central processing unit (CPU) and arithmetic logic unit (ALU) architectures are those that are based on registers or data buses of that size. Memory addresses (and thus address buses) for 8-bit CPUs are generally wider than 8 bits, usually 16 bits. 8-bit microcomputers are microcomputers that use 8-bit microprocessors.
The IBM System/360 introduced byte-addressable memory with 8-bit bytes, as opposed to bit-addressable, decimal-digit-addressable, or word-addressable memory, although its general-purpose registers were 32 bits wide, and addresses were contained in the lower 24 bits of those registers. Different models of System/360 had different internal data path widths; the IBM System/360 Model 30 (1965) implemented the 32-bit System/360 architecture, but had an 8-bit native path width, and performed 32-bit arithmetic 8 bits at a time.[1]
The first widely adopted 8-bit microprocessor was the Intel 8080, used in many hobbyist computers of the late 1970s and early 1980s, often running the CP/M operating system; it had 8-bit data words and 16-bit addresses. The Zilog Z80 (compatible with the 8080) and the Motorola 6800 were also used in similar computers. The Z80 and the MOS Technology 6502 8-bit CPUs were widely used in home computers and second- and third-generation game consoles of the 1970s and 1980s. Many 8-bit CPUs and microcontrollers are the basis of today's ubiquitous embedded systems.
8-bit microprocessors were the first widely used microprocessors in the computing industry, marking a major shift from mainframes and minicomputers to smaller, more affordable systems. The introduction of 8-bit processors in the 1970s enabled the production of personal computers, leading to the popularization of computing and setting the foundation for the modern computing landscape.
8-bit CPUs use an 8-bit data bus and can therefore access 8 bits of data in a single machine instruction. The address bus is typically a double octet (16 bits) wide, for practical and economic reasons. This implies a direct address space of 64 KB (65,536 bytes) on most 8-bit processors.
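As a rough illustration (not modeled on any specific CPU), the following C fragment shows how two octets combine into one 16-bit address, which is why a 16-bit address bus spans exactly 2^16 = 65,536 bytes:

#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* An 8-bit CPU fetches a 16-bit address as two separate bytes. */
    uint8_t low  = 0x34;
    uint8_t high = 0x12;

    /* Combine them low byte first (little-endian), as the 8080,
       Z80, and 6502 all do. */
    uint16_t address = (uint16_t)((high << 8) | low);

    printf("address  = 0x%04X\n", address);      /* prints 0x1234 */
    printf("capacity = %lu bytes\n", 1UL << 16); /* 65536 */
    return 0;
}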
Most home computers from the 8-bit era fully exploited the address space, such as the BBC Micro (Model B) with 32 KB of RAM plus 32 KB of ROM. Others, like the very popular Commodore 64, had a full 64 KB of RAM plus 20 KB of ROM, so with 16-bit addressing not all of the RAM was usable by default (for example, from the BASIC interpreter included in ROM)[3] unless bank switching was used; bank switching allows breaking the 64 KB limit in some systems. Other computers had as little as 1 KB of RAM (plus 4 KB of ROM), such as the Sinclair ZX80 (the later, very popular ZX Spectrum had more memory), or even only 128 bytes of RAM (plus storage from a ROM cartridge), as in the early Atari 2600 game console; there, 8-bit addressing would have sufficed for the RAM had it not also needed to cover the ROM. The Commodore 128 and other 8-bit systems, still with 16-bit addressing, could use more than 64 KB, i.e. 128 KB of RAM, and the BBC Master was expandable to 512 KB of RAM.
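A minimal sketch of bank switching in C may make the idea concrete. The window size, bank count, and bank_select interface below are invented for illustration; real machines such as the C64, C128, and BBC Master each used their own scheme:

#include <stdint.h>

#define BANK_SIZE 0x4000u  /* hypothetical 16 KB window */
#define NUM_BANKS 8        /* 8 x 16 KB = 128 KB of physical RAM */

static uint8_t banks[NUM_BANKS][BANK_SIZE];
static unsigned current_bank = 0;

/* Writing the (imaginary) bank register swaps which physical RAM
   the window refers to; the CPU's 16-bit addresses never change. */
void bank_select(unsigned n) { current_bank = n % NUM_BANKS; }

/* A read through the window lands in the currently mapped bank,
   so a CPU limited to 64 KB of addresses can reach 128 KB. */
uint8_t read_window(uint16_t offset) {
    return banks[current_bank][offset % BANK_SIZE];
}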
While 8-bit CPUs generally have 16-bit addressing, some architectures use both widths, such as the MOS Technology 6502, where the zero page is used extensively, saving one byte in each instruction that accesses that page, while the regular 16-bit addressing instructions take two bytes for the address plus one for the opcode.
Some index registers are 8 bits wide, such as the two in the 6502. This limits arrays addressed using indexed addressing instructions to objects of at most 256 bytes without requiring more complicated code. Other 8-bit CPUs, such as the Motorola 6800 and Intel 8080, have 16-bit index registers.
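A short C sketch, modeling these addressing modes as a toy 6502-style emulator might, can make the trade-offs concrete. The function names are invented for illustration, not a real 6502 API:

#include <stdint.h>

static uint8_t mem[65536];

/* Zero-page addressing: one operand byte, high byte implicitly
   0x00, so the whole instruction is one byte shorter. */
uint8_t load_zero_page(uint8_t operand) {
    return mem[operand];               /* addresses 0x0000-0x00FF */
}

/* Absolute addressing: two operand bytes form a 16-bit address
   (a three-byte instruction including the opcode). */
uint8_t load_absolute(uint8_t lo, uint8_t hi) {
    return mem[(uint16_t)((hi << 8) | lo)];
}

/* Zero-page indexed: an 8-bit index register wraps at 256, which
   is why simple indexed loops are limited to 256-byte objects. */
uint8_t load_zero_page_x(uint8_t operand, uint8_t x) {
    return mem[(uint8_t)(operand + x)];  /* wraps mod 256 */
}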
The first commercial 8-bit processor was the Intel 8008 (1972), originally intended for the Datapoint 2200 intelligent terminal. Most competitors to Intel started off with such character-oriented 8-bit microprocessors. Modernized variants of these 8-bit machines are still among the most common types of processor in embedded systems.
The MOS Technology 6502, and variants of it, were used in personal computers, such as the Apple I, Apple II, Atari 8-bit computers, BBC Micro, PET, VIC-20, and in home video game consoles such as the Atari 2600 and the Nintendo Entertainment System.
8-bit processors continue to be designed today for general education about computer hardware, as well as for hobbyists' interests. One such CPU was designed and implemented using 7400-series integrated circuits on a breadboard.[5][6] Designing 8-bit CPUs and their accompanying assemblers is a common training exercise for engineering students, engineers, and hobbyists; FPGAs are often used for this purpose.
Hello. I am very new to ImageJ/Fiji and need to analyze some stainings using the threshold method. However, when I try to convert my green slide from my RGB stack to an 8-bit version, the software throws this exception and prevents me from continuing the analysis. Any idea what I should do to prevent this? Thank you very much for any help you could provide me!
Hello Jorge. Thank you very much for helping me. I am trying to upload the image to this post, however it seems that the format is not accepted (bioformat/.svs). Is there any other way I can share this type of file with you? Sorry for all the inconvenience caused. Best, Sarah.
But correctly performed color deconvolution can separate out the contribution of multiple stain components, like orange vs red:
[image: color deconvolution result, 176 KB]
It is far easier to threshold your stain of interest with the background orange staining separated out.
Thank you very much for this precious help. I tried this morning, and the error message no longer pops up. However, the final image does not appear with the same background colour (it used to appear dark grey, but now it is white), meaning I will have to re-analyse my previous files (which is absolutely fine!).
Thank you very much for your help. I downloaded QuPath this morning and had a try. I can see it helping me for future analyses, but I feel like I need to learn how to use it well first for sure. Thanks for the deconvolution tip! I will definitely look into it.
et cetera. What real-life systems are there today where this holds true? (I'm not sure if there are differences between C and C++ regarding this, or if it's actually language-agnostic. Please retag if necessary.)
There are, however, current machines (mostly DSPs) where the smallest type is larger than 8 bits -- a minimum of 12, 14, or even 16 bits is fairly common. Windows CE does roughly the same: its smallest type (at least with Microsoft's compiler) is 16 bits. They do not, however, treat a char as 16 bits -- instead they take the (non-conforming) approach of simply not supporting a type named char at all.
The DEC-10 byte pointer hardware worked with arbitrary-size bytes. The byte pointer included the byte size in bits. I don't remember whether bytes could span word boundaries; I think they couldn't, which meant that you'd have a few wasted bits per word if the byte size did not evenly divide 36. (The DEC-10 used a 36-bit word.)
Unless you're writing code that could be useful on a DSP, you're completely entitled to assume bytes are 8 bits. All the world may not be a VAX (or an Intel), but all the world has to communicate, share data, establish common protocols, and so on. We live in the internet age built on protocols built on octets, and any C implementation where bytes are not octets is going to have a really hard time using those protocols.
It's also worth noting that both POSIX and Windows have (and mandate) 8-bit bytes. That covers 100% of interesting non-embedded machines, and these days a large portion of non-DSP embedded systems as well.
The size of a byte was at first selected to be a multiple of existing teletypewriter codes, particularly the 6-bit codes used by the U.S. Army (Fieldata) and Navy. In 1963, to end the use of incompatible teleprinter codes by different branches of the U.S. government, ASCII, a 7-bit code, was adopted as a Federal Information Processing Standard, making 6-bit bytes commercially obsolete. In the early 1960s, AT&T introduced digital telephony first on long-distance trunk lines. These used the 8-bit μ-law encoding. This large investment promised to reduce transmission costs for 8-bit data. The use of 8-bit codes for digital telephony also caused 8-bit data "octets" to be adopted as the basic data unit of the early Internet.
As an average programmer on mainstream platforms, you do not need to worry too much about one byte not being 8 bits. However, I'd still use the CHAR_BIT constant in my code and assert (or better, static_assert) any locations where you rely on 8-bit bytes. That should put you on the safe side.
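Something like this, for instance (C11 shown; C++ has static_assert built in):

#include <limits.h>

/* Refuse to compile on any implementation where a byte is not an
   octet (e.g., some DSP toolchains with a 16-bit char). */
_Static_assert(CHAR_BIT == 8, "this code assumes 8-bit bytes");

/* Pre-C11 alternative: a preprocessor check does the same job. */
#if CHAR_BIT != 8
#error "this code assumes 8-bit bytes"
#endif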
Firstly, the number of bits in char does not formally depend on the "system" or on "machine", even though this dependency is usually implied by common sense. The number of bits in char depends only on the implementation (i.e. on the compiler). There's no problem implementing a compiler that will have more than 8 bits in char for any "ordinary" system or machine.
In the past I've come across some Analog Devices DSPs for which char is 16 bits. DSPs are a bit of a niche architecture I suppose. (Then again, at the time hand-coded assembler easily beat what the available C compilers could do, so I didn't really get much experience with C on that platform.)
char is also 16 bit on the Texas Instruments C54x DSPs, which turned up for example in OMAP2. There are other DSPs out there with 16 and 32 bit char. I think I even heard about a 24-bit DSP, but I can't remember what, so maybe I imagined it.
Another consideration is that POSIX mandates CHAR_BIT == 8. So if you're using POSIX you can assume it. If someone later needs to port your code to a near-implementation of POSIX, that just so happens to have the functions you use but a different size char, that's their bad luck.