8-bit and 16-bit, for video games, specifically refer to the processors used in the console. The number refers to the size of the words of data each processor handles. The 8-bit generation of consoles (starting with Nintendo's Famicom, also sold as the Nintendo Entertainment System) used 8-bit processors; the 16-bit generation (starting with NEC/Hudson's PC Engine, also sold as the TurboGrafx-16) used a 16-bit graphics processor. This affects the quality and variety of the graphics and the music by limiting how much data can be processed at once; Oak's answer details the specifics of graphics.
A bit, or binary digit, is the basic unit of information in computing and telecommunications; it is the amount of information that can be stored by a digital device or other physical system that can exist in only two distinct states.
Now, note that in modern times, terms like "8-bit music" and "16-bit graphics" don't necessarily have anything to do with processors or word size, as most hardware no longer runs that small. They may instead refer to the style of music or graphics used in games during those generations, done as a nostalgic homage. 8-bit music is the standard chiptune fare, and the graphics were simplistic in terms of colour. 16-bit music is higher quality but often still has a distinct electronic feel, while the graphics became much more complex yet remained largely 2-dimensional and around 240p resolution.
8-bit, 16-bit, 32-bit and 64-bit all refer to a processor's word size. A "word" in processor parlance means the native size of information it can place into a register and process without special instructions. It also typically corresponds to the size of the memory address space. The word size of any chip is the most defining aspect of its design. There are several reasons why it is so important:
The difference in word size has a dramatic impact on the capabilities and performance of a given chip. Once you get up to 32 bits, the differences mainly become those of refinement (unless you are running a really big application, like genetic analysis or cataloguing every star in the galaxy).
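The effect of word size on arithmetic can be sketched in a few lines. This is an illustrative model, not real hardware: masking with `2**bits - 1` mimics the way a fixed-width register silently wraps around when a result no longer fits.

```python
def add_with_word_size(a, b, bits):
    """Add two unsigned integers as a fixed-width register would."""
    mask = (1 << bits) - 1      # e.g. 0xFF for an 8-bit register
    return (a + b) & mask       # result wraps if it exceeds the word

print(add_with_word_size(200, 100, 8))    # 8-bit register: wraps to 44
print(add_with_word_size(200, 100, 16))   # 16-bit register: 300 fits
```

The same sum that fits comfortably in a 16-bit word overflows an 8-bit one, which is why bigger words let a chip handle bigger numbers natively.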
The term "8-bit graphics" literally means that every pixel uses 8 bits to store its color value, so only 256 options are possible. Modern systems use 8 bits per color channel, so each pixel typically uses 24 bits.
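The arithmetic behind those two figures, plus a sketch of how a modern 24-bit pixel is packed from its channels (the `pack_rgb` helper is made up for illustration):

```python
palette_colors = 2 ** 8        # 256 colors reachable with an 8-bit pixel
truecolor = (2 ** 8) ** 3      # 16,777,216 colors with 8 bits per channel

def pack_rgb(r, g, b):
    """Pack three 8-bit channels into one 24-bit pixel value."""
    return (r << 16) | (g << 8) | b

print(palette_colors)               # 256
print(truecolor)                    # 16777216
print(hex(pack_rgb(255, 128, 0)))   # 0xff8000 (an orange)
```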
The same applies to music: 8-bit means a maximum of 256 levels of sound output (per sample; the temporal resolution is another issue), which is too coarse to produce sounds that don't come across as chiptune (or noisy, if PCM sound is still attempted) to the human ear. 16 bits per sample is what the CD standard uses, by the way. But "16-bit music" more often refers to tracker music, whose limits are similar to those of popular game consoles with a 16-bit processor.
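A rough sketch of amplitude quantization, assuming a signal normalized to [-1.0, 1.0]: 8 bits give 256 output levels, 16 bits (CD quality) give 65,536, so the rounding error per sample shrinks by a factor of 256.

```python
def quantize(sample, bits):
    """Snap a float in [-1, 1] to the nearest representable level."""
    levels = 2 ** bits
    step = 2.0 / (levels - 1)          # distance between adjacent levels
    return round(sample / step) * step

x = 0.3337
print(abs(x - quantize(x, 8)))    # coarse: audible quantization noise
print(abs(x - quantize(x, 16)))   # roughly 256 times finer
```

The error at 8 bits is bounded by half a step (about 0.004 here), large enough to be heard as the characteristic grit of old sound hardware.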
Another interesting point is that an 8-bit input device is limited1 to 8 boolean button states, split up into the four directions of the D-pad plus four buttons. Or a 2-button joystick with 3 bits (a mere 8 levels, sign included!) remaining for each of the x- and y-axes.
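The first layout can be sketched as eight booleans packed into a single byte; the bit assignments below are made up for the example, though they happen to match how several real 8-bit controller ports reported their state.

```python
# One bit per button: four D-pad directions plus four buttons fill a byte.
UP, DOWN, LEFT, RIGHT = 0x01, 0x02, 0x04, 0x08
A, B, SELECT, START = 0x10, 0x20, 0x40, 0x80

state = UP | A                # holding Up and A at the same time

print(bin(state))             # 0b10001
print(bool(state & UP))       # True  - Up is pressed
print(bool(state & DOWN))     # False - Down is not
```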
So, for genuinely old games, 8-bit / 16-bit might be considered as referring to the system's capabilities (but consider Grace's point about the inconsistency of the label "8-bit"). For a retro game, consider whether it would be theoretically possible to obey the mentioned constraints (neglecting shader effects like bloom), although you might have to allow some "cheating" - I'd still consider a sprite-based game built from 8x16 square sprites to be 8-bit even if the sprites could float at any position in HD resolution and the squares were 16x16 pixels each...
1) Well, obviously you could use two 8-bit values to circumvent that limit, but as BlueRaja points out in a comment on Grace's answer, if the accumulator register is considered to be only 8 bits as well, that would cause a performance loss. Also, it would be cheating your way to 16-bit, IMHO.
Way back in the day, the bit size of a CPU referred to how wide the processor's registers were. A CPU typically has several registers in which you can move data around and do operations on it, for example adding two numbers together and storing the result in another register. In the 8-bit era the registers were 8 bits wide, and if you had a big number like 4,000 it wouldn't fit in a single register, so you would have to do two operations to simulate a 16-bit operation. For example, if you had 10,000 gold coins, you would need two add instructions to sum them: one to handle the lower 8 bits and another to add the upper 8 bits (with carrying taken into account). A 16-bit system could have done it in one operation. You may remember that in The Legend of Zelda you maxed out at 255 rupees, since that's the largest unsigned 8-bit number possible.
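The double add described above can be sketched in Python, treating each `& 0xFF` masked value as one 8-bit register and threading the carry by hand, the way the two real instructions would:

```python
def add16_on_8bit(a, b):
    """Add two 16-bit numbers using only 8-bit operations plus a carry."""
    lo = (a & 0xFF) + (b & 0xFF)                 # first add: low bytes
    carry = lo >> 8                              # did the low byte overflow?
    hi = ((a >> 8) + (b >> 8) + carry) & 0xFF    # second add: high bytes + carry
    return (hi << 8) | (lo & 0xFF)

print(add16_on_8bit(10_000, 10_000))  # 20000, done as two 8-bit adds
print(add16_on_8bit(250, 10))         # 260: the carry crosses the byte boundary
```

A 16-bit CPU collapses both instructions (and the manual carry bookkeeping) into a single add.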
Nowadays registers in a CPU come in all different sizes, so this isn't a good measure anymore. For example, the AVX registers in today's amd64 processors are 256 bits wide (for real), yet the processors are still considered 64-bit. These days most people think of the addressing size the CPU is capable of supporting. The bit size of a machine really seems to follow the hardware trends of the time. But for me, I still go by the size of a native integer register, which seems correct even today and still matches the addressing size of the CPU as well. That makes sense, since the native integer size of a register is typically the same size as a memory pointer.
Despite all the interesting technical discussions provided by other contributors, the 8-bit and 16-bit descriptors for gaming consoles don't mean anything consistently. Effectively, 16-bit is only meaningful as a marketing term.
Most 8-bit consoles had a 16-bit physical addressing space (256 bytes wouldn't get you very far). They used segmenting schemes, but so did the TurboGrafx-16. The Genesis had a CPU with 32-bit address registers.
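The address-space arithmetic behind that parenthetical is just powers of two: n address bits can distinguish 2**n locations.

```python
# How many bytes can n address bits reach?
for bits in (8, 16, 24, 32):
    print(f"{bits} address bits -> {2 ** bits:,} bytes")

# 8 bits reach only 256 bytes, which is why even "8-bit" consoles
# used 16-bit addressing to get a 64 KiB space.
```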
Total possible color palette is owned by the graphics circuitry and however the palette table is expressed is for its needs. You wouldn't expect this to have much correlation across systems, and it doesn't.
This doesn't fully describe the graphics capabilities of the systems even in terms of colors, since they had other features like special layer modes, or the specifics of their sprite implementations, among other details. However, it does accurately portray the bit depth of the major features.
So you can see there are many features of systems which can be measured in bit size and which have no requirement to agree, and there is no particular feature around which the consoles grouped this way cluster at 16 bits. Moreover, there is no reason to expect that consumers would care at all about word size or data paths. You can see that systems with "small" values here were nevertheless very capable gaming platforms for the time.
Essentially "16-bit" is just a generation of consoles which received certain marketing in a certain time period. You can find a lot more commonality between them in terms of overall graphics capability than you can in terms of any specific bitness, and that makes sense because graphics innovation (at a low cost) was the main goal of these designs.
"8-bit" was a retroactive identification for the previous consoles. In the US this was the dominant Nintendo Entertainment System and the less prominent Sega Master System. Does it apply to an Atari 7800? A 5200? An Intellivision? An Atari 2600, or ColecoVision, or an Odyssey 2? Again, there is no clear bitness boundary among these consoles. By convention, it probably only includes the consoles introduced from around 1984 to 1988 or so, but this is essentially a term we apply now that was not used then, and it refers to no particular set of consoles except by convention.
When talking about retro gaming, "8-bit", "16-bit", and "64-bit" are often taken to mean simply the amount of pixels used to create the images: for example, the NES and Sega Master System are very blocky with large pixels ("8-bit"), the SNES and Sega Genesis improve on this ("16-bit"), the N64 takes the concept further ("64-bit"), and so on to 128, 256, and eventually 1080 HD - even though this usage is and was slightly out of context.
Nintendo Power in the early 90s actually created these "terms" when it ran articles about how Nintendo's 8-bit power was so much better than Sega's. To each their own, but they did this because 99% of people would have no clue what they were actually talking about.
In computing, data is essentially a set of charges in complex electrical circuits, identified as bits: 0 or 1, for the absence or presence of a charge. Bytes, in turn, are groupings of 8 bits needed to encode a character in a digital memory register, since that was the limit of bits the earliest CPUs could process per cycle.
Even though bits and bytes are two basic units in computing with a direct correspondence, they are used in different contexts in information technology. Because their names are so similar, confusing them is common. To clear this up, Canaltech explains why it is important to know how to tell these units apart and to identify when each one is used.
To better understand the subject, Professor Max Miller Silveira, of the Instituto Federal de Educação, Ciência e Tecnologia do Rio Grande do Norte, walked Canaltech through everything from basic concepts to practical applications of bits and bytes in our lives.
Computers are complex machines that, since their invention, have been based on electronic circuits through which electric currents flow. The most basic machine programming consists of metallic tracks carrying an electric current, combined with transistors and other electronic components to guide that current, and capacitors to store those charges.
In practice, a bit is precisely the mathematical representation of a capacitor within that circuit, indicating whether it is charged (bit 1) or not (bit 0). This is why the basic machine programming language is binary.