64 Bits To Bytes


Nichele Seibel

Aug 5, 2024, 3:07:41 AM
to noodnesacon
Note: The following information is provided in part by the Extreme Science and Engineering Discovery Environment (XSEDE), a National Science Foundation (NSF) project that provides researchers with advanced digital resources and services that facilitate scientific discovery. For more, see the XSEDE website.

Because bits are so small, you rarely work with information one bit at a time. Bits are usually assembled into a group of eight to form a byte. A byte contains enough information to store a single ASCII character, like "h".
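To make that concrete, here is a minimal Python sketch (not part of the original note) showing the character "h" stored as a single eight-bit byte:

    # ASCII code point of "h" and its 8-bit binary form
    code = ord("h")
    print(code, format(code, "08b"))   # 104 01101000
    print(len("h".encode("ascii")))    # 1 -> exactly one byte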


Many hard drive manufacturers use a decimal number system to define amounts of storage space. As a result, 1 MB is defined as one million bytes, 1 GB is defined as one billion bytes, and so on. Since your computer uses a binary system as mentioned above, you may notice a discrepancy between your hard drive's published capacity and the capacity acknowledged by your computer. For example, a hard drive that is said to contain 10 GB of storage space using a decimal system is actually capable of storing 10,000,000,000 bytes. However, in a binary system, 10 GB is 10,737,418,240 bytes. As a result, instead of acknowledging 10 GB, your computer will acknowledge 9.31 GB. This is not a malfunction but a matter of different definitions.
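The arithmetic behind the 10 GB example is easy to check yourself. A small Python sketch, assuming the operating system counts 1 GB as 1024^3 bytes:

    decimal_bytes = 10 * 1000**3          # 10 GB as the drive manufacturer counts it
    binary_gb = decimal_bytes / 1024**3   # the same bytes counted in powers of two
    print(decimal_bytes)                  # 10000000000
    print(round(binary_gb, 2))            # 9.31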


Note: The names and abbreviations for numbers of bytes are easily confused with the notations for bits. The abbreviations for numbers of bits use a lower-case "b" instead of an upper-case "B". Since one byte is made up of eight bits, this difference can be significant. For example, if a broadband Internet connection is advertised with a download speed of 3.0 Mbps, its speed is 3.0 megabits per second, or 0.375 megabytes per second (which would be abbreviated as 0.375 MBps). Bits and bit rates (bits over time, as in bits per second [bps]) are most commonly used to describe connection speeds, so pay particular attention when comparing Internet connection providers and services.
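The megabit-to-megabyte conversion in that example is simply a division by eight. A quick Python sketch (the function name is illustrative only, not from any library):

    def mbps_to_mbytes_per_s(mbps):
        # 8 bits per byte, so megabits/s divided by 8 gives megabytes/s
        return mbps / 8

    print(mbps_to_mbytes_per_s(3.0))   # 0.375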


This document was developed with support from National Science Foundation (NSF) grants 1053575 and 1548562. Any opinions, findings, conclusions, or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the NSF.


A bit (b) is the unit of measurement used in the binary system to store or transmit data, for example when specifying an internet connection speed or the quality of an audio or video recording. A bit is usually represented as a 0 or a 1, and 8 bits make 1 byte. A bit can also be represented by other paired values such as yes/no, true/false, or plus/minus. The bit is one of the fundamental units used in computer technology, information technology, and digital communication, as well as for storing, processing, and transmitting various types of data.


The byte is the basic unit of digital information transmission and storage, used extensively in information technology, digital technology, and other related fields. It is one of the smallest units of memory in computer technology, as well as one of the most basic data measurement units in programming. Early computers were built with processors that supported one-byte commands, because a single byte can encode 256 distinct commands. 1 byte consists of 8 bits, which are handled together as one unit in the storage, processing, or transmission of digital information.
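The figure of 256 possible commands follows directly from the number of distinct patterns eight bits can hold, which a quick Python check confirms:

    # Eight bits give 2**8 distinct patterns, i.e. the values 0 through 255
    print(2 ** 8)                            # 256
    print(min(range(256)), max(range(256)))  # 0 255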


The size of the byte has historically been hardware-dependent and no definitive standards existed that mandated the size. Sizes from 1 to 48 bits have been used.[4][5][6][7] The six-bit character code was an often-used implementation in early encoding systems, and computers using six-bit and nine-bit bytes were common in the 1960s. These systems often had memory words of 12, 18, 24, 30, 36, 48, or 60 bits, corresponding to 2, 3, 4, 5, 6, 8, or 10 six-bit bytes. In this era, bit groupings in the instruction stream were often referred to as syllables[a] or slab, before the term byte became common.


The modern de facto standard of eight bits, as documented in ISO/IEC 2382-1:1993, is a convenient power of two permitting the binary-encoded values 0 through 255 for one byte, as 2 to the power of 8 is 256.[8] The international standard IEC 80000-13 codified this common meaning. Many types of applications use information representable in eight or fewer bits and processor designers commonly optimize for this usage. The popularity of major commercial computing architectures has aided in the ubiquitous acceptance of the 8-bit byte.[9] Modern architectures typically use 32- or 64-bit words, built of four or eight bytes, respectively.
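The word sizes mentioned above can be made concrete with Python's struct module; this is just a sketch, and the little-endian format codes are one possible choice:

    import struct

    word32 = struct.pack("<I", 256)   # one 32-bit unsigned word
    word64 = struct.pack("<Q", 256)   # one 64-bit unsigned word
    print(len(word32), len(word64))   # 4 8 -> four and eight bytes respectively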


The unit symbol for the byte was designated as the upper-case letter B by the International Electrotechnical Commission (IEC) and Institute of Electrical and Electronics Engineers (IEEE).[10] Internationally, the unit octet, symbol o, explicitly defines a sequence of eight bits, eliminating the potential ambiguity of the term "byte".[11][12]


The term byte was coined by Werner Buchholz in June 1956,[4][13][14][b] during the early design phase for the IBM Stretch[15][16][1][13][14][17][18] computer, which had addressing to the bit and variable field length (VFL) instructions with a byte size encoded in the instruction.[13] It is a deliberate respelling of bite to avoid accidental mutation to bit.[1][13][19][c]


Another origin of byte for bit groups smaller than a computer's word size, and in particular groups of four bits, was recorded by Louis G. Dooley, who claimed he coined the term while working with Jules Schwartz and Dick Beeler on an air defense system called SAGE at MIT Lincoln Laboratory in 1956 or 1957, which was jointly developed by Rand, MIT, and IBM.[20][21] Later on, Schwartz's language JOVIAL actually used the term, but the author recalled only vaguely that it was derived from AN/FSQ-31.[22][21]


Early computers used a variety of four-bit binary-coded decimal (BCD) representations and the six-bit codes for printable graphic patterns common in the U.S. Army (FIELDATA) and Navy. These representations included alphanumeric characters and special graphical symbols. These sets were expanded in 1963 to seven bits of coding, called the American Standard Code for Information Interchange (ASCII), as the Federal Information Processing Standard, which replaced the incompatible teleprinter codes in use by different branches of the U.S. government and universities during the 1960s. ASCII included the distinction of upper- and lowercase alphabets and a set of control characters to facilitate the transmission of written language as well as printing device functions, such as page advance and line feed, and the physical or logical control of data flow over the transmission media.[18] During the early 1960s, while also active in ASCII standardization, IBM simultaneously introduced in its System/360 product line the eight-bit Extended Binary Coded Decimal Interchange Code (EBCDIC), an expansion of the six-bit binary-coded decimal (BCDIC) representations[d] used in earlier card punches.[23] The prominence of the System/360 led to the ubiquitous adoption of the eight-bit storage size,[18][16][13] even though the EBCDIC and ASCII encoding schemes themselves differ.


In the early 1960s, AT&T introduced digital telephony on long-distance trunk lines. These used the eight-bit μ-law encoding. This large investment promised to reduce transmission costs for eight-bit data.


In Volume 1 of The Art of Computer Programming (first published in 1968), Donald Knuth uses byte in his hypothetical MIX computer to denote a unit which "contains an unspecified amount of information ... capable of holding at least 64 distinct values ... at most 100 distinct values. On a binary computer a byte must therefore be composed of six bits".[24] He notes that "Since 1975 or so, the word byte has come to mean a sequence of precisely eight binary digits...When we speak of bytes in connection with MIX we shall confine ourselves to the former sense of the word, harking back to the days when bytes were not yet standardized."[24]


The development of eight-bit microprocessors in the 1970s popularized this storage size. Microprocessors such as the Intel 8080, the direct predecessor of the 8086, could also perform a small number of operations on the four-bit pairs in a byte, such as the decimal-add-adjust (DAA) instruction. A four-bit quantity is often called a nibble, also nybble, which is conveniently represented by a single hexadecimal digit.
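Because a nibble holds exactly one hexadecimal digit, splitting a byte into its two nibbles lines up with its two-digit hex form; a short Python sketch:

    b = 0xA7                       # one byte written as two hex digits
    high, low = b >> 4, b & 0x0F   # the high and low four-bit nibbles
    print(hex(high), hex(low))     # 0xa 0x7
    print(format(b, "02X"))        # A7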


Historically, the term octad or octade was used to denote eight bits as well at least in Western Europe;[25][26] however, this usage is no longer common. The exact origin of the term is unclear, but it can be found in British, Dutch, and German sources of the 1960s and 1970s, and throughout the documentation of Philips mainframe computers.


In the International System of Quantities (ISQ), B is also the symbol of the bel, a unit of logarithmic power ratio named after Alexander Graham Bell, creating a conflict with the IEC specification. However, little danger of confusion exists, because the bel is a rarely used unit. It is used primarily in its decadic fraction, the decibel (dB), for signal strength and sound pressure level measurements, while a unit for one-tenth of a byte, the decibyte, and other fractions, are only used in derived units, such as transmission rates.


The lowercase letter o is defined as the symbol for the octet in IEC 80000-13 and is commonly used in languages such as French[27] and Romanian; it is also combined with metric prefixes for multiples, for example ko and Mo.


While the difference between the decimal and binary interpretations is relatively small for the kilobyte (about 2% smaller than the kibibyte), the systems deviate increasingly as units grow larger (the relative deviation grows by about 2.4% for each three orders of magnitude). For example, a power-of-10-based terabyte is about 9% smaller than a power-of-2-based tebibyte.
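That growing deviation is easy to tabulate; this Python sketch compares the decimal (SI) and binary (IEC) interpretations for the first few prefixes:

    for power, name in [(1, "kilo/kibi"), (2, "mega/mebi"),
                        (3, "giga/gibi"), (4, "tera/tebi")]:
        decimal = 1000 ** power
        binary = 1024 ** power
        print(name, f"{1 - decimal / binary:.1%} smaller")
    # prints roughly: kilo/kibi 2.3% ... tera/tebi 9.1%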
