Byte

A byte is a unit of digital information in computing and digital communications that most commonly consists of eight bits. A single byte is capable of representing 256 distinct values, ranging from 0 to 255 in decimal notation, making it suitable for encoding individual characters, small integers, or binary states. The term "byte" was coined in July 1956 by Werner Buchholz, a German-born American computer scientist, during the early design phase of the IBM 7030 Stretch supercomputer. Buchholz deliberately respelled "bite" as "byte" to denote an ordered collection of bits while avoiding confusion with the existing term "bit." Initially, the size of a byte varied across systems—for instance, early computers used 4-bit or 6-bit groupings—but it was standardized as 8 bits in the 1960s with the IBM System/360 mainframe series, which adopted the 8-bit Extended Binary Coded Decimal Interchange Code (EBCDIC) for character encoding. In modern computing, bytes form the basic building block for data storage, memory allocation, and transmission, enabling the representation of text, images, and executable code. They underpin character encoding schemes such as ASCII, which assigns 128 characters to the first 7 bits of a byte (with the eighth bit often used for parity or extension), and variable-length Unicode formats like UTF-8, where ASCII-compatible characters occupy one byte and others use multiple bytes. Larger data volumes are quantified using multiples of the byte: decimal units such as the kilobyte (1 kB = 1,000 bytes) and megabyte (1 MB = 1,000,000 bytes), and binary units such as the kibibyte (1 KiB = 1,024 bytes) and mebibyte (1 MiB = 1,048,576 bytes), extending up to the yottabyte and yobibyte. This hierarchical structure is essential for measuring file sizes, bandwidth, and storage capacity in digital systems.

Definition and Fundamentals

Core Definition

A byte is a unit of digital information typically consisting of eight bits, enabling the representation of 256 distinct values ranging from 0 to 255 in decimal notation. This structure allows bytes to serve as a fundamental building block for data storage, processing, and transmission in computing systems. A bit, the smallest unit of digital information, represents a single binary digit that can hold either a value of 0 or 1. By grouping eight such bits into a byte, computers can encode more complex data efficiently, supporting operations like arithmetic calculations and character representation that exceed the limitations of individual bits. The international standard IEC 80000-13:2008 formally defines one byte as exactly eight bits, using the term "byte" (symbol B) as a synonym for "octet" to denote this eight-bit quantity and recommending its use to avoid ambiguity with historical variations. For example, a single byte can store one ASCII character, such as 'A', which corresponds to the decimal value 65.
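
As a minimal illustration of these figures (a Python sketch, not drawn from the cited standards), the following snippet prints the number of values a byte can hold and the ASCII example above:

    # A byte holds 8 bits, giving 2**8 = 256 distinct values (0 through 255).
    BITS_PER_BYTE = 8
    print(2 ** BITS_PER_BYTE)        # 256

    # One byte can store one ASCII character, e.g. 'A' has the decimal value 65.
    print(ord("A"))                  # 65
    print(format(ord("A"), "08b"))   # 01000001, the byte's bit pattern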

Relation to Bits

A byte is an ordered collection of bits, standardized in modern computing to eight bits, that is typically treated as a single binary number representing integer values from 00000000 (0 in decimal) to 11111111 (255 in decimal). This structure allows a byte to encode 256 distinct states, as each bit can independently be 0 or 1, yielding $2^8$ possible combinations. The numerical value of a byte is determined by its binary representation using positional notation, where each bit's position corresponds to a power of 2. The value $V$ of an 8-bit byte is calculated as $V = \sum_{i=0}^{7} b_i \cdot 2^i$, where $b_i$ is the value of the $i$-th bit (either 0 or 1), and $i = 0$ denotes the least significant bit. For example, the binary byte 10101010 converts to 170 in decimal, computed as $1 \cdot 2^7 + 0 \cdot 2^6 + 1 \cdot 2^5 + 0 \cdot 2^4 + 1 \cdot 2^3 + 0 \cdot 2^2 + 1 \cdot 2^1 + 0 \cdot 2^0 = 128 + 32 + 8 + 2 = 170$. In computing systems, bytes play a crucial role by serving as the smallest addressable unit of memory, enabling efficient referencing and manipulation of data in larger aggregates beyond individual bits. This byte-addressable design facilitates operations on contiguous blocks of memory, such as loading instructions or storing variables, which would be impractical at the bit level due to the granularity mismatch.
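
The positional formula can be checked with a short Python sketch (illustrative only; the helper byte_value is a hypothetical name, not an established API):

    # V = sum(b_i * 2**i), with bit index i = 0 at the least significant end.
    def byte_value(bits):
        # bits is a string written most-significant-bit first, e.g. "10101010"
        return sum(int(b) << i for i, b in enumerate(reversed(bits)))

    print(byte_value("10101010"))  # 170 = 128 + 32 + 8 + 2
    print(int("10101010", 2))      # 170, the same conversion via Python's built-in int()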

History and Etymology

Origins of the Term

The term "byte" was coined in July 1956 by IBM engineer Werner Buchholz during the early design phase of the IBM Stretch computer, a pioneering supercomputer project aimed at advancing high-performance computing. Buchholz introduced the word as a more concise alternative to cumbersome phrases like "binary digit group" or "bit string," which were used to describe groupings of bits in data processing. Etymologically, "byte" derives from "bit" with the addition of the suffix "-yte," intentionally respelled from the more intuitive "bite" to prevent confusion with the existing term "bit" while evoking the idea of a larger "bite" of information. This playful yet practical choice reflected the need for a unit that signified a meaningful aggregation of bits, larger than a single binary digit but suitable for computational operations. In its early conceptual role, the byte was proposed as a flexible data-handling unit larger than a bit, specifically to encode characters, perform arithmetic on variable-length fields, and manage instructions in the bit-addressable architecture of mainframes like the Stretch. This addressed the limitations of processing data solely in isolated bits, enabling more efficient handling of textual and numerical information in early computer systems. The first documented use of "byte" appeared in the June 1959 technical paper "Processing Data in Bits and Pieces" by Buchholz, Frederick P. Brooks Jr., and Gerrit A. Blaauw, published in the IRE Transactions on Electronic Computers, where it described a unit for variable-length data operations in the context of Stretch's design. Although the term originated three years earlier in internal IBM discussions, this publication marked its entry into the broader technical literature, predating its adoption in the IBM System/360 architecture.

Evolution of Byte Size

In the early days of computing, the size of a byte varied across systems to suit specific hardware architectures and data encoding needs. The IBM 7030 Stretch supercomputer, designed in the late 1950s, employed a variable-length byte concept but typically utilized 6-bit bytes for binary-coded decimal (BCD) character representation, allowing efficient packing of decimal digits within its 64-bit words. Similarly, 7-bit bytes were common in telegraphic and communication systems, aligning with the structure of character codes like the International Telegraph Alphabet No. 5, a 7-bit code supporting 128 characters. Some minicomputers, such as the DEC PDP-10 from the late 1960s, supported 9-bit bytes that divided 36-bit words into four equal units, facilitating operations on larger datasets like those in time-sharing systems.

The transition to an 8-bit byte gained momentum in the mid-1960s, propelled by emerging character encoding standards that required more robust representation. The American Standard Code for Information Interchange (ASCII), standardized in 1963, defined 7 bits for 128 characters, but practical implementations often added an 8th parity bit for error checking in transmission, effectively establishing an 8-bit structure. IBM's Extended Binary Coded Decimal Interchange Code (EBCDIC), developed in 1964 for the System/360 mainframe series, natively used 8 bits to encode 256 possible values, including control characters and punched-card compatibility, influencing enterprise computing architectures. The IBM System/360, announced in 1964, played a crucial role in this standardization by adopting a consistent 8-bit byte across its compatible family of computers, facilitating data interchange and software portability. This shift aligned with the growing need for international character support and efficient data processing beyond decimal-centric designs.

By the 1970s, the 8-bit byte had become the de facto standard, driven by advancements in semiconductor technology and microprocessor design. Early dynamic random-access memory (DRAM) chips, such as Intel's 1103 introduced in 1970, provided 1-kilobit capacities in a 1024 × 1 bit organization; systems using these chips often combined multiple devices to form 8-bit bytes, aligning with emerging standards for compatibility and efficiency. The Intel 8080 microprocessor, released in 1974, further solidified this by processing data in 8-bit units while addressing memory through a 16-bit address bus, enabling the proliferation of affordable personal computers and embedded systems. This standardization improved memory efficiency, as 8-bit alignments reduced overhead in addressing and arithmetic operations compared to uneven sizes like 6 or 9 bits.

Formal standardization affirmed the 8-bit byte in international norms during the late 20th century. The IEEE 754 standard for binary floating-point arithmetic, published in 1985, implicitly relied on 8-bit bytes by defining single-precision formats as 32 bits (four bytes) and double-precision as 64 bits (eight bytes), ensuring portability across hardware. The ISO/IEC 2382-1 vocabulary standard, revised in 1993, standardized the term byte, noting its customary length of eight bits and providing consistent terminology for information technology. This was reinforced by the International Electrotechnical Commission (IEC) in 1998 through amendments to IEC 60027-2, which integrated the 8-bit byte into binary prefix definitions for data quantities, resolving ambiguities in storage and transmission metrics.

Notation and Standards

Unit Symbols and Abbreviations

The official unit symbol for the byte is the uppercase letter B, as established by international standards to represent a sequence of eight bits. This symbol is defined in IEC 80000-13:2025, which specifies that the byte is synonymous with the octet and uses B to denote this unit in information science and technology contexts. The standard also aligns with earlier guidelines in IEC 60027-2 (2000), which incorporated the conventions for binary multiples introduced in 1998 and emphasized consistent notation for bytes and bits. To prevent ambiguity, particularly in data rates and storage metrics, the lowercase b (or the symbol bit) is reserved for the bit and its multiples (e.g., kbit for kilobit), while B exclusively denotes the byte. The National Institute of Standards and Technology (NIST) reinforces this distinction in its guidelines on SI units and binary prefixes, stating that 1 B = 8 bits and recommending B for all byte-related quantities to avoid confusion with bit-based units. The symbol "o" for the octet, common in French-language usage, falls outside this unified B notation and is generally avoided in English-language technical documentation. In formal writing and standards-compliant contexts, the symbol B is written without periods or pluralization (e.g., 8 B for eight bytes), following general SI rules for upright roman type and unmodified plurals. Informal usage in prose often spells out "byte" fully or employs B inline, but avoids the ambiguous lowercase "b" for bytes to maintain clarity. For example, a binary kilobyte is properly written 1 KiB = 1024 B (often informally labeled 1 KB), distinguishing it from kbit or kb for the kilobit (1000 bits). Guidelines from authoritative bodies like NIST and the IEC continue to prioritize B to ensure unambiguous communication and interoperability in computing and measurement applications.
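
The factor of eight between bit-based and byte-based units is where the b/B distinction matters most in practice; the following Python arithmetic (illustrative only) makes it concrete:

    # 1 B = 8 bit, so figures in kbit/s and kB/s differ by a factor of eight.
    rate_kbit_per_s = 256                      # a rate quoted as 256 kbit/s
    rate_kB_per_s = rate_kbit_per_s / 8        # the same rate in kilobytes per second
    print(f"{rate_kbit_per_s} kbit/s = {rate_kB_per_s} kB/s")   # 256 kbit/s = 32.0 kB/s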

Definition of Multiples

Multiples of bytes provide a standardized way to express larger quantities of digital information, commonly applied in contexts such as data storage, memory capacity, and bandwidth measurement. These multiples incorporate prefixes that scale the base unit of one byte (8 bits) by powers of either 10, aligning with the decimal system used in general scientific measurement, or powers of 2, which correspond to the binary nature of computing architectures. In 1998, the International Electrotechnical Commission (IEC) established binary prefixes through an amendment to International Standard IEC 60027-2 to clearly denote multiples based on powers of 2, avoiding ambiguity in computing applications, with the latest revision in IEC 80000-13:2025 adding further prefixes for binary multiples. Under this system, the prefix "kibi" (Ki) represents a factor of $2^{10}$, so 1 KiB = $2^{10}$ bytes = 1,024 bytes; "mebi" (Mi) represents $2^{20}$, so 1 MiB = $2^{20}$ bytes = 1,048,576 bytes; and the scale extends through gibi (Gi, $2^{30}$), tebi (Ti, $2^{40}$), pebi (Pi, $2^{50}$), exbi (Ei, $2^{60}$), and zebi (Zi, $2^{70}$), up to yobi (Yi, $2^{80}$), where 1 YiB = $2^{80}$ bytes. Concurrently in 1998, the International System of Units (SI) prefixes were endorsed for decimal multiples of bytes to maintain consistency with metric conventions, defining scales based on powers of 10. For instance, the prefix "kilo" (k) denotes $10^3$, so 1 kB = $10^3$ bytes = 1,000 bytes; "mega" (M) denotes $10^6$, so 1 MB = $10^6$ bytes = 1,000,000 bytes; and the progression continues with giga (G, $10^9$), tera (T, $10^{12}$), peta (P, $10^{15}$), exa (E, $10^{18}$), zetta (Z, $10^{21}$), and yotta (Y, $10^{24}$), where 1 YB = $10^{24}$ bytes. In general, the value of a byte multiple equals the prefix factor multiplied by one byte, where the prefix factor is $10^{3 \times n}$ for decimal prefixes or $2^{10 \times n}$ for binary prefixes, with $n$ being the order of the chosen prefix (e.g., $n = 1$ for kilo or kibi, $n = 2$ for mega or mebi).
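
The two families of prefix factors can be computed directly; the Python sketch below is illustrative only, with the prefix tables hard-coded as an assumption rather than taken from any standard library:

    # Decimal prefixes scale as 10**(3*n); binary (IEC) prefixes scale as 2**(10*n).
    DECIMAL_ORDER = {"k": 1, "M": 2, "G": 3, "T": 4, "P": 5, "E": 6, "Z": 7, "Y": 8}
    BINARY_ORDER = {"Ki": 1, "Mi": 2, "Gi": 3, "Ti": 4, "Pi": 5, "Ei": 6, "Zi": 7, "Yi": 8}

    def prefix_factor(prefix):
        """Return how many bytes one <prefix>byte contains."""
        if prefix in BINARY_ORDER:
            return 2 ** (10 * BINARY_ORDER[prefix])
        return 10 ** (3 * DECIMAL_ORDER[prefix])

    print(prefix_factor("k"))    # 1000
    print(prefix_factor("Ki"))   # 1024
    print(prefix_factor("Mi"))   # 1048576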

Variations and Conflicts in Multiples

Binary-Based Units

Binary-based units, also referred to as binary prefixes, are measurement units for digital information that are multiples of powers of 2, aligning with the fundamental binary architecture of computers. These units were formalized by the International Electrotechnical Commission (IEC) in its 1998 amendment to IEC 60027-2, which defines prefixes such as kibi (Ki), mebi (Mi), and gibi (Gi) to denote exact binary multiples of the byte. For instance, 1 kibibyte (KiB) equals $2^{10}$ = 1,024 bytes, while 1 gibibyte (GiB) equals $2^{30}$ = 1,073,741,824 bytes. This standardization was later incorporated into the updated IEC 80000-13:2008, emphasizing their role in data processing and transmission. The adoption of binary-based units gained traction for their precision in contexts like random access memory (RAM) capacities and file size reporting, where alignment with hardware addressing is crucial. Operating systems such as Microsoft Windows commonly report file sizes using these binary multiples—for example, displaying 1 KB as 1,024 bytes in File Explorer—to reflect actual storage allocation in binary systems. The IEC promoted these units to eliminate ambiguity in computing applications, ensuring that measurements for volatile memory like RAM and non-volatile storage like files accurately represent binary-scaled data. A key advantage of binary-based units lies in their seamless integration with computer memory addressing, where locations are numbered in powers of 2; for example, $2^{20}$ addressable bytes precisely equals 1 mebibyte (MiB), facilitating efficient hardware design and software calculations without conversion overhead. The general formula for the number of bytes in such a unit is $2^{10 \times n}$, where $n$ is the prefix order (e.g., $n = 1$ for kibi, $n = 2$ for mebi). Thus, 1 tebibyte (TiB) = $2^{40}$ = 1,099,511,627,776 bytes. Common binary prefixes are summarized below:
Prefix name    Symbol    Factor      Bytes (for byte multiples)
kibibyte       KiB       $2^{10}$    1,024
mebibyte       MiB       $2^{20}$    1,048,576
gibibyte       GiB       $2^{30}$    1,073,741,824
tebibyte       TiB       $2^{40}$    1,099,511,627,776
pebibyte       PiB       $2^{50}$    1,125,899,906,842,624
These binary units map directly onto the power-of-2 organization of computer hardware, in contrast to decimal-based units, which scale by powers of 10 for consistency with the metric system.
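
As a sketch of how such binary multiples are typically rendered in software (assuming a simple divide-by-1024 loop, not the implementation of any particular operating system):

    # Express a raw byte count using IEC binary prefixes (KiB, MiB, GiB, ...).
    def format_binary(n_bytes):
        units = ["B", "KiB", "MiB", "GiB", "TiB", "PiB", "EiB"]
        value = float(n_bytes)
        for unit in units:
            if value < 1024 or unit == units[-1]:
                return f"{value:.2f} {unit}"
            value /= 1024

    print(format_binary(1_048_576))   # 1.00 MiB
    print(format_binary(2 ** 40))     # 1.00 TiB
    print(format_binary(1_500_000))   # 1.43 MiB (a "1.5 MB" decimal file)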

Decimal-Based Units

Decimal-based units for byte multiples adhere to the International System of Units (SI) prefixes, employing powers of 10 for scalability in information storage and transfer. Per ISO/IEC 80000-13:2008, the kilobyte (kB) is defined as exactly 1,000 bytes, or $10^3$ bytes, establishing the foundational decimal progression. This system scales linearly: the megabyte (MB) equals $10^6$ bytes (1,000,000 bytes), the gigabyte (GB) equals $10^9$ bytes (1,000,000,000 bytes), and the petabyte (PB) equals $10^{15}$ bytes (1,000,000,000,000,000 bytes). The general expression for the number of bytes in these units is $10^{3 \times n}$, where $n$ denotes the prefix order ($n = 1$ for kilo-, $n = 2$ for mega-, up to $n = 5$ for peta- and $n = 6$ for exa-). For example, 1 exabyte (EB) comprises $10^{18}$ bytes. These decimal conventions are prevalent in hard drive manufacturing and networking protocols, prioritizing consumer familiarity with metric measurements over computational binary alignments. ISO/IEC 80000-13:2008 further endorses this usage for information technology, recommending SI prefixes to enhance clarity in storage and data rate expressions. A key distinction arises when comparing decimal units to their binary counterparts: 1 GB (decimal) totals 1,000,000,000 bytes, while 1 GiB equals 1,073,741,824 bytes ($2^{30}$), yielding roughly a 7% variance that manifests as reduced apparent capacity in binary-displaying operating systems. For a 1 TB drive labeled in decimal terms (1,000,000,000,000 bytes), such systems report approximately 931 GB (strictly, 931 GiB displayed under the GB label), illustrating this practical implication.
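
The shortfall is easy to reproduce; the following Python arithmetic (illustrative only) matches the 1 TB example above:

    # A drive labeled "1 TB" in decimal bytes, as reported by a binary-displaying OS.
    labeled_bytes = 10 ** 12                    # 1 TB = 1,000,000,000,000 bytes
    reported = labeled_bytes / 2 ** 30          # value shown when dividing by GiB
    print(f"{reported:.0f} GiB")                # 931 GiB (often labeled "931 GB")

    # The per-unit variance between 1 GB and 1 GiB.
    print(f"{1 - 10 ** 9 / 2 ** 30:.1%}")       # 6.9%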

Historical Disputes and Resolutions

During the 1980s and 1990s, significant ambiguity surrounded the definition of byte multiples like the kilobyte (KB), with computing hardware and software conventionally interpreting 1 KB as 1024 bytes based on binary powers of two, while hard disk drive (HDD) manufacturers increasingly adopted decimal interpretations of 1000 bytes to inflate advertised capacities. This divergence, driven by HDD marketing strategies to highlight larger storage sizes, caused widespread consumer frustration as operating systems reported usable space closer to 93% of the labeled amount due to binary calculations. To address the growing confusion, the International Electrotechnical Commission (IEC) approved a set of binary prefixes in December 1998, including "kibi" (symbol Ki) for $2^{10}$ or 1,024, "mebi" (Mi) for $2^{20}$ or 1,048,576, and similar terms up to "yobi" (Yi) for $2^{80}$, explicitly distinguishing them from decimal SI prefixes. In 2000, the U.S. National Institute of Standards and Technology (NIST) endorsed these IEC binary prefixes, recommending their use in technical contexts to avoid ambiguity and aligning with international standards for data storage and memory specifications. Legal actions further highlighted the issue. In 2006, Western Digital settled a class-action lawsuit alleging deceptive advertising of gigabyte (GB) capacities on drives like the 80 GB and 120 GB models, where actual binary-displayed space fell short; the settlement required the company to disclose its decimal definition (1 GB = 1,000,000,000 bytes) on product packaging, websites, and software for five years, along with providing free data recovery tools to affected customers. A similar 2007 class-action against Seagate resulted in a settlement offering refunds equivalent to 5% of purchase price (up to $7 per drive) to millions of customers and mandating clearer labeling of decimal versus binary interpretations to prevent future misleading claims. In the European Union, the Unfair Commercial Practices Directive (2005/29/EC) has prohibited misleading advertisements on storage capacities, enabling national authorities to pursue cases against vendors for deceptive decimal labeling without binary disclaimers, thereby reinforcing consumer protections against such discrepancies. More recently, the 2025 edition of ISO/IEC 80000-13 on quantities and units in information technology reaffirms the IEC binary prefixes and urges their consistent adoption alongside decimal ones to fully resolve lingering ambiguities in byte multiple definitions across global standards.

Applications in Computing

Storage and Memory

In data storage devices such as hard disk drives (HDDs) and solid-state drives (SSDs), the byte serves as the fundamental addressable unit, with data organized into sectors that are multiples of bytes. Traditional HDD sectors measure 512 bytes, representing the smallest unit for reading or writing data since the early 1980s. Modern HDDs often employ Advanced Format technology with 4,096-byte (4 KiB) physical sectors to enhance storage density and error correction on high-capacity drives exceeding 1 terabyte. SSDs, while internally using pages and blocks rather than traditional tracks and sectors, emulate 512-byte or 4,096-byte sectors for compatibility with operating systems and software. This emulation, known as 512e for 512-byte presentation, allows seamless integration but can introduce overhead in read-modify-write operations on native 4,096-byte structures.

Computer memory, particularly random-access memory (RAM), is structured in bytes, enabling fine-grained access in byte-addressable architectures prevalent in modern systems. In byte-addressable memory, each unique address corresponds to a single byte (8 bits), allowing the CPU to directly read or write individual bytes without unnecessary overhead. The x86 architecture, widely used in personal computers, employs byte-addressable memory, where 32-bit or 64-bit addresses reference bytes in RAM. Because memory capacities follow binary powers, an 8 GB RAM module holds 8 × 2^30 bytes (8 GiB), rather than the 8 × 10^9 bytes a decimal reading of the label would suggest.

File systems allocate storage space in bytes by grouping sectors into larger clusters, ensuring efficient management of data on disks. A cluster, the basic allocation unit, consists of one or more consecutive 512-byte sectors, such as 4 sectors totaling 2,048 bytes, with files occupying whole clusters even if partially filled. In the File Allocation Table (FAT) filesystem, cluster size is calculated as the product of bytes per sector (e.g., 512) and sectors per cluster (a power of 2, up to 128), limiting maximum sizes to 32 KB for broad compatibility, though larger clusters up to 256 KB are supported in modern implementations. This byte-based allocation minimizes fragmentation while tracking file sizes and locations precisely in bytes.

In processor caches, which bridge the speed gap between CPU and memory, data is transferred in fixed-size lines typically measuring 64 bytes to optimize bandwidth and exploit spatial locality. Intel's IA-32 and Intel 64 architectures specify 64-byte cache lines, where accessing any byte fetches the entire line into the cache. Similarly, AMD's Zen microarchitecture uses 64-byte cache lines, enabling efficient prefetching of adjacent data.

Storage capacities have evolved dramatically in byte terms, from the 1.44 MB 3.5-inch high-density floppy disks of the late 1980s (1,474,560 bytes, i.e., 2,880 sectors of 512 bytes each; the "1.44 MB" label uses a mixed convention of 1,440 × 1,024 bytes) to contemporary SSDs offering 1 TB (1 × 10^12 bytes in decimal notation). This progression reflects advances in density, with early floppies limited by magnetic media constraints and modern SSDs leveraging flash memory for terabyte-scale byte storage.
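
The whole-cluster allocation rule can be made concrete with a small Python sketch; the 512-byte sector and 4-sectors-per-cluster figures are the illustrative values from the text, not fixed properties of any particular file system:

    import math

    SECTOR_BYTES = 512
    SECTORS_PER_CLUSTER = 4                                # illustrative FAT-style setting
    cluster_bytes = SECTOR_BYTES * SECTORS_PER_CLUSTER     # 2,048 bytes per cluster

    file_size = 5_000                                      # a 5,000-byte file
    clusters_used = math.ceil(file_size / cluster_bytes)   # 3 whole clusters
    bytes_on_disk = clusters_used * cluster_bytes          # 6,144 bytes allocated
    slack = bytes_on_disk - file_size                      # 1,144 bytes of unused "slack"
    print(clusters_used, bytes_on_disk, slack)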

Data Processing and Encoding

In data processing, bytes serve as the fundamental unit for encoding and manipulating information within computational systems. Character encoding schemes such as UTF-8 represent Unicode characters using a variable number of bytes, typically ranging from 1 to 4 bytes per character to accommodate the full range of code points up to U+10FFFF. This variable-width approach ensures backward compatibility with ASCII, where the first 128 characters (0x00 to 0x7F) are encoded in a single byte, using 7 bits of value while the leading bit remains 0 to distinguish single-byte characters from multi-byte sequences. ASCII, formalized as a 7-bit standard, thus fits entirely within one byte in modern 8-bit systems, enabling efficient handling of basic Latin text without additional overhead.

Central processing units (CPUs) process bytes through arithmetic logic units (ALUs) that perform operations on 8-bit values, such as addition, subtraction, and bitwise manipulations, forming the basis for more complex computations on multi-byte data types. For instance, the low-level assembly instruction BSWAP on x86 architectures reverses the byte order within a 32-bit or 64-bit register, facilitating data conversion between different endian formats during processing. These operations highlight the byte's role in granular data handling, where registers and ALUs treat sequences of bytes as building blocks for integers, floating-point numbers, and other structures.

In network protocols, bytes define the structure and transmission of data packets. Ethernet frames, as specified in IEEE 802.3, carry a payload with a maximum transmission unit (MTU) of 1500 bytes, excluding headers, to balance efficiency and error detection in local area networks. Bandwidth is commonly measured in bytes per second, with Gigabit Ethernet theoretically supporting up to approximately 125 MB/s (megabytes per second), though practical rates, often closer to 100 MB/s, reflect protocol overhead and real-world conditions.

Encoding techniques further illustrate byte manipulation in data processing. Base64, a method for representing binary data in an ASCII-compatible format, converts every 3 bytes of input into 4 characters from a 64-symbol alphabet, increasing the data size by about 33% to ensure safe transmission over text-based protocols. Endianness, the ordering of bytes in multi-byte integers, affects storage and processing; for example, a little-endian system like x86 stores the least significant byte at the lowest memory address, which can require byte swaps when interfacing with the big-endian network byte order used by Internet protocols.
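
Both behaviors can be observed directly from Python (an illustrative sketch; the sample characters and the 0x12345678 test value are arbitrary choices):

    import struct

    # UTF-8 is variable-width: 1 to 4 bytes per character depending on the code point.
    for ch in ["A", "é", "€", "😀"]:
        print(ch, len(ch.encode("utf-8")))      # 1, 2, 3 and 4 bytes respectively

    # Endianness: the same 32-bit integer laid out in the two byte orders.
    n = 0x12345678
    print(struct.pack("<I", n).hex())           # little-endian (x86): 78563412
    print(struct.pack(">I", n).hex())           # big-endian (network order): 12345678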

    Big endian machine: Stores data big-end first. When looking at multiple bytes, the first byte (lowest address) is the biggest. Little endian machine: Stores ...