
Digital data

Digital data refers to information represented in a discrete, binary format using bits—the smallest units of information, each capable of holding a value of either 0 or 1—for storage, processing, and transmission within computing systems. This representation leverages the two-state nature of electronic switches in computers, where 0 typically denotes an "off" state and 1 an "on" state, enabling efficient encoding of complex information. Bits are commonly grouped into bytes, consisting of 8 bits, which serve as the fundamental unit for data manipulation and allow for 256 possible values (0 to 255) per byte. In digital systems, data is stored in byte-addressable memory, where each byte has a unique address, facilitating organized access and allocation across segments like static data, the stack, and the heap based on the data's lifecycle.

Common types of digital data include numeric (such as integers and real numbers), textual (encoded via standards like ASCII or Unicode), logical (true/false values), visual (images represented by pixels in binary, grayscale, or color formats), and audio (sampled waveforms). These representations often employ fixed-length formats to balance precision and storage efficiency; for instance, an 8-bit unsigned integer ranges from 0 to 255, while signed variants use schemes like two's complement to include negative values.

Digital data forms the backbone of modern computing and communication, encompassing raw values or sets of values that represent specific concepts, which become meaningful only upon interpretation and contextualization. In the digital age, it is generated in vast quantities—such as petabytes annually from large scientific projects—enabling global collaboration but posing challenges in accuracy verification, long-term preservation, and adaptation to evolving technologies. Techniques like data compression (lossless for exact recovery or lossy for approximation) further optimize storage.

Fundamentals

Definition

Digital data refers to information that is represented using discrete values, most commonly in the form of binary digits (bits) consisting of 0s and 1s, which allows for precise storage, processing, and transmission in computing systems. This discrete nature enables digital data to be exactly replicated without degradation, distinguishing it from continuous analog representations and facilitating reliable manipulation through computational operations. In contrast to analog data, which varies continuously and is susceptible to signal degradation over time or distance—such as the grooves on a vinyl record wearing down with repeated playback, leading to reduced audio fidelity—digital data is quantized into finite states, preserving integrity during copying and transmission, as exemplified by compact disc (CD) audio, where binary encoding ensures consistent quality across duplicates.

The origins of digital data are rooted in electronic systems designed for efficient storage, processing, and transmission of information, with Claude Shannon's 1948 formulation of information theory establishing the bit as the fundamental unit of information, quantifying uncertainty in communication channels without reference to meaning. The prevalence of digital data has grown dramatically; according to estimates by Hilbert and López, digital formats accounted for less than 1% of the world's total technological storage capacity in 1986 but expanded to 94% by 2007 due to exponential advances in digital storage technologies.

Representation

Digital data is fundamentally represented as sequences of bits, where each bit is a binary digit that can hold one of two values: 0 or 1. In electronic systems, a bit value of 0 typically corresponds to a low voltage level (near 0 V, representing an "off" state), while 1 corresponds to a high voltage level (such as 3.3 V or 5 V, representing an "on" state). These bits form the basic building blocks, allowing complex information to be encoded through patterns of 0s and 1s. A byte, the most common grouping of bits, consists of 8 bits and can represent 256 distinct values (0 to 255 in decimal). For example, in the ASCII encoding scheme, the uppercase letter 'A' is represented as the byte 01000001 in binary.

Higher-level abstractions build on bits for efficiency. A nibble comprises 4 bits, capable of representing 16 values (0 to 15 in decimal), often used in hexadecimal notation where each nibble maps to a single hex digit (0-9 or A-F). A word, which is machine-dependent, refers to the standard number of bits processed by a processor in a single operation; common sizes include 32 bits in 32-bit architectures and 64 bits in 64-bit systems. Hexadecimal notation provides a compact way to denote binary data, with each pair of hex digits representing a byte; for instance, the binary 11111111 (all 1s in a byte) is written as 0xFF.

Various data types structure bits to represent specific kinds of information. Integers can be unsigned, using all bits for magnitude to cover non-negative values (e.g., 0 to 2^n − 1 for n bits), or signed, reserving the most significant bit as a sign bit in two's complement representation to include negative values (e.g., −2^(n−1) to 2^(n−1) − 1). Floating-point numbers follow the IEEE 754 standard, which defines formats like single-precision (32 bits: 1 sign bit, 8 exponent bits, 23 mantissa bits) and double-precision (64 bits: 1 sign bit, 11 exponent bits, 52 mantissa bits) to approximate real numbers via scientific notation in binary. Text is encoded using standards like Unicode, which assigns unique code points (numbers) to characters from diverse writing systems, typically stored as sequences of bytes in encodings such as UTF-8. Images are represented as grids of pixels, where each pixel's value captures color intensity; in RGB format, a pixel uses three 8-bit channels (0-255) for red, green, and blue components to form over 16 million colors.

Storage capacity scales through hierarchical units starting from the bit. Common units include the byte (8 bits), kilobyte (KB, approximately 10^3 bytes), megabyte (MB, 10^6 bytes), gigabyte (GB, 10^9 bytes), terabyte (TB, 10^12 bytes), and petabyte (PB, 10^15 bytes), often using decimal prefixes for marketing while binary prefixes (e.g., kibibyte, 2^10 bytes) apply in technical contexts. The total bit capacity of a storage unit is calculated as:

\text{total bits} = \text{number of units} \times \text{bits per unit}

For example, 1 TB (10^12 bytes) equals 8 × 10^12 bits, since each byte holds 8 bits.
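
To make these encodings concrete, the following sketch (Python standard library only; the chosen values are illustrative, not a definitive implementation) shows a byte rendered in binary and hexadecimal, the unsigned versus two's-complement reading of the same 8 bits, the field layout of an IEEE 754 single-precision value, and the total-bits formula applied to 1 TB.

```python
import struct

# A byte holds 8 bits: 256 distinct values (0-255).
a = ord('A')                              # ASCII code point for 'A'
print(f"{a} = 0b{a:08b} = 0x{a:02X}")     # 65 = 0b01000001 = 0x41

# Unsigned vs. signed (two's complement) interpretation of the same 8 bits.
raw = 0b11111111                          # 255 when read as unsigned
signed = raw - 256 if raw & 0x80 else raw
print(raw, signed)                        # 255 -1

# IEEE 754 single precision: 1 sign bit, 8 exponent bits, 23 mantissa bits.
bits = struct.unpack('>I', struct.pack('>f', -2.5))[0]
sign = bits >> 31
exponent = (bits >> 23) & 0xFF
mantissa = bits & 0x7FFFFF
print(sign, exponent, mantissa)           # 1 128 2097152

# Total bits = number of units x bits per unit: 1 TB expressed in bits.
print(10**12 * 8)                         # 8000000000000
```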

Conversion

Analog-to-Digital

The process of analog-to-digital (A/D) conversion transforms continuous analog signals, such as those from sensors or audio sources, into discrete digital data suitable for computational processing and storage. This conversion is essential in digital systems, where analog signals representing real-world phenomena like sound waves or voltage variations must be discretized in both time and amplitude domains. The two primary stages are sampling, which captures the signal at regular intervals, and quantization, which maps continuous amplitude values to finite digital levels. These steps ensure faithful representation while introducing controlled approximations to enable digital handling.

Sampling adheres to the Nyquist-Shannon sampling theorem, which stipulates that to accurately reconstruct a continuous signal without aliasing, the sampling rate f_s must be at least twice the highest frequency component f_{\max} in the signal's spectrum. Formally, this is expressed as:

f_s \geq 2 \times f_{\max}

For instance, in audio recording, compact discs use a sampling rate of 44.1 kHz to capture frequencies up to 22 kHz, encompassing the full human hearing range. Failure to meet this criterion results in aliasing distortion, as higher frequencies fold into lower ones during reconstruction.

Quantization follows sampling by approximating each sample's amplitude to the nearest level from a predefined set, introducing quantization error as the difference between the actual and assigned values. In uniform quantization, the number of levels is 2^n for n bits; for example, 16-bit audio quantization provides 65,536 levels, allowing fine-grained resolution over the signal's dynamic range. This error manifests as quantization noise, with the signal-to-noise ratio (SNR) for a full-scale sinusoidal input given by:

\text{SNR} = 6.02n + 1.76 \, \text{dB}

where n is the number of bits, establishing a theoretical limit on conversion fidelity.

An analog-to-digital converter (ADC) typically comprises key components: a sample-and-hold circuit to capture and stabilize the input signal during conversion, a quantizer to map the held voltage to discrete levels, and an encoder to output the corresponding binary code. One common architecture is the successive approximation ADC (SAR ADC), which iteratively compares the input against a digitally controlled reference using a binary search algorithm, refining the digital output bit by bit over multiple clock cycles for balanced speed and power efficiency.

Applications of A/D conversion span diverse fields, including audio digitization via pulse-code modulation (PCM), where sampled and quantized signals enable compact storage and transmission in digital audio formats; video processing through frame capture, discretizing pixel intensities at high rates for digital video; and sensor interfaces in Internet of Things (IoT) devices, converting environmental measurements like temperature or motion into digital form for remote monitoring. By 2025, precision ADCs in consumer electronics, such as smartphones, commonly process over 1 million samples per second to support features like high-resolution audio capture and real-time sensing.
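
The sampling and quantization stages can be sketched in a few lines of Python. This simplified uniform quantizer digitizes a full-scale 1 kHz sine at a CD-style 44.1 kHz rate; the signal and parameter choices are illustrative assumptions, not part of any standard.

```python
import math

f_sig, f_s, n_bits = 1000.0, 44100.0, 16   # 1 kHz tone, CD-style rate, 16-bit depth
levels = 2 ** n_bits                       # 65,536 quantization levels

def sample_and_quantize(duration_s=0.001):
    """Sample a full-scale sine at f_s and map each sample to the nearest level."""
    codes = []
    for k in range(int(duration_s * f_s)):
        x = math.sin(2 * math.pi * f_sig * k / f_s)      # analog amplitude in [-1, 1]
        codes.append(round((x + 1) / 2 * (levels - 1)))  # uniform quantizer
    return codes

codes = sample_and_quantize()
print(codes[:3])                                  # first few 16-bit sample codes
print(f"SNR = {6.02 * n_bits + 1.76:.2f} dB")     # quantization-limited: 98.08 dB
```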

Symbol-to-Digital

Symbol-to-digital conversion transforms discrete, human-readable symbols—such as text characters, graphical icons, or visual patterns—into binary representations suitable for digital processing and storage. This process primarily relies on encoding schemes that map each symbol to a unique sequence of bits, enabling efficient transmission and manipulation by computers. For textual symbols, the American Standard Code for Information Interchange (ASCII) employs a fixed 7-bit code to represent 128 basic characters, including uppercase and lowercase letters, digits, and punctuation, providing a foundational standard for early text handling. Extending this capability, UTF-8 serves as the dominant encoding for Unicode text, using variable-length byte sequences (1 to 4 bytes) to accommodate a broader array of international symbols while maintaining backward compatibility with ASCII. In imaging contexts, color symbols are digitized via the RGB model, where each pixel's hue is defined by three 8-bit integer values (0-255) for red, green, and blue components, yielding over 16 million possible colors per pixel in standard formats like PNG.

Input devices play a crucial role in capturing and converting these symbols through systematic mechanisms. Keyboards, for instance, utilize polling, in which the host computer repeatedly queries the keyboard's controller at regular intervals to detect key presses; upon detection, the device generates a scan code that is mapped to a character code like ASCII or UTF-8. Scanners, meanwhile, employ optical scanning to digitize printed symbols: a light source illuminates the document, sensors capture reflected intensities as analog signals, and these are thresholded into binary values representing black or white modules. These mechanisms ensure symbols are systematically polled or scanned into digital form without loss of identity.

Various encoding schemes optimize this conversion for efficiency and capacity. Huffman coding, introduced by David Huffman in 1952, exemplifies variable-length encoding by assigning shorter binary codes to more frequent symbols and longer ones to rarer ones, minimizing overall bit usage in data streams while ensuring prefix-free decoding for lossless reconstruction. An early precursor to such binary-like systems is Morse code, developed in the 1830s, which maps alphabetic symbols to sequences of dots (short signals) and dashes (long signals) separated by spaces, effectively using two states to encode messages over telegraph lines. In contemporary applications, QR codes illustrate advanced symbol encoding by arranging data into a grid of black and white squares; they support four modes—numeric, alphanumeric, byte/binary, and kanji—to encode up to 7,089 numeric characters or equivalent in other modes per symbol, facilitating quick digital readout via scanners.

The proliferation of symbol-to-digital conversion has driven the near-total dominance of digital storage, with global data volumes projected to reach 181 zettabytes by 2025, overwhelmingly in digital formats as analog fades. In digital cameras, this process is evident through charge-coupled device (CCD) sensors, where incoming photons generate electron charges proportional to light intensity at each photosite; these charges are serially shifted and converted to pixel values via an on-chip analog-to-digital converter, forming the digital image. A key challenge in this domain is the ongoing evolution of character sets to handle global linguistic diversity; starting from ASCII's limited 128 symbols, Unicode adopted a 21-bit architecture in 1996, theoretically supporting 1,114,112 code points, with 159,801 characters encoded by version 17.0 in 2025 to encompass scripts from 150+ languages.
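
The variable-length design of UTF-8 can be seen directly in Python, which exposes each character's code point and encoded bytes; the four sample characters below are arbitrary illustrations.

```python
# UTF-8 uses 1-4 bytes per code point; ASCII characters keep their 1-byte codes.
for ch in ['A', 'é', '€', '😀']:
    encoded = ch.encode('utf-8')
    print(f"U+{ord(ch):06X} {ch!r} -> {len(encoded)} byte(s): {encoded.hex(' ')}")

# U+000041 'A' -> 1 byte(s): 41
# U+0000E9 'é' -> 2 byte(s): c3 a9
# U+0020AC '€' -> 3 byte(s): e2 82 ac
# U+01F600 '😀' -> 4 byte(s): f0 9f 98 80
```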

States

Binary States

Digital data fundamentally relies on binary states, which represent the two discrete values of 0 and 1. Logically, these states correspond to false and true, or off and on, forming the basis of Boolean logic in digital systems. Physically, they are implemented through distinguishable electrical or material properties that can be reliably detected and switched.

In electronic circuits, binary states are typically encoded using voltage levels. For instance, in Transistor-Transistor Logic (TTL) systems operating at 5 V, a low state (0) is defined as 0 to 0.8 V, while a high state (1) ranges from 2 V to 5 V, with undefined regions in between to provide noise margins. These thresholds ensure robust signal interpretation despite variations in manufacturing or environmental conditions. Similar conventions apply in other logic families, such as CMOS, but TTL remains a standard reference for many digital interfaces.

Binary states are stored in various media by exploiting physical properties that can hold one of two stable configurations. In magnetic storage, such as hard disk drives, data is encoded in the orientation of magnetic domains on a thin ferromagnetic layer; one direction represents 0, and the opposite represents 1, with read heads detecting these via changes in magnetic flux. Optical media, like CDs, use microscopic pits and lands on a reflective surface: pits scatter light to indicate one state (often 0), while lands reflect it for the other (1), though actual bit encoding relies on transitions between them for reliable detection. In solid-state storage, NAND flash memory employs floating-gate transistors where the presence or absence of trapped electrons in the gate alters the transistor's threshold voltage, distinguishing charged (0) from uncharged (1) states that persist without power.

Switching between binary states enables computation through logic gates, which perform basic Boolean operations on inputs. The AND gate outputs 1 only if all inputs are 1; the OR gate outputs 1 if any input is 1; and the NOT gate inverts the input (0 to 1 or vice versa). These are realized using transistors as switches: in a MOSFET, a low gate voltage keeps it off (cut-off, representing 0), blocking current, while a high voltage turns it on (saturation, representing 1), allowing current flow. Combinations of such transistor switches form the gates, underpinning all digital logic circuits.

Reliability of binary states is critical, as errors can corrupt data. Modern error-correcting code (ECC) memory achieves correctable bit error rates on the order of 10^{-11} per bit per hour, correcting single-bit flips through redundant check bits. However, classical systems face fundamental limits from physical noise, such as thermal fluctuations in charge transport, which impose a minimum error probability that scales with signal power and temperature, preventing perfect switching reliability in high-speed operations.
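
For illustration, the sketch below models the TTL input thresholds and the basic gates in Python; the function names and the treatment of the undefined region are simplifications of real electrical behavior, not a circuit simulation.

```python
def ttl_state(voltage):
    """Interpret a voltage under 5 V TTL input thresholds."""
    if voltage <= 0.8:
        return 0            # low: guaranteed logic 0
    if voltage >= 2.0:
        return 1            # high: guaranteed logic 1
    return None             # undefined region: no guaranteed interpretation

def AND(a, b): return a & b
def OR(a, b):  return a | b
def NOT(a):    return 1 - a

# NAND = NOT(AND): a universal gate from which all others can be built.
def NAND(a, b): return NOT(AND(a, b))

print(ttl_state(0.3), ttl_state(3.3), ttl_state(1.5))   # 0 1 None
print([NAND(a, b) for a in (0, 1) for b in (0, 1)])     # [1, 1, 1, 0]
```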

Data Lifecycle States

Digital data progresses through distinct lifecycle states—at rest, in transit, and in use—each requiring tailored security measures to mitigate risks associated with storage, transmission, and processing. These states highlight the dynamic nature of digital information, where vulnerabilities can arise from unauthorized access, interception, or modification, emphasizing the need for layered protections aligned with established frameworks.

Data at rest encompasses digital information stored on persistent media without active use or movement, such as in databases or filesystems. This state is particularly susceptible to threats like physical theft of devices or unauthorized internal access. Secure practices include full-disk encryption using the Advanced Encryption Standard (AES) with 256-bit keys (AES-256), a symmetric cipher endorsed by the National Institute of Standards and Technology (NIST) for protecting sensitive data in long-term storage. AES-256 operates on 128-bit blocks and is widely implemented in systems like encrypted hard drives to prevent data exposure if the medium is compromised.

Data in transit refers to digital data actively transferred across networks or between systems, exposing it to interception during communication. Common protocols facilitating this include the Transmission Control Protocol/Internet Protocol (TCP/IP) suite, which handles reliable data delivery over the internet, and HTTPS, which layers Transport Layer Security (TLS) encryption atop HTTP to safeguard against eavesdropping. A key vulnerability in this state is the man-in-the-middle attack, where an adversary positions themselves between sender and receiver to capture or alter data streams. To counter such risks, end-to-end encryption and certificate validation are essential, ensuring confidentiality and integrity during movement.

Data in use describes digital data being actively processed or accessed within active memory, such as volatile random-access memory (RAM), where it is temporarily loaded for computation or analysis. This state is vulnerable to memory scraping or similar attacks during runtime operations. Protection relies on mechanisms like role-based access control (RBAC), which assigns permissions based on predefined user roles within an organization, limiting exposure to only authorized personnel and processes. RBAC integrates with operating systems and applications to enforce least-privilege principles, reducing the attack surface during data manipulation.

Overarching security for these states is framed by the CIA triad—confidentiality, integrity, and availability—which provides a foundational model for protection. Confidentiality prevents unauthorized disclosure through techniques like AES-256; integrity ensures accuracy and an unaltered state via cryptographic hashing functions such as SHA-256, a 256-bit secure hash algorithm that produces a unique digest for detecting changes. Availability keeps data accessible through redundancy, such as RAID configurations or distributed storage, guarding against denial-of-service disruptions. Recent analyses underscore the heightened risks to data in transit and in use compared to static storage. Digital data in these lifecycle states relies on binary representation (0s and 1s) for underlying storage and processing, as outlined in the Binary States section.
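
A minimal Python example of the integrity mechanism described above, using the standard library's SHA-256; the record contents are hypothetical, and real systems would combine such hashing with encryption and key management.

```python
import hashlib

record = b"account=42;balance=100.00"
digest = hashlib.sha256(record).hexdigest()    # 256-bit digest as 64 hex characters

# Any modification, however small, yields a completely different digest.
tampered = b"account=42;balance=900.00"
print(digest == hashlib.sha256(record).hexdigest())    # True: data unchanged
print(digest == hashlib.sha256(tampered).hexdigest())  # False: integrity violated
```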

Properties

Core Properties

Digital data possesses several inherent characteristics that define its nature and utility in computational systems. These core properties—exact reproducibility, granularity, determinism, and structured symbolism with synchronization—enable reliable storage, transmission, and processing, distinguishing digital representations from continuous analogs.

One fundamental property is exact reproducibility, which allows digital data to be copied perfectly without degradation or loss of fidelity. Unlike analog signals, where noise accumulates during each duplication—leading to progressive degradation—digital data consists of discrete states that can be replicated identically using simple bitwise operations. This ensures that multiple copies remain indistinguishable from the original, supporting applications like archival storage and software distribution where consistency is paramount.

Granularity refers to the discrete, hierarchical structure of digital data, organized into manipulable units ranging from the smallest bit to larger aggregates like bytes, files, and datasets. A bit, the smallest unit, represents a single binary value (0 or 1) and serves as the foundation for all higher-level structures; for instance, eight bits form a byte, which can encode characters or instructions, while files group these into named collections with metadata for organization. This layered discreteness facilitates precise operations, such as selective editing at the byte level or bulk transfer at the file level, without affecting unrelated portions.

Determinism in digital data processing ensures that identical inputs always produce identical outputs, governed by predictable rules like Boolean algebra. In digital circuits, operations rely on logic gates that implement Boolean functions—such as AND, OR, and NOT—yielding outputs solely dependent on input values, independent of timing variations or implementation details. This predictability underpins the reliability of algorithms and hardware, allowing engineers to verify system behavior through formal analysis and testing.

Digital data functions as structured symbols interpretable by machines through defined syntax and synchronization mechanisms. Formats like JSON use schemas to enforce rules on data organization, such as key-value pairs and nested objects, enabling parsers to validate and extract information consistently across systems. Synchronization is achieved via headers in data packets, which include fields for alignment (e.g., sequence numbers or timestamps), and embedded clocks or self-clocking encoding schemes that maintain timing, ensuring receivers correctly interpret symbol sequences without drift.
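
Both reproducibility and determinism can be demonstrated directly. The following Python sketch (illustrative data, standard library only) hashes a bit-for-bit copy and parses the same JSON document twice; the sensor record is a made-up example.

```python
import hashlib
import json

original = bytes(range(256))                  # arbitrary digital data
copy = bytes(original)                        # bit-for-bit duplication

# Exact reproducibility: the copy is indistinguishable from the original.
print(hashlib.sha256(original).digest() == hashlib.sha256(copy).digest())  # True

# Determinism with structured symbols: parsing the same JSON always yields
# the same value, because the syntax rules fully determine the result.
doc = '{"sensor": "t1", "reading": 21.5}'
print(json.loads(doc) == json.loads(doc))     # True
```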

Operational Properties

Operational properties of digital data encompass techniques for managing errors, optimizing storage and transmission efficiency, and ensuring security during handling. These operations are essential for reliable data processing in computing and communication systems, allowing digital data to be manipulated without loss of integrity or excessive resource consumption.

Error detection and correction mechanisms are fundamental to maintaining data accuracy during storage and transmission. Parity bits provide a simple method for single-bit error detection by appending a check bit that ensures the total number of 1s in a data word is even or odd; for even parity, the parity bit is the XOR of all data bits, enabling detection of any odd number of errors but not correction. More robust error correction uses Hamming codes, such as the (7,4) code, which employs 3 parity bits to protect 4 data bits in a 7-bit codeword, achieving a minimum Hamming distance of 3 to correct single-bit errors and detect double-bit errors; the syndrome computed from parity checks identifies the erroneous bit position. Cyclic redundancy checks (CRC) offer efficient detection of burst errors through polynomial division in binary arithmetic: the message is treated as a polynomial multiplied by x^k (where k is the degree of the generator polynomial), then divided by the generator G(x), and the CRC is the remainder of degree less than k, appended to the message for transmission; at the receiver, division yielding a zero remainder confirms integrity.

Data compression reduces storage and transmission requirements by exploiting redundancies. Lossless compression preserves all original data, achieving ratios of 2:1 to 4:1 for text and similar structured data through methods like Lempel-Ziv-Welch (LZW), used in compressed archives, which builds a dictionary of repeated phrases to encode them with shorter codes based on entropy reduction. Lossy compression, suitable for perceptual media, discards less noticeable information, enabling higher ratios such as 100:1 for video by prioritizing human visual fidelity, as in JPEG for images, where transform coefficients are quantized to remove high-frequency details below perceptual thresholds.

Transmission of digital data over communication channels is constrained by physical limits and requires modulation to encode bits onto carrier signals. The Shannon-Hartley theorem defines the maximum error-free rate C as

C = B \log_2(1 + \mathrm{SNR}),

where B is the channel bandwidth in Hz and SNR is the signal-to-noise ratio, establishing a theoretical upper bound on throughput. Common schemes include amplitude-shift keying (ASK), which varies carrier amplitude to represent binary states (e.g., presence for 1, absence for 0), and frequency-shift keying (FSK), which shifts the carrier frequency between two values for each bit, providing robustness against noise at the cost of wider bandwidth.

Security operations on digital data, particularly hashing, ensure integrity by producing fixed-size digests that detect tampering. The MD5 algorithm, once widely used, became vulnerable to collision attacks after 2005, with practical exploits demonstrated by 2008 allowing forged data with identical hashes, prompting its deprecation for integrity checks. As of 2025, both the SHA-2 family (e.g., SHA-256) and SHA-3, standardized in 2015 as a sponge-based construction resistant to known attacks, are approved by NIST for integrity verification in protocols and systems, with SHA-256 remaining widely used; SHA-3 offers variants like SHA3-256 for 256-bit outputs.
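
As a worked example of the (7,4) Hamming code described above, the following Python sketch encodes 4 data bits, injects a single-bit error, and corrects it via the syndrome. The bit ordering follows the conventional p1, p2, d1, p3, d2, d3, d4 layout; this is an illustration, not an optimized implementation.

```python
def hamming74_encode(d):
    """Encode 4 data bits [d1, d2, d3, d4] into a 7-bit Hamming codeword."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4          # covers codeword positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4          # covers codeword positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4          # covers codeword positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]   # positions 1..7

def hamming74_decode(c):
    """Correct a single-bit error (if any) and return the 4 data bits."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3       # 0 means no error; else the error position
    if syndrome:
        c[syndrome - 1] ^= 1              # flip the erroneous bit
    return [c[2], c[4], c[5], c[6]]

data = [1, 0, 1, 1]
code = hamming74_encode(data)
code[4] ^= 1                              # inject a single-bit error at position 5
print(hamming74_decode(code) == data)     # True: the error was corrected
```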

History

Early Systems

The origins of digital data systems can be traced to ancient mechanical precursors that employed discrete, symbolic representations of information. Mechanical devices exemplified early data manipulation. The abacus, originating around 2400 BCE in Babylonia, used movable beads on rods to represent numerical values in a positional system, typically base-10, facilitating arithmetic through positional manipulation and serving as a precursor to computational tools. Binary arithmetic itself was formalized centuries later by Gottfried Wilhelm Leibniz in 1703, who in his treatise Explication de l'Arithmétique Binaire outlined addition, subtraction, and multiplication using only the digits 0 and 1, providing a rigorous mathematical foundation for systems reliant on two-state logic. This work emphasized binary's simplicity and universality, influencing subsequent digital developments.

The 19th century saw the emergence of engineered systems that explicitly digitized information for communication and automation. In 1801, Joseph Marie Jacquard invented the Jacquard loom, which employed punched cards—perforated with holes to indicate presence (1) or absence (0)—to control the weaving of complex textile patterns, marking an early use of binary-encoded instructions for mechanical control. Building on this, Charles Babbage designed the Analytical Engine in 1837, a proposed general-purpose mechanical computer that would use punched cards for both inputting data and programming operations, allowing conditional branching and looping in a manner foreshadowing modern computing. Concurrently, Samuel F. B. Morse patented the electric telegraph in 1837, incorporating Morse code, a system of short dots and long dashes transmittable as electrical pulses, which paralleled binary signaling by distinguishing two distinct states for encoding alphabetic and numeric characters. Theoretical advancements complemented these inventions, with George Boole publishing The Mathematical Analysis of Logic in 1847, introducing Boolean algebra as a symbolic system for logical operations on variables (true/false or 1/0), including conjunction, disjunction, and negation, which became indispensable for digital circuit design. In the 20th century, Alan Turing's 1936 paper on computable numbers established foundational concepts for digital computation, and John von Neumann's 1945 report defined the stored-program architecture crucial for processing digital data.

The transition to electronic precursors occurred in the late 1930s and early 1940s. Konrad Zuse completed the Z1 in 1938, the world's first programmable binary computer, a mechanical device using sliding plates for computation and punched film for instructions, though unreliable due to its moving parts; Zuse's collaborator Helmut Schreyer advocated for relays to enable electronic switching, influencing later iterations like the relay-based Z3 in 1941. The Colossus, operational from late 1943, represented the first large-scale programmable electronic digital computer, built by Tommy Flowers using approximately 1,600 to 2,400 vacuum tubes for high-speed cryptanalytic tasks at Bletchley Park, though it lacked a stored-program architecture.

Modern Developments

The transistor era marked a pivotal shift in digital data handling, beginning with the invention of the point-contact transistor in December 1947 by John Bardeen and Walter Brattain at Bell Laboratories, under the direction of William Shockley, which enabled reliable amplification and switching of electronic signals essential for digital circuits. This breakthrough was followed by the development of the junction transistor in 1948, improving reliability and manufacturability for computational applications. The advent of the integrated circuit in 1958, independently conceived by Jack Kilby at Texas Instruments and realized in practice by Robert Noyce at Fairchild Semiconductor, allowed multiple transistors, resistors, and capacitors to be fabricated on a single chip, dramatically increasing data processing density and efficiency. In 1965, Intel co-founder Gordon Moore observed in his seminal paper that the number of transistors on an integrated circuit would roughly double every year, a prediction revised to every two years in 1975, driving exponential growth in digital data manipulation capabilities. This "Moore's Law" held through advances in fabrication and materials until approximately 2025, when physical limits in transistor scaling led to a plateau, shifting focus to architectural innovations like 3D stacking for continued performance gains.

The digital revolution accelerated with the establishment of ARPANET in 1969 by the U.S. Department of Defense's Advanced Research Projects Agency (ARPA), which implemented packet switching to enable reliable data transmission across distributed networks, laying the groundwork for interconnected digital systems. By the 1990s, the transition to the commercial internet transformed digital data accessibility, with the National Science Foundation's decommissioning of its NSFNET backbone in 1995 allowing private-sector expansion, spurred by the release of the World Wide Web protocols into the public domain in 1993 and the growth of Internet Service Providers offering public access. This era also witnessed explosive growth in storage technology, evolving from the IBM 305 RAMAC's 3.75 megabytes on 50 platters in 1956 to solid-state drives reaching capacities of up to 122.88 terabytes as of 2025, enabling the storage and retrieval of vast digital archives at unprecedented speeds and densities.

Advancements in big data frameworks and machine learning further revolutionized digital data management starting in the mid-2000s, with the release of Apache Hadoop in 2006 providing an open-source distributed storage and processing system capable of handling petabyte-scale datasets across clusters of commodity hardware. This was complemented by the launch of the ImageNet dataset in 2010, which curated over 14 million annotated images across 21,000 categories, serving as a foundational resource for training machine learning models in visual recognition tasks. By 2025, global data creation had reached approximately 181 zettabytes annually, driven by IoT devices, social media, and cloud services, according to industry forecasts. AI training processes now routinely operate on petabyte-scale datasets, with large language models requiring distributed storage systems to process and fine-tune massive corpora for improved accuracy and generalization.

Emerging technologies are pushing digital data beyond classical binary limits, exemplified by quantum bits (qubits) that leverage superposition and entanglement to represent multiple states simultaneously, as outlined in IBM's 2025 roadmap, featuring the Nighthawk processor with 120 qubits to advance error-corrected computations. These systems, such as the 156-qubit Heron processor integrated into modular architectures, enable exploratory applications in optimization and simulation that classical computers struggle with due to exponential complexity.
In parallel, DNA-based storage offers ultra-high density, with Microsoft Research prototypes potentially storing up to 215 petabytes per gram of synthetic DNA, far surpassing electronic media in longevity and compactness for archival purposes.

    Using DNA to archive data is an attractive possibility because it is extremely dense (up to about 1 exabyte per cubic millimeter) and durable (half-life of over ...News & features · Publications · People · GroupsMissing: prototypes gram