
Bit depth

Bit depth refers to the number of binary digits (bits) used to represent the value of each sample in a digital signal, such as the amplitude in audio or the color in images, directly influencing the precision, dynamic range, and fidelity of the representation. In essence, it quantizes continuous analog signals into finite steps, where each additional bit roughly doubles the number of possible values (2^n, with n as the bit depth), enabling finer gradations but also increasing storage requirements.

In imaging and video, bit depth typically describes the bits allocated per pixel or per color channel (e.g., red, green, and blue in RGB), determining the range of tones or colors that can be captured and displayed. For instance, an 8-bit channel supports 256 levels, yielding 16.7 million colors in a 24-bit RGB image (8 bits × 3 channels), which is standard for most consumer displays and sufficient for photographic reproduction without visible banding in typical viewing conditions. Higher depths, such as 16 bits per channel (65,536 levels), provide enhanced tonal gradations for professional editing, reducing posterization in gradients and supporting high dynamic range (HDR) workflows, while 32-bit floating-point formats allow virtually unlimited dynamic range for advanced compositing. These variations are crucial in fields like photography and cinematography, where insufficient bit depth can introduce quantization artifacts, limiting post-processing flexibility.

In digital audio, bit depth defines the resolution of each sample's amplitude, affecting the signal-to-noise ratio (SNR) and the ability to capture quiet sounds without distortion from the noise floor. Each bit contributes approximately 6 dB to the dynamic range, so 16-bit audio—common in compact discs (CDs)—offers 65,536 amplitude levels and a theoretical dynamic range of 96 dB, adequate for consumer playback but prone to audible quantization noise in quiet passages. Professional recording standards favor 24-bit depth, providing about 16.8 million levels and a 144 dB theoretical dynamic range, which pushes the noise floor to inaudible levels and allows greater headroom for mixing without clipping. Beyond that, 32-bit floating-point audio extends the representable range to over 1,500 dB, primarily used in digital audio workstations (DAWs) for internal processing to preserve accuracy during effects application.

Overall, bit depth is a foundational parameter in digital media, balancing fidelity against computational and storage demands, with applications spanning consumer entertainment to scientific imaging and archival preservation. Advances in hardware and codecs continue to support higher depths, enabling more lifelike reproductions, though human perception limits the practical benefits beyond 24-bit for most scenarios.

Fundamentals

Definition

Bit depth refers to the number of binary digits, or bits, used to represent the value of each sample in a digitized signal, such as the amplitude of an audio waveform or the brightness and color components of an image pixel. This quantization process assigns discrete numerical values to continuous signal variations, determining the precision with which the original analog information can be captured and reproduced. The bit depth directly determines the number of possible discrete levels available for representation, calculated as 2^n, where n is the bit depth. For example, an 8-bit depth provides 2^8 = 256 levels, allowing for 256 distinct amplitude or intensity steps. In contrast to sampling rate, which specifies the frequency of capturing samples over time to preserve temporal details, bit depth governs the vertical resolution by quantizing the amplitude range of each individual sample. Foundational examples illustrate this: a 1-bit depth yields only two levels (e.g., on/off states), while a 16-bit depth offers 2^16 = 65,536 levels for much finer approximation of the signal.
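The 2^n relationship can be verified directly; a minimal sketch:

```python
# Number of representable levels for common bit depths: levels = 2 ** n
for n in (1, 8, 16, 24):
    print(f"{n:2d}-bit: {2 ** n:,} levels")
# →  1-bit: 2
# →  8-bit: 256
# → 16-bit: 65,536
# → 24-bit: 16,777,216
```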

Mathematical Representation

Bit depth b determines the number of discrete quantization levels L available to represent an analog signal in digital form, given by the formula L = 2^b. For example, an 8-bit representation yields 2^8 = 256 levels, allowing finer granularity than a 4-bit representation with only 16 levels. This exponential relationship underscores how each additional bit doubles the resolution, enabling more precise approximations of continuous values.

The quantization process introduces an error, modeled as the difference between the original signal and its quantized value and bounded by ±Δ/2, where Δ is the quantization step size defined as Δ = (full-scale range) / 2^b. For a full-scale range spanning from -X_max to X_max, the step size thus becomes Δ = 2X_max / 2^b, assuming uniform quantization across the range. This error, often treated as additive noise, has a variance of Δ²/12 under the assumption of a uniform error distribution within each bin.

A key performance metric is the signal-to-quantization-noise ratio (SQNR), which quantifies the ratio of signal power to quantization noise power and approximates 6.02b + 1.76 dB for uniform quantization of a full-scale sine wave. This formula derives from the signal power of 1/2 for a unit-amplitude sine and the noise power of Δ²/12, yielding an improvement of approximately 6 dB per bit. Higher bit depths thus steadily enhance SQNR, though practical limits arise from other noise sources.

In binary representation, bit depth b encodes values using b bits, with distinctions between unsigned and signed formats. Unsigned representations span 0 to 2^b - 1, suitable for non-negative signals like light intensities. Signed representations, common for signals with negative amplitudes such as audio waveforms, employ two's complement, where the most significant bit indicates sign (0 for positive, 1 for negative) and the range is -2^(b-1) to 2^(b-1) - 1. To form the two's complement of a negative value, one inverts all bits of the positive magnitude and adds one, facilitating arithmetic operations without separate sign handling.

While bit depth inherently limits precision—the smallest distinguishable change being Δ—accuracy, or faithfulness to the original signal, can be preserved through dithering. Dithering adds low-level noise before quantization to randomize the error, decorrelating it from the signal and preventing artifacts like granular noise or limit cycles. This technique trades a small increase in overall noise for improved linearity, effectively extending perceived resolution beyond the nominal bit depth.
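The formulas above can be exercised numerically. The sketch below quantizes a full-scale sine, measures its SQNR against the 6.02b + 1.76 dB prediction, and shows a simple TPDF (triangular probability density function) dither as described in the text; the 997-cycle test frequency is an arbitrary choice to spread sample phases evenly:

```python
import math
import random

def quantize(x, bits):
    """Uniformly quantize x in [-1, 1) to a grid of 2**bits signed levels."""
    scale = 2 ** (bits - 1)
    code = max(-scale, min(scale - 1, round(x * scale)))
    return code / scale

def quantize_dithered(x, bits):
    """Add TPDF dither spanning +/-1 LSB before rounding, decorrelating the
    quantization error from the signal at the cost of slightly more noise."""
    scale = 2 ** (bits - 1)
    d = random.random() - random.random()  # triangular PDF on (-1, 1) LSB
    code = max(-scale, min(scale - 1, round(x * scale + d)))
    return code / scale

def sqnr_db(bits, n=100_000):
    """Measured SQNR for a full-scale sine; theory predicts 6.02*bits + 1.76 dB."""
    sig = err = 0.0
    for i in range(n):
        x = math.sin(2 * math.pi * 997 * i / n)
        e = quantize(x, bits) - x
        sig += x * x
        err += e * e
    return 10 * math.log10(sig / err)

print(f"16-bit: measured {sqnr_db(16):.1f} dB, theory {6.02 * 16 + 1.76:.2f} dB")
```

The measured value lands close to the theoretical 98.08 dB, since the uniform-error assumption holds well for a busy full-scale signal.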

Applications in Audio

Audio Bit Depth

In digital audio, bit depth specifies the number of bits allocated to each sample in pulse-code modulation (PCM), determining the precision with which amplitude levels of the analog sound wave are quantized into discrete digital values. This quantization process maps continuous voltage variations to a finite set of levels, where higher bit depths allow for finer gradations and reduced quantization error. The compact disc standard employs 16-bit PCM, yielding 65,536 possible levels per sample for each of the two channels. In contrast, professional formats commonly utilize 24-bit depth, expanding to over 16 million levels to capture subtler nuances during recording and mastering. Bit depth addresses amplitude resolution independently of sampling rate, which governs temporal sampling frequency as per the Nyquist-Shannon sampling theorem; together, they define the overall fidelity of PCM representation, with bit depth covering vertical (amplitude) quantization rather than horizontal (time) discretization. Linear PCM maintains a fixed bit depth throughout, preserving all quantized samples without alteration, as in uncompressed WAV or AIFF files. Lossy compressed formats like MP3, however, apply perceptual coding to achieve lower bitrates by psychoacoustically discarding inaudible spectral components, thereby reducing the effective resolution compared to linear PCM. To mitigate quantization distortion—manifesting as harmonic artifacts or granular noise—dithering introduces a low-level, uncorrelated noise signal prior to requantization, randomizing errors and linearizing the process for more natural representation. Techniques such as high-pass noise-shaped dither further minimize audible noise by concentrating it in less sensitive frequency regions, enhancing perceived audio quality without significantly raising the overall noise level.
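Linear PCM encoding as described above can be sketched with Python's standard library alone: generate a sine, scale it to signed 16-bit integers, and write a WAV file. The tone frequency, duration, and output filename are arbitrary choices for illustration:

```python
import math
import struct
import wave

RATE, BITS, SECONDS, FREQ = 44_100, 16, 1, 440
peak = 2 ** (BITS - 1) - 1  # 32767, the largest signed 16-bit sample value

frames = bytearray()
for i in range(RATE * SECONDS):
    sample = math.sin(2 * math.pi * FREQ * i / RATE)   # analog model in [-1, 1]
    frames += struct.pack("<h", int(sample * peak))    # little-endian 16-bit PCM

with wave.open("tone.wav", "wb") as w:
    w.setnchannels(1)             # mono
    w.setsampwidth(BITS // 8)     # bytes per sample: 2 for 16-bit
    w.setframerate(RATE)
    w.writeframes(bytes(frames))
```

Switching to 24-bit would mean packing three bytes per sample against a peak of 2^23 - 1; the structure is otherwise identical.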

Impact on Sound Quality

Bit depth fundamentally influences the dynamic range of audio signals, with each bit contributing approximately 6 dB of resolution. In a 16-bit system, this yields a theoretical dynamic range of about 96 dB, spanning from the noise floor near silence to the maximum full-scale level without clipping. This range accommodates the vast majority of musical and speech content, where peak levels rarely exceed 80-90 dB above the threshold of hearing in typical environments.

Quantization inherently produces noise as the waveform is approximated to discrete levels, resulting in a noise floor at approximately -(6.02b + 1.76) dB relative to full scale, where b is the number of bits. For 16-bit audio, this positions the noise floor around -98 dBFS, which falls below the audible threshold for most listeners within the 20 Hz to 20 kHz band, especially when masked by the signal itself. At lower bit depths, such as 8-bit, the noise floor becomes more prominent and can degrade perceived clarity in quiet passages.

Beyond noise, quantization can introduce distortion artifacts, particularly harmonic distortion from signal truncation during rounding or clipping. These nonlinear effects manifest as unwanted overtones that alter the original waveform's timbre. Noise shaping techniques address this by shifting quantization error toward higher frequencies where the ear is less sensitive, effectively reducing in-band noise while preserving overall fidelity.

Perceptually, bit depth interacts with human hearing limits, which span roughly 120-130 dB from the faintest detectable sounds to painful levels. A 16-bit depth suffices for consumer applications, as its 96 dB range covers typical dynamic contrasts in music and speech without audible noise under normal conditions. In contrast, 24-bit audio extends to 144 dB, surpassing human perceptual thresholds but offering critical headroom—up to 48 dB more than 16-bit—for recording and mixing, allowing gain changes and effects without introducing additional quantization errors or clipping. Studies indicate that while subtle differences may be discernible with training, 24-bit primarily benefits professional workflows rather than direct listening.

The primary trade-off with higher bit depths is increased storage and bandwidth demands; for instance, 24-bit files are 50% larger than equivalent 16-bit files at the same sample rate, escalating data requirements without commensurate perceptual improvements for end-user playback beyond 16-bit. This makes 16-bit a practical standard for distribution, balancing quality and efficiency, while reserving deeper bits for capture and production, where the extra headroom prevents cumulative errors.
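The 50% figure follows directly from the byte counts, since uncompressed PCM size scales linearly with bit depth; a quick check:

```python
def pcm_bytes(seconds, rate, bits, channels):
    """Uncompressed PCM size: samples/sec × duration × channels × bytes/sample."""
    return seconds * rate * channels * bits // 8

cd = pcm_bytes(60, 44_100, 16, 2)      # one minute of CD-quality stereo
hires = pcm_bytes(60, 44_100, 24, 2)   # the same minute at 24-bit
print(cd, hires, f"{hires / cd - 1:.0%} larger")
# → 10584000 15876000 50% larger
```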

Applications in Imaging

Color Bit Depth

In digital imaging, color bit depth refers to the number of bits used to represent the intensity of each color channel, typically red, green, and blue (RGB), per pixel. This determines the precision and range of color values that can be captured and displayed. For instance, 8 bits per channel allows 256 discrete levels (2^8) for each channel, resulting in a total bit depth of 24 bits per pixel across the three channels.

Common color models leverage specific bit depths to balance storage efficiency and visual fidelity. The sRGB standard, widely used for web and consumer displays, employs 8 bits per channel, enabling approximately 16.7 million distinct colors (256^3). In contrast, high dynamic range (HDR) imaging often utilizes 10 bits per channel, supporting about 1.07 billion colors (1024^3) to accommodate wider gamuts and brighter highlights.

Higher bit depths enhance color gamut representation and precision by providing finer gradations, particularly in smooth transitions like skies or skin tones. With 8 bits per channel, quantization steps can lead to visible banding or posterization in gradients, where subtle color shifts appear as abrupt steps due to limited levels. Increasing to 10 or more bits per channel mitigates this, distributing values more evenly to reduce artifacts and improve perceptual smoothness.

Channel configurations vary by application, with true color defined as 24-bit RGB (8 bits per channel), which approximates the full range of perceptible colors for most consumer uses. Professional workflows, such as those in photography and prepress, often employ deep color modes like 30-bit (10 bits per channel) or 48-bit (16 bits per channel), allowing for extensive color manipulation without introducing banding during editing.

Quantization in color spaces involves encoding these bit values to align with human visual perception, contrasting linear encoding—which represents luminance proportionally—with gamma-corrected encoding. Linear encoding preserves physical accuracy but inefficiently allocates limited bits to brighter tones, potentially causing quantization errors in shadows; gamma correction applies a nonlinear curve (typically around 2.2 for sRGB) to devote more levels to darker areas, optimizing bit depth usage and minimizing visible banding while matching the eye's roughly logarithmic sensitivity to brightness.
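The shadow-allocation argument can be made concrete by counting how many of the 256 codes in an 8-bit encoding decode to dark tones. The sketch below uses a simple power-law gamma of 2.2 as an approximation of the sRGB transfer curve and an arbitrary 10% luminance threshold for "shadows":

```python
GAMMA = 2.2  # approximation of the sRGB transfer curve

def codes_below(threshold, gamma):
    """Count 8-bit codes whose decoded linear luminance falls below threshold."""
    count = 0
    for code in range(256):
        luminance = (code / 255) ** gamma  # decode: code value -> linear light
        if luminance < threshold:
            count += 1
    return count

print("linear encoding:", codes_below(0.10, 1.0), "of 256 codes for shadows")
print("gamma 2.2:      ", codes_below(0.10, GAMMA), "of 256 codes for shadows")
# → linear encoding: 26 of 256 codes for shadows
# → gamma 2.2:       90 of 256 codes for shadows
```

Gamma encoding devotes roughly 3.5 times as many codes to the darkest tenth of the luminance range, which is why shadow banding is far less visible in gamma-encoded 8-bit images.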

Grayscale and Other Modes

In grayscale imaging, each pixel is represented by a single channel of intensity values, where the bit depth determines the number of distinguishable shades of gray. An 8-bit grayscale image provides 256 levels of gray, ranging from pure black (0) to pure white (255), which is standard for most general-purpose digital images due to its balance of quality and storage efficiency. Higher bit depths, such as 16-bit, offer 65,536 shades, enabling finer gradations essential for applications requiring high precision, like scientific visualization or medical diagnostics.

Indexed color modes use a palette-based approach to represent images with a limited color set, effectively reducing the bit depth of the image data while referencing a separate color lookup table. In an 8-bit indexed format, each pixel requires only 8 bits to select one of 256 predefined colors from the palette, allowing for compact storage in scenarios where a full color spectrum is unnecessary, such as web graphics or legacy display systems.

High-bit modes, including 12-bit and 14-bit formats in camera raw files, capture a wider tonal range per channel, providing greater latitude for post-processing adjustments without introducing visible artifacts. A 12-bit raw file can encode up to 4,096 levels per channel, while 14-bit extends this to 16,384 levels, preserving subtle tonal variations in highlights and shadows that would otherwise be lost in lower-depth formats.

Specialized applications leverage bit depth in additional channels or domains, such as alpha channels for transparency, where the alpha value typically matches the bit depth of the primary channels to control opacity levels smoothly. In medical imaging, 16-bit depth is common for CT scans, accommodating the full range of Hounsfield units (from -1,024 to over 3,000) to differentiate tissue densities accurately without truncation.

Lower bit depths conserve storage and processing resources but can lead to contouring artifacts—visible steps or bands in smooth gradients—due to insufficient quantization levels for gradual transitions, similar to quantization noise in audio but manifesting as spatial discontinuities in images.
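Contouring is easy to reproduce by requantizing 8-bit gray values to a lower depth and scaling back for display; a smooth ramp collapses onto a handful of plateaus. A minimal sketch:

```python
def reduce_depth(value8, bits):
    """Requantize an 8-bit gray value to `bits` bits and re-expand to 0-255."""
    q = value8 >> (8 - bits)                    # keep only the top `bits` bits
    return round(q * 255 / ((1 << bits) - 1))   # rescale for display

ramp = list(range(0, 256, 16))                  # a smooth 16-step gray ramp
print([reduce_depth(v, 3) for v in ramp])       # collapses to 8 distinct levels
```

At 3 bits the 256-level ramp survives as only 8 output values, and the jumps between them are the visible "bands" of a contoured gradient.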

Bit Depth in Video and Other Media

Video Standards

Digital video standards define bit depth as the number of bits used to represent the color or luma value for each pixel or sample in a video frame, typically ranging from 8 bits to 12 bits or more depending on the format and resolution. In standard-definition (SD) and high-definition (HD) video, 8-bit depth is commonly used under the BT.709 standard, which supports 16.7 million colors per pixel in RGB or Y'CbCr color spaces, sufficient for most broadcast and consumer applications. For ultra-high-definition (UHD) and high-dynamic-range (HDR) content, higher bit depths like 10-bit or 12-bit become essential to accommodate wider color gamuts and reduce visible banding artifacts in gradients.

The evolution of video standards has progressively increased bit depth to match advancing display technologies and content demands. The BT.709 standard, established by the International Telecommunication Union (ITU) in 1990 and revised in subsequent years, primarily relies on 8-bit processing for SD and HD signals, limiting each channel to 256 levels. In contrast, the BT.2020 standard, introduced in 2012 for 4K and 8K UHD, supports 10-bit and 12-bit depths to enable HDR and wider color gamuts, allowing over 1 billion colors at 10-bit and up to 68.7 billion at 12-bit, which is critical for professional production and streaming services. Proprietary formats like Dolby Vision extend this further, using up to 12 bits per channel for enhanced contrast and color accuracy in compatible HDR ecosystems.

In Y'CbCr color spaces, widely used in video encoding to separate luma (Y) from chroma (Cb and Cr), bit depth allocation interacts with chroma subsampling ratios that reduce the stored data per pixel. For instance, 4:2:2 subsampling allocates full bit depth (e.g., 10 bits) to the Y channel while halving the horizontal resolution of Cb and Cr, lowering the average number of stored bits per pixel while preserving luma detail in motion-heavy scenes without excessive bandwidth. This approach, standardized in BT.601 for SD and extended in BT.709 and BT.2020, optimizes storage and transmission while maintaining perceptual quality, though 4:4:4 sampling retains full resolution for all channels in high-end applications like digital cinema.

HDR video standards mandate a minimum of 10-bit depth to support expanded dynamic ranges exceeding 1,000 nits of brightness, preventing quantization errors like banding in dark or transitional areas that are prominent in 8-bit footage. Formats such as HDR10 and HLG (Hybrid Log-Gamma), specified in ITU-R BT.2100 alongside the BT.2020 color space, leverage 10-bit processing to deliver peak brightness up to a theoretical 10,000 nits, enhancing realism on streaming platforms.

Video compression codecs handle bit depth differently, impacting final output quality. H.264/AVC, a staple for video distribution since 2003, is most widely deployed in 8-bit profiles; 10-bit support exists in its High 10 profile but has limited native support in consumer hardware. HEVC (H.265), introduced in 2013, natively accommodates 10-bit and 12-bit depths, enabling better compression efficiency for HDR content by reducing bitrate needs by up to 50% compared to H.264 at equivalent quality, as verified in ITU evaluations. This makes HEVC a preferred codec for modern broadcast and streaming, balancing bit depth fidelity with practical delivery constraints.
Standard/Format | Typical Bit Depth | Resolution Support | Key Use Case | Source
--------------- | ----------------- | ------------------ | ------------ | ------
BT.709 | 8-bit | SD/HD | Broadcast TV | ITU-R BT.709
BT.2020 | 10/12-bit | UHD 4K/8K | HDR streaming | ITU-R BT.2020
Dolby Vision | 12-bit | UHD/Cinema | HDR streaming/OTT | Dolby Vision specs
H.264/AVC | 8-bit (10-bit in High 10 profile) | SD-HD | Legacy video distribution | ITU-T H.264
HEVC/H.265 | 10/12-bit | UHD | Modern streaming | ITU-T H.265
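The effect of chroma subsampling on stored bits per pixel, described above for 4:2:2, reduces to simple arithmetic: each pixel always stores one luma sample, while the two chroma channels together contribute a fraction of samples set by the subsampling scheme. A small sketch:

```python
def bits_per_pixel(bit_depth, subsampling):
    """Average stored bits per pixel in Y'CbCr at a given chroma subsampling.
    Chroma samples (Cb + Cr combined) per luma sample:
    4:4:4 -> 2.0, 4:2:2 -> 1.0 (half horizontal res), 4:2:0 -> 0.5."""
    chroma_per_pixel = {"4:4:4": 2.0, "4:2:2": 1.0, "4:2:0": 0.5}[subsampling]
    return bit_depth * (1 + chroma_per_pixel)

for scheme in ("4:4:4", "4:2:2", "4:2:0"):
    print(f"10-bit {scheme}: {bits_per_pixel(10, scheme):.0f} bits/pixel")
# → 10-bit 4:4:4: 30 bits/pixel
# → 10-bit 4:2:2: 20 bits/pixel
# → 10-bit 4:2:0: 15 bits/pixel
```

So 10-bit 4:2:2 stores a third less data per pixel than 4:4:4 while keeping full-precision luma, which is the trade-off broadcast formats exploit.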

Storage and Processing Implications

Higher bit depth in video directly impacts storage requirements, as each pixel or sample requires more bits to represent the increased number of tonal levels. For uncompressed video, file size is the product of horizontal pixels, vertical pixels, frame rate, duration, and bits per pixel, so size scales linearly with bit depth. For instance, increasing from 8-bit to 10-bit per channel enlarges the file by 25%, assuming all other parameters remain constant, while a further shift from 10-bit to 12-bit adds another 20%. This scaling arises because 10-bit encoding uses 10 bits per color sample compared to 8, demanding proportionally more storage for the same resolution and duration. In practical workflows, such as raw video capture, 10-bit files from professional cameras can be 25% larger than equivalent 8-bit versions before compression.

Processing higher bit depth video imposes greater computational demands on CPU and GPU resources, primarily due to the need for higher-precision arithmetic. Operations like color grading, filtering, and encoding in 10-bit or 12-bit workflows often require wider integer or floating-point computations to handle the expanded range, significantly increasing cycle counts; for example, software 10-bit HEVC encoding can take several times longer than 8-bit encoding at constant quality. This overhead is particularly evident in real-time applications and on resource-constrained devices.

Hardware limitations further constrain high bit depth video handling, with GPU architecture a primary factor in rendering pipelines. Most consumer GPUs from 2017 onward, such as NVIDIA's RTX series and AMD's RX series, natively support 10-bit processing and output via hardware decoders like NVDEC or VCN, enabling smooth 10-bit playback. However, bottlenecks arise in VRAM capacity and bandwidth; for instance, editing 10-bit footage on GPUs with under 8 GB of VRAM can cause stuttering due to frequent data transfers over PCIe, especially in multi-layer timelines. Professional workflows often rely on workstation-grade GPUs for reliable 10-bit rendering without falling back to slower CPU processing.

In video transmission and streaming, bit depth influences bandwidth allocation, creating trade-offs between efficiency and fidelity. 8-bit streams prioritize lower bandwidth—typically 20-30% less than 10-bit equivalents at the same resolution and compression—for broad compatibility and reduced latency, making them suitable for standard dynamic range (SDR) delivery over limited networks. Conversely, 10-bit streams demand higher bitrates to preserve gradient smoothness in high dynamic range (HDR) content, mitigating banding artifacts during compression with codecs like HEVC. Platforms like Netflix employ 10-bit HDR10 for premium streaming, balancing quality against data costs, while 8-bit remains the default for efficiency in mobile or low-bandwidth scenarios.

As of 2025, trends point toward expanded adoption of 12-bit and higher processing in AI-enhanced media pipelines to minimize artifacts in upscaled or synthesized content. AI-driven techniques, such as neural-network-based super-resolution and denoising, can leverage higher precision to better reconstruct details from lower-depth sources in HDR workflows. Codec advancements like Versatile Video Coding (VVC) further enable efficient 12-bit handling, supporting artifact-free enhancements in streaming and virtual production as hardware support for high-precision operations improves.
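The linear-scaling claim for uncompressed video size can be checked with the formula given above (pixels × channels × bit depth × frame rate × duration); the one-minute UHD example below uses 3 color channels and ignores chroma subsampling for simplicity:

```python
def raw_video_gb(width, height, fps, seconds, bits_per_channel, channels=3):
    """Uncompressed video size in GB; scales linearly with bit depth."""
    total_bits = width * height * channels * bits_per_channel * fps * seconds
    return total_bits / 8 / 1e9

size8 = raw_video_gb(3840, 2160, 30, 60, 8)    # one minute of 8-bit UHD
size10 = raw_video_gb(3840, 2160, 30, 60, 10)  # the same minute at 10-bit
print(f"{size8:.1f} GB vs {size10:.1f} GB ({size10 / size8 - 1:.0%} larger)")
# → 44.8 GB vs 56.0 GB (25% larger)
```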

Other Media

Beyond video, bit depth plays a key role in other media. In computer graphics and gaming, APIs like OpenGL and Direct3D typically process textures and framebuffers at 8-16 bits per channel for integer formats, with 32-bit floating-point options for HDR rendering to avoid precision loss in lighting calculations. Visual effects and animation workflows often use formats like OpenEXR, supporting 16-bit half-float or 32-bit full-float per channel to capture wide dynamic ranges in renders, enabling seamless integration with video pipelines. In film production, the Academy Color Encoding System (ACES) employs 16-bit half-float for scene-referred data, ensuring color fidelity across production stages.

Comparisons and Evolution

Common Bit Depths

In digital media, bit depths are selected to balance audio fidelity, visual accuracy, storage efficiency, hardware compatibility, and processing demands. Lower depths are favored for broad compatibility despite reduced precision, while higher depths support professional workflows at the expense of larger file sizes and greater computational load.

Audio

Common bit depths in digital audio include 16-bit for consumer applications, such as compact discs, where it provides sufficient dynamic range for standard playback while maintaining compatibility with legacy devices. Professional recording and production typically employ 24-bit depth to capture greater nuance and headroom during mixing and mastering. Digital audio workstations (DAWs) often use a 32-bit floating-point format internally to preserve precision across multiple processing stages without accumulating quantization errors.

Imaging

In digital imaging, 8-bit per channel (24-bit RGB) remains ubiquitous for web graphics and standard displays due to its efficiency and support in most browsers and software, enabling 16.7 million colors suitable for general viewing. For professional printing and editing, 16-bit per channel is prevalent, offering enhanced gradation for retouching and avoiding banding in high-end workflows. High dynamic range (HDR) imaging commonly adopts 32-bit floating-point representation to handle extended luminance ranges in formats like OpenEXR, facilitating seamless tone mapping in compositing.

Video

Digital video standards frequently use 8-bit or 10-bit depths for broadcast and streaming, with 8-bit sufficing for standard dynamic range (SDR) content in H.264/AVC codecs to ensure wide device compatibility. Digital cinema and high-end production favor 12-bit depth, as in formats like ProRes or DNxHR, to support HDR workflows and minimize artifacts during color grading.

Cross-Domain

Across domains, 1-bit depth is applied in dithered images for low-bandwidth transmission or simple displays, where spatial dithering simulates grayscale using patterns of black-and-white pixels. In scientific simulations, such as computational fluid dynamics or climate modeling, 64-bit floating-point precision is standard to maintain numerical accuracy over complex computations.
Domain | Common Bit Depths | Typical Use Cases
------ | ----------------- | -----------------
Audio | 16-bit | Consumer playback (e.g., CDs)
Audio | 24-bit | Professional recording
Audio | 32-bit float | DAW processing
Imaging | 8-bit/channel | Web and standard graphics
Imaging | 16-bit/channel | Print and professional editing
Imaging | 32-bit float/channel | HDR imaging
Video | 8/10-bit | Broadcast and streaming
Video | 12-bit | Cinema and HDR production
Cross-domain | 1-bit | Dithered images
Cross-domain | 64-bit float | Scientific simulations
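The 1-bit dithered-image case from the table can be sketched with ordered (Bayer) dithering: each 8-bit pixel is compared against a tiled threshold matrix, so the density of "on" pixels approximates the local gray level. The 2x2 matrix and the flat test patch below are illustrative choices:

```python
BAYER2 = [[0, 2],
          [3, 1]]  # 2x2 Bayer index matrix; scaled to thresholds below

def dither_1bit(gray, width, height):
    """Threshold each 8-bit pixel against a tiled Bayer matrix -> 0 or 1."""
    out = []
    for y in range(height):
        row = []
        for x in range(width):
            threshold = (BAYER2[y % 2][x % 2] + 0.5) * 256 / 4
            row.append(1 if gray[y][x] >= threshold else 0)
        out.append(row)
    return out

# A flat 50% gray patch becomes a regular pattern with half the pixels on.
patch = [[128] * 4 for _ in range(4)]
result = dither_1bit(patch, 4, 4)
ones = sum(sum(row) for row in result)
print(ones / 16)  # → 0.5: the on-pixel density matches the gray level
```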

Historical Development

The concept of bit depth emerged in the early days of digital computing, where limited hardware constrained representations to 1-4 bits per pixel in displays. In the 1960s, systems like the IBM 2250 display, introduced in 1964 as part of the System/360 family, operated in vector mode with effectively 1-bit depth, allowing only binary on/off states for lines on a 1024x1024 grid. Early experiments in the late 1960s and early 1970s built on this, using 1-bit depth for simple raster images, with 4-bit extensions appearing by the mid-1970s to support limited grayscale or color palettes in research prototypes.

In audio, pulse-code modulation (PCM) experiments in the 1970s laid the groundwork for standardized bit depths, transitioning from telephony roots to music recording. By the late 1970s, Soundstream's digital recorder achieved 16-bit resolution at 37.5 kHz sampling, enabling commercial viability for high-fidelity digital audio. The 1982 launch of the compact disc (CD) format, developed by Philips and Sony, established 16-bit PCM at 44.1 kHz as the consumer standard, providing approximately 96 dB of dynamic range. Digital audio tape (DAT), introduced in 1987, used 16-bit depth at up to 48 kHz sampling and was widely adopted in professional studios by the mid-1990s. The 1990s also saw advancements to 24-bit depths in early digital audio workstations and hard disk recording systems, providing a theoretical 144 dB dynamic range for enhanced nuance in production.

Digital imaging evolved similarly, with 6-bit depth used in early 1970s satellite imagery, such as NASA's Landsat program, advancing to 8-bit by the early 1980s for grayscale scans in scientific applications. By 1987, IBM's Video Graphics Array (VGA) standard introduced 8-bit palette-based color (256 simultaneous values), marking a shift to affordable personal computer displays. The 1990s brought 24-bit true color, with 16.7 million hues per pixel, popularized by consumer graphics cards and enabling photorealistic rendering without palettes. In the 2010s, 10-bit and higher depths emerged for 4K imaging, particularly in photography and HDR displays, allowing over 1 billion colors to reduce banding in gradients.

Video bit depth transitioned during the 1980s from analog to digital formats, with SMPTE's D1 standard in 1986 using 8-bit YCbCr sampling at 4:2:2 ratios for broadcast-quality component digital video. The 2000s saw high-definition television (HDTV) standards, such as ATSC in 1998, predominantly at 8-bit depth for 1080i/p resolutions, balancing quality and bandwidth in consumer adoption. By the 2020s, streaming platforms like Netflix and Disney+ supported 12-bit depths in HDR formats such as Dolby Vision, enhancing color precision for 4K and 8K content over IP networks.

Advancements in bit depth were profoundly influenced by Moore's law, which predicted the doubling of transistors on chips roughly every two years, driving down costs and enabling the computational power needed for processing higher-depth media from the 1970s onward. Standardization bodies like the International Electrotechnical Commission (IEC) and the Society of Motion Picture and Television Engineers (SMPTE) formalized these evolutions; for instance, IEC 60908 formally defined the 16-bit CD audio standard in 1987, while SMPTE standards through 2025, including ST 2110 for IP video, accommodate 12-bit depths and beyond for professional workflows.

References

  1. [1]
    Bit depth and preferences - Adobe Help Center
    May 24, 2023 · Bit depth specifies how much color information is available for each pixel in an image. More bits of information per pixel result in more available colors.Missing: definition | Show results with:definition
  2. [2]
    Digital audio basics: audio sample rate and bit depth
    ### Summary of Bit Depth and Related Concepts in Digital Audio
  3. [3]
    Bit Depth - Digital Imaging Tutorial - Basic Terminology
    BIT DEPTH is determined by the number of bits used to define each pixel. The greater the bit depth, the greater the number of tones (grayscale or color) ...
  4. [4]
    [PDF] Digital Audio Basics | UW-IT
    Bit Depth. The bit depth of digital audio refers to the resolution of a single sample. A bit is a single binary value, a zero or a one.
  5. [5]
    Frequently asked questions about Digital Audio and Video
    Oct 11, 2019 · "Bit depth" is the amount of data used to describe a specific section of source material. The preferred bit depth for audio recording is 24 bits ...
  6. [6]
    Bit Depth | NIST - National Institute of Standards and Technology
    Jan 15, 2025 · Bit Depth. the number of bits (binary digits) used to specify the brightness or color range of each pixel in an image sensor.Missing: definition explanation<|control11|><|separator|>
  7. [7]
    [PDF] Digital Representation
    Solution: quantize the amplitude. Bit depth specifies the number of bits allocated to each sample of audio. In order to achieve a good approximation of the ...
  8. [8]
    Digital Images | Radiology | SUNY Upstate
    In general, if n bits are used to code for one pixel, the number of discrete values is 2n. 8 bits (equal to one Byte) can code for 256 discrete values (shades ...
  9. [9]
    [PDF] CHAPTER 6 FUNDAMENTALS OF DIGITAL AUDIO
    Bit depth of a digital audio is also referred to as resolution. • For digital audio, higher resolution means higher bit depth. © 2016 Pearson Education ...
  10. [10]
    Audio Recording Standards - UO Blogs - University of Oregon
    The higher the bit depth then the less rounding off, which means a more accurate digital recording of the sound (see previous image).
  11. [11]
    [PDF] 5 Chapter 5 Digitization - Juniata College Faculty Maintained Websites
    The higher bit depth gives a wider range of sound amplitudes that can be recorded. The smaller bit depth loses more of the quiet sounds when they are rounded ...
  12. [12]
    [PDF] Fundamentals Digital Audio
    Digital audio involves sound as waves, digitization via sampling and quantization, and the effects of sampling rate and bit depth on quality.
  13. [13]
    [PDF] Digital Signal Processing Lecture Outline ADC Anti-Aliasing Filter ...
    Quantization error grows out of bounds beyond code boundaries. ❑. We define the full scale range (FSR) as the maximum input range that satisfies. |eq|≤Δ/2.
  14. [14]
    [PDF] Fundamentals of Quantization - Stanford Electrical Engineering
    Mar 20, 2006 · Overload range: Inputs not within a bin cause quantizer overload. (saturation), an error of greater than ∆/2. Quantization. 75. Page 76 ...
  15. [15]
    [PDF] MT-001: Taking the Mystery out of the Infamous Formula,"SNR ...
    The formula SNR = 6.02N + 1.76dB represents the theoretical signal-to-noise ratio of a perfect N-bit ADC, over the dc to fs/2 bandwidth.
  16. [16]
    [PDF] A Technical Tutorial on Digital Signal Synthesis - IEEE Long Island
    If the DAC is operated at its fullscale output level, then the ratio of signal power to quantization noise power (SQR) is given by: SQR = 1.76 + 6.02B (dB).
  17. [17]
    IM 250 -- Digital Recording Details
    May 17, 2019 · Word Size or bit depth. The number of bits used to represent a single audio wave (the word size) directly affects the achievable noise level of ...
  18. [18]
    [PDF] ANALOG-DIGITAL CONVERSION
    Twos complement, for conversion purposes, consists of a binary code for positive magnitudes (0 sign bit), and the twos complement of each positive number to ...
  19. [19]
    (PDF) Quantization and Dither: A Theoretical Survey - ResearchGate
    Rectangular dither ensures that the quantizer is asymptotically unbiased (mean absolute error converges to 0 when averaging samples) with uncorrelated ...
  20. [20]
    [PDF] Technical Document AESTD1002.2.15-02 Recommendation for ...
    Bit Depth: Bit depth at which the audio file was created. Revision Number: A 2-digit revision identifier with an “R” preceding it is listed last. The higher ...
  22. [22]
    Linear Pulse Code Modulated Audio (LPCM)
    Mar 26, 2024 · ISO/IEC 60908: Audio recording - Compact disc digital audio system ... bit-depth. Audio CDs use 44.1 kHz sampling rate with 16-bit ...
  23. [23]
    Audio Engineering Society - Convention Paper
    “High-Definition” audio uses sampling rates of 88.2 kHz, 96 kHz and 192 kHz. Resolution: This corresponds to the bit-depth used to encode the samples. The ...
  24. [24]
    Digital Audio Basics: Sample Rate and Bit Depth - PreSonus
    The sample size—more accurately, the number of bits used to describe each sample—is called the bit depth or word length. The number of bits transmitted per ...
  25. [25]
    Perceptual Coding of High-Quality Digital Audio - Index of /
    The magic of audio coding lies in the combination of signal processing algorithms like advanced filterbanks, quantization and coding, and consideration of ...
  26. [26]
    [PDF] A Brief Introduction to Sigma Delta Conversion - Educypedia
    SNR = (6.02N + 1.76) dB + 10 log₁₀(fS / 2fC) dB, where the logarithmic term is the processing gain from oversampling and fC is the upper edge of the frequency band of interest.
  27. [27]
    Basics of Sound, the Ear, and Hearing - Hearing Loss - NCBI - NIH
    Thus, the dynamic range of hearing covers approximately 130 dB in the frequency region in which the human auditory system is most sensitive (between 500 and ...
  28. [28]
    [PDF] Lecture 24: Dithering and mastering - DSpace@MIT
    May 16, 2018 · ... headroom. • Bounce to disc a 24 bit stereo mix. • Create a new 24 bit session for mastering processing. • Bounce to ...
  29. [29]
    Bit Depth Tutorial - Cambridge in Colour
    Bit depth quantifies how many unique colors are available in an image's color palette in terms of the number of 0's and 1's, or "bits," which are used to ...
  30. [30]
    The Ins and Outs of HDR ― What is HDR? | EIZO
    For example, an 8-bit display can show roughly 16.77 million different colors, while a 10-bit display can show roughly 1.07 billion colors. Difference in ...
  31. [31]
    Definition of 24-bit color | PCMag
    Using three bytes per pixel in a display system (eight bits for each red, green and blue subpixel). Also called True Color and RGB color.
  32. [32]
    What is a 30 Bit Photography Workflow?
    Aug 29, 2018 · A 30 bit workflow is aimed at displaying that many colors on the screen for you to see and work with. And that's where the problem lies.
  33. [33]
    Understanding Gamma Correction - Cambridge in Colour
    Gamma defines the relationship between a pixel's numerical value and its actual luminance, translating between eye and camera light sensitivity.
  34. [34]
    Gamma, Tonal Response Curve, and related concepts - Imatest
    Gamma (γ) is the average slope relating the logarithm of pixel levels to the logarithm of exposure, and is the relationship called the Tonal Response Curve.
  35. [35]
    [PDF] Conserve O Gram Volume 22 Issue 1: Understanding Bit Depth
    The tones of a grayscale image with a bit depth of 8 range from 0 (black) to 255 (white) and all the 254 shades of gray in between. Depths ranging from 8 to ...
  36. [36]
    Digital Image Processing - Gray-Level Resolution - Interactive Tutorial
    Feb 11, 2016 · Digital images having higher gray-level resolution are composed with a larger number of gray shades and are displayed at a greater bit depth ...
  37. [37]
    Understanding Photoshop color modes - Adobe Help Center
    May 24, 2023 · Indexed Color mode produces 8‑bit image files with up to 256 colors. When converting to indexed color, Photoshop builds a color lookup table ( ...
  38. [38]
    14-bit vs 12-bit RAW - Can You Tell The Difference?
    Jun 13, 2015 · 12-bit image files can store up to 68 billion different shades of color. 14-bit image files store up to 4 trillion shades.
  39. [39]
    How Many “Bits” Do I Need - 8, 12, or 14? - Nikonians
    Dec 9, 2013 · 8-bit JPEG has 16.7M colors, 12-bit RAW has 68B, and 14-bit RAW has 4.4T. 14-bit RAW is recommended for potential color storage.
  40. [40]
    Chapter 8, "PNG Basics" - libpng.org
    Only 8-bit and 16-bit grayscale images may have an alpha channel, which must match the bit depth of the gray channel. The full TIFF specification supports two ...
  41. [41]
    Hounsfield unit | Radiology Reference Article | Radiopaedia.org
    Sep 25, 2024 · This range adequately covers all tissues in a human body. CT images from newer scanners store data at 16-bit depth, allowing for 2^16 = 65 536 ...
  42. [42]
    Effects of 16-bit CT imaging scanning conditions for metal implants ...
    Dec 11, 2017 · Therefore, 16-bit CT images are reconstructed by extending the bit depth of CT (3–5), and this reconstruction generates a wide range of CT ...
  43. [43]
    Decontouring: prevention and removal of false contour artifacts
    Sometimes an image may be available in a certain bit-depth, but the content is actually lower because there was a bit-depth limitation that occurred earlier in ...
  45. [45]
    How Video Bit Depth Affects File Size - Larry Jordan
    Aug 4, 2019 · As bit-depth increases by 2, file size increases by 25%. And, thankfully for the amount of storage they would require, video files above 16-bit ...
  46. [46]
    10 bit color vs 8 bit color vs bitrate in video - VEGAS Community
    Apr 14, 2022 · All else being equal, the Bitrate and File Size of a true 10 bit encode (from 10 bit source!) will be 1/3 larger than its 8 bit counterpart. If ...
  47. [47]
    Energy Reduction Opportunities in HDR Video Encoding - arXiv
    Jun 17, 2024 · A downside to this development is the increased processing cost because of a higher bit depth of the source videos. Instead of using 8 bit ...
  48. [48]
    High-Bit-Depth Geometry Representation and Compression in ...
    Dec 11, 2024 · As immersive videos comprise multiple two-dimensional videos, compressing these videos individually incurs a high computational cost.
  50. [50]
    VRAM Bottleneck in Vegas Pro 22 – Considering GPU Upgrade to ...
    Nov 21, 2024 · I'm considering upgrading to a Sapphire Radeon PULSE RX 7900 XTX with 24GB GDDR6 VRAM, which has double the VRAM of my current card.
  51. [51]
    Bit Depth in Video: 8-bit, 10-bit, and 12-bit Explained - Cincopa
    Apr 14, 2025 · 10-bit is used in HDR streaming (e.g., Netflix HDR10 content), color grading workflows, and professional cameras. It minimizes color banding and ...
  52. [52]
    The State of the Video Codec Market 2025 - Streaming Media
    Mar 28, 2025 · I'm here to help you decide whether it's time to go all in on AV1, VVC, LCEVC, or EVC or whether it's better to stick with H.264, VP9, and HEVC.
  53. [53]
    2: Key Digital Principles
    2.3 Bit Depth: The bit depth fixes the dynamic range of an encoded audio event or item. 24 bit audio theoretically encodes a dynamic range that approaches ...
  54. [54]
    3. Framework for Assessing Preservation Aspects of Large-Scale ...
    High spatial resolution and bit depth are ideal; however, they alone do not guarantee satisfactory images. Therefore, more emphasis must be placed on assessing ...
  55. [55]
    Digital Audio Basics - UW-IT
    The CD standard is 16 bits, but during the production process, it is advised to work in 24 bits and dither down to a lower bit depth before distribution. For ...
  56. [56]
    Basics: Digital Audio - Transom
    Nov 7, 2002 · The “CD standard” is 16 bits, but more and more recorders and editing systems are capable of recording at 24 bits or higher.
  57. [57]
    Bit Depth | AIMM
    In general, a bit depth of 24 bits is considered the standard for professional audio recording and production, while a bit depth of 16 bits is commonly used for ...
  58. [58]
    Digital audio basics: audio sample rate and bit depth - iZotope
    What does bit depth mean in audio recording? Bit depth determines the resolution of each sample, affecting the dynamic range and noise floor of the recording.
  59. [59]
    Scanning and Digitization Basics - F&M College Library
    Aug 7, 2024 · 8-bit grayscale: 8 bits per pixel for 256 different shades of gray, including black and white. bitonal: 1 bit per pixel, in which each pixel is either white or ...
  60. [60]
    Bit depth - Untitled Document
    Common values for bit depth range from 1 to 64 bits per pixel. In most cases, Lab, RGB, grayscale, and CMYK images contain 8 bits of data per color channel ...
  61. [61]
    [PDF] High dynamic range imaging and tonemapping
    Oct 9, 2017 · • Some image formats store integer 12-bit or 16-bit images. • HDR images are floating point 32-bit or 64-bit images. How do we store ...
  62. [62]
    Codecs in common media types - MDN Web Docs - Mozilla
    Jul 6, 2025 · The bit depth of the luma and color component values; permitted values are 8, 10, and 12. CC. A two-digit value indicating which chroma ...
  63. [63]
    [PDF] What is 4K and how do I transmit it? - BICSI
    Jan 21, 2019 · Left: color bit depth 12 bits (via Rec.2020). Center: color bit depth 10 bits (via Rec.2020). Right: color bit depth 8 bits (via Rec.709).
  64. [64]
    [PDF] Digital Image Processing - Jennifer Burg
    In digital imaging, dithering a grayscale image involves changing the bit-depth of the image from eight bits per pixel to one bit per pixel. This means that ...
  65. [65]
    Climate Modeling in Low Precision: Effects of Both Deterministic and ...
    Jan 27, 2022 · Climate models, however, still compute in 64 bits, and adapting to lower precision requires a detailed analysis of rounding errors. We develop ...
  66. [66]
    Simulation framework for X-ray grating interferometry optimization
    Jan 10, 2025 · The bit-depth is also a crucial parameter. A comparison between 32- and 64-bit precision is provided in Supplement 1. When the array exceeds ...
  67. [67]
    The IBM 2250 Display Unit - Columbia University
    The IBM 2250 Display Unit was originally shipped with the IBM 1130 computer, introduced in 1965. The 2250 could also be attached to IBM 360-series mainframes, ...
  68. [68]
    15.1 Early Hardware – Computer Graphics and Computer Animation
    The depth (the number of bits used to represent the intensity and color of each pixel) increased from 1 to 8 to accommodate color, and then to 24 (8 bits ...
  69. [69]
    [PDF] The Dawn of Commercial Digital Recording
    By 1976, Stockham had a digital recording company, Soundstream, and a recorder capable of 16-bit resolution with a sampling rate of 37.5 kHz. The Soundstream ...
  71. [71]
    JPEG-1 standard 25 years: past, present, and future reasons for a ...
    Aug 31, 2018 · The Journal of Electronic Imaging publishes papers that are normally considered in the design, engineering, and applications of electronic ...
  72. [72]
    Computer Graphics/Imaging Display Hardware History
    Jan 15, 2025 · A hardware split screen feature allows an arbitrary window on a 2048 x 1536 (x 8 bit) virtual memory. 8 graphic overlay planes are provided.
  73. [73]
    In-Depth On Bit Depth - DT Heritage
    May 5, 2023 · A 16-bit camera system will have higher dynamic range, lower noise, and better smooth gradients compared to a camera with lower bit depth.
  74. [74]
    [PDF] Study Group Report High-Dynamic-Range (HDR) Imaging Ecosystem
    SMPTE, ARIB and ITU-R have published interface standards that provide sufficient bit depth and color sub-sampling as well as signaling of a wider color ...
  76. [76]
    How Moore's Law has helped create today's video cameras
    Aug 25, 2015 · Moore's Law has changed almost every aspect of technology and continues to do so. Here's an in-depth look at how it has helped to bring us today's amazing ...
  77. [77]
    ISO/IEC 23001-8:2013 - Coding-independent code points
    ISO/IEC 23001-8:2013 defines various code points and fields that establish properties of a video or audio stream that are independent of the compression ...
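
The 2^N level counts and the SNR = 6.02N + 1.76 dB rule of thumb quoted in several of the sources above (MT-001, the sigma-delta tutorial) can be checked numerically; a minimal sketch in Python, with illustrative function names of my own choosing:

```python
import math

def levels(n_bits: int) -> int:
    """Number of distinct amplitude values an N-bit sample can encode."""
    return 2 ** n_bits

def ideal_snr_db(n_bits: int, fs: float = None, fc: float = None) -> float:
    """Theoretical SNR of a perfect N-bit converter (6.02N + 1.76 dB);
    when a sampling rate fs and band of interest fc are given, add the
    oversampling processing gain 10*log10(fs / (2*fc))."""
    snr = 6.02 * n_bits + 1.76
    if fs is not None and fc is not None:
        snr += 10 * math.log10(fs / (2 * fc))
    return snr

for n in (8, 16, 24):
    print(f"{n:2d}-bit: {levels(n):>12,} levels, ideal SNR = {ideal_snr_db(n):.2f} dB")

# Oversampling example: 16-bit at fs = 176.4 kHz with a 20 kHz audio band.
print(f"{ideal_snr_db(16, fs=176_400, fc=20_000):.2f} dB")
```

This reproduces the familiar figures: 65,536 levels and roughly 98 dB for 16-bit, about 146 dB for 24-bit, with oversampling buying a few extra dB of effective resolution.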