Adaptive Scalable Texture Compression
Adaptive Scalable Texture Compression (ASTC) is a lossy, block-based texture compression algorithm developed collaboratively by engineers at ARM Ltd. and AMD. Introduced in 2012, it addresses the limitations of prior formats by offering unprecedented flexibility in bit rates, color formats, and texture dimensions while maintaining high visual quality.[1] It divides textures into fixed 128-bit blocks, supporting 2D block sizes from 4×4 to 12×12 texels and 3D blocks from 3×3×3 to 6×6×6 texels, enabling bit rates from 0.89 to 8 bits per texel in 2D and 0.59 to 4.74 bits per texel in 3D.[1][2]

ASTC's core innovation lies in its adaptive encoding, which allows per-block customization of color components (from 1 to 4 channels), dynamic range (low dynamic range [LDR] or high dynamic range [HDR]), and sRGB support, using gradient-based interpolation with up to four partitions per block and per-texel weights for precise color reconstruction.[1] This scalability outperforms earlier standards like S3TC/DXT and PVRTC in quality at equivalent bit rates (for instance, achieving approximately 2 dB higher PSNR than PVRTC at 2 bits per texel) while reducing memory bandwidth and storage demands in graphics applications.[2] The format employs efficient techniques such as Bounded Integer Sequence Encoding (BISE) for weight quantization and procedural partition functions to minimize artifacts across diverse content like diffuse maps, normal maps, and emissive textures.[1]

Standardized by the Khronos Group as extensions to OpenGL, OpenGL ES, and Vulkan (e.g., KHR_texture_compression_astc_ldr), ASTC is decoded in hardware by GPUs from multiple vendors, including ARM Mali since the Mali-T600 series, Qualcomm Adreno, Imagination PowerVR, Apple A-series, and NVIDIA Tegra SoCs, enabling real-time decompression in mobile and embedded rendering pipelines; desktop GPUs typically rely on driver-level support.[3] Its adoption in platforms like Android and game engines such as Unity facilitates optimized asset delivery, particularly for high-resolution textures where bandwidth efficiency is critical.[4] Tools like the open-source Arm ASTC Encoder (astcenc) support compression and decompression, with quality presets balancing speed and fidelity for developers.[5]

Introduction
Overview
Adaptive Scalable Texture Compression (ASTC) is a lossy, block-based texture compression algorithm developed jointly by ARM and AMD to reduce memory bandwidth and storage requirements in graphics rendering applications.[1][6] It enables efficient handling of texture data in real-time rendering pipelines, particularly in resource-constrained environments.[7] The primary goals of ASTC include providing scalable bit rates ranging from 0.89 to 8 bits per texel, supporting both 2D and 3D textures, and accommodating low dynamic range (LDR), sRGB, and high dynamic range (HDR) color spaces.[1][8] This flexibility allows developers to balance compression efficiency and visual fidelity based on specific use cases, such as mobile games or high-end simulations.[6]

ASTC offers key advantages over predecessors like S3TC/DXT and ETC, including higher compression efficiency and superior visual quality at comparable bit rates, which enhances performance in mobile and embedded GPUs by lowering power consumption and memory usage.[1][7] In its basic workflow, input images are divided into variable-sized blocks (4×4 to 12×12 texels for 2D and 3×3×3 to 6×6×6 voxels for 3D), each compressed into a fixed 128-bit unit for storage and decompressed on-the-fly during rendering.[1][9] ASTC has been adopted as a Khronos Group extension for OpenGL, OpenGL ES, and Vulkan APIs, ensuring broad interoperability across graphics hardware.[8][6]

History and Development
The development of Adaptive Scalable Texture Compression (ASTC) was initiated by Jørn Nystad and colleagues at ARM Ltd., in close collaboration with AMD, to address limitations in the flexibility and efficiency of prior texture formats.[1][10] This effort aimed to create a versatile, block-based compression algorithm capable of supporting a wide range of bit rates and color spaces while maintaining high visual quality.[1] ASTC was first publicly presented at the High Performance Graphics (HPG) conference in June 2012, where the core algorithm design, including its adaptive partitioning and endpoint encoding methods, was detailed in a paper by the ARM team.[11][1]

Two months later, on August 6, 2012, the Khronos Group officially adopted ASTC as a royalty-free extension to the OpenGL and OpenGL ES APIs, introducing the GL_KHR_texture_compression_astc_ldr extension for low dynamic range (LDR) textures and GL_KHR_texture_compression_astc_hdr for high dynamic range (HDR) support.[7] This standardization was driven by the need for a unified format to consolidate fragmented, vendor-specific compression schemes like PVRTC and ETC, which had proliferated in mobile graphics but lacked broad interoperability and scalability.[7]

Subsequent advancements included the addition of ASTC support in the Vulkan API with its initial 1.0 specification release in February 2016, enabling efficient integration across modern graphics pipelines. ARM further supported ongoing refinements through open-source initiatives, notably releasing the astc-encoder tool in 2013 as a reference implementation for compressing and decompressing ASTC textures, which has since evolved into a widely used resource for developers.[12][5]

Technical Specifications
Compression Principles
Adaptive Scalable Texture Compression (ASTC) employs a block-based approach in which images are divided into fixed-size 128-bit blocks, each covering a variable number of texels (pixels in 2D textures or voxels in 3D volumes), allowing adaptive partitioning to optimize compression efficiency across diverse content types.[1] This structure enables the format to handle varying spatial resolutions within the same compressed stream: each block is independently encoded to represent local image features through partitioning into 1 to 4 regions, or partitions, selected from a predefined set of 3072 procedural partitioning patterns to minimize reconstruction error.[1] Within each partition, colors are approximated by quantizing two endpoint colors per region, using 5 to 11 bits per component depending on the selected mode, which balances precision against the bit budget.[1]

The encoding process refines this approximation by applying per-texel weights, either stored directly per texel or bilinearly interpolated from a compact weight grid, to blend the endpoints and reconstruct the original colors.[13] Weight grid modes support single-channel weights at precisions from 2 to 8 bits, configurable per block to trade off detail representation against compression ratio, and encoders typically apply least-squares optimization to select the best-fitting parameters.[1] These weights are packed using Bounded Integer Sequence Encoding to fit values optimally within the 128-bit limit.[1] The block header encapsulates critical metadata, including the partition count, endpoint modes, color space indicators, and weight grid selector, ensuring all necessary information for reconstruction is compactly stored.[13]

Decoding in ASTC relies on hardware-friendly integer arithmetic for rapid interpolation and color reconstruction, performing bilinear weight interpolation followed by endpoint blending in a single pass per block.[1] It supports both low dynamic range (LDR) images at 8 bits per channel and high dynamic range (HDR) content using up to 20-bit floating-point values, with LDR employing linear interpolation and HDR utilizing a pseudo-logarithmic scale to preserve perceptual quality across wide intensity ranges.[13] In partitioned blocks, each texel is assigned to one partition, and its color is reconstructed by interpolating between that partition's two endpoints using the texel weight:

\text{Color} = (1 - w) \cdot \text{endpoint}_0 + w \cdot \text{endpoint}_1

where w is the normalized weight (derived from the grid and interpolated to [0, 1]) for the texel, applied component-wise across 1 to 4 color channels, including optional alpha.[1]

A key innovation of ASTC is its adaptive scalability, which permits fine-grained trade-offs between image quality and bit rate, from below 1 bit per texel up to 8 bits per texel, without requiring format changes or re-encoding, achieved through flexible allocation of bits to partitions, endpoints, and weights.[1] This is complemented by advanced quantization strategies that enhance peak signal-to-noise ratio (PSNR): at comparable bit rates, ASTC outperforms predecessors such as S3TC and PVRTC, achieving PSNR gains of approximately 2 dB over PVRTC at 2 bits per texel due to optimized endpoint and weight precision selection.[1]

Supported Color Formats
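The endpoint interpolation formula above can be sketched as a minimal decoder blend. This is a hedged illustration, not the spec's bit-exact arithmetic: real hardware uses fixed-point weights (typically expanded to the integer range 0 to 64) rather than floats.

```python
# Sketch of ASTC's per-texel reconstruction: each texel blends its
# partition's two endpoint colors with a normalized weight w in [0, 1],
# component-wise over up to four channels (RGBA).

def reconstruct_texel(endpoint0, endpoint1, w):
    """Color = (1 - w) * endpoint0 + w * endpoint1, per component."""
    return tuple((1.0 - w) * e0 + w * e1
                 for e0, e1 in zip(endpoint0, endpoint1))

# A texel halfway between a black and a red endpoint pair:
mid = reconstruct_texel((0, 0, 0, 255), (255, 0, 0, 255), 0.5)
print(mid)  # (127.5, 0.0, 0.0, 255.0)
```

Because each partition carries its own endpoint pair, a decoder applies this blend per texel after looking up which partition the texel belongs to.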
Adaptive Scalable Texture Compression (ASTC) supports a variety of channel configurations to accommodate different texture types, including 1-channel luminance (L), 2-channel luminance-alpha (LA), 3-channel RGB, and 4-channel RGBA formats.[1] Alpha handling is integrated through separate endpoint modes, allowing for opaque, straight, or premultiplied alpha in RGBA blocks, which enables efficient representation of transparency in textures.[1] The format accommodates multiple dynamic ranges, including low dynamic range (LDR) with 8 bits per channel using unsigned normalized values, sRGB for gamma-corrected LDR content to preserve perceptual uniformity, and high dynamic range (HDR) with 16-20 bit floating-point values per channel for RGB or RGBA inputs.[1] This flexibility ensures compatibility across standard graphics workflows, from traditional 8-bit images to high-fidelity HDR rendering.

ASTC employs 16 color endpoint modes, which support varying numbers of channels (1-4, including alpha) and dynamic ranges (10 modes for LDR and 6 for HDR), such as direct RGB encoding, blue-contract for correlated channels, and delta-based representations, to optimize precision across formats including transparency.[1] Quantization levels range from 5 to 11 bits, allowing adaptive allocation of bits within the fixed 128-bit block structure to balance quality and compression efficiency across formats.[1] Special features include void-extent mode, which designates blocks as constant-color for uniform regions, reducing overhead in sparse textures.[1] The format also supports mipmapping through scalable block sizes and texture arrays for layered content, enhancing its utility in real-time rendering.[1] Designed for integration with modern graphics pipelines, ASTC maintains high perceptual quality in sRGB workflows by supporting native sRGB decoding, ensuring colors appear consistent without additional transformations during rendering.[1][14]

Block Configurations
2D Block Footprints and Bit Rates
Adaptive Scalable Texture Compression (ASTC) supports 14 distinct 2D block footprints, ranging from 4×4 to 12×12 texels, each encoded into a fixed 128-bit block.[14] These configurations enable scalable bit rates from 8 bits per texel (bpt) down to approximately 0.89 bpt, allowing developers to balance texture quality against storage efficiency for 2D surface textures in both low dynamic range (LDR) and high dynamic range (HDR) formats.[9] The bit rate for each footprint is calculated as 128 bits divided by the number of texels in the block, providing precise control over compression density.[14]

Within the 128-bit block, approximately 12 to 16 bits are allocated to the header, which encodes block mode, partition information, and color endpoint modes, while the remaining bits are distributed between color endpoints (typically around 48 to 96 bits depending on the mode) and texel weights (the balance, quantized to 2 to 8 bits per weight). This adaptive allocation ensures flexibility in representing color gradients and details across the block. Smaller footprints, such as 4×4 or 5×4, are suited for high-detail areas like normal maps or sharp edges where preserving fine features is critical, whereas larger footprints like 10×10 or 12×12 excel in smooth gradients or low-frequency content, optimizing the quality-to-bitrate trade-off by spreading the fixed bit budget over more texels.

The following table lists all 14 2D block footprints, including dimensions, texel count, and exact bit rates:

| Dimensions | Texels per Block | Bit Rate (bpt) |
|---|---|---|
| 4×4 | 16 | 8.00 |
| 5×4 | 20 | 6.40 |
| 5×5 | 25 | 5.12 |
| 6×5 | 30 | 4.27 |
| 6×6 | 36 | 3.56 |
| 8×5 | 40 | 3.20 |
| 8×6 | 48 | 2.67 |
| 10×5 | 50 | 2.56 |
| 10×6 | 60 | 2.13 |
| 8×8 | 64 | 2.00 |
| 10×8 | 80 | 1.60 |
| 10×10 | 100 | 1.28 |
| 12×10 | 120 | 1.07 |
| 12×12 | 144 | 0.89 |
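Since every block occupies exactly 128 bits, the bit rates in the table follow directly from the footprint dimensions. A short sketch reproducing them:

```python
# All 14 ASTC 2D footprints; bit rate = 128 / (width * height).
ASTC_2D_FOOTPRINTS = [
    (4, 4), (5, 4), (5, 5), (6, 5), (6, 6), (8, 5), (8, 6),
    (10, 5), (10, 6), (8, 8), (10, 8), (10, 10), (12, 10), (12, 12),
]

def bits_per_texel(width, height):
    """Bits per texel for a 128-bit block covering width x height texels."""
    return 128 / (width * height)

for w, h in ASTC_2D_FOOTPRINTS:
    # e.g. "4x4: 8.00 bpt" ... "12x12: 0.89 bpt"
    print(f"{w}x{h}: {bits_per_texel(w, h):.2f} bpt")
```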
3D Block Footprints and Bit Rates
ASTC extends its compression capabilities to 3D textures by employing volumetric blocks that maintain the fixed 128-bit block size used in 2D, but with variable footprints to accommodate the depth dimension. This allows for efficient compression of volumetric data, where each block covers a 3D grid of voxels rather than a 2D grid of texels. The encoding process adapts the 2D approach by extending the weight grid and interpolation mechanisms to three dimensions, enabling similar header structures for color endpoints and weights while supporting up to 216 voxels per block.[8][15]

The bit rate for 3D blocks is determined by the formula \frac{128}{\text{width} \times \text{height} \times \text{depth}} bits per voxel (bpv), providing scalable compression ratios from high-fidelity representations to aggressive size reduction. This formula yields precise rates for each supported footprint, allowing developers to balance quality and storage based on application needs. Supported color formats mirror those of 2D blocks, including LDR and HDR variants for flexibility in volumetric data.[8]

ASTC's 3D compression finds primary applications in volumetric rendering scenarios, such as simulating smoke or particle flows in games, light fields for advanced graphics effects, and scene parameter storage for real-time rendering. It is particularly valuable in voxel-based game worlds and scientific visualization tasks requiring efficient handling of 3D datasets.[16][17]

The following table lists all supported 3D block footprints, including dimensions, voxels per block, and corresponding bit rates:

| Width | Height | Depth | Voxels per Block | Bit Rate (bpv) |
|---|---|---|---|---|
| 3 | 3 | 3 | 27 | 4.74 |
| 4 | 3 | 3 | 36 | 3.56 |
| 4 | 4 | 3 | 48 | 2.67 |
| 4 | 4 | 4 | 64 | 2.00 |
| 5 | 4 | 4 | 80 | 1.60 |
| 5 | 5 | 4 | 100 | 1.28 |
| 5 | 5 | 5 | 125 | 1.02 |
| 6 | 5 | 5 | 150 | 0.85 |
| 6 | 6 | 5 | 180 | 0.71 |
| 6 | 6 | 6 | 216 | 0.59 |
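The same bit-rate formula can drive footprint selection. The sketch below is a hypothetical helper, not part of any ASTC API: it picks the highest-quality 3D footprint whose rate fits within a bits-per-voxel budget.

```python
# Supported ASTC 3D footprints, ordered from fewest voxels (highest
# bit rate, highest quality) to most voxels (lowest bit rate).
ASTC_3D_FOOTPRINTS = [
    (3, 3, 3), (4, 3, 3), (4, 4, 3), (4, 4, 4), (5, 4, 4),
    (5, 5, 4), (5, 5, 5), (6, 5, 5), (6, 6, 5), (6, 6, 6),
]

def pick_footprint(max_bpv):
    """Return the smallest footprint whose 128/(w*h*d) rate fits the budget."""
    for w, h, d in ASTC_3D_FOOTPRINTS:
        if 128 / (w * h * d) <= max_bpv:
            return (w, h, d)
    return None  # budget tighter than the 6x6x6 minimum of ~0.59 bpv

print(pick_footprint(2.0))  # (4, 4, 4) -> exactly 2.00 bpv
print(pick_footprint(1.0))  # (6, 5, 5) -> ~0.85 bpv
```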
Implementation and Support
Hardware Implementations
ARM Mali GPUs have provided hardware support for ASTC decoding since the Mali-T620, introduced in 2013 as part of the Midgard architecture; full support for high dynamic range (HDR) ASTC formats became available starting with the Mali-G71 in the Bifrost architecture in 2016.[18] This widespread adoption in Android devices has made ASTC a standard for mobile texture compression, enabling efficient rendering in resource-constrained environments.[19]

AMD Radeon GPUs do not feature native hardware decoding for ASTC. Software fallbacks via drivers, such as those in the Mesa Gallium3D stack, have been available since 2018, allowing ASTC use on Radeon hardware through compute-shader emulation for both low dynamic range (LDR) and HDR profiles.[20]

Apple's A-series GPUs support LDR ASTC decoding starting from the A8 chip in 2014, integrated directly into the Metal API for seamless use in iOS and macOS applications.[21] Full HDR ASTC support arrived with the A13 Bionic in 2019, enhancing capabilities for advanced rendering techniques like high-fidelity lighting.[22]

Imagination Technologies' PowerVR GPUs have included native ASTC hardware decoding since the Series6XT architecture in 2013, encompassing the Rogue family for both 2D and 3D textures.[23]

Intel integrated GPUs provided full ASTC hardware support from Skylake (Gen9) in 2015 through Tiger Lake (Gen12), enabling efficient texture handling in integrated graphics for laptops and desktops.[24] However, this hardware support was removed in the Arc (Alchemist) architecture starting in 2022, primarily due to licensing considerations; software emulation remains available via drivers like Vulkan's ANV for LDR profiles.[25][26]

NVIDIA's Tegra SoCs have supported ASTC decoding since the Kepler-based Tegra K1 in 2014, with full implementation in mobile platforms like those used in the Nintendo Switch.[3] On desktop GeForce GPUs, ASTC is exposed through driver support on Kepler-architecture (GeForce GTX 600 series) and later hardware rather than through dedicated decode units.

Qualcomm Adreno GPUs introduced LDR ASTC hardware decoding with the 4xx series in 2014, compatible with Snapdragon 415 and later.[27] HDR support became available on Android 13 and later for the 7xx series in 2023, via extensions like GL_KHR_texture_compression_astc_hdr.[20]

On modern GPUs, ASTC decoding typically incurs a low latency of approximately 1-2 cycles per block, minimizing overhead during rendering.[28] Compared to uncompressed RGBA textures at 32 bits per pixel, ASTC at typical bit rates (e.g., 8 bits per pixel) can achieve up to 75% bandwidth savings, significantly reducing memory traffic in bandwidth-limited scenarios.[29] Khronos Group API extensions, such as OES_texture_compression_astc, facilitate cross-platform ASTC usage in OpenGL ES and Vulkan.[8]

Software Tools and Encoders
The Arm ASTC Encoder (astcenc) is an open-source reference implementation for compressing and decompressing ASTC textures, available on GitHub since 2013, and supports all ASTC block sizes and color formats, including LDR and HDR modes.[5] The tool offers command-line operation with options for quality presets, multi-threading for improved performance, and perceptual error metrics to optimize visual quality during encoding. As of version 5.1.0, released in November 2024, it includes optimizations for the Arm Scalable Vector Extension 2 (SVE2) and further speed enhancements across platforms without altering image quality.

Basis Universal, developed by Binomial LLC, serves as a transcoder that generates ASTC-compatible output from its supercompressed intermediate format, enabling efficient storage and runtime conversion in KTX2 containers for cross-platform asset delivery.[30] It supports ASTC block sizes starting from 4x4, with UASTC mode providing higher-quality encoding that transcodes to ASTC, and includes supercompression via Zstandard or Brotli to reduce file sizes beyond standard ASTC.[31] This tool integrates into asset pipelines for formats like glTF, allowing developers to target ASTC while maintaining compatibility with devices lacking native ASTC support.

NVIDIA Texture Tools Exporter, part of the NVIDIA Texture Tools (NVTT) suite, provides ASTC encoding and decoding capabilities through its GUI and CLI interfaces, with CUDA-accelerated compression for faster processing of high-resolution images.[32] Version 2024.1.1, released in August 2024, supports LDR ASTC formats alongside BC1-BC7, enabling quality tuning via adaptive rounding and mipmapping directly in the exporter.[33] It is commonly used in content-creation workflows with tools like Adobe Photoshop plugins for seamless texture optimization.
AMD Compressonator offers a graphical user interface for ASTC compression, supporting encoding to various block sizes and integration with the Radeon GPU Profiler for performance analysis during asset preparation.[34] The latest version 4.5, from January 2024, includes ASTC in its codec library (with optional enablement for builds), alongside support for KTX2 output and Brotli-G supercompression to enhance workflow efficiency. It facilitates perceptual quality adjustments and batch processing, making it suitable for developers targeting AMD hardware ecosystems.

Khronos Group utilities, such as the KTX-Software library, enable ASTC handling in asset pipelines by supporting encoding, decoding, and validation within KTX2 containers for glTF models, ensuring compliance with Khronos standards for 3D content interchange. These tools include loaders for OpenGL, Vulkan, and WebGL, with utilities like ktx2check for verifying ASTC texture integrity, and integrate with glTF pipelines to streamline deployment of compressed assets across runtimes.

ASTC encoding remains computationally intensive, often requiring seconds per image on CPUs due to exhaustive partition searches and endpoint optimizations, though tools mitigate this via multi-threading and preset modes; hardware decoding, in contrast, occurs rapidly at runtime on supported GPUs. Optimizations like perceptual metrics in encoders help balance quality and speed by prioritizing visually important features over uniform error minimization.

Extensions and Applications
Universal ASTC
Universal ASTC (UASTC) is a restricted subset of the Adaptive Scalable Texture Compression (ASTC) format, introduced in 2019 by Binomial LLC as part of their Basis Universal GPU texture codec.[30] It employs 4x4 pixel blocks at 8 bits per texel for standard modes, utilizing a simplified 128-bit block encoding with 19 modes for low dynamic range (LDR) content and 24 modes for high dynamic range (HDR). In 2025, support for 6x6 blocks was added for HDR content, using a 75-mode ASTC HDR compressor at approximately 3.56 bits per texel to improve efficiency while maintaining transcodability.[35] These modes incorporate up to two partitions and specific endpoint and weight configurations, such as integer weight bits for compatibility with BC7, to facilitate straightforward transcoding to various GPU formats.[30] UASTC builds on core ASTC color formats but limits variability to enhance efficiency.[36]

The primary purpose of UASTC is to serve as the foundation for Basis Universal's supercompression scheme, enabling highly compact intermediate files in formats like .basis or KTX2 that can be transcoded on-the-fly to target GPU compressed textures such as ASTC, BC7, ETC2, or BC1 without additional quality degradation.[30] This approach achieves supercompression ratios of up to 6:1 compared to uncompressed or standard GPU-compressed textures, particularly beneficial for HDR workflows where input data at 48 bits per pixel is reduced to 8 bits per texel while preserving visual fidelity (e.g., PSNR values exceeding 56 dB in tests).
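The 6:1 figure can be checked with simple arithmetic, assuming (as in the HDR workflow described above) a 16-bit-per-channel RGB source reduced to the 8 bits per texel of a 4x4 ASTC block:

```python
# UASTC HDR supercompression ratio: 48-bit-per-pixel input
# (16-bit half-float RGB) versus 8 bits per texel of ASTC output.
input_bpp = 3 * 16         # RGB half-float source: 48 bits per pixel
uastc_bpt = 128 / (4 * 4)  # one 128-bit block covers 16 texels -> 8 bpt
ratio = input_bpp / uastc_bpt
print(f"{ratio:.0f}:1")    # 6:1
```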
The 6x6 HDR extension further optimizes this for lower bit rates.[36] In KTX2 containers, UASTC data benefits from layered Zstandard lossless compression, further optimizing storage for web and mobile delivery without loss of transcodability.[37]

Key restrictions in UASTC include exclusive support for 2D textures, no 3D block configurations, and limitation to LDR RGBA for the base mode, with HDR variants restricted to RGB without alpha or signed values. The 6x6 mode follows similar constraints but supports upsampled weight grids and 1-3 subsets.[38] It forbids advanced ASTC features like three or four partitions, dual-plane encoding, weight grid upsampling (except in 6x6 HDR), and certain partition patterns or invalid FP16 values to ensure compatibility and simplicity.[36] These 128-bit blocks are fully decodable as standard ASTC on compliant hardware or transcodable to other formats like BC6H for HDR with minimal error.[30]

UASTC offers advantages over full ASTC, including faster encoding and decoding due to its constrained mode set in 4x4 blocks, which reduces computational overhead in transcoders. The 6x6 extension provides additional flexibility for HDR at the cost of slightly higher complexity.[30] As an open-source, royalty-free format, it is particularly suited for web and mobile applications requiring efficient texture delivery.[30] Integration is widespread, with support in glTF 2.0 via KTX2 files for 3D model interchange, WebGL through WebAssembly-based transcoding, and game engines such as Unity and Unreal Engine for runtime texture handling.[39] It forms a core component of the Khronos Group's texture basis extension in KTX 2.0, ratified in 2021 to standardize supercompressed textures across OpenGL, Vulkan, and other APIs.

Use Cases and Comparisons
Adaptive Scalable Texture Compression (ASTC) finds extensive application in mobile gaming, where it enables significant bandwidth and memory savings in Android and Vulkan-based titles. For instance, in resource-constrained environments, ASTC reduces texture loading times and runtime memory usage, as demonstrated by compressing a 1.1 MB uncompressed texture to as low as 70 KB at 8x8 block size, compared to 299 KB for PNG format.[19] This efficiency is particularly beneficial for high-resolution assets in games developed with engines like Unreal Engine, supporting scalable bit rates from 0.89 to 8 bits per pixel (bpp) to balance quality and performance.[3]

In virtual reality (VR) and augmented reality (AR) scenarios, ASTC supports HDR textures essential for immersive environments, such as in Gear VR deployments via Unreal Engine, where it optimizes texture streaming to minimize latency and artifacts during head-tracked rendering.[40] For web graphics, ASTC integration through the WEBGL_compressed_texture_astc extension allows efficient delivery of 3D assets in WebGL applications, including browser-based games and visualizations, by leveraging hardware acceleration on compatible devices.[41] Additionally, in asset streaming pipelines like those in Unreal Engine 5, ASTC facilitates dynamic loading of large texture sets, reducing initial download sizes while maintaining visual fidelity across platforms.[42]

Comparisons with other formats highlight ASTC's versatility, particularly at lower bit rates.
At 2 bpp for low-dynamic-range (LDR) RGB textures, ASTC achieves peak signal-to-noise ratio (PSNR) values 4-8 dB higher than PVRTC across standard test images, enabling superior quality at constrained bandwidths typical in mobile scenarios.[1] Versus ETC2, ASTC at 3.56 bpp (6x6 blocks) delivers comparable or better quality at a lower effective rate, with PSNR exceeding ETC2 by approximately 0.7 dB, while offering faster compression relative to ETC2's software encoders.[43] At 4 bpp, ASTC outperforms S3TC (DXT1) by about 1.5 dB PSNR, translating to roughly 30% better perceptual quality for similar file sizes.[44] For high-dynamic-range (HDR) content, ASTC at 8 bpp provides mPSNR gains of 2-5 dB over BC6H, making it more versatile for mixed LDR/HDR workflows without sacrificing detail in specular highlights or lighting maps.[1] In contrast to video codecs like AV1 adapted for textures, ASTC excels in random-access scenarios due to its block-based design, avoiding the sequential decoding overhead of AV1 while achieving similar compression ratios for static assets.[45] At higher rates like 8 bpp for LDR, ASTC matches BC7's PSNR around 40-50 dB but supports a broader range of block sizes and color channels, enhancing adaptability across devices.[1]

| Format | Bit Rate (bpp) | Typical PSNR (LDR RGB, dB) | HDR Support | Primary Platforms | Key Advantages |
|---|---|---|---|---|---|
| ASTC | 2-8 (variable) | 34-50 (e.g., 38 at 2 bpp) | Yes (mPSNR 45-55 at 8 bpp) | Mobile (Android/Vulkan), WebGL | Flexible blocks, high quality at low rates |
| S3TC (DXT1) | 4 (fixed) | 32-40 | No | Desktop, WebGL | Broad support, fast decode |
| ETC2 | ~4 (fixed) | 35-42 | Limited | Mobile (OpenGL ES) | Good mobile quality, alpha support |
| BC7 | 8 (fixed) | 40-50 | No (use BC6H for HDR) | Desktop, Consoles | High fidelity, but less scalable |
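As a worked example of the bandwidth figures discussed above, the following sketch (hypothetical sizes, ignoring mipmaps and alignment) compares the memory footprint of a 2048×2048 texture under uncompressed RGBA8 and two common ASTC footprints:

```python
# Memory footprint of a texture at a given bits-per-texel rate.
def texture_bytes(width, height, bits_per_texel):
    return width * height * bits_per_texel // 8

uncompressed = texture_bytes(2048, 2048, 32)  # RGBA8: 16 MiB
astc_4x4 = texture_bytes(2048, 2048, 8)       # 8.00 bpt: 4 MiB
astc_8x8 = texture_bytes(2048, 2048, 2)       # 2.00 bpt: 1 MiB

print(f"RGBA8:    {uncompressed >> 20} MiB")
print(f"ASTC 4x4: {astc_4x4 >> 20} MiB "
      f"({100 * (uncompressed - astc_4x4) // uncompressed}% saved)")
print(f"ASTC 8x8: {astc_8x8 >> 20} MiB "
      f"({100 * (uncompressed - astc_8x8) // uncompressed}% saved)")
```

The 4×4 case reproduces the 75% savings cited in the hardware section, while the 8×8 case shows how larger footprints push savings further at the cost of quality.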