
Subnormal number

In the IEEE 754 floating-point arithmetic standard, a subnormal number (also called a denormal number) is a non-zero finite number whose magnitude is less than that of the smallest positive normal number in a given format, allowing representation of values closer to zero than would otherwise be possible. Subnormal numbers are represented with an exponent field set to zero (the minimum biased exponent) and a non-zero significand field, where the implicit leading bit of the significand is treated as zero rather than one, resulting in reduced precision compared to normal numbers. For example, in the binary32 single-precision format, the smallest positive subnormal number is $2^{-149} \approx 1.40129846432 \times 10^{-45}$, while in binary64 double precision it is $2^{-1074} \approx 4.9406564584124654 \times 10^{-324}$. This encoding makes underflow gradual, avoiding an abrupt transition to zero. The inclusion of subnormal numbers in IEEE 754, introduced in the original 1985 standard and retained through the 2008 and 2019 revisions, serves to mitigate underflow errors by providing a continuum of small values, thereby improving numerical stability and accuracy in computations involving tiny magnitudes, such as scientific simulations and signal processing. However, they can incur performance penalties on some hardware due to special handling, leading to options for flushing them to zero in certain contexts.

Basic Concepts

Terminology

In the IEEE 754 floating-point arithmetic standard, the preferred terminology is "subnormal number" for non-zero representable values with magnitudes smaller than the smallest normal number in a given format. The term emphasizes their role in extending the range toward zero without abrupt loss of precision. In the original standard, IEEE 754-1985, these were called "denormalized numbers," a name that highlights the absence of an implicit leading 1 in their representation. Historical synonyms include "denormal number" and "gradual underflow numbers," the latter reflecting their function in enabling gradual underflow rather than flushing tiny results directly to zero. Subsequent revisions, IEEE 754-2008 and IEEE 754-2019, retain "subnormal number" as the primary term while treating "denormalized number" as equivalent. These numbers address underflow by providing a continuum of small values, avoiding the pitfalls of abrupt underflow. The concept of underflow itself is distinct from subnormal numbers: underflow denotes the condition in which a computed result is too small for the format, with "abrupt underflow" replacing it with zero and "gradual underflow" using subnormals for a smoother transition. Informally, very small subnormals are sometimes called "tiny numbers" in discussions of floating-point behavior near zero. In older literature and pre-IEEE contexts, such values were often referred to as "unnormalized numbers," particularly in discussions of floating-point arithmetic that did not follow the usual normalization convention. Such terminology appears in early papers on unnormalized representations, predating the formal adoption of the subnormal and denormal terms.

Definition

In binary floating-point arithmetic, subnormal numbers (also referred to as denormalized numbers) are non-zero values whose magnitude is smaller than that of the smallest positive normal number in a given format. They are represented when the exponent field is zero and the significand field is non-zero, enabling gradual underflow rather than an abrupt transition to zero. This representation extends the representable range toward zero, filling the gap between zero and the minimum normalized value. The value of a subnormal number is $(-1)^{s} \times 2^{E_{\min}} \times (0.f)$, where $s$ is the sign bit (0 for positive, 1 for negative), $E_{\min} = 2 - 2^{k-1}$ is the minimum exponent (with $k$ the number of bits in the exponent field), and $0.f$ denotes the fraction formed by the bits of the trailing significand field interpreted without an implicit leading 1, i.e. $f = \sum_{i=1}^{p-1} b_{i} \cdot 2^{-i}$, where $p$ is the precision in bits and $b_{i}$ are the stored significand bits. In contrast to normalized numbers, which have an implicit leading 1 in the significand and the full precision of $p$ bits, subnormal numbers have a leading 0, giving a reduced precision that decreases as the value approaches zero. For example, in the single-precision binary32 format (with $k = 8$ and $p = 24$), $E_{\min} = -126$, so subnormal values range from approximately $2^{-149}$ (the smallest positive subnormal) to just below $2^{-126}$. This smallest subnormal is $2^{-126} \times 2^{-23} = 2^{-149} \approx 1.40129846432 \times 10^{-45}$.
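The following C sketch, assuming an IEEE 754 binary32 float and using a hypothetical helper decode_binary32, applies this formula to a raw bit pattern and checks that the encoding 0x00000001 decodes to $2^{-149}$:

```c
/* Minimal sketch: decoding a binary32 value by the formulas above,
 * with emin = -126, bias 127, and a 23-bit trailing significand.
 * Assumes the platform's float is IEEE 754 binary32. */
#include <stdio.h>
#include <stdint.h>
#include <math.h>

static float decode_binary32(uint32_t bits) {
    uint32_t s = bits >> 31;
    uint32_t e = (bits >> 23) & 0xFF;
    uint32_t f = bits & 0x7FFFFF;          /* 23-bit trailing significand */
    if (e == 0) {                          /* subnormal (or zero if f == 0) */
        double frac = f / 8388608.0;       /* 0.f = f / 2^23, no implicit 1 */
        return (float)((s ? -1.0 : 1.0) * ldexp(frac, -126));
    }
    double frac = 1.0 + f / 8388608.0;     /* normal: implicit leading 1 */
    return (float)((s ? -1.0 : 1.0) * ldexp(frac, (int)e - 127));
}

int main(void) {
    uint32_t smallest = 0x00000001u;       /* smallest positive subnormal */
    printf("%g (expected 2^-149 = %g)\n",
           decode_binary32(smallest), ldexp(1.0, -149));
    return 0;
}
```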

Historical Development

Origins and Motivation

In early floating-point systems of the 1950s and 1960s, underflow posed a significant challenge: results smaller than the tiniest normalized representable value were abruptly flushed to zero, creating a sharp discontinuity that led to catastrophic precision loss in iterative algorithms and other numerical computations. This "underflow cliff" meant that small but nonzero values could vanish entirely, disrupting the expected behavior of operations such as subtraction (for example, yielding zero when subtracting two nearly equal but distinct numbers) and causing instability in scientific simulations where gradual accumulation of tiny quantities is common. Hardware of the era exemplified this approach by providing no representation for values between zero and the smallest normal number, defaulting instead to zero on underflow, which compounded portability issues across the diverse computer architectures of the time. The concept of gradual underflow emerged as a solution in the late 1960s, with denormalized numbers proposed as early as 1967 to fill the gap between zero and the smallest normalized value, allowing a smoother transition and better preservation of accuracy. Building on this, William Kahan and collaborators advanced the idea throughout the 1970s, advocating for subnormal numbers to extend the range continuously without abrupt loss, particularly during consultations on calculator designs and in the IEEE standardization effort that began in 1977. Donald Knuth highlighted the perils of abrupt underflow in his 1969 treatment of seminumerical algorithms, noting how it introduced large relative errors that undermined the reliability of numerical software. Key motivations centered on enhancing reliability in scientific computing, where avoiding the underflow cliff prevents small errors from amplifying into major inaccuracies, and on reducing the burden on programmers who otherwise needed workarounds for underflow anomalies. By blending underflow effects with ordinary rounding errors, gradual underflow aimed to preserve properties such as the equivalence of equality checks and difference computations (x = y exactly when x - y = 0), fostering more robust software across disciplines reliant on precise numerical computation. This foundational work laid the groundwork for formal adoption in the IEEE 754 standard.

Standardization in IEEE 754

The original IEEE 754-1985 standard, formally titled IEEE Standard for Binary Floating-Point Arithmetic, mandated support for subnormal numbers in its binary formats to enable gradual underflow, thereby extending the range of representable values below the smallest normal number and avoiding abrupt transitions to zero during underflow. This requirement ensured that underflowing results could be represented with reduced precision rather than being flushed to zero, preserving accuracy in computations involving very small magnitudes. Subsequent revisions maintained and expanded this feature. IEEE 754-2008, retitled IEEE Standard for Floating-Point Arithmetic, confirmed the mandatory inclusion of subnormals in binary formats and introduced them to decimal floating-point formats for the first time, allowing similar gradual underflow behavior in base-10 representations. The 2019 revision further refined these provisions, including precise handling of subnormals in operations such as fused multiply-add (FMA), which computes (x × y) + z as a single rounded operation and may produce subnormal results without intermediate rounding or spurious overflow and underflow on the intermediate product. These updates aimed to enhance interoperability and accuracy across binary and decimal arithmetic in diverse computing environments. The inclusion of subnormals in IEEE 754 was significantly influenced by the advocacy of William Kahan, a principal architect of the standard, sometimes described as its "chaplain" for his ongoing efforts to promote faithful implementations. Kahan emphasized the mathematical and practical benefits of gradual underflow, arguing that subnormals prevent anomalies in error analysis and maintain monotonicity in floating-point operations, drawing on his earlier implementations on systems like the IBM 7094. His leadership in the standardization committee helped make subnormals a core requirement, countering proposals for simpler abrupt-underflow mechanisms. While the standard requires full support for subnormals for conformance, some hardware provides optional modes, such as flush-to-zero (FTZ) or denormals-are-zero (DAZ), that treat subnormals as zero for performance reasons; these modes are non-conforming when enabled and are intended for specialized applications where the precision loss is acceptable. The standards thus prioritize gradual underflow as the default behavior to uphold numerical reliability.

Representation and Properties

Binary Floating-Point Formats

In the binary floating-point formats defined by the IEEE 754 standard, subnormal numbers are encoded with a biased exponent field of all zeros (E = 0), a non-zero trailing significand field T, and a sign bit S used exactly as for normalized numbers. This encoding distinguishes subnormals from zero (where T = 0) and allows representation of values smaller than the smallest normalized number without abrupt underflow to zero. For the single-precision binary32 format (32 bits total: 1 sign bit, 8 exponent bits, 23 trailing significand bits), the exponent bias is 127, so the minimum exponent for normal numbers is emin = -126. Subnormal numbers in this format range from the smallest positive value $2^{-126} \times 2^{-23} = 2^{-149}$ (when T = 1) to just below the smallest normalized value $2^{-126}$ (when $T = 2^{23} - 1$). The significand is interpreted without an implicit leading 1, providing at most 23 bits of precision rather than the 24 bits of normalized numbers. In the double-precision binary64 format (64 bits total: 1 sign bit, 11 exponent bits, 52 trailing significand bits), the bias is 1023, yielding emin = -1022. Subnormals here span from $2^{-1022} \times 2^{-52} = 2^{-1074}$ (T = 1) to just below $2^{-1022}$ ($T = 2^{52} - 1$), with at most 52 bits of precision due to the absent implicit bit, compared to 53 bits for normalized values. A representative bit pattern for the smallest positive subnormal in single precision is 0 00000000 00000000000000000000001 (hexadecimal 0x00000001), where the sign bit is 0, the exponent field is all zeros, and the trailing significand has a 1 in the least significant bit. This contrasts with normalized numbers, where the biased exponent field ranges from 1 to 254 and an implicit 1 precedes the stored fraction, giving full precision.
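A small C program can make these bit patterns concrete; this sketch assumes an IEEE 754 platform with C11's FLT_TRUE_MIN available and prints the encodings of the smallest subnormal, the largest subnormal, and the smallest normal binary32 values (the helper bits_of is illustrative):

```c
/* Illustrative sketch of binary32 subnormal bit patterns, assuming an
 * IEEE 754 platform and C11's FLT_TRUE_MIN in <float.h>. */
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <float.h>
#include <math.h>

static uint32_t bits_of(float x) {
    uint32_t u;
    memcpy(&u, &x, sizeof u);              /* reinterpret the bits safely */
    return u;
}

int main(void) {
    float min_sub  = FLT_TRUE_MIN;                /* 2^-149, smallest positive subnormal */
    float min_norm = FLT_MIN;                     /* 2^-126, smallest positive normal    */
    float max_sub  = nextafterf(min_norm, 0.0f);  /* largest subnormal                   */

    printf("smallest subnormal: %08X = %g\n", (unsigned)bits_of(min_sub),  min_sub);
    printf("largest  subnormal: %08X = %g\n", (unsigned)bits_of(max_sub),  max_sub);
    printf("smallest normal   : %08X = %g\n", (unsigned)bits_of(min_norm), min_norm);
    /* Expected encodings: 00000001, 007FFFFF, 00800000; the exponent field
       is all zero for both subnormals and 1 for the smallest normal. */
    return 0;
}
```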

Arithmetic Behavior

In arithmetic conforming to IEEE 754, operations such as addition and subtraction can produce subnormal results when the exact result has a magnitude smaller than the smallest positive normal number but greater than zero. Similarly, multiplication of two numbers (both subnormal, one subnormal and one normal, or both normal but yielding a tiny product) may result in a subnormal if the product's magnitude falls below the normal range. For instance, in binary formats, if the preliminary exponent of an operation's result is less than emin (the minimum exponent for normalized numbers), the significand is denormalized by right-shifting it until the exponent reaches emin, filling the leading bit positions with zeros and extending the representable range gradually toward zero. Hardware implementations typically detect potential subnormals during the post-operation normalization phase, where the significand is examined for the position of its leading one. If the result qualifies as subnormal (exponent fixed at emin with a significand less than 1 in normalized terms), it is kept in denormalized form to preserve as much precision as possible through gradual underflow, avoiding an abrupt flush to zero. This adjustment ensures that subnormals provide a continuum of representable values with decreasing precision as the magnitude approaches zero, rather than a sudden gap. Underflow handling, specified in the standard's underflow clause (7.4 in IEEE 754-1985, 7.5 in the 2008 and 2019 revisions), applies when a non-zero result is tiny, that is, when its rounded value has magnitude less than the smallest positive normal number. Under default exception handling, such results are rounded to the nearest representable subnormal (or to zero if smaller still), and the underflow flag is raised when the result is also inexact, enabling gradual underflow to mitigate precision loss compared to an abrupt underflow to zero. A representative example of precision loss arises when multiplying two small normal numbers near the underflow boundary in the single-precision binary32 format: for operands around $2^{-70}$ each, the exact product near $2^{-140}$ falls into the subnormal range and, after right-shifting the significand to fit emin = -126 and rounding, retains only about 10 significant bits rather than 24, demonstrating how subnormals trade precision for extended range while the underflow and inexact flags are raised.
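The default flag behavior can be observed with C99's <fenv.h>; this hedged sketch, assuming an IEEE 754 binary64 double, produces a subnormal quotient and then tests the underflow and inexact flags:

```c
/* Sketch of gradual underflow and the default exception flags using
 * C99 <fenv.h>; assumes an IEEE 754 binary64 'double'. */
#include <stdio.h>
#include <fenv.h>
#include <float.h>

#pragma STDC FENV_ACCESS ON

int main(void) {
    feclearexcept(FE_ALL_EXCEPT);

    double tiny = DBL_MIN;     /* 2^-1022, smallest positive normal */
    double r = tiny / 3.0;     /* tiny and inexact -> subnormal result */

    printf("result = %g (below DBL_MIN = %g)\n", r, DBL_MIN);
    printf("underflow: %s, inexact: %s\n",
           fetestexcept(FE_UNDERFLOW) ? "raised" : "clear",
           fetestexcept(FE_INEXACT)   ? "raised" : "clear");

    /* An exact subnormal result (e.g., DBL_MIN / 4) would not raise the
       underflow flag under default handling, since the flag requires the
       rounded result to be both tiny and inexact. */
    return 0;
}
```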

Performance and Implementation

Computational Overhead

Subnormal numbers impose notable computational overhead in floating-point processing primarily because they require special handling within the floating-point unit (FPU). Unlike normalized numbers, which benefit from an implicit leading 1 in the significand and standard exponent alignment, subnormals require explicit detection of the zero leading bit and additional shifting to normalize them during arithmetic operations such as multiplication and addition. On some processors this triggers microcode assists or even underflow traps to the operating system to implement the gradual underflow behavior mandated by IEEE 754. Performance benchmarks show that subnormal operations can be dramatically slower than their normalized counterparts across CPU architectures. For example, on Pentium 4 processors, denormal floating-point operations exhibit slowdowns of up to 131 times, while on Sun UltraSPARC IV systems the penalty reaches 520 times due to reliance on kernel traps. Similarly, more recent x86 processors such as the Intel Core i7 show subnormal multiplications taking over 200 cycles compared to roughly 4 cycles for normalized ones, reflecting an FPU optimized for the prevalent normalized case. The overhead is particularly pronounced in iterative algorithms where subnormals accumulate over repeated operations. In a micro-benchmark based on repeated averaging, a proxy for accumulative computations, up to 94% of values become subnormal after 1000 iterations, causing substantial overall slowdowns in loops common to numerical methods. This accumulation amplifies latency because each subsequent operation contends with the extra detection and shifting. On hardware lacking native subnormal support, software emulation exacerbates the issue: exception handlers emulate the operations in user or kernel space, incurring latencies of hundreds of clock cycles per instruction. Such emulation is common in older or cost-optimized processors, further degrading throughput in latency-sensitive workloads.
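The effect is easy to reproduce with a rough, machine-dependent micro-benchmark; the sketch below (with an illustrative time_loop helper) times the same multiply-add loop once with normal values and once with values held in the subnormal range, and the gap disappears on hardware with full-speed subnormal support or when FTZ/DAZ is enabled:

```c
/* Rough micro-benchmark sketch contrasting normal and subnormal
 * multiply-add throughput; absolute numbers are machine-dependent. */
#include <stdio.h>
#include <time.h>
#include <float.h>

static double time_loop(float seed, float scale) {
    clock_t t0 = clock();
    volatile float x = seed;                /* volatile keeps the loop alive */
    for (int i = 0; i < 50000000; i++)
        x = x * scale + seed;               /* converges within the seeded range */
    clock_t t1 = clock();
    return (double)(t1 - t0) / CLOCKS_PER_SEC;
}

int main(void) {
    /* 1.0f keeps every intermediate value normal; FLT_MIN / 4 keeps the
       running value in the subnormal range throughout the loop. */
    printf("normal    : %.3f s\n", time_loop(1.0f, 0.5f));
    printf("subnormal : %.3f s\n", time_loop(FLT_MIN / 4, 0.5f));
    return 0;
}
```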

Disabling Mechanisms

Subnormal numbers, also known as denormal numbers, can introduce significant computational overhead in floating-point arithmetic because of their special handling requirements. To mitigate this, software mechanisms allow subnormals to be disabled by flushing them to zero, treating them as exact zeros in operations. One primary technique is flush-to-zero (FTZ) mode, an optional mode offered by many otherwise IEEE 754-compliant implementations that treats subnormal inputs as zero before operations and flushes subnormal outputs to zero afterward. This mode departs from strict gradual underflow but is supported on many architectures to prioritize performance over full precision in boundary cases. Compiler flags provide a convenient way to enable such behavior at build time. For instance, GCC's -ffast-math flag (implied by -Ofast) activates denormals-are-zero (DAZ) and FTZ by linking in a startup initializer that sets the relevant control bits, allowing aggressive floating-point rearrangements while treating subnormals as zero. In Microsoft Visual C++ (MSVC), the /fp:fast option enables other speed-focused transformations, such as faster but potentially less precise division and square root, but does not automatically set DAZ or FTZ; these require explicit configuration, for example with _controlfp_s from <float.h> to modify the floating-point control word. At runtime, programmers can toggle these modes with library functions or intrinsics for finer control. In C/C++, the SSE intrinsics _MM_SET_FLUSH_ZERO_MODE(_MM_FLUSH_ZERO_ON) from <xmmintrin.h> and _MM_SET_DENORMALS_ZERO_MODE(_MM_DENORMALS_ZERO_ON) from <pmmintrin.h> modify the MXCSR register to enable FTZ and DAZ on x86 processors, as sketched below. On Windows with MSVC, _controlfp_s can set the equivalent denormal-control bits. These approaches work across supported x86 platforms but require architecture-specific code for broader portability. While these disabling mechanisms yield substantial speedups, chiefly by avoiding the slower subnormal arithmetic paths, they trade away numerical accuracy. Applications sensitive to underflow, such as some scientific simulations, may see altered results or accumulated errors when subnormals are prematurely zeroed, potentially violating IEEE 754 conformance in those scenarios. Developers must evaluate such impacts case by case to balance performance gains against precision requirements.
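A minimal C sketch of the runtime approach, assuming an x86 target compiled with SSE support, shows a subnormal quotient before and after FTZ and DAZ are enabled through the intrinsics named above:

```c
/* Hedged sketch of enabling FTZ and DAZ at runtime on x86 with SSE
 * intrinsics; <pmmintrin.h> (SSE3) provides the DAZ macros on most
 * toolchains. Compile for an SSE-capable target. */
#include <stdio.h>
#include <float.h>
#include <xmmintrin.h>   /* _MM_SET_FLUSH_ZERO_MODE */
#include <pmmintrin.h>   /* _MM_SET_DENORMALS_ZERO_MODE */

int main(void) {
    volatile float tiny = FLT_MIN;          /* forces runtime evaluation */

    /* Default: gradual underflow, the quotient is a subnormal. */
    printf("before FTZ: %g\n", tiny / 4.0f);

    _MM_SET_FLUSH_ZERO_MODE(_MM_FLUSH_ZERO_ON);         /* flush results */
    _MM_SET_DENORMALS_ZERO_MODE(_MM_DENORMALS_ZERO_ON); /* zero inputs   */

    /* With FTZ set, the subnormal quotient is flushed to 0. */
    printf("after  FTZ: %g\n", tiny / 4.0f);
    return 0;
}
```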

Hardware-Specific Techniques

In x86 architectures supporting SSE and AVX extensions, subnormal handling is controlled through the MXCSR register, a 32-bit control and status register that governs SIMD floating-point behavior. The denormals-are-zero (DAZ) bit (bit 6) treats subnormal input operands as zero during computations, while the flush-to-zero (FTZ) bit (bit 15) flushes subnormal results arising from underflow to zero instead of generating subnormals. These bits can be set with the LDMXCSR instruction in assembly, which loads a 32-bit value into MXCSR from memory (for example, with bits 6 and 15 set to enable both modes), or from C/C++ via intrinsics such as _mm_setcsr; the _MM_SET_FLUSH_ZERO_MODE and _MM_SET_DENORMALS_ZERO_MODE macros wrap these intrinsics to enable FTZ and DAZ individually. On ARM architectures with VFP and Advanced SIMD (NEON) extensions, subnormal handling is managed via the Floating-Point Status and Control Register (FPSCR). The flush-to-zero (FZ) bit (bit 24) enables flush-to-zero mode, treating subnormal inputs and results as zero to improve performance. The bit is set with VMSR, which moves a general-purpose register into the FPSCR, or equivalent intrinsics, by loading a value with the FZ bit set and executing VMSR FPSCR, Rn; older VFP implementations use the pre-UAL mnemonic FMXR for the same purpose. Advanced SIMD (NEON) operations on ARMv7 always apply flush-to-zero regardless of the FZ bit, prioritizing performance over full IEEE 754 compliance. Other platforms provide their own controls. In PowerPC architectures, the Floating-Point Status and Control Register (FPSCR) includes the NI (non-IEEE mode) bit, which, when set, enables flush-to-zero-style underflow behavior on supported processors such as the PowerPC 604, bypassing subnormal generation in non-IEEE-compliant mode. For GPUs, CUDA handles subnormals through compiler options rather than direct register access: the -ftz=true flag of nvcc flushes single-precision denormals to zero on devices of compute capability 2.0 and higher, and this setting also applies to hardware paths such as fused multiply-add. Verification of these modes involves querying the respective status registers. On x86, the STMXCSR instruction or the _mm_getcsr intrinsic returns the current MXCSR value, allowing inspection of the DAZ and FTZ bits (see the sketch below). On ARM, VMRS Rn, FPSCR reads the FPSCR contents for bit examination. PowerPC uses the MFFS (Move From FPSCR) instruction to read the register and check the NI bit, while CUDA verification relies on runtime queries of device properties via cudaGetDeviceProperties or on comparing computed results against known IEEE 754 behavior on the CPU.
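For x86, setting and verifying the bits can also be done directly on MXCSR; this illustrative C sketch uses _mm_getcsr and _mm_setcsr with masks for DAZ (bit 6) and FTZ (bit 15):

```c
/* Sketch of setting and verifying the MXCSR DAZ (bit 6) and FTZ (bit 15)
 * bits directly via _mm_getcsr/_mm_setcsr on x86; equivalent to the
 * _MM_SET_* macros and shown only for illustration. */
#include <stdio.h>
#include <xmmintrin.h>

#define MXCSR_DAZ (1u << 6)
#define MXCSR_FTZ (1u << 15)

int main(void) {
    unsigned int csr = _mm_getcsr();
    printf("MXCSR before: 0x%08X (DAZ=%d, FTZ=%d)\n",
           csr, !!(csr & MXCSR_DAZ), !!(csr & MXCSR_FTZ));

    _mm_setcsr(csr | MXCSR_DAZ | MXCSR_FTZ);   /* enable both modes */

    csr = _mm_getcsr();                        /* read back to verify */
    printf("MXCSR after : 0x%08X (DAZ=%d, FTZ=%d)\n",
           csr, !!(csr & MXCSR_DAZ), !!(csr & MXCSR_FTZ));
    return 0;
}
```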

Applications and Considerations

Numerical Analysis Implications

Subnormal numbers, through the mechanism of gradual underflow, play a crucial role in preserving numerical accuracy by avoiding an abrupt jump to zero, which can otherwise introduce significant precision loss in computations involving small values. With gradual underflow, the absolute error introduced by underflow never exceeds half an ulp of the smallest positive normal number, rather than jumping discontinuously to the full magnitude of the underflow threshold, thereby supporting reliable convergence in iterative algorithms such as ordinary differential equation (ODE) integrators. In ODE solvers, for instance, gradual underflow prevents sudden zeroing of small residuals, allowing step-size control and error estimation to proceed smoothly without spurious instability. However, the variable precision inherent in subnormal representations, where the relative unit in the last place (ulp/|x|) grows as values approach zero, can lead to irregular error behavior in operations such as addition and multiplication. In sums of positive terms, for example, adding a subnormal to a normal number can carry a larger relative error than in the normal range, potentially complicating error bounds in backward stability analyses. Similarly, products involving subnormals can exhibit inconsistent relative errors, as the effective significand length shortens, which may amplify discrepancies in algorithms sensitive to precise small-value handling. In matrix factorizations, such as Gaussian elimination or Cholesky decomposition, subnormals contribute to stability by ensuring that small pivots or residuals are represented, however coarsely, rather than flushed to zero, keeping the backward error at levels comparable to roundoff. Disabling subnormals, as in abrupt-underflow modes, can cause instability; for example, in Gaussian elimination on nearly singular matrices, flushing small entries to zero may lead to incorrect factorizations, failure to detect singularity, or forward errors orders of magnitude larger than with gradual underflow. A similar issue arises in the fast Fourier transform (FFT), where gradual underflow preserves small frequency components during iterative refinements, while disabling it can introduce artificial zeros that distort spectral accuracy. To mitigate potential issues, numerical analysts recommend testing algorithms both with and without subnormal support, using exception flags such as underflow and inexact to detect and isolate subnormal-dependent behavior. Mechanisms that flush subnormals to zero (e.g., FTZ/DAZ flags on x86) should be used selectively, with validation runs compared against fully IEEE-compliant results to ensure stability across hardware. This practice is essential for high-reliability software, as it reveals the cases where subnormals are beneficial versus those where they merely add complexity.
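One property at stake can be shown in a few lines of C: under gradual underflow, x - y is zero only when x equals y, whereas flushing subnormals to zero breaks the equivalence for distinct tiny values (a hedged sketch, assuming IEEE 754 binary32 floats):

```c
/* Sketch of a property guaranteed by gradual underflow: for finite x, y,
 * x - y == 0 exactly when x == y. With subnormals flushed to zero
 * (FTZ/DAZ), the difference of two distinct tiny normals can collapse
 * to zero, the failure mode described above. */
#include <stdio.h>
#include <float.h>

int main(void) {
    volatile float x = 1.5f * FLT_MIN;   /* two distinct, very small normals */
    volatile float y = 1.0f * FLT_MIN;
    float d = x - y;                     /* exact result 0.5 * FLT_MIN is subnormal */

    printf("x == y      : %s\n", x == y ? "true" : "false");
    printf("x - y == 0  : %s\n", d == 0.0f ? "true" : "false");
    /* With gradual underflow both lines print "false"; under flush-to-zero
       the second prints "true", so an equality test via subtraction lies. */
    return 0;
}
```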

Modern Usage and Alternatives

In modern programming environments, subnormal numbers are handled in accordance with IEEE 754 to ensure gradual underflow. NumPy, for instance, uses subnormal numbers to represent values between zero and the smallest normal number, filling the underflow gap while potentially incurring performance costs due to hardware handling. Similarly, Java's strictfp modifier enforces strict floating-point semantics, guaranteeing platform-independent behavior that includes proper support for subnormals and prohibits optimizations that would flush them to zero. Post-2010 developments have reinforced subnormal support in key standards and architectures. The IEEE 754-2019 revision maintains the definition and use of subnormal numbers for binary formats, emphasizing their role in gradual underflow without major changes from the 2008 version, while clarifying terminology such as equating "subnormal" with "denormal." RISC-V's floating-point extensions (F and D) comply with IEEE 754-2008, providing hardware support for subnormals in single- and double-precision operations, which aids portability in embedded systems. However, debates persist regarding subnormals in low-power devices, where their processing introduces significant performance interference and energy overhead, in some cases up to 100 times slower than normal operations, prompting proposals to flush them to zero in resource-constrained environments. As of 2025, alternatives continue to evolve. Posits eliminate subnormals by using tapered precision instead of special encodings, and recent research includes hardware implementations for configurable convolutional neural networks (CNNs) and evaluations reporting improved accuracy in sparse linear solvers compared to bfloat16. The Conference for Next Generation Arithmetic (CoNGA'25) highlights ongoing work on posit and related formats. In AI, 8-bit floating-point (FP8) formats proposed jointly by NVIDIA, Arm, and Intel reduce latency and memory use in training and inference, with implementations on GPUs such as the H100. Scaled integers avoid subnormals by shifting values into the normalized floating-point range through multiplication by a scaling factor, preventing underflow in fixed-point emulations. In digital signal processing (DSP), block floating-point formats use a shared exponent for a block of data, normalizing the entire set to sidestep subnormal precision loss and overflow, as implemented on TMS320C54x processors for FFT computations. Trends in hardware accelerators favor disabling subnormals to prioritize speed, particularly with formats like bfloat16 (BF16), where subnormals are commonly flushed to zero to reduce hardware cost in training, aligning with implementations in TPUs and other neural-network accelerators, since full support adds overhead seen as unnecessary for ML workloads. RISC-V extensions for BF16 similarly propose flushing subnormals for hardware efficiency in low-power devices.
