Rounding

Rounding is the process of approximating a numerical quantity to a simpler value, typically by adjusting it to the nearest multiple of a specified unit such as a power of ten or a certain number of decimal places, for convenience in calculations or representation. This approximation introduces a small error known as rounding error, which becomes particularly significant in extended numerical computations or when operations involve small denominators. In mathematics, science, and related fields, rounding is essential for simplifying complex calculations, estimating results, and managing precision in data presentation. Common techniques include rounding to a fixed number of decimal places or significant figures, where the digit immediately following the rounding position determines whether to increase the last retained digit. For instance, one common rule is that if the digit to be dropped is 5 or greater, the preceding digit is incremented; otherwise, it remains unchanged (though conventions vary for exactly 5). Advanced applications, such as in floating-point arithmetic, employ specific rounding modes defined by standards like IEEE 754, including round to nearest (with ties to even), round toward zero, and directed roundings toward positive or negative infinity. These modes ensure consistent behavior in computational systems, minimizing bias in iterative algorithms and scientific simulations. Rounding also plays a critical role in statistics, where rules such as summing unrounded components before rounding the total help avoid distortion in aggregated results.

Fundamentals

Definition and Purpose

Rounding is the process of approximating a numerical value by reducing the number of digits it contains, typically by selecting the closest value from a predefined set, such as multiples of a power of ten or a specified precision level. This technique replaces the original number with a simpler form that maintains proximity to the true value, though it inherently introduces a small degree of inaccuracy. The primary purpose of rounding is to facilitate practical applications across various domains by balancing simplicity and utility. In numerical computation, it enables the representation of real numbers within constrained formats, such as fixed-width integers or floating-point registers, which cannot accommodate infinite precision. For instance, computers use rounding to conform to standards like IEEE 754, ensuring operations produce results that are as close as possible to exact values given hardware limitations. In measurement and scientific reporting, rounding aligns values with the appropriate number of significant figures, reflecting the inherent uncertainty of instruments and avoiding overstatement of precision. Everyday uses include financial transactions, such as rounding amounts to the nearest cent, which streamlines calculations and aligns with monetary denominations. A basic example illustrates this: the value 3.14159 rounded to one decimal place becomes 3.1, discarding the trailing digits while preserving the essential magnitude. This process motivates consideration of error types, where absolute error measures the direct difference between the original and rounded value (e.g., |3.14159 - 3.1| = 0.04159), and relative error normalizes it by the original value (e.g., 0.04159 / 3.14159 ≈ 0.013), highlighting the proportional impact especially for small numbers. Such errors underscore rounding's trade-off between simplicity and fidelity, with specific methods like round half up—where values exactly halfway round upward—applied contextually to minimize bias.

Rounding Error and Precision

In numerical computations, two primary types of errors arise from approximating real numbers: truncation error and rounding error. Truncation error, often resulting from chopping or directed discarding of digits, introduces a systematic bias, typically toward zero, where the absolute error is bounded by the unit in the last place (ulp) but always non-negative for positive numbers. In contrast, rounding error, which arises from rounding to the nearest representable value, produces an unbiased error with bounds symmetric around zero, limiting the maximum absolute error to half the ulp. This distinction is critical because truncation bias can accumulate over multiple operations, while rounding to nearest minimizes long-term drift in statistical or iterative processes. The maximum absolute error from rounding a number to n decimal places is \leq 0.5 \times 10^{-n}, as the deviation cannot exceed half the spacing between representable values at that precision. For instance, rounding \pi \approx 3.1415926535 to two decimal places yields 3.14, introducing an absolute error of approximately 0.00159. The relative error, calculated as the absolute error divided by the true value, is about 0.000507, illustrating how rounding affects proportional accuracy. Relative precision is further quantified through significant figures, where rounding to k significant figures preserves relative accuracy to roughly 5 \times 10^{-k}, ensuring the leading digits reflect the measurement's reliability without implying undue certainty in trailing digits. This approach balances precision loss by focusing on the most meaningful digits, though excessive rounding can degrade the number of reliable digits in subsequent calculations. In floating-point systems, such as IEEE 754, the unit roundoff u defines the fundamental relative precision limit, given by u = 2^{-p} for a p-bit significand (e.g., u \approx 1.11 \times 10^{-16} for double precision with p = 53). This u bounds the relative rounding error in representation and arithmetic operations, where the computed result \mathrm{fl}(x) satisfies |\mathrm{fl}(x) - x| \leq u |x|. Directed rounding modes, like rounding toward zero, deviate from this by introducing bias similar to truncation, potentially amplifying errors in magnitude-dependent computations.
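These error measures translate directly into code. A minimal Python sketch, using illustrative helper names rather than any standard library API:

```python
import math

def abs_error(exact, approx):
    """Absolute error |q - q_hat| between exact and rounded values."""
    return abs(exact - approx)

def rel_error(exact, approx):
    """Relative error |q - q_hat| / |q|; undefined when exact == 0."""
    return abs(exact - approx) / abs(exact)

approx = round(math.pi, 2)            # 3.14
print(abs_error(math.pi, approx))     # ~0.00159
print(rel_error(math.pi, approx))     # ~0.000507

# Unit roundoff for IEEE 754 double precision (p = 53):
u = 2.0 ** -53
print(u)                              # ~1.11e-16
```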

Rounding to Integers

Directed Rounding

Directed rounding refers to a class of rounding operations in which the result is systematically biased toward a fixed direction—either toward positive or negative infinity, or toward or away from zero—regardless of the input value's proximity to the rounding boundaries. These modes, also known as directed rounding modes in floating-point standards such as IEEE 754, prioritize directional consistency over minimizing error magnitude and are essential for applications demanding predictable bias, such as bounding computations or hardware implementations. The floor function, denoted \lfloor x \rfloor, performs rounding down by selecting the greatest integer less than or equal to x, always directing toward negative infinity. This operation yields \lfloor 3.7 \rfloor = 3 for positive values and \lfloor -3.7 \rfloor = -4 for negative values, ensuring the result never exceeds the input. Conversely, the ceiling function, denoted \lceil x \rceil, rounds up to the smallest integer greater than or equal to x, directing toward positive infinity. Examples include \lceil 3.7 \rceil = 4 and \lceil -3.7 \rceil = -3, where the result is always at least as large as the input. Rounding toward zero, often called truncation, produces an integer whose magnitude is no greater than that of x, effectively discarding the fractional part while biasing toward the origin. For instance, \operatorname{trunc}(3.7) = 3 and \operatorname{trunc}(-3.7) = -3. In contrast, rounding away from zero increases the magnitude for non-integer inputs, acting as the opposite of truncation; thus, \operatorname{away}(3.7) = 4 and \operatorname{away}(-3.7) = -4. This mode increments the last retained digit away from zero unless the fractional part is zero. These directed modes find key applications in specialized domains. They are integral to interval arithmetic, where rounding toward negative infinity computes the lower bound and rounding toward positive infinity the upper bound of result intervals to guarantee enclosure of the exact value despite rounding uncertainties. Rounding toward zero is the default behavior in integer division across many programming languages, simplifying computation by discarding remainders without directional ambiguity for positive operands. Unlike nearest-integer methods, which aim for minimal error by selecting the closest representable value, directed rounding enforces a uniform directional shift, making it suitable for conservative error propagation but introducing predictable systematic errors.
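A short Python sketch of the four directed modes; math.floor, math.ceil, and math.trunc are standard, while round_away_from_zero is an illustrative helper, since the standard library offers no direct equivalent:

```python
import math

def round_away_from_zero(x):
    """Directed rounding away from zero: ceil for positives, floor for negatives."""
    return math.ceil(x) if x >= 0 else math.floor(x)

for x in (3.7, -3.7):
    print(math.floor(x),             # toward -infinity: 3, -4
          math.ceil(x),              # toward +infinity: 4, -3
          math.trunc(x),             # toward zero: 3, -3
          round_away_from_zero(x))   # away from zero: 4, -4
```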

Nearest Integer Rounding

Nearest integer rounding selects the closest integer to a given real number x, minimizing the absolute distance |x - n| where n is an integer. This method differs from directed rounding by prioritizing proximity rather than a fixed direction, but it requires explicit tie-breaking rules when x is exactly halfway between two integers (i.e., the fractional part is 0.5). The most common tie-breaking rule is round half up, also known as arithmetic rounding, which rounds halfway cases toward positive infinity. For example, 2.5 rounds to 3 and -2.5 rounds to -2. This approach is prevalent in educational settings and basic computational tools due to its simplicity, and it can be implemented using the formula \lfloor x + 0.5 \rfloor. Round half down, by contrast, rounds halfway cases toward negative infinity: 2.5 to 2 and -2.5 to -3. It preserves magnitude less aggressively for positive inputs and is sometimes used in contexts requiring conservative adjustments. Round half away from zero explicitly directs ties away from zero regardless of sign, agreeing with half up for positive values while increasing magnitude for negative ones: 2.5 to 3 and -2.5 to -3. It is an optional mode in the IEEE 754 standard for certain operations. Round half toward zero mirrors this, resolving ties to the candidate integer closer to zero for consistency across signs: 2.5 to 2 and -2.5 to -2. A statistically unbiased alternative is round half to even (bankers' rounding), which resolves ties by selecting the even integer. Examples include 2.5 to 2, 3.5 to 4, and 4.5 to 4. This method reduces average rounding bias over multiple operations, making it the default mode in the IEEE 754 floating-point standard for binary and decimal arithmetic. It is particularly valuable in financial and scientific computing to avoid systematic errors in summations or averages. Round half to odd, though less common, rounds halfway cases to the nearest odd integer: 2.5 to 3, 3.5 to 3, and 4.5 to 5. This variant can balance errors in specific applications where even parity is undesirable, but it sees limited adoption compared to half to even.
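Several of these tie-breaking rules are available directly in Python's decimal module; note that its ROUND_HALF_UP and ROUND_HALF_DOWN resolve ties away from and toward zero, respectively, rather than toward positive or negative infinity:

```python
from decimal import Decimal, ROUND_HALF_UP, ROUND_HALF_DOWN, ROUND_HALF_EVEN

for x in ("2.5", "-2.5", "3.5"):
    d = Decimal(x)
    print(x,
          d.quantize(Decimal("1"), rounding=ROUND_HALF_UP),    # 3, -3, 4
          d.quantize(Decimal("1"), rounding=ROUND_HALF_DOWN),  # 2, -2, 3
          d.quantize(Decimal("1"), rounding=ROUND_HALF_EVEN))  # 2, -2, 4
```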

Preparatory and Randomized Rounding

Preparatory rounding techniques adjust numerical values prior to truncation or precision reduction in order to minimize accumulated errors in computations. One common method involves the use of guard digits, where extra digits are retained during intermediate calculations to preserve information that might otherwise be lost in alignment or subtraction operations, followed by rounding to the target precision. This approach reduces errors compared to direct truncation, as demonstrated in analyses of floating-point arithmetic, where guard digits ensure that operations like addition yield results with tightly bounded relative error. Randomized rounding methods introduce controlled randomness during the rounding process, particularly at decision boundaries like ties, thereby averaging out systematic biases over multiple operations and improving long-term accuracy in iterative or parallel computations. These techniques contrast with deterministic rounding by distributing rounding errors randomly, which prevents error accumulation in long computation chains and maintains unbiased expectations. Alternating tie-breaking is a deterministic variant that cycles between rounding up and down when the fractional part is exactly 0.5, such as alternating half-up and half-down to balance biases without requiring random number generation. Random tie-breaking employs a probabilistic choice specifically at halfway cases, rounding up or down with equal 50% probability when the fractional part is 0.5, to eliminate directional bias in such instances. Stochastic rounding generalizes this by selecting between the two nearest integers with probability determined by proximity; for a number x = n + f where n is the integer part and 0 \leq f < 1, the probability of rounding up to n+1 is f, and down to n is 1 - f. This method ensures unbiased rounding on average, as the expected value equals the original number. In machine learning, stochastic rounding is applied during quantization of neural network weights and activations to low precision, reducing variance in gradient estimates and enabling training with 16-bit fixed-point representations that achieve accuracy comparable to 32-bit floating-point. In parallel computing, it mitigates error growth in large-scale simulations by randomizing rounding in distributed operations, enhancing stability in low-precision environments. For example, applying stochastic rounding to 3.3 yields 3 with probability 0.7 and 4 with probability 0.3, while 3.7 yields 4 with probability 0.7 and 3 with probability 0.3, preserving the expected value in both cases. As a deterministic alternative to these randomized approaches, half-to-even rounding (also known as banker's rounding) resolves ties by selecting the even integer, though it does not fully eliminate bias in non-random data.
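A minimal sketch of stochastic rounding as defined above (stochastic_round is an illustrative helper, not a library function):

```python
import math
import random

def stochastic_round(x):
    """Round up with probability equal to the fractional part of x."""
    n = math.floor(x)
    f = x - n                       # fractional part, 0 <= f < 1
    return n + 1 if random.random() < f else n

# The sample mean approaches the original value, illustrating unbiasedness:
samples = [stochastic_round(3.3) for _ in range(100_000)]
print(sum(samples) / len(samples))  # ~3.3
```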

Comparison of Integer Rounding Methods

Integer rounding methods vary in their approach to handling fractional parts, particularly in tie situations where the fractional part is exactly 0.5, leading to trade-offs in bias, accuracy, and determinism. A systematic comparison reveals differences in directional bias, where directed methods systematically favor one direction, while nearest-integer methods aim for minimal error but differ in tie resolution. Monotonicity, the property that rounding preserves the order of inputs (i.e., if x \leq y, then \mathrm{round}(x) \leq \mathrm{round}(y)), holds for most standard methods but can be affected by inconsistent tie-breaking in some implementations. Preparatory rounding, often used as an intermediate step to reduce error propagation in multi-step computations, and randomized rounding, which introduces probability to mitigate bias, add further dimensions to these comparisons. The following table summarizes key integer rounding methods, focusing on their tie-breaking rules for halfway cases, bias characteristics, monotonicity, and examples for 2.5 and -2.5 (assuming standard definitions where "up" refers to toward positive infinity unless specified otherwise). Bias is described qualitatively: directed methods exhibit systematic directional bias, while nearest methods have average bias near zero except where ties introduce skew.
| Method | Tie rule (for 0.5) | Bias | Monotonicity | Example: 2.5 | Example: -2.5 |
| --- | --- | --- | --- | --- | --- |
| Floor | Always down (toward -∞) | Negative (or zero) | Yes | 2 | -3 |
| Ceiling | Always up (toward +∞) | Positive (or zero) | Yes | 3 | -2 |
| Truncation (toward zero) | Always toward zero | Toward zero | Yes | 2 | -2 |
| Round half up (to +∞) | Toward +∞ | Positive | Yes | 3 | -2 |
| Round half down (to -∞) | Toward -∞ | Negative | Yes | 2 | -3 |
| Round half to even | To nearest even integer | Unbiased on average | Yes | 2 | -2 |
| Round half away from zero | Away from zero | Away from zero | Yes | 3 | -3 |
| Stochastic (randomized) | Probabilistic (50% each way) | Unbiased (zero expected) | No (probabilistic) | 2 or 3 (50%) | -3 or -2 (50%) |
| Preparatory (e.g., dithered) | Adjusted based on prior error | Reduced propagation bias | Varies | Depends on context | Depends on context |
Note: Preparatory rounding does not have a fixed tie rule, as it typically incorporates prior rounding errors to prepare for subsequent operations, often reducing overall bias in chains of computations. Directed rounding methods, such as floor and ceiling, introduce a consistent bias in one direction, which can accumulate in iterative algorithms but is useful for guaranteeing bounds (e.g., ceiling ensures sufficient allocation by rounding up).

In contrast, nearest-integer methods like round half up exhibit a slight positive bias due to always resolving ties upward, leading to systematic overestimation over many operations. Round half to even and stochastic methods achieve zero expected bias, making them preferable for applications requiring statistical neutrality, though half up remains common in intuitive, single-step calculations despite its skew. Randomized methods, including stochastic rounding, eliminate directional bias entirely but introduce variance, which can be beneficial in optimization contexts like machine learning where it helps escape local minima.

Accuracy is often measured by mean squared error (MSE), where \mathrm{MSE} = \mathrm{variance} + \mathrm{bias}^2; thus, unbiased methods like half to even generally yield lower MSE than biased alternatives for the same variance level, as the bias term vanishes. For instance, in simulations of summation operations, deterministic biased rounding shows higher relative errors compared to stochastic variants, which maintain low bias at the cost of increased short-term variance. Directed methods have higher MSE in unbiased estimation tasks but excel in scenarios prioritizing worst-case guarantees over average performance. Preparatory rounding can further improve accuracy in multi-step processes by distributing errors more evenly, though its effectiveness depends on the specific error model.

In practice, directed methods like ceiling are employed in resource allocation, such as file system block sizing, to avoid underestimation (e.g., ensuring at least the required space by rounding up). Round half to even is widely adopted in financial computations to prevent cumulative positive bias from repeated rounding, as seen in standards for monetary calculations where neutrality preserves fairness over transactions. Stochastic and preparatory rounding find use in numerical simulations and machine learning training, where unbiased error distribution enhances convergence and reduces variance in gradient-based methods.

Each method has distinct pros and cons: directed rounding offers predictability and monotonicity for bounding but suffers from bias accumulation; round half up is intuitive and simple yet introduces positive skew unsuitable for averages; half to even provides unbiased results with determinism but may confuse users due to non-intuitive ties (e.g., 2.5 to 2); half away from zero is symmetric in magnitude but biases away from zero, increasing error in centered data; stochastic rounding eliminates bias and aids optimization but lacks reproducibility; preparatory methods mitigate propagation issues in pipelines at the expense of added complexity. The choice depends on whether bias tolerance, determinism, or average accuracy is prioritized.
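The bias contrast between round half up and round half to even can be demonstrated empirically with Python's decimal module; the dataset of exact halves below is deliberately adversarial, making every other input a tie:

```python
from decimal import Decimal, ROUND_HALF_UP, ROUND_HALF_EVEN

values = [Decimal(k) / 2 for k in range(1, 20001)]  # 0.5, 1.0, 1.5, ...

def mean_rounding_error(mode):
    errors = [v.quantize(Decimal("1"), rounding=mode) - v for v in values]
    return float(sum(errors) / len(errors))

print(mean_rounding_error(ROUND_HALF_UP))    # 0.25: systematic positive bias
print(mean_rounding_error(ROUND_HALF_EVEN))  # 0.0: tie errors cancel out
```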

Rounding to Non-Integers

Multiples and Scales

Rounding to multiples involves adjusting a numerical value to the nearest multiple of a specified step size d > 0, where d represents the granularity or precision unit. This extends the concept of rounding to integers by applying the operation to a scaled value, effectively targeting points spaced by d rather than by 1. For instance, rounding 17 to the nearest multiple of 5 yields 15, as 17 is closer to 15 than to 20. The standard formula for rounding to the nearest multiple of d is \mathrm{round}\left(\frac{x}{d}\right) \times d, where \mathrm{round} denotes nearest-integer rounding applied to the scaled input x / d. This method leverages nearest rounding as its underlying mechanism to determine the appropriate multiple before rescaling. In cases of ties, where the scaled value is exactly halfway between two integers (e.g., x / d = k + 0.5 for integer k), the same tie-breaking rules as in integer rounding apply, such as rounding half up to the next multiple. Practical examples abound in everyday applications. In currency handling, values are often rounded to the nearest cent, where d = 0.01, ensuring transactions align with monetary denominations; for example, $1.235 rounds to $1.24. Similarly, measurements may be rounded to the nearest 10 units for simplicity in reporting, such as approximating 169 cm to 170 cm when estimating height in rough scales. Directed variants provide one-sided rounding to multiples for specific needs. The floor operation to a multiple, given by \left\lfloor \frac{x}{d} \right\rfloor \times d, rounds down to the largest multiple not exceeding x, useful for conservative estimates in financial or allocation contexts where underestimation avoids overcommitment; for example, flooring 17 to a multiple of 5 yields 15. Ceiling rounding, \left\lceil \frac{x}{d} \right\rceil \times d, rounds up analogously and is used when overestimation is the safe direction.
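A minimal Python sketch of these formulas (the function names are illustrative; for exact currency arithmetic, the decimal module avoids binary floating-point surprises):

```python
import math

def round_to_multiple(x, d):
    """Nearest multiple of d; ties follow Python's round (half to even)."""
    return round(x / d) * d

def floor_to_multiple(x, d):
    """Largest multiple of d not exceeding x."""
    return math.floor(x / d) * d

print(round_to_multiple(17, 5))    # 15
print(round_to_multiple(169, 10))  # 170
print(floor_to_multiple(23, 5))    # 20
```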

Logarithmic and Scaled Rounding

Logarithmic rounding approximates a positive number x to the nearest power of a base b > 1, which is effective for compressing wide-ranging data into a compact representation while emphasizing relative scales. The possible target values are b^k for integer k, spaced evenly on a logarithmic scale. For example, with b = 10, rounding 250 selects between 100 (10^2) and 1000 (10^3); since 250 is closer to 100 in relative terms, it rounds to 100. The computation proceeds by finding the exponent k = \mathrm{round}(\log_b x), where \mathrm{round} denotes rounding to the nearest integer (with ties typically resolved away from zero or to even, depending on convention), and the result is b^k. This formula derives from the property that distances on a logarithmic scale correspond to multiplicative factors, ensuring the chosen power minimizes relative deviation. Scaled rounding builds on this by varying the step size proportionally to the number's magnitude, often aligning with scientific notation to achieve uniform relative accuracy across scales. For instance, numbers near 10^2 might use steps of 10, while those near 10^3 use steps of 100, effectively rounding the mantissa while preserving the exponent. This is evident in file size notations, where values are scaled to units like KB (\approx 10^3 bytes) or MB (10^6 bytes), rounding to the nearest unit for readability over exponential ranges. Similarly, map scales are frequently adjusted to "nice" ratios like 1:100000, selecting powers or multiples that simplify representation without losing essential proportion. These methods excel in providing consistent relative precision, where the error as a fraction of the value remains bounded (typically under 50% for nearest selection), unlike uniform rounding, which yields growing relative errors for small values. This makes them valuable in fields like scientific visualization and data summarization, where absolute precision is secondary to proportional insight.
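A minimal Python sketch of rounding to the nearest power of a base (round_to_power is an illustrative helper):

```python
import math

def round_to_power(x, base=10.0):
    """Round positive x to the nearest power of `base` on a log scale."""
    if x <= 0:
        raise ValueError("x must be positive")
    k = round(math.log(x, base))  # nearest integer exponent
    return base ** k

print(round_to_power(250))  # 100.0  (log10(250) ~ 2.40 rounds to 2)
print(round_to_power(400))  # 1000.0 (log10(400) ~ 2.60 rounds to 3)
```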

Floating-Point and Fractional Rounding

Floating-point arithmetic relies on standardized rounding to manage the limited precision of binary representations. The IEEE 754 standard defines four primary rounding modes for floating-point operations: round to nearest (with ties to even), round toward positive infinity, round toward negative infinity, and round toward zero. These modes ensure consistent behavior across computations, mirroring integer rounding but applied to the normalized significand (mantissa) in binary form. In binary floating-point, a number is expressed as \pm (1.f) \times 2^e, where f is the fractional part of the significand, stored in p-1 bits for precision p (e.g., p = 24 for single precision, including the implicit leading 1). When the exact result exceeds this precision, the significand is rounded to the nearest representable value according to the selected mode. To perform the rounding accurately, implementations use extra bits beyond the mantissa: a guard bit (the first bit after the mantissa), a round bit (the next), and a sticky bit (the logical OR of all remaining lower bits). These bits capture information lost during alignment or computation, enabling correct decisions for rounding up or down while minimizing errors. For instance, in round-to-nearest mode, if the guard bit is 1 and the round or sticky bit indicates additional magnitude, the mantissa increments; ties are resolved by checking the least significant bit of the mantissa for evenness. This mechanism ensures that floating-point operations achieve correctly rounded results, as required by IEEE 754. A practical example of decimal-to-binary floating-point rounding occurs with the decimal 0.1, which in binary is the infinite series 0.0001100110011\ldots_2. In single precision (23 explicit mantissa bits), this normalizes to 1.1001100110011001100110011\ldots_2 \times 2^{-4}, which rounds to 1.10011001100110011001101_2 \times 2^{-4} under round-to-nearest, ties to even, resulting in the stored value 0x3DCCCCCD (hexadecimal), slightly greater than exactly 0.1. Such rounding introduces small errors but maintains consistency in binary hardware. Beyond binary representations, rounding to simple rational fractions involves approximating a value x by the nearest multiple of 1/m, where m is a fixed positive integer. The method multiplies x by m, rounds the product to the nearest integer k (using any desired mode, often to nearest), and divides by m to obtain k/m. For example, to round 0.3 to the nearest multiple of 1/8 = 0.125, compute 8 \times 0.3 = 2.4, round to 2, then 2/8 = 0.25. In practical contexts like cooking, measurements such as ingredient volumes are often rounded to the nearest 1/4 cup (0.25 cups) for simplicity and measurability with standard tools. This approach preserves usability while controlling the absolute error to at most 1/(2m).
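The fraction-rounding recipe is a one-liner in Python (round_to_fraction is an illustrative name):

```python
def round_to_fraction(x, m):
    """Round x to the nearest multiple of 1/m; absolute error <= 1/(2m)."""
    return round(x * m) / m

print(round_to_fraction(0.3, 8))  # 0.25 (8 * 0.3 = 2.4, which rounds to 2)
print(round_to_fraction(0.3, 4))  # 0.25 (the nearest quarter)
```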

Binning and Available Values

In rounding to available values, a number is approximated by selecting the element from a predefined finite set that minimizes the distance to the target value, typically using the absolute difference or a domain-specific metric. This approach is essential in fields where only a limited number of standard values are feasible to manufacture or use, ensuring practical approximations without custom fabrication. The general procedure involves computing the distance from the input to each set member and choosing the minimum, which can be optimized to O(log n) time if the set is sorted. A prominent example is the selection of resistor values from the E12 series, standardized for 10%-tolerance components, which includes 12 values per decade such as 10, 12, 15, 18, 22, 27, 33, 39, 47, 56, 68, and 82, scaled by powers of 10. When designing a circuit requiring a specific resistance, engineers round the calculated value to the nearest E12 value by minimizing the relative or absolute difference to these options, as defined in IEC 60063. This series derives from the 12th root of 10 to evenly distribute values logarithmically across decades, facilitating broad coverage. For arbitrary binning, such as in histogram construction for data analysis, values are grouped into custom intervals, and each is represented by the interval's midpoint or center to approximate the data within that bin. Assignment to the closest bin occurs by checking which interval contains the value, with the midpoint serving as the rounded target to minimize average deviation under uniformity assumptions. This method is a form of quantization where bin boundaries define the sets, and the representative value provides a compact summary for storage or analysis. In image processing, color quantization applies this principle by mapping each pixel's RGB value to the nearest color in a reduced palette, using a distance metric in color space to preserve visual fidelity while limiting the number of distinct colors. For instance, reducing a 24-bit image to an 8-bit palette involves finding the palette entry with the smallest distance for each pixel. Similarly, educational grading systems often bin numerical scores into letter grades (e.g., A for 90-100, B for 80-89) by assigning each score to the band whose midpoint is closest, though fixed thresholds are sometimes used instead of pure distance minimization. Challenges arise with unevenly spaced discrete sets, as simple scaling or arithmetic shortcuts are unavailable, necessitating a complete linear search, or a binary search over a sorted set, to identify the nearest element, which becomes computationally intensive for large sets. Floating-point rounding represents a regular case of this binning, where values are snapped to the nearest representable number in the finite set defined by the format's precision and exponent range.
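For a sorted set of available values, the binary-search approach mentioned above can be sketched in Python with the standard bisect module; this version minimizes absolute distance, whereas component selection in practice often minimizes relative (logarithmic) distance:

```python
import bisect

def nearest_available(x, values):
    """Snap x to the closest element of a sorted list in O(log n) time."""
    i = bisect.bisect_left(values, x)
    candidates = values[max(i - 1, 0):i + 1]  # neighbors bracketing x
    return min(candidates, key=lambda v: abs(v - x))

# One decade of the E12 resistor series (ohms):
e12 = [10, 12, 15, 18, 22, 27, 33, 39, 47, 56, 68, 82]
print(nearest_available(50, e12))  # 47
print(nearest_available(60, e12))  # 56
```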

Specialized Applications

Image and Signal Processing

In image and signal processing, rounding during quantization often introduces visible artifacts such as banding in images or harmonic distortion in audio signals, which can degrade perceptual quality. Dithering addresses this by intentionally adding low-level noise to the signal prior to rounding, randomizing the quantization error to make it less perceptible and more closely resemble natural noise. This technique linearizes the quantization process, ensuring that the average output over multiple samples matches the input, thereby masking artifacts like contouring or false edges. A prominent implementation of dithering in image processing is error diffusion, which systematically propagates the rounding error to neighboring pixels rather than relying solely on random noise. In the Floyd-Steinberg algorithm, for each pixel, the value is rounded to the nearest available level (e.g., 0 or 1 in halftoning), and the error e is computed as e = x - \mathrm{round}(x), where x is the original value. This error is then distributed to adjacent unprocessed pixels using fixed weights: \frac{7}{16} to the right neighbor, \frac{3}{16} to the below-left, \frac{5}{16} below, and \frac{1}{16} below-right, ensuring the error is diffused spatially without accumulating locally. This method, introduced in 1976, remains widely adopted for its balance of computational efficiency and visual quality. Error diffusion dithering finds key applications in image halftoning, where continuous-tone images are converted to limited palettes for printing or display, and in audio quantization, such as reducing bit depth from 24-bit to 16-bit, to prevent quantization noise from manifesting as audible distortion. For instance, applying Floyd-Steinberg dithering to an 8-bit grayscale image reduced to 1-bit produces a halftoned output that retains subtle textures and gradients, unlike plain rounding, which results in blocky, posterized regions with prominent banding along smooth transitions. The primary benefits of dithering in these contexts include reduced visibility of quantization-induced banding and improved preservation of fine details, leading to outputs that better approximate the original signal's perceptual characteristics without requiring additional bits. Stochastic rounding serves as a related approach, where rounding decisions incorporate probabilistic elements to decorrelate errors, akin to simpler forms of dithering.
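A compact NumPy sketch of Floyd-Steinberg dithering to two levels, following the weights given above (the function name and gradient test image are illustrative):

```python
import numpy as np

def floyd_steinberg_1bit(image):
    """Dither a grayscale image (floats in [0, 1]) to levels 0 and 1."""
    img = image.astype(float).copy()
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 1.0 if old >= 0.5 else 0.0  # round to the nearest level
            out[y, x] = new
            e = old - new                     # quantization error
            if x + 1 < w:
                img[y, x + 1] += e * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += e * 3 / 16
                img[y + 1, x] += e * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += e * 1 / 16
    return out

# A smooth gradient dithers to a mixed pattern instead of a hard band:
gradient = np.tile(np.linspace(0, 1, 16), (4, 1))
print(floyd_steinberg_1bit(gradient))
```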

Numerical Computation Challenges

In numerical computations, rounding errors can accumulate and propagate in ways that undermine the reliability of algorithms, particularly in multi-step processes like summations or function evaluations. One approach to mitigate this is Monte Carlo arithmetic, which introduces randomness into the rounding process to simulate higher-precision arithmetic. By randomly choosing the rounding direction (e.g., up or down) for each operation with equal probability, the errors behave like uncorrelated random variables, allowing their statistical properties to be analyzed and averaged out over multiple runs to approximate the exact result with reduced bias. This technique, originally proposed to assess and bound rounding error accumulation, enables the simulation of higher precision on standard hardware by repeating computations and taking ensemble averages, effectively reducing the variance of the error distribution. To achieve exact or near-exact results despite inevitable rounding in finite-precision arithmetic, techniques such as compensated summation are employed. These methods track and correct the rounding errors introduced at each step of a computation, such as in summing a series of floating-point numbers. For instance, in compensated summation, after adding two numbers a and b to get the rounded sum s = \mathrm{fl}(a + b), an error term e = a + b - s is computed and compensated in subsequent additions, effectively recovering the lost precision without requiring higher-precision intermediates. This approach, which can double the effective precision of a sum (e.g., making a 64-bit summation behave like 128-bit), is particularly valuable in numerical linear algebra and scientific simulations where error accumulation is a concern. Seminal work by Ogita, Rump, and Oishi formalized accurate summation algorithms that guarantee a faithfully rounded result—a floating-point number adjacent to the exact sum—under mild conditions on the input data. Double rounding arises when a computation involves successive rounding operations at different precisions, such as in fused multiply-add instructions or conversions between formats, potentially introducing additional error not present in a single rounding to the final precision. For example, with extended-precision intermediates like the 80-bit x87 format (with a 64-bit mantissa), computing \mathrm{round}(\mathrm{round}(x, 53 \text{ bits}), 24 \text{ bits}) may differ from \mathrm{round}(x, 24 \text{ bits}), because the intermediate rounding to 53 bits (double precision) can shift the value away from the nearest representable 24-bit (single precision) number. This discrepancy, which can lead to errors up to 1.5 ulps (units in the last place) instead of 0.5 ulps in single rounding, is bounded and analyzed using tools like the Sterbenz lemma. The lemma states that if two positive floating-point numbers a and b satisfy a/2 \leq b \leq 2a, then their difference a - b is exactly representable without rounding error, providing a foundation for proving that double rounding does not always degrade subtraction accuracy in such cases. These bounds are crucial for verifying the correctness of hardware operations and software libraries handling mixed precisions. A particularly challenging issue in numerical computation is the table-maker's dilemma, which concerns the implementation of correctly rounded elementary functions, such as the exponential or sine, in floating-point libraries.
Correct rounding requires that for every possible input, the output is the floating-point number nearest to the true mathematical result (or following a specified tie-breaking rule), but achieving this demands exhaustive verification across the entire input domain, often 2^{53} values for double precision, to identify "hard-to-round" cases where the result lies extremely close to a midpoint between representables. These cases, which may require high-precision arguments or modular computations to resolve, can take years of computational effort to certify, as seen in the development of the CRlibm library for correctly rounded math functions. The dilemma arises because standard algorithms using approximations or lookups may fail to guarantee correct rounding without such rigorous testing, impacting applications in scientific computing where certified accuracy is essential.
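Returning to compensated summation: a minimal Python sketch of the classic Kahan algorithm shows how the correction term described above is carried forward (the adversarial test data are illustrative):

```python
def kahan_sum(values):
    """Compensated (Kahan) summation: recover low-order bits lost in addition."""
    s = 0.0
    c = 0.0                # running compensation term
    for v in values:
        y = v - c          # apply the previous correction
        t = s + y          # low-order digits of y may be lost here
        c = (t - s) - y    # algebraically zero; captures the lost part
        s = t
    return s

data = [1e16] + [1.0] * 1000
print(sum(data) - 1e16)        # 0.0: each 1.0 is absorbed and lost
print(kahan_sum(data) - 1e16)  # 1000.0: the compensation preserves them
```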

Observational and Search Contexts

In meteorological observations, particularly those conducted by the National Weather Service (NWS) in the United States, temperatures are rounded to the nearest whole degree Fahrenheit, with midpoint values (e.g., .5) rounded up toward positive infinity. For instance, +3.5°F rounds to +4°F, while -3.5°F rounds to -3°F, and -3.6°F rounds to -4°F. This convention ensures consistent reporting in surface weather observations, such as METARs, where temperatures below zero are prefixed with "M" to indicate negativity. Wind speeds in these observations are similarly standardized, rounded to the nearest 5 knots, with calm (less than 3 knots) reported as 0 knots. Direction is rounded to the nearest 10 degrees, facilitating uniform data transmission and analysis in aviation and forecasting applications. A notable quirk arises with negative zero in reporting: values between -0.4°F and -0.1°F round to 0°F but may be encoded as "M00" in METARs to preserve the indication that the measurement was subzero, aiding calculations involving thermal properties or historical comparisons without losing directional context for derived metrics like wind chill. This preservation of information prevents errors in downstream computations, such as those integrating temperature with wind data for vector-based analyses. In search and database contexts, rounding numerical data stored as strings can disrupt lexical ordering, leading to counterintuitive results in sorted lists or queries. For example, a value rounded to "3.10" may sort before "3.2" due to character-by-character comparison (the prefix "3.1" precedes "3.2"), but inconsistent decimal places—such as "9.9" versus a rounded "10.0"—can invert numerical order, with "10.0" appearing before "9.9" because '1' < '9'. This affects applications like cataloging observational records, where unnormalized string representations cause apparent misordering. Such inconsistencies extend to database queries on rounded reports, where exact matches fail if the source data retains full precision while queries use rounded equivalents, resulting in missed records. Conversely, multiple unrounded values converging on the same rounded figure (e.g., 22.4°F and 21.6°F both rounding to 22°F) can produce unintended duplicates in aggregated search results, complicating analyses of historical datasets. Directed rounding modes, as occasionally applied in observational protocols, mitigate some mismatches by enforcing consistent behavior but require careful alignment across storage and retrieval systems.
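The lexical-ordering pitfall is easy to reproduce in a few lines of Python (the sample readings are illustrative):

```python
readings = ["9.9", "10.0", "3.2", "3.10"]
print(sorted(readings))             # ['10.0', '3.10', '3.2', '9.9']
print(sorted(readings, key=float))  # ['3.10', '3.2', '9.9', '10.0']
```

Sorting on the numeric value, as in the second call, restores the intended order.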

Historical and Practical Aspects

Development of Rounding Techniques

The earliest known use of rounding techniques appears in ancient Mesopotamia around 2000 BCE, where scribes employed the sexagesimal (base-60) system to approximate measurements in economic and astronomical records. In administrative texts from the Old Babylonian Kingdom of Larsa, rounding was systematically applied to quantities like grain or labor allocations, often truncating or adjusting fractional parts to simplify calculations on clay tablets while minimizing errors in practical contexts. This approach reflected the limitations of sexagesimal notation, where precise fractions were expressible but frequently rounded to whole or convenient units for usability. In geometry, approximations emerged as a tool for handling irrational lengths, with mathematicians like Archimedes (c. 287-212 BCE) using bounding intervals to round values such as \pi between 3 + 10/71 and 3 + 1/7 through the method of exhaustion. These techniques prioritized rigorous bounds over exact values, influencing later geometric computations by emphasizing controlled approximation to avoid overestimation or underestimation in proofs. During the medieval period, Islamic scholars advanced concepts akin to significant figures; for instance, Jamshid al-Kashi (c. 1380-1429) in his 1427 treatise The Key to Arithmetic detailed decimal-based rounding for fractions, computing \pi to 16 decimal places by iteratively refining approximations. Al-Kashi's methods, which involved carrying over digits and limiting precision to essential figures, facilitated high-accuracy astronomical calculations and bridged decimal arithmetic with practical rounding. By the 19th century, rounding gained prominence in statistics, with Francis Galton analyzing measurement errors—including those from rounding—in anthropometric data during the 1880s, as explored in his 1889 work Natural Inheritance, where he quantified how such imprecision affected estimates. This adoption highlighted rounding's role in error propagation, prompting statisticians to model it as a source of noise in empirical distributions. In the 20th century, the IEEE 754 standard, ratified in 1985, formalized rounding modes for floating-point arithmetic, mandating default round-to-nearest with ties to even to ensure reproducibility across computations. Key innovations included bankers' rounding, a method historically used in financial contexts to mitigate cumulative bias by rounding halves to the nearest even digit. Stochastic rounding, generally credited to work by George Forsythe and others in the early 1950s amid early computer simulations, introduced probabilistic decisions at midpoints to reduce accumulated bias in iterative algorithms. The evolution of rounding progressed from manual logarithmic and trigonometric tables—reliant on hand-computed approximations by figures like Henry Briggs in the 17th century—to computational modes in the mid-20th century, where electronic calculators automated modes like truncation or rounding to fixed precision, enhancing efficiency in scientific simulations. This shift, accelerated by early computers like ENIAC in the 1940s, integrated rounding into hardware to balance accuracy and speed, laying groundwork for modern numerical libraries.

Implementations in Programming

In programming, rounding functions are essential for handling numerical precision in computations involving floating-point numbers. These functions vary across languages in their default behaviors, particularly in how they resolve ties (halfway cases like 0.5). For instance, Java's Math.round(double a) method returns the closest long to the argument by adding 0.5 and then taking the floor, effectively rounding halfway cases toward positive infinity—for example, Math.round(0.5) yields 1, while Math.round(-0.5) yields 0. Python's built-in round() function, which since version 3.0 employs banker's rounding (round half to even) to minimize bias in repeated operations, behaves differently: round(0.5) returns 0, round(1.5) returns 2, and round(2.5) returns 2. The C standard library provides functions like round(double x), which rounds to the nearest integer, with halfway cases rounded away from zero regardless of the current floating-point rounding mode—round(0.5) returns 1.0, and round(-0.5) returns -1.0. Related functions such as lround(double x) return the result as a long integer, enabling integer-based computations. In JavaScript, Math.round(x) also rounds to the nearest integer, but its handling of halfway cases follows the same add-0.5-and-floor pattern as Java: Math.round(0.5) returns 1, Math.round(-0.5) returns -0 (effectively 0), Math.round(1.5) returns 2, and Math.round(-1.5) returns -1. Specialized libraries extend these capabilities with configurable modes. In NumPy, a library for numerical computing, np.round(a, decimals=0) rounds elements to the nearest value using half-even rounding for ties, consistent with Python's built-in behavior; for example, np.round([0.5, 1.5, 2.5]) yields [0., 2., 2.]. NumPy also supports np.floor and np.ceil for directional rounding—np.floor toward negative infinity and np.ceil toward positive infinity—including for negative inputs; np.floor([-0.1]) returns [-1.], and np.ceil([-0.1]) returns [-0.]. Ties and negatives are managed to avoid bias, but users must specify modes explicitly for non-default behaviors like half up via custom implementations. Practical examples illustrate these functions alongside common pitfalls. For flooring and ceiling in Python, the following code demonstrates directional rounding:
```python
import math

print(math.floor(3.7))   # 3
print(math.ceil(3.7))    # 4
print(math.floor(-3.7))  # -4
print(math.ceil(-3.7))   # -3
```
A well-known issue arises from binary floating-point representation, where decimal fractions like 0.1 cannot be stored exactly, leading to rounding errors in arithmetic. In Python, 0.1 + 0.2 evaluates to approximately 0.30000000000000004, not exactly 0.3, causing comparisons like 0.1 + 0.2 == 0.3 to return False. Similar discrepancies occur in Java, JavaScript, and C, often requiring epsilon-based comparisons or decimal libraries for precision-sensitive applications. Portability challenges stem from varying default rounding modes and floating-point implementations across languages, even when all adhere to IEEE 754 for binary representation. For example, a value rounded half-even in Python may round half-away-from-zero in C, yielding different results for inputs like 2.5 (2 in Python, 3 in C), which can introduce subtle bugs in cross-language or cross-platform code. Developers must document and test rounding behaviors to ensure consistency, especially in numerical libraries or distributed systems.
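Both the tie-handling discrepancy and the epsilon-comparison workaround can be reproduced within Python itself; here Decimal's ROUND_HALF_UP mimics C's round() behavior on this input:

```python
import math
from decimal import Decimal, ROUND_HALF_UP

print(round(2.5))  # 2: Python's banker's rounding
print(Decimal("2.5").quantize(Decimal("1"), rounding=ROUND_HALF_UP))  # 3: like C round()

# Epsilon-based comparison for binary floating point:
a = 0.1 + 0.2
print(a == 0.3)              # False
print(math.isclose(a, 0.3))  # True
```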

Rounding Standards and Conventions

The IEEE 754 standard for floating-point arithmetic defines four primary rounding modes: round to nearest (with ties to even), round toward zero, round toward positive infinity, and round toward negative infinity, with round to nearest, ties to even serving as the default mode to minimize bias in repeated operations. This standard, originally published in 1985 and revised in 2008 to include decimal floating-point formats, ensures consistent handling of inexact results across computing environments. In financial contexts, rounding conventions vary by jurisdiction and standard; for instance, under the Generally Accepted Accounting Principles (GAAP), round half up is commonly used, where values ending in 5 or greater are rounded upward to the nearest whole unit, to align with reporting precision in financial statements. While ISO 4217 specifies the number of decimal places for currency representation (e.g., two for most fiat currencies), it does not prescribe a universal rounding mode, leaving implementation to regional practices such as half-even rounding in some international banking systems to reduce cumulative errors. Scientific conventions emphasize significant figures, where rounding follows the half-up rule: if the digit following the last significant figure is 5 or greater, the preceding digit is increased by one, ensuring results reflect the precision of the original measurements without introducing undue bias. The National Institute of Standards and Technology (NIST) provides guidelines for measurements, recommending rounding to the same decimal place as the uncertainty's least significant digit, often using round half to even to avoid systematic overestimation in uncertainty analysis and error propagation. ISO 80000-1 outlines general rules for quantities and units, including rounding numbers to maintain consistency with the International System of Units (SI), such as rounding to the nearest multiple of the unit's subdivision while applying arithmetic conventions like half up for isolated values. Post-1985 revisions to IEEE 754, particularly the 2008 update incorporating decimal formats, have influenced these measurement standards by promoting interoperable rounding for binary-decimal conversions. Emerging applications in machine learning model quantization, particularly post-2020 techniques like adaptive rounding, lack a unified standard, with methods such as AdaRound optimizing low-bit representations through data-driven adjustments rather than fixed modes, highlighting ongoing efforts toward standardization in high-impact deployments. Recent advancements as of 2024 include Stochastic Rounding 2.0 for improved complexity analysis in low-precision arithmetic and model-preserving adaptive rounding techniques for quantization. For example, meteorological conventions often round temperature readings to the nearest degree using half up to balance readability and precision in forecasts.

References

1. Rounding — Wolfram MathWorld
2. Rules of Rounding Off Data
3. Rounding and Estimation: 1.5 Significant Figures — The Open University
4. Directed Rounding Arithmetic Operations — ISO WG21 document (Dec 5, 2008)
5. Algorithms for Stochastically Rounded Elementary Arithmetic Operations (Mar 26, 2021)
6. Tutorial: Rounding of Numbers — Statistics Explained, Eurostat
7. Definition — Place Value Concepts: Rounding — Media4Math
8. Rounding Definition (Illustrated Mathematics Dictionary) — Math is Fun
9. IEEE Arithmetic
10. IEEE 754 Arithmetic and Rounding — Arm Developer
11. 4.6: Significant Figures and Rounding — Chemistry LibreTexts (Feb 5, 2024)
13. What is Rounding Numbers? Rules, Definition, Examples, Facts
14. Absolute and Relative Error — cs.wisc.edu (PDF, Jan 24, 2013)
15. Rounding Methods — Math is Fun
16. What Every Computer Scientist Should Know About Floating-Point Arithmetic
17. Roundoff Error — WPI
18. IEEE 754 Arithmetic and Rounding — Arm Developer
19. Floor Function — Wolfram MathWorld
20. Math.Truncate Method (System) — Microsoft Learn
21. A Rigorous Framework for Fully Supporting the IEEE Standard for Floating-Point Arithmetic (PDF)
22. \ Operator — Visual Basic — Microsoft Learn (Sep 15, 2021)
23. Comparison of Different Rounding Modes — arXiv (PDF, May 31, 2020)
24. Rounding Algorithms Compared — EE Times
25. Everything You Wanted to Know About Rounding (But Were Too Afraid to Ask) (Jun 29, 2015)
26. Rounding Methods Calculator (Aug 1, 2025)
27. Stochastic Rounding: Implementation, Error Analysis and Applications (Mar 9, 2022)
28. Stochastic Rounding: Implementation, Error Analysis, and Applications (PDF)
29. Stochastic Rounding 2.0, With a View Towards Complexity Analysis (Nov 1, 2024)
30. How to Express "Round Up to the Nearest 2" in Mathematical Notation? (Aug 18, 2017)
31. MROUND Function — Microsoft Support
32. Rounding Numbers — Learning Hub
33. What is Rounding To The Nearest Tens? Definition, Steps, Examples
34. CEILING, FLOOR, ROUND: Google Sheets Functions Guide 2025 (Nov 11, 2024)
35. Round a Number Up to Nearest Multiple — Excel Formula (Sep 25, 2020)
36. Logarithmic Rounding — File Exchange, MATLAB Central, MathWorks (May 8, 2009)
37. How to Get the Order of Magnitude of a Number with TSQL (Jun 28, 2016)
38. Map Scale Rounding — r/gis, Reddit (Sep 21, 2018)
39. Floating Point and IEEE 754 — NVIDIA Docs (PDF, Oct 2, 2025)
40. Rounding Algorithms — Arm Developer
41. IEEE 754 Floating Point Representation (PDF)
42. Numeric Considerations for Native Floating-Point
43. 15. Floating-Point Arithmetic: Issues and Limitations — Python documentation
44. What is the Best Way to Understand and Convert to 8 Fraction Form? (Aug 14, 2025)
45. How to Convert Measurements in Baking Recipes — Allrecipes (Mar 15, 2021)
46. Evaluating Ecohydrological Model Sensitivity to Input Variability with... (PDF, Jul 18, 2022)
47. Quantization and Dither: A Theoretical Survey — ResearchGate
48. Monte Carlo Arithmetic: How to Gamble with Floating Point and Win
49. A Class of Fast and Accurate Summation Algorithms
50. Some Issues Related to Double Rounding — Guillaume Melquiond (PDF)
51. When Double Rounding Is Odd — HAL-Inria (PDF, Dec 5, 2014)
52. Correctly Rounded Functions for Better Arithmetic — IEEE Xplore
53. Automated Surface Observing System (ASOS) (PDF)
54. Training Guide in Surface Weather Observations (PDF)
55. JetStream Max: Surface Weather Plot Symbols — NOAA (Feb 10, 2025)
56. JO 7900.5C — Surface Weather Observing (PDF, Dec 21, 2012)
57. Goldilocks Rounding: Achieving Balance Between Accuracy... — NIH (Jan 5, 2021)
58. Epidemiology of Rounding Error — PMC, NIH (Dec 22, 2024)
59. The Making of a Scribe: Errors, Mistakes and Rounding Numbers in the Old Babylonian Kingdom of Larsa — SpringerLink
60. al-Kashi (1390-1450) — Biography, MacTutor History of Mathematics
61. Bankers' Rounding — Fabulous Adventures in Coding (Sep 26, 2003)
63. Built-in Functions — Python documentation (round())
64. Math.round() — JavaScript, MDN Web Docs (Jul 10, 2025)
65. Inconsistent Rounding of Printed Floating-Point Numbers (Oct 13, 2010)
66. IEEE 754-2008 — IEEE SA (Aug 29, 2008)
67. Rounding Error: What It Is, How It Works, Examples — Investopedia
68. GLP 9: Rounding Expanded Uncertainties and Calibration Values (PDF)
69. Up or Down? Adaptive Rounding for Post-Training Quantization (PDF)