
CORDIC

CORDIC, short for COordinate Rotation DIgital Computer, is an iterative algorithm that computes elementary functions including trigonometric ratios (sine, cosine, tangent), hyperbolic functions, square roots, multiplications, divisions, and logarithms using only additions, subtractions, bit shifts, and a small lookup table of arctangents. Originally developed by Jack E. Volder in 1959 for efficient hardware realization of trigonometric computations in real-time airborne navigation systems, it avoids complex operations like multiplication and division, making it ideal for resource-constrained digital environments. The algorithm operates in two primary modes: rotation mode, which rotates a given vector by a specified angle to yield functions such as sine and cosine, and vectoring mode, which aligns an input vector with the x-axis to compute magnitudes and angles, enabling Cartesian-to-polar conversions. It achieves this through a series of micro-rotations by predefined angles \arctan(2^{-i}) for iteration step i, where each step involves conditional additions or subtractions scaled by powers of two, ensuring convergence within a limited range (typically |z| < 1.743 radians for circular coordinates). A key characteristic is the introduction of a constant scale factor (with compensation constant approximately 0.6073 for infinite iterations in the circular mode), because each micro-rotation slightly lengthens the vector; this factor must be compensated in implementations for accuracy. Generalized by J.S. Walther in 1971 to include hyperbolic and linear coordinates, CORDIC has evolved into a versatile tool for fixed-point arithmetic in embedded systems. Over five decades, advancements have extended its use to field-programmable gate arrays (FPGAs), digital signal processors, and applications in adaptive filtering, graphics acceleration, eigenvalue decomposition, and wireless communications, maintaining its efficiency with minimal hardware overhead.

Overview

Definition and Purpose

CORDIC, an acronym for COordinate Rotation DIgital Computer, is an iterative algorithm that enables the computation of a wide range of elementary mathematical functions through simple arithmetic operations. Developed for real-time airborne computation in the late 1950s, it addresses the need for efficient numerical processing in resource-limited digital systems, such as those used in aerospace applications. The primary purpose of CORDIC is to evaluate functions including trigonometric ratios (sine, cosine, tangent), hyperbolic functions (sinh, cosh, tanh), logarithms, square roots, multiplications, and divisions, all without relying on complex hardware like multipliers. This is achieved by decomposing the computations into a series of coordinate rotations using only bit shifts, additions or subtractions, and a compact table of precomputed arctangent constants. Key advantages of CORDIC include its low hardware implementation cost, as it eliminates the need for multiplication circuitry, making it ideal for fixed-point arithmetic in embedded systems. Additionally, the algorithm scales effectively with desired precision, allowing straightforward adjustments in bit width while maintaining computational efficiency across precision levels. These attributes have made CORDIC a staple in digital signal processing, graphics, and control systems where hardware simplicity and performance are critical.

Basic Principle

The CORDIC algorithm fundamentally relies on decomposing an arbitrary rotation in the plane into a weighted sum of elementary micro-rotations. These micro-rotations are defined by angles \theta_i = \arctan(2^{-i}) for i = 0, 1, 2, \dots, whose tangents are exactly the negative powers of two. This decomposition enables the approximation of the rotation using only shifts, additions, and subtractions, avoiding the need for multiplications or divisions in the core computation. The choice of these specific angles ensures that each micro-rotation can be implemented efficiently through bit shifts by i positions, making CORDIC particularly suitable for hardware implementations. The core iterative process in the circular coordinate system updates the coordinates (x_i, y_i) and the residual angle z_i as follows: \begin{align*} x_{i+1} &= x_i - d_i y_i \cdot 2^{-i}, \\ y_{i+1} &= y_i + d_i x_i \cdot 2^{-i}, \\ z_{i+1} &= z_i - d_i \theta_i, \end{align*} where d_i = \pm 1 is a decision variable selected at each step, and \theta_i = \arctan(2^{-i}) are precomputed constants stored in a lookup table to facilitate rapid access during iterations. This table eliminates the need for on-the-fly computation of the arctangents, enhancing efficiency. The iterations proceed until the residual angle z_n approaches zero after n steps, yielding the rotated coordinates. Each pseudo-rotation lengthens the vector by a factor 1/\cos(\theta_i) = \sqrt{1 + 2^{-2i}}, so after all iterations the outputs exceed the true rotated coordinates by the gain 1/K \approx 1.64676, where K = \prod_{i=0}^{\infty} \cos(\arctan(2^{-i})) \approx 0.60725. To obtain the true rotated vector, the final outputs must therefore be multiplied by the compensation factor K, which can be precomputed or absorbed into the initial values. The algorithm converges for initial angles satisfying |z_0| < \sum_{i=0}^{\infty} \arctan(2^{-i}) \approx 1.7433 radians (approximately 99.9 degrees), ensuring the decomposition covers the required range without redundancy in the angle series.
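
The shift-add form follows directly from factoring the plane rotation matrix; the short derivation below is a standard identity, included here only to make the origin of the scale factor explicit: \begin{align*} \begin{pmatrix} x_{i+1} \\ y_{i+1} \end{pmatrix} = \begin{pmatrix} \cos\theta_i & -d_i \sin\theta_i \\ d_i \sin\theta_i & \cos\theta_i \end{pmatrix} \begin{pmatrix} x_i \\ y_i \end{pmatrix} = \cos\theta_i \begin{pmatrix} 1 & -d_i 2^{-i} \\ d_i 2^{-i} & 1 \end{pmatrix} \begin{pmatrix} x_i \\ y_i \end{pmatrix}, \end{align*} since \tan\theta_i = 2^{-i}. Dropping the common factor \cos\theta_i = 1/\sqrt{1 + 2^{-2i}} leaves only the shift-and-add "pseudo-rotation" actually computed in each step; the discarded factors accumulate into the constant K = \prod_i \cos(\arctan 2^{-i}) \approx 0.60725, which is compensated once at the end rather than per iteration.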

History

Invention and Early Use

The CORDIC algorithm was invented by Jack E. Volder between 1956 and 1959 while he worked in the aeroelectronics department at Convair, specifically to support the development of a digital navigation computer for the Convair B-58 Hustler supersonic bomber. Volder's motivation stemmed from the need to replace bulky analog resolvers with efficient digital methods for computing trigonometric functions, enabling real-time coordinate transformations essential for aircraft guidance. His initial work culminated in an internal Convair report titled "Binary Computation Algorithms for Coordinate Rotation and Function Generation" dated June 15, 1956, which laid the foundational concepts for iterative shift-and-add operations. Volder first publicly described the full CORDIC technique in his seminal 1959 paper, "The CORDIC Trigonometric Computing Technique," published in the IRE Transactions on Electronic Computers. This publication detailed how CORDIC could perform rotations and vectoring using only additions, subtractions, and bit shifts, making it ideal for the limited resources of early digital computers during the transition from vacuum tubes to transistors. The algorithm addressed the computational demands of missile trajectory calculations and inertial navigation, where high precision and speed were critical in fixed-point arithmetic environments. Early hardware realizations of CORDIC emerged in the 1960s, with Convair completing a demonstration system called CORDIC I in 1960 to solve radar fix-taking problems for aerospace applications. This system marked the initial practical implementation, integrated into the B-58's guidance computer to facilitate onboard digital processing of navigation data in real time. Although no patent was filed by Volder himself, the technique's adoption in military hardware underscored its immediate value in advancing digital computation for defense projects.

Key Developments and Extensions

In 1971, John S. Walther extended the CORDIC algorithm to encompass hyperbolic and linear coordinate systems, enabling a unified computation of elementary functions such as sine, cosine, tangent, hyperbolic tangent, multiplication, and division through simple parameter variations. This generalization was pivotal for the implementation in the Hewlett-Packard HP-35 calculator, the first handheld scientific calculator, where it facilitated efficient transcendental function evaluation without dedicated multipliers. During the 1980s and 1990s, CORDIC saw widespread integration into digital signal processing (DSP) hardware, particularly in VLSI architectures for applications like filtering, discrete Fourier transforms (DFT), and discrete cosine transforms (DCT). Notable advancements included pipelined designs for high-throughput processing, as demonstrated in early CMOS implementations for vector arithmetic. Additionally, floating-point CORDIC variants compliant with the IEEE 754 standard emerged, enhancing accuracy in DSP units by handling exponent differences and achieving full precision for elementary operations. Retrospectives in the 2000s, marking 50 years since CORDIC's invention, highlighted its evolution toward parallel and pipelined architectures to address latency and throughput bottlenecks. These reviews emphasized techniques like higher-radix iterations (e.g., radix-4) and angle recoding, which reduced computation steps by up to 50% while maintaining hardware efficiency, making CORDIC suitable for embedded systems and matrix decompositions such as singular value decomposition (SVD). Recent developments through 2025 have focused on FPGA optimizations for AI accelerators, with unified architectures leveraging CORDIC for both linear multiply-accumulate operations and nonlinear activations in neural networks. For instance, pipelined designs have achieved up to 2.5× resource savings and 3× power efficiency compared to prior systolic array methods, enabling scalable recurrent neural network (RNN) acceleration on resource-constrained hardware. These advancements address post-2010 trends in low-power, high-precision computing for edge AI.

Algorithm Fundamentals

Iterative Process

The iterative process of the CORDIC algorithm begins with initialization of the input vector and angle accumulator. In rotation mode, the initial values are set as x_0 = x, y_0 = y, and z_0 = \theta, where (x, y) is the input vector to be rotated by angle \theta. In vectoring mode, the initialization is x_0 = x, y_0 = y, and z_0 = 0, aiming to align the vector with the x-axis. The decision variable d_i is determined at each step as d_i = \operatorname{sign}(z_i) in rotation mode to reduce the magnitude of z_i, or d_i = -\operatorname{sign}(y_i) in vectoring mode to drive y_i toward zero. The core iteration loop proceeds sequentially for i = 0 to n-1, where n is the number of iterations chosen based on desired precision, typically n \approx W for W-bit accuracy in fixed-point arithmetic. At each iteration, the updates are applied using only shifts and additions: \begin{align*} x_{i+1} &= x_i - d_i y_i 2^{-i}, \\ y_{i+1} &= y_i + d_i x_i 2^{-i}, \\ z_{i+1} &= z_i - d_i \tan^{-1}(2^{-i}). \end{align*} These steps approximate the rotation by the arctangent angle \alpha_i = \tan^{-1}(2^{-i}), with the shift by 2^{-i} replacing multiplication. The process repeats until the residual angle or vector component is sufficiently small, achieving convergence within the iteration count. Adaptations for hyperbolic or linear coordinate systems modify the updates accordingly, such as using \tanh^{-1}(2^{-i}) for hyperbolic rotations.

Error analysis reveals several sources of inaccuracy inherent to the finite-precision implementation. Quantization error arises from truncating the iterations, bounding the residual angle by \alpha_n = \tan^{-1}(2^{-n}), and from rounding in fixed-point arithmetic, with mean squared error (MSE) accumulating as approximately n \epsilon^2 / 3, where \epsilon = 2^{-b} for b fractional bits. Range limitations stem from the algorithm's convergence domain, restricting input angles to approximately |\theta| < \sum_{i=0}^{n-1} \arctan(2^{-i}) (approximately 1.743 radians for large n) in circular coordinates to prevent divergence. Additionally, each iteration introduces a scale factor K_i = \sqrt{1 + 2^{-2i}}, resulting in an overall gain K_n = \prod_{i=0}^{n-1} K_i \approx 1.64676 for many iterations in circular mode; this is corrected post-iteration by multiplying the outputs by 1/K_n, often precomputed as a constant.

Precision trade-offs depend on the arithmetic format. Fixed-point implementations, common in hardware due to their simplicity, require extra guard bits (roughly \lceil \log_2 n \rceil, in practice 2-3 beyond W) to mitigate rounding and overflow, achieving close to full W-bit accuracy with n \approx W iterations. Floating-point variants offer dynamic range but incur higher complexity for exponent handling and scale factor adjustment per iteration, often necessitating initial fixed-point conversion for efficiency. Overflow is managed by restricting input magnitudes (e.g., \sqrt{x^2 + y^2} < 1) and incorporating MSB guard bits in the datapath to accommodate the scale expansion.
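
As a concrete illustration of the mode-dependent decision rule, the small C sketch below (the enum and function names are illustrative, not taken from any particular library) returns d_i for either mode:
/* Sketch: mode-dependent direction choice for the i-th micro-rotation.
   Returns d_i = +1 or -1. Names are illustrative. */
typedef enum { CORDIC_ROTATE, CORDIC_VECTOR } cordic_mode_t;

static int cordic_direction(cordic_mode_t mode, int y, int z) {
    if (mode == CORDIC_ROTATE)
        return (z >= 0) ? +1 : -1;   /* drive the residual angle z toward zero */
    else
        return (y <= 0) ? +1 : -1;   /* d = -sign(y): drive the y-component toward zero */
}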

Coordinate Systems

The CORDIC algorithm operates within distinct coordinate systems that define the geometric framework for its iterations, enabling efficient computation of various functions through vector rotations or transformations using only shifts and additions. These systems—circular, hyperbolic, and linear—each employ predefined "angle" sequences derived from powers of 2, tailored to specific mathematical spaces: the Euclidean plane for circular, the Lorentz (Minkowski) plane for hyperbolic, and a linear accumulation for the linear system. The choice of system determines the update rules for the vector components and the associated convergence properties.

In the circular coordinate system, introduced by Volder, the algorithm performs rotations in the Euclidean plane by decomposing an arbitrary rotation angle θ into a sum of elementary angles α_i = \arctan(2^{-i}) for i = 0, 1, 2, \dots. Each iteration rotates the vector by ±α_i, selected to approximate θ while maintaining orthogonality. The infinite sum of these angles converges to approximately 1.743 radians (about 99.9°), and practical implementations with finite precision (e.g., 32 bits) achieve convergence for |θ| up to approximately 99.7°, beyond which pre-rotation or range reduction is required to avoid divergence. This system incurs a gain of approximately 1.64676 due to the magnitude growth from successive pseudo-rotations, which is commonly compensated by initializing vectors with the reciprocal factor of about 0.60725.

The hyperbolic coordinate system, extended by Walther, adapts CORDIC for transformations in the Lorentz plane, using elementary angles α_i = \artanh(2^{-i}) for i = 1, 2, 3, \dots to compute hyperbolic functions via "pseudo-rotations" that preserve the Minkowski metric. Convergence requires careful iteration selection, as the standard sequence leads to non-convergence for certain angles; specifically, iterations at i = 4, 13, 40, \dots must be repeated to keep the remaining angle sum large enough, giving a convergence interval of approximately |θ| < 1.118 rad. Unlike the circular case, the scale factor K_h ≈ 0.8281 is less than 1, resulting in magnitude shrinkage, and the repeated iterations at the designated indices (e.g., i = 4 and i = 13 for typical precisions) also stabilize the process.

The linear coordinate system, also from Walther's unification, treats 2^{-i} as the "angle" increments for i = 0, 1, 2, \dots, enabling multiplication and division by accumulating products in a z-register without geometric rotation. Here, the y-component updates as y_{i+1} = y_i + d_i \cdot 2^{-i} x_i (with d_i = \pm 1 based on the sign of the residual), while x remains unchanged, effectively performing linear shifts and adds for scalar operations. This system has no inherent scale factor, as there is no vector magnitude alteration, and convergence is limited not by an angle range but by the requirement that the accumulated quantity fit within the sum of the increments.

These systems are unified through a generalized rotation formula that adapts the update rules via a parameter m: x_{i+1} = x_i - m d_i 2^{-i} y_i and y_{i+1} = y_i + d_i 2^{-i} x_i, where m = 1 for circular coordinates, m = 0 for linear, and m = -1 for hyperbolic, with d_i chosen by the operating mode (e.g., d_i = \sgn(z_i) in rotation mode). This formulation, together with the mode-specific angle table and the iteration-index adjustments noted above, allows a single algorithmic structure to handle all cases.
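
The unified update can be written compactly in code; the following C sketch is illustrative only (double precision for readability, hypothetical function name), parameterizing the x-update by m ∈ {+1, 0, −1}:
#include <math.h>

/* Sketch of one generalized CORDIC micro-rotation (Walther's unified form).
   m = +1 circular, 0 linear, -1 hyperbolic; d is the direction (+1 or -1);
   alpha is the table angle for this step: atan(2^-i), 2^-i, or atanh(2^-i). */
static void cordic_step(int m, int d, int i, double alpha,
                        double *x, double *y, double *z) {
    double t = ldexp(1.0, -i);              /* 2^{-i}, i.e. the shift */
    double x_new = *x - m * d * t * (*y);
    double y_new = *y + d * t * (*x);
    *z -= d * alpha;
    *x = x_new;
    *y = y_new;
}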

Modes of Operation

Rotation Mode

In rotation mode, the CORDIC algorithm rotates an input vector (x_0, y_0) by a specified angle z_0, producing approximate coordinates of the rotated vector after a series of iterative micro-rotations. This mode is designed to compute the transformation efficiently without a full multiplier, relying instead on shifts and adds. The process begins with the initial vector and angle, and proceeds through n iterations where each step applies a small rotation by \arctan(2^{-i}) radians, approximating the total rotation z_0. The direction of each micro-rotation is determined by the sign of the current residual angle z_i: d_i = +1 if z_i > 0 (counterclockwise rotation) and d_i = -1 if z_i < 0 (clockwise rotation), ensuring that z_{i+1} = z_i - d_i \arctan(2^{-i}) progressively drives the residual angle toward zero. The vector components are updated as x_{i+1} = x_i - d_i 2^{-i} y_i and y_{i+1} = y_i + d_i 2^{-i} x_i, using only arithmetic shifts for the factor 2^{-i}. This decision rule distinguishes rotation mode from vectoring mode, where the goal is angle extraction rather than applying a known angle. Upon convergence after sufficient iterations (typically 16–32 for common precisions), the outputs are the scaled rotated coordinates: x_n \cdot K \approx x_0 \cos z_0 - y_0 \sin z_0 and y_n \cdot K \approx x_0 \sin z_0 + y_0 \cos z_0, where K \approx 0.607252935 is the constant arising from the product of \cos(\arctan(2^{-i})) over all iterations, so a post-scaling step is required for accurate results. Rotation mode finds application in scenarios requiring vector transformations, such as matrix rotations in graphics processing and phase rotations in digital signal modulation schemes.
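
A common way to absorb the scale factor, used for example in the fixed-point C routine shown later in this article, is to preload the gain into the initial vector rather than multiplying afterwards: choosing x_0 = K \approx 0.607253, y_0 = 0, z_0 = \theta makes the final outputs directly x_n \approx \cos\theta and y_n \approx \sin\theta, with no post-scaling step. Because the preload is just a different starting point, the per-iteration shift-add structure is unchanged.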

Vectoring Mode

In vectoring mode, the CORDIC algorithm processes an input vector (x_0, y_0) to compute its polar coordinates, specifically the angle \theta \approx \operatorname{atan2}(y_0, x_0) and the magnitude r \approx \sqrt{x_0^2 + y_0^2}. This mode operates by iteratively rotating the vector toward the positive x-axis until the y-component approaches zero, effectively decomposing the vector into its radial and angular components using only shift-and-add operations. The process begins with an initial angle accumulator z_0 = 0, and each iteration applies a micro-rotation determined by the current vector's orientation to progressively nullify the y-coordinate. The decision logic in vectoring mode relies on the sign of the current y-component: d_i = -\sgn(y_i), where \sgn is taken as +1 for non-negative values and -1 otherwise. This choice directs the rotation to counteract the y-displacement, aligning the vector with the x-axis. The iterative updates are given by: \begin{align*} x_{i+1} &= x_i - d_i \cdot y_i \cdot 2^{-i}, \\ y_{i+1} &= y_i + d_i \cdot x_i \cdot 2^{-i}, \\ z_{i+1} &= z_i - d_i \cdot \arctan(2^{-i}). \end{align*} After n iterations, the y-component y_n \approx 0, the accumulated angle z_n \approx \theta, and the x-component x_n \approx r / K, where K = \prod_{i=0}^{n-1} \cos(\arctan(2^{-i})) \approx 0.60725 is the inherent scaling constant resulting from the pseudo-rotations. To obtain the true magnitude, the output is scaled by K (r \approx K \cdot x_n). The basic mode converges for input angles within approximately -1.74 to 1.74 radians without preprocessing, but extensions handle wider ranges. Handling the full 360-degree range requires an initial adjustment that maps the input vector into the convergence region (for example, negating both components when x_0 < 0), followed by an angle correction based on the original signs of x_0 and y_0: if x_0 < 0, the result is offset by +\pi when y_0 \geq 0 or by -\pi when y_0 < 0, while no correction is needed for x_0 > 0, where the algorithm already returns angles in (-\pi/2, \pi/2). These adjustments ensure the computed angle matches the atan2 convention. The magnitude computation remains quadrant-independent because it depends only on the squared components. This preprocessing enables accurate angle recovery across all quadrants while preserving the algorithm's efficiency.
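
For concreteness, the following C sketch implements circular vectoring in double precision (purely for readability; hardware versions use fixed point and a precomputed table, and the function name is illustrative), including the quadrant pre-rotation described above:
#include <math.h>

#define CORDIC_N 32

/* Illustrative sketch of circular vectoring mode.
   Writes atan2(y0, x0) to *angle and sqrt(x0^2 + y0^2) to *mag. */
static void cordic_vectoring(double x0, double y0, double *angle, double *mag) {
    const double pi = 3.14159265358979323846;
    double x = x0, y = y0, z = 0.0, gain = 1.0, offset = 0.0;

    /* Pre-rotation by pi: map x < 0 into the convergence range and
       record the offset to add back at the end. */
    if (x0 < 0.0) {
        x = -x0;
        y = -y0;
        offset = (y0 >= 0.0) ? pi : -pi;
    }
    for (int i = 0; i < CORDIC_N; ++i) {
        double t = ldexp(1.0, -i);            /* the shift 2^{-i} */
        int d = (y <= 0.0) ? +1 : -1;         /* d_i = -sign(y_i) */
        double xn = x - d * y * t;
        double yn = y + d * x * t;
        z -= d * atan(t);                     /* table entry arctan(2^{-i}) */
        x = xn;
        y = yn;
        gain *= sqrt(1.0 + t * t);            /* accumulates to ~1.6468 = 1/K */
    }
    *angle = z + offset;
    *mag = x / gain;                          /* compensate the CORDIC gain */
}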

Computable Functions

Trigonometric Functions

CORDIC computes sine and cosine functions in rotation mode within the circular coordinate system by successively rotating an initial unit vector through the desired angle θ using predefined arctangent increments. The algorithm begins with the initialization x_0 = 1, y_0 = 0, and z_0 = \theta, where θ is restricted to the range [-\pi/2, \pi/2], with larger angles handled by quadrant reduction. After n iterations, the final values approximate x_n \approx K \cos \theta and y_n \approx K \sin \theta, where the gain K = \prod_{i=0}^{n-1} \sqrt{1 + 2^{-2i}} \approx 1.64676 accounts for the magnitude expansion inherent in the rotation steps. Post-processing involves dividing both x_n and y_n by K to obtain the sine and cosine values, ensuring accuracy without multiplications during the core iterations.

In vectoring mode, CORDIC calculates the arctangent function by aligning an input vector with the x-axis and accumulating the rotation angle. For \operatorname{atan2}(i, r) of a point with real part r and imaginary part i, the process initializes with x_0 = r, y_0 = i, and z_0 = 0, assuming r > 0 and the angle in the first quadrant. The iterations drive y toward zero, yielding z_n \approx \arctan(i/r) as the output angle, while x_n \approx K \sqrt{r^2 + i^2}. This mode uses the same arctangent table as rotation mode but selects rotation directions based on the sign of y to minimize the y-component.

To extend the arctangent to the full atan2 function over (-\pi, \pi], quadrant adjustments are applied via initial vector modifications before entering vectoring mode. The input coordinates (x, y) are transformed by swapping and negating components to map the point to the first quadrant—for example, for a point in the second quadrant, set x' = y, y' = -x—and the corresponding quadrant offset (0, π/2, π, or -π/2) is added to the angle returned by vectoring mode. This preprocessing ensures the CORDIC core operates within its convergence range while correctly handling the full angular span.

Square root computation emerges as a byproduct of circular vectoring mode, where the final x-component represents the magnitude after scaling. Initializing with x_0 = a, y_0 = b, and z_0 = 0 for non-negative a and b, the output x_n / K \approx \sqrt{a^2 + b^2} provides the Euclidean norm after the iterations and scale-factor compensation. For computing the single-argument \sqrt{v} (v ≥ 0), hyperbolic vectoring mode is used instead, with initial values x_0 = v + 0.25, y_0 = v - 0.25, z_0 = 0 after normalizing v to the range [0.25, 1] by bit shifting; the result is x_n / K_h \approx \sqrt{v}, where K_h is the hyperbolic scale factor. This approach avoids explicit multiplications and is efficient for multiplier-free fixed-point systems.
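
The single-argument square-root initialization works because of a simple identity (a standard observation, stated here for completeness): (v + \tfrac{1}{4})^2 - (v - \tfrac{1}{4})^2 = v, so the quantity \sqrt{x_0^2 - y_0^2} produced by hyperbolic vectoring with x_0 = v + \tfrac{1}{4}, y_0 = v - \tfrac{1}{4} is exactly \sqrt{v}. Normalizing v into [0.25, 1] by an even number of bit shifts keeps the ratio y_0/x_0 inside the hyperbolic convergence range, and the shift is undone on the result using half as many bits.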

Hyperbolic and Linear Functions

The CORDIC algorithm extends beyond circular coordinates to a hyperbolic coordinate system whose elementary angles are based on the inverse hyperbolic tangent, enabling efficient computation of hyperbolic functions, exponentials, and logarithms without multiplications. In hyperbolic mode (parameter m = -1), the iterations employ increments α_i = artanh(2^{-i}) for i = 1, 2, 3, \dots, with repeated steps at specific indices (such as i = 4 and i = 13) to ensure convergence within the range |θ| < 1.118 radians. The scale factor K_h for this mode is approximately 0.8282, requiring post-processing compensation for accurate results.

To compute hyperbolic sine and cosine in rotation mode, initialize the vector as x_0 = 1, y_0 = 0, z_0 = θ, and drive z_n to 0 by selecting iteration directions σ_i = sign(z_i). The iterations are: \begin{align*} x_{i+1} &= x_i + \sigma_i \, 2^{-i} y_i, \\ y_{i+1} &= y_i + \sigma_i \, 2^{-i} x_i, \\ z_{i+1} &= z_i - \sigma_i \, \artanh(2^{-i}). \end{align*} After n iterations, the outputs approximate y_n / K_h ≈ sinh θ and x_n / K_h ≈ cosh θ, where the hyperbolic scale factor K_h = ∏ \sqrt{1 - 2^{-2i}} (taken over the iteration indices actually used, with repetitions counted) accounts for the magnitude shrinkage. This setup leverages the hyperbolic rotation matrix identity, transforming the initial vector (1, 0) by the angle θ.

The natural logarithm can be derived in hyperbolic vectoring mode by exploiting the relation ln x = 2 \artanh\left( \frac{x-1}{x+1} \right), valid for x > 0. Initializing x_0 = x + 1, y_0 = x - 1, z_0 = 0 and driving y_n to 0 by selecting σ_i = -sign(y_i), with the same hyperbolic iterations as above, yields z_n ≈ (1/2) ln x, so ln x ≈ 2 z_n. (The equivalent initialization x_0 = (1 + x)/(1 - x), y_0 = 1, usable for 0 < x < 1, gives z_n ≈ -(1/2) ln x instead.) In practice the argument is first normalized into the convergence range, with the removed power of two restored afterwards as a multiple of ln 2. Convergence requires the same repeated iterations as in rotation mode to avoid divergence.

Exponential functions are computed via hyperbolic rotations, as exp θ = cosh θ + sinh θ. In hyperbolic rotation mode, initialize x_0 = 1 / K_h, y_0 = 0, z_0 = θ; after the iterations, x_n + y_n ≈ exp θ. For inverse hyperbolic functions like arcsinh u, vectoring mode initializes x_0 = \sqrt{1 + u^2}, y_0 = u, z_0 = 0, yielding z_n ≈ arcsinh u = \ln\left( u + \sqrt{u^2 + 1} \right), which supports further exponential and logarithmic computations through functional identities in chained operations.

In linear mode (m = 0), CORDIC simplifies to straight-line trajectories for multiplication and division, with increments α_i = 2^{-i} and no inherent scaling (K_l = 1). The iterations become x_{i+1} = x_i, y_{i+1} = y_i + σ_i 2^{-i} x_i, z_{i+1} = z_i - σ_i 2^{-i}. For multiplication in rotation mode, set x_0 to the multiplicand, y_0 = 0, and z_0 to the multiplier, driving z_n to 0; the result is y_n ≈ x_0 z_0. For division in vectoring mode, set x_0 to the divisor, y_0 to the dividend, and z_0 = 0, driving y_n to 0; the result is z_n ≈ y_0 / x_0. This mode needs no repeated iterations, but convergence requires the multiplier (or quotient) to lie within the sum of the increments, about 2 when the iterations start at i = 0, so operands are typically pre-scaled in fixed-point implementations.
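
Linear mode is the simplest to see in code; the C sketch below (double precision, illustrative function names) realizes multiplication in rotation mode and division in vectoring mode using only halving and addition, mirroring the update rules above:
#include <math.h>

#define LIN_N 40   /* iterations; roughly one result bit per step */

/* Sketch: linear-mode rotation (multiplication). Requires |b| < 2. */
static double cordic_mul(double a, double b) {
    double y = 0.0, z = b;
    for (int i = 0; i < LIN_N; ++i) {
        double t = ldexp(1.0, -i);      /* 2^{-i} */
        int d = (z >= 0.0) ? +1 : -1;
        y += d * a * t;                 /* y accumulates a*b */
        z -= d * t;                     /* drive z toward 0 */
    }
    return y;                           /* y ~= a * b */
}

/* Sketch: linear-mode vectoring (division). Requires a > 0 and |b/a| < 2. */
static double cordic_div(double b, double a) {
    double y = b, z = 0.0;
    for (int i = 0; i < LIN_N; ++i) {
        double t = ldexp(1.0, -i);
        int d = (y <= 0.0) ? +1 : -1;   /* d = -sign(y) */
        y += d * a * t;                 /* drive y toward 0 */
        z -= d * t;                     /* z accumulates b/a */
    }
    return z;                           /* z ~= b / a */
}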

Implementation

Software Approaches

Software implementations of the CORDIC algorithm emphasize sequential iteration to perform rotations and vectoring using arithmetic shifts and additions, avoiding multiplications to suit resource-constrained environments such as embedded processors, as well as numerical prototyping tools. These approaches precompute an arctangent table for the micro-rotation angles \alpha_j = \tan^{-1}(2^{-j}) and apply the iterations in a loop, with results scaled by the compensation factor K \approx 0.60725 to correct for the elongation effect. Finite-precision effects introduce errors bounded by the tail of the angle series; for example, with 12 iterations, the maximum angular error is less than 2^{-12} in the computed values. A basic pseudocode for rotation mode, which rotates an input vector (x_0, y_0) by an angle \theta, is as follows:
x ← x₀
y ← y₀
z ← θ
for j = 0 to n-1:
    d = 1 if z ≥ 0 else -1
    x_new = x - d · y · 2^{-j}
    y_new = y + d · x · 2^{-j}
    z_new = z - d · αⱼ
    x ← x_new; y ← y_new; z ← z_new
x_out ← x · K
y_out ← y · K
Here, \alpha_j are precomputed from the arctan table as an array of constants, and n determines the precision (typically 16–32 iterations). The final outputs approximate x_0 \cos \theta - y_0 \sin \theta and x_0 \sin \theta + y_0 \cos \theta, respectively. In embedded systems, fixed-point arithmetic in languages like C enables efficient execution on microcontrollers without floating-point units. Implementations often use a Qm.n format (e.g., Q2.30 for 32-bit integers, reserving 2 bits for sign and integer part), where the shifts correspond to integer right-shifts and additions handle the accumulations. A representative C function for sine and cosine computation in rotation mode, assuming a precomputed table cordic_ctab[] for \alpha_j scaled to fixed-point, is:
void cordic(int theta, int *s, int *c, int n) {
    int k, d, tx, ty, tz;
    int x = cordic_1K, y = 0, z = theta;  // cordic_1K ≈ 0.60725 in fixed-point
    n = (n > CORDIC_NTAB) ? CORDIC_NTAB : n;
    for (k = 0; k < n; ++k) {
        d = (z >= 0) ? 1 : -1;  // Direction decision
        tx = x - d * (y >> k);
        ty = y + d * (x >> k);
        tz = z - d * cordic_ctab[k];
        x = tx; y = ty; z = tz;
    }
    *c = x; *s = y;
}
This code processes angles in the range -\pi/2 to \pi/2; quadrant adjustments handle full-circle rotations. For 32-bit fixed-point, precision reaches about 30 bits after scaling, suitable for most embedded applications.

For simulation and prototyping, floating-point implementations in Python with NumPy facilitate rapid development and verification against library functions like numpy.sin and numpy.cos. The pseudocode translates directly to a loop using floating-point operations, with the arctan table generated via numpy.arctan(1.0 / (1 << np.arange(n))). NumPy's vectorized arrays enable batch processing of multiple angles, though the core loop remains scalar for simplicity. Such setups are ideal for analyzing convergence or testing variants, with double-precision yielding errors below machine epsilon after sufficient iterations. Custom modules often wrap this for reuse in scientific computing workflows.

Optimizations in software focus on reducing computational overhead while maintaining accuracy. Early termination checks whether the remaining angle |z| falls below a threshold (e.g., 2^{-m} for m-bit precision), halting iterations to save cycles in low-precision scenarios. Vectorization leverages SIMD instructions (e.g., via NumPy broadcasting or C intrinsics like SSE/AVX) to parallelize iterations across multiple vectors, achieving speedups of 4–8x on modern CPUs for batch computations. Table compression approximates later \alpha_j values (where 2^{-j} is small and \arctan(2^{-j}) \approx 2^{-j}) using linear or polynomial fits, reducing storage from O(n) to O(log n) entries with negligible error increase for n > 16. In finite precision, these techniques bound accumulated rounding errors to within n \cdot \epsilon, where \epsilon is the quantization step of the chosen format, ensuring reliable results for real-time or high-throughput applications.

Libraries for CORDIC are predominantly custom-built for specific needs, as standard numerical packages prioritize general-purpose functions over algorithmic primitives. In embedded contexts, developers integrate tailored C routines into microcontroller firmware to compute trigonometric functions with cycle counts under 100 per call in fixed-point. For simulation, Python-based custom libraries using NumPy provide flexible error analysis, with finite-precision bounds verified against double-precision references to confirm convergence within 10^{-10} relative error after 20 iterations.
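
For quick verification of a fixed-point routine like the one above, a small self-contained harness can generate the table and gain constant at startup and compare against the C standard library; the sketch below assumes the cordic() listing above is compiled in the same file (with these definitions placed ahead of it), and the Q2.29 format and 24-entry table are arbitrary illustrative choices:
#include <math.h>
#include <stdio.h>

/* Sketch of a test harness for the fixed-point cordic() listing above.
   Place these definitions before that listing so it sees CORDIC_NTAB,
   cordic_ctab[] and cordic_1K. Q2.29 scaling is an arbitrary choice. */
#define CORDIC_NTAB 24
#define Q_ONE (1 << 29)                     /* 1.0 in Q2.29 fixed point */

int cordic_ctab[CORDIC_NTAB];
int cordic_1K;                              /* K = 0.607253 in Q2.29 */

void cordic(int theta, int *s, int *c, int n);   /* the routine shown above */

static void cordic_init(void) {
    double k = 1.0;
    for (int i = 0; i < CORDIC_NTAB; ++i) {
        cordic_ctab[i] = (int)lround(atan(ldexp(1.0, -i)) * Q_ONE);
        k /= sqrt(1.0 + ldexp(1.0, -2 * i));
    }
    cordic_1K = (int)lround(k * Q_ONE);
}

int main(void) {
    cordic_init();
    for (int k = -6; k <= 6; ++k) {          /* angles well inside +/- pi/2 */
        double a = 0.25 * k;
        int s, c;
        cordic((int)lround(a * Q_ONE), &s, &c, CORDIC_NTAB);
        printf("a = %+5.2f   |sin err| = %.2e   |cos err| = %.2e\n", a,
               fabs((double)s / Q_ONE - sin(a)),
               fabs((double)c / Q_ONE - cos(a)));
    }
    return 0;
}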

Hardware Designs

The basic hardware architecture of CORDIC relies on a shift-add chain, where each iteration performs vector rotations or angle extractions using only shifts, additions, and subtractions, often enhanced by carry-save adders (CSAs) to reduce carry-propagation delays in the arithmetic units. This design minimizes multiplier usage, making it suitable for resource-constrained environments, as the core computations involve accumulating angles via precomputed arctangent values and updating vector components through simple arithmetic.

CORDIC implementations typically adopt either iterative or pipelined configurations, balancing area and throughput. In iterative designs, a single processing element (PE) executes all iterations sequentially, achieving low area but with latency proportional to the number of iterations (usually 16–32 for fixed-point precision), while pipelined or unrolled variants replicate PEs across stages to enable continuous streaming, reducing effective latency at the cost of increased hardware resources and enabling higher clock frequencies for sustained throughput. For example, fully unrolled pipelined architectures can process one result per clock cycle after the initial pipeline fill, ideal for high-throughput real-time applications.

On field-programmable gate arrays (FPGAs), CORDIC leverages lookup tables (LUTs) to store arctangent constants, minimizing memory usage, while dedicated digital signal processing (DSP) slices can handle the add-shift operations for efficient parallelism. Unified designs, such as the DA-VINCI core proposed in 2025, integrate multiple function classes (e.g., trigonometric, hyperbolic, and linear) into a single reconfigurable core using dynamic mode selection to share hardware resources across operations, achieving up to 2.5× resource savings compared to separate modules on FPGAs such as the Virtex-7. As of 2025, extensions to AI workloads, such as CORDIC-based activation functions for recurrent neural networks, further demonstrate its adaptability in accelerators.

In application-specific integrated circuits (ASICs), CORDIC is integrated into processor extensions or dedicated accelerators, such as co-processors in microcontrollers for trigonometric computations, offering superior power efficiency over general-purpose units. For instance, low-power ASIC variants using gate-level optimizations achieve sub-milliwatt operation for battery-powered devices, with dynamic voltage scaling to further reduce energy per iteration. While not exposed as native processor intrinsics, similar shift-add trigonometric units in embedded ASICs draw on CORDIC principles for vectorized math in power-sensitive domains.

Key trade-offs in CORDIC hardware involve latency, which scales with iteration count in iterative modes, versus parallelism in unrolled designs that boost throughput but inflate area by 4–8x. Radix-4 variants address this by processing multiple elementary rotations per step, halving the required iteration count (e.g., from 16 to 8 for 16-bit precision) through wider adders and modified angle sets, albeit with a 20–30% increase in per-iteration complexity in exchange for reduced overall latency. These enhancements prioritize high-throughput scenarios such as signal-processing pipelines over minimal-area designs.

Applications

Digital Signal Processing

CORDIC plays a significant role in digital signal processing (DSP) applications, particularly where efficient computation of trigonometric, hyperbolic, and linear functions is required for real-time signal manipulation without relying on complex multipliers. Its iterative shift-and-add operations enable low-power, hardware-friendly implementations in resource-constrained environments like FPGAs and ASICs, making it well suited for phase adjustments and rotation operations in frequency-domain processing.

In fast Fourier transform (FFT) and inverse FFT (IFFT) algorithms, CORDIC facilitates phase rotation for efficient complex multiplications, especially in rotation mode where it applies successive micro-rotations to align vectors using precomputed angles. This approach replaces traditional multiplications with simple additions and shifts, reducing hardware complexity and power consumption while maintaining accuracy for multi-carrier systems such as OFDM in wireless communications. For instance, in radix-2 FFT architectures, CORDIC-based rotators compute phase shifts on the fly, achieving higher operating frequencies and lower area overhead compared to lookup-table methods.

For Hilbert transforms and quadrature demodulation, CORDIC in vectoring mode computes the atan2 function to extract phase information from in-phase (I) and quadrature (Q) signal components, enabling precise phase detection essential for modulation schemes like QPSK or FM. The Hilbert transform generates the analytic signal by shifting the phase of negative frequencies by 90 degrees, after which CORDIC determines the instantaneous phase via vector magnitude and angle calculation, supporting applications in coherent demodulation and interference cancellation. This method is particularly efficient in software-defined radios, where it avoids division-heavy operations by approximating the arctangent through rotations, thus improving throughput in real-time phase-locked loops.

In adaptive filters, such as those employing the least mean squares (LMS) algorithm, CORDIC supports the weight-update arithmetic through its linear-mode multiplication and division operations and computes magnitudes for step-size adjustment in normalized LMS (NLMS) variants to enhance convergence. By using linear mode for scaling and circular vectoring mode for magnitude computation, CORDIC enables efficient estimation of input signal power, reducing sensitivity to signal level variations in echo cancellation and related tasks. This lowers computational overhead in pipelined architectures, allowing adaptive filters to track varying channel conditions with minimal latency.

Recent applications of CORDIC in beamforming leverage its angle and phase computation capabilities for direction-of-arrival estimation and precoding matrix adjustments in massive MIMO systems, addressing the need for rapid phase alignments in millimeter-wave arrays. In hybrid architectures, CORDIC performs divisions and rotations to optimize beamforming weights, enabling low-complexity calculations that scale with antenna counts while mitigating quantization errors in real-time baseband processing.
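
As a concrete illustration of multiplierless twiddle-factor rotation, the C sketch below (not a production FFT kernel; the function name is hypothetical and double precision is used only for readability) rotates a complex sample (re, im) by a twiddle angle phi with rotation-mode CORDIC, assuming the angle lies within the convergence range of about ±1.74 rad:
#include <math.h>

#define TW_N 24

/* Sketch: multiply the complex sample (re, im) by e^{j*phi} using
   rotation-mode CORDIC, i.e. a twiddle-factor rotation without a multiplier
   in the core loop (the final gain compensation is shown explicitly). */
static void cordic_twiddle(double phi, double *re, double *im) {
    double x = *re, y = *im, z = phi, gain = 1.0;
    for (int i = 0; i < TW_N; ++i) {
        double t = ldexp(1.0, -i);      /* 2^{-i} */
        int d = (z >= 0.0) ? +1 : -1;
        double xn = x - d * y * t;
        double yn = y + d * x * t;
        z -= d * atan(t);               /* table entry arctan(2^{-i}) */
        x = xn;
        y = yn;
        gain *= sqrt(1.0 + t * t);      /* ~1.6468 overall */
    }
    *re = x / gain;                     /* compensate the CORDIC gain */
    *im = y / gain;
}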

Graphics and Embedded Systems

In computer graphics, CORDIC facilitates efficient computation of 3D rotations and projections, particularly through its rotation and vectoring modes for transforming vertices and calculating surface normals. Vectoring mode normalizes vectors to derive unit normals for lighting calculations, while rotation mode applies transformation matrices to rotate objects in real-time rendering pipelines. This approach is especially valuable in graphics processing units (GPUs) where high-throughput geometry operations are required, as demonstrated in formulations that decompose 3D graphics primitives into CORDIC iterations for vector rotations and interpolations used in shading and perspective projections.

Historically, CORDIC enabled the first handheld scientific calculators, such as the HP-35 introduced in 1972, by providing an efficient means to compute transcendental functions using simple shift-and-add operations on limited hardware. This breakthrough allowed portable devices to perform sine, cosine, and arctangent calculations with high accuracy, paving the way for modern low-power implementations in wearables like smartwatches. In contemporary wearables, optimized CORDIC variants deliver trigonometric and vector computations for sensor processing and fitness tracking while minimizing energy consumption, often achieving up to 40% dynamic power reduction through approximations.

In robotics, CORDIC supports forward and inverse kinematics computations critical for manipulator control and pose estimation, leveraging circular mode for handling non-linear joint transformations and linear mode for coordinate scaling. For instance, pipelined CORDIC architectures solve inverse kinematics for six-degree-of-freedom arms such as the PUMA robot, requiring up to 25 parallel CORDIC processors to compute joint angles from end-effector positions in real time. Emerging trends in edge computing as of 2025 integrate CORDIC into low-power accelerators for AI workloads on robotic edge devices, enhancing efficiency in dynamic environments through optimized computations for neural networks.

Advanced Variants

Double-step (double rotation)

The double iterations variant of the CORDIC algorithm, also known as the merged or double-step CORDIC, optimizes the standard iterative process by pairing consecutive iterations (2i and 2i+1) into a single computational step, effectively halving the total number of iteration stages required for a given precision. This technique combines the rotation matrices of the two adjacent steps, resulting in a composite angle defined as \alpha_i = \arctan(2^{-2i} + 2^{-(2i+1)}). The updated equations then become: \begin{align*} x_{i+1} &= x_i - d_i (2^{-2i} + 2^{-(2i+1)}) y_i, \\ y_{i+1} &= y_i + d_i (2^{-2i} + 2^{-(2i+1)}) x_i, \\ z_{i+1} &= z_i - d_i \alpha_i, \end{align*} where d_i = \pm 1 is the direction decision based on the sign of z_i, and the composite shifts require slightly more complex, but fewer, adder operations than the single-step method. A primary benefit of this approach is the extension of the convergence range in rotation mode beyond approximately 99 degrees in the standard CORDIC, enabling accurate computation for larger input angles without pre-reduction steps. Additionally, the scale factor for merged iterations requires compensation similar to standard CORDIC but adjusted for the composite steps, which can be applied post-computation to recover the original vector magnitude. Overall, this leads to hardware savings, reducing the number of shifters by roughly a factor of (1 + 9/(n+1))/2 while maintaining computational throughput. Despite these advantages, the variant introduces slightly more complex shifter and adder designs due to the non-power-of-two shift amounts in the composite terms, though the net reduction in iteration stages offsets this in pipelined or folded architectures. It finds applications in high-precision trigonometric evaluations within scientific computing, such as coordinate rotations in simulations and orbit calculations, where extended range and efficiency are critical.

Other Enhancements

Higher-radix variants of the CORDIC algorithm, such as radix-4 and radix-8, extend the traditional radix-2 approach by processing multiple bits per iteration, enabling larger rotation steps and thereby reducing the total number of iterations required for a given precision level. In radix-4 CORDIC, for instance, the algorithm employs a base-4 digit set with precomputed angle tables that allow approximately twice as many bits to be resolved per step compared to radix-2, halving the iteration count while maintaining computational accuracy in rotation and vectoring operations. This enhancement comes at the cost of increased complexity, including larger lookup tables for the expanded digit set and more sophisticated selection units to manage the wider digit representations.

Hybrid CORDIC architectures integrate table-based lookup mechanisms with iterative CORDIC stages to achieve higher speed and throughput, particularly in applications demanding rapid angle evaluation. Typically, a small lookup table provides an initial coarse approximation of the rotation angle, after which the remaining fine adjustment is performed using a reduced number of CORDIC iterations, minimizing overall latency without sacrificing accuracy. This approach is especially beneficial for high-speed direct digital synthesizers, where the hybrid design enables operation at frequencies up to 380 MHz by balancing the strengths of precomputed tables for coarse estimation and CORDIC's additive refinement.

Vector CORDIC extends the algorithm to support parallel processing of multiple input vectors, facilitating simultaneous computations in array-based or SIMD-like environments. By replicating CORDIC stages across vector elements, this variant processes batches of data in a single pass, significantly improving throughput for tasks involving multiple rotations or transformations, such as in signal-processing pipelines. The architecture integrates seamlessly as a processor extension, reducing instruction overhead for vectorized operations while preserving the core CORDIC's resource efficiency.

Recent advancements in adaptive CORDIC, particularly for AI accelerators as of 2025, introduce dynamic mode selection based on input characteristics to optimize performance and resource utilization. These designs adjust iteration counts and coordinate systems (e.g., circular or hyperbolic) at run time, tailoring the algorithm to the specific demands of activation functions like tanh in neural networks, thereby reducing latency and enhancing energy efficiency in AI hardware. For example, scaling-free adaptive variants eliminate traditional scale-factor corrections, achieving up to 4.64× improvements in throughput for memory-constrained accelerators without compromising precision, as demonstrated in pipelined architectures for diverse AI workloads including DNNs, Transformers, and RNNs (as of March 2025).

Iterative Methods

Digit-by-digit methods represent a class of iterative algorithms for arithmetic operations such as division, square root, and multiplication, relying on redundant number systems to enable efficient shift-and-add operations without full carry propagation in each step. The SRT (Sweeney, Robertson, and Tocher) division algorithm, for instance, generates quotient digits one at a time by estimating the next digit based on a limited set of most significant bits from the partial remainder and divisor, using a redundant digit set (typically {-1, 0, 1} for radix-2) to allow overlap in decisions and avoid trial subtractions. This approach extends to square root computation, where similar digit selection logic approximates the next root digit by subtracting shifted versions of the current root from the remainder, again leveraging redundancy for speed in hardware implementations. These methods share the shift-add paradigm with CORDIC but are tailored to algebraic functions rather than transcendental ones, often requiring small lookup tables for digit selection to ensure correctness.

Series expansion iterations, such as those based on Taylor series for computing exponential and logarithmic functions, provide another iterative framework for function approximation using polynomial terms that can be evaluated via repeated multiplications and additions. For the natural exponential function, the Taylor series \exp(x) = \sum_{n=0}^{\infty} \frac{x^n}{n!} is truncated and computed iteratively, with each term derived from the previous one by multiplication by x/n, potentially optimized with shift-add approximations for powers of 2 in the input range. However, unlike CORDIC's purely additive updates that avoid long carry chains, these iterations involve multiplications that propagate carries across the full word length, increasing hardware complexity and latency in implementations without specialized multipliers. Logarithm approximations follow a similar recursive structure, often using the series for \ln(1+x), but face analogous issues with scaling and precision control in iterative hardware designs.

The arithmetic-geometric mean (AGM) offers an iterative method for evaluating complete elliptic integrals and deriving \pi, starting from initial values a_0 and b_0 and repeatedly applying arithmetic and geometric means until convergence: a_{n+1} = \frac{a_n + b_n}{2}, b_{n+1} = \sqrt{a_n b_n}. This process converges quadratically, enabling high-precision computation of \pi using the Gauss–Legendre algorithm, which iterates the AGM starting from a_0 = 1, b_0 = 1/\sqrt{2}, along with auxiliary sequences, and extends to elliptic integrals essential in physics and engineering. Despite its rapid convergence, AGM is less hardware-friendly than shift-add methods because each iteration requires a square root and divisions, which demand more complex circuitry than simple additions and shifts.

In comparison to these iterative approaches, CORDIC demonstrates advantages in table size and structural uniformity, employing a compact set of precomputed arctangent constants for rotations across a wide range of functions, while maintaining a consistent shift-add pattern that simplifies pipelining and parallelization in hardware. Digit-by-digit methods like SRT require modest selection tables but lack CORDIC's versatility for non-algebraic functions, series expansions incur higher computational overhead from multiplications, and AGM's reliance on roots and divisions limits its efficiency in resource-constrained environments. As a result, CORDIC's design balances accuracy and simplicity effectively for hardware-oriented applications.
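
To make the contrast concrete, the C sketch below implements the Gauss–Legendre (AGM-based) iteration for π mentioned above; note that every step needs a square root and divisions, unlike CORDIC's shift-add updates:
#include <math.h>
#include <stdio.h>

/* Sketch: Gauss-Legendre (AGM) iteration for pi. Converges quadratically;
   three or four iterations already exhaust double precision. */
int main(void) {
    double a = 1.0, b = 1.0 / sqrt(2.0), t = 0.25, p = 1.0;
    for (int i = 0; i < 4; ++i) {
        double an = 0.5 * (a + b);
        b = sqrt(a * b);
        t -= p * (a - an) * (a - an);
        p *= 2.0;
        a = an;
        printf("iteration %d: pi ~= %.15f\n", i + 1, (a + b) * (a + b) / (4.0 * t));
    }
    return 0;
}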

Table-Based Alternatives

Table-based alternatives to CORDIC for computing elementary functions such as sine and cosine rely on precomputed values stored in memory to achieve low-latency evaluations, often combined with interpolation techniques to reduce table size while maintaining accuracy. These methods contrast with CORDIC's iterative shift-and-add approach by prioritizing parallel access and minimal arithmetic operations at the expense of increased memory usage. For instance, lookup tables can store sine values at discrete points, enabling rapid retrieval for arguments within a small range, typically after range reduction to [0, π/2].

In direct lookup schemes with interpolation, a coarse table of sine (or cosine) values is augmented by linear or quadratic interpolation to approximate values between entries, achieving high precision with reduced memory compared to full-resolution tables. For example, a 256-entry table (8-bit addressing) combined with linear interpolation can yield roughly 16-bit accuracy for sine computations, requiring only a few additions and multiplications per evaluation, suitable for applications like direct digital synthesis where latency is critical. Quadratic interpolation further improves accuracy by fitting a parabola through three table points, minimizing error in the interpolated region, though it increases computational overhead slightly. Such methods have been demonstrated in hardware implementations for frequency synthesizers, where table sizes as small as 16 entries suffice with higher-order interpolation for low-power wireless base stations.

Polynomial approximations represent another table-based hybrid, where polynomials derived via Chebyshev series provide near-optimal error bounds over a fixed interval, evaluated efficiently using the Horner scheme to minimize multiplications. For trigonometric functions, range reduction maps the input to [-π/2, π/2], after which a low-degree polynomial (e.g., degree 5-7) approximates sine or cosine with errors below 2^{-16}; truncated Chebyshev expansions exhibit near-equioscillation, giving close-to-minimax error properties. These polynomials require multipliers, unlike CORDIC, but enable pipelined designs on FPGAs for floating-point evaluations, with Horner's nesting reducing the multiplication count to roughly the polynomial degree. Seminal work on such approximations highlights their use in compile-time generation for embedded systems, balancing speed and area through automated optimization.

Bhaskara I's method offers a historical table-free approximation for sine using algebraic identities, providing a rational expression that avoids lookups entirely for low-precision needs. In his 7th-century treatise Mahābhāskariya, Bhaskara derived the formula \sin x \approx \frac{16 x (\pi - x)}{5 \pi^2 - 4 x (\pi - x)} for 0 \leq x \leq \pi, which achieves relative errors under 0.2% in the central range through geometric insights into circle quadrants, predating modern iterative methods. This approach, while limited to roughly three decimal digits of accuracy, underscores early non-tabular alternatives relying only on multiplications and divisions.

Trade-offs of table-based methods versus CORDIC center on speed, memory area, and flexibility: direct lookup with interpolation offers near-constant-time computation ideal for high-throughput scenarios like signal generation, but demands significant memory (e.g., thousands of bytes for 16-bit precision) and performs best over fixed ranges, whereas CORDIC excels in resource-constrained environments with only small arctan tables and no multipliers. Polynomial methods bridge the gap by using compact coefficient tables alongside arithmetic units, providing faster execution where multipliers are available but higher power consumption due to the multiplications; CORDIC remains preferable for multiplier-free designs and variable-precision requirements in embedded systems. The (M, p, k)-friendly points scheme exemplifies a hybrid approach, using multi-level tables for range reduction to tiny intervals before lookup, achieving sub-1% area overhead over CORDIC in FPGA implementations while doubling speed for sine/cosine pairs.
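
The coarse-table-plus-linear-interpolation idea described above can be sketched in a few lines of C (the table size, argument range, and error print-out are illustrative choices rather than prescriptions from the cited designs):
#include <math.h>
#include <stdio.h>

#define TAB_BITS 8
#define TAB_SIZE (1 << TAB_BITS)            /* 256-entry quarter-wave table */

static const double PI = 3.14159265358979323846;
static double sine_tab[TAB_SIZE + 1];       /* samples of sin on [0, pi/2] */

static void tab_init(void) {
    for (int i = 0; i <= TAB_SIZE; ++i)
        sine_tab[i] = sin((PI / 2.0) * i / TAB_SIZE);
}

/* Sketch: sine on [0, pi/2] by table lookup plus linear interpolation. */
static double sine_lut(double x) {
    double pos = x / (PI / 2.0) * TAB_SIZE;  /* fractional table index */
    int i = (int)pos;
    if (i >= TAB_SIZE) i = TAB_SIZE - 1;     /* clamp at the upper edge */
    double frac = pos - i;
    return sine_tab[i] + frac * (sine_tab[i + 1] - sine_tab[i]);
}

int main(void) {
    tab_init();
    double max_err = 0.0;
    for (double x = 0.0; x <= PI / 2.0; x += 1e-4) {
        double e = fabs(sine_lut(x) - sin(x));
        if (e > max_err) max_err = e;
    }
    printf("max abs error, %d entries + linear interpolation: %.2e\n",
           TAB_SIZE, max_err);
    return 0;
}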

    This study investigates the so-called (M, p, k) scheme that reduces the range of input argument to a very small interval so that trigonometric functions can be ...