
Exponentiation by squaring

Exponentiation by squaring is an efficient algorithm for computing large powers a^b, where a is the base and b is a non-negative integer exponent, by leveraging the binary representation of b to minimize the number of multiplications through repeated squaring of intermediate results. This method, also known as binary exponentiation or the square-and-multiply algorithm, transforms the problem of repeated multiplication into a series of squarings and selective multiplications, making it particularly effective for scenarios involving very large exponents.

The algorithm operates by expressing the exponent b in its binary form, say b = \sum_{i=0}^{k} u_i 2^i where each u_i is 0 or 1, and then computing powers of the base corresponding to powers of 2 via successive squaring. For instance, it initializes with a^{2^0} = a, then iteratively squares to obtain a^{2^j} = (a^{2^{j-1}})^2 for j = 1 up to the bit length of b, reducing modulo m if applicable to keep values manageable. The final result is the product of those a^{2^j} for which u_j = 1, again taken modulo m if needed. This can be implemented recursively, where a^b = (a^{b/2})^2 if b is even, or a \cdot (a^{(b-1)/2})^2 if b is odd, or iteratively by scanning the binary digits from most significant to least significant bit.

In terms of efficiency, the naive approach to exponentiation requires up to b-1 multiplications, which is impractical for large b, whereas exponentiation by squaring performs at most 2 \log_2 b multiplications (roughly one squaring per bit plus one additional multiplication per set bit), achieving O(\log b) time complexity. This logarithmic reduction is crucial in computational number theory, as it handles exponents with thousands of bits in feasible time. The algorithm finds widespread use in modular exponentiation, a core operation in public-key cryptosystems such as the RSA algorithm, where computing a^b \mod m for large moduli m is essential yet computationally intensive without optimization. It also appears in matrix exponentiation for graph algorithms and scientific computing, extending its utility beyond scalar arithmetic to linear algebra problems.
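To make the decomposition concrete (a small worked instance added for illustration): for b = 13 = 1101_2, the set bits select a^{13} = a^8 \cdot a^4 \cdot a^1, so three squarings produce a^2, a^4, and a^8, and two further multiplications combine the selected powers.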

Core Algorithms

Recursive Binary Exponentiation

Recursive binary exponentiation is a divide-and-conquer method for computing x^n, where x is the base and n is a non-negative integer exponent. The algorithm recursively computes x^{\lfloor n/2 \rfloor}, squares this value to obtain x^{n - (n \bmod 2)}, and then multiplies by an additional factor of x if n is odd to account for the least significant bit in the binary representation of n. This approach exploits the identity x^n = (x^{\lfloor n/2 \rfloor})^2 \cdot x^{n \bmod 2} to break the problem into smaller subproblems. The recursive structure is captured in the following implementation, which handles base cases for n = 0 (returning the multiplicative identity, 1) and n = 1 (returning x):
def power(x, n):
    # Recursive binary exponentiation: O(log n) multiplications.
    if n == 0:
        return 1              # multiplicative identity
    if n == 1:
        return x
    half = power(x, n // 2)   # x^(n // 2)
    result = half * half      # x^(n - (n % 2))
    if n % 2 == 1:
        result = result * x   # extra factor of x for odd n
    return result
This performs a chain of recursive calls corresponding to the binary digits of n, with each level halving the exponent until reaching the base cases. To illustrate, consider computing 5^{13}. The binary representation of 13 is 1101 (i.e., 13 = 2^3 + 2^2 + 2^0). The recursion proceeds as follows:
  • power(5, 13): Since 13 is odd, compute half = power(5, 6), result = half * half, then result = result * 5.
  • power(5, 6): Even, so compute half = power(5, 3), result = half * half.
  • power(5, 3): Odd, so compute half = power(5, 1), result = half * half = 5 * 5 = 25, then result = 25 * 5 = 125.
  • power(5, 1): Base case, returns 5.
  • Thus, power(5, 3) = 125.
  • Then, power(5, 6) = 125 * 125 = 15,625.
  • Finally, power(5, 13): half = 15,625, result = 15,625 * 15,625 = 244,140,625, then result = 244,140,625 * 5 = 1,220,703,125.
The multiplications are: 5 \times 5 = 25, 25 \times 5 = 125, 125 \times 125 = 15,625, 15,625 \times 15,625 = 244,140,625, and 244,140,625 \times 5 = 1,220,703,125. This leverages the binary expansion of the exponent, performing one squaring per halving step and one extra multiplication per set bit, so the total number of multiplications is logarithmic in n. The technique has ancient origins, appearing in Pingala's Chandah-sutra (c. 200 BCE) as a rule for efficient computation of powers in prosody. It was formalized as a modern recursive algorithm in 20th-century computer science, particularly in Donald Knuth's Seminumerical Algorithms (1969). As a conceptual tool, the recursive version highlights the divide-and-conquer paradigm, though practical implementations often prefer iterative alternatives to avoid stack overhead.

Iterative Binary Exponentiation

The iterative binary exponentiation algorithm provides a space-efficient way to compute a^n by processing the binary representation of the exponent n from least significant bit (LSB) to most significant bit (MSB), avoiding the recursion-depth limitations of recursive implementations. In the right-to-left variant, the process initializes the result to 1 and repeatedly squares the current base while conditionally multiplying it into the result whenever the current bit of n is set, effectively building the power through repeated squaring and selective accumulation. The following implementation illustrates the right-to-left approach:
def iterative_pow(base, exponent):
    # Right-to-left binary exponentiation: scan bits from LSB to MSB.
    result = 1
    while exponent > 0:
        if exponent % 2 == 1:      # current bit set: fold the base into the result
            result = result * base
        base = base * base         # square for the next bit position
        exponent = exponent // 2
    return result
This implementation uses a simple loop that halves the exponent in each iteration until it reaches zero. To demonstrate, consider computing 5^{13}, where 13 in binary is 1101 (bits: 1, 1, 0, 1 from MSB to LSB). Start with result = 1, base = 5, exponent = 13:
  • Iteration 1: exponent = 13 (odd), result = 1 × 5 = 5; base = 5 × 5 = 25; exponent = 6.
  • Iteration 2: exponent = 6 (even), result = 5; base = 25 × 25 = 625; exponent = 3.
  • Iteration 3: exponent = 3 (odd), result = 5 × 625 = 3125; base = 625 × 625 = 390625; exponent = 1.
  • Iteration 4: exponent = 1 (odd), result = 3125 × 390625 = 1,220,703,125; base = 390625 × 390625 (unused); exponent = 0.
The final result is 1,220,703,125, matching 5^{13}. This method requires only constant auxiliary memory beyond the input values, achieving O(1) space complexity by relying on a fixed number of variables (result, base, and exponent) regardless of the exponent size, making it ideal for large exponents in resource-constrained environments. A left-to-right variant processes the bits from MSB to LSB, initializing the result to 1 and iteratively squaring it while multiplying by the original base whenever the current bit is 1, yielding equivalent time complexity and space usage. A sketch of this approach:
def left_to_right_pow(base, exponent):
    # Left-to-right binary exponentiation: scan bits e_t ... e_0 from MSB to LSB.
    result = 1
    for i in reversed(range(exponent.bit_length())):
        result = result * result      # squaring shifts the partial exponent left
        if (exponent >> i) & 1:       # bit i set: multiply in the original base
            result = result * base
    return result
This iterative form serves as a practical, stack-safe alternative to the recursive binary exponentiation, producing the same logarithmic-time results without function call overhead.
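As a quick sanity check, the three routines sketched in this chapter (power, iterative_pow, and left_to_right_pow, as defined above) agree with Python's built-in operator:

assert power(5, 13) == 5 ** 13               # recursive variant
assert iterative_pow(5, 13) == 5 ** 13       # right-to-left variant
assert left_to_right_pow(5, 13) == 5 ** 13   # left-to-right variant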

Efficiency Analysis

Time and Space Complexity

The binary exponentiation algorithms, both recursive and iterative, exhibit a time complexity of O(\log n) in terms of the number of multiplications required to compute x^n, where n is the exponent. This logarithmic scaling arises because the exponent is halved in each step, leading to at most \lfloor \log_2 n \rfloor + 1 squarings and a number of additional multiplications equal to the Hamming weight of n minus one. The Hamming weight, or population count \mathrm{popcount}(n), is the number of 1-bits in the binary representation of n and averages approximately (\log_2 n + 1)/2 for random exponents. Thus, the total number of multiplications is approximately \log_2 n + \mathrm{popcount}(n) - 1. For large integers without modular reduction, the intermediate results grow (the final value x^n has about n \log_2 x bits), so the last squarings dominate the cost and the overall work is governed by operand sizes rather than the multiplication count alone. In the context of modular exponentiation, where all operations are performed modulo m, the intermediate values remain bounded by the size of m, preserving the O(\log n) multiplication count while each modular multiplication costs O((\log m)^2) using the schoolbook multiplication algorithm; the overall complexity is thus O(\log n \cdot (\log m)^2) and remains logarithmic in n. This represents a significant improvement over the naive repeated-multiplication approach, which requires O(n) operations. Regarding space complexity, the iterative version of binary exponentiation uses O(1) auxiliary space, relying only on a constant number of variables to track the result, current power, and exponent. In contrast, the recursive version requires O(\log n) space due to the call stack, whose depth matches the recursion depth of about \log_2 n.
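The operation count can be checked empirically; the following small sketch (the helper name count_ops is ours, not from any library) tallies the squarings and multiplications performed by the right-to-left loop. For n = 13 it returns 3 squarings and 3 multiplications, matching \lfloor \log_2 13 \rfloor and \mathrm{popcount}(13):

def count_ops(n):
    # Count operations of right-to-left binary exponentiation.
    # The first multiplication folds the base into result = 1 and is trivial,
    # which accounts for the "- 1" in the popcount formula above.
    squarings = multiplications = 0
    while n > 0:
        if n & 1:
            multiplications += 1   # one multiplication per set bit
        n >>= 1
        if n > 0:
            squarings += 1         # one squaring per remaining bit
    return squarings, multiplications

print(count_ops(13))  # (3, 3)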

Comparison to Naive Multiplication

The naive method for computing x^n involves repeatedly multiplying x by itself, requiring n-1 multiplications. This approach is straightforward but becomes highly inefficient as n grows, since the number of operations scales linearly with the exponent. In contrast, binary exponentiation reduces the operation count to approximately \lfloor \log_2 n \rfloor + \mathrm{wt}(n) - 1 multiplications, where \mathrm{wt}(n) is the Hamming weight (number of 1-bits) in the binary representation of n. For example, with n = 1000 (binary 1111101000, \log_2 1000 \approx 9.97, \mathrm{wt}(1000) = 6), the naive method requires 999 multiplications, while binary exponentiation needs only 14 (9 squarings plus 5 extra multiplications). This logarithmic scaling provides substantial efficiency gains for moderate to large exponents. To illustrate the difference, the following table compares operation counts for powers of 2, where binary exponentiation consists solely of squarings because the exponent has a single 1-bit:
Exponent n | Naive multiplications (n-1) | Binary multiplications (\log_2 n) | Relative speedup
2^{10} = 1024 | 1023 | 10 | 102.3×
2^{20} = 1,048,576 | 1,048,575 | 20 | 52,428.75×
Calculations based on standard formulas for each method. Historically, the naive method sufficed for small exponents in early computing, but binary exponentiation became essential with the advent of public-key cryptography, such as RSA, where a 2048-bit exponent would require on the order of 2^{2048} naive multiplications but only about 3,000 with binary methods. In RSA implementations, the square-and-multiply variant of binary exponentiation is standard for handling modular arithmetic efficiently. The naive method may still suffice in resource-constrained embedded systems for very small exponents (e.g., n < 10), where the overhead of binary exponentiation's conditional branches outweighs the minimal savings and the hardware lacks optimized big-integer support.

Higher-Radix Variants

2^k-ary Exponentiation

2^k-ary exponentiation generalizes binary exponentiation, the special case k=1, by representing the exponent in base m=2^k and precomputing the powers x^0, x^1, \dots, x^{m-1} to reduce the number of multiplications at the expense of additional precomputation and storage. The method processes groups of k bits at a time, enabling fewer but more expensive operations compared to the binary method.

In the algorithm, the exponent n is expressed as n = \sum_{i=0}^{\ell-1} d_i m^i, where each digit d_i \in \{0, 1, \dots, m-1\} and \ell \approx \log_m n = (\log_2 n)/k is the number of digits. Precomputation involves building the table of powers x^j for j=1 to m-1, typically using successive multiplications, costing O(m) operations. The main computation uses a left-to-right approach: initialize the result R = x^{d_{\ell-1}}, then for each subsequent digit d_i from i=\ell-2 down to 0, update R \leftarrow R^m \cdot x^{d_i}, where R^m is computed via k successive squarings. This requires approximately \log_2 n squarings in total, independent of k, and up to \ell - 1 further multiplications for the digit terms, yielding O((\log_2 n)/k) extra multiplications.

For k=2 (quaternary exponentiation, m=4), the binary digits of the exponent are grouped into 2-bit chunks, with precomputed values x^0 = 1, x^1 = x, x^2, and x^3. Each group corresponds to a digit d_i \in \{0,1,2,3\}, and R^m = R^4 is two squarings: (R^2)^2. The precomputation costs 2 multiplications (x^2 = x \cdot x and x^3 = x^2 \cdot x).

Consider computing 3^{25} with k=2. First, 25_{10} = 121_4, since 1 \cdot 4^2 + 2 \cdot 4^1 + 1 \cdot 4^0 = 16 + 8 + 1 = 25, giving digits d_2=1, d_1=2, d_0=1. Precompute p_0=1, p_1=3, p_2=9, p_3=27. Initialize R = p_1 = 3 (that is, 3^1). For d_1=2: compute R^4 = (3^2)^2 = 9^2 = 81, then R = 81 \cdot p_2 = 81 \cdot 9 = 729 (now 3^{1 \cdot 4 + 2} = 3^6). For d_0=1: R^4 = (729^2)^2 = 531441^2 = 282429536481, then R = 282429536481 \cdot p_1 = 282429536481 \cdot 3 = 847288609443 (3^{6 \cdot 4 + 1} = 3^{25}). The main loop uses 4 squarings and 2 multiplications, plus 2 precomputation multiplications; left-to-right binary exponentiation on 25 = 11001_2 likewise uses 4 squarings and 2 multiplications, so the savings materialize only for longer exponents, where the expected number of non-squaring multiplications drops from about (\log_2 n)/2 for the binary method to at most (\log_2 n)/k.

The trade-off involves fewer multiplications, scaling as O((\log_2 n)/k), against a higher precomputation cost O(2^k) and potentially costlier multiplications by larger precomputed powers, with the exact number of extra multiplications depending on the digit distribution (up to \ell if one multiplies even for d_i=0). For typical hardware and exponent sizes around 1024-2048 bits, an optimal k is around 3 to 5, balancing reduced iterations against precomputation overhead and multiplication complexity.
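A compact Python sketch of the 2^k-ary method under the conventions above (the function name pow_2k_ary is ours; it multiplies only for non-zero digits and omits modular reduction for brevity):

def pow_2k_ary(x, n, k=2):
    # Precompute x^0 ... x^(2^k - 1).
    m = 1 << k
    table = [1] * m
    for j in range(1, m):
        table[j] = table[j - 1] * x
    # Base-m digits of n, most significant first.
    digits = []
    while n > 0:
        digits.append(n % m)
        n //= m
    digits.reverse()
    result = 1
    for d in digits:
        for _ in range(k):
            result *= result       # k squarings compute R^(2^k)
        if d:
            result *= table[d]     # one multiplication per non-zero digit
    return result

assert pow_2k_ary(3, 25) == 3 ** 25   # 847288609443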

Sliding-Window Exponentiation

Sliding-window exponentiation is an optimization of binary exponentiation that processes the exponent in windows of up to w bits to reduce the total number of multiplications, using lookahead and precomputation of small odd powers of the base x. The method scans the exponent bits from the most significant bit (left to right), opening a window at each 1-bit and looking ahead up to w-1 further bits to form the window value, so that runs of zeros between windows cost only squarings. This trades a small amount of precomputation storage for fewer operations in the main loop, making it suitable for variable exponents in cryptographic applications like modular exponentiation.

The algorithm selects a window size w, typically small, to balance precomputation cost against runtime savings. It first computes x^2 with a single squaring, then builds a lookup table of the odd powers x^1, x^3, \dots, x^{2^w - 1} via x^j = x^{j-2} \cdot x^2, requiring 2^{w-1} - 1 multiplications in total. The main computation initializes the result to 1 and repeatedly finds the next window from the current position: if the current bit is 0, it advances one position and squares the result; otherwise it forms the window value v from at most w bits, chosen so that the window ends in a 1-bit, squares the result once per window bit, multiplies by the precomputed x^v, and continues past the window. This repeats until all bits are processed.

For example, consider computing 7^{26} with w = 3, where 26 = 11010_2. Precompute the odd powers 7^1 = 7, 7^3 = 343, 7^5 = 16807, 7^7 = 823543. Scanning from the MSB, the first window is the leading bits 11 (value 3; the lookahead stops before the following 0 so that the window ends in a 1), initializing the result to 7^3. A single squaring handles the next 0 bit (7^6); the next window is the single bit 1 (one squaring then a multiplication by 7, giving 7^{13}); and a final squaring for the trailing 0 yields 7^{26}. The main loop thus uses 3 squarings and 1 multiplication beyond the initial table lookup, compared with 4 squarings and 2 multiplications for left-to-right binary exponentiation on this exponent.

The benefit is fewer overall multiplications than the plain binary method: for a t-bit exponent, the main loop performs about t squarings but only roughly t/(w+1) window multiplications on average, since windows of up to w bits are separated by zero runs that cost only squarings. Precomputation adds 2^{w-1} - 1 multiplications, but the runtime savings dominate for large exponents. Window sizes w = 4 to 5 are typical, balancing table size (8-16 odd-power entries) against speed gains, reported at 5-8% over m-ary methods for 512-bit exponents; larger w increases precomputation exponentially while yielding diminishing marginal returns. Unlike the non-sliding 2^k-ary method with fixed digit blocks, sliding windows adapt to the bit pattern, skipping zero runs to further reduce multiplications.
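The following Python sketch implements the left-to-right sliding-window method described above (the name sliding_window_pow is ours; windows always end in a 1-bit, and modular reduction is omitted):

def sliding_window_pow(x, n, w=3):
    if n == 0:
        return 1
    # Precompute the odd powers x^1, x^3, ..., x^(2^w - 1).
    x2 = x * x
    odd_pows = {1: x}
    for j in range(3, 1 << w, 2):
        odd_pows[j] = odd_pows[j - 2] * x2
    bits = bin(n)[2:]
    result = 1
    i, length = 0, len(bits)
    while i < length:
        if bits[i] == '0':
            result *= result              # zero bit: one squaring
            i += 1
        else:
            # Longest window of at most w bits that ends in a 1.
            j = min(i + w, length)
            while bits[j - 1] == '0':
                j -= 1
            for _ in range(j - i):
                result *= result          # squarings shift past the window
            result *= odd_pows[int(bits[i:j], 2)]
            i = j
    return result

assert sliding_window_pow(7, 26) == 7 ** 26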

Constant-Time Techniques

Montgomery Ladder

The Montgomery ladder is a variant of binary exponentiation designed for constant-time execution, making it resistant to side-channel attacks such as timing and power analysis in cryptographic applications. It maintains two registers, R_0 and R_1, initialized to 1 and the base g respectively, and processes the binary representation of the exponent k from the most significant bit (MSB) to the least significant bit (LSB), performing exactly one squaring and one multiplication per bit regardless of the bit value. This uniformity ensures that the algorithm's execution path and operation counts do not leak information about the exponent. The core technique operates in a multiplicative group, where for each bit k_j:
  • If k_j = 0, compute R_1 \leftarrow R_0 \cdot R_1 followed by R_0 \leftarrow R_0^2.
  • If k_j = 1, compute R_0 \leftarrow R_0 \cdot R_1 followed by R_1 \leftarrow R_1^2.
To avoid conditional branches that could introduce timing variations, implementations often use conditional swaps or arithmetic operations that are independent of the bit value. The invariant preserved throughout is that R_1 = R_0 \cdot g, i.e., R_1 / R_0 = g, after processing the first j bits (starting from the MSB). At the end, R_0 = g^k. The total cost is 2t multiplications for a t-bit exponent, matching the worst case of standard binary exponentiation while adding side-channel resistance. The technique was first described by Peter L. Montgomery in 1987, in the context of speeding up elliptic-curve factorization methods. Here is the pseudocode for computing y = g^k \bmod n in a ring (e.g., modular arithmetic):
Input: base g, exponent k = (k_{t-1} ... k_0)_2, modulus n
Output: y = g^k mod n
R_0 ← 1 mod n
R_1 ← g mod n
for j = t-1 downto 0 do
    if k_j = 0 then
        R_1 ← (R_0 * R_1) mod n
        R_0 ← (R_0 * R_0) mod n
    else
        R_0 ← (R_0 * R_1) mod n
        R_1 ← (R_1 * R_1) mod n
return R_0
Branch-free variants replace the if-statement with a swap conditioned on k_j, ensuring constant-time behavior; a sketch of this conditional-swap formulation appears at the end of this section. To illustrate the ladder, consider computing x^{13} \bmod n where 13 = 1101_2 (MSB first: bits 1, 1, 0, 1). Initialize R_0 = 1, R_1 = x.
  • Bit 1 (MSB): R_0 \leftarrow 1 \cdot x = x, R_1 \leftarrow x^2. Now (R_0, R_1) = (x, x^2).
  • Bit 1: R_0 \leftarrow x \cdot x^2 = x^3, R_1 \leftarrow (x^2)^2 = x^4. Now (R_0, R_1) = (x^3, x^4).
  • Bit 0: R_1 \leftarrow x^3 \cdot x^4 = x^7, R_0 \leftarrow (x^3)^2 = x^6. Now (R_0, R_1) = (x^6, x^7).
  • Bit 1 (LSB): R_0 \leftarrow x^6 \cdot x^7 = x^{13}, R_1 \leftarrow (x^7)^2 = x^{14}. Now (R_0, R_1) = (x^{13}, x^{14}).
The result is R_0 = x^{13} \bmod n. The security stems from performing the same sequence of operations, one squaring and one multiplication, per bit, preventing attackers from distinguishing bit values via execution time, power consumption, or electromagnetic emissions. This makes it suitable for protecting secret exponents in RSA and elliptic-curve cryptosystems, where traditional binary methods could leak information through variable-time branches. Variants further randomize the ladder to counter fault attacks or differential power analysis. In practice, the Montgomery ladder is widely adopted for elliptic curve scalar multiplication, notably in X25519 key exchange as used in TLS, with constant-time ladder implementations in libraries such as OpenSSL for point multiplication on Curve25519. It also applies to general modular exponentiation in RSA and Diffie-Hellman, and because the two register updates in each step are independent, it offers parallelization potential with reported speedups of up to 200% on multi-processor systems.
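As a sketch of the branch-free variant (our own illustrative code: the arithmetic conditional swap avoids a data-dependent branch, though true constant-time behavior would also require constant-time big-integer arithmetic, which Python's built-in integers do not guarantee):

def ladder_pow(g, k, n):
    # Montgomery ladder for g^k mod n with an arithmetic conditional swap.
    r0, r1 = 1, g % n
    for i in reversed(range(k.bit_length())):
        b = (k >> i) & 1
        d = (r0 ^ r1) * b                        # d = r0 ^ r1 if bit set, else 0
        r0, r1 = r0 ^ d, r1 ^ d                  # swap iff b == 1
        r0, r1 = (r0 * r0) % n, (r0 * r1) % n    # one squaring, one multiplication
        d = (r0 ^ r1) * b
        r0, r1 = r0 ^ d, r1 ^ d                  # swap back
    return r0 % n

assert ladder_pow(7, 13, 1000) == pow(7, 13, 1000)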

Signed-Digit Recoding

Signed-digit recoding transforms the binary representation of the exponent into a signed-digit form, such as the non-adjacent form (NAF), which minimizes the number of non-zero digits while ensuring no two consecutive digits are non-zero. In NAF, each digit d_i belongs to the set \{-1, 0, 1\}, often written with \bar{1} for -1, and an integer e is represented as e = \sum_{i=0}^{\ell} d_i 2^i. This recoding, originally introduced for efficient binary arithmetic, reduces the average density of non-zero digits from approximately 1/2 in standard binary to 1/3 in NAF, leading to fewer multiplication operations during exponentiation.

The NAF can be computed from the binary representation using a deterministic algorithm that scans the bits from least to most significant, propagating a carry to enforce the non-adjacency constraint. One standard method initializes a carry c_0 = 0 and, for each bit position i from 0 to \ell-1, computes the next carry c_{i+1} = \lfloor (c_i + b_i + b_{i+1}) / 2 \rfloor and the recoded digit d_i = c_i + b_i - 2 c_{i+1}, where the b_i are the binary digits (with b_\ell = 0); the process may extend by one digit if a final carry remains. This produces the unique NAF representation, which has minimal weight. Related techniques, such as Booth recoding adapted for signed digits, follow similar principles by examining overlapping bit pairs to generate digits in \{-1, 0, 1\}, though NAF specifically guarantees non-adjacency. For example, the exponent 13 = 1101_2 recodes to 10\bar{1}01 in NAF (digits from MSB: 1, 0, -1, 0, 1), satisfying 1 \cdot 2^4 + 0 \cdot 2^3 + (-1) \cdot 2^2 + 0 \cdot 2^1 + 1 \cdot 2^0 = 16 - 4 + 1 = 13.

In the context of exponentiation by squaring, the recoded exponent enables a left-to-right algorithm in which the result is squared at every digit and multiplied by the base x (or by x^{-1} for -1 digits) only at non-zero positions, precomputing x^{-1} \bmod m once if needed for modular arithmetic. The technique is most attractive in groups where inversion is essentially free, such as elliptic curve groups, where the negative of a point costs only a sign flip; in ordinary modular arithmetic, a single precomputed inverse suffices and its cost is amortized over the whole exponentiation. This approach yields approximately 30% fewer base multiplications than binary methods, as the reduced Hamming weight lowers their expected number from roughly m/2 to m/3 for an m-bit exponent, making it particularly advantageous for hardware implementations with left-to-right scanning. Balanced ternary, another signed-digit system using radix 3 with digits -1, 0, 1, offers similar sparsity benefits but requires adjustments for higher-radix squaring. The non-adjacent property can also promote more uniform operation patterns against side-channel observation, though on its own it does not make an implementation constant-time.
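A Python sketch of NAF recoding and the corresponding left-to-right exponentiation (function names are ours; the modular inverse uses Python 3.8+'s pow(x, -1, m)):

def naf(n):
    # Non-adjacent form of n, least significant digit first.
    digits = []
    while n > 0:
        if n & 1:
            d = 2 - (n % 4)   # +1 if n = 1 (mod 4), -1 if n = 3 (mod 4)
            n -= d
        else:
            d = 0
        digits.append(d)
        n //= 2
    return digits

def naf_pow_mod(x, n, m):
    x_inv = pow(x, -1, m)        # precompute x^-1 mod m once
    result = 1
    for d in reversed(naf(n)):   # scan digits from most significant
        result = result * result % m
        if d == 1:
            result = result * x % m
        elif d == -1:
            result = result * x_inv % m
    return result

assert naf(13) == [1, 0, -1, 0, 1]           # 13 = 2^4 - 2^2 + 2^0
assert naf_pow_mod(7, 13, 101) == pow(7, 13, 101)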

Fixed-Base Optimizations

Yao's Method

Yao's method, proposed by Andrew Yao in 1976, provides an efficient approach to fixed-base exponentiation, particularly beneficial when the same base x is used to compute x^n for multiple different exponents n. The technique relies on representing the exponent in a higher radix b = h > 2, precomputing a table of powers x^{b^i} for i = 0, 1, \dots, \ell-1, where \ell \approx \log_b n, and then assembling the result through grouped multiplications based on the digits of n. This amortizes the precomputation cost across multiple evaluations, making it suitable for applications such as batch verification in cryptographic protocols.

The algorithm proceeds as follows. First, during precomputation, compute the table entries successively: set x_0 = x, and for i \geq 1, compute x_i = x_{i-1}^h using O(\log h) group multiplications per step, yielding a total precomputation cost of O(\log n) multiplications and O(\ell) storage, where \ell = \lceil \log_h n \rceil. For a given exponent n, express n = \sum_{i=0}^{\ell-1} n_i b^i with digits 0 \leq n_i < h. The power is then given by x^n = \prod_{j=1}^{h-1} \left( \prod_{\{i : n_i = j\}} x^{b^i} \right)^j. In the evaluation phase, initialize y = 1. For each j = h-1 down to 1, if there exists at least one i with n_i = j, compute u = \prod_{\{i : n_i = j\}} x^{b^i} using \mathrm{count}_j - 1 multiplications, where \mathrm{count}_j is the number of such i; then compute u^j via binary exponentiation in O(\log j) multiplications and update y \leftarrow y \cdot u^j. The total evaluation cost is O(\log n + h \log h), dominated by the digit products and powerings.

To illustrate, consider computing x^{2989} with h = 4 (so b = 4) and the precomputed table x_0 = x, x_1 = x^4, x_2 = x^{16}, x_3 = x^{64}, x_4 = x^{256}, x_5 = x^{1024}. The base-4 digits of 2989 follow from 2989 = 2 \cdot 4^5 + 3 \cdot 4^4 + 2 \cdot 4^3 + 2 \cdot 4^2 + 3 \cdot 4^1 + 1 \cdot 4^0, i.e., (n_5, \dots, n_0) = (2, 3, 2, 2, 3, 1).
  • For j=3: Positions i=1,4, so u = x_1 \cdot x_4 = x^4 \cdot x^{256} = x^{260} (1 multiplication); compute u^3 = (x^{260})^3 = x^{780} (using 1 squaring and 1 multiplication via binary exponentiation); set y = x^{780}.
  • For j=2: Positions i=2,3,5, so u = x_2 \cdot x_3 \cdot x_5 = x^{16} \cdot x^{64} \cdot x^{1024} = x^{1104} (2 multiplications); compute u^2 = (x^{1104})^2 = x^{2208} (1 squaring); set y = x^{780} \cdot x^{2208} = x^{2988}.
  • For j=1: Position i=0, so u = x_0 = x (0 multiplications); compute u^1 = x (0 operations); set y = x^{2988} \cdot x = x^{2989}.
This requires 1 multiplication for the j=3 product and 2 for the j=2 product (0 for j=1), plus the powerings (1 squaring and 1 multiplication for u^3, 1 squaring for u^2, nothing for u^1) and 2 multiplications to accumulate the u^j into y, for 8 operations in total, matching the general bound of \ell + h - 2 = 8 multiplications. For the fixed base 2, the method similarly precomputes powers such as 2^{4^i}, enabling efficient computation of 2^n for various n by combining table entries as above, with the binary nature of the base simplifying some steps. The overall complexity balances precomputation O(\log n) against evaluation O(\log n + h \log h); choosing h \approx \log n / \log \log n (i.e., window size \log_2 h \approx \log \log n) minimizes the per-evaluation cost to O(\log n), with the shared table providing significant savings for multiple exponentiations, such as in cryptographic batch processing where dozens of powers of the same base are needed.
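The grouping by digit value translates directly into code; a minimal sketch under the notation above (yao_pow is our name; u ** j stands in for the O(\log j) binary powering, and modular reduction is omitted):

def yao_pow(x, n, h=4):
    # Base-h digits of n, least significant first.
    digits = []
    while n > 0:
        digits.append(n % h)
        n //= h
    # Fixed-base table x^(h^i), one entry per digit position (reusable across exponents).
    table, p = [], x
    for _ in digits:
        table.append(p)
        p = p ** h
    result = 1
    for j in range(h - 1, 0, -1):
        u = 1
        for i, d in enumerate(digits):
            if d == j:
                u *= table[i]      # product of x^(h^i) over positions with digit j
        result *= u ** j           # binary powering of the grouped product
    return result

assert yao_pow(7, 2989) == 7 ** 2989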

Euclidean Method

The Euclidean method provides an efficient approach to fixed-base exponentiation by applying principles from the Euclidean algorithm to decompose the exponent recursively, minimizing multiplications through division and modulo operations on exponent components. The core reduction combines two powers at once: for bases y, z with exponents a > b, write a = kb + r with k = \lfloor a/b \rfloor and r = a \bmod b; then y^a \cdot z^b = (y^k \cdot z)^b \cdot y^r, and the computation recurses on the smaller exponent pair (b, r). This extends the core idea of exponentiation by squaring to exploit exponent quotients and remainders, reducing the problem size at each step just as the Euclidean algorithm does for GCD computation.

The algorithm operates by maintaining a set of current powers and corresponding exponents, initially derived from the base-b representation of the target exponent, and iteratively applying the reduction step to the largest remaining exponents u > v: compute x^u = x^{u-v} \cdot x^v (or the quotient-remainder variant for efficiency), recursing on the smaller pair until base cases (such as x^0 = 1 or x^1 = x) are reached. In practice, for a single target exponent n, the process starts by selecting an auxiliary exponent (e.g., a power of 2 or another related value) and recurses on the differences or remainders, building the result through a chain of multiplications and squarings. This avoids exhaustive bit scanning and focuses on rapid reduction via integer division. The method relates to Yao's table-based approach for fixed-base cases but relies on recursive decomposition without extensive precomputed tables.

A representative example illustrates the recursion for computing x^{15}, expressed initially as x^{15} = x^8 \cdot x^7. Recursing on 8 and 7: since 8 = 1 \cdot 7 + 1, x^8 = x^1 \cdot x^7, so x^{15} = x \cdot x^7 \cdot x^7 = x \cdot (x^7)^2. Now recurse on computing x^7: choose auxiliary 4, 7 = 1 \cdot 4 + 3, so x^7 = x^3 \cdot x^4; then on 4 and 3, 4 = 1 \cdot 3 + 1, so x^4 = x \cdot x^3; continuing to x^3 = x^2 \cdot x, with x^2 = x \cdot x. The full chain (x^2, x^3, x^4, x^7, x^{14}, x^{15}) requires 6 operations (2 squarings and 4 multiplications), comparable to binary exponentiation for this exponent (a minimal addition chain achieves 5); the method's advantage appears for exponents whose structure admits faster reductions. Deeper recursion ensures the exponents diminish at a rate comparable to the Euclidean algorithm.

The efficiency stems from the fact that the number of multiplications tracks the number of steps of the Euclidean algorithm applied to the exponents, yielding O(\log n) total operations, though the constant factor depends on the exponent structure; smaller constants are observed for Fibonacci-like exponents, where the recursive relations align closely with the decomposition steps, enabling near-optimal addition chains. For general exponents, the method performs comparably to binary exponentiation but excels when exponents share common factors or follow patterns amenable to quick reductions. Historically, the approach draws on the addition-chain spirit in constructing exponent chains, with a seminal formulation in de Rooij's work on exponentiation with precomputation.
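A minimal recursive sketch of the two-term reduction (our own formulation of the identity above; the built-in ** stands in for the base-case powering):

def euclid_pow(y, a, z=1, b=0):
    # Computes y^a * z^b via y^a * z^b = (y^k * z)^b * y^r, where a = k*b + r.
    if b == 0:
        return y ** a                       # base case: a single power remains
    k, r = divmod(a, b)
    return euclid_pow(y ** k * z, b, y, r)  # recurse on the smaller pair (b, r)

# x^15 decomposed as x^8 * x^7, matching the worked example:
x = 3
assert euclid_pow(x, 8, x, 7) == x ** 15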

Extensions and Applications

Further Uses in Computing

Exponentiation by squaring serves as the core algorithm for modular exponentiation in RSA, particularly in encryption and decryption, where computing c = m^e \mod n or m = c^d \mod n requires efficient handling of large exponents to minimize multiplications. This method reduces the computational complexity from O(e) to O(\log e) modular multiplications, making RSA practical for 2048-bit or larger keys. Similarly, in the Diffie-Hellman key exchange, parties compute shared secrets via g^a \mod p and g^b \mod p, relying on binary exponentiation to perform these operations securely and efficiently over large primes. Software libraries commonly implement this technique for modular arithmetic; for instance, Python's built-in pow(base, exp, mod) function uses binary exponentiation in its long-integer arithmetic to compute modular powers, optimizing for speed.

In scientific computing, exponentiation by squaring extends to matrix powers, enabling efficient computation of powers of transition matrices in Markov chains to determine long-term state probabilities. For a transition matrix P, the n-step probabilities are given by P^n, which can be calculated via repeated squaring in O(\log n) matrix multiplications, avoiding the prohibitive cost of naive repeated multiplication for large n. This approach is particularly valuable in simulations of stochastic processes, such as queueing models. Applications in computer graphics leverage exponentiation by squaring for rapid evaluation of power functions in rendering pipelines, including specular exponentiation in models like Phong shading, where high shininess exponents (e.g., 100+) are common, and in fractal rendering requiring iterative power computations.

Hardware implementations integrate exponentiation by squaring into processors and accelerators for cryptographic workloads. On x86 architectures, libraries like Intel IPP Cryptography provide optimized routines using binary methods tailored to instruction sets like AVX for big-integer and finite-field operations. In field-programmable gate arrays (FPGAs), it forms the basis of dedicated accelerators, such as those for modular exponentiation via Montgomery reduction, achieving high throughput for secure communications. Post-2020 advancements in post-quantum cryptography, including NIST-standardized lattice-based schemes like CRYSTALS-Kyber (standardized as ML-KEM in FIPS 203, August 2024), employ the Number Theoretic Transform (NTT) with precomputed twiddle factors, which underpins efficient polynomial multiplications during key generation and encapsulation, significantly reducing computation time compared to direct methods.
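For instance, a short sketch of matrix binary exponentiation applied to a Markov chain (NumPy-based; the transition matrix values are illustrative only):

import numpy as np

def matrix_power(P, n):
    # Binary exponentiation on matrices: O(log n) matrix multiplications.
    result = np.eye(P.shape[0])
    base = P.copy()
    while n > 0:
        if n & 1:
            result = result @ base
        base = base @ base
        n >>= 1
    return result

# Long-run behavior of a two-state chain: rows of P^n approach the stationary distribution.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
print(matrix_power(P, 1024))   # both rows converge to approximately [0.833, 0.167]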

Generalizations and Alternatives

Exponentiation by squaring, primarily designed for integer exponents, generalizes to other numerical domains with adaptations. For floating-point bases with integer exponents, the method applies directly by performing squarings and multiplications in floating-point arithmetic, preserving the logarithmic complexity while handling the base and results as floating-point values. However, for non-integer real exponents, a common generalization uses the identity x^y = e^{y \ln x}, computing the power via the exponential and logarithm functions; this approach, while versatile, is less efficient due to the higher computational cost and potential precision loss of transcendental function evaluations compared to direct squaring for integer powers.

The binary method extends straightforwardly to matrix exponentiation, where repeated squaring and multiplication operate on matrices instead of scalars, achieving O(\log n) matrix multiplications for computing A^n. This generalization is widely used in scientific computing for tasks like solving differential equations or simulating dynamic systems, with implementations leveraging the same recursive structure but accounting for the cubic cost of each matrix multiplication. In p-adic numbers, exponentiation generalizes via power series expansions or p-adic logarithms and exponentials, converging within the p-adic metric for suitable arguments, analogous to the real case but with ultrametric properties enabling distinct convergence behaviors.

Higher-radix variants beyond binary (2-ary) represent a further generalization, such as m-ary methods that process the exponent in base-m digits (m > 2), reducing the number of non-squaring multiplications at the expense of precomputing m-1 powers; these are particularly effective for large exponents in hardware implementations, trading memory for fewer operations. For instance, radix-4 or radix-8 methods roughly halve the number of extra multiplications compared to the binary method for equivalent bit lengths.

Alternatives to pure squaring-based methods include addition chains, which compute powers using a sequence of additions and doublings to minimize total multiplications below the binary representation's O(\log n) bound. Optimal addition chains, often found via evolutionary algorithms or branch-and-bound search, outperform binary exponentiation for small-to-medium exponents by reducing steps; for example, computing a^{15} requires 6 multiplications by the binary method but only 5 with a minimal chain, as shown below. These chains are especially useful in cryptography for fixed small exponents, though finding minimal chains is NP-hard for large n. Within squaring iterations, alternatives to schoolbook multiplication enhance efficiency; the Karatsuba algorithm, with O(n^{1.585}) complexity for n-digit numbers, accelerates the multiplications and squarings in big-integer contexts, outperforming quadratic methods for operands beyond roughly 300-600 bits and integrating seamlessly into binary exponentiation pipelines.

In modern computing, vectorized implementations exploit SIMD instructions or GPU parallelism, applying squaring across arrays in single-instruction-multiple-data fashion; for example, TensorFlow's power operations enable efficient batch computations in machine learning workloads on GPUs. Despite these strengths, exponentiation by squaring has limitations, particularly for very small exponents, where simple repeated multiplication or optimal chains require fewer operations overall; binary methods perform unnecessary bookkeeping for n < 4. While quantum alternatives like Grover's algorithm offer quadratic speedups for search-related problems in unstructured spaces, classical focus remains on these optimizations for practical deployment.
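To make the addition-chain comparison concrete, here is the minimal chain 1, 2, 3, 6, 12, 15 for a^{15} written out (a sketch; the binary method needs six multiplications for the same exponent):

def pow15(a):
    # Addition chain 1 -> 2 -> 3 -> 6 -> 12 -> 15: five multiplications total.
    a2 = a * a       # a^2
    a3 = a2 * a      # a^3
    a6 = a3 * a3     # a^6
    a12 = a6 * a6    # a^12
    return a12 * a3  # a^15

assert pow15(2) == 2 ** 15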
