
Karatsuba algorithm

The Karatsuba algorithm, also known as the Karatsuba–Ofman algorithm, is a divide-and-conquer method for multiplying large integers that reduces the time complexity from the standard O(n^2) of the schoolbook method to O(n^{\log_2 3}) \approx O(n^{1.585}), where n is the number of digits. Developed by the Soviet mathematician Anatoly Karatsuba in late 1960 and first published in 1962, jointly with Yuri Ofman, under the title "Multiplication of Multidigit Numbers on Automata," it is the earliest known sub-quadratic multiplication technique. By recursively breaking the problem into smaller subproblems, the algorithm minimizes the number of required multiplications while leveraging cheap additions and subtractions, making it a foundational advance in computational complexity theory.

At its core, the algorithm splits two n-digit numbers x and y (assuming n is even for simplicity) into upper and lower halves: x = x_1 \cdot b^{m} + x_0 and y = y_1 \cdot b^{m} + y_0, where b is the base (typically 10, or 2^{32} for machine efficiency) and m = n/2. Instead of performing the four multiplications x_1 y_1, x_1 y_0, x_0 y_1, and x_0 y_0 as in the classical approach, it computes only three: p_1 = x_1 y_1, p_0 = x_0 y_0, and p_2 = (x_1 + x_0)(y_1 + y_0). The cross term x_1 y_0 + x_0 y_1 is then obtained efficiently as p_2 - p_1 - p_0, and the full product is assembled as p_1 \cdot b^{2m} + (p_2 - p_1 - p_0) \cdot b^m + p_0. The recursion continues down to the base case of single-digit (or small-block) multiplication, with the overall recurrence T(n) = 3T(n/2) + O(n) solving to the stated complexity via the master theorem.

The algorithm's significance lies in its demonstration that algebraic identities can yield asymptotic speedups in arithmetic, challenging the long-held assumption that multiplication required \Theta(n^2) operations.
It paved the way for more advanced techniques, including the Toom–Cook generalizations and fast Fourier transform-based methods like Schönhage–Strassen, and remains relevant in practice for intermediate-sized operands in arbitrary-precision libraries such as GMP and Java's BigInteger, where it outperforms both the naive method and highly optimized FFT approaches for numbers up to several thousand digits. Despite increased constant factors from recursion and carry handling, its simplicity and hardware adaptability have led to implementations in cryptographic systems and scientific computing for efficient large-integer and modular multiplications.

History

Discovery

The Karatsuba algorithm was developed in 1960 by Anatoly Karatsuba, a 23-year-old graduate student at Moscow State University. As a participant in a seminar on mathematical problems in cybernetics organized by Andrey Kolmogorov, Karatsuba was motivated by the need to explore the fundamental limits of computation for arithmetic operations. During the seminar, Kolmogorov conjectured that multiplying two n-digit numbers requires at least \Omega(n^2) operations in any general algorithm, building on earlier work suggesting that the standard quadratic-time method represented an asymptotic lower bound. Within a week of attending, Karatsuba devised a novel divide-and-conquer approach that reduced the number of required multiplications, achieving a time complexity of O(n^{\log_2 3}), where \log_2 3 \approx 1.585, thereby disproving Kolmogorov's conjecture. He provided an initial proof demonstrating that this complexity holds for the recursive multiplication of large integers. Karatsuba presented his discovery at the same Kolmogorov seminar later in 1960, where it immediately challenged prevailing assumptions about multiplication efficiency and reportedly led to the seminar's termination. This breakthrough marked the first sub-quadratic algorithm for integer multiplication, laying foundational groundwork for subsequent advances in computational complexity theory.

Publication and Initial Impact

The Karatsuba algorithm was formally published in 1962 in Doklady Akademii Nauk SSSR (Proceedings of the USSR Academy of Sciences), volume 145, pages 293–294, under the title "Multiplication of Multidigit Numbers on Automata." The paper, authored by Anatoly Karatsuba and Yuri Ofman, was presented by Kolmogorov on February 13, 1962, and combined Karatsuba's multiplication technique with Ofman's related results on the complexity of operations on multidigit numbers. This joint presentation highlighted the algorithm's broader implications for arithmetic on automata, marking an early theoretical advancement in computational efficiency. Following Karatsuba's presentation in late 1960, where he first outlined the method, Kolmogorov was initially agitated because the result contradicted his conjecture on lower bounds for multiplication. Despite this, Kolmogorov verified the result and announced it at the subsequent meeting, effectively endorsing its validity and contributing to the paper's submission. An English translation appeared in Soviet Physics Doklady in 1963, volume 7, pages 595–596, broadening accessibility beyond Soviet academia. The publication garnered early recognition within Soviet circles, where it challenged prevailing assumptions about multiplication complexity and spurred research into faster methods. It laid foundational groundwork for subsequent developments in divide-and-conquer strategies, influencing theoretical work on computational limits during the 1960s.

Mathematical Background

Standard Multiplication

The standard multiplication algorithm, often referred to as the schoolbook or grade-school method, computes the product of two large integers by leveraging their positional representation in a given base b. Consider two n-digit numbers x = \sum_{i=0}^{n-1} x_i b^i and y = \sum_{j=0}^{n-1} y_j b^j, where each digit satisfies 0 \leq x_i, y_j < b. The product z = x \cdot y is then given by z = \sum_{i=0}^{n-1} \sum_{j=0}^{n-1} x_i y_j b^{i+j}, which expands to at most 2n digits in base b. This approach generates n partial products, each obtained by multiplying one number (say, x) by a single digit of the other (y_j), and then shifts each partial product by j positions to account for the place value b^j. These shifted partial products are subsequently added together column by column, starting from the least significant digit, with carries propagated whenever the sum in a position reaches or exceeds b. For instance, in base 10, multiplying 123 by 456 yields partial products 123 × 6 = 738 (no shift), 123 × 50 = 6150 (one position shift), and 123 × 400 = 49200 (two positions shift); summing 738 + 6150 + 49200 = 56088, with carries propagated out of the hundreds and thousands columns. The algorithm's time complexity is O(n^2), arising from the n^2 individual digit multiplications and a comparable number of additions and carry operations required to form and sum the partial products. This quadratic scaling establishes the baseline efficiency for large n, highlighting the need for optimized methods when multiplying very large integers.
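The column-by-column procedure above can be sketched in Python on digit lists stored least significant digit first; schoolbook_multiply is an illustrative helper, not taken from any particular library:

```python
def schoolbook_multiply(x, y, base=10):
    """Multiply two digit lists (least significant digit first) in O(n^2)."""
    result = [0] * (len(x) + len(y))  # product has at most len(x)+len(y) digits
    for i, xi in enumerate(x):
        carry = 0
        for j, yj in enumerate(y):
            total = result[i + j] + xi * yj + carry
            result[i + j] = total % base   # digit in this column
            carry = total // base          # carry into the next column
        result[i + len(y)] += carry
    while len(result) > 1 and result[-1] == 0:  # strip leading zeros
        result.pop()
    return result

# 123 * 456 = 56088, digits least-significant first
print(schoolbook_multiply([3, 2, 1], [6, 5, 4]))  # [8, 8, 0, 6, 5]
```

The double loop makes the n^2 digit multiplications explicit, matching the complexity argument above.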

Divide-and-Conquer Multiplication

The divide-and-conquer strategy for multiplication involves recursively breaking down the problem of multiplying two large numbers into smaller subproblems by splitting each number into its higher and lower halves, computing the necessary products of these halves, and then combining the results through addition and shifting operations. For two n-bit numbers x and y, this approach represents x = x_1 \cdot 2^{n/2} + x_0 and y = y_1 \cdot 2^{n/2} + y_0, where x_1, x_0, y_1, y_0 are each approximately n/2-bit numbers. The full product is then formed as xy = x_1 y_1 \cdot 2^n + (x_1 y_0 + x_0 y_1) \cdot 2^{n/2} + x_0 y_0, with the combination step requiring O(n) time for the additions and shifts. In the naive implementation of this strategy, four separate recursive multiplications are performed: one each for x_1 y_1, x_1 y_0, x_0 y_1, and x_0 y_0. Each of these multiplications operates on n/2-bit operands, mirroring the original problem at half the scale, which naturally leads to a recursive structure. This direct approach to decomposition ensures that the subproblems are solved independently before recombination, but it does not reduce the overall computational burden below that of the standard method. The time complexity of this naive divide-and-conquer multiplication is captured by the recurrence relation T(n) = 4T(n/2) + O(n), where the 4T(n/2) term accounts for the four recursive calls and the O(n) term reflects the linear-time work to combine results. Solving this recurrence using the master theorem or the recursion tree method yields T(n) = O(n^2), which is asymptotically equivalent to the complexity of the grade-school multiplication algorithm. Thus, while the method introduces recursion, it offers no asymptotic speedup on its own.
Despite the lack of asymptotic improvement, the recursive divide-and-conquer framework proves valuable for multiplying very large numbers, as it establishes a modular structure that facilitates subsequent optimizations and easier analysis of more advanced techniques. For instance, this approach can enhance cache efficiency in practical implementations by processing data in predictable blocks, and it paves the way for variants that reduce the number of recursive multiplications.
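The four-multiplication scheme described above can be sketched in Python, using machine-assisted bit shifts for the splits; the function name is illustrative:

```python
def naive_dc_multiply(x, y):
    """Naive divide-and-conquer: four recursive multiplications of half-size pieces,
    giving the recurrence T(n) = 4T(n/2) + O(n), i.e. O(n^2) overall."""
    n = max(x.bit_length(), y.bit_length())
    if n <= 1:
        return x * y  # single-bit base case
    m = n // 2
    x1, x0 = x >> m, x & ((1 << m) - 1)  # high and low halves of x
    y1, y0 = y >> m, y & ((1 << m) - 1)  # high and low halves of y
    return ((naive_dc_multiply(x1, y1) << (2 * m))
            + ((naive_dc_multiply(x1, y0) + naive_dc_multiply(x0, y1)) << m)
            + naive_dc_multiply(x0, y0))

print(naive_dc_multiply(1234, 5678))  # 7006652
```

The structure here is exactly what Karatsuba later improves: the only change needed is replacing the two cross-term calls with a single multiplication of sums.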

Algorithm Description

Core Step

The core step of the Karatsuba algorithm multiplies two n-digit numbers x and y in base b by dividing each into higher and lower parts of approximately m = \lceil n/2 \rceil digits, expressed as x = x_1 b^m + x_0 and y = y_1 b^m + y_0, where 0 \leq x_0, x_1, y_0, y_1 < b^m. This splitting leverages the positional nature of base-b representation to align the parts for efficient recombination. Instead of the four multiplications required in standard divide-and-conquer (x_1 y_1, x_1 y_0, x_0 y_1, x_0 y_0), the algorithm computes only three products: p_1 = x_1 y_1, p_2 = x_0 y_0, and p_3 = (x_1 + x_0)(y_1 + y_0). The key innovation lies in deriving the cross terms from these, as p_3 - p_1 - p_2 = x_1 y_0 + x_0 y_1, eliminating the need for a direct fourth multiplication while relying on additions and subtractions. The final product is then assembled as z = p_1 b^{2m} + (p_3 - p_1 - p_2) b^m + p_2. Each p_i may span up to 2m digits due to the size of the factors, and the shifts by b^{2m} and b^m correspond to appending zeros in base b. In practice, additions like x_1 + x_0 and subtractions like p_3 - p_1 - p_2 may produce temporary values exceeding b^m - 1, requiring carry propagation across digits during these operations and in the final assembly of z. These carries are handled through standard base-b addition algorithms, each taking O(m) time, ensuring the overall step remains efficient for the reduced multiplication count.
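One level of this core step can be sketched as follows, using Python's built-in multiplication for the three half-size products so that only the three-product trick itself is on display (karatsuba_step is an illustrative name):

```python
def karatsuba_step(x, y, m, base=10):
    """One level of Karatsuba: split at base**m, use three half-size products."""
    bm = base ** m
    x1, x0 = divmod(x, bm)   # higher and lower parts of x
    y1, y0 = divmod(y, bm)   # higher and lower parts of y
    p1 = x1 * y1             # high product
    p2 = x0 * y0             # low product
    p3 = (x1 + x0) * (y1 + y0)
    cross = p3 - p1 - p2     # equals x1*y0 + x0*y1
    return p1 * bm * bm + cross * bm + p2

print(karatsuba_step(1234, 5678, 2))  # 7006652
```

Replacing the three `*` operations on the parts with recursive calls turns this single step into the full algorithm.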

Illustrative Example

To illustrate the core step of the Karatsuba algorithm, consider multiplying the 4-digit numbers 1234 and 5678 in base 10, using a split size of m = 2 digits. Split each number into higher and lower parts: 1234 = 12 × 10² + 34 (so x₁=12, x₀=34) and 5678 = 56 × 10² + 78 (so y₁=56, y₀=78). Compute three products: p₁ = x₁ × y₁ = 12 × 56 = 672, p₂ = x₀ × y₀ = 34 × 78 = 2652, and p₃ = (x₁ + x₀) × (y₁ + y₀) = (12 + 34) × (56 + 78) = 46 × 134 = 6164. The middle term, corresponding to x₁ y₀ + x₀ y₁, is then p₃ - p₁ - p₂ = 6164 - 672 - 2652 = 2840. Assemble the result as z = p₁ × 10⁴ + (middle term) × 10² + p₂ = 672 × 10000 + 2840 × 100 + 2652 = 6,720,000 + 284,000 + 2,652 = 7,006,652. This matches the direct multiplication 1234 × 5678 = 7,006,652. In this example, the partial products align without additional carries across the digit boundaries due to their magnitudes, though general implementations must propagate carries when combining terms to ensure correctness in base b.

Recursive Formulation

The Karatsuba algorithm extends its core multiplication step through recursion, enabling the efficient computation of products for arbitrarily large integers by repeatedly dividing the operands into smaller parts until reaching a base case. For two n-digit numbers x and y in base b (typically b = 10 or b = 2), the algorithm splits each number at a point m = \lceil n/2 \rceil, yielding x = x_1 b^m + x_0 and y = y_1 b^m + y_0, where x_0, y_0 have up to \lceil n/2 \rceil digits and x_1, y_1 have up to \lfloor n/2 \rfloor digits. This handles uneven splits naturally, as the lower-order parts may have one more digit than the higher-order parts when n is odd; no padding is strictly required, though implementations may pad for simplicity to ensure even recursion depths. The sums x_1 + x_0 and y_1 + y_0 may have up to m + 1 digits due to carry-over. The recursion proceeds by computing three subproducts: z_2 = \text{Karatsuba}(x_1, y_1), z_0 = \text{Karatsuba}(x_0, y_0), and z_1 = \text{Karatsuba}(x_1 + x_0, y_1 + y_0). These are then combined using the formula xy = z_2 b^{2m} + (z_1 - z_0 - z_2) b^m + z_0, which avoids the fourth multiplication of the naive divide-and-conquer approach by leveraging the algebraic identity for the cross terms. The process recurses on these smaller instances until the base case is reached, typically when n \leq 1 or a small threshold (e.g., single-digit multiplication), at which point the product is computed directly using standard arithmetic. An outline of the recursive function can be expressed as follows:
function Karatsuba(x, y, n):
    if n ≤ 1:
        return x * y  // base case: direct multiplication
    m = ceil(n / 2)
    x1 = x // b^m   // higher part
    x0 = x % b^m    // lower part
    y1 = y // b^m
    y0 = y % b^m
    z2 = Karatsuba(x1, y1, n - m)
    z0 = Karatsuba(x0, y0, m)
    z1 = Karatsuba(x1 + x0, y1 + y0, m + 1)
    return z2 * b^(2*m) + (z1 - z0 - z2) * b^m + z0
This structure ensures that the recursion tree branches into three subproblems of roughly half the size each time, facilitating the algorithm's subquadratic performance.
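The outline above translates directly into runnable Python. This sketch splits at the bit level (base b = 2) rather than in decimal, and a small-value base case stands in for single-digit multiplication:

```python
def karatsuba(x, y):
    """Recursive Karatsuba multiplication of nonnegative integers (bit-level splits)."""
    if x < 10 or y < 10:               # small base case: direct multiplication
        return x * y
    n = max(x.bit_length(), y.bit_length())
    m = n // 2                         # split point in bits
    x1, x0 = x >> m, x & ((1 << m) - 1)
    y1, y0 = y >> m, y & ((1 << m) - 1)
    z2 = karatsuba(x1, y1)             # high product
    z0 = karatsuba(x0, y0)             # low product
    z1 = karatsuba(x1 + x0, y1 + y0)   # product of sums
    return (z2 << (2 * m)) + ((z1 - z0 - z2) << m) + z0

assert karatsuba(1234, 5678) == 7006652
```

Using Python integers sidesteps the digit-array bookkeeping; an implementation over digit arrays would add the carry-handling helpers described in the pseudocode section below.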

Complexity Analysis

Time Complexity

The time complexity of the recursive Karatsuba algorithm for multiplying two n-digit numbers is captured by the recurrence relation T(n) = 3T(n/2) + \Theta(n), where the three recursive calls correspond to the multiplications of half-sized operands, and the \Theta(n) term accounts for the additions, subtractions, and shifts used to combine the results. This recurrence can be solved using the master theorem, with parameters a = 3, b = 2, and f(n) = \Theta(n). Since \log_2 3 \approx 1.585 > 1 and f(n) = O(n^{\log_2 3 - \epsilon}) for \epsilon \approx 0.585 > 0, the theorem yields T(n) = \Theta(n^{\log_2 3}). To derive this explicitly, assume T(n) = c n^{\alpha} for constants c and \alpha > 0. Substituting into the recurrence gives c n^{\alpha} = 3 c (n/2)^{\alpha} + d n for some constant d > 0. Dividing through by n^{\alpha} yields c = 3 c (1/2)^{\alpha} + d n^{1-\alpha}. As n \to \infty, the second term vanishes if \alpha > 1, so c = 3 c / 2^{\alpha}, or 2^{\alpha} = 3, hence \alpha = \log_2 3. The leading constant c depends on the base cases and the exact \Theta(n) coefficient but remains O(1) in the asymptotic bound. This \Theta(n^{\log_2 3}) complexity is asymptotically superior to the O(n^2) time of the standard grade-school algorithm for sufficiently large n, as \log_2 3 < 2, though Karatsuba's larger constant factors (3 recursive multiplications plus extra linear work, versus 4 multiplications with less overhead in the naive approach) imply a crossover point around 20 to 100 digits in practice.
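The growth implied by the recurrence can be checked concretely by counting base-case multiplications: with n a power of two, the count is exactly 3^{\log_2 n} = n^{\log_2 3}. A small illustrative script:

```python
import math

def karatsuba_mults(n):
    """Count base-case multiplications for n-digit operands (n a power of 2)."""
    if n <= 1:
        return 1
    return 3 * karatsuba_mults(n // 2)  # three half-size subproblems per level

for n in (64, 256, 1024):
    count = karatsuba_mults(n)
    # count matches n^(log2 3): 3^6 = 729, 3^8 = 6561, 3^10 = 59049
    print(n, count, n ** math.log2(3))
```

For comparison, the naive four-call scheme would give 4^{\log_2 n} = n^2 base-case multiplications.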

Space Complexity and Optimality

The space complexity of the traditional recursive Karatsuba algorithm follows the recurrence S(n) = S(n/2) + O(n), arising from the sequential evaluation of the recursive subproblems and the linear space for intermediate values at each level, leading to an asymptotic bound of O(n). This bound accounts for temporary storage during the divide-and-conquer process. The algorithm can be further optimized toward O(\log n) auxiliary space through advanced in-place techniques that reuse the output buffer for intermediate computations. A related aspect of the recursive formulation is the call stack depth of O(\log n), which adds negligible space overhead. Iterative versions can eliminate the stack while maintaining the same time complexity and O(n) space. In practice, the Karatsuba algorithm outperforms the quadratic schoolbook method starting at around 10 to 100 digits, depending on the number base, hardware architecture, and implementation details; for example, on modern processors the crossover typically occurs between 8 and 24 machine words (roughly 150 to 500 decimal digits for 64-bit words). Theoretically, integer multiplication has a conjectured lower bound of \Omega(n \log n), making Karatsuba's exponent of \log_2 3 \approx 1.585 a seminal milestone as the first sub-quadratic multiplication algorithm, influencing subsequent advances such as the Schönhage–Strassen algorithm.

Implementations and Extensions

Pseudocode Implementation

The Karatsuba algorithm can be implemented recursively in a high-level pseudocode form, treating the input numbers as arrays of digits in a given base b (e.g., b = 10 for decimal or b = 2 for binary), which allows handling arbitrary-precision integers without overflow in the base case. This representation facilitates splitting the arrays into high and low halves, performing recursive multiplications, and recombining results while propagating carries during addition. The base case uses a standard schoolbook multiplication for small inputs to terminate recursion. The following pseudocode assumes the inputs x and y are arrays of digits (least significant digit first) of equal length n (padded if necessary), and n is a power of 2 for simplicity. The function returns the product as a digit array in the same base, with a helper function for schoolbook multiplication and another for addition with carry propagation.
function karatsuba_multiply(x, y, base):
    n = length(x)
    if n <= 1:  // Base case for small n
        return schoolbook_multiply(x, y, base)  // O(1) for single digits; O(n^2) below a threshold

    m = n // 2
    x_low = x[0:m]      // Low half (least significant digits)
    x_high = x[m:n]     // High half (most significant digits)
    y_low = y[0:m]
    y_high = y[m:n]

    // Compute three recursive products
    p1 = karatsuba_multiply(x_high, y_high, base)  // x_high * y_high
    p2 = karatsuba_multiply(x_low, y_low, base)    // x_low * y_low
    // Compute (x_high + x_low) and (y_high + y_low) with carry
    x_mid = add_arrays(x_high, x_low, base)  // Temporary sum, carry propagated
    y_mid = add_arrays(y_high, y_low, base)
    p3 = karatsuba_multiply(x_mid, y_mid, base)  // (x_high + x_low) * (y_high + y_low)

    // Compute z = p3 - p1 - p2 (cross term x_high y_low + x_low y_high), with carries/borrows handled
    z = subtract_arrays(p3, add_arrays(p1, p2, base), base)

    // Combine: result = p1 * base^n + z * base^m + p2
    // Shift and add with carry propagation
    result_high = shift_left(p1, n, base)  // Multiply by base^n
    z_shifted = shift_left(z, m, base)     // Multiply by base^m
    result_low = p2
    result = add_arrays(add_arrays(result_high, z_shifted, base), result_low, base)
    return normalize(result, base)  // Remove leading zeros, propagate final carries
Here, schoolbook_multiply performs direct digit-by-digit multiplication, add_arrays adds two digit arrays element-wise with carry propagation (e.g., carry = (a_i + b_i + carry_in) // base, digit = (a_i + b_i + carry_in) % base), subtract_arrays does the analogous subtraction (handling borrows), shift_left appends zeros to represent multiplication by a power of the base, and normalize ensures the output array has no leading zero digits. This formulation directly implements the core step of computing the cross terms via z = (x_h + x_l)(y_h + y_l) - x_h y_h - x_l y_l. For binary contexts, the algorithm adapts naturally to base b = 2, where inputs are bit arrays and operations simplify: digits are 0 or 1, the schoolbook base case reduces to a logical AND of single bits, and additions/subtractions use bitwise operations with carry (full carry propagation is still needed for correctness). The structure remains identical, but shifts become left-shifts by bit counts (multiplying by 2^k appends k zero bits), and the reduction to three recursive calls lowers the constant factors in hardware implementations.
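As a concrete reading of one of these helpers, a possible add_arrays in Python (an illustrative sketch of carry propagation over least-significant-first digit lists, not a canonical implementation) might look like:

```python
def add_arrays(a, b, base=10):
    """Add two digit lists (least significant digit first) with carry propagation."""
    result, carry = [], 0
    for i in range(max(len(a), len(b))):
        ai = a[i] if i < len(a) else 0   # treat missing digits as zero
        bi = b[i] if i < len(b) else 0
        carry, digit = divmod(ai + bi + carry, base)
        result.append(digit)
    if carry:
        result.append(carry)             # final carry becomes a new leading digit
    return result

print(add_arrays([9, 9], [2, 0, 1]))  # [1, 0, 2], i.e. 99 + 102 = 201
```

subtract_arrays would follow the same loop shape with borrows in place of carries.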

Practical Optimizations and Variants

In practical implementations, the Karatsuba algorithm is often employed in hybrid strategies that combine it with simpler methods for small operands and more advanced techniques for very large ones to optimize overall performance. For medium-sized integers, typically ranging from hundreds to thousands of bits, Karatsuba provides asymptotic benefits over the quadratic schoolbook multiplication while avoiding the setup costs of fast Fourier transform (FFT)-based methods; thus, implementations switch to schoolbook multiplication below a certain threshold (e.g., around 50-100 limbs in base 2^{32}) and to FFT or number-theoretic transform (NTT) variants for operands exceeding several thousand bits. This adaptive approach minimizes constant factors and overhead, achieving up to 20-30% speedups in real-world benchmarks for cryptographic applications compared to pure Karatsuba recursion. To address recursion-related issues such as stack overflow and excessive function call overhead in deep recursions, iterative non-recursive versions of Karatsuba have been developed, reformulating the divide-and-conquer process as a series of loops that process splits level by level, often using deques or arrays to manage intermediate results. These variants reduce space complexity from O(n log n) to O(n) in the worst case and improve cache locality on modern CPUs by minimizing branch mispredictions and function prologue/epilogue costs, with reported runtime reductions of 10-15% for large inputs. Additionally, windowing techniques allow uneven splits (e.g., dividing into parts of sizes k and n-k where k ≠ n/2) to better align with hardware word sizes or operand asymmetries, further tuning efficiency for specific data distributions.
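The threshold-switching idea can be sketched as follows; KARATSUBA_CUTOFF is an illustrative constant, not a value taken from any particular library:

```python
KARATSUBA_CUTOFF = 64  # illustrative threshold in bits; real libraries tune this

def hybrid_multiply(x, y):
    """Karatsuba with a cutoff: fall back to native multiplication for small operands."""
    n = max(x.bit_length(), y.bit_length())
    if n <= KARATSUBA_CUTOFF:
        return x * y  # below threshold: schoolbook / hardware multiply is faster
    m = n // 2
    x1, x0 = x >> m, x & ((1 << m) - 1)
    y1, y0 = y >> m, y & ((1 << m) - 1)
    z2, z0 = hybrid_multiply(x1, y1), hybrid_multiply(x0, y0)
    z1 = hybrid_multiply(x1 + x0, y1 + y0)
    return (z2 << (2 * m)) + ((z1 - z0 - z2) << m) + z0

a, b = 3 ** 500, 7 ** 400
assert hybrid_multiply(a, b) == a * b
```

Production libraries add a second threshold above which they switch again, from Karatsuba to Toom-Cook or FFT-based multiplication.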
Variants extending Karatsuba include the Toom-Cook family, where the three-way split (Toom-3) generalizes the two-way Karatsuba by evaluating polynomials at more points to reduce multiplications from 9 to 5 for ternary divisions, achieving O(n^{1.465}) complexity and serving as a bridge to higher-order methods before FFT dominance. Another adaptation, the Karatsuba-Winograd variant, applies the algorithm to matrix multiplication by treating matrices as bivariate polynomials and using similar reduction tricks to lower the number of scalar multiplications, though it incurs higher addition counts and is practical only for modest dimensions due to constant overhead. Despite these advances, challenges persist: the algorithm's reliance on numerous additions and subtractions (roughly 3n per level) can dominate for small-to-medium n on hardware where multiplications are cheaper than adds, prompting thresholds around 2^{10}-2^{15} bits for breakeven; base choices like 2^{32} or 10^9 optimize limb operations to match CPU word sizes and reduce carry propagations, but poor alignment can inflate costs by 5-10%; and modern CPU cache effects, such as L1/L2 misses from scattered recursive accesses, necessitate strided or blocked layouts to maintain performance within 80-90% of theoretical bounds.

Applications

In Arbitrary-Precision Arithmetic

The GNU Multiple Precision Arithmetic Library (GMP) employs the Karatsuba algorithm as an intermediate multiplication method for large integers, applying it to operand sizes exceeding the basecase threshold and before transitioning to more advanced techniques such as Toom-Cook variants or FFT-based methods. This positions Karatsuba to handle multiplications efficiently for numbers up to several thousand digits, where it balances computational overhead against asymptotic gains over schoolbook multiplication. In programming language implementations, Karatsuba serves as a core component of arbitrary-precision integer arithmetic. Python's CPython interpreter applies Karatsuba multiplication to built-in int values once operands exceed an internal threshold of roughly 70 internal digits. Similarly, Java's BigInteger class uses Karatsuba for multiplications in the sub-quadratic regime, between the thresholds for the naive method and Toom-Cook, ensuring robust performance before invoking higher-complexity strategies. Karatsuba's efficiency proves particularly valuable in cryptographic applications requiring rapid multiplication of large integers, such as RSA, where it accelerates the computation of products of large primes. For instance, implementations leveraging recursive Karatsuba can achieve measurable speedups in encryption and key derivation compared to unoptimized methods. In RSA, Karatsuba contributes an effective O(n^{1.6}) cost for intermediate-sized operands, enhancing decryption and signing performance. As of 2025, Karatsuba remains relevant in blockchain systems for optimizing big-integer operations; it is incorporated in elliptic-curve libraries for efficient representation and multiplication of 254-bit numbers, supporting scalable zero-knowledge proofs and layer-2 solutions.
Additionally, hybrid Karatsuba variants enhance elliptic-curve cryptographic processors for embedded applications, delivering high throughput for 191-bit field operations in resource-constrained environments.

In Computational Number Theory

The Karatsuba algorithm enhances the efficiency of integer factorization in computational number theory by providing a fast method for multiplying large integers, which is a bottleneck in trial division and more advanced techniques like Pollard's rho algorithm for semiprimes. In Pollard's rho, the algorithm relies on iterative computations involving modular multiplications and gcd operations on potentially very large numbers, where Karatsuba's O(n^{1.585}) complexity significantly reduces the time compared to quadratic methods, enabling factorization of numbers with hundreds or thousands of digits. This application is particularly valuable for semiprimes, common in cryptographic contexts, as it balances the probabilistic nature of rho with optimized arithmetic primitives. In elliptic curve cryptography (ECC), Karatsuba multiplication is integral to scalar multiplication, the core operation for key generation and key agreement, which requires repeated point doublings and additions over large finite fields. By accelerating the underlying big-integer multiplications in the field arithmetic, Karatsuba variants reduce the overall computational cost, making ECC suitable for resource-constrained environments like embedded systems. Implementations often combine Karatsuba with projective coordinates to further minimize inversions. Karatsuba also supports fast exponentiation methods, such as binary exponentiation, by efficiently handling the squarings and multiplications needed to compute g^e mod p with large e and p in number-theoretic protocols. This integration is crucial for operations like Diffie-Hellman key exchange and primality testing, where modular exponentiation dominates the runtime; hybrid approaches combine Karatsuba with Montgomery reduction to avoid costly divisions. Recent research in the 2020s has hybridized Karatsuba with quantum-resistant algorithms to address emerging threats from quantum computers.
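The binary exponentiation mentioned above can be sketched as follows; every multiplication and squaring in the loop operates on operands the size of p, which is exactly where a Karatsuba-backed big-integer multiply would be invoked (modexp is an illustrative name):

```python
def modexp(g, e, p):
    """Square-and-multiply: computes g^e mod p with O(log e) multiplications."""
    result = 1
    base = g % p
    while e > 0:
        if e & 1:                      # current exponent bit set: multiply in
            result = (result * base) % p
        base = (base * base) % p       # square for the next bit
        e >>= 1
    return result

assert modexp(5, 117, 19) == pow(5, 117, 19)
```

For cryptographic moduli of thousands of bits, the `*` operations here dominate the runtime, which is why sub-quadratic multiplication pays off in Diffie-Hellman and RSA workloads.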
In lattice-based post-quantum cryptography, Karatsuba optimizes the polynomial multiplications over rings such as Z_q[x]/(x^n + 1) that are central to encryption and decryption, enabling lightweight implementations on 8-bit microcontrollers with cycle counts reduced by factors of 2-5 compared to naive methods. These advancements support post-quantum standards by maintaining efficiency in key encapsulation mechanisms while resisting quantum attacks.

References

  1. [1]
    [PDF] Multidigit Multiplication for Mathematicians
    MULTIDIGIT MULTIPLICATION FOR MATHEMATICIANS. DANIEL J. BERNSTEIN. Abstract. This paper surveys techniques for multiplying elements of various.
  2. [2]
    [PDF] Karatsuba and Yu. Ofman - cs.wisc.edu
    Original article submitted February 9, 1962. One of the problems studied in this paper, just as in the previously published paper by Yu. Ofman [1], whose no-.
  3. [3]
    [PDF] The Karatsuba algorithm (multiplication of large integers)
    The Karatsuba algorithm provides a striking example of how the “Divide and Conquer” technique can achieve an asymptotic speedup over an ancient algorithm.
  4. [4]
    Mathematicians Discover the Perfect Way to Multiply
    Apr 11, 2019 · Then in 1960, the 23-year-old Russian mathematician Anatoly Karatsuba took a seminar led by Andrey Kolmogorov, one of the great ...
  5. [5]
    O - Introduction to Theoretical Computer Science
    The operations of Karatsuba's algorithm are detailed in Algorithm 0.4, while the analysis is given in Lemma 0.5 and Lemma 0.6. 2: Karatsuba's multiplication ...
  6. [6]
    [PDF] The Speed Limit of Multiplication. - Institutt for matematiske fag
    Sep 6, 2019 · Attending was the 23-year old student Anatoly Karatsuba. A week's search later, he discovered the Karatsuba algorithm: M. KA. (n) = O(nlog. 2.
  7. [7]
    09-fast-multiplication-and-the-discrete-fourier-transform
    This method was discovered by Anatoly Karatsuba as a student in 1960 and presented at Moscow State University in a seminar run by Andrey Kolmogorov.
  8. [8]
    Why did Kolmogorov publish Karatsuba's algorithm?
    Mar 15, 2014 · Karatsuba presented his algorithm at a seminar led by Kolmogorov. · Kolmogorov prepared an article that had two results of his students, ...
  9. [9]
    [PDF] The Complexity of Computations
    Kolmogorov was very agitated because this contradicted his very plausible conjecture. At the next meeting of the seminar, Kolmogorov himself told the ...
  10. [10]
    Multiplication of Multidigit Numbers on Automata - ResearchGate
    Karatsuba and Ofman [KO63] showed that some multiplication operations of such an algorithm can be replaced by less costly addition operations which reduces ...
  11. [11]
    [PDF] CMSC 351: Integer Multiplication - UMD MATH
    Mar 29, 2022 · What is the worst-case time complexity of this operation? 2 Schoolbook Multiplication. 2.1 Method. The first and most obvious way to multiply ...
  12. [12]
    [PDF] Divide-and-conquer algorithms - People @EECS
    T(n) = 4T(n/2) + O(n). We will soon see general strategies for solving ... T(n)=8T(n/2) + O(n2). This comes out to an unimpressive O(n3), the same as ...
  13. [13]
    [PDF] Divide-and-Conquer - cs.Princeton
    Multiply four N/2-digit integers. □. Add two N/2-digit integers, and shift to obtain result. Divide-and-Conquer Multiplication: First Attempt. 9,224. 121,401 ...
  14. [14]
    [PDF] Chapter 2. Divide-and-conquer algorithms
    Our method for multiplying n-bit numbers starts by making recursive calls to multiply these four pairs of n/2-bit numbers, and then evaluates the preceding.
  15. [15]
    Karatsuba Multiplication -- from Wolfram MathWorld
    As discovered by Karatsuba (Karatsuba and Ofman 1962), multiplication of two n -digit numbers can be done with a bit complexity of less than n^2.
  16. [16]
    A. Karatsuba and Yu. Ofman, “Multiplication of Multidigit Numbers on ...
    A. Karatsuba and Yu. Ofman, “Multiplication of Multidigit Numbers on Automata,” Soviet Physics-Doklady, Vol. 7, 1963, pp. 595-596.
  17. [17]
    [PDF] CMSC 351: Integer Multiplication - UMD MATH
    Mar 29, 2022 · A tree diagram can succinctly show how Karatsuba's Algorithm plays out in terms of the breaking down to single-digit multiplications. Example ...
  18. [18]
    [PDF] CS 473: Algorithms - University of Illinois Urbana-Champaign
    Karatsuba's Algorithm. Example. 1234 × 5678 = (100 × 12 + 34) × (100 × 56 + 78). = 10000 × 12 × 56 + 100 × (12 × 78 + 34 × 56). +34 × 78. Chekuri. CS473. 37 ...
  19. [19]
    Karatsuba algorithm for fast multiplication using Divide and Conquer ...
    Jul 23, 2025 · Using Divide and Conquer, we can multiply two integers in less time complexity. We divide the given numbers in two halves. Let the given numbers ...
  20. [20]
    [PDF] 1 Introduction 2 Recurrences
    Oct 3, 2016 · and Karatsuba's runtime can be written as the recurrence T2(n)=3T2 n 2 + O(n). Note that the constant “hidden” in the O(n) term in T2 may be ...
  21. [21]
    Space-Efficient Karatsuba Multiplication for Multi-Precision Integers
    May 22, 2016 · The traditional Karatsuba algorithm for the multiplication of polynomials and multi-precision integers has a time complexity of O(n^{1.59}) and a space ...
  22. [22]
    [PDF] Karatsuba multiplication with temporary space of size ≤ n
    Abstract. We describe in this short note how it is possible to perform a Karatsuba multiplication of two polynomials of degree n − 1 using a buffer holding.
  23. [23]
    Karatsuba algorithm with O(n) memory instead of O(n log n)
    May 4, 2015 · The recursive Karatsuba multiplication algorithm with time complexity ~O(n^log2(3)) is known to be faster than the simple multiplication ...
  24. [24]
    c++ - Is it really efficient to use Karatsuba algorithm in 64-bit x 64-bit ...
    Jun 26, 2015 · No. On modern architectures the crossover at which Karatsuba beats schoolbook multiplication is usually somewhere between 8 and 24 machine words.
  25. [25]
    Karatsuba multiplication - CMU School of Computer Science
    Karatsuba, properly implemented, beats grade-school multiplication even for 16-digit numbers. It is significantly better at 32 digits.
  28. [28]
    [PDF] 1 Karatsuba Multiplication - CMU School of Computer Science
    Apr 19, 2020 · When handed directly to the interpreter, it results in an algorithm that examines all n! permutations of the given input list until it finds one ...
  29. [29]
    [PDF] Modern Computer Arithmetic - Mathematical Sciences Institute, ANU
    The case r = 2 corresponds to Karatsuba's algorithm (§1.3.2). The case r = 3 is known as Toom-Cook 3-way, sometimes simply called “the. Toom-Cook algorithm”.
  30. [30]
    [PDF] Multidigit multiplication for mathematicians
    This paper surveys techniques for multiplying elements of various commutative rings. It covers Karatsuba multiplication, dual Karatsuba multi- plication, Toom ...
  31. [31]
    [PDF] Flattening Karatsuba's recursion tree into a single summation - arXiv
    Nov 16, 2019 · The fast multiplication algorithm discovered by Anatoly Karatsuba in 1960 (and published two years later) is known to be the oldest ...
  32. [32]
    [PDF] Generalizations of the Karatsuba Algorithm for Efficient ...
    The present paper provides a generalization and detailed analysis of the algorithm by Karatsuba [2] to multiply two polynomials which was introduced in 1962.
  33. [33]
    Multiplication Algorithms (GNU MP 6.3.0)
    NxN limb multiplications and squares are done using one of seven algorithms, as the size N increases. Algorithm, Threshold. Basecase, (none). Karatsuba ...
  34. [34]
    Faster large integer multiplication - Ideas - Discussions on Python.org
    Jan 23, 2022 · I'm considering looking at improving the multiplication of Pythons built-in integers. There are faster methods than Karatsuba which is currently used in Python ...
  35. [35]
    BigInteger (Java SE 24 & JDK 24) - Oracle Help Center
    Common multiplication algorithms between the bounds of the naive and theoretical cases include the Karatsuba multiplication (O(n^1.585)) and 3-way Toom-Cook ...
  36. [36]
    [PDF] Fast Implementations of RSA Cryptography
    Trying to construct larger multiplies by applying Karatsuba's algorithm recursively does not result in a performance improvement: there are not enough ...
  37. [37]
    [PDF] improving performance, cryptographic strength of the post-quantum ...
    Jan 15, 2024 · Anatoly Karatsuba's algorithm has been successfully applied to significantly increase the speed of key generation and encryption. In spite ...
  38. [38]
    Computational complexity of RSA - Cryptography Stack Exchange
    Apr 16, 2018 · However OpenSSL (for these sizes) uses Karatsuba multiplication which lowers this to roughly O(n^1.6) (thanks frgieu) and also Montgomery ...
  39. [39]
    [PDF] optimizing big integer multiplication on bitcoin: introducing
    The Karatsuba Algorithm is a fast multiplication algorithm to multiply two integers using ... (2025) “Bridging bitcoin to second layers via BitVM2 ...
  40. [40]
    (PDF) A Fast and Efficient 191-bit Elliptic Curve Cryptographic ...
    Aug 6, 2025 · The system is built using the four-way overlap-free Karatsuba algorithm (OFKA) and a modified radix-n interleaved multiplication (mRnIM) ...
  41. [41]
    A Novel Approach to Integer Factorization: A Paradigm in ...
    Jan 16, 2025 · The article introduces a novel factorization algorithm that leverages the multiplication rules of numbers, specifically the Toom-Cook algorithm, ...
  42. [42]
    [PDF] An Efficient Elliptic Curve Scalar Multiplication using Karatsuba ...
    In 2013, Rezai A et al analysed a new and efficient implementation approach for the elliptic curve cryptosystem (ECC) based on a novel finite field ...
  43. [43]
    [PDF] ELLIPTIC CURVE SCALAR MULTIPLIER USING KARATSUBA
    To make it more efficient the scalar multiplication is the main operation in elliptic curve cryptography. Scalar multiplication involves plenty of point addition ...
  44. [44]
    (PDF) Multiplication and exponentiation of big integers with hybrid ...
    The distributed Karatsuba algorithm is applied to perform multiplication operations that use very large numbers (more than 256 digits) in the Montgomery ...
  45. [45]
    [PDF] A Lightweight Implementation of NTRU Prime for the Post-quantum ...
    Mar 18, 2021 · We achieved this performance through (i) new low-level software optimization techniques to accelerate Karatsuba-based polynomial multiplication ...
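
The three-multiplication recursion and the worked 1234 × 5678 example that appear in the sources above can be sketched in a few lines of Python. This is a minimal illustration, not an implementation from any of the cited works; the names p1, p0, p2 follow the article's notation, and base 10 is used here only for readability (real libraries such as GMP split on machine words).

```python
def karatsuba(x, y):
    """Multiply two non-negative integers using Karatsuba's
    three-multiplication divide-and-conquer recursion."""
    # Base case: single-digit operands use built-in multiplication.
    if x < 10 or y < 10:
        return x * y
    # Split both numbers at half the larger digit count: x = x1*b + x0, y = y1*b + y0.
    m = max(len(str(x)), len(str(y))) // 2
    b = 10 ** m
    x1, x0 = divmod(x, b)
    y1, y0 = divmod(y, b)
    # Three recursive products instead of the schoolbook four.
    p1 = karatsuba(x1, y1)            # high parts
    p0 = karatsuba(x0, y0)            # low parts
    p2 = karatsuba(x1 + x0, y1 + y0)  # combined sums
    # Cross term recovered by subtraction: x1*y0 + x0*y1 = p2 - p1 - p0.
    return p1 * b * b + (p2 - p1 - p0) * b + p0

# The worked example from source [18]:
assert karatsuba(1234, 5678) == 1234 * 5678
```

As source [24] notes, a practical implementation would recurse only down to a tuned threshold (typically 8 to 24 machine words) and fall back to schoolbook multiplication below it, rather than recursing to single digits as this sketch does.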