HHL algorithm

The Harrow–Hassidim–Lloyd (HHL) algorithm is a quantum algorithm for solving linear systems of the form Ax = b, where A is an N \times N sparse Hermitian matrix with condition number \kappa, by preparing a quantum state |\mathbf{x}\rangle proportional to the classical solution vector \mathbf{x} and enabling the estimation of expectation values such as \mathbf{x}^\dagger M \mathbf{x} for some observable M. Introduced in 2009 by Aram W. Harrow, Avinatan Hassidim, and Seth Lloyd, it achieves an exponential speedup over classical methods, running in time polynomial in \log N, \kappa, and the inverse precision 1/\epsilon, compared to the classical \mathcal{O}(N \sqrt{\kappa}) complexity for sparse matrices.

The algorithm proceeds in three main stages: first, it uses quantum phase estimation (QPE) to encode the eigenvalues of A into a register of qubits while applying powers of the unitary e^{-iAt} via sparse oracle access to A; second, it performs controlled rotations on an ancillary qubit to apply the inversion A^{-1} effectively by scaling with the reciprocal eigenvalues; and third, it applies inverse QPE and post-selection to yield the solution state |\mathbf{x}\rangle, from which inner products can be sampled. This process exploits superposition and interference to handle the full matrix implicitly, without explicitly storing or manipulating the N-dimensional vector. Key requirements include efficient simulation of the matrix A as a sparse Hamiltonian, Hermitian structure (or extensions to normal matrices), and a low condition number \kappa to avoid amplification of errors.

The HHL algorithm has foundational importance in quantum computing, demonstrating one of the first provable exponential speedups for a practically relevant problem and serving as a building block for advanced applications in quantum simulation, optimization, and machine learning, such as linear regression, support vector machines, and recommender systems. For instance, it accelerates computations in electromagnetic scattering analysis by solving large-scale linear systems quantumly. However, practical limitations persist, including sensitivity to decoherence and noise on current noisy intermediate-scale quantum (NISQ) devices, the need for fault-tolerant quantum computers for large N, and the fact that it outputs a quantum state rather than a classical vector, necessitating additional measurements for full solution recovery. Subsequent improvements, such as variable-time amplitude amplification and linear combinations of unitaries, have addressed some runtime dependencies on \kappa and the precision \epsilon.

Introduction

Overview

The Harrow–Hassidim–Lloyd (HHL) algorithm is a quantum algorithm designed to solve systems of linear equations of the form Ax = b, where A is an N \times N matrix and b is a known input vector. It achieves an exponential speedup compared to classical methods for large-scale problems involving specific matrix properties, enabling efficient estimation of solution features such as expectation values x^\dagger M x for some observable matrix M. The core output of the HHL algorithm is a quantum state |x\rangle proportional to the classical solution vector x, normalized as |x\rangle \propto A^{-1} |b\rangle. This state representation does not yield the full vector explicitly but allows subsequent quantum algorithms—such as those in quantum machine learning—to access and manipulate the solution with potential quadratic or exponential speedups inherited from the encoding. In outline, the algorithm encodes the input b as a quantum state |b\rangle, applies quantum phase estimation with the unitary e^{-iAt} to decompose it into the eigenbasis of A, performs eigenvalue inversion via controlled operations on an ancillary qubit, and post-selects the output state upon measuring the ancilla in a specific basis. The quantum speedup holds under the conditions that A is Hermitian, sparse (with a constant number of nonzero entries per row), and well-conditioned (with small \kappa), ensuring the runtime scales favorably with system size N for such matrices.
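
To make the output concrete, the following purely classical NumPy sketch computes the normalized state that HHL would prepare for a toy 2×2 system and the kind of expectation value it supports; the matrix, vector, and observable here are illustrative choices, not part of the original algorithm.

```python
import numpy as np

# Toy illustration of what HHL outputs: a normalized state |x> ~ A^{-1}|b>.
# This is a purely classical sketch for intuition, not a quantum implementation.
A = np.array([[1.0, -0.25],
              [-0.25, 1.0]])           # Hermitian, well-conditioned 2x2 matrix
b = np.array([1.0, 0.0])
b_state = b / np.linalg.norm(b)        # |b>: amplitude-encoded right-hand side

x = np.linalg.solve(A, b_state)        # classical solution A^{-1}|b>
x_state = x / np.linalg.norm(x)        # |x>: what HHL prepares (up to error)

# HHL never reveals x_state directly; it supports estimating <x|M|x>.
M = np.array([[0.0, 1.0], [1.0, 0.0]])  # example observable
print(x_state, x_state.conj() @ M @ x_state)
```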

Historical development

The Harrow–Hassidim–Lloyd (HHL) algorithm was first proposed in 2009 by Aram W. Harrow, Avinatan Hassidim, and Seth Lloyd in their seminal paper titled "Quantum algorithm for solving linear systems of equations." This work introduced a quantum method to approximate solutions to sparse linear systems A\mathbf{x} = \mathbf{b} with exponential speedup over classical approaches under certain conditions, such as when the matrix A is well-conditioned and Hermitian. The primary motivation behind the HHL algorithm stemmed from the desire to extend the paradigm-shifting speedups of earlier quantum algorithms—such as Shor's algorithm for factoring and Grover's algorithm for unstructured search—to fundamental linear algebra problems. These problems underpin a wide array of applications in quantum simulation of physical systems, optimization, and machine learning, where classical methods scale poorly with system size. By leveraging quantum phase estimation and Hamiltonian simulation techniques, HHL enabled the preparation of a quantum state proportional to the solution vector, facilitating efficient estimation of expectation values rather than full classical readout.

In the years immediately following its proposal, several theoretical extensions enhanced the algorithm's robustness and applicability. A key improvement came in 2012 from Andris Ambainis, who refined the dependence on the matrix condition number \kappa from quadratic to nearly linear using variable-time amplitude amplification, making the algorithm more efficient for moderately ill-conditioned systems. Another significant early advancement was the 2013 preconditioned quantum algorithm by Brian D. Clader, Brent C. Jacobs, and Christopher R. Sprouse, which addressed limitations for non-sparse matrices by incorporating classical preconditioning to reduce the effective condition number, thereby broadening HHL's practical utility.

The HHL algorithm quickly gained recognition as a cornerstone of quantum linear algebra. It was integrated into influential reviews and textbooks on quantum algorithms, such as the comprehensive primer by D. Dervovic, M. Herbster, P. Mountney, S. Severini, N. Usher, and L. Wossnig, which detailed its foundational role and subsequent developments. This early acknowledgment underscored HHL's potential to drive progress in quantum-enhanced numerical methods.

Background

Classical linear systems solving

The problem of solving a linear system consists of finding a vector x \in \mathbb{R}^N (or \mathbb{C}^N) such that Ax = b, where A is an N \times N nonsingular matrix and b is a known vector of length N. This task arises in numerous scientific and engineering applications, including simulations in physics, optimization, and machine learning. Classical direct methods, such as Gaussian elimination, transform the system into an equivalent upper triangular form through row operations, enabling back-substitution to obtain x. For dense matrices, this approach requires O(N^3) arithmetic operations in the worst case. Gaussian elimination traces its roots to the early 19th century, when Carl Friedrich Gauss developed systematic elimination procedures for solving least-squares problems in astronomy. These methods are reliable for small to moderate N but become computationally prohibitive for large N due to their cubic scaling.

For sparse matrices, where A has only s \ll N nonzero entries per row on average, iterative methods are preferred to exploit structure and reduce costs. The conjugate gradient method, introduced by Magnus Hestenes and Eduard Stiefel in 1952, is a seminal algorithm for symmetric positive definite systems, converging in at most N iterations but typically requiring O(\sqrt{\kappa}) iterations in practice, with each iteration costing O(s N) operations, where \kappa is the condition number of A (defined as \kappa = \|A\| \|A^{-1}\|, the ratio of the largest to smallest singular values). Similarly, the generalized minimal residual (GMRES) method, developed by Yousef Saad and Martin Schultz in 1986, extends this to nonsymmetric systems by minimizing the residual in a Krylov subspace, with convergence depending on \kappa and sparsity s, often requiring restarts to manage storage. Modern implementations incorporate preconditioning—techniques such as incomplete factorizations that approximate A^{-1}—to reduce the effective \kappa and accelerate convergence, enabling solutions for systems with millions of unknowns.

Despite these advances, classical solvers face significant limitations for high-dimensional problems. Direct methods scale as O(N^3), demanding infeasible memory and time for N > 10^4 or so, while iterative methods scale polynomially in N and \kappa but can stall if \kappa is large (e.g., ill-conditioned matrices from discretized PDEs), leading to slow convergence or numerical instability. In very large dimensions, even sparse iterative solvers struggle with storage for vectors and the need for efficient matrix-vector products, rendering exponentially scaled systems (in terms of problem parameters) intractable on classical hardware.
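
For reference, here is a minimal conjugate gradient implementation in NumPy along the lines described above; the tolerance and iteration-cap parameters are illustrative choices.

```python
import numpy as np

# Minimal conjugate gradient solver for a symmetric positive definite system,
# as described above; `tol` and `max_iter` are illustrative parameter names.
def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    n = len(b)
    max_iter = max_iter or n          # CG converges in at most n exact-arithmetic steps
    x = np.zeros(n)
    r = b - A @ x                     # residual
    p = r.copy()                      # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)         # step length along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p     # new A-conjugate direction
        rs = rs_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(conjugate_gradient(A, b))       # matches np.linalg.solve(A, b)
```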

Relevant quantum computing concepts

In quantum computing, the basic unit of information is the qubit, which can exist in a superposition of its basis states, unlike classical bits that are strictly 0 or 1. The general state of a single qubit is given by |\psi\rangle = \alpha |0\rangle + \beta |1\rangle, where \alpha and \beta are complex amplitudes satisfying |\alpha|^2 + |\beta|^2 = 1, representing the probabilities of measuring the qubit in |0\rangle or |1\rangle. This superposition allows a quantum system of n qubits to represent 2^n basis states simultaneously, enabling the quantum parallelism that underpins algorithms like HHL.

Quantum gates form the building blocks for manipulating these superposition states through unitary transformations. The Hadamard gate H, for instance, generates equal superposition from a basis state: H |0\rangle = \frac{|0\rangle + |1\rangle}{\sqrt{2}}, \quad H |1\rangle = \frac{|0\rangle - |1\rangle}{\sqrt{2}}, creating balanced superpositions essential for exploring multiple computational branches. Controlled gates, such as the controlled-NOT (CNOT), introduce entanglement by applying an operation on a target qubit only if the control qubit is in |1\rangle, linking the states of multiple qubits to perform correlated operations. Together these gates enable universal quantum circuits, since a set such as Hadamard, phase, CNOT, and T gates suffices for arbitrary computations.

For algorithms addressing classical problems, such as linear systems, classical data must be encoded into quantum states with sufficient fidelity. In the HHL algorithm, the input vector b is prepared as the quantum state |b\rangle = \sum_i \beta_i |i\rangle, where the \beta_i are the normalized entries of b, typically using amplitude encoding to achieve logarithmic qubit usage relative to the dimension. Efficient state preparation requires the encoding circuit to run in polylogarithmic time and ensure good overlap between |b\rangle and the eigenvectors of the system matrix, as low overlap reduces the algorithm's effective precision.

The HHL algorithm operates under the assumption of fault-tolerant quantum computation, where logical qubits are protected by quantum error-correcting codes to maintain coherence over extended circuit depths. This necessitates universal gate sets implementable with error rates below the fault-tolerance threshold (typically around 10^{-3} to 10^{-4} per gate), allowing circuits of depth polylogarithmic in the matrix size N and polynomial in \kappa. Without such error correction, decoherence would overwhelm the subtle phase information critical to the algorithm.
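
A small NumPy sketch of the amplitude encoding and Hadamard action described above; the example vector is arbitrary, and the "state" is simulated as a plain normalized array rather than on quantum hardware.

```python
import numpy as np

# Amplitude encoding, as sketched above: a length-N classical vector b is
# stored in the amplitudes of ceil(log2(N)) qubits. Classical simulation only.
b = np.array([3.0, 1.0, 2.0, 0.0])          # N = 4 entries -> 2 qubits
b_state = b / np.linalg.norm(b)              # amplitudes beta_i of |b>

n_qubits = int(np.ceil(np.log2(len(b))))     # logarithmic qubit usage
print(n_qubits, np.sum(np.abs(b_state)**2))  # 2 qubits, total probability 1

# Hadamard on one qubit: the 2x2 unitary used to create superpositions.
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
print(H @ np.array([1.0, 0.0]))              # H|0> = (|0> + |1>)/sqrt(2)
```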

Algorithm Description

Setup and assumptions

The HHL algorithm addresses the problem of solving a linear system A \mathbf{x} = \mathbf{b}, where A is an N \times N Hermitian matrix and \mathbf{b} is a known vector, by producing a quantum state encoding the solution \mathbf{x}. If A is not Hermitian, it can be embedded into the larger Hermitian matrix \tilde{A} = \begin{pmatrix} 0 & A \\ A^\dagger & 0 \end{pmatrix}, effectively doubling the dimension while preserving the solution structure. Without loss of generality, assume A is scaled such that \|A\| \leq 1 and its eigenvalues lie in the interval [1/\kappa, 1], where \kappa is the condition number. The matrix A is required to be s-sparse, with at most s non-zero entries per row, and these entries must be computable in O(s) time to enable efficient Hamiltonian simulation. The condition number \kappa must grow at most polylogarithmically in N for the algorithm's runtime to remain polynomial in \log N.

The input vector \mathbf{b} is encoded as a normalized quantum state |\mathbf{b}\rangle = \sum_j \beta_j |u_j\rangle, where |u_j\rangle are the eigenvectors of A with eigenvalues \lambda_j, and this state must be preparable in time polynomial in \log N. For the solution to have unit \ell_2-norm (up to constants), \mathbf{b} should have limited overlap with eigenvectors corresponding to very small eigenvalues, ensuring \|\mathbf{x}\| = O(1).

The output of the algorithm is a quantum state |\mathbf{x}\rangle approximating the normalized solution |\mathbf{x}\rangle \approx \sum_j (\beta_j / \lambda_j) |u_j\rangle / \|\sum_j (\beta_j / \lambda_j) |u_j\rangle\|, with the approximation error controlled by the algorithm's precision parameters. This state encodes \mathbf{x} in superposition, allowing extraction of expectation values \langle \mathbf{x} | M | \mathbf{x} \rangle for suitable observables M without reconstructing the full vector classically. The sparsity assumption on A aligns with classical methods for sparse linear systems, enabling quantum speedup under similar structural constraints.
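
The Hermitian embedding can be checked classically; the following sketch (with an arbitrary example matrix) builds \tilde{A} and recovers the solution of the original non-Hermitian system from the doubled one.

```python
import numpy as np

# Hermitian embedding for non-Hermitian A, as described above: solve the
# doubled system A_tilde @ (w, x) = (b, 0). Classical sketch for intuition.
A = np.array([[1.0, 2.0], [0.0, 1.0]])       # non-Hermitian example
b = np.array([1.0, 1.0])
N = A.shape[0]

A_tilde = np.block([[np.zeros((N, N)), A],
                    [A.conj().T, np.zeros((N, N))]])  # Hermitian by construction
assert np.allclose(A_tilde, A_tilde.conj().T)

y = np.linalg.solve(A_tilde, np.concatenate([b, np.zeros(N)]))
x = y[N:]                                     # solution sits in the lower block
print(np.allclose(A @ x, b))                  # True
```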

Hamiltonian simulation

In the HHL algorithm, Hamiltonian simulation constructs the unitary operator U(t) = e^{-i A t} for the scaled Hermitian matrix A, which is assumed to be s-sparse with at most s nonzero entries per row. This operator approximates the continuous-time dynamics generated by A over time t, enabling the encoding of spectral properties into quantum states. Efficient simulation relies on sparse oracle access to A, defined via oracles that, given a row index i, return the locations of its nonzero entries and, given a pair (i, j), return the value A_{ij}, computable in O(s \, \mathrm{polylog}\, N) time where N is the matrix dimension. This access model allows querying the matrix structure and values without storing the full A, which is crucial for scalability in large systems.

The simulation is typically achieved using sparse Hamiltonian simulation techniques, such as Trotter–Suzuki decomposition for structured forms of A, or linear combinations of unitaries (LCU), which expresses the evolution e^{-iAt} as a weighted sum of implementable unitaries and selects the desired evolution via post-selection or amplitude amplification. Both methods leverage the sparsity to decompose the exponential into gate-efficient circuits. The computational cost of simulating U(t) scales as O(t \|A\| / \epsilon_{\mathrm{sim}}) queries to the sparse oracle, where \epsilon_{\mathrm{sim}} is the simulation error, but optimized implementations achieve polylogarithmic dependence on N in gate count for fixed t and constant sparsity and precision. This simulated unitary serves as the core primitive in the HHL algorithm, supplying the controlled operations needed for subsequent eigenvalue estimation.
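
As a numerical illustration of the product-formula approach, the sketch below compares a first-order Trotter approximation of e^{-iAt} with the exact matrix exponential for a two-term 2×2 Hamiltonian; the splitting and step count are illustrative, not a quantum-circuit implementation.

```python
import numpy as np
from scipy.linalg import expm

# First-order Trotter sketch of U(t) = exp(-iAt) for A = A1 + A2, the kind of
# decomposition Hamiltonian-simulation subroutines exploit. Illustrative only.
A1 = np.array([[1.0, 0.0], [0.0, -1.0]])     # Pauli-Z term
A2 = np.array([[0.0, 0.5], [0.5, 0.0]])      # scaled Pauli-X term
A, t, r = A1 + A2, 1.0, 100                  # r Trotter steps

exact = expm(-1j * A * t)
step = expm(-1j * A1 * t / r) @ expm(-1j * A2 * t / r)
trotter = np.linalg.matrix_power(step, r)

# Error shrinks as O(t^2 / r) for the first-order product formula.
print(np.linalg.norm(trotter - exact, 2))
```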

Phase estimation

The quantum phase estimation (QPE) subroutine in the HHL algorithm is applied to the unitary time evolution operator U(t) = e^{-iAt} derived from the Hamiltonian simulation of the Hermitian matrix A, where t is a scaling time parameter. For an eigenvector |u_j\rangle of A with eigenvalue \lambda_j, the corresponding eigenvalue of U(t) is e^{-i\lambda_j t} = e^{-i 2\pi \phi_j}, with phase \phi_j = \lambda_j t / 2\pi. The QPE estimates \phi_j to high precision, enabling recovery of an approximation \tilde{\lambda}_j \approx \lambda_j by scaling back with t. This step entangles an ancillary clock register with the input state to encode the eigenvalues in the computational basis.

The circuit for QPE begins by initializing the clock register to |0\rangle^{\otimes m} and the system register to the input state |\psi\rangle, where m is the number of qubits in the clock register. A layer of Hadamard gates H^{\otimes m} is applied to the clock register, creating an equal superposition \frac{1}{\sqrt{2^m}} \sum_{k=0}^{2^m - 1} |k\rangle |\psi\rangle. For each clock qubit l = 0 to m-1, a controlled operation applies U(2^l t) to the system register, controlled on the l-th clock qubit being in state |1\rangle. Finally, an inverse quantum Fourier transform (QFT) is applied to the clock register, yielding the state |\tilde{\phi}_j\rangle |\psi\rangle \approx |\phi_j\rangle |\psi\rangle for eigenvector inputs, where |\tilde{\phi}_j\rangle is the binary approximation of \phi_j. The scaling \tilde{\lambda}_j = 2\pi \tilde{\phi}_j / t directly provides the eigenvalue estimate in the clock register.

The precision of the eigenvalue estimate is controlled by the clock register size m = O(\log(1/\varepsilon_{\mathrm{QPE}})), achieving an additive error |\tilde{\lambda}_j - \lambda_j| \leq \varepsilon_{\mathrm{QPE}} with success probability at least 1 - \delta, where \delta can be made small (e.g., O(1/m)) by standard QPE analysis assuming \lambda_j lies within the estimable range. This logarithmic scaling in precision qubits is a key efficiency feature of QPE, contrasting with classical methods that require linear resources for similar accuracy. In the HHL context, \varepsilon_{\mathrm{QPE}} is chosen proportionally to the desired overall solution error, typically O(\varepsilon / \kappa) where \kappa is the condition number of A, but the subroutine itself operates independently of subsequent inversion steps.

When integrated into HHL, the input to QPE is the normalized right-hand-side state |b\rangle \approx \sum_j \beta_j |u_j\rangle, where \{\beta_j\} are the coefficients in the eigenbasis of A. The output is the entangled state \sum_j \beta_j |\tilde{\lambda}_j\rangle |u_j\rangle, with the clock register holding the approximate eigenvalues conditional on each eigenvector component. This superposition preserves the amplitudes \beta_j while attaching eigenvalue information, setting up the algorithm for conditional operations on |\tilde{\lambda}_j\rangle without collapsing the coherent superposition. The success probability for the full input state inherits the per-eigenvalue guarantees, assuming the eigenvalues are well-conditioned and t is chosen appropriately (e.g., t = O(\kappa)) to map phases into [0,1).
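
The statistics of an ideal QPE readout can be reproduced classically. This sketch (with arbitrary illustrative values of m and \phi, and assuming exact controlled evolutions and a noiseless inverse QFT) evaluates the textbook output distribution of the clock register.

```python
import numpy as np

# Numeric sketch of ideal phase estimation: for eigenphase phi, an m-qubit
# clock register returns k with amplitude (1/M) * sum_j exp(2 pi i j (phi - k/M)).
m = 6
phi = 0.3173                                  # eigenphase to estimate, in [0, 1)
M = 2 ** m
j = np.arange(M)

amps = np.array([np.sum(np.exp(2j * np.pi * j * (phi - k / M))) / M
                 for k in range(M)])          # inverse-QFT output amplitudes
probs = np.abs(amps) ** 2                     # sums to 1

k_best = np.argmax(probs)
print(k_best / M, probs[k_best])              # peak near phi with high probability
```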

Eigenvalue inversion

In the HHL algorithm, an ancillary qubit, initialized in the state |0\rangle_A, is introduced to enable the conditional inversion of the estimated eigenvalues obtained from the phase estimation subroutine. This ancilla allows the inversion to be encoded as an amplitude in its state, facilitating the extraction of the solution vector upon postselection.

The inversion is implemented via controlled rotations on the ancilla, conditioned on the clock register that encodes the estimated eigenvalues \tilde{\lambda}_k. Specifically, for each basis state |k\rangle in the clock register corresponding to \tilde{\lambda}_k, a controlled-R_y(\theta_k) gate is applied to the ancilla, where \theta_k = 2 \arcsin(C / \tilde{\lambda}_k) and C is a constant chosen such that C \leq \min_j \lambda_j, typically C = O(1/\kappa) with \kappa being the condition number of the matrix. This rotation ensures that the amplitude of |1\rangle_A is proportional to 1/\tilde{\lambda}_k, while the bound on C avoids over-rotation for small eigenvalues. The controlled rotations are realized by decomposing the multi-qubit clock register into single-qubit controls, applying a series of two-qubit gates tailored to each |k\rangle.

Following the phase estimation step, which yields the entangled state \sum_j \beta_j |\tilde{\lambda}_j\rangle |u_j\rangle |0\rangle_A (where |b\rangle = \sum_j \beta_j |u_j\rangle and |u_j\rangle are the eigenvectors of the Hermitian matrix A with eigenvalues \lambda_j), the controlled rotations evolve the state to: \sum_j \beta_j |\tilde{\lambda}_j\rangle |u_j\rangle \left( \sqrt{1 - \left( \frac{C}{\tilde{\lambda}_j} \right)^2 } \, |0\rangle_A + \frac{C}{\tilde{\lambda}_j} |1\rangle_A \right). This inverts the eigenvalues conditionally, encoding the reciprocals in the ancilla's |1\rangle_A component.

The goal of this step is to prepare a state where postselection on the ancilla in |1\rangle_A projects the system onto \sum_j \beta_j (C / \lambda_j) |u_j\rangle, which is proportional to C A^{-1} |b\rangle \approx C |x\rangle, the normalized solution to the linear system A |x\rangle = |b\rangle, up to the approximation in the eigenvalue estimates. The success probability of this projection is \Omega(1/\kappa^2), reflecting the conditioning of the problem.
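
The amplitude bookkeeping of this step can be verified classically. The sketch below (with made-up eigenvalues and coefficients) applies the rotation amplitudes C/\tilde{\lambda}_j and checks that post-selecting the |1\rangle_A branch yields a state proportional to A^{-1}|b\rangle.

```python
import numpy as np

# Classical sketch of eigenvalue inversion: after the controlled R_y rotations,
# each eigencomponent carries amplitude beta_j * C / lambda_j on the |1>_A branch.
lambdas = np.array([0.25, 0.5, 1.0])          # eigenvalues of A, in [1/kappa, 1]
betas = np.array([0.6, 0.64, 0.48])           # coefficients of |b> in eigenbasis
C = lambdas.min()                             # C <= min_j lambda_j avoids over-rotation

amp_1 = betas * (C / lambdas)                 # |1>_A branch: beta_j * C / lambda_j
amp_0 = betas * np.sqrt(1 - (C / lambdas)**2) # |0>_A branch keeps normalization
print(np.sum(amp_0**2 + amp_1**2))            # total norm stays 1

p_success = np.sum(amp_1**2)                  # probability of measuring ancilla = 1
x_state = amp_1 / np.sqrt(p_success)          # post-selected state ~ beta_j / lambda_j
target = (betas / lambdas) / np.linalg.norm(betas / lambdas)
print(p_success, np.allclose(x_state, target))  # True: proportional to A^{-1}|b>
```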

Measurement and output

After the eigenvalue inversion step, the quantum state includes an ancilla qubit that is measured in the computational basis. A measurement outcome of 1 on this ancilla post-selects the solution subspace, yielding a state proportional to |x\rangle = \sum_j \beta_j \lambda_j^{-1} |u_j\rangle, where \beta_j are the coefficients of the input vector |b\rangle in the eigenbasis of A, \lambda_j are the eigenvalues, and |u_j\rangle are the corresponding eigenvectors. Upon successful post-selection, the clock register from the phase estimation is uncomputed to disentangle it, leaving the normalized solution state |x\rangle \approx A^{-1} |b\rangle / \|A^{-1} |b\rangle\| in the system register.

The success probability of this post-selection is at least \Omega(1/\kappa^2), where \kappa is the condition number of A, due to the scaling constant C introduced by the controlled inversion on the ancilla. If the probability is low, amplitude amplification can be applied to boost it, requiring O(\kappa) repetitions of the algorithm to obtain the state with constant probability, though post-selection alone often suffices for many applications.

The output state |x\rangle is not typically read out via full tomography, which would be inefficient; instead, it is used directly for estimating expectation values such as \langle x | M | x \rangle for some observable M, often via the swap test to estimate inner products without collapsing the full vector. For norms or specific components, amplitude estimation techniques provide efficient readout of quantities like \|A^{-1} |b\rangle\| or |\langle i | x \rangle|^2 by sampling in the computational basis. This approach leverages the quantum representation to extract solution properties probabilistically while preserving the algorithm's exponential speedup.
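
As an illustration of the readout strategy, this sketch simulates the swap-test statistic 1/2 + |\langle x|y\rangle|^2 / 2 by classical sampling; the states, seed, and shot count are arbitrary choices.

```python
import numpy as np

# Swap-test sketch: the probability of measuring the control ancilla in |0>
# is 1/2 + |<x|y>|^2 / 2, so sampling estimates the overlap without reading
# out either state. Pure NumPy statistics, no circuit simulation.
rng = np.random.default_rng(7)

x = np.array([0.8, 0.6])                       # normalized |x>
y = np.array([0.6, 0.8])                       # normalized |y>
p0 = 0.5 + 0.5 * abs(np.dot(x, y)) ** 2        # ideal swap-test statistic

shots = 10_000
outcomes = rng.random(shots) < p0              # simulated ancilla measurements
overlap_sq_est = 2 * outcomes.mean() - 1       # invert the statistic
print(abs(np.dot(x, y))**2, overlap_sq_est)    # estimate close to 0.9216
```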

Performance Analysis

Runtime complexity

The runtime complexity of the Harrow-Hassidim-Lloyd (HHL) algorithm is analyzed in terms of query complexity to sparse oracles and overall gate count, assuming access to an s-sparse Hermitian matrix A via an efficient oracle that allows row computations. In the original formulation, the total query complexity scales as \tilde{O}(s^2 \kappa^3 \log N / \epsilon), where \kappa is the condition number of A, N is the dimension of the system, and \epsilon is the desired solution precision. Subsequent optimizations, particularly using variable-time amplitude amplification, improve the dependence on \kappa to nearly linear at the cost of a worse dependence on \epsilon, yielding \tilde{O}(\kappa \log^3 \kappa \log N / \epsilon^3).

The complexity breakdown reveals that quantum phase estimation (QPE) often dominates the runtime, especially in high-precision regimes, requiring \tilde{O} \left( \left( \frac{\kappa \log N}{\epsilon} \right)^2 / \log(\kappa / \epsilon) \right) queries under optimized conditions with constant-depth simulations. In contrast, the Hamiltonian simulation subroutine incurs a cost of \tilde{O}(s \kappa \log N / \epsilon), which is typically lower when the sparsity s is small.

Regarding gate counts, the HHL algorithm achieves a total circuit depth polylogarithmic in N, specifically \tilde{O}(\mathrm{poly}(\log N, \log(1/\epsilon))), when implemented with fault-tolerant quantum computing resources. This polylogarithmic scaling in N provides an exponential advantage in matrix dimension compared to classical methods that scale at least linearly in N for sparse systems. The parameter dependence is linear in \kappa after optimizations, logarithmic in N, and worse than linear in 1/\epsilon in early improved versions, though cubic scaling in \kappa appeared in the original formulation due to unoptimized amplification steps.

Classical comparison

Classical algorithms for solving linear systems of equations A\mathbf{x} = \mathbf{b} provide baselines against which the HHL algorithm's performance can be evaluated, particularly in terms of computational complexity. Direct solvers, such as Gaussian elimination with partial pivoting, require O(N^3) operations for an N \times N dense matrix, rendering them inefficient for large-scale problems where N exceeds millions. For sparse matrices with at most s non-zero entries per row, iterative methods like the conjugate gradient (CG) algorithm offer improved efficiency, achieving convergence in O(\sqrt{\kappa} \log(1/\epsilon)) iterations, where \kappa is the condition number of A (the ratio of its largest to smallest singular values) and \epsilon is the desired relative error; each iteration costs O(s N) time, yielding an overall complexity of \tilde{O}(s N \sqrt{\kappa}). These classical approaches scale linearly with N, lacking any logarithmic dependence on the system dimension.

In contrast, the HHL algorithm demonstrates a theoretical exponential speedup over classical methods for sparse, well-conditioned linear systems under specific conditions. Its runtime of \tilde{O}(s^2 \kappa^3 \log N / \epsilon) replaces the linear N factor with \log N, providing an advantage that grows exponentially with the logarithm of the matrix size—effectively trading polynomial scaling in \kappa for superpolynomial gains in N. This speedup materializes primarily for s-sparse matrices where \kappa and 1/\epsilon are polylogarithmic in N, and the post-selection step (which succeeds with probability \Omega(1/\kappa^2) before amplification) can be boosted efficiently without dominating the cost. For instance, in applications like quantum chemistry simulations involving large but sparse Hamiltonians, HHL can theoretically outperform CG by orders of magnitude when the system's ill-conditioning is mild relative to its size.

The quantum advantage of HHL is not universal and can fail in several regimes. For dense matrices where s \approx N, the effective speedup reduces to at most polynomial, as the sparsity assumption no longer holds and classical direct or preconditioned iterative solvers regain competitiveness. Similarly, when \kappa grows faster than polylogarithmically in N—common in highly ill-conditioned systems—the \kappa^3 factor in HHL's runtime erodes the benefit, often making classical methods faster overall. In practice, hybrid classical-quantum strategies, which leverage classical preprocessing to reduce \kappa or handle dense components, frequently outperform pure HHL implementations in these scenarios. To illustrate the speedup threshold, consider a large sparse system with N = 2^{50}; here, HHL is theoretically faster than classical iterative solvers provided \kappa \lesssim 10^9, assuming constant sparsity s = 1 and unit precision \epsilon = 1, as the quantum log-factor dominates the classical linear term below this boundary.
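
A back-of-the-envelope comparison of the cost models quoted above can be scripted directly; all constants and polylog factors are dropped here, so the crossover points are only indicative, and the "optimized" model assumes the linear-in-\kappa variants discussed later.

```python
import numpy as np

# Rough cost models from the text, with constants and polylog factors dropped,
# so any crossover shown is only indicative.
N, s, eps = 2.0 ** 50, 1, 1.0

def classical_cg(kappa):               # ~ s * N * sqrt(kappa)
    return s * N * np.sqrt(kappa)

def hhl_original(kappa):               # ~ s^2 * kappa^3 * log(N) / eps
    return s**2 * kappa**3 * np.log2(N) / eps

def hhl_optimized(kappa):              # ~ kappa * log(N) / eps (linear-in-kappa variants)
    return kappa * np.log2(N) / eps

for kappa in [1e2, 1e4, 1e6, 1e9]:
    print(f"kappa={kappa:.0e}  CG={classical_cg(kappa):.1e}  "
          f"HHL(orig)={hhl_original(kappa):.1e}  HHL(opt)={hhl_optimized(kappa):.1e}")
```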

Optimality

The query complexity of the HHL algorithm for solving sparse linear systems has been shown to match known lower bounds up to logarithmic factors. Specifically, any quantum algorithm solving the quantum linear systems problem (QLSP) for an s-sparse matrix with condition number κ requires Ω(κ) queries to the sparse access oracle, even for constant ε and matrix dimension N. This lower bound is established through reductions from hard search problems in the sparse oracle model, demonstrating that the κ dependence in HHL is unavoidable without additional assumptions.

Subsequent improvements have refined HHL's complexity while preserving near-optimality. In 2017, Childs, Kothari, and Somma introduced a quantum linear systems solver that exponentially improves the dependence on precision ε, changing it from linear in 1/ε to polylogarithmic in 1/ε, yielding a total query complexity of \tilde{O}(\kappa^3 \log N \cdot \mathrm{polylog}(1/\varepsilon)) under the original HHL assumptions. This polynomial method achieves the improved ε scaling but retains higher powers of κ compared to later frameworks.

The quantum singular value transformation (QSVT) framework further generalizes HHL to handle non-invertible matrices by applying optimal-degree polynomial transformations to the singular values of the input matrix, without relying on phase estimation. In the block-encoding model, QSVT-based solvers achieve query complexity \tilde{O}(\kappa \, \mathrm{polylog}(N, 1/\varepsilon)), which is optimal for the eigenvalue inversion step and extends HHL's applicability to singular value decompositions and other non-Hermitian cases with minimal overhead. As of 2025, recent advances include randomized QSVT variants that achieve gate complexity independent of the number of terms in sparse Hamiltonians while maintaining the near-optimal \tilde{O}(\kappa \, \mathrm{polylog}(N, 1/\varepsilon)) scaling, enhancing practicality for large-scale simulations.

Despite these advances, open questions remain regarding optimality beyond the sparse, Hermitian setting. In particular, achieving similar κ-independent or low-overhead performance for non-sparse matrices or directly solving non-Hermitian systems without embedding into larger Hermitian forms lacks tight bounds, with current methods incurring additional logarithmic or polynomial factors.

Error Analysis

Error sources

The Harrow-Hassidim-Lloyd (HHL) algorithm relies on several approximations inherent to its quantum subroutines, which introduce errors that must be controlled to achieve the desired solution precision. These errors primarily stem from the discretization in quantum phase estimation (QPE), the approximate inversion of eigenvalues, imperfections in initial state preparation, and inaccuracies in Hamiltonian simulation. Each contributes to the overall deviation between the output state and the ideal solution, with bounds derived from the algorithm's analysis.

A key source of error arises in the QPE subroutine, which discretizes the continuous eigenvalues of the normalized matrix A into a finite representation using m ancillary qubits. This leads to an eigenvalue rounding error bounded by O(1/2^m), where the estimated eigenvalue \tilde{\lambda}_j satisfies |\tilde{\lambda}_j - \lambda_j| \leq 1/2^m. Consequently, the inverted estimate satisfies 1/\tilde{\lambda}_j \approx 1/\lambda_j + O(\epsilon_{\mathrm{QPE}} / \lambda_{\min}^2), with \epsilon_{\mathrm{QPE}} = O(1/2^m), amplifying the error for small eigenvalues near the minimum \lambda_{\min}. This is fundamental to QPE, as detailed in the phase estimation procedure.

The eigenvalue inversion step introduces approximation errors due to the precision of the controlled rotation gates approximating the reciprocal eigenvalues. The inversion is implemented via a rotation whose |1\rangle_A amplitude scales as C / \tilde{\lambda}_j, where the choice C = O(\lambda_{\min}) ensures the amplitudes remain valid for the rotation. The choice of finite C affects the success probability of post-selection, scaling as \Omega(1/\kappa^2), but does not introduce error in the fidelity of the post-selected solution state, which remains proportional to the ideal solution. Rotation gate precision further contributes to infidelity if not sufficiently accurate.

Errors in preparing the initial state |b\rangle, which encodes the right-hand side of the linear system, propagate through the algorithm. If the preparation circuit achieves the target state with infidelity \delta (i.e., the prepared state has overlap 1 - \delta with the ideal |b\rangle), this leads to an output error bounded by O(\sqrt{\delta}) in the solution state, as the subsequent unitary operations preserve the initial deviation, which scales with the square root of the overlap deficit. Efficient preparation, often via quantum random access memory (qRAM), is assumed but introduces this controllable term.

Hamiltonian simulation errors, arising from approximations like Trotterization or linear combinations of unitaries (LCU), add unitary infidelity \epsilon_{\mathrm{sim}} to the controlled evolutions used in QPE. These errors accumulate over the multiple applications required for phase estimation, leading to amplified distortions in the eigenvalue estimates; specifically, the QPE output degrades by a factor related to 1 + O(\epsilon_{\mathrm{sim}} \cdot t_0), where t_0 is the simulation evolution time, scaling with the condition number and desired precision. For sparse Hamiltonians, higher-order Trotter or LCU methods can reduce \epsilon_{\mathrm{sim}}, but the error remains a primary source.
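
The amplification of rounding error by small eigenvalues is easy to see numerically; the sketch below (with illustrative values of m and the eigenvalues) compares the worst-case error in 1/\lambda against the \epsilon_{\mathrm{QPE}}/\lambda^2 bound quoted above.

```python
import numpy as np

# Numeric illustration of QPE rounding-error amplification in the inversion:
# an eigenvalue error bounded by 2^-m perturbs 1/lambda by roughly eps/lambda^2,
# hitting the smallest eigenvalues hardest.
m = 8
eps_qpe = 2.0 ** -m

for lam in [1.0, 0.1, 0.01]:                 # the smallest lambda plays 1/kappa
    lam_est = lam + eps_qpe                  # worst-case rounded eigenvalue
    err = abs(1 / lam_est - 1 / lam)
    print(f"lambda={lam:5.2f}  error in 1/lambda = {err:.3e}  "
          f"bound eps/lambda^2 = {eps_qpe / lam**2:.3e}")
```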

Precision scaling

The precision of the HHL algorithm's output state is fundamentally limited by errors arising from quantum phase estimation (QPE), Hamiltonian simulation, and post-selection effects, with the total error bound given by \| |x\rangle - |x_{\text{ideal}}\rangle \| \leq O(\kappa \varepsilon_{\text{QPE}} + \varepsilon_{\text{sim}} + \delta), where \kappa is the condition number of the input matrix, \varepsilon_{\text{QPE}} is the QPE precision, \varepsilon_{\text{sim}} is the simulation error, and \delta accounts for post-selection inaccuracies and initial state infidelity. To achieve an overall output precision of \varepsilon, the QPE precision must be set to \varepsilon_{\text{QPE}} = O(\varepsilon / \kappa), as errors in eigenvalue inversion amplify linearly with \kappa due to the combined effect of eigenvalue sensitivity and the \lambda_j^{-1} scaling.

This precision requirement imposes significant overhead on resource demands: the number of ancillary qubits for QPE increases to O(\log(\kappa / \varepsilon)) to resolve eigenvalues finely enough, while the worst-case runtime scales as O(\kappa^3 \log N / \varepsilon) due to repeated circuit executions for error suppression and success probability boosting, where N is the matrix dimension. However, optimizations can reduce this to O(\kappa \log N / \varepsilon) by leveraging advanced techniques that minimize full-circuit repetitions. A large condition number \kappa exacerbates error amplification, as perturbations in eigenvalue estimates propagate through the inversion, necessitating well-conditioned matrices (small \kappa) for practical viability; ill-conditioned systems can render the output unreliable even with high precision settings.

Mitigations focus on efficient error control without excessive overhead: oblivious amplitude amplification boosts the low success probability (initially \Omega(1/\kappa^2)) to near-constant levels using O(\kappa) queries, avoiding full HHL circuit repetitions by amplifying directly on the clock register. Additionally, variable-time QPE enhances efficiency by adaptively adjusting evolution times based on measured phases, reducing the average runtime while maintaining precision bounds. These approaches collectively ensure that precision scaling remains polynomial in \kappa and 1/\varepsilon, preserving the algorithm's theoretical advantages over classical methods for sparse, well-conditioned problems.
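
The clock-register sizing implied by \varepsilon_{\text{QPE}} = O(\varepsilon/\kappa) follows from simple arithmetic; this sketch tabulates it for a few illustrative (\kappa, \varepsilon) pairs, ignoring constant factors.

```python
import numpy as np

# Resource-scaling sketch for the precision requirement above: setting
# eps_QPE = eps/kappa means the clock register needs ~log2(kappa/eps) qubits
# (constant factors ignored).
def clock_qubits(kappa, eps):
    return int(np.ceil(np.log2(kappa / eps)))

for kappa, eps in [(10, 1e-2), (100, 1e-2), (100, 1e-4), (10_000, 1e-4)]:
    print(f"kappa={kappa:6}  eps={eps:.0e}  clock qubits ~ {clock_qubits(kappa, eps)}")
```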

Applications

Core applications

The HHL algorithm finds primary application in quantum machine learning, particularly for solving least-squares problems central to regression tasks. By efficiently inverting sparse, well-conditioned matrices, HHL enables the preparation of a quantum state proportional to the solution vector, allowing estimation of regression coefficients through sampling of expectation values. This approach achieves an exponential speedup over classical methods for high-dimensional datasets, provided the input data can be encoded into a quantum state. A representative example is the use of HHL to compute the least-squares solution to A \mathbf{w} = \mathbf{b}, where A encodes the data (for example, a Gram or covariance matrix) and \mathbf{w} is the weight vector, facilitating tasks like regression on exponentially large feature spaces.

In kernel-based methods, HHL supports the computation of inner products involving solution states, which underpins algorithms such as quantum support vector machines (QSVMs). These methods leverage HHL to solve the dual optimization problem for classification, where kernel evaluations replace explicit feature mappings, yielding logarithmic dependence on data size rather than polynomial. This has been demonstrated in binary classification problems, where HHL accelerates the training phase by preparing the support vector solution state for subsequent amplitude estimation.

For optimization, HHL serves as a subroutine in solving linear programs through dual formulations, particularly by speeding up iterations in interior-point methods (IPMs). In these methods, repeated solutions to linear systems arising from Newton steps are required; HHL's polylogarithmic scaling in dimension can quadratically accelerate each iteration compared to classical IPMs, assuming sparse and Hermitian matrices. This application is especially relevant for dense linear programs, where the algorithm inverts the Hessian-like matrices to find feasible directions toward the optimum. Seminal work has shown that hybrid quantum-classical IPMs using HHL achieve exponential speedup in the bit precision for certain linear programs.

In quantum simulation, HHL enables the solution of time-dependent linear differential equations, including the Schrödinger equation, by reformulating the dynamics as a matrix inversion. Specifically, the time evolution can be addressed by solving equations of the form (i \partial_t - H) |\psi(t)\rangle = 0, where H is the Hamiltonian, through phase estimation and matrix inversion to propagate the state efficiently. This provides an exponential advantage over classical integrators for sparse Hamiltonians, allowing high-precision evolution over long times with query complexity scaling as \kappa \log(1/\epsilon) / \log(\log(1/\epsilon)), where \kappa is the condition number and \epsilon the target precision. The approach has been applied to initial-value problems in quantum dynamics, demonstrating superconvergence for oscillatory systems.

Data fitting represents another core use, where HHL delivers exponential speedup for sparse signal processing, such as in compressed sensing scenarios. By solving underdetermined least-squares systems A \mathbf{x} = \mathbf{b} with sparse \mathbf{x}, HHL prepares a state encoding the minimal-norm solution, from which sparsity-promoting norms can be estimated via post-selection or amplitude estimation. This is particularly effective for reconstructing signals from incomplete measurements, as the algorithm's runtime depends logarithmically on the signal dimension while exponentially improving over classical solvers for well-conditioned, low-rank data. For instance, fitting exponential models to large datasets uses HHL to evaluate fit quality and parameter estimates in superposition, enabling applications like quantum data fitting with reduced sampling overhead.
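
The regression use case reduces to a Hermitian linear system via the normal equations; the sketch below sets one up classically (with synthetic data) to show the system HHL would be handed.

```python
import numpy as np

# Classical sketch of the least-squares system HHL would tackle: the normal
# equations (X^T X) w = X^T y form a Hermitian linear system in the weights w.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                  # data matrix (samples x features)
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.01 * rng.normal(size=100)   # noisy observations

A = X.T @ X                                    # Hermitian, positive definite
b = X.T @ y
w = np.linalg.solve(A, b)                      # HHL would output |w> ~ A^{-1}|b>
print(w)                                       # close to w_true
```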

Extensions and variants

One prominent extension of the HHL algorithm addresses the limitation that the original formulation requires the input matrix A to be Hermitian. For non-Hermitian matrices, the algorithm can be adapted by embedding A into a larger Hermitian matrix of the form \begin{pmatrix} 0 & A \\ A^\dagger & 0 \end{pmatrix}, which doubles the matrix dimension but introduces only a constant-factor overhead in resources. This preserves the essential structure needed for quantum phase estimation while enabling the solution of the original non-Hermitian system through post-processing on the doubled space.

More recent variants focus on iterative approaches to mitigate the strong dependence of HHL on the condition number \kappa of A. In 2025, an iterative HHL algorithm was proposed specifically for computing nuclear resonances, leveraging eigenvector continuation to analytically extend solutions beyond physical boundaries and iteratively refine estimates with reduced sensitivity to \kappa. This method applies successive HHL iterations on modified systems, using classical projections to stabilize and achieve accurate resonance energies in few-body systems after a small number of steps, such as 5–6 iterations for targeted eigenvalues.

Hybrid variants integrate classical preprocessing with the quantum core of HHL to enhance compatibility with noisy intermediate-scale quantum (NISQ) devices. A hybrid HHL method employs classical preconditioning to guide eigenvalue inversion, estimating singular values with higher precision (e.g., two extra bits) before quantum processing, which reduces overall error by up to 57% in simulations and demonstrates feasibility on hardware like IBM Torino for small 2×2 systems. This approach uses semiclassical quantum phase estimation with mid-circuit measurements, minimizing ancillary qubits to one and reducing gate depth, while classical steps handle spectral scaling for better eigenvalue resolution.

A broader generalization of HHL arises through the quantum singular value transformation (QSVT) framework, which applies arbitrary polynomial transformations to the singular values of a block-encoded matrix without requiring full quantum phase estimation. QSVT-based HHL variants perform inversion by selecting a polynomial that approximates 1/\sigma_i for the singular values \sigma_i, achieving optimal query complexity of O(\kappa \log(1/\epsilon)) to precision \epsilon, independent of matrix sparsity assumptions in the original HHL. This enables extensions to non-unitary operations and handles ill-conditioned systems more robustly, unifying HHL with other linear algebra tasks under a single protocol.
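
The polynomial picture behind QSVT-style inversion can be illustrated classically: matrix inversion reduces to finding a low-degree polynomial approximating 1/x on [1/\kappa, 1], with the required degree growing with \kappa. The sketch below uses an ordinary Chebyshev least-squares fit as a stand-in for the carefully constructed polynomials used in the actual framework; the value of \kappa and the degrees are illustrative.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Classical stand-in for the QSVT inversion polynomial: fit p(x) ~ 1/x on
# [1/kappa, 1] and watch the error fall as the degree grows. Illustrative only.
kappa = 10.0
xs = np.linspace(1 / kappa, 1.0, 2000)

for deg in [5, 15, 31]:
    p = C.Chebyshev.fit(xs, 1.0 / xs, deg)     # least-squares Chebyshev fit
    err = np.max(np.abs(p(xs) - 1.0 / xs))
    print(f"degree={deg:2d}  max |p(x) - 1/x| on [1/kappa, 1] = {err:.2e}")
```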

Implementations

Early experiments

The first experimental realization of the HHL algorithm occurred in 2013 using photonic quantum hardware. Cai et al. demonstrated the algorithm on a linear optical processor with four photonic qubits and four controlled-NOT gates, solving 2×2 linear systems with solution fidelities ranging from 0.825 to 0.993. This proof-of-concept highlighted the feasibility of quantum linear solving but was constrained to small-scale systems due to the probabilistic nature of photonic entangling gates. Independently, Barz et al. implemented a two-qubit photonic processor based on polarization-encoded qubits and entangling operations, applying the HHL algorithm to simple linear equations with process fidelities of 64.7% to 98.1%.

NMR-based experiments followed shortly after, providing deterministic control over qubit interactions. In 2014, Pan et al. executed the HHL algorithm on a four-qubit liquid-state NMR quantum computer, solving 2×2 systems across three distinct test cases and achieving overall fidelities exceeding 96%. These implementations explicitly demonstrated the core subroutines of quantum phase estimation for eigenvalue extraction and controlled inversion for matrix conditioning, with errors primarily arising from pulse imperfections rather than decoherence. Subsequent NMR work in the mid-2010s extended to 3–4 qubit configurations, refining these steps for improved scalability in liquid-state systems.

Superconducting qubit platforms enabled further progress toward larger matrices in the late 2010s. In 2017, Zheng et al. reported an implementation on a four-qubit superconducting processor from the University of Science and Technology of China, solving 2×2 linear systems with a process fidelity of 0.837 ± 0.006. Practical performance was bottlenecked by gate errors and qubit decoherence times on the order of microseconds. Efforts by other groups have explored larger systems but underscored noise as the dominant limitation.

These early demonstrations established proof-of-principle success for the HHL algorithm across diverse hardware modalities, achieving high-fidelity solutions for 2×2 problems. Key findings revealed that experimental errors were overwhelmingly due to hardware decoherence and imperfect gates, rather than approximations inherent to the algorithm itself, paving the way for noise-mitigation strategies in subsequent implementations.

Recent developments

In 2024, researchers demonstrated a hybrid variant of the HHL algorithm tailored for noisy intermediate-scale quantum (NISQ) devices, incorporating classical optimizations to enhance feasibility on current hardware. This approach, termed hybrid HHL++, leverages semiclassical quantum phase estimation and spectral scaling to reduce qubit and gate requirements while maintaining solution fidelity. Implemented on Quantinuum's H-series trapped-ion processors, it successfully solved linear systems for problems with sizes up to 8×8, achieving fidelities close to 1 through controlled-SWAP tests and demonstrating circuits with up to 291 two-qubit gates in depth on real hardware.

Advancements in software frameworks have enabled more practical simulations and hybrid executions of the HHL algorithm in 2025. Using the Qrisp high-level language integrated with Xanadu's Catalyst compiler, demonstrations showcased hybrid quantum-classical workflows for solving linear systems, including applications to finite-difference approximations of Poisson equations in one and two dimensions. These implementations achieved numerical accuracy on the order of 10^{-3}, comparable to classical methods like the Thomas algorithm for one-dimensional cases, while highlighting the algorithm's potential for differential-equation solvers despite limitations in conditioning.

Iterative refinements to the HHL algorithm have shown promise in specialized domains such as nuclear physics. A 2025 study introduced an iterative HHL framework combined with eigenvector continuation and complex scaling to compute nuclear resonances, particularly for the α-α system. This method reduces the number of queries by approximately 50% compared to standard approaches by leveraging continuation techniques to approximate non-Hermitian eigenvalues, yielding resonant states in good agreement with classical computations and offering efficiency gains for coupled-channel problems.

Security assessments of the HHL algorithm have gained attention amid growing concerns over quantum vulnerabilities. In 2025, evaluations targeted fault injection-like attacks, including improper initialization and higher-energy disruptions on critical qubits such as ancilla, clock, and input states during matrix inversion steps. A proposed defense mechanism, involving circuit redesigns for self-detection and error mitigation, was validated through simulations and executions on quantum hardware, preserving solution accuracy with minimal overhead and resilience to noise-induced faults.

Challenges

Theoretical limitations

The HHL algorithm's efficiency is highly sensitive to the condition number \kappa of the input matrix A, defined as the ratio of its largest to smallest singular values. The runtime scales as \tilde{O}(\kappa^2 \log N / \epsilon), where N is the matrix dimension and \epsilon is the desired precision, leading to no quantum speedup over classical methods if \kappa = \Omega(\mathrm{poly}(N)), as the overall complexity then becomes polynomial in N rather than logarithmic. Moreover, errors in the eigenvalue estimation step are amplified polynomially with \kappa, since the inversion step produces coefficients 1/\lambda_j that magnify small eigenvalue perturbations, potentially rendering the solution unreliable for ill-conditioned systems without additional preconditioning or mitigation techniques.

The algorithm operates within an oracle model that demands efficient sparse access to A, requiring it to be s-sparse (at most s non-zero entries per row, with s = O(\mathrm{poly}(\log N))) and row-computable in O(s) classical time. This enables Hamiltonian simulation of e^{-iAt} in \tilde{O}(\log N \cdot s^2 \kappa^2 / \epsilon) time, but random or dense matrices lack such sparse structure, necessitating costly preprocessing to construct an appropriate oracle and often eliminating the exponential advantage.

The output of HHL is a quantum state |x\rangle proportional to the solution vector, which provides no direct classical access to the full solution; efficient readout of all components would require \Omega(N) measurements, collapsing the speedup. This state is only useful in contexts requiring further quantum processing, such as amplitude amplification or integration into larger quantum algorithms, without which its practical utility is limited.

HHL assumes the matrix A corresponds to a Hamiltonian that can be efficiently simulated unitarily via sparse access, restricting applicability to structured problems; for general operators that cannot be simulated directly, embedding into a larger unitary (e.g., via block encoding) introduces significant overhead in query complexity and ancilla qubits, often scaling as O(\kappa) or worse and undermining scalability.

Practical difficulties

Implementing the Harrow-Hassidim-Lloyd (HHL) algorithm on current noisy intermediate-scale quantum (NISQ) devices faces significant hurdles due to inherent hardware limitations, particularly gate errors and decoherence effects. NISQ systems exhibit two-qubit error rates typically ranging from 0.05% to 0.5%, with leading devices approaching 0.1% or better, which accumulate rapidly in the deep circuits required by HHL, leading to fidelity degradation and unreliable outputs. Without full error correction, these error rates necessitate hybrid approaches or mitigation techniques, but even then, achieving meaningful precision remains challenging on devices with 50–100 qubits. Moreover, decoherence times on superconducting qubits now reach 0.3–1.6 ms in advanced devices, yet they still constrain the total circuit depth before coherence is lost, which is particularly problematic for HHL's phase estimation steps that demand prolonged coherent evolution.

State preparation of the input |b\rangle represents another major overhead in HHL implementations. For arbitrary vectors, quantum amplitude encoding requires a circuit depth scaling as O(\sqrt{N}) in the worst case, where N is the matrix dimension, imposing significant resource demands on NISQ hardware lacking efficient quantum random access memory (QRAM). However, for structured data—such as those arising from Fourier transforms—state preparation can leverage the quantum Fourier transform (QFT) to reduce this cost to polylogarithmic in N, enabling more feasible encoding in applications like signal processing. This overhead not only increases gate counts but also amplifies susceptibility to noise, often requiring multiple repetitions to achieve acceptable success probabilities. Recent hybrid HHL variants integrate classical preprocessing to reduce quantum circuit depth, enabling demonstrations on matrices up to 64×64 as of 2025, though full fault tolerance remains required for quantum advantage.

Scalability of HHL is further impeded by the qubit and depth requirements of its core quantum phase estimation (QPE) subroutine. QPE demands O(\log N) ancillary qubits for precision with system size N, yet the associated controlled operations result in circuit depths that exceed available coherence times on current hardware, limiting full-scale demonstrations to small matrices, with larger sizes reached only in hybrid variants and still far from classically intractable scales. For instance, even modest N values lead to controlled-evolution depths incompatible with NISQ constraints, necessitating optimizations like semiclassical QPE to compress ancilla usage and mitigate decoherence.

As of 2025, additional concerns include vulnerability to hybrid attacks, such as side-channel exploits targeting the oracle implementations in HHL's phase estimation, where timing or power variations could leak sensitive circuit details. Resource estimations for fault-tolerant scaling highlight the path forward: achieving quantum advantage for N ≈ 10^6 would require on the order of 10^5 physical qubits under surface code error correction, with runtimes inflated polynomially by the noise tolerance (e.g., \mathrm{poly}(1/\epsilon_{\mathrm{noise}}) overhead) and total execution times reaching 10^6 seconds or more. These projections underscore the need for advances in error-corrected architectures to realize HHL's potential beyond NISQ limitations.
