Hamiltonian simulation is a cornerstone problem in quantum computing: approximating the unitary operator e^{-iHt} that describes the time evolution of a quantum state under a given Hermitian Hamiltonian H for time t, typically within a specified error \epsilon, so that the dynamics of quantum systems can be modeled efficiently on quantum hardware.[1] This task is central to realizing Richard Feynman's vision of using quantum computers to simulate physical processes that are intractable for classical computers due to the exponential scaling of quantum state spaces.
The concept traces its origins to Feynman's 1982 proposal, in which he argued that quantum mechanical systems, particularly those involving many particles, cannot be accurately simulated classically without exponential resource overhead, and advocated programmable quantum simulators to mimic such evolutions. Early algorithmic developments in the 1990s, such as Lloyd's use of Trotter-Suzuki product formulas to decompose e^{-iHt} into short-time evolutions under the summands of H, laid the groundwork for digital quantum simulation, achieving complexities scaling as O((t \|H\|)^{1+o(1)} / \epsilon^{o(1)}) for sparse Hamiltonians. Subsequent advances, including Taylor series methods and quantum walks, improved efficiency, but these were superseded by optimal techniques like quantum signal processing (QSP) and qubitization, which achieve near-linear dependence on t \|H\|_{\max} and polylogarithmic scaling in 1/\epsilon, with query complexity O(t \|H\|_{\max} + \mathrm{polylog}(t \|H\|_{\max}/\epsilon)).[2]
Key methods for Hamiltonian simulation include product formulas, which approximate the evolution via repeated applications of exponentials of Hamiltonian terms and are suitable for near-term noisy intermediate-scale quantum (NISQ) devices despite higher-order errors; linear combination of unitaries (LCU) and select-apply oracles for sparse access models; and advanced frameworks like QSP, which leverage block encodings to implement the evolution with minimal overhead.[1] These algorithms often assume access to oracles that prepare or block-encode H, enabling simulations of local Hamiltonians in quantum chemistry or condensed matter physics.[2]
Applications span quantum chemistry for molecular energy calculations, materials science for modeling Hubbard models, and subroutines in algorithms like the Harrow-Hassidim-Lloyd (HHL) solver for linear systems, promising exponential speedups over classical methods for certain problems.[1] However, challenges persist, including the need for fault-tolerant quantum computers to handle long simulation times, sensitivity to noise in NISQ hardware, and the development of efficient encodings for complex, time-dependent Hamiltonians.[3] In October 2025, Google Quantum AI demonstrated a verifiable quantum advantage using their Willow processor, achieving a 13,000-fold speedup over the world's fastest supercomputer in simulating quantum dynamics for physics tasks.[4] Ongoing research focuses on optimal sample complexity and low-energy subspace simulations to mitigate these issues.[5]
Introduction
Definition and motivation
In quantum mechanics, the dynamics of a closed quantum system are governed by the Hamiltonian H, a Hermitian operator that encodes the total energy of the system. This operator generates the time evolution through the time-dependent Schrödinger equation, i \hbar \frac{d}{dt} |\psi(t)\rangle = H |\psi(t)\rangle, where |\psi(t)\rangle is the state vector at time t, i is the imaginary unit, and \hbar is the reduced Planck constant.[6] The formal solution to this equation is the unitary time-evolution operator U(t) = e^{-i H t / \hbar}, which advances an initial state |\psi(0)\rangle to |\psi(t)\rangle = U(t) |\psi(0)\rangle.[3]
Hamiltonian simulation refers to the problem of implementing, or approximately implementing, this unitary operator U(t) (or equivalently, evolving the state |\psi(t)\rangle) on a quantum computer, given access to a description of the Hamiltonian H and the evolution time t.[3] The goal is to produce an output state that approximates |\psi(t)\rangle within a specified error tolerance, leveraging the quantum computer's ability to naturally represent and manipulate quantum states.[7]
The primary motivation for Hamiltonian simulation stems from the challenge of modeling complex quantum systems classically, where the exponential growth of the Hilbert space dimension renders exact simulations infeasible for large systems, such as molecular dynamics or many-body interactions.[3] This task realizes Richard Feynman's 1982 vision of quantum computers as tools for simulating the inherently quantum behavior of nature, particularly for problems like quantum field theories or condensed matter systems that defy efficient classical computation due to entanglement and superposition.[8] By enabling such simulations, Hamiltonian simulation underpins broader applications in quantum chemistry and materials science, where understanding time evolution is essential.[7]
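As a concrete, purely classical illustration of the object being approximated, the following Python sketch builds U(t) = e^{-iHt} for a small, arbitrarily chosen two-qubit Hamiltonian by direct matrix exponentiation and applies it to an initial state (units with \hbar = 1 are assumed). Quantum simulation algorithms aim to reproduce this action with efficient circuits rather than with the exponentially large matrix used here.
import numpy as np
from scipy.linalg import expm

# Illustrative two-qubit Hamiltonian (an assumption, not from the article): H = Z⊗Z + 0.5 (X⊗I + I⊗X)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)
I2 = np.eye(2, dtype=complex)
H = np.kron(Z, Z) + 0.5 * (np.kron(X, I2) + np.kron(I2, X))

t = 1.3
U = expm(-1j * H * t)                              # U(t) = exp(-iHt); exponential cost classically
psi0 = np.zeros(4, dtype=complex); psi0[0] = 1.0   # initial state |00>
psi_t = U @ psi0                                   # |psi(t)> = U(t) |psi(0)>

print(np.allclose(U.conj().T @ U, np.eye(4)))      # True: U(t) is unitary
print(abs(np.linalg.norm(psi_t) - 1) < 1e-12)      # True: the evolution preserves the norm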
Historical development
The concept of using quantum systems to simulate other quantum systems originated with Richard Feynman's 1982 proposal, where he argued that classical computers struggle with the exponential complexity of simulating quantum physics, suggesting instead that a quantum computer could efficiently mimic the time evolution of physical Hamiltonians.[9] This idea laid the foundational motivation for Hamiltonian simulation as a core application of quantum computing. In 1996, Seth Lloyd formalized the first efficient algorithm for universal quantum simulation, demonstrating that any local quantum system could be simulated on a quantum computer using Trotterization—a product formula approximation of the time-evolution operator—achieving polynomial scaling in the system size and evolution time.[10]
During the 2000s, advancements built on Lloyd's work by refining Trotterization through the higher-order formulas originally developed by Masuo Suzuki, which were adapted to quantum contexts to reduce approximation errors.[11] These higher-order Trotter-Suzuki decompositions enabled more accurate simulations of increasingly complex Hamiltonians, such as those in quantum chemistry, paving the way for applications in molecular dynamics. The decade also saw extensions to sparse Hamiltonians, with algorithms leveraging the structure of physically realistic systems to improve efficiency.
The 2010s marked a shift toward optimal simulation methods, including the development of truncated Taylor series approaches for approximating the evolution operator, which offered advantages in gate complexity for certain input models.[12] Contributions in the late 2010s included the qubitization technique, rooted in Alexei Kitaev's 1995 phase-estimation ideas, which was rigorously developed by Guang Hao Low and Isaac L. Chuang to achieve near-optimal query complexity with additive dependence on evolution time and precision;[13] and quantum singular value transformation (QSVT), introduced by András Gilyén and colleagues, which further optimized Hamiltonian simulation by allowing polynomial transformations of singular values, achieving optimality for a wide class of problems.[14] This method subsumed prior techniques like linear combination of unitaries and sparse access models, enabling simulations with scaling close to the theoretical lower bound. Randomized methods like qDRIFT, proposed by Earl T. Campbell, offered a practical route to shallower circuits on noisy intermediate-scale quantum devices by randomly sampling terms in product formulas.[15]
In the 2020s, extensions to time-dependent Hamiltonians advanced rapidly, with algorithms scaling with the L1-norm of the Hamiltonian path rather than the spectral norm, as shown by Dominic W. Berry, Andrew M. Childs, and Yuan Su.[16] Recent advances (2023–2025) include hybrid real-imaginary time evolution for low-depth Hamiltonian simulation in quantum optimization and efficient algorithms for quantum thermal simulation, enhancing applicability to near-term devices.[17][18] These milestones have solidified Hamiltonian simulation as a benchmark for quantum advantage, with ongoing refinements targeting fault-tolerant implementations.
Problem formulation
Time-independent Hamiltonians
In the time-independent Hamiltonian simulation problem, the goal is to approximate the unitary time-evolution operator U(t) = e^{-i H t} generated by a fixed Hermitian Hamiltonian H acting on an n-qubit system, where the dimension of the Hilbert space is d = 2^n. This operator describes the dynamics of a closed quantum system evolving under the time-independent Schrödinger equation i \frac{d}{dt} |\psi(t)\rangle = H |\psi(t)\rangle, with the solution |\psi(t)\rangle = e^{-i H t} |\psi(0)\rangle. The problem requires implementing a quantum circuit V that approximates U(t) using queries to oracles providing access to H, such that the implemented unitary can be applied to an initial state with high fidelity.[19]
The approximation is typically required to satisfy \| U(t) - V \| \leq \epsilon in the spectral norm (the operator norm induced by the vector 2-norm), where \epsilon > 0 is a user-specified error tolerance; this ensures that for any initial state |\psi\rangle, the output state error satisfies \| U(t) |\psi\rangle - V |\psi\rangle \|_2 \leq \epsilon. For scenarios involving quantum channels or open systems, the diamond norm may be used instead to bound the error of the completely positive trace-preserving map, though the spectral norm suffices for unitary simulation of closed systems. The input Hamiltonian H is accessed via black-box oracles that allow efficient queries to its matrix elements or structure, such as selecting rows and retrieving nonzero entries.[20][19]
Common assumptions include H being Hermitian, which guarantees that U(t) is unitary, and often that H is sparse, with at most polynomially many (in n, equivalently polylogarithmically many in d) nonzero entries per row, or local in the sense of arising from interactions involving a constant number of qubits. To simplify complexity analysis, H is frequently normalized such that its spectral norm satisfies \| H \| \leq 1, though general bounds scale with the actual norm. These assumptions enable efficient simulation while capturing physically relevant systems like lattice models or molecular Hamiltonians.[20][19]
The problem was first formalized in the context of universal quantum simulation, demonstrating that any local Hamiltonian can be efficiently simulated on a quantum computer. Modern formulations emphasize query complexity to the oracles as the primary resource measure, with gate counts following from standard quantum circuit models.[10][20]
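The accuracy criterion can be made concrete with a small numerical sketch (the two-term Hamiltonian below is an arbitrary illustration, not from the article): it computes the spectral-norm error \| U(t) - V \| for a crude candidate V, here a single Lie-Trotter step, and checks that this quantity upper-bounds the error on a particular input state, as the definition requires.
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)
I2 = np.eye(2, dtype=complex)
A = np.kron(Z, Z)
B = 0.7 * (np.kron(X, I2) + np.kron(I2, X))
H, t = A + B, 0.4

U = expm(-1j * H * t)                       # ideal evolution
V = expm(-1j * A * t) @ expm(-1j * B * t)   # crude approximation: one Lie-Trotter step

eps = np.linalg.norm(U - V, ord=2)          # spectral-norm error ||U - V||
rng = np.random.default_rng(0)
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)
state_err = np.linalg.norm((U - V) @ psi)   # error on one particular state
print(eps, state_err, state_err <= eps + 1e-12)   # the state error never exceeds eps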
Time-dependent Hamiltonians
In the context of quantum simulation, time-dependent Hamiltonians arise when the system's energy operator varies with time, such as in driven quantum systems or those subject to external fields. This setting generalizes the time-independent case, which is recovered as the special instance H(s) = H for all s. The primary goal is to approximate the unitary time-evolution operator that propagates the quantum state from initial time 0 to final time t, enabling the study of dynamic phenomena like coherent control or nonequilibrium processes.[21]
The evolution operator for a time-dependent Hamiltonian H(s) is formally defined as the time-ordered exponential U(t) = \mathcal{T} \exp\left( -\frac{i}{\hbar} \int_0^t H(s) \, ds \right), where \mathcal{T} denotes the time-ordering operator that ensures non-commuting operators at different times are multiplied in chronological order. This expression cannot be simplified to a single exponential unless [H(s), H(s')] = 0 for all s, s'. A brief expansion via the Dyson series provides insight into its structure: U(t) = \sum_{n=0}^\infty \frac{(-i/\hbar)^n}{n!} \int_0^t ds_1 \cdots \int_0^t ds_n \, \mathcal{T} \bigl\{ H(s_1) \cdots H(s_n) \bigr\}, where the time-ordering applies to the product inside the integrals. In practice, H(s) is often approximated as piecewise constant over small intervals \Delta s_k, allowing the overall evolution to be decomposed into a time-ordered sequence of time-independent exponentials U(t) \approx \prod_k \exp\left( -i H_k \Delta s_k / \hbar \right), with the number of pieces scaling with the desired accuracy.[21][22]
Simulating such dynamics introduces significant challenges beyond the time-independent setting, particularly when H(s) at different times do not commute, necessitating explicit handling of the time-ordering to avoid errors from naive approximations. This non-commutativity increases the number of queries required to access the Hamiltonian compared to fixed-H cases, as algorithms must resolve interactions across time slices. Additionally, the errors incurred in each interval accumulate over the total evolution time t, demanding finer discretizations or higher-order methods to maintain overall precision \varepsilon. For specific forms of time-dependence, such as periodic H(s + T) = H(s) in driven systems or linear ramps H(s) = H_0 + s V, tailored approaches can mitigate these issues by exploiting symmetry or perturbative structure.[21]
Recent algorithmic progress has addressed these challenges with near-optimal scalings. For instance, the method introduced by Berry, Childs, Su, Wang, and Wiebe achieves a gate complexity of O\left( \left( \int_0^t \|H(s)\|_{\max} \, ds \right) \mathrm{polylog}(t/\varepsilon, \|H\|_\mathrm{max}/\varepsilon) \right) for sparse Hamiltonians under the L^1-norm input model, where the integral captures the cumulative strength of the Hamiltonian over time; this improves upon prior bounds scaling with t \max_s \|H(s)\|_\mathrm{max} and is nearly optimal given known lower bounds. Such advances enable efficient simulation of realistic time-varying systems, like those in quantum chemistry or condensed matter, while controlling error accumulation.
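A minimal classical sketch of the piecewise-constant strategy described above, assuming \hbar = 1 and an illustrative single-qubit linear ramp H(s) = H_0 + sV (an assumption, not an example from the article): each slice is propagated with the Hamiltonian frozen at its midpoint, and refining the discretization drives the product toward the time-ordered exponential.
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)
H0, V = Z, 0.8 * X

def H(s):
    return H0 + s * V                           # illustrative ramp; [H(s), H(s')] != 0 in general

def propagator(t, m):
    U = np.eye(2, dtype=complex)
    ds = t / m
    for k in range(m):
        s_mid = (k + 0.5) * ds                  # freeze H at the midpoint of slice k
        U = expm(-1j * H(s_mid) * ds) @ U       # later slices act on the left (time ordering)
    return U

t = 2.0
coarse, fine = propagator(t, 10), propagator(t, 10000)
print(np.linalg.norm(coarse - fine, ord=2))     # discretization error shrinks as m grows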
Input access models
In Hamiltonian simulation, the input access model specifies how the Hamiltonian operator is provided to the quantum algorithm, enabling implementation of its action without storing the full exponential-sized matrix. Common models balance efficiency with the structure of the Hamiltonian, such as sparsity or locality, to minimize the number of queries required for simulation. These models define oracles or unitaries that allow the algorithm to access relevant entries or embeddings of the Hamiltonian, with query complexity measured by the number of calls to these access routines.[23]
The sparse access model applies to Hamiltonians that are d-sparse, meaning each row or column has at most d non-zero entries, which is suitable for many physical systems like tight-binding models. In this model, two oracles are provided: a location oracle that, given a row index i and an integer j \leq d, returns the column index of the j-th non-zero entry in that row, and an entry oracle that returns the value of a queried matrix element H_{ij}. The query complexity of optimal simulation algorithms under this model scales as O(d t \|H\|_{\max} + \mathrm{polylog}(d t \|H\|_{\max} / \epsilon)), achieving near-optimal dependence on all parameters, including polylogarithmic scaling in 1/\epsilon. This framework was introduced for efficient simulation of sparse Hamiltonians, enabling sublinear dependence on system size for certain classes.[24]
The dense access model assumes direct access to the full matrix entries of the Hamiltonian, such as through an oracle that outputs the (i,j)-th entry upon query. However, specifying a general dense Hamiltonian requires up to 2^n \times 2^n entries for an n-qubit system, rendering this model exponentially costly and impractical for large n, though it serves as a theoretical baseline for non-sparse cases. Query complexity in this model would naively scale with the matrix dimension, and it is rarely used due to storage and access overheads.[23]
A more versatile model is block-encoding, where the Hamiltonian H is embedded into a larger unitary U acting on the system together with ancillary qubits such that the top-left block approximates H normalized by a scaling factor \alpha \geq \|H\|. Formally, an (\alpha, \varepsilon)-block-encoding satisfies \left\| \left( I \otimes \langle 0^m | \right) U \left( I \otimes |0^m \rangle \right) - \frac{H}{\alpha} \right\| \leq \varepsilon, where m is the number of ancillary qubits, I is the identity on the system qubits, and the norm is the operator norm. Here, \alpha controls the normalization, and the query complexity is counted in calls to U, often achieving optimal scaling for simulation when combined with techniques like quantum signal processing. This model unifies sparse and other structured inputs by allowing H to be "prepared" via unitaries, and it was developed to enable Hamiltonian simulation with nearly optimal dependence on all parameters.[13]
For local Hamiltonians prevalent in quantum chemistry and condensed matter, the input is given as a sum H = \sum_k c_k P_k of at most polynomially many (in n) Pauli terms P_k (tensor products of single-qubit Paulis) with coefficients c_k, with access provided by oracles that select and prepare these terms. The query complexity depends on the number of terms and their weights, with first-order Trotter methods requiring on the order of \left( \sum_k |c_k| \, t \right)^2 / \epsilon exponentials; this model exploits the locality of interactions in molecular Hamiltonians.
In the time-dependent setting, the interaction picture transforms the problem by factoring out a known time-independent part, recasting the Hamiltonian as an effective time-dependent perturbation accessible via oracles for the interaction terms. This allows simulation algorithms to treat time-dependence through dilated oracles that incorporate time parameters, with query complexity depending on the L1-norm of the perturbation over time. The approach facilitates efficient handling of driven systems by reducing to time-independent-like access in the rotated frame.[25]
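A small numerical illustration of the block-encoding definition above, using an arbitrary two-qubit Hamiltonian and one ancilla qubit (written here as the leading tensor factor for convenience): for a Hermitian H with \|H\| \leq \alpha, the block matrix below is unitary and its top-left block equals H/\alpha exactly, i.e., it is an (\alpha, 0)-block-encoding. This is a standard construction, sketched only to make the definition concrete.
import numpy as np
from scipy.linalg import sqrtm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)
H = np.kron(Z, Z) + 0.6 * np.kron(X, X)      # illustrative Hamiltonian with ||H|| = 1.6

alpha = 2.0                                  # any alpha >= ||H|| is admissible
A = H / alpha
B = sqrtm(np.eye(4) - A @ A)                 # commutes with A, so U below is unitary
U = np.block([[A, B], [B, -A]])              # one ancilla qubit as the leading tensor factor

print(np.allclose(U.conj().T @ U, np.eye(8)))          # True: U is unitary
print(np.linalg.norm(U[:4, :4] - H / alpha, ord=2))    # ~0: exact block-encoding of H/alpha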
Simulation techniques
Product formula methods
Product formula methods, also known as Trotter-Suzuki decompositions, provide an intuitive approach to simulating the unitary time evolution operator e^{-iHt} for a Hamiltonian H = \sum_{k=1}^r H_k, where each H_k is a sparse or local term whose exponential can be implemented efficiently on a quantum computer. These methods rely on the Baker-Campbell-Hausdorff formula and the Lie-Trotter product theorem to approximate the exponential of a sum by a product of exponentials, repeated over multiple time steps to control the approximation error.
The foundational first-order Trotter formula approximates the evolution over total time t by dividing it into m equal steps of length t/m, yielding e^{-iHt} \approx \left( \prod_{k=1}^r e^{-i H_k (t/m)} \right)^m, with the simulation error scaling as O\left( \left( \sum_k \|H_k\| \, t \right)^2 / m \right). This decomposition was first established in the context of operator semigroups and later adapted for quantum simulation of local Hamiltonians.[26]
Higher-order formulas, developed by Suzuki, improve the error scaling by symmetrizing the product and recursively building more accurate approximants. For instance, the second-order Suzuki-Trotter formula for a two-term Hamiltonian H = H_1 + H_2 is S_2(t) = e^{-i H_1 t/2} e^{-i H_2 t} e^{-i H_1 t/2}, and applying S_2(t/m) for m steps approximates e^{-iHt} with error O(t^3 / m^2); this extends naturally to more terms by ordering the products symmetrically. These formulas achieve an error of O(t^{k+1} / m^k) for a k-th order decomposition, with even k typically used to ensure time-reversibility.[27][26]
The gate complexity for implementing a k-th order product formula to accuracy \epsilon scales as O\left( (\|H\| t)^{1 + 1/k} / \epsilon^{1/k} \right) exponentials of the individual H_k, assuming each such exponential requires a constant number of gates. This arises from choosing m = O\left( (\|H\| t)^{1 + 1/k} / \epsilon^{1/k} \right) to bound the total error.[26]
Suzuki's general recursion constructs higher-order formulas through a fractal decomposition, where the approximant of order 2k is built from lower-order ones via S_{2k}(t) = \left[ S_{2k-2}(u_k t) \right]^2 S_{2k-2} \left( (1 - 4 u_k) t \right) \left[ S_{2k-2}(u_k t) \right]^2, with u_k = \frac{1}{4 - 4^{1/(2k-1)}}, which satisfies S_{2k}(t) = e^{-iHt} + O(t^{2k+1}); the recursion starts from the second-order formula and iterates to higher even orders with coefficients that ensure the desired accuracy. This recursive structure allows efficient generation of high-order decompositions without explicit computation of commutators.[27][26]
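A short numerical check of these scalings, using an arbitrarily chosen two-term, two-qubit Hamiltonian (an assumption for illustration): the first-order error falls roughly as 1/m, while the symmetric second-order error falls roughly as 1/m^2.
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)
I2 = np.eye(2, dtype=complex)
H1 = np.kron(Z, Z)
H2 = 0.6 * (np.kron(X, I2) + np.kron(I2, X))
H, t = H1 + H2, 1.0
U_exact = expm(-1j * H * t)

def trotter1(m):                         # (e^{-i H1 t/m} e^{-i H2 t/m})^m
    step = expm(-1j * H1 * t / m) @ expm(-1j * H2 * t / m)
    return np.linalg.matrix_power(step, m)

def trotter2(m):                         # (S_2(t/m))^m with the symmetric splitting
    step = expm(-1j * H1 * t / (2 * m)) @ expm(-1j * H2 * t / m) @ expm(-1j * H1 * t / (2 * m))
    return np.linalg.matrix_power(step, m)

for m in (4, 8, 16):
    e1 = np.linalg.norm(U_exact - trotter1(m), ord=2)
    e2 = np.linalg.norm(U_exact - trotter2(m), ord=2)
    print(m, e1, e2)                     # e1 shrinks ~1/m, e2 shrinks ~1/m^2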
Truncated Taylor series methods
Truncated Taylor series methods approximate the unitary time evolution operator e^{-iHt} for a Hamiltonian H by truncating its formal power series expansion at order K, enabling efficient quantum simulation, particularly for sparse Hamiltonians accessed via oracles. The core idea is to implement the partial sum \sum_{k=0}^K \frac{(-it)^k}{k!} H^k using techniques like linear combinations of unitaries and amplitude amplification, with the remainder controlled to achieve the desired accuracy. This approach was introduced by Berry et al.,[12] who demonstrated its efficacy for time-independent sparse Hamiltonians by breaking the evolution into short segments and applying oblivious amplitude amplification to boost the success probability.
For time-dependent or structured Hamiltonians, the method is adapted to the interaction picture, where the Hamiltonian is split as H = A + B and transformed such that the evolution becomes U_I(t) = \mathcal{T} \exp\left( -i \int_0^t H_I(s) \, ds \right), with H_I(s) = e^{i A s} B e^{-i A s}. The time-ordered exponential is then approximated using the truncated Dyson series U_I(t) \approx \sum_{k=0}^K (-i)^k \int_0^t dt_1 \int_0^{t_1} dt_2 \cdots \int_0^{t_{k-1}} dt_k \, H_I(t_1) H_I(t_2) \cdots H_I(t_k), where the nested integration limits implement the time-ordering \mathcal{T} explicitly. The truncation error \| R_K \| is bounded by \| R_K \| \leq \frac{(t \| H_I \|_\infty)^{K+1}}{(K+1)!} e^{t \| H_I \|_\infty}, ensuring spectral norm error below \epsilon for K = O\left( \log\left( \frac{\|H\| t}{\epsilon} \right) / \log \log\left( \frac{\|H\| t}{\epsilon} \right) \right). This formulation facilitates simulation of sparse or low-rank perturbations with reduced query complexity compared to direct methods.[25]
The resulting algorithm implements the series via a select oracle for the Hamiltonian terms and a prepare oracle for the coefficients 1/k!, combined with oblivious amplitude amplification to achieve high fidelity. For Hamiltonians with d-sparse access, the query complexity is O\left( d \| H \|_{\max} t \cdot \frac{\log( d \| H \|_{\max} t / \epsilon )}{\log \log( d \| H \|_{\max} t / \epsilon )} \right), nearly optimal up to polylogarithmic factors. Low and Wiebe refined this approach in the interaction picture, yielding low ancillary qubit overhead (O(1) beyond the system size) and query complexity O\left( \alpha_B t \cdot \mathrm{polylog}\left( (\alpha_A + \alpha_B) t / \epsilon \right) \right), where \alpha_A and \alpha_B bound the norms of A and B.[25]
These methods offer key advantages, including a polylogarithmic dependence on the precision 1/\epsilon, which scales favorably compared to the polynomial 1/\epsilon^{1/k} dependence of k-th order product formulas, and applicability to time-dependent Hamiltonians by approximating the dynamics with piecewise-constant segments, each simulated independently with the truncated series. For instance, in simulating Hubbard models, the interaction picture reduces gate complexity from O(N^{11/3} t) to O(N^2 t) for N-site systems with an appropriate splitting.[25]
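The truncation step itself is easy to see numerically. The sketch below (time-independent case, with an arbitrary two-qubit Hamiltonian chosen for illustration) forms the partial sums \sum_{k \leq K} (-iHt)^k / k! and shows the rapid, factorial-type decay of the remainder; in the quantum algorithm this partial sum is realized coherently with LCU oracles, and its slight non-unitarity is corrected with oblivious amplitude amplification.
import numpy as np
from math import factorial
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)
H = np.kron(Z, Z) + 0.5 * np.kron(X, X)
t = 0.5
U_exact = expm(-1j * H * t)

def taylor(K):
    S = np.zeros((4, 4), dtype=complex)
    term = np.eye(4, dtype=complex)              # holds (-iHt)^k, starting at k = 0
    for k in range(K + 1):
        S += term / factorial(k)
        term = term @ (-1j * H * t)
    return S

for K in (2, 4, 6, 8):
    err = np.linalg.norm(U_exact - taylor(K), ord=2)
    print(K, err)                                # factorial-type decay of the truncation error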
Qubitization and LCU
Linear combination of unitaries (LCU) is a technique for implementing non-unitary operators, such as Hamiltonians, by decomposing them into a weighted sum of unitary operators. A Hamiltonian H is expressed as H = \sum_{j=1}^d \alpha_j U_j, where each U_j is a unitary operator and the coefficients \alpha_j \geq 0 satisfy \alpha = \sum_{j=1}^d \alpha_j \geq \|H\|, the spectral norm of H.[28] This decomposition enables the construction of block-encodings, which embed a scaled version of H into a larger unitary matrix, as described under the input access models above.[28]
The LCU decomposition is implemented using two oracles: the PREPARE oracle, which prepares a normalized superposition state |\psi\rangle = \sum_{j=1}^d \sqrt{\alpha_j / \alpha} |j\rangle on an ancillary register, and the SELECT oracle, which applies the corresponding unitary conditioned on the ancillary index, acting as \sum_{j=1}^d |j\rangle\langle j| \otimes U_j on the combined ancillary and system registers.[28] These oracles allow the effective application of H / \alpha through post-selection on the ancillary qubits, though only with a success probability of order (\|H|\phi\rangle\|/\alpha)^2 for an input system state |\phi\rangle. The technique originates from early quantum algorithm designs but was formalized for Hamiltonian simulation in subsequent works.[29]
Qubitization builds on LCU by embedding the block-encoded Hamiltonian into two-dimensional SU(2) subspaces of an enlarged Hilbert space, facilitating efficient iteration to simulate time evolution. For an LCU decomposition into self-inverse terms (such as Pauli strings), the qubitization walk operator can be taken as W = \bigl( (2|G\rangle\langle G| - I) \otimes I \bigr) \, \mathrm{SELECT}, where |G\rangle = \mathrm{PREPARE}\,|0\rangle and the reflection acts on the ancillary register.[28] This operator W has eigenvalues e^{\pm i \theta_\lambda} with \cos \theta_\lambda = \lambda / \alpha, where \lambda ranges over the eigenvalues of H, so the walk's eigenphases encode the spectrum of the block-encoded H / \alpha through an arccosine. Applying W repeatedly, combined with quantum signal processing, phase estimation, or oblivious amplitude amplification, approximates the unitary time evolution e^{-i H t} by projecting onto the desired subspace.[28]
The query complexity of qubitization-based simulation is O(\alpha t + \log(1/\epsilon)) to achieve error \epsilon, where t is the simulation time, offering near-optimal scaling for many sparse or structured Hamiltonians.[28] This approach traces its origins to A. Yu. Kitaev's 1995 proposal for phase estimation in quantum simulation, which introduced the core idea of iterating a unitary to encode an operator's spectrum in phases.[30] It was modernized and optimized for general Hamiltonian simulation by Low and Chuang in 2019, subsuming prior methods like sparse Hamiltonian simulation and LCU-based approaches.[28]
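The following self-contained sketch (with three illustrative Pauli terms and coefficients that are assumptions, not values from the article) builds PREPARE and SELECT as explicit matrices, checks that PREPARE^\dagger \cdot SELECT \cdot PREPARE block-encodes H/\alpha, and then forms the qubitization walk W; its spectrum indeed contains the phase e^{i \arccos(\lambda/\alpha)} for every eigenvalue \lambda of H.
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)
I2 = np.eye(2, dtype=complex)

coeffs = np.array([0.5, 0.4, 0.3])                        # alpha_j >= 0 (illustrative)
terms = [np.kron(Z, Z), np.kron(X, I2), np.kron(I2, X)]   # unitary (Pauli) terms U_j
alpha = coeffs.sum()
H = sum(c * P for c, P in zip(coeffs, terms))

# PREPARE: any ancilla unitary whose first column is sqrt(coeffs/alpha), zero-padded
col = np.zeros(4); col[:3] = np.sqrt(coeffs / alpha)
PREP, _ = np.linalg.qr(np.column_stack([col, np.eye(4)[:, 1:]]))
if PREP[0, 0] < 0:
    PREP[:, 0] *= -1                                      # fix the sign so PREPARE|0> = col

# SELECT = sum_j |j><j| (ancilla) ⊗ U_j (system); index 3 is padded with the identity
blocks = terms + [np.eye(4, dtype=complex)]
SEL = np.zeros((16, 16), dtype=complex)
for j, Uj in enumerate(blocks):
    SEL[4 * j:4 * (j + 1), 4 * j:4 * (j + 1)] = Uj

# LCU block-encoding check: (<0| ⊗ I) PREP† SEL PREP (|0> ⊗ I) = H / alpha
V = np.kron(PREP.conj().T, np.eye(4)) @ SEL @ np.kron(PREP, np.eye(4))
print(np.allclose(V[:4, :4], H / alpha))                  # True

# Qubitization walk: reflect about |G> = PREPARE|0> on the ancilla, then apply SELECT
g = PREP[:, 0]
R = np.kron(2 * np.outer(g, g.conj()) - np.eye(4), np.eye(4))
W = R @ SEL                                               # SEL is self-inverse (Pauli blocks)
walk_eigs = np.linalg.eigvals(W)
for lam in np.linalg.eigvalsh(H):
    mu = lam / alpha + 1j * np.sqrt(1 - (lam / alpha) ** 2)   # e^{i arccos(lam/alpha)}
    print(np.min(np.abs(walk_eigs - mu)))                     # ~0: phase present in spec(W)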
Quantum singular value transformation
Quantum singular value transformation (QSVT) provides a general and optimal framework for Hamiltonian simulation by enabling the application of polynomial transformations to the singular values of a block-encoded Hamiltonian. Given a block-encoding of the Hamiltonian H, which embeds the normalized operator \alpha^{-1} H into a unitary matrix such that the singular values \sigma_i of the encoded block satisfy 0 \leq \sigma_i \leq 1, QSVT constructs a new unitary that transforms these singular values according to a degree-d polynomial f, yielding \sum_i f(\sigma_i) |u_i\rangle\langle v_i|, where |u_i\rangle and |v_i\rangle are the corresponding singular vectors.[14] This transformation is implemented using a sequence of phase factors \theta_k derived from quantum signal processing (QSP), a subroutine that applies rotations based on the polynomial coefficients to selectively manipulate the singular values.[14]
For Hamiltonian simulation, QSVT approximates the time evolution operator e^{-iHt} by selecting a polynomial f(x) that approximates e^{-i x t} on the interval [-1, 1], assuming the block-encoding normalizes H to have spectrum in this range. The required polynomial degree is O(t + \log(1/\epsilon)), where t is the (normalized) evolution time and \epsilon is the error tolerance, leading to a query complexity of O(t + \log(1/\epsilon)) to the block-encoding oracle, up to logarithmic factors in the system dimension.[14] This achieves near-optimality for the simulation problem, matching known lower bounds up to polylogarithmic overhead. The protocol proceeds by interleaving applications of the block-encoding unitary with projector-controlled phase rotations parameterized by the \theta_k, effectively implementing the desired polynomial on the singular values without decomposing the operator into a linear combination of unitaries.[14]
Extensions of QSVT to time-dependent Hamiltonians H(t) leverage multiple block-encodings of time slices or use integral approximations, scaling with the L_1-norm \int_0^t \|H(\tau)\| d\tau rather than the maximum norm, which can offer improvements for slowly varying or sparse-in-time dynamics. In this setting, QSVT constructs higher-order integrators or directly approximates the time-ordered exponential via polynomial fits over discretized time intervals, maintaining the optimal scaling when block-encodings are available for each H(t).
The core QSVT circuit for a degree-d polynomial (shown here for odd d; the even case is analogous) alternates the block-encoding U of the operator A with projector-controlled phase rotations, U_\Phi = e^{i\theta_1 (2\Pi - I)}\, U \prod_{k=1}^{(d-1)/2} \left[ e^{i\theta_{2k} (2\Pi - I)}\, U^\dagger \, e^{i\theta_{2k+1} (2\Pi - I)}\, U \right], with \Pi = |0^m\rangle\langle 0^m| \otimes I and \Phi = (\theta_1, \dots, \theta_d) the QSP phase sequence; the resulting U_\Phi block-encodes f(A), i.e., the projected block (\langle 0^m| \otimes I)\, U_\Phi \,(|0^m\rangle \otimes I) applies f to the singular values of A.[14]
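The single-qubit signal-processing primitive underlying QSVT can be sketched in a few lines. In the toy example below (a standard illustration, not the full QSVT construction), the signal operator W(x) carries the value x in its top-left entry, and d repetitions with all interleaving phases set to zero reproduce the degree-d Chebyshev polynomial T_d(x) there; choosing nontrivial phase sequences is what lets QSVT realize the polynomial approximations of e^{-ixt} used for simulation.
import numpy as np

def signal(x):
    s = np.sqrt(1 - x ** 2)
    return np.array([[x, 1j * s], [1j * s, x]])     # W(x): a block-encoding of the scalar x

d = 5
for x in (-0.7, 0.1, 0.9):
    Wd = np.linalg.matrix_power(signal(x), d)       # d repetitions, all phases zero
    print(Wd[0, 0].real, np.cos(d * np.arccos(x)))  # both equal the Chebyshev value T_d(x)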
Quantum walks and other specialized methods
Quantum walks provide a specialized approach to simulating Hamiltonians that correspond to graph structures, particularly those arising from adjacency matrices or graph Laplacians. In continuous-time quantum walks, the evolution is governed by the Schrödinger equation with the Hamiltonian H = \gamma L, where L is the graph Laplacian and \gamma > 0 is a scaling factor. The Laplacian matrix has off-diagonal elements L_{ij} = -1 if vertices i and j are connected by an edge, and diagonal elements L_{ii} = d_i, the degree of vertex i; thus, the Hamiltonian elements are H_{ij} = -\gamma if there is an edge between i and j, H_{ii} = \gamma d_i, and zero otherwise. This setup naturally simulates diffusion-like processes on graphs, with the walk operator e^{-iHt} implementing the continuous-time evolution.[31]
Discrete-time quantum walks extend this capability by constructing unitary operators that approximate the continuous-time dynamics, enabling efficient simulation on quantum circuits. Methods based on Szegedy's framework use two copies of the state space and reflection operators to create a walk that embeds the Hamiltonian's singular values, allowing simulation of the evolution e^{-iHt} with query complexity scaling as O(\|H\| t + \log(1/\epsilon)/\log\log(1/\epsilon)) for error \epsilon. Childs' approach similarly leverages discrete walks on expanded graphs to simulate sparse Hamiltonians, providing a correspondence between discrete and continuous models that preserves key properties like spectral gaps.[31] These techniques offer quadratic speedups over classical methods for tasks such as spatial search on graphs, where quantum walks achieve O(\sqrt{N}) time to find a marked vertex in an N-vertex graph, compared to O(N) classically.
The qDRIFT protocol introduces randomization into product formulas, enhancing efficiency for Hamiltonians expressed as sums H = \sum_k H_k. By sampling terms with probabilities proportional to their norms \|H_k\| and averaging the resulting random circuits as a channel, qDRIFT approximates the ideal evolution e^{-iHt} to bounded error in the diamond norm, with gate complexity O\left( \left( \sum_k \|H_k\| t \right)^2 / \epsilon \right) independent of the number of terms. This randomized compilation reduces sensitivity to term ordering and commutators, making it suitable for noisy intermediate-scale quantum devices, where deterministic Trotter errors accumulate rapidly.[32]
Other specialized methods include simulation in the interaction picture, which transforms the Hamiltonian into a form where dominant terms evolve exactly while perturbations are treated approximately, yielding high-precision results for structured systems like those in quantum chemistry.[25] Recent extensions, such as parallel quantum walks, adapt these techniques for near-term hardware by distributing walk steps across multiple processors, achieving doubly logarithmic depth for simulating sparse, uniform Hamiltonians with reduced circuit overhead.[33]
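A classical sketch of the qDRIFT channel under illustrative assumptions (three arbitrary Pauli-type terms and modest step and sample counts): each step applies exp(-i (H_k/\|H_k\|) \lambda t / N) for a term k drawn with probability \|H_k\|/\lambda, where \lambda = \sum_j \|H_j\|, and averaging the sampled circuits as a channel approaches the ideal evolution.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(7)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)
I2 = np.eye(2, dtype=complex)
terms = [0.9 * np.kron(Z, Z), 0.5 * np.kron(X, I2), 0.2 * np.kron(I2, X)]
norms = np.array([np.linalg.norm(Hk, ord=2) for Hk in terms])
lam = norms.sum()                                  # lambda = sum of the term norms
probs = norms / lam

H, t, N = sum(terms), 1.0, 200
U_exact = expm(-1j * H * t)
rho0 = np.zeros((4, 4), dtype=complex); rho0[0, 0] = 1.0   # initial state |00><00|
rho_exact = U_exact @ rho0 @ U_exact.conj().T

samples, rho_avg = 200, np.zeros((4, 4), dtype=complex)
for _ in range(samples):
    U = np.eye(4, dtype=complex)
    for _ in range(N):
        k = rng.choice(len(terms), p=probs)        # sample a term with prob ||H_k|| / lambda
        U = expm(-1j * (terms[k] / norms[k]) * (lam * t / N)) @ U
    rho_avg += U @ rho0 @ U.conj().T / samples     # average the random circuits as a channel

print(np.linalg.norm(rho_avg - rho_exact, ord='nuc'))   # small trace-norm error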
Complexity bounds
Query and gate complexity
In the sparse oracle model, the optimal query complexity for simulating the time evolution under a Hamiltonian H for time t to precision \epsilon is O(t \|H\| + \log(1/\epsilon)), where \|H\| is the spectral norm of H. This scaling is achieved using quantum signal processing or qubitization techniques that construct efficient block-encodings of H and apply near-optimal polynomial approximations to the complex exponential. The corresponding total gate complexity for an n-qubit system is O(n (t + \log(1/\epsilon))), assuming a normalized Hamiltonian with unit spectral norm; in general, it scales linearly with \|H\| t up to polylogarithmic factors in the system size and precision.[28]
Different simulation algorithms exhibit varying resource requirements. Product formula methods based on Trotterization achieve a query complexity of O((\|H\|_1 t)^{1+o(1)} / \epsilon^{o(1)}), where the o(1) terms arise from higher-order approximations that reduce the exponent at the cost of more local gates per step. In contrast, methods based on quantum singular value transformation (QSVT) attain near-optimal query complexity of O(t + \log(1/\epsilon)), matching the lower bound up to constant factors when H is provided via an efficient block-encoding.[14]
For time-dependent Hamiltonians H(t), the query complexity is near-optimal, scaling as O(t \, \mathrm{polylog}(t, 1/\epsilon)) with additional polylogarithmic factors to account for temporal variations, often through interaction-picture transformations or post-Trotter corrections. Recent analyses, such as Mizuta and Fujii (2023), have optimized protocols for specific cases like time-periodic systems, yielding tighter polylogarithmic dependencies on system parameters. In fault-tolerant quantum computing, the overall gate count incurs further overhead from the magic state distillation used to implement non-Clifford gates, scaling as \mathrm{polylog}(1/\epsilon) per logical gate. Recent 2024-2025 developments, including algorithms for low-energy states, achieve optimal O(t + \mathrm{polylog}(1/\epsilon)) scaling within restricted subspaces.[34][35]
Lower bounds and optimality
Lower bounds on the complexity of Hamiltonian simulation provide fundamental limits on the resources required for accurate approximation of time evolution under a given Hamiltonian. In the standard sparse access model, where the Hamiltonian is accessed via an oracle that returns entries of its matrix representation, the query complexity is at least \Omega(\|H\| t + \log(1/\epsilon)/\log\log(1/\epsilon)), establishing that any algorithm must scale at least linearly in \|H\| t and nearly logarithmically in the inverse error tolerance. The linear dependence on evolution time follows from no-fast-forwarding arguments, while the precision dependence was derived using the polynomial method for quantum query complexity;[36] together these bounds confirm the near-optimality of existing algorithms up to sub-logarithmic factors.[37]
Quantum singular value transformation (QSVT) and qubitization methods achieve query complexity \tilde{O}(\|H\| t + \log(1/\epsilon)) for s-sparse Hamiltonians, saturating the known lower bounds up to polylogarithmic factors in the system dimension and other parameters. These approaches are optimal in the query model for s-sparse operators, as they match the \Omega(s t \|H\| + \log(1/\epsilon)/\log\log(1/\epsilon)) bound derived from no-fast-forwarding arguments, which prohibit sublinear scaling in the evolution time.[38] For non-sparse Hamiltonians, where full matrix access is required without sparsity assumptions, the gate complexity lower bound is \Omega(\|H\| t), again a consequence of the no-fast-forwarding theorem.[38]
Proofs of these lower bounds typically employ the polynomial method, which expresses the output of any query-limited algorithm as a bounded-degree multivariate polynomial in the oracle entries, or hybrid arguments that distinguish between different Hamiltonian instances by interleaving queries with computational steps.[36] For time-dependent Hamiltonians H(t), the lower bounds follow similar scalings to the time-independent case, with polylogarithmic overheads for temporal structure.
Despite these advances, gaps remain in the precise dependence on logarithmic factors, particularly whether the \log(1/\epsilon)/\log\log(1/\epsilon) term is universally tight across all input models. Additionally, lower bounds for simulation in the noisy intermediate-scale quantum (NISQ) regime, accounting for gate noise and limited coherence, remain an open area, with preliminary results suggesting exponential overheads in error rates for fault-tolerant thresholds.[37]
Applications
Quantum chemistry and materials science
Hamiltonian simulation plays a pivotal role in quantum chemistry by enabling the modeling of electronic structure problems through second-quantized representations of molecular Hamiltonians. These Hamiltonians typically take the form H = \sum_{pq} h_{pq} a_p^\dagger a_q + \frac{1}{2} \sum_{pqrs} v_{pqrs} a_p^\dagger a_q^\dagger a_r a_s, where a_p^\dagger (a_p) creates (annihilates) a fermion in orbital p, h_{pq} encodes one-electron integrals, and v_{pqrs} captures two-electron interactions. Such simulations facilitate the computation of molecular energy levels and properties beyond classical capabilities, particularly for strongly correlated systems where traditional methods like density functional theory falter. For instance, low-depth algorithms have been developed to simulate these Hamiltonians efficiently on near-term quantum devices, achieving subexponential scaling in system size for plane-wave basis sets.[39]
In materials science, Hamiltonian simulation extends to condensed matter models like the Fermi-Hubbard model, which approximates electron interactions in lattice systems and serves as a benchmark for understanding high-temperature superconductors. The model Hamiltonian is H = -t \sum_{\langle i,j \rangle, \sigma} (c_{i\sigma}^\dagger c_{j\sigma} + \text{h.c.}) + U \sum_i n_{i\uparrow} n_{i\downarrow}, where t is the hopping parameter, U the on-site repulsion, and c_{i\sigma}^\dagger (c_{i\sigma}) creates (annihilates) an electron at site i with spin \sigma. Experimental realizations using Trotterization on trapped-ion quantum simulators have demonstrated superconducting pairing correlations in the repulsive regime, providing insights into cuprate mechanisms. These 2020s experiments, including those on Quantinuum platforms such as the H2 and Helios systems, have simulated up to dozens of sites with high fidelities, highlighting the approach's viability for probing exotic phases. A November 2025 experiment on the Helios system measured non-zero d-wave and s-wave superconducting pairing correlations in half-filled and doped Hubbard models, advancing understanding of light-induced superconductivity.[40][41]
Advanced techniques like quantum singular value transformation (QSVT) enhance precision in simulating reaction dynamics, achieving near-optimal query complexity of O(\alpha t + \log(1/\epsilon)) for evolution time t and error \epsilon, where \alpha is the block-encoding normalization, so the cost depends only logarithmically on 1/\epsilon. This scaling enables accurate modeling of time-dependent electronic processes, such as non-adiabatic transitions in photochemical reactions, outperforming Trotter methods for long-time dynamics. Recent benchmarks, including the 2025 AppQSim suite, evaluate these simulations on molecular Hamiltonians for tasks like ground-state preparation in chemistry, reporting distinguishability costs that scale favorably with qubit count for systems up to 100 orbitals, underscoring practical impacts on drug discovery and catalyst design. While often integrated with variational quantum eigensolvers for initial states, the emphasis remains on dynamical evolution to capture real-time correlation effects.[42][43][44]
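To make the second-quantized and Hubbard forms above concrete, the sketch below builds the two-site Fermi-Hubbard Hamiltonian as an explicit 16 x 16 matrix via the Jordan-Wigner mapping (the mode ordering and the parameter values t = 1, U = 4 are assumptions for illustration) and checks the ground-state energy of the half-filled sector against the known closed form (U - \sqrt{U^2 + 16 t^2})/2; a quantum simulator would exponentiate this H rather than diagonalize it.
import numpy as np
from functools import reduce

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)
sm = (X + 1j * Y) / 2                     # |0><1|: annihilates an occupied mode

def a(j, n):
    # Jordan-Wigner annihilation operator for mode j out of n modes
    return reduce(np.kron, [Z] * j + [sm] + [I2] * (n - j - 1))

n = 4                                     # modes: 0 = site0 up, 1 = site0 down, 2 = site1 up, 3 = site1 down
ann = [a(j, n) for j in range(n)]
cre = [op.conj().T for op in ann]
t_hop, U_int = 1.0, 4.0                   # illustrative parameters

H = np.zeros((2 ** n, 2 ** n), dtype=complex)
for i, j in [(0, 2), (1, 3)]:             # same-spin hopping between the two sites
    H += -t_hop * (cre[i] @ ann[j] + cre[j] @ ann[i])
for up, dn in [(0, 1), (2, 3)]:           # on-site repulsion U * n_up * n_down at each site
    H += U_int * (cre[up] @ ann[up]) @ (cre[dn] @ ann[dn])

print(np.allclose(H, H.conj().T))         # True: H is Hermitian

# Restrict to the half-filled, Sz = 0 sector (one up and one down electron);
# bits 3, 2, 1, 0 of the basis index store the occupations of modes 0, 1, 2, 3.
idx = [b for b in range(16)
       if ((b >> 3) & 1) + ((b >> 1) & 1) == 1
       and ((b >> 2) & 1) + ((b >> 0) & 1) == 1]
e0 = np.linalg.eigvalsh(H[np.ix_(idx, idx)]).min()
print(e0, (U_int - np.sqrt(U_int ** 2 + 16 * t_hop ** 2)) / 2)   # both ~ -0.828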
High-energy physics and beyond
In high-energy physics, Hamiltonian simulation techniques enable the modeling of complex quantum chromodynamics (QCD) processes, particularly through lattice QCD formulations that capture quark-gluon interactions. These simulations approximate the strong nuclear force dynamics by discretizing space-time into a lattice, where the Hamiltonian includes quark kinetic terms and gluon-mediated interactions, allowing for the study of phenomena like confinement and hadron formation.[45] Recent advances have focused on improving the efficiency of these simulations using optimized Hamiltonians derived from methods like the in-medium similarity renormalization group (IMSRG), which reduce computational overhead while preserving key physical features such as real-time evolution of baryon states in one-dimensional QCD with massless quarks.[46]
A significant challenge in applying Hamiltonian simulation to high-energy systems arises from fermion-to-qubit mappings, such as the Jordan-Wigner transformation (JWT), which introduces non-local string operators that scale poorly with system size and complicate gate decompositions for multi-qubit architectures. In 2024, research developed local fermion-to-qudit mappings that mitigate this non-locality, enabling more scalable simulations of fermionic lattice models without excessive qubit overhead. These mappings preserve locality in one-dimensional systems and extend to higher-dimensional lattices.[47]
Beyond closed-system simulations, Hamiltonian methods extend to open quantum systems in high-energy contexts by incorporating dissipative effects through Lindblad master equations, which describe decoherence and environmental interactions via dilated Hamiltonians. This approach embeds the non-unitary Lindblad evolution into unitary dynamics generated by a dilated Hamiltonian on an extended Hilbert space, allowing standard simulation techniques to model processes like particle decay or thermalization in colliders.[48] For instance, dilated frameworks have been used to simulate decoherence in relativistic quantum field theories, capturing how environmental noise affects entanglement in simulated gluon fields.[49]
Emerging applications leverage Hamiltonian encoding in quantum machine learning, where data is embedded into quantum states via problem-specific Hamiltonians to compute kernel matrices for classification tasks, enhancing expressivity over classical methods. This encoding transforms input features into time-evolution operators, enabling kernel estimation through Hamiltonian simulation that reveals non-linear patterns in high-dimensional data.[50] In reacting flows, a 2024 study transformed scalar transport equations into Hamiltonian forms, allowing quantum simulation of combustion dynamics and species evolution with logarithmic scaling advantages over classical solvers.[51] Recent 2025 benchmarks from the AppQSim suite demonstrate these techniques' efficacy in non-physical domains, such as optimization problems mapped to Ising Hamiltonians, where simulations achieve near-optimal ground-state approximations for large-scale combinatorial tasks on noisy intermediate-scale quantum devices.[44]