
Quantum tomography

Quantum tomography is a fundamental technique in quantum information science used to reconstruct an unknown quantum state or process from a series of measurements performed on an ensemble of identical copies of the system. By employing projective measurements in various bases, it enables the complete characterization of the quantum state's density matrix or equivalent representations, such as the Wigner function, overcoming limitations imposed by the no-cloning theorem and the uncertainty principle. This process is essential for verifying quantum properties like entanglement, coherence, and nonclassicality, providing a full statistical description of the system that single measurements cannot achieve.

The method originated from early theoretical proposals in quantum mechanics, such as Fano's 1957 concept of a "quorum" of observables for state reconstruction and Wigner's phase-space distributions, but gained experimental traction in the 1990s through homodyne detection techniques demonstrated on coherent and squeezed light states. Pioneering work by groups such as Raymer's established optical implementations, expanding later to atomic, solid-state, and multi-qubit systems. Today, quantum tomography encompasses not only state tomography (QST) but also process tomography (QPT), gate set tomography (GST), and extensions for non-Markovian dynamics, forming a core framework for quantum characterization, verification, and validation (QCVV).

Key reconstruction approaches include linear inversion, which directly solves for the density matrix from measurement statistics but amplifies noise, and maximum-likelihood estimation (MLE), which enforces physical constraints like positivity for more robust results at the cost of computational intensity. Applications span quantum computing for device calibration and error mitigation, quantum communication for key distribution protocols, and fundamental tests of quantum mechanics, with recent advances addressing scalability through compressed sensing and shadow tomography to mitigate the exponential resource demands of high-dimensional systems. Despite this progress, challenges remain in handling noise, achieving sample-optimal efficiency (requiring O(d^2 / \epsilon^2) measurements for dimension d and precision \epsilon), and extending to open quantum systems.

Introduction

Definition and scope

Quantum tomography is the process of reconstructing an unknown quantum state, measurement device, or dynamical evolution from a series of measurements performed on multiple identically prepared copies of the quantum system. This reconstruction relies on the statistical analysis of measurement outcomes to infer the full description of the quantum object in question, typically represented in the density operator formalism for states or the quantum channel (completely positive map) formalism for processes. The scope of quantum tomography encompasses several variants, distinguished by the object being reconstructed. Quantum state tomography focuses on determining the density matrix of an unknown state using a complete set of measurements. In contrast, quantum process tomography characterizes an unknown quantum channel or evolution by preparing known input states, applying the process, and performing state tomography on the outputs. Quantum measurement tomography, meanwhile, reconstructs the positive operator-valued measure (POVM) elements of an unknown detection apparatus by using known probe states and analyzing the resulting statistics. Full tomography aims to completely specify the object, whereas partial tomography targets only specific properties, such as expectation values of a subset of observables, to reduce experimental overhead. Central to quantum tomography are overcomplete measurement bases, often implemented via POVMs, which provide more outcomes than the minimal number required to span the space of Hermitian operators on the system's Hilbert space, ensuring informational completeness and robustness against noise. Since individual quantum measurements are destructive, tomography necessitates an ensemble of identical preparations to accumulate sufficient statistics for reliable reconstruction. Early proposals for quantum tomography emerged in the late 1980s, initially applied to optical fields using homodyne detection to reconstruct quadrature distributions and Wigner functions.

Historical overview

The concept of quantum tomography emerged in the context of quantum optics during the late 1980s, with foundational theoretical proposals for reconstructing quantum states using homodyne detection techniques. In 1989, Klaus Vogel and Hannes Risken introduced a method to determine the Wigner function of a quantum state via balanced homodyne detection and the inverse Radon transform, marking an early milestone in optical quantum state reconstruction. This approach was built upon in 1993 by Ulf Leonhardt and Helmut Paul, who developed a pattern-function method for direct tomographic reconstruction of the density matrix from homodyne data, addressing practical measurement inefficiencies in optical systems. Their work formalized quantum tomography as a tool for characterizing non-classical light states, such as squeezed vacuum. Experimental realizations followed swiftly, with the first demonstration of optical homodyne tomography achieved in 1993 by D. T. Smithey, M. Beck, A. Faridani, and M. G. Raymer using balanced homodyne detection on coherent and squeezed light fields, confirming the theoretical predictions and enabling full characterization of single-mode optical states. The technique rapidly extended to other systems; by 1997, N. A. Gershenfeld and I. L. Chuang performed the first state tomography in liquid-state nuclear magnetic resonance (NMR) systems, reconstructing two-qubit density matrices to verify operations in ensemble quantum computing. This NMR implementation highlighted tomography's role in benchmarking early quantum processors. Theoretical advancements for discrete systems culminated in 2001 with D. F. V. James, P. G. Kwiat, W. J. Munro, and A. G. White providing a comprehensive framework for measuring the states of qubits via projective measurements in multiple bases, applicable to photonic qubits and emphasizing error analysis in finite-sample reconstructions. Concurrently, researchers like Matteo G. A. Paris and Jaroslav Řeháček advanced statistical methods, introducing maximum-likelihood estimation and Bayesian approaches in the early 2000s to handle noise and incomplete data, as detailed in their edited volume on quantum state estimation. These contributions shifted focus toward robust, informationally complete protocols. By the 2010s, the exponential scaling of full tomography with system size (requiring resources growing as d^2 for d-dimensional systems) prompted a transition to efficient variants, such as compressed sensing techniques proposed by David Gross, Yi-Kai Liu, Steven T. Flammia, Stephen Becker, and Jens Eisert in 2010, which reduced measurement requirements polynomially for low-rank states. This evolution addressed scalability challenges in multi-qubit systems, paving the way for practical applications in larger quantum devices while preserving reconstruction accuracy.

Mathematical Foundations

Quantum states and measurements

In quantum mechanics, the state of a physical system is described by a vector in a complex Hilbert space, known as a pure state, typically denoted as |\psi\rangle. Pure states fully characterize the system with complete knowledge, satisfying normalization \langle \psi | \psi \rangle = 1. In contrast, mixed states arise when the system is in an ensemble of pure states with classical probabilities, represented by the density operator \rho = \sum_i p_i |\psi_i\rangle \langle \psi_i |, where p_i \geq 0 are probabilities summing to 1 and the |\psi_i\rangle need not be orthogonal. The density operator is Hermitian (\rho^\dagger = \rho), positive semi-definite, and trace-normalized (\mathrm{Tr}(\rho) = 1), providing a general framework for both pure (\rho = |\psi\rangle \langle \psi |) and mixed states. Quantum measurements are formalized through observables, represented by self-adjoint operators, and in the density operator formalism outcome statistics are predicted via the Born rule. The probability of obtaining outcome m for a measurement described by a positive operator-valued measure (POVM) \{E_m\} is given by p_m = \mathrm{Tr}(\rho E_m), where each E_m \geq 0 and \sum_m E_m = I, with I the identity operator. Projective measurements are a special case of POVMs where the E_m = P_m are orthogonal projectors (P_m^\dagger = P_m, P_m^2 = P_m, P_m P_n = 0 for m \neq n, and \sum_m P_m = I), corresponding to von Neumann measurements that collapse the state onto eigenspaces. POVMs generalize this to allow non-orthogonal and inefficient detections, essential for realistic quantum experiments. The Born rule, originally proposed by Born in the context of scattering probabilities, underpins all quantum prediction schemes by linking state descriptions to observable statistics. The underlying Hilbert space can be finite-dimensional, as in systems of qudits where the dimension d < \infty (e.g., qubits with d=2), or infinite-dimensional for continuous-variable systems like harmonic oscillators. In finite dimensions, states are expanded in a discrete basis, facilitating computational tomography. Infinite-dimensional cases, such as those involving coherent states |\alpha\rangle = e^{-|\alpha|^2/2} \sum_{n=0}^\infty \frac{\alpha^n}{\sqrt{n!}} |n\rangle, describe fields in quantum optics and require careful truncation or continuous representations for practical reconstruction. For quantum tomography, measurements must be informationally complete, meaning the set of POVM elements \{E_k\} spans the full space of Hermitian operators on the Hilbert space, allowing unique reconstruction of \rho from the outcome probabilities p_k = \mathrm{Tr}(\rho E_k). This requires determining at least d^2 - 1 independent parameters for a d-dimensional system, often achieved via overcomplete bases like mutually unbiased bases in finite dimensions or phase-randomized quadratures in continuous variables.
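To make these ingredients concrete, the following minimal sketch (Python with NumPy; an illustration, not drawn from the source) builds the tetrahedral qubit POVM discussed later in this article, evaluates Born-rule probabilities p_m = Tr(\rho E_m) for a test mixed state, and checks informational completeness by verifying that the vectorized POVM elements span the d^2 = 4 dimensional operator space.

```python
import numpy as np

# Pauli matrices for a single qubit
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Tetrahedral POVM: E_m = (I + n_m . sigma) / 4, with n_m the four
# unit vectors pointing to the vertices of a regular tetrahedron
ns = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)
povm = [(I2 + n[0]*X + n[1]*Y + n[2]*Z) / 4 for n in ns]
assert np.allclose(sum(povm), I2)            # completeness: sum_m E_m = I

# Born rule p_m = Tr(rho E_m) for a mixed state with Bloch vector (0.6, 0, 0.3)
rho = 0.5 * (I2 + 0.6*X + 0.3*Z)
probs = np.real([np.trace(rho @ E) for E in povm])
print(probs, probs.sum())                    # nonnegative, sums to 1

# Informational completeness: the vectorized POVM elements span the
# 4-dimensional space of Hermitian 2x2 operators
S = np.array([E.reshape(-1) for E in povm])
print(np.linalg.matrix_rank(S))              # 4 = d^2
```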

The tomography reconstruction problem

The tomography reconstruction problem in quantum mechanics centers on inferring the density operator \rho of a quantum system from empirical probabilities obtained via measurements. Given a positive operator-valued measure (POVM) \{M_i\} where each M_i \geq 0 and \sum_i M_i = I, the probability p_i of observing outcome i is p_i = \operatorname{Tr}(\rho M_i). The task is to solve this system of equations for \rho, which must satisfy the constraints of Hermiticity, positivity, and unit trace. This relation can be expressed as a linear map by vectorizing the operators in a chosen basis. Let \operatorname{vec}(\rho) denote the column vector formed by stacking the columns of \rho, and let \mathbf{p} collect the outcome probabilities (with the understanding that \sum_i p_i = 1). The probabilities satisfy \mathbf{p} = L \operatorname{vec}(\rho), where L is the measurement superoperator matrix whose elements are L_{ij} = \langle A_j | M_i \rangle = \mathrm{Tr}(A_j^\dagger M_i) for an orthonormal operator basis \{A_j\}. If L is invertible, the reconstruction follows as \operatorname{vec}(\rho) = L^{-1} \mathbf{p}. An equivalent representation expands \rho = \sum_{kl} \chi_{kl} E_k E_l^\dagger in terms of a fixed set of Kraus-like operators E_k, where the coefficients \chi_{kl} are solved linearly from the p_i. Uniqueness of the reconstruction requires the POVM to be informationally complete, meaning the operators \{M_i\} span the full space of Hermitian operators on the Hilbert space \mathcal{H}. For a system with \dim(\mathcal{H}) = d, this space has dimension d^2, but the trace condition \operatorname{Tr}(\rho) = 1 reduces the effective degrees of freedom to d^2 - 1, necessitating at least d^2 - 1 linearly independent measurement outcomes beyond normalization for an invertible L. Incomplete sets lead to underdetermined systems with multiple possible \rho consistent with the data. Experimental realizations introduce noise, primarily statistical fluctuations in the estimated p_i = N_i / N from finite counts N_i in N total trials, following binomial statistics with standard error \Delta p_i \approx \sqrt{p_i (1 - p_i)/N}. These errors propagate through the inversion, potentially yielding non-physical \rho (e.g., with negative eigenvalues), and degrade reconstruction fidelity, often quantified by metrics like the trace distance or state fidelity. Addressing this requires regularization or constrained optimization to enforce physicality.
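The following sketch (illustrative Python/NumPy; the tetrahedral POVM and the normalized Pauli operator basis are assumed choices, not prescribed by the source) assembles the measurement matrix L_{ij} = Tr(A_j^\dagger M_i) and recovers \rho from ideal probabilities by solving the linear system, mirroring the inversion described above.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Orthonormal operator basis A_j = sigma_j / sqrt(2), so Tr(A_j^† A_k) = delta_jk
basis = [P / np.sqrt(2) for P in (I2, X, Y, Z)]

# Tetrahedral POVM (informationally complete for a qubit)
ns = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)
povm = [(I2 + n[0]*X + n[1]*Y + n[2]*Z) / 4 for n in ns]

# Measurement matrix L_ij = Tr(A_j^† M_i), so that p = L r with rho = sum_j r_j A_j
L = np.array([[np.trace(A.conj().T @ M) for A in basis] for M in povm])

rho_true = 0.5 * (I2 + 0.6*X + 0.3*Z)
p = np.array([np.real(np.trace(rho_true @ M)) for M in povm])

# Invert the linear map to recover the expansion coefficients, then rho
r = np.linalg.solve(L, p)
rho_rec = sum(rj * A for rj, A in zip(r, basis))
print(np.allclose(rho_rec, rho_true))   # True: exact recovery from ideal data
```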

Applications

In quantum computing and information

In quantum computing, quantum tomography plays a crucial role in benchmarking quantum gates by providing a complete characterization of their performance, including both coherent and incoherent errors. Quantum process tomography, in particular, reconstructs the full quantum channel implemented by a gate, enabling precise fidelity estimates that are essential for assessing hardware reliability beyond simpler metrics like randomized benchmarking. For instance, experiments on superconducting qubits have used process tomography to quantify gate errors without assuming perfect state preparation or measurement, revealing error rates around 2% for single-qubit rotations. Quantum tomography also supports error correction protocols by verifying entanglement, a key resource for fault-tolerant computing. In superconducting quantum processors, state tomography is routinely applied to reconstruct Bell states, confirming their fidelity and concurrence to ensure reliable two-qubit operations. For example, preparations of Bell states on transmon qubits have achieved fidelities exceeding 95%, as verified through full density matrix reconstruction, which distinguishes true entanglement from classical correlations. This verification is vital for stabilizing logical qubits against decoherence in error-corrected architectures. In quantum key distribution (QKD), state tomography characterizes potential eavesdropping by reconstructing the quantum states transmitted over the channel, allowing detection of deviations from expected pure states due to interception. Tomographic protocols, such as those based on minimal qubit tomography, enable secure key generation by confirming that observed noise is consistent with environmental decoherence rather than adversarial interference. For quantum simulation, process tomography validates the effective Hamiltonians realized on quantum hardware by comparing reconstructed dynamics against theoretical models. This approach has been used to learn local Hamiltonians in noisy intermediate-scale quantum devices, achieving reconstruction accuracies on the order of 10% error in experimental demonstrations for short-time evolutions under the implemented Hamiltonian. In certifying quantum advantage experiments, two-qubit state tomography has been employed to quantify noise in subsystems, as seen in studies related to demonstrations like Google's 2019 Sycamore processor.

In quantum optics and sensing

In quantum optics, homodyne and heterodyne detection serve as cornerstone techniques for quantum state tomography of continuous-variable systems, particularly for characterizing non-classical states of light such as squeezed states. Homodyne tomography reconstructs the density matrix and Wigner function of squeezed light by measuring the quantum statistics of the electric field quadratures using balanced detection, enabling the verification of reduced uncertainty in one quadrature below the vacuum noise level. For instance, early experiments with optical parametric amplifiers demonstrated the complete reconstruction of families of squeezed states, confirming their quantum nature through tomographic analysis. Heterodyne detection extends this capability by simultaneously measuring both quadratures, albeit with added vacuum noise, and has been shown to enhance estimation performance in the presence of squeezing, as applied to Gaussian states where it provides trace-distance guarantees for state recovery independent of photon number. These methods are integral to probing light-matter interactions in optical systems, with recent advances incorporating adaptive squeezing to minimize measurement resources. Quantum tomography enhances sensing applications in quantum metrology by enabling precise characterization of probe states for tasks like phase estimation, surpassing classical limits through the use of non-classical resources. In continuous-variable setups such as optical interferometers, tomography via homodyne detection reconstructs the Wigner function of Gaussian states like coherent squeezed vacuum, informing recursive phase-estimation protocols that achieve Heisenberg-limited scaling (δ ∝ M^{-1/2} N^{-1}, where M is the number of measurements and N the mean photon number). This approach yields sub-shot-noise precision, for example δ ≈ 0.0045 for squeezing parameter r = 1.8, by leveraging the state's moments to compute phase shift probabilities. Such tomographic feedback optimizes metrological protocols, including joint estimation of phase and diffusion, where quantum correlations reduce uncertainty in noisy environments. A prominent example of quantum tomography in cavity quantum electrodynamics (QED) involves the direct measurement of photon number states or Gaussian states within optical cavities. Schemes based on quantum weak values allow access to Fock state superpositions without full global reconstruction, applicable to atomic or circuit QED systems, thereby improving efficiency over traditional phase-space methods like full Wigner function reconstruction. For Gaussian states, energy-independent protocols using homodyne detection with auxiliary squeezed vacuum and passive unitaries achieve provable recovery in trace distance, scaling only with the number of modes and offering exponential improvements for high-squeezing regimes in cavity setups. These techniques have facilitated the tomography of single-photon Fock states and their superpositions, reconstructing their Wigner functions to confirm non-classical features in itinerant cavity photons. Emerging applications extend quantum tomography to biomedical imaging, particularly in quantum-enhanced magnetic resonance imaging (MRI) and optical coherence tomography (OCT) analogs. In quantum sensing for MRI, nitrogen-vacancy (NV) centers in diamond enable nanoscale nuclear magnetic resonance (NMR) tomography, achieving subcellular resolution for single-molecule detection and neuronal magnetometry, enhancing sensitivity for metabolomics and protein structure analysis.
Quantum optical coherence tomography (Q-OCT), leveraging entangled photon pairs, mimics state tomography through coincidence measurements to reconstruct depth profiles, providing up to twofold axial resolution improvement (e.g., from 25.8 μm to 15.9 μm) and dispersion cancellation for deeper tissue imaging. These analogs exploit quantum correlations to boost signal-to-noise ratios in non-invasive diagnostics, though challenges like acquisition times persist.

Quantum State Tomography

Linear inversion techniques

Linear inversion techniques provide a direct algebraic approach to reconstructing the quantum density operator \rho from measurement outcomes in quantum state tomography. These methods express \rho as a linear combination of dual frame operators G_i derived from the measurement basis, yielding the explicit solution \rho = \sum_i p_i G_i, where p_i are the probabilities obtained from the expectation values of the measurement operators. This framework relies on informationally complete measurements that span the space of Hermitian operators, allowing the dual frames to invert the measurement map uniquely. The dual operators G_i are constructed such that \text{Tr}(G_i E_j) = \delta_{ij}, where E_j are the positive operator-valued measure (POVM) elements, ensuring an unbiased estimator for \rho. In finite-dimensional systems, linear inversion is particularly straightforward for low-dimensional cases like single qubits. For a single qubit, measurements in the Pauli bases \sigma_x, \sigma_y, and \sigma_z yield the expectation values \langle \sigma_x \rangle, \langle \sigma_y \rangle, and \langle \sigma_z \rangle, which directly determine the Bloch vector components r_x, r_y, r_z. The density matrix is then reconstructed as \rho = \frac{1}{2} (I + r_x \sigma_x + r_y \sigma_y + r_z \sigma_z), providing a complete description of the state with just three independent measurement settings. This approach maps the state onto the Bloch sphere, where the vector length |\mathbf{r}| \leq 1 indicates purity, with pure states on the surface. For continuous-variable systems, linear inversion manifests in quantum homodyne tomography, where balanced homodyne detection measures quadrature distributions p(x|\phi) for various local oscillator phases \phi. These distributions correspond to the Radon transform of the Wigner quasi-probability function W(q,p), and reconstruction proceeds via the inverse Radon transform: W(q,p) = \frac{1}{2\pi^2} \int_0^\pi d\phi \int_{-\infty}^\infty dx' \, p(x'|\phi) \frac{\partial}{\partial x'} \left( \frac{1}{q \cos\phi + p \sin\phi - x'} \right). The resulting W(q,p) yields the state, often discretized on a truncated Fock basis for numerical implementation. An example is the tomography of coherent states, which produce Gaussian Wigner functions; here, pattern functions R[O](x,\phi) serve as kernels to estimate expectation values \langle O \rangle = \int dx \, d\phi \, p(x|\phi) R[O](x,\phi), such as the photon number \langle a^\dagger a \rangle for states with mean photon number \bar{n} \approx 8. Despite their simplicity, linear inversion techniques are highly sensitive to experimental noise, as the inversion matrix is often ill-conditioned, amplifying statistical fluctuations in the measurement probabilities p_i. This can lead to reconstructed states with negative eigenvalues, violating the positivity requirement for physical density operators. For instance, even modest statistical noise in the estimated expectation values for a pure state like |0\rangle\langle 0| can produce a non-physical \rho with trace deviating from 1 or negative eigenvalues, necessitating post-processing corrections that degrade fidelity.
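A short simulation (illustrative Python/NumPy, not from the source) demonstrates the Bloch-vector inversion above and its noise sensitivity: with finite shot statistics, the estimated Bloch vector for the pure state |0⟩⟨0| can exceed unit length, producing a negative eigenvalue in the reconstructed matrix.

```python
import numpy as np

rng = np.random.default_rng(7)

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

rho_true = np.array([[1, 0], [0, 0]], dtype=complex)   # pure state |0><0|

def sampled_expectation(pauli, rho, shots):
    """Simulate `shots` projective measurements of a Pauli observable on rho."""
    p_plus = np.real(np.trace(rho @ (I2 + pauli) / 2))  # prob of +1 outcome
    clicks = rng.binomial(shots, p_plus)
    return 2 * clicks / shots - 1                       # estimate of <pauli>

shots = 100                                             # modest statistics
r = [sampled_expectation(P, rho_true, shots) for P in (X, Y, Z)]

# Linear inversion: rho = (I + r_x X + r_y Y + r_z Z) / 2
rho_est = (I2 + r[0]*X + r[1]*Y + r[2]*Z) / 2
print(np.linalg.eigvalsh(rho_est))   # may contain a negative eigenvalue
```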

Maximum likelihood and iterative methods

Maximum likelihood estimation (MLE) in quantum state tomography involves finding the density operator \rho that maximizes the likelihood of observing the measured data, ensuring the reconstructed state remains physically valid. The objective is to maximize the log-likelihood function L(\rho) = \sum_i n_i \log \operatorname{Tr}(\rho M_i), where n_i are the observed counts for measurement outcomes corresponding to positive operator-valued measure (POVM) elements M_i, subject to the constraints \operatorname{Tr}(\rho) = 1 and \rho \geq 0. This approach treats the measurement outcomes as multinomially distributed, naturally accounting for the probabilistic nature of quantum measurements. To solve this constrained nonlinear optimization problem, iterative algorithms are employed, such as the expectation-maximization (EM) method, which alternates between updating the eigenvalues and eigenvectors of \rho. A refined variant incorporates "diluted" iterations by introducing a relaxation parameter \alpha < 1 in the update rule \rho_{k+1} = (1 - \alpha) \rho_k + \alpha \frac{\rho_k R_k + R_k \rho_k}{2}, where R_k = \sum_i \frac{n_i}{\operatorname{Tr}(\rho_k M_i)} M_i, to improve convergence and avoid stagnation. These iterations typically converge in fewer than 100 steps for low-dimensional systems, yielding a positive semidefinite \rho. The primary advantages of MLE include its guarantee of a physical density operator, avoiding the unphysical negative eigenvalues possible in linear inversion, and its ability to properly handle Poissonian counting statistics inherent in quantum experiments, leading to asymptotically efficient estimators that saturate the Cramér-Rao bound. Unlike linear methods, MLE enforces quantum constraints throughout, providing more reliable reconstructions for noisy or limited data. However, it suffers from high computational cost, with per-iteration complexity O(d^3) due to matrix operations, and the total cost can be significant for large d depending on the number of iterations needed for convergence. A notable application of MLE is in reconstructing two-qubit states for entanglement detection, as demonstrated in experiments with polarization-entangled photons. For instance, analysis of data from entangled photon pairs measured in 16 measurement settings yielded a reconstructed state \rho_{ML} = 0.962 |\phi^+\rangle\langle\phi^+| + 0.038 |\phi^-\rangle\langle\phi^-|, with fidelity exceeding 0.99 to the ideal Bell state, confirming entanglement in a case where linear inversion would have yielded an unphysical matrix with negative eigenvalues. This highlights MLE's utility in verifying quantum correlations essential for quantum information protocols.
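A compact sketch of the diluted iteration described above (illustrative Python/NumPy; the tetrahedral POVM, sample size, and dilution parameter α = 0.5 are assumed choices, with counts normalized to frequencies so that the trace is preserved):

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

ns = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)
povm = [(I2 + n[0]*X + n[1]*Y + n[2]*Z) / 4 for n in ns]

# Simulate multinomial counts from a mixed true state
rho_true = 0.5 * (I2 + 0.8*Z)
N = 10_000
rng = np.random.default_rng(0)
p_true = np.real([np.trace(rho_true @ M) for M in povm])
f = rng.multinomial(N, p_true) / N            # normalized frequencies

rho = I2 / 2                                  # start at the maximally mixed state
alpha = 0.5                                   # dilution (relaxation) parameter
for _ in range(200):
    probs = np.real([np.trace(rho @ M) for M in povm])
    R = sum(fi / pi * M for fi, pi, M in zip(f, probs, povm))
    rho = (1 - alpha) * rho + alpha * (rho @ R + R @ rho) / 2
    rho = (rho + rho.conj().T) / 2            # guard hermiticity numerically
    rho /= np.trace(rho).real                 # guard normalization

print(np.linalg.eigvalsh(rho))                # nonnegative at convergence
```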

Bayesian and probabilistic approaches

Bayesian approaches to quantum state tomography treat the density matrix as a random variable and update beliefs about it based on measurement data using Bayes' rule. The posterior distribution is proportional to the likelihood of the observed data given the state multiplied by a prior distribution over possible states: p(\rho | \mathbf{d}) \propto p(\mathbf{d} | \rho) \cdot p(\rho), where \mathbf{d} represents the measurement outcomes. This framework allows incorporation of prior knowledge about the quantum system, such as expected symmetries or physical constraints, through the choice of p(\rho). A common prior is the Dirichlet distribution applied to the eigenvalues of \rho, which is uniform over the simplex of valid probability distributions while allowing tuning via hyperparameters to reflect ignorance or informed guesses about the state's spectrum. The Bayesian mean estimator, obtained by integrating over the posterior, minimizes the expected Hilbert-Schmidt distance between the true state and the estimate, providing a robust point estimate that hedges against overfitting. This is particularly useful in finite-sample regimes, where it outperforms frequentist methods by averaging over possible states weighted by their posterior probability. Uncertainty in the estimate is quantified through credible intervals derived from the posterior, often computed via Markov chain Monte Carlo (MCMC) sampling to explore the high-dimensional space of density matrices efficiently. These intervals offer probabilistic error bars that account for both statistical fluctuations and prior assumptions, enabling reliable assessment of reconstruction fidelity. The advantages of Bayesian methods include their ability to handle prior information, such as known depolarizing noise in a channel, and to provide full posterior distributions for downstream tasks like hypothesis testing or control. Maximum likelihood estimation emerges as a limiting case when the prior is flat, but Bayesian approaches generalize this by allowing informative priors for better performance in low-data scenarios. While primarily formulated for complete data sets, these methods can be extended to handle incomplete measurements by adjusting the likelihood model.
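A minimal Metropolis-Hastings sketch of this idea (illustrative Python/NumPy; the flat prior over the Bloch ball, the step size, and the hypothetical count data are assumptions, not from the source) draws posterior samples for a single qubit measured with the tetrahedral POVM, then reports the Bayesian mean and a credible interval:

```python
import numpy as np

rng = np.random.default_rng(1)

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
ns = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)

def probs(r):
    """Born-rule probabilities p_m = (1 + n_m . r) / 4 for Bloch vector r."""
    return (1 + ns @ r) / 4

counts = np.array([380, 180, 260, 180])       # hypothetical data, N = 1000

def log_like(r):
    return np.sum(counts * np.log(probs(r)))

r = np.zeros(3)                               # start at the maximally mixed state
samples = []
for step in range(20_000):
    prop = r + 0.05 * rng.standard_normal(3)  # symmetric random-walk proposal
    if np.linalg.norm(prop) < 1:              # flat prior: zero outside Bloch ball
        if np.log(rng.random()) < log_like(prop) - log_like(r):
            r = prop                          # accept; otherwise keep current r
    if step > 5_000:                          # discard burn-in
        samples.append(r)

r_mean = np.mean(samples, axis=0)             # Bayesian mean estimate
rho_bme = (I2 + r_mean[0]*X + r_mean[1]*Y + r_mean[2]*Z) / 2
print(r_mean)
print(np.percentile([s[2] for s in samples], [2.5, 97.5]))  # credible interval, r_z
```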

Strategies for incomplete or noisy data

In practical quantum experiments, data incompleteness arises from limited measurement settings or sample sizes, while noise introduces errors from imperfect detectors, decoherence, or gate inaccuracies, necessitating specialized strategies to reconstruct reliable quantum states. These approaches leverage prior knowledge about state complexity, such as sparsity or low rank, to reduce the required measurements without sacrificing fidelity. Compressed sensing techniques, in particular, exploit the fact that many quantum states of interest are sparse in a suitable basis or have low effective dimensionality, enabling reconstruction from far fewer data points than traditional full tomography demands. For incomplete measurements, compressed sensing relies on sparsity assumptions, such as the density matrix \rho being low-rank or k-sparse in the Pauli basis, allowing reconstruction via convex optimization. A seminal method formulates the problem as minimizing the nuclear norm \|\rho\|_* = \operatorname{Tr}(\sqrt{\rho^\dagger \rho}) (or the \ell_1 norm for coefficient sparsity) subject to linear constraints from the observed measurement outcomes p_i = \operatorname{Tr}(E_i \rho), where E_i are the measurement operators; this \ell_1-style minimization promotes sparsity and guarantees exact recovery with high probability using O(r d \log^2 d) measurement settings for a d-dimensional system with rank r \ll d. This approach has been experimentally validated on systems of up to seven qubits, achieving fidelities above 0.99 with sampling ratios as low as 10% of full tomography requirements. Adaptive measurement strategies further mitigate incompleteness by sequentially selecting optimal bases based on prior data, reducing the total number of settings needed by focusing on informative directions. Noise mitigation in these scenarios builds on compressed sensing by incorporating robust estimators that handle probabilistic errors, such as dephasing or depolarizing noise, through regularization terms in the optimization that penalize deviations from physical constraints like positivity and unit trace. For instance, compressed sensing tomography can suppress noise amplification in high-dimensional reconstructions by enforcing sparsity, outperforming linear inversion in noisy regimes with error scaling O(\sqrt{k \log d / N}) for N samples and sparsity k. Self-guided tomography addresses both incompleteness and noise via iterative feedback loops: starting from an initial guess, it selects the next measurement to maximally distinguish the current hypothesis from the true state, converging to the density matrix with fewer iterations than non-adaptive methods, even under moderate noise levels. This technique has been demonstrated experimentally on photonic qubits, yielding state estimates with fidelities exceeding 0.95 using 20-50% fewer measurements. A prominent example is shadow tomography, which estimates multiple expectation values \langle O_m \rangle = \operatorname{Tr}(O_m \rho) for many observables O_m from a single set of random measurements, requiring only O(\log M / \epsilon^2) samples for M observables with precision \epsilon, far fewer than full tomography. Introduced for classical shadows of quantum states, this method randomizes measurements in the computational basis after applying random unitaries, then inverts the measurement channel to recover classical shadows whose statistics predict the target observables efficiently; a minimal sketch follows below.
These strategies, however, rely on strong assumptions about state complexity, such as k-sparsity in the Pauli basis or bounded shadow norm for the observables, which may fail for highly entangled or mixed states without additional priors, potentially leading to reconstruction errors if the true state violates the low-rank or sparsity conditions.
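As an illustration of the classical-shadows estimator mentioned above, the following single-qubit sketch (Python/NumPy; an assumed toy example, not from the source) measures in a uniformly random Pauli basis and applies the known inverse of the measurement channel, ρ̂ = 3U†|b⟩⟨b|U − I, whose average over snapshots converges to ρ:

```python
import numpy as np

rng = np.random.default_rng(2)

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
Sdg = np.array([[1, 0], [0, -1j]], dtype=complex)

rho = 0.5 * (I2 + 0.6*X + 0.3*Z)              # unknown state to be learned
rotations = [H, H @ Sdg, I2]                  # map X, Y, Z bases to the Z basis

shadows = []
for _ in range(50_000):
    U = rotations[rng.integers(3)]            # uniformly random Pauli basis
    p0 = np.real((U @ rho @ U.conj().T)[0, 0])  # Born rule in the rotated frame
    b = 0 if rng.random() < p0 else 1
    ket = np.zeros((2, 1), dtype=complex)
    ket[b] = 1
    # Single-snapshot inverse-channel estimate: 3 U^† |b><b| U - I
    shadows.append(3 * U.conj().T @ (ket @ ket.conj().T) @ U - I2)

rho_shadow = np.mean(shadows, axis=0)
print(np.round(rho_shadow, 2))                # approximates rho
print(np.real(np.trace(rho_shadow @ X)))      # estimate of <X> ~ 0.6
```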

Quantum Measurement Tomography

Core principles

Quantum measurement tomography, also known as quantum detector tomography, is the process of reconstructing the positive operator-valued measure (POVM) elements \{E_m\} that fully characterize a quantum measurement device. This is achieved by preparing a known set of probe states, such as pure states |\psi_k\rangle, and measuring the outcome probabilities p_{km} = \langle \psi_k | E_m | \psi_k \rangle for each outcome m. The POVM elements satisfy the completeness relation \sum_m E_m = I and are positive semidefinite, ensuring the probabilities sum to one for any input state. By collecting statistics over multiple preparations and measurements, the POVM can be estimated without prior assumptions about the device's operation, completing the characterization of quantum experiments alongside state and process tomography. This reconstruction problem is the dual of quantum state tomography, where the unknown quantum state is reconstructed assuming a known measurement; in contrast, measurement tomography assumes access to known input states while treating the measurement as unknown. The duality arises from the trace rule p_m = \mathrm{tr}(E_m \rho) that symmetrically interchanges the roles of the density operator \rho and the POVM elements E_m. This perspective highlights how measurement tomography calibrates the detection apparatus, enabling accurate interpretation of subsequent experiments. For the reconstruction to be possible, the set of probe states must be informationally complete, meaning the outer products \{|\psi_k\rangle\langle\psi_k|\} span the full space of Hermitian operators on the Hilbert space. In a d-dimensional system, at least d^2 such probes are required to uniquely determine the POVM. For a qubit (d=2), a minimal informationally complete set consists of four pure states whose Bloch vectors point to the vertices of a regular tetrahedron, providing symmetric coverage of the state space. The fidelity of the reconstructed POVM to an ideal measurement is typically quantified using the process fidelity, which measures the overlap between the Choi matrices of the actual and ideal measurement channels. This metric, defined as F = \frac{1}{d^2} \left( \mathrm{tr}\sqrt{\sqrt{J}\, J_{\mathrm{ideal}} \sqrt{J}} \right)^2 where J is the Choi operator, assesses how well the calibrated measurement reproduces expected outcomes for arbitrary inputs, guiding device calibration and error analysis.
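The following sketch (illustrative Python/NumPy, not from the source) constructs the four tetrahedral probe states mentioned above, verifies their informational completeness, and tabulates the probe statistics p_{km} for a hypothetical inefficient Z-basis detector with efficiency η = 0.8:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Four pure probe states with Bloch vectors at tetrahedron vertices
ns = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)
probes = [(I2 + n[0]*X + n[1]*Y + n[2]*Z) / 2 for n in ns]

# Informational completeness: projectors span the 4-dim Hermitian operator space
S = np.array([P.reshape(-1) for P in probes])
print(np.linalg.matrix_rank(S))    # 4 = d^2

# Hypothetical two-outcome detector: inefficient Z measurement with efficiency eta
eta = 0.8
E_click = eta * np.array([[1, 0], [0, 0]], dtype=complex)
P_km = np.real([[np.trace(P @ E) for E in (E_click, I2 - E_click)]
                for P in probes])
print(P_km)                        # probe-statistics matrix p_km
```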

Reconstruction algorithms

Reconstruction of the positive operator-valued measure (POVM) elements E_m in quantum measurement tomography begins with linear inversion techniques, which provide a straightforward, non-iterative approach to estimate the measurement operators from experimental data. The core relation is the matrix equation P_{km} = \langle \psi_k | E_m | \psi_k \rangle, where \{\psi_k\} is a complete set of prepared pure states and P_{km} are the measured probabilities for outcome m when preparing state k. By vectorizing the operators into a linear system, the E_m are solved via matrix inversion, yielding an explicit reconstruction formula. However, this method can produce non-physical results, such as negative eigenvalues, due to statistical fluctuations or experimental noise, necessitating post-processing projections onto the space of valid POVMs. To address these limitations, maximum likelihood estimation (MLE) for POVMs optimizes the likelihood function under the physical constraints \sum_m E_m = I and E_m \geq 0. The estimator maximizes the log-likelihood \mathcal{L} = \sum_{k,m} N_{km} \log \operatorname{Tr}(\rho_k E_m), where N_{km} are observed counts and \rho_k are the prepared states (often idealized as pure states), subject to the POVM conditions enforced via semidefinite programming or iterative algorithms like the diluted RρR method. Unlike linear inversion, MLE guarantees physicality and is asymptotically efficient, converging to the true POVM as the number of trials increases. This approach was introduced for quantum measurement calibration and has become a standard for robust reconstruction. A practical example is qubit measurement tomography using symmetric informationally complete (SIC) POVMs for calibration. For a single qubit, a SIC-POVM consists of four rank-one operators E_m = \frac{1}{2} |\phi_m\rangle\langle\phi_m|, where the |\phi_m\rangle are equiangular states satisfying \left| \langle\phi_i|\phi_j\rangle \right|^2 = \frac{1}{3} for i \neq j. By preparing informationally complete probe states and measuring outcomes, linear inversion or MLE reconstructs the effective POVM, enabling correction for basis misalignments or inefficiencies with high fidelity, as demonstrated in numerical simulations achieving near-unit process fidelity. Error analysis in these reconstructions reveals how imperfections in state preparation propagate to the estimated POVMs. State preparation errors, modeled as deviations \delta \rho_k from the ideal \rho_k, introduce biases in P_{km}, leading to covariance in the reconstructed E_m that scales with the condition number of the inversion matrix; for overcomplete sets like SIC-POVMs, this propagation is mitigated but still requires gauge-invariant SPAM (state preparation and measurement) tomography to disentangle correlated errors. Simulations show that preparation infidelity above 1% can inflate POVM reconstruction errors by factors of 2-5, emphasizing the need for self-consistent methods. In photonics applications, these algorithms enable detector efficiency tomography, where POVM elements capture varying quantum efficiencies across modes or polarizations. For instance, in multi-mode optical setups, linear inversion calibrated with coherent states reconstructs efficiency maps, correcting for losses in entangled photon sources and improving state tomography fidelity from 0.85 to 0.98 in experimental fiber-based systems.
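To illustrate the linear-inversion step, the sketch below (Python/NumPy; the inefficient detector with dark counts is a hypothetical example) expands each POVM element in the Pauli basis and solves the resulting least-squares problem from probe-state probabilities:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = (I2, X, Y, Z)

ns = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)
probes = [(I2 + n[0]*X + n[1]*Y + n[2]*Z) / 2 for n in ns]

# Hypothetical detector: inefficient Z measurement (eta = 0.8) with dark counts
E0 = 0.8 * np.array([[1, 0], [0, 0]], dtype=complex) + 0.05 * I2
E_true = [E0, I2 - E0]

# Design matrix A_kj = Tr(P_k sigma_j); each E_m = sum_j c_j sigma_j
A = np.array([[np.real(np.trace(P @ s)) for s in paulis] for P in probes])

E_rec = []
for m in range(2):
    p = np.array([np.real(np.trace(P @ E_true[m])) for P in probes])
    c, *_ = np.linalg.lstsq(A, p, rcond=None)   # least-squares inversion
    E_rec.append(sum(cj * s for cj, s in zip(c, paulis)))

print(np.allclose(E_rec[0], E_true[0]))         # exact for noiseless data
print(np.allclose(sum(E_rec), I2))              # completeness preserved
```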

Quantum Process Tomography

Quantum channels and dynamical maps

In quantum information theory, quantum channels provide the general framework for describing the evolution of a quantum system due to interactions with an uncontrollable environment, such as decoherence or dissipation. These evolutions are mathematically represented as completely positive trace-preserving (CPTP) maps acting on the density operator of the system. A key representation of quantum channels is the Kraus operator-sum decomposition, where a channel \Phi applied to a density operator \rho is given by \Phi(\rho) = \sum_{i} K_i \rho K_i^\dagger, with the Kraus operators \{K_i\} satisfying the completeness relation \sum_i K_i^\dagger K_i = I to ensure trace preservation. This form guarantees complete positivity, meaning the map preserves the positivity of density operators even when tensored with the identity on an ancillary system. The number of Kraus operators is at most d^2, where d is the dimension of the Hilbert space. The Choi-Jamiolkowski isomorphism offers an alternative representation of quantum channels by associating each CPTP map with a positive semidefinite operator on the doubled Hilbert space \mathcal{H} \otimes \mathcal{H}. Specifically, the Choi matrix J(\Phi) of the channel \Phi is obtained by applying \Phi \otimes \mathcal{I} (where \mathcal{I} is the identity map) to the maximally entangled state projector |\Omega\rangle\langle\Omega|, with |\Omega\rangle = \frac{1}{\sqrt{d}} \sum_{k=1}^d |k\rangle \otimes |k\rangle. This isomorphism facilitates the analysis of channel properties, such as entanglement breaking, and enables tomography by reconstructing the Choi state instead of the process matrix. Dynamical maps extend the concept of quantum channels to time-dependent evolutions of open quantum systems, capturing the propagation of the density operator over time under environmental influences. In the Markovian regime, these maps are generated by the Liouvillian superoperator \mathcal{L}, which acts in the vectorized space of operators and yields the master equation \frac{d\rho}{dt} = \mathcal{L}(\rho) = -i[H, \rho] + \sum_j \left( V_j \rho V_j^\dagger - \frac{1}{2} \{V_j^\dagger V_j, \rho\} \right), where H is the system Hamiltonian and \{V_j\} are jump (Lindblad) operators describing dissipative processes. The solution is the dynamical map \Lambda(t) = e^{t \mathcal{L}}, which is CPTP for each t \geq 0. Time evolution can also be parameterized directly via time-ordered products of short-time propagators in non-Markovian cases. Common noise models illustrate these formalisms. The depolarizing channel, a symmetric noise source, acts as \Phi(\rho) = (1 - p) \rho + p \frac{I}{d}, where p is the depolarization probability and I/d is the maximally mixed state; its Kraus operators for a qubit (d=2) are \sqrt{1 - \frac{3p}{4}} I, \sqrt{\frac{p}{4}} X, \sqrt{\frac{p}{4}} Y, and \sqrt{\frac{p}{4}} Z, with Pauli matrices X, Y, Z. Another example is the amplitude damping channel, modeling relaxation from an excited to a ground state (e.g., spontaneous emission), with Kraus operators for a qubit K_0 = \begin{pmatrix} 1 & 0 \\ 0 & \sqrt{1 - \gamma} \end{pmatrix} and K_1 = \begin{pmatrix} 0 & \sqrt{\gamma} \\ 0 & 0 \end{pmatrix}, where \gamma is the damping probability. These models are foundational for understanding error processes in quantum devices.
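These noise models translate directly into code; the sketch below (illustrative Python/NumPy) applies the depolarizing and amplitude damping channels through their Kraus operators, checks trace preservation, and builds the Choi matrix of the amplitude damping channel to confirm complete positivity:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def apply_channel(kraus, rho):
    """Operator-sum form: Phi(rho) = sum_i K_i rho K_i^†."""
    return sum(K @ rho @ K.conj().T for K in kraus)

p, gamma = 0.2, 0.3
depolarizing = [np.sqrt(1 - 3*p/4) * I2, np.sqrt(p/4) * X,
                np.sqrt(p/4) * Y, np.sqrt(p/4) * Z]
amp_damp = [np.array([[1, 0], [0, np.sqrt(1 - gamma)]], dtype=complex),
            np.array([[0, np.sqrt(gamma)], [0, 0]], dtype=complex)]

for kraus in (depolarizing, amp_damp):
    # Completeness relation sum_i K_i^† K_i = I (trace preservation)
    assert np.allclose(sum(K.conj().T @ K for K in kraus), I2)

rho = np.array([[0.2, 0.3], [0.3, 0.8]], dtype=complex)  # valid test state
print(apply_channel(depolarizing, rho))   # contracts toward I/2
print(apply_channel(amp_damp, rho))       # population flows to |0><0|

# Choi matrix J = sum_ij |i><j| (x) Phi(|i><j|), built from the Kraus form
v = np.array([1, 0, 0, 1], dtype=complex)  # unnormalized |00> + |11>
J = sum(np.kron(I2, K) @ np.outer(v, v) @ np.kron(I2, K).conj().T
        for K in amp_damp)
print(np.linalg.eigvalsh(J))               # nonnegative: complete positivity
```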

Standard process tomography protocols

Standard quantum process tomography protocols involve preparing a complete set of basis input states, applying the unknown quantum channel to each, and then performing quantum state tomography on the output states to reconstruct the process matrix. This approach, often referred to as standard quantum process tomography (SQPT), fully characterizes the channel by estimating all elements of the \chi-matrix, which represents the process in the operator-sum decomposition \Phi(\rho) = \sum_{m,n} \chi_{mn} E_m \rho E_n^\dagger, where \{E_m\} is a basis of operators and \rho is the input density matrix. The reconstruction typically relies on linear inversion or iterative methods applied to the collected measurement data, building on techniques from quantum state tomography. For qubit systems, Pauli twirling enhances efficiency by conjugating the channel with random Pauli operators before and after application, projecting the process onto a Pauli channel and simplifying the estimation of the diagonal elements of the \chi-matrix. This method reduces the number of required measurements for certain noise models while maintaining the ability to capture coherent errors, making it particularly useful in noisy intermediate-scale quantum devices. An illustrative experimental implementation is the tomographic characterization of a controlled-NOT (CNOT) gate using capacitively coupled superconducting phase qubits, where input basis states were prepared, the gate applied, and output states tomographed to yield a process fidelity of approximately 56% for the CNOT operation. Such protocols have been instrumental in benchmarking two-qubit gates in early superconducting quantum processors. The scalability of these protocols is limited by the need to estimate O(d^4) parameters for a d-dimensional system, where d = 2^n for n qubits, leading to exponential growth in measurement resources as system size increases. This resource demand arises from the quadratic scaling in both the input basis preparation and output tomography steps, rendering full characterization infeasible beyond a few qubits without approximations. To quantify the performance of the reconstructed channel, the average gate fidelity is commonly used, defined as F = \int \langle\psi| \Phi(|\psi\rangle\langle\psi|) |\psi\rangle \, d\psi, where the integral is over all pure states |\psi\rangle with respect to the Haar measure. This state-independent metric provides a single scalar measure of how closely the channel \Phi approximates an ideal unitary operation, with values near 1 indicating high fidelity.
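A worked one-qubit SQPT sketch (illustrative Python/NumPy; the amplitude damping channel and exact output-state tomography are simplifying assumptions) follows the protocol above: apply the channel to the inputs |0⟩, |1⟩, |+⟩, |+i⟩, extend to the elementary operators |i⟩⟨j| by linearity, and assemble the Choi matrix, cross-checked against the Kraus form:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
gamma = 0.3
kraus = [np.array([[1, 0], [0, np.sqrt(1 - gamma)]], dtype=complex),
         np.array([[0, np.sqrt(gamma)], [0, 0]], dtype=complex)]

def channel(rho):
    return sum(K @ rho @ K.conj().T for K in kraus)

# Input states |0>, |1>, |+>, |+i> span the operator space
kets = [np.array([1, 0]), np.array([0, 1]),
        np.array([1, 1]) / np.sqrt(2), np.array([1, 1j]) / np.sqrt(2)]
outs = [channel(np.outer(k, k.conj())) for k in kets]  # output tomography data

# Extend by linearity: |0><1| = |+><+| + i|+i><+i| - (1+i)(|0><0| + |1><1|)/2
E = {(0, 0): outs[0], (1, 1): outs[1]}
E[(0, 1)] = outs[2] + 1j*outs[3] - (1 + 1j)/2 * (outs[0] + outs[1])
E[(1, 0)] = E[(0, 1)].conj().T               # hermiticity preservation

# Assemble the Choi matrix J = sum_ij |i><j| (x) Phi(|i><j|)
J = np.zeros((4, 4), dtype=complex)
for (i, j), block in E.items():
    J[2*i:2*i+2, 2*j:2*j+2] = block

# Cross-check against the Kraus construction of the same Choi matrix
v = np.array([1, 0, 0, 1], dtype=complex)    # unnormalized |00> + |11>
J_ref = sum(np.kron(I2, K) @ np.outer(v, v) @ np.kron(I2, K).conj().T
            for K in kraus)
print(np.allclose(J, J_ref))                 # True
```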

Advanced variants like gate set tomography

Gate set tomography (GST) represents a sophisticated extension of quantum process tomography that characterizes an entire gate set, including state preparation and measurement (SPAM) operations, in a self-consistent and gauge-invariant manner. Unlike standard protocols that assume ideal SPAM, GST jointly reconstructs the effective gates, preparation processes, and measurement effects by analyzing data from interleaved sequences of gates, enabling the detection of systematic errors across the full experimental apparatus. This approach uses randomized benchmarking-like sequences of varying lengths to probe the system's behavior efficiently, achieving high precision through long sequences that scale favorably with the desired accuracy. A key advantage of GST is its gauge invariance, which eliminates ambiguities arising from arbitrary choices in basis representations by focusing on relative gate fidelities and compositions, providing a robust, calibration-free characterization suitable for real quantum devices. Additionally, GST excels at distinguishing coherent errors—such as unitary rotations—from incoherent noise, offering detailed insights into error sources that simpler methods overlook, which is crucial for improving gate fidelities toward fault tolerance. For instance, in a 2013 experiment on a trapped-ion qubit, GST was used to characterize Clifford-generating gates, demonstrating extremely accurate operations. Randomized process tomography variants enhance efficiency by sampling subsets of possible input states and measurements, reducing the exponential resource demands of full tomography while leveraging compressive sensing principles to reconstruct low-rank quantum channels. These methods draw random sequences or projections to estimate the process with fewer experiments, maintaining accuracy for sparse or structured noise models prevalent in near-term devices. Compressive gate set tomography, a specific randomized extension of GST, further optimizes this by using low-rank approximations of gate sets, requiring significantly fewer gate sequences—often by orders of magnitude—compared to standard GST, as demonstrated in simulations and small-scale implementations. Recent advancements in GST include extensions to stochastic noise models that account for time-correlated or non-Markovian effects, improving reconstructions for devices with fluctuating control fields or environmental interactions. In 2025, microscopic parametrizations were introduced for GST under correlated phase noise, enabling precise modeling of stochastic processes in quantum information processors by incorporating device-specific noise spectra, as applied to superconducting and ion-trap platforms. Additionally, threshold quantum state tomography techniques, adapted for hybrid photonic-integrated circuits, have shown promise in 2025 experiments by efficiently verifying states near fault-tolerance thresholds with minimal measurements, highlighting scalability in multi-platform systems.

Challenges and Recent Developments

Scalability and computational challenges

Quantum state tomography requires an informationally complete set of measurements that scales as O(d^2), where d is the dimension of the Hilbert space, to reconstruct the d^2 - 1 independent parameters of a density matrix. For an n-qubit system, this implies an exponential growth in the number of required measurements, as d = 2^n, rendering full tomography resource-intensive even for moderate system sizes. Quantum process tomography exacerbates this issue, demanding O(d^4) measurements to characterize the d^4 - 1 parameters of a general quantum channel, involving preparation of d^2 input states and measurement in d^2 bases for each. The computational demands further compound these scaling issues. Reconstruction often involves matrix inversion or least-squares optimization, with a complexity of O(d^6) for inverting the Gram matrix in linear estimation schemes, making it infeasible for large d. For processes, similar optimizations scale as O(d^8) or worse, limiting practical implementations to systems of modest dimension. Experimentally, these protocols suffer from sample inefficiency, as the exponential number of measurements requires vast ensembles to achieve sufficient statistical precision, often exceeding available quantum resources. Decoherence during repeated preparations and measurements introduces additional errors, degrading fidelity and amplifying noise in the reconstructed state or channel, particularly in noisy intermediate-scale quantum devices. In many-body systems, the curse of dimensionality manifests acutely, where the parameter count grows as O(4^n) for states and O(16^n) for processes, prohibiting naive tomography for n > 10 qubits due to prohibitive resource and time requirements. For instance, attempts at 50-qubit state tomography fail under standard protocols, as local Pauli measurements alone would necessitate approximately 3^{50} \approx 7 \times 10^{23} measurement settings, far beyond current experimental capabilities and leading to insurmountable statistical and decoherence errors. To address these limitations, high-level mitigation approaches include adaptive measurement schemes and prior-informed reconstructions, though full scalability remains an open challenge.
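The scaling claims above are easy to reproduce; the snippet below (illustrative Python) tabulates the 3^n local Pauli measurement settings alongside the 4^n − 1 state parameters and 16^n process parameters for several system sizes:

```python
# Resource scaling of naive tomography for n qubits:
# 3^n local Pauli settings, 4^n - 1 state parameters, ~16^n process parameters
for n in (2, 5, 10, 50):
    print(f"n={n:2d}  settings={3**n:.3e}  "
          f"state params={4**n - 1:.3e}  process params={16**n:.3e}")
```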

Emerging methods and experimental advances

Recent advancements in quantum tomography have focused on compressed and adaptive techniques to mitigate the exponential scaling of traditional methods with system size. Classical shadow tomography, introduced by Huang et al., enables the estimation of expectation values for M target observables with a sample complexity of O(\log M / \epsilon^2) at error \epsilon, where measurements are performed in random bases and classical post-processing reconstructs the "shadow" of the quantum state. This approach has been experimentally demonstrated on superconducting qubits and photonic systems, achieving high-fidelity predictions with far fewer copies than full state tomography. Adaptive variants further refine this by sequentially selecting measurements based on prior data, reducing overhead in noisy intermediate-scale quantum devices. Machine learning methods have emerged as powerful tools for quantum state reconstruction, particularly for handling incomplete or high-dimensional data. Neural networks, such as variational autoencoders, are trained on simulated outcomes to map noisy data directly to density matrices, enforcing physical constraints like positivity and unit trace. For instance, convolutional neural networks and conditional generative adversarial networks have shown robust scaling in reconstructing states of up to 10 qubits, outperforming linear inversion techniques under realistic noise levels. These approaches leverage simulated datasets for pre-training, allowing efficient adaptation to experimental data without exhaustive parameter sweeps. In 2025, threshold quantum state tomography was experimentally verified on hybrid photonic platforms integrating superconducting detectors and integrated waveguides, achieving reconstruction fidelities above 95% for multi-qubit states while requiring only a reduced set of measurements to bound purity. This method addresses sparsity in high-dimensional Hilbert spaces by focusing on dominant eigenvalues, demonstrated on a 4-qubit system with non-local entanglement. Concurrently, extensions of gate set tomography (GST) to non-Markovian noise models have been developed, incorporating time-correlated noise through parameterizations that unify irregular noise characterization without assuming memoryless dynamics. These advancements enable accurate process characterization for gates exhibiting coherent memory effects, as validated on ion-trap processors with error rates below 0.1%. Experimental progress has extended tomography to distributed quantum networks, where link parameters are estimated via quantum network tomography protocols that model entanglement distribution across nodes with sample complexities scaling linearly with network depth. Additionally, beyond-leading-order spin tomography has been proposed for high-energy physics applications, incorporating higher-order quantum corrections in angular momentum reconstructions at particle colliders, with simulations showing 15% improved precision over standard methods for qutrit states. Looking ahead, integrating tomography with quantum error correction promises scalable verification of logical qubits, where subspace projection methods confirm code-space membership using tomographic projections onto error syndromes, reducing measurement overhead by orders of magnitude compared to full-state protocols. This framework supports fault-tolerant benchmarking in modular architectures, paving the way for practical quantum networks.

  44. [44]
    [PDF] Quantum process tomography of two-qubit controlled-Z and ...
    Nov 9, 2010 · We experimentally demonstrate quantum process tomography of controlled-Z and controlled-NOT gates using capacitively coupled superconducting ...
  45. [45]
    Quantum Process Tomography Using Superconducting Qubits
    Here we demonstrate how to implement SQPT with our Josephson junction phase qubits and use it to characterize a CNOT gate. We show how to obtain the process ...
  46. [46]
    Experimental demonstration of simplified quantum process ...
    Jan 14, 2013 · By using the standard QPT method,1 we need O(d4) configurations for experimentally determining χ matrix, which is difficult to work even in ...
  47. [47]
    Experimental realization of self-guided quantum process tomography
    Feb 18, 2020 · The characterization of a quantum process requires O ( d 4 ) parameters for a d -dimensional system, as opposed to the state estimation problem ...
  48. [48]
    A simple formula for the average gate fidelity of a quantum ...
    (3) we obtain the following formula for the average gate fidelity (17) F ( E ,U)= F U † ∘ E = ∑ j tr (UU j † U † E (U j ))+d 2 d 2 (d+1) . When d=2 and choosing ...
  49. [49]
    A simple formula for the average gate fidelity of a quantum ... - arXiv
    May 7, 2002 · This note presents a simple formula for the average fidelity between a unitary quantum gate and a general quantum operation on a qudit.
  50. [50]
    Gate Set Tomography - Quantum Journal
    Oct 5, 2021 · Gate set tomography (GST) is a protocol for detailed, predictive characterization of logic operations (gates) on quantum computing processors.
  51. [51]
    Robust, self-consistent, closed-form tomography of quantum logic ...
    Oct 16, 2013 · We introduce and demonstrate experimentally: (1) a framework called gate set tomography (GST) for self-consistently characterizing an entire set of quantum ...Missing: original | Show results with:original
  52. [52]
    Microscopic parametrizations for gate set tomography under ...
    Feb 6, 2025 · Gate set tomography (GST) allows for a self-consistent characterization of noisy quantum information processors (QIPs).
  53. [53]
  54. [54]
    Focus on quantum tomography - IOP Science
    Dec 13, 2013 · This curse of dimensionality renders any naive approach to quantum tomography manifestly impossible, even for moderately large systems.
  55. [55]
    Two-qubit decoherence mechanisms revealed via quantum process ...
    Oct 5, 2009 · We analyze the quantum process tomography (QPT) in the presence of decoherence, focusing on distinguishing local and nonlocal decoherence ...
  56. [56]
    Mitigating shot noise in local overlapping quantum tomography with ...
    However, their experimental estimation is challenging, as it involves preparing and measuring quantum states in multiple bases, a resource-intensive process ...
  57. [57]
    Experimental Estimation of Quantum State Properties from Classical ...
    Jan 13, 2021 · Shadow tomography extracts quantum state features from limited measurements, predicting expectation values of observables using classical  ...
  58. [58]
    [1711.01053] Shadow Tomography of Quantum States - arXiv
    Nov 3, 2017 · Shadow tomography estimates the probability of a quantum state accepting a measurement, using only a small number of copies of the state.Missing: ε^ | Show results with:ε^
  59. [59]
    Learning hard quantum distributions with variational autoencoders
    Jun 28, 2018 · Here, we introduce a practically usable deep architecture for representing and sampling from probability distributions of quantum states.
  60. [60]
    Neural Network Architectures for Scalable Quantum State Tomography
    Jul 30, 2025 · Our results reveal that CNN and CGAN scale more robustly and achieve the highest fidelities, while Spiking Variational Autoencoder (SVAE) ...
  61. [61]
    Neural-network quantum state tomography | Phys. Rev. A
    Jul 6, 2022 · We revisit the application of neural networks to quantum state tomography. We confirm that the positivity constraint can be successfully implemented.
  62. [62]
    Unifying Non-Markovian Characterization with an Efficient and Self ...
    May 9, 2025 · We introduce a theoretical framework that can describe all forms of this irregular noise—known as non-Markovian behavior—in quantum systems ...
  63. [63]
    Quantum Tomography: Advancements and Applications
    Mar 18, 2025 · In this work we demonstrate an extension of gate set tomography (GST) based on promoting parameters of a markovian noise model to stochastic ...
  64. [64]
  65. [65]
    Robust High-Fidelity Quantum Entanglement Distribution over Large ...
    Apr 11, 2025 · Here, we demonstrate a real-world scalable quantum networking testbed deployed within Deutsche Telekom's metropolitan fibers in Berlin.
  66. [66]
    Quantum tomography beyond the leading order - EPJ C
    Sep 11, 2025 · Quantum tomography beyond the leading order ... , which is of high interest for upcoming qutrit entanglement tests at the Large Hadron Collider.
  67. [67]
    [2410.12551] Quantum subspace verification for error correction codes
    Oct 16, 2024 · Benchmarking the performance of quantum error correction codes in physical systems is crucial for achieving fault-tolerant quantum computing.
  68. [68]
    Quantum subspace verification for error correction codes
    This improvement stems from the use of subspace verification, which leverages the knowledge of code subspaces to significantly reduce measurement costs.