
Measurement in quantum mechanics

In quantum mechanics, measurement refers to the interaction between a quantum system and a measuring apparatus that extracts information about the system's state, resulting in a stochastic collapse of the wave function to an eigenstate of the measured observable according to the Born rule, which assigns probabilities p_i = |\langle \psi | \phi_i \rangle|^2 to each possible outcome i. This process contrasts sharply with the deterministic, unitary evolution of isolated quantum systems under the Schrödinger equation, which allows for superpositions of multiple states. The measurement problem arises from this apparent inconsistency: while quantum states evolve linearly to form entangled superpositions involving the system and apparatus, actual observations yield single, definite outcomes without superposition, posing a challenge to the completeness of the theory. Formally described in John von Neumann's 1932 framework, measurement involves coupling the quantum system to a macroscopic probe, leading to a projection postulate where the post-measurement state is updated as \rho' = P_i \rho P_i^\dagger / \operatorname{Tr}(P_i \rho), with P_i as the projector for outcome i. This irreversibility and the role of the observer have fueled debates since the 1920s, when Max Born introduced the probabilistic interpretation in 1926 to resolve scattering predictions.

Historically, the Copenhagen interpretation, developed by Niels Bohr and Werner Heisenberg in the late 1920s, addresses measurement by emphasizing the classical nature of the apparatus and the complementarity of quantum observables, treating the wave function as an epistemic tool for predicting probabilities rather than an ontological description. However, it leaves unresolved the boundary between quantum and classical realms, often invoking an abrupt "collapse" upon observation. Alternative approaches, such as the many-worlds interpretation proposed by Hugh Everett in 1957, eliminate collapse by positing that all outcomes occur in branching parallel universes, while decoherence theory, advanced in the 1970s–1980s by Wojciech Zurek and others, explains the apparent classicality through rapid entanglement with the environment, suppressing interference without invoking fundamental collapse.

Despite these developments, the measurement problem remains a central topic of foundational quantum research, with ongoing efforts exploring objective collapse models, information-theoretic approaches, and thermodynamic derivations aiming to reconcile quantum predictions with empirical irreversibility. Experimental tests, including weak measurements and delayed-choice setups, continue to probe these issues, underscoring measurement's centrality to quantum information science, quantum computing, and the quest for a unified physical theory.

Mathematical Formalism

Observables as self-adjoint operators

In quantum mechanics, physical observables—such as position, momentum, or energy—are mathematically represented by operators acting on a Hilbert space. This formalism, developed in the rigorous mathematical framework of functional analysis by John von Neumann, associates each observable with a linear operator A that is self-adjoint, meaning A = A^\dagger, where A^\dagger is the adjoint operator defined by \langle \psi | A \phi \rangle = \langle A \psi | \phi \rangle for all vectors \psi, \phi in the Hilbert space. Self-adjointness ensures that the eigenvalues of A, which correspond to the possible outcomes of a measurement of the observable, are real numbers, aligning with the empirical reality that measurement results are always real-valued quantities.

The spectral theorem provides the foundational decomposition for these operators, stating that any bounded self-adjoint operator A on a Hilbert space can be expressed in spectral form as A = \int_{\sigma(A)} \lambda \, dE(\lambda), where \sigma(A) is the spectrum of A (the set of eigenvalues), and E(\lambda) is the resolution of the identity, consisting of projection operators onto the eigenspaces associated with eigenvalue \lambda. For unbounded operators, which are common in quantum mechanics (e.g., for continuous spectra like position), the theorem extends via von Neumann's spectral theory of unbounded self-adjoint operators, ensuring a complete set of generalized eigenvectors. This decomposition directly links the operator's structure to measurement outcomes: upon measuring A, the system projects onto one of the eigenspaces, yielding eigenvalue \lambda with probability given by the expectation value of the corresponding projector.

The expectation value of an observable A in a given state quantifies the average outcome over many measurements. For a mixed state described by a density operator \rho (a positive semi-definite operator with \operatorname{Tr}(\rho) = 1), the expectation value is \langle A \rangle = \operatorname{Tr}(\rho A). In the special case of a pure state represented by a normalized vector |\psi\rangle, it simplifies to \langle A \rangle = \langle \psi | A | \psi \rangle. These formulas arise from the inner product structure of the Hilbert space and the probabilistic interpretation of quantum mechanics, providing a bridge between the operator representation and statistical predictions.

A canonical example is the position observable, represented by the multiplication operator \hat{x} \psi(x) = x \psi(x) on the Hilbert space L^2(\mathbb{R}), and the momentum observable, \hat{p} = -i \hbar \frac{d}{dx}, where \hbar = h / 2\pi and h is Planck's constant. These operators satisfy the canonical commutation relation [\hat{x}, \hat{p}] = i \hbar \hat{1}, which encodes the non-commutativity inherent to quantum mechanics and underlies phenomena like the Heisenberg uncertainty principle. This relation holds in the Weyl algebra of quantum mechanics, ensuring compatibility with the spectral properties of the operators.

For composite quantum systems, such as multiple particles, the total Hilbert space is constructed as the tensor product \mathcal{H} = \mathcal{H}_1 \otimes \mathcal{H}_2 \otimes \cdots \otimes \mathcal{H}_N of the individual particle spaces. Observables for the composite system are then self-adjoint operators on this tensor product space; for instance, the total position operator for two particles is \hat{X} = \hat{x}_1 \otimes \hat{1}_2 + \hat{1}_1 \otimes \hat{x}_2, where subscripts denote the subsystems. This structure preserves self-adjointness and allows for the description of entangled states and joint measurements across subsystems.
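As a concrete illustration of these expectation-value formulas, the following NumPy sketch (not from the original text; the observable, states, and numerical values are assumed purely for illustration) evaluates \langle A \rangle = \langle \psi | A | \psi \rangle for a pure state and \langle A \rangle = \operatorname{Tr}(\rho A) for a mixed state of a single qubit.

```python
import numpy as np

# Pauli-Z as an example observable; self-adjoint, so its eigenvalues (+1, -1) are real.
Z = np.array([[1, 0], [0, -1]], dtype=complex)
assert np.allclose(Z, Z.conj().T)              # A = A^dagger

# Pure state |psi> = (|0> + |1>)/sqrt(2): <Z> = <psi|Z|psi> = 0
psi = np.array([1, 1], dtype=complex) / np.sqrt(2)
print(np.vdot(psi, Z @ psi).real)

# Mixed state rho = 0.75|0><0| + 0.25|1><1|: <Z> = Tr(rho Z) = 0.5
rho = np.diag([0.75, 0.25]).astype(complex)
print(np.trace(rho @ Z).real)
```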

Projective measurements

In the standard formulation of quantum mechanics, projective measurements represent the ideal case where the measurement of an observable corresponds to a projection of the state onto one of the eigenspaces of the associated operator. This model, known as von Neumann's projective measurement postulate, states that upon measuring an observable A with eigenvalues a_k and corresponding orthonormal eigenvectors |\phi_k\rangle, the possible outcomes are the eigenvalues a_k, each occurring with probability |\langle \phi_k | \psi \rangle|^2, where |\psi\rangle is the pre-measurement state of the system. This postulate integrates the Born rule, which interprets the squared modulus of the wave function overlap as a probability, originally proposed by Max Born in the context of scattering processes.

More formally, the probabilities can be expressed using orthogonal projection operators P_k onto the eigenspaces, where p_k = \langle \psi | P_k | \psi \rangle and the projectors satisfy \sum_k P_k = I with P_k P_j = \delta_{kj} P_k. These projectors ensure that the measurement outcomes are mutually exclusive and exhaustive, reflecting the sharp, distinguishable nature of the eigenvalues in the spectral decomposition A = \sum_k a_k P_k. The Born rule thus provides the probabilistic foundation for projective measurements, linking the abstract formalism to empirical outcomes.

A key feature of projective measurements is their repeatability: if the measurement yields outcome a_k, an immediate re-measurement of the same observable will certainly reproduce a_k, as the state has been projected onto the eigenspace spanned by |\phi_k\rangle, making further projections within that eigenspace deterministic. This property underscores the "collapse" aspect of the measurement process, distinguishing it from unitary evolution. Projective measurements are compatible when the observables commute, i.e., [A, B] = 0, allowing simultaneous measurements that yield joint probabilities determined by the common eigenbasis. In such cases, the measurement can be represented by a joint set of projectors, preserving the statistics for each observable individually. However, projective measurements assume perfect distinguishability of outcomes and exclude intermediate or ambiguous results, limiting their applicability to idealized scenarios without noise or detector inefficiency.
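To make the projective postulate concrete, the following sketch (illustrative only; the state amplitudes are assumed) computes the Born-rule probabilities p_k = \langle \psi | P_k | \psi \rangle, the normalized post-measurement states, and the repeatability property for a single-qubit \sigma_z measurement.

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)             # eigenvalue +1
ket1 = np.array([0, 1], dtype=complex)             # eigenvalue -1
P = {+1: np.outer(ket0, ket0.conj()), -1: np.outer(ket1, ket1.conj())}

psi = np.sqrt(0.3) * ket0 + np.sqrt(0.7) * ket1    # assumed pre-measurement state

for outcome, Pk in P.items():
    p = np.vdot(psi, Pk @ psi).real                # p_k = <psi|P_k|psi>
    post = Pk @ psi / np.sqrt(p)                   # collapsed, normalized state
    repeat = np.vdot(post, Pk @ post).real         # probability of reproducing the outcome
    print(outcome, round(p, 3), round(repeat, 3))  # repeat == 1.0 demonstrates repeatability
```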

Generalized measurements (POVMs)

In quantum mechanics, generalized measurements provide a framework for describing non-ideal measurement processes that cannot be captured by standard projective measurements, allowing for outcomes that are ambiguous or incomplete. These measurements are formalized using positive operator-valued measures (POVMs), which consist of a set of positive semi-definite operators \{E_m\} acting on the Hilbert space of the system, satisfying the completeness relation \sum_m E_m = I, where I is the identity operator and the sum runs over all possible measurement outcomes m. The probability p_m of obtaining outcome m when measuring a state represented by the density operator \rho is given by p_m = \operatorname{Tr}(\rho E_m). This formulation, introduced in the operational approach to quantum probability, accommodates scenarios where the measurement apparatus introduces noise or partial information extraction, such as in real-world detectors.

Any POVM can be represented using Kraus operators \{K_m\}, a set of bounded operators satisfying E_m = K_m^\dagger K_m and the normalization condition \sum_m K_m^\dagger K_m = I. This Kraus operator formalism describes the measurement as a completely positive trace-preserving map, or quantum operation, that transforms the input state into an ensemble of post-measurement states while preserving the outcome probabilities. The representation arises from the general theory of state changes in open quantum systems, enabling the modeling of measurement-induced evolutions in a consistent algebraic framework.

POVMs are particularly useful for applications involving inefficient or noisy detectors, where projective measurements fail to account for realistic imperfections. For example, in photon detection, detector inefficiency (modeled by a transmissivity \eta < 1) and dark counts (false positives due to thermal noise) can be incorporated by treating the detection process as interaction with an environment, resulting in a POVM that mixes the ideal photon-number projectors with error terms; the inefficiency is often simulated by a fictitious beamsplitter preceding an ideal detector, leading to non-orthogonal effects E_m. This approach quantifies how such imperfections degrade measurement fidelity while still allowing probabilistic inference about the input state.

Projective measurements correspond to a special case of POVMs, where each E_m is an orthogonal projector onto a subspace of the Hilbert space, ensuring mutually exclusive and complete outcomes without ambiguity. In generalized schemes, POVMs reveal a fundamental information-disturbance trade-off: the amount of information extracted about the system, quantified by the mutual information between input and output, is bounded by the disturbance to the state, measured by the norm of the difference between the input and average output states.
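A minimal sketch of such a POVM, assuming an inefficient single-photon detector with efficiency \eta = 0.8, no dark counts, and a state restricted to the zero- and one-photon subspace (all values illustrative), is shown below; it checks the completeness relation and evaluates p_m = \operatorname{Tr}(\rho E_m).

```python
import numpy as np

eta = 0.8                                           # assumed detector efficiency
P0 = np.diag([1.0, 0.0]).astype(complex)            # |0><0| (vacuum)
P1 = np.diag([0.0, 1.0]).astype(complex)            # |1><1| (one photon)

E_click = eta * P1                                  # detector fires
E_noclick = P0 + (1 - eta) * P1                     # detector stays silent
assert np.allclose(E_click + E_noclick, np.eye(2))  # completeness: sum_m E_m = I

psi = np.array([1, 1], dtype=complex) / np.sqrt(2)  # superposition of 0 and 1 photons
rho = np.outer(psi, psi.conj())
print(np.trace(rho @ E_click).real,                 # 0.4
      np.trace(rho @ E_noclick).real)               # 0.6
```

Note that neither effect is a projector (for instance E_{\rm click}^2 \neq E_{\rm click}), which is precisely the non-orthogonal, non-projective character discussed above.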

Post-measurement state collapse

In quantum mechanics, the collapse postulate describes the abrupt change in the quantum state upon measurement of an observable. For a system in a pure state |\psi\rangle and an observable represented by a self-adjoint operator A with spectral decomposition A = \sum_k a_k P_k, where P_k are the orthogonal projectors onto the eigenspaces corresponding to eigenvalues a_k, a measurement yielding outcome a_k updates the state to the normalized projection |\psi'\rangle = \frac{P_k |\psi\rangle}{\sqrt{\langle \psi | P_k | \psi \rangle}}, with the probability of outcome k given by p_k = \langle \psi | P_k | \psi \rangle. This projection eliminates superpositions across different eigenspaces, restricting the state to the subspace associated with the observed eigenvalue.

For systems described by mixed states via the density operator \rho, the post-measurement state following outcome k is similarly obtained by projecting and normalizing: \rho' = \frac{P_k \rho P_k}{\operatorname{Tr}(P_k \rho)}, where p_k = \operatorname{Tr}(P_k \rho). This update rule preserves the trace condition \operatorname{Tr}(\rho') = 1 and ensures compatibility with the Born rule for probabilities.

In the framework of generalized measurements using positive operator-valued measures (POVMs), where the effects \{E_m\} satisfy \sum_m E_m = I and p_m = \operatorname{Tr}(E_m \rho), the Lüders rule provides a standard prescription for the selective post-measurement state assuming an ideal instrument: \rho' = \frac{\sqrt{E_m} \rho \sqrt{E_m}}{p_m} for outcome m, with \sqrt{E_m} denoting the unique positive square root of the effect operator E_m. This form generalizes the projective case, as E_m = P_m implies \sqrt{E_m} = P_m, and applies to unsharp measurements where the effects are not necessarily projectors, provided the outcomes are compatible.

The collapse process introduces irreversibility: the non-unitary state update is not invertible, and the non-selective measurement map generally reduces the purity of the state (increasing its von Neumann entropy), preventing recovery of the pre-measurement superposition through time-reversed evolution. This contrasts sharply with the unitary dynamics governed by the Schrödinger equation, which preserve information and exhibit time-reversal symmetry; measurement breaks this symmetry by entangling the system with the apparatus and environment, leading to effective decoherence that renders the collapse practically irreversible.
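A short sketch of the Lüders update for an unsharp effect follows; the effect and state are assumed purely for illustration, and SciPy's matrix square root is used for \sqrt{E_m}.

```python
import numpy as np
from scipy.linalg import sqrtm

E0 = np.diag([0.9, 0.1]).astype(complex)     # unsharp qubit effect, 0 <= E0 <= I
E1 = np.eye(2) - E0                          # completeness partner

rho = 0.5 * np.ones((2, 2), dtype=complex)   # pure state |+><+|

p0 = np.trace(E0 @ rho).real                 # outcome probability Tr(E_0 rho) = 0.5
sqrtE0 = sqrtm(E0)                           # unique positive square root of E_0
rho_post = sqrtE0 @ rho @ sqrtE0 / p0        # Lüders rule, selective update
print(round(p0, 3))
print(np.round(rho_post, 3))                 # trace 1; off-diagonals shrink from 0.5 to 0.3
```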

Basic examples

A fundamental example of quantum measurement involves the spin angular momentum of a spin-1/2 particle, where the observable along the z-direction, \sigma_z, has eigenvalues \pm 1 and corresponding eigenstates |\uparrow\rangle and |\downarrow\rangle. In the Stern–Gerlach experiment, a beam of such particles prepared in a superposition state, such as \frac{1}{\sqrt{2}} (|\uparrow\rangle + |\downarrow\rangle), passes through an inhomogeneous magnetic field that couples to \sigma_z, resulting in deflection into two discrete paths corresponding to the outcomes +1 or -1. The measurement projectors are P_+ = |\uparrow\rangle\langle\uparrow| and P_- = |\downarrow\rangle\langle\downarrow|, and upon detection in one path, the post-measurement state collapses to the corresponding eigenstate, erasing the superposition. The probabilities of these outcomes follow the Born rule: for the initial state |\psi\rangle = \frac{1}{\sqrt{2}} (|\uparrow\rangle + |\downarrow\rangle), the probability of measuring +1 is |\langle\uparrow|\psi\rangle|^2 = \frac{1}{2}, and similarly \frac{1}{2} for -1. This explicit computation illustrates how the squared modulus of the projection onto the eigenspace yields the outcome likelihood, with the state update given by the normalized projection: for outcome +1, the new state is |\uparrow\rangle.

An analogous case arises in quantum information with a qubit measured in the computational basis \{|0\rangle, |1\rangle\}, where the observable is \sigma_z with eigenvalues \pm 1. Consider the initial state |+\rangle = \frac{1}{\sqrt{2}} (|0\rangle + |1\rangle); upon measurement, the probability of obtaining |0\rangle (eigenvalue +1) is \frac{1}{2}, and the post-measurement state collapses to |0\rangle, while the probability for |1\rangle is likewise \frac{1}{2} with collapse to |1\rangle. The projectors are |0\rangle\langle 0| and |1\rangle\langle 1|, and this process randomizes the outcome while destroying the superposition, as dictated by the Born rule applied to the expansion coefficients.

For systems with continuous spectra, such as the position measurement of a quantum harmonic oscillator, the observable \hat{x} has an uncountable set of eigenvalues corresponding to all real positions x. The initial state is described by a wavefunction \psi(x) in the position basis, with the probability density for outcome x given by |\psi(x)|^2 dx via the Born rule. Upon measuring a specific value x_0, the post-measurement state formally collapses to the eigenstate \delta(x - x_0), though in practice finite-precision instruments produce a narrow distribution centered at x_0.

To highlight the effects of non-commuting observables, consider sequential measurements on a two-level system using \sigma_z followed by \sigma_x, where the Pauli matrices satisfy [\sigma_z, \sigma_x] = 2i \sigma_y \neq 0. Starting from |+\rangle = \frac{1}{\sqrt{2}} (|0\rangle + |1\rangle), a \sigma_z measurement yields |0\rangle or |1\rangle each with probability \frac{1}{2}; a subsequent \sigma_x measurement on, say, the outcome |0\rangle then gives |+\rangle or |-\rangle each with probability \frac{1}{2}, demonstrating disturbance since the second result is indeterminate despite the first measurement's definiteness. If the observables commuted, the second measurement would yield a predictable value compatible with the first, underscoring the incompatibility arising from the non-zero commutator.
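The sequential \sigma_z-then-\sigma_x example above can be checked numerically; the following Monte Carlo sketch (illustrative, plain NumPy) samples both measurements via the Born rule and recovers the four joint outcomes with frequencies near 1/4.

```python
import numpy as np

rng = np.random.default_rng(0)
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
plus = (ket0 + ket1) / np.sqrt(2)
minus = (ket0 - ket1) / np.sqrt(2)

def measure(state, basis):
    # Born-rule probabilities |<b_k|state>|^2, then sample an outcome and collapse
    probs = np.array([abs(np.vdot(b, state)) ** 2 for b in basis])
    probs /= probs.sum()
    k = rng.choice(len(basis), p=probs)
    return k, basis[k]

counts = np.zeros((2, 2), dtype=int)
for _ in range(10000):
    z, collapsed = measure(plus, [ket0, ket1])    # sigma_z measurement on |+>
    x, _ = measure(collapsed, [plus, minus])      # sigma_x measurement on the collapsed state
    counts[z, x] += 1
print(counts / 10000)   # each joint outcome near 0.25: the sigma_x result is fully randomized
```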

Historical Evolution

Old quantum theory and early measurements

In the late 19th century, classical physics faced significant challenges in explaining thermal radiation from black bodies, particularly the "ultraviolet catastrophe" predicted by the Rayleigh–Jeans law, which suggested infinite energy at high frequencies. To resolve this, Max Planck introduced the hypothesis of energy quantization in 1900, proposing that the energy of electromagnetic oscillators in a black body is restricted to discrete multiples of a fundamental unit, E = n h \nu, where n is a positive integer, h is Planck's constant, and \nu is the frequency. This ad hoc assumption yielded Planck's law for the spectral energy density, u(\nu, T) = \frac{8\pi h \nu^3}{c^3} \frac{1}{e^{h\nu / kT} - 1}, which accurately matched experimental measurements of blackbody spectra conducted by Otto Lummer and Ferdinand Kurlbaum. In this framework, measurement involved classical detection of radiation intensity, but the quantization implied discrete energy exchanges, interpreted initially as a statistical convenience rather than a fundamental property of light itself.

Building on Planck's ideas, Albert Einstein applied quantization directly to light in 1905 to explain the photoelectric effect, where light incident on a metal surface ejects electrons with kinetic energy K_{\max} = h\nu - \phi, dependent on frequency \nu but independent of light intensity. Experimental observations by Philipp Lenard and later Robert Millikan confirmed that electron emission occurs only above a threshold frequency and instantaneously, without accumulation, suggesting light consists of discrete packets or "quanta" (later called photons) that transfer energy in indivisible units during measurement. Here, measurement was treated classically—via detection of ejected electrons or photocurrents—but the discrete nature introduced paradoxes, as it conflicted with the continuous wave description of light in classical electromagnetism, implying that detectors register individual quanta rather than smooth waves.

Niels Bohr further developed these concepts in 1913 with his model of the hydrogen atom, building on Ernest Rutherford's nuclear structure but incorporating quantization to stabilize the atom against classical radiation losses. Bohr postulated stationary electron orbits where angular momentum is quantized as L = n \frac{h}{2\pi}, with n an integer, leading to discrete energy levels E_n = -\frac{13.6}{n^2} eV and spectral lines from transitions between them, matching the Balmer series observed in hydrogen spectra measured by Anders Ångström and others. Measurements of atomic emission and absorption lines thus revealed sharp, discrete frequencies, explained by abrupt "quantum jumps" between orbits, while the electron's motion in orbits remained classical. However, this hybrid approach treated measurement as a classical probe of quantum-restricted states, without addressing how continuous classical fields interact with discrete jumps.

To extend the theory beyond simple systems, Bohr formulated the correspondence principle in 1923, asserting that quantum predictions must asymptotically match classical electrodynamics for large quantum numbers n, such as in high-energy transitions where radiation behaves like classical waves. This semiclassical rule guided quantization conditions for more complex atomic spectra, like those in alkali metals, by analogy to Fourier components of classical orbits, and was validated against spectroscopic data from ionized helium and other elements. Measurements of line intensities and Zeeman splittings aligned with these rules in the classical limit, but the principle highlighted tensions, as it relied on classical intuitions for quantum regimes.
Despite these advances, the old quantum theory remained a patchwork of ad hoc rules applied to classical mechanics, with measurement invariably classical and external to the quantum system, leading to inconsistencies such as unexplained selection rules for transition probabilities and failures to predict intensities in multi-electron atoms. The absence of a unified treatment for wave-particle duality in measurement processes created paradoxes, exemplified by the need for discrete photon counts in scattering experiments that clashed with classical wave propagation, underscoring the theory's provisional nature. These limitations spurred the transition to a fully quantum mechanical framework in the mid-1920s.

Development of modern quantum mechanics

The development of modern quantum mechanics in the mid-1920s marked a profound shift from the ad hoc rules of the old quantum theory, which struggled to reconcile classical mechanics with discrete energy levels and atomic spectra. This period saw the emergence of rigorous mathematical frameworks that incorporated measurement as a fundamental process, resolving earlier paradoxes by treating observables and their outcomes in probabilistic terms. Key contributions emphasized the non-classical nature of quantum measurements, where outcomes are inherently unpredictable and tied to the preparation of the system. A pivotal precursor was the Compton effect, observed in 1923, which provided experimental evidence for the particle-like behavior of light during scattering interactions. Arthur Compton demonstrated that X-rays scattered off electrons in light elements exhibit a wavelength shift dependent on the scattering angle, consistent with the conservation of energy and momentum if photons are treated as particles with momentum h/\lambda. This effect underscored the dual wave-particle nature of light and highlighted how measurements could reveal corpuscular properties, challenging purely wave-based descriptions and paving the way for quantum treatments of measurement processes. In 1925, Werner Heisenberg formulated matrix mechanics, the first complete quantum theory, by representing physical observables such as position and momentum as arrays of non-commuting mathematical objects. These non-commuting operators implied that precise simultaneous measurements of conjugate variables like position and momentum are impossible, introducing an intrinsic uncertainty into quantum measurements that arises from the algebraic structure of the theory itself. Heisenberg's approach focused on observable quantities, avoiding unmeasurable classical trajectories, and laid the groundwork for understanding measurement as a process that yields discrete, probabilistic results rather than continuous classical values. Erwin Schrödinger's wave mechanics, introduced in 1926, offered an alternative formulation using wave functions to describe quantum states. In this framework, the wave function \psi evolves continuously according to a deterministic equation, but measurement of position causes the wave packet to collapse abruptly to a localized state, reflecting the particle's detected position. This collapse mechanism highlighted the discontinuous role of observation in quantum mechanics, where the act of measurement selects a definite outcome from the superposition encoded in \psi, transitioning from a delocalized wave to a sharply peaked distribution. Schrödinger's series of papers on quantization as an eigenvalue problem established wave mechanics as a powerful tool for calculating energy levels and predicting measurement outcomes in bound systems. Complementing Schrödinger's work, Max Born provided the probabilistic interpretation of the wave function in 1926, proposing that the probability of measuring a particle at a position is given by the square of the wave function's modulus, |\psi|^2. This statistical rule, derived in the context of collision processes, transformed the wave function from a putative physical wave into a probability amplitude, ensuring that measurement outcomes align with empirical frequencies rather than deterministic paths. 
Born's insight resolved ambiguities in interpreting wave mechanics by linking theoretical predictions directly to observable probabilities, earning him the 1954 Nobel Prize in Physics. By 1927, Paul Dirac and Pascual Jordan independently developed transformation theory, a unifying formalism that bridged matrix and wave mechanics through continuous transformations between representations of quantum states. Dirac's approach emphasized the physical interpretation of quantum dynamics, showing how overlap integrals between states yield transition probabilities under measurement, while Jordan's statistical transformation theory formalized the equivalence of the two mechanics via canonical transformations in an abstract Hilbert space. This synthesis not only reconciled competing formulations but also solidified the measurement postulate as a core element, where observables correspond to Hermitian operators with eigenvalues as possible outcomes.

Uncertainty principle and hidden variables theorems

The Heisenberg uncertainty principle establishes a fundamental limit on the precision with which certain pairs of physical properties, such as position and momentum, can be simultaneously known in quantum mechanics. Formulated by Werner Heisenberg in 1927, it arises from the non-commutativity of the corresponding operators in the quantum formalism, specifically the canonical commutation relation [\hat{x}, \hat{p}] = i \hbar. The principle is expressed mathematically as \Delta x \, \Delta p \geq \frac{\hbar}{2}, where \Delta x and \Delta p denote the standard deviations of position and momentum measurements, respectively, and \hbar = h / 2\pi with h being Planck's constant. This relation implies that improving the accuracy of one observable necessarily degrades the accuracy of the conjugate observable, reflecting the intrinsic probabilistic nature of quantum measurements rather than limitations in measurement technology.

In 1935, Albert Einstein, Boris Podolsky, and Nathan Rosen introduced the EPR paradox to challenge the completeness of quantum mechanics, arguing that it must be supplemented by local hidden variables to describe physical reality fully. They considered a hypothetical entangled pair of particles for which measuring the position (or momentum) of one instantaneously determines the corresponding value for the other, regardless of distance, a correlation Einstein later derided as "spooky action at a distance." The EPR argument holds that because either of two non-commuting observables of the distant particle can be predicted with certainty without disturbing it, both must correspond to elements of physical reality; since the quantum state does not assign definite values to both simultaneously, the description must be incomplete and supplemented by hidden variables that fix pre-existing values for all observables. This critique aimed to restore classical realism by suggesting quantum mechanics describes correlations but not underlying reality, prompting debates on locality and realism in measurement outcomes.

John Stewart Bell's theorem in 1964 provided a rigorous mathematical framework to test the EPR proposal, demonstrating that no local hidden variable theory can reproduce all predictions of quantum mechanics for entangled systems. Bell derived inequalities that must hold for local realistic theories, such as the CHSH inequality for two-qubit systems, in which the correlation functions satisfy |\langle AB \rangle + \langle AB' \rangle + \langle A'B \rangle - \langle A'B' \rangle| \leq 2 for local hidden variables, whereas quantum mechanics allows violations up to 2\sqrt{2}. The theorem assumes locality (outcomes at one site independent of distant settings) and realism (pre-existing values for observables), showing their incompatibility with quantum correlations in measurements on singlet states. By ruling out local realistic hidden variables, Bell's work shifted focus to non-local quantum features inherent in measurement processes.

The Kochen–Specker theorem, proved by Simon Kochen and Ernst Specker in 1967, extends these ideas by demonstrating contextuality in quantum measurements, eliminating non-contextual hidden variable theories even without locality assumptions. In a three-dimensional Hilbert space, they constructed a set of 117 vectors such that no assignment of definite values (0 or 1) to the corresponding projection operators can satisfy the functional relations imposed by the orthogonality bases, where the sum of values in each orthogonal triad must be 1. This impossibility arises because quantum outcomes depend on the measurement context—i.e., the compatible set of observables chosen—preventing a consistent, pre-determined value assignment independent of context.
The theorem underscores that quantum mechanics cannot be modeled by non-contextual hidden variables, reinforcing the role of measurement in determining observable values. In 1989, Daniel Greenberger, Michael Horne, and Anton Zeilinger proposed the GHZ state as a three-particle extension of Bell's theorem, offering a stronger proof against local hidden variables without relying on inequalities. The GHZ state is |\psi\rangle = \frac{1}{\sqrt{2}} (|000\rangle + |111\rangle), for which measurements of spin along specific axes yield perfect correlations: quantum mechanics deterministically predicts \sigma_x^{(1)} \sigma_x^{(2)} \sigma_x^{(3)} = +1 together with \sigma_x^{(1)} \sigma_y^{(2)} \sigma_y^{(3)} = \sigma_y^{(1)} \sigma_x^{(2)} \sigma_y^{(3)} = \sigma_y^{(1)} \sigma_y^{(2)} \sigma_x^{(3)} = -1, whereas any local hidden-variable assignment of pre-existing values consistent with the latter three relations forces the opposite sign for the xxx combination, leading to a direct contradiction. This all-or-nothing result highlights multipartite entanglement's power to refute local realism more starkly than probabilistic Bell inequalities. The GHZ argument thus provides an elegant demonstration of quantum non-locality in measurements involving more than two particles.
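The GHZ correlations quoted above can be verified directly with explicit 8 x 8 operators; the following sketch (an illustrative NumPy check, not an experimental procedure) confirms the +1 eigenvalue for XXX and -1 for the three XYY-type combinations.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)

def kron3(a, b, c):
    return np.kron(a, np.kron(b, c))

ghz = np.zeros(8, dtype=complex)
ghz[0] = ghz[7] = 1 / np.sqrt(2)                  # (|000> + |111>)/sqrt(2)

for label, op in [("XXX", kron3(X, X, X)), ("XYY", kron3(X, Y, Y)),
                  ("YXY", kron3(Y, X, Y)), ("YYX", kron3(Y, Y, X))]:
    print(label, round(np.vdot(ghz, op @ ghz).real, 6))   # +1 for XXX, -1 for the rest
```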

Emergence of the measurement problem

The measurement problem in quantum mechanics arises from the apparent incompatibility between the unitary, deterministic evolution of quantum states described by the Schrödinger equation and the non-unitary, probabilistic outcomes observed during measurements, which seem to cause an abrupt collapse of the wave function to a definite state. This tension became evident as quantum mechanics was formalized in the late 1920s and early 1930s, when the theory successfully predicted microscopic phenomena but struggled to explain how measurements yield classical-like results without invoking ad hoc postulates. Early formulations, such as those in the Copenhagen interpretation, posited that measurement induces a projection onto eigenstates of the observable, but failed to specify the physical mechanism or the boundary between quantum and classical regimes.

A key early articulation of this issue came from John von Neumann's analysis of the measurement process, where he modeled the interaction between a quantum system and a measuring apparatus as a unitary evolution that entangles the system with the apparatus, leading to a superposition of outcomes rather than a definite result. Von Neumann identified an infinite regress, or "von Neumann chain," in which the apparatus itself becomes quantum-entangled, requiring further measurement by another device, and so on, without resolution unless an external observer intervenes to collapse the state—raising questions about where this chain terminates. This chain highlighted the problem's philosophical depth, as it blurred the distinction between observed system and observer, suggesting that the entire universe might be in superposition absent some cutoff.

Erwin Schrödinger further dramatized the issue in 1935 with his famous cat thought experiment, envisioning a macroscopic superposition in which a cat in a sealed box is simultaneously alive and dead pending a quantum event, such as radioactive decay triggering poison release, until an observer opens the box and "measures" the outcome. This paradox underscored the absurdity of applying quantum superposition to everyday objects, questioning why macroscopic systems do not exhibit observable interference effects and emphasizing the observer's role in resolving superpositions. Building on this, Eugene Wigner's 1961 "friend" paradox extended the regress by considering a scenario where one observer (the friend) measures a quantum system inside a lab, entangling with it, while a second observer (Wigner) views the entire lab as a closed quantum system in superposition until measuring it himself—thus rendering the first measurement's outcome relative and observer-dependent.

Compounding these dilemmas is the preferred basis problem, which asks why wave function collapse, if it occurs, selects a particular basis (e.g., position over momentum) for the definite outcome, rather than an arbitrary one, given that the Schrödinger equation treats all bases equivalently under unitary evolution. This issue arises because without a preferred basis, the theory cannot consistently predict the classical correlations observed in measurements, such as pointer positions on dials. Ultimately, the measurement problem encapsulates the enigmatic transition from the coherent, superposed quantum world of microscopic particles to the definite classical world of macroscopic experience, where quantum effects seemingly vanish, challenging the universality of the quantum formalism.
Precursors like the uncertainty principle and hidden variables debates had hinted at foundational inconsistencies, but it was these thought experiments that crystallized the problem's urgency.

Decoherence and environmental interactions

Decoherence refers to the process by which quantum systems lose their coherent superpositions through interactions with the environment, leading to the emergence of classical-like behavior without invoking an actual wave function collapse. Wojciech Zurek, building on earlier work by H. Dieter Zeh, developed the decoherence program in the 1980s and 1990s, demonstrating that entanglement between a quantum system and its environment rapidly suppresses quantum interference effects, resulting in apparent classicality for macroscopic observables. In this framework, the environment acts as an unwitting witness to the system's state, entangling with it in such a way that the overall evolution remains unitary, while the local description of the system mimics a classical measurement outcome.

The mechanism of decoherence is formalized through the evolution of the system's density matrix. When the total Hilbert space includes both the system and the environment, the unitary dynamics entangle the system's states with environmental degrees of freedom; tracing over the environment then yields a reduced density matrix for the system in which the off-diagonal elements—corresponding to superpositions—are suppressed in a preferred basis known as the pointer basis. This diagonalization occurs preferentially in the basis that aligns with the interaction Hamiltonian between the system and environment, effectively selecting states robust against decoherence.

The timescale for decoherence is extremely short for macroscopic systems in thermal environments. For a particle interacting with a thermal bath, the decoherence time is suppressed relative to the relaxation time \tau_R by \tau_{\rm dec} \approx \tau_R \frac{\hbar^2}{2 m k_B T (\Delta x)^2}, where m is the mass, k_B T is the thermal energy, and \Delta x is the spatial separation between interfering wave packets; for a 1 g object at room temperature with \Delta x = 1 cm this suppression factor is of order 10^{-40}, giving \tau_{\rm dec} \sim 10^{-23} s even for relaxation times comparable to the age of the universe. Such rapid decoherence ensures that superpositions are undetectable in everyday macroscopic settings, explaining the classical appearance of the world.

Central to Zurek's approach is the concept of einselection (environment-induced superselection), whereby the environment dynamically selects a subset of system states—pointer states—that are stable and resistant to decoherence due to their redundancy in the environmental correlations. These pointer states emerge as the eigenstates of the interaction and can retain classical correlations with the environment over long times, forming the basis for objective classical reality in quantum mechanics. While decoherence resolves the preferred basis problem by identifying robust observables through einselection, it does not fully address why a single outcome is observed in each measurement, leaving the problem of definite outcomes to interpretive frameworks such as consistent histories or many-worlds. This limitation highlights that decoherence provides a dynamical account of the transition to classicality but relies on additional assumptions for a full resolution of the measurement process.
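A toy model of this mechanism, assuming a single environment qubit that copies the system's pointer basis via a CNOT-like interaction (a drastic simplification of a thermal bath, chosen only for illustration), shows how the reduced density matrix loses its off-diagonal coherences:

```python
import numpy as np

plus = np.array([1, 1], dtype=complex) / np.sqrt(2)   # system in a superposition
env0 = np.array([1, 0], dtype=complex)                # environment "ready" state
joint = np.kron(plus, env0)                           # system (x) environment

# "Which-path" interaction: the environment copies the system's pointer basis (CNOT)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)
joint = CNOT @ joint                                  # (|00> + |11>)/sqrt(2), fully entangled

rho_joint = np.outer(joint, joint.conj()).reshape(2, 2, 2, 2)
rho_sys = np.trace(rho_joint, axis1=1, axis2=3)       # partial trace over the environment
print(np.round(rho_sys, 3))                           # diag(0.5, 0.5): interference terms gone
```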

Interpretations of Measurement

Copenhagen interpretation

The Copenhagen interpretation, originating from the collaborative efforts of Niels Bohr and Werner Heisenberg in the 1920s, views measurement as the essential process that actualizes quantum probabilities into definite classical outcomes, with the measuring apparatus treated as a classical entity external to the quantum system. This approach underscores the irreducible role of observation in quantum mechanics, where the quantum system's indeterminate state is resolved only upon interaction with the measurement device.

Central to this interpretation is Bohr's principle of complementarity, introduced in his 1927 Como lecture, which addresses wave-particle duality by asserting that quantum entities exhibit mutually exclusive but complementary behaviors depending on the experimental context. For instance, arrangements designed to demonstrate wave-like interference, such as double-slit experiments, preclude access to the particle aspect, while particle-like detection setups reveal localized impacts but obscure wave propagation. These complementary descriptions—space-time coordination versus causality, or corpuscular versus undulatory pictures—cannot be simultaneously realized in a single measurement but together form a complete account of the phenomenon, limited by the Heisenberg uncertainty relations inherent to the formalism. The post-measurement state then corresponds to the collapse of the quantum wave function to one of the eigenstates of the observed observable.

A key feature is the Heisenberg–Bohr cut, delineating the boundary between the quantum domain of the system and the classical domain of the measurement apparatus, beyond which classical concepts must be invoked to describe the irreversible recording of results. This divide ensures that the uncontrollable disturbance from the measurement—due to the finite energy exchange required—prevents a joint classical description of complementary variables, enforcing the quantum-classical transition at the apparatus scale. Bohr emphasized that the position of this cut is not fixed but context-dependent, determined by the experimental arrangement to maintain the theory's consistency.

The interpretation adopts a pragmatic stance, maintaining that quantum mechanics serves solely to predict probabilities for measurement outcomes, without positing an underlying objective reality independent of observation. Bohr argued that the formalism exhausts the possibilities for describing atomic phenomena, rendering inquiries into unmeasured quantum states ill-posed, since any such description would require classical terms inapplicable to the quantum realm. Questions about "what happens between measurements," such as the trajectory of an unobserved particle, are thus deemed meaningless, as they transcend the theory's epistemological limits defined by verifiable predictions.

This framework significantly shaped early experiments in quantum optics and atomic physics, guiding researchers at Bohr's Copenhagen institute to design setups that accounted for the apparatus's classical role, as seen in spectroscopic studies of atomic transitions and early photon correlation measurements, ensuring interpretations aligned with probabilistic quantum predictions rather than classical intuitions.

Many-worlds interpretation

The many-worlds interpretation (MWI), originally formulated by Hugh Everett III in his 1957 doctoral thesis, posits that the universal wave function describing the entire universe evolves deterministically and unitarily according to the Schrödinger equation, without any collapse upon measurement. In this framework, what appears as a measurement outcome is instead the branching of the universal wave function into a superposition of multiple, non-interfering components, each corresponding to a possible result; observers within each branch experience a definite outcome relative to their own state, but all branches coexist in a multiverse. This relative-state approach eliminates the need for a special measurement postulate, treating observers as quantum systems fully described by the wave function, thereby resolving the measurement problem by extending quantum mechanics universally without introducing classical elements.

Decoherence plays a crucial role in the modern development of MWI by explaining how these branches become effectively orthogonal and non-interfering through interactions with the environment. When a quantum system interacts with its surroundings, the off-diagonal terms of the reduced density matrix representing superpositions are suppressed exponentially fast, leading to the appearance of classical probabilities and definite outcomes in each branch without actual wave function collapse. This environmental monitoring selects a preferred basis for branching, aligning the subjective experience of observers with the robust, stable states that survive decoherence, thus providing a physical mechanism for the apparent definiteness of measurement results in the multiverse.

Probabilities in MWI are derived from the weights of the branches, with the Born rule emerging as the unique measure consistent with rational decision-making under uncertainty. David Deutsch initially proposed a decision-theoretic argument showing that agents maximizing expected utility in the multiverse must assign probabilities proportional to the squared amplitudes of branch states, as deviations would lead to suboptimal choices across branches. This approach was formalized and rigorously developed by David Wallace, demonstrating that the Born rule follows from the norm-squared structure of the Hilbert space and basic axioms of decision theory, without assuming it a priori. Consequently, the subjective probabilities experienced by observers in different branches match the standard quantum predictions, grounding empirical success in the theory's unitary dynamics.

In MWI, there is no privileged measurement process distinct from other unitary interactions; all quantum evolutions, including those involving macroscopic apparatus and observers, proceed unitarily, with branching arising naturally from entanglement. This unitary universality contrasts with interpretations requiring collapse, offering a seamless description where measurement is just correlated evolution between system and observer states.

The interpretation has significant implications for quantum computation, as the branching multiverse enables massive parallelism across all possible computational paths simultaneously. In this view, a quantum algorithm like Shor's exploits the superposition of the universal wave function to evaluate exponentially many branches in parallel, with decoherence ensuring that the observer in each outcome branch retrieves the relevant result without interference from others.
This perspective underscores quantum computers as direct manifestations of the multiverse, where computational power stems from the ontological reality of all branches contributing coherently until measurement-like interactions select experienced outcomes.

Objective collapse theories

Objective collapse theories propose modifications to the standard quantum mechanical formalism by introducing spontaneous, stochastic wave function collapses that occur universally, independent of measurement apparatus or observers. These theories aim to resolve the measurement problem by providing a dynamical mechanism for the reduction of the wave function to a single outcome, ensuring definite physical states without invoking special roles for macroscopic devices. Unlike the unitary evolution of standard quantum mechanics, these models incorporate nonlinear and stochastic terms in the evolution equation, leading to localization of the wave function over time, with collapse rates tuned to be negligible for microscopic systems but significant for macroscopic ones.

The Ghirardi–Rimini–Weber (GRW) theory, introduced in 1986, represents one of the earliest and most influential objective collapse models. It modifies the Schrödinger equation by adding spontaneous localization events, modeled as Gaussian multipliers that hit the wave function at random positions and times, causing it to collapse toward a localized state. The collapse rate λ is approximately 10^{-16} s^{-1} per particle for microscopic systems, resulting in an extremely low probability of collapse for isolated elementary particles, while for macroscopic objects containing around 10^{23} particles, the effective rate becomes on the order of 10^{7} s^{-1}, ensuring rapid localization and classical-like behavior. This parameter choice suppresses deviations from standard quantum predictions at small scales while enforcing objective reduction at larger scales.

Building on the GRW framework, the continuous spontaneous localization (CSL) model provides a continuous approximation of the discrete collapses, treating the localization process as an ongoing stochastic diffusion rather than discrete jumps. Developed in 1990, CSL introduces a density-dependent collapse term in the master equation governing the system's evolution, which localizes the wave function by amplifying differences in mass density distributions. The collapse strength is parameterized by a rate λ, often set around 10^{-16} s^{-1} for microscopic systems in early formulations, with the localization scale determined by a correlation length parameter r_C, typically on the order of 10^{-7} m. This model avoids the instantaneous jumps of GRW, offering a smoother dynamics that better accommodates relativistic extensions and identical particle statistics.

In 1996, Roger Penrose proposed gravitational objective reduction (GOR) as a physically motivated variant, suggesting that superpositions of appreciably different spacetime geometries are fundamentally unstable, decaying on a timescale set by the gravitational self-energy difference between the superposed states and Planck's reduced constant ℏ. The collapse time τ is inversely proportional to this gravitational self-energy uncertainty, τ ≈ ℏ / E_G, where E_G quantifies the gravitational interaction within the superposition; for mesoscopic objects such as micrometre-scale water droplets, this yields collapse within fractions of a second, while microscopic systems remain unaffected for observable timescales. This approach ties collapse directly to gravity, predicting objective reduction without ad hoc parameters beyond fundamental constants.
These theories solve the measurement problem by making collapse a fundamental, observer-independent process that occurs spontaneously for all systems, eliminating the need for a special measurement postulate and ensuring a single definite outcome in every run of an experiment. Experimental tests, including non-interferometric setups with optomechanical resonators, cold atoms, matter-wave interferometry, charged macromolecules (2024), and rotational noise analyses (2025), have placed stringent upper bounds on collapse parameters, with no deviations from quantum mechanics observed as of 2025; for instance, in mass-proportional CSL variants, λ < 1.5 × 10^{-9} s^{-1} for r_C ≈ 10^{-7} m, with recent LISA Pathfinder data tightening previous limits by a factor of ~2 in certain regimes but still allowing room for the predicted scales.

Information-theoretic interpretations (e.g., QBism)

Information-theoretic interpretations of quantum mechanics treat the quantum state not as an objective description of physical reality but as an epistemic tool encoding an agent's knowledge or beliefs about outcomes of measurements. These views emphasize the role of information and probability in resolving interpretive challenges, positing that quantum indeterminacy arises from limitations in what observers can know rather than inherent ontological randomness. Pioneered in the late 20th and early 21st centuries, such approaches draw from quantum information theory and Bayesian probability to reframe measurement as a process of updating personal or relational information.

Quantum Bayesianism, or QBism, exemplifies this perspective by interpreting the quantum state vector |\psi\rangle as a representation of an agent's subjective probabilities for measurement outcomes, rather than a literal depiction of the system's physical state. Developed primarily by Christopher A. Fuchs and collaborators in the 2000s and 2010s, QBism views quantum mechanics as a normative framework for how rational agents should update their beliefs upon receiving measurement results, akin to Bayesian inference in classical statistics. In this framework, measurements do not cause an objective collapse of the wave function but instead serve as interactions that refine the agent's probabilistic expectations, eliminating the need for a special measurement postulate.

Relational quantum mechanics, proposed by Carlo Rovelli in 1996, extends this informational emphasis by asserting that quantum states are inherently relative to specific observers or systems, with no privileged absolute state existing independently. Measurements in this view yield outcomes that are valid only from the perspective of the interacting observer, avoiding any global collapse and treating quantum correlations as relational facts about systems' mutual information. Rovelli's formulation aligns with information-theoretic principles by focusing on how observers extract and share descriptive information about one another through interactions.

John Archibald Wheeler's "it from bit" hypothesis, articulated in 1989, further underscores information as the foundational element of physical reality, suggesting that all material entities ("it") emerge from binary yes/no questions posed through measurements ("bit"). Wheeler argued that the universe's structure arises from participatory acts of observation, where measurements generate informational records that define existence, bridging quantum mechanics with a deeper informational ontology.

These interpretations collectively dissolve the measurement problem by relocating indeterminacy to the realm of an agent's incomplete knowledge or relational perspectives, rather than positing ontological changes during measurement. Quantum states thus function as tools for prediction and information management, with no paradox arising from observer involvement. They also connect to quantum Darwinism, where environmental interactions redundantly broadcast classical-like information about preferred states, facilitating the emergence of shared, objective-seeming realities among observers without invoking absolute facts.

Applications in Quantum Information

Measurements in quantum circuits and error correction

In the quantum circuit model, measurements are typically implemented as projective operations that read out the state of qubits in the computational basis, collapsing the wavefunction to an eigenstate of the measurement operator. These measurements are essential for extracting classical information from quantum computations and are performed at the end of circuits in many superconducting quantum processors. For instance, in IBM's quantum hardware, such as the Eagle processor, projective measurements involve dispersive readout of transmon qubits using microwave resonators, achieving fidelities around 99% with single-shot detection. Similarly, Google's Sycamore processor employs projective measurements in the Z-basis via parametric amplification, enabling high-speed readout critical for random quantum circuit sampling experiments.

In quantum error correction (QEC), measurements play a pivotal role in syndrome extraction, where ancillary qubits are used to detect errors without directly disturbing the logical qubit state. Stabilizer codes, such as the surface code introduced by Alexei Kitaev, rely on repeated parity checks via syndrome measurements to identify bit-flip or phase-flip errors. In the surface code, these measurements are performed on plaquettes and vertices of a 2D lattice, using multi-qubit controlled gates to compute stabilizers such as products of Pauli X or Z operators; errors are inferred from non-trivial syndrome patterns, allowing correction while preserving the encoded logical information. This approach has been experimentally demonstrated on superconducting platforms, where syndrome readout circuits achieve error detection rates exceeding 99% per cycle. Imperfect readouts in these systems are often modeled using the positive operator-valued measure (POVM) formalism to account for non-projective effects like relaxation during measurement. A toy example of syndrome extraction for a simple repetition code is sketched at the end of this subsection.

The threshold theorem, established in the late 1990s, underpins fault-tolerant quantum computation by proving that if the physical error rate per gate or measurement is below a constant threshold—typically around 1% for leading codes like the surface code—arbitrarily long computations can be performed with arbitrarily low logical error rates through sufficient redundancy and repeated syndrome measurements. This result, first rigorously shown by Aharonov and Ben-Or, relies on concatenating error-correcting codes and assumes local error models, enabling scalable QEC via ongoing measurement and feedback. In practice, achieving fault tolerance requires error rates below ~10^{-3} for gates and measurements to suppress logical errors exponentially with code distance.

Mid-circuit measurements extend this framework by allowing projective readouts during circuit execution, enabling adaptive protocols where measurement outcomes dynamically control subsequent gates via classical feedback loops. These are crucial for real-time error correction and algorithms like quantum phase estimation, with implementations on trapped-ion and superconducting systems demonstrating conditional routing with latencies under 1 μs. For example, adaptive circuits on IBM hardware use mid-circuit measurements to implement error mitigation in variational quantum eigensolvers, where outcomes adjust gate parameters on-the-fly to improve convergence.

Recent advances post-2020 have realized logical qubits in trapped-ion systems with error rates below 10^{-3}.
Quantinuum's H-series processors, using ¹⁷¹Yb⁺ ions, have demonstrated logical qubits with two-qubit gate fidelities exceeding 99.9% and logical error rates suppressed to approximately 10^{-5} (0.001%), roughly 800 times lower than the underlying physical error rates, through repeated syndrome measurements as of April 2024. These achievements leverage the all-to-all connectivity of ion traps to implement stabilizer measurements efficiently, reducing the overhead for scaling to larger codes. As of June 2025, Quantinuum demonstrated fully fault-tolerant universal gate sets with repeatable error correction on the H-series, crossing key thresholds for utility-scale quantum computing. In December 2024, they entangled 50 logical qubits at over 98% fidelity. IonQ's November 2025 modular QEC roadmap and Oxford Ionics' May 2025 scalable-architecture plans further advance trapped-ion fault tolerance.
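As noted above, the following toy sketch illustrates the logic of syndrome extraction for the three-qubit bit-flip repetition code, a deliberately simplified stand-in for the surface-code stabilizers discussed earlier (the encoded amplitudes and error location are assumed for illustration). The parity checks Z_1 Z_2 and Z_2 Z_3 locate a single X error without revealing the encoded amplitudes.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def op(single, site):
    """Place a single-qubit operator on one of three qubits (identity elsewhere)."""
    factors = [single if k == site else I2 for k in range(3)]
    return np.kron(factors[0], np.kron(factors[1], factors[2]))

# Logical state a|000> + b|111> (amplitudes assumed for illustration)
a, b = np.sqrt(0.6), np.sqrt(0.4)
logical = np.zeros(8, dtype=complex)
logical[0], logical[7] = a, b

corrupted = op(X, 1) @ logical                    # bit-flip error on the middle qubit

Z0Z1 = op(Z, 0) @ op(Z, 1)                        # first parity check
Z1Z2 = op(Z, 1) @ op(Z, 2)                        # second parity check
syndrome = (np.vdot(corrupted, Z0Z1 @ corrupted).real,
            np.vdot(corrupted, Z1Z2 @ corrupted).real)
print(syndrome)                                   # (-1, -1): both checks flag the middle qubit
# Syndrome table: (+1,+1) no error, (-1,+1) qubit 0, (-1,-1) qubit 1, (+1,-1) qubit 2
```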

Quantum tomography and state reconstruction

Quantum tomography refers to the process of reconstructing the density operator \rho of a quantum system through a series of measurements on an ensemble of identical copies of the state. Full state tomography typically involves performing projective measurements in multiple mutually unbiased bases to obtain the statistics needed for reconstruction. For qubit systems, the density matrix can be expanded in the Pauli basis as \rho = \frac{1}{d} \sum_{i=0}^{d^2-1} r_i \sigma_i, where d is the dimension, \sigma_i are the Pauli operators (including the identity), and r_i are coefficients estimated from the expectation values \langle \sigma_i \rangle. This decomposition allows efficient numerical reconstruction for low-dimensional systems, with the measurement outcomes providing the probabilities required to solve the linear system for \rho. An efficient implementation for multiqubit states uses adaptive measurements to minimize the number of required experiments while achieving high accuracy.

Process tomography extends this to characterize quantum channels, which map input density operators to outputs via \mathcal{E}(\rho) = \sum_k K_k \rho K_k^\dagger, where K_k are Kraus operators. The process is reconstructed by preparing known input states, applying the channel, and performing state tomography on the outputs, yielding the superoperator representation. A common method uses the \chi-matrix in the Pauli basis, where \mathcal{E}(\rho) = \sum_{m,n} \chi_{mn} \sigma_m \rho \sigma_n^\dagger, with the elements \chi_{mn} determined from the measured expectation values. This approach fully determines the channel's action, including non-unitary effects like decoherence. Seminal experimental demonstrations confirmed the method's feasibility for simple gates, such as the controlled-NOT. The Choi–Jamiołkowski isomorphism provides an alternative representation by associating the channel with a bipartite state \sigma_{\mathcal{E}} = \sum_{ij} |i\rangle\langle j| \otimes \mathcal{E}(|i\rangle\langle j|), enabling process tomography via state tomography on this Choi state, which simplifies error analysis and verification.

To address the exponential resource demands of full tomography in high-dimensional systems, compressed sensing techniques exploit the sparsity or low-rank structure of \rho or the channel. By measuring in randomly chosen bases and solving a convex optimization problem, such as \min \|\rho\|_* subject to data constraints (where \|\cdot\|_* is the nuclear norm), the state can be reconstructed with sub-exponential sample complexity, scaling favorably for nearly pure states. This method has been experimentally validated for systems up to seven qubits, achieving fidelities above 0.99 with reduced measurement overhead compared to traditional tomography.

The fidelity F(\rho, \sigma) = \left( \operatorname{Tr} \sqrt{\sqrt{\rho} \sigma \sqrt{\rho}} \right)^2 serves as a key metric for quantifying the accuracy of reconstructed states or processes against ideal targets, ranging from 0 (orthogonal) to 1 (identical). This quantity captures both purity and overlap, providing a single scalar for benchmarking tomography results. For example, in compressed sensing reconstructions, average fidelities near 1 demonstrate the method's reliability for sparse states. Despite these advances, full tomography scales exponentially with the number of qubits, requiring O(d^2) parameters and measurements for dimension d = 2^n, limiting practical applications to small systems.
Post-2010 developments, including partial tomography focusing on subspaces or expectation values of interest, mitigate this by reconstructing only relevant components, such as reduced density matrices or process fidelities, with polynomial resources. Techniques like randomized Pauli measurements further enhance scalability, enabling characterization of up to tens of qubits in specific contexts.
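To make the Pauli-basis reconstruction concrete, the following Python sketch performs single-qubit state tomography on simulated data; the target state, shot count, and sampling model are illustrative assumptions rather than features of any particular experiment.

```python
# Minimal sketch of single-qubit state tomography via the Pauli expansion
# rho = (I + <X> X + <Y> Y + <Z> Z) / 2, using simulated measurement counts.
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def simulate_expectation(rho, pauli, shots=10_000, rng=np.random.default_rng(0)):
    """Sample +/-1 outcomes in the eigenbasis of a Pauli operator and average them."""
    vals, vecs = np.linalg.eigh(pauli)
    probs = np.real([vecs[:, k].conj() @ rho @ vecs[:, k] for k in range(2)])
    probs = np.clip(probs, 0, None)
    outcomes = rng.choice(vals, size=shots, p=probs / probs.sum())
    return outcomes.mean()

# Target state: |psi> = cos(pi/8)|0> + sin(pi/8)|1> (arbitrary example).
psi = np.array([np.cos(np.pi / 8), np.sin(np.pi / 8)], dtype=complex)
rho_true = np.outer(psi, psi.conj())

# Estimate <X>, <Y>, <Z> from finite samples and rebuild rho.
r = [simulate_expectation(rho_true, P) for P in (X, Y, Z)]
rho_est = 0.5 * (I2 + r[0] * X + r[1] * Y + r[2] * Z)

fidelity = np.real(psi.conj() @ rho_est @ psi)   # F = <psi|rho_est|psi> for a pure target
print("estimated Bloch vector:", np.round(r, 3))
print("fidelity with target:", round(float(fidelity), 4))
```

Finite shot noise makes the estimated Bloch vector, and hence the reconstructed \rho, only approximately physical; maximum-likelihood or least-squares projection onto the set of valid density matrices is the usual next step in practice.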

Measurement-based quantum computation (MBQC)

Measurement-based quantum computation (MBQC), also known as one-way quantum computing, is a paradigm in which universal quantum computation is performed solely through single-qubit measurements on a pre-prepared entangled resource state, eliminating the need for dynamic entangling gates during the computation phase. This model separates resource preparation from the computational steps, potentially simplifying implementation in systems where entanglement generation is more reliable than coherent control. The approach leverages correlations in the resource state to propagate quantum information via measurement outcomes, with classical feedforward adapting subsequent measurements to correct for the randomness of the results.

At the core of MBQC are cluster states, multipartite entangled states constructed on an underlying graph structure. Each vertex represents a qubit initialized in the |+\rangle state, and qubits are connected by controlled-Z gates along the edges to create the entanglement. These states serve as the universal resource: local measurements in rotated Pauli bases (with angles chosen according to the desired gate) implement arbitrary single-qubit unitaries and entangling operations by teleporting the logical qubit through the graph. The measurement outcomes introduce by-product Pauli corrections, which are accounted for adaptively to maintain computational fidelity.

The seminal Raussendorf-Briegel model, proposed in 2001, demonstrates how adaptive measurements on a linear cluster state can simulate a quantum circuit by directing information flow from input to output qubits. Measurements in the X-Y plane propagate and rotate the state, while Z-basis measurements remove qubits from the cluster, enabling the execution of arbitrary algorithms through a sequence of such operations. This model was extended in 2003 to a detailed framework for computation on two- and higher-dimensional cluster states, showing how the graph geometry dictates the parallelism and depth of the computation.

Universality in MBQC is established by the ability to simulate any unitary quantum circuit using measurement patterns on a 2D cluster-state lattice, with the lattice size scaling with the circuit's depth and width. Single-qubit rotations and Clifford gates are implemented directly via basis choices, while non-Clifford elements such as the T gate require additional measurement angles or magic-state resources integrated into the measurement sequence. This equivalence to the standard gate-based model holds for sufficiently large 2D clusters, with the computational power arising from the entanglement structure rather than from the local operations alone.

Fault tolerance in MBQC is achieved through topological encoding in extended lattices, such as the 3D cluster state, which uses surface-code-like structures embedded in the lattice for error detection and correction. Errors occurring during preparation, measurement, or storage are mapped to correctable defects via syndrome measurements on auxiliary qubits, providing protection without mid-circuit gates. Numerical analyses indicate error thresholds of approximately 0.75–1% per error source (preparation, gate, storage, and measurement), competitive with circuit-based models and enabling scalable computation under realistic noise levels.

Experimental realizations of MBQC have progressed to small-scale universal demonstrations in the 2020s, particularly on photonic and trapped-ion platforms.
In photonics, deterministic generation of 2D cluster states with up to tens of modes has enabled the execution of simple universal circuits on four qubits by fusing smaller graph states via interferometric measurements. Trapped-ion experiments have achieved small-scale universality with 14-ion linear clusters, implementing adaptive feedback for non-local gates and verifying entanglement via partial tomography, while recent protocols demonstrate verifiable random sampling tasks on more than 10 qubits using measurement-induced entanglement. As of January 2025, verifiable MBQC had been demonstrated on trapped-ion processors; in May 2025, Quantinuum reported high-fidelity logical teleportation on its H2-1 system, and in October 2025 resource-adaptive compilation techniques reduced execution times in noisy MBQC systems. These implementations highlight MBQC's potential for scalability in systems that favor offline entanglement generation over online coherent control.
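The elementary teleportation step underlying these implementations can be checked numerically. The Python sketch below (an arbitrary input state and measurement angle, chosen purely for illustration) builds a two-qubit cluster state, measures the first qubit in a rotated basis, and verifies that the output qubit carries H R_z(-\theta) applied to the input, up to a byproduct Pauli X^s determined by the outcome s.

```python
# Minimal sketch of one MBQC teleportation step on a two-qubit cluster state:
# input |psi> entangled with |+> by CZ, then the first qubit is measured in a
# rotated basis. Numerically checks output = X^s H Rz(-theta) |psi> up to phase.
import numpy as np

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
CZ = np.diag([1, 1, 1, -1]).astype(complex)

def Rz(theta):
    return np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])

def mbqc_step(psi, theta, rng):
    """One measurement step: returns (outcome s, normalized state of the output qubit)."""
    plus = np.ones(2, dtype=complex) / np.sqrt(2)
    state = CZ @ np.kron(psi, plus)                            # entangle input with |+>
    # Rotated measurement basis on qubit 1: (|0> +/- e^{i theta} |1>) / sqrt(2).
    basis = [np.array([1,  np.exp(1j * theta)]) / np.sqrt(2),
             np.array([1, -np.exp(1j * theta)]) / np.sqrt(2)]
    amps = [np.kron(b.conj(), np.eye(2)) @ state for b in basis]   # project qubit 1
    probs = np.array([np.linalg.norm(a) ** 2 for a in amps])
    probs /= probs.sum()
    s = rng.choice([0, 1], p=probs)
    return s, amps[s] / np.linalg.norm(amps[s])

rng = np.random.default_rng(1)
theta = 0.7
psi = np.array([0.6, 0.8], dtype=complex)                      # arbitrary input state
s, out = mbqc_step(psi, theta, rng)
expected = (np.linalg.matrix_power(X, s) @ H @ Rz(-theta)) @ psi   # byproduct X^s
print(f"outcome s = {s}, |<expected|out>| = {abs(np.vdot(expected, out)):.6f}")  # ~1.0
```

Chaining such steps along a chain or lattice, with the measurement angles adapted to the accumulated byproduct Paulis, is exactly the feedforward mechanism described above.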

Advanced Measurement Techniques

Quantum metrology and precision sensing

Quantum metrology leverages quantum mechanical principles, such as entanglement and squeezing, to achieve precision in estimating physical parameters that surpasses the capabilities of classical measurement strategies. In classical metrology, the precision for estimating a parameter, such as a phase shift, using N independent probes scales at the standard quantum limit (SQL), with uncertainty δθ ∝ 1/√N, due to the additive nature of uncorrelated noise. By contrast, quantum resources such as entanglement enable the Heisenberg limit (HL), where δθ ∝ 1/N, offering a quadratic improvement in sensitivity for the same number of probes. This enhancement arises from correlated quantum states that amplify the signal while collectively suppressing noise.

A key example of HL performance is phase estimation via Ramsey interferometry, in which entangled states are prepared, evolve under the parameter of interest, and are measured to extract the phase. NOON states, of the form (|N0⟩ + |0N⟩)/√2, in which N photons are delocalized between two modes, are particularly effective for this purpose, yielding a phase sensitivity scaling as 1/N due to the N-fold faster fringe oscillation in the interferometric readout. These states have been shown theoretically to saturate the HL in lossless scenarios, though practical implementations must contend with decoherence.

The fundamental bound on precision is quantified by the quantum Fisher information (QFI), which measures the sensitivity of a quantum state to changes in the parameter. The quantum Cramér-Rao bound (QCRB) states that the variance of any unbiased estimator is at least 1/QFI, providing the ultimate limit achievable with optimal measurements. For entangled states such as NOON or GHZ states, the QFI scales as N², enabling HL precision when the measurement strategy, such as a suitably chosen positive operator-valued measure (POVM), is tailored to maximize the information extracted.

Applications of quantum metrology span diverse fields, including timekeeping with atomic clocks, where entangled ensembles of atoms achieve near-HL stability, improving frequency measurements beyond SQL constraints for applications in navigation and fundamental physics tests. In gravitational wave detection, enhancements to interferometers such as LIGO incorporate squeezed vacuum states to reduce quantum noise, approaching HL sensitivity and extending the observable range for cosmic events. Magnetometry using nitrogen-vacancy (NV) centers in diamond exploits spin-entangled states for nanoscale field sensing, enabling enhanced precision in biomedical imaging and material characterization.

Recent advancements from 2020 to 2025 have demonstrated entangled sensor networks in laboratory settings that achieve near-HL performance, such as distributed atomic clocks linked via photonic entanglement to estimate phase differences with uncertainties scaling as 1/N across nodes. These networks mitigate local noise through global correlations, paving the way for scalable quantum-enhanced sensing in real-world environments.
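The SQL-versus-HL distinction can be read off directly from the pure-state quantum Fisher information, F_Q = 4(\langle G^2 \rangle - \langle G \rangle^2), for phase accumulation generated by an operator G. The short Python sketch below (assuming a collective-spin generator and small qubit numbers chosen for illustration) recovers F_Q ≈ N for a product state and F_Q ≈ N² for a GHZ state.

```python
# Minimal sketch comparing SQL and HL scaling via the pure-state quantum Fisher
# information F_Q = 4 (<G^2> - <G>^2) with generator G = (1/2) sum_k Z_k.
# Assumes the phase is imprinted as exp(-i theta G); illustrative only.
import numpy as np

Z = np.diag([1.0, -1.0])
I2 = np.eye(2)

def collective_Gz(n):
    """G = (1/2) * sum_k Z_k on n qubits, built from tensor products."""
    G = np.zeros((2**n, 2**n))
    for k in range(n):
        ops = [Z if j == k else I2 for j in range(n)]
        term = ops[0]
        for op in ops[1:]:
            term = np.kron(term, op)
        G += 0.5 * term
    return G

def qfi(state, G):
    """Quantum Fisher information of a pure state evolving under exp(-i theta G)."""
    mean = np.real(state.conj() @ G @ state)
    mean_sq = np.real(state.conj() @ (G @ G) @ state)
    return 4.0 * (mean_sq - mean**2)

for n in (2, 4, 6):
    plus = np.ones(2**n) / np.sqrt(2**n)                        # product |+>^n  (SQL)
    ghz = np.zeros(2**n); ghz[0] = ghz[-1] = 1 / np.sqrt(2)     # GHZ state      (HL)
    G = collective_Gz(n)
    print(f"n = {n}: QFI product = {qfi(plus, G):.1f} (SQL ~ n), "
          f"QFI GHZ = {qfi(ghz, G):.1f} (HL ~ n^2)")
```

Via the quantum Cramér-Rao bound δθ ≥ 1/√F_Q, these values translate directly into the 1/√N and 1/N uncertainty scalings quoted above.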

Weak measurements and continuous monitoring

Weak measurements represent a paradigm in quantum mechanics in which the interaction between the system and the measurement apparatus is sufficiently weak that only partial information about the observable is extracted, resulting in minimal disturbance to the system's state. This approach contrasts with strong projective measurements by allowing the system to evolve coherently between successive weak interactions, enabling the observation of subtle quantum dynamics without collapsing the wavefunction into an eigenstate. The concept was formalized in 1988 by Yakir Aharonov, David Albert, and Lev Vaidman, who introduced the notion of weak values as averages over pre- and post-selected ensembles.

In the weak measurement framework, the weak value of an observable A for a system prepared in initial state |\psi_i\rangle and post-selected in final state |\psi_f\rangle is \langle A \rangle_w = \frac{\langle \psi_f | A | \psi_i \rangle}{\langle \psi_f | \psi_i \rangle}, a generally complex quantity whose real part can lie far outside the eigenvalue range of A when the pre- and post-selected states are nearly orthogonal. The formulation arises from coupling the system weakly to a pointer, such that the pointer's position shift yields the real part of the weak value with only small back-action on the system. Because each interaction extracts little information, the weak value is obtained as an average over a large pre- and post-selected ensemble, with statistical uncertainty decreasing as the inverse square root of the ensemble size, making the technique suitable for probing anomalous quantum effects.

Continuous monitoring extends weak measurements to ongoing processes, in which the system is subjected to a sequence of infinitesimal weak interactions over time, leading to stochastic evolution. For diffusive (homodyne-type) detection, the conditioned dynamics are described by the stochastic master equation for the density operator \rho: d\rho = -i[H, \rho]\,dt + \gamma\left(L\rho L^\dagger - \tfrac{1}{2}\{L^\dagger L, \rho\}\right)dt + \sqrt{\gamma}\left(L\rho + \rho L^\dagger - \operatorname{Tr}[(L + L^\dagger)\rho]\,\rho\right)dW, where H is the Hamiltonian, L is the measurement (jump) operator (for example, the cavity annihilation operator in homodyne detection), \gamma is the measurement strength, and dW is the Wiener increment representing measurement noise. The deterministic Lindblad term describes the average decoherence induced by the coupling, while the stochastic innovation term conditions the state on the measurement record, capturing the diffusive back-action from continuous observation of L.

Quantum trajectory theory provides an unraveling of this continuous measurement process, decomposing the ensemble-averaged density-matrix evolution into an ensemble of individual stochastic pure-state paths. Each trajectory corresponds to conditioned evolution interrupted by measurement-induced updates, effectively representing the system's state as it jumps or diffuses in response to real-time measurement outcomes. This approach, which originated in quantum optics, facilitates numerical simulations of open quantum systems by averaging over many such paths to recover the master-equation solution. Seminal developments include the quantum-state-diffusion approach, which unravels the master equation into nonlinear stochastic Schrödinger equations for efficient computation of quantum trajectories.

Applications of weak measurements and continuous monitoring span quantum control and amplification techniques. In optomechanics, continuous weak measurements enable feedback control by providing real-time information about the mechanical oscillator's position or momentum, allowing stabilization against thermal noise through adaptive forces derived from the stochastic trajectories. For instance, weak monitoring of light fields coupled to a movable mirror facilitates quantum feedback loops that cool the oscillator toward its ground state.
Additionally, weak value amplification leverages post-selection to enhance signals beyond the eigenvalue range of the observable, for example by magnifying small phase shifts in optical interferometers for improved parameter estimation, though the anomalous values come at the cost of a low post-selection probability. Post-2020 experiments have demonstrated real-time trajectory tracking using weak measurements in superconducting circuits, where dispersive readout of transmon qubits yields noisy signals that are processed to reconstruct individual quantum paths. In one such implementation, a neural network analyzes continuous weak-measurement records from a superconducting qubit to monitor fast dynamics, such as relaxation and dephasing, achieving trajectory fidelities comparable to strong projective methods but with reduced disturbance. These advances enable applications in quantum error correction and state preparation by allowing mid-circuit corrections based on ongoing monitoring. Weak measurement schemes also benefit quantum metrology by enabling gentle, real-time probing that accumulates information over extended evolution times.
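For concreteness, the following Python sketch integrates the diffusive stochastic master equation given above for a driven qubit under continuous \sigma_z monitoring, using a simple Euler-Maruyama step; the drive strength, measurement rate, and time step are illustrative assumptions.

```python
# Minimal sketch of a single diffusive quantum trajectory for a qubit under
# continuous measurement of sigma_z, integrating
#   d rho = -i[H, rho] dt + gamma D[L] rho dt + sqrt(gamma) H[L] rho dW
# with an Euler-Maruyama step. Parameters are illustrative assumptions.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def dissipator(L, rho):
    return L @ rho @ L.conj().T - 0.5 * (L.conj().T @ L @ rho + rho @ L.conj().T @ L)

def innovation(L, rho):
    m = L @ rho + rho @ L.conj().T
    return m - np.trace(m).real * rho

def trajectory(steps=4000, dt=1e-3, gamma=1.0, omega=2.0, seed=2):
    rng = np.random.default_rng(seed)
    H = 0.5 * omega * sx                              # Rabi drive
    L = sz                                            # continuously monitored observable
    rho = np.array([[1, 0], [0, 0]], dtype=complex)   # start in |0>
    zs = []
    for _ in range(steps):
        dW = rng.normal(0.0, np.sqrt(dt))
        drho = (-1j * (H @ rho - rho @ H) + gamma * dissipator(L, rho)) * dt
        drho += np.sqrt(gamma) * innovation(L, rho) * dW
        rho = rho + drho
        rho = 0.5 * (rho + rho.conj().T)              # re-hermitize against step error
        rho /= np.trace(rho).real
        zs.append(np.trace(rho @ sz).real)
    return np.array(zs)

zs = trajectory()
print("conditioned <sigma_z>: start =", round(zs[0], 3), ", end =", round(zs[-1], 3))
```

Averaging many such trajectories (different seeds) washes out the dW term and recovers the unconditioned master-equation evolution, which is the unraveling property described above.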

Quantum non-demolition (QND) measurements

Quantum non-demolition (QND) measurements are a class of quantum measurements designed to determine the value of an observable without introducing additional quantum uncertainty into its subsequent determinations, thereby preserving the predictability of that observable for repeated interrogations. Formally, a QND measurement of an observable \hat{A} requires that the interaction Hamiltonian \hat{H}_{\text{int}} between the system and the measurement apparatus commute with \hat{A}, satisfying [\hat{A}, \hat{H}_{\text{int}}] = 0. This condition ensures that the measurement process does not alter the eigenstates of \hat{A} or increase its variance, allowing the observable to retain its value after the measurement.

A key feature of QND measurements is back-action evasion: the measurement extracts information about \hat{A} while the back-action required by the Heisenberg uncertainty relation is diverted onto the conjugate variable, leaving \hat{A} itself undisturbed. For instance, in quantum optics, the photon number operator \hat{n} of a cavity mode can be measured in a QND fashion via a cross-Kerr interaction with a probe field, which imprints a phase shift proportional to \hat{n} on the probe without changing the photon-number distribution. This approach has roots in early theoretical work on evading quantum limits in gravitational wave detection and has since been extended to a variety of systems.

In superconducting circuit quantum electrodynamics (cQED), dispersive readout serves as a prominent QND implementation of qubit state measurement. A transmon qubit is coupled to a microwave cavity such that the qubit state conditionally shifts the cavity's resonance frequency via the dispersive interaction Hamiltonian \hat{H}_{\text{disp}} / \hbar = \chi \hat{a}^\dagger \hat{a} \hat{\sigma}_z / 2, where \chi is the dispersive shift, \hat{a} the cavity annihilation operator, and \hat{\sigma}_z the qubit Pauli operator. Probing the cavity transmission reveals the qubit state through this frequency shift without inducing qubit transitions; the measurement back-action appears only as dephasing of superpositions, which leaves the measured \hat{\sigma}_z value intact and enables high-fidelity, repeatable readout with fidelities exceeding 99%.

Squeezed states enhance QND measurements by reducing quantum noise in the relevant quadrature of the probe field, allowing sub-shot-noise sensitivity while maintaining the non-demolition property. In optical systems, injecting a squeezed vacuum into a QND setup for photon-number measurement suppresses the noise added by the probe, improving the signal-to-noise ratio without back-action on the signal quadrature. Similarly, in mechanical oscillators, QND detection combined with reservoir engineering has stabilized squeezed states, achieving up to 3 dB of squeezing below the standard quantum limit.

Advances in the 2010s and 2020s have leveraged QND measurements on spin ensembles to surpass the standard quantum limit in atomic clocks, enabling spin squeezing and multi-round feedback for enhanced precision. In optical lattice clocks with strontium atoms, cavity-enhanced QND interactions project the collective spin onto the cavity field, generating entangled squeezed states with up to 8 dB of squeezing for ensembles of roughly 10^4 atoms, which reduces phase noise and supports iterative feedback loops for stability at the 10^{-18} level. These techniques, also demonstrated in trapped-ion and neutral-atom systems, have improved clock stability by factors of 2-3, paving the way for entanglement-enhanced metrology.
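The defining commutation condition can be verified directly for the dispersive Hamiltonian. The Python sketch below (with a truncated Fock space and an arbitrary value of \chi, both assumptions made for illustration) confirms that \hat{\sigma}_z commutes with \hat{H}_{\text{disp}} while \hat{\sigma}_x does not, which is why dispersive readout is QND for \hat{\sigma}_z but dephases superpositions.

```python
# Minimal sketch verifying the QND condition for dispersive readout:
# sigma_z commutes with H_disp = (chi/2) a^dag a sigma_z, while sigma_x does not.
# Fock space is truncated and chi is arbitrary (illustrative assumptions).
import numpy as np

n_max = 10                                           # cavity Fock-space truncation
a = np.diag(np.sqrt(np.arange(1, n_max)), k=1)       # annihilation operator
num = a.conj().T @ a                                 # photon number a^dag a
sz = np.diag([1.0, -1.0])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
I_cav = np.eye(n_max)

chi = 1.0                                            # dispersive shift, arbitrary units
H_disp = 0.5 * chi * np.kron(num, sz)                # dispersive interaction (hbar = 1)

def comm_norm(A, B):
    return np.linalg.norm(A @ B - B @ A)

Sz_full = np.kron(I_cav, sz)
Sx_full = np.kron(I_cav, sx)
print("||[sigma_z, H_disp]|| =", comm_norm(Sz_full, H_disp))   # 0: QND observable
print("||[sigma_x, H_disp]|| =", comm_norm(Sx_full, H_disp))   # nonzero: disturbed
```

The vanishing commutator is exactly the statement [\hat{A}, \hat{H}_{\text{int}}] = 0 from the opening paragraph, applied to \hat{A} = \hat{\sigma}_z.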

Experimental Implementations

Historical experiments (e.g., double-slit, Stern-Gerlach)

The double-slit experiment stands as a cornerstone in revealing the wave-particle duality of quantum entities and the profound impact of measurement on quantum behavior. Initially conducted by Thomas Young in 1801 using sunlight passing through two closely spaced slits in a card, the setup produced an interference pattern of alternating bright and dark fringes on a screen, providing compelling evidence for the wave nature of light over the then-dominant particle theory. This classical demonstration highlighted constructive and destructive interference, where light waves from the two slits reinforce or cancel each other depending on the path-length difference.

In the quantum era, the double-slit paradigm extended to matter waves following Louis de Broglie's 1924 hypothesis that particles possess wave-like properties with wavelength \lambda = h / p, where h is Planck's constant and p is the momentum. The first experimental confirmation came in 1927 through the work of Clinton Davisson and Lester Germer at Bell Laboratories, who directed a beam of slow electrons onto a nickel crystal surface and observed diffraction peaks in the scattered intensity, mirroring X-ray diffraction patterns and verifying electron wave interference. This electron diffraction, with peaks at angles predicted by the Bragg condition adapted for de Broglie waves, marked the first quantum realization of double-slit-like interference for particles. A more literal double-slit setup with electrons was achieved by Claus Jönsson in 1961, using microscopic slits to produce clear interference fringes, further solidifying the wave nature of electrons in the absence of path detection.

A defining feature of quantum measurement emerged from attempts to discern which slit a particle passes through: acquiring such which-path information collapses the interference pattern into a classical particle distribution, as the measurement disturbs the coherent superposition. This complementarity, articulated by Niels Bohr, underscores that mutually exclusive aspects—wave interference or particle trajectory—cannot be observed simultaneously without one precluding the other. Early demonstrations of this effect in double-slit setups with photons and electrons in the 1960s and 1970s confirmed that detectors at the slits eliminate the fringes, illustrating how measurement enforces a definite outcome at the expense of quantum coherence.

The Stern-Gerlach experiment, performed in 1922 by Otto Stern and Walther Gerlach, provided direct evidence for quantized angular momentum through measurement-induced projection. They passed a beam of neutral silver atoms, each with one unpaired valence electron, through an inhomogeneous magnetic field created by a specially shaped magnet. Instead of the continuous spread of deflections expected from classical vector precession, the atoms split into two discrete spots on a screen, corresponding to spin projections of \pm \hbar/2 along the field-gradient direction. This binary outcome demonstrated that spin, an intrinsic quantum property, is quantized, and that the measurement apparatus projects the atom's spin state onto eigenstates of the interaction Hamiltonian, yielding probabilistic but discrete results and highlighting measurement as a selective projection in Hilbert space.

The photoelectric effect, theoretically explained by Albert Einstein in 1905, revealed light's quantized nature through energy measurements.
Einstein proposed that light consists of discrete energy packets (quanta, later called photons) with energy E = h\nu, where \nu is the frequency, predicting that electron emission from a metal surface occurs only if h\nu > \phi (the work function) and with maximum kinetic energy K_\max = h\nu - \phi. This challenged classical wave theory, which expected emission to depend solely on intensity. Robert Millikan's meticulous 1914–1916 experiments verified the linear relation between K_\max and \nu across multiple metals, measuring h to within about 0.5% and confirming light quanta as particles that transfer discrete energy upon absorption. These measurements established that the detection of photoelectrons quantizes the interaction, with each absorbed photon ejecting at most one electron, embodying the corpuscular aspect enforced by the observational setup.

Arthur Compton's 1923 scattering experiments further corroborated the particle model by demonstrating momentum conservation in photon-electron collisions. Using X-rays of wavelength 0.71 Å incident on graphite, Compton measured the scattered photon's wavelength shift \Delta\lambda = (h / m_e c)(1 - \cos\theta), where m_e is the electron mass, c the speed of light, and \theta the scattering angle—matching predictions from treating the interaction as a billiard-ball collision between a photon and an electron. Observations showed shifts up to 0.048 Å for 180° backscattering, with the angular dependence aligning precisely with the formula and ruling out the wavelength invariance predicted by classical Thomson scattering. This post-interaction measurement proved that photons carry momentum p = h/\lambda and that the detection apparatus resolves the quantized momentum exchange, affirming light's particle-like behavior under momentum-probing conditions.

John Archibald Wheeler's delayed-choice gedanken experiment, proposed in 1978, probed the retroactive role of the measurement choice in quantum outcomes, extending the double-slit paradoxes. Wheeler envisioned inserting or removing a beamsplitter after a photon passes the slits but before detection, asking whether the photon's "past" path or wave behavior is determined by the final choice. Initial realizations in the 1980s, such as the 1987 interferometric setup by Hellmuth et al. using switched paths for single photons, confirmed that delayed decisions yield either interference (wave) or which-path (particle) results without altering the statistics, as if the choice influenced the historical trajectory. Modern implementations, including 2007 experiments with faint pulses and fast beam-switching, achieved near-unit visibility in both regimes, underscoring the measurement choice's seemingly retroactive determination of quantum reality. These setups illustrate how the timing of the measurement choice can apparently retroact on the system's evolution, a point central to quantum measurement theory.

Modern laboratory setups (e.g., trapped ions, superconducting qubits)

In modern quantum laboratories, measurements are performed on advanced platforms that enable high-fidelity readout of quantum states while scaling to many qubits. These setups rely on precise control over quantum systems in carefully isolated environments, such as ultra-high-vacuum chambers or cryogenic dilution refrigerators, to minimize decoherence and achieve near-ideal measurement outcomes. Key examples include trapped ions, superconducting circuits, photonic systems, neutral-atom arrays, and hybrid optomechanical devices, each optimized for specific aspects of quantum information processing.

Trapped-ion systems use electromagnetic traps to confine ions, such as ytterbium or calcium, which serve as qubits encoded in hyperfine or optical states. Qubit readout is achieved through state-dependent fluorescence detection: a laser drives a cycling transition for one qubit state, producing scattered photons that are collected and detected with photomultiplier tubes or cameras, while the other state yields essentially no fluorescence. This method has demonstrated average readout fidelities exceeding 99.9%, as reported in experiments with trap-integrated single-photon avalanche diodes achieving 99.91% fidelity in a 46 μs measurement time. Such performance, developed at institutions like NIST in the 2010s and beyond, supports scalable quantum information processing by enabling rapid, non-destructive verification of multi-ion registers.

Superconducting qubit platforms employ Josephson-junction-based qubits fabricated on silicon chips and cooled to millikelvin temperatures in dilution refrigerators to suppress thermal noise. Measurement occurs via dispersive readout, in which the qubit state shifts the frequency of a coupled readout resonator; a brief microwave pulse probes this shift, and the transmitted or reflected signal is amplified by quantum-limited Josephson parametric amplifiers before detection. This integrates seamlessly with cryogenic control electronics, as demonstrated by Google and IBM in the 2020s, with readout fidelities reaching 99.5% in parallel operations across the 105 qubits of Google's Willow processor. IBM's systems similarly achieve high-fidelity dispersive measurements, supporting error-corrected logical qubits in multi-chip architectures.

Photonic platforms encode qubits in properties of single photons, such as polarization or path, and perform measurements using superconducting nanowire single-photon detectors (SNSPDs) that offer near-unity detection efficiency (>98%) and low dark counts at cryogenic temperatures. These detectors are crucial for verifying entanglement in Bell inequality tests and for reconstructing quantum states via tomography, with arrival times and polarizations recorded to infer measurement outcomes. For instance, SNSPDs have enabled loophole-free Bell tests by detecting photons from entangled sources with minimal loss, achieving violations of local-realism bounds by more than 5 standard deviations. This setup facilitates scalable photonic quantum networks, with fidelities approaching 99% in integrated devices.

Neutral-atom arrays in optical lattices trap atoms such as rubidium or cesium using interfering laser beams to form site-addressable qubit registers, enabling quantum simulation of many-body systems. Site-resolved measurements rely on fluorescence imaging: a site-specific laser illuminates the atoms, and their emitted fluorescence is captured by high-numerical-aperture objectives onto EMCCD cameras, distinguishing states by the photon counts per lattice site. In 2020s experiments, this approach has achieved >99% fidelity for parallel readout of hundreds of atoms in 2D arrays, as in quantum simulators demonstrating Rydberg and Ising models. Recent advances as of 2025 include demonstrations of neutral-atom arrays with about 6,100 qubits, enhancing scalability for quantum simulation.
Such capabilities support measurement-based protocols in scalable neutral-atom quantum processors. Hybrid systems combine mechanical resonators with optical or microwave cavities for optomechanical quantum non-demolition (QND) measurements, in which the resonator's phonon number is probed without back-action on the measured number observable. In these setups, radiation pressure couples the mechanical motion to cavity photons, enabling QND detection of discrete vibrational quanta via homodyne readout of the transmitted light. Theoretical proposals and early experiments have aimed to resolve single-phonon states in silicon nitride membranes at room temperature, with targeted measurement fidelities above 90% for the phonon number, paving the way for hybrid quantum interfaces between mechanical and spin degrees of freedom.
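A simple statistical model conveys how threshold-based fluorescence readout reaches such fidelities: photon counts for the bright and dark states are approximately Poisson distributed, and the assignment fidelity is set by the separation of their means. The Python sketch below uses assumed mean count values and a 50/50 state prior purely for illustration.

```python
# Minimal sketch of state discrimination in fluorescence readout: photon counts for
# "bright" and "dark" qubit states are modeled as Poisson-distributed, and a count
# threshold assigns the state. Mean counts and the prior are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
mean_bright, mean_dark = 20.0, 0.5      # assumed mean photon counts per detection window
shots = 200_000

true_state = rng.integers(0, 2, size=shots)                  # 0 = dark, 1 = bright
counts = rng.poisson(np.where(true_state == 1, mean_bright, mean_dark))

def fidelity_for_threshold(threshold):
    assigned = (counts > threshold).astype(int)
    return np.mean(assigned == true_state)

# Scan integer thresholds and keep the one with the highest average assignment fidelity.
best = max(range(0, 25), key=fidelity_for_threshold)
print(f"best threshold = {best}, readout fidelity ~ {fidelity_for_threshold(best):.4f}")
```

Real readout chains add detector dark counts, stray light, and state decay during the detection window, which is why measured fidelities depend on both the count separation and the readout duration.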

Challenges in realizing ideal measurements

In quantum mechanics, realizing ideal measurements—those that project the system onto eigenstates without introducing additional errors—faces significant practical challenges, particularly in the superconducting platforms used in modern laboratories. One primary obstacle is decoherence during the readout process: the qubit's coherence time, typically ranging from 1 to 100 μs, limits the window for accurate measurement before environmental interactions cause energy relaxation or dephasing. This decoherence arises from coupling to lossy modes in the readout resonator or to thermal noise, degrading the measurement fidelity. To mitigate it, fast readout pulses on the order of 50-100 ns are employed, reducing the exposure time and preserving the qubit state by minimizing relaxation errors.

Readout fidelity, the probability of correctly identifying the qubit state, is further limited by crosstalk between adjacent qubits and by amplifier noise in the detection chain, often capping single-shot fidelities below 99% in early implementations. Cryogenic technologies, such as quantum-limited amplifiers and tunable Purcell filters operating at millikelvin temperatures, have improved these limits, achieving average readout fidelities of up to 99.9% in multiplexed setups as of 2025 by suppressing noise and linewidth broadening. These advancements reduce readout crosstalk to below 0.02% through frequency-selective filtering, enabling reliable state discrimination even in dense qubit arrays.

Scalability poses another barrier, as controlling and wiring more than 100 qubits requires thousands of lines for microwave signals, leading to excessive heat dissipation, signal crosstalk, and physical-footprint issues at cryogenic temperatures. Cryogenic multiplexers, including frequency-division schemes and cryo-CMOS circuits, address this by consolidating multiple control lines into fewer transmission paths, minimizing latency and power draw while supporting parallel readout.

Non-idealities in detectors introduce deviations from projective measurements, resulting in partial collapse of the wavefunction rather than full projection, and are formally described by positive operator-valued measures (POVMs) that account for the incomplete information gain. Imperfect detectors, such as those with limited efficiency or additional environmental coupling, leave the post-measurement state partially coherent, with effects such as increased linear entropy due to the associated decoherence channels. Experimental demonstrations in systems like double quantum dots and phase qubits confirm this partial collapse, in which the collapse probability is tunable but limited by decoherence rates, leading to Bayesian-updated states that blend the initial and measured outcomes.

Integrating measurements into quantum error correction (QEC) protocols enables active stabilization, in which repeated syndrome measurements detect and correct errors in real time without halting the computation. This approach uses stabilizer codes to monitor error syndromes without disturbing the logical information, suppressing effective decoherence rates below intrinsic limits and extending effective coherence for fault-tolerant operations. In continuously encoded qubits, such measurement-based correction has achieved error rates as low as 0.1% per cycle, demonstrating viability for scaling beyond 100 physical qubits.
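A common simplified way to model and mitigate such readout non-idealities is to treat state-assignment errors as a classical confusion matrix acting on the true populations, which can then be inverted; the error rates in the Python sketch below are illustrative assumptions, not values from a specific device, and the classical model is a coarse stand-in for the full POVM description.

```python
# Minimal sketch of modeling non-ideal readout as a classical confusion matrix and
# correcting measured populations by inverting it. Assignment error rates are assumed.
import numpy as np

# P(assigned j | prepared i): rows = prepared state, columns = assigned state.
p01, p10 = 0.02, 0.05            # assumed probabilities of misassigning 0->1 and 1->0
confusion = np.array([[1 - p01, p01],
                      [p10, 1 - p10]])

true_populations = np.array([0.7, 0.3])          # example underlying state populations
measured = true_populations @ confusion          # what the noisy readout reports

# Mitigate by applying the inverse confusion matrix (valid when it is well conditioned).
corrected = measured @ np.linalg.inv(confusion)
print("measured populations :", np.round(measured, 4))
print("corrected populations:", np.round(corrected, 4))   # recovers [0.7, 0.3]
```

In practice the confusion matrix is calibrated by preparing and measuring known basis states, and matrix inversion is replaced by constrained least squares when the inverse would produce unphysical (negative) populations.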
