
Local hidden-variable theory

Local hidden-variable theory is a class of deterministic models proposed to interpret quantum mechanics by introducing unobserved "hidden variables" that predetermine the outcomes of measurements on individual particles, while assuming locality—the principle that influences cannot propagate faster than light—and realism, the idea that physical properties have definite values independent of observation. These theories seek to eliminate the inherent randomness and apparent non-locality of standard quantum mechanics, providing a complete description of physical reality without probabilistic elements. The concept emerged from the 1935 Einstein-Podolsky-Rosen (EPR) paradox, in which Albert Einstein, Boris Podolsky, and Nathan Rosen argued that quantum mechanics' predictions for entangled particles implied "spooky action at a distance," suggesting the theory was incomplete and required supplementary variables to restore locality and determinism. In 1964, John Bell formalized a test for such theories by deriving inequalities that any local hidden-variable model must satisfy when predicting correlations between measurements on spatially separated entangled particles, such as entangled photons or electrons in the spin singlet configuration. Quantum mechanics, however, predicts violations of these Bell inequalities for certain measurement angles, implying that no local hidden-variable theory can fully reproduce quantum results without allowing non-local influences. Subsequent experiments, starting with those by John Clauser and Stuart Freedman in 1972 and Alain Aspect in 1981-1982, confirmed quantum predictions by violating Bell inequalities, progressively closing potential loopholes such as detector efficiency and locality. More recent loophole-free tests, including those using superconducting circuits in 2023 and Hardy's paradox in 2024, have further ruled out local hidden-variable theories with high statistical significance, supporting quantum non-locality while leaving room for non-local hidden-variable alternatives like Bohmian mechanics. These findings underscore the tension between quantum mechanics and classical intuitions of locality, influencing fields from quantum information science to foundational physics.

Definition and Principles

Core Assumptions

Local hidden-variable theories posit that the outcomes of quantum measurements are determined by pre-existing properties of the physical system, independent of the measurement process itself. This principle, known as realism, asserts that physical quantities possess definite values prior to observation, corresponding to what Einstein, Podolsky, and Rosen termed "elements of physical reality." Specifically, if a physical quantity can be predicted with certainty without disturbing the system, it must correspond to such an element, implying that quantum mechanics' wave function provides an incomplete description of reality. Central to these theories is determinism, the idea that future events, including measurement outcomes, are fully fixed by the initial conditions of the system and a set of underlying parameters. These parameters, denoted as hidden variables λ, serve as a complete specification of the system's state, supplementing or replacing the probabilistic quantum wave function. In this framework, λ determines the results of all possible measurements, ensuring that the theory yields definite predictions rather than probabilities. For instance, in models addressing spin measurements, λ dictates specific outcomes like +1 or -1 for given measurement settings. This deterministic approach contrasts sharply with standard quantum mechanics, which treats measurement outcomes as inherently probabilistic, governed by the Born rule and the wave function's evolution under the Schrödinger equation. Local hidden-variable theories aim to restore a classical-like picture of reality while maintaining the requirement that they reproduce all empirical predictions of quantum mechanics for measurements that are compatible—meaning those that do not involve simultaneous incompatible observables on the same system. These theories also incorporate locality, stipulating that influences propagate no faster than light, though this condition is explored separately.
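
The following minimal sketch in Python (an illustration only; the settings and the random assignment are hypothetical, not drawn from any published model) shows how such a hidden variable can encode realism and determinism together: λ is simply a complete table of pre-existing ±1 values, one for each possible measurement setting, and "measurement" merely reveals the entry for the chosen setting.

```python
# A minimal sketch of realism + determinism (illustrative only):
# lambda is a lookup table that pre-assigns a definite +/-1 value to every
# possible measurement setting, so measurement merely reveals that value.
import random

SETTINGS = ["x", "y", "z"]  # hypothetical measurement settings

def sample_hidden_variable():
    """Draw lambda: a complete, pre-existing assignment of outcomes."""
    return {setting: random.choice([+1, -1]) for setting in SETTINGS}

def measure(lam, setting):
    """Deterministic outcome, fixed by lambda and the local setting alone."""
    return lam[setting]

lam = sample_hidden_variable()
print({s: measure(lam, s) for s in SETTINGS})  # e.g. {'x': 1, 'y': -1, 'z': 1}
```

In such a model, statistical predictions arise only from averaging over many draws of λ, not from any indeterminism in the individual outcomes.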

Locality Condition

In local hidden-variable theories, the locality condition stipulates that the outcome of a measurement on one particle cannot be instantaneously influenced by the choice of setting or outcome on a spacelike-separated particle, ensuring that physical influences propagate no faster than the speed of light. This assumption, central to such theories, posits that any correlations observed between distant events must arise solely from shared variables established prior to the separation of the systems, without ongoing superluminal signaling. A key aspect of this condition is parameter independence, which requires that the probability of an outcome at one location depends only on the local setting and the hidden variable \lambda, and not on the remote setting. Formally, for outcomes A and B at two sites with local settings a and b, respectively, the marginal probabilities satisfy P(A|a, b, \lambda) = P(A|a, \lambda) and P(B|a, b, \lambda) = P(B|b, \lambda). Similarly, outcome independence ensures that the outcome at one site is statistically independent of the actual outcome at the distant site, given \lambda and the settings, such that the joint probability factorizes as P(A, B|a, b, \lambda) = P(A|a, \lambda) \cdot P(B|b, \lambda). For spacelike-separated measurement events, the overall correlations in the theory are then determined by integrating over the distribution of the shared hidden variables: the joint probability P(A, B|a, b) = \int P(A|a, \lambda) P(B|b, \lambda) \rho(\lambda) \, d\lambda, where \rho(\lambda) is the probability density of \lambda. This formal structure enforces that all nonlocal appearances in quantum predictions must be traceable to the initial common cause encoded in \lambda, without direct causal links between the separated sites. The locality condition aligns with special relativity by preserving relativistic causality: influences are confined within light cones, upholding the no-signaling principle that prevents information transfer faster than light, even in the presence of hidden variables that complete the quantum description under determinism and realism.
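
As a numerical illustration of why the factorized form automatically respects no-signaling, the Python sketch below (with arbitrary, randomly chosen response probabilities rather than any particular physical model) builds a joint distribution of the form P(A, B|a, b) = \sum_\lambda \rho(\lambda) P(A|a, \lambda) P(B|b, \lambda) over a discrete hidden variable and checks that Alice's marginal probabilities do not depend on Bob's setting.

```python
# Sketch: verify that the factorized form
#   P(A,B|a,b) = sum_l rho(l) P(A|a,l) P(B|b,l)
# automatically satisfies parameter independence (no-signaling).
# The response probabilities below are arbitrary illustrative numbers.
import itertools
import numpy as np

rng = np.random.default_rng(1)

n_lambda = 5
rho = rng.dirichlet(np.ones(n_lambda))   # distribution over the hidden variable
settings = [0, 1]
outcomes = [+1, -1]

# P(A=+1 | a, lambda) and P(B=+1 | b, lambda), drawn at random
pA = rng.random((len(settings), n_lambda))
pB = rng.random((len(settings), n_lambda))

def joint(A, B, a, b):
    """Joint probability P(A, B | a, b) from the factorized local model."""
    pa = pA[a] if A == +1 else 1 - pA[a]
    pb = pB[b] if B == +1 else 1 - pB[b]
    return np.sum(rho * pa * pb)

# Alice's marginal P(A | a, b) must not depend on Bob's setting b
for a, A in itertools.product(settings, outcomes):
    marginals = [sum(joint(A, B, a, b) for B in outcomes) for b in settings]
    assert np.allclose(marginals[0], marginals[1])
print("factorized local model satisfies no-signaling")
```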

Historical Context

Early Proposals

The origins of local hidden-variable theories trace back to early critiques of quantum mechanics' completeness, particularly the 1935 paper by Albert Einstein, Boris Podolsky, and Nathan Rosen, known as the EPR paradox. In this work, the authors argued that quantum mechanics could not provide a complete description of physical reality because it allowed for instantaneous correlations between distant particles without a mechanism for such influences, implying the need for additional "elements of reality" that could be modeled by hidden variables to restore determinism and locality. Their analysis focused on entangled systems, such as position-momentum correlations in a two-particle state, to highlight what they saw as an incompleteness in the theory's probabilistic predictions.

Prior to EPR, John von Neumann's 1932 book Mathematical Foundations of Quantum Mechanics presented an influential proof against the existence of hidden variables, claiming that no such addition could reproduce quantum statistics without contradicting the formalism. Von Neumann assumed that hidden variables would need to yield dispersion-free ensembles, but his argument rested on an additivity assumption for expectation values that need not hold for hidden-variable models, a flaw later identified in critiques. In 1935, Grete Hermann published a philosophical analysis challenging the implications of quantum indeterminacy for causality, arguing in her essay "Die Naturphilosophischen Grundlagen der Quantenmechanik" that von Neumann's proof failed to rule out hidden variables because it did not adequately address how non-commuting observables might allow for deterministic substructures underlying statistical outcomes. Hermann's work emphasized preserving Kantian causality while accommodating quantum statistics, suggesting that hidden factors could resolve apparent acausality without violating locality.

In 1952, David Bohm revived Louis de Broglie's earlier pilot-wave idea from 1927, proposing a hidden-variable interpretation in which particles follow definite trajectories guided by a pilot wave, restoring determinism but requiring non-local influences to match quantum predictions. Bohm's formulation, detailed in two papers, contrasted sharply with the quest for strictly local theories, as its quantum potential acted instantaneously across space, yet it inspired subsequent efforts to develop local variants. In the decades that followed, physicists explored modifications to de Broglie-Bohm approaches and other deterministic models, aiming to incorporate locality by assuming hidden variables that influenced outcomes only through local interactions, though these attempts struggled with reproducing entanglement correlations without non-local elements. Such proposals, often building on EPR motivations, sought to complete quantum mechanics while adhering to relativistic principles, setting the stage for later rigorous tests.

Bell's Theorem Impact

John Bell's 1964 theorem demonstrated that no local hidden-variable theory can reproduce all the predictions of quantum mechanics for entangled particles, particularly the correlations observed in measurements on spatially separated systems. This result arose as a direct response to the Einstein-Podolsky-Rosen (EPR) paradox of 1935, which questioned the completeness of quantum mechanics, and David Bohm's 1952 formulation of a nonlocal hidden-variable interpretation that successfully reproduced quantum predictions but violated locality. Bell's key insight was that quantum mechanics permits correlations stronger than those allowed by local realism—the combination of locality (no faster-than-light influences) and realism (pre-existing values for observables)—for certain entangled states, such as the spin singlet state of two particles.

The implications of Bell's theorem were profound, shifting the debate from philosophical speculation to testable predictions and challenging the viability of local hidden-variable theories as alternatives to standard quantum mechanics. Early experimental efforts to verify these predictions began with the 1972 test by Stuart Freedman and John Clauser, which used entangled photons and confirmed a violation of the Bell inequality by about 6.5 standard deviations, aligning with quantum mechanical expectations over local hidden-variable models. This was followed by more refined experiments by Alain Aspect and collaborators in 1981 and 1982, employing time-varying analyzers to better approximate spacelike separation and demonstrating violations exceeding 5 standard deviations, further supporting quantum mechanics while ruling out local realism.

Despite these confirmations, initial experiments left open several "loopholes" that could allow local hidden-variable theories to evade refutation: the detection loophole (due to low-efficiency detectors potentially biasing results toward quantum-like correlations), the locality loophole (insufficient spacelike separation allowing light-speed signaling), and the freedom-of-choice loophole (measurement settings not truly random, possibly correlated with hidden variables). Subsequent experiments addressed these systematically; for instance, the 2015 Delft experiment by Bas Hensen et al. closed both the detection and locality loopholes using entangled electron spins separated by 1.3 km, obtaining a Bell parameter of S = 2.42 \pm 0.20, exceeding the classical bound of 2 by about two standard deviations. Similarly, the 2015 Vienna experiment by Marissa Giustina et al. closed the same two loopholes with entangled photons, violating the CH-Eberhard Bell inequality with a p-value of 3.74 \times 10^{-31} (11.5 standard deviations). The freedom-of-choice loophole was tackled in the 2017 cosmic Bell test by Johannes Handsteiner et al. in Vienna, using light from distant stars (emitted around 600 years before detection) to generate measurement settings, confirming quantum violations with CHSH values of 2.425 (7.31 standard deviations) and 2.502 (11.93 standard deviations) in the two runs while ensuring no causal influence from hidden variables on choices. These loophole-free tests have solidified the incompatibility of local hidden-variable theories with quantum mechanics, affirming nonlocal quantum correlations as a fundamental feature of nature.

Mathematical Framework

Hidden Variable Models

Local hidden-variable theories posit that the outcomes of quantum measurements can be predetermined by underlying hidden variables that complete the quantum description, while adhering to the principles of realism and locality. These models assume that measurement results are functions solely of local settings and a shared hidden variable distributed according to some probability density, without any instantaneous influence between distant systems. In the standard formulation, consider two distant observers, Alice and Bob, measuring observables on their respective particles. Alice's outcome is denoted by A(a, \lambda), where a is her chosen setting and \lambda is the hidden variable, typically taking values \pm 1 for outcomes like spin components. Similarly, Bob's outcome is B(b, \lambda), with b as his setting. The hidden variable \lambda is drawn from a probability distribution \rho(\lambda), normalized such that \int \rho(\lambda) \, d\lambda = 1, and assumed independent of the measurement settings a and b. Locality is enforced by requiring that Alice's outcome A(a, \lambda) depends only on her local setting a and \lambda, remaining independent of Bob's remote setting b. Analogously, B(b, \lambda) is independent of a. This ensures no signaling or causal influence propagates between the separated measurement sites. The joint expectation value for the product of outcomes, which quantifies correlations, is then given by the integral \langle A B \rangle (a, b) = \int A(a, \lambda) B(b, \lambda) \rho(\lambda) \, d\lambda. This expression must reproduce the quantum mechanical predictions for joint measurements to be empirically viable. For the theory to match quantum mechanics, the marginal probabilities for local observables must also align with the Born rule. Specifically, the single-party expectation \langle A(a) \rangle = \int A(a, \lambda) \rho(\lambda) \, d\lambda (and similarly for Bob) should equal the quantum averages, such as zero for certain symmetric states like the singlet. In the CHSH variant, often used for bipartite scenarios, the correlation function is defined as E(a, b) = \int A(a, \lambda) B(b, \lambda) \rho(\lambda) \, d\lambda, which directly corresponds to the expectation \langle A B \rangle (a, b) and facilitates tests of locality through derived inequalities. These marginal and correlation requirements ensure that any local hidden-variable model is indistinguishable from quantum mechanics at the level of local statistics.
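
As a concrete illustration of this framework, the Python sketch below evaluates E(a, b) = \int A(a, \lambda) B(b, \lambda) \rho(\lambda) \, d\lambda by Monte Carlo for a simple model of the kind Bell considered: \lambda is a uniformly random unit vector, A = \operatorname{sign}(\mathbf{a} \cdot \lambda), and B = -\operatorname{sign}(\mathbf{b} \cdot \lambda). The model is purely illustrative; it matches the quantum singlet correlation at \theta = 0^\circ, 90^\circ, and 180^\circ but gives the linear dependence -1 + 2\theta/\pi in between rather than -\cos\theta.

```python
# Monte Carlo estimate of E(a,b) = ∫ A(a,λ) B(b,λ) ρ(λ) dλ for a simple
# illustrative local model: λ is a uniformly random unit vector,
# A = sign(a·λ), B = -sign(b·λ).  A sketch, not a model matching quantum theory.
import numpy as np

rng = np.random.default_rng(0)

def random_unit_vectors(n):
    """Sample n unit vectors uniformly on the sphere (the hidden variables)."""
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def lhv_correlation(a, b, n_samples=500_000):
    lam = random_unit_vectors(n_samples)
    A = np.sign(lam @ a)
    B = -np.sign(lam @ b)
    return np.mean(A * B)

for theta_deg in [0, 45, 90, 135, 180]:
    theta = np.radians(theta_deg)
    a = np.array([0.0, 0.0, 1.0])
    b = np.array([np.sin(theta), 0.0, np.cos(theta)])
    E_model = lhv_correlation(a, b)
    E_linear = -1 + 2 * theta / np.pi      # analytic value for this model
    E_quantum = -np.cos(theta)             # singlet prediction
    print(f"{theta_deg:3d} deg  model {E_model:+.3f}  "
          f"analytic {E_linear:+.3f}  quantum {E_quantum:+.3f}")
```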

Bell Inequalities

Bell inequalities represent quantitative constraints on the statistical correlations observable in experiments involving spatially separated measurements, arising directly from the assumptions of local hidden-variable theories. These inequalities were first derived by John Bell in 1964 as part of his analysis of the Einstein-Podolsky-Rosen paradox, demonstrating that local realism imposes bounds on joint measurement outcomes that quantum mechanics can violate. A particularly influential formulation, known as the Clauser-Horne-Shimony-Holt (CHSH) inequality, was developed in 1969 to provide a testable prediction for experiments with photon pairs in entangled states.

In the CHSH framework, consider two parties, Alice and Bob, each performing measurements on their respective particles using one of two settings, denoted a, a' for Alice and b, b' for Bob. The measurement outcomes are assigned values \pm 1, and the correlation function E(a,b) is defined as the expectation value \langle A(a) B(b) \rangle, where A(a) and B(b) are the outcomes. Under local hidden-variable theories, the outcomes are determined by a shared hidden variable \lambda distributed according to a probability density \rho(\lambda), such that A(a) = A(a, \lambda) = \pm 1 and B(b) = B(b, \lambda) = \pm 1, with locality ensuring that Alice's outcome depends only on her setting and \lambda, independent of Bob's choice. The correlation is then E(a,b) = \int A(a, \lambda) B(b, \lambda) \rho(\lambda) \, d\lambda.

The derivation of the CHSH inequality proceeds by considering the linear combination \langle A(a) B(b) \rangle + \langle A(a) B(b') \rangle + \langle A(a') B(b) \rangle - \langle A(a') B(b') \rangle. Substituting the hidden-variable form yields \int [A(a,\lambda) (B(b,\lambda) + B(b',\lambda)) + A(a',\lambda) (B(b,\lambda) - B(b',\lambda))] \rho(\lambda) \, d\lambda. For each \lambda, the term in brackets has absolute value at most 2: since all outcomes are \pm 1, one of B(b,\lambda) + B(b',\lambda) and B(b,\lambda) - B(b',\lambda) vanishes while the other equals \pm 2, and |A(a,\lambda)| = |A(a',\lambda)| = 1. Integrating over \lambda thus bounds the absolute value of the combination by 2: |E(a,b) + E(a,b') + E(a',b) - E(a',b')| \leq 2. This bound holds for any local hidden-variable model.

Quantum mechanics predicts violations of the CHSH inequality for entangled states, such as the spin singlet state of two particles. For this state, the correlation is E(a,b) = -\cos\theta, where \theta is the angle between the measurement directions a and b. Choosing settings with angles 0° for a, 45° for a', 22.5° for b, and -22.5° for b' yields E(a,b) = -\cos 22.5^\circ \approx -0.924, E(a,b') = -\cos 22.5^\circ \approx -0.924, E(a',b) = -\cos 22.5^\circ \approx -0.924, and E(a',b') = -\cos 67.5^\circ \approx -0.383, so the combination equals -3 \cos 22.5^\circ + \cos 67.5^\circ \approx -2.389, with absolute value exceeding 2. The maximum quantum violation reaches 2\sqrt{2} \approx 2.828, known as the Tsirelson bound, for optimally chosen angles. Such violations imply that local hidden-variable theories cannot reproduce quantum correlations; if an experiment demonstrates a breach of the inequality while closing all relevant loopholes, it disproves local realism.
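
The quoted values can be checked with a few lines of Python using the singlet correlation E(a, b) = -\cos(a - b); the angles below are the set given above plus one standard optimal choice (other optimal sets exist).

```python
# Quick numerical check of the CHSH values quoted above, using the singlet
# correlation E(a,b) = -cos(a - b) with angles in degrees.
import numpy as np

def E(a, b):
    return -np.cos(np.radians(a - b))

def chsh(a, ap, b, bp):
    return E(a, b) + E(a, bp) + E(ap, b) - E(ap, bp)

# Angles from the text: a=0, a'=45, b=22.5, b'=-22.5  ->  |S| ≈ 2.389
print(abs(chsh(0, 45, 22.5, -22.5)))

# One optimal choice: a=0, a'=90, b=45, b'=-45  ->  |S| = 2*sqrt(2) ≈ 2.828
print(abs(chsh(0, 90, 45, -45)))
```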

Specific Implementations

Single-Qubit Case

In quantum mechanics, the state of a single qubit is represented by a density operator \rho = \frac{1}{2} (I + \vec{r} \cdot \vec{\sigma}), where \vec{r} is the Bloch vector with |\vec{r}| \leq 1 and \vec{\sigma} are the Pauli matrices; measurements correspond to projections onto the eigenvectors of \vec{n} \cdot \vec{\sigma} for a unit vector \vec{n} specifying the measurement direction, yielding outcomes \pm 1 with probabilities \frac{1 \pm \vec{r} \cdot \vec{n}}{2}. A local hidden-variable model for such projective measurements, due to Bell, assigns to each qubit a hidden variable \lambda, a unit vector on the Bloch sphere, with outcome A(\theta, \lambda) = \operatorname{sign}(\lambda \cdot \vec{n}'_\theta), where \vec{n}'_\theta is an effective direction obtained by tilting the measurement direction \vec{n}_\theta toward the polarization axis. For a pure state with Bloch vector \vec{r} = |\vec{r}| \hat{r}, the distribution of \lambda is uniform over the hemisphere \{\lambda \mid \lambda \cdot \hat{r} > 0\} with density \frac{1}{2\pi} (normalized such that the integral over the hemisphere yields 1), while for mixed states it is a convex combination of such hemispherical distributions and the uniform distribution over the full sphere. This model reproduces the quantum mechanical probabilities exactly: the probability of outcome +1 is the fraction of the hemisphere where \lambda \cdot \vec{n}'_\theta > 0, namely 1 - \theta'/\pi with \theta' the angle between \vec{n}'_\theta and \hat{r}, and the tilt is chosen precisely so that this fraction equals the Born-rule value \frac{1 + \hat{r} \cdot \vec{n}_\theta}{2} for pure states (and similarly for mixed states via the mixture); the expectation value \langle A(\theta) \rangle = \vec{r} \cdot \vec{n}_\theta follows directly from integrating over the distribution. Unlike entangled systems, the single-qubit case exhibits no Bell inequality violations, as there are no non-local correlations between multiple particles to test locality against quantum predictions. For instance, the Stern-Gerlach experiment on spin-1/2 particles can be modeled deterministically within this framework: an unprepared ensemble corresponds to \lambda uniform over the full sphere (yielding 50% deflection in either direction along any axis), while selecting the positively deflected beam post-measurement restricts \lambda to the corresponding hemisphere, so that subsequent measurements along another direction reproduce the quantum probabilities through the same construction.
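
A Monte Carlo sketch of this construction is given below in Python (illustrative only): it implements the tilt described above with \theta' = \pi(1 - \cos\theta)/2, so that the hemispherical fraction 1 - \theta'/\pi equals the Born-rule probability, draws \lambda uniformly from the hemisphere around \hat{r}, and compares the model's probability of outcome +1 with the quantum value.

```python
# Sketch of Bell's single-qubit hidden-variable model: lambda is uniform on the
# hemisphere around the polarization axis r_hat, and the outcome is
# sign(lambda . n_prime), with n_prime a tilted measurement direction chosen so
# the hemispherical fraction equals the Born-rule probability.
import numpy as np

rng = np.random.default_rng(0)

def sample_hemisphere(r_hat, n_samples):
    """Uniform unit vectors, flipped into the hemisphere lambda . r_hat > 0."""
    v = rng.normal(size=(n_samples, 3))
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    s = np.sign(v @ r_hat)
    s[s == 0] = 1.0
    return v * s[:, None]

def tilted_direction(r_hat, n_hat):
    """Tilt n_hat toward r_hat so that theta' = pi * (1 - cos(theta)) / 2."""
    cos_theta = np.clip(np.dot(r_hat, n_hat), -1.0, 1.0)
    theta_prime = np.pi * (1.0 - cos_theta) / 2.0
    perp = n_hat - cos_theta * r_hat            # component of n_hat orthogonal to r_hat
    norm = np.linalg.norm(perp)
    perp = perp / norm if norm > 1e-12 else np.zeros(3)
    return np.cos(theta_prime) * r_hat + np.sin(theta_prime) * perp

# Pure state polarized along z; measurement direction at angle theta in the x-z plane.
r_hat = np.array([0.0, 0.0, 1.0])
for theta in np.deg2rad([0, 30, 60, 90, 120, 180]):
    n_hat = np.array([np.sin(theta), 0.0, np.cos(theta)])
    lam = sample_hemisphere(r_hat, 200_000)
    outcomes = np.sign(lam @ tilted_direction(r_hat, n_hat))
    p_model = np.mean(outcomes > 0)
    p_born = (1 + np.dot(r_hat, n_hat)) / 2
    print(f"theta = {np.degrees(theta):6.1f}  model P(+1) = {p_model:.3f}  Born = {p_born:.3f}")
```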

Bipartite Entangled States

Bipartite entangled states provide a critical testing ground for local hidden-variable theories, as quantum mechanics predicts strong correlations between distant particles that challenge locality assumptions. The paradigmatic example is the spin singlet state for two qubits, |\psi^-\rangle = \frac{1}{\sqrt{2}} \left( |01\rangle - |10\rangle \right), where |0\rangle and |1\rangle denote opposite spin projections along a reference axis. In quantum mechanics, when Alice measures spin along direction \mathbf{a} and Bob along \mathbf{b}, the expectation value of the product of their outcomes is E(\mathbf{a}, \mathbf{b}) = -\mathbf{a} \cdot \mathbf{b} = -\cos\theta, with \theta the angle between \mathbf{a} and \mathbf{b}. This yields perfect anticorrelation (E = -1) when \theta = 0 and no correlation (E = 0) when \theta = 90^\circ.

Local hidden-variable models attempt to reproduce these correlations by assigning predetermined outcomes A(\mathbf{a}, \lambda) = \pm 1 for Alice and B(\mathbf{b}, \lambda) = \pm 1 for Bob, determined by a shared hidden variable \lambda distributed according to \rho(\lambda). The correlation is then E(\mathbf{a}, \mathbf{b}) = \int A(\mathbf{a}, \lambda) B(\mathbf{b}, \lambda) \rho(\lambda) \, d\lambda. To match the singlet's perfect anticorrelation at \theta = 0, the model requires A(\mathbf{a}, \lambda) = -B(\mathbf{a}, \lambda) for almost all \lambda. However, this deterministic relation fails to reproduce the full angular dependence -\cos\theta across all pairs of directions without violating locality or introducing inconsistencies in outcome assignments. This failure manifests quantitatively through violations of Bell inequalities, such as the CHSH inequality derived for such bipartite scenarios. The CHSH expression is |\langle AB \rangle + \langle AB' \rangle + \langle A'B \rangle - \langle A'B' \rangle| \leq 2 under local hidden variables, where A, A' are outcomes for Alice's two settings and B, B' for Bob's. For the singlet state with measurement directions at 0^\circ and 90^\circ for Alice and 45^\circ and -45^\circ for Bob, quantum mechanics predicts a value of 2\sqrt{2} \approx 2.828 > 2, demonstrating that no local model can match all correlations simultaneously.

Local hidden-variable models inherently preserve the no-signaling condition, as marginal probabilities for one party, such as P(A = +1 | \mathbf{a}) = \int \frac{1 + A(\mathbf{a}, \lambda)}{2} \rho(\lambda) \, d\lambda, remain independent of the distant party's choice of setting. Quantum entangled states also satisfy no-signaling, but the joint probability distributions P(A, B | \mathbf{a}, \mathbf{b}) in the singlet state cannot be replicated by local models, as the latter are limited to factorized forms over \lambda that bound correlations below quantum levels. An additional incompatibility arises from the no-cloning theorem, which states that an unknown quantum state cannot be perfectly copied. In the context of entanglement, a local hidden-variable model would effectively "clone" the predictive information encoded in \lambda, allowing both parties to deterministically access outcomes without disturbing the shared entangled state, in contradiction to the non-clonable nature of entangled states like the singlet. This argument underscores why local realism cannot accommodate the correlations and uniqueness properties of quantum entanglement.
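
Because any local hidden-variable model can be written as a mixture over \lambda of deterministic outcome assignments, its CHSH value can never exceed the maximum attained by those assignments. The Python sketch below (an illustration, not an experimental analysis) enumerates all 16 deterministic local strategies to confirm the bound of 2 and compares it with the quantum singlet value at the angles quoted above.

```python
# Brute-force check: over all deterministic local assignments (the extreme
# points of any local hidden-variable model), the CHSH combination never
# exceeds 2, whereas the singlet with well-chosen angles reaches 2*sqrt(2).
import itertools
import numpy as np

best_local = 0.0
for A, Ap, B, Bp in itertools.product([+1, -1], repeat=4):
    S = A * B + A * Bp + Ap * B - Ap * Bp
    best_local = max(best_local, abs(S))
print("local deterministic maximum:", best_local)      # 2

def E(a_deg, b_deg):
    """Singlet correlation E(a,b) = -cos(a - b), angles in degrees."""
    return -np.cos(np.radians(a_deg - b_deg))

S_quantum = E(0, 45) + E(0, -45) + E(90, 45) - E(90, -45)
print("quantum singlet value:", abs(S_quantum))         # ≈ 2.828
```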

Limitations and Extensions

No-Go Results

The Kochen-Specker theorem, established by Simon Kochen and Ernst Specker in 1967, proves that non-contextual hidden-variable theories—those assigning definite values to all observables independently of the measurement context—are incompatible with quantum mechanics. This no-go result applies to single-particle systems with non-commuting observables, such as spin-1 measurements in a three-dimensional Hilbert space, where the original proof employed a set of 117 projection directions to demonstrate the impossibility of a consistent value assignment without context dependence. Contextuality, as revealed by this theorem, implies that the outcome of a measurement depends on which compatible set of observables is measured alongside it, ruling out non-contextual models even without entanglement or locality considerations.

Building on Bell's foundational work, the Greenberger-Horne-Zeilinger (GHZ) theorem of 1989 provides an inequality-free contradiction between local realism and quantum mechanics for multipartite systems. For a three-qubit GHZ state, quantum mechanics predicts perfect correlations in certain joint measurements, with expectation values of ±1 for specific operator combinations, which local hidden-variable theories cannot reproduce simultaneously for all the relevant measurement combinations. This result strengthens the case against local realism by yielding an all-or-nothing contradiction rather than a probabilistic bound, and it has been generalized to higher numbers of particles, highlighting intrinsic non-local features in multipartite entanglement.

Tsirelson's bound, derived by Boris Tsirelson in 1980, delineates the precise upper limit on quantum correlations achievable under no-signaling constraints, showing that while Bell inequalities are violated, the violations cannot exceed certain quantum-specific thresholds. For the Clauser-Horne-Shimony-Holt (CHSH) inequality, the classical bound is 2, but quantum mechanics reaches at most 2\sqrt{2} \approx 2.828, ensuring that quantum predictions remain compatible with relativity's prohibition on superluminal signaling. This bound underscores the non-classical yet bounded nature of quantum non-locality, influencing device-independent protocols by capping the strength of certifiable correlations.

In recent years, significant progress in device-independent proofs has addressed remaining loopholes in multipartite settings, enabling robust certifications of entanglement without trusting the measurement devices. Photonic experiments have self-tested multiparticle GHZ states while simultaneously closing the locality and detection loopholes, achieving violations beyond local realistic bounds with high statistical significance. Theoretical advances have introduced self-testing schemes for general N-partite Bell inequalities, allowing certification of multipartite non-locality independent of device characterization, and extensions to tilted CHSH scenarios have optimized loophole-free non-locality under realistic detection inefficiencies, further solidifying no-go results for local hidden variables in multi-party quantum networks.

Superdeterminism proposes an escape from these no-go theorems by assuming that hidden variables correlate not only with particle states but also with experimenters' choices, eliminating the need for non-locality at the cost of statistical independence. However, this approach is widely critiqued for implying a conspiratorial fine-tuning of the universe, where all relevant variables are predetermined from the initial conditions, thereby undermining the free-will assumption essential to empirical science and rendering experimental falsification impossible.
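
The GHZ contradiction can be made explicit with a few lines of Python: computing the four relevant expectation values directly from the three-qubit GHZ state exhibits the pattern that no pre-assigned local values can reproduce (the state vector and operators below are standard; no experimental data are assumed).

```python
# Sketch: the GHZ "all-or-nothing" contradiction, computed directly from the
# three-qubit GHZ state (|000> + |111>)/sqrt(2) with numpy.
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)

ghz = np.zeros(8, dtype=complex)
ghz[0] = ghz[7] = 1 / np.sqrt(2)

def expect(op1, op2, op3):
    """Expectation value of op1 ⊗ op2 ⊗ op3 in the GHZ state."""
    op = np.kron(np.kron(op1, op2), op3)
    return np.real(ghz.conj() @ op @ ghz)

print(expect(X, Y, Y))   # -1
print(expect(Y, X, Y))   # -1
print(expect(Y, Y, X))   # -1
print(expect(X, X, X))   # +1
# Pre-assigned local values x_i, y_i = ±1 would force
# (x1*y2*y3)(y1*x2*y3)(y1*y2*x3) = x1*x2*x3 (each y appears twice),
# so local realism predicts XXX = (-1)^3 = -1, contradicting the quantum +1.
```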

Time-Dependent Variables

In local hidden-variable theories, time-dependent variables introduce an evolution of the hidden parameter \lambda(t) over time, enabling models that address limitations of static formulations by incorporating dynamics such as superdeterminism or retrocausality, thereby potentially reconciling locality with quantum correlations. This approach posits that \lambda(t) can change according to deterministic rules, allowing outcomes to be shaped by future events or by initial conditions that correlate all relevant factors, while maintaining the core requirement that influences propagate no faster than light.

Superdeterministic models treat the hidden variables and experimental measurement choices as correlated from the outset due to fixed initial conditions, such as those established at the universe's origin, which evolve deterministically to produce observed quantum statistics without violating locality. In Gerard 't Hooft's interpretation, the underlying reality consists of discrete, deterministic states evolving via local rules akin to classical cellular automata, where quantum probabilities arise emergently from the averaging over inaccessible initial configurations that enforce these correlations. This framework evades Bell inequalities by rejecting the assumption of statistical independence between \lambda(t) and measurement settings, interpreting apparent non-locality as an artifact of the full deterministic history.

Retrocausal approaches, in contrast, allow future measurement outcomes to influence the evolution of \lambda(t) backward in time, providing a mechanism for entanglement without forward non-locality. Building on John Cramer's transactional interpretation, which uses advanced (future-to-past) and retarded (past-to-future) waves to form "transactions" that resolve quantum amplitudes, later variants integrate hidden variables whose \lambda(t) trajectories are guided by these retrocausal signals to match observed correlations locally. For example, Ken Wharton's models employ post-selected future inputs to steer particle paths within past light cones, reproducing Bell violations through time-symmetric influences rather than instantaneous action at a distance.

These time-dependent extensions, however, encounter significant challenges, including the inherent violation of statistical independence, which requires improbable conspiratorial alignments between \lambda(t) and experimenter choices to avoid detection, rendering the models empirically indistinguishable from standard quantum mechanics in typical tests. Experimental efforts to probe superdeterminism or retrocausality, such as those using cosmic sources for measurement settings, have yet to yield decisive evidence, as the required correlations demand extreme fine-tuning that borders on unfalsifiability. Recent proposals from 2023 to 2025 explore time-symmetric formulations of \lambda(t) to simulate entanglement without invoking non-locality or superluminal effects. One 2025 approach uses backward-in-time conditional probabilities to generate correlations, relaxing temporal ordering assumptions while upholding a modified statistical independence that aligns with local causality. These developments prioritize compatibility with relativistic invariance, adapting the locality condition dynamically to ensure all influences respect light-cone structures throughout the evolution of \lambda(t).