
Boson sampling

Boson sampling is a computational problem in quantum computing and computational complexity theory in which indistinguishable bosons, such as photons, are injected into a linear optical interferometer, and the task is to sample from the resulting probability distribution over output modes, which is governed by the permanent of a complex matrix derived from the interferometer's unitary transformation. Introduced by Scott Aaronson and Alex Arkhipov in 2010, this model of non-universal quantum computation aims to provide evidence that even rudimentary quantum systems can perform tasks intractable for classical computers, thereby challenging the Extended Church-Turing thesis without requiring full fault-tolerant quantum computation. The problem's classical hardness stems from the #P-completeness of computing matrix permanents, making exact simulation exponentially difficult, while approximate simulation is believed hard under conjectures like the permanent-of-Gaussians conjecture. The original BosonSampling scheme involves n single photons entering m optical modes (with m ≥ n) via a Haar-random unitary interferometer, followed by non-adaptive measurement to record photon counts per mode, producing samples that are statistically verifiable but hard to generate classically for moderate n (e.g., 20–50). Aaronson and Arkhipov proved that an efficient classical algorithm for exact BosonSampling would imply P^#P ⊆ BPP^NP, collapsing the polynomial hierarchy to its third level, while approximate versions rely on anti-concentration properties of permanents to maintain hardness. Experimental realizations began in 2013 with integrated photonic chips demonstrating small-scale BosonSampling using up to four photons, confirming the model's feasibility despite challenges like photon indistinguishability and loss. Variants have expanded the framework's applicability and experimental reach. Scattershot BosonSampling, introduced in 2015, uses heralded photon sources based on spontaneous parametric down-conversion to improve sampling rates, enabling experiments with up to six photons.
Gaussian BosonSampling (GBS), proposed in 2014, replaces single-photon Fock states with squeezed vacuum states, leveraging continuous-variable Gaussian operations for larger-scale demonstrations; a 2022 experiment achieved GBS with 216 modes and a mean photon number of 125, using time-multiplexed encoding. These variants highlight BosonSampling's role in pursuing quantum advantage, with GBS showing potential for applications in molecular vibronic spectra simulation and graph optimization because its output probabilities are governed by hafnians of matrices derived from Gaussian states. By 2025, BosonSampling has influenced quantum advantage claims and hybrid applications. Noisy intermediate-scale implementations, such as a 2022 GBS device with 216 squeezed modes, have approached regimes where classical simulation becomes prohibitive, though verification remains an open challenge. Recent theoretical work has explored its resilience to photon distinguishability, showing that even logarithmic fractions of distinguishable photons preserve classical hardness under certain noise models. Emerging applications include quantum machine learning, where boson samplers enhance image recognition tasks by generating high-dimensional features intractable for classical neural networks alone, as demonstrated in simulated 2025 experiments. Additionally, proposals integrate BosonSampling into proof-of-work consensus protocols for blockchain security, exploiting its sampling hardness for verifiable randomness. In 2025, experiments with programmable GBS devices exceeding 3000 modes have further pushed the boundaries of quantum advantage. Despite these advances, scalable fault-tolerant implementations remain elusive, underscoring BosonSampling's position as a benchmark for photonic quantum technologies.

Introduction

Definition and Basic Principle

Boson sampling is a computational task in quantum optics that involves generating samples from the probability distribution produced when indistinguishable bosons, such as photons, propagate through a linear optical interferometer. Bosons are particles with integer spin values (0, 1, 2, etc.) that obey Bose-Einstein statistics, permitting multiple particles to occupy the same quantum state due to the symmetric nature of their wave functions under particle exchange. Photons, as massless bosons mediating the electromagnetic force, are particularly suitable for this task because of their indistinguishability when generated in identical temporal, spatial, and polarization modes. In linear optics, the evolution of these bosons is governed by passive optical elements like beam splitters and phase shifters, which implement a unitary transformation on the creation operators of the photon modes without introducing nonlinear interactions or changing the total photon number. The basic setup of boson sampling consists of injecting N single photons into N distinct input modes of an M-mode interferometer, where M \geq N and typically M = O(N), leaving the remaining input modes empty. The interferometer applies an M \times M unitary matrix U to the mode amplitudes, after which photon-number-resolving detectors measure the output configuration s = (s_1, \dots, s_M), where s_j \geq 0 is the number of photons in output mode j and \sum_j s_j = N. The probability P(s) of observing a particular output configuration s from the standard input configuration i (with single photons in the first N modes) is given by P(s) = \frac{ \left| \perm \left( U_{s,i} \right) \right|^2 }{ \prod_{j=1}^M s_j ! }, where U_{s,i} is the N \times N submatrix of U formed by selecting rows corresponding to output modes j repeated s_j times and columns corresponding to the N input modes with one photon each, and \perm(\cdot) denotes the permanent of a matrix, defined as \perm(A) = \sum_{\sigma \in S_N} \prod_{k=1}^N A_{k, \sigma(k)} for an N \times N matrix A.
This task was originally proposed to demonstrate a form of quantum computational advantage, as sampling from this distribution is believed to be classically intractable for large N (requiring exponential time in the worst case), yet efficiently implementable using near-term linear optical quantum devices that do not require fault-tolerant qubits or universal quantum gates. By exploiting the interference effects arising from indistinguishability, boson sampling provides a benchmark for verifying quantum behavior in photonic systems without the overhead of full-scale quantum computation.
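The two-photon case can be worked through explicitly from the probability formula above. The following Python sketch (illustrative only; the function names are our own) computes P(s) for two photons entering a 50:50 beamsplitter and reproduces the Hong-Ou-Mandel suppression of the coincidence outcome:

```python
import itertools
import math

def permanent(a):
    """Permanent of a square matrix via direct summation over all permutations."""
    n = len(a)
    return sum(
        math.prod(a[k][sigma[k]] for k in range(n))
        for sigma in itertools.permutations(range(n))
    )

def output_probability(U, s):
    """P(s) = |perm(U_s)|^2 / prod(s_j!) for one photon in each input mode."""
    rows = [j for j, sj in enumerate(s) for _ in range(sj)]  # row j repeated s_j times
    sub = [[U[r][c] for c in range(len(rows))] for r in rows]  # first N columns
    norm = math.prod(math.factorial(sj) for sj in s)
    return abs(permanent(sub)) ** 2 / norm

# 50:50 beamsplitter unitary (real Hadamard-type convention).
a = 1 / math.sqrt(2)
U = [[a, a], [a, -a]]

# Hong-Ou-Mandel: the coincidence outcome (1,1) is fully suppressed,
# while the bunched outcomes (2,0) and (0,2) each occur with probability 1/2.
print(output_probability(U, (1, 1)))
print(output_probability(U, (2, 0)))
```

The coincidence outcome (1, 1) has probability zero because the two permanent terms cancel exactly; this cancellation is absent for distinguishable particles, which is why the effect certifies genuine bosonic interference.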

Historical Development

Boson sampling was proposed by Scott Aaronson and Alex Arkhipov in their 2010 paper, later published in 2011, as a model of quantum computation using linear optical networks to process identical photons and generate output distributions that are believed to be hard to simulate classically. The motivation stemmed from the desire to identify intermediate computational models that lie between fully classical computing and universal quantum computing, leveraging feasible photonic setups to demonstrate quantum advantage without requiring full fault-tolerant quantum hardware. This approach focused on sampling problems whose probabilities are given by the permanent of a complex matrix, a #P-hard quantity, providing a pathway to verify quantum advantage in near-term devices. In the following years, theoretical work extended the original framework and bolstered arguments for its classical intractability. Aaronson and Arkhipov further analyzed the problem in 2013, proving that the output distribution of boson sampling with Haar-random unitary matrices is statistically far from uniform, which supports the anti-concentration properties essential for hardness under reasonable conjectures. Additional extensions explored generalized input states; for instance, Seshadreesan et al. demonstrated in 2014 that boson sampling remains hard even with photon-added coherent states as inputs, broadening the model's applicability while preserving computational complexity. By 2014, further proofs refined the conditions under which approximate sampling would imply collapses in complexity classes, reinforcing the original conditional hardness results. Initial proofs-of-principle involved small-scale classical simulations of few-photon interference to validate the model's predictions, with theoretical simulations in 2012-2013 confirming multi-photon Hong-Ou-Mandel effects central to the sampling distribution.
The first experimental implementations emerged in 2013, with three independent groups reporting boson sampling using three indistinguishable photons in integrated photonic circuits. Broome et al. used a tunable fiber-based setup to measure three-photon interference, verifying that output probabilities matched permanent calculations within experimental error. Similarly, Spring et al. demonstrated sampling on a photonic chip, observing nonclassical interference patterns consistent with boson sampling theory. Tillmann et al. employed a similar chip-based approach, achieving high-fidelity three-photon sampling and highlighting the role of indistinguishability. These milestones sparked debates on verification, as the small photon numbers allowed full classical simulation and verification, raising questions about achieving verifiable quantum advantage at larger scales without advanced techniques.

Theoretical Framework

Mathematical Formulation

Boson sampling is formally defined as the task of sampling an output configuration from the probability distribution generated by passing n indistinguishable bosons through an m-mode linear optical interferometer, where m \geq n, followed by measurement in the Fock basis. The interferometer is described by an m \times m unitary matrix U, which encodes the action of passive optical elements such as beamsplitters and phase shifters on the creation and annihilation operators. In the standard setup, the input state is the Fock state |1^n\rangle = |1,1,\dots,1,0,\dots,0\rangle, consisting of exactly one photon in each of the first n spatial modes and vacuum in the remaining modes. The evolution of the input under the interferometer is given by \phi(U)|1^n\rangle, where \phi(U) is the second-quantized representation of U acting on the n-photon subspace of the m-mode Fock space. The output is then measured by detecting the photon number in each mode, yielding a configuration S = (s_1, s_2, \dots, s_m) where each s_i is a non-negative integer representing the number of photons in mode i, and \sum_{i=1}^m s_i = n. The probability of obtaining this output is \mathrm{Pr}[S] = \left| \langle S | \phi(U) | 1^n \rangle \right|^2 = \frac{|\mathrm{Per}(U_{S,1^n})|^2}{\prod_{i=1}^m s_i !}, where U_{S,1^n} is the n \times n submatrix of U formed by taking s_i copies of the i-th row of the m \times n submatrix consisting of the first n columns of U, for each i. The permanent function, central to this formulation, is defined for an n \times n complex matrix B = (b_{j,k}) as \mathrm{Per}(B) = \sum_{\sigma \in S_n} \prod_{j=1}^n b_{j,\sigma(j)}, where the sum runs over all n! permutations \sigma in the symmetric group S_n. Unlike the determinant, which includes a sign factor \mathrm{sgn}(\sigma) for each permutation and can be computed in polynomial time using algorithms like Gaussian elimination, the permanent lacks this antisymmetry and requires summing all terms without cancellation, making exact computation #P-hard even for 0–1 matrices.
Approximating the permanent to within a multiplicative factor of (1+\epsilon) is also #P-hard for any fixed \epsilon > 0. The hardness results in boson sampling rely on specific assumptions, including that U is drawn from the Haar measure (uniformly at random over the unitary group) to ensure average-case complexity, and that the optical setup is lossless, meaning no photon absorption or detection inefficiencies occur in the ideal model.
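To make the determinant/permanent contrast concrete, the classic exact algorithm for the permanent is Ryser's inclusion-exclusion formula, which improves on the naive n! sum but remains exponential in n. A minimal, unoptimized Python sketch (assuming a square list-of-lists input; our own naming):

```python
from itertools import combinations
from math import prod

def permanent_ryser(a):
    """Ryser's inclusion-exclusion formula:
    Per(A) = (-1)^n * sum over nonempty column subsets S of
             (-1)^|S| * prod_i (sum_{j in S} a[i][j]).
    Cost is O(2^n * n^2), versus O(n * n!) for the naive permutation sum."""
    n = len(a)
    total = 0
    for r in range(1, n + 1):
        for cols in combinations(range(n), r):
            p = prod(sum(a[i][j] for j in cols) for i in range(n))
            total += (-1) ** r * p
    return (-1) ** n * total

# Permanent of the all-ones 3x3 matrix is 3! = 6 (every permutation contributes 1).
ones = [[1] * 3 for _ in range(3)]
print(permanent_ryser(ones))  # 6
```

Even with Ryser's speedup, doubling n squares the running time's exponential factor, which is the basic reason sampling experiments beyond roughly 50 photons are expected to defeat direct classical computation of the probabilities.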

Computational Complexity

The computational complexity of boson sampling is rooted in the hardness of simulating the output distribution of indistinguishable photons interfering in a linear optical network. Exact boson sampling, which requires sampling precisely from the ideal distribution, is classically intractable because its output probabilities are given by permanents of complex matrices, and computing the permanent is a canonical #P-complete problem. Specifically, Aaronson and Arkhipov proved that a polynomial-time classical algorithm for exact boson sampling would imply P^{#P} \subseteq BPP^{NP}, collapsing the polynomial hierarchy to its third level. This hardness holds for systems with n photons and m output modes where m \geq n, and theoretical analyses indicate that classical simulation becomes practically infeasible for n exceeding approximately 50 photons, based on the exponential scaling required for permanent evaluation. For approximate boson sampling, where the goal is to sample from a distribution close to the ideal one with high probability, average-case hardness is established under the permanent-of-Gaussians and permanent anti-concentration conjectures. These results show that no BPP algorithm can approximate the distribution unless the polynomial hierarchy collapses, providing strong evidence against efficient classical simulation even with noise tolerance. Supremacy claims typically target systems with n \sim 30-50 photons, where the computational cost of classical simulation exceeds feasible limits, such as 10^{18} operations or more, marking a threshold for potential quantum advantage. Classical simulation methods for boson sampling, such as Metropolis-Hastings approaches and graph-based algorithms exploiting the interference structure, achieve approximate sampling but suffer from exponential scaling in the number of photons n. These techniques, which iteratively sample output configurations proportional to their probabilities derived from permanents or hafnians, become intractable beyond small n due to the need to evaluate exponentially many terms.
Recent advancements in classical algorithms have improved simulation of small-scale instances, including faster average-case time complexities of O(n \cdot 1.69^n) for Fock-state inputs when the number of modes m is proportional to n, and enhanced tensor-network methods for limited-connectivity networks, enabling simulations up to n \approx 50 in specialized cases but still confirming exponential barriers for larger n. The quantum advantage in boson sampling arises because linear optics alone suffices to generate distributions that are hard to sample classically, without requiring fault-tolerant quantum gates or error correction, distinguishing it from universal models. This intermediate model leverages the inherent #P-hardness of the underlying permanents to argue for an exponential quantum-classical separation on this specific sampling task.

Variants

Scattershot Boson Sampling

Scattershot boson sampling is a variant of boson sampling designed to address the challenges of generating multiple indistinguishable single photons using imperfect sources, by leveraging heralded photons from spontaneous parametric down-conversion (SPDC) processes. In this setup, multiple (k > n) SPDC sources produce entangled photon pairs, where detection of one photon in each pair heralds the presence of a single photon in the other, which is then injected into random input modes of an m-mode linear optical interferometer. This results in an input state that is a probabilistic mixture of Fock states |1⟩^{\otimes n} distributed across different subsets of input modes, enabling sampling of output distributions without the need for deterministic multi-photon sources. The primary advantage of scattershot boson sampling lies in its dramatically higher success probability for n-photon interference events compared to traditional single-photon injection schemes. By distributing the sources across more input modes, the probability of obtaining a valid n-photon input scales as \binom{k}{n} \epsilon^n, where \epsilon is the single-photon generation efficiency, providing a combinatorial boost that mitigates losses inherent in early photonic setups. This makes the approach particularly effective for demonstrating boson sampling with n up to 5–8 photons, where standard methods would yield impractically low event rates. Mathematically, scattershot boson sampling involves sampling from a mixed input state rather than a pure Fock state, requiring the output distribution to be averaged over all possible n-photon input configurations drawn from the heralding process. The probability of a specific output configuration is computed as the average of the squared permanents of the corresponding submatrices of the interferometer's unitary matrix U, weighted by the likelihood of each input subset. This adjustment preserves the computational hardness of the task while accommodating the probabilistic nature of the inputs.
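The scaling advantage can be illustrated numerically under a simple binomial source model (an assumption made here for illustration; real heralding statistics are more involved), comparing the exactly-n-sources-fire probability against the all-n-fixed-sources-fire probability:

```python
from math import comb

def scattershot_event_probability(k, n, eps):
    """Probability that exactly n of k heralded SPDC sources fire in one shot,
    each firing independently with single-photon efficiency eps (binomial model)."""
    return comb(k, n) * eps ** n * (1 - eps) ** (k - n)

def fixed_input_probability(n, eps):
    """Conventional scheme: all n dedicated sources must fire simultaneously."""
    return eps ** n

# Hypothetical illustrative numbers: 20 sources, 4-photon events, 10% efficiency.
k, n, eps = 20, 4, 0.1
boost = scattershot_event_probability(k, n, eps) / fixed_input_probability(n, eps)
print(f"scattershot boost for n={n}: {boost:.0f}x")
```

Because the \binom{k}{n} prefactor grows rapidly with the number of sources k, even modest arrays of low-efficiency heralded sources can raise n-photon event rates by orders of magnitude, which is the effect the 2015 experiments exploited.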
The first experimental realizations of scattershot boson sampling occurred in 2015, using six independent type-II SPDC sources in beta-barium borate crystals to herald photons coupled to a 13-mode silica-on-silicon photonic chip, successfully demonstrating 2- and 3-photon interference with over 2000 validated events and passing statistical tests for quantum behavior with >95% confidence. Building on this, a 2018 experiment employed 12 SPDC sources pumped by an ultrafast laser at 775 nm to generate telecom-wavelength photons, interfacing with a 12-mode interferometer to achieve 3-, 4-, and 5-photon sampling rates up to 3.9 kHz for n=3, confirming scalability and high indistinguishability (>96%) of the heralded photons. These implementations highlighted the feasibility of scattershot boson sampling for intermediate-scale quantum simulation.

Gaussian Boson Sampling

Gaussian Boson Sampling (GBS) is a variant of boson sampling that employs continuous-variable Gaussian states, such as squeezed vacuum states, as inputs to a linear optical interferometer, circumventing the need for precise single-photon preparation. In this setup, multiple single-mode squeezed states are injected into distinct input ports of a programmable interferometer, which applies a random unitary transformation to the modes. The output is then measured using detection schemes like threshold detectors, photon-number-resolving detectors, or homodyne measurements to sample from the resulting multimode photon-number distribution. This approach leverages parametric down-conversion sources to generate squeezed light, enabling higher effective photon fluxes without the inefficiencies of heralding single photons. The mathematical foundation of GBS relies on the probability distributions of output configurations, which can be computed using Gaussian integrals over the phase-space representation of the state. For squeezed vacuum inputs, these probabilities are expressed in terms of the hafnian of a complex Gaussian matrix derived from the interferometer's unitary transformation, particularly for configurations with even total photon numbers that align with the pairwise photon statistics of the squeezed input. The hafnian, analogous to the permanent in standard boson sampling, captures the bunching behavior and leads to a #P-hard classical problem when the squeezing is sufficiently large, preserving computational hardness even with moderate experimental imperfections. GBS offers scalability advantages over discrete-photon variants, with photonic implementations demonstrating effective operation across 50 to over 200 modes using current technology, such as ultralow-loss integrated interferometers and high-efficiency detectors.
Experiments between 2019 and 2023, including Jiuzhang (100 modes with 50 squeezed states in 2020) and a 2022 programmable photonic processor (216 squeezed modes with three-dimensional connectivity), have claimed quantum advantage by producing samples at rates exceeding classical capabilities by factors of up to 10¹⁴, highlighting the protocol's potential for large-scale demonstrations. A key distinction of GBS lies in its continuous-variable nature, which facilitates threshold detection methods that register photon-click events without resolving exact numbers, thereby reducing sensitivity to photon loss compared to discrete single-photon schemes where losses directly diminish the sampled photon count. This tolerance arises because squeezed states contain superpositions of multiple photon numbers, allowing meaningful samples even under partial loss, while the linear optical transformations remain applicable as in the broader framework.
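For intuition about the hafnian mentioned above: for a symmetric 2n × 2n matrix it sums the products of entries over all perfect matchings of the 2n indices, giving (2n − 1)!! terms. A minimal recursive Python sketch (illustrative only, not optimized; practical GBS simulators use far faster algorithms):

```python
def hafnian(A):
    """Hafnian of a symmetric 2n x 2n matrix: sum over perfect matchings of the
    index set, via recursion. There are (2n-1)!! matchings, so the cost is
    exponential, mirroring the permanent's role in Fock-state boson sampling."""
    if not A:
        return 1  # empty matching contributes 1
    total = 0
    for j in range(1, len(A)):
        # Pair index 0 with index j, then recurse on the remaining indices.
        rest = [r for r in range(len(A)) if r not in (0, j)]
        minor = [[A[p][q] for q in rest] for p in rest]
        total += A[0][j] * hafnian(minor)
    return total

# For a 2x2 matrix the hafnian is just the off-diagonal entry; for the 4x4
# all-ones matrix it counts the 3 perfect matchings of 4 vertices.
print(hafnian([[0, 5], [5, 0]]))             # 5
print(hafnian([[1] * 4 for _ in range(4)]))  # 3
```

For a 0-1 symmetric matrix the hafnian counts perfect matchings of the corresponding graph, which is the link exploited by the graph-based GBS applications discussed elsewhere in this article.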

Other Variants

Certain variants of boson sampling are designed to be classically simulable, providing benchmarks for understanding the boundaries of quantum advantage. In thresholded boson sampling, where output photon counts are restricted to binary detections (zero or one photon per mode), the problem reduces to computing permanents of matrices with limited non-zero entries, enabling efficient classical simulation via exact algorithms for sparse matrices. Similarly, low-depth interferometers, with a small number of beam-splitter layers, limit the entanglement generation, allowing mean-field approximations to model the bosonic interference with classical resources; for instance, when the number of interfering photons is much smaller than the number of modes, the output distribution can be approximated by independent single-particle evolutions, bypassing the full permanent computation. These simulable cases highlight regimes where noise or simplified architectures make boson sampling tractable on classical hardware, serving as testbeds for verifying quantum devices. Time-frequency and time-bin boson sampling extend the encoding of photonic modes into temporal dimensions, achieving higher effective dimensions without increasing spatial complexity. In time-bin encoding, photons are prepared in superpositions of early and late arrival times, propagating through loop-based interferometers that simulate multi-mode interference in a compact setup; this approach multiplies the effective number of modes by recirculating photons, as demonstrated in scalable implementations using fiber loops. A notable 2025 experiment realized bipartite Gaussian boson sampling in the time-frequency-bin domain, using squeezed light across six mixed temporal-frequency modes generated by a microresonator, showcasing interference patterns that leverage temporal correlations for enhanced sampling efficiency. These temporal variants mitigate challenges in spatial encodings, such as mode mismatch, while preserving the computational hardness under ideal conditions.
Hybrid variants integrate bosonic modes from different physical platforms, combining photonic interferometry with matter-wave systems to explore multi-species interference. For example, proposals for hybrid boson sampling couple photons with Bose-Einstein condensates of atoms inside a multi-mode cavity, where atomic excitations mediate photon scattering, enabling sampling from joint photon-atom output distributions that capture non-equilibrium dynamics in both subsystems. Non-photonic implementations include atomic boson sampling using ultracold atoms in a two-dimensional optical lattice, where programmable tunnel couplings simulate interferometric evolution, achieving interference of up to 180 atomic bosons with fidelity comparable to photonic setups. Extensions to superconducting circuits have been theoretically proposed, leveraging circuit quantum electrodynamics to realize boson sampling with microwave photons in Josephson junction-based interferometers, though experimental realizations remain in early stages. These hybrids broaden the platform diversity for boson sampling, potentially improving robustness against decoherence in atomic or circuit-based environments. Proof-of-concept extensions demonstrate boson sampling's utility beyond pure sampling tasks, applying it to specific computational problems. In graph similarity estimation, Gaussian boson sampling encodes graph adjacency matrices into squeezed-state interferometers, where output statistics yield kernel functions measuring structural similarity between graphs; for instance, tuning squeezing parameters allows approximation of graph invariants like the number of matching subgraphs, outperforming classical heuristics for certain sparse graphs. 
For simulating molecular vibronic spectra, modified boson sampling with displaced or squeezed inputs maps vibrational mode couplings to photonic interference, generating spectra that include anharmonic effects and finite-temperature distributions; a seminal proposal showed that sampling from such a device reproduces the absorption spectra of small molecules, offering a quantum pathway for vibronic dynamics intractable on classical computers. These applications underscore boson sampling's potential as a specialized quantum simulator, linking abstract sampling complexity to practical molecular and graph-theoretic modeling.

Experimental Implementations

Early Photonic Experiments

The first experimental demonstration of boson sampling using photons was reported in 2013 by Broome et al., who implemented a 3-photon sampling task in a 4-mode linear optical interferometer fabricated from integrated silica-on-silicon waveguides. The setup employed a femtosecond-pulsed laser to pump a spontaneous parametric down-conversion (SPDC) source, generating heralded single photons that were injected into the interferometer, which was reconfigurable via thermo-optic phase shifters to realize arbitrary unitary transformations. They collected approximately 100,000 output configurations and verified that the measured transition probabilities matched the theoretical predictions given by the permanent of the submatrix of the unitary, within experimental error bars of about 10%. Key challenges included photon loss rates of around 20% due to coupling inefficiencies and waveguide propagation, as well as ensuring high indistinguishability of the photons, achieved through spectral filtering to a visibility exceeding 96%. Shortly thereafter, Tillmann et al. in 2013 extended the approach to a 6-mode integrated photonic circuit, again using 3 heralded photons from an SPDC source driven by a femtosecond laser. The interferometer consisted of femtosecond-laser-written waveguides in borosilicate glass, incorporating 15 beam splitters and phase shifters for programmable interference, with photons detected via superconducting nanowire single-photon detectors. This experiment produced over 200,000 samples, demonstrating clear signatures of multi-photon quantum interference, such as bunching probabilities that aligned with boson sampling predictions to within 1% error after accounting for partial distinguishability. Technical hurdles involved heralding efficiencies of approximately 30-40% and overall transmission losses of 10-15%, which limited the fidelity but confirmed the nonclassical nature of the output distribution. In the same year, Spring et al. 
reported a complementary demonstration using an integrated photonic chip for boson sampling with 3 and 4 photons in 13 spatial modes. Photons were sourced via SPDC in a periodically poled crystal pumped by a mode-locked titanium-sapphire laser, with the interferometer implemented using integrated waveguides. For the 4-photon case, they acquired about 1,000 samples, showing good agreement with ideal boson sampling statistics, though with reduced visibility due to timing jitter and losses estimated at 15-20%. This work highlighted the use of spatial light modulators for initial beam shaping but underscored scalability issues from decreasing multi-photon coincidence rates, dropping below 1 Hz for 4 photons. Subsequent efforts in the mid-2010s incorporated variants like scattershot boson sampling, where multiple photon-pair sources were used to boost heralding rates, as demonstrated in early implementations achieving 4- and 5-photon sampling events at rates up to 10^4 per hour despite similar loss and indistinguishability challenges. Overall, these pioneering photonic experiments validated the core principles of boson sampling for small photon numbers (n ≤ 5), with output distributions matching theory within experimental uncertainties, but revealed fundamental barriers to scaling, including probabilistic photon generation, detection efficiencies below 90%, and the need for near-perfect photon indistinguishability to observe full Hong-Ou-Mandel interference.

Recent Advances

In 2020, the University of Science and Technology of China (USTC) demonstrated Gaussian boson sampling using the Jiuzhang photonic processor, which detected up to 76 photons in a 100-mode interferometer, producing an output state-space dimension of about 10^{30} and a claimed sampling-rate advantage of roughly 10^{14} over classical simulation; classical simulation on the Sunway TaihuLight supercomputer was estimated to require approximately 600 million years for equivalent sampling. This marked an early claim of quantum computational advantage in the Gaussian variant. In 2022, Xanadu's Borealis processor advanced this further by performing programmable Gaussian boson sampling on 216 squeezed modes with three-dimensional connectivity via time-multiplexing, achieving a speedup estimated at over 10^{30} compared to state-of-the-art classical simulators for specific sampling tasks. From 2023 onward, experiments shifted toward noisy Gaussian boson sampling (GBS), where inherent hardware errors were incorporated yet still demonstrated computational advantage through techniques like pseudo-photon-number-resolving detection and frequency-bin or time-multiplexed modes. USTC's Jiuzhang 3.0, reported in 2023, registered up to 255 photon-click events in a 144-mode interferometer, enabling sampling rates that outpaced classical methods by factors exceeding 10^{24} despite photon loss levels around 50%. Similar noisy GBS demonstrations using integrated photonic chips highlighted robustness, with output distributions verifiable against classical thresholds even under realistic error rates of 1-10% per mode. In 2025, progress included a bipartite time-frequency GBS experiment using squeezed light from a microresonator across 6 mixed modes, generating non-degenerate two-mode squeezing for high-dimensional sampling with fidelity above 90% to ideal Gaussian states.
Another development in September 2025 demonstrated hardware-efficient boson sampling as a computational accelerator, leveraging multiphoton interference in a 12-mode photonic setup to reduce variance in probabilistic estimates by up to 40% over classical counterparts. In November 2025, an experimental demonstration of boson sampling using optical delay lines was reported, enabling enhanced temporal multiplexing in boson sampling setups. Countering these advances, a University of Chicago tensor-network algorithm published in mid-2024 enabled efficient classical simulation of experimental GBS under photon loss rates up to 20%, scaling to 100 modes in hours on standard hardware and challenging some advantage claims. Non-photonic implementations remain less mature but show promise in alternative platforms. In ultracold-atom systems, an experiment prepared and measured boson sampling states with up to 180 bosonic atoms in an optical lattice, achieving site-resolved detection and interference patterns consistent with ideal sampling for small numbers of particles (n ≤ 10). Superconducting proposals for boson sampling, leveraging microwave photons in Josephson-junction circuits, have been theoretically outlined but lack large-scale experimental validation, primarily due to challenges in scaling coherent mode numbers beyond 20.

Verification and Certification

Methods for Certifying Results

Certifying the results of boson sampling experiments involves verifying that the observed output distributions match quantum-mechanical predictions, distinguishing them from classically simulable approximations such as mean-field models or uniform sampling. This is crucial to confirm the presence of genuine many-body quantum interference without relying on full state tomography, which becomes infeasible for large numbers of photons. Methods range from complete characterization for small-scale devices to scalable statistical tests for larger systems. For small-scale boson sampling with few photons (n ≤ 5), full characterization can be achieved using Bayesian mean estimation or maximum-likelihood estimation (MLE) to reconstruct the output distribution. In the Bayesian approach, the posterior probability is computed to assess whether the experimental samples are consistent with the ideal boson sampling distribution, providing a confidence measure that the device operates quantum mechanically rather than classically. MLE, meanwhile, optimizes the likelihood of observed samples under the assumed model, enabling estimation of the distribution's parameters even with limited data; for instance, it has been applied to multiphoton interference in linear optical networks with up to 8 modes. These methods are computationally intensive but allow precise evaluation of metrics like total variation distance between experimental and ideal distributions for small n. Scalable certification techniques address larger systems by avoiding full reconstruction. Compressed sensing exploits the sparsity of boson sampling outputs to reconstruct the effective linear optical unitary matrix U from partial output statistics, verifying fidelity through cross-entropy or total variation distance comparisons with fewer measurements than traditional tomography. For example, in photonic state preparations relevant to boson sampling, compressed sensing reduces the sampling requirements for low-rank approximations, enabling certification of the interference matrix with polynomial resources.
Additionally, tests based on marginal distributions, which examine subsets of output modes, provide efficient checks for consistency with quantum predictions, as deviations in one- or two-particle correlations signal classical artifacts.

Specific metrics have been developed to detect signatures of quantum interference in the outputs. The cross-entropy benchmark quantifies how closely experimental samples match the ideal distribution, with values near 1 indicating strong agreement; for Gaussian boson sampling (GBS), a linear variant (the LXE score) certifies advantage by comparison against reference classical samplers, addressing the noise present in 2020s experiments. Frame potentials, originally used to assess unitary designs, can be estimated from samples to verify the Haar-like randomness of U, ensuring the structure required for hardness. In noisy GBS, graph-theoretic methods using feature vectors of output collision graphs further certify results by classifying them via graph kernels, distinguishing quantum from classical models with high accuracy. Benchmarking through community efforts, such as classical simulation challenges on supercomputers, provides standardized tools for certification; for instance, simulations on systems like the Sunway TaihuLight have set thresholds for verifiable quantum advantage in 50-photon GBS by comparing sampling rates and fidelities. These competitions highlight scalable tests, fostering reproducible verification across international experiments.
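The cross-entropy idea can be sketched numerically. In the toy model below, an exponential (Porter-Thomas-like) distribution stands in for the ideal output probabilities; the dimension, sample counts, and distribution are illustrative assumptions, not parameters of any real experiment. The linear score is near 1 for samples drawn from the ideal distribution and near 0 for uniformly random samples:

```python
import numpy as np

def linear_xeb(sample_probs, dim):
    """Linear cross-entropy score: dim * mean(p_ideal(sample)) - 1.
    Close to 1 for samples from a Porter-Thomas-like ideal distribution,
    close to 0 for uniformly random samples."""
    return dim * np.mean(sample_probs) - 1.0

rng = np.random.default_rng(1)
dim = 256
p_ideal = rng.exponential(size=dim)   # toy stand-in for ideal probabilities
p_ideal /= p_ideal.sum()

ideal_samples = rng.choice(dim, size=5000, p=p_ideal)
uniform_samples = rng.integers(dim, size=5000)

score_ideal = linear_xeb(p_ideal[ideal_samples], dim)
score_uniform = linear_xeb(p_ideal[uniform_samples], dim)
print(f"LXE ideal: {score_ideal:.3f}  LXE uniform: {score_uniform:.3f}")
```

The gap between the two scores is the certification signal; in practice the ideal probabilities themselves are expensive to compute, which is why this benchmark is restricted to spot checks at moderate sizes.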

Challenges and Criticisms

One of the primary practical challenges in boson sampling experiments is photon loss, which occurs due to coupling inefficiencies, propagation losses, or imperfect detection and effectively reduces the number of interfering photons, thereby diminishing the quantum interference and making the output distribution more amenable to classical simulation. To counteract this, error mitigation strategies such as post-selection on detected photon numbers or heralding techniques are employed, but these become increasingly inefficient as the number of photons scales. Theoretical analyses indicate that boson sampling remains classically hard only if the overall photon survival probability exceeds approximately 1/n, where n is the number of input photons; under higher loss rates, such as an expected loss exceeding 50% in deep interferometers, efficient classical simulations are possible. For low-loss regimes below 1% per optical component, quantum advantage requires extremely large n to overcome the effective reduction in interference, yet current photonic setups struggle to maintain such low loss at scale.

Noise sources, including dephasing from environmental fluctuations and partial distinguishability of photons due to imperfect spatial-temporal mode overlap, further erode the Hong-Ou-Mandel interference essential to boson sampling's hardness. Dephasing randomizes relative phases in the interferometer, suppressing multi-photon bunching probabilities and pushing the problem toward classical simulability, with even modest noise levels significantly degrading anticoncentration properties. Partial distinguishability acts analogously to loss by attenuating the permanent amplitudes in the output probability formula, and studies show that if the pairwise indistinguishability falls below roughly 90-95%, classical algorithms can approximate the distribution in polynomial time. These effects are particularly pronounced in Gaussian boson sampling (GBS), where squeezed state preparation amplifies sensitivity to noise.
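The loss threshold above lends itself to a back-of-the-envelope check. The sketch below uses purely illustrative numbers (the per-component loss values and the assumption that optical depth is 2n are not measured values) to compare the overall survival probability against the ~1/n rule of thumb:

```python
def survival(per_component_transmission: float, depth: int) -> float:
    """Overall transmission after `depth` lossy components in series."""
    return per_component_transmission ** depth

def hardness_margin(n_photons: int, per_component_loss: float, depth: int):
    """Compare the overall survival probability against the ~1/n threshold
    below which efficient classical simulation is believed possible."""
    eta = survival(1.0 - per_component_loss, depth)
    return eta, 1.0 / n_photons, eta > 1.0 / n_photons

# Illustrative numbers only: assume a mesh whose optical depth is 2n.
for n in (10, 50, 200):
    for loss in (1e-3, 5e-2):
        eta, threshold, hard = hardness_margin(n, loss, depth=2 * n)
        print(f"n={n:3d} loss={loss:.0e}: eta={eta:.3g} "
              f"threshold={threshold:.3g} hard={hard}")
```

Because survival decays exponentially with depth while the threshold shrinks only as 1/n, percent-level component losses push large experiments below the threshold, which is the scaling tension the paragraph describes.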
Critiques of quantum advantage claims in boson sampling have intensified, particularly for GBS implementations, due to loopholes enabling classical simulability. In 2017, Neville et al. demonstrated classical algorithms that outperform early photonic experiments with up to 30 photons, arguing that realistic noise and loss preclude near-term supremacy without massive scaling. More recently, in 2024, researchers at the University of Chicago developed a tensor-network algorithm that efficiently simulates state-of-the-art GBS experiments, including those with over 200 modes, generating millions of samples faster than the devices themselves and matching the ideal distributions more accurately than the noisy hardware, thus challenging claims of computational superiority. These 2024-2025 analyses highlight that many setups inadvertently operate in simulable regimes due to unaccounted loss-distinguishability correlations, underscoring gaps where certifying hardness for n > 50 remains computationally intensive and scales poorly.

Scalability poses a fundamental hurdle, as achieving robust quantum advantage demands interferometers with millions of modes to ensure that the output distribution's anticoncentration overwhelms classical thresholds, yet current photonic demonstrations are limited to approximately 100-300 modes due to fabrication and loss constraints. Proposals for temporal and continuous-variable encoding aim to reach this scale, but practical losses accumulate exponentially with mode count, confining experiments to regimes where classical methods suffice.

Applications and Extensions

Quantum Machine Learning and AI

Boson sampling has emerged as a promising framework for quantum machine learning (QML) and artificial intelligence (AI) applications, leveraging its ability to generate complex probability distributions from photonic interferometers to enhance classical algorithms. In particular, the sampling outputs from Gaussian boson sampling (GBS) provide high-dimensional features that capture nonlinear correlations intractable for classical computers, enabling hybrid quantum-classical models for tasks like pattern recognition and prediction. This integration exploits boson sampling's ability to produce non-Gaussian states at exponential cost to classical simulation, offering potential speedups in processing large datasets for AI.

A key application is quantum reservoir computing, where the fixed random interferometer in boson sampling serves as the reservoir, mapping time-series inputs through nonlinear photonic dynamics. Researchers demonstrated that this setup, using single-photon inputs into a random linear optical network, generates the complex temporal mappings needed for tasks such as signal forecasting, outperforming classical reservoirs in expressivity due to quantum interference effects. In a 2025 experiment, this approach achieved superior forecasting accuracy on benchmark datasets like the Santa Fe time series, with the bosonic nonlinearity providing an edge in handling multivariate inputs. The method's advantages include low training overhead, as only the readout layer is optimized classically, making it suitable for near-term photonic hardware.

In image recognition, boson sampling facilitates feature extraction by mapping pixel data to photonic modes, where the resulting output distribution encodes discriminative patterns for classification. A 2025 GBS-based scheme, inspired by extreme learning machines, preprocesses images from datasets like MNIST by injecting encoded features into a Gaussian state interferometer, yielding 95.86% accuracy on MNIST and 85.95% on Fashion-MNIST, improvements over baseline perceptrons attributed to quantum-enhanced kernel similarities in the output space.
This photonic sampling approach reduces preprocessing overhead for high-resolution images by exploiting the interferometer's ability to amplify subtle correlations without full quantum training. Hybrid neural networks further integrate boson sampling by using GBS outputs as precomputed features within classical architectures, augmenting support vector machines or deep layers for high-dimensional classification. In an October 2025 framework, GBS-generated distributions from multi-mode interferometers are fed as kernel matrices to neural networks, enabling efficient classification of complex datasets with reduced parameter counts; simulations showed up to 20% accuracy gains on synthetic high-dimensional benchmarks compared to purely classical kernels. This hybrid design leverages the quantum advantage in kernel computation for optimization-heavy tasks.

Overall, these applications highlight boson sampling's role in providing exponential speedups for sampling the complex distributions essential to generative models and optimization in AI, such as approximating Boltzmann machines or solving NP-hard problems with photonic efficiency. The energy savings from passive interferometers, which require no active quantum gates, position boson sampling as a pathway to scalable, practical quantum machine learning beyond pure sampling demonstrations.
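The extreme-learning-machine structure referred to above, a fixed random feature map with only a trained linear readout, can be sketched classically. The code below is a hypothetical stand-in: a random cosine feature map replaces the boson sampler, and the synthetic dataset, feature count, and ridge parameter are all illustrative assumptions rather than details of the cited schemes:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical stand-in for the GBS feature map: a fixed random linear map
# followed by a nonlinearity. No genuine quantum sampling is performed here.
def random_feature_map(X, n_features=200, seed=0):
    r = np.random.default_rng(seed)
    W = r.normal(size=(X.shape[1], n_features))
    return np.cos(X @ W)              # fixed, untrained nonlinear features

# Synthetic two-class data: label points by distance from the origin
X = rng.normal(size=(400, 8))
y = (np.linalg.norm(X, axis=1) > np.sqrt(8)).astype(float)

Phi = random_feature_map(X)
# Train only the linear readout, via ridge-regularized least squares
lam = 1e-3
w = np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]),
                    Phi.T @ (2 * y - 1))
pred = (Phi @ w > 0).astype(float)
accuracy = (pred == y).mean()
print(f"readout-only training accuracy: {accuracy:.2f}")
```

The design point this illustrates is the low training overhead: the feature map is never optimized, so a photonic sampler could supply it passively while only a small classical linear system is solved.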

Other Potential Uses

Beyond its applications in machine learning, boson sampling has been proposed for simulating molecular vibronic spectra, which are crucial for understanding photochemical processes in chemistry. In this approach, Gaussian boson sampling with squeezed input states can generate the probability distributions of vibronic transitions, capturing effects like Duschinsky rotations that are computationally intensive for classical methods. This was first demonstrated theoretically for small molecules, where the boson sampling output maps directly to the Franck-Condon factors and vibronic couplings. Recent extensions in 2024 have further refined these simulations by integrating Doktorov's approximation with boson sampling-inspired algorithms, enabling efficient computation of spectra for larger polyatomic molecules without full quantum hardware.

Boson sampling's computational hardness has also been leveraged for proof-of-work (PoW) consensus mechanisms in blockchain protocols. Here, variants like coarse-grained boson sampling (CGBS) serve as quantum-resistant puzzles, where miners perform sampling tasks tied to block data to validate transactions, offering enhanced security against classical attacks. This exploits the classical intractability of generating large-scale boson sampling outputs, with proposals showing that even noisy photonic devices can implement it scalably. Developments from 2023 to 2025 include prototypes integrating CGBS into blockchain networks, demonstrating faster validation while maintaining security compared to traditional hashing.

In graph theory, boson sampling facilitates solving problems such as graph isomorphism and dense subgraph identification by encoding graphs into the sampling interferometer. The output probabilities, proportional to hafnians of submatrices of the graph adjacency matrix, allow sampling-based approximations of graph similarities that are #P-hard to compute classically. For instance, Gaussian boson sampling has been shown to enhance detection of the densest k-subgraphs, providing quadratic speedups in sampling efficiency for sparse graphs.
This leverages the interference patterns to highlight matching substructures, with applications in network analysis. Finally, boson sampling serves as a key benchmark for certifying quantum advantage in noisy intermediate-scale quantum (NISQ) devices. By demonstrating sampling distributions that exceed classical simulation thresholds, as in the Jiuzhang experiments, it validates the utility of photonic hardware without requiring full error correction. The task's reliance on the hardness of computing permanents makes it an ideal intermediate-scale testbed for assessing quantum advantage claims.
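For graph-encoded GBS, the relevant matrix function is the hafnian, and the hafnian of a 0/1 adjacency matrix counts the graph's perfect matchings, which is what links sampling statistics to graph structure. A minimal exponential-time sketch (suitable only for tiny graphs, not a scalable algorithm) makes the connection concrete:

```python
def hafnian(A):
    """Hafnian by recursive pairing. For a 0/1 adjacency matrix this counts
    the perfect matchings of the graph. Exponential time: tiny inputs only."""
    n = len(A)
    if n == 0:
        return 1
    if n % 2:
        return 0           # odd-order graphs have no perfect matching
    total = 0
    for j in range(1, n):  # pair vertex 0 with every other vertex j
        rest = [k for k in range(1, n) if k != j]
        sub = [[A[r][c] for c in rest] for r in rest]
        total += A[0][j] * hafnian(sub)
    return total

# Complete graph K4 has 3 perfect matchings; the 4-cycle C4 has 2.
K4 = [[0 if i == j else 1 for j in range(4)] for i in range(4)]
C4 = [[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]]
print(hafnian(K4), hafnian(C4))   # 3 2
```

Dense subgraphs have many perfect matchings among their vertex subsets, so they carry large hafnians and are sampled more often, which is the intuition behind GBS-based densest-subgraph heuristics.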

References

  1. [1]
    [1011.3245] The Computational Complexity of Linear Optics - arXiv
    Nov 14, 2010 · Access Paper: View a PDF of the paper titled The Computational Complexity of Linear Optics, by Scott Aaronson and Alex Arkhipov. View PDF · TeX ...
  2. [2]
    Experimental boson sampling | Nature Photonics
    May 12, 2013 · We demonstrate this model of computation using laser-written integrated quantum networks that were designed to implement unitary matrix transformations.Missing: realizations | Show results with:realizations
  3. [3]
    Experimental scattershot boson sampling | Science Advances
    Apr 17, 2015 · We report the first scattershot boson sampling experiments, where six different photon-pair sources are coupled to integrated photonic circuits.
  4. [4]
    Experimental Demonstration of Gaussian Boson Sampling with ...
    May 17, 2022 · In this work, we build a GBS machine that achieves the displacement by injecting a laser beam alongside a two-mode squeezed vacuum state into a 15-mode ...Missing: realizations | Show results with:realizations
  5. [5]
    [1406.6767] An introduction to boson-sampling - arXiv
    Jun 26, 2014 · Boson-sampling is a simplified model for quantum computing that may hold the key to implementing the first ever post-classical quantum computer.
  6. [6]
    Quantum Computational Advantage of Noisy Boson Sampling with ...
    Sep 25, 2025 · Noisy boson sampling is shown to retain classical hardness even when a logarithmic fraction of photons are mutually distinguishable.
  7. [7]
    Proof-of-work consensus by quantum sampling - IOPscience
    We propose a new PoW consensus protocol based on boson sampling. Boson-sampling was originally developed to demonstrate quantum supremacy, owing to its ...
  8. [8]
    DOE Explains...Bosons and Fermions - Department of Energy
    Bosons are the fundamental particles that have spin in integer values (0, 1, 2, etc.). Fermions, on the other hand, have spin in odd half integer values (1/2, 3 ...
  9. [9]
    [1309.7460] BosonSampling Is Far From Uniform - arXiv
    Sep 28, 2013 · BosonSampling, which we proposed three years ago, is a scheme for using linear-optical networks to solve sampling problems that appear to be ...Missing: original | Show results with:original
  10. [10]
    [1212.2234] Photonic Boson Sampling in a Tunable Circuit - arXiv
    Dec 10, 2012 · Here we test the central premise of BosonSampling, experimentally verifying that the amplitudes of 3-photon scattering processes are given by the permanents of ...
  11. [11]
    [1212.2622] Boson Sampling on a Photonic Chip - arXiv
    Dec 11, 2012 · We construct a quantum boson sampling machine (QBSM) to sample the output distribution resulting from the nonclassical interference of photons ...Missing: et al
  12. [12]
    [1212.2240] Experimental Boson Sampling - Quantum Physics - arXiv
    Dec 10, 2012 · Here we demonstrate this model of computation using high--quality laser--written integrated quantum networks that were designed to implement random unitary ...
  13. [13]
    Quantum sampling problems, BosonSampling and ... - Nature
    Apr 13, 2017 · The linear optical system they proposed was the class of problems called BosonSampling which is the production of samples from Fock basis ...
  14. [14]
    Classical simulation of boson sampling based on graph structure
    Oct 4, 2021 · In this work, we present classical sampling algorithms for single-photon and Gaussian input states that take advantage of a graph structure of a ...Missing: methods Metropolis- Hastings
  15. [15]
    Faster classical boson sampling - IOPscience
    The new algorithm for classical boson sampling runs in approximately O(n · 1.69^n) time on average when m=n, when m is proportional to n, it is much faster.Abstract · Methods · Marginal uniformity of the... · Average-case time complexity...
  16. [16]
    Speeding up the classical simulation of Gaussian boson sampling ...
    Apr 1, 2024 · we introduce an enhanced classical algorithm for simulating GBS processes with limited connectivity. It computes the loop Hafnian of an ...<|control11|><|separator|>
  17. [17]
    [1505.03708] Experimental Scattershot Boson Sampling - arXiv
    May 14, 2015 · Here we report the first Scattershot Boson Sampling experiments, where six different photon-pair sources are coupled to integrated photonic circuits.
  18. [18]
    12-Photon Entanglement and Scalable Scattershot Boson Sampling ...
    Dec 21, 2018 · The key idea of scattershot boson sampling [33] is to use k ( k ≫ n ) heralded single-photon sources connecting to different input modes of the ...
  19. [19]
    12-photon entanglement and scalable scattershot boson sampling ...
    Oct 11, 2018 · We further demonstrate a blueprint of scalable scattershot boson sampling using 12 SPDC sources and a 12*12-modes interferometer for three-, ...
  20. [20]
  21. [21]
  22. [22]
    Noise in boson sampling and the threshold of efficient classical ...
    Jul 24, 2019 · We study the quantum to classical transition in boson sampling by analyzing how N -boson interference is affected by inevitable noise in an experimental setup.
  23. [23]
    [PDF] Efficient Simulation of Shallow Quantum Circuits: Boson Sampling
    The primary constraint in this work is that the interferometer being simulated is shallow. We recapitulate here what that means. Definition 5.1.1 (reduced ...Missing: thresholded approximations
  24. [24]
    Bipartite Gaussian boson sampling in the time-frequency-bin ...
    Aug 8, 2025 · We demonstrate high-dimensional bipartite Gaussian boson sampling with squeezed light across 6 mixed time-frequency modes.
  25. [25]
    Boson sampling with Gaussian input states: Toward efficient scaling ...
    Nov 26, 2024 · Quantum computational advantage via high-dimensional Gaussian boson sampling ... Scalable Boson Sampling with Time-Bin Encoding Using a Loop-Based ...
  26. [26]
    [2409.08973] Hybrid boson sampling - Quantum Physics - arXiv
    Sep 13, 2024 · We propose boson sampling from a system of coupled photons and Bose-Einstein condensed atoms placed inside a multi-mode cavity as a simulation process.Missing: matter waves non- superconducting circuits
  27. [27]
    Hybrid Boson Sampling - MDPI
    We propose boson sampling from a system of coupled photons and Bose–Einstein condensed atoms placed inside a multi-mode cavity as a simulation process.
  28. [28]
    Boson Sampling and Quantum Simulations in Circuit QED
    Jan 15, 2021 · 'Circuit QED' is non-linear quantum optics extended to superconducting electrical circuits and represents a leading architecture for the ...
  29. [29]
    Measuring the similarity of graphs with a Gaussian boson sampler
    Mar 11, 2020 · The adjacency matrix of a graph gets encoded into the Gaussian state of the light modes by tuning the squeezing and interferometer parameters.
  30. [30]
    [1412.8427] Boson Sampling for Molecular Vibronic Spectra - arXiv
    Dec 29, 2014 · We show that a boson sampling device with a modified input state can be used to generate molecular vibronic spectra, including complicated effects such as ...Missing: graph similarity simulation
  31. [31]
    Boson Sampling on a Photonic Chip - Science
    We constructed a quantum boson-sampling machine (QBSM) to sample the output distribution resulting from the nonclassical interference of photons in an ...
  32. [32]
    Photonic implementation of boson sampling: a review
    May 9, 2019 · We review recent advances in photonic boson sampling, describing both the technological improvements achieved and the future challenges.
  33. [33]
    Quantum computational advantage using photons - Science
    We performed Gaussian boson sampling by sending 50 indistinguishable single-mode squeezed states into a 100-mode ultralow-loss interferometer with full ...
  34. [34]
    Quantum computational advantage with a programmable photonic ...
    Jun 1, 2022 · We carry out Gaussian boson sampling (GBS) on 216 squeezed modes entangled with three-dimensional connectivity, using a time-multiplexed and ...
  35. [35]
    Gaussian Boson Sampling with Pseudo-Photon-Number-Resolving ...
    Oct 10, 2023 · We report new Gaussian boson sampling experiments with pseudo-photon-number-resolving detection, which register up to 255 photon-click events.
  36. [36]
    [2205.02586] Post-selection in noisy Gaussian boson sampling - arXiv
    May 5, 2022 · Recently, several experimental breakthroughs based on Gaussian boson sampling pointing to quantum computing supremacy have been presented.Missing: 2023 | Show results with:2023
  37. [37]
    Non-linear Boson Sampling | npj Quantum Information - Nature
    Jan 9, 2023 · Detection of one photon in each of the k auxiliary modes of the gadget heralds a successful simulation of the single-mode non-linearity in mode ...
  38. [38]
    [2301.12814] Simulating lossy Gaussian boson sampling with matrix ...
    Jan 30, 2023 · To understand the effect of photon loss on the scalability of Gaussian boson sampling, we analytically derive the asymptotic operator entanglement entropy ...
  39. [39]
    Classical algorithm for simulating experimental Gaussian boson ...
    Jun 25, 2024 · Here we present a classical tensor-network algorithm that simulates Gaussian boson sampling and whose complexity can be significantly reduced when the photon ...Missing: UChicago 2024
  40. [40]
    Simulating boson sampling in lossy architectures - Quantum Journal
    Aug 5, 2019 · In this work we show that using classical computers, one can efficiently simulate multi-photon interference in all architectures that suffer from an ...
  41. [41]
    Effect of partial distinguishability on quantum supremacy in ... - Nature
    May 11, 2022 · In this paper, we investigate GBS with partial distinguishability using an approach based on virtual modes and indistinguishability efficiency.
  42. [42]
    Verifiable measurement-based quantum random sampling with ...
    Jan 2, 2025 · These results indicate that output distributions of states subject to a significant amount of dephasing noise may still have a TVD well below ...<|separator|>
  43. [43]
    [PDF] Distinguishing noisy boson sampling from classical simulations
    Distinguishing noisy boson sampling from classical simulations is possible by considering low-order quantum multiboson interferences, and the boson density ...
  44. [44]
    Classical boson sampling algorithms with superior performance to ...
    Oct 2, 2017 · Here we present classical boson sampling algorithms and theoretical analyses of prospects for scaling boson sampling experiments.
  45. [45]
    Quantum computational advantage of noisy boson sampling with ...
    Jan 23, 2025 · In this work, we identify the level of partial distinguishability noise that upholds the classical intractability of boson sampling. We find ...Missing: effects dephasing
  46. [46]
    The boundary for quantum advantage in Gaussian boson sampling
    Jan 26, 2022 · Here, we present faster classical GBS simulation methods, including speed and accuracy improvements to the calculation of loop hafnians. We test ...
  47. [47]
    Enhanced Image Recognition Using Gaussian Boson Sampling - arXiv
    Jun 24, 2025 · We apply this scheme to classify images from the MNIST and Fashion-MNIST datasets, achieving a testing accuracy of 95.86% on MNIST and 85.95% on ...
  48. [48]
    Quantum optical reservoir computing powered by boson sampling
    In this work, we show that the random interferometer powering boson sampling can be used to generate the complex dynamics necessary for quantum reservoir ...
  49. [49]
    Hybrid Boson Sampling-Neural Network Architecture for Enhanced ...
    Abstract page for arXiv paper 2510.13332: Hybrid Boson Sampling-Neural Network Architecture for Enhanced Classification. ... [v1] Wed, 15 Oct 2025 ...Missing: Gaussian October
  50. [50]
    Boson sampling finds first practical applications in quantum AI
    Jun 25, 2025 · In their simulated experiment, they began by generating a complex photonic quantum state, onto which simplified image data was encoded. Three ...
  51. [51]
    Boson sampling for molecular vibronic spectra | Nature Photonics
    Aug 24, 2015 · We show that, by means of squeezed states of light coupled to a boson sampling optical network, one can generate molecular vibronic spectra.
  52. [52]
    Simulating Vibronic Spectra by Direct Application of Doktorov ...
    May 15, 2024 · Recently, a problem known as boson sampling has been shown to provide a pathway for solving a computationally intractable problem without the ...
  53. [53]
    [2305.19865] Proof-of-work consensus by quantum sampling - arXiv
    May 31, 2023 · We propose to use a variant, called coarse-grained boson-sampling (CGBS), as a quantum Proof-of-Work (PoW) scheme for blockchain consensus.
  54. [54]
    Solving Graph Problems Using Gaussian Boson Sampling
    Gaussian boson sampling (GBS) is not only a feasible protocol for demonstrating quantum computational advantage, but also mathematically associated with ...
  55. [55]
    [1810.10644] Graph isomorphism and Gaussian boson sampling
    Oct 24, 2018 · We introduce a connection between a near-term quantum computing device, specifically a Gaussian boson sampler, and the graph isomorphism problem.Missing: subgraph | Show results with:subgraph