
Linear optical quantum computing

Linear optical quantum computing (LOQC) is a paradigm of quantum computation that encodes qubits in photonic degrees of freedom, such as polarization or spatial modes of single photons, and performs quantum operations using only linear optical elements—including beam splitters, phase shifters, and mirrors—combined with single-photon sources, detectors, and feed-forward control based on measurement outcomes. This approach achieves universal quantum computation despite the absence of strong nonlinear photon-photon interactions in linear optics by relying on probabilistic, measurement-induced operations that incorporate post-selection and feed-forward to simulate deterministic nonlinearities. Pioneered in the early 2000s, it enables scalable fault-tolerant computation in principle, with success probabilities for key two-qubit gates like the controlled-NOT reaching up to 1/9 without additional resources.

The foundational scheme, proposed by Emanuel Knill, Raymond Laflamme, and Gerard J. Milburn in 2001, demonstrated that efficient quantum computation is feasible with near-term technologies by encoding logical qubits in optical modes and using linear optics for gate operations, while projective measurements provide the necessary nonlinearity. Building on earlier work in optical quantum information, such as deterministic single-qubit operations implemented with wave plates and phase shifters, the KLM protocol introduced error-correcting codes tailored to photonic loss and detection inefficiencies, setting a threshold for fault tolerance at approximately 0.1% (10^{-3}) error rate per component. Subsequent developments, including cluster-state models and fusion-based architectures, have expanded LOQC to measurement-based quantum computing, where large entangled photonic states are generated and consumed via local measurements.

Key advantages of LOQC include the inherently low decoherence of photons in free space or optical fiber, enabling room-temperature operation without cryogenic cooling, and seamless integration with existing telecommunication infrastructure for scalable networking of quantum processors. Photonic qubits also benefit from high-speed operation and the potential for massive parallelism in multi-mode interferometers, as evidenced by demonstrations of high-fidelity two-qubit entangling gates exceeding 99% in recent experiments using advanced photon sources such as Rydberg atoms or quantum dots. However, challenges persist, such as the probabilistic nature of gates requiring heralding and resource overheads, sensitivity to photon loss (with loss thresholds around 1-10% for early schemes and up to 19% in advanced architectures for fault tolerance), and the need for near-unity-efficiency single-photon sources and detectors to suppress errors below fault-tolerant levels.

Recent advances have focused on hybrid and modular implementations to overcome these hurdles, including the use of Gottesman-Kitaev-Preskill (GKP) codes for continuous-variable encoding to boost resource efficiency. For instance, integrated photonic chips have enabled the generation of large-scale Gaussian cluster states with billions of modes, paving the way for distributed quantum networks. As of 2025, further progress includes scalable fault-tolerant architectures using quantum dots for improved single-photon sources and adaptive schemes raising photon loss thresholds to approximately 18.8%, alongside squeezed-cat codes for enhanced error correction. Ongoing research emphasizes error correction tailored to optical platforms and high-dimensional qudit encodings to reduce overheads, positioning LOQC as a leading contender for practical, large-scale quantum information processing.

Introduction

Core Principles

Linear optical quantum computing (LOQC) is a paradigm for quantum information processing that utilizes photons as qubits, manipulated solely through linear optical elements such as beam splitters, phase shifters, and mirrors, which implement unitary transformations on the photonic modes without invoking nonlinear interactions. These components enable the interference and redistribution of photons across spatial, temporal, or polarization modes, forming the basis for encoding and processing quantum information in optical systems. Unlike other architectures that rely on direct particle interactions, LOQC leverages the bosonic nature of photons and their ability to occupy multiple modes simultaneously to perform computations.

A fundamental limitation of linear optics is its inability to achieve universal quantum computing deterministically, as established by no-go results demonstrating that linear optical networks cannot generate entanglement from separable input states without additional resources like ancillary photons or measurements. This stems from the absence of direct photon-photon interactions in linear media, which preserves the total photon number and prevents the creation of the non-classical correlations required for entangling gates. Consequently, two-qubit operations in LOQC are inherently probabilistic, succeeding only upon specific measurement outcomes that herald the desired transformation, thus necessitating post-selection or feed-forward control to approximate deterministic behavior and mitigate the resource overhead.

The mathematical underpinnings of LOQC lie in the representation of linear optical transformations as unitary matrices acting on the creation operators of the photonic modes. For example, a beam splitter with mixing angle \theta transforms the creation operators as a^\dagger \rightarrow \cos\theta \, a^\dagger + \sin\theta \, b^\dagger, \quad b^\dagger \rightarrow -\sin\theta \, a^\dagger + \cos\theta \, b^\dagger, where a^\dagger and b^\dagger are the input mode operators, illustrating how the device linearly mixes the modes while conserving overall photon statistics. This framework ensures that all operations remain within the Gaussian state manifold for coherent or squeezed inputs, but requires non-Gaussian elements (such as photon-counting measurements) for full computational power.

In distinction to nonlinear optical quantum computing approaches, which employ effects like the Kerr nonlinearity to induce effective interactions through cross-phase modulation, LOQC circumvents these experimentally challenging and typically weak nonlinearities by relying exclusively on passive linear elements augmented with detection-based nonlinearity. This design choice enhances practical feasibility in photonic systems, where linear components offer low-loss propagation and compatibility with existing fiber-optic infrastructure, though at the cost of probabilistic gate operation.
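The mode transformations above are small unitary matrices that compose by matrix multiplication. The following minimal NumPy sketch (function names are illustrative, not a library API) collects the beam-splitter and phase-shifter coefficients into 2x2 matrices and checks that their composition is unitary, reflecting photon-number conservation in linear optics.

```python
import numpy as np

# Coefficient matrix of the beam-splitter transformation
#   a_dag -> cos(theta) a_dag + sin(theta) b_dag
#   b_dag -> -sin(theta) a_dag + cos(theta) b_dag
def beam_splitter(theta: float) -> np.ndarray:
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, s],
                     [-s, c]])

# A phase shifter multiplies one mode's creation operator by e^{i phi}.
def phase_shifter(phi: float) -> np.ndarray:
    return np.diag([np.exp(1j * phi), 1.0])

# Any passive two-mode circuit composes into a single 2x2 unitary.
U = phase_shifter(np.pi / 2) @ beam_splitter(np.pi / 4)
assert np.allclose(U @ U.conj().T, np.eye(2))  # unitary: photon number conserved
```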

Historical Context

The foundations of linear optical quantum computing (LOQC) trace back to the 1980s and 1990s, when researchers explored photonic systems primarily for quantum communication rather than computation. Pioneering work by Jeffrey H. Shapiro and colleagues examined quantum limits in optical communications, demonstrating how nonclassical light states, such as two-photon coherent states, could enhance channel capacities beyond classical bounds using linear optical elements like beam splitters and phase shifters. These early proposals highlighted the potential of photons for preserving quantum coherence over distances, laying groundwork for later computational applications through linear optical interference and detection.

A pivotal milestone arrived in 2001 with the Knill-Laflamme-Milburn (KLM) protocol, which proved that universal quantum computation could be achieved using only linear optics, single-photon sources, and adaptive measurements, bypassing the need for strong nonlinear interactions. This theoretical breakthrough spurred experimental efforts in the mid-2000s, including the first demonstration of a controlled-NOT (CNOT) gate using single-photon polarization qubits and linear optical elements in 2003, achieving a success probability of about 1/9 as predicted by the protocol. Concurrently, experiments generated photonic entanglement via post-selected linear optical processes, enabling basic quantum logic operations.

The 2010s marked a transition to integrated photonics for greater scalability, with silicon-based chips enabling compact interferometers and reducing losses compared to bulk optics. A landmark experiment in 2013 demonstrated three-photon boson sampling—a computationally hard task proposed by Scott Aaronson and Alex Arkhipov in 2011—using a programmable integrated optical circuit, verifying output patterns matching theoretical permanents. Boson sampling, rooted in earlier optical interference studies like the 1987 Hong-Ou-Mandel effect, emerged as a non-universal but verifiable quantum advantage paradigm.

In the 2020s, progress accelerated with improvements in scalable single-photon sources and high-efficiency detectors, such as superconducting nanowire single-photon detectors (SNSPDs) achieving near-unity detection efficiency. Experiments scaled to 20 input photons across 60 modes by 2019, sampling Hilbert spaces of dimension 10^{14}, with further refinements in 2022–2024 enabling higher mode counts through integrated platforms and error-mitigated detection. These advances, including heralded sources with >99% indistinguishability from quantum dots, addressed key bottlenecks in photon generation and loss tolerance. In 2025, further progress included hybrid encoding schemes like squeezed-cat codes for improved error resilience in linear optical systems.

Fundamental Components

Photonic Encoding and Modes

In linear optical quantum computing (LOQC), quantum information is encoded using photons as the primary carriers, leveraging their bosonic nature and ability to maintain coherence over long distances. The most common encoding schemes map logical states onto distinct photonic modes, enabling manipulation via linear optical elements while addressing challenges inherent to photonic systems, such as photon loss and decoherence. These encodings prioritize compatibility with optical hardware and the interference requirements for multi-qubit operations.

Dual-rail encoding represents the foundational approach, where a single photon's presence in one of two orthogonal modes defines the logical states: |0\rangle_L = |1\rangle_a |0\rangle_b (photon in mode a, vacuum in mode b) and |1\rangle_L = |0\rangle_a |1\rangle_b. This can be implemented using spatial paths (e.g., separate waveguides or interferometer arms) or polarization (horizontal |H\rangle versus vertical |V\rangle), offering robustness to single-photon loss since finding a photon in neither rail heralds an error without collapsing an encoded superposition. Path-based dual-rail provides flexibility for bulk optics but requires precise phase matching, while polarization encoding simplifies single-qubit rotations using wave plates, though it is sensitive to polarization drift in fibers. A key advantage is error detection via photon-number checks, tolerating up to 50% loss in some protocols, but it scales quadratically with qubit number due to mode proliferation.

Time-bin encoding stores information in the temporal arrival of a single photon within a single spatial mode, defining |0\rangle_L as an "early" time bin and |1\rangle_L as a "late" time bin, often separated by a fixed delay. This scheme excels in fiber-optic environments, minimizing spatial mode complexity and enabling long-distance transmission with low decoherence, as it leverages existing telecom infrastructure and avoids polarization drift. However, it demands ultrafast switching for gates (on the order of picoseconds) and lacks inherent SU(2) symmetry, complicating deterministic single-qubit operations without ancillary resources. Compared to dual-rail, time-bin reduces modal distinguishability issues in integrated platforms but requires high-timing-resolution detectors.

Frequency-bin encoding utilizes discrete spectral modes within a broadband pulse, encoding qubits as a single photon in one of two frequency channels (e.g., lower |0\rangle_L or upper |1\rangle_L bin), manipulated via pulse shapers and electro-optic modulators. It offers linear resource scaling for multi-qubit systems—requiring only O(N) modes versus O(N^2) for spatial dual-rail—and supports parallel processing of multiple qubits in the same spatial-temporal frame, enhancing scalability for fiber-compatible networks. Drawbacks include the need for guard bands to prevent crosstalk, limiting spectral density, and reliance on probabilistic gates with success rates around 7%. This encoding mitigates indistinguishability demands by exploiting frequency mismatches, contrasting with the monochromatic requirements of traditional schemes.

Photonic modes serve as the degrees of freedom for these encodings, expanding the Hilbert space for multi-photon states in LOQC. Spatial modes encompass distinct paths or waveguides, allowing parallel encoding of multiple qubits, while temporal modes divide the photon's arrival window into time bins for sequential processing in a single channel. Orbital angular momentum (OAM) modes, characterized by helical phase fronts with winding number \ell, provide high-dimensional encoding within a single beam, where states |\ell\rangle carry \ell \hbar of orbital angular momentum per photon, enabling compact multi-level (qudit) representations without additional spatial separation.
In multi-photon contexts, these modes support superpositions, such as |1\rangle_a |0\rangle_b for a single-photon qubit, or entangled states like \frac{1}{\sqrt{2}} (|1\rangle_a |1\rangle_b |0\rangle_c + |0\rangle_a |0\rangle_b |2\rangle_c) arising from bosonic interference, assuming indistinguishable photons across modes. For effective quantum interference in LOQC, photons must be indistinguishable in critical properties: matching central wavelength (typically near-infrared for low loss), identical spectral bandwidth (to ensure temporal overlap), and aligned spatial profiles (e.g., Gaussian mode matching >99% for high Hong-Ou-Mandel visibility). Spectral or temporal mismatches reduce bunching probabilities, degrading gate fidelities below 90% in practice. This requirement necessitates heralded single-photon sources, such as parametric down-conversion, with post-selection to filter non-ideal events.

Key challenges in photonic encoding include photon loss, which cannot be undone by copying owing to the no-cloning theorem and which demands efficiencies exceeding 99% for fault-tolerant scaling, and partial distinguishability from source variations (e.g., timing jitter or mode mismatch), which suppresses interference and limits multi-photon coherence times to nanoseconds. Dual-rail encodings detect loss heraldically, but frequency- and time-bin schemes offer better tolerance in noisy channels, though all require active stabilization to maintain mode fidelity.
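As a concrete illustration of dual-rail logic, the toy NumPy sketch below (restricted, for brevity, to the single-photon subspace of two modes; names are illustrative) shows how a beam splitter acts as a single-qubit rotation on a dual-rail qubit.

```python
import numpy as np

# Dual-rail encoding on modes (a, b), single-photon subspace:
#   |0>_L = |1,0> (photon in a), |1>_L = |0,1> (photon in b).
ket0_L = np.array([1.0, 0.0])
ket1_L = np.array([0.0, 1.0])

def bs(theta: float) -> np.ndarray:
    # A beam splitter restricted to the single-photon subspace is
    # simply a rotation of the logical qubit.
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, s], [-s, c]])

# A 50/50 beam splitter (theta = pi/4) creates an equal superposition,
# so single-qubit gates reduce to beam splitters plus phase shifters.
psi = bs(np.pi / 4) @ ket0_L
print(np.abs(psi) ** 2)  # -> [0.5 0.5]: photon equally likely in either rail
```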

Linear Optical Elements

Linear optical elements form the foundational hardware for manipulating photonic states in linear optical quantum computing (LOQC), enabling the implementation of unitary transformations on optical modes without altering photon number. These passive components include beam splitters, phase shifters, mirrors, and wave plates, which collectively allow for the reconfiguration of the spatial, temporal, and polarization degrees of freedom of photons. Photons, typically generated from single-photon sources, serve as inputs to these elements, where they undergo interference and mixing to perform computational operations.

Beam splitters are central to LOQC, functioning as devices that partially transmit and reflect incident light to mix two input optical modes into output modes. A 50/50 beam splitter, with equal transmissivity and reflectivity, is particularly important for demonstrating quantum interference effects and is described by a unitary transformation that couples the creation operators of the input modes \hat{a}^\dagger and \hat{b}^\dagger. Phase shifters introduce a controllable relative phase between modes, modeled as a diagonal unitary transformation \hat{a}_{\text{out}}^\dagger = e^{i\phi} \hat{a}_{\text{in}}^\dagger, which preserves the photon number while adjusting the phase of the field. Mirrors redirect beam paths via total reflection, effectively swapping or inverting modes in optical circuits, while wave plates control polarization states; for instance, half-wave and quarter-wave plates rotate linear polarizations or convert between linear and circular polarizations, enabling manipulation of polarization-based dual-rail encodings.

The operation of these elements relies on linear unitary transformations that conserve total photon number, as governed by the bosonic commutation relations of the mode operators. Each element corresponds to a Hamiltonian that is quadratic in the field operators, ensuring no photon creation or annihilation occurs—only redistribution across modes. A key demonstration of their quantum nature is the Hong-Ou-Mandel (HOM) interference effect, where two indistinguishable single photons incident on the two inputs of a 50/50 beam splitter bunch into the same output mode, resulting in either the |2,0\rangle or |0,2\rangle state with probability 1/2 each and zero probability for one photon per output. This bunching arises from the destructive interference of the amplitudes for the two photons exiting separate ports, highlighting the nonclassical statistics essential for LOQC protocols.

Despite their versatility, linear optical elements have inherent limitations due to their passivity and linearity: they cannot generate entanglement from separable inputs or create photons, restricting direct nonlinear interactions between photons. Operations remain superpositions of single-photon transformations, necessitating ancillary resources and measurements for universality, as linear optics alone cannot implement deterministic two-qubit gates.

In terms of implementation, linear optical elements can be realized in free space using discrete bulk components such as dielectric-coated mirrors, birefringent wave plates, and partially reflecting beam splitters aligned via lenses and mounts, offering flexibility for prototyping but challenging to stabilize due to alignment sensitivities. Alternatively, waveguide-based integrated platforms embed these elements on photonic chips using materials like silicon or silica, where beam splitters are formed by directional couplers, phase shifters by thermo-optic or electro-optic effects, and mirrors by Bragg gratings or waveguide loops, enabling compact, stable, and potentially scalable LOQC architectures.
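Hong-Ou-Mandel bunching can be verified numerically from the permanent rule for linear optical amplitudes; this short sketch (helper names are illustrative) computes the full output distribution for two photons on a 50/50 beam splitter.

```python
import numpy as np
from math import factorial
from itertools import permutations

def permanent(M: np.ndarray) -> complex:
    # Naive permanent via permutation sum -- fine for tiny matrices.
    n = M.shape[0]
    return sum(np.prod([M[i, p[i]] for i in range(n)])
               for p in permutations(range(n)))

U = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # 50/50 beam splitter

def output_prob(s, U) -> float:
    # Probability of output pattern s for input |1,1>:
    # |perm(U_s)|^2 / prod(s_j!), with row i of U repeated s_i times.
    rows = [i for i, n in enumerate(s) for _ in range(n)]
    sub = U[np.ix_(rows, [0, 1])]
    return abs(permanent(sub)) ** 2 / np.prod([factorial(n) for n in s])

for s in [(2, 0), (1, 1), (0, 2)]:
    print(s, round(output_prob(s, U), 3))  # -> 0.5, 0.0, 0.5 (HOM bunching)
```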

Ancillary Resources

In linear optical quantum computing (LOQC), ancillary resources play a crucial role in enabling operations beyond passive linear transformations, primarily through the generation of non-classical light states and precise photon detection. Single-photon sources are essential for injecting well-defined quantum states into optical circuits, with key requirements including high purity (minimal multi-photon emission), efficiency (collection and coupling rates), and indistinguishability (overlap of quantum states from successive emissions to ensure high-visibility interference). Heralded sources based on parametric down-conversion (PDC) in nonlinear crystals produce correlated photon pairs, where detection of one photon heralds the presence of the other as a single-photon state, achieving purities exceeding 90% and indistinguishabilities up to 99% in optimized setups. Quantum dot sources, such as semiconductor nanostructures, offer on-demand emission with brightness approaching one photon per excitation pulse and near-unity indistinguishability due to their discrete energy levels, making them promising for scalable LOQC despite challenges in integration with optical fibers.

Photon detection is another vital ancillary resource, distinguishing between vacuum and photon-arrival events while minimizing errors from dark counts (false detections without incident photons). Bucket detectors, typically avalanche photodiodes (APDs), operate as on/off devices that cannot resolve photon number, offering high efficiency (~70%) but limited utility for multi-photon states due to saturation. In contrast, number-resolving detectors like superconducting nanowire single-photon detectors (SNSPDs) can distinguish multiple photons per pulse with efficiencies over 90%, dark count rates below 0.1 Hz, and timing jitter under 20 ps, enabling accurate projection measurements critical for LOQC protocols. These detectors operate at cryogenic temperatures but provide superior performance for applications requiring precise photon counting.

Measurement-induced nonlinearity addresses the inherent limitation of linear optics, which cannot produce photon-number-dependent phase shifts without additional resources. Adaptive measurements with feed-forward—where detection outcomes dynamically adjust subsequent optical elements—enable effective nonlinear gates, such as the nonlinear sign-shift (NS) gate, by post-selecting on specific detection patterns. For instance, post-selected Bell measurements on ancillary photons can project input states onto entangled outcomes, simulating controlled-phase interactions with success probabilities around 1/9 in basic implementations. This approach relies on fast electronics for feedback, achieving gate fidelities above 90% in demonstrations.

The use of ancillary resources introduces significant overhead in LOQC, particularly in gate-based schemes where probabilistic operations necessitate multiple attempts for high success rates. In the foundational Knill-Laflamme-Milburn (KLM) protocol, implementing a single logical two-qubit gate requires on the order of 10^4 physical photons across ancillary modes to suppress errors below fault-tolerant thresholds, due to the low inherent success probability (~1%) of basic nonlinear elements and the need for resource-state purification. This overhead scales with circuit depth but can be mitigated through optimized resource states and error correction, though it remains a primary challenge for practical scalability.
Preparation of specific ancillary states, such as the Fock states |0\rangle (vacuum) and |1\rangle (single photon), is foundational for these operations and typically leverages PDC paired with heralding detection. The vacuum state |0\rangle is inherently available as an unused optical mode, while the single-photon Fock state |1\rangle is heralded from PDC by detecting one photon of a pair, yielding purities over 95% with conditional efficiencies around 30-50% in state-of-the-art systems. Higher-fidelity preparation involves post-selection on number-resolving detections to suppress multi-photon contamination, essential for initializing qubits and ancillary resources in LOQC circuits.
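As a rough illustration of why heralding trades rate for purity, the following Monte Carlo toy model (parameter values are arbitrary, and the thermal pair statistics are a standard simplification, not figures from the cited experiments) estimates the heralded single-pair purity of a weakly pumped PDC source.

```python
import numpy as np

# Toy heralded-source model: a weakly pumped pair source emits n pairs
# with thermal statistics p(n) = (1 - x) x^n; a click in the herald arm
# flags a signal photon, contaminated by multi-pair emission events.
rng = np.random.default_rng(1)
x = 0.05            # pump strength; mean pair number ~ x / (1 - x)
eta_herald = 0.6    # herald detector efficiency (bucket detector)

n_pairs = rng.geometric(1 - x, size=1_000_000) - 1   # thermal pair number
clicks = rng.binomial(n_pairs, eta_herald) >= 1      # on/off herald click

heralded = n_pairs[clicks]
purity = np.mean(heralded == 1)  # fraction of heralds with exactly one pair
print(f"herald rate = {clicks.mean():.3f}, single-pair purity = {purity:.3f}")
```

Raising the pump strength x increases the herald rate but admits more multi-pair events, which is the basic tension that number-resolving heralding and post-selection are used to relieve.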

Core Protocols

KLM Scheme for Universality

The Knill-Laflamme-Milburn (KLM) scheme provides a foundational protocol for achieving universal quantum computation using only linear optical elements, single-photon sources, and photodetectors, by introducing nonlinearity through post-selected measurements. This approach circumvents the inherent limitations of linear optics, which alone cannot generate entanglement or implement universal gates, by constructing a nonlinear sign (NS) gate that conditionally imparts a phase shift on multi-photon components. The scheme relies on teleportation-like operations to propagate quantum information while incorporating probabilistic gates, enabling the construction of a complete set of universal quantum operations when combined with single-qubit rotations achievable via phase shifters and beam splitters.

Key components of the KLM scheme include dual-rail qubit encoding, where a logical qubit is represented by the spatial modes of a single photon—specifically, |0⟩_L = |10⟩ (photon in mode 0, vacuum in mode 1) and |1⟩_L = |01⟩—and ancillary single photons used in the NS gate implementation. The core operation is the NS gate, which acts on a single optical mode and applies a conditional sign flip to the two-photon Fock state while leaving the vacuum and single-photon states unchanged. This gate is realized through an interferometric network involving beam splitters, phase shifters, and two ancilla modes (one prepared with a single photon, the other in vacuum), followed by post-selection on specific photodetection outcomes that herald successful operation. To achieve a two-qubit conditional sign (CSIGN) gate, two NS gates are combined with linear optics and partial Bell-state projections, where the control qubit's |1⟩_L rail interferes with an ancilla pair, projecting onto a symmetric subspace via measurement to enforce the conditional phase shift on the target qubit.

Mathematically, the NS gate transforms an input state in a single mode as |\psi\rangle = a_0 |0\rangle + a_1 |1\rangle + a_2 |2\rangle \mapsto a_0 |0\rangle + a_1 |1\rangle - a_2 |2\rangle, with a success probability of 1/4 upon detecting one photon in a designated output mode and vacuum in another. In the dual-rail encoding, the CSIGN gate applies this nonlinearity effectively between qubits by routing the control's excited rail through the NS setup, yielding a phase shift on the |11⟩_L component only when both qubits are in |1⟩_L, heralded by dual photodetections. The partial Bell-state projection is implemented nondeterministically using linear optics and single-photon detectors, succeeding with probability 1/2 for the relevant outcomes.

The basic CSIGN gate in the KLM scheme succeeds with probability 1/16, as it requires two successful NS operations and post-selections, necessitating on average 16 attempts per gate and introducing significant resource overhead from discarded failures. This probabilistic nature is mitigated by encoding logical qubits into larger error-correcting codes, such as concatenated codes, which allow fault-tolerant computation with polynomial overhead in ancilla photons and optical elements, as the gate success probability can be boosted arbitrarily close to 1 using additional teleportations and purifications. Variants of the NS gate reduce ancilla overhead and improve efficiency; for instance, a simplified design using only two beam splitters and one ancilla photon achieves a success probability of (3 − √2)/7 ≈ 0.227 while maintaining the required nonlinearity. These modifications lower the resource demands for constructing the CSIGN gate without altering the overall universality of the scheme.
Theoretically, the KLM scheme ensures scalable quantum computation by demonstrating that fault-tolerant universal operations require only polynomially many linear optical components and single-photon resources, provided single-photon sources and detectors meet modest efficiency thresholds.
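In the truncated Fock basis the heralded NS transformation is just a diagonal matrix, which makes the CSIGN bookkeeping easy to check numerically; the sketch below models only the successful, post-selected branch (the ancilla interferometer itself is omitted, an intentional simplification).

```python
import numpy as np

# NS gate in the {|0>, |1>, |2>} Fock basis of a single mode: only the
# two-photon amplitude picks up a minus sign. Physically this is applied
# conditionally, heralded by a specific ancilla detection pattern.
NS = np.diag([1.0, 1.0, -1.0])

psi_in = np.array([0.6, 0.0, 0.8])   # a0|0> + a2|2>
psi_out = NS @ psi_in
print(psi_out)                        # -> [ 0.6  0.0 -0.8]

# Two heralded NS gates (each succeeding with p = 1/4) make up one
# CSIGN attempt, so the bare gate succeeds with probability 1/16.
print("CSIGN success probability:", 0.25 * 0.25)
```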

Boson Sampling Paradigm

Boson sampling is a computational task that involves generating samples from the output photon-number distribution produced by indistinguishable bosons passing through a linear optical interferometer, a problem conjectured to be computationally hard for classical computers. In this paradigm, single photons serve as the bosons, and the task demonstrates a form of quantum advantage without requiring full universality.

The standard setup for boson sampling uses N indistinguishable single photons injected into the first N input modes of a linear optical network comprising M modes, where typically M \geq N. These photons evolve under a random unitary transformation U, implemented via beam splitters and phase shifters, before detection at the output yields photon-number statistics. The probability of observing a specific output configuration s, where s denotes the number of photons in each output mode, is given by P(s) = \frac{|\mathrm{perm}(U_s)|^2}{\prod_j s_j!}, with \mathrm{perm} denoting the permanent of the (possibly row-repeated) submatrix U_s of U formed by rows corresponding to output modes (repeated by multiplicity s_j) and columns corresponding to input modes; the denominator accounts for the normalization of the output Fock state. Computing the permanent exactly is #P-complete, and even approximate sampling from this distribution to within a small error is believed to be classically intractable.

In 2011, Scott Aaronson and Alex Arkhipov formalized the problem and conjectured that there exists no efficient classical algorithm capable of approximately sampling from the ideal output distribution, even allowing for a modest error tolerance. This Aaronson-Arkhipov conjecture underpins the hardness of the task, positioning boson sampling as a benchmark for intermediate-scale photonic quantum devices to exhibit quantum advantage.

Beyond universal quantum computing, boson sampling finds applications in verifying photonic quantum hardware and exploring quantum advantage in non-universal settings, such as simulating molecular vibronic spectra or graph optimization problems through scattershot variants. An extension known as Gaussian boson sampling replaces single-photon Fock states with squeezed vacuum states as inputs, leveraging Gaussian operations to produce highly entangled multimode states whose sampling remains classically hard, potentially enabling larger-scale demonstrations.
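The permanent formula above can be evaluated directly for small instances. The sketch below (helper names are illustrative, not a library API) implements Ryser's exact exponential-time permanent formula and computes one output probability for three photons in a Haar-random 5-mode interferometer.

```python
import numpy as np
from math import factorial
from itertools import combinations

def permanent_ryser(M: np.ndarray) -> complex:
    # Ryser's inclusion-exclusion formula; exponential time, fine for small n.
    n = M.shape[0]
    total = 0.0 + 0.0j
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            total += (-1) ** k * np.prod(M[:, list(cols)].sum(axis=1))
    return (-1) ** n * total

def output_probability(U, inputs, outputs) -> float:
    # P(s|t) = |perm(U_{s,t})|^2 / (prod s_j! * prod t_j!), where U_{s,t}
    # repeats row i s_i times and column j t_j times.
    rows = [i for i, m in enumerate(outputs) for _ in range(m)]
    cols = [j for j, m in enumerate(inputs) for _ in range(m)]
    sub = U[np.ix_(rows, cols)]
    norm = (np.prod([factorial(m) for m in outputs])
            * np.prod([factorial(m) for m in inputs]))
    return abs(permanent_ryser(sub)) ** 2 / norm

# Three photons in the first 3 of 5 modes of a random interferometer.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 5)) + 1j * rng.normal(size=(5, 5))
U, _ = np.linalg.qr(X)  # QR of a Ginibre matrix gives a random unitary
print(output_probability(U, inputs=[1, 1, 1, 0, 0], outputs=[1, 0, 1, 0, 1]))
```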

Fusion-Based and Measurement-Driven Approaches

Fusion-based approaches in linear optical quantum computing (LOQC) represent a class of post-KLM protocols that leverage probabilistic entangling measurements, known as fusion gates, to construct large-scale entangled resource states for measurement-based quantum computing (MBQC). These methods address the high resource overhead of the KLM scheme by focusing on the offline preparation of compact cluster states or graph states, which are then interconnected via fusions to enable universal computation through local measurements and feed-forward corrections. Unlike gate-teleportation models, fusion-based protocols emphasize modular entanglement generation, where small ancillary states are fused probabilistically, allowing for fault-tolerant scaling with reduced ancilla requirements.

Fusion gates are the core primitives in these approaches, implemented using linear optical elements like polarizing beam splitters and single-photon detectors to perform partial or complete Bell-state measurements on photonic qubits. Type-I fusion gates execute a partial Bell measurement, projecting two input qubits onto a parity subspace (e.g., even or odd photon-number parity) with a success probability of 1/2; upon success, they connect two separate cluster states by effectively merging them into a single larger state with one fewer qubit, while failure outcomes apply a Pauli Z correction to maintain entanglement. Type-II fusion gates perform a complete Bell measurement, also with 1/2 success probability, but their failure mode introduces redundancy by encoding the surviving qubit in a two-photon state, enabling the creation of fused states with built-in error protection. These gates facilitate the "growing" of cluster states from elementary entangled pairs, such as Bell states, by iteratively linking horizontal chains (via Type-I) and adding vertical connections (via Type-II) to form 2D lattices suitable for MBQC.

The seminal scheme by Browne and Rudolph (2005) introduced a resource-efficient framework for scalable MBQC in LOQC, utilizing these fusion gates to generate cluster states without relying on the resource-intensive teleported nonlinear gates of KLM. In this protocol, computation proceeds by preparing small, constant-sized resource states (e.g., 2-4 photon graph states) offline and fusing them on demand, with measurement outcomes dictating adaptive feed-forward operations like Pauli corrections to steer the logical qubit evolution. This measurement-driven paradigm contrasts with gate-based models by pre-preparing the entangled resource state as a "one-way" computation medium, where the entire algorithm is encoded in the measurement pattern rather than sequential gate applications. The approach achieves universality through a combination of fusions and local single-qubit measurements, enabling Clifford and non-Clifford operations via appropriate basis choices.

Resource analysis highlights the efficiency gains of fusion-based methods over KLM, which demands polynomial ancilla scaling (e.g., O(n^2) resources for n-qubit operations). In contrast, fusion networks require only O(1) ancillae per logical site, as small resource states are reused across the computation, with overall overhead scaling linearly with the number of fusions needed for the cluster size; for example, generating an n-qubit linear cluster requires approximately 2n−1 fusions, each heralded at 1/2 probability, yielding a linear expected resource cost.
This reduction stems from offline entanglement distribution and tolerance to failures through redundant encodings, allowing percolation-based growth where excess attempts compensate for losses without exponential blowup (a toy numerical illustration of this growth process appears at the end of this subsection).

Recent variants in the 2020s have refined these protocols for enhanced fault tolerance, incorporating fusion-based construction of topological cluster states directly within MBQC frameworks. For instance, the fusion-based quantum computation (FBQC) model integrates error-correcting codes like the surface code into the fusion network, where measurement outcomes propagate parity checks across the network, achieving erasure thresholds up to 11.98% and loss tolerance around 10.4% with boosted Bell pairs—far less stringent than KLM's requirement of near-perfect (>99.999%) component efficiency. These advancements maintain the measurement-driven essence, with feed-forward adapting to detection outcomes to build and correct entanglement on-the-fly, paving the way for hybrid error-corrected LOQC architectures.
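Returning to the resource counting above, the toy Monte Carlo below (assumed parameters: 3-photon chains, Type-I fusions succeeding with p = 1/2, failure pruning one end qubit, in the spirit of simple Browne-Rudolph-style estimates rather than any published architecture) gives a feel for the expected photon cost of growing a linear cluster.

```python
import numpy as np

rng = np.random.default_rng(2)

def grow_chain(target_len: int, chunk: int = 3, p: float = 0.5) -> int:
    """Grow a linear cluster to target_len by fusing on 'chunk'-photon
    chains; returns total photons consumed (a rough resource count)."""
    length, photons = chunk, chunk
    while length < target_len:
        photons += chunk                  # consume a fresh small chain
        if rng.random() < p:
            length += chunk - 1           # fusion merges, losing one qubit
        else:
            length = max(1, length - 1)   # failure trims the chain end
    return photons

costs = [grow_chain(50) for _ in range(2000)]
print(f"mean photons to reach length 50: {np.mean(costs):.1f}")
```

Because each successful fusion adds more qubits than a failure removes, the expected cost grows only linearly in the target length, which is the essence of the claimed advantage over gate-teleportation overheads.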

Experimental Implementations

Bulk Optic Systems

Bulk optic systems in linear optical quantum computing rely on discrete free-space optical components arranged in tabletop configurations to manipulate photonic qubits. These setups typically involve mirrors for beam steering, lenses for focusing and collimation, beam splitters for mode mixing, and phase shifters for path-length control, with fiber coupling employed to interface with single-photon sources and detectors for precise mode matching. Such arrangements enable the construction of reconfigurable interferometers using linear optical elements like polarizing beam splitters and half-wave plates as bulk components.

A key early experiment was the 2004 demonstration of a controlled-NOT gate by O'Brien et al., which achieved an average gate fidelity of 0.90 as determined by quantum process tomography, marking a milestone in realizing entangling operations with photons. In the 2010s, these systems facilitated boson sampling experiments as a testbed paradigm, with implementations involving 4 to 8 photons to verify multi-photon interference in custom free-space multimode interferometers. A notable milestone was the 2016 demonstration of 5-photon boson sampling in free-space optics, where fidelity metrics and error rates underscored both the potential and the practical challenges of scaling interference patterns.

The flexibility of bulk optic systems allows for rapid prototyping of arbitrary interferometer designs, facilitating experimentation with varied topologies without fabrication constraints. Additionally, they support high-visibility quantum interference, routinely exceeding 99% due to the quality of discrete components and careful alignment.

Despite these strengths, bulk optic systems face significant limitations in stability and scalability. Interferometric phase stability is highly sensitive to environmental factors, with mechanical vibrations and thermal drifts causing phase instabilities that degrade performance over extended runs. Photon loss accumulates rapidly, often reaching 10-20% per component from imperfect fiber coupling, surface reflections, and absorption in optical elements, which confines reliable demonstrations to small scales involving only a few spatial or temporal modes. These issues result in low overall success rates for multi-photon events, typically compounded by detection inefficiencies.

Integrated Photonic Platforms

Integrated photonic platforms implement linear optical quantum computing (LOQC) using monolithic chips fabricated from materials such as silicon or thin-film lithium niobate (TFLN), where light is confined to waveguides for stable and compact manipulation. Key components include waveguide-based beam splitters realized via directional couplers or multimode interference (MMI) structures, phase shifters employing thermo-optic effects in silicon or electro-optic modulation in lithium niobate, and grating couplers for efficient fiber-to-chip interfacing. These elements enable the construction of reconfigurable interferometers essential for LOQC protocols like boson sampling and KLM-inspired gates.

These platforms offer significant advantages over bulk optics, including enhanced stability due to fixed alignments, propagation losses below 1 dB/cm in optimized silicon or TFLN waveguides, and potential scalability to hundreds of modes through very-large-scale integration (VLSI) techniques. The compact footprint, with devices fitting on millimeter-scale chips, facilitates dense packing of optical modes and reduces susceptibility to environmental perturbations, making them suitable for practical quantum information processing.

Seminal experiments include the 2013 demonstration by Spring et al., which achieved boson sampling with three indistinguishable single photons in a six-mode waveguide interferometer on a photonic chip, verifying nonclassical multiphoton interference. More recent advances, such as the 2023 VLSI photonic chip by Bao et al., implemented reconfigurable graph-based interferometers supporting up to dozens of modes for sampling tasks akin to boson sampling, with statistical overlaps exceeding 0.98. By 2025, monolithically integrated platforms have scaled to multi-qubit operations, incorporating on-chip heralded sources and detectors for reconfigurable quantum circuits, supporting scalable networking via chip-to-chip interconnects.

Fabrication leverages CMOS-compatible processes for silicon, enabling mass production via 300-mm wafer foundries with over 20 process steps, while TFLN platforms utilize ion-slicing and bonding for thin-film integration. A persistent challenge lies in simulating single-photon nonlinearities required for universal gates using linear elements and feed-forward measurements, as direct on-chip nonlinearity remains limited without ancillary resources like integrated detectors.

Performance benchmarks highlight photon indistinguishability exceeding 95%, evidenced by Hong-Ou-Mandel visibilities above 99% in silicon interferometers, and two-qubit gate fidelities reaching 99% for fusion-based operations in recent chips. These metrics underscore the maturity of integrated platforms for scaling LOQC beyond proof-of-principle demonstrations.
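The unit cell of such reconfigurable meshes is the Mach-Zehnder interferometer: two 50/50 couplers around an internal phase shifter yield a beam splitter of arbitrary effective reflectivity. The sketch below (a generic textbook construction, not tied to any specific chip or foundry process) verifies this tunability numerically.

```python
import numpy as np

# Mach-Zehnder interferometer: two 50/50 couplers around an internal
# phase theta (plus an external phase phi) implement a tunable beam
# splitter -- the unit cell of Reck/Clements-style reconfigurable meshes.
def coupler() -> np.ndarray:
    return np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)

def mzi(theta: float, phi: float) -> np.ndarray:
    inner = np.diag([np.exp(1j * theta), 1.0])
    outer = np.diag([np.exp(1j * phi), 1.0])
    return outer @ coupler() @ inner @ coupler()

for theta in (0.0, np.pi / 2, np.pi):
    U = mzi(theta, 0.0)
    # Bar-port transmission follows sin^2(theta/2) in this convention.
    print(round(abs(U[0, 0]) ** 2, 3))  # -> 0.0, 0.5, 1.0
```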

Hybrid and Scalable Architectures

Hybrid architectures in linear optical quantum computing (LOQC) integrate photonic systems with other quantum technologies to address limitations in detection efficiency, entanglement distribution, and error correction, enabling pathways toward fault-tolerant operation. These approaches leverage the strengths of photonics—such as room-temperature compatibility and low decoherence—while incorporating cryogenic components for high-fidelity single-photon detection or matter-based systems for robust storage and entanglement swapping. For instance, superconducting nanowire single-photon detectors (SNSPDs) provide near-unity detection efficiency (>98%) at cryogenic temperatures, essential for heralding operations in LOQC protocols.

One prominent approach combines photonic circuits with superconducting circuits, particularly for on-chip detectors that mitigate losses in linear optical networks. In these setups, photonic qubits are processed using beam splitters and phase shifters, with SNSPDs integrated via cryogenic packaging to enable efficient heralding without bulky external detectors. This has been demonstrated in hybrid optomechanical systems where superconducting qubits couple to photonic modes through mechanical resonators, achieving coherent interactions with rates on the order of MHz. Such hybrids reduce the resource overhead for non-deterministic gates in the KLM scheme by improving heralding success probabilities to over 90%.

Another key hybrid involves photonic interfaces with trapped-ion systems for entanglement swapping, facilitating modular quantum networks. Here, ions serve as matter qubits with long coherence times (>1 second), while photons mediate remote entanglement via interference and detection. In a 2025 experiment, two trapped-ion modules were interconnected photonically over an optical network link, demonstrating distributed quantum computation with remote entanglement fidelity of 96.9% and a distributed gate fidelity of 86% via quantum gate teleportation. This approach enables entanglement between ion-photon pairs, paving the way for networked LOQC architectures.

Scalable designs in hybrid LOQC emphasize modular architectures connected via fusion links, where small entangled photonic states (e.g., Bell pairs or small cluster states) are fused across modules to build larger logical qubits. Fusion operations, involving type-I (two-mode) or type-II (four-mode) interferometers followed by detection, allow probabilistic entanglement generation with post-selection, scaling to networks of 100+ qubits. Proposals from 2024-2025 outline chip-to-chip links using low-loss waveguides and cryogenic interconnects, targeting fault-tolerant thresholds with overheads below 10^4 physical qubits per logical qubit. A 2025 demonstration scaled a modular photonic system using 24 source chips, 6 refinery chips, and 5 QPU chips, implementing measurements on a 12-mode entangled state.

PsiQuantum's fusion-based approach exemplifies scalable hybrid LOQC, utilizing silicon photonic chips for resource state generation and fusion modules for error-corrected computation. This architecture generates small entangled resource states (e.g., 4-8 qubits) on separate chips, fusing them probabilistically to construct fault-tolerant logical qubits, with a roadmap aiming for a million-qubit system by 2030 via CMOS-compatible fabrication. Integrated SNSPDs and cryogenic dilution refrigerators ensure low-noise operation, with simulated error rates below 0.1% for fusion gates under realistic losses.

In parallel, Xanadu's continuous-variable (CV) photonic approach hybridizes squeezed-light sources with integrated waveguides and homodyne detectors, encoding qumodes rather than single-photon qubits for Gaussian operations.
This enables measurement-based LOQC with modular processors linked via optical fiber, demonstrating a scalable building block in 2025 that generates error-protected CV cluster states with squeezing levels >10 dB. CV hybrids reduce the need for single-photon sources by using multimode interferometers, achieving universal gates with infidelity <5% in small-scale networks.

Efficient interfaces are critical for these hybrids, particularly fiber-to-chip coupling and cryogenic integration to minimize noise and losses. Grating couplers with efficiencies >90% enable stable light transfer from optical fibers to photonic chips, while cryogenic packaging maintains alignment across temperature cycles from 300 K to 4 K with coupling losses <1 dB. These techniques support low-noise operation in dilution refrigerators, where SNSPDs operate near 1 K, preserving photonic coherence for fusion links in multi-chip setups.

Recent progress includes 2025 demonstrations of hybrid photonic systems achieving 10-mode universal operations with two-qubit gate error rates below 1%, as shown in ion-photon networked setups. These milestones highlight the viability of hybrids for scaling beyond 50 qubits, with ongoing efforts focusing on automated fusion routing to suppress error accumulation.

Comparisons and Challenges

Protocol Trade-offs

Linear optical quantum computing (LOQC) protocols exhibit significant trade-offs in resource demands, computational universality, and practical implementation, primarily due to the inherent limitations of linear optics in generating photon-photon interactions. The Knill-Laflamme-Milburn (KLM) scheme achieves universality through probabilistic gates that rely on postselected measurements and ancillary photons, but this introduces substantial overhead: core primitives succeed only probabilistically (the nonlinear sign-shift (NS) gate with probability 1/4, the postselected controlled-NOT with 1/9), necessitating repeated attempts and exponential resource scaling for deep circuits. In contrast, boson sampling excels in efficiency for specialized sampling tasks, requiring no ancillas or adaptive measurements, as it leverages passive linear interferometers to produce distributions believed intractable for classical computers, though it lacks programmability for general-purpose computation.

Fusion-based and measurement-based quantum computing (MBQC) approaches mitigate some of KLM's overhead by modularly assembling large entangled resource states (e.g., cluster states) via type-I and type-II fusion measurements, which entangle qubits probabilistically with success rates up to 1/2 per fusion. These methods demand only linear scaling in ancillary resources compared to KLM's polynomial overhead, as fusions integrate entanglement generation and computation in a single step, reducing circuit depth and error propagation; however, they require pre-fabricated resource state generators and fixed routing, limiting reconfigurability without additional hardware. Fusion-based protocols also offer superior error tolerance, with photonic loss thresholds around 10.4% per fusion (or 2.7% per photon) when using error-corrected encodings like the (2,2)-Shor code, surpassing KLM's more stringent requirements for component efficiencies.

Key efficiency metrics highlight these disparities: KLM's controlled-Z (CZ) gate achieves a success probability of 2/27, incurring high photon overhead (e.g., up to n+1 photons for n-mode ancillas) and increased circuit depth from feed-forward corrections, while boson sampling operates passively with minimal depth but a fixed input-output mapping. Fusion-based MBQC balances this with lower photon counts per operation but trades off against the need for high-fidelity resource state preparation, where failure rates compound across the fusion network. Overall, these metrics underscore a spectrum of universality, from boson sampling's niche applicability—efficient for demonstrating quantum advantage in sampling without universality—to KLM's full but resource-intensive gate model, and fusion/MBQC's scalable path toward fault-tolerant universal computation with modest overhead. The table below summarizes these trade-offs.
Protocol | Resources (ancillas/photons) | Scalability | Experimental feasibility | Universality
KLM (gate-based) | Polynomial overhead (e.g., n+1 photons per gate); high ancilla count for fault tolerance | Poor due to success-probability decay (p^N for N gates) | Moderate; small-scale gates demonstrated but limited by high efficiency requirements (near 99%) on components | Full universal QC, probabilistic
Boson sampling | Minimal; no ancillas, O(n) photons for n modes | High for fixed tasks; linear in modes but non-adaptive | High; implemented with up to 20+ photons in interferometers | Specialized (sampling only), non-universal
Fusion-based MBQC | Linear overhead; constant-size resource states (e.g., 6-ring clusters) | Good; modular fusion networks with topological scaling | High; leverages integrated photonics, tolerant to ~10.4% loss | Full fault-tolerant QC
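To make the scaling contrast concrete, the short sketch below (illustrative numbers taken from the table and the surrounding text, not from any specific benchmark) compares the multiplicative gate-success decay of a KLM-style circuit with the linear, heralded retry cost of fusions.

```python
# Illustrative scaling comparison using figures quoted in this section.
p_cz = 2 / 27            # KLM CZ gate success probability
for n_gates in (5, 10, 20):
    # Without error correction, all gates must succeed in one shot.
    print(f"{n_gates:>2} KLM CZ gates: P(all succeed) = {p_cz ** n_gates:.2e}")

p_fusion = 0.5           # heralded fusion success probability
for n_fusions in (5, 10, 20):
    # Heralded failures can simply be retried, so cost grows linearly.
    print(f"{n_fusions:>2} fusions: expected attempts = {n_fusions / p_fusion:.0f}")
```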

Scalability Limitations

Photon loss represents the primary scalability barrier in linear optical quantum computing (LOQC), as photons are inherently susceptible to absorption and scattering in optical components such as waveguides, beam splitters, and detectors. Typical loss rates range from 1% to 10% per component, which accumulates rapidly in large-scale circuits and degrades computational fidelity. This necessitates fault-tolerant error correction protocols, such as those based on surface codes adapted for photonic systems, imposing resource overheads exceeding 1000:1 in terms of additional photons and optical elements to achieve reliable operation.

Indistinguishability of input photons is essential for achieving high-fidelity Hong-Ou-Mandel interference, yet practical challenges arise from spectral and temporal mismatches between photons generated from different sources or propagated through varying paths. These mismatches reduce interference visibility, introducing errors that scale with circuit depth and photon number. Additionally, environmental noise, including phonon-induced dephasing in solid-state sources and thermal fluctuations, further diminishes coherence times, limiting the viable circuit depth in LOQC implementations.

Resource demands escalate dramatically in LOQC protocols like the Knill-Laflamme-Milburn (KLM) scheme, where achieving fault-tolerant universality requires exponentially increasing numbers of ancilla photons to boost nondeterministic gate success probabilities to near-unity levels. Although large-scale cluster-state generations have reached billions of modes, current experimental setups for fully programmable interferometers remain constrained to mode counts in the thousands due to fabrication complexities and precision requirements.

Detection inefficiencies compound these issues, particularly with non-number-resolving detectors that cannot distinguish between single and multiple photons arriving simultaneously, thereby restricting accurate sampling of multi-photon output distributions in tasks like boson sampling. This limitation hampers verification of quantum advantage and full state reconstruction in larger systems.

Mitigation efforts focus on integrating photonic error-correcting codes, such as photonic cluster-state encodings, to tolerate loss rates up to 1% while minimizing overhead through optimized decoding algorithms. Advances in materials, including low-loss silicon nitride waveguides achieving propagation losses below 2 dB/m without high-temperature annealing, enable longer photon lifetimes and reduced error accumulation in integrated platforms.
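The compounding effect of per-component loss is easy to quantify; the snippet below (loss figures are the illustrative 1-10% range quoted above, not measured values) shows how single-photon survival probability falls with circuit depth.

```python
# Photon survival after n lossy components: transmissions multiply.
def survival(loss_per_component: float, n_components: int) -> float:
    return (1.0 - loss_per_component) ** n_components

for loss in (0.01, 0.05, 0.10):
    row = ", ".join(f"depth {d}: {survival(loss, d):.3f}" for d in (10, 50, 100))
    print(f"loss {loss:.0%} -> {row}")

# An N-photon coincidence survives with probability survival(...) ** N,
# which is why multi-photon success rates collapse so quickly with depth.
```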

Recent Advances and Future Prospects

In 2023, researchers at the University of Science and Technology of China demonstrated a significant advancement in Gaussian boson sampling using the Jiuzhang 3.0 photonic quantum processor, which registered up to 255 photon-click events, establishing a new benchmark for quantum computational advantage in sampling tasks. This experiment highlighted the scalability of linear optical systems by overcoming photon loss challenges through pseudo-photon-number-resolving detection, enabling samples that would take classical supercomputers an estimated 10^24 years to generate. Complementing this, Xanadu's 2025 demonstrations of scalable photonic modules, including error-resistant qubit generation, have paved the way for modular architectures capable of integrating hundreds of photonic components while maintaining coherence. In June 2025, Xanadu demonstrated on-chip generation of error-resistant Gottesman-Kitaev-Preskill (GKP) qubits, advancing fault-tolerant encoding. These developments underscore progress toward practical applications, such as pattern recognition.

Fault-tolerant demonstrations in linear optical quantum computing have also advanced, with Xanadu achieving initial encoded photonic qubits resistant to errors in 2025, forming building blocks for larger logical systems. While full-scale logical qubit arrays remain emerging, these efforts build on fusion-based protocols to encode logical qubits across multiple photons, demonstrating preliminary error suppression in small-scale networks. In parallel, photonic implementations of error correction have shown promise, particularly through fusion networks that enable surface-code-like protections adapted for optical losses. In November 2025, a leading photonic quantum computing company advanced to Stage B of DARPA's Quantum Benchmarking Initiative, securing up to $15 million in funding.

Error correction in photonic systems has benefited from fusion-based approaches, where entangled measurements project resource states into logical encodings, achieving erasure thresholds exceeding 11% in hardware-agnostic models. Recent adaptations of surface codes via photonic fusions have targeted biased noise profiles inherent to optical systems, with simulations indicating achievable error rates below 1% for photon loss when using optimized fusion success probabilities around 99%. These thresholds are critical for scaling, as they allow fault-tolerant operation despite imperfect single-photon sources and detectors.

Commercial progress has accelerated, exemplified by PsiQuantum's September 2025 announcement of a $1 billion funding round to develop million-qubit-scale, fault-tolerant photonic quantum computers using silicon photonics and high-volume semiconductor manufacturing. This milestone supports their roadmap to deploy the first such system by the late 2020s, leveraging cryogenic cooling for detector stability. Similarly, ORCA Computing's PT-2 system, launched in late 2024 with deployments in 2025, represents a rack-mountable, room-temperature photonic quantum computer built with telecom-grade components, enabling integration into existing data centers for hybrid quantum-classical workflows without extensive cooling infrastructure.

Looking ahead, linear optical quantum computing is poised for integration with quantum networks, where photonic qubits serve as natural carriers for distributed entanglement over fiber links, facilitating modular scaling across remote nodes.
Projections suggest that by 2030, fault-tolerant photonic systems could deliver quantum advantage in optimization problems, such as solving large-scale combinatorial tasks intractable for classical computers, driven by advancements in error-corrected encodings and fusion architectures. However, key challenges persist, including the development of bright, indistinguishable single-photon sources at scale, which currently suffer from low brightness and residual multi-photon emissions in non-deterministic setups. Additionally, trade-offs between cryogenic environments—offering high-fidelity superconducting detectors but requiring complex cooling—and room-temperature alternatives—promising easier deployment yet facing higher noise from thermal phonons—remain central to achieving practical utility.

References

  1. [1]
    A scheme for efficient quantum computation with linear optics - Nature
    Jan 4, 2001 · Here we show that efficient quantum computation is possible using only beam splitters, phase shifters, single photon sources and photo-detectors.
  2. [2]
  3. [3]
    Scaling and networking a modular photonic quantum computer
    Jan 22, 2025 · Photonics offers a promising platform for quantum computing, owing to the availability of chip integration for mass-manufacturable modules, ...
  4. [4]
  5. [5]
    Photonic Boson Sampling in a Tunable Circuit - Science
    We tested the central premise of boson sampling, experimentally verifying that three-photon scattering amplitudes are given by the permanents of submatrices.
  6. [6]
    The computational complexity of linear optics - ACM Digital Library
    This paper does not assume knowledge of quantum optics. Indeed, part of its goal is to develop the beautiful theory of noninteracting bosons underlying our ...
  7. [7]
    Scalable integrated single-photon source | Science Advances
    Dec 9, 2020 · We report on the realization of a deterministic single-photon source featuring near-unity indistinguishability using a quantum dot in an “on-chip” planar ...
  8. [8]
    scaling superconducting nanowire single-photon detectors for ...
    Oct 25, 2025 · Superconducting nanowire single-photon detectors (SNSPDs) have emerged as essential devices that push the boundaries of photon detection ...
  9. [9]
    Dual-rail quantum code | Error Correction Zoo
    The dual-rail quantum code is a two-mode bosonic code encoding a logical qubit in Fock states, and it is error-detecting against one photon loss.
  10. [10]
  11. [11]
    Linear optical quantum computing with photonic qubits
    Jan 24, 2007 · Efficient scalable quantum computing with single photons, linear optical elements, and projective measurements is possible.
  12. [12]
    Measurement of subpicosecond time intervals between two photons ...
    Nov 2, 1987 · Measurement of subpicosecond time intervals between two photons by interference. C. K. Hong, Z. Y. Ou, and L. Mandel. Department of Physics ...
  13. [13]
    Programmable quantum circuits in a large-scale photonic ... - Nature
    Feb 3, 2025 · Linear optical circuits offer a versatile platform to perform quantum computing tasks and have evolved from free space optics to integrated ...
  14. [14]
    On-demand indistinguishable single photons from an efficient and ...
    Engineering single-photon sources with high efficiency, purity, and indistinguishability is a longstanding goal for applications such as linear optical quantum ...
  15. [15]
    Estimating the Indistinguishability of Heralded Single Photons Using ...
    Nov 12, 2019 · High-visibility quantum interference (indistinguishability) among single photons is the key to scalable, high-fidelity linear optical ...
  16. [16]
    On-chip scalable highly pure and indistinguishable single-photon ...
    Sep 2, 2022 · Our platform of SPSs paves the path to creating on-chip scalable quantum photonic networks for communication, computation, simulation, sensing and imaging.
  17. [17]
    On-chip heralded single photon sources | AVS Quantum Science
    Oct 5, 2020 · Time correlated photon pairs are used to produce heralded single photon states for quantum integrated circuits.Ii. Single Photons · Iii. Heralded Single Photon... · Iv. State-Of-The-Art<|control11|><|separator|>
  18. [18]
    Research trends in single-photon detectors based ... - AIP Publishing
    Apr 15, 2025 · SNSPDs are capable of high-rate counting with remarkable efficiency and low dark counts ... For instance, linear optical quantum computing, as ...
  19. [19]
    Single-photon detectors for optical quantum information applications
    Aug 10, 2025 · Similarly, superconducting nanowire single-photon detectors can offer high detection efficiency, minimal dark count rate and dead time with ...
  20. [20]
    Superconducting nanowire single-photon detectors integrated ... - NIH
    Oct 13, 2020 · SNSPDs offer efficient photon counting over an extremely broad wavelength range and show outstanding performance in terms of speed, timing ...
  21. [21]
    Photon number projection using non-number- resolving detectors
    Jul 17, 2007 · In fact, most commonly available photo-detectors are so- called 'bucket' or 'on/off' detectors, which can distinguish only between two cases—no ...
  22. [22]
    Measurement-induced nonlinearity in linear optics | Phys. Rev. A
    Sep 22, 2003 · Abstract. We investigate the generation of nonlinear operators with single-photon sources, linear optical elements, and appropriate measurements ...
  23. [23]
    Nonlinear feedforward enabling quantum computation - Nature
    Jul 12, 2023 · In this paper, we demonstrate that a fast and flexible nonlinear feedforward realizes the essential measurement required for fault-tolerant and universal ...
  24. [24]
    Bell-state measurement exceeding 50% success probability with ...
    Aug 9, 2023 · In this work, we demonstrate such a linear-optical BSM scheme enhanced by ancillary photons by adapting and implementing a scheme proposed in ( ...
  25. [25]
    Resource overhead in the KLM scheme (arXiv:quant-ph/0512104)
  26. [26]
    Deterministic Linear Optics Quantum Computation with Single ...
    Jul 17, 2003 · The scheme reduces the resources required per logical gate by several orders of magnitude, compared to an earlier proposal of Knill, Laflamme, ...
  27. [27]
  28. [28]
    Optimized generation of heralded Fock states using parametric ...
    The generation of heralded pure Fock states via spontaneous parametric down-conversion (PDC) relies on perfect photon-number correlations in the output modes.
  29. [29]
    Characterization of the nonclassical nature of conditionally prepared ...
    In the case of PDC, conditional preparation was first reported by Mandel et al. [4] and since then has been optimized to generate approximately n = 1 Fock ...
  30. [30]
    [1011.3245] The Computational Complexity of Linear Optics - arXiv
    Nov 14, 2010 · Access Paper: View a PDF of the paper titled The Computational Complexity of Linear Optics, by Scott Aaronson and Alex Arkhipov. View PDF · TeX ...
  31. [31]
    Gaussian Boson Sampling | Phys. Rev. Lett.
    Oct 23, 2017 · Here, we introduce Gaussian Boson sampling, a classically hard-to-solve problem that uses squeezed states as a nonclassical resource.
  32. [32]
    Fusion-based quantum computation | Nature Communications
    Feb 17, 2023 · It is simple to modify linear optical circuits to choose the failure basis using appropriate single qubit gates, which are easy to implement in ...
  33. [33]
    Resource-efficient linear optical quantum computation - arXiv
    May 26, 2004 · This paper introduces a linear optics quantum computation scheme using cluster states, without teleported gates, and uses redundant encoding of ...
  34. [34]
    [PDF] Linear optical quantum computing - Uni Ulm
    Initially, the KLM protocol was designed as a proof that linear optics and projective measurements allow for scalable quantum computing in principle.
  35. [35]
    Quantum process tomography of a controlled-NOT gate - arXiv
    Feb 23, 2004 · We demonstrate complete characterization of a two-qubit entangling process - a linear optics controlled-NOT gate operating with coincident detection - by ...
  36. [36]
    Photonic implementation of boson sampling: a review
    May 9, 2019 · We review recent advances in photonic boson sampling, describing both the technological improvements achieved and the future challenges.
  37. [37]
    High-visibility two-photon interference at a telecom wavelength ...
    Feb 9, 2010 · We report a high two-photon interference net visibility, i.e., 99 % , in a configuration extendable to quantum relays. Such a proof-of ...
  38. [38]
    [PDF] Effect of unbalanced and common losses in quantum photonic ...
    In this work, we divide losses into unbalanced linear loss and shared common loss, and provide a detailed analysis on how loss affects the integrated linear ...
  39. [39]
    Integrated Photonics for Quantum Communications and Metrology
    Feb 12, 2024 · Over the last two decades, integrated photonics has profoundly revolutionized the domain of quantum technologies. The ongoing second quantum ...
  40. [40]
    A manufacturable platform for photonic quantum computing - Nature
    Feb 26, 2025 · We benchmark a set of monolithically-integrated silicon photonics-based modules to generate, manipulate, network, and detect heralded photonic qubits.
  41. [41]
    Material platforms for integrated quantum photonics
    The most important photonic devices for linear quantum information protocols include pump sources, non-classical light sources, filters, waveguides, directional ...
  42. [42]
    Very-large-scale integrated quantum graph photonics - Nature
    Apr 6, 2023 · Here we demonstrate a graph-theoretical programmable quantum photonic device in very-large-scale integrated nanophotonic circuits.
  43. [43]
    Ultra-low loss quantum photonic circuits integrated with single ...
    Dec 12, 2022 · Our work demonstrates integration of a quantum emitter single-photon source onto photonic integrated circuits with waveguide losses of ≈ 1 dB/m.
  44. [44]
    Hybrid integration methods for on-chip quantum photonics
    We review various issues in solid-state quantum emitters and photonic integrated circuits, the hybrid integration techniques that bridge these two systems, and ...
  45. [45]
    Hybrid optomechanical superconducting qubit system
    Apr 5, 2024 · We propose an integrated nonlinear superconducting device based on a nanoelectromechanical shuttle. The system can be described as a qubit coupled to a bosonic ...
  46. [46]
    Photonic Hybrid Quantum Computing - arXiv
    Oct 1, 2025 · Quantum computing has made remarkable progress across various physical platforms. Recent advances in superconducting circuits, ion traps, and ...
  47. [47]
    Distributed quantum computing across an optical network link - Nature
    Feb 5, 2025 · Here we experimentally demonstrate the distribution of quantum computations between two photonically interconnected trapped-ion modules.
  48. [48]
    [2101.09310] Fusion-based quantum computation - arXiv
    Jan 22, 2021 · We introduce fusion-based quantum computing (FBQC) - a model of universal quantum computation in which entangling measurements, called fusions, are performed.
  49. [49]
    First Photonic Quantum Computer on the Cloud - Xanadu
    Sep 9, 2020 · In contrast, Xanadu's strategy, known as continuous variable quantum computing, does not employ single-photon generators. Instead, the company ...
  50. [50]
    Xanadu Demonstrates Scalable Building Block for Photonic ...
    Jun 5, 2025 · The dominant limiting factor remains optical loss, which reduces the purity and coherence of the quantum states. Improving chip fabrication ...
  51. [51]
    [PDF] Cryogenic packaging of nanophotonic devices with a low coupling ...
    A technique for low-loss coupling between fiber and nanophotonic devices is achieved, stable from 300K to 30mK, with <1dB loss per facet, and cryogenically ...
  52. [52]
    [quant-ph/0512071] Review article: Linear optical quantum computing
    Dec 9, 2005 · Abstract: Linear optics with photon counting is a prominent candidate for practical quantum computing. The protocol by Knill, Laflamme, ...
  53. [53]
    Resource costs for fault-tolerant linear optical quantum computing
    Apr 9, 2015 · Moreover the resource requirements grow higher if the per-component photon loss rate is worse than one in a thousand, or the per-component noise ...
  54. [54]
    Overcoming phonon-induced dephasing for indistinguishable ...
    We here examine phonon-induced decoherence and assess its impact on the rate of production, and indistinguishability, of single photons emitted from an ...
  55. [55]
  56. [56]
    Anneal-free ultra-low loss silicon nitride integrated photonics - arXiv
    Sep 8, 2023 · We report a significant advance in silicon nitride integrated photonics, demonstrating the lowest losses to date for an anneal-free process at a maximum ...