
Quantum information

Quantum information science is an interdisciplinary field that merges principles of quantum mechanics with information theory to study, process, store, and transmit information using quantum systems. At its core, it investigates how quantum phenomena such as superposition and entanglement enable fundamentally new ways to handle information that surpass classical limits. The field encompasses quantum computing, communication, sensing, and metrology, drawing on advances in physics, computer science, materials science, and engineering to develop transformative technologies.

Central to quantum information science are qubits, the basic units of quantum information, which unlike classical bits (which exist strictly as 0 or 1) can occupy superpositions of multiple states simultaneously, allowing far richer encodings of information. Entanglement, a phenomenon in which the quantum states of two or more particles become linked such that the state of each particle cannot be described independently of the states of the others, even when the particles are separated by large distances, underpins key capabilities like enhanced computational power and secure data transmission. Another defining feature is the no-cloning theorem, which prohibits perfect copying of arbitrary quantum states, providing a natural basis for quantum cryptography but also posing challenges for error correction in quantum systems. These principles distinguish quantum information from classical theory, enabling tasks like simulating complex quantum systems that are intractable for traditional computers.

Applications of quantum information science are poised to revolutionize multiple domains. In computing, algorithms such as Shor's enable efficient factoring of large numbers, threatening classical cryptography while opening doors to drug discovery and materials optimization through molecular simulations. Quantum communication leverages entanglement and the no-cloning theorem for protocols like quantum key distribution (QKD), ensuring provably secure information exchange in which eavesdropping is detectable. Additionally, quantum sensing exploits superposition for ultra-precise measurements, such as detecting minute magnetic or gravitational fields or imaging at the atomic scale. Ongoing research focuses on scaling these technologies, addressing decoherence challenges, and integrating them into practical networks. The United Nations has proclaimed 2025 the International Year of Quantum Science and Technology, marking a century since the inception of quantum mechanics and underscoring the field's transformative potential.

Fundamentals

Definition and overview

Quantum information is the field that integrates principles from quantum physics and information theory to study the storage, processing, and transmission of information using quantum mechanical systems rather than classical bits. The discipline leverages inherently quantum phenomena, such as superposition and entanglement, to enable novel ways of encoding and manipulating data at the atomic and subatomic scales. Unlike classical information theory, which operates on deterministic bits representing either 0 or 1, quantum information exploits the probabilistic nature of quantum states, allowing a single unit—known as a qubit—to embody multiple potential values simultaneously through superposition. This fundamental distinction extends to entanglement, where quantum particles become correlated such that the measurement of one instantly determines the state of the other, regardless of distance, opening pathways for tasks impossible in classical systems. As an interdisciplinary endeavor, quantum information draws from physics, computer science, mathematics, and engineering to develop technologies that push beyond classical limitations. Its potential impacts include dramatically accelerating computational problem-solving for complex simulations, establishing effectively unbreakable encryption for secure communications, and achieving unprecedented precision in sensing applications.

Relation to quantum mechanics

Quantum information science is fundamentally rooted in the principles of quantum mechanics, which describe the behavior of physical systems at microscopic scales. A cornerstone of this framework is wave-particle duality, first proposed by Louis de Broglie in 1924, positing that particles such as electrons exhibit both wave-like and particle-like properties depending on the experimental context. This duality underpins the probabilistic nature of quantum phenomena, as demonstrated in interference experiments like the double-slit setup adapted to quantum particles. Complementing this is Werner Heisenberg's uncertainty principle, formulated in 1927, which mathematically expresses the inherent limits on simultaneously measuring conjugate observables such as position and momentum, with the product of their uncertainties bounded below by ħ/2. These principles highlight the departure from classical intuitions, where objects have definite trajectories and properties. The dynamical evolution of quantum systems is governed by the Schrödinger equation, introduced by Erwin Schrödinger in 1926, which describes how the wave function ψ evolves linearly over time via iħ ∂ψ/∂t = Ĥψ, where Ĥ is the Hamiltonian operator representing the system's energy.

Quantum information extends these quantum mechanical foundations by interpreting quantum states not merely as descriptions of physical systems, but as carriers of information that can be manipulated and processed in novel ways. In this paradigm, the abstract state space of quantum mechanics becomes a medium for encoding and transmitting information, leveraging the intrinsic properties of quantum states to surpass classical limits. This shift is articulated in foundational texts on the subject, which emphasize how quantum mechanics provides the mathematical and physical substrate for information-theoretic applications.

A key postulate bridging quantum mechanics to information contexts is the unitarity of quantum evolution, which ensures that the time development of states under unitary operators preserves superpositions and inner products, combined with the measurement postulate, which induces a non-unitary collapse to probabilistic outcomes upon observation. Unitarity allows reversible operations on quantum states, in stark contrast to the often irreversible, deterministic paths of classical computation, where information processing lacks such inherent parallelism. Non-local effects, arising from entangled states, further distinguish quantum systems by enabling correlations that defy classical locality without violating relativistic causality. These features establish quantum mechanics as a prerequisite for quantum information, as they enable the exploitation of superposition for parallel information processing, where multiple computational paths can be explored simultaneously.

Core Concepts

Qubits and quantum states

In quantum information theory, a qubit serves as the basic unit of quantum information, generalizing the classical bit to a two-level quantum mechanical system that can exist in a superposition of states. Unlike a classical bit, which is strictly either 0 or 1, a qubit's state is inherently probabilistic and can be manipulated to encode more complex information. The mathematical description of a qubit's state employs the formalism of quantum mechanics, representing it as a normalized vector in a two-dimensional complex Hilbert space. A general pure state of a single qubit is expressed as |\psi\rangle = \alpha |0\rangle + \beta |1\rangle, where |0\rangle and |1\rangle form the computational basis—an orthonormal set of states analogous to classical 0 and 1—and \alpha, \beta \in \mathbb{C} are coefficients satisfying the normalization condition |\alpha|^2 + |\beta|^2 = 1. These basis states play a crucial role in quantum measurement: upon observation in the computational basis, the qubit collapses to |0\rangle with probability |\alpha|^2 or to |1\rangle with probability |\beta|^2, yielding a definite classical outcome.

A useful geometric visualization of single-qubit pure states is provided by the Bloch sphere, which maps the state |\psi\rangle to a point on the surface of a unit sphere in three-dimensional real space. The position on the sphere is determined by the expectation values of the Pauli operators, with the north pole corresponding to |0\rangle, the south pole to |1\rangle, and equatorial points representing balanced superpositions. This representation highlights the continuum of possible states and facilitates understanding unitary evolutions as rotations of the sphere.

For systems composed of multiple qubits, the total quantum state resides in the tensor product of the individual Hilbert spaces, enabling the description of composite systems. For two qubits with states |\psi\rangle and |\phi\rangle, the joint state is |\psi\rangle \otimes |\phi\rangle = (\alpha_1 |0\rangle + \beta_1 |1\rangle) \otimes (\alpha_2 |0\rangle + \beta_2 |1\rangle), expanding over the four-dimensional basis \{|00\rangle, |01\rangle, |10\rangle, |11\rangle\}. This tensor product structure underpins the scalability of quantum information processing, where an n-qubit system occupies a 2^n-dimensional Hilbert space.
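To make this formalism concrete, here is a minimal NumPy sketch (an illustration of the mathematics above, not any particular quantum library's API; the variable names are our own) that builds a single-qubit superposition, verifies normalization, and forms a two-qubit product state with the Kronecker product:

```python
import numpy as np

# Computational basis states |0> and |1> as complex vectors
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# General pure state |psi> = alpha|0> + beta|1>, here an equal superposition
alpha, beta = 1 / np.sqrt(2), 1 / np.sqrt(2)
psi = alpha * ket0 + beta * ket1
assert np.isclose(np.linalg.norm(psi), 1.0)  # |alpha|^2 + |beta|^2 = 1

# Measurement probabilities in the computational basis
p0, p1 = abs(psi[0]) ** 2, abs(psi[1]) ** 2
print(f"P(0) = {p0:.3f}, P(1) = {p1:.3f}")

# Two-qubit product state |psi> ⊗ |phi> lives in a 2^2 = 4 dimensional space
phi = ket0  # second qubit prepared in |0>
joint = np.kron(psi, phi)
print("Joint state dimension:", joint.shape[0])
```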

Superposition and entanglement

In quantum information, superposition refers to the principle that a quantum system can exist in a linear combination of multiple basis states simultaneously, enabling the representation of an exponentially large number of possibilities within a single state. This allows quantum computers to perform computations in parallel across those states, with interference effects arising upon measurement that can amplify correct outcomes or suppress incorrect ones. Mathematically, a general superposition state for a quantum system is expressed as |\psi\rangle = \sum_i c_i |i\rangle, where the |i\rangle form an orthonormal basis, the coefficients c_i are complex numbers satisfying \sum_i |c_i|^2 = 1, and the probability of measuring the system in state |i\rangle is |c_i|^2. This form underpins the ability of quantum computers to explore vast state spaces efficiently, as an n-qubit system can represent up to 2^n basis states in superposition, providing an exponential scaling in the dimensionality of the state space compared to classical bits.

Entanglement describes a quantum state of multiple particles that cannot be expressed as a product of individual states, resulting in correlations between measurements that persist regardless of spatial separation. A canonical example is the Bell state \frac{1}{\sqrt{2}} (|00\rangle + |11\rangle), where measuring one qubit instantly determines the state of the other, even at arbitrary distances. The phenomenon of entanglement was highlighted in the Einstein-Podolsky-Rosen (EPR) paradox, which questioned the completeness of quantum mechanics by arguing that entangled particles imply instantaneous influences violating locality. John Bell's theorem later formalized this by deriving inequalities that local hidden-variable theories must satisfy; quantum mechanics violates these inequalities, demonstrating non-local correlations inherent to entangled states.

Despite these non-local correlations, the no-signaling theorem ensures that entanglement cannot be used to transmit classical information superluminally, as local measurements on one subsystem yield statistics independent of operations on the distant subsystem. This preserves relativistic causality while allowing entanglement to enable correlations stronger than any achievable classically, such as in tasks requiring coordinated outcomes across parties. Superposition and entanglement together provide the foundation for quantum advantage, with superposition enabling the manipulation of exponentially large state spaces for quantum algorithms, and entanglement facilitating multipartite correlations that surpass classical limits, as seen in protocols for secure communication.
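The Bell-inequality violation can be checked numerically. The following sketch (a toy NumPy construction of ours, not a library routine) prepares the Bell state \frac{1}{\sqrt{2}}(|00\rangle + |11\rangle) and evaluates the CHSH combination of correlators, which reaches 2\sqrt{2}, above the classical bound of 2:

```python
import numpy as np

# Pauli matrices
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Bell state (|00> + |11>)/sqrt(2)
bell = np.zeros(4, dtype=complex)
bell[0] = bell[3] = 1 / np.sqrt(2)

def observable(theta):
    """Spin measurement along angle theta in the X-Z plane."""
    return np.cos(theta) * Z + np.sin(theta) * X

def correlator(theta_a, theta_b):
    """E(a, b) = <psi| A ⊗ B |psi> for the shared Bell state."""
    op = np.kron(observable(theta_a), observable(theta_b))
    return np.real(bell.conj() @ op @ bell)

# Standard CHSH measurement angles
a, a2 = 0.0, np.pi / 2
b, b2 = np.pi / 4, 3 * np.pi / 4
S = correlator(a, b) - correlator(a, b2) + correlator(a2, b) + correlator(a2, b2)
print(f"CHSH value S = {S:.4f} (classical bound 2, quantum maximum ~2.828)")
```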

Quantum Information Theory

Classical information foundations

Classical information theory provides the foundational framework for quantifying, storing, and transmitting data in deterministic systems, where information is represented by bits as the smallest indivisible units of data. In classical computation and communication, bits are processed sequentially and stored deterministically, with each bit representing a state of 0 or 1 without any inherent ambiguity beyond probabilistic sources. This structure enables reliable manipulation through logic gates and arithmetic operations, but it imposes fundamental limits on parallelism and security in information handling.

A central concept in classical information theory is Shannon entropy, which measures the average uncertainty or information content in a random variable X with probability distribution p(x). Formally, it is defined as H(X) = -\sum_{x} p(x) \log_2 p(x), where the logarithm is typically base-2 to yield units in bits. For example, a fair coin flip has entropy H(X) = 1 bit, reflecting maximal uncertainty between two equally likely outcomes, while a deterministic event has zero entropy. This metric quantifies the expected number of bits needed to encode outcomes from the source, serving as a cornerstone for data compression.

Channel capacity extends entropy concepts to communication over noisy channels, defined as the maximum mutual information I(X;Y) between input X and output Y, representing the highest rate for reliable information transmission. Mutual information is given by I(X;Y) = H(X) - H(X|Y), capturing the reduction in uncertainty about X provided by Y. For a binary symmetric channel with crossover probability p, the capacity is C = 1 - h(p), where h(p) = -p \log_2 p - (1-p) \log_2 (1-p) is the binary entropy function, illustrating how noise degrades but does not eliminate transmittable information.

Two key theorems underpin these ideas: the source coding theorem, which states that no fewer than H(X) bits per symbol are needed on average to losslessly compress a source (achievable in the limit of large blocks), and the noisy-channel coding theorem, which asserts that reliable communication is possible at rates up to the capacity C using appropriate error-correcting codes. These results, proven asymptotically, establish the theoretical bounds for efficient encoding and decoding in classical systems.

Despite their power, classical information foundations exhibit limitations, such as the absence of superposition, which confines processing to sequential operations and leaves systems vulnerable to undetected copying, since eavesdropping can be revealed only through auxiliary cryptographic safeguards rather than any inherent physical disturbance. These constraints motivate quantum extensions, where concepts like Shannon entropy generalize to the von Neumann entropy for density operators in quantum channels.
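These classical quantities are straightforward to compute. Here is a small Python sketch (our own helper functions, following the definitions above) for Shannon entropy and the binary symmetric channel capacity:

```python
import numpy as np

def shannon_entropy(p):
    """H(X) = -sum p(x) log2 p(x), ignoring zero-probability outcomes."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def binary_entropy(p):
    """h(p) = -p log2 p - (1-p) log2 (1-p)."""
    return shannon_entropy([p, 1 - p])

def bsc_capacity(p):
    """Capacity C = 1 - h(p) of a binary symmetric channel with crossover p."""
    return 1 - binary_entropy(p)

print(shannon_entropy([0.5, 0.5]))  # fair coin flip: 1.0 bit
print(shannon_entropy([1.0]))       # deterministic event: 0.0 bits
print(bsc_capacity(0.11))           # capacity at 11% crossover: ~0.5 bits/use
```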

Quantum entropy and measures

In quantum information theory, entropy measures quantify the uncertainty or information content inherent in quantum states and processes, extending classical notions to account for superposition and entanglement. The primary such measure is the von Neumann entropy, which generalizes the Shannon entropy from classical probability distributions to density operators describing quantum systems. The von Neumann entropy S(\rho) of a density operator \rho is defined as S(\rho) = -\operatorname{Tr}(\rho \log_2 \rho), where \operatorname{Tr} denotes the trace operation and the logarithm is base-2 for units of bits. This expression arises from the eigenvalues of \rho, mirroring the Shannon entropy H(p) = -\sum_i p_i \log_2 p_i for a probability distribution p, but applied to the spectral decomposition \rho = \sum_i \lambda_i | \psi_i \rangle \langle \psi_i |, yielding S(\rho) = -\sum_i \lambda_i \log_2 \lambda_i. For pure states where \rho = |\psi\rangle\langle\psi|, S(\rho) = 0, indicating no mixedness, while maximally mixed states achieve maximum entropy, such as S(I/d) = \log_2 d for a d-dimensional system.

Key properties of the von Neumann entropy underpin its utility in quantum information. It is concave, satisfying S(\sum_i p_i \rho_i) \geq \sum_i p_i S(\rho_i) for probabilities p_i > 0 and density operators \rho_i, which implies that mixing states cannot decrease entropy. Subadditivity holds as S(\rho_{AB}) \leq S(\rho_A) + S(\rho_B), where \rho_A = \operatorname{Tr}_B(\rho_{AB}) and similarly for \rho_B, bounding the entropy of a composite system by the sum of subsystem entropies. Strong subadditivity further strengthens this: S(\rho_{ABC}) + S(\rho_B) \leq S(\rho_{AB}) + S(\rho_{BC}), a non-classical property essential for analyzing quantum correlations and channel capacities. These inequalities facilitate derivations in quantum coding theorems and entanglement theory.

Building on the von Neumann entropy, the quantum mutual information I(A:B) quantifies total correlations, including classical and quantum components, between subsystems A and B of a bipartite state \rho_{AB}. It is defined as I(A:B) = S(\rho_A) + S(\rho_B) - S(\rho_{AB}), where S(\rho_A) and S(\rho_B) are marginal entropies. For product states \rho_{AB} = \rho_A \otimes \rho_B, I(A:B) = 0, while entangled or classically correlated states yield positive values; notably, I(A:B) = 2 S(\rho_A) for pure \rho_{AB}, highlighting entanglement's role in excess correlations. This measure is symmetric, non-negative, and satisfies data-processing inequalities, making it a cornerstone for correlation analysis.

Quantum generalizations of the Rényi entropies provide parameterized families of measures that interpolate between the min-entropy (\alpha \to \infty) and the von Neumann entropy (\alpha \to 1), useful for robustness in noisy settings and hypothesis testing. The quantum Rényi \alpha-entropy for \alpha > 0, \alpha \neq 1 is S_\alpha(\rho) = \frac{1}{1 - \alpha} \log_2 \operatorname{Tr}(\rho^\alpha), reducing to the classical Rényi form for commuting states and preserving monotonicity under quantum operations for fixed \alpha. These entropies find applications in quantum cryptography for security proofs and in resource theories for bounding conversion rates, with higher \alpha emphasizing worst-case uncertainties.

A pivotal application of these entropies is the Holevo bound, which limits the classical information transmittable through a quantum channel. For an ensemble \{p_i, \rho_i\}, where p_i are probabilities and \rho_i are output states, the Holevo quantity is \chi = S\left( \sum_i p_i \rho_i \right) - \sum_i p_i S(\rho_i), representing the maximum accessible classical information, achievable via collective measurements but not single-copy ones in general. This bound, originally derived for memoryless channels, establishes that n qubits cannot convey more than n classical bits without entanglement assistance, influencing capacities in quantum networks.
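These definitions translate directly into code. The following NumPy sketch (our own illustrative functions, computing entropies from eigenvalues as defined above) evaluates S(\rho) and the mutual information for a Bell state, whose marginals are maximally mixed:

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho log2 rho), computed from the eigenvalues of rho."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]  # discard numerical zeros
    return -np.sum(evals * np.log2(evals))

def partial_trace_B(rho_ab, dA, dB):
    """Trace out subsystem B of a (dA*dB)-dimensional density matrix."""
    return np.trace(rho_ab.reshape(dA, dB, dA, dB), axis1=1, axis2=3)

# Bell state |Phi+> = (|00> + |11>)/sqrt(2): pure overall, mixed marginals
bell = np.zeros(4, dtype=complex)
bell[0] = bell[3] = 1 / np.sqrt(2)
rho_ab = np.outer(bell, bell.conj())
rho_a = partial_trace_B(rho_ab, 2, 2)

S_ab = von_neumann_entropy(rho_ab)  # 0 for a pure state
S_a = von_neumann_entropy(rho_a)    # 1 for a maximally mixed qubit
print(f"S(AB) = {S_ab:.3f}, S(A) = {S_a:.3f}, I(A:B) = {2 * S_a - S_ab:.3f}")
```

The output I(A:B) = 2 illustrates the pure-state identity I(A:B) = 2S(\rho_A) noted above.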

Quantum Processing

Quantum gates and circuits

Quantum gates are unitary operators that manipulate quantum states in a reversible manner, serving as the fundamental building blocks for quantum information processing. Unlike many classical logic gates, quantum gates operate on qubits and preserve the norm of the state vector due to their unitarity, ensuring that information is not lost during processing. The evolution of a closed quantum system is governed by the time-dependent Schrödinger equation, i \hbar \frac{d}{dt} |\psi(t)\rangle = H |\psi(t)\rangle, where H is the Hamiltonian, leading to a unitary time-evolution operator U(t) = e^{-i H t / \hbar} that satisfies U^\dagger U = I, guaranteeing the reversibility of gate operations.

Key single-qubit gates include the Pauli gates, which correspond to rotations by \pi radians around the respective axes of the Bloch sphere. The Pauli-X gate, represented by the matrix X = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, flips the computational basis states |0\rangle to |1\rangle and vice versa, analogous to a classical NOT gate. The Pauli-Z gate, Z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}, introduces a phase shift of \pi to |1\rangle, while the Pauli-Y gate, Y = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, combines a bit flip and a phase flip, rotating around the Y-axis. The Hadamard gate, H = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}, creates superposition by mapping |0\rangle to \frac{|0\rangle + |1\rangle}{\sqrt{2}} and |1\rangle to \frac{|0\rangle - |1\rangle}{\sqrt{2}}.

For multi-qubit operations, the controlled-NOT (CNOT) gate entangles qubits by applying the Pauli-X gate to the target qubit conditional on the control qubit being |1\rangle, with matrix form \text{CNOT} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{pmatrix} in the computational basis. Together with single-qubit gates, the CNOT forms a universal gate set: any single-qubit unitary can be decomposed into a product of rotations, such as U = e^{i\alpha} R_z(\beta) R_y(\gamma) R_z(\delta), where R_n(\theta) = e^{-i \theta \sigma_n / 2} and the \sigma_n are Pauli matrices, enabling approximation of arbitrary quantum operations.

A quantum circuit is a computational model representing a sequence of quantum gates applied to qubits, typically followed by measurements to extract classical outcomes. Gates act on wires symbolizing qubits, with single-qubit gates on one wire and multi-qubit gates spanning multiple wires, allowing the construction of complex computations through composition of these unitary operations. An important limitation arises from the no-cloning theorem, which proves that it is impossible to create an identical copy of an arbitrary unknown quantum state using a unitary operation, as any attempt to clone non-orthogonal states violates the linearity of quantum evolution. This theorem, derived by showing that a universal cloning machine would imply superluminal signaling or contradict quantum linearity, underscores the unique challenges in quantum information processing compared to classical systems.
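As a minimal sketch of gates as matrices (our own NumPy construction, following the matrix forms above), the following verifies unitarity and composes a Hadamard with a CNOT to turn |00\rangle into a Bell state:

```python
import numpy as np

# Single-qubit gates
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

# CNOT in the computational basis |00>, |01>, |10>, |11>
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

# Unitarity check: U†U = I for every gate
for U in (X, Z, H, CNOT):
    assert np.allclose(U.conj().T @ U, np.eye(U.shape[0]))

# A tiny circuit: H on qubit 0, then CNOT, maps |00> to a Bell state
ket00 = np.array([1, 0, 0, 0], dtype=complex)
state = CNOT @ np.kron(H, np.eye(2)) @ ket00
print(np.round(state, 3))  # [0.707, 0, 0, 0.707] = (|00> + |11>)/sqrt(2)
```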

Quantum algorithms overview

Quantum algorithms leverage quantum phenomena, such as superposition and entanglement, to solve specific computational problems more efficiently than classical algorithms in certain cases. Many of these algorithms operate within the oracle model, where access to a black-box function (an oracle) allows querying input-output pairs without revealing the function's internal structure, enabling analyses of query complexity that highlight quantum advantages. This framework is central to demonstrating speedups, as seen in algorithms that reduce the number of oracle calls from linear to sublinear scales.

Grover's algorithm provides a quadratic speedup for unstructured search problems, identifying a marked item in an unsorted database of N elements using O(√N) oracle queries, compared to the classical O(N) requirement. It achieves this through iterative applications of an oracle that flips the phase of the target state and a diffusion operator that amplifies the amplitude of the solution, converging to the marked item with high probability after approximately π/4 √N iterations. This makes it particularly useful for optimization and database querying where exhaustive search is impractical classically.

The quantum Fourier transform (QFT) is a key subroutine in many quantum algorithms, efficiently computing the discrete Fourier transform on quantum state amplitudes and enabling the period-finding tasks essential for number-theoretic problems. For an n-qubit state, the QFT applies a unitary transformation whose matrix elements are given by F_{jk} = \frac{1}{\sqrt{N}} \exp\left(2\pi i j k / N\right), where N = 2^n, transforming the computational basis into the Fourier basis in O(n^2) gates, exponentially faster than the classical fast Fourier transform's O(n 2^n) operations for quantum-encoded data.

Shor's algorithm exploits the QFT to factor large integers in polynomial time, a task believed to be hard classically and foundational to public-key cryptosystems like RSA. By reducing factoring to finding the period of a modular exponentiation function via QFT-based order-finding, it runs in roughly O((\log N)^3) time to factor an integer N, potentially breaking widely used encryption schemes on sufficiently large quantum computers.

Variational quantum algorithms, such as the variational quantum eigensolver (VQE), adopt a hybrid classical-quantum approach for near-term devices, optimizing the parameters of a parameterized quantum circuit to approximate solutions to chemistry and optimization problems. In VQE, a trial wavefunction is prepared on the quantum processor, its energy is measured to estimate the expectation value of a Hamiltonian, and classical optimization iteratively refines the parameters to find ground-state energies, demonstrating practical utility for molecular simulations where full quantum simulation is intractable classically. Recent advances as of 2025 include Google's Quantum Echoes algorithm, which demonstrates verifiable quantum advantage by measuring out-of-time-order correlators (OTOCs) on a 103-qubit processor, enabling efficient simulation of quantum-chaotic systems beyond classical capabilities.
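Grover's amplitude amplification is easy to simulate classically for small N. This sketch (a toy state-vector simulation of ours, not a hardware implementation) applies the phase oracle and diffusion operator for the predicted number of iterations:

```python
import numpy as np

def grover_search(n_qubits, marked):
    """Simulate Grover's algorithm for a single marked item."""
    N = 2 ** n_qubits
    state = np.full(N, 1 / np.sqrt(N))                  # uniform superposition

    oracle = np.eye(N)
    oracle[marked, marked] = -1                         # phase-flip marked item
    diffusion = 2 * np.full((N, N), 1 / N) - np.eye(N)  # inversion about the mean

    iterations = int(np.floor(np.pi / 4 * np.sqrt(N)))  # ~ (pi/4) sqrt(N)
    for _ in range(iterations):
        state = diffusion @ (oracle @ state)
    return iterations, np.abs(state) ** 2               # measurement probabilities

iters, probs = grover_search(4, marked=11)  # N = 16 items
print(f"P(marked) = {probs[11]:.3f} after {iters} iterations")  # ~0.96 after 3
```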

Applications

Quantum communication protocols

Quantum communication protocols enable the transmission of quantum information through channels that exploit quantum mechanical principles, such as superposition and entanglement, to achieve tasks unattainable with classical methods. These protocols model real-world quantum channels as noisy operations that evolve quantum states in a physically realizable manner, ensuring complete positivity and trace preservation to maintain the probabilistic interpretation of quantum mechanics. A fundamental representation of such channels uses Kraus operators, where a quantum channel \mathcal{E} acting on a density operator \rho is given by \mathcal{E}(\rho) = \sum_{k} K_k \rho K_k^\dagger, with the completeness relation \sum_{k} K_k^\dagger K_k = I guaranteeing trace preservation. This formalism captures decoherence and noise effects, such as amplitude damping or phase flips, by decomposing the channel into a sum of unitary operations followed by partial trace-outs over environmental degrees of freedom.

One key protocol leveraging shared entanglement is superdense coding, which allows the transmission of two classical bits using a single qubit, doubling the efficiency of a classical channel under ideal conditions. In this scheme, two parties, Alice and Bob, initially share a maximally entangled Bell state, such as |\Phi^+\rangle = \frac{1}{\sqrt{2}} (|00\rangle + |11\rangle). Alice encodes her two-bit message by applying one of four Pauli operations (I, X, Z, or XZ) to her qubit and sends it to Bob, who then measures both qubits in the Bell basis to decode the message perfectly. This enhancement relies on the pre-shared entanglement, which correlates the qubits such that local operations on one convey information about the joint state. The protocol assumes a noiseless quantum channel for the transmission but can be analyzed using Kraus operators to quantify fidelity under realistic noise.

Complementing superdense coding, quantum teleportation facilitates the transfer of an arbitrary qubit state from Alice to Bob without physically transporting the particle, using a shared entangled pair and a classical channel. The process begins with Alice performing a Bell-state measurement on her qubit (in unknown state |\psi\rangle = \alpha|0\rangle + \beta|1\rangle) and one half of the entangled pair, yielding one of four possible outcomes with equal probability. She transmits the two classical bits describing this outcome to Bob, who applies the corresponding Pauli correction (I, X, Z, or XZ) to his qubit, reconstructing |\psi\rangle faithfully. This protocol consumes one ebit of entanglement per teleported qubit and requires two classical bits, demonstrating how quantum information can be "teleported" via hybrid quantum-classical signaling, with success verified through process tomography in noisy channels modeled by Kraus operators.

Entanglement distribution is crucial for enabling protocols like teleportation and quantum key distribution over long distances, as shared entanglement must first be established between remote parties. A common laboratory method generates polarization-entangled photon pairs via spontaneous parametric down-conversion (SPDC) in a type-II nonlinear crystal, such as beta-barium borate (BBO), pumped by a laser. In this process, a pump photon splits into two lower-energy photons with orthogonal polarizations, producing a state like \frac{1}{\sqrt{2}} (|H V\rangle + |V H\rangle), where H and V denote horizontal and vertical polarizations; the pairs are then distributed via optical fibers or free-space links with fidelities exceeding 0.9 in typical setups.
For long-distance applications, satellite-based distribution overcomes atmospheric losses, as demonstrated by the Micius satellite, which delivered entangled photons over 1200 km to ground stations using downlink telescopes with precise pointing and tracking to maintain entanglement visibility above 0.8. These methods provide scalable entanglement resources for quantum networks, with SPDC sources achieving pair generation rates up to millions per second per milliwatt of pump power.

The ultimate limits of quantum information transmission are governed by capacity theorems, which quantify the maximum reliable rate over many channel uses. The quantum capacity Q(\mathcal{N}) of a channel \mathcal{N}, defined as the supremum of rates for transmitting qubits with vanishing error, is given by the regularized coherent information \lim_{n \to \infty} \frac{1}{n} \max_{\psi} I_c(\psi, \mathcal{N}^{\otimes n}), where I_c(\rho, \mathcal{N}) = S(\mathcal{N}(\rho)) - S_e(\rho, \mathcal{N}), with S the von Neumann entropy and S_e the entropy exchange; the coherent information also lower-bounds the distillable entanglement. For degradable channels, this expression simplifies to a single-letter formula, providing an achievable rate via coherent encoding and decoding. Seminal results establish that entanglement assistance can boost achievable rates beyond unassisted capacities, with explicit computations for simple qubit channels yielding rates of approximately 1 - h(p) qubits per use for small noise p, where h is the binary entropy function, highlighting the advantage of quantum signaling. These theorems guide protocol design by bounding the entanglement and information throughput in noisy environments.
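The Kraus-operator formalism introduced above can be exercised directly in code. This sketch (a minimal NumPy illustration of ours, using the standard amplitude damping Kraus operators) verifies the completeness relation and applies the channel to an excited qubit:

```python
import numpy as np

def apply_channel(rho, kraus_ops):
    """E(rho) = sum_k K_k rho K_k† for a channel given by Kraus operators."""
    return sum(K @ rho @ K.conj().T for K in kraus_ops)

def amplitude_damping(gamma):
    """Kraus operators for amplitude damping with decay probability gamma."""
    K0 = np.array([[1, 0], [0, np.sqrt(1 - gamma)]], dtype=complex)
    K1 = np.array([[0, np.sqrt(gamma)], [0, 0]], dtype=complex)
    return [K0, K1]

gamma = 0.3
ops = amplitude_damping(gamma)

# Completeness: sum_k K_k† K_k = I guarantees trace preservation
assert np.allclose(sum(K.conj().T @ K for K in ops), np.eye(2))

# Apply to the excited state |1><1|: population relaxes toward |0><0|
rho = np.array([[0, 0], [0, 1]], dtype=complex)
print(np.round(apply_channel(rho, ops).real, 3))  # diag(0.3, 0.7)
```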

Quantum computation paradigms

Quantum computation paradigms encompass diverse architectural frameworks for harnessing quantum mechanical principles to perform computations, distinct from classical models by leveraging superposition and entanglement at scale. The gate model, also known as the circuit model, represents the foundational paradigm, where quantum information is processed through sequences of unitary operations on qubits. In this approach, a universal set of quantum gates—such as single-qubit rotations and a two-qubit entangling gate like the controlled-NOT—enables the simulation of any quantum computation, achieving universality for quantum systems. This universality stems from the ability to approximate any unitary transformation on n qubits using a finite gate set, allowing for programmable and flexible quantum algorithms.

An alternative paradigm, adiabatic quantum computing, relies on the quantum adiabatic theorem, which guarantees that a system initialized in the ground state of a simple Hamiltonian will remain in the instantaneous ground state of a slowly evolving Hamiltonian. Computation proceeds by gradually interpolating from an initial Hamiltonian with a known ground state to a final problem Hamiltonian, whose ground state encodes the solution to an optimization problem. This model, proposed for solving NP-hard problems, avoids explicit gate sequences and instead exploits continuous-time evolution, with the runtime determined by the energy gap between ground and excited states during the process (a minimal numerical sketch of this interpolation appears at the end of this section). While theoretically equivalent to the gate model in computational power, adiabatic approaches offer potential advantages in hardware implementations tolerant to certain noise types.

Measurement-based quantum computing, or one-way quantum computing, shifts the computational burden from unitary gates to measurements on a pre-prepared entangled resource state. Central to this paradigm is the cluster state, a highly entangled multi-qubit state generated via controlled-phase gates on a lattice, which serves as a universal resource for computation. Algorithms are executed through a sequence of single-qubit measurements in adaptive bases on the cluster state, with outcomes feeding back to determine subsequent measurement angles, effectively implementing any quantum circuit. This model decouples state preparation from processing, potentially simplifying fault-tolerant implementations by allowing offline entanglement generation.

Physical realizations of these paradigms require hardware platforms capable of manipulating qubits while addressing challenges such as increasing qubit counts, maintaining coherence, and enabling reliable interconnections. Superconducting qubits, fabricated using Josephson junctions in microwave circuits, excel in fast gate operations and integration with semiconductor fabrication, but face hurdles in coherence times limited by material defects and environmental coupling, with current systems exceeding 1,000 qubits amid gate and readout errors. Trapped-ion systems employ electromagnetic traps to confine atomic ions as qubits, offering long coherence times (seconds) and high-fidelity gates via laser-driven interactions, yet scaling is constrained by the need for complex ion shuttling in segmented traps and laser-addressing precision for large arrays. Photonic quantum computing encodes qubits in photon polarization or path degrees of freedom, leveraging room-temperature operation and fiber-optic compatibility for distributed systems, but struggles with probabilistic two-qubit operations due to weak optical nonlinearities and photon loss, necessitating heralding techniques and auxiliary resources to achieve determinism at scale.
Across these platforms, hybrid approaches combining complementary elements—such as photonic interconnects for modular scaling—are emerging to mitigate individual limitations. The DiVincenzo criteria provide a foundational checklist for evaluating hardware suitability across paradigms, stipulating five essential requirements: a scalable array of well-characterized qubits; reliable initialization to a pure fiducial state; a universal set of gates or equivalent operations; qubit-specific measurement capability; and coherence times long relative to gate speeds. Two additional criteria for quantum communication call for interconversion between stationary and flying qubits and faithful transmission of flying qubits over channels. These benchmarks have guided two decades of progress, ensuring that candidate systems can support fault-tolerant quantum computation despite ongoing challenges in error rates and integration.
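As the sketch promised above for the adiabatic paradigm, the following toy NumPy calculation (our own two-qubit example with an arbitrary diagonal cost Hamiltonian, not a production annealer) interpolates H(s) = (1-s)H_0 + sH_1 and tracks the spectral gap that governs the required runtime:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)

# Initial Hamiltonian: -X on each qubit; ground state is the uniform superposition
H0 = -(np.kron(X, I2) + np.kron(I2, X))

# Problem Hamiltonian: diagonal costs; the ground state encodes the answer (|11>)
H1 = np.diag([3.0, 2.0, 1.0, 0.0]).astype(complex)

# Sweep H(s) = (1-s) H0 + s H1 and record the minimum spectral gap
min_gap = np.inf
for s in np.linspace(0, 1, 101):
    evals = np.linalg.eigvalsh((1 - s) * H0 + s * H1)
    min_gap = min(min_gap, evals[1] - evals[0])

# The adiabatic theorem ties the safe evolution time to roughly 1/min_gap^2
print(f"Minimum spectral gap along the sweep: {min_gap:.4f}")
```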

Quantum cryptography techniques

Quantum cryptography techniques leverage the fundamental principles of quantum mechanics to provide security guarantees that hold even against computationally unbounded adversaries, primarily through the detection of eavesdropping attempts via the disturbances they impose on quantum states. Central to these methods is quantum key distribution (QKD), which enables two parties, often called Alice and Bob, to generate a shared secret key over an insecure channel while ensuring that any interception by an eavesdropper, Eve, introduces detectable errors. The security of QKD stems from the no-cloning theorem, which prevents perfect copying of unknown quantum states, and the monogamy of quantum correlations, making it impossible for Eve to gain full information without altering the transmission in a way that can be statistically verified.

The BB84 protocol, proposed by Charles Bennett and Gilles Brassard in 1984, is the foundational QKD scheme and uses polarized single photons to encode bits in two bases. Alice randomly selects either the rectilinear (0°/90°) or diagonal (45°/135°) basis to prepare photons representing bits 0 or 1, sends them to Bob over a quantum channel, and Bob measures each photon in a randomly chosen basis. After transmission, Alice and Bob publicly compare their basis choices via a classical channel and discard mismatched measurements during the sifting phase, retaining roughly half the bits as a raw key. To detect eavesdropping, they sacrifice a subset of the sifted key to estimate the quantum bit error rate (QBER); if it exceeds a threshold (typically around 11% for optimal security), indicating potential interception, they abort. The final secure key is obtained through error correction to handle channel noise and privacy amplification to reduce Eve's information, achieving a key rate proportional to the sifted key length minus the leaked information. Experimental implementations of BB84 have demonstrated secure key distribution over distances up to 421 km in optical fiber.

The E91 protocol, developed by Artur Ekert in 1991, extends QKD to an entanglement-based approach, utilizing pairs of entangled photons shared between Alice and Bob to generate the key while simultaneously testing for eavesdropping through violations of Bell inequalities. Alice and Bob each receive one photon from an entangled source, such as a parametric down-conversion crystal producing singlet states, and perform measurements in randomly chosen bases: for key generation, they use bases correlated with the entanglement (e.g., the Z-basis for bits), and for security checking, they select bases that test the CHSH-Bell inequality (e.g., combinations of X and Z). After measurements, they publicly announce their basis choices and use the Bell test results to bound Eve's information; a sufficient violation (e.g., a CHSH value above 2) certifies security against general attacks, as it implies the shared state is sufficiently entangled and not influenced by Eve. The sifted key is processed similarly to BB84 with error correction and privacy amplification. E91's advantage lies in its self-testing of quantum correlations, providing security even against certain implementation flaws, and it has been realized in free-space links over 144 km.

A simplified variant, the B92 protocol introduced by Bennett in 1992, reduces the complexity of BB84 by encoding bits using only two non-orthogonal quantum states, such as |0⟩ for bit 0 and |+⟩ (an equal superposition) for bit 1, sent as polarized photons. Bob measures in a way that can unambiguously identify one of the states but only probabilistically succeeds, leading to inconclusive results on many transmissions.
Alice and Bob sift the key by keeping only the rounds in which Bob's measurement was conclusive, which he announces over the classical channel; inconclusive rounds are discarded. Eavesdropping is detected via increased inconclusive rates or error rates exceeding the inherent ambiguity bound for non-orthogonal states. B92 requires fewer resources than BB84, as it uses only two signal states, but achieves lower key rates; it has been implemented experimentally with weak coherent pulses over short distances.

The security of these QKD protocols is underpinned by rigorous information-theoretic proofs, establishing that the key is secret and private even against unbounded adversaries, as long as the quantum channel's properties are bounded. For instance, the Csiszár-Körner bound on the secrecy capacity of wiretap channels quantifies the maximum secure key rate as the difference between the mutual information of the legitimate channel and that of the eavesdropper's channel, ensuring that Eve's accessible information is exponentially small after privacy amplification. These proofs, developed in works such as Shor and Preskill (2000) for BB84 and subsequent extensions, reduce security to the disturbance caused by eavesdropping attacks, with finite-key analyses providing practical bounds for real implementations. Over 20 years of theoretical and experimental validation have confirmed QKD's robustness, with global demonstrations such as the Chinese Micius satellite distributing entanglement over 1200 km. As of 2025, satellite-based QKD has advanced further, with Europe's ESA launching trials via the Eagle-1 satellite for secure ground-satellite links, China's USTC demonstrating real-time QKD from a quantum microsatellite to mobile ground stations, and further missions contracted to enhance Europe's sovereign quantum communication capabilities.
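The sifting and error-estimation logic of BB84 can be illustrated with a classical Monte Carlo sketch (our own simplified model of an ideal, eavesdropper-free channel; real implementations involve photon sources, detectors, and finite-key statistics):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 10_000

# Alice picks random bits and random bases (0 = rectilinear, 1 = diagonal)
alice_bits = rng.integers(0, 2, n)
alice_bases = rng.integers(0, 2, n)
bob_bases = rng.integers(0, 2, n)

# Ideal channel, no Eve: matching bases reproduce Alice's bit,
# mismatched bases give Bob a uniformly random outcome
match = alice_bases == bob_bases
bob_bits = np.where(match, alice_bits, rng.integers(0, 2, n))

# Sifting: keep only rounds with matching bases (~half survive)
sifted_a, sifted_b = alice_bits[match], bob_bits[match]

# Estimate the QBER on a sacrificed random sample of the sifted key
sample = rng.choice(len(sifted_a), size=len(sifted_a) // 4, replace=False)
qber = np.mean(sifted_a[sample] != sifted_b[sample])
print(f"Sifted key length: {len(sifted_a)}, estimated QBER: {qber:.3f}")
# An intercept-resend attack would push the QBER toward 25%,
# well above the ~11% abort threshold discussed in the text.
```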

Challenges

Decoherence and noise

Decoherence refers to the process by which a quantum system loses its coherent superposition of states due to unintended interactions with its surrounding environment, effectively mimicking the collapse of the wave function without invoking measurement. This interaction entangles the system's quantum information with environmental degrees of freedom, leading to the rapid suppression of off-diagonal elements in the system's density matrix and the emergence of classical-like behavior. In quantum information processing, decoherence represents a fundamental barrier, as it degrades the fragile quantum superpositions essential for tasks like computation and communication.

The dynamics of decoherence are commonly modeled using open quantum system theory, particularly through Markovian approximations that assume weak coupling to the environment. A standard framework is the Lindblad master equation, which describes the evolution of the system's density operator \rho as \frac{d\rho}{dt} = -i[H, \rho] + \sum_k \left( L_k \rho L_k^\dagger - \frac{1}{2} \{ L_k^\dagger L_k, \rho \} \right), where H is the system's Hamiltonian and the L_k are Lindblad operators representing environmental influences. Specific noise models include the amplitude damping channel, which captures energy relaxation from excited to ground states via operators like L = \sqrt{\gamma} \sigma_- (with \gamma the damping rate and \sigma_- the lowering operator); the phase damping channel, modeling pure dephasing that preserves populations but erodes coherences through L = \sqrt{\gamma} \sigma_z; and the depolarizing channel, which uniformly mixes the state toward the maximally mixed state, often represented by Pauli operators scaled by a decay parameter. These channels provide tractable ways to simulate and quantify decoherence effects in qubit-based systems.

Key timescales characterize the rate of decoherence: the longitudinal relaxation time T_1, which measures the decay of populations due to energy exchange with the environment, and the transverse relaxation time T_2, which quantifies the loss of phase coherence, often related by T_2^{-1} = \frac{1}{2} T_1^{-1} + T_\phi^{-1}, where T_\phi is the pure dephasing time. Coherence preservation is assessed via fidelity metrics, such as the average fidelity F = \int d\psi \langle \psi | \mathcal{E}(\rho_\psi) | \psi \rangle, where \mathcal{E} is the noisy channel and the integral is over pure states \psi; high fidelity (e.g., >99%) is required for scalable quantum operations but degrades exponentially on decoherence timescales.

In physical implementations, decoherence arises from diverse environmental sources, including thermal noise from phonons or blackbody radiation at finite temperatures, which induces relaxation and dephasing in solid-state qubits. Electromagnetic interference, such as fluctuating magnetic fields from control electronics or cosmic rays, further contributes to phase flips and unwanted couplings in superconducting or ion-trap systems. Initial mitigation strategies focus on environmental isolation, such as cryogenic cooling to millikelvin temperatures to suppress thermal excitations, and electromagnetic shielding to minimize stray fields. Dynamical decoupling techniques apply sequences of fast control pulses to refocus slowly varying dephasing errors, effectively extending coherence times without additional encoding overhead.
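The Lindblad dynamics above can be integrated numerically. This sketch (our own forward-Euler toy integrator, adequate for illustration though not for precision work) evolves a qubit prepared in |+\rangle under pure dephasing and shows the off-diagonal coherence decaying as e^{-\gamma t} while populations are preserved:

```python
import numpy as np

def lindblad_step(rho, H, Ls, dt):
    """One Euler step of drho/dt = -i[H,rho] + sum_k (L rho L† - 1/2 {L†L, rho})."""
    drho = -1j * (H @ rho - rho @ H)
    for L in Ls:
        LdL = L.conj().T @ L
        drho += L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL)
    return rho + dt * drho

# Pure dephasing of a qubit prepared in |+> = (|0> + |1>)/sqrt(2)
gamma = 0.5
sz = np.array([[1, 0], [0, -1]], dtype=complex)
Ls = [np.sqrt(gamma / 2) * sz]                # dephasing Lindblad operator
H = np.zeros((2, 2), dtype=complex)           # no coherent dynamics, noise only

plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(plus, plus.conj())

dt, steps = 0.01, 300
for _ in range(steps):
    rho = lindblad_step(rho, H, Ls, dt)

# Populations survive; the coherence decays ~ exp(-gamma t)
expected = 0.5 * np.exp(-gamma * dt * steps)
print(f"coherence |rho01| = {abs(rho[0, 1]):.4f}, expected ~ {expected:.4f}")
```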

Error correction methods

Quantum error correction (QEC) addresses the fragility of quantum information by encoding a single logical qubit into a subspace spanned by multiple physical qubits, thereby detecting and correcting errors without disturbing the encoded state. This principle, first demonstrated theoretically by Peter Shor in 1995, relies on redundancy to protect against both bit-flip and phase-flip errors, which are inherent to quantum hardware due to interactions with the environment. In this approach, errors are identified through non-demolition measurements that project the system onto error syndromes, allowing corrective operations to be applied conditionally while preserving the encoded superposition.

The Shor code exemplifies early QEC constructions, encoding one logical qubit into nine physical qubits to correct arbitrary single-qubit errors. It combines three-qubit repetition codes for bit-flip and phase-flip protection, arranged in a concatenated structure where the logical |0⟩ state is represented as \frac{1}{2^{3/2}} \left( |000\rangle + |111\rangle \right)^{\otimes 3} and the logical |1⟩ as \frac{1}{2^{3/2}} \left( |000\rangle - |111\rangle \right)^{\otimes 3}. Error detection uses parity checks: two-qubit ZZ measurements on subsets of qubits identify bit flips, and six-qubit X-type operators identify phase flips. The code operates within the stabilizer formalism, introduced by Daniel Gottesman, where the code space is the +1 eigenspace of a commuting set of Pauli operators (stabilizers) that define the logical subspace without revealing the encoded information. For the Shor code, the stabilizers include X₁X₂X₃X₄X₅X₆ and X₄X₅X₆X₇X₈X₉ (for phase flips) and Z₁Z₂, Z₂Z₃, Z₄Z₅, Z₅Z₆, Z₇Z₈, Z₈Z₉ (for bit flips), enabling syndrome extraction via ancillary qubits.

The surface code, a topological QEC scheme, encodes logical qubits on a two-dimensional lattice of physical qubits placed on the edges of a square grid, with stabilizers defined on plaquettes (Z-type) and vertices (X-type). Logical information is stored non-locally, with errors creating anyon-like excitations, so that only error chains along extended paths across the lattice cause logical failure, providing inherent robustness. Syndrome measurement involves ancillary qubits coupled to each stabilizer group, repeated periodically to extract error locations via minimum-weight matching algorithms, achieving error thresholds up to approximately 1% under realistic noise models. This code's planar geometry supports local interactions, making it suitable for fault-tolerant implementations on near-term hardware.

The threshold theorem establishes that if the physical error rate per gate or measurement falls below a critical threshold—typically around 1% for leading codes like the surface code—arbitrarily long quantum computations can be performed reliably through concatenated levels of error correction, where each level corrects errors from the one below. Proven independently by several groups, this result shows that the logical error rate decreases exponentially with the number of levels, enabling scalable fault tolerance provided the base error rate is sufficiently low. For instance, under a depolarizing noise model, thresholds range from roughly 0.5% to 3% depending on the code and gate set, with concatenation amplifying protection.

Fault-tolerant gates in QEC maintain the code space's integrity during computation, often implemented via transversal operations that apply identical single-qubit gates to corresponding physical qubits in the logical encoding. In the Shor code, for example, a logical Pauli gate can be implemented by applying the same Pauli operator to all nine physical qubits, an operation that maps the code space to itself, preserving the code distance and allowing single errors to be corrected after the gate.
More generally, logical gates in stabilizer codes can be fault-tolerantly realized through syndrome-based protocols or lattice surgery in surface codes, ensuring that gate errors do not propagate uncontrollably. These methods underpin universal quantum computation by composing low-error primitives. Recent experiments have demonstrated logical error rates below the surface code threshold, as achieved by Google in December 2024, marking progress toward fault-tolerant computation. Further advances in 2025 include qudit-based QEC extending beyond two-level qubits.
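The syndrome-extraction idea is easiest to see in the three-qubit bit-flip repetition code, the building block of the Shor code. The following sketch (our own NumPy toy model, with state vectors standing in for hardware measurements) flips one physical qubit, reads out the two Z-parity syndromes, and applies the indicated correction:

```python
import numpy as np

# Three-qubit repetition code against bit flips: |0>_L = |000>, |1>_L = |111>
def encode(bit):
    state = np.zeros(8, dtype=complex)
    state[0 if bit == 0 else 7] = 1.0
    return state

def pauli_x(qubit):
    """X on one qubit of three (qubit 0 is the leftmost)."""
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    ops = [np.eye(2, dtype=complex)] * 3
    ops[qubit] = X
    out = ops[0]
    for op in ops[1:]:
        out = np.kron(out, op)
    return out

def syndrome(state):
    """Expectation values of the parities Z1Z2 and Z2Z3, which reveal the
    error location without revealing the logical bit."""
    Z = np.array([[1, 0], [0, -1]], dtype=complex)
    I2 = np.eye(2, dtype=complex)
    z12 = np.kron(np.kron(Z, Z), I2)
    z23 = np.kron(I2, np.kron(Z, Z))
    return (np.real(state.conj() @ z12 @ state),
            np.real(state.conj() @ z23 @ state))

state = encode(0)
corrupted = pauli_x(1) @ state       # bit flip on the middle qubit
s = syndrome(corrupted)
print("Syndrome (Z1Z2, Z2Z3):", s)   # (-1, -1) uniquely points at qubit 1
recovered = pauli_x(1) @ corrupted   # apply the indicated correction
print("Recovered |000>?", np.isclose(abs(recovered[0]), 1.0))
```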

Historical Development

Origins in quantum mechanics and physics

The foundations of quantum information trace back to the early formalisms of quantum mechanics developed in the 1920s and 1930s, which introduced a state-based description of physical systems essential for later concepts of quantum information processing. Werner Heisenberg's matrix mechanics, introduced in 1925, provided a non-commutative algebraic framework for observables, emphasizing relationships between measurable quantities rather than classical trajectories. This approach laid the groundwork for treating quantum states as abstract entities capable of encoding information. Erwin Schrödinger's wave mechanics, published in 1926, complemented this by describing quantum systems through continuous wave functions, enabling probabilistic interpretations of state evolution that underpin superposition and interference—key to information storage in quantum systems. Paul Dirac's 1930 monograph, The Principles of Quantum Mechanics, synthesized these developments into a unified formalism using bras and kets, formalizing quantum states in Hilbert space and establishing the linear algebraic structure that allows quantum information to be manipulated mathematically.

A pivotal moment came in 1935 with the Einstein-Podolsky-Rosen (EPR) paper, which highlighted paradoxes in quantum mechanics arising from entangled states, where measurements on one particle instantaneously correlate with outcomes on a distant partner, challenging classical notions of locality and realism. The authors argued that this "spooky action at a distance" implied quantum mechanics was incomplete, necessitating hidden variables to restore determinism, yet their analysis inadvertently spotlighted entanglement as a resource for non-local quantum correlations central to quantum information. This paradox set the stage for probing the informational implications of quantum non-locality. John Bell's 1964 theorem resolved key aspects of the EPR debate by deriving inequalities that local hidden-variable theories must satisfy, while quantum mechanics predicts violations thereof, experimentally confirmed later and demonstrating inherent non-locality in quantum correlations. These correlations, stronger than classical limits, form the basis for quantum information protocols exploiting shared entanglement without classical analogs.

In atomic physics, the invention of the maser by Charles Townes and colleagues in 1954 marked an early milestone in coherent quantum control, using stimulated emission in ammonia molecules to amplify microwaves, serving as a precursor to manipulating quantum states for information purposes. The subsequent proposal of the laser in 1958 by Townes and Arthur Schawlow extended this to optical frequencies, enabling precise control over quantum emitters and laying groundwork for quantum optical information processing.

Ties to relativity emerged through the no-faster-than-light signaling principle, inherent in quantum mechanics and preserving causality: entangled measurements yield correlations but cannot transmit usable information superluminally, aligning quantum information with special relativity's light-speed limit and preventing paradoxes in space-time. This compatibility ensured quantum mechanics' consistency with relativistic physics, framing quantum information as a framework respecting fundamental physical constraints.

Evolution from information theory and computing

The foundations of quantum information emerged in the mid-20th century as researchers began extending classical information theory to quantum systems. In 1948, Claude Shannon introduced the mathematical theory of communication, establishing concepts like entropy and channel capacity that quantified information transmission in classical systems. This framework inspired subsequent efforts to incorporate quantum mechanics, recognizing that quantum states could potentially enhance information processing beyond classical limits. A pivotal early contribution came in 1973 when Alexander Holevo derived bounds on the amount of classical information transmissible through a quantum channel, showing that it is limited by the von Neumann entropy of the ensemble of quantum states, thus bridging Shannon's entropy to quantum contexts. Holevo's theorem highlighted the limits that quantum measurement places on information extraction, setting a theoretical ceiling that classical measurements could not exceed without quantum-specific encoding.

The 1980s marked a turning point, as physicists and theorists explored quantum analogs to classical computing models. In 1982, Richard Feynman proposed that quantum computers could simulate other quantum phenomena more efficiently than classical computers, arguing that a quantum simulator would be necessary to model the exponential complexity of quantum many-body problems. Earlier, in 1980, Paul Benioff had formalized a quantum mechanical model of Turing machines using a Hamiltonian framework, demonstrating that computation could be realized reversibly at the quantum level without energy dissipation in idealized cases. These works laid the groundwork for viewing quantum mechanics not just as a constraint on computation but as a computational resource.

The integration of quantum ideas with cryptography accelerated in 1984, when Charles Bennett and Gilles Brassard developed the BB84 protocol, the first quantum key distribution scheme, which leveraged the no-cloning theorem to detect eavesdropping and enable secure classical key exchange over quantum channels. This innovation merged quantum no-cloning with classical information theory, proving practical applications for quantum-enhanced security.

The 1990s saw explosive growth in quantum algorithms, drawing directly from computational challenges. In 1994, Peter Shor devised a polynomial-time quantum algorithm for integer factoring and discrete logarithms, exploiting quantum Fourier transforms to solve problems intractable for classical computers and threatening widely used public-key cryptosystems. Building on this momentum, in 1996 Lov Grover introduced a quantum search algorithm that quadratically speeds up unstructured database searches compared to classical exhaustive methods, providing the first demonstrated quantum speedup for a broadly practical problem.

By the 2010s, attention turned to practical gaps in quantum networks, such as signal loss over distance, prompting advancements in quantum repeaters. A 2011 review of protocols building on the seminal Duan-Lukin-Cirac-Zoller (DLCZ) scheme outlined the use of atomic ensembles and linear optics for entanglement distribution, enabling scalable quantum communication architectures that address capacity limits in noisy, long-distance channels. These developments extended the interdisciplinary synthesis, evolving quantum information from theoretical extensions of classical theory into a robust field with engineered solutions for real-world constraints. The foundational importance of entanglement was further recognized in 2022, when the Nobel Prize in Physics was awarded to Alain Aspect, John F. Clauser, and Anton Zeilinger for their experiments with entangled photons, confirming quantum mechanics' predictions and advancing applications in quantum information.
