Observable is a San Francisco-based software company that provides a cloud-based platform for collaborative data exploration, analysis, and visualization.[1] The platform enables teams to create interactive notebooks integrating code, data queries, and dynamic graphics, supporting languages like JavaScript, SQL, and Python while emphasizing real-time collaboration and AI-assisted insights.[2] Founded in 2016 by Mike Bostock, the creator of the D3.js visualization library, and Melody Meckfessel, a former Google executive, Observable aims to make data work more accessible and trustworthy for diverse users including developers, analysts, and business professionals.[3][1]

The core of Observable's offering is its reactive computational notebook environment, where cells execute code to generate live outputs such as charts, maps, and dashboards, with changes propagating automatically across the document.[2] Key features include seamless integration with databases like Snowflake and BigQuery, support for open-source libraries like D3 and Plot, and tools for embedding visualizations in external applications or exporting them as static files.[2] The platform's design draws from Bostock's expertise in web-based graphics, evolving from an initial focus on JavaScript-centric exploration to a broader ecosystem that includes the Observable Framework for building data apps and Observable Canvases for AI-enhanced analysis.[1] Since its public launch in 2018, Observable has grown through community contributions, with over 1.2 million notebooks created.[1]

Observable has secured significant venture funding to expand its capabilities, including a $10.5 million Series A round in November 2020 led by Sequoia Capital and Acrew Capital and a $35.6 million Series B in 2022 led by Menlo Ventures, bringing total funding to $46.1 million.[4][5] As of 2025, the company reports milestones such as over 508 million downloads of D3.js, 350,000 users, and the launch of Observable Canvases, positioning it as a leader in modern data collaboration tools.[1]
Overview
Definition
In physics, an observable is a physical property or quantity whose value can be determined through empirical measurement.[6] Examples include position, momentum, and energy in mechanics, which can be quantified using appropriate instruments under controlled conditions.[7]

Unlike abstract theoretical variables—such as parameters in mathematical models that lack direct empirical access—observables are distinguished by their measurability and relevance to experimental verification.[8] This empirical accessibility ensures that observables bridge theoretical predictions with real-world data, forming the foundation of scientific inquiry across disciplines.

General examples span various fields: in thermodynamics, temperature serves as a key observable, representing the average kinetic energy of particles and measurable via thermometers or spectroscopic methods.[9] In kinematics, velocity is an observable that quantifies motion and can be determined from displacement over time using tools like radar or optical tracking.[6]

In classical physics, the measurability of observables implies reproducibility: under identical initial conditions, repeated measurements yield the same value, underscoring the deterministic framework of the theory.[10] This property contrasts with quantum contexts, where observables are associated with operators, though their measurement outcomes exhibit probabilistic characteristics.[8]
Historical Development
The concept of observables in physics originated in the foundations of classical mechanics during the 17th century. Galileo Galilei advanced the empirical measurement of quantities such as position, velocity, and time through his inclined-plane experiments on falling objects, establishing a quantitative approach to motion that emphasized observable data over qualitative speculation.[11] This work directly influenced Isaac Newton's Philosophiæ Naturalis Principia Mathematica (1687), which integrated these observables into a comprehensive deterministic system governed by laws of motion and universal gravitation, allowing precise predictions from initial conditions.[12]

By the 19th century, the deterministic nature of classical observables reached its philosophical zenith with Pierre-Simon Laplace's articulation of causal determinism. In his 1814 Essai philosophique sur les probabilités, Laplace described a hypothetical superintelligence—later termed Laplace's demon—that could compute the future state of the universe entirely from the positions and momenta of all particles at one moment, embodying the classical ideal of fully predictable observables. This view reinforced the belief that physical quantities like position and momentum were inherently deterministic and accessible through complete measurement.

The early 20th century marked a profound transition as anomalies in atomic phenomena exposed limitations in classical observables, leading to the quantum revolution. 
Niels Bohr's principle of complementarity, presented in his 1927 Como lecture, shifted focus from underlying hidden variables to the inherent incompatibility of certain observables in quantum measurements, resolving paradoxes by prioritizing empirical outcomes over classical determinism.[13]

Pivotal milestones defined this quantum framework: Werner Heisenberg's 1925 paper introduced matrix mechanics, representing observables as non-commuting arrays to model discrete quantum transitions and spectral lines, diverging from continuous classical variables.[14] John von Neumann's 1932 Mathematical Foundations of Quantum Mechanics further solidified the shift by formalizing observables within Hilbert space, transforming them from definite classical parameters into probabilistic operators whose measurements yield eigenvalue spectra with inherent uncertainties. Post-1920s quantum theory thus supplanted Laplace's deterministic worldview with a probabilistic paradigm, where observables reflect statistical predictions rather than absolute certainties.[15]
Classical Physics
Observables in Classical Mechanics
In classical mechanics, observables are real-valued smooth functions defined on the phase space, a mathematical structure that encodes the complete state of a dynamical system through its position coordinates q and conjugate momentum coordinates p. The phase space is typically represented as the cotangent bundle T^*M over the configuration space manifold M, where each point (q, p) specifies the system's configuration and its generalized momenta. This formalism allows observables to capture measurable physical quantities, such as energy or angular momentum, as functions A: T^*M \to \mathbb{R}.[16][17]

Common examples of observables include the kinetic energy T = \frac{p^2}{2m}, which depends solely on the momentum for a particle of mass m; the potential energy V(q), a function of position alone; and the total energy, embodied by the Hamiltonian H = T + V, which governs the system's dynamics. These functions are evaluated at specific points in phase space to yield numerical values for the observables at a given state. The Hamiltonian itself serves as the primary observable for energy conservation and time evolution in isolated systems.[17][18]

The dynamics of observables are described using the Poisson bracket, a bilinear operation that encodes the symplectic structure of phase space. For two observables A and B in a system with n degrees of freedom, the Poisson bracket is defined as

\{A, B\} = \sum_{i=1}^n \left( \frac{\partial A}{\partial q_i} \frac{\partial B}{\partial p_i} - \frac{\partial A}{\partial p_i} \frac{\partial B}{\partial q_i} \right).

This bracket satisfies antisymmetry (\{A, B\} = -\{B, A\}), bilinearity, and the Jacobi identity, endowing the space of observables with a Lie algebra structure. 
The time evolution of an observable A follows Hamilton's equation

\frac{dA}{dt} = \{A, H\} + \frac{\partial A}{\partial t},

where the partial derivative accounts for any explicit time dependence in A; if A is time-independent and \{A, H\} = 0, then A is a conserved quantity. In one dimension, the bracket simplifies to \{A, B\} = \frac{\partial A}{\partial q} \frac{\partial B}{\partial p} - \frac{\partial A}{\partial p} \frac{\partial B}{\partial q}.[18][17]

Measurement in classical mechanics is inherently deterministic: given ideal instruments, the exact values of all observables can be obtained simultaneously for a system in a well-defined phase space state (q, p), as there are no fundamental limitations on precision or compatibility between measurements. This contrasts with quantum mechanics, where observables become non-commuting operators on a Hilbert space.[16][18]
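The bracket formalism above lends itself to a quick numerical check. The following sketch (hypothetical code, with an assumed harmonic-oscillator Hamiltonian and an arbitrary phase-space point) approximates the one-dimensional Poisson bracket with central finite differences, recovering \{q, p\} = 1, Hamilton's equation \dot{q} = \{q, H\} = p/m, and energy conservation \{H, H\} = 0:

```python
import numpy as np

def poisson_bracket(A, B, q, p, h=1e-5):
    """Central-difference approximation of {A, B} = dA/dq dB/dp - dA/dp dB/dq."""
    dA_dq = (A(q + h, p) - A(q - h, p)) / (2 * h)
    dA_dp = (A(q, p + h) - A(q, p - h)) / (2 * h)
    dB_dq = (B(q + h, p) - B(q - h, p)) / (2 * h)
    dB_dp = (B(q, p + h) - B(q, p - h)) / (2 * h)
    return dA_dq * dB_dp - dA_dp * dB_dq

m, omega = 1.0, 2.0  # assumed mass and frequency
H = lambda q, p: p**2 / (2 * m) + 0.5 * m * omega**2 * q**2  # total energy
Q = lambda q, p: q   # position observable
P = lambda q, p: p   # momentum observable

q0, p0 = 0.7, -0.3                       # an arbitrary phase-space point
dq_dt = poisson_bracket(Q, H, q0, p0)    # Hamilton: dq/dt = {q, H} = p/m
dH_dt = poisson_bracket(H, H, q0, p0)    # conservation: {H, H} = 0
canon = poisson_bracket(Q, P, q0, p0)    # canonical bracket: {q, p} = 1
```

Because central differences are exact for linear and quadratic functions, the oscillator results here are limited only by floating-point rounding.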
Measurement and Determinism
In classical physics, the measurement of observables entails direct empirical determination of their numerical values under controlled conditions, typically using instruments such as rulers to gauge position or clocks to record time.[19] This process assumes negligible back-action from the measurement apparatus on macroscopic systems, allowing observables like velocity or energy to be ascertained with high precision without significantly altering the system's state.[20] Such measurements form the foundation for empirical validation of physical laws, where the observable's value is extracted reproducibly from the system's interaction with the measuring device.[19]

The principle of determinism underpins the interpretation of these measurements, asserting that the future evolution of a classical system is uniquely determined by its initial state. Pierre-Simon Laplace formalized this in 1814, describing an intellect that, knowing the positions and momenta (key observables) of all particles at time t=0, could predict the entire future trajectory through Newton's laws.[21] This Laplacian determinism implies that perfect measurement of observables eliminates uncertainty, rendering the system's path predictable in principle, as the equations of motion are reversible and time-symmetric.[22]

However, practical limitations arise in chaotic systems, where infinitesimal errors in measuring initial observables are exponentially amplified over time, leading to divergence in predicted trajectories. 
The three-body problem exemplifies this sensitivity, as small uncertainties in positions or velocities render long-term forecasts infeasible despite the underlying deterministic equations.[23] Theoretically, these challenges are resolvable with arbitrarily precise measurements, preserving determinism, though computational and observational constraints impose effective unpredictability.[24]

In statistical mechanics, observables are often treated as ensemble averages over many realizations of the system, bridging microscopic measurements to macroscopic properties. For instance, pressure in an ideal gas emerges as the time-averaged force per unit area from molecular collisions, yielding the equation of state PV = NkT, where P is the observable pressure, V the volume, N the number of particles, k Boltzmann's constant, and T the temperature. This averaging approach accounts for the collective behavior of unmeasurable individual particle states, providing a deterministic framework for thermodynamic predictions.[25]
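As a minimal numerical illustration (the specific quantities are assumed, not from the source), the equation of state PV = NkT evaluated for one mole of gas at standard conditions recovers atmospheric pressure:

```python
k = 1.380649e-23      # Boltzmann constant, J/K
N = 6.02214076e23     # number of particles: one mole
T = 273.15            # temperature, K (0 degrees Celsius)
V = 22.414e-3         # molar volume at standard conditions, m^3

# Observable pressure from the ideal-gas equation of state PV = NkT
P = N * k * T / V     # ~1.013e5 Pa, i.e. about one atmosphere
```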
Quantum Mechanics
Observables as Operators
In quantum mechanics, observables are mathematically represented by self-adjoint (Hermitian) operators acting on the Hilbert space of the quantum system. This formulation, introduced by John von Neumann, ensures that the possible outcomes of measurements correspond to the real eigenvalues of these operators, reflecting the empirical reality that measurement results are real numbers. Self-adjointness guarantees that the operator is equal to its adjoint, \hat{A}^\dagger = \hat{A}, which is essential for the probabilistic interpretation of quantum states.[26]

Specific examples illustrate this operator representation. In the position basis, the position operator \hat{x} is defined as multiplication by the position coordinate x, so \hat{x} \psi(x) = x \psi(x) for a wave function \psi(x).[26] The momentum operator \hat{p}, in the same representation, takes the differential form \hat{p} = -i \hbar \frac{d}{dx}, where \hbar is the reduced Planck constant.[26] These canonical operators satisfy the commutation relation [\hat{x}, \hat{p}] = i \hbar, foundational to the Heisenberg uncertainty principle, though their explicit forms depend on the chosen basis.

The general structure of an observable operator \hat{A} is provided by the spectral theorem for self-adjoint operators, which allows a spectral decomposition \hat{A} = \sum_i \lambda_i |\psi_i \rangle \langle \psi_i |, where \{\lambda_i\} are the distinct eigenvalues representing possible measurement outcomes, and |\psi_i \rangle are the corresponding orthonormal eigenvectors forming a complete basis of the Hilbert space. For continuous spectra, the sum generalizes to an integral over the spectral measure. 
This decomposition underpins the probabilistic nature of measurements, as the probability of obtaining eigenvalue \lambda_i in state |\psi \rangle is |\langle \psi_i | \psi \rangle|^2.

The expectation value of an observable \hat{A} in a normalized state |\psi \rangle is given by the inner product \langle \hat{A} \rangle = \langle \psi | \hat{A} | \psi \rangle, which represents the average outcome over many measurements on identically prepared systems. Using the spectral decomposition, this simplifies to \langle \hat{A} \rangle = \sum_i \lambda_i |\langle \psi_i | \psi \rangle|^2, weighting each eigenvalue by its probability. This operator framework distinguishes quantum observables from their classical counterparts, where they are simply real-valued functions on phase space.
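A brief sketch may help make the spectral machinery concrete. The following hypothetical example (both the operator and the state are invented for illustration) diagonalizes a 2×2 Hermitian matrix with NumPy and checks that the Born-rule probabilities reproduce \langle \psi | \hat{A} | \psi \rangle:

```python
import numpy as np

# A hypothetical Hermitian observable on a two-dimensional Hilbert space
A = np.array([[1.0, 2.0 - 1.0j],
              [2.0 + 1.0j, 3.0]])

# Spectral decomposition A = sum_i lambda_i |psi_i><psi_i|
eigvals, eigvecs = np.linalg.eigh(A)

# An arbitrary normalized state |psi>
psi = np.array([1.0, 1.0j]) / np.sqrt(2)

# Born-rule probabilities |<psi_i|psi>|^2 and the two routes to <A>
probs = np.abs(eigvecs.conj().T @ psi) ** 2
expval_spectral = np.sum(eigvals * probs)            # sum_i lambda_i |<psi_i|psi>|^2
expval_direct = np.real(psi.conj() @ A @ psi)        # <psi|A|psi>
```

The two expectation values agree because the eigenvectors returned by `eigh` form a complete orthonormal basis, exactly as the spectral theorem requires.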
Compatible Observables
In quantum mechanics, two observables represented by self-adjoint operators \hat{A} and \hat{B} are compatible if they commute, meaning their commutator vanishes: [ \hat{A}, \hat{B} ] = \hat{A}\hat{B} - \hat{B}\hat{A} = 0. This commutation relation implies that both observables can be measured simultaneously with arbitrary precision, without one measurement disturbing the outcome of the other. The concept originates from the foundational axiomatization of quantum mechanics, where observables are associated with linear operators on Hilbert space, and compatibility ensures that the measurement process preserves the eigenstates relevant to both.[27][28]

A key mathematical property of compatible observables is that commuting self-adjoint operators share a common complete set of eigenstates. If |\psi\rangle is an eigenstate of \hat{A} with eigenvalue \lambda, so \hat{A} |\psi\rangle = \lambda |\psi\rangle, then |\psi\rangle is also an eigenstate of \hat{B} with some eigenvalue \mu, satisfying \hat{B} |\psi\rangle = \mu |\psi\rangle. This shared eigenbasis arises from the simultaneous diagonalizability of commuting operators, as guaranteed by the spectral theorem in Hilbert space theory, allowing the observables to be jointly diagonalized.[27]

Representative examples of compatible observables include the Hamiltonian \hat{H} (total energy) and components of angular momentum in systems governed by central potentials, such as the Coulomb potential in the hydrogen atom. Here, \hat{H} commutes with the square of the angular momentum operator \hat{L}^2 and its z-component \hat{L}_z, i.e., [ \hat{H}, \hat{L}^2 ] = 0 and [ \hat{H}, \hat{L}_z ] = 0, enabling the specification of energy levels alongside angular momentum quantum numbers without conflict. 
Another example is the position operator in one spatial direction and the momentum operator in an orthogonal direction, such as \hat{x} and \hat{p}_y, which commute since [ \hat{x}, \hat{p}_y ] = 0, reflecting the independence of perpendicular coordinates in free space.[29]

The implications of compatibility extend to the prediction of joint measurement outcomes. For a quantum state |\psi\rangle, the joint probability distribution for obtaining eigenvalues \lambda of \hat{A} and \mu of \hat{B} is P(\lambda, \mu) = |\langle \phi_{\lambda \mu} | \psi \rangle|^2, where |\phi_{\lambda \mu}\rangle denotes the common eigenstates labeled by both eigenvalues. This distribution fully characterizes the simultaneous statistics, underscoring how compatibility resolves the otherwise indeterminate correlations between observables in quantum theory.[30]
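The shared-eigenbasis property can be checked in a small sketch (a hypothetical two-qubit example, not from the source): the commuting observables \sigma_z \otimes I and I \otimes \sigma_z act on different subsystems, so the computational basis states are joint eigenstates, each carrying a definite eigenvalue pair (\lambda, \mu):

```python
import numpy as np

sz = np.diag([1.0, -1.0])   # Pauli z
I2 = np.eye(2)

A = np.kron(sz, I2)         # sigma_z measured on qubit 1
B = np.kron(I2, sz)         # sigma_z measured on qubit 2

commutator = A @ B - B @ A  # compatible observables: [A, B] = 0

# The computational basis |00>, |01>, |10>, |11> is a common eigenbasis;
# each basis vector has a definite (lambda, mu) pair for (A, B).
basis = np.eye(4)
joint_eigvals = [(basis[:, i] @ A @ basis[:, i],
                  basis[:, i] @ B @ basis[:, i]) for i in range(4)]
```

Here the joint probabilities P(\lambda, \mu) of the text would be the squared overlaps of a state with these four common eigenvectors.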
Incompatible Observables
In quantum mechanics, two observables represented by Hermitian operators \hat{A} and \hat{B} are defined as incompatible if their commutator [\hat{A}, \hat{B}] \neq 0. This non-commutativity implies that the operators do not share a complete set of common eigenstates, preventing the existence of a joint eigenbasis and thereby prohibiting the precise simultaneous determination of definite values for both observables in any quantum state.[31]

A canonical example of incompatible observables is the position operator \hat{x} and the momentum operator \hat{p}, which satisfy the commutation relation [\hat{x}, \hat{p}] = i\hbar. This relation underpins the Fourier transform duality between the position and momentum representations of the wave function, where precise knowledge in one domain inherently broadens the uncertainty in the other.[31]

The general consequences of incompatibility are profound: no quantum state can serve as a simultaneous eigenstate of both operators, and a measurement of one observable inevitably disturbs the system's preparation with respect to the other. Heisenberg illustrated this disturbance through his 1927 gamma-ray microscope thought experiment, in which high-resolution imaging of an electron's position using short-wavelength gamma rays causes unpredictable momentum transfers via Compton scattering, thus altering the electron's trajectory.[31]

This notion of incompatible observables was pivotal in the philosophical debates between Niels Bohr and Albert Einstein at the Fifth Solvay Conference in 1927, where discussions centered on whether quantum mechanics fully captures the objective reality of physical observables or introduces fundamental indeterminacy. Such incompatibility also gives rise to uncertainty relations that quantify the trade-offs in measurement precision.
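Because position and momentum require an infinite-dimensional Hilbert space, a finite-dimensional stand-in makes incompatibility easy to check numerically. This sketch (illustrative, not from the source) verifies that \sigma_x and \sigma_z fail to commute ([\sigma_x, \sigma_z] = -2i\sigma_y) and that an eigenstate of \sigma_z is not an eigenstate of \sigma_x:

```python
import numpy as np

sx = np.array([[0.0, 1.0], [1.0, 0.0]])      # Pauli x
sy = np.array([[0.0, -1.0j], [1.0j, 0.0]])   # Pauli y
sz = np.diag([1.0, -1.0])                    # Pauli z

comm = sx @ sz - sz @ sx     # [sigma_x, sigma_z] = -2i sigma_y, nonzero

# |0> is an eigenstate of sigma_z but not of sigma_x:
up = np.array([1.0, 0.0])
flipped = sx @ up            # sigma_x|0> = |1>, not proportional to |0>
```

The nonzero commutator is exactly the obstruction to a joint eigenbasis described above.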
Measurement and Interpretation
Eigenvalue Measurements
In quantum mechanics, the measurement of an observable \hat{A} is governed by the projection postulate, which specifies both the possible outcomes and their probabilities. For a system in a normalized state |\psi\rangle, the observable \hat{A} possesses a complete set of orthonormal eigenstates |i\rangle with corresponding eigenvalues \lambda_i. Upon measurement, the system yields one of the eigenvalues \lambda_i with probability |\langle i | \psi \rangle|^2, the square of the modulus of the projection of |\psi\rangle onto the eigenstate |i\rangle. This probabilistic nature arises because the state |\psi\rangle is generally a superposition of the eigenstates, and the measurement selects one eigenvalue according to the Born rule.

Following the measurement outcome \lambda_i, the wave function undergoes an instantaneous collapse to the corresponding eigenstate |i\rangle, a process known as the von Neumann projection. This non-unitary projection abruptly reduces the superposition to a definite eigenstate, marking the transition from quantum superposition to a classical-like definite outcome. The postulate, formalized by John von Neumann in his 1932 treatise, ensures that the post-measurement state is normalized and aligned with the measured eigenvalue.

A key consequence of this collapse is the repeatability of measurements: if the same observable \hat{A} is measured again immediately on the collapsed state |i\rangle, the outcome \lambda_i is obtained with certainty (probability 1), as |i\rangle is an eigenstate. This property underscores the definitive nature of the measurement process and distinguishes it from the unitary evolution governing the system between measurements.

An illustrative example is the measurement of the z-component of spin, S_z, for a spin-1/2 particle, where the eigenvalues are \pm \hbar/2 with eigenstates |+\rangle and |-\rangle. 
For a general state |\psi\rangle = \cos(\theta/2) |+\rangle + \sin(\theta/2) e^{i\phi} |-\rangle, represented on the Bloch sphere by a point at polar angle \theta from the +z axis, the probability of measuring +\hbar/2 is \cos^2(\theta/2), and -\hbar/2 is \sin^2(\theta/2). Post-measurement, the state collapses to either |+\rangle or |-\rangle, ensuring a subsequent S_z measurement yields the same result deterministically.
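The Born-rule probabilities and collapse for this S_z example can be simulated directly. In this sketch (the angles \theta, \phi and the random seed are arbitrary choices for illustration), a measurement picks an outcome with probability \cos^2(\theta/2) or \sin^2(\theta/2), collapses the state, and an immediate repeat measurement returns the same outcome with certainty:

```python
import numpy as np

rng = np.random.default_rng(0)

theta, phi = 1.2, 0.5   # arbitrary Bloch-sphere angles
psi = np.array([np.cos(theta / 2), np.sin(theta / 2) * np.exp(1j * phi)])

# Eigenstates of S_z: |+> = (1,0), |-> = (0,1)
basis = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
probs = [abs(b.conj() @ psi) ** 2 for b in basis]   # Born rule

def measure_sz(state):
    """Projective S_z measurement: sample an outcome, collapse the state."""
    p_up = abs(state[0]) ** 2
    outcome = 0 if rng.random() < p_up else 1
    return outcome, basis[outcome]                  # collapsed eigenstate

outcome, collapsed = measure_sz(psi)
repeat, _ = measure_sz(collapsed)   # repeatability: certain to match `outcome`
```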
Expectation Values and Uncertainty
In quantum mechanics, the expectation value of an observable represented by a Hermitian operator \hat{A} provides the average result of repeated measurements on an ensemble of systems prepared in the state |\psi\rangle. If |\psi\rangle = \sum_i c_i |\lambda_i\rangle in the eigenbasis of \hat{A}, with eigenvalues \lambda_i and probabilities |c_i|^2, then the expectation value is given by

\langle \hat{A} \rangle = \sum_i \lambda_i |c_i|^2.

This quantity, introduced in the foundational formalism of quantum mechanics, connects the probabilistic nature of measurements to a mean value analogous to classical statistics.

The time dependence of expectation values follows from the dynamics of the wave function, as encapsulated in the Ehrenfest theorem. For a general observable \hat{A}, possibly time-dependent, the theorem states

\frac{d\langle \hat{A} \rangle}{dt} = \left\langle \frac{\partial \hat{A}}{\partial t} \right\rangle + \frac{i}{\hbar} \langle [\hat{H}, \hat{A}] \rangle,[32]
where \hat{H} is the Hamiltonian and [\cdot, \cdot] denotes the commutator.[32] This relation demonstrates how quantum expectation values approximate classical trajectories for sharply peaked states, bridging the two theories.[32]

To characterize the fluctuations around this average, the variance of an observable \hat{A} is defined as

\Delta A^2 = \langle \hat{A}^2 \rangle - \langle \hat{A} \rangle^2,

where \Delta A measures the standard deviation or intrinsic uncertainty in measurement outcomes. For pairs of incompatible observables \hat{A} and \hat{B} that do not commute, these uncertainties obey the Robertson uncertainty relation:

\Delta A \Delta B \geq \frac{1}{2} \left| \langle [\hat{A}, \hat{B}] \rangle \right|.[33][34]

Originally derived by Howard P. Robertson in 1929 as a general inequality from the Cauchy-Schwarz relation applied to states, it was refined by Erwin Schrödinger in 1930 to include covariance terms (the Robertson–Schrödinger relation), emphasizing the role of non-commutativity in limiting simultaneous precision.[33][34]

A prominent illustration occurs for position \hat{x} and momentum \hat{p}, where [\hat{x}, \hat{p}] = i\hbar yields \Delta x \Delta p \geq \hbar/2. Equality holds for Gaussian wave packets, which minimize the product of uncertainties and represent coherent states with balanced spreads in conjugate variables.[34] These packets, evolving under free-particle dynamics, exhibit spreading over time while preserving the minimum-uncertainty property at t=0.[34]
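The minimum-uncertainty property of Gaussian packets can be checked on a grid. This sketch (with an assumed width \sigma and units where \hbar = 1) computes \Delta x from the probability density and \Delta p from \langle \hat{p}^2 \rangle = \hbar^2 \int |\psi'(x)|^2 \, dx, valid for a real wave function with \langle \hat{p} \rangle = 0, recovering \Delta x \, \Delta p = \hbar/2 to numerical accuracy:

```python
import numpy as np

hbar = 1.0
sigma = 0.8                      # assumed width of the Gaussian packet
x = np.linspace(-20, 20, 4096)
dx = x[1] - x[0]

psi = np.exp(-x**2 / (4 * sigma**2))
psi /= np.sqrt(np.sum(psi**2) * dx)         # normalize on the grid

# Position statistics from the probability density |psi|^2
prob = psi**2
mean_x = np.sum(x * prob) * dx
var_x = np.sum((x - mean_x) ** 2 * prob) * dx    # Var(x) = sigma^2

# Momentum spread: for real psi, <p> = 0 and <p^2> = hbar^2 * int |psi'|^2 dx
dpsi = np.gradient(psi, dx)
var_p = hbar**2 * np.sum(dpsi**2) * dx           # Var(p) = hbar^2 / (4 sigma^2)

product = np.sqrt(var_x * var_p)                 # should equal hbar / 2
```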
Extensions and Applications
In Quantum Field Theory
In quantum field theory (QFT), observables are extended to relativistic settings through field operators defined over spacetime. For a real scalar field, the primary observable is the field operator \phi(x), which acts as an operator-valued distribution on the Hilbert space of the theory, smeared with test functions to ensure well-definedness. The conjugate momentum operator is \pi(x) = \frac{\partial \mathcal{L}}{\partial (\partial_0 \phi)}, where \mathcal{L} is the Lagrangian density, typically \mathcal{L} = \frac{1}{2} \partial^\mu \phi \partial_\mu \phi - \frac{1}{2} m^2 \phi^2 for a free massive scalar field. These operators generalize the position and momentum operators from non-relativistic quantum mechanics to an infinite-dimensional field space.[35]

The algebraic structure of these observables is governed by canonical commutation relations imposed during quantization. At equal times, [\phi(\mathbf{x}, t), \pi(\mathbf{y}, t)] = i \hbar \delta^3(\mathbf{x} - \mathbf{y}), while [\phi(\mathbf{x}, t), \phi(\mathbf{y}, t)] = [\pi(\mathbf{x}, t), \pi(\mathbf{y}, t)] = 0. These relations extend the familiar [x, p] = i \hbar from single-particle quantum mechanics to the field theory context, ensuring the correct Poisson bracket promotion to quantum operators across the spatial degrees of freedom. In the full relativistic framework, the commutation relations are extended to all spacetime points, with fields at spacelike separations satisfying [\phi(x), \phi(y)] = 0 when (x - y)^2 < 0.[35]

Vacuum expectation values of these field operators provide essential correlation functions that encode the theory's dynamics. The vacuum state |0\rangle is annihilated by all destruction operators, and expectation values like \langle 0 | \hat{A} | 0 \rangle for a field observable \hat{A} yield vacuum condensates or propagators. 
A key example is the time-ordered two-point correlation function \langle 0 | T\, \phi(x) \phi(y) | 0 \rangle = \int \frac{d^4 p}{(2\pi)^4} \frac{i}{p^2 - m^2 + i\epsilon} e^{-i p \cdot (x - y)} in Minkowski space, the Feynman propagator, which is fundamental for computing scattering amplitudes and Green's functions in perturbation theory.[35]

A central challenge in defining QFT observables is ensuring locality and causality, which require that measurements in spacelike-separated regions do not influence each other instantaneously. This is enforced by the microcausality condition: observables associated with spacelike-separated regions must commute, [O(x), O(y)] = 0 for (x - y)^2 < 0, preventing superluminal signaling and aligning the theory with special relativity. Violations of this principle would undermine the causal structure of spacetime, making it a foundational axiom in axiomatic formulations of QFT.
In Quantum Information Theory
In quantum information theory, observables play a central role in characterizing the properties of qubits, which are the basic units of quantum information encoded in two-dimensional Hilbert spaces. The Pauli operators—\sigma_x, \sigma_y, and \sigma_z—serve as the canonical set of observables for a single qubit, each being Hermitian with eigenvalues \pm 1. These operators correspond to measurements along the x, y, and z directions in the Bloch sphere representation of the qubit state. For instance, measuring \sigma_z projects the qubit onto the computational basis states |0\rangle or |1\rangle, yielding outcomes +1 or -1 with probabilities determined by the state's density matrix \rho, specifically p(+1) = \langle 0 | \rho | 0 \rangle and p(-1) = \langle 1 | \rho | 1 \rangle. Similar projections occur for the other Pauli bases, enabling the extraction of superposition and phase information essential for quantum computation.

For entangled qubits, observables reveal non-local correlations that violate classical bounds, as exemplified by Bell observables in the Clauser-Horne-Shimony-Holt (CHSH) inequality. The inequality involves measurements of incompatible spin observables on two entangled particles, for example \hat{A} = \sigma_z and \hat{A}' = \sigma_x on one qubit together with the rotated operators \hat{B} = (\sigma_z + \sigma_x)/\sqrt{2} and \hat{B}' = (\sigma_z - \sigma_x)/\sqrt{2} on the other. The corresponding Bell operator \mathcal{B} = \hat{A} \otimes \hat{B} + \hat{A} \otimes \hat{B}' + \hat{A}' \otimes \hat{B} - \hat{A}' \otimes \hat{B}' satisfies | \langle \mathcal{B} \rangle | \leq 2 under local hidden variable theories. Quantum mechanics predicts a maximum violation of 2\sqrt{2} \approx 2.828 (the Tsirelson bound) for maximally entangled states, demonstrating entanglement through these incompatible observables. 
Experimental confirmation came in 1982, when Alain Aspect and collaborators used time-varying analyzers to measure photon polarizations, achieving a violation exceeding the classical limit by 5 standard deviations and thus ruling out local realism.

Information-theoretic measures quantify the uncertainty and information content associated with observables in quantum systems. The von Neumann entropy S(\rho) = -\operatorname{Tr}(\rho \log \rho) of the density operator \rho measures the intrinsic uncertainty of the quantum state, providing an upper bound on the classical information accessible via measurements of observables. For a general observable \hat{A} implemented via a positive operator-valued measure (POVM), the Shannon entropy of the measurement outcomes quantifies the extracted classical information, subject to bounds such as the Holevo bound. POVMs allow generalized measurements beyond projective ones to optimize information extraction while preserving quantum coherence in applications like quantum cryptography.[36]

A key application of observables in quantum information is quantum tomography, which reconstructs the full density matrix \rho from statistics of measurements on an informationally complete set of observables, such as the Pauli basis for qubits. By estimating expectation values \langle \hat{O}_i \rangle = \operatorname{Tr}(\rho \hat{O}_i) for multiple observables \hat{O}_i via repeated preparations and measurements, linear inversion or maximum-likelihood methods yield \rho, enabling verification of quantum states and computation of arbitrary observables. This technique is vital for benchmarking quantum devices, with efficient protocols reducing the number of required measurements to scale favorably for low-dimensional systems.
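For a single qubit the tomographic inversion is especially simple, since \rho = \tfrac{1}{2}\bigl(I + \sum_i \langle \sigma_i \rangle \sigma_i\bigr). The following sketch (the state is an arbitrary illustrative choice, and expectation values are computed exactly rather than estimated from measurement statistics) reconstructs a density matrix from its three Pauli expectation values:

```python
import numpy as np

# Pauli basis
I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0 + 0j, -1.0])

# An arbitrary pure qubit state |psi> and its density matrix rho = |psi><psi|
psi = np.array([np.cos(0.6), np.sin(0.6) * np.exp(0.3j)])
rho = np.outer(psi, psi.conj())

# Tomography: "measure" <sigma_i> = Tr(rho sigma_i), then invert linearly
expvals = [np.real(np.trace(rho @ s)) for s in (sx, sy, sz)]
rho_reconstructed = 0.5 * (I2 + sum(e * s for e, s in zip(expvals, (sx, sy, sz))))
```

In a real experiment the expectation values would be estimated from repeated measurements in each Pauli basis, and maximum-likelihood methods would enforce positivity of the reconstructed \rho.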