
State vector

A state vector is a mathematical representation of the state of a physical system, used in both quantum mechanics and classical dynamical systems. In quantum mechanics, it is often denoted as |\psi\rangle in Dirac notation and serves as an element of a complex Hilbert space that encodes all probabilistic information about the system's measurable properties. This vector fully specifies the system's properties, including the probabilities of outcomes for any measurement, as per the Born rule, where the probability of obtaining a particular eigenvalue is given by the squared modulus of the projection of the state onto the corresponding eigenstate. State vectors are typically normalized such that their inner product with themselves equals 1, ensuring they represent physically realizable pure states, and they can be superposed to describe more complex quantum behaviors like entanglement. In classical dynamical systems, the state vector captures the minimal set of variables, such as positions and velocities, needed to describe the system's evolution in phase space. The Hilbert space framework underpinning quantum state vectors provides a rigorous structure for quantum mechanics, where the space is separable and complete, allowing for the representation of states as infinite-dimensional vectors when necessary for continuous systems. In practice, state vectors evolve deterministically according to the time-dependent Schrödinger equation, i\hbar \frac{\partial}{\partial t} |\psi\rangle = \hat{H} |\psi\rangle, where \hat{H} is the Hamiltonian operator describing the system's total energy. Upon measurement, the state vector collapses to one of the eigenstates of the measured observable, a process central to the probabilistic nature of quantum mechanics. State vectors are foundational to various applications, from quantum computing to quantum information and quantum field theory, where they enable descriptions like entanglement in multi-particle systems as vectors in tensor product spaces. Different representations, such as the position-space wavefunction \psi(\mathbf{r}) = \langle \mathbf{r} | \psi \rangle, offer practical ways to compute expectation values and transition amplitudes, bridging the abstract formalism with experimental predictions. The quantum mechanical formalism of state vectors, introduced in the early 20th century by pioneers like Dirac and von Neumann, emphasizes features such as superposition and non-commutativity of observables that distinguish it from classical descriptions.

Mathematical background

Vector spaces

A vector space over a field F (such as the real numbers \mathbb{R} or complex numbers \mathbb{C}) is a nonempty set V of objects called vectors, equipped with two operations: vector addition +: V \times V \to V and scalar multiplication \cdot: F \times V \to V, satisfying the following axioms for all vectors \mathbf{u}, \mathbf{v}, \mathbf{w} \in V and scalars a, b \in F:
  • Associativity of addition: (\mathbf{u} + \mathbf{v}) + \mathbf{w} = \mathbf{u} + (\mathbf{v} + \mathbf{w}).
  • Commutativity of addition: \mathbf{u} + \mathbf{v} = \mathbf{v} + \mathbf{u}.
  • Existence of zero vector: There exists \mathbf{0} \in V such that \mathbf{u} + \mathbf{0} = \mathbf{u}.
  • Existence of additive inverses: For each \mathbf{u}, there exists -\mathbf{u} \in V such that \mathbf{u} + (-\mathbf{u}) = \mathbf{0}.
  • Distributivity of scalar multiplication over vector addition: a \cdot (\mathbf{u} + \mathbf{v}) = a \cdot \mathbf{u} + a \cdot \mathbf{v}.
  • Distributivity of scalar addition over scalar multiplication: (a + b) \cdot \mathbf{u} = a \cdot \mathbf{u} + b \cdot \mathbf{u}.
  • Compatibility of scalar multiplication: a \cdot (b \cdot \mathbf{u}) = (a b) \cdot \mathbf{u}.
  • Identity element of scalars: 1 \cdot \mathbf{u} = \mathbf{u}, where 1 is the multiplicative identity in F.
These axioms ensure closure under the operations and provide the necessary structure for linear combinations. Common examples include the Euclidean space \mathbb{R}^n, where vectors are n-tuples of real numbers, addition is componentwise, and scalar multiplication scales each component; this forms a vector space of dimension n. Another example is the space of continuous functions C[0,1] on the interval [0,1], where addition and scalar multiplication are defined pointwise: (f + g)(x) = f(x) + g(x) and (a \cdot f)(x) = a f(x) for x \in [0,1]. A subset \{\mathbf{v}_1, \dots, \mathbf{v}_k\} \subseteq V is linearly independent if the only solution to a_1 \mathbf{v}_1 + \dots + a_k \mathbf{v}_k = \mathbf{0} is a_1 = \dots = a_k = 0; otherwise, it is linearly dependent. The set spans V if every vector in V can be expressed as a linear combination \sum a_i \mathbf{v}_i. A basis for V is a linearly independent set that spans V; all bases for a given vector space have the same cardinality, called the dimension of V, denoted \dim V. Given a basis \{\mathbf{e}_1, \dots, \mathbf{e}_n\} for a vector space of dimension n, any vector \mathbf{v} \in V can be uniquely represented as \mathbf{v} = x_1 \mathbf{e}_1 + \dots + x_n \mathbf{e}_n, where the coefficients x_1, \dots, x_n form the coordinate representation of \mathbf{v} with respect to this basis, often written as the column vector \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix}. The foundational ideas of vector spaces emerged in the 19th century through the development of linear algebra, notably in the works of William Rowan Hamilton on quaternions and Hermann Grassmann on extension theory.
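As a concrete numerical illustration of the coordinate representation, the following minimal Python/NumPy sketch solves for the coefficients of a vector with respect to an arbitrarily chosen basis of \mathbb{R}^3 (the basis and the vector v are illustrative values, not drawn from any source):

```python
import numpy as np

# Columns of E form a basis {e_1, e_2, e_3} of R^3 (arbitrary choice).
E = np.column_stack([[1, 0, 1],
                     [1, 1, 0],
                     [0, 1, 1]]).astype(float)

v = np.array([2.0, 3.0, 5.0])

# Coordinates x satisfy E @ x = v, i.e. v = x_1 e_1 + x_2 e_2 + x_3 e_3.
x = np.linalg.solve(E, v)

# Linear independence of the columns (nonzero determinant) guarantees
# that the coordinate representation is unique.
assert abs(np.linalg.det(E)) > 1e-12
assert np.allclose(E @ x, v)
print("coordinates:", x)
```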

Hilbert spaces

An inner product space, also referred to as a pre-Hilbert space, is a complex vector space equipped with a sesquilinear form denoted as the inner product \langle \phi | \psi \rangle, which satisfies three key axioms: positivity, where \langle \phi | \phi \rangle \geq 0 and equals zero if and only if \phi = 0; linearity in the second argument, meaning \langle \phi | a\psi + b\chi \rangle = a \langle \phi | \psi \rangle + b \langle \phi | \chi \rangle for scalars a, b; and conjugate symmetry, \langle \phi | \psi \rangle = \overline{\langle \psi | \phi \rangle}. These properties induce a norm \|\phi\| = \sqrt{\langle \phi | \phi \rangle}, enabling the definition of distances and angles within the space, extending the structure of general vector spaces by adding a metric compatible with the vector operations. A Hilbert space is defined as a complete inner product space, meaning every Cauchy sequence with respect to the norm converges to an element within the space; this completeness turns it into a Banach space whose norm arises from the inner product. In such spaces, orthonormal systems play a central role: an orthonormal system \{e_n\} satisfies \langle e_m | e_n \rangle = \delta_{mn}, and for a complete orthonormal system (also called a Hilbert basis), Parseval's identity holds, stating that for any \psi in the space, \|\psi\|^2 = \sum_n |\langle e_n | \psi \rangle|^2. This identity equates the squared norm of a vector to the sum of the squared magnitudes of its coefficients in the basis expansion. Prominent examples of Hilbert spaces include the sequence space \ell^2, consisting of square-summable complex sequences \{x_n\} with inner product \langle x | y \rangle = \sum_n x_n^* y_n, which is complete under the induced norm. Another key example is the function space L^2(\mathbb{R}), the set of square-integrable functions f on the real line, equipped with \langle f | g \rangle = \int_{-\infty}^{\infty} f^*(x) g(x) \, dx, providing a foundational setting for representing continuous systems such as wave functions in mathematical analysis. The inner product in a Hilbert space ensures separation of distinct states: for two vectors \phi \neq \psi, positive definiteness implies \|\phi - \psi\|^2 = \langle \phi - \psi | \phi - \psi \rangle > 0, distinguishing them via their inner product differences.
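Parseval's identity can be checked numerically in a finite-dimensional space. The sketch below (illustrative, with a randomly generated orthonormal basis of \mathbb{C}^n obtained via QR factorization) verifies that the squared norm equals the sum of squared coefficient magnitudes:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8

# Columns of Q form an orthonormal basis of C^n (QR of a random matrix).
Q, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))

psi = rng.normal(size=n) + 1j * rng.normal(size=n)

# Expansion coefficients <e_k | psi>, conjugating the basis vectors
# per the physics convention of linearity in the second argument.
coeffs = Q.conj().T @ psi

# Parseval: ||psi||^2 = sum_k |<e_k|psi>|^2.
assert np.isclose(np.vdot(psi, psi).real, np.sum(np.abs(coeffs) ** 2))
print("||psi||^2 =", np.vdot(psi, psi).real)
```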

Applications in quantum mechanics

Definition of quantum state vector

In quantum mechanics, the state of a physical system in a pure state is represented by a state vector |ψ⟩, which belongs to a separable Hilbert space H equipped with an inner product. This vector encodes all observable information about the system, such as probabilities of measurement outcomes for physical observables represented as self-adjoint operators on H. Pure states correspond directly to these vectors, distinguishing them from mixed states, which require density operators to describe statistical ensembles of pure states. The Dirac notation provides a compact and basis-independent way to express these state vectors. Here, |ψ⟩ denotes the ket vector in H, while its dual ⟨ψ| is the bra vector in the dual space, and the inner product between two state vectors is given by ⟨ψ|φ⟩, a complex number satisfying ⟨ψ|ψ⟩ ≥ 0 with equality only for the zero vector. This notation facilitates operations like projections and transformations without explicit coordinate representations. State vectors are defined up to a global phase factor, meaning that |ψ⟩ and e^{iθ} |ψ⟩, where θ is a real number, describe the identical physical state, as phase differences do not affect measurement probabilities. The mathematical framework of separable Hilbert spaces for quantum states was formalized by John von Neumann in his 1932 treatise Mathematische Grundlagen der Quantenmechanik. Paul Dirac introduced the bra-ket notation and the abstract state-vector representation in 1939, building on the earlier wave-mechanical and matrix formulations.
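A minimal numerical sketch of these conventions, representing kets as column vectors in \mathbb{C}^2 and bras as conjugate transposes (the particular states and phase angle are arbitrary illustrative choices):

```python
import numpy as np

# Ket |psi> as a column vector in C^2; the bra <psi| is its conjugate
# transpose, which np.vdot applies implicitly to its first argument.
psi = np.array([1.0, 1.0j]) / np.sqrt(2)
phi = np.array([1.0, 0.0])

inner = np.vdot(phi, psi)     # <phi|psi>
prob = abs(inner) ** 2        # measurement probability |<phi|psi>|^2

# A global phase e^{i theta} leaves every such probability unchanged,
# so |psi> and e^{i theta}|psi> describe the same physical state.
theta = 0.7
psi_phase = np.exp(1j * theta) * psi
assert np.isclose(prob, abs(np.vdot(phi, psi_phase)) ** 2)
print("P =", prob)
```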

Superposition and normalization

In quantum mechanics, the superposition principle asserts that a state vector |\psi\rangle can be expressed as a linear combination of basis states |\phi_i\rangle, written as |\psi\rangle = \sum_i c_i |\phi_i\rangle, where the coefficients c_i are complex numbers satisfying the normalization condition \sum_i |c_i|^2 = 1. This principle, foundational to the linear structure of quantum mechanics, allows quantum systems to inhabit multiple states simultaneously until measurement, distinguishing them from classical systems. The normalization condition ensures the state vector has unit norm, defined as \langle \psi | \psi \rangle = 1, where \langle \psi | is the bra vector corresponding to |\psi\rangle, and the norm is \|\psi\| = \sqrt{\langle \psi | \psi \rangle}. This requirement guarantees that the total probability of all possible measurement outcomes sums to one, providing physical interpretability to the state vector. For eigenstates |\phi_i\rangle and |\phi_j\rangle corresponding to distinct eigenvalues (i \neq j) of a Hermitian operator, such as the Hamiltonian, orthogonality holds: \langle \phi_i | \phi_j \rangle = 0. This property arises from the spectral theorem for Hermitian operators and facilitates the expansion of any state in an orthonormal basis of eigenstates. The probability interpretation of the state vector, known as the Born rule, states that if a system is in state |\psi\rangle, the probability of measuring it to be in eigenstate |\phi_i\rangle is |\langle \phi_i | \psi \rangle|^2. This rule, postulated by Max Born in 1926, links the mathematical formalism to observable probabilities without deriving from deeper principles in the standard formulation. A representative example is the state of a two-level quantum bit, or qubit, given by |\psi\rangle = \alpha |0\rangle + \beta |1\rangle, where |0\rangle and |1\rangle form an orthonormal basis with \langle 0 | 1 \rangle = 0, and the complex coefficients satisfy |\alpha|^2 + |\beta|^2 = 1 for normalization. Here, |\alpha|^2 is the probability of measuring the qubit in |0\rangle, illustrating superposition in quantum information processing.
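The qubit example translates directly into a few lines of NumPy; the sketch below (with arbitrarily chosen coefficients \alpha = 0.6 and \beta = 0.8i) checks normalization and evaluates the Born-rule probabilities in the computational basis:

```python
import numpy as np

# Qubit |psi> = alpha|0> + beta|1> in the computational basis.
alpha, beta = 0.6, 0.8j
psi = np.array([alpha, beta])

# Normalization: |alpha|^2 + |beta|^2 must equal 1.
assert np.isclose(np.vdot(psi, psi).real, 1.0)

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# Born rule: P(i) = |<phi_i|psi>|^2 for each basis outcome.
p0 = abs(np.vdot(ket0, psi)) ** 2   # = |alpha|^2 = 0.36
p1 = abs(np.vdot(ket1, psi)) ** 2   # = |beta|^2  = 0.64
print(p0, p1, p0 + p1)              # probabilities sum to 1
```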

Time evolution

The time evolution of a quantum state vector |\psi(t)\rangle in a closed system is described by the time-dependent Schrödinger equation, i \hbar \frac{\partial}{\partial t} |\psi(t)\rangle = \hat{H} |\psi(t)\rangle, where \hat{H} is the Hamiltonian operator representing the total energy of the system, \hbar is the reduced Planck constant, and i is the imaginary unit. This equation, introduced by Erwin Schrödinger in 1926, governs the deterministic dynamics of the wave function in non-relativistic quantum mechanics. For time-independent Hamiltonians, the formal solution is given by unitary evolution, |\psi(t)\rangle = \hat{U}(t) |\psi(0)\rangle, \quad \hat{U}(t) = e^{-i \hat{H} t / \hbar}, where \hat{U}(t) is the time-evolution operator, which is unitary (\hat{U}^\dagger \hat{U} = \hat{1}) and preserves the norm of the state vector, ensuring that probabilities remain normalized over time. This unitary nature stems from the Hermiticity of \hat{H}, guaranteeing reversible dynamics without information loss in isolated systems. Two equivalent formulations describe this evolution: the Schrödinger picture, where state vectors evolve in time while operators remain fixed, and the Heisenberg picture, where state vectors are time-independent and operators evolve according to the Heisenberg equation d\hat{A}/dt = (i/\hbar) [\hat{H}, \hat{A}] + \partial \hat{A}/\partial t. The pictures are related by a unitary transformation and yield identical observable predictions, with the choice depending on convenience for calculations involving time-dependent perturbations or symmetries. A representative example is the quantum harmonic oscillator, with \hat{H} = \hbar \omega (\hat{a}^\dagger \hat{a} + 1/2), where \omega is the angular frequency and \hat{a}^\dagger and \hat{a} are the creation and annihilation operators. The energy eigenstates |n\rangle evolve as |n(t)\rangle = e^{-i E_n t / \hbar} |n\rangle, with E_n = \hbar \omega (n + 1/2), acquiring only a global phase that does not affect expectation values. Superpositions of these states, such as coherent states, exhibit classical-like oscillatory behavior in position and momentum expectation values while maintaining quantum coherence. For time-dependent Hamiltonians that vary slowly compared to the inverse energy gaps between eigenstates, the adiabatic theorem states that if the system begins in an instantaneous eigenstate |n(0)\rangle of \hat{H}(0), it will remain in the corresponding instantaneous eigenstate |n(t)\rangle of \hat{H}(t) up to a dynamical phase and a geometric phase, provided the change is sufficiently gradual. This result, proven by Max Born and Vladimir Fock in 1928, underpins applications like adiabatic quantum computing and the transport of quantum states in slowly varying potentials.
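Unitary evolution under a time-independent Hamiltonian can be computed directly with the matrix exponential. The following sketch (a two-level system with an arbitrarily chosen coupling \Delta, in units where \hbar = 1) verifies norm preservation and shows the resulting oscillation of the occupation probability:

```python
import numpy as np
from scipy.linalg import expm

hbar = 1.0  # natural units (illustrative choice)

# Two-level Hermitian Hamiltonian: H = Delta * sigma_x.
Delta = 1.0
H = Delta * np.array([[0.0, 1.0],
                      [1.0, 0.0]])

psi0 = np.array([1.0, 0.0], dtype=complex)  # start in |0>

for t in np.linspace(0.0, np.pi, 5):
    U = expm(-1j * H * t / hbar)            # U(t) = exp(-i H t / hbar)
    psi_t = U @ psi0
    # Unitarity preserves the norm, so total probability stays 1.
    assert np.isclose(np.vdot(psi_t, psi_t).real, 1.0)
    print(f"t={t:.2f}  P(|1>)={abs(psi_t[1])**2:.3f}")  # Rabi-type oscillation
```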

Applications in classical dynamical systems

State-space representation

In classical dynamical systems, the state vector \mathbf{x}(t) \in \mathbb{R}^n (or occasionally \mathbb{C}^n for systems with complex-valued dynamics) encapsulates the minimal set of variables required to uniquely determine the future evolution of the system given any inputs and initial conditions. These variables represent the complete internal configuration of the system at time t, allowing prediction of all subsequent behavior without reference to prior history beyond the initial state. The collection of all possible state vectors forms the state space, a manifold that geometrically represents the set of all achievable states of the system. This space provides a coordinate framework for visualizing trajectories, where each point corresponds to a unique state, and the system's solutions trace paths along the manifold. The evolution of the state vector is governed by the initial value problem \dot{\mathbf{x}}(t) = \mathbf{f}(\mathbf{x}(t), \mathbf{u}(t), t), \quad \mathbf{x}(0) = \mathbf{x}_0, where \mathbf{u}(t) denotes the input vector influencing the system, \mathbf{f} is a vector field describing the dynamics, and \mathbf{x}_0 specifies the starting state. Solutions to this problem yield the state trajectory \mathbf{x}(t) for t \geq 0, assuming local existence and uniqueness under suitable conditions on \mathbf{f}. The conceptual foundations of state-space representation trace back to Henri Poincaré's work in the 1890s, where he introduced phase-space analysis to study celestial mechanics, particularly the three-body problem, revealing non-integrable dynamics. This approach was later formalized in modern control theory by Rudolf Kalman during the 1960s, who developed state-space methods for linear systems, enabling systematic analysis and design. A representative example is a simple mechanical oscillator, such as a mass-spring system without damping, where the state vector comprises the position q(t) and velocity \dot{q}(t) of the mass. These two variables fully capture the system's configuration and motion, allowing the future position and speed to be predicted from the initial conditions and any applied forces.
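The mass-spring example can be integrated numerically as an initial value problem in first-order form. The sketch below (with arbitrarily chosen m and k, no damping and no input) uses SciPy's solve_ivp and compares against the known closed-form solution:

```python
import numpy as np
from scipy.integrate import solve_ivp

m, k = 1.0, 4.0  # mass and spring constant (illustrative values)

def f(t, x):
    """Vector field for state x = [q, q_dot]; returns x_dot."""
    q, q_dot = x
    return [q_dot, -(k / m) * q]

# Initial state x_0 = [1, 0]: unit displacement, zero velocity.
sol = solve_ivp(f, (0.0, 10.0), [1.0, 0.0], dense_output=True)

# The trajectory matches q(t) = cos(omega t) with omega = sqrt(k/m).
omega = np.sqrt(k / m)
print(sol.sol(1.0)[0], np.cos(omega * 1.0))
```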

Linear systems

In linear time-invariant (LTI) dynamical systems, the state vector \mathbf{x}(t) \in \mathbb{R}^n captures the system's internal dynamics, evolving according to the state-space equations \dot{\mathbf{x}}(t) = A \mathbf{x}(t) + B \mathbf{u}(t) and \mathbf{y}(t) = C \mathbf{x}(t) + D \mathbf{u}(t), where A \in \mathbb{R}^{n \times n} is the system matrix, B \in \mathbb{R}^{n \times m} is the input matrix, C \in \mathbb{R}^{p \times n} is the output matrix, D \in \mathbb{R}^{p \times m} is the feedthrough matrix, \mathbf{u}(t) \in \mathbb{R}^m is the input vector, and \mathbf{y}(t) \in \mathbb{R}^p is the output vector. This matrix formulation, introduced by Kalman, enables the analysis of multi-input multi-output systems through linear algebra, assuming constant coefficients and superposition of solutions. The explicit solution to the state equation for zero initial conditions is given by the convolution integral \mathbf{x}(t) = \int_0^t e^{A(t - \tau)} B \mathbf{u}(\tau) \, d\tau, while the full solution incorporating the initial state \mathbf{x}(0) is \mathbf{x}(t) = e^{A t} \mathbf{x}(0) + \int_0^t e^{A(t - \tau)} B \mathbf{u}(\tau) \, d\tau, where e^{A t} is the state transition matrix, computable via the matrix exponential or modal decomposition. This form highlights the system's memory through the initial state and the influence of past inputs weighted by the transition matrix. Key properties for control design are controllability and observability. A system is controllable if the controllability matrix \mathcal{C} = [B \, AB \, \cdots \, A^{n-1}B] has full row rank n, ensuring any initial state can be driven to the origin (or any target state) in finite time using admissible inputs. Similarly, the system is observable if the observability matrix \mathcal{O} = \begin{bmatrix} C \\ CA \\ \vdots \\ CA^{n-1} \end{bmatrix} has full column rank n, allowing reconstruction of the state from output measurements over a finite interval. These conditions, central to modern control theory, facilitate state-feedback and observer design. Stability of the unforced system \dot{\mathbf{x}} = A \mathbf{x} is determined by the eigenvalues \lambda_i of A: the origin is asymptotically stable if all \operatorname{Re}(\lambda_i) < 0, marginally stable if \operatorname{Re}(\lambda_i) \leq 0 with simple imaginary eigenvalues, and unstable otherwise. This eigenvalue criterion provides a direct link between system matrix properties and long-term behavior, independent of input details. A representative example is the mass-spring-damper system, modeled with states x_1 = x (position) and x_2 = \dot{x} (velocity), yielding \dot{\mathbf{x}} = \begin{bmatrix} 0 & 1 \\ -\frac{k}{m} & -\frac{c}{m} \end{bmatrix} \mathbf{x} + \begin{bmatrix} 0 \\ \frac{1}{m} \end{bmatrix} u, where m is the mass, k is the spring constant, c is the damping coefficient, and u is the applied force; the output might be the position y = x_1. Here, controllability holds because the matrix [B \, AB] is full rank, and stability requires c > 0 for positive damping, ensuring \operatorname{Re}(\lambda_i) < 0.
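Both the controllability test and the eigenvalue stability criterion reduce to a few matrix computations. The sketch below (mass-spring-damper parameters chosen arbitrarily for illustration) builds the controllability matrix, checks its rank, and inspects the eigenvalues of A:

```python
import numpy as np

m, k, c = 1.0, 2.0, 0.5  # illustrative mass, spring, damping values

A = np.array([[0.0, 1.0],
              [-k / m, -c / m]])
B = np.array([[0.0],
              [1.0 / m]])

# Controllability matrix [B, AB] for n = 2; full rank => controllable.
ctrb = np.hstack([B, A @ B])
print("rank of [B, AB]:", np.linalg.matrix_rank(ctrb))  # 2

# Asymptotic stability: all eigenvalues of A in the open left half-plane,
# which holds here because the damping c is positive.
eigs = np.linalg.eigvals(A)
print("eigenvalues:", eigs, "stable:", bool(np.all(eigs.real < 0)))
```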

Nonlinear systems

In nonlinear dynamical systems, the state vector \mathbf{x}(t) \in \mathbb{R}^n encapsulates the minimal set of variables required to describe the system's evolution, where the dynamics are governed by nonlinear differential equations of the form \dot{\mathbf{x}} = \mathbf{f}(\mathbf{x}, \mathbf{u}, t), with \mathbf{u}(t) denoting the input vector and \mathbf{f} a nonlinear function. This representation contrasts with linear systems by lacking the superposition principle, leading to phenomena such as bifurcations, limit cycles, and chaos. The state space X \subseteq \mathbb{R}^n is typically a manifold, and the system's trajectory \mathbf{x}(t) = \phi(t; \mathbf{x}_0, \mathbf{u}) is determined by an initial state \mathbf{x}_0 and the transition map \phi, ensuring causality in that future states depend only on the current state and subsequent inputs. For continuous-time systems, the state vector often includes position and velocity components to convert higher-order nonlinear ODEs into first-order form. A classic example is the pendulum equation \ddot{\theta} + r \dot{\theta} + \sin \theta = 0, where the state vector is \mathbf{x} = [\theta, \dot{\theta}]^T and \mathbf{f}(\mathbf{x}) = [\dot{\theta}, -r \dot{\theta} - \sin \theta]^T, exhibiting nearly harmonic motion for small angles but strongly nonlinear behavior for large swings. Another illustrative case is the Duffing oscillator, modeling nonlinear stiffness in mechanical systems: \ddot{x} + \delta \dot{x} - \omega^2 x + \epsilon x^3 = \beta \cos(\Omega t), with state vector \mathbf{x} = [x, \dot{x}]^T and dynamics \dot{\mathbf{x}} = [\dot{x}, -\delta \dot{x} + \omega^2 x - \epsilon x^3 + \beta \cos(\Omega t)]^T, capable of producing chaotic attractors depending on parameters such as \epsilon > 0. In discrete-time settings, the logistic map x_{k+1} = a x_k (1 - x_k) for x_k \in [0,1] and a \in (0,4] serves as a one-dimensional state-vector example, demonstrating period-doubling bifurcations leading to chaos as a approaches 4. Analysis of nonlinear systems often involves linearization around equilibrium points \mathbf{x}_e where \mathbf{f}(\mathbf{x}_e, \mathbf{u}_e) = 0, yielding \dot{\delta \mathbf{x}} = A \delta \mathbf{x} + B \delta \mathbf{u} with Jacobian A = \frac{\partial \mathbf{f}}{\partial \mathbf{x}} \big|_{\mathbf{x}_e, \mathbf{u}_e}, which approximates the local stability via the eigenvalues of A, as in the pendulum sketch below. For nonlinear systems under stochastic excitation \mathbf{w}(t), the state-space form \dot{\mathbf{x}} = \mathbf{f}(\mathbf{x}, \mathbf{w}, t) enables emulation techniques like State Space Kriging, which learns the mapping from state and input to derivatives, mitigating high-dimensional challenges in prediction with relative errors below 10^{-3} using sparse training data. In modal analysis, the state vector \mathbf{u}(x,t) decomposes into nonlinear modes via \mathbf{u}(x,t) = \sum_n \phi_n(x) a_n(t), revealing energy transfers and connections between modes in conservative systems. Learning such systems from data treats the state vector as hidden, evolving via \mathbf{x}_t = \mathbf{f}(\mathbf{x}_{t-1}, \mathbf{u}_{t-1}) + \mathbf{w}_t with output \mathbf{y}_t = \mathbf{g}(\mathbf{x}_t, \mathbf{u}_t) + \mathbf{v}_t, often parameterized by radial basis functions and inferred using expectation-maximization with Kalman smoothing for parameter estimation, typically in under 10 iterations. These representations underpin applications in control, prediction, and simulation, emphasizing the state vector's role in capturing complex, non-superposable behaviors.
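The linearization step can be made concrete for the pendulum example. The sketch below (with an arbitrarily chosen damping coefficient r) evaluates the Jacobian of \mathbf{f} at the two equilibria and reads off local stability from its eigenvalues:

```python
import numpy as np

r = 0.1  # damping coefficient (illustrative value)

def f(x):
    """Pendulum vector field for state x = [theta, theta_dot]."""
    theta, theta_dot = x
    return np.array([theta_dot, -r * theta_dot - np.sin(theta)])

def jacobian(x):
    """Analytic Jacobian df/dx of the pendulum dynamics."""
    theta, _ = x
    return np.array([[0.0, 1.0],
                     [-np.cos(theta), -r]])

# Downward equilibrium (theta = 0): eigenvalues with negative real
# parts, so the equilibrium is locally asymptotically stable.
print(np.linalg.eigvals(jacobian(np.array([0.0, 0.0]))))

# Inverted equilibrium (theta = pi): one positive eigenvalue => unstable.
print(np.linalg.eigvals(jacobian(np.array([np.pi, 0.0]))))
```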
