
Observability

Observability is a fundamental concept in control theory, introduced by Rudolf E. Kalman in 1960, that quantifies the extent to which the internal states of a dynamical system can be inferred or reconstructed solely from measurements of its external inputs and outputs over a finite time interval. A linear system is deemed fully observable if its observability matrix—constructed from the system's state-space model—has full rank, ensuring no initial state remains indistinguishable from others based on output data. Observability is dual to controllability, and both are essential for state estimation in applications like Kalman filtering and feedback control.

Fundamentals

Definition

Observability is a fundamental concept in control theory that quantifies the extent to which the internal states of a dynamical system can be reconstructed from its external outputs over a finite time interval. Introduced by Rudolf E. Kalman in his seminal work on linear systems in the late 1950s and early 1960s, observability addresses the problem of determining unmeasurable state variables from measurable outputs, enabling state estimation and control design. In essence, a system is observable if distinct initial states produce distinguishable output trajectories under known inputs, allowing unique inference of the current state. For linear time-invariant systems described by the state-space model \dot{x} = Ax + Bu and y = Cx + Du, where x \in \mathbb{R}^n is the state vector, u \in \mathbb{R}^m is the input, and y \in \mathbb{R}^p is the output, observability is formally defined as the property that permits determination of x(T) from measurements of y(t) and u(t) over any interval [0, T] with T > 0. This requires that the observability matrix \mathcal{O} = \begin{bmatrix} C \\ CA \\ \vdots \\ CA^{n-1} \end{bmatrix} has full column rank equal to n, ensuring there are no hidden dynamics that cannot be detected from the outputs. If the rank condition holds, the system's states are fully reconstructible; otherwise, an unobservable subspace exists, limiting state estimation to the observable part. The concept extends to discrete-time systems, where x[k+1] = Ax[k] + Bu[k] and y[k] = Cx[k] + Du[k], with observability defined analogously via the same matrix \mathcal{O} achieving full rank n, allowing initial state recovery from a finite sequence of outputs. Observability is dual to controllability, sharing similar algebraic tests, and is crucial for applications like Kalman filtering, where it ensures the filter can accurately estimate states despite noise. While the definition is precise for linear systems, it generalizes to nonlinear and time-varying cases through analogous conditions on output distinguishability.
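The discrete-time statement can be made concrete: with zero input, the first n output samples satisfy \mathcal{O} x[0] = [y[0]; \dots; y[n-1]], so a full-rank \mathcal{O} lets us solve for the initial state. A minimal NumPy sketch, using a hypothetical double-integrator system chosen for illustration:

```python
import numpy as np

# Hypothetical discrete-time example (zero input): double integrator, position output
A = np.array([[1., 1.], [0., 1.]])
C = np.array([[1., 0.]])
n = A.shape[0]

# Simulate n output samples from an (unknown to the estimator) initial state
x0 = np.array([3.0, -0.5])
x, samples = x0.copy(), []
for _ in range(n):
    samples.append(C @ x)   # record y[k]
    x = A @ x               # step the dynamics
Y = np.concatenate(samples)

# Stack the observability matrix and solve O x[0] = [y[0]; ...; y[n-1]]
O = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])
x0_hat = np.linalg.solve(O, Y)  # unique solution because rank(O) = n
```

With input present, the known input contribution would simply be subtracted from Y before solving; the rank condition is unchanged.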

Importance in Control Theory

Observability plays a pivotal role in control theory by enabling the reconstruction of a system's internal states from its measurable outputs over a finite time interval. This property ensures that the complete behavior of the system can be inferred without direct access to all state variables, which is often impractical or impossible in real-world applications. Introduced by Rudolf E. Kalman in his seminal work, observability addresses a fundamental challenge in system analysis: distinguishing between different initial states based solely on input-output data. Without observability, hidden dynamics could remain undetected, leading to incomplete system understanding or ineffective control strategies. In control system design, observability is essential for developing state estimators, such as the Luenberger observer or the Kalman filter, which approximate unmeasurable states for feedback purposes. For instance, an observable system allows the design of a stable observer that asymptotically reconstructs the state, facilitating pole placement and stabilization even when full state measurement is unavailable. This is particularly critical in applications like aerospace navigation and process control, where precise state knowledge is required to achieve desired performance without excessive sensors. Observability's duality with controllability further underscores its importance: together, they guarantee that a system can be both driven to desired states and monitored effectively, forming the basis for modern state-space control theory. Beyond estimation, observability supports fault detection by identifying deviations in system behavior that indicate anomalies, as unobservable modes could mask such issues. In linear time-invariant systems, it is assessed through the rank of the observability matrix, providing a rigorous algebraic test for practical implementation. This has influenced fields well beyond engineering, including economic modeling, emphasizing the need for observable structures in complex, interconnected systems to ensure reliability and performance.

Linear Time-Invariant Systems

Observability Matrix

In linear time-invariant (LTI) systems described by the state-space model \dot{x}(t) = A x(t) + B u(t), y(t) = C x(t) + D u(t), where x \in \mathbb{R}^n is the state vector, u \in \mathbb{R}^m is the input, y \in \mathbb{R}^p is the output, and A, B, C, D are constant matrices of appropriate dimensions (with D often zero for simplicity), the observability matrix provides an algebraic test for determining whether the initial state x(0) can be uniquely reconstructed from the output y(t) over a finite time interval. This matrix, introduced by Rudolf E. Kalman in his foundational work on control systems, aggregates the output matrix and its powers under the system dynamics to capture the mapping from states to outputs. The observability matrix \mathcal{O} is constructed by vertically stacking C and successive powers of A premultiplied by C: \mathcal{O} = \begin{bmatrix} C \\ CA \\ CA^2 \\ \vdots \\ CA^{n-1} \end{bmatrix} This results in a matrix of dimensions pn \times n. For a discrete-time LTI system x(k+1) = A x(k) + B u(k), y(k) = C x(k) + D u(k), the construction is identical. The rank of \mathcal{O} determines observability: the system is observable if and only if \operatorname{rank}(\mathcal{O}) = n, meaning \mathcal{O} has full column rank. This condition, known as the Kalman rank condition (with an equivalent eigenvalue formulation given by the Popov-Belevitch-Hautus (PBH) test), ensures that there are no nontrivial linear combinations of states that produce zero output for all inputs and time, i.e., the kernel of \mathcal{O} is trivial. If \operatorname{rank}(\mathcal{O}) < n, the unobservable subspace has dimension n - \operatorname{rank}(\mathcal{O}), consisting of states that cannot be distinguished from zero through measurements. To illustrate, consider a second-order continuous-time system with A = \begin{bmatrix} 0 & 1 \\ -2 & -3 \end{bmatrix}, C = \begin{bmatrix} 1 & 0 \end{bmatrix}.
The observability matrix is \mathcal{O} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, which has rank 2 and thus confirms observability, allowing unique state reconstruction from y(t) and its derivative. In contrast, for A = \begin{bmatrix} 1 & 0 \\ 0 & 2 \end{bmatrix}, C = \begin{bmatrix} 1 & 0 \end{bmatrix}, \mathcal{O} = \begin{bmatrix} 1 & 0 \\ 1 & 0 \end{bmatrix} has rank 1, indicating the second state is unobservable. This matrix-based test is computationally efficient for moderate n and underpins observer design, such as Luenberger observers, by enabling decomposition into observable and unobservable canonical forms. By duality, the observability matrix of (A, C) is the transpose of the controllability matrix of (A^T, C^T), linking the concepts symmetrically.
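Both worked examples can be verified numerically. A minimal sketch using NumPy, where `obsv` is a helper of our own (not a library routine):

```python
import numpy as np

def obsv(A, C):
    """Observability matrix [C; CA; ...; CA^{n-1}]."""
    n = A.shape[0]
    return np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])

# First example: rank 2, hence observable
A1 = np.array([[0., 1.], [-2., -3.]])
C1 = np.array([[1., 0.]])
r1 = np.linalg.matrix_rank(obsv(A1, C1))

# Second example: rank 1, the second state is unobservable
A2 = np.array([[1., 0.], [0., 2.]])
C2 = np.array([[1., 0.]])
r2 = np.linalg.matrix_rank(obsv(A2, C2))
```

For large or poorly conditioned systems, the rank of the stacked matrix becomes numerically delicate, which is one motivation for Gramian-based tests discussed later.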

Unobservable Subspace

In linear time-invariant (LTI) systems described by \dot{x} = Ax + Bu, y = Cx + Du, the unobservable subspace consists of all initial states x(0) that generate zero output y(t) = 0 for all t \geq 0 under zero input u(t) = 0, rendering them indistinguishable from the origin based on output measurements. This subspace, originally conceptualized in the state-space framework, forms a linear subspace of the state space \mathbb{R}^n. Mathematically, the unobservable subspace \mathcal{N}_{C,A} is the kernel of the observability matrix O(C,A), defined as O(C,A) = \begin{bmatrix} C \\ CA \\ \vdots \\ CA^{n-1} \end{bmatrix}, where \mathcal{N}_{C,A} = \ker O(C,A) = \{ x \in \mathbb{R}^n \mid O(C,A) x = 0 \}. Equivalently, it is the intersection of the kernels of the output map and its iterates: \mathcal{N}_{C,A} = \bigcap_{k=0}^{n-1} \ker (CA^k). The dimension of \mathcal{N}_{C,A} equals n - \rank O(C,A), and the system is observable if and only if \dim \mathcal{N}_{C,A} = 0, or \rank O(C,A) = n. The unobservable subspace possesses key structural properties: it is the largest A-invariant subspace contained in \ker C, meaning if x \in \mathcal{N}_{C,A}, then A x \in \mathcal{N}_{C,A}. This invariance ensures that trajectories starting in \mathcal{N}_{C,A} remain within it, producing no observable effect. In the Kalman observability decomposition, a change of basis adapted to \mathcal{N}_{C,A} brings the system into a block-triangular form that separates observable from unobservable dynamics; combining this with the controllable subspace \mathcal{R}_{A,B} yields the full four-block Kalman canonical decomposition. For modal analysis, \mathcal{N}_{C,A} includes eigenspaces corresponding to unobservable eigenvalues, for which C v_i = 0 with v_i an eigenvector of A.
An alternative characterization uses the Popov-Belevitch-Hautus (PBH) test: the system is unobservable precisely when \rank \begin{bmatrix} A - \lambda I \\ C \end{bmatrix} < n for some eigenvalue \lambda of A; the corresponding eigenvectors v with C v = 0 lie in \mathcal{N}_{C,A}, and the system is observable if this rank is full for all \lambda. The observability Gramian W_o(t) = \int_0^t e^{A^\top \tau} C^\top C e^{A \tau} d\tau provides a continuous-time equivalent, with \ker W_o(t) = \mathcal{N}_{C,A} for any t > 0, though the matrix kernel is the primary finite-dimensional tool.
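A basis for \ker O(C,A) can be extracted numerically from the singular value decomposition of the observability matrix. A sketch under the diagonal example from the previous subsection, with `unobservable_subspace` being our own illustrative helper:

```python
import numpy as np

def unobservable_subspace(A, C, tol=1e-10):
    """Orthonormal basis for ker O(C, A), via SVD of the observability matrix."""
    n = A.shape[0]
    O = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])
    _, s, Vt = np.linalg.svd(O)
    rank = int(np.sum(s > tol * max(1.0, s[0])))
    return Vt[rank:].T  # columns span N_{C,A}

# Diagonal example: the output sees only the first state
A = np.array([[1., 0.], [0., 2.]])
C = np.array([[1., 0.]])
N = unobservable_subspace(A, C)  # one column, spanning the e2 direction
```

Note that A maps the returned subspace into itself, consistent with \mathcal{N}_{C,A} being A-invariant.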

Observability Index

In linear time-invariant (LTI) systems, the observability index quantifies the structural complexity of state reconstruction from outputs, serving as the dual concept to the controllability index. For an observable pair (A, C), where A \in \mathbb{R}^{n \times n} is the system matrix and C \in \mathbb{R}^{p \times n} is the output matrix with p outputs, the observability indices \hat{d}_1, \hat{d}_2, \dots, \hat{d}_p are defined as the controllability indices of the dual pair (A^T, C^T). These indices are positive integers satisfying (after reordering) \hat{d}_1 \geq \hat{d}_2 \geq \dots \geq \hat{d}_p > 0 and \sum_{i=1}^p \hat{d}_i = n, reflecting the minimal numbers of successive output derivatives or shifts required to span the state space. The overall observability index \hat{d} is the maximum of these individual indices, \hat{d} = \max \{ \hat{d}_i : i = 1, 2, \dots, p \}, indicating the longest chain of output derivatives needed to observe the full state. To compute it, transform the dual pair (A^T, C^T) into controller canonical form via a similarity transformation, where the controllability indices appear as the lengths of the chains in the block structure. The rank of the partial observability matrix \mathcal{O}_k = \begin{bmatrix} C \\ CA \\ \vdots \\ CA^{k-1} \end{bmatrix} stabilizes at n for k \geq \hat{d}, confirming observability when \text{rank}(\mathcal{O}_n) = n. This is useful for designing reduced-order observers, as the minimal observer order relates to n - p but is influenced by \hat{d} in multi-output cases. For single-output systems (p=1), the observability index simplifies to \hat{d}_1 = n, aligning with the full rank requirement of the standard observability matrix. In multi-output scenarios, the indices partition the state space into chains of observable modes, enabling efficient numerical reconstruction algorithms that require only \hat{d} time steps or output derivatives for state recovery in discrete-time equivalents.
This structural insight, rooted in the Kalman canonical decomposition, ensures that unobservable subspaces are isolated, with the indices providing a measure of observability depth without relying on eigenvalue computations.
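The rank-stabilization characterization suggests a direct computation: grow the partial matrix \mathcal{O}_k block by block until the rank stops increasing. A sketch in NumPy, with a hypothetical two-output example (the helper is ours, not a library routine):

```python
import numpy as np

def observability_index(A, C, tol=1e-10):
    """Smallest k at which rank of the partial matrix O_k stops growing."""
    blocks = [C]
    r_prev = np.linalg.matrix_rank(C, tol=tol)
    for k in range(1, A.shape[0] + 1):
        blocks.append(blocks[-1] @ A)  # append the next block C A^k
        r = np.linalg.matrix_rank(np.vstack(blocks), tol=tol)
        if r == r_prev:
            return k, r                # rank stabilized: index is k
        r_prev = r
    return A.shape[0], r_prev

# Hypothetical two-output example with n = 3
A = np.array([[0., 1., 0.], [0., 0., 1.], [-1., -2., -3.]])
C = np.array([[1., 0., 0.], [0., 1., 0.]])
idx, r = observability_index(A, C)  # index 2, rank 3: observable in 2 steps
```

Here two blocks already reach rank n = 3, so only \hat{d} = 2 output samples (or derivatives) are needed for full state recovery, fewer than the single-output worst case n.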

Detectability

In control theory, detectability is a property of linear time-invariant (LTI) systems that weakens observability by requiring only that the unstable modes be observable from the outputs. For a system \dot{x} = Ax + Bu, y = Cx + Du, the pair (A, C) is detectable if all eigenvalues of A associated with the unobservable subspace have negative real parts, ensuring that unobservable states decay asymptotically to zero. This condition implies that an observer can asymptotically estimate the state despite partial unobservability, as the estimation error for stable unobservable modes converges exponentially. An equivalent definition states that (A, C) is detectable if there exists an output injection matrix L such that the closed-loop matrix A - LC is Hurwitz (all eigenvalues have negative real parts), allowing the observer error dynamics to be stabilized via output injection; arbitrary pole placement, by contrast, requires full observability. Observable systems are inherently detectable, since the full state is reconstructible, but detectability is a weaker condition that accommodates systems where stable modes need not be observed. Detectability can be tested using the Popov-Belevitch-Hautus (PBH) criterion, adapted from the observability test: the pair (A, C) is detectable if and only if \rank \begin{bmatrix} \lambda I - A \\ C \end{bmatrix} = n for every complex eigenvalue \lambda of A with \operatorname{Re}(\lambda) \geq 0, where n is the system dimension. This ensures no unstable eigenvector lies in the unobservable subspace, as \ker(\lambda I - A) \cap \ker(C) = \{0\} for such \lambda. If the system is observable, the rank condition holds for all \lambda \in \mathbb{C}. In the Kalman observability decomposition, detectability corresponds to the unobservable subsystem matrix A_{\bar{o}} being Hurwitz, where the state is transformed as \bar{x} = T^{-1} x into observable and unobservable coordinates: \dot{\bar{x}}_o = A_o \bar{x}_o + B_o u, \quad y = C_o \bar{x}_o + D u, \dot{\bar{x}}_{\bar{o}} = A_{\bar{o}} \bar{x}_{\bar{o}} + A_{21} \bar{x}_o + B_{\bar{o}} u.
Here, A_o governs the observable dynamics, and stability of A_{\bar{o}} guarantees detectability. Detectability plays a critical role in observer-based control, such as the separation principle for linear quadratic Gaussian (LQG) regulators, where it pairs with stabilizability of (A, B) to ensure closed-loop stability under output feedback. In practice, detectability allows robust state estimation when minor stable disturbance modes are unobservable, avoiding the need for full observability, which might require excessive sensors.
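The PBH detectability criterion translates directly into a numerical check: for each eigenvalue in the closed right half-plane, verify the stacked matrix has full column rank. A sketch with a hypothetical diagonal example (the helper function is ours):

```python
import numpy as np

def is_detectable(A, C, tol=1e-8):
    """PBH test: rank [lambda*I - A; C] = n for every eigenvalue with Re >= 0."""
    n = A.shape[0]
    for lam in np.linalg.eigvals(A):
        if lam.real >= -tol:  # only unstable/marginal modes must be observable
            M = np.vstack([lam * np.eye(n) - A, C])
            if np.linalg.matrix_rank(M, tol=1e-6) < n:
                return False
    return True

A = np.diag([1.0, -2.0])        # one unstable mode at +1, one stable at -2
C_good = np.array([[1.0, 0.0]]) # output sees the unstable mode: detectable
C_bad = np.array([[0.0, 1.0]])  # unstable mode hidden: not detectable
```

Note that with C_bad the pair is not observable either, but the distinction matters the other way around: a pair that misses only the stable mode at -2 would fail observability yet still pass this detectability test.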

Linear Time-Varying Systems

Observability Matrix Generalization

For linear time-varying (LTV) systems described by \dot{x}(t) = A(t) x(t) + B(t) u(t) and y(t) = C(t) x(t), where x \in \mathbb{R}^n, u \in \mathbb{R}^m, and y \in \mathbb{R}^p, the observability matrix generalizes the LTI case by incorporating the time dependence of A(t) and C(t). In LTI systems, observability is determined by the constant matrix \mathcal{O} = \begin{bmatrix} C \\ CA \\ \vdots \\ CA^{n-1} \end{bmatrix}, whose full rank implies observability. For LTV systems, no such static matrix exists due to the absence of a constant A; instead, a time-varying n \times n matrix Q_o(t) is defined recursively to capture instantaneous or local observability properties. The columns q_k(t) \in \mathbb{R}^n of Q_o(t) = [q_0(t), q_1(t), \dots, q_{n-1}(t)] satisfy the recursion q_{k+1}(t) = A^T(t) q_k(t) + \dot{q}_k(t), \quad k = 0, 1, \dots, n-2, with the initial condition q_0(t) = C^T(t). This recursion arises from differentiating the output equation and substituting the state dynamics, mirroring the LTI construction but adjusted for time variation via the derivative term. The resulting Q_o(t) relates the state x(t) to the generalized outputs via [y(t), \dot{y}(t), \dots, y^{(n-1)}(t)]^T = Q_o^T(t) x(t), assuming zero input for simplicity. An LTV system is uniformly observable if Q_o(t) is nonsingular for all t \geq t_0, or more stringently, if there exists c > 0 such that |\det Q_o(t)| \geq c for all t, ensuring bounded and persistent observability across time. For single-input single-output systems, this condition is equivalent to the existence of a bounded transformation to an observer canonical form, generalizing the LTI rank condition. Seminal results by Silverman and Meadows established that uniform observability holds precisely when no nontrivial state remains output-indistinguishable from zero under zero input, with the recursion providing a computable test under smoothness assumptions on A(t) and C(t).
For finite-horizon observability over [t_0, t_1], the matrix generalization extends to a stacked form using the state transition matrix \Phi(t, \tau), satisfying \frac{d}{dt} \Phi(t, \tau) = A(t) \Phi(t, \tau) with \Phi(\tau, \tau) = I. The observability map from x(t_0) to outputs is represented by the infinite-dimensional operator whose finite-dimensional approximation involves rows C(\tau) \Phi(\tau, t_0) for \tau \in [t_0, t_1]; full rank in the discretized sense implies observability. However, practical assessment often relies on the observability Gramian W_o(t_0, t_1) = \int_{t_0}^{t_1} \Phi^T(\tau, t_0) C^T(\tau) C(\tau) \Phi(\tau, t_0) \, d\tau, where the system is observable on [t_0, t_1] if W_o(t_0, t_1) > 0 (positive definite). Uniform complete observability requires \inf_{t_0} \lambda_{\min}(W_o(t_0, t_0 + d_o)) > 0 for some fixed d_o > 0, linking the matrix generalization to integral criteria for global properties. This framework supports observer design and state reconstruction in applications like adaptive control, where bounded parameters ensure the recursion yields stable estimates. For example, in slowly varying systems with \|\dot{A}(t)\| \leq \mu, the LTI observability matrix of the frozen system approximates Q_o(t), preserving rank under small \mu.
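The recursion q_{k+1} = A^T q_k + \dot{q}_k lends itself to symbolic computation. The following sketch uses SymPy with a hypothetical A(t) chosen so the result is easy to check by hand (a 2-state system with one time-varying coupling; smooth entries and zero input assumed):

```python
import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[0, t], [0, 0]])  # hypothetical time-varying system matrix A(t)
C = sp.Matrix([[1, 0]])          # constant output row C(t)
n = A.shape[0]

qs = [C.T]                        # q_0(t) = C^T(t)
for _ in range(n - 1):
    # q_{k+1}(t) = A^T(t) q_k(t) + dq_k/dt
    qs.append(A.T * qs[-1] + sp.diff(qs[-1], t))

Q_o = sp.Matrix.hstack(*qs)
det = sp.simplify(Q_o.det())      # here det Q_o(t) = t: singular only at t = 0
```

The determinant vanishing at t = 0 shows that pointwise nonsingularity can fail at isolated times even when the system is observable over any interval containing them, which is why the interval-based Gramian criterion below is the more robust test.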

Observability Gramian

In linear time-varying (LTV) systems, the observability Gramian provides a quantitative measure of observability over a finite time interval, generalizing the observability matrix used for linear time-invariant (LTI) systems. For an LTV system described by \dot{x}(t) = A(t)x(t) and y(t) = C(t)x(t), where x(t) \in \mathbb{R}^n is the state vector and y(t) \in \mathbb{R}^p is the output, the observability Gramian W_o(t_0, t_f) on the interval [t_0, t_f] is defined as W_o(t_0, t_f) = \int_{t_0}^{t_f} \Phi^T(t, t_0) C^T(t) C(t) \Phi(t, t_0) \, dt, where \Phi(t, t_0) denotes the state transition matrix satisfying \frac{d}{dt} \Phi(t, t_0) = A(t) \Phi(t, t_0) with \Phi(t_0, t_0) = I_n. This Gramian is symmetric and positive semi-definite by construction, as it represents the inner product operator associated with the output map from initial states to the L^2 space of outputs over the interval. The system is observable on [t_0, t_f] if and only if W_o(t_0, t_f) is positive definite, meaning its smallest eigenvalue is positive, which ensures that the initial state x(t_0) can be uniquely reconstructed from the output y(t) for t \in [t_0, t_f]. This stems from the fact that W_o(t_0, t_f) x = 0 if and only if C(t) \Phi(t, t_0) x = 0 for all t \in [t_0, t_f], identifying the unobservable subspace. The observability Gramian satisfies a differential Lyapunov equation derived from the transition matrix dynamics: \dot{W}_o(t, t_f) = -A^T(t) W_o(t, t_f) - W_o(t, t_f) A(t) - C^T(t) C(t) with terminal condition W_o(t_f, t_f) = 0, allowing numerical computation via backward integration for practical assessment in applications like state estimation and model reduction. For infinite-horizon analysis in asymptotically stable LTV systems, the Gramian may converge to a steady-state form, but finite-interval evaluation is essential for non-stationary cases to capture time-dependent observability. This tool, building on foundational concepts from Kalman, is widely used in estimation and filtering for verifying system monitorability under varying conditions.
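In the LTI special case with A Hurwitz, the infinite-horizon Gramian is the solution of the algebraic Lyapunov equation A^T W_o + W_o A + C^T C = 0. A NumPy-only sketch via Kronecker vectorization (avoiding any control toolbox), using the observable second-order example from earlier:

```python
import numpy as np

A = np.array([[0., 1.], [-2., -3.]])  # Hurwitz: eigenvalues -1 and -2
C = np.array([[1., 0.]])
n = A.shape[0]

# Solve A^T W + W A + C^T C = 0 by vectorization:
# vec(A^T W + W A) = (I kron A^T + A^T kron I) vec(W), vec stacking columns.
K = np.kron(np.eye(n), A.T) + np.kron(A.T, np.eye(n))
w = np.linalg.solve(K, -(C.T @ C).reshape(-1, order='F'))
W = w.reshape(n, n, order='F')

eigs = np.linalg.eigvalsh((W + W.T) / 2)  # positive definite <=> observable
```

Positive definiteness of W confirms observability; its smallest eigenvalue also quantifies how weakly the least-observable direction shows up in the output energy, a fact exploited in balanced model reduction.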

Nonlinear Systems

Definitions of Observability

In nonlinear control systems, observability concerns the extent to which the internal state can be inferred from the system's inputs and outputs over a finite time interval. For a general nonlinear system described by \dot{x} = f(x, u), y = h(x, u), where x \in \mathbb{R}^n is the state, u \in \mathbb{R}^m the input, y \in \mathbb{R}^p the output, and f and h are smooth functions, observability at an initial state x_0 is defined via the indistinguishability set I(x_0), which consists of all states x' that produce identical output trajectories y(t) for the same input u(t) starting from x_0 and x' over [t_0, t_1]. The system is observable at x_0 if I(x_0) = \{x_0\}, meaning no other state can mimic its input-output behavior. Global observability extends this property to the entire state space, requiring I(x) = \{x\} for all x. However, due to the inherent complexities of nonlinear dynamics, such as multiple equilibria or bifurcations, global observability is rare and difficult to achieve; instead, local variants are more commonly analyzed. Local observability at x_0 holds if, for every open neighborhood U of x_0, the restricted indistinguishability set I_U(x_0) = \{x_0\}. Weak observability refines this by requiring the existence of some neighborhood U such that I(x_0) \cap U = \{x_0\}, often assuming specific inputs like zero or constant. Local weak observability strengthens it further, ensuring that for some open U containing x_0, every subneighborhood V \subset U satisfies I_V(x_0) = \{x_0\}. A key analytical tool for assessing these properties is the observability rank condition, introduced for general nonlinear systems. Consider the space \mathcal{G} of smooth functions generated by the output components and closed under Lie differentiation along the system vector fields (the drift f and, for input-affine systems, the input vector fields g_i), where L_f denotes the Lie derivative along f.
The system satisfies the observability rank condition at x_0 if the dimension of the differential codistribution d\mathcal{G}(x_0) equals n, the state dimension. This condition implies local weak observability at x_0. For the prevalent class of input-affine nonlinear systems, \dot{x} = f(x) + \sum_{i=1}^m g_i(x) u_i, y = h(x), the observability codistribution \Omega^* is the smallest codistribution invariant under Lie differentiation along f and the g_i, containing the differentials dh_j of the output functions. It is computed iteratively: \begin{align*} \Omega_0 &= \operatorname{span}\{ dh_1, \dots, dh_p \}, \\ \Omega_{k} &= \Omega_{k-1} + L_f \Omega_{k-1} + \sum_{i=1}^m L_{g_i} \Omega_{k-1}, \\ \Omega^* &= \lim_{k \to \infty} \Omega_k. \end{align*} The rank condition requires \dim \Omega^*(x_0) = n, ensuring local weak observability and enabling state reconstruction via observers in a neighborhood of x_0. For analytic systems that are weakly controllable, this condition is equivalent to weak observability. These definitions underpin observer design, such as extended Kalman filters or high-gain observers, by quantifying the information content in outputs relative to state trajectories.
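The rank condition can be probed numerically by building rows of the codistribution from gradients of iterated Lie derivatives. A minimal sketch using finite-difference gradients, with a hypothetical drift-only example (harmonic oscillator with position output, zero input); `grad` and `lie_derivative` are our own helpers:

```python
import numpy as np

def grad(fun, x, eps=1e-6):
    """Central-difference gradient of a scalar function at x."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (fun(x + e) - fun(x - e)) / (2 * eps)
    return g

def lie_derivative(h, f, x):
    """L_f h(x) = grad(h)(x) . f(x)."""
    return grad(h, x) @ f(x)

# Hypothetical example: x1' = x2, x2' = -x1, y = x1 (position measurement only)
f = lambda x: np.array([x[1], -x[0]])
h = lambda x: x[0]

x0 = np.array([1.0, 0.5])
dh = grad(h, x0)                                    # row dh(x0)
dLfh = grad(lambda z: lie_derivative(h, f, z), x0)  # row d(L_f h)(x0)
rank = np.linalg.matrix_rank(np.vstack([dh, dLfh]), tol=1e-3)
```

A rank of n = 2 certifies local weak observability at x_0: the velocity state is recovered through the first output derivative. Symbolic differentiation would replace the finite differences in a production implementation.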

Observability Tests

In nonlinear systems theory, observability tests determine whether the internal state of a nonlinear system can be uniquely reconstructed from its output measurements over a finite time interval, extending the linear case while accounting for the dynamics' nonlinearity. Unlike linear time-invariant systems, where the Kalman observability matrix provides a straightforward rank condition, nonlinear systems require more sophisticated tools, often involving differential geometry and Lie derivatives to assess local distinguishability of states. These tests typically assume the system is described by \dot{x} = f(x, u), y = h(x), where x \in \mathbb{R}^n, u is the input, y \in \mathbb{R}^p is the output, and f, h are smooth functions. The foundational observability test for nonlinear systems was introduced by Hermann and Krener, relying on the concept of the observability codistribution generated by iterated Lie derivatives of the output function. The Lie derivative of a scalar function \phi(x) along a vector field g(x) is defined as L_g \phi(x) = \frac{\partial \phi}{\partial x} g(x). For the system, the observability codistribution \mathcal{O}(x) at a state x is the span of the differentials dh(x), d(L_f h)(x), d(L_f^2 h)(x), ..., up to order n-1, along with Lie derivatives along the input vector fields if the system is affine in control. The system is locally weakly observable at x_0 if \dim \mathcal{O}(x_0) = n, meaning the codistribution has full rank, ensuring that nearby states produce distinguishable output trajectories under suitable inputs. This rank condition is algebraic and computable for low-dimensional systems, but it provides only local guarantees and assumes C^\infty smoothness of the functions. For analytic nonlinear systems, the Hermann-Krener rank condition strengthens to imply local observability (unique reconstruction in a neighborhood) when combined with weak controllability, as analyticity ensures the local embedding of the state manifold via the output map.
In practice, this test is applied by constructing the nonlinear observability matrix whose rows are the Jacobians (differentials) of these Lie derivatives; full column rank confirms local weak observability. Extensions include numerical approximations using Taylor expansions or symbolic computation for higher-order terms, though exact rank computation can be challenging due to symbolic complexity. Alternative tests address specific nonlinear classes, such as immersion-based methods for systems linearizable by output injection, where observability is checked via the involutivity of the distribution orthogonal to the output derivatives. These build on the core framework but adapt to structural properties like feedback equivalence. Overall, while no universal global test exists due to nonlinear pathologies (e.g., multiple equilibria), the Lie-based rank condition remains the cornerstone for both theoretical analysis and observer design in nonlinear systems.

Generalizations

Static Systems

In control theory, static systems are characterized by algebraic relations without time-dependent dynamics, typically modeled as y = C x + D u, where y \in \mathbb{R}^m represents the measured outputs, x \in \mathbb{R}^n the state variables, u \in \mathbb{R}^p the known inputs, C \in \mathbb{R}^{m \times n} the state-output matrix, and D \in \mathbb{R}^{m \times p} the input-output matrix. Unlike dynamic systems, where observability involves trajectories over time, static systems rely solely on instantaneous measurements to infer states. Observability in static systems is defined as the capability to uniquely determine the state x from the outputs y and inputs u. Assuming u is known, this reduces to solving the linear equation y - D u = C x, requiring \rank(C) = n to uniquely determine the state x. In the absence of inputs (D = 0), observability simplifies to \rank(C) = n, ensuring the kernel of C is trivial and distinct states produce distinct outputs. If m < n, full observability is impossible, as the number of measurements cannot resolve all states. Testing observability involves computing the rank of C (or the augmented form) via singular value decomposition or Gaussian elimination; the system is observable if the smallest singular value exceeds a numerical threshold, confirming injectivity. Structural observability extends this to sparse or graph-based representations, where the incidence matrix M (encoding variable relations) is analyzed: a system is structurally observable if \rank(M_m) = v - m, with M_m the submatrix for measured variables, v the total variables, and m measurements, allowing decomposition into observable and non-observable subspaces. Applications of static observability appear in fault detection, sensor placement, and process monitoring, such as chemical plants or energy networks, where algebraic models estimate unmeasured variables. 
Redundancy relations, derived as \Omega y = 0 with \Omega a left null-space projector of full rank, enable fault isolation while preserving state estimation. Dependability assessments quantify fault tolerance through the analytical redundancy degree—the maximum sensor failures maintainable without rank deficiency in C—followed by stochastic simulations for reliability metrics.
Aspect | Condition for Observability | Interpretation
Full state reconstruction | \rank(C) = n | Unique x from y = C x
With inputs | \rank(C) = n | Unique x from y = C x + D u with known u
Structural test | \rank(M_m) = v - m | Estimability in sparse algebraic networks
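The algebraic test and reconstruction step can be sketched in a few lines: subtract the known input contribution, check rank(C) = n, and solve by least squares (the redundant third measurement makes the estimate overdetermined). The model below is hypothetical, chosen only for illustration:

```python
import numpy as np

# Hypothetical static model y = C x + D u: m = 3 measurements, n = 2 states
C = np.array([[1., 0.], [1., 1.], [0., 1.]])
D = np.array([[0.5], [0.0], [1.0]])

x_true = np.array([2.0, -1.0])
u = np.array([3.0])
y = C @ x_true + D @ u

# Observable iff rank(C) = n; then x is the unique least-squares solution
assert np.linalg.matrix_rank(C) == C.shape[1]
x_hat, *_ = np.linalg.lstsq(C, y - D @ u, rcond=None)
```

With noisy measurements the same least-squares solve returns the best-fit state, and the surplus row of C provides the redundancy relation used for fault isolation.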

Topological Spaces

In control theory, the concept of observability extends to systems where the state space is a topological vector space, such as Banach or Hilbert spaces, which is essential for modeling infinite-dimensional phenomena like partial differential equations (PDEs) or delay systems. Unlike finite-dimensional cases, where observability is equivalent to the injectivity of the observability matrix, infinite-dimensional settings require careful consideration of the topology to ensure well-posedness. A linear system \dot{x}(t) = A x(t) + B u(t), y(t) = C x(t), with state space X a topological vector space, output space Y, and A generating a strongly continuous semigroup S(t) on X, is said to satisfy an observability inequality if there exist constants K > 0 and T > 0 such that \|x\|_X \leq K \left( \int_0^T \|C S(t) x\|_Y^2 \, dt \right)^{1/2} for all x \in X. This ensures that the initial state x(0) can be bounded by the L^2-norm of the output over [0, T], generalizing the finite-dimensional Kalman criterion while accounting for the lack of compactness in infinite dimensions. Exact observability, a stronger condition, requires the observability map \mathcal{O}_T: X \to L^2(0,T; Y) defined by \mathcal{O}_T x = C S(\cdot) x to be boundedly invertible, meaning \mathcal{O}_T is injective and has a continuous inverse on its range. This property holds if and only if the only state yielding zero output is the zero state, and it implies stability estimates for the system. In Hilbert spaces, duality links observability to approximate controllability of the adjoint system, where the control operator is the adjoint of the observation operator. For non-Hilbert topological vector spaces, such as Banach spaces, observability inequalities often rely on sectorial operators and Logvinenko-Sereda-type theorems for vector-valued functions, ensuring bounds like \|S(T) x\|_X \leq C \|\mathcal{O}_T x\|_{L^r(0,T; Y)} under assumptions of exponential boundedness and measurability of the semigroup. 
These generalizations enable analysis of PDEs, such as the wave or heat equation observed on a boundary subset. A refined notion, topological observability, addresses subtleties in infinite dimensions where standard observability may not suffice for unique reconstruction due to topological constraints. Introduced for infinite-dimensional systems with continuous observation maps in continuous-time settings, a system is topologically observable if the observability map \mathcal{O}_T is continuously invertible when restricted to its image, ensuring well-posed determination of the initial state from the output in bounded time T > 0. This requires \mathcal{O}_T to be injective with a continuous inverse on its range. Topological observability in bounded time yields realizations that are minimal, analogous to finite-dimensional minimal realizations, and it is dual to approximate controllability via the adjoint system. For example, certain delay-differential systems exhibit topological observability despite lacking exact observability in the classical sense. This framework has been pivotal in realization theory for pseudo-rational transfer functions and networked systems.

    Oct 21, 2010 · In summary, for discrete-time linear systems, reachability implies controllability, and the two notions are equivalent if the matrix A of the ...
  16. [16]
    [PDF] MAE270A: Concepts of Observability/Detectability for LTI Systems
    By virtue of duality condition, then (A, C) is observable if and only if Rank O = n. As such matrix O is called observability matrix. Matrix algebra recall. For ...
  17. [17]
    [PDF] Notes on Linear Systems Theory
    A (stable) linear system is observable if and only if the observabilty Gramian has rank n. The observability Gramian can be computed using linear algebra:.Missing: LTI | Show results with:LTI
  18. [18]
    [PDF] Module 03 Linear Systems Theory: Necessary Background
    Sep 2, 2015 · DTLTI or CTLIT system, defined by (A, C), is detectable if there exists a matrix L such that A − LC is stable. Detectability Theorem. DTLTI or ...
  19. [19]
    [PDF] K. Tsakalis and P. Ioannou, Linear Time-Varying Systems
    ... Time-Varying Di erential Operators ... observability matrix. Q> o = [q0; q1;...; qn;1] ; qk+1 = A>. (t)qk + _qk ; q0 ...<|control11|><|separator|>
  20. [20]
    Controllability and Observability in Time-Variable Linear Systems
    Jul 18, 2006 · L. M. Silverman, Representation and realization of time-variable linear systems, Tech. Rept., 94, Department of Electrical Engineering ...
  21. [21]
    [PDF] Linear Control Systems Time-Varying Systems Change of Variables ...
    Let Φ(t, τ) be the transition matrix of A(t) and define a constant matrix R by e. RT. = Φ(T, 0). There is always a matrix R (although it is not unique). – p. 4 ...
  22. [22]
    [PDF] Linear Systems, 2019 - Lecture 3 - Automatic control (LTH)
    Observability Gramian. The matrix function. M(t0,tf ) = Z tf t0. Φ(t, t0)T C(t)T C(t)Φ(t, t0)dt is called the observability Gramian of the system. ˙x(t) = A(t)x ...Missing: varying | Show results with:varying
  23. [23]
    [PDF] Nonlinear Controllability and Observability
    Nonlinear Controllability and Observability. ROBERT HERMANN AND ARTHUR J ... of xo. Page 4. HERMANN AND KRENER: NONLINEAR CONTROLLABILITY AND OBSERVABILITY.
  24. [24]
    [PDF] Extension of the Observability Rank Condition to Time ... - Hal-Inria
    Jan 26, 2023 · Abstract—This note provides an extension of the observability rank condition to time-varying nonlinear systems. Previous conditions to.
  25. [25]
    [PDF] observability and redundancy decomposition
    The first part of the paper reviews the case of static systems. The second part is devoted to dynamic systems. In both cases, we propose a new presentation ...
  26. [26]
    Dependability analysis of instrumented linear static systems based ...
    Dependability analysis of instrumented linear static systems based on their observability. Samia MazaCentre de Recherche en Automatique de Nancy CNRS UMR7039 ...
  27. [27]
    An Introduction to Infinite-Dimensional Linear Systems Theory
    This book introduces infinite dimensional linear systems, integrating state-space and frequency-domain methods, for graduate engineers and mathematicians with ...
  28. [28]
    [PDF] Observability inequalities for infinite-dimensional systems in Banach ...
    For a topological vector space X, the space X1 “ LpX,Cq is the dual space of X. We have the dual pairing x¨,¨y : X. 1. ˆ X Ñ C, px1,xqÞÑ xx1,xy “ x1 pxq ...
  29. [29]
    Realization theory of infinite-dimensional linear systems. Part I
    Central emphasis is placed on the introduction of a new notion of observability, called topological observability.
  30. [30]
    [PDF] CONTROLLABILITY AND OBSERVABILITY - Sontag Lab
    Topological observability requires that the initial state de-. (and takes values in X), it has finite rank, and hence it can termination be well posed. The ...