Observability
Observability is a fundamental concept in control theory, introduced by Rudolf E. Kálmán in 1960, that quantifies the extent to which the internal states of a dynamic system can be inferred or reconstructed solely from measurements of its external inputs and outputs over a finite time interval.[1] A system is deemed fully observable if its observability matrix—constructed from the system's state-space model—has full rank, ensuring no initial state remains indistinguishable from others based on output data.[1] Observability is dual to controllability, both essential for state estimation in applications like robotics and aerospace.[1]
Fundamentals
Definition
Observability is a fundamental concept in control theory that quantifies the extent to which the internal states of a dynamical system can be reconstructed from its external outputs over a finite time interval.[2] Introduced by Rudolf E. Kalman in his seminal work on linear systems in the late 1950s and early 1960s, observability addresses the problem of determining unmeasurable state variables from measurable outputs, enabling state estimation and feedback control design.[3] In essence, a system is observable if distinct initial states produce distinguishable output trajectories under known inputs, allowing unique inference of the current state.[2] For linear time-invariant systems described by the state-space model \dot{x} = Ax + Bu and y = Cx + Du, where x \in \mathbb{R}^n is the state vector, u \in \mathbb{R}^m is the input, and y \in \mathbb{R}^p is the output, observability is formally defined as the property that permits determination of x(T) from measurements of y(t) and u(t) over any interval [0, T] with T > 0.[2] This requires that the observability matrix \mathcal{O} = \begin{bmatrix} C \\ CA \\ \vdots \\ CA^{n-1} \end{bmatrix} has full column rank equal to n, ensuring no hidden dynamics that cannot be detected from the outputs.[3] If the rank condition holds, the system's states are fully reconstructible; otherwise, an unobservable subspace exists, limiting state estimation to the observable part.[2] The concept extends to discrete-time systems x[k+1] = Ax[k] + Bu[k], y[k] = Cx[k] + Du[k], with observability defined analogously via the same matrix \mathcal{O} achieving full rank n, allowing initial state recovery from a finite sequence of outputs.[3] Observability is dual to controllability, sharing similar algebraic tests, and is crucial for applications like Kalman filtering, where it ensures the filter can accurately estimate states despite noise.[2] While the definition is precise for linear systems, it generalizes to nonlinear and time-varying cases through analogous conditions on output distinguishability.[3]
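To make the discrete-time definition concrete, the following minimal sketch (Python with NumPy; the matrices are illustrative, not drawn from the cited sources) recovers an unknown initial state from n output samples via the observability matrix:

```python
import numpy as np

# Illustrative discrete-time system x[k+1] = A x[k], y[k] = C x[k] (zero input).
A = np.array([[0.9, 1.0],
              [0.0, 0.7]])
C = np.array([[1.0, 0.0]])
n = A.shape[0]

# Observability matrix: stack C, CA, ..., CA^(n-1).
O = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])
assert np.linalg.matrix_rank(O) == n, "pair (A, C) is not observable"

# Simulate n output samples from an "unknown" initial state ...
x0_true = np.array([2.0, -1.0])
ys, x = [], x0_true
for _ in range(n):
    ys.append(C @ x)
    x = A @ x

# ... and recover x[0] from [y[0]; y[1]] = O x[0].
x0_est, *_ = np.linalg.lstsq(O, np.concatenate(ys), rcond=None)
print(x0_est)  # -> approximately [ 2. -1.]
```

Importance in Control Theory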
Observability plays a pivotal role in control theory by enabling the reconstruction of a system's internal states from its measurable outputs over a finite time interval. This property ensures that the complete behavior of the system can be inferred without direct access to all state variables, which is often impractical or impossible in real-world applications. Introduced by Rudolf E. Kálmán in his seminal 1960 work, observability addresses a fundamental challenge in system analysis: distinguishing between different initial states based solely on input-output data. Without observability, hidden dynamics could remain undetected, leading to incomplete system understanding or ineffective control strategies.[4][3] In control system design, observability is essential for developing state estimators, such as the Luenberger observer or the Kalman filter, which approximate unmeasurable states for feedback purposes. For instance, an observable system allows the design of a stable observer that asymptotically reconstructs the state vector, facilitating pole placement and stabilization even when full state feedback is unavailable. This is particularly critical in applications like aerospace navigation and process control, where precise state knowledge is required to achieve desired performance without excessive sensors. Observability's duality with controllability further underscores its importance: together, they guarantee that a system can be both driven to desired states and monitored effectively, forming the basis for modern optimal control theory.[5][1] Beyond estimation, observability supports fault detection and isolation by identifying deviations in system behavior that indicate anomalies, as unobservable modes could mask such issues. In linear time-invariant systems, it is assessed through the rank of the observability matrix, providing a rigorous algebraic test for practical implementation. This conceptual framework has influenced fields from robotics to economic modeling, emphasizing the need for observable structures in complex, interconnected systems to ensure reliability and performance.[3][1]
Linear Time-Invariant Systems
Observability Matrix
In linear time-invariant (LTI) systems described by the state-space model \dot{x}(t) = A x(t) + B u(t), y(t) = C x(t) + D u(t), where x \in \mathbb{R}^n is the state vector, u \in \mathbb{R}^m is the input, y \in \mathbb{R}^p is the output, and A, B, C, D are constant matrices of appropriate dimensions (with D often zero for simplicity), the observability matrix provides an algebraic test for determining whether the initial state x(0) can be uniquely reconstructed from the output y(t) over a finite time interval. This matrix, introduced by Rudolf E. Kalman in his foundational work on control systems, stacks the output matrix against successive powers of the system matrix to capture the mapping from states to outputs.[6] The observability matrix \mathcal{O} is constructed by vertically stacking C and its products with successive powers of A: \mathcal{O} = \begin{bmatrix} C \\ CA \\ CA^2 \\ \vdots \\ CA^{n-1} \end{bmatrix} This results in a matrix of dimensions pn \times n. For a discrete-time LTI system x(k+1) = A x(k) + B u(k), y(k) = C x(k) + D u(k), the construction is identical. The rank of \mathcal{O} determines observability: the system is observable if and only if \operatorname{rank}(\mathcal{O}) = n, meaning \mathcal{O} has full column rank. This condition, known as the Kalman rank condition (an equivalent eigenvalue-based formulation is given by the Popov-Belevitch-Hautus (PBH) test), ensures that there is no nontrivial linear combination of states that produces zero output for all time under zero input, i.e., the kernel of \mathcal{O} is trivial. If \operatorname{rank}(\mathcal{O}) < n, the unobservable subspace has dimension n - \operatorname{rank}(\mathcal{O}), consisting of states that cannot be distinguished from zero through measurements.[7][8][3] To illustrate, consider a second-order continuous-time system with A = \begin{bmatrix} 0 & 1 \\ -2 & -3 \end{bmatrix}, C = \begin{bmatrix} 1 & 0 \end{bmatrix}. The observability matrix is \mathcal{O} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, which has rank 2 and thus confirms observability, allowing unique state reconstruction from y(t) and its derivative. In contrast, for A = \begin{bmatrix} 1 & 0 \\ 0 & 2 \end{bmatrix}, C = \begin{bmatrix} 1 & 0 \end{bmatrix}, \mathcal{O} = \begin{bmatrix} 1 & 0 \\ 1 & 0 \end{bmatrix} has rank 1, indicating the second state is unobservable. This matrix-based test is computationally efficient for moderate n and underpins observer design, such as Luenberger observers, by enabling decomposition into observable and unobservable canonical forms. By duality, the observability matrix of (A, C) is the transpose of the controllability matrix of (A^T, C^T), linking the concepts symmetrically.[7][3]
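The rank test is straightforward to carry out numerically. The following sketch (Python with NumPy; a hand-rolled helper rather than any particular library routine) reproduces the two worked examples above:

```python
import numpy as np

def obsv(A, C):
    """Kalman observability matrix: C, CA, ..., CA^(n-1) stacked vertically."""
    n = A.shape[0]
    return np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])

# First worked example: O is the 2x2 identity, rank 2 -> observable.
A1 = np.array([[0.0, 1.0], [-2.0, -3.0]])
C1 = np.array([[1.0, 0.0]])
print(np.linalg.matrix_rank(obsv(A1, C1)))  # 2

# Second worked example: rank 1 -> the second state is unobservable.
A2 = np.diag([1.0, 2.0])
C2 = np.array([[1.0, 0.0]])
print(np.linalg.matrix_rank(obsv(A2, C2)))  # 1
```

For large n, forming explicit powers of A is numerically ill-conditioned, so production tools tend to prefer staircase-type algorithms; the direct construction is shown here because it mirrors the definition.

Unobservable Subspace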
In linear time-invariant (LTI) systems described by \dot{x} = Ax + Bu, y = Cx + Du, the unobservable subspace consists of all initial states x(0) that generate zero output y(t) = 0 for all t \geq 0 under zero input u(t) = 0, rendering them indistinguishable from the origin based on output measurements.[1] This subspace, originally conceptualized in the state-space framework, forms a linear subspace of the state space \mathbb{R}^n.[6] Mathematically, the unobservable subspace \mathcal{N}_{C,A} is the kernel of the observability matrix O(C,A), defined as O(C,A) = \begin{bmatrix} C \\ CA \\ \vdots \\ CA^{n-1} \end{bmatrix}, where \mathcal{N}_{C,A} = \ker O(C,A) = \{ x \in \mathbb{R}^n \mid O(C,A) x = 0 \}.[9] Equivalently, it is the intersection of the kernels of the output map and its iterates: \mathcal{N}_{C,A} = \bigcap_{k=0}^{n-1} \ker (CA^k).[1] The dimension of \mathcal{N}_{C,A} equals n - \rank O(C,A), and the system is observable if and only if \dim \mathcal{N}_{C,A} = 0, or \rank O(C,A) = n.[9] The unobservable subspace possesses key structural properties: it is the largest A-invariant subspace contained in \ker C, meaning if x \in \mathcal{N}_{C,A}, then A x \in \mathcal{N}_{C,A}.[1] This invariance ensures that trajectories starting in \mathcal{N}_{C,A} remain within it, producing no observable effect. In the Kalman observability decomposition, the state space is split as \mathbb{R}^n = \mathcal{V} \oplus \mathcal{N}_{C,A} for a subspace \mathcal{V} complementary to \mathcal{N}_{C,A}, allowing transformation to a basis that separates observable and unobservable dynamics.[6] For modal analysis, \mathcal{N}_{C,A} includes eigenspaces corresponding to unobservable eigenvalues, where C v_i = 0 for eigenvector v_i of A.[1] An alternative characterization uses the Popov-Belevitch-Hautus (PBH) test: \mathcal{N}_{C,A} contains an eigenvector v of A exactly when \rank \begin{bmatrix} A - \lambda I \\ C \end{bmatrix} < n for the corresponding eigenvalue \lambda, and the system is observable if this rank is full for all \lambda.[9] The observability Gramian W_o(t) = \int_0^t e^{A^\top \tau} C^\top C e^{A \tau} d\tau provides a continuous-time equivalent: \ker W_o(t) = \mathcal{N}_{C,A} for any t > 0, though the kernel of the observability matrix is the primary finite-dimensional tool.[1]
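Since \mathcal{N}_{C,A} = \ker O(C,A), it can be computed numerically as a null space. A minimal sketch, assuming NumPy and SciPy and reusing the rank-1 example from the previous subsection:

```python
import numpy as np
from scipy.linalg import null_space

# Rank-deficient example from above: the second state is unobservable.
A = np.diag([1.0, 2.0])
C = np.array([[1.0, 0.0]])
n = A.shape[0]
O = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])

# N_{C,A} = ker O(C, A); here it is spanned by e2 = [0, 1]^T.
N = null_space(O)
print(N.ravel())  # -> [0. 1.] (up to sign)

# A-invariance: A maps the unobservable subspace into itself.
print(np.allclose(O @ (A @ N), 0.0))  # True
```

Observability Index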
In linear time-invariant (LTI) systems, the observability index quantifies the structural complexity of state reconstruction from outputs, serving as a dual concept to the controllability index. For a controllable and observable pair (A, C) where A \in \mathbb{R}^{n \times n} is the system matrix and C \in \mathbb{R}^{p \times n} is the output matrix with p outputs, the observability indices \hat{d}_1, \hat{d}_2, \dots, \hat{d}_p are defined as the controllability indices of the dual pair (A^T, C^T). These indices are positive integers satisfying \hat{d}_1 \geq \hat{d}_2 \geq \dots \geq \hat{d}_p > 0 and \sum_{i=1}^p \hat{d}_i = n, reflecting the minimal dimensions required to span the observable subspace through successive output derivatives or shifts.[10] The overall observability index \hat{d} is the maximum of these individual indices, \hat{d} = \max \{ \hat{d}_i : i = 1, 2, \dots, p \}, indicating the longest chain of dependencies needed to observe the full state from the outputs. To compute it, transform the dual system (A^T, C^T) into controller canonical form via a similarity transformation, where the controllability indices appear as the lengths of the companion blocks in the block-diagonal structure. The rank of the partial observability matrix \mathcal{O}_k = \begin{bmatrix} C \\ CA \\ \vdots \\ CA^{k-1} \end{bmatrix} stabilizes at n for k \geq \hat{d}, confirming observability when \text{rank}(\mathcal{O}_n) = n. This index is crucial for designing reduced-order observers, as the minimal observer order relates to n - p but is influenced by \hat{d} in multi-output cases.[10] For single-output systems (p=1), the observability index simplifies to \hat{d}_1 = n, aligning with the full rank requirement of the standard observability matrix. In multi-output scenarios, the indices partition the state space into observable modes, enabling efficient numerical reconstruction algorithms that require only \hat{d} time steps or derivatives for state estimation in discrete-time equivalents. This structural insight, rooted in Kalman decomposition, ensures that unobservable subspaces are isolated, with the indices providing a canonical measure of observability depth without relying on eigenvalue computations.[10]
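The stabilizing rank of the partial matrices \mathcal{O}_k gives a direct way to compute the overall index \hat{d}. A sketch assuming NumPy (the example matrices are illustrative; recovering the individual indices \hat{d}_i would additionally track which rows of C A^k add rank, channel by channel):

```python
import numpy as np

def observability_index(A, C):
    """Smallest k such that rank [C; CA; ...; CA^(k-1)] = n.

    A sketch of the overall index d-hat only; per-output indices d_i
    would require bookkeeping of which rows contribute new rank.
    """
    n = A.shape[0]
    blocks = [C]
    for k in range(1, n + 1):
        if np.linalg.matrix_rank(np.vstack(blocks)) == n:
            return k
        blocks.append(blocks[-1] @ A)
    raise ValueError("pair (A, C) is not observable")

# Two outputs, n = 3: two block rows already give full rank, so d-hat = 2.
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-1.0, -2.0, -3.0]])
C = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
print(observability_index(A, C))  # 2
```

Detectability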
In control theory, detectability is a property of linear time-invariant (LTI) systems that relaxes observability by requiring only that the unstable dynamics be observable from the outputs. For a system \dot{x} = Ax + Bu, y = Cx + Du, the pair (A, C) is detectable if all eigenvalues of A corresponding to the unobservable subspace have negative real parts, ensuring that unobservable states decay asymptotically to zero.[11] This condition implies that an observer can asymptotically estimate the state despite partial unobservability, as the estimation error for stable unobservable modes converges exponentially.[12] An equivalent definition states that (A, C) is detectable if there exists an output injection matrix L such that the closed-loop matrix A - LC is Hurwitz (all eigenvalues have negative real parts), allowing the observer error dynamics to be stabilized by output injection; arbitrary pole placement, by contrast, requires full observability.[13] Observable systems are inherently detectable, since the full state is reconstructible, but detectability is a weaker requirement that accommodates systems where stable modes need not be observed.[11] Detectability can be tested using the Popov-Belevitch-Hautus (PBH) criterion, adapted from the observability test: the pair (A, C) is detectable if and only if \rank \begin{bmatrix} \lambda I - A \\ C \end{bmatrix} = n for every complex eigenvalue \lambda of A with \operatorname{Re}(\lambda) \geq 0, where n is the system dimension.[9] This ensures no unstable eigenvector lies in the unobservable subspace, as \ker(\lambda I - A) \cap \ker(C) = \{0\} for such \lambda. If the system is observable, the rank condition holds for all \lambda \in \mathbb{C}.[13] In the Kalman observability decomposition, detectability corresponds to the unobservable subsystem matrix A_{\bar{o}} being Hurwitz, where the state is transformed as \bar{x} = T^{-1} x into observable and unobservable coordinates: \dot{\bar{x}}_o = A_o \bar{x}_o + B_o u, \quad y = C_o \bar{x}_o + D u, \quad \dot{\bar{x}}_{\bar{o}} = A_{\bar{o}} \bar{x}_{\bar{o}} + A_{21} \bar{x}_o + B_{\bar{o}} u. Here, A_o governs the observable dynamics, and stability of A_{\bar{o}} guarantees detectability.[11] Detectability plays a critical role in observer-based control, such as the separation principle for linear quadratic Gaussian (LQG) regulators, where it pairs with stabilizability of (A, B) to ensure closed-loop stability under output feedback.[12] For instance, in applications like aerospace systems, detectability allows robust state estimation when minor stable disturbances are unobservable, avoiding the need for full observability that might require excessive sensors.[13]
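The PBH criterion translates directly into a numerical test. A sketch assuming NumPy (the tolerance and example matrices are illustrative choices):

```python
import numpy as np

def is_detectable(A, C, tol=1e-9):
    """PBH-style sketch: rank [lambda*I - A; C] = n must hold for every
    eigenvalue lambda of A with nonnegative real part."""
    n = A.shape[0]
    for lam in np.linalg.eigvals(A):
        if lam.real >= -tol:  # unstable or marginal mode: must be observable
            M = np.vstack([lam * np.eye(n) - A, C])
            if np.linalg.matrix_rank(M) < n:
                return False
    return True

# Only the unstable state is measured; the unobservable mode sits at -1,
# so the pair is detectable although it is not observable.
C = np.array([[0.0, 1.0]])
print(is_detectable(np.diag([-1.0, 2.0]), C))  # True

# Moving the unmeasured mode to +2 destroys detectability.
print(is_detectable(np.diag([2.0, -1.0]), C))  # False
```

Linear Time-Varying Systems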
Observability Matrix Generalization
For linear time-varying (LTV) systems described by \dot{x}(t) = A(t) x(t) + B(t) u(t) and y(t) = C(t) x(t), where x \in \mathbb{R}^n, u \in \mathbb{R}^m, and y \in \mathbb{R}^p, the observability matrix generalizes the LTI case by incorporating the time dependence of A(t) and C(t). In LTI systems, observability is determined by the constant matrix \mathcal{O} = \begin{bmatrix} C \\ CA \\ \vdots \\ CA^{n-1} \end{bmatrix}, whose full rank implies observability. For LTV systems, no such static matrix exists due to the absence of a constant A; instead, a time-varying n \times n matrix Q_o(t) (taking a single output, p = 1, for concreteness) is defined recursively to capture instantaneous or local observability properties.[14][15] The columns q_k(t) \in \mathbb{R}^n of Q_o(t) = [q_0(t), q_1(t), \dots, q_{n-1}(t)] satisfy the recursion q_{k+1}(t) = A^T(t) q_k(t) + \dot{q}_k(t), \quad k = 0, 1, \dots, n-2, with the initial condition q_0(t) = C^T(t). This recursion arises from differentiating the output equation and substituting the state dynamics, mirroring the LTI construction but adjusted for time variation via the derivative term. The resulting Q_o(t) relates the state x(t) to the stacked output derivatives via [y(t), \dot{y}(t), \dots, y^{(n-1)}(t)]^T = Q_o^T(t) x(t), assuming zero input for simplicity.[14][15] An LTV system is uniformly observable if Q_o(t) is nonsingular for all t \geq t_0, or more stringently, if there exists c > 0 such that |\det Q_o(t)| \geq c for all t, ensuring bounded conditioning and persistent observability across time. This condition is equivalent to the existence of a bounded transformation to an observable canonical form for single-input single-output systems, generalizing the LTI rank condition. Seminal results by Silverman and Meadows established that uniform observability holds if and only if no nontrivial state trajectory remains unobservable under zero input, with the recursion providing a computable test under smoothness assumptions on A(t) and C(t).[14][15] For finite-horizon observability over [t_0, t_1], the matrix generalization extends to a stacked form using the state transition matrix \Phi(t, \tau), satisfying \frac{d}{dt} \Phi(t, \tau) = A(t) \Phi(t, \tau) with \Phi(\tau, \tau) = I. The observability map from x(t_0) to the output trajectory is the operator x(t_0) \mapsto C(\tau) \Phi(\tau, t_0) x(t_0) for \tau \in [t_0, t_1]; injectivity of this map (full rank in the discretized sense) implies observability. However, practical assessment often relies on the observability Gramian W_o(t_0, t_1) = \int_{t_0}^{t_1} \Phi^T(\tau, t_0) C^T(\tau) C(\tau) \Phi(\tau, t_0) \, d\tau, where the system is observable on [t_0, t_1] if W_o(t_0, t_1) > 0 (positive definite). Uniform complete observability requires \inf_{t_0} \lambda_{\min}(W_o(t_0, t_0 + d_o)) > 0 for some fixed d_o > 0, linking the matrix generalization to integral criteria for global properties.[14][15] This framework supports observer design and state reconstruction in applications like adaptive control, where bounded parameters ensure the recursion yields stable estimates. For example, in slowly varying systems with \|\dot{A}(t)\| \leq \mu, the LTI observability matrix of the frozen system approximates Q_o(t), preserving rank under small \mu.[14]
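The recursion for q_k(t) is mechanical and well suited to symbolic computation. A sketch assuming SymPy, with an illustrative single-output system that is not taken from the cited sources:

```python
import sympy as sp

t = sp.symbols('t')

# Illustrative single-output LTV pair (A(t), C(t)).
A = sp.Matrix([[0, 1],
               [0, -sp.exp(-t)]])
C = sp.Matrix([[1, 0]])
n = A.shape[0]

# q_0 = C^T and q_{k+1} = A^T q_k + dq_k/dt, k = 0, ..., n-2.
q = [C.T]
for _ in range(n - 1):
    q.append(A.T @ q[-1] + sp.diff(q[-1], t))
Q_o = sp.Matrix.hstack(*q)

# A determinant bounded away from zero indicates uniform observability.
print(sp.simplify(Q_o.det()))  # -> 1 for this example
```

Observability Gramian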
In linear time-varying (LTV) systems, the observability Gramian provides a quantitative measure of observability over a finite time interval, generalizing the observability matrix used for linear time-invariant (LTI) systems. For an LTV system described by \dot{x}(t) = A(t)x(t) and y(t) = C(t)x(t), where x(t) \in \mathbb{R}^n is the state vector and y(t) \in \mathbb{R}^p is the output, the observability Gramian W_o(t_0, t_f) on the interval [t_0, t_f] is defined as W_o(t_0, t_f) = \int_{t_0}^{t_f} \Phi^T(t, t_0) C^T(t) C(t) \Phi(t, t_0) \, dt, where \Phi(t, t_0) denotes the state transition matrix satisfying \frac{d}{dt} \Phi(t, t_0) = A(t) \Phi(t, t_0) with \Phi(t_0, t_0) = I_n.[16][17] This Gramian is symmetric and positive semi-definite by construction, as it represents the inner product operator associated with the output map from initial states to the L^2 space of outputs over the interval. The system is observable on [t_0, t_f] if and only if W_o(t_0, t_f) is positive definite, meaning its smallest eigenvalue is positive, which ensures that the initial state x(t_0) can be uniquely reconstructed from the output y(t) for t \in [t_0, t_f].[17] This criterion stems from the fact that W_o(t_0, t_f) x = 0 if and only if C(t) \Phi(t, t_0) x = 0 for all t \in [t_0, t_f], identifying the unobservable subspace.[16] The observability Gramian satisfies a matrix differential equation derived from the system dynamics: \dot{W}_o(t, t_f) = -A^T(t) W_o(t, t_f) - W_o(t, t_f) A(t) - C^T(t) C(t) with terminal condition W_o(t_f, t_f) = 0, allowing numerical computation via backward integration for practical assessment in applications like state estimation and filter design.[16] For infinite-horizon analysis in asymptotically stable LTV systems, the Gramian may converge to a steady-state form, but finite-interval evaluation is essential for non-stationary cases to capture time-dependent observability.[1] This tool, building on foundational concepts from Kalman, is widely used in aerospace and robotics for verifying system monitorability under varying conditions.[6]
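Numerically, W_o(t_0, t_f) can be obtained by integrating the transition matrix and the Gramian integrand jointly. A sketch assuming SciPy, with illustrative time-varying matrices chosen for this example:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative LTV system (not from the cited sources).
def A(t):
    return np.array([[0.0, 1.0], [-1.0, -0.5 * np.sin(t)]])

def C(t):
    return np.array([[1.0 + 0.1 * np.cos(t), 0.0]])

n, t0, tf = 2, 0.0, 5.0

def rhs(t, z):
    # z packs Phi(t, t0) (first n*n entries) and the running Gramian.
    Phi = z[:n * n].reshape(n, n)
    dPhi = A(t) @ Phi                       # d/dt Phi = A(t) Phi
    dW = Phi.T @ C(t).T @ C(t) @ Phi        # Gramian integrand
    return np.concatenate([dPhi.ravel(), dW.ravel()])

z0 = np.concatenate([np.eye(n).ravel(), np.zeros(n * n)])
sol = solve_ivp(rhs, (t0, tf), z0, rtol=1e-8, atol=1e-10)
W = sol.y[n * n:, -1].reshape(n, n)

# All eigenvalues > 0 certifies observability on [t0, tf].
print(np.linalg.eigvalsh(W))
```

Integrating the backward Lyapunov differential equation quoted above is an equivalent route that avoids carrying the transition matrix explicitly.

Nonlinear Systems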
Definitions of Observability
In nonlinear control systems, observability concerns the extent to which the internal state can be inferred from the system's inputs and outputs over a finite time interval.[18] For a general nonlinear system described by \dot{x} = f(x, u), y = h(x, u), where x \in \mathbb{R}^n is the state, u \in \mathbb{R}^m the input, y \in \mathbb{R}^p the output, and f and h are smooth functions, observability at an initial state x_0 is defined via the indistinguishability relation I(x_0), which consists of all states x' that produce identical output trajectories y(t) for the same input u(t) starting from x_0 and x' over [t_0, t_1].[18] The system is observable at x_0 if I(x_0) = \{x_0\}, meaning no other state can mimic its input-output behavior.[18] Global observability extends this property to the entire state space, requiring I(x) = \{x\} for all x.[18] However, due to the inherent complexities of nonlinear dynamics, such as multiple equilibria or bifurcations, global observability is rare and difficult to achieve; instead, local variants are more commonly analyzed.[19] Local observability at x_0 holds if, for every open neighborhood U of x_0, the restricted indistinguishability set I_U(x_0) = \{x_0\}.[18] Weak observability refines this by requiring the existence of some neighborhood U such that I(x_0) \cap U = \{x_0\}, often assuming specific inputs like zero or constant.[18] Local weak observability strengthens it further, ensuring that for some open U containing x_0, every subneighborhood V \subset U satisfies I_V(x_0) = \{x_0\}.[18] A key analytical tool for assessing these properties is the observability rank condition, introduced for general nonlinear systems.[18] Consider the observation space \mathcal{G}, the smallest linear space of smooth functions on the state space that contains the components of h and is closed under Lie differentiation along the system vector fields (for input-affine systems, along the drift f and the input fields g_i), where L_f denotes the Lie derivative along f. The system satisfies the observability rank condition at x_0 if the dimension of the differential codistribution d\mathcal{G}(x_0) equals n, the state dimension.[18] This condition implies local weak observability at x_0.[18] For the prevalent class of input-affine nonlinear systems, \dot{x} = f(x) + \sum_{i=1}^m g_i(x) u_i, y = h(x), the observability codistribution \Omega^* is the smallest codistribution invariant under Lie differentiation along f and the g_i, containing the differentials dh_j of the output functions.[19] It is computed iteratively: \begin{align*} \Omega_0 &= \operatorname{span}\{ dh_1, \dots, dh_p \}, \\ \Omega_{k} &= \Omega_{k-1} + L_f \Omega_{k-1} + \sum_{i=1}^m L_{g_i} \Omega_{k-1}, \\ \Omega^* &= \lim_{k \to \infty} \Omega_k. \end{align*} The rank condition requires \dim \Omega^*(x_0) = n, ensuring local weak observability and enabling state reconstruction via observers in a neighborhood of x_0.[19] For analytic systems that are weakly controllable, this condition is equivalent to weak observability.[18] These definitions underpin observer design, such as extended Kalman filters or high-gain observers, by quantifying the information content in outputs relative to state trajectories.[19]
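The iteration over Lie derivatives can be automated symbolically. A sketch assuming SymPy, for an illustrative drift-only (u = 0) system; the input terms L_{g_i} would be added analogously:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
x = sp.Matrix([x1, x2])

# Illustrative drift-only system (pendulum-like) with output y = x1;
# the example is chosen for this sketch, not taken from the cited sources.
f = sp.Matrix([x2, -sp.sin(x1)])
h = x1

def lie_derivative(phi, vf, coords):
    """L_vf(phi) = (dphi/dx) vf for a scalar function phi."""
    grad = sp.Matrix([[sp.diff(phi, c) for c in coords]])
    return (grad @ vf)[0, 0]

# Stack the differentials dh, d(L_f h), ... as rows of the rank-test matrix.
rows, phi = [], h
for _ in range(len(x)):
    rows.append([sp.diff(phi, c) for c in x])
    phi = lie_derivative(phi, f, x)

O = sp.Matrix(rows)
print(O)         # Matrix([[1, 0], [0, 1]])
print(O.rank())  # 2 = n: the rank condition holds at every x
```

Observability Tests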
In nonlinear control theory, observability tests determine whether the internal state of a system can be uniquely reconstructed from its output measurements over a finite time interval, extending the linear case but accounting for the system's nonlinearity. Unlike linear time-invariant systems, where the Kalman observability matrix provides a straightforward rank condition, nonlinear systems require more sophisticated tools, often involving differential geometry and Lie theory to assess local distinguishability of states. These tests typically assume the system is described by \dot{x} = f(x, u), y = h(x), where x \in \mathbb{R}^n, u is the input, y \in \mathbb{R}^p is the output, and f, h are smooth functions.[18] The foundational observability test for nonlinear systems was introduced by Hermann and Krener, relying on the concept of the observability codistribution generated by iterated Lie derivatives of the output function. The Lie derivative of a scalar function \phi(x) along a vector field g(x) is defined as L_g \phi(x) = \frac{\partial \phi}{\partial x} g(x). For the system, the observability codistribution \mathcal{O}(x) at a state x is the span of the differentials dh(x), d(L_f h)(x), d(L_f^2 h)(x), \dots (in the nonlinear case derivatives of all orders may contribute, unlike the linear case where order n-1 suffices by Cayley-Hamilton), along with differentials of mixed Lie derivatives involving the input vector fields if the system is affine in control. The system is locally weakly observable at x_0 if \dim \mathcal{O}(x_0) = n, meaning the codistribution has full rank, ensuring that nearby states produce distinguishable output trajectories under suitable inputs. This rank condition is algebraic and computable for low-dimensional systems, but it provides only local guarantees and assumes C^\infty smoothness of the functions.[18] For analytic nonlinear systems, the Hermann-Krener rank condition strengthens to imply local observability (unique state reconstruction in a neighborhood) when combined with weak controllability, as the analyticity ensures the local embedding of the state manifold via the output map. In practice, this test is applied by constructing the observability matrix whose rows are the Jacobians of these Lie derivatives; full rank n confirms local weak observability. Extensions include numerical approximations using Taylor expansions or symbolic computation for higher-order terms, though exact rank computation can be challenging due to symbolic complexity.[18] Alternative tests address specific nonlinear classes, such as immersion-based methods for systems linearizable by output injection, where observability is checked via the involutivity of the distribution orthogonal to the output derivatives. These build on the core Lie derivative framework but adapt to structural properties like feedback equivalence. Overall, while no universal global test exists due to nonlinear pathologies (e.g., multiple equilibria), the Lie-based rank condition remains the cornerstone for both theoretical analysis and observer design in nonlinear systems.[18]
Generalizations
Static Systems
In control theory, static systems are characterized by algebraic relations without time-dependent dynamics, typically modeled as y = C x + D u, where y \in \mathbb{R}^m represents the measured outputs, x \in \mathbb{R}^n the state variables, u \in \mathbb{R}^p the known inputs, C \in \mathbb{R}^{m \times n} the state-output matrix, and D \in \mathbb{R}^{m \times p} the input-output matrix.[20] Unlike dynamic systems, where observability involves trajectories over time, static systems rely solely on instantaneous measurements to infer states.[21] Observability in static systems is defined as the capability to uniquely determine the state x from the outputs y and inputs u. Assuming u is known, this reduces to solving the linear equation y - D u = C x, which has a unique solution precisely when \rank(C) = n. In the absence of inputs (D = 0), the condition is the same: \rank(C) = n ensures the kernel of C is trivial, so distinct states produce distinct outputs.[20] If m < n, full observability is impossible, as the number of measurements cannot resolve all states.[21] Testing observability involves computing the rank of C (or the augmented form) via singular value decomposition or Gaussian elimination; the system is observable if the smallest singular value exceeds a numerical threshold, confirming injectivity (a numerical sketch follows the table below).[20] Structural observability extends this to sparse or graph-based representations, where the incidence matrix M (encoding variable relations) is analyzed: a system is structurally observable if \rank(M_m) = v - m, with M_m the submatrix for measured variables, v the total variables, and m measurements, allowing decomposition into observable and non-observable subspaces.[20] Applications of static observability appear in fault detection, sensor placement, and process monitoring, such as chemical plants or energy networks, where algebraic models estimate unmeasured variables. Redundancy relations, derived as \Omega y = 0 with \Omega a left null-space projector of full rank, enable fault isolation while preserving state estimation.[20] Dependability assessments quantify fault tolerance through the analytical redundancy degree—the maximum number of sensor failures sustainable without rank deficiency in C—followed by stochastic simulations for reliability metrics.[21]

| Aspect | Condition for Observability | Interpretation |
|---|---|---|
| Full State Reconstruction | \rank(C) = n | Unique x from y = C x |
| With Inputs | \rank(C) = n | Unique x from y = C x + D u with known u |
| Structural Test | \rank(M_m) = v - m | Estimability in sparse algebraic networks |
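
A numerical sketch of the static rank test, assuming NumPy (the matrices are illustrative examples, not taken from the cited sources):

```python
import numpy as np

# Illustrative static model y = C x + D u with a known input u.
C = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 3.0]])   # third row = row1 + row2 -> rank 2 < n = 3
print(np.linalg.matrix_rank(C))  # 2: x cannot be determined uniquely

# Test via SVD: the smallest singular value acts as the injectivity margin.
print(np.linalg.svd(C, compute_uv=False)[-1])  # ~0 here

# With an injective C, the state follows by least squares on y - D u.
C_obs = np.eye(3)
D = np.array([[0.5], [0.0], [0.0]])
x_true, u = np.array([1.0, -2.0, 0.5]), np.array([2.0])
y = C_obs @ x_true + D @ u
x_est, *_ = np.linalg.lstsq(C_obs, y - D @ u, rcond=None)
print(x_est)  # [ 1.  -2.   0.5]
```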