
State variable

In control theory and dynamical systems, a state variable is one of a minimal set of variables that fully describe the internal condition of a system at a given time, enabling the prediction of its future behavior under specified inputs and dynamics. These variables are typically chosen to correspond to the system's energy-storing elements, such as capacitor voltages or inductor currents in electrical circuits, or positions and velocities in mechanical systems. The collection of state variables forms a state vector, which evolves according to first-order ordinary differential equations in continuous-time models or difference equations in discrete-time models. The concept of state variables underpins the state-space representation, a framework introduced by Rudolf E. Kalman in the late 1950s and early 1960s, which shifted analysis from input-output transfer functions to internal system descriptions. In this approach, the system's dynamics are expressed as \dot{x}(t) = Ax(t) + Bu(t) for linear time-invariant systems, where x(t) is the state vector, u(t) is the input vector, and A and B are system matrices; the output is then y(t) = Cx(t) + Du(t). This formulation facilitates advanced techniques like stability analysis, controllability and observability tests, pole placement, and state estimation via Kalman filters, making it essential for modeling complex systems in science and engineering. The minimal number of state variables required equals the order of the system, ensuring a compact yet complete description without redundancy. State variables extend beyond linear systems to nonlinear and time-varying cases, where their selection may involve generalized coordinates or other coordinates that capture the system's memory and trajectory. Applications span diverse fields, including aerospace engineering for flight control, robotics for motion planning, and signal processing for filtering noisy measurements, highlighting their role in bridging theoretical modeling with practical control design.

Fundamentals

Definition

In control theory, a state variable is one of a minimal set of variables that completely describe the internal condition of a system at any instant in time, enabling the prediction of its future evolution given the system's governing equations and external inputs. This set captures all essential information about the system's "memory" or past history, such that the values of these variables at an initial time t_0 suffice to determine the system's trajectory for all subsequent times t \geq t_0. The concept, formalized in modern control theory, emphasizes that the state provides a unique snapshot from which all future behavior can be computed without recourse to earlier history. The requirement for minimality ensures that the state variables form the smallest possible set without redundancy, with the number of such variables equal to the order of the system, corresponding, for instance, to the number of energy-storing elements in physical realizations like mechanical or electrical systems. This minimal representation avoids superfluous variables while fully encoding the system's dynamics, making it foundational for analysis and design in fields such as control engineering. Initial conditions, specified by assigning values to these state variables at t_0, uniquely fix the system's response path under given inputs, highlighting their role in distinguishing different possible evolutions from the same model. State variables interact with the system's inputs and outputs in a structured manner: they evolve dynamically in response to inputs according to the system's equations, while outputs are derived as functions of the current state, the inputs, and possibly explicit time dependence. This relationship underpins the state-space framework, where inputs drive changes in the state, and the state in turn governs observable outputs, facilitating comprehensive modeling of complex interactions without loss of information.

Historical Development

The concept of state variables traces its early roots to the 19th century, particularly through James Clerk Maxwell's analysis of centrifugal governors in steam engines. In his 1868 paper "On Governors," Maxwell provided the first mathematical treatment of feedback control systems, examining stability and transient behavior in terms that implicitly captured internal system states, such as angular positions and velocities, to regulate engine speed. This work laid foundational groundwork for control theory, influencing later developments in stability analysis without explicitly formalizing state-space representations. The formal emergence of state-space methods occurred in the mid-20th century amid the transition from classical frequency-domain techniques, like Laplace transforms and Bode plots, to time-domain approaches. This shift accelerated in the United States during the space age, as aerospace applications demanded multivariable control and modeling beyond the limitations of single-input single-output transfer functions. The Apollo program's guidance and navigation systems, for instance, relied on state-space formulations for trajectory prediction and correction. A pivotal milestone came in 1960 with Rudolf E. Kalman's contributions, which established the state-space approach as the cornerstone of modern control theory. In "A New Approach to Linear Filtering and Prediction Problems," Kalman introduced state-variable models for optimal estimation in noisy environments, while his concurrent work on controllability and observability provided criteria to assess system realizability and design. These ideas, building on earlier concepts, enabled the analysis of linear systems via first-order differential equations in state vectors. Following the 1960s, state-space methods integrated with optimal control frameworks, notably through the linear-quadratic regulator (LQR), which minimized quadratic cost functions over state trajectories and became widely adopted for stabilizing complex systems.
By the 1980s and 1990s, advancements in computational tools, such as numerical solvers in software libraries like SLICOT and MATLAB's Control System Toolbox, facilitated the simulation and design of high-dimensional state-space models, broadening their application in engineering practice.

Mathematical Representation

Continuous-Time Systems

In continuous-time systems, the state-space representation provides a unified framework for modeling the dynamics of linear time-invariant (LTI) systems using first-order differential equations. The core equation is given by \dot{x}(t) = A x(t) + B u(t), where x(t) \in \mathbb{R}^n is the state vector capturing the system's internal conditions at time t, A \in \mathbb{R}^{n \times n} is the system matrix describing the natural evolution of the state, B \in \mathbb{R}^{n \times m} is the input matrix relating the input vector u(t) \in \mathbb{R}^m to the state dynamics, and \dot{x}(t) denotes the time derivative of the state. This formulation, introduced by Kalman, unifies the description of multi-input multi-output systems by focusing on the state as a minimal set of variables sufficient to predict future behavior given the inputs. The output of the system is related to the states and inputs through the equation y(t) = C x(t) + D u(t), where y(t) \in \mathbb{R}^p is the output vector, C \in \mathbb{R}^{p \times n} is the output matrix mapping states to outputs, and D \in \mathbb{R}^{p \times m} is the feedthrough matrix accounting for direct transmission from inputs to outputs without state involvement. For LTI systems, the matrices A, B, C, and D are constant, ensuring the system's response is linear in the states and inputs and invariant to time shifts. The general solution to the state equation, assuming an initial state x(t_0) at time t_0, is obtained by solving the linear ordinary differential equation: x(t) = e^{A(t - t_0)} x(t_0) + \int_{t_0}^t e^{A(t - \tau)} B u(\tau) \, d\tau, where e^{A(t - t_0)} is the matrix exponential serving as the state transition matrix that propagates the homogeneous solution forward in time. This integral form incorporates the forced response due to the input, leveraging the superposition principle inherent to linear systems.
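As a concrete illustration of the solution formula, the following sketch computes the zero-input response x(t) = e^{A(t - t_0)} x(t_0) for a hypothetical two-state system, approximating the matrix exponential by a truncated Taylor series. The matrices, initial state, and time horizon are illustrative assumptions, not drawn from any specific reference:

```python
# Zero-input response x(t) = e^{A(t - t0)} x(t0) of a two-state system,
# with the matrix exponential approximated by a truncated Taylor series.
# The matrices and initial state are illustrative assumptions.

def mat_mul(X, Y):
    """Multiply two 2x2 matrices."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_exp(A, t, terms=30):
    """Approximate e^{A t} by summing (A t)^k / k!."""
    At = [[A[i][j] * t for j in range(2)] for i in range(2)]
    result = [[1.0, 0.0], [0.0, 1.0]]    # k = 0 term: identity
    power = [[1.0, 0.0], [0.0, 1.0]]     # running (A t)^k
    fact = 1.0
    for k in range(1, terms):
        power = mat_mul(power, At)
        fact *= k
        result = [[result[i][j] + power[i][j] / fact for j in range(2)]
                  for i in range(2)]
    return result

A = [[0.0, 1.0], [-2.0, -3.0]]           # stable example (eigenvalues -1, -2)
x0 = [1.0, 0.0]
Phi = mat_exp(A, 1.0)                    # state transition matrix at t = 1
x_t = [Phi[0][0] * x0[0] + Phi[0][1] * x0[1],
       Phi[1][0] * x0[0] + Phi[1][1] * x0[1]]
# Analytically, x(1) = (2e^{-1} - e^{-2}, -2e^{-1} + 2e^{-2}).
```

In production code, diagonalization or scaling-and-squaring algorithms would be preferred over a raw Taylor series; the series form is used here only to keep the example self-contained.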
While the state-space model primarily addresses LTI systems under linearity and time-invariance assumptions, it extends to nonlinear continuous-time systems by generalizing the equations to \dot{x}(t) = f(x(t), u(t)) and y(t) = h(x(t), u(t)), where f and h are nonlinear functions.

Discrete-Time Systems

In discrete-time systems, the state-space model describes the evolution of the state vector at successive time steps, typically indexed by the integer k. The state update is given by x(k+1) = A x(k) + B u(k), where x(k) \in \mathbb{R}^n is the state vector, u(k) \in \mathbb{R}^m is the input vector, A \in \mathbb{R}^{n \times n} is the state matrix, and B \in \mathbb{R}^{n \times m} is the input matrix. This formulation, introduced by Rudolf E. Kalman in his foundational work on linear filtering, captures the dynamics of systems such as digital controllers and sampled-data processes, where the state at each step depends linearly on the previous state and the current input. The output equation complements the state update, relating the observed output y(k) \in \mathbb{R}^p to the current state and input via y(k) = C x(k) + D u(k), with C \in \mathbb{R}^{p \times n} as the output matrix and D \in \mathbb{R}^{p \times m} as the feedthrough matrix. This pair of equations forms the canonical discrete-time state-space representation, enabling analysis of systems like digital filtering algorithms and discrete-event simulations. The model assumes linearity and time-invariance unless specified otherwise, distinguishing it from continuous-time counterparts by using difference equations rather than differential equations. The solution to the state update equation can be expressed iteratively, starting from an initial state x(k_0) at time k_0: x(k) = A^{k - k_0} x(k_0) + \sum_{m = k_0}^{k-1} A^{k-1-m} B u(m), for k \geq k_0. This closed-form expression highlights the Markovian property of the state, where future states depend only on the current state and inputs, facilitating computations in recursive algorithms like the Kalman filter. It arises directly from repeated substitution of the state equation, underscoring the model's suitability for prediction in time-series data. Discrete-time models are often derived from continuous-time systems through sampling with a zero-order hold (ZOH), which maintains a constant input over each sampling interval T.
The discretized matrices are A_d = e^{A T}, \quad B_d = \int_0^T e^{A \tau} \, d\tau \, B, where A and B are the continuous-time matrices. This exact discretization preserves the system's response to piecewise-constant inputs, making it essential for implementing analog controllers on digital hardware, such as in aerospace and robotics applications. The matrix exponential e^{A T} can be computed via series expansion or eigenvalue decomposition, ensuring numerical stability for small T.
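The ZOH formulas can be sketched numerically by summing the Taylor series for both e^{AT} and its integral. The example below uses an assumed diagonal system, chosen so the result can be checked against the scalar formulas A_d = e^{aT} and B_d = b(e^{aT} - 1)/a:

```python
# Zero-order-hold discretization of x' = A x + B u for a 2x2 system:
#   A_d = e^{A T},   B_d = (integral_0^T e^{A tau} d tau) B,
# both computed from truncated Taylor series. Example values are assumed.

def mat_mul(X, Y):
    """Multiply two 2x2 matrices."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def zoh_discretize(A, B, T, terms=30):
    Ad = [[1.0, 0.0], [0.0, 1.0]]      # k = 0 term of e^{AT}: the identity
    S = [[T, 0.0], [0.0, T]]           # k = 0 term of the integral: T * I
    power = [[1.0, 0.0], [0.0, 1.0]]   # running A^k
    fact, Tk = 1.0, 1.0                # running k! and T^k
    for k in range(1, terms):
        power = mat_mul(power, A)
        fact *= k
        Tk *= T
        for i in range(2):
            for j in range(2):
                Ad[i][j] += power[i][j] * Tk / fact                  # A^k T^k / k!
                S[i][j] += power[i][j] * Tk * T / (fact * (k + 1))   # A^k T^{k+1} / (k+1)!
    Bd = [S[0][0] * B[0] + S[0][1] * B[1],
          S[1][0] * B[0] + S[1][1] * B[1]]
    return Ad, Bd

# Diagonal example: each state obeys x' = a x + b u independently,
# so A_d = e^{aT} and B_d = b (e^{aT} - 1) / a per state.
Ad, Bd = zoh_discretize([[-1.0, 0.0], [0.0, -2.0]], [1.0, 1.0], 0.1)
```

Libraries such as SciPy expose this conversion directly (for instance via `scipy.signal.cont2discrete` with the ZOH method), which is preferable in practice to a hand-rolled series.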

Properties and Analysis

Controllability

In control theory, controllability refers to the ability of a system to be driven from any initial state to any desired final state within a finite time interval using appropriate input signals. This property is fundamental for linear time-invariant (LTI) systems represented in state-space form, where the state vector x(t) evolves according to \dot{x}(t) = Ax(t) + Bu(t), with A as the system matrix, B as the input matrix, and u(t) as the control input. For continuous-time LTI systems, controllability is determined by the controllability matrix \mathcal{C} = [B, AB, A^2B, \dots, A^{n-1}B], where n is the dimension of the state space. The system is controllable if and only if the rank of \mathcal{C} equals n, ensuring that the columns of \mathcal{C} span the entire state space. This condition, known as the Kalman rank condition, provides a necessary and sufficient criterion for controllability and was established as part of the foundational framework for state-space analysis. In discrete-time LTI systems, described by x(k+1) = Ax(k) + Bu(k), the controllability matrix takes the identical form \mathcal{C} = [B, AB, A^2B, \dots, A^{n-1}B], and the system is controllable if \operatorname{rank}(\mathcal{C}) = n. The Kalman rank condition applies similarly here, confirming that the set of states reachable from the origin covers the full state space under admissible inputs. The Kalman rank condition has critical implications for control design, particularly in techniques like pole placement, where controllability ensures that state feedback can arbitrarily assign the closed-loop eigenvalues of the system matrix to achieve desired dynamic performance. For instance, in multi-input systems, this equivalence allows for the synthesis of stabilizing controllers by transforming the system into a controllable canonical form.
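For a two-state, single-input system the rank test reduces to checking that the 2×2 matrix [B, AB] is nonsingular. A minimal sketch of the Kalman rank condition, with example matrices assumed purely for illustration:

```python
# Kalman rank test for a 2-state, single-input system:
# the controllability matrix [B, AB] is 2x2, so full rank means det != 0.
# Example matrices are assumed for illustration.

def controllable_2x2(A, B):
    """Return True if the pair (A, B) passes the Kalman rank test (n = 2)."""
    AB = [A[0][0] * B[0] + A[0][1] * B[1],
          A[1][0] * B[0] + A[1][1] * B[1]]
    det = B[0] * AB[1] - B[1] * AB[0]   # determinant of [B, AB]
    return abs(det) > 1e-12

# A force entering only the second (velocity) state still steers both states:
ok = controllable_2x2([[0.0, 1.0], [-2.0, -3.0]], [0.0, 1.0])
# A decoupled mode that the input never reaches is uncontrollable:
bad = controllable_2x2([[1.0, 0.0], [0.0, 2.0]], [1.0, 0.0])
```

For general dimensions, a numerical rank computation (e.g. an SVD-based rank of the stacked matrix) replaces the determinant check.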

Observability

In control theory, observability refers to the property of a dynamic system that allows the initial state x(t_0) to be uniquely reconstructed from knowledge of the input u(t) and output y(t) over a finite time interval [t_0, t_0 + T] for some T > 0. This concept is fundamental for state estimation, as it ensures that all internal states influencing the system's behavior can be inferred from external measurements, distinguishing the system from cases where certain states produce identical input-output responses regardless of their values. For linear time-invariant (LTI) systems, observability is tested using the observability matrix. In continuous-time systems described by \dot{x} = Ax + Bu, y = Cx + Du, the observability matrix is defined as \mathcal{O} = \begin{bmatrix} C \\ CA \\ \vdots \\ CA^{n-1} \end{bmatrix}, where n is the dimension of the state vector x, A is the state matrix, and C is the output matrix. The system is observable if and only if \mathcal{O} has full rank n, meaning its rows span the entire n-dimensional state space. For discrete-time LTI systems given by x_{k+1} = Ax_k + Bu_k, y_k = Cx_k + Du_k, the observability matrix takes the analogous form \mathcal{O} = \begin{bmatrix} C \\ CA \\ \vdots \\ CA^{n-1} \end{bmatrix}, and the system is observable if \mathcal{O} has full rank n. This condition guarantees that the initial state can be uniquely determined from a finite sequence of inputs and outputs. A key application of observability is in state estimation via the Luenberger observer, which reconstructs the state vector when direct measurement is unavailable. For a continuous-time LTI system, the observer dynamics are given by \dot{\hat{x}} = A\hat{x} + Bu + L(y - C\hat{x}), where \hat{x} is the estimated state, u is the input, y is the measured output, and L is the observer gain matrix chosen to ensure stability of the error dynamics \dot{e} = (A - LC)e with e = x - \hat{x}. This structure requires the pair (A, C) to be observable, allowing the estimation error to converge asymptotically to zero under appropriate gain selection.
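By duality with the controllability test, a two-state, single-output system is observable exactly when the 2×2 matrix stacking C and CA is nonsingular. A minimal sketch, again with example matrices assumed for illustration:

```python
# Dual rank test for observability of a 2-state, single-output system:
# the observability matrix stacks C and CA, so the pair (A, C) is
# observable iff det([C; CA]) != 0. Example matrices are assumed.

def observable_2x2(A, C):
    """Return True if the pair (A, C) passes the observability rank test (n = 2)."""
    CA = [C[0] * A[0][0] + C[1] * A[1][0],
          C[0] * A[0][1] + C[1] * A[1][1]]
    det = C[0] * CA[1] - C[1] * CA[0]   # determinant of the stacked matrix
    return abs(det) > 1e-12

# Measuring only position still reveals velocity through the dynamics:
ok = observable_2x2([[0.0, 1.0], [-2.0, -3.0]], [1.0, 0.0])
# Two identical decoupled modes cannot be separated by a single output:
bad = observable_2x2([[1.0, 0.0], [0.0, 1.0]], [1.0, 0.0])
```

The same duality means any general-purpose controllability routine can test observability by applying it to the transposed pair (Aᵀ, Cᵀ).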

Applications and Examples

In Mechanical Systems

In mechanical systems, state variables provide a framework for modeling the dynamics of physical components such as masses, springs, and dampers by capturing the energy storage mechanisms inherent to the system. A canonical example is the second-order mass-spring-damper system, which consists of a mass m attached to a spring with stiffness k and a damper with coefficient c, subjected to an external force u(t). The governing differential equation for the displacement x(t) of the mass is given by m \ddot{x} + c \dot{x} + k x = u(t), where \ddot{x} and \dot{x} represent acceleration and velocity, respectively. To represent this in state-space form, suitable state variables are selected as the position x_1 = x and the velocity x_2 = \dot{x}, which together fully describe the system's configuration and momentum at any instant. These choices align with the general continuous-time state-space representation, where the state vector evolves according to first-order differential equations. The resulting state equations are \dot{x_1} = x_2, \quad \dot{x_2} = -\frac{k}{m} x_1 - \frac{c}{m} x_2 + \frac{1}{m} u. This formulation transforms the second-order equation into a pair of coupled first-order equations, facilitating analysis and simulation. In matrix notation, the state-space model is expressed as \dot{\mathbf{x}} = A \mathbf{x} + B u with output y = C \mathbf{x} (assuming the position as the measured output and no direct feedthrough), where A = \begin{bmatrix} 0 & 1 \\ -\frac{k}{m} & -\frac{c}{m} \end{bmatrix}, \quad B = \begin{bmatrix} 0 \\ \frac{1}{m} \end{bmatrix}, \quad C = \begin{bmatrix} 1 & 0 \end{bmatrix}. The matrix A encodes the internal dynamics, including the restoring force from the spring and dissipative effects from the damper, while B describes how the input force influences the states. The state variables x_1 and x_2 directly correspond to the system's energy storage: x_1 relates to the potential energy stored in the spring (\frac{1}{2} k x_1^2), and x_2 to the kinetic energy of the mass (\frac{1}{2} m x_2^2).
This interpretation underscores how state variables encapsulate the minimal information needed to predict future behavior, accounting for both conservative and dissipative elements in mechanical systems. For instance, in robotics or vehicle suspension design, these states enable precise control strategies by tracking energy distribution over time.
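The state equations above can be integrated directly. The sketch below uses a simple forward-Euler step with assumed parameter values and checks the energy interpretation numerically: with damping present and no input, the stored energy \frac{1}{2} k x_1^2 + \frac{1}{2} m x_2^2 decays toward zero:

```python
# Forward-Euler simulation of the mass-spring-damper state equations
#   x1' = x2,  x2' = -(k/m) x1 - (c/m) x2 + u/m.
# Parameter values are illustrative assumptions.

m, c, k = 1.0, 0.5, 4.0
x1, x2 = 1.0, 0.0            # released from rest at unit displacement
dt, u = 1e-4, 0.0            # small time step; unforced response

energy0 = 0.5 * k * x1**2 + 0.5 * m * x2**2   # initial stored energy
for _ in range(100_000):     # simulate 10 seconds
    dx1 = x2
    dx2 = -(k / m) * x1 - (c / m) * x2 + u / m
    x1 += dt * dx1
    x2 += dt * dx2
energy = 0.5 * k * x1**2 + 0.5 * m * x2**2
# With c > 0 and u = 0, the damper dissipates the initial spring energy.
```

Forward Euler is used only for transparency; a higher-order integrator (or the exact matrix-exponential solution) would be preferred for accuracy over long horizons.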

In Electrical Circuits

In electrical circuits, the state-space approach is particularly useful for modeling dynamic systems involving energy-storing elements, such as inductors and capacitors. A classic example is the series RLC circuit, consisting of a resistor (R), inductor (L), and capacitor (C) connected in series with an input voltage source u(t). The governing integro-differential equation for the current i(t) through the circuit is derived from Kirchhoff's voltage law: L \frac{di}{dt} + R i + \frac{1}{C} \int i \, dt = u(t). This second-order equation captures the circuit's dynamic behavior, where the integral term represents the voltage across the capacitor. To express this in state-space form, appropriate state variables are selected based on the energy-storing components: the inductor current x_1 = i (which stores magnetic energy) and the capacitor voltage x_2 = \frac{1}{C} \int i \, dt (which stores electric energy). These choices align with the principle that states should reflect independent energy-storage mechanisms in the circuit, allowing the system's future behavior to be determined from current states and inputs. Differentiating the integral term yields \dot{x_2} = \frac{x_1}{C}, while from the governing equation, \dot{x_1} = -\frac{R}{L} x_1 - \frac{1}{L} x_2 + \frac{1}{L} u. Thus, the state equations are: \dot{x_1} = -\frac{R}{L} x_1 - \frac{1}{L} x_2 + \frac{1}{L} u, \quad \dot{x_2} = \frac{x_1}{C}. For an output such as the capacitor voltage, the measurement equation is y = x_2. In matrix form, the state-space model is compactly represented as \dot{\mathbf{x}} = A \mathbf{x} + B u, y = C \mathbf{x}, where A = \begin{bmatrix} -\frac{R}{L} & -\frac{1}{L} \\ \frac{1}{C} & 0 \end{bmatrix}, \quad B = \begin{bmatrix} \frac{1}{L} \\ 0 \end{bmatrix}, \quad C = \begin{bmatrix} 0 & 1 \end{bmatrix}. This formulation facilitates analysis of the circuit's transient response, stability, and control design, emphasizing how the states encapsulate the circuit's memory through its passive elements.
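These matrices can be sanity-checked numerically with assumed component values: driving the circuit with a constant source voltage and integrating the state equations should settle at the DC steady state, where the inductor current is zero and the capacitor voltage equals the source voltage:

```python
# Forward-Euler simulation of the series RLC state-space model,
# with x1 = inductor current and x2 = capacitor voltage.
# Component values and the step amplitude are illustrative assumptions.

R, L, C = 1.0, 0.5, 0.1
A = [[-R / L, -1.0 / L],
     [1.0 / C, 0.0]]
B = [1.0 / L, 0.0]

x1, x2 = 0.0, 0.0            # circuit initially at rest
dt, u = 1e-5, 5.0            # 5 V step input
for _ in range(1_000_000):   # simulate 10 seconds
    dx1 = A[0][0] * x1 + A[0][1] * x2 + B[0] * u
    dx2 = A[1][0] * x1 + A[1][1] * x2 + B[1] * u
    x1 += dt * dx1
    x2 += dt * dx2
# At DC steady state no current flows and the capacitor holds the source voltage.
```

Setting \dot{x}_1 = \dot{x}_2 = 0 in the state equations confirms the same result analytically: x_1 = 0 and x_2 = u.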
