
Behavioral modeling

Behavioral modeling is an approach within systems and control theory that describes dynamical systems in terms of their behaviors—the sets of all possible trajectories of the system variables—without distinguishing a priori between inputs and outputs. Pioneered by Jan C. Willems beginning in the late 1970s, it emerged to resolve inconsistencies in classical representations such as state-space models, transfer functions, and convolution models, offering a unified, physics-respecting framework for system analysis and design. This paradigm shifts the focus from internal mechanisms to signal behaviors, enabling advances in areas such as system identification, data-driven control, and control by interconnection. It applies particularly to linear time-invariant systems and has influenced modern developments in networked and data-driven systems.

Overview

Definition and scope

In the behavioral approach to systems theory, a dynamical system is formally defined as a triple (T, W, B), where T denotes the time axis, such as the real numbers \mathbb{R} for continuous time or the integers \mathbb{Z} for discrete time; W represents the signal space, typically \mathbb{R}^q for q variables; and B \subseteq W^T is the behavior, comprising the set of all valid signal trajectories w: T \to W that satisfy the system's constraints. This shifts the focus from internal mechanisms to the outward manifestations of the system, treating behaviors as relations imposed directly on signals without requiring a priori distinctions between inputs and outputs.

Unlike classical state-space or transfer-function methods, which presuppose causality, state variables, or partitioned input-output structures, the behavioral approach emphasizes constraints—such as differential or algebraic equations—that define the admissible signals, allowing for a more intrinsic description of the dynamics. The core principle is that systems are governed by laws or relations on the signals themselves, eschewing assumptions about hidden states or directional flows, which makes the approach particularly suitable for modeling interconnected or physical systems where such partitions are artificial or unclear.

The scope of behavioral modeling primarily encompasses linear time-invariant systems over continuous or discrete time domains, though extensions to nonlinear and time-varying cases exist; it originated within control theory but extends to applications in system identification, network analysis, and multidimensional systems such as partial differential equations in physics.

Historical development

The behavioral approach to systems theory was initiated by Jan C. Willems in the mid-1980s to address fundamental inconsistencies in classical transfer-function and state-space models, such as the non-realizability of certain transfer functions that lacked proper rational forms or corresponding minimal state-space realizations. In a series of foundational papers published in Automatica between 1986 and 1987, Willems introduced the core idea of viewing dynamical systems as sets of signal trajectories, or "behaviors," rather than predefined inputs and outputs, thereby resolving paradoxes like the artificial distinction between manifest and latent variables in physical systems. This shift emphasized a representation-free perspective, allowing for more flexible modeling of systems where an input-output structure was not presupposed.

Willems' seminal 1991 paper, "Paradigms and Puzzles in the Theory of Dynamical Systems," further formalized this signal-based framework, outlining a vocabulary of behaviors, behavioral equations, and latent variables to unify diverse representation problems in systems theory. The paper highlighted how traditional approaches often led to inconsistencies, such as non-causal behaviors being excluded despite their relevance in interconnected or descriptive models, and proposed intrinsic notions of controllability and observability independent of input/output partitioning.

Key milestones in the late 1980s included extensions to linear time-invariant systems, building on the initial behavioral definitions to incorporate polynomial matrix descriptions over rings for behavioral representations. In the 1990s, the development of representation theorems—such as kernel and image representations—provided canonical forms for behaviors, integrating deeply with polynomial matrix algebra to enable parametrizations like autoregressive and moving-average models. Willems remained the central figure until his death in 2013, with significant contributions from collaborators such as Jan Willem Polderman on realization theory and Ivan Markovsky on identification methods, as detailed in Polderman and Willems' 1998 textbook and Markovsky's later works on behavioral system identification.
The approach evolved from resolving early paradoxes in system representations to profoundly influencing modern data-driven control, where behaviors facilitate direct trajectory-based methods without explicit model identification, as seen in extensions like the fundamental lemma for linear systems. Recent advances as of 2025 include extensions to stochastic systems via stochastic versions of Willems' fundamental lemma and polynomial chaos expansions, and reformulations preserving LTI behaviors in data-driven contexts.

Core Concepts

Dynamical systems as sets of signals

In the behavioral approach to dynamical systems, signals are formalized as functions w: T \to W, where T denotes the time axis—either the real line \mathbb{R} for continuous-time systems or the integers \mathbb{Z} for discrete-time systems—and W is a finite-dimensional vector space, typically \mathbb{R}^q for some positive integer q, serving as the signal space for the system's manifest variables such as positions, velocities, or voltages. These signals represent the observable trajectories of the system over time, capturing its evolution without invoking hidden internal mechanisms.

A dynamical system is conceptualized as the behavior B, defined as the set of all admissible signal trajectories in W^T that satisfy the external constraints dictated by the system's physical or mathematical laws, eschewing any reliance on internal variables. Formally, the system is the triple \Sigma = (T, W, B) with B \subseteq W^T, where the constraints might arise from differential, difference, or algebraic relations among the signal components. This representation emphasizes the collective set of possible evolutions rather than a specific equational form.

A canonical example is the undamped mass-spring oscillator, modeled solely through the manifest signal w: \mathbb{R} \to \mathbb{R} representing the displacement of the mass from equilibrium. The behavior B comprises all twice-differentiable functions w satisfying the second-order differential constraint M w''(t) + k w(t) = 0 for all t \in \mathbb{R}, where M > 0 is the mass and k > 0 the spring constant; solutions are sinusoidal trajectories with frequency \sqrt{k/M}, illustrating how the set B encodes the system's oscillatory dynamics purely via signal relations.

Central prerequisite concepts in this framework include time-invariance, under which the behavior remains unchanged under temporal shifts—formally, B is shift-invariant if w \in B implies \sigma_\tau w \in B for all \tau \in T, with the shift operator defined by (\sigma_\tau w)(t) = w(t - \tau)—ensuring that the system's laws do not depend explicitly on absolute time.
Linearity requires W to be a vector space and B to be a subspace of W^T, satisfying superposition: if w_1, w_2 \in B, then \alpha w_1 + \beta w_2 \in B for all scalars \alpha, \beta. The full axiomatic structure accommodates both complete systems, where B fully captures all constraints without latent variables, and incomplete systems, where additional hidden variables may underlie the manifest behavior but are not essential for the signal-set description. This signal-centric view diverges from classical paradigms, such as state-space or transfer-function models, by imposing no a priori distinction between inputs and outputs; all variables are manifest signals treated symmetrically within the behavior B, allowing flexible partitioning based on the modeling task at hand rather than fixed roles.
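The mass-spring example above can be made concrete with a small numerical sketch (assuming NumPy; the function name in_behavior and all constants are illustrative choices, not from any library): a sampled trajectory is declared a member of B when the finite-difference residual of M w'' + k w vanishes, and superposition is checked directly.

```python
import numpy as np

# Sketch: membership test for the mass-spring behavior B = {w : M w'' + k w = 0},
# using a centered finite-difference approximation of w'' on a sampled trajectory.

M, k = 2.0, 8.0                        # mass and spring constant (illustrative)
omega = np.sqrt(k / M)                 # natural frequency sqrt(k/M) = 2 rad/s

def in_behavior(w, dt, tol=1e-3):
    """Check M w'' + k w ~ 0 at interior sample points."""
    w_dd = (w[2:] - 2 * w[1:-1] + w[:-2]) / dt**2   # finite-difference w''
    residual = M * w_dd + k * w[1:-1]
    return np.max(np.abs(residual)) < tol

dt = 1e-3
t = np.arange(0.0, 10.0, dt)
w1 = np.cos(omega * t)                 # admissible trajectory
w2 = np.sin(omega * t)                 # another admissible trajectory

print(in_behavior(w1, dt))             # True: sinusoid at omega lies in B
print(in_behavior(3*w1 - 5*w2, dt))    # True: superpositions stay in B (linearity)
print(in_behavior(np.cos(3*t), dt))    # False: wrong frequency violates the law
```

Shift-invariance could be checked the same way: any time-shifted copy of w1 passes the same residual test, since the constraint does not reference absolute time.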

Behaviors and trajectories

In the behavioral approach to systems theory, the behavior B is defined as the set of all trajectories w: \mathcal{T} \to \mathbb{R}^q that satisfy the constraints imposed by the system's laws, where \mathcal{T} \subseteq \mathbb{R} is the time axis and \mathbb{R}^q is the signal space. Thus, B \subseteq (\mathbb{R}^q)^{\mathcal{T}}, representing the collection of all admissible signal evolutions that comply with physical or mathematical relations, such as differential equations. This formulation treats the system as the behavior itself, without presupposing input-output partitions, allowing for a unified treatment of various dynamical phenomena.

Behaviors exhibit key properties that characterize their structure. Completeness ensures that B includes all possible trajectories satisfying the system equations, including weak solutions approximable by smooth strong ones, providing a robust foundation for modeling. Autonomy describes behaviors with no free variables, where trajectories are fully determined by initial conditions, as in closed systems governed by equations like P\left(\frac{d}{dt}\right) w = 0 with \det P(\xi) \neq 0, ensuring that the past uniquely determines the future. In contrast, controlled behaviors incorporate free variables (inputs) that influence the evolution, as in input/output systems where P\left(\frac{d}{dt}\right) y = Q\left(\frac{d}{dt}\right) u. Shift-invariance, a hallmark of time-invariant systems, requires that if the trajectory t \mapsto w(t) belongs to B, then so does its shift t \mapsto w(t - t_1) for any t_1, preserving the behavior under temporal translations.

Trajectories within B can be full, spanning the entire time domain \mathcal{T} (e.g., in L^1_{\mathrm{loc}}(\mathbb{R}, \mathbb{R}^q)), or restricted to subsets such as (-\infty, t_0] or [0, \infty), allowing analysis of past, future, or finite-interval evolutions. For instance, in modeling an electrical resistor, trajectories (V, I) satisfy V = R I over the full real line, while restricted versions might focus on transient responses.
Interconnections of behaviors arise in networked systems through projections (restricting to shared variables), direct sums such as the decomposition B = B_{\mathrm{aut}} \oplus B_{\mathrm{contr}} into autonomous and controllable parts, or intersections enforcing compatibility, enabling the modeling of multi-component interactions like fluid tanks linked by pressure and flow constraints. A fundamental mathematical property of complete behaviors is closure under concatenation: if two trajectories w_1 and w_2 are compatible (e.g., matching states at a junction time t_0), their concatenation forms a valid trajectory in B, supporting the analysis of evolutions pieced together from segments. This property is particularly useful in hybrid systems, where behaviors model switches between modes, or in networked systems like electrical circuits, where interconnections via latent variables capture emergent dynamics without explicit state specifications.

Constraint modeling in the behavioral framework represents laws as relations R(w) = 0, where R encodes the system's equations (differential or algebraic), defining B as the solution set, or kernel, of R. For example, Newton's second law yields trajectories (q, F) satisfying F = m \frac{d^2 q}{dt^2}, with B as the set of evolutions for which this relation holds. This relational view unifies diverse constraints, from physical laws to interconnections, emphasizing behaviors as the primary objects of analysis and synthesis.
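Closure under concatenation can be illustrated on a toy autonomous discrete-time behavior (a minimal sketch assuming NumPy; the behavior, the decay rate a, and the helper in_behavior are invented for illustration): two trajectories that agree at the junction time glue into a single admissible trajectory.

```python
import numpy as np

# Sketch: concatenation of two compatible trajectories of the autonomous
# discrete-time behavior B = {w : w(t+1) = a w(t)} with a = 0.5.

a = 0.5

def in_behavior(w, tol=1e-12):
    """Membership test: the one-step law must hold along the whole trajectory."""
    return np.max(np.abs(w[1:] - a * w[:-1])) < tol

t0 = 5
w_past   = 4.0 * a ** np.arange(0, t0 + 1)     # trajectory on [0, t0]
w_future = w_past[-1] * a ** np.arange(0, 6)   # trajectory on [t0, t0+5], matching at t0

w_cat = np.concatenate([w_past, w_future[1:]]) # glue at the junction time t0
print(in_behavior(w_past), in_behavior(w_future), in_behavior(w_cat))
```

If the two pieces did not match at t0, the concatenation would violate the one-step law exactly at the junction, which is why compatibility (here, equal values at t0) is required.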

Linear Time-Invariant Systems

Kernel representations

In the behavioral approach to linear time-invariant (LTI) systems, kernel representations provide a constraint-based formulation of the behavior B \subseteq (\mathbb{R}^q)^\mathbb{R}, defined as the set of all trajectories w satisfying differential or difference constraints without reference to state variables. For continuous-time LTI systems, the behavior is given by B = \ker(R(\sigma)) = \{ w \in C^\infty(\mathbb{R}, \mathbb{R}^q) \mid R\left(\frac{d}{dt}\right) w(t) = 0 \ \forall t \in \mathbb{R} \}, where \sigma = \frac{d}{dt} denotes the differential operator and R(\xi) \in \mathbb{R}[\xi]^{g \times q} is a polynomial matrix of full row rank g. This representation corresponds to a system of linear constant-coefficient differential equations, with the full row rank condition ensuring a minimal number of independent equations that uniquely characterize B.

In discrete time, the formulation is analogous: B = \ker(R(\sigma)) = \{ w: \mathbb{Z} \to \mathbb{R}^q \mid R(\sigma) w(t) = 0 \ \forall t \in \mathbb{Z} \}, where \sigma denotes the left shift operator (\sigma w)(t) = w(t+1) and R(\xi) is again a full row rank polynomial matrix. The full row rank property guarantees that the representation is minimal and eliminates redundant constraints.

Kernel representations are unique up to left multiplication by unimodular polynomial matrices (matrices invertible over the polynomials, i.e., with constant nonzero determinant): if R_1(\sigma) and R_2(\sigma) both minimally represent B, then R_1(\xi) = U(\xi) R_2(\xi) for some unimodular U(\xi). The lag of R, defined as the maximum degree over the rows of R(\xi), and the degree of R, the highest polynomial degree among its entries, together determine the order of the system, with the McMillan degree of B equaling the degree of \det(R(\xi)) in the square, full-rank (autonomous) case.
A representative example is the scalar second-order harmonic oscillator, where B = \ker\left( \frac{d^2}{dt^2} + 1 \right) = \{ w \in C^\infty(\mathbb{R}, \mathbb{R}) \mid \frac{d^2 w}{dt^2}(t) + w(t) = 0 \ \forall t \}, with trajectories w(t) = a \cos t + b \sin t for constants a, b; here, R(\xi) = \xi^2 + 1 has lag and degree 2, capturing the system's oscillatory dynamics without introducing latent states. This form directly encodes high-order differential equations, allowing reduction to first-order equivalents via state-space realizations if needed, but the behavioral formulation inherently avoids explicit state variables, focusing solely on manifest trajectory constraints.
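A discrete-time kernel representation is easy to evaluate numerically. The sketch below (assuming NumPy; the polynomial R(\xi) = \xi^2 - 1.5\xi + 0.5 and the helper apply_R are illustrative choices) applies R(\sigma) to sampled sequences and checks which ones lie in \ker R(\sigma).

```python
import numpy as np

# Sketch: a discrete-time kernel representation B = ker R(sigma) with
# R(xi) = xi^2 - 1.5 xi + 0.5, i.e. w(t+2) - 1.5 w(t+1) + 0.5 w(t) = 0.

r = np.array([0.5, -1.5, 1.0])        # coefficients of R, lowest degree first

def apply_R(w):
    """Evaluate (R(sigma) w)(t) = 0.5 w(t) - 1.5 w(t+1) + w(t+2)."""
    return r[0] * w[:-2] + r[1] * w[1:-1] + r[2] * w[2:]

# The roots of R are 1 and 0.5, so ker R consists of a*1^t + b*0.5^t.
t = np.arange(20)
w_good = 3.0 + 2.0 * 0.5 ** t
w_bad  = 0.8 ** t                      # 0.8 is not a root of R

print(np.allclose(apply_R(w_good), 0))   # True: w_good lies in ker R(sigma)
print(np.allclose(apply_R(w_bad), 0))    # False: the law is violated
```

The same pattern extends to vector-valued w, where r becomes a list of coefficient matrices and the residual must vanish row by row.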

Image representations

In the behavioral approach to linear time-invariant (LTI) systems, an image representation describes the behavior B as the image of a polynomial matrix operator applied to free latent variables: B = \operatorname{im} M(\sigma), where M(\xi) \in \mathbb{R}[\xi]^{q \times (q-m)} is a polynomial matrix with full column rank, \sigma denotes the shift operator (or the differential operator \frac{d}{dt} in continuous time), q is the total number of variables, and m is the number of output variables. This formulation generates all trajectories in the behavior by varying the latent (free) variables, emphasizing the generative aspect of the system rather than its constraints. The full column rank condition on M ensures minimality, meaning the representation captures the essential dynamics without redundancy.

Every controllable LTI behavior admits both a kernel and an image representation, and the two are linked through the duality established in polynomial matrix theory, particularly via the Smith form, which decomposes a polynomial matrix into invariant factors that preserve the behavior. This equivalence allows for translation between the constraint-based (kernel) and generative (image) viewpoints, facilitating analysis tailored to the problem at hand; for instance, a controllable behavior B = \ker R(\sigma) with R of full row rank can be converted to an image representation by computing a right annihilator M with R M = 0, using unimodular transformations.

Key structural invariants are visible in the image representation: the number of inputs is q - m, corresponding to the column dimension of M and reflecting the degrees of freedom in the system, while the number of outputs m equals q minus the rank of M. These invariants are independent of the specific representation chosen and are preserved under right multiplication of M by unimodular matrices, which yields behaviorally equivalent generators.
A representative example is the integrator chain, where the behavior consists of the signals generated by B = \operatorname{im} \begin{bmatrix} 1 \\ \xi \\ \xi^2 \end{bmatrix}, with \xi standing for \frac{d}{dt}; here, the free latent variable \ell generates trajectories w = (\ell, \ell', \ell''), so the second component is the derivative of the first and the third the derivative of the second. Equivalently, viewing the last component as the input, the other two arise by successive integrations, giving a second-order system with one input and two outputs. The image representation offers the advantage of distinguishing free (input-like) variables, which can be chosen arbitrarily to generate trajectories, from determined (output-like) variables, which are fully specified by the free ones and the matrix M, thereby clarifying causal structures without presupposing an input-output partition. It is particularly useful in realization theory, enabling the construction of state-space models or module-theoretic realizations directly from the polynomial description, bypassing explicit state variables and focusing on trajectory generation for controllable behaviors.
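The generative viewpoint for the integrator chain can be sketched numerically (assuming NumPy; the particular free signal \ell and the tolerances are illustrative): pick a smooth latent signal, generate w = (\ell, \ell', \ell'') by exact polynomial differentiation, and confirm that the generated trajectory satisfies the kernel-side constraints cutting out the same behavior.

```python
import numpy as np

# Sketch: generating trajectories of B = im [1; xi; xi^2] from a free latent
# signal ell. A polynomial ell keeps its derivatives exact.

ell = np.polynomial.Polynomial([1.0, -2.0, 0.0, 0.5])   # free C^inf signal
t = np.linspace(-1.0, 1.0, 101)

w1 = ell(t)                      # w1 = ell
w2 = ell.deriv(1)(t)             # w2 = d ell / dt
w3 = ell.deriv(2)(t)             # w3 = d^2 ell / dt^2

# Kernel-side check: the generated w satisfies (d/dt) w1 = w2 and
# (d/dt) w2 = w3, the constraints defining the same behavior.
print(np.allclose(np.gradient(w1, t, edge_order=2), w2, atol=1e-3))   # True
print(np.allclose(np.gradient(w2, t, edge_order=2), w3, atol=1e-3))   # True
```

Every choice of \ell yields a valid trajectory, which is exactly the "one input" degree of freedom; w2 and w3 are then determined, matching the input/output count q - m = 1, m = 2.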

Observability and Identification

Observability of latent variables

In the behavioral approach to systems theory, observability refers to the ability to uniquely determine latent variables—internal or hidden signals that model system memory or auxiliary quantities—from the observed manifest trajectories, where the observed behavior B_{\text{obs}} is a projection of the full behavior B_{\text{full}} consisting of all possible trajectories. Latent variables, such as states or unobserved currents in a circuit, are introduced in an extended representation to capture the full dynamics, and observability ensures that they can be reconstructed solely from the manifest signals without ambiguity.

The formulation hinges on the injectivity of the projection map from the full behavior onto the observed one: the latent variables are observable if distinct trajectories in B_{\text{full}} that agree on the observed components must be identical overall, implying no "hidden freedoms" in the system description. For a partitioned system with manifest signal w and latent signal \ell, this means that if two pairs (w, \ell) and (w, \ell') both belong to B_{\text{full}}, then \ell = \ell'.

In linear time-invariant (LTI) systems, observability is characterized through kernel representations, where the full behavior is defined by a polynomial relation R(\frac{d}{dt}) \begin{bmatrix} w \\ \ell \end{bmatrix} = 0. The system is observable if the kernel representation of B_{\text{obs}}, derived by eliminating \ell, determines the full behavior, ensuring that each manifest trajectory uniquely extends to the latent components. Mathematically, for the latent-variable form R(\frac{d}{dt}) w = M(\frac{d}{dt}) \ell, observability holds if the polynomial matrix M(\lambda) has full column rank d (the dimension of \ell) for all \lambda \in \mathbb{C}, preventing non-trivial solutions for \ell that are invisible from w.
This behavioral notion aligns with classical state-space observability in equivalent representations, where the pair (A, C) in the state-space model \dot{x} = A x + B u, y = C x + D u yields a full-rank observability matrix \mathcal{O} = \begin{bmatrix} C \\ CA \\ \vdots \\ CA^{n-1} \end{bmatrix} of rank n (the state dimension); the behavioral framework, however, avoids explicit state coordinates by focusing on trajectory relations. For instance, in an electrical circuit modeled with manifest port variables w (voltage and current) and latent branch variables \ell (internal voltages and currents), the latent signals are observable from w if the resistance-capacitance and inductance-resistance ratios satisfy a non-degeneracy condition like \frac{L}{R_L} \neq C R_C, allowing unique reconstruction via the system equations.
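The classical rank test referenced above is straightforward to compute. The following sketch (assuming NumPy; the matrices A, C and the helper observability_matrix are illustrative, not from a specific library) contrasts an observable measurement with one aligned to a left eigenvector of A, which hides a mode.

```python
import numpy as np

# Sketch: state-space observability test rank O(A, C) = n, the classical
# counterpart of behavioral observability of the latent state.

def observability_matrix(A, C):
    n = A.shape[0]
    rows = [C @ np.linalg.matrix_power(A, k) for k in range(n)]
    return np.vstack(rows)

A = np.array([[0.0,  1.0],
              [-2.0, -3.0]])          # eigenvalues -1 and -2

C_obs = np.array([[1.0, 0.0]])        # measures position only
O1 = observability_matrix(A, C_obs)
print(np.linalg.matrix_rank(O1))      # 2 = n: latent state reconstructible

# [1, 1] is a left eigenvector of A ([1,1] A = -2 [1,1]), so this
# measurement cannot distinguish trajectories along the hidden mode.
C_unobs = np.array([[1.0, 1.0]])
O2 = observability_matrix(A, C_unobs)
print(np.linalg.matrix_rank(O2))      # 1 < n: not observable
```

In behavioral terms, the rank-deficient case corresponds to two distinct full trajectories (w, \ell) and (w, \ell') sharing the same manifest signal w.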

System identification techniques

Behavioral identification involves estimating a kernel representation R or an image generator M from observed input-output data or general system trajectories. The process typically minimizes prediction errors for model fitting or employs subspace methods to parameterize the system's behavior without assuming an underlying state-space structure. For instance, given a data trajectory w_d, the identification seeks a representation that captures the manifest behavior \mathcal{B} while ensuring consistency with the data under conditions of persistency of excitation.

A key technique for kernel representation identification applies least-squares optimization to the polynomial coefficients of R(\sigma), where \sigma denotes the shift operator in time. This approach formulates the problem as fitting a linear difference equation R(\sigma) w = 0 to the data, often using structured total least-squares to account for errors in both inputs and outputs, yielding a minimal representation. Adaptations of ARMA models are used for scalar behaviors, where the autoregressive-moving-average structure aligns with the kernel representation, solved via least-squares coefficient estimation. The Ho-Kalman algorithm, originally developed for state-space realizations, is adapted in the behavioral subspace framework to construct minimal representations from Hankel matrices of impulse responses, bypassing explicit state estimation.

Subspace methods enable data-driven computation of behaviors by constructing Hankel matrices from trajectories, providing a non-parametric representation of the behavior without latent states. These matrices, denoted H_L(w_d), span the space of all possible trajectories of length L if the input is persistently exciting of sufficient order. Willems' fundamental lemma establishes that, for a controllable LTI system, the column space of the Hankel matrix equals the behavior restricted to horizon L, provided the data length satisfies T \geq (m+1)(L + n) - 1, where m is the input dimension and n the state dimension. This underpins model-free identification, allowing direct behavioral synthesis from data via low-rank approximations of the Hankel matrix.
For example, identifying a behavior from multiple trajectories involves verifying identifiability through rank conditions on the data matrices, such as a rank condition on the stacked input-output Hankel matrix, to ensure the behavior is fully spanned without redundancies. In a simulated scalar system with known order, trajectories are concatenated into a Hankel matrix, and a singular value decomposition reveals the minimal order via the number of non-zero singular values, confirming identifiability when the rank matches the behavioral dimension. Advances in non-parametric identification extend these methods to behaviors without predefined parametric forms, leveraging the fundamental lemma for direct parameterization in data-driven control.

Handling noise in behavioral identification employs regularization techniques, such as nuclear norm minimization on Hankel matrices or bounded uncertainty sets via set-membership methods, to robustify estimates against measurement errors while preserving theoretical guarantees on approximation quality. These approaches require only mildly persistent excitation and modest data lengths compared to alternatives, with empirical validation on benchmarks such as the air passengers time series showing effective noise mitigation through low-rank pre-processing.
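The Hankel rank condition can be demonstrated end to end on a toy system (a sketch assuming NumPy; the system y(t+1) = 0.9 y(t) + u(t) and the helper hankel are illustrative): for a controllable LTI system with m inputs and state dimension n, the depth-L Hankel matrix of a persistently excited joint trajectory w = (u, y) has rank mL + n rather than full row rank qL.

```python
import numpy as np

# Sketch of the rank structure behind Willems' fundamental lemma on the
# first-order system y(t+1) = 0.9 y(t) + u(t): rank H_L(w) = m*L + n.

rng = np.random.default_rng(0)
T, L, m, n = 200, 5, 1, 1

u = rng.standard_normal(T)            # random input: persistently exciting
y = np.zeros(T)
for t in range(T - 1):
    y[t + 1] = 0.9 * y[t] + u[t]

w = np.stack([u, y])                  # q x T joint trajectory, q = 2

def hankel(w, L):
    """Depth-L block Hankel matrix H_L(w): columns are length-L windows."""
    q, T = w.shape
    return np.column_stack([w[:, j:j + L].reshape(-1, order="F")
                            for j in range(T - L + 1)])

H = hankel(w, L)                      # (q*L) x (T - L + 1) = 10 x 196
print(H.shape)                        # (10, 196)
print(np.linalg.matrix_rank(H))       # m*L + n = 6, not the full q*L = 10
```

The rank deficit 10 - 6 = 4 counts the independent system laws active within each length-5 window, which is exactly the information a kernel-representation identifier extracts from the left null space of H.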

Applications and Extensions

In control theory

In the behavioral approach to control theory, controller design is conceptualized through the interconnection of the plant behavior \mathcal{B}_p with a controller behavior \mathcal{B}_c, yielding a closed-loop behavior \mathcal{B}_{cl} = \mathcal{B}_p \cap \mathcal{B}_c that restricts trajectories to those satisfying both dynamics simultaneously. This framework treats control as variable sharing or feedback interconnection, where manifest variables (e.g., inputs and outputs) are constrained to achieve desired specifications, such as stability or performance, without presupposing an input/output partition. For linear time-invariant (LTI) systems, the plant is often represented by a kernel equation R(\frac{d}{dt}) w = 0, and the controller enforces compatibility via intersection, as illustrated by feedback loops for systems like mass-spring-dampers.

Controllability in this setting ensures that trajectories can be steered: any past trajectory in the behavior can be concatenated, after a finite transition interval, with any future trajectory in the behavior, so that desired responses can be reached from arbitrary initial evolutions. For LTI behaviors, this property holds if and only if the polynomial matrix R(\lambda) has constant rank for all complex \lambda, enabling the controller to manipulate the system's response freely within the behavioral constraints. In control design, controllability is essential for pole placement, where a dynamic controller can assign closed-loop poles to stabilize the system. The Q-design approach parameterizes all stabilizing controllers via doubly coprime factorizations in the behavioral framework, mirroring the classical Youla-Kučera method but adapted to kernel or image representations of behaviors.
Here, a behavioral parameter Q generates the set of all controllers that regularly implement a specified subbehavior while ensuring closed-loop stability, achieved through coprime factorizations of the plant's representation that admit left and right inverses. For instance, in the stabilization of LTI systems, the behavioral Youla parameterization constructs controllers C = (X + MQ)^{-1}(Y + NQ) (in terms adapted to behaviors), where M, N, X, Y derive from the plant's doubly coprime factors, allowing systematic tuning of Q for desired closed-loop responses such as disturbance rejection. This framework uniformly handles non-minimum-phase or unstable plants by focusing on trajectory restrictions rather than inverses, avoiding singularities in plant inversion and facilitating extensions to decentralized control through multi-variable interconnections.
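Control as interconnection can be sketched on a scalar example (assuming NumPy; the unstable plant, the static controller gain, and the helpers are all invented for illustration): the closed loop is literally the set of (u, y) trajectories lying in both behaviors, and intersecting with the controller stabilizes the plant.

```python
import numpy as np

# Sketch: behavior interconnection B_p ∩ B_c for the unstable discrete-time
# plant y(t+1) = 2 y(t) + u(t) and the static controller u = -1.5 y.
# The intersection is autonomous with law y(t+1) = 0.5 y(t).

def in_plant(u, y, tol=1e-9):
    """Plant behavior: y(t+1) - 2 y(t) - u(t) = 0."""
    return np.max(np.abs(y[1:] - 2.0 * y[:-1] - u[:-1])) < tol

def in_controller(u, y, tol=1e-9):
    """Controller behavior: u(t) + 1.5 y(t) = 0 (variable sharing)."""
    return np.max(np.abs(u + 1.5 * y)) < tol

# A trajectory of the interconnection: y(t) = y0 * 0.5^t, u = -1.5 y.
t = np.arange(30)
y = 4.0 * 0.5 ** t
u = -1.5 * y

print(in_plant(u, y), in_controller(u, y))   # True True: w lies in both behaviors
print(abs(y[-1]) < 1e-6)                     # True: the interconnection is stable
```

Note that neither behavior is stable on its own (the plant has a pole at 2, the controller constrains nothing dynamically); stability is a property of the intersection, which is the behavioral reading of feedback design.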

Modern developments and comparisons

Recent advances in behavioral modeling have leveraged Willems' fundamental lemma, which characterizes the behavior of controllable linear time-invariant systems through Hankel matrices of measured trajectories, enabling non-parametric representations without explicit model identification. This underpins data-enabled predictive control (DeePC), a post-2010 method that directly uses input-output data for predictive control, bypassing traditional modeling and facilitating learning-based strategies in applications such as process control and power systems. DeePC has demonstrated robustness in noisy scenarios, achieving performance comparable to model-based predictive control while avoiding a separate identification step.

Extensions to nonlinear behaviors have incorporated variational principles and manifold constraints, particularly through port-Hamiltonian frameworks that preserve energy dissipation properties. Arjan van der Schaft's work has advanced this direction by formulating nonlinear systems as interconnections of Dirac structures on manifolds, ensuring geometric consistency and passivity in behavioral descriptions. These approaches address nonlinear dynamics by embedding behaviors in suitable function spaces or manifolds, allowing modular composition without relying on local linearizations.

Compared to state-space models, the behavioral approach avoids reliance on latent variables, focusing instead on manifest trajectories, which enhances robustness to model mismatch in uncertain environments. Relative to input-output paradigms, it dispenses with a priori causality assumptions, treating variables symmetrically to capture bidirectional influences more flexibly. The behavioral framework complements port-Hamiltonian systems for energy-based modeling, where the former provides trajectory-level abstraction while the latter enforces physical structure such as dissipation inequalities. Since 2015, integrations with machine learning have extended behavioral modeling to data-driven control in robotics, using trajectory datasets for direct policy learning akin to behavioral cloning, improving adaptability in dynamic control tasks.
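The prediction step at the heart of DeePC can be sketched on a toy system (assuming NumPy; the plant, horizons, and the U_p/U_f/Y_p/Y_f partition follow common DeePC notation, but every numeric choice here is illustrative): future outputs are read off directly from linear combinations of recorded Hankel columns, with no identified model.

```python
import numpy as np

# Sketch of data-driven prediction à la DeePC for y(t+1) = 0.9 y(t) + u(t):
# solve for g with [U_p; Y_p; U_f] g = [u_ini; y_ini; u_f], predict Y_f g.

rng = np.random.default_rng(1)
T, T_ini, N = 100, 2, 3
L = T_ini + N                          # window = past + future

ud = rng.standard_normal(T)            # persistently exciting recorded input
yd = np.zeros(T)
for t in range(T - 1):
    yd[t + 1] = 0.9 * yd[t] + ud[t]

def hankel(x, L):
    return np.column_stack([x[j:j + L] for j in range(len(x) - L + 1)])

Hu, Hy = hankel(ud, L), hankel(yd, L)
U_p, U_f = Hu[:T_ini], Hu[T_ini:]
Y_p, Y_f = Hy[:T_ini], Hy[T_ini:]

# A consistent initial window (fixes the latent state) and a chosen future input.
u_ini = np.array([1.0, -0.5])
y_ini = np.array([0.2, 0.9 * 0.2 + u_ini[0]])
u_f = np.array([1.0, 0.0, -1.0])

A = np.vstack([U_p, Y_p, U_f])
b = np.concatenate([u_ini, y_ini, u_f])
g, *_ = np.linalg.lstsq(A, b, rcond=None)   # any consistent g works
y_pred = Y_f @ g

# Verify against forward simulation from the same initial window.
y_sim, y_prev = [], y_ini[-1]
for u_t in np.concatenate([u_ini[-1:], u_f[:-1]]):
    y_prev = 0.9 * y_prev + u_t
    y_sim.append(y_prev)
print(np.allclose(y_pred, y_sim))      # True: Hankel columns predict exactly
```

With noisy data the exact equality breaks, which is where the regularized DeePC variants mentioned above (e.g., penalties on g and slack on y_ini) come in.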
Scalability to large networks has advanced through behavioral parametrizations of interconnected systems, enabling efficient analysis of graph-structured dynamics in power grids and sensor arrays without exponential complexity. These developments also address gaps in hybrid systems via l-complete approximations for supervisory control, merging discrete events with continuous behaviors, and in stochastic settings through series expansions that extend the fundamental lemma to noisy trajectories.
