Linearity

Linearity is a fundamental property in mathematics that characterizes functions, operators, or transformations between spaces which preserve the operations of addition and scalar multiplication. Specifically, a function f: V \to W between vector spaces V and W is linear if it satisfies additivity, f(\mathbf{u} + \mathbf{v}) = f(\mathbf{u}) + f(\mathbf{v}), and homogeneity, f(c \mathbf{u}) = c f(\mathbf{u}), for all vectors \mathbf{u}, \mathbf{v} \in V and scalars c. Equivalently, it obeys f(\alpha \mathbf{u} + \beta \mathbf{v}) = \alpha f(\mathbf{u}) + \beta f(\mathbf{v}) for scalars \alpha, \beta. This strict definition requires that linear functions map the zero vector to zero, distinguishing them from more general affine functions, which may include a constant shift and are sometimes colloquially called "linear" in contexts like single-variable calculus, where their graphs form straight lines.

In linear algebra, linearity underpins the study of vector spaces, matrices, and systems of equations, where linear transformations are represented by matrices that facilitate computations such as solving A\mathbf{x} = \mathbf{b}. Key constructions include the kernel (the set of vectors mapped to zero) and the image (the range of the transformation), with the rank-nullity theorem relating their dimensions: \dim V = \dim(\ker f) + \dim(\operatorname{im} f). Linear functions are fully determined by their action on a basis of the domain, enabling efficient algebraic manipulation.

Beyond linear algebra, linearity appears in diverse applications, such as differential equations, where operators like \frac{d}{dx} are linear, satisfying \frac{d}{dx}(u + v) = \frac{du}{dx} + \frac{dv}{dx} and \frac{d}{dx}(c u) = c \frac{du}{dx}. In physics and engineering, it embodies the superposition principle, allowing solutions to complex problems to be built by combining simpler linear ones, as seen in linear circuits with components like resistors that scale outputs proportionally to inputs. Nonlinear phenomena, by contrast, introduce complexities like bifurcations, but linearity provides essential approximations and analytical tools across fields.
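As a concrete check of these definitions, the following Python sketch (not from any source; the matrix values are arbitrary) verifies additivity and homogeneity numerically for the matrix map f(x) = Ax and confirms the rank-nullity count for a sample matrix:

```python
import numpy as np

# Illustrative sketch: check additivity and homogeneity for f(x) = A x,
# then confirm dim V = dim ker f + dim im f (rank-nullity) numerically.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])          # an arbitrary map from R^3 to R^2

f = lambda x: A @ x
u = np.array([1.0, 0.0, -2.0])
v = np.array([3.0, 1.0, 4.0])
c = 2.5

assert np.allclose(f(u + v), f(u) + f(v))   # additivity
assert np.allclose(f(c * u), c * f(u))      # homogeneity

rank = np.linalg.matrix_rank(A)             # dim im f
nullity = A.shape[1] - rank                 # dim ker f, by rank-nullity
print(rank, nullity)                        # 2 1, and 2 + 1 = dim R^3
```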

Overview

Definition

In mathematics and related fields, linearity refers to a property of functions, transformations, or systems that preserves the operations of addition and scalar multiplication, embodying the principle of superposition. Specifically, a function f is linear if it satisfies additivity, f(\mathbf{x} + \mathbf{y}) = f(\mathbf{x}) + f(\mathbf{y}), and homogeneity, f(a \mathbf{x}) = a f(\mathbf{x}) for scalars a, which together imply the general form f(a \mathbf{x} + b \mathbf{y}) = a f(\mathbf{x}) + b f(\mathbf{y}) for scalars a, b. This ensures that the output scales proportionally with the input and that combined inputs produce combined outputs without interaction, a foundational concept across mathematics and its applications. Unlike affine functions, which allow an additive constant shift (f(\mathbf{x}) = L(\mathbf{x}) + \mathbf{c}, where L is linear), true linearity requires f(\mathbf{0}) = \mathbf{0}, excluding translations and emphasizing strict proportionality through the origin.

The term "linearity" emerged in 19th-century mathematics amid efforts to generalize geometric and algebraic structures, with Giuseppe Peano providing a key formalization in 1888 by defining "linear systems" as axiomatic vector spaces closed under linear combinations, laying groundwork for modern linear algebra. Earlier contributions, such as those by Hermann Grassmann on linear extensions in the 1840s and Arthur Cayley on matrix operations in the 1850s, built toward this abstraction, but Peano's work integrated linearity into a rigorous framework for transformations.

To illustrate the distinction from nonlinearity, consider physical examples: kinetic energy, given by \frac{1}{2} m v^2, is quadratic in velocity and fails superposition, since doubling the velocity quadruples the energy rather than doubling it, reflecting nonlinear dependence. In contrast, gravitational potential energy m g h is linear in the height h, satisfying homogeneity since scaling the height proportionally scales the energy, aligning with linear behavior.
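To make the contrast concrete, here is a brief Python sketch (illustrative only; the numerical values are made up) showing that gravitational potential energy scales homogeneously with height while kinetic energy does not scale homogeneously with velocity:

```python
# Illustrative sketch: m*g*h is linear (homogeneous) in h, while
# (1/2)*m*v^2 is quadratic in v and violates homogeneity.
def potential_energy(h, m=1.0, g=9.81):
    return m * g * h                      # linear in h

def kinetic_energy(v, m=1.0):
    return 0.5 * m * v**2                 # quadratic in v

h, v = 2.0, 3.0
print(potential_energy(2 * h) == 2 * potential_energy(h))  # True
print(kinetic_energy(2 * v) == 2 * kinetic_energy(v))      # False: quadruples
```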

Key Properties

Linearity is fundamentally characterized by two core properties: additivity and homogeneity. Additivity requires that a function f satisfies f(x + y) = f(x) + f(y) for all elements x and y in its domain, ensuring that the function respects the addition operation in both the domain and the codomain. This property can be verified directly by applying f to a sum of inputs and checking that the output decomposes into the corresponding sum of outputs. Homogeneity, also known as scalability or proportionality, stipulates that f(\alpha x) = \alpha f(x) for any scalar \alpha and input x, meaning the function scales outputs proportionally to inputs. This derives directly from the concept of proportionality, where scaling the input by a factor uniformly scales the output, forming the basis for linear scaling in functions over fields like the real numbers.

The superposition principle emerges as the combination of additivity and homogeneity, yielding f(\alpha x + \beta y) = \alpha f(x) + \beta f(y) for scalars \alpha and \beta. This principle allows complex inputs to be decomposed into simpler linear combinations, facilitating the solution of problems by breaking them into additive components whose outputs can then be recombined.

Key consequences of these properties include the preservation of the zero element, where f(0) = 0, derived from homogeneity by setting \alpha = 0: f(0) = f(0 \cdot x) = 0 \cdot f(x) = 0. Additionally, linear functions are invertible under certain conditions, such as when they are bijective, and the inverse of a bijective linear function is itself linear, so reversal preserves the linear structure. The identity function f(x) = x satisfies both additivity and homogeneity, serving as a canonical example of linearity. These properties underpin applications such as the analysis of linear polynomials, where functions of the form f(x) = cx inherently obey them.
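A quick way to probe these properties numerically is random testing; the sketch below (the helper name check_linearity is hypothetical, and the tolerance absorbs floating-point roundoff) samples random inputs and scalars and checks the combined superposition identity:

```python
import random

# Illustrative sketch: probe f(a*x + b*y) == a*f(x) + b*f(y) on random samples.
def check_linearity(f, trials=1000, tol=1e-9):
    for _ in range(trials):
        x, y = random.uniform(-10, 10), random.uniform(-10, 10)
        a, b = random.uniform(-5, 5), random.uniform(-5, 5)
        if abs(f(a * x + b * y) - (a * f(x) + b * f(y))) > tol:
            return False
    return True

print(check_linearity(lambda x: 3.0 * x))       # True: f(x) = cx is linear
print(check_linearity(lambda x: 3.0 * x + 1))   # False: affine, f(0) != 0
```

Passing such a probe does not prove linearity, but a single failing sample suffices to rule it out.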

Mathematics

Linear Maps and Functions

In the context of vector spaces over a field F, a linear map, also known as a linear transformation, from a vector space V to another vector space W is a function T: V \to W that preserves the vector addition and scalar multiplication operations. Specifically, for all vectors u, v \in V and scalars \alpha \in F, it satisfies T(u + v) = T(u) + T(v) and T(\alpha u) = \alpha T(u). This definition ensures that linear maps respect the linear structure of the spaces, making them fundamental to linear algebra.

When finite-dimensional vector spaces are equipped with bases, every linear map T: V \to W can be represented by a matrix. If \{e_1, \dots, e_n\} is a basis for V and \{f_1, \dots, f_m\} is a basis for W, then the matrix A of T has columns given by the coordinates of T(e_j) in the basis for W, and T(x) = A x for x \in V expressed in coordinates. For example, a counterclockwise rotation by angle \theta in the plane \mathbb{R}^2 with the standard basis is represented by the matrix \begin{pmatrix} \cos \theta & -\sin \theta \\ \sin \theta & \cos \theta \end{pmatrix}, which multiplies any vector to yield its rotated image.

Key subspaces associated with a linear map T: V \to W are its kernel, or null space, defined as \ker T = \{ v \in V \mid T(v) = 0 \}, and its image, or range, defined as \operatorname{im} T = \{ T(v) \mid v \in V \}. The rank-nullity theorem, also known as the dimension theorem, states that if V is finite-dimensional, then \dim V = \dim \ker T + \dim \operatorname{im} T. This relates the "degeneracy" of the map (captured by the kernel) to its "span" (captured by the image).

Illustrative examples include the differentiation operator D on the space of polynomials of degree at most n over \mathbb{R}, where D(p) = p'; this is linear because differentiation preserves sums and scalar multiples, with kernel consisting of the constant polynomials. Another example is an orthogonal projection onto a subspace U \subset V, which maps each vector to its closest point in U, satisfies P^2 = P, and is represented by a symmetric matrix in orthonormal coordinates.
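The following Python sketch (the angle and subspace are arbitrary choices, not from any source) exercises both examples: it applies the rotation matrix above to a vector and builds an orthogonal projection P onto a line, checking idempotence, symmetry, and the rank-nullity count:

```python
import numpy as np

theta = np.pi / 3                                # 60-degree rotation
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
print(R @ np.array([1.0, 0.0]))                  # rotated standard basis vector

u = np.array([[1.0], [1.0]]) / np.sqrt(2)        # unit vector spanning U
P = u @ u.T                                      # orthogonal projection onto U
assert np.allclose(P @ P, P)                     # P^2 = P (idempotent)
assert np.allclose(P, P.T)                       # symmetric in coordinates

rank = np.linalg.matrix_rank(P)
print(rank, P.shape[1] - rank)                   # dim im P = 1, dim ker P = 1
```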

Linear Polynomials and Equations

A linear polynomial in one variable over the real numbers is an algebraic expression of the form ax + b, where a and b are constants with a \neq 0. This form represents a function f(x) = ax + b whose graph is a straight line in the Cartesian plane. In the vector-space sense, a strictly linear polynomial passes through the origin, meaning b = 0 and f(0) = 0, satisfying the homogeneity property f(cx) = c f(x) for any scalar c. More generally, when b \neq 0, the function is affine, equivalent to a linear function composed with a translation by b. Affine functions preserve straight lines and parallelism but do not necessarily map the origin to itself.

The coefficient a in f(x) = ax + b is the slope of the line, indicating the rate of change of y with respect to x, while b is the y-intercept, the value of f(0). For example, in the line y = 2x + 1, the slope a = 2 means the line rises 2 units vertically for every 1 unit horizontally, and the intercept b = 1 shows it crosses the y-axis at (0, 1).

A system of linear equations consists of multiple linear polynomials set equal to constants, such as for two variables: \begin{cases} a_1 x + b_1 y = c_1 \\ a_2 x + b_2 y = c_2 \end{cases} where a_i, b_i, c_i are constants. In matrix form, a general system with n equations and k variables is A \mathbf{x} = \mathbf{b}, where A is the n \times k coefficient matrix, \mathbf{x} is the column vector of variables, and \mathbf{b} is the column vector of constants. The nature of solutions to a square system (n = k) depends on the determinant of A: if \det(A) \neq 0, there is a unique solution; if \det(A) = 0, there are either no solutions (inconsistent) or infinitely many (dependent). For overdetermined systems (n > k), solutions are rare unless the extra equations are redundant; underdetermined systems (n < k) typically have infinitely many solutions.

Gaussian elimination solves such systems by augmenting A with \mathbf{b} to form [A \mid \mathbf{b}] and applying elementary row operations—swapping rows, multiplying by nonzero scalars, or adding multiples of one row to another—to reduce it to row echelon form, followed by back-substitution to find \mathbf{x}. This method systematically eliminates variables from equations, starting from the top-left, until the system simplifies to an upper triangular form. For square systems with \det(A) \neq 0, Cramer's rule provides an explicit formula using determinants: for the 2×2 system above, x = \frac{\det \begin{pmatrix} c_1 & b_1 \\ c_2 & b_2 \end{pmatrix}}{\det \begin{pmatrix} a_1 & b_1 \\ a_2 & b_2 \end{pmatrix}} = \frac{c_1 b_2 - c_2 b_1}{a_1 b_2 - a_2 b_1}, \quad y = \frac{\det \begin{pmatrix} a_1 & c_1 \\ a_2 & c_2 \end{pmatrix}}{\det \begin{pmatrix} a_1 & b_1 \\ a_2 & b_2 \end{pmatrix}} = \frac{a_1 c_2 - a_2 c_1}{a_1 b_2 - a_2 b_1}. This rule generalizes to higher dimensions but is computationally intensive for large matrices.
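As a worked instance (the coefficients here are made up for illustration), the sketch below solves a 2×2 system both with a library elimination-based solver and with Cramer's rule, confirming the two agree:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([5.0, 10.0])

x_elim = np.linalg.solve(A, b)        # elimination-style (LU-based) solve

det_A = np.linalg.det(A)
x_cramer = np.array([
    np.linalg.det(np.column_stack([b, A[:, 1]])) / det_A,  # replace column 1
    np.linalg.det(np.column_stack([A[:, 0], b])) / det_A,  # replace column 2
])
assert np.allclose(x_elim, x_cramer)
print(x_elim)                          # [1. 3.]
```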

Linearity in Discrete Structures

In discrete mathematics, linearity manifests in structures over finite fields, particularly the field GF(2) with elements {0,1}, where addition corresponds to XOR (exclusive or) and multiplication to AND (logical conjunction). A Boolean function f: \mathbb{F}_2^n \to \mathbb{F}_2 is linear if it satisfies f(x + y) = f(x) + f(y) for all x, y \in \mathbb{F}_2^n, which equates to f being a linear functional under the standard vector space structure. Equivalently, such functions take the form f(x) = a \cdot x = \sum_{i=1}^n a_i x_i, where a = (a_1, \dots, a_n) \in \mathbb{F}_2^n and the sum is taken modulo 2 (i.e., the XOR of the terms where a_i = 1). Scalar multiplication in this context aligns with AND, as the scalars are 0 or 1: multiplying by 0 yields 0, and multiplying by 1 preserves the input.

More generally, affine Boolean functions extend linearity by including a constant term, represented as f(b_1, \dots, b_n) = a_0 \oplus a_1 b_1 \oplus \dots \oplus a_n b_n, where \oplus denotes XOR and each a_0, a_i \in \{0,1\}. This form captures all degree-1 polynomials in the algebraic normal form (ANF) over GF(2). A classic example is the parity check function, which computes the XOR of all input bits: f(b_1, \dots, b_n) = b_1 \oplus \dots \oplus b_n; this is linear (with all a_i = 1 and a_0 = 0) and serves as a basic error-detection mechanism. All non-constant linear Boolean functions are balanced, meaning they output 0 and 1 equally often—specifically, exactly 2^{n-1} times each for n > 0—since the solution set of f(x) = 0 forms a subspace of codimension 1 in \mathbb{F}_2^n. Affine functions inherit this balance unless the linear part vanishes, in which case they are constant.

Nonlinearity measures quantify deviation from these affine forms, with the Walsh-Hadamard transform providing a key tool: the transform of f at u \in \mathbb{F}_2^n is W_f(u) = \sum_{x \in \mathbb{F}_2^n} (-1)^{f(x) + u \cdot x}, and the nonlinearity is nl(f) = 2^{n-1} - \frac{1}{2} \max_u |W_f(u)|. For a linear function, W_f(u) = 2^n at exactly one u (the vector a defining the function) and 0 elsewhere, yielding nl(f) = 0; affine functions exhibit the same maximum absolute value, up to a sign determined by the constant term. This metric is central to assessing cryptographic strength, as high nonlinearity resists linear cryptanalysis.

In coding theory, affine Boolean functions underpin Reed-Muller codes: the first-order binary Reed-Muller code RM(1,m) of length 2^m consists precisely of the evaluations of all affine functions on \mathbb{F}_2^m, forming a linear subspace of dimension m+1 with minimum distance 2^{m-1}. These codes, introduced in the 1950s, enable efficient error correction and are foundational in applications like satellite communication.
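A direct, brute-force computation of the Walsh-Hadamard spectrum makes these definitions concrete; the sketch below (exponential in n, so only suitable for small n; the function choices are illustrative) computes nl(f) for the parity function and for the 3-bit majority function:

```python
from itertools import product

# Illustrative sketch: Walsh-Hadamard transform by enumeration, then
# nl(f) = 2^(n-1) - max_u |W_f(u)| / 2.
def walsh_spectrum(f, n):
    spectrum = {}
    for u in product((0, 1), repeat=n):
        total = 0
        for x in product((0, 1), repeat=n):
            dot = sum(ui * xi for ui, xi in zip(u, x)) % 2
            total += (-1) ** ((f(x) + dot) % 2)
        spectrum[u] = total
    return spectrum

def nonlinearity(f, n):
    return 2 ** (n - 1) - max(abs(w) for w in walsh_spectrum(f, n).values()) // 2

parity = lambda x: sum(x) % 2                    # linear: all a_i = 1, a_0 = 0
majority = lambda x: int(sum(x) > len(x) / 2)    # nonlinear for n = 3
print(nonlinearity(parity, 3))                   # 0, as expected for linear f
print(nonlinearity(majority, 3))                 # 2
```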

Physics

Linearity in Classical Mechanics and Electromagnetism

In classical mechanics, linearity manifests in the foundational laws governing the behavior of physical systems, particularly those involving proportional relationships between forces, displacements, and responses. This linearity enables the principle of superposition, allowing solutions to complex problems to be constructed as linear combinations of simpler ones. Such properties simplify the analysis of mechanical systems like springs and oscillatory motion, where small deformations or perturbations follow linear approximations.

A prime example is Hooke's law, which describes the elastic behavior of springs and other deformable materials under small strains. Formulated by Robert Hooke in 1678, it states that the restoring force F exerted by a spring is directly proportional to the displacement x from equilibrium, expressed as F = -kx, where k is the spring constant. This linear relationship holds for deformations within the elastic limit, facilitating the modeling of vibrations and structural responses in engineering.

In electromagnetism, Ohm's law exemplifies linearity by relating current to voltage in conductive materials. Proposed by Georg Simon Ohm in his 1827 work "Die galvanische Kette, mathematisch bearbeitet," it asserts that the current I through a conductor is directly proportional to the applied voltage V, given by V = IR, where R is the resistance. This linear proportionality applies to ohmic conductors at constant temperature, underpinning the design of electrical circuits and the analysis of steady-state currents.

Maxwell's equations, which unify electricity and magnetism, exhibit linearity in homogeneous, isotropic media without free charges or currents. As derived from James Clerk Maxwell's 1865 treatise "A Dynamical Theory of the Electromagnetic Field," these equations—comprising Gauss's laws, Faraday's law, and Ampère's law with Maxwell's correction—are linear partial differential equations in the fields \mathbf{E} and \mathbf{B}. In such media, where the permittivity \epsilon and permeability \mu are constants, the equations permit the superposition of electromagnetic fields, meaning the total field from multiple sources is the vector sum of the individual fields. This property is essential for solving wave propagation problems, such as light transmission in vacuum or in dielectrics.

The diffusion equation further illustrates linearity in transport phenomena, modeling the spread of heat, particles, or dissolved substances in a medium. Originating from Adolf Fick's 1855 work on diffusion and paralleling Joseph Fourier's earlier treatment of heat conduction, it takes the form \frac{\partial u}{\partial t} = \kappa \nabla^2 u, where u is the concentration or temperature, t is time, \kappa is the diffusion coefficient, and \nabla^2 is the Laplacian operator. As a linear homogeneous partial differential equation, its solutions obey the superposition principle: if u_1 and u_2 are solutions, then so is a u_1 + b u_2 for constants a and b. This allows the combination of fundamental solutions, like Gaussian profiles, to describe complex scenarios in isotropic media.

A canonical application of these linear principles appears in the simple harmonic oscillator, a model for pendulums, springs, and molecular vibrations. The governing equation, derived from Newton's second law applied to a Hooke's-law restoring force, is the linear second-order ordinary differential equation \frac{d^2 x}{dt^2} + \omega^2 x = 0, where \omega = \sqrt{k/m} is the angular frequency, m is the mass, and x(t) is the displacement. The general solution is a linear superposition of sinusoidal functions: x(t) = A \cos(\omega t) + B \sin(\omega t), or equivalently x(t) = C \cos(\omega t + \phi), with constants A, B, C, and phase \phi determined by initial conditions. This superposition arises directly from the linearity of the equation, enabling the description of arbitrary oscillatory motions as combinations of basis solutions.
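To see the superposition property of the oscillator equation numerically, the sketch below (the frequency and coefficients are arbitrary choices) forms a linear combination of the two basis solutions and checks that it satisfies x'' + \omega^2 x = 0 up to finite-difference error:

```python
import numpy as np

omega = 2.0
t = np.linspace(0.0, 10.0, 2001)
x1, x2 = np.cos(omega * t), np.sin(omega * t)    # basis solutions
x = 1.5 * x1 - 0.7 * x2                          # arbitrary superposition

dt = t[1] - t[0]
xdd = (x[2:] - 2 * x[1:-1] + x[:-2]) / dt**2     # central-difference x''
residual = xdd + omega**2 * x[1:-1]
print(np.max(np.abs(residual)))                  # small (discretization error)
```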

Linearity in Quantum Mechanics

In quantum mechanics, linearity manifests fundamentally through the Schrödinger equation, which governs the time evolution of the quantum state described by the wave function \psi. The time-dependent Schrödinger equation is given by i \hbar \frac{\partial \psi}{\partial t} = \hat{H} \psi, where \hat{H} is the Hamiltonian operator, \hbar is the reduced Planck constant, and i is the imaginary unit; this equation is linear in \psi, meaning that if \psi_1 and \psi_2 are solutions, then any linear combination a \psi_1 + b \psi_2 (with complex coefficients a, b) is also a solution. This linearity ensures that the superposition principle holds, allowing quantum states to evolve without interference from nonlinear terms that could disrupt probabilistic predictions.

The superposition principle arises directly from this linearity, permitting a state |\psi\rangle in Hilbert space to be expressed as a linear combination of basis states: |\psi\rangle = \sum_i c_i |\phi_i\rangle, where the coefficients c_i are complex amplitudes satisfying \sum_i |c_i|^2 = 1 to normalize the state. Upon measurement of an observable, the probability of obtaining the eigenvalue corresponding to |\phi_i\rangle is |c_i|^2, embodying the non-classical nature of quantum measurement. Observables in quantum mechanics are represented by linear operators, specifically self-adjoint (Hermitian) operators on the Hilbert space, whose eigenvalues correspond to possible measurement outcomes and whose eigenvectors correspond to the associated states.

A classic demonstration of linearity and superposition is the Stern-Gerlach experiment, where silver atoms with spin-1/2 magnetic moments pass through an inhomogeneous magnetic field, resulting in deflection into discrete paths rather than a continuous distribution. This outcome reflects the spin state as a superposition of up and down components along the field direction, |\psi\rangle = \alpha | \uparrow \rangle + \beta | \downarrow \rangle, with the linear evolution preserving the coherence until measurement collapses the state. The linearity of quantum dynamics extends to quantum computing, where quantum gates are implemented as unitary linear transformations on the state space, enabling operations like the Hadamard gate to create superpositions essential for algorithms such as Shor's.
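A minimal numerical illustration (using the standard 2×2 matrices; not tied to any specific source) represents the Hadamard gate as a unitary linear map and applies it to the basis state |0⟩, producing an equal superposition:

```python
import numpy as np

H = np.array([[1.0,  1.0],
              [1.0, -1.0]]) / np.sqrt(2)          # Hadamard gate
ket0 = np.array([1.0, 0.0])                       # |0>

psi = H @ ket0                                    # (|0> + |1>) / sqrt(2)
print(psi, np.abs(psi) ** 2)                      # probabilities 0.5 and 0.5

assert np.allclose(H.conj().T @ H, np.eye(2))     # unitary: norm-preserving
```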

Engineering and Electronics

Linear Systems Theory

Linear systems theory provides a foundational framework in engineering for modeling and analyzing systems where the output is a linear superposition of responses to individual inputs, enabling the prediction of system behavior from fundamental properties. A linear system satisfies the principles of superposition and homogeneity, meaning that if inputs x_1(t) and x_2(t) produce outputs y_1(t) and y_2(t), then a scaled and summed input a x_1(t) + b x_2(t) yields a y_1(t) + b y_2(t) for scalars a and b. For a linear time-invariant system, this allows the response to arbitrary inputs to be expressed as a convolution integral with the system's impulse response: y(t) = \int_{-\infty}^{\infty} h(\tau) x(t - \tau) \, d\tau, where x(t) is the input and h(t) is the impulse response.

The impulse response h(t) fully characterizes such a system, representing the output when the input is a unit impulse \delta(t). For time-invariant systems, h(t) is independent of the impulse timing, and causality is imposed if h(t) = 0 for t < 0, ensuring the output depends only on past and present inputs. Linear time-invariant (LTI) systems, a key subclass, combine linearity with time-invariance, allowing efficient analysis through transform methods that convert time-domain convolutions into algebraic operations in the frequency domain.

In LTI systems, the Laplace transform simplifies analysis by defining the transfer function H(s) = \frac{Y(s)}{X(s)}, where Y(s) and X(s) are the Laplace transforms of the output and input, respectively, and s is the complex frequency variable. Similarly, the Fourier transform yields the frequency response H(j\omega), which describes the steady-state behavior for sinusoidal inputs. These transforms facilitate solving the differential equations governing the system, such as those arising from circuit or mechanical models, by replacing convolution with multiplication.

A critical aspect of linear systems theory is stability analysis, particularly bounded-input bounded-output (BIBO) stability, which requires that every bounded input produce a bounded output. For causal LTI systems described by rational transfer functions, BIBO stability holds if and only if all poles of H(s) lie in the open left half of the s-plane (i.e., have negative real parts). Poles on or to the right of the imaginary axis indicate marginal instability or instability, leading to unbounded responses such as sustained oscillations or exponential growth.

As an illustrative example, consider a series RC circuit with resistance R and capacitance C, modeled as a first-order LTI system with H(s) = \frac{1/RC}{s + 1/RC}. The impulse response is h(t) = \frac{1}{RC} e^{-t/RC} u(t), where u(t) is the unit step function, resulting in an exponential rise toward the final value for step inputs; this demonstrates the system's low-pass filtering behavior and its stability due to the single pole at s = -1/RC. This framework extends to more complex systems, underpinning applications in control engineering and signal processing.
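The RC example can be checked numerically; the sketch below (component values chosen arbitrarily) discretizes the impulse response, convolves it with a unit step, and compares the result against the closed-form charging curve 1 - e^{-t/RC}:

```python
import numpy as np

R, C = 1.0e3, 1.0e-6                  # 1 kOhm, 1 uF -> RC = 1 ms
tau = R * C
dt = tau / 100
t = np.arange(0.0, 10 * tau, dt)

h = (1.0 / tau) * np.exp(-t / tau)    # causal impulse response
x = np.ones_like(t)                   # unit step input
y = np.convolve(x, h)[: len(t)] * dt  # discrete approximation of the integral

print(np.max(np.abs(y - (1 - np.exp(-t / tau)))))  # small discretization error
```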

Linearity in Electronic Devices and Circuits

In electronic devices and circuits, linearity refers to the proportional relationship between input and output signals, enabling predictable amplification and signal processing without distortion. This property is essential for components like amplifiers and data converters, where deviations from linearity can degrade signal fidelity and introduce errors. Real-world devices exhibit limitations due to material properties and operating conditions, necessitating metrics to quantify and mitigate nonlinearity.

Transistors operate in distinct regions, with the active region providing the approximately linear behavior critical for amplification. In this region, small variations in the base-emitter voltage produce proportional small-signal variations in the collector current, allowing small-signal amplification while avoiding the cutoff (no conduction) and saturation (current limiting) regions. Bipolar junction transistors (BJTs) and field-effect transistors (FETs) are biased in the active region for class-A or class-AB operation to maintain this linearity, as excursions into the nonlinear regions cause harmonic generation and distortion.

A key metric for assessing nonlinearity in amplifiers and circuits is total harmonic distortion (THD), which quantifies the unwanted harmonic frequencies generated by a nonlinear transfer function. THD is defined as the ratio of the root-mean-square sum of the harmonic amplitudes (from the second harmonic onward) to the amplitude of the fundamental, often expressed as a percentage. Lower THD values indicate better linearity; for instance, audio amplifiers target THD below 0.1% to preserve signal quality, and measurement involves spectral analysis of the output under a sinusoidal input.

In data converters such as analog-to-digital converters (ADCs) and digital-to-analog converters (DACs), integral linearity measures the maximum deviation of the actual transfer characteristic from an ideal straight line over the full input range, typically specified in least significant bits (LSBs). This deviation arises from component mismatches and process variations, impacting overall accuracy. Common variants include independent linearity, which uses a best-fit straight line to minimize the maximum deviation after adjusting for offset and gain errors, and zero-based linearity, which draws the line from the zero-scale point to the full-scale endpoint, making it sensitive to endpoint errors but simpler for endpoint-calibrated systems.

Differential linearity (DNL) evaluates the variation in step size between consecutive codes relative to the ideal 1 LSB step, also measured in LSBs. A DNL error exceeding 1 LSB in magnitude can produce missing codes or non-monotonic behavior, potentially reducing effective resolution; for example, a DNL within ±0.5 LSB ensures all codes are present without gaps. This metric complements integral linearity by focusing on local uniformity rather than global fit.

Operational amplifiers (op-amps) exemplify linearity challenges in circuits, where ideal models assume infinite gain and perfect proportionality, but real devices face limits from finite gain, bandwidth, slew rate, and output swing. In configurations like inverting or non-inverting amplifiers, negative feedback extends bandwidth and stabilizes gain, enhancing linearity up to the point where the input voltage exceeds the common-mode limits or the output approaches the supply voltages, causing clipping and distortion. For instance, the OPA227 op-amp achieves a typical THD+N of 0.00005% at 1 kHz in unity-gain configurations (V_O = 3 V RMS, R_L = 10 kΩ, V_S = ±15 V), but distortion increases near the limits of its output swing due to internal limitations such as output-stage saturation.
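THD can be estimated directly from a sampled waveform; the sketch below (the signal parameters and weak cubic nonlinearity are invented for illustration) applies an FFT and forms the ratio of the harmonic amplitudes to the fundamental:

```python
import numpy as np

fs, f0, n = 48_000, 1_000, 48_000     # sample rate, test tone (Hz), samples
t = np.arange(n) / fs
x = np.sin(2 * np.pi * f0 * t)
y = x + 0.001 * x**3                  # weakly nonlinear transfer function

spec = np.abs(np.fft.rfft(y)) / (n / 2)           # 1 Hz per bin here
fund = spec[f0]
harmonics = [spec[k * f0] for k in range(2, 6)]   # 2nd through 5th harmonics
thd = np.sqrt(sum(a**2 for a in harmonics)) / fund
print(f"THD = {100 * thd:.4f}%")                  # about 0.025% for this case
```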

Applications in Other Fields

Linear Models in Statistics

In statistics, linear models provide a framework for analyzing relationships between a response variable and one or more predictor variables under the assumption of linearity in the parameters. The simplest form is the simple linear regression model, which posits that the conditional mean of the response Y is a linear function of a single predictor X, expressed as Y = \beta_0 + \beta_1 X + \varepsilon, where \beta_0 is the intercept, \beta_1 is the slope, and \varepsilon is a random error term with mean zero. This model assumes that the relationship between X and the mean of Y is a straight line, allowing for probabilistic inference on the parameters \beta_0 and \beta_1.

The parameters are typically estimated using ordinary least squares (OLS), which minimizes the sum of squared residuals between observed and predicted values. The OLS estimator is given in matrix form by \hat{\beta} = (X^T X)^{-1} X^T Y, where X is the design matrix including a column of ones for the intercept and Y is the vector of responses; this yields the best linear unbiased estimator under the conditions discussed below. Key assumptions underpinning the validity of OLS include linearity in parameters (the conditional mean of Y given X is linear), independence of errors, homoscedasticity (constant variance of errors), and often normality of errors for inference. Violations can be diagnosed through residual plots, such as checking for patterns in residuals versus fitted values to assess linearity, or using scale-location plots for homoscedasticity.

Multiple linear regression extends this to p predictors, modeling Y = \beta_0 + \beta_1 X_1 + \cdots + \beta_p X_p + \varepsilon, enabling the examination of multivariate relationships while controlling for the other predictors. Model significance is often tested using the F-test, which compares the full model against a null model with only the intercept, assessing whether at least one \beta_j (for j = 1, \dots, p) is nonzero; the test statistic is F = \frac{SSR / p}{SSE / (n - p - 1)}, where SSR is the regression sum of squares and SSE is the error sum of squares. A practical example is predicting body weight from height using simple linear regression on anthropometric data, where the model might yield \hat{Y} = -150.95 + 4.85 X (with height in inches and weight in pounds), indicating that each additional inch of height corresponds to about 4.85 pounds more weight on average.

The Gauss-Markov theorem establishes that, under the assumptions of linearity, exogeneity (errors uncorrelated with predictors), homoscedasticity, and no perfect multicollinearity, the OLS estimator is the best linear unbiased estimator (BLUE), meaning it has the minimum variance among all linear unbiased estimators of the parameters. This theorem underscores the efficiency of OLS in linear models, provided the assumptions hold.
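The normal-equations formula above can be applied directly; the sketch below (the height/weight numbers are synthetic values invented for illustration, not the dataset behind the quoted fit) computes \hat{\beta} and cross-checks it against a library least-squares routine:

```python
import numpy as np

heights = np.array([63.0, 65.0, 67.0, 69.0, 71.0, 73.0])        # inches
weights = np.array([154.0, 165.0, 173.0, 185.0, 193.0, 204.0])  # pounds

X = np.column_stack([np.ones_like(heights), heights])   # design matrix
beta_hat = np.linalg.solve(X.T @ X, X.T @ weights)      # (X^T X)^{-1} X^T y
print(beta_hat)                                         # [intercept, slope]

beta_lstsq, *_ = np.linalg.lstsq(X, weights, rcond=None)
assert np.allclose(beta_hat, beta_lstsq)
```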

Linearity in Signal Processing and Control Systems

In signal processing, linearity is fundamental to the analysis and design of linear time-invariant (LTI) systems, where the output depends linearly on the input, enabling superposition and scalability. These systems are characterized by their impulse response, and their behavior is described using convolution, allowing efficient computation in both time and frequency domains. Linearity ensures that frequency-domain representations, such as the discrete-time Fourier transform (DTFT), accurately reflect the system's response without distortion from nonlinear interactions.

Linear filters, a cornerstone of digital signal processing, exploit this property to shape signals by attenuating or amplifying specific frequency components. Finite impulse response (FIR) filters have a finite-duration impulse response, making them inherently stable and capable of achieving exactly linear phase, which preserves signal waveform integrity; they are designed using methods like windowing or frequency sampling to approximate desired responses. Infinite impulse response (IIR) filters, in contrast, use feedback to achieve sharper transitions with fewer coefficients but may introduce phase nonlinearity and potential instability if poles lie outside the unit circle. The frequency response of both FIR and IIR filters is analyzed using the discrete Fourier transform (DFT), which provides a discrete sampling of the DTFT to evaluate magnitude and phase across frequency bins, facilitating design verification and optimization. For instance, the DFT of an FIR filter's coefficients directly yields its frequency response at equally spaced frequency points.

The convolution theorem underscores linearity's role in efficient processing: the output y of an LTI system is the convolution y = x * h, where x is the input and h the impulse response; in the frequency domain, this becomes multiplication of their DTFTs, Y(e^{j\omega}) = X(e^{j\omega}) H(e^{j\omega}). This property enables fast computation via the fast Fourier transform (FFT), an efficient DFT algorithm, reducing complexity from O(N^2) to O(N \log N) for large signals, widely used in applications like audio equalization and image filtering.

In control systems, linearity facilitates the modeling of dynamic systems using state-space representations, where the state evolution is given by \dot{x} = Ax + Bu and the output by y = Cx + Du, with x the state vector, u the input, y the output, and A, B, C, D constant matrices; this formulation assumes small-signal operation around an operating point. Controllability, a key concept, ensures that any desired state can be reached from the initial state using admissible inputs, and is verified by the controllability matrix [B, AB, A^2B, \dots, A^{n-1}B] having full rank (equal to the state dimension n). For feedback loops, the Nyquist criterion assesses closed-loop stability by plotting the open-loop transfer function's frequency response and counting encirclements of the critical point (-1, 0) in the complex plane; stability requires no net encirclements for systems with no open-loop poles in the right half-plane.

A practical example is the proportional-integral-derivative (PID) controller, often modeled as the linear transfer function G_c(s) = K_p + \frac{K_i}{s} + K_d s for frequency-domain analysis in linear regimes, where K_p, K_i, K_d are tuning parameters; this form enables straightforward application of criteria like Nyquist's to ensure robust performance in regulating processes such as temperature or motor speed.
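Two of these ideas are easy to check numerically; the sketch below (the filter taps and state-space matrices are arbitrary illustrative values) verifies the convolution theorem with the FFT and tests controllability of a two-state system by the rank of its controllability matrix:

```python
import numpy as np

# Convolution theorem: time-domain convolution equals pointwise
# multiplication of (zero-padded) DFTs.
x = np.random.default_rng(0).standard_normal(256)   # input signal
h = np.array([0.25, 0.5, 0.25])                     # 3-tap FIR smoother
y_direct = np.convolve(x, h)
N = len(y_direct)
y_fft = np.fft.irfft(np.fft.rfft(x, N) * np.fft.rfft(h, N), N)
assert np.allclose(y_direct, y_fft)

# Controllability: rank of [B, AB] for a two-state system (n = 2).
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
ctrb = np.hstack([B, A @ B])
print(np.linalg.matrix_rank(ctrb) == A.shape[0])    # True: controllable
```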
