
Piecewise linear function

A piecewise linear function is a real-valued function defined on an interval of the real numbers, composed of a finite or countable number of affine (linear plus constant) segments, each applied on a subinterval of the domain. These functions are characterized by their graphs, which consist of straight-line segments connected at breakpoints where the slope changes, allowing them to model behaviors that linear functions alone cannot capture, such as abrupt shifts in trends. While not all piecewise linear functions are continuous—discontinuities can occur at breakpoints—continuous variants form an important subclass, often used in applications requiring smooth transitions. Convex piecewise linear functions, where slopes are non-decreasing across segments, exhibit useful optimization properties, such as being minimizable over polyhedral sets.

Common examples include the absolute value function |x|, which switches from slope -1 to +1 at x = 0, the sawtooth function for periodic approximations, and the floor function \lfloor x \rfloor, though the latter is typically discontinuous. In higher dimensions, piecewise linear functions extend to domains divided into polyhedra, enabling multidimensional modeling.

Piecewise linear functions find broad applications in mathematics and related fields, including curve fitting to data with changing trends, such as agricultural yield models, and piecewise linear regression to detect structural breaks in statistical data. They are also essential in optimization, where they approximate nonlinear costs or objectives in mixed-integer formulations, and in machine learning for representing activation functions like ReLU in neural networks. In engineering, they model piecewise-linear networks for circuit analysis and design.

Core Concepts

Definition

A piecewise linear function, more precisely termed a piecewise affine function in modern usage, is a function f: X \to \mathbb{R}^m where X \subseteq \mathbb{R}^n is the domain, obtained by partitioning X into a finite number of polyhedral sets \{P_i\}_{i \in I} (the pieces) such that the interiors of the P_i are disjoint, their union covers X, and f restricts to an affine map on each piece P_i. On each P_i, the affine form is given by f(x) = A_i x + b_i, \quad x \in P_i, where A_i \in \mathbb{R}^{m \times n} is a matrix and b_i \in \mathbb{R}^m is a constant vector. This structure allows the function to exhibit linear behavior locally within each polyhedral region while enabling global nonlinearity through the partitioning. The terminology "piecewise linear" is often used interchangeably with "piecewise affine," though strictly speaking, the latter emphasizes the inclusion of the constant term b_i, providing greater generality over purely homogeneous linear functions (where b_i = 0). In contexts requiring homogeneity, such as certain geometric or algebraic applications, piecewise linear may refer exclusively to functions without the additive constant. The polyhedral pieces are typically defined by linear inequalities, ensuring the domain decomposition aligns with linear programming frameworks. Piecewise linear functions gained prominence in the mid-20th century in operations research and optimization as a tool for approximating nonlinear problems, building on foundational developments in linear programming such as George Dantzig's 1947 introduction of the simplex method.
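The definition above can be sketched directly in code. The following is a minimal illustration (not a library implementation): each piece carries its own affine map \langle a_i, x \rangle + b_i together with the linear inequalities \langle c, x \rangle \leq d describing its polyhedral region; the function and the two-piece example f(x_1, x_2) = |x_1| + x_2 are hypothetical constructions for this sketch.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def pw_affine(x, pieces):
    """Evaluate f(x) = <a_i, x> + b_i on the polyhedral piece containing x.
    Each piece is (a, b, constraints), where constraints = [(c, d), ...]
    describes the region {x : <c, x> <= d}."""
    for a, b, constraints in pieces:
        if all(dot(c, x) <= d + 1e-12 for c, d in constraints):
            return dot(a, x) + b
    raise ValueError("x is outside the partitioned domain")

# Hypothetical example: f(x1, x2) = |x1| + x2, split along the hyperplane x1 = 0.
pieces = [
    ((-1.0, 1.0), 0.0, [((1.0, 0.0), 0.0)]),   # piece on {x1 <= 0}: f = -x1 + x2
    (( 1.0, 1.0), 0.0, [((-1.0, 0.0), 0.0)]),  # piece on {x1 >= 0}: f =  x1 + x2
]
print(pw_affine((-2.0, 1.0), pieces))  # 3.0
print(pw_affine(( 2.0, 1.0), pieces))  # 3.0
```

On the shared boundary x_1 = 0 both pieces match (continuity), so returning the first piece found is harmless there.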

Examples

One prominent example of a piecewise linear function is the absolute value function, defined as f(x) = |x| = \begin{cases} -x & \text{if } x < 0 \\ x & \text{if } x \geq 0 \end{cases} Its graph forms a V-shape with the vertex at the origin, consisting of two half-lines: one with slope -1 for negative x and one with slope 1 for non-negative x. In machine learning, the rectified linear unit (ReLU) activation function serves as another fundamental piecewise linear example, expressed as f(x) = \max(0, x) = \begin{cases} 0 & \text{if } x < 0 \\ x & \text{if } x \geq 0 \end{cases} This function outputs zero for negative inputs and follows the identity line with slope 1 for non-negative inputs, promoting sparsity and efficient computation in neural networks. Piecewise linear functions extend naturally to multiple dimensions; for instance, in two dimensions, a function can be defined over a triangulated domain where it is affine on each triangular region, interpolating linearly via barycentric coordinates from vertex values. Such constructions are common in finite element methods for approximating surfaces or fields on planar domains. A non-continuous piecewise linear function arises in approximations of step functions, such as the Heaviside step function H(x) = 0 for x < 0 and H(x) = 1 for x \geq 0, which is piecewise constant—hence linear with zero slope—across the discontinuity at x = 0. Smoother approximations might insert a short linear ramp segment over a narrow interval around the jump to model transitions while preserving overall piecewise linearity.
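The two-dimensional triangulated case can be made concrete with a short sketch. The helper below is a hypothetical, self-contained implementation of barycentric interpolation on a single triangle: it is affine in the query point and reproduces the vertex values exactly, which is precisely the piecewise linear interpolant described above.

```python
def barycentric_interp(p, tri, values):
    """Linear interpolation on one triangle: affine in p and exact at the
    vertices. tri is a list of three (x, y) vertices; values holds the
    corresponding function values."""
    (x1, y1), (x2, y2), (x3, y3) = tri
    px, py = p
    # barycentric coordinates from the standard 2x2 determinant formulas
    det = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
    l1 = ((y2 - y3) * (px - x3) + (x3 - x2) * (py - y3)) / det
    l2 = ((y3 - y1) * (px - x3) + (x1 - x3) * (py - y3)) / det
    l3 = 1.0 - l1 - l2
    return l1 * values[0] + l2 * values[1] + l3 * values[2]

tri = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
vals = [1.0, 3.0, 5.0]
print(barycentric_interp((0.0, 0.0), tri, vals))  # 1.0 (vertex value reproduced)
print(barycentric_interp((0.5, 0.5), tri, vals))  # 4.0 (average along the far edge)
```

A full piecewise linear surface applies this per-triangle formula over every triangle of a mesh, and adjacent triangles agree along shared edges because both interpolate the same two vertex values there.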

Properties

Continuity and Differentiability

A piecewise linear function, defined as a function that is affine on each subdomain of a partition of its domain, is continuous at a breakpoint c if the left-hand limit \lim_{x \to c^-} f(x), the right-hand limit \lim_{x \to c^+} f(x), and the function value f(c) are equal. This condition holds when the affine expressions for the adjacent pieces match at c, ensuring no jump discontinuity occurs. In one dimension, if the pieces are f(x) = A_l x + b_l for x < c and f(x) = A_r x + b_r for x \geq c, continuity requires A_l c + b_l = A_r c + b_r. In higher dimensions, continuity extends to the pieces agreeing on the shared hyperplanes or faces between adjacent polytopes in the domain partition. For adjacent pieces defined by affine functions f_P(x) = c_P^\top x + d_P on polytope P and f_Q(x) = c_Q^\top x + d_Q on polytope Q, the condition is c_P^\top x + d_P = c_Q^\top x + d_Q for all x in the intersection P \cap Q, which simplifies to matching values at vertices of the shared face. This ensures the function is well-defined and continuous across boundaries. Piecewise linear functions are differentiable at interior points of each piece, where the derivative equals the constant slope (gradient) of the affine expression for that piece. At breakpoints, differentiability requires the slopes of adjacent pieces to match; if A_l = A_r in one dimension (or gradients agree across the boundary hyperplane in higher dimensions), the function is differentiable with derivative A_l. Otherwise, the function is not differentiable at the breakpoint due to a corner or kink. The one-sided derivatives always exist and are given by the adjacent slopes: the left derivative f'_-(c) = A_l and the right derivative f'_+(c) = A_r. 
For convex piecewise linear functions, such as the pointwise maximum of affine functions f(x) = \max_i (a_i^\top x + b_i), the subdifferential at a point x is the convex hull of the gradients of the active pieces: \partial f(x) = \mathrm{conv}\{a_i \mid i \in I(x)\}, where I(x) = \{i \mid a_i^\top x + b_i = f(x)\}. At interior points or smooth boundaries where only one piece is active, the subdifferential is a singleton equal to the gradient, indicating differentiability. At kinks where multiple pieces are active, the subdifferential is a polytope with dimension greater than zero, confirming non-differentiability. For example, the absolute value function f(x) = |x|, which is piecewise linear with pieces f(x) = -x for x < 0 and f(x) = x for x \geq 0, is continuous everywhere but not differentiable at x = 0, where \partial f(0) = [-1, 1].
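For the scalar case, the subdifferential formula reduces to an interval of slopes, which is easy to compute. The sketch below (a hypothetical helper, not library code) evaluates \partial f(x) for f(x) = \max_i (a_i x + b_i) as the interval spanned by the slopes of the active pieces, reproducing \partial |x|(0) = [-1, 1].

```python
def subdifferential(x, pieces, tol=1e-9):
    """Subdifferential of f(x) = max_i (a_i * x + b_i) at scalar x:
    the interval [min, max] over slopes a_i of the active pieces,
    i.e. those attaining the maximum."""
    fx = max(a * x + b for a, b in pieces)
    active = [a for a, b in pieces if abs(a * x + b - fx) <= tol]
    return min(active), max(active)

# f(x) = |x| = max(-x, x)
pieces = [(-1.0, 0.0), (1.0, 0.0)]
print(subdifferential(0.0, pieces))  # (-1.0, 1.0): kink, not differentiable
print(subdifferential(2.0, pieces))  # (1.0, 1.0): singleton, derivative 1
```

A singleton result signals differentiability at that point; a nondegenerate interval signals a kink.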

Convexity and Other Structural Properties

A piecewise linear function in one dimension is convex if and only if the slopes of its successive linear pieces are non-decreasing, assuming the pieces are ordered according to their domains along the real line. This condition ensures that the function lies below any chord connecting two points on its graph, preserving the defining property of convexity. More generally, in higher dimensions, a piecewise linear function is convex if it can be represented as the pointwise maximum of a finite collection of affine functions, as the maximum of convex functions is itself convex. Monotonicity of a piecewise linear function follows directly from the signs of its slopes. Specifically, the function is non-decreasing over its domain if every slope is non-negative, and it is strictly increasing if every slope is positive. For convex piecewise linear functions, non-decreasing slopes combined with a non-negative initial slope imply global non-decreasing behavior; strict monotonicity additionally requires all slopes to be positive. The boundedness of a piecewise linear function depends on its domain and the slopes of the boundary pieces. On a bounded domain, the function is always bounded above and below, as it is continuous and the domain is compact. On an unbounded domain, such as the entire real line, the function is bounded below if the leftmost slope is non-positive and the rightmost slope is non-negative (or vice versa for bounded above), but it becomes unbounded in the direction where the boundary slope has the same sign as the direction of extension. Piecewise linear functions exhibit Lipschitz continuity when their slopes are bounded. In particular, if the absolute values of all slopes are finite, the function is globally Lipschitz continuous over its domain, with the Lipschitz constant equal to the supremum of the absolute values of the slopes; this follows from the fact that the function's variation is controlled by the steepest linear piece.
For convex piecewise linear functions, local Lipschitz continuity holds on open convex sets, but global bounds require uniform control on the slopes.
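The slope criteria above translate into one-line checks. The following sketch (hypothetical helpers operating on the ordered list of segment slopes of a one-dimensional piecewise linear function) tests convexity via non-decreasing slopes and computes the global Lipschitz constant as the largest absolute slope.

```python
def is_convex(slopes):
    """A 1-D piecewise linear function is convex iff its successive slopes,
    ordered along the real line, are non-decreasing."""
    return all(s1 <= s2 for s1, s2 in zip(slopes, slopes[1:]))

def lipschitz_constant(slopes):
    # the global Lipschitz constant equals the steepest absolute slope
    return max(abs(s) for s in slopes)

print(is_convex([-2.0, -1.0, 0.5, 3.0]))           # True: slopes increase
print(is_convex([1.0, -1.0]))                      # False: slope decreases at the kink
print(lipschitz_constant([-2.0, -1.0, 0.5, 3.0]))  # 3.0
```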

Approximation Techniques

Fitting to Curves

Piecewise linear functions provide a fundamental approach to approximating smooth nonlinear curves by dividing the domain interval [a, b] into subintervals and fitting linear segments on each. In uniform piecewise linear interpolation, the interval is partitioned into n equal subintervals using knots x_i = a + i h for i = 0, 1, \dots, n, where h = (b - a)/n. The approximation p(x) is then constructed by connecting the points (x_i, f(x_i)) with straight line segments, ensuring the function passes exactly through these evaluation points. This method is straightforward to implement and computationally efficient, as each segment is defined by the simple linear formula p(x) = f(x_i) + \frac{f(x_{i+1}) - f(x_i)}{h} (x - x_i) for x \in [x_i, x_{i+1}]. For a twice continuously differentiable function f, the error in this approximation is bounded by |f(x) - p(x)| \leq \frac{1}{8} h^2 \max_{t \in [x_i, x_{i+1}]} |f''(t)| on each subinterval [x_i, x_{i+1}], leading to an overall error of order O(h^2 \max |f''|). With equidistant knots, this simplifies to |f(x) - p(x)| \leq C (b - a)^2 / n^2, where C depends on the maximum second derivative, demonstrating second-order convergence as the number of subintervals increases. These bounds highlight the method's accuracy for functions with bounded curvature, though the error grows quadratically with subinterval length. To enhance accuracy without uniformly increasing the number of segments, adaptive methods refine the knot placement in regions of high curvature, such as where the second derivative is large. These approaches estimate curvature—often via local quadratic approximations or direct computation of |f''(x)|—and insert additional knots to reduce subinterval lengths proportionally to the local variation, ensuring a more equitable error distribution across the domain. 
For instance, curvature-threshold-based subdivision identifies key points where the curve's bending exceeds a specified tolerance, allowing targeted refinement while maintaining computational efficiency. Piecewise linear interpolation corresponds to a spline of degree 1, characterized by C^0 continuity at the knots, meaning the function is continuous but its derivative may exhibit jumps. This structure arises naturally from matching function values at knot points without enforcing derivative continuity, distinguishing it from higher-degree splines. In practice, the construction algorithm begins with knot selection (uniform or adaptive), followed by evaluation of f at these points, and then determination of linear coefficients on each segment either by direct connection for interpolation or via local least-squares fitting for minor smoothing. Historical roots of these techniques lie in 19th-century graphical methods for astronomical tabulations and engineering drawings, with formal theoretical development in numerical analysis accelerating after the 1950s alongside electronic computing advancements.
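The uniform construction and its O(h^2) error bound can be verified numerically. The sketch below (a hypothetical, self-contained implementation) builds the uniform piecewise linear interpolant of sin on [0, \pi] and checks that the observed error respects the bound h^2 \max|f''| / 8, which here equals h^2/8 since |\sin''| \leq 1.

```python
import math

def pl_interpolate(f, a, b, n):
    """Uniform piecewise linear interpolant of f on [a, b] with n subintervals."""
    h = (b - a) / n
    xs = [a + i * h for i in range(n + 1)]
    ys = [f(x) for x in xs]
    def p(x):
        i = min(int((x - a) / h), n - 1)  # locate the containing subinterval
        return ys[i] + (ys[i + 1] - ys[i]) / h * (x - xs[i])
    return p

f, a, b, n = math.sin, 0.0, math.pi, 50
p = pl_interpolate(f, a, b, n)
h = (b - a) / n

# sample the error densely and compare against the theoretical bound h^2/8
err = max(abs(f(a + k * (b - a) / 5000) - p(a + k * (b - a) / 5000))
          for k in range(5001))
print(err <= h * h / 8)  # True: the observed error respects the O(h^2) bound
```

Doubling n shrinks h by half and the bound by a factor of four, matching the second-order convergence described above.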

Fitting to Data

Fitting piecewise linear functions to discrete, noisy datasets is a common task in statistical modeling, where the goal is to approximate the underlying relationship between variables using segments of linear functions joined at knots, while accounting for observational errors. The standard approach employs least squares estimation to minimize the sum of squared residuals between observed data points and the predicted values from the piecewise linear model. This involves solving an optimization problem where the model parameters—slopes, intercepts, and knot locations—are adjusted to best fit the data, assuming the errors are independent and identically distributed. For a dataset \{(x_i, y_i)\}_{i=1}^n, the objective is to minimize \sum_{i=1}^n (y_i - f(x_i))^2, where f is a continuous piecewise linear function with k breaks (knots) at positions \tau_1 < \tau_2 < \dots < \tau_k, such that f(x) = \beta_{j0} + \beta_{j1} x for x in the j-th interval defined by the knots. When knots are fixed in advance, the problem reduces to separate linear regressions on each segment, with continuity constraints enforced at the joins to ensure smoothness. For variable knots, optimization algorithms such as differential evolution or dynamic programming are used to simultaneously estimate knot positions and segment parameters, as the objective function is non-convex but can leverage the piecewise linearity for efficient computation. Knot selection is crucial for balancing model complexity and fit, often guided by information criteria like the Akaike information criterion (AIC) or Bayesian information criterion (BIC), which penalize additional pieces to avoid overfitting. These criteria evaluate models with varying numbers of knots, selecting the one that minimizes AIC = -2 log(L) + 2p or BIC = -2 log(L) + p log(n), where L is the likelihood and p the number of parameters.
Alternative methods include forward selection, starting with a single line and iteratively adding knots where they most reduce residuals, or backward selection from an overparameterized model. Change-point detection methods identify potential knots by testing for structural breaks in the data, such as abrupt shifts in slope. Statistical tests like the cumulative sum (CUSUM) procedure monitor the partial sums of residuals or score statistics to detect deviations from a null linear model, signaling a change point when the CUSUM statistic exceeds a threshold derived from asymptotic distributions or simulations. For multiple change points, sequential testing or penalized likelihood approaches extend this framework. Implementations are available in statistical software; for example, the R package 'segmented' fits piecewise linear models via iterative least squares, estimating breakpoints and supporting AIC/BIC for selection. In Python, the 'pwlf' library uses differential evolution to optimize continuous piecewise linear fits for a specified number of segments. A basic pseudocode for segmented regression with fixed knots might proceed as follows:
Initialize parameters: slopes and intercepts for each segment
While convergence not achieved:
    For each segment j:
        Fit linear regression to data points in interval j, using continuity at knots
    Update knot positions if variable (e.g., via grid search or optimization)
    Compute residuals and check change in objective < tolerance
Return fitted parameters and function
This iterative process ensures continuity and minimizes the least squares objective. Noisy data is typically handled under the assumption of Gaussian errors in the least squares framework, leading to maximum likelihood estimates equivalent to the minimizer of the squared residual sum. For robustness against outliers, the objective can integrate Huber loss, which combines quadratic behavior for small residuals (|r| \leq \delta) with linear penalties for large ones (|r| > \delta), defined as \rho(r) = r^2/2 if |r| \leq \delta, else \delta(|r| - \delta/2). This approach reduces sensitivity to heavy-tailed errors while maintaining efficiency for Gaussian-distributed noise.
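With fixed knots, the continuous piecewise linear fit is an ordinary linear least squares problem in the truncated ("hinge") basis 1, x, (x - \tau_j)_+, since continuity is built into the basis. The sketch below is a hypothetical, dependency-free illustration of this approach (it solves the normal equations by Gaussian elimination, not via any particular package); the knot at x = 5 and the data-generating model are invented for the example.

```python
def hinge_basis(x, knots):
    # continuous piecewise linear basis: 1, x, and one hinge (x - tau_j)_+ per knot
    return [1.0, x] + [max(0.0, x - t) for t in knots]

def lstsq(X, y):
    """Solve the normal equations X^T X beta = X^T y by Gaussian elimination
    with partial pivoting (fine for a handful of parameters)."""
    n = len(X[0])
    A = [[sum(X[k][i] * X[k][j] for k in range(len(X))) for j in range(n)]
         for i in range(n)]
    b = [sum(X[k][i] * y[k] for k in range(len(X))) for i in range(n)]
    for i in range(n):
        piv = max(range(i, n), key=lambda r: abs(A[r][i]))
        A[i], A[piv] = A[piv], A[i]
        b[i], b[piv] = b[piv], b[i]
        for r in range(i + 1, n):
            m = A[r][i] / A[i][i]
            for c in range(i, n):
                A[r][c] -= m * A[i][c]
            b[r] -= m * b[i]
    beta = [0.0] * n
    for i in reversed(range(n)):
        beta[i] = (b[i] - sum(A[i][c] * beta[c] for c in range(i + 1, n))) / A[i][i]
    return beta

# Noise-free data from a known truth with a knot at x = 5:
# slope 1 before the knot, slope 3 after (hypothetical example).
knots = [5.0]
xs = [i * 0.5 for i in range(21)]                 # 0.0, 0.5, ..., 10.0
ys = [x + 2.0 * max(0.0, x - 5.0) for x in xs]
X = [hinge_basis(x, knots) for x in xs]
beta = lstsq(X, ys)
print(beta)  # ≈ [0.0, 1.0, 2.0]: intercept, base slope, slope change at the knot
```

Variable knots would wrap this inner solve in an outer search over \tau (grid search, differential evolution, or dynamic programming), exactly as in the pseudocode above.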

Extensions

Generalizations

Piecewise linear functions generalize to multivariate settings by extending the domain to \mathbb{R}^n and partitioning it into polyhedral regions defined by intersections of hyperplanes, ensuring compatibility across boundaries to maintain function properties such as continuity where desired. On each polyhedron P_i in this partition, the function takes the affine form f(x) = a_i^T x + b_i, where a_i \in \mathbb{R}^n and b_i \in \mathbb{R}, allowing representation of complex behaviors through linear pieces in higher dimensions. Such multivariate piecewise linear functions are particularly useful in constructing piecewise linear homeomorphisms, which are bijective mappings that preserve topological structure and are affine on each polyhedral cell, facilitating approximations of invertible transformations in geometric and optimization contexts. In settings with potentially infinite pieces, continuous piecewise linear (CPL) functions emerge as limits of finite piecewise linear approximations, where linearity holds on intervals or polyhedra separated by countably many breakpoints, while ensuring overall continuity. A key theoretical foundation is the density of piecewise linear functions in the space of continuous functions on compact sets, such as C[0,1] under the supremum norm; this follows from a lattice version of the Stone–Weierstrass theorem, since continuous piecewise linear functions form a sublattice (closed under pointwise maxima and minima) that separates points, and hence approximates any continuous function arbitrarily closely. This density extends to multivariate cases on compact subsets of \mathbb{R}^n, underscoring the expressive power of piecewise linear structures for universal approximation. Piecewise linear functions integrate deeply with nonsmooth analysis, particularly in handling variational inequalities, where the function's linear pieces on polyhedral domains allow second-order variational analysis to characterize stability and regularity without relying on differentiability.
For instance, generalized derivatives of these functions enable the formulation and solution of variational inequalities over nonsmooth sets, providing tools for parametric stability in optimization problems. Recent advances in the 2020s have leveraged deep learning to construct high-dimensional piecewise linear approximations, notably through ReLU neural networks, which inherently define multivariate piecewise linear functions via partitions induced by activation thresholds, achieving efficient approximation rates for functions in \mathbb{R}^n. More recent developments include trainable piecewise-linear spline activations for improved performance in image classification and physics modeling as of 2025, and the TeLU activation function for faster and more stable training in 2024. These networks exploit the piecewise linear nature to scale to high dimensions, offering improved scalability over traditional methods for tasks requiring fine-grained approximations on complex domains.
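The fact that a ReLU network computes a continuous piecewise linear function can be seen in a few lines. The sketch below is a hypothetical one-hidden-layer network with hand-picked weights: each hidden unit contributes one hinge, and the particular weights shown realize |x| = \mathrm{relu}(x) + \mathrm{relu}(-x).

```python
def relu(v):
    return [max(0.0, x) for x in v]

def tiny_relu_net(x, W1, b1, w2, b2):
    """One-hidden-layer ReLU network from R to R. Each hidden unit is a hinge
    max(0, w*x + b), so the whole network is continuous piecewise linear in x,
    with breakpoints where hidden units switch on or off."""
    h = relu([w * x + b for w, b in zip(W1, b1)])
    return sum(w * a for w, a in zip(w2, h)) + b2

# Hypothetical weights realizing |x| = relu(x) + relu(-x)
W1, b1, w2, b2 = [1.0, -1.0], [0.0, 0.0], [1.0, 1.0], 0.0
print(tiny_relu_net(-3.0, W1, b1, w2, b2))  # 3.0
print(tiny_relu_net( 2.0, W1, b1, w2, b2))  # 2.0
```

In higher dimensions the same mechanism holds: the activation thresholds of the hidden units carve the input space into polyhedral regions, on each of which the network is affine.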

Specializations

Continuous piecewise linear functions, often abbreviated as CPL functions, impose the additional constraint of continuity at all breakpoints on the general piecewise linear form. This ensures that the function value matches from both sides at each breakpoint, resulting in a smooth connection between adjacent linear segments without jumps. The set of all such functions over a fixed domain with specified breakpoints forms a vector space, as linear combinations and scalar multiples preserve both the piecewise linearity and continuity properties. For CPL functions defined on a closed interval [a, b] with k interior knots, the dimension of this space is k + 2. This dimension arises because the function is equivalent to specifying independent values at the k + 2 knots (including the endpoints), with the function linearly interpolating between them. Convex piecewise linear functions represent a further restricted subclass where the overall function is convex, typically achieved by ensuring that the slopes of successive linear segments are nondecreasing across the ordered breakpoints. This ordering of slopes guarantees that the graph lies above its tangents and satisfies the convexity inequality f(\lambda x + (1-\lambda) y) \leq \lambda f(x) + (1-\lambda) f(y) for \lambda \in [0,1]. Such functions are particularly useful in optimization contexts due to their compatibility with convex analysis tools. A key representation of convex piecewise linear functions is as the pointwise maximum of a finite collection of affine functions, i.e., f(x) = \max_i (a_i^T x + b_i), where the epigraph \{(x, t) \mid t \geq f(x)\} forms a convex polyhedral set. This max-affine form highlights the convexity, as the maximum of convex (affine) functions is convex, and the epigraph's polyhedral structure facilitates computational handling. Linear splines constitute a specialization of piecewise linear functions with fixed knot locations, where the function is linear between knots and continuous overall.
These form the foundational basis for constructing higher-degree splines, such as cubic splines, through recursive integration or differentiation while maintaining the knot structure. The fixed knots ensure a stable partition of the domain, enabling systematic basis expansion for approximation purposes. Tropical piecewise linear functions arise in the context of max-plus algebra, a structure where addition is replaced by maximization and multiplication by standard addition. In this framework, a tropical piecewise linear function takes the form f(x) = \max_i (A_i x + b_i), representing the tropical analog of classical polynomials and inheriting piecewise linearity from the dominating linear terms in different regions. These functions are inherently convex and play a role in tropical geometry for modeling piecewise linear surfaces. Multivariate extensions of these specializations exist, adapting the continuous, convex, or tropical structures to higher dimensions while preserving core properties.
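A tropical polynomial can be evaluated with the same max-affine machinery described above. The sketch below is a hypothetical one-variable example: under max-plus semantics, a monomial with coefficient b_i and exponent a_i becomes the affine term a_i x + b_i, and tropical addition of monomials becomes a pointwise maximum, yielding a convex piecewise linear function.

```python
def tropical_poly(x, terms):
    """Evaluate a tropical (max-plus) polynomial f(x) = max_i (a_i * x + b_i).
    Tropical 'multiplication' is ordinary addition and tropical 'addition' is
    max, so each monomial contributes one affine term."""
    return max(a * x + b for a, b in terms)

# f(x) = max(2x, x + 1, 3): convex, with breakpoints where two terms tie
terms = [(2.0, 0.0), (1.0, 1.0), (0.0, 3.0)]
print(tropical_poly(0.0, terms))  # 3.0 (the constant term dominates)
print(tropical_poly(1.0, terms))  # 3.0 (three-way tie at the breakpoint region)
print(tropical_poly(4.0, terms))  # 8.0 (the steepest term dominates)
```

The breakpoints of the resulting piecewise linear graph, where two affine terms tie for the maximum, are exactly the "roots" studied in tropical geometry.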

Applications

In Optimization

Piecewise linear functions are essential in optimization for modeling nonsmooth objectives and constraints that arise in applications such as resource allocation and network flows. A convex piecewise linear function can be represented as the pointwise maximum of a finite collection of affine functions, which facilitates its reformulation into an equivalent linear program through the introduction of epigraph variables. This approach transforms nonsmooth minimization problems into tractable linear constraints, preserving the problem's structure while enabling standard solvers. Consider minimizing f(x) where f is a convex piecewise linear function defined over pieces with affine expressions a_i^T x + b_i for i = 1, \dots, m. This is equivalent to the linear program \begin{align*} \min &\quad t \\ \text{s.t.} &\quad t \geq a_i^T x + b_i, \quad i = 1, \dots, m \\ &\quad x \in \mathbb{R}^n, \ t \in \mathbb{R}. \end{align*} The epigraph variable t upper-bounds the function value, ensuring the objective captures the maximum over the linear pieces. To incorporate piecewise linear constraints directly into linear programming frameworks, special ordered sets of type 2 (SOS2) are utilized, which enforce that at most two adjacent variables in an ordered set are nonzero, modeling convex combinations within a single piece efficiently without auxiliary integer variables. This technique integrates seamlessly with the simplex method, avoiding the need for explicit mixed-integer formulations in convex cases. For nonconvex piecewise linear functions, where the objective or constraints may exhibit local minima or discontinuities, mixed-integer programming formulations employ binary variables to select the active piece among the candidates.
A representative approach is the disaggregated convex combination formulation, which uses binary variables y_P \in \{0,1\} to indicate the selected piece P and nonnegative continuous variables \lambda_{P,v} to form a convex combination of breakpoints v within that piece, subject to \sum_{P} y_P = 1 and \sum_{v \in V(P)} \lambda_{P,v} = y_P. This ensures global optimality by enumerating piece selections through integer constraints. Efficient algorithms exploit the structure of these formulations: for convex piecewise linear problems, the convex simplex method extends the classical simplex algorithm to handle separable convex objectives under linear constraints, pivoting along piecewise linear rays to achieve finite convergence. In nonconvex settings, branch-and-bound or branch-and-cut procedures partition the domain based on SOS2 constraints or piece selections, generating valid inequalities such as lifted cover cuts to strengthen relaxations and prune suboptimal branches without mandatory binary variables for separable functions. The use of piecewise linear functions in optimization traces back to operations research in the 1960s, when E. M. L. Beale developed early mixed-integer programming codes incorporating branch-and-bound for nonsmooth problems, enabling practical solutions to industrial applications such as production planning. Beale and Tomlin's introduction of SOS2 in 1970 further solidified their role by providing a specialized mechanism for nonconvex piecewise linear modeling within general mathematical programming systems.
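The epigraph reformulation can be illustrated in the one-dimensional case without an LP solver. The sketch below is a hypothetical brute-force solver: the epigraph LP min t subject to t \geq a_i x + b_i has its optimum at a vertex where two constraints are tight, i.e. where two lines cross, so it suffices to enumerate pairwise intersections (this assumes f is bounded below, meaning some slopes are negative and some positive).

```python
from itertools import combinations

def min_max_affine(pieces):
    """Minimize f(x) = max_i (a_i * x + b_i) for scalar x via its epigraph LP
        min t  s.t.  t >= a_i x + b_i,
    solved by brute force: optimal vertices lie where two constraint lines
    a_i x + b_i cross. Assumes f is bounded below (mixed-sign slopes)."""
    def f(x):
        return max(a * x + b for a, b in pieces)
    best = None
    for (a1, b1), (a2, b2) in combinations(pieces, 2):
        if a1 == a2:
            continue  # parallel lines never intersect
        x = (b2 - b1) / (a1 - a2)
        if best is None or f(x) < best[1]:
            best = (x, f(x))
    return best  # (argmin, minimum value)

# f(x) = max(-x - 1, 2x + 2): V-shaped, minimized where the two lines cross
print(min_max_affine([(-1.0, -1.0), (2.0, 2.0)]))  # (-1.0, 0.0)
```

A production formulation would instead hand the same constraints t \geq a_i^T x + b_i to an LP solver, which scales to many pieces and dimensions; the toy enumeration only shows why the epigraph form works.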

In Computer Graphics and Modeling

In computer graphics, piecewise linear functions serve as a foundational tool for approximating complex curves and surfaces, enabling efficient rendering, manipulation, and interaction in visual modeling. By representing smooth geometries as sequences of straight line segments, these functions facilitate rasterization and interpolation on GPUs, balancing computational cost with visual fidelity. This approach is particularly valuable in scenarios requiring real-time performance, such as interactive rendering and games. One key application is the polygonization of curves, where smooth paths are approximated by polygonal chains composed of connected linear segments. This process reduces continuous curves to discrete piecewise linear representations suitable for display and processing. For example, algorithms optimize segment placement based on chord properties to achieve accurate approximations of space curves while minimizing the number of vertices. Similarly, Bézier curves are linearized through subdivision into linear segments for rasterization; each cubic or quadratic Bézier segment is iteratively split and triangulated within its convex hull, allowing GPU-based rendering of vector art without excessive computational overhead. The parametric form of a linear segment underscores its simplicity and utility in graphics pipelines. For a segment connecting points P_i = (x_i, y_i) and P_{i+1} = (x_{i+1}, y_{i+1}) over t \in [0, 1], \begin{align*} x(t) &= x_i + t (x_{i+1} - x_i), \\ y(t) &= y_i + t (y_{i+1} - y_i). \end{align*} This linear interpolation, often denoted as lerp, ensures C^0 continuity at junctions and directly supports affine transformations such as translation and scaling. In surface modeling, triangular meshes represent surfaces as piecewise linear approximations over simplicial complexes, where each triangle forms a planar facet connecting three vertices. This structure approximates curved manifolds with connected, non-overlapping triangles, enabling interpolation of attributes like normals and textures via barycentric coordinates for smooth shading. Such meshes are ubiquitous in rendering pipelines, supporting applications from real-time visualization to finite element analysis.
Piecewise linear boundaries also enhance collision detection by defining object geometries as polygonal or planar patches, allowing efficient intersection tests between elements like edges and faces. In composite models, linear segments reduce 3D collision queries to lower-dimensional problems, such as line-quadric or line-line intersections, solved via root-finding for contact times. This enables robust, continuous detection in dynamic scenes without full curve evaluations. Modern implementations leverage these principles in CAD software and games. In tools like AutoCAD, polylines embody piecewise linear paths as single objects comprising sequential line segments, supporting precise drafting and editing of complex outlines. In gaming, attribute interpolation within shaders—performed automatically by the GPU rasterizer—blends vertex attributes across triangle faces, optimizing real-time effects like color gradients and deformations on post-2010 hardware with unified shader architectures.
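The lerp formula and Bézier flattening described above can be combined in a few lines. The sketch below (a hypothetical, self-contained example) evaluates points on a cubic Bézier curve with de Casteljau's algorithm, which is nothing but repeated lerps, and collects them into a polyline, the piecewise linear approximation used for rasterization.

```python
def lerp(p, q, t):
    # linear interpolation between two 2-D points: the C^0 building block
    return (p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1]))

def flatten_cubic(p0, p1, p2, p3, n):
    """Approximate a cubic Bezier curve by a polyline with n segments,
    evaluating each sample point via de Casteljau's repeated lerps."""
    pts = []
    for i in range(n + 1):
        t = i / n
        a, b, c = lerp(p0, p1, t), lerp(p1, p2, t), lerp(p2, p3, t)
        d, e = lerp(a, b, t), lerp(b, c, t)
        pts.append(lerp(d, e, t))
    return pts

poly = flatten_cubic((0, 0), (0, 1), (1, 1), (1, 0), 4)
print(poly[0], poly[-1])  # (0.0, 0.0) (1.0, 0.0): endpoints are interpolated exactly
print(poly[2])            # (0.5, 0.75): the curve point at t = 0.5
```

A uniform parameter step is the simplest choice; production flatteners instead subdivide adaptively where curvature is high, mirroring the adaptive knot placement discussed in the approximation section.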

References

  1. [1]
    Piecewise Linear Function - Department of Mathematics at UTSA
    Oct 30, 2021 · A piecewise linear function is a function defined on a (possibly unbounded) interval of real numbers, such that there is a collection of intervals on each of ...
  2. [2]
    [PDF] LECTURE 2 – LINEAR OPTIMIZATION MODEL - NC State ISE
    Properties: (i) A piecewise linear function is a convex function. (ii) If 𝑓𝑓 and 𝑔𝑔 are piecewise linear, then 𝑓𝑓 + 𝑔𝑔 is piecewise linear. (iii) ...Missing: mathematics | Show results with:mathematics
  3. [3]
    8.8 - Piecewise Linear Regression Models | STAT 501
    The basic idea behind piecewise linear regression is that if the data follow different linear trends over different regions of the data then we should model the ...
  4. [4]
    [PDF] A Superior Representation Method for Piecewise Linear Functions ...
    This paper shows that two Mixed Integer Linear Programming (MILP) formulations for piecewise linear functions introduced by Li et al. (2008) are both ...
  5. [5]
    Piecewise linear optimization in machine learning
    Piecewise linear functions in ML, like ReLU, are used in optimization over trained neural networks and in second price auctions, solved using LP and MIP.
  6. [6]
    [PDF] PIECEWISE-LINEAR NETWORK THEORY - DSpace@MIT
    The application of operational calculus to linear electrical networks, and more recently, of. Boolean algebra to switching circuits, are two striking examples.
  7. [7]
    Piecewise Affine Systems | SpringerLink
    Polyhedral piecewise affine systems or piecewise affine systems (PWA) [Son81, HDB01] for short are defined by partitioning the extended state-input space ...<|control11|><|separator|>
  8. [8]
    Piecewise affine functions and polyhedral sets ∗ : Optimization
    In this paper we present a number of characterizations of piecewise affine and piecewise linear functions defined on finite dimesional normed vector spaces. In ...
  9. [9]
    Optimal complexity reduction of polyhedral piecewise affine systems
    Polyhedral piecewise systems are defined by partitioning a (polyhedral) input-space into polyhedra and associating with each polyhedron a function. Major ...
  10. [10]
    Fixed points of rational continuous piecewise affine maps
    Dec 21, 2022 · (People sometimes write "piecewise linear" where I write "piecewise affine" and abbreviate "continuous piecewise linear" as "cpl".) The ...
  11. [11]
    [PDF] 1. Introduction - Stanford Engineering Everywhere
    ... piecewise-linear functions). Introduction. 1–6. Page 7. Convex optimization ... • 1947: simplex algorithm for linear programming (Dantzig). • 1960s: early ...
  12. [12]
    Piecewise Function -- from Wolfram MathWorld
    A piecewise function is a function that is defined on a sequence of intervals. A common example is the absolute value, |x|={-x for x<0; 0 for x=0; x for x>0.
  13. [13]
    [PDF] Rectified Linear Units Improve Restricted Boltzmann Machines
    The discriminative models use the deterministic version of NReLUs that implement the function y = max(0,x). ... compute explicitly (Nair & Hinton, 2008). This is ...
  14. [14]
    Robust Estimation of a Location Parameter - Project Euclid
    March, 1964 Robust Estimation of a Location Parameter. Peter J. Huber · DOWNLOAD PDF + SAVE TO MY LIBRARY. Ann. Math. Statist. 35(1): 73-101 (March, 1964). DOI ...
  15. [15]
    [PDF] V.3 Piecewise Linear Functions - Duke Computer Science
    A piecewise linear (PL) function f : K → R is defined by f(x) = Pi bi(x)f(ui), where ui are vertices of K and bi(x) are barycentric coordinates of x.
  16. [16]
    Differential Equations - Step Functions - Pauls Online Math Notes
    Nov 16, 2022 · In this section we introduce the step or Heaviside function. We illustrate how to write a piecewise function in terms of Heaviside functions ...
  17. [17]
    [PDF] Fitting Piecewise Linear Continuous Functions
    Aug 25, 2011 · For our purposes, a piecewise linear function is a continuous function f with domain ∪_{P ∈ P} P, where P is finite, each P ∈ P is a full-dimensional.
  18. [18]
    [PDF] 2. Subgradients
    Example: piecewise-linear function f(x) = max_{i=1,...,m} (a_i^T x + b_i). The subdifferential at x is a polyhedron: ∂f ...
  19. [19]
    [PDF] Chapter-17-Piecewise-Linear-Programs ... - AMPL
    A piecewise-linear function is convex if successive slopes are nondecreasing (along with the breakpoints), and is concave if the slopes are nonincreasing.
  20. [20]
    [PDF] 3. Convex functions
    if f1, ..., fm are convex, then f(x) = max{f1(x), ..., fm(x)} is convex. Examples: • piecewise-linear function: f(x) = max_{i=1,...,m} (a_i^T x + b_i) is convex • sum ...
  21. [21]
    [PDF] 2. Convexity
    2.49 Theorem (convex piecewise linear functions). A proper function f is both convex and piecewise linear if and only if epi f is polyhedral. In general for ...
  22. [22]
    Engineering at Alberta Courses » Piecewise Interpolation
    A spline of degree 1 is a piecewise linear (piecewise affine) function constructed by connecting a straight line between every two data points (Figure 1). A ...
  23. [23]
    5.2. Piecewise linear interpolation - Toby Driscoll
    Piecewise linear interpolation is simply a game of connect-the-dots. That is, the data points are joined pairwise by line segments.
  24. [24]
    [PDF] Piecewise polynomial interpolation - UMD MATH
    If we choose equidistant points with h_i = (b−a)/(n−1), we have |f(x) − p(x)| ≤ C(b−a)²/n², i.e., doubling the number of points reduces the error bound by a factor ...
  25. [25]
    An efficient and accurate interpolation method for parametric curve ...
    Sep 26, 2022 · A subsection interpolation method based on the curve curvature threshold is proposed to resolve the incompatible problem of machining accuracy and machining ...
  26. [26]
    [PDF] Data point selection for piecewise linear curve approximation
    First, the method for weighting data points and selecting a proper subset of them is described. Second, the selection technique is applied to curves (graphs of ...
  27. [27]
    [PDF] A chronology of interpolation: from ancient astronomy to modern ...
    This paper presents a chronological overview of the developments in interpolation theory, from the earliest times to the present.
  28. [28]
    Fitting a least squares piecewise linear continuous curve in two ...
    We solve the problem for the special case k = 2 by showing that an optimal solution essentially consists of two least squares linear regression lines in which ...
  29. [29]
    A Python Library for Fitting 1D Continuous Piecewise Linear Functions
    Feb 20, 2019 · PWLF is based on a differential evolution optimization algorithm where users can specify the location or numbers of break points (Storn and ...
  30. [30]
    [PDF] Fast Algorithms for Segmented Regression - Jerry Li
    Jun 21, 2016 · The least squares fit is simply the best-fit linear function to the data. ... The k-piecewise linear least squares estimator, denoted f̂^LS_k, is ...
  31. [31]
    A Free-Knot Spline Modeling Framework for Piecewise Linear ... - NIH
    Sep 29, 2014 · Our approach provided accurate knot selections when complex sampling weights were incorporated, while AIC and BIC were not effective under these ...
  32. [32]
    [PDF] segmented: An R Package to Fit Regression Models with Broken ...
    The package segmented offers facilities to estimate and summarize generalized linear models with segmented relationships; virtually, no limit on the number of ...
  33. [33]
    The use of cumulative sums for detection of changepoints in the rate ...
    The purpose of this paper is to develop a cumulative sum approach for detection of changepoints in the piecewise constant rate of a Poisson process by means of ...
  34. [34]
    Optimal and fast online change point estimation in linear regression
    Mar 7, 2025 · We demonstrate that a certain CUSUM-type statistic attains the minimax optimal rates for localizing the change point. Our minimax analysis ...
  35. [35]
    [PDF] Duality in Robust Linear Regression Using Huber's M-Estimator f(x) =
    Abstract--The robust linear regression problem using Huber's piecewise-quadratic M-estimator function is considered. Without exception, computational ...
  36. [36]
    [PDF] Piecewise-Linear Approximation: Multidimensional
    An issue that comes up is how to partition the domain. ▷ Let P be a set of polyhedra defining a partitioning of D: ∪_{P ∈ P} P = D, int(P) ∩ int(P′) = ∅ for all distinct P, P′ ∈ P.
  37. [37]
    Multivariate splines and hyperplane arrangements - ScienceDirect
    Multivariate splines, as piecewise polynomials, are defined on a partition of a higher dimensional domain. For convenience, we consider bivariate splines ...
  38. [38]
    Piecewise linear homeomorphisms - SpringerLink
    A homeomorphism is piecewise linear (relative to K and L) if there is a subdivision K 1 of K such that for each σ ∈ K 1 , f|σ maps σ linearly into a simplex of ...
  39. [39]
    Identification algorithm for standard continuous piecewise linear ...
    Standard continuous piecewise linear neural network (SCPLNN) is a new continuous piecewise linear (CPL) model. It can represent all the CPL functions and ...
  40. [40]
    Second-Order Analysis of Piecewise Linear Functions with ...
    This paper is devoted to second-order variational analysis of a rather broad class of extended-real-valued piecewise linear functions and their applications ...
  41. [41]
    Generalized differentiation of piecewise linear functions in second ...
    In this paper we calculate the second-order subdifferentials (generalized Hessians) of arbitrary convex piecewise linear functions, together with the ...
  42. [42]
    [PDF] Deep ReLU neural networks in high-dimensional approximation
    Jul 23, 2021 · We study the computation complexity of deep ReLU (Rectified Linear Unit) neural networks for the approximation of functions from the ...
  43. [43]
    ReLU Networks Are Universal Approximators via Piecewise Linear ...
    Nov 1, 2020 · This letter proves that a ReLU network can approximate any continuous function with arbitrary precision by means of piecewise linear or constant approximations.
  44. [44]
    [PDF] MA-219 Linear Algebra - 4. Dimension of vector spaces - IISc Math
    In particular, Dim_K U = n + 1. b) Let V be the R-vector space of the continuous piecewise linear functions R → R with partition points.
  45. [45]
    [PDF] Representing Piecewise-Linear Functions by Functions with Minimal ...
    The linear subspace W is isomorphic to the k-dimensional vector space R^k. ... General constructive representations for continuous piecewise-linear functions.
  46. [46]
    [PDF] Convex functions
    ▷ piecewise-linear function: f(x) = max_{i=1,...,m} (a_i^T x + b_i) ▷ sum of r ... ▷ max(x, y) is convex; x − y is affine ▷ 1 − max(x, y) is concave
  47. [47]
    [PDF] lecture 10: B-Splines
    A set of basis splines, depending only on the location of the knots and the degree of the approximating piecewise polynomials can be developed in a convenient, ...
  48. [48]
    Chapter 9 Splines | Machine Learning - Bookdown
    The basis of regression splines is piecewise polynomial regression. ... For a start let's fit a linear spline using our selected placement of knots.
  49. [49]
    Tropical Geometry and Piecewise-Linear Approximation of Curves ...
    Dec 9, 2019 · Tropical Geometry and Mathematical Morphology share the same max-plus and min-plus semiring arithmetic and matrix algebra.
  50. [50]
    [PDF] Multivariate tropical regression and piecewise-linear surface fitting
    For a known partition the convex PWL function is formed as the max of the local affine fits. Then, a PWL function generates a new partition which can be used to ...
  51. [51]
    Stable parameterization of continuous and piecewise-linear functions
    Continuous and piecewise-linear function. Stable parameterization. Riesz basis ... This network parameterizes a CPWL function; hence, its Lipschitz constant is ...
  52. [52]
    [PDF] Lecture 2 Piecewise-linear optimization
    let x be k-sparse with support I (i.e., with P_I x = x); define y = Ax. • consider any feasible x (i.e., satisfying Ax = y), different from x.
  53. [53]
    [PDF] Mixed-Integer Models for Nonseparable Piecewise Linear ...
    We study the modeling of non-convex piecewise linear functions as Mixed Integer Programming (MIP) problems. We review several new and existing MIP formulations ...
  54. [54]
    The Convex Simplex Method | Management Science - PubsOnLine
    This paper presents a method, called the convex simplex method, for minimizing a convex objective function subject to linear inequality constraints.
  55. [55]
    A Branch-and-Cut Algorithm Without Binary Variables ... - PubsOnLine
    Oct 1, 2006 · We give a branch-and-cut algorithm for solving linear programs (LPs) with continuous separable piecewise-linear cost functions (PLFs).
  56. [56]
    Summary of Beale's contributions to operations research in the 1960s
  57. [57]
    Chapter 25. Rendering Vector Art on the GPU - NVIDIA Developer
    We have presented an algorithm for rendering vector art defined by closed paths containing quadratic and cubic Bézier curves. We locally triangulate each Bézier ...
  58. [58]
    [PDF] CMSC427 Notes on piecewise parametric curves: Hermite, Catmull ...
    Options for piecewise curves are linear, quadratic, cubic and higher order curves. • Piecewise linear approximation – commonly used. • Quadratic curve – used ...
  59. [59]
    [PDF] Triangle meshes - Cornell: Computer Science
    Triangles. 11. Page 14. • A bunch of triangles in 3D space that are connected together to form a surface. • Geometrically, a mesh is a piecewise planar surface.
  60. [60]
    [PDF] Continuous collision detection for composite quadric models - HKU
    CQMs are modeled by piecewise linear or quadric surface patches. The boundary elements of a CQM may either be a face (a linear or quadric surface patch), an ...
  61. [61]
    AutoCAD 2024 Help | About Polylines | Autodesk
    A polyline is a connected sequence of line segments created as a single object. You can create straight line segments, arc segments, or a combination of the two ...
  62. [62]
    Vertex shader output and varyings - Arm Developer
    Values are linearly interpolated so it is possible to perform per-vertex computations and reuse them in the fragment shader, that is, a value that can be ...
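
Several of the sources above describe the same two elementary constructions: connect-the-dots interpolation through sorted breakpoints (the MathWorld and Driscoll entries) and the convex max of affine pieces (the subgradient and convex-functions notes). A minimal Python sketch of both, using hypothetical breakpoint data not drawn from any cited source:

```python
from bisect import bisect_right

def pwl_interp(x, xs, ys):
    """Continuous piecewise linear interpolant through the points (xs, ys):
    'connect-the-dots', with xs sorted in ascending order."""
    if x <= xs[0]:
        return ys[0]          # constant extrapolation below the first breakpoint
    if x >= xs[-1]:
        return ys[-1]         # constant extrapolation above the last breakpoint
    i = bisect_right(xs, x) - 1              # segment with xs[i] <= x < xs[i+1]
    t = (x - xs[i]) / (xs[i + 1] - xs[i])    # fractional position in the segment
    return (1 - t) * ys[i] + t * ys[i + 1]

def pwl_max(x, pieces):
    """Convex piecewise linear function f(x) = max_i (a_i * x + b_i)."""
    return max(a * x + b for a, b in pieces)

xs, ys = [0.0, 1.0, 2.0, 4.0], [0.0, 1.0, 0.0, 2.0]  # hypothetical breakpoints
print(pwl_interp(3.0, xs, ys))           # halfway along the last segment -> 1.0
print(pwl_max(-3.0, [(-1, 0), (1, 0)]))  # |x| as max(-x, x) -> 3.0
```

The clamping in `pwl_interp` matches the constant extrapolation that `numpy.interp` applies outside the breakpoint range by default; fitting the breakpoints themselves to data is the harder problem addressed by the pwlf and segmented entries above.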