
Linearization

Linearization is a fundamental technique in mathematics and its applications, such as engineering and physics, for approximating nonlinear functions or systems with linear ones near a specific point, enabling simplified analysis and computation. In calculus, particularly differential calculus, linearization approximates a differentiable function f(x) at a point x = a using its tangent line, defined as L(x) = f(a) + f'(a)(x - a), which closely matches f(x) for inputs near a. This first-order Taylor expansion provides an efficient way to estimate function values without direct evaluation, with the error diminishing as the distance from a decreases. For multivariable functions f(x, y), the linearization at (a, b) generalizes to L(x, y) = f(a, b) + f_x(a, b)(x - a) + f_y(a, b)(y - b), using partial derivatives to capture local behavior in higher dimensions. In engineering and control theory, linearization simplifies nonlinear dynamical systems by expanding their equations around an equilibrium point via Taylor series, truncating higher-order terms to yield a linear system amenable to standard tools like eigenvalue analysis for stability. For instance, in robotic systems or aerospace applications, this method linearizes the dynamics to design controllers, such as proportional-integral-derivative (PID) regulators, by focusing on small perturbations from operating points. The validity holds locally, where nonlinear effects are negligible, making it indispensable for simulating and stabilizing complex systems like pendulums or aircraft dynamics.
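As a minimal illustration of the tangent-line formula above, the following Python sketch builds L(x) = f(a) + f'(a)(x - a) from a function and its derivative; the choice of the sine function at a = 0 is an arbitrary example, not drawn from the text:

```python
import math

def linearize(f, fprime, a):
    """Return L(x) = f(a) + f'(a)(x - a), the tangent-line approximation at a."""
    fa, dfa = f(a), fprime(a)
    return lambda x: fa + dfa * (x - a)

# Example: near a = 0 the linearization of sin is L(x) = x.
L = linearize(math.sin, math.cos, 0.0)
print(L(0.1), math.sin(0.1))  # 0.1 vs. 0.09983..., close for inputs near a
```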

Mathematical Foundations

Single-Variable Linearization

Single-variable linearization provides a method to approximate a differentiable function f(x) near a point a using a linear function, specifically the tangent line at that point. This approximation, known as the first-order Taylor polynomial, is given by L(x) = f(a) + f'(a)(x - a), where f'(a) is the derivative of f at a. The process assumes that f is differentiable at a, ensuring the existence of the tangent line.

The derivation of this linearization stems from the first-order Taylor series expansion of f(x) around a. Taylor's theorem states that if f is twice continuously differentiable in an interval containing a and x, then f(x) = f(a) + f'(a)(x - a) + R_1(x), where R_1(x) is the remainder term. Neglecting the higher-order remainder yields the linear approximation L(x). Geometrically, L(x) represents the tangent line to the curve y = f(x) at x = a, which matches both the function value and its slope at that point, providing the best linear fit locally.

Error analysis for the linearization is provided by the remainder term in Taylor's theorem. In the Lagrange form, the first-order remainder is R_1(x) = \frac{f''(c)}{2}(x - a)^2 for some c between a and x. This quadratic term implies that the approximation error decreases as x approaches a, since |R_1(x)| \leq \frac{M}{2}|x - a|^2, where M bounds |f''(c)| in the interval. Thus, linearization becomes increasingly accurate near the expansion point.

A worked example illustrates this: consider linearizing f(x) = \sqrt{x} at a = 4. Here, f(4) = 2 and f'(x) = \frac{1}{2\sqrt{x}}, so f'(4) = \frac{1}{4}. The linearization is L(x) = 2 + \frac{1}{4}(x - 4). To approximate \sqrt{4.001}, substitute x = 4.001: L(4.001) = 2 + \frac{1}{4}(0.001) = 2.00025. The true value is \sqrt{4.001} \approx 2.0002499844, so the absolute error is approximately 1.56 \times 10^{-8}. For the error bound, f''(x) = -\frac{1}{4x^{3/2}}, and over [4, 4.001], |f''(c)| \leq \frac{1}{4 \cdot 4^{3/2}} = \frac{1}{32}, yielding |R_1(4.001)| \leq \frac{1/32}{2} (0.001)^2 = 1.5625 \times 10^{-8}, which confirms the approximation's accuracy.
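The worked example can be checked numerically; this short sketch (using only the standard library) reproduces the linearization of \sqrt{x} at a = 4, the absolute error at x = 4.001, and the Lagrange bound:

```python
import math

a, x = 4.0, 4.001
L = math.sqrt(a) + (1 / (2 * math.sqrt(a))) * (x - a)  # L(x) = 2 + (1/4)(x - 4)

abs_error = abs(math.sqrt(x) - L)                      # ~1.56e-8
# Lagrange bound |R_1| <= (M/2)|x - a|^2 with M = 1/32 bounding |f''| on [4, 4.001]
M = 1 / (4 * a ** 1.5)
bound = (M / 2) * (x - a) ** 2                         # 1.5625e-8
print(L, abs_error, bound)                             # 2.00025  ~1.56e-08  1.5625e-08
```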

Multivariable Linearization

Multivariable linearization extends the technique to functions of multiple variables, providing a local affine model that captures the function's behavior near a specified point. For a scalar-valued function f: \mathbb{R}^n \to \mathbb{R} differentiable at a point \mathbf{a} \in \mathbb{R}^n, the linearization L: \mathbb{R}^n \to \mathbb{R} at \mathbf{a} takes the form L(\mathbf{x}) = f(\mathbf{a}) + \nabla f(\mathbf{a}) \cdot (\mathbf{x} - \mathbf{a}), where \nabla f(\mathbf{a}) = \left( \frac{\partial f}{\partial x_1}(\mathbf{a}), \dots, \frac{\partial f}{\partial x_n}(\mathbf{a}) \right) is the gradient vector of partial derivatives evaluated at \mathbf{a}. In the single-variable case with n = 1, this reduces to the standard tangent line approximation.

This expression derives from the first-order term of the multivariable Taylor expansion. If f is continuously differentiable in an open neighborhood of \mathbf{a}, the mean value form of Taylor's theorem states that f(\mathbf{x}) = f(\mathbf{a}) + \nabla f(\mathbf{b}) \cdot (\mathbf{x} - \mathbf{a}) for some \mathbf{b} on the line segment joining \mathbf{a} and \mathbf{x}; under continuity of the partial derivatives, \nabla f(\mathbf{b}) approaches \nabla f(\mathbf{a}) as \mathbf{x} \to \mathbf{a}, yielding the approximation with error o(\|\mathbf{x} - \mathbf{a}\|). The proof applies the mean value theorem to the auxiliary function g(t) = f(\mathbf{a} + t(\mathbf{x} - \mathbf{a})) for t \in [0,1], using the chain rule to relate g'(t) to the gradient.

The linearization L(\mathbf{x}) serves as the best affine approximation to f in the sense that it matches f and its first partial derivatives at \mathbf{a}, with the graph of L forming the tangent plane to the graph of f at (\mathbf{a}, f(\mathbf{a})). This provides the first-order description of the function's local geometry. For vector-valued functions \mathbf{f}: \mathbb{R}^n \to \mathbb{R}^m, the linearization at \mathbf{a} generalizes to L(\mathbf{x}) = \mathbf{f}(\mathbf{a}) + J_{\mathbf{f}}(\mathbf{a}) (\mathbf{x} - \mathbf{a}), where J_{\mathbf{f}}(\mathbf{a}) is the m \times n Jacobian matrix with entries (J_{\mathbf{f}}(\mathbf{a}))_{ij} = \frac{\partial f_i}{\partial x_j}(\mathbf{a}), stacking the gradients of the component functions f_1, \dots, f_m. This matrix form captures the local linear transformation induced by \mathbf{f} near \mathbf{a}.

Consider the example of f(x,y) = x^2 + y^2, a paraboloid representing the squared distance from the origin. At \mathbf{a} = (1,1), f(1,1) = 2 and \nabla f(1,1) = (2,2), so the linearization is L(x,y) = 2 + 2(x-1) + 2(y-1) = 2x + 2y - 2. Near (1,1), the plane defined by L tangentially approximates the upward-opening paraboloid, closely matching values for small perturbations like (1+\delta x, 1+\delta y) where |\delta x|, |\delta y| \ll 1, but diverging quadratically farther away. The approximation holds provided f is continuously differentiable (C^1) near \mathbf{a}, meaning all first partial derivatives exist and are continuous in some open ball around \mathbf{a}; this ensures the error term is negligible compared to \|\mathbf{x} - \mathbf{a}\| locally.
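A sketch of the construction for the paraboloid example, assuming central-difference gradients rather than symbolic ones; the evaluation point (1.05, 0.98) is an arbitrary choice for illustration:

```python
import numpy as np

def linearize(f, a, h=1e-6):
    """First-order model L(x) = f(a) + grad f(a) . (x - a); gradient by central differences."""
    a = np.asarray(a, dtype=float)
    grad = np.array([(f(a + h * e) - f(a - h * e)) / (2 * h) for e in np.eye(a.size)])
    fa = f(a)
    return lambda x: fa + grad @ (np.asarray(x, dtype=float) - a)

f = lambda v: v[0] ** 2 + v[1] ** 2        # the paraboloid from the example
L = linearize(f, (1.0, 1.0))               # L(x, y) = 2 + 2(x - 1) + 2(y - 1)
print(L((1.05, 0.98)), f((1.05, 0.98)))    # 2.06 (tangent plane) vs. 2.0629 (exact)
```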

Applications in Analysis

Stability Analysis

In the analysis of nonlinear dynamical systems, linearization provides a systematic method to approximate the behavior near equilibrium points of autonomous ordinary differential equations (ODEs) of the form \dot{x} = f(x), where x \in \mathbb{R}^n and f: \mathbb{R}^n \to \mathbb{R}^n is a smooth vector field satisfying f(x^*) = 0 for an equilibrium x^*. This technique reduces the problem to studying a linear system, enabling the use of established linear theory to assess local qualitative behavior. The linearized system is obtained by considering perturbations y = x - x^* around the equilibrium, yielding \dot{y} = J(x^*) y, where J(x^*) is the Jacobian matrix of f evaluated at x^*.

For hyperbolic equilibria (those where no eigenvalue of J(x^*) has zero real part), the Hartman-Grobman theorem guarantees that the nonlinear flow is topologically conjugate to the linear flow in a neighborhood of x^*, preserving stability properties. Specifically, the equilibrium x^* is asymptotically stable if all eigenvalues of J(x^*) have negative real parts and unstable if at least one has a positive real part.

A classic example is the logistic equation modeling population growth, \dot{x} = r x (1 - x/K), with equilibria at x^* = 0 (unstable) and x^* = K (stable), where r > 0 is the growth rate and K > 0 the carrying capacity. Linearizing at x^* = 0 gives the Jacobian J(0) = r > 0, indicating an unstable node since the eigenvalue is positive. At x^* = K, J(K) = -r < 0, yielding a stable node with negative eigenvalue.

This approach is inherently local, providing reliable predictions only in a small neighborhood of the equilibrium where higher-order terms in the Taylor expansion remain negligible. It fails for non-hyperbolic equilibria, where at least one eigenvalue has zero real part, as the linearization does not capture the full nonlinear dynamics, potentially leading to incorrect conclusions. Recent extensions to stochastic dynamical systems, such as stochastic versions of the Hartman-Grobman theorem developed since 2017, address robustness in noisy environments for applications in modern engineering by adapting linearization to mean-square criteria.
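For the logistic example, the eigenvalue test can be sketched directly; the parameter values r = 1.5 and K = 100 below are hypothetical, and the one-dimensional Jacobian is just the derivative f'(x^*):

```python
r, K = 1.5, 100.0                      # hypothetical growth rate and carrying capacity
f = lambda x: r * x * (1 - x / K)      # logistic vector field; equilibria at 0 and K

def eigenvalue(f, x_star, h=1e-6):
    """The 1x1 Jacobian at an equilibrium is f'(x*), here by central differences."""
    return (f(x_star + h) - f(x_star - h)) / (2 * h)

for x_star in (0.0, K):
    lam = eigenvalue(f, x_star)
    verdict = "unstable" if lam > 0 else "asymptotically stable"
    print(f"x* = {x_star}: eigenvalue {lam:+.3f} -> {verdict}")
# J(0) = +r (unstable); J(K) = -r (stable), matching the analysis above.
```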

Microeconomics

In microeconomics, linearization serves as a key technique for handling nonlinear relationships in consumer and producer behavior, enabling tractable analysis of optimization problems and equilibrium conditions. By constructing Taylor expansions around reference points, such as steady states or optimal bundles, economists simplify complex functions and systems while preserving essential marginal properties. This approach facilitates insights into consumption choices, demand, and stability without resorting to full numerical solutions.

A prominent application arises in utility maximization under intertemporal constraints, where linearization approximates the nonlinear Euler equation governing consumption. The Euler equation, u'(c_t) = \beta E_t [u'(c_{t+1}) (1 + r_{t+1})], equates the marginal utility of current consumption to the discounted expected marginal utility of future consumption adjusted for returns. Around a steady state where consumption is constant (c_t = c_{t+1} = \bar{c}) and returns equal the inverse discount factor (1 + r = 1/\beta), a log-linear approximation yields \Delta \ln c_{t+1} \approx \frac{1}{\gamma} (r_{t+1} - \rho), with \gamma as the coefficient of relative risk aversion and \rho = -\ln \beta as the subjective discount rate. This linear form reveals how expected returns drive consumption growth, aiding analysis of saving behavior and precautionary motives in dynamic models.

In demand theory, linearization approximates nonlinear indifference curves via their tangent lines at chosen points, directly deriving the marginal rate of substitution (MRS). For a utility function u(x, y), the MRS at a bundle (x_0, y_0) is the negative slope of the first-order Taylor expansion, \mathrm{MRS}_{x,y} = -\frac{\partial u / \partial x}{\partial u / \partial y} \big|_{(x_0, y_0)}, which locally represents trade-offs maintaining utility constant. This tangent approximation equates the MRS to the price ratio at the optimum, simplifying the identification of demand responses to price or income changes without solving the full constrained maximization.

Consider the Cobb-Douglas utility u(x, y) = x^\alpha y^{1-\alpha}, where 0 < \alpha < 1. Linearized at an interior point (x_0, y_0), the first-order approximation is u(x, y) \approx u(x_0, y_0) + \alpha \frac{u(x_0, y_0)}{x_0} (x - x_0) + (1-\alpha) \frac{u(x_0, y_0)}{y_0} (y - y_0), yielding a linear segment with slope -\frac{\alpha y_0}{(1-\alpha) x_0}. At budget tangency, this slope matches the price ratio -p_x / p_y, so \alpha / (1-\alpha) = p_x x_0 / (p_y y_0), which resolves optimal demands as x_0 = \alpha I / p_x and y_0 = (1-\alpha) I / p_y for income I. This demonstrates how linearization streamlines solving for Marshallian demands, highlighting the constant expenditure shares inherent to the Cobb-Douglas form (see the numerical sketch at the end of this section).

Linearization also underpins stability analysis in general equilibrium by approximating excess demand functions around Walrasian equilibria. Excess demand z(p) = d(p) - s(p) for prices p is typically nonlinear; its Jacobian matrix J evaluated at an equilibrium p^* (where z(p^*) = 0) yields the linear system \dot{p} \approx J (p - p^*) governing tâtonnement price adjustments. Stability requires the eigenvalues of J to have negative real parts, ensuring convergence to p^* under gross substitutability, as aggregate excess demands satisfy Walras' law and homogeneity. This local linear analysis tests equilibrium robustness without global simulations.

Post-2017 extensions in behavioral economics apply linearization to prospect theory's nonlinear value function, enhancing risk analysis in decision-making. Prospect theory posits a value function v(x) concave for gains and convex for losses relative to a reference point, with loss aversion.
Linearizing around the reference point, v(x) \approx v(0) + v'(0) x, captures local risk attitudes, though the kink at zero induced by loss aversion means the one-sided derivatives differ, so separate linear approximations apply on the gain and loss sides.
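As a numerical sketch of the Cobb-Douglas tangency derived above (with hypothetical values \alpha = 0.3, p_x = 2, p_y = 5, and I = 100), the demands x_0 = \alpha I / p_x and y_0 = (1-\alpha) I / p_y make the MRS magnitude equal the price ratio:

```python
# Hypothetical parameters: preference share alpha, prices, and income.
alpha, p_x, p_y, I = 0.3, 2.0, 5.0, 100.0

# Marshallian demands implied by the tangency condition (slope = price ratio):
x0 = alpha * I / p_x              # 15.0
y0 = (1 - alpha) * I / p_y        # 14.0

# At the optimum the MRS (in magnitude) equals the price ratio:
mrs = (alpha * y0) / ((1 - alpha) * x0)   # alpha*y0 / ((1-alpha)*x0) = 0.4
print(x0, y0, mrs, p_x / p_y)             # 15.0 14.0 0.4 0.4
```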

Applications in Computation and Modeling

Optimization

In nonlinear programming, successive linear programming (SLP) methods approximate the nonlinear objective function and constraints by their first-order Taylor expansions around the current iterate, transforming the problem into a sequence of linear programs that guide the search toward an optimal solution. This iterative approach, also known as sequential linear programming, updates the linear approximations at each step to better capture the local behavior of the nonlinear functions, enabling the use of efficient linear programming solvers like the simplex method.

Newton's method for optimization relies on a linearization of the optimality conditions, approximating the nonlinear equation \nabla f(x) = 0 by its first-order expansion to compute the update direction \Delta x, as in J(x_k) \Delta x = -\nabla f(x_k), where J is the Jacobian of the gradient (the Hessian matrix) at the current point x_k. Although the full method incorporates second-order information for quadratic convergence near the solution, its core step begins with this linearization of the gradient, iteratively refining the iterate until the gradient is sufficiently small. For instance, consider minimizing the nonlinear function f(x) = x^2 + \sin(x), whose minimum satisfies \nabla f(x) = 2x + \cos(x) = 0; linearizing this equation at an initial guess x_k yields the approximation L(x_k + \Delta x) = [2x_k + \cos(x_k)] + [2 - \sin(x_k)] \Delta x \approx 0, solved for \Delta x \approx - \frac{2x_k + \cos(x_k)}{2 - \sin(x_k)}, with the update x_{k+1} = x_k + \Delta x repeated until convergence to the root near x \approx -0.45. This process demonstrates how linearization facilitates iterative root-finding for unconstrained optimization problems.

Linearization also connects to the simplex method through piecewise-linear approximations, where nonlinear functions are represented as convex combinations of linear segments, allowing the simplex algorithm to efficiently optimize over these approximations in piecewise-linear programming frameworks. Such techniques extend the simplex method's capability to handle mildly nonlinear objectives while preserving its polynomial-time average-case performance.

By decomposing nonlinear problems into simpler linear subproblems, linearization methods like SLP significantly reduce computational cost, often achieving faster convergence and better scalability for large-scale instances compared to direct nonlinear solvers, as evidenced by their successful application in industrial optimization tasks. In the 2020s, hybrid approaches integrating linearization with mixed-integer nonlinear programming (MINLP) have advanced solutions for large-scale energy systems, such as optimizing battery storage in renewable grids to minimize costs while ensuring reliability.
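The Newton iteration for f(x) = x^2 + \sin(x) can be written out directly; this sketch assumes a starting guess of x = 0 and a fixed number of iterations rather than a formal convergence test:

```python
import math

def newton_step(x):
    """Solve the linearized optimality condition [2x + cos x] + [2 - sin x] * dx = 0."""
    g = 2 * x + math.cos(x)        # gradient of f(x) = x^2 + sin(x)
    h = 2 - math.sin(x)            # second derivative (the 1-D Hessian)
    return x - g / h

x = 0.0                            # initial guess
for _ in range(6):
    x = newton_step(x)
print(x)                           # about -0.4502, where 2x + cos(x) = 0
```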

Multiphysics

In multiphysics simulations, linearization facilitates the coupling of governing equations from diverse physical domains, such as fluid dynamics and structural mechanics in fluid-structure interaction (FSI) problems. Nonlinear interactions at the fluid-solid interface are approximated through linearized coupling conditions, enabling iterative exchange of data between solvers while maintaining computational efficiency. This approach is particularly valuable for problems where direct monolithic solving of the full nonlinear system is prohibitive due to complexity and scale. For instance, in FSI, the deformed interface is handled via linearized kinematic and dynamic conditions that approximate the motion and force transfer without resolving the full nonlinearity at each step.

A cornerstone of nonlinear solvers in finite element methods for multiphysics is the Newton-Raphson iteration, which linearizes the residual equation R(\mathbf{u}) = 0 around the current estimate \mathbf{u}_k. The update is computed by solving the linearized system \mathbf{J}(\mathbf{u}_k) \Delta \mathbf{u} = -\mathbf{R}(\mathbf{u}_k), where \mathbf{J} is the Jacobian matrix representing the partial derivatives of the residual with respect to the solution variables, and \Delta \mathbf{u} provides the correction to yield \mathbf{u}_{k+1} = \mathbf{u}_k + \Delta \mathbf{u} (sketched in code at the end of this section). This method is widely adopted in coupled simulations, such as those involving thermal-fluid-structural interactions, as it promotes rapid convergence by exploiting the local linearity of the nonlinear residuals. In sequential-implicit schemes for multiphysics, an outer Newton loop linearizes the overall coupled system, enhancing robustness for problems with strong nonlinear couplings.

An illustrative application is in magneto-hydrodynamics (MHD) for magnetic resonance imaging (MRI) systems, where Maxwell's equations describing electromagnetic fields are coupled with the Navier-Stokes equations for fluid flow in conductive media. Linearization around an operating point, such as the steady-state blood flow velocity and static magnetic field, allows prediction of field distortions induced by Lorentz forces, which is critical for assessing image quality and safety in high-field scanners. High-order finite element methods discretize this coupled system, solving the linearized increments iteratively to capture the interaction effects without full nonlinear resolution at each time step.

The benefits of linearization in multiphysics include enabling modular simulations of disparate phenomena, such as thermal conduction, mechanical deformation, and electromagnetic propagation, by decoupling the domains into manageable linearized subproblems that can be solved independently before coupling. This modularity supports scalable implementations on parallel architectures and facilitates integration of legacy codes for specific physics. However, challenges arise in handling stiff systems stemming from disparate spatial and temporal scales (for example, fast electromagnetic waves versus slow mechanical responses), which can cause ill-conditioned Jacobians and slow convergence, often necessitating advanced preconditioning or multirate time-stepping strategies.

In recent applications to climate modeling, linearization has addressed ice-ocean interactions by applying linear response theory to approximate the sensitivity of basal melting rates to perturbations in ocean temperature and circulation around ice shelves. This technique linearizes the coupled ice-sheet-ocean dynamics around equilibrium states in global climate models, enabling efficient projections of sea-level rise contributions over decadal to centennial timescales without exhaustive nonlinear integrations.
Such methods have been integrated into simulations to quantify uncertainties in ice-shelf stability under warming scenarios.
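A schematic of the Newton-Raphson loop described above, applied to a toy two-field residual; the coupling terms and coefficients are invented for illustration and not taken from any cited solver:

```python
import numpy as np

def residual(u):
    """Toy coupled residual R(u) = 0: a 'thermal' unknown T and a 'mechanical' unknown d."""
    T, d = u
    return np.array([T + 0.1 * T * d - 1.0,   # heat balance perturbed by deformation
                     d - 0.05 * T ** 2])      # deformation driven by temperature

def jacobian(u, h=1e-7):
    """Numerical Jacobian J_ij = dR_i/du_j by forward differences."""
    R0, J = residual(u), np.zeros((u.size, u.size))
    for j in range(u.size):
        up = u.copy()
        up[j] += h
        J[:, j] = (residual(up) - R0) / h
    return J

u = np.zeros(2)
for _ in range(10):                            # Newton-Raphson loop
    R = residual(u)
    if np.linalg.norm(R) < 1e-12:
        break
    u += np.linalg.solve(jacobian(u), -R)      # solve J(u_k) du = -R(u_k)
print(u, residual(u))                          # converged coupled solution
```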

Machine Learning

In machine learning, linearization serves as a fundamental technique for approximating the behavior of complex, non-linear models locally, leveraging first-order expansions to simplify analysis and computation. This approach draws on multivariable linearization concepts, where the gradient provides a first-order approximation of the function around a point. By linearizing model outputs or loss functions, practitioners gain insights into model behavior, enhance training procedures, and improve robustness without requiring full non-linear evaluations.

A key application is in local interpretability, where linearization attributes importance to input features by approximating neural network outputs around a specific input x_0. Saliency maps, for instance, compute the gradient \nabla_x f(x_0) of the model's output f(x) with respect to the input, revealing how small perturbations in features affect predictions and highlighting influential regions, such as edges or textures in images. This first-order approximation effectively linearizes the decision function near x_0, enabling explanations of model predictions; for a classifier f(\theta, x), linearizing via \nabla_x f at x approximates local decision boundaries, aiding interpretability in tasks like image recognition where it pinpoints object parts critical to classification. Introduced in early convolutional network visualizations, this method remains foundational for feature attribution in deep learning.

Linearization also plays a crucial role in adversarial robustness, where first-order approximations of the loss landscape generate perturbations to improve model robustness. The Fast Gradient Sign Method (FGSM), for example, crafts adversarial examples by taking a step in the direction of the sign of the gradient of the loss with respect to the input, effectively using a linearization of the loss to maximize it under bounded perturbations. This enables adversarial training on perturbed examples, reducing vulnerability to attacks in classifiers.

To boost training efficiency, variants of backpropagation incorporate linearization of activations, approximating non-linear operations to accelerate gradient computation in deep networks. Linear Backpropagation (LinBP) linearizes the backward pass through non-linearities, replacing exact gradients with their linear approximations, which has been shown to yield faster convergence on tasks like image classification while maintaining comparable accuracy to standard backpropagation.

Post-2017 developments have extended linearization to reinforcement learning, where policy gradient methods use local linear approximations around the current policy to estimate policy improvements. For instance, in continuous control problems, policy gradients for linearized dynamics ensure global convergence to optimal policies, bridging model-free learning with theoretical guarantees and enabling efficient exploration in high-dimensional spaces. In the 2020s, linear probes have advanced interpretability in large language models (LLMs) by training simple linear classifiers on frozen internal representations to analyze task-specific capabilities, such as sentiment detection or syntactic parsing. These probes reveal linearly separable structures encoding emergent abilities, like truthfulness directions in hidden states, providing scalable diagnostics without retraining the full model and highlighting how LLMs internally represent linguistic knowledge.
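A minimal FGSM sketch on a toy logistic-regression model makes the first-order perturbation concrete; the weights, input, and \epsilon below are made-up values, and a real attack would target a trained network via automatic differentiation:

```python
import numpy as np

# Toy logistic-regression "network"; weights, bias, input, and eps are hypothetical.
w, b = np.array([1.5, -2.0, 0.5]), 0.1
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def loss_and_input_grad(x, y):
    """Cross-entropy loss and its gradient with respect to the INPUT (not the weights)."""
    p = sigmoid(w @ x + b)
    loss = -(y * np.log(p) + (1 - y) * np.log(1 - p))
    return loss, (p - y) * w            # chain rule: dL/dx = (p - y) * w

x, y, eps = np.array([0.2, 0.4, -0.1]), 1.0, 0.1
loss, g = loss_and_input_grad(x, y)
x_adv = x + eps * np.sign(g)            # FGSM: linearized worst case within the eps-ball
print(loss, loss_and_input_grad(x_adv, y)[0])   # the adversarial loss is larger
```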
