Remez algorithm

The Remez algorithm is an iterative numerical method for determining the unique polynomial of best uniform (minimax) approximation to a continuous function on a closed interval, minimizing the maximum deviation between the function and its polynomial approximation of a given degree. Developed by Soviet mathematician Evgeny Yakovlevich Remez in 1934, it leverages Chebyshev's equioscillation theorem, which states that the error in the optimal approximation equioscillates—reaching its maximum absolute value with alternating signs—at exactly n+2 points for a polynomial of degree n. This approach ensures the approximation is optimal in the L∞ norm, distinguishing it from least-squares methods that minimize average error. The algorithm proceeds through an exchange process: it begins with an initial set of n+2 reference points (often Chebyshev nodes, for stability), solves a linear system to find coefficients that force the weighted error to equioscillate at these points, and then updates the points by locating the actual extrema of the error function via root-finding or optimization techniques. Iterations continue until the reference points converge to the true equioscillation points, typically requiring only a few steps owing to rapid convergence, though careful initialization is crucial to avoid local minima. For rational approximations, the method extends by linearizing the nonlinear denominator, solving iteratively for numerator and denominator coefficients while estimating the error bound. Remez's work built on earlier approximation theory, including Chebyshev's 1853 contributions, and has influenced fields beyond pure approximation theory, such as digital signal processing, where the Parks-McClellan algorithm—a discrete variant—designs optimal filters with minimal passband/stopband ripple. Implementations appear in numerical libraries such as Boost.Math and Wolfram's tools, underscoring its practical utility in approximating special functions and in engineering applications requiring high-fidelity low-degree approximations. Despite its efficiency, the algorithm can be sensitive to function smoothness and interval choice, prompting variants for weighted or discrete data scenarios.

Overview and History

Definition and Purpose

The Remez algorithm is an iterative, exchange-based procedure designed to construct the best uniform-norm polynomial approximation to a continuous function on a closed interval. Given a continuous function f(x) defined on [a, b] and a specified degree n, it seeks a polynomial p_n(x) of degree at most n that minimizes the maximum deviation, formally \|f - p_n\|_\infty = \max_{x \in [a, b]} |f(x) - p_n(x)|. This minimax polynomial is unique and characterized by the property that the error f(x) - p_n(x) attains its maximum absolute value at exactly n+2 points in [a, b] with alternating signs, known as equioscillation. The primary purpose of the Remez algorithm is to achieve optimal error control in the Chebyshev (L∞) norm, ensuring the smallest possible maximum deviation across the entire interval rather than an average-error measure. This makes it particularly valuable in numerical computations where bounding the worst-case error is critical, such as in function evaluation routines for mathematical libraries. In contrast to least-squares methods, which minimize the integral of squared errors and may yield large localized deviations, the Remez approach provides superior uniform approximation by directly targeting the supremum norm.
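
To make the minimax objective concrete, here is a minimal sketch (assuming NumPy; the helper name is hypothetical) that estimates the uniform-norm error of a candidate polynomial by dense sampling:

```python
import numpy as np

def uniform_error(f, coeffs, a, b, samples=10001):
    """Estimate ||f - p||_inf on [a, b] by dense sampling.

    `coeffs` are polynomial coefficients in ascending order
    (c[0] + c[1]*x + ...). Sampling only approximates the true
    supremum, which the Remez algorithm targets exactly.
    """
    x = np.linspace(a, b, samples)
    p = np.polynomial.polynomial.polyval(x, coeffs)
    return np.max(np.abs(f(x) - p))

# Example: max deviation of the degree-3 Taylor polynomial of sin on [0, pi/2]
err = uniform_error(np.sin, [0.0, 1.0, 0.0, -1.0 / 6.0], 0.0, np.pi / 2)
print(f"||sin - taylor3||_inf ~ {err:.4e}")
```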

Historical Development

The Remez algorithm was developed by the Soviet mathematician Evgeny Yakovlevich Remez and first published in 1934 through a series of three papers in mathematical journals, including contributions in the Communications of the Kharkov Mathematical Society and Comptes Rendus de l'Académie des Sciences. These publications presented an iterative procedure for computing minimax polynomial approximations, emerging from Remez's work at institutions in Kharkov and Kiev. The algorithm built upon earlier foundational results in approximation theory, notably Pafnuty Chebyshev's 19th-century work on minimax polynomials and Charles-Jean de la Vallée Poussin's 1919 theorem providing bounds on approximation errors. Amid the flourishing of Soviet mathematics in the 1930s, characterized by advances in constructive function theory despite political challenges, Remez's contributions addressed practical numerical needs for approximations of continuous functions. Early computations using the method, such as best approximations computed up to degree 11, were performed by Remez's students at the University of Kiev, demonstrating its feasibility before widespread electronic computing. Post-World War II, the algorithm received increasing attention in Western literature during the 1950s and 1960s, aligning with the growth of electronic computers and numerical analysis. Implementations appeared in FORTRAN-based software libraries, including ACM Algorithm 318 in 1967 for polynomial approximation and subsequent routines in the 1970s. In subsequent decades, it influenced developments in related areas, such as algorithms for rational approximations and digital filter design, exemplified by the Parks-McClellan method in 1972. The 75th anniversary of the algorithm's publication, in 2009, prompted reflections on its lasting impact, with analyses emphasizing its role in modern numerical methods. Contemporary adaptations, including barycentric forms integrated into systems like Chebfun, have enhanced its applicability to high-degree polynomials and complex domains.

Mathematical Foundations

Chebyshev Approximation Theory

The uniform norm, also known as the supremum or infinity norm, measures the maximum deviation of an approximation from the target function over a given interval [a, b], defined as \|e\|_\infty = \max_{x \in [a,b]} |f(x) - p(x)|, where f is the continuous function to be approximated and p is a polynomial of degree at most n. The minimax problem seeks the polynomial p_n that minimizes this norm, yielding the best uniform approximation and thus the smallest possible worst-case error across the interval. For a continuous function f on a compact interval [a, b], there exists a unique polynomial of degree at most n that achieves this minimal uniform norm, distinguishing uniform approximation from other settings where multiple solutions may exist. This uniqueness guarantees a well-defined best approximation, facilitating theoretical analysis and practical computation in approximation theory. Chebyshev polynomials of the first kind, T_n(x), play a central role: T_n has leading coefficient 2^{n-1} for n \geq 1, and the scaled polynomial T_n(x)/2^{n-1} is the monic polynomial of degree n with the smallest maximum deviation from zero on [-1, 1]. The polynomials T_n equioscillate between -1 and 1, so the monic version achieves the minimal deviation bound of 1/2^{n-1}. To extend this to an arbitrary interval [a, b], Chebyshev polynomials are scaled and shifted via the affine transformation x = \frac{b-a}{2} t + \frac{a+b}{2}, where t \in [-1, 1], preserving the minimax properties while adapting to the new domain. The uniform norm is preferred in numerical analysis over other norms, such as the L^2 norm, because it directly bounds the worst-case error, which is critical for applications requiring guaranteed performance across the entire interval rather than average behavior.
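
These two facts are easy to verify numerically; a short sketch (assuming NumPy) checks the monic deviation bound and maps Chebyshev extrema onto an arbitrary interval:

```python
import numpy as np

# Check: the monic Chebyshev polynomial T_n(x) / 2^(n-1) has maximum
# deviation exactly 1/2^(n-1) on [-1, 1].
n = 6
t = np.linspace(-1.0, 1.0, 200001)
Tn = np.polynomial.chebyshev.Chebyshev.basis(n)
print(np.max(np.abs(Tn(t))) / 2.0 ** (n - 1), 1.0 / 2.0 ** (n - 1))

# The affine map x = (b-a)/2 * t + (a+b)/2 carries the equioscillation
# points of T_n from [-1, 1] onto an arbitrary interval [a, b].
a, b = 0.0, np.pi / 2
extrema = np.cos(np.arange(n + 1) * np.pi / n)  # extrema of T_n on [-1, 1]
print(0.5 * (b - a) * extrema + 0.5 * (a + b))
```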

Equioscillation Theorem

The equioscillation theorem, also known as Chebyshev's alternation theorem, provides a fundamental characterization of the best uniform (minimax) polynomial approximation to a continuous function on a closed interval. Specifically, for a continuous function f: [a, b] \to \mathbb{R} and polynomials of degree at most n, a polynomial p_n is the unique best uniform approximation to f if and only if the error e(x) = f(x) - p_n(x) equioscillates at exactly n+2 points in [a, b]. That is, there exist points x_0 < x_1 < \cdots < x_{n+1} such that |e(x_i)| = \|e\|_\infty for all i = 0, \dots, n+1, and the signs alternate: e(x_{i+1}) = -e(x_i). A closely related result is the de la Vallée Poussin theorem, which states that if there exist n+2 points t_0 < t_1 < \cdots < t_{n+1} in [a, b] such that f(t_i) - p(t_i) = (-1)^i \epsilon_i for some polynomial p of degree at most n, where all \epsilon_i are nonzero and of the same sign, then the best approximation error satisfies E_n(f) \geq \min_i |\epsilon_i|. The proof of the equioscillation theorem relies on a perturbation argument by contradiction. Assume p_n is the best approximation but the error equioscillates at fewer than n+2 points. Let M(e) = \{x \in [a, b] : |e(x)| = \|e\|_\infty\} denote the set of extremal points. With fewer than n+2 alternating extrema, one can construct a nonzero polynomial r of degree at most n such that e(x) r(x) > 0 at all points in M(e). Then, for sufficiently small \delta > 0, the modified approximation p_n + \delta r yields a smaller uniform error, contradicting the optimality of p_n. The construction ensures that the perturbed error does not introduce new extrema exceeding the original norm, as the sign consistency at existing extrema forces the adjustment to reduce the maximum deviation. The theorem has key implications for uniqueness and optimality in uniform approximation. Equioscillation at n+2 points guarantees that p_n is the unique best polynomial, as any competing optimal polynomial would violate the alternation property. Fewer than n+2 alternating points indicates a suboptimal approximation, where further refinement can reduce the error norm. This property underpins the theoretical foundation for algorithms seeking minimax solutions.
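
The de la Vallée Poussin bound lends itself to a direct numerical check; the sketch below (assuming NumPy; the function name is hypothetical) returns the lower bound on E_n(f) whenever the supplied error values genuinely alternate:

```python
import numpy as np

def dlvp_lower_bound(f, coeffs, pts):
    """de la Vallee Poussin bound (sketch): if the error of the polynomial
    with ascending coefficients `coeffs` alternates in sign at the n+2
    points `pts`, then E_n(f) >= min_i |e(pts[i])|."""
    pts = np.asarray(pts, dtype=float)
    e = f(pts) - np.polynomial.polynomial.polyval(pts, coeffs)
    if np.any(np.sign(e[1:]) == np.sign(e[:-1])):
        raise ValueError("error does not alternate at the given points")
    return np.min(np.abs(e))
```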

Algorithm Procedure

Initialization Methods

A good initialization is crucial for the Remez algorithm's efficiency, as it influences convergence speed and helps avoid local minima or stagnation in the iterative process. Poor starting points can lead to slower convergence or even failure to reach the global minimax solution, particularly in high-degree approximations or ill-conditioned problems. One simple method involves selecting an initial set of equioscillation points using a uniform grid. For an approximation of degree n on the interval [a, b], this entails choosing n+2 equally spaced points across the interval, excluding regions where any weighting function is zero if applicable. This approach is straightforward to implement but may suffer from uneven error distribution due to the Runge phenomenon, potentially slowing initial iterations. A more robust strategy uses Chebyshev nodes, which provide a better-conditioned starting point by leveraging the properties of Chebyshev polynomials. The initial points are taken as the extrema of the Chebyshev polynomial of degree n+1, scaled and shifted to the target interval [a, b], via the mapping x_i = \frac{a+b}{2} + \frac{b-a}{2} \cos\left(\frac{i\pi}{n+1}\right) for i = 0, \dots, n+1. This distribution minimizes interpolation errors and promotes faster convergence, often requiring only 5–10 iterations total. Another effective technique starts with a least-squares fit to obtain initial coefficients, followed by selecting the points where the resulting error is largest as the initial reference set. This "don't care" least-squares method helps ensure the starting error exhibits the required number of alternating extrema, making it particularly robust for challenging designs like FIR filters, where it can reduce computational time by 20–50% compared to uniform starts and prevent convergence failures. To handle ill-conditioned cases, it is advisable to scale the approximation interval to [-1, 1] via an affine change of variables, as this aligns with the natural domain of Chebyshev polynomials and improves conditioning. Additionally, normalizing the target function f(x) by dividing by its maximum value on the interval can prevent overflow and ensure balanced error scaling during computations. Potential pitfalls include choosing points too close to singularities or endpoints, which can cause ill-conditioning or non-equioscillation in the initial reference set; simple numerical checks, such as verifying that the initial maximum error is below a reasonable threshold (e.g., 10% of the function's range) and that the error alternates in sign at least n+1 times, help detect such issues early.
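
The Chebyshev-node initialization is a one-liner in practice; a minimal sketch (assuming NumPy; the helper name is hypothetical):

```python
import numpy as np

def chebyshev_reference(n, a, b):
    """Initial reference set for a degree-n Remez run: the n+2 extrema
    of T_{n+1}, mapped affinely from [-1, 1] onto [a, b]."""
    i = np.arange(n + 2)
    t = np.cos(i * np.pi / (n + 1))  # extrema of T_{n+1} on [-1, 1]
    return np.sort(0.5 * (a + b) + 0.5 * (b - a) * t)

print(chebyshev_reference(3, 0.0, np.pi / 2))  # 5 reference points
```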

Iterative Steps

The iterative steps of the Remez algorithm form the core loop that refines the approximating polynomial toward the minimax solution by alternately updating the coefficients and the reference points where the error is evaluated. Given an initial set of n+2 reference points x_0 < x_1 < \cdots < x_{n+1} in the interval [a, b], the algorithm proceeds in two main phases per iteration: solving for the polynomial coefficients and the levelled deviation, followed by exchanging the reference points based on the computed error. This exchange-and-update mechanism leverages the equioscillation property to progressively reduce the maximum error, ensuring monotonic improvement in the approximation quality under suitable conditions. In the first step, the coefficients a_0, \dots, a_n of the approximating polynomial p_n(x) = \sum_{k=0}^n a_k \phi_k(x) (where \phi_k are the basis functions, such as monomials \phi_k(x) = x^k) and the current deviation \delta are determined by solving a linear system that enforces approximate equioscillation at the reference points. Specifically, the system requires p_n(x_i) = f(x_i) - (-1)^i \delta for i = 0, \dots, n+1, which can be rearranged to the following (n+2) \times (n+2) linear system: \begin{bmatrix} \phi_0(x_0) & \phi_1(x_0) & \cdots & \phi_n(x_0) & 1 \\ \phi_0(x_1) & \phi_1(x_1) & \cdots & \phi_n(x_1) & -1 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ \phi_0(x_{n+1}) & \phi_1(x_{n+1}) & \cdots & \phi_n(x_{n+1}) & (-1)^{n+1} \end{bmatrix} \begin{bmatrix} a_0 \\ a_1 \\ \vdots \\ a_n \\ \delta \end{bmatrix} = \begin{bmatrix} f(x_0) \\ f(x_1) \\ \vdots \\ f(x_{n+1}) \end{bmatrix}. To enhance numerical stability, the system is often solved by first forming the square (n+1) \times (n+1) subsystem from the first n+1 equations to obtain the coefficients a_k, then using the last equation to compute \delta. For ill-conditioned Vandermonde matrices arising from monomial bases, orthogonal polynomials (such as shifted Chebyshev polynomials) or QR decomposition of the basis matrix can be employed to mitigate round-off errors and ensure reliable solutions. In the second step, the error function e(x) = f(x) - p_n(x) is evaluated over [a, b] to identify its local extrema, where the largest absolute errors occur. The reference points are then updated by replacing the current set with a new set of n+2 points selected from these extrema locations, prioritizing those that maintain alternating error signs and exhibit the largest magnitudes among candidate sets to promote progress toward the global minimax approximation. If multiple maxima exist with comparable magnitudes, the selection favors the consecutive set of n+2 extrema that maximizes the smallest absolute error in the set, ensuring the new \delta is at least as large as the previous one. This multiple-exchange strategy accelerates convergence compared to single-point exchanges. The overall iteration can be outlined in pseudocode as follows:
while not equioscillation_achieved(within_tolerance):
    Solve linear system for a_0, ..., a_n and δ using current reference points x_0, ..., x_{n+1}
    Compute error e(x) = f(x) - ∑_{k=0}^n a_k φ_k(x) over [a, b]
    Find local extrema of e(x), including their signs and magnitudes
    Select new reference points as the n+2 extrema set with the largest min |e| and alternating signs
    Update x_0, ..., x_{n+1} to the new points
This loop repeats until the reference points coincide with the error extrema within a specified tolerance, yielding the desired minimax approximation.
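
A compact runnable sketch of this loop is given below, assuming NumPy and using a dense evaluation grid in place of exact root-finding (the function name and the grid-based extremum search are illustrative simplifications, not a canonical implementation):

```python
import numpy as np

def remez(f, n, a, b, tol=1e-12, max_iter=30, grid=20000):
    """Remez multiple exchange on a dense grid (illustrative sketch).

    Returns the ascending coefficients of the degree-n minimax
    polynomial and the levelled deviation |delta|.
    """
    k = np.arange(n + 2)
    # Initial references: Chebyshev extrema mapped to [a, b].
    x = np.sort(0.5 * (a + b) + 0.5 * (b - a) * np.cos(k * np.pi / (n + 1)))
    xs = np.linspace(a, b, grid)
    fs = f(xs)
    coeffs, delta = np.zeros(n + 1), 0.0
    for _ in range(max_iter):
        # Step 1: solve p(x_i) + (-1)^i * delta = f(x_i) for a_0..a_n, delta.
        A = np.hstack([np.vander(x, n + 1, increasing=True),
                       ((-1.0) ** k)[:, None]])
        sol = np.linalg.solve(A, f(x))
        coeffs, delta = sol[:-1], sol[-1]
        # Step 2: error on the dense grid and its local extrema.
        e = fs - np.polynomial.polynomial.polyval(xs, coeffs)
        de = np.diff(e)
        turn = np.where(de[:-1] * de[1:] <= 0)[0] + 1  # interior turning points
        cand = np.concatenate(([0], turn, [grid - 1]))
        px, pe = xs[cand], e[cand]
        # Merge same-sign neighbours, keeping the larger magnitude, so the
        # surviving candidates alternate in sign.
        kx, ke = [px[0]], [pe[0]]
        for xi, ei in zip(px[1:], pe[1:]):
            if ei * ke[-1] > 0:
                if abs(ei) > abs(ke[-1]):
                    kx[-1], ke[-1] = xi, ei
            else:
                kx.append(xi)
                ke.append(ei)
        # Trim to n+2 points by repeatedly dropping the weaker endpoint.
        while len(kx) > n + 2:
            drop = 0 if abs(ke[0]) < abs(ke[-1]) else -1
            kx.pop(drop)
            ke.pop(drop)
        x = np.array(kx)
        # Terminate when the levelled deviation matches the true maximum.
        if np.max(np.abs(e)) - abs(delta) <= tol * max(1.0, abs(delta)):
            break
    return coeffs, abs(delta)
```

For example, `remez(np.sin, 3, 0.0, np.pi / 2)` should settle within a few iterations, with the returned deviation matching the maximum error observed on the grid.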

Convergence and Termination

The Remez algorithm converges quadratically to the unique minimax approximation under suitable conditions, such as when the target function f is sufficiently smooth on a compact interval and the initial reference points are appropriately chosen. This convergence rate holds for twice-differentiable functions: once the reference points are near the solution, the gap between the levelled deviation and the optimal error is roughly squared from one iteration to the next. The proof of convergence relies on the equioscillation theorem, which characterizes the solution by the error attaining its maximum magnitude with alternating signs at n+2 points, where n is the polynomial degree. Each iteration of the algorithm enforces a levelled error on the current reference set, and a theorem of Remez guarantees that the deviation \delta—the common levelled magnitude—increases monotonically or remains constant while the alternation improves, driving the process toward the unique equioscillation points. Convergence is generally rapid for low-degree polynomials, often requiring only 5-10 iterations to achieve high accuracy, as demonstrated in filter design examples where equiripple behavior stabilizes within 5-8 steps. However, for higher degrees, the process slows due to increasing ill-conditioning in the linear systems solved at each iteration, potentially necessitating more exchanges and higher computational effort. Termination occurs when the computed \delta aligns closely with the actual maximum error on the interval, typically within a small \epsilon such as 10^{-10}, or when the reference points shift by less than a predefined tolerance between iterations, indicating convergence. In practice, implementations may cap iterations at around 20 to avoid prolonged runs, monitoring the relative change in \delta. Post-convergence error estimation involves evaluating the approximation over the full interval to confirm equioscillation at exactly n+2 points with equal ripple magnitude; if the alternation is not precise due to numerical effects, additional dense sampling or refinement checks ensure the solution meets the minimax criterion within machine precision. Despite its robustness, the algorithm may fail to converge for non-smooth target functions or with poor initial guesses that miss key extremal points, in which case restarting with adjusted initialization—such as denser grids or Chebyshev nodes—is required to recover progress. Numerical round-off can also stall the monotonic growth of \delta in ill-conditioned high-degree cases, leading to premature termination.
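
The stopping test described above can be expressed directly; a small sketch (assuming NumPy; the helper name is hypothetical):

```python
import numpy as np

def remez_converged(e_grid, delta, tol=1e-10):
    """Stopping test sketch: the levelled deviation |delta| from the
    linear solve should match the true maximum error observed on a
    dense evaluation grid. The gap is non-negative in exact arithmetic
    and shrinks rapidly for smooth functions near the solution."""
    gap = np.max(np.abs(e_grid)) - abs(delta)
    return gap <= tol * max(1.0, abs(delta))
```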

Variants and Extensions

Second Remez Algorithm

The second Remez algorithm, an extension of the original exchange method introduced by Evgeny Yakovlevich Remez, was developed to address scenarios where the first algorithm experiences slow convergence or instability due to inadequate initial reference points. Detailed in Remez's 1957 monograph on computational methods for Chebyshev approximation, it offers a more robust framework for nonlinear problems by iteratively refining both approximation coefficients and error extremal points. A primary distinction from the first Remez algorithm lies in its approach to reducing the maximum deviation: rather than performing single-point exchanges, it replaces the entire reference set at each step with the local extrema of the current error curve, including the location of the global maximum deviation, thereby aligning the iteration with the equioscillation theorem more directly. The procedure often leverages the output of the first Remez algorithm as a warm start to accelerate convergence. From there, it proceeds iteratively: a linear system is solved over the current n+2 reference points to compute trial coefficients and an error bound, after which the reference points are updated to the locations of the new error function's extrema (with alternating signs). Weights are then adjusted based on these maximum positions, and the cycle repeats until the error bound stabilizes within a tolerance. This variant provides advantages in managing nonlinear constraints, particularly for weighted Chebyshev approximations or bases beyond polynomials, such as general Haar systems, where the single-exchange method may falter. Regarding convergence, the second Remez algorithm demonstrates quadratic rates under twice continuous differentiability of the target function, rendering it more resilient to ill-conditioned problems than the first algorithm, whose single exchanges update the full reference set only about once every n+2 iterations; in practice, the second variant often requires fewer reference-set updates for challenging cases. For instance, when approximating functions like e^x on intervals such as [0,1], it attains the equioscillation property with greater efficiency, reducing the number of iterations required relative to the single-exchange method.

Approximations for Rational Functions

The Remez algorithm extends to rational approximation by seeking polynomials p(x) of degree at most m and q(x) of degree at most n (with q(x) \not\equiv 0) to minimize the uniform error \|f - p/q\|_\infty = \max_{x \in [a,b]} |f(x) - p(x)/q(x)| over a compact interval [a,b], where f is continuous and the approximation is sought without poles of r(x) = p(x)/q(x) in [a,b]. To address non-uniqueness arising from scaling (since \lambda p / \lambda q = p/q for \lambda \neq 0), the problem is typically normalized by fixing the leading coefficient of q(x) to 1 or setting q(c) = 1 for some point c \in [a,b]. Unlike the linear polynomial case, the rational variant is nonlinear due to the quotient structure, requiring adaptations such as differential correction or successive linearization to solve the underlying optimization. Initialization often begins with polynomial approximations: for instance, a degree-m polynomial for the numerator based on f, and a degree-n polynomial (or a constant) for the denominator, potentially refined via least-squares fitting to ensure q(x) has no zeros in [a,b]. The iterative process selects an initial set of m + n + 2 reference points in [a,b] and solves a system enforcing equioscillation of the weighted error at these points, typically via Newton-like methods or by linearizing around a current estimate—for example, requiring f(x_i) q(x_i) - p(x_i) = (-1)^i \delta \, q(x_i) at each point x_i and substituting the previous value of \delta into the products with the unknown denominator coefficients, yielding a linear system in the coefficients of p, q, and \delta. New reference points are then determined by locating the extrema of the current error |f(x) - p(x)/q(x)|, exchanging those that violate the equioscillation pattern, and repeating until the maximum error stabilizes (e.g., within a factor such as 1.05 of the previous minimum). Key challenges include potential non-convergence due to the nonlinearity, sensitivity to initial guesses that may lead to denominator zeros or poles entering [a,b], and numerical instability from ill-conditioned systems when the degrees m and n are high. These are often mitigated by safeguarding techniques, such as restricting pole locations to lie outside [a,b] or using regularization to bound the denominator. The error characterization generalizes the equioscillation property from polynomial approximation: the unique best rational approximation attains its maximum error magnitude at (in the non-degenerate case) m + n + 2 points in [a,b] with alternating signs, with poles excluded from the interval by construction. Modern developments, such as the AAA (adaptive Antoulas–Anderson) algorithm, introduced in 2017, draw inspiration from Remez principles by adaptively selecting support points and poles for stable barycentric rational functions, followed by Lawson iterations to refine toward the minimax solution, enhancing robustness for high-degree cases.
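
One linearized inner solve can be sketched as follows (assuming NumPy; the function name, the normalization of q to have constant term 1, and the fixed-point treatment of \delta are illustrative assumptions rather than a canonical formulation):

```python
import numpy as np

def rational_inner_solve(f, x, m, n, delta_old=0.0):
    """One linearized solve of a rational Remez step (sketch).

    Enforces f(x_i) - p(x_i)/q(x_i) = (-1)^i * delta at the m+n+2
    reference points `x`, with q normalized as q = 1 + b_1 x + ... + b_n x^n.
    The nonlinear delta*b_j products use delta from the previous pass,
    so this solve is repeated until delta stabilizes.
    """
    x = np.asarray(x, dtype=float)
    fx = f(x)
    s = (-1.0) ** np.arange(len(x))        # alternating signs
    cols = [x ** j for j in range(m + 1)]  # numerator coefficients a_j
    cols += [-(fx - s * delta_old) * x ** j for j in range(1, n + 1)]  # b_j
    cols += [s]                            # levelled deviation delta
    A = np.column_stack(cols)              # (m+n+2) x (m+n+2) system
    sol = np.linalg.solve(A, fx)
    a, b, delta = sol[:m + 1], sol[m + 1:m + 1 + n], sol[-1]
    return a, np.concatenate(([1.0], b)), delta

# Example: one pass for f = exp on [0, 1] with m = n = 2 at 6 points
# x = np.linspace(0, 1, 6); a, q, d = rational_inner_solve(np.exp, x, 2, 2)
```

Repeating this solve with the returned \delta fed back in typically stabilizes \delta within a few passes, after which the reference points are exchanged as in the polynomial case.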

Applications and Examples

Digital Signal Processing

The Remez algorithm plays a pivotal role in digital signal processing, most notably in the design of finite impulse response (FIR) filters, where it computes coefficients that yield equiripple characteristics. This minimizes the maximum deviation—or ripple—between the desired and actual frequency responses across passbands and stopbands, ensuring uniform error distribution as per the equioscillation theorem. Such optimality in the Chebyshev sense allows for efficient filters with the smallest possible transition bandwidth for a given order, which is crucial for resource-limited hardware implementations. A key implementation of the Remez algorithm in this domain is the Parks-McClellan algorithm, introduced in the early 1970s for designing optimal linear-phase filters tailored to frequency-selective tasks like low-pass, high-pass, and bandpass filtering. This method reformulates the approximation problem using a cosine basis, which exploits the symmetry of linear-phase filters to reduce the problem size and enable faster iterations. By specifying the desired frequency response H(\omega) over the normalized interval [0, \pi], the algorithm iteratively refines a polynomial in \cos\omega that approximates the ideal response while adhering to user-defined band edges and ripple tolerances. The advantages of this Remez-based approach lie in its guaranteed optimality for minimax error, outperforming simpler methods like windowing in terms of filter sharpness and efficiency for bandwidth-constrained scenarios. It proves especially valuable in applications such as audio processing, where precise ripple control prevents distortion, and in communications systems, where tight stopband attenuation suppresses interference. Historically, the Parks-McClellan algorithm's availability as early software tools spurred the development of real-time digital filtering hardware in the 1970s, transitioning filters from theoretical designs to practical embedded systems. Today, it remains embedded in industry-standard software, exemplified by MATLAB's firpm function, which automates the design process for engineers. Extensions of the algorithm further broaden its utility in DSP, supporting multiband filter designs that handle multiple pass and stop bands simultaneously for complex spectral shaping, such as in equalizers or channelizers. Additionally, weighted approximations enable unequal ripple control, allowing designers to prioritize lower error in critical bands (e.g., the passband) at the expense of others (e.g., the stopband), thus customizing the trade-off between performance and filter length. These enhancements maintain the core Remez exchange procedure while adapting to diverse demands.
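
In Python, SciPy exposes the same design procedure as scipy.signal.remez, analogous to MATLAB's firpm; a brief low-pass example (the band edges, tap count, and weights are illustrative choices):

```python
import numpy as np
from scipy import signal

# Equiripple low-pass FIR via the Remez exchange (Parks-McClellan).
# `bands` holds band edges in Hz, `desired` one gain per band, and
# `weight` the relative importance of ripple in each band.
fs = 8000.0
taps = signal.remez(
    numtaps=73,
    bands=[0, 1500, 2000, 0.5 * fs],  # passband 0-1500 Hz, stopband 2000-4000 Hz
    desired=[1.0, 0.0],
    weight=[1.0, 10.0],               # penalize stopband ripple 10x more heavily
    fs=fs,
)
w, h = signal.freqz(taps, worN=2048, fs=fs)
print("max stopband gain (dB):", 20 * np.log10(np.max(np.abs(h[w >= 2000]))))
```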

Numerical Examples

A concrete illustration of the Remez algorithm involves approximating f(x) = \sin(x) on the interval [0, \pi/2] with a polynomial of degree 3. Initial reference points are selected uniformly as x_i = i \cdot (\pi/2)/4 for i = 0, 1, 2, 3, 4, yielding the points 0, \pi/8, \pi/4, 3\pi/8, \pi/2. In the first iteration, the algorithm identifies the error maxima and exchanges the reference points for those where the error alternates in sign, updating the set for the next linear solve. The computation proceeds by solving the linear system for polynomial coefficients at the current reference points, yielding updated approximations. The reference points evolve through the iterations, with the final set converging to the locations where the error equioscillates. The final error exhibits equioscillation at 5 points across the interval, confirming the optimality property. Another example applies the Remez algorithm to the Runge function f(x) = 1/(1 + 25x^2) on [-1, 1] using a degree-5 polynomial. Starting with scaled Chebyshev points for initialization, the iterative exchanges lead to a polynomial that minimizes the maximum deviation, avoiding the large oscillations (the Runge phenomenon) seen in equispaced interpolation of this function. The final approximation demonstrates uniform error distribution with equioscillation at 7 points. To highlight the superiority of the minimax criterion, the Remez algorithm typically achieves a smaller maximum error than the least-squares method on these examples. In practice, the Remez algorithm is initialized with scaled Chebyshev points to accelerate convergence, typically requiring 4-6 iterations for these cases, and results can be confirmed via tools such as MATLAB's legacy remez function (now firpm) or equivalent libraries.
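
The least-squares comparison can be reproduced with a short NumPy sketch; the Chebyshev-node interpolant here stands in as a near-minimax proxy that a full Remez iteration would refine only slightly:

```python
import numpy as np

f, a, b = np.sin, 0.0, np.pi / 2
x = np.linspace(a, b, 10001)

ls = np.polynomial.Polynomial.fit(x, f(x), 3)                  # least-squares cubic
ch = np.polynomial.Chebyshev.interpolate(f, 3, domain=[a, b])  # near-minimax proxy

print("least-squares max error: ", np.max(np.abs(f(x) - ls(x))))
print("Chebyshev-node max error:", np.max(np.abs(f(x) - ch(x))))
# The second value is typically smaller, illustrating the uniform-error
# advantage that the minimax (Remez) solution formalizes.
```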
