
Zero of a function

In mathematics, a zero of a function f, also known as a root, is a value x in the domain of f such that f(x) = 0. This concept is central to solving equations of the form f(x) = 0, a problem that arises throughout algebra, analysis, and the applied sciences, and it identifies the points where the graph of the function meets the horizontal axis. Zeros can be real or complex, simple or multiple (with multiplicity greater than one if the function and its derivatives up to a certain order vanish at that point), and their locations determine key properties like the number of sign changes or the overall shape of the function.

For polynomial functions, zeros play a pivotal role in factorization and equation solving. The fundamental theorem of algebra states that every non-constant polynomial of degree n with complex coefficients has exactly n complex zeros, counting multiplicities, ensuring that such polynomials can be fully factored into linear terms over the complex numbers. Over the real numbers, polynomials of odd degree always have at least one real zero, while even-degree polynomials may have none, depending on the coefficients. Tools like the rational root theorem help identify possible rational zeros by testing factors of the constant term over factors of the leading coefficient.

Existence and location of zeros for general continuous functions are addressed by theorems like the intermediate value theorem, which guarantees at least one zero in an interval [a, b] if f(a) and f(b) have opposite signs. Analytical methods, such as factoring or the quadratic formula, suffice for low-degree polynomials, but numerical approaches are essential for transcendental or high-degree functions. These include the bisection method, which iteratively narrows an interval containing a sign change, and Newton's method, which uses the function's derivative to approximate zeros via the iteration x_{n+1} = x_n - f(x_n)/f'(x_n).

Beyond pure mathematics, zeros of functions underpin numerous applications in science and engineering. In physics, they give equilibrium conditions in equations modeling oscillations or particle trajectories. In engineering, the zeros of transfer functions, alongside poles, dictate system stability and response in control systems. In optimization and numerical computation, locating zeros aids in calibrating model parameters or simulating physical processes, highlighting the topic's interdisciplinary significance.

Fundamental Concepts

Definition and Notation

In mathematics, a zero (also known as a root) of a function f is a value c in the domain of f such that f(c) = 0. This concept applies to functions from real numbers, complex numbers, or more general spaces to their codomains, where the zero represents a point at which the function value vanishes. Zeros are distinct from fixed points of a function: while a zero satisfies f(c) = 0, a fixed point x_0 satisfies f(x_0) = x_0. The two notions are related because the fixed points of f correspond to the zeros of the function g(x) = f(x) - x.

Standard notation for a zero of f denotes it as a solution to the equation f(x) = 0, often using variables like x or symbols such as \alpha for specific roots. For polynomials or analytic functions, roots may be indexed, as in the notation \text{Root}[p(x), k] for the k-th root of a polynomial p(x).

A repeated zero, or zero of multiplicity m > 1, occurs when the function and its first m - 1 derivatives vanish at c but the m-th derivative does not; that is, f^{(k)}(c) = 0 for k = 0, 1, \dots, m-1 and f^{(m)}(c) \neq 0. This definition extends to smooth functions, measuring the order of contact with the zero line. Zeros of multiplicity 1 are called simple zeros. For example, consider the polynomial f(x) = x^2 - 1. Its zeros are x = 1 and x = -1, each with multiplicity 1, since f(1) = 0, f(-1) = 0, and f'(x) = 2x yields f'(1) = 2 \neq 0 and f'(-1) = -2 \neq 0.

The term "zero" in this context derives directly from the equation f(x) = 0 being solved, with early methods for finding such points appearing in ancient Babylonia around 1800 BCE, where quadratic equations were solved geometrically via completing the square to identify roots.
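The derivative test for multiplicity is easy to check by machine. The following sketch (a minimal illustration assuming the SymPy library; the helper name zero_multiplicity is hypothetical) counts how many successive derivatives of a function vanish at a candidate zero:

```python
import sympy as sp

x = sp.symbols('x')

def zero_multiplicity(f, c, max_order=10):
    """Return m such that f, f', ..., f^(m-1) vanish at x = c
    but f^(m) does not, i.e. the multiplicity of the zero at c."""
    for m in range(max_order + 1):
        if sp.diff(f, x, m).subs(x, c) != 0:
            return m
    return None  # vanishes to higher order than max_order

print(zero_multiplicity(x**2 - 1, 1))     # 1: a simple zero
print(zero_multiplicity((x + 1)**3, -1))  # 3: zero of multiplicity 3
```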

Relation to Equations

The zeros of a function f are precisely the solutions to the equation f(x) = 0, making the problem of finding zeros mathematically equivalent to solving this equation for x. This direct correspondence forms the foundation of root-finding in algebra and numerical analysis, where identifying such points reveals critical behaviors like intercepts or fixed points. More broadly, to solve an equation of the form f(x) = k for a constant k, one can reformulate it by considering the zeros of the function g(x) = f(x) - k, which shifts the problem to a standard zero-finding task.

The number of zeros varies significantly depending on the type of equation. Linear equations, such as ax + b = 0 with a \neq 0, possess exactly one zero, given by x = -b/a. Quadratic equations, of the form ax^2 + bx + c = 0, can have up to two distinct real zeros, determined by the sign of the discriminant b^2 - 4ac: a positive discriminant yields two, zero yields one (with multiplicity two), and a negative discriminant yields none in the reals. In contrast, transcendental equations—those involving non-algebraic functions like exponentials or trigonometric terms, such as \sin x = e^{-x}—may exhibit no zeros, a finite number, or infinitely many, as their oscillatory or asymptotic behaviors can lead to multiple intersections with the x-axis.

A concrete example illustrates this relation: consider the quadratic equation x^2 + 2x + 1 = 0, which factors as (x + 1)^2 = 0 and thus has a double zero at x = -1. This point is the zero of the associated function f(x) = x^2 + 2x + 1, where the multiplicity reflects the tangency of the parabola to the x-axis at that location. Such multiplicities highlight how equation-solving captures not just the locations but also the nature of the solutions.

In applied contexts, zeros underpin the analysis of equilibria in physical systems. For a falling object under gravity with linear drag, the terminal velocity occurs where the net force is zero: solving mg - kv = 0 gives v = mg/k, with m the mass, g the gravitational acceleration, and k the drag coefficient; this equilibrium stabilizes the velocity as drag balances weight.
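The reformulation g(x) = f(x) - k can be made concrete with the transcendental example above. The sketch below (assuming SciPy's brentq bracketing solver) recasts \sin x = e^{-x} as a zero-finding problem and locates the first positive solution:

```python
import numpy as np
from scipy.optimize import brentq

# Recast sin(x) = exp(-x) as the zero-finding problem g(x) = 0.
def g(x):
    return np.sin(x) - np.exp(-x)

# g(0) = -1 < 0 and g(1) = sin(1) - 1/e > 0, so a zero lies in [0, 1].
root = brentq(g, 0.0, 1.0)
print(root)  # ~0.5885, the first of infinitely many intersections
```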

Zeros of Polynomials

Existence and Multiplicity

A polynomial of degree n over the complex numbers has exactly n zeros, counting multiplicities. This count includes both real and complex zeros, where repeated zeros are accounted for according to their multiplicity. The multiplicity of a zero r of a polynomial p(x) is the largest positive integer m such that (x - r)^m divides p(x) evenly, or equivalently, p(x) = (x - r)^m q(x) where q(r) \neq 0 and \deg q = n - m. This multiplicity influences the graph of the polynomial near the zero: for odd multiplicity, the graph crosses the x-axis, while for even multiplicity, it touches the x-axis and turns back, creating a flatter appearance at the root as m increases.

For example, consider p(x) = x^2, which factors as (x - 0)^2, so the zero at x = 0 has multiplicity 2; the graph touches the x-axis at the origin without crossing it. In contrast, for p(x) = x(x - 1), the distinct zeros at x = 0 and x = 1 each have multiplicity 1, and the graph crosses the x-axis at both points.

Not all zeros of a polynomial with real coefficients are real; the non-real zeros occur in complex conjugate pairs. Specifically, if a polynomial p(x) with real coefficients has a complex zero a + bi where b \neq 0, then its complex conjugate a - bi is also a zero, ensuring that the non-real zeros contribute evenly to the total count of n.

Vieta's formulas relate the coefficients of a monic polynomial to symmetric functions of its zeros. For a monic polynomial p(x) = x^n + a_{n-1} x^{n-1} + \cdots + a_1 x + a_0 with zeros r_1, r_2, \dots, r_n (counting multiplicities), the sum of the zeros is r_1 + r_2 + \cdots + r_n = -a_{n-1}, the sum of the products of the zeros taken two at a time is r_1 r_2 + r_1 r_3 + \cdots + r_{n-1} r_n = a_{n-2}, and in general, the elementary symmetric sums of the zeros equal the coefficients up to sign, with the product r_1 r_2 \cdots r_n = (-1)^n a_0. For the case p(x) = x^2 + b x + c with zeros r_1 and r_2, this simplifies to r_1 + r_2 = -b and r_1 r_2 = c. These relations hold even with multiplicities; for instance, in p(x) = (x - r)^2 = x^2 - 2r x + r^2, the sum is 2r = -(-2r) and the product is r^2.
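Vieta's relations are straightforward to verify numerically. A minimal sketch (assuming NumPy; np.roots is used only as a convenient root solver) checks the sum and product formulas for a monic cubic:

```python
import numpy as np

# p(x) = x^3 - 6x^2 + 11x - 6 = (x - 1)(x - 2)(x - 3)
coeffs = [1, -6, 11, -6]
roots = np.roots(coeffs)

print(sum(roots))      # ~6.0: equals -a_{n-1} = -(-6)
print(np.prod(roots))  # ~6.0: equals (-1)^n a_0 = (-1)^3 (-6)
```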

Fundamental Theorem of Algebra

The fundamental theorem of algebra states that every non-constant polynomial of degree n with complex coefficients has at least one complex root, and consequently, exactly n roots counting multiplicities. This theorem, also known as d'Alembert's theorem or the d'Alembert–Gauss theorem, traces its origins to attempts in the 17th and 18th centuries to resolve polynomial equations over the complex numbers. Jean le Rond d'Alembert published the first known proof in 1746, though it relied on questionable assumptions about continuity and was not fully rigorous by modern standards. The theorem is widely attributed to Carl Friedrich Gauss, who provided the first generally accepted proof in his 1799 doctoral dissertation, addressing polynomials with real coefficients and extending the result to complex ones. Gauss later developed three additional proofs between 1815 and 1849, employing varied methods including geometric and analytic arguments, solidifying the theorem's foundation.

One elegant proof uses complex analysis and Liouville's theorem, which asserts that every bounded entire function on the complex plane is constant. Suppose, for contradiction, that a non-constant polynomial p(z) = a_n z^n + a_{n-1} z^{n-1} + \cdots + a_0 with a_n \neq 0 has no complex zeros. Then 1/p(z) is entire, as it has no poles. For large |z|, the leading term dominates, so |p(z)| \approx |a_n| |z|^n, implying |1/p(z)| \to 0 as |z| \to \infty. Thus, 1/p(z) is bounded on \mathbb{C}. By Liouville's theorem, 1/p(z) is constant, so p(z) is constant, contradicting the assumption. Therefore, p(z) must have at least one zero.

The theorem implies that every such polynomial factors completely as p(z) = a_n (z - r_1)^{m_1} (z - r_2)^{m_2} \cdots (z - r_k)^{m_k}, where the r_i are distinct complex zeros, the m_i are their multiplicities (as discussed in the section on existence and multiplicity), and \sum m_i = n. This complete factorization over \mathbb{C} enables the algebraic resolution of higher-degree equations by reducing them to linear factors, bridging algebra and analysis. For example, the polynomial p(z) = z^3 - 1 has degree 3 and factors as (z - 1)(z - \omega)(z - \omega^2), where z = 1 is a real root and \omega = e^{2\pi i / 3} = -\frac{1}{2} + i \frac{\sqrt{3}}{2}, \omega^2 = -\frac{1}{2} - i \frac{\sqrt{3}}{2} are complex conjugates, illustrating the theorem's guarantee of three roots in total.
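The guarantee of exactly n complex roots can be observed numerically; a quick sketch (assuming NumPy) recovers all three cube roots of unity for p(z) = z^3 - 1:

```python
import numpy as np

# z^3 - 1 = 0: the theorem guarantees exactly three complex roots.
print(np.roots([1, 0, 0, -1]))
# [-0.5+0.8660254j  -0.5-0.8660254j  1.0+0.j]  (one real root and a
# conjugate pair; the ordering may vary)
```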

Computational Methods

Analytical Techniques

Analytical techniques provide exact methods for determining the zeros of functions, primarily through algebraic manipulation and closed-form expressions, applicable mainly to polynomials of low degree and certain transcendental equations. These approaches rely on solving equations symbolically without approximation, leveraging theorems and formulas derived from classical algebra. While powerful for simple cases, they become impractical for higher complexities due to the intricate nature of the solutions.

For polynomials with integer coefficients, factoring is a fundamental analytical method to identify zeros. The rational root theorem states that any possible rational zero, expressed in lowest terms as p/q, has p as a factor of the constant term and q as a factor of the leading coefficient. This theorem guides the testing of candidate roots, often using synthetic division to factor the polynomial efficiently once a root is found. For example, applying synthetic division to divide a cubic polynomial by a linear factor corresponding to a rational root reduces it to a quadratic, whose zeros can then be solved exactly.

The quadratic formula offers a complete analytical solution for second-degree polynomials of the form ax^2 + bx + c = 0, where a \neq 0. The zeros are given by x = \frac{ -b \pm \sqrt{b^2 - 4ac} }{2a}. The discriminant \Delta = b^2 - 4ac determines the nature of the zeros: if \Delta > 0, there are two distinct real zeros; if \Delta = 0, one real zero (repeated); and if \Delta < 0, two complex conjugate zeros. This formula, with roots tracing back to Babylonian methods around 1800 BC and fully generalized in the 16th century, provides precise solutions without iteration.

For cubic polynomials, Cardano's formula enables exact solutions. For the depressed cubic x^3 + px + q = 0, a root is x = \sqrt[3]{ -\frac{q}{2} + \sqrt{ \left( \frac{q}{2} \right)^2 + \left( \frac{p}{3} \right)^3 } } + \sqrt[3]{ -\frac{q}{2} - \sqrt{ \left( \frac{q}{2} \right)^2 + \left( \frac{p}{3} \right)^3 } }. Published by Gerolamo Cardano in 1545, this formula applies after depressing the cubic via substitution to eliminate the quadratic term, though the other two roots require factoring the result.

However, polynomials of degree five or higher generally lack solutions by radicals, as established by the Abel–Ruffini theorem, which proves the unsolvability of the general quintic equation using finite additions, subtractions, multiplications, divisions, and root extractions. Niels Henrik Abel provided a rigorous proof in 1824, building on earlier work by Paolo Ruffini.

Beyond polynomials, analytical techniques yield exact zeros for certain non-polynomial functions. For instance, the equation \sin(x) = 0 has solutions x = k\pi, where k is any integer, derived from the periodicity of the sine function and its zeros at integer multiples of \pi. Similar closed-form expressions exist for other trigonometric, exponential, or logarithmic equations, often using inverse functions or identities.

These methods are limited to low-degree polynomials and specific transcendental forms, as higher-degree cases defy radical solutions per the Abel–Ruffini theorem, necessitating numerical alternatives for practical computation. Symbolic computation tools, such as computer algebra systems, extend these techniques by automating algebraic manipulations for moderately complex equations, though they cannot overcome fundamental unsolvability barriers.
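The rational root theorem translates directly into a finite search, as the sketch below shows (plain Python; the helper names are illustrative, and exact Fraction arithmetic avoids rounding error when testing candidates):

```python
from fractions import Fraction
from itertools import product

def rational_zero_candidates(coeffs):
    """Candidates p/q from the rational root theorem: p divides the
    constant term, q divides the leading coefficient (integer coeffs)."""
    def divisors(n):
        n = abs(n)
        return [d for d in range(1, n + 1) if n % d == 0]
    lead, const = coeffs[0], coeffs[-1]
    return sorted({Fraction(s * p, q)
                   for p, q in product(divisors(const), divisors(lead))
                   for s in (1, -1)})

def poly_eval(coeffs, x):
    """Evaluate the polynomial at x by Horner's rule."""
    acc = 0
    for c in coeffs:
        acc = acc * x + c
    return acc

# p(x) = 2x^3 - 3x^2 - 11x + 6 = (x + 2)(2x - 1)(x - 3)
coeffs = [2, -3, -11, 6]
print([r for r in rational_zero_candidates(coeffs) if poly_eval(coeffs, r) == 0])
# [Fraction(-2, 1), Fraction(1, 2), Fraction(3, 1)]
```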

Numerical Algorithms

Numerical algorithms are essential for approximating zeros of functions when analytical solutions are not feasible, particularly for nonlinear equations where exact roots cannot be expressed in closed form. These methods rely on iterative processes starting from an initial guess or interval, progressively refining the approximation until a desired tolerance is achieved. Common approaches include bracketing methods that guarantee convergence within a bounded interval and derivative-based or quasi-derivative methods that accelerate convergence under suitable conditions.

The bisection method is a robust bracketing technique applicable to continuous functions f on an interval [a, b] where f(a) and f(b) have opposite signs, ensuring at least one root exists by the intermediate value theorem. The algorithm proceeds by repeatedly bisecting the interval: compute the midpoint c = (a + b)/2, evaluate f(c), and replace the endpoint where the sign change occurs with c, narrowing the interval by half each step. The convergence is linear: the error bound halves with each iteration, making the method reliable but slow for high precision.

The Newton-Raphson method, also known as Newton's method, offers faster convergence for differentiable functions. It iterates via the formula x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}, starting from an initial guess x_0 sufficiently close to the root. Under conditions that f is twice continuously differentiable, f'(x^*) \neq 0 at the root x^*, and the sequence converges to x^*, the method exhibits quadratic convergence, where the error satisfies |x_{n+1} - x^*| \leq M |x_n - x^*|^2 for some constant M > 0 and large n. For example, to approximate \sqrt{2}, solve f(x) = x^2 - 2 = 0 with f'(x) = 2x and initial guess x_0 = 1: the first iteration yields x_1 = 1.5, and the second x_2 \approx 1.4167, rapidly approaching the true value.

The secant method extends the Newton-Raphson approach without requiring explicit derivatives, using two initial points x_0 and x_1. It approximates the derivative as \frac{f(x_n) - f(x_{n-1})}{x_n - x_{n-1}}, yielding the iteration x_{n+1} = x_n - f(x_n) \frac{x_n - x_{n-1}}{f(x_n) - f(x_{n-1})}. This finite-difference approximation of the secant slope enables superlinear convergence similar to Newton's method, often with order approximately 1.618, while avoiding derivative computations, which is advantageous when f' is costly or unavailable.

Numerical root-finding faces challenges such as multiple roots, where f(x) = (x - \alpha)^m g(x) with m > 1 and g(\alpha) \neq 0, causing methods like Newton-Raphson to converge only linearly because the derivative vanishes to order m - 1 at \alpha. Error analysis typically bounds the approximation via the interval length or the residual |f(x_n)|, with stopping criteria like |f(x_n)| < \epsilon or |x_{n+1} - x_n| < \delta ensuring practical termination. For complex zeros, techniques leveraging the argument principle integrate f'/f along contours to locate and count zeros within regions, though iterative refinement is still needed.

Software libraries implement these algorithms efficiently; for instance, NumPy's roots function computes all roots of polynomials from coefficient arrays using eigenvalue methods on the companion matrix. MATLAB's fzero solves for zeros of general nonlinear univariate functions, combining bisection, secant, and inverse quadratic interpolation for robust convergence from an initial point or bracketing interval.
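The three iterations described above fit in a few lines each. The following sketch (plain, library-free Python with illustrative tolerances) applies all three to f(x) = x^2 - 2, whose positive zero is \sqrt{2}:

```python
def bisection(f, a, b, tol=1e-10):
    """Bracketing: halve [a, b] while keeping the sign change inside."""
    fa = f(a)
    while b - a > tol:
        c = (a + b) / 2
        fc = f(c)
        if fa * fc <= 0:
            b = c              # sign change stays in [a, c]
        else:
            a, fa = c, fc      # sign change stays in [c, b]
    return (a + b) / 2

def newton(f, df, x, tol=1e-12, max_iter=50):
    """Newton-Raphson: x <- x - f(x)/f'(x), quadratic near a simple root."""
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

def secant(f, x0, x1, tol=1e-12, max_iter=50):
    """Secant: Newton with the derivative replaced by a difference quotient."""
    for _ in range(max_iter):
        x2 = x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0))
        x0, x1 = x1, x2
        if abs(x1 - x0) < tol:
            break
    return x1

f, df = lambda x: x**2 - 2, lambda x: 2 * x
print(bisection(f, 1, 2), newton(f, df, 1.0), secant(f, 1.0, 2.0))
# all three print ~1.41421356..., approximating sqrt(2)
```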

Generalizations

Zero Sets in Multiple Variables

In the context of functions from \mathbb{R}^n to \mathbb{R}^m, the zero set of a map F: \mathbb{R}^n \to \mathbb{R}^m is defined as the set \{ x \in \mathbb{R}^n \mid F(x) = 0 \}, consisting of all points where the components of F simultaneously vanish. When F is a single polynomial function, this zero set forms a hypersurface in \mathbb{R}^n, a geometric object whose shape is determined by the polynomial's degree and coefficients.

For polynomial systems, zero sets provide foundational examples in algebraic geometry. Consider the polynomial equation f(x,y) = x^2 + y^2 - 1 = 0 in two variables, whose zero set is the unit circle in \mathbb{R}^2, a compact curve of dimension 1. More generally, the intersection of the zero sets of two polynomials defines curves, with Bézout's theorem stating that two projective plane curves of degrees d_1 and d_2 without common components intersect in exactly d_1 d_2 points, counting multiplicities, over an algebraically closed field.

Algebraic varieties generalize these zero sets to irreducible components defined over a field k. For a subset V \subseteq k^n, the vanishing ideal I(V) consists of all polynomials in k[x_1, \dots, x_n] that are zero on every point of V, forming an ideal whose radical determines the variety's structure. Hilbert's Nullstellensatz establishes a correspondence between radical ideals and varieties, asserting that over an algebraically closed field, the ideal of all polynomials vanishing on the zero set of an ideal J equals the radical of J, linking algebraic and geometric objects precisely.

The dimension of a variety V \subseteq k^n is the largest d such that V contains a d-dimensional irreducible component, with codimension defined as n - d. For a complete intersection—where the number of defining equations equals the codimension—the zero set has the expected dimension if the equations intersect transversely, meaning their gradients are linearly independent at generic points. Real varieties, defined over \mathbb{R}^n, may have different topological properties from their counterparts over \mathbb{C}^n, as real zero sets can be non-compact or empty even when the complex ones are not, due to sign constraints absent in the complex case.

To compute zero sets of polynomial systems, Gröbner bases provide an algorithmic tool, introduced by Buchberger in 1965 as a canonical generating set for ideals that simplifies solving F(x) = 0 by reducing to triangular systems via monomial orderings. This extends the single-variable case, where zeros are isolated roots, to higher dimensions where solutions form varieties of positive dimension.
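Buchberger's algorithm is available in standard computer algebra systems; a minimal sketch (assuming SymPy's groebner and solve functions) triangularizes the system cutting the unit circle with the line y = x:

```python
import sympy as sp

x, y = sp.symbols('x y')

# Zero set of a system: the unit circle intersected with the line y = x.
F = [x**2 + y**2 - 1, x - y]

# A lexicographic Groebner basis puts the system in triangular form,
# here reducing x^2 + y^2 - 1 to a univariate condition on y.
print(sp.groebner(F, x, y, order='lex'))  # basis: [x - y, 2*y**2 - 1]

# The zero set is zero-dimensional: two points with y = ±1/sqrt(2), x = y.
print(sp.solve(F, [x, y]))
```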

Applications Across Disciplines

In physics, the zeros of the characteristic polynomial of a matrix yield its eigenvalues, which describe the natural frequencies of vibrational systems and the quantized energy levels in quantum mechanical models. For instance, in the analysis of molecular vibrations, the eigenvalues obtained from the Hessian of the potential energy at equilibrium points determine the frequencies of the normal modes, essential for understanding spectroscopic properties.

In engineering, particularly signal processing, the zeros of a system's transfer function, defined as the roots of its numerator polynomial, influence the frequency response, while their placement relative to the poles affects overall behavior; systems with zeros inside the unit circle in the z-plane contribute to bounded output responses for bounded inputs. In control theory, root locus plots visualize how the locations of the closed-loop poles—starting from the open-loop poles and approaching the zeros as the gain varies—guide the design of controllers to ensure desirable stability margins and transient response.

In economics, equilibrium points often emerge as zeros of functions derived from utility maximization or supply-demand balances; for example, in general equilibrium models, price vectors satisfy (I - B)p = 0, where B represents the input-output coefficients, marking the condition for equilibrium without excess supply or demand. In biology, population dynamics models like the logistic equation dP/dt = rP(1 - P/K) have equilibria at the zeros P = 0 (extinction threshold) and P = K (carrying capacity), where the growth rate vanishes, illustrating thresholds beyond which populations stabilize or decline.

In computational science, root-finding techniques underpin optimization algorithms, such as Newton's method, which iteratively solves ∇f(x) = 0 for zeros of the gradient to locate minima of objective functions in problems like least-squares fitting. Recent developments in machine learning, particularly since the 2010s resurgence of neural networks, frame training as minimizing loss functions to approach zero error on datasets; for instance, gradient descent adjusts parameters to find points where the loss—measuring prediction discrepancies—is ideally zero for overparameterized models achieving a perfect fit.
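To make the optimization connection concrete, the sketch below (plain Python; the quadratic objective is illustrative) locates a minimum of f(x) = (x - 3)^2 + 1 by driving the derivative to zero with Newton's method:

```python
def newton_optimize(grad, hess, x, tol=1e-10, max_iter=50):
    """Minimize by finding a zero of the gradient:
    x <- x - f'(x)/f''(x)."""
    for _ in range(max_iter):
        step = grad(x) / hess(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# f(x) = (x - 3)^2 + 1 is minimized where f'(x) = 2(x - 3) = 0.
grad = lambda x: 2 * (x - 3)
hess = lambda x: 2.0
print(newton_optimize(grad, hess, x=0.0))  # 3.0, found in one step
```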

References

  1. [1]
    Section 4-2-3
    A zero of a function `f` is a root (or solution) of the equation f ( t ) = 0 , that is, a value of the independent variable for which the corresponding ...
  2. [2]
    Finding Roots of an Equation - Department of Mathematics at UTSA
    Oct 18, 2021 · ... equation obtained by equating the function to 0", and the study of zeros of functions is exactly the same as the study of solutions of equations ...
  3. [3]
    [PDF] Zeros of Analytic Functions - Trinity University
    A zero of an analytic function at z0 is where f(z0)=0. Zeros can be simple (order 1) or higher order, and can be infinite order.
  4. [4]
    [PDF] The Fundamental Theorem of Algebra - UC Davis Math
    Feb 13, 2007 · The aim of these notes is to provide a proof of the Fundamental Theorem of Algebra using concepts that should be familiar to you from your study ...
  5. [5]
    [PDF] Section 4.5. The Real Zeros of a Polynomial Function
    Sep 26, 2021 · A polynomial function (with real coefficients) of odd degree has at least one real zero. Definition. A number M is an upper bound to the ...
  6. [6]
    Algebra - Finding Zeroes of Polynomials - Pauls Online Math Notes
    Nov 16, 2022 · Process for Finding Rational Zeroes · Use the rational root theorem to list all possible rational zeroes of the polynomial P(x).
  7. [7]
    Zeros of Polynomial Functions, Part II - West Texas A&M University
    Mar 15, 2010 · Since there is a sign change between f(-2) = -9 and f(-1) = 12, then according to the Intermediate Value Theorem, there is at least one value ...
  8. [8]
    Newton's method for finding the zeros of a function - UMD Physics
    So you can solve it numerically by guessing a value for x x , seeing what the left side comes to, and adjusting so that it converges to zero. Newton knew ...
  9. [9]
    Solving for zeros with julia - CSI Math
    The bisection algorithm is a simple, iterative procedure for finding a numeric, approximate root of a continuous function over the closed interval \([a,b]\) ...
  10. [10]
    [PDF] Zeros of Polynomials and their Applications to Theory: A Primer
    Oct 22, 2013 · Problems in many different areas of mathematics reduce to questions about the zeros of complex univariate and multivariate polynomials. Recently ...
  11. [11]
    [PDF] Understanding Poles and Zeros 1 System Poles and Zeros - MIT
    N(s)=0, (3) and are defined to be the system zeros, and the pi's are the roots of the equation. D(s)=0, (4)
  12. [12]
    Root -- from Wolfram MathWorld
    The roots (sometimes also called "zeros") of an equation f(x)=0 are the values of x for which the equation is satisfied.
  13. [13]
    Fixed Point -- from Wolfram MathWorld
    Fixed points are also called critical points or equilibrium points. If a variable starts at a point that is not a critical point, it cannot reach a critical ...
  14. [14]
    multiplicity - PlanetMath.org
    Mar 22, 2013 · 1. The zero a of a polynomial f(x) with multiplicity m is a zero of f′(x) with multiplicity m−1. · 2. The ...
  15. [15]
    Simple Root -- from Wolfram MathWorld
    A root having multiplicity n=1 is called a simple root. For example, f(z)=(z-1)(z-2) has a simple root at z_0=1, but g=(z-1)^2 has a root of multiplicity 2 ...
  16. [16]
    Completing the Square – Feature Column - Math Voices
    Nov 1, 2020 · Quadratic equations have been considered and solved since Old Babylonian times (c. 1800 BC), but the quadratic formula students memorize today ...
  17. [17]
    [PDF] NEWTON'S METHOD AND FRACTALS 1. Solving the equation f(x ...
    Solving the equation f(x)=0. Given a function f, finding the solutions of the equation f(x) = 0 is one of the oldest mathematical problems.
  18. [18]
    [PDF] COUNTING ROOTS OF POLYNOMIALS In R[T], a ... - Keith Conrad
    In R[T], a linear polynomial aT + b has exactly one root in R: at + b = 0 if and only if t = −b/a. By the quadratic formula, a quadratic polynomial in R[T] has ...
  19. [19]
    Quadratic Equations - Department of Mathematics at UTSA
    Jan 22, 2022 · A quadratic equation with real coefficients can have either one or two distinct real roots, or two distinct complex roots. In this case the ...
  20. [20]
    An Efficient Method to Find Solutions for Transcendental Equations ...
    Oct 15, 2015 · Equation (14) is a transcendental equation with an infinite number of roots called characteristic values or eigenvalues. The characteristic ...
  21. [21]
    [PDF] Roots of Polynomials
    The fundamental theorem of algebra states that p has n real or complex roots, counting multiplic- ities. If the coefficients a0,a1,...,an are all real, then the ...
  22. [22]
    [PDF] The Complex Numbers - MAT246H1S Lec0101 Burbulla
    How Many Roots Can a Polynomial Have? Theorem 9.3.8: a polynomial p(z) of degree n has at most n roots; if multiplicities are counted, it has exactly n roots.
  23. [23]
    10.1 Roots of Polynomials
    The multiplicity of a root of a polynomial is the number of times it is repeated. From the formula, we can tell the multiplicity by the power on the factor. 🔗.
  24. [24]
  25. [25]
    Vieta's Formulas -- from Wolfram MathWorld
    Vieta's formulas states that the theorem was proved by Viète (also known as Vieta, 1579) for positive roots only, and the general theorem was proved by Girard.
  26. [26]
    Fund theorem of algebra - MacTutor History of Mathematics
    The Fundamental Theorem of Algebra (FTA) states Every polynomial equation of degree n with complex coefficients has n roots in the complex numbers.
  27. [27]
    [PDF] On Gauss's First Proof of the Fundamental Theorem of Algebra - arXiv
    Apr 21, 2017 · The fundamental theorem of algebra is the statement that every nonconstant polynomial with complex coefficients has a root in the complex ...
  28. [28]
    [PDF] Gauss and the First “Rigorous” Proof of the Fundamental Theorem of ...
    Feb 10, 2023 · This is the statement of the theorem that Gauss set out to prove. Namely, that every non-constant polynomial with real coefficients has a ...
  29. [29]
    [PDF] Cauchy, Liouville, and the Fundamental Theorem of Algebra
    The Fundamental Theorem of Algebra. Every non-constant polynomial with real or complex coefficients has a zero in C. Proof. Let p be a non-constant say of ...
  30. [30]
    The Fundamental Theorem of Algebra (with Liouville)
    Jan 17, 2012 · This proof assumes knowledge of complex analysis, specifically the notions of analytic functions and Liouville's Theorem (which we will state below).
  31. [31]
    [PDF] Proof of rational root theorem
    Proof of rational root theorem: Suppose anxn +an-1xn-1 +···+a1x+a0 = 0 with an 6= 0 and ai ∈ Z. Suppose x is a rational root and x = p.
  32. [32]
    Quadratic, cubic and quartic equations - MacTutor
    It is often claimed that the Babylonians (about 1800 BC) were the first to solve quadratic equations. This is an over simplification, for the Babylonians ...
  33. [33]
    [PDF] Cardano and the Solution of the Cubic - Mathematics
    In 1545, Cardano published his book Ars. Magna, the “Great Art.” In it he published the solution to the depressed cubic, with a preface crediting del Ferro with ...
  34. [34]
    [PDF] Abel–Ruffini's Theorem: Complex but Not Complicated
    Nov 5, 2020 · Abel wrote the first complete proof of the theorem (a short proof published in 1824 [11] at his own expense, and a longer, more detailed ...
  35. [35]
    Software components using symbolic computation for problem ...
    Software components using symbolic computation ... Automatic selection of methods for solving stiff and nonstiff systems of ordinary differential equations.
  36. [36]
  37. [37]
    [PDF] Quadratic Convergence of Newton's Method - NYU Computer Science
    The quadratic convergence rate of Newton's Method is not given in A&G, except as Exercise 3.9. However, it's not so obvious how to derive it, even though.
  38. [38]
    Newton-Raphson and Secant Methods
    Newton-Raphson uses a tangent line and derivative, while Secant uses a secant line and two initial points, not the derivative.
  39. [39]
    [PDF] MULTIPLE ROOTS We study two classes of functions for which there ...
    There are two main difficulties with the numerical cal- culation of multiple roots (by which we mean m > 1 in the definition). 1. Methods such as Newton's ...
  40. [40]
    numpy.roots — NumPy v2.3 Manual
    `numpy.roots(p)` returns the roots of a polynomial.
  41. [41]
    fzero - Root of nonlinear function - MATLAB - MathWorks
    The `fzero` function in MATLAB finds a point x where `fun(x)` equals 0, where `fun(x)` changes sign. It cannot find roots like x^2.
  42. [42]
    [PDF] Chapter 1 Affine algebraic geometry
    Definition 1.1 The zero locus of a collection f1,...,fr of elements in k[x1,...,xn] is called an affine algebraic variety or a closed subvariety of An. We ...
  43. [43]
    Algebraic Variety -- from Wolfram MathWorld
    Summary of Algebraic Variety from Wolfram MathWorld
  44. [44]
    [PDF] Bézout's Theorem for curves - UChicago Math
    Aug 26, 2011 · The goal of this paper is to prove Bézout's Theorem for algebraic curves. Along the way, we introduce some basic notions in algebraic geometry.
  45. [45]
    [PDF] The basic theory of varieties in algebraic geometry
    In algebraic geometry, a variety is a set of zeroes of a set of polynomial equations in an arbitrary finite number of variables.
  46. [46]
    Hilbert's Nullstellensatz -- from Wolfram MathWorld
    Becker, T. and Weispfenning, V. "The Hilbert Nullstellensatz." §7.4 in Gröbner Bases: A Computational Approach to Commutative Algebra. New York: Springer-Verlag ...
  47. [47]
    [PDF] 3264 & All That Intersection Theory in Algebraic Geometry
    ... codimension of Y in X, written codimX Y (or simply codimY when X is clear ... transverse intersections). If A; B. X are subvarieties of a smooth variety ...
  48. [48]
    Bruno Buchberger's PhD thesis 1965: An Algorithm for Finding the ...
    Aug 6, 2025 · The Gröbner basis, first introduced by Buchberger in [31] , is a systematic method to solve a system of multivariate polynomial equations ...
  49. [49]
    [PDF] A Historic Introduction to Gröbner Bases - RISC
    Jul 9, 2005 · This paper will be copied and distributed. B. Buchberger. Gröbner-Bases: An Algorithmic Method in Polynomial Ideal Theory. Chapter 6 in: N.K..
  50. [50]
    [PDF] Chapter 7. The Eigenvalue Problem
    The eigenvalues of a matrix A are the roots of the characteristic polynomial det[A − λI]. There is no law that prevents some of these roots from being equal.
  51. [51]
    10.6: Root Locus Plots - Effect of Tuning - Engineering LibreTexts
    Mar 11, 2023 · The lines of a Root locus plot display the poles for values of the control variable(s) from zero to infinity on complex coordinate system. ...
  52. [52]
    General Equilibrium - 2012 Book Archive
    Let p be the vector of prices. Then we can write the equilibrium conditions as (I – B) p = 0, where 0 is the zero vector. Thus, for an equilibrium (other than ...
  53. [53]
    7.6 Population growth and the logistic equation - Active Calculus
    The equation dP/dt = P(0.025 − 0.002P) is an example of the logistic equation, and is the second model for population growth that we will consider. This ...
  54. [54]
    [PDF] Newton's Method (Optimization) and Steepest Gradient Descent (GD)
    Optimization becomes a root finding problem, and we can apply the techniques we've discussed before! In particular, notice that solving ∇f(x)=0 is equivalent to ...