
Symmetric polynomial

In mathematics, a symmetric polynomial is a multivariate polynomial that remains invariant under any permutation of its variables; that is, for a polynomial f(x_1, \dots, x_n) in the polynomial ring over a field F, f is symmetric if f(x_{\sigma(1)}, \dots, x_{\sigma(n)}) = f(x_1, \dots, x_n) for every permutation \sigma in the symmetric group S_n. The set of all symmetric polynomials in n variables forms a subring of the full polynomial ring F[x_1, \dots, x_n], generated by the elementary symmetric polynomials e_k(x_1, \dots, x_n) = \sum_{1 \leq i_1 < \cdots < i_k \leq n} x_{i_1} \cdots x_{i_k} for k = 1, \dots, n, which correspond to the coefficients (up to sign) of the monic polynomial (t - x_1) \cdots (t - x_n). A foundational result, known as the fundamental theorem of symmetric polynomials, states that every symmetric polynomial can be uniquely expressed as a polynomial in these elementary symmetric polynomials, providing a complete set of generators for the ring. Notable examples include the power sum polynomials p_k(x_1, \dots, x_n) = \sum_{i=1}^n x_i^k, which can be recursively expressed in terms of the elementary symmetric polynomials via Newton's identities, and simpler cases like the first elementary symmetric polynomial e_1 = x_1 + \cdots + x_n or the complete homogeneous symmetric polynomials h_k = \sum_{1 \leq i_1 \leq \cdots \leq i_k \leq n} x_{i_1} \cdots x_{i_k}. Symmetric polynomials play a central role in algebraic contexts, such as determining the coefficients of polynomials from their roots without regard to order, and appear in applications to combinatorics, representation theory, and the study of invariants under group actions. Historically, their properties were leveraged by Carl Friedrich Gauss in his second proof of the fundamental theorem of algebra around 1816, using symmetric functions to analyze polynomial roots.

Definition and Properties

Formal Definition

In mathematics, a symmetric polynomial in n variables x_1, \dots, x_n over a field K, typically the rationals \mathbb{Q} or the complexes \mathbb{C}, is a polynomial P \in K[x_1, \dots, x_n] such that P(x_{\sigma(1)}, \dots, x_{\sigma(n)}) = P(x_1, \dots, x_n) for every permutation \sigma in the symmetric group S_n. The set of all symmetric polynomials in n variables over K forms a subring of the polynomial ring K[x_1, \dots, x_n], denoted \Lambda_n(K) or \Sym_n(K). Basic forms of symmetric polynomials include multilinear symmetric polynomials, which are linear in each variable separately, so that each monomial involves a set of distinct variables. For instance, in three variables, a multilinear symmetric polynomial of full degree 3 consists of terms in which each variable appears exactly once in the product, symmetrized over permutations; up to a scalar it is x_1 x_2 x_3.

Basic Properties and Invariance

A symmetric polynomial in the variables x_1, \dots, x_n over a field k is invariant under the action of the symmetric group S_n, which permutes the variables via \sigma \cdot f(x_1, \dots, x_n) = f(x_{\sigma(1)}, \dots, x_{\sigma(n)}) for \sigma \in S_n. The ring of invariants \Lambda_n = k[x_1, \dots, x_n]^{S_n} thus consists precisely of the symmetric polynomials, and this fixed-point subring inherits the structure of a commutative ring from the polynomial ring. The fundamental theorem of symmetric polynomials states that every element of \Lambda_n can be uniquely expressed as a polynomial in the elementary symmetric polynomials e_1, \dots, e_n. This uniqueness follows from the algebraic independence of the e_i and the fact that they generate \Lambda_n as a k-algebra. Newton's identities provide a recursive relation that expresses the power-sum symmetric polynomials in terms of the e_i, facilitating explicit computations of this representation. The ring \Lambda_n is \mathbb{N}-graded by total degree, with the homogeneous component \Lambda_n^d spanned by the symmetric monomials of degree d. The dimension of \Lambda_n^d equals the number of integer partitions of d into at most n parts, reflecting the combinatorial structure of the ring. Since the e_i are homogeneous of degrees 1 through n and form a regular sequence of algebraically independent generators, \Lambda_n is isomorphic to a polynomial ring in n variables with the induced grading, yielding the Hilbert series H_{\Lambda_n}(t) = \prod_{k=1}^n \frac{1}{1 - t^k}. The monomial symmetric polynomials m_\lambda, indexed by partitions \lambda of length at most n, form a \mathbb{Z}-basis for \Lambda_n. This basis arises by summing monomials over S_n-orbits and provides a monomial-like description of the ring's elements.
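The dimension count above can be checked directly: the coefficient of t^d in the Hilbert series \prod_{k=1}^n 1/(1 - t^k) should equal the number of partitions of d into at most n parts. The sketch below is a minimal illustration in plain Python; the helper names are ours, not standard library functions.

```python
def partitions_at_most(d, n, max_part=None):
    """Count partitions of d into at most n parts, each part <= max_part."""
    if max_part is None:
        max_part = d
    if d == 0:
        return 1
    if n == 0:
        return 0
    # choose the largest part p, then split the rest into at most n-1 parts <= p
    return sum(partitions_at_most(d - p, n - 1, p)
               for p in range(1, min(d, max_part) + 1))

def hilbert_coeffs(n, D):
    """Coefficients of prod_{k=1}^n 1/(1 - t^k) up to degree D."""
    c = [1] + [0] * D
    for k in range(1, n + 1):
        for d in range(k, D + 1):   # multiply the truncated series by 1/(1 - t^k)
            c[d] += c[d - k]
    return c

n, D = 3, 8
coeffs = hilbert_coeffs(n, D)
print(all(coeffs[d] == partitions_at_most(d, n) for d in range(D + 1)))  # True
```

The agreement reflects the isomorphism of \Lambda_n with a polynomial ring on generators of degrees 1 through n.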

Examples and Illustrations

Elementary Examples

A fundamental example of a symmetric polynomial in two variables is P(x, y) = x + y. This expression remains unchanged under the permutation that swaps x and y, yielding y + x = x + y. Constant polynomials, such as P(x, y) = 1, are trivially symmetric, as they do not depend on the variables and are thus invariant under any permutation. In three variables, consider P(x, y, z) = x^2 + y^2 + z^2. Any permutation of x, y, z, such as cycling to y, z, x, results in y^2 + z^2 + x^2, which equals the original polynomial. Similarly, the linear sum x + y + z is symmetric for the same reason, with permutations merely reordering the terms without altering the value. To visualize this, expand (x + y + z)^2 = x^2 + y^2 + z^2 + 2(xy + xz + yz), where both the squared terms and the cross terms form symmetric polynomials, illustrating how symmetry preserves structure under variable exchanges. For contrast, the polynomial Q(x, y, z) = xy + z is not symmetric. Applying the transposition permutation that swaps y and z (mapping x \to x, y \to z, z \to y) yields xz + y, which differs from the original unless y = z. A common pitfall is assuming all simple expressions are symmetric; while constants and linear sums are invariant by design, polynomials like xy + z fail permutation invariance because they treat variables asymmetrically.
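Invariance under all of S_n can be tested numerically at sample points, which is a useful sanity check (necessary but not sufficient as a proof). A minimal sketch, with a helper name of our own choosing:

```python
from itertools import permutations

def is_symmetric(f, n, samples):
    """Check f(pt) == f(permuted pt) for every permutation of S_n
    at each sample point; a numeric spot-check, not a proof."""
    return all(
        f(*pt) == f(*(pt[i] for i in sigma))
        for pt in samples
        for sigma in permutations(range(n))
    )

pts = [(1, 2, 3), (0, 5, 7), (2, 2, 9)]
print(is_symmetric(lambda x, y, z: x*x + y*y + z*z, 3, pts))  # True
print(is_symmetric(lambda x, y, z: x*y + z, 3, pts))          # False
```

The second call fails exactly as the text describes: swapping variables in xy + z changes the value at generic points.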

Applications in Simple Equations

Symmetric polynomials play a fundamental role in the analysis and solution of simple polynomial equations, where the roots exhibit symmetric relations under permutation. For a quadratic equation ax^2 + bx + c = 0 with roots \alpha and \beta, the sum \alpha + \beta = -b/a and the product \alpha \beta = c/a are elementary symmetric polynomials in the roots, remaining invariant under interchange of \alpha and \beta. This symmetry allows the roots to be expressed via the quadratic formula \alpha, \beta = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}, where the discriminant b^2 - 4ac = a^2 (\alpha - \beta)^2 is also symmetric. In cubic equations, symmetric polynomials similarly capture relations among the roots. Consider a depressed cubic x^3 + px + q = 0 with roots a, b, c satisfying a + b + c = 0; here, the identity a^3 + b^3 + c^3 - 3abc = 0 holds, linking the power-sum symmetric polynomial a^3 + b^3 + c^3 to the product abc. This relation simplifies the substitution method for solving the cubic, as a^3 = -p a - q (and similarly for b, c), reducing the problem to finding roots invariant under permutation. More generally, for a monic cubic x^3 + a_2 x^2 + a_1 x + a_0 = 0, the coefficients are symmetric polynomials in the roots: a_2 = -(a + b + c), a_1 = ab + bc + ca, and a_0 = -abc. Higher-degree symmetric expressions can be constructed iteratively from lower-degree ones through substitution into elementary symmetric polynomials. For instance, starting with elementary symmetric polynomials e_1 = \sum x_i and e_2 = \sum_{i < j} x_i x_j, one substitutes to express power sums like p_3 = \sum x_i^3 = e_1^3 - 3 e_1 e_2 + 3 e_3, building recursively for increased degrees while preserving symmetry. This process leverages the fundamental theorem of symmetric polynomials, ensuring unique representation and enabling efficient computation via lexicographic reduction. The symmetry inherent in permutable variables significantly simplifies solving systems of polynomial equations.
When variables are interchangeable under a permutation group such as S_n, the solution set inherits the symmetry, so solutions occur in orbits rather than as unrelated points; a partial symmetry acting on a subset of variables collapses redundant solutions into single orbits. Exploiting this invariance can substantially reduce the size of Gröbner basis computations and accelerates numerical solvers, as symmetric systems yield fewer distinct cases to enumerate.
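The identity p_3 = e_1^3 - 3 e_1 e_2 + 3 e_3 and the cubic coefficient relations can be spot-checked at a numeric point; the helper e below, for evaluating elementary symmetric polynomials, is our own minimal sketch.

```python
from itertools import combinations
from math import prod

def e(k, xs):
    """Evaluate the k-th elementary symmetric polynomial at the point xs."""
    return sum(prod(c) for c in combinations(xs, k))

xs = (2, -1, 4)
p3 = sum(x**3 for x in xs)
print(p3 == e(1, xs)**3 - 3 * e(1, xs) * e(2, xs) + 3 * e(3, xs))  # True

# Coefficients of the monic cubic with these roots: x^3 + a2 x^2 + a1 x + a0
a2, a1, a0 = -e(1, xs), e(2, xs), -e(3, xs)
print((a2, a1, a0))  # (-5, 2, 8)
```

Expanding (x - 2)(x + 1)(x - 4) = x^3 - 5x^2 + 2x + 8 confirms the printed coefficients.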

Fundamental Symmetric Polynomials

Elementary Symmetric Polynomials

The elementary symmetric polynomials e_k(x_1, \dots, x_n) for k = 0, 1, \dots, n in n indeterminates x_1, \dots, x_n are defined as the sums of all distinct products of k variables, where e_0 = 1 and for k \geq 1, e_k(x_1, \dots, x_n) = \sum_{1 \leq i_1 < i_2 < \dots < i_k \leq n} x_{i_1} x_{i_2} \cdots x_{i_k}. For small values of k, these take explicit forms such as e_1(x_1, \dots, x_n) = x_1 + x_2 + \dots + x_n and e_2(x_1, \dots, x_n) = \sum_{1 \leq i < j \leq n} x_i x_j. These polynomials generate the ring of symmetric polynomials and are central to the algebraic structure of symmetric functions. The generating function for the elementary symmetric polynomials is given by the product \prod_{i=1}^n (1 + x_i t) = \sum_{k=0}^n e_k(x_1, \dots, x_n) t^k, which expands directly from the definition by collecting terms of each degree in t. This formal power series encapsulates the structure of the e_k and facilitates computations in symmetric function theory. Newton's identities provide recurrence relations connecting the elementary symmetric polynomials e_k to the power sums p_k = x_1^k + \dots + x_n^k. The identities state that for k = 1, 2, \dots, n, k e_k = \sum_{m=1}^k (-1)^{m-1} e_{k-m} p_m, with e_0 = 1 and e_k = 0 for k > n or k < 0. To derive this, consider the generating function E(t) = \prod_{i=1}^n (1 + x_i t) = \sum_{k=0}^n e_k t^k and the logarithmic derivative \frac{E'(t)}{E(t)} = \sum_{i=1}^n \frac{x_i}{1 + x_i t} = \sum_{i=1}^n \sum_{m=1}^\infty (-1)^{m-1} x_i^m t^{m-1} = \sum_{m=1}^\infty (-1)^{m-1} p_m t^{m-1}. On the other hand, E'(t) = \sum_{k=1}^n k e_k t^{k-1}, so \sum_{k=1}^n k e_k t^{k-1} = E(t) \sum_{m=1}^\infty (-1)^{m-1} p_m t^{m-1}. Equating coefficients of t^{k-1} on both sides yields the identity. These relations allow recursive computation of e_k from the p_m and vice versa, highlighting the interplay between bases in the ring of symmetric functions.
A fundamental result is that every symmetric polynomial in n variables admits a unique expression as a polynomial in the elementary symmetric polynomials e_1, \dots, e_n, which are algebraically independent over the base field. This uniqueness theorem establishes the e_k as a universal generating set for the ring of symmetric polynomials.
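The recursive computation of e_k from the power sums via Newton's identities is easy to exercise numerically; the sketch below (our own helpers, assuming integer inputs so the division by k is exact) recovers the e_k of the roots 1, 2, 3.

```python
def power_sums(xs, K):
    """Power sums p_1, ..., p_K of the point xs."""
    return [sum(x**k for x in xs) for k in range(1, K + 1)]

def e_from_p(p):
    """Recover [e_0, e_1, ..., e_K] from power sums [p_1, ..., p_K]
    via k e_k = sum_{m=1}^k (-1)^{m-1} e_{k-m} p_m."""
    e = [1]  # e_0
    for k in range(1, len(p) + 1):
        s = sum((-1)**(m - 1) * e[k - m] * p[m - 1] for m in range(1, k + 1))
        e.append(s // k)   # exact for integer symmetric functions
    return e

xs = (1, 2, 3)
print(e_from_p(power_sums(xs, 3)))  # [1, 6, 11, 6]
```

The output matches (t - 1)(t - 2)(t - 3) = t^3 - 6t^2 + 11t - 6, whose coefficients are \pm e_k.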

Power-Sum Symmetric Polynomials

Power-sum symmetric polynomials are defined for positive integers k as p_k(x_1, \dots, x_n) = \sum_{i=1}^n x_i^k, where x_1, \dots, x_n are indeterminates. These polynomials are symmetric, as permuting the variables leaves them invariant, and they generate the ring of symmetric polynomials over the rationals (though not over the integers, since denominators appear in the conversions). A key feature of the power-sum polynomials is their relation to the elementary symmetric polynomials via Newton's identities, which provide recursive formulas for converting between the two bases. For k \geq 1, the identities state that p_k - e_1 p_{k-1} + e_2 p_{k-2} - \cdots + (-1)^{k-1} e_{k-1} p_1 + (-1)^k k e_k = 0, where e_j denotes the j-th elementary symmetric polynomial (with e_j = 0 for j > n or j < 0). This relation allows expressing either set in terms of the other; for instance, solving for e_k yields k e_k = (-1)^{k-1} \left( p_k - e_1 p_{k-1} + \cdots + (-1)^{k-1} e_{k-1} p_1 \right). These identities, originally derived by Isaac Newton, facilitate computations in algebraic contexts where one basis may be more convenient than the other. In linear algebra, the power sums evaluate the traces of matrix powers when applied to eigenvalues. Specifically, if \lambda_1, \dots, \lambda_n are the eigenvalues of an n \times n matrix A, then p_k(\lambda_1, \dots, \lambda_n) = \operatorname{tr}(A^k) = \sum_{i=1}^n \lambda_i^k. This connection arises from the spectral properties of matrices and is useful for analyzing powers and iterations without explicitly computing eigenvalues. For computational purposes, the power sums can be calculated directly for small n and related to other symmetric polynomials using Newton's identities. Consider n=2 with variables x and y: then p_1 = x + y = e_1, and p_2 = x^2 + y^2 = e_1 p_1 - 2 e_2 = (x+y)^2 - 2xy. For n=3, p_3 = x^3 + y^3 + z^3 = e_1 p_2 - e_2 p_1 + 3 e_3, illustrating the recursive conversion.
These examples highlight how power sums relate to statistical moments: p_k / n is the k-th raw moment of the multiset \{x_1, \dots, x_n\} under uniform weighting.
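The trace identity \operatorname{tr}(A^k) = \sum_i \lambda_i^k can be verified without any linear-algebra library by using a triangular matrix, whose eigenvalues are its diagonal entries. A minimal sketch; the helper names matmul and trace are ours.

```python
def matmul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def trace(A):
    return sum(A[i][i] for i in range(len(A)))

# Upper-triangular matrix: eigenvalues are the diagonal entries 2 and 3.
A = [[2, 1], [0, 3]]
lams = [2, 3]
Ak = A
for k in range(1, 4):
    print(trace(Ak) == sum(l**k for l in lams))  # True for k = 1, 2, 3
    Ak = matmul(Ak, A)
```

Here tr(A) = 5, tr(A^2) = 13, and tr(A^3) = 35, matching p_1, p_2, p_3 of the eigenvalues.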

Advanced Symmetric Polynomials

Complete Homogeneous Symmetric Polynomials

The complete homogeneous symmetric polynomial of degree k in n variables x_1, \dots, x_n, denoted h_k(x_1, \dots, x_n), is defined as the sum of all monomials of total degree k in these variables, where the sum is taken over all non-negative integer solutions to \alpha_1 + \cdots + \alpha_n = k, yielding h_k = \sum_{\alpha_1 + \cdots + \alpha_n = k} x_1^{\alpha_1} \cdots x_n^{\alpha_n}. This construction ensures that h_k is symmetric in the variables, as permuting the x_i merely reorders the terms in the sum without altering its value. By definition, h_0 = 1 and h_k = 0 for k < 0. For a partition \lambda = (\lambda_1, \lambda_2, \dots), the complete homogeneous symmetric polynomial is extended multiplicatively as h_\lambda = h_{\lambda_1} h_{\lambda_2} \cdots. The generating function for the sequence \{h_k\}_{k \geq 0} provides a compact way to encode these polynomials: \sum_{k=0}^\infty h_k t^k = \prod_{i=1}^n \frac{1}{1 - x_i t}. In the case of infinitely many variables, the product extends over i \geq 1. This formal power series identity arises from expanding each geometric series \frac{1}{1 - x_i t} = \sum_{m=0}^\infty (x_i t)^m and collecting terms by total degree in t. The generating function highlights the combinatorial interpretation of h_k: its monomials are indexed by multisets of size k drawn from the n variables, that is, by the ways to distribute k indistinguishable items into n distinguishable bins. Complete homogeneous symmetric polynomials exhibit a duality with the elementary symmetric polynomials \{e_k\}, manifested through their generating functions. Let E(t) = \sum_{k \geq 0} e_k t^k = \prod_{i=1}^n (1 + x_i t); then the relation H(t) E(-t) = 1 holds, where H(t) = \sum_{k \geq 0} h_k t^k. Expanding this identity yields the recurrence \sum_{i=0}^k (-1)^i e_i h_{k-i} = \delta_{k0} for all k \geq 0, with e_0 = h_0 = 1 and the Kronecker delta \delta_{k0} = 1 if k=0 and 0 otherwise.
For k \geq 1, this simplifies to \sum_{i=0}^k (-1)^i e_i h_{k-i} = 0, allowing recursive computation of h_k from the e_i. Specific low-degree cases include h_1 = e_1 and h_2 = e_1 h_1 - e_2 = e_1^2 - e_2. For illustration, consider n=2 variables x_1, x_2: h_1 = x_1 + x_2, and h_2 = x_1^2 + x_1 x_2 + x_2^2. For n=3 variables x_1, x_2, x_3, h_1 = x_1 + x_2 + x_3, h_2 = x_1^2 + x_2^2 + x_3^2 + x_1 x_2 + x_1 x_3 + x_2 x_3, and h_3 = x_1^3 + x_2^3 + x_3^3 + x_1^2 x_2 + x_1^2 x_3 + x_2^2 x_1 + x_2^2 x_3 + x_3^2 x_1 + x_3^2 x_2 + x_1 x_2 x_3. These expansions confirm the symmetry and the inclusion of all possible monomial terms of the specified degree.
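Both the definition of h_k as a sum over multisets and the recurrence \sum_{i=0}^k (-1)^i e_i h_{k-i} = 0 can be exercised numerically; the helpers below are our own minimal sketch using itertools.

```python
from itertools import combinations, combinations_with_replacement
from math import prod

def e(k, xs):
    """Elementary symmetric polynomial: products over k distinct variables."""
    return sum(prod(c) for c in combinations(xs, k))

def h(k, xs):
    """Complete homogeneous: products over multisets of k variables."""
    return sum(prod(c) for c in combinations_with_replacement(xs, k))

xs = (1, 2, 3)
for k in range(1, 5):
    lhs = sum((-1)**i * e(i, xs) * h(k - i, xs) for i in range(k + 1))
    print(lhs == 0)   # True: the e/h recurrence holds for each k >= 1
```

At xs = (1, 2, 3), for example, h_2 evaluates to 25, the sum over the six degree-2 multisets 1, 2, 3, 4, 6, 9.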

Schur Polynomials

Schur polynomials, denoted s_\lambda(x_1, \dots, x_n) where \lambda is a partition with at most n parts, provide a fundamental basis for the ring of symmetric polynomials and are intimately connected to the representation theory of the general linear group \mathrm{GL}_n and the symmetric group S_m, where the characters of irreducible representations are given by these polynomials evaluated at the eigenvalues of group elements. Combinatorially, the Schur polynomial is defined as the sum over all semistandard Young tableaux T of shape \lambda with entries from \{1, \dots, n\}, where each tableau contributes the monomial \prod_{i=1}^n x_i^{m_i(T)} with m_i(T) being the multiplicity of i in T; semistandard tableaux require weakly increasing rows and strictly increasing columns. This definition ensures that s_\lambda is symmetric and homogeneous of degree |\lambda|, forming an orthonormal basis under the Hall scalar product. A key algebraic expression for Schur polynomials is the Jacobi-Trudi identity, which represents s_\lambda as a determinant involving complete homogeneous symmetric polynomials h_k: s_\lambda = \det\left( h_{\lambda_i - i + j} \right)_{1 \leq i,j \leq \ell}, where \ell \geq \ell(\lambda) is sufficiently large, and h_0 = 1 while h_k = 0 for k < 0. There is a dual form using elementary symmetric polynomials e_k: s_\lambda = \det\left( e_{\lambda'_j - j + i} \right)_{1 \leq i,j \leq \ell}, with \lambda' the conjugate partition. This determinant formula facilitates computations and proofs of positivity properties, as the entries are themselves positive sums of monomials. Special cases of Schur polynomials recover other classical bases: for the single-row partition \lambda = (k), s_{(k)} = h_k, the k-th complete homogeneous symmetric polynomial; for the single-column partition \lambda = (1^k), s_{(1^k)} = e_k, the k-th elementary symmetric polynomial.
More generally, every Schur polynomial expresses as a positive integer linear combination of monomial symmetric polynomials, reflecting the combinatorial expansion via tableaux. q-analogs of Schur polynomials have played a central role in the representation theory of quantum groups, particularly through q-Schur algebras—introduced by Dipper and James in 1989—that generalize classical Schur-Weyl duality to quantum settings.
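The two Jacobi-Trudi forms can be compared numerically for the self-conjugate partition \lambda = (2,1), whose h-determinant is h_2 h_1 - h_0 h_3 and whose e-determinant is e_2 e_1 - e_0 e_3. A minimal sketch with our own evaluation helpers:

```python
from itertools import combinations, combinations_with_replacement
from math import prod

def e(k, xs):
    if k < 0:
        return 0
    return sum(prod(c) for c in combinations(xs, k))

def h(k, xs):
    if k < 0:
        return 0
    return sum(prod(c) for c in combinations_with_replacement(xs, k))

def det2(a, b, c, d):
    return a * d - b * c

xs = (1, 2, 3)
# lambda = (2,1), conjugate lambda' = (2,1)
s_h = det2(h(2, xs), h(3, xs), h(0, xs), h(1, xs))  # det(h_{lam_i - i + j})
s_e = det2(e(2, xs), e(0, xs), e(3, xs), e(1, xs))  # det(e_{lam'_j - j + i})
print(s_h, s_e, s_h == s_e)  # 60 60 True
```

Both determinants give the same value of s_{(2,1)} at the point (1, 2, 3), as the duality predicts.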

Monomial Symmetric Polynomials

Monomial symmetric polynomials, denoted m_\lambda for a partition \lambda = (\lambda_1, \lambda_2, \dots, \lambda_l) of an integer k, are defined in n variables x_1, \dots, x_n (with n \geq l) as the sum m_\lambda(x_1, \dots, x_n) = \sum x_{i_1}^{\lambda_1} x_{i_2}^{\lambda_2} \cdots x_{i_l}^{\lambda_l}, where the sum runs over all distinct monomials arising from permutations of the indices i_1, \dots, i_l, i_{l+1}, \dots, i_n with i_{l+1} = \cdots = i_n assigned exponent 0, ensuring each unique monomial appears exactly once with coefficient 1. The collection \{ m_\lambda \}, indexed by all partitions \lambda, forms a \mathbb{Z}-basis for the ring of symmetric polynomials in n variables over \mathbb{Z}, meaning every symmetric polynomial can be uniquely expressed as a finite integer linear combination of these monomials. This basis property holds because the monomial symmetric polynomials are the orbit sums under the action of the symmetric group S_n on the ordinary monomials, and they span the invariants without relations among themselves in the graded components. The number of terms in m_\lambda equals the size of the orbit under S_n, given by n! / \prod_j m_j!, where m_j is the multiplicity of the exponent j in the padded partition (including m_0 = n - l(\lambda)). For example, for \lambda = (2) in n variables, there is one part 2 and n-1 zeros, so m_2 = 1, m_0 = n-1, yielding n! / (1! \cdot (n-1)!) = n terms, corresponding to \sum_{i=1}^n x_i^2. For \lambda = (1,1), m_1 = 2, m_0 = n-2, giving n! / (2! \cdot (n-2)!) = \binom{n}{2} terms, such as \sum_{1 \leq i < j \leq n} x_i x_j. For \lambda = (2,1), m_2 = 1, m_1 = 1, m_0 = n-2, resulting in n! / (1! \cdot 1! \cdot (n-2)!) = n(n-1) terms, like \sum_{i \neq j} x_i^2 x_j. Monomial symmetric polynomials relate to other bases, such as the Schur polynomials s_\mu, via integer transition matrices whose entries are the Kostka numbers K_{\mu \lambda}, which count semistandard Young tableaux of shape \mu and weight \lambda.
Specifically, s_\mu = \sum_\lambda K_{\mu \lambda} m_\lambda, and the inverse expresses m_\lambda in terms of Schur polynomials with signed coefficients. These matrices are unitriangular with respect to dominance order on partitions. The table below shows the transition matrix for expressing Schur polynomials in the monomial basis for degree 2, with partitions ordered lexicographically: (2), (1,1).
            m_{(2)}   m_{(1,1)}
s_{(2)}        1          1
s_{(1,1)}      0          1
This yields s_{(2)} = m_{(2)} + m_{(1,1)} and s_{(1,1)} = m_{(1,1)}, so inversely m_{(2)} = s_{(2)} - s_{(1,1)}, m_{(1,1)} = s_{(1,1)}. For degree 3, with partitions (3), (2,1), (1,1,1), the matrix is
            m_{(3)}   m_{(2,1)}   m_{(1,1,1)}
s_{(3)}        1          1            1
s_{(2,1)}      0          1            2
s_{(1^3)}      0          0            1
Thus, s_{(3)} = m_{(3)} + m_{(2,1)} + m_{(1^3)}, s_{(2,1)} = m_{(2,1)} + 2 m_{(1^3)}, s_{(1^3)} = m_{(1^3)}.
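The rows of these tables can be verified numerically, using the special cases s_{(k)} = h_k and s_{(1^k)} = e_k together with the Jacobi-Trudi value s_{(2,1)} = h_1 h_2 - h_3; the monomial-basis helper m below is our own sketch, deduplicating exponent vectors so each distinct monomial counts once.

```python
from itertools import combinations, combinations_with_replacement, permutations
from math import prod

def e(k, xs):
    return sum(prod(c) for c in combinations(xs, k))

def h(k, xs):
    return sum(prod(c) for c in combinations_with_replacement(xs, k))

def m(lam, xs):
    """Monomial symmetric polynomial m_lam at xs: each distinct monomial once."""
    n = len(xs)
    expos = set()
    for idx in permutations(range(n), len(lam)):
        v = [0] * n
        for i, a in zip(idx, lam):
            v[i] = a
        expos.add(tuple(v))
    return sum(prod(x**a for x, a in zip(xs, v)) for v in expos)

xs = (1, 2, 3)
# degree 2: s_(2) = m_(2) + m_(1,1);  s_(1,1) = m_(1,1)
print(h(2, xs) == m((2,), xs) + m((1, 1), xs))                      # True
print(e(2, xs) == m((1, 1), xs))                                    # True
# degree 3, with s_(2,1) computed via Jacobi-Trudi
s21 = h(1, xs) * h(2, xs) - h(3, xs)
print(h(3, xs) == m((3,), xs) + m((2, 1), xs) + m((1, 1, 1), xs))   # True
print(s21 == m((2, 1), xs) + 2 * m((1, 1, 1), xs))                  # True
```

Each check matches a row of the degree-2 or degree-3 transition table.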

Relations to Univariate Polynomials

Connection to Roots via Vieta's Formulas

Vieta's formulas establish a fundamental connection between the coefficients of a univariate polynomial and symmetric functions of its roots. Consider a monic polynomial p(t) = \prod_{i=1}^n (t - x_i) = t^n + a_{n-1} t^{n-1} + \dots + a_1 t + a_0 of degree n, where the x_i are the roots (possibly complex and with multiplicity). The formulas state that the coefficients are given by a_{n-k} = (-1)^k e_k for k = 1, \dots, n, where e_k denotes the k-th elementary symmetric sum of the roots: e_1 = \sum x_i, e_2 = \sum_{i < j} x_i x_j, and in general e_k = \sum_{1 \leq i_1 < \dots < i_k \leq n} x_{i_1} \dots x_{i_k}, with e_0 = 1. Thus, the expanded form is p(t) = \sum_{k=0}^n (-1)^{n-k} e_{n-k} t^k. A proof sketch follows from the direct expansion of the product \prod (t - x_i). The coefficient of t^{n-k} arises from selecting the constant term -x_i from exactly k factors and t from the remaining n-k factors, yielding (-1)^k times the sum of all distinct products of k roots, which is precisely (-1)^k e_k. This relation holds over any commutative ring, generalizing beyond fields like the complexes. Historically, these formulas were introduced by François Viète in his 1591 treatise In artem analyticam isagoge, marking a key advancement in algebraic notation and polynomial theory during the 16th century. More broadly, Vieta's formulas imply that any symmetric polynomial in the roots x_1, \dots, x_n can be expressed uniquely as a polynomial in the elementary symmetric polynomials e_1, \dots, e_n. This is the content of the fundamental theorem on symmetric polynomials (FTSP), which asserts that the e_k form an algebraic basis for the ring of symmetric polynomials. 
Isaac Newton extended these ideas in the late 17th century, intuiting the FTSP around 1665 and developing relations (now known as Newton's identities) that express power sums of roots in terms of the e_k, as detailed in his later work Arithmetica Universalis (1707); this allowed computation of higher symmetric functions without resolving the roots explicitly. A notable example is the discriminant of the polynomial, defined as \Delta = \prod_{1 \leq i < j \leq n} (x_i - x_j)^2, which measures whether the roots are distinct and is manifestly symmetric in the x_i. By the FTSP, \Delta can be expressed as a polynomial in the e_k (and hence in the coefficients a_i); explicitly, it equals (-1)^{n(n-1)/2} times the resultant of p(t) and its derivative p'(t), up to the leading coefficient factor. This symmetric expression ties directly to the square of the Vandermonde determinant V = \det(x_i^{j-1})_{1 \leq i,j \leq n} = \prod_{1 \leq j < i \leq n} (x_i - x_j), since \Delta = V^2, providing a bridge to determinantal forms in linear algebra.
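Vieta's correspondence a_{n-k} = (-1)^k e_k can be checked by expanding \prod_i (t - x_i) directly; the sketch below is a minimal illustration with our own helper names.

```python
from itertools import combinations
from math import prod

def e(k, xs):
    return sum(prod(c) for c in combinations(xs, k))

def expand_monic(roots):
    """Coefficient list [a_0, ..., a_n] of prod_i (t - r_i), with a_n = 1."""
    c = [1]
    for r in roots:
        nc = [0] * (len(c) + 1)
        for i, ci in enumerate(c):
            nc[i + 1] += ci       # contribution of t * c_i t^i
            nc[i] -= r * ci       # contribution of -r * c_i t^i
        c = nc
    return c

roots = (1, 2, 3)
c = expand_monic(roots)           # [-6, 11, -6, 1]: t^3 - 6t^2 + 11t - 6
n = len(roots)
print(all(c[n - k] == (-1)**k * e(k, roots) for k in range(n + 1)))  # True
```

The expanded coefficients agree term by term with the signed elementary symmetric polynomials of the roots.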

Generating Functions for Symmetric Polynomials

Generating functions provide a powerful tool for encoding and studying families of symmetric polynomials, allowing for compact representations and facilitating computations and identities across the theory of symmetric functions. These functions typically take the form of products or series over the variables x_1, x_2, \dots, x_n, where the coefficients of powers of a formal variable t yield the symmetric polynomials themselves. Such generating functions are foundational in algebraic combinatorics and have been systematically developed since the mid-20th century. The generating function for the elementary symmetric polynomials e_k in n variables is the finite product \prod_{i=1}^n (1 + x_i t) = \sum_{k=0}^n e_k(x_1, \dots, x_n) t^k, where e_0 = 1 and e_k = 0 for k > n. This expression arises naturally from the expansion of the product, with each term corresponding to a selection of distinct variables, reflecting the combinatorial interpretation of e_k as the sum of products of k distinct variables. This form is central to symmetric function theory and has been a standard tool since the early works on symmetric functions. For the complete homogeneous symmetric polynomials h_k, the generating function is the infinite series obtained from the reciprocal form \prod_{i=1}^n \frac{1}{1 - x_i t} = \sum_{k=0}^\infty h_k(x_1, \dots, x_n) t^k, valid as a formal power series, where h_k sums over all monomials of total degree k with nonnegative exponents. This product expands by including all possible multisets of variables, capturing the unrestricted partitions underlying h_k. The duality with the elementary generating function highlights key relations in the ring of symmetric polynomials. The power-sum symmetric polynomials p_k = \sum_{i=1}^n x_i^k admit a logarithmic generating function, specifically \sum_{k=1}^\infty \frac{p_k(x_1, \dots, x_n)}{k} t^k = -\sum_{i=1}^n \log(1 - x_i t), which follows from the series expansion -\log(1 - x_i t) = \sum_{m \geq 1} x_i^m t^m / m, summed over the variables.
This connection is instrumental for deriving Newton's identities relating power sums to other bases. In the limit of infinitely many variables, this form extends to the full ring of symmetric functions. In the context of symmetric functions with infinitely many variables, the plethystic exponential provides an advanced generating mechanism: for a symmetric function f, \text{PE}[f] = \exp\left( \sum_{k=1}^\infty \frac{1}{k} f[p_k] \right), where f[p_k] denotes the plethystic substitution x_i \mapsto x_i^k (and t \mapsto t^k for any formal parameter). Applied to f = p_1 t = (x_1 + x_2 + \cdots) t, it yields the generating function for the complete homogeneous symmetric functions: \text{PE}[p_1 t] = \sum_{k=0}^\infty h_k t^k = \prod_{i=1}^\infty \frac{1}{1 - x_i t}. This notation, introduced to handle compositions and substitutions in the ring \Lambda of symmetric functions, has seen extensive use in 21st-century algebraic combinatorics, including enumeration of combinatorial statistics and Hilbert series in invariant theory.
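The logarithmic generating function can be tested with exact rational arithmetic: the degree-k coefficient of \log H(t) should equal p_k / k. The sketch below extracts logarithm coefficients from the recurrence k c_k = \sum_{j=1}^k j l_j c_{k-j} (which follows from H' = H L'); the helper names are ours.

```python
from fractions import Fraction
from itertools import combinations_with_replacement
from math import prod

def h(k, xs):
    return sum(prod(c) for c in combinations_with_replacement(xs, k))

def log_coeffs(c, D):
    """Coefficients l_1..l_D of log(sum_k c[k] t^k), assuming c[0] == 1."""
    l = [Fraction(0)] * (D + 1)
    for k in range(1, D + 1):
        s = sum((j * l[j] * c[k - j] for j in range(1, k)), Fraction(0))
        l[k] = (k * c[k] - s) / k
    return l[1:]

xs = (1, 2)
D = 4
H = [h(k, xs) for k in range(D + 1)]       # [1, 3, 7, 15, 31]
L = log_coeffs(H, D)
p = [sum(x**k for x in xs) for k in range(1, D + 1)]
print(all(L[k - 1] == Fraction(p[k - 1], k) for k in range(1, D + 1)))  # True
```

At xs = (1, 2) this recovers l_1 = 3, l_2 = 5/2, l_3 = 3, l_4 = 17/4, matching p_k / k exactly.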

Algebraic Structure

Ring of Symmetric Polynomials

The ring of symmetric polynomials in n variables over a field K, denoted \Sym_n(K), is the subring of the polynomial ring K[x_1, \dots, x_n] consisting of all polynomials invariant under the action of the symmetric group S_n by permuting the variables. This ring captures the algebraic structure of symmetric functions and serves as a fundamental object in algebraic combinatorics. A key result is the structure theorem, which states that \Sym_n(K) \cong K[e_1, \dots, e_n] as K-algebras, where e_i denotes the i-th elementary symmetric polynomial, which is homogeneous of degree i. This isomorphism implies that the elementary symmetric polynomials freely generate \Sym_n(K) and provide a universal characterization of the ring. To see this, any symmetric polynomial can be expressed uniquely as a polynomial in the e_i via recursive relations, such as those derived from the generating function \prod_{j=1}^n (1 + x_j t) = \sum_{i=0}^n e_i t^i, ensuring no relations among the generators. In the broader context of invariant theory, the finite generation of \Sym_n(K) follows from Hilbert's finiteness theorem, which asserts that the ring of invariants under the action of a linearly reductive group on a polynomial ring over a field is finitely generated as a K-algebra. For the specific case of S_n, the proof outline proceeds by first noting that the monomials form a basis for K[x_1, \dots, x_n], and averaging over S_n yields symmetric polynomials; then, using the fact that in characteristic zero the power sums p_k = \sum x_i^k (for k \leq n) generate the ring and relate to the e_i via Newton's identities, one establishes the finite set \{e_1, \dots, e_n\} as generators without syzygies. This resolves the finiteness question for the prototypical example of S_n among finite group actions. As a graded ring, \Sym_n(K) inherits the total degree grading from K[x_1, \dots, x_n], and the degree-d component is spanned by the monomial symmetric polynomials m_\lambda for partitions \lambda \vdash d with at most n parts.
The Hilbert series, which encodes the dimensions of these graded pieces, is given by H_{\Sym_n(K)}(t) = \prod_{i=1}^n \frac{1}{1 - t^i}, reflecting the free polynomial structure in generators of degrees 1 through n. This series counts the monomials in the e_i by total degree, aligning with the partition function restricted to at most n parts. Computational implementations of the ring of symmetric polynomials have advanced significantly since 2010, particularly in systems such as SageMath, which provides comprehensive support for constructing \Sym_n(K), expanding in bases such as the elementary, monomial, or Schur bases, and performing change-of-basis operations via built-in classes for symmetric functions. For instance, such a system enables efficient computation of the structure theorem by expressing arbitrary symmetric polynomials in terms of the e_i, leveraging algorithms for symmetric reduction and generating functions.

Quotient Rings and Representations

The coinvariant algebra of the symmetric group S_n acting on the polynomial ring k[x_1, \dots, x_n] over a field k of characteristic zero is the quotient k[x_1, \dots, x_n] / I, where I is the ideal generated by all symmetric polynomials of positive degree. This quotient is finite-dimensional as a k-vector space, with dimension exactly n!, which equals the order of S_n. A monomial basis for this algebra consists of the elements x_1^{a_1} x_2^{a_2} \cdots x_n^{a_n} where 0 \leq a_i < i for each i. As an S_n-module, the coinvariant algebra realizes the regular representation of S_n, meaning it decomposes as a direct sum of all irreducible representations with multiplicities equal to their dimensions. The irreducible representations of S_n over fields of characteristic zero are the Specht modules S^\lambda, indexed by partitions \lambda \vdash n. Thus, the coinvariant algebra is isomorphic to \bigoplus_{\lambda \vdash n} S^\lambda \otimes (S^\lambda)^*, where the multiplicity of each S^\lambda is \dim S^\lambda. The characters of these Specht modules correspond, under the Frobenius characteristic map, to the Schur polynomials s_\lambda, linking the representation-theoretic decomposition to the theory of symmetric functions. The coinvariant algebra is a graded Artinian ring, as its finite dimension implies that every descending chain of ideals stabilizes. In the graded setting, the graded Nakayama lemma applies to determine minimal generators of graded ideals in such quotients; for S_n-stable ideals generated by symmetric polynomials, the degrees of minimal generators are read off from a vector-space basis of the quotient of the ideal by its product with the maximal graded ideal.
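The n! count for the monomial basis is immediate to verify, since the exponent constraints 0 \leq a_i < i allow 1 \cdot 2 \cdots n exponent vectors; a minimal sketch (the name artin_basis is ours):

```python
from itertools import product
from math import factorial

def artin_basis(n):
    """Exponent vectors (a_1, ..., a_n) with 0 <= a_i < i, the monomial
    basis of the coinvariant algebra described above."""
    return list(product(*(range(i) for i in range(1, n + 1))))

for n in range(1, 6):
    print(len(artin_basis(n)) == factorial(n))  # True for each n
```

For n = 3 the six vectors are (0,0,0), (0,1,0), (0,0,1), (0,1,1), (0,0,2), (0,1,2), matching |S_3| = 6.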

Applications

In Galois Theory

In the development of Galois theory, symmetric polynomials play a crucial role in constructing resolvents, which are auxiliary polynomials designed to probe the structure of the Galois group of a given polynomial. A resolvent for a polynomial f(X) with roots a_1, \dots, a_n is defined as R(F, f)(X) = \prod (X - s_F(a_1, \dots, a_n)), where the product runs over a set of coset representatives s of a subgroup S of the symmetric group S_n, and F is a polynomial in the roots, often taken to be invariant under S so that the coefficients of the resolvent lie in the base field. The coefficients of such resolvents are symmetric polynomials in the a_i, and thus, by the fundamental theorem on symmetric polynomials, they can be expressed as polynomials in the elementary symmetric sums, which are the coefficients of f(X). This construction allows the factorization pattern of the resolvent over the base field to reveal information about the action of the Galois group on the cosets, effectively detecting whether the Galois group is contained in a conjugate of S or has specific cycle types. Historically, Galois introduced resolvents in his 1830s memoir on the conditions for solvability by radicals, building on earlier work by Lagrange on permutations of roots. These tools provided a systematic way to analyze the Galois group without explicitly computing the splitting field, by using symmetric invariants to generate intermediate extensions corresponding to subgroups. For instance, linear resolvents, where F is the sum of a subset of roots, partition the roots into orbits under the group action, enabling the identification of transitive subgroups through the degrees of irreducible factors. This approach underpins effective algorithms in computational Galois theory for determining group structures modulo primes or via resolvents. The discriminant exemplifies a key symmetric polynomial in Galois theory, defined for a monic polynomial f(X) = \prod (X - r_i) of degree n as \Delta_f = \prod_{i < j} (r_j - r_i)^2, which is symmetric in the r_i and thus resides in the base field K.
Adjoining \sqrt{\Delta_f} to K yields an extension of degree at most 2, and the Galois group of f over K is contained in the alternating group A_n if and only if \Delta_f is a square in K; otherwise, the group contains odd permutations. This adjunction distinguishes even and odd permutations in the Galois group, providing a criterion for whether the group lies in A_n without resolving the full extension. For separable irreducible cubics, the splitting field is precisely K(r, \sqrt{\Delta_f}) for a root r, illustrating how the discriminant bridges symmetric functions and field-theoretic structure. The unsolvability of the general quintic equation, established by the Abel–Ruffini theorem in the 1820s, finds its modern explanation in Galois theory through the non-solvability of the alternating group A_5. For the general quintic X^5 + s_1 X^4 + \cdots + s_5 with indeterminate coefficients s_i (up to sign, the elementary symmetric polynomials of the roots), the Galois group over the function field \mathbb{Q}(s_1, \dots, s_5) is S_5, whose sole proper nontrivial normal subgroup is A_5, a simple non-abelian group of order 60 with no abelian composition factors. Solvability by radicals requires the Galois group to be solvable, meaning it admits a composition series with abelian quotients, but the simplicity of A_5 precludes this. Resolvents invariant only under proper subgroups (not symmetric under the full S_5) are employed to distinguish S_5 from smaller groups; for example, the sextic resolvent associated with the metacyclic subgroup F_{20} detects whether the group is solvable, while a non-square discriminant places the group outside A_5. Even when the Galois group is A_5 (as for certain quintics with square discriminant), its non-abelian simplicity ensures unsolvability.
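The parity criterion can be checked computationally. The following sketch uses sympy's `discriminant`; the helper name `in_alternating_group` is our own:

```python
from sympy import discriminant, sqrt, symbols

x = symbols("x")

def in_alternating_group(f):
    """True iff disc(f) is a nonzero rational square, i.e. the
    Galois group of f over Q lies inside the alternating group."""
    d = discriminant(f, x)
    return d != 0 and sqrt(d).is_rational

# x^3 - 3x + 1: discriminant -4(-3)^3 - 27(1)^2 = 81 = 9^2,
# so the Galois group is the cyclic group C_3 inside A_3.
assert in_alternating_group(x**3 - 3*x + 1)

# x^3 - 2: discriminant -27(-2)^2 = -108, not a square,
# so the Galois group is the full S_3.
assert not in_alternating_group(x**3 - 2)
```

Here disc(X^3 + pX + q) = -4p^3 - 27q^2, the expression of \Delta_f in the coefficients guaranteed by the fundamental theorem on symmetric polynomials.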

In Representation Theory

In the representation theory of the symmetric group S_n, irreducible representations are labeled by partitions \lambda of n, and symmetric polynomials play a central role through the Frobenius characteristic map, which establishes an isomorphism between the ring of class functions on S_n and the ring of symmetric functions. Specifically, this map sends the irreducible character \chi^\lambda corresponding to \lambda to the Schur function s_\lambda, allowing characters to be expressed in terms of symmetric function bases. The Frobenius character formula further computes \chi^\lambda(\mu), the value of the character on a conjugacy class of cycle type \mu, as \chi^\lambda(\mu) = \langle s_\lambda, p_\mu \rangle, where p_\mu are the power sum symmetric functions and \langle \cdot, \cdot \rangle denotes the Hall scalar product. This connection bridges combinatorial representation theory with the algebra of symmetric polynomials, enabling the use of generating functions and recursions from symmetric function theory to derive character tables. The Hall scalar product on the ring of symmetric functions, defined by \langle p_\lambda, p_\mu \rangle = \delta_{\lambda\mu} z_\lambda, where z_\lambda is the order of the centralizer of a permutation of cycle type \lambda, and extended linearly, induces an inner product under which the Schur functions form an orthonormal basis: \langle s_\lambda, s_\mu \rangle = \delta_{\lambda\mu}. This mirrors the orthogonality of irreducible characters of S_n with respect to the group inner product \langle \chi^\lambda, \chi^\mu \rangle = \delta_{\lambda\mu}, and the Frobenius map preserves this structure, making it an isometry between the two settings. Consequently, expansions of symmetric functions in the Schur basis yield multiplicity coefficients that correspond to decomposition numbers in representations of S_n, facilitating computations in both algebraic and geometric contexts.
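For small n the Frobenius formula can be applied directly in the equivalent coefficient form: \chi^\lambda(\mu) is the coefficient of x^{\lambda + \delta} in a_\delta \cdot p_\mu, where a_\delta = \prod_{i<j}(x_i - x_j) and \delta = (n-1, \dots, 1, 0). A sketch for S_3 (function names are our own):

```python
from itertools import combinations
from sympy import Mul, Poly, expand, symbols

n = 3
xs = symbols(f"x1:{n + 1}")

# a_delta = prod_{i<j} (x_i - x_j), the alternant for delta.
a_delta = Mul(*[xs[i] - xs[j] for i, j in combinations(range(n), 2)])

def power_sum(mu):
    # p_mu = p_{mu_1} p_{mu_2} ...
    return Mul(*[sum(x**k for x in xs) for k in mu])

def character(lam, mu):
    """chi^lam(mu) = coefficient of x^(lam + delta) in a_delta * p_mu."""
    lam = list(lam) + [0] * (n - len(lam))
    exps = [lam[i] + (n - 1 - i) for i in range(n)]
    monom = Mul(*[x**e for x, e in zip(xs, exps)])
    return Poly(expand(a_delta * power_sum(mu)), *xs).coeff_monomial(monom)

# Character of the standard representation S^(2,1) of S_3:
assert character((2, 1), (1, 1, 1)) == 2   # identity: the dimension
assert character((2, 1), (2, 1)) == 0      # transpositions
assert character((2, 1), (3,)) == -1       # 3-cycles
```

The three values (2, 0, -1) reproduce the row of the character table of S_3 belonging to the two-dimensional standard representation.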
Young symmetrizers provide an explicit construction of the irreducible Specht modules S^\lambda using actions on tensor powers, incorporating symmetric and alternating operations. For a standard Young tableau t of shape \lambda, the Young symmetrizer is the element c_t = \sum_{\sigma \in R_t} \sigma \cdot \sum_{\tau \in C_t} \operatorname{sgn}(\tau) \tau in the group algebra \mathbb{C}[S_n], where R_t is the row stabilizer and C_t the column stabilizer. The left ideal \mathbb{C}[S_n] c_t is an isomorphic copy of S^\lambda, with the row symmetrization enforcing invariance under row permutations (analogous to symmetric polynomials as invariants) and the column antisymmetrization introducing signs (related to alternating polynomials like the Vandermonde determinant). This construction is idempotent up to scalar, and applying c_t to the tensor power V^{\otimes n} (for a vector space V) yields the corresponding Schur module, intertwining the S_n-action with the GL(V)-action, whose irreducible characters are the Schur polynomials, via Schur-Weyl duality. In physics, representations of the symmetric group classify the exchange symmetries of identical particles in quantum many-body systems, extending beyond standard Bose-Einstein (fully symmetric) and Fermi-Dirac (fully antisymmetric) statistics to parastatistics using higher-dimensional irreducible representations. The symmetrization postulate requires the n-particle state space to decompose into isotypic components under S_n, with the choice of representation determining the allowed statistics; for instance, para-bosons of order p correspond to symmetry types whose Young diagrams have at most p rows. Recent advances, such as exact solutions in interacting periodic chains and realizations in low-dimensional materials, demonstrate non-abelian parastatistics inequivalent to bosons or fermions.
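The quasi-idempotence c_t^2 = (n!/\dim S^\lambda)\, c_t can be verified in a toy group-algebra implementation. Below is a sketch for n = 3 and the tableau of shape (2,1) with rows {1,2} and {3} (0-indexed as {0,1} and {2}); representing algebra elements as dicts over sympy permutations is our own choice:

```python
from collections import defaultdict
from sympy.combinatorics import Permutation

def mult(a, b):
    """Product in the group algebra; elements are {Permutation: coeff}."""
    out = defaultdict(int)
    for p, cp in a.items():
        for q, cq in b.items():
            out[p * q] += cp * cq
    return {p: c for p, c in out.items() if c != 0}

e = Permutation([0, 1, 2])   # identity in S_3
s = Permutation([1, 0, 2])   # row transposition, swaps entries 0 and 1
t = Permutation([2, 1, 0])   # column transposition, swaps entries 0 and 2

row_sym = {e: 1, s: 1}       # sum over the row stabilizer R_t
col_sgn = {e: 1, t: -1}      # signed sum over the column stabilizer C_t
c_t = mult(row_sym, col_sgn)  # Young symmetrizer for shape (2,1)

# c_t^2 = (n!/dim) c_t with n! = 6 and dim S^(2,1) = 2, scalar 3.
assert mult(c_t, c_t) == {p: 3 * c for p, c in c_t.items()}
```

The scalar n!/\dim S^\lambda is exactly the normalization that makes c_t/(n!/\dim S^\lambda) a genuine idempotent of \mathbb{C}[S_n].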

Alternating Polynomials

Alternating polynomials are multivariate polynomials that transform under the action of the symmetric group S_n by the sign of the permutation, specifically Q(x_{\sigma(1)}, \dots, x_{\sigma(n)}) = \operatorname{sgn}(\sigma) Q(x_1, \dots, x_n) for all \sigma \in S_n, assuming the base field has characteristic not equal to 2. This condition means that alternating polynomials coincide with anti-symmetric polynomials, which change sign under odd permutations and are invariant under even ones. They form a module over the ring of symmetric polynomials and play a key role in the representation theory of S_n, particularly in the expression of Schur functions as ratios of alternants. The Vandermonde determinant provides the fundamental example of a nonzero alternating polynomial of lowest degree: \Delta(x_1, \dots, x_n) = \prod_{1 \leq i < j \leq n} (x_j - x_i). This polynomial is homogeneous of degree \binom{n}{2} and generates the alternating polynomials as a free module of rank one over the symmetric polynomials; specifically, every alternating polynomial factors uniquely as a symmetric polynomial times \Delta. Equivalently, the space of alternating polynomials can be viewed as the tensor product of the ring of symmetric polynomials with the one-dimensional sign representation of S_n, realized concretely via multiplication by \Delta. The antisymmetrization operator projects arbitrary polynomials onto the space of alternating polynomials. For a polynomial f \in \mathbb{Q}[x_1, \dots, x_n], it is defined as \operatorname{Alt}(f) = \frac{1}{n!} \sum_{\sigma \in S_n} \operatorname{sgn}(\sigma) \, \sigma(f), where \sigma(f) denotes the action of \sigma on f by permuting the variables. This operator is the Reynolds operator for the sign character, is idempotent, and produces an alternating polynomial from any input; applied to a monomial, it yields zero unless the exponents are distinct, in which case the result is a symmetric-polynomial multiple of \Delta. For instance, \operatorname{Alt}(x_1^{n-1} x_2^{n-2} \cdots x_n^0) = \pm\frac{1}{n!}\Delta.
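A direct implementation of \operatorname{Alt} (a sympy-based sketch; the parity helper `sign` is our own) confirms both the annihilation of monomials with repeated exponents and the staircase example for n = 3:

```python
from itertools import permutations
from math import factorial

from sympy import Rational, cancel, expand, symbols

xs = symbols("x1 x2 x3")
n = len(xs)

def sign(p):
    # parity of a permutation via its inversion count
    inv = sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
    return -1 if inv % 2 else 1

def alt(f):
    """Alt(f) = (1/n!) * sum over sigma of sgn(sigma) * sigma(f)."""
    total = sum(
        sign(p) * f.subs({xs[i]: xs[p[i]] for i in range(n)},
                         simultaneous=True)
        for p in permutations(range(n))
    )
    return expand(Rational(1, factorial(n)) * total)

# Delta = (x2 - x1)(x3 - x1)(x3 - x2)
delta = expand((xs[1] - xs[0]) * (xs[2] - xs[0]) * (xs[2] - xs[1]))

# Repeated exponents are annihilated:
assert alt(xs[0] * xs[1]) == 0

# The staircase monomial x1^2 x2 maps to +-Delta / 3!:
g = alt(xs[0]**2 * xs[1])
assert abs(cancel(g / delta)) == Rational(1, 6)
```

Applying `alt` twice returns the same polynomial, reflecting the idempotence of the Reynolds operator.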

Antisymmetric Polynomials

Antisymmetric polynomials of rank k in n variables generalize the notion of alternating polynomials by incorporating antisymmetry across k indices, corresponding to elements in the k-th exterior power \Lambda^k V of the vector space V spanned by the variables x_1, \dots, x_n. Specifically, they are represented as alternating multilinear forms on k copies of V, satisfying B(v_{\sigma(1)}, \dots, v_{\sigma(k)}) = \operatorname{sgn}(\sigma) B(v_1, \dots, v_k) for any \sigma \in S_k, where B is the multilinear form. This structure arises in the exterior algebra, which quotients the tensor algebra by the relations enforcing antisymmetry, such as v \wedge v = 0 for v \in V. In the full case where k = n, these reduce to the standard alternating polynomials, which change sign under odd permutations of all n variables. A key coordinate system on the decomposable antisymmetric tensors in \Lambda^k V is provided by the Plücker coordinates, which parametrize points in the Grassmannian \operatorname{Gr}(k, n). These coordinates p_I, for increasing k-subsets I \subseteq \{1, \dots, n\}, are defined as the determinants of the k \times k submatrices extracted from a matrix whose rows span a k-dimensional subspace, embedding the Grassmannian into the projective space \mathbb{P}(\Lambda^k V). The antisymmetry is inherent in these determinants, as swapping two rows or columns negates the value, reflecting the alternating property. The relations among Plücker coordinates, known as Grassmann-Plücker relations, are quadratic polynomials that define the embedding and ensure that the coordinates arise from actual subspaces. The connection to determinants extends the classical Vandermonde determinant, which serves as the prototypical antisymmetric polynomial in k variables: \det\left( (x_i^{j-1})_{1 \leq i,j \leq k} \right) = \prod_{1 \leq i < j \leq k} (x_j - x_i). This expression factorizes into products of differences, capturing the antisymmetry.
For multiple rows, generalized Vandermonde determinants arise in contexts like confluent Vandermonde matrices or multipoint evaluations, where additional rows correspond to higher-order derivative terms or repeated variables, providing bases for higher-degree antisymmetric invariants. Such determinants underpin the coordinate functions on \Lambda^k V, linking back to Plücker embeddings. Recent developments have highlighted connections between antisymmetric polynomials and objects in geometric representation theory, particularly through correspondences between affine Springer fibers and sheaves. Antisymmetric polynomials, as W-antiinvariants under the Weyl group action, form explicit bases for computing equivariant Borel-Moore homology of affine Springer fibers and Hilbert schemes, with isomorphisms like H_*( \widetilde{\operatorname{Sp}}^\gamma )^\varepsilon \cong H_*(\operatorname{Sp}^\gamma_0)[-2N] tying them to sheaf-theoretic structures on Coulomb branches. These links integrate symmetric-function techniques with geometric representation theory, enabling computations of graded modules and partial resolutions of varieties associated to Lie groups like \mathrm{GL}_n.
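Both the antisymmetry of Plücker coordinates and the single Grassmann-Plücker relation for \operatorname{Gr}(2,4) can be verified numerically (a sketch using sympy; the sample matrix is an arbitrary choice):

```python
from itertools import combinations
from sympy import Matrix

# Rows span a 2-dimensional subspace of Q^4.
M = Matrix([[1, 2, 3, 4],
            [0, 1, 1, 2]])
k, n = M.shape

# Plücker coordinates: maximal minors indexed by k-subsets of columns.
p = {I: M.extract(list(range(k)), list(I)).det()
     for I in combinations(range(n), k)}

# Swapping the two rows negates every coordinate (antisymmetry).
swapped = M.extract([1, 0], list(range(n)))
assert all(swapped.extract(list(range(k)), list(I)).det() == -p[I]
           for I in p)

# Grassmann-Plücker relation for Gr(2,4):
# p_{12} p_{34} - p_{13} p_{24} + p_{14} p_{23} = 0
rel = (p[(0, 1)] * p[(2, 3)]
       - p[(0, 2)] * p[(1, 3)]
       + p[(0, 3)] * p[(1, 2)])
assert rel == 0
```

Any 2x4 matrix of rank 2 satisfies the same relation; it cuts out the image of \operatorname{Gr}(2,4) inside \mathbb{P}^5 = \mathbb{P}(\Lambda^2 V).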

  51. [51]