
Multilinear polynomial

A multilinear polynomial is a multivariate polynomial over a field (such as the real or complex numbers) that is linear in each of its variables separately, meaning that in every term, each variable appears with exponent at most 1 and no variable is repeated. This distinguishes multilinear polynomials from general multivariate polynomials, where higher powers of individual variables are permitted. Formally, for variables x_1, \dots, x_n, a multilinear polynomial can be written as \sum_{S \subseteq \{1, \dots, n\}} c_S \prod_{i \in S} x_i, where the c_S are coefficients and the sum ranges over subsets S of \{1, \dots, n\}, ensuring square-free monomials. Multilinear polynomials in n variables form a vector space of dimension 2^n, as each square-free monomial corresponds to a basis element, and they uniquely represent any function from \{0,1\}^n to the field when evaluated on the boolean hypercube. This property makes them foundational in the analysis of boolean functions, where the Fourier expansion over the hypercube uses only multilinear terms. Evaluation of a multilinear polynomial at a point can be computed using techniques like the sum-over-subsets method, with O(2^n) terms in the worst case but optimizations for sparse representations. In applications, multilinear polynomials arise in algebra, combinatorics, analysis, and optimization. They also play a role in ring theory, where surjectivity properties hold under certain algebraic conditions, such as in unital algebras with a surjective inner derivation.

Definition and Fundamentals

Formal Definition

A multilinear polynomial in n variables over a field \mathbb{F} is a polynomial in which each variable appears with degree at most 1 in every term. Its general form is p(x_1, \dots, x_n) = \sum_{S \subseteq \{1, \dots, n\}} c_S \prod_{i \in S} x_i, where the c_S \in \mathbb{F} are coefficients. Multilinearity means that the polynomial is linear in each x_i when the other variables are held fixed. A multilinear polynomial is homogeneous of degree k (or k-linear) if every term involves exactly k variables, so the sum is restricted to subsets S with |S| = k. Unlike general multivariate polynomials, multilinear polynomials contain no terms with higher powers such as x_i^2 or x_i x_j^2.
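The subset-indexed form above translates directly into a sparse representation. The following sketch (the dictionary-of-frozensets encoding is illustrative, not a standard convention) stores only the nonzero coefficients c_S and evaluates the polynomial by the sum-over-subsets method:

```python
# Represent p(x_1, ..., x_n) = sum_S c_S * prod_{i in S} x_i as a dict
# mapping frozenset(S) -> c_S, with 0-indexed variables.
# Example: p(x0, x1, x2) = 3 + 2*x0*x2 - x1.
coeffs = {frozenset(): 3.0, frozenset({0, 2}): 2.0, frozenset({1}): -1.0}

def evaluate(coeffs, x):
    """Sum-over-subsets evaluation: cost O(n * number of stored terms)."""
    total = 0.0
    for S, c in coeffs.items():
        term = c
        for i in S:
            term *= x[i]
        total += term
    return total

print(evaluate(coeffs, [1.0, 4.0, 0.5]))  # 3 + 2*1*0.5 - 4 = 0.0
```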

Key Characteristics

Multilinear polynomials build upon foundational concepts in algebra, particularly univariate polynomials over fields such as the real numbers \mathbb{R} or the complex numbers \mathbb{C}, which are finite sums of monomials with non-negative integer exponents and coefficients from the field. Extending this to multivariate polynomials involves expressions in several indeterminates, allowing interactions among variables through products, but with restrictions that define multilinearity. This background matters because multilinear polynomials live inside the polynomial ring K[x_1, \dots, x_n] over a field K, inheriting operations like addition and scalar multiplication while imposing a degree constraint on each variable.

A defining characteristic of multilinear polynomials is their structure as sums of multilinear monomials, where each monomial is a product of distinct variables raised to the first power only, so no squared or higher power of any variable appears. In the polynomial ring K[x_1, \dots, x_n], this means every monomial has the form c \cdot x_{i_1} x_{i_2} \cdots x_{i_k} for distinct indices i_1, \dots, i_k and c \in K, with k \leq n. This linearity in each variable separately distinguishes multilinear polynomials from general multivariate ones, which may include higher-degree terms like x_i^2, and aligns them closely with multilinear forms in multilinear algebra. The absence of higher powers per variable simplifies analysis in contexts such as factoring and identity testing, as the polynomials exhibit controlled growth and sparsity relative to full-degree counterparts.

The total degree of a multilinear polynomial, defined as the maximum degree across all its monomials, is inherently bounded by the number of variables n, since no monomial can exceed degree n without repeating a variable, which is prohibited. This bound implies that a multilinear polynomial has total degree at most n, providing a natural cap that facilitates computational bounds in algorithms evaluating or factoring such polynomials. For instance, in n variables, the space of multilinear polynomials has dimension 2^n, spanned by the \binom{n}{k} square-free monomials of degree k for each k from 0 to n.

The representation of a multilinear polynomial in the monomial basis \{x_{i_1} \cdots x_{i_k} \mid 1 \leq i_1 < \cdots < i_k \leq n\} (together with the constant term) is unique over any field K, due to the linear independence of these monomials in the polynomial ring. This uniqueness allows unambiguous decomposition and coefficient extraction, which is crucial for theoretical results in algebraic complexity and proof systems.
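The uniqueness of the monomial-basis representation can be made concrete on the boolean hypercube: the coefficients of the multilinear polynomial agreeing with a given function on \{0,1\}^n are recovered by Möbius inversion over the subset lattice. A minimal sketch (function names are illustrative):

```python
from itertools import chain, combinations

def subsets(S):
    """All subsets of the tuple S, as tuples."""
    return chain.from_iterable(combinations(S, r) for r in range(len(S) + 1))

def multilinear_coeffs(f, n):
    """Recover the unique coefficients c_S of the multilinear polynomial that
    agrees with f on {0,1}^n, via Moebius inversion over the subset lattice:
    c_S = sum_{T subset of S} (-1)^(|S|-|T|) * f(indicator vector of T)."""
    coeffs = {}
    for S in subsets(tuple(range(n))):
        c = 0
        for T in subsets(S):
            point = [1 if i in T else 0 for i in range(n)]
            c += (-1) ** (len(S) - len(T)) * f(point)
        coeffs[frozenset(S)] = c
    return coeffs

# Example: OR of two bits has multilinear form x0 + x1 - x0*x1.
f = lambda x: max(x)
print(multilinear_coeffs(f, 2))
# {frozenset(): 0, frozenset({0}): 1, frozenset({1}): 1, frozenset({0, 1}): -1}
```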

Examples and Representations

Basic Examples

A multilinear polynomial in a single variable, the univariate case, is simply a linear polynomial of the form ax + b, where a and b are constants; this is trivially multilinear since the degree in the sole variable x is at most 1. In two variables, a basic example is the polynomial xy + 2x + 3y + 4, in which every monomial has degree at most 1 in both x and y. Similarly, for three variables, the polynomial xyz + xy + yz + x + y + z + 1 is multilinear, as no variable appears with exponent greater than 1 in any term. A homogeneous multilinear polynomial, where all terms have the same total degree, can be seen in the quadratic form xy + yz + zx over three variables, which is linear in each variable separately and has total degree 2. To highlight the distinction from general polynomials, consider the expansion of (x + y + z)^2: (x + y + z)^2 = x^2 + y^2 + z^2 + 2xy + 2yz + 2zx. The squared terms x^2, y^2, and z^2 each have degree 2 in one variable, violating multilinearity; thus, only the cross terms 2xy + 2yz + 2zx constitute a multilinear polynomial.
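This filtering of square-free terms can be checked symbolically; a small sketch using SymPy (the filtering step is illustrative, not a built-in operation):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
expansion = sp.expand((x + y + z) ** 2)

# Keep only the square-free (multilinear) monomials of the expansion.
poly = sp.Poly(expansion, x, y, z)
multilinear_part = sum(c * x**e[0] * y**e[1] * z**e[2]
                       for e, c in zip(poly.monoms(), poly.coeffs()) if max(e) <= 1)

print(expansion)         # x**2 + 2*x*y + 2*x*z + y**2 + 2*y*z + z**2
print(multilinear_part)  # 2*x*y + 2*x*z + 2*y*z
```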

Matrix and Tensor Forms

In the bilinear case, a multilinear polynomial involving two sets of variables corresponds to a bilinear form on vector spaces, which admits a matrix representation. For finite-dimensional vector spaces V and W over a field K, with bases \{e_i\} and \{f_j\}, the bilinear form B: V \times W \to K defined by B(x, y) = \sum_{i,j} a_{ij} x_i y_j, where x = \sum x_i e_i and y = \sum y_j f_j, is represented by the matrix A = (a_{ij}). If the form is symmetric, meaning B(v, w) = B(w, v) for all v \in V, w \in W, then A is a symmetric matrix, and the form can be expressed as B(x, y) = x^T A y for column vectors x, y. This matrix encodes the coefficients directly, facilitating computations such as diagonalization of the quadratic forms derived from the bilinear case.

For higher degrees, multilinear polynomials generalize this to tensor representations. A multilinear polynomial of order n in variables grouped into vectors x^{(1)}, \dots, x^{(n)} from finite-dimensional spaces V_1, \dots, V_n is represented by an n-way tensor C \in V_1^* \otimes \cdots \otimes V_n^*, whose entries c_{i_1 \dots i_n} are the coefficients of the monomials \prod_{k=1}^n x^{(k)}_{i_k}. This tensor captures all multilinear terms without higher powers in individual variables, with the polynomial given by f(x^{(1)}, \dots, x^{(n)}) = \sum_{i_1, \dots, i_n} c_{i_1 \dots i_n} \prod_{k=1}^n x^{(k)}_{i_k}. The universal property of the tensor product ensures that every multilinear map from V_1 \times \cdots \times V_n to K corresponds uniquely to a linear functional on the tensor product space V_1 \otimes \cdots \otimes V_n, providing an algebraic foundation for this representation.

Evaluation of the polynomial is a tensor contraction: the tensor C is multiplied with the input vectors to yield the scalar value. Specifically, the sum over multi-indices contracts the tensor with each x^{(k)}, applying the multilinear map encoded by C. This operation is linear in each vector separately and scales efficiently in finite dimensions, with complexity tied to the tensor's order and dimensions. In applications, such contractions appear in coordinate expressions of multilinear maps between spaces, where the form arises from choosing bases in the domain and codomain.

As an example, consider a trilinear polynomial in three vector variables x, y, z \in K^m. It is represented by a 3-tensor C \in (K^m)^{\otimes 3} with entries c_{ijk}, and evaluates as f(x, y, z) = \sum_{i,j,k=1}^m c_{ijk} x_i y_j z_k, which is the full contraction C : x \otimes y \otimes z. This extends the bilinear matrix case, where the slices of the 3-tensor correspond to bilinear forms parameterized by the third index. Such representations are pivotal in multilinear algebra for decomposing complex forms into basis expansions.
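A numerical sketch of the trilinear case, using NumPy's einsum for the full contraction (the random coefficient tensor is just a placeholder):

```python
import numpy as np

# A trilinear polynomial f(x, y, z) = sum_{i,j,k} c_ijk * x_i * y_j * z_k
# is encoded by a 3-way coefficient tensor C; evaluation is a full contraction.
m = 4
rng = np.random.default_rng(0)
C = rng.standard_normal((m, m, m))
x, y, z = rng.standard_normal(m), rng.standard_normal(m), rng.standard_normal(m)

# Contract C with x, y, z along its three modes.
value = np.einsum('ijk,i,j,k->', C, x, y, z)

# The same value, written as the explicit triple sum over monomials.
check = sum(C[i, j, k] * x[i] * y[j] * z[k]
            for i in range(m) for j in range(m) for k in range(m))
assert np.isclose(value, check)
```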

Properties

Algebraic Properties

Multilinear polynomials form a vector space over the base field, meaning that the sum of two multilinear polynomials and any scalar multiple of a multilinear polynomial are themselves multilinear. This preservation follows directly from the linearity in each variable that defines multilinearity, as addition and scalar multiplication do not introduce higher powers in any variable. The product of two multilinear polynomials, however, is not necessarily multilinear, since multiplication can produce terms with repeated variables and thus higher degrees in those variables. For instance, the product (xy)(xz) = x^2 y z contains a quadratic term in x, violating multilinearity.

If the coefficients of a multilinear polynomial are symmetric under permutations of its variables, the polynomial relates to classical invariant theory, where such symmetric multilinear forms serve as invariants or covariants under the action of groups like the general linear group. These symmetric structures capture permutation-invariant properties essential for studying representations and decompositions in multilinear algebra.

The partial derivative of a multilinear polynomial with respect to one variable yields another multilinear polynomial of lower degree, linear in the remaining variables. This property arises because differentiation reduces the degree in the differentiated variable from 1 to 0 without affecting linearity in the others, preserving the multilinear structure overall. To extend multilinearity to non-homogeneous polynomials, the homogenization process scales the terms by powers of an auxiliary variable, transforming the polynomial into a homogeneous form that can then be polarized into a multilinear expression. Specifically, for a polynomial Q of degree n in one variable, the homogenized form is Q(x, y) = y^n Q(x/y), which facilitates conversion to a multilinear symmetric form via polarization identities.
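These closure and non-closure properties are easy to verify symbolically; a small SymPy sketch (the is_multilinear helper is illustrative):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

def is_multilinear(expr, variables):
    """True if no variable appears with degree > 1 in the expanded polynomial."""
    poly = sp.Poly(sp.expand(expr), *variables)
    return all(max(mono) <= 1 for mono in poly.monoms())

p, q = x*y + z, x*z + 1
print(is_multilinear(p + q, (x, y, z)))                         # True: sums stay multilinear
print(is_multilinear(p * q, (x, y, z)))                         # False: product contains x**2*y*z
print(is_multilinear(sp.diff(x*y*z + x*y + z, x), (x, y, z)))   # True: derivative is y*z + y
```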

Domain-Specific Properties

Multilinear polynomials exhibit distinct behaviors when restricted to specific domains, such as the unit hypercube or rectangular regions, owing to their separability and linearity in each variable. Over the unit hypercube [0,1]^n, a multilinear polynomial attains its maximum value at one of the vertices: it is affine in each variable when the others are fixed, so extrema in each coordinate occur at the endpoints, and the argument propagates recursively to the cube's corners. This vertex attainment simplifies optimization over the hypercube, as evaluating the polynomial at the 2^n vertices suffices to identify the global maximum.

In the positive orthant \mathbb{R}_{\geq 0}^n, a multilinear polynomial with all positive coefficients is monotonically increasing, since each partial derivative with respect to a variable x_i is a lower-degree multilinear polynomial with non-negative coefficients, ensuring non-negative gradients throughout the orthant. This monotonicity facilitates analysis in optimization and probability contexts where positivity is assumed.

On bounded rectangular domains, such as [a_1, b_1] \times \cdots \times [a_n, b_n], multilinear polynomials are separable: each term factors into a product of univariate linear functions in distinct variables, allowing the overall expression to be decomposed into separable components with no cross-variable dependence beyond the product structure. This enables efficient bounding and evaluation by reducing multivariate computations to univariate ones over individual intervals.

Regarding regularity, multilinear polynomials are Lipschitz continuous over compact rectangular domains, with the Lipschitz constant bounded by the supremum norm of the gradient; since each component of the gradient is a multilinear polynomial of degree at most n-1, it remains bounded on the compact set, yielding a finite constant that scales with the domain's size and the polynomial's coefficients.

For volume computations, the integral of a multilinear polynomial over a rectangular domain factors into a product of univariate integrals due to the separability of its terms under Fubini's theorem: for a monomial c \prod_{i \in S} x_i with S \subseteq \{1, \dots, n\}, the integral equals c \left( \prod_{i \in S} \int_{a_i}^{b_i} x_i \, dx_i \right) \left( \prod_{j \notin S} (b_j - a_j) \right), and this extends linearly to the full polynomial. This property is particularly useful in computing probabilistic expectations and related integrals.
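A small sketch of both facts, vertex enumeration for the maximum over [0,1]^3 and term-by-term integration over the unit cube, for an illustrative polynomial:

```python
import numpy as np
from itertools import product

# p(x) = 2*x0*x1 + 3*x1*x2 - x0 + 1, stored as {frozenset(S): c_S}.
coeffs = {frozenset({0, 1}): 2.0, frozenset({1, 2}): 3.0,
          frozenset({0}): -1.0, frozenset(): 1.0}
n = 3

def evaluate(coeffs, x):
    return sum(c * np.prod([x[i] for i in S]) for S, c in coeffs.items())

# The maximum over [0,1]^3 is attained at one of the 2^3 vertices.
best = max(product([0.0, 1.0], repeat=n), key=lambda v: evaluate(coeffs, v))
print(best, evaluate(coeffs, best))   # (1.0, 1.0, 1.0) with value 5.0

# The integral over [0,1]^3 factors term by term (Fubini): each variable in S
# contributes 1/2 = int_0^1 x dx, and each absent variable contributes length 1.
integral = sum(c * 0.5 ** len(S) for S, c in coeffs.items())
print(integral)   # 2/4 + 3/4 - 1/2 + 1 = 1.75
```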

Applications and Extensions

In Algebra and Combinatorics

In the 19th century, Arthur Cayley pioneered the study of invariants for binary forms, developing methods that extended to multilinear forms as a foundational aspect of classical invariant theory. His work on hyperdeterminants and partial differential operators provided tools for constructing invariants of homogeneous polynomials under linear transformations, laying the groundwork for later algebraic applications of multilinear structures.

Cycle index polynomials serve as generating functions for counting configurations under group actions, functioning as invariants that encode the cycle structure of elements in permutation groups such as the symmetric group S_n. Defined as Z_G = \frac{1}{|G|} \sum_{g \in G} \prod_k x_k^{c_k(g)}, where c_k(g) is the number of cycles of length k in g, these polynomials facilitate enumeration via Pólya's theorem by substituting variables to generate counts of orbits. For example, suitable substitutions such as x_k = s^k yield generating functions that count permutations by fixed points and cycle types, providing a framework for symmetric invariants in combinatorial enumeration.

In representation theory, characters of the symmetric group S_n are analyzed through multilinear polynomials, which decompose into components corresponding to irreducible representations labeled by partitions of n. A multilinear polynomial f(x_1, \dots, x_n) of degree n can be expressed as a sum over association types, with each term evaluated using the representation \phi_\lambda associated with the irreducible module V^\lambda, yielding matrices whose ranks determine polynomial identities in the group algebra \mathbb{F} S_n. This approach, rooted in the Wedderburn decomposition, allows characters to express the multiplicity and structure of representations, connecting multilinear forms to the Frobenius characteristic map for symmetric functions.

Multilinear extensions play a key role in combinatorial optimization, particularly for maximizing monotone submodular functions over matroid intersections. For a submodular function f: 2^X \to \mathbb{R}^+, the multilinear extension F(y) = \mathbb{E}[f(\hat{y})], for y \in [0,1]^X and random \hat{y} whose coordinates are independently 1 with probability y_i, relaxes the discrete problem to a continuous one over the matroid polytope. The continuous greedy algorithm iteratively maximizes F(y) over a single matroid constraint, achieving a (1 - 1/e)-approximation via pipage rounding to a feasible set, while extensions to multiple matroids (intersections) yield 1/(k + \epsilon)-approximations for k constraints using local search on the relaxation. This framework enables efficient algorithms for problems like sensor placement or diversity maximization under independence constraints.

Boolean multilinear polynomials provide a unique representation for switching functions (Boolean functions arising in switching circuit theory), eliminating higher-degree terms through the identity x_i^2 = x_i over \{0,1\}. Any Boolean function f: \{-1,1\}^n \to \{-1,1\} expands uniquely as f(x) = \sum_{S \subseteq \{1, \dots, n\}} \hat{f}(S) \prod_{i \in S} x_i, where the Fourier coefficients \hat{f}(S) are computable as expectations, offering a degree-d characterization related to approximation by d-juntas. This multilinear form underpins analysis in learning theory and complexity, with the degree measuring the function's sensitivity to variable subsets.
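A brute-force sketch of this Fourier expansion for a small Boolean function, computing each \hat{f}(S) as an average over \{-1,1\}^n (illustrative code, exponential in n):

```python
import numpy as np
from itertools import chain, combinations, product

def fourier_coefficients(f, n):
    """Fourier coefficients hat{f}(S) = E_x[ f(x) * prod_{i in S} x_i ] for
    f: {-1,1}^n -> R, giving the unique multilinear expansion of f."""
    points = list(product([-1, 1], repeat=n))
    coeffs = {}
    for S in chain.from_iterable(combinations(range(n), r) for r in range(n + 1)):
        coeffs[S] = np.mean([f(x) * np.prod([x[i] for i in S]) for x in points])
    return coeffs

# Majority on 3 bits: MAJ(x) = sign(x0 + x1 + x2).
maj = lambda x: 1 if sum(x) > 0 else -1
print(fourier_coefficients(maj, 3))
# hat{f}({i}) = 0.5 for each i, hat{f}({0,1,2}) = -0.5, all other coefficients 0
```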

In Analysis and Approximation

Multilinear polynomials serve as effective approximators for continuous functions in analytical settings, particularly when higher-degree terms in individual variables are not required. While the full algebra of multivariate polynomials is dense in the space of continuous functions on compact sets by the Stone-Weierstrass theorem, which guarantees uniform approximation because the algebra separates points and is closed under the relevant operations (including conjugation in the complex case), multilinear polynomials impose strict degree constraints (at most degree 1 per variable), limiting their representational power for functions with significant nonlinearity in single variables. This makes them suitable for approximating functions that exhibit low interaction complexity across variables, such as those arising in interaction models, where least-squares projection onto the space of multilinear polynomials on domains like [0,1]^n yields interpretable coefficients representing variable interactions.

In high-dimensional analysis, multilinear polynomials facilitate tensor product approximations, which decompose multivariate functions into sums of products of univariate functions, thereby mitigating the curse of dimensionality that plagues full tensor grid methods requiring exponential storage and computation in the dimension d. For instance, the Tucker decomposition represents a function f: \mathbb{R}^d \to \mathbb{R} as a low-rank tensor product \sum_{k_1=1}^{r_1} \cdots \sum_{k_d=1}^{r_d} b_{k_1 \dots k_d} \prod_{j=1}^d V^{(j)}_{k_j}(x_j), where ranks r_j \ll n (the grid size per dimension) reduce complexity from O(n^d) to O(d r n \log n), enabling efficient approximation of operators like convolutions or Green's functions in physics simulations. Such approaches are particularly valuable for smooth functions on high-dimensional domains, preserving accuracy while scaling linearly with dimension.

Error bounds for multilinear approximations of smooth functions draw on generalizations of Jackson-type theorems in multivariate settings, which relate the best approximation error to moduli of continuity or mixed smoothness, adapted here to bounded multi-degree (degree 1 per variable). For functions in Sobolev spaces on the unit cube, the error in best uniform approximation by multilinear polynomials satisfies E(f) \leq C \omega_k(f, h_1, \dots, h_d), where \omega_k is a mixed modulus of smoothness and the h_j are grid spacings, with constants C depending on the order k; this establishes O(h) convergence, though slower than higher-degree polynomials when individual variables enter nonlinearly. These bounds highlight the efficiency of multilinear methods in anisotropic or tensor-structured problems, where the error scales better than for isotropic total-degree approximations.

In numerical approximation, multilinear polynomials form the basis for interpolation on tensor product grids, providing a straightforward extension of linear interpolation to higher dimensions without introducing cross-derivatives. On a d-dimensional rectangular grid with nodes \{x_{i_j}^{(j)}\}_{i_j=1}^{n_j} for j=1,\dots,d, the interpolant is P(x) = \sum_{i_1=1}^{n_1} \cdots \sum_{i_d=1}^{n_d} f(x_{i_1}^{(1)}, \dots, x_{i_d}^{(d)}) \prod_{j=1}^d \ell_{i_j}^{(j)}(x_j), where the \ell_{i_j}^{(j)} are univariate linear basis functions; this reproduces the data exactly at grid points and achieves O(h) accuracy for smooth functions, and it is commonly applied in finite element methods and data visualization to handle high-dimensional gridded data efficiently.
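A minimal sketch of this interpolant on a single tensor-product cell (the helper and its signature are illustrative):

```python
import numpy as np
from itertools import product

def multilinear_interpolate(grids, values, x):
    """Interpolate on one tensor-product cell.
    grids[j] = (a_j, b_j) are the two nodes in dimension j, values is a
    2x...x2 array of f at the cell corners, x is the query point."""
    d = len(grids)
    result = 0.0
    for corner in product([0, 1], repeat=d):
        weight = 1.0
        for j, c in enumerate(corner):
            a, b = grids[j]
            t = (x[j] - a) / (b - a)           # univariate linear basis value
            weight *= t if c == 1 else 1 - t
        result += weight * values[corner]
    return result

# Bilinear example on [0,1]^2: f(0,0)=1, f(1,0)=2, f(0,1)=3, f(1,1)=5.
vals = np.array([[1.0, 3.0], [2.0, 5.0]])      # vals[i, j] = f(i, j)
print(multilinear_interpolate([(0, 1), (0, 1)], vals, (0.5, 0.5)))  # 2.75
```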
Fourier-Motzkin elimination extends to projecting systems of multilinear inequalities in optimization and feasibility analysis, where linear relaxations of multilinear terms (e.g., via McCormick envelopes) allow valid bounds to be derived without solving the full nonconvex problem. By applying the elimination sequentially to the linearized constraints, one obtains a reduced system whose solution set captures the projection of the original multilinear inequalities, which is useful for deriving cutting planes in mixed-integer programming with 0-1 variables. The elimination is complete for systems of linear inequalities, though the number of inequalities can grow exponentially in high dimensions.
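As an illustration of the linearization step, the standard McCormick envelope for a single bilinear term w = x*y over a box gives the following bounds (a sketch; the function name is illustrative):

```python
def mccormick_bounds(x, y, xl, xu, yl, yu):
    """Lower and upper bounds on the bilinear term w = x*y implied by the
    standard McCormick envelope over the box [xl, xu] x [yl, yu]."""
    lower = max(xl * y + x * yl - xl * yl, xu * y + x * yu - xu * yu)
    upper = min(xu * y + x * yl - xu * yl, xl * y + x * yu - xl * yu)
    return lower, upper

# On [0,1]^2 the envelope is exact at the vertices and loosest in the middle.
print(mccormick_bounds(0.5, 0.5, 0, 1, 0, 1))   # (0.0, 0.5); the true product is 0.25
```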

In Computing and Optimization

Maximizing a multilinear polynomial over the unit hypercube [0,1]^n is NP-hard, as it generalizes challenging problems like quadratic assignment. Approximation algorithms leverage semidefinite programming (SDP) relaxations of the multilinear objective, followed by randomized rounding to obtain feasible points; for instance, dependent rounding schemes yield guarantees such as a (1-1/e)-approximation for submodular cases, while general multilinear maximization benefits from similar techniques with multiplicative error bounds.

In machine learning, multilinear polynomials model interactions in high-dimensional tensor data through decompositions like the canonical polyadic (CP) decomposition, which expresses a tensor as a sum of rank-1 components: \mathcal{X} \approx \sum_{r=1}^R \lambda_r \mathbf{a}_r \circ \mathbf{b}_r \circ \mathbf{c}_r \cdots, where \circ denotes the outer product and the factors correspond to coefficients in a multilinear polynomial representation. This approach enables dimensionality reduction and feature extraction in applications such as topic modeling, image classification, and web link analysis, preserving latent multilinear structure in multi-way arrays.

Software tools for symbolic manipulation of multilinear polynomials include Mathematica, whose functions such as Expand and Collect can bring multivariate expressions with per-variable degree constraints into multilinear form, allowing efficient handling of homogeneous sums of products. Similarly, SageMath provides multivariate polynomial rings over various coefficient fields, enabling operations like factorization and substitution subject to multilinear constraints through its PolyDict backend and libSINGULAR interface.
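A small sketch of the CP form, reconstructing a 3-way tensor from its factor matrices with NumPy (the random factors are placeholders, not a fitted decomposition):

```python
import numpy as np

# Reconstruct a 3-way tensor from CP factors: X ~ sum_r lambda_r a_r o b_r o c_r.
rng = np.random.default_rng(1)
I, J, K, R = 5, 4, 3, 2
A, B, C = rng.standard_normal((I, R)), rng.standard_normal((J, R)), rng.standard_normal((K, R))
lam = rng.standard_normal(R)

X = np.einsum('r,ir,jr,kr->ijk', lam, A, B, C)   # sum of R rank-1 outer products

# Entry (i,j,k) is the multilinear expression sum_r lam_r * A[i,r] * B[j,r] * C[k,r].
i, j, k = 2, 1, 0
assert np.isclose(X[i, j, k], sum(lam[r] * A[i, r] * B[j, r] * C[k, r] for r in range(R)))
```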
