
Triangle inequality

The triangle inequality is a fundamental principle in mathematics that asserts, in the context of Euclidean geometry, that for any triangle with sides of lengths a, b, and c, the inequality a + b > c, a + c > b, and b + c > a holds, ensuring the sides can form a non-degenerate triangle. This geometric property, first articulated in Euclid's Elements (circa 300 BCE) as Proposition I.20, underscores that the straight line is the shortest path between two points and prevents the collapse of a triangle into a line segment. Beyond geometry, the triangle inequality extends to abstract settings, such as metric spaces, where for a metric d on a set X, the distance satisfies d(x, z) \leq d(x, y) + d(y, z) for all x, y, z \in X, capturing the intuitive notion that indirect paths are at least as long as direct ones. In normed vector spaces, it manifests as \| \mathbf{u} + \mathbf{v} \| \leq \| \mathbf{u} \| + \| \mathbf{v} \| for vectors \mathbf{u} and \mathbf{v}, forming one of the axioms defining a norm and enabling key results in functional analysis and linear algebra. This inequality's versatility makes it indispensable across fields like real analysis, where it supports proofs of convergence and continuity, and in applications ranging from optimization to machine learning distance metrics.

Euclidean Geometry

Triangle Side Constraints

In Euclidean geometry, the triangle inequality imposes fundamental constraints on the side lengths of a triangle. For a triangle with sides of lengths a, b, and c, the inequalities a + b > c, a + c > b, and b + c > a must hold. These conditions ensure that the sides can form a closed, non-degenerate figure in the plane. The inequality is strict for non-degenerate triangles, which have positive area and interior angles less than 180 degrees. Equality occurs only in degenerate cases, where the three vertices are collinear, resulting in a line segment rather than a proper triangle with positive area. This principle originates in Euclid's Elements, Book I, Proposition 20 (circa 300 BCE), which establishes that in any triangle ABC, the length of one side is less than the sum of the other two, underscoring the straight line as the shortest path between two points. To illustrate, consider potential side lengths of 3, 4, and 5 units: 3 + 4 = 7 > 5, 3 + 5 = 8 > 4, and 4 + 5 = 9 > 3, satisfying the conditions and forming a valid triangle. In contrast, lengths of 1, 2, and 4 units fail because 1 + 2 = 3 < 4, preventing the formation of a triangle. Geometrically, these constraints reflect that the direct straight-line path between two points is always shorter than any broken path along the sides, ensuring the sides close without overlap or gap.
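The side-length test above can be sketched in a few lines of Python (the function name `is_valid_triangle` is illustrative):

```python
def is_valid_triangle(a, b, c):
    """Check the strict triangle inequality for side lengths a, b, c."""
    return a + b > c and a + c > b and b + c > a

print(is_valid_triangle(3, 4, 5))  # True: 3, 4, 5 form a triangle
print(is_valid_triangle(1, 2, 4))  # False: 1 + 2 = 3 < 4
```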

Applications in Right Triangles

In a right triangle with legs a and b and hypotenuse c, the triangle inequality manifests as the strict condition a + b > c, where c = \sqrt{a^2 + b^2} by the Pythagorean theorem. This ensures the sides form a valid non-degenerate triangle with a right angle between legs a and b. If a + b \leq c, the lengths cannot enclose a triangular region, preventing the formation of any triangle and rendering the Pythagorean relation inapplicable. For instance, consider sides 3, 4, and 5: here 3 + 4 = 7 > 5, satisfying the inequality and confirming the right angle via 3^2 + 4^2 = 9 + 16 = 25 = 5^2; in contrast, sides 3, 4, and 8 yield 3 + 4 = 7 < 8, which violates the condition and cannot form a triangle, let alone a right triangle. The bound a + b > \sqrt{a^2 + b^2} can be derived algebraically by assuming a > 0 and b > 0. Squaring both sides of the proposed inequality gives (a + b)^2 > a^2 + b^2, which expands to a^2 + 2ab + b^2 > a^2 + b^2. Subtracting a^2 + b^2 from both sides yields 2ab > 0, true since ab > 0. Taking the positive square root preserves the inequality, confirming a + b > c. This derivation highlights how the positive cross term 2ab in the expansion strictly exceeds the Pythagorean equality, distinguishing right triangles from degenerate cases. In practical contexts like surveying and architecture, the triangle inequality verifies that proposed side lengths for right triangles are feasible before applying the Pythagorean theorem for computations. Surveyors, for example, use it to check baseline and offset measurements when triangulating land plots, ensuring the segments form a closed triangle for accurate distance calculations without physical reconstruction. Similarly, architects apply it when designing right-angled structural elements, such as roof trusses or corner supports, to confirm that material lengths will assemble into stable right triangles, avoiding costly errors in load distribution. Historically, Euclid incorporated the triangle inequality (Proposition I.20) as a foundational principle in various proofs within the Elements to establish geometric constraints essential for triangle constructions.
Visualization of the inequality in right triangles often involves considering near-degenerate configurations, where one leg approaches zero relative to the other, causing the acute angle to approach 0° while the right angle persists, but the sum a + b remains strictly greater than c until collapse. In the limiting degenerate case—approaching a straight line with total length a + b = c and an angle nearing 180°—equality holds, but this no longer constitutes a triangle, illustrating the inequality's role in maintaining geometric integrity.
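A short numerical sketch of the strict bound a + b > \sqrt{a^2 + b^2}, including a near-degenerate leg (the helper name is illustrative):

```python
import math

def hypotenuse_bound(a, b):
    """For positive legs a, b, return (a + b, hypotenuse sqrt(a^2 + b^2))."""
    return a + b, math.hypot(a, b)

legs_sum, c = hypotenuse_bound(3, 4)
print(legs_sum > c)  # True: 7 > 5

# Near-degenerate case: one leg shrinks toward zero, yet the strict
# inequality a + b > c persists for any positive leg length.
legs_sum, c = hypotenuse_bound(1e-9, 4)
print(legs_sum > c)  # True
```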

Proofs and Converse

The triangle inequality in Euclidean geometry states that for any triangle with sides of lengths a, b, and c, the inequalities a + b > c, a + c > b, and b + c > a hold. A geometric proof, originating from Euclid's Elements (Book I, Proposition 20), proceeds by constructing an auxiliary isosceles triangle. Consider triangle ABC with sides BC = a, AC = b, and AB = c. To prove b + c > a, extend side BA beyond A to point D such that AD = b = AC. Connect C to D, forming isosceles triangle ACD with AC = AD = b, so base angles \angle ACD = \angle ADC. Since \angle ACD is a part of \angle BCD, we have \angle BCD > \angle ACD = \angle BDC. Thus, in triangle BCD, \angle BCD > \angle BDC, so the opposite side BD > BC = a (by I.19, the larger angle is subtended by the larger side). But BD = BA + AD = c + b, yielding c + b > a. Similar constructions apply to the other inequalities. Equality holds only if the points are collinear, forming a degenerate case. An algebraic proof utilizes the law of cosines. For triangle ABC with angle C opposite side c, the law states c^2 = a^2 + b^2 - 2ab \cos C. Since -1 < \cos C < 1 for $0 < C < \pi, and -2ab < 0, multiplying \cos C > -1 by -2ab reverses the inequality: -2ab \cos C < 2ab. Thus, c^2 < a^2 + b^2 + 2ab = (a + b)^2, so c < a + b. The other inequalities follow analogously. Equality occurs when \cos C = -1, i.e., C = \pi, degenerating the triangle to a line segment. The converse asserts that if positive real numbers a, b, and c satisfy a + b > c, a + c > b, and b + c > a, then they form the side lengths of a triangle. Without loss of generality, assume a \leq b \leq c. The conditions reduce to a + b > c (the others hold automatically). To prove this, consider Heron's formula for the area K = \sqrt{s(s - a)(s - b)(s - c)}, where s = (a + b + c)/2. The strict inequalities imply s - a = (b + c - a)/2 > 0, s - b > 0, and s - c = (a + b - c)/2 > 0, with s > 0. Thus, the product s(s - a)(s - b)(s - c) > 0, so K > 0.
A positive area confirms the existence of a non-degenerate triangle with these side lengths. Alternatively, a coordinate proof places vertex A at (0, 0), B at (c, 0), and C at (x, y) with y > 0, yielding distances b = \sqrt{x^2 + y^2} from A and a = \sqrt{(x - c)^2 + y^2} from B. The conditions a + b > c and |a - b| < c (both consequences of the three strict inequalities) ensure the circle of radius b centered at A and the circle of radius a centered at B intersect at points with y \neq 0. In edge cases, equality in the triangle inequality, such as a + b = c, corresponds to degenerate triangles where the vertices are collinear and the area is zero. For instance, sides 3, 4, 7 satisfy 3 + 4 = 7, forming a straight line rather than a proper triangle. Such configurations satisfy the non-strict form a + b \geq c but fail the strict inequality for non-degeneracy.
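The Heron's-formula argument for the converse can be checked numerically; the sketch below (function name illustrative) returns a positive area exactly when the strict inequalities hold:

```python
import math

def heron_area(a, b, c):
    """Area via Heron's formula; positive iff the strict triangle
    inequality holds, zero (or undefined) otherwise."""
    s = (a + b + c) / 2
    val = s * (s - a) * (s - b) * (s - c)
    return math.sqrt(val) if val > 0 else 0.0

print(heron_area(3, 4, 5))  # 6.0 (valid right triangle)
print(heron_area(3, 4, 7))  # 0.0 (degenerate: 3 + 4 = 7)
```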

Geometric Generalizations

Polygon Inequalities

The triangle inequality extends naturally to polygons, providing necessary conditions for the existence of an n-gon in the Euclidean plane with positive side lengths a_1, a_2, \dots, a_n > 0. Specifically, the sum of any n-1 sides must exceed the length of the remaining side: for each k = 1, \dots, n, a_1 + \cdots + \hat{a_k} + \cdots + a_n > a_k, where \hat{a_k} denotes omission of a_k. This ensures the polygon can close without degenerating into a line segment or worse. Equivalently, the longest side must be strictly shorter than the sum of the others, preventing the figure from collapsing. These conditions are also sufficient: whenever they hold, a (possibly non-convex) polygon with the given side lengths can be assembled in the plane. For the case of a quadrilateral (n = 4) with sides a, b, c, d, the inequalities simplify to a + b + c > d, a + b + d > c, a + c + d > b, and b + c + d > a. Consider an attempted quadrilateral with sides 1, 1, 1, 4: here, 1 + 1 + 1 = 3 < 4, violating the condition for the longest side, so no such quadrilateral exists (it would degenerate into a line segment of length 4 with the other sides folding back). In contrast, sides 3, 4, 5, 6 satisfy all inequalities (e.g., 3 + 4 + 5 = 12 > 6), allowing formation of a valid quadrilateral. These constraints generalize the triangle case by ensuring no single side dominates the perimeter. A key application of polygon inequalities arises in path geometry: the total length of any polygonal chain connecting two points P and Q in the plane strictly exceeds the straight-line distance |PQ| unless the chain is degenerate (i.e., a single segment). This follows from repeated application of the triangle inequality: divide the chain into consecutive triangles sharing vertices, and sum the inequalities along the chain to bound the endpoint distance by the path length. Thus, straight lines minimize distances among all polygonal approximations.
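A minimal sketch of the polygon existence test, using the equivalent longest-side formulation (function name illustrative):

```python
def polygon_exists(sides):
    """A polygon with these positive side lengths exists iff every side
    is strictly less than the sum of the remaining sides."""
    total = sum(sides)
    return all(s < total - s for s in sides)

print(polygon_exists([1, 1, 1, 4]))  # False: 4 >= 1 + 1 + 1
print(polygon_exists([3, 4, 5, 6]))  # True
```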
This polygonal extension of the triangle inequality, building on Euclid's foundational work in Elements (ca. 300 BCE), was systematically developed in 19th-century Euclidean geometry texts amid broader advances in synthetic and metric approaches.

Higher-Dimensional Extensions

The triangle inequality extends naturally to higher-dimensional Euclidean spaces via generalizations to simplices, the higher-dimensional analogs of triangles. In an n-dimensional space, an n-simplex defined by vertices V_0, V_1, \dots, V_n requires that its edge lengths satisfy the strict triangle inequality for every subset of three vertices: for any distinct indices i, j, k, the distance d(V_i, V_j) + d(V_j, V_k) > d(V_i, V_k). This ensures consistency across all triangular faces. To guarantee the simplex has positive n-dimensional volume and can be embedded without degeneracy, the Cayley-Menger determinant constructed from the squared edge lengths must be positive. For polyhedra like the tetrahedron (a 3-simplex), the edge length constraints include the triangle inequalities on each of its four faces, plus the positive Cayley-Menger determinant for the overall volume. Equivalently, these can be expressed through conditions on paths: for points A, B, C, D, any chain of edges such as d(A,B) + d(B,C) + d(C,D) > d(A,D) must hold, reflecting that the straight-line distance is shorter than any broken path in Euclidean space. Complementary inequalities apply to face areas; if A, B, C, D denote the areas of the four triangular faces, then A + B + C > D and all cyclic permutations hold, ensuring the areas are compatible with a non-degenerate tetrahedron. This area condition arises from the vector sum of oriented face areas being zero, implying no single face area exceeds the sum of the others. These higher-dimensional extensions find applications in computational geometry, where they validate 3D models by checking whether proposed edge lengths or face configurations yield embeddable, non-degenerate structures, such as in geometric constraint solving for CAD systems. They also underscore the convexity of Euclidean space in higher dimensions, as the inequalities enforce that geodesic (straight-line) paths remain the shortest, preserving convex combinations of points within the space.
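The Cayley-Menger criterion for a tetrahedron can be evaluated directly; the sketch below (helper names illustrative) uses the standard normalization 288 V^2 = \det for a 3-simplex:

```python
def det(m):
    """Determinant by Laplace expansion along the first row (fine for 5x5)."""
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(minor)
    return total

def cayley_menger(d2):
    """Cayley-Menger determinant for a tetrahedron, given the 4x4 matrix
    of squared edge lengths d2[i][j]. Positive value <=> positive volume;
    the volume itself is sqrt(det / 288)."""
    m = [[0, 1, 1, 1, 1]]
    for i in range(4):
        m.append([1] + [d2[i][j] for j in range(4)])
    return det(m)

# Regular tetrahedron with unit edges: squared distance 1 off-diagonal.
d2 = [[0 if i == j else 1 for j in range(4)] for i in range(4)]
cm = cayley_menger(d2)
print(cm)                 # 4 for the unit regular tetrahedron
print((cm / 288) ** 0.5)  # volume ~ 0.11785 (= 1/(6*sqrt(2)))
```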

Abstract Mathematical Settings

Normed Vector Spaces

In a normed vector space (V, \|\cdot\|) over the real or complex numbers, the triangle inequality states that for all vectors u, v \in V, \|u + v\| \leq \|u\| + \|v\|. This subadditivity property, combined with the requirements of non-negativity (\|u\| \geq 0, with equality if and only if u = 0) and absolute homogeneity (\|\alpha u\| = |\alpha| \|u\| for scalars \alpha), defines a norm that measures vector "length" in a manner consistent with intuitive geometric distances. The triangle inequality ensures that the norm induces a genuine metric d(u, v) = \|u - v\|, preventing pathological behaviors where sums could exceed expected lengths, and it underpins the completeness condition in Banach spaces—complete normed spaces where every Cauchy sequence converges. This property is essential for developing theories of linear operators and convergence, as it allows norms to model length and distance in infinite-dimensional settings. Prominent examples of norms satisfying the triangle inequality are the p-norms on \mathbb{R}^n (or \mathbb{C}^n) for $1 \leq p \leq \infty. The Euclidean norm (2-norm) is given by \|x\|_2 = \sqrt{\sum_{i=1}^n x_i^2}, and it obeys the triangle inequality via the Cauchy-Schwarz inequality: |\langle u, v \rangle| \leq \|u\|_2 \|v\|_2, which implies \|u + v\|_2^2 \leq (\|u\|_2 + \|v\|_2)^2. The 1-norm, or Manhattan norm, \|x\|_1 = \sum_{i=1}^n |x_i|, satisfies \|u + v\|_1 = \sum |u_i + v_i| \leq \sum (|u_i| + |v_i|) = \|u\|_1 + \|v\|_1 by the subadditivity of the absolute value. Similarly, the infinity norm, \|x\|_\infty = \max_{1 \leq i \leq n} |x_i|, fulfills the inequality since \|u + v\|_\infty = \max |u_i + v_i| \leq \max (|u_i| + |v_i|) \leq \|u\|_\infty + \|v\|_\infty. For general p-norms, \|x\|_p = \left( \sum_{i=1}^n |x_i|^p \right)^{1/p} (with the \infty-case as the limit), the triangle inequality holds by Minkowski's inequality, a generalization proven using Hölder's inequality for $1 < p < \infty.
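Minkowski's inequality for the p-norms can be spot-checked numerically; the sketch below (names illustrative) verifies \|u + v\|_p \leq \|u\|_p + \|v\|_p for several values of p:

```python
import math

def p_norm(x, p):
    """The l^p norm for 1 <= p < inf; use p = math.inf for the max norm."""
    if p == math.inf:
        return max(abs(t) for t in x)
    return sum(abs(t) ** p for t in x) ** (1 / p)

u, v = [1.0, -2.0, 3.0], [4.0, 0.5, -1.0]
w = [a + b for a, b in zip(u, v)]

# Minkowski's inequality: ||u + v||_p <= ||u||_p + ||v||_p for each p.
for p in (1, 2, 3, math.inf):
    assert p_norm(w, p) <= p_norm(u, p) + p_norm(v, p)
    print(p, p_norm(w, p), p_norm(u, p) + p_norm(v, p))
```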
The formalization of normed vector spaces and their triangle inequality emerged in 20th-century functional analysis, systematized by Stefan Banach in his 1932 book Théorie des opérations linéaires, which introduced the complete normed spaces now bearing his name. Earlier contributions by Fréchet and others laid groundwork, but Banach's work established the framework for modern operator theory.

Metric Spaces

The concept of metric spaces was introduced by Maurice Fréchet in his 1906 doctoral thesis, providing a foundation for abstract distance measures. In a metric space (X, d), the triangle inequality is a fundamental axiom stating that for all points x, y, z \in X, the distance satisfies d(x, z) \leq d(x, y) + d(y, z). This condition ensures that the direct distance between two points is no greater than the length of any path connecting them via an intermediate point, capturing the intuitive notion that detours cannot shorten distances. The full definition of a metric space requires d to also satisfy non-negativity (d(x, y) \geq 0 for all x, y \in X), symmetry (d(x, y) = d(y, x)), and the identity of indiscernibles (d(x, y) = 0 if and only if x = y). Together, these properties, with the triangle inequality as the key relational axiom, define a structure where distances behave consistently for measuring separation in abstract sets. The triangle inequality plays a crucial role in establishing the metric's compatibility with geometric intuition, preventing pathological distances where indirect paths would appear shorter than direct ones. For instance, in the discrete metric on any set X, defined by d(x, y) = 1 if x \neq y and d(x, x) = 0, the inequality holds trivially since d(x, z) \leq 1 and d(x, y) + d(y, z) \geq 1 unless x = y = z. Another example arises in graph theory, where the shortest-path distance d(x, z) between vertices x and z in a connected graph satisfies the triangle inequality because concatenating a shortest path from x to y with one from y to z yields some path from x to z, whose length bounds the shortest-path distance from above. This axiom underpins much of modern mathematics, serving as the foundation for the topology induced by a metric and for functional analysis (enabling convergence notions like Cauchy sequences).
In data science, the triangle inequality facilitates efficient algorithms in clustering, such as hierarchical clustering or k-means variants, by ensuring that distance-based approximations (e.g., in nearest-neighbor searches) remain bounded and scalable. These applications highlight its enduring relevance beyond pure theory, particularly in machine learning, where metric spaces model high-dimensional data structures like embeddings in natural language processing.
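The graph-theoretic example can be verified exhaustively on a small graph; the BFS sketch below (names and the example graph are illustrative) computes the shortest-path metric and checks the axiom over all triples:

```python
from collections import deque
from itertools import permutations

def shortest_path_lengths(adj, source):
    """BFS distances from source in an unweighted, connected graph."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        x = queue.popleft()
        for y in adj[x]:
            if y not in dist:
                dist[y] = dist[x] + 1
                queue.append(y)
    return dist

# A small connected graph (a 5-cycle with one chord).
adj = {0: [1, 4], 1: [0, 2], 2: [1, 3, 4], 3: [2, 4], 4: [0, 2, 3]}
d = {s: shortest_path_lengths(adj, s) for s in adj}

# Verify d(x, z) <= d(x, y) + d(y, z) for every ordered triple.
assert all(d[x][z] <= d[x][y] + d[y][z] for x, y, z in permutations(adj, 3))
print("shortest-path metric satisfies the triangle inequality")
```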

Reverse Triangle Inequality

The reverse triangle inequality provides a lower bound complement to the standard triangle inequality, establishing that distances or norms cannot differ too drastically without a corresponding separation between the points involved. In a metric space (X, d), it states that for all x, y, z \in X, |d(x, z) - d(y, z)| \leq d(x, y). This formulation implies that the distance between x and y serves as an upper bound on the absolute difference in their distances to any fixed point z. Similarly, in a normed vector space with norm \|\cdot\|, the reverse triangle inequality takes the form \big| \|u\| - \|v\| \big| \leq \|u - v\| for all vectors u, v in the space, highlighting how the norm difference is controlled by the separation between the vectors themselves. The proof follows directly from the standard triangle inequality. For the metric case, apply the triangle inequality to obtain d(x, z) \leq d(x, y) + d(y, z), which rearranges to d(x, z) - d(y, z) \leq d(x, y). Interchanging x and y yields d(y, z) - d(x, z) \leq d(x, y). Combining these gives the absolute value form. For norms, start with \|u\| = \|(u - v) + v\| \leq \|u - v\| + \|v\|, so \|u\| - \|v\| \leq \|u - v\|; swapping u and v completes the proof. This inequality finds key applications in bounding errors and ensuring stability in computational settings. In numerical analysis, it is used to estimate approximation errors by relating the deviation between a computed solution and the true value to distances from a reference point, aiding in convergence proofs for iterative methods. For instance, in error analysis for linear systems or optimization algorithms, it helps quantify how perturbations in inputs propagate to outputs, supporting assessments of backward stability where computed results are close to exact solutions of nearby problems. 
A concrete example arises in Euclidean geometry, where for a triangle with side lengths a, b, and c, the reverse triangle inequality implies |a - c| \leq b, ensuring that no side is shorter than the difference of the other two, which is essential for the existence of such a triangle. This geometric interpretation extends naturally to higher dimensions via the normed-space form of the inequality.
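A quick numerical illustration of \big| \|u\| - \|v\| \big| \leq \|u - v\| in the Euclidean plane (names illustrative); the chosen vectors are parallel, so the bound is attained with equality:

```python
import math

def norm(x):
    """Euclidean norm of a vector given as a list of floats."""
    return math.sqrt(sum(t * t for t in x))

u, v = [3.0, 4.0], [6.0, 8.0]
diff = [a - b for a, b in zip(u, v)]

# Reverse triangle inequality: | ||u|| - ||v|| | <= ||u - v||.
lhs = abs(norm(u) - norm(v))   # |5 - 10| = 5
rhs = norm(diff)               # ||(-3, -4)|| = 5
assert lhs <= rhs
print(lhs, rhs)  # equality here because u and v are parallel
```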

Cosine Similarity Inequality

The cosine similarity between two non-zero vectors \mathbf{u} and \mathbf{v} in an inner product space is defined as \text{sim}(\mathbf{u}, \mathbf{v}) = \frac{\mathbf{u} \cdot \mathbf{v}}{\|\mathbf{u}\| \|\mathbf{v}\|}, which ranges from -1 to 1 and measures the cosine of the angle between them. Unlike norms or distances that satisfy the standard triangle inequality, cosine similarity does not, as the associated cosine distance d(\mathbf{u}, \mathbf{v}) = 1 - \text{sim}(\mathbf{u}, \mathbf{v}) can violate d(\mathbf{u}, \mathbf{w}) \leq d(\mathbf{u}, \mathbf{v}) + d(\mathbf{v}, \mathbf{w}). For instance, consider unit vectors in \mathbb{R}^2: \mathbf{u} = (1, 0), \mathbf{v} = (0, 1), and \mathbf{w} = \frac{1}{\sqrt{2}}(1, 1). Here, \text{sim}(\mathbf{u}, \mathbf{v}) = 0, so d(\mathbf{u}, \mathbf{v}) = 1, while \text{sim}(\mathbf{u}, \mathbf{w}) = \text{sim}(\mathbf{v}, \mathbf{w}) = \frac{1}{\sqrt{2}} \approx 0.707, so d(\mathbf{u}, \mathbf{w}) = d(\mathbf{v}, \mathbf{w}) \approx 0.293, and 1 > 0.293 + 0.293, violating the inequality. Despite lacking the metric triangle inequality, cosine similarity admits related bounds that enable efficient approximations in high-dimensional settings. In information retrieval, cosine similarity ranks document relevance using TF-IDF vectors, where non-negative entries ensure non-negative similarities, and derived triangle-like bounds facilitate pruning in similarity searches. In natural language processing, post-2010 embedding models like word2vec use cosine similarity to measure semantic closeness between word vectors, supporting scalable nearest-neighbor queries in tasks such as semantic search and recommendation systems.
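The counterexample above can be reproduced directly (function name illustrative):

```python
import math

def cosine_distance(u, v):
    """Cosine distance 1 - cos(angle) between non-zero 2D vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return 1 - dot / (math.hypot(*u) * math.hypot(*v))

u, v = (1.0, 0.0), (0.0, 1.0)
w = (1 / math.sqrt(2), 1 / math.sqrt(2))

d_uv = cosine_distance(u, v)   # 1.0
d_uw = cosine_distance(u, w)   # about 0.293
d_vw = cosine_distance(v, w)   # about 0.293

# The triangle inequality fails: d(u, v) > d(u, w) + d(w, v).
print(d_uv > d_uw + d_vw)  # True
```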

Minkowski Space Reversal

In Minkowski space, the fundamental arena of special relativity, the spacetime metric is given by ds^2 = -c^2 dt^2 + dx^2 + dy^2 + dz^2, where c is the speed of light, t is the time coordinate, and x, y, z are spatial coordinates. This indefinite metric (signature (- + + +)) defines the interval \sigma between two events A and C via \sigma^2 = ds^2 evaluated along the straight line joining them. Unlike Euclidean distances, the interval's sign distinguishes causal character: timelike (\sigma^2 < 0), lightlike (\sigma^2 = 0), or spacelike (\sigma^2 > 0). For timelike separations, the magnitude |\sigma| (proportional to the proper time \tau = |\sigma|/c) satisfies a reverse triangle inequality: if B lies on a timelike path from A to C, then |\sigma(A,C)| \geq |\sigma(A,B)| + |\sigma(B,C)|. This holds because the straight-line (geodesic) path maximizes proper time among timelike paths connecting the events. The reversal stems from the indefinite metric, which contrasts with positive-definite metrics where the standard triangle inequality \sigma(A,C) \leq \sigma(A,B) + \sigma(B,C) applies. For spacelike separations, the usual inequality holds, treating the interval as a distance. In the timelike case, the reverse inequality is tied to causality: any deviation from the geodesic reduces total proper time, and no timelike worldline can escape the light cone, forbidding superluminal signaling. This property underpins the light cone structure, where the causal future (or past) of an event is bounded by null geodesics, preserving the relativistic ordering of events. A classic example is the twin paradox: one twin travels at relativistic speed to a distant star and returns, while the other remains on Earth. The traveling twin's worldline is broken (with acceleration at the turnaround), yielding total proper time \tau_{\text{travel}} < \tau_{\text{Earth}}, the proper time along the direct timelike interval from departure to reunion. Here, the reverse triangle inequality manifests as the sum of outbound and inbound proper times being less than the stationary twin's elapsed time, highlighting how broken paths shorten proper time.
Hermann Minkowski introduced this spacetime in his 1908 address "Space and Time," unifying space and time into a four-dimensional continuum with the above metric to geometrize special relativity. The reverse inequality's role in light cone structures became central to understanding causality in relativity. In modern quantum field theory, these cones extend to enforce microcausality: field operators at spacelike separation commute, ensuring no observable effects propagate outside the light cone, a cornerstone of local quantum theories on Minkowski spacetime.
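The twin-paradox computation can be sketched numerically in units where c = 1 (the names and the 0.8c scenario are illustrative):

```python
import math

C = 1.0  # work in units where the speed of light c = 1

def proper_time(dt, dx):
    """Proper time along a straight timelike segment with coordinate
    extent (dt, dx); requires |dx| < C * dt."""
    return math.sqrt(dt ** 2 - dx ** 2 / C ** 2)

# Twin paradox: the stay-at-home twin vs. a traveler at speed 0.8c who
# turns around after 5 years of coordinate time (4 light-years out).
tau_home = proper_time(10.0, 0.0)                            # direct worldline
tau_travel = proper_time(5.0, 4.0) + proper_time(5.0, -4.0)  # broken worldline

print(tau_home, tau_travel)    # 10.0 vs 6.0
assert tau_home >= tau_travel  # reverse triangle inequality
```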
