Binary operation
A binary operation on a nonempty set S is a function * : S \times S \to S that combines any two elements a, b \in S to produce a unique element a * b \in S, ensuring the result remains within the set.[1] This operation is often denoted using infix notation, such as a * b, and is fundamental to arithmetic and algebraic computations where the inputs and output share the same domain.[2] Binary operations form the cornerstone of abstract algebra, enabling the definition of more complex structures such as groups, rings, and semigroups by imposing additional properties on the operation.[3]

Key properties include associativity, where (a * b) * c = a * (b * c) for all a, b, c \in S, which allows unambiguous grouping of multiple operands; commutativity, where a * b = b * a; the existence of an identity element e \in S such that a * e = e * a = a; and invertibility, where for each a \in S there exists b \in S with a * b = b * a = e.[4] These properties distinguish different classes of binary operations: addition on the integers is associative and commutative with identity 0, while matrix multiplication is associative but not commutative.[5]

The concept of binary operations has roots in classical mathematics, with early examples appearing in number theory and geometry, but it gained formal prominence in the 19th century through the development of group theory by mathematicians such as Évariste Galois and Arthur Cayley, who used binary operations to model symmetries and solve polynomial equations.[6] Today, binary operations extend beyond pure mathematics into computer science, where they underpin data structures and algorithms,[7] and into physics, where they describe interactions in quantum mechanics and relativity.[8]

Definition and Basic Concepts
Definition
In mathematics, a binary operation on a set S is a function *: S \times S \to S, where S \times S denotes the Cartesian product of S with itself, consisting of all ordered pairs (a, b) with a, b \in S, and the image of each such pair under * is an element of S.[9] This means that for every pair of elements in S, the operation produces a unique result also belonging to S, forming what is known as a magma, the most basic algebraic structure: a set equipped with a binary operation.[3] A familiar example is addition on the set of integers, where a + b yields another integer for any integers a and b.

Binary operations fit into a broader classification of operations by arity, the number of inputs: a unary operation takes a single element from a set and produces another element of the set (e.g., negation on integers), while a nullary operation produces a constant element without any inputs (e.g., the identity element of a group). Binary operations specifically emphasize the combination of exactly two elements, serving as a foundational tool in abstract algebra for studying structures like groups and rings.[9]

Early recognition of the importance of such "laws of composition" came in the 19th century through the development of abstract algebra by mathematicians like Arthur Cayley and Richard Dedekind, building on arithmetic examples that had been studied for centuries.[10] The specific term "binary operation" emerged in the early 20th century.[11] This formalization assumed familiarity with basic set theory, including the Cartesian product as a means to pair elements systematically.

Closure
In mathematics, the closure property of a binary operation on a set S requires that for all elements a, b \in S, the result a * b also belongs to S.[12] This ensures the operation maps the Cartesian product S \times S into S itself, forming a well-defined structure without elements escaping the set.[13]

The closure property is foundational to algebraic structures such as semigroups and groups, where it guarantees that repeated applications of the operation remain within the set, enabling the study of associativity, identities, and inverses.[13] In a semigroup, closure combined with associativity defines the minimal requirements for an algebraic system, while in a group it supports additional axioms like the existence of inverses, as seen in structures like the integers under addition.[13] Without closure, these structures could not be consistently defined or analyzed, as operations would produce extraneous elements requiring an expanded domain.[12]

A classic example of a non-closed binary operation is subtraction on the natural numbers \mathbb{N} = \{1, 2, 3, \dots\}, where 2 - 3 = -1 \notin \mathbb{N}, violating closure.[12] In contrast, addition on \mathbb{N} is closed, as the sum of any two natural numbers remains a natural number.

To see why closure is necessary for iterated operations, consider a binary operation * on S lacking closure, so there exist a, b \in S with a * b = c \notin S. Any further iteration involving c, such as c * d for d \in S, would be undefined within S, preventing the formation of finite products like a * b * d without leaving the set.[13] Thus, closure ensures that all finite sequences of elements of S can be combined via the operation while staying entirely within S, which is essential for defining higher-order structures like subgroups or quotients.[13]
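For a finite set, the closure check is a finite computation over all ordered pairs. A minimal Python sketch (the set \mathbb{Z}_5 and the two sample operations are illustrative choices, not drawn from the sources above):

```python
from itertools import product

def is_closed(s, op):
    """Return True if op(a, b) lies in s for every ordered pair (a, b) from s."""
    return all(op(a, b) in s for a, b in product(s, repeat=2))

z5 = set(range(5))                               # integers modulo 5
print(is_closed(z5, lambda a, b: (a + b) % 5))   # True: addition mod 5 stays in z5
print(is_closed(z5, lambda a, b: a - b))         # False: e.g. 2 - 3 = -1 escapes z5
```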
Domain, Codomain, and Range

In the context of abstract algebra, a binary operation on a set S is fundamentally a function whose domain is the Cartesian product S \times S, consisting of all ordered pairs (a, b) where a, b \in S.[14] This domain structure distinguishes binary operations from unary functions, which operate on individual elements of S alone, by requiring two inputs combined in a specific order.[3]

The codomain of a binary operation is the set into which the outputs are mapped; for operations defined on S, it is typically S itself, ensuring that the result of applying the operation to any pair from S remains within S.[15] However, the codomain can more generally be any superset T with S \subseteq T, allowing the operation to produce elements outside S while still starting from elements of S.[3] For instance, subtraction defined on the natural numbers \mathbb{N} (positive integers) has domain \mathbb{N} \times \mathbb{N} and codomain the integers \mathbb{Z}, since differences can be negative.[3]

The range, also known as the image, of a binary operation is the subset of the codomain consisting of all outputs actually obtained by applying the operation to elements of the domain.[14] The range may be a proper subset of the codomain; for example, multiplication on the positive real numbers \mathbb{R}^+ can be defined with domain \mathbb{R}^+ \times \mathbb{R}^+ and codomain \mathbb{R}, but the range is precisely \mathbb{R}^+, as products of positives are always positive.[3] When the range is contained within S, the operation satisfies closure, a property often required in algebraic structures.[15]
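The gap between codomain and range is easy to exhibit computationally. In the sketch below (using a small finite slice of \mathbb{N} as an illustrative stand-in), the image of subtraction contains negative integers and so escapes the original set:

```python
from itertools import product

# Subtraction on a finite slice of the naturals: the domain is n x n, the
# codomain can be taken to be the integers, and the image (range) is the
# set of values actually attained -- here it includes negatives.
n = range(1, 5)                          # stand-in for {1, 2, 3, 4}
image = {a - b for a, b in product(n, repeat=2)}
print(sorted(image))                     # [-3, -2, -1, 0, 1, 2, 3]
print(image <= set(n))                   # False: the range is not contained in n
```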
Properties

Associativity
In mathematics, a binary operation * on a set S is associative if, for all elements a, b, c \in S, the equality (a * b) * c = a * (b * c) holds.[16] This property ensures that the grouping of operands does not affect the outcome of the operation, allowing expressions involving multiple applications of * to be evaluated without ambiguity regarding parenthesization.[17]

The associativity of a binary operation has significant implications for algebraic structures, as it enables the unambiguous definition of iterated operations and powers of elements, such as a^n for n \geq 1, by recursively applying the operation without concern for bracketing.[18] For instance, integer addition is associative, and this property underpins the well-defined nature of sums like a + b + c.[16]

Associativity plays a foundational role in defining key algebraic structures, such as semigroups and monoids. A semigroup is a set equipped with an associative binary operation, while a monoid extends this by including an identity element.[18] The concept of associativity in abstract algebraic settings was advanced in the 19th century, notably by Arthur Cayley, who in 1854 incorporated it into his pioneering definition of groups as sets with an associative operation satisfying additional axioms like identity and inverses.[19]

Not all binary operations are associative; a prominent non-associative example is the cross product of vectors in three-dimensional Euclidean space, where \mathbf{u} \times (\mathbf{v} \times \mathbf{w}) \neq (\mathbf{u} \times \mathbf{v}) \times \mathbf{w} in general, as can be verified by substituting specific vectors such as the standard basis vectors \mathbf{i}, \mathbf{j}, \mathbf{k}.[20] To verify associativity for a given binary operation, one checks the defining equality by direct substitution and computation for all relevant elements of S, using the operation's explicit form to equate the left and right sides.[16]
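On a finite set, this verification reduces to a brute-force check over all triples, as in the following sketch (the modulus and the sample operations are illustrative assumptions):

```python
from itertools import product

def is_associative(s, op):
    """Check (a * b) * c == a * (b * c) for every triple from the finite set s."""
    return all(op(op(a, b), c) == op(a, op(b, c))
               for a, b, c in product(s, repeat=3))

z6 = range(6)
print(is_associative(z6, lambda a, b: (a + b) % 6))  # True: modular addition
print(is_associative(z6, lambda a, b: a - b))        # False: subtraction
```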
Commutativity

In algebra, a binary operation * on a set S is said to be commutative if a * b = b * a for all a, b \in S. This property implies that the order of the operands does not affect the outcome of the operation, allowing elements to be rearranged freely in expressions involving multiple applications of *.[21]

The commutativity of a binary operation has significant implications in algebraic structures. It simplifies computations by permitting the reordering of terms, which streamlines algebraic manipulations and the solution of equations without needing to track operand positions.[21] Furthermore, when a set S is equipped with two binary operations, addition and multiplication, satisfying certain axioms including commutativity of multiplication, the resulting structure is a commutative ring, a foundational concept in commutative algebra used to study polynomials, ideals, and geometric objects like varieties.

Not all binary operations are commutative; a prominent counterexample is matrix multiplication on the set of n \times n matrices over the real numbers, where for n \geq 2 the product AB generally differs from BA.[22] This non-commutativity arises because matrix multiplication corresponds to the composition of linear transformations, where the order of application matters.

In physics and geometry, commutativity of binary operations often reflects underlying symmetries of the system. For instance, in group theory, commutative groups (also known as abelian groups) model symmetries such as translations in Euclidean space or certain rotations, where the order of successive transformations does not alter the final configuration.[23] Such structures capture the invariance of physical laws under symmetric changes, as seen in conservation principles derived from Noether's theorem.[24]

To verify commutativity for a given binary operation, one tests the defining equality through direct substitution: select elements a and b from S, compute both a * b and b * a, and confirm they are identical for all pairs, by exhaustion for a finite set or by a general argument otherwise. When paired with associativity, commutativity yields a commutative semigroup, facilitating further structural analysis.
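The matrix counterexample is concrete even in the 2 \times 2 case. This sketch (written with plain nested tuples to stay dependency-free) exhibits a pair with AB \neq BA:

```python
def matmul2(a, b):
    """Multiply two 2x2 matrices represented as nested tuples."""
    return tuple(
        tuple(sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2))
        for i in range(2)
    )

A = ((1, 1), (0, 1))
B = ((1, 0), (1, 1))
print(matmul2(A, B))  # ((2, 1), (1, 1))
print(matmul2(B, A))  # ((1, 1), (1, 2)) -- so AB != BA
```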
Identity Element

In a binary structure (S, *), where S is a set and *: S \times S \to S is a binary operation, an identity element is an element e \in S such that a * e = e * a = a for all a \in S.[25] This element acts as a neutral or "do-nothing" component under the operation, preserving every element unchanged when combined with it on either side.[26]

The identity element, when it exists, is unique in the structure. To see this, suppose e and f are both identity elements in (S, *). Then, since f is an identity, e * f = e; but since e is also an identity, e * f = f. Thus, e = f.[25] This uniqueness holds without assuming associativity or commutativity of the operation.[27]

Common examples include the real numbers \mathbb{R} under addition, where 0 serves as the identity since a + 0 = 0 + a = a for all a \in \mathbb{R}, and under multiplication, where 1 is the identity because a \cdot 1 = 1 \cdot a = a for all a \in \mathbb{R}.[28]

The presence of an identity element plays a central role in defining monoids, which are associative binary structures equipped with such an element. Specifically, a monoid is a set G with an associative binary operation and an identity e \in G satisfying e \cdot a = a \cdot e = a for all a \in G.[29] In contrast, more basic structures like magmas (sets with a binary operation but no additional requirements) may lack an identity altogether; for instance, the positive integers \mathbb{Z}_{\geq 1} under addition form a magma without an identity, as no element e \geq 1 satisfies a + e = e + a = a for all a \geq 1.[30]

In non-commutative binary operations, one-sided identities may exist independently: a left identity e satisfies e * a = a for all a \in S, while a right identity satisfies a * e = a for all a \in S. A two-sided identity is both left and right. If both a left identity and a right identity exist, they coincide and form the unique two-sided identity.[27]
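On a finite set, an identity (if any) can be found by testing each candidate against every element, as in this minimal sketch (the sample operations are illustrative choices):

```python
def find_identity(s, op):
    """Return the two-sided identity of op on the finite set s, or None."""
    for e in s:
        if all(op(e, a) == a == op(a, e) for a in s):
            return e
    return None

print(find_identity(range(5), lambda a, b: (a + b) % 5))   # 0: additive identity mod 5
print(find_identity(range(5), max))                        # 0: max(0, a) == a
print(find_identity(range(1, 5), lambda a, b: a + b))      # None: no identity exists
```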
Inverse Element

In a set S equipped with a binary operation * and an identity element e \in S, an element b \in S is called a two-sided inverse of a \in S if a * b = e and b * a = e.[25] More generally, b is a left inverse of a if b * a = e, and a right inverse if a * b = e.[31] The existence of an inverse for any a presupposes the presence of an identity element in the structure.[25]

When the binary operation is associative, the existence of both a left inverse and a right inverse for a implies they are equal, forming a unique two-sided inverse.[31] This uniqueness holds in particular in groups, whose structure includes associativity, an identity, and inverses for all elements; there, each element has precisely one inverse.[5] Without associativity, left and right inverses may differ or fail to coincide.[31]

A classic example is the additive inverse under addition on the real numbers \mathbb{R}, where the identity is 0 and the inverse of a is -a, satisfying a + (-a) = 0 = (-a) + a.[32] In contrast, not all elements are invertible; for instance, under multiplication on \mathbb{R}, the element 0 has no inverse because there exists no b \in \mathbb{R} such that 0 \cdot b = 1.[32] Similarly, in the integers \mathbb{Z} under multiplication, only \pm 1 possess inverses, while all other elements, including 0, do not.[32]
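Given an identity, inverses on a finite set can likewise be found by search. A minimal sketch (modular arithmetic chosen as an illustrative setting) that also exhibits the non-invertibility of 0 under multiplication:

```python
def find_inverse(s, op, e, a):
    """Return a two-sided inverse of a relative to identity e, or None."""
    for b in s:
        if op(a, b) == e == op(b, a):
            return b
    return None

z5 = range(5)
add5 = lambda a, b: (a + b) % 5
mul5 = lambda a, b: (a * b) % 5
print(find_inverse(z5, add5, 0, 2))  # 3, since 2 + 3 = 0 (mod 5)
print(find_inverse(z5, mul5, 1, 3))  # 2, since 3 * 2 = 1 (mod 5)
print(find_inverse(z5, mul5, 1, 0))  # None: 0 has no multiplicative inverse
```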
Idempotence

In the context of binary operations, an element a in a set S equipped with a binary operation \ast is called idempotent if a \ast a = a.[33] A binary operation \ast on S is said to be idempotent (sometimes strongly idempotent) if every element of S is idempotent, that is, a \ast a = a for all a \in S.[33] This contrasts with the weaker situation in which only some elements of S satisfy the condition, so that certain elements may be combined with themselves without change while others cannot.[33]

Examples of idempotent binary operations abound in foundational algebraic structures. In Boolean algebra, the logical OR operation \lor is idempotent because p \lor p = p for any proposition p, preserving the truth value upon repetition.[34] Similarly, the set intersection operation \cap on the power set of a universe is idempotent, as A \cap A = A for any set A; in particular, each singleton set satisfies \{x\} \cap \{x\} = \{x\}, illustrating the property at the level of individual elements.[35]

Idempotence finds significant applications in operator theory and algebraic structures. In linear algebra, a projection operator onto a subspace is represented by an idempotent matrix P satisfying P^2 = P, ensuring repeated application yields the same projection without alteration.[36] In semigroup theory, a band is defined as a semigroup whose binary operation is idempotent, meaning every element e fulfills e \cdot e = e; bands model structures like transformation semigroups with fixed points under composition.[37]

Within lattice theory, idempotence is a core property of the meet \wedge and join \vee operations, where a \wedge a = a and a \vee a = a hold for all elements a.[38] This relates directly to the absorption laws, such as a \vee (a \wedge b) = a, which together with idempotence ensure that one operation absorbs the result of the other without redundancy, forming part of the axiomatic basis for modular and distributive lattices.[38]
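Whether an operation is idempotent on a finite carrier is again a one-line exhaustive check; the sketch below (sample operations chosen for illustration) confirms OR and min and rejects XOR:

```python
def is_idempotent(s, op):
    """Check a * a == a for every element of the finite collection s."""
    return all(op(a, a) == a for a in s)

print(is_idempotent([True, False], lambda p, q: p or q))  # True: logical OR
print(is_idempotent([True, False], lambda p, q: p != q))  # False: XOR
print(is_idempotent(range(5), min))                       # True: min(a, a) == a
```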
Examples

Arithmetic Operations
Addition serves as a fundamental binary operation on the set of real numbers \mathbb{R}, where for any a, b \in \mathbb{R}, a + b \in \mathbb{R}, ensuring closure under the operation.[39] This operation is associative, satisfying (a + b) + c = a + (b + c) for all a, b, c \in \mathbb{R}, and commutative, with a + b = b + a.[39] The additive identity element is 0, such that a + 0 = 0 + a = a for all a \in \mathbb{R}, and every element a has an inverse -a, where a + (-a) = (-a) + a = 0. Similar properties hold for addition on the integers \mathbb{Z}, which is closed, associative, and commutative, with identity 0 and inverses.[3]

Multiplication, denoted \times or \cdot, is another binary operation on \mathbb{R}, closed in that a \times b \in \mathbb{R} for all a, b \in \mathbb{R}.[39] It shares associativity and commutativity with addition: (a \times b) \times c = a \times (b \times c) and a \times b = b \times a.[39] The multiplicative identity is 1, satisfying a \times 1 = 1 \times a = a, but inverses exist only for nonzero elements, as a \times (1/a) = 1 for a \neq 0, while 0 lacks an inverse.[40] These traits also apply to multiplication on \mathbb{Z}, except that within \mathbb{Z} only \pm 1 have inverses.[3]

Subtraction, defined as a - b = a + (-b), is a binary operation on \mathbb{Z} and \mathbb{R}, which are closed under it, but it is neither associative nor commutative, as (a - b) - c \neq a - (b - c) and a - b \neq b - a in general.[3] On the natural numbers \mathbb{N}, subtraction is not closed, since a - b fails to be a natural number whenever a \leq b.[3] Division, a / b = a \times (1/b) for b \neq 0, is a binary operation on the nonzero reals \mathbb{R} \setminus \{0\}, closed there, but it lacks associativity and commutativity and is undefined for division by zero.[40] On \mathbb{N}, division often yields non-integers, violating closure.[41]

Vector addition extends scalar addition component-wise to \mathbb{R}^n, where for \mathbf{u} = (u_1, \dots, u_n) and \mathbf{v} = (v_1, \dots, v_n), \mathbf{u} + \mathbf{v} = (u_1 + v_1, \dots, u_n + v_n) \in \mathbb{R}^n, ensuring closure.[42] This operation is associative and commutative, with identity \mathbf{0} = (0, \dots, 0) and inverses -\mathbf{u}.[43] It generalizes to any finite dimension n \geq 1.[44]

Arithmetic operations like addition and multiplication on the natural numbers provided prototypes for the structures of abstract algebra, formalized through the Peano axioms, which define the naturals via a successor function S and construct addition recursively as a + 0 = a and a + S(b) = S(a + b).[45] These axioms, introduced by Giuseppe Peano in 1889, underpin the development of algebraic systems by establishing closure and recursive properties for arithmetic.
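The Peano recursion for addition translates directly into code. The following sketch (which piggybacks on built-in integers to represent successors, an implementation convenience rather than part of the axioms) computes sums by repeated succession:

```python
def peano_add(a, b):
    """Addition on the naturals via the Peano recursion:
    a + 0 = a, and a + S(b) = S(a + b), with S(n) = n + 1 as successor."""
    if b == 0:
        return a                       # base case: a + 0 = a
    return peano_add(a, b - 1) + 1     # a + S(b - 1) = S(a + (b - 1))

print(peano_add(3, 4))  # 7, computed purely by repeated succession
```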
Logical Operations

In Boolean logic, binary operations act on truth values, typically denoted true (T) and false (F), and form the foundation of propositional logic and digital circuit design. These operations, often visualized through truth tables that list all possible input combinations and their outputs, enable the evaluation of compound statements and are essential for implementing computational logic in hardware and software.[46]

The logical AND operation, symbolized as ∧, yields true only when both inputs are true; it is false otherwise. This makes it useful for conditions requiring all prerequisites to be satisfied, such as in conditional branching in programming. AND is idempotent, as applying it to identical inputs returns the input itself (a ∧ a = a), and it has identity element true, since true ∧ a = a ∧ true = a for any a. Its truth table is as follows:

| A | B | A ∧ B |
|---|---|---|
| T | T | T |
| T | F | F |
| F | T | F |
| F | F | F |

The logical OR operation, symbolized as ∨, yields true when at least one input is true and false only when both inputs are false. Like AND, it is idempotent (a ∨ a = a), and its identity element is false. Its truth table is as follows:

| A | B | A ∨ B |
|---|---|---|
| T | T | T |
| T | F | T |
| F | T | T |
| F | F | F |

The exclusive OR (XOR) operation, symbolized as ⊕, yields true exactly when its inputs differ. Its identity element is false, but it is not idempotent, since a ⊕ a = false for every a. Its truth table is as follows:

| A | B | A ⊕ B |
|---|---|---|
| T | T | F |
| T | F | T |
| F | T | T |
| F | F | F |

The NAND operation, the negation of AND, yields false only when both inputs are true. It is functionally complete, meaning every Boolean function can be expressed using NAND alone, which underlies its prominence in digital circuit design. Its truth table is as follows:

| A | B | A NAND B |
|---|---|---|
| T | T | F |
| T | F | T |
| F | T | T |
| F | F | T |

The NOR operation, the negation of OR, yields true only when both inputs are false. Like NAND, it is functionally complete on its own. Its truth table is as follows:

| A | B | A NOR B |
|---|---|---|
| T | T | F |
| T | F | F |
| F | T | F |
| F | F | T |
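
All five truth tables above can be regenerated mechanically by treating each connective as a binary operation on {True, False}; the following sketch (names and output format are illustrative choices) does so:

```python
from itertools import product

# Each connective as a binary operation on {True, False}.
connectives = {
    "AND":  lambda p, q: p and q,
    "OR":   lambda p, q: p or q,
    "XOR":  lambda p, q: p != q,
    "NAND": lambda p, q: not (p and q),
    "NOR":  lambda p, q: not (p or q),
}

for name, op in connectives.items():
    rows = ", ".join(f"{p:d}{q:d}->{op(p, q):d}"
                     for p, q in product([True, False], repeat=2))
    print(f"{name:>4}: {rows}")   # e.g. AND: 11->1, 10->0, 01->0, 00->0
```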
Function Composition
Function composition provides a fundamental example of a binary operation defined on functions. Given two functions f: A \to B and g: C \to A, where the codomain of g matches the domain of f, their composition f \circ g: C \to B is defined by (f \circ g)(x) = f(g(x)) for all x \in C. This operation combines the functions to produce a new function whose domain is the domain of g and whose codomain is the codomain of f.[51] In particular, on the set of all functions from a fixed set S to itself, composition is a binary operation in the strict sense, since the composite of two such functions is again a function from S to S.

Function composition is associative, meaning that for compatible functions f, g, and h, (f \circ g) \circ h = f \circ (g \circ h). However, it is generally not commutative; for instance, if f(x) = x^2 and g(x) = x + 2 on the real numbers, then f(g(x)) = (x + 2)^2 = x^2 + 4x + 4, while g(f(x)) = x^2 + 2, so f \circ g \neq g \circ f. The identity element for this operation is the identity function \mathrm{id}_D: D \to D defined by \mathrm{id}_D(x) = x for all x \in D, satisfying f \circ \mathrm{id}_A = \mathrm{id}_B \circ f = f whenever the domains and codomains align.

To illustrate on finite sets, consider the set S = \{0, 1, 2\} and functions f, g: S \to S where f(0) = 1, f(1) = 2, f(2) = 0, and g(0) = 2, g(1) = 0, g(2) = 1. The composition f \circ g yields f(g(0)) = f(2) = 0, f(g(1)) = f(0) = 1, f(g(2)) = f(1) = 2, resulting in the function mapping 0 \mapsto 0, 1 \mapsto 1, 2 \mapsto 2, which is the identity on S.

In calculus contexts, composition appears in operations like successive differentiation, where the derivative operator D satisfies D \circ D = D^2, representing the second derivative, though the focus here remains on set-theoretic functions.
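The finite example above translates directly into code; the sketch below (encoding f and g as dictionary lookups, an illustrative choice) confirms that f \circ g is the identity on S:

```python
def compose(f, g):
    """Return f o g, the function x -> f(g(x))."""
    return lambda x: f(g(x))

# The finite example from the text: f and g on S = {0, 1, 2}.
f = {0: 1, 1: 2, 2: 0}.get
g = {0: 2, 1: 0, 2: 1}.get

fg = compose(f, g)
print([fg(x) for x in (0, 1, 2)])  # [0, 1, 2]: f o g is the identity on S
```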
Notation and Representation

Symbolic Notation
Binary operations in mathematics are most commonly expressed using infix notation, where the operator symbol is placed between the two operands, as in a + b for addition or a \cdot b for multiplication.[3] This convention aids readability by mimicking natural language structure and has become the standard for arithmetic and algebraic expressions.[3]

The plus sign (+) first appeared in print in Johannes Widman's 1489 Mercantile Arithmetic, initially denoting surpluses and deficits before evolving into the general addition symbol by the time of Robert Recorde's 1557 The Whetstone of Witte.[52] For multiplication, William Oughtred introduced the cross symbol × in his 1631 Clavis Mathematicae, while Gottfried Wilhelm Leibniz advocated the centered dot (·) in a 1698 letter to Johann Bernoulli, preferring the dot to the cross to avoid confusion with the variable x.[52]

Prefix and postfix notations, where the operator respectively precedes or follows the operands (e.g., + a b or a b +), are rare in standard mathematical writing but appear in specialized contexts like logical expressions and computer evaluation algorithms.[53] These forms, known as Polish (prefix) and reverse Polish (postfix) notations, were developed by the logician Jan Łukasiewicz in the 1920s to eliminate ambiguity in propositional logic without parentheses.[54] Juxtaposition, placing the operands adjacent with no intervening symbol (e.g., ab for multiplication or fg for function composition), serves as a compact notation for certain binary operations, a practice standardized after René Descartes's 1637 La Géométrie.[55]

In programming languages, operator overloading extends symbolic notation by allowing the same symbol to represent different binary operations depending on operand types, such as + for numeric addition or string concatenation.[56] This feature was pioneered in languages like Ada (1980) and popularized in C++ (introduced in 1985 by Bjarne Stroustrup) to enable intuitive syntax for user-defined types, though it requires careful implementation to avoid confusion.[57]

The historical evolution of these notations traces from early symbolic innovations by figures like Leibniz, who emphasized clear infix forms with dots for multiplication, to the diverse Unicode symbols (e.g., U+22C5 for the dot operator) now supporting precise rendering in modern mathematical typography.[52][58]
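Postfix notation lends itself to mechanical evaluation: each operator applies to the two most recently computed values, so a single stack suffices and parentheses never arise. A minimal sketch of such an evaluator (the token format and operator set are illustrative choices):

```python
import operator

def eval_rpn(tokens):
    """Evaluate a reverse Polish (postfix) expression such as ['3', '4', '+'].
    Each binary operator pops its two operands and pushes the result."""
    ops = {"+": operator.add, "-": operator.sub,
           "*": operator.mul, "/": operator.truediv}
    stack = []
    for tok in tokens:
        if tok in ops:
            b = stack.pop()            # right operand was pushed last
            a = stack.pop()
            stack.append(ops[tok](a, b))
        else:
            stack.append(float(tok))
    return stack.pop()

print(eval_rpn("3 4 + 2 *".split()))   # 14.0, i.e. (3 + 4) * 2
```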
Tabular Representation

A tabular representation of a binary operation on a finite set, known as a Cayley table, arranges the elements of the set along the rows and columns, with the entry at the intersection of row a and column b giving the result a * b. This method, introduced by Arthur Cayley in his 1854 paper on group theory, provides a complete and explicit depiction of the operation, facilitating the analysis of algebraic structures.[59][60]

To construct a Cayley table, list the set's elements in a consistent order for both rows and columns, then compute and fill each entry according to the operation's definition. For instance, consider the set \{0, 1, 2\} under addition modulo 3, where the operation yields the remainder when the sum is divided by 3. The resulting table is:

| +_3 | 0 | 1 | 2 |
|---|---|---|---|
| 0 | 0 | 1 | 2 |
| 1 | 1 | 2 | 0 |
| 2 | 2 | 0 | 1 |
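
Generating a Cayley table programmatically follows the same recipe; the sketch below (a minimal illustration) reproduces the table above for addition modulo 3:

```python
def cayley_table(elements, op):
    """Build the Cayley table of op: entry [a][b] holds a * b."""
    return [[op(a, b) for b in elements] for a in elements]

z3 = range(3)
for a, row in zip(z3, cayley_table(z3, lambda a, b: (a + b) % 3)):
    print(a, row)
# 0 [0, 1, 2]
# 1 [1, 2, 0]
# 2 [2, 0, 1]
```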