Binary
The binary numeral system is a positional notation for expressing numbers using only two distinct symbols, typically 0 and 1, with each digit's place value determined by its position as a successive power of 2, starting from 2^0 at the rightmost digit.[1] This base-2 system enables efficient representation of integers, fractions, and other data structures essential for arithmetic operations and logical processing.[2] Binary's foundational role in computing arises from its direct mapping to the physical states of electronic components, such as transistors switching between conducting (1) and non-conducting (0) modes, allowing reliable storage, transmission, and manipulation of information at scale.[3] Every digital device, from microprocessors to memory units, processes data in binary form, where sequences of bits (binary digits) encode instructions, numbers, text, and images via standards like ASCII or Unicode.[4] Its simplicity lends itself to Boolean algebra for logic gates (AND, OR, NOT), which form the building blocks of circuits, enabling complex computations through layered abstraction while avoiding the instabilities of intermediate analog states.[5]

Although Gottfried Wilhelm Leibniz formalized binary arithmetic in 1703, drawing inspiration from the ancient Chinese I Ching, whose yin-yang trigrams constitute a binary-like divination system predating him by millennia, practical positional binary emerged in European mathematics during the 16th and 17th centuries amid explorations of non-decimal bases.[6] Empirical evidence also indicates hybrid binary-decimal systems among Polynesian islanders on Mangareva around the 13th-14th centuries, used for mental accounting in resource-scarce environments, challenging Eurocentric narratives of invention.[7] In the 20th century, binary's central role in the digital revolution was cemented by Claude Shannon's 1937 thesis applying Boolean logic to telephone switching, demonstrating its scalability for automated computation.[8]

Definition and Fundamentals
General Definition
Binary denotes a state, system, or structure characterized by two parts, elements, or states, often implying duality or opposition. Dictionaries define it as "compounded or consisting of or marked by two things or parts," emphasizing composition from dual components rather than multiplicity.[9] This contrasts with unary (single-element) systems, highlighting binary's fundamental reliance on pairwise relations, as in binary choices limited to two mutually exclusive options.[10][11] In broader application, binary describes phenomena reducible to two categories or values, such as on/off states in mechanisms or true/false in logic, without implying equality between the pair; interactions between the two may favor one outcome over the other depending on empirical conditions.[12] This twofold essence underpins its utility across disciplines, where complexity emerges from interactions within the binary framework rather than from inherent gradations. Specific implementations, such as numeral systems or astronomical pairs, build upon this core duality.[13]

Etymology and Historical Usage
The term "binary" entered English in the mid-15th century from Late Latin binarius, meaning "consisting of two" or "twofold," derived from bini ("two by two" or "in pairs") and ultimately from the Proto-Indo-European root dwo- ("two").[14][9] This etymological foundation reflects a fundamental duality, akin to paired elements or dual structures, as seen in Latin usages for matched pairs like yoked animals.[9] Historically, "binary" described dualistic phenomena across disciplines by the 16th century, such as in astronomy for paired celestial bodies or in chemistry for compounds formed from two elements, though systematic application in mathematics emerged later.[14] In mathematics and logic, Gottfried Wilhelm Leibniz formalized the binary numeral system in his 1703 treatise Explication de l'Arithmétique Binaire, drawing inspiration from the ancient Chinese I Ching's hexagrams, which encode binary-like patterns dating to the 9th century BCE, though Leibniz credited the system's creation to divine creation from nothing (0) and unity (1).[15] Earlier precursors to binary concepts appear in Indian scholar Pingala's 3rd-century BCE prosody treatise Chandahshastra, using binary-like sequences for meter patterns, and in Polynesian khipu-like artifacts from around 1350 CE evidencing base-2 counting.[7][16] The term gained prominence in 19th-century logic through George Boole's The Laws of Thought (1854), where binary operations (e.g., AND, OR) modeled deductive reasoning as two-valued propositions.[17] By the 20th century, "binary" standardized digital computing, with Claude Shannon's 1937 master's thesis applying Boolean algebra to electrical circuits, establishing binary as the foundational representation for data and operations in electronic systems.[16] This evolution underscores the term's shift from descriptive duality to a precise, verifiable framework for encoding information, validated empirically through hardware implementations like the ENIAC (1945), which processed binary arithmetic at speeds up to 5,000 additions per second.[17]Mathematics and Logic
Binary Numeral System
The binary numeral system is a positional numeral system employing base 2, utilizing only the digits 0 and 1 to represent values.[18][19] Each digit, known as a bit, corresponds to a coefficient (0 or 1) multiplied by the corresponding power of 2, with positions read from right to left starting at 2^0 for the least significant bit.[20][21] This structure allows any non-negative integer to be uniquely expressed as a sum of distinct powers of 2, where the presence of a 1 in a position indicates inclusion of that power.[22] To convert a binary number to its decimal equivalent, multiply each bit by 2 raised to the power of its position index (starting from 0 on the right) and sum the results. For instance, the binary numeral 1101 equals 1×2^3 + 1×2^2 + 0×2^1 + 1×2^0 = 8 + 4 + 0 + 1 = 13 in decimal.[23][24] Conversely, decimal-to-binary conversion involves repeated division by 2, recording remainders as bits from least to most significant.[25] Binary fractions extend this by using negative exponents, such as 0.1 in binary representing 1×2^-1 = 0.5 in decimal.[26] The table below works through the example 10110, and a short code sketch of both conversions follows it.

| Position (from right) | Power of 2 | Binary Example: 10110 |
|---|---|---|
| 4 | 2^4 = 16 | 1 (16) |
| 3 | 2^3 = 8 | 0 (0) |
| 2 | 2^2 = 4 | 1 (4) |
| 1 | 2^1 = 2 | 1 (2) |
| 0 | 2^0 = 1 | 0 (0) |
| Total | | 22 (decimal) |
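The conversion procedures above can be expressed as a short sketch; Python is used here only for illustration, and the function names are hypothetical rather than part of any standard library.

```python
def binary_to_decimal(bits: str) -> int:
    """Convert a binary numeral such as '1101' to its decimal value."""
    value = 0
    for bit in bits:
        value = value * 2 + int(bit)  # shift accumulated value left, add the new bit
    return value

def decimal_to_binary(n: int) -> str:
    """Convert a non-negative integer to binary by repeated division by 2."""
    if n == 0:
        return "0"
    remainders = []
    while n > 0:
        remainders.append(str(n % 2))  # each remainder is the next least significant bit
        n //= 2
    return "".join(reversed(remainders))

print(binary_to_decimal("1101"))   # 13
print(binary_to_decimal("10110"))  # 22, matching the table above
print(decimal_to_binary(13))       # '1101'
```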
Binary Operations and Boolean Algebra
A binary operation on a set S is a function \star: S \times S \to S that associates to each ordered pair (a, b) of elements from S a unique element a \star b in S. This concept formalizes operations like addition and multiplication on the real numbers, where +: \mathbb{R} \times \mathbb{R} \to \mathbb{R} and \times: \mathbb{R} \times \mathbb{R} \to \mathbb{R}, preserving closure within the set. Binary operations may exhibit properties such as associativity ((a \star b) \star c = a \star (b \star c)), commutativity (a \star b = b \star a), the existence of an identity element e with a \star e = e \star a = a, and the existence of inverses, where for a given a there is a b such that a \star b = b \star a = e. For instance, matrix multiplication on the set of n \times n matrices over the reals is associative but neither commutative nor invertible for all elements.

Boolean algebra extends binary operations to the algebraic study of logic, operating on a set of two elements, typically \{0, 1\} or \{\text{false}, \text{true}\}, with binary operations conjunction (AND, denoted \land or \cdot), disjunction (OR, denoted \lor or +), and the unary operation negation (NOT, denoted \neg or '). Formally introduced by George Boole in his 1854 work An Investigation of the Laws of Thought, it satisfies axioms including commutativity (x \land y = y \land x), associativity, distributivity (x \land (y \lor z) = (x \land y) \lor (x \land z)), identity elements (1 for \land, 0 for \lor), complements (x \land \neg x = 0, x \lor \neg x = 1), and absorption (x \land (x \lor y) = x). De Morgan's laws, \neg (x \land y) = \neg x \lor \neg y and \neg (x \lor y) = \neg x \land \neg y, derive from these axioms and enable simplification of logical expressions. Truth tables define these operations exhaustively, and a short code sketch after the table reproduces them:

| x | y | x \land y | x \lor y |
|---|---|---|---|
| 0 | 0 | 0 | 0 |
| 0 | 1 | 0 | 1 |
| 1 | 0 | 0 | 1 |
| 1 | 1 | 1 | 1 |
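As a concrete illustration of the operations tabulated above, the following minimal Python sketch defines AND, OR, and NOT on {0, 1} and checks one of De Morgan's laws; the helper names are illustrative, not a standard API.

```python
def AND(x, y): return x & y   # conjunction
def OR(x, y):  return x | y   # disjunction
def NOT(x):    return 1 - x   # negation (complement)

# Reproduce the truth table above and verify De Morgan's law
# NOT(x AND y) == NOT(x) OR NOT(y) for every combination of inputs.
for x in (0, 1):
    for y in (0, 1):
        print(x, y, AND(x, y), OR(x, y))
        assert NOT(AND(x, y)) == OR(NOT(x), NOT(y))
```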
Computing and Digital Systems
Binary Encoding and Representation
In digital computing, all data is encoded as sequences of binary digits, or bits, each representing one of two states, typically low and high voltage levels in electronic circuits, corresponding to logical 0 and 1. This binary foundation arises from the simplicity and reliability of distinguishing two states in hardware, enabling compact storage and manipulation via logic gates. Bits are grouped into larger units, such as a byte of 8 bits, which can encode 256 unique values (2^8), serving as the standard unit for data representation in most systems.[31][32]

Integer values are represented in binary using fixed-width formats. Unsigned integers employ straightforward positional notation, where the value is the sum of powers of 2 weighted by each bit (e.g., the 8-bit binary 10110100 equals 128 + 32 + 16 + 4 = 180 in decimal). For signed integers, two's complement is the dominant scheme, allowing seamless arithmetic including negation without separate sign handling. In an n-bit two's complement system, positive numbers have a leading 0 bit, while negative numbers are formed by inverting all bits of the absolute value and adding 1 (e.g., -5 in 8 bits: invert 00000101 to 11111010, then add 1 to get 11111011). This method, widely adopted since the 1960s in processors like the IBM System/360, gives a range from -2^(n-1) to 2^(n-1)-1, with arithmetic overflow behaving predictably for addition and subtraction.[33][34]
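A minimal sketch of fixed-width two's-complement encoding, assuming an 8-bit width by default; the function name is hypothetical and Python is used only for illustration.

```python
def twos_complement(value: int, bits: int = 8) -> str:
    """Return the two's-complement bit pattern of value in a fixed width."""
    if not -(1 << (bits - 1)) <= value < (1 << (bits - 1)):
        raise ValueError("value out of range for the given width")
    # Masking with 2**bits - 1 yields the same pattern as inverting the bits
    # of the absolute value and adding 1 for negative inputs.
    return format(value & ((1 << bits) - 1), f"0{bits}b")

print(twos_complement(5))     # 00000101
print(twos_complement(-5))    # 11111011, as in the example above
print(twos_complement(-128))  # 10000000, the most negative 8-bit value
```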
Floating-point numbers, essential for approximating real numbers with fractional parts, follow the IEEE 754 standard, first published in 1985 and most recently revised in 2019. This binary format divides bits into sign, exponent, and significand (mantissa) fields; single-precision (32 bits) allocates 1 bit for the sign, 8 for the biased exponent (adding 127 to the true exponent), and 23 for the significand with an implicit leading 1. Representing π ≈ 3.14159 requires about 24 bits of precision and is stored as 01000000 01001001 00001111 11011011 (sign 0, biased exponent 128, significand ≈ 1.5708, i.e., π ≈ 1.5708 × 2^(128-127)). Double-precision (64 bits) extends this for higher accuracy, with 1 sign bit, 11 exponent bits (bias 1023), and 52 significand bits, supporting values up to ≈1.8 × 10^308. Special cases include NaN (Not a Number) for undefined operations and infinities for overflow, ensuring consistent behavior across compliant hardware like x86 processors.[35]
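The field layout described above can be inspected directly; the sketch below, assuming Python's standard struct module, reinterprets a single-precision encoding as an integer and extracts the sign, exponent, and significand fields (the function name is hypothetical).

```python
import math
import struct

def float32_fields(x):
    """Split a value's IEEE 754 single-precision encoding into its three fields."""
    # '>f' packs the value as a big-endian 32-bit float; '>I' reads the same
    # four bytes back as an unsigned integer so the bit fields can be masked out.
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    sign = bits >> 31
    exponent = (bits >> 23) & 0xFF   # biased exponent (bias 127)
    significand = bits & 0x7FFFFF    # 23 stored fraction bits (implicit leading 1)
    return sign, exponent, significand

sign, exponent, significand = float32_fields(math.pi)
print(sign, exponent, format(significand, "023b"))
# 0 128 10010010000111111011011  ->  pi ≈ 1.5708 × 2^(128 - 127)
```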
Character data uses dedicated binary encodings to map symbols to bit patterns. The ASCII (American Standard Code for Information Interchange) standard, first published as ASA X3.4 in 1963, employs 7 bits to encode 128 control and printable characters, with values 0–31 for controls (e.g., 9 for tab) and, for example, 65 for 'A'. Extended 8-bit variants added 128 more symbols for regional needs, but limitations in handling non-Latin scripts prompted Unicode, a universal standard assigning unique code points (e.g., U+0041 for 'A') to over 149,000 characters as of version 15.0 (2022). UTF-8, the dominant Unicode encoding since its 1993 proposal and widespread adoption by 2008, uses variable-length sequences of 1 to 4 bytes: 1 byte for ASCII (0xxxxxxx), 2 bytes for code points up to U+07FF such as the Latin-1 supplement (110xxxxx 10xxxxxx), and 3 or 4 bytes for the remaining scripts and symbols, preserving ASCII compatibility while enabling efficient storage (e.g., 'é' at U+00E9 encodes as 11000011 10101001). This backward compatibility reduced migration costs, with UTF-8 now comprising over 97% of web pages.[36][37][38]
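The variable-length behavior of UTF-8 can be seen with Python's built-in str.encode; the characters below are chosen only to illustrate the 1-, 2-, 3-, and 4-byte cases.

```python
for ch in ("A", "é", "€", "😀"):
    encoded = ch.encode("utf-8")
    print(ch, f"U+{ord(ch):04X}", [format(b, "08b") for b in encoded])

# 'A'   U+0041   ['01000001']                              1 byte (ASCII-compatible)
# 'é'   U+00E9   ['11000011', '10101001']                  2 bytes
# '€'   U+20AC   ['11100010', '10000010', '10101100']      3 bytes
# '😀'  U+1F600  4 bytes (outside the Basic Multilingual Plane)
```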
Other data types build on these primitives: images as bitmaps (e.g., RGB pixels with 8 bits per channel in 24-bit color), audio as quantized samples (e.g., 16-bit PCM at 44.1 kHz for CD quality), and compression via algorithms like Huffman coding to minimize bit usage. Endianness—big-endian (MSB first, as in network protocols) versus little-endian (LSB first, as in x86)—affects multi-byte interpretation but is standardized in contexts like TCP/IP. These representations underpin all digital processing, from CPUs executing binary instructions to storage in NAND flash cells holding multiple bits per cell via multi-level cell technology.[31]
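A minimal sketch of the endianness distinction mentioned above, assuming Python's standard struct module: the same 32-bit value is serialized in network (big-endian) and x86 (little-endian) byte orders.

```python
import struct

value = 0x12345678
big = struct.pack(">I", value)     # b'\x12\x34\x56\x78', most significant byte first
little = struct.pack("<I", value)  # b'\x78\x56\x34\x12', least significant byte first
print(big.hex(), little.hex())     # 12345678 78563412
```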