Ones' complement
Ones' complement is a binary numeral system used to represent signed integers in computer arithmetic. A positive number is encoded in its standard binary form with a leading 0 as the sign bit, and the corresponding negative value is obtained by performing a bitwise inversion (complement) of all bits in the positive representation.[1] The most significant bit (MSB) thus serves as the sign indicator, 0 for non-negative and 1 for negative, while arithmetic operations like addition and subtraction can share the same circuitry rather than requiring separate logic for each.[2] Unlike unsigned binary, ones' complement supports both positive and negative values within a fixed bit width, typically ranging from -(2^{n-1} - 1) to +(2^{n-1} - 1) for an n-bit representation. The range is symmetric, but zero has dual representations: all zeros for +0 and all ones for -0.[1]

In ones' complement arithmetic, negating a number simply requires flipping all its bits, which simplifies the hardware implementation of additive inverses compared to other schemes like sign-magnitude representation.[2] Addition proceeds via standard binary addition, with one key modification: any carry-out from the MSB is "end-around carried" back to the least significant bit (LSB), which keeps sums correct across the sign boundary.[1] For example, in an 8-bit system, the positive integer 5 (00000101) becomes -5 as 11111010, and adding -5 to 5 yields 11111111 (-0), illustrating the dual-zero issue.[2] Subtraction is reduced to addition by complementing the subtrahend and adding it to the minuend, eliminating the need for dedicated subtraction logic in early processors.[1]

Historically, ones' complement was employed in several pioneering computers during the mid-20th century, such as the UNIVAC 1100 series and CDC 6600 supercomputers, due to its straightforward bit-flip negation and compatibility with binary adders for subtraction.[3] However, its drawbacks, chiefly the dual zeros requiring special handling in comparisons and the added complexity of the end-around carry, led to its gradual replacement by two's complement in most modern architectures by the 1970s.[2] Despite this, ones' complement persists in specific applications like error detection, notably the Internet Protocol (IP) header checksum, which computes the 16-bit ones' complement of the ones'-complement sum of all 16-bit words in the header to verify data integrity during transmission.[4] This usage leverages the system's properties for efficient modular arithmetic in checksum validation, ensuring the received sum complements to zero if no errors occurred.[5]

Fundamentals
Definition
Ones' complement is a method for representing signed integers in binary systems, where the negative counterpart of a positive number is obtained by applying a bitwise NOT operation, also known as bitwise inversion or complement, to its binary representation, flipping each bit from 0 to 1 and vice versa.[2][6] This approach contrasts with other signed representations by using the inversion itself to denote negation, so the most significant bit implicitly indicates the sign (0 for positive or zero, 1 for negative) within the fixed-width bit field.[7]

Historically, ones' complement served as one of the earliest schemes for handling signed numbers in computer arithmetic, enabling efficient implementation of operations like subtraction through addition of complemented values, as seen in pioneering machines from the mid-20th century.[8] It was employed in systems such as the DEC PDP-1[9] and CDC 6600,[10] where it supported binary fixed-width representations without relying on explicit sign-magnitude formatting for all calculations. This representation was particularly suited to the hardware constraints of early computers, providing a straightforward way to encode both positive and negative values in n-bit words.[11]

A basic example illustrates the process: in a 4-bit representation, the positive integer +3 is encoded as 0011 in binary, and applying the bitwise NOT yields 1100, which represents -3.[12] Understanding ones' complement is a prerequisite for grasping binary fixed-width representations, such as those used in n-bit computer words, where the full range of values from -(2^{n-1} - 1) to +(2^{n-1} - 1) can be expressed; the negative zero issue is addressed elsewhere.[1]

Bitwise Operation
The bitwise NOT operation, commonly referred to as the ones' complement operation, forms the core of ones' complement by inverting every bit in a binary number's fixed-width representation. This unary operation processes the entire word length, transforming each bit independently of its positional value.[13]

To perform the bitwise NOT, begin with the binary form of the number and systematically flip each bit, a 0 becoming 1 and a 1 becoming 0, at every position from the most significant bit to the least significant bit. Mathematically, a bit b_i at position i is complemented to 1 - b_i, or equivalently ~b_i in bitwise NOT notation. This inversion is deterministic and reversible within the same word size.[14]

The fixed-width requirement is essential, as leading zeros in positive numbers must also be inverted to maintain the full bit length and avoid truncation errors in hardware implementations. For instance, in an 8-bit word, the positive value 5 is represented as 00000101; applying bitwise NOT inverts all bits to 11111010, which denotes -5 in ones' complement. This example illustrates how the operation preserves the word size while achieving bit-level negation.[13][15]
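The masked inversion described above can be sketched in Python (a minimal illustration; the helper name is ours, and the mask models the fixed n-bit word since Python integers are unbounded):

```python
def ones_complement(x, n=8):
    """Invert all n bits of x: the bitwise NOT within a fixed n-bit word."""
    mask = (1 << n) - 1          # n ones, e.g. 0xFF for n = 8
    return ~x & mask             # flip every bit, then trim to the word width

print(format(5, "08b"))                   # 00000101  (+5)
print(format(ones_complement(5), "08b"))  # 11111010  (-5 in ones' complement)
```

Applying the helper twice returns the original value, reflecting that the inversion is its own inverse within the word size.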
Number Representation
Positive and Zero Values
In ones' complement representation, positive integers and zero are encoded identically to their unsigned binary counterparts, with the most significant bit (MSB) serving as the sign bit, set to 0 to indicate non-negativity. This direct mapping ensures that the bit patterns for these values remain unchanged from standard binary notation, preserving simplicity in their storage and interpretation.[16][17]

For an n-bit ones' complement system, the range of representable non-negative values spans from 0 to 2^{n-1} - 1, allowing half of the total 2^n bit patterns to denote positives and zero while reserving the other half for negatives. This symmetric allocation (excluding the dual zero representations) provides a balanced capacity for signed integers compared to unsigned systems.[18][19]

To illustrate, consider a 4-bit ones' complement system: the value +0 is encoded as 0000, +1 as 0001, +3 as 0011, and the maximum positive +7 as 0111. These patterns mirror unsigned binary exactly, with the MSB consistently at 0. Zero specifically uses the all-zero bit pattern, termed positive zero, which contrasts with the all-ones pattern representing negative zero, a unique feature of ones' complement that arises from bitwise inversion for negatives but does not affect positive encodings.[20][21][22]
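The identity between non-negative ones' complement encodings and unsigned binary can be checked mechanically, as in this small sketch for a 4-bit word:

```python
N = 4
# Every non-negative value 0 .. 2**(N-1) - 1 uses its plain unsigned bit
# pattern, so the sign bit (MSB) is 0 and the pattern decodes to the value.
for value in range(2 ** (N - 1)):           # 0 .. 7 for N = 4
    pattern = format(value, f"0{N}b")
    assert pattern[0] == "0"                # MSB, the sign bit, is always 0
    assert int(pattern, 2) == value         # identical to unsigned binary
print(format(0, "04b"), format(7, "04b"))   # 0000 (positive zero), 0111 (+7)
```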
Negative Values
In ones' complement representation, negative integers are encoded by performing a bitwise inversion on the binary representation of their absolute (positive) value. This involves flipping every bit, changing each 0 to 1 and each 1 to 0, across the fixed number of bits used for the representation.[11][23] The resulting bit pattern serves as the encoding for the negative number, building directly on the standard binary encoding of the corresponding positive value.[22]

For example, in a 4-bit system, the positive value +1 is represented as 0001 in binary. Inverting all bits yields 1110, which represents -1. Similarly, +4 is 0100, and its inversion is 1011, representing -4. These examples illustrate how the complemented form preserves the structure of binary while denoting negativity through the inversion process.[22][11]
The range of representable negative values in an n-bit ones' complement system spans from -(2^{n-1} - 1) to -1, matching the positive range in magnitude, with the all-1s pattern separately representing negative zero (distinct in bit pattern from positive zero but equal in value). For a 4-bit system, this corresponds to -7 through -1.[11] The most negative value, -7 in 4 bits (1000), arises from inverting the largest positive value (0111).[24]

The most significant bit (MSB) in ones' complement is always 1 for negative values, serving as a sign indicator, but unlike pure sign-magnitude systems the sign bit is not isolated: the entire bit pattern, including the MSB, participates in the complemented magnitude. To decode a negative representation, one inverts the bits back to obtain the positive magnitude and then applies the negative sign.[11][25] This holistic involvement of all bits distinguishes ones' complement from other signed representations.[24]
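The decoding rule just described (if the MSB is set, invert back and negate) can be sketched as follows; the function name is illustrative:

```python
def from_ones_complement(pattern, n=4):
    """Interpret an n-bit ones' complement pattern (given as an int) as a signed value."""
    mask = (1 << n) - 1
    if pattern >> (n - 1):            # MSB set: a negative value
        return -((~pattern) & mask)   # invert back to the magnitude, then negate
    return pattern                    # MSB clear: pattern is the value itself

print(from_ones_complement(0b1110))   # -1
print(from_ones_complement(0b1011))   # -4
print(from_ones_complement(0b1111))   # 0  (the negative zero pattern)
```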
Arithmetic Operations
Addition
In ones' complement arithmetic, addition of two operands proceeds via standard binary addition, treating the numbers as unsigned values regardless of their signs. The bits are added column by column from the least significant bit (LSB) to the most significant bit (MSB), propagating carries as in binary addition. If a carry is generated out of the MSB, this carry bit is not discarded; instead, it is added back to the LSB of the sum, a step known as the end-around carry. This adjustment ensures the result correctly represents the sum modulo 2^n - 1, where n is the number of bits, preserving the arithmetic properties of the representation.[26]

Consider a 5-bit example adding +7 (00111) and +5 (00101). The binary addition yields 01100 with no carry out from the MSB, so no end-around carry is needed, resulting in +12 (01100). Now, adding -1 (11110) and -2 (11101): the initial sum is 11011 with a carry out of 1 from the MSB; applying the end-around carry adds 1 to the LSB, yielding 11100, which represents -3. For mixed signs, such as +1 (00001) and -3 (11100), the sum is 11101 with no carry out, directly giving -2. These steps demonstrate how the end-around carry corrects the result when necessary.[26]

The result of addition can be expressed as r = (a + b + c_o) mod 2^n, where c_o is 1 if there is a carry out from the MSB and 0 otherwise. This mechanism handles cases where the sum exceeds the representable range but requires careful implementation in hardware to route the carry back efficiently.[26]

Overflow in ones' complement addition is detected when both operands have the same sign but the result has the opposite sign, indicating the sum has exceeded the bounds of the representation (e.g., adding two large positive numbers yielding a negative result).
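The addition procedure above, including the end-around carry and the same-sign overflow check, can be sketched in Python (a minimal model with illustrative helper names, not a hardware description):

```python
def ones_add(a, b, n=5):
    """Add two n-bit ones' complement patterns, applying the end-around carry."""
    mask = (1 << n) - 1
    total = (a & mask) + (b & mask)
    if total > mask:                   # a carry came out of the MSB
        total = (total + 1) & mask     # end-around carry: add it back at the LSB
    return total

def sign_bit(x, n=5):
    return (x >> (n - 1)) & 1

def overflowed(a, b, result, n=5):
    """Same-sign operands producing an opposite-sign result signal overflow."""
    return sign_bit(a, n) == sign_bit(b, n) and sign_bit(a, n) != sign_bit(result, n)

# -1 (11110) + -2 (11101): carry out, end-around carry yields 11100 (-3)
print(format(ones_add(0b11110, 0b11101), "05b"))           # 11100
# +11 (01011) + +9 (01001): positive operands, negative result -> overflow
s = ones_add(0b01011, 0b01001)
print(format(s, "05b"), overflowed(0b01011, 0b01001, s))   # 10100 True
```

Subtracting B from A fits the same sketch by passing the complement of B as the second operand.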
When overflow occurs, the operation signals an overflow condition, though the end-around carry still produces a valid (but wrapped) value. For instance, in 5 bits, adding +11 (01011) and +9 (01001) gives a direct sum of 10100, which is negative despite positive operands, confirming overflow. No overflow occurs for operands of different signs, as the result stays within bounds.[26]

Subtraction
In ones' complement arithmetic, subtraction of two numbers A and B (A - B) is implemented by adding A to the ones' complement of B, which effectively computes A + (-B), and then applying an end-around carry adjustment if a carry-out is generated during the addition.[27][28] This approach reuses the addition circuitry, as the ones' complement of B represents its negation.[29]

Consider a 4-bit example where A = 5 (0101) and B = 3 (0011). The ones' complement of B is 1100. Adding 0101 + 1100 yields 10001, producing a carry-out of 1 and a sum of 0001. The end-around carry adds this 1 back to the least significant bit of the sum: 0001 + 1 = 0010, which correctly represents 2 (5 - 3 = 2).[27][29] If no carry-out occurs, the result is the ones' complement of the true difference, indicating a negative value that requires further complementation to obtain its magnitude.[30]

The ones' complement operation is its own inverse: applying it twice to a number yields the original value (within the fixed word length), which underpins the negation used in subtraction.[28] Overflow detection in subtraction follows the same principle as in addition, where inconsistency between the carries into and out of the sign bit signals an overflow condition.[27]

Negative Zero Issue
Occurrence
In ones' complement arithmetic, negative zero occurs when a positive number is added to its exact negative complement, yielding a bit pattern consisting of all 1s.[31] This happens whenever opposite values are added symmetrically: the bitwise inversion used to form the negative ensures that each bit position sums to exactly 1, so no carries are generated at any position.[32] A representative example in a 4-bit system illustrates this: the positive value +1 is represented as 0001, while its negative -1 is 1110; adding these produces 1111, interpreted as negative zero.[33]
This negative zero (1111 in the 4-bit case) is distinct from positive zero (0000), as the former has the sign bit set to 1, creating two separate representations for the value zero despite their numerical equivalence.[31]
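The 4-bit example above can be reproduced directly; because no column generates a carry, plain binary addition suffices here:

```python
plus_one  = 0b0001
minus_one = 0b1110             # bitwise inverse of 0b0001 within 4 bits
total = plus_one + minus_one   # every bit column sums to 1, no carries
print(format(total, "04b"))    # 1111 -> negative zero, distinct from 0000
```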
Consequences
In ones' complement representation, negative zero and positive zero are mathematically equivalent but possess distinct bit patterns, all ones for negative zero and all zeros for positive zero, which introduces logical inconsistencies in comparisons and equality tests. Direct bitwise or numerical equality checks may fail to recognize them as identical, necessitating additional logic to treat both as zero; without it, conditional statements or loops can behave erroneously.[34][35]

Arithmetic operations exacerbate these issues, as subtracting a number from itself can yield negative zero instead of positive zero, resulting in unexpected signs or infinite loops in algorithms that rely on zero detection without accounting for the dual representations. Such errors propagate into subsequent computations, altering results in ways that deviate from expected mathematical outcomes, particularly in subtraction-heavy routines.[33][34]

In early hardware like the UNIVAC 1100 series and UNIVAC 490, these dual zeros required specialized circuitry and instruction sets to handle the inconsistencies, such as distinct skip conditions in shift and logical operations that could trigger unintended program branches when registers held negative zero. This increased design complexity and debugging challenges for programmers, as negative zero appearing in division quotients or remainders, especially in edge cases like dividing zero by a non-zero value of unlike sign, could be misinterpreted in further calculations.[34][36]

The presence of negative zero also disrupts sorting algorithms and relational tests, where positive zero may be treated as greater than negative zero due to the bit-pattern difference, inverting expected orderings in data structures or search operations and compromising the reliability of ordered outputs in applications like databases or numerical simulations.[34][35]

Mitigation Techniques
End-Around Carry Adjustment
In ones' complement arithmetic, the end-around carry adjustment is a corrective step applied after binary addition to ensure accurate results, normalizing the sum and handling overflow correctly. This technique involves detecting any carry-out from the most significant bit (MSB) during the addition of two operands and then adding that carry (equivalent to adding 1) back to the least significant bit (LSB) of the resulting sum. The adjustment propagates any resulting carries through the word, effectively restoring the correct value without requiring separate hardware for subtraction, as negative numbers are handled by complementing one operand and adding.[37]

The algorithm for performing addition with end-around carry proceeds as follows:

- Represent the operands in ones' complement form and add them using a standard binary adder, ignoring signs.
- Note any carry-out from the MSB.
- If a carry-out exists, add 1 to the LSB of the sum, allowing carries to propagate as needed to update the entire result.
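The steps above can be modeled bit by bit, making the carry routing explicit (an illustrative sketch of a ripple-carry adder, not a hardware description; the function name is ours):

```python
def ripple_add(a, b, n=4):
    """n-bit ripple-carry addition; a carry out of the MSB re-enters at the LSB."""
    carry, result = 0, 0
    for i in range(n):                      # LSB to MSB, as a binary adder would
        bit_sum = ((a >> i) & 1) + ((b >> i) & 1) + carry
        result |= (bit_sum & 1) << i        # keep the sum bit for this column
        carry = bit_sum >> 1                # propagate the carry to the next column
    if carry:                               # carry out of the MSB:
        result = ripple_add(result, 1, n)   # end-around carry, added back at the LSB
    return result

# 5 - 3 computed as 0101 + 1100 (the ones' complement of 0011): 0010, i.e. 2
print(format(ripple_add(0b0101, 0b1100), "04b"))   # 0010
```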
Sign-Magnitude Alternative
In sign-magnitude representation, an alternative to ones' complement, the most significant bit serves as a dedicated sign bit (0 for positive or zero, 1 for negative), while the remaining bits represent the absolute magnitude of the value in standard unsigned binary form.[21] This approach inherently separates the sign from the value, allowing straightforward encoding of both positive and negative numbers without inverting bits for negation. For instance, in a 4-bit system, the number -3 is encoded as 1011, where the leading 1 indicates negativity and the trailing 011 (binary 3) gives the magnitude.[21]

Unlike ones' complement, which produces negative zero as the bitwise inverse of positive zero (all 1s), sign-magnitude represents positive zero as all bits zero (0000 in 4 bits) and negative zero as sign bit 1 with magnitude zero (1000 in 4 bits). This separation simplifies zero handling, as negative zero can be detected by checking whether the magnitude is zero and normalized to positive zero by clearing the sign bit, avoiding the propagation issues seen in ones' complement arithmetic.[39]

However, sign-magnitude introduces trade-offs in arithmetic operations, requiring separate logic for sign determination and magnitude computation: addition of numbers with the same sign involves adding magnitudes and retaining the sign, while differing signs necessitate subtracting the smaller magnitude from the larger and assigning the sign of the latter, often after an initial comparison step. This increases hardware complexity compared to the uniform adder (with its end-around carry) used in ones' complement.[39]

Historically, sign-magnitude was employed in early computers like the IBM 704 to represent fixed-point integers, providing an approach that bypassed some flaws of ones' complement by easing sign and zero management in software or hardware normalization routines.[40]

Comparisons
With Two's Complement
Two's complement representation of negative numbers is obtained by taking the ones' complement and adding 1, which eliminates the negative zero present in ones' complement systems.[41] This adjustment ensures a single representation for zero, simplifying arithmetic operations and avoiding ambiguities in computations.[31] For example, in a 4-bit system, -1 in ones' complement is 1110, obtained by inverting the bits of +1 (0001); in two's complement it becomes 1111, the ones' complement 1110 plus 1.[41] This pattern holds generally: the two's complement of a number is its bitwise inversion plus one, shifting the encoding to include one additional negative value without duplicating zero.[42]
In terms of arithmetic, two's complement streamlines addition and subtraction by treating all operations uniformly as addition, without the end-around carry that ones' complement addition requires.[21] Adding two numbers in two's complement produces the correct result directly, with overflow detectable via the sign bit. Negation, by contrast, is simpler in ones' complement, requiring only bit inversion, whereas two's complement needs the additional increment step.[31]
The range of representable values also differs: for n bits, ones' complement spans -(2^{n-1} - 1) to +(2^{n-1} - 1), such as -7 to +7 in 4 bits, while two's complement spans -2^{n-1} to +(2^{n-1} - 1), such as -8 to +7 in 4 bits, providing one more negative value at the expense of symmetry.[42] This asymmetry arises from the absence of negative zero, which frees one bit pattern for an additional negative integer.[21]
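The contrast between the two schemes can be made concrete by decoding the same 4-bit patterns under both conventions (a small sketch with illustrative helper names):

```python
def decode_ones(p, n=4):
    """Value of pattern p under ones' complement: subtract 2**n - 1 if negative."""
    return p - ((1 << n) - 1) if p >> (n - 1) else p

def decode_twos(p, n=4):
    """Value of pattern p under two's complement: subtract 2**n if negative."""
    return p - (1 << n) if p >> (n - 1) else p

for p in (0b0111, 0b1110, 0b1111, 0b1000):
    print(format(p, "04b"), decode_ones(p), decode_twos(p))
# 0111 7 7     (positive patterns agree in both systems)
# 1110 -1 -2   (the same pattern decodes one apart)
# 1111 0 -1    (the all-ones pattern is ones' complement negative zero)
# 1000 -7 -8   (two's complement reaches one value further)
```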