Bit
A bit, short for binary digit, is the fundamental unit of information in computing and digital communications, representing a choice between two possible states: 0 or 1.[1] The term "bit" was coined by statistician John W. Tukey in a January 1947 memorandum at Bell Laboratories as a contraction of "binary digit," providing a concise name for the basic elements of binary systems.[2] In 1948, Claude E. Shannon formalized the bit's role in his seminal paper "A Mathematical Theory of Communication," defining it as the unit of information corresponding to a binary decision that resolves uncertainty between two equally probable alternatives and laying the foundation for information theory.[3]

Bits serve as the building blocks for all digital data, where combinations of bits encode more complex information such as text, images, and instructions; for instance, eight bits form a byte, the standard unit for data storage and processing in most computers.[4] This binary structure enables the reliable storage, manipulation, and transmission of information in electronic devices, from simple logic gates in hardware to algorithms in software.[5] In measurement standards, the bit is recognized as the base unit for quantifying information capacity, with decimal prefixes such as the kilobit (1,000 bits) and binary prefixes such as the kibibit (1,024 bits) distinguishing the two scales used for data rates and storage.[6]

The concept of the bit underpins modern computing architectures, including processors that perform operations on bit strings and networks that transmit data as bit streams. It influences fields from cryptography, where bits represent keys and messages, to data compression, where algorithms minimize the number of bits needed to represent information without loss of fidelity.[3] Extensions of the bit include the qubit in quantum computing, which can exist in a superposition of 0 and 1, promising exponential speedups for certain problems. Overall, the bit's simplicity and universality have driven the digital revolution, enabling information technology to scale from personal devices to global networks.
Fundamentals
Definition
A bit, short for binary digit, is the fundamental unit of information in computing and digital communications, representing one of two mutually exclusive states, conventionally denoted as 0 or 1.[1] These states can equivalently symbolize logical values such as false/true or off/on, providing a basic building block for decision-making in information systems.[7][8] As a logical abstraction, the bit exists independently of any particular physical embodiment, functioning as the smallest indivisible unit of data that computers and digital devices can process, store, or transmit.[1] This abstraction allows bits to underpin all forms of binary data representation, from simple flags to complex algorithms, without reliance on specific hardware characteristics.[7] In practice, a bit captures a binary choice, akin to a light switch toggling between on and off positions, where each position corresponds to one of the two states.[9] Similarly, it models the outcome of a fair coin flip, yielding either heads or tails as the discrete alternatives.[8] Unlike analog signals, which convey information through continuous variations in amplitude or frequency, bits embody discrete, binary states that facilitate error-resistant and reproducible digital operations.[10][11]
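As a minimal illustration of this two-state abstraction, the following Python sketch (the `toggle` helper is purely illustrative, not a standard function) models a bit as an integer restricted to 0 and 1 and flips it like the light switch in the analogy:

```python
def toggle(bit: int) -> int:
    """Flip a bit between its two states, like a light switch."""
    if bit not in (0, 1):
        raise ValueError("a bit can take only the values 0 or 1")
    return 1 - bit


state = 0               # off / false
state = toggle(state)   # 1: on / true
state = toggle(state)   # 0: off / false again
print(state)            # -> 0
```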
Role in Binary Systems
In binary numeral systems, information is encoded using base-2 positional notation, where each bit represents a coefficient of 0 or 1 multiplied by a distinct power of 2, starting from the rightmost position as the zeroth bit.[12] For instance, the least significant bit (bit 0) corresponds to $2^0 = 1$, the next (bit 1) to $2^1 = 2$, bit 2 to $2^2 = 4$, and so on, allowing any non-negative integer to be uniquely represented as a sum of those powers whose coefficients are 1.[13] This structure enables efficient numerical representation in digital systems, as each additional bit doubles the range of expressible values.[14]

Bit strings, or sequences of multiple bits, extend this to form complex data such as numbers, characters, or machine instructions. For example, the three-bit string 101 in binary equals $1 \cdot 2^2 + 0 \cdot 2^1 + 1 \cdot 2^0 = 5$ in decimal, illustrating how positional weighting allows compact encoding of values up to $2^n - 1$ with $n$ bits (a worked conversion appears in the sketch after the truth tables below).[12] These strings serve as the foundational units for all digital processing, where operations manipulate them bit by bit to perform arithmetic, logical decisions, or data transformations.[15]

Fundamental operations on bits include bitwise AND, OR, XOR, and NOT, which apply logical rules to individual bit pairs (or single bits for NOT) across strings of equal length; their truth tables follow, and a short code sketch after the tables shows them in practice. The bitwise AND operation outputs 1 only if both inputs are 1 and is used for masking, or selective retention of bits; its truth table is:

| Input A | Input B | A AND B |
|---|---|---|
| 0 | 0 | 0 |
| 0 | 1 | 0 |
| 1 | 0 | 0 |
| 1 | 1 | 1 |

The bitwise OR operation outputs 1 if at least one input is 1 and is commonly used to set selected bits; its truth table is:

| Input A | Input B | A OR B |
|---|---|---|
| 0 | 0 | 0 |
| 0 | 1 | 1 |
| 1 | 0 | 1 |
| 1 | 1 | 1 |

The bitwise XOR (exclusive OR) operation outputs 1 only when its inputs differ and is commonly used to toggle bits or compute parity; its truth table is:

| Input A | Input B | A XOR B |
|---|---|---|
| 0 | 0 | 0 |
| 0 | 1 | 1 |
| 1 | 0 | 1 |
| 1 | 1 | 0 |

The bitwise NOT operation acts on a single bit, inverting 0 to 1 and 1 to 0; its truth table is:

| Input A | NOT A |
|---|---|
| 0 | 1 |
| 1 | 0 |
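To make the positional weighting described above concrete, here is a minimal Python sketch (the `bits_to_int` helper is illustrative, not a standard API) that converts a bit string to its decimal value by summing powers of 2, reproducing the 101 = 5 example and the $2^n - 1$ upper bound:

```python
def bits_to_int(bits: str) -> int:
    """Interpret a string of 0s and 1s as a base-2 number.

    Each digit contributes digit * 2**position, counting positions
    from the rightmost (least significant) bit as position 0.
    """
    value = 0
    for position, digit in enumerate(reversed(bits)):
        value += int(digit) * 2 ** position
    return value


print(bits_to_int("101"))       # 1*4 + 0*2 + 1*1 = 5
print(bits_to_int("111"))       # 7, the largest 3-bit value (2**3 - 1)
print(bits_to_int("11111111"))  # 255, the largest value of an 8-bit byte
```

Python's built-in `int("101", 2)` performs the same conversion; the explicit loop is written out only to mirror the positional formula.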
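The truth tables above can likewise be checked directly with Python's bitwise operators `&`, `|`, `^`, and `~`. The sketch below is illustrative only: it enumerates every input combination, restricts `~` to a single bit (Python integers are not fixed-width), and then shows AND used as a mask:

```python
# Reproduce the AND, OR, and XOR truth tables over all input pairs.
for a in (0, 1):
    for b in (0, 1):
        print(f"A={a} B={b}  AND={a & b}  OR={a | b}  XOR={a ^ b}")

# NOT acts on a single bit.  Python integers are unbounded, so ~a
# yields -(a + 1); masking with & 1 keeps only the lowest bit, which
# is the single-bit inversion shown in the last table.
for a in (0, 1):
    print(f"A={a}  NOT={~a & 1}")

# A common use of AND: masking, i.e. selectively retaining bits.
value = 0b10110110        # 182 in decimal
mask = 0b00001111         # selects bits 0-3
print(bin(value & mask))  # -> 0b110, the low four bits 0110
```

These operators act on every bit position of their operands independently, which is why the same tables describe both single bits and whole bit strings of equal length.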