
Binary data

Binary data is information encoded using the binary numeral system, consisting of sequences of bits, each bit being a binary digit that represents one of two possible states, typically 0 or 1, corresponding to the absence or presence of an electrical signal in digital systems. This representation forms the foundational building block of all digital computing, enabling the storage, processing, and transmission of diverse types of data such as text, images, audio, and numerical values through combinations of these bits. Groups of eight bits, known as bytes, allow for 256 possible values (from 0 to 255), which are commonly used to encode individual characters in standards like ASCII or to represent small integers and memory addresses. The system's simplicity aligns with the on/off nature of electronic switches in digital circuits, making it efficient for reliable data manipulation across all modern devices, from microcontrollers to supercomputers. Despite its basic structure, binary data underpins complex operations, including arithmetic, logical functions, and the encoding of higher-level abstractions like programming languages and file formats, with larger units such as words (32 or 64 bits) handling more intricate computations.

Fundamentals

Definition and Properties

Binary data consists of information expressed as a sequence of bits, where each bit represents one of two distinct states, typically denoted as 0 or 1. These states are often analogous to on/off switches in electronic systems or true/false values in logical operations, forming the foundational unit for digital representation. In computing, bits serve as the smallest indivisible elements of data, enabling the encoding of more complex structures through combinations. A key property of binary data is its discreteness, which contrasts with analog data's continuous variation; binary values are confined to exact, finite states without intermediate gradations, making them robust against noise in transmission and storage. The base-2 system is immutable in its structure, relying solely on powers of two for representation, which ensures consistent interpretation across digital systems. Each bit carries \log_2(2) = 1 unit of information, quantifying the choice between two equally likely alternatives as the fundamental measure in information theory. Binary data's efficiency in electronic representation stems from its two-state simplicity, which aligns directly with the on/off behavior of transistors and switches, unlike systems requiring ten states that complicate implementation. This simplicity enhances reliability and reduces power consumption in digital circuits compared to multi-state alternatives like decimal representation, which is less efficient to implement for storage and processing. For instance, the bit pattern 1010 represents the decimal value 10 or can serve as a flag indicating a specific condition, such as an enabled feature in software.
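As a minimal Python sketch of this dual reading, the snippet below interprets the same 4-bit pattern both as an unsigned integer and as a set of flags; the flag mask name is hypothetical, chosen only for illustration:

```python
# The same 4-bit pattern read two ways: as a number and as flags.
bits = 0b1010

print(bits)                # 10: the pattern interpreted as an unsigned integer

FEATURE_ENABLED = 0b0010   # hypothetical flag mask, for illustration only
if bits & FEATURE_ENABLED:
    print("feature enabled")   # bit 1 of 1010 is set
```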

Historical Context

The concept of binary data traces its roots to early philosophical and mathematical explorations of dualistic systems. Gottfried Wilhelm Leibniz outlined his dyadic arithmetic, a binary number system representing numbers using only 0 and 1, in 1679, and later explicitly connected it to the ancient Chinese I Ching's hexagrams composed of broken and unbroken lines. He published this as "Explication de l'Arithmétique Binaire" in 1703, positioning binary as a universal language akin to the I Ching's divinatory framework. Complementing these ideas, George Boole introduced algebraic logic in his 1854 book An Investigation of the Laws of Thought, formalizing operations on binary variables (true/false) that laid the groundwork for logical computation without direct reference to numerical representation.

The 20th century marked the practical adoption of binary representation in engineering and computing. Claude Shannon's 1937 master's thesis, "A Symbolic Analysis of Relay and Switching Circuits," demonstrated how Boolean algebra could optimize electrical switching circuits using two-state elements, bridging abstract logic to physical devices. This insight influenced early computers, culminating in John von Neumann's 1945 "First Draft of a Report on the EDVAC," which formalized binary encoding as the basis for stored-program architecture in electronic computing systems. Earlier milestones included Samuel Morse's development of the electromagnetic telegraph and Morse code in the late 1830s and early 1840s, employing on/off signals via dots and dashes for long-distance communication, which predated computational uses but exemplified two-state signaling in practice.

Standardization accelerated in the mid-20th century amid the transition from decimal to binary systems. The ENIAC, completed in 1945, represented an early adoption of electronic digital computation, though it primarily used decimal ring counters; its design influenced subsequent binary implementations like the EDVAC. IBM pioneered binary-coded decimal (BCD) in the 1940s for punch-card systems and early machines, encoding decimal digits in binary to facilitate decimal arithmetic in business applications. By the 1950s, mainframes shifted to pure binary arithmetic for efficiency in scientific computing, marking a widespread move away from decimal machinery. Post-1991, Unicode evolved character encoding standards, starting with its inaugural version in 1991 and introducing UTF-8 in 1993 to support global characters via variable-length byte sequences, ensuring compatibility across diverse data systems.

Mathematical Foundations

Combinatorics and Counting

The number of distinct binary strings of length n is 2^n, as each of the n positions can independently be either 0 or 1. For example, when n=3, there are 8 possible strings: 000, 001, 010, 011, 100, 101, 110, and 111. Binary strings of length n also correspond to the subsets of an n-element set, where each 1 indicates inclusion of an element and each 0 indicates exclusion. Consequently, the power set of an n-element set contains exactly 2^n subsets. This equivalence is reflected in the binomial theorem, which gives 2^n = (1 + 1)^n = \sum_{k=0}^{n} \binom{n}{k}, where \binom{n}{k} counts the number of ways to choose k positions for 1s in a binary string of length n. The Hamming weight of a binary string is defined as the number of 1s it contains. The Hamming distance between two binary strings x and y of equal length is the number of positions at which they differ, given by d(x, y) = \mathrm{wt}(x \oplus y), where \oplus denotes the bitwise XOR operation and \mathrm{wt} is the Hamming weight. In coding theory, these concepts enable error detection. For instance, a parity bit can be appended to a binary string to ensure an even number of 1s overall; for the string 101 (which has two 1s), adding a 0 yields 1010, preserving even parity. If transmission flips a single bit, the received string will have odd parity, signaling an error.
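These counts and distances are easy to verify computationally; the following minimal Python sketch enumerates the n = 3 case and reproduces the parity example from the text:

```python
from itertools import product

n = 3
strings = ["".join(s) for s in product("01", repeat=n)]
assert len(strings) == 2 ** n          # 8 strings for n = 3

# Each string encodes a subset of {a, b, c}: a 1 marks inclusion.
elements = ["a", "b", "c"]
subsets = [{e for e, bit in zip(elements, s) if bit == "1"} for s in strings]
assert len(subsets) == 2 ** n          # the power set has 2^n members

def weight(s):                         # Hamming weight: number of 1s
    return s.count("1")

def distance(x, y):                    # Hamming distance: wt(x XOR y)
    return sum(a != b for a, b in zip(x, y))

# Even-parity bit: append 0 or 1 so the total number of 1s is even.
word = "101"
parity = str(weight(word) % 2)
print(word + parity)                   # "1010", which has even parity
```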

Information Theory Basics

In information theory, binary data serves as the foundational unit for quantifying and transmitting information, where each binary digit (bit) represents the smallest indivisible unit of information. Self-information measures the surprise conveyed by a specific outcome, defined as I(x) = -\log_2 P(x), where P(x) is the probability of the outcome x occurring; for an event like receiving a 1 with probability p, this yields I(1) = -\log_2 p bits, establishing bits as the fundamental currency of information in digital systems.

For a binary source emitting symbols 0 and 1 independently with probability p for 1 (and 1-p for 0), the average information per symbol is captured by the Shannon entropy, also known as the binary entropy function:

H(p) = -p \log_2 p - (1-p) \log_2 (1-p)

This function reaches its maximum value of 1 bit when p = 0.5, indicating maximum uncertainty for a fair coin flip, and decreases symmetrically to 0 as p approaches 0 or 1, reflecting predictability in biased sources.

In communication channels, binary data transmission is often modeled by the binary symmetric channel (BSC), where bits are flipped with error probability p_e independently of the input. The channel capacity, representing the maximum rate of reliable transmission achievable, is C = 1 - H(p_e), measured in bits per channel use; for p_e = 0, capacity is 1 bit (noiseless), while it approaches 0 as p_e nears 0.5, highlighting the impact of noise on reliable binary communication.

Huffman coding provides an optimal method for lossless compression of binary data sources with known symbol probabilities, constructing prefix-free codes that minimize average codeword length. For symbol probabilities {0.5, 0.25, 0.25}, the algorithm assigns code lengths of 1, 2, and 2 bits respectively (e.g., 0 for the most probable symbol, 10 and 11 for the others), achieving an average length of 1.5 bits per symbol, which equals the entropy bound for efficient encoding.
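A short Python sketch of these quantities, checking the binary entropy, the BSC capacity, and the Huffman average length from the example above:

```python
import math

def binary_entropy(p):
    """H(p) = -p log2 p - (1-p) log2 (1-p), with H(0) = H(1) = 0."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

print(binary_entropy(0.5))   # 1.0 bit: maximal uncertainty
print(binary_entropy(0.9))   # ~0.469 bits: a biased source is more predictable

def bsc_capacity(pe):
    """Capacity of the binary symmetric channel: C = 1 - H(pe)."""
    return 1 - binary_entropy(pe)

print(bsc_capacity(0.0))     # 1.0: noiseless channel
print(bsc_capacity(0.5))     # 0.0: output carries no information

# Average Huffman codeword length for probabilities {0.5, 0.25, 0.25}
# with code lengths {1, 2, 2}: matches the 1.5-bit entropy bound.
probs, lengths = [0.5, 0.25, 0.25], [1, 2, 2]
print(sum(p * l for p, l in zip(probs, lengths)))  # 1.5
```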

Statistical Applications

Binary Variables and Distributions

In statistics, a binary variable is a categorical random variable that assumes exactly one of two possible values, typically coded as 0 or 1 to represent distinct outcomes such as failure/success or false/true. This coding facilitates quantitative analysis while preserving the dichotomous nature of the data.

The Bernoulli distribution provides the foundational probabilistic model for a single binary variable X, defined by a single parameter p \in [0,1] representing the probability of success. Specifically, the probability mass function is given by:

P(X = 1) = p, \quad P(X = 0) = 1 - p.

The mean (expected value) is \mu = p, and the variance is \sigma^2 = p(1 - p), which achieves its maximum of 0.25 when p = 0.5. This distribution, first rigorously developed by Jacob Bernoulli in his seminal 1713 treatise Ars Conjectandi, underpins much of modern probability theory for two-state systems.

For scenarios involving multiple independent binary trials, the binomial distribution models the count of successes K in n fixed trials, each with the same success probability p. The probability mass function is:

P(K = k) = \binom{n}{k} p^k (1 - p)^{n - k}, \quad k = 0, 1, \dots, n,

where \binom{n}{k} denotes the binomial coefficient. The mean is np and the variance is np(1 - p), reflecting the additive properties of independent trials. This distribution is particularly useful for aggregating outcomes over repeated experiments.

Common applications include modeling coin flips, where a fair coin has p = 0.5, yielding symmetric probabilities for heads or tails in a single trial under the Bernoulli distribution or multiple flips under the binomial distribution. In hypothesis testing, binary outcomes appear in A/B tests, such as comparing conversion rates (success as user engagement) between two website variants, where the binomial distribution quantifies the number of successes in each group to assess differences in p.
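The two probability mass functions above translate directly into code; a minimal Python sketch using only the standard library:

```python
from math import comb

def binomial_pmf(k, n, p):
    """P(K = k) = C(n, k) * p^k * (1-p)^(n-k)."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

# Bernoulli(p) is the n = 1 special case.
p = 0.5
print(binomial_pmf(1, 1, p))        # P(X = 1) = 0.5 for a fair coin

# Ten fair-coin flips: probability of exactly 5 heads.
print(binomial_pmf(5, 10, 0.5))     # ~0.246

# Mean np and variance np(1 - p) for n = 10, p = 0.5.
n = 10
print(n * p, n * p * (1 - p))       # 5.0 2.5
```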

Regression and Modeling

In statistical modeling, binary data often serves as the response variable in regression analyses where the outcome is dichotomous, such as success/failure or presence/absence. Logistic regression is a foundational method for this purpose, modeling the probability of the positive outcome through the logit link function. For a binary response Y taking values 0 or 1, the model specifies

\log\left(\frac{P(Y=1 \mid X)}{1 - P(Y=1 \mid X)}\right) = \beta_0 + \beta_1 X,

where X is a predictor and \beta_0, \beta_1 are parameters estimated via maximum likelihood. The exponentiated coefficient \exp(\beta_1) represents the odds ratio, quantifying how the odds of Y=1 change with a one-unit increase in X.

An alternative to logistic regression is the probit model, which links the probability to the inverse cumulative distribution function of the standard normal distribution. Here, \Phi^{-1}(P(Y=1 \mid X)) = \beta_0 + \beta_1 X, where \Phi is the standard normal CDF, providing a similar interpretation but assuming an underlying normal latent variable. Probit models are particularly common in econometrics and biostatistics for their connection to threshold models of decision-making.

Binary data can also appear as predictors in regression models, typically encoded as dummy variables that take values 0 or 1 to represent categories. In a linear regression context, including a dummy variable D shifts the intercept by \beta_D when D=1, with the coefficient interpreted as the average difference in the response between the two groups, holding other variables constant. This approach allows categorical binary information, such as treatment versus control, to be incorporated without assuming a numeric scale.

Evaluating models with binary outcomes requires metrics that assess classification performance rather than fit to a continuous response. The area under the receiver operating characteristic curve (AUC-ROC) measures the model's ability to discriminate between classes, with values ranging from 0.5 (random) to 1 (perfect separation); it represents the probability that a randomly chosen positive instance ranks higher than a negative one. For instance, in predicting disease presence (coded as Y=1) from clinical predictors, an AUC-ROC of 0.85 indicates strong discriminatory power for identifying at-risk patients.
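As a sketch of how the logit model maps coefficients to probabilities and odds ratios, the Python snippet below uses hypothetical coefficient values chosen only for illustration, not fitted to any data:

```python
import math

def logistic_prob(x, b0, b1):
    """P(Y = 1 | X = x) under the logit model: inverse of the log-odds."""
    log_odds = b0 + b1 * x
    return 1 / (1 + math.exp(-log_odds))

# Hypothetical fitted coefficients, for illustration only.
b0, b1 = -2.0, 0.8

print(logistic_prob(0.0, b0, b1))   # ~0.119
print(logistic_prob(3.0, b0, b1))   # ~0.599

# The odds ratio exp(b1): each one-unit increase in X multiplies
# the odds of Y = 1 by this factor.
print(math.exp(b1))                 # ~2.226
```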

Computer Science Usage

Representation and Encoding

Binary data in computing systems is typically represented using standardized encoding schemes that map higher-level data types, such as characters and numbers, into fixed or variable-length sequences of bits. These encodings ensure compatibility across hardware and software platforms for storage and transmission. One of the earliest and most foundational schemes is the American Standard Code for Information Interchange (ASCII), which uses 7 bits to represent 128 characters, including uppercase and lowercase letters, digits, and control symbols; for example, the character 'A' is encoded as 01000001 in binary. Extended 8-bit versions, such as ISO-8859-1, allow for 256 characters by utilizing the full byte, accommodating additional symbols like accented letters. For broader international support, modern systems employ UTF-8, a variable-length encoding of the Unicode character set that uses 1 to 4 bytes per character, preserving ASCII compatibility for the first 128 code points while efficiently handling over a million possible characters with longer sequences for rarer symbols. This scheme is particularly advantageous for transmission, as it minimizes bandwidth for English text (1 byte per character) while scaling for multilingual content, such as encoding the Unicode character U+1F600 (grinning face) as the 4-byte sequence 11110000 10011111 10011000 10000000.

Numerical values are encoded in binary using conventions that support both integers and floating-point numbers. Signed integers commonly use two's complement representation, where the most significant bit indicates the sign (0 for positive, 1 for negative), and negative values are formed by inverting all bits of the positive value and adding 1; for instance, -5 in 8-bit two's complement is 11111011, allowing arithmetic operations to treat positive and negative numbers uniformly without special hardware. Floating-point numbers follow the IEEE 754 standard, which defines binary formats with a sign bit, an exponent field, and a mantissa (significand); the single-precision (32-bit) format allocates 1 bit for the sign, 8 bits for the biased exponent, and 23 bits for the mantissa, enabling representation of numbers from approximately ±1.18 × 10⁻³⁸ to ±3.40 × 10³⁸ with about 7 decimal digits of precision.

In file formats, binary data contrasts with text files by storing information in a machine-readable layout rather than human-readable characters, often without line endings or delimiters that imply textual structure; text files encode content as sequences of printable characters (e.g., via ASCII), while binary files directly embed raw bytes for efficiency. A prominent example is the JPEG image format, where the file header begins with the bytes FF D8 (Start of Image marker) followed by application-specific metadata in marker segments, such as JFIF identifiers and quantization tables, before the compressed image data.

Compression techniques tailored to binary data exploit its bit-level simplicity, particularly for repetitive patterns. Run-length encoding (RLE) is a lossless method ideal for binary images, where sequences of identical bits (runs of 0s or 1s) are replaced by a count and the bit value; for a row like 0000011110 in a bilevel image, it might be encoded as (5 zeros, 4 ones, 1 zero), reducing storage for sparse or uniform regions like scanned documents. This approach achieves high compression ratios on binary data due to the prevalence of long runs, though it performs poorly on complex patterns.
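The encodings described above can be inspected directly in Python; a minimal sketch reproducing the 'A', U+1F600, two's-complement, IEEE 754, and RLE examples from the text:

```python
import struct
from itertools import groupby

# ASCII: 'A' is code point 65 = 01000001.
print(format(ord("A"), "08b"))                    # 01000001

# UTF-8: U+1F600 encodes as the 4 bytes F0 9F 98 80.
print("\N{GRINNING FACE}".encode("utf-8").hex())  # f09f9880

# Two's complement: -5 masked to 8 bits gives the pattern 11111011.
print(format(-5 & 0xFF, "08b"))                   # 11111011

# IEEE 754 single precision: sign, exponent, mantissa fields of 1.0
# (sign 0, biased exponent 01111111, mantissa all zeros).
bits = struct.unpack(">I", struct.pack(">f", 1.0))[0]
print(format(bits, "032b"))

# Run-length encoding of a binary row as (bit, count) pairs.
row = "0000011110"
print([(b, len(list(g))) for b, g in groupby(row)])  # [('0', 5), ('1', 4), ('0', 1)]
```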

Storage and Operations

Binary data is stored in memory as sequences of bits, where each bit represents either a 0 or 1 state. In random-access memory (RAM), particularly dynamic RAM (DRAM), individual bits are stored using capacitors that hold a charge to indicate 1 or discharge for 0, refreshed periodically to prevent data loss. Static RAM (SRAM), used in caches, employs flip-flops (circuits that maintain state using feedback transistors) to store bits without refresh. Read-only memory (ROM) stores bits more permanently, often via fuses, mask programming, or floating-gate transistors in flash memory, ensuring data persistence even without power. Memory is organized into bytes, each comprising 8 bits, allowing efficient addressing and access. Addresses themselves are binary numbers, with the central processing unit (CPU) using these to locate specific bytes via address lines in hardware.

Arithmetic operations on binary data mimic decimal processes but operate bit by bit. Binary addition sums two bits plus any carry-in, producing a sum bit and carry-out: for instance, 0 + 0 = 0 (no carry), 1 + 1 = 0 with carry 1, and 1 + 1 + 1 (with carry-in) = 1 with carry 1. Subtraction uses two's complement representation, where the subtrahend is inverted and added to the minuend plus 1, propagating borrows akin to carries. Multiplication is achieved through shifts and additions: the multiplicand is shifted left (multiplying by 2) for each bit in the multiplier and added to an accumulator when the corresponding multiplier bit is 1, as in the basic shift-and-add algorithm.

Bitwise operations enable direct manipulation of binary representations for tasks like masking or flag testing. The AND operation (&) outputs 1 only if both input bits are 1, useful for clearing bits; for example, 1010 & 1100 = 1000 (decimal 10 AND 12 = 8). OR (|) outputs 1 if at least one input is 1, setting bits; XOR (^) outputs 1 for differing bits, aiding parity checks; and NOT (~) inverts all bits (0 to 1, 1 to 0). Left shift (<<) moves bits toward higher significance, equivalent to multiplication by powers of 2 (e.g., x << 1 = x * 2), while right shift (>>) divides by powers of 2, filling with zeros (unsigned) or copies of the sign bit (signed).

In CPU processing, the arithmetic logic unit (ALU) executes these operations on binary data fetched from registers or memory. The ALU handles bitwise logic, shifts, and arithmetic using combinational circuits like full adders for carries. Instructions are encoded as binary opcodes; in x86 architecture, for example, the ADD operation between registers uses opcode 0x01 followed by a ModR/M byte specifying operands, such as 01 18 for ADD [EAX], EBX in certain encodings. This binary encoding allows the CPU to decode and route signals to the ALU for execution.
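A brief Python sketch of the bitwise operators and the shift-and-add multiplication described above (Python's own * operator is the idiomatic choice; the loop is shown only to mirror the algorithm):

```python
a, b = 0b1010, 0b1100        # decimal 10 and 12

print(bin(a & b))            # 0b1000: AND clears bits (10 & 12 = 8)
print(bin(a | b))            # 0b1110: OR sets bits (10 | 12 = 14)
print(bin(a ^ b))            # 0b110:  XOR marks differing bits
print(a << 1)                # 20: left shift multiplies by 2
print(a >> 1)                # 5:  right shift divides by 2

def shift_and_add(x, y):
    """Multiply two non-negative integers using shifts and adds only."""
    acc = 0
    while y:
        if y & 1:            # is the low bit of the multiplier set?
            acc += x         # add the shifted multiplicand
        x <<= 1              # multiplicand times 2
        y >>= 1              # move to the next multiplier bit
    return acc

print(shift_and_add(10, 12))  # 120
```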

Broader Applications

In Digital Electronics and Engineering

In digital electronics, binary data forms the foundation of logic circuits, where binary states (0 and 1) are represented and manipulated using electronic components to perform computational operations. Logic gates are the basic building blocks, implementing Boolean functions through transistor networks that process binary inputs to produce binary outputs. These gates enable the construction of complex systems by combining simple binary decisions. The fundamental logic gates include AND, OR, NOT, and XOR, each defined by a truth table that specifies outputs for all possible binary input combinations. For the AND gate, the output is 1 only if both inputs are 1; otherwise, it is 0.
A | B | AND output
0 | 0 | 0
0 | 1 | 0
1 | 0 | 0
1 | 1 | 1
The OR gate outputs 1 if at least one input is 1.
A | B | OR output
0 | 0 | 0
0 | 1 | 1
1 | 0 | 1
1 | 1 | 1
The NOT gate inverts a single binary input, outputting 1 for input 0 and vice versa.
A | NOT output
0 | 1
1 | 0
The XOR gate outputs 1 if the inputs differ.
A | B | XOR output
0 | 0 | 0
0 | 1 | 1
1 | 0 | 1
1 | 1 | 0
These gates are physically realized using transistors, such as in CMOS technology, where complementary networks of NMOS and PMOS transistors arranged in series and parallel control binary signal flow. For instance, a CMOS NAND gate uses two PMOS transistors in parallel for the pull-up network and two NMOS transistors in series for pull-down (an inverter on the output yields an AND gate), ensuring low power consumption by avoiding direct paths from supply to ground except during transitions.

Sequential circuits extend combinational logic by incorporating memory elements to store states, allowing outputs to depend on both current inputs and prior states. Flip-flops, such as the SR latch, serve as basic storage units, maintaining a value until changed by set (S) or reset (R) inputs. An SR latch, built from two cross-coupled NOR gates, sets the output Q to 1 when S=1 and R=0, resets it to 0 when S=0 and R=1, and holds the stored value when both are 0; the input combination S=1 and R=1 is typically avoided to prevent instability. Clocks synchronize these operations in larger systems, using periodic pulses to trigger changes only at defined intervals, ensuring coordinated updates across multiple flip-flops in synchronous designs.

Binary signals in digital electronics are encoded as distinct voltage levels to represent 0 and 1 reliably. In Transistor-Transistor Logic (TTL) families, a low voltage near 0V (typically 0 to 0.8V) denotes 0, while a high voltage near 5V (2V to 5V) denotes 1, providing clear thresholds for interpretation by subsequent gates. This voltage separation grants binary signals inherent noise immunity, as minor perturbations (e.g., from electromagnetic interference) are unlikely to flip the state across the wide margin between low and high levels, enabling robust transmission over wires or buses.

Applications of binary data in digital engineering include analog-to-digital converters (ADCs), which sample continuous analog signals and quantize them into binary codes for processing. Successive approximation ADCs, for example, iteratively compare the input voltage against binary-weighted references to generate an n-bit output, where each bit corresponds to a halving of the voltage resolution. In microcontrollers like the Arduino, digital I/O is handled via pins configured as inputs to read binary states from sensors (e.g., switches) or outputs to drive binary signals to actuators (e.g., LEDs), with pin 13 often used for a built-in LED to visualize binary high (5V) or low (0V).
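Gate behavior and latch state-holding can be mimicked in software; the following Python sketch models the truth tables above and one update step of a NOR-style SR latch (a functional abstraction, not a circuit-level simulation):

```python
# Truth-table gates as functions of bits (0/1).
def AND(a, b): return a & b
def OR(a, b):  return a | b
def XOR(a, b): return a ^ b
def NOT(a):    return 1 - a

for a in (0, 1):
    for b in (0, 1):
        print(a, b, AND(a, b), OR(a, b), XOR(a, b))

def sr_latch(s, r, q):
    """One update step of an SR latch; (s, r) = (1, 1) is avoided."""
    if s and not r:
        return 1          # set
    if r and not s:
        return 0          # reset
    return q              # hold the previous state

q = 0
q = sr_latch(1, 0, q)     # set   -> q = 1
q = sr_latch(0, 0, q)     # hold  -> q stays 1
q = sr_latch(0, 1, q)     # reset -> q = 0
print(q)
```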

In Biological and Physical Sciences

In biological sciences, deoxyribonucleic acid (DNA) serves as a primary carrier of genetic information, structured as a double helix composed of four nucleotide bases: adenine (A), thymine (T), cytosine (C), and guanine (G). The complementary base pairing (A with T, and C with G) ensures accurate replication and repair. The quaternary nature of DNA encoding allows for dense information storage, and in bioinformatics, binary representations are used for biallelic genetic variations, such as single nucleotide polymorphisms (SNPs), which occur at specific genomic positions and are commonly encoded as 0 for the major allele and 1 for the minor allele. Such binary representations enable efficient storage and processing of SNP data in genome-wide association studies, where genotypes are scored as binary matrices to identify genetic markers associated with traits or diseases.

In quantum computing, a qubit represents the fundamental unit of quantum information, analogous yet distinct from classical binary bits. A qubit exists in a superposition of basis states denoted as |0\rangle and |1\rangle, described by a linear combination \alpha |0\rangle + \beta |1\rangle, where \alpha and \beta are complex amplitudes satisfying |\alpha|^2 + |\beta|^2 = 1. Unlike classical bits fixed in 0 or 1, this superposition allows qubits to process multiple states simultaneously until measurement, at which point the wave function collapses probabilistically to either |0\rangle or |1\rangle according to the Born rule, yielding a binary outcome. This collapse to binary distinguishes quantum from classical binary data, enabling exponential computational advantages in algorithms like Shor's for factorization, though practical implementations face decoherence challenges.

In physical sciences, binary states manifest in fundamental particle properties and astronomical systems. Spin-1/2 particles, such as electrons and protons, possess intrinsic angular momentum with two possible projections along a quantization axis: +1/2 (spin-up) or -1/2 (spin-down), forming a natural binary dichotomy that underpins phenomena from the Pauli exclusion principle to nuclear magnetic resonance. These states can be measured to yield binary outcomes, analogous to bits, and their correlations in entangled systems inform the physics of binary information. In astronomy, binary stars constitute gravitationally bound two-body systems where two stars orbit a common center of mass, comprising about half of all stellar systems and serving as key probes for stellar evolution and mass determination. Notable examples include binary pulsars, compact systems of a neutron star and a companion star whose pulsed radio signals exhibit orbital Doppler shifts, enabling precise tests of general relativity, such as the orbital decay from gravitational radiation predicted by Einstein's theory.

Binary states also appear in simulations of physical systems, as in Monte Carlo methods applied to statistical physics. The Ising model, a cornerstone for studying phase transitions, represents magnetic materials as lattices of binary spins (±1 or 0/1), where Monte Carlo simulations randomly flip spins to sample equilibrium configurations and compute properties like magnetization and specific heat. These digital simulations of binary physical states efficiently model complex phenomena, such as ferromagnetism, by approximating Boltzmann distributions without solving the full partition function.
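As an illustration of binary spins in simulation, here is a minimal Metropolis Monte Carlo sketch of a one-dimensional Ising chain; the lattice size, coupling (J = 1), temperature, and step count are arbitrary illustrative choices:

```python
import math
import random

N, T, STEPS = 100, 2.0, 10_000
spins = [random.choice((-1, 1)) for _ in range(N)]   # binary spins ±1

def energy_change(i):
    """Energy change from flipping spin i (periodic boundary, J = 1)."""
    left, right = spins[(i - 1) % N], spins[(i + 1) % N]
    return 2 * spins[i] * (left + right)

for _ in range(STEPS):
    i = random.randrange(N)
    dE = energy_change(i)
    # Metropolis rule: always accept downhill moves; accept uphill
    # moves with Boltzmann probability exp(-dE / T).
    if dE <= 0 or random.random() < math.exp(-dE / T):
        spins[i] = -spins[i]

print(sum(spins) / N)   # average magnetization of the sampled state
```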
