
Bitwise operation

Bitwise operations are fundamental procedures in computing and digital electronics that manipulate the individual bits (binary digits of 0 or 1) within the binary representation of data, such as integers, enabling low-level control over processing at the machine level. These operations differ from arithmetic or logical operations by directly applying Boolean functions to corresponding bits across operands, often performed by the CPU's arithmetic logic unit (ALU) for efficiency in tasks like data compression, encryption, and optimization.

The most common bitwise operators include AND (&), OR (|), XOR (^), NOT (~), and shift operators such as left shift (<<) and right shift (>>). The AND operator produces a 1 in each bit position only if both input bits are 1, as defined by its truth table: 0&0=0, 1&0=0, 0&1=0, 1&1=1; this is useful for masking or clearing specific bits. Similarly, OR sets a bit to 1 if at least one input bit is 1 (truth table: 0|0=0, 1|0=1, 0|1=1, 1|1=1), enabling bit setting or union operations. The XOR operator outputs 1 when input bits differ (truth table: 0^0=0, 1^0=1, 0^1=1, 1^1=0), commonly used for toggling bits or error detection. NOT inverts all bits of a single operand (0 becomes 1, and vice versa), while shift operators relocate bits horizontally, with left shifts multiplying by powers of 2 and right shifts dividing, though behavior on signed integers can vary by implementation.

In practice, bitwise operations are essential for low-level programming in languages like C and assembly, where they facilitate efficient data packing, flag manipulation in registers, and hardware interfacing in embedded systems. They also play a key role in algorithms for cryptography (e.g., bit scrambling), computer graphics (e.g., pixel manipulation), and network protocols (e.g., header field extraction), offering performance advantages over higher-level abstractions by reducing computational overhead.

Fundamentals

Definition and Purpose

Bitwise operations are fundamental primitives in computing that manipulate the individual bits of binary numbers, treating them as sequences of 0s and 1s without regard to their interpreted numerical value. These operations enable direct interaction with the binary representation of data, allowing for precise control over bit-level details in integers or other fixed-width data types. The purpose of bitwise operations lies in providing efficient, low-level mechanisms for data manipulation, which are essential for optimizing algorithms, interfacing with hardware, and conserving resources in constrained environments. They facilitate tasks such as extracting specific bits (masking), combining flags into compact structures, and implementing high-performance computations that avoid the overhead of higher-level abstractions. In systems programming and performance-critical applications, bitwise operations often outperform equivalent arithmetic approaches due to their simplicity and direct hardware support.

Bitwise operations trace their origins to the foundational designs of early digital computers in the 1940s, particularly in John von Neumann's 1945 "First Draft of a Report on the EDVAC," which specified a central arithmetic unit capable of performing both arithmetic and logical operations on fixed-length machine words. This inclusion of logical operations, which equate to bitwise manipulations, became integral to the arithmetic logic unit (ALU) in the von Neumann architecture that underpins most modern processors. Unlike arithmetic operations, which process binary data as numerical quantities to yield mathematically meaningful results, bitwise operations focus exclusively on the structural patterns of bits, enabling independent bit-level transformations.

Binary Representation

In computing, a bit is the fundamental unit of information, representing a binary digit that can hold one of two values: 0 or 1. This binary nature stems from the physical implementation in digital circuits, where a bit is typically stored as a voltage level—low for 0 and high for 1. Bits are grouped into larger units for practical storage and processing; a byte consists of exactly 8 bits, capable of representing 256 distinct values (from 0 to 255 in unsigned form). Beyond bytes, data is often organized into words, which are fixed-size sequences of bits processed as a single unit by a computer's processor—commonly 32 bits or 64 bits in modern architectures, depending on the instruction set.

When representing multi-byte values, such as integers spanning multiple bytes, the ordering of bytes in memory introduces the concept of endianness. Big-endian systems store the most significant byte (containing the highest-order bits) at the lowest memory address, followed by less significant bytes in increasing order of address. In contrast, little-endian systems reverse this, placing the least significant byte at the lowest address and the most significant byte at the highest. This byte-ordering convention affects how data is interpreted across different hardware architectures, such as x86 processors (little-endian) versus some network protocols (big-endian).

Integer representations also distinguish between signed and unsigned formats, impacting how bit patterns encode values. Unsigned integers use all bits for magnitude, allowing a range from 0 to $2^n - 1$ for $n$ bits. Signed integers, however, reserve the most significant bit as a sign bit (0 for positive, 1 for negative) and employ two's complement encoding for negative numbers to simplify arithmetic operations. In two's complement, a negative number is formed by inverting all bits of its positive counterpart and adding 1, which extends the range to approximately half positive and half negative values (e.g., -128 to 127 for 8 bits).
This method ensures that the bit patterns for negative values align seamlessly with unsigned patterns for addition and subtraction, avoiding separate handling for signs. For illustration, the number 5 is represented in binary as 101, corresponding to $1 \times 2^2 + 0 \times 2^1 + 1 \times 2^0 = 4 + 1 = 5$. In fixed-width formats like an 8-bit byte, this is padded with leading zeros to 00000101, preserving the value while filling the available bits. For a signed two's complement representation in 8 bits, -5 would be the inversion of 00000101 (yielding 11111010) plus 1, resulting in 11111011.

Core Bitwise Operators

NOT Operation

The bitwise NOT operation, also known as the one's complement, is a unary operator that inverts the bits of its integer operand, transforming every 0 to 1 and every 1 to 0 in the binary representation. In most programming languages, including C, C++, Java, and Python, this operator is denoted by the tilde symbol (~). In systems using two's complement representation for signed integers—which is the predominant method in modern computer architectures—the bitwise NOT of a number $x$ is mathematically equivalent to $-x - 1$. This equivalence arises because inverting all bits of $x$ yields the one's complement, and adding 1 would produce the two's complement negation; thus, the inversion alone subtracts 1 from the negation. For example, consider an 8-bit representation of the decimal number 5, which is $00000101_2$. Applying the NOT operation yields $11111010_2$, equivalent to 250 in unsigned interpretation but -6 in signed two's complement (since -5 - 1 = -6). The effect of the NOT operation differs between unsigned and signed integers. For unsigned types, it simply produces the maximum value of the type minus the operand (e.g., for 8-bit unsigned, $\sim x = 255 - x$), without any sign-related effects. For signed types in two's complement, it inverts the sign bit along with the others, typically flipping the sign of the value; on two's complement systems the result is well-defined for every representable input. The operation can be summarized in a single-bit truth table:
| Input | Output |
|-------|--------|
| 0     | 1      |
| 1     | 0      |
This table illustrates the inversion for each bit independently.

AND Operation

The bitwise AND is a binary operation that performs a logical AND on each pair of corresponding bits in two numbers, producing a result bit of 1 only if both input bits are 1, and 0 otherwise. This operation is commonly denoted by the symbol & in most programming languages and mathematical contexts. It can be viewed as a per-bit conjunction, where the operation aligns the bits of the operands by their position and applies the AND independently to each pair. The truth table for the bitwise AND operation, showing all possible input combinations for single bits, is as follows:
| Input A | Input B | A & B |
|---------|---------|-------|
| 0       | 0       | 0     |
| 0       | 1       | 0     |
| 1       | 0       | 0     |
| 1       | 1       | 1     |
This table illustrates the conjunctive nature of the operation, where the output is true (1) exclusively when both inputs are true (1). For example, consider the numbers 00000101 (decimal 5) and 00000011 (decimal 3). Applying bitwise AND bit by bit yields 00000001 (decimal 1), as only the least significant bit is 1 in both operands. This demonstrates how the operation clears bits that are not set in both inputs, preserving only the common set bits. A primary use of the bitwise AND operation is in masking, where it extracts or isolates specific bits from a value by ANDing with a mask that has 1s in the desired positions and 0s elsewhere. For instance, to check if a number is even or odd, one can perform the operation with the mask 00000001 (decimal 1), which examines the least significant bit: a result of 0 indicates even, while 1 indicates odd. This technique is efficient for bit-level manipulations in low-level programming and hardware design.

OR Operation

The bitwise OR operation, denoted by the symbol |, is a binary bitwise operation that performs a logical OR on each pair of corresponding bits in two operands. For each bit position, the result is 1 if at least one of the input bits is 1, and 0 otherwise. The truth table for the bitwise OR operation is as follows:

| Input A | Input B | Output (A OR B) |
|---------|---------|-----------------|
| 0       | 0       | 0               |
| 0       | 1       | 1               |
| 1       | 0       | 1               |
| 1       | 1       | 1               |

This table illustrates all possible combinations for a single bit pair. For example, applying bitwise OR to the 8-bit representations of 5 (00000101) and 2 (00000010) yields 7 (00000111), as the result combines the 1s from both inputs without overlap resolution. A primary use of the bitwise OR is setting specific bits in a value, such as enabling flags in a configuration register by ORing the value with a mask containing 1s in the desired positions (e.g., x = x | 0x01 sets the least significant bit). This is common in low-level programming for tasks like assembling bit-packed structures or activating options in bitmasks.

XOR Operation

The bitwise XOR (exclusive OR) operation is a fundamental binary bitwise operator that compares two input bits and outputs 1 only if the bits differ, and 0 if they are the same. This per-bit comparison results in a logical exclusive OR, where the output bit is set when exactly one of the input bits is 1. In digital logic and computing, XOR is defined as performing this operation independently on each corresponding pair of bits in the binary representations of two operands of equal length. The truth table for the two-input XOR operation is as follows:
| Input A | Input B | Output (A XOR B) |
|---------|---------|------------------|
| 0       | 0       | 0                |
| 0       | 1       | 1                |
| 1       | 0       | 1                |
| 1       | 1       | 0                |
This table illustrates the core behavior: the output is 1 for differing inputs (01 or 10) and 0 for identical inputs (00 or 11). In programming languages such as C, Java, and Python, the XOR operator is denoted by the symbol ^, which applies the operation bit by bit to integer operands. For example, performing XOR on the 8-bit values 00000101 (decimal 5) and 00000011 (decimal 3) yields 00000110 (decimal 6), as the differing bits in positions 1 and 2 (0-based from the right) result in 1s, while matching bits remain 0. Mathematically, bitwise XOR is equivalent to addition modulo 2 without carry for each bit position, aligning it with operations in modular arithmetic and error-correcting codes. A distinctive property of XOR is that it is self-inverse: applying the operation with the same operand twice returns the original value, expressed as $x \oplus y \oplus y = x$, since $y \oplus y = 0$ and $x \oplus 0 = x$. This involutory nature enables efficient bit toggling—such as flipping specific bits in a value by XORing with a mask—and underpins its utility in reversible data transformations and basic cryptographic primitives.

Bit Shifting Operations

Logical Shift

A logical shift is a bitwise operation that shifts the bits of an operand to the left or right by a specified number of positions, filling the empty bit positions with zeros regardless of the operand's sign. In a left logical shift, bits move toward the most significant bit (MSB), with zeros inserted into the least significant bit (LSB) positions; bits shifted beyond the MSB are typically discarded. Conversely, a right logical shift moves bits toward the LSB, filling the new MSB positions with zeros, and discards bits shifted beyond the LSB. This operation treats the binary representation as a simple sequence of bits without regard to numerical sign, making it suitable for unsigned integers. For unsigned integers, a left logical shift by $n$ positions is mathematically equivalent to multiplying the value by $2^n$, as each shift effectively doubles the value by repositioning bits to higher place values. Similarly, a right logical shift by $n$ positions corresponds to division by $2^n$, truncating the result by discarding the least significant bits. These equivalences hold because the operation preserves the unsigned interpretation and avoids sign-related adjustments. For example, consider the 8-bit unsigned value 00000101 (decimal 5): shifting left by 1 position yields 00001010 (decimal 10, equivalent to $5 \times 2^1$); shifting right by 1 position yields 00000010 (decimal 2, equivalent to $5 \div 2^1$, truncated). In programming languages such as C and its derivatives, the left shift operator is denoted by <<, which always performs a logical shift by filling with zeros. The right shift operator >> behaves as a logical shift for unsigned types, ensuring zero-filling of the MSB positions, while for signed types it may perform an arithmetic shift instead. This distinction allows developers to manipulate bit patterns predictably for unsigned data, enabling efficient implementations of multiplication and division by powers of 2 without invoking the processor's arithmetic units.
The operation's behavior is defined at the hardware level in instruction sets like x86 and MIPS, where dedicated instructions (e.g., sll for shift left logical and srl for shift right logical) execute the zero-filling semantics.

Arithmetic Shift

An arithmetic shift is a bit-shifting operation designed for signed integers in two's complement representation, where the goal is to preserve the number's sign during the shift. In a right arithmetic shift, the bits vacated on the left are filled with copies of the original sign bit—0 for positive numbers and 1 for negative numbers—effectively extending the sign to approximate division by a power of two. The left arithmetic shift behaves identically to a logical left shift, filling vacated bits on the right with zeros, which approximates multiplication by a power of two. For example, consider the 8-bit representation of -5, which is 11111011. A right arithmetic shift by 1 bit produces 11111101, equivalent to -3 in two's complement, approximating the result of -5 divided by 2 (with truncation toward negative infinity). This ensures the shifted value remains negative for negative inputs, distinguishing it from logical shifts that always fill with zeros and are typically used for unsigned integers. Arithmetic shifts are particularly useful for efficient signed division by powers of two, as the sign-preserving behavior aligns with arithmetic semantics. In programming languages, the right-shift operator >> often implements arithmetic shifting for signed types; for instance, in Java, >> performs an arithmetic shift, while the unsigned right-shift operator >>> fills with zeros regardless of sign. In C, the behavior of >> on signed integers is implementation-defined but commonly arithmetic on two's complement systems. However, arithmetic shifts—especially left shifts on signed integers—can introduce issues related to overflow. If a left shift causes the result to exceed the representable range (e.g., shifting a value such that bits enter the sign bit position), it may invoke undefined behavior in languages like C, where the standard does not specify outcomes for such cases. Programmers must ensure shifts do not overflow to avoid portability problems across implementations.

Circular Shift

A circular shift, also known as a bit rotation, is an operation that moves the bits of a value in a cyclic manner, where the bits shifted out from one end of the operand are reintroduced at the opposite end, preserving the total number of bits without loss or gain. This contrasts with non-circular shifts, such as logical shifts, which discard the overflow bits and fill the vacated positions with zeros. Circular shifts exist in two primary directions: left (or rotate left), which moves bits toward the most significant bit (MSB) with overflow from the MSB wrapping to the least significant bit (LSB), and right (or rotate right), which moves bits toward the LSB with overflow from the LSB wrapping to the MSB. There are two main subtypes of circular shifts: pure rotation, which operates solely on the bits within the operand without involving external flags, and rotation through carry, which incorporates a carry flag in the process. In pure rotation, the operation is self-contained; for example, a left rotation shifts all bits leftward, directly inserting the overflow bit(s) from the MSB into the LSB position(s). In contrast, rotation through carry uses the processor's carry flag: the bit shifted out updates the carry flag, and the previous carry flag value is inserted into the vacated position, effectively treating the carry as an extension of the operand. To illustrate, consider the 8-bit value 00010111 (decimal 23). A left rotation by 1 position yields 00101110 (decimal 46), as the MSB (0) wraps to the LSB, and all other bits move left. Similarly, a right rotation by 1 position on the same value produces 10001011 (decimal 139), with the LSB (1) wrapping to the MSB. Circular shifts are commonly implemented in assembly languages using instructions such as ROL (rotate left) and ROR (rotate right) for pure rotations, with variants like RCL and RCR for rotations through carry.
They find applications in hash functions, where operations like cyclic shifts help generate pseudorandom outputs efficiently, as seen in chaotic hash algorithms that employ variable circular shifts for one-way compression. Additionally, circular shifts underpin cyclic error-correcting codes, where the code's invariance under cyclic permutations enables efficient encoding and computation via linear feedback shift registers.

Language Implementations

C Family Languages

In the C family of languages, including C and C++, bitwise operations are fundamental for low-level manipulation of integer types, providing direct control over binary representations. These operations apply to operands after integer promotions or usual arithmetic conversions, enabling efficient bit-level computations essential for systems programming. The core bitwise operators are the unary NOT (~), binary AND (&), OR (|), and XOR (^), along with the shift operators left shift (<<) and right shift (>>). The bitwise NOT operator (~) inverts all bits of its operand, equivalent to subtracting the value from the maximum representable value in the promoted type (e.g., for a 32-bit unsigned integer, ~x = 0xFFFFFFFF - x). The binary AND (&) sets each bit in the result to 1 only if both corresponding bits in the operands are 1. The OR (|) sets a bit to 1 if at least one corresponding bit in the operands is 1, while XOR (^) sets it to 1 if the bits differ. These operations perform usual arithmetic conversions on operands, promoting them to a common type, typically int or unsigned int, before applying the bitwise logic. Shift operations move bits left (<<) or right (>>) by the number of positions specified by the right operand. The left shift fills vacated bits with zeros and discards overflowed bits, effectively multiplying by powers of two for positive values within the type's range. The right shift behaves similarly but fills vacated bits with zeros for unsigned types; for signed integers, the behavior is implementation-defined, often preserving the sign bit (arithmetic shift) to maintain the sign for negative values. In C++, the behavior aligns closely with C, though C++20 tightened some shift semantics for consistency. Operator precedence in C family languages places unary ~ at a high level (precedence 2, right-to-left associativity), above multiplicative operators (*, /, % at precedence 3). The shifts (<<, >>) follow at precedence 5 (left-to-right), below additive operators (+, - at precedence 4) but above relational operators.
The binary bitwise operators then occur at lower levels: & at precedence 8, ^ at 9, and | at 10, all left-to-right, positioned after equality operators (==, != at precedence 7) but before logical AND (&& at precedence 11). This hierarchy gives bitwise operations lower priority than comparisons but higher priority than assignments (precedence 14, right-to-left). Parentheses are recommended for clarity in mixed expressions. Type considerations are critical: for unary ~ and shifts, the operand undergoes integer promotion to int or unsigned int if smaller. Binary bitwise operators (&, |, ^) use usual arithmetic conversions, balancing signedness and potentially promoting to unsigned if one operand is unsigned. Shifts promote each operand independently via integer promotion, with the result taking the type of the promoted left operand; behavior is undefined if the right operand is negative or greater than or equal to the bit width of the promoted left operand (e.g., shifting a 32-bit int by 32 or more). For signed left shifts, if the result is not representable in the type, behavior is undefined per C11 (ISO/IEC 9899:2011, section 6.5.7). These rules promote portability but require careful handling to avoid undefined behavior, especially in C++ where similar constraints apply. A simple example demonstrates the AND operation:
```c
#include <stdio.h>

int main(void) {
    int x = 5 & 3;  // 5 is 101 in binary, 3 is 011; result is 001 (1)
    printf("%d\n", x);  // Outputs: 1
    return 0;
}
```
This illustrates bit masking, a common use where & extracts specific bits. Similar patterns apply to other operators, with C and C++ providing identical syntax for these fundamentals.

Java and JavaScript

In Java, bitwise operations are performed on the integral primitive types—byte, short, int, and long—with int (32 bits) and long (64 bits) being the primary types for such computations. The operators include & for bitwise AND, | for bitwise OR, ^ for bitwise XOR, ~ for bitwise NOT, << for left shift, >> for signed right shift (which preserves the sign bit via arithmetic shift), and >>> for unsigned right shift (which fills the leftmost bits with zeros regardless of the sign). These operations treat operands as two's complement representations, ensuring consistent handling of negative values where the sign bit is propagated in signed shifts. Unlike languages such as C++, Java does not support operator overloading, meaning the bitwise operators always apply directly to the primitive types without custom redefinition for user-defined classes. For example, the expression 0x2222 & 0x000F evaluates to 2 (0x2), as the bitwise AND masks the lower 4 bits. The >>> operator is particularly useful for treating negative int values as unsigned during right shifts, converting them to positive results by zero-filling. In JavaScript, bitwise operators similarly perform computations on 32-bit signed integers, automatically converting non-integer operands (such as floating-point numbers or other types) to signed 32-bit integers before applying the operation, with any excess bits discarded through truncation. The operators are &, |, ^, ~, << , >> (arithmetic right shift that preserves the sign bit), and >>> (logical right shift that zero-fills). Negative numbers are handled using two's complement representation, consistent with Java's approach. For instance, 5 | 2 evaluates to 7, as the binary 0101 | 0010 yields 0111. JavaScript's bitwise operations on standard Number types wrap around the 32-bit boundary; shifting large numbers, such as Math.pow(2, 53) << 1, truncates to fit the 32-bit signed integer range, potentially leading to unexpected results for values exceeding $2^{31} - 1$.
To handle arbitrary-precision integers, JavaScript provides BigInt support for most bitwise operators (e.g., 5n & 3n results in 1n), but >>> throws a TypeError on BigInt due to the lack of a fixed bit width. The syntax for these operators in both Java and JavaScript derives from the C family languages, facilitating familiar usage in managed environments.

Other Languages

In Pascal, bitwise operations are supported through keywords: and for bitwise AND, or for bitwise OR, xor for bitwise XOR, and not for bitwise NOT, while shifts are handled by shl for left shift and shr for right shift. These operations apply to fixed-width integer types, such as the 32-bit Integer in most implementations, where overflow behavior depends on the compiler but typically wraps around. Python provides bitwise operators including & for AND, | for OR, ^ for XOR, ~ for NOT (inversion), << for left shift, and >> for right shift, which operate on the two's complement representations of integers. Unlike fixed-width languages, Python's integers are arbitrary-precision, allowing operations on numbers of any size without overflow, though shifts on very large integers may be computationally expensive. Other languages exhibit variations in bitwise support. Rust mirrors C-style syntax with operators like &, |, ^, !, <<, and >>, but emphasizes safety through type-checked integers that guard against overflow in shifts, offering checked variants for explicit handling. In assembly languages, such as x86, bitwise operations are implemented directly via hardware instructions like AND, OR, XOR, NOT, SHL, SHR, and ROL for rotate left, enabling low-level bit manipulation tied to the processor's architecture. Some languages, like standard SQL, lack native bitwise operators, restricting such functionality to database-specific extensions (e.g., T-SQL's &, |, ^) applied only to integer types, while others limit them to prevent misuse in non-numeric contexts.

Applications

Data Manipulation

Bitwise operations play a crucial role in data manipulation by enabling precise control over individual bits within integers, facilitating tasks such as field extraction, value packing, and status checking without altering unrelated data. Bit masking is a primary technique for extracting or setting specific fields in a value, typically using the AND operator to isolate bits by applying a mask that has 1s in the desired positions and 0s elsewhere. For example, to extract the lowest 8 bits (a byte) from a 32-bit value, one performs value & 0xFF, which clears all higher-order bits while retaining the target field intact. To set bits in a field, the OR operator combines the original value with a mask containing the new bits to insert, ensuring other bits remain unchanged. Bit fields, or bit packing, allow multiple small values to be stored compactly within a single integer by assigning fixed bit widths to each component, which is particularly useful for memory-constrained environments or when transmitting data. In languages like C, bit fields are explicitly declared in structures to enforce this packing; for instance, a structure might define fields for day (5 bits), month (4 bits), and year (7 bits) within a single 16-bit integer, enabling efficient storage of date components. A common application is packing RGB color values into a 24-bit or 32-bit integer, where the red component (8 bits) can be extracted using (color & 0xFF0000) >> 16, isolating and shifting the relevant bits to form an independent byte. Flag operations leverage bitwise operations to manage status indicators or options as individual bits within an integer, allowing multiple flags to coexist in a single variable for compact representation. To test a flag, such as determining if a number is even by checking its least significant bit, one uses num & 1, which yields 0 for even numbers and 1 for odd, as the mask targets only that bit.
Setting a flag involves ORing the value with a shifted mask, like flags |= (1 << position), while clearing it requires ANDing with the inverted mask, flags &= ~(1 << position). In modern data serialization, bitwise operations support bit-packing to produce efficient binary formats, as exemplified by Protocol Buffers, where variable-length integer encoding (varint) packs values by storing fewer bytes for smaller numbers, reducing overall payload size during transmission. This approach, combined with packed representations for repeated primitive fields, minimizes storage and bandwidth needs in distributed systems.

Cryptography and Security

Bitwise operations, particularly the exclusive-or (XOR), form the cornerstone of many cryptographic primitives by enabling efficient mixing and diffusion of data while preserving computational security. In the one-time pad encryption scheme, perfect secrecy is achieved by XORing the plaintext message bit-by-bit with a random key of equal length, ensuring that the ciphertext reveals no information about the original message to an eavesdropper, as proven by Claude Shannon in his foundational work on secrecy systems. This property relies on XOR's self-inverse nature, where applying the same operation with the key decrypts the message. Block ciphers like the Advanced Encryption Standard (AES) incorporate bitwise operations extensively to provide confusion and diffusion across rounds of encryption. AES employs XOR in its AddRoundKey step to combine the state with a round key, alongside bitwise substitutions in SubBytes and linear mixes in MixColumns that operate on bytes using Galois field arithmetic, all of which contribute to its resistance against cryptanalytic attacks as standardized by NIST. These operations ensure that small changes in the input propagate unpredictably throughout the output, enhancing security for symmetric encryption. In hash functions such as SHA-256, bitwise operations drive the compression process to produce a fixed-size digest resistant to preimage and collision attacks. The algorithm uses right rotations (ROTR), XOR, and majority functions (MAJ) in its message schedule and compression steps, along with choices (CH) and sigma functions involving AND, XOR, and shifts, to achieve avalanche effects where altering one input bit affects roughly half the output bits, as detailed in the NIST Secure Hash Standard. These bitwise chains promote rapid diffusion, making SHA-256 suitable for integrity verification in protocols like digital signatures.
Cyclic redundancy checks (CRC) leverage XOR and bit shifts for efficient error detection in data transmission and storage. CRC computation treats data as a polynomial over GF(2), performing modulo-2 division via XOR-based shifts with a generator polynomial, which detects burst errors up to the degree of the polynomial with high probability, as originally formalized by W. Wesley Peterson. In post-quantum cryptography, lattice-based schemes selected by NIST, such as ML-KEM (derived from CRYSTALS-Kyber), increasingly rely on bitwise operations for masking secrets to mitigate side-channel attacks. Implementations use higher-order masking, splitting sensitive values into shares combined via bitwise AND and XOR, to protect against power analysis on lattice operations like number-theoretic transforms, ensuring security in quantum-resistant key encapsulation as standardized in 2024. This approach addresses vulnerabilities in arithmetic-heavy lattice computations, aligning with NIST's ongoing standardization efforts through 2025.

Performance Optimization

Bitwise operations provide substantial performance advantages in computing due to their implementation as single-cycle instructions on most modern processors, with latencies often around one cycle and high throughput, outperforming arithmetic operations or conditional logic that may involve multiple cycles or branch prediction overhead. For instance, bitwise AND can select values without branching, avoiding the penalties of mispredicted branches that can stall pipelines for 10-20 cycles on modern CPUs. Common optimization tricks leverage these operations for efficient code. The XOR swap technique exchanges two variables without temporary storage using three XOR assignments: x ^= y; y ^= x; x ^= y;, which can be slightly faster in register-bound scenarios by minimizing memory accesses, though compilers often inline equivalent optimizations for standard swaps. For parity computation (determining if the number of set bits is odd or even), successive XOR folds—such as v ^= v >> 16; v ^= v >> 8; v ^= v >> 4; v ^= v >> 2; v ^= v >> 1; return v & 1;—reduce the 32-bit word to a single bit in about nine operations, enabling fast checks in hashing or error detection without full population counts. Population count (popcount), useful for algorithms like Hamming-distance computation, can be computed via parallel prefix sums with shifts and masks, such as v = v - ((v >> 1) & 0x55555555); v = (v & 0x33333333) + ((v >> 2) & 0x33333333); followed by further reductions, achieving constant-time execution in 12 operations comparable to table lookups but without cache misses. Hardware accelerations further amplify these benefits through parallelism. Intel's Advanced Vector Extensions (AVX) and AVX-512 support bitwise operations on 256- or 512-bit vectors, allowing simultaneous processing of multiple 32- or 64-bit elements in a single instruction, which can yield 4-8x speedups in vectorizable workloads like data compression or image processing compared to scalar code.
On GPUs, bitwise operations benefit from massive parallelism across thousands of cores, with 32-bit integer bitwise throughput matching floating-point rates on recent architectures, enabling 10-100x gains in bulk workloads such as bit-parallel string matching over CPU equivalents. The ARM Scalable Vector Extension (SVE), introduced in Armv8-A and extended in 2025 updates, provides bitwise operations on vectors from 128 to 2048 bits, facilitating hardware-agnostic vector code for high-performance computing tasks such as large-scale simulations, with performance scaling with vector length. A related shortcut is replacing multiplication or division by powers of two with shifts; compilers perform this conversion automatically, and it can yield 2-5x reductions in inner-loop cost.

Theoretical Aspects

Boolean Algebra Integration

Bitwise operations are fundamentally rooted in Boolean algebra, where individual bits represent truth values (0 for false, 1 for true), enabling the manipulation of binary data through logical functions. This integration was pioneered by Claude Shannon in his master's thesis, published in 1938, which demonstrated how Boolean algebra could model the behavior of electrical relay and switching circuits, laying the groundwork for digital computation by treating switch states as binary propositions. In this framework, bitwise operators directly correspond to Boolean logical connectives: the bitwise AND (&) maps to conjunction (∧), producing 1 only if both inputs are 1; bitwise OR (|) maps to disjunction (∨), producing 1 if at least one input is 1; bitwise NOT (~) maps to negation (¬), inverting the input bit; and bitwise XOR (^) maps to exclusive or, producing 1 if the inputs differ. These mappings allow bits to function as atomic propositions in Boolean expressions, with operations preserving the algebra's semantic structure. When applied to multi-bit integers, bitwise operations extend pointwise across each corresponding bit position, treating the operands as vectors of independent Boolean variables. For instance, the bitwise AND of two n-bit numbers computes the conjunction of each bit pair separately, yielding an n-bit result whose i-th bit is a_i \wedge b_i. This parallelism enables efficient vectorized logical processing in hardware and software. Boolean algebraic laws apply seamlessly to these operations, facilitating simplification and optimization. Commutativity holds for AND and OR, such that x \wedge y = y \wedge x and x \vee y = y \vee x, allowing operand reordering without altering results. Associativity similarly applies, as (x \wedge y) \wedge z = x \wedge (y \wedge z) and (x \vee y) \vee z = x \vee (y \vee z), supporting grouped evaluations in expressions. These properties, inherited from the underlying algebra, ensure bitwise computations remain consistent and predictable across bit positions.

Properties and Identities

Bitwise operations on integers, interpreted as bit vectors, satisfy the same algebraic properties as operations in Boolean algebra, where AND corresponds to conjunction (∧), OR to disjunction (∨), NOT to negation (¬), and XOR to exclusive disjunction (⊕). These properties hold bit by bit across the operands, enabling simplification of expressions involving bitwise operators. Idempotence is a fundamental property of the AND and OR operations: for any bit vector x, x \& x = x and x | x = x. This means applying AND or OR with itself yields the original value, reflecting the idempotent nature of conjunction and disjunction in Boolean logic. The absorption laws further simplify expressions: x | (x \& y) = x and x \& (x | y) = x. These identities arise because any bit set in x dominates the result when combined with bits from y, absorbing the influence of the secondary operand. De Morgan's laws provide dualities involving negation: \sim(x \& y) = \sim x | \sim y and \sim(x | y) = \sim x \& \sim y. These allow rewriting negated conjunctions as disjunctions of negations (and vice versa), facilitating equivalence transformations in bitwise expressions. Distributivity holds for AND over OR: x \& (y | z) = (x \& y) | (x \& z), and dually for OR over AND: x | (y \& z) = (x | y) \& (x | z). This mirrors the distributive law in Boolean lattices, allowing bitwise operations to factor across alternatives. The XOR operation exhibits an inverse property due to its group structure under addition modulo 2: if x = y \oplus z, then x \oplus z = y and x \oplus y = z. Thus, XOR is its own inverse, enabling reversible computations such as swapping values without temporary storage via a ^= b; b ^= a; a ^= b;. In linear algebra over the two-element field GF(2), bitwise operations extend naturally to bit vectors, where addition is XOR and scalar multiplication (by 0 or 1) is AND. This framework supports efficient matrix operations, such as Gaussian elimination on binary matrices, using bitwise parallelism to solve systems modulo 2. 
For instance, vector addition \mathbf{u} + \mathbf{v} corresponds to \mathbf{u} \oplus \mathbf{v}, preserving the vector space axioms over GF(2).

Order of Operations

In programming languages that support bitwise operations, such as those in the C family and Java, evaluation follows a strict precedence hierarchy among the operators. The bitwise NOT operator (~) has the highest precedence, binding more tightly than any other bitwise operator. It is followed by the shift operators (<< and >>), whose precedence sits between the additive arithmetic operators on one hand and the relational operators on the other. Among the binary bitwise operators, AND (&) binds most tightly, followed by XOR (^), then OR (|); note, however, that in C and Java the relational and equality operators bind more tightly than &, ^, and |, so x & y == z parses as x & (y == z), a frequent source of bugs. Associativity determines how operators of equal precedence are grouped in expressions. The binary operators &, ^, |, <<, and >> all associate left-to-right, so an expression like x & y & z is evaluated as (x & y) & z. The unary NOT operator (~) is right-to-left associative, so ~~x is parsed as ~(~x). This convention applies consistently across C, Java, and similar languages, ensuring unambiguous parsing of binary operations. Parentheses are essential to override these default rules and enforce a desired order. For instance, the expression ~x & y evaluates as (~x) & y due to the high precedence of ~, potentially yielding a different result from ~(x & y), which requires explicit grouping to apply NOT after the AND. Without parentheses, unintended groupings can lead to errors in bit-manipulation code. The precedence of bitwise operators integrates with the broader order-of-operations rules such as PEMDAS (parentheses, exponents, multiplication/division, addition/subtraction), with bitwise shifts applied after additive arithmetic but before comparisons. 
In mathematical contexts, such as Boolean algebra, precedence is defined for the logical equivalents: NOT holds the highest priority, followed by AND, then OR, with XOR often placed at the level of OR; shifts are not part of these formal systems.

    Elementary algebra has four operations, addition, subtraction, multiplication, and division, but Boolean algebra has only three operations.