
Bit manipulation

Bit manipulation is the algorithmic process of directly operating on individual bits or small groups of bits within binary representations of data, such as integers or machine words, using bitwise operators to achieve efficient storage and handling in computer systems. This technique leverages the binary nature of digital computers, where data is fundamentally stored and processed as sequences of 0s and 1s, enabling low-level control over information encoding, extraction, and transformation. Key operations in bit manipulation include logical bitwise functions—such as AND (&) for masking specific bits, OR (|) for setting bits, XOR (^) for toggling or finding differences, and NOT (~) for inversion—as well as shift operations like left shift (<<) to multiply by powers of two and right shift (>>) to divide. These operators allow programmers to perform tasks like bit packing, where multiple values are compressed into a single word to optimize memory usage, or bit unpacking to extract subfields from packed data. For instance, in embedded systems or performance-critical applications, bit manipulation facilitates direct hardware interaction, such as setting flags in registers or implementing custom data formats without relying on higher-level abstractions. Bit manipulation is essential in areas like computer graphics for rendering bitmapped images, cryptography for secure encoding, and algorithm optimization in compilers or systems software. It underpins efficient implementations in languages like C and C++, where it can reduce computational overhead compared to arithmetic alternatives, and is particularly valuable in resource-constrained environments such as microcontrollers or digital signal processors. Advanced techniques, known as bit hacks or bit twiddling, further exploit these operations for tasks like counting set bits (population count) or reversing bit orders, enhancing speed in processing pipelines. Overall, understanding bit manipulation bridges low-level hardware behavior with high-level programming, forming a core competency in computer science.

Fundamentals

Definition and overview

Bit manipulation refers to the process of directly operating on individual bits or groups of bits within data structures, allowing programmers and hardware designers to achieve precise control over data representation and computation at the lowest level of computing systems. In digital computers, all information is encoded as sequences of bits—binary digits representing 0 or 1—making bit manipulation essential for tasks that require efficiency beyond what higher-level data types like bytes or words can provide. This technique underpins a wide range of optimizations by treating data not as abstract numbers or characters, but as manipulable patterns of electrical signals or magnetic states. The origins of bit manipulation trace back to the early design of digital computers in the 1940s, when pioneers shifted from decimal-based systems to binary representations for greater simplicity in hardware implementation. In the seminal 1945 report on the EDVAC computer, John von Neumann outlined an architecture where arithmetic and logical operations were performed serially on binary digits, using elementary logical operations to build complex computations. This foundation evolved through the late 1940s with the construction of stored-program computers like the EDSAC (1949), where machine instructions inherently involved bit-level processing in vacuum-tube logic circuits. Bit manipulation's importance lies in its ability to enable compact data storage, where multiple flags or small integers can be packed into a single machine word, reducing memory usage in constrained systems. It also facilitates fast computations, such as bit shifts for efficient multiplication or division by powers of two, and supports low-level optimizations like masking irrelevant bits during extraction—capabilities that higher-level abstractions obscure and often cannot match in performance. These features are particularly vital in embedded systems, cryptography, and graphics processing, where direct hardware interaction yields measurable gains in speed and resource efficiency.
Although a basic understanding of binary numbers is presumed—where each bit position represents a power of two—bit-level access surpasses operations on larger units like bytes (8 bits) or words (typically 32 or 64 bits) by providing granular control essential for tasks such as setting individual status flags or aligning data in memory without unnecessary overhead. This precision is crucial in environments where every cycle or byte matters, allowing developers to bypass the inefficiencies of treating data as indivisible wholes.

Binary representation and bit positions

In digital computing, every value is represented as a sequence of bits, where each bit is either 0 or 1, and the positions of these bits correspond to distinct powers of 2, starting from the rightmost bit as 2^0 = 1. For instance, in the binary numeral 1011, the rightmost bit (position 0) contributes 1 × 2^0 = 1, the next (position 1) contributes 1 × 2^1 = 2, the following (position 2) contributes 0 × 2^2 = 0, and the leftmost (position 3) contributes 1 × 2^3 = 8, yielding a value of 11. This positional system allows binary representations to efficiently encode integers and other data types by summing the weighted contributions of set bits (1s). Bit numbering in binary representations conventionally starts with the least significant bit (LSB) at position 0, which holds the smallest power of 2, and proceeds leftward to higher positions. The most significant bit (MSB) occupies the highest position, determined by the data width; for a 32-bit word, the MSB is at position 31, representing 2^31. This numbering facilitates precise addressing and manipulation of individual bits within a word, such as in CPU registers or memory. For signed integers, two's complement is the standard representation, where the MSB serves as the sign bit: 0 indicates a positive number or zero, while 1 indicates a negative number. In this scheme, positive values use standard binary encoding, but negative values are formed by inverting all bits of the corresponding positive value and adding 1, with the MSB's weight treated as negative (-2^(n-1) for an n-bit number). For example, in an 8-bit system, -5 is represented by the bit pattern 11111011 (251 if read as unsigned), where the MSB (1) denotes negativity, and the value is calculated as -2^7 + 2^6 + 2^5 + 2^4 + 2^3 + 2^1 + 2^0 = -128 + 64 + 32 + 16 + 8 + 2 + 1 = -5. Unsigned representations, by contrast, interpret all bits as positive magnitudes, ranging from 0 to 2^n - 1. Endianness addresses the byte order for multi-byte data types, such as 32-bit or 64-bit integers, stored in memory.
In big-endian format, the most significant (highest-order) byte is stored at the lowest memory address, followed by successively lower-order bytes; for the 32-bit value 0x12345678, memory would hold 0x12 at address 0, 0x34 at 1, 0x56 at 2, and 0x78 at 3. Conversely, little-endian stores the least significant (lowest-order) byte first; the same value would appear as 0x78 at address 0, 0x56 at 1, 0x34 at 2, and 0x12 at 3. This convention affects data portability across systems; network protocols, for example, favor big-endian byte order for consistency.

Basic Bitwise Operations

Logical operations

Logical operations in bit manipulation are bitwise applications of Boolean logic functions applied independently to each corresponding pair of bits in the operands, producing a result of the same bit width. These operations form the foundation for manipulating data at the bit level in digital systems, enabling tasks such as bit testing, setting, and clearing without reinterpreting the overall numerical value. The bitwise AND operation, denoted as &, outputs 1 only if both inputs are 1; otherwise, it outputs 0. This operation is defined by the following truth table:

A  B  A & B
0  0  0
0  1  0
1  0  0
1  1  1
Mathematically, for operands a and b, the result is a & b. The bitwise OR operation, denoted as |, outputs 1 if at least one of the input bits is 1, and 0 only if both are 0. Its truth table is:

A  B  A | B
0  0  0
0  1  1
1  0  1
1  1  1
The formula is a | b. The bitwise XOR operation, denoted as ^, outputs 1 if the input bits differ and 0 if they are the same, effectively toggling bits where the operands disagree. This property makes XOR useful for toggling selected bits when one operand is a mask with 1s in the positions of interest. Its truth table is:

A  B  A ^ B
0  0  0
0  1  1
1  0  1
1  1  0
The formula is a ^ b. The bitwise NOT operation, denoted as ~, is unary and inverts all bits in the operand, changing 0s to 1s and 1s to 0s, which corresponds to the one's complement of the value. In two's complement systems, applying NOT to a signed integer requires care: the result's sign bit is the inverse of the original, so a non-negative operand yields a negative result (~a equals -a - 1). The formula is ~a. De Morgan's laws extend to bitwise operations, providing equivalences for negating combinations: ~(a & b) = ~a | ~b and ~(a | b) = ~a & ~b. These identities allow rewriting expressions using complementary operations, which is useful in optimization and logic simplification.

Shift operations

Shift operations in bit manipulation involve moving the bits of a value left or right by a specified number of positions, effectively altering the numerical value without changing the bit pattern's relative order, except for bits shifted out. These operations are fundamental in low-level programming and hardware design, as they enable efficient multiplication and division by powers of two, and bit alignment. The behavior depends on the operand type (signed or unsigned) and the programming language or architecture, with potential for undefined or implementation-defined results. The left shift operation, denoted as << in languages like C and C++, shifts all bits of the operand to the left by n positions, filling the vacated least significant bits with zeros. For unsigned integers, this is mathematically equivalent to multiplying the value by 2^n, as each shift left by one position doubles the value. The formula is a << n = a × 2^n, assuming no overflow occurs. However, if the shift amount equals or exceeds the operand's width (e.g., shifting a 32-bit integer by 32 or more positions), the result is undefined in the C and C++ standards. Overflow risks arise when the most significant bits are shifted out, potentially leading to data loss or wrap-around in modular-arithmetic contexts. (ISO/IEC 14882:2011, section 5.8) The right shift operation, denoted as >>, moves bits to the right by n positions, shifting out the least significant bits. For unsigned integers, it performs a logical shift, filling the vacated most significant bits with zeros, which computes division by 2^n rounded toward zero. The formula for unsigned right shift is a >> n = floor(a / 2^n). In contrast, for signed integers in two's complement representation, the right shift is typically arithmetic, preserving the sign by filling with copies of the original sign bit (1 for negative, 0 for positive), which maintains the sign during division-like operations on negative numbers. This distinction ensures arithmetic right shifts handle signed values correctly, while logical shifts treat the value as unsigned.
Implementation details for signed right shifts can vary by compiler, but in Microsoft Visual C++, negative values are sign-extended. (ISO/IEC 14882:2011, section 5.8) Logical shifts always fill with zeros regardless of the sign, making them suitable for unsigned types or when sign preservation is unnecessary, whereas arithmetic shifts are essential for signed integers to avoid incorrect sign changes. In C and C++, the >> operator uses logical shifting for unsigned types and (typically) arithmetic shifting for signed types. Java, however, mandates arithmetic right shift (>>) for all integers and provides a separate unsigned right shift (>>>) that always zero-fills. Assembly languages like x86 often provide explicit instructions: SHL/SAL for left shifts, SHR for logical right shifts, and SAR for arithmetic right shifts. (Java Language Specification, section 15.19) Rotate operations, such as rotate left (ROL) and rotate right (ROR), differ from plain shifts by wrapping the shifted-out bits around to the opposite end, preventing any loss of information within the fixed bit width. For a value A of width w, rotate right by n positions is defined as (A ROR n) = A[n-1:0] || A[w-1:n], where || denotes concatenation, effectively cycling the bits circularly. Similarly, rotate left by n is (A ROL n) = A[w-n-1:0] || A[w-1:w-n]. These operations are common in computer architectures for tasks like bit-field rotations without overflow, and they are implemented in x86 assembly as ROL and ROR instructions, which operate on registers or memory. Unlike shifts, rotates preserve all bits, making them useful in cryptography and hash functions where bit integrity is crucial. (Intel 64 and IA-32 Architectures Software Developer's Manual, Volume 2B, section on rotate instructions)
; Example in x86 assembly: rotate left by 3 bits
MOV AX, 0x0001  ; AX = 0000000000000001
ROL AX, 3       ; AX = 0000000000001000 (0x0008; the set bit moved from position 0 to position 3)
In programming, post-shift masking with bitwise AND can isolate specific bits if needed, but shifts alone handle positional adjustments efficiently.

Advanced Manipulation Techniques

Masking and extraction

Masking is a fundamental technique in bit manipulation used to isolate or modify specific bits within a value by applying bitwise operations with a predefined bit pattern known as a mask. A mask consists of a bit pattern where the positions corresponding to the bits of interest are set to 1, while the others are 0. To extract a subset of bits, such as the lowest 8 bits from a 32-bit integer, the bitwise AND operator (&) is applied between the original data and a mask like 0xFF (binary 11111111), yielding the result: extracted_value = data & mask. This operation retains only the bits where the mask has 1s, effectively zeroing out the unwanted positions. Clearing specific bits involves using the bitwise AND with the bitwise NOT (~) of the mask, which inverts the mask so that the target bits become 0 while the rest are preserved. For instance, to clear the lowest 8 bits of a value, one computes data & ~0xFF, ensuring those positions become 0 without affecting higher bits. Conversely, setting specific bits to 1 uses the bitwise OR operator (|) with the mask directly: modified_value = data | mask, which forces 1s in the masked positions regardless of their original state. These operations are essential for targeted bit-level modifications in low-level programming. In languages like C, bit fields provide a declarative way to access and manipulate fixed-width subsets of bits within a struct or union, using syntax such as unsigned int field_name : width;. This allows compilers to pack multiple bit fields into a single storage unit, optimizing space for flags or small values; for example, a program might define struct { unsigned int flags : 8; int value : 16; }; to allocate 8 bits for flags and 16 for value within a 32-bit word. To extract a bit field manually, the process typically involves right-shifting the data by the field's offset and then applying a mask: field_value = (data >> offset) & ((1 << width) - 1), where the mask ((1 << width) - 1) creates a contiguous sequence of width 1s.
Shift operations are often used to align the target bits to the least significant positions before masking. Handling sign extension is crucial when extracting signed bit fields, as a plain shift-and-mask yields an unsigned result. For a signed field of width b, sign extension can be achieved using the formula r = (x ^ m) - m, where m = 1U << (b - 1) and x is the extracted value, ensuring the field's sign bit is properly extended to the full integer width. In C, bit fields declared as signed int automatically handle sign extension based on the most significant bit of the field, though implementation details may vary by compiler.

Bit packing and unpacking

Bit packing is a technique used to combine multiple smaller values into a single larger integer or word, optimizing storage by minimizing unused bits. This process typically involves shifting each value to its designated bit position and then combining them using bitwise OR operations, ensuring no overlap between fields. For instance, in representing a 24-bit RGB color value, the red component (8 bits) is shifted left by 16 bits, the green by 8 bits, and the blue remains unshifted, followed by an OR to merge them: result = (r << 16) | (g << 8) | b. Unpacking reverses this process to extract individual values from the packed word. It employs right shifts to align the desired bits to the least significant position, followed by bitwise AND with a mask to isolate them, such as r = (packed >> 16) & 0xFF for the red component. Masking here ensures clean extraction by zeroing out extraneous bits. Proper alignment and padding are essential during packing to prevent bit overlap and accommodate variable field widths. Values must be positioned without interference, often requiring calculations for offsets based on prior field sizes; padding bits may be added to reach word boundaries or handle inefficiencies in variable-length data. In network protocols, bit packing compresses headers for efficiency; the IPv4 header, for example, packs fields like the 4-bit version, 4-bit header length, 3-bit flags, and 13-bit fragment offset into 32-bit words, with padding to maintain 32-bit alignment. Similarly, flag enums use packing to store multiple states in a single integer, where each flag occupies a distinct bit position via powers of two, enabling compact representation of combinations. Endianness influences bit packing across systems, as big-endian architectures store the most significant byte first, while little-endian reverses this, potentially requiring byte swaps or bit reordering for interoperability in packed structures.

Applications and Examples

In programming and algorithms

Bit manipulation is widely employed in programming to efficiently manage boolean states through bit flags, where individual bits within an integer represent distinct options or conditions. This technique allows multiple flags to be stored compactly in a single variable, using the bitwise OR operator (|) to combine them and the bitwise AND operator (&) to test for their presence. For instance, in languages like C or C++, enums can define power-of-two constants so that each flag occupies a unique bit position, enabling operations such as setting a flag with flags |= OPTION_READ or checking it with if (flags & OPTION_READ). This approach is particularly useful in configuration settings, permission systems, and event handling, reducing memory usage compared to separate boolean variables. Another common application is swapping the values of two variables without requiring a temporary location, achieved via the XOR swap algorithm. The sequence a ^= b; b ^= a; a ^= b; leverages the property that XOR is its own inverse, effectively exchanging the bits of a and b in place. This method, dating back to early assembly programming practice, is beneficial in memory-constrained environments or low-level routines where minimizing variable allocation is critical, though modern compilers often optimize standard swaps similarly. Bit counting, or population count (popcount), determines the Hamming weight of an integer by tallying the number of set bits (1s), which is essential for tasks like error detection and data compression. Brian Kernighan's algorithm provides an efficient software implementation with a loop that repeatedly clears the least significant set bit using n &= (n - 1) until n becomes zero, incrementing a counter each time; its running time is proportional to the number of set bits, making it optimal for sparse bit patterns. This technique, introduced in foundational programming texts, avoids full iteration over all bits and is complemented by hardware instructions like POPCNT on x86 processors for further acceleration.
For arithmetic operations involving powers of two, bit shifts offer a performant alternative to multiplication and division. Left-shifting an integer by k bits (e.g., x << k) multiplies it by 2^k, while right-shifting (e.g., x >> k) divides by 2^k for unsigned or positive values, exploiting the binary place-value system. In C and similar languages, this is commonly used for alignment, scaling, or indexing, as shifts execute faster than general multiplication on most processors, though care must be taken with signed integers to avoid surprises from arithmetic shifts. In cryptography, simple XOR-based ciphers apply the XOR operation between plaintext and a key stream to produce ciphertext, relying on XOR's invertibility for decryption with the same key. While this forms the basis of stream ciphers like RC4, standalone XOR ciphers with repeating or short keys are insecure due to vulnerabilities like known-plaintext attacks and frequency analysis, which reveal patterns in the output; they are thus limited to pedagogical or low-stakes obfuscation rather than robust encryption.

In hardware and low-level systems

Bit manipulation plays a crucial role in hardware and low-level systems, particularly in assembly programming where direct control over processor registers and memory is required. In x86 architecture, a Complex Instruction Set Computing (CISC) design, bitwise operations are executed via specialized instructions that operate on registers or memory locations. For instance, the AND instruction performs a logical AND between the destination operand and the source operand, clearing bits in the destination where the source has zeros; its syntax includes forms like AND reg, imm for immediate values or AND reg, reg for register-to-register operations. Similarly, the SHL (shift left) instruction shifts the bits of the destination operand left by a specified count, typically held in the CL register, filling the least significant bits with zeros; an example is SHL reg, cl. These instructions enable precise bit-level control essential for tasks like masking interrupt flags in the EFLAGS register, where operations such as OR reg, imm can set specific flags (e.g., enabling interrupts by setting the IF bit) without affecting others. In embedded systems, register manipulation via bit operations is fundamental for controlling hardware peripherals like General Purpose Input/Output (GPIO) pins. For example, in the STM32 family of microcontrollers based on the ARM architecture, GPIO pins are configured and toggled using memory-mapped registers such as GPIOx_MODER for mode selection (e.g., output mode by setting a pin's bits 1:0 to 01) and GPIOx_ODR for output data, where bitwise OR sets a pin high (e.g., GPIOx_ODR |= (1 << pin_number)) and AND with an inverted mask clears it (e.g., GPIOx_ODR &= ~(1 << pin_number)). Atomic bit manipulation is supported by the GPIOx_BSRR register, allowing simultaneous set and reset without read-modify-write hazards; writing 1 to bit 0 sets pin 0 high, while writing 1 to bit 16 clears it.
This approach is critical for real-time control, such as toggling pins to drive LEDs or sensors while preserving other pin states. In AVR microcontrollers like the ATmega328P, similar bit operations on PORTx registers set or clear GPIO outputs; for instance, writing 0b00000001 to DDRB configures PB0 as an output, and subsequent writes to PORTB toggle its state. Memory-mapped I/O further integrates bit manipulation by mapping hardware registers to specific memory addresses, allowing software to write bit patterns directly to control peripherals. In the ATmega328P, GPIO and other peripherals occupy the I/O memory space (0x00–0xFF), where instructions like SBI (set bit in I/O register) and CBI (clear bit in I/O register) enable targeted modifications; for example, SBI 0x05, 0 sets bit 0 in PORTB (address 0x05) to enable a feature like an LED on PB0, while writing a full byte pattern such as 0b00000011 to PORTB activates pull-ups on multiple pins when in input mode. In STM32 devices, enabling a peripheral like a timer involves writing bit patterns to RCC_AHB1ENR (e.g., setting bit 0 to 1 for GPIOA clock enable) before manipulating GPIO registers to route signals. This method is widely used in microcontrollers to activate features such as UART transmission or ADC sampling by configuring control registers with precise bit masks. Instruction Set Architecture (ISA) features influence how bit operations are handled, with notable differences between Reduced Instruction Set Computing (RISC) and CISC designs. CISC architectures like x86 support variable-length instructions that can perform complex bit manipulations in fewer steps, often combining shifts with arithmetic, though this increases decoding complexity. 
In contrast, RISC architectures like ARM emphasize uniform, fixed-length instructions for simplicity and pipelining efficiency; a key feature is the inline barrel shifter integrated into the Arithmetic Logic Unit (ALU), which shifts or rotates the second operand (up to 32 bits) as part of data-processing instructions without extra cycles, supporting operations like logical left shift (LSL) or rotate right (ROR) for bit packing or alignment. For example, in ARM assembly, an instruction like ADD R0, R1, R2, LSL #3 adds R1 to R2 shifted left by 3 bits, enabling efficient multiplication by powers of 2 directly in the opcode. This barrel shifter enhances bit manipulation performance in resource-constrained embedded systems compared to separate shift instructions in some CISC ISAs. In firmware development for embedded systems, bit manipulation is essential for bit-banging protocols that emulate hardware interfaces using general-purpose CPU pins. A prominent example is simulating the I²C bus, where GPIO pins act as SCL (clock) and SDA (data) lines; software toggles these pins with precise timing via bit sets and clears to generate start/stop conditions, clock pulses, and data bits. In Microchip's application note for interfacing with 24XXXX serial EEPROMs, bit-banging I²C on mid-range MCUs involves using bitwise operations on port registers to drive SDA high/low (e.g., OR to set, AND to clear) while polling for acknowledgments, achieving communication speeds up to 100 kHz without dedicated hardware. This technique is particularly useful in low-cost microcontrollers lacking native I²C peripherals, allowing flexible protocol implementation through direct pin control.

Performance and Optimization

Efficiency considerations

Bitwise operations, such as AND, OR, and XOR, are executed with a latency of 1 cycle on modern x86 processors like Intel Skylake and AMD Zen architectures, enabling high throughput of up to 3 operations per cycle in register-to-register scenarios. This single-cycle performance stems from the simplicity of the arithmetic logic unit (ALU) circuitry dedicated to these operations, which requires minimal gates compared to more complex arithmetic instructions. Shift operations similarly achieve 1-cycle latency, further underscoring the efficiency of bit-level manipulations in the CPU pipeline. To enhance parallelism, single instruction, multiple data (SIMD) extensions like SSE and AVX allow vectorized bitwise operations across multiple data elements simultaneously, processing up to 256 bits (or more with AVX-512) in a single instruction. This capability yields significant throughput improvements for data-intensive tasks, as SIMD instructions can execute bitwise AND/OR/XOR on wider registers with latencies comparable to scalar versions, often achieving 2-4x speedup over non-vectorized code depending on data alignment and workload. In embedded systems, bitwise operations exhibit low power overhead relative to floating-point operations, particularly on processors without a dedicated floating-point unit (FPU), where FP emulation relies on multiple integer instructions and increases energy use by up to 10-100x. Even on FPU-equipped chips, integer bitwise tasks consume less dynamic power due to simpler ALU activation, avoiding the higher voltage and capacitance demands of FP pipelines. Compiler optimizations further boost efficiency through intrinsics like GCC's __builtin_popcount, which maps directly to the x86 POPCNT instruction for single-cycle bit counting on supported hardware when compiled with -msse4.2. This approach outperforms software loops or inline assembly by allowing the compiler to inline the native instruction, reducing overhead and enabling further vectorization.
Inline assembly remains an option for fine control but often yields similar or inferior results due to missed optimizations. Benchmarks reveal that bitwise operations can be approximately 3x faster in latency than integer multiplication on recent CPUs—for instance, 1 cycle for AND versus 3 cycles for multiplication on Skylake—though throughput differences are smaller at 1 operation per cycle for both in pipelined execution. In scenarios like power-of-two scaling via shifts, the gap widens to 5-10x over general multiplication on older or resource-constrained processors, highlighting bit operations' role in performance-critical designs.

Common pitfalls and best practices

One common pitfall in bit manipulation arises from sign extension during right shifts on signed integers. In languages like C, the right shift operator (>>) on a signed integer typically performs an arithmetic shift, which propagates the sign bit (filling with 1s for negative values) into the vacated positions, potentially leading to unexpected results when treating the value as unsigned or when a logical shift is intended. This behavior is implementation-defined for negative signed operands, exacerbating portability issues across compilers. Another frequent error involves undefined behavior in shift operations, particularly when the shift amount equals or exceeds the bit width of the operand. In C and C++, shifting by an amount greater than or equal to the number of bits in the promoted left operand results in undefined behavior, which can cause crashes, incorrect computations, or vulnerabilities depending on the compiler and platform. For instance, shifting a 32-bit integer by 32 or more bits invokes this undefined behavior, unlike smaller shifts that are well-defined. Portability problems often stem from endianness mismatches when packing and unpacking bit fields across different platforms. Bit fields in structures have implementation-defined layout, including the order of allocation within storage units, which varies between little-endian and big-endian systems; for example, on little-endian architectures like x86, bit fields typically start from the least significant bit, while big-endian systems like some PowerPC implementations may reverse this order, leading to corrupted data when serializing or deserializing packed structures. This issue is particularly problematic in protocols or file formats where byte-order assumptions are not explicitly handled. To mitigate these pitfalls, several best practices should be followed. Use unsigned types for pure bit operations to avoid sign-related surprises and ensure predictable shifting behavior, as bitwise operators on signed integers can yield implementation-defined results.
Always validate mask widths and shift amounts to ensure they do not exceed the operand's bit size, preventing undefined behavior; for example, masking techniques can cap shift values to the type's width. Document bit layouts explicitly in code comments or headers to clarify field positions and widths, facilitating maintenance and cross-platform compatibility. For debugging bit manipulation code, print binary representations of values before and after operations to visualize changes, such as using a loop to extract and display each bit. Additionally, employ assertions to verify bit-level invariants, like checking that specific bits are set or cleared as expected after manipulation.

  15. [15]
    Binary Number System - Computer Engineering Group
    The rightmost bit in a binary number is bit position zero. Each bit to the left is given the next successive bit number. · unsigned numeric values in the range 0 ...
  16. [16]
    Numbers - CS 3410 - CS@Cornell
    The modern way is two's complement notation. In two's complement, there is still a sign bit, and it is still the leftmost (most significant) bit in the ...
  17. [17]
    [PDF] Chapter 2 Bits, Data Types, and Operations
    How can we represent fractions? • Use a “binary point” to separate positive from negative powers of two (just like “decimal point”).
  18. [18]
    [PDF] Memory, Data, & Addressing I - CSE 351, Summer ... - Washington
    ▫ big-endian, little-endian. ▫ pointer. ❖ Questions from the Reading? 13. Page 14 ... ❖ Endianness determines memory storage order for multi-byte data. 31.
  19. [19]
    An Essay on Endian Order
    Apr 19, 1996 · The two orders are called "Little Endian" and "Big Endian". The Basics "Little Endian" means that the low-order byte of the number is stored in ...Missing: endianness | Show results with:endianness
  20. [20]
    [PDF] Big Endian vs. Little Endian Storage of Numeric Data
    There are some advantages for little-endian: – Regardless of int, long ,etc, you always consistently address the low order byte with pointers.
  21. [21]
    [PDF] Bitwise Operations - Department of Computer Science and ...
    Every bitwise operation (except shift) is defined by a truth table. A truth table represents one or two input bits and their output bit. For example, bitwise OR ...
  22. [22]
    [PDF] Bitwise Instructions - Computer Science
    Logical Operators. ❖ Truth Table: standard table listing all possible combinations of inputs and resultant output for each. ❖ Truth Table for AND, OR and XOR.
  23. [23]
    Operators - CS 121
    Truth Tables​​ To understand how the bitwise operators work on the bits on variable values you must understand how the bitwise AND, OR, and XOR (exclusive OR) ...
  24. [24]
    2.11. Supplemental: Bitwise Operators
    Three operators act on the corresponding bits of the two operands; we can summarize these and the complement operators with truth tables. Two operators treat ...
  25. [25]
    [PDF] Representations - cs.Princeton
    Bitwise Operators: Not and XOR. • One's complement (~) o Turns 0 to 1, and 1 to 0 o E.g., set last three bits to 0. – x = x & ~7;. • XOR (^) o 0 if both bits ...
  26. [26]
    [PDF] Bit-‐Level Operators
    Assuming 8-bit 2's complement numbers. Observation 2 – Sign Extension. ❑ To add two numbers, we must represent them with the same number of bits. ❑ If we ...
  27. [27]
    Left shift and right shift operators: << and >> | Microsoft Learn
    Mar 1, 2024 · The bitwise shift operators are the right-shift operator ( >> ), which moves the bits of an integer or enumeration type expression to the right, and the left- ...Missing: rotate 754
  28. [28]
    [PDF] Intel 64 and IA-32 Architectures Software Developer's Manual
    Maj(A,B,C) ← (A AND B) XOR (A AND C) XOR (B AND C) ROR is rotate right operation. (A ROR N) ← A[N-1:0] || A[Width-1:N] ROL is rotate left operation.
  29. [29]
    C Bit Fields | Microsoft Learn
    Jul 26, 2023 · A structure declarator can also be a specified number of bits, called a "bit field." Its length is set off from the declarator for the field name by a colon.
  30. [30]
    2.9.1. Packed RGB formats - The Linux Kernel Archives
    They occupy 8, 16, 24 or 32 bits per pixel. These are all packed-pixel formats, meaning all the data for a pixel lie next to each other in memory.
  31. [31]
    RFC 791 - Internet Protocol - IETF Datatracker
    The internet protocol is designed for use in interconnected systems of packet-switched computer communication networks. Such a system has been called a catenet.
  32. [32]
    System.FlagsAttribute class - .NET - Microsoft Learn
    Jan 8, 2024 · The FlagsAttribute attribute indicates that an enumeration can be treated as a bit field; that is, a set of flags. Bit fields are generally ...
  33. [33]
    Byte and Bit Order Dissection | Linux Journal
    Sep 2, 2003 · Bit order usually follows the same endianness as the byte order for a given computer system. That is, in a big endian system the most ...
  34. [34]
    (PDF) DATA ENCRYPTION USING XOR CIPHER - ResearchGate
    Aug 7, 2025 · This article offers an example of using an application whose main task is to encrypt data such as files and private messages.
  35. [35]
    [PDF] Intel® 64 and IA-32 Architectures Software Developer's Manual
    NOTE: The Intel® 64 and IA-32 Architectures Software Developer's Manual consists of nine volumes: Basic Architecture, Order Number 253665; Instruction Set ...
  36. [36]
    [PDF] RM0383 Reference manual - STMicroelectronics
    May 1, 2025 · This Reference manual targets application developers. It provides complete information on how to use the memory and the peripherals of the ...
  37. [37]
    [PDF] ATmega328P - Microchip Technology
    ... [DATASHEET]. 7810D–AVR–01/15. 6. 2. Overview. The Atmel® ATmega328P is a low-power CMOS 8-bit microcontroller based on the AVR® enhanced RISC architecture. By.
  38. [38]
    RISC vs. CISC - Stanford Computer Science
    The CISC approach attempts to minimize the number of instructions per program, sacrificing the number of cycles per instruction. RISC does the opposite, ...
  39. [39]
    Access to the inline barrel shifter - Arm Developer
    The ARM arithmetic logic unit has a 32-bit barrel shifter that is capable of shift and rotate operations. The second operand to many ARM and Thumb data- ...
  40. [40]
    AN1488 - Microchip Technology
    This application note is intended to serve as a reference for communicating with Microchip's 24XXXX series Serial EEPROM devices without relying on a hardware ...
  41. [41]
    [PDF] 4. Instruction tables - Agner Fog
    Sep 20, 2025 · The present manual contains tables of instruction latencies, throughputs and micro-operation breakdown and other tables for x86 family ...Missing: bitwise | Show results with:bitwise
  42. [42]
    Intel® Instruction Set Extensions Technology
    The Intel AVX instruction set extends 128-bit SIMD instruction sets by employing a new instruction encoding scheme via a vector extension prefix (VEX). Intel ...
  43. [43]
  44. [44]
    Energy-Based Accounting and Scheduling of Virtual Machines in a ...
    Contrary to popular belief, the energy consumption dif- ferences between floating point operations and integer oper- ations at μoperation-level do not show ...
  45. [45]
    x86 Built-in Functions (Using the GNU Compiler Collection (GCC))
    These built-in functions are available for the x86-32 and x86-64 family of computers, depending on the command-line switches used.
  46. [46]
    Is multiplication slower than addition? - Daniel Lemire's blog
    Jul 19, 2010 · Intel Skylake has a 1 cycle throughput on the imul instruction, but 3 cycle latency. This means it can process 1 integer multiplication per ...<|separator|>
  47. [47]
    How Endianness Effects Bitfield Packing
    Big endian machines pack bitfields from most significant byte to least. Little endian machines pack bitfields from least significant byte to most.
  48. [48]
    How to print Binary format in C using printf alternatives? - w3resource
    Learn how to print binary numbers in C with custom functions, as printf lacks a built-in binary converter. Examples of bit manipulation included.
  49. [49]
    How to debug bitwise operation errors - LabEx
    Debugging Techniques. Use explicit type casting; Print binary representations; Validate input ranges; Use compiler warnings; Leverage LabEx debugging tools ...