
Program counter

The program counter (PC), also known as the instruction pointer (IP) in certain processor architectures such as x86, is a special-purpose register within a computer's central processing unit (CPU) that stores the memory address of the next machine instruction to be fetched and executed. This register is essential for maintaining the sequential flow of program execution, incrementing automatically after each instruction fetch to point to the subsequent address in memory. In the CPU's fetch-decode-execute cycle, the program counter directs the control unit to retrieve the instruction from the specified memory location, after which it is typically updated—either by incrementing for linear execution or by loading a new value for operations like branches, jumps, or interrupts. The PC's value is critical for ensuring orderly processing, and its manipulation enables features such as subroutine calls, loops, and conditional execution in computer programs. Without the program counter, the CPU would lack a mechanism to track execution progress, rendering sequential or conditional program flow impossible. Historically, the concept of the program counter emerged with early stored-program computers like the Manchester Baby in the late 1940s, where it formalized the addressing of instructions in memory to support stored-program principles. Modern implementations vary by architecture—for instance, in RISC processors like ARM, the PC may be accessible as a general register, while in CISC designs, it often operates more opaquely—but its core function remains invariant across systems. The program counter's reliability is vital for system stability, as corruption of its value, often resulting from vulnerabilities like buffer overflows, can lead to crashes or unauthorized code execution via control-flow hijacking.

Fundamentals

Definition and Purpose

The program counter (PC), also known as the instruction pointer, is a special-purpose register within a computer's central processing unit (CPU) that holds the memory address of the next instruction to be fetched and executed by the processor. This register is essential for directing the CPU to the precise location in memory where the subsequent machine instruction resides, facilitating the processor's ability to retrieve and process code in a structured manner. The primary purpose of the program counter is to ensure the orderly and sequential execution of instructions by maintaining a pointer to the current position in program memory; after each instruction is fetched, the PC is incremented to point to the next address, thereby preserving the linear flow of program execution unless altered by specific control operations. This mechanism allows the CPU to progress through a program's instructions in the intended order, supporting reliable computation without manual intervention for each step. The PC's role is integral to the instruction fetch-decode-execute cycle, where it initiates the fetch phase by providing the target address. In the context of the von Neumann architecture, the program counter enables the stored-program concept by treating instructions and data within the same unified address space, allowing the CPU to fetch executable code from memory locations just as it accesses operands. This design, proposed in John von Neumann's 1945 report on the EDVAC computer, revolutionized computing by making programs modifiable and stored alongside data, with the PC serving as the dynamic locator for sequential instruction retrieval. For instance, in a basic CPU initialization, the PC is often set to the program's entry point, such as memory address 0x0000, from which it advances incrementally—typically by the fixed size of each instruction—to execute the code in sequence.
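As a concrete illustration of these ideas, the following minimal C sketch models a PC that starts at an entry point of 0x0000 and advances by one fixed-size word through a unified memory; the 16-bit word size, the memory contents, and the halt encoding are assumptions made for the example rather than features of any real machine:
#include <stdint.h>
#include <stdio.h>

#define MEM_WORDS 16
#define HALT      0xFFFF   /* hypothetical "stop" encoding */

int main(void) {
    /* Unified memory holding 16-bit "instructions" (von Neumann style). */
    uint16_t memory[MEM_WORDS] = {0x1001, 0x2002, 0x3003, HALT};
    uint16_t pc = 0x0000;                     /* PC set to the program's entry point */

    while (memory[pc] != HALT) {
        uint16_t instruction = memory[pc];    /* fetch the word the PC points to */
        printf("PC=0x%04X fetched 0x%04X\n", pc, instruction);
        pc += 1;                              /* advance by one fixed-size instruction */
    }
    return 0;
}
Running the sketch prints one line per fetched word, showing the PC stepping from 0x0000 through 0x0002 before the halt word ends the loop.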

Basic Operation in Instruction Execution

The program counter (PC) is integral to the CPU's instruction execution cycle, which encompasses the fetch, decode, and execute stages. In the fetch stage, the CPU retrieves the instruction located at the address held in the PC and loads it into the instruction register (IR). This process ensures that the processor executes instructions in the intended sequence stored in memory. After fetching the instruction, the PC is automatically incremented by the size of the fetched instruction to prepare for the next one. For instance, in byte-addressable memory with fixed-length 32-bit instructions, the PC advances by 4 bytes; in contrast, byte-addressable systems with variable-length instructions adjust the increment based on the specific instruction's length. The decode stage then analyzes the IR contents to identify the operation, operands, and any required resources, while the execute stage performs the computation or memory access as specified, without altering the PC in sequential cases. PC updates follow specific rules to maintain program flow. For linear execution, the increment occurs post-fetch, as illustrated in this pseudocode:
Fetch: IR ← Memory[PC]
PC ← PC + sizeof(instruction)
This simple operation supports sequential processing. However, control instructions like jumps or subroutine calls explicitly load a new address into the PC, overriding the increment to redirect execution. If the PC references an invalid memory address during fetch—such as unmapped or protected regions—it triggers a hardware fault or exception, interrupting execution and transferring control to an operating system handler for resolution.
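As a concrete sketch of how a control instruction overrides the post-fetch increment, the following C program extends the pseudocode above; the two-field instruction encoding, the opcode names, and the fault handling are invented for the example and do not correspond to any real architecture:
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

enum { OP_NOP = 0, OP_JUMP = 1, OP_HALT = 2 };   /* hypothetical opcodes */

typedef struct { uint8_t opcode; uint8_t operand; } Instr;

int main(void) {
    /* Tiny program: two NOPs, then a jump that skips the NOP at address 3
       and lands on the HALT at address 4. */
    Instr memory[8] = {
        {OP_NOP, 0}, {OP_NOP, 0}, {OP_JUMP, 4}, {OP_NOP, 0}, {OP_HALT, 0}
    };
    uint8_t pc = 0;

    for (;;) {
        if (pc >= 8) {                        /* invalid address: raise a fault */
            fprintf(stderr, "fault: PC=0x%02X out of range\n", pc);
            exit(EXIT_FAILURE);
        }
        Instr ir = memory[pc];                /* fetch: IR <- Memory[PC] */
        pc = pc + 1;                          /* default: PC <- PC + sizeof(instruction) */
        switch (ir.opcode) {                  /* decode and execute */
        case OP_JUMP: pc = ir.operand; break; /* control transfer overrides the increment */
        case OP_HALT: return 0;
        default:      break;                  /* NOP: sequential flow continues */
        }
        printf("next PC = 0x%02X\n", pc);
    }
}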

Hardware Aspects

Physical Implementation

The program counter (PC) is physically realized as a dedicated register array composed of flip-flops or latches, each bit position storing one bit of the instruction address in binary. This design enables stable storage of the address value until updated on the clock edge, preventing corruption from asynchronous noise. In modern 64-bit processors, the PC spans 64 flip-flops to handle addresses up to 2^64 bytes, aligning with the architecture's addressing capabilities. Supporting circuitry includes a dedicated adder for incrementing the PC by the length of the instruction in bytes (e.g., 4 in 32-bit RISC architectures) during sequential execution, ensuring the next instruction is fetched promptly. Multiplexers route either the incremented value or an external address (e.g., from a branch target) to the PC input, selected by control signals from the instruction decoder. For example, in the Rice educational computer, the 8-bit PC employs an 8-bit incrementer using an S-R latch for carry detection and a 16-to-8 multiplexer built from 2-to-1 multiplexer units to choose between the current PC and external data. Clock synchronization governs PC updates, with flip-flops triggered on rising or falling edges to coordinate with the CPU's fetch cycle and maintain timing integrity across the chip. In pipelined designs, the PC generates addresses for concurrent instruction fetches in multiple stages, incorporating buffers to mitigate propagation delays and branch prediction logic to reduce stalls from control hazards. This integration optimizes throughput while managing power consumption by limiting unnecessary increments or loads to only when control signals assert. A representative early implementation appears in the 8-bit MOS Technology 6502, where the 16-bit PC comprises two cascaded 8-bit registers with internal carry propagation logic to form the full address for its 64 KB address space.
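The register-plus-multiplexer structure described above can be sketched in software; the C model below is illustrative only, and the signal names and the 4-byte instruction size are assumptions rather than details of any specific processor:
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Software model of a 64-bit PC register built from edge-triggered flip-flops. */
typedef struct {
    uint64_t pc;   /* current flip-flop contents */
} PCRegister;

/*
 * One rising clock edge: a multiplexer selects between the incremented
 * address (sequential flow) and an external branch target, and the selected
 * value is latched into the register.
 */
static void pc_clock_edge(PCRegister *r, bool load_branch, uint64_t branch_target) {
    uint64_t incremented = r->pc + 4;              /* dedicated adder: PC + instruction size */
    r->pc = load_branch ? branch_target            /* mux input 1: branch/jump target */
                        : incremented;             /* mux input 0: next sequential address */
}

int main(void) {
    PCRegister r = { .pc = 0x1000 };
    pc_clock_edge(&r, false, 0);        /* sequential fetch: PC becomes 0x1004 */
    pc_clock_edge(&r, true, 0x2000);    /* taken branch: PC loaded with 0x2000 */
    printf("PC = 0x%llX\n", (unsigned long long)r.pc);
    return 0;
}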

Integration with CPU Components

The program counter (PC) primarily outputs its value to the memory address bus to facilitate instruction fetching, where it is first loaded into the memory address register (MAR) before being transmitted to main memory via the address bus. This connection ensures that the address of the next instruction is accurately directed to the memory subsystem for retrieval. The PC receives inputs through dedicated load paths, such as from arithmetic logic unit (ALU) computations that generate branch targets or from immediate values in branch instructions, allowing it to update to a new address when control flow changes. It interacts closely with the instruction register (IR), as the fetched instruction from the address specified by the PC is loaded into the IR for subsequent decoding and execution. During subroutine calls and returns, the PC collaborates with the stack pointer (SP); on a call, the current PC value (or the incremented PC) is pushed onto the stack using the SP to save the return address, while on return, the saved address is popped from the stack and loaded into the PC. In terms of bus protocols, the PC drives the address lines on the shared address bus, often in conjunction with other registers like the MAR, enabling multiplexed access where the bus selects the PC's output for instruction fetch cycles. In pipelined or multi-core systems, bus arbitration mechanisms, such as priority encoders or schedulers, manage contention when multiple components (including multiple PCs in multi-core setups) attempt to drive the address bus simultaneously, preventing conflicts and ensuring orderly access to memory. For example, in RISC architectures like RISC-V, the PC feeds directly into the fetch unit, where its value is updated and latched at the end of each clock cycle to maintain synchronization and avoid race conditions between fetch and execution stages. This clock-edge alignment ensures that the PC increment—typically adding the instruction length—occurs reliably without overlapping with ongoing fetches.
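A minimal C sketch of the call/return interaction between the PC and the stack pointer follows; the downward-growing stack, the 4-byte instruction size, and all names are illustrative assumptions, not a specific machine's convention:
#include <stdint.h>
#include <stdio.h>

#define STACK_WORDS 64

/* Illustrative CPU state: PC, SP, and a small downward-growing stack. */
typedef struct {
    uint32_t pc;                   /* program counter */
    uint32_t sp;                   /* stack pointer (index into stack[]) */
    uint32_t stack[STACK_WORDS];
} Cpu;

/* CALL: push the return address (the instruction after the call), then jump. */
static void call(Cpu *cpu, uint32_t target) {
    cpu->sp -= 1;                        /* reserve a stack slot */
    cpu->stack[cpu->sp] = cpu->pc + 4;   /* save return address (4-byte instructions assumed) */
    cpu->pc = target;                    /* load the subroutine entry into the PC */
}

/* RET: pop the saved address back into the PC. */
static void ret(Cpu *cpu) {
    cpu->pc = cpu->stack[cpu->sp];
    cpu->sp += 1;
}

int main(void) {
    Cpu cpu = { .pc = 0x0100, .sp = STACK_WORDS };
    call(&cpu, 0x0400);                  /* PC becomes 0x0400, return address 0x0104 saved */
    ret(&cpu);                           /* PC restored to 0x0104 */
    printf("PC after return = 0x%04X\n", cpu.pc);
    return 0;
}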

Architectural Consequences

Impact on Machine Design

The width of the program counter (PC) fundamentally constrains the maximum addressable space in a computer system, as it determines the range of addresses that can be directly referenced for instruction fetch. For instance, a 32-bit PC limits the system to 2^32 bytes, or 4 gigabytes, of addressable memory, necessitating additional mechanisms like paging or segmentation for larger systems. This design choice influences overall architecture by dictating register and address-bus widths, where narrower PCs reduce hardware overhead but cap scalability, as seen in early 32-bit architectures transitioning to 64-bit for terabyte-scale addressing. In pipelined processors, particularly superscalar and out-of-order designs, PC updates occur dynamically during the fetch stage to sustain high instruction throughput, but control hazards like branches introduce delays that can stall the pipeline. To mitigate this, branch prediction mechanisms forecast the next PC value, allowing the fetch stage to overlap with branch resolution; mispredictions flush the pipeline, incurring penalties proportional to pipeline depth. In out-of-order execution, the PC is managed by the front-end fetch unit, which increments or redirects it based on predicted outcomes, decoupling it from execution completion to maximize parallelism while ensuring precise state recovery on exceptions. These adaptations highlight how PC handling shapes efficiency, with prediction accuracy directly impacting throughput in modern CPUs. Security features like address space layout randomization (ASLR) leverage the PC's role in address generation to thwart exploits by randomizing the base addresses of code segments at load time, making it difficult for attackers to predict instruction locations for code injection or code-reuse attacks. ASLR perturbs the memory layout, altering effective PC targets during execution and increasing the entropy required for successful memory corruption attacks. This integration into machine design enhances resilience against buffer overflows but demands compatible hardware support for randomized mapping without performance degradation. Designing wider PCs enables larger address spaces but introduces trade-offs in power consumption and silicon area, as broader registers and address buses require more transistors and wiring, escalating dynamic switching energy and static leakage. For example, extending from 32-bit to 64-bit PC widths increases energy demands in components like the register file due to larger entry sizes and port configurations, contributing significantly to overall CPU power budgets in high-performance systems. These costs also elevate manufacturing expenses due to greater die area, prompting optimizations like Gray encoding for PC counters to minimize transition activity and power during increments. Thus, architects balance PC width against efficiency constraints, often favoring 64-bit standards for modern general-purpose processors despite the overhead.
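To illustrate why Gray encoding reduces switching activity on sequential increments, the following standalone C sketch (an illustration of the counting principle, not taken from any processor implementation; it uses the GCC/Clang popcount builtin) compares bit transitions when counting in binary versus the equivalent Gray code:
#include <stdint.h>
#include <stdio.h>

/* Standard binary-to-Gray conversion: adjacent Gray codes differ in one bit. */
static uint32_t to_gray(uint32_t b) { return b ^ (b >> 1); }

/* Number of bits that change between two successive register values. */
static int transitions(uint32_t a, uint32_t b) {
    return __builtin_popcount(a ^ b);   /* GCC/Clang builtin */
}

int main(void) {
    int binary_flips = 0, gray_flips = 0;
    for (uint32_t pc = 0; pc < 256; pc++) {
        binary_flips += transitions(pc, pc + 1);
        gray_flips   += transitions(to_gray(pc), to_gray(pc + 1));
    }
    /* Gray counting toggles exactly one flip-flop per increment,
       while binary counting toggles about two on average. */
    printf("binary: %d flips, gray: %d flips over 256 increments\n",
           binary_flips, gray_flips);
    return 0;
}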

Role in Control Flow and Branching

The program counter (PC) plays a pivotal role in managing non-sequential execution by altering its value to direct the CPU to different memory addresses, thereby implementing control-transfer mechanisms such as branches and jumps. In unconditional jumps, the PC is directly loaded with a target address specified in the instruction, immediately transferring control without evaluating any condition. For conditional branches, the PC is updated only if a specified condition—such as a flag in the status register being set—is met; otherwise, it proceeds with its normal increment. In such cases, the new PC value is typically computed as the current PC plus a sign-extended offset, enabling relative addressing for efficient code organization:
new_PC = current_PC + sign-extended_offset
This formula allows branches to reference nearby instructions without absolute addresses. Subroutine calls extend this by first pushing the current PC (the return address) onto the stack before loading the target subroutine address into the PC, facilitating a return via stack pop. Interrupt handling further demonstrates the PC's role in asynchronous control transfers, where external events suspend normal execution. Upon detecting an interrupt, the processor automatically saves the current PC (along with status information) to the stack or a designated save area, then loads a new PC value from an interrupt vector that points to the handler routine's entry address. After the handler completes its task, the saved PC is restored from the stack, resuming the interrupted program from the precise point of suspension. This mechanism ensures reliable context switching without loss of state, with vector tables serving as a fixed mapping of interrupt types to handler addresses. To mitigate performance penalties from frequent branches and interrupts, modern processors employ prediction mechanisms centered on the PC. The branch target buffer (BTB) is a cache that associates recent PC values of branch instructions with their predicted target addresses and outcomes (taken or not taken), allowing the fetch unit to speculatively load instructions from the anticipated next PC before resolution. If a misprediction occurs—such as an incorrect target guess—the pipeline flushes incorrect instructions and corrects the PC, incurring a penalty proportional to pipeline depth; the BTB reduces these by caching historical patterns, improving overall throughput in branch-heavy code. In the x86 architecture, the JMP instruction exemplifies direct PC manipulation for unconditional jumps, loading the specified target address directly into the EIP (extended instruction pointer, the 32-bit PC equivalent) without saving return information, thus enabling arbitrary control transfers. Relative variants of JMP or conditional jumps like JE (jump if equal) apply the offset formula to the current EIP, supporting compact encoding for loops and decisions common in compiled code.
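The relative-branch computation can be made concrete with a small C sketch; the 16-bit offset field, the 4-byte fall-through distance, and the condition flag are assumptions chosen for illustration:
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/*
 * Next-PC computation for a conditional branch with a signed 16-bit offset:
 *   taken:     new_PC = current_PC + sign-extended_offset
 *   not taken: new_PC = current_PC + 4 (fall through to the next instruction)
 */
static uint32_t next_pc(uint32_t current_pc, int16_t offset, bool condition_met) {
    if (condition_met)
        return current_pc + (int32_t)offset;   /* sign extension happens in the cast */
    return current_pc + 4;
}

int main(void) {
    printf("taken:     0x%08X\n", next_pc(0x1000, -8, true));   /* backward branch to 0x0FF8 */
    printf("not taken: 0x%08X\n", next_pc(0x1000, -8, false));  /* falls through to 0x1004 */
    return 0;
}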

Programming Implications

Representation in Low-Level Code

In low-level languages, the program counter (PC) is manipulated primarily through dedicated jump and branch instructions that alter the flow of execution by updating the PC to a new address. In x86 architecture, instructions such as JMP (unconditional jump) directly load a specified target address into the instruction pointer (EIP in 32-bit mode or RIP in 64-bit mode), while CALL pushes the current PC onto the stack and sets the PC to the target subroutine address for function invocation. Similarly, in ARM architecture, the B (branch) instruction unconditionally sets the PC to a target address, and BL (branch with link) performs the same while saving the return address in the link register (R14). These operations enable explicit control over the PC without intermediate loads in most cases, though indirect manipulation occurs when addresses are computed in general-purpose registers before branching. PC-relative addressing modes further integrate the PC into low-level code by calculating effective addresses as offsets from the current PC value, facilitating position-independent code that remains functional across memory relocations. In x86-64, RIP-relative addressing encodes operands as signed offsets added to the RIP, commonly used in instructions like MOV or LEA for accessing global data without absolute addresses. ARM supports PC-relative addressing in load/store instructions such as LDR (load register), where the offset is added to the PC (R15) to fetch data from a nearby location, typically within ±4 KB in ARM state or larger ranges in AArch64 via ADRP/ADD combinations. This mode is essential for compact, relocatable binaries, as the assembler resolves labels to offsets during linking without embedding fixed addresses. During debugging, tools like the GNU Debugger (GDB) provide visibility into the PC's value, allowing developers to inspect and trace execution. In GDB, the command print $pc displays the current PC address in the selected frame, while x/i $pc disassembles the instruction at that location, aiding in step-by-step analysis of control flow. This feature is particularly useful for verifying branch targets or diagnosing infinite loops where the PC repeatedly cycles through the same addresses. A representative example of PC manipulation appears in loops, where conditional jumps implicitly update the PC based on flags set by comparison instructions. Consider this simplified x86-64 loop that decrements a register until it reaches zero:
loop_start:
    cmp rax, 0          ; Compare RAX to 0, sets flags
    je loop_end         ; Jump if equal (ZF=1), updates RIP to end
    sub rax, 1          ; Decrement RAX
    jmp loop_start      ; Unconditional jump back, sets RIP to start
loop_end:
Here, the PC advances sequentially through the instructions unless altered by JE or JMP, which compute their targets relative to the current RIP, with JMP providing the backward branch. In ARM, an equivalent loop uses BEQ (branch if equal) to exit and an unconditional B to repeat:
loop_start:
    cmp r0, #0          ; Compare R0 to 0
    beq loop_end        ; Branch if equal, updates PC to end
    sub r0, r0, #1      ; Decrement R0
    b loop_start        ; Branch back, PC-relative offset to start
loop_end:
The branches employ PC-relative offsets, ensuring the loop relocates correctly.

Abstraction in High-Level Languages

In high-level programming languages, the program counter (PC) is abstracted away from developers, with compilers translating control flow constructs like if statements and while loops into low-level jumps and branches without exposing the underlying PC mechanics. For instance, a compiler processes an if statement by evaluating the condition and generating conditional jumps to skip or execute the corresponding code block, ensuring sequential execution while optimizing branch predictions for performance. Similarly, while loops are transformed into a structure involving a jump back to the condition test after the loop body, often using labels in intermediate representations to manage flow without direct PC manipulation. This abstraction allows programmers to focus on logic rather than hardware-specific addressing, as detailed in standard compiler design principles. At runtime, high-level languages indirectly influence the effective PC through mechanisms like exceptions and non-local jumps. In C, the setjmp and longjmp functions enable non-local gotos by saving and restoring the execution context, including the program counter, to jump to a previously saved state without following normal call-return semantics, which can abruptly alter control flow across function boundaries. In managed environments such as those using garbage collection, the runtime pauses all threads during collection (a "stop-the-world" phase), effectively halting the PC at the current instruction, scans for live objects, and then resumes execution from the paused point to continue program progress. These operations maintain the illusion of seamless execution while handling memory and error recovery behind the scenes. Virtual machines further encapsulate the PC by maintaining a separate virtual program counter distinct from the host processor's PC. In the Java Virtual Machine (JVM), each thread possesses its own PC register that points to the current instruction being executed, allowing the JVM to interpret or JIT-compile bytecode while abstracting hardware details for platform independence. Likewise, the .NET Common Language Runtime (CLR) employs an instruction pointer within its execution engine to track progress through Common Intermediate Language (CIL) instructions, managing control flow in a virtualized manner before mapping to native execution. This separation enables portable code execution across diverse architectures. Despite these abstractions, limitations arise when high-level constructs mimic low-level jumps, potentially leading to undefined behavior. Overuse of goto statements in languages like C++ can bypass variable initializations or scope rules, resulting in accesses to uninitialized objects or lifetime violations, which invoke undefined behavior per the language standard. Additionally, during error handling, stack unwinding in exception scenarios restores the PC context by propagating the exception up the call stack, destroying local objects and transferring control to matching handlers, ensuring proper cleanup without explicit PC management. These cases highlight the boundaries where the abstraction breaks down, risking portability and correctness if not handled carefully.
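The setjmp/longjmp mechanism mentioned above can be demonstrated with a short, self-contained C program; the function and variable names other than setjmp and longjmp are illustrative:
#include <setjmp.h>
#include <stdio.h>

static jmp_buf recovery_point;   /* saves the execution context, including PC and SP */

static void deeply_nested_work(void) {
    printf("about to bail out of nested calls\n");
    longjmp(recovery_point, 1);  /* restores the saved context; execution resumes at setjmp */
    printf("never reached\n");
}

int main(void) {
    if (setjmp(recovery_point) == 0) {   /* first return: context saved, returns 0 */
        deeply_nested_work();
    } else {                             /* arrived here via longjmp, not via a normal return */
        printf("resumed at the saved program-counter context\n");
    }
    return 0;
}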

Historical and Modern Variations

Origins and Evolution

The origins of the program counter trace back to the transition from wired-program machines to stored-program computers in the mid-1940s. The ENIAC, completed in 1945, lacked an explicit program counter and relied instead on plugboards and program trays to control the sequence of operations among its functional units. This manual reconfiguration approach, involving physical patch cords to route control pulses, represented the limitations of early electronic computing before automated instruction sequencing became feasible. Key milestones emerged with the advent of stored-program architectures. The Manchester Baby, operational in 1948, incorporated a control register denoted as "C," functioning as a program counter to manage instruction sequencing in its accumulator-based design. This was followed by the EDSAC in 1949, which introduced a dedicated 10-bit instruction counter—essentially a program counter—to store the address of the next instruction in its delay-line memory system. By 1952, the IBM 701 formalized the program counter in a commercial context, using it within its electronic control unit to track instruction addresses in electrostatic storage tubes, marking the integration of such mechanisms into production scientific computers. The evolution of the program counter reflected broader architectural shifts, progressing from fixed-word addressing in early stored-program machines—where instructions occupied uniform memory slots—to variable-length addressing in later designs that accommodated diverse instruction formats for greater flexibility. The von Neumann architecture, outlined in the 1945 First Draft of a Report on the EDVAC, profoundly influenced this development by specifying a central control unit with a sequence counter for fetching instructions from a unified memory, enabling sequential execution and branching. In contrast, the Harvard architecture, implemented in machines like the Harvard Mark I, employed separate program and data memories but retained a program counter to sequence instructions within the dedicated program store. The term "program counter" first appeared in technical literature around 1946, with usage solidifying in the late 1940s and 1950s amid the shift to transistor-based systems that demanded precise instruction tracking. A significant advancement occurred in the 1980s with the rise of RISC designs, which introduced pipelined program counters to prefetch and overlap instruction execution, as seen in prototypes like the Berkeley RISC I project, enhancing throughput in simplified instruction sets.

Differences Across Architectures

The program counter (PC) behaves differently in Complex Instruction Set Computing (CISC) and Reduced Instruction Set Computing (RISC) architectures, reflecting their design philosophies. In CISC systems like x86, the PC—known as the RIP register in 64-bit mode—increments by the variable length of the executed instruction, which can range from 1 to 15 bytes, enabling complex PC-relative addressing for branches and jumps that account for instruction size variability. This approach supports denser code but complicates decoding and pipelining due to the need to determine instruction boundaries dynamically. In contrast, RISC architectures such as ARM use a fixed-length instruction format, typically 32 bits (4 bytes), so the PC increments by a constant 4 bytes per instruction (or 2 bytes in Thumb state), promoting simpler, more predictable fetch stages and easier superscalar execution. Architectures also differ in memory organization, particularly between von Neumann and Harvard models. Von Neumann designs, prevalent in general-purpose processors like x86 and ARM, employ a unified PC to address a single memory space for both instructions and data, which simplifies hardware but can create bottlenecks during simultaneous instruction fetch and data access. Harvard architectures, common in digital signal processors (DSPs), feature a dedicated program counter for instruction memory separate from data address generators or pointers, allowing parallel access to instructions and data via distinct buses and improving throughput for signal-processing tasks. For example, in Texas Instruments' TMS320C54x family, the program controller includes a 16-bit PC that sequences instruction fetches from program memory while independent data address units handle operand access, exemplifying this modified Harvard structure. Modern processor extensions adapt the PC to handle larger address spaces and specialized domains. In x86-64, the RIP register expands to 64 bits, supporting a theoretical 2^64-byte linear address space (though implementations often limit virtual addressing to 48 bits for practicality), enabling execution of large-scale applications without the segmentation constraints common in 32-bit x86. Embedded systems like the 8051 microcontroller use a simpler 16-bit PC to address up to 64 KB of program memory in a Harvard-like setup with separate code and data spaces, where the PC fetches opcodes from ROM while data operations use indirect addressing via registers like R0-R7 or DPTR. In AI accelerators such as Google's Tensor Processing Units (TPUs), the control unit employs a minimalistic CISC instruction set with about a dozen high-level operations (e.g., MatrixMultiply) to sequence tensor computations across systolic arrays, diverging from traditional PCs by focusing on deterministic, dataflow-driven execution rather than general-purpose branching. Emerging paradigms in quantum computing introduce analogs to the PC through gate sequencing, eschewing classical sequential counters in favor of compiled circuit execution. Post-2020 research highlights models where quantum circuits—sequences of unitary gates applied in superposition—replace the PC with classical compilation and scheduling, as seen in dynamic-circuit frameworks that link multiple quantum processors without a persistent instruction pointer. For instance, decomposition techniques break multi-qubit operations into native two-qubit gates for direct execution, enabling scalable control without von Neumann-style linearity. This shift supports error-mitigated, parallel quantum operations, contrasting with classical PCs by leveraging entanglement and measurement for control flow.
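To make the CISC/RISC difference in PC advancement concrete, the following C sketch contrasts a variable-length increment with a fixed 4-byte one; the opcode formats and lengths are invented for the example and are not real x86 or ARM encodings:
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical CISC-style decoder: the length depends on the opcode byte. */
static size_t cisc_instruction_length(uint8_t opcode) {
    switch (opcode & 0xC0) {   /* top two bits select a made-up format */
    case 0x00: return 1;       /* single-byte instruction */
    case 0x40: return 3;       /* opcode plus 16-bit immediate */
    default:   return 5;       /* opcode plus 32-bit address */
    }
}

/* CISC: the PC advances by the decoded length of the current instruction. */
static uint64_t cisc_next_pc(uint64_t pc, const uint8_t *memory) {
    return pc + cisc_instruction_length(memory[pc]);
}

/* RISC: every instruction is 4 bytes, so the PC advances by a constant. */
static uint64_t risc_next_pc(uint64_t pc) {
    return pc + 4;
}

int main(void) {
    uint8_t memory[8] = {0x00, 0x41, 0x00, 0x00, 0xC1};  /* made-up encodings */
    uint64_t pc = 0;
    pc = cisc_next_pc(pc, memory);   /* 0 -> 1 (one-byte instruction) */
    pc = cisc_next_pc(pc, memory);   /* 1 -> 4 (three-byte instruction) */
    printf("variable-length PC: %llu, fixed-length PC: %llu\n",
           (unsigned long long)pc, (unsigned long long)risc_next_pc(0));
    return 0;
}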
