
Interrupt

In computing, an interrupt is a signal that causes the processor to temporarily halt the execution of the current program and transfer control to a specific routine known as an interrupt handler, allowing immediate response to asynchronous events such as hardware signals or software requests. This mechanism enables efficient handling of time-sensitive operations without constant polling by the CPU, which would otherwise consume excessive resources. Interrupts originated as a fundamental feature in early computer architectures to manage input/output (I/O) devices and have evolved into a core component of operating systems for multitasking and real-time processing. They are categorized primarily into hardware interrupts, which are asynchronous signals from peripheral devices like keyboards or disks indicating readiness or errors, and software interrupts, which are synchronous calls generated by application code to invoke operating system services such as system calls. Related concepts include exceptions, which are interrupts triggered by internal conditions like division by zero or invalid memory access, often treated as a subset for error handling.

The interrupt process typically involves the CPU saving its current state upon receiving the signal, executing the handler routine via an interrupt vector table that maps interrupt types to handler addresses, and then restoring the state to resume normal execution. This design minimizes latency in event detection compared to alternative polling methods, making interrupts indispensable for real-time systems, embedded applications, and general-purpose computing where responsiveness is critical. Modern processors support prioritized interrupts to ensure higher-urgency events are handled first, enhancing system performance and reliability.

Fundamentals

Definition and Purpose

An interrupt is a signal generated by hardware or software that temporarily suspends the processor's current execution to handle a higher-priority event through an interrupt service routine (ISR), which is the dedicated software code executed in response. Hardware interrupts arise from external devices, such as input/output (I/O) completion or timer expirations, while software interrupts are triggered by specific instructions, like those for system calls or error conditions. This mechanism ensures the processor can address urgent, asynchronous events without continuously monitoring for them.

The primary purpose of interrupts is to facilitate efficient handling of asynchronous events in computing systems, enabling the processor to respond promptly to activities like device status changes or computational errors while minimizing resource waste. By invoking an ISR, interrupts support multitasking in operating systems, where multiple processes can share the CPU, and real-time responsiveness in embedded systems, where timely reactions to events are critical. They also underpin exception handling, allowing the system to recover from faults such as division by zero or invalid memory access.

Key benefits include reduced CPU idle time compared to polling, where the processor repeatedly checks device status, leading to inefficiency in systems with infrequent events; interrupts only engage the CPU when needed, lowering latency and overhead, as the sketch below illustrates. This approach optimizes resource utilization, supports system calls for secure OS interactions, and enhances overall system performance in multitasking environments. Interrupts originated in the 1950s to manage I/O operations in early mainframes, allowing overlapped computation and data transfer to avoid halting the CPU during slow peripheral activities.
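The contrast with polling can be made concrete in code. Below is a minimal, hedged sketch in C of the interrupt-driven pattern on a hypothetical microcontroller: the name device_isr, the flag variables, and the data value are illustrative only, and in practice the ISR would be registered with the platform's own vector mechanism.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical device-ready flag, set by an interrupt service routine
 * when the peripheral signals completion.  'volatile' is essential:
 * the main loop must re-read memory on every pass because the ISR
 * modifies the flag asynchronously. */
static volatile bool data_ready = false;
static volatile uint8_t latest_sample;

/* ISR: runs only when the hardware raises the interrupt, so the CPU
 * spends no cycles checking the device in the meantime. */
void device_isr(void)
{
    latest_sample = 0x42;      /* would read the device data register */
    data_ready = true;
}

int main(void)
{
    for (;;) {
        /* Interrupt-driven: do useful work (or sleep) until signaled.
         * A polling design would instead spin reading a status
         * register here, burning CPU even when no event is pending. */
        if (data_ready) {
            data_ready = false;
            /* process latest_sample ... */
        }
        /* ... other foreground work ... */
    }
}
```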

Terminology

In computer architecture, an interrupt is defined as an asynchronous exception signaled by an I/O device, such as a timer or network adapter, that alters the processor's control flow to handle the event. This contrasts with synchronous exceptions, which are tied to the execution of the current instruction. A trap refers to a synchronous, intentional exception, typically generated by software for system calls or debugging, such as invoking an operating system service. For example, in the x86 architecture, the INT instruction explicitly triggers a software interrupt by referencing an entry in the interrupt descriptor table. An exception is a broader category encompassing any abrupt change in control flow due to events like interrupts, traps, faults, or aborts, often managed through a processor's exception-handling mechanism. Within this, a fault denotes a synchronous exception arising from a potentially recoverable error, such as a page fault where the operating system loads missing data from disk before resuming execution. In contrast, an abort represents an unrecoverable, fatal synchronous error, like a machine check failure, which typically terminates the program without return to the interrupted instruction.

Architecture-specific terminology highlights variations in interrupt handling. In ARM processors, hardware interrupts are categorized as IRQ (normal interrupt request) for standard events and FIQ (fast interrupt request) for high-priority, low-latency scenarios, with FIQ providing dedicated registers to minimize context switching. Similarly, RISC-V employs machine-mode interrupts at the highest privilege level, managed via control and status registers like mtvec for trap vectoring and mie for enabling specific interrupt types, such as external, software, or timer interrupts.

Related concepts include the interrupt vector, which is an entry in a table (often called the interrupt vector table) that stores the memory address of the corresponding interrupt service routine (ISR), enabling direct or indirect dispatch to the handler. An interrupt request (IRQ) line serves as the physical or logical hardware signal path from a peripheral device to the processor, asserting a request for attention. A non-maskable interrupt (NMI) is a critical hardware interrupt that cannot be disabled by standard masking mechanisms, reserved for urgent conditions like hardware failures, ensuring it preempts all other activities. Interrupts differ fundamentally from polling, where the processor periodically checks device status in a loop; interrupts enable event-driven responses, allowing the CPU to perform other tasks until signaled, thus improving efficiency in systems with infrequent events.
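At its core, the interrupt vector concept reduces to an indexed table of handler addresses. The following C model is a deliberate simplification for illustration: real tables such as the x86 IDT store richer gate descriptors with privilege and segment information, and the vector numbers and handler names here are arbitrary.

```c
#include <stdio.h>

/* A software model of an interrupt vector table: an array indexed by
 * vector number whose entries are handler addresses.  Dispatch is an
 * indexed indirect call, which is exactly what vectored hardware
 * interrupt delivery amounts to. */
#define NUM_VECTORS 256

typedef void (*isr_t)(void);

static void default_handler(void) { puts("unhandled interrupt"); }
static void timer_handler(void)   { puts("timer tick"); }

static isr_t vector_table[NUM_VECTORS];

static void dispatch(unsigned vector)
{
    vector_table[vector & (NUM_VECTORS - 1)]();
}

int main(void)
{
    for (int i = 0; i < NUM_VECTORS; i++)
        vector_table[i] = default_handler;
    vector_table[32] = timer_handler;  /* e.g., a legacy PIC timer vector */
    dispatch(32);                      /* simulate the timer interrupt */
}
```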

History

Early Developments

The earliest implementations of interrupt mechanisms emerged in the mid-1950s, marking a pivotal shift from inefficient polling-based input/output (I/O) handling in batch-oriented systems to more responsive event-driven architectures. The UNIVAC 1103, introduced in 1953 by Engineering Research Associates (later acquired by Remington Rand), is credited as the first computer system to incorporate interrupts, allowing the processor to temporarily suspend execution for external events such as I/O completion. This was followed closely by the National Bureau of Standards DYSEAC in 1954, which pioneered I/O-specific interrupts through a dual-program-counter design that switched execution contexts upon receiving an I/O signal, enabling overlapped computation and data transfer without constant CPU oversight. Systems delivered starting in 1954 advanced this further by introducing interrupt masking—a feature that allowed selective disabling of interrupts to prevent unwanted disruptions during critical operations, addressing the limitations of resource-constrained early hardware.

Key advancements in the late 1950s and 1960s built upon these foundations, enhancing interrupt systems for greater efficiency in mainframes and emerging minicomputers. The Lincoln Laboratory's TX-2 computer, operational in 1957, was the first to implement multiple levels of interrupts, defining 25 interrupt sequences ranked by urgency (e.g., restart as the highest priority), which optimized handling of diverse I/O devices in research applications. By 1960, the Digital Equipment Corporation's PDP-1 introduced a 16-channel automatic interrupt system (known as the Sequence Break), specifically designed to support real-time control tasks, such as interfacing with external devices without halting primary computation. The IBM System/360 family, announced in 1964, standardized vectored interrupts across its architecture, where an interrupt directly addressed a specific handler routine via a vector table, facilitating uniform implementation across models and promoting compatibility in enterprise environments.

These developments addressed core challenges in early computing, particularly the inefficiency of polling—where the CPU repeatedly checked device status in loops, wasting cycles in batch systems ill-suited for interactive or time-sensitive tasks. Interrupts enabled a more efficient event-driven model, allowing the processor to focus on computation until notified of events, which was crucial for the transition to mainframes and minicomputers handling diverse workloads. A notable application came with IBM's OS/360 in the mid-1960s, where interrupts underpinned the first practical multiprogramming capabilities, permitting multiple tasks to share the CPU by switching contexts on I/O completions or timers, though masking was essential to avoid nested interrupts overwhelming limited hardware resources.

Modern Evolution

The microprocessor era marked a significant shift in interrupt handling, integrating vectored interrupts directly into CPU designs for greater efficiency. The Intel 8086, introduced in 1978, pioneered a 256-entry interrupt vector table, allowing software to dispatch to specific handlers based on vector numbers, which laid the groundwork for scalable interrupt management in personal computing. This was complemented by the Intel 8259 Programmable Interrupt Controller (PIC) in 1976, the first dedicated chip to manage up to eight prioritized vectored interrupts for 8080/8085-based systems, enabling external devices to signal the CPU without constant polling. By the 1990s, the Advanced Programmable Interrupt Controller (APIC), integrated into x86 processors, extended this to multi-processor x86 environments, supporting symmetric multiprocessing (SMP) by distributing interrupts across cores and replacing the 8259 PIC for improved scalability in servers and workstations.

Post-2000 developments emphasized message-signaled interrupts over traditional pin-based lines to accommodate high-speed I/O and virtualization. The PCI Express Base Specification Revision 1.0 in 2003 supported message-signaled interrupts (MSI), a feature originally introduced in the PCI 2.2 specification, allowing devices to generate interrupts via memory writes rather than dedicated wires, which enhanced scalability in multi-core systems by eliminating physical interrupt lines. This evolved with MSI-X, introduced in the PCI 3.0 specification and supported in PCI Express Revision 1.1 (2005), supporting up to 2048 independent vectors per device with per-vector masking, further optimizing for dense, virtualized setups where multiple functions share buses. Concurrently, ARM's Generic Interrupt Controller (GIC), specified in the 2000s, provided hierarchical prioritization and virtualization extensions, enabling secure interrupt routing to virtual machines in mobile and embedded SoCs.

In the 2010s, open-source architectures like RISC-V advanced interrupt controllers for diverse ecosystems, with the Platform-Level Interrupt Controller (PLIC) defined around 2017 as part of the privileged specification, offering configurable priority and routing for up to thousands of sources in open-source, royalty-free systems. Security concerns drove enhancements, such as Intel's Interrupt Remapping in VT-d (introduced 2008 and refined in the 2010s), which maps device interrupts to host vectors to prevent DMA attacks, with post-Spectre (2018) microcode updates bolstering isolation against side-channel exploits. Into the 2020s, cloud environments like AWS Nitro Enclaves (launched 2019) implemented hardware-enforced interrupt isolation, offloading handling to dedicated secure processors to protect enclave boundaries from interference. RISC-V extensions, such as those in low-power cores like Zero-riscy, integrated efficient interrupt support with low-power and clock-gating modes for IoT devices, minimizing energy consumption in battery-constrained scenarios.

Overall, interrupt evolution trended from fixed wiring and limited vectors in early microprocessors to message-signaled mechanisms, enabling scalability in multi-core, virtualized, and distributed systems by reducing wiring complexity and supporting virtualization.

Types

Hardware Interrupts

Hardware interrupts are signals generated by external hardware devices or peripherals to notify the processor of events requiring immediate attention, such as data arrival or completion of an operation. These signals are transmitted via dedicated interrupt request (IRQ) lines to an interrupt controller, which signals the CPU via specific pins or messages. When a peripheral detects an event, it asserts its IRQ line to the controller, altering the voltage level or generating a pulse that the controller detects and relays to the CPU, thereby suspending normal execution to invoke an interrupt service routine (ISR). This mechanism allows efficient asynchronous communication between the CPU and I/O devices without constant polling.

A common example is the Universal Asynchronous Receiver/Transmitter (UART) used for serial I/O, where the receipt or transmission of data bytes triggers an interrupt by asserting the UART's IRQ line, enabling the CPU to read or write data from the device's buffer without continuous polling. Similarly, the programmable interval timer (PIT) generates hardware interrupts for periodic events; each of its three 16-bit counters decrements from a loaded value at a fixed clock rate, asserting an output pin (and thus an IRQ) upon reaching zero to signal timeouts or schedule tasks.

To manage interrupt handling during sensitive operations, processors support masking, which temporarily disables specific interrupts. In the x86 architecture, maskable hardware interrupts are blocked by clearing the Interrupt Flag (IF) bit in the EFLAGS register via the CLI instruction, ensuring no IRQ assertions disrupt critical sections like atomic operations. Non-maskable interrupts (NMIs), however, bypass this flag and cannot be disabled; they are reserved for high-priority hardware errors such as parity failures or watchdog timeouts to guarantee system integrity.

Hardware interrupts can sometimes fail to occur, known as missing interrupts, where a device neglects to assert its IRQ due to causes like timing glitches in signal propagation or transient faults. In such cases, the system may revert to polling, where software periodically queries the device's status registers to detect events that the interrupt mechanism missed. This fallback ensures reliability but increases CPU overhead. Conversely, spurious interrupts arise from false assertions on IRQ lines, often triggered by electrical noise, crosstalk, or glitches that mimic valid signals. Detection typically involves the ISR acknowledging the interrupt and then verifying if the device actually requires service; if not, it is classified as spurious. Mitigation strategies include edge-triggering filters in hardware, which respond only to signal transitions rather than sustained levels, ignoring brief noise pulses, and acknowledgment checks in controllers like the Intel 82371AB PIIX4 to safeguard against invalid triggers.
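The UART and masking patterns above can be sketched concretely. The following C example targets an AVR ATmega328P, a part where these register names are real; the baud-rate value and the use of ATOMIC_BLOCK to mirror the x86 CLI/STI critical-section idiom are illustrative choices rather than a canonical driver.

```c
/* AVR (ATmega328P) sketch: UART receive interrupt plus interrupt
 * masking around a critical section.  Compile with avr-gcc. */
#include <avr/io.h>
#include <avr/interrupt.h>
#include <util/atomic.h>

static volatile uint8_t rx_byte;
static volatile uint8_t rx_flag;

/* Fires when the USART has received a byte; reading UDR0 clears
 * the interrupt condition in hardware. */
ISR(USART_RX_vect)
{
    rx_byte = UDR0;
    rx_flag = 1;
}

int main(void)
{
    UBRR0  = 103;                           /* 9600 baud @ 16 MHz */
    UCSR0B = (1 << RXEN0) | (1 << RXCIE0);  /* enable RX + RX interrupt */
    sei();                                  /* global interrupt enable */

    for (;;) {
        /* Mask interrupts while reading shared state so the ISR
         * cannot update rx_byte/rx_flag mid-sequence. */
        ATOMIC_BLOCK(ATOMIC_RESTORESTATE) {
            if (rx_flag) {
                rx_flag = 0;
                /* consume rx_byte ... */
            }
        }
    }
}
```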

Software Interrupts

Software interrupts, also known as synchronous exceptions or traps, are events initiated deliberately by executing specific software instructions or implicitly by the processor detecting certain internal conditions during instruction execution. These interrupts allow programs to request operating system services or handle errors in a controlled manner, distinguishing them from asynchronous hardware interrupts triggered by external devices. In the x86 architecture, explicit software interrupts are generated using the INT n instruction, which specifies an interrupt vector n (ranging from 0 to 255) to invoke a handler routine, saving the current instruction pointer and flags on the stack before transferring control. Similarly, in ARM architectures, the SVC (Supervisor Call) instruction serves this purpose, embedding an SVC number to identify the requested service, such as system calls for resource access, and it synchronously transfers control to a privileged handler.

Implicit software interrupts arise from processor-detected faults during instruction execution. These interrupts are inherently synchronous, meaning they occur precisely at the point of the triggering instruction or condition, allowing the processor to maintain precise state for resumption after handling. A classic example is the divide-by-zero exception in x86, classified as a fault (#DE, vector 0), which halts execution on instructions like DIV or IDIV when the divisor is zero, pushing the faulting address onto the stack for the handler to restart or correct the operation. Another common implicit case is invalid memory access leading to a page fault (#PF, vector 14), where the processor detects a reference to a non-resident or protected page. Software interrupts also support specialized uses, such as traps for debugging via the INT3 instruction in x86 (vector 3, #BP), which inserts a one-byte opcode to pause execution and inspect state, or in virtualized environments where they simulate privileged operations without direct hardware access.

Handling of software interrupts involves dispatching control to the operating system through predefined interrupt vectors stored in structures like the interrupt descriptor table (IDT) in x86, which maps vectors to handler entry points and enforces privilege checks. Upon invocation, the processor automatically saves the execution context and jumps to the kernel handler, which processes the event—such as allocating a page for a fault or executing the requested service—and returns via instructions like IRET. In Unix-like systems, page faults trigger kernel routines to manage virtual memory, potentially swapping pages from disk to physical memory to resolve the access. Similarly, signals like SIGINT (signal 2 in POSIX systems), often generated by keyboard input but handled synchronously in software contexts, allow the kernel to dispatch user-defined or default actions, such as terminating a process, integrating with the broader exception framework for error recovery.

The primary advantages of software interrupts lie in their ability to facilitate secure, controlled transitions from user mode to privileged mode without relying on external hardware signals, using gate descriptors in the IDT to validate access and switch stacks if needed. This mechanism contrasts with insecure direct jumps to kernel code, as it enforces privilege levels (e.g., from ring 3 to ring 0 in x86) and clears interrupt flags to prevent nesting issues, thereby enhancing system security by isolating untrusted user code from sensitive operations. In ARM, the SVC mechanism similarly ensures that unprivileged applications can request privileged actions, with the handler extracting parameters from the stacked registers to maintain isolation.
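A classic concrete instance is the legacy 32-bit Linux system-call gate at vector 0x80. The hedged C sketch below issues INT 0x80 directly via inline assembly (built with gcc -m32 on x86 Linux); the helper syscall3 is defined here for the example, and the numbers follow the historical i386 ABI (4 = write, 1 = exit).

```c
/* 32-bit x86 Linux sketch: invoking the kernel's system-call handler
 * with the INT instruction (vector 0x80). */
static long syscall3(long num, long a, long b, long c)
{
    long ret;
    __asm__ volatile ("int $0x80"   /* software interrupt: trap to ring 0 */
                      : "=a" (ret)
                      : "a" (num), "b" (a), "c" (b), "d" (c)
                      : "memory");
    return ret;
}

int main(void)
{
    const char msg[] = "hello via INT 0x80\n";
    syscall3(4, 1, (long)msg, sizeof msg - 1);  /* write(1, msg, len) */
    syscall3(1, 0, 0, 0);                       /* exit(0) */
}
```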

Triggering and Detection

Level-Triggered Interrupts

Level-triggered interrupts are a type of interrupt where the interrupt signal remains active as long as the requesting device asserts a specific voltage level on the interrupt line, typically high (e.g., +5V) or low, until the interrupt is serviced and the signal is deasserted by the device. Unlike transient signals, this sustained assertion ensures the interrupt persists, prompting the processor to detect and respond when it samples the interrupt line (INTR) during its normal operation. The CPU does not poll continuously but checks the line state at instruction boundaries or via the interrupt controller, which latches the request for delivery.

In operation, the device holds the interrupt line at the active level to indicate an ongoing need for service, such as data availability or an error condition. The interrupt controller, upon detecting the level, prioritizes and signals the CPU via the INTR pin. The CPU acknowledges the interrupt through an interrupt acknowledge (INTA) cycle, during which the controller provides an interrupt vector to fetch the handler. The device must then deassert the line after servicing, often triggered by the handler writing to a device register to clear the condition; failure to do so results in re-assertion upon handler exit, potentially causing repeated interrupts. This mode supports multiple devices sharing a single line through wired-OR logic, where any asserting device keeps the line active, and daisy-chaining allows cascaded controllers to resolve which device originated the request.

The advantages of level-triggered interrupts include simple wiring requirements, as no precise timing for pulse generation is needed, and inherent support for line sharing without missing requests from multiple sources, making them suitable for bus architectures with limited interrupt lines. However, disadvantages arise from the need for explicit signal deassertion, which can lead to infinite re-interruption loops if the handler fails to clear the source, and increased complexity in handler software to manage persistent states. These interrupts are commonly implemented in early programmable interrupt controllers like the Intel 8259, which can be configured for level-sensitive mode via its initialization command word (ICW1, LTIM bit set to 1), latching requests based on sustained line levels rather than edges. This approach was prevalent in legacy x86 systems for handling I/O device notifications.
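The "service until deasserted" discipline translates into a simple handler shape. The C sketch below is conceptual: the memory-mapped device, its register offsets, and the STATUS_PENDING bit are all hypothetical, chosen only to show why a level-triggered ISR must drain every pending condition before returning.

```c
#include <stdint.h>

/* Hypothetical memory-mapped device registers. */
#define DEV_BASE       0x40001000u
#define DEV_STATUS     (*(volatile uint32_t *)(uintptr_t)(DEV_BASE + 0x0))
#define DEV_DATA       (*(volatile uint32_t *)(uintptr_t)(DEV_BASE + 0x4))
#define DEV_ACK        (*(volatile uint32_t *)(uintptr_t)(DEV_BASE + 0x8))
#define STATUS_PENDING 0x1u

void level_triggered_isr(void)
{
    /* Keep servicing while the device still asserts its request:
     * the line stays active until every cause is cleared. */
    while (DEV_STATUS & STATUS_PENDING) {
        uint32_t data = DEV_DATA;  /* consume the event */
        (void)data;
        DEV_ACK = 1;               /* clear this condition in the device */
    }
    /* Only now, with the line deasserted, is it safe to return;
     * otherwise the interrupt would immediately re-fire. */
}
```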

Edge-Triggered Interrupts

Edge-triggered interrupts are activated by a transition in the interrupt signal, specifically a rising edge (from low to high, such as 0-to-1) or a falling edge (from high to low), allowing the signal to return to its idle state immediately after the transition without the pending request being lost. This mechanism ensures that the interrupt is generated only once per detected edge, capturing discrete, momentary events rather than sustained conditions. In operation, edge detectors—typically implemented as flip-flops or comparators in the interrupt controller—monitor the signal line for these transitions and latch the request upon detection, preventing repeated triggers if the signal remains in the active state. This latching allows the system to process the event asynchronously, even if the originating device deasserts the signal quickly, but it requires the handler to acknowledge and clear the latch to avoid missing subsequent edges.

The primary advantages of edge-triggered interrupts include high precision for one-shot events, such as key presses, where only the initial transition matters, and improved resilience since brief glitches are less likely to propagate as full edges compared to sustained false levels. However, they pose challenges in shared interrupt lines, as rapid successive edges from multiple devices can be missed if not properly synchronized, complicating multi-device configurations. These characteristics make edge-triggered interrupts a standard in protocols like PCI Express, where message-signaled interrupts (MSIs) behave as edge-sensitive by delivering a single pulse per event, and in USB implementations, such as dual-role controllers detecting connector ID pin changes.

Representative examples include PS/2 keyboard and mouse interfaces, where a keypress or mouse click generates an interrupt on the rising edge of the data-ready signal from the controller, enabling efficient event capture without requiring the signal to hold active. This approach filters transient noise effectively, as isolated spikes do not sustain long enough to form a reliable edge, outperforming level-triggered methods in environments with intermittent noise.
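Configuring an input line for a specific edge is a one-time register setup on most microcontrollers. The C sketch below again assumes an AVR ATmega328P, whose EICRA/EIMSK registers select rising-edge sensitivity for external interrupt INT0; the counter is illustrative.

```c
/* AVR (ATmega328P) sketch: external interrupt INT0 fires on a rising
 * edge only, as for a data-ready or keypress line. */
#include <avr/io.h>
#include <avr/interrupt.h>

static volatile uint16_t edge_count;

ISR(INT0_vect)
{
    edge_count++;   /* one increment per rising edge, not per level */
}

int main(void)
{
    DDRD  &= ~(1 << PD2);                 /* INT0 pin (PD2) as input */
    EICRA |= (1 << ISC01) | (1 << ISC00); /* ISC01:ISC00 = 11 -> rising edge */
    EIMSK |= (1 << INT0);                 /* unmask INT0 */
    sei();
    for (;;) {
        /* foreground work; brief edges are latched by the controller */
    }
}
```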

Processor Handling

Response Mechanism

Upon detecting an interrupt signal, the processor completes the execution of the current instruction, ensuring that interrupts occur at instruction boundaries and instructions execute atomically with respect to interrupts. It then automatically saves essential state information, including the program counter (PC) and processor status register (such as flags or condition codes), onto the stack to preserve the context of the interrupted program. For hardware interrupts, the processor acknowledges the signal, typically by issuing an interrupt acknowledge cycle; in x86 architectures, this involves the INTA cycle, where the processor signals the interrupt controller (such as the 8259 PIC or APIC) over the bus to provide the interrupt number.

This vector, a number from 0 to 255, is used to determine the address of the interrupt service routine (ISR): in single-vector systems, all interrupts branch to a fixed location where software dispatches the appropriate handler, while multi-vector systems employ an interrupt vector table (such as the IDT in x86) for direct indexing to the handler address. The processor then loads this address into the PC and jumps to the ISR, executing the handler code to service the interrupt. The hardware saves only minimal state during this process, such as the PC and status registers; the operating system extends context switching by saving additional registers and data in software to fully preserve the program's execution environment. Upon completion of the ISR, the processor restores the saved state—using an instruction like IRET in x86, which pops the PC, status register, and other elements from the stack—and resumes the interrupted program. This sequence ensures atomicity with respect to user code, as the response integrates seamlessly at instruction boundaries without splitting ongoing operations.
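The save/vector/restore sequence can be modeled in ordinary C. The sketch below is a host-runnable simplification, not processor code: real hardware performs the push and the IRET-style pop implicitly, and the vector number and register values here are invented for the demonstration.

```c
#include <stdint.h>
#include <stdio.h>

/* Minimal model of the hardware response sequence:
 * 1. save PC + status, 2. vector through a table, 3. restore and resume. */
typedef struct {
    uint64_t pc;      /* program counter of interrupted code */
    uint64_t flags;   /* processor status word */
} saved_state_t;

typedef void (*isr_t)(void);
static isr_t ivt[256];

static void handle_interrupt(uint8_t vector, saved_state_t *cpu)
{
    saved_state_t saved = *cpu;  /* hardware pushes PC + status */
    ivt[vector]();               /* vector table indexes the ISR */
    *cpu = saved;                /* IRET-style restore, then resume */
}

static void timer_isr(void) { puts("in ISR"); }

int main(void)
{
    saved_state_t cpu = { .pc = 0x1000, .flags = 0x202 };
    ivt[32] = timer_isr;
    handle_interrupt(32, &cpu);
    printf("resumed at pc=0x%llx\n", (unsigned long long)cpu.pc);
}
```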

Priority and Nesting

In processors, interrupt priority schemes determine the order in which multiple pending interrupts are serviced, ensuring critical events are handled promptly. Fixed hardware priority levels are common in architectures like ARM, where Fast Interrupt Requests (FIQs) hold the highest priority, preempting all other interrupts, followed by standard Interrupt Requests (IRQs) and lower-priority vectored or daisy-chained interrupts. Programmable schemes, such as those implemented in the Intel 8259A programmable interrupt controller (PIC), allow software to configure priorities across up to eight interrupt lines, supporting modes like fully nested priority where the highest active request gains service, or specific rotation to equalize handling among equal-priority sources.

When multiple interrupts are pending, resolution occurs through preemption based on priority: a higher-priority interrupt will interrupt the handling of a lower-priority one, with the processor saving the current context before switching. In vectored systems, such as x86, each interrupt is assigned a unique vector number from 0 to 255 in the interrupt descriptor table (IDT), where lower vector numbers often imply higher priority in legacy configurations, though modern APICs allow flexible mapping via the Local APIC's priority registers. Interrupt nesting enables recursive handling, where an interrupt service routine (ISR) for a lower-priority event can be preempted by a higher-priority one, facilitated by selective masking that disables only interrupts at or below the current level while allowing higher ones to proceed. Upon entry to an ISR, the processor typically masks interrupts globally or by priority level; the ISR may then explicitly re-enable interrupts to permit nesting, but this risks stack overflow if nesting depth exceeds available stack space, particularly in deep chains.

In real-time systems, strict priority schemes are employed to prevent priority inversion, where a high-priority task is delayed by lower-priority interrupt handling; techniques like priority inheritance protocols extend these to interrupt threads, ensuring predictable execution by elevating the priority of servicing routines to match the highest blocked task. In multi-core processors, interrupt affinity mechanisms direct specific interrupts to designated cores, reducing nesting contention on a single core and improving parallelism by distributing load without recursive preemption on the same execution path.
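Fixed-priority resolution with nested handling can be captured in a few lines. The C model below imitates an 8259-style fully nested scheme; the variable names (pending, mask, in_service) parallel the controller's IRR, IMR, and In-Service Register, but the code is a host-runnable illustration, not controller firmware.

```c
#include <stdint.h>
#include <stdio.h>

static uint8_t pending;    /* one bit per IRQ line, set by devices   */
static uint8_t mask;       /* 1 = line masked (cf. the 8259 IMR)     */
static uint8_t in_service; /* lines whose handlers are running       */

/* Returns the highest-priority serviceable line (lowest number wins,
 * as in the 8259's default fixed-priority mode), or -1 if none.
 * A request is delivered only if no equal-or-higher-priority line is
 * already in service -- the basis of fully nested handling. */
static int next_irq(void)
{
    uint8_t candidates = pending & (uint8_t)~mask;
    for (int line = 0; line < 8; line++) {
        uint8_t bit = (uint8_t)(1u << line);
        if (in_service & bit)   /* higher-priority ISR still active */
            return -1;          /* lower lines must wait            */
        if (candidates & bit)
            return line;
    }
    return -1;
}

int main(void)
{
    pending = (1 << 3) | (1 << 1);  /* IRQ1 and IRQ3 both pending */
    printf("service IRQ%d first\n", next_irq());  /* -> IRQ1 */
}
```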

System Implementation

Interrupt Controllers and Lines

Interrupt Request (IRQ) lines function as dedicated electrical paths connecting peripheral devices to the processor, enabling hardware signals to request immediate attention from the CPU. In legacy x86 architectures, these systems typically support 16 IRQ lines, numbered from 0 to 15, each providing an independent channel for interrupt signaling. The core hardware for routing interrupts along these lines is the interrupt controller, which prioritizes and directs signals to the appropriate processor. In early x86 systems, the Intel 8259 Programmable Interrupt Controller (PIC) serves this role, managing up to eight vectored priority interrupts per unit and supporting cascaded configurations for expansion to 16 IRQs. The 8259 is programmable, allowing initialization for specific operating modes, priority schemes, and interrupt handling behaviors. For symmetric multiprocessing (SMP) environments, the Advanced Programmable Interrupt Controller (APIC) provides an enhanced framework, featuring a local APIC integrated into each processor for handling core-specific interrupts and an I/O APIC for routing device interrupts across the system bus. The APIC architecture also enables inter-processor interrupts (IPIs) to facilitate communication between multiple CPUs.

Interrupt controllers execute essential functions to streamline processing, including vectoring, which supplies the CPU with a vector number that maps to the interrupt service routine (ISR) address in the interrupt vector table. Masking capabilities allow selective enabling or disabling of individual lines via dedicated registers, preventing unwanted interrupts during critical operations. Status polling is another key feature, where the CPU interrogates controller registers to identify active or pending interrupts for resolution. In the 8259, for example, the Interrupt Mask Register (IMR) handles masking, while the In-Service Register (ISR) supports status checks. The APIC extends these with programmable registers for priority assignment, masking, and status monitoring tailored to multiprocessor setups.

Despite their effectiveness, traditional controllers like the 8259 face scalability limitations due to the fixed number of supported lines, which constrains the number of directly addressable devices and often requires interrupt sharing to accommodate more peripherals. Each IRQ line in these controllers can be configured for either edge-triggered or level-triggered detection, influencing signal sensitivity and potential for missed or spurious interrupts. The APIC mitigates some scalability issues in SMP systems but inherits configuration needs for trigger modes per input.
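Masking through the 8259's IMR is a read-modify-write of an I/O port. The C sketch below shows the conventional freestanding idiom (privileged code only; it cannot run as an ordinary user program); the port numbers are the standard master/slave data ports, while the helper names are local to the example.

```c
#include <stdint.h>

#define PIC1_DATA 0x21  /* master 8259 IMR */
#define PIC2_DATA 0xA1  /* slave 8259 IMR  */

static inline void outb(uint16_t port, uint8_t val)
{
    __asm__ volatile ("outb %0, %1" : : "a"(val), "Nd"(port));
}

static inline uint8_t inb(uint16_t port)
{
    uint8_t val;
    __asm__ volatile ("inb %1, %0" : "=a"(val) : "Nd"(port));
    return val;
}

/* Set the mask bit for one IRQ line (0-15): a masked line is simply
 * ignored by the controller until unmasked again. */
static void irq_set_mask(uint8_t irq)
{
    uint16_t port = (irq < 8) ? PIC1_DATA : PIC2_DATA;
    uint8_t  bit  = (uint8_t)(1u << (irq & 7));
    outb(port, inb(port) | bit);
}

static void irq_clear_mask(uint8_t irq)
{
    uint16_t port = (irq < 8) ? PIC1_DATA : PIC2_DATA;
    uint8_t  bit  = (uint8_t)(1u << (irq & 7));
    outb(port, inb(port) & (uint8_t)~bit);
}
```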

Shared and Message-Signaled Interrupts

In computer systems, shared interrupts allow multiple devices to utilize the same interrupt request (IRQ) line, a common practice in architectures like legacy x86 where the number of available lines is limited compared to the potential number of peripherals. This sharing is facilitated by connecting multiple devices to a single interrupt pin on the interrupt controller, enabling efficient use of resources in buses such as PCI. Upon receiving an interrupt on a shared line, the operating system kernel invokes handlers from all associated drivers, which then probe their respective device registers to determine if their device generated the signal.

Sharing interrupts introduces several challenges, particularly in edge-triggered systems where the brief pulse signaling an interrupt can be missed if multiple devices assert it simultaneously, leading to race conditions that cause lost interrupts or incorrect attribution. Serialization delays also arise as drivers sequentially poll their devices, increasing overall latency and potentially degrading system performance, especially with high-frequency interrupt sources. These issues are mitigated through careful driver design, such as using level-triggered modes where possible to ensure persistent signaling until acknowledged, or by assigning unique interrupt vectors via advanced controllers to avoid sharing altogether.

To overcome the limitations of wire-based shared interrupts, Message-Signaled Interrupts (MSI) were introduced in the PCI 2.2 specification as a pinless alternative, where devices signal interrupts by performing a dedicated write transaction to a system-specified memory address, encoding the interrupt in the data payload. This approach eliminates physical interrupt lines entirely, using the PCI bus itself for signaling and inherently supporting edge semantics without hardware-level acknowledgment. MSI allows up to 32 vectors per device through address and data configuration, promoting better scalability in dense device environments like multi-function PCI cards. The MSI-X extension, defined in the PCI 3.0 specification, further enhances this mechanism by providing a programmable table of up to 2048 independent interrupt messages, each with unique address, data, and per-vector masking capabilities to enable or disable specific interrupts without affecting others. This allows precise control and affinity assignment to specific processors, reducing contention in multiprocessor systems. By obviating the need for shared lines and minimizing probing overhead, MSI and MSI-X improve scalability for high-device-count systems, such as those in servers or system-on-chips (SoCs), while reducing pin count and wiring complexity in designs.
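The probe-and-decline protocol for shared lines is visible in the shape of a Linux handler. The kernel-module sketch below is hedged: request_irq with IRQF_SHARED and the IRQ_NONE/IRQ_HANDLED return convention are real kernel API, while my_dev, its register layout, and the status bit are hypothetical stand-ins for a concrete device.

```c
#include <linux/interrupt.h>
#include <linux/io.h>

struct my_dev {
    void __iomem *regs;
};
#define MYDEV_STATUS      0x00
#define MYDEV_STATUS_IRQ  0x01

static irqreturn_t my_irq_handler(int irq, void *dev_id)
{
    struct my_dev *dev = dev_id;
    u32 status = readl(dev->regs + MYDEV_STATUS);

    if (!(status & MYDEV_STATUS_IRQ))
        return IRQ_NONE;    /* not ours: let other drivers on the line probe */

    writel(MYDEV_STATUS_IRQ, dev->regs + MYDEV_STATUS);  /* ack in device */
    /* ... handle the event ... */
    return IRQ_HANDLED;
}

/* Registration, e.g. in probe():
 *   ret = request_irq(irq, my_irq_handler, IRQF_SHARED,
 *                     "mydev", dev);   // dev_id must be unique per driver
 */
```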

Advanced Variants

Hybrid interrupts integrate level- or edge-triggered mechanisms with polling to optimize for variable load scenarios, such as in USB controllers where frequent small transfers benefit from polling to reduce interrupt overhead while reserving interrupts for infrequent high-priority events. This approach balances CPU utilization and latency, as demonstrated in network interface cards (NICs) where hybrid methods handle delay-tolerant traffic by combining interrupt-driven processing with active polling, achieving up to 20% better energy savings compared to pure interrupt modes in low-load conditions.

Doorbell interrupts employ memory-mapped writes to a dedicated register, signaling event completion without traditional interrupt lines, which is particularly advantageous in high-throughput devices like GPUs and NVMe storage for minimizing overhead in frequent operations. In NVMe protocols, a host driver writes to the doorbell register to notify the controller of new submission queue entries (see the sketch below), enabling scalable I/O queuing with up to 64,000 queues per controller and reducing latency by avoiding per-interrupt context switches. For GPUs, doorbell mechanisms allow direct queue management from GPU code, bypassing the CPU and supporting zero-overhead sharing in PCIe networks.

Multiprocessor inter-processor interrupts (IPIs) facilitate communication between cores in symmetric multiprocessing (SMP) systems, often using the Advanced Programmable Interrupt Controller (APIC) to deliver targeted signals for tasks like cache maintenance or thread migration. In x86 architectures, IPIs synchronize caches across processors by invalidating or flushing lines, with typical latencies around 100-200 cycles depending on core count, ensuring data consistency in shared-memory environments. ARM-based systems implement IPIs via the Generic Interrupt Controller (GIC), allocating up to 16 private interrupts per core for inter-core signaling without impacting external device lines.

The Generic Interrupt Controller versions 3, 4, and 5 (GICv3 and GICv4 introduced in 2015, and GICv5 in 2025) incorporate extensions to support secure multi-tenant environments, allowing hypervisors to route interrupts directly to virtual machines via virtual interrupt distributors and injectors. GICv3 enables system-wide interrupt distribution with up to 1,024 signals, while GICv4 adds direct virtual-LPI (locality-specific peripheral interrupt) support for low-latency delivery in nested virtualization, reducing hypervisor intervention by 50-70% in benchmarks. GICv5 introduces a rearchitected design for scalable interrupt management in multi-chiplet systems, with improved virtualization support.

RISC-V's Core-Local Interrupt Controller (CLIC), specified in draft version 0.9 and advanced to Technical Committee approval in September 2025 toward ratification, provides a customizable, low-latency interrupt model with vectored dispatching and per-interrupt privilege levels, supporting up to 4,096 lines for embedded and high-performance systems. CLIC achieves sub-10-cycle latencies through mode-based configuration (non-vectoring for simplicity or vectored for speed) and preemptive handling, outperforming the traditional PLIC in latency-sensitive applications by enabling direct handler jumps without software vector tables.

In devices like the ESP32, low-power wakeup interrupts leverage RTC (real-time clock) domain signals to exit deep-sleep modes with minimal energy draw, typically under 10 μA, using timer, touch, or external GPIO triggers routed through low-power peripherals. These variants prioritize ultra-low duty cycles, allowing wakeups from external interrupts on GPIOs without full CPU reactivation, which extends battery life to years in sensor networks while maintaining responsiveness to events like button presses or sensor thresholds.
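As referenced above, a doorbell is just an ordered store to a device register. The C fragment below is conceptual: the base address, register offset, and queue layout are invented, and the release fence stands in for whatever barrier the real bus and driver discipline require.

```c
#include <stdint.h>

/* Conceptual NVMe-style doorbell: after appending commands to an
 * in-memory submission queue, the driver writes the new tail index to
 * a memory-mapped doorbell register.  The write itself is the
 * notification, so no interrupt line is involved. */
#define DOORBELL_BASE 0x50001000u
#define SQ0_TAIL_DB   (*(volatile uint32_t *)(uintptr_t)(DOORBELL_BASE + 0x0))

static inline void ring_doorbell(uint32_t new_tail)
{
    /* Ensure queue-entry writes are globally visible before the
     * doorbell write that makes the device fetch them. */
    __atomic_thread_fence(__ATOMIC_RELEASE);
    SQ0_TAIL_DB = new_tail;
}
```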

Performance and Optimization

Latency and Overhead

Interrupt latency encompasses several distinct components that contribute to the overall delay in processing hardware signals. The hardware latency, often termed request-to-acknowledgment time, measures the period from the assertion of an interrupt request (IRQ) by a device to the processor's acknowledgment and dispatch to the interrupt service routine (ISR). This phase typically involves minimal cycles for vectoring in modern architectures, such as x86, where it can range from a few to tens of nanoseconds depending on pipeline depth and interrupt controller efficiency. Software latency includes the execution time of the ISR itself, which handles the immediate response to the interrupt, such as acknowledging the device and performing critical operations. In x86 systems, this handler execution often consumes several microseconds, with end-to-end latency—from IRQ assertion to full ISR completion and return to the interrupted task—typically falling in the 1-10 µs range under normal conditions without heavy contention. Factors like clock speed and ISR complexity directly influence these durations, emphasizing the need for concise handler code to minimize bottlenecks.

Overhead in interrupt processing primarily arises from context save and restore operations, where the processor must preserve the state of the interrupted task (e.g., registers, program counter) before entering the ISR and restore it upon exit. On x86 architectures, this can require 100-500 CPU cycles, equivalent to roughly 30-150 ns at 3 GHz clocks, depending on the extent of saved state and hardware support like fast context-switching features. Efficient ISR design is crucial, as prolonged handler execution exacerbates this overhead by delaying task resumption and potentially increasing system-wide latency.

Several factors influence interrupt latency and overhead in multi-core environments. Bus contention occurs when multiple devices compete for shared interconnects, delaying IRQ delivery and acknowledgment, which can extend hardware latency by hundreds of cycles in high-load scenarios. Cache misses during context save/restore or ISR execution further amplify delays, as fetching data from main memory incurs 100-300 cycles of stall time per miss, disrupting cache locality. Assigning interrupts to specific cores via affinity mechanisms mitigates migration latency by preserving cache locality, reducing overhead by up to 20-50% in I/O-bound workloads through decreased cross-core communication.

To optimize latency and overhead, systems employ deferred processing techniques that split interrupt handling into immediate (top-half) and postponed (bottom-half) components. In Linux, softirqs serve as bottom halves, scheduling non-urgent work outside the hard interrupt context to avoid prolonging ISR execution and to keep interrupt-disabled time short. This approach can cut end-to-end latency by deferring cache-intensive tasks, improving overall throughput. Latency is commonly measured using tools like cyclictest, which generates periodic wakeups and reports maximum response times in microseconds, aiding in tuning for real-time constraints. Recent advances as of 2025 include hardware-assisted techniques like interrupt caching, which can reduce average latency by approximately 77% compared to traditional OS mechanisms by minimizing software overhead in context switching. Adaptive methods using machine learning for dynamic interrupt prioritization have also emerged to optimize performance in varying workloads.
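The cyclictest approach can be reproduced in a few lines of C: arm an absolute-deadline sleep, then measure how late the wakeup arrived. This is a simplified sketch of the idea (single thread, no memory locking or priority setup, iteration count arbitrary), not a replacement for the real tool.

```c
#include <stdio.h>
#include <time.h>

#define INTERVAL_NS 1000000L  /* 1 ms period */

int main(void)
{
    struct timespec next, now;
    long max_ns = 0;

    clock_gettime(CLOCK_MONOTONIC, &next);
    for (int i = 0; i < 10000; i++) {
        next.tv_nsec += INTERVAL_NS;
        while (next.tv_nsec >= 1000000000L) {
            next.tv_nsec -= 1000000000L;
            next.tv_sec++;
        }
        /* Sleep until the absolute deadline; a timer interrupt drives
         * the wakeup, so the measured delay bounds the combined
         * interrupt + scheduling latency. */
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
        clock_gettime(CLOCK_MONOTONIC, &now);
        long lat = (now.tv_sec - next.tv_sec) * 1000000000L
                 + (now.tv_nsec - next.tv_nsec);
        if (lat > max_ns)
            max_ns = lat;
    }
    printf("max wakeup latency: %ld ns\n", max_ns);
}
```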

Interrupt Storms and Mitigation

An interrupt storm occurs when a processor receives an excessive number of interrupts in a short period, overwhelming the CPU and leading to saturation where the majority of processing time is consumed by interrupt handling rather than useful work. For example, a misbehaving network interface controller (NIC) can generate rates exceeding 100,000 interrupts per second, causing complete CPU utilization and rendering the system unresponsive. This condition not only degrades performance but also creates a denial-of-service (DoS) vulnerability, as the flood of interrupts can be exploited to halt system operations. Common causes include faulty device drivers or hardware malfunctions, such as defective NICs that fail to properly signal completion of operations, resulting in repeated interrupt triggers. In virtualized environments, attack vectors like interrupt injection enable malicious hypervisors or guests to flood a victim virtual machine (VM) with interrupts, compromising availability and performance. Operating systems have implemented protections such as interrupt throttling to curb potential abuse in shared environments, a measure adopted in the Linux kernel for DoS prevention.

To mitigate interrupt storms, interrupt coalescing batches multiple events into a single interrupt, reducing the overall rate while using timers or packet thresholds to balance latency and throughput; for instance, NICs employ receive (RX) coalescing to group incoming packets. In Linux, the New API (NAPI) offloads interrupt processing by switching to polling mode after an initial interrupt, allowing the kernel to process packets in batches without further hardware interrupts during high traffic. Interrupt pinning assigns specific interrupts to designated CPU cores via mechanisms like the smp_affinity attribute, distributing load and preventing any single core from being overwhelmed (see the sketch below). For security, the Input-Output Memory Management Unit (IOMMU) provides device isolation by remapping interrupts and restricting DMA access, preventing malicious peripherals from targeting arbitrary memory or injecting unauthorized interrupts. In cloud environments, providers like AWS implement interrupt throttling through dynamic moderation in Nitro-based instances, adjusting rates based on load to enhance protection against interrupt floods in multi-tenant setups during the 2020s. These strategies collectively ensure system stability by limiting interrupt floods without excessively impacting normal operations.
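Pinning an IRQ via smp_affinity is a one-line write to procfs; the small C program below shows the mechanics. The IRQ number 42 and the CPU-0 mask are examples only; in practice the values come from /proc/interrupts and the desired topology, and the write requires root.

```c
#include <stdio.h>

int main(void)
{
    int irq = 42;                       /* example IRQ number */
    char path[64];
    snprintf(path, sizeof path, "/proc/irq/%d/smp_affinity", irq);

    FILE *f = fopen(path, "w");
    if (!f) {
        perror("fopen");
        return 1;
    }
    /* Hex bitmask of allowed CPUs: bit 0 set -> deliver only to CPU 0. */
    fputs("1\n", f);
    fclose(f);
    return 0;
}
```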

Applications

Operating Systems and I/O

In general-purpose operating systems like Linux, interrupts play a central role in managing input/output (I/O) operations by providing asynchronous notifications from devices to the kernel. When a device such as a disk controller completes a data transfer, it signals the CPU via an interrupt request (IRQ), allowing the kernel to handle the completion without continuous polling. For instance, the Advanced Host Controller Interface (AHCI) driver in Linux uses interrupt threads to process command completions directly, enabling efficient disk read operations by queuing and acknowledging the event in the interrupt handler. This mechanism ensures that I/O-bound tasks, like reading sectors from a hard drive, are serviced promptly while minimizing CPU overhead. IRQs facilitate these asynchronous notifications by routing hardware signals through the kernel's generic interrupt subsystem, which abstracts the underlying controller hardware for device drivers.

Timer interrupts are essential for process scheduling in preemptive multitasking environments, where the operating system periodically relinquishes control from one process to another to ensure fairness and responsiveness. Hardware timers such as the programmable interval timer (PIT) or High Precision Event Timer (HPET) generate periodic clock ticks, typically at rates like 250 Hz, triggering the kernel's scheduler to evaluate task priorities and perform context switches if necessary. In Linux, the tick handler invoked by these interrupts updates kernel timekeeping, decrements process time slices, and invokes the scheduler routine to select the next runnable task, thereby enabling context switches on each clock tick. This approach contrasts with cooperative scheduling by enforcing time limits, preventing any single process from monopolizing the CPU.

Specific examples illustrate how interrupts bridge hardware events to software layers. For keyboard and mouse inputs, device drivers like usbkbd or psmouse register IRQ handlers that capture events—such as a key press or cursor movement—via interrupts, then translate them into structured input events with timestamps. These events are dispatched to user space through the input subsystem's evdev interface, accessible via character devices like /dev/input/event0, allowing applications to read them asynchronously without kernel-level polling (see the sketch below). Similarly, when a network card (NIC) receives a packet, it raises an IRQ to notify the kernel; the driver processes the interrupt, enqueues the packet in a receive ring buffer via DMA, and triggers the protocol stack (e.g., IP and TCP layers) for demultiplexing and delivery to the appropriate socket. This interrupt-driven flow ensures low-latency packet handling in high-throughput scenarios.

Operating systems integrate interrupts through structured mechanisms that map hardware signals to handler code while deferring non-critical work. In x86 architectures, the interrupt descriptor table (IDT) serves as a jump table with 256 entries, each an 8-byte gate pointing to handlers for specific interrupt vectors; the CPU uses the IDTR register to locate the IDT and dispatch to the appropriate routine upon signal receipt. Linux employs a staged handling model: a critical stage for acknowledging the hardware, an immediate stage for the primary handler, and a deferred "bottom-half" stage for non-urgent tasks like protocol processing, implemented via softirqs or threaded IRQs to avoid blocking on longer operations. This design allows efficient resource use, with bottom halves executing after interrupts are re-enabled, ensuring the system remains responsive to new events.
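As referenced above, interrupt-originated input events surface to user space as fixed-size records on an evdev node. The C sketch below reads them; the device path /dev/input/event0 is system-dependent and typically requires elevated privileges.

```c
#include <fcntl.h>
#include <linux/input.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/input/event0", O_RDONLY);
    if (fd < 0) {
        perror("open");
        return 1;
    }
    /* Each record was produced by a driver IRQ handler and queued by
     * the kernel input core before reaching this read(). */
    struct input_event ev;
    while (read(fd, &ev, sizeof ev) == sizeof ev) {
        if (ev.type == EV_KEY)  /* key press/release delivered via IRQ */
            printf("key code %u value %d\n", (unsigned)ev.code, ev.value);
    }
    close(fd);
    return 0;
}
```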

Embedded and Real-Time Systems

In embedded and real-time systems, interrupts play a critical role in ensuring deterministic responses to time-sensitive events, particularly in resource-constrained environments where meeting deadlines is paramount. Real-time operating systems (RTOS) such as FreeRTOS leverage high-priority interrupts to preempt lower-priority tasks, allowing critical operations like sensor processing or control loops to adhere to strict timing deadlines. For instance, FreeRTOS supports configurable interrupt priorities that enable rapid task switching upon interrupt occurrence, minimizing delays in deadline-driven applications. To prevent priority inversion—where a high-priority task is blocked by a low-priority task holding a shared resource—FreeRTOS implements a basic priority inheritance protocol, temporarily elevating the low-priority task's priority to match the highest waiting task, thus bounding the inversion duration and preserving real-time guarantees.

In microcontrollers commonly used in embedded applications, interrupts facilitate efficient handling of peripheral events with minimal overhead. For example, AVR microcontrollers from Microchip employ timer interrupts to periodically sample sensor data, triggering an interrupt service routine (ISR) when a timer overflow or compare match occurs, which allows precise timing for tasks like analog-to-digital conversion without continuous polling (see the sketch below). Similarly, in IoT devices, STM32 microcontrollers from STMicroelectronics use External Interrupt/Event Controller (EXTI) lines to enable low-power wakeups from sleep modes, where an external event such as a sensor trigger or network signal generates an interrupt to resume operation, optimizing battery life in always-on scenarios. These mechanisms ensure that embedded systems respond promptly to environmental inputs while conserving energy.

Key challenges in these systems include minimizing jitter—the variation in response time—to achieve predictable behavior, especially in safety-critical domains like automotive applications where jitter must often be kept below 1 µs to maintain control stability. ARM Cortex-M processors address this through their Nested Vectored Interrupt Controller (NVIC), which supports nested prioritization and fast, low-latency handling, reducing entry latency to as few as 12 clock cycles and enabling quick ISR execution for time-critical responses. Post-2020 advancements in RISC-V architectures further enhance this for edge AI devices; the Core-Local Interrupt Controller (CLIC) provides low-latency, vectored interrupt handling with preemptive capabilities, facilitating efficient event processing in real-time applications by allowing direct vectoring to ISRs without software overhead. In IoT ecosystems, such interrupt mechanisms underpin low-power event handling for seamless device wakeup and communication.
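The AVR timer-sampling pattern mentioned above looks like the following in C. It assumes an ATmega328P at 16 MHz; the 100 Hz rate, ADC channel, and prescaler choices are illustrative.

```c
/* AVR (ATmega328P) sketch: Timer1 fires a compare-match interrupt at
 * a fixed rate, and the ISR kicks off an ADC conversion -- no polling
 * loop required. */
#include <avr/io.h>
#include <avr/interrupt.h>

static volatile uint16_t last_sample;

ISR(TIMER1_COMPA_vect)
{
    ADCSRA |= (1 << ADSC);   /* start one ADC conversion */
}

ISR(ADC_vect)
{
    last_sample = ADC;       /* conversion complete: store result */
}

int main(void)
{
    /* Timer1 CTC mode, 16 MHz / 256 prescaler, 625 counts = 100 Hz. */
    TCCR1B = (1 << WGM12) | (1 << CS12);
    OCR1A  = 624;
    TIMSK1 = (1 << OCIE1A);

    /* ADC on channel 0, AVcc reference, interrupt on completion,
     * /128 clock prescaler. */
    ADMUX  = (1 << REFS0);
    ADCSRA = (1 << ADEN) | (1 << ADIE)
           | (1 << ADPS2) | (1 << ADPS1) | (1 << ADPS0);

    sei();
    for (;;) {
        /* main loop free for other work; sampling runs on interrupts */
    }
}
```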

Virtualization and Security

In virtualized environments, posted interrupts facilitate low-latency delivery of external interrupts directly to virtual machines without requiring a VM exit to the hypervisor, significantly reducing overhead in systems using Intel VT-x or AMD-V. Introduced in Intel VT-x in the early 2010s and enhanced in subsequent processor generations, posted interrupts operate by queuing interrupt requests in a dedicated descriptor structure within the virtual-APIC page, allowing the processor to update the guest's virtual interrupt request register (vIRR) while maintaining non-root operation. Similarly, AMD's Advanced Virtual Interrupt Controller (AVIC), part of AMD-V since the Family 17h processors in the mid-2010s and refined in later models, enables direct interrupt posting to virtual processors by mapping APIC registers to physical structures, bypassing hypervisor intervention for improved scalability in multi-VM setups.

For device passthrough in virtualized systems, virtual I/O Memory Management Units (vIOMMUs) emulate IOMMU functionality to securely assign devices to guests, handling DMA translations and interrupt remapping without full IOMMU dedication per VM. The vIOMMU design, as proposed in early 2010s research, uses shadow page tables for IOMMU context emulation, enabling efficient passthrough by trapping and translating device requests while minimizing overhead through batched translations. This approach supports multi-tenancy in cloud environments, where multiple VMs share physical I/O resources.

Security in virtualized interrupts relies on mechanisms like Extended Page Table (EPT) violations to enforce isolation between guest and host domains. EPT violations, triggered by unauthorized memory accesses related to interrupt handling, can be processed exitlessly using Virtualization Exceptions (#VE) in modern Intel processors, converting faults into guest-handled exceptions (vector 0x14) that the guest verifies without full context switches, thus preventing side-channel leaks during interrupt delivery. The Microarchitectural Data Sampling (MDS) vulnerabilities (disclosed in 2019, with CVEs assigned in 2018), affecting Intel CPUs from 2008 onward, exposed data in microarchitectural buffers during interrupt-induced context switches; they were mitigated by buffer clearing via the VERW instruction and microcode updates that flush store buffers before ring transitions, with full protection requiring coordinated OS and hypervisor updates. In cloud virtualization platforms like KVM, interrupt forwarding schemes such as Direct Interrupt Delivery (DID) route device interrupts from SR-IOV-enabled NICs directly to guest vCPUs, using posted interrupt hardware to avoid VM exits and enhance I/O throughput in multi-tenant setups.

Recent developments extend secure interrupt handling to ARM and RISC-V architectures. ARM TrustZone partitions interrupts into Normal world (IRQ) and Secure world (FIQ) domains, with the Secure Configuration Register preventing non-secure code from masking secure interrupts, ensuring isolation by routing FIQs directly to the secure monitor mode without Normal world intervention. In RISC-V, Physical Memory Protection (PMP) extensions, ratified in the 2010s and expanded in 2020s trusted execution environments (TEEs), enforce interrupt isolation by trapping unauthorized accesses via PMP checks, notifying a security monitor for handling while maintaining enclave confidentiality during interrupt processing.
For virtualized graphics, VFIO enables GPU passthrough with shared interrupts, where a single physical interrupt line is multiplexed across virtual GPUs using IOMMU-protected mappings, allowing direct guest access to graphics interrupts in KVM-based systems. A key risk in virtualized interrupt handling is denial-of-service (DoS) attacks via excessive guest-generated interrupts, which can overwhelm the host by flooding interrupt queues. Mitigations include rate limiting through interrupt masking in the hypervisor and CPU affinity rules to bind guest vCPUs to specific physical cores, distributing load and preventing resource exhaustion, as implemented in frameworks like iSotEE.

References

  1. [1]
    Chapter 12: Interrupts
    An interrupt is the automatic transfer of software execution in response to a hardware event that is asynchronous with the current software execution.
  2. [2]
    [PDF] Computer System Overview: Part 2 3 Interrupts
    3.1 What is interrupt? A rough definition of interrupt is that: interrupt is a mechanism by which computer compo- nents, like memory or I/O modules, may ...
  3. [3]
    [PDF] 6. Interrupts - Illinois Institute of Technology
    Interrupts are events that require a change in the control flow, other than jumps or branches. They appeared as an efficient way to manage I/O.
  4. [4]
    CSCI 4717 -- Interrupts
    An interrupt is an asynchronous event caused typically by a device external to the processor. It is unexpected by the code and it can occur at any time ...
  5. [5]
    [PDF] Safe and Structured Use of Interrupts in Real-Time and Embedded ...
    Nov 3, 2006 · An interrupt is a hardware-supported asynchronous transfer of control to an interrupt vector, which is the execution of an interrupt handler.
  6. [6]
    Arm GIC fundamentals
    The GIC can deal with four different types of interrupt sources. Each interrupt source is identified by an ID number, which is referred to as an INTID.Interrupt Types · How Interrupts Are Signaled... · Level Sensitive Interrupts
  7. [7]
    [PDF] CS 423 Operating System Design: Interrupts
    □ Software Interrupts: □ Interrupts caused by the execution of a software instruction: □ INT <interrupt_number>. □ Used by the system call interrupt().
  8. [8]
    Asynchronous Events: Polling Loops and Interrupts
    To avoid this inefficiency, interrupts are often used instead of polling. An interrupt is a signal sent by another device to the CPU. The CPU responds to an ...
  9. [9]
  10. [10]
    Chapter 15 Interrupts and Exceptions
    a software interrupt can be used to request a service from the OS. most I/O devices can generate a hardware interrupt when they are ready to transfer data.
  11. [11]
    Interrupts - Mark Smotherman - Clemson University
    Summary: Interrupts are a vital part of sequencing a modern computer. They were developed for exception handling and were later applied to I/O events. .. under ...
  12. [12]
    [PDF] Exceptional Control Flow
    Exceptions can be divided into four classes: interrupts, traps, faults, and aborts. ... Trap. 129–255. OS-defined exceptions. Interrupt or trap. Figure 8.9: ...
  13. [13]
    INT n/INTO/INT3/INT1 — Call to Interrupt Procedure
    The INT n instruction is the general mnemonic for executing a software-generated call to an interrupt handler. The INTO instruction is a special mnemonic for ...
  14. [14]
    FIQ and IRQ - ARM Cortex-R Series (Armv7-R) Programmer's Guide
    FIQ is for high-priority interrupts with fast response, while IRQ is for other interrupts. FIQ handlers don't generate other exceptions and are for special ...
  15. [15]
    [PDF] The RISC-V Instruction Set Manual: Volume II: Privileged Architecture
    The Smrnmi extension adds support for resumable non-maskable interrupts (RNMIs) to RISC-V. The extension adds four new CSRs (mnepc, mncause, mnstatus, and ...
  16. [16]
    What is an interrupt request (IRQ) and how does it work? - TechTarget
    Jan 18, 2023 · An interrupt request (IRQ) is a signal sent to a computer's processor to momentarily stop (interrupt) its operations.
  17. [17]
    3.3. Non-Maskable Interrupts | Red Hat Enterprise Linux for Real Time
    A non-maskable interrupt (NMI) cannot be ignored, and is generally used only for critical hardware errors. NMIs are normally delivered over a separate interrupt ...
  18. [18]
    The Lincoln TX-2 input-output system - ACM Digital Library
    The input-output system of the Lincoln TX-2 computer contains a variety of input-output devices suitable for general research and control applications.
  19. [19]
    [PDF] Programmed Data Processor-1, 1960. - Computer History Museum
    -The PDP-1 is also available with the optional Sequence Break. System. This is a 16-channel (or more, when needed) automatic interrupt feature which permits ...
  20. [20]
    [PDF] Architecture of the IBM System / 360
    When the channel program ends, the CPU program is interrupted, and complete channel and device status information are available. ... 393-396 (1961). Received ...Missing: vectored | Show results with:vectored
  21. [21]
    [PDF] IBM System/360 Operating System Multiprogramming With a Fixed ...
    This publication describes the basic concepts of multiprogramming with a fixed number of tasks (MFT). It also includes aspects that must be considered to gain ...
  22. [22]
  23. [23]
    [PDF] 8259A PROGRAMMABLE INTERRUPT CONTROLLER ... - PDOS-MIT
    The Intel 8259A Programmable Interrupt Controller handles up to eight vectored priority interrupts for the CPU. It is cascadable for up to 64 vectored priority ...Missing: 1976 | Show results with:1976
  24. [24]
  25. [25]
    ARM Generic Interrupt Controller Architecture Specification - Version 2.0 (B.b)
    - **ARM GIC Introduction**: The ARM Generic Interrupt Controller (GIC) was introduced with the architecture specification, with Version 2.0 (B.b) being the latest, indicating development in the 2000s as part of ARM's system architecture evolution.
  26. [26]
    Ratified Specifications - RISC-V International
    Ratified RISC-V specifications are free, publicly available, and no changes are allowed; modifications must be through extensions.
  27. [27]
    Nitro Enclaves
AWS documentation on Nitro Enclaves, introduced in the 2020s.
  28. [28]
    Optimised Extension of an Ultra-Low-Power RISC-V Processor to ...
Additionally, Zero-Riscy supports hardware interrupts, low-power modes, and clock gating, enabling efficient operation across various application scenarios. For ...
  29. [29]
    Introduction to Message-Signaled Interrupts - Windows drivers
    Feb 21, 2025 · Message-signaled interrupts (MSIs) were introduced in the PCI 2.2 specification as an alternative to line-based interrupts.
  30. [30]
    8.2. Nios® V Processor Hardware Interrupt Service Routines - Intel
    Software often communicates with peripheral devices using hardware interrupts. When a peripheral asserts its IRQ, it diverts the processor's normal execution ...
  31. [31]
    2.1.5 Interrupt-driven UART Implementation - Microchip Online docs
    The interrupt-driven UART driver has the same hardware requirement as the polled UART driver. The basic functionality of the interrupt-driven implementation ...
  32. [32]
    [PDF] 8253.pdf - CPCWiki
The 8253 is a programmable interval timer/counter specifically designed for use with Intel™ microcomputer systems. Its function is that of a general ...
  33. [33]
    [PDF] Intel® 64 and IA-32 Architectures Software Developer's Manual
Covers interrupt and exception handling, including masking maskable hardware interrupts.
  34. [34]
    3.7.12. Handling Nonmaskable Interrupts - Intel
    NMIs leave intact the processor state associated with maskable interrupts and other exceptions, as well as normal, nonexception processing, when each NMI is ...
  35. [35]
    Improving lost and spurious IRQ handling - LWN.net
    Jun 15, 2010 · Missing interrupts can have a number of causes, including flaky devices or an interrupt routing problem somewhere in the system.
  36. [36]
    [PDF] 82371ab pci-to-isa / ide xcelerator (piix4) - Intel
... acknowledges the interrupt. This can be a useful safeguard for detecting interrupts caused by spurious noise glitches on the IRQ inputs. To implement this ...
  37. [37]
    Supervisor calls - Arm Developer
The SVC instruction (formerly SWI) generates a Supervisor Call (formerly called a Software Interrupt). Supervisor calls are normally used to request ...
  38. [38]
    <signal.h>
Causes signal not to be automatically blocked on entry to signal handler. Process is executing on an alternate signal stack.
  39. [39]
    [PDF] Intel® 64 and IA-32 Architectures Software Developer's Manual
NOTE: The Intel® 64 and IA-32 Architectures Software Developer's Manual consists of ten volumes: Basic Architecture, Order Number 253665; Instruction Set ...
  40. [40]
    Interrupt Line - an overview | ScienceDirect Topics
    Level triggered interrupts are often used when multiple devices share the interrupt line. The interrupt outputs from a number of devices can be electrically ...
  41. [41]
    Interrupt Request - an overview | ScienceDirect Topics
Both types of interrupts have their strengths and drawbacks. With a level-triggered interrupt, as shown in the example in Figure 4-65a, if the request is ...
  42. [42]
    Introduction to Microcontrollers - Interrupts - Mike Silva
    Sep 18, 2013 · An edge-triggered interrupt generates an interrupt request only on an edge - that is, when the interrupt line goes from one state to the ...
  43. [43]
    Level-triggered vs. Edge-triggered Interrupts - Gary Stringham
Nov 29, 2008 · I see no benefit to level-triggered interrupts. Some engineers prefer level-triggered interrupts because they require fewer gates or because of ...
  44. [44]
    Chapter 10: PCI Advanced Features - WinDriver
Legacy PCI interrupts are level sensitive. Edge-triggered interrupts: These are interrupts that are generated once, when the physical interrupt signal goes ...
  45. [45]
    [PDF] NVM Express PCIe Transport Specification 1.0c
    Oct 3, 2022 · Unlike INTx virtual wire interrupts which are level sensitive, MSI interrupts are edge sensitive. Pin-based and single MSI only use one ...
  46. [46]
    USB Dual Role Driver Stack Architecture - Windows - Microsoft Learn
    Sep 20, 2024 · Two edge-triggered interrupts: one that fires when the ID pin on the connector is grounded, and another one that fires when the ID pin is ...
  47. [47]
    IBM PS/2 Model 50 Keyboard Controller | OS/2 Museum
    Aug 24, 2012 · Perhaps unexpectedly, edge triggered interrupts are still generated by the KBC, even though the PS/2 uses level triggered interrupts. To ...
  48. [48]
    8.2: Context switching - Engineering LibreTexts
Nov 30, 2020 · Interrupt handlers can be fast because they don't have to save the entire hardware state; they only have to save registers they are planning to ...
  49. [49]
    Interrupt priority - Arm Developer
FIQ interrupts have the highest priority, followed by the vectored interrupts 0-31, and the daisy-chained interrupt has the lowest priority. The priority order ...
  50. [50]
    Nested interrupt handling - Arm Developer
    Nested interrupt handling allows software to accept another interrupt before finishing the current one, enabling prioritization and improved latency.
  51. [51]
  52. [52]
    Linux generic IRQ handling - The Linux Kernel documentation
The effective IRQ affinity on SMP, as some irq chips do not allow multi-CPU destinations. ... Handle a nested irq from an irq thread.
  53. [53]
  54. [54]
    IRQs Explained - Real World Tech
    May 24, 1998 · When people speak of IRQs, they are referring to hardware interrupt requests. In the IBM PC there are 16 such interrupts defined, which are all maskable ...
  55. [55]
    [PDF] IOAPIC datasheet for web - PDOS-MIT
The Local Unit further provides inter-processor interrupts and a timer, to its local processor. The register level interface of a processor to its local APIC is ...
  56. [56]
    Problems with shared interrupts - QNX
    Sharing interrupts can increase interrupt latency, depending upon exactly what each of the drivers does. After an interrupt fires, the kernel doesn't unmask the ...
  57. [57]
    When Poll is More Energy Efficient than Interrupt - ACM Digital Library
    Jun 28, 2022 · Our experimental results indicate that although hybrid polling provides a good trade-off in CPU utilization, it is the least energy efficient, ...
  58. [58]
    [PDF] Hardware-Accelerated Platforms and Infrastructures for Network ...
through polling-alone or interrupts-alone, or through hybrid approaches: The common approach of the NICs keeping the CPUs alive for delay tolerant traffic ...
  59. [59]
    Low Overhead & Energy-efficient FPGA-based Storage Multi-paths
    Then the new tail is written in a memory-mapped PCIe register called “doorbell” (Step 2), which notifies the NVMe controller for new submission entries.
  60. [60]
    [PDF] A NVMe Storage Virtualization Solution with Mediated Pass-Through
Jul 13, 2018 · To overcome the two-part overhead, a direct idea is to change the trap of interrupts into an active polling mechanism in the 2-way overhead.
  61. [61]
    [PDF] 2 SmartIO: Zero-overhead Device Sharing through PCIe Networking
Avoiding CPU synchronization: By hosting I/O queues in GPU memory and mapping doorbell registers for the GPU, a CUDA kernel running on the GPU can operate the ...
  62. [62]
    Symmetric multiprocessing (SMP) - QNX
The processors communicate with each other through IPIs (interprocessor interrupts). IPIs can effectively schedule and control threads over multiple processors.
  63. [63]
    Cost of IPI (inter-processor interrupt) ? - Intel Community
Sep 18, 2009 · IPIs are typically used to implement a cache coherency synchronization point. (...) In x86 based systems, an IPI synchronizes the cache and ...
  64. [64]
    Interrupts on MPCore Development Boards
Inter-processor interrupts (IPI) can, for example, be used for interrupt-driven communication between CPUs in an SMP system. The GIC implements IPI using 16 ...
  65. [65]
    GICv3 and GICv4 Software Overview - Arm Developer
This document provides an overview of version 3 of the Generic Interrupt Controller Architecture (GICv3). It is primarily intended for software engineers ...
  66. [66]
    [PDF] Generic Interrupt Controller v3 and v4, Virtualization - Arm
Jul 18, 2022 · This guide describes the support for virtualization in the GICv3 and GICv4 architecture. It covers the controls available to a hypervisor ...
  67. [67]
    [PDF] RISC-V Core-Local Interrupt Controller (CLIC) Version 0.9-draft ...
    The Core-Local Interrupt Controller (CLIC) is designed to provide low-latency, vectored, pre-emptive interrupts for RISC-V systems.
  68. [68]
    CV32RT: Enabling Fast Interrupt and Context Switching for RISC-V ...
Mar 21, 2024 · The RISC-V core local interrupt controller (CLIC) specification addresses this concern by enabling preemptible, low-latency vectored interrupts ...
  69. [69]
Sleep Modes - ESP32 — ESP-IDF Programming Guide v5.5.1 ...
To wake up from a touch sensor interrupt, users need to configure the touch pad interrupt before the chip enters Deep-sleep or Light-sleep modes. Revisions 0 and ...
  70. [70]
    Interrupt Latency - an overview | ScienceDirect Topics
    Interrupt latency is defined as the delay from the start of an interrupt request to the execution of the interrupt handler. This latency can vary based on ...
  71. [71]
    Embedded Systems: Interrupts & Latency
    Jun 1, 2001 · Interrupts are asynchronous breaks in program flow from external events. Interrupt latency is the time from interrupt to ISR execution. The CPU ...
  72. [72]
    Design and Implementation of a High-Performance X86 Embedded ...
Feb 20, 2025 · Interrupt response: simulates hardware interrupt system response time; average interrupt response time of 15 microseconds. Task scheduling ...
  73. [73]
    Interrupt Latency - The Ganssle Group
    Understanding and dealing with interrupt latency in embedded firmware.
  74. [74]
    How long does a context switch take? - Quora
Apr 20, 2010 · A context switch could take anywhere from a few hundred nanoseconds to a few microseconds, depending upon the CPU architecture and the size of the context that is to ...
  75. [75]
    Context Switching on x86 - samwho
Jun 1, 2013 · Context switching is the method an operating system employs to implement multitasking. It's the practice of having multiple contexts of execution in your ...
  76. [76]
    [PDF] Performance Implications of Cache Affinity on Multicore Processors
Due to the ability of masking the cache miss latency by dynamically scheduling instructions, deep out-of-order pipelines have a higher tolerance to L1 ...
  77. [77]
    Juggling software interrupts and realtime tasks - LWN.net
Dec 2, 2022 · Pre-Linux Unix systems often included the concept of a "bottom half" as a way of deferring work that could not be done in an interrupt handler.
  78. [78]
    [PDF] I'll Do It Later: Softirqs, Tasklets, Bottom Halves, Task Queues, Work ...
The Linux kernel offers many different facilities for postponing work until later. Bottom Halves are for deferring work from interrupt context. Timers allow ...
  79. [79]
    realtime:documentation:howto:tools:cyclictest:start [Wiki]
    Jan 19, 2025 · The Cyclictest Test Design page goes into more detail about how to choose the right options for measuring a specific latency on a given system.
  80. [80]
    Optimizing Storage Performance with Calibrated Interrupts
    Mar 2, 2022 · Calibrated interrupts increase throughput by up to 35%, reduce CPU consumption by as much as 30%, and achieve up to 37% lower latency when ...
  81. [81]
    Real World Issue - Broadcast storm from 169.254/16 causes CPU ...
Feb 16, 2020 · Broadcast storm from 169.254/16 causes CPU DoS on 6500's ... CPU utilization showed saturation of the uplinks of the ...
  82. [82]
    Debugging an Interrupt Storm - Windows drivers | Microsoft Learn
Dec 15, 2021 · This example demonstrates one method for detecting and debugging an interrupt storm. When the machine hangs, use a kernel debugger to break in.
  83. [83]
    Interrupt coalescing - IBM
Interrupt coalescing collects packets and generates one interrupt for multiple packets, using a timer and delay or buffer count method.
  84. [84]
    NAPI - The Linux Kernel documentation
In basic operation the device notifies the host about new events via an interrupt. The host then schedules a NAPI instance to process the events. The device may ...
  85. [85]
4.3. Interrupts and IRQ Tuning | Red Hat Enterprise Linux 6
    An interrupt request (IRQ) is a request for service, sent at the hardware level. Interrupts can be sent by either a dedicated hardware line, or across a ...
  86. [86]
    [PDF] Using IOMMU for DMA Protection in UEFI Firmware - Intel
Interrupt remapping: for supporting isolation and routing of interrupts from devices and external interrupt controllers to appropriate VMs. • Interrupt ...
  87. [87]
    Improve network latency for Linux based EC2 instances
    Dynamic interrupt moderation is an enhanced form of interrupt moderation that dynamically adjusts the interrupt rate based on the current system load and ...
  88. [88]
    ahci — Serial ATA Advanced Host Controller Interface driver
    ahci.X.direct controls whether the driver should use direct command completion from interrupt thread(s), or queue them to CAM completion threads. Default ...
  89. [89]
    Linux generic IRQ handling — The Linux Kernel documentation
Describes interrupt handling in the Linux kernel for device I/O.
  90. [90]
    [PDF] IA-PC HPET (High Precision Event Timers) Specification 1.0a - Intel
    The IA-PC HPET Specification defines timer hardware that is intended to initially supplement and eventually replace the legacy 8254 Programmable Interval Timer ...
  91. [91]
    [PDF] The Context-Switch Overhead Inflicted by Hardware Interrupts (and ...
The interrupt invokes a kernel routine, called the tick handler, that is responsible for various important OS activities including (1) delivering timing services ...
  92. [92]
    1. Introduction — The Linux Kernel documentation
Describes how keyboard and mouse events are handled via interrupts and dispatched to user space.
  93. [93]
    Scaling in the Linux Networking Stack — The Linux Kernel documentation
Describes how network packet arrival triggers interrupts and protocol stack processing in Linux.
  94. [94]
    Interrupts — The Linux Kernel documentation
    An interrupt is an event that alters the normal execution flow of a program and can be generated by hardware devices or even by the CPU itself.
  95. [95]
    FreeRTOS mutexes
FreeRTOS implements a basic priority inheritance mechanism which was designed to optimize both space and execution cycles. A full priority inheritance mechanism ...
  96. [96]
    [PDF] Mastering the FreeRTOS™ Real Time Kernel
    If two interrupts of differing priority occur at the same time, then the processor will execute ... the MP task, so the amount of time that priority inversion ...
  97. [97]
  98. [98]
    Getting started with PWR - stm32mcu - ST wiki
    The ultra-low-power STM32L476xx supports six low-power modes to achieve the best compromise between low-power consumption, short startup time, available ...
  99. [99]
    [PDF] Real-Time Challenges and Opportunities in SoCs - Intel
    Constraining the data access types will reduce the interrupt latency. ... In more typical usage, the ARM-based solutions have >1 µs of interrupt jitter.
  100. [100]
    [PDF] Efficient Interrupts on Cortex-M Microcontrollers - Keil
    This paper explores how the architectural features of ARM® Cortex®-M ... processing of low latency interrupts without having to drop into the foreground.
  101. [101]
    [PDF] Intel® 64 and IA-32 Architectures Software Developer's Manual
    This chapter describes the basics of virtual machine architecture and an overview of the virtual-machine extensions. (VMX) that support virtualization of ...
  102. [102]
    [PDF] Open-Source Register Reference For AMD Family 17h Processors ...
Jul 3, 2018 · The local APIC contains logic to receive interrupts from a variety of sources and to send interrupts to other local APICs ...
  103. [103]
    vIOMMU: efficient IOMMU emulation - ACM Digital Library
    Abstract. Direct device assignment, where a guest virtual machine directly interacts with an I/O device without host intervention, is appealing, because it ...
  104. [104]
    [PDF] (Mostly) Exitless VM Protection from Untrusted Hypervisor through ...
    Jan 17, 2020 · If the VE feature is enabled, an EPT violation can be transformed into an exception (Vector 0x14) without any VM exit. Before using the VE, the ...
  105. [105]
    Microarchitectural Data Sampling - Intel
    Mar 11, 2021 · The mitigation for microarchitectural data sampling issues includes clearing store buffers, fill buffers, and load ports before transitioning to ...
  106. [106]
    A Comprehensive Implementation and Evaluation of Direct Interrupt ...
    This paper describes the design, implementation, and evaluation of a KVM-based direct interrupt delivery system called DID.
  107. [107]
    Secure interrupts - Arm Developer
    This document provides an overview of the ARM TrustZone technology and how this can provide a practical level of security through careful System-on-a-Chip ...
  108. [108]
    A Survey of RISC-V Secure Enclaves and Trusted Execution ... - MDPI
The architecture relies on the untrusted OS for managing enclave memory and providing essential services like interrupt handling and I/O operations. This ...
  109. [109]
    GPU full virtualization of VFIO shared vGPU in heterogeneous SoC ...
    Expose vGPU interfaces to guest through VFIO; Support for custom interrupt mechanisms (e.g., a single physical interrupt is shared by all vGPUs) and its ...