Interrupt request
An interrupt request (IRQ) is a hardware signal generated by peripheral devices or internal components to notify the central processing unit (CPU) that immediate attention is required, causing it to suspend the current program execution and handle the event asynchronously.[1] This mechanism enables efficient multitasking in computer systems by allowing devices such as keyboards, disks, or network interfaces to communicate with the processor without constant polling.[2] IRQs are typically routed through dedicated interrupt lines connected to an interrupt controller, such as the Programmable Interrupt Controller (PIC) in x86 architectures, which prioritizes and dispatches the signals to the CPU.[3] Traditional PC systems provide a limited number of IRQ lines (IRQ 0 through 15), each assigned to specific devices to avoid conflicts, though modern systems use advanced controllers such as the Advanced Programmable Interrupt Controller (APIC) to support more lines and dynamic allocation.[4]

Interrupts are classified into maskable interrupts, which the CPU can temporarily ignore by clearing the interrupt enable flag (e.g., the IF bit in the x86 FLAGS register), and non-maskable interrupts (NMIs), which cannot be disabled and are reserved for critical events such as hardware failures or memory parity errors.[5] Handling an IRQ involves the CPU saving its current state, jumping to an interrupt service routine (ISR) via an interrupt vector table, processing the request, and then resuming normal execution; this sequence is essential for real-time responsiveness in operating systems such as Linux or Windows.[6] The process has evolved from early mainframe designs to support the complexity of contemporary multiprocessor environments, where interrupts facilitate input/output operations and system events without stalling the primary computation.[7]
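The save-dispatch-resume sequence described above can be sketched in C. The following is a minimal, hypothetical skeleton for a bare-metal x86 handler, not code from any particular operating system; it assumes GCC's x86 interrupt attribute (compiled freestanding with -mgeneral-regs-only), and handle_device_event is a placeholder for device-specific work.

    /* Minimal ISR skeleton illustrating the handling sequence: the CPU
     * pushes the interrupted program's state, control transfers to this
     * routine via the interrupt vector table, the request is processed,
     * and IRET resumes normal execution. Hypothetical sketch; requires
     * GCC with -mgeneral-regs-only in a freestanding build. */
    #include <stdint.h>

    struct interrupt_frame {
        uint64_t ip, cs, flags, sp, ss;   /* state pushed by the CPU */
    };

    void handle_device_event(void);        /* placeholder for real work */

    __attribute__((interrupt))
    void isr_example(struct interrupt_frame *frame)
    {
        (void)frame;              /* interrupted context, if needed */
        handle_device_event();    /* process the request */
    }   /* the compiler emits IRET here, resuming the suspended program */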
Core Concepts

Definition and Purpose
An interrupt request (IRQ) is a hardware signal sent from a peripheral device or internal component to the central processing unit (CPU), prompting it to temporarily suspend its current execution and service a specific event requiring immediate attention, such as the completion of an input/output operation or the expiration of a timer.[8] This mechanism ensures that the CPU can respond efficiently to asynchronous hardware events without dedicating continuous resources to monitoring them.[9] The concept of interrupts dates back to early computers such as the UNIVAC 1103 (1953), and was notably implemented in mainframe architectures like the IBM System/360, announced in 1964 and first delivered in 1965, where it formed part of a structured interruption system to manage events from the CPU, I/O units, and external sources.[10][11] This design evolved to support multitasking environments by handling asynchronous occurrences, marking a shift from the purely sequential processing of earlier systems toward more responsive computing paradigms.[12]

The primary purpose of an IRQ is to enable efficient resource sharing among multiple devices and processes, eliminating the need for constant CPU polling that would otherwise waste cycles on idle checks.[13] For instance, when a user presses a key on a keyboard, the device generates an IRQ to notify the CPU of the input, allowing immediate processing without the processor repeatedly querying the device status.[14] Key benefits include enhanced system throughput by freeing the CPU for other tasks, improved power efficiency through reduced idle operations, and greater scalability in configurations with numerous peripherals, as the CPU intervenes only when events occur.[15]
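The keyboard example can be made concrete with a short sketch. The following C fragment contrasts a polling loop with an interrupt-driven handler; KBD_STATUS, KBD_DATA, process_key, and do_other_work are hypothetical names standing in for real device registers and routines.

    /* Sketch contrasting polling with interrupt-driven input.
     * All device names and helpers here are hypothetical. */
    #include <stdint.h>

    extern volatile uint8_t KBD_STATUS;   /* bit 0 set when a byte is ready */
    extern volatile uint8_t KBD_DATA;

    void process_key(uint8_t scancode);
    void do_other_work(void);

    /* Polling: the CPU burns cycles re-checking the status register. */
    void poll_keyboard(void)
    {
        for (;;) {
            while (!(KBD_STATUS & 0x01))
                ;                          /* idle checks waste CPU time */
            process_key(KBD_DATA);
        }
    }

    /* Interrupt-driven: this handler runs only when the device
     * raises an IRQ to signal that a key has actually arrived. */
    void keyboard_isr(void)
    {
        process_key(KBD_DATA);
    }

    /* Meanwhile the main program is free to run other work. */
    void main_loop(void)
    {
        for (;;)
            do_other_work();   /* CPU is not tied up querying the device */
    }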
Types of Interrupts

Interrupts in computer systems are broadly classified into several categories based on their origin, priority, triggering mechanism, and dispatch method. These classifications help in understanding how interrupt requests (IRQs) fit into the overall interrupt-handling framework, enabling efficient response to asynchronous events without constant polling.[16]

Maskable and Non-Maskable Interrupts

Maskable interrupts, often associated with standard IRQs, can be temporarily disabled or ignored by the processor through specific control flags, such as the Interrupt Flag (IF) in the x86 architecture's EFLAGS register. This allows the CPU to defer handling during critical code sections, preventing unwanted disruptions. In contrast, non-maskable interrupts (NMIs) cannot be disabled and are reserved for the highest-priority events, such as critical hardware failures like memory parity errors or power supply issues, ensuring immediate attention even when maskable interrupts are blocked. NMIs typically bypass the standard interrupt controller and directly invoke a dedicated handler, underscoring their role in maintaining system integrity.[16]
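As a concrete illustration, the sketch below shows the two masking levels available for maskable interrupts on a legacy x86 PC: clearing the CPU's IF bit with CLI (which defers all maskable interrupts), and setting a bit in the 8259 PIC's interrupt mask register (IMR) to block a single IRQ line. This is a minimal bare-metal sketch using GCC inline assembly, not code from any particular operating system; NMIs are unaffected by either mechanism.

    /* Masking maskable interrupts on x86 (bare-metal sketch).
     * CLI/STI clear and set the IF bit in EFLAGS; the legacy 8259 PIC
     * exposes a per-line IMR at port 0x21 (master, IRQ 0-7) and port
     * 0xA1 (slave, IRQ 8-15). */
    #include <stdint.h>

    static inline void cli(void) { __asm__ volatile ("cli"); }  /* IF = 0 */
    static inline void sti(void) { __asm__ volatile ("sti"); }  /* IF = 1 */

    static inline uint8_t inb(uint16_t port)
    {
        uint8_t v;
        __asm__ volatile ("inb %1, %0" : "=a"(v) : "Nd"(port));
        return v;
    }

    static inline void outb(uint16_t port, uint8_t v)
    {
        __asm__ volatile ("outb %0, %1" : : "a"(v), "Nd"(port));
    }

    /* Block a single IRQ line by setting its bit in the PIC's IMR. */
    void mask_irq(uint8_t irq)
    {
        uint16_t port = (irq < 8) ? 0x21 : 0xA1;
        outb(port, inb(port) | (uint8_t)(1u << (irq & 7)));
    }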
Hardware and Software Interrupts

Hardware interrupts originate from external peripherals and devices, signaling the CPU via dedicated lines or controllers when events such as data arrival occur; for example, a disk controller may generate an IRQ upon completing a read operation to notify the system that data is available. These interrupts are asynchronous to the current program execution and form the core of IRQ functionality in facilitating device communication. Software interrupts, on the other hand, are synchronous events triggered internally by the processor, either through explicit instructions such as the INT n opcode for system calls (e.g., syscalls in operating systems) or automatically via exceptions such as division by zero or page faults arising from program errors. Unlike hardware IRQs, software interrupts do not rely on external signals but serve to transfer control to the operating system kernel or error handlers.[2][16]
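The software-interrupt path can be illustrated with the classic INT-based system-call convention. The sketch below assumes a 32-bit x86 Linux target (compiled with gcc -m32), where int $0x80 traps into the kernel with the system-call number in EAX (1 = exit on i386) and the first argument in EBX; it is illustrative only.

    /* Software interrupt used as a system-call gate on 32-bit x86 Linux:
     * the INT instruction itself synchronously transfers control to the
     * kernel's handler, with no external hardware signal involved. */
    static inline void sys_exit(int status)
    {
        __asm__ volatile ("int $0x80"
                          : /* no outputs: this call does not return */
                          : "a"(1),      /* EAX = __NR_exit on i386 */
                            "b"(status)  /* EBX = exit status */
                          : "memory");
    }

    int main(void)
    {
        sys_exit(42);   /* process terminates with exit status 42 */
    }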
Vectored and Non-Vectored Interrupts

Vectored interrupts provide the processor with a direct interrupt vector (a unique identifier or address) that specifies the exact handler routine, enabling rapid dispatch without additional identification steps; this is common in modern architectures, where the interrupting device supplies the vector via hardware lines or tables. In x86 systems, for instance, the interrupt vector table maps these vectors to handler addresses for efficient resolution. Non-vectored interrupts, by comparison, lack this direct provision, requiring the processor to poll or scan multiple sources sequentially to identify the interrupting device after receiving a general signal, which introduces latency but offers simplicity in basic designs. Vectored mechanisms are preferred in performance-critical environments due to their speed of handler invocation.[17][2][16]
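A minimal sketch of the dispatch difference, with hypothetical table and device-scan helpers (vector_table, device_pending, device_service), might look like this in C:

    /* Vectored vs. non-vectored dispatch (hypothetical sketch). */
    typedef void (*isr_t)(void);

    #define NUM_VECTORS 256
    #define NUM_DEVICES 8

    isr_t vector_table[NUM_VECTORS];   /* analogous to the x86 vector table */

    int  device_pending(int dev);      /* hypothetical status check */
    void device_service(int dev);

    /* Vectored: the device supplies its vector; dispatch is one lookup. */
    void dispatch_vectored(unsigned vector)
    {
        if (vector < NUM_VECTORS && vector_table[vector])
            vector_table[vector]();
    }

    /* Non-vectored: only a general signal arrived; scan every source
     * to find the interrupting device, which adds latency. */
    void dispatch_nonvectored(void)
    {
        for (int dev = 0; dev < NUM_DEVICES; dev++)
            if (device_pending(dev))
                device_service(dev);
    }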
Edge-Triggered and Level-Triggered Interrupts

Edge-triggered interrupts activate upon detecting a specific transition on the signal line, such as a rising or falling edge, making them suitable for signaling single, discrete events like a keypress or a one-time data pulse; once triggered, the interrupt is typically cleared automatically or by the handler, preventing repeated invocations unless a new edge occurs. Level-triggered interrupts, conversely, respond to a sustained signal level (e.g., high or low) on the line, remaining active until explicitly acknowledged by the handler, which is ideal for persistent conditions such as a device waiting with data ready or an ongoing error state. In practice, edge triggering reduces the risk of missing short pulses in noisy environments, while level triggering ensures the interrupt persists long enough for reliable detection in multi-device systems.[18]

Specific examples in the x86 architecture illustrate these types: IRQ 0, a maskable hardware interrupt that is typically edge-triggered, is assigned to the system timer for periodic scheduling and timekeeping tasks.[19] Similarly, IRQ 13 serves as a maskable hardware interrupt for floating-point unit (FPU) errors, often edge-triggered to report coprocessor exceptions promptly. These assignments show how IRQs embody the various interrupt characteristics in real-world implementations.[16]
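The acknowledgment difference between the two trigger modes can be sketched as follows, assuming a legacy 8259 PIC whose end-of-interrupt (EOI) command is a write of 0x20 to port 0x20; device_clear_condition and handle_data are hypothetical placeholders, and outb is as in the masking sketch above.

    /* Acknowledgment in the two trigger modes (hypothetical sketch). */
    void outb(unsigned short port, unsigned char v);  /* see masking sketch */
    void device_clear_condition(void);                /* hypothetical */
    void handle_data(void);                           /* hypothetical */

    void level_triggered_isr(void)
    {
        handle_data();
        device_clear_condition();  /* the line stays asserted until the
                                      device is serviced; clear it first,
                                      or the IRQ re-fires right after EOI */
        outb(0x20, 0x20);          /* end-of-interrupt to the master PIC */
    }

    void edge_triggered_isr(void)
    {
        handle_data();             /* the edge was a one-shot event; there
                                      is no sustained level to deassert */
        outb(0x20, 0x20);
    }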