
Interrupt descriptor table

The Interrupt Descriptor Table (IDT) was introduced with the Intel 80286 microprocessor in 1982 to support interrupt handling. It is a system data structure in the x86 architecture used by the processor to handle interrupts and exceptions, associating each of the 256 possible interrupt vectors (numbered 0 through 255) with a specific handler routine or task through descriptors that specify the segment selector, offset, and attributes for control transfer. In protected mode and IA-32e mode, the IDT resides in memory and is accessed via the IDTR register, which stores the table's base address and limit; it supports up to 256 entries of 8 bytes each (totaling 2 KB) in protected mode, or 16 bytes each in 64-bit mode to accommodate 64-bit offsets.

The IDT's primary purpose is to provide an efficient mechanism for the operating system to manage hardware interrupts (such as those from devices), software interrupts (invoked via instructions like INT n), processor-detected exceptions (like page faults), and traps, enabling the processor to transfer control to the appropriate memory-resident procedure or task upon event occurrence. Vectors 0 through 31 are reserved for processor exceptions and the non-maskable interrupt (NMI, at vector 2), while vectors 32 through 255 are available for maskable hardware interrupts, software interrupts, or use by the Advanced Programmable Interrupt Controller (APIC). The table must be initialized by the operating system using the LIDT instruction (requiring current privilege level 0) before interrupts are enabled, with empty or unused entries required to have the present (P) flag cleared to 0 so that references to invalid vectors raise a fault rather than transferring control unpredictably.

Each IDT entry is a gate descriptor, of which there are three types: interrupt gates, which transfer control to a handler and automatically clear the interrupt flag (IF) to disable further maskable interrupts; trap gates, which perform the transfer without affecting IF and are used for software interrupts or certain exceptions; and task gates, which initiate a task switch via a Task State Segment (TSS) but are not supported in IA-32e mode. Descriptors include access controls via the descriptor privilege level (DPL) for privilege checks, support for inter-privilege-level calls with stack switching, and alignment on an 8-byte boundary, with the limit set to 8N - 1 for N entries. In real mode, a simpler interrupt vector table (IVT) at physical address 0 is used for interrupt handling, while the IDT extends these capabilities in protected mode for segmented memory and larger address spaces.

Introduction

Definition and Purpose

The Interrupt Descriptor Table (IDT) is a system data structure in the x86 architecture, consisting of an array of up to 256 entries that map interrupt vectors numbered from 0 to 255 to corresponding gate descriptors. Each entry in the IDT defines the segment selector, offset, and attributes for an interrupt or exception handler procedure or task, allowing the processor to reference these descriptors during event processing. The table's location and size are specified by the Interrupt Descriptor Table Register (IDTR), which holds the base linear address and limit of the IDT.

The primary purpose of the IDT is to provide a mechanism for the processor to locate and invoke appropriate routines in response to interrupts and exceptions, ensuring orderly handling of system events. It supports hardware interrupts from external devices, software interrupts initiated by the INT n instruction, and processor-generated exceptions including faults, traps, and aborts. By associating each vector with a descriptor, such as an interrupt gate, trap gate, or task gate, the IDT enables controlled transfers of execution while enforcing protection rules like privilege levels. In the x86 architecture, the IDT has been central to protected-mode interrupt handling since its introduction with the Intel 80286 processor, where it superseded the simpler real-mode interrupt vector table (IVT) by incorporating segmented addressing and privilege checks.

Upon occurrence of an interrupt or exception, the processor uses the event's vector number as an index into the IDT (multiplied by the 8-byte entry size in protected mode to locate the entry), retrieves the associated descriptor, and transfers control to the specified handler or task accordingly. This process maintains system integrity by validating the descriptor's presence and attributes before execution.

Historical Context

The Interrupt Vector Table (IVT) originated with the Intel 8086 microprocessor in 1978, serving as the foundational mechanism for interrupt handling in real mode. It occupied a fixed region from 0x0000 to 0x03FF, comprising 256 entries of four bytes each (a 16-bit offset followed by a 16-bit segment), enabling direct jumps to interrupt service routines within the 1 MB address space.

The Interrupt Descriptor Table (IDT) emerged with the Intel 80286 in 1982, marking a pivotal shift to protected mode and addressing the IVT's limitations in supporting advanced operating system features. Unlike the IVT's static location, the IDT's base address and limit were managed dynamically via the IDTR register, with each of its 256 entries expanded to eight bytes to incorporate gate types (task, interrupt, and trap) and segment selectors, facilitating protected-mode addressing and privilege-level checks. This evolution was primarily driven by the demand for memory protection to isolate processes and prevent unauthorized access, as well as hardware-assisted multitasking to enable efficient context switching in multi-user environments, overcoming real mode's 1 MB address ceiling and lack of security mechanisms.

Subsequent enhancements in the Intel 80386 (1985) solidified the IDT's role by fully supporting 256 vectors with refined trap and interrupt gates, accommodating 32-bit offsets for broader addressability and improved protection in segmented environments. The transition to 64-bit operation via the AMD64 architecture in 2003 extended IDT entries to 16 bytes in long mode, incorporating 64-bit offsets while restricting gate types to 64-bit interrupt and trap variants, thus enabling robust interrupt redirection and stack management for larger virtual address spaces. These developments were essential for modern operating systems such as Windows and Linux, which rely on the IDT for secure interrupt isolation and multitasking, while preserving real-mode IVT compatibility to support legacy BIOS and DOS applications.

IDT Structure

Overall Organization

The Interrupt Descriptor Table (IDT) is organized as a linear array of up to 256 entries in memory, where each entry corresponds to an interrupt or exception vector and points to the entry point of the associated handler routine. This structure allows the processor to quickly locate and invoke the appropriate service routine upon receiving an interrupt vector number. The table's fixed maximum size ensures efficient indexing without requiring dynamic allocation during interrupt processing. In protected mode, each IDT entry occupies 8 bytes, resulting in a maximum table size of 2 KB for 256 entries. In long mode (IA-32e), entries are expanded to 16 bytes to support 64-bit addressing, yielding a maximum size of 4 KB. The operating system determines the actual number of populated entries, but the table is designed to accommodate the full range even if only a subset is used.

The location and size of the IDT are managed by the Interrupt Descriptor Table Register (IDTR), a dedicated register that holds the linear base address of the table and its limit in bytes. In protected mode, the IDTR is a 48-bit structure (6 bytes total), comprising a 32-bit base address and a 16-bit limit field. In 64-bit mode, it extends to 80 bits (10 bytes), with the base address widened to 64 bits while retaining the 16-bit limit. This enables the processor to access the IDT from any position in the linear address space. The processor indexes into the table using an 8-bit vector number ranging from 0 to 255, which serves as the entry index and is scaled by the entry size (8 bytes in protected mode or 16 bytes in long mode) to compute the offset from the base address. This mechanism supports sparse population, where unused entries can be left marked not-present so that they generate a fault if accessed, allowing flexible allocation without requiring contiguous filling of the table.

The operating system allocates the IDT in kernel linear address space, typically placing it in a protected memory region to prevent user-mode access. The IDTR limit must be set to at least 255 bytes in protected mode (to cover the first 32 exception vectors, each 8 bytes) or 511 bytes in long mode (for 16-byte entries), though full population requires limits of 2047 bytes or 4095 bytes, respectively, to encompass all 256 entries.
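The pseudo-descriptor layout and the vector-scaling rule can be summarized in a short C sketch; the struct and helper names (idtr64, idt_entry_addr) are assumptions for illustration, not a standard API:

#include <stdint.h>

/* Pseudo-descriptor consumed by LIDT: a 16-bit limit followed by the base.
   The packed attribute prevents the compiler from inserting padding. */
struct idtr64 {
    uint16_t limit;   /* size of the table in bytes, minus one */
    uint64_t base;    /* linear address of the first descriptor */
} __attribute__((packed));

/* Compute the linear address of the descriptor for a given vector.
   Entry size is 8 bytes in protected mode, 16 bytes in long mode. */
static uint64_t idt_entry_addr(const struct idtr64 *idtr,
                               uint8_t vector, unsigned entry_size)
{
    return idtr->base + (uint64_t)vector * entry_size;
}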

Descriptor Format

Each entry in the Interrupt Descriptor Table (IDT) is a gate descriptor that specifies the location and type of handler for an interrupt or exception. In protected mode, these descriptors are 8 bytes long and consist of several key fields that define the handler's address, target segment, and access attributes. The protected-mode descriptor format includes the following bit fields: bits 0-15 hold the low 16 bits of the handler offset; bits 16-31 contain the segment selector for the code segment containing the handler; bits 32-39 are reserved (must be zero for interrupt and trap gates); bits 40-43 specify the gate type (with values 14 for 32-bit interrupt gates and 15 for 32-bit trap gates); bit 44 is zero (marking a system descriptor); bits 45-46 indicate the Descriptor Privilege Level (DPL); bit 47 is the Present (P) flag; and bits 48-63 hold the high 16 bits of the handler offset. For task gates (type 5), the segment selector points to a Task State Segment (TSS) instead of a code segment, and the offset fields are reserved.

In long mode (IA-32e), descriptors are extended to 16 bytes to support 64-bit addressing and additional features. The layout builds on the protected-mode format but includes: bits 0-15 for offset low (15:0); bits 16-31 for the segment selector; bits 32-34 for the Interrupt Stack Table (IST) index (0-7, with 0 indicating no IST); bits 35-39 reserved (zero); bits 40-43 for type (14 for interrupt gate, 15 for trap gate); bit 44 reserved; bits 45-46 for DPL; bit 47 for P; bits 48-63 for offset middle (31:16); bits 64-95 for offset high (63:32); and bits 96-127 reserved (zero). Task gates are not supported in long mode.

Gate types determine handler behavior: an interrupt gate (type 14) clears the Interrupt Flag (IF) in EFLAGS to disable maskable hardware interrupts during execution, suitable for hardware interrupts; a trap gate (type 15) preserves the IF flag, allowing nested interrupts, and is used for software exceptions or debugging; a task gate switches to a new task via the referenced TSS but is rarely used after the 80386 due to the deprecation of hardware task management in modern systems. Attribute flags control validity and access: the P flag must be 1 for the descriptor to be valid, or a segment-not-present (#NP) exception occurs; the DPL specifies the least-privileged ring (0 highest to 3 lowest) permitted to invoke the gate with a software interrupt, enforcing ring checks to prevent less-privileged code from triggering higher-privilege handlers. Descriptors with non-zero reserved bits or invalid types trigger a general-protection (#GP) fault during interrupt dispatch. The full handler address is assembled as (offset_high << 32 | offset_middle << 16 | offset_low) in long mode or (offset_high << 16 | offset_low) in protected mode, forming the entry point into the target code segment. The fields are summarized below:
Field | Protected Mode Bits | Long Mode Bits | Description
Offset Low | 0-15 | 0-15 | Lower 16 bits of the 32/64-bit handler address
Segment Selector | 16-31 | 16-31 | Index into the GDT/LDT for the code segment (or TSS for task gates)
Reserved/IST | 32-39 (reserved = 0) | 32-34 (IST index 0-7), 35-39 reserved | Stack table index in long mode; reserved otherwise
Type | 40-43 (14 = 0xE interrupt, 15 = 0xF trap, 5 = 0x5 task) | 40-43 (14 = 0xE interrupt, 15 = 0xF trap) | Defines gate behavior (interrupt, trap, task)
DPL | 45-46 (0-3) | 45-46 (0-3) | Privilege level required for software invocation
P | 47 | 47 | 1 if descriptor present
Offset High/Middle | 48-63 (31:16) | 48-63 (31:16), 64-95 (63:32) | Upper bits of handler address
Reserved | N/A (all 64 bits assigned) | 96-127 (= 0) | Must be zero for compatibility
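As an illustration of the protected-mode encoding, the following C sketch packs the fields above into a single 64-bit value; make_gate32 is a hypothetical helper, not a library function:

#include <stdint.h>

/* Pack a 32-bit interrupt/trap gate (type 0xE or 0xF) into its 8-byte
   encoding: offset split across bits 0-15 and 48-63, selector in 16-31,
   and the attribute fields (type, DPL, P) in bits 40-47. */
static uint64_t make_gate32(uint32_t offset, uint16_t selector,
                            uint8_t type, uint8_t dpl)
{
    uint64_t desc = 0;
    desc |= (uint64_t)(offset & 0xFFFF);            /* offset 15:0        */
    desc |= (uint64_t)selector << 16;               /* segment selector   */
    desc |= (uint64_t)(type & 0xF) << 40;           /* gate type          */
    desc |= (uint64_t)(dpl & 0x3) << 45;            /* DPL                */
    desc |= 1ULL << 47;                             /* present (P) bit    */
    desc |= (uint64_t)(offset & 0xFFFF0000) << 32;  /* offset 31:16 into bits 48-63 */
    return desc;
}

For example, make_gate32((uint32_t)handler, 0x08, 0xE, 0) would build a kernel-only 32-bit interrupt gate, assuming the kernel code segment sits at selector 0x08.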

Operating Modes

Real Mode

In real mode, the x86 architecture employs the interrupt vector table (IVT) as the functional equivalent of the Interrupt Descriptor Table (IDT), providing a simple mechanism for interrupt handling without the segmentation or protection features of protected mode. The IVT is fixed at physical memory addresses 00000h to 003FFh, occupying the first 1 KB of memory and consisting of 256 entries, each 4 bytes in length. Each entry comprises a 2-byte offset followed by a 2-byte segment, forming a far pointer to the interrupt service routine in the 1 MB real-address space; unlike protected-mode descriptors, these entries do not include gate types or attribute fields such as type, privilege level, or a present bit.

The LIDT instruction, which loads the base address and limit into the IDT register (IDTR), behaves differently in real mode compared to protected mode. At reset, the IDTR holds base address 0 and limit 03FFh, matching the 1 KB table size; loading a smaller limit can cause a fault or processor shutdown during interrupt processing if a vector beyond the limit is taken. When an interrupt occurs, the processor uses the vector number (0 to 255) to index the IVT, multiplying the vector by 4 to locate the entry, then fetches the segment:offset pair and transfers control directly to that address after pushing the current flags, code segment (CS), and instruction pointer (IP) onto the stack in 16-bit format. This direct jump lacks any validation, allowing interrupts to execute code anywhere within the 1 MB address space.

Real mode imposes significant limitations on interrupt handling due to its simplified design. Without privilege levels or ring protections, any interrupt vector can access and potentially corrupt critical system areas, such as kernel code, if not properly handled, as there are no mechanisms to enforce access controls or stack switching. The addressing model is constrained to 20-bit physical addresses (up to 1 MB), with 16-bit offsets limiting handler reach without additional segmentation tricks, and no support for 32-bit code or data in standard configurations. These constraints make real mode suitable only for legacy or initialization environments.

For compatibility with early x86 systems, the IVT is integral to bootloaders, which initialize vectors after the power-on self-test (POST) to set up basic handlers before transitioning modes, and to DOS programs, which hook IVT entries to extend functionality without kernel privileges. BIOS services, provided by firmware in the ROM region (typically F0000h to FFFFFh), are invoked via software interrupts using the IVT; for example, INT 10h accesses video services like mode setting or character output by vectoring to the BIOS handler stored at IVT offset 40h (10h * 4). This structure ensures compatibility for 16-bit code in environments like DOS but requires careful vector management to avoid conflicts.
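The vector-times-four lookup can be sketched in C; this is a freestanding-environment illustration (the pointer to address 0 is only meaningful with flat access to the first megabyte, as in real mode), and vector_to_handler is a hypothetical name:

#include <stdint.h>

/* Real-mode IVT entry: 16-bit offset followed by 16-bit segment,
   stored at physical address vector * 4. */
struct ivt_entry {
    uint16_t offset;
    uint16_t segment;
} __attribute__((packed));

/* Resolve a vector to the 20-bit physical address of its handler,
   assuming the IVT at physical address 0 is directly addressable. */
static uint32_t vector_to_handler(uint8_t vector)
{
    const struct ivt_entry *ivt = (const struct ivt_entry *)(uintptr_t)0;
    const struct ivt_entry *e = &ivt[vector];       /* entry at vector * 4 */
    return ((uint32_t)e->segment << 4) + e->offset; /* segment:offset to physical */
}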

Protected Mode

In protected mode, the interrupt descriptor table (IDT) consists of 256 entries, each 8 bytes in length, that define the location and access rights for interrupt and exception handlers. Each entry includes a 16-bit segment selector that references a descriptor in the global descriptor table (GDT) or local descriptor table (LDT), along with a 32-bit offset that specifies the location of the handler within that segment, forming the handler address as CS:EIP. The entries can be task gates, interrupt gates, or trap gates; interrupt and trap gates directly invoke the handler routine, while task gates trigger a task switch via a task state segment (TSS). This structure enables the processor to support segmented memory protection, distinguishing protected mode from real mode's unprotected segment:offset scheme.

Privilege enforcement is integral to IDT operations in protected mode, where the descriptor privilege level (DPL) of a gate is compared against the current privilege level (CPL) of the interrupted task. For software interrupts (such as INT n or INT 3), a general protection fault (#GP) is generated if the CPL exceeds the DPL, preventing less privileged code from invoking higher-privilege handlers; however, this check is bypassed for hardware interrupts and processor exceptions to ensure reliable error handling. Interrupt gates additionally clear the interrupt flag (IF) in EFLAGS upon entry to mask further interrupts, whereas trap gates preserve IF to allow nested interrupts or traps. The segment selector undergoes standard checks for validity, including conforming or non-conforming code segment rules, to maintain isolation between privilege rings.

When an interrupt or exception occurs, the processor saves the current state on the stack, pushing EFLAGS, CS, and EIP; if a privilege-level change is required, it first switches to the stack defined for the target privilege level in the TSS and pushes the old SS and ESP there. The handler address is then loaded by combining the segment selector and offset from the IDT entry, with the processor switching stacks when entering a more privileged level to isolate execution contexts. Certain exceptions (vectors 0 through 31) also push an error code immediately after EIP for diagnostic purposes, such as segment faults. Execution returns via the IRET instruction, which restores the saved state, including EFLAGS, re-enabling maskable interrupts only if IF was set in the saved flags image. These mechanisms provide multitasking isolation and protection not available in real mode, with vectors 0-31 reserved exclusively for processor-defined exceptions to enforce integrity.
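The state saved by the processor can be pictured as a struct overlaying the handler's stack; this is a sketch for the 32-bit case, with field names chosen for illustration (the error code and the SS/ESP pair appear only in the situations noted in the comments):

#include <stdint.h>

/* Frame as seen from the handler's stack pointer upward (stack grows
   down, so lower addresses come first in the struct). */
struct intr_frame32 {
    uint32_t error_code; /* pushed only by some exceptions (e.g., #GP, #PF) */
    uint32_t eip;        /* return instruction pointer */
    uint32_t cs;         /* low 16 bits hold the selector, padded to 32 */
    uint32_t eflags;     /* flags image restored by IRET */
    uint32_t esp;        /* present only if the interrupt crossed privilege levels */
    uint32_t ss;         /* present only if the interrupt crossed privilege levels */
};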

Long Mode

In long mode, the Interrupt Descriptor Table (IDT) supports 64-bit operation within the x86-64 architecture, utilizing 16-byte descriptors to handle interrupts and exceptions across the full 64-bit linear address space. Each descriptor includes a 64-bit offset to the handler code, with bits 15:0 in bytes 0-1, bits 31:16 in bytes 6-7, and bits 63:32 in bytes 8-11, along with a 16-bit code-segment selector, a 3-bit Interrupt Stack Table (IST) index, and attribute fields specifying the gate type (interrupt or trap), descriptor privilege level (DPL, 2 bits), and present bit. This format enables a flat memory model without code-segment base addresses, differing from segmented addressing in 32-bit protected mode, and eliminates support for task gates to streamline hardware behavior.

The IST field provides a mechanism for automatic stack switching during interrupt delivery, referencing one of up to seven 64-bit stack pointers stored in the Task State Segment (TSS); an IST index of zero selects the legacy behavior (switching stacks only on a privilege-level change), while non-zero values unconditionally load a dedicated stack to prevent overflows from nested interrupts or exceptions, such as double faults or machine checks. Interrupt gates clear the IF flag in RFLAGS upon entry to disable further maskable interrupts during handler execution, ensuring atomicity, while trap gates leave IF unchanged to allow nesting. The 256 vectors (0-255) align with protected-mode assignments, and invalid stack conditions in 64-bit contexts generate #SS stack faults.

Handlers execute in 64-bit code segments (with CS.L=1 and CS.D=0), typically using a fixed code selector such as 0x08, and the processor pushes a stack frame of five 8-byte values comprising SS, RSP, RFLAGS, CS, and RIP unconditionally upon entry. Returning from handlers employs the IRETQ instruction, which pops the stack frame in reverse order (RIP, CS, RFLAGS, RSP, SS), restoring the 64-bit processor state and re-enabling interrupts if the saved RFLAGS.IF was set. If the return targets compatibility mode (a 32-bit code segment with CS.L=0), the handler can return to legacy 32-bit code, maintaining backward compatibility for mixed environments, though all IDT entries must use 64-bit offsets in canonical form to avoid general-protection faults. In operating systems such as 64-bit Linux and Windows, the IDT emphasizes IST usage for reliable nesting; for instance, Linux configures IST entries in the TSS for vectors like 8 (double fault) and 2 (NMI) to switch to per-CPU emergency stacks, while Windows employs similar mechanisms in its kernel for exception handling without task-state-segment switches.
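A minimal C sketch of filling one 16-byte long-mode gate, assuming the layout above; the struct and set_gate64 names are illustrative, not any particular kernel's API:

#include <stdint.h>

/* 16-byte long-mode gate descriptor (interrupt or trap gate). */
struct idt_gate64 {
    uint16_t offset_low;    /* handler address bits 15:0  */
    uint16_t selector;      /* 64-bit code segment, e.g. 0x08 */
    uint8_t  ist;           /* bits 0-2: IST index, rest zero */
    uint8_t  attr;          /* P (bit 7), DPL (bits 6:5), type (bits 3:0) */
    uint16_t offset_mid;    /* handler address bits 31:16 */
    uint32_t offset_high;   /* handler address bits 63:32 */
    uint32_t reserved;      /* must be zero */
} __attribute__((packed));

static void set_gate64(struct idt_gate64 *g, uint64_t handler,
                       uint16_t selector, uint8_t type,
                       uint8_t dpl, uint8_t ist)
{
    g->offset_low  = handler & 0xFFFF;
    g->selector    = selector;
    g->ist         = ist & 0x7;
    g->attr        = 0x80 | ((dpl & 0x3) << 5) | (type & 0xF); /* P=1 */
    g->offset_mid  = (handler >> 16) & 0xFFFF;
    g->offset_high = handler >> 32;
    g->reserved    = 0;
}

For instance, set_gate64(&idt[8], (uint64_t)double_fault_stub, 0x08, 0xE, 0, 1) would route the double-fault vector through IST stack 1, mirroring the per-CPU emergency-stack approach described above (idt and double_fault_stub are hypothetical names).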

Setup and Initialization

Loading the IDT

The loading of the Interrupt Descriptor Table (IDT) into the processor's Interrupt Descriptor Table Register (IDTR) is a critical step performed by the operating system during early initialization, typically after the Global Descriptor Table (GDT) has been established. This process establishes the location and size of the IDT in memory, enabling the CPU to reference it for interrupt and exception handling in protected mode. The IDTR, a special register, holds the linear base address of the IDT and its limit (the size minus one).

The LIDT (Load Interrupt Descriptor Table) instruction is used exclusively for this purpose, with the syntax LIDT [memory operand], where the operand is a pseudo-descriptor in the format specified by the architecture: the first word (16 bits) contains the limit, followed by the base address (32 bits in IA-32 mode or 64 bits in 64-bit mode). The instruction loads these values directly into the IDTR without segment translation, making it one of the few operations that handle linear addresses directly. For a complete IDT supporting 256 vectors, the limit is set to 0x7FF (2047 bytes) in IA-32 mode, where each descriptor is 8 bytes (256 × 8 = 2048 bytes total), or 0xFFF (4095 bytes) in 64-bit mode, where descriptors are 16 bytes each.

In real mode, the IDT format is not used; instead, the interrupt vector table (IVT) at 0x00000000 handles interrupts, and the IDTR retains its reset values (base 0, limit 0x3FF) unless explicitly changed. No explicit LIDT execution is required for basic real-mode operation beyond these defaults, but the instruction is invoked during the transition to protected mode.

A typical initialization sequence in the kernel involves allocating a contiguous block of kernel-accessible memory for the table, zero-initializing the entries so that stale data is never treated as a present descriptor, computing the IDTR values (e.g., base as the linear address of the allocated block and limit as 0xFFF for a full table in 64-bit mode), and then issuing the LIDT instruction to load them. This occurs very early in the boot process, often within the initial startup code, to ensure interrupts are properly vectored before interrupts are enabled or user code runs. For example, in assembly or inline code, an IDTR pointer structure is prepared and passed to LIDT as follows:
lidt [idtr_ptr]
where idtr_ptr points to the 6-byte (or 10-byte in 64-bit mode) pseudo-descriptor. A limit that is too small does not fault at load time, but any subsequent interrupt whose vector lies beyond the limit raises a general-protection exception (#GP); in 64-bit mode, LIDT itself raises #GP if the base address is non-canonical. The base address should ideally be aligned (page alignment, in multiples of 4096 bytes, is common) to improve cache and paging behavior, though the CPU does not enforce this alignment. Execution of LIDT requires privilege level 0 (CPL=0); otherwise, it raises #GP.
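In C with GCC-style inline assembly, the same load might be sketched as follows; the idt array, struct names, and load_idt are assumptions for illustration:

#include <stdint.h>

struct idt_gate64 { uint64_t raw[2]; };   /* 16-byte long-mode entries */
static struct idt_gate64 idt[256];        /* zero-initialized: P=0 everywhere */

struct idtr64 {
    uint16_t limit;
    uint64_t base;
} __attribute__((packed));

static void load_idt(void)
{
    struct idtr64 idtr = {
        .limit = sizeof(idt) - 1,          /* 4096 - 1 = 0xFFF */
        .base  = (uint64_t)idt,
    };
    __asm__ volatile("lidt %0" : : "m"(idtr)); /* requires CPL=0 */
}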

Configuring Descriptors

Configuring an IDT descriptor involves populating its fields to define the interrupt handler's location, privilege level, and behavior, typically after the IDT has been allocated and loaded into the IDTR. For a standard 32-bit interrupt gate, the offset field is set to the linear address of the handler routine, the selector field points to the kernel code segment (e.g., GDT index 1, yielding selector 0x08), the type/attribute byte is configured as 0x8E (indicating a 32-bit interrupt gate with the present bit set, DPL=0 for kernel-only access, and interrupt-flag clearing), and the present bit is asserted to 1. This assembly ensures the processor clears the interrupt flag (IF) upon entry, preventing nested interrupts unless explicitly re-enabled.

In operating systems like Linux, descriptors are configured using functions such as set_intr_gate, which initializes the entry with the handler address, code selector (__KERNEL_CS), type 14 (GATE_INTERRUPT, corresponding to 0x8E in the attribute byte with DPL=0 and present=1), and DPL=0 for ring 0 access. Code can also directly write the 8-byte descriptor structure into memory, packing the low offset into bits 15-0, the selector into bits 31-16, the attributes (including 0x8E) into bits 47-40, and the high offset into bits 63-48. In long mode (64-bit), descriptors expand to 16 bytes, with the offset becoming a full 64-bit value split across the structure, and an additional 3-bit IST (Interrupt Stack Table) index field (bits 32-34) specifying an entry in the TSS for stack switching on critical events like NMIs. Linux employs variants such as set_nmi_gate with IST-aware macros to set this index (e.g., IST_INDEX_NMI) for vectors requiring isolated stacks.

Validation of configured descriptors includes verifying the present bit is set to 1 (otherwise dispatch triggers a #NP exception), confirming the selector indexes a valid executable code segment in the GDT with appropriate access rights, and ensuring the DPL matches the intended privilege (e.g., 0 for kernel-only handlers). In long mode, the offset must form a canonical address to avoid #GP faults. Functionality can be tested by issuing CLI (clear IF) and STI (set IF) instructions around handler invocations to observe masking behavior, confirming the gate type's effect on the interrupt flag.

Dynamic updates to individual descriptors require kernel privilege (CPL=0) and involve first using the SIDT instruction to store the IDTR contents, retrieving the base address of the table. The target entry's memory location is then computed as base + (vector_number × entry_size), where entry_size is 8 bytes in 32-bit modes or 16 bytes in long mode, allowing direct overwrite of the descriptor fields with new values. Post-update, the processor automatically uses the modified entry on the next interrupt without requiring an IDTR reload, though atomicity must be ensured to avoid partial reads during handler dispatch.
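A hedged C sketch of such a dynamic update follows; update_idt_entry is a hypothetical helper, and a production kernel would additionally serialize against concurrent dispatch (for example, by masking interrupts or clearing the present bit first):

#include <stdint.h>

struct idtr64 {
    uint16_t limit;
    uint64_t base;
} __attribute__((packed));

/* Replace one 16-byte long-mode descriptor in place. The byte-wise
   store below is not atomic; real code must prevent the CPU from
   dispatching through a half-written entry. */
static void update_idt_entry(uint8_t vector, const void *new_desc)
{
    struct idtr64 idtr;
    __asm__ volatile("sidt %0" : "=m"(idtr));   /* read current IDTR */

    uint8_t *entry = (uint8_t *)idtr.base + (uint64_t)vector * 16;
    const uint8_t *src = new_desc;
    for (int i = 0; i < 16; i++)                /* overwrite descriptor bytes */
        entry[i] = src[i];
}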

Interrupt Vectors and Assignments

Exceptions

The Interrupt Descriptor Table (IDT) reserves the first 32 vectors (0 through 31, or 0h to 1Fh) for processor exceptions and the non-maskable interrupt (NMI, at vector 2) in x86 architectures. These include synchronous events triggered by the CPU itself during instruction execution or due to internal errors. These exceptions are categorized into three types: faults, traps, and aborts, each with distinct handling behaviors to maintain system integrity. Faults occur before the completion of the faulting instruction and are restartable, allowing the processor to resume execution from the original instruction pointer after the handler returns; examples include the divide error (#DE, vector 0) and page fault (#PF, vector 14). Traps, in contrast, are reported after the trapping instruction completes, enabling execution to continue at the subsequent instruction without restart; representative cases are the breakpoint exception (#BP, vector 3) and overflow (#OF, vector 4). Aborts represent severe, often unrecoverable conditions where program state may be lost and restart is impossible, such as the machine check (#MC, vector 18) or double fault (#DF, vector 8).

Certain exceptions push a 32-bit error code onto the stack immediately after the return address, providing diagnostic information to the handler; this applies to vectors like #TS (10, invalid TSS), #NP (11, segment not present), #SS (12, stack segment fault), #GP (13, general protection), #PF (14, page fault), and #AC (17, alignment check). For the page fault (#PF), the error code encodes bits indicating whether the fault was due to a present/not-present page (P), read/write access (W/R), user/supervisor mode (U/S), and other attributes like instruction fetch (I) or protection key (PK); additionally, the processor loads the faulting linear address into the CR2 register for handler use, though CR2 is not pushed onto the stack. Not all exceptions generate error codes; for instance, #DE (vector 0) and #BP (vector 3) do not, requiring handlers to infer causes from context or registers.

The double fault exception (#DF, vector 8) is unique as an abort triggered by a second exception (often a contributory fault or page fault) during the handling of a prior one; it pushes an error code of 0 and uses a dedicated double-fault stack (via the task state segment) to avoid recursion, but if unhandled, it escalates to a triple fault, invoking a processor reset or shutdown. Operating systems must install valid descriptors (typically interrupt or trap gates) for all vectors 0-31 to ensure reliable exception handling, as failure to do so risks unrecoverable triple faults and system instability. In practice, modern OS kernels like Linux implement comprehensive exception tables (arrays mapping faulting instruction addresses to fixup code) that allow handlers (e.g., for #PF) to search for and execute recovery routines, such as returning -EFAULT for invalid user-space accesses, thereby resolving faults safely without full restarts. This setup prioritizes precise exception classification and minimal disruption, with trap gates used for debug-oriented traps like #BP to preserve flags.
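The page-fault error code bits described above can be decoded with a short C sketch; the macro and function names are illustrative:

#include <stdint.h>
#include <stdio.h>

/* Page-fault (#PF) error code bits, per the x86 definition. */
#define PF_PRESENT (1u << 0)  /* 0 = not-present page, 1 = protection violation */
#define PF_WRITE   (1u << 1)  /* 0 = read access, 1 = write access */
#define PF_USER    (1u << 2)  /* 0 = supervisor mode, 1 = user mode */
#define PF_IFETCH  (1u << 4)  /* 1 = instruction fetch (with NX/SMEP) */

static void describe_page_fault(uint32_t err, uint64_t cr2)
{
    /* CR2 holds the faulting linear address; the handler reads it from
       the register, since it is not pushed on the stack. */
    printf("#PF at %#llx: %s, %s, %s%s\n",
           (unsigned long long)cr2,
           (err & PF_PRESENT) ? "protection violation" : "page not present",
           (err & PF_WRITE)   ? "write"                : "read",
           (err & PF_USER)    ? "user mode"            : "supervisor mode",
           (err & PF_IFETCH)  ? ", instruction fetch"  : "");
}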

Hardware Interrupts

Hardware interrupts, also known as external interrupts, are asynchronous signals generated by hardware devices to request service from the CPU, with the Interrupt Descriptor Table (IDT) providing the entry points for their handlers via specific vectors. In systems using the legacy 8259 Programmable Interrupt Controller (PIC), interrupt requests (IRQs) from devices are prioritized and remapped to IDT vectors in the range 0x20 to 0x2F to avoid overlap with exception vectors (0x00 to 0x1F). The master PIC handles IRQs 0-7, mapping them to vectors 0x20-0x27, while the slave PIC manages IRQs 8-15, mapping to 0x28-0x2F, with the slave cascaded through the master's IRQ 2. In modern systems employing the Advanced Programmable Interrupt Controller (APIC) or its extensions like x2APIC, hardware interrupts typically use vectors starting from 0x30 for local APIC interrupts, with the full range extending up to 0xFF for greater flexibility and scalability. The I/O APIC routes device IRQs to these vectors, while the local APIC handles internal events, allowing programmable assignment beyond the fixed PIC scheme.

The handling process begins when a device asserts its IRQ line, prompting the PIC or APIC to prioritize the request and deliver the corresponding vector number to the CPU via the APIC interface or an INTA cycle. The CPU then uses this vector as an index into the IDT to locate and invoke the interrupt gate or task gate descriptor, transferring control to the handler routine. Upon completion, the handler issues an End-of-Interrupt (EOI) command to the controller (via port 0x20 for the master PIC, 0xA0 for the slave, or the APIC's EOI register) to clear the in-service bit and re-enable the line for future events.

Priority management ensures orderly handling: in the 8259 PIC, priorities are fixed in hardware with IRQ 0 (typically the system timer) as the highest and IRQ 7 as the lowest, resolved by a daisy-chain mechanism if multiple IRQs are pending. The APIC, in contrast, supports programmable priorities through registers like the Task Priority Register (TPR), allowing software to adjust levels dynamically for better control in multiprocessor environments. Interrupt nesting is governed by interrupt gates, which automatically clear the Interrupt Flag (IF) in the EFLAGS register to disable maskable interrupts during handler execution, preventing lower-priority interruptions until the handler re-enables IF or issues the EOI. Representative examples include the system timer on IRQ 0, mapped to vector 0x20 in PIC systems or a programmable vector like 0x31 in APIC configurations for periodic timing events, and the keyboard controller on IRQ 1, using 0x21 or 0x32 to signal key presses. These mappings ensure that critical hardware events, such as timing and input, integrate seamlessly with the IDT for efficient CPU response.
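The classic remap of the two 8259 PICs to vectors 0x20-0x2F, and the matching EOI, can be sketched in C; outb is a hypothetical wrapper around the OUT instruction, and the initialization words follow the widely documented 8259 sequence:

#include <stdint.h>

static inline void outb(uint16_t port, uint8_t val)
{
    __asm__ volatile("outb %0, %1" : : "a"(val), "Nd"(port));
}

/* Remap the two 8259 PICs so IRQs 0-15 use vectors 0x20-0x2F. */
static void pic_remap(void)
{
    outb(0x20, 0x11); outb(0xA0, 0x11); /* ICW1: begin init, expect ICW4 */
    outb(0x21, 0x20); outb(0xA1, 0x28); /* ICW2: vector offsets 0x20 / 0x28 */
    outb(0x21, 0x04); outb(0xA1, 0x02); /* ICW3: slave cascaded on IRQ 2 */
    outb(0x21, 0x01); outb(0xA1, 0x01); /* ICW4: 8086 mode */
}

/* Acknowledge an IRQ; a slave IRQ also requires an EOI to the master. */
static void pic_send_eoi(uint8_t irq)
{
    if (irq >= 8)
        outb(0xA0, 0x20);
    outb(0x20, 0x20);
}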

Software Interrupts

Software interrupts in the x86 architecture are explicitly generated by software instructions to invoke handlers defined in the Interrupt Descriptor Table (IDT), allowing controlled transitions to privileged code such as operating system services. The primary mechanism is the INT n instruction, where n is an 8-bit immediate value specifying the interrupt vector (0 to 255), which serves as an index into the IDT to locate the corresponding gate descriptor. Upon execution, the processor pushes the current values of the EFLAGS register, the code segment selector (CS), and the instruction pointer (EIP or RIP) onto the stack; if the interrupt causes a privilege-level change (e.g., from user to kernel mode), it additionally pushes the stack segment selector (SS) and stack pointer (ESP or RSP) before the others. The processor then loads the handler's segment selector and offset from the IDT entry and jumps to it, with the saved CS:EIP/RIP pointing to the instruction immediately following the INT n.

A specialized form is the INT 3 instruction (opcode 0xCC), a one-byte breakpoint interrupt that generates vector 3 to invoke a debug handler, commonly used for software breakpoints in debugging tools without requiring hardware support. Unlike general INT n, INT 3 is treated as a trap-class event, allowing execution to continue past the breakpoint if desired. To return from a software interrupt handler, the IRET (or IRETQ in 64-bit mode) instruction pops the stack in reverse order, restoring EIP/RIP, CS, EFLAGS (including flags such as the direction flag used by string instructions), and, if applicable, SS and ESP/RSP, thus returning control to the interrupted code while maintaining processor state integrity.

Historically, software interrupts have been widely used for operating system interactions, such as system calls in legacy environments. In MS-DOS, INT 21h (vector 0x21) provided a multipurpose interface for services like file I/O, program execution, and keyboard input, with the AH register specifying the subfunction. Similarly, early Linux kernels on 32-bit x86 employed INT 0x80 (vector 128) as the primary syscall entry point, where the syscall number in EAX selected the kernel routine, passing arguments via registers; this legacy path remains supported for compatibility in modern kernels via the entry_INT80_compat handler. Vectors from 0x80 (128) to 0xFF (255) are typically available for operating-system-defined software interrupts, as lower vectors (0-31) are dedicated to processor exceptions and the non-maskable interrupt.

In contemporary systems, however, the INT instruction for syscalls has been largely deprecated in favor of dedicated instructions like SYSCALL and SYSRET in 64-bit mode, which offer faster context switching by avoiding full IDT gate traversals and stack manipulations for privilege changes, reducing latency in high-frequency operations. These modern alternatives, introduced with the AMD64 extensions and adopted by Intel, bypass the overhead of interrupt gates (often configured as trap gates for software events) while maintaining security through model-specific registers for entry points. INT remains relevant for debugging (e.g., INT 3) and legacy compatibility but is avoided in performance-critical paths due to its slower entry and exit compared to SYSCALL.
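As an illustration of the legacy INT 0x80 path, the following sketch issues a 32-bit Linux write(2) call through the vector-0x80 gate; it assumes an i386 build (syscall number 4 is write on 32-bit Linux) and GCC-style inline assembly:

/* write(1, buf, len) via the legacy INT 0x80 syscall gate.
   EAX carries the syscall number in and the return value out;
   EBX, ECX, EDX carry the first three arguments. */
static long int80_write(const char *buf, unsigned long len)
{
    long ret = 4; /* __NR_write on 32-bit Linux */
    __asm__ volatile("int $0x80"
                     : "+a"(ret)
                     : "b"(1), "c"(buf), "d"(len)
                     : "memory");
    return ret;
}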

Common Layouts and Examples

Standard x86 Layout

The standard x86 layout for the Interrupt Descriptor Table (IDT) follows Intel's recommended assignments for the 256 available interrupt vectors, reserving the lowest numbers for processor exceptions to ensure priority handling of critical events. Vectors 0 through 31 are dedicated to exceptions, such as vector 0 for the #DE (divide error) exception triggered by division by zero or quotient overflow, and vector 14 for the #PF (page fault) exception occurring on invalid memory access. Vectors 32 through 47 (20h to 2Fh) are conventionally assigned to hardware interrupts from the legacy programmable interrupt controller (PIC), mapping IRQ lines 0 through 15; for instance, vector 32 corresponds to IRQ0, typically handled by the system timer, avoiding conflicts with the exception range. The remaining vectors 48 through 255 are available for operating-system-defined interrupts or device-specific uses, providing extensibility for modern systems. This layout is summarized in the following table:
Vector Range | Purpose | Examples/Notes
0-31 | Processor exceptions | 0: #DE (divide error); 14: #PF (page fault)
32-47 (20h-2Fh) | Hardware interrupts (PIC IRQs) | 32 (IRQ0): system timer handler
48-255 | OS/device-defined interrupts | Available for custom assignments
The rationale for this assignment prioritizes low vectors for exceptions to prevent overlap with software or hardware interrupts, enabling efficient prioritization and orderly handling across processor-detected events and external sources. Higher vectors support extensibility, accommodating growth in interrupt sources without disrupting core functionality. In variations from the basic PIC-based layout, the Advanced Programmable Interrupt Controller (APIC) architecture permits any vector from 16 through 255 to be assigned, including inter-processor interrupts (IPIs) delivered via the Interrupt Command Register in fixed or NMI modes to facilitate multi-core communication. Message-signaled interrupts (MSIs), common in PCI devices, are programmable within the valid APIC vector range (010h to 0FEh), in practice usually assigned from the OS-defined vectors 48 through 255, using memory writes for interrupt signaling to enhance scalability in high-performance systems.

IBM PC Specifics

In the original IBM PC (Model 5150), a single 8259 Programmable Interrupt Controller (PIC) was employed, mapping its eight interrupt requests (IRQs 0 through 7) to interrupt vectors 08h through 0Fh in the interrupt vector table (IVT). This configuration directly overlapped with the lower range of x86 processor exceptions (00h through 1Fh), creating potential conflicts since exceptions like double fault (08h) and coprocessor segment overrun (0Dh) shared the same vectors. To mitigate this in operating systems transitioning to protected mode, the PIC interrupts are typically remapped by the OS during initialization: IRQs 0-7 to vectors 20h-27h and, for systems with a second PIC, IRQs 8-15 to 28h-2Fh, preserving the exception space.

This initial mapping led to legacy conflicts in PC-compatible systems, where vectors 08h-0Fh were dedicated to the original 8259 hardware interrupts, while the BIOS reserved vectors 10h-1Fh for essential routines, such as video services (10h) and disk I/O (13h). Specific examples include IRQ4 for asynchronous communications (e.g., serial ports) initially assigned to vector 0Ch, IRQ6 for the floppy disk controller at 0Eh, and IRQ7 for the parallel printer at 0Fh, all of which could interfere with exception reporting if not managed. Modern operating systems maintain compatibility support for applications by preserving these original vector assignments in virtual 8086 mode or through emulation, ensuring legacy software can invoke hardware interrupts without modification.

The evolution to the IBM PC AT (Model 5170) introduced a second cascaded 8259A PIC as a slave controller connected via the master's IRQ2 (vector 0Ah), expanding to 16 IRQs while addressing the limitations of the single-PIC design. Initially, the slave PIC's IRQs 8-15 were mapped by the BIOS to vectors 70h-77h to avoid overlap with the BIOS-reserved 10h-1Fh range and potential conflicts in lower vectors, with examples including IRQ8 for the real-time clock (RTC) at 70h and IRQ14 for the fixed disk controller at 76h. Operating systems subsequently remap the master to 20h-27h and the slave to 28h-2Fh for standardized protected-mode operation, shifting from the AT's initial high-vector placement while retaining backward compatibility.

BIOS Interrupts

In the x86 architecture's pre-boot firmware environment, BIOS interrupts provide a standardized interface for software to access low-level services during system initialization and early operation. These software interrupts utilize vectors in the range of 10h to 1Fh, which are reserved specifically for BIOS functions, allowing programs to invoke firmware routines without direct hardware manipulation. For instance, vector 10h handles video services, 13h manages disk operations, and 16h processes keyboard input, enabling essential tasks like displaying output or reading storage devices in the absence of a full operating system.

These interrupts operate exclusively in real mode, where the interrupt vector table (IVT) at memory address 00000h points to handler code stored in read-only memory (ROM) segments starting from F0000h. Parameters are passed via CPU registers, with the AH register typically specifying the sub-function or service code; other registers like AL, BX, CX, and DX carry additional data or receive outputs. A representative example is INT 10h with AH=0Eh, which performs teletype output by writing the character in AL to the active display page, advancing the cursor, and interpreting control codes such as carriage return (CR) or line feed (LF), with BH selecting the page and BL the foreground color in graphics modes. This mechanism ensures compatibility across IBM PC-compatible systems by abstracting hardware variations through firmware.

BIOS interrupts are inherently limited by real-mode constraints, capping addressable memory at 1 MB and restricting access to 16-bit segmented addressing, which becomes inadequate for modern multitasking or large memory environments. Upon transitioning to protected mode, operating systems replace these interrupts with their own drivers and APIs, as the IDT supersedes the IVT and BIOS services are no longer directly invocable without mode switches, which are inefficient and insecure. As a legacy component, BIOS interrupts remain relevant in the boot process, where the firmware uses them to perform the power-on self-test (POST), initialize hardware, and load the bootloader via INT 19h before handing control to the OS. In contemporary systems adopting UEFI, this reliance diminishes, as UEFI boot services (e.g., EFI_SIMPLE_TEXT_OUTPUT_PROTOCOL for video akin to INT 10h, EFI_BLOCK_IO_PROTOCOL for disk like INT 13h, and EFI_SIMPLE_TEXT_INPUT_PROTOCOL for keyboard like INT 16h) and runtime services provide equivalent functionality through protocol-based calls, eliminating the need for interrupt vectors in favor of a modular, 64-bit capable interface.
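As an illustration of this register-based parameter passing, a teletype call might be sketched as C with inline assembly; this only makes sense in a 16-bit real-mode build (e.g., code compiled with gcc -m16 and run before the switch to protected mode), and bios_putc is a hypothetical helper:

/* Print one character via BIOS teletype output (INT 10h, AH=0Eh).
   AH selects the service and AL carries the character; BH (page)
   and BL (color) are zeroed here. Real mode only. */
static void bios_putc(char c)
{
    unsigned short ax = 0x0E00 | (unsigned char)c; /* AH=0Eh, AL=char */
    __asm__ volatile("int $0x10"
                     : "+a"(ax)
                     : "b"(0)
                     : "cc");
}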

Advanced Usage

Interrupt Hooking

Interrupt hooking refers to techniques employed by kernel-level software, such as drivers or rootkits, to intercept and redirect the interrupt processing defined in the Interrupt Descriptor Table (IDT). This allows custom code to execute before or instead of the original handler, enabling monitoring, modification, or suppression of interrupt-related events. In x86 architectures, hooking typically involves modifying IDT entries to point to new interrupt service routines (ISRs), often while preserving the original functionality through chaining mechanisms.

Kernel drivers achieve this by first locating the IDT using the SIDT instruction, which stores the IDTR contents, including the table's base address, to memory. The driver then overwrites the relevant entry (an 8-byte gate descriptor on x86-32 or a 16-byte descriptor on x86-64) by updating the offset fields to the address of a custom handler. For example, the low and high offset components are set to form the new ISR address, while preserving the segment selector and attributes for privilege level and type (e.g., interrupt or trap gate). To chain to the original handler and avoid breaking system behavior, the custom ISR saves the context, performs its logic (such as logging or filtering), and jumps or calls the original routine before returning. This is common for IRQ filters, where the hook inspects interrupt sources before dispatching. In Linux, loadable kernel modules (LKMs) facilitate such modifications directly in kernel space.

On Windows, direct IDT hooking targets interrupt vectors, such as 0x2E used for legacy system calls, by altering IDT pointers to malicious or custom ISRs, often requiring per-processor updates since each CPU maintains its own IDT. However, for syscall interception, developers more commonly hook the System Service Dispatch Table (SSDT), accessed via the kernel variable KeServiceDescriptorTable, which maps syscall indices to handlers after the initial kernel entry (e.g., via the SYSENTER or SYSCALL instructions). This indirect approach modifies the dispatch table in kernel memory to redirect specific services like NtQueryDirectoryFile, allowing result filtering without altering the IDT itself. SSDT hooking requires disabling write protection in the CR0 register or using memory descriptor lists (MDLs) to map the pages writable.

These methods carry significant risks, including system instability from improper context handling, such as stack corruption or deadlocks during nested interrupts, potentially leading to kernel panics. In 64-bit Windows, Kernel Patch Protection (PatchGuard) actively scans for IDT modifications and other kernel alterations, invoking a bug check (Blue Screen of Death) upon detection to enforce integrity. Alternatives to direct IDT hooking include virtualization-based interception, where a virtual machine monitor (VMM) using Intel VT-x (VMX) controls interrupt delivery. In VMX operation, the VMM configures VM-execution controls like "external-interrupt exiting" to cause VM exits on interrupts, allowing it to inspect and redirect them before injecting virtual interrupts via the guest's IDT. For instance, a hypervisor can employ VMX to emulate and intercept guest interrupts, enabling transparent monitoring without kernel modifications in the guest OS. User-mode alternatives rely on OS APIs for event monitoring, such as Windows' registered I/O completion ports, but cannot directly access the IDT due to privilege restrictions.
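The first step, locating the table and recovering the original handler address so a hook can later chain to it, can be sketched in C for the 64-bit layout; the names are illustrative, and this sketch only reads the table:

#include <stdint.h>

struct idtr64 {
    uint16_t limit;
    uint64_t base;
} __attribute__((packed));

struct idt_gate64 {
    uint16_t offset_low;
    uint16_t selector;
    uint8_t  ist;
    uint8_t  attr;
    uint16_t offset_mid;
    uint32_t offset_high;
    uint32_t reserved;
} __attribute__((packed));

/* Locate the current CPU's IDT via SIDT and reassemble a vector's
   handler address; a hook would save this value before redirecting
   the entry so its own ISR can chain to the original routine. */
static uint64_t read_handler_address(uint8_t vector)
{
    struct idtr64 idtr;
    __asm__ volatile("sidt %0" : "=m"(idtr));

    const struct idt_gate64 *gate =
        (const struct idt_gate64 *)idtr.base + vector;
    return (uint64_t)gate->offset_low
         | (uint64_t)gate->offset_mid  << 16
         | (uint64_t)gate->offset_high << 32;
}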

Security Considerations

The Interrupt Descriptor Table (IDT) is a critical kernel data structure that can be targeted by malware, particularly rootkits, through overwrite techniques to intercept interrupts and gain unauthorized control over system execution. Rootkits often modify IDT entries to redirect interrupt handlers, enabling persistent kernel-level control and evasion of detection mechanisms. Such modifications can elevate attacker privileges by hijacking software interrupts, like system calls, without triggering general protection faults. In historical Linux kernels (pre-3.8.9), exploits such as the one for CVE-2013-2094 abused kernel memory-corruption bugs to alter IDT entries, including DPL fields, for privilege escalation; these have been patched since 2013. Recent research as of 2025 has demonstrated novel IDT-hijacking techniques in modern kernels, even with SMEP and KASLR enabled. By exploiting LIDT instruction gadgets, attackers can relocate the IDT to user-controlled memory, enabling return-oriented programming (ROP) chains via exception redirection or causing denial of service (DoS) through unhandled faults. Proposed mitigations include enforcing constant virtual addresses for the IDT with post-LIDT validation checks.

To mitigate IDT manipulation, operating systems employ write-protection mechanisms, such as marking the IDT pages as read-only in the kernel's page tables after initialization, preventing unauthorized writes from kernel-mode code or loaded modules. In Linux, the IDT is protected with read-only mappings and a fixed-address alias to prevent leakage of kernel addresses via SIDT, in conjunction with kernel address space layout randomization (KASLR) that randomizes the surrounding kernel space. Hardware features like Supervisor Mode Execution Prevention (SMEP) and Supervisor Mode Access Prevention (SMAP), introduced in modern Intel processors, further protect against IDT-related risks by blocking kernel execution of user-mode code or access to user-mode memory, even if an attacker redirects an IDT entry to malicious payloads. In virtualized environments, hypervisors maintain a shadowed IDT for guest virtual machines (VMs), allowing the host to intercept and validate interrupt deliveries without exposing the guest's table to direct modification.

Modern operating systems incorporate runtime integrity checks for the IDT. In Windows, Kernel Patch Protection (KPP), also known as PatchGuard, periodically scans the IDT for unauthorized modifications, such as altered gate descriptors or handler addresses, and triggers a bug check if tampering is detected. For Linux, the IDT is locked into read-only pages during kernel initialization, with subsequent protections enforced via module signing and lockdown modes to block unsigned kernel modules from accessing or altering it.

Best practices for IDT security emphasize rigorous validation during setup and runtime. Kernel developers should verify segment selectors and DPL values in each IDT entry against expected kernel code segments (e.g., ensuring DPL=0 for most exceptions to restrict user invocation), using automated checks in initialization routines to detect misconfigurations. Additionally, utilizing Interrupt Stack Table (IST) entries in the Task State Segment (TSS) isolates kernel interrupt stacks from user-mode faults; for example, assigning a dedicated IST stack for double faults or NMIs prevents user-induced stack corruption from propagating to kernel execution, enhancing fault isolation.
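A minimal sketch of such an audit, assuming the 64-bit gate layout described earlier; KERNEL_CS and idt_entry_ok are illustrative names, and a real kernel would whitelist the few gates (such as the INT 3 breakpoint gate) that legitimately carry DPL=3:

#include <stdint.h>
#include <stdbool.h>

struct idt_gate64 {
    uint16_t offset_low;
    uint16_t selector;
    uint8_t  ist;
    uint8_t  attr;      /* P (bit 7), DPL (bits 6:5), type (bits 3:0) */
    uint16_t offset_mid;
    uint32_t offset_high;
    uint32_t reserved;
} __attribute__((packed));

#define KERNEL_CS 0x08  /* assumed kernel code segment selector */

/* Audit one populated gate: it should reference the kernel code
   segment and, for most exceptions, carry DPL=0 so user code cannot
   invoke it with INT n; reserved bits must remain zero. */
static bool idt_entry_ok(const struct idt_gate64 *g)
{
    if (!(g->attr & 0x80))           /* not present: nothing to check */
        return true;
    if (g->selector != KERNEL_CS)    /* unexpected target segment */
        return false;
    if (((g->attr >> 5) & 0x3) != 0) /* DPL should be 0 (whitelist #BP/#OF) */
        return false;
    return g->reserved == 0;         /* reserved bits must stay zero */
}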