
Message Signaled Interrupts

Message Signaled Interrupts (MSI) are an optional interrupt signaling mechanism introduced in the PCI Local Bus Specification Revision 2.2, enabling devices to request service from the host system by issuing a dedicated memory write to a system-allocated address, rather than relying on traditional pins such as INTA# through INTD#. This approach delivers a 16-bit message data value as a DWORD memory write, with support for either 32-bit or 64-bit addressing, and is configured through a capability structure in the device's configuration space. MSI offers several key advantages over pin-based interrupts, including reduced hardware complexity: eliminating dedicated interrupt lines lowers pin counts and simplifies board wiring in dense systems. It supports up to 32 unique interrupt vectors per device function by varying the low-order bits of the message data, allowing more scalable interrupt handling without the shared lines that can cause contention or latency issues in multi-device environments. System software initializes the message address and data during device enumeration, and because interrupts are delivered as posted memory writes that follow PCI write-ordering rules, data consistency is maintained. While MSI and traditional INTx# interrupts are mutually exclusive when enabled, devices often retain pin support for backward compatibility with legacy systems. An enhanced variant, MSI-X (Extended Message Signaled Interrupts), was introduced in PCI Local Bus Specification Revision 3.0 to address limitations in scalability and flexibility. Unlike MSI's register-based approach, MSI-X employs a memory-resident table of up to 2048 entries, each with an independent 32- or 64-bit address, 32-bit data, and per-vector masking controls, along with a Pending Bit Array (PBA) to track unserviced interrupts.
This table-based structure, pointed to by Base Address Registers in the MSI-X capability (ID 11h), enables finer-grained control, reduces configuration space overhead, and supports aliasing for efficient vector sharing, making it particularly suitable for devices with high interrupt volumes. MSI and MSI-X have been integral to PCI Express (PCIe) since its inception, where interrupts are signaled via Memory Write Transaction Layer Packets (TLPs) on the serial link, inheriting the same capability structures while benefiting from PCIe's point-to-point topology for lower latency and higher bandwidth. Widely adopted in modern systems for peripherals like network cards, storage controllers, and GPUs, these mechanisms improve overall system performance by minimizing overhead and enabling efficient multi-core interrupt distribution.

Introduction

Definition and Purpose

Message Signaled Interrupts (MSI) represent a fundamental method for interrupt delivery in modern computer architectures, where interrupts serve as asynchronous signals from peripherals to the processor indicating events such as data completion or errors that require immediate attention. Unlike traditional out-of-band mechanisms, MSI employs an in-band approach integrated into the system's memory fabric. At its core, MSI is an optional feature introduced in the PCI Local Bus Specification Revision 2.2, enabling PCI devices to generate interrupts by performing a memory write transaction to a specific system-allocated address, rather than asserting dedicated physical pins. This write consists of a 32-bit (optionally 64-bit) address and a 16-bit data payload, which collectively form the interrupt message and are configured by system software during device initialization. By leveraging existing memory write protocols, MSI avoids the hardware overhead of interrupt lines, allowing devices to signal events directly through the bus without additional signaling paths. The primary purpose of MSI is to facilitate scalable and efficient interrupt handling in dense, high-performance environments like PCI and PCI Express (PCIe) systems, where numerous devices compete for attention. It supports multiple distinct interrupt vectors per device—up to 32 in basic MSI—enabling finer-grained event notification without the limitations of shared pin resources, and ensures that associated data transfers (such as DMA operations) complete before the interrupt is processed, due to inherent write-ordering rules. This integration enhances overall throughput by reducing latency from interrupt sharing and minimizing extraneous bus traffic. Furthermore, MSI's design promotes compatibility across bus architectures, with similar message-based signaling adopted in other I/O interconnect standards.

Historical Development

The concept of message-signaled interrupts emerged in the 1990s through implementations in non-PCI bus architectures, such as Hewlett-Packard's General System Connect (GSC) bus, which was used in its PA-RISC systems to natively generate interrupt messages without dedicated pins. Early proposals for pinless mechanisms addressed limitations in dense bus designs, paving the way for broader adoption in standardized interconnects. Message Signaled Interrupts (MSI) were formally standardized in the PCI Local Bus Specification Revision 2.2, released by the PCI Special Interest Group (PCI-SIG) on December 18, 1998, as an optional alternative to traditional pin-based interrupts. This introduction allowed PCI devices to signal interrupts using memory write transactions, reducing reliance on physical interrupt lines. The MSI-X extension, which enhanced MSI by supporting more vectors and independent addressing, was detailed in a PCI-SIG Engineering Change Notice (ECN) issued on August 6, 2003, and integrated into the PCI Local Bus Specification Revision 3.0, released on February 3, 2004. The transition to PCI Express marked a pivotal evolution, with the PCI Express Base Specification Revision 1.0, finalized in 2002, mandating MSI or MSI-X support for all new interrupt-capable devices to ensure compatibility in serial interconnect environments. Additional ECNs from 2003 to 2005 further refined MSI-X capabilities, including expansions for larger vector counts. Subsequent revisions, such as PCI Express 6.0, released in January 2022, and 7.0, released on June 11, 2025, reaffirmed MSI and MSI-X without major alterations, emphasizing ongoing compatibility and integration. Post-2010, adoption grew in embedded systems, driven by the increasing use of PCIe in resource-constrained designs for improved interrupt scalability.

Comparison to Traditional Methods

Pin-Based Interrupts

Pin-based interrupts in conventional PCI utilize dedicated hardware signals known as INTx# lines, where x represents A, B, C, or D, to allow devices to request service from the host processor. These signals are optional, level-sensitive, and asserted low using open-drain drivers, requiring external pull-up resistors on the system board to maintain a high state when inactive. Single-function devices typically employ only INTA#, while multi-function devices may use up to four lines to distribute requests across functions. The signals operate asynchronously to the PCI clock and have no defined ordering with respect to bus transactions; a device asserts an interrupt by driving the line low and holds it there until the request is serviced, then deasserts it. In conventional PCI systems, multiple devices share these INTx# lines through wire-OR connections, where any asserting device pulls the shared line low, so the host bridge or system software must poll candidate devices to identify the source. This sharing occurs across the bus, with up to four lines available per bus segment, and device drivers must support interrupt handler chaining to handle shared lines effectively. The host bridge routes these signals to the system's interrupt controller, managing prioritization and latency without protocol-level ordering defined in the PCI specification. Level triggering ensures the interrupt remains pending until explicitly cleared, supporting reliable notification in multi-device environments. Pin-based interrupts dominated early PCI implementations from the PCI 1.0 specification released in 1992 through revisions up to PCI 2.1 in 1995, remaining prevalent until PCI 2.2 in 1998, with level-triggered operation standardized throughout. However, these systems face inherent constraints, including a maximum of four interrupt lines per bus, limiting the number of unique interrupts and complicating assignment in densely populated boards.
Shared lines introduce wiring complexity in multi-slot configurations, as traces must be routed carefully to avoid signal integrity issues, while the open-drain design makes them susceptible to electrical noise such as crosstalk in high-density layouts. Additionally, the fixed pin count per device restricts scalability for boards with numerous peripherals, often requiring additional bridges or controllers to manage interrupt distribution. In the transition to PCI Express, physical INTx# pins are absent due to the serial link architecture, which eliminates the sideband interrupt signals used in early PCI systems; instead, legacy compatibility requires emulation through in-band messages. Early PCI implementations relied on these physical pins for interrupt delivery, but PCIe endpoints and bridges must support virtual wire signaling to maintain compatibility without dedicated hardware lines.

Key Differences and Evolution

Message Signaled Interrupts (MSI) represent a fundamental shift from traditional pin-based interrupts, which rely on out-of-band electrical assertion via dedicated INTx# pins (INTA# through INTD#) to signal events. In contrast, MSI employs an in-band mechanism where devices generate interrupts by issuing a memory write transaction—specifically, a Transaction Layer Packet (TLP) in PCI Express—to a system-allocated address, embedding the interrupt vector in the data payload for precise targeting without requiring physical lines. This eliminates shared interrupt wiring, common in legacy PCI systems where multiple devices compete for limited pins, and enables per-vector addressing that avoids the ambiguity of pin-based signaling. The evolution of MSI addressed scalability limitations inherent in pin-based systems, which were constrained to four interrupt lines per bus, often leading to contention and requiring complex arbitration in multi-device environments. Introduced as an optional feature in the PCI Local Bus Specification Revision 2.2, MSI initially supported up to 32 interrupt vectors per function, configurable in powers of two, allowing devices to request multiple distinct interrupts without additional hardware. This expanded dramatically with the MSI-X extension, supporting up to 2048 vectors, and further reduced latency by enabling direct delivery to the Advanced Programmable Interrupt Controller (APIC), bypassing shared bus overhead. In the transition to PCI Express starting with version 1.0, legacy pin-based interrupts were emulated via in-band INTx messages (Assert_INTx and Deassert_INTx TLPs) for backward compatibility, but MSI became mandatory for all new interrupt-capable designs to promote scalable, point-to-point topologies over the shared buses of conventional PCI. 
Performance enhancements in MSI stem from its ability to mitigate issues like interrupt storms—where high-frequency assertions on shared pins overwhelm the system—and race conditions arising from indeterminate deassertion timing in level-sensitive pin signals. By treating interrupts as ordered memory writes, MSI ensures reliable delivery and supports interrupt affinity, permitting vectors to be bound to specific CPU cores in multi-core systems for optimized handling, though MSI-X provides finer-grained control with per-vector masking and independent addressing. This architectural progression from PCI's pin-limited, contention-prone model to PCIe’s message-based approach has made MSI the preferred method in modern high-density computing, reducing overall system latency and improving throughput in environments with numerous peripherals.

Core Mechanisms

MSI Protocol

The Message Signaled Interrupt (MSI) protocol, introduced in the PCI Local Bus Specification Revision 2.2, allows PCI devices to generate interrupts by issuing a memory write transaction rather than asserting a dedicated interrupt pin. System software configures the MSI capability structure within the device's configuration space, typically starting at offset 0x50, to enable this functionality. The structure includes a 16-bit Message Control register (offsets 0x02-0x03), a 32-bit Message Address register (offsets 0x04-0x07), an optional 32-bit Message Upper Address register for 64-bit addressing (offsets 0x08-0x0B), and a 16-bit Message Data register (offsets 0x0C-0x0D when 64-bit addressing is supported, 0x08-0x09 otherwise). To enable MSI, software sets bit 0 of the Message Control register to 1, which disables traditional pin-based interrupts (INTx#) for that function, establishing mutual exclusivity. The MSI message consists of a 32-bit (or 64-bit, if the upper address is non-zero) address and a 16-bit data payload, delivered as a DWORD memory write transaction that follows standard posted-write ordering rules. In x86 systems, the address base is fixed at 0xFEE00000 for delivery to the local APIC, with bits 31:20 set to 0xFEE, bits 19:12 encoding the 8-bit destination ID (specifying the target processor or processors), bits 11:4 reserved, bit 3 indicating a redirection hint (for logical destination mode), and bit 2 specifying the destination mode (0 for physical, 1 for logical). The data field encodes the interrupt details as a 16-bit value: bits 7:0 hold the 8-bit vector (typically 0x10-0xFE), bits 10:8 specify the delivery mode (e.g., 000b for fixed delivery to the specified processor), bit 14 indicates the level (0 for deassert, 1 for assert in level-triggered mode), and bit 15 denotes the trigger mode (0 for edge, 1 for level). Upon interrupt generation, the device performs the memory write to the configured address, which the system interrupt controller—such as the x86 local APIC—decodes to produce the interrupt.
The APIC extracts the vector from the data to index the interrupt descriptor table (IDT), applies the delivery mode to route the interrupt (e.g., fixed mode delivers directly to the target CPU), and respects the level and trigger settings for handling edge- or level-sensitive interrupts. The system ensures memory consistency by completing all prior device writes before the interrupt service routine executes. For multiple vectors, the Message Control register's bits 3:1 (Multiple Message Capable) advertise the device's support for up to 32 vectors in powers of 2 (1, 2, 4, 8, 16, or 32), while bits 6:4 (Multiple Message Enable, set by software) allocate the actual count; the device generates distinct messages by varying the low-order data bits corresponding to the enabled count. During device enumeration, the operating system allocates vectors by programming the MSI registers with platform-specific addresses and data values, ensuring no conflicts across devices. Masking in basic MSI is optional: if the Per-Vector Masking Capable bit (bit 8 of the Message Control register) is set, the capability includes Mask Bits and Pending Bits registers covering individual vectors; otherwise, software can only disable MSI entirely by clearing the MSI Enable bit. This contrasts with the MSI-X extension, which always supports per-vector masking and more flexible addressing.
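The x86 address and data encodings described above can be sketched in a few lines of Python; the helper names and values are illustrative, not taken from any real operating system or driver.

```python
# Sketch of composing the x86 MSI address and data values.
# Field layouts follow the description above; names are illustrative.

MSI_ADDR_BASE = 0xFEE00000  # bits 31:20 fixed at 0xFEE

def msi_address(dest_id, redirection_hint=0, dest_mode=0):
    """Build the 32-bit MSI address: destination ID in bits 19:12,
    redirection hint in bit 3, destination mode in bit 2."""
    assert 0 <= dest_id <= 0xFF
    return MSI_ADDR_BASE | (dest_id << 12) | (redirection_hint << 3) | (dest_mode << 2)

def msi_data(vector, delivery_mode=0b000, level=0, trigger=0):
    """Build the 16-bit MSI data: vector in bits 7:0, delivery mode in
    bits 10:8, level in bit 14, trigger mode in bit 15."""
    assert 0x10 <= vector <= 0xFE
    return (trigger << 15) | (level << 14) | (delivery_mode << 8) | vector

# Fixed delivery of vector 0x41 to the CPU whose APIC ID is 2:
addr = msi_address(dest_id=2)
data = msi_data(vector=0x41)
print(hex(addr), hex(data))  # 0xfee02000 0x41
```

System software would write these two values into the device's Message Address and Message Data registers; the device then emits them verbatim in its DWORD memory write.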

MSI-X Extension

The MSI-X extension enhances the basic Message Signaled Interrupt (MSI) capability by introducing a scalable, table-based approach for handling multiple independent vectors per PCI function, allowing devices to generate up to 2048 distinct interrupts. Introduced in the PCI Local Bus Specification Revision 3.0, MSI-X maps interrupt configuration data into device memory space via Base Address Registers (BARs), enabling greater flexibility in vector assignment, per-vector masking, and targeting compared to the fixed 32-vector limit of standard MSI. This structure is particularly suited for high-performance peripherals requiring numerous interrupt sources, such as network controllers or storage devices, and is identified by Capability ID 0x11 in the configuration space. The MSI-X capability structure resides in the configuration space and includes key registers for table configuration: the Message Control register contains an 11-bit Table Size field (bits 10:0), where the actual number of table entries is the field value plus one, supporting sizes from 1 to 2048; a 3-bit BIR field specifying the BAR index (0-5) for the table's location; and a Table Offset field (bits 31:3) providing the 8-byte-aligned offset from the base address. The table itself consists of contiguous 16-byte entries in BAR-mapped memory, each containing a 32-bit Message Address, a 32-bit Message Upper Address (for 64-bit addressing), 32-bit Message Data (carrying the vector), and a 32-bit Vector Control register with a per-vector Mask bit (bit 0). Additionally, a separate Pending Bit Array (PBA) structure, also BAR-mapped via its own BIR and offset fields, tracks pending interrupts for masked vectors using one bit per entry, organized in quadword units to indicate whether a masked interrupt remains pending after unmasking. The MSI-X Enable bit (bit 15 in Message Control) activates the table, while a global Function Mask bit (bit 14) can disable all vectors collectively.
Interrupt delivery in MSI-X operates independently for each table entry: upon an event, the device issues a memory write transaction using the pre-programmed 64-bit address and 32-bit data from the corresponding entry, allowing arbitrary values and affinity hints for processor core targeting without shared interrupt lines. Per-vector masking prevents delivery while preserving pending status in the PBA, enabling software to handle interrupts on demand and reducing system overhead in multi-vector scenarios. MSI-X is essential for devices exceeding the 32-vector capacity of basic MSI, providing the scalability needed for modern I/O-intensive applications. The mechanism remains unchanged and fully compatible in subsequent PCI Express revisions, including version 6.0. Recent developments in non-x86 architectures include RISC-V support for MSI-X within the Advanced Interrupt Architecture (AIA), where Incoming Message-Signaled Interrupt Controllers (IMSICs) handle MSI-X deliveries to specific harts and privilege modes, as specified in the ratified AIA specification (June 2023).
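Decoding the capability fields described above and locating a vector's table entry can be sketched as follows; the register values used are hypothetical, and the helper names are illustrative.

```python
# Sketch of decoding MSI-X capability fields and locating a table entry.

MSIX_ENTRY_SIZE = 16  # each table entry is 16 bytes

def decode_table_size(message_control):
    """Table Size is bits 10:0 of Message Control, encoded as N-1."""
    return (message_control & 0x7FF) + 1

def decode_table_location(table_offset_bir):
    """Table Offset/BIR register: BIR in bits 2:0, 8-byte-aligned
    offset in bits 31:3."""
    bir = table_offset_bir & 0x7
    offset = table_offset_bir & ~0x7
    return bir, offset

def entry_address(bar_base, offset, vector):
    """Address of a given vector's 16-byte table entry."""
    return bar_base + offset + vector * MSIX_ENTRY_SIZE

mc = 0x07FF                                   # Table Size all-ones -> 2048 entries
bir, off = decode_table_location(0x00002003)  # hypothetical: BIR 3, offset 0x2000
print(decode_table_size(mc), bir, hex(entry_address(0x90000000, off, 5)))
```

Software maps the BAR indicated by the BIR field, adds the offset, and indexes the entry; each entry's address, data, and Vector Control fields are then programmed independently.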

Legacy Emulation in PCI Express

In PCI Express, legacy interrupt emulation provides backward compatibility for traditional pin-based interrupts (INTx) by reusing message-based signaling mechanisms, allowing devices that do not natively support MSI to operate within the PCIe architecture. This emulation translates the conventional four interrupt pins—INTA#, INTB#, INTC#, and INTD#—into virtual wires tracked across PCIe links, without requiring physical pins. Devices assert or deassert these virtual interrupts by sending specific in-band Message Requests, such as Assert_INTx and Deassert_INTx, to the Root Complex, which then maps them to the system's legacy interrupt controllers. The emulation process involves the Root Complex receiving these messages and converting them into appropriate signals for the host platform. For instance, in x86 systems, the Root Complex typically routes the virtual pin assertions to fixed inputs of the I/O APIC, conventionally inputs 16 through 19 corresponding to INTA through INTD. These messages are routed using Traffic Class 0 (TC0) and include the device's Requester ID with Function Number set to 0, ensuring compatibility with the legacy software model. Switches and bridges forward these messages along legal routing paths, maintaining state per port to simulate wire-level behavior. Configuration of INTx emulation is managed through standard configuration registers. The Interrupt Pin register at offset 3Dh indicates the emulated pin (values 01h-04h for INTA-INTD, or 00h for no interrupt support), while the Interrupt Disable bit (bit 10) in the Command register at offset 04h, when set, prevents further INTx assertions and requires a Deassert_INTx message for any interrupt still asserted. This capability is used for devices lacking MSI or MSI-X support; if MSI or MSI-X is present and enabled, INTx emulation is disabled to prioritize the more efficient native modes. Bus Master Enable (Command register bit 2) must also be set for a device to generate MSI memory writes.
Defined in the PCI Express Base Specification Revision 1.0, this emulation is mandatory for PCIe bridges and switches to ensure seamless integration, requiring no additional hardware pins and leveraging the existing message infrastructure for interrupt delivery. While it avoids the wiring complexities of traditional pin-based interrupts, the emulation introduces potential performance overhead compared to native MSI, including the risk of spurious interrupts due to timing and the need for paired assert/deassert transactions, which can increase latency in high-throughput scenarios. This mechanism is crucial for compatibility, enabling legacy PCI devices to function in PCIe slots through PCI Express-to-PCI bridges, where the bridge maps secondary-side INTx virtual wires to the primary side based on device number and forwards messages upstream to the Root Complex. System software, including firmware and operating systems, handles the remapping across the topology to correlate interrupts correctly, preserving the legacy PCI interrupt routing model.
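The device-number-based mapping a bridge applies can be sketched with the conventional INTx "swizzle", in which a downstream pin is rotated by the device number modulo four; this is the standard convention, but the snippet itself is only an illustrative sketch.

```python
# Sketch of the conventional INTx swizzle a PCI Express-to-PCI bridge
# applies when mapping a secondary-side virtual wire to the primary side:
# primary pin = (device_number + pin) mod 4, with pins numbered 0..3.

PINS = ["INTA", "INTB", "INTC", "INTD"]

def swizzle(device_number, pin_index):
    """Map a device's interrupt pin (0=INTA .. 3=INTD) to the
    upstream virtual wire seen on the bridge's primary side."""
    return (device_number + pin_index) % 4

# Device 2 asserting INTA# appears upstream as INTC#:
print(PINS[swizzle(2, 0)])  # INTC
```

Applying the same rotation at every bridge level is what lets system software reconstruct which slot originated a shared legacy interrupt.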

Advantages and Limitations

Primary Benefits

Message Signaled Interrupts (MSI) provide enhanced scalability compared to traditional pin-based interrupts by allowing devices to support multiple interrupt vectors without consuming dedicated pin resources. The MSI-X extension, an advanced variant, enables up to 2048 unique interrupt vectors per device function, making it particularly suitable for complex peripherals such as multi-function network interface controllers (NICs) and graphics processing units (GPUs) that require distinct handling for various queues or operations. In x86 systems, MSI supports up to 224 usable interrupt vectors, eliminating the need for interrupt sharing and improving system-wide scalability for high-density I/O environments. A key performance advantage of MSI is reduced interrupt latency through direct delivery mechanisms. Unlike pin-based interrupts that involve arbitration and routing through shared controllers, MSI uses memory write transactions to deliver interrupt vectors straight to targeted CPU cores, leveraging affinity settings for optimal processing. This approach avoids the overhead of legacy interrupt controllers like the I/O APIC. According to Intel benchmarks, MSI achieves approximately a 3x reduction in latency compared to I/O APIC delivery and over 5x compared to the older XT-PIC, enhancing responsiveness in latency-sensitive applications. MSI also offers superior pin efficiency, especially in PCI Express environments where no dedicated physical interrupt pins are required. Interrupts are signaled via in-band memory writes over the bus, freeing up board space and simplifying hardware design by integrating seamlessly with direct memory access (DMA) operations without introducing race conditions. This design conserves pins that would otherwise be needed for interrupt signaling, reducing complexity in high-density systems.
The message-based nature of MSI enhances reliability by mitigating electrical issues associated with physical pin signaling, such as signal noise or contact failures, and supports robust operation in dynamic scenarios like hot-plug events and high-speed serial links. This makes MSI essential for modern networking applications exceeding 100 Gbps, where multiple interrupt vectors enable efficient scaling of receive-side scaling (RSS) queues on NICs to handle intense traffic loads with minimal CPU overhead.

Challenges and Drawbacks

Message Signaled Interrupts (MSI) introduce significant complexity in configuration and driver support compared to traditional pin-based interrupts, as they require coordinated support from the operating system and device drivers for proper vector allocation and configuration. Misconfigurations, such as incorrect setup of interrupt messages or failure to allocate sufficient vectors, can result in missed interrupts or system instability, necessitating careful driver programming and platform-specific quirks to handle faulty implementations. In the case of MSI-X, the Pending Bit Array (PBA) table, which tracks masked or pending interrupts, consumes additional memory space within the device's BAR-mapped regions, adding to resource usage on memory-constrained systems. Compatibility remains a key challenge, as MSI is limited to PCI 2.2 and later devices, leaving older hardware without support and requiring emulation modes that signal legacy interrupts using dedicated message TLPs, which can increase load on the Root Complex. Certain chipsets, particularly those using PCIe-to-PCI bridges, may fail to route MSI messages correctly, demanding manual interventions like firmware controls or kernel parameters to enable or disable MSI at the bus level. Early implementations of MSI in the Linux kernel suffered from race conditions during setup and teardown, potentially leading to incorrect vector assignments or lost interrupts, with notable issues addressed through fixes by 2007 in versions following 2.6.20. MSI's edge-triggered nature, lacking hardware acknowledgment, can lead to duplicate interrupts if a device generates multiple signals before software handling, requiring proper driver masking; this may exacerbate issues in multi-tenant environments without IOMMU protections. Mitigations include automatic fallback to legacy pin-based interrupts when MSI allocation fails, global disablement via kernel parameters like pci=nomsi, and driver-specific workarounds for known hardware quirks.
Recent enhancements, such as the comprehensive rework of the MSI domain subsystem in Linux kernel 6.2, improve allocation efficiency and reduce configuration errors across diverse architectures.

Hardware Implementations

x86 Architecture

In x86 architecture, Message Signaled Interrupts (MSI) integrate with the Advanced Programmable Interrupt Controller (APIC) by leveraging the Local APIC (LAPIC) on each processor core for interrupt delivery. Devices generate MSI by performing a memory write to a fixed physical address of 0xFEE00000, which maps to the LAPIC base in the processor's memory space (typically spanning 0xFEE00000 to 0xFEE00FFF), with the write data specifying the interrupt vector (8 bits, values 32-255) and optional delivery mode details. This mechanism allows direct targeting of specific LAPIC instances without relying on shared interrupt lines, enabling scalable interrupt distribution across multi-core systems. The x2APIC extension, introduced to support systems with over 255 logical processors, enhances MSI compatibility by expanding the destination ID field from 8 bits (in xAPIC) to 32 bits, facilitating precise interrupt affinity in multi-socket configurations while maintaining backward compatibility with existing PCI/MSI devices and the IOxAPIC. Chipset support for MSI bridging occurs via the Intel IOxAPIC (or AMD equivalent), which routes legacy pin-based interrupts as MSIs to the LAPIC and supports up to 256 total vectors, though practical limits per IOxAPIC were typically 24 redirection entries pre-2010, expandable through multiple units in larger systems. MSI has been a standard feature in modern Intel and AMD x86 processors since the Nehalem architecture in 2008, with integrated PCIe controllers further optimizing delivery. Benchmarks demonstrate that MSI significantly reduces IRQ sharing compared to pin-based interrupts, as each device can use dedicated vectors, minimizing contention and improving overall system throughput in high-interrupt scenarios like network processing. In PCIe environments, root ports handle legacy interrupt emulation by converting traditional INTx signals into MSI messages directed to the APIC, ensuring compatibility without dedicated pins. 
Recent PCIe generations, including 6.0 and 7.0, introduce no changes to the core MSI protocol or x86 integration, though x2APIC enhancements continue to improve interrupt affinity in multi-socket setups with hundreds of cores by enabling finer-grained vector targeting.

ARM and RISC-V Systems

In ARM architectures, Message Signaled Interrupts (MSIs) are integrated through the Generic Interrupt Controller (GIC) versions 3 and 4, introduced starting with GICv3 in 2013 to support scalable, virtualization-aware interrupt handling in multi-core systems. GICv3 and later versions enable MSIs primarily via the Interrupt Translation Service (ITS), which translates memory writes into locality-specific peripheral interrupts (LPIs), allowing devices to signal interrupts by writing to a specific address with data containing the interrupt ID in a frame-based mechanism. This approach supports large numbers of interrupts per redistributor, facilitating efficient routing in systems with numerous peripherals. Such implementations are common in Arm-based system-on-chips (SoCs), where GICv3+ handles PCIe device interrupts in mobile and server platforms. GICv4 builds on this by enhancing virtualization support for virtual LPIs (vLPIs), using similar signaling but with additional stream protocol commands for direct hypervisor injection, improving performance in virtualized environments. However, MSI implementations in ARM embedded systems can vary due to optional features like MSI-capable shared peripheral interrupts (SPIs), leading to inconsistencies across vendors and requiring platform-specific configuration. In server contexts, such as the AWS Graviton2 processor based on ARM Neoverse cores with up to 64 PCIe 4.0 lanes (while newer generations like Graviton3 and Graviton4 support PCIe 5.0 with 64 and 96 lanes, respectively, as of 2025), PCIe compatibility for MSIs is achieved through GICv3+ ITS support, though tuning for low-latency routing remains a challenge in high-density deployments. For RISC-V systems, MSI support was introduced through the Advanced Interrupt Architecture (AIA) extension, developed from 2022 and ratified as version 1.0 in June 2023, addressing limitations in earlier controllers like the Platform-Level Interrupt Controller (PLIC).
The Incoming Message-Signaled Interrupt Controller (IMSIC) handles MSIs by receiving memory writes and mapping them to per-hart interrupt files, with explicit support for MSI-X through configurable interrupt identities up to 2047. AIA integrates hybrid wired and MSI handling via the APLIC for edge/level-sensitive wired interrupts and the IMSIC for message-based ones, enabling flexible topologies in multi-core setups. This architecture was further incorporated into ratified profiles like RVA23 in October 2024, standardizing 64-bit application processors. AIA's MSI capabilities enable PCIe integration on RISC-V boards, where the IMSIC routes device-generated MSIs to appropriate harts, supporting scalable I/O in datacenter and embedded applications. Linux kernel support for RISC-V MSIs via AIA was added in version 6.8 (merged in February 2024), including drivers for the IMSIC and per-device MSI domains to facilitate PCIe and platform device compatibility.

Software Support

Linux Kernel

The Linux kernel has supported Message Signaled Interrupts (MSI) since version 2.6.12, released in 2005, enabling devices to generate interrupts via memory writes rather than pin assertions. Full support for the MSI-X extension, which allows up to 2048 independent interrupt vectors per device, was introduced in kernel 2.6.19 in 2006, though early implementations in versions 2.6.19 and 2.6.20 had known issues with masking and unmasking that could lead to lost interrupts. This support requires the CONFIG_PCI_MSI kernel configuration option and is available on architectures like x86 and ARM that provide compatible interrupt controllers. Device drivers configure MSI and MSI-X through the PCI subsystem, which handles interrupt allocation and mapping to Linux IRQs via IRQ domains. The core logic resides in the drivers/pci/msi.c module, which manages the allocation of MSI descriptors, programming of device capability structures, and integration with the generic IRQ handling layer. Legacy APIs like pci_enable_msix() allow drivers to request a specific number of vectors (up to the device's MSI-X table size, or at most 32 for MSI), while the modern pci_alloc_irq_vectors() function provides flexible allocation supporting MSI, MSI-X, or INTx, with optional affinity spreading across CPUs via the PCI_IRQ_AFFINITY flag. Interrupt affinity can be tuned post-allocation by writing hexadecimal CPU masks to files like /proc/irq/<irq_number>/smp_affinity, optimizing for multi-queue devices by binding interrupts to specific cores. Cleanup occurs via pci_free_irq_vectors() to release resources and restore pin-based interrupts if needed. Recent kernel developments have focused on enhancing MSI robustness and flexibility.
Linux 6.2, released in February 2023, introduced a significant rework of the MSI subsystem, establishing per-device MSI interrupt domains to better support PCIe endpoints and mixed interrupt types on the same device, including preparation for the Interrupt Message Store (IMS) specification that allows device-specific message storage beyond MSI-X table limits. In kernel 6.5, released in August 2023, fixes were applied to the generic IRQ resend mechanism, restoring proper handling of interrupt descriptors in hlist-based resend lists to prevent lost interrupts during error recovery or high-load scenarios affecting MSI-enabled devices. MSI-X usage is prevalent in high-performance drivers, such as NVMe storage controllers for per-queue interrupts and networking adapters like Intel's igb for multi-queue receive scaling, where it reduces interrupt overhead compared to shared legacy IRQs. Debugging MSI configuration involves tools like lspci -vv, which displays enabled MSI/MSI-X capabilities (marked with "+" if active) and vector counts for devices, alongside checking kernel logs for allocation messages and /sys/bus/pci/devices/<device>/msi_bus for bridge-level status. These features ensure scalable interrupt handling in modern systems, with the kernel prioritizing MSI-X for devices supporting more than one vector to maximize parallelism.
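The smp_affinity tuning mentioned above takes a hexadecimal CPU bitmask; computing one can be sketched as follows (the helper name and IRQ number are illustrative only).

```python
# Sketch of computing the hex CPU mask written to
# /proc/irq/<irq_number>/smp_affinity: bit N set means CPU N may
# service the interrupt.

def affinity_mask(cpus):
    """Return the hexadecimal bitmask string for a set of CPU numbers."""
    mask = 0
    for cpu in cpus:
        mask |= 1 << cpu
    return format(mask, "x")

# Pin an interrupt to CPUs 2 and 3:
print(affinity_mask([2, 3]))  # c
```

The resulting string would then be written to the file, e.g. `echo c > /proc/irq/42/smp_affinity` for a hypothetical IRQ 42, binding that vector's handling to cores 2 and 3.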

Microsoft Windows

Microsoft Windows provides support for Message Signaled Interrupts (MSIs) starting with partial implementation in Windows XP and Windows Server 2003, where drivers could link to the Iointex.lib library to use IoConnectInterruptEx, though without full MSI support or Plug and Play (PnP) manager integration. Full native support arrived with Windows Vista in 2007, integrated through the Windows Driver Framework (WDF), which allows drivers to register interrupt service routines using WdfInterruptCreate and related APIs for both MSI and MSI-X. In WDM drivers, message-signaled interrupts are connected with the CONNECT_MESSAGE_BASED version of IoConnectInterruptEx, enabling scalable handling for high-performance devices. The PnP manager handles interrupt resource assignment during device enumeration, configuring resources based on the capabilities declared in the device's INF file, which must set the MSISupported registry value to 1 to enable MSIs. The Hardware Abstraction Layer (HAL) abstracts the underlying interrupt controller, such as the Advanced Programmable Interrupt Controller (APIC), managing the message addresses and data values that devices write to trigger interrupts and ensuring portability across platforms, including ARM-based systems.

Windows prefers MSI-X over MSI for devices requiring multiple vectors: the operating system automatically selects MSI-X when both are supported by the device, allowing up to 2,048 vectors per device function in modern implementations, though constrained by the system's overall limit of approximately 255 available interrupt vectors. Network Driver Interface Specification (NDIS) drivers commonly leverage MSIs for network interface controllers (NICs), particularly to enhance Receive Side Scaling (RSS) performance by directing interrupts to specific CPUs, reducing latency in high-throughput scenarios. For debugging, developers use the kernel debugger's !pci extension command to inspect device configurations, including MSI and MSI-X capabilities, interrupt assignments, and resource allocations. The MSI implementation has seen no significant changes since 2023, maintaining compatibility with evolving standards.

Other Operating Systems

FreeBSD has supported Message Signaled Interrupts (MSI) since version 7.0, released in May 2008, with the kernel's PCI support code incorporating both MSI and MSI-X capabilities to enhance interrupt handling for PCI devices. The pci_alloc_msix() function enables drivers to allocate MSI-X interrupts, allowing flexible configuration of multiple interrupt vectors per device. This support extends to derivatives such as pfSense, where tunables like hw.pci.enable_msix="0" permit disabling MSI-X to troubleshoot performance issues on specific network interfaces. NetBSD has provided full MSI support since version 8.0, released in July 2018, including compatibility with the ARM Generic Interrupt Controller (GIC) for message-signaled interrupts on ARM-based systems; enhancements continued in later releases, such as 10.0 in March 2024, which added MSI/MSI-X support to drivers like ciss(4) for HP Smart Array RAID controllers. macOS, based on the XNU kernel, has supported MSI and MSI-X since macOS 10.5 (released in 2007), enabling efficient interrupt delivery for PCIe devices such as GPUs, peripherals, and storage controllers in Apple hardware. Oracle Solaris introduced MSI support in 2005 via Solaris Express 6/05 for x86 PCI environments, implementing both standard MSI and extended MSI-X as in-band memory write messages to improve interrupt delivery without dedicated pins. Haiku added support for MSI-X interrupts via the FreeBSD compatibility layer in the R1/Beta 1 release (September 2018), enabling better integration for PCI devices including WiFi and Ethernet adapters. VxWorks 7, starting with service releases such as SR0540 in 2018, supports MSI and MSI-X for embedded systems, facilitating low-latency interrupt processing on platforms such as ARM-based processors. OpenBSD 6.0, released in September 2016, introduced initial MSI-X support, including integration in the virtio(4) driver for virtualized environments.
Support for MSI on RISC-V architectures is emerging in experimental kernels as of 2024, leveraging the Incoming MSI Controller (IMSIC) defined by the Advanced Interrupt Architecture (AIA) specification to handle MSIs via memory transactions. In real-time operating systems (RTOS), MSI adoption remains limited due to stringent determinism requirements, as memory-based signaling can introduce variable latency in hard real-time scenarios despite reducing overall CPU overhead.
