
Deferred Procedure Call

A Deferred Procedure Call (DPC) is a kernel-mode mechanism in the Windows operating system that enables device drivers to postpone non-time-critical interrupt processing from an Interrupt Service Routine (ISR), which runs at a high interrupt request level (IRQL), to a lower-priority execution context at DISPATCH_LEVEL IRQL. This deferral ensures that ISRs complete quickly to minimize system latency, while allowing deferred tasks—such as completing I/O operations or updating device state—to execute later without blocking higher-priority interrupts. DPCs are queued by the system when an ISR calls a routine like IoRequestDpc, associating the call with a DPC object tied to the device's functional device object, which is initialized during driver startup. Once queued, the DPC routine typically executes on the same processor that handled the interrupt, in the context of an arbitrary thread at DISPATCH_LEVEL, where it must not access pageable memory or paged pool and must avoid operations that could cause deadlocks or high latency. Drivers may also create custom DPC objects for non-interrupt-related deferred work, such as timer expirations, using routines like KeInitializeDpc and KeInsertQueueDpc. In Windows kernel architecture, DPCs play a critical role in balancing responsiveness and efficiency, particularly for hardware drivers handling high-volume interrupts from devices like network cards or storage controllers. High DPC execution times can lead to system bottlenecks, measurable via tools like Event Tracing for Windows (ETW), and are often optimized to prevent audio glitches or input lag in applications. Unlike Asynchronous Procedure Calls (APCs), which execute in the context of a particular thread, DPCs operate strictly in kernel mode and are not bound to specific threads, making them suitable for interrupt-driven workloads.

Overview

Definition

A Deferred Procedure Call (DPC) is a kernel-mode mechanism in the Windows operating system designed to defer the execution of procedures from high-priority contexts, such as Interrupt Service Routines (ISRs), to a lower interrupt request level (IRQL), DISPATCH_LEVEL, after the initial high-IRQL processing completes. This deferral minimizes the time spent at elevated IRQLs during interrupt handling, thereby improving overall system responsiveness and efficiency. A DPC is represented by an opaque kernel structure known as KDPC. Drivers initialize a KDPC object using routines like KeInitializeDpc, which associates a callback routine and optional context with the DPC before it can be queued for execution. DPCs execute exclusively at DISPATCH_LEVEL IRQL, positioned below the device IRQLs at which ISRs run but above PASSIVE_LEVEL. At this level, interrupts at or below DISPATCH_LEVEL are masked on the current processor, so a running DPC cannot be preempted by the thread scheduler or by other software interrupts, though it can still be interrupted by device interrupts and other higher-priority kernel activity.
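
The following minimal sketch illustrates how a driver might declare a deferred routine and register it with KeInitializeDpc. Names such as SampleDeferredRoutine, SampleInitializeDpc, and the SAMPLE_CONTEXT layout are hypothetical; this is an illustration of the pattern, not a complete driver.

    #include <wdm.h>

    KDEFERRED_ROUTINE SampleDeferredRoutine;    /* hypothetical name */

    typedef struct _SAMPLE_CONTEXT {
        KDPC  Dpc;          /* must reside in nonpaged, resident memory */
        ULONG PendingWork;  /* example state shared between ISR and DPC */
    } SAMPLE_CONTEXT, *PSAMPLE_CONTEXT;

    VOID SampleDeferredRoutine(
        _In_     PKDPC Dpc,
        _In_opt_ PVOID DeferredContext,
        _In_opt_ PVOID SystemArgument1,
        _In_opt_ PVOID SystemArgument2)
    {
        PSAMPLE_CONTEXT ctx = (PSAMPLE_CONTEXT)DeferredContext;

        UNREFERENCED_PARAMETER(Dpc);
        UNREFERENCED_PARAMETER(SystemArgument1);
        UNREFERENCED_PARAMETER(SystemArgument2);

        /* Runs at IRQL = DISPATCH_LEVEL: no pageable memory, no waiting
           on dispatcher objects, and the work should stay short. */
        if (ctx != NULL) {
            ctx->PendingWork = 0;
        }
    }

    VOID SampleInitializeDpc(_Inout_ PSAMPLE_CONTEXT Ctx)
    {
        /* Registers the callback and context; the DPC is queued later,
           typically from an ISR, with KeInsertQueueDpc. */
        KeInitializeDpc(&Ctx->Dpc, SampleDeferredRoutine, Ctx);
    }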

Purpose

Deferred Procedure Calls (DPCs) serve as a critical mechanism in the Windows kernel to minimize the execution time of Interrupt Service Routines (ISRs), which operate at high interrupt request levels (IRQLs) and must complete rapidly to avoid blocking subsequent interrupts. By allowing ISRs to queue non-urgent tasks for later processing at the lower DISPATCH_LEVEL IRQL, DPCs enable drivers to offload work such as completing interrupt servicing or handling secondary operations, thereby reducing overall interrupt latency and preventing potential system hangs from prolonged ISR execution. This deferral improves performance by executing tasks in a more appropriate, less constrained context, and facilitates better stack management, including automatic switching to a dedicated per-processor DPC stack to prevent overflows of the limited kernel stack used by ISRs. In interrupt-driven environments, particularly for device drivers managing hardware events, DPCs ensure that high-priority interrupts are not unduly delayed by lengthy interrupt processing, which is essential for maintaining system responsiveness and avoiding disruptions in time-sensitive operations. However, misuse of DPCs, such as queuing excessive or inefficient routines, can lead to backlogs in per-processor DPC queues, resulting in elevated DPC latency that degrades system performance. High DPC execution times, ideally kept under 100 microseconds per Microsoft guidelines, become observable through tools like the Windows Performance Toolkit, which analyzes trace data to identify problematic drivers and quantify delays in DPC processing.

Historical Development

Origins in Operating System Design

The concept of deferred procedure calls emerged in the 1960s and 1970s as operating systems grappled with the need to handle hardware interrupts efficiently without prolonging the time spent in interrupt service routines (ISRs), which could otherwise degrade system responsiveness. Early systems recognized that ISRs should perform only urgent acknowledgment and state saving, deferring non-critical processing to avoid blocking higher-priority interrupts or extending interrupt-disable periods excessively. This separation addressed fundamental limitations in interrupt architectures, such as those in the PDP-11 family, where vectored interrupts provided fast entry points but slow context switches—often taking hundreds of microseconds—necessitated minimizing ISR execution to maintain throughput. One of the earliest structured approaches appeared in Multics, a pioneering time-sharing system operational from 1969, which implemented a dedicated interrupt interceptor to route hardware signals to appropriate handlers while supporting deferred processing through supervisor-level mechanisms for faults such as page faults. In Multics, interrupts triggered supervisor intervention to resolve conditions asynchronously, allowing the main program to resume quickly while deferred actions, such as fetching a missing page, were queued for later execution outside the immediate interrupt context. This design emphasized modularity, influencing subsequent kernels by demonstrating how interrupt handling could integrate with higher-level abstractions like event channels for non-urgent work. In Unix, particularly with Version 7 released in 1979, the bottom-half mechanism formalized interrupt deferral using software interrupts to schedule post-interrupt work at a lower priority, ensuring ISRs remained brief. The top half of an interrupt handler would quickly acknowledge the event and set flags or queue data, while the bottom half—executed via a software interrupt—handled deferred tasks like I/O completion without re-enabling interrupts prematurely. This approach, detailed in contemporary documentation, accommodated the PDP-11's constraints by reducing ISR paths to tens of instructions, promoting modularity in multiprogrammed settings. Similarly, Digital's VAX/VMS (Virtual Memory System), introduced in 1978, employed asynchronous system traps (ASTs) and fork procedures to defer execution of routines outside the primary interrupt path, delivering notifications at specified priority levels after interrupt acknowledgment. ASTs allowed processes to queue callbacks for events like resource availability, executed asynchronously to prevent ISR bloat, while fork procedures queued lightweight kernel routines for deferred computation on VAX systems. These mechanisms addressed early multiprocessor needs by enabling deferred work to run on idle CPUs, laying the groundwork for structured deferral in later operating systems, including Windows NT.

Implementation in Windows NT

The implementation of Deferred Procedure Calls (DPCs) in Windows NT originated with the release of Windows NT 3.1 in 1993, where the DPC was introduced as a core kernel mechanism for deferring interrupt-related work. Developed by the Windows NT team led by Dave Cutler, a veteran of Digital Equipment Corporation's VMS operating system, DPCs adapted concepts like VMS fork procedures to enable efficient handling of time-sensitive tasks in a multiprocessor environment. This design choice was integral to the NT kernel's executive subsystem, facilitating I/O processing and interrupt management by allowing interrupt service routines (ISRs) to queue callbacks for execution at a lower interrupt request level (IRQL). In early Windows NT versions, DPCs relied on Inter-Processor Interrupts (IPIs) to dispatch high-importance routines across multiple processors, ensuring prompt execution but incurring overhead from cross-processor communication. Windows Vista, which reached general availability in 2007, enhanced DPC dispatching by adding the MediumHighImportance level to the existing low, medium, and high importance levels, which optimized queue placement and reduced IPI usage in multiprocessor systems for better scalability and latency control. These changes allowed drivers to specify DPC urgency via APIs like KeSetImportanceDpc, placing high-importance DPCs at the front of per-processor queues to prioritize critical tasks without always triggering expensive IPIs. DPCs have remained a foundational element of the NT kernel lineage, deeply integrated with the I/O manager for handling I/O request packets (IRPs) and device interactions, and continuing to evolve through subsequent releases up to Windows 11 as of 2025. Key milestones include refinements to per-processor queue management in later NT releases to handle growing system complexity, and the introduction of threaded DPCs in Windows Vista, which enabled lower-priority DPCs to execute in dedicated threads at PASSIVE_LEVEL rather than directly in DPC context, mitigating latency spikes in audio and multimedia scenarios. This persistence underscores DPCs' role in balancing responsiveness and efficiency across the NT kernel's 30-year evolution.

Mechanism and Implementation

DPC Objects and Initialization

In the Windows kernel, Deferred Procedure Call (DPC) objects are represented by the opaque KDPC structure, which drivers allocate from resident memory such as a device extension or nonpaged pool but do not directly manipulate. The KDPC includes fields such as Type (indicating the object type, e.g., DpcObject or ThreadedDpcObject), Importance (specifying levels like MediumImportance or HighImportance), DpcListEntry (a list entry used for queuing), DeferredRoutine (a pointer to the callback function), DeferredContext (a driver-supplied context value), SystemArgument1 and SystemArgument2 (additional parameters passed to the callback), and TargetProcessor (for processor targeting). These fields are managed internally by the kernel, ensuring drivers interact solely through documented routines to maintain system stability. To initialize a custom DPC object, drivers call the KeInitializeDpc routine, providing a pointer to the allocated KDPC structure, a pointer to the DeferredRoutine callback, and an optional DeferredContext value. This routine sets up the DPC for later queuing via kernel functions like KeInsertQueueDpc, allowing drivers to defer non-urgent processing from high-IRQL contexts such as interrupt service routines. For DPCs associated with specific device objects, the system automatically provides one pre-allocated DPC object per DEVICE_OBJECT, which drivers initialize by calling IoInitializeDpcRequest with the device object pointer and a pointer to an IO_DPC_ROUTINE (also known as a DpcForIsr routine). This associates the callback with the device's built-in DPC, enabling queuing through IoRequestDpc from the driver's ISR. Drivers often create additional DPCs beyond the system-supplied one per device object, storing the KDPC in driver-allocated memory such as the device or controller extension for specialized deferral needs, such as CustomDpc routines or timer callbacks. However, drivers have no direct access to the internal contents of any DPC object—whether system-provided or driver-allocated—as all configuration and management occur via APIs like KeInitializeDpc or IoInitializeDpcRequest. The callback routine specified during initialization follows the KDEFERRED_ROUTINE signature: VOID (*KDEFERRED_ROUTINE)(IN PKDPC Dpc, IN PVOID DeferredContext OPTIONAL, IN PVOID SystemArgument1 OPTIONAL, IN PVOID SystemArgument2 OPTIONAL). Here, the Dpc parameter points to the KDPC object, DeferredContext carries driver-specific data from initialization, and SystemArgument1 and SystemArgument2 provide optional system- or driver-supplied arguments, enabling flexible handling of deferred tasks at DISPATCH_LEVEL IRQL.
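
A hedged sketch of the device-associated path follows; the routine names SampleDpcForIsr and SampleInitDeviceDpc are hypothetical, and the DpcForIsr body is deliberately simplified to show only the shape of an IRP completion at DISPATCH_LEVEL.

    #include <wdm.h>

    IO_DPC_ROUTINE SampleDpcForIsr;    /* hypothetical name */

    VOID SampleDpcForIsr(
        _In_     PKDPC Dpc,
        _In_     PDEVICE_OBJECT DeviceObject,
        _Inout_  PIRP Irp,
        _In_opt_ PVOID Context)
    {
        UNREFERENCED_PARAMETER(Dpc);
        UNREFERENCED_PARAMETER(DeviceObject);
        UNREFERENCED_PARAMETER(Context);

        /* Runs at IRQL = DISPATCH_LEVEL in an arbitrary thread context:
           finish the interrupt-driven transfer and complete the IRP the
           ISR deferred (simplified here). */
        if (Irp != NULL) {
            Irp->IoStatus.Status = STATUS_SUCCESS;
            Irp->IoStatus.Information = 0;
            IoCompleteRequest(Irp, IO_NO_INCREMENT);
        }
    }

    /* Called once during device setup (for example, from AddDevice) to
       associate the callback with the device object's built-in KDPC. */
    VOID SampleInitDeviceDpc(_In_ PDEVICE_OBJECT DeviceObject)
    {
        IoInitializeDpcRequest(DeviceObject, SampleDpcForIsr);
    }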

Queuing and Execution

In the Windows kernel, the queuing of a Deferred Procedure Call (DPC) typically occurs from an ISR running at a device IRQL higher than DISPATCH_LEVEL. For device-associated DPCs, the ISR calls IoRequestDpc, providing the device object, IRP, and context to queue the associated DpcForIsr routine. For custom DPCs, the ISR invokes KeInsertQueueDpc, passing a pointer to the initialized KDPC structure along with optional system arguments for context. Both mechanisms insert the DPC into the target processor's per-processor DPC queue, located within the kernel processor control block (KPRCB), if the DPC is not already queued; KeInsertQueueDpc returns TRUE upon successful insertion and FALSE otherwise. The kernel executes DPCs at DISPATCH_LEVEL IRQL, which is lower than device interrupt levels but higher than the levels at which threads normally run, ensuring they preempt ordinary thread code while remaining interruptible by higher-priority hardware interrupts. To prevent overflow of the limited kernel stack, the kernel switches execution to a dedicated per-processor DPC stack while draining the queue. The DPC queue is drained through mechanisms such as a DPC software interrupt raised at DISPATCH_LEVEL or by the idle loop on the processor, allowing the callback routine specified in the KDPC to run synchronously until completion. Dispatching of queued DPCs begins when the processor's IRQL drops below DISPATCH_LEVEL, often immediately after the ISR returns or at the end of the current thread's time quantum. For DPCs targeted to a remote processor via KeSetTargetProcessorDpc, the kernel may send an Inter-Processor Interrupt (IPI) to the target if the DPC importance is high, prompting it to drain its queue promptly; otherwise, execution awaits natural opportunities such as quantum end or idling. The DPC dispatcher processes items from the queue in order, executing each routine to completion until the queue is empty. Queue management in the Windows kernel relies on per-processor lists to minimize contention on multiprocessor systems, with each KPRCB maintaining a separate DPC queue ordered by importance—normal-importance DPCs append to the end, while high-importance DPCs (set via KeSetImportanceDpc) insert at the front for earlier execution. If queue depth or insertion rate exceeds thresholds, the kernel accelerates draining to prevent backlogs; low-importance DPCs may be deferred during high load to prioritize urgent work, though the kernel does not reject a queuing request outright unless the DPC is already pending.
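
As an illustration of the two queuing paths, the hedged sketch below shows an ISR deferring its follow-up work. The names SampleIsr and SAMPLE_DEVICE_EXTENSION are hypothetical, and a real driver would normally use only one of the two calls shown, not both.

    #include <wdm.h>

    KSERVICE_ROUTINE SampleIsr;    /* hypothetical name */

    typedef struct _SAMPLE_DEVICE_EXTENSION {
        PDEVICE_OBJECT DeviceObject;   /* back-pointer to the device object */
        KDPC           CustomDpc;      /* initialized earlier with KeInitializeDpc */
    } SAMPLE_DEVICE_EXTENSION, *PSAMPLE_DEVICE_EXTENSION;

    BOOLEAN SampleIsr(
        _In_ PKINTERRUPT Interrupt,
        _In_ PVOID ServiceContext)
    {
        PSAMPLE_DEVICE_EXTENSION ext = (PSAMPLE_DEVICE_EXTENSION)ServiceContext;

        UNREFERENCED_PARAMETER(Interrupt);

        /* Hardware-specific: check that this device raised the interrupt
           and acknowledge it (omitted), then defer the remaining work. */

        /* Path 1: queue the device object's built-in DpcForIsr routine. */
        IoRequestDpc(ext->DeviceObject, ext->DeviceObject->CurrentIrp, NULL);

        /* Path 2: queue a custom DPC; FALSE means it was already pending. */
        KeInsertQueueDpc(&ext->CustomDpc, NULL, NULL);

        return TRUE;    /* this device's interrupt was serviced */
    }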

Types of DPCs

Ordinary DPCs

Ordinary DPCs, also known as normal DPCs, represent the standard type of deferred procedure call in the Windows kernel. They execute at DISPATCH_LEVEL IRQL in kernel mode and operate at the default medium importance level unless otherwise specified. By default, they are queued to the processor currently executing the queuing routine, such as through a call to KeInsertQueueDpc without a pre-set target, ensuring execution occurs on the same CPU to maintain locality. This default affinity simplifies implementation for single-processor scenarios or when work is inherently tied to the interrupting CPU. Drivers can optionally designate a target processor for an ordinary DPC using KeSetTargetProcessorDpc after initialization with KeInitializeDpc but before queuing, as sketched below. The target can be a specific zero-based processor number, the current processor ((CCHAR)-1), or any available processor ((CCHAR)-2). This feature facilitates load balancing in symmetric multiprocessing (SMP) environments by allowing work to be offloaded to less busy CPUs. If targeted to a different processor, the kernel may issue an inter-processor interrupt (IPI) to prompt execution, depending on the DPC's importance level and queue state. Ordinary DPCs are particularly suited for quick, local deferrals of non-urgent tasks that must be postponed from higher IRQLs, such as completing I/O operations initiated by an interrupt service routine (ISR). For instance, after an ISR acknowledges an interrupt and performs minimal handling, an ordinary DPC can dequeue the next I/O request packet (IRP) for processing, complete the current IRP if feasible, or reprogram the device for subsequent transfers or error retries. Such use cases leverage the DPC's ability to run at DISPATCH_LEVEL IRQL, where a broader set of kernel routines is available than at device IRQLs, while keeping interrupt latency low. In terms of behavior, ordinary DPCs are inserted into the target processor's DPC queue and processed in first-in, first-out (FIFO) order among DPCs of similar importance. If the queue was previously empty, queuing an ordinary DPC triggers prompt processing at DISPATCH_LEVEL upon return from the ISR, provided the importance is not set to low. The KeInsertQueueDpc routine returns TRUE if the DPC is successfully queued (indicating it was not already pending) or FALSE if it was already in the queue, preventing duplicate insertions. Ordinary DPCs support configurable importance levels via KeSetImportanceDpc, which influence queue positioning and dispatch timing: LowImportance places the DPC at the end of the queue without triggering immediate dispatch; MediumImportance (the default) appends to the end but initiates dispatch promptly; MediumHighImportance, introduced in Windows Vista, appends to the end while enabling more aggressive dispatching; and HighImportance positions the DPC at the queue's head and forces prompt dispatch. These levels evolved from the basic low/medium/high scheme in early implementations to provide finer control in modern multiprocessor kernels. Despite their simplicity, ordinary DPCs have limitations in SMP environments, as default queuing to the current processor can result in uneven load distribution across CPUs, potentially leading to bottlenecks on heavily interrupted processors without built-in balancing for inter-processor deferral. Additionally, since they run at DISPATCH_LEVEL, they cannot perform operations requiring PASSIVE_LEVEL, such as accessing pageable code or data or handling page faults, which limits their use for more complex deferred work.
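
A brief hedged sketch of this configuration step: the function name SampleConfigureDpc and the choice of processor 1 are illustrative assumptions, not requirements.

    #include <wdm.h>

    /* Tune an ordinary DPC's target processor and importance between
       KeInitializeDpc and the first KeInsertQueueDpc. */
    VOID SampleConfigureDpc(_Inout_ PKDPC Dpc)
    {
        /* Run the deferred routine from processor 1's DPC queue
           instead of the interrupting processor's. */
        KeSetTargetProcessorDpc(Dpc, 1);

        /* Put the DPC at the head of that queue and request prompt
           dispatch (may trigger an IPI when the target is another CPU). */
        KeSetImportanceDpc(Dpc, HighImportance);
    }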

Threaded DPCs

Threaded DPCs represent an advanced variant of deferred procedure calls introduced with Windows Vista and available in subsequent Windows versions. Unlike ordinary DPCs, threaded DPCs are normally executed at PASSIVE_LEVEL IRQL in the context of a dedicated high-priority system thread, which in principle permits operations that could incur page faults; in practice, however, the routine must still be written to tolerate DISPATCH_LEVEL execution, as described below. This makes them suitable for longer-running deferred tasks that would otherwise monopolize the processor at DISPATCH_LEVEL. Threaded DPCs are enabled by default, but they can be disabled system-wide for compatibility or latency reasons, in which case they behave as ordinary DPCs. To set up a threaded DPC, a driver initializes a DPC object using KeInitializeThreadedDpc instead of KeInitializeDpc. Like ordinary DPCs, threaded DPCs support targeting a specific processor via KeSetTargetProcessorDpc (or KeSetTargetProcessorDpcEx for processor groups in Windows 7 and later) and importance levels via KeSetImportanceDpc. They are queued using KeInsertQueueDpc or related routines, and if targeted to a different processor, an IPI may be used to schedule execution on the target. However, because they run in a thread context, threaded DPCs introduce slightly higher scheduling overhead compared to ordinary DPCs but reduce the risk of DPC backlogs and system-wide delays. When queued, a threaded DPC is placed in a separate per-processor queue (distinct from the ordinary DPC queue) within the kernel processor control block (KPRCB). The system drains this queue from a dedicated high-priority thread running at PASSIVE_LEVEL. If threaded DPC execution is disabled, the routine instead runs at DISPATCH_LEVEL as an ordinary DPC. High-importance threaded DPCs can trigger immediate IPIs for faster dispatching, while lower-importance ones wait for natural dispatch opportunities to minimize interference with active workloads. The primary advantages of threaded DPCs lie in their ability to handle longer deferred work without blocking kernel dispatching, improving overall responsiveness in multiprocessor environments. They are commonly used for tasks such as complex I/O completions or other callbacks that benefit from passive-level execution. However, drivers must ensure the DPC routine is written to execute safely at DISPATCH_LEVEL as a fallback and must avoid recursive queuing to prevent overflows or deadlocks. This approach enhances scalability for modern drivers while maintaining compatibility with legacy DPC behavior.
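
A hedged sketch of the initialization pattern follows; SampleThreadedDpcRoutine and SampleInitThreadedDpc are hypothetical names, and the IRQL check simply illustrates the required fallback behavior.

    #include <wdm.h>

    KDEFERRED_ROUTINE SampleThreadedDpcRoutine;    /* hypothetical name */

    VOID SampleThreadedDpcRoutine(
        _In_     PKDPC Dpc,
        _In_opt_ PVOID DeferredContext,
        _In_opt_ PVOID SystemArgument1,
        _In_opt_ PVOID SystemArgument2)
    {
        UNREFERENCED_PARAMETER(Dpc);
        UNREFERENCED_PARAMETER(DeferredContext);
        UNREFERENCED_PARAMETER(SystemArgument1);
        UNREFERENCED_PARAMETER(SystemArgument2);

        /* Usually runs at PASSIVE_LEVEL in a system thread, but must also
           be correct at DISPATCH_LEVEL because the system may fall back
           to ordinary DPC execution when threaded DPCs are disabled. */
        if (KeGetCurrentIrql() == DISPATCH_LEVEL) {
            /* Take only the non-pageable, non-waiting path here. */
        }
    }

    VOID SampleInitThreadedDpc(_Inout_ PKDPC Dpc, _In_opt_ PVOID Context)
    {
        KeInitializeThreadedDpc(Dpc, SampleThreadedDpcRoutine, Context);

        /* Queued later exactly like an ordinary DPC, e.g.:
           KeInsertQueueDpc(Dpc, NULL, NULL); */
    }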

Usage in Device Drivers

Common Scenarios

In network device drivers, the interrupt service routine (ISR) quickly acknowledges the receipt of incoming packets, disables further interrupts from the adapter, and queues a deferred procedure call (DPC) to process the packet data and complete the associated I/O request packet (IRP). This division ensures that the ISR executes briefly at device IRQL (DIRQL), deferring resource-intensive tasks like data copying or protocol processing to the DPC at DISPATCH_LEVEL, thereby reducing system interrupt latency. DPCs integrate seamlessly with kernel timers in device drivers, serving as the callback mechanism invoked when a timer set by KeSetTimer or KeSetTimerEx expires. For instance, a driver might set a recurring timer to poll hardware registers for status updates, with the associated DPC handling the polling logic and error checking without requiring constant high-priority interrupt handling. In storage device drivers built on the Storport framework, the miniport's interrupt routine (HwStorInterrupt) responds to hardware interrupts by performing minimal acknowledgment and queuing a DPC using StorPortIssueDpc for deferred execution. The resulting DPC routine, such as HwStorDpcRoutine, then manages post-interrupt tasks including buffer synchronization, data transfer completion, or logging I/O events, allowing the driver to handle disk operations efficiently while keeping time spent at DIRQL short. Audio device drivers leverage DPCs to address buffer underruns detected during interrupt handling, where the ISR identifies the event but defers buffer refilling or stream adjustment to the DPC to avoid prolonging the high-IRQL phase. This approach enables audio processing without blocking in the ISR, mitigating playback glitches by shifting buffer management to DISPATCH_LEVEL execution.
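
A hedged sketch of the timer-driven polling pattern: SAMPLE_POLL_STATE, SamplePollDpc, SampleStartPolling, and the 10-millisecond period are illustrative assumptions.

    #include <wdm.h>

    typedef struct _SAMPLE_POLL_STATE {
        KTIMER Timer;
        KDPC   TimerDpc;
    } SAMPLE_POLL_STATE, *PSAMPLE_POLL_STATE;

    KDEFERRED_ROUTINE SamplePollDpc;    /* hypothetical name */

    VOID SamplePollDpc(
        _In_     PKDPC Dpc,
        _In_opt_ PVOID DeferredContext,
        _In_opt_ PVOID SystemArgument1,
        _In_opt_ PVOID SystemArgument2)
    {
        PSAMPLE_POLL_STATE state = (PSAMPLE_POLL_STATE)DeferredContext;

        UNREFERENCED_PARAMETER(Dpc);
        UNREFERENCED_PARAMETER(SystemArgument1);
        UNREFERENCED_PARAMETER(SystemArgument2);
        UNREFERENCED_PARAMETER(state);

        /* Poll device registers / check status here, at DISPATCH_LEVEL. */
    }

    VOID SampleStartPolling(_Inout_ PSAMPLE_POLL_STATE State)
    {
        LARGE_INTEGER due;

        due.QuadPart = -10 * 1000 * 10;    /* first expiration in 10 ms
                                              (relative time, 100-ns units) */

        KeInitializeTimer(&State->Timer);
        KeInitializeDpc(&State->TimerDpc, SamplePollDpc, State);

        /* Recurring timer: expires every 10 ms and queues the DPC each time. */
        KeSetTimerEx(&State->Timer, due, 10, &State->TimerDpc);
    }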

Best Practices and Limitations

When implementing Deferred Procedure Calls (DPCs) in Windows device drivers, developers must adhere to strict guidelines to ensure system stability and performance. DPC routines should execute quickly, ideally completing within 100 microseconds, to minimize interference with other operations and prevent delays in system responsiveness. For tasks exceeding this threshold, routines should promptly queue work items using IoQueueWorkItem or ExQueueWorkItem to handle deferred processing at PASSIVE_LEVEL, avoiding prolonged execution at DISPATCH_LEVEL. Synchronization with interrupt service routines (ISRs) or ISR-shared resources requires spin locks or KeSynchronizeExecution with a SynchCritSection routine, as these mechanisms are safe to use at DISPATCH_LEVEL. DPC routines must avoid operations that could block or sleep, such as acquiring mutexes, performing paging I/O, or accessing pageable code and data, since these are prohibited at DISPATCH_LEVEL and can lead to deadlocks or system crashes. Developers should also refrain from using KeStallExecutionProcessor for delays longer than 100 microseconds, opting instead for timer-based DPCs to schedule follow-up work. To verify compliance, the Windows Driver Kit (WDK) provides tools such as ETW-based tracing (e.g., via tracelog) for measuring DPC execution times during development and testing. Key limitations of DPCs include their inability to block, which restricts them to non-waiting operations, and their reliance on the kernel stack, limited to approximately 12 KB on x86 systems (or 24 KB on x64), posing a risk of stack overflow from deep call chains, large local variables, or recursive calls. High volumes of queued DPCs can cause latency spikes, as they preempt lower-priority threads and accumulate in per-processor queues, potentially disrupting time-sensitive applications such as audio or video playback. Troubleshooting such issues involves analyzing traces with Windows Performance Analyzer (WPA), which visualizes DPC/ISR durations, queue depths, and offending modules to identify problematic drivers. In modern Windows versions (Windows Vista and later), threaded DPCs offer an alternative for lower-priority work, executing at PASSIVE_LEVEL in kernel threads to reduce the impact on dispatch latency, though they introduce slight overhead compared to traditional DPCs; these are enabled by default but can be disabled if needed. For non-time-critical tasks, system work items remain preferable over DPCs to further mitigate latency and starvation risks.
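
As a hedged sketch of the offloading pattern described above, the example below assumes the driver preallocated a PIO_WORKITEM (with IoAllocateWorkItem) and passed it as the DPC's DeferredContext; the routine names SamplePassiveWork and SampleOffloadingDpc are hypothetical.

    #include <wdm.h>

    IO_WORKITEM_ROUTINE SamplePassiveWork;    /* hypothetical name */

    VOID SamplePassiveWork(
        _In_     PDEVICE_OBJECT DeviceObject,
        _In_opt_ PVOID Context)
    {
        UNREFERENCED_PARAMETER(DeviceObject);
        UNREFERENCED_PARAMETER(Context);

        /* Runs later at PASSIVE_LEVEL in a system worker thread: waits,
           pageable code/data, and paging I/O are permitted here. */
    }

    KDEFERRED_ROUTINE SampleOffloadingDpc;    /* hypothetical name */

    VOID SampleOffloadingDpc(
        _In_     PKDPC Dpc,
        _In_opt_ PVOID DeferredContext,    /* assumed: a preallocated PIO_WORKITEM */
        _In_opt_ PVOID SystemArgument1,
        _In_opt_ PVOID SystemArgument2)
    {
        PIO_WORKITEM workItem = (PIO_WORKITEM)DeferredContext;

        UNREFERENCED_PARAMETER(Dpc);
        UNREFERENCED_PARAMETER(SystemArgument1);
        UNREFERENCED_PARAMETER(SystemArgument2);

        /* Keep only the brief, time-critical part here, then hand the
           long-running remainder to a worker thread. */
        IoQueueWorkItem(workItem, SamplePassiveWork, DelayedWorkQueue, NULL);
    }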

Comparisons with Other Systems

In Linux Kernel

In the Linux kernel, the mechanisms most closely equivalent to Windows Deferred Procedure Calls (DPCs) for deferring non-critical work from interrupt service routines (ISRs) are softirqs and tasklets, which execute in a software-interrupt context shortly after the hard interrupt handler completes. Softirqs are primarily used for system-wide deferred tasks such as network packet reception (e.g., the NET_RX softirq) and timer processing, while tasklets serve as a dynamic, driver-oriented bottom-half mechanism akin to DPCs, allowing ISRs to schedule callbacks without delaying urgent hardware handling. Both operate in an atomic context, prohibiting sleepable operations, but they execute after the hard interrupt returns to minimize time spent in the hardirq path. A primary distinction from DPCs lies in the design of softirqs, which are statically defined at compile time with a fixed set of types (e.g., NET_RX for incoming network data or TIMER_SOFTIRQ for timer events), limiting flexibility compared to the fully dynamic DPC objects in Windows. Tasklets, implemented atop softirqs, offer dynamic registration similar to DPCs but serialize execution per instance across CPUs, running from per-CPU lists (tasklet_vec and tasklet_hi_vec) without explicit targeting of a particular processor, which promotes locality and reduces the need for IPIs. Softirqs are raised via raise_softirq() from ISRs or other kernel code, with execution triggered by do_softirq()—typically called at the end of hardirq handling via invoke_softirq() or from the per-CPU ksoftirqd threads under heavy load—so deferred work integrates into the interrupt exit path. Linux's approach provides flexibility for symmetric multiprocessing (SMP) environments through per-CPU data structures and integration with read-copy-update (RCU) for lockless synchronization, allowing efficient scaling without the IRQL-bound constraints of DPCs at DISPATCH_LEVEL. Like Windows DPCs, Linux uses per-processor queuing for locality, but softirqs are drained in a fixed order and tasklets expose only normal and high-priority scheduling rather than graduated DPC importance levels, while supporting preemption in certain configurations, such as PREEMPT_RT, where softirq work runs in kernel threads, enabling interruptible bottom-half processing and allowing higher-priority real-time tasks to run first. This design favors locality-managed execution that can run the same softirq concurrently on multiple CPUs, while tasklets guarantee non-reentrancy per handler for simpler driver coding.
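
For comparison, here is a hedged sketch using the classic tasklet API; the names sample_tasklet, sample_tasklet_fn, and sample_irq_handler are hypothetical, and recent kernels favor tasklet_setup(), threaded IRQs, or workqueues instead.

    #include <linux/interrupt.h>
    #include <linux/module.h>

    static struct tasklet_struct sample_tasklet;

    /* Bottom half: runs in softirq (atomic) context after the hard IRQ
       handler returns -- no sleeping, broadly comparable to DISPATCH_LEVEL. */
    static void sample_tasklet_fn(unsigned long data)
    {
        pr_debug("deferred processing for device cookie %lu\n", data);
    }

    /* Top half: acknowledge the hardware quickly, then defer the rest. */
    static irqreturn_t sample_irq_handler(int irq, void *dev_id)
    {
        (void)irq;
        (void)dev_id;
        tasklet_schedule(&sample_tasklet);
        return IRQ_HANDLED;
    }

    /* In probe():  tasklet_init(&sample_tasklet, sample_tasklet_fn, 0);
                    request_irq(irq, sample_irq_handler, 0, "sample", dev);
       In remove(): tasklet_kill(&sample_tasklet); free_irq(irq, dev);    */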

In Other Operating Systems

In early UNIX systems, interrupt handling employed a split approach known as top halves and bottom halves, where the top half performed minimal immediate processing in interrupt context and the bottom half deferred non-urgent work to avoid prolonged interrupt masking and preserve responsiveness. This concept evolved in BSD variants such as FreeBSD, where software interrupts (SWIs) serve as a mechanism to queue deferred work from hardware interrupt handlers, allowing that work to be scheduled via lightweight interrupt threads for later execution outside the primary interrupt context. In FreeBSD, additional deferral is achieved through callouts, which schedule timed callbacks via the softclock software interrupt for periodic or delayed tasks, and tsleep(), which enables threads to block on wait channels until an event like a wakeup signal occurs, facilitating deferred processing in drivers. The VMS operating system influenced later designs through its asynchronous system traps (ASTs), which deliver deferred callbacks in a process's context and served as the model for Windows NT's Asynchronous Procedure Calls, and its fork procedures, kernel-level mechanisms that directly inspired Deferred Procedure Calls by postponing non-critical tasks from interrupt service routines. In real-time operating systems like VxWorks, deferred interrupt work is managed via Deferred Interrupt Service Routines (DISRs) or by signaling semaphores and message queues from the ISR to notify a dedicated task for subsequent processing, ensuring the ISR remains brief while offloading complex operations to task context. Similarly, IBM's z/OS mainframe environment uses a comparable split, where initial interrupt handling is minimal and further work is dispatched via Service Request Blocks (SRBs), dispatchable units that execute asynchronously to maintain system throughput. Modern operating systems increasingly favor work queues for thread-based deferral of interrupt work, enabling sleepable operations in thread context and reducing reliance on non-preemptible handlers to enhance predictability and responsiveness.
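
As a hedged illustration of the RTOS pattern described above, the following VxWorks-style sketch has the ISR merely give a binary semaphore that wakes a worker task; the names sampleIsr, sampleDeferredTask, and sampleStart, as well as the priority and stack size, are illustrative assumptions.

    #include <vxWorks.h>
    #include <semLib.h>
    #include <taskLib.h>

    static SEM_ID deferredSem;

    /* Connected to the interrupt (for example with intConnect()); the ISR
       only acknowledges the hardware and wakes the worker task. */
    static void sampleIsr(int arg)
    {
        (void)arg;
        semGive(deferredSem);
    }

    /* Runs in task context, where blocking and longer processing are allowed. */
    static void sampleDeferredTask(void)
    {
        for (;;)
        {
            semTake(deferredSem, WAIT_FOREVER);
            /* Perform the deferred device processing here. */
        }
    }

    void sampleStart(void)
    {
        deferredSem = semBCreate(SEM_Q_FIFO, SEM_EMPTY);
        taskSpawn("tDeferred", 100, 0, 4096, (FUNCPTR)sampleDeferredTask,
                  0, 0, 0, 0, 0, 0, 0, 0, 0, 0);
    }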
