Critical section

In concurrent programming, a critical section refers to a segment of code within a concurrent program—such as one involving multiple threads or processes—that accesses shared resources, such as variables, data structures, or devices, and must therefore execute atomically to prevent interference and ensure correct operation. This atomicity is crucial because concurrent access without proper synchronization can lead to race conditions, where the outcome depends on the unpredictable timing of thread execution, potentially causing data corruption or inconsistent states.

The concept of critical sections emerged as a foundational problem in concurrency, first formalized by Edsger Dijkstra in 1965, who defined the requirements for mutual exclusion: ensuring that only one process enters its critical section at a time, while allowing progress (no deadlock) and bounded waiting (no indefinite postponement). These properties enable shared resources to behave as if accessed sequentially, even in parallel environments, which is essential for reliable software in operating systems, databases, and distributed applications. Without such safeguards, failures can be severe, as the Mars Pathfinder's priority inversion incident in 1997 showed: synchronization faults can lead to system crashes or mission failures.

To implement critical sections, programmers use synchronization primitives like locks, mutexes, semaphores, or monitors, which enforce mutual exclusion through hardware-supported atomic operations such as test-and-set or compare-and-swap. In systems like Windows, critical section objects provide lightweight, process-local synchronization via functions such as EnterCriticalSection and LeaveCriticalSection, offering efficiency over heavier inter-process mutexes by avoiding kernel-mode transitions when uncontended. Similarly, POSIX threads (pthreads) employ mutexes to protect critical sections, with implementations relying on operating system facilities like futexes for optimal performance. Modern languages and frameworks, including Java's synchronized blocks and C++'s std::mutex, abstract these mechanisms to simplify concurrent programming while upholding the core principles of safety and efficiency.

Fundamentals

Definition

In concurrent programming, a critical section is a segment of code within a thread or process that accesses a shared resource, such as a variable or data structure, and requires mutual exclusion to ensure that only one such entity executes it at a time, thereby preventing race conditions or inconsistent states from concurrent modifications. This isolation is essential in multiprogramming environments where multiple threads or processes may attempt simultaneous access to the same resource. Solutions to the critical section problem must satisfy three key properties: mutual exclusion, which guarantees that no two processes can execute their critical sections concurrently; progress, ensuring that if no process is in its critical section and some processes wish to enter, the selection of the next process to enter cannot be postponed indefinitely; and bounded waiting, which prevents any process from being indefinitely starved by requiring that there exists a bound on the number of times other processes can enter their critical sections after a given process has requested entry.

The term "critical section" originated in the context of multiprogramming during the 1960s, with early formalization by Edsger Dijkstra in his 1965 work on cooperating sequential processes, where he introduced semaphores as a mechanism for guaranteeing mutual exclusion over such sections. A simple example of a critical section involves updating a shared variable in a multithreaded program. The following pseudocode illustrates this for two threads:
Thread 1 (and similarly for Thread 2):
do {
    entry_protocol();        // e.g., acquire lock
    counter = counter + 1;   // critical section: shared resource access
    exit_protocol();         // e.g., release lock
    remainder_section();     // non-critical code
} while (true);
This structure ensures the increment operation is atomic, avoiding race conditions where interleaved executions might lead to lost updates.
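As a minimal concrete sketch of this pattern in C with POSIX threads (the thread count and iteration count are illustrative choices), a mutex plays the role of the entry and exit protocols:

#include <pthread.h>
#include <stdio.h>

static long counter = 0;                                  // shared resource
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);    // entry protocol
        counter = counter + 1;        // critical section
        pthread_mutex_unlock(&lock);  // exit protocol
    }
    return NULL;                      // remainder section omitted
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);  // prints 200000: no updates lost
    return 0;
}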

Purpose and Need

Critical sections are essential in concurrent programming to address race conditions, which occur when multiple threads or processes access shared resources in an interleaved manner, leading to unpredictable and incorrect outcomes. For instance, a lost update can happen when two threads read the same value from a shared variable, modify their local copies independently, and then write back, overwriting one another's changes. This problem arises because operations on shared data are not atomic, allowing interleavings that violate the intended sequential logic.

The need for critical sections becomes particularly acute in multiprocessor systems or multithreaded environments, where parallelism is exploited to improve performance but introduces risks to data consistency and overall program correctness. Without protection, concurrent access to shared data can result in inconsistent states, such as erroneous computations or corrupted data structures, undermining the reliability of the system. In these settings, critical sections enforce mutual exclusion, ensuring that only one thread executes the sensitive code at a time, thereby serializing access and preserving the integrity of shared resources.

A classic consequence of neglecting critical sections is illustrated in scenarios involving financial transactions, like concurrent withdrawals from a bank account. Suppose two threads each attempt to withdraw $50 from an account with a $100 balance: both may read the balance simultaneously, proceed with the withdrawal assuming sufficient funds, and then update the balance to $50 each, resulting in an incorrect final balance of $50 instead of $0. This inconsistency can lead to severe errors, such as overdrafts or financial losses, highlighting the critical importance of synchronization in real-world applications.

To demonstrate the issue more concretely, consider two threads incrementing a shared counter initialized to 5. Without synchronization, both threads might read the value 5, increment their local copies to 6, and write 6 back to the shared variable, yielding a final value of 6 rather than the expected 7 after two increments. This lost update exemplifies how race conditions erode the accuracy of even simple operations, necessitating critical sections to guarantee atomicity and correctness in concurrent execution.
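The bank-account race can be reproduced in a short C sketch (the usleep call is an artificial delay added here to widen the race window; it is not part of the scenario above):

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static int balance = 100;   // shared account balance, deliberately unprotected

static void *withdraw(void *arg) {
    int amount = 50;
    int observed = balance;            // 1. read: both threads may see 100
    if (observed >= amount) {          // 2. check against the stale copy
        usleep(1000);                  //    widen the race window (demo only)
        balance = observed - amount;   // 3. write back: one update is lost
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, withdraw, NULL);
    pthread_create(&t2, NULL, withdraw, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("final balance = %d\n", balance);  // typically 50, not the correct 0
    return 0;
}

Wrapping steps 1 through 3 in a mutex-protected critical section restores the correct final balance of $0.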

Implementation Approaches

Software Techniques

Software techniques for implementing critical sections primarily rely on algorithmic approaches and higher-level synchronization primitives that operate in shared-memory environments, ensuring mutual exclusion without direct hardware intervention. These methods emphasize portability across different systems and focus on software-based coordination among processes or threads. Early solutions addressed the challenge with busy-waiting strategies, where processes repeatedly check shared variables until conditions allow entry into the critical section.

Dekker's algorithm, developed in 1965, represents the first known correct software solution to the mutual exclusion problem for two processes. It uses a pair of flags and a turn indicator to coordinate access, ensuring that only one process enters the critical section at a time while preventing deadlock through symmetric yielding. This approach assumes atomic reads and writes to shared memory and laid the groundwork for subsequent algorithms by demonstrating that mutual exclusion could be achieved purely through software coordination.

Peterson's algorithm, introduced in 1981, simplifies Dekker's solution while maintaining the same guarantees of mutual exclusion, progress, and bounded waiting for two processes. It employs two flags to indicate each process's interest in entering the critical section and a single turn variable to resolve contention by designating which process yields priority. The algorithm assumes atomic reads and writes on individual variables. The pseudocode for Peterson's algorithm for processes 0 and 1 is as follows:
shared boolean flag[2] = {false, false};
shared int turn;

void process0_critical_section() {
    flag[0] = true;
    turn = 1;
    while (flag[1] && turn == 1) {
        // busy wait
    }
    // critical section
    flag[0] = false;
}

void process1_critical_section() {
    flag[1] = true;
    turn = 0;
    while (flag[0] && turn == 0) {
        // busy wait
    }
    // critical section
    flag[1] = false;
}
To illustrate execution, consider two processes attempting to enter their critical sections concurrently. Initially, both flags are false. Process 0 sets flag[0] = true and turn = 1, then checks the while condition: if Process 1 has not yet set its flag, the condition is false, allowing Process 0 to enter. Meanwhile, if Process 1 sets flag[1] = true and turn = 0 afterward, its while condition evaluates to flag[0] && turn == 0 (true && true = true), so it waits. Once Process 0 exits and sets flag[0] = false, Process 1's condition becomes false, granting it entry. If both set flags before checking, the turn variable ensures only one proceeds: each process assigns turn to the other's index, and the process whose assignment takes effect last is the one that yields.

Correctness of Peterson's algorithm relies on invariants such as: if both flags are true, the turn variable identifies the process that must wait, ensuring that both cannot exit their entry loops simultaneously (which would violate mutual exclusion); and the turn variable always designates a process that is interested (its flag is true) or has already yielded, ensuring progress and freedom from deadlock. These invariants can be verified by induction over execution steps, showing that mutual exclusion holds because if both processes were in the critical section, a contradiction arises from the loop exit conditions and the value of turn.

Higher-level primitives build on these foundations to abstract away low-level busy-waiting. Semaphores, introduced by Dijkstra in 1965, provide a versatile mechanism for synchronization using integer variables and two atomic operations: P (wait: decrement if positive, otherwise block) and V (signal: increment and wake a waiting process, if any). A binary semaphore, initialized to 1, functions as a mutex for critical sections: processes execute P before entry and V after exit, enforcing mutual exclusion while allowing efficient blocking instead of busy-waiting.

Monitors, formalized by C. A. R. Hoare in 1974, offer a higher-level abstraction for concurrent programming by encapsulating shared data and procedures within a module that enforces mutual exclusion automatically—only one thread can execute monitor procedures at a time. Monitors include condition variables for signaling, enabling wait (block and release the monitor) and signal (wake a waiting thread, potentially transferring control immediately), which supports more complex coordination like producer-consumer scenarios without explicit low-level lock management.

Modern library implementations provide portable access to these concepts. In POSIX threads (pthreads), mutexes are initialized with pthread_mutex_init and used via pthread_mutex_lock (acquire, blocking if contended) and pthread_mutex_unlock (release), supporting recursive and error-checking variants for robust critical section protection. Similarly, the Windows API offers critical section objects, initialized with InitializeCriticalSection, acquired via EnterCriticalSection (which may spin briefly before blocking), and released with LeaveCriticalSection, optimized for intra-process use with low overhead in the uncontended case.

Trade-offs in software techniques often center on busy-waiting (spinlocks) versus blocking (sleep/wake) approaches. Spinlocks, akin to Peterson's busy-wait loops, are CPU-intensive but suitable for short critical sections where the expected wait is shorter than context-switch overhead, minimizing latency in contexts such as interrupt handlers. For longer sections, blocking primitives like semaphores or mutexes conserve resources by suspending threads and yielding the CPU to others, though at the cost of scheduler involvement and potential wake-up delays.
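A sketch of the binary-semaphore pattern described above, using POSIX semaphores in C (sem_wait and sem_post play the roles of Dijkstra's P and V; the shared counter and thread count are illustrative):

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

static sem_t mutex;          // binary semaphore guarding the critical section
static int shared_data = 0;

static void *worker(void *arg) {
    sem_wait(&mutex);        // P: block until the semaphore is positive
    shared_data++;           // critical section
    sem_post(&mutex);        // V: release, waking one waiter if any
    return NULL;
}

int main(void) {
    sem_init(&mutex, 0, 1);  // initial value 1 => behaves as a mutex
    pthread_t t[4];
    for (int i = 0; i < 4; i++) pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < 4; i++) pthread_join(t[i], NULL);
    printf("shared_data = %d\n", shared_data);  // always 4
    sem_destroy(&mutex);
    return 0;
}

Unlike a true mutex, a binary semaphore has no notion of ownership: any thread may issue the V operation, which is why dedicated mutex types are usually preferred for critical sections.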

Hardware Mechanisms

Hardware mechanisms provide low-level support for implementing critical sections by ensuring atomicity and ordering of operations across multiple cores, independent of higher-level software constructs. These primitives, such as atomic instructions, enable mutual exclusion and data consistency in concurrent environments by preventing observable intermediate states during shared-memory access.

Atomic read-modify-write instructions form the core of hardware synchronization support, allowing indivisible read-modify-write operations on memory locations. The test-and-set (TAS) instruction, commonly implemented on x86 via the Bit Test and Set (BTS) or exchange (XCHG) operations, atomically tests a bit in memory and sets it to 1 if it was 0, returning the original value. This is widely used to acquire locks, as a successful test (original bit 0) indicates the resource was free. On x86, the LOCK prefix ensures atomicity by locking the bus or cache line during execution.

The compare-and-swap (CAS) instruction is a more versatile primitive, enabling lock-free programming by conditionally updating a memory location only if it matches an expected value. On x86, this is realized through the CMPXCHG instruction, which compares the accumulator (e.g., EAX) with the destination operand and, if equal, exchanges it with the source operand; otherwise, it loads the destination into the accumulator. With the LOCK prefix, it guarantees atomic execution across cores. ARM architectures provide direct CAS support in A64, where the CAS instruction reads a word or doubleword, compares it to a value held in a register, and writes a new value if they match, with variants for acquire/release semantics to control visibility. Pseudocode for x86 CMPXCHG illustrates this:
TEMP ← DEST
IF ACCUMULATOR = TEMP THEN
    ZF ← 1
    DEST ← SRC
ELSE
    ZF ← 0
    ACCUMULATOR ← TEMP
    DEST ← TEMP
FI
Load-linked/store-conditional (LL/SC) offers an alternative atomic mechanism prevalent in RISC architectures, pairing a load that establishes a reservation with a conditional store that succeeds only if no intervening writes occur to the address. In RISC-V, LR.W loads a word and sets a reservation, while SC.W stores a value and returns 0 on success or a nonzero value on failure due to reservation invalidation. This pair avoids the ABA problem inherent in CAS loops and supports scalable synchronization in multi-processor systems.

Memory barriers, or fences, enforce ordering of memory operations so that changes made by one core become visible to others, preventing reordering that could violate critical section semantics. On x86, the MFENCE instruction serializes all loads and stores issued before it, making them globally visible before subsequent operations, which is crucial for weakly ordered memory types in multiprocessor setups.

Cache coherence protocols maintain consistency of shared data across caches, underpinning the effectiveness of atomic instructions. The MESI protocol, used in many x86 and ARM multi-core systems such as the Cortex-A9, defines four states for cache lines: Modified (dirty, unique copy), Exclusive (clean, unique), Shared (clean, multiple copies), and Invalid (no valid data). Transitions ensure that writes invalidate or update other caches via snooping, preventing stale data during critical sections; for instance, a write to a line in the Shared state first invalidates remote copies before proceeding to Modified.

Despite their utility, hardware mechanisms have limitations: atomic instructions handle only simple operations and can lead to high contention or livelock in busy-wait loops under heavy load, while barriers impose performance overhead by flushing store buffers. They are insufficient alone for complex synchronization like condition variables, necessitating combination with software techniques.

The evolution of these mechanisms began with early RISC architectures in the 1980s, where the MIPS project at Stanford introduced support for efficient atomic operations in pipelined processors, influencing subsequent designs like RISC-V's LL/SC. Modern extensions include Intel's Transactional Synchronization Extensions (TSX), introduced in 2013 with the Haswell microarchitecture, which enables speculative execution of critical sections as hardware transactions using instructions like XBEGIN/XEND; however, asynchronous aborts due to cache conflicts or security vulnerabilities (e.g., TAA, CVE-2019-11135) can cause rollbacks and data exposure, leading to recommendations for disabling TSX on vulnerable systems via microcode updates.
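These primitives surface in portable form through C11's <stdatomic.h>. The following sketch builds a minimal test-and-set spinlock on top of atomic_flag; the type and function names are illustrative, while the atomic operations themselves are the standard C11 API:

#include <stdatomic.h>

typedef struct {
    atomic_flag locked;   // initialize with ATOMIC_FLAG_INIT
} tas_spinlock;

void tas_spin_lock(tas_spinlock *l) {
    // The test-and-set compiles down to a TAS/XCHG-style atomic
    // read-modify-write; spin until the flag was previously clear.
    while (atomic_flag_test_and_set_explicit(&l->locked, memory_order_acquire)) {
        // busy wait
    }
}

void tas_spin_unlock(tas_spinlock *l) {
    // Release ordering makes the critical section's writes globally
    // visible before the lock appears free (the fence role above).
    atomic_flag_clear_explicit(&l->locked, memory_order_release);
}

The acquire/release orderings mirror the barrier semantics discussed above: acquire prevents later memory operations from moving before the lock acquisition, and release prevents earlier ones from moving after the unlock.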

Applications

Operating System Kernels

In operating system kernels, critical sections are essential for synchronizing access to shared resources such as process control blocks, memory allocators, and I/O queues, ensuring system stability in multiprocessor environments. These sections protect kernel data structures from concurrent modification by multiple threads, interrupt handlers, or device drivers, preventing race conditions that could lead to crashes or data corruption. Kernel designers employ a hierarchy of synchronization primitives tailored to the duration and context of the critical section, balancing performance with correctness.

Spinlocks are lightweight primitives used for short, non-preemptible critical sections, particularly in interrupt handlers where low latency is critical. They operate by busy-waiting on a lock variable, making them suitable for symmetric multiprocessing (SMP) systems but inefficient for longer holds due to wasted CPU cycles. In contrast, mutexes integrate with the scheduler and are preferred for extended critical sections, allowing blocked threads to yield the processor and sleep until the lock is available.

A basic technique for enforcing critical sections in uniprocessor kernels involves temporarily disabling interrupts using instructions like CLI (clear interrupt flag) and STI (set interrupt flag) on x86 architectures. This prevents asynchronous interruption during the section, ensuring atomicity without complex locking, though it is unsuitable for multiprocessor systems where other CPUs can still interfere.

In the Linux kernel, the spinlock_t structure implements spinlocks, initialized via spin_lock_init() and acquired with spin_lock(), commonly used in the scheduler to protect runqueue data and in memory management for page allocator locks. Mutexes, defined by the mutex struct and functions like mutex_lock(), handle longer operations such as filesystem updates, integrating with the scheduler for efficient blocking. Similarly, the Windows kernel employs executive spinlocks for high-speed synchronization in interrupt service routines and fast mutexes for driver code that may involve waiting, with APIs like KeAcquireSpinLock() ensuring preemption safety.

Kernel critical sections introduce challenges like priority inversion, where a low-priority thread holding a lock blocks a high-priority one, potentially delaying time-critical tasks; this is mitigated by priority inheritance protocols that temporarily elevate the holder's priority. Deadlocks can also arise from circular waits on multiple locks, necessitating careful lock acquisition ordering and tools like lockdep in Linux for detection.

Historically, early UNIX kernels relied on simple interrupt disabling for mutual exclusion, as seen in the original Version 6 implementation from the 1970s. Modern kernels like Linux 2.6 and later shifted to fine-grained locking, distributing locks across subsystems to reduce contention and improve scalability on multicore hardware.
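A sketch of the Linux idiom for a critical section shared with an interrupt handler (the device structure and field names are hypothetical; the locking calls are the standard kernel spinlock API):

#include <linux/spinlock.h>

struct my_dev {                    /* hypothetical driver state */
    spinlock_t lock;
    unsigned int pending;          /* also touched by the interrupt handler */
};

static void my_dev_submit(struct my_dev *dev)
{
    unsigned long flags;

    /* Disable local interrupts and take the lock: safe against both
     * other CPUs and this CPU's own interrupt handler. */
    spin_lock_irqsave(&dev->lock, flags);
    dev->pending++;                /* critical section: keep it short */
    spin_unlock_irqrestore(&dev->lock, flags);
}

The _irqsave variant matters here: a plain spin_lock() would deadlock if the interrupt handler fired on the same CPU while the lock was held.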

Concurrent Data Structures

Concurrent data structures are designed to allow multiple threads to access and modify shared data safely without corrupting the structure, often relying on critical sections to protect operations like insertions, deletions, and searches. In lock-based approaches, mutual exclusion primitives such as mutexes enclose the critical sections around data structure operations to ensure atomicity. For example, in a concurrent queue, a mutex can protect the enqueue and dequeue operations, preventing race conditions where one thread might overwrite another's changes. Similarly, hash tables can employ per-bucket locks to achieve finer granularity, reducing contention by allowing concurrent access to non-overlapping buckets while still serializing updates within each bucket.

Lock-free alternatives avoid traditional locks by using atomic operations, such as compare-and-swap (CAS), to guarantee progress for at least one thread, enabling higher throughput under high contention. A seminal example is Treiber's lock-free stack algorithm, which uses CAS to atomically update the stack's top pointer during push and pop operations on a singly-linked list. Another influential design is the Michael-Scott non-blocking queue, which employs CAS for safe linked-list manipulations, supporting concurrent enqueues and dequeues without blocking. These structures leverage hardware atomic instructions for synchronization, as detailed in hardware mechanisms.

For balanced trees, fine-grained locking in red-black trees—where locks are held only on affected nodes during rotations and insertions—contrasts with coarse-grained locking on simple lists, which protects the entire structure but limits scalability. Performance evaluations show that lock-based structures incur overhead from context switches and lock acquisition, leading to reduced throughput as thread count increases; for instance, in queue benchmarks on multiprocessors, lock-free implementations like the Michael-Scott queue achieve up to 2-3 times higher operations per second under contention compared to mutex-based queues.

Modern libraries incorporate these techniques: Java's ConcurrentHashMap, as implemented since Java 8, employs fine-grained locking on individual hash bins along with CAS operations to enable scalable concurrent updates across multiple threads. In C++, std::atomic enables lock-free implementations of stacks and queues by providing atomic compare-and-exchange primitives. However, lock-free designs face challenges like the ABA problem, where a thread reuses a deallocated node with the same address, leading to incorrect CAS successes; this is mitigated using tagged pointers, which append a version counter to addresses for unique identification during updates.
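A minimal sketch of the Treiber-style CAS push in C11 (the node layout and names are illustrative; pop is omitted because a safe pop requires ABA mitigation such as the tagged pointers mentioned above):

#include <stdatomic.h>
#include <stdlib.h>

typedef struct node {
    int value;
    struct node *next;
} node_t;

static _Atomic(node_t *) top = NULL;   // stack top pointer

void push(int value) {
    node_t *n = malloc(sizeof *n);
    n->value = value;
    n->next = atomic_load(&top);
    // Retry until the CAS atomically swings top from n->next to n.
    // On failure (including spurious failures of the weak form),
    // atomic_compare_exchange_weak reloads the current top into
    // n->next, so the loop simply tries again against fresh state.
    while (!atomic_compare_exchange_weak(&top, &n->next, n)) {
        // retry
    }
}

No thread ever blocks holding a lock here: if the CAS fails, some other thread's push must have succeeded, which is exactly the lock-free progress guarantee described above.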

Peripheral Device Management

In device driver development, critical sections are essential for synchronizing access to hardware peripherals, particularly during direct memory access (DMA) transfers and interrupt service routines (ISRs). Mutexes are commonly employed to protect shared resources, such as buffers that could otherwise suffer from overruns if multiple threads or interrupts attempt concurrent modification. For instance, during DMA operations, a mutex ensures that the driver serializes buffer preparation and data movement, preventing race conditions where an ISR might overwrite partially transferred data. This approach is detailed in foundational driver documentation, which emphasizes the use of locking primitives to maintain data integrity in I/O paths.

Specific examples illustrate these techniques in common peripherals. USB controllers often utilize semaphores to manage transfer queues, ensuring that transfers are queued atomically and avoiding conflicts during high-speed exchanges. In the Enhanced Host Controller Interface (EHCI) specification, semaphores in the USB Legacy Support register synchronize ownership handoff of the controller between the BIOS and the operating system. Similarly, network interface cards (NICs) employ locks on packet buffers to safeguard receive and transmit rings; for example, the Linux networking stack uses list locks to atomically insert or remove sk_buff structures, mitigating buffer corruption in multi-queue scenarios. These mechanisms allow drivers to handle asynchronous packet arrival without data races.

In embedded systems, real-time operating systems (RTOS) like FreeRTOS leverage mutexes within critical sections to coordinate access to peripherals such as sensors, ensuring timely and consistent data acquisition. For sensor reads in real-time applications, a mutex guards the peripheral interface (e.g., an I2C or SPI bus), preventing interruptions from other tasks that could corrupt readings or delay responses. FreeRTOS mutexes incorporate priority inheritance to minimize blocking of high-priority tasks, supporting the deterministic behavior critical for embedded control systems. This is particularly vital in resource-constrained environments where peripherals are shared among multiple tasks.

Managing critical sections for peripherals introduces challenges, including interrupt latency and the need for atomicity when accessing device registers. Hardware interrupts can introduce variable delays, potentially allowing races if a critical section spans register reads and writes; for example, on ARM Cortex-M processors, interrupt latency varies with factors like pending exceptions or tail-chaining, requiring careful use of barrier instructions or brief interrupt disablements to ensure register operations complete indivisibly. Ensuring atomicity across multiple registers often involves disabling interrupts temporarily, but this must be balanced against increased latency in real-time systems.

A notable case is peripheral bus access, where critical sections serialize commands to prevent contention on shared buses. In bus controller drivers, spinlocks protect the command queue, ensuring that only one command initiates at a time to avoid overlapping transfers that could garble data on the bus. This is crucial for maintaining data integrity, as concurrent commands might lead to protocol mismatches or lost acknowledgments.

The approach evolved from early polling-based systems, which continuously checked device status but wasted CPU cycles, to interrupt-driven models in Linux from kernel 2.6 onward. The 2.6 series introduced a unified device model with improved interrupt handling and synchronization, enabling efficient event-driven I/O while reducing latency through preemptible kernels and finer-grained locks.
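A sketch of the FreeRTOS pattern described above, guarding a shared I2C sensor with a mutex (the i2c_read_temperature() helper is hypothetical; the semaphore calls are the standard FreeRTOS API):

#include "FreeRTOS.h"
#include "semphr.h"

static SemaphoreHandle_t i2c_mutex;    /* guards the shared I2C bus */

/* Hypothetical bus helper: performs the actual I2C transaction. */
extern int i2c_read_temperature(void);

void sensors_init(void) {
    i2c_mutex = xSemaphoreCreateMutex();   /* priority inheritance built in */
}

int read_temperature_safe(void) {
    int value = -1;
    /* Block until the bus is free; if a higher-priority task waits here,
       the current holder's priority is temporarily boosted. */
    if (xSemaphoreTake(i2c_mutex, portMAX_DELAY) == pdTRUE) {
        value = i2c_read_temperature();    /* critical section: bus access */
        xSemaphoreGive(i2c_mutex);
    }
    return value;
}

In production code a bounded timeout would typically replace portMAX_DELAY so a stuck bus transaction cannot block a task indefinitely.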
