
Mutual exclusion

Mutual exclusion is a core synchronization mechanism in concurrent computing that guarantees only one process or thread can execute a critical section of code accessing shared resources at any given time, thereby preventing race conditions and maintaining data consistency in concurrent environments. This principle is essential for ensuring the correctness of multithreaded and multiprogrammed systems, where multiple execution units might otherwise interfere with shared mutable data, leading to unpredictable or erroneous outcomes. In concurrent programming, a critical section refers to any segment of code that manipulates shared variables or resources, requiring exclusive access to avoid inconsistencies such as lost updates or incorrect computations. Race conditions arise when the interleaving of operations from multiple threads depends on timing, potentially causing failures like atomicity violations in operations such as incrementing a shared counter. Mutual exclusion addresses these issues by enforcing serialization of access, often through hardware support like atomic instructions or software constructs that block competing processes until the resource is free. Common implementations of mutual exclusion include mutexes (mutual exclusion locks), which provide a simple acquire-and-release interface for protecting critical sections, as standardized in POSIX threading with functions like pthread_mutex_lock and pthread_mutex_unlock. Semaphores, introduced by Edsger W. Dijkstra in the 1960s, offer a more general signaling mechanism that can enforce mutual exclusion via binary semaphores while also supporting producer-consumer coordination. Other techniques encompass spinlocks for short waits, monitors for object-oriented encapsulation of locks, and software algorithms like Lamport's bakery algorithm. The concept of mutual exclusion was formalized by Dijkstra in his 1965 paper "Solution of a Problem in Concurrent Programming Control," which addressed the challenge for an arbitrary number of processes and laid the groundwork for modern concurrency theory.
Its significance has grown with the prevalence of multicore processors and distributed systems, influencing standards in operating systems, programming languages, and database applications to ensure reliability and liveness properties like deadlock avoidance.

The Mutual Exclusion Problem

Core Definition and Motivation

Mutual exclusion is a fundamental synchronization primitive in concurrent programming that ensures no two processes or threads can simultaneously execute their critical sections, thereby preventing interference with shared resources. A critical section refers to a segment of code within a process that accesses a shared resource, such as a variable or device, where concurrent execution could lead to inconsistencies. To enforce this, processes follow entry protocols to attempt access and exit protocols to relinquish control, allowing only one process to proceed at a time. The primary motivation for mutual exclusion arises from the need to avoid race conditions, where the outcome of concurrent operations depends on their unpredictable interleaving, potentially causing data corruption or incorrect results. For instance, consider two processes attempting to increment a shared counter initialized to 5: each reads the value, adds 1 locally, and writes back, but if both read before either writes, the final value becomes 6 instead of the expected 7, resulting in a lost update. Such races are prevalent in operating systems managing kernel resources, where unsynchronized access to data structures can crash the system, and in shared-memory environments where multiprocessor tasks manipulate common variables. In databases and real-world applications like banking systems, mutual exclusion is crucial to maintain consistency during transactions; without it, concurrent updates—such as two users depositing funds into the same account—could fail to reflect both changes, leading to erroneous balances and financial errors. This guarantee is essential across domains, from operating systems coordinating process scheduling to distributed systems ensuring consistent state in multi-user environments, underscoring its role in reliable concurrent execution.

Formal Requirements

The mutual exclusion property requires that at no time can two or more processes be simultaneously executing within their critical sections, ensuring that shared resources are accessed by only one process at a time to prevent race conditions or inconsistent states. This property forms the foundational correctness criterion for any solution to the critical section problem, as originally articulated by Dijkstra in his analysis of cooperating sequential processes. The progress property stipulates that if no process is currently in its critical section and at least one process wishes to enter, then the selection of the next process to enter must be made in a finite amount of time, without allowing the system to deadlock or indefinitely delay decision-making among the contending processes. This ensures liveness by preventing scenarios where processes are perpetually stuck in their entry protocols, a requirement Dijkstra emphasized to avoid relative-speed assumptions that could lead to blocking. In formal terms, every trying operation must terminate, guaranteeing lockout freedom for processes seeking access. The bounded waiting property, also known as the no-starvation or finite-waiting condition, mandates that there exists a bound on the number of times other processes can enter the critical section while a given process is waiting to enter, preventing indefinite postponement of any individual process. This property addresses fairness by limiting how long a process can be overtaken, with variations such as r-bounded waiting specifying that no more than r entries by others occur before the waiting process gains access. It strengthens progress by incorporating a fairness constraint, ensuring that waiting processes are not perpetually bypassed. These properties involve inherent trade-offs, particularly between levels of fairness and efficiency; for instance, strict alternation—where processes must alternate access regardless of interest—guarantees strong fairness but violates progress if one process remains in its noncritical section, whereas weaker fairness models that permit lockout prioritize responsiveness at the potential cost of occasional starvation.
Stronger fairness requirements, such as first-come-first-served ordering, demand more shared memory resources (e.g., up to O(N!) bits for N processes) compared to minimal solutions that achieve only basic deadlock freedom. The mutual exclusion problem for N processes, as formalized by Dijkstra, requires protocols to satisfy mutual exclusion (no concurrent critical section executions), progress (infinite critical section entries if any process tries indefinitely), and no lockout (no process waits forever in its protocol), as analyzed by Gary L. Peterson in his extension of two-process solutions to multiple processes. This generalization preserves the core properties while scaling to arbitrary process counts, influencing subsequent algorithmic designs.

Historical Development

The study of mutual exclusion originated in the early 1960s amid growing interest in concurrent programming for multiprogrammed systems. Edsger W. Dijkstra played a pivotal role in recognizing the challenges of coordinating cooperating sequential processes, particularly the need to prevent simultaneous access to shared resources. In his seminal 1965 paper published in Communications of the ACM, Dijkstra formalized the mutual exclusion problem and presented an initial software-based solution using only atomic reads and writes, laying the groundwork for subsequent algorithmic developments. A key milestone came that same year with the publication of what is known as Dekker's algorithm, the first correct software solution to achieve mutual exclusion for two processes without relying on specialized hardware instructions. Although attributed to T. J. Dekker, the algorithm was detailed by Dijkstra in the aforementioned CACM paper, demonstrating how shared flags could enforce the necessary exclusion through busy-waiting. This approach satisfied the core requirements of mutual exclusion, progress, and bounded waiting using only basic operations. Building on these foundations, further advancements addressed limitations in scalability and fairness. In 1974, Leslie Lamport introduced the bakery algorithm, a software solution for an arbitrary number of processes that emulates a ticket system to ensure first-come, first-served access, thereby improving fairness without hardware support. Later, in 1981, Gary L. Peterson developed a more concise algorithm for two processes, refining the use of flags and a turn variable to guarantee mutual exclusion while debunking common misconceptions about prior solutions. Peterson's work extended naturally to n processes, influencing broader theoretical analyses. Parallel to software progress, advancements in processor hardware significantly influenced mutual exclusion mechanisms by introducing atomic instructions like test-and-set, which simplified implementation in multiprocessor environments.
These primitives, first appearing in mainframe systems like the IBM System/360 in the mid-1960s, enabled efficient busy-waiting locks by atomically reading and modifying a memory location, reducing the complexity of software-only approaches. Lamport continued to shape the field with his 1987 paper on a fast mutual exclusion algorithm, which optimized access times under low contention by minimizing reads in the uncontested case, achieving O(1) performance while preserving correctness. This work extended earlier ideas toward more efficient shared-memory synchronization, bridging software elegance with practical performance.

Hardware Solutions

Atomic Instructions

Atomic instructions are low-level hardware primitives that perform read-modify-write operations on memory locations indivisibly, providing the foundation for implementing mutual exclusion in concurrent systems without relying on complex software mechanisms. These instructions ensure that no other processor can interfere with the operation, enabling simple busy-waiting solutions like spinlocks for short critical sections. The test-and-set (TS) instruction atomically reads the value of a memory location and sets it to 1, returning the original value. This primitive is commonly used to implement spinlocks, where a shared lock variable is initialized to 0; a process repeatedly executes TS until it returns 0 (indicating the lock was free), then enters the critical section and releases the lock by writing 0. The pseudocode for TS is as follows:
int test_and_set(int *lock) {
    // Executed by the hardware as one indivisible operation
    int old = *lock;
    *lock = 1;
    return old;
}
A basic spinlock using TS appears as:
while (test_and_set(&lock)) ;  // Spin until acquired
// Critical section
lock = 0;  // Release
This approach is efficient for low-contention scenarios but can lead to high CPU usage during prolonged waits. The compare-and-swap (CAS) instruction atomically compares the contents of a memory location to an expected value and, if they match, replaces it with a new value, returning a success indicator. CAS enables more flexible lock-free algorithms, such as optimistic updates in data structures, by allowing conditional modifications without always setting a fixed value like TS. However, CAS is susceptible to the ABA problem, where a thread reads value A from a location, another thread changes it to B and back to A, and the first thread's CAS succeeds incorrectly, assuming no intervening change occurred, which can corrupt linked structures or lead to lost updates. One solution is hazard pointers, a technique where threads publish pointers to objects they are accessing, preventing premature reclamation and mitigating ABA by tracking object versions or using tagged pointers alongside CAS. Fetch-and-add (FAA) atomically retrieves the current value of a memory location and adds a specified increment to it, returning the original value. This instruction is particularly useful for implementing counters in mutual exclusion schemes, such as ticket locks, where processes atomically increment a ticket to determine their turn without overwriting shared state. Processor architectures provide variants of these instructions to ensure atomicity. On x86, the LOCK prefix can be applied to certain read-modify-write instructions (e.g., XCHG for test-and-set or CMPXCHG for compare-and-swap) to enforce atomic execution by asserting the LOCK# signal, which grants exclusive bus access and prevents cache snooping by other processors during the operation. In ARM architectures, atomicity is achieved through load-exclusive (LDREX) and store-exclusive (STREX) pairs: LDREX loads a value and marks the address as exclusive in a local monitor, while STREX stores a new value only if no other processor has modified the location since the LDREX (failing otherwise), enabling conditional atomic updates for synchronization with minimal overhead.
In multiprocessor systems, atomic instructions incur performance overhead due to cache coherence protocols, which require invalidating or flushing cache lines across processors to maintain consistency, leading to bus contention and increased latency under high contention. For instance, simple TS spinlocks can generate excessive coherence traffic as contending processors repeatedly snoop the shared lock variable, exacerbating scalability issues in large-scale shared-memory systems.

Specialized Hardware Mechanisms

Specialized hardware mechanisms extend beyond basic atomic instructions to provide robust support for mutual exclusion in multi-core and multi-processor systems. These include memory barriers, cache coherence protocols, dedicated lock instructions, and interrupt management techniques, each addressing specific aspects of ordering, visibility, and atomicity. Memory barriers, also known as memory fences, are hardware instructions that enforce ordering constraints on memory operations to prevent unwanted reordering by the compiler or processor, ensuring that changes made by one core become visible to others in a controlled manner. In weak memory models common in modern architectures like ARM, POWER, and x86 variants, barriers prevent loads from being reordered before stores or vice versa, which is crucial for maintaining mutual exclusion properties such as those in Dekker's or Peterson's algorithms adapted for hardware. For instance, a full barrier serializes all preceding and succeeding memory accesses, while lighter variants like load or store barriers target specific reorderings. The C++11 memory model formalizes this with release-acquire semantics, where a release store ensures all prior writes are visible before subsequent operations, and an acquire load guarantees that following reads see the effects of those writes, thus synchronizing threads without full sequential consistency. Cache coherence protocols, such as MESI (Modified, Exclusive, Shared, Invalid) and its MOESI variant (adding an Owned state), maintain consistency across multiple caches in multi-core systems by managing cache line states and propagating updates via snooping or directory-based methods. These protocols ensure atomicity for shared-memory accesses by invalidating or updating copies in other caches upon writes, preventing non-atomic updates that could violate mutual exclusion. In MESI, used in Intel architectures, a write to a shared line transitions it to Modified state, invalidating other copies to enforce exclusivity.
MOESI, employed in AMD systems, allows the Owned state for modified lines to be read by others without full invalidation, reducing bus traffic but still guaranteeing coherence for synchronization primitives. This hardware-level support underpins the visibility required for locks built on atomic instructions, as cache misses trigger coherence actions that serialize access. Hardware locks provide direct support for mutual exclusion through specialized instructions that atomically test and set lock variables. The IBM System/370 introduced the Compare and Swap (CS) instruction, which atomically compares a memory word to an expected value and swaps it with a new value if they match, enabling lock operations for spinlocks without requiring additional synchronization. Modern architectures like RISC-V extend this via the "A" standard extension for atomic instructions, including Load-Reserved/Store-Conditional (LR/SC) pairs that detect intervening writes between load and store, facilitating lock acquisition in multi-processor environments with acquire/release semantics. These mechanisms rely on underlying bus locking or coherence protocols to ensure indivisibility. In uniprocessor systems, disabling interrupts serves as a simple hardware mechanism for mutual exclusion by preventing context switches during critical sections, ensuring that only one execution context runs at a time. This is achieved via instructions like CLI (Clear Interrupt Flag) on x86, which block timer and I/O interrupts, allowing execution of non-atomic code sequences without hardware atomics. However, this approach is limited to single-core setups, as it does not coordinate across multiple processors. Despite these advances, specialized hardware mechanisms face scalability challenges in large non-uniform memory access (NUMA) systems, where remote memory accesses and coherence traffic lead to contention and latency spikes.
Traditional spinlocks, even with hardware support, cause excessive cache-line bouncing across nodes, degrading performance as core counts increase, while NUMA-aware designs trade off single-thread efficiency for better contention handling. In systems with dozens of nodes, coherence protocol overhead can amplify lock acquisition times by factors of 2-7, necessitating hybrid software-hardware approaches for optimal scaling.

Software Solutions

Algorithmic Approaches for Two Processes

Software-based solutions for mutual exclusion between two processes rely on shared memory variables and busy-waiting loops, assuming atomic read and write operations to these variables and that processes execute finitely many steps before halting. These algorithms operate in a model where two processes, say P0 and P1, alternate between non-critical, entry, critical, and exit sections, ensuring no hardware primitives beyond basic loads and stores are needed. Dekker's algorithm, the first known correct software solution for two processes, uses two boolean flags to indicate each process's desire to enter the critical section and a shared turn variable to resolve contention. Attributed to T. J. Dekker and published by E. W. Dijkstra in 1965, it guarantees mutual exclusion, progress, and bounded waiting through careful coordination of flag settings and turn yielding. The pseudocode for Dekker's algorithm is as follows:
Shared variables:
boolean flag[2] = {false, false};  // initially false
int turn;  // initially 0 or 1

Process P_i (where i = 0 or 1, j = 1 - i):
do {
    flag[i] = true;
    while (flag[j]) {
        if (turn == j) {
            flag[i] = false;
            while (turn == j) { /* busy wait */ };
            flag[i] = true;
        }
    }
    // critical section
    turn = j;
    flag[i] = false;
    // remainder section
} while (true);
A proof sketch for mutual exclusion in Dekker's algorithm proceeds by contradiction: suppose both processes are in their critical sections simultaneously. This requires both to have observed the other's flag as false while their own is true, but the turn variable ensures that when both flags are true, the process whose turn it is not yields, preventing both from proceeding without the other retreating. For progress, if no process is in the critical section and at least one wishes to enter, the looping process will eventually claim the turn after the other yields, ensuring entry without deadlock. Bounded waiting holds because each contention cycle ends with a turn switch, limiting waits to one full cycle. Peterson's algorithm simplifies Dekker's approach by using only one flag per process and a single turn variable, where a process yields by pointing to the other as the "victim" to enter first. Developed by Gary L. Peterson in 1981, it achieves the same properties with fewer variables and clearer intent signaling, making it more intuitive while maintaining efficiency in shared memory. The pseudocode for Peterson's algorithm is:
Shared variables:
boolean flag[2] = {false, false};  // initially false
int turn = 0;  // initially arbitrary

Process P_i (where i = 0 or 1, j = 1 - i):
do {
    flag[i] = true;
    turn = j;  // yield to other
    while (flag[j] && turn == j) { /* busy wait */ };
    // critical section
    flag[i] = false;
    // remainder section
} while (true);
A step-by-step execution trace illustrates the behavior when both processes attempt to enter the critical section concurrently. Initially, flag[0] = flag[1] = false, turn = 0. P0 sets flag[0] = true and turn = 1 (yielding to P1). P1 then sets flag[1] = true and turn = 0 (yielding to P0). Now, P0 checks while (flag[1] && turn == 1), which is false (flag[1] = true, but turn = 0 ≠ 1), so P0 skips the loop and enters the critical section. Meanwhile, P1 loops because flag[0] = true and turn == 0 (j = 0 for P1). After P0 exits and sets flag[0] = false, P1's condition becomes false, allowing P1 to enter. This trace shows how the turn variable acts as a tie-breaker, ensuring only one process enters while the other waits boundedly. Correctness analysis for both algorithms confirms satisfaction of the three requirements without hardware support. Mutual exclusion is preserved as neither process can enter unless the other's flag is false or the turn favors it, proven by contradiction assuming dual entry leads to inconsistent turn observations. Progress is ensured because if both wish to enter, the yielded-to process enters first; if only one, it proceeds immediately, avoiding indefinite postponement. Bounded waiting is met via the turn mechanism, where a process waits at most one critical section duration before entering, preventing starvation in finite executions.

Algorithmic Approaches for Multiple Processes

Software algorithms for mutual exclusion among an arbitrary number of processes extend the principles used in two-process cases, such as Dekker's or Peterson's algorithms, by introducing structures that maintain ordering and prevent conflicts across all participants.

Lamport's Bakery Algorithm

Lamport's bakery algorithm, introduced in 1974, provides a software solution for mutual exclusion among n processes using only reads and writes to shared variables, mimicking a bakery's numbered-ticket system to enforce first-come, first-served ordering. Each process announces its intent to enter the critical section by taking a number, which is one greater than the current maximum held by any process, ensuring a unique or tied value that resolves ties via process identifiers. The algorithm uses two arrays: choosing[i] to indicate if process i is selecting a number, and number[i] to store the ticket value, with process ids serving as tie-breakers. The entry protocol proceeds as follows:
choosing[i] := true;
number[i] := max(number[0], ..., number[n-1]) + 1;
choosing[i] := false;

for j := 0 to n-1 {
    while choosing[j] do skip;  // Wait if j is choosing
    while number[j] != 0 and (number[j], j) < (number[i], i) do skip;
}
Here, the lexicographical order (number[j], j) < (number[i], i) prioritizes lower tickets, and for equal tickets, lower process ids. After entering the critical section, the process releases its ticket by setting number[i] := 0. This handles waiting queues implicitly through the ticket comparison loop, where processes wait for all higher-priority (earlier-arriving or lower-id) processes to finish. The algorithm guarantees mutual exclusion by ensuring no two processes can simultaneously have the minimal ticket in the bakery, as the selection phase prevents concurrent max computations that could violate ordering. It provides FIFO fairness, meaning no process is starved if it arrives before others, and uses O(n) space for the arrays. However, under high contention, it requires O(n) steps per entry due to scanning all processes.

Eisenberg and McGuire Algorithm

The Eisenberg and McGuire algorithm, published in 1972, achieves mutual exclusion for n processes with optimal O(n) space complexity and bounded waiting, using a shared turn variable and a flags array to simulate a linear queue without explicit linked lists. Each flag can be IDLE, WAITING, or ACTIVE, indicating a process's state, while turn points to the next process eligible to enter. Processes advance by claiming a position in the queue and spinning until they hold the turn without active predecessors. The pseudocode for process p is:
repeat {
    flag[p] := WAITING;
    j := turn;
    while (j != p) {
        if (flag[j] != IDLE) {
            j := turn;
        } else {
            j := (j + 1) % n;
        }
    }
    flag[p] := ACTIVE;
    j := 0;
    while (j < n and (j == p or flag[j] != ACTIVE)) {
        j := j + 1;
    }
} until (j >= n and (turn == p or flag[turn] == IDLE));
turn := p;

// Critical section

j := (turn + 1) % n;
while (flag[j] == IDLE) {
    j := (j + 1) % n;
}
turn := j;
flag[p] := IDLE;
This structure ensures that only one process reaches ACTIVE without contention from others, enforcing mutual exclusion through the verification loop. Upon exit, the process passes the turn to the next waiting process, promoting fairness by bounding the wait to at most n-1 entries. The algorithm's efficiency stems from minimal constants in its loops, requiring O(n) time in the worst case but with low overhead for low contention.

Tournament Method

The tournament method for mutual exclusion, generalized in works like Yang and Anderson's 1995 algorithm, structures n processes in a binary tree where leaf nodes compete using two-process mutual exclusion primitives, and winners advance through internal nodes in a bracket-style elimination until one reaches the root for critical section access. Each node in the tree employs a simple two-process lock, such as Peterson's algorithm, to arbitrate between its children, ensuring that contention is localized and resolved hierarchically. Processes start at their leaf and traverse upward, acquiring locks along the path, while losers wait and retry. This approach scales by reducing multi-process competition to pairwise matches, with the tree providing logarithmic contention resolution. Upon exiting the critical section, the winner releases locks downward, allowing queued processes to advance. The method uses O(n) space when optimized, as the tree can be array-based without explicit pointers.

Correctness and Efficiency

All three algorithms—Lamport's bakery, Eisenberg-McGuire, and the tournament method—satisfy the core requirements of mutual exclusion, progress, and bounded waiting for n processes, using only atomic reads and writes without hardware support. They achieve O(n) space complexity: the bakery algorithm via fixed-size arrays, Eisenberg-McGuire via a single turn variable and flags array, and the tournament method via a compact array-based tree. Fairness is guaranteed through ordering mechanisms—first-come-first-served ticket ordering in the bakery algorithm, bounded turn-passing in Eisenberg-McGuire, and bounded waits in the tournament tree—preventing starvation under the assumption of finite critical section times. Efficiency varies: the bakery algorithm and Eisenberg-McGuire incur O(n) remote memory references (RMRs) under contention, while the tournament method achieves O(log n) RMRs, making it preferable for large n.

Busy-Waiting vs. Sleeping Solutions

Busy-waiting solutions for mutual exclusion, such as spinlocks, involve a process repeatedly executing a tight loop to check the availability of a lock until it can acquire access. This approach relies on atomic instructions like test-and-set to detect changes in the lock state without yielding the CPU. Spinlocks offer low latency in lightly contended scenarios, particularly when critical sections are short, as they avoid the overhead of context switches and scheduler invocations. However, they consume significant CPU cycles, leading to wasted energy and reduced system throughput under high contention or when wait times are prolonged. In contrast, sleeping solutions suspend the waiting process via operating system primitives, allowing it to yield the CPU to other tasks until notified of lock availability. Early implementations in UNIX used sleep and wakeup system calls, where a process invokes sleep on an event identifier to enter a dormant state, and another signals wakeup with the same identifier to resume it, ensuring efficient coordination without continuous polling. These mechanisms are particularly effective for long critical sections or high-contention environments, as they conserve CPU resources by placing idle processes in a blocked state managed by the scheduler. The primary drawback is the latency introduced by context switches, which involve saving and restoring process states, along with scheduler overhead. Performance trade-offs between these approaches hinge on workload characteristics, with busy-waiting excelling in low-latency, short-wait scenarios—such as wake-up times under 280 cycles—while sleeping mechanisms reduce energy consumption by up to 33% in oversubscribed systems by avoiding active CPU usage. Context switch overhead in sleeping solutions can exceed 7000 cycles for wake-up alone, making it less suitable for microsecond-scale operations, whereas busy-waiting incurs no such cost but elevates power draw to levels like 140W under multi-threaded contention.
In benchmark studies, sleeping locks demonstrate superior energy efficiency in high-contention workloads, though throughput may suffer compared to spinlocks in lightly loaded cases. Hybrid approaches, such as adaptive or optimistic spinning, mitigate these limitations by initially busy-waiting for a brief period (e.g., a few thousand cycles) before transitioning to sleep if the lock remains contested. This strategy, employed in adaptive mutex implementations, balances low latency for quick acquisitions with resource conservation for prolonged waits, yielding throughput improvements of up to 3.5x in benchmarks.

Theoretical Aspects

Impossibility and Bounds

In asynchronous shared-memory systems using only read-write registers, it is impossible to implement wait-free mutual exclusion for more than one process. This result follows from the consensus hierarchy established by Herlihy, where read-write registers have a consensus number of 1, insufficient to support the stronger synchronization required for wait-free mutual exclusion, which demands progress independent of other processes' speeds. Consequently, any mutual exclusion algorithm must rely on weaker guarantees, such as obstruction-freedom or lock-freedom, to ensure liveness in the presence of asynchrony. Regarding space complexity, optimal mutual exclusion algorithms for n processes require \Theta(n) shared memory locations. Burns and Lynch proved that at least n binary shared variables are necessary to achieve mutual exclusion with progress for n \geq 2 processes, as fewer variables allow adversarial scheduling to violate exclusion or deadlock. This bound is tight, as algorithms exist that achieve mutual exclusion using exactly n variables while ensuring bounded waiting. Time complexity in shared-memory mutual exclusion is often measured by remote memory references (RMRs) in cache-coherent models, capturing communication overhead between processors. Lower bounds establish that any mutual exclusion algorithm incurs \Omega(\log n / \log \log n) RMRs in the worst case for n processes, reflecting the need to coordinate access across distributed caches. Amortized RMR complexity can be lower, with some algorithms achieving O(\log n) amortized but \Omega(n) worst-case RMRs, highlighting the trade-off between average-case efficiency and robustness to contention spikes.
The Fischer-Lynch-Paterson (FLP) impossibility result further underscores theoretical limits, showing that no deterministic consensus algorithm exists in asynchronous distributed systems tolerant to even one crash failure; this directly impacts distributed mutual exclusion, as it requires consensus-like agreement on lock ownership, rendering fault-tolerant solutions inherently probabilistic or partially synchronous.

Fairness and Liveness

In mutual exclusion protocols, safety properties, such as ensuring no two processes simultaneously occupy the critical section, are fundamental, but liveness properties are equally vital to guarantee system progress and equitable access. Liveness asserts that desirable events, like a process entering its critical section, eventually occur, contrasting with safety's focus on preventing undesirable states. This distinction forms a safety-liveness dichotomy, where mutual exclusion represents a safety property, while liveness encompasses broader guarantees like starvation-freedom and absence of indefinite blocking. Deadlock and livelock represent key liveness failures in concurrent systems implementing mutual exclusion. Deadlock occurs when two or more processes are permanently blocked, each waiting for resources held by the others, resulting in no further execution progress. In contrast, livelock involves processes remaining active—such as repeatedly yielding or attempting to coordinate—but failing to advance toward their goals, often due to continuous reactive changes without resolution. Distinguishing these requires analyzing execution traces for blocked states (deadlock) versus unproductive activity (livelock). Starvation prevention in mutual exclusion relies on mechanisms like bounded waiting to ensure no process is indefinitely denied access. Bounded waiting stipulates that, after a process completes its entry protocol (or "doorway"), no other process can enter the critical section more than a fixed number of times before the waiting process gains entry. This property implies starvation-freedom, as it bounds the wait time relative to others' accesses, preventing indefinite postponement under fair scheduling assumptions. First-come-first-served (FCFS) fairness provides a stronger guarantee by enforcing queue-based ordering in mutual exclusion. Under FCFS, if one process finishes its doorway before another begins, the first process enters the critical section before the second, assuming bounded doorway and exit executions. This property, combined with deadlock-freedom, ensures lockout-freedom, where every attempting process eventually succeeds, promoting orderly progression in shared-memory systems.

Advanced and Recoverable Mutual Exclusion

Recoverable Designs

In shared-memory systems, process crashes pose significant challenges to mutual exclusion, as a process may fail while holding a lock or midway through its critical section (CS), potentially blocking other processes indefinitely and violating liveness properties. Recoverable designs address this by incorporating fault-tolerance mechanisms that detect failures, release held locks, and allow the crashed process to recover and re-execute its CS atomically without corrupting shared state. These designs extend classic mutual exclusion to the crash-recovery model, where processes can fail arbitrarily and restart, ensuring both safety (mutual exclusion) and progress despite failures. A key requirement in such scenarios is the use of idempotent operations within the critical section, meaning that re-executing the section upon recovery produces the same effect as completing it once, preventing duplicate updates or inconsistencies. For instance, if a process crashes after partially updating a shared counter, recovery must resume from a checkpoint where the operation can be safely retried without overcounting. This idempotency is achieved through techniques like single-writer variables or logging partial progress in durable memory, bounding the recovery steps to ensure efficiency. Seminal work on recoverable mutual exclusion formalizes properties like bounded re-entry, guaranteeing that a recovering process incurs only a finite number of steps before re-executing its critical section. Recoverable designs often build on linearizability, a correctness condition originally defined by Herlihy and Wing, which requires concurrent operations to appear atomic and to take effect in an order consistent with a sequential execution. In the recoverable context, recoverable linearizability ensures that operations respect sequential consistency relative to failures, handling incomplete actions appropriately. For example, in persistent-memory settings, consistency mandates that recovered executions match a sequential order, preserving integrity without rollback.
Common techniques include lock release upon failure detection, where surviving processes use auxiliary locks or helping protocols to identify and reset a failed holder's state, such as by invoking a cleanup routine that frees the primary mutex. In the recoverable mutual exclusion framework, this involves a dedicated recovery phase where processes check for stalled locks and release them using wait-free operations, ensuring deadlock-freedom even under concurrent failures. Another approach employs lease-based locks, where locks are granted for a finite duration that the holder must periodically renew; upon failure, the lease expires automatically, allowing others to acquire the lock without explicit failure detection. These leases, implemented via timestamps in shared variables, provide bounded recovery time proportional to the lease length. A practical example is Java's ReentrantLock, which supports timeout patterns to avoid indefinite blocking through its tryLock(long timeout, TimeUnit unit) method, allowing threads to attempt lock acquisition with a bounded wait; if the holder fails and does not release, the timeout prevents indefinite blocking for the attempting thread, though the lock remains held and requires external mechanisms for full recovery. This approach helps maintain liveness by enabling retries but does not automatically release locks from failed threads, unlike dedicated fault-tolerant designs. Such implementations align with basic lock abstractions but highlight the need for additional monitoring in crash-prone applications. Recent advancements, such as those presented at PODC 2023, explore tradeoffs between remote memory reference (RMR) complexity and memory word size for recoverable mutual exclusion algorithms, improving efficiency in shared-memory models.

Distributed Mutual Exclusion

In distributed systems, processes achieve mutual exclusion without shared memory by exchanging messages over a network, ensuring that only one process enters the critical section at a time while addressing challenges such as communication delays, message losses, and node failures. Algorithms for this purpose are classified into centralized, token-based (including token-ring variants), and permission-based approaches, each balancing trade-offs in efficiency, fault tolerance, and complexity. Performance is typically evaluated using metrics like message complexity (the total number of messages exchanged per critical section invocation) and synchronization delay (the time from one process exiting the critical section until the next enters, measured in message propagation units). The centralized algorithm designates one process as a coordinator to serialize access requests. A requesting process sends a REQUEST message to the coordinator, which replies with a GRANT if the critical section is idle or queues the request otherwise; upon exit, the process sends a RELEASE message to free the critical section for the next queued request. This requires a message complexity of 3 per invocation (one each for request, grant, and release) and a synchronization delay of 2, making it simple to implement but vulnerable to coordinator failure, which can halt the system until recovery via coordinator election. Token-ring algorithms maintain a single circulating token that grants exclusive access to the critical section, organized in a logical ring where processes forward the token to their neighbors. In the basic token-ring approach, processes hold the token only if needed; otherwise, they pass it along the ring. When a process requires access, it waits for the token to arrive, enters the critical section, and then forwards the token after exit. This yields a worst-case message complexity of up to N (where N is the number of processes) per invocation, as the token may traverse the entire ring, with a delay of up to N in the absence of contention. The Suzuki-Kasami algorithm improves on ring circulation by broadcasting requests and maintaining request counters at each process to route the token directly toward requesters, bounding message complexity at N per invocation (N-1 REQUEST messages plus one token transfer) while preserving the token-passing paradigm.
Ricart-Agrawala's timestamp-based algorithm is a permission-based method where each process assigns a unique timestamp (using Lamport clocks) to its request and multicasts REQUEST messages to all others; a peer replies immediately unless it is in, or is requesting, the critical section with an earlier timestamp, in which case the reply is deferred. A process enters the critical section only after receiving replies from all N-1 peers, ensuring mutual exclusion and fairness; upon exit it sends the deferred replies. This achieves a message complexity of 2(N-1) and a synchronization delay of one message propagation time, optimal among permission schemes for fully connected networks, though it assumes reliable channels and scales poorly with N due to broadcast overhead. Maekawa's voting algorithm introduces quorum-based permission to reduce messages, where each process is associated with a voting set of size roughly \sqrt{N}, constructed so that any two sets intersect. A requester multicasts REQUEST to its \sqrt{N} voters, entering the critical section upon receiving permission from all of them; a voter grants permission only if it has not already granted it to another pending request. This ensures mutual exclusion via intersecting quorums, with a message complexity of about 3\sqrt{N} (requests, grants, and releases) and a synchronization delay of 2, significantly better than Ricart-Agrawala for large N, though it risks deadlocks (mitigated by timestamp-ordered inquire/relinquish exchanges) and requires careful quorum construction to guarantee the intersection property. Comparisons across these algorithms highlight trade-offs: centralized approaches excel in low-contention scenarios with minimal overhead but lack fault tolerance; token-based methods like Suzuki-Kasami offer bounded access and resilience via token regeneration but incur higher delays under contention; permission-based designs such as Ricart-Agrawala and Maekawa prioritize low delay and fairness at the cost of message counts that grow with system size. Empirical studies confirm Maekawa's efficiency in high-contention environments, while token-based variants perform well in sparse networks.

Mutual Exclusion Primitives

Basic Locks and Mutexes

A mutex, short for mutual exclusion lock, is a synchronization primitive that ensures only one thread or process can access a shared resource at a time by enforcing exclusive ownership. The core operations are acquire (lock), which attempts to take ownership of the mutex (if it is already owned by another thread, the calling thread blocks until it becomes available), and release (unlock), which relinquishes ownership, allowing another waiting thread to acquire it. These semantics provide mutual exclusion for critical sections, preventing race conditions in concurrent environments. Mutexes come in recursive and non-recursive variants, differing in how they handle repeated acquisition attempts by the same thread. A non-recursive mutex (also called a normal or simple mutex) causes undefined behavior or self-deadlock if the owning thread attempts to acquire it again without releasing it first, as it lacks tracking of recursive calls. In contrast, a recursive mutex maintains an internal lock count: each successful acquire by the owning thread increments the count, and each release decrements it; the mutex only becomes available to other threads when the count reaches zero. This makes recursive mutexes suitable for protecting code with nested or recursive functions that access the same resource, though they incur slight overhead due to count management. Spinlocks represent a lightweight mutex implementation that relies on busy-waiting via atomic hardware instructions, such as test-and-set or compare-and-swap, to atomically check and set a lock flag. A thread attempting to acquire a spinlock repeatedly tests the flag in a loop until it can set it to the locked state, consuming CPU cycles without blocking or context switching. Because this is efficient on multiprocessor systems for very short critical sections, where wait times are minimal and the overhead of sleeping is avoided, spinlocks are ideal for low-contention scenarios, but they waste resources under high contention or prolonged holds.
In the POSIX Threads (pthreads) API, mutex operations are provided through functions like pthread_mutex_lock(), which blocks the calling thread until it can acquire the mutex, and pthread_mutex_unlock(), which releases it and wakes a waiting thread if applicable. These functions support configurable mutex attributes, including recursive behavior via PTHREAD_MUTEX_RECURSIVE, and return error codes for issues like attempting to unlock a non-owned mutex. Proper usage involves initializing the mutex with pthread_mutex_init() before use and destroying it with pthread_mutex_destroy() afterward to free resources. A common pitfall with mutexes in priority-based scheduling systems is priority inversion, where a high-priority thread is blocked by a low-priority thread holding the mutex, allowing intermediate-priority threads to preempt the low-priority holder and indefinitely delay the high-priority one. This can be mitigated using protocols like priority inheritance, where the low-priority thread temporarily inherits the high-priority thread's priority while holding the mutex, ensuring bounded blocking times.

Semaphores and Condition Variables

Semaphores are synchronization primitives introduced by Edsger W. Dijkstra to coordinate cooperating sequential processes by controlling access to shared resources. They consist of a non-negative integer value and two atomic operations: P (proberen, "test and decrement") and V (verhogen, "increment"). The P operation decrements the semaphore value if it is greater than zero; otherwise, the calling process blocks until the value becomes positive. The V operation increments the value and wakes a blocked process if any are waiting. These operations ensure mutual exclusion when used appropriately around critical sections. Semaphores are classified as binary or counting. A binary semaphore, restricted to the values 0 and 1, functions similarly to a mutex for enforcing exclusive access to a single resource, such as a critical section. For example, initializing a binary semaphore free to 1 allows mutual exclusion as follows:
P(free);
critical section;
V(free);
This ensures only one process enters the critical section at a time. In contrast, a counting semaphore permits values greater than 1, enabling coordination of multiple identical resources, such as buffer slots in a producer-consumer arrangement. In modern systems, semaphores are implemented via APIs like POSIX, which provides functions such as sem_wait and sem_post for unnamed semaphores of type sem_t. The sem_wait function atomically decrements the value if positive or blocks the calling thread if the value is zero, mirroring the P operation. Conversely, sem_post increments the value and unblocks a waiting thread if any exist, akin to V. These functions integrate with mutexes for protecting shared-data manipulations in multithreaded environments. Condition variables complement semaphores by providing signaling mechanisms within monitor-like structures for efficient waiting on specific conditions without busy-waiting. Introduced by C. A. R. Hoare, a condition variable is associated with a monitor that inherently enforces mutual exclusion via an internal mutex. The wait operation on a condition variable atomically releases the monitor's mutex and blocks the thread until signaled, then reacquires the mutex upon resumption. The signal operation wakes at least one waiting thread, transferring control immediately if performed within the monitor under Hoare semantics. A broadcast variant wakes all waiting threads. In POSIX threads (pthreads), condition variables are implemented with pthread_cond_wait, pthread_cond_signal, and pthread_cond_broadcast, always used under a mutex lock. pthread_cond_wait releases the mutex, waits, and reacquires it. Implementations may produce spurious wakeups, requiring threads to recheck the predicate in a loop, such as while (!condition) pthread_cond_wait(&cv, &mutex);. This design ensures robustness against unexpected wakes without relying on the signal's meaning. A key application of these primitives is the producer-consumer problem, where producers add items to a fixed-size buffer and consumers remove them, requiring coordination to avoid overflow or underflow.
Three semaphores manage this: mutex (initialized to 1) for buffer access exclusion, full (initialized to 0) counting filled slots, and empty (initialized to the buffer size N) counting available slots. The producer is:
sem_wait(&empty);
sem_wait(&mutex);
// add item to buffer
sem_post(&mutex);
sem_post(&full);
The consumer follows symmetrically:
sem_wait(&full);
sem_wait(&mutex);
// remove item from buffer
sem_post(&mutex);
sem_post(&empty);
This prevents race conditions and ensures bounded waiting. Condition variables can similarly solve the problem within a monitor, using wait on empty/full conditions and signal after buffer operations.

Higher-Level Abstractions

Higher-level abstractions in mutual exclusion provide structured mechanisms that encapsulate synchronization logic, reducing the risk of errors like deadlocks and race conditions in concurrent programming. These abstractions build upon lower-level primitives such as mutexes and semaphores to offer safer, more intuitive interfaces for developers, often integrated into programming languages or operating systems. By hiding the details of locking and signaling, they promote modular code and better scalability in multi-threaded environments. Monitors represent a foundational abstraction for mutual exclusion, introduced as a programming language construct that associates data with procedures operating on it, ensuring that only one process can execute within the monitor at a time. Developed by C.A.R. Hoare, monitors automatically enforce mutual exclusion on entry to their procedures, eliminating the need for explicit locking, while providing condition variables for threads to wait on and signal events. This design supports safe coordination without busy-waiting, as waiting threads are suspended until signaled. In practice, monitors have influenced modern implementations, such as Java's synchronized methods and blocks, where the synchronized keyword acquires an intrinsic lock on an object, allowing only one thread to execute the protected code section. For example, in Java, a synchronized block can be written as synchronized (obj) { /* critical section */ }, which integrates condition variables via wait(), notify(), and notifyAll() methods on the object. This abstraction simplifies concurrent programming by tying exclusion to object monitors, preventing common pitfalls like forgotten unlocks. Read-write locks extend mutual exclusion to scenarios with asymmetric access patterns, permitting multiple concurrent readers but exclusive access for writers. 
Defined in the POSIX standard as pthread_rwlock_t, these locks use functions like pthread_rwlock_rdlock() for shared read access and pthread_rwlock_wrlock() for exclusive write access, optimizing throughput in read-heavy workloads such as database query processing. If a writer holds the lock, readers block until it is released, and vice versa, ensuring data consistency without unnecessary serialization of reads. This is particularly valuable in systems where reads vastly outnumber writes, as it can improve performance by allowing parallelism among readers while maintaining exclusion for modifications. Implementations in libraries like pthreads can enforce fairness policies to prevent writer starvation, often using queuing for pending requests. Transactional memory (TM) offers a lock-free abstraction for mutual exclusion by treating critical sections as atomic transactions, similar to database operations, where concurrent executions either commit fully or abort and retry. Proposed by Maurice Herlihy and J. Eliot Moss, hardware TM augments processors with instructions to speculatively execute code while buffering changes in a transactional log, detecting conflicts via read/write sets and rolling back on violations. Software TM emulates this in user space using compiler support and runtime conflict detection, avoiding explicit locks altogether. This approach simplifies programming by allowing developers to mark sections with annotations like atomic { /* code */ }, handling retries automatically and reducing deadlock risks. TM has been adopted in hardware such as Intel's Transactional Synchronization Extensions (TSX) and in software systems such as GCC's libitm runtime, providing composable mutual exclusion for complex data structures. Evaluations show TM outperforming fine-grained locking in some contended scenarios by minimizing overhead from lock acquisition. In modern programming languages, higher-level abstractions integrate mutual exclusion directly into type systems and standard libraries for enhanced safety.
Rust's std::sync::Mutex<T> wraps data in a type-safe mutex: callers must invoke lock() to obtain a guard that dereferences to the protected data, and the borrow checker enforces single-threaded mutable access at compile time, preventing data races without runtime ownership checks. This design ensures that Mutex<T> is Send and Sync only if T is Send, facilitating safe sharing across threads. Similarly, Go's sync.Mutex provides a simple mutual exclusion lock with Lock() and Unlock() methods, intended for protecting shared state among goroutines, and must not be copied after first use to avoid subtle bugs. These language-specific implementations abstract away platform details, promoting idiomatic concurrency: Rust emphasizes compile-time prevention of errors, while Go prioritizes simplicity and performance in its lightweight threading model.
