Dekker's algorithm
Dekker's algorithm is a software-based solution to the mutual exclusion problem in concurrent programming, enabling two processes to safely access a shared resource without hardware support, by using only atomic reads and writes to a small number of shared boolean flags and an integer turn variable.[1] It ensures that at most one process enters its critical section at a time, while also guaranteeing progress (no deadlock) and bounded waiting (no indefinite postponement).[2]
The algorithm was attributed to Dutch mathematician Th. J. Dekker by computer scientist Edsger W. Dijkstra in his 1965 manuscript Cooperating sequential processes (EWD 123), marking it as the first known correct implementation for two processes relying solely on shared memory variables.[1] Prior attempts by Dijkstra himself had flaws, but Dekker's version resolved them by introducing a turn variable to break symmetry when both processes simultaneously express interest in the critical section.[1] The core mechanism involves each process setting a flag to indicate its desire to enter the critical section, then politely yielding if the other process's flag is set and the turn favors the opponent; upon exiting, the process resets its flag and updates the turn to the other.[3]
Dekker's algorithm laid foundational groundwork for subsequent mutual exclusion solutions, including Peterson's algorithm for two processes and Dijkstra's generalization to N processes, influencing modern concurrent programming despite its complexity for practical use today.[2] Its proof of correctness relies on invariants ensuring that the flags and turn variable prevent simultaneous critical section entry, with no assumptions about process speeds.[1]
Background
Critical section problem
A critical section refers to a segment of code in a program where a process or thread accesses shared resources, such as variables, data structures, or files, that must be protected from concurrent modifications by other processes to prevent errors.[4] In concurrent programming, where multiple processes execute simultaneously, the absence of synchronization mechanisms can lead to race conditions, in which the outcome of operations depends on the unpredictable interleaving of process executions, resulting in data inconsistency or corruption.[5] For instance, if two processes attempt to increment a shared counter without coordination, one process might read the value, increment it locally, and write it back, only for the second process to overwrite the update with its own stale read, yielding an incorrect final value.[4]
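The lost-update scenario just described can be made concrete by writing out one bad interleaving explicitly. The following is an illustrative single-threaded simulation of the two processes' steps, not source text:

```python
counter = 0            # shared counter

a = counter            # process A reads 0
b = counter            # process B reads 0, before A writes back
counter = a + 1        # A writes 1
counter = b + 1        # B overwrites with its own stale read + 1

assert counter == 1    # two increments happened, but one update was lost
```

With synchronization, process B's read would have been forced to happen after A's write, and the counter would end at 2.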
The critical section problem is particularly pronounced in the two-process scenario, where two processes share a resource and alternate between non-critical sections (where they perform independent computations) and attempts to enter their critical sections. This setup assumes asynchronous execution, meaning processes proceed at arbitrary speeds without a global clock, and that reads and writes to shared variables are indivisible atomic operations, but no specialized hardware primitives (like test-and-set instructions) are available for synchronization.[6] A key requirement for resolving this issue is mutual exclusion, ensuring that only one process enters its critical section at a time.[4]
In the 1960s, the emergence of multiprogramming systems—designed to maximize utilization of expensive computing hardware by allowing multiple programs to reside in memory and share the CPU through time-sharing—intensified these concurrency challenges, as processes could interleave unpredictably and corrupt shared data.[7] This historical context motivated early researchers, including T. J. Dekker, to address the problem in software for two processes, laying the groundwork for solutions amid the growing complexity of systems like those developed at universities and labs in Europe and the United States.[8]
Requirements for solutions
Solutions to the mutual exclusion problem in concurrent programming must satisfy several essential properties to ensure correct and fair access to shared resources. Mutual exclusion and progress, originally articulated by Edsger W. Dijkstra in his analysis of cooperating sequential processes, form the foundation for algorithms like Dekker's.[1]
The primary property is mutual exclusion, which guarantees that at most one process can execute in its critical section at any given time. This prevents race conditions where multiple processes simultaneously modify shared data, leading to inconsistent states. Without this, the integrity of the shared resource cannot be maintained.[9]
A second key requirement is progress, ensuring that if no process is currently in its critical section and at least one process wishes to enter, then the selection of the next process to enter cannot be postponed indefinitely. In other words, the system must avoid deadlock, where processes are unable to proceed despite no active critical section execution. This property demands that the decision process terminates in finite time, regardless of relative process speeds.[1]
Additionally, bounded waiting (also known as lockout freedom) requires that no process is denied entry to its critical section indefinitely; there must exist a bound on the number of times other processes can enter the critical section while a given process is waiting. This prevents starvation, where one process is perpetually overtaken by others.[9]
These solutions operate under specific assumptions: exactly two processes are involved, they communicate via shared memory variables with atomic read and write operations, no assumptions are made about process speeds or priorities, and execution in the non-critical section is finite. Achieving these properties is non-trivial without hardware-supported atomic operations beyond basic reads and writes, as interleavings of process executions can lead to subtle failures in exclusion or progress if not carefully managed.[1][9]

Algorithm description
Shared variables
Dekker's algorithm relies on a small set of shared memory variables to enable mutual exclusion for two concurrent processes without relying on specialized hardware instructions. The core variables include two boolean flags, conventionally named flag[0] and flag[1], where each flag corresponds to one process and is set to true by that process to indicate its desire to enter the critical section.
Complementing the flags is a single integer variable turn, which can hold values 0 or 1, serving to assign priority to one of the processes when both flags are simultaneously true.
These variables are initialized with both flags set to false, signaling that neither process initially seeks critical section access, and turn assigned an arbitrary value of 0 or 1.[10]
In operation, the flags primarily communicate intent, allowing processes to detect potential contention, while the turn variable resolves such conflicts by enforcing an order of service, thereby preventing deadlock and promoting progress.
The correctness of the algorithm presupposes that all reads and writes to these shared variables are atomic operations, executed indivisibly relative to the other process.[11]
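The shared state is small enough to write down directly. A minimal Python sketch of the variables and their initial values (names mirror the article's flag and turn; in CPython, individual reads and writes of a list element or an integer are effectively indivisible, which stands in for the atomicity assumption):

```python
# Shared variables for Dekker's algorithm, initialized as described above.
flag = [False, False]  # flag[i] is True while process i wants the critical section
turn = 0               # index of the process favored when both flags are True
```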
Process protocols
Dekker's algorithm employs a symmetric protocol for two processes, labeled Process 0 and Process 1, which use indices 0 and 1 to access shared variables flag[0], flag[1], and turn. Each process follows an identical structure but substitutes its own index for i and the opponent's as j = 1 - i, ensuring fairness through the interplay of flags indicating intent to enter the critical section and the turn variable resolving contention.[12]
The entry protocol begins with the process setting its own flag to true, signaling its desire to enter the critical section. It then enters a loop checking the opponent's flag: if the opponent also wants entry (opponent's flag is true) and the turn favors the opponent, the process temporarily unsets its own flag to yield and waits in an inner loop until the turn changes to its favor before resetting its flag and retrying the check. This repeats until the conditions allow entry, at which point the critical section is accessed. The exit protocol is straightforward: the process unsets its own flag and sets the turn to favor the opponent, permitting the other process to proceed if interested.[12][8]
The full protocol for each process operates within an infinite loop encompassing non-critical, entry, critical, and exit sections, as shown in the following pseudocode for Process P_i where j = 1 - i:
do {
    // non-critical section
    flag[i] = true;
    while (flag[j]) {
        if (turn == j) {
            flag[i] = false;
            while (turn == j) {
                // busy wait
            }
            flag[i] = true;
        }
    }
    // critical section
    turn = j;
    flag[i] = false;
    // non-critical section
} while (true);
Process 0 uses i = 0, j = 1; Process 1 uses i = 1, j = 0. The busy waits implement spinlocks without additional synchronization primitives.[12]
Consider an example trace where Process 1 remains idle (flag[1] = false throughout) and Process 0 attempts entry, assuming initial turn = 0. Process 0 executes its non-critical section, then sets flag[0] = true. The while (flag[1]) condition evaluates to false since flag[1] is false, allowing immediate entry to the critical section without looping or yielding. Upon exit, Process 0 sets turn = 1 and flag[0] = false, completing the cycle and returning to non-critical execution. This demonstrates uncontested entry when one process is inactive.[12]
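The protocol can be exercised as a runnable sketch: two Python threads protect a shared counter with Dekker's entry and exit code. This is an illustration, not source text; CPython's global interpreter lock makes the individual list and integer reads and writes indivisible and sequentially consistent, which stands in for the algorithm's atomicity assumption, and shortening the thread switch interval merely speeds up the busy-waiting.

```python
import sys
import threading

sys.setswitchinterval(1e-4)  # faster thread switching so busy-waits resolve quickly

flag = [False, False]  # flag[i]: process i wants to enter its critical section
turn = 0               # tie-breaker when both flags are set
counter = 0            # shared resource protected by the algorithm
ITERS = 1000

def process(i):
    global turn, counter
    j = 1 - i
    for _ in range(ITERS):
        flag[i] = True                # announce intent to enter
        while flag[j]:                # opponent is also interested
            if turn == j:             # opponent has priority: yield
                flag[i] = False
                while turn == j:
                    pass              # busy wait until the turn changes
                flag[i] = True        # re-announce and recheck the outer loop
        counter += 1                  # critical section
        turn = j                      # exit: hand priority to the opponent
        flag[i] = False

threads = [threading.Thread(target=process, args=(i,)) for i in (0, 1)]
for t in threads:
    t.start()
for t in threads:
    t.join()

assert counter == 2 * ITERS  # no lost updates: mutual exclusion held
```

On hardware with weaker memory ordering and no interpreter lock (e.g. the same code in C without sequentially consistent atomics), the compiler or CPU may reorder the flag and turn accesses, so a faithful native port needs explicit sequentially consistent atomic operations.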
Correctness analysis
Mutual exclusion property
Dekker's algorithm ensures mutual exclusion by guaranteeing that at most one process can be in its critical section at any time. The proof analyzes the entry conditions of the processes in terms of the shared variables flag[0], flag[1], and turn. A process P_i (where i is 0 or 1, and j = 1 - i) sets flag[i] = true before attempting entry and then runs its entry loop, whose effect is that P_i may proceed only when ¬(flag[j] ∧ turn = j) holds: either flag[j] is false (indicating P_j does not intend to enter) or turn = i (indicating it is P_i's turn, so P_i never yields and eventually observes flag[j] cleared). The process then enters the critical section with flag[i] still true, clearing it only upon exit. Additionally, after exiting the critical section, the process sets turn = j to yield priority to the other process.[2][3]
Consider the case where only one process intends to enter the critical section, say flag[0] = true and flag[1] = false. Process P_0 will exit its entry loop regardless of the value of turn, since ¬flag[1] holds, allowing P_0 to enter alone. Process P_1 is not contending, since flag[1] = false indicates it has not requested entry. Similarly, if flag[1] = true and flag[0] = false, P_1 enters alone. In these non-conflicting cases, mutual exclusion is trivially satisfied, as only one process seeks access.[2]
In the conflict case where both flag[0] = true and flag[1] = true, the turn variable breaks the tie. Process P_0 passes its entry loop only if turn = 0, while P_1 passes only if turn = 1. Thus, depending on the current value of turn, exactly one process can proceed to its critical section, while the other remains waiting. Upon exiting the critical section, the process sets turn to the value favoring the other, allowing it to enter after the current occupant exits. This mechanism prevents simultaneous entry during conflicts.[3]
The algorithm maintains the invariant that a process inside its critical section passed its entry test: (P_0 in its critical section ⇒ ¬flag[1] ∨ turn = 0) ∧ (P_1 in its critical section ⇒ ¬flag[0] ∨ turn = 1). When both flags are true, these conditions reduce to turn = 0 and turn = 1 respectively, which cannot hold simultaneously, preventing both processes from satisfying their entry conditions at once.[2]
To formalize mutual exclusion, proceed by contradiction: assume both processes are simultaneously in their critical sections. At this point, flag[0] = true and flag[1] = true (since flags are cleared only after exit). For both to have entered, P_0's entry condition ¬(flag[1] ∧ turn = 1) must hold, simplifying to turn = 0 (given flag[1] = true). Similarly, P_1's condition requires turn = 1. This yields turn = 0 ∧ turn = 1, a contradiction. Therefore, both cannot be in their critical sections at once. The invariant and loop conditions preserve this property across all executions.[3]
The possible states of the shared variables and corresponding entry permissions are summarized in the following table, illustrating that mutual exclusion holds in all combinations (assuming processes have set their flags appropriately when attempting entry):
| flag[0] | flag[1] | turn | P0 can enter | P1 can enter |
|---|---|---|---|---|
| true | true | 0 | yes | no |
| true | true | 1 | no | yes |
| true | false | 0 | yes | N/A |
| false | true | 1 | N/A | yes |
This table demonstrates that when both flags are true, exactly one process can enter based on turn, enforcing exclusion.[2]
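The case analysis above can also be checked mechanically. The following sketch (an illustration under the article's atomicity assumptions, not part of the source) models each process as a small state machine with one atomic read or write per step, enumerates every reachable interleaving by breadth-first search, and asserts that no reachable state places both processes in the critical section:

```python
from collections import deque

# Program-counter values for one process, matching the pseudocode:
# 0 set-flag, 1 outer-while test, 2 turn test, 3 clear-flag (yield),
# 4 inner busy-wait test, 5 re-set flag, 6 critical section,
# 7 exit: set turn, 8 exit: clear flag.
CS = 6

def step(pc, i, flags, turn):
    """One atomic action of process i; returns (new_pc, new_flags, new_turn)."""
    j = 1 - i
    f = list(flags)
    if pc == 0:                       # flag[i] = true
        f[i] = True
        return 1, tuple(f), turn
    if pc == 1:                       # while (flag[j])
        return (2 if flags[j] else CS), flags, turn
    if pc == 2:                       # if (turn == j)
        return (3 if turn == j else 1), flags, turn
    if pc == 3:                       # flag[i] = false
        f[i] = False
        return 4, tuple(f), turn
    if pc == 4:                       # while (turn == j)  busy wait
        return (4 if turn == j else 5), flags, turn
    if pc == 5:                       # flag[i] = true, recheck outer while
        f[i] = True
        return 1, tuple(f), turn
    if pc == CS:                      # leave the critical section
        return 7, flags, turn
    if pc == 7:                       # turn = j
        return 8, flags, j
    if pc == 8:                       # flag[i] = false, back to non-critical
        f[i] = False
        return 0, tuple(f), turn

def check_mutual_exclusion():
    init = (0, 0, (False, False), 0)  # (pc0, pc1, (flag0, flag1), turn)
    seen, frontier = {init}, deque([init])
    while frontier:
        pc0, pc1, flags, turn = frontier.popleft()
        assert not (pc0 == CS and pc1 == CS), "both processes in critical section"
        for i, pc in ((0, pc0), (1, pc1)):
            npc, nflags, nturn = step(pc, i, flags, turn)
            state = (npc, pc1, nflags, nturn) if i == 0 else (pc0, npc, nflags, nturn)
            if state not in seen:
                seen.add(state)
                frontier.append(state)
    return len(seen)

check_mutual_exclusion()
```

Because the state space is finite (two program counters, two boolean flags, and one turn bit), the search terminates and covers every interleaving allowed by the atomicity assumption.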
Liveness properties
Dekker's algorithm satisfies the liveness properties of progress and bounded waiting, ensuring that processes neither deadlock nor starve when attempting to enter the critical section.[14] Progress is guaranteed because if both processes wish to enter their critical sections, the turn variable assigns priority to one, allowing it to proceed, while the other yields; if only one process is interested, it enters immediately without interference.[14] This dynamic priority mechanism, combined with atomic read-write operations on the shared variables and the assumption that every critical section completes in finite time, prevents indefinite postponement, as no reachable state leaves all interested processes permanently blocked.[2]
The algorithm also ensures bounded waiting. Upon exiting the critical section, a process sets turn to favor the opponent, so if the opponent is waiting it gains priority and will enter before the exiting process can re-enter; a waiting process is therefore bypassed at most once.[2] This alternation under contention resolves conflicts fairly.
Deadlock is avoided because the combination of flag variables and turn prevents mutual waiting states; if both flags are set indicating interest, the turn value explicitly favors one process, breaking any potential cycle.[14] Earlier flawed algorithms, such as simple flag-based protocols without a tie-breaker, suffered deadlocks when both processes simultaneously signaled interest without resolution, a problem Dekker's design circumvents through its priority mechanism.[2]
Historical context
Origins and development
Dekker's algorithm originated in the late 1950s at the Mathematical Centre in Amsterdam, where Dutch mathematician Th. J. Dekker developed the first correct software solution to the mutual exclusion problem for two processes. In 1959, Edsger W. Dijkstra, a colleague at the Centre, posed the challenge of synchronizing two cyclic processes to ensure only one accessed a shared critical section at a time, amid growing interest in concurrent programming. Dekker provided a complete solution, including a proof of correctness, within three hours, marking a pivotal moment in early concurrency research.[15]
The algorithm emerged from practical challenges in multiprogramming on resource-constrained early computers, such as the Electrologica X1, which Dijkstra and his team used for implementing systems like the first ALGOL 60 compiler. These machines, with limited memory (e.g., 4K words on the X1), demanded efficient software mechanisms for process coordination without hardware support, driving innovations in shared-memory synchronization. Dekker's work remained an internal contribution at the Mathematical Centre until Dijkstra presented a solution to the problem in his 1965 Communications of the ACM paper, "Solution of a Problem in Concurrent Programming Control," and attributed the algorithm to Dekker in his 1965 manuscript Cooperating sequential processes (EWD 123).[16][1]
Prior to Dekker's breakthrough, several informal software attempts at mutual exclusion had been explored in the nascent field of concurrent programming, but they were inherently flawed, often leading to deadlocks, starvation, or violations of progress requirements. Dijkstra noted in his 1965 publication that although the problem had been articulated by various researchers, no prior solution fully met the criteria of mutual exclusion, progress, and bounded waiting, underscoring the trial-and-error evolution that preceded a rigorous approach.[16] This context of iterative refinement at the Mathematical Centre formalized Dekker's algorithm as a foundational achievement in software-based concurrency control.
Influence and legacy
Dekker's algorithm holds a pivotal place in the history of concurrent programming as the first proven correct software-only solution to the mutual exclusion problem for two processes, relying solely on shared memory variables without hardware support.[1] This breakthrough, attributed to Th. J. Dekker in Edsger W. Dijkstra's 1965 manuscript Cooperating sequential processes (EWD 123) and presented by Dijkstra in his contemporaneous Communications of the ACM paper, laid foundational groundwork for operating systems theory by demonstrating that synchronization could be achieved purely through software mechanisms, influencing the development of reliable concurrent systems.[16][1][14]
The algorithm directly inspired subsequent innovations, notably Gary L. Peterson's 1981 algorithm, which simplifies Dekker's approach while retaining the use of flags and a turn variable to ensure mutual exclusion and liveness for two processes. It also paved the way for extensions to multiple processes, such as Leslie Lamport's 1974 bakery algorithm, which generalizes the priority-based contention resolution ideas from Dekker to n processes using ticket numbers for fair access.
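The simplification Peterson achieved can be seen in a minimal Python sketch of his entry and exit protocol (an illustration for comparison, not source text): a single combined wait condition replaces Dekker's nested yielding loops, and the exit needs no turn update. Here the turn variable is kept in a one-element list so both callers share it by reference.

```python
def peterson_enter(i, flag, turn_box):
    """Entry protocol of Peterson's algorithm for process i (opponent j = 1 - i)."""
    j = 1 - i
    flag[i] = True                        # announce intent
    turn_box[0] = j                       # give the tie-break to the opponent
    while flag[j] and turn_box[0] == j:   # one combined wait, no yielding loop
        pass                              # busy wait

def peterson_exit(i, flag, turn_box):
    flag[i] = False                       # unlike Dekker's exit, turn is untouched

# Uncontended entry: with flag[1] False, process 0 passes straight through.
flag, turn_box = [False, False], [0]
peterson_enter(0, flag, turn_box)
assert flag[0] is True
peterson_exit(0, flag, turn_box)
assert flag[0] is False
```

Setting turn to the opponent on entry (rather than on exit) is what lets Peterson's single wait condition subsume Dekker's explicit yielding.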
In modern contexts, Dekker's algorithm serves as a cornerstone for lock-free programming techniques, where software-based synchronization avoids traditional locks to improve scalability in multiprocessor environments.[17] It remains a staple in operating systems textbooks for illustrating core concurrency principles and is frequently implemented in educational simulations to teach mutual exclusion without hardware primitives. Despite its complexity relative to simpler variants like Peterson's, which has largely superseded it for practical two-process scenarios, Dekker's value endures in understanding software synchronization fundamentals, even as hardware instructions like test-and-set dominate real-world applications.[18]
Ongoing analyses in concurrency literature, including formal verifications up to the 2020s, continue to explore and extend Dekker's ideas, confirming its enduring theoretical significance.[19]