
Work stealing

Work stealing is a scheduling strategy in parallel computing designed to dynamically balance computational load across multiple processors by enabling idle processors to "steal" tasks from the private deques of busy processors, thereby reducing idle time and enhancing overall efficiency in executing multithreaded programs. Formalized by Robert D. Blumofe and Charles E. Leiserson, whose analysis culminated in a 1999 Journal of the ACM paper, this approach contrasts with work sharing, where tasks are pushed to a central queue, by minimizing thread migration and contention through localized operations on double-ended queues (deques).

The algorithm for work stealing typically assigns each processor a private deque to manage ready threads or tasks. A processor executes tasks by popping them from the bottom of its own deque (treating it as a stack for local work) and, when idle, randomly selects another processor's deque and steals a task from the top, which acts as a queue, facilitating safe concurrent access without blocking. This asymmetric access—local push/pop from one end and remote steals from the other—ensures low contention and supports non-blocking implementations using atomic operations such as compare-and-swap. Originally proposed for fully strict multithreaded computations on MIMD-style machines, work stealing has been extended to handle irregular parallelism and multiprogrammed environments.

In terms of efficiency, work stealing provides strong theoretical guarantees: for a computation with total work T_1 (minimum serial time) and critical-path length T_\infty (longest dependency chain), the expected execution time on P processors is T_1/P + O(T_\infty), achieving near-linear speedup when the parallelism T_1/T_\infty exceeds P. Space usage is bounded by O(S_1 P), where S_1 is the minimum serial space, and communication overhead remains low compared to alternatives. These bounds hold even under adversarial scheduling by the operating system kernel, making the approach robust for dynamic workloads.

Work stealing has become a cornerstone of modern parallel programming frameworks due to its simplicity, scalability, and adaptability to fine-grained tasks. It powers the runtime of MIT Cilk, where it enables efficient execution of recursive divide-and-conquer algorithms. Intel's Threading Building Blocks (TBB) employs work stealing in its task scheduler to dynamically balance workloads across cores in C++ applications. Similarly, the Java Fork/Join framework, introduced in Java 7, uses work stealing to manage recursive task decomposition, supporting high-throughput parallelism in JVM-based systems. Variants continue to evolve, addressing challenges like cache efficiency and energy consumption in multicore and distributed settings.

Introduction

Definition and Motivation

Work stealing is a decentralized scheduling strategy employed in parallel computing to achieve load balancing in multithreaded programs, wherein idle processors proactively "steal" tasks from the local double-ended queues (deques) of busy processors. This approach contrasts with centralized scheduling mechanisms by distributing the responsibility for load redistribution across processors, enabling adaptive handling of computational workloads without requiring prior knowledge of task granularities or execution times.

The primary motivation for work stealing arises in environments with dynamic parallelism, where task sizes and creation patterns vary unpredictably, leading to potential load imbalances that static scheduling techniques cannot effectively mitigate. By adapting at runtime, work stealing minimizes idle time and enhances overall throughput, particularly in applications exhibiting irregular parallelism such as recursive algorithms or data-driven computations. This dynamic redistribution proves essential for maintaining high processor utilization on multiprocessor systems, where uneven work distribution could otherwise result in significant performance degradation.

Key benefits of work stealing include its inherent fault tolerance, achieved through task locality that preserves data access patterns and reduces recovery overhead in the event of processor failures; scalability to large numbers of processors due to its decentralized nature; and low synchronization overhead facilitated by lock-free deque operations. For instance, in divide-and-conquer algorithms like parallel mergesort, uneven subtree sizes can cause some processors to finish early while others remain burdened; work stealing allows the idle processors to extract subtasks from the deques of busy ones, balancing the load and ensuring efficient completion.

Historical Development

Work stealing emerged in the 1990s as a load-balancing strategy for parallel and multithreaded systems, addressing the challenges of dynamic workload distribution on multiprocessors. It was first formalized by Robert D. Blumofe and Charles E. Leiserson in their 1993 work at MIT, detailed in the paper "Space-Efficient Scheduling of Multithreaded Computations." This work introduced a randomized work-stealing scheduler that ensures efficient execution of fully strict multithreaded computations by allowing idle processors to "steal" tasks from busy ones, while bounding space usage to linear in the computation's size. The approach was designed to minimize scheduling overhead and achieve near-optimal parallelism in systems like Cilk, which targeted fine-grained, dynamic parallelism for applications such as scientific computing.

Key theoretical advancements followed in 1999 with Blumofe and Leiserson's paper "Scheduling Multithreaded Computations by Work Stealing," published in the Journal of the ACM, which provided a rigorous analysis of the algorithm's efficiency: the expected execution time on P processors is T_1/P + O(T_\infty), where T_1 is the total work and T_\infty the critical-path length, with the total work performed remaining linear in the computation's size. This work solidified work stealing's guarantees for fully strict programs, influencing subsequent scheduler designs.

Practical adoption accelerated in the 2000s. Doug Lea's Java Fork/Join framework, proposed in 2000 and integrated into Java 7 in 2011 via JSR-166, adapted work stealing for recursive task decomposition, enabling scalable parallelism in managed environments. Similarly, Intel's Threading Building Blocks (TBB) library, first released in 2006, incorporated work stealing in its task scheduler to support portable, high-performance parallelism in C++ applications.

The technique evolved with extensions for robustness in shared-memory systems, notably the non-blocking work-stealing algorithm by Nimar S. Arora, Robert D. Blumofe, and C. Greg Plaxton in 1998, which eliminated locks using atomic compare-and-swap operations to support multiprogrammed multiprocessors and reduce contention. More recent innovations include formally verified implementations, such as the block-based work stealing (BWoS) design by Wang et al., presented in 2023 at the USENIX Symposium on Operating Systems Design and Implementation (OSDI), which partitions per-core queues into blocks for improved performance and verified correctness guarantees under high contention. Influential systems beyond Cilk include modern runtimes like the Go programming language's scheduler, which has employed work stealing since Go 1.1 (2013), with significant improvements in Go 1.5, to balance goroutines across OS threads, and OpenCilk, an open-source extension of Cilk built on the LLVM compiler infrastructure since 2019 for optimized parallel code generation.

Core Concepts

Execution Model

In the work stealing execution model, a parallel computation is executed by multiple processors, each maintaining its own double-ended queue (deque) to hold ready tasks, typically in the form of threads or subtasks. When a processor spawns new tasks during execution, it pushes them onto the bottom of its local deque, following a last-in, first-out (LIFO) order for its own task consumption to prioritize recently created work. Conversely, when an idle processor attempts to steal work from another, it takes tasks from the top of the victim's deque, enforcing a first-in, first-out (FIFO) order that balances the load by transferring older, potentially larger subtasks. This asymmetric access pattern—LIFO for owners and FIFO for thieves—preserves locality for the task creator while promoting parallelism through theft.

Processors operate in one of two primary states: busy or idle. A busy processor repeatedly pops and executes tasks from the bottom of its own deque until the deque becomes empty, at which point it transitions to idle. An idle processor randomly selects a victim processor and attempts to steal a task from the top of that victim's deque; if successful, it executes the stolen task and returns to the busy state, and if unsuccessful, it selects another victim at random and tries again, repeating until it finds a task to execute or the computation is complete. This random selection of victims helps distribute load evenly across processors without centralized coordination.

Synchronization in the execution model relies on lock-free atomic operations, such as compare-and-swap (CAS), to manage concurrent access to deques, avoiding the overhead and potential deadlocks associated with traditional locks. For instance, push and pop operations on a deque are implemented atomically to ensure that only one worker can modify the structure at a time, with failed attempts resolved through retries.

The model is particularly suited to computations structured as recursive divide-and-conquer paradigms, where tasks spawn child subtasks upon encountering parallelism, adding them to the bottom of the owner's deque for immediate LIFO execution. This dynamic task creation ensures that thieves primarily acquire ready, independent subtasks from the top of the deque, maintaining progress without violating dependencies in the computation graph. Such interactions enable scalable parallelism on shared-memory multiprocessors, with each processor acting autonomously in the runtime environment.

Task Representation and Deques

In work stealing, tasks are represented as closures or task descriptors that encapsulate the remaining computation along with any associated state, enabling dynamic parallelism. A task descriptor typically includes a function pointer, arguments, and a frame for local state, while a continuation captures the point in the program to resume after task execution. This structure supports operations such as spawn, which creates a task for concurrent execution without blocking the parent, and sync, which suspends the current task until all spawned children complete, ensuring their results are available before proceeding.

The foundational data structure for managing tasks is a double-ended queue (deque) assigned to each processor, allowing efficient local access and remote stealing. Deques are typically implemented as array-based structures with atomic pointers to the top and bottom indices, ensuring thread safety in concurrent environments without locks. These pointers are updated using compare-and-swap (CAS) operations to handle races between the owning processor and potential thieves. The owner performs operations exclusively on the bottom of the deque: it pushes newly spawned tasks to the bottom for sequential execution and pops from the bottom to execute the next local task. In contrast, a thief (an idle processor) attempts to steal a task from the top of another processor's deque, succeeding only if the deque is non-empty, again using CAS to atomically update the top pointer and claim the task. This asymmetric access—bottom for the owner, top for thieves—maintains LIFO order for the owner while providing FIFO-like stealing for load balancing.

This design offers key advantages: bottom-only access by the owner prevents thieves from interfering with the currently active task, preserving the illusion of sequential execution, and the array-based deque with atomic updates ensures amortized O(1) cost for push, pop, and steal operations, supporting scalable parallelism across multiprocessors.
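The following Java sketch illustrates this design in the style of the Arora-Blumofe-Plaxton and Chase-Lev deques. It is a minimal, fixed-capacity illustration, not a production implementation: it omits resizing (indices grow monotonically, so a real implementation uses a circular or growable array) and the finer memory-ordering optimizations; the class and method names are ours.

import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicReferenceArray;

// Simplified fixed-capacity work-stealing deque (illustrative sketch).
final class WorkStealingDeque<T> {
    private final AtomicReferenceArray<T> tasks;
    private final AtomicInteger top = new AtomicInteger(0); // thieves steal here
    private volatile int bottom = 0;                        // owner pushes/pops here

    WorkStealingDeque(int capacity) {
        tasks = new AtomicReferenceArray<>(capacity);
    }

    // Owner only: push a newly spawned task onto the bottom.
    void pushBottom(T task) {
        tasks.set(bottom, task);   // store the task first
        bottom = bottom + 1;       // then publish it (only the owner writes bottom)
    }

    // Owner only: pop the most recently pushed task (LIFO).
    T popBottom() {
        int b = bottom - 1;
        bottom = b;
        int t = top.get();
        if (b < t) { bottom = t; return null; }   // deque was already empty
        T task = tasks.get(b);
        if (b > t) return task;                   // at least two tasks: no race possible
        // Exactly one task left: race with thieves, resolved by CAS on top.
        boolean won = top.compareAndSet(t, t + 1);
        bottom = t + 1;
        return won ? task : null;
    }

    // Thieves: steal the oldest task from the top (FIFO end).
    T steal() {
        int t = top.get();
        int b = bottom;
        if (t >= b) return null;                  // appears empty
        T task = tasks.get(t);
        // Claim the task only if no other thief (or the owner) advanced top first.
        return top.compareAndSet(t, t + 1) ? task : null;
    }
}

Note how contention arises only when the deque holds a single task: otherwise the owner's bottom-end operations and the thieves' top-end CAS never touch the same slot.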

Algorithm

Standard Work-Stealing Procedure

In the standard work-stealing procedure, each processor maintains its own double-ended queue (deque) of ready tasks and operates in a loop, executing tasks until global termination. The core steps involve: (1) if the local deque is non-empty, the processor pops a task from the bottom of its deque and executes it; (2) if the local deque is empty, the processor selects a random other processor and attempts to steal a task from the top of that processor's deque; (3) if the steal succeeds, the processor executes the stolen task; and (4) the processor repeats this process until all tasks are completed.

The procedure incorporates operations for spawning child tasks and synchronizing with them. When a processor spawns a child task during execution, it pushes the child task onto the bottom of its local deque. Synchronization (sync) at a join point waits for all child tasks spawned since the last sync to complete, ensuring dependencies are resolved before proceeding. This is typically implemented using a join counter that tracks unfinished children; the worker continues executing tasks until the counter reaches zero. Implementations vary: stalling schedulers block the worker at sync until children complete, while greedy schedulers have the worker continue executing or stealing tasks until the join condition is met, improving load balance. The following pseudocode outlines the loop for a worker in the standard procedure, including spawn and sync mechanisms (simplified; actual implementations handle task frames and join counters):
while (true) {
    if (local_deque not empty) {
        task = pop_bottom(local_deque);  // Atomic pop from bottom
        execute(task);
    } else {
        victim = random_other_processor();
        task = try_steal_top(victim_deque);  // Atomic steal from top
        if (task != null) {
            execute(task);
        } else {
            // Idle; check for termination
            if (all_deques_empty() && no_active_tasks()) break;
        }
    }
}

spawn(child_task) {
    push_bottom(local_deque, child_task);  // Push to bottom
}

sync() {
    // For greedy schedulers: continue the main loop, processing tasks until the task's join counter reaches zero
    // (all children completed) or termination
    // For stalling: block until children complete
}
Global termination occurs when all processors' deques are empty and there are no active tasks in the computation. To handle concurrency, the procedure relies on atomic operations such as compare-and-swap (CAS) for deque pops and steals, ensuring that races between a processor's local pops (from the bottom) and remote steals (from the top) are resolved safely without locks. This design allows multiple processors to access deques concurrently while maintaining consistency.

Child Stealing vs. Continuation Stealing

In child stealing, a spawning task pushes its ready child subtasks onto the owner's deque before executing its continuation, making the children immediately available for theft by other workers. This approach allows the owner to proceed with the remaining work after spawning, while thieves can access the explicit child tasks stored in the deque. Child stealing is commonly implemented in library-based systems, as it requires no special compiler support to expose subtasks.

In contrast, continuation stealing involves the owner executing the spawned child task while pushing a representation of its own continuation—the remaining code after the spawn point—onto the deque for potential theft. If a thief steals this continuation, it executes the unfinished portion of the original task, effectively resuming the parent's work on another worker. This method relies on compiler or runtime support to capture and resume continuations, often using techniques like stack allocation to avoid frequent dynamic task creation.

The trade-offs between these approaches center on implementation complexity, space efficiency, and execution semantics. Child stealing simplifies library integration but can consume unbounded space proportional to the number of spawned tasks, potentially leading to O(n) memory usage for n tasks in unbalanced computations. It also requires dynamic task allocation, which may introduce overhead from locks or garbage collection. Continuation stealing, however, bounds space to O(P) for P processors by limiting enqueued items to active continuations, reducing memory pressure in deep or recursive parallelism, though it demands more invasive language support and may disrupt compiler optimizations if not integrated at the IR level. Additionally, continuation stealing preserves the original serial execution order in the absence of theft, aiding predictable behavior for features like reduction operations.

Prominent systems exemplify these choices: child stealing is employed in Intel oneTBB and Microsoft's Parallel Patterns Library for straightforward library-level integration, while continuation stealing underpins the Cilk runtime, enabling efficient scheduling of recursive divide-and-conquer algorithms with bounded overhead.
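As a concrete illustration of child stealing in a library-based framework, the following Java Fork/Join sketch computes Fibonacci numbers (the example is ours; the API is standard java.util.concurrent): fork() pushes the child task onto the calling worker's deque, where an idle worker may steal it, while the caller continues executing its own remaining work.

import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Child stealing: fork() makes the child task available for theft while
// the spawning worker keeps executing its continuation locally.
class Fib extends RecursiveTask<Long> {
    private final int n;
    Fib(int n) { this.n = n; }

    @Override
    protected Long compute() {
        if (n < 2) return (long) n;
        Fib child = new Fib(n - 1);
        child.fork();                            // child pushed onto this worker's deque
        long other = new Fib(n - 2).compute();   // continuation executes locally
        return child.join() + other;             // wait for the (possibly stolen) child
    }

    public static void main(String[] args) {
        System.out.println(ForkJoinPool.commonPool().invoke(new Fib(30)));
    }
}

Under continuation stealing, the roles would be reversed: the spawning worker would begin Fib(n - 1) immediately and expose the rest of compute() for theft instead.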

Analysis

Time and Work

Work stealing exhibits perfect work efficiency, as the total computational work performed across all processors equals the sequential work T_1, the minimum time required to execute the computation on a single processor; each instruction in the multithreaded computation is executed exactly once, incurring no redundant operations or overhead beyond necessary scheduling actions. The time efficiency of work stealing is characterized by an expected parallel execution time of T_1/P + O(T_\infty) on P processors, where T_\infty denotes the critical-path length, the longest chain of dependent tasks that limits parallelism. This bound yields near-linear speedup relative to the available parallelism; moreover, with probability at least 1 - \epsilon, the execution time is T_1/P + O(T_\infty + \log P + \log(1/\epsilon)). In the worst case, the time can reach O(T_1 + P \cdot T_\infty) due to potential imbalances, but the expected performance avoids this by minimizing idle time through dynamic load redistribution.

The analysis relies on amortized techniques, including potential functions and delay-sequence arguments, to bound the overhead of steal operations; unsuccessful steal attempts are shown to have an amortized cost of O(1) per successful steal, ensuring that scheduling contention does not dominate the runtime. A key factor in achieving this efficiency is the randomized selection of victim processors for stealing, which probabilistically balances the load and reduces the likelihood of repeated failed attempts, as modeled by a balls-and-bins game that guarantees even distribution of work with high probability.
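As a worked example with hypothetical numbers: consider a computation with T_1 = 10^9 and T_\infty = 10^4, giving parallelism T_1/T_\infty = 10^5. On P = 100 processors, the expected time is roughly T_1/P + O(T_\infty) = 10^7 + O(10^4), so the critical-path term is negligible and the speedup is nearly 100. If instead P = 10^6, the bound is dominated by the O(T_\infty) term and adding processors no longer helps; near-linear speedup requires T_1/T_\infty \gg P.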

Space Usage

Work-stealing schedulers for fully strict multithreaded computations achieve a total space bound of O(P \cdot S_1), where P is the number of processors and S_1 is the minimum space required for a serial execution of the computation. This bound holds because each processor maintains at most S_1 live thread frames in its ready deque, owing to the parent-child structure of threads and the busy-leaves property that limits accumulation of unfinished subcomputations. Each task frame is stored exactly once across all deques and stacks at any time. Per-processor space usage is O(S_1), as each deque stores a bounded number of frames corresponding to the local subcomputation depth, plus constant overhead for the deque structure itself. The stealing mechanism ensures balanced distribution, preventing any single deque from growing disproportionately even under varying task assignments, and keeps per-processor space independent of the total number of processors.

To further optimize space, implementations often recycle task frames (activation frames) during execution, reusing memory for new threads instead of allocating fresh space, which maintains the overall bound of O(P \cdot S_1) while reducing allocation costs. Child stealing variants minimize continuation storage by having the spawning worker immediately continue the parent and push only child tasks to the deque, avoiding the need to hold continuation frames in the deque during child execution. A drawback of work stealing is the potential for temporary space peaks on individual processors during load imbalances, when tasks accumulate in a deque before thieves redistribute them; however, these peaks do not exceed the provable bound established by the analysis.
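For example, with hypothetical numbers: if a serial execution of a divide-and-conquer computation needs at most S_1 = 2 MB of activation-frame space, then on P = 32 processors the busy-leaves bound guarantees total space of O(P \cdot S_1) = 64 MB, regardless of how many tasks the computation spawns over its lifetime.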

Variants

Multiprogramming Variant

The multiprogramming variant of work stealing adapts the standard algorithm to environments where multiple programs or jobs concurrently share a multiprocessor system, such as time-shared servers or multitenant platforms. Each job maintains its own collection of per-thread deques for tasks, allowing intra-job stealing as in the conventional model, but extends this by permitting inter-job stealing when a processor exhausts its local job's work. This cross-job stealing helps maintain high processor utilization in dynamic, mixed workloads where jobs may have varying computational demands or undergo preemption by the operating system. The approach relies on minimal operating-system support to ensure fairness, avoiding the need for heavy time-slicing or centralized resource allocation.

Key features include a global mechanism—often provided by the OS or runtime—for assigning processors to job pools and coordinating steal attempts across jobs. For instance, processors prioritize stealing from their assigned job's deques but fall back to other jobs if necessary, using priorities or demand estimates to favor high-load jobs and reduce contention from underloaded ones. In the Balanced Work Stealing (BWS) scheduler, this is achieved by dynamically controlling the number of active "thieves" per job: the system tracks steal success rates and puts low-success thieves to sleep, while a "watchdog" ensures at least one active thief per job to prevent starvation. This balances load within and between jobs, adapting to OS-level preemption in time-sharing scenarios.

Implementations often integrate lightweight OS extensions, such as system calls for querying thread states and yielding cores directly to busy workers within the same job. The BWS system, built atop the Cilk++ runtime, requires only about 100 lines of Linux modifications to support these features, enabling fences or tickets to logically separate job queues without full isolation. Similarly, adaptive variants like demand-aware work stealing use runtime feedback on job progress to adjust stealing aggressiveness, employing per-job priority queues to throttle inter-job attempts on multi-core servers. These mechanisms ensure that stealing remains decentralized while accommodating multiprogrammed interference.

This variant improves overall system utilization in mixed workloads by minimizing idle time and wasteful stealing attempts, outperforming naive work stealing under multiprogramming. For example, BWS boosts average throughput by 12.5% and cuts unfairness (measured as slowdown relative to dedicated execution) from 124% to 20% across benchmarks including ray tracing. It also handles preemption better than single-job-focused schedulers, as controlled cross-job stealing redistributes work during OS context switches, leading to more predictable performance in shared environments without sacrificing scalability.
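A minimal sketch of the thief-throttling idea described above, assuming a hypothetical Victim.trySteal() interface (this is not the actual BWS code, which additionally relies on OS support): each worker counts failed steals and naps when its success rate drops, while an atomic count of awake thieves never falls to zero, playing the watchdog's role.

import java.util.Random;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.locks.LockSupport;

// Hypothetical sketch of BWS-style thief throttling for one job.
final class ThrottledThief {
    interface Victim { Runnable trySteal(); }        // assumed: returns null on failure

    private static final AtomicInteger awakeThieves = new AtomicInteger(0);
    private static final int MAX_FAILURES = 64;

    void stealLoop(Victim[] victims) {
        awakeThieves.incrementAndGet();
        Random rnd = new Random();
        int failures = 0;
        while (true) {
            Runnable task = victims[rnd.nextInt(victims.length)].trySteal();
            if (task != null) {
                task.run();
                failures = 0;                        // reset on a successful steal
            } else if (++failures >= MAX_FAILURES) {
                if (awakeThieves.decrementAndGet() == 0) {
                    awakeThieves.incrementAndGet();  // last thief stays awake (watchdog role)
                } else {
                    LockSupport.parkNanos(1_000_000); // nap ~1 ms to cut wasted steal attempts
                    awakeThieves.incrementAndGet();
                }
                failures = 0;
            }
        }
    }
}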

Hierarchical and Distributed Variants

Hierarchical work stealing extends the standard algorithm to multi-level architectures, such as clusters of processors or NUMA systems, by organizing tasks into nested deques that respect hardware topology. In this approach, processors are grouped into clusters, each maintaining local deques for intra-cluster stealing to minimize communication overhead and preserve cache locality. Group leaders manage additional global deques for inter-cluster stealing, where only larger tasks are transferred to reduce remote accesses. This structure ensures efficient load balancing by prioritizing local operations while allowing cross-cluster redistribution when necessary. The Qthreads library supports multi-level hierarchical scheduling for tasking runtimes: stealing occurs first within local sockets or nodes to exploit locality, escalating to higher levels only if local work is exhausted, thus adapting to high-performance computing (HPC) environments with multi-socket nodes. This design has been shown to improve scalability in distributed-memory clusters by reducing inter-node communication. As of 2025, while languages like Chapel use Qthreads for task management, they assign tasks in a round-robin fashion without work stealing.

Distributed variants adapt work stealing for networked environments, enabling remote deque access across nodes via message passing. In these systems, idle processors request tasks from remote deques, often using protocols like MPI for communication, which introduces challenges from network latency and bandwidth constraints. To mitigate this overhead, techniques such as lazy stealing defer remote requests until local queues are critically empty, or proactive offloading moves work to balance loads without immediate stealing. Examples include extensions to MPI runtimes that integrate work stealing for hybrid MPI+OpenMP applications, allowing selective remote task execution based on runtime monitoring. Recent advancements, such as the integration of work stealing into the Celerity distributed runtime (as of 2025), enable dynamic load balancing in large-scale distributed environments, with scalability demonstrated on synthetic benchmarks. In actor models, distributed work stealing schedules message-driven computations by stealing actors or tasks across distributed nodes, preserving locality through affinity-aware victim selection. Recent work also addresses fault tolerance in these settings by incorporating checkpointing or replication during steals, ensuring recovery from node failures without full recomputation; for instance, task-level checkpointing in coordinated work-stealing frameworks supports fail-stop semantics in large-scale distributed systems.

Key challenges in both hierarchical and distributed variants include amplified overhead from cross-level or remote operations, which can degrade efficiency for unbalanced workloads. Solutions involve bounded steals, limiting the number or size of remote operations per cycle, and approximations like probabilistic victim selection to avoid contention hotspots, maintaining near-optimal load balance with reduced overhead. Seminal work on hierarchical schedulers, such as that by Quintin and Wagner, demonstrates up to 20% performance gains over flat stealing on heterogeneous platforms.
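The local-first escalation policy described above can be sketched as follows (a minimal illustration, assuming a hypothetical Deque.steal() that returns null on failure): an idle worker tries a few random victims in its own socket before falling back to remote sockets.

import java.util.List;
import java.util.concurrent.ThreadLocalRandom;

// Hypothetical sketch of hierarchical (local-first) victim selection.
final class HierarchicalStealer {
    interface Deque { Runnable steal(); }            // assumed: returns null on failure

    Runnable trySteal(List<Deque> sameSocket, List<Deque> remoteSockets, int attemptsPerLevel) {
        Runnable task = stealFrom(sameSocket, attemptsPerLevel);   // cheap, cache-friendly steals first
        if (task != null) return task;
        return stealFrom(remoteSockets, attemptsPerLevel);         // escalate only when local work is gone
    }

    private Runnable stealFrom(List<Deque> victims, int attempts) {
        if (victims.isEmpty()) return null;
        for (int i = 0; i < attempts; i++) {
            Deque victim = victims.get(ThreadLocalRandom.current().nextInt(victims.size()));
            Runnable task = victim.steal();
            if (task != null) return task;
        }
        return null;
    }
}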

Alternatives

Work Sharing

Work sharing is a load-balancing technique in parallel computing where processors with excess tasks actively donate them to a central shared queue, allowing idle processors to pull tasks for execution. This mechanism typically employs a FIFO (first-in, first-out) global queue to manage tasks, ensuring fair distribution: busy processors push subtasks onto the queue when their local workload exceeds a threshold, while idle ones dequeue and process them. Synchronization primitives, such as mutexes and condition variables, coordinate access to the shared queue, preventing race conditions in multithreaded environments.

Historically, work sharing was employed in early parallel systems predating advanced dynamic scheduling methods, including implementations using POSIX threads (Pthreads), standardized in 1995, where thread pools often employ shared queues for task distribution in producer-consumer patterns. This approach contrasts with the later development of work stealing, which uses hybrid LIFO/FIFO deques per processor for more decentralized management, as introduced in systems like Cilk in the 1990s. Work sharing's centralized model provided a straightforward way to handle dynamic workloads on shared-memory multiprocessors before the efficiency gains of stealing were demonstrated.

While simpler to implement due to its centralized structure, work sharing suffers from higher contention on the shared queue as the number of processors increases, leading to scalability bottlenecks, and poorer locality compared to work stealing's decentralized per-processor deques, which minimize unnecessary task migrations. In contrast to stealing, which incurs communication only when idle processors steal from others, work sharing proactively migrates tasks, potentially increasing overhead in balanced scenarios but offering better predictability at low loads. For instance, some task-pool implementations use a centralized shared queue for work sharing, distributing tasks dynamically but facing contention issues in large-scale parallelism, whereas others adopt work stealing to improve scalability; frameworks like Java's ForkJoinPool employ work stealing with local deques for improved load balance and reduced interference.
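The mechanism can be sketched in a few lines of Java using a shared blocking queue (a minimal illustration, not a production pool; the class and method names are ours): every worker draws from the same central FIFO queue, so task handoff is trivial but every dequeue contends on one structure.

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Minimal work-sharing sketch: one central FIFO queue shared by all workers.
final class WorkSharingPool {
    private final BlockingQueue<Runnable> central = new LinkedBlockingQueue<>();

    // Busy producers donate tasks to the shared queue.
    void submit(Runnable task) {
        central.add(task);
    }

    // Idle workers block on the central queue until work arrives.
    void startWorkers(int n) {
        for (int i = 0; i < n; i++) {
            Thread worker = new Thread(() -> {
                try {
                    while (true) central.take().run();   // every dequeue contends here
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();  // shut down
                }
            });
            worker.setDaemon(true);
            worker.start();
        }
    }
}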

Global and Centralized Scheduling

In global and centralized scheduling, a master scheduler maintains a shared queue of tasks and assigns them to worker threads or processors based on aggregated load information from the system, enabling a unified view of utilization for balanced distribution. This top-down approach contrasts with decentralized methods by centralizing decision-making, often through a single point of control that monitors worker statuses and dispatches tasks to underutilized units, and it can include mechanisms for priority-based or fairness-oriented allocation. Such schedulers are particularly suited to scenarios requiring predictable task ordering, as the central entity can enforce policies like deadline awareness or quotas across all workers.

Some variants combine centralized scheduling with elements of work stealing for enhanced flexibility: the scheduler oversees a pool of per-worker deques and occasionally intervenes to redistribute tasks, blending global oversight with local autonomy to mitigate some contention issues. For instance, systems like Prompt I-Cilk maintain a centralized pool of deques that supports both work sharing and stealing, allowing the scheduler to push tasks proactively while permitting idle workers to steal from busy ones under central coordination. These hybrids aim to retain the predictability of centralized control while borrowing from stealing's adaptability, though they still rely on periodic communication with the central scheduler for load updates. In multiprogramming environments, such variants may use partitioned allocation strategies to handle multiple applications, but the core remains top-down control.

Centralized scheduling offers advantages in predictability and ease of implementing global policies, such as uniform load balancing or priority enforcement, due to its complete system visibility, but it suffers from bottlenecks caused by frequent synchronization on the shared queue, leading to contention and reduced throughput as the number of processors grows. Scalability degrades beyond small-scale systems because every task submission and retrieval requires locking the central structure, imposing significant overhead in high-contention scenarios, unlike decentralized alternatives that localize most operations. Additionally, while fault tolerance can be added in theory through redundancy, centralized designs create single points of failure, in contrast with work stealing's inherent resilience, where no single scheduler dominates task flow.

A classic example is the traditional thread pool in Java's ExecutorService, such as ThreadPoolExecutor, where tasks are enqueued centrally and workers block on the queue for assignment, providing straightforward management for I/O-bound or coarse-grained parallelism but exhibiting contention under fine-grained workloads. This approach ensures ordered execution but lacks the dynamic load balancing of stealing, making it less adaptive in heterogeneous environments. For modern large-scale adaptations, approximate centralized scheduling employs gossip protocols to disseminate load information among nodes periodically, simulating global awareness without a persistent master; this reduces coordination costs while approximating optimal assignments, as seen in self-organized desktop grids where gossip exchanges enable workers to request tasks from high-load peers, scaling to hundreds of nodes with near-centralized efficiency.
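The centralized pattern described above is visible directly in the standard Java API (a small usage example; the task bodies are placeholders): a fixed-size pool is backed by one shared queue on which all workers block for assignment.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Centralized scheduling with Java's ExecutorService: all workers draw
// tasks from the single queue inside the fixed thread pool.
public class CentralizedPoolDemo {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(4);  // backed by one shared work queue
        for (int i = 0; i < 8; i++) {
            final int id = i;
            pool.submit(() ->
                System.out.println("task " + id + " on " + Thread.currentThread().getName()));
        }
        pool.shutdown();                          // stop accepting new tasks
        pool.awaitTermination(1, TimeUnit.MINUTES);
    }
}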

Applications

Parallel Programming Frameworks

Work stealing has been integrated into several high-level parallel programming frameworks to enable efficient task scheduling and load balancing in multithreaded environments, abstracting low-level details away from developers. These frameworks typically employ work-stealing schedulers in which idle threads opportunistically take tasks from busy threads' deques, providing dynamic parallelism without explicit user intervention.

OpenCilk, the open-source successor to Cilk Plus (deprecated in 2017), is an extension to C and C++ that provides spawn and sync primitives allowing programmers to express parallelism declaratively, with the underlying runtime using a work-stealing scheduler to manage task distribution across threads. The spawn keyword initiates a parallel child task, while sync ensures completion before proceeding, enabling a fork-join model where the scheduler balances load by allowing thieves to steal continuations from victims' deques. This approach, rooted in the original Cilk system, ensures low overhead for serial execution and scales efficiently on multicore processors. OpenCilk 3.0, released in May 2025, includes enhancements for better performance on modern hardware.

The Java Fork/Join Framework, part of java.util.concurrent since Java 7, implements a similar model through ForkJoinPool and RecursiveTask subclasses, where tasks are forked into subtasks and joined upon completion, with work stealing occurring via per-worker double-ended queues (deques). Idle workers steal tasks from the opposite end of another thread's deque, reducing contention and improving throughput for divide-and-conquer algorithms like parallel sorting or merging. This design minimizes synchronization overhead and supports automatic parallelism for recursive computations.

Intel Threading Building Blocks (TBB) incorporates work stealing in its task-based parallelism model, historically initialized via task_scheduler_init to control thread pool size and affinity, allowing developers to define tasks that the scheduler distributes using a randomized stealing protocol. Tasks are enqueued in a graph-like structure, and the runtime ensures load balancing by having idle worker threads steal from others, supporting both coarse- and fine-grained parallelism in C++ applications.

In language runtimes, Go's scheduler employs work stealing for goroutines, lightweight threads managed by the runtime: each processor (P) maintains a local run queue, and idle Ps steal half the tasks from another P's queue to balance execution across OS threads (M). This hides queue operations from users, enabling seamless concurrency for I/O-bound and CPU-bound workloads via goroutines and channels. Similarly, Rust's rayon crate facilitates data parallelism through parallel iterators (e.g., par_iter()), leveraging a work-stealing scheduler inspired by Cilk to partition collections and redistribute work dynamically, abstracting thread management for safe, efficient parallel loops.

These frameworks provide key features such as automatic load balancing, where work stealing mitigates imbalances without programmer tuning, and they hide deque management entirely, allowing focus on algorithmic logic rather than scheduling details. Evolving from the Cilk system of the 1990s, which introduced provably efficient work stealing for multithreaded C extensions, modern implementations continue this legacy in production languages and libraries.
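A small Java example of this abstraction: parallel streams execute on the common ForkJoinPool, which splits the range into subtasks that idle workers steal, with no scheduling code written by the user.

import java.util.stream.LongStream;

// Work stealing without any explicit tasks: the parallel stream runs on
// the common ForkJoinPool, which balances the split subranges by stealing.
public class HiddenStealingDemo {
    public static void main(String[] args) {
        long sum = LongStream.rangeClosed(1, 10_000_000L)
                             .parallel()   // subranges become stealable subtasks
                             .sum();
        System.out.println(sum);           // 50000005000000
    }
}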

Real-World Implementations

Work stealing has been integrated into the LLVM compiler infrastructure's tasking runtime for efficient task scheduling, where idle threads randomly select another thread's queue to steal tasks and balance load across cores; this enhances parallelism for irregular workloads in task-parallel programs. Task stealing has also been proposed to mitigate data skew in aggregations over Resilient Distributed Datasets (RDDs) in Apache Spark, where larger partitions are split into segment tasks that idle executors can steal, improving resource utilization in distributed data processing. This dynamically redistributes work to prevent bottlenecks on skewed datasets, leading to more balanced execution across cluster nodes.

In high-performance computing (HPC), Charm++ leverages work stealing through load balancers like PackStealLB for distributed simulations, enabling asynchronous task migration across nodes to handle irregular computational loads in applications such as molecular dynamics and plasma simulations. This distributed variant ensures scalability on large supercomputers by allowing overloaded chares to donate work to idle ones, reducing communication overhead.

Work stealing delivers notable performance gains for irregular workloads such as graph processing, where benchmarks on heterogeneous CPU-FPGA systems show speedups of 20-50% over static scheduling by dynamically balancing traversals and computations. These improvements stem from reduced idle time and better adaptation to varying task granularities in algorithms like breadth-first search. Despite its benefits, implementing work stealing on non-uniform memory access (NUMA) architectures requires careful tuning to minimize remote memory accesses, as traditional stealers can incur high latency from cross-node thefts; locality-aware variants prioritize local queues before remote ones to optimize performance. Recent advancements include formal verifications ensuring properties like progress and safety, with a 2023 proof for block-based work stealing confirming its correctness under concurrent operations without races.
