
Fixed-priority pre-emptive scheduling

Fixed-priority preemptive scheduling is a task scheduling algorithm in which each task is assigned a static priority level, and the processor always executes the highest-priority task that is ready to run, preempting any lower-priority task currently in execution if a higher-priority one becomes ready. This approach ensures that critical tasks meet their deadlines in hard real-time environments by prioritizing urgency and allowing interruptions to prevent delays in high-importance operations. The algorithm traces its origins to early research on multiprogramming in hard real-time systems, with foundational work by Liu and Layland in 1973, who analyzed fixed-priority schedulers and established key theoretical bounds. A prominent implementation is rate-monotonic scheduling (RMS), a fixed-priority policy that assigns higher priorities to tasks with shorter periods and is provably optimal among static-priority assignments for periodic tasks. Under RMS, processor utilization is bounded by approximately 69% (specifically, \ln 2 \approx 0.693) for large task sets to guarantee schedulability, though dynamic-priority schemes like earliest deadline first can achieve up to 100% utilization. Over time, fixed-priority preemptive scheduling has evolved to address practical challenges, such as multiprocessor and shared-resource environments. Developments include deadline-monotonic scheduling, which prioritizes tasks based on relative deadlines rather than periods, and protocols like priority inheritance to mitigate blocking delays caused by lower-priority tasks holding resources. These enhancements have made it a cornerstone of real-time operating systems, commonly applied in embedded devices, avionics, and automotive control systems for its predictability and analyzability.

Fundamentals

Definition and Core Principles

Fixed-priority preemptive scheduling is a scheduling discipline in which tasks are assigned static priorities during the system design phase, and the processor executes the highest-priority task that is currently ready, immediately preempting any lower-priority task in progress when a higher-priority one becomes available. This approach ensures predictable behavior in hard real-time systems by prioritizing tasks based on their urgency, without altering priorities during runtime. The core principles revolve around the immutability of priorities, which are determined statically and remain fixed throughout task execution, contrasting with dynamic-priority schemes that adjust based on conditions such as deadlines or execution history. Preemption is triggered by interrupts, such as periodic clock ticks for task releases or external events signaling task readiness, allowing the scheduler to switch to a higher-priority task without delay. The underlying task model assumes periodic tasks, each characterized by a known worst-case execution time (WCET), a fixed period between invocations, and a relative deadline typically equal to the period, enabling analysis of timing guarantees under worst-case assumptions. For instance, consider a system with two tasks: Task A with priority 1 (highest), WCET of 5 ms, and period of 10 ms; and Task B with priority 2, WCET of 8 ms, and period of 20 ms. If Task B is executing and Task A becomes ready at any point, the scheduler preempts Task B to run Task A immediately, resuming Task B only after Task A completes its current instance. This preemptive mechanism, while enhancing responsiveness for critical tasks, introduces risks such as priority inversion, where a high-priority task is indirectly delayed by lower-priority ones through shared resource contention, potentially extending blocking times beyond expected bounds.
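The two-task example can be traced with a minimal discrete-time simulation (a sketch, not an RTOS implementation: 1 ms ticks, dictionary order encoding priority, and the `simulate` helper and task names are illustrative assumptions):

```python
def simulate(tasks, horizon_ms):
    """tasks: dict name -> (wcet_ms, period_ms); dict order = priority (first = highest)."""
    remaining = {name: 0 for name in tasks}
    completions = {name: [] for name in tasks}
    for t in range(horizon_ms):
        for name, (wcet, period) in tasks.items():
            if t % period == 0:          # a new job is released at each period boundary
                remaining[name] = wcet
        for name in tasks:               # run the highest-priority task with work left
            if remaining[name] > 0:
                remaining[name] -= 1
                if remaining[name] == 0:
                    completions[name].append(t + 1)  # record completion time in ms
                break
    return completions

# Task A: priority 1 (highest), WCET 5 ms, period 10 ms; Task B: WCET 8 ms, period 20 ms.
print(simulate({"A": (5, 10), "B": (8, 20)}, 20))
# → {'A': [5, 15], 'B': [18]}: B is preempted at t=10 and finishes only after A's second job
```

The trace shows the mechanism described above: B runs from 5 to 10 ms, is preempted by A's second release, and resumes at 15 ms.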

Historical Development

Fixed-priority pre-emptive scheduling emerged in the 1970s amid growing research into real-time systems, particularly for applications requiring predictable timing in hard real-time environments. Its foundational theoretical contributions were detailed in the seminal 1973 paper by C. L. Liu and J. W. Layland, which analyzed scheduling algorithms for multiprogramming on a single processor. In this work, they introduced fixed-priority scheduling for periodic tasks and provided the first proof of its optimality under rate monotonic priority assignment, establishing that it could guarantee schedulability up to approximately 69% processor utilization for large task sets. Early adoption focused on applications where reliability and timeliness were paramount. A notable implementation came with the release of VxWorks 1.0 in 1987 by Wind River, an RTOS that incorporated fixed-priority pre-emptive scheduling to support embedded control in high-stakes environments like avionics and space missions. This marked a shift from theoretical models to practical deployment in resource-constrained systems. The 1980s and 1990s brought significant advancements in schedulability analysis, enhancing the robustness of fixed-priority pre-emptive scheduling. Key progress was summarized in a 1995 historical perspective by N. C. Audsley, A. Burns, R. I. Davis, K. Tindell, and A. J. Wellings, which traced developments from the approach's 1970s roots and highlighted techniques for exact response-time analysis. Concurrently, its integration into standards accelerated adoption; the POSIX.1b (IEEE 1003.1b-1993) real-time extensions formalized support for fixed-priority pre-emptive policies like SCHED_FIFO, enabling portable implementation across compliant operating systems. By the 2000s, the approach evolved with the rise of multiprocessor systems and multicore processors, necessitating extensions to address challenges like priority inversion over shared resources. Priority inheritance protocols, initially proposed in the late 1980s and refined for multiprocessor contexts, became essential to mitigate blocking delays in fixed-priority environments on parallel hardware. This adaptation broadened its use beyond single-core roots to diverse applications in consumer and industrial domains.

Operational Mechanics

Priority Assignment and Task States

In fixed-priority pre-emptive scheduling, priorities are assigned statically to tasks at compile or link time, based on inherent task characteristics such as period or deadline, rather than dynamically at runtime, to ensure predictability in real-time systems. This static assignment avoids unpredictable runtime priority decisions and facilitates schedulability analysis. A common method is rate-monotonic scheduling (RMS), where priorities are assigned inversely proportional to task periods, so that tasks with higher invocation frequencies receive higher priorities. For a task \tau_i with period T_i, its priority P_i satisfies P_i > P_j if T_i < T_j, which is optimal for periodic tasks with deadlines equal to periods. Another approach is deadline-monotonic scheduling (DMS), which assigns higher priorities to tasks with shorter relative deadlines, extending RMS to cases where deadlines may differ from periods; it is proven optimal under similar conditions. Tasks in this scheduling model transition through distinct states managed by the scheduler: ready (awaiting CPU allocation), running (currently executing on the CPU), blocked (suspended pending an I/O event or resource), and suspended (temporarily halted by external request). The ready queue is ordered by priority, ensuring the highest-priority ready task is selected for execution, often using efficient data structures such as bitmaps for constant-time highest-priority retrieval or heaps for logarithmic operations. While base priorities remain fixed, protocols such as priority inheritance temporarily elevate a task's priority during resource contention to avoid unbounded blocking, without altering the static assignment scheme. For instance, in RMS, a task with period T_i = 10 ms would receive a higher priority than one with T_i = 20 ms, placing the former ahead in the ready queue upon release.
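Rate-monotonic assignment can be sketched in a few lines (task names are hypothetical; priorities are expressed as ranks, with 0 the highest):

```python
def rm_priorities(periods_ms):
    """Map task name -> priority rank (0 = highest); shorter period => higher priority."""
    ordered = sorted(periods_ms, key=periods_ms.get)   # ascending period
    return {name: rank for rank, name in enumerate(ordered)}

print(rm_priorities({"sensor": 10, "control": 20, "logger": 50}))
# → {'sensor': 0, 'control': 1, 'logger': 2}
```

The 10 ms task lands ahead of the 20 ms and 50 ms tasks, matching the ordering rule P_i > P_j when T_i < T_j.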

Preemption and Context Switching

In fixed-priority pre-emptive scheduling, preemption occurs when a higher-priority task becomes ready to execute while a lower-priority task is running on the processor. This is typically triggered by events such as timer interrupts, which periodically check for ready tasks; completion of I/O operations, which may signal the readiness of a waiting task; or the release of a new higher-priority task instance according to its period. Upon such a trigger, the scheduler examines the ready queue—a priority-ordered structure holding tasks awaiting execution—and, if a higher-priority task is found, immediately interrupts the current task so that the processor is allocated to the highest-priority ready task. This mechanism enforces strict priority ordering without relying on voluntary cooperation from tasks. Context switching follows preemption and involves saving the state of the interrupted task and restoring the state of the preempting task to enable seamless resumption. The processor state saved typically includes CPU registers (such as general-purpose registers like r0-r12 on ARM architectures), the program counter (indicating the next instruction to execute), and the stack pointer (to maintain the task's call stack and local variables). These elements are stored in a task control block or directly on the task's dedicated stack, often using hardware-assisted mechanisms for efficiency, such as automatic exception frame saving on entry to interrupt handlers. Restoration mirrors this process in reverse, loading the new task's state to resume execution from its prior point. In real-time operating systems (RTOS), context switching overhead is generally low, ranging from 1 to 10 microseconds depending on the architecture and features like floating-point units, representing a small fraction of CPU cycles (e.g., around 420 cycles at 100 MHz). Interrupt handling integrates closely with preemption in fixed-priority systems, where interrupt service routines (ISRs) are assigned the highest priorities to ensure timely responses to hardware events.
An ISR for a high-priority interrupt can preempt a running task, executing its handler before returning control, which may then trigger a full task preemption if a higher-priority task is ready. To bound latency and avoid excessive nested preemptions, RTOS kernels often disable interrupts during critical sections of the scheduler or context switch, re-enabling them afterward; this protects scheduler state at the cost of a short, bounded increase in latency. In fixed-priority pre-emptive scheduling, preemption is mandatory for higher-priority entities, including ISRs, and voluntary yielding by tasks is neither required nor typically implemented, as the policy relies on enforced preemption to meet timing guarantees. Preemption latency, the delay from a preemption trigger until the higher-priority task begins execution, can be expressed as: \text{Preemption latency} = \text{interrupt latency} + \text{context switch time} + \text{scheduler overhead} Here, interrupt latency is the time to enter the ISR from the hardware signal; context switch time covers state save/restore; and scheduler overhead includes ready-queue inspection and decision-making. This formulation highlights the cumulative impact of these components, which must be minimized in an RTOS to ensure predictability.
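The scheduler-overhead term is often kept small with the bitmap ready-queue technique mentioned earlier. A minimal sketch (an illustrative model, assuming bit i set means a priority-i task is ready, with lower i meaning higher priority):

```python
def highest_ready_priority(ready_bitmap):
    """Return the index of the lowest set bit, i.e. the highest ready priority."""
    if ready_bitmap == 0:
        return None                                  # no task ready: processor idles
    # isolate the lowest set bit, then convert it to a bit index
    return (ready_bitmap & -ready_bitmap).bit_length() - 1

print(highest_ready_priority(0b101000))  # tasks at priorities 3 and 5 ready → 3
```

Real kernels do the same with a count-leading/trailing-zeros instruction, making the lookup a constant-time operation regardless of how many tasks are ready.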

Schedulability Analysis

Rate Monotonic Scheduling Algorithm

The rate-monotonic scheduling (RMS) algorithm serves as the canonical fixed-priority assignment policy in pre-emptive systems: static priorities are assigned inversely to task periods, so that tasks with shorter periods receive higher priorities. This ensures that more frequently invoked tasks are executed first, promoting efficient resource utilization for periodic workloads. Under the key assumption that each task's deadline equals its period, RMS is optimal among all fixed-priority scheduling disciplines, meaning it can schedule any task set feasible under any other such policy. The algorithm operates under several foundational assumptions: tasks are periodic with fixed intervals between invocations; tasks are mutually independent, meaning no task blocks another except through pre-emption; scheduling is fully pre-emptive on a single processor; each task has a constant worst-case execution time (WCET); deadlines coincide with periods; and the initial analysis ignores context-switching overheads, though extensions address these. Periods may be harmonic (multiples of each other) or non-harmonic, with the algorithm applicable to both cases. To implement RMS, the following steps are followed: first, model the task set where each task \tau_i is characterized by its WCET C_i and period T_i; second, sort the tasks in non-decreasing order of T_i to assign priorities, granting the highest priority to the task with the shortest T_i; third, assess schedulability through utilization-bound tests, exact response-time analysis, or simulation to confirm all deadlines are met. For illustration, consider a task set with periods T_1 = 5 ms, T_2 = 10 ms, and T_3 = 20 ms, where priorities are assigned highest to lowest corresponding to these periods. If the total utilization U = \sum (C_i / T_i) satisfies U < \ln 2 \approx 0.693, the set is schedulable under RMS via the sufficient utilization bound for large numbers of tasks.
The optimality of RMS is formalized in the Liu-Layland theorem, which states that if any fixed-priority assignment can feasibly schedule a task set, then the rate-monotonic assignment can also do so; the proof shows that swapping priorities to align with rates never destroys feasibility.
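The utilization-bound test from step three can be written down directly (the three-task values here are assumed for illustration):

```python
def rm_bound_test(tasks):
    """tasks: list of (wcet, period). Liu-Layland sufficient (not necessary) test."""
    n = len(tasks)
    u = sum(c / t for c, t in tasks)
    return u <= n * (2 ** (1 / n) - 1)   # bound for n tasks; -> ln 2 as n grows

# C_i / T_i: 1/5 + 2/10 + 4/20 = 0.6, below the n=3 bound of ~0.780
print(rm_bound_test([(1, 5), (2, 10), (4, 20)]))   # → True
```

Failing this test does not prove unschedulability; it only means the exact response-time analysis of the next subsection is needed.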

Response Time and Utilization Bounds

In fixed-priority preemptive scheduling, schedulability analysis often relies on bounding the worst-case response time R_i for each task \tau_i, which is the time from its release until completion, including interference from higher-priority tasks. The standard approach computes R_i iteratively using the equation R_i^{(k+1)} = C_i + \sum_{j \in hp(i)} C_j \left\lceil \frac{R_i^{(k)}}{T_j} \right\rceil, where C_i is the WCET of \tau_i, T_j is the period of higher-priority task \tau_j, and hp(i) denotes the set of tasks with priority higher than \tau_i; the iteration starts with R_i^{(0)} = C_i and converges when R_i^{(k+1)} = R_i^{(k)} for some k, with the task set schedulable if R_i \leq D_i (deadline) for all i. A sufficient condition for schedulability under rate-monotonic priority assignment is the utilization bound U = \sum_{i=1}^n \frac{C_i}{T_i} \leq n(2^{1/n} - 1), where n is the number of tasks; this bound approaches \ln 2 \approx 0.693 (often approximated as 69%) as n increases, guaranteeing schedulability for any task set meeting it, though it is conservative and not necessary, since higher utilizations are possible with specific periods. For exact schedulability testing beyond the utilization bound, the response time equation's fixed-point iteration provides a precise test, while processor demand analysis offers tighter bounds by verifying that the cumulative demand W_i(t) = \sum_{j \in hp(i)} C_j \left\lceil \frac{t}{T_j} \right\rceil + C_i \leq t holds for some t \leq D_i (checked at the relevant scheduling points), enabling schedulability up to 100% utilization in cases like harmonic task periods. When analytical bounds are insufficient, verification can involve simulating the schedule over the hyperperiod H = \mathrm{LCM}(T_1, \dots, T_n), the least common multiple of all periods, as the schedule pattern repeats thereafter, confirming no deadline misses occur.
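The response-time recurrence can be implemented in a few lines (a sketch assuming deadlines equal periods and tasks listed highest priority first; the example task set is illustrative):

```python
import math

def response_time(i, tasks):
    """tasks: list of (C, T), sorted highest priority first.
    Returns the worst-case response time R_i, or None if R_i exceeds the deadline T_i."""
    c_i, t_i = tasks[i]
    r = c_i                                           # R_i^(0) = C_i
    while True:
        r_next = c_i + sum(math.ceil(r / t_j) * c_j for c_j, t_j in tasks[:i])
        if r_next == r:
            return r                                  # fixed point reached
        if r_next > t_i:
            return None                               # deadline (= period) missed
        r = r_next

tasks = [(1, 4), (2, 6), (3, 12)]   # U ≈ 0.833, above the n=3 utilization bound
print([response_time(i, tasks) for i in range(3)])
# → [1, 3, 10]: all R_i ≤ T_i, so the set is schedulable despite exceeding the bound
```

This illustrates the point above: the utilization bound is only sufficient, while the iteration gives an exact verdict.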

Advantages and Limitations

Key Benefits

Fixed-priority pre-emptive scheduling provides predictability in real-time systems by assigning static priorities to tasks, enabling offline schedulability analysis to verify that all deadlines will be met under worst-case conditions, such as critical instants where higher-priority tasks are released simultaneously. This approach, exemplified by rate-monotonic scheduling, assigns higher priorities to tasks with shorter periods, allowing system designers to guarantee timely execution without runtime adjustments. The simplicity of fixed-priority pre-emptive scheduling stems from its straightforward implementation and low runtime overhead, often achieving O(1) scheduling decisions through priority-based ready queues, making it suitable for resource-constrained embedded systems. This ease of analysis and deployment facilitates its integration into real-time operating systems (RTOS), where priorities remain fixed throughout task lifetimes, reducing complexity in implementation and verification. Determinism is a further strength: the fixed-priority scheme ensures bounded response times for high-priority tasks, even in the presence of interference from lower-priority ones, which is essential for hard real-time applications requiring guaranteed completion within deadlines. Pre-emption allows critical tasks to interrupt lower-priority ones immediately upon release, providing responsive behavior without unbounded delays. In terms of efficiency, this scheduling policy improves CPU utilization compared to non-pre-emptive methods by minimizing idle time and ensuring high-priority tasks execute promptly, with theoretical bounds guaranteeing schedulability up to approximately 69% utilization for large task sets under rate monotonic assignment. It is particularly valued in safety-critical domains, such as avionics and automotive electronic control units (ECUs), where its predictability aids certification under standards like DO-178C for airborne systems.

Potential Drawbacks and Mitigations

One significant drawback of fixed-priority preemptive scheduling is priority inversion, where a high-priority task is blocked by a low-priority task that holds a shared resource, such as a mutex or semaphore, potentially for an extended duration if intermediate-priority tasks preempt the low-priority one during the blocking period. Without mitigation, the blocking time for the high-priority task can become unbounded, as the low-priority task may be repeatedly preempted by medium-priority tasks, delaying resource release. The duration of this inversion is typically limited to the execution time of the low-priority task's critical section in the absence of such chains, but chained preemptions can extend it significantly. In multiprocessor environments, fixed-priority preemptive scheduling introduces additional risks, such as deadlocks arising from nested resource locking across cores, where tasks on different processors mutually block each other while waiting for locks held by the other. Furthermore, frequent preemptions can impose substantial runtime overhead from context switches, potentially increasing worst-case execution times by up to 40% due to cache flushes, register saves, and scheduler invocations. To mitigate priority inversion, the priority inheritance protocol, introduced by Sha et al. in 1990, temporarily boosts the priority of the low-priority task to match the highest priority of any blocked high-priority task, ensuring it runs until the resource is released and bounding the inversion duration to the length of the critical section. The priority ceiling protocol, proposed in the same work, assigns each resource a ceiling priority equal to the highest priority of the tasks that may access it; a task may acquire a resource only if its priority is strictly higher than the ceilings of all resources currently locked by other tasks, preventing chained inversions and deadlocks by blocking unsafe acquisitions early. For inversion chains, these protocols reduce delays by limiting blocking to single critical sections.
In multiprocessor systems, task partitioning addresses deadlock and migration-related issues by statically assigning tasks to specific cores based on their periods and priorities, enabling independent fixed-priority scheduling per core without the inter-core resource sharing that could lead to deadlocks. This partitioned approach minimizes overhead from frequent preemptions across cores by reducing migrations and contention, though it requires careful task allocation to balance loads. Additionally, limiting preemptions through techniques like preemption thresholds—where a running task can be preempted only by tasks whose priorities exceed a configured threshold—can further curb context-switching overhead while preserving schedulability.
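The priority-inheritance rule can be sketched on a single mutex (an illustrative model, not a real RTOS API; lower numbers denote higher priority, and the class and task names are assumptions for the example):

```python
class PIMutex:
    """Toy mutex with priority inheritance: the holder runs at the priority of the
    highest-priority task it blocks, reverting to its base priority on release."""
    def __init__(self):
        self.holder = None
        self.waiters = []

    def lock(self, task):
        if self.holder is None:
            self.holder = task
            task.saved_prio = task.prio          # remember base priority
        else:
            self.waiters.append(task)
            # inherit: holder's effective priority rises to the highest blocked one
            self.holder.prio = min(self.holder.prio, task.prio)

    def unlock(self):
        self.holder.prio = self.holder.saved_prio   # revert inherited boost
        self.holder = None
        if self.waiters:
            self.waiters.sort(key=lambda t: t.prio)  # wake highest-priority waiter
            self.lock(self.waiters.pop(0))

class Task:
    def __init__(self, name, prio):
        self.name, self.prio = name, prio

m = PIMutex()
low, high = Task("low", 5), Task("high", 1)
m.lock(low)
m.lock(high)                     # high blocks; low inherits priority 1
print(low.prio)                  # → 1: medium-priority tasks can no longer preempt low
m.unlock()
print(low.prio, m.holder.name)   # → 5 high: boost reverted, high now holds the mutex
```

The boost is what prevents the unbounded medium-priority preemption chain described above; blocking is limited to one critical section.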

Comparisons and Applications

Comparison to Other Scheduling Policies

Fixed-priority pre-emptive scheduling, often exemplified by the rate monotonic (RM) algorithm, contrasts with dynamic-priority approaches like earliest deadline first (EDF) in terms of schedulability and overhead. While RM guarantees schedulability only up to approximately 69% processor utilization for large task sets (approaching \ln 2), EDF is optimal and supports up to 100% utilization, allowing more efficient resource use but requiring dynamic priority adjustments at runtime, which incurs higher scheduling overhead compared to the static priorities in fixed-priority schemes. In comparison to non-preemptive scheduling policies, fixed-priority pre-emptive scheduling reduces worst-case response times for high-priority tasks by immediately interrupting lower-priority ones upon arrival, whereas non-preemptive variants allow the current task to complete, potentially causing deadline misses for urgent higher-priority arrivals and increasing blocking delays in real-time systems. Fixed-priority pre-emptive scheduling also differs from round-robin (time-sliced) policies, which enforce fairness through equal time quanta regardless of priority, often leading to unpredictable latencies unsuitable for real-time guarantees; in contrast, fixed-priority scheduling ensures timely execution for critical tasks but may starve low-priority ones, prioritizing timeliness over equity. Relative to cooperative scheduling, where tasks voluntarily yield control, fixed-priority pre-emptive scheduling enforces involuntary switches via interrupts, preventing monopolization by misbehaving or long-running tasks and providing stronger guarantees against unbounded delays in real-time environments. Overall, fixed-priority pre-emptive scheduling facilitates static schedulability analysis through fixed priorities and offline tests, unlike the online decisions in dynamic policies such as EDF, which complicate verification despite their optimality.
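The RM/EDF schedulability gap can be seen on a hypothetical two-task set with utilization near 1 (values chosen purely for illustration): EDF schedules it, while the RM response-time recurrence shows a deadline miss for the longer-period task.

```python
import math

tasks = [(2, 5), (4, 7)]                  # (C, T); RM gives the T=5 task higher priority
u = sum(c / t for c, t in tasks)
print(round(u, 3))                        # → 0.971: ≤ 1, so schedulable under EDF

# RM worst-case response time of the T=7 task: iterate R = C2 + ceil(R/T1)*C1
r = 4
while True:
    r_next = 4 + math.ceil(r / 5) * 2
    if r_next == r:
        break
    r = r_next
print(r, r > 7)                           # → 8 True: misses its 7 ms deadline under RM
```

This is the practical meaning of EDF's optimality: any utilization that fits on the processor is schedulable, whereas static priorities can fail well below 100%.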

Use in Real-Time Operating Systems

Fixed-priority pre-emptive scheduling serves as a foundational mechanism in numerous real-time operating systems (RTOS), enabling predictable task execution in time-constrained environments. In FreeRTOS, it is implemented through a priority-based pre-emptive scheduler in which tasks are assigned fixed priorities, and higher-priority tasks interrupt lower ones upon becoming ready, ensuring deterministic behavior for embedded applications. Similarly, VxWorks employs fixed-priority pre-emption as its core scheduling policy, with tasks configurable across 256 priority levels (0-255), where the scheduler preempts the running task whenever a higher-priority one is pending, facilitating robust performance in industrial and aerospace systems. The QNX Neutrino RTOS integrates fixed-priority pre-emptive scheduling alongside its adaptive partitioning architecture, assigning static priorities to threads and supporting pre-emption to meet hard deadlines in safety-critical domains. This approach aligns with the POSIX SCHED_FIFO policy, which enforces fixed-priority first-in-first-out scheduling without time-slicing, allowing pre-emption by higher-priority threads; it is widely adopted in RTOS for compliance with POSIX standards. In practical applications, fixed-priority pre-emptive scheduling is extensively used in avionics, where the ARINC 653 standard partitions the system into isolated modules, each managed by a fixed-priority scheduler to prevent interference and ensure temporal predictability in integrated modular avionics (IMA) platforms. In the automotive sector, AUTOSAR's OS module incorporates fixed-priority pre-emptive scheduling for ECU software, assigning static priorities to tasks and supporting pre-emption to handle mixed-criticality workloads in vehicles, from powertrain control to advanced driver-assistance systems. For medical devices, such as pacemakers and infusion pumps, RTOS based on fixed-priority scheduling ensure timely responses to sensor inputs and actuators, complying with standards such as IEC 62304 for software lifecycle processes in embedded systems.
To manage sporadic and aperiodic tasks within this framework, RTOS often employ deferrable or polling servers, which allocate a fixed-priority execution budget to handle irregular events without disrupting periodic tasks, as outlined in classic scheduling theory. Extensions to fixed-priority pre-emptive scheduling in modern RTOS address multicore processors through global scheduling, where a single ready queue spans all cores for load balancing, or through partitioned approaches that assign tasks to specific cores with independent fixed-priority schedulers to minimize migration overhead. Integration with middleware like the Data Distribution Service (DDS) further enhances its utility, as DDS implementations in RTOS use fixed-priority threads for publish-subscribe communications, ensuring low-latency data exchange in distributed real-time systems such as unmanned vehicles. A notable historical example is the Mars Pathfinder mission in 1997, where NASA's VxWorks-based flight software used rate monotonic scheduling—a fixed-priority pre-emptive policy assigning priorities inversely proportional to task periods—to guarantee mission-critical timing for rover operations and fault recovery.
