Worst-case execution time

Worst-case execution time (WCET) is the maximum duration a computational task or program requires to execute on a specific platform under the most adverse conditions, including worst-case inputs, initial states, and hardware behaviors; because precise computation times are undecidable in general, practical analyses provide a safe upper bound rather than an exact value. This metric is fundamental in real-time systems, where it enables schedulability analysis to guarantee that tasks meet strict deadlines, ensuring system reliability in safety-critical applications. In hard real-time systems, such as those in avionics, automotive control, and medical devices, WCET analysis is indispensable for certifying timing correctness, as failure to meet deadlines can lead to catastrophic consequences; for instance, it underpins flight software in the Boeing 787, verified to RTCA/DO-178B Level A standards. The importance of WCET has grown with the complexity of modern processors, which feature elements like caches, pipelines, branch prediction, and multi-core architectures that introduce variability and timing anomalies, complicating accurate estimation. Tight and safe WCET bounds are required to optimize resource utilization while avoiding overly pessimistic estimates that could underutilize hardware. WCET estimation employs three primary approaches: static analysis, which derives bounds through program flow and hardware modeling without execution; measurement-based analysis, which observes execution times on the target platform to infer bounds; and hybrid methods that combine both for improved tightness and confidence. Key challenges include handling timing anomalies (where local worst-case paths do not aggregate to global worst-case times) and accounting for inter-task interference in multi-core systems, with recent advances incorporating probabilistic models for more flexible guarantees. Commercial tools like aiT from AbsInt and RapiTime from Rapita Systems, developed since the early 2000s, support industrial applications by automating much of this analysis, though manual annotations for loop bounds and other assumptions remain necessary.

Fundamentals

Definition

The worst-case execution time (WCET) of a computational task or program is formally defined as the maximum duration required for its complete execution across all possible inputs, execution paths, and hardware states within a specified environment; an estimate of it should be a safe upper bound that is also tight enough to lie close to the value achievable under worst-case conditions. This bound ensures that the estimated time is never an underestimate (safe) while remaining as close as possible to the actual maximum (tight), distinguishing it from pessimistic overestimations that could lead to inefficient resource allocation in time-constrained systems. In contrast to WCET, the best-case execution time (BCET) represents the minimum execution duration under optimal conditions, while the average-case execution time (ACET) reflects the typical duration across a representative distribution of scenarios. These metrics satisfy the inequality \text{WCET} \geq \text{ACET} \geq \text{BCET}, highlighting WCET's role in guaranteeing reliability rather than optimizing for common or favorable cases. The concept of WCET, originally termed maximum execution time (MAXT), emerged in real-time systems research during the late 1980s, with early formalization by Puschner and Koza in 1989, who proposed methods to compute it based on program structure and timing annotations. Understanding WCET presupposes familiarity with program control-flow graphs, which model possible execution paths, and with timing analysis of instructions on the target hardware.
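
As a small illustration of how these metrics relate, the following sketch uses hypothetical, made-up timing samples to compute the observed minimum, mean, and maximum of a measurement set; note that the observed maximum only lower-bounds the true WCET, which is why dedicated analysis methods are needed rather than plain testing.

```python
# Minimal sketch (hypothetical data): observed execution-time statistics for a task.
# The observed maximum only *lower-bounds* the true WCET, which is why analysis
# methods derive bounds statically or add statistical margins.
samples_us = [112, 118, 109, 131, 115, 127, 142, 120]  # measured times in microseconds

bcet_observed = min(samples_us)                      # best observed case
acet_observed = sum(samples_us) / len(samples_us)    # average observed case
hwm_observed = max(samples_us)                       # high-water mark (<= true WCET)

assert bcet_observed <= acet_observed <= hwm_observed
print(f"observed BCET={bcet_observed} us, ACET={acet_observed:.1f} us, "
      f"max observed={hwm_observed} us (true WCET >= max observed)")
```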

Importance

The worst-case execution time (WCET) plays a pivotal role in schedulability analysis for real-time systems, providing the upper bound on task execution time C_i needed to verify that deadlines are met under scheduling algorithms such as rate-monotonic scheduling (RMS) or earliest-deadline-first (EDF). In fixed-priority schemes like RMS, the worst-case response time R_i for task i is determined iteratively using the equation R_i = C_i + \sum_{j \in hp(i)} C_j \cdot \left\lceil \frac{R_i}{T_j} \right\rceil, where hp(i) is the set of higher-priority tasks, T_j is the period of task j, and schedulability holds if R_i \leq D_i (the deadline of task i). For EDF, WCET contributes to utilization tests ensuring that total demand does not exceed available processor capacity. Regulatory standards emphasize WCET to guarantee timing predictability and prevent hazardous overruns. In avionics, DO-178C mandates WCET verification across its software levels (A-E), requiring evidence that execution times align with requirements to avoid failures from timing violations. Similarly, ISO 26262 for automotive systems stipulates WCET bounds in timing requirements for all automotive safety integrity levels (ASIL A-D), ensuring schedulability and freedom from interference in time-critical functions. Precise WCET estimation is vital for balancing safety and efficiency, as overestimation leads to conservative designs that underutilize resources and inflate development costs, while underestimation risks catastrophic failures. The significance of WCET has intensified with rising processor complexity, which rendered empirical testing insufficient for reliable bounds and prompted a transition to formal static methods for tighter, verifiable predictions. This shift enables safer adoption of advanced hardware features in real-time systems without compromising timeliness.
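
To make the response-time recurrence concrete, the following sketch iterates the fixed-point equation for a fixed-priority task set until R_i converges or the deadline is exceeded; the task parameters are hypothetical and not drawn from any cited system.

```python
import math

# Hypothetical task set: (WCET C, period T, deadline D), sorted by priority
# (index 0 = highest priority). Values are illustrative only.
tasks = [
    {"C": 1.0, "T": 4.0,  "D": 4.0},
    {"C": 2.0, "T": 6.0,  "D": 6.0},
    {"C": 3.0, "T": 12.0, "D": 12.0},
]

def response_time(i, tasks):
    """Iterate R_i = C_i + sum over higher-priority j of ceil(R_i/T_j) * C_j."""
    C_i = tasks[i]["C"]
    R = C_i
    while True:
        interference = sum(math.ceil(R / tasks[j]["T"]) * tasks[j]["C"]
                           for j in range(i))          # hp(i) = tasks 0..i-1
        R_next = C_i + interference
        if R_next == R:                                 # fixed point reached
            return R
        if R_next > tasks[i]["D"]:                      # deadline exceeded: unschedulable
            return None
        R = R_next

for i, t in enumerate(tasks):
    R = response_time(i, tasks)
    status = f"R={R}" if R is not None else "deadline miss"
    print(f"task {i}: C={t['C']}, T={t['T']}, D={t['D']} -> {status}")
```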

Applications

Real-Time Systems

In real-time operating systems (RTOS) such as VxWorks, the worst-case execution time (WCET) of tasks provides essential bounds for admission control, which determines whether a new task can be incorporated without violating timing constraints. Admission control algorithms assess schedulability by incorporating WCET estimates alongside task periods and priorities, ensuring that the system maintains hard real-time guarantees (no deadlines missed) or soft real-time properties, where occasional misses are tolerable but minimized. RTOS-aware WCET analysis integrates a model of the kernel to account for overheads such as context switches and interrupts, enabling precise validation of task timing during system design and deployment. VxWorks, for example, leverages WCET in its priority-based scheduling to support deterministic execution in embedded environments, preventing resource overload that could lead to unpredictable behavior. WCET plays a pivotal role in modeling and scheduling various task types within RTOS frameworks, including periodic tasks that execute at fixed intervals, aperiodic tasks triggered by external events, and sporadic tasks with minimum inter-arrival times. For periodic tasks under earliest deadline first (EDF) scheduling, commonly supported in RTOS for optimal utilization, the schedulability condition relies on the processor utilization bound U = \sum_i \frac{\text{WCET}_i}{P_i} \leq 1, where \text{WCET}_i is the worst-case execution time of task i and P_i is its period; this bound guarantees that all deadlines are met if the total utilization does not exceed 100% of processor capacity. This formulation assumes knowledge of fundamental scheduling concepts, such as critical instants and preemptive dispatching, and extends to hybrid task sets by transforming aperiodic and sporadic tasks into equivalent periodic ones for analysis. By using WCET in these models, RTOS schedulers can dynamically adjust priorities or reject tasks during runtime admission, preserving system predictability. A representative example is automotive engine control units (ECUs), where precise timing is vital for fuel injection and ignition synchronization. In one case study using a rhythmic task model across multiple processors, WCET bounds were derived for six key tasks under local nonpreemptive timer-triggered scheduling, accounting for engine speeds varying up to their maximum. Overruns of the WCET could disrupt phase-coordinated operations, potentially causing stalls, misfires, or safety hazards like unintended acceleration; simulations confirmed that adhering to WCET bounds ensured no deadline misses, outperforming simpler approaches by maintaining efficiency at high loads. This integration of WCET in ECU software underscores its necessity for verifiable performance in vehicular systems.
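
As a concrete illustration of the utilization-based test, the following sketch admits a new periodic task only if the total EDF utilization stays at or below 1; the task names and parameters are hypothetical and not tied to any particular RTOS API.

```python
# Hypothetical admission-control sketch for EDF: admit a periodic task only if
# total utilization U = sum(WCET_i / P_i) stays <= 1. Parameters are illustrative.
admitted = [
    {"name": "fuel_injection", "wcet_ms": 2.0, "period_ms": 10.0},
    {"name": "ignition",       "wcet_ms": 1.5, "period_ms": 20.0},
]

def utilization(tasks):
    return sum(t["wcet_ms"] / t["period_ms"] for t in tasks)

def try_admit(new_task, tasks, bound=1.0):
    """Admit new_task only if the EDF utilization bound still holds afterwards."""
    if utilization(tasks + [new_task]) <= bound:
        tasks.append(new_task)
        return True
    return False

candidate = {"name": "diagnostics", "wcet_ms": 8.0, "period_ms": 50.0}
ok = try_admit(candidate, admitted)
print(f"U = {utilization(admitted):.2f}, admitted: {ok}")
```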

Safety-Critical Domains

In safety-critical domains, worst-case execution time (WCET) analysis is indispensable for ensuring that timing predictability prevents catastrophic failures, such as loss of life or environmental disasters. These industries impose stringent standards requiring verifiable bounds on execution times to guarantee that tasks complete within deadlines under all foreseeable conditions. WCET estimation supports compliance with regulations such as DO-178C for avionics, where overruns could lead to system malfunctions with severe consequences, and with ISO 26262 for automotive systems. In avionics, WCET analysis is applied to flight software operating under the ARINC 653 standard, which defines partitioning for integrated modular avionics (IMA) to isolate applications and ensure temporal separation. This partitioning requires precise WCET bounds to schedule partitions without interference, preventing delays in critical functions like flight control or navigation. For instance, the Boeing 787 employs ARINC 653-compliant operating systems, such as INTEGRITY-178B, where WCET verification is essential for certifying the safety of software executing on multi-partitioned hardware. Tools like aiT have been used to compute WCET for avionics programs, confirming compliance with certification objectives by statically analyzing program flow and hardware effects. In the automotive sector, WCET integration within the AUTOSAR architecture supports advanced driver-assistance systems (ADAS), particularly for time-sensitive algorithms in emergency braking. AUTOSAR's timing protection mechanisms rely on WCET estimates to enforce deterministic behavior, ensuring that braking tasks respond within milliseconds to avoid collisions. Under ISO 26262, which mandates ASIL-D certification for the highest-risk functions, WCET analysis quantifies execution bounds for software components, mitigating risks from hardware variability like cache misses. This is critical for ADAS features, where a timing overrun in brake control could result in failure to actuate, violating safety goals. For medical devices, WCET analysis contributes to reliable operation under standards such as IEC 62304, which requires demonstrating software dependability, including timing predictability, for higher-class devices in resource-constrained environments. In industrial automation, WCET supports timing guarantees in supervisory control and data acquisition (SCADA) systems for process control, where high-reliability operation is needed in sectors like chemical processing or power generation. SCADA software must meet stringent failure-rate targets for safety integrity level 4 (SIL 4) under IEC 61508, with WCET helping to bound response times for control loops that regulate valves or sensors and to prevent delays from propagating into physical processes, where they could cause explosions or spills.

Computation Methods

Static Analysis

Static analysis for worst-case execution time (WCET) estimation involves examining the program's structure and a model of the underlying hardware without executing the program, deriving safe upper bounds on execution time through mathematical modeling. This approach requires constructing a control-flow graph (CFG) from the program's assembly or binary code, where nodes represent basic blocks (sequences of instructions without branches) and edges denote possible control transfers. Basic knowledge of the processor's microarchitecture, including pipeline stages and the memory hierarchy, is essential to model timing effects accurately. The core technique in static WCET analysis is the Implicit Path Enumeration Technique (IPET), which formulates the problem as an integer linear programming (ILP) optimization to find the maximum execution time over feasible paths without explicitly enumerating all paths, avoiding exponential complexity. In IPET, the WCET is computed by maximizing the objective function \sum_i e_i x_i, where e_i is the estimated execution time of basic block i and x_i is the number of times block i is executed in the worst-case path. This maximization is subject to flow constraints derived from the CFG, such as, for each block i, \sum_{j \in \mathrm{pred}(i)} f_{j i} = x_i = \sum_{k \in \mathrm{succ}(i)} f_{i k}, where the f_{j i} are variables counting the number of times the edge from j to i is traversed, with adjustments for entry/exit points and upper bounds on loop iterations obtained via static value analysis, ensuring conservation of execution counts along paths. These constraints form an ILP solvable by standard solvers to yield a safe WCET bound. Hardware modeling in static analysis accounts for microarchitectural features that cause timing variability, such as pipelines, caches, and branch predictors, by deriving worst-case timing for each instruction. Pipeline analysis models overlaps in instruction execution, estimating stalls due to data dependencies or resource conflicts using abstract interpretation to track possible pipeline states. For caches, must-analysis identifies memory blocks guaranteed to be present (hits), while may-analysis identifies those possibly present or absent (potential misses), enabling worst-case assumptions such as treating all non-guaranteed accesses as misses in direct-mapped or set-associative caches. Branch prediction modeling bounds misprediction penalties by tracking predictor states via finite automata integrated into the ILP, assuming worst-case resolutions for unresolved branches. These models feed the execution time estimates e_i into the IPET formulation, ensuring the bound reflects hardware behavior conservatively. Commercial tools like aiT implement these principles using abstract interpretation for value and cache analysis, combined with IPET-based path analysis on binary executables, to compute WCET with minimal user annotations, typically limited to target descriptions and loop bounds where automatic inference is insufficient. The tool processes the CFG to infer loop bounds and flow facts, applies hardware models for timing annotations, and solves the resulting ILP for the bound, emphasizing precision through domain-specific abstractions. Static analysis provides deterministic, safe upper bounds independent of input data or runtime conditions, guaranteeing that actual execution times never exceed the estimate, which is crucial for schedulability verification in real-time systems. Unlike measurement-based methods, it avoids optimistic assumptions from limited test cases, though it may yield looser bounds due to conservative modeling.
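
The following sketch encodes a toy IPET problem for a loop containing an if-then-else: execution counts x_i are maximized subject to flow-conservation constraints and a loop bound. The block times, block names, and loop bound are hypothetical, and the sketch assumes the third-party pulp ILP modelling package is available; industrial tools build the ILP automatically from the binary rather than by hand.

```python
# Toy IPET formulation (hypothetical CFG, block times, and loop bound).
# Requires the third-party "pulp" ILP modelling package: pip install pulp
import pulp

# Worst-case cycle estimates per basic block (illustrative values).
e = {"entry": 5, "head": 2, "cond": 4, "then": 8, "else_": 3, "exit": 1}
LOOP_BOUND = 10  # maximum loop iterations, assumed known from value analysis

x = {b: pulp.LpVariable(f"x_{b}", lowBound=0, cat="Integer") for b in e}

prob = pulp.LpProblem("wcet_ipet", pulp.LpMaximize)
prob += pulp.lpSum(e[b] * x[b] for b in e)                  # objective: sum e_i * x_i

prob += x["entry"] == 1                                     # program entered once
prob += x["exit"] == 1                                      # and left once
prob += x["head"] == x["entry"] + x["then"] + x["else_"]    # flow into loop head
prob += x["cond"] + x["exit"] == x["head"]                  # flow out of loop head
prob += x["then"] + x["else_"] == x["cond"]                 # if/else split
prob += x["cond"] <= LOOP_BOUND                             # loop bound constraint

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("WCET bound (cycles):", pulp.value(prob.objective))
for b in e:
    print(f"  {b}: executed {int(x[b].value())} times on the worst-case path")
```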

Measurement-Based Analysis

Measurement-based worst-case execution time (WCET) analysis estimates the maximum execution time of a program by empirically observing its behavior under controlled conditions, typically through repeated executions on the target hardware or on simulators. This approach involves running the program with a suite of test cases designed to exercise potential worst-case execution paths, capturing timing data to identify upper bounds on execution duration. Unlike static methods that rely on abstract models, measurement-based techniques prioritize real-world observations for tighter estimates, often using instrumentation to record precise timings. To provoke worst-case paths, extensive test suites are generated, sometimes automatically via techniques such as evolutionary search or constraint solving, ensuring diverse inputs that maximize timing effects such as cache misses or interrupts. Timing instrumentation plays a crucial role, including cycle-accurate simulators that mimic processor behavior at the clock-cycle level, or physical tools like oscilloscopes and logic analyzers attached to the hardware to non-intrusively measure execution times. For instance, on embedded systems, logic analyzers can trace signal timings without altering program flow, providing raw cycle counts for analysis. These measurements are collected from end-to-end runs or segmented basic blocks, with hundreds to thousands of iterations needed to observe rare worst-case scenarios. In probabilistic variants, such as Measurement-Based Probabilistic Timing Analysis (MBPTA), statistical methods extrapolate from samples using Extreme Value Theory (EVT) to model the tail of execution time distributions. Execution times from multiple runs are treated as independent and identically distributed (i.i.d.) random variables, often achieved by randomizing hardware elements like cache mappings. The empirical distribution is then fitted to an extreme value distribution, such as the Weibull, to estimate a probabilistic WCET (pWCET) at a given confidence level; for example, fitting such a distribution to a set of measurements can yield a bound exceeded with probability less than 10^{-9}, corresponding to 99.9999999% confidence, for safety-critical applications. Convolving basic-block timing distributions along program paths then provides a cumulative distribution function for the overall pWCET. Effective application requires controlled execution environments, such as PTA-friendly processors with randomized timing behaviors to ensure i.i.d. samples, or simulators, such as those for PowerPC processors, with configurable cache policies. Real hardware setups often incorporate the deployed operating system to replicate field conditions. However, the approach's reliability hinges on test coverage: estimates are safe only for the paths and inputs exercised, potentially underestimating WCET if untested scenarios exist, and thus are not fully safe without complementary path analysis. Insufficient samples or non-i.i.d. timings can also lead to overly pessimistic or inaccurate bounds.
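
A minimal sketch of the EVT step is shown below, using synthetic timing data and a Gumbel fit (one common EVT choice) rather than any particular published MBPTA procedure; it assumes NumPy and SciPy are available, and a real analysis would additionally verify independence, identical distribution, and goodness of fit before trusting the resulting bound.

```python
# Minimal EVT sketch for pWCET (synthetic data, Gumbel fit); real MBPTA
# additionally checks independence, identical distribution, and fit quality.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic "measured" execution times in cycles (placeholder for real traces).
measurements = 10_000 + rng.gamma(shape=4.0, scale=50.0, size=5_000)

# Block-maxima approach: split runs into groups and keep each group's maximum.
block_size = 50
maxima = measurements[: len(measurements) // block_size * block_size]
maxima = maxima.reshape(-1, block_size).max(axis=1)

# Fit a Gumbel distribution to the block maxima.
loc, scale = stats.gumbel_r.fit(maxima)

# pWCET at exceedance probability 1e-9: under the fitted model, this bound is
# exceeded with probability less than 10^-9 per run.
exceedance_prob = 1e-9
pwcet = stats.gumbel_r.isf(exceedance_prob, loc=loc, scale=scale)

print(f"max observed: {measurements.max():.0f} cycles")
print(f"pWCET (p < 1e-9): {pwcet:.0f} cycles")
```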

Hybrid Approaches

Hybrid approaches to worst-case execution time (WCET) analysis integrate static and measurement-based techniques to leverage the strengths of both, providing path bounds from static analysis while calibrating hardware effects through empirical data. In this framework, static methods derive control-flow graphs and prune infeasible paths using integer linear programming (ILP) formulations, while measurements from execution traces refine timing estimates for code segments, informing the ILP constraints with context-sensitive execution times. For instance, trace-based refinement maps instruction-level traces onto the program's control-flow graph, annotating worst-case timings for loops and branches to yield tighter, safe bounds without intrusive probing. Recent extensions incorporate probabilistic WCET (pWCET) estimates, combining measurement distributions with static analysis for bounds that hold with high probability. Measurement-Based Probabilistic Timing Analysis (MBPTA) applies Extreme Value Theory to execution time measurements to model extreme latencies, extended in hybrid variants with static enumeration of program units and dependency modeling via copulas, ensuring that pWCET(\delta) represents an upper bound exceeded with probability at most \delta (e.g., \delta = 10^{-9}). These post-2020 developments address limitations of pure MBPTA by reducing overestimation through joint probability distributions of program units. Such hybrid methods yield tighter bounds than static analysis alone, which may overestimate due to conservative models, and safer estimates than pure measurements, which depend on input coverage; they have also been applied in preliminary multicore scenarios to capture inter-core interference. A representative case involves adapting static models with measured behaviors, where a machine learning model trained on trace data estimates timings that account for effects such as cache pollution, reducing WCET overestimation by up to 65% on benchmarks like TACLeBench executed on embedded processors.
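
The sketch below illustrates one simple hybrid idea, not a specific published tool's method: per-block worst observed times extracted from traces are combined with statically derived execution-count bounds (a loop bound and mutually exclusive branch arms) to form a path-level estimate. Block names, trace values, and the loop bound are all hypothetical.

```python
# Hypothetical hybrid sketch: worst observed per-block times from traces are
# combined with statically derived execution-count bounds (loop bound, branch
# structure) to form a path-level estimate. Values and names are illustrative.
traces = {
    # each entry: list of observed execution times (cycles) per basic block
    "init":     [40, 42, 41],
    "loop":     [12, 15, 13, 14],
    "branch_a": [30, 33],
    "branch_b": [22, 25, 24],
    "finish":   [10, 10],
}

LOOP_BOUND = 100  # assumed to come from static value analysis

worst_block = {b: max(times) for b, times in traces.items()}

# The two branch arms are mutually exclusive per iteration, so assume the
# slower arm executes on every iteration for the worst case.
worst_arm = max(worst_block["branch_a"], worst_block["branch_b"])

hybrid_bound = (worst_block["init"]
                + (worst_block["loop"] + worst_arm) * LOOP_BOUND
                + worst_block["finish"])

print(f"hybrid WCET estimate: {hybrid_bound} cycles")
```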

Challenges and Considerations

Hardware Factors

Hardware factors significantly influence the computation of worst-case execution time (WCET) by introducing variability in instruction execution cycles due to the microarchitecture and platform components. In pipelined processors, instructions are divided into stages, but hazards such as data dependencies or resource conflicts can cause stalls, where subsequent instructions wait, increasing the cycles per instruction (CPI) in the worst case. In superscalar architectures, which issue multiple instructions per cycle, out-of-order execution further complicates timing by reordering instructions to mask stalls from cache misses or branch mispredictions; however, this can lead to timing anomalies, where a sequence with higher individual latencies executes faster overall due to better overlap, requiring conservative modeling to bound the WCET without underestimation. Stalls and pipeline flushes, often triggered by synchronization instructions, can add tens of cycles per event, modeled through execution graphs that bound start and completion times iteratively. Caches introduce substantial variability in memory access times, as hits incur minimal penalties (1-4 cycles) while misses propagate to lower levels like DRAM, imposing worst-case penalties of 60-200 cycles depending on the hierarchy and bus contention. Instruction and data caches must be analyzed separately, with static methods classifying accesses as always-hit, always-miss, or may-miss to derive safe bounds; for example, the pseudo-round-robin replacement policy of some embedded processors, such as the Motorola ColdFire MCF5307, reduces predictability through uneven eviction patterns. Speculative execution, including prefetching, can exacerbate misses if predictions fail, though persistence analysis of cache states across loop iterations helps tighten bounds. Interrupts and peripherals add non-deterministic delays to WCET paths, as non-maskable interrupts (NMIs) halt the current task to service handlers, potentially flushing pipelines and incurring context-switch overheads of hundreds of cycles. Modeling requires context-bounded analysis, limiting interleavings to a bound derived from minimum inter-arrival times (e.g., k contexts where the WCET T_W satisfies T_W < k\alpha, with \alpha the minimum inter-arrival time), to avoid path explosion while capturing worst-case handler executions; peripherals like UARTs or buses contribute via access latencies, integrated into processor models for full-system timing. Modern microarchitectural features amplify these challenges: branch predictors using history-based tables (e.g., gshare-style schemes) impose misprediction penalties of 10-20 cycles by flushing speculative paths, modeled via integer linear programming (ILP) constraints on prediction table entries to bound total mispredictions conservatively. Simultaneous multithreading (SMT) introduces resource contention across threads, potentially increasing WCET by up to 4x in worst-case scenarios due to shared execution units and caches, though dual-threaded benchmarks show minimal latency impact (e.g., 731 vs. 718 cycles for array reversal) when yields overlap stalls effectively; extended IPET formulations account for this with yield edges in the flow graphs. As a coarse aggregate, WCET can be bounded as \text{WCET} = \sum (\text{instructions} \times \text{CPI}_\text{worst}), accumulating worst-case CPI from these hardware effects across the program's execution paths.
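
As a back-of-the-envelope illustration of that aggregate formula, the sketch below combines instruction counts per path segment with pessimistic CPI values meant to fold in stall, miss, and misprediction penalties; all numbers and segment names are hypothetical, and real analyses derive such values per instruction from detailed hardware models.

```python
# Coarse WCET bound via WCET = sum(instructions * CPI_worst); all values are
# hypothetical and would come from hardware analysis in practice.
segments = [
    # (name, instruction count, worst-case CPI including stalls/misses)
    ("setup",      200, 1.5),
    ("main_loop", 5000, 3.2),   # pessimistic: frequent cache misses assumed
    ("cleanup",    150, 1.2),
]

cycles = sum(count * cpi for _, count, cpi in segments)
clock_hz = 100e6  # assumed 100 MHz embedded core
print(f"WCET bound: {cycles:.0f} cycles = {cycles / clock_hz * 1e6:.1f} us")
```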

Interference and Multicore Issues

In multicore processors, interference arises from concurrent access to shared resources such as caches, memory, and interconnects, significantly complicating worst-case execution time (WCET) analysis compared to single-core systems. Shared last-level caches (L2 or L3) are particularly prone to inter-core interference, where one core's cache evictions can displace another core's data, leading to additional misses and execution delays. For instance, false sharing, where unrelated tasks inadvertently share cache lines, can exacerbate this by triggering unnecessary coherence traffic, potentially increasing WCET by up to 40% in scenarios involving large data sets contending for L2 cache space. Bus and memory contention further amplify WCET variability, as multiple cores compete for access to the shared bus or memory controller, causing worst-case delays that depend on arbitration policies such as round-robin or fixed-priority arbitration. Multicore Response Time Analysis (MC-RTA) addresses this by modeling processor demand, bus contention, and memory access patterns to bound response times, incorporating factors such as bus slot durations and DRAM refresh overheads. Under fixed-priority arbitration, for example, higher-priority tasks can block lower-priority ones, extending WCET through cumulative interference bounded by response time equations that account for self- and inter-task demands. This approach enables tighter WCET estimates by decoupling the analysis from context-independent single-task bounds, though it requires detailed hardware modeling. Recent work highlights hidden timing couplings, where non-deterministic interference between ostensibly independent tasks on different cores spikes WCET due to subtle dependencies, such as unexpected L2 thrashing or interconnect bottlenecks. Analysis has shown these couplings causing execution time increases of over 40% in mixed-criticality workloads, underscoring the need for detection with WCET tools that capture inter-core interactions. Mitigation strategies include cache partitioning to isolate cores' working sets and priority-aware scheduling to minimize high-priority task disruptions, reducing interference without fully disabling shared resources. To handle contention variability in multicore environments, probabilistic WCET (pWCET) extends traditional analysis by deriving probability distributions for execution times under variable contention. In RISC-V-based multicore systems, pWCET models bus variability using measurement-based probabilistic timing analysis, bounding violation probabilities (e.g., below 0.01%) via quota mechanisms that limit interfering cores' access durations. For benchmarks such as cjpeg, randomized refill policies in leaky-bucket regulators improved critical-task efficiency by up to 7%, ensuring predictable timing in mixed-criticality setups. In avionics, multicore adoption under guidelines like CAST-32A (2016), aligned with EASA's AMC 20-193 in 2022 and the FAA's AC 20-193 in 2024, emphasizes WCET challenges arising from interference and the need to maintain timing integrity. These standards require demonstrating deterministic timing behavior across shared resources, addressing interference through life-cycle objectives like interference-aware partitioning and validation, with ongoing updates focusing on emerging hardware idiosyncrasies to support safety-critical deployments.
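
The sketch below gives a deliberately simplified flavor of interference-aware bounding, not the MC-RTA formulation itself: an isolation WCET is inflated by a per-access worst-case arbitration delay under an assumed round-robin bus shared by several cores, with all numbers hypothetical.

```python
# Simplified interference-aware bound (illustrative, not full MC-RTA):
# each shared-bus access may wait for every other core once under
# round-robin bus arbitration.
wcet_isolation_cycles = 50_000   # bound obtained with the core running alone
memory_accesses       = 1_200    # worst-case number of shared-bus accesses
num_cores             = 4
bus_slot_cycles       = 20       # worst-case duration of one bus transaction

# Each access can be delayed by (num_cores - 1) competing transactions.
per_access_delay = (num_cores - 1) * bus_slot_cycles
interference_cycles = memory_accesses * per_access_delay

wcet_shared = wcet_isolation_cycles + interference_cycles
print(f"isolation bound: {wcet_isolation_cycles} cycles")
print(f"added interference: {interference_cycles} cycles")
print(f"multicore bound: {wcet_shared} cycles")
```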

Tools and Research

Notable Tools

AbsInt's aiT is a prominent commercial tool for static worst-case execution time (WCET) analysis, employing abstract interpretation to model pipelines and caches and integer linear programming (ILP) for path analysis, yielding precise bounds on tasks. It analyzes executables directly, incorporating low-level behaviors such as branch prediction and cache effects to compute safe upper bounds without executing the code. The LDRA tool suite offers comprehensive WCET capabilities, with a significant update in March 2025 adding support for multicore architectures, including analysis of cache coherency protocols and hardware-based contention mitigation. This enables automated timing analysis for safety-critical applications on processors from vendors such as Microchip, Synopsys, and Andes Technology, addressing multicore interference through integrated data- and control-flow coupling analysis. Among open-source options, OTAWA serves as a modular C++ framework for static WCET analysis, supporting multiple instruction set architectures and facilitating control-flow extraction, annotation, and ILP-based computation. It allows researchers to implement and experiment with adaptive analyses on executables under an LGPL license. pyCPA, a Python-based implementation of compositional performance analysis, focuses on deriving worst-case response times for multicore and distributed systems by modeling event streams and resource scheduling. It has been applied to industrial benchmarks involving multicore setups, providing flexible bound calculation without hardware-specific WCET estimation. These tools commonly support certification standards such as A(M)C 20-193, which guides multicore WCET verification in avionics by requiring evidence of partitioning and interference analysis. LDRA, in particular, integrates dynamic WCET measurement to comply with these guidelines across the development lifecycle. In 2025, trends in avionics tool development emphasize continuous verification workflows, where WCET tools enable ongoing timing analysis integrated into DevSecOps pipelines for rapid evidence collection. Evaluation of tool accuracy often relies on tightness metrics, measuring overestimation relative to measured execution times; for instance, aiT has achieved overestimations as low as 4% on benchmarks and 7-8% on C16x and MPC565 processors in tool challenges. Recent advancements incorporate probabilistic WCET (pWCET) estimates into measurement-based tools, providing tail distributions of exceedance probabilities to balance tightness and safety in uncertain environments.

Benchmarks and Challenges

The International Workshop on Worst-Case Execution Time Analysis, which includes the WCET Tool Challenge initiated in 2006, is an annual event that has reached its 22nd edition and serves as a key platform for evaluating and comparing WCET analysis tools on standardized benchmarks, with a primary emphasis on achieving tight bounds and increasing confidence in timing analysis. Participants submit results from their tools applied to common programs, fostering discussion of tool performance, scalability, and integration challenges in real-time systems. The challenge typically utilizes the Mälardalen WCET benchmark suite, which comprises 42 programs designed to exercise various control structures, loops, and data dependencies relevant to embedded applications. Complementing this, TACLeBench provides a collection of open-source benchmarks tailored for embedded real-time systems, focusing on WCET-oriented optimizations and time-predictable architectures, with programs adapted to ensure analyzability and reproducibility. Since 2022, efforts have extended these benchmarks to multicore scenarios, incorporating shared-resource models and simulations to address parallel execution on modern processors. Emerging research directions since 2020 highlight probabilistic WCET (pWCET) approaches, as explored in a 2024 seminar paper that differentiates pWCET, which provides execution time distributions with probabilistic guarantees, from purely statistical methods, and discusses the implications for multicore and safety-critical systems. Additionally, continuous WCET verification has gained traction for supporting agile development in avionics, enabling automated timing analysis within iterative software cycles to meet certification standards without full recertification at each update. These trends reflect a shift toward dynamic, measurement-integrated techniques that balance precision with development efficiency in safety-critical domains. Open challenges in WCET analysis include scalability to AI and machine-learning code, where non-deterministic elements like dynamic neural network operations complicate bounding execution paths and call for new predictable programming frameworks. Conferences such as the International Symposium on Leveraging Applications of Formal Methods (ISoLA) and the Embedded Real Time Software and Systems (ERTS) congress regularly feature WCET papers, showcasing advances in tool integration, probabilistic methods, and multicore timing from both academic and industrial perspectives.
