
Robustness testing

Robustness testing is a methodology in software testing focused on evaluating the degree to which a software system or component can operate correctly and reliably when exposed to unexpected inputs, invalid data, resource constraints, or stressful environmental conditions, such as hardware failures or network disruptions. This approach aims to identify vulnerabilities that could lead to crashes, incorrect behaviors, or security breaches under non-nominal scenarios, distinguishing it from standard functional testing by emphasizing error handling and graceful degradation. Robustness testing is essential for enhancing the dependability of software in critical domains, including operating systems, embedded systems, and distributed applications, where failures can have significant consequences.

Key techniques in robustness testing include fault injection, which deliberately introduces errors such as invalid parameters or exceptions to observe system responses; fuzz testing, which generates random or semi-random inputs to probe for weaknesses; and model-based testing, which uses formal models to simulate exceptional conditions and verify compliance with robustness requirements. These methods target various software artifacts, from individual components to full systems, and are often automated to scale testing efforts efficiently. The importance of robustness testing has grown with the increasing complexity of software systems, as it helps mitigate risks associated with the integration of off-the-shelf components and evolving operational environments.

Historically, robustness testing emerged in the 1990s as a response to reliability issues in commercial off-the-shelf (COTS) software and operating systems, with pioneering work in the Ballista project at Carnegie Mellon University, which developed automated tools for API-level fault injection to assess interfaces across multiple platforms. Subsequent advancements have incorporated standards and tools such as NIST's work on combinatorial testing, addressing gaps in standardization and tool support identified in systematic reviews. Today, it remains a vital practice in safety-critical industries, such as automotive and aerospace, where compliance with standards like ISO 26262 requires rigorous robustness validation.

Introduction

Definition

Robustness testing is a methodology in software testing that evaluates a system's ability to maintain correct functionality and performance when subjected to unexpected, invalid, or abnormal conditions, including erroneous inputs, resource limitations, or environmental stresses. This approach specifically targets the system's behavior beyond standard operational parameters to identify vulnerabilities that could lead to failures, crashes, or security breaches. Unlike reliability testing, which assesses long-term performance under anticipated usage patterns, robustness testing emphasizes resilience in edge cases.

Key attributes of robustness include graceful degradation, where the system reduces functionality in a controlled manner to preserve core operations; error recovery, enabling the system to detect and correct faults without complete failure; and fault tolerance, which allows continued operation despite partial component breakdowns. These characteristics ensure that the software does not propagate errors catastrophically but instead handles anomalies predictably and securely.

In contrast to nominal testing, which verifies expected behaviors under normal inputs and conditions, robustness testing deliberately introduces non-standard scenarios to probe limits and error-handling mechanisms. For instance, robustness testing of a web server might involve submitting malformed HTTP requests, such as those with invalid headers or truncated payloads, to confirm that the server returns appropriate error responses without crashing or exposing sensitive data.
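A minimal sketch of such a probe, using only Python's standard library (the handler, port choice, and the malformed request bytes are all illustrative assumptions): it starts a throwaway HTTP server and sends a request with an unsupported method, then checks that the server answers with an error status instead of crashing or hanging.

```python
import socket
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class HelloHandler(BaseHTTPRequestHandler):
    """Minimal handler standing in for the web server under test."""
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Length", "2")
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, fmt, *args):
        pass  # keep test output quiet

def probe(host, port, raw_request):
    """Send raw (possibly malformed) bytes, return the first response line."""
    with socket.create_connection((host, port), timeout=5) as conn:
        conn.sendall(raw_request)
        return conn.recv(1024).split(b"\r\n", 1)[0]

server = HTTPServer(("127.0.0.1", 0), HelloHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# An unsupported method is one kind of unexpected input; a robust server
# should answer with an error status rather than crash or hang.
status_line = probe("127.0.0.1", server.server_address[1],
                    b"BOGUS / HTTP/1.1\r\nHost: x\r\n\r\n")
server.shutdown()
```

A real robustness campaign would extend `probe` with many request variants (truncated headers, oversized URIs, invalid versions) and also verify that the server keeps serving valid requests afterwards.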

Historical Development

The origins of robustness testing trace back to the 1970s and 1980s, emerging from research in fault-tolerant computing aimed at ensuring reliable software operation in critical environments. Influenced heavily by NASA's efforts to enhance software reliability for space missions, early work focused on software-implemented fault tolerance (SIFT) to handle hardware failures and software errors in real-time systems. In 1973, SRI International, under NASA sponsorship, initiated the SIFT project, which demonstrated the feasibility of executing multiple program versions in parallel on a fault-tolerant multiprocessor to mask errors and maintain system integrity in safety-critical applications such as aircraft control. These initiatives laid the groundwork for robustness practices by emphasizing error detection, recovery, and tolerance mechanisms in high-stakes applications.

In the late 1980s and 1990s, robustness testing gained formalization through systematic fault injection techniques, which deliberately introduced faults to evaluate system behavior under anomalous conditions. A key milestone was the development of the FIAT (Fault Injection-based Automated Testing) environment at Carnegie Mellon University, where researchers like J. H. Barton and colleagues conducted experiments to assess fault propagation and coverage in distributed systems. Their 1990 study on fault injection experiments demonstrated how controlled fault injection could quantify dependability metrics, such as error detection rates, influencing subsequent standards for validating fault-tolerant software. This era shifted robustness testing from ad-hoc methods to structured experimental frameworks, enabling reproducible assessments of software dependability.

The 2010s marked modern advancements in robustness testing, with its integration into agile and DevOps methodologies to support continuous testing and rapid iteration in dynamic environments. Practices like automated fault injection and chaos engineering became embedded in CI/CD pipelines, allowing teams to test system stability under simulated failures during development sprints.
Concurrently, post-2015 extensions to AI and machine learning emphasized adversarial robustness, where testing involved crafting inputs to expose vulnerabilities in neural networks, as explored in foundational works on adversarial examples. Influential standards, such as ISO/IEC 25010 (first published in 2011 and revised in 2023), formalized robustness within software quality models by incorporating fault tolerance as a sub-characteristic of reliability, providing a framework for evaluating system behavior under abnormal conditions.

Importance and Applications

Benefits

Robustness testing significantly enhances software reliability by systematically identifying potential failure points under various conditions, thereby reducing system downtime and ensuring more stable behavior in operational environments. By simulating unexpected scenarios and edge cases, it uncovers latent defects that could otherwise lead to crashes or inconsistent behavior, allowing developers to implement safeguards that promote graceful degradation and continuous operation. This proactive approach has been shown to minimize disruptions, particularly in mission-critical systems where reliability is paramount.

In terms of security, robustness testing plays a crucial role in detecting vulnerabilities arising from malformed or unexpected inputs, which could otherwise be exploited to cause issues such as buffer overflows or unauthorized access. Through rigorous evaluation of error-handling mechanisms, it exposes weaknesses that traditional testing might overlook, enabling the fortification of defenses against potential attacks and improving overall system integrity. This is especially vital in environments handling sensitive data, where such flaws could lead to breaches.

The practice also yields substantial cost savings by facilitating early detection and resolution of robustness issues, preventing the escalation of defects that become exponentially more expensive to address in later stages of development or post-deployment. Studies indicate that fixing problems after delivery can cost up to 100 times more than during the design or requirements phases, underscoring the economic value of integrating robustness testing into the development lifecycle to avoid rework and associated overheads.

Furthermore, robustness testing aids in achieving regulatory compliance and fostering user trust by ensuring systems adhere to established standards for error management and fault tolerance, such as those outlined in ISO/IEC 12207 and IEEE guidelines.
In critical applications, this compliance helps meet requirements for robust data handling under failure conditions, thereby building confidence among users and stakeholders in the system's dependability.

Use Cases

Robustness testing finds extensive application in embedded systems, particularly within the automotive sector, where it is employed to evaluate the performance of under simulated failures such as sensor malfunctions or fluctuations. In Advanced Driver Assistance Systems (ADAS), techniques simulate sensor errors like noise, delays, or complete outages to assess ECU responses, ensuring system reliability and safety without physical hardware risks. This approach allows developers to identify vulnerabilities in real-time decision-making processes, such as or lane-keeping assistance, where erroneous sensor data could lead to hazardous outcomes. In web and cloud services, robustness testing is crucial for validating endpoints against high-volume traffic resembling Distributed Denial-of-Service (DDoS) attacks or malformed payloads, thereby preventing service disruptions and security breaches. For SOAP-based web services, testing involves injecting invalid parameters—ranging from null values and boundary conditions to malicious inputs like attempts—to detect crashes, errors, or unintended behaviors. Evaluations of public web services have shown that nearly half exhibit robustness issues under such conditions, underscoring the need for these tests to maintain operational integrity in distributed environments. For models, robustness testing centers on assessing vulnerability to adversarial perturbations, especially in image recognition tasks where subtle input alterations can cause misclassifications. Seminal work demonstrated that deep neural networks, when trained on datasets like , fail on examples crafted by adding imperceptible noise, exploiting the models' linear behavior in high-dimensional spaces. 
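The gradient-sign idea behind many of these attacks can be sketched on a toy logistic classifier with hand-picked weights (all values below are illustrative, not from any trained network): each input feature is nudged by a small epsilon in the direction that increases the loss, flipping the prediction.

```python
import math

def dot(u, v):
    """Inner product of two equal-length vectors."""
    return sum(a * b for a, b in zip(u, v))

def predict(x, w, b):
    """Toy linear classifier: class 1 if w·x + b > 0, else class 0."""
    return int(dot(x, w) + b > 0)

def fgsm_perturb(x, w, b, y_true, eps):
    """Fast-gradient-sign-style perturbation for a logistic model:
    move each feature by eps in the sign of the loss gradient."""
    p = 1.0 / (1.0 + math.exp(-(dot(x, w) + b)))   # sigmoid output
    sign = lambda v: (v > 0) - (v < 0)
    # d(cross-entropy)/dx_i = (p - y_true) * w_i
    return [xi + eps * sign((p - y_true) * wi) for xi, wi in zip(x, w)]

w = [1.0, -1.0]          # hypothetical "trained" weights
b = 0.0
x = [0.3, 0.1]           # correctly classified as class 1 (score 0.2)
x_adv = fgsm_perturb(x, w, b, y_true=1, eps=0.3)   # score drops to -0.4
```

For a linear model the perturbation provably moves the score by eps times the L1 norm of the weights, which is why even small eps can flip predictions in high-dimensional spaces.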
Benchmarking studies further reveal that adversarial training enhances generalization against varied threat models, though relative robustness varies across architectures and attack types, emphasizing the importance of comprehensive evaluation metrics like robustness curves.

In regulated domains such as healthcare software, robustness testing ensures that medical devices handle malfunctions or abnormal conditions without compromising patient safety, aligning with regulatory standards for risk mitigation. The U.S. Food and Drug Administration (FDA) guidelines recommend validating software through stress testing, error-handling simulations, and boundary condition checks to verify performance under maximum loads, operator errors, or system failures. For instance, testing off-the-shelf components in devices like infusion pumps involves black-box methods to confirm recovery from memory constraints or input anomalies, preventing catastrophic risks as defined by harm severity scales. Additionally, the FDA's postmarket management guidance for cybersecurity in medical devices, issued in 2016 and updated as of 2023, recommends monitoring for cybersecurity-related vulnerabilities and implementing timely remediation plans to address uncontrolled risks.

Testing Techniques

Fault Injection

Fault injection is a technique used in robustness testing to deliberately introduce faults into a software or hardware system, simulating potential failures to evaluate the system's ability to detect, handle, and recover from errors. This method helps identify weaknesses in fault-tolerance mechanisms by mimicking real-world error conditions that are difficult to provoke naturally during standard testing. Common types of faults injected include memory corruption, such as bit flips in variables or buffers; network delays or packet losses; and hardware errors like voltage glitches or processor exceptions. These faults can manifest as incorrect arguments to functions, resource unavailability, I/O failures, or erroneous system timing, allowing testers to probe the system's response to diverse failure modes.

Methods for fault injection are categorized by their level of intervention. Code-level injection involves mutating source code or binaries, such as altering variable values or inserting erroneous statements before or during execution. Hardware-level injection simulates physical faults, for instance, by inducing bit-level errors in memory or registers through debugging interfaces or simulation. Interface-level injection targets boundaries between components, like corrupting input packets in network protocols or parameters in library calls, to test inter-module interactions without altering core logic.

The process follows structured steps to ensure systematic evaluation. First, fault selection involves defining representative error scenarios based on historical field data or dependability standards to guide the campaign. Next, injection points are identified, such as critical code paths, registers, or communication interfaces, to maximize relevance to system vulnerabilities. Finally, outcome analysis examines the system's behavior, including error detection rates and recovery actions, to validate tolerance mechanisms.
Key metrics for assessing fault injection outcomes include fault coverage percentage, which measures the proportion of the fault space explored relative to the total possible errors, and recovery success rate, defined as the fraction of injected faults from which the system returns to normal operation without failure propagation. These metrics provide quantitative insights into robustness for comprehensive validation.
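The recovery-success-rate metric can be illustrated with a small interface-level injection sketch (the wrapper class, fault model, and rates below are hypothetical): a storage dependency is made to fail at a configurable rate, and the campaign measures how often a retry-with-fallback handler survives.

```python
import random

class FaultyDisk:
    """Interface-level fault injector: wraps a key-value store and makes
    read() raise IOError at a configurable rate (names are illustrative)."""
    def __init__(self, data, fault_rate, seed=0):
        self.data = data
        self.fault_rate = fault_rate
        self.rng = random.Random(seed)   # fixed seed: reproducible campaign

    def read(self, key):
        if self.rng.random() < self.fault_rate:
            raise IOError("injected read fault")
        return self.data[key]

def robust_read(disk, key, retries=3, default=None):
    """System under test: retries transient faults, then degrades
    gracefully by falling back to a default value."""
    for _ in range(retries):
        try:
            return disk.read(key)
        except IOError:
            continue
    return default

def recovery_success_rate(n_trials=1000, fault_rate=0.5):
    """Campaign driver: fraction of runs that survive injected faults."""
    disk = FaultyDisk({"cfg": "value"}, fault_rate)
    ok = sum(robust_read(disk, "cfg") == "value" for _ in range(n_trials))
    return ok / n_trials
```

With a 50% fault rate and three retries, the expected recovery success rate is roughly 1 - 0.5^3 = 0.875; the same driver can sweep fault rates to map where graceful degradation kicks in.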

Fuzz Testing

Fuzz testing, also known as fuzzing, is an automated technique that involves supplying a program with a large volume of invalid, unexpected, or random data as inputs to identify defects such as crashes, assertion failures, or memory corruption. This approach was pioneered in a 1990 study by Barton P. Miller and colleagues, who applied random input generation—termed "fuzz"—to UNIX utilities, revealing that approximately one-third of tested programs failed under such conditions. Inputs typically target various interfaces, including files, network protocols, or application programming interfaces (APIs), where malformed data can expose logical errors or buffer overflows that might otherwise remain undetected in standard testing. The process automates the generation and injection of these inputs, monitoring the program's response to detect anomalies without requiring prior knowledge of internal implementation details.

Fuzz testing encompasses several variants distinguished by the level of access to the target's internals and the sophistication of input generation. Black-box fuzzing operates without any code inspection, relying solely on external interfaces to generate purely random or mutation-based inputs, making it simple to deploy but potentially less efficient in exploring deep code paths. In contrast, white-box fuzzing incorporates symbolic execution or static analysis of the source code to guide input generation toward uncovered branches, enhancing coverage but increasing complexity and computational demands. Hybrid fuzzing, often referred to as grey-box fuzzing, combines elements of both by using lightweight instrumentation—such as code coverage feedback—to mutate inputs adaptively while maintaining black-box simplicity, striking a balance between speed and thoroughness.

The effectiveness of fuzz testing lies in its ability to uncover a wide range of robustness issues, including crashes, memory leaks, and vulnerabilities like denial-of-service conditions or memory-safety flaws, often achieving higher detection rates than manual testing due to its exhaustive input exploration.
For instance, Google's OSS-Fuzz platform has identified more than 23,900 bugs across 316 projects in its first four years of operation, demonstrating its practical impact on real-world software reliability. This high yield stems from the technique's capacity to simulate edge cases that mimic real-world adversarial inputs, leading to rapid fixes in critical components such as web browsers and libraries. Despite its strengths, fuzz testing has notable limitations, particularly its computational intensity, as generating and executing millions of inputs can require significant resources without mechanisms like coverage guidance to prioritize promising test cases. Unguided fuzzers may waste effort on redundant or shallow explorations, prolonging the time to discover deep vulnerabilities and limiting scalability for large or complex systems.
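A toy mutation-based black-box fuzzer along these lines might look as follows (the parser under test, the mutation operators, and the iteration budget are all illustrative): random byte-level mutations of a seed input quickly discover that the parser mishandles empty input.

```python
import random

def mutate(seed_bytes, rng):
    """Apply 1-8 random byte-level mutations: bit flip, insert, or delete."""
    data = bytearray(seed_bytes)
    for _ in range(rng.randint(1, 8)):
        op = rng.choice(("flip", "insert", "delete"))
        if op == "flip" and data:
            i = rng.randrange(len(data))
            data[i] ^= 1 << rng.randrange(8)
        elif op == "insert":
            data.insert(rng.randrange(len(data) + 1), rng.randrange(256))
        elif op == "delete" and data:
            del data[rng.randrange(len(data))]
    return bytes(data)

def parse_record(blob):
    """Toy parser under test: a 1-byte length prefix then the payload.
    It validates the length but forgets the empty-input case."""
    n = blob[0]                    # IndexError on empty input: the defect
    payload = blob[1:1 + n]
    if len(payload) != n:
        raise ValueError("truncated payload")   # documented, expected error
    return payload

def fuzz(target, seed, iterations=2000, rng_seed=1):
    """Black-box mutation fuzzer: collect inputs that crash the target
    with anything other than its documented ValueError."""
    rng = random.Random(rng_seed)
    crashes = []
    for _ in range(iterations):
        case = mutate(seed, rng)
        try:
            target(case)
        except ValueError:
            pass                    # expected rejection of bad input
        except Exception as exc:    # unexpected: a robustness defect
            crashes.append((case, exc))
    return crashes
```

Real fuzzers such as AFL add the coverage feedback discussed above, keeping mutated inputs that reach new code paths as fresh seeds instead of always mutating the original.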

Model-Based Testing

Model-based testing is a robustness testing technique that employs formal models of the expected system behavior to automatically generate and execute test cases, focusing on exceptional conditions such as invalid inputs, resource limitations, or environmental stresses. This method simulates non-nominal scenarios to verify compliance with robustness requirements, including error handling, fault detection, and recovery mechanisms. It is particularly valuable for complex systems like embedded controllers, web applications, and protocol implementations, where models such as state machines, UML diagrams, or Petri nets guide the creation of targeted tests that would be challenging to design manually. By comparing actual system responses against model predictions, testers can identify deviations that indicate vulnerabilities.
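A minimal sketch of the state-machine flavor of this idea (the connection API, its transition table, and the seeded defect are all hypothetical): every event sequence up to a bounded length is generated from the model, and the implementation's accept/reject behavior is compared against the model's.

```python
from itertools import product

# Formal model: a finite state machine for a toy connection API.
# Transitions absent from the table are expected to be rejected.
MODEL = {
    ("closed", "open"): "open",
    ("open", "send"): "open",
    ("open", "close"): "closed",
}
EVENTS = ("open", "send", "close")

class Connection:
    """Hypothetical implementation under test, seeded with a defect:
    send() while closed is silently ignored instead of rejected."""
    def __init__(self):
        self.state = "closed"

    def handle(self, event):
        if event == "open" and self.state == "closed":
            self.state = "open"
        elif event == "close" and self.state == "open":
            self.state = "closed"
        elif event == "send":
            pass   # DEFECT: accepted even when self.state == "closed"
        else:
            raise RuntimeError(f"{event} invalid in state {self.state}")

def model_based_tests(max_len=3):
    """Enumerate all event sequences up to max_len; flag sequences where
    the implementation's accept/reject behavior diverges from the model."""
    failures = []
    for n in range(1, max_len + 1):
        for seq in product(EVENTS, repeat=n):
            state, impl = "closed", Connection()
            for event in seq:
                expected_ok = (state, event) in MODEL
                try:
                    impl.handle(event)
                    actual_ok = True
                except RuntimeError:
                    actual_ok = False
                if actual_ok != expected_ok:
                    failures.append(seq)
                    break
                if not expected_ok:
                    break            # model rejects: stop this sequence
                state = MODEL[(state, event)]
    return failures
```

Here the exhaustive enumeration finds the invalid-event defect (for example the sequence consisting of a lone send) that nominal, happy-path tests would never exercise.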

Stress and Load Testing

Load testing involves simulating expected high-volume usage scenarios to evaluate a software system's performance thresholds and ensure it maintains acceptable response times and throughput under anticipated conditions. This typically replicates normal operational loads, such as multiple concurrent users interacting with the system, to verify stable behavior without exceeding design limits. By monitoring metrics like response time, throughput, and resource utilization during these simulations, developers can confirm the system's ability to handle typical demands in production environments.

Stress testing, in contrast, deliberately pushes the system beyond its normal operational limits—such as by imposing excessive CPU, memory, or network demands—to identify breaking points, failure modes, and recovery mechanisms. This approach exposes how the software behaves under overload, including potential crashes, data loss, or degraded service, and tests the robustness of error-handling and recovery processes. Unlike load testing, which focuses on expected usage, stress testing aims to uncover latent vulnerabilities by maximizing resource consumption until the system falters.

Key scenarios in stress and load testing include sudden spikes in concurrent users, which simulate surges like those during promotional events; overflow conditions, where input volumes exceed buffer or queue capacities, leading to potential memory leaks or crashes; and prolonged operation under sustained high loads, revealing issues like memory leaks over extended periods. These scenarios help replicate real-world pressures, such as retail site rushes or cloud service scaling events, without risking live systems.

The primary outcomes of these tests are the identification of performance bottlenecks, such as inefficient database queries or network chokepoints; determination of scalability limits, including the maximum sustainable user count before degradation; and evaluation of graceful degradation points, where the system prioritizes critical functions during overload to maintain partial operability. By analyzing these results, engineers can optimize resource allocation and enhance overall system resilience, often leading to architectural improvements that support higher loads.
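A thread-based load generator can be sketched in a few lines (the toy service, its capacity limit, and the load-shedding policy are assumptions for illustration): at a concurrency below the service's capacity every request succeeds, while overload triggers the graceful-degradation path.

```python
import threading
import time

def service(semaphore):
    """Toy service under test with capacity-based load shedding: requests
    beyond its concurrency limit are rejected instead of queued forever."""
    if not semaphore.acquire(blocking=False):
        return "rejected"          # graceful degradation under overload
    try:
        time.sleep(0.001)          # simulated request processing
        return "ok"
    finally:
        semaphore.release()

def run_load(concurrency, requests_per_worker=20, capacity=8):
    """Drive the service with `concurrency` workers; count outcomes."""
    sem = threading.Semaphore(capacity)
    counts = {"ok": 0, "rejected": 0}
    lock = threading.Lock()

    def worker():
        for _ in range(requests_per_worker):
            outcome = service(sem)
            with lock:
                counts[outcome] += 1

    threads = [threading.Thread(target=worker) for _ in range(concurrency)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counts
```

Sweeping `concurrency` upward and plotting the ok/rejected split is a crude but instructive way to locate the scalability limit and the graceful-degradation point described above.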

Tools and Frameworks

Open-Source Tools

Several prominent open-source tools have emerged to facilitate robustness testing across different software layers, from application binaries to operating system kernels and cloud-native services. These tools emphasize automation, coverage guidance, and fault simulation to uncover vulnerabilities and ensure system reliability without proprietary dependencies.

American Fuzzy Lop (AFL) is a widely adopted open-source tool that employs coverage-guided mutational fuzzing to test binary executables for crashes, memory leaks, and other robustness failures. Developed by Michał Zalewski, AFL instruments programs to track code coverage during fuzzing sessions, prioritizing inputs that exercise new code paths via a genetic algorithm-like process. This approach has proven effective in discovering thousands of vulnerabilities in widely used software, such as those in image parsers and network protocols, by generating compact test corpora that can seed further analysis. AFL's simplicity and efficiency make it suitable for both standalone use and integration into development workflows, supporting platforms like Linux and macOS.

Syzkaller serves as a specialized open-source fuzzer for operating system kernels, particularly Linux and other Unix-like systems, targeting system call interfaces to probe for robustness issues like race conditions, deadlocks, and invalid memory accesses. Maintained by Google, it operates in an unsupervised mode, automatically generating and executing system call sequences based on kernel coverage feedback from tools like KCOV. Syzkaller has been instrumental in identifying hundreds of kernel bugs since its inception, with features for reproducing crashes and minimizing test cases to aid debugging. Its configuration-driven design allows customization for specific kernel subsystems, enhancing its utility in continuous kernel testing.

A practical application of these tools involves integrating fuzzers into CI/CD pipelines for automated regression detection, as demonstrated in GitLab's coverage-guided fuzz testing feature, where fuzzing jobs run on code commits to detect regressions early. This setup compiles instrumented binaries, executes short fuzzing runs in parallel stages, and reports crashes via artifacts, ensuring robustness checks become a seamless part of the development lifecycle without disrupting standard builds.

Commercial Tools

Parasoft C/C++test provides an integrated environment for robustness testing in C and C++ applications, particularly for embedded software in safety-critical domains. It combines static analysis to detect defects, vulnerabilities, and compliance issues early in development with unit testing capabilities that include fault injection through function stubs to simulate error conditions and validate code resilience. This approach enables developers to automate the identification of robustness flaws, such as memory leaks or undefined behaviors, ensuring reliable performance under adverse conditions.

Keysight Eggplant supports model-based testing that emphasizes cross-platform robustness, allowing teams to create visual models of applications for automated execution across devices and environments. Its AI-driven features generate executable tests to assess system behavior under stress, including load variations and integration failures, which helps uncover issues in user interfaces and backend services before deployment. By emulating real-world interactions, Eggplant facilitates comprehensive validation of application stability in diverse scenarios, such as mobile and web ecosystems.

OpenText LoadRunner excels in advanced load and stress simulation tailored for enterprise applications, enabling the emulation of thousands of virtual users to evaluate system performance under peak conditions. It supports protocol-based scripting for web, database, and network traffic, allowing precise measurement of response times, throughput, and resource utilization to identify bottlenecks that could compromise robustness. This tool is widely used in large-scale environments to ensure applications remain operational during high-traffic events or resource constraints.

Commercial robustness testing tools like these often incorporate advanced reporting dashboards that provide visualizations of test results, trends, and defect metrics to facilitate decision-making. They are designed for enterprise scale, supporting distributed teams through CI/CD integration and cloud-based execution to handle complex, large-scale testing workflows. Additionally, many achieve compliance certifications, such as TÜV SÜD certification for functional safety standards like ISO 26262 and IEC 61508, ensuring adherence to industry regulations in regulated sectors.

Best Practices and Challenges

Implementation Strategies

Implementing robustness testing effectively requires a phased approach integrated throughout the software development life cycle (SDLC). This begins with incorporating robustness techniques at the unit level during the design and development phases, where developers simulate invalid inputs and error conditions to verify error-handling mechanisms early in the process. As development progresses, testing scales to integration and system levels, employing methods that introduce malformed data and boundary violations across components, ensuring robustness against unexpected interactions. This incremental strategy aligns with developmental testing standards, allowing for iterative refinement and reducing the cost of late-stage fixes.

Automation plays a central role in embedding robustness testing into continuous integration/continuous deployment (CI/CD) pipelines, enabling frequent and consistent checks without manual intervention. Robustness tests, such as automated fuzzing and fault injection suites, are triggered on code commits, with configurable thresholds for pass/fail criteria—such as minimum fault detection rates or response time limits—to gate deployments and prevent propagation of vulnerabilities. For instance, integrating fuzzing strategies into CI/CD setups has been shown to enhance vulnerability detection by systematically varying inputs during builds, supporting shift-left practices that catch issues before production. Tools facilitate this by executing tests in parallel and generating reports on failure modes, promoting a culture of continuous improvement.

Achieving comprehensive robustness demands defined coverage criteria, targeting fault coverage through techniques like boundary value analysis, equivalence partitioning, and invalid input simulation to address critical error paths. This involves measuring coverage against requirements and potential failure modes, such as single- and multi-mode faults, where empirical applications have demonstrated up to 100% coverage for critical faults and 87% overall through orthogonal array-based test plans. Such metrics ensure that testing not only detects but also isolates faults effectively, providing quantifiable assurance of system resilience without exhaustive enumeration. Recent advancements include the integration of AI-driven tools for enhanced test generation and coverage, as outlined in standards like IEEE 3129-2023 for AI-based systems.

Successful implementation hinges on clear team roles, with developers responsible for unit-level robustness checks during coding, quality assurance (QA) specialists designing and executing system-wide tests, and security experts contributing scenarios involving malicious or adversarial inputs. This collaborative model fosters shared ownership, where QA leads coordinate coverage goals, developers embed testable error models, and security professionals validate against threat models, drawing on interdisciplinary expertise to align testing with organizational priorities.
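Boundary value analysis with the "dirty" just-outside points can be sketched as follows (the age validator and its 0..130 range are illustrative, and the generated point set is one common convention rather than a standard):

```python
def robust_boundary_values(lo, hi):
    """Boundary value analysis extended for robustness: nominal points
    plus the 'dirty' points just outside the valid range [lo, hi]."""
    return [lo - 1, lo, lo + 1, (lo + hi) // 2, hi - 1, hi, hi + 1]

def validate_age(age):
    """Example input validator under test: accepts ages 0..130 inclusive."""
    if not 0 <= age <= 130:
        raise ValueError("age out of range")
    return True

def run_boundary_suite():
    """Exercise every boundary point and record accept/reject outcomes."""
    outcomes = {}
    for value in robust_boundary_values(0, 130):
        try:
            validate_age(value)
            outcomes[value] = "accepted"
        except ValueError:
            outcomes[value] = "rejected"
    return outcomes
```

The off-by-one points (-1 and 131 here) are exactly where robustness defects tend to cluster, which is why the "dirty" cases are generated alongside the nominal ones.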

Common Challenges

One major challenge in robustness testing is the high resource intensity, particularly in techniques like fuzzing, which demand substantial computational power and time to generate and execute large volumes of test inputs. This can strain limited hardware or budgets, often making exhaustive testing impractical for complex systems. To mitigate this, practitioners can leverage cloud-based resources for scalable parallelization or adopt selective testing strategies that prioritize high-risk components, such as those identified through static analysis.

False positives represent another common obstacle, where tests flag non-existent failures due to overly sensitive oracles or nondeterministic behavior, leading to wasted effort in triage and reduced tester confidence.

Coverage gaps pose significant hurdles, as hard-to-reach code paths—such as those guarded by complex conditions or rare inputs—often remain untested, leaving potential robustness weaknesses undetected. This is especially prevalent in large-scale or distributed systems where full exploration is infeasible. Hybrid approaches combining fuzzing with code instrumentation, like coverage-guided fuzzing, can address this by guiding test generation toward underrepresented paths, while model-based testing enhances systematic coverage of error-handling scenarios.

Adapting to evolving threats further complicates robustness testing, as new vulnerabilities emerge in response to changing software ecosystems, such as novel attack vectors in web services or third-party components. Traditional test suites may quickly become obsolete without ongoing maintenance. With the rise of AI in software, additional challenges include ensuring robustness against adversarial inputs in machine learning models, as highlighted in recent standards and practices as of 2025.

References

  1. [1]
    A Systematic Review on Software Robustness Assessment
    May 3, 2021 · Online robustness testing of distributed embedded systems: An industrial approach. In Proceedings of the IEEE/ACM 39th International Conference ...
  2. [2]
    [PDF] User Guide for ACTS 1 Core Features
    1.4 Negative Testing. Negative testing, which is also referred to as robustness testing, is used to test whether a system handles invalid inputs correctly ...
  3. [3]
  4. [4]
    A Framework for Automated Testing of Software Robustness
    Robustness of a software system is defined as the degree to which the system can behave ordinarily and in conformance with the requirements in extraordinary ...
  5. [5]
    8.01 - Off Nominal Testing - NASA Software Engineering Handbook
    Oct 30, 2019 · Robustness testing is a testing method and approach that focuses on verifying and validating the system's ability to operate correctly or safely ...
  6. [6]
    [PDF] Robustness Testing of Software-Intensive Systems: Explanation and ...
    b) Testing the software product for its ability to isolate and minimize the effect of errors; that is, graceful degradation upon failure, request for operator ...
  7. [7]
    Robustness testing of Java server applications - IEEE Xplore
    Robustness testing of Java server applications | IEEE Journals & Magazine | IEEE Xplore ... error recovery code (i.e., exception handlers) of server applications ...
  8. [8]
    Testing of java web services for robustness - ACM Digital Library
    This paper presents a new compile-time analysis that enables a testing methodology for white-box coverage testing of error recovery ... Robustness Testing of Java ...
  9. [9]
    [PDF] Boundary Value Analysis
    Robustness testing can be seen as and extension of Boundary Value Analysis. The idea behind Robustness testing is to test for clean and dirty test cases. By ...
  10. [10]
    Software implemented fault tolerance - SRI International
    Nov 16, 1973 · In 1973, NASA asked SRI to use all it knew about fault-tolerant computing and build an experimental computer that could control the safety-critical functions ...Missing: 1980s | Show results with:1980s
  11. [11]
    [PDF] Software Fault Tolerance: A Tutorial - NASA Technical Reports Server
    Software fault tolerance is important because error-free software is hard to achieve due to system complexity and difficulty in assessing correctness.Missing: origins | Show results with:origins
  12. [12]
    (PDF) A Survey on Fault Injection Techniques - ResearchGate
    PDF | Fault tolerant circuits are currently required in several major application sectors. Besides and in complement to other possible approaches such.
  13. [13]
    Exploration of DevOps testing process capabilities: An ISM and ...
    We develop a holistic structure of testing capabilities to show their inter-relationship with each other and their priorities to select the best testing ...
  14. [14]
    [PDF] B. Boehm and V. Basili, "Software Defect Reduction Top 10 List ...
    40-50% of software effort is on avoidable rework. 80% of this rework comes from 20% of defects. Fixing after delivery is 100x more expensive. 10 techniques can ...
  15. [15]
    A generic architecture of ADAS sensor fault injection for virtual tests
    The presented research provides a generic architecture of ADAS sensor error injection for robustness testing of the System under Test (SuT). Effects of sensor ...
  16. [16]
    A robustness testing approach for SOAP Web services
    Jul 14, 2012 · This paper addresses the problem of robustness testing in Web services environments. The proposed approach is based on a set of robustness tests.3 Robustness Testing... · 3.3 Robustness Tests... · 3.4 Web Services...Missing: malformed | Show results with:malformed
  17. [17]
    [1412.6572] Explaining and Harnessing Adversarial Examples - arXiv
    Dec 20, 2014 · Using this approach to provide examples for adversarial training, we reduce the test set error of a maxout network on the MNIST dataset.
  18. [18]
    [PDF] Benchmarking Adversarial Robustness on Image Classification
    In this paper, we establish a comprehensive, rigorous, and coherent benchmark to evaluate adversarial robustness on image classification tasks.
  19. [19]
    [PDF] General Principles of Software Validation - Final Guidance for ... - FDA
    • Robustness – Software testing should demonstrate that a software product behaves correctly ... test the software that goes into an automated medical device.
  20. [20]
    Software Fault Injection: A Practical Perspective - IntechOpen
    Fault injection is a versatile tool for dependability assessment. When an injected fault causes a system failure, this can indicate insufficient fault tolerance.
  21. [21]
    Assessing Dependability with Software Fault Injection: A Survey
    This survey provides a comprehensive overview of the state of the art on Software Fault Injection to support researchers and practitioners.
  22. [22]
    Fault injection - Carnegie Mellon University
    This paper discusses the proper view of fault injection as a testing and verification tool, rather than a debugging tool.
  23. [23]
    [PDF] An Empirical Study of the Reliability of UNIX Utilities - Paradyn Project
    This paper proceeds as follows. Section 2 describes the tools that we built to test the utilities. These tools include the fuzz (random character) generator ...
  24. [24]
    What is Fuzzing (Fuzz Testing)? | Tools, Attacks & Security - Imperva
    Fuzzing is a quality assurance technique used to detect coding errors and security vulnerabilities in software, operating systems, or networks.
  25. [25]
    Fuzzing: Hack, Art, and Science - Communications of the ACM
    Feb 1, 2020 · Other Approaches. Blackbox random fuzzing, grammar-based fuzzing and whitebox fuzzing are the three main approaches to fuzzing in use today. ...
  26. [26]
    [PDF] An Empirical Study of OSS-Fuzz Bugs - squaresLab
    Feb 10, 2020 · We find that OSS-Fuzz is often effective at quickly finding bugs, and developers are often quick to patch them.
  27. [27]
    OSS-Fuzz - continuous fuzzing for open source software. - GitHub
    Google has found thousands of security vulnerabilities and stability bugs by deploying guided in-process fuzzing of Chrome components, and we now want to ...
  29. [29]
    Stress Testing of Distributed Multimedia Software Systems
    In this paper, we present several criteria for selecting test cases, and describe two methods for generating test cases which maximize system resource usage.
  32. [32]
    american fuzzy lop - [lcamtuf.coredump.cx]
    A revolutionary coverage-driven fuzzer credited with finding countless vulnerabilities in open-source code ...
  33. [33]
    google/AFL: american fuzzy lop - a security-oriented fuzzer - GitHub
    Mar 22, 2024 · The fuzzer is thoroughly tested to deliver out-of-the-box performance; another recent addition to AFL is the afl-analyze tool.
  34. [34]
    Using syzkaller, part 1: Fuzzing the Linux kernel - Collabora
    Mar 26, 2020 · A look at syzkaller, a valuable tool widely adopted by the kernel community to detect bugs in the kernel source code.
  35. [35]
    The Ballerina programming language
  36. [36]
    Ballerina reinvents cloud-native programming - Opensource.com
    Jul 17, 2018 · For example, Ballerina's taint-checking mechanism completely prevents SQL injection vulnerabilities by disallowing tainted data in the SQL query.
  37. [37]
    GitLab automates instrumented fuzzing via American Fuzzy Lop
    Aug 14, 2019 · I put together a baseline sample showing how fuzzing with AFL can be automated as part of a pipeline.
  38. [38]
    C/C++test: Check C++ and C Code Quality Tool - Parasoft
    C/C++test is a powerful software test automation solution for the safety, security, and reliability of C and C++ applications.
  39. [39]
    Essential Guide: Automated Testing for Embedded Systems - Parasoft
    This method uses the function stubs mechanism to inject fault conditions into the tested code, and can be used to write additional tests.
  40. [40]
    Eggplant Test - Automated Software Testing Tool - Keysight
  41. [41]
    Eggplant Downloads - Keysight
    Keysight Eggplant products offer industry-leading AI-driven software test automation, active application monitoring, and robust performance and load testing.
  42. [42]
    Micro Focus LoadRunner - Automation Consultants
    Micro Focus LoadRunner is a performance test tool that simulates multiple users to test IT system performance and response times.
  43. [43]
    C/C++test Reporting & Analytics - Improve Testing - Parasoft
    Enhance testing efficiency with test automation reporting tools in C/C++test. Get better code quality and insights with Parasoft's analytics dashboard.
  44. [44]
    [PDF] Parasoft C/C++test CT
    Parasoft C/C++test CT is certified by TÜV SÜD for functional safety according to the ISO 26262, IEC 62304, IEC 61508, and EN 50128 standards.
  45. [45]
    Robust Testing™: A Process For Efficient Fault Detections And ...
    88% code coverage using 32 tests; 12% code coverage through 3 supplemental tests for illegal inputs; 100% critical fault coverage; 87% overall fault coverage.
  46. [46]
    [PDF] A Systematic Review of Software Robustness
    Mar 22, 2012 · A Systematic Review of Software Robustness. Ali Shahrokni & Robert Feldt. Department of Computer Science & Engineering, Chalmers University of Technology.