
Fuzzing

Fuzzing, also known as fuzz testing, is an automated software testing technique that involves providing invalid, unexpected, or random data as inputs to a computer program in order to identify defects such as crashes, memory leaks, assertion failures, or security vulnerabilities. This method systematically stresses the software under test by generating malformed inputs, often at high speed, to uncover implementation bugs that might otherwise go undetected through traditional testing approaches. The origins of fuzzing trace back to 1988, when Professor Barton P. Miller and his students at the University of Wisconsin-Madison developed the technique during a research project on the reliability of UNIX utilities. Inspired by a thunderstorm whose electrical noise corrupted the inputs reaching their programs over a dial-up line, they created a tool to generate random inputs—coining the term "fuzz" after the random noise—and found that 25-33% of common utilities crashed, hung, or otherwise failed under such conditions. This empirical study, published in 1990, demonstrated fuzzing's effectiveness in revealing reliability issues and laid the foundation for its evolution into a cornerstone of software assurance practices. Fuzzing encompasses several variants based on the tester's knowledge of the software's internals: black-box fuzzing, which operates without access to source code and relies on externally observable behavior; white-box fuzzing, which uses full code analysis to guide input generation; and grey-box fuzzing, a hybrid that incorporates partial feedback, such as coverage information, to improve efficiency. Additionally, fuzzers can be categorized by input generation method as mutation-based (altering valid inputs) or generation-based (creating inputs from scratch based on specifications). These approaches are particularly valuable for discovering security flaws like buffer overflows and injection vulnerabilities.
In recent years, advancements including coverage-guided fuzzers and integration with continuous integration pipelines have enhanced its scalability and precision, with ongoing research exploring AI-assisted techniques to address complex software ecosystems.

Fundamentals

Definition and Purpose

Fuzzing, also known as fuzz testing, is an automated technique that provides invalid, unexpected, or random data as inputs to a program in order to discover defects such as crashes, failed assertions, or memory errors. The approach was pioneered in the late 1980s as a simple method for feeding random inputs to applications to evaluate their reliability. The primary purposes of fuzzing are to identify bugs and expose security vulnerabilities, such as buffer overflows, use-after-free errors, and denial-of-service conditions, thereby enhancing software robustness without necessitating detailed knowledge of the program's internal structure or source code. By systematically perturbing inputs, fuzzing complements traditional testing methods and has proven effective in uncovering issues that evade specification-based verification. In contrast to other testing methodologies, fuzzing can operate as a black-box technique, observing only the external input-output behavior of the program without access to its internals, unlike white-box or model-driven approaches that rely on program semantics or formal specifications. The workflow entails generating diverse test inputs, injecting them into the target application, monitoring for anomalies like crashes or hangs, and logging failures for subsequent analysis.
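The generate-inject-monitor-log workflow can be sketched as a minimal black-box fuzz loop. The target below is a hypothetical stand-in for a real program under test, with a deliberately planted bug so the loop has something to find:

```python
import random

def toy_parser(data: bytes) -> int:
    """Hypothetical system under test with a planted bug: dividing by the
    difference of the first two bytes crashes whenever they are equal."""
    if len(data) >= 2:
        return 1 // (data[0] - data[1])  # ZeroDivisionError when data[0] == data[1]
    return 0

def fuzz(target, trials=10_000, max_len=8, seed=0):
    """Generate pseudo-random inputs, inject each into the target, and log
    any input that provokes an anomaly (here, an uncaught exception)."""
    rng = random.Random(seed)
    failures = []
    for _ in range(trials):
        data = bytes(rng.randrange(256) for _ in range(rng.randint(0, max_len)))
        try:
            target(data)
        except Exception as exc:
            failures.append((data, type(exc).__name__))  # record for later triage
    return failures
```

Even this purely random strategy finds the planted defect within a few thousand trials; real fuzzers layer coverage feedback and smarter input generation on top of the same basic loop.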

Core Principles

Fuzzing operates through three fundamental components that form its operational backbone. The input generator creates test cases, often by mutating valid inputs or generating novel ones from models of expected formats, to probe the program's behavior under unexpected conditions. The execution environment provides a controlled setting to run the target program with these inputs, typically sandboxed to manage resource usage and isolate potential crashes or hangs. The monitoring component then observes outputs to detect anomalies, such as segmentation faults, assertion failures, or sanitizer-detected issues like memory errors, flagging them as potential defects. At its core, fuzzing explores the vast input space of a program by systematically generating diverse inputs to uncover hidden flaws. Random sampling forms a primary strategy, in which inputs are produced pseudo-randomly to broadly cover possible values and reveal implementation bugs that deterministic testing might miss. Boundary value testing complements this by focusing on edge cases, such as maximum or minimum values for numeric types, which are prone to overflows or validation errors. Feedback loops enable iterative refinement, where observations from prior executions—such as execution traces or coverage metrics—guide the generation of subsequent inputs to prioritize unexplored regions and enhance efficiency. Success in fuzzing is evaluated using metrics that quantify exploration depth and defect detection quality. Coverage rates, for instance, measure the proportion of the program's structure exercised by test cases, with branch coverage calculated as the percentage of unique branches executed relative to total branches:

\text{Branch Coverage Percentage} = \left( \frac{\text{Unique Branches Executed}}{\text{Total Branches}} \right) \times 100

This metric guides input generation toward deeper code penetration. Crash uniqueness assesses the diversity of failures found, counting distinct crashes (e.g., via stack traces or hashes) to avoid redundant reports and to indicate broader fault exposure.
Fault revelation efficiency evaluates the rate of novel faults discovered per unit of fuzzing time or effort, providing a practical gauge of the technique's value in real-world testing scenarios. Instrumentation plays a pivotal role in enabling these principles by embedding lightweight probes into the target program during compilation or execution. These probes collect runtime data, such as branch transitions or memory accesses, to inform feedback loops without significantly altering the program's observable semantics or performance. Techniques like dynamic binary instrumentation allow this even for unmodified binaries, ensuring applicability across diverse software environments.
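Both metrics above are simple to compute. This sketch (all names invented for illustration) derives the branch-coverage percentage from the formula given, and buckets crashes for uniqueness by hashing the top frames of their stack traces:

```python
import hashlib

def branch_coverage_pct(executed_branches, total_branches):
    """Branch coverage = (unique branches executed / total branches) * 100."""
    return 100.0 * len(set(executed_branches)) / total_branches

def crash_bucket(stack_frames, top_n=3):
    """Group crashes by a short hash of the top stack frames, so the same
    root cause reported from many inputs lands in a single bucket."""
    key = "|".join(stack_frames[:top_n]).encode()
    return hashlib.sha1(key).hexdigest()[:12]
```

Bucketing by only the top frames is a deliberate trade-off: deeper frames often vary between runs of the same bug, so including them would fragment one defect into many spurious "unique" crashes.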

Historical Development

Origins and Early Experiments

The practice of exercising programs with random inputs emerged in the 1950s during the punched-card era, when programmers commonly fed decks of cards containing random or garbage data into early computer programs to probe for errors, simulating real-world input variability without systematic methods. This practice laid informal groundwork for automated input-based testing, and in the following decades rudimentary automated checks were incorporated into early operating systems to validate system stability against unexpected conditions. The modern technique of fuzzing originated in 1988 as a graduate class project in the Advanced Operating Systems course (CS736) taught by Barton P. Miller at the University of Wisconsin-Madison. Inspired by a thunderstorm that introduced line noise into Miller's dial-up connection, causing random corruption of inputs and subsequent crashes in UNIX utilities, the project aimed to systematically evaluate software reliability using automated random inputs. Students developed a tool called "fuzz" to generate random ASCII streams, including printable and non-printable characters, NULL bytes, and varying lengths up to 25,000 bytes, feeding them into 88 standard UNIX utilities across seven different UNIX implementations, such as 4.3BSD, SunOS 3.2, and AIX 1.1. For interactive programs, a complementary tool named "ptyjig" supplied random keyboard input via pseudo-terminals. The experiments revealed significant vulnerabilities, with 25-33% of the utilities crashing or hanging across the tested systems—for instance, 29% on a VAX running 4.3BSD and 25% on a Sun workstation running SunOS. Common failures included segmentation violations, core dumps, and infinite loops, often triggered by poor input validation in areas such as string handling and parsing; utilities such as "ld" produced exploitable faults.
These results, published in 1990, demonstrated fuzzing's potential to uncover bugs overlooked by traditional testing, prompting UNIX vendors to integrate similar tools into their testing processes. Despite its successes, the early fuzzing approach had notable limitations, including the purely random nature of input generation, which lacked structure or guidance toward edge cases and could therefore miss deeper program paths. Crash analysis was also manual and challenging, relying on core dumps and hand inspection without access to source code for many utilities, limiting reproducibility and root-cause diagnosis.

Key Milestones and Modern Advancements

In the late 1990s and early 2000s, fuzzing evolved from ad-hoc random testing to more structured frameworks targeted at specific domains. The PROTOS project, initiated in 1999 by researchers at the University of Oulu, introduced a systematic approach to fuzzing by generating test cases based on protocol specifications to uncover implementation flaws in network software. This framework emphasized heuristic-based mutation of protocol fields, leading to the discovery of over 50 vulnerabilities in widely used protocols like SIP and SNMP by 2003. Building on this, Microsoft's SAGE tool, described in the 2008 paper "Automated Whitebox Fuzz Testing," pioneered whitebox fuzzing by combining symbolic execution with random input generation to systematically explore program paths in binary applications. SAGE significantly enhanced coverage, reportedly finding dozens of bugs in Windows components that blackbox methods missed. The 2010s marked a surge in coverage-guided fuzzing, driven by open-source tools that integrated genetic algorithms and compiler instrumentation. American Fuzzy Lop (AFL), developed by Michał Zalewski and publicly released in 2013, employed novel compile-time instrumentation to track code coverage and evolve inputs via genetic algorithms, achieving breakthroughs in efficiency for binary fuzzing. AFL played a pivotal role in exposing follow-up vulnerabilities related to the Shellshock bug (CVE-2014-6271 and CVE-2014-6277) in GNU Bash during 2014, demonstrating fuzzing's ability to uncover command injection flaws in shell interpreters. Concurrently, LLVM's LibFuzzer, introduced in 2015, provided an in-process fuzzing engine tightly integrated with AddressSanitizer and coverage instrumentation, enabling seamless fuzzing of C/C++ libraries with minimal overhead. The tool's adoption accelerated bug detection across many open-source projects, where it complemented sanitizers to identify memory errors. Google's OSS-Fuzz, launched in 2016, represented a shift toward continuous, large-scale fuzzing for open-source software, integrating engines like AFL and LibFuzzer into automated pipelines across thousands of cores.
As of May 2025, OSS-Fuzz has helped identify and fix over 13,000 vulnerabilities and 50,000 bugs across 1,000 projects, underscoring fuzzing's role in proactive maintenance. In parallel, syzkaller, developed at Google starting in 2015, adapted coverage-guided fuzzing for operating system kernels by generating syscall sequences informed by kernel coverage feedback, leading to thousands of bug reports. For instance, syzkaller exposed race conditions and memory issues in subsystems like networking and filesystems, with ongoing enhancements improving its state-machine modeling for complex kernel interactions. Modern advancements from 2017 onward have focused on scalability and hybridization. AFL++, a community-maintained fork of AFL, incorporated optimizations such as improved scheduling and advanced mutation strategies (e.g., dictionary-based and custom mutator modes), boosting performance by up to 50% on real-world benchmarks while maintaining compatibility. This evolution enabled deeper exploration in environments like web browsers and embedded systems. Google's ClusterFuzz, first deployed in 2011 and scaled extensively through the 2010s, exemplified cloud-based fuzzing by orchestrating distributed execution across 25,000+ cores, automating crash triage, and integrating with OSS-Fuzz to handle high-volume campaigns. Fuzzing's impact was also evident in high-profile detections, such as Codenomicon's 2014 fuzzing-based discovery of the Heartbleed vulnerability (CVE-2014-0160) in OpenSSL, which exposed a buffer over-read affecting millions of servers. Recent trends up to 2025 include hybrid techniques blending fuzzing with machine learning for seed prioritization, as seen in tools extending syzkaller, and AI enhancements in OSS-Fuzz, which in 2024 discovered 26 new vulnerabilities in established projects, including a long-standing flaw in OpenSSL, further amplifying detection rates in parser and protocol domains.

Fuzzing Techniques

Mutation-Based Fuzzing

Mutation-based fuzzing generates test inputs by applying random or heuristic modifications to a set of valid inputs, such as existing files, packets, or messages, without requiring prior knowledge of the input format or protocol. The process begins by selecting a seed from a corpus, optionally trimming it to minimize size while preserving behavior, then applying a series of mutations to produce variants for execution against the target program. Common mutation operations include bit flips (e.g., inverting 1, 2, or 4 bits at random positions), arithmetic modifications (e.g., adding or subtracting small integers to 8-, 16-, or 32-bit values), byte insertions or deletions, overwriting with predefined "interesting" values (e.g., 0, 1, or boundary cases like 0xFF), and dictionary-based swaps using domain-specific tokens. If a mutated input triggers new coverage or crashes, it is added to the corpus for further mutation; otherwise, the process cycles to the next seed. This approach offers low computational overhead due to its reliance on simple, stateless transformations and the reuse of valid seeds, which increases the likelihood of passing initial parsing stages compared to purely random generation. It is particularly effective for binary or unstructured formats where structural models are unavailable or costly to develop, enabling rapid exploration of edge cases with minimal setup. For instance, dictionary-based mutations enhance coverage by incorporating protocol-specific terms, such as HTTP headers, to target relevant input regions without exhaustive random trials. Key algorithms optimize seed selection and mutation application to balance exploration and exploitation. The power-schedule mechanism introduced with AFL dynamically assigns "energy" (i.e., the number of mutations attempted per seed) based on factors like input length, path depth, and historical coverage contributions, favoring shorter or more promising seeds to allocate computational resources efficiently—typically executing 1 to 10 times more mutations on high-value paths.
In havoc mode, a core mutation strategy, random perturbations are stacked sequentially (e.g., 2 to 4096 operations per input, selected via a batch exponent t where the number of tweaks is 2^t), including bit flips, arithmetic changes, block deletions or duplications, and dictionary insertions, with a low probability (around 6%) of invoking custom extensions to avoid over-mutation. The mutation rate is calibrated inversely with input length to maintain diversity; for an input of length L, the probability of altering a specific byte approximates 1/L, ensuring proportional changes across varying sizes. In practice, mutation-based fuzzing has proven effective for testing file parsers with minimal structural knowledge. A study on image parsers using tools like zzuf applied bit-level mutations to files (e.g., varying chunk counts from 5 to 9), generating 200,000 variants per seed, which exposed handling flaws but achieved only ~24% of the coverage obtained by generation-based methods, owing to limited deep-path exploration without format awareness. Similarly, a 2024 study fuzzing XML parsers such as Apache Xerces and Expat found that byte-level mutation strategies detected more crashes than tree-level strategies, particularly in Xerces (up to 57 crashes with protocol-conformant seeds vs. 38 with public seeds), though no security vulnerabilities beyond illegal instructions were found.
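A havoc-style stacked mutator can be sketched in a few lines. The operation mix and constants here are illustrative, not AFL's exact implementation:

```python
import random

INTERESTING = [0x00, 0x01, 0x7F, 0x80, 0xFF]  # boundary bytes favored by fuzzers

def havoc(data, rng, max_stack_exp=6):
    """Apply a stacked run of 2**t random tweaks (t chosen per call), mixing
    bit flips, arithmetic changes, interesting-value overwrites, and deletions."""
    buf = bytearray(data)
    for _ in range(2 ** rng.randint(1, max_stack_exp)):
        if not buf:                       # never index an empty buffer; grow it
            buf.append(rng.randrange(256))
            continue
        pos = rng.randrange(len(buf))
        op = rng.randrange(4)
        if op == 0:
            buf[pos] ^= 1 << rng.randrange(8)                   # bit flip
        elif op == 1:
            buf[pos] = (buf[pos] + rng.randint(-16, 16)) % 256  # arithmetic tweak
        elif op == 2:
            buf[pos] = rng.choice(INTERESTING)                  # interesting value
        else:
            del buf[pos]                                        # byte deletion
    return bytes(buf)
```

Each call here perturbs a seed with between 2 and 64 stacked operations; AFL's real havoc stage ranges up to 4096 and adds block-level splices and dictionary insertions.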

Generation-Based Fuzzing

Generation-based fuzzing employs formal models such as context-free grammars, schemas, or finite state machines (FSMs) to synthesize test inputs that adhere to specified input formats or protocols while incorporating deliberate faults. This method contrasts with mutation-based approaches by constructing inputs from scratch according to the model, ensuring syntactic validity so inputs reach deeper program states without early rejection by input parsers. In protocol fuzzing, FSMs model the sequence of states and transitions, allowing the creation of input sequences that simulate handshakes or sessions with injected anomalies. Key techniques include random grammar mutations, where production rules are probabilistically altered to introduce variations in structure, and constraint solving to produce semantically valid yet malformed data. For example, constraint solvers can enforce field dependencies in a schema while randomizing values to violate expected behaviors, such as generating HTTP requests with invalid headers that still parse correctly. In practice, parsers generated from grammar tooling for HTTP enable the derivation of test cases by expanding non-terminals and mutating terminals, focusing faults on semantic layers. The primary benefits of generation-based fuzzing lie in its ability to explore complex state spaces through valid inputs, enabling tests of intricate logic in parsers and handlers that random or mutated inputs might bypass. However, this comes at the cost of higher computational overhead, as input generation involves recursive expansion of the model for each test case. The number of possible derivations in a grammar without recursion is determined by the product of the number of rule choices for each non-terminal, leading to rapid growth in input variety but increased generation cost.
In network protocol applications, generation-based methods facilitate stateful fuzzing by producing message sequences that respect transition dependencies, as seen in frameworks like Boofuzz, which use FSM-driven primitives to craft multi-packet interactions for protocols such as FTP or HTTP. This approach has proven effective for uncovering vulnerabilities in state-dependent implementations, where invalid sequences reveal flaws in session management.
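Grammar-driven generation can be illustrated with a toy context-free grammar for an HTTP-like request. The grammar, its rules, and the <garbage> fault-injecting alternative are all invented for illustration:

```python
import random

# Non-terminals map to lists of alternative productions (lists of symbols).
GRAMMAR = {
    "<request>": [["<method>", " ", "<path>", " HTTP/1.1\r\n\r\n"]],
    "<method>": [["GET"], ["POST"], ["<garbage>"]],       # fault-injecting alternative
    "<path>": [["/"], ["/index.html"], ["/", "<path>"]],  # recursive rule
    "<garbage>": [["\x00\x00"], ["A" * 64]],
}

def generate(symbol, rng, depth=0, max_depth=8):
    """Expand a symbol by randomly choosing productions; recursion is bounded
    by falling back to the first (non-recursive) alternative at max_depth."""
    if symbol not in GRAMMAR:
        return symbol                                     # terminal string
    alts = GRAMMAR[symbol]
    rule = alts[0] if depth >= max_depth else rng.choice(alts)
    return "".join(generate(s, rng, depth + 1, max_depth) for s in rule)
```

Every generated request is well-formed at the top level, so it survives the first parsing stage, while the mutated method tokens probe deeper handling logic, which is exactly the trade-off described above.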

Coverage-Guided and Hybrid Fuzzing

Coverage-guided fuzzing enhances traditional mutation-based approaches by incorporating runtime feedback to direct the generation of test inputs toward unexplored code regions. This technique involves instrumenting the target program to monitor execution coverage, typically at the level of basic blocks or control-flow edges, using lightweight mechanisms such as compile-time instrumentation to record reached transitions. Inputs that trigger new coverage are assigned higher priority for mutation, enabling efficient exploration of the program's state space; for instance, American Fuzzy Lop (AFL) employs a shared-memory bitmap to track edge coverage across executions, applying "power schedules" that allocate more mutations to promising seeds. This feedback loop contrasts with undirected fuzzing by systematically increasing code coverage, often achieving deeper penetration into complex binaries. Hybrid fuzzing builds on coverage guidance by integrating complementary techniques, such as generation-based methods or symbolic execution, to overcome limitations in path exploration and input validity. In these approaches, coverage feedback is combined with adaptive strategies; for example, a fitness score can guide prioritization via the formula

\text{edge\_score} = \frac{\text{new\_edges\_discovered}}{\text{total\_mutations}}

which quantifies the efficiency of inputs in revealing novel edges. Grey-box models further hybridize by selectively invoking symbolic execution to resolve hard-to-reach branches when coverage stalls, as in Driller, which augments fuzzing with concolic execution to generate inputs that bypass concrete execution dead-ends without full symbolic overhead. More recent advancements incorporate machine learning, such as NEUZZ, which trains neural networks to approximate program behavior and enable gradient-based optimization for fuzzing guidance, smoothing discrete branch decisions into continuous landscapes for better seed selection. As of 2025, further advancements include LLM-guided hybrid fuzzing, which uses large language models for semantic-aware input generation to improve exploration in stateful systems.
These methods have demonstrated significant effectiveness in detecting vulnerabilities in large-scale, complex software, including web browsers, where traditional fuzzing struggles with deep state interactions. For example, coverage-guided hybrid techniques have uncovered numerous security flaws in browsers by achieving higher coverage and faster crash reproduction than black-box alternatives, contributing to real-world vulnerability disclosures in production environments. Quantitative evaluations show improvements in bug-finding rates: the hybrid fuzzer Driller identified crashes in 13% more binaries (77 vs. 68) than its pure coverage-guided fuzzing component alone on the DARPA Cyber Grand Challenge benchmarks.
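The feedback loop and the edge_score metric can be sketched end to end. Here toy_target is a hypothetical stand-in for an instrumented program: it returns the set of control-flow edges an input exercised, and its nested branches are constructed so that a single bit flip of the seed can never reach the deepest edge directly. Only by retaining intermediate inputs, which is exactly what coverage feedback provides, does the loop get there:

```python
import random

def toy_target(data):
    """Hypothetical instrumented target: returns the edges this input executed.
    Each nested branch opens only when the next byte's high bit is set."""
    edges = {"entry"}
    if len(data) > 0 and data[0] >= 0x80:
        edges.add("b0")
        if len(data) > 1 and data[1] >= 0x80:
            edges.add("b0->b1")
            if len(data) > 2 and data[2] >= 0x80:
                edges.add("deep")
    return edges

def coverage_guided_fuzz(target, seeds, iterations=5000, seed=0):
    """Greybox loop: mutate corpus entries; keep any input reaching new edges;
    report edge_score = new_edges_discovered / total_mutations."""
    rng = random.Random(seed)
    corpus = list(seeds)
    seen = set()
    for s in corpus:
        seen |= target(s)                        # baseline coverage from seeds
    new_edges = mutations = 0
    for _ in range(iterations):
        buf = bytearray(rng.choice(corpus))
        if buf:
            i = rng.randrange(len(buf))
            buf[i] ^= 1 << rng.randrange(8)      # single bit-flip mutation
        child = bytes(buf)
        mutations += 1
        fresh = target(child) - seen
        if fresh:                                # new coverage: promote child
            new_edges += len(fresh)
            seen |= target(child)
            corpus.append(child)
    return corpus, seen, new_edges / mutations   # corpus, coverage, edge_score
```

Starting from the seed b"\x00\x00\x00", the loop discovers "b0", then "b0->b1", then "deep" in successive generations, each stepping stone kept in the corpus because it added coverage.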

Applications

Bug Detection and Vulnerability Exposure

Fuzzing uncovers software defects by systematically supplying invalid, malformed, or random inputs to program interfaces, with the goal of provoking exceptions, corruptions, or logic errors that reveal underlying flaws. This dynamic approach monitors program behavior for indicators of failure, such as segmentation faults or assertion violations, which signal potential defects in code handling edge cases. By exercising rarely encountered paths, fuzzing exposes issues that deterministic testing often misses, including those arising from unexpected data flows or timing conditions. Among the vulnerabilities commonly detected, buffer overflows stand out, where excessive input data overwrites adjacent memory regions, potentially allowing arbitrary code execution. Integer overflows, which occur when arithmetic operations exceed the values representable in a fixed-width integer type, can lead to incorrect computations and subsequent exploits. Race conditions, involving timing-dependent interactions in multithreaded environments, manifest as inconsistent states or crashes under concurrent access. In C/C++ programs, fuzzing frequently identifies null pointer dereferences by generating inputs that nullify pointers before dereference operations, triggering crashes that pinpoint the error location. Studies indicate that fuzzing outperforms manual testing by executing programs orders of magnitude more frequently, thereby exploring deeper into state spaces and uncovering unique crashes that human-led efforts overlook. For instance, empirical evaluations show fuzzers detecting vulnerabilities in complex systems where traditional methods achieve limited coverage. Integration with memory sanitizers like AddressSanitizer amplifies this impact by instrumenting code to intercept and report precise error details, such as the faulting address and offset for a buffer overflow, enabling faster triage and patching. To sustain effectiveness over time, corpus-based fuzzing employs seed input collections derived from prior tests or real-world usage, replaying them to verify regressions and mutating them for new discoveries.
This strategy ensures that code modifications do not reintroduce fixed bugs while expanding coverage. Continuous fuzzing embedded in CI/CD pipelines further automates this process, running fuzzer jobs on every commit or pull request to catch defects early in the development cycle, thereby reducing the cost of remediation.
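A corpus replay step for regression checking is straightforward to sketch; the target here is any callable standing in for a harness around the system under test (hypothetical names throughout):

```python
def replay_corpus(target, corpus):
    """Re-run saved seed/crash inputs against the current build and return the
    inputs that still (or again) trigger a failure, for CI gating."""
    regressions = []
    for data in corpus:
        try:
            target(data)
        except Exception as exc:
            regressions.append((data, type(exc).__name__))
    return regressions
```

In a CI job, a non-empty return value fails the build, catching reintroduced bugs before merge.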

Validation of Static Analysis

Fuzzing serves as a dynamic complement to static analysis tools, which often generate warnings about potential issues such as memory leaks or buffer overflows but suffer from high false positive rates. In this validation process, outputs from static analyzers such as Infer are used to guide targeted fuzzing campaigns, where fuzzers generate inputs specifically aimed at reproducing the flagged code paths or functions. This involves extracting relevant code slices or hotspots from the warnings—such as tainted data flows in taint analysis—and creating minimal, compilable binaries for fuzzing, allowing the fuzzer to exercise the suspected vulnerable locations efficiently. The primary benefit of this approach is the reduction of false positives through empirical verification: if a warning does not lead to a crash or sanitizer report under extensive fuzzing, it is likely spurious, thereby alleviating the manual burden on developers. For instance, in scenarios involving taint analysis warnings for potential information leaks, fuzzing can confirm whether tainted inputs actually propagate to sensitive sinks, as demonstrated in evaluations on open-source libraries where non-crashing alerts were pruned. This method not only confirms true positives but also provides concrete evidence for dismissal, improving overall developer productivity in large-scale codebases. Integration often employs feedback-directed fuzzing techniques, where static hotspots inform the fuzzer's power schedule or seed selection to prioritize exploration toward warning locations. Tools like FuzzSlice automate this by generating type-aware inputs for function-level slices, while advanced frameworks such as Lyso use multi-step directed greybox fuzzing, correlating alarms across program flows (via control and data flow graphs) to break validation into sequential goals.
A key metric for effectiveness is the false positive reduction rate; for example, FuzzSlice identified 62% of developer-confirmed false positives in open-source warnings by failing to trigger crashes on them, and some approaches have reported up to 100% false positive elimination in benchmark tests. Case studies in large codebases highlight the practical impact of applying targeted fuzzing to validate static analysis reports in open-source projects, where static tools flagged numerous potential issues but fuzzing confirmed only a subset, enabling focused fixes. Similarly, directed fuzzing guided by static analysis on libraries such as Libsndfile has uncovered and verified previously unknown vulnerabilities from alarm correlations, demonstrating suitability for enterprise-scale validation without exhaustive manual review. These integrations underscore fuzzing's role in bridging static warnings to actionable insights, particularly for legacy or complex systems.

Domain-Specific Implementations

Fuzzing has been extensively adapted for web browsers, where it targets complex components such as DOM parsers and JavaScript engines to uncover vulnerabilities that could lead to code execution or data leaks. Google's ClusterFuzz infrastructure, which supports fuzzing of Chrome, operates on a scale of 25,000 cores and had identified over 27,000 bugs in Google products, including Chrome, as of February 2023. This large-scale deployment enables continuous testing of browser rendering pipelines and script interpreters, leveraging coverage-guided techniques to prioritize inputs that exercise rarely reached code paths in these high-risk areas. In kernel and operating system fuzzing, tools like syzkaller focus on syscall interfaces to systematically probe kernel behaviors, including those in device drivers and file systems, which are prone to memory corruption and race conditions. Syzkaller employs grammar-based input generation and collects kernel coverage via mechanisms like KCOV to discover deep bugs that traditional testing overlooks. As of 2024, syzkaller had uncovered nearly 4,000 vulnerabilities in the Linux kernel alone, many of which affect drivers for storage and networking. These findings have led to critical patches, demonstrating the tool's effectiveness in simulating real-world OS interactions without requiring full system emulation. Fuzzing extends to other domains, such as network protocols, where stateful implementations like TLS demand modeling of handshake sequences and message flows to detect flaws in cryptographic handling or state transitions. Protocol state fuzzing, for instance, has revealed multiple previously unknown vulnerabilities in major TLS libraries, including denial-of-service issues, by systematically exploring valid and malformed protocol states. In embedded systems, adaptations for resource-constrained and stateful environments often involve firmware emulation or semi-hosted execution to maintain persistent states across fuzzing iterations, addressing challenges like limited memory and non-deterministic hardware interactions.
These tailored approaches have improved coverage in IoT devices and microcontrollers, identifying buffer overflows and logic errors that could compromise system integrity. Scaling fuzzing for domain-specific targets, especially resource-intensive ones like browsers and kernels, relies on distributed infrastructures that spread workloads across clusters to achieve high throughput. However, challenges arise in efficient task scheduling, where imbalances can lead to underutilized resources or redundant effort, as well as in managing shared state for stateful targets. Solutions like the dynamic centralized scheduler in frameworks such as UniFuzz optimize task distribution and seed-sharing strategies across nodes, reducing overhead and enhancing discovery rates in large-scale deployments.

Tools and Infrastructure

American Fuzzy Lop (AFL) and its enhanced fork AFL++ are prominent coverage-guided fuzzing frameworks that employ mutation-based techniques to generate inputs, leveraging compile-time instrumentation for efficient branch coverage feedback. AFL uses a fork-server model to minimize process-creation overhead, enabling rapid execution of test cases, while AFL++ extends this with optimizations such as persistent mode for in-memory fuzzing without repeated initialization, custom mutator APIs for domain-specific mutations, and support for various instrumentation backends including LLVM and QEMU. These frameworks are open-source and widely adopted for fuzzing user-space applications, particularly C and C++ binaries. LibFuzzer serves as an in-process, coverage-guided evolutionary fuzzer tightly integrated with the LLVM compiler infrastructure, linking directly with the target library to feed mutated inputs without spawning external processes. It supports AddressSanitizer and other sanitizers for detecting memory errors during fuzzing sessions, and is commonly enabled by adding compiler flags such as -fsanitize=fuzzer to instrument the build. LibFuzzer excels at fuzzing libraries and APIs, prioritizing speed through in-process execution and corpus-based mutation strategies. Other notable frameworks include Honggfuzz, which provides hardware-assisted coverage feedback using Intel PT or IBS for precise edge detection, alongside software-based options, and supports multi-threaded fuzzing to utilize all CPU cores efficiently. Syzkaller is a specialized, unsupervised coverage-guided fuzzer designed for operating system kernels, generating syscall programs based on declarative descriptions and integrating with kernel coverage tools like KCOV to explore deep code paths.
Peach Fuzzer, in its original open-source community edition (no longer actively maintained since 2019), focuses on protocol-oriented fuzzing through generation-based and mutation-based approaches, requiring users to define data models via Peach Pit XML files for structured input creation and stateful testing of network protocols; its technology forms the basis for the actively developed Protocol Fuzzer Community Edition.
| Framework | Type | Primary Languages/Targets | License |
| --- | --- | --- | --- |
| AFL++ | Coverage-guided mutation | C/C++, binaries (user-space) | Apache 2.0 |
| LibFuzzer | Coverage-guided evolutionary (in-process) | C/C++, libraries/APIs | Apache 2.0 |
| Honggfuzz | Coverage-guided (HW/SW feedback) | C/C++, binaries | Apache 2.0 |
| Syzkaller | Coverage-guided (kernel-specific) | syscalls (Linux, others) | Apache 2.0 |
| Peach Fuzzer | Generation/mutation (protocol-oriented) | Protocols, networks (multi-language) | |
OSS-Fuzz, Google's continuous fuzzing service, integrates frameworks like AFL++ and LibFuzzer to test over 1,000 open-source projects as of 2025, having identified and facilitated fixes for more than 13,000 vulnerabilities and 50,000 bugs across diverse software ecosystems.

Supporting Toolchain Elements

Automated input minimization is a critical post-fuzzing process that reduces the size of failure-inducing inputs to facilitate debugging and analysis. The ddmin algorithm, a foundational delta-debugging technique, systematically partitions the input into subsets and tests them to isolate the minimal set of changes that still trigger the failure, achieving a 1-minimal configuration in which removing any element eliminates the bug. This approach has been applied in fuzzing scenarios, where it dramatically shrinks large random inputs; for instance, a 10^6-character fuzz input for the CRTPLOT utility was reduced to a single failure-inducing character in just 24 tests. In practice, such minimization often compresses inputs to 1-10% of their original size or less, enabling developers to focus on relevant portions without extraneous data. Bug triage automation streamlines the classification and prioritization of crashes generated during fuzzing campaigns, which can number in the thousands. Techniques for clustering crashes rely on analyzing stack traces or execution similarities to group related failures by cause, reducing manual review overhead. One dual-phase approach, for example, first minimizes proof-of-concept inputs via coverage-reduction fuzzing to prune traces, then applies graph-similarity clustering (using the Weisfeiler-Lehman kernel) to group crashes, achieving near-100% accuracy in grouping 39 bugs across 10 programs. Complementary tools like AddressSanitizer enhance triage by instrumenting code to detect memory errors such as use-after-free or buffer overflows at runtime, providing detailed reports that pinpoint violation sites with low overhead (about 73% slowdown). In fuzzing workflows, AddressSanitizer has uncovered over 300 previously unknown bugs in Chromium, including 210 heap-use-after-free instances, by enabling rapid test execution and precise error localization.
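A compact version of ddmin, close to the textbook formulation, shrinks an input while a caller-supplied predicate keeps reporting the failure (the predicates and sample inputs below are illustrative):

```python
def ddmin(data, test):
    """Delta debugging: repeatedly try removing chunks of `data`, keeping any
    reduction for which test() still signals the failure. The result is
    1-minimal: removing any single element makes the failure disappear."""
    assert test(data), "input must fail to begin with"
    n = 2                                    # current granularity
    while len(data) >= 2:
        chunk = len(data) // n
        reduced = False
        start = 0
        while start < len(data):
            complement = data[:start] + data[start + chunk:]
            if test(complement):             # failure persists without this chunk
                data = complement
                n = max(n - 1, 2)
                reduced = True
                break
            start += chunk
        if not reduced:
            if n >= len(data):               # single-element granularity exhausted
                break
            n = min(n * 2, len(data))        # refine granularity and retry
    return data
```

For a predicate that checks for the byte sequence <BUG>, ddmin strips every unrelated byte and returns exactly that sequence, mirroring the CRTPLOT example above.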
Corpus management optimizes the collection and maintenance of seed inputs for fuzzing, ensuring efficient exploration without redundancy. Seeding involves curating initial inputs that cover diverse code paths, while deduplication algorithms remove similar corpus entries based on coverage or hash signatures to prevent wasteful mutations. Integration with continuous integration (CI) systems automates corpus synchronization across builds, allowing incremental fuzzing sessions to reuse and expand prior discoveries, as seen in frameworks that distill corpora to high-quality subsets to improve bug detection rates. This process mitigates redundancy, with studies showing that distilled corpora can reduce storage needs while boosting the effectiveness of coverage-guided fuzzing by focusing on unique, impactful seeds.

Reporting pipelines automate the transformation of fuzzing findings into actionable outputs, such as vulnerability assessments or security advisories. These workflows aggregate crash data, apply deduplication and reproducibility checks to validate bugs, and generate reports that include minimized inputs and stack traces for developer follow-up. Advanced pipelines extend to patch suggestion and CVE assignment, using automated analysis to correlate crashes with known vulnerabilities and draft Common Vulnerabilities and Exposures (CVE) entries for confirmed issues. In large-scale setups like OSS-Fuzz, such pipelines have facilitated the reporting of thousands of bugs, streamlining the path from detection to remediation by integrating with issue trackers and vulnerability databases.
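Coverage-based corpus distillation is commonly framed as a greedy set-cover problem: repeatedly keep the seed that contributes the most not-yet-covered edges. A minimal Python sketch, assuming per-seed edge coverage has already been collected by replaying each seed under instrumentation (the `coverage` mapping and seed names here are hypothetical):

```python
def distill_corpus(coverage):
    """Greedy set-cover distillation: return the smallest greedy set of
    seeds that together preserve all observed edge coverage.
    `coverage` maps seed name -> set of covered edge IDs."""
    remaining = set().union(*coverage.values())  # edges still to be covered
    pool = dict(coverage)                        # candidate seeds
    kept = []
    while remaining:
        # Pick the seed covering the most not-yet-covered edges.
        best = max(pool, key=lambda s: len(pool[s] & remaining))
        gained = pool[best] & remaining
        if not gained:
            break                                # defensive: nothing left to gain
        kept.append(best)
        remaining -= gained
        del pool[best]
    return kept
```

Greedy set cover is not optimal, but it is the standard fast approximation here because computing an exactly minimal covering corpus is NP-hard; distillation frameworks evaluated in the literature use variations of this greedy "minset" strategy.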

Challenges and Future Directions

Limitations and Common Pitfalls

Fuzzing techniques, while effective for uncovering software vulnerabilities, are inherently constrained by several factors that limit their ability to achieve comprehensive testing. One primary limitation is the difficulty of attaining full code coverage, particularly for deep or complex execution paths that require specific input sequences to trigger. Traditional fuzzers often struggle to generate inputs that navigate intricate control flows, leaving gaps in exploration where bugs remain undetected. This issue is exacerbated in programs with non-deterministic behaviors, such as those involving timing-dependent operations or external interactions, where the same input may produce varying outputs, complicating feedback-driven strategies. In multi-threaded programs, fuzzing faces state explosion: the exponential growth of possible thread interleavings creates an infeasibly large state space, making it challenging to systematically test concurrent behaviors and increasing the likelihood of overlooking race conditions or deadlocks.

False negatives represent another significant pitfall, where fuzzing fails to detect existing bugs due to insufficient input diversity or reliance on random mutations without targeted guidance. Bugs triggered only under rare conditions, such as uncommon environmental states or precise value combinations, are particularly prone to evasion, as fuzzers may not sample these scenarios within practical timeframes. Over-dependence on random inputs without mechanisms to ensure semantic validity can result in a high rejection rate of test cases, further reducing the chances of reaching vulnerable code paths and perpetuating incomplete assessments. Resource demands pose practical barriers to fuzzing's scalability, requiring substantial computational power and memory for extended campaigns to yield meaningful results.
Long-running fuzzing sessions can consume substantial CPU and memory, especially when processing large input corpora or simulating complex executions, which may render the approach infeasible in resource-constrained environments. A common pitfall is non-reproducible crashes, where detected anomalies cannot be reliably recreated due to timing sensitivities or incomplete state capture, hindering debugging and remediation efforts. Environmental factors further complicate fuzzing setups, particularly when targets depend on specific hardware or software configurations that are difficult to replicate in testing environments. Hardware-specific dependencies, such as proprietary peripherals in embedded systems, can lead to inaccurate simulations and missed vulnerabilities that manifest only on actual devices. Network-reliant programs introduce additional challenges, as fuzzing often requires mocking external dependencies, which may not fully capture real-world interactions and can result in overlooked issues arising from latency or protocol variations.

Emerging Techniques

Recent advancements in fuzzing have increasingly incorporated machine learning techniques to enhance input generation and mutation strategies. Neural fuzzing approaches, such as Learn&Fuzz introduced in 2017, leverage neural networks to automatically infer input grammars from sample inputs, enabling predictive mutations that improve coverage in grammar-based fuzzing by modeling statistical patterns in valid inputs. This method has demonstrated superior performance in generating syntactically valid test cases for complex formats, outperforming traditional manual grammar engineering. Building on this, generative adversarial networks (GANs) have emerged for input synthesis, with variational auto-encoder GANs (VAE-GANs) proposed to produce diverse, high-quality fuzzing inputs by learning latent representations of seed data, achieving up to 57% improvement in edge discovery when integrated with AFL++ compared to baseline fuzzers.
Similarly, GAN-based seed generation techniques have shown efficacy in creating crash-reproducing inputs from prior failures, particularly for image-processing applications, by adversarially training generators to mimic vulnerable patterns. AI-driven adaptations are further refining fuzzing efficiency through reinforcement learning (RL) for dynamic seed scheduling. RL-based hierarchical schedulers, such as those employing multi-level coverage metrics, optimize seed selection in greybox fuzzing by treating it as a multi-armed bandit problem, detecting 20% more bugs than tools like AFL on CGC benchmarks. These approaches adaptively prioritize seeds based on rewards from new path coverage, addressing the inefficiencies of static scheduling.

In parallel, fuzzing of quantum-resistant cryptographic software has gained traction to validate post-quantum algorithm implementations against side-channel and implementation flaws. Techniques combining fuzzing with hardware performance counters have detected vulnerabilities in NIST-standardized post-quantum signatures, such as reduced security in flawed implementations, by monitoring timing discrepancies during fuzz campaigns. Such methods help ensure robustness in cryptographic libraries like liboqs, where fuzzing has uncovered bugs in algorithms like HQC before deployment. Broader applications of fuzzing are expanding to novel domains, including testing AI models against adversarial inputs and adapting to serverless environments. Fuzzing machine learning systems involves generating adversarial perturbations to expose robustness issues, with greybox techniques like those in TAEFuzz discovering up to 46.1% more errors in targeted image-based deep learning systems. This bidirectional trend, fuzzing ML models while using ML to enhance fuzzers, promises improved security for deployed AI.
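The bandit framing behind RL-style seed scheduling can be illustrated with a simple UCB1 rule: each seed is an arm, and the reward records whether mutating that seed produced new coverage. This is a generic sketch of the idea, not the cited paper's algorithm; the class name and the binary reward model are illustrative assumptions.

```python
import math

class UCB1SeedScheduler:
    """Pick the next seed to mutate, balancing exploitation (seeds that
    recently yielded new coverage) against exploration (rarely tried seeds)."""

    def __init__(self, seeds):
        self.stats = {s: {"tries": 0, "reward": 0.0} for s in seeds}
        self.total = 0  # total fuzzing rounds performed so far

    def select(self):
        # Try every seed at least once before applying the UCB1 formula.
        for seed, st in self.stats.items():
            if st["tries"] == 0:
                return seed

        def ucb(seed):
            st = self.stats[seed]
            mean = st["reward"] / st["tries"]
            # Exploration bonus shrinks as a seed accumulates tries.
            return mean + math.sqrt(2 * math.log(self.total) / st["tries"])

        return max(self.stats, key=ucb)

    def update(self, seed, found_new_coverage):
        """Reward = 1 if mutating this seed uncovered new edges, else 0."""
        st = self.stats[seed]
        st["tries"] += 1
        st["reward"] += 1.0 if found_new_coverage else 0.0
        self.total += 1
```

Hierarchical schedulers in the literature refine this by organizing seeds under multi-level coverage metrics, but the underlying trade-off is the same explore/exploit balance this rule captures.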
In serverless computing, fuzzing frameworks have been tailored to handle ephemeral functions, such as those using WebAssembly in platforms like Spin, where structure-sensitive fuzzers like SwFuzz scale testing across distributed invocations without persistent-state overhead. For scalability in edge computing, in-place fuzzing architectures like E-FuzzEdge enable efficient campaigns on resource-constrained devices by minimizing data transfer, boosting throughput by 3-5x in IoT scenarios through localized mutation and feedback loops. Middleware-based tools like EdgeFuzz further support distributed fuzzing in edge networks, coordinating tests across nodes to uncover inter-device vulnerabilities.

Future research directions emphasize hybrid methodologies and ethical frameworks to address evolving challenges. Integrating fuzzing with formal verification, as in HyPFuzz, uses symbolic execution to guide fuzzers toward hard-to-reach processor states, detecting 100% of known bugs in RISC-V core benchmarks while reducing manual effort. Ethical fuzzing practices for privacy-sensitive applications prioritize responsible disclosure and data minimization, ensuring that tests on applications handling personal information comply with standards like GDPR by anonymizing inputs and limiting exposure of sensitive paths, as outlined in prudent evaluation guidelines. These hybrid approaches and ethical considerations are poised to make fuzzing more reliable and deployable in critical systems, fostering verifiable security without compromising user privacy.

References

  1. [1]
    Fuzz Testing for Software Assurance | NIST
    Mar 1, 2015 · Fuzz testing, or fuzzing, is a software testing technique that involves providing invalid, unexpected, or random test inputs to the software system under test.
  2. [2]
    Fuzzing - OWASP Foundation
    Fuzz testing or Fuzzing is a Black Box software testing technique, which basically consists in finding implementation bugs using malformed/semi-malformed data ...
  3. [3]
    An empirical study of the reliability of UNIX utilities
An empirical study of the reliability of UNIX utilities. Authors: Barton P. Miller, Univ. of Wisconsin, Madison; Lars ...
  4. [4]
    [PDF] An Empirical Study of the Reliability of UNIX Utilities - Paradyn Project
    Copyright 1989 Miller, Fredriksen, and So. Page 2. 1. INTRODUCTION. When we use basic operating system facilities, such as the kernel and major utility ...
  5. [5]
    What is Fuzzing (Fuzz Testing)? | Tools, Attacks & Security - Imperva
Fuzzing is a quality assurance technique used to detect coding errors and security vulnerabilities in software, operating systems, or networks.
  6. [6]
    Fuzz Testing Landscape 2025 | Blog - Code Intelligence
    Fuzz testing has two main categories: white-box, which uses source code, and black-box, which does not. The market includes both open-source and commercial ...
  7. [7]
    Fuzz Testing of Application Reliability - cs.wisc.edu
    Aug 1, 2023 · Fuzz Testing of Application Reliability. Classic fuzz testing is a simple technique for feeding random input to applications.
  8. [8]
    Fuzzing: A Survey for Roadmap - ACM Digital Library
    As shown in Figure 2, fuzzing consists of three basic components, namely input generator, executor, and defect monitor. The input generator provides the ...
  9. [9]
    [PDF] Fuzzing: State of the Art - Cheng Wen's Home Page
Fuzzing is a software testing technique which can automatically generate test cases. Thus, we run these test cases on a target program, and then observe the ...
  10. [10]
    [PDF] Evaluating Fuzz Testing - arXiv
We surveyed the recent research literature and assessed the experimental evaluations carried out by 32 fuzzing papers. We found problems in every evaluation we ...
  11. [11]
    [PDF] Analyzing Impact of Coverage Metrics in Greybox Fuzzing - USENIX
    Code coverage metrics evaluate the uniqueness among test cases at the code level, such as line coverage, basic block coverage, branch/edge coverage, and path ...
  12. [12]
    [PDF] Compiler-quality Instrumentation for Better Binary-only Fuzzing
ZAFL's architectural focus on fine-grained instrumentation facilitates complex fuzzing-enhancing transformations in a performant manner.
  13. [13]
    [PDF] Combining Fuzzing and Concolic execution for Java
    Random testing can be dated back to the 1950s when data was stored on punch cards. Programmers would supply their programs with random inputs and if any ...
  14. [14]
    The History of Software Testing - Testing References
    This page contains the Annotated History of Software Testing; a comprehensive overview of the history of software testing.
  15. [15]
    Fuzz Testing of Application Reliability - cs.wisc.edu
    Aug 1, 2023 · 1988 Original fuzz project assignment (PDF). Project (1) on the list is the fuzz assignment. This is the original project assignment from my ...
  16. [16]
    Automated Whitebox Fuzz Testing - NDSS Symposium
    We present an alternative whitebox fuzz testing approach inspired by recent advances in symbolic execution and dynamic test generation.
  17. [17]
    [PDF] Billions and Billions of Constraints: Whitebox Fuzz Testing in ...
    This paper is organized as follows. In Section 2, we review whitebox fuzzing and SAGE . In Section 3, we present our two new systems, SAGAN and JobCenter,.
  18. [18]
    american fuzzy lop - [lcamtuf.coredump.cx]
American fuzzy lop is a security-oriented fuzzer that employs a novel type of compile-time instrumentation and genetic algorithms to automatically discover ...
  19. [19]
    Simple guided fuzzing for libraries using LLVM's new libFuzzer
    LibFuzzer, recently added to the LLVM tree, is a library for in-process fuzzing that uses Sanitizer Coverage instrumentation to guide test ...
  20. [20]
    Announcing OSS-Fuzz: Continuous Fuzzing for Open Source Software
    OSS-Fuzz's goal is to make common software infrastructure more secure and stable by combining modern fuzzing techniques with scalable distributed execution.
  21. [21]
    Documentation for OSS-Fuzz - Google
OSS-Fuzz aims to make common open source software more secure and stable by combining modern fuzzing techniques with scalable, distributed execution.
  22. [22]
    syzkaller is an unsupervised coverage-guided kernel fuzzer - GitHub
Initially, syzkaller was developed with Linux kernel fuzzing in mind, but now it's being extended to support other OS kernels as well.
  23. [23]
    [PDF] AFL++: Combining Incremental Steps of Fuzzing Research - USENIX
    In this paper, we present AFL++, a community-driven open- source tool that incorporates state-of-the-art fuzzing research, to make the research comparable, ...
  24. [24]
    Fuzzing for Security - Chromium Blog
    Apr 26, 2012 · Chrome's fuzzing infrastructure (affectionately named "ClusterFuzz") is built on top of a cluster of several hundred virtual machines running approximately six ...
  25. [25]
    How Heartbleed could've been found - Hanno's blog
    Apr 7, 2015 · Fuzzing is a widely used strategy to find security issues and bugs in software. The basic idea is simple: Give the software lots of inputs with ...
  26. [26]
    Afl Fuzz Approach | AFLplusplus
    Second to last is the power schedule mode being run (default: fast). ... Because high bitmap density makes it harder for the fuzzer to reliably discern new ...
  27. [27]
    [PDF] Analysis of Mutation and Generation-Based Fuzzing
The advantage to mutation-based fuzzing is that little or no knowledge of the protocol or application under study is required. All that is needed is one ...
  28. [28]
    Mutation-Based Fuzzing
Mutational fuzzing – that is, introducing small changes to existing inputs that may still keep the input valid, yet exercise new behavior.
  29. [29]
    Fuzzing with Grammars - The Fuzzing Book
This chapter introduces grammars as a simple means to specify input languages, and to use them for testing programs with syntactically valid inputs.
  30. [30]
    [PDF] A Review on Grammar-Based Fuzzing Techniques - CSC Journals
    In this paper, we present an overview of grammar-based fuzzing tools and techniques that are used to guide them which include mutation, machine learning, and ...
  31. [31]
    [2008.01150] Evolutionary Grammar-Based Fuzzing - arXiv
    Aug 3, 2020 · In this paper, we present EvoGFuzz, an evolutionary grammar-based fuzzing approach to optimize the probabilities to generate test inputs.
  32. [32]
    [PDF] Grammarinator:A Grammar-Based Open Source Fuzzer
    Nov 5, 2018 · In this paper, we present a tool named Grammarinator that aims at providing the capabilities of both generation and mutation-based fuzzers with ...
  33. [33]
    [PDF] A Study of Grammar-Based Fuzzing Approaches
    Fuzzing is the process of finding security vulnerabilities in code by creating inputs that will activate the exploits. Grammar-based fuzzing uses a grammar, ...
  35. [35]
    Technical "whitepaper" for afl-fuzz - [lcamtuf.coredump.cx]
This document provides a quick overview of the guts of American Fuzzy Lop. See README for the general instruction manual; and for a discussion of motivations ...
  36. [36]
    [PDF] Driller: Augmenting Fuzzing Through Selective Symbolic Execution
    Feb 24, 2016 · We present Driller, a hybrid vulnerability excavation tool which leverages fuzzing and selective concolic execution in a complementary manner, ...
  37. [37]
    NEUZZ: Efficient Fuzzing with Neural Program Smoothing - arXiv
    Jul 15, 2018 · In this paper, we propose a novel program smoothing technique using surrogate neural network models that can incrementally learn smooth approximations.
  38. [38]
    Coverage guided vs blackbox fuzzing | ClusterFuzz - Google
    Coverage guided fuzzing is recommended as it is generally the most effective. ... Chrome, without any coverage feedback to guide its mutations.
  39. [39]
    Fuzzing beyond memory corruption: Finding broader classes of ...
    Sep 8, 2022 · Fuzzing, a type of testing once primarily known for detecting memory corruption vulnerabilities in C/C++ code, has considerable untapped potential to find ...
  40. [40]
    What Bugs Can You Find With Fuzzing? - Code Intelligence
    Bugs and CWE's Found Through Code Intelligence Fuzzing in C/C++ ; CWE-680, Integer Overflow to Buffer Overflow, CWE-362, Signal Handler Race Condition ; CWE-466 ...
  41. [41]
    The Role of Fuzz Testing in Software Security Part 1
    Apr 23, 2025 · Fuzzing is highly effective at discovering: Buffer overflows (stack/heap overflows). Use-after-free vulnerabilities. Integer overflows and ...
  42. [42]
    [PDF] Understanding and Detecting Disordered Error Handling with ...
Aug 11, 2021 · Memory corruption. DiEH bugs often cause critical memory corruption such as use-after-free, double free, and NULL-pointer dereference. In ...
  43. [43]
    [PDF] FUZZIFICATION: Anti-Fuzzing Techniques - USENIX
    Compared to manual analysis or static analysis, fuzzing is able to execute the program orders of magnitude more times and thus can explore more program states ...
  44. [44]
    [PDF] Evaluating Fuzz Testing
    A fuzz tester (or fuzzer) is a tool that iteratively and randomly gener- ates inputs with which it tests a target program. Despite appearing. “naive” when ...
  45. [45]
    [PDF] FuZZan: Efficient Sanitizer Metadata Design for Fuzzing - USENIX
    Jul 15, 2020 · ASan provides logging functionality for error reporting (e.g., saving allocation sizes and thread IDs during object allocation). Unfortunately, ...
  46. [46]
    What is Fuzz Testing [Complete Guide] - Code Intelligence
    Fuzzing is a dynamic testing method used to identify bugs and vulnerabilities in software. It is mainly used for security and stability testing of the codebase.
  47. [47]
    [PDF] Effective Fuzzing within CI/CD Pipelines (Registered Report)
    Sep 16, 2024 · Abstract. Deploying fuzzing within CI/CD pipelines can help ensure safe and secure code evolution. Directed greybox fuzzing techniques.
  48. [48]
    [PDF] Multi-target Multi-step Directed Greybox Fuzzing for Static Analysis ...
    Aug 15, 2025 · We propose a novel multi-target, multi-step guided fuzzer,. Lyso, to enhance existing methods by leveraging program flows and alarm correlations ...
  49. [49]
    FuzzSlice: Pruning False Positives in Static Analysis Warnings ...
    Feb 6, 2024 · FuzzSlice: Pruning False Positives in Static Analysis Warnings through Function-Level Fuzzing. Authors: Aniruddhan Murali. Aniruddhan Murali.
  50. [50]
    [PDF] FuzzSlice: Pruning False Positives in Static Analysis Warnings ...
    FuzzSlice automatically prunes false positives in static analysis warnings by fuzzing code slices at the function level, generating a separate binary for each  ...
  51. [51]
    [PDF] ClusterFuzz - Black Hat
How to fuzz effectively as a Defender? ○ Not just “more cores”. ○ Security teams can't write all fuzzers for the entire project. ○ Bugs create triage burden.
  52. [52]
    google/clusterfuzz: Scalable fuzzing infrastructure. - GitHub
ClusterFuzz is a scalable fuzzing infrastructure that finds security and stability issues in software. Google uses ClusterFuzz to fuzz all Google products.
  53. [53]
    [PDF] Tuning Configuration Selection for Continuous Kernel Fuzzing
Aug 15, 2024 · Google's syzkaller [10] has found nearly 4,000 vulnerabilities in the Linux kernel alone [19] and is consistently among the top reporters of ...
  54. [54]
    [PDF] Protocol State Fuzzing of TLS Implementations - USENIX
    Aug 12, 2015 · These range from crypto- graphic attacks (such as problems when using RC4 [4]) to serious implementation bugs (such as Heartbleed [13]) and ...
  55. [55]
    Embedded fuzzing: a review of challenges, tools, and solutions
    Sep 2, 2022 · Embedded fuzzing is a method to uncover software bugs in embedded systems, which are microcontroller-based devices with dedicated software.
  56. [56]
    AFLplusplus: The AFL++ fuzzing framework
The AFL++ fuzzing framework includes the following: It includes a lot of changes, optimizations and new features respect to AFL like the AFLfast power ...
  57. [57]
    libFuzzer – a library for coverage-guided fuzz testing. - LLVM
    LibFuzzer is an in-process, coverage-guided, evolutionary fuzzing engine. LibFuzzer is linked with the library under test, and feeds fuzzed inputs to the ...
  58. [58]
    google/honggfuzz - GitHub
    Security oriented software fuzzer. Supports evolutionary, feedback-driven fuzzing based on code coverage (SW and HW based) - google/honggfuzz.
  59. [59]
    Peach Fuzzer - GitLab
    Peach is a SmartFuzzer that is capable of performing both generation and mutation based fuzzing. Peach requires the creation of Peach Pit files.
  60. [60]
    OSS-Fuzz - continuous fuzzing for open source software. - GitHub
    OSS-Fuzz aims to make common open source software more secure and stable by combining modern fuzzing techniques with scalable, distributed execution.
  61. [61]
    [PDF] Simplifying and Isolating Failure-Inducing Input
Each time a test fails, Delta Debugging could be used to simplify and isolate the circumstances of the failure. Given sufficient testing resources and a ...
  62. [62]
    Igor: Crash Deduplication Through Root-Cause Clustering
    Nov 13, 2021 · We develop Igor, an automated dual-phase crash deduplication technique. By minimizing each PoC's execution trace, we obtain pruned test cases that exercise the ...
  63. [63]
    [PDF] AddressSanitizer: A Fast Address Sanity Checker - USENIX
    In this paper we presented AddressSanitizer, a fast mem- ory error detector. AddressSanitizer finds out-of-bounds. (for heap, stack, and globals) accesses ...
  64. [64]
    Corpus Distillation for Effective Fuzzing: A Comparative Evaluation
    May 30, 2019 · We present results of 34+ CPU-years of fuzzing with five distillation approaches to understand their impact in finding bugs in real-world software.Missing: management | Show results with:management
  65. [65]
    [PDF] Fuzzing: Challenges and Reflections - Marcel Boehme
    In this article, we summarize the open challenges and opportunities for fuzzing and symbolic execution as they emerged in discussions among researchers and ...
  66. [66]
    (PDF) Survey of Software Fuzzing Techniques - ResearchGate
    Dec 12, 2021 · This survey does that by summarizing current state-of-the art fuzzing approaches, classifying these approaches, and highlighting key insights ...
  67. [67]
    Fuzzing: Progress, Challenges, and Perspectives
    Traditional mutation-based fuzzers, such as AFL, often require a corpus crawled from the Internet. However, these corpora usually contain only the common ...
  68. [68]
    Fuzzing vulnerability discovery techniques: Survey, challenges and ...
    Fuzzing, as a common method for vulnerability mining, has the disadvantages of low reception rate of generated test cases and blind mutation, which leads to ...
  69. [69]
    Learn&Fuzz: Machine learning for input fuzzing - IEEE Xplore
    In this paper, we show how to automate the generation of an input grammar suitable for input fuzzing using sample inputs and neural-network-based statistical ...
  70. [70]
    Effective fuzzing testcase generation based on variational auto ...
    In this paper, a novel fuzzing testcase generation technique based on variational auto-encoder generative adversarial network(VAE-GAN) is proposed.
  71. [71]
    [PDF] GAN-based Seed Generation for Efficient Fuzzing - SciTePress
This model offers a novel method for generating seed files by learning from crash files in previous experiments. Notably, it excels in generating images across ...
  72. [72]
    Reinforcement Learning-based Hierarchical Seed Scheduling for ...
    This paper introduces a multi-level coverage metric and a reinforcement-learning-based hierarchical scheduler for greybox fuzzing, outperforming AFL and ...
  74. [74]
    Finding bugs in implementations of HQC, the fifth post-quantum ...
    Mar 21, 2025 · In this blogpost, we give a high-level explanation on how HQC works and how you can do automatic testing of implementations of this standard with internal test ...
  75. [75]
    TAEFuzz: Automatic Fuzzing for Image-based Deep Learning ...
    Aug 14, 2025 · 1. Transferable adversarial examples generation: The attacker generates transferable adversarial examples on the surrogate model and inputs them ...
  76. [76]
    [PDF] SwFuzz: Structure-Sensitive WebAssembly Fuzzing
    It optimizes state-of-the-art feedback fuzzing and easily scales to large binary applications and unknown environments. ... “Sledge: a serverless-first, light- ...
  77. [77]
    [PDF] EdgeFuzz: A Middleware-Based Security Testing Tool for ...
    Fuzz testing has become a popular automated tool for finding software system bugs and vulnerabilities in dis- tributed environments. Distributed systems are ...
  78. [78]
    HyPFuzz: Formal-Assisted Processor Fuzzing - USENIX
    We present HyPFuzz, a hybrid fuzzer that leverages formal verification tools to help fuzz the hard-to-reach part of the processors.
  79. [79]
    [PDF] SoK: Prudent Evaluation Practices for Fuzzing
Ethical handling requires researchers to responsibly disclose these bugs to the vendors or maintainers. Both sides can additionally request a CVE that serves as ...