
Buffer overflow protection

Buffer overflow protection encompasses a variety of techniques employed during software development, compilation, execution, and hardware design to detect, prevent, or mitigate buffer overflow vulnerabilities, which occur when a program attempts to store more data in a fixed-size buffer than it can accommodate, potentially leading to memory corruption, program crashes, or arbitrary code execution by attackers. These protections address both stack-based and heap-based overflows, common in languages like C and C++ that lack built-in bounds checking, and have become essential in modern computing to counter exploits that have historically compromised systems ranging from servers to embedded devices. Key compile-time and runtime mechanisms include stack canaries, also known as stack guards, which insert a random or secret value (the "canary") between local buffers and critical stack data like return addresses; if an overflow corrupts this value, the program detects the anomaly and terminates execution before control flow can be hijacked. This technique is implemented via compiler flags such as GCC's -fstack-protector-strong, which protects functions with local arrays or vulnerable parameters, and Microsoft's /GS option in Visual C++, which places a security cookie on the stack and verifies it on function exit, effectively blocking many attacks with minimal performance overhead. Additionally, address space layout randomization (ASLR) randomizes the base addresses of key memory regions like the stack, heap, and shared libraries at each program load, making it difficult for attackers to predict memory locations for precise exploits, while Data Execution Prevention (DEP), or non-executable memory pages, prevents injected code from running by marking stack and heap regions as non-executable. Among operating systems, Linux distributions often enable stack protection through compiler options and built-in kernel features, which combine canaries with non-executable memory to restrict code execution on the stack, while Windows integrates ASLR, DEP, and Control Flow Guard (CFG) to validate indirect calls and further harden against overflows.
Advanced approaches, such as dynamic information flow tracking (DIFT), tag potentially tainted data from untrusted sources and block unsafe pointer operations, offering protection for both userspace applications and kernel code without requiring source modifications. Despite these defenses, complete elimination relies on secure coding practices, such as using bounds-checked functions (e.g., strncpy instead of strcpy) and transitioning to memory-safe languages like Rust or Go, as recommended in secure-by-design principles to proactively avoid introducing defects.

Fundamentals

Buffer Overflows

A buffer overflow occurs when a program writes more data to a buffer than it is allocated to hold, resulting in the excess data overwriting adjacent memory locations and potentially corrupting program state or enabling unauthorized code execution. This vulnerability arises primarily in languages like C and C++ that lack built-in bounds checking for array or buffer operations. Common causes include off-by-one errors, where a loop or index calculation inadvertently accesses one element beyond the buffer's boundary; improper handling of strings using functions like strcpy() without length validation; and integer overflows that allow excessively large amounts of data to be written by miscalculating buffer sizes. These errors often stem from unvalidated user input or assumptions about data lengths in complex codebases. Buffer overflows are classified into several types based on the memory region affected. Stack-based overflows target the call stack, typically by overflowing local variables in a function's frame to overwrite the saved return address or other control data, altering program execution flow. Heap-based overflows occur in dynamically allocated memory on the heap, corrupting metadata such as malloc headers or pointers to adjacent objects, which can lead to arbitrary writes or data leaks. Kernel-based overflows, less common in user-space applications but critical in operating systems, involve buffers in kernel space and can enable privilege escalation by overwriting kernel structures. The first documented exploitation of a buffer overflow occurred in the 1988 Morris worm, which used a gets() overflow in the fingerd daemon on UNIX systems to inject and execute malicious code, infecting approximately 6,000 machines, or 10% of the Internet at the time. In a basic exploitation of a stack-based overflow, an attacker crafts input to overflow a local buffer and overwrite the function's return address with the location of injected shellcode, redirecting control flow to execute arbitrary instructions such as spawning a shell. For example, consider the following vulnerable C code:
```c
#include <string.h>
#include <stdio.h>

void vulnerable_function(char *user_input) {
    char buffer[10];
    strcpy(buffer, user_input);  // No bounds checking
    printf("Buffer content: %s\n", buffer);
}

int main(int argc, char **argv) {
    if (argc > 1) {
        vulnerable_function(argv[1]);
    }
    return 0;
}
```
If user_input is 10 or more characters long, strcpy (which also copies the terminating null byte) overflows buffer, potentially overwriting the return address on the stack to point to attacker-controlled code.

Protection Objectives

The primary objectives of buffer overflow protections are to detect unauthorized memory overflows before exploitation can occur, prevent the execution of injected malicious code, and randomize memory layouts to complicate reliable attack targeting. Detection focuses on identifying buffer overruns early, often through sentinel values placed adjacent to critical data like return addresses, enabling timely program termination to avert control hijacking. Prevention mechanisms enforce hardware-supported memory permissions, such as marking stack and heap regions as non-executable, thereby blocking the direct execution of attacker-supplied code in data areas. Randomization, exemplified by address space layout randomization (ASLR), dynamically varies the positions of code, libraries, the stack, and the heap to disrupt predictable exploitation paths, forcing attackers to guess memory addresses with low success probability. These protections involve inherent trade-offs, including performance overhead from checks and validations, which can introduce slowdowns of a few percent depending on the implementation and workload. For instance, sentinel-based detection adds computational cost for verification on function returns, while ASLR may increase context-switching latency in multi-process environments. Compatibility challenges emerge with legacy software lacking support for these features, potentially requiring recompilation or compatibility wrappers, and detection schemes risk false positives that crash benign programs due to unrelated memory corruptions. Balancing security gains against these costs remains a key design consideration, with optimizations like selective protection for vulnerable functions mitigating overhead in production systems. The evolution of protection objectives traces from rudimentary crash-on-overflow detection in the 1990s, pioneered by techniques like StackGuard canaries responding to early exploits such as the 1988 Morris worm, to comprehensive multi-layered defenses by the 2000s.
This shift incorporated prevention and randomization amid rising attack sophistication, including return-oriented programming (ROP), which bypassed single defenses. By the 2010s, further advancements included hardware-supported mechanisms for control-flow integrity. Stack canaries are effective against traditional stack-smashing attacks that attempt to overwrite return addresses. Layered defenses significantly improve protection against various attack vectors, though advanced bypasses like information leaks can reduce efficacy unless complemented by additional mitigations.

Software Detection Techniques

Stack Canaries

Stack canaries, also known as stack cookies or guards, serve as a detection mechanism for stack-based overflows in compiled programs. They involve inserting a known secret value, referred to as the canary, into the stack frame of a function immediately adjacent to sensitive control data, such as the return address. If an overflow occurs and reaches the control data, it will likely corrupt the canary value first. Before the function returns, compiler-generated code verifies the integrity of this canary; any mismatch triggers an immediate program termination, preventing potential exploitation like control-flow hijacking. The insertion of stack canaries occurs automatically during compilation for functions deemed vulnerable, typically those allocating local buffers. In the function prologue, the compiler loads a canary value from a protected global or thread-local storage area—often an entry indexed by a thread identifier to ensure uniqueness per thread—and places it on the stack right after the local variables but before the saved return address and frame pointer. This positions the canary as a tripwire between potential overflow sources (e.g., arrays or strings) and critical control data. In the epilogue, just prior to popping the stack frame and returning, the code reloads the original canary from storage and compares it against the stack copy; if they differ, execution jumps to a failure handler that aborts the process, often invoking routines like __stack_chk_fail in GCC-based implementations. This approach requires no changes to source code and maintains binary compatibility while providing probabilistic protection against overflows. The effectiveness of stack canaries stems from their low runtime overhead and ability to thwart straightforward return-address overwrites, a common vector in stack-smashing attacks. Modern implementations, such as those in GCC and Clang, show typical performance impacts under 5% in real-world applications.
However, they are not foolproof: attackers can bypass them through information leakage (e.g., via format-string vulnerabilities that disclose the canary value) or by crafting partial overflows that avoid the canary entirely, such as targeting adjacent non-buffer data. Variants like random, terminator, or XOR-based canaries address specific bypasses but build on this core mechanism. To illustrate the stack layout and operations, consider a simplified function with a vulnerable buffer:
Stack Frame Layout (high to low addresses):
+-------------------+
| Return Address    |
+-------------------+
| Saved Frame Ptr   |
+-------------------+
| Canary Value      |  <-- Secret value placed here
+-------------------+
| Local Buffer[ ]   |  <-- Vulnerable array/string
+-------------------+
Pseudocode for insertion and check (in a C-like compiler extension):
```c
// Function prologue (entry):
void vulnerable_func(const char *input) {
    unsigned long canary = get_canary_from_storage();  // Load from thread-local/global
    // Allocate stack frame, push locals
    unsigned char local_buffer[10];
    // Insert canary after locals, before control data
    *(unsigned long *)(local_buffer + sizeof(local_buffer)) = canary;  // Simplified placement

    // Function body: strcpy((char *)local_buffer, input);  // Potential overflow

    // Function epilogue (before return):
    if (*(unsigned long *)(local_buffer + sizeof(local_buffer)) != canary) {
        __stack_chk_fail();  // Abort on mismatch
    }
    // Pop frame and return
}
```
This general implementation highlights how the canary acts as an early warning for corruption. Stack canaries were first systematically introduced in the StackGuard compiler extension by Cowan et al. in 1998, providing a foundational defense integrated into GCC and other toolchains. Their adoption was further advanced by systems like OpenBSD, which incorporated enhanced variants starting in 2003 to bolster default security.

Terminator Canaries

Terminator canaries employ fixed values composed of common string terminator bytes, such as the null byte (0x00), line feed (0x0A), carriage return (0x0D), and end-of-file marker (0xFF), typically arranged in a 32-bit word like 0x000A0DFF or 0x000AFF0D depending on the implementation. These values are inserted into the stack frame between local buffers and the return address during the function prologue, with integrity checked against the original value in the epilogue; corruption triggers program termination via a handler. The design targets overflows in string-processing functions like strcpy or strcat, which halt upon encountering terminator bytes in the source data. In operation, if an overflow attempts to propagate from a local buffer toward the return address, the terminator bytes in the canary exploit limitations in input vectors that prohibit or filter such characters—common in network protocols or formatted inputs where null or newline bytes are stripped or delimit strings. This prevents attackers from crafting payloads that precisely overwrite the canary without detection, as they cannot include the required terminator bytes to restore its value while altering the return address. For instance, in a vulnerable strcpy call to a fixed-size char buffer, an input lacking terminators might overflow the buffer, but if the source cannot embed null or newline bytes, the copy either terminates prematurely or corrupts the canary with non-matching bytes, triggering the check. The primary strengths of terminator canaries lie in their simplicity—no additional runtime storage or randomization is required, reducing overhead and implementation complexity compared to unpredictable values—and their effectiveness against exploits constrained by input sanitization that blocks terminator bytes. 
However, they are vulnerable to bypasses in scenarios where arbitrary bytes, including terminators, can be supplied, such as through read() or memcpy-based overflows; here, the fixed and predictable value allows attackers to embed the exact canary bytes in their payload to avoid detection. They also offer limited protection against non-string buffer overflows, like integer-based ones, where no terminator semantics apply. Early implementations appeared in the StackGuard compiler patch for GCC 2.7.2.2, released in 1998 as part of the Immunix project, where terminator canaries served as a lightweight option for detecting stack-smashing attacks in C programs without requiring source modifications. This approach predated widespread adoption of randomized variants and was used in basic protection modes for Linux distributions in the late 1990s and early 2000s.

Random Canaries

Random canaries enhance the stack canary mechanism by employing unpredictable values that are generated randomly, making it difficult for attackers to anticipate and bypass the protection during buffer overflows. Unlike fixed or terminator-based canaries, random variants are designed to thwart prediction through memory inspection or repeated attempts. This approach was pioneered in systems like StackGuard, which integrates random canaries into the compilation process to safeguard return addresses on the stack. The canary value is randomized at program startup, typically using a cryptographically secure pseudorandom number generator such as /dev/urandom on Linux systems, producing a 64-bit integer for modern 64-bit architectures. This value is stored in a protected global variable within the program's data segment and remains constant for the duration of the process execution. In multi-threaded environments, the canary is copied from the global location to thread-local storage (TLS) during thread initialization, ensuring each thread accesses its own copy without race conditions that could arise from concurrent reads of the shared global value; this synchronization is handled atomically by the runtime library, such as glibc's pthread_create implementation. Detection relies on the improbability of an attacker correctly guessing the random canary to overwrite the return address undetected; with a 64-bit value, there are 2^{64} possible combinations, rendering brute-force attacks computationally infeasible even with billions of attempts per second. Upon function return, the compiler inserts code to compare the stack-placed canary against the reference value, aborting execution if a mismatch occurs. An early implementation example is provided by the StackGuard framework from 1998, which patches the GCC compiler to insert and verify random canaries automatically for vulnerable functions.
Despite their effectiveness, random canaries have limitations, including susceptibility to information disclosure vulnerabilities that leak the value, such as kernel memory leaks exploitable via /proc interfaces or format string bugs in user-space applications. Side-channel attacks, including those leveraging cache timing or speculative execution like Spectre, can also reveal the canary indirectly. The computational overhead of generating the initial random value and performing per-function checks is low in modern implementations. In real-world deployments, the widespread adoption of random canaries in compilers and operating system distributions since the early 2000s has significantly mitigated stack smashing exploits in protected binaries.

Random XOR Canaries

Random XOR canaries represent an advanced variant of stack canaries designed to enhance resistance to information disclosure attacks in buffer overflow scenarios. In this approach, the canary value is computed by XORing a globally generated random 32-bit value with a portion of the stack frame address, typically the low 16 bits of the frame pointer (e.g., random_value ^ (frame_ptr & 0xFFFF)). This modified canary is then inserted into the stack frame immediately after local variables and before the saved frame pointer and return address during function prologue. Upon function epilogue, the stored canary is retrieved, XORed again with the current frame pointer, and compared against the original random value; a mismatch triggers program termination via a call to __stack_chk_fail. The primary purpose of incorporating the XOR operation with stack position data is to obfuscate the canary value, thereby mitigating attacks that partially leak stack contents. Unlike plain random canaries, where a direct leak of the value enables attackers to forge it in subsequent overflows, the XOR binding ensures that knowledge of the raw random value alone is insufficient without the corresponding frame pointer, and vice versa. This provides additional protection against memory disclosure vulnerabilities, such as those exploited via format string bugs or partial stack reads, by complicating the reconstruction of valid canaries across different stack frames. The algorithm can be outlined in pseudocode as follows: Generation and Insertion (Function Prologue):
global_random = generate_random_32bit()  // Once at program startup
canary = global_random ^ (frame_ptr & 0xFFFF)
push canary onto stack
// Proceed with local variables, saved frame_ptr, return_addr
Verification (Function Epilogue):
retrieved_canary = pop from stack
computed_canary = global_random ^ (frame_ptr & 0xFFFF)
if retrieved_canary != computed_canary:
    call __stack_chk_fail()  // Terminate program
This process ensures the canary's integrity without exposing the global random value directly on the stack. Random XOR canaries were introduced as an enhancement in StackGuard version 2 by Immunix, building on the original random canary mechanism from the 1998 USENIX Security paper; mainstream compiler support for stack-canary protection followed with the -fstack-protector option in GCC 4.1 (2006). This adoption marked a significant step in compiler-level stack overflow detection, with later releases and Linux distributions such as Red Hat enabling it by default for vulnerable functions. The enhancement improves resilience against information-leak attacks compared to plain random canaries by roughly doubling the entropy required for successful forgery in partial disclosure scenarios. Despite these benefits, random XOR canaries introduce a minor computational overhead due to the additional XOR and comparison operations, typically negligible but measurable in performance-sensitive applications. They remain vulnerable to full stack frame leaks or non-linear overflows that expose both the canary and frame pointer simultaneously, as well as attacks targeting functions without canary protection.

Software Prevention Techniques

Bounds Checking

Bounds checking is a proactive technique to prevent buffer overflows by enforcing limits on array and string accesses at compile time or runtime, ensuring that indices and lengths do not exceed allocated bounds. Static bounds checking involves compiler analysis to verify safe accesses, often through the use of safe library functions that incorporate length parameters, such as strlcpy, which copies strings while guaranteeing null-termination and avoiding overflows by respecting the destination buffer size. In contrast, dynamic bounds checking performs runtime verification on each access, typically via conditional statements like if (index < array_size) before dereferencing, which catches violations immediately but incurs execution-time costs. Examples of bounds checking implementations include Microsoft's Secure C Runtime (CRT) functions, such as strcpy_s, which explicitly validate buffer sizes and source lengths to prevent overflows in C programs. In Java, dynamic bounds checking is built into the language, throwing an ArrayIndexOutOfBoundsException when an array index is negative or exceeds the array length, providing automatic enforcement without manual intervention. The concept of bounds checking originated in the 1970s with safe programming languages like Pascal, which included runtime checks to detect out-of-bounds array accesses as a core safety feature. For legacy languages like C, retrofitting bounds checking has been advanced through tools such as CCured, introduced in 2002, which uses type inference and selective runtime instrumentation to add safety to existing code without full rewrites. Dynamic bounds checking typically introduces a performance overhead of 10-50% due to the added conditional branches and metadata management on each access, while static approaches impose lower costs, often under 10%, by optimizing or eliminating redundant checks during compilation.
When fully applied across a program, bounds checking eliminates an entire class of buffer overflow vulnerabilities by preventing invalid writes altogether. Despite its effectiveness, bounds checking has limitations in unsafe languages like C, where coverage is incomplete without comprehensive adoption of safe libraries or tools, leaving unchecked legacy code vulnerable. Additionally, it places a burden on developers for manual implementation in performance-critical sections, as automatic retrofitting may not handle all pointer usages.

Address Space Layout Randomization (ASLR)

Address Space Layout Randomization (ASLR) is a memory protection mechanism that introduces non-deterministic changes to the virtual memory layout of a process at runtime, making it significantly harder for attackers to predict and exploit memory addresses in buffer overflow attacks. By randomizing the base addresses of key memory regions, ASLR disrupts the reliability of exploits that rely on hardcoded or leaked addresses, such as those overwriting return pointers to redirect control flow. This technique was first conceptualized and implemented as part of the PaX patch set for the Linux kernel, where it was introduced in July 2001 to counter deterministic exploit chains enabled by predictable memory layouts. ASLR operates by randomizing several core components of a process's address space. The stack receives a random base offset, typically shifting its starting address by a value derived from a pseudo-random delta, which varies per process invocation. Heap allocation, managed via mechanisms like the brk() system call for initial segments or mmap() for dynamic regions, incorporates randomization to obscure data structure locations. Memory mappings via mmap(), which load shared libraries and other dynamic content, are offset by another random delta to prevent prediction of library function addresses. For position-independent executables (PIE), compiled with flags like -fPIE, the main program's text segment is also randomized, extending protection to the executable itself rather than just loaded modules. These randomizations are applied during process creation, such as in the kernel's load_elf_binary function for ELF binaries, ensuring the layout is determined anew for each execution. Implementations of ASLR occur at the operating system level, with varying degrees of granularity measured in bits of entropy—the effective randomness provided against guessing attacks.
Early implementations on Linux provided approximately 16 bits of entropy across randomized segments, sufficient to slow but not fully prevent brute-force derandomization on 32-bit systems. Microsoft introduced ASLR in Windows Vista in 2007, randomizing image bases, stacks, heaps, and the Process Environment Block (PEB) with opt-in support for executables via linker flags; initial entropy was lower, around 8-11 bits per component due to alignment constraints and reboot-persistent choices, though later enhancements increased this. By the 2010s, mainstream operating systems had adopted full ASLR: Linux kernels from version 2.6.12 (2005) integrated PaX-inspired randomization, achieving up to 28 bits of entropy on 64-bit architectures for components like the stack (19-22 bits) and mmap base; Windows expanded to mandatory high-entropy ASLR in versions like Windows 8; and macOS implemented it starting with Mac OS X 10.5 Leopard (2007), evolving to full coverage by the mid-2010s. Kernel address space layout randomization (KASLR), an extension randomizing the kernel's own layout, was added to Linux in version 3.14 (2014) to protect against kernel-level exploits. The effectiveness of ASLR lies in elevating the difficulty of advanced exploits like Return-Oriented Programming (ROP) and Jump-Oriented Programming (JOP), which chain short code snippets (gadgets) from existing binaries to bypass non-executable memory protections; randomization scatters these gadgets across unpredictable addresses, requiring attackers to first disclose or guess layouts via side channels. On 32-bit systems, however, ASLR's limited entropy (often 16 bits or less) allows brute-force bypasses in forking environments like servers, where child processes inherit the parent's layout and repeated attempts can guess addresses in seconds to minutes without crashing the parent. This vulnerability is largely mitigated on 64-bit systems, where 28+ bits of entropy render brute force computationally infeasible, often requiring billions of attempts.
To illustrate, consider a simplified memory layout shift: Fixed Layout (Pre-ASLR):
High Addresses
+--------------------+
|   Stack (0xbffff000)|
+--------------------+
|   Heap (0x0804a000) |
+--------------------+
| Shared Libs (0x400000 via mmap)|
+--------------------+
|   Code (0x08048000)|
+--------------------+
Low Addresses
Randomized Layout (Post-ASLR, e.g., +0x123400 offset):
High Addresses
+--------------------+
|   Stack (0xc123f400)|
+--------------------+
|   Heap (0x1928e400) |
+--------------------+
| Shared Libs (0x523400 via mmap)|
+--------------------+
|   Code (0x1a38c400) |  (if PIE)
+--------------------+
Low Addresses
This randomization breaks address-dependent payloads, though effectiveness depends on full adoption (e.g., PIE-enabled binaries) and resistance to information leaks.

Control-Flow Integrity (CFI)

Control-Flow Integrity (CFI) is a security mechanism designed to mitigate buffer overflow attacks by ensuring that a program's runtime control flow adheres strictly to a precomputed control-flow graph (CFG) derived at compile time, thereby preventing attackers from redirecting execution to unintended code paths. This approach addresses the limitations of defenses like ASLR, which complicates but does not prevent control-flow hijacking, since it merely randomizes memory addresses without validating the targets of control transfers. The core principle of CFI involves instrumenting the code to insert runtime validation checks on indirect control transfers, such as indirect calls, jumps, and returns, ensuring that the target address belongs to a predefined set of legitimate destinations in the CFG. For instance, before an indirect call, the implementation verifies whether the computed target is among the allowed function entry points, aborting execution if the check fails. This enforcement limits attackers' ability to chain exploits like return-oriented programming (ROP), even if they can overwrite pointers or control data. CFI originated from the 2005 paper "Control-Flow Integrity: Principles, Implementations, and Applications" by Martín Abadi, Mihai Budiu, Úlfar Erlingsson, and Jay Ligatti, which introduced the concept along with a software-based enforcement prototype for Windows on x86 architectures, demonstrating its feasibility through experiments on real-world applications. The technique gained practical adoption in the early 2010s, with Google integrating CFI into Chrome and Chrome OS by 2013 to protect against control-flow hijacks in browser components and system software.
CFI variants vary in precision and coverage to balance security and performance: fine-grained CFI enforces context-specific target sets, such as unique valid destinations per indirect call site or function, offering stronger protection at higher cost; coarse-grained CFI, in contrast, partitions code into broader equivalence classes (e.g., all functions of the same type) or uses simple blacklists of invalid targets for efficiency. Additionally, forward-edge CFI focuses on protecting outgoing transfers like indirect calls and jumps, while backward-edge CFI secures incoming transfers such as returns, often using separate mechanisms for each. Modern implementations, such as the CFI mode in Clang/LLVM introduced in the 2010s, support these variants through compiler passes that generate CFGs and insert checks; for forward edges, it promotes indirect calls to direct calls where possible or uses jump tables with bit-set validation, while backward edges employ shadow stacks to store and compare return addresses separately from the regular stack. These features enable deployment in production environments like web browsers and operating systems without requiring source code modifications. CFI typically incurs a runtime overhead of 5-20% in execution time, varying by granularity—fine-grained approaches approach the upper end on compute-intensive benchmarks, while optimized coarse-grained variants stay below 10%—as measured across standard suites like SPEC CPU. In terms of effectiveness, CFI prevents code-reuse attacks like ROP by restricting transfers to valid edges, rendering the vast majority of gadget chains unusable and significantly raising the bar for exploitation, though coarse-grained implementations remain vulnerable to attacks exploiting large equivalence classes.

Hardware and OS-Level Protections

Non-Executable Memory Regions

Non-executable memory regions represent a hardware and operating system-level defense against buffer overflow attacks that attempt to inject and execute malicious code, such as shellcode, in data areas like the stack or heap. This protection works by marking certain memory pages as non-executable, ensuring that attempts to run code from these regions trigger a hardware exception or fault, thereby preventing code injection exploits. Key technologies include the NX (No eXecute) bit introduced by AMD in their AMD64 architecture in 2003, which allows processors to enforce execution restrictions on memory pages. Microsoft's Data Execution Prevention (DEP), rolled out in Windows XP Service Pack 2 in 2004, leverages the NX bit (or equivalent hardware features) to mark data pages as non-executable by default. Complementing these is the W^X (write XOR execute) policy, first implemented in OpenBSD 3.3 in 2003 and later adopted in projects like PaX for Linux, which enforces that no memory page can simultaneously be writable and executable. The mechanism relies on page table entries in the memory management unit (MMU), where the NX bit—specifically bit 63 in 64-bit x86 page table entries—is set to indicate non-executability; if the processor's extended feature enable (EFER) register has the no-execute enable (NXE) bit activated, any attempt to fetch instructions from such a page causes a page fault. Operating systems configure page tables during process initialization to apply this flag to data regions like the stack and heap, while code segments remain executable. Violations result in immediate termination of the offending process or a kernel-level fault, halting exploitation before injected code can run. In x86-64 architectures, non-executable protection for the stack and heap has become standard, with modern operating systems like Windows, Linux, and macOS enabling it by default for 64-bit processes to cover vulnerable data areas comprehensively.
However, limitations exist: attackers can bypass this protection via return-oriented programming (ROP), where overflows corrupt control data (e.g., return addresses) to chain existing executable code snippets ("gadgets") from legitimate libraries, treating data as pointers to code without injecting new instructions. Adoption accelerated in the mid-2000s: PaX integrated non-executable pages into Linux kernels as early as 2000 and achieved widespread use through grsecurity patches by the mid-decade, while hardware support in AMD and Intel processors made the protection ubiquitous across consumer systems. The performance impact is near-zero for hardware implementations, since enforcement involves only a single bit check during instruction fetch; studies show negligible overhead (under 2%) even in software-emulated scenarios on older systems. For example, in a classic stack buffer overflow, an attacker might overwrite a buffer with shellcode followed by a return address pointing back into that shellcode; under non-executable protection, the jump to the stack triggers an execution fault, crashing the program before the shellcode runs.

Pointer Tagging and Authentication

Pointer tagging and authentication are hardware-supported techniques that embed security metadata directly into pointer values to detect corruption or enforce access permissions, thereby mitigating buffer overflow attacks that alter pointers to hijack control flow or access unauthorized memory. In pointer tagging, unused bits within a pointer—often the low-order bits reserved for alignment or the high-order byte in architectures supporting top-byte ignore (TBI)—are repurposed to store tags indicating the pointer's type, permissions, or associated object metadata. For instance, ARM's TBI feature, introduced in Armv8-A, ignores the top 8 bits of 64-bit virtual addresses, allowing software to safely store 8-bit tags without affecting address calculations during memory access. These tags enable hardware to perform integrity checks on loads and stores; a mismatch between the pointer's tag and the memory location's expected tag triggers a fault, preventing exploits like spatial buffer overflows where an attacker writes beyond a buffer's bounds to corrupt adjacent pointers. Pointer authentication extends this by attaching a cryptographic message authentication code (MAC) to the pointer, computed from a secret key and contextual data such as the storage address and modifiers like a thread ID, ensuring tamper detection even when an information leak reveals the pointer's raw value. ARM Pointer Authentication (PAC), specified in Armv8.3-A since 2016, computes a PAC of variable size—typically 16 bits, and up to 31 bits depending on the variant and virtual address size—via the QARMA-64 block cipher and stores it in the unused upper bits of the pointer; hardware verifies the MAC before dereferencing, authenticating return addresses, function pointers, and data pointers against corruption from buffer overflows or use-after-free errors.
Software interfaces with these mechanisms through compiler-emitted instructions such as ARM's PACIASP, which signs the return address in the link register using the stack pointer as a modifier, and AUTIASP, which verifies it; these are integrated into calling conventions to protect stack and heap pointers without significant code changes. This hardware enforcement occurs transparently during instruction execution, raising exceptions on invalid authentications to complement coarser protections like non-executable memory regions. The CHERI (Capability Hardware Enhanced RISC Instructions) project, developed since the early 2010s by the University of Cambridge and SRI International, exemplifies advanced pointer tagging through capability-based architectures that replace conventional pointers with "fat" 128- or 256-bit capabilities containing tagged bounds, permissions, and an authority mask that can only be monotonically reduced. In CHERI, a single-bit tag in each capability word signals validity; hardware clears the tag on unaligned stores or out-of-bounds accesses, causing faults on subsequent uses and blocking buffer overflow-induced pointer forgery or use-after-free exploits. Seminal work in CHERI demonstrated its efficacy on MIPS and later RISC-V and ARM implementations, with compiler adaptations for C/C++ ensuring backward compatibility while enforcing spatial and temporal safety. Apple's adoption of ARM PAC in its arm64e architecture, starting with iOS 12 on devices with the A12 Bionic chip in 2018, and extending to macOS Big Sur in 2020, applies authentication to iOS apps, using dedicated keys for instruction (IA/IB) and data (DA/DB) pointers to secure return-oriented programming (ROP) chains and indirect calls against overflow attacks.
These techniques provide fine-grained protection at low runtime overhead, typically 1-5% in performance-critical workloads, by leveraging hardware parallelism for tag/MAC operations without frequent software intervention; for example, CHERI evaluations on FreeBSD showed under 4% slowdown for SPEC CPU2006 benchmarks, while ARM PAC incurs less than 1% overhead in pointer-heavy applications due to its asymmetric signing/verification. They effectively counter information leaks by randomizing tags per allocation or context, complicating ROP gadgets and data-only attacks, and extend to use-after-free by invalidating tags on deallocation. However, deployment requires specialized hardware—ARMv8.3-A for PAC and custom extensions for CHERI—limiting universality, as x86 architectures lack native support and rely on software emulation with higher costs. Ongoing research, such as PARTS for PAC-based pointer integrity, continues to refine compiler integrations for broader memory safety in C/C++ codebases.

Implementations in Compilers and Languages

GNU Compiler Collection (GCC)

The GNU Compiler Collection (GCC) provides several built-in mechanisms to mitigate buffer overflow vulnerabilities in compiled C and C++ code, primarily through compiler flags that instrument protective code during compilation. These features cover stack protection, position-independent execution for address space randomization, control-flow protection, and fortified library functions, enabling developers to enhance security without modifying source code. Early efforts included basic stack guards via patches applied to GCC 2.95 around 1999, which laid the groundwork for more robust protections in later releases. GCC's stack protection, activated via the -fstack-protector flag, inserts a random canary value—a secret placed between local buffers and the function's return address—into vulnerable functions to detect overflows at runtime. Introduced in GCC 4.1 in 2006, this feature protects functions containing local arrays larger than 8 bytes or calls to alloca, verifying the canary's integrity before function exit and aborting execution if it was altered. The StackGuard lineage from which this implementation descends also explored random XOR canaries, which combine the guard with control data such as the return address, increasing resilience against partial overwrites by varying the effective value per function frame. Enhanced variants include -fstack-protector-strong (added in GCC 4.9), which extends protection to functions with local arrays or frame address references even without large buffers, and -fstack-protector-all, which instruments every function regardless of content, trading performance for broader coverage. These options typically incur a 1-5% runtime overhead, depending on code complexity, while significantly reducing the success rate of stack-smashing exploits against vulnerable binaries.
To support Address Space Layout Randomization (ASLR), GCC generates position-independent executables (PIE) using the -fPIE flag (or -fpie, its smaller-model counterpart, analogous to -fpic versus -fPIC), producing relocatable code that the linker can load at randomized base addresses via the -pie option. This prevents attackers from predicting memory layouts for exploits such as return-oriented programming. The GNU linker (ld) complements this with --hash-style=gnu, enabling the .gnu.hash section for faster symbol resolution in PIE binaries, which improves startup performance under ASLR without compromising security. For control-flow protection, GCC provides the -fcf-protection flag, introduced in GCC 8 in 2018, which emits Intel CET landing-pad instructions (ENDBR) so that indirect branches and calls can only target valid entry points, and enables hardware shadow-stack checking of return addresses on supporting processors, detecting and preventing diversions to unauthorized code gadgets with minimal overhead. Additionally, the -D_FORTIFY_SOURCE macro enables bounds-checked variants of standard libc functions (e.g., memcpy, strcpy, printf), replacing unsafe calls with fortified versions that perform runtime size validations using GCC builtins like __builtin_object_size. Defined at level 1 or 2 during compilation (with optimization at -O1 or higher), it aborts on detected overflows, catching common errors in buffer-handling code; level 2 adds checks for unsafe but standards-conforming uses, such as format string exploits. Introduced in glibc 2.3.4 and integrated with GCC, this feature has been a staple of hardening since the early 2000s. Developers can combine these flags for comprehensive protection, for example gcc -O2 -fstack-protector-strong -D_FORTIFY_SOURCE=2 -fPIE -pie -fcf-protection=full program.c -o program, which instruments stack canaries, fortified library calls, PIE for ASLR, and CET-based control-flow checks, reducing buffer overflow vulnerabilities in C/C++ binaries on Linux and Unix-like systems.
This configuration exemplifies GCC's role in open-source ecosystems, where such flags are often enabled by default in mainstream Linux distributions for enhanced security.

Microsoft Visual Studio

Microsoft Visual Studio provides several built-in protections against buffer overflows through compiler flags, runtime libraries, and integration with Windows security features, primarily targeting C and C++ code in Windows environments. The /GS flag, introduced in Visual Studio .NET 2002, implements buffer security checks by inserting a randomly generated security cookie (canary) on the stack frame before the return address and certain parameters. This cookie is verified at function exit; if altered due to a buffer overrun, the program terminates to prevent exploitation. The cookie is generated per-process and stored in a thread-local location, making it unpredictable for attackers without process access. Enhancements to /GS continued in subsequent versions. In Visual Studio 2005, parameter shadowing was added to protect vulnerable function parameters from overflows, extending coverage beyond just return addresses. This evolved further with optimizations in later releases, such as improved heuristics in Visual Studio 2010 to broaden protection scope and reduce performance overhead. By Visual Studio 2017, integration with Control Flow Guard (CFG) via the /guard:cf flag provided CFI-like protections, validating indirect calls at runtime against a table of valid targets compiled into the binary, mitigating control-flow hijacking often enabled by buffer overflows. Developers enable these via project properties under C/C++ > Code Generation > Buffer Security Check for /GS, or Linker > Advanced > Control Flow Guard for /guard:cf, with default enabling in many configurations for enhanced exploit resistance in both native C++ and .NET interop scenarios. Visual Studio integrates Data Execution Prevention (DEP) to mark stack and heap regions as non-executable, preventing execution of code injected through buffer overflows. This is achieved through the /NXCOMPAT linker flag, introduced with Visual Studio 2005, which signals compatibility with Windows DEP, automatically applying no-execute permissions to protected memory pages.
Configuration occurs in project settings under Linker > All Options > Data Execution Prevention Support, ensuring executables leverage hardware-enforced DEP on supported processors to block shellcode execution from data pages. For additional runtime detection, Visual Studio 2019 (version 16.9) introduced AddressSanitizer via the /fsanitize=address compiler option, ported from Google's implementation to detect stack and heap overflows, use-after-free, and other memory errors at moderate runtime cost. This tool instruments code to shadow memory allocations and reports violations at runtime, and is enabled through project properties under C/C++ > All Options > Enable AddressSanitizer. Complementing this, the SafeInt library, shipped with Visual Studio since 2010, prevents integer overflows in arithmetic operations that could lead to buffer size miscalculations, using templated classes such as SafeInt for bounds-checked computations that throw exceptions on overflow. These features collectively strengthen buffer overflow mitigations in C++ projects, reducing the vulnerability surface of Windows applications.

Clang and LLVM

Clang and LLVM provide a suite of integrated tools for buffer overflow protection, leveraging the modular LLVM intermediate representation (IR) for advanced static and dynamic analysis across multiple platforms. These protections emphasize runtime sanitizers and compiler flags that enable developers to detect and mitigate memory errors, including buffer overflows, during development and deployment. Key features include AddressSanitizer for memory access validation, stack canaries for local buffer safeguards, and Control-Flow Integrity (CFI) to prevent the control-flow hijacking that often follows an overflow. AddressSanitizer (ASan), introduced in LLVM 3.1 in 2012, is a prominent memory error detector that identifies buffer overflows on the stack, heap, and globals by instrumenting code at compile time and pairing it with a runtime library. It employs shadow memory—a compressed mapping of the address space in which each byte of shadow memory describes eight bytes of application memory—to track allocated regions and detect out-of-bounds accesses. For instance, accesses beyond buffer limits trigger immediate reports, enabling early bug detection with an average runtime slowdown of approximately 2x. ASan has been functional on supported platforms since its inception and is enabled in Clang via the -fsanitize=address flag. Clang's stack protector mechanism guards against stack-based buffer overflows by inserting canaries—random values placed between local buffers and the return address—into vulnerable functions. The -fstack-protector-all flag applies this protection universally to all functions, using randomized canary values to thwart prediction attacks. Upon function exit, the generated code verifies the canary; any corruption due to overflow causes the program to abort, preventing exploitation. This feature, inherited and enhanced from earlier compiler traditions, operates with negligible overhead in most cases and is enabled through standard Clang command-line options. Control-Flow Integrity (CFI) in Clang enforces valid control transfers to mitigate exploits that redirect execution following a buffer overflow.
Enabled via -fsanitize=cfi, it provides fine-grained protection, particularly for indirect function calls and virtual calls, by generating jump tables for function pointers and validating targets against type-safe sets using bit vectors or interleaved virtual tables. For indirect calls, CFI enforces alignment and range checks on jump table entries, reducing the attack surface for control-flow hijacking. This implementation, part of Clang's sanitizer framework, supports cross-module operation experimentally and integrates with LLVM's type metadata for precise enforcement. Complementing these, UndefinedBehaviorSanitizer (UBSan) detects bounds violations through compile-time instrumentation, focusing on out-of-bounds accesses with -fsanitize=bounds (including suboptions like array-bounds for static checks). It instruments indexing operations to trap invalid accesses at runtime, aiding overflow prevention without the full overhead of AddressSanitizer. Additionally, Clang supports hardware-accelerated protection via pointer tagging, as in Hardware-assisted AddressSanitizer (HWASan), which leverages AArch64's top-byte-ignore feature to embed tags in the high bits of pointers for probabilistic spatial safety checks on memory accesses, allowing efficient detection of overflows with low false positives on compatible hardware. The modular design of Clang and its sanitizers facilitates adoption in large-scale projects, such as Google's Chrome browser, where CFI and related mitigations are routinely enabled for security hardening, and Android, which integrates multiple sanitizers for runtime bug detection in its kernel and userland components. Originating in LLVM 3.1, these tools have evolved to support cross-platform development, offering developers flexible, low-overhead options for robust buffer overflow mitigation.

Other Compilers and Languages

IBM XL C/C++ compilers incorporate stack protection mechanisms similar to those in GCC, using the -qstackprotect option (analogous to -fstack-protector) to insert canary values between local buffers and control data on the stack, thereby detecting overflows at runtime. On PowerPC architectures, these compilers can pair the canaries with processor features that allow efficient verification of stack integrity without significant performance overhead. Additionally, the -qcheck option enables bounds checking for arrays and pointers, inserting explicit validations to prevent out-of-bounds accesses during execution. The Intel C++ Compiler (ICC), now part of the oneAPI DPC++/C++ compiler, provides buffer overflow protection through the /GS flag, which generates code to detect stack-based overruns by placing security cookies adjacent to return addresses and verifying them before function returns, ensuring compatibility with Visual Studio's implementation. This compiler also offers link-time optimization (/Qipo), whose whole-program analysis complements address space layout randomization (ASLR) by allowing code and data layouts to be rearranged at link time, reducing the predictability of memory addresses for potential exploits. Among languages designed for safety, Rust's borrow checker enforces memory safety at compile time by tracking ownership, borrowing, and lifetimes of references, so that attempts to use deallocated memory are rejected as compilation errors without requiring a garbage collector, while slice operations in Rust's core library add runtime bounds checks that prevent buffer overflows. Similarly, the Java Virtual Machine (JVM) mandates bounds checking for array accesses via instructions like iaload, which throw ArrayIndexOutOfBoundsException if an index exceeds the array's limits, a feature inherent since Java's initial release in 1995.
Fail-Safe C, a memory-safe dialect of C developed in the early 2000s, introduces runtime checks through fat pointers that embed bounds information, automatically validating all pointer dereferences and arithmetic to detect and prevent buffer overflows while maintaining compatibility with standard C semantics. This approach instruments compiled programs to enforce spatial and temporal safety, disallowing unsafe operations like unchecked writes at runtime, with modest performance impact for safe programs. At the hardware level, StackGhost implements kernel-protected stack canaries for OpenBSD on SPARC processors, utilizing the architecture's register windows to conceal and verify return addresses transparently across all user processes without modifying applications or binaries. Introduced in 2001, this mechanism embeds randomized values in hidden hardware registers, detecting overflows by comparing the values upon return and terminating the process if tampering is found, providing system-wide protection against stack-smashing attacks.

References

  1. [1]
    CWE-121: Stack-based Buffer Overflow
    Use automatic buffer overflow detection mechanisms that are offered by certain compilers or compiler extensions. ... Chapter 5, "Protection Mechanisms", Page 189.
  2. [2]
    Buffer Overflow - OWASP Foundation
    A buffer overflow condition exists when a program attempts to put more data in a buffer than it can hold or when a program attempts to put data in a memory area ...
  3. [3]
    Secure by Design Alert: Eliminating Buffer Overflow Vulnerabilities
    Feb 12, 2025 · This Alert outlines proven methods to prevent or mitigate buffer overflow vulnerabilities based on secure by design principles and software development best ...
  4. [4]
    'Strong' stack protection for GCC - LWN.net
    The basic idea behind stack protection is to push a "canary" (a randomly chosen integer) on the stack just after the function return pointer has been pushed.
  5. [5]
    /GS (Buffer Security Check) | Microsoft Learn
    May 29, 2025 · Causing a buffer overrun is a technique used by hackers to exploit code that does not enforce buffer size restrictions.
  6. [6]
    [PDF] Real-World Buffer Overflow Protection for Userspace & Kernelspace
    The specific contributions of this work are: We present the first DIFT policy for buffer overflow prevention that runs on stripped, unmodified bina- ries, ...
  7. [7]
    4.15. Protecting against buffer overflows - Debian
    Kernel patches related to buffer overflows include the Openwall patch provides protection against buffer overflows in 2.2 linux kernels.
  8. [8]
    [PDF] Real-World Buffer Overflow Protection for Userspace & Kernelspace
    Kernel buffer overflows are especially potent as they can override any protection mechanisms, such as Solaris jails or SELinux access con- trols. Remotely ...
  9. [9]
    What is a Buffer Overflow | Attack Types and Prevention Methods
    Developers can protect against buffer overflow vulnerabilities via security measures in their code, or by using languages that offer built-in protection. In ...
  10. [10]
    Buffer Overflow Attack - OWASP Foundation
    Buffer overflows can consist of overflowing the stack [Stack overflow] or overflowing the heap [Heap overflow]. We don't distinguish between these two in ...Missing: kernel | Show results with:kernel
  11. [11]
    What Is Buffer Overflow? Attacks, Types & Vulnerabilities | Fortinet
    One of the most common methods for preventing buffer overflows is avoiding standard library functions that have not been bounds-checked, which includes gets, ...<|separator|>
  12. [12]
    Windows Kernel Buffer Overflow - White Knight Labs
    Mar 31, 2025 · A buffer overflow happens when more data is written to a buffer than it can hold, causing it to overflow into adjacent memory. Example: Imagine ...
  13. [13]
    [PDF] The Morris worm: A fifteen-year perspective - UMD CS
    Today, the Morris worm is remembered as the first of many such attacks, as what might have been a wake-up call to system administrators and security researchers ...<|separator|>
  14. [14]
    Morris Worm - fingerd Stack Buffer Overflow (Metasploit) - Exploit-DB
    Nov 6, 2018 · This module exploits a stack buffer overflow in fingerd on 4.3BSD. This vulnerability was exploited by the Morris worm in 1988-11-02.
  15. [15]
    [PDF] Smashing The Stack For Fun And Profit Aleph One
    This paper attempts to explain what buffer overflows are, and how their exploits work. Basic knowledge of assembly is required. An understanding of virtual ...Missing: seminal | Show results with:seminal
  16. [16]
    Everything about Buffer Overflows | Blog - Code Intelligence
    Simple buffer overflow example​​ If an attacker inputs more than 10 characters, the `strcpy` function will overflow the buffer allocated on the stack and ...What is buffer overflow? · Types of buffer overflow · Buffer overflow example...
  17. [17]
    Protecting Pointers From Buffer Overflow Vulnerabilities - USENIX
    This paper presents PointGuard, a compiler technique to defend against most kinds of buffer overflows by encrypting pointers when stored in memory.
  18. [18]
    On the effectiveness of address-space randomization
    The idea is to introduce artificial diversity by randomizing the memory location of certain system components. This mechanism is available for both Linux (via ...
  19. [19]
    [PDF] Protection against overflow attacks
    Direct buffer overflow attacks use direct mechanisms to modify a program counter bound address. The indirect buffer overflow attacks, on the other hand, use.
  20. [20]
    [PDF] Memory Corruption Attacks The (almost) Complete History
    Jun 25, 2010 · 10/21/1999 - “Advanced Buffer Overflow Exploits”. Taeh Oh publishes "Advanced Buffer Overflow Exploit" on advanced buffer overflows [18]. It ...
  21. [21]
  22. [22]
    [PDF] A Dynamic Mechanism for Recovering from Buffer Overflow Attacks
    Our tests resulted in fixing 14 out of 17 “fixable” buffer overflow vulnerabilities, a 82% success rate. The remaining 14 packages in the CoSAK suite were ...Missing: metrics | Show results with:metrics
  23. [23]
    None
    ### Summary of StackGuard Paper (https://www.usenix.org/legacy/publications/library/proceedings/sec98/full_papers/cowan/cowan.pdf)
  24. [24]
    OpenBSD 3.3
    May 1, 2003 · With this change, function prologues are modified to rearrange the stack: a random canary is placed before the return address, and buffer ...
  25. [25]
    [PDF] defeating compiler- level buffer overflow protection - USENIX
    StackGuard will prevent a generic buffer overflow attack against this code. ... The use of compiler-level stack protection, as in StackGuard and SSP, along with.
  26. [26]
    [PDF] StackGuard: A Historical Perspective - Washington
    Why Are We So Vulnerable To. Something So Trivial? • Why are we so vulnerable to something so trivial? – Because C chose to represent strings as null.
  27. [27]
    An In-Depth Survey of Bypassing Buffer Overflow Mitigation ... - MDPI
    The current work aims to describe the stack-based buffer overflow vulnerability and review in detail the mitigation techniques reported in the literature.Missing: seminal | Show results with:seminal<|control11|><|separator|>
  28. [28]
  29. [29]
    [PDF] Four different tricks to bypass StackShield and StackGuard protection
    Stack shielding technologies have been developed to protect programs against exploitation of stack based buffer overflows. Among different types of protections, ...
  30. [30]
    [PDF] PESC: A Per System-Call Stack Canary Design for Linux Kernel
    ARM64 (a.k.a., AArch64) Linux kernel v4.19 uses one single global canary variable __stack_chk_guard for kernel stacks of all processes. Canary initialize: ARM64 ...<|control11|><|separator|>
  31. [31]
  32. [32]
    Security Technologies: Stack Smashing Protection (StackGuard)
    Aug 20, 2018 · StackGuard basically works by inserting a small value known as a canary between the stack variables (buffers) and the function return address.Missing: 80-90% | Show results with:80-90%
  33. [33]
    Stack Canaries – Gingerly Sidestepping the Cage - SANS Institute
    Feb 4, 2021 · Stack canaries were invented to prevent buffer overflow (BOF) vulnerabilities from being exploited. This BOF is the root problem that needs ...Missing: 80-90% | Show results with:80-90%
  34. [34]
    [PDF] A Comparison of Buffer Overflow Prevention Implementations and ...
    Nov 4, 2003 · This paper aims to explain the concepts behind buffer overflow protection software and implementation details of some of the more popular ...
  35. [35]
    [PDF] CSC 591 Systems Attacks and Defenses Stack Canaries & ASLR
    Random XOR Canary – The random canary concept was extended in StackGuard version 2 to provide slightly more protection by performing a XOR operation on the ...<|control11|><|separator|>
  36. [36]
    404 Not Found
    No readable text found in the HTML.<|control11|><|separator|>
  37. [37]
    Efficient and effective array bound checking - ACM Digital Library
    Array bound checking refers to determining whether all array references in a program are within their declared ranges. This checking is critical for ...
  38. [38]
    strlcpy and strlcat – consistent, safe, string copy and concatenation.
    The strlcpy() and strlcat() functions return the total length of the string they tried to create. For strlcpy() that is simply the length of the source; for ...How Do Strlcpy() And... · What Strlcpy() And Strlcat()... · Who Uses Strlcpy() And...<|separator|>
  39. [39]
    [PDF] WG14 N2660 Title: Improved Bounds Checking for Array Types Author
    Feb 13, 2021 · Array types with static or dynamic bound can be used instead of pointers for safe programming because compilers can use length information ...
  40. [40]
    Security Features in the CRT | Microsoft Learn
    Jun 18, 2025 · The secure functions don't prevent or correct security errors. Instead, they catch errors when they occur. They do extra checks for error conditions.Missing: bounds | Show results with:bounds
  41. [41]
    ArrayIndexOutOfBoundsException (Java Platform SE 8 )
    Thrown to indicate that an array has been accessed with an illegal index. The index is either negative or greater than or equal to the size of the array.
  42. [42]
    Pascal bounds checking (standard) - Stack Overflow
    Apr 29, 2016 · Whether the compiler enforces bounds checking is implementation specific, and is not specified in the standard.Checking to see if a number is within a range in free pascalWhy is bounds checking not implemented in some of the languages?More results from stackoverflow.com
  43. [43]
    [PDF] CCured: Type-Safe Retrofitting of Legacy Software - People @EECS
    The main contribution of this paper is the CCured type system, a refinement of the C type system with separate pointer kinds for different pointer usage modes.
  44. [44]
    [PDF] Backwards-Compatible Array Bounds Checking for C with Very Low ...
    This paper addresses the problem of enforcing correct us- age of array and pointer references in C and C++ programs. This remains an unsolved problem ...
  45. [45]
    CCured: type-safe retrofitting of legacy code - ACM Digital Library
    In this paper we propose a scheme that combines type inference and run-time checking to make existing C programs type safe.
  46. [46]
    [PDF] CCured in the Real World - Scott McPeak
    Such a pointer requires bounds checks and is instrumented to carry with it information about the bounds of the memory area to which it is supposed to point.
  47. [47]
    PaX ASLR (Address Space Layout Randomization)
    ... ASLR at all). 2. Implementation PaX can apply ASLR to tasks that are created from ELF executables and use ELF libraries. The randomized layout is determined ...Missing: 2003 | Show results with:2003
  48. [48]
    [PDF] On the Effectiveness of Address-Space Randomization
    Address-space randomization is a technique used to fortify systems against buffer overflow attacks. The idea is to introduce artificial diversity by randomizing ...Missing: seminal | Show results with:seminal
  49. [49]
    [PDF] An Analysis of Address Space Layout Randomization on Windows ...
    This paper presents our measurements and discusses our measurement techniques. Our analysis uncovers some flaws that reduce the effectiveness of Vista's ASLR.Missing: seminal | Show results with:seminal
  50. [50]
    [PDF] Breaking Kernel Address Space Layout Randomization with Intel TSX
    ASLR is a comprehensive, popular defense mechanism that mitigates memory corruption attacks in a probabilistic manner. To exploit a memory corruption ...
  51. [51]
    [PDF] ROP is Still Dangerous: Breaking Modern Defenses - USENIX
    Aug 20, 2014 · One common defense for ROP attacks is ASLR which works by randomly moving the segments of a program (includ- ing the text segment) around in ...
  52. [52]
    [PDF] Exploiting Linux and PaX ASLR's weaknesses on 32 - Black Hat
    Apr 5, 2016 · Next (figure 7) is compares the entropy of ASLR-NG, PaX ASLR for 64-bit systems. ASLR-NG has been configured to work in conservative mode and ...
  53. [53]
    Control-Flow Integrity - Principles, Implementations, and Applications
    Nov 1, 2005 · The enforcement of a basic safety property, Control-Flow Integrity (CFI), can prevent such attacks from arbitrarily controlling program behavior ...
  54. [54]
    [PDF] Enforcing Forward-Edge Control-Flow Integrity in GCC & LLVM
    We encountered this problem in ChromeOS with the. Chrome browser. There are two third-party libraries which are built without VTV and are distributed to.
  55. [55]
    [PDF] Control-Flow Integrity: Precision, Security, and Performance
    The goal of Control-Flow Integrity (CFI) [Abadi et al. 2005a] is to restrict the set of possible control-flow transfers to those that are strictly required for ...
  56. [56]
    Control Flow Integrity — Clang 22.0.0git documentation - LLVM
    Clang includes an implementation of a number of control flow integrity (CFI) schemes, which are designed to abort the program upon detecting certain forms ...
  57. [57]
    How security flaws work: The buffer overflow - Ars Technica
    Aug 25, 2015 · Hardware supporting NX has been mainstream since 2004, when Intel introduced the Prescott Pentium 4, and operating system support for NX has ...
  58. [58]
    [PDF] Data Execution Prevention
    Data Execution Prevention (DEP) is a set of hardware and software technologies that checks memory to protect against malicious code and viruses.
  59. [59]
    [PDF] Kernel W^X Improvements In OpenBSD
    Oct 18, 2014 · W^X is a memory protection policy where memory cannot be both writable and executable. OpenBSD kernel improvements focus on correctness, not ...
  60. [60]
    Data Execution Prevention - Win32 apps - Microsoft Learn
    May 1, 2023 · Data Execution Prevention (DEP) is a memory protection feature that marks memory as non-executable, preventing code from running from data ...
  61. [61]
    x86 NX support - LWN.net
    Jun 2, 2004 · The NX bit only works when the processor is running in the PAE mode. Most x86 Linux systems currently do not run in that mode; it is normally ...
  62. [62]
    Defeating Windows DEP With A Custom ROP Chain | NCC Group
    Jun 12, 2023 · Build and execute shellcode (e.g., a reverse shell) using just the ROP gadgets. Disable DEP and then jump to the shellcode address that is now ...
  63. [63]
    [PDF] PaX: Twelve Years of Securing Linux - grsecurity
    Oct 10, 2012 · Non-executable memory pages, etc. PaX: Twelve Years of Securing Linux.
  64. [64]
    [PDF] e-NeXSh: Achieving an Effectively Non-Executable Stack and Heap ...
    The technique is simple and lightweight, demonstrating no measurable overhead for select UNIX utilities, and a negligible 1.55% performance impact on the ...
  65. [65]
    Buffer Overflow: Code Execution By Shellcode Injection - hg8.sh
    Oct 28, 2023 · In this article we will detail how to exploit a buffer overflow in order to achieve remote code execution via shellcode injection.
  66. [66]
    Top Byte Ignore For Fun and Memory Savings | Blog - Linaro
    Feb 8, 2023 · Top Byte Ignore (TBI) is a feature of Armv8-a AArch64 that allows software to use unused pointer bits to store data without having to hide them from the ...
  67. [67]
    Introduction to PAC - Armv8.1-M PACBTI Extensions
    One of the key uses of the PAC feature is to detect corruption of return addresses in function calls. Because the return address is often stored on the ...
  68. [68]
    [PDF] Pointer Authentication on ARMv8.3 - Qualcomm
    The pointer authentication scheme introduced by ARM is a software security primitive that makes it much harder for an attacker to modify protected pointers ...
  69. [69]
    [PDF] The CHERI capability model: Revisiting RISC in an age of risk
    In this paper, we introduce Capability Hardware Enhanced. RISC Instructions (CHERI), a hybrid capability model that blends conventional ISA and MMU design ...
  70. [70]
    CHERI Frequently Asked Questions (FAQ)
    Many memory-based attacks on contemporary hardware-software designs rely on corrupting pointers or lengths. Tags provide strong pointer-integrity guarantees ...
  71. [71]
    Preparing your app to work with pointer authentication
    Overview. The arm64e architecture introduces pointer authentication codes (PACs) to detect and guard against unexpected changes to pointers in memory.
  72. [72]
    [PDF] PAC it up: Towards Pointer Integrity using ARM Pointer Authentication
    We evaluate the security and practicality of PARTS to demonstrate its effectiveness against memory corruption attacks. Our main contributions are: • Analysis: A ...
  73. [73]
    Hiroaki Etoh - gcc stack-smashing protector (for gcc-2.95.3)
    Jun 28, 2001 · From: Hiroaki Etoh <etoh at trl dot ibm dot ... propolice as a stack protection method */ extern int flag_propolice_protection; #endif /* ...
  74. [74]
    Optimize Options - Using the GNU Compiler Collection (GCC)
    Documents the `-fstack-protector` option in the GCC 4.1 optimize options reference.
  75. [75]
    StackGuard+: Interoperable alternative to ...
    Oct 7, 2024 · By verifying if this canary value has been altered, StackGuard can effectively detect buffer overflow attacks. Since GCC version 4.1, StackGuard ...
  76. [76]
    Instrumentation Options (Using the GNU Compiler Collection (GCC))
    Describes the `-fstack-protector` option and its variants.
  77. [77]
    Code Gen Options (Using the GNU Compiler Collection (GCC))
    Covers `-fPIE` and position-independent code as they relate to ASLR.
  78. [78]
    GCC 6 Release Series — Changes, New Features, and Fixes
    Sep 29, 2025 · The GCC 6 release series includes a much improved implementation of the OpenACC 2.0a specification. Highlights are:
  79. [79]
    Source Fortification (The GNU C Library)
    Describes the `_FORTIFY_SOURCE` macro in the GNU C Library manual.
  80. [80]
    _FORTIFY_SOURCE | MaskRay
    Nov 5, 2022 · glibc 2.3.4 introduced _FORTIFY_SOURCE in 2004 to catch security errors due to misuse of some C library functions.
  81. [81]
    Use compiler flags for stack protection in GCC and Clang
    Jun 2, 2022 · This article discusses the major stack protection mechanisms in the GNU Compiler Collection (GCC) and Clang, typical attack scenarios, and the compiler's ...
  82. [82]
    GS - C++ Team Blog
    Mar 19, 2009 · The GS switch was first provided in Visual Studio .NET 2002. It detects certain kinds of stack buffer overruns and terminates the process at ...
  83. [83]
    Security Briefs: Protecting Your Code with Visual C++ Defenses
    Stack-based buffer overrun detection is the oldest and most well-known defense available in Visual C++. The goal of the /GS compiler flag is simple: reduce the ...
  84. [84]
    Enhanced GS in Visual Studio 2010 - Microsoft
    Mar 20, 2009 · Visual Studio 2010 introduces an enhanced GS heuristic that provides significant security improvements by increasing the scope of GS protection ...
  85. [85]
    /guard (Enable Control Flow Guard) | Microsoft Learn
    Oct 3, 2025 · The /guard:cf option causes the compiler to analyze control flow for indirect call targets at compile time, and inserts code at runtime to verify the targets.
  86. [86]
    NXCOMPAT (Compatible with Data Execution Prevention)
    Sep 22, 2022 · Describes the Microsoft C/C++ (MSVC) /NXCOMPAT linker option, which marks an executable as compatible with Data Execution Prevention (DEP).
  87. [87]
    Appendix F: SDL Requirement: No Executable Pages | Microsoft Learn
    May 21, 2012 · All binaries must link with /NXCOMPAT flag (and not link with /NXCOMPAT:NO) using the linker included with Visual Studio 2005 and later.
  88. [88]
    AddressSanitizer | Microsoft Learn
    AddressSanitizer, originally introduced by Google, provides runtime bug-finding technologies that use your existing build systems and existing test assets ...
  89. [89]
    SafeInt Library | Microsoft Learn
    Aug 3, 2021 · SafeInt is a portable library that can be used with MSVC, GCC or Clang to help prevent integer overflows that might result when the application performs ...
  90. [90]
    Clang Compiler User's Manual — Clang 22.0.0git documentation
    This document describes important notes about using Clang as a compiler for an end-user, documenting the supported features, command line options, etc.
  91. [91]
    AddressSanitizer — Clang 22.0.0git documentation - LLVM
    AddressSanitizer is a fast memory error detector. It consists of a compiler instrumentation module and a run-time library. The tool can detect the following ...
  92. [92]
  93. [93]
    LLVM 3.1 Release Notes
    May 15, 2012 · LLVM 3.1 includes several major changes and big features: AddressSanitizer, a fast memory error detector. MachineInstr Bundles, Support to ...
  94. [94]
    Control Flow Integrity Design Documentation - Clang - LLVM
    This page documents the design of the Control Flow Integrity schemes supported by Clang. Forward-Edge CFI for Virtual Calls
  95. [95]
    UndefinedBehaviorSanitizer — Clang 22.0.0git documentation - LLVM
    UndefinedBehaviorSanitizer (UBSan) is a fast undefined behavior detector. UBSan modifies the program at compile-time to catch various kinds of undefined ...
  96. [96]
    Hardware-assisted AddressSanitizer Design Documentation - Clang
    AArch64 has Address Tagging (or top-byte-ignore, TBI), a hardware feature that allows software to use the 8 most significant bits of a 64-bit pointer as a tag.
  97. [97]
    Sanitizers - Fuchsia
    Mar 22, 2025 · Fuzzers are similar to sanitizers in that they attempt to expose bugs in the code at runtime, and they are usually used in conjunction.
  98. [98]
    -fstack-protector (-qstackprotect) - IBM
    The -fstack-protector option protects against stack corruption by generating extra code, but it is disabled by default due to performance degradation.
  99. [99]
    GS - Intel
    The compiler does not detect buffer overruns. This option tells the compiler to provide full stack security level checking. This option has been ...