
Memory protection

Memory protection is a core mechanism in operating systems that enforces isolation between different processes by restricting access to memory regions, preventing unauthorized reads, writes, or executions that could compromise system stability or security. This feature ensures that each process operates within its own address space, shielding the operating system kernel, other applications, and critical data from interference by faulty or malicious code. The primary importance of memory protection lies in its role in maintaining system integrity and multitasking reliability, as it mitigates risks such as buffer overflows, rogue processes corrupting shared resources, or one application crashing the entire system. By implementing strict access controls, it enables safe concurrent execution of multiple programs, a foundational requirement for contemporary computing environments ranging from desktops to embedded devices. Without effective memory protection, early systems like MS-DOS suffered frequent crashes from unchecked memory accesses, highlighting why hardware-supported protection, introduced in the 1970s, became a critical advancement.

Memory protection is typically implemented through hardware components such as the memory management unit (MMU) or memory protection unit (MPU), which work in tandem with operating system software to translate virtual addresses to physical ones while enforcing permissions. Key techniques include virtual memory with page tables that define access rights (e.g., read-only, read-write, or execute-disabled) for memory pages, and segmentation for finer-grained control over code, data, and stack regions. In privileged modes, the kernel enjoys full access, while user-mode processes face restrictions, with violations triggering faults such as segmentation faults that halt errant operations. Advanced variants, such as protection keys, allow dynamic adjustments to these permissions without altering page tables, further enhancing flexibility in systems like Linux on supported architectures.

Fundamentals

Definition and Purpose

Memory protection is a core mechanism in computing systems designed to isolate processes or users, preventing them from accessing memory regions not allocated to them. This isolation is enforced by hardware components, such as the memory management unit (MMU), which maps virtual addresses to physical ones and validates access attempts, in conjunction with software policies implemented by the operating system. Each process operates within a dedicated address space, comprising the virtual addresses it is permitted to reference, ensuring that attempts to access unauthorized locations trigger exceptions handled by the OS.

The primary purpose of memory protection is to safeguard system stability and security in multitasking environments by blocking erroneous or malicious memory accesses, such as buffer overflows that could corrupt adjacent data or exploit attempts that seek to execute unauthorized instructions. It prevents a misbehaving process from damaging the operating system or interfering with other processes' data, thereby maintaining overall resource integrity and enabling reliable concurrent execution. Systems typically enforce this through privileged (e.g., kernel) and unprivileged (e.g., user) modes, where sensitive operations require elevated privileges to access protected regions.

Key benefits include fault isolation, which confines errors or crashes within a single process without propagating to others; data confidentiality, achieved by restricting unauthorized reads; and data integrity, upheld by controlling writes to prevent corruption in shared-memory scenarios. Memory regions are further protected by granular permissions, such as read-only, read-write, or execute-only, which the MMU checks against each access to enforce these guarantees. Virtual memory acts as a foundational enabler, abstracting physical memory to provide these isolated address spaces efficiently.

Historical Development

In the 1950s and early 1960s, early computer systems operated without memory protection mechanisms, allowing user programs unrestricted access to the entire physical memory and risking system crashes or data corruption from errant code. This lack of isolation stemmed from the era's focus on single-user, non-sharing environments dedicated to scientific computations, where multiprogramming was absent. By the mid-1960s, the push for time-sharing systems exposed these vulnerabilities, leading to the development of Multics, a collaborative project between MIT, Bell Labs, and General Electric starting in 1964. Under the leadership of Fernando Corbató at MIT, Multics introduced pioneering ring-based protection in 1969 on the GE-645 (later rebranded by Honeywell after its 1970 acquisition of GE's computer division), using hierarchical protection rings to enforce access controls and segmenting memory to isolate user processes from the kernel and each other.

The 1970s saw memory protection evolve with the rise of minicomputers and portable operating systems. At Bell Labs, Ken Thompson and Dennis Ritchie developed Unix starting in 1969 on the PDP-7, initially without robust protection, but ported it to the PDP-11 in 1971, where hardware memory-management support enabled initial multiprogramming. By 1973, a kernel rewrite in C consolidated support for hardware-enforced address spaces, allowing processes to operate in isolation while sharing physical memory efficiently, influenced by Multics but simplified for practicality. This adaptation prioritized usability in research environments, marking Unix as a foundational system for modern protected multitasking.

Widespread adoption accelerated in the 1980s and 1990s with personal computing. Intel's 80386 processor, released in 1985, extended the protected mode of the 80286 with 32-bit addressing, supporting up to 4 GB of virtual address space per process and multilevel protection rings to safeguard the OS from user applications. Operating systems like Windows NT, launched in 1993, leveraged this hardware for demand-paged virtual memory, enforcing strict isolation between processes and the kernel to enable secure multi-user environments on desktops.

Into the 2000s, enhancements addressed virtualization and exploit mitigation. Intel introduced VT-x in 2005, adding hardware virtualization extensions to x86 for efficient guest isolation, allowing guest OSes to run with nested paging and reduced overhead in hypervisors. Concurrently, address space layout randomization (ASLR), first implemented in OpenBSD version 3.4 in 2003 and evaluated for its effectiveness in a 2004 study, randomized the layout of code, stack, and libraries in a process's address space to thwart memory-corruption attacks, gaining adoption in Linux and later mainstream OSes.

Core Principles

Memory Isolation

Memory isolation is a core principle of memory protection in operating systems, providing logical separation of address spaces for individual processes to prevent unauthorized access or modification of memory belonging to other processes. This separation creates the illusion of a dedicated memory environment for each process, ensuring that computational activities remain confined and independent. By isolating address spaces, the system mitigates risks from faulty or malicious code, maintaining system stability and security.

High-level techniques for implementing memory isolation include base and limit registers, which define the starting point and extent of a process's allowable memory range (see the sketch below), and page tables, which map virtual addresses to physical locations while establishing boundaries. These mechanisms operate abstractly to enforce spatial separation without direct overlap between processes' memory regions. In multiprogramming systems, where multiple processes execute concurrently to maximize resource utilization, memory isolation is essential to prevent data leakage, corruption, or interference that could compromise the integrity of individual processes or the entire system. Without such isolation, a single process failure could cascade, halting operations or exposing sensitive information across the shared environment.

Memory isolation manifests in two primary types: physical isolation, enforced directly by hardware components such as memory management units that validate accesses at the circuit level, and logical isolation, managed by software through the configuration of protection data structures like segment descriptors or translation tables. Protection rings complement these by providing hierarchical privilege levels, restricting privileged operations to the most privileged ring (ring 0) for the kernel, with user processes in outer rings like ring 3.
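
To make the base-and-limit idea concrete, the following C sketch models the check a processor performs on each access under a simple relocation-register scheme; the types and field names are illustrative and not taken from any particular architecture.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical per-process bounds, loaded into base/limit registers on a
 * context switch in a real system. */
typedef struct {
    uint32_t base;   /* first physical address the process may use */
    uint32_t limit;  /* size of the allowed region in bytes */
} bounds_t;

/* Translate a process-relative (logical) address to a physical one,
 * rejecting anything outside the process's region. */
static bool translate(const bounds_t *b, uint32_t logical, uint32_t *physical) {
    if (logical >= b->limit) {
        return false;              /* isolation violation -> fault/trap */
    }
    *physical = b->base + logical; /* relocation within the allowed range */
    return true;
}

int main(void) {
    bounds_t proc = { .base = 0x40000, .limit = 0x10000 }; /* 64 KiB region */
    uint32_t phys;

    printf("0x0042  -> %s\n", translate(&proc, 0x0042, &phys)  ? "ok" : "fault");
    printf("0x20000 -> %s\n", translate(&proc, 0x20000, &phys) ? "ok" : "fault");
    return 0;
}
```

A real MMU performs this comparison in hardware on every access and raises a trap, rather than returning an error code, when the bound is exceeded.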

Access Control Models

Access control models in memory protection define the rules and mechanisms for granting or denying operations such as reading, writing, or executing on memory regions, ensuring that processes only access authorized portions of memory. These models provide an abstract framework for specifying permissions, which are then enforced by the operating system in conjunction with hardware. The foundational concept is the access matrix, introduced by Butler Lampson in 1971, which represents subjects (e.g., processes) as rows and objects (e.g., memory segments or pages) as columns, with entries specifying access rights like read (R), write (W), or execute (X). This matrix separates policy (what rights are allowed) from mechanism (how rights are checked), allowing flexible implementation in operating systems.

Discretionary Access Control (DAC) is a common model where the owner of a memory object—typically the process that allocated it—has the discretion to set and modify permissions for other subjects. In DAC, permissions are managed through access control lists (ACLs) or similar structures attached to memory descriptors, allowing the owner to grant R, W, or X rights based on user or group identities. For instance, in Unix-like systems, a process can use system calls like mprotect to adjust page permissions, subject to the owner's privileges. This model promotes user flexibility but relies on the owner's judgment, potentially leading to security risks if misconfigured. In contrast, Mandatory Access Control (MAC) enforces system-wide policies defined by administrators, overriding individual discretion; access decisions are based on security labels assigned to subjects and objects, such as sensitivity levels, ensuring consistent enforcement regardless of owner intent. MAC models, like those inspired by the Bell-LaPadula framework for confidentiality, apply labels to memory regions to prevent unauthorized information flows, such as a low-clearance process reading high-sensitivity data.

Permissions in these models are typically represented by bits in hardware-visible structures, such as page table entries (PTEs) in paged systems. The R/W/X flags indicate allowable operations: R for reading data, W for writing, and X for executing instructions, with the absence of X enabling no-execute protection to mitigate code-injection exploits by preventing code execution in data areas. These bits are set by the operating system during memory allocation or modification, reflecting the underlying DAC or MAC policy. Enforcement occurs through hardware-software cooperation: when a process attempts an unauthorized access, the memory management unit (MMU) detects the violation and generates a trap or interrupt, such as a page fault, which the operating system handler processes. This may result in denying the access, logging the event, or terminating the process via signals like segmentation fault (SIGSEGV) in POSIX systems. An illustrative example is the no-execute (NX) bit, first implemented in hardware by AMD in 2003 for the Opteron processor and by Intel in 2004 for Pentium 4 models supporting execute disable (XD), allowing operating systems to mark data pages as non-executable at the page level, akin to file permissions but applied dynamically to virtual memory regions.
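
The access-matrix model can be illustrated with a small sketch: a minimal reference monitor in C, using made-up RIGHT_* flag names and assumed subject/object indices, that checks a requested R/W/X operation against a static matrix before allowing it.

```c
#include <stdbool.h>
#include <stdio.h>

/* Illustrative rights bits, mirroring the R/W/X flags kept in page table
 * entries or ACLs; names are invented, not from any particular OS. */
enum { RIGHT_R = 1 << 0, RIGHT_W = 1 << 1, RIGHT_X = 1 << 2 };

#define N_SUBJECTS 2   /* e.g., two processes */
#define N_OBJECTS  3   /* e.g., code page, data page, shared buffer */

/* Access matrix: rows = subjects, columns = objects. */
static const unsigned char matrix[N_SUBJECTS][N_OBJECTS] = {
    /* code              data               shared    */
    {  RIGHT_R|RIGHT_X,  RIGHT_R|RIGHT_W,   RIGHT_R          },  /* proc 0 */
    {  0,                0,                 RIGHT_R|RIGHT_W  },  /* proc 1 */
};

/* Reference-monitor style check: every access is validated against the
 * matrix before being allowed to proceed. */
static bool access_allowed(int subject, int object, unsigned char requested) {
    return (matrix[subject][object] & requested) == requested;
}

int main(void) {
    printf("proc 0 execute code page:  %s\n",
           access_allowed(0, 0, RIGHT_X) ? "allowed" : "denied");
    printf("proc 1 write data page:    %s\n",
           access_allowed(1, 1, RIGHT_W) ? "allowed" : "denied");
    printf("proc 1 write shared page:  %s\n",
           access_allowed(1, 2, RIGHT_W) ? "allowed" : "denied");
    return 0;
}
```

Operating systems implement the same policy/mechanism split, but store the matrix sparsely—per-object ACLs for DAC, labels plus a policy engine for MAC—and delegate the per-access check to the MMU via permission bits.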

Hardware-Based Mechanisms

Segmentation

Segmentation is a hardware-based memory protection technique that divides a program's address space into variable-sized units known as segments, each corresponding to logical components such as code, data, or the stack. Each segment is defined by a base address in physical memory and a limit (or length), stored in segment registers or descriptor tables that the processor consults during address translation. When a program attempts a memory access, the processor translates the logical address—comprising a segment selector and an offset—by adding the offset to the segment's base and verifying that the offset does not exceed the segment's limit; violations raise a fault, such as a general protection exception, preventing unauthorized access. This approach ensures isolation by enforcing boundaries around each segment, allowing different protection attributes, such as read-only for code or read-write for data, to be applied independently.

In implementations like the Intel x86 architecture, segmentation relies on segment descriptors housed in tables such as the Global Descriptor Table (GDT) or Local Descriptor Table (LDT), which specify the base address, limit, access rights, and privilege levels for each segment. The processor uses six segment registers (CS for code, DS/ES/FS/GS for data, SS for stack) to hold selectors that index into these tables, enabling efficient context switching between segments during program execution. Access violations, including out-of-bounds offsets or disallowed operations (e.g., writing to a code segment), result in immediate hardware interrupts that the operating system can handle to enforce protection policies. This mechanism was particularly prominent in systems from the 1960s to the 1980s, originating in designs like the Multics operating system and evolving through processors such as the Intel 80286 and 80386.

The primary advantages of segmentation lie in its alignment with the natural structure of programs, where segments map directly to modules like procedures or global variables, facilitating intuitive organization and relocation without fixed boundaries. It also enables fine-grained protection by assigning distinct access controls to each segment type—for instance, execute-only permissions for code segments to prevent modification—enhancing resilience against overflows or erroneous writes. However, segmentation suffers from external fragmentation, as allocating variable-sized segments in physical memory leaves unusable gaps between them over time, potentially reducing effective memory utilization. Additionally, the management of segment descriptors and tables introduces overhead, including table lookups and updates during context switches, which contributed to its decline in favor of paging—a complementary fixed-size allocation method—for modern systems.
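
The translation and limit check described above can be sketched in C as follows; the descriptor layout is deliberately simplified (no privilege levels or granularity bits) and the segment names are illustrative rather than drawn from a real GDT.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Simplified segment descriptor; real x86 descriptors pack these fields
 * differently and add privilege levels, but the checks are analogous. */
typedef struct {
    uint32_t base;     /* segment start in physical memory */
    uint32_t limit;    /* segment length in bytes */
    bool     writable; /* data/stack segments: write allowed; code: not */
} segment_desc;

enum seg { SEG_CODE = 0, SEG_DATA = 1, SEG_STACK = 2, SEG_COUNT };

/* Translate (selector, offset) -> physical address, enforcing both the
 * limit check and the write permission, as the hardware would on each access. */
static bool seg_translate(const segment_desc table[], unsigned sel,
                          uint32_t offset, bool is_write, uint32_t *phys) {
    if (sel >= SEG_COUNT) return false;          /* invalid selector */
    const segment_desc *d = &table[sel];
    if (offset >= d->limit) return false;        /* general protection fault */
    if (is_write && !d->writable) return false;  /* write to read-only segment */
    *phys = d->base + offset;
    return true;
}

int main(void) {
    segment_desc table[SEG_COUNT] = {
        [SEG_CODE]  = { .base = 0x10000, .limit = 0x4000, .writable = false },
        [SEG_DATA]  = { .base = 0x20000, .limit = 0x8000, .writable = true  },
        [SEG_STACK] = { .base = 0x30000, .limit = 0x2000, .writable = true  },
    };
    uint32_t phys;

    printf("read  code+0x100:  %s\n",
           seg_translate(table, SEG_CODE, 0x100, false, &phys) ? "ok" : "fault");
    printf("write code+0x100:  %s\n",
           seg_translate(table, SEG_CODE, 0x100, true, &phys) ? "ok" : "fault");
    printf("read  data+0x9000: %s\n",
           seg_translate(table, SEG_DATA, 0x9000, false, &phys) ? "ok" : "fault");
    return 0;
}
```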

Paging and Virtual Memory

Paging divides both virtual and physical memory into fixed-size blocks called pages, typically 4 kilobytes in modern systems, enabling efficient allocation and management without the fragmentation issues of variable-sized units. Virtual addresses generated by a process are divided into a page number and an offset within that page, with the page number used to index into a page table that maps it to a corresponding physical frame in main memory. This allows non-contiguous allocation of physical frames to a process, simplifying memory management for the operating system while providing a contiguous view to the program.

Protection in paging is enforced through attributes stored in page table entries (PTEs), which include bits for permissions such as read, write, and execute, as well as a valid bit indicating whether the page is present in physical memory. If a process attempts to access a page with invalid permissions or an absent page, the hardware triggers a page fault, allowing the operating system to intervene—either by denying access for protection violations or loading the page from secondary storage. This mechanism isolates processes by ensuring that each has its own set of page tables, preventing unauthorized access to other processes' memory or kernel space.

Virtual memory integrates paging to abstract the physical memory layout from processes, providing each with a large, uniform address space that appears dedicated and contiguous, regardless of actual physical constraints. Demand paging extends this by loading pages into physical memory only when first accessed, reducing initial memory demands and enabling support for programs larger than physical memory through swapping to disk. This abstraction not only enhances protection by isolating processes but also facilitates efficient resource sharing and multitasking.

Hardware support for paging and virtual memory is provided by the memory management unit (MMU), a dedicated component that performs address translation and permission checks on every memory access, trapping invalid operations as exceptions. To accelerate translations, the translation lookaside buffer (TLB), a small cache within the MMU, stores recent virtual-to-physical mappings, avoiding full page-table lookups for most accesses and thus minimizing performance overhead. Unlike segmentation, which relies on variable-sized divisions, paging's uniform pages enable straightforward hardware implementation of these mechanisms for robust protection.
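
The following C sketch shows, under assumed flag names and a toy single-level table, how a paged address is split into a page number and offset and how the PTE's present and permission bits gate the access; real processors use multi-level tables and cache translations in the TLB.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12                     /* 4 KiB pages */
#define PAGE_SIZE  (1u << PAGE_SHIFT)
#define N_PAGES    16                     /* tiny single-level table */

/* Illustrative PTE flag bits; real layouts differ per architecture. */
enum { PTE_PRESENT = 1, PTE_READ = 2, PTE_WRITE = 4, PTE_EXEC = 8 };

typedef struct {
    uint32_t frame;   /* physical frame number */
    uint8_t  flags;   /* present + permission bits */
} pte_t;

/* Walk the (single-level) page table: split the virtual address into a
 * page number and offset, check the permission bits, and build the
 * physical address.  A failed check corresponds to a page fault. */
static bool page_translate(const pte_t table[], uint32_t vaddr,
                           uint8_t required, uint32_t *paddr) {
    uint32_t vpn = vaddr >> PAGE_SHIFT;
    uint32_t off = vaddr & (PAGE_SIZE - 1);
    if (vpn >= N_PAGES) return false;
    pte_t pte = table[vpn];
    if (!(pte.flags & PTE_PRESENT)) return false;          /* unmapped / swapped out */
    if ((pte.flags & required) != required) return false;  /* protection violation */
    *paddr = (pte.frame << PAGE_SHIFT) | off;
    return true;
}

int main(void) {
    pte_t table[N_PAGES] = {0};
    table[0] = (pte_t){ .frame = 7, .flags = PTE_PRESENT | PTE_READ | PTE_EXEC };  /* code */
    table[1] = (pte_t){ .frame = 3, .flags = PTE_PRESENT | PTE_READ | PTE_WRITE }; /* data */

    uint32_t paddr;
    printf("exec  page 0: %s\n", page_translate(table, 0x0040, PTE_EXEC, &paddr) ? "ok" : "fault");
    printf("write page 0: %s\n", page_translate(table, 0x0040, PTE_WRITE, &paddr) ? "ok" : "fault");
    printf("write page 1: %s\n", page_translate(table, 0x1040, PTE_WRITE, &paddr) ? "ok" : "fault");
    return 0;
}
```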

Protection Rings

Protection rings represent a hierarchical model of privilege levels in computer architectures, designed to enforce memory protection by restricting access to sensitive resources based on the executing code's privilege. Originating from the Multics operating system, this approach structures privileges as concentric rings, where inner rings possess greater access rights than outer ones, ensuring that less trusted code cannot interfere with critical system components. In this model, ring numbers increase outward, with the innermost ring (typically ring 0) granting full privileges and outer rings imposing progressive restrictions on operations such as direct hardware access or memory manipulation.

The core mechanism relies on CPU-enforced mode switches between rings, where the processor maintains a current privilege level (CPL) to validate every memory access and instruction execution. Transitions between rings are controlled through dedicated entry points, such as gates or procedure calls, which allow upward (to higher privilege) or downward (to lower privilege) shifts only under strict conditions—for instance, system calls from user space invoke a gate to enter the kernel ring without compromising isolation. In the Intel x86 architecture, which supports four rings (0 through 3), ring 0 is reserved for the operating system kernel with unrestricted access, while ring 3 confines user applications to a limited subset of instructions and memory regions, preventing direct manipulation of kernel data structures.

Implementation in x86 involves segment descriptors in the Global Descriptor Table (GDT) that specify privilege requirements, with the CPL—encoded in the code segment register—compared against descriptor privilege levels (DPL) for each access. Unauthorized attempts, such as a ring 3 process trying to execute a privileged instruction or access kernel memory, trigger hardware exceptions like the general protection fault (#GP) or page fault (#PF), which transfer control to the kernel for handling. Most modern operating systems, including Linux and Windows, utilize only rings 0 and 3, leaving intermediate rings unused to simplify design while maintaining the hierarchy.

By isolating the kernel's virtual address space from user code, protection rings prevent privilege escalation attacks and contain faults, enhancing overall system security and reliability without relying solely on software checks. This hardware-enforced separation ensures that even if user-level code is compromised, it cannot arbitrarily elevate privileges to access protected memory, thereby safeguarding critical system integrity.
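
As a rough illustration of the privilege comparison, the sketch below encodes the simplified x86-style rule for a data-segment access: the numerically larger of the CPL and the selector's requested privilege level (RPL) must not exceed the descriptor's DPL. The actual hardware checks cover more cases (conforming code segments, call gates, stack segments), so this is a sketch, not a complete model.

```c
#include <stdbool.h>
#include <stdio.h>

/* Simplified privilege check for a data-segment access.
 * Ring 0 is most privileged, ring 3 least. */
static bool data_access_allowed(int cpl, int rpl, int dpl) {
    int effective = cpl > rpl ? cpl : rpl;  /* least privileged of the two */
    return effective <= dpl;                /* must be at least as privileged as DPL */
}

int main(void) {
    /* Kernel (ring 0) touching a ring-0 data segment: allowed. */
    printf("%s\n", data_access_allowed(0, 0, 0) ? "allowed" : "#GP fault");
    /* User code (ring 3) touching a ring-0 (kernel) data segment: denied. */
    printf("%s\n", data_access_allowed(3, 3, 0) ? "allowed" : "#GP fault");
    /* User code touching a ring-3 data segment: allowed. */
    printf("%s\n", data_access_allowed(3, 3, 3) ? "allowed" : "#GP fault");
    return 0;
}
```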

Protection Keys

Protection keys provide a hardware-based approach to coarse-grained memory protection by assigning a numeric tag to fixed-size blocks of physical memory, enabling simple access control without the complexity of full virtual addressing. In this mechanism, each memory block is associated with a protection key, typically a 4-bit value ranging from 0 to 15, stored separately from the addressable data. The processor holds a current protection key in a dedicated register, such as bits 8-11 of the program status word (PSW) in IBM mainframe architectures. On every memory access—particularly stores—the hardware compares the current key against the block's key; access is granted if they match or if either is zero (indicating unrestricted access), otherwise triggering a protection exception that halts the operation while preserving the data.

This technique originated in the IBM System/360 mainframe architecture announced in 1964, where it was introduced as an optional feature to safeguard multitasking environments by isolating up to 15 user programs from each other and the operating system. Early implementations tagged 2,048-byte blocks, a size retained in the System/370 (1970), while later architectures standardized on 4 KB pages for key assignment, allowing protected regions to span from 4 KB up to 1 MB or larger contiguous multiples. Keys are managed via privileged instructions like Set Storage Key (SSK) and Insert Storage Key (ISK), ensuring only the operating system can alter them to prevent user-level tampering.

Protection keys offer significant advantages in simplicity and efficiency, imposing no measurable performance overhead since the key comparison is a fast, inline hardware check integrated into memory operations. This makes them ideal for protecting large, coarse regions in resource-constrained or high-throughput systems, where the minimal hardware—essentially per-block key storage and comparator logic—reduces complexity compared to more elaborate schemes. Protection keys continue to be utilized in modern systems, such as the IBM z16 introduced in 2022, for efficient protection in high-performance mainframe environments as of 2025.

Despite these benefits, protection keys suffer from limited granularity, as their fixed block sizes and small key space (only 16 domains) hinder precise protection of small or variably sized regions, often requiring wasteful padding or multiple keys for fine control. Consequently, they have become less prevalent in modern general-purpose architectures like x86 and ARM, which prioritize paging for its superior flexibility in combining address mapping with per-page permissions. Unlike segmentation, which supports variable-length logical units for more adaptable protection, keys enforce rigid, block-uniform tagging better suited to mainframe workloads.
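
The key-match rule can be captured in a few lines of C; the block size, key table, and treatment of key 0 below follow the System/360-style description above and are illustrative rather than an exact model of any machine.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define BLOCK_SHIFT 12          /* illustrative 4 KiB blocks */
#define N_BLOCKS    8

/* Storage key per block, 0..15. */
static uint8_t storage_keys[N_BLOCKS];

/* Store-access check: allowed if the running program's key matches the
 * block's key, or if either key is zero.  A mismatch raises a protection
 * exception in real hardware. */
static bool store_allowed(uint8_t current_key, uint32_t address) {
    uint32_t block = address >> BLOCK_SHIFT;
    if (block >= N_BLOCKS) return false;
    uint8_t block_key = storage_keys[block];
    return current_key == 0 || block_key == 0 || current_key == block_key;
}

int main(void) {
    storage_keys[0] = 1;   /* block 0 belongs to the program running with key 1 */
    storage_keys[1] = 2;   /* block 1 belongs to the program running with key 2 */

    printf("key 1 -> block 0: %s\n", store_allowed(1, 0x0000) ? "ok" : "exception");
    printf("key 1 -> block 1: %s\n", store_allowed(1, 0x1000) ? "ok" : "exception");
    printf("key 0 -> block 1: %s\n", store_allowed(0, 0x1000) ? "ok" : "exception");
    return 0;
}
```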

Software-Based Mechanisms

Capability-Based Addressing

Capability-based addressing is a memory protection mechanism in which access rights to resources are encapsulated in unforgeable tokens known as capabilities. Each capability consists of an object's address and a set of associated access rights, such as read, write, or execute permissions, enabling secure referencing without relying on implicit user identities or global address spaces. These capabilities are treated as first-class objects that can be passed between processes or principals, allowing controlled delegation of access while maintaining protection.

In capability-based systems, capabilities are stored in protected capability lists associated with each process, and hardware or software checks ensure that only valid capabilities can be used for memory access or resource invocation. This approach provides a uniform way to address both memory segments and other resources, deriving protection from the addressing scheme itself. For delegation, a process can transfer or copy a capability to another, but the recipient's rights are limited to those in the received token, preventing escalation beyond the original grant.

Early hardware implementations of capability-based addressing appeared in the early 1970s, with the Plessey System 250 being the first operational computer to employ this scheme for fault-tolerant, high-reliability applications. In the System 250, capabilities were stored in dedicated capability-list segments, and hardware enforced access by validating the token's tag and rights during addressing operations. Later software-based realizations, such as the EROS operating system developed in the 1990s, implemented capabilities purely in software on commodity hardware, using a single-level store where all persistent objects are addressed via capabilities for both protection and naming.

The primary advantages of capability-based addressing include fine-grained control over access rights, as permissions are explicitly bound to each capability rather than inherited from process privileges. This enables revocable access, where capabilities can be selectively invalidated to withdraw permissions without affecting unrelated accesses, and inherently prevents unauthorized escalation by requiring explicit handover of capabilities. Such systems also simplify sharing, as capabilities provide context-independent references that avoid the pitfalls of pointer forgery or relocation issues in traditional segmented addressing.

Despite these benefits, capability-based addressing faces challenges in revocation, as capabilities can proliferate through copying and delegation, making it difficult to locate and invalidate all instances without centralized tracking or garbage collection mechanisms. Storage overhead is another issue, since each process must maintain capability lists that can grow large with extensive sharing, potentially increasing memory usage and access times. These complexities have limited widespread adoption, though they inspire modern designs for secure architectures.
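
A minimal sketch of the capability idea in C appears below: a token bundles an object reference with a rights mask, rights can only be narrowed when delegated, and every use is checked against the mask. The struct and rights names are invented for illustration; real systems additionally make the tokens unforgeable, via protected capability lists or hardware tagging.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative capability: an object reference plus a rights mask. */
enum { CAP_READ = 1, CAP_WRITE = 2, CAP_EXEC = 4 };

typedef struct {
    void    *object;   /* base of the referenced memory object */
    size_t   length;   /* extent of the object */
    uint8_t  rights;   /* subset of CAP_* the holder may exercise */
} capability;

/* Delegation: derive a new capability with equal or fewer rights.
 * Rights can only be narrowed, never amplified. */
static capability cap_restrict(capability c, uint8_t keep) {
    c.rights &= keep;
    return c;
}

/* Access check performed (conceptually) on every use of the capability. */
static bool cap_access(const capability *c, size_t offset, uint8_t need) {
    return offset < c->length && (c->rights & need) == need;
}

int main(void) {
    static char buffer[64];
    capability owner = { buffer, sizeof buffer, CAP_READ | CAP_WRITE };

    /* Pass a read-only view of the same object to another principal. */
    capability reader = cap_restrict(owner, CAP_READ);

    printf("owner write:  %s\n", cap_access(&owner, 10, CAP_WRITE) ? "ok" : "denied");
    printf("reader write: %s\n", cap_access(&reader, 10, CAP_WRITE) ? "ok" : "denied");
    printf("reader read:  %s\n", cap_access(&reader, 10, CAP_READ)  ? "ok" : "denied");
    return 0;
}
```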

Simulated Segmentation

Simulated segmentation is a software technique employed by operating systems to emulate the variable-sized logical segments typical of hardware segmentation, but implemented atop fixed-size paging mechanisms. In this approach, the OS groups contiguous pages into larger, variable-length segments to represent logical divisions such as code, data, or stack regions, using a segment table to map segment identifiers to the starting page and length within the page tables. This allows for isolation and sharing by enforcing boundaries and access rights at the segment level through page-level hardware support.

An early implementation appeared in the development of a segmented memory manager for Unix on the PDP-11/50 minicomputer, where software routines managed variable segments by configuring the hardware's base-and-limit registers to overlay logical units onto physical memory regions, providing isolation without native support for variable-sized segments. In modern systems like Windows, simulated segmentation facilitates DLL isolation by mapping dynamic link libraries to distinct virtual address ranges within a process's paged address space, applying granular page protections (e.g., read-only for code sections) to prevent unauthorized access across modules while sharing common libraries efficiently across processes.

The primary benefits include enhanced flexibility, as systems can adopt segmentation's logical organization without requiring specialized hardware modifications, and simplified migration of software originally designed for segmented architectures to paging-based platforms. This method also supports fine-grained protection inheritance from pages to segments, enabling features like shared segments with controlled access. However, simulated segmentation introduces performance overhead from the extra indirection in address translation—requiring lookups in both segment and page tables—which can increase memory-access latency compared to pure paging. Additionally, managing the alignment of variable segments over fixed pages may lead to minor internal fragmentation within boundary pages.
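
A simplified sketch of the data structures involved, assuming invented names and a 4 KiB page size: the OS-level segment table records which run of pages backs each logical segment and what protection those pages should carry, while the actual enforcement still happens per page.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1u << PAGE_SHIFT)

/* A software "segment" is just a run of contiguous pages plus the protection
 * the OS will apply to each of those pages (e.g., via page-table flags). */
typedef struct {
    uint32_t first_page;   /* starting virtual page number */
    uint32_t page_count;   /* length of the segment in pages */
    bool     writable;     /* protection inherited by every page in the run */
} sw_segment;

enum { SEG_TEXT = 0, SEG_DATA = 1, SEG_COUNT };

/* Check a segment-relative access, then compute the virtual address whose
 * per-page permissions the paging hardware actually enforces. */
static bool sw_seg_check(const sw_segment segs[], unsigned id,
                         uint32_t offset, bool is_write, uint32_t *vaddr) {
    if (id >= SEG_COUNT) return false;
    const sw_segment *s = &segs[id];
    if (offset >= s->page_count * PAGE_SIZE) return false;  /* out of segment */
    if (is_write && !s->writable) return false;             /* wrong permission */
    *vaddr = (s->first_page << PAGE_SHIFT) + offset;
    return true;
}

int main(void) {
    sw_segment segs[SEG_COUNT] = {
        [SEG_TEXT] = { .first_page = 16, .page_count = 4, .writable = false },
        [SEG_DATA] = { .first_page = 32, .page_count = 8, .writable = true  },
    };
    uint32_t va;
    printf("write text+0x10: %s\n", sw_seg_check(segs, SEG_TEXT, 0x10, true, &va) ? "ok" : "fault");
    printf("write data+0x10: %s\n", sw_seg_check(segs, SEG_DATA, 0x10, true, &va) ? "ok" : "fault");
    return 0;
}
```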

Dynamic Tainting

Dynamic tainting is a memory protection mechanism that marks potentially untrusted data, such as inputs from external sources, as "tainted" and tracks its propagation through a program's execution to prevent unauthorized memory accesses or control-flow hijacks. Upon allocation, memory regions and associated pointers are assigned taint marks, which are propagated during operations like copying or pointer arithmetic; any attempt to use tainted data for sensitive actions, such as dereferencing a pointer to access unallocated memory, triggers a check that halts execution if marks mismatch. This approach detects illegal memory accesses (IMAs) by ensuring that only properly authenticated data interacts with protected regions, thereby enforcing fine-grained protection without relying on static compile-time analysis.

Early software implementations of dynamic tainting focused on binary-level instrumentation to monitor memory accesses in C programs vulnerable to exploits. For instance, TaintCheck, introduced in 2005, uses dynamic binary instrumentation to tag external inputs and propagate taint marks through registers and memory, detecting buffer overruns and format string vulnerabilities by flagging tainted control-data usage. Building on this, a 2007 system employed reusable taint marks (as few as two) applied at allocation time, with propagation rules defined for common instructions, implemented via tools like DYTAN in software and via simulated hardware for efficiency; it successfully identified all tested IMAs in applications such as bc. These software approaches often utilize shadow memory—a parallel structure mirroring the program's address space—to store taint tags compactly, avoiding direct modification of original data and enabling low-overhead tracking.

Hardware-assisted dynamic tainting emerged in the 2010s to reduce the overheads inherent in software methods, integrating tag propagation directly into processor pipelines. A 2018 RISC-V extension, D-RI5CY, augments the core with one taint bit per word, programmable policy registers for propagation and checking rules, and custom instructions for tag management; taint flows through the pipeline alongside data, adding negligible latency while detecting memory corruption attacks on IoT devices. This implementation, prototyped on FPGA, incurs zero runtime overhead and minimal area increase (<1% LUTs), demonstrating scalability for embedded systems.

More recent advances include software-focused optimizations for dynamic taint analysis. For example, a 2023 method applies dynamic tainting to container environments for detecting issues in security labels and data flows. HardTaint uses selective hardware tracing to monitor taint propagation in memory and registers with improved efficiency, while AirTaint enhances the speed and usability of dynamic taint analysis for broader application in software security. These developments continue to refine tainting for low-overhead protection in modern systems such as IoT devices and cloud containers.

In security applications, dynamic tainting safeguards against exploits that manipulate memory, such as buffer overflows enabling code execution or use-after-free errors leading to data leaks; for example, it blocks tainted inputs from altering critical data in ways that could facilitate injection attacks, preventing overflow-induced query modifications. Advances include combining tainting with address space layout randomization (ASLR) to detect and mitigate information leaks that reveal randomized addresses, enhancing overall exploit resistance. Performance optimizations, like packed shadow arrays, limit overhead to about 1% in hardware prototypes, making the technique viable for production use.
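
The core propagate-and-check loop can be sketched with a toy shadow array in C; the slot granularity, mark values, and source/sink choices below are illustrative, whereas real systems shadow entire address spaces and registers through binary instrumentation or hardware tags.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Toy shadow memory: one taint byte per tracked slot. */
#define N_SLOTS 16
static uint32_t data[N_SLOTS];
static uint8_t  taint[N_SLOTS];   /* 0 = clean, nonzero = taint mark */

/* Source: external input is marked tainted as it enters the program. */
static void read_input(int slot, uint32_t value) {
    data[slot]  = value;
    taint[slot] = 1;
}

/* Propagation: the result of a copy or arithmetic inherits its operands' taint. */
static void add_slots(int dst, int a, int b) {
    data[dst]  = data[a] + data[b];
    taint[dst] = taint[a] | taint[b];
}

/* Sink check: using a tainted value as a pointer/index into protected
 * memory is reported before the access happens. */
static void use_as_index(int slot) {
    if (taint[slot]) {
        fprintf(stderr, "tainted value used as index -- aborting\n");
        exit(1);
    }
    printf("safe access at index %u\n", data[slot]);
}

int main(void) {
    data[0] = 3; taint[0] = 0;   /* trusted constant */
    read_input(1, 100);          /* untrusted, e.g. from the network */
    add_slots(2, 0, 1);          /* taint propagates into slot 2 */
    use_as_index(0);             /* clean: allowed */
    use_as_index(2);             /* tainted: blocked */
    return 0;
}
```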

Implementation in Operating Systems

Unix-Like Systems

In Unix-like systems, including Linux and the BSD variants, memory protection is fundamentally implemented through the virtual memory (VM) subsystem, which uses hierarchical page tables to translate virtual addresses to physical memory while enforcing access controls such as read, write, and execute permissions on individual pages. This approach isolates processes from one another and separates user-space from kernel-space, preventing unauthorized access to sensitive system resources. The VM subsystem relies on underlying hardware paging support to manage these mappings dynamically, ensuring that each process perceives a contiguous, private address space.

A core distinction in privilege levels underpins this protection: the kernel executes in ring 0 with unrestricted access to memory and hardware, while user-space applications run in ring 3, confined to their allocated address spaces and unable to directly manipulate kernel data structures or other processes' memory. Attempts to violate these boundaries, such as dereferencing an invalid pointer or accessing protected kernel memory, trigger a hardware exception that the kernel intercepts and translates into a SIGSEGV (segmentation violation) signal delivered to the offending process, often leading to its termination unless a handler is installed.

POSIX standards, which guide interfaces in Unix-like environments, provide system calls for explicit control over these protections. The mprotect() syscall allows a process to modify access permissions on a range of mapped memory pages, specifying protections like read-only or no-execute to prevent unintended modifications or executions. Complementing this, the mmap() syscall enables mapping files, devices, or anonymous memory into the process's address space with initial protections, facilitating efficient shared or file-backed allocations while adhering to the specified access rules. Process creation via fork() incorporates copy-on-write (COW) optimization to balance efficiency and isolation: upon forking, the child inherits the parent's page table entries, but physical pages are initially shared and marked read-only, with a private copy allocated only if either process attempts a write, thereby enforcing separation without immediate duplication overhead.

To counter exploits like buffer overflows that attempt code injection, early 2000s innovations introduced non-executable (NX) memory protections. The PaX patch, first released in 2000, emulates NX semantics on hardware lacking native support by restricting executable mappings and randomizing address layouts, significantly reducing the attack surface in Linux kernels. Similarly, Red Hat's Exec Shield, introduced in 2003 with Enterprise Linux 3, leverages the NX bit on supported processors (or emulates it via segmentation) to mark data pages as non-executable, preventing execution in writable regions. SELinux, developed by the NSA and integrated into the mainline Linux kernel in 2003 as a Linux Security Module (LSM), extends these mechanisms with mandatory access control (MAC) policies that govern memory operations, including restrictions on mmap() and mprotect() to enforce type enforcement and role-based rules beyond standard discretionary access controls.
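
A short example of the POSIX calls mentioned above: it maps an anonymous page with mmap(), writes to it, then uses mprotect() to drop the write permission (MAP_ANONYMOUS is a Linux/BSD extension rather than strict POSIX). Re-enabling the commented-out write would trigger the SIGSEGV path described earlier.

```c
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

/* Map an anonymous read-write page, then drop the write permission with
 * mprotect().  After the change, any write to the page raises SIGSEGV,
 * which would normally terminate the process. */
int main(void) {
    size_t len = (size_t)sysconf(_SC_PAGESIZE);

    char *page = mmap(NULL, len, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (page == MAP_FAILED) { perror("mmap"); return 1; }

    strcpy(page, "writable for now");           /* allowed: page is read-write */

    if (mprotect(page, len, PROT_READ) != 0) {  /* make it read-only */
        perror("mprotect");
        return 1;
    }

    printf("read still works: %s\n", page);     /* reads remain legal */

    /* page[0] = 'X';  <-- uncommenting this write would now fault (SIGSEGV) */

    munmap(page, len);
    return 0;
}
```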

Windows

Memory protection in Windows is primarily managed by the Windows NT kernel, which leverages hardware-enforced protection rings on x86 architectures to isolate user-mode applications in ring 3 from kernel-mode operations in ring 0, preventing direct access to system resources. This separation ensures that user processes cannot interfere with kernel memory or other processes without explicit authorization. The kernel employs virtual memory management to allocate isolated address spaces for each process, using paging mechanisms to enforce boundaries and protect against unauthorized reads, writes, or executions.

For legacy compatibility, Windows incorporates Virtual DOS Machines (VDMs), which provide an emulated environment for running 16-bit DOS and Windows 3.x applications within a dedicated address space, isolating them from the host system to prevent legacy code from compromising modern protections. The VDM achieves this by translating 16-bit calls into 32-bit equivalents through the Windows NT Virtual DOS Machine subsystem, maintaining separation via the kernel's memory management.

Address Space Layout Randomization (ASLR), first implemented as an opt-in feature in Windows Vista and made more robust in Windows 7 in 2009, randomizes the base addresses of key modules such as executables, DLLs, and the stack and heap at load time to hinder exploitation of memory corruption vulnerabilities like buffer overflows. By relocating these elements to unpredictable locations, ASLR increases the difficulty of crafting reliable exploits that depend on fixed layouts. Similarly, Data Execution Prevention (DEP), introduced in Windows XP Service Pack 2 in 2004, marks certain memory regions—such as the stack and heap—as non-executable, causing hardware exceptions if code attempts to run from data pages, thereby mitigating attacks that inject and execute malicious payloads.

Additional protections include guard pages, enabled via the PAGE_GUARD memory protection constant, which serve as one-time sentinels for detecting fine-grained memory access violations, such as stack overflows, by triggering exceptions on first access to allow dynamic expansion or error handling without full crashes. Control Flow Guard (CFG), introduced in Windows 8.1 in 2014, further bolsters defenses against return-oriented programming (ROP) attacks by validating indirect calls at runtime, ensuring they target only legitimate function entry points marked in a global control flow table, thus disrupting control-flow hijacking attempts.

User and kernel separation is reinforced through token-based access control, where each process and thread holds an access token encapsulating the user's security identifier (SID), privileges, and group memberships, which the kernel's Security Reference Monitor uses to authorize system calls and prevent unauthorized escalations from user mode. For mixed architectures, the WOW64 subsystem enables 32-bit applications to run on 64-bit Windows by emulating an x86 environment in a thunking layer, isolating 32-bit address spaces and preventing direct interaction with 64-bit memory to maintain compatibility without compromising security.

A distinctive aspect of Windows memory protection is its integration with hardware virtualization through Virtualization-Based Security (VBS), which employs the Hyper-V hypervisor to create isolated enclaves for sensitive components, enforcing memory integrity via Hypervisor-protected Code Integrity (HVCI) to block malicious modifications even from ring 0 code, providing nested protection layers for critical system processes. This approach extends to safeguard against advanced exploits, with features like shielded virtual machines adding encryption and attestation for hosted workloads.
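
For comparison with the POSIX example, the sketch below uses the Win32 virtual-memory API to commit a page, tighten it to read-only with VirtualProtect(), and allocate a PAGE_GUARD page; error handling is minimal, and the program only demonstrates the calls rather than installing an exception handler for the guard-page case.

```c
#include <stdio.h>
#include <windows.h>

/* Reserve and commit one page read-write, then tighten it to read-only with
 * VirtualProtect(); a PAGE_GUARD page is also allocated.  Writes to the
 * read-only page raise an access-violation exception; the first touch of the
 * guard page raises a guard-page exception and clears the guard bit. */
int main(void) {
    SYSTEM_INFO si;
    GetSystemInfo(&si);
    SIZE_T len = si.dwPageSize;

    char *page = VirtualAlloc(NULL, len, MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
    if (page == NULL) { printf("VirtualAlloc failed\n"); return 1; }

    page[0] = 'A';                              /* allowed: read-write */

    DWORD oldProt;
    if (!VirtualProtect(page, len, PAGE_READONLY, &oldProt)) {
        printf("VirtualProtect failed\n");
        return 1;
    }
    printf("page is now read-only, first byte: %c\n", page[0]);
    /* page[0] = 'B';  <-- would now raise an access violation */

    /* A separate one-shot guard page, e.g. for detecting stack-style overruns. */
    char *guard = VirtualAlloc(NULL, len, MEM_RESERVE | MEM_COMMIT,
                               PAGE_READWRITE | PAGE_GUARD);
    if (guard == NULL) { printf("VirtualAlloc (guard) failed\n"); return 1; }

    VirtualFree(page, 0, MEM_RELEASE);
    VirtualFree(guard, 0, MEM_RELEASE);
    return 0;
}
```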

Embedded and Real-Time Systems

Embedded and real-time systems face unique challenges in implementing memory protection due to severe constraints on RAM and CPU resources, often limited to kilobytes of memory and low-power processors. These environments prioritize deterministic behavior to meet strict timing deadlines, avoiding mechanisms like paging or swapping that introduce non-deterministic latency from disk I/O or cache misses. As a result, memory protection must operate with minimal overhead to avoid disrupting real-time guarantees, focusing on isolation without complex virtual memory translation.

A primary mechanism in MMU-less systems is the memory protection unit (MPU), which provides hardware-enforced access controls by dividing physical memory into configurable regions with permissions such as read-only, read-write, or execute-never. In ARM Cortex-M processors, the MPU supports up to 16 regions, each defining size, location, and attributes like cacheability, triggering faults on violations to contain errors without full address translation. For real-time operating systems (RTOS), static partitioning complements the MPU by pre-allocating fixed memory blocks to tasks or modules at build time, ensuring spatial isolation and preventing interference in multi-core setups.

The seL4 microkernel exemplifies advanced memory protection in embedded contexts through formal verification of its capability-based model, which enforces isolation via object capabilities and page tables while proving the absence of buffer overflows and unauthorized accesses. Released in 2009, seL4's proofs cover functional correctness down to the C code, making it suitable for safety-critical applications where predictability is paramount. Similarly, FreeRTOS integrates MPU support to run tasks in unprivileged mode, restricting stack and heap access per task to prevent corruption, with regions reconfigured on context switches for low-overhead fault containment. In VxWorks, static partitioning under ARINC 653 standards allocates dedicated memory to partitions, protecting against faults in mixed-criticality systems by enforcing boundaries at the kernel level.

These approaches involve trade-offs, such as using simplified privilege levels—typically just privileged and unprivileged modes in Cortex-M—to reduce context-switch overhead compared to multi-ring architectures, prioritizing fault containment over comprehensive virtual memory. Protection keys offer a lightweight alternative for region tagging in some hardware, enabling quick permission changes without full reconfiguration. Overall, the emphasis remains on efficient, verifiable protection to maintain determinism in resource-constrained environments.
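
The region-based model an MPU enforces can be sketched generically in C; the structure below is a hypothetical, vendor-neutral abstraction (real Cortex-M MPUs are configured through memory-mapped registers with alignment and size-encoding constraints), but the per-region permission check and fault-on-miss behavior are the essential ideas.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical, vendor-neutral view of an MPU region table. */
#define MAX_REGIONS 8

typedef struct {
    uint32_t base;        /* region start address */
    uint32_t size;        /* region size in bytes */
    bool     writable;    /* read-write vs read-only */
    bool     executable;  /* execute-never when false */
} mpu_region;

static mpu_region regions[MAX_REGIONS];
static unsigned   n_regions;

static void mpu_add_region(uint32_t base, uint32_t size, bool w, bool x) {
    if (n_regions < MAX_REGIONS)
        regions[n_regions++] = (mpu_region){ base, size, w, x };
}

/* The check the hardware performs on each access; a miss or a permission
 * mismatch triggers a memory-management fault that the RTOS can use to
 * contain the offending task. */
static bool mpu_check(uint32_t addr, bool is_write, bool is_exec) {
    for (unsigned i = 0; i < n_regions; i++) {
        const mpu_region *r = &regions[i];
        if (addr >= r->base && addr < r->base + r->size) {
            if (is_write && !r->writable)   return false;
            if (is_exec  && !r->executable) return false;
            return true;
        }
    }
    return false;  /* not covered by any region */
}

int main(void) {
    mpu_add_region(0x08000000, 0x40000, false, true);   /* flash: read/execute */
    mpu_add_region(0x20000000, 0x10000, true,  false);  /* task RAM: read/write */

    printf("exec flash:  %s\n", mpu_check(0x08000100, false, true)  ? "ok" : "fault");
    printf("write flash: %s\n", mpu_check(0x08000100, true,  false) ? "ok" : "fault");
    printf("write RAM:   %s\n", mpu_check(0x20000100, true,  false) ? "ok" : "fault");
    return 0;
}
```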

Challenges and Advances

Common Vulnerabilities

Buffer overflows represent one of the most prevalent memory protection vulnerabilities, occurring when a program writes more data to a fixed-size buffer than it can accommodate, leading to corruption of adjacent memory regions and potential control-flow hijacking. These flaws enable attackers to overwrite return addresses or function pointers, facilitating redirection of execution or injection of malicious payloads. Memory buffer vulnerabilities, including overflows, constitute approximately 20% of reported vulnerabilities in cryptographic libraries.

A key exploitation method for buffer overflows is return-oriented programming (ROP), which circumvents hardware protections like the No eXecute (NX) bit by chaining existing code fragments, or "gadgets," from the program's address space to perform arbitrary operations without introducing executable code. This technique reuses instruction sequences ending in return statements, allowing attackers to bypass write-XOR-execute (W^X) policies that mark data pages as non-executable. Historically, such vulnerabilities have caused widespread breaches; the 1988 Morris Worm exploited a stack buffer overflow in the fingerd daemon on UNIX systems, infecting an estimated 10% of internet-connected computers at the time and marking one of the first major incidents demonstrating memory protection failures.

Race conditions in memory management, particularly time-of-check-to-time-of-use (TOCTOU) flaws, emerge in multi-threaded applications where concurrent access to protected resources lacks adequate synchronization, permitting an attacker to alter the resource state between validation and utilization. This can lead to unauthorized modifications or privilege escalations. Side-channel attacks further undermine isolation; the 2018 Meltdown and Spectre vulnerabilities exploit CPU speculative execution to transiently bypass memory protections, enabling cross-process data leaks through cache timing discrepancies.

Detection of these vulnerabilities often relies on runtime tools such as Valgrind's Memcheck, which instruments code to monitor memory allocations, accesses, and deallocations, identifying memory leaks, use-after-free errors, and invalid reads/writes before exploitation. Operating system kernels typically enforce protection by inducing panics on detected violations of kernel memory, such as invalid accesses, to halt execution and prevent further compromise. As a mitigation, dynamic tainting tracks untrusted data propagation to block tainted inputs from influencing control data in overflow scenarios.
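
The canonical stack overflow can be shown in a few lines of C: the unbounded strcpy() call writes past a 16-byte buffer, while the bounded alternative cannot. The function names are illustrative, and the vulnerable call is left commented out so the example is safe to run.

```c
#include <stdio.h>
#include <string.h>

/* Classic stack buffer overflow: strcpy() does no bounds checking, so an
 * attacker-controlled string longer than 16 bytes overwrites whatever sits
 * next to `buf` on the stack (saved registers, canaries, return address). */
static void vulnerable(const char *input) {
    char buf[16];
    strcpy(buf, input);                       /* no length check: overflow */
    printf("copied: %s\n", buf);
}

/* Bounded version: the copy can never exceed the buffer. */
static void safer(const char *input) {
    char buf[16];
    snprintf(buf, sizeof buf, "%s", input);   /* truncates instead of overflowing */
    printf("copied: %s\n", buf);
}

int main(int argc, char **argv) {
    const char *input = argc > 1 ? argv[1] : "short";
    /* vulnerable(input);  <-- overflows (and likely crashes) for long inputs */
    safer(input);
    return 0;
}
```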

Modern Enhancements

In the realm of hardware-based memory protection, Intel introduced Memory Protection Extensions (MPX) in 2013 to enable efficient bounds checking for memory accesses, allowing compilers to insert hardware-accelerated checks that prevent buffer overflows by verifying pointer bounds at runtime. MPX uses dedicated registers to store bounds information associated with pointers, significantly reducing the overhead of software-based checks while protecting against common memory corruption vulnerabilities. Similarly, ARM's Pointer Authentication, specified in the ARMv8.3-A architecture in 2016, enhances pointer integrity by appending cryptographic signatures (pointer authentication codes) to pointers, which are verified before use to thwart manipulation attacks like return-oriented programming. This mechanism leverages dedicated instructions for signing and authenticating pointers, providing low-overhead protection integrated into the processor pipeline.

Confidential computing has advanced with AMD's Secure Encrypted Virtualization (SEV), launched in 2019 as an extension to AMD-V, which encrypts memory using per-VM keys managed by a dedicated security processor to isolate guest data from the hypervisor and prevent unauthorized access. SEV ensures that memory encryption occurs transparently during page-table walks, offering protection against physical attacks and privileged software exploits without impacting performance in typical workloads.

On the software side, control-flow integrity (CFI) has evolved post-2010 with implementations that enforce precise control-flow graphs at runtime, limiting indirect branches to valid targets and mitigating code-reuse attacks. Modern CFI variants, such as those using shadow stacks and fine-grained policies, achieve sub-5% overhead on SPEC benchmarks. Browser sandboxing has also progressed, exemplified by Google's Site Isolation feature enabled in 2018, which renders each site in a separate process to isolate cross-site scripting and Spectre-like attacks, reducing the attack surface for memory leaks between origins.

Virtualization enhancements include Intel's Extended Page Tables (EPT), introduced in 2008 but widely adopted in post-2010 hypervisors, which accelerate second-level address translation by mapping guest-physical to host-physical addresses in hardware, cutting VM exit overheads by up to 50% compared to software-emulated paging. For containerized environments, Linux's seccomp-BPF, added in kernel 3.5 in 2012, allows fine-grained syscall filtering to confine processes, blocking unauthorized memory operations and enhancing isolation in systems like Docker.

Emerging trends incorporate AI-assisted analysis to identify memory-related security threats in cloud and embedded settings, using models trained on runtime data to achieve detection rates of approximately 96-97% with minimal false positives. Additionally, quantum-resistant protections are gaining traction in the 2020s through integration of post-quantum cryptography into memory encryption schemes, with NIST-standardized algorithms like CRYSTALS-Kyber ensuring long-term confidentiality against quantum threats in confidential computing platforms.

In 2025, ARM's Memory Tagging Extension (MTE) saw significant adoption, notably in Apple's ecosystem with the introduction of Memory Integrity Enforcement on devices like the iPhone 17. MTE assigns tags to memory allocations and pointers, with hardware verifying that the tags match on each access to detect memory-safety violations such as overflows and use-after-free errors at runtime, providing low-overhead protection against common memory corruption vulnerabilities.
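
As a hedged illustration of seccomp-BPF, the Linux-only sketch below installs a filter that allows all system calls except mprotect(), which is made to fail with EPERM; production filters normally also validate the architecture field and default to denying unknown syscalls rather than allowing them.

```c
#include <errno.h>
#include <stddef.h>
#include <stdio.h>
#include <sys/prctl.h>
#include <sys/syscall.h>
#include <linux/filter.h>
#include <linux/seccomp.h>

/* Minimal seccomp-BPF filter: allow every syscall except mprotect(),
 * which is denied with EPERM. */
int main(void) {
    struct sock_filter filter[] = {
        /* Load the syscall number from struct seccomp_data. */
        BPF_STMT(BPF_LD | BPF_W | BPF_ABS, offsetof(struct seccomp_data, nr)),
        /* If it is mprotect, fall through to the "deny" return; otherwise skip it. */
        BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, __NR_mprotect, 0, 1),
        BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ERRNO | (EPERM & SECCOMP_RET_DATA)),
        BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW),
    };
    struct sock_fprog prog = {
        .len    = (unsigned short)(sizeof(filter) / sizeof(filter[0])),
        .filter = filter,
    };

    /* Required so an unprivileged process may install a filter. */
    if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0) != 0) { perror("prctl"); return 1; }
    if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog) != 0) { perror("seccomp"); return 1; }

    printf("filter installed: mprotect() will now fail with EPERM\n");
    return 0;
}
```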

References

  1. [1]
    Chapter 3 Memory Management
    Also, the hardware virtual memory mechanisms allow areas of memory to be protected against writing. This protects code and data from being overwritten by rogue ...
  2. [2]
    Memory Protection - University of Iowa
    Prevent misbehaving programs from damaging the operating system. Prevent misbehaving programs from accessing or damaging data belonging to other programs.<|control11|><|separator|>
  3. [3]
    Memory Protection - Arm Developer
    Memory protection restricts access to code and data based on execution context, preventing applications from accessing OS data or code.
  4. [4]
    Memory Protection Keys - The Linux Kernel documentation
    Memory Protection Keys provide a mechanism for enforcing page-based protections, but without requiring modification of the page tables.
  5. [5]
    [PDF] Memory Protection
    User processes safely enter the kernel to access shared OS services. • Virtual memory mapping. OS controls virtual-physical translations for each address space.
  6. [6]
    [PDF] Memory Protection: Kernel and User Address Spaces
    Memory Protection: Kernel and. User Address Spaces. Sarah Diesburg ... spaces to achieve fault isolation. • What if your applications are built by ...
  7. [7]
    Operating Systems: Main Memory
    A bit or bits can be added to the page table to classify a page as read-write, read-only, read-write-execute, or some combination of these sorts of things. Then ...
  8. [8]
    [PDF] History of Protection in Computer Systems - DTIC
    Jul 15, 1980 · The idea behind multiprogramming is that the operating system keeps more than one user program resident in main memory at a time. One user ...
  9. [9]
    Multics--The first seven years - MIT
    HISTORY OF THE DEVELOPMENT. As previously mentioned, the Multics project got under way in the Fall of 1964. The computer equipment to be used was a modified ...
  10. [10]
    [PDF] The Evolution of the Unix Time-sharing System* - Nokia
    This paper presents a brief history of the early development of the Unix operating system. It concentrates on the evolution of the file system, the process ...
  11. [11]
    [PDF] INTEL 80386 PROGRAMMER'S REFERENCE MANUAL 1986
    Protected Mode. 2. Real-Address Mode. 3. Virtual 8086 Mode. Protected mode is the natural 32-bit environment of the 80386 processor. In this mode all ...Missing: history | Show results with:history
  12. [12]
    [PDF] [12] CASE STUDY: WINDOWS NT
    Reliability: NT uses hardware protection for virtual memory and software protection mechanisms for operationg system resources. Compatibility: applications ...
  13. [13]
    [PDF] Intel Virtualization Technology - UT Computer Science
    Additional controls allow selective protection of CR0, CR3, and CR4. VT-x includes two controls that support inter- rupt virtualization. When the external ...Missing: introduction | Show results with:introduction
  14. [14]
    On the effectiveness of address-space randomization
    Address-space randomization is a technique used to fortify systems against buffer overflow attacks. The idea is to introduce artificial diversity by ...Abstract · Information & Contributors · Published In
  15. [15]
    Deconstructing process isolation - ACM Digital Library
    Most operating systems enforce process isolation through hardware protection mechanisms such as memory segmentation, page mapping, and differentiated user ...
  16. [16]
    [PDF] Multiprogramming on physical memory
    Multiprogramming on physical memory. • Makes it hard to allocate space ... • Isolation is natural. - Can't even name other proc's memory. Page 3 ...
  17. [17]
    Operating System Privilege: Protection and Isolation
    The first step is memory itself. We want to isolate the processes' memory spaces, so that each process can access only its own memory.
  18. [18]
    Protection - Butler Lampson
    It has three major components: a set of objects which we will call X, a set of domains which we will call D, and an access matrix or access function which we ...
  19. [19]
    [PDF] Access Control Models - Jackson State University
    Discretionary Access Control (DAC) Model: The DAC model gives the owner of the object the privilege to grant or revoke access to other subjects. • Mandatory ...
  20. [20]
    [PDF] Configuring the SELinux Policy - National Security Agency
    NSA Security-Enhanced Linux (SELinux) is an implementation of a flexible and fine-grained mandatory access control (MAC) architecture called Flask in the ...<|control11|><|separator|>
  21. [21]
    [PDF] Segmentation - cs.wisc.edu
    What segmentation allows the OS to do is to place each one of those segments in different parts of physical memory, and thus avoid filling physical memory with ...
  22. [22]
    [PDF] CHAPTER 3 PROTECTED-MODE MEMORY MANAGEMENT
    Segmentation provides a mechanism of isolating individual code, data, and stack modules so that multiple programs (or tasks) can run on the same processor ...<|control11|><|separator|>
  23. [23]
    6.2 Overview of 80386 Protection Mechanisms
    The protection hardware of the 80386 is an integral part of the memory management hardware. Protection applies both to segment translation and to page ...
  24. [24]
    Operating Systems Lecture Notes Lecture 15 Segments
    Each segment is a variable-sized chunk of memory. An address is a segment,offset pair. Each segment has protection bits that specify which kind of accesses can ...
  25. [25]
    x86 Segmentation for the 15-410 Student
    Sep 8, 2017 · x86 segmentation divides memory into segments, like code, stack, and data. Segment selectors use segment numbers to access descriptor tables, ...
  26. [26]
    Guide to Understanding Segmentation
    As shown in Figure 3-1, segmentation provides a mechanism for dividing the processor's addressable memory space (called the linear address space) into smaller ...
  27. [27]
    A Hardware Architecture for Implementing Protection Rings - Multics
    In a system which uses segmentation as a memory addressing scheme, protection can be achieved in part by associating concentric rings of decreasing access ...
  28. [28]
    Memory Management, Segmentation, and Paging - UCSD CSE
    Appropriate protection and security can be enforced by associating this information with the segment table.
  29. [29]
    [PDF] Complete Virtual Memory Systems - cs.wisc.edu
    The page table entry (PTE) in VAX contains the following bits: a valid bit, a protection field (4 bits), a modify (or dirty) bit, a field reserved for. OS ...
  30. [30]
    [PDF] Virtual Memory - Computer Systems: A Programmer's Perspective
    If virtual memory is used improperly, applications can suffer from perplexing and insidious memory- related bugs. For example, a program with a bad pointer can ...Missing: fundamentals | Show results with:fundamentals
  31. [31]
    Virtual Memory - Cornell: Computer Science
    Virtual memory is a system by which the machine or operating system fools processes running on the machine into thinking that they have a lot more memory to ...Missing: fundamentals | Show results with:fundamentals
  32. [32]
    [PDF] Virtual Memory - the denning institute
    The address translator also recognized access codes, thus protecting read-only pages from being overwritten. Downloaded by [Peter Denning] at 10 ...
  33. [33]
    CS 537 Lecture Notes Part 7 Paging
    The MMU allows a contiguous region of virtual memory to be mapped to page frames scattered around physical memory making life much easier for the OS when ...
  34. [34]
    [PDF] Virtual Memory Overview - Washington
    Virtual memory uses virtual addresses (VA) and physical addresses (PA). The MMU translates VA to PA using the TLB and page table. The page table maps virtual ...
  35. [35]
    [PDF] A Hardware Architecture for Implementing Protection Rings
    In a system which usessegmentation as a memory addressing scheme, protection can be achieved in part by associating concentric rings of decreasing access.
  36. [36]
    [PDF] Intel® 64 and IA-32 Architectures Software Developer's Manual
    NOTE: The Intel® 64 and IA-32 Architectures Software Developer's Manual consists of nine volumes: Basic Architecture, Order Number 253665; Instruction Set ...
  37. [37]
    [PDF] Systems Reference Library IBM System/360 Principles of Operation
    The manual is useful for individual study, as an instruction aid, and as a machine reference manual. The manual defines System/360 operating princi- ples, ...
  38. [38]
    What is storage protection? - IBM
    Storage protection prevents unauthorized alteration and reading of storage, working on 4K pages of real memory, and cannot be altered by application programs.
  39. [39]
    Memory protection keys - LWN.net
    May 13, 2015 · Memory protection keys (MPK) use bits in page tables to assign keys to memory pages, allowing processes to partition memory and control access ...
  40. [40]
    Capability-based addressing | Communications of the ACM
    A computer using capability-based addressing may be substantially superior to present systems on the basis of protection, simplicity of programming conventions ...
  41. [41]
    [PDF] Capability-Based Computer Systems
    Capabilities provide (1) a single mechanism to address both primary and secondary memory, and (2) a single mechanism to address both hardware and soft- ware ...
  42. [42]
    [PDF] The Plessey System 250
    Capability addressing facilitated sharing among processors, while also restricting each processor's domain to the segments for which it possessed capabilities.
  43. [43]
    EROS: a fast capability system - ACM Digital Library
    EROS is a capability-based operating system for commodity processors which uses a single level storage model. The single level store's persistence is ...
  44. [44]
    [PDF] A Capability-based Foundation for Trustless Secure Memory Access
    Aug 9, 2023 · Capability-based memory isolation is a promising new ar- chitectural primitive. Software can access low-level memory.
  45. [45]
    CS 513 System Security -- LReview and Revocation for Capabilities
    To perform review in a capability-based system is more difficult. All of the capabilities could be printed, but from that information it would still be hard to ...
  46. [46]
    [PDF] Efficient and Provable Local Capability Revocation using ...
    Unfortunately, local capability revocation is unrealistic in practice because large amounts of stack memory need to be cleared as a security precaution. In this ...
  47. [47]
    [PDF] paging.pdf - cs.Princeton
    Paged Segmentation. Silberschatz. & Peterson. Swapping. • What happens if cumulative sizes of segments exceeds virtual memory? Page 6. 6. Swapping to Disk. • If ...
  48. [48]
    [PDF] The Development of a Segmented Memory Manager for the ... - DTIC
    This thesis rsoorts the development of a segmented memory manager for the UNIX operating system on a PDP-11/50 minicomputer. Considered in detail is the ...
  49. [49]
    Memory Protection - Win32 apps | Microsoft Learn
    Jan 7, 2021 · Copy-on-write protection is an optimization that allows multiple processes to map their virtual address spaces such that they share a physical page.Missing: NT history
  50. [50]
    [PDF] Effective Memory Protection Using Dynamic Tainting
    Nov 9, 2007 · In this paper, we present a new technique based on dynamic taint- ing for protecting programs from illegal memory accesses. When memory is ...
  51. [51]
    [PDF] Dynamic Taint Analysis for Automatic Detection ... - People @EECS
    TaintCheck design and implementation​​ TaintCheck is a novel mechanism that uses dynamic taint analysis to detect when a vulnerability such as a buffer overrun ...
  52. [52]
    [PDF] Design and Implementation of a Dynamic Information Flow Tracking ...
    Our focus has been on deriving an implementation of DIFT for a RISC-V core that protects IoT applications against memory-corruptions attacks while presenting no ...
  53. [53]
    Page Tables - The Linux Kernel documentation
    Page tables map virtual addresses as seen by the CPU into physical addresses as seen on the external memory bus. Linux defines page tables as a hierarchy.
  54. [54]
    [PDF] Sharing Page Tables in the Linux Kernel
    During fork, every pte en- try is copied to the new page table. Data pages that can't be fully shared are marked as “copy on write.” Marking a page as copy on ...
  55. [55]
    [PDF] PaX: Twelve Years of Securing Linux - grsecurity
    Oct 10, 2012 · PaX. The Solutions. Deployment: Mandatory Access Control (policies), Linux Security Modules (LSM), AppArmor, SELinux, Smack ...
  56. [56]
    Security Technologies: ExecShield - Red Hat
    Jul 25, 2018 · ExecShield, a Red Hat technology, protects systems from memory corruption by segmenting memory and using address space layout randomization.
  57. [57]
    [PDF] Integrating Flexible Support for Security Policies into the Linux ...
    This paper describes the security architecture, security mechanisms, application programming interface, security policy configuration, and performance of ...
  58. [58]
    CreateProcessA function (processthreadsapi.h) - Win32 apps
    Feb 8, 2023 · Creates a new process and its primary thread. The new process runs in the security context of the calling process.
  59. [59]
    Mitigate threats by using Windows 10 security features
    Dec 31, 2017 · Memory protection options provide specific mitigations against malware that attempts to manipulate memory in order to gain control of a system.
  60. [60]
    On the effectiveness of DEP and ASLR - Microsoft
    Dec 8, 2010 · DEP (Data Execution Prevention) and ASLR (Address Space Layout Randomization) have proven themselves to be important and effective ...
  61. [61]
    Data Execution Prevention - Win32 apps - Microsoft Learn
    May 1, 2023 · Data Execution Prevention (DEP) is a memory protection feature that marks memory as non-executable, preventing code from running from data ...
  62. [62]
    Creating Guard Pages - Win32 apps - Microsoft Learn
    Jan 7, 2021 · A guard page is a one-shot alarm for memory access, created by setting PAGE_GUARD. Accessing it raises an exception, and the guard status is ...
  63. [63]
    Memory Protection Constants (WinNT.h) - Win32 - Microsoft Learn
    May 20, 2022 · The following are the memory-protection options; you must specify one of the following values when allocating or protecting a page in memory.
  64. [64]
    Control Flow Guard for platform security - Win32 apps | Microsoft Learn
    Dec 17, 2024 · Control Flow Guard (CFG) is a highly-optimized platform security feature that was created to combat memory corruption vulnerabilities.
  65. [65]
    Access Tokens - Win32 apps - Microsoft Learn
    Jul 8, 2025 · An access token is an object that describes the security context of a process or thread. The information in a token includes the identity and privileges of the ...
  66. [66]
    Windows Kernel-Mode Security Reference Monitor - Microsoft Learn
    Sep 24, 2025 · Learn about the Windows Security Reference Monitor and how to use its routines for access control in kernel-mode drivers.
  67. [67]
    Running 32-bit Applications - Win32 apps - Microsoft Learn
    Aug 19, 2020 · WOW64, an x86 emulator, allows 32-bit apps to run on 64-bit Windows. The system isolates them, but 32-bit apps can't load 64-bit DLLs.
  68. [68]
    Enable virtualization-based protection of code integrity
    Aug 15, 2025 · Memory integrity can be turned on in Windows Security settings and found at Windows Security > Device security > Core isolation details > Memory integrity.
  69. [69]
    Virtualization-based Security (VBS) - Microsoft Learn
    Feb 27, 2025 · Virtualization-based security, or VBS, uses hardware virtualization and the Windows hypervisor to create an isolated virtual environment that becomes the root ...
  70. [70]
    Memory Protection Unit (MPU) - Arm Developer
    The MPU is a programmable device that can define memory access permissions, such as privileged access only, and memory attributes, for example Cacheability.
  71. [71]
    [PDF] Deterministic Memory Hierarchy and Virtualization for Modern Multi ...
    Abstract—One of the main predictability bottlenecks of modern multi-core embedded systems is contention for access to shared memory resources.
  72. [72]
    Memory Protection Unit (MPU) Support - FreeRTOS™
    FreeRTOS MPU ports enable microcontroller applications to be more robust and more secure by: first, enabling tasks to run in either privileged or unprivileged ...
  73. [73]
    Memory Protection Unit - Cortex-M0+ Devices Generic User Guide
    The MPU divides memory into regions, defining access permissions and attributes. It can cause a HardFault if a prohibited access occurs.
  74. [74]
    Chapter 5. Memory Protection Unit - Cortex-M4 - Arm Developer
    This chapter describes the processor Memory Protection Unit (MPU). It contains the following sections: About the MPU · MPU functional description.
  75. [75]
    [PDF] Mitigation of interference in Multicore Processors - Wind River Systems
    Figure 12 – The combination of cache partitioning and the use of certain RAM addresses can mitigate interference in the memory system through space partitioning ...
  76. [76]
    [PDF] seL4: Formal Verification of an OS Kernel - acm sigops
    seL4 is a formally verified, general-purpose OS kernel, the first of its kind, designed for functional correctness and is a member of the L4 microkernel family.
  77. [77]
    [PDF] seL4: Formal Verification of an Operating-System Kernel
    ABSTRACT. We report on the formal, machine-checked verification of the seL4 microkernel from an abstract specification down to its C implementation.
  78. [78]
    [PDF] Safety-Critical Software Development for Integrated Modular Avionics
    In VxWorks 653, the module OS performs ARINC 653 scheduling of the individual partitions. Within each time slot, the partition OS uses the VxWorks scheduler to ...
  79. [79]
    Benefits of Using the Memory Protection Unit - FreeRTOS™
    Feb 16, 2021 · MPU regions can be modified on a per-task basis; each task can have its own unique set of regions that are configured when the task is moved to ...
  80. [80]
    [PDF] Capability memory protection for embedded systems
    This dissertation explores the use of capability security hardware and software in real-time and latency-sensitive embedded systems, to address existing memory ...
  81. [81]
    [PDF] Protecting Cryptographic Libraries against Side-Channel and Code ...
    Dec 26, 2024 · Memory-buffer vulnerabilities are among the most common security vulnerabilities, comprising approximately 20% of the reported ...
  82. [82]
    [PDF] Return-Oriented Programming: Systems, Languages, and Applications
    In this paper, we present a new form of attack, dubbed return-oriented programming, that categorically evades W⊕X protections. Attacks using our technique ...
  83. [83]
    [PDF] The Morris worm: A fifteen-year perspective - UMD Computer Science
    did not need authentication, and the fingerd application was vulnerable to a buffer overrun exploit, something of a novelty at the time. Many systems did ...
  84. [84]
    CWE-367: Time-of-check Time-of-use (TOCTOU) Race Condition
    This can happen with shared resources such as files, memory, or even variables in multithreaded programs.
  85. [85]
    [1801.01203] Spectre Attacks: Exploiting Speculative Execution - arXiv
    Jan 3, 2018 · This paper describes practical attacks that combine methodology from side channel attacks, fault attacks, and return-oriented programming that can read ...
  86. [86]
    4. Memcheck: a memory error detector - Valgrind
    Memcheck is a memory error detector. It can detect problems that are common in C and C++ programs, such as incorrect freeing of heap memory.
  87. [87]
    ViK: Practical Mitigation of Temporal Memory Safety Violations ...
    Mar 4, 2022 · In kernel UAF attacks, the attacker has only one chance: The kernel will panic upon failed attacks due to an invalid memory access via a ...
  88. [88]
    [PDF] Real-World Buffer Overflow Protection for Userspace & Kernelspace
    Dynamic Information Flow Tracking (DIFT) is a practical platform for preventing a wide range of security attacks from memory corruptions to SQL injections.
  89. [89]
    Support for Intel® Memory Protection Extensions (Intel® MPX)...
    Jul 16, 2021 · Describes Intel® MPX and how to find out if the technology is supported by a processor.
  90. [90]
    12. Intel(R) Memory Protection Extensions (MPX)
    Intel MPX provides hardware features that can be used in conjunction with compiler changes to check memory references, for those references whose compile-time ...
  91. [91]
    Armv8-A architecture: 2016 additions - Arm Developer
    Oct 26, 2016 · Pointer authentication · PAC value creation that write the value to the uppermost bits in a destination register alongside an address pointer ...
  92. [92]
    [PDF] Pointer Authentication on ARMv8.3 - Qualcomm
    In this document, we have described the design of the ARM Pointer Authentication extensions newly introduced in ARMv8.3-A specification. We have presented ...
  93. [93]
    AMD Secure Encrypted Virtualization (SEV)
    AMD Secure Encrypted Virtualization (SEV) uses one key per virtual machine to isolate guests and the hypervisor, managed by the AMD Secure Processor.
  94. [94]
    AMD Secure Encrypted Virtualization (SEV) — QEMU documentation
    SEV is an extension to the AMD-V architecture which supports running encrypted virtual machines (VMs) under the control of KVM.
  95. [95]
    [PDF] Control-Flow Integrity Principles, Implementations, and Applications
    This paper describes and studies one mitigation technique, the enforcement of Control-Flow Integrity (CFI), that aims to meet these standards for ...
  96. [96]
    [PDF] PAC it up: Towards Pointer Integrity using ARM Pointer Authentication
    Pointers with PACs can be authenticated either as they are loaded from memory, or immediately before they are used. We refer to these as on-load and on-use ...
  97. [97]
    Mitigating Spectre with Site Isolation in Chrome
    Jul 11, 2018 · To better mitigate these attacks, we're excited to announce that Chrome 67 has enabled a security feature called Site Isolation on Windows, Mac ...
  98. [98]
    [PDF] Performance Evaluation of Intel EPT Hardware Assist - VMware
    Recently Intel introduced its second generation of hardware support that incorporates MMU virtualization, called Extended Page Tables (EPT). We evaluated EPT ...
  99. [99]
    AI-Driven Anomaly Detection for Securing IoT Devices in 5G ... - MDPI
    This paper proposes a novel AI-driven anomaly detection framework designed to enhance cybersecurity in IoT-enabled smart cities operating over 5G networks.
  100. [100]
    NIST Releases First 3 Finalized Post-Quantum Encryption Standards
    Aug 13, 2024 · NIST has finalized its principal set of encryption algorithms designed to withstand cyberattacks from a quantum computer.