
Memory management unit

A memory management unit (MMU) is a hardware component integrated into a computer's central processing unit (CPU) that translates virtual addresses generated by software into physical addresses in main memory, while also enforcing access permissions and supporting virtual memory management to enable efficient multitasking and memory protection. The primary purpose of the MMU is to abstract the physical memory layout from applications, allowing multiple processes to operate in isolated virtual address spaces as if each has dedicated memory, which prevents interference and fragmentation in physical memory. This translation process may involve techniques such as paging—dividing memory into fixed-size pages, typically 4 KiB, and mapping virtual pages to physical ones via data structures like page tables—or segmentation, depending on the architecture; these structures store mappings along with attributes such as read/write permissions and presence in memory. If a requested virtual address lacks a valid mapping or violates permissions, the MMU triggers an exception, such as a page fault in paging systems or a segmentation fault, enabling the operating system to handle paging from disk or terminate the process. Key components of the MMU include the translation lookaside buffer (TLB), a high-speed cache that stores recent virtual-to-physical address translations to accelerate lookups and avoid frequent access to slower page tables in main memory. The MMU also manages caching policies, memory access ordering, and integration with the processor's bus interface, optimizing performance in systems like ARM-based embedded devices or general-purpose computers running complex operating systems such as Linux. By handling these operations in hardware, the MMU reduces software overhead, minimizes security risks, and lowers overall memory-related costs through efficient allocation and fragmentation control.

Overview and Fundamentals

Definition and Role

The Memory Management Unit (MMU) is a hardware component integrated into the central processing unit (CPU) that translates virtual addresses generated by software applications into corresponding physical addresses in random-access memory (RAM). This translation process occurs dynamically and transparently to the executing programs, enabling the operating system to allocate and manage memory resources without requiring software to handle the complexities of physical memory organization. As a critical intermediary between the CPU and memory subsystem, the MMU ensures that all memory accesses are validated and mapped correctly before reaching the physical hardware. The MMU plays a central role in modern computing by facilitating multitasking, memory protection, and virtual memory systems. In multitasking environments, it allows multiple processes to share the same physical memory concurrently by assigning each an independent virtual address space, preventing interference and enabling efficient resource utilization. For protection, the MMU enforces access controls that isolate processes, blocking unauthorized reads, writes, or executions in restricted regions to safeguard against faults or attacks. Through virtual memory support, the MMU abstracts physical RAM constraints, permitting programs to address larger spaces by swapping inactive portions to secondary storage, thus optimizing overall system performance. Historically, the MMU emerged in the 1960s amid the push for time-sharing systems that supported interactive, multi-user computing. The IBM System/360 Model 67, announced in August 1965 and first delivered in 1966, introduced one of the earliest commercial implementations via its Dynamic Address Translation (DAT) unit, which served as an MMU to enable virtual memory for time-sharing applications. This design allowed the system to interleave tasks from multiple users, reducing idle time and improving throughput on large-scale mainframes. Virtual addressing, managed by the MMU, provides software with an abstracted view of memory that is process-specific and potentially much larger than physical RAM, whereas physical addressing directly references actual memory locations. By mediating all memory operations, the MMU ensures processes cannot bypass translation to access physical memory directly, thereby enforcing protection and isolation boundaries that protect the system from unauthorized or erroneous intrusions.

Key Components and Operation

The Memory Management Unit (MMU) consists of several core hardware components that enable efficient virtual-to-physical address translation. The Translation Lookaside Buffer (TLB) serves as a high-speed cache storing recent address translations, typically organized in multiple levels such as L1 instruction and data TLBs for fast lookups, and a shared L2 TLB for broader coverage across page sizes from 4KB to 1GB. In RISC-V architectures, the TLB is private to each hart (hardware thread) and caches translations tagged with address space identifiers to support isolation. The page table walker is dedicated hardware that traverses multi-level page tables upon a TLB miss, starting from a root pointer in control registers and fetching page table entries (PTEs) to compute the physical address, often optimized with prefetching for contiguous tables to reduce latency. Control registers, such as the ARM SCTLR (System Control Register) or RISC-V satp (Supervisor Address Translation and Protection), configure MMU modes, enable/disable translation, set root table pointers, and manage features like prefetching via bits in registers like ECTLR. During a memory access, the MMU initiates translation by first checking the TLB for a matching virtual address entry; a hit provides the physical address in 1-3 clock cycles while verifying permissions. On a TLB miss, the page table walker hardware automatically traverses the page or segment tables—using a radix-tree structure in designs like RISC-V's Sv39 (3-level) or ARM's stage-1 tables—indexing levels with virtual page number bits until a valid leaf PTE is found, combining its physical page number with the offset to form the final address, and then inserting the translation into the TLB for future use. This hardware-managed refill mechanism, as seen in Intel x86 and ARM implementations, avoids software intervention for most misses, completing walks in fixed cycles (e.g., 7 cycles in early x86 designs) without flushing the pipeline. Address space identifiers (ASIDs) enhance translation reuse during context switches by tagging TLB entries with a unique per-process identifier, allowing the MMU to retain valid translations without full TLB flushes; for instance, ARM uses 8- or 16-bit ASIDs in TLB entries, while RISC-V employs up to 16-bit ASIDs in the satp register to distinguish address spaces per hart. Global mappings (marked in PTEs) are exempt from ASID-specific invalidations, further minimizing overhead during switches. If translation fails or access violates configured rules during the TLB check or walk, the MMU generates a fault—such as a page fault for invalid PTEs (V=0 bit unset) or protection violations (e.g., a write to a read-only page)—triggering an exception to the operating system via mechanisms like ARM's Data/Instruction Aborts or RISC-V's scause/stval registers, which capture the faulting address and cause for handler resolution. These faults ensure precise exception handling, passing control to the OS without corrupting the processor state.
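
The following C sketch, a minimal model loosely based on the RISC-V Sv39 format described above (three levels, 9 index bits per level, 4 KiB base pages), illustrates the kind of radix-tree walk a hardware walker performs on a TLB miss; the read_phys() helper is a hypothetical stand-in for a physical memory read, and alignment checks for superpages are omitted.

```c
#include <stdint.h>
#include <stdbool.h>

#define LEVELS      3
#define PAGE_SHIFT  12
#define PTE_V  (1u << 0)   /* valid */
#define PTE_R  (1u << 1)   /* readable */
#define PTE_W  (1u << 2)   /* writable */
#define PTE_X  (1u << 3)   /* executable */

extern uint64_t read_phys(uint64_t paddr);  /* hypothetical physical read */

/* Walk the table rooted at root_ppn; return true and set *paddr on success. */
bool walk(uint64_t root_ppn, uint64_t vaddr, uint64_t *paddr)
{
    uint64_t table = root_ppn << PAGE_SHIFT;
    for (int level = LEVELS - 1; level >= 0; level--) {
        uint64_t idx = (vaddr >> (PAGE_SHIFT + 9 * level)) & 0x1FF;
        uint64_t pte = read_phys(table + idx * 8);
        if (!(pte & PTE_V))
            return false;                    /* page fault: invalid PTE */
        if (pte & (PTE_R | PTE_W | PTE_X)) { /* leaf entry found */
            /* leaves above level 0 map superpages with wider offsets */
            uint64_t off_mask = (1ULL << (PAGE_SHIFT + 9 * level)) - 1;
            *paddr = (((pte >> 10) << PAGE_SHIFT) & ~off_mask)
                   | (vaddr & off_mask);
            return true;                     /* hardware would also refill the TLB */
        }
        table = (pte >> 10) << PAGE_SHIFT;   /* descend to next-level table */
    }
    return false;                            /* no leaf found */
}
```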

Address Translation Techniques

Segmented Translation

Segmented translation is a memory management technique employed by the memory management unit (MMU) to map variable-sized logical segments of a program's virtual address space to physical memory, enabling logical partitioning and protection. In this scheme, the virtual address is divided into two parts: a segment selector, which acts as an index into a per-process segment table, and an offset specifying the position within the selected segment. The segment table, maintained by the operating system and accessed via the MMU, consists of entries that include the physical base address of the segment, a limit indicating its size, and protection bits defining allowable access types such as read, write, or execute. Upon receiving a virtual address, the MMU uses the segment selector to retrieve the corresponding table entry, verifies that the offset does not exceed the limit to prevent out-of-bounds access, and—if valid—computes the physical address by adding the offset to the segment base. This address translation process can be mathematically expressed as follows:

\text{Physical Address} = \text{Segment Base} + \text{Offset}

subject to the bound check:

\text{Offset} < \text{Segment Limit}

If the offset violates this condition, the MMU generates a segmentation fault, trapping to the operating system for handling, such as process termination or memory reallocation. The mechanism supports efficient relocation of entire segments in physical memory without altering the program's logical view, as only the base address in the table entry needs updating. One key advantage of segmented translation lies in its ability to separate distinct program modules, such as code, data, and stack, into independent segments with tailored attributes. For instance, a code segment can be marked read-only to prevent modification, while a data segment allows read-write access, thereby enhancing security by isolating components and reducing the risk of unintended interference. This logical division aligns with program structure, facilitating modular design and enabling selective sharing of segments, like common libraries, across multiple processes without duplicating the entire address space. Despite these benefits, segmented translation suffers from external fragmentation due to the variable sizes of segments, which can leave scattered unused gaps that are too small for new allocations, complicating memory utilization over time. Additionally, handling segment overlap or sharing demands precise management of table entries to maintain protection, as improper configuration could lead to access violations or interference between processes. To mitigate fragmentation issues, segmented translation is sometimes combined with paging for finer-grained allocation, though this hybrid approach introduces additional complexity.
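
As a concrete sketch of this check-then-add sequence, the following C fragment models a segment-table lookup with bound and permission checks; the segment_entry layout and the boolean fault signaling are illustrative assumptions rather than any specific architecture's descriptor format.

```c
#include <stdint.h>
#include <stdbool.h>
#include <stddef.h>

typedef struct {
    uint32_t base;    /* physical base address of the segment */
    uint32_t limit;   /* segment size in bytes */
    bool     write;   /* write permission bit */
} segment_entry;

/* Translate (selector, offset); returns true and sets *paddr on success. */
bool segment_translate(const segment_entry *table, size_t nsegs,
                       uint32_t selector, uint32_t offset,
                       bool is_write, uint32_t *paddr)
{
    if (selector >= nsegs)
        return false;                    /* invalid selector: fault */
    const segment_entry *seg = &table[selector];
    if (offset >= seg->limit)
        return false;                    /* bound check failed: fault */
    if (is_write && !seg->write)
        return false;                    /* protection violation: fault */
    *paddr = seg->base + offset;         /* Physical = Base + Offset */
    return true;
}
```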

Paged Translation

Paged address translation divides the virtual address into two primary components: a virtual page number (VPN), which identifies the page in virtual memory, and an offset, which specifies the byte position within that page. The MMU uses a page table to map the VPN to a corresponding physical frame number (PFN), leaving the offset unchanged to preserve intra-page addressing. This mechanism enables fixed-size allocation of memory in uniform blocks, typically avoiding external fragmentation associated with variable-sized segments. The translation process computes the physical address using the formula:

\text{Physical Address} = (\text{page table}[\text{VPN}] \ll \text{page shift}) + \text{offset}

where \text{page shift} = \log_2(\text{page size}), and the page table entry at the VPN index provides the PFN. For a standard 4 KiB page size, the shift is 12 bits, with the low 12 bits of the virtual address serving as the offset and the higher bits forming the VPN. In practice, large virtual address spaces necessitate multi-level page tables to manage memory efficiently without requiring a single, impractically large table; for instance, a two-level structure consists of a page directory (indexed by upper VPN bits) pointing to page tables (indexed by lower VPN bits), each containing entries for individual pages. The MMU traverses this hierarchy starting from a base register (e.g., CR3 in x86 architectures), using successive address bits to index each level until reaching the final page table entry with the PFN. This hierarchical approach reduces the overall table size by populating only necessary sub-tables for sparsely used address regions. Common page sizes include 4 KiB for fine-grained control, balancing allocation overhead with management complexity. To mitigate translation lookaside buffer (TLB) pressure—where frequent misses slow translation due to the limited number of cached mappings—modern MMUs support larger pages, such as 2 MiB, which cover more memory per entry and reduce the total mappings needed for a given working set. For example, using 2 MiB pages can decrease TLB entries by a factor of 512 compared to 4 KiB pages for the same region, improving performance in memory-intensive workloads by minimizing page table walks and cache pollution. These large pages are indicated in higher-level table entries, allowing the MMU to bypass lower levels during traversal.
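
The arithmetic above can be made concrete with a short C sketch, assuming 4 KiB pages and a flat page table; the pfn_of() helper is a hypothetical lookup that returns the physical frame number for a VPN.

```c
#include <stdint.h>

#define PAGE_SHIFT 12                         /* log2(4096) */
#define PAGE_MASK  ((1u << PAGE_SHIFT) - 1)

extern uint64_t pfn_of(uint64_t vpn);         /* hypothetical page table lookup */

uint64_t translate(uint64_t vaddr)
{
    uint64_t vpn    = vaddr >> PAGE_SHIFT;    /* virtual page number */
    uint64_t offset = vaddr & PAGE_MASK;      /* intra-page offset, unchanged */
    return (pfn_of(vpn) << PAGE_SHIFT) + offset;
}
```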

Segmentation Combined with Paging

Hybrid memory management schemes integrate segmentation for logical partitioning of the address space with paging for efficient physical memory allocation. In this approach, the virtual address is divided into a segment identifier and an offset; the segment identifier indexes a segment table to retrieve a descriptor containing the base address of a dedicated page table for that segment. The offset is then split into a page number and page offset, where the page number indexes the segment's page table to map to a physical frame, and the page offset remains unchanged to locate the byte within the frame. This two-stage translation process allows segments to be treated as collections of pages, enabling variable-sized logical units without requiring contiguous physical storage. The segment table typically consists of entries for each segment, each pointing to the base of its associated page table along with attributes like length limits and protection bits; per-segment page tables map pages within the segment to physical frames, supporting hierarchical structures to manage larger address spaces efficiently. The address format thus comprises the segment ID, page number (relative to the segment), and intra-page offset, facilitating modular organization where segments represent code, data, or stack regions. This structure supports sharing of individual segments across processes while pages handle fine-grained allocation. By leveraging segmentation's ability to define meaningful, variable-sized address space divisions—such as per-process or per-module boundaries—alongside paging's fixed-size units, the hybrid method achieves logical modularity and eliminates external fragmentation inherent in pure segmentation. Paging ensures that segments can be allocated non-contiguously in physical memory, reducing waste and supporting demand loading of pages within segments, which enhances overall resource utilization in multiprogrammed environments. This combination promotes machine independence and simplifies dynamic memory management without the contiguity constraints of traditional segmentation. However, the integration increases hardware complexity, as the memory management unit must perform sequential lookups across the segment table and page table, potentially doubling the memory accesses during translation compared to single-stage methods. Table walks may require multiple memory references unless mitigated by caches like translation lookaside buffers, and managing separate per-segment page tables adds overhead in space and maintenance. Despite these drawbacks, the approach balances flexibility and efficiency, particularly for systems requiring both logical protection and fragmentation avoidance.
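
A minimal sketch of this two-stage lookup appears below, assuming 4 KiB pages and illustrative table formats; the segment descriptor holds a per-segment page table and a length limit expressed in pages.

```c
#include <stdint.h>
#include <stdbool.h>
#include <stddef.h>

#define PAGE_SHIFT 12
#define PAGE_MASK  ((1u << PAGE_SHIFT) - 1)

typedef struct {
    uint64_t *page_table;   /* base of this segment's page table (PFN entries) */
    uint32_t  limit_pages;  /* segment length, in pages */
} seg_desc;

bool hybrid_translate(const seg_desc *segtab, size_t nsegs,
                      uint32_t seg_id, uint32_t offset, uint64_t *paddr)
{
    if (seg_id >= nsegs)
        return false;                          /* invalid segment: fault */
    const seg_desc *seg = &segtab[seg_id];
    uint32_t page = offset >> PAGE_SHIFT;      /* page number within segment */
    if (page >= seg->limit_pages)
        return false;                          /* segment bound exceeded */
    uint64_t pfn = seg->page_table[page];      /* second stage: page lookup */
    *paddr = (pfn << PAGE_SHIFT) | (offset & PAGE_MASK);
    return true;
}
```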

Core Functions Beyond Translation

Memory Protection Mechanisms

The Memory Management Unit (MMU) enforces memory protection through attributes embedded in translation structures, such as page table entries (PTEs), which define access permissions for memory regions. These protection bits typically include flags for read, write, and execute permissions, as well as mode-based restrictions like user/supervisor access. In the x86 architecture, for instance, PTEs feature the R/W bit (bit 1), which sets a page as read-only (0) or read/write (1), and the U/S bit (bit 2), which limits access to supervisor mode (0) or allows user-mode access (1) when set. Similarly, in ARM architectures, access permissions are controlled via the AP (Access Permission) fields in stage-1 translation tables, which specify read/write privileges per exception level (e.g., EL0 for user, EL1 for kernel), ensuring granular control over data and instruction fetches. These bits allow the MMU to validate every memory access against the requesting process's privileges, preventing unauthorized modifications or reads. When an access violates these protection attributes—such as attempting to write to a read-only page or access a supervisor-only region—the MMU generates a precise fault, halting the operation and providing diagnostic information to the operating system. In x86 systems, this triggers a page-fault exception (#PF, vector 14), where the faulting virtual address is captured in the CR2 register, and an error code indicates the violation type (e.g., bit 0 set in the error code pushed on the stack signals a protection violation rather than a non-present page). The MMU ensures the fault is synchronous and precise, reporting the exact instruction causing the issue to enable efficient handler invocation without corrupting the processor state. In ARMv8-A, such violations result in memory management faults (e.g., data abort for loads/stores or prefetch abort for instructions), with the Fault Status Register (FSR) or syndrome information detailing the cause, such as permission faults, allowing the exception handler to address the issue at the faulting address. This mechanism isolates erroneous or malicious accesses, trapping them before they propagate. Isolation features in the MMU extend protection by maintaining separate address spaces for processes through dedicated translation structures, preventing one process from accessing another's memory. Each process is assigned unique page tables, loaded into hardware registers like CR3 in x86 (pointing to the page directory base) or TTBR0_EL1 in AArch64 (for user-space mappings), ensuring virtual-to-physical translations are context-specific and non-interfering. Copy-on-write enhances this by allowing initial sharing of read-only pages between parent and child processes (e.g., after fork), marked with read-only protection bits; a write attempt triggers a fault, prompting the OS to duplicate the page and update permissions, thus optimizing memory usage while preserving isolation. Process-Context Identifiers (PCIDs) in x86 further refine this by tagging TLB entries per process, avoiding full flushes on context switches and maintaining isolation without undue performance overhead. Advanced controls bolster these mechanisms against specific threats, notably the NX (No-Execute) bit (or XD bit in x86), which marks pages as non-executable to block code injection attacks like buffer overflows. In x86-64, this bit (bit 63 in PTEs) is enabled via the IA32_EFER.NXE flag and checked during instruction fetches; if set on a page, it raises a page fault, preventing execution of malicious payloads in non-code regions. ARM equivalents include the XN (Execute-Never) attribute in translation table descriptors, which similarly prohibits instruction fetches from designated memory areas, configurable per page or granule to mitigate exploits targeting writable sections. These features integrate seamlessly with base permissions, providing layered defense without altering core translation flows.
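
A schematic of the per-access permission validation described above, using x86-style PTE bit positions, might look like the following C sketch; the fault reporting is reduced to a boolean, and nuances such as supervisor writes under CR0.WP are omitted.

```c
#include <stdint.h>
#include <stdbool.h>

#define PTE_P   (1ull << 0)    /* present */
#define PTE_RW  (1ull << 1)    /* 1 = writable */
#define PTE_US  (1ull << 2)    /* 1 = user-mode access allowed */
#define PTE_NX  (1ull << 63)   /* no-execute (requires EFER.NXE) */

typedef enum { ACCESS_READ, ACCESS_WRITE, ACCESS_EXEC } access_t;

bool access_permitted(uint64_t pte, access_t kind, bool user_mode)
{
    if (!(pte & PTE_P))
        return false;                        /* not present: page fault */
    if (user_mode && !(pte & PTE_US))
        return false;                        /* supervisor-only page */
    if (kind == ACCESS_WRITE && !(pte & PTE_RW))
        return false;                        /* read-only page */
    if (kind == ACCESS_EXEC && (pte & PTE_NX))
        return false;                        /* NX: block code execution */
    return true;
}
```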

Virtual Memory Management

The Memory Management Unit (MMU) facilitates virtual memory by supporting demand paging, where pages are loaded into physical memory only upon access, allowing processes to operate in a virtual address space larger than available physical RAM. In this mechanism, the MMU uses page tables to translate virtual addresses to physical ones, with a present bit in each page table entry (PTE) indicating whether the corresponding page is resident in physical memory. If the present bit is unset during an access attempt, the MMU generates a page fault, trapping to the operating system (OS) to handle loading the page from secondary storage, such as disk or swap space. This on-demand loading minimizes initial memory usage and enables efficient execution of large programs. Page replacement is another key aspect of virtual memory management supported by the MMU, where the hardware provides fault information to the OS upon detecting non-resident pages, enabling software algorithms to decide which pages to evict from physical memory. The MMU does not implement replacement policies in hardware but assists by updating PTEs with bits like the accessed or dirty flags, which the OS uses to approximate algorithms such as Least Recently Used (LRU), where the least recently accessed page is selected for eviction to minimize future faults. When a free frame is needed, the OS may write modified pages back to swap space before loading new ones, ensuring the virtual address space remains consistent. Swapping extends virtual memory by allowing entire processes or inactive pages to be moved to disk when physical memory is overcommitted, with the operating system detecting such conditions through elevated page-fault rates that signal thrashing—a condition where the system spends more time swapping pages than executing useful work. To mitigate thrashing, the OS may suspend low-priority processes or adjust memory allocation, while the MMU supports large pages (e.g., 2MB or 1GB sizes in modern architectures) to reduce the number of PTEs and TLB entries, thereby lowering translation overhead and fault frequency in memory-intensive systems. Mapping techniques in virtual memory, enabled by the MMU, allow shared memory between processes by mapping the same physical pages into multiple virtual address spaces via shared PTEs, promoting efficient resource sharing without duplicating data in RAM. The total size of a process's virtual address space is calculated as the number of virtual pages multiplied by the page size, providing an addressing capacity independent of physical constraints—for instance, a 32-bit system with 4KB pages yields 4GB of virtual address space (2^32 bytes). This abstraction supports features like copy-on-write for efficient forking, where pages are initially shared and duplicated only on modification.
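
The division of labor described above, hardware detection and software policy, can be sketched as an OS-side fault handler; every helper here (pte_for(), find_free_frame(), choose_victim_frame(), and so on) is a hypothetical placeholder for the OS's own mechanisms.

```c
#include <stdint.h>

extern uint32_t *pte_for(uint64_t vaddr);   /* locate the PTE for an address */
extern int  find_free_frame(void);          /* returns -1 if memory is full */
extern int  choose_victim_frame(void);      /* OS policy, e.g., LRU approximation */
extern void write_to_swap(int frame);
extern void read_from_swap(uint64_t vaddr, int frame);
extern void set_mapping(uint32_t *pte, int frame); /* sets PFN + present bit */

void handle_page_fault(uint64_t fault_vaddr)
{
    uint32_t *pte = pte_for(fault_vaddr);
    int frame = find_free_frame();
    if (frame < 0) {                        /* memory overcommitted: evict */
        frame = choose_victim_frame();
        write_to_swap(frame);               /* write back if the dirty bit is set */
        /* the OS would also clear the victim's present bit here */
    }
    read_from_swap(fault_vaddr, frame);     /* load the page on demand */
    set_mapping(pte, frame);
    /* on return, the faulting instruction is re-executed and now succeeds */
}
```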

Benefits and Performance Impacts

Efficiency and Resource Utilization

The translation lookaside buffer (TLB), a critical cache within the MMU, significantly enhances translation performance by storing recent virtual-to-physical address translations, achieving hit rates typically exceeding 99% due to temporal and spatial locality in memory accesses. This high hit rate reduces translation latency to 0.5–1 clock cycles on hits, compared to 10–100 cycles for misses that require page table walks. TLB designs employ associativity (e.g., 4-way to fully associative) and replacement policies such as least recently used (LRU) or random to maximize coverage and minimize conflicts, with studies showing LRU outperforming random or FIFO policies in fully associative TLBs by up to 20% in miss rate reduction for typical workloads. These optimizations ensure that most memory accesses bypass slower page table lookups, offloading overhead from the CPU and improving overall system throughput. By performing address translation in hardware, the MMU eliminates the substantial overhead of software-based translation, which can consume hundreds to thousands of CPU cycles per access in systems lacking hardware support, whereas hardware translation completes in nanoseconds (e.g., 1–10 ns at multi-GHz clocks). This offload is particularly beneficial for context switches, as the MMU handles per-process address mappings transparently without requiring extensive software reconfiguration, reducing switch times by avoiding iterative table traversals and enabling sub-microsecond transitions in modern systems. Consequently, applications experience lower latency in multitasking environments, with MMUs contributing to 2–5x faster effective access patterns compared to software-managed alternatives. Virtual memory mechanisms enabled by the MMU facilitate resource overcommitment, allowing the total virtual address space across processes to exceed physical RAM capacity while optimizing utilization through demand paging and swapping. This approach ensures efficient RAM allocation, as idle processes release physical pages to active ones, achieving up to 2–3x higher memory utilization in server workloads without performance degradation. Additionally, employing large pages (e.g., 2 MB or 1 GB) in MMU configurations expands TLB coverage, reducing misses by 50–90% in memory-intensive applications by mapping larger contiguous regions with fewer entries. Such strategies minimize translation overheads, with empirical results showing 10–30% improvements in memory-bound task performance.
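
As a rough illustration of why these hit rates matter, the effective translation cost follows a standard expected-value calculation; the mid-range figures below (99% hit rate, 1-cycle hit, 50-cycle miss) reuse the ranges quoted above and are not measurements of any particular processor:

\text{Effective Translation Time} = h \cdot t_{\text{hit}} + (1 - h) \cdot t_{\text{miss}} = 0.99 \times 1 + 0.01 \times 50 \approx 1.5 \text{ cycles}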

Security and Isolation Advantages

The Memory Management Unit (MMU) provides fundamental security through process isolation by mapping each process to a distinct virtual address space, ensuring that one process cannot read or modify the memory allocated to another. This is enforced via page tables that associate physical memory pages exclusively with specific processes, trapping any cross-process attempts as faults. As a result, even if a process is compromised, its reach is confined, preventing widespread data leakage or corruption across the system. In mitigating malware, MMU protection bits play a critical role by enforcing attributes such as read-only, no-execute, or write-only on memory pages, which directly thwart exploits like return-oriented programming (ROP) attacks that attempt to chain existing code gadgets. For instance, the no-execute bit (NX or XD) prevents code execution from data regions, blocking ROP chains that repurpose stack or heap memory. Additionally, Address Space Identifiers (ASIDs) enhance this by tagging translation lookaside buffer (TLB) entries with process-specific values, allowing secure context switches without full TLB flushes and reducing the overhead of maintaining isolated environments during multitasking. Kernel protection is bolstered by the MMU's support for privilege levels, where supervisor or kernel mode restricts user-mode processes from accessing sensitive memory regions, generating traps for any invalid attempts. This ring-based hierarchy, typically with the kernel in ring 0 and user processes in higher rings, ensures that kernel access requires explicit mode switches via system calls, isolating the kernel from erroneous or malicious user code.

Specialized and Extended MMU Variants

Input-Output Memory Management Units (IOMMUs)

An Input-Output Memory Management Unit (IOMMU) is a hardware component that translates virtual addresses generated by input/output (I/O) devices, such as peripherals performing direct memory access (DMA), into physical addresses in system memory, functioning analogously to the central processing unit's (CPU) memory management unit (MMU) but specifically for device-initiated memory operations. This translation enables devices to operate within a virtualized address space, decoupling their memory accesses from the physical layout and allowing for dynamic remapping without device reconfiguration. IOMMUs emerged as a response to the limitations of traditional DMA, where devices directly access physical memory addresses, which can lead to fragmentation and security vulnerabilities in modern systems with virtualization. Key functions of an IOMMU include support for scatter-gather operations, which allow devices to transfer data to or from non-contiguous physical buffers by mapping a contiguous virtual buffer to multiple scattered physical pages, thereby optimizing I/O performance for applications like network packet processing or storage operations. Additionally, IOMMUs provide protection mechanisms, such as permission bits in page tables and fault isolation, to prevent malicious or erroneous DMA attacks by blocking unauthorized device access to system memory regions and generating interrupts for invalid operations. These protections are crucial in environments with untrusted peripherals, ensuring that devices cannot overwrite critical system data or user processes. Prominent IOMMU architectures include Intel's Virtualization Technology for Directed I/O (VT-d), introduced in 2008 as part of the Nehalem processor family, which not only handles address translation but also features interrupt remapping to virtualize device interrupts for guest virtual machines (VMs), enhancing isolation in hypervisor environments. Similarly, AMD's I/O Virtualization Technology (AMD-Vi) provides comparable functionality for AMD processors, supporting address translation and protection for DMA operations in virtualized environments. The ARM System Memory Management Unit (SMMU), specified in versions like SMMU v3, supports stage-1 and stage-2 translations for nested virtualization, enabling efficient device assignment to VMs on ARM-based systems such as mobile and server platforms. These implementations integrate with system interconnects like PCI Express for Intel and AMD, and AMBA for ARM, ensuring seamless operation with high-speed peripherals. The primary benefits of IOMMUs lie in enabling secure device passthrough in virtualized setups, where entire I/O devices can be directly assigned to guest VMs without hypervisor mediation, improving performance by avoiding software-mediated translation overhead during transfers. Furthermore, by handling large DMA transfers independently, IOMMUs reduce CPU involvement and cache pollution, leading to better overall system throughput in data-intensive workloads like graphics rendering or high-speed networking. In practice, these advantages have been demonstrated in virtualization environments, where IOMMU-enabled systems show significantly lower latency in VM I/O compared to emulated alternatives.
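
Conceptually, building a scatter-gather mapping amounts to pointing consecutive I/O virtual pages at arbitrary physical frames, as in the C sketch below; iommu_map_page() and the permission flags are hypothetical stand-ins for a platform's real I/O page table interface, not an actual driver API.

```c
#include <stdint.h>
#include <stddef.h>

#define PAGE_SIZE 4096u
#define IOMMU_READ  0x1
#define IOMMU_WRITE 0x2

/* hypothetical helper: installs one I/O page table entry */
extern void iommu_map_page(uint64_t iova, uint64_t phys, unsigned perms);

/* Map n scattered physical pages at a contiguous I/O virtual base. */
void map_scatter_gather(uint64_t iova_base, const uint64_t *phys_pages,
                        size_t n)
{
    for (size_t i = 0; i < n; i++) {
        /* each device-visible page maps to an arbitrary physical frame */
        iommu_map_page(iova_base + i * PAGE_SIZE, phys_pages[i],
                       IOMMU_READ | IOMMU_WRITE);
    }
    /* the device can now DMA to [iova_base, iova_base + n*PAGE_SIZE)
     * as if the buffer were physically contiguous */
}
```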

Software-Assisted MMU Implementations

In software MMUs, the operating system handles address translation primarily through exception traps triggered on TLB misses, rather than relying on dedicated hardware walkers, enabling virtual memory support in architectures lacking full hardware page table walkers. Upon a TLB miss, the processor generates an exception that vectors to an OS handler, which walks the process's page table—often a multi-level structure—and installs the translation into the TLB entry. This approach was prominent in early MIPS implementations, where the TLB exception handler, typically 8-20 instructions long, uses registers like EntryHi and EntryLo to compute the physical frame number and update the TLB via instructions such as TLB Write Index. In embedded systems without hardware MMUs, such as certain low-end microcontrollers, software MMUs reproduce functionality entirely in the OS kernel; for instance, techniques like MEMMU expand effective memory via data compression and software paging, avoiding hardware costs while supporting task isolation in resource-constrained environments. To accelerate frequent walks, shadow page tables maintain a parallel set of mappings that mirror guest or process tables, reducing redundant traversals during virtualization. Hybrid models combine a TLB for fast lookups with software-managed table walks and refills, balancing performance and control in systems like Linux on MIPS. In Linux on MIPS, the kernel's TLB refill handlers—invoked via dedicated exception vectors—probe the page table using the Context register to locate page table entries, then update the TLB while handling attributes like cacheability and validity. This software layer allows the OS to enforce custom policies, such as variable page sizes or inverted page tables, without hardware constraints. In RISC-V, whose optional MMU extensions were introduced post-2011 in the privileged specification, hybrid approaches permit software refills for cores omitting full hardware walkers, though the specification discourages pure software-managed TLBs due to performance bottlenecks in high-throughput scenarios. These implementations trade higher latency—often hundreds of cycles per miss from trap overhead and handler execution—for greater flexibility in page table formats and policy enforcement compared to rigid hardware MMUs. Software-managed refills introduce interrupt costs and potential instruction cache pollution, increasing virtual memory overhead by 10-30% in simulated workloads, but enable adaptations like dynamic relocation in embedded or legacy systems. In modern contexts as of 2025, software-assisted MMUs remain relevant in hypervisors like KVM for nested virtualization, where the shadow MMU emulates guest page tables in software to translate nested virtual addresses to host physical addresses, syncing changes on events like CR3 updates or guest page table writes to maintain isolation without full hardware nesting support. This approach, using structures like shadow page table entries (SPTEs) to track multi-level translations, supports scalable VM hosting but incurs emulation overhead, mitigated by techniques like reverse mapping for efficient invalidations.
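
The following C sketch models a software TLB refill in the style of the MIPS handlers described above; the accessor functions and tlb_write() are hypothetical stand-ins for the architecture's privileged instructions and registers (e.g., mtc0/tlbwr on MIPS), and a flat page table is used for brevity.

```c
#include <stdint.h>

#define PAGE_SHIFT 12
#define PTE_VALID  (1u << 0)

extern uint64_t read_bad_vaddr(void);               /* faulting virtual address */
extern uint64_t *page_table_base(void);             /* current process's table */
extern void tlb_write(uint64_t vpn, uint64_t pte);  /* install a TLB entry */
extern void handle_page_fault(uint64_t vaddr);      /* full OS fault path */

void tlb_refill_handler(void)
{
    uint64_t vaddr = read_bad_vaddr();
    uint64_t vpn   = vaddr >> PAGE_SHIFT;
    uint64_t pte   = page_table_base()[vpn];  /* flat table for brevity */
    if (!(pte & PTE_VALID)) {
        /* no mapping cached in the table: escalate to demand paging */
        handle_page_fault(vaddr);
        return;
    }
    tlb_write(vpn, pte);   /* install translation; the access is retried */
}
```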

Historical and Architectural Implementations

Early Systems (IBM and VAX)

The IBM System/360 Model 67, first delivered in 1966, represented a pioneering implementation of a memory management unit (MMU) in commercial computing, featuring the first dynamic address translation (DAT) hardware designed specifically to enable time-sharing systems. This unit, often referred to as the "DAT box," allowed for virtual memory support by translating virtual addresses to real addresses on the fly, facilitating multiprogramming and resource sharing among multiple users without fixed memory assignments. The system employed 24-bit addressing, supporting up to 16 MB of virtual storage organized into 4,096 pages of 4 KB each, which marked a significant advancement over prior fixed-addressing mainframes by enabling dynamic relocation of programs in real storage. Building on this foundation, the IBM System/370 series, whose virtual storage support was announced in 1972, refined and standardized dynamic address translation across its models, incorporating a two-level address translation mechanism using segment tables and page tables to map virtual addresses to real storage. In this scheme, a virtual address is first indexed into a segment table to locate the base of a page table, which then provides the mapping to the real page frame, supporting efficient virtual storage management for operating systems like OS/VS and VM/370. The System/370 extended real addressing to 26 bits via an optional feature, allowing access to up to 64 MB of physical storage by utilizing reserved bits in page table entries, which addressed the growing demands of larger workloads while maintaining compatibility with System/360 software. Additionally, the architecture included protection keys, where each 2 KB or 4 KB block of real storage is assigned a 4-bit key that must match the processor's current key for access, providing hardware-enforced protection to prevent unauthorized reads or modifications by other programs or users. The VAX architecture, introduced by Digital Equipment Corporation in 1977 with the VAX-11/780, implemented a sophisticated paged virtual memory system using an integrated MMU that supported 32-bit virtual addressing, providing a vast 4 GB address space divided into user, control, and system regions. Virtual addresses were translated via process-specific page tables—specifically, per-process P0 (user) and P1 (control) tables located in system virtual address space—where a 32-bit address breaks into a 21-bit virtual page number and 9-bit byte offset for 512-byte pages, the fixed unit of mapping and protection. This design allowed each process to maintain its own page tables, enabling independent virtual address spaces while sharing kernel resources, and incorporated hardware support for demand paging to load pages only as needed. For innovations in efficiency, the VAX employed a hierarchical page table structure with base/bounds registers to manage table allocation dynamically, reducing overhead during context switches by avoiding full table flushes and leveraging system-space residency for shared tables.
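
The VAX address split described above is easy to verify with a few lines of C; the example address is arbitrary, and only the bit-field boundaries (2-bit region, 21-bit page number, 9-bit offset) come from the architecture.

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t vaddr  = 0x00012345;               /* arbitrary example address */
    uint32_t offset = vaddr & 0x1FF;            /* low 9 bits: byte in 512 B page */
    uint32_t vpn    = (vaddr >> 9) & 0x1FFFFF;  /* next 21 bits: page number */
    uint32_t region = vaddr >> 30;              /* top 2 bits: user/control/system */
    printf("vpn=%u offset=%u region=%u\n", vpn, offset, region);
    return 0;
}
```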

RISC and Modern Architectures (ARM, MIPS, PowerPC, x86)

The Memory Management Unit (MMU) in Reduced Instruction Set Computing (RISC) architectures and modern processors emphasizes efficiency, simplicity, and support for virtualization, evolving from early TLB-centric designs to multi-stage translations and larger address spaces to meet demands for embedded systems, servers, and cloud computing up to 2025. These implementations prioritize hardware support for paging while allowing software flexibility, contrasting with more complex CISC mechanisms by reducing instruction overhead and focusing on fixed or variable granule sizes for address translation. In ARM architectures, the MMU introduced in ARMv8 (2011) supports 4KB page granules as the base translation unit, enabling 48-bit virtual addressing through a multi-level walk using 4 levels for the AArch64 execution state. This design facilitates efficient virtual-to-physical address translation via a translation lookaside buffer (TLB) and supports Stage 1 translation for guest virtual to intermediate physical addresses, with Stage 2 translation added for hypervisor control in virtualization scenarios. EL2 exception level support, introduced in ARMv8 and refined by 2013, allows secure nested virtualization by enabling the hypervisor to perform Stage 2 translations on intermediate physical addresses, ensuring isolation without excessive overhead. MIPS processors pioneered TLB-based MMUs in RISC designs, with the R3000 (1988) implementing a fully associative 64-entry TLB for paging without dedicated hardware page tables, relying on software to manage entries for 4KB pages and handle misses via exceptions. This approach provided 32-bit virtual addressing with kernel-managed translations, emphasizing simplicity for embedded and workstation use. Extending to 64-bit in MIPS64, the architecture supports variable page sizes from 4KB up to 256MB, configurable per TLB entry to optimize for large memory mappings and reduce TLB pressure in high-memory systems. PowerPC MMUs in Book E variants (developed in the late 1990s for embedded applications) adopted a software-managed model in place of the OEA-defined hashed page tables, allowing operating systems to implement custom structures while using hardware for TLB lookups and 32-bit addressing with 4KB pages. This flexibility departed from fixed hardware tables, which enabled translations via hash anchors for collision resolution in page lookups. In modern iterations like IBM's Power10 (2021), the MMU shifted to tree-based (radix) page tables for 64-bit addressing, supporting up to 51-bit physical addresses and improving translation speed for large-scale server workloads by using a balanced radix tree over traditional hashing. The x86 architecture, while CISC-based, incorporates RISC-like efficiencies in its MMU evolution, starting with the 80386 (1985) that combined segmentation for variable-sized spaces with paging for 4KB fixed pages and 32-bit linear addressing, using two-level page tables for virtual-to-physical translation. The AMD64 extension (2003) expanded this to 64-bit mode with initial 48-bit virtual addressing via four-level paging, later enhanced to support 57-bit virtual addresses through optional five-level paging. For virtualization, Intel's Extended Page Tables (EPT, introduced in Nehalem processors) and AMD's Nested Page Tables (NPT) enable two-dimensional address translation, mapping guest physical to host physical addresses in hardware to reduce virtualization overhead. By 2025, trends in these architectures include Intel's widespread adoption of five-level paging in processors like the 5th Gen Xeon Scalable (Emerald Rapids, 2023), enabling 57-bit virtual addressing to accommodate over 128 petabytes of addressable RAM per process, driven by AI and cloud workloads requiring massive memory footprints without address space fragmentation. This extension maintains backward compatibility with 48-bit modes while scaling translations via an additional PML5 table level, reducing page table walks for sparse large-address uses.
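
For illustration, the 57-bit five-level decomposition works out as follows in C; the field names mirror the conventional x86-64 level names (PML5, PML4, PDPT, PD, PT), and this is only the index arithmetic, not a full walker.

```c
#include <stdint.h>

/* Split a 57-bit virtual address into five 9-bit table indices
 * plus a 12-bit page offset, per x86-64 five-level paging. */
typedef struct {
    unsigned pml5, pml4, pdpt, pd, pt, offset;
} va_indices;

va_indices split_va57(uint64_t va)
{
    va_indices v;
    v.offset = va & 0xFFF;            /* bits 11:0   */
    v.pt     = (va >> 12) & 0x1FF;    /* bits 20:12  */
    v.pd     = (va >> 21) & 0x1FF;    /* bits 29:21  */
    v.pdpt   = (va >> 30) & 0x1FF;    /* bits 38:30  */
    v.pml4   = (va >> 39) & 0x1FF;    /* bits 47:39  */
    v.pml5   = (va >> 48) & 0x1FF;    /* bits 56:48  */
    return v;
}
```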

Alternatives to Traditional MMUs

Bank Switching Techniques

Bank switching is a hardware-based technique employed to extend the effective memory capacity in systems where the processor's address space is limited, such as 16-bit addressing restricting direct access to 64 KB. It operates by dividing physical memory into fixed-size banks, typically of 64 KB or smaller, and using dedicated hardware mechanisms to select which bank is mapped into the processor's visible address space at any given time. This selection is achieved through writes to specific registers or I/O ports, which act as decode lines to enable or disable banks without performing address translation or relocation beyond simple remapping. In practice, a single I/O line can toggle between two banks, while multiple lines—such as four bits—allow switching among up to 16 banks, enabling access to a total of 1 MB when each bank is 64 KB. The mechanism relies on memory-mapped I/O or port-mapped I/O to control the bank select register, ensuring that only one bank is active in the address window while others remain dormant until explicitly switched. This approach avoids the overhead of a full memory management unit (MMU) by forgoing virtual-to-physical translation, instead providing direct physical access to the selected bank. This technique found widespread use in resource-constrained environments during the 1970s and 1980s, particularly in home computers where cost and simplicity were paramount. For instance, the Commodore 64, released in 1982, utilized bank switching to achieve 64 KB of total RAM by selectively mapping 8 KB blocks of RAM in place of ROM areas, such as the BASIC ROM or character generator, thereby expanding usable memory beyond the 6502 processor's 48 KB limit without requiring additional address lines. Similarly, the Apple IIe employed bank switching to increase ROM space for expansions while maintaining compatibility with earlier models. In embedded systems, such as those based on high-speed microcontrollers like the DS80C320, bank switching expanded program and data memory up to 512 KB using general-purpose I/O pins, circumventing MMU complexity to keep designs inexpensive and power-efficient. Despite its utility, bank switching lacks inherent memory protection mechanisms, as there are no hardware-enforced boundaries or access controls between banks, making systems vulnerable to unauthorized overwrites or reads across the entire physical memory. It also does not support virtual addressing, requiring software to manage all mappings explicitly, which often results in memory fragmentation as fixed bank sizes do not align well with variable allocation needs. Manual bank management by programmers frequently leads to bugs, such as inadvertent switches during interrupts that corrupt program flow or data, particularly when interrupt vectors must be duplicated or preserved in a common non-switched area. Compared to MMU-based systems, bank switching introduces noticeable performance overheads; explicit bank switches via I/O writes typically require several CPU cycles, resulting in latencies on the order of microseconds, whereas MMU translation lookaside buffer (TLB) hits resolve in nanoseconds for seamless virtual addressing. The maximum addressable memory is simply the sum of all bank sizes, limited only by available hardware pins and storage, but this total is not simultaneously accessible, contrasting with MMUs that enable larger, contiguous virtual spaces through paging.
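
A minimal C sketch of the mechanism is shown below, assuming a hypothetical 8-bit system with a memory-mapped bank-select register at 0xFF00 and a 16 KB switched window; both addresses and the geometry are illustrative, not any specific machine's layout.

```c
#include <stdint.h>

#define BANK_SELECT (*(volatile uint8_t *)0xFF00) /* hypothetical I/O register */
#define BANK_WINDOW ((volatile uint8_t *)0x8000)  /* 16 KB switched window */
#define BANK_SIZE   0x4000u

/* Read one byte from a linear address spanning the banked memory. */
uint8_t banked_read(uint32_t linear)
{
    BANK_SELECT = (uint8_t)(linear / BANK_SIZE);  /* map the right bank in */
    return BANK_WINDOW[linear % BANK_SIZE];       /* then access the window */
}
/* Note: real code must also guard against interrupts switching banks
 * mid-access, one of the pitfalls noted above. */
```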

Capability-Based and Tagged Memory Systems

Capability-based memory management systems represent an alternative to traditional page-based MMUs by enforcing protection through unforgeable references known as capabilities, which encapsulate object addresses, bounds, and permissions. These systems originated in the 1960s as a paradigm for secure multiprogramming, where capabilities serve as the primary means of referencing and protecting objects, eliminating the need for separate translation hardware in some designs. Unlike MMUs that rely on kernel-managed page tables for coarse-grained protection, capability systems distribute access rights to user-level processes, enabling fine-grained, decentralized resource sharing. In capability-based architectures, each memory access requires presenting a valid capability to hardware, which verifies rights (e.g., read, write, execute) and bounds before granting access, thus providing isolation without relying on virtual-to-physical translation. Seminal implementations include the Cambridge CAP computer (1970s), which used capabilities for both protection and addressing in a segmented memory model, and the Hydra system on the Carnegie-Mellon Multi-Mini-Processor (C.mmp), demonstrating object-oriented protection kernels. These systems often integrate capability registers or lists in the processor, reducing overhead compared to page table walks while supporting dynamic object creation and revocation. Modern capability-based approaches, such as the CHERI (Capability Hardware Enhanced RISC Instructions) extension to RISC-V and ARM, extend this model to retrofit legacy systems, providing pointer-based capabilities that bound memory accesses at the hardware level without a full MMU overhaul. Evaluations show CHERI reduces memory-safety errors by enforcing bounds checks on every pointer dereference, with performance overheads under 5% for many workloads due to optimized capability compression. This approach prioritizes security in capability-aware operating systems like seL4, where capabilities replace traditional ACLs for least-privilege enforcement. Tagged memory systems offer another MMU alternative by appending metadata tags—typically 4-16 bits per word or cache line—to memory locations, enabling hardware to enforce policies like memory safety and isolation directly during loads and stores. Originating in the 1960s with architectures like the Burroughs B5000, which used tag bits to distinguish data words from control words and prevent unauthorized modifications, tagged memory avoids paging by validating tags on access rather than translating addresses. This hardware-enforced tagging supports fine-grained protection, such as confining data flows between security domains, without the context-switch costs of traditional page-based isolation. A foundational modern example is HardBound (2008), which augments the x86 architecture by adding tag bits (1-4 bits per word) to indicate pointers and storing base, bound, and permissions metadata in shadow memory using compressed encodings, preventing spatial safety violations like buffer overflows through runtime checks integrated into the memory pipeline. HardBound achieves this with minimal hardware changes, with average performance overheads of 5-9% on the Olden benchmarks due to bounds checks, though tag propagation adds near-zero overhead in optimized code. Subsequent systems, including ARM's Memory Tagging Extension (MTE, introduced in ARMv8.5-A), use 4-bit tags for randomized memory tagging to detect use-after-free and out-of-bounds errors, providing probabilistic protection compatible with existing MMUs.
Both capability-based and tagged systems enhance or supplant traditional MMUs by shifting protection to object-level or data-level semantics, improving resilience against exploits while maintaining compatibility with flat address spaces in resource-constrained environments. Historical deployments, such as IBM's System/38 (1978) combining tagged objects with capabilities, demonstrated scalable database processing without paging overhead. Ongoing research, including designs like CHERI's tagged capabilities, aims to balance performance and robustness in embedded and general-purpose computing.
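
A schematic model of the capability check both families rely on appears below; the struct layout is purely illustrative (CHERI, for example, uses a compressed 128-bit encoding plus a 1-bit validity tag), and the boolean result stands in for a hardware exception.

```c
#include <stdint.h>
#include <stdbool.h>

#define CAP_READ  0x1
#define CAP_WRITE 0x2

typedef struct {
    uint64_t base;      /* start of the referenced object */
    uint64_t length;    /* object size in bytes */
    uint32_t perms;     /* permission mask (CAP_READ, CAP_WRITE, ...) */
    bool     tag;       /* validity tag: cleared if forged or corrupted */
} capability;

bool cap_check(const capability *cap, uint64_t addr, uint64_t size,
               uint32_t needed)
{
    if (!cap->tag)
        return false;                    /* unforgeable: no tag, no access */
    if ((cap->perms & needed) != needed)
        return false;                    /* insufficient rights */
    if (addr < cap->base || addr + size > cap->base + cap->length)
        return false;                    /* out of bounds */
    return true;
}
```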

    Mar 5, 2008 · HardBound is a hardware design using a bounded pointer to enforce spatial safety in C programs, addressing issues with software-only approaches.