Logical address

A logical address, also referred to as a virtual address, is the address generated by the central processing unit (CPU) during program execution to reference memory locations within a process's address space. This address is distinct from the physical address seen by the memory hardware, as it is dynamically translated by the memory management unit (MMU) to map onto actual physical memory locations, enabling efficient memory utilization without requiring contiguous allocation. In operating systems, logical addresses form the foundation of virtual memory management, allowing each process to perceive a large, continuous, and isolated address space, often much larger than the available physical memory, such as 2^32 bytes (4 GB) on 32-bit systems or 2^48 bytes (256 TB) on typical 64-bit systems. This abstraction supports key features such as memory protection, where processes cannot access each other's memory, and relocation, permitting programs to run regardless of their loading position in physical memory. Address binding, the process of mapping logical to physical addresses, can occur at compile time, load time, or execution time, with modern systems favoring dynamic binding at execution time for flexibility. The translation mechanism typically involves page tables for paging or segment tables for segmentation. In paging, a logical address is divided into a page number and an offset; in segmentation, into a segment number and an offset, with the MMU using a translation lookaside buffer (TLB) for fast lookups to minimize performance overhead. This approach facilitates demand paging, swapping of pages between main memory and secondary storage, and sharing of code or data among processes while maintaining isolation. Overall, logical addressing is essential for multitasking operating systems, enhancing security, flexibility, and efficiency in computer architectures.

Fundamentals

Definition

A logical address, also referred to as a virtual address, is the address generated by the CPU during program execution to reference a specific byte or location within the program's contiguous address space. This address represents a symbolic or abstract reference from the perspective of the executing process, independent of the actual physical memory configuration. Logical addresses enable programs to operate as if they have access to a dedicated, large memory region, abstracting away the complexities of limited physical resources. Key characteristics of logical addresses include their fixed bit widths, which vary by architecture, typically 32 bits in legacy x86 systems or 64 bits in modern x86-64 and ARM64 processors, to define the size of the addressable space. These addresses are decoupled from the underlying physical memory layout, promoting isolation by ensuring each program views its own private memory view without interference from others, and supporting relocation to simplify loading. The concept originated in the early 1960s with the development of virtual memory systems, first implemented in the Atlas computer around 1962, where it facilitated multiprogramming by allowing programs to use addresses without direct knowledge of physical hardware placement. This approach, part of the Atlas's "one-level store" design, treated drum and core memory as a unified large store, marking a shift from fixed physical addressing to dynamic, user-transparent address translation. As a basic example, consider a program in which a variable is defined at an offset of 0x1000 relative to the start of its segment; the CPU generates this as the logical address 0x1000, oblivious to the segment's actual placement in physical memory, which may change due to relocation or paging. Logical addresses form a foundational element of memory management, enabling efficient resource sharing in multitasking environments.

Relation to Physical Address

A physical address refers to the actual location in the computer's main memory (RAM) where data is stored, and it is generated by the memory management unit (MMU) after translating the corresponding logical address. In contrast, the logical address is produced by the CPU during program execution and represents a reference within the process's address space, which is then mapped to a physical address to enable actual memory access. This translation process ensures that programs operate within an abstracted view of memory, independent of the underlying physical layout. Key differences between logical and physical addresses lie in their generation, scope, and flexibility. Logical addresses are generated by the CPU and are specific to each executing process, allowing the same logical address value to exist across multiple processes without conflict. Physical addresses, however, are produced by the MMU and represent globally unique locations in the physical memory, visible only to the hardware. A significant advantage of logical addresses is their support for relocation: programs can be loaded into different physical memory locations at load or run time without requiring modifications to the program code, as the mapping handles the offset adjustments. The use of logical addresses has profound implications for system design, particularly in enabling protection and sharing among processes. By isolating each process within its own logical address space, the system prevents one process from accessing or corrupting the memory of another, thereby enhancing security and stability. Logical addresses also facilitate efficient sharing of code or data segments between cooperating processes, as multiple logical references can map to the same physical location. Conversely, physical addresses manage the tangible aspects of memory, including data storage in RAM and interactions with hardware caches, where locality and actual placement determine performance. For instance, a logical address such as 0x1000 in one process might be translated to the physical address 0xA000, while the same logical address in another process could map to 0xB000, demonstrating relocation without altering the program's logic. This relation is most evident in virtual memory systems, where logical-to-physical mapping underpins the abstraction.
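The per-process mapping described above can be sketched with a small illustrative example. The page tables, frame numbers, and 4 KB page size here are assumptions chosen to reproduce the 0x1000 → 0xA000 / 0xB000 example, not the data structures of any real operating system.

```python
# Illustrative sketch: per-process mappings let the same logical address
# resolve to different physical addresses in different processes.

PAGE_SIZE = 4096  # 4 KB pages (assumed)

# Hypothetical per-process page tables: logical page -> physical frame
page_table_a = {0x1: 0xA}   # process A: logical page 1 -> frame 0xA
page_table_b = {0x1: 0xB}   # process B: logical page 1 -> frame 0xB

def translate(page_table, logical_addr):
    """Map a logical address to a physical address via a page table."""
    page = logical_addr // PAGE_SIZE
    offset = logical_addr % PAGE_SIZE
    frame = page_table[page]           # a real MMU would fault if unmapped
    return frame * PAGE_SIZE + offset

# The same logical address 0x1000 maps to different physical locations:
print(hex(translate(page_table_a, 0x1000)))  # 0xa000
print(hex(translate(page_table_b, 0x1000)))  # 0xb000
```

Because each process carries its own table, neither process can name the other's frames, which is the isolation property discussed above.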

Memory Management Context

Role in Virtual Memory

Logical addresses play a central role in virtual memory systems by forming the basis of a per-process address space, which provides an abstraction of a large, contiguous memory region that can exceed the size of physical RAM. This allows each process to generate and use addresses within its own isolated logical view of memory, independent of the physical layout or the locations used by other processes. By decoupling the program's addressing from physical constraints, virtual memory enables efficient resource sharing among multiple processes on the same hardware. Key benefits of logical addresses in this context include support for demand paging, where only the required portions of a process's address space are loaded into physical RAM upon access, thereby optimizing memory usage and reducing initial loading times. They also enable swapping, which temporarily moves inactive process pages to secondary storage to accommodate active workloads, and overcommitment, permitting the sum of allocated virtual spaces across processes to surpass physical limits through intelligent paging decisions. Furthermore, this mechanism inherently protects processes by enforcing isolation, as each process operates solely within its logical address space, preventing unauthorized access to others' memory and enhancing system security and stability. During the process lifecycle, logical addresses are typically assigned at compile or link time, generating relocatable code that assumes a starting address but remains fixed throughout execution, even as underlying physical mappings change dynamically due to paging or swapping. This binding strategy ensures that programs do not need to adjust their addressing logic at run time, simplifying development and portability across systems with varying physical memory configurations. For instance, on 32-bit architectures, each process is granted a 4 GB logical address space, irrespective of the actual physical RAM available, allowing applications to utilize up to this limit virtually while the operating system handles physical allocation.

Address Spaces

The logical address space in operating systems is organized into distinct segments that represent different types of data and code, such as the text segment for executable instructions, the data segment for global and static variables, the stack segment for local variables and function calls, and the heap segment for dynamically allocated memory. Each segment is defined by a base address indicating its starting location and a limit specifying its size, allowing the operating system to perform bounds checking and prevent access violations. This segmented structure provides a logical partitioning of the address space, facilitating modular programming and protection within a process. To enforce protection and isolation, the address space is divided between user-mode and kernel-mode regions, with processes restricted to a lower portion to prevent interference with system operations. In 32-bit x86 systems running Linux, for example, the 4 GB address space is conventionally split into 3 GB for user space (addresses 0x00000000 to 0xBFFFFFFF) and 1 GB for kernel space (0xC0000000 to 0xFFFFFFFF), ensuring that applications cannot directly access kernel structures. This separation relies on CPU privilege levels and mode switches to maintain integrity, with the kernel mapping its code and data into the upper region accessible only in privileged mode. The size of the logical address space often exceeds the physical memory available, allowing processes to operate with larger memory footprints through abstraction; for instance, in a system with 1 GB of RAM, a process might logically allocate up to several GB via its virtual address space. Dynamic growth occurs as needed, such as when the malloc() function requests additional heap space, which the operating system accommodates by extending the heap segment's boundaries within the logical space without immediately requiring corresponding physical allocation. Virtual memory serves as the enabling technology for these expansive address spaces, supporting efficient resource sharing across processes.
In the x86 architecture, the organization of logical addresses within segments is handled through segment selectors stored in registers like CS (code segment) and DS (data segment), which serve as indices into the global descriptor table (GDT) to retrieve segment descriptors containing base addresses, limits, and access rights. The GDT, maintained by the operating system, defines the segments available to the processor, enabling flexible reconfiguration of the address space layout per process or system-wide.
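The conventional 3 GB / 1 GB user/kernel split can be expressed as a one-line classification check. This is only a sketch of the boundary arithmetic; the constant matches the 32-bit Linux convention above, and the example addresses are illustrative.

```python
# Sketch of the 32-bit 3 GB / 1 GB user/kernel split: classify a logical
# address by region. KERNEL_BASE matches the conventional Linux boundary.

KERNEL_BASE = 0xC0000000   # top 1 GB reserved for the kernel

def region(addr):
    """Classify a 32-bit logical address as user or kernel space."""
    if not 0 <= addr <= 0xFFFFFFFF:
        raise ValueError("not a 32-bit address")
    return "kernel" if addr >= KERNEL_BASE else "user"

print(region(0x08048000))   # user  (a typical text-segment address)
print(region(0xC0100000))   # kernel
```

A user-mode access to an address at or above KERNEL_BASE would be rejected by the hardware protection checks the section describes, not by software tests like this one.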

Translation Mechanisms

Address Translation Process

The address translation process involves converting a logical address, generated by the CPU during program execution, into a corresponding physical address that identifies a location in actual main memory. This transformation is essential for enabling virtual memory systems, where processes operate within an abstracted address space isolated from physical hardware constraints. The Memory Management Unit (MMU), a hardware component integrated into the CPU, performs this translation transparently to the executing program, using predefined mapping structures to ensure secure and efficient memory access. The process begins when the CPU issues a logical address as part of an instruction, such as a load or store operation. The MMU then intercepts this address and consults the appropriate mapping structures, such as page tables for paging systems or segment descriptors for segmentation systems, to determine the corresponding physical location. Finally, the MMU outputs the derived physical address, which the memory system uses to fetch or store data in RAM. In paging-based translation, the logical address is typically divided into a page number and an offset, with the page number indexing the page table to retrieve the frame number, which is concatenated with the offset to form the physical address. For simple segmentation, the physical address is calculated by adding the logical offset to the base address stored in a segment register:

Physical Address = Base Register + Logical Offset

If the translation reveals that the targeted page is not resident in physical memory, indicated by an invalid bit in the page table entry, a page fault is generated. This exception traps to the operating system kernel, which handles the fault by allocating a physical frame if necessary, retrieving the page from secondary storage like disk, updating the page table, and resuming the interrupted instruction. Such faults ensure demand-paged virtual memory operates correctly but introduce overhead if frequent.
To optimize performance and manage memory overhead, multi-level page tables are commonly used, organizing translations into a hierarchical structure that sparsely populates only the necessary portions of the address space, thereby reducing the overall size of the translation data compared to a single flat table. Translation lookaside buffer (TLB) caching further improves speed by storing recent address mappings in fast on-chip memory, avoiding full table walks for repeated accesses. Page tables play a central role as the primary mapping structure in paging implementations of this process.
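The paging path of this process, splitting the logical address into page number and offset, looking up the frame, and concatenating, can be sketched directly. The 32-bit address width, 4 KB pages, and the single page-table entry are assumptions for illustration.

```python
# Sketch of paging translation: split the logical address into VPN and
# offset, look up the frame number, concatenate. Assumes 4 KB pages.

OFFSET_BITS = 12                      # 4 KB page -> 12-bit offset
PAGE_MASK = (1 << OFFSET_BITS) - 1

page_table = {0x12345: 0x00ABC}       # VPN -> PFN (illustrative entry)

def translate(logical):
    vpn = logical >> OFFSET_BITS      # upper bits: virtual page number
    offset = logical & PAGE_MASK      # lower 12 bits: byte within page
    if vpn not in page_table:
        raise LookupError("page fault: page not resident")  # OS would handle
    pfn = page_table[vpn]
    return (pfn << OFFSET_BITS) | offset   # concatenate PFN and offset

print(hex(translate(0x12345678)))     # 0xabc678
```

The raised exception stands in for the page-fault trap described above: hardware raises it, and the kernel, not the program, resolves it.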

Hardware Support

The Memory Management Unit (MMU) serves as the primary hardware component responsible for intercepting logical addresses issued by the processor and translating them into corresponding physical addresses, enabling virtual memory operations. Integrated into the CPU, the MMU performs this translation using page tables or segment descriptors, while also enforcing memory protection mechanisms such as access permissions and isolation between processes. In modern systems, the MMU operates transparently during instruction execution, contributing to efficient memory management without software intervention for each access. To optimize translation speed, the translation lookaside buffer (TLB) functions as a high-speed on-chip cache within or alongside the MMU, storing recently used virtual-to-physical mappings along with associated attributes like permissions. Upon a memory access, the MMU first consults the TLB; a hit allows immediate translation, but a miss triggers a full page table walk, which can impose a penalty of hundreds of cycles due to multiple memory accesses. TLB designs vary by architecture, often employing set-associative structures to balance size, speed, and hit rates, with typical capacities ranging from 32 to 2048 entries in contemporary processors. Specific registers facilitate MMU operations across architectures. In x86 systems, the CR3 control register stores the physical base address of the page directory, serving as the entry point for hierarchical page table traversals during translation. Segment registers (CS, DS, SS, ES) aid in logical address formation by adding a segment base to an offset, though their role has diminished in flat memory models. Hardware support for logical addressing traces its origins to the 1970s IBM System/370, which introduced dynamic address translation for virtual storage.
Contemporary architectures like x86-64 (with 48-bit virtual addressing), ARMv8 (supporting 39- to 52-bit virtual spaces via stage-1 and stage-2 translation), and RISC-V (offering Sv39/Sv48 modes for 39- and 48-bit virtual addresses) build on this foundation, integrating MMUs and TLBs for scalable virtual memory.
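The TLB's hit/miss behavior can be modeled with a small bounded cache. The four-entry capacity, FIFO replacement, and fake page-table walk are illustrative simplifications; real TLBs are set-associative hardware structures.

```python
# Toy TLB sketch: a bounded cache of recent VPN->PFN translations,
# consulted before the (slow) page-table walk. Capacity and the FIFO
# replacement policy are illustrative assumptions.

from collections import OrderedDict

TLB_CAPACITY = 4
tlb = OrderedDict()
hits = misses = 0

def walk_page_table(vpn):
    """Stand-in for a multi-level page-table walk (hundreds of cycles)."""
    return vpn + 0x100            # fake PFN for illustration

def lookup(vpn):
    global hits, misses
    if vpn in tlb:
        hits += 1
        return tlb[vpn]           # TLB hit: immediate translation
    misses += 1
    pfn = walk_page_table(vpn)    # TLB miss: full walk, then cache result
    if len(tlb) >= TLB_CAPACITY:
        tlb.popitem(last=False)   # evict the oldest entry
    tlb[vpn] = pfn
    return pfn

for vpn in [1, 2, 1, 3, 1, 2]:    # locality makes repeated lookups cheap
    lookup(vpn)
print(hits, misses)               # 3 3
```

Because programs exhibit locality of reference, even a small cache like this absorbs most translations, which is why TLB hit rates dominate translation performance.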

Implementations and Examples

Paging Systems

Paging systems represent a fundamental implementation of logical addressing in memory management, where physical memory is divided into fixed-size blocks known as frames, typically 4 KB in size, and the logical address space of a process is similarly partitioned into pages of the same size. This approach allows the operating system to map non-contiguous physical frames to contiguous logical pages, enabling efficient memory allocation without the need for contiguous physical storage. Introduced in early systems such as the Atlas computer, paging facilitates location-independent addressing by treating logical addresses as generalized references that are transparently mapped to physical locations via hardware support. In paging, a logical address is divided into two primary components: the virtual page number (VPN), which identifies the page within the logical address space, and the page offset, which specifies the byte position within that page. For instance, with 4 KB pages, the lower 12 bits of a 32-bit logical address form the offset, while the upper bits constitute the VPN. The operating system maintains page tables to perform the translation from VPN to physical frame number (PFN), where each entry in the page table indicates the starting address of the corresponding frame and includes bits for presence, protection, and validity. If a referenced page is not in physical memory, a page fault occurs, triggering the operating system to load the page from secondary storage. To handle large address spaces efficiently, paging often employs multi-level page tables, which organize the mapping hierarchy into multiple tiers, such as page directory, page table, and page table entry levels, reducing the memory overhead of a single large table. In a two-level scheme, the VPN is split into an outer index for the page directory and an inner index for the page table, allowing sparse address spaces to allocate tables only as needed. This structure supports virtual address spaces far larger than physical memory, with translations performed by the memory management unit (MMU) hardware.
Paging offers several advantages, including the elimination of external fragmentation, since frames can be allocated non-contiguously, and it simplifies sharing among processes by mapping the same physical frame to multiple logical pages. It also supports demand paging, where pages are loaded only when accessed, optimizing resource use. However, it introduces internal fragmentation, as the last page of a process may not be fully utilized, wasting up to one page's worth of space per process. Additionally, page table traversals can incur performance overhead, mitigated by translation lookaside buffers (TLBs). A practical example of paging in modern systems is found in Linux on the x86 architecture, where a 32-bit logical address like 0x12345678 is translated using a two-level page table for 4 KB pages. The address breaks down as follows: the upper 10 bits (0x048, bits 31-22) index the page directory to locate the page table base; the next 10 bits (0x345, bits 21-12) index into that page table to find the PFN; and the lower 12 bits (0x678, bits 11-0) serve as the offset within the physical frame. The MMU uses the CR3 register to access the page directory, performing the translation to derive the physical address by combining the PFN with the offset. In 64-bit Linux, this extends to four or five levels (PGD, PUD, PMD, PTE) to support vast address spaces, with levels folded if unnecessary.
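The two-level decomposition of 0x12345678 works out to a few shifts and masks, sketched here for the 10/10/12 split used by 32-bit x86 with 4 KB pages.

```python
# Decomposing the 32-bit logical address 0x12345678 for two-level x86
# paging with 4 KB pages: 10-bit directory index, 10-bit table index,
# 12-bit offset.

addr = 0x12345678

pd_index = (addr >> 22) & 0x3FF   # bits 31-22: page directory index
pt_index = (addr >> 12) & 0x3FF   # bits 21-12: page table index
offset   = addr & 0xFFF           # bits 11-0: offset within the frame

print(hex(pd_index), hex(pt_index), hex(offset))  # 0x48 0x345 0x678
```

The hardware walk uses pd_index to pick a page-directory entry (starting from CR3), pt_index to pick a page-table entry holding the PFN, and appends the offset unchanged.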

Segmentation Systems

In segmentation systems, memory is divided into variable-sized segments that correspond to logical units of a program, such as code, data, stack, or heap, allowing for non-contiguous allocation in physical memory. A logical address in this scheme consists of two parts: a segment identifier (or segment number) and an offset within that segment. For example, in a 32-bit logical address, the upper bits might select the segment, while the lower bits provide the offset, enabling direct mapping to program structure without fixed-size constraints. The mapping from logical to physical addresses is handled by a segment table, which contains entries for each segment including a base (starting location in physical memory), a limit (segment length), and protection bits to enforce controls like read, write, or execute permissions. The hardware checks the offset against the limit to prevent overruns, and the protection bits ensure secure access per segment, supporting features like sharing code modules across processes while isolating private data. This table is typically maintained by the operating system and referenced via a descriptor table for efficient lookup. One key advantage of segmentation is its alignment with the natural structure of programs, facilitating modularity and easier protection and sharing of logical units without the overhead of fixed-size divisions, though it can lead to external fragmentation due to varying segment sizes. This approach was prominently used in early systems like Multics, where segments were named symbolically and managed dynamically to support large, sparse address spaces with controlled access. In Multics, implemented on the Honeywell 645 in the 1960s, segmentation enabled processes to reference up to 2^18 segments, each growing independently with hardware-enforced protection rings. Hybrid systems combine segmentation with paging to mitigate fragmentation issues, using segmentation for logical organization and paging for efficient physical allocation.
In the x86 architecture, this is achieved in protected mode, where a logical address comprises a 16-bit segment selector (indexing into a descriptor table) and a 16- or 32-bit offset, translated first to a linear address via the segmentation unit and then paged to a physical address if paging is enabled. This design, detailed in Intel's architecture manuals, allows segments up to 4 GB with granular protection, enhancing efficiency in multitasking environments.
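Segmentation translation, extracting a segment number and offset, checking the limit and protection bits, then adding the base, can be sketched as follows. The 16-bit address with a 2-bit segment number, the four-segment table, and its base/limit values are illustrative assumptions in the spirit of the classic textbook layout, not any real architecture's encoding.

```python
# Sketch of segmentation translation: a logical address carries a segment
# number and an offset, checked against a per-segment base, limit, and
# protection bit. All sizes and table entries are illustrative.

SEG_BITS, OFFSET_BITS = 2, 14   # 16-bit logical address, 4 segments

# segment table: (base, limit, writable) per segment number
segment_table = [
    (0x1000, 0x0800, False),  # 0: code (not writable)
    (0x4000, 0x0400, True),   # 1: data
    (0x8000, 0x0400, True),   # 2: heap
    (0xC000, 0x0400, True),   # 3: stack
]

def translate(logical, write=False):
    seg = logical >> OFFSET_BITS                 # upper bits: segment number
    offset = logical & ((1 << OFFSET_BITS) - 1)  # lower bits: offset
    base, limit, writable = segment_table[seg]
    if offset >= limit:
        raise MemoryError("segmentation fault: offset beyond limit")
    if write and not writable:
        raise PermissionError("protection fault: segment not writable")
    return base + offset

print(hex(translate((1 << 14) | 0x10)))  # segment 1, offset 0x10 -> 0x4010
```

An out-of-bounds offset or a write to the code segment raises an exception here, mirroring the hardware faults that the segment limit and protection bits trigger in a real system.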

References

  1. [1]
    Operating Systems: Main Memory
    The address generated by the CPU is a logical address, whereas the address actually seen by the memory hardware is a physical address. Addresses bound at ...
  2. [2]
    Jian Huang at University of Tennessee; CS361 Operating System ...
    After address binding, each process logically has its complete address space. On a 32-bit system, that amounts to 2GB. These addresses are also called virtual ...
  3. [3]
    Virtual Memory I – Computer Architecture
    The binary addresses that the processor issues for either instructions or data are called virtual or logical addresses. These addresses are translated into ...
  4. [4]
    [PDF] Virtual Memory - the denning institute
    – Virtual address: generated by the CPU to designate a specific data byte in a contiguous sequence of bytes called Address Space.
  5. [5]
    [PDF] Virtual Memory - the denning institute
    Oct 13, 2014 · In the 1960s, virtual memory was seen as a method to make programming easier and more secure: 1. Programmers can write their code without ...
  6. [6]
    Milestones:Atlas Computer and the Invention of Virtual Memory ...
    A two-dimensional addressing system has been incorporated which allows each user to write programs as though there is a virtual memory system of large size.
  7. [7]
    [PDF] Main Memory
    A logical address is generated by the CPU and is translated into a physical address by the memory management unit(MMU). Therefore, physical addresses are.Missing: textbook | Show results with:textbook
  8. [8]
    [PDF] Chapter 8 Main Memory - Florida State University
    • OS determines logical to physical address mapping using MMU. • a logical address can be mapped to different physical address for different processes or ...<|control11|><|separator|>
  9. [9]
    [PDF] Chapter 8: Memory Management | UTC
    a separate physical address space is central to proper. memory management. ● Logical address – generated by the CPU; also. referred to as virtual address.Missing: computer | Show results with:computer
  10. [10]
    [PDF] Virtual Memory: Protection and Address Translation - cs.Princeton
    ○ Provides logical protection: programmer knows program and so segments. ○ Therefore efficient. ○ Easy to share data. ◇ Cons. ○ Complex management.
  11. [11]
    [PDF] MEMORY VIRTUALIZATION - cs.wisc.edu
    Sharing: Enable sharing between cooperating processes ... • Each segment has own base and bounds, protection bits. • Example: 14 bit logical address, 4 segments;.
  12. [12]
    [PDF] Virtual Memory
    If data cache is tagged with physical addresses, then we must translate the VA before we can access the data cache. TLB only has a few entries so now we need to ...
  13. [13]
    [PDF] Lecture 10: Memory Management - UCSD CSE
    Next lecture we'll go into sharing, protection, efficient ... virtual addresses (logical addresses). ♢. Virtual addresses are independent ...
  14. [14]
    Introduction to Virtual Memory
    Virtual memory is the idea of creating a logical spaces of memory locations, and backing the logical spaces with real, physical memory.
  15. [15]
    CS322: Virtual Memory - Gordon College
    The logical address space is the range of logical addresses that can be generated by a process running on the CPU. On many systems, the logical address space is ...
  16. [16]
    Virtual Memory - FSU Computer Science
    OS puts the process in the Blocked state; Piece of process that contains the logical address is brought into main memory. OS initiates a disk read request ...
  17. [17]
    Lecture 8: Main Memory
    A side benefit of virtual memory is it allows the logical address space of a process to exceed the physical address space of the machine. Nearly all modern ...
  18. [18]
    [PDF] Virtual Memory
    Virtual memory. 0. Phy. size. 10. Page 11. Virtual Address. • Processes use virtual (logical) addresses. – Make it easier to manage memory of multiple processes.
  19. [19]
    [PDF] Chapter 8: Memory Management
    ○ Compile time: If memory location known a priori, absolute code can be ... ▫ The user program deals with logical addresses; it never sees the real ...
  20. [20]
    [PDF] PROCESS VIRTUAL MEMORY
    • The addresses used by programs are virtual addresses (logical addresses). • The range of addresses a program uses is called its virtual address space. • The ...<|control11|><|separator|>
  21. [21]
    [PDF] Segmentation - cs.wisc.edu
    To fully utilize the virtual address space (and avoid an unused segment), some systems put code in the same segment as the heap and thus use only one bit to ...
  22. [22]
    Chapter 4 Process Address Space - The Linux Kernel Archives
    This feature is intended for 32 bit systems that have very large amounts (> 16GiB) of RAM. The traditional 3/1 split adequately supports up to 1GiB of RAM.
  23. [23]
    [PDF] Intel® 64 and IA-32 Architectures Software Developer's Manual
    NOTE: The Intel® 64 and IA-32 Architectures Software Developer's Manual consists of nine volumes: Basic Architecture, Order Number 253665; Instruction Set ...
  24. [24]
    [PDF] Address Translation - Csl.mtu.edu
    Segmentation: 2/7. 22. Each process has at four segments: code, data, heap and stack. Each virtual address is divided into a segment # and an offset in that ...<|separator|>
  25. [25]
    [PDF] Virtual Memory and Address Translation - Duke Computer Science
    VM translation is supported in hardware by a Memory Management Unit or MMU. The addressing model is defined by the CPU architecture. The MMU itself is an ...
  26. [26]
    [PDF] Mechanism: Address Translation - cs.wisc.edu
    Address translation is a hardware mechanism that transforms virtual addresses to physical addresses during each memory access, interposing on each access.
  27. [27]
    [PDF] Memory Management - GMU CS Department
    in a separate address space, called logical address, or virtual address. ❑A Memory Management Unit (MMU) is used to map logical addresses to physical addresses.
  28. [28]
    Translation Lookaside Buffer (TLB) - Arm Developer
    The Translation Lookaside Buffer (TLB) is a cache of recently executed page translations within the MMU. On a memory access, the MMU first checks whether the ...
  29. [29]
    [PDF] Paging: Faster Translations (TLBs) - cs.wisc.edu
    A TLB is part of the chip's memory-management unit (MMU), and is simply a hardware cache of popular virtual-to-physical address translations; thus, a better ...
  30. [30]
    A brief history of virtual storage and 64-bit addressability - IBM
    In 1970, IBM introduced System/370, the first of its architectures to use virtual storage and address spaces. Since that time, the operating system has ...
  31. [31]
    [PDF] Paging: Introduction - cs.wisc.edu
    Thus, we can translate this virtual address by replacing the VPN with the PFN and then issue the load to physical memory (Figure 18.3). 0. 1. 0. 1. 0. 1. VPN.
  32. [32]
    Page Tables - The Linux Kernel documentation
    Page tables map virtual addresses as seen by the CPU into physical addresses as seen on the external memory bus. Linux defines page tables as a hierarchy.
  33. [33]
    Paging - OSDev Wiki
    Paging is a system which allows each process to see a full virtual address space, without actually requiring the full amount of physical memory to be available ...Setting Up Paging · Identity Paging · Other languages
  34. [34]
    [PDF] The Multics virtual memory: concepts and design
    In segmented systems, hardware segmentation can be used to divide a core image into several parts, or segments [10]. Each segment is accessed by the hardware.
  35. [35]
    The Multics Virtual Memory: Concepts and Design
    In Multics, segments are packages of information which are directly addressed and which are accessed in a controlled fashion. Associated with each segment is a ...
  36. [36]
    CS 537 Lecture Notes, Part 8 Segmentation - cs.wisc.edu
    Multics. One of the advantages of segmentation is that each segment can be large and can grow dynamically. To get this effect, we have to page each ...
  37. [37]
    Intel® 64 and IA-32 Architectures Software Developer Manuals
    Oct 29, 2025 · Overview. These manuals describe the architecture and programming environment of the Intel® 64 and IA-32 architectures.