
Shared memory

Shared memory is an architecture in which multiple processors or cores simultaneously access a single unified physical memory, allowing them to share data and communicate efficiently without explicit message passing. This design contrasts with distributed-memory systems, where each processor maintains its own local memory and inter-processor communication requires explicit data transfer via messages. Shared memory systems are broadly categorized into two types based on memory access patterns: Uniform Memory Access (UMA), where all processors experience the same access latency to any memory location, typically implemented via a shared bus or crossbar switch; and Non-Uniform Memory Access (NUMA), where access times vary depending on the proximity of the memory to the processor, often seen in larger-scale multicore setups with multiple memory nodes. UMA architectures provide symmetric access but scale poorly beyond a few processors due to bus contention, while NUMA enables better scalability for systems with dozens or hundreds of cores by distributing memory controllers, though it introduces challenges in locality optimization.

Programming for shared memory parallelism commonly employs multithreading models, such as POSIX threads for low-level control or directive-based APIs like OpenMP for higher-level abstraction, enabling parallel execution across cores while managing shared data. However, these systems face critical challenges, including maintaining cache coherence to ensure consistent views of memory across processors and preventing race conditions through synchronization mechanisms like mutexes, semaphores, and barriers, which coordinate access and avoid data corruption from concurrent modifications.

The history of shared memory architectures traces back to the early 1970s with experimental multiprocessors, evolving into commercially viable symmetric multiprocessors (SMPs) by the 1980s and dominating modern multicore CPUs for applications in high-performance computing, servers, and embedded systems.

Core Concepts

Definition and Principles

Shared memory is an architecture in which multiple processors or cores simultaneously access a single unified physical memory, enabling efficient data sharing and communication without explicit message passing. In this model, all processors share a global address space, allowing them to read from and write to the same locations concurrently, which facilitates low-latency inter-processor coordination in multiprocessor systems. This unified access contrasts with models using separate local memories, but requires coherence protocols to maintain consistency and synchronization to avoid race conditions from concurrent modifications.

The core principles of shared memory center on unified addressing, visibility of modifications across processors, and the role of virtual memory in abstraction. Unified addressing provides all processors with a consistent view of the entire memory space, typically implemented through hardware interconnects like buses or switches. Visibility ensures that a write by one processor becomes observable to others, often enforced through hardware cache coherence protocols that propagate updates and invalidate stale copies. Virtual memory systems abstract physical memory details, allowing processors to use virtual addresses mapped to the shared physical memory while managing paging and protection.

Key concepts include the distinction between shared global addressing and explicit data transfer, common patterns like producer-consumer, and assumptions about atomic operations. Shared memory enables direct access to common locations, minimizing communication overhead but necessitating synchronization to manage concurrent access. In producer-consumer scenarios, one thread or process produces data in shared memory while another consumes it, coordinated via synchronization primitives to prevent races. Atomicity typically applies to single-word reads and writes, ensuring indivisible execution as a basis for concurrent programming.

The historical origins of shared memory trace back to 1960s multiprocessor systems, such as the Burroughs B5000 introduced in 1961, which pioneered direct access to shared memory modules among multiple processors via a tightly coupled architecture, emphasizing efficiency over explicit messaging. This design allowed processors to utilize both local and shared memory modules without time-sharing of storage facilities, laying groundwork for modern paradigms.
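The producer-consumer pattern and the visibility guarantees described above can be made concrete in code. The following minimal sketch (illustrative, not drawn from any particular system) uses two threads in one process and C11 release/acquire atomics so the consumer is guaranteed to observe the producer's write:

    /* Producer-consumer over memory shared by two threads: a single-slot
       mailbox published through a C11 atomic flag. */
    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdio.h>

    static int slot;                  /* payload shared by both threads */
    static atomic_bool ready = false; /* publication flag */

    static void *producer(void *arg) {
        (void)arg;
        slot = 42;                                                  /* write payload */
        atomic_store_explicit(&ready, true, memory_order_release); /* publish it */
        return NULL;
    }

    static void *consumer(void *arg) {
        (void)arg;
        while (!atomic_load_explicit(&ready, memory_order_acquire))
            ;                               /* spin until the write is visible */
        printf("consumed %d\n", slot);      /* guaranteed to read 42 */
        return NULL;
    }

    int main(void) {
        pthread_t p, c;
        pthread_create(&p, NULL, producer, NULL);
        pthread_create(&c, NULL, consumer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        return 0;
    }

Real producer-consumer code would block on a condition variable or semaphore instead of spinning, but the release/acquire pair is the essence of the visibility principle.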

Comparison to Message Passing

Message passing is a communication paradigm where processes exchange data explicitly through messages sent via queues or channels, often involving copying of data and potential intermediary mediation for transmission. In systems like the Message Passing Interface (MPI), core primitives such as send and receive facilitate this explicit transfer, enabling coordination in distributed environments without a unified address space. Shared memory differs fundamentally by providing an implicit communication model, where multiple processes access a common memory region directly through reads and writes, eliminating the need for explicit packaging and transfer. This contrasts with message passing's enforced boundaries, which promote locality but require programmers to manage data distribution manually.

Regarding scalability, shared memory excels in non-uniform memory access (NUMA) architectures with moderate node counts due to its low-latency access, while message passing supports larger distributed systems by avoiding centralized coherence overhead. The trade-offs between the two paradigms center on performance and programmability: shared memory offers superior speed for tightly coupled applications through direct memory access, but demands careful synchronization to prevent race conditions, increasing programming effort. Conversely, message passing provides inherent synchronization via message ordering and is more portable across heterogeneous clusters, though it incurs overhead from data copying and buffering, potentially reducing efficiency for frequent small transfers. These characteristics make shared memory preferable for latency-sensitive tasks on shared-memory hardware, while message passing suits scalable, loosely coupled workloads.

Hybrid models have evolved to mitigate these limitations, notably the partitioned global address space (PGAS), which combines a logically shared address space with explicit locality control to bridge the paradigms. In PGAS, processes access remote data via one-sided operations akin to message passing, while local accesses mimic shared memory efficiency, improving scalability over pure shared models. An example is the Unified Parallel C (UPC) language, an extension of C that partitions the global address space among threads, allowing direct pointer use with awareness of data distribution to balance ease of programming and performance.
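The explicit-transfer style is easy to see in MPI. In this hedged, minimal sketch (not from the original text), data moves between ranks only when one process sends and the other receives, rather than through loads and stores on a common region:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank, value = 0;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        if (rank == 0) {
            value = 42;
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);  /* explicit copy out */
        } else if (rank == 1) {
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);                          /* explicit copy in */
            printf("rank 1 received %d\n", value);
        }
        MPI_Finalize();
        return 0;
    }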

Hardware Foundations

Shared Memory Architectures

Shared memory architectures in multiprocessor systems are designed to provide multiple processors with access to a common physical memory space, enabling efficient data sharing while managing scalability and performance trade-offs. These architectures are broadly classified into Uniform Memory Access (UMA) and Non-Uniform Memory Access (NUMA) models, each addressing different constraints in processor count and memory access patterns.

In UMA architectures, all processors experience the same access latency to any memory location, typically achieved through a centralized memory system connected via a single shared bus or symmetric interconnect. This design ensures equitable access but limits scalability to a small number of processors, often a few dozen or fewer, due to bus contention and bandwidth saturation. Early UMA systems, such as the Sequent Balance 8000 introduced in 1984, exemplified this approach by interconnecting up to 12 processors with private caches to a shared memory via a high-speed bus, marking a pivotal step in symmetric multiprocessing (SMP) development.

NUMA architectures extend scalability to larger configurations by distributing memory across nodes, where each processor or group of processors has faster access to local memory attached to its node and slower access to remote memory on other nodes. This non-uniform latency arises from the physical separation of memory modules, often connected through scalable interconnects rather than a single bus. NUMA systems, including Cache-Coherent NUMA (CC-NUMA) variants, dominate modern multi-socket servers, such as those based on Intel Xeon or AMD EPYC processors, where memory is partitioned per socket to balance local performance with global sharing. The evolution from UMA to NUMA, beginning in the late 1980s and accelerating in the 1990s with systems like the SGI Origin 2000, addressed the bottlenecks of shared buses by adopting interconnect hierarchies while maintaining a unified address space.

Key components of these architectures include interconnect topologies that facilitate communication between processors, caches, and memory. Common topologies encompass crossbar switches, which provide non-blocking connections between multiple processors and memory modules in smaller UMA systems, and ring or mesh networks in NUMA designs for higher scalability. Each memory node typically features a dedicated memory controller to manage local accesses, reducing contention compared to centralized controllers in UMA. To ensure data consistency across caches—integrated into these architectures for performance—these systems employ either snooping protocols, which broadcast updates over the interconnect for small-scale UMA setups, or directory-based protocols, which track cache states in a distributed directory for scalable NUMA environments with hundreds of processors. Directory protocols enhance scalability by avoiding broadcast overhead, making them essential for modern CC-NUMA systems.

Performance in shared memory architectures is characterized by metrics such as memory bandwidth and access latency, which vary significantly between UMA and NUMA. UMA systems offer consistent latency, typically around 50-100 ns across all memory, but aggregate bandwidth is constrained by the shared interconnect, often peaking at tens of GB/s before saturation limits further scaling. In contrast, NUMA provides high local bandwidth—up to 100 GB/s per node in contemporary servers—and low local latency (e.g., 70-80 ns), but remote accesses incur 1.5-2x higher latency (100-150 ns) and reduced bandwidth due to interconnect traversal, necessitating workload distribution strategies like thread affinity to local nodes to minimize remote traffic. These variations imply that NUMA favors applications with locality, such as databases or scientific simulations, where data placement optimizes overall throughput, while UMA suits tightly coupled tasks with uniform sharing.
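On Linux, the data-placement strategies mentioned above can be applied explicitly. The sketch below is illustrative only, assuming a Linux system with libnuma installed (link with -lnuma); the node number and buffer size are arbitrary choices:

    #include <numa.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        if (numa_available() < 0) {                 /* kernel/hardware support check */
            fprintf(stderr, "NUMA not supported\n");
            return EXIT_FAILURE;
        }
        int node = 0;                               /* illustrative target node */
        size_t len = 64UL * 1024 * 1024;            /* 64 MiB region */
        double *buf = numa_alloc_onnode(len, node); /* pages backed by node 0 */
        if (!buf) return EXIT_FAILURE;
        for (size_t i = 0; i < len / sizeof(double); i++)
            buf[i] = 0.0;                           /* touch pages to fault them in */
        numa_free(buf, len);
        return 0;
    }

Threads pinned to node 0 (e.g., with numactl or pthread_setaffinity_np) would then see local latencies when accessing the buffer.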

Cache Coherence Mechanisms

In shared memory multiprocessors, the cache coherence problem arises when multiple caches maintain private copies of the same shared data, leading to potential inconsistencies if one processor modifies a cache line while others hold stale copies. This can result in incorrect program execution, as processors may read outdated values without awareness of remote updates. Hardware mechanisms address this by ensuring that all processors observe a consistent view of memory, typically through protocols that track the state of cache lines and propagate changes across the system.

Snooping protocols, suitable for bus-based architectures with a small number of processors (typically fewer than 32), rely on each cache controller monitoring ("snooping") all bus transactions to maintain coherence. In the MSI protocol, each cache line can be in one of three states: Modified (M), indicating the line is exclusively held and altered by the local cache; Shared (S), meaning the line is clean and potentially held by multiple caches; or Invalid (I), where the line is unusable and must be fetched from memory or another cache. On a write miss, the protocol issues a write invalidate to transition shared copies to invalid, ensuring exclusivity for the writer; on a read miss, it supplies data from a modified copy if available, updating the state to shared. The MESI protocol extends MSI by adding an Exclusive (E) state for clean, privately held lines, reducing bus traffic by avoiding unnecessary invalidations for unmodified data. For example, a transition from E to M on a write hit avoids bus snooping, while a write miss from S invalidates all other copies, transitioning them to I. These invalidate-based approaches dominate due to simplicity, though they generate coherence traffic proportional to the number of sharers.

Directory-based protocols scale to larger systems by replacing bus snooping with a centralized or distributed directory that tracks the location and state of each cache line's copies, avoiding broadcast overhead. Each memory block has an associated directory entry recording which caches hold copies (sharers) and their permission states, such as exclusive or shared. On a read miss, the requesting processor queries the directory to obtain data from memory or a sharer, adding the requestor to the sharer list; on a write, it invalidates or updates remote copies selectively, using point-to-point messages. The DASH protocol exemplifies this, employing a distributed directory per processing node in a cluster architecture, with each node handling a portion of shared memory and using a bit-vector to track up to 32 potential sharers per line. DASH supports weak consistency models for scalability, with hardware controllers managing transactions to minimize latency, achieving coherence in systems of up to 64 nodes through non-broadcast intervention. Directory protocols incur directory storage overhead (e.g., one bit per potential sharer) but reduce traffic compared to snooping in large-scale setups.

Advanced features in cache coherence mechanisms address scalability and performance in complex hierarchies. Hierarchical coherence protocols organize caches into levels, such as intra-chip snooping combined with inter-chip directories, to manage multi-level systems efficiently; for instance, the Protocol for Hierarchical Directories (PHD) stitches local bus-based coherence with a higher-level directory for massive processor counts, reducing global traffic by localizing most operations. Speculation techniques, like thread-level speculation, allow optimistic execution assuming no conflicts, with rollback on violations; this decouples coherence checks from critical paths, improving parallelism in shared memory. Key metrics for evaluating these mechanisms include coherence overhead, measured as messages per cache line access (often 1-2 in optimized snooping but scaling logarithmically in directories), and miss latency, where directory protocols can add 10-20 cycles due to indirection but enable systems beyond bus limits.
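The cost of invalidate-based coherence is visible from software. The following illustrative sketch (assuming 64-byte cache lines; not from the original text) has two threads increment counters that either share a line—so each write forces the line to ping-pong between caches—or sit on separate, padded lines:

    #include <pthread.h>
    #include <stdio.h>

    #define ITERS 100000000UL

    static struct { volatile unsigned long a, b; } packed;   /* same cache line */
    static struct { volatile unsigned long a;
                    char pad[64];                            /* split the lines */
                    volatile unsigned long b; } padded;

    static void *bump(void *p) {
        volatile unsigned long *c = p;
        for (unsigned long i = 0; i < ITERS; i++)
            (*c)++;                     /* each store needs the line in M state */
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        /* Shared line: every increment invalidates the other core's copy. */
        pthread_create(&t1, NULL, bump, (void *)&packed.a);
        pthread_create(&t2, NULL, bump, (void *)&packed.b);
        pthread_join(t1, NULL); pthread_join(t2, NULL);
        /* Separate lines: each core keeps its line in Modified state. */
        pthread_create(&t1, NULL, bump, (void *)&padded.a);
        pthread_create(&t2, NULL, bump, (void *)&padded.b);
        pthread_join(t1, NULL); pthread_join(t2, NULL);
        printf("%lu %lu\n", packed.a + packed.b, padded.a + padded.b);
        return 0;
    }

Timing the two phases typically shows the padded version running several times faster, a direct measurement of coherence traffic.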

Software Implementations

Unix-like Systems Support

In Unix-like operating systems, shared memory is primarily supported through standardized interfaces defined by the POSIX specification, enabling inter-process communication (IPC) by allowing multiple processes to access a common region of memory. The POSIX shared memory API, introduced in the POSIX.1-2001 standard (IEEE Std 1003.1-2001), provides mechanisms for creating, mapping, and managing shared memory objects that are persistent until explicitly removed, unlike transient anonymous mappings. This standard emphasizes portability across compliant systems, such as Linux and BSD variants, where shared memory objects are backed by kernel-managed memory segments.

Two primary approaches exist for implementing shared memory in Unix-like systems: the older System V IPC mechanisms and the more modern POSIX interfaces. System V IPC, originating from AT&T Unix, uses functions like shmget() to create or access a shared memory segment identified by a key, and shmat() to attach it to a process's address space, with segments persisting until explicitly removed via shmctl(IPC_RMID). In contrast, POSIX shared memory employs shm_open() to create or open a named object (typically in a filesystem namespace like /dev/shm), followed by mmap() to map it into the process's virtual address space, offering file-like semantics for easier integration with existing file I/O operations. Key differences include persistence—System V segments are detached but remain allocated system-wide until all references are gone or removed, while POSIX objects can be unlinked with shm_unlink() but persist until the last close—and naming: System V relies on integer keys for opaque identification, whereas POSIX uses pathnames for explicit, hierarchical naming that supports inheritance across process forks.

Implementation in Unix-like kernels treats shared memory as either anonymous (unnamed, process-specific) or file-backed segments, with the latter often using temporary filesystems for persistence. In Linux, POSIX shared memory is commonly backed by tmpfs, a RAM-based filesystem mounted at /dev/shm, allowing segments to reside in memory without disk involvement, while sizing is adjusted via ftruncate() on the file descriptor from shm_open(). BSD systems, such as FreeBSD, similarly support POSIX interfaces through kernel memory mapping, integrating with the mmap() system call for both shared memory and file-backed objects. System limits, like SHMMAX (maximum segment size, often configurable via /proc/sys/kernel/shmmax in Linux, defaulting to hardware-dependent values up to gigabytes), enforce resource constraints to prevent exhaustion of physical memory.

Usage involves careful management of permissions and detachment for secure IPC. Permissions on shared memory segments are set during creation (e.g., via mode flags in shmget() or shm_open()) and modified with shmctl(IPC_SET) using a struct shmid_ds to specify owner, group, and access modes (read/write/execute bits). Detachment occurs via shmdt() for System V segments or munmap() for POSIX mappings, ensuring the memory is unmapped from the process's address space without destroying the segment itself. Common errors include EACCES, returned when a process lacks sufficient permissions to attach (shmat()) or open (shm_open()) a segment, often due to mismatched user/group IDs or restrictive modes, requiring explicit checks and handling in applications.
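A minimal POSIX producer illustrates the shm_open()/ftruncate()/mmap() sequence described above; the object name /demo_shm is illustrative, and a reader would shm_open() the same name and mmap() it read-only:

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void) {
        const char *name = "/demo_shm";
        int fd = shm_open(name, O_CREAT | O_RDWR, 0600);  /* create the object */
        if (fd < 0) { perror("shm_open"); return 1; }
        if (ftruncate(fd, 4096) < 0) { perror("ftruncate"); return 1; } /* size it */
        char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                       MAP_SHARED, fd, 0);                /* map into address space */
        if (p == MAP_FAILED) { perror("mmap"); return 1; }
        strcpy(p, "hello from producer");                 /* visible to all mappers */
        munmap(p, 4096);
        close(fd);
        shm_unlink(name);  /* drop the name; object freed once unreferenced */
        return 0;
    }

On older glibc versions the program must be linked with -lrt.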

Windows Systems Support

Windows implements shared memory primarily through file mapping objects, a mechanism introduced with Windows NT 3.1 in 1993 as part of the Win32 subsystem. These objects enable processes to share memory regions backed by files or the system paging file, facilitating inter-process communication and data exchange in an object-oriented model distinct from syscall-based approaches in other systems.

The core user-mode API revolves around HANDLE-based objects, where CreateFileMapping creates or opens a named or unnamed file mapping object, specifying attributes such as size and protection (e.g., PAGE_READWRITE). To access the shared region, processes invoke MapViewOfFile, which maps a view of the object into the calling process's address space, allowing read/write operations as if accessing local memory. Named mappings enable unrelated processes to locate and share the object via its string identifier, while unnamed (anonymous) mappings are typically used within process hierarchies via handle inheritance.

For kernel-mode operations, Windows provides section objects through NtCreateSection, which creates a shareable memory section with specified protections such as PAGE_READWRITE, PAGE_READONLY, or PAGE_EXECUTE_READWRITE. These sections underpin user-mode file mappings and support advanced control over physical memory allocation. Windows further enhances performance with large-page support, allowing mappings of 2 MB or larger pages on 64-bit systems to reduce translation lookaside buffer (TLB) overhead in memory-intensive server applications. Additionally, shared memory integrates with job objects, enabling resource limits (e.g., total committed memory) to be applied across process groups that share mappings, thus governing memory usage in clustered workloads like those in Windows containers.

Management of shared memory views involves UnmapViewOfFile to release the mapping from a process's address space, ensuring proper cleanup to avoid leaks. Synchronization is achieved using Windows synchronization primitives, such as event objects, which processes can signal to coordinate access to shared regions and prevent race conditions during read/write operations. However, shared memory usage is constrained by per-process address space limits—for instance, 2 GB of user-mode space in 32-bit processes or up to 128 TB in 64-bit processes—potentially limiting the size of mappable views without fragmentation or paging overhead. File-backed mappings, while similar in concept to Unix memory-mapped files, enforce Windows-specific semantics, such as requiring file handle inheritance for persistence across process boundaries.
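The Win32 sequence parallels the POSIX one. This hedged sketch (the object name Local\DemoShared is illustrative) creates a named, pagefile-backed mapping that a second process could open with OpenFileMappingW using the same name:

    #include <windows.h>
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        HANDLE h = CreateFileMappingW(INVALID_HANDLE_VALUE, /* paging-file backed */
                                      NULL, PAGE_READWRITE,
                                      0, 4096,              /* 4 KiB object */
                                      L"Local\\DemoShared");
        if (!h) { printf("CreateFileMapping failed: %lu\n", GetLastError()); return 1; }
        char *view = MapViewOfFile(h, FILE_MAP_ALL_ACCESS, 0, 0, 4096);
        if (!view) { CloseHandle(h); return 1; }
        strcpy(view, "hello from producer");  /* shared with other mappers */
        UnmapViewOfFile(view);                /* release this process's view */
        CloseHandle(h);                       /* object freed with last handle */
        return 0;
    }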

Cross-Platform Approaches

Cross-platform approaches to shared memory aim to provide portable abstractions that work across diverse operating systems, such as Unix-like systems and Windows, without relying on platform-specific APIs. These methods often leverage standardized interfaces or emulation techniques to ensure compatibility in heterogeneous environments.

One prominent library for achieving portability is Boost.Interprocess, which emulates shared memory using memory-mapped files to bridge differences between POSIX-compliant systems and Windows. This approach allows developers to create named shared memory segments that are accessible across processes on multiple platforms, with built-in support for synchronization primitives like mutexes and condition variables. By unifying interfaces and behaviors, Boost.Interprocess facilitates portable IPC without platform-dependent code, though it may incur slight performance overhead due to file-based emulation on systems lacking native shared memory objects.

In high-performance computing (HPC) contexts, the MPI-3 standard introduces one-sided communication extensions that enable portable shared memory operations. Specifically, the MPI_Win_allocate_shared routine allows processes within a node to allocate and map shared memory segments directly, optimizing intra-node communication while maintaining portability across clusters. This extension supports the creation of "shared-memory capable" communicators via MPI_Comm_split_type, reducing latency in partitioned global address space (PGAS) models compared to traditional message passing. Another key standard is OpenSHMEM, an open specification for PGAS programming in HPC environments, which provides a unified API for one-sided put/get operations on globally addressable shared memory. OpenSHMEM abstracts underlying hardware and OS differences, enabling symmetric memory allocation across processing elements (PEs) and supporting languages like C, C++, and Fortran. Its design emphasizes low-latency access in distributed-memory systems, with implementations like NVSHMEM extending it to GPU-accelerated nodes.

Techniques such as conditional compilation further enhance portability by allowing code to selectively invoke OS-appropriate APIs at build time, wrapping shared memory creation in directives for Unix (e.g., shm_open) or Windows (e.g., CreateFileMapping). In containerized environments, virtual file systems like /dev/shm provide isolated yet shareable memory spaces, with tools like Docker enabling volume mounting or --shm-size flags to allocate larger segments for multi-container applications. The rise of containerization has amplified the need for these methods, as shared volumes allow persistent memory sharing between containers, mitigating isolation overhead in cloud-native deployments.

To address challenges in heterogeneous setups, such as varying hardware support or security restrictions, cross-platform implementations often include fallbacks to network-based alternatives like sockets when native shared memory is unavailable. This ensures reliability in mixed-OS clusters, though at the cost of higher latency, and aligns with POSIX as a baseline for Unix portability without delving into native details.
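The conditional-compilation technique mentioned above typically wraps both APIs behind one helper. In the sketch below, create_shared() is a hypothetical name used only for illustration; callers supply an OS-appropriate object name (e.g., "/demo" on POSIX, "Local\\demo" on Windows), and error handling is minimal:

    #include <stddef.h>

    #ifdef _WIN32
    #include <windows.h>
    static void *create_shared(const char *name, size_t len) {
        HANDLE h = CreateFileMappingA(INVALID_HANDLE_VALUE, NULL,
                                      PAGE_READWRITE, 0, (DWORD)len, name);
        return h ? MapViewOfFile(h, FILE_MAP_ALL_ACCESS, 0, 0, len) : NULL;
    }
    #else
    #include <fcntl.h>
    #include <sys/mman.h>
    #include <unistd.h>
    static void *create_shared(const char *name, size_t len) {
        int fd = shm_open(name, O_CREAT | O_RDWR, 0600);
        if (fd < 0 || ftruncate(fd, (off_t)len) < 0) return NULL;
        void *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        close(fd);                        /* the mapping keeps the object alive */
        return p == MAP_FAILED ? NULL : p;
    }
    #endif

Libraries such as Boost.Interprocess package essentially this pattern behind portable abstractions.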

Programming and Usage

Language and API Support

In C and C++, shared memory access is facilitated by platform-specific APIs that build on operating system primitives. On POSIX-compliant systems, the shm_open(), shm_unlink(), and ftruncate() functions from <sys/mman.h> create and manage shared memory objects, while mmap() maps these objects into a process's address space for direct access. On Windows, the kernel32.dll library provides CreateFileMapping() to establish a file mapping object—often using INVALID_HANDLE_VALUE for anonymous shared memory—and MapViewOfFile() to map it into the process's virtual address space. These APIs enable efficient inter-process communication by allowing multiple processes to read and write the same memory region without copying data.

Java supports shared memory through off-heap mechanisms to bypass the garbage-collected heap. The java.nio.DirectByteBuffer class allocates direct buffers outside the Java heap, enabling memory sharing that avoids heap compaction and supports native code interoperation via memory-mapped files. For lower-level direct access, the sun.misc.Unsafe API—though internal and deprecated in favor of the Foreign Function & Memory API—provides methods like allocateMemory() and getLong() to manipulate raw memory addresses, which can interface with shared regions created via NIO's FileChannel.map(). Garbage collection in Java affects shared memory usage: direct buffers are not relocated during compaction, but references must be retained to prevent cleanup by the Cleaner mechanism, and pinning is implicitly handled for mapped buffers to ensure address stability across collections.

Other languages offer varying levels of built-in or library-based support for shared memory. Python, starting with version 3.8, includes the multiprocessing.shared_memory module, where the SharedMemory class allows creating named or anonymous shared memory blocks that multiple processes can attach to and access via buffer protocols, wrapping OS-level mappings for portability. In Rust, the memmap2 crate (a maintained fork of memmap) provides a safe abstraction over memory-mapped files and anonymous mappings, using types like Mmap and MmapMut to enable process-shared access while enforcing Rust's ownership rules through lifetimes. Go lacks dedicated shared memory APIs, relying on the unsafe package for pointer arithmetic and raw memory operations like syscall.Mmap(); however, this approach is limited by Go's emphasis on safe concurrency via channels and goroutines, making shared state error-prone without additional synchronization and exposing risks like data races due to the runtime's assumptions about memory safety.

In POSIX threads (pthreads), memory sharing distinctions arise between thread-local and process-shared contexts. Global or heap-allocated variables are inherently shared among threads within a single process, providing efficient intra-process access without additional mapping. Thread-local storage, created via pthread_key_create(), isolates data per thread to avoid unintended sharing. For inter-process scenarios, pthreads objects like mutexes can be initialized with the PTHREAD_PROCESS_SHARED attribute to operate on shared memory regions established via APIs such as shm_open(), extending synchronization across process boundaries. These language-level abstractions typically reference underlying OS primitives like shm_open for cross-process coordination.
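Tying the pthreads and POSIX shared memory APIs together, the following illustrative sketch (error handling mostly omitted; the segment name /demo_lock is hypothetical) places a process-shared mutex inside a shared segment:

    #include <fcntl.h>
    #include <pthread.h>
    #include <sys/mman.h>
    #include <unistd.h>

    typedef struct { pthread_mutex_t lock; int counter; } shared_t;

    int main(void) {
        int fd = shm_open("/demo_lock", O_CREAT | O_RDWR, 0600);
        ftruncate(fd, sizeof(shared_t));
        shared_t *s = mmap(NULL, sizeof(shared_t), PROT_READ | PROT_WRITE,
                           MAP_SHARED, fd, 0);

        /* One-time initialization: in real code only the creating process
           runs this, coordinated e.g. by O_EXCL on shm_open. */
        pthread_mutexattr_t attr;
        pthread_mutexattr_init(&attr);
        pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
        pthread_mutex_init(&s->lock, &attr);

        pthread_mutex_lock(&s->lock);
        s->counter++;                    /* critical section on shared data */
        pthread_mutex_unlock(&s->lock);

        munmap(s, sizeof(shared_t));
        close(fd);
        return 0;
    }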

Synchronization Techniques

Synchronization techniques in shared memory systems ensure that multiple threads or processes accessing the same memory region do so without causing race conditions, deadlocks, or inconsistent views of shared data. These methods enforce mutual exclusion, ordering of operations, and coordination among concurrent entities, often at the cost of overhead from contention and serialization. Introduced as part of the POSIX threads (pthreads) standard in IEEE Std 1003.1c-1995, these primitives provide a portable foundation for concurrent programming on Unix-like systems.

Basic synchronization primitives include mutexes, semaphores, and condition variables, which are designed to protect critical sections and signal events across threads sharing memory. A mutex (mutual exclusion lock) prevents multiple threads from simultaneously accessing a shared resource by allowing only one thread to hold the lock at a time. In shared memory contexts, mutexes can be made process-shared using the PTHREAD_PROCESS_SHARED attribute, set via pthread_mutexattr_setpshared(), enabling coordination between unrelated processes that map the same memory region. For example, processes using System V or POSIX shared memory can initialize a mutex within the shared segment to serialize access to data structures. Semaphores provide a more flexible counting mechanism for controlling access to a pool of resources, with named semaphores created via sem_open() supporting inter-process synchronization in shared environments. Condition variables, used in conjunction with mutexes, allow threads to wait for specific conditions to become true, such as the availability of data in shared memory; they can also be process-shared by setting the PTHREAD_PROCESS_SHARED attribute on their attributes object. These primitives collectively address producer-consumer scenarios common in shared memory applications.

Atomic operations offer low-level synchronization without explicit locks, directly leveraging hardware instructions to perform indivisible updates on shared variables. Compare-and-swap (CAS) is a foundational primitive that reads a location, compares it to an expected value, and conditionally writes a new value if they match, all in one uninterruptible step. In GCC, this is implemented via the built-in __sync_val_compare_and_swap(), which returns the original value and facilitates lock-free algorithms like counters or queues in shared memory. To maintain ordering across cores—crucial on weakly ordered architectures—memory barriers such as the x86 mfence instruction serialize loads and stores, ensuring that prior writes are visible to subsequent reads by other threads accessing shared memory. These operations minimize overhead compared to higher-level locks but require careful design to avoid issues like the ABA problem in CAS loops.

Higher-level techniques build on these foundations for more complex coordination. Barriers enable collective synchronization, where a group of threads waits until all reach a designated point before proceeding, useful for phases in parallel computations on shared data; POSIX provides pthread_barrier_init() with process-shared support for multi-process use. Reader-writer locks optimize for scenarios where multiple readers can access shared memory concurrently but writers require exclusive access, implemented in pthreads via pthread_rwlock_t with the PTHREAD_PROCESS_SHARED attribute set using pthread_rwlockattr_setpshared(). As an alternative to traditional locking, software transactional memory (STM) treats blocks of code as atomic transactions that speculatively execute and commit only if no conflicts occur with concurrent transactions, reducing lock contention in dynamic workloads; early work formalized STM as a lock-free mechanism for shared memory.

Despite their utility, these techniques incur performance costs, particularly from lock contention, where threads compete for the same primitive, leading to serialization and reduced scalability in high-contention shared memory systems. Studies show that under contention, acquiring a spin lock can degrade throughput significantly, with costs scaling with the number of processors due to cache invalidations and bus traffic. Developers must balance correctness with efficiency, often combining primitives—like using atomics for fine-grained updates within mutex-protected sections—to mitigate these overheads.
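The CAS retry loop mentioned above looks like this in practice; the helper name cas_increment is illustrative, built on the GCC builtin cited in the text:

    /* Lock-free increment of a counter that may live in shared memory.
       The loop retries if another thread updated the value between the
       read and the swap. */
    static long cas_increment(volatile long *addr) {
        long old, seen;
        do {
            old = *addr;                                            /* read */
            seen = __sync_val_compare_and_swap(addr, old, old + 1); /* CAS */
        } while (seen != old);                                      /* raced: retry */
        return old + 1;
    }

Any number of threads or processes mapping the counter can call this concurrently without a mutex, at the cost of retries under heavy contention.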

Applications and Challenges

Common Use Cases

Shared memory is widely employed in multithreading environments, particularly within server applications like web servers, where multiple threads access shared data structures such as session state to manage concurrent client requests without the overhead of copying. This approach enhances performance by leveraging low-latency access to common memory regions, as demonstrated in multithreaded server benchmarks showing improved throughput under high concurrency.

In database systems, shared memory serves as a core mechanism for inter-process communication (IPC). For instance, PostgreSQL utilizes shared memory segments to enable multiple backend processes to coordinate access to shared buffers, which cache frequently accessed data pages, thereby reducing I/O operations and improving query performance across connections. This is configured via parameters like shared_buffers, which allocate a dedicated shared memory region for buffer pool management.

High-performance computing (HPC) applications frequently adopt shared memory for parallel processing on multi-core systems. OpenMP, a standard API for shared-memory parallelism, facilitates loop-level sharing where threads within a single process access common arrays or variables, enabling efficient computation on symmetric multiprocessors without explicit message exchanges, as sketched after this section's examples below. This model is particularly effective for data-intensive simulations, as it minimizes communication latency compared to distributed alternatives like message passing, which are better suited for inter-node coordination.

In heterogeneous computing, shared memory bridges CPU and GPU environments through mechanisms like CUDA Unified Memory, which provides a coherent address space allowing kernels to access the same data pointers as host code without manual transfers. Allocated via cudaMallocManaged, this feature automatically migrates pages between device and host memory on demand, simplifying development for applications like scientific modeling while supporting oversubscription for workloads exceeding GPU memory limits.

Embedded and real-time systems rely on shared memory for low-overhead data exchange in resource-constrained settings. In real-time operating systems (RTOS) such as VxWorks, shared memory regions are used by device drivers to enable rapid communication between kernel tasks and hardware peripherals, such as in networking or sensor interfaces, ensuring deterministic response times. On mobile platforms, Android previously employed Ashmem (Anonymous Shared Memory) for inter-app data sharing via explicitly allocated regions, which processes could map into their address spaces for efficient transfer of large buffers like media files, while the kernel handled reclamation under memory pressure. However, Ashmem has been deprecated since Android 10 (2019); current implementations use alternatives like memfd or file-backed mappings for similar purposes.

In cloud environments, container orchestration platforms like Kubernetes support shared memory through volume mechanisms, such as emptyDir with a memory-backed medium, which mounts /dev/shm as a tmpfs for data sharing between co-located containers in a pod, facilitating scenarios like multi-process workloads. Adoption in such systems underscores shared memory's role in distributed applications. In recent developments, shared memory is increasingly used in disaggregated computing via technologies like Compute Express Link (CXL), enabling cache-coherent memory pooling across CPUs, GPUs, and accelerators in data centers for AI and HPC workloads, improving resource utilization and scalability beyond traditional node boundaries.
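To illustrate the OpenMP loop-level sharing referenced above, here is a minimal, illustrative C sketch (array names and sizes are arbitrary; compile with -fopenmp):

    #include <omp.h>
    #include <stdio.h>

    #define N 1000000

    int main(void) {
        static double a[N], b[N], c[N];
        for (int i = 0; i < N; i++) { a[i] = i; b[i] = 2.0 * i; }

        #pragma omp parallel for       /* arrays are shared; i is private */
        for (int i = 0; i < N; i++)
            c[i] = a[i] + b[i];        /* threads write disjoint slices */

        printf("c[10] = %f (max threads: %d)\n", c[10], omp_get_max_threads());
        return 0;
    }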

Security and Performance Issues

Shared memory systems introduce several security vulnerabilities, primarily due to the direct access multiple processes have to the same physical memory regions. One notable risk is time-of-check-to-time-of-use (TOCTOU) races, where a process checks permissions or bounds on a shared memory segment, but another process modifies the segment before the first process uses it, potentially leading to unauthorized access or corruption. Malicious mappings exacerbate this, as attackers can create or alter shared memory mappings to escalate privileges, for instance by inducing bit flips in adjacent DRAM rows through repeated accesses, as demonstrated in cross-VM scenarios. A seminal example is the Rowhammer vulnerability, identified in 2014, where aggressive row activations in DRAM cause unintended bit flips in nearby rows, enabling privilege escalation even across isolated environments like virtual machines. To mitigate these, Linux's seccomp filters restrict system calls such as shm_open or mmap, preventing untrusted processes from creating dangerous shared memory mappings.

Performance in shared memory systems is hindered by issues like false sharing, where multiple processors access distinct data within the same cache line, triggering unnecessary coherence traffic and invalidations that degrade throughput. In non-uniform memory access (NUMA) architectures, poor locality—where threads access remote nodes' memory—incurs higher latency and bandwidth contention compared to local access, potentially halving performance in multi-socket systems. Optimizations include using huge pages in Linux, such as 2MB transparent huge pages (THP) or 1GB gigantic pages, which reduce translation lookaside buffer (TLB) misses and page-table overhead for large shared segments, improving access speeds by up to 20-30% in memory-intensive workloads, as sketched below.

Beyond security and raw performance, shared memory implementations face reliability challenges from inadequate synchronization, which can cause deadlocks if processes hold locks indefinitely while waiting for shared resources, stalling execution. Scalability diminishes beyond 64 cores due to escalating synchronization overheads, such as barrier costs and contention on shared locks, limiting efficient parallelism in large-scale systems. Debugging these issues benefits from tools like Valgrind's Memcheck, which tracks shared memory accesses for errors like invalid reads/writes and leaks, maintaining validity bits across processes to pinpoint race-induced corruptions.
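For the huge-page optimization sketched above, a Linux program can request 2 MB pages explicitly. This is an illustrative sketch assuming hugepages have been reserved by the administrator (e.g., via /proc/sys/vm/nr_hugepages); it falls back to normal pages if the request fails:

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <sys/mman.h>

    int main(void) {
        size_t len = 1UL << 30;  /* 1 GiB shared region */
        void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_SHARED | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
        if (p == MAP_FAILED) {
            perror("mmap(MAP_HUGETLB)");              /* e.g., no hugepages reserved */
            p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                     MAP_SHARED | MAP_ANONYMOUS, -1, 0);  /* 4 KiB-page fallback */
            if (p == MAP_FAILED) { perror("mmap"); return 1; }
        }
        ((char *)p)[0] = 1;                           /* touch the region */
        munmap(p, len);
        return 0;
    }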
