
Inter-process communication

Inter-process communication (IPC) refers to the mechanisms provided by operating systems that enable concurrent processes to exchange data, share resources, and synchronize their activities within a single computer or across networked systems. These mechanisms are essential for cooperating processes, allowing them to coordinate tasks such as data handling, file management, and resource sharing while preventing conflicts like race conditions. IPC originated in early Unix systems for local process interaction but expanded with Berkeley Unix 4.2BSD in 1983 to support communication between machines over networks. The two fundamental models of IPC are shared memory, in which multiple processes access a designated region of memory directly to read and write data, and message passing, where processes communicate by sending and receiving discrete messages via kernel-mediated channels. Shared memory offers high performance for large data transfers but requires explicit synchronization to avoid inconsistencies, often using primitives like semaphores. In contrast, message passing provides abstraction and safety through copy-based transfers, making it suitable for distributed environments, though it incurs overhead from kernel involvement.

Common IPC mechanisms vary by operating system but include pipes for unidirectional streaming between related processes, named pipes or FIFOs for unrelated processes, sockets for bidirectional network-aware communication, and message queues for asynchronous, ordered delivery. In Unix-like systems such as Linux, System V IPC encompasses message queues, semaphores, and shared memory segments, while POSIX standards emphasize message queues, signals for event notification, and sockets for portability. Windows supports similar facilities through anonymous and named pipes, Remote Procedure Calls (RPC), and file mappings, often integrated with the Component Object Model (COM) for structured interactions. These tools address both local and remote scenarios, with performance evaluations showing trade-offs in latency and throughput depending on the mechanism and workload.

Introduction

Definition and Scope

Inter-process communication (IPC) refers to the mechanisms and protocols that enable independent processes to exchange data and synchronize their execution within a computing environment. These mechanisms facilitate cooperation among processes, allowing them to share information and coordinate actions without direct access to each other's internal state, thereby supporting modular and concurrent program design. IPC can be categorized into local IPC, which occurs between processes on the same machine, and distributed IPC, which involves processes across networked systems. In modern operating systems, process isolation is enforced by the kernel through techniques such as virtual memory addressing and privilege rings, ensuring that each process operates in its own protected address space and cannot directly access another's memory or resources unless explicitly permitted via IPC channels. Core principles of IPC include the producer-consumer model, where one process generates data (producer) and another consumes it, often requiring buffering to handle differing speeds; blocking operations, in which a process suspends execution until the communication completes; non-blocking operations, which allow a process to continue without waiting; and atomicity, ensuring that data transfers or synchronization events occur indivisibly to prevent partial updates or race conditions. The scope of IPC is limited to interactions among distinct user-space processes and excludes communication between threads within a single process, which is typically handled by concurrency primitives like mutexes rather than inter-process mechanisms; this focus maintains clear boundaries in operating system design for reliability and security.

Historical Development

The origins of inter-process communication (IPC) trace back to the 1960s, when early time-sharing systems emerged to address the limitations of batch processing in mainframe environments. The Compatible Time-Sharing System (CTSS), developed at MIT and first demonstrated in 1961 on a modified IBM 709, introduced foundational concepts for concurrent program execution, including rudimentary mechanisms for inter-user messaging that presaged modern IPC primitives like signals. Building on CTSS, the Multics system, initiated in 1965 as a collaborative project between MIT, General Electric, and Bell Labs, implemented a more sophisticated IPC facility by the early 1970s. This facility, detailed in a 1971 technical memorandum, enabled processes to exchange messages and share resources securely in a multi-user, time-shared environment, using hierarchical file systems and access controls to manage communication between segments.

In the 1970s, Unix at Bell Labs advanced IPC with lightweight primitives suited to its minimalist philosophy. Signals, introduced in early Unix versions around 1971, provided asynchronous notification mechanisms for processes to handle events like interrupts or terminations. Pipes, conceived by Douglas McIlroy in a 1964 memorandum but first implemented in Unix Version 3 in 1973, allowed sequential data streaming between processes, enabling modular command composition and influencing subsequent stream-based methods. By the early 1980s, divergences in Unix variants led to expanded IPC sets: AT&T's UNIX System V in 1983 introduced message queues, semaphores, and shared memory as standardized primitives for structured data exchange and synchronization. Concurrently, the Berkeley Software Distribution (BSD) extended IPC through 4.2BSD in 1983, adding socket interfaces for network-aware communication, which facilitated integration with emerging protocols like TCP/IP.

The 1980s and 1990s saw IPC evolve under the influence of distributed computing, shifting from local to networked paradigms. Sun Microsystems' Open Network Computing (ONC) framework, released in 1986, popularized Remote Procedure Calls (RPC) as a transparent mechanism for cross-machine invocations, underpinning services like the Network File System (NFS). This was complemented by the Object Management Group's CORBA 1.0 standard in 1991, which defined an object-oriented middleware for distributed IPC using an Interface Definition Language (IDL) to enable platform-independent method calls across heterogeneous systems. Standardization efforts culminated in POSIX.1 (IEEE Std 1003.1-1988), which unified core IPC interfaces like pipes and signals across Unix-like systems, with revisions through the 2020s incorporating enhancements for real-time and multithreaded environments.

Post-2010, the rise of cloud-native architectures integrated IPC with microservices, emphasizing scalable, asynchronous communication in distributed environments. Frameworks like gRPC (2015) extended RPC principles for efficient, HTTP/2-based service interactions, while message brokers such as Apache Kafka enabled decoupled, event-driven IPC in large-scale cloud deployments. These advancements addressed the demands of containerized applications, prioritizing low-latency and fault-tolerant data exchange in elastic infrastructures.

Challenges

Performance Limitations

Kernel-mediated inter-process communication (IPC) mechanisms, such as pipes and message queues, impose substantial overhead due to context switches between user mode and kernel mode. Each such operation requires trapping into the kernel, which involves saving the current process's context—including CPU registers, the program counter, and potentially updating page tables—and restoring the kernel's execution state, followed by the reverse upon return. This mode transition can consume several hundred to thousands of CPU cycles, translating to context-switch latencies of 1-4 microseconds on x86 processors running Linux as of 2024, depending on factors like cache effects and TLB flushes for address-space switches. For instance, benchmarks on contemporary systems report context-switch times around 2 microseconds for processes with minimal working-set sizes, escalating to over 10 microseconds under heavier loads or larger contexts, and up to 48 microseconds in densely packed workloads.

Bandwidth limitations further constrain IPC efficiency, particularly in message-passing paradigms where data must be copied between process address spaces via kernel buffers. This copying overhead restricts throughput to levels far below raw memory bandwidth; for example, Unix pipes on Linux exhibit latencies of approximately 15-50 microseconds for small message round-trips, with sustained throughput of approximately 3-4 GB/s for standard operations due to repeated memcpy operations and scheduling, though optimizations like vmsplice can exceed 50 GB/s. In contrast, shared-memory techniques avoid explicit copying by mapping the same physical pages into multiple address spaces, enabling bandwidths approaching memory-bus limits of 20-100 GB/s on modern multicore systems, though at the cost of added synchronization overhead to prevent race conditions. Quantitative evaluations confirm that message passing incurs 2-10x higher latency for payloads under 1 KB compared to shared memory access, highlighting the trade-off between simplicity and performance.

Scalability challenges emerge in environments with many concurrent processes, where contention amplifies IPC bottlenecks. In shared memory setups, multiple processes competing for access to common regions can trigger cache-coherence traffic and lock contention, leading to serialized execution and severe degradation in throughput as process count grows; studies on multicore platforms show up to 50% throughput loss beyond 8-16 threads due to coherence and bus contention. Message-passing systems fare worse under high concurrency, as kernel-mediated queuing introduces O(n) overhead from polling or busy-waiting on file descriptors, exacerbating latency spikes in dense multi-process workloads. For example, in simulations with 32 or more processes, naive IPC polling can inflate average response times by an order of magnitude compared to idle conditions. Additional performance challenges arise in virtualized and containerized environments, where IPC overhead increases due to isolation mechanisms like namespaces and cgroups, adding 20-50% latency in setups like Docker or Kubernetes compared to bare metal.

To mitigate these limitations, zero-copy techniques bypass unnecessary data duplication by leveraging kernel facilities like splice for pipes or sendfile for file-to-socket transfers, potentially halving latency and doubling throughput in bandwidth-bound scenarios. Asynchronous I/O interfaces, such as Linux's aio or io_uring (enhanced post-2020 for lower overhead), further alleviate context-switching costs by enabling non-blocking submissions that defer notification until completion, reducing CPU overhead by 30-70% in high-throughput applications. These strategies, while effective, require careful tuning to balance throughput with latency needs in multi-process settings.
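The latencies quoted above are highly dependent on hardware, kernel version, and load, and are best verified empirically. The following is a minimal sketch, assuming a Linux/POSIX environment, of how a pipe round-trip (two copies plus at least two context switches per iteration) can be timed; it is illustrative rather than a rigorous benchmark.

```c
/* Sketch: measuring pipe round-trip latency between a parent and child
 * process with clock_gettime().  Results vary widely by system; this is
 * an illustration, not a calibrated benchmark. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

#define ITERATIONS 100000

int main(void) {
    int p2c[2], c2p[2];            /* parent->child and child->parent pipes */
    char byte = 'x';

    if (pipe(p2c) == -1 || pipe(c2p) == -1) { perror("pipe"); exit(1); }

    pid_t pid = fork();
    if (pid == 0) {                /* child: echo each byte back */
        for (int i = 0; i < ITERATIONS; i++) {
            if (read(p2c[0], &byte, 1) != 1) break;
            if (write(c2p[1], &byte, 1) != 1) break;
        }
        _exit(0);
    }

    struct timespec start, end;
    clock_gettime(CLOCK_MONOTONIC, &start);
    for (int i = 0; i < ITERATIONS; i++) {   /* parent: ping-pong */
        write(p2c[1], &byte, 1);
        read(c2p[0], &byte, 1);
    }
    clock_gettime(CLOCK_MONOTONIC, &end);

    double elapsed_ns = (end.tv_sec - start.tv_sec) * 1e9 +
                        (end.tv_nsec - start.tv_nsec);
    /* Each iteration includes two kernel copies and at least two switches. */
    printf("average round trip: %.0f ns\n", elapsed_ns / ITERATIONS);
    wait(NULL);
    return 0;
}
```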

Security and Reliability Issues

Shared memory mechanisms in inter-process communication (IPC) are particularly susceptible to race-condition risks, where an attacker can exploit timing discrepancies to gain unauthorized access. A prominent example is the time-of-check-to-time-of-use (TOCTOU) attack, in which a process verifies permissions or resource availability before using the segment, but an intervening action by a malicious process alters the underlying resource, allowing privilege escalation. This arises because shared resources like memory segments can be modified concurrently without re-validation, enabling attackers to inject malicious code or data into the segment after the initial validation but before attachment.

Reliability challenges in IPC often stem from message loss in unreliable channels and race conditions due to inadequate synchronization. In message-passing systems, particularly over networks, messages can be lost due to sender failures, network disruptions, or buffer overflows, leading to incomplete transfers and potential system inconsistencies without built-in acknowledgments or retries. Race conditions occur when multiple processes access shared resources simultaneously without proper coordination, resulting in corrupted data, deadlocks, or erroneous computations, as concurrent modifications violate expected sequential ordering. Recent concerns include side-channel attacks on shared resources, such as those amplified by the Spectre and Meltdown vulnerabilities (disclosed 2018, with ongoing mitigations as of 2025), which can leak data across process boundaries via cache timing.

Security models for IPC incorporate access controls and encryption to mitigate these risks. In Unix-like systems, IPC objects such as shared memory segments, message queues, and semaphores are protected by permissions analogous to file access controls, including owner, group, and world read/write/execute modes enforced through discretionary access control (DAC) checks before operations. For distributed IPC setups, encryption is essential to protect data in transit over untrusted networks, employing protocols like TLS to ensure confidentiality and integrity against interception or tampering.

Historical incidents highlight the severity of these issues, particularly buffer overflows in Unix programs during the 1980s and 1990s. For instance, vulnerabilities in network daemons such as fingerd were exploited through buffer overflows in input handling to execute arbitrary code, as seen in the 1988 Morris worm that infected thousands of Unix systems via a buffer overflow in fingerd's network input processing. These exploits demonstrated how unchecked input data could lead to widespread compromises, prompting advancements in secure coding practices.
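To make the check-then-use pattern concrete, the sketch below shows the classic filesystem variant of a TOCTOU race; the same logic applies to validating a shared resource before attaching to it. The function names are illustrative, and the mitigation shown is only one of several possible approaches.

```c
/* Illustrative sketch of a time-of-check-to-time-of-use (TOCTOU) race in
 * its classic filesystem form.  The access() check and the open() call are
 * not atomic: another process can swap the object (e.g. via a symlink) in
 * the window between them, so the check no longer applies to what is used. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int vulnerable_open(const char *path) {
    /* check: does the real user have permission to read this path? */
    if (access(path, R_OK) != 0) {
        fprintf(stderr, "permission denied\n");
        return -1;
    }
    /* ...window in which an attacker can replace 'path'... */

    /* use: the object opened here may differ from the one checked above */
    return open(path, O_RDONLY);
}

int safer_open(const char *path) {
    /* One mitigation sketch: let the kernel perform the check at open time
     * (O_NOFOLLOW) and validate the returned descriptor with fstat(), so the
     * check and the use refer to the same object. */
    return open(path, O_RDONLY | O_NOFOLLOW);
}
```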

Local IPC Mechanisms

Shared Memory Techniques

Shared memory techniques enable processes to exchange data by mapping a common region of physical memory into their respective address spaces, facilitating direct access without the overhead of kernel-mediated message passing. This approach is foundational in Unix-like systems for high-performance local inter-process communication, particularly suited for scenarios involving frequent or bulk data sharing between cooperating processes on the same host.

In the System V inter-process communication (IPC) framework, shared memory segments are allocated and accessed through dedicated system calls. The shmget() function creates a new shared memory segment or retrieves an existing one, specified by a unique key value, with parameters defining the segment size in bytes, creation flags (such as IPC_CREAT), and permission bits. Upon successful allocation, it returns a non-negative identifier (shmid) associated with the segment, which persists until explicitly removed or the system reboots. This identifier serves as a handle for subsequent operations. Once allocated, processes attach to the shared segment using shmat(), which maps the memory into the calling process's address space at an address determined by the kernel (if unspecified) or a provided hint. The function returns a pointer to the start of the mapped region, allowing processes to perform read and write operations directly on this pointer as if it were local memory. Attachment can be shared among multiple processes using the same shmid, enabling concurrent access; detachment occurs via shmdt() to unmap the region. The segment's lifetime is managed separately, with removal via shmctl() to free resources.

Memory-mapped files offer an alternative mechanism for shared memory, leveraging the mmap() system call to associate a file or anonymous memory region with a process's address space. For IPC purposes, processes invoke mmap() with the MAP_SHARED flag on the same underlying file (or anonymous region via MAP_ANONYMOUS), ensuring modifications by one process are immediately visible to others. If backed by a file, the mapping supports persistence across process executions, with data loaded on demand through kernel-handled page faults when accessing unmapped pages. This contrasts with pure anonymous mappings, which are volatile and exist only until unmapped. Explicit synchronization is required to coordinate updates, as the kernel does not inherently serialize access.

These techniques provide significant advantages, including minimal overhead for large transfers due to the absence of copying between user and kernel spaces, making them ideal for bandwidth-intensive applications like multimedia processing. However, they demand explicit bounds checking by processes to avoid memory overruns and buffer overflows, as the kernel enforces no automatic limits on access within the mapped region.

A representative example is the producer-consumer pattern using a circular (ring) buffer within shared memory. The producer writes items sequentially into the fixed-size buffer, updating a write index (modulo the buffer length) after each insertion, while the consumer reads from a separate read index, advancing it upon consumption. This structure efficiently handles streaming data, such as log entries or sensor readings, with the ring layout reusing buffer space by wrapping around; access to the indices and the buffer requires protection via synchronization primitives to ensure atomicity.
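A minimal sketch of the producer side of such a ring buffer, using the System V calls described above, might look as follows; the key, slot sizes, and structure layout are illustrative, and the semaphore-based synchronization of the indices is omitted for brevity.

```c
/* Sketch: producer side of a shared-memory ring buffer (System V interface).
 * A real implementation must also synchronize write_idx/read_idx, e.g. with
 * a System V or POSIX semaphore. */
#include <stdio.h>
#include <sys/ipc.h>
#include <sys/shm.h>

#define SLOTS 16
#define SLOT_SIZE 64

struct ring {
    unsigned int write_idx;           /* next slot the producer fills  */
    unsigned int read_idx;            /* next slot the consumer drains */
    char data[SLOTS][SLOT_SIZE];
};

int main(void) {
    /* 0x1234 is an arbitrary example key; ftok() is more common in practice */
    int shmid = shmget((key_t)0x1234, sizeof(struct ring), IPC_CREAT | 0600);
    if (shmid == -1) { perror("shmget"); return 1; }

    struct ring *r = shmat(shmid, NULL, 0);   /* map segment into our space */
    if (r == (void *)-1) { perror("shmat"); return 1; }

    /* Produce one item: write into the current slot, then advance the
     * write index modulo the buffer length. */
    snprintf(r->data[r->write_idx], SLOT_SIZE, "log entry %u", r->write_idx);
    r->write_idx = (r->write_idx + 1) % SLOTS;

    shmdt(r);                          /* detach; the segment persists until */
    return 0;                          /* removed with shmctl(IPC_RMID)      */
}
```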

Pipe and Stream-Based Methods

Pipe and stream-based methods provide mechanisms for inter-process communication through unidirectional or bidirectional byte streams, enabling processes to exchange data sequentially without direct access to each other's memory. These approaches rely on kernel-managed buffers to facilitate data copying between processes, ensuring isolation while allowing controlled flow. Originating in early Unix implementations, pipes were introduced in Third Edition Unix in February 1973, having been proposed by Douglas McIlroy as a way to chain commands via a simple conduit.

Anonymous pipes serve as a fundamental unidirectional communication channel primarily between related processes, such as a parent and child after a fork. They are created using the pipe() system call, which allocates a kernel buffer (typically 64 KB on modern Linux systems) and returns two file descriptors: one for reading (fd[0]) and one for writing (fd[1]). In a common usage pattern, a parent process calls pipe() before fork(), duplicating the descriptors across the fork via inheritance, allowing the parent to write data that the child reads, or vice versa. Read operations block if the buffer is empty, while writes block if full, providing implicit flow control through kernel scheduling; a write to a closed read end generates a SIGPIPE signal. This setup supports half-duplex communication, where data flows in one direction, and closing the write end signals end-of-file to the reader.

Named pipes, also known as FIFOs (first-in, first-out), extend anonymous pipes to enable communication between unrelated processes by associating the channel with a filesystem pathname. They are created using the mkfifo() system call or the mkfifo command, resulting in a special file visible via ls -l with type 'p'. Unlike anonymous pipes, named pipes persist in the filesystem until explicitly removed with rm, allowing any process with access permissions to open them via standard file operations like open(). Opening a named pipe for reading blocks until a writer opens the other end (and vice versa in blocking mode), ensuring synchronized access; non-blocking opens allow reads without a writer but fail writes with ENXIO if no reader exists. Data transfer follows byte-stream FIFO semantics, with reads consuming bytes sequentially and writes appending to the buffer, maintaining the same blocking behavior for flow control as anonymous pipes.

Stream-based extensions, such as Unix domain sockets, provide more flexible local IPC by supporting both byte-stream and datagram modes over the filesystem or abstract namespaces. These sockets operate in the AF_UNIX (or AF_LOCAL) domain for communication between processes on the same host, bypassing network stacks for efficiency. In byte-stream mode (SOCK_STREAM), akin to TCP, they establish a reliable, ordered, full-duplex connection via socket(), bind(), listen(), and accept(), transmitting data as a continuous sequence without message boundaries. Conversely, datagram mode (SOCK_DGRAM), similar to UDP, sends discrete messages preserving boundaries using sendto() and recvfrom(), also reliably but without connection setup. The socketpair() call creates an unnamed pair for bidirectional communication between related processes, functioning like a full-duplex pipe. Flow control in stream mode mirrors pipes, with blocking on full buffers, while datagrams may queue up to a limit before dropping. These mechanisms can introduce reliability issues, such as potential message drops in overloaded datagram scenarios if not handled by the application.

A practical example of pipe usage appears in shell command chaining with the | operator, which connects the standard output of one command to the input of the next, forming a pipeline executed in subshells.
For instance, ls | grep .txt lists files and filters for those ending in .txt, leveraging anonymous pipes created by the shell to stream output directly. This chaining, a hallmark of Unix philosophy, allows complex data processing by composing simple tools without intermediate files.
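Under the hood, the shell implements such a pipeline with pipe(), fork(), dup2(), and exec. The following is a minimal sketch, assuming a POSIX environment, of roughly what happens for ls | grep .txt:

```c
/* Sketch of what a shell does for "ls | grep .txt": create an anonymous
 * pipe, fork twice, and wire each child's standard streams to the pipe
 * ends with dup2() before exec'ing the commands. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fd[2];                      /* fd[0]: read end, fd[1]: write end */
    if (pipe(fd) == -1) { perror("pipe"); exit(1); }

    if (fork() == 0) {              /* first child runs "ls" */
        dup2(fd[1], STDOUT_FILENO); /* stdout -> pipe write end */
        close(fd[0]); close(fd[1]);
        execlp("ls", "ls", (char *)NULL);
        perror("execlp ls"); _exit(127);
    }

    if (fork() == 0) {              /* second child runs "grep .txt" */
        dup2(fd[0], STDIN_FILENO);  /* stdin <- pipe read end */
        close(fd[0]); close(fd[1]);
        execlp("grep", "grep", ".txt", (char *)NULL);
        perror("execlp grep"); _exit(127);
    }

    /* parent closes both ends so grep sees EOF when ls exits */
    close(fd[0]); close(fd[1]);
    while (wait(NULL) > 0)
        ;
    return 0;
}
```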

Message Queues and Signals

Message queues provide a mechanism for processes to exchange discrete messages asynchronously in a first-in, first-out (FIFO) manner, with support for message types and priorities to facilitate selective retrieval. In System V UNIX, message queues are created or accessed using the msgget system call, which takes a key (a unique identifier) and flags to specify creation or access permissions, returning a message queue identifier (msqid) upon success. Messages are then sent to the queue via msgsnd, which appends a message structure containing a type field (used for prioritization or filtering), a text buffer, and size information; the call blocks if the queue is full until space is available or a timeout occurs. Reception occurs through msgrcv, allowing processes to retrieve messages by type (e.g., the lowest type or a specific one) with optional priority handling, where higher-priority messages are dequeued first within the same type; this supports asynchronous communication by decoupling sender and receiver execution. System V queues have inherent limits, such as a maximum message size (typically 8 KB via the MSGMAX kernel parameter) and a system-wide limit on the number of message queues (MSGMNI), enforced to prevent resource exhaustion, with queue control via msgctl for status queries, permission changes, or removal. The total size of a queue is limited by MSGMNB (default 16384 bytes).

POSIX message queues extend this model with a file-like interface, emphasizing portability across UNIX-like systems. The mq_open function creates or opens a named queue (using a pathname-like string) with specified attributes like maximum message size and queue capacity, returning a message queue descriptor (mqd_t) akin to a file descriptor for subsequent operations. Messages are enqueued using mq_send, which adds a buffer of specified length and priority (0 being lowest, higher values dequeued first) to the tail of the corresponding priority-ordered list, blocking if the queue is full unless non-blocking mode is set. A key feature is asynchronous notification support via mq_notify, where a process registers a sigevent structure to receive alerts—either as a signal or via a file descriptor event (e.g., using poll or select)—when a message arrives, enabling efficient waiting without constant polling and tying into broader event-driven IPC patterns. POSIX queues also enforce limits, such as a configurable maximum message size (msgsize_max, default 8192 bytes) and queue depth (msg_max, default 10), with a system-wide limit on the number of message queues (queues_max); the non-blocking flag can be adjusted post-creation using mq_setattr.

Signals offer a lightweight, event-based IPC primitive for notifying processes of asynchronous events, such as interrupts or inter-process requests, without transferring data payloads. In UNIX systems, signals are identified by integers (e.g., SIGINT for keyboard interrupt via Ctrl+C), and the kill system call delivers a specified signal to a target process or process group by PID, allowing one process to asynchronously notify another. Upon receipt, the kernel invokes a user-defined signal handler (registered via sigaction or the simpler signal function) if the signal is not ignored or blocked, or performs a default action like termination for SIGINT; handlers execute in a restricted context, with the process's signal mask temporarily augmented to block the signal itself during handling (unless SA_NODEFER is set). Signal masking, managed by sigprocmask or pthread_sigmask in multithreaded environments, allows processes to temporarily block specific signals (e.g., masking SIGINT during critical sections) to prevent interruption; pending signals are queued and delivered post-unmasking in POSIX-compliant order. This mechanism is efficient for simple notifications but lacks data transfer, complementing message queues for event signaling in asynchronous IPC.

A practical example of signals in action is job control within UNIX shells like bash or csh, where foreground processes receive SIGINT (from Ctrl+C) to terminate immediately, allowing the shell to regain control and prompt for new input. For suspension, Ctrl+Z sends SIGTSTP to the foreground job, pausing it and returning shell control; the shell then lists the stopped job and can resume it in the foreground with fg (sending SIGCONT) or in the background with bg, demonstrating signals' role in managing process lifecycle without direct data exchange. This facility, standardized in POSIX, enables interactive multitasking by leveraging signals for termination and state transitions across process groups.
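A minimal sketch of the POSIX queue interface described above—creating a named queue, sending one prioritized message, and reading it back—assuming a Linux system (link with -lrt on older glibc versions); the queue name and attribute values are illustrative.

```c
/* Sketch: POSIX message queue round-trip within one process, using the
 * mq_open/mq_send/mq_receive calls described above. */
#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>

int main(void) {
    struct mq_attr attr = {
        .mq_maxmsg  = 10,            /* queue depth               */
        .mq_msgsize = 128            /* maximum bytes per message */
    };

    mqd_t q = mq_open("/demo_queue", O_CREAT | O_RDWR, 0600, &attr);
    if (q == (mqd_t)-1) { perror("mq_open"); return 1; }

    const char *msg = "sensor reading";
    if (mq_send(q, msg, strlen(msg) + 1, 5) == -1)       /* priority 5 */
        perror("mq_send");

    char buf[128];                   /* must be >= mq_msgsize */
    unsigned int prio;
    ssize_t n = mq_receive(q, buf, sizeof(buf), &prio);  /* highest prio first */
    if (n >= 0)
        printf("received \"%s\" at priority %u\n", buf, prio);

    mq_close(q);
    mq_unlink("/demo_queue");        /* remove the queue name */
    return 0;
}
```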

Synchronization Primitives

Semaphores and Monitors

Semaphores are synchronization primitives used in inter-process communication to control access to shared resources and coordinate execution. Introduced by Edsger Dijkstra in his 1968 paper "Cooperating Sequential Processes," semaphores provide a mechanism for processes to signal each other and manage concurrency without busy-waiting. A semaphore is an integer variable that supports two atomic operations: wait (denoted as P, from the Dutch "proberen," meaning to test or decrement) and signal (denoted as V, from "verhogen," meaning to increment). The P operation decrements the semaphore value if it is positive, allowing the process to proceed; otherwise, the process blocks until the value becomes positive. The V operation increments the value and wakes a waiting process if any are blocked. In Unix-like systems, semaphores are implemented through System V IPC mechanisms, using system calls like semget to create or access a semaphore set, semop to perform P and V operations atomically on one or more semaphores, and semctl for control operations such as initialization and deletion.

Semaphores come in two primary forms: binary and counting. A binary semaphore, initialized to 1, functions as a lock, ensuring that only one process can access a critical section at a time by performing P before entry and V after exit. This is particularly useful in IPC scenarios where processes must serialize access to shared data structures to prevent race conditions. In contrast, a counting semaphore, initialized to a positive integer N representing the number of available resources, allows up to N processes to proceed concurrently before blocking additional ones. For example, in a system with a pool of database connections limited to five, a semaphore initialized to 5 enables concurrent access up to that limit, with each acquiring process performing P to decrement and releasing with V, thus throttling access without unnecessary serialization. These operations are implemented atomically in System V to ensure integrity even under high contention.

Monitors represent a higher-level abstraction built upon semaphores, encapsulating shared data, procedures, and synchronization within a single module to simplify concurrent programming. Introduced by C.A.R. Hoare in his 1974 paper "Monitors: An Operating System Structuring Concept," monitors ensure that only one process executes within the monitor at a time, using implicit mutual exclusion, while condition variables allow processes to wait for specific states and be signaled upon changes. This design hides low-level semaphore details from programmers, reducing errors in coordination. In practice, Java implements monitors through synchronized blocks and methods, where entering a synchronized block acquires the intrinsic lock on an object (acting as the monitor), and wait(), notify(), and notifyAll() serve as condition variable operations to manage waiting and signaling. For instance, a producer-consumer scenario can use a monitor to protect a shared buffer, with producers signaling after adding items and consumers waiting until items are available.

To prevent deadlocks in semaphore usage, processes must adhere to strategies that break the circular wait condition, such as imposing a total ordering on resource acquisition—always requesting semaphores in the same sequence across all processes—and incorporating timeouts on wait operations to abort and retry if a lock cannot be acquired promptly. Resource ordering ensures no cycles form in the resource-allocation graph, while timeouts mitigate indefinite blocking, as seen in implementations where P operations include a timeout before returning control to the process.

In shared memory techniques for IPC, semaphores and monitors provide essential coordination to synchronize reads and writes, preventing data corruption from concurrent modifications.
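A minimal sketch of a binary semaphore used as a lock with the System V calls named above (semget, semctl, semop); the key value is arbitrary and error handling is abbreviated.

```c
/* Sketch: binary System V semaphore guarding a critical section. */
#include <stdio.h>
#include <sys/ipc.h>
#include <sys/sem.h>

/* On Linux the caller must define union semun for semctl(). */
union semun { int val; struct semid_ds *buf; unsigned short *array; };

int main(void) {
    int semid = semget((key_t)0x5151, 1, IPC_CREAT | 0600);
    if (semid == -1) { perror("semget"); return 1; }

    union semun arg = { .val = 1 };            /* 1 = unlocked */
    if (semctl(semid, 0, SETVAL, arg) == -1) { perror("semctl"); return 1; }

    struct sembuf p = { .sem_num = 0, .sem_op = -1, .sem_flg = 0 }; /* wait/P   */
    struct sembuf v = { .sem_num = 0, .sem_op = +1, .sem_flg = 0 }; /* signal/V */

    semop(semid, &p, 1);             /* enter critical section (blocks if 0) */
    printf("in critical section\n"); /* ... access shared resource here ...  */
    semop(semid, &v, 1);             /* leave critical section               */

    return 0;
}
```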

Mutexes and Condition Variables

Mutexes, or mutual-exclusion locks, are synchronization primitives designed to ensure that only one thread or process can access a critical section at a time, preventing race conditions in inter-process communication scenarios. In POSIX systems, mutexes are implemented via the pthread_mutex_t type, where a thread acquires the lock using pthread_mutex_lock() before accessing the shared resource and releases it with pthread_mutex_unlock() afterward; if the mutex is already locked, the calling thread blocks until it becomes available. This ownership-based mechanism differs from counting semaphores by enforcing strict ownership, where only the acquiring thread can release the lock. POSIX mutexes support variants for specific use cases, including recursive mutexes that allow the same thread to acquire the lock multiple times without deadlocking, specified by the PTHREAD_MUTEX_RECURSIVE attribute during initialization with pthread_mutex_init(). For inter-process use, mutexes can be placed in shared memory regions with the PTHREAD_PROCESS_SHARED attribute, enabling synchronization across process boundaries. These features make mutexes suitable for fine-grained protection of shared data structures in IPC, such as buffers or queues.

Condition variables complement mutexes by allowing processes to wait efficiently for specific conditions to become true, avoiding the inefficiency of busy-waiting loops. In POSIX threads, condition variables are represented by pthread_cond_t and must always be used in conjunction with an associated mutex; a process atomically releases the mutex and blocks on pthread_cond_wait() until awakened by pthread_cond_signal() or pthread_cond_broadcast(), at which point it reacquires the mutex. The signal operation wakes at least one waiting process, while broadcast wakes all, ensuring that changes to shared state—such as data availability in a producer-consumer setup—are efficiently propagated without polling.

To address priority inversion in real-time systems, where a high-priority process is delayed because a low-priority process holds a mutex it needs while intermediate-priority processes preempt the holder, mutexes often incorporate priority inheritance protocols. Under the basic priority inheritance protocol, the priority of the mutex-holding low-priority process is temporarily raised to match the highest priority of any waiting process, minimizing blocking time for critical tasks. This approach, formalized in priority inheritance protocols, bounds the duration of inversion and is implemented in real-time operating systems to support predictable scheduling in IPC-heavy environments.

A practical application of mutexes and condition variables is solving the reader-writer problem, where multiple readers can access shared data concurrently but writers require exclusive access to maintain consistency. In a typical implementation, a mutex protects a reader count variable, while separate condition variables (e.g., for readers and writers) allow waiting processes to be signaled upon state changes, such as when no readers are active for a writer to proceed. For instance, readers increment the count under mutex protection and signal waiting readers if appropriate, while writers wait on a condition variable until the count reaches zero, ensuring fairness and avoiding starvation through prioritized signaling. This pattern is widely used in database systems and file servers for concurrent access.
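The sketch below shows how a mutex and condition variable can be initialized with the PTHREAD_PROCESS_SHARED attribute inside a POSIX shared memory object so that a parent and child process can coordinate; the object name /demo_sync is illustrative, cleanup is abbreviated, and the program should be compiled with -pthread.

```c
/* Sketch: process-shared mutex and condition variable placed in a POSIX
 * shared memory object, used by a parent (producer) and child (consumer). */
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

struct shared_state {
    pthread_mutex_t lock;
    pthread_cond_t  ready;
    int             data_available;
};

int main(void) {
    int fd = shm_open("/demo_sync", O_CREAT | O_RDWR, 0600);
    if (fd == -1 || ftruncate(fd, sizeof(struct shared_state)) == -1)
        return 1;
    struct shared_state *s = mmap(NULL, sizeof(*s), PROT_READ | PROT_WRITE,
                                  MAP_SHARED, fd, 0);

    /* Mark both primitives as shareable across process boundaries. */
    pthread_mutexattr_t ma;
    pthread_mutexattr_init(&ma);
    pthread_mutexattr_setpshared(&ma, PTHREAD_PROCESS_SHARED);
    pthread_mutex_init(&s->lock, &ma);

    pthread_condattr_t ca;
    pthread_condattr_init(&ca);
    pthread_condattr_setpshared(&ca, PTHREAD_PROCESS_SHARED);
    pthread_cond_init(&s->ready, &ca);

    if (fork() == 0) {                       /* child: consumer */
        pthread_mutex_lock(&s->lock);
        while (!s->data_available)           /* wait, releasing the lock */
            pthread_cond_wait(&s->ready, &s->lock);
        printf("consumer: data is ready\n");
        pthread_mutex_unlock(&s->lock);
        _exit(0);
    }

    pthread_mutex_lock(&s->lock);            /* parent: producer */
    s->data_available = 1;
    pthread_cond_signal(&s->ready);          /* wake the waiting consumer */
    pthread_mutex_unlock(&s->lock);

    wait(NULL);
    shm_unlink("/demo_sync");                /* remove the shared object */
    return 0;
}
```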

Network and Distributed IPC

Socket-Based Communication

Socket-based communication enables inter-process communication (IPC) over networks, facilitating data exchange between processes on the same or different hosts, and extends to efficient local IPC via specialized domains. The Berkeley sockets application programming interface (API), originating from the 4.2BSD release of Unix in 1983 developed by the University of California, Berkeley's Computer Systems Research Group, provides a uniform abstraction for both connection-oriented and connectionless protocols. This API has evolved into the POSIX sockets standard, supporting protocols like TCP for reliable stream delivery and UDP for unreliable datagrams, making it foundational for networked applications.

The core operations of the Berkeley sockets API involve creating and managing endpoints through specific system calls. The socket() function creates a new socket descriptor, specifying the address family, socket type (e.g., SOCK_STREAM for TCP or SOCK_DGRAM for UDP), and protocol. A server uses bind() to associate the socket with a local address and port, listen() to prepare for incoming connections by setting a backlog queue, and accept() to retrieve the next connection request, yielding a new connected socket for data transfer. Clients invoke connect() to establish a connection to a remote server's address and port, after which data can be exchanged via send()/recv() for streams or sendto()/recvfrom() for datagrams. These calls abstract the underlying transport, allowing seamless communication across local or wide-area networks.

Address families define the namespace for socket addressing and protocol support. AF_INET specifies IPv4 addressing, combining 32-bit addresses with 16-bit port numbers to uniquely identify endpoints, while AF_INET6 extends this to 128-bit addresses for modern networks. Port numbers range from 0 to 65535, with well-known ports (0-1023) reserved for standard services like HTTP on port 80. Binding a socket to an address ensures incoming packets are demultiplexed correctly to the appropriate process. For local IPC, the AF_UNIX (or AF_LOCAL) family uses filesystem pathnames as addresses, bypassing the network stack for low-latency communication between processes on the same machine, often outperforming loopback TCP due to direct kernel-mediated transfers.

Efficient handling of concurrent connections requires non-blocking operations and multiplexing techniques. Sockets can be configured as non-blocking via the fcntl() call with O_NONBLOCK, ensuring operations like read() or write() return immediately if data is unavailable, rather than suspending the process. To monitor multiple sockets simultaneously, select() allows a process to wait on sets of file descriptors for readability, writability, or errors, with a timeout option; it returns the number of ready descriptors for further processing. Alternatively, poll() provides similar functionality using a more scalable array of pollfd structures, avoiding the file descriptor limits of select() in high-connection scenarios. These mechanisms enable single-threaded servers to manage thousands of clients by reacting only to ready events, reducing overhead in scalable applications.

A representative example is a TCP-based chat client-server application, illustrating bidirectional stream communication. The server creates a TCP socket with socket(AF_INET, SOCK_STREAM, 0), binds it to an address like "0.0.0.0:8080" using bind(), sets it to listen with listen(socket_fd, 5), and enters a loop calling accept() to handle incoming client connections. For each accepted connection, the server uses select() to multiplex reads from multiple client sockets and standard input, broadcasting messages received via recv() to all other connected clients using send(). The client, meanwhile, creates a socket, connects to the server's address with connect(), and alternates between send() for user messages and recv() for incoming broadcasts in a non-blocking loop. This setup leverages TCP's reliability for ordered, error-checked delivery, forming the basis for many networked services.
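A compact sketch of the select()-based multiplexing loop in such a server, assuming a POSIX environment; for brevity it echoes data back to the sender rather than broadcasting, and error handling is minimal.

```c
/* Sketch: select()-based TCP server on port 8080 handling many clients
 * from a single thread. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

int main(void) {
    int listener = socket(AF_INET, SOCK_STREAM, 0);

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);   /* "0.0.0.0" */
    addr.sin_port = htons(8080);
    bind(listener, (struct sockaddr *)&addr, sizeof(addr));
    listen(listener, 5);

    fd_set all_fds;
    FD_ZERO(&all_fds);
    FD_SET(listener, &all_fds);
    int max_fd = listener;

    for (;;) {
        fd_set ready = all_fds;                 /* select() modifies its set */
        if (select(max_fd + 1, &ready, NULL, NULL, NULL) < 0) break;

        for (int fd = 0; fd <= max_fd; fd++) {
            if (!FD_ISSET(fd, &ready)) continue;

            if (fd == listener) {               /* new connection */
                int client = accept(listener, NULL, NULL);
                FD_SET(client, &all_fds);
                if (client > max_fd) max_fd = client;
            } else {                            /* data from an existing client */
                char buf[512];
                ssize_t n = recv(fd, buf, sizeof(buf), 0);
                if (n <= 0) {                   /* disconnect or error */
                    close(fd);
                    FD_CLR(fd, &all_fds);
                } else {
                    send(fd, buf, n, 0);        /* echo back; a chat server
                                                   would broadcast instead */
                }
            }
        }
    }
    return 0;
}
```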

Remote Procedure Calls

Remote procedure calls (RPC) provide a mechanism for processes to invoke functions on remote machines as if they were local procedure calls, abstracting the underlying network communication to enable distributed computing. This model, introduced in seminal work by Birrell and Nelson, emphasizes transparency, where the caller remains unaware of the remote execution, and focuses on synchronous invocation to mimic local semantics. RPC systems typically rely on transport protocols like sockets for message exchange but layer abstractions to handle distribution.

The core architecture of RPC involves stub generation, argument marshalling, and unmarshalling to facilitate cross-process calls. At the client side, a stub routine intercepts the procedure call, serializes (marshals) the arguments into a network message using a standard format like External Data Representation (XDR), and sends it to the server via a transport protocol. On the server, a corresponding stub receives the message, unmarshals the arguments, invokes the actual procedure, marshals the results, and returns them to the client stub, which unmarshals and delivers the output to the caller. This process ensures synchronous execution, where the client blocks until the response arrives, though implementations may use threads for concurrency. Stub code is often generated automatically from interface definitions to ensure consistency and portability across heterogeneous systems.

Key protocols exemplify RPC implementations, such as Open Network Computing (ONC) RPC developed by Sun Microsystems in the 1980s and gRPC introduced by Google in 2015. ONC RPC, standardized in RFC 1831, uses TCP or UDP, with port 111 for the portmapper service, and employs XDR for data serialization, supporting remote procedure invocation in distributed environments. In contrast, gRPC builds on HTTP/2 for efficient bidirectional streaming and multiplexing, using Protocol Buffers for compact serialization, which enhances performance in modern cloud-native applications. Both protocols handle binding via service registries but differ in transport efficiency and language support.

Failure handling in RPC addresses network unreliability through idempotency and delivery semantics, balancing reliability with performance. Idempotent operations, where repeated calls yield the same result, allow safe retries without side effects. Common semantics include at-most-once, where a call executes zero or one time (discarding duplicates via sequence numbers to avoid replays), and at-least-once, where retries ensure execution but may cause multiple executions unless operations are idempotent. Birrell and Nelson's design approximates "exactly once" semantics as an illusion, relying on timeouts and acknowledgments, though true exactly-once delivery requires additional state management like transactions.

A prominent example of RPC application is the Network File System (NFS), which uses Sun's ONC RPC to enable remote file access as local operations. In NFS version 2, clients invoke RPC procedures like READ or WRITE on the NFS server (program number 100003) to manipulate files, with arguments marshaled in XDR and transported over UDP for low latency. This integration allows transparent mounting of remote directories, hiding distribution details while relying on RPC for reliable invocation.
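To illustrate what a generated client stub does conceptually, the hand-rolled sketch below marshals two integers in network byte order, sends them over an already-connected socket, and blocks for the reply. The procedure number and wire format here are invented purely for illustration; real systems use XDR or Protocol Buffers together with automatically generated stub code.

```c
/* Conceptual sketch of a client stub: marshal arguments, send the request,
 * block for the reply, unmarshal the result.  Not a real RPC library API. */
#include <arpa/inet.h>   /* htonl/ntohl */
#include <stdint.h>
#include <unistd.h>

#define PROC_ADD 1       /* illustrative procedure number */

/* Client-side stub for a hypothetical remote "int add(int a, int b)". */
int add_stub(int fd, int32_t a, int32_t b, int32_t *result) {
    uint32_t request[3] = { htonl(PROC_ADD), htonl((uint32_t)a),
                            htonl((uint32_t)b) };          /* marshal */
    if (write(fd, request, sizeof(request)) != sizeof(request))
        return -1;                                          /* send failed */

    uint32_t reply;
    if (read(fd, &reply, sizeof(reply)) != sizeof(reply))   /* block for reply */
        return -1;                      /* a real stub would retry or time out */

    *result = (int32_t)ntohl(reply);                        /* unmarshal */
    return 0;
}
```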

Higher-Level Frameworks

Message-Oriented Middleware

Message-oriented middleware (MOM) is a class of software infrastructure that facilitates asynchronous communication between distributed applications by enabling the exchange of structured messages, thereby decoupling producers and consumers in terms of time, location, and platform. This approach contrasts with synchronous methods like remote procedure calls by allowing senders to continue processing without waiting for immediate responses, which enhances scalability and resilience in enterprise environments. MOM builds on foundational message queuing concepts by adding layers for routing, persistence, and interoperability to support complex distributed systems.

Central to MOM are message brokers, which act as intermediaries that receive, store, route, and forward messages between applications; prominent examples include RabbitMQ, an open-source, multi-protocol broker that supports enterprise-scale messaging. Message queues serve as first-in, first-out (FIFO) buffers for point-to-point delivery, ensuring a message reaches exactly one consumer, while topics enable publish-subscribe patterns where publishers broadcast messages to multiple interested subscribers without direct knowledge of them. These components collectively promote loose coupling, as applications interact via standardized message formats rather than tight bindings to specific endpoints.

Key standards underpinning MOM interoperability include the Java Message Service (JMS), a specification introduced in the late 1990s by Sun Microsystems (now Oracle) as a Java API for creating, sending, receiving, and reading messages across compliant brokers, which has become foundational for Java-based enterprise applications. Complementing JMS is the Advanced Message Queuing Protocol (AMQP), an open application-layer protocol developed starting in 2003 by JPMorgan Chase in collaboration with partners like iMatix and later standardized by OASIS, designed to ensure secure, reliable message exchange across diverse middleware implementations regardless of vendor.

To handle reliability in unreliable networks, MOM incorporates durability via persistent storage on brokers, preventing message loss during failures, and transactional support that coordinates message production, consumption, and acknowledgments across distributed participants. Exactly-once delivery is achieved through mechanisms like client acknowledgments—where consumers confirm receipt to trigger broker removal—and two-phase commit protocols in transactional contexts, guaranteeing that each message is processed precisely once without duplication or omission, even amid crashes or network partitions. In practice, MOM brokers like RabbitMQ implement enterprise integration patterns in microservices architectures, such as the message router pattern for directing orders to inventory or payment services via queues, or the publish-subscribe channel for broadcasting user events to multiple notification handlers, enabling resilient, scalable event-driven systems.

Distributed Object Systems

Distributed object systems provide frameworks that allow processes to interact with remote objects as if they were local, abstracting the complexities of network communication to facilitate seamless inter-process communication in distributed environments. These systems emphasize object-oriented principles, where objects encapsulate data and behavior, and invocations on remote objects mimic local method calls. By leveraging middleware to handle marshaling, unmarshaling, and transport, they achieve location transparency, enabling developers to focus on application logic rather than distribution details.

A seminal example is the Common Object Request Broker Architecture (CORBA), introduced in 1991 by the Object Management Group (OMG) as a standard for distributed object computing. CORBA uses the Interface Definition Language (IDL) to define object interfaces independently of implementation languages, allowing stubs and skeletons to be generated for client-server interactions. The Internet Inter-ORB Protocol (IIOP), a key component of CORBA, enables interoperability between different Object Request Brokers (ORBs) over TCP/IP networks by mapping the General Inter-ORB Protocol (GIOP) to the internet transport layer.

Alternatives to CORBA emerged in the 1990s, including Microsoft's Distributed Component Object Model (DCOM), first released in the mid-1990s as an extension of the Component Object Model (COM) for network-transparent object invocation on Windows platforms. DCOM relies on proxies and object exporters to facilitate remote calls, similar to CORBA's stubs and skeletons but tied to Microsoft's ecosystem. In modern contexts, RESTful services using JSON serialization have become prevalent alternatives, offering lightweight, stateless interactions over HTTP without the overhead of binary protocols or IDL, prioritizing simplicity and web-scale interoperability.

Central to these systems is location transparency, achieved through proxy objects that stand in for remote objects on the client side, intercepting method calls and forwarding them across the network while hiding distribution mechanics. Dynamic invocation allows clients to discover and call methods at runtime via naming services or interfaces, supporting flexibility in evolving distributed applications. For instance, in Java Remote Method Invocation (RMI), introduced in JDK 1.1, distributed garbage collection is handled through the Distributed Garbage Collection (DGC) protocol, where clients register references with remote VMs to track object liveness and enable automatic cleanup of unreferenced remote objects.

Operating System Implementations

Unix-like Systems

Unix-like systems provide robust support for inter-process communication (IPC) through mechanisms defined in the POSIX standard, enabling processes to exchange data and synchronize operations efficiently. These include pipes for stream-based data transfer, message queues for structured messaging, semaphores for synchronization, and shared memory for direct access to common data regions.

Pipes serve as a fundamental IPC tool in Unix-like environments, allowing unidirectional data flow between related processes, typically parent and child, via the pipe() system call, which creates a pair of file descriptors for reading and writing. Named pipes, or FIFOs, extend this to unrelated processes using mkfifo() or the mknod() system call, facilitating persistent communication channels accessible by pathnames. Message queues enable processes to send and receive formatted messages asynchronously; in the System V IPC model, accessed via <sys/ipc.h>, queues are created with msgget() using a key, messages are sent via msgsnd() and received with msgrcv(), supporting priority-based queuing up to a system-defined limit. POSIX-compliant message queues, using <mqueue.h>, offer similar functionality through mq_open() for creation and mq_send()/mq_receive() for operations, with attributes like maximum message size and queue depth configurable at creation.

Semaphores in Unix-like systems, also part of the System V IPC via <sys/ipc.h>, provide synchronization primitives for controlling access to shared resources; arrays of semaphores are initialized with semget(), values adjusted using semop() for wait (P) and signal (V) operations, and controlled via semctl(). POSIX semaphores, defined in <semaphore.h>, include named semaphores via sem_open() for inter-process use and unnamed ones via sem_init() for intra-process or shared memory scenarios, supporting atomic wait (sem_wait()) and post (sem_post()) operations. Shared memory segments, created through shmget() in the System V interface, allow multiple processes to map a common memory region using shmat(), with access controlled by keys and permissions, and detachment via shmdt(); POSIX shared memory uses shm_open() to create a memory object treated as a file, mapped with mmap(). These mechanisms collectively support the general synchronization primitives like mutual exclusion and signaling discussed in broader IPC contexts.

Linux, a prominent Unix-like system, extends POSIX IPC with futexes (fast user-space mutexes), which enable efficient user-space locking by allowing atomic operations on shared memory words without kernel intervention unless contention occurs, via the futex() system call introduced in kernel 2.6. This reduces overhead for uncontested locks, making it a foundation for higher-level synchronization like pthread mutexes. Additionally, epoll provides scalable I/O event notification for handling multiple file descriptors, including those from pipes or sockets used in IPC, through epoll_create(), epoll_ctl() for registration, and epoll_wait() for event notification, offering O(1) complexity for large numbers of descriptors compared to POSIX select() or poll(), as sketched in the example below.

In macOS and BSD variants, IPC diverges with Mach influences; macOS's XNU kernel implements IPC ports as kernel-managed message queues for task-to-task communication, where ports are created via mach_port_allocate() and messages sent using mach_msg(), supporting complex data types like out-of-line memory and port rights. XNU kernel messaging facilities handle the IPC traps and queuing, integrating with the BSD layer for hybrid use.
Administrative tools like ipcs and ipcmk aid in managing System V IPC resources on such systems. The ipcs utility displays information on active shared memory segments, message queues, and semaphore sets, with options to filter by type, user, or ID, providing details such as keys, owners, and usage statistics. Conversely, ipcmk creates these resources from the command line, specifying sizes for shared memory segments (-M), message queues (-Q), or semaphore array counts (-S), generating keys for subsequent use in applications.
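A minimal sketch of the epoll interface mentioned above, monitoring the read end of a pipe; the timeout and buffer sizes are illustrative.

```c
/* Sketch: Linux epoll monitoring the read end of a pipe, illustrating the
 * epoll_create1()/epoll_ctl()/epoll_wait() sequence. */
#include <stdio.h>
#include <sys/epoll.h>
#include <unistd.h>

int main(void) {
    int fds[2];
    if (pipe(fds) == -1) { perror("pipe"); return 1; }

    int epfd = epoll_create1(0);
    struct epoll_event ev = { .events = EPOLLIN, .data.fd = fds[0] };
    epoll_ctl(epfd, EPOLL_CTL_ADD, fds[0], &ev);   /* register read end */

    write(fds[1], "ping", 4);                      /* make the fd readable */

    struct epoll_event events[8];
    int n = epoll_wait(epfd, events, 8, 1000);     /* wait up to 1 second */
    for (int i = 0; i < n; i++) {
        char buf[16];
        ssize_t len = read(events[i].data.fd, buf, sizeof(buf));
        printf("fd %d readable, read %zd bytes\n", events[i].data.fd, len);
    }

    close(epfd);
    close(fds[0]);
    close(fds[1]);
    return 0;
}
```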

Windows and Other Platforms

In Windows, inter-process communication (IPC) relies on several native APIs that facilitate data exchange and synchronization between processes without adhering to POSIX standards. Named pipes provide a reliable, two-way communication channel between server and client processes, where the server creates an instance using the CreateNamedPipe function, and clients connect via CreateFile, enabling stream-oriented data transfer similar to Unix pipes but with built-in support for remote communication across machines. Mailslots offer a lightweight, one-way broadcast mechanism for sending short messages (up to 64 KB per message, or 424 bytes for domain-wide broadcasts), where a server process creates a mailslot with CreateMailslot and clients write to it using WriteFile, appending messages in a queue until read by the server. Shared memory is implemented through file mapping objects created with CreateFileMapping (using INVALID_HANDLE_VALUE for pagefile-backed memory), allowing multiple processes to map the same physical memory region via MapViewOfFile for efficient, high-speed data sharing, though it requires additional synchronization to avoid race conditions.

Windows provides robust synchronization primitives in the Win32 API to coordinate access in multi-process scenarios. Critical sections, initialized with InitializeCriticalSection, offer lightweight mutual exclusion within a process but are not suitable for cross-process use; instead, mutexes created via CreateMutex ensure exclusive access across processes by signaling ownership states. Events, managed through CreateEvent, allow processes to signal completion or state changes, supporting both manual-reset (affecting all waiters) and auto-reset (affecting one waiter) modes to coordinate asynchronous operations between unrelated processes. These mechanisms differ from Unix equivalents like semaphores by emphasizing kernel-managed handles with security descriptors for access control.

On other non-Unix platforms, such as real-time operating systems (RTOS), IPC often prioritizes determinism and low latency. In VxWorks, an embedded RTOS developed by Wind River, message queues serve as the primary inter-task communication method, where tasks create queues with msgQCreate, send messages via msgQSend, and receive them with msgQReceive, supporting prioritized, blocking, and non-blocking operations to exchange variable-length data efficiently in resource-constrained environments. Android, built on a Linux kernel but with custom IPC for its application ecosystem, employs the Binder mechanism to enforce process isolation; it uses a kernel driver to route transactions between client proxies and service nodes, enabling secure method invocations across app boundaries while minimizing overhead through parceling of data and one-way references. On Windows, the Component Object Model (COM) enables communication across process or machine boundaries, where clients invoke methods on server objects via proxies and stubs, leveraging RPC for marshaling and unmarshaling parameters to abstract IPC details.
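A minimal sketch of the named pipe server pattern described above, assuming the Win32 API; the pipe name and buffer sizes are illustrative, and a client would connect by opening \\.\pipe\demo_pipe with CreateFile.

```c
/* Sketch: Win32 named pipe server using CreateNamedPipe, ConnectNamedPipe,
 * and ReadFile. */
#include <stdio.h>
#include <windows.h>

int main(void) {
    HANDLE pipe = CreateNamedPipeA(
        "\\\\.\\pipe\\demo_pipe",          /* pipe name              */
        PIPE_ACCESS_DUPLEX,                /* two-way communication  */
        PIPE_TYPE_MESSAGE | PIPE_READMODE_MESSAGE | PIPE_WAIT,
        1,                                 /* one instance           */
        512, 512,                          /* out/in buffer sizes    */
        0,                                 /* default timeout        */
        NULL);                             /* default security       */
    if (pipe == INVALID_HANDLE_VALUE) {
        fprintf(stderr, "CreateNamedPipe failed: %lu\n", GetLastError());
        return 1;
    }

    /* Block until a client connects by opening the same pipe name. */
    if (ConnectNamedPipe(pipe, NULL) || GetLastError() == ERROR_PIPE_CONNECTED) {
        char buf[512];
        DWORD bytes_read = 0;
        if (ReadFile(pipe, buf, sizeof(buf) - 1, &bytes_read, NULL)) {
            buf[bytes_read] = '\0';
            printf("received: %s\n", buf);
        }
    }

    CloseHandle(pipe);
    return 0;
}
```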

References

  1. [1]
    [PDF] Interprocess Communication In UNIX and Windows NT - Brown CS
    3. 1. Introduction. Interprocess communication (IPC) refers to the coordination of activities among cooperating processes. A common example of this need is ...
  2. [2]
    [PDF] An Introductory 4.4BSD Interprocess Communication Tutorial
    At first, however, IPC was limited to processes communicating within a single machine. With Berkeley UNIX. 4.2BSD this expanded to include IPC between machines.
  3. [3]
    The Interprocess Communication (IPC) Overview - IBM
    Jun 17, 2018 · Interprocess communication (IPC) is used for programs to communicate data to each other and to synchronize their activities.How To See What Is Currently... · Shared Memory · Understanding Memory Mapping
  4. [4]
    [PDF] Evaluation of Inter-Process Communication Mechanisms - cs.wisc.edu
    The goal of the study is to empirically compute the latency and throughput of popular IPC mechanisms. This requires a reliable and accurate way of measuring the ...
  5. [5]
    [PDF] An Advanced 4.4BSD Interprocess Communication Tutorial
    The BSD IPC allows processes to rendezvous in many ways. Processes may rendezvous through a UNIX file system-like name space (a space where all names are path.<|control11|><|separator|>
  6. [6]
    Interprocess communications - Win32 apps | Microsoft Learn
    Feb 13, 2024 · The Windows operating system provides mechanisms for facilitating communications and data sharing between applications.Using the Clipboard for IPC · Using COM for IPC
  7. [7]
    Chapter 5 Interprocess Communication Mechanisms
    Linux supports three types of interprocess communication mechanisms that first appeared in Unix TM System V (1983). These are message queues, semaphores and ...
  8. [8]
    Chapter 7 Interprocess Communication (System Interface Guide)
    Sockets provide point-to-point, two-way communication between two processes. Sockets are very versatile and are a basic component of interprocess and ...System V Ipc · System V Messages · System V Semaphores<|control11|><|separator|>
  9. [9]
    [PDF] Chapter 3: Processes - Operating System Concepts
    Cooperating processes need interprocess communication (IPC). ▫. Two models of IPC. ○. Shared memory. ○. Message passing. Page 27. 3.27. Silberschatz, Galvin and ...
  10. [10]
    Kernel 2: Process isolation - CS 61 2017
    Modern OSes isolate process memory from kernel memory (“kernel isolation”), and also isolate different processes' memory from each other. Each process has ...
  11. [11]
    [PDF] Interprocess Communication (IPC) - CS@Cornell
    • Message passing may be either blocking or non- blocking. • Blocking is considered synchronous. – Blocking send has the sender block until the message is.
  12. [12]
    [PDF] On Interprocess Communication - Leslie Lamport
    Dec 25, 1985 · To motivate the formalism, let us consider the question of atomicity. Most treatments of concurrent processing assume the existence of atomic.
  13. [13]
    [PDF] Module VII Process Management: Coordination And ... - CS@Purdue
    Inter-Process Communication (Message Passing) ... – Producer / consumer interaction. – Mutual ... d Important principle: No operating system function ...
  14. [14]
    [PDF] Compatible Time-Sharing System (1961-1973) Fiftieth Anniversary ...
    Jun 1, 2011 · In the rest of this commemorative brochure, we sketch the history of CTSS, include retrospective memories from several of the key CTSS personnel ...
  15. [15]
    [PDF] the multics interprocess communication facility - People | MIT CSAIL
    This paper describes the Inter-Process Communication (IPC) facility which was developed for the Multics (Multiplexed Information and Computing. System) system ...
  16. [16]
    Transcending POSIX: The End of an Era? - USENIX
    Sep 8, 2022 · Early versions of Unix supported signals and pipes [2]. Signals enabled programmers to programmatically handle hardware faults, and this ...<|separator|>
  17. [17]
    https://doc.cat-v.org/unix/pipes/
    No information is available for this page. · Learn whyMissing: history | Show results with:history
  18. [18]
    An Advanced 4.4BSD Interprocess Communication Tutorial
    This document provides an introduction to the interprocess communication facilities included in the 4.4BSD release of the UNIX system.Missing: history | Show results with:history
  19. [19]
    RFC 1057: RPC: Remote Procedure Call Protocol specification
    This document specifies version two of the message protocol used in Sun's Remote Procedure Call (RPC) package.
  20. [20]
    CORBA® History | Object Management Group
    You will find the following specifications here. CORBA 1.0 (October 1991) Included the CORBA Object model, Interface Definition Language™ (IDL™), and the core ...
  21. [21]
    IEEE 1003.1-1988 - IEEE SA
    IEEE 1003.1-1988 is the IEEE Standard Portable Operating System Interface for Computer Environments, now superseded by 1003.1-1990.
  22. [22]
    [PDF] The Architectural Implications of Cloud Microservices
    Abstract— Cloud services have recently undergone a shift from monolithic applications to microservices, with hundreds or thousands of.Missing: evolution post-
  23. [23]
    Inter-Process Communication in a Microservices Architecture | F5
    Jul 24, 2015 · In a microservices application, the services need an inter-process communication (IPC). Later on we will look at specific IPC technologies ...Missing: 2010 | Show results with:2010
  24. [24]
    [PDF] lmbench: Portable Tools for Performance Analysis - USENIX
    For example, context switches require saving the current process state and loading the state of the next process. However, memory latency is rarely accurately ...
  25. [25]
    How long time does a context switch take in Linux (ubuntu 18.04)
    Mar 15, 2019 · About 1.2 microseconds which is about a thousand Cycles.
  26. [26]
    [PDF] PipeSwitch: Fast Pipelined Context Switching for Deep Learning ...
    Nov 6, 2020 · Second, IPC optimization is important, which reduces the latency by 16–48 ms. Without IPC optimization, the latency is even higher than no ...
  27. [27]
    Taming the Killer Microsecond - ACM Digital Library
    context switch has minimal performance impact. At 10 threads and 1μs device latency, the performance is similar to running the application with data in DRAM ...
  28. [28]
    A Pressure-Aware Policy for Contention Minimization on Multicore ...
    May 25, 2022 · Our approach focuses on minimizing contention on both the main-memory bandwidth and the LLC by monitoring the pressure that each application ...
  29. [29]
    Fast interprocess communication revisited - LWN.net
    Nov 9, 2011 · "Zero-copy" is sometime seen as the holy-grail and, while it is usually impractical to reach that, single-copy can be attained; three of our ...<|control11|><|separator|>
  30. [30]
    Efficient data transfer through zero copy - IBM Developer
    Jan 26, 2022 · Zero copy greatly improves application performance and reduces the number of context switches between kernel and user mode. The Java class ...
  31. [31]
    CWE-367: Time-of-check Time-of-use (TOCTOU) Race Condition
    By tricking the program into performing an operation that would otherwise be impermissible, the attacker has gained elevated privileges. This type of ...
  32. [32]
    Issues in IPC By Message Passing in Distributed System
    Mar 17, 2022 · This can happen if the prospective sending procedure fails or if the expected message is lost on the network owing to a communication breakdown.
  33. [33]
    Race Condition Vulnerability - GeeksforGeeks
    Oct 30, 2025 · Without proper synchronization, the system might allow both transactions to go through, even if the balance is only enough for one, leaving ...
  34. [34]
    Access to IPC objects - IBM
    To access an IPC object, a process must pass DAC, MIC, and MAC access checks. DAC access checks are based on the mode (owner, group, or world) of the object.
  35. [35]
    Interprocess Communication in Distributed Systems - GeeksforGeeks
    Jul 11, 2025 · Ensuring reliable and consistent communication between distributed components is challenging. ... race conditions and ensure data integrity.
  36. [36]
    [PDF] Buffer Overflows: Attacks and Defenses for the Vulnerability of the ...
    Buffer overflows have been the most common form of security vulnerability for the last ten years. Moreover, buffer overflow vulnerabilities dominate the area ...
  37. [37]
    System V Shared Memory - Oracle Solaris
    A process creates a shared memory segment using shmget (). This call is also used to get the ID of an existing shared segment. The creating process sets the ...
  38. [38]
    shmget
    The `shmget()` function gets an XSI shared memory segment, returning a shared memory identifier associated with a key. It returns -1 on error.
  39. [39]
    shmget(2) - Linux manual page - man7.org
    shmget() returns the identifier of the System V shared memory segment associated with the value of the argument key. It may be used either to obtain the ...
  40. [40]
    mmap
    The mmap() function shall establish a mapping between an address space of a process and a memory object. The mmap() function shall be supported for the ...
  41. [41]
    mmap(2) - Linux manual page - man7.org
    Memory mapped by mmap() is preserved across fork(2), with the same attributes. A file is mapped in multiples of the page size.
  42. [42]
    3.2. IPC Models — Computer Systems Fundamentals
    Shared memory also has another disadvantage that message passing avoids, which is the problem of synchronization. If both processes try to write to the shared ...
  43. [43]
    3.7.1. POSIX Shared Memory - Computer Science - JMU
    The major disadvantage of shared memory is that the processes must take extra precaution to synchronize access to the region.
  44. [44]
    POSIX Shared Memory Example
    Jan 31, 2014 · The producer writes to a newly-created shared memory segment, while the consumer reads from it and then removes it. Don't confuse this simple ...
  45. [45]
    Lab 7- System V IPC: message queues, semaphores, shared memory
    EXERCISE 2: The Consumer-Producer Problem using Shared Memory. Problem Statement. Write 2 programs, producer.c implementing a producer and consumer.c ...
  46. [46]
    How are Unix pipes implemented? - Abhijit Menon-Sen
    Mar 23, 2020 · TUHS confirms that Third edition Unix from February 1973 was the first version to include pipes: The third edition of Unix was the last version ...
  47. [47]
    Chapter 6 Interprocess Communication (Programming Interfaces ...
    The pipe connects the resulting processes when the parent process forks. A pipe has no existence in any file name space, so it is said to be anonymous. A pipe ...
  48. [48]
  49. [49]
    fifo(7) - Linux manual page - man7.org
    Under Linux, opening a FIFO for read and write will succeed both in blocking and nonblocking mode. POSIX leaves this behavior undefined. This can be used to ...
  50. [50]
  51. [51]
    unix(7) - Linux manual page - man7.org
    Valid socket types in the UNIX domain are: SOCK_STREAM, for a stream-oriented socket; SOCK_DGRAM, for a datagram-oriented socket that preserves message ...
  52. [52]
  53. [53]
    Do UNIX Domain Sockets Overflow? - Unix & Linux Stack Exchange
    May 15, 2016 · Yes, all types of Unix domain sockets (datagram, stream and sequenced-packet) are reliable, in-order delivery mechanisms. That is, they don't drop data.
  54. [54]
    Pipelines (Bash Reference Manual)
    Summary of the pipe operator | for command chaining.
  55. [55]
  56. [56]
  57. [57]
  58. [58]
    sysvipc(7) - Linux manual page - man7.org
    The System V message queue API consists of the following system calls: msgget(2) Create a new message queue or obtain the ID of an existing message queue.
  59. [59]
    mq_open
    The mq_open() function shall establish the connection between a process and a message queue with a message queue descriptor. It shall create an open message ...
  60. [60]
    mq_send
    The mq_send() function shall add the message pointed to by the argument msg_ptr to the message queue specified by mqdes. The msg_len argument specifies the ...
  61. [61]
    mq_overview(7) - Linux manual page - man7.org
    Two processes can operate on the same queue by passing the same name to mq_open(3). Messages are transferred to and from a queue using mq_send(3) and mq_receive ...
  62. [62]
    signal(7) - Linux manual page - man7.org
    The signal being delivered is also added to the signal mask, unless SA_NODEFER was specified when registering the handler. These signals are thus blocked while ...
  63. [63]
    [PDF] Co-operating sequential processes - Pure
    In other words: when a group of cooperating sequential processes have to be constructed and the overall behaviour of these processes combined has to satisfy ...
  64. [64]
    [PDF] Semaphores - cs.wisc.edu
    Indeed, Dijkstra and colleagues invented the semaphore as a single primitive for all things related to synchronization; as you will see, one can use semaphores ...
  65. [65]
    System V Semaphores (Programming Interfaces Guide)
    Semaphores enable processes to query or alter status information. They are often used to monitor and control the availability of system resources such as ...
  66. [66]
    Monitors: an operating system structuring concept
    This paper develops Brinch-Hansen's concept of a monitor as a method of structuring an operating system. It introduces a form of synchronization, ...
  67. [67]
    Synchronized Methods - Essential Java Classes
    Synchronized methods enable a simple strategy for preventing thread interference and memory consistency errors: if an object is visible to more than one thread, ...
  68. [68]
    [PDF] Deadlocks: Detection & Avoidance - Cornell: Computer Science
    Example 1: Semaphores. semaphore: file_mutex = 1 /* protects file resource ... Single lock for entire system? • Impose partial ordering on resources ...
  69. [69]
    [PDF] Lecture 12: Deadlock
    Deadlock Prevention. Another potential technique for preventing deadlock: It is also possible to use time-out values to prevent deadlock. P. = (printer.lock ...
  70. [70]
    pthread_mutex_lock
    `pthread_mutex_lock()` locks a mutex; if already locked, the thread blocks until available. It returns the mutex in a locked state with the calling thread as ...
  71. [71]
    pthread_cond_wait
    The pthread_cond_wait() and pthread_cond_timedwait() functions are used to block on a condition variable. They are called with mutex locked by the calling ...
  72. [72]
    pthread_cond_signal
    The pthread_cond_signal() call unblocks at least one of the threads that are blocked on the specified condition variable cond (if any threads are blocked on ...
  73. [73]
    Priority Inheritance Protocols: An Approach to Real-Time ...
    An investigation is conducted of two protocols belonging to the priority inheritance protocols class; the two are called the basic priority inheritance protocol ...
  74. [74]
    [PDF] Priority inheritance protocols: an approach to real-time synchronization
    In this paper, we investigate the synchronization problem in the context of priority-driven preemptive scheduling, an approach used in many real-time systems.
  75. [75]
    [PDF] Lecture #9: Monitors, Condition Variables, and Readers-Writers
    Problem 2 -- Semaphores have “hidden” internal state. Problem 3 – careful interleaving of “synchronization” and “mutex” semaphores.
  76. [76]
    Whither Sockets? - Communications of the ACM
    Jun 1, 2009 · Developed by the Computer Systems Research Group at the University of California at Berkeley, the sockets API was first released as part of the ...
  77. [77]
    2. General Information
    Support for UNIX domain sockets is mandatory. UNIX domain sockets provide process-to-process communication in a single system. Headers. The symbolic constant ...
  78. [78]
    [PDF] Implementing Remote Procedure Calls
    Implementing Remote Procedure Calls. Andrew Birrell & Bruce Nelson 1984. • Andrew Birrell known for Grapevine (1981), with first distributed naming system ...
  79. [79]
    [PDF] Implementing remote procedure calls - Semantic Scholar
    Implementing remote procedure calls · A. Birrell, B. Nelson · Published in ACM Transactions on Computer… 1 February 1984 · Computer Science.
  80. [80]
    RPC: Remote Procedure Call Protocol Specification Version 2
    This document specifies version two of the message protocol used in ONC Remote Procedure Call (RPC). The message protocol is specified with the eXternal Data ...
  81. [81]
    [PDF] 05-rpc.pdf - Carnegie Mellon University
    • At-most-once. • Use a sequence # to ensure idempotency against network retransmissions. • and remember it at the server. At-least-once versus at-most-once?
  82. [82]
    Digital Library: Communications of the ACM
    CORBA implements distribution by building proxy objects ... ORB technology provides object location transparency and hides the details of marshaling and ...
  83. [83]
    [PDF] CORBA Scripting Language Specification - Object Management Group
    Feb 5, 2003 · CORBA 1.1 was introduced in 1991 by Object Management Group. (OMG) and defined the Interface Definition Language (IDL) and the Application Pro-.
  84. [84]
  85. [85]
    Distributed Component Object Model (DCOM) Remote Protocol
    Oct 11, 2022 · [MS-DCOM]: Distributed Component Object Model (DCOM) Remote Protocol ...
  86. [86]
    What Ever Happened to…and Other Tech History
    Apr 29, 2025 · Thus, DCOM (Distributed Component Object Model, originally “Network OLE”) was born. Microsoft saw DCOM as the way all developers would build ...
  87. [87]
    Java Remote Method Invocation: 3 - RMI System Overview
    The distributed garbage collection algorithm interacts with the local Java virtual machine's garbage collector in the usual ways by holding normal or weak ...
  88. [88]
    futex(2) - Linux manual page - man7.org
    A user-space program employs the futex() system call only when it is likely that the program has to block for a longer time until the condition becomes true.
  89. [89]
    futex2 - The Linux Kernel documentation
    futex, or fast user mutex, is a set of syscalls to allow userspace to create performant synchronization mechanisms, such as mutexes, semaphores and ...
  90. [90]
    epoll(7) - Linux manual page - man7.org
    The epoll API performs a similar task to poll(2): monitoring multiple file descriptors to see if I/O is possible on any of them. The epoll API can be used ...
  91. [91]
  92. [92]
  93. [93]
    ipcmk(1) - Linux manual page - man7.org
    DESCRIPTION top​​ ipcmk allows you to create POSIX and System V inter-process communication (IPC) objects: shared memory segments, message queues, and semaphore ...