Sun RPC
Sun RPC, also known as Open Network Computing (ONC) Remote Procedure Call, is a protocol developed by Sun Microsystems in the early 1980s that enables a program on one computer to execute a subroutine or procedure on a remote server as if it were a local function call, abstracting the complexities of network communication.[1][2] It operates on a client-server model where the client sends a call message containing the program number, version number, procedure number, credentials, and arguments, and the server responds with a reply message indicating success or failure, using eXternal Data Representation (XDR) for platform-independent data serialization and deserialization.[3][1]
The protocol emerged as part of Sun's ONC architecture to facilitate distributed computing across heterogeneous systems, initially supporting transport-specific implementations before evolving to transport-independent RPC (TI-RPC) beginning with SunOS 5.0 (Solaris 2.0), allowing flexibility over protocols such as UDP and TCP without requiring code recompilation.[2][3] Version 2 of the protocol, the most widely deployed, was formally specified in RFC 1057 in 1988 and later updated in RFC 5531 in 2009, with administration transitioning from Sun Microsystems to the Internet Assigned Numbers Authority (IANA).[1][3] It draws inspiration from earlier RPC models, such as Xerox's Courier protocols, and has been integral to services like the Network File System (NFS), providing a foundation for remote file access and other networked applications.[3][1]
Key features of Sun RPC include support for multiple authentication flavors—such as AUTH_NONE for unauthenticated calls, AUTH_SYS for Unix-style credentials, and RPCSEC_GSS for secure, Kerberos-based protection—along with at-least-once call semantics over UDP (with idempotent procedures to handle duplicates) and at-most-once semantics over TCP to ensure reliable delivery.[2][3] The protocol uses rpcbind (formerly portmapper) on port 111 to dynamically map service program numbers to network ports, enabling service discovery without hardcoded addresses, and includes tools like rpcgen for automatically generating client stubs and server skeletons from interface definition files written in XDR notation.[2][1] Additional capabilities encompass broadcasting for one-way messaging, batching of calls, and multithreading support in modern implementations, making it suitable for concurrent, scalable distributed systems.[2]
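The rpcgen workflow mentioned above can be illustrated with a hypothetical interface definition file. The service, its names, and the program number below are invented for this sketch (the program number is drawn from the user-reserved range):

```
/* add.x -- hypothetical rpcgen input: a one-procedure addition service */
struct add_args {
    int a;
    int b;
};

program ADD_PROG {
    version ADD_VERS {
        int ADD(add_args) = 1;    /* procedure number 1 */
    } = 1;                        /* version number 1 */
} = 0x20000001;                   /* program number in the user range */
```

Running `rpcgen add.x` would generate the client stub, server skeleton, and XDR marshaling routines for this interface.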
History
Development at Sun Microsystems
Sun Microsystems was founded on February 24, 1982, by Vinod Khosla, Andy Bechtolsheim, Bill Joy, and Scott McNealy, with the company name derived from Stanford University Network (SUN).[4] From its inception, the company emphasized UNIX-based workstations designed for networked environments, incorporating TCP/IP networking as a core feature to support collaborative computing in academic and research settings.[5] This focus on distributed systems laid the groundwork for innovations in remote interprocess communication, addressing the limitations of early UNIX environments where resource sharing across machines required low-level programming.[6]
Sun RPC, also known as ONC RPC, originated in the early 1980s at Sun Microsystems, modeled after Xerox's Courier RPC protocols to facilitate standardized communication in heterogeneous networks.[7] Development began in 1984 as part of Sun's efforts to enable transparent remote procedure execution in UNIX systems, motivated by the need to simplify distributed computing for file sharing and resource access without exposing programmers to socket-level details.[8] Engineers at Sun aimed to create a client-server model that allowed workstations to operate as a unified system, providing individual computational power alongside shared resources and centralized administration.[6]
The protocol was first integrated into SunOS 2.0 in May 1985, with SunOS 3.0 in February 1986 providing further enhancements and coinciding with the broader adoption on the Sun-3 workstation series launched in 1985, serving as the underpinning for the Network File System (NFS).[9][10] By the mid-1980s, Sun provided the first commercial RPC libraries and compilers, marking a key milestone in making remote procedure calls accessible for enterprise networked applications.[6] This initial implementation emphasized portability across operating systems and transport protocols like UDP/IP and TCP/IP, establishing Sun RPC as a cornerstone of Sun's networking ecosystem.[10]
Standardization and Evolution
In 1986, Sun Microsystems announced the Open Network Computing (ONC) initiative, releasing RPC as part of a suite of freely available network technologies designed to promote interoperability across diverse systems.[11] This move transitioned RPC from an internal tool to an open framework, enabling widespread adoption in Unix-like environments.
The Internet Engineering Task Force (IETF) formalized Sun RPC through a series of Request for Comments (RFC) documents. RFC 1014, published in June 1987, specified the External Data Representation (XDR) standard integral to RPC data encoding.[12] This was followed by RFC 1050 in April 1988, providing an initial protocol specification, which was quickly updated by RFC 1057 in June 1988 to define RPC version 2 as the core message protocol.[13] The modern reference remains RFC 5531, published in May 2009, which consolidates ONC RPC version 2 as an Internet Standards Track protocol, incorporating clarifications on authentication and program numbering managed by the Internet Assigned Numbers Authority (IANA).[14]
RPC evolved from version 1 to version 2 with enhancements focused on authentication and binding mechanisms. Version 2 introduced support for multiple authentication protocols, including AUTH_UNIX for basic Unix-style credentials and AUTH_DES for encrypted authentication using DES and Diffie-Hellman key exchange, addressing limitations in secure credential transmission.[13] Binding improvements relied on the Port Mapper service, introduced in the late 1980s on port 111 to dynamically map RPC program numbers to transport addresses, facilitating UDP and TCP operations.[13] Version 1 details are largely omitted in contemporary implementations due to these advancements.[14]
In the 1990s, Sun introduced rpcbind as a more secure replacement for the Port Mapper, supporting transport-independent binding via versions 3 and 4 to mitigate vulnerabilities in earlier dynamic port allocation.[15] This evolution extended to Transport-Independent RPC (TI-RPC), which generalized binding across protocols like UDP, TCP, and later IPv6, ensuring broader network compatibility.[16]
Sun's acquisition by Oracle Corporation, completed in January 2010 following a 2009 agreement, shifted stewardship of ONC RPC from Sun to Oracle, prompting community efforts to maintain open implementations.[17] Prior to the acquisition, Sun relicensed the RPC code under permissive BSD-like terms in 2009 to sustain open-source development. Community forks, such as libtirpc, emerged to provide IPv6 support and independence from proprietary influences, building on TI-RPC foundations for modern deployments.[18]
Overview
Core Principles
Sun RPC, or Sun Remote Procedure Call, is designed to enable client programs to execute procedures on remote servers in a manner that mimics local procedure calls, thereby abstracting the complexities of network communication. This core concept of the Remote Procedure Call (RPC) paradigm allows developers to invoke remote services without directly managing the underlying transport mechanisms, such as message serialization, transmission, or error recovery. By providing this abstraction, Sun RPC aims to promote code portability and simplify distributed application development across heterogeneous systems.[1]
Central to achieving this seamless experience are the levels of transparency facilitated by client and server stubs, which are automatically generated from interface definitions. Procedure transparency hides the location of the invoked procedure, making remote execution appear local to the caller. Parameter transparency ensures that arguments and return values are passed and received without the programmer needing to handle data formatting or endianness differences, relying on a standardized encoding scheme. Transport transparency further abstracts the network protocol details, allowing the same RPC interface to operate over different transports without code modifications. These stubs encapsulate the RPC protocol logic, including call invocation and reply handling, thus insulating applications from distribution intricacies.[19][1]
Sun RPC maintains transport independence by supporting both UDP and TCP, with UDP serving as the default for its performance advantages in low-latency, connectionless environments, while TCP provides reliable, ordered delivery for scenarios requiring robustness against packet loss. Binding protocols, as defined in RFC 1833, facilitate this flexibility by mapping service endpoints to universal addresses, enabling dynamic discovery and connection establishment irrespective of the underlying transport. This design choice underscores the protocol's adaptability to varying network conditions without imposing transport-specific constraints on service definitions.[20][1]
To ensure reliability amid network uncertainties, Sun RPC uses at-least-once semantics over UDP (relying on idempotent procedures for handling potential duplicates) and at-most-once semantics over TCP, leveraging unique transaction identifiers (XIDs) in each message to aid in duplicate detection. Upon a call, the client sends a request and awaits a reply; if a timeout occurs, retransmissions use the same XID, allowing servers to detect duplicates where state is maintained, thus minimizing side effects from transient failures like packet loss or server unavailability.[1]
Services in Sun RPC are uniquely identified through a structured numbering system: a 32-bit program number designates the service (with Sun reserving 0x00000000 to 0x1fffffff for defined programs and users reserving 0x20000000 to 0x3fffffff for site-specific use), a version number tracks interface evolution, and a procedure number specifies the exact operation within that version. This hierarchical identification enables precise routing and versioning, supporting multiple concurrent services on the same host while maintaining backward compatibility.[1]
Key Components
Sun RPC relies on several core architectural elements to facilitate remote procedure calls between clients and servers, enabling location transparency in distributed computing environments. These components work together to abstract the complexities of network communication, allowing programmers to invoke remote procedures as if they were local function calls.[1]
Client and server stubs form the foundational interface layer in Sun RPC. Client stubs are automatically generated pieces of code that marshal procedure arguments into network messages and unmarshal replies upon receipt, effectively hiding the details of data transmission from the application developer. Similarly, server stubs receive incoming call messages, unmarshal the arguments to invoke the corresponding local procedure, and then marshal the results back to the client. This stub mechanism ensures that the programmer focuses on procedure logic rather than protocol specifics.[21]
The runtime library provides essential support functions for managing the RPC lifecycle on both client and server sides. On the client, functions such as clnt_call initiate remote procedure invocations by handling message transmission and awaiting responses. For servers, svc_run establishes a listening loop to accept and dispatch incoming calls to the appropriate stubs. These library routines also manage error reporting, ensuring robust handling of communication failures without burdening application code.[21]
Services in Sun RPC register themselves using unique program numbers and version identifiers to enable discovery and invocation. This registration occurs with a service registry, typically via the portmapper, allowing servers to claim dynamic ports beyond well-known assignments. Clients then query this registry to bind to services, resolving program details to actual transport endpoints for communication.[22]
Authentication in Sun RPC is handled through flavor-specific mechanisms integrated into the call protocol. The AUTH_NULL flavor provides no authentication, transmitting an empty credential and verifier for unauthenticated operations. In contrast, AUTH_UNIX employs Unix-style credentials, including user ID, group ID, and supplementary groups, to verify the caller's identity on the server side. These mechanisms support varying levels of security without mandating complex setups.[23]
Protocol Specification
Sun RPC messages are encoded as binary sequences using the External Data Representation (XDR) standard, ensuring platform-independent serialization of data units. Each message begins with a 32-bit transaction identifier (XID), which allows the client to match replies to corresponding calls, followed by a 32-bit message type discriminator that distinguishes between call and reply messages. The two primary message types are CALL (value 0), used by clients to invoke remote procedures, and REPLY (value 1), used by servers to respond.[24]
The CALL message structure, defined in XDR as struct call_body, includes several fixed fields after the common header: a 32-bit RPC version number, which must be 2 for compatibility; a 32-bit program number identifying the remote service; a 32-bit version number for that program; and a 32-bit procedure number specifying the operation within the program. Following these are authentication-related fields: an opaque credential structure for client authentication and an opaque verifier to validate the credential. The message concludes with procedure-specific arguments, serialized according to the program's XDR definition. For example, the XDR specification is:

```
struct call_body {
    unsigned int rpcvers;   /* RPC version: must be 2 */
    unsigned int prog;      /* program number */
    unsigned int vers;      /* program version number */
    unsigned int proc;      /* procedure number */
    opaque_auth cred;       /* client credential */
    opaque_auth verf;       /* credential verifier */
    /* procedure-specific parameters follow */
};
```

([RFC 5531 §9.1](https://datatracker.ietf.org/doc/html/rfc5531#section-9.1))
The REPLY message mirrors the CALL in starting with the XID and message type (1), but then includes a 32-bit reply status indicating whether the message is accepted (MSG_ACCEPTED = 0) or denied (MSG_DENIED = 1). For accepted replies, an additional opaque verifier from the server precedes a 32-bit accept status, which can be SUCCESS (0) indicating normal execution with procedure-specific results appended; PROG_UNAVAIL (1) for unavailable programs; PROG_MISMATCH (2) including supported version range; PROC_UNAVAIL (3) for unavailable procedures; GARBAGE_ARGS (4) for invalid arguments; or SYSTEM_ERR (5) for system errors. Denied replies specify a reject status, either RPC_MISMATCH (0) with supported RPC version range or AUTH_ERROR (1) with an authentication failure code. The XDR for the reply body is:
```
union reply_body switch (reply_stat stat) {
case MSG_ACCEPTED:
    accepted_reply areply;
case MSG_DENIED:
    rejected_reply rreply;
};
```
Authentication in both call and reply messages employs opaque structures (`opaque_auth`), consisting of a 32-bit authentication flavor (e.g., AUTH_NONE = 0 or AUTH_SYS = 1) followed by up to 400 bytes of flavor-specific opaque data, which the RPC protocol treats as uninterpreted binary without parsing. This design allows flexibility for various authentication mechanisms while maintaining protocol simplicity.([RFC 5531 §9.3](https://datatracker.ietf.org/doc/html/rfc5531#section-9.3))
Over UDP, Sun RPC messages are transmitted as single datagrams, with typical sizes limited to 32 KB to align with common Network File System (NFS) transfer blocks and avoid excessive IP fragmentation; the protocol does not natively support RPC-level fragmentation or reassembly, relying instead on the underlying transport for any necessary packet splitting.([RFC 5531 §7](https://datatracker.ietf.org/doc/html/rfc5531#section-7))([Linux NFS FAQ](https://nfs.sourceforge.net/))
### Call Semantics and Error Handling
In Sun RPC, the execution semantics of remote procedure calls are designed to provide reliability in the face of network uncertainties, primarily supporting at-most-once and at-least-once guarantees rather than exactly-once semantics. The default at-most-once model ensures that if a reply is received by the client, the procedure has been executed no more than once on the server, achieved through the use of a transaction identifier (XID) that allows the server to detect and discard duplicate requests without re-execution. However, due to the inherent unreliability of networks (such as a server crash after execution but before sending a reply), exactly-once semantics cannot be guaranteed; instead, at-least-once semantics apply when no reply is received, meaning the procedure may execute multiple times if the client retries. This approach prioritizes avoiding unnecessary re-executions over absolute idempotency, making it suitable for operations where duplicates are tolerable or detectable at the application level.([RFC 1057 §4](https://datatracker.ietf.org/doc/html/rfc1057#section-4))
Retry mechanisms in Sun RPC are client-driven and tied to transport protocols, with no built-in RPC-layer retransmission logic. Over UDP, which is connectionless and unreliable, the client must implement timeouts and retransmit the entire call message upon expiration, using the same XID to enable server-side duplicate detection. The server, upon receiving a duplicate (matched via XID), simply resends the previous reply without re-executing the procedure, thus preserving at-most-once semantics. Timeouts are configurable by the client application, often employing exponential backoff (starting with a short initial interval, e.g. 0.5 seconds, and doubling on each retry) to balance responsiveness and network load. In contrast, TCP provides stream-oriented reliability, eliminating the need for client retries as the transport layer handles acknowledgments and retransmissions, though it introduces higher latency compared to UDP's low-overhead, datagram-based approach. This UDP preference supports performance-sensitive applications like file systems, where low latency outweighs occasional retries.([RFC 1057 §4](https://datatracker.ietf.org/doc/html/rfc1057#section-4))
Error handling in Sun RPC distinguishes between transport-level issues, protocol violations, and application-specific failures, processed through structured reply messages. System errors, such as `TIMEDOUT` (occurring when the client exhausts retries without a response), are managed on the client side outside the RPC protocol, often triggering application-level recovery.([RFC 1057 §4](https://datatracker.ietf.org/doc/html/rfc1057#section-4)) RPC-level errors are conveyed in the reply's `reply_stat` field: `MSG_ACCEPTED` indicates the call was received but may have failed due to issues like `PROG_UNAVAIL` (program not available), `PROG_MISMATCH` (version mismatch), `PROC_UNAVAIL` (procedure not available), or `GARBAGE_ARGS` (invalid arguments); `MSG_DENIED` signals deeper protocol problems, such as `RPC_MISMATCH` (incompatible RPC versions) or `AUTH_ERROR` (authentication failures with subcodes like `AUTH_BADCRED` for bad credentials). Upon receiving a reply, the client matches the XID to the original call and inspects the status: for accepted replies, it decodes the body for success or program-specific errors (e.g., user-defined fault codes); denied replies prompt immediate retry or failure without body processing. These mechanisms ensure robust error propagation while keeping the protocol lightweight.([RFC 1057 §8](https://datatracker.ietf.org/doc/html/rfc1057#section-8))
## Supporting Technologies
### External Data Representation (XDR)
External Data Representation (XDR) is a standard for specifying and encoding data in a machine-independent manner, designed to facilitate the exchange of data between diverse computer architectures in network protocols such as Sun RPC. Developed by Sun Microsystems in the early 1980s as part of their Open Network Computing (ONC) architecture, XDR serves as the presentation layer for RPC, ensuring that data structures defined in high-level languages like C can be serialized and deserialized portably without architecture-specific dependencies. It addresses challenges posed by varying byte orders (big-endian vs. little-endian), integer sizes, and floating-point representations across systems such as Sun workstations, VAX, and IBM PCs.([Sun XDR paper](https://docs-archive.freebsd.org/44doc/psd/24.xdr/paper.pdf))([RFC 1014](https://datatracker.ietf.org/doc/html/rfc1014))
The core purpose of XDR within Sun RPC is to provide a canonical data format that abstracts away low-level details, allowing RPC procedures to transmit arguments and results transparently over networks like UDP or TCP/IP. By enforcing a uniform big-endian byte order and 4-byte alignment for all data units, XDR simplifies cross-platform interoperability while minimizing conversion overhead on the sender or receiver side. This approach aligns with the ISO presentation layer model but incorporates implicit typing for efficiency in protocol specifications, such as those used in the Network File System (NFS). Sun Microsystems formalized XDR in 1987 through RFC 1014, which became the authoritative specification; later updates such as RFC 1832 refined the specification without altering the fundamental encoding rules.([RFC 1014](https://datatracker.ietf.org/doc/html/rfc1014))([Sun XDR paper](https://docs-archive.freebsd.org/44doc/psd/24.xdr/paper.pdf))
XDR defines a simple language for describing data structures, using an extended Backus-Naur Form (BNF) syntax to declare types, constants, and typedefs that mirror C-like constructs. Basic atomic types include 32-bit signed and unsigned integers, 64-bit hyper integers, IEEE 32-bit floats, 64-bit doubles, booleans (as 32-bit enums), and enumerations. Composite types encompass fixed and variable-length arrays, strings (length-prefixed byte sequences), opaque byte arrays, structures (sequences of typed fields), discriminated unions (type-safe variants based on an enum tag), and optional data (pointers to nullable structures). All data is encoded into a stream of network bytes, with variable-length elements padded to 4-byte boundaries using zero bytes for alignment, ensuring predictable serialization regardless of the host's native padding conventions. For example, the string "hello" encodes as a 4-byte length (5), followed by the 5 bytes of data and three null padding bytes, totaling 12 bytes.([RFC 1014](https://datatracker.ietf.org/doc/html/rfc1014))
In practice, XDR is implemented via a library of C routines that act as filters on input/output streams, supporting three operations: encoding (serializing to bytes), decoding (deserializing from bytes), and freeing (releasing dynamically allocated memory). These routines, included in Sun's `librpc` and accessible via `<rpc/xdr.h>`, handle type-specific conversions; for instance, `xdr_int()` marshals a 32-bit integer in big-endian format, while `xdr_union()` dispatches to the appropriate arm based on a discriminator value. Streams can be created for memory buffers (`xdrmem_create`), standard I/O (`xdrstdio_create`), or record-marked protocols like TCP (`xdrrec_create`), making XDR adaptable to RPC's transport needs. Programs using XDR compile against standard C libraries, with no special linking required on Sun systems. A representative example is serializing a simple structure containing a filename and an owner ID.
[Authentication](/page/Authentication) in both call and reply messages employs opaque structures (`opaque_auth`), consisting of a 32-bit authentication flavor (e.g., AUTH_NONE = 0 or AUTH_SYS = 1) followed by up to 400 bytes of flavor-specific opaque data as a byte array, which the RPC protocol treats as uninterpreted binary without parsing. This design allows flexibility for various [authentication](/page/Authentication) mechanisms while maintaining protocol simplicity.[](https://datatracker.ietf.org/doc/html/rfc5531#section-9.3)
Over UDP, Sun RPC messages are transmitted as single datagrams, with typical sizes limited to 32 KB to align with common network file system (NFS) transfer blocks and avoid excessive IP fragmentation; the protocol does not natively support RPC-level fragmentation or reassembly, relying instead on the underlying transport for any necessary packet splitting.[](https://datatracker.ietf.org/doc/html/rfc5531#section-7)[](https://nfs.sourceforge.net/)
### Call Semantics and Error Handling
In Sun RPC, the execution semantics of remote procedure calls are designed to provide reliability in the face of network uncertainties, primarily supporting at-most-once and at-least-once guarantees rather than exactly-once semantics.[](https://datatracker.ietf.org/doc/html/rfc1057#section-4) The default at-most-once model ensures that if a reply is received by the client, the procedure has been executed no more than once on the server, achieved through the use of a transaction identifier (XID) that allows the server to detect and discard duplicate requests without re-execution.[](https://datatracker.ietf.org/doc/html/rfc1057#section-4) However, due to the inherent unreliability of networks—such as potential server crashes after execution but before sending a reply—exactly-once semantics cannot be guaranteed; instead, at-least-once semantics apply when no reply is received, meaning the procedure may execute multiple times if the client retries.[](https://datatracker.ietf.org/doc/html/rfc1057#section-4) This approach prioritizes avoiding unnecessary re-executions over absolute idempotency, making it suitable for operations where duplicates are tolerable or detectable at the application level.[](https://datatracker.ietf.org/doc/html/rfc1057#section-4)
Retry mechanisms in Sun RPC are client-driven and tied to transport protocols, with no built-in RPC-layer retransmission logic.[](https://datatracker.ietf.org/doc/html/rfc1057#section-4) Over UDP, which is connectionless and unreliable, the client must implement timeouts and retransmit the entire call message upon expiration, using the same XID to enable server-side duplicate detection.[](https://datatracker.ietf.org/doc/html/rfc1057#section-4) The server, upon receiving a duplicate (matched via XID), simply resends the previous reply without re-executing the procedure, thus preserving at-most-once semantics.[](https://datatracker.ietf.org/doc/html/rfc1057#section-4) Timeouts are configurable by the client application, often employing exponential backoff—starting with a short initial interval (e.g., 0.5 seconds) and doubling on each retry—to balance responsiveness and network load.[](https://datatracker.ietf.org/doc/html/rfc1057#section-4) In contrast, TCP provides stream-oriented reliability, eliminating the need for client retries as the transport layer handles acknowledgments and retransmissions, though it introduces higher latency compared to UDP's low-overhead, datagram-based approach.[](https://datatracker.ietf.org/doc/html/rfc1057#section-4) This UDP preference supports performance-sensitive applications like file systems, where low latency outweighs occasional retries.[](https://datatracker.ietf.org/doc/html/rfc1057#section-4)
Error handling in Sun RPC distinguishes between transport-level issues, protocol violations, and application-specific failures, processed through structured reply messages.[](https://datatracker.ietf.org/doc/html/rfc1057#section-8) System errors, such as `TIMEDOUT` (occurring when the client exhausts retries without a response), are managed at the client side outside the RPC protocol, often triggering application-level recovery.[](https://datatracker.ietf.org/doc/html/rfc1057#section-4) RPC-level errors are conveyed in the reply's `reply_stat` field: `MSG_ACCEPTED` indicates the call was received but may have failed due to issues like `PROG_UNAVAIL` (program not available), `PROG_MISMATCH` (version mismatch), `PROC_UNAVAIL` (procedure not available), or `GARBAGE_ARGS` (invalid arguments); `MSG_DENIED` signals deeper protocol problems, such as `RPC_MISMATCH` (incompatible RPC versions) or `AUTH_ERROR` (authentication failures with subcodes like `AUTH_BADCRED` for bad credentials).[](https://datatracker.ietf.org/doc/html/rfc1057#section-8) Upon receiving a reply, the client matches the XID to the original call and inspects the status: for accepted replies, it decodes the body for success or program-specific errors (e.g., user-defined fault codes); denied replies prompt immediate retry or failure without body processing.[](https://datatracker.ietf.org/doc/html/rfc1057#section-8) These mechanisms ensure robust error propagation while keeping the protocol lightweight.[](https://datatracker.ietf.org/doc/html/rfc1057#section-8)
## Supporting Technologies
### External Data Representation (XDR)
[External Data Representation](/page/External_Data_Representation) (XDR) is a standard for specifying and encoding data in a machine-independent manner, designed to facilitate the exchange of data between diverse computer architectures in network protocols such as [Sun RPC](/page/Sun_RPC). Developed by [Sun Microsystems](/page/Sun_Microsystems) in the early [1980s](/page/1980s) as part of their Open Network Computing (ONC) architecture, XDR serves as the [presentation layer](/page/Presentation_layer) for RPC, ensuring that data structures defined in high-level languages like C can be serialized and deserialized portably without architecture-specific dependencies. It addresses challenges posed by varying byte orders (big-endian vs. little-endian), integer sizes, and floating-point representations across systems like Sun workstations, VAX, and [IBM](/page/IBM) PCs.[](https://docs-archive.freebsd.org/44doc/psd/24.xdr/paper.pdf)[](https://datatracker.ietf.org/doc/html/rfc1014)
The core purpose of XDR within Sun RPC is to provide a [canonical](/page/Canonical) data format that abstracts away low-level details, allowing RPC procedures to transmit arguments and results transparently over networks like [UDP](/page/UDP) or [TCP](/page/TCP)/[IP](/page/IP). By enforcing a uniform big-endian byte order and 4-byte alignment for all data units, XDR simplifies cross-platform interoperability while minimizing conversion overhead on the sender or receiver side. This approach aligns with the ISO [presentation layer](/page/Presentation_layer) model but incorporates implicit typing for efficiency in protocol specifications, such as those used in the Network File System (NFS). [Sun Microsystems](/page/Sun_Microsystems) formalized XDR in [1987](/page/1987) through [RFC](/page/RFC) 1014, with later updates such as [RFC](/page/RFC) 1832 and RFC 4506 revising the specification without altering the fundamental encoding rules.[](https://datatracker.ietf.org/doc/html/rfc1014)[](https://docs-archive.freebsd.org/44doc/psd/24.xdr/paper.pdf)
XDR defines a simple language for describing data structures, using an extended Backus-Naur Form (BNF) syntax to declare types, constants, and typedefs that mirror C-like constructs. Basic atomic types include 32-bit signed and unsigned integers, 64-bit hyper integers, IEEE 32-bit floats, 64-bit doubles, booleans (as 32-bit enums), and enumerations. Composite types encompass fixed and variable-length arrays, [strings](/page/String) (length-prefixed sequences of bytes), opaque byte arrays, structures (sequences of typed fields), discriminated unions (type-safe variants based on an enum tag), and optional data (pointers to nullable structures). All data is encoded into a stream of network bytes, with variable-length elements padded to 4-byte boundaries using zero bytes for alignment, ensuring predictable [serialization](/page/Serialization) regardless of the host's native padding conventions. For example, a [string](/page/String) "hello" encodes as a 4-byte length (5), followed by the 5 bytes of data and three null padding bytes, totaling 12 bytes.[](https://datatracker.ietf.org/doc/html/rfc1014)
In practice, XDR is implemented via a library of C routines that act as filters on input/output streams, supporting three operations: encoding (serializing to bytes), decoding (deserializing from bytes), and freeing (releasing dynamically allocated memory). These routines, included in Sun's `librpc` and accessible via `<rpc/xdr.h>`, handle type-specific conversions; for instance, `xdr_int()` marshals a 32-bit integer in big-endian format, while `xdr_union()` dispatches to the appropriate arm based on a discriminator value. Streams can be created for memory buffers (`xdrmem_create`), standard I/O (`xdrstdio_create`), or record-marked protocols like TCP (`xdrrec_create`), making XDR adaptable to RPC's transport needs. Programs using XDR compile against standard C libraries, with no special linking required on Sun systems. A representative example is serializing a simple structure like a filename and owner ID:
```c
struct example {
    string name<32>;   /* variable-length string, at most 32 bytes */
    int owner_id;
};
```
For a 32-character string, this would be encoded by calling `xdr_example()` on an XDR stream, producing a 40-byte output (4 bytes for length + 32 bytes for data + 4 bytes for the int).[](https://docs-archive.freebsd.org/44doc/psd/24.xdr/paper.pdf)[](https://datatracker.ietf.org/doc/html/rfc1014)
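To make the encoding rules concrete, here is a hand-rolled encoder for a structure of that shape. It is a sketch of the wire format only — a real program would use the generated `xdr_example()` filter and the library streams — and buffer bounds checking is omitted for brevity:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Append a 32-bit value in big-endian (XDR) byte order. */
static size_t xdr_put_u32(uint8_t *buf, size_t off, uint32_t v)
{
    buf[off]     = (uint8_t)(v >> 24);
    buf[off + 1] = (uint8_t)(v >> 16);
    buf[off + 2] = (uint8_t)(v >> 8);
    buf[off + 3] = (uint8_t)v;
    return off + 4;
}

/* Append a variable-length string: 4-byte length word, the bytes of
 * the string, then zero padding up to the next 4-byte boundary. */
static size_t xdr_put_string(uint8_t *buf, size_t off, const char *s)
{
    size_t len = strlen(s);
    off = xdr_put_u32(buf, off, (uint32_t)len);
    memcpy(buf + off, s, len);
    off += len;
    while (off % 4 != 0)          /* pad with zero bytes */
        buf[off++] = 0;
    return off;
}

/* Encode struct example { string name<32>; int owner_id; }. */
static size_t encode_example(uint8_t *buf, const char *name, int32_t owner_id)
{
    size_t off = xdr_put_string(buf, 0, name);
    return xdr_put_u32(buf, off, (uint32_t)owner_id);
}
```

Encoding a five-byte name such as "hello" with an owner ID yields 4 (length) + 5 (data) + 3 (padding) + 4 (int) = 16 bytes, matching the alignment rules described above.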
XDR's design prioritizes simplicity and performance, avoiding complex features like explicit versioning or schema negotiation to keep RPC overhead low, though this requires careful definition of data structures in protocol specifications to prevent evolution issues. Its influence extends beyond Sun RPC to other ONC-based services, where it ensures reliable data transfer without embedding architecture-specific code in the protocol logic. Despite the rise of alternatives like [Protocol Buffers](/page/Protocol_Buffers), XDR remains integral to legacy systems and standards-compliant implementations of NFS and related protocols.[](https://datatracker.ietf.org/doc/html/rfc1014)
### Port Mapper and rpcbind
In Sun RPC, the Port Mapper serves as a critical service location mechanism that dynamically maps RPC program numbers and their versions to the corresponding [port](/page/Port) numbers used by server processes, facilitating client-server communication over [UDP](/page/UDP) or [TCP](/page/TCP) transports. It operates as an RPC program itself, identified by the program number 100000 (PMAP_PROG), and listens on the well-known [port](/page/Port) 111 (PMAP_PORT) for incoming requests, allowing clients to discover [service](/page/Service) endpoints without prior knowledge of specific ports.[](https://datatracker.ietf.org/doc/html/rfc1057) This dynamic binding is essential because RPC servers typically bind to ephemeral ports rather than fixed ones, enabling flexible resource allocation on the host.[](https://datatracker.ietf.org/doc/html/rfc1057)
The Port Mapper protocol, defined in version 2 as part of the initial Sun RPC specification, supports a set of procedures for managing these mappings, all encoded using XDR for interoperability. The core procedures include:
- **NULL (procedure 0)**: Performs no operation and returns no results, used for basic connectivity checks.
- **SET (procedure 1)**: Registers a new mapping by providing the program number, version, protocol, and port; it succeeds (returns TRUE) only if no conflicting mapping exists for the same program, version, and protocol, preventing overwrites.
- **UNSET (procedure 2)**: Removes a mapping for a given program and version, ignoring the protocol and port fields in the request.
- **GETPORT (procedure 3)**: Takes a program number, version number, and protocol identifier (e.g., IPPROTO_UDP = 17 or IPPROTO_TCP = 6) as input and returns the associated port number if the service is registered; otherwise, it returns 0.
- **DUMP (procedure 4)**: Retrieves a complete list of all current mappings on the host.
- **CALLIT (procedure 5)**: Forwards a call indirectly to a registered service on the same host, returning the service's port along with the procedure's results; it is defined only over UDP and silently discards errors, making it useful mainly for broadcast service discovery.
Each mapping entry is structured as a tuple consisting of an unsigned [integer](/page/Integer) program number, an unsigned [integer](/page/Integer) version number, a [protocol](/page/Protocol) identifier, and an unsigned [integer](/page/Integer) port number, stored in a simple database maintained by the Port Mapper daemon.[](https://datatracker.ietf.org/doc/html/rfc1057) In typical usage, RPC servers register their services with the Port Mapper upon startup by invoking the SET procedure, ensuring their endpoints are advertised. Clients, prior to issuing RPC calls, query the Port Mapper using GETPORT to obtain the correct port for the target [program](/page/Program) and [version](/page/Version), then direct subsequent requests to that port. This flow supports both [unicast](/page/Unicast) and broadcast RPC semantics, where broadcasts can leverage the Port Mapper for [service discovery](/page/Service_discovery) across networks.[](https://datatracker.ietf.org/doc/html/rfc1057)
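As a concrete illustration of the mapping tuple on the wire, the following sketch encodes the XDR body of a GETPORT request for NFS version 2 over UDP. It shows only the 16-byte argument block; in a real exchange this would be wrapped in a full RPC call message addressed to program 100000 on port 111:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Encode the GETPORT argument: the four unsigned integers of a mapping
 * (program, version, protocol, port), each as a big-endian 32-bit word.
 * The port field is ignored by the server on input, so it is sent as 0. */
static size_t pmap_getport_args(uint8_t *buf, uint32_t prog, uint32_t vers,
                                uint32_t prot)
{
    uint32_t words[4] = { prog, vers, prot, 0 };
    size_t off = 0;
    for (int i = 0; i < 4; i++) {
        buf[off++] = (uint8_t)(words[i] >> 24);
        buf[off++] = (uint8_t)(words[i] >> 16);
        buf[off++] = (uint8_t)(words[i] >> 8);
        buf[off++] = (uint8_t)words[i];
    }
    return off;   /* always 16 bytes */
}
```

The reply body is a single big-endian 32-bit word carrying the registered port, or 0 if no matching service is registered.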
Early implementations of the Port Mapper suffered from significant security vulnerabilities, as the protocol lacked built-in [authentication](/page/Authentication), allowing remote attackers to perform SET or UNSET operations and potentially overwrite legitimate mappings, leading to denial-of-service attacks or service spoofing. For instance, unauthorized registrations could redirect clients to malicious endpoints, exploiting the trust in port 111 as an open service.[](https://docs.oracle.com/cd/E19683-01/816-1435/rpcproto-69061/index.html)
To address these issues, the rpcbind protocol evolved from the Port Mapper in [1995](/page/1995), as specified in RFC 1833, introducing versions 3 and 4 while maintaining the same program number 100000 and port 111 for compatibility. Unlike the transport-specific Port Mapper, rpcbind uses a universal address format for mappings, consisting of tuples with program, version, network identifier (netid), address, and owner fields, enabling support for diverse underlying protocols beyond [UDP](/page/UDP) and [TCP](/page/TCP). Key security enhancements in rpcbind include restricting SET and UNSET operations to local [loopback](/page/Loopback) transports, preventing remote tampering and mitigating exploitation risks. The procedures were extended accordingly, with RPCBPROC_SET for registration, RPCBPROC_GETADDR (replacing GETPORT) for queries returning full addresses, and RPCBPROC_UNSET for deregistration, alongside retained support for DUMP and indirect call procedures. This evolution improved resilience while preserving the core registration and query flow: servers still register locally on startup, and clients query remotely for addresses before initiating RPC calls.[](https://datatracker.ietf.org/doc/html/rfc1833)[](https://docs.oracle.com/cd/E19683-01/816-1435/rpcproto-69061/index.html)
## Implementations and Tools
### rpcgen Compiler
The rpcgen compiler is a tool designed to automate the generation of C code for [Remote Procedure Call](/page/Remote_procedure_call) (RPC) applications in the Sun RPC framework, also known as Open Network Computing (ONC) RPC. It processes input files written in the RPC language, an interface definition language (IDL) similar to C, to produce client stubs, server skeletons, header files, and [External Data Representation](/page/External_Data_Representation) (XDR) routines for data serialization. This automation simplifies the development of distributed applications by handling the low-level details of RPC communication and data marshalling, allowing programmers to focus on the core logic of remote procedures.[](https://docs.oracle.com/cd/E18752_01/html/816-1435/rpcgenpguide-24243.html)[](https://docs.oracle.com/cd/E19683-01/816-1435/rpcgenpguide-21470/index.html)
rpcgen accepts files with a `.x` extension as input, where developers define RPC programs, versions, procedures, and data types using RPC language syntax. For instance, a simple `.x` file might declare a program with procedures like `ADD(int x, int y)` returning an integer, specifying parameters and return types that rpcgen uses to generate compatible code. The tool parses these definitions to create the necessary stubs that encapsulate RPC calls as local function invocations on the [client side](/page/Client-side) and dispatch incoming calls on the [server](/page/Server) side, while ensuring data is encoded and decoded via XDR for network transmission.[](https://docs.oracle.com/cd/E18752_01/html/816-1435/rpcgenpguide-24243.html)[](https://docs.oracle.com/cd/E19683-01/816-1435/rpcgenpguide-21470/index.html)
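A hypothetical `add.x` along those lines might look as follows; the names and the program number (drawn from the user-defined 0x20000000 range reserved by the protocol) are illustrative. Classic rpcgen passes a single argument per procedure, so the two operands are wrapped in a struct (multi-argument procedures require the newer `-N` mode):

```c
/* add.x -- hypothetical interface definition for an addition service. */
struct intpair {
    int x;
    int y;
};

program ADD_PROG {
    version ADD_VERS {
        int ADD(intpair) = 1;    /* procedure number 1 */
    } = 1;                       /* version number 1 */
} = 0x20000001;                  /* program number, user-defined range */
```

Running `rpcgen -a add.x` on such a file would produce the header, stub, skeleton, and XDR files described below.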
Key command-line options control the output of rpcgen. The `-a` flag generates all necessary files, including headers, client stubs, server skeletons, and XDR routines. The `-c` option produces only the XDR routines for data encoding and decoding. The `-s` option specifies the transport protocol, such as [UDP](/page/UDP) or [TCP](/page/TCP) (e.g., `rpcgen -s udp foo.x`), allowing customization for different network behaviors. The `-h` flag outputs solely the C header file containing structure definitions and procedure prototypes. These options enable selective compilation, optimizing the build process for client-only or server-only development.[](https://docs.oracle.com/cd/E18752_01/html/816-1435/rpcgenpguide-24243.html)[](https://www.cisco.com/c/en/us/td/docs/ios/sw_upgrades/interlink/r2_0/rpc_pr/rprpcgen.html)
Typical output files from rpcgen, assuming an input file named `foo.x`, include `foo.h` for shared header definitions, `foo_clnt.c` containing client stub routines that handle RPC calls, `foo_svc.c` with server skeleton code for procedure dispatching, and `foo_xdr.c` for XDR filter functions to marshal and unmarshal data structures. These files are standard C source code that can be compiled with a C compiler like `gcc`. For example, the client stub in `foo_clnt.c` might include a function like `foo_add_1(int *argp, CLIENT *clnt)`, which internally manages the RPC invocation over the network.[](https://docs.oracle.com/cd/E19683-01/816-1435/rpcgenpguide-21470/index.html)[](https://docs.oracle.com/cd/E18752_01/html/816-1435/rpcgenpguide-24243.html)
The standard workflow for using rpcgen begins with writing the RPC interface in a `.x` file, followed by invoking rpcgen to generate the output files (e.g., `rpcgen -a foo.x`). Developers then implement the server procedures by filling in the skeletons in `foo_svc.c`, compile all generated `.c` files along with application code using a [C](/page/C--) compiler, and link the resulting object files with the RPC [runtime library](/page/Runtime_library) (such as `librpc` or `libtirpc`) to produce the final [executable](/page/Executable). On the [client side](/page/Client-side), the stubs in `foo_clnt.c` are linked similarly, enabling transparent remote calls via functions that mimic local procedure invocations. This process ensures compatibility with the ONC RPC protocol while abstracting network details.[](https://docs.oracle.com/cd/E18752_01/html/816-1435/rpcgenpguide-24243.html)[](https://docs.oracle.com/cd/E19683-01/816-1435/rpcgenpguide-21470/index.html)
While effective for C-based development, rpcgen generates only C code and lacks native support for modern programming languages like [Java](/page/Java) or [Python](/page/Python), so cross-language RPC implementations require manual adaptations or third-party tools.[](https://www.cisco.com/c/en/us/td/docs/ios/sw_upgrades/interlink/r2_0/rpc_pr/rprpcgen.html)
### Client and Server Libraries
Sun RPC provides client and server libraries as part of its ONC (Open Network Computing) implementation, offering C-language [APIs](/page/Apis) for developing distributed applications that invoke remote procedures over networks. These libraries were historically integrated into the standard C library (libc) on many UNIX systems; modern implementations usually supply them separately through the Transport-Independent RPC library (libtirpc), which is the common choice for new development.[](https://fedoraproject.org/wiki/Changes/SunRPCRemoval) They enable developers to create client handles, register server procedures, handle [authentication](/page/Authentication), and manage errors without directly manipulating the underlying [protocol](/page/Protocol). The APIs emphasize simplicity, with [XDR](/page/External_Data_Representation) (External Data Representation) routines used for serializing arguments and results to ensure machine-independent data exchange.[](https://bitsavers.org/pdf/sun/sunos/4.1/800-3850-10A_Network_Programming_Guide_199003.pdf)
The client-side API centers on functions for establishing connections and making synchronous calls. The `clnt_create` function creates a client handle by specifying the server hostname, program number, version number, and transport protocol such as UDP or TCP, returning a `CLIENT *` pointer on success or `NULL` on failure, with errors detailed in the global `rpc_createerr` structure.[](https://bitsavers.org/pdf/sun/sunos/4.1/800-3850-10A_Network_Programming_Guide_199003.pdf) Once created, `clnt_call` invokes a remote procedure synchronously, taking the client handle, procedure number, input XDR procedure and argument, output XDR procedure and result buffer, and a timeout (defaulting to 25 seconds); it returns an `enum clnt_stat` value, such as `RPC_SUCCESS` for success or `RPC_TIMEDOUT` for expiration.[](https://bitsavers.org/pdf/sun/sunos/4.1/800-3850-10A_Network_Programming_Guide_199003.pdf) The `clnt_control` function allows runtime adjustments to the client handle, such as setting custom timeouts via the `CLSET_TIMEOUT` option and a `struct timeval` argument, returning a boolean indicating success.[](https://bitsavers.org/pdf/sun/sunos/4.1/800-3850-10A_Network_Programming_Guide_199003.pdf) Client resources are freed using `clnt_destroy`. These APIs support both UDP for low-latency, idempotent calls (limited to under 8KB) and TCP for reliable, stream-oriented communication.[](https://bitsavers.org/pdf/sun/sunos/4.1/800-3850-10A_Network_Programming_Guide_199003.pdf)
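Putting those calls together, a minimal client sketch might look like the following. The service (a program number from the user-defined range, with one procedure taking and returning an `int`) and the host name `server-host` are assumptions for illustration; the code builds against the classic `<rpc/rpc.h>` API, provided by libtirpc on modern Linux, and requires a live server to run:

```c
#include <rpc/rpc.h>
#include <stdio.h>
#include <stdlib.h>

#define EXAMPLE_PROG 0x20000001   /* hypothetical program number */
#define EXAMPLE_VERS 1
#define EXAMPLE_PROC 1

int main(void)
{
    /* Create a client handle for the service on "server-host" over UDP. */
    CLIENT *clnt = clnt_create("server-host", EXAMPLE_PROG, EXAMPLE_VERS, "udp");
    if (clnt == NULL) {
        clnt_pcreateerror("server-host");   /* prints rpc_createerr details */
        exit(1);
    }

    int arg = 7, result = 0;
    struct timeval timeout = { 25, 0 };     /* the customary 25-second total */

    /* Synchronous call: XDR filters serialize the argument and result. */
    enum clnt_stat stat = clnt_call(clnt, EXAMPLE_PROC,
                                    (xdrproc_t)xdr_int, (caddr_t)&arg,
                                    (xdrproc_t)xdr_int, (caddr_t)&result,
                                    timeout);
    if (stat != RPC_SUCCESS)
        clnt_perror(clnt, "call failed");
    else
        printf("result = %d\n", result);

    clnt_destroy(clnt);                     /* release the handle */
    return 0;
}
```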
On the server side, the [APIs](/page/Apis) facilitate procedure registration and request dispatching. The `registerrpc` function registers a [procedure](/page/Procedure) with the RPC runtime, specifying the program number, version number, [procedure](/page/Procedure) number, the procedure's [function pointer](/page/Function_pointer), and XDR routines for input and output; it returns 0 on success and -1 on failure.[](https://bitsavers.org/pdf/sun/sunos/4.1/800-3850-10A_Network_Programming_Guide_199003.pdf) The `svc_run` function starts the server's main [event loop](/page/Event_loop), blocking to receive and dispatch incoming RPC requests until an error or signal interruption occurs.[](https://bitsavers.org/pdf/sun/sunos/4.1/800-3850-10A_Network_Programming_Guide_199003.pdf) For sending replies, `svc_sendreply` transmits the output to the client, taking the server transport handle (`SVCXPRT *`), output XDR [procedure](/page/Procedure), and result pointer; it handles [serialization](/page/Serialization) and network transmission.[](https://docs.oracle.com/cd/E19455-01/805-7224/6j6q44cgg/index.html) Server transports are created with functions like `svcudp_create` for [UDP](/page/UDP) sockets. These components allow servers to handle multiple concurrent requests efficiently.[](https://bitsavers.org/pdf/sun/sunos/4.1/800-3850-10A_Network_Programming_Guide_199003.pdf)
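A server sketch for a hypothetical single-procedure service, using the simplified `registerrpc`/`svc_run` interface described above; the program number and the doubling logic are illustrative, and the code assumes the classic `<rpc/rpc.h>` API with a local rpcbind/portmapper running:

```c
#include <rpc/rpc.h>
#include <stdio.h>

#define EXAMPLE_PROG 0x20000001   /* hypothetical program number */
#define EXAMPLE_VERS 1
#define EXAMPLE_PROC 1

/* Remote procedure: receives a pointer to the decoded argument and
 * returns a pointer to static storage holding the result. */
static int *example_proc(int *argp)
{
    static int result;
    result = *argp * 2;
    return &result;
}

int main(void)
{
    /* Register with the local rpcbind/portmapper (0 means success). */
    if (registerrpc(EXAMPLE_PROG, EXAMPLE_VERS, EXAMPLE_PROC,
                    (char *(*)())example_proc,
                    (xdrproc_t)xdr_int, (xdrproc_t)xdr_int) != 0) {
        fprintf(stderr, "registerrpc failed\n");
        return 1;
    }
    svc_run();                    /* dispatch loop; normally never returns */
    fprintf(stderr, "svc_run returned\n");
    return 1;
}
```

`registerrpc` hides the transport setup (it creates a UDP transport internally); servers needing TCP or finer control use `svcudp_create`/`svctcp_create` with `svc_register` instead.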
Authentication in Sun RPC is managed through dedicated [APIs](/page/Apis) that create credential handles for secure procedure invocation. The `authnone_create` function generates an authentication handle with no credentials (`AUTH_NULL` flavor), suitable for unsecured environments, returning an `AUTH *` pointer.[](https://bitsavers.org/pdf/sun/sunos/4.1/800-3850-10A_Network_Programming_Guide_199003.pdf) For UNIX-style [authentication](/page/Authentication), `authunix_create` builds a handle using the client's [hostname](/page/Hostname), user ID ([UID](/page/UID)), group ID (GID), supplementary group count, and group list array, employing the `AUTH_UNIX` flavor with numeric identifiers for permission checks; `authunix_create_default` uses the current process's credentials automatically.[](https://bitsavers.org/pdf/sun/sunos/4.1/800-3850-10A_Network_Programming_Guide_199003.pdf) These handles are attached to client or server contexts via `clnt` or `svc` structures, with credentials limited to under 400 bytes in an `opaque_auth` structure. DES-based [authentication](/page/Authentication) is also supported via `authdes_create`, but UNIX [authentication](/page/Authentication) remains the most common for ONC deployments.[](https://bitsavers.org/pdf/sun/sunos/4.1/800-3850-10A_Network_Programming_Guide_199003.pdf)
Error reporting relies on enumerated statuses and diagnostic functions for [debugging](/page/Debugging). The `clnt_stat` enum categorizes client errors, including `RPC_SUCCESS` (0), `RPC_CANTENCODEARGS` (1), `RPC_CANTDECODERES` (2), and `RPC_TIMEDOUT` (5), returned by `clnt_call` or accessible via `clnt_geterr`.[](https://bitsavers.org/pdf/sun/sunos/4.1/800-3850-10A_Network_Programming_Guide_199003.pdf) Functions such as `clnt_perror` and `clnt_sperror` translate these statuses into human-readable messages, while servers report dispatch failures with routines like `svcerr_noproc` for invalid procedures and `svcerr_auth` for rejected credentials. Protocol-level errors appear in reply messages via `accept_stat` (e.g., `PROG_UNAVAIL` = 1) or `reject_stat` (e.g., `AUTH_ERROR` = 1).[](https://bitsavers.org/pdf/sun/sunos/4.1/800-3850-10A_Network_Programming_Guide_199003.pdf)
For portability, the libraries were traditionally embedded in UNIX libc, supporting diverse architectures like [SPARC](/page/SPARC), VAX, and x86 through XDR's canonical format, which enforces 4-byte alignment and byte-order independence.[](https://bitsavers.org/pdf/sun/sunos/4.1/800-3850-10A_Network_Programming_Guide_199003.pdf) They assume UNIX semantics but integrate with standard headers like `<netdb.h>` for name resolution. Bindings exist for other languages, such as [Java](/page/Java) implementations that provide equivalent client and server stubs, using rpcbind for [service discovery](/page/Service_discovery), similar to C implementations.[](https://link.springer.com/chapter/10.1007/978-3-642-60247-4_14)
## Applications and Usage
### Integration with Network File System (NFS)
Sun RPC serves as the foundational transport mechanism for the [Network File System](/page/Network_File_System) (NFS), enabling transparent remote file access across networks by defining NFS operations as remote procedures. Developed by [Sun Microsystems](/page/Sun_Microsystems) in the mid-1980s, NFS version 2 (NFSv2) was the first implementation to leverage Sun RPC version 2, treating NFS as a specific RPC "[program](/page/Program)" identified by the unique program number 100003. This integration allows clients to invoke file system operations—such as reading, writing, and directory traversal—through standardized RPC calls, abstracting the underlying network communication and promoting [interoperability](/page/Interoperability) across heterogeneous systems.[](https://datatracker.ietf.org/doc/html/rfc1094)
The NFS protocol specification uses Sun RPC's [External Data Representation](/page/External_Data_Representation) (XDR) for encoding arguments and results, ensuring machine-independent data serialization. RPC messages for NFS include a header with the program number (100003), version number (2 for NFSv2), and procedure number, followed by XDR-marshaled parameters. For instance, the core NFS procedures encompass operations like NFSPROC_GETATTR (procedure 1), which retrieves [file attributes](/page/File_attribute); NFSPROC_READ (procedure 6), which fetches file data up to 8,192 bytes; and NFSPROC_WRITE (procedure 8), which in NFSv2 must commit data to stable storage before the server replies, so that retransmitted writes remain safe (asynchronous, unstable writes were introduced only in NFSv3). These 18 procedures in NFSv2 are designed to be stateless, meaning each call contains all necessary context, which simplifies server implementation and enhances reliability over unreliable transports like [UDP](/page/UDP). The default transport for NFS is [UDP](/page/UDP) on port 2049, though [TCP](/page/TCP) support was added later for better performance in wide-area networks.[](https://datatracker.ietf.org/doc/html/rfc1094)[](https://datatracker.ietf.org/doc/html/rfc1014)
Integration extends beyond core file operations to auxiliary protocols that also rely on Sun RPC. The Mount protocol, assigned program number 100005 (with protocol versions 1 through 3 paralleling the evolution from NFSv2 to NFSv3), handles the mounting of remote [file](/page/File) systems by exporting lists of available shares and establishing NFS file handles, the opaque identifiers (32 bytes in NFSv2) representing files or directories. Similarly, the Network Lock Manager (NLM), using program number 100021 (version 4 accompanying NFSv3), provides [file](/page/File) locking services through procedures like NLMPROC_LOCK for acquiring locks and NLMPROC_TEST for checking lock [status](/page/Status), addressing NFS's initial lack of built-in locking to prevent concurrent [access](/page/Access) issues. These components communicate via the rpcbind service (formerly Port Mapper, program 100000) to dynamically discover service ports, ensuring flexible deployment without fixed port assignments. This modular RPC-based architecture facilitated NFS's rapid adoption, as it allowed developers to use tools like rpcgen to generate client and server stubs from interface definitions.[](https://datatracker.ietf.org/doc/html/rfc1094)
NFS version 3 (NFSv3), standardized in 1995, retained the core Sun RPC integration while introducing enhancements for scalability and robustness. It maintains the same program number (100003) but uses version 3, expanding to 22 [procedure](/page/Procedure)s, including new ones like [ACCESS](/page/Access) (procedure 4) for permission checks without attribute fetches and READDIRPLUS (procedure 17) for efficient directory listings with file handles and attributes in a single call. NFSv3 supports [TCP](/page/TCP) alongside [UDP](/page/UDP), removes NFSv2's fixed 8 KB transfer cap (maximum READ/WRITE sizes are instead negotiated via the new FSINFO procedure), and adds a COMMIT procedure (21) for explicit [data synchronization](/page/Data_synchronization).[](https://datatracker.ietf.org/doc/html/rfc1813) Authentication remains RPC-based, supporting mechanisms like AUTH_UNIX for UID/GID mapping and AUTH_DES for secure time-stamped credentials. This evolution preserved [backward compatibility](/page/Backward_compatibility) with Sun RPC while addressing limitations in NFSv2, such as [file size](/page/File_size) caps and [error](/page/Error) [reporting](/page/Reporting), making NFSv3 the dominant version for [enterprise](/page/Enterprise) use through the 1990s and early 2000s.
The design rationale for using Sun RPC in NFS emphasized simplicity and portability, as RPC provided a procedure-oriented [paradigm](/page/Paradigm) that mirrored local UNIX system calls, easing development for [kernel](/page/Kernel) and user-space implementations. By building on RPC's transport independence, NFS could operate over various [protocols](/page/Protocol) without redesign, though early versions prioritized low-latency [LAN](/page/Lan) environments. Seminal work by Sun engineers highlighted RPC's role in streamlining protocol maintenance, with NFS procedures defined in a concise RPC/XDR [specification language](/page/Specification_language) that automated code generation. This integration not only accelerated NFS deployment on [SunOS](/page/SunOS) but also influenced subsequent distributed file systems by demonstrating RPC's efficacy for high-throughput, stateless services.
### Other Network Services
Sun RPC underpins several network services in the Open Network Computing (ONC) ecosystem beyond the primary [Network File System](/page/Network_File_System) (NFS), enabling distributed system administration, monitoring, and auxiliary functions through standardized program numbers registered with the portmapper or rpcbind. These services leverage RPC's procedure call semantics to facilitate remote interactions, often running as daemons that register their availability for client access.[](https://www.iana.org/assignments/rpc-program-numbers/rpc-program-numbers.xhtml)[](https://datatracker.ietf.org/doc/html/rfc5531)
The [Network Information Service](/page/NIS) (NIS), formerly known as [Yellow Pages](/page/Yellow_pages), operates as an [RPC-based directory service](/page/Directory_service) using program number 100004 for the ypserv daemon, which manages replicated, read-only databases for network-wide information such as user accounts, hostnames, and network maps.[](https://www.iana.org/assignments/rpc-program-numbers/rpc-program-numbers.xhtml) Clients employ the ypbind daemon, associated with program 100007, to dynamically [bind](/page/BIND) to available NIS servers and query these databases via RPC calls, supporting centralized administration in [Unix-like](/page/Unix-like) environments.[](https://www.iana.org/assignments/rpc-program-numbers/rpc-program-numbers.xhtml) This setup allows seamless distribution of configuration data across heterogeneous systems without [manual](/page/Manual) synchronization.[](https://bitsavers.org/pdf/sun/sunos/4.1/800-3850-10A_Network_Programming_Guide_199003.pdf)
Status monitoring is provided by two related services. The rstat service, program number 100001, is implemented by the rstatd daemon and exposes procedures that return kernel performance statistics, such as load averages and uptime, to remote hosts, supporting tools like rup and perfmeter.[](https://www.iana.org/assignments/rpc-program-numbers/rpc-program-numbers.xhtml) Separately, the Network Status Monitor (the status service, program number 100024, implemented by rpc.statd) tracks host reboots so that peers, most notably the Network Lock Manager, can recover lock state after a crash.[](https://bitsavers.org/pdf/sun/sunos/4.1/800-3850-10A_Network_Programming_Guide_199003.pdf) Together these RPC services give lightweight, real-time visibility into remote hosts without requiring full system access.[](https://www.iana.org/assignments/rpc-program-numbers/rpc-program-numbers.xhtml)
The Network Lock Manager (NLM) provides distributed file locking via program number 100021, essential for coordinating access in shared NFS environments through the lockd daemon.[](https://www.iana.org/assignments/rpc-program-numbers/rpc-program-numbers.xhtml) Key procedures include NLM_LOCK for acquiring locks, NLM_UNLOCK for releasing them, NLM_TEST for checking lock status, and NLM_CANCEL for cancelling a blocked lock request, ensuring atomicity and preventing race conditions across networked file systems.[](https://bitsavers.org/pdf/sun/sunos/4.1/800-3850-10A_Network_Programming_Guide_199003.pdf) This service integrates closely with NFS operations to maintain data consistency.[](https://www.iana.org/assignments/rpc-program-numbers/rpc-program-numbers.xhtml)
Additional examples of Sun RPC-based services include the Bootparams protocol, assigned program number 100026 and implemented by the rpc.bootparamd or bootparamd daemon, which supplies essential parameters like root path and swap server details to diskless clients during network booting.[](https://www.iana.org/assignments/rpc-program-numbers/rpc-program-numbers.xhtml) Similarly, the Spray service, under program number 100012 via the sprayd daemon, functions as a network load-testing tool by emitting bursts of RPC packets to evaluate throughput and latency under stress.[](https://www.iana.org/assignments/rpc-program-numbers/rpc-program-numbers.xhtml) These utilities highlight RPC's versatility in supporting bootstrapping and performance diagnostics in ONC networks.[](https://bitsavers.org/pdf/sun/sunos/4.1/800-3850-10A_Network_Programming_Guide_199003.pdf)
## Legacy and Modern Relevance
### Variants and Extensions
Sun RPC, originally designed for [UDP](/page/UDP) transport, has seen several variants and extensions to [support](/page/Support) diverse environments, enhanced [security](/page/Security), and improved [performance](/page/Performance) over connection-oriented protocols. These modifications maintain [compatibility](/page/Compatibility) with the core ONC RPC version 2 [protocol](/page/Protocol) while addressing specific needs such as multi-transport [support](/page/Support) and cryptographic protections.[](https://datatracker.ietf.org/doc/html/rfc5531)
TI-RPC, or Transport Independent RPC, emerged as a key extension developed by [Sun Microsystems](/page/Sun_Microsystems) as part of [UNIX System V](/page/UNIX_System_V) Release 4 (SVR4) and later integrated into [UnixWare](/page/UnixWare) systems. Unlike the original transport-specific RPC (TS-RPC), which tied applications to [UDP](/page/UDP) or [TCP](/page/TCP), TI-RPC abstracts the transport layer, allowing applications to operate over multiple protocols including [UDP](/page/UDP), [TCP](/page/TCP), and later [IPv6](/page/IPv6) stacks without code modifications. This independence is achieved through the `/etc/netconfig` transport database and the NETPATH environment variable, which together enable dynamic transport selection at run time. TI-RPC's [IPv6](/page/IPv6) [support](/page/Support), introduced in [Solaris](/page/Solaris) implementations, ensures RPC applications can leverage dual-stack environments, with [TCP](/page/TCP) or [UDP](/page/UDP) using [IPv6](/page/IPv6) addresses seamlessly.[](https://docs.oracle.com/cd/E18752_01/html/816-1435/oncintro-2.html)[](https://docs.oracle.com/cd/E18752_01/html/816-1435/portrpc-99.html)
For security enhancements, RPCSEC_GSS provides a framework for integrating the Generic Security Service API (GSS-API), including [Kerberos](/page/Kerberos) v5, as a credential mechanism atop ONC RPC. Specified in RFC 2203, this [protocol](/page/Protocol) defines a layered security model where RPC messages carry GSS-API tokens for authentication, integrity, and confidentiality without altering the base RPC header. It supports three service types: none, integrity, and privacy. It also enables secure context establishment between client and server, making it suitable for distributed systems requiring strong [mutual authentication](/page/Mutual_authentication). RPCSEC_GSS has been widely adopted for [Kerberos](/page/Kerberos)-based access control in NFS deployments and is mandatory to implement in NFSv4.[](https://www.rfc-editor.org/rfc/rfc2203.html)
Extensions to RPC over [TCP](/page/TCP) address limitations in the original record-marking scheme, which prepends a 4-byte length field to each message but can suffer from [head-of-line blocking](/page/Head-of-line_blocking) and inefficiency in high-latency networks. [RFC](/page/RFC) 5666 introduces RPC-over-RDMA as a transport extension, replacing traditional [TCP](/page/TCP) framing with a more efficient header that includes chunk lists for [direct memory access](/page/Direct_memory_access), supporting inline data, RDMA reads, and writes within a single RPC [transaction](/page/Transaction). This framing improvement reduces CPU overhead and enables [zero-copy](/page/Zero-copy) transfers, though it requires RDMA-capable hardware and is primarily used in [high-performance computing](/page/High-performance_computing) environments like NFS over [InfiniBand](/page/InfiniBand).[](https://datatracker.ietf.org/doc/html/rfc5666)
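The record-marking scheme mentioned above packs a last-fragment flag and a 31-bit fragment length into the 4-byte field (transmitted big-endian on the wire). A minimal sketch of the bit manipulation, with function names that are ours for illustration:

```c
#include <stdint.h>

/* RPC-over-TCP record mark: the high bit flags the last fragment of a
 * record; the remaining 31 bits carry the fragment length. The mark is
 * transmitted big-endian on the wire. */
uint32_t rpc_record_mark(uint32_t frag_len, int last_fragment) {
    return (last_fragment ? 0x80000000u : 0u) | (frag_len & 0x7FFFFFFFu);
}

/* Recover the length and last-fragment flag from a received mark. */
uint32_t rpc_frag_length(uint32_t mark) { return mark & 0x7FFFFFFFu; }

int rpc_is_last_fragment(uint32_t mark) { return (mark & 0x80000000u) != 0; }
```

A 100-byte final fragment, for instance, is framed by the mark 0x80000064.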
A parallel lineage is [DCE/RPC](/page/DCE/RPC), standardized by the Open Software Foundation as part of the [Distributed Computing Environment](/page/Distributed_Computing_Environment) (DCE); descended from Apollo's Network Computing System rather than from Sun's protocol, it addresses the same problems as ONC RPC while introducing significant modifications for enterprise interoperability. [DCE/RPC](/page/DCE/RPC) replaces ONC's 32-bit program numbers with 128-bit UUIDs for unique interface identification, allowing globally unique service definitions without central registration. It also adds support for [pipes](/page/PIPES), enabling streaming data transfer and asynchronous operations not native to ONC RPC, such as partial results and multi-part responses. Microsoft's implementation, known as MS-RPC, extends [DCE/RPC](/page/DCE/RPC) further by integrating named pipes over [SMB](/page/SMB) for local and remote communication, facilitating Windows-specific features like [COM](/page/Com)/DCOM integration while maintaining wire compatibility with DCE tools.[](https://nvlpubs.nist.gov/nistpubs/Legacy/IR/nistir5277.pdf)[](https://pubs.opengroup.org/onlinepubs/9629399/chap2.htm)[](https://learn.microsoft.com/en-us/windows/win32/rpc/using-asynchronous-rpc-with-dce-pipes)
An early security variant, Secure RPC using the AUTH_DES flavor, was introduced by Sun in the [1980s](/page/1980s) to provide encryption beyond the basic AUTH_SYS mechanism. AUTH_DES employs [Diffie-Hellman key exchange](/page/Key_exchange) for session keys and [DES](/page/DES) for encrypting credentials and data in RPC messages, authenticating both hosts and users via public keys stored in a secure database. However, due to [DES](/page/DES)'s 56-bit key length vulnerability to brute-force attacks, AUTH_DES has been deprecated in modern systems, with recommendations to migrate to stronger mechanisms like RPCSEC_GSS.[](https://docs.oracle.com/cd/E19253-01/816-1435/6m7rrfn86/index.html)[](https://docs.oracle.com/cd/E26505_01/html/E27224/auth-2.html)[](https://datatracker.ietf.org/doc/html/rfc5531)
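As a sketch of the key-agreement primitive involved, the following square-and-multiply modular-exponentiation routine shows how two parties derive a shared secret; the toy parameters in the example are for illustration only and bear no resemblance to AUTH_DES's real 192-bit modulus:

```c
#include <stdint.h>

/* Square-and-multiply modular exponentiation, the primitive behind the
 * Diffie-Hellman exchange that AUTH_DES builds on. Toy 64-bit arithmetic
 * for illustration only. */
uint64_t mod_pow(uint64_t base, uint64_t exp, uint64_t mod) {
    uint64_t result = 1;
    base %= mod;
    while (exp > 0) {
        if (exp & 1)
            result = (result * base) % mod;
        base = (base * base) % mod;
        exp >>= 1;
    }
    return result;
}
```

With p = 23 and g = 5, secrets a = 6 and b = 15 yield public values 8 and 19, and each side derives the same shared secret from the other's public value.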
### Current Adoption and Alternatives
Sun RPC, also known as ONC RPC, maintains persistent usage in modern [Unix-like](/page/Unix-like) systems, primarily for [legacy](/page/Legacy) support in network services such as the Network File System (NFS). With the release of [glibc](/page/Glibc) 2.32 in 2020, the built-in Sun RPC implementation was removed from glibc, and [Linux](/page/Linux) distributions now rely on the libtirpc library, which provides TI-RPC while maintaining ONC RPC compatibility. On [Linux](/page/Linux), the rpcbind package supplies the RPC port mapping that ONC RPC operations require: NFSv3 depends on rpcbind to locate its services, whereas NFSv4 listens on a fixed port and does not need it, though both versions retain ONC RPC semantics.[](https://www.gnu.org/software/libc/news+announce/2020-08-06-glibc-2.32.html)[](https://fedoraproject.org/wiki/Changes/SunRPCRemoval)[](https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/storage_administration_guide/ch-nfs) Similarly, [Oracle Solaris](/page/Oracle_Solaris) and related systems continue to integrate ONC RPC for NFS implementations, ensuring [backward compatibility](/page/backward_compatibility) in enterprise environments where older file-sharing setups persist.[](https://docs.oracle.com/cd/E36784_01/html/E36862/oncintro-5.html) NFSv3 and NFSv4 both build upon ONC RPC as their underlying transport mechanism, with NFSv4 using compound operations over the same RPC layer defined in RFC 5531.[](https://datatracker.ietf.org/doc/html/rfc7530)
Despite this ongoing role, ONC RPC adoption has declined in new deployments due to longstanding [security](/page/Security) vulnerabilities, particularly in the portmapper service, which has been targeted by exploits allowing remote code execution and unauthorized access.[](https://www.fortra.com/resources/vulnerabilities/rpc-portmapper)[](https://www.skywaywest.com/2021/01/what-is-an-open-portmapper-vulnerability/) The rise of web-oriented architectures has further contributed to this shift, as HTTP/[REST](/page/REST) APIs offer simpler integration with internet-scale services, browser compatibility, and standardized [security](/page/Security) models that address RPC's limitations in distributed, heterogeneous environments.[](https://cloud.google.com/blog/products/application-development/rest-vs-rpc-what-problems-are-you-trying-to-solve-with-your-apis)
Contemporary alternatives to ONC RPC emphasize performance, language interoperability, and modern transport protocols. [gRPC](/page/GRPC), developed by [Google](/page/Google), provides a high-performance RPC framework over [HTTP/2](/page/HTTP/2) with [Protocol Buffers](/page/Protocol_Buffers) for serialization, supporting streaming and bidirectional communication while evolving the core RPC paradigm without direct reliance on ONC mechanisms.[](https://grpc.io/docs/what-is-grpc/core-concepts/) [Apache Thrift](/page/Apache_Thrift), originally created by [Facebook](/page/Facebook), serves as another cross-language RPC system with binary serialization and support for multiple transports, enabling scalable service development as an evolution beyond traditional ONC RPC constraints.[](https://engineering.fb.com/2014/02/20/open-source/under-the-hood-building-and-open-sourcing-fbthrift/)
Migration from ONC RPC often involves refactoring to these frameworks or adapting legacy services for cloud-native environments. For instance, containerized NFS deployments in platforms like Oracle Cloud Infrastructure and [Red Hat](/page/Red_Hat) [OpenShift](/page/OpenShift) allow ONC RPC-based file systems to operate within [Docker](/page/Docker) and [Kubernetes](/page/Kubernetes), facilitating hybrid transitions while maintaining compatibility.[](https://blogs.oracle.com/cloud-infrastructure/post/mounting-oci-file-storage-and-other-nfs-shares-on-docker-containers)[](https://docs.redhat.com/en/documentation/openshift_container_platform/4.8/html/storage/configuring-persistent-storage) Overall, while ONC RPC endures in established Unix ecosystems, its role is increasingly supplanted by more secure and flexible options in [greenfield](/page/Greenfield) projects.
For a 32-character string, this would be encoded by calling `xdr_example()` on an XDR stream, producing a 40-byte output (4 bytes for length + 32 bytes for data + 4 bytes for the int).[](https://docs-archive.freebsd.org/44doc/psd/24.xdr/paper.pdf)[](https://datatracker.ietf.org/doc/html/rfc1014)
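That byte count follows mechanically from XDR's rules: a string is a 4-byte big-endian length, then the bytes themselves padded with zeros to a 4-byte boundary, and an int is a 4-byte big-endian word. A hand-rolled sketch of that layout (not the real `xdr_string` machinery; the function names are ours):

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Write a 32-bit value big-endian, as XDR requires. */
static void put_u32(unsigned char *p, uint32_t v) {
    p[0] = v >> 24; p[1] = v >> 16; p[2] = v >> 8; p[3] = v;
}

/* Hand-rolled XDR encoding of { string; int }: 4-byte length, string
 * bytes zero-padded to a multiple of 4, then a 4-byte int.
 * Returns the total number of bytes written. */
size_t xdr_encode_string_int(unsigned char *out, const char *s, int32_t n) {
    size_t len = strlen(s);
    size_t padded = (len + 3) & ~(size_t)3;   /* round up to 4-byte boundary */
    put_u32(out, (uint32_t)len);
    memcpy(out + 4, s, len);
    memset(out + 4 + len, 0, padded - len);   /* zero-fill the padding */
    put_u32(out + 4 + padded, (uint32_t)n);
    return 4 + padded + 4;
}
```

A 32-character string needs no padding, giving 4 + 32 + 4 = 40 bytes, matching the figure above.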
XDR's design prioritizes simplicity and performance, avoiding complex features like explicit versioning or schema negotiation to keep RPC overhead low, though this requires careful definition of data structures in protocol specifications to prevent evolution issues. Its influence extends beyond Sun RPC to other ONC-based services, where it ensures reliable data transfer without embedding architecture-specific code in the protocol logic. Despite the rise of alternatives like [Protocol Buffers](/page/Protocol_Buffers), XDR remains integral to legacy systems and standards-compliant implementations of NFS and related protocols.[](https://datatracker.ietf.org/doc/html/rfc1014)
### Port Mapper and rpcbind
In Sun RPC, the Port Mapper serves as a critical service location mechanism that dynamically maps RPC program numbers and their versions to the corresponding [port](/page/Port) numbers used by server processes, facilitating client-server communication over [UDP](/page/UDP) or [TCP](/page/TCP) transports. It operates as an RPC program itself, identified by the program number [100000](/page/100,000) (PMAP_PROG), and listens on the well-known [port](/page/Port) 111 (PMAP_PORT) for incoming requests, allowing clients to discover [service](/page/Service) endpoints without prior knowledge of specific ports.[](https://datatracker.ietf.org/doc/html/rfc1057) This dynamic binding is essential because RPC servers typically bind to ephemeral ports rather than fixed ones, enabling flexible resource allocation on the host.[](https://datatracker.ietf.org/doc/html/rfc1057)
The Port Mapper protocol, defined in version 2 as part of the initial Sun RPC specification, supports a set of procedures for managing these mappings, all encoded using XDR for interoperability. The core procedures include:
- **NULL (procedure 0)**: Performs no operation and returns no results; used for basic connectivity checks.
- **SET (procedure 1)**: Registers a new mapping by providing the program number, version, protocol, and port; it succeeds (returns TRUE) only if no conflicting mapping exists for the same program, version, and protocol, preventing overwrites.
- **UNSET (procedure 2)**: Removes a mapping for a given program and version, ignoring the protocol and port fields in the request.
- **GETPORT (procedure 3)**: Takes a program number, version number, and protocol identifier (e.g., IPPROTO_TCP = 6 or IPPROTO_UDP = 17) as input and returns the associated port number if the service is registered; otherwise, it returns 0.
- **DUMP (procedure 4)**: Retrieves a complete list of all current mappings on the host.
- **CALLIT (procedure 5)**: Forwards a call to a registered local service on the caller's behalf, used chiefly for broadcast RPC.
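As a concrete illustration, the GETPORT call body is simply four XDR-encoded big-endian 32-bit words: program, version, protocol, and a port field that the server ignores on queries. A minimal sketch (the function name is ours):

```c
#include <stdint.h>

/* Encode the 16-byte PMAPPROC_GETPORT argument body: program, version,
 * protocol (6 = TCP, 17 = UDP), and a port field ignored on queries.
 * Each field is a big-endian 32-bit word, per XDR. */
void pmap_getport_args(unsigned char out[16],
                       uint32_t prog, uint32_t vers,
                       uint32_t prot, uint32_t port) {
    uint32_t fields[4] = { prog, vers, prot, port };
    for (int i = 0; i < 4; i++) {
        out[4*i]     = fields[i] >> 24;
        out[4*i + 1] = fields[i] >> 16;
        out[4*i + 2] = fields[i] >> 8;
        out[4*i + 3] = fields[i];
    }
}
```

For example, locating NFS (program 100003) version 2 over UDP encodes the words 100003, 2, 17, and 0.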
Each mapping entry is structured as a tuple consisting of an unsigned [integer](/page/Integer) program number, an unsigned [integer](/page/Integer) version number, a [protocol](/page/Protocol) identifier, and an unsigned [integer](/page/Integer) port number, stored in a simple database maintained by the Port Mapper daemon.[](https://datatracker.ietf.org/doc/html/rfc1057) In typical usage, RPC servers register their services with the Port Mapper upon startup by invoking the SET procedure, ensuring their endpoints are advertised. Clients, prior to issuing RPC calls, query the Port Mapper using GETPORT to obtain the correct port for the target [program](/page/Program) and [version](/page/Version), then direct subsequent requests to that port. This flow supports both [unicast](/page/Unicast) and broadcast RPC semantics, where broadcasts can leverage the Port Mapper for [service discovery](/page/Service_discovery) across networks.[](https://datatracker.ietf.org/doc/html/rfc1057)
Early implementations of the Port Mapper suffered from significant security vulnerabilities, as the protocol lacked built-in [authentication](/page/Authentication), allowing remote attackers to perform SET or UNSET operations and potentially overwrite legitimate mappings, leading to denial-of-service attacks or service spoofing. For instance, unauthorized registrations could redirect clients to malicious endpoints, exploiting the trust in port 111 as an open service.[](https://docs.oracle.com/cd/E19683-01/816-1435/rpcproto-69061/index.html)
To address these issues, the rpcbind protocol evolved from the Port Mapper in [1995](/page/1995), as specified in RFC 1833, introducing versions 3 and 4 while maintaining the same program number 100000 and port 111 for compatibility. Unlike the transport-specific Port Mapper, rpcbind uses a universal address format for mappings, consisting of tuples with program, version, network identifier (netid), address, and owner fields, enabling support for diverse underlying protocols beyond [UDP](/page/UDP) and [TCP](/page/TCP). Key security enhancements in rpcbind include restricting SET and UNSET operations to local [loopback](/page/Loopback) transports, preventing remote tampering and mitigating exploitation risks. The procedures were extended accordingly, with RPCBPROC_SET for registration, RPCBPROC_GETADDR (replacing GETPORT) for queries returning full addresses, and RPCBPROC_UNSET for deregistration, alongside retained support for DUMP and indirect call procedures. This evolution improved resilience while preserving the core registration and query flow: servers still register locally on startup, and clients query remotely for addresses before initiating RPC calls.[](https://datatracker.ietf.org/doc/html/rfc1833)[](https://docs.oracle.com/cd/E19683-01/816-1435/rpcproto-69061/index.html)
## Implementations and Tools
### rpcgen Compiler
The rpcgen compiler is a tool designed to automate the generation of C code for [Remote Procedure Call](/page/Remote_procedure_call) (RPC) applications in the Sun RPC framework, also known as Open Network Computing (ONC) RPC. It processes input files written in the RPC language, an interface definition language (IDL) similar to C, to produce client stubs, server skeletons, header files, and [External Data Representation](/page/External_Data_Representation) (XDR) routines for data serialization. This automation simplifies the development of distributed applications by handling the low-level details of RPC communication and data marshalling, allowing programmers to focus on the core logic of remote procedures.[](https://docs.oracle.com/cd/E18752_01/html/816-1435/rpcgenpguide-24243.html)[](https://docs.oracle.com/cd/E19683-01/816-1435/rpcgenpguide-21470/index.html)
rpcgen accepts files with a `.x` extension as input, where developers define RPC programs, versions, procedures, and data types using RPC language syntax. For instance, a simple `.x` file might declare a program with a procedure such as `ADD` that takes a structure of two integers and returns an integer, specifying parameter and return types that rpcgen uses to generate compatible code (the classic RPC language passes a single argument per procedure; rpcgen's `-N` option enables multiple arguments). The tool parses these definitions to create the necessary stubs that encapsulate RPC calls as local function invocations on the [client side](/page/Client-side) and dispatch incoming calls on the [server](/page/Server) side, while ensuring data is encoded and decoded via XDR for network transmission.[](https://docs.oracle.com/cd/E18752_01/html/816-1435/rpcgenpguide-24243.html)[](https://docs.oracle.com/cd/E19683-01/816-1435/rpcgenpguide-21470/index.html)
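A minimal `.x` file in this spirit might look like the following sketch; the program name, transient program number, and procedure are illustrative, not standard assignments:

```
/* add.x -- illustrative rpcgen input; the program number is drawn from
 * the user-defined transient range, not an IANA assignment. */
struct add_args {
    int x;
    int y;
};

program ADD_PROG {
    version ADD_VERS {
        int ADD(add_args) = 1;   /* procedure number 1 */
    } = 1;                       /* version number */
} = 0x20000001;                  /* program number */
```

Running `rpcgen -a add.x` on such a file would emit `add.h`, `add_clnt.c`, `add_svc.c`, and `add_xdr.c`.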
Key command-line options control the output of rpcgen. The `-a` flag generates all necessary files, including headers, client stubs, server skeletons, and XDR routines. The `-c` option produces only the XDR routines for data encoding and decoding. The `-s` option specifies the transport protocol, such as [UDP](/page/UDP) or [TCP](/page/TCP) (e.g., `rpcgen -s udp foo.x`), allowing customization for different network behaviors. The `-h` flag outputs solely the C header file containing structure definitions and procedure prototypes. These options enable selective compilation, optimizing the build process for client-only or server-only development.[](https://docs.oracle.com/cd/E18752_01/html/816-1435/rpcgenpguide-24243.html)[](https://www.cisco.com/c/en/us/td/docs/ios/sw_upgrades/interlink/r2_0/rpc_pr/rprpcgen.html)
Typical output files from rpcgen, assuming an input file named `foo.x`, include `foo.h` for shared header definitions, `foo_clnt.c` containing client stub routines that handle RPC calls, `foo_svc.c` with server skeleton code for procedure dispatching, and `foo_xdr.c` for XDR filter functions to marshal and unmarshal data structures. These files are standard C source code that can be compiled with a C compiler like `gcc`. For example, the client stub in `foo_clnt.c` might include a function like `foo_add_1(int *argp, CLIENT *clnt)`, which internally manages the RPC invocation over the network.[](https://docs.oracle.com/cd/E19683-01/816-1435/rpcgenpguide-21470/index.html)[](https://docs.oracle.com/cd/E18752_01/html/816-1435/rpcgenpguide-24243.html)
The standard workflow for using rpcgen begins with writing the RPC interface in a `.x` file, followed by invoking rpcgen to generate the output files (e.g., `rpcgen -a foo.x`). Developers then implement the server procedures by filling in the skeletons in `foo_svc.c`, compile all generated `.c` files along with application code using a [C](/page/C--) compiler, and link the resulting object files with the RPC [runtime library](/page/Runtime_library) (such as `librpc` or `libtirpc`) to produce the final [executable](/page/Executable). On the [client side](/page/Client-side), the stubs in `foo_clnt.c` are linked similarly, enabling transparent remote calls via functions that mimic local procedure invocations. This process ensures compatibility with the ONC RPC protocol while abstracting network details.[](https://docs.oracle.com/cd/E18752_01/html/816-1435/rpcgenpguide-24243.html)[](https://docs.oracle.com/cd/E19683-01/816-1435/rpcgenpguide-21470/index.html)
While effective for C-based development, rpcgen has limitations in that it is primarily focused on generating C code and lacks native support for modern programming languages like [Java](/page/Java) or [Python](/page/Python), requiring manual adaptations or third-party tools for cross-language RPC implementations.[](https://www.cisco.com/c/en/us/td/docs/ios/sw_upgrades/interlink/r2_0/rpc_pr/rprpcgen.html)
### Client and Server Libraries
Sun RPC provides client and server libraries as part of its ONC (Open Network Computing) implementation, offering C-language [APIs](/page/Apis) for developing distributed applications that invoke remote procedures over networks. These libraries were historically integrated into the standard C library (libc) on many UNIX systems; modern systems typically provide them separately through the Transport-Independent RPC library (libtirpc), which is the usual choice for new development.[](https://fedoraproject.org/wiki/Changes/SunRPCRemoval) They enable developers to create client handles, register server procedures, handle [authentication](/page/Authentication), and manage errors without directly manipulating the underlying [protocol](/page/Protocol). The APIs emphasize simplicity, with [XDR](/page/External_Data_Representation) (External Data Representation) routines used for serializing arguments and results to ensure machine-independent data exchange.[](https://bitsavers.org/pdf/sun/sunos/4.1/800-3850-10A_Network_Programming_Guide_199003.pdf)
The client-side API centers on functions for establishing connections and making synchronous calls. The `clnt_create` function creates a client handle by specifying the server hostname, program number, version number, and transport protocol such as UDP or TCP, returning a `CLIENT *` pointer on success or `NULL` on failure, with errors detailed in the global `rpc_createerr` structure.[](https://bitsavers.org/pdf/sun/sunos/4.1/800-3850-10A_Network_Programming_Guide_199003.pdf) Once created, `clnt_call` invokes a remote procedure synchronously, taking the client handle, procedure number, input XDR procedure and argument, output XDR procedure and result buffer, and a timeout (defaulting to 25 seconds); it returns an `enum clnt_stat` value, such as `RPC_SUCCESS` for success or `RPC_TIMEDOUT` for expiration.[](https://bitsavers.org/pdf/sun/sunos/4.1/800-3850-10A_Network_Programming_Guide_199003.pdf) The `clnt_control` function allows runtime adjustments to the client handle, such as setting custom timeouts via the `CLSET_TIMEOUT` option and a `struct timeval` argument, returning a boolean indicating success.[](https://bitsavers.org/pdf/sun/sunos/4.1/800-3850-10A_Network_Programming_Guide_199003.pdf) Client resources are freed using `clnt_destroy`. These APIs support both UDP for low-latency, idempotent calls (limited to under 8KB) and TCP for reliable, stream-oriented communication.[](https://bitsavers.org/pdf/sun/sunos/4.1/800-3850-10A_Network_Programming_Guide_199003.pdf)
On the server side, the [APIs](/page/Apis) facilitate procedure registration and request dispatching. The `registerrpc` function registers a [procedure](/page/Procedure) with the RPC runtime, specifying the program number, version number, [procedure](/page/Procedure) number, the procedure's [function pointer](/page/Function_pointer), and XDR routines for input and output; it returns a [boolean](/page/Boolean) or [integer](/page/Integer) indicating success (non-zero) or failure.[](https://bitsavers.org/pdf/sun/sunos/4.1/800-3850-10A_Network_Programming_Guide_199003.pdf) The `svc_run` function starts the server's main [event loop](/page/Event_loop), blocking to receive and dispatch incoming RPC requests until an error or signal interruption occurs.[](https://bitsavers.org/pdf/sun/sunos/4.1/800-3850-10A_Network_Programming_Guide_199003.pdf) For sending replies, `svc_sendreply` transmits the output to the client, taking the server transport handle (`SVCXPRT *`), output XDR [procedure](/page/Procedure), and result pointer; it handles [serialization](/page/Serialization) and network transmission.[](https://docs.oracle.com/cd/E19455-01/805-7224/6j6q44cgg/index.html) Server transports are created with functions like `svcudp_create` for [UDP](/page/UDP) sockets. These components allow servers to handle multiple concurrent requests efficiently.[](https://bitsavers.org/pdf/sun/sunos/4.1/800-3850-10A_Network_Programming_Guide_199003.pdf)
Authentication in Sun RPC is managed through dedicated [APIs](/page/Apis) that create credential handles for secure procedure invocation. The `authnone_create` function generates an authentication handle with no credentials (`AUTH_NULL` flavor), suitable for unsecured environments, returning an `AUTH *` pointer.[](https://bitsavers.org/pdf/sun/sunos/4.1/800-3850-10A_Network_Programming_Guide_199003.pdf) For UNIX-style [authentication](/page/Authentication), `authunix_create` builds a handle using the client's [hostname](/page/Hostname), user ID ([UID](/page/UID)), group ID (GID), supplementary group count, and group list array, employing the `AUTH_UNIX` flavor with numeric identifiers for permission checks; `authunix_create_default` uses the current process's credentials automatically.[](https://bitsavers.org/pdf/sun/sunos/4.1/800-3850-10A_Network_Programming_Guide_199003.pdf) These handles are attached to client or server contexts via `clnt` or `svc` structures, with credentials limited to under 400 bytes in an `opaque_auth` structure. DES-based [authentication](/page/Authentication) is also supported via `authdes_create`, but UNIX [authentication](/page/Authentication) remains the most common for ONC deployments.[](https://bitsavers.org/pdf/sun/sunos/4.1/800-3850-10A_Network_Programming_Guide_199003.pdf)
Error reporting relies on enumerated statuses and diagnostic functions for [debugging](/page/Debugging). The `clnt_stat` enum categorizes client errors, including `RPC_SUCCESS` (0), `RPC_CANTENCODEARGS` (1), `RPC_CANTDECODERES` (13), and `RPC_TIMEDOUT` (5), returned by `clnt_call` or accessible via `clnt_geterr`.[](https://bitsavers.org/pdf/sun/sunos/4.1/800-3850-10A_Network_Programming_Guide_199003.pdf) Similarly, `svc_stat` enums handle server-side issues like `SVC_STAT_SUCC` or `SVC_STAT_RPC_MISMATCH`. Functions such as `clnt_perror` and `clnt_sperror` translate these statuses into human-readable messages, while server errors use `svcerr_noproc` for invalid procedures. Protocol-level errors appear in reply messages via `accept_stat` (e.g., `PROG_UNAVAIL` = 1) or `reject_stat` (e.g., `AUTH_ERROR` = 1).[](https://bitsavers.org/pdf/sun/sunos/4.1/800-3850-10A_Network_Programming_Guide_199003.pdf)
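A sketch of how such status codes translate into messages, using a local copy of a few enum values for illustration (the authoritative definitions and the library's real `clnt_sperrno` routine live in the RPC headers and runtime):

```c
/* A few clnt_stat values copied here for illustration; the authoritative
 * enum is declared in the RPC client headers (e.g., libtirpc's clnt.h). */
enum clnt_stat {
    RPC_SUCCESS        = 0,
    RPC_CANTENCODEARGS = 1,
    RPC_TIMEDOUT       = 5,
    RPC_CANTDECODERES  = 13
};

/* Minimal stand-in for the library's status-to-string translation:
 * map a client status code to a human-readable message. */
const char *rpc_strstat(enum clnt_stat st) {
    switch (st) {
    case RPC_SUCCESS:        return "RPC: Success";
    case RPC_CANTENCODEARGS: return "RPC: Can't encode arguments";
    case RPC_TIMEDOUT:       return "RPC: Timed out";
    case RPC_CANTDECODERES:  return "RPC: Can't decode result";
    default:                 return "RPC: Unknown status";
    }
}
```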
For portability, the libraries are embedded in UNIX libc, supporting diverse architectures like [SPARC](/page/SPARC), VAX, and x86 through XDR's canonical format, which enforces 4-byte alignment and byte-order independence.[](https://bitsavers.org/pdf/sun/sunos/4.1/800-3850-10A_Network_Programming_Guide_199003.pdf) They assume UNIX semantics but integrate with standard headers like `<netdb.h>` for name resolution. Bindings exist for other languages, such as [Java](/page/Java) implementations that provide equivalent client and server stubs, using rpcbind for [service discovery](/page/Service_discovery), similar to C implementations.[](https://link.springer.com/chapter/10.1007/978-3-642-60247-4_14)
## Applications and Usage
### Integration with Network File System (NFS)
Sun RPC serves as the foundational transport mechanism for the [Network File System](/page/Network_File_System) (NFS), enabling transparent remote file access across networks by defining NFS operations as remote procedures. Developed by [Sun Microsystems](/page/Sun_Microsystems) in the mid-1980s, NFS version 2 (NFSv2) was the first implementation to leverage Sun RPC version 2, treating NFS as a specific RPC "[program](/page/Program)" identified by the unique program number 100003. This integration allows clients to invoke file system operations—such as reading, writing, and directory traversal—through standardized RPC calls, abstracting the underlying network communication and promoting [interoperability](/page/Interoperability) across heterogeneous systems.[](https://datatracker.ietf.org/doc/html/rfc1094)
The NFS protocol specification uses Sun RPC's [External Data Representation](/page/External_Data_Representation) (XDR) for encoding arguments and results, ensuring machine-independent data serialization. RPC messages for NFS include a header with the program number (100003), version number (2 for NFSv2), and procedure number, followed by XDR-marshaled parameters. For instance, the core NFS procedures encompass operations like NFSPROC_GETATTR (procedure 1), which retrieves [file attributes](/page/File_attribute); NFSPROC_READ (procedure 6), which fetches file data up to 8,192 bytes; and NFSPROC_WRITE (procedure 8), which modifies files with unstable or stable semantics to handle retries efficiently. These 18 procedures in NFSv2 are designed to be stateless, meaning each call contains all necessary context, which simplifies server implementation and enhances reliability over unreliable transports like [UDP](/page/UDP). The default transport for NFS is [UDP](/page/UDP) on port 2049, though [TCP](/page/TCP) support was added later for better performance in wide-area networks.[](https://datatracker.ietf.org/doc/html/rfc1094)[](https://datatracker.ietf.org/doc/html/rfc1014)
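The header layout described above can be sketched by emitting the first six XDR words of an NFSv2 call message; the xid is caller-chosen, and the credential and verifier fields that follow these words are omitted here (function name ours):

```c
#include <stdint.h>
#include <stddef.h>

/* Emit the leading words of an ONC RPC v2 call message for NFSv2:
 * xid, msg_type (0 = CALL), rpcvers (2), program (100003 for NFS),
 * version (2), procedure. Credential and verifier fields would follow.
 * Each word is big-endian, per XDR. Returns the bytes written. */
size_t nfs2_call_header(unsigned char *out, uint32_t xid, uint32_t proc) {
    uint32_t words[6] = { xid, 0, 2, 100003, 2, proc };
    for (int i = 0; i < 6; i++) {
        out[4*i]     = words[i] >> 24;
        out[4*i + 1] = words[i] >> 16;
        out[4*i + 2] = words[i] >> 8;
        out[4*i + 3] = words[i];
    }
    return 24;
}
```

A read request, for instance, carries procedure number 6 (NFSPROC_READ) in the final word.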
Integration extends beyond core file operations to auxiliary protocols that also rely on Sun RPC. The Mount protocol, assigned program number 100005 and version 3, handles the mounting of remote [file](/page/File) systems by exporting lists of available shares and establishing NFS handles—opaque 32-byte identifiers representing files or directories. Similarly, the Network Lock Manager (NLM), using program number 100021 and version 4, provides [file](/page/File) locking services through procedures like NLMPROC_LOCK for acquiring locks and NLMPROC_TEST for checking lock [status](/page/Status), addressing NFS's initial lack of built-in locking to prevent concurrent [access](/page/Access) issues. These components communicate via the rpcbind service (formerly Port Mapper, program 100000) to dynamically discover service ports, ensuring flexible deployment without fixed port assignments. This modular RPC-based architecture facilitated NFS's rapid adoption, as it allowed developers to use tools like rpcgen to generate client and server stubs from interface definitions.[](https://datatracker.ietf.org/doc/html/rfc1094)
NFS version 3 (NFSv3), standardized in 1995, retained the core Sun RPC integration while introducing enhancements for scalability and robustness. It maintains the same program number (100003) but uses version 3, expanding to 22 [procedure](/page/Procedure)s, including new ones like [ACCESS](/page/Access) (procedure 4) for permission checks without attribute fetches and READDIRPLUS (procedure 17) for efficient directory listings with file handles and attributes in a single call. NFSv3 supports [TCP](/page/TCP) alongside [UDP](/page/UDP), improving handling of large transfers (up to 64 KB per READ/WRITE) and adding a COMMIT procedure (21) for explicit [data synchronization](/page/Data_synchronization).[](https://datatracker.ietf.org/doc/html/rfc1813) Authentication remains RPC-based, supporting mechanisms like AUTH_UNIX for UID/GID mapping and AUTH_DES for secure time-stamped credentials. This evolution preserved [backward compatibility](/page/Backward_compatibility) with Sun RPC while addressing limitations in NFSv2, such as [file size](/page/File_size) caps and [error](/page/Error) [reporting](/page/Reporting), making NFSv3 the dominant version for [enterprise](/page/Enterprise) use through the 1990s and early 2000s.
The design rationale for using Sun RPC in NFS emphasized simplicity and portability, as RPC provided a procedure-oriented [paradigm](/page/Paradigm) that mirrored local UNIX system calls, easing development for [kernel](/page/Kernel) and user-space implementations. By building on RPC's transport independence, NFS could operate over various [protocols](/page/Protocol) without redesign, though early versions prioritized low-latency [LAN](/page/Lan) environments. Seminal work by Sun engineers highlighted RPC's role in streamlining protocol maintenance, with NFS procedures defined in a concise RPC/XDR [specification language](/page/Specification_language) that automated code generation. This integration not only accelerated NFS deployment on [SunOS](/page/SunOS) but also influenced subsequent distributed file systems by demonstrating RPC's efficacy for high-throughput, stateless services.
### Other Network Services
Sun RPC underpins several network services in the Open Network Computing (ONC) ecosystem beyond the primary [Network File System](/page/Network_File_System) (NFS), enabling distributed system administration, monitoring, and auxiliary functions through standardized program numbers registered with the portmapper or rpcbind. These services leverage RPC's procedure call semantics to facilitate remote interactions, often running as daemons that register their availability for client access.[](https://www.iana.org/assignments/rpc-program-numbers/rpc-program-numbers.xhtml)[](https://datatracker.ietf.org/doc/html/rfc5531)
The [Network Information Service](/page/NIS) (NIS), formerly known as [Yellow Pages](/page/Yellow_pages), operates as an [RPC-based directory service](/page/Directory_service) using program number 100004 for the ypserv daemon, which manages replicated, read-only databases for network-wide information such as user accounts, hostnames, and network maps.[](https://www.iana.org/assignments/rpc-program-numbers/rpc-program-numbers.xhtml) Clients employ the ypbind daemon, associated with program 100007, to dynamically [bind](/page/BIND) to available NIS servers and query these databases via RPC calls, supporting centralized administration in [Unix-like](/page/Unix-like) environments.[](https://www.iana.org/assignments/rpc-program-numbers/rpc-program-numbers.xhtml) This setup allows seamless distribution of configuration data across heterogeneous systems without [manual](/page/Manual) synchronization.[](https://bitsavers.org/pdf/sun/sunos/4.1/800-3850-10A_Network_Programming_Guide_199003.pdf)
Performance monitoring is provided by the rstat service, which uses program number 100001 and exposes procedures for retrieving kernel statistics such as load averages and system uptime from a remote host.[](https://www.iana.org/assignments/rpc-program-numbers/rpc-program-numbers.xhtml) The rstatd daemon implements this service, enabling remote hosts to poll performance metrics for diagnostics.[](https://bitsavers.org/pdf/sun/sunos/4.1/800-3850-10A_Network_Programming_Guide_199003.pdf) A related facility, the network status monitor (program number 100024, implemented by rpc.statd), tracks host reboots so that cooperating services, most notably the lock manager, can recover state after a crash. Together these RPC services provide lightweight, real-time access to system state without requiring full system access.[](https://www.iana.org/assignments/rpc-program-numbers/rpc-program-numbers.xhtml)
The Network Lock Manager (NLM) provides distributed file locking via program number 100021, essential for coordinating access in shared NFS environments through the lockd daemon.[](https://www.iana.org/assignments/rpc-program-numbers/rpc-program-numbers.xhtml) Key procedures include NLM_LOCK for acquiring locks, NLM_UNLOCK for releasing them, NLM_TEST for checking lock status, and NLM_CANCEL for asynchronous cancellation, ensuring atomicity and preventing race conditions across networked file systems.[](https://bitsavers.org/pdf/sun/sunos/4.1/800-3850-10A_Network_Programming_Guide_199003.pdf) This service integrates closely with NFS operations to maintain data consistency.[](https://www.iana.org/assignments/rpc-program-numbers/rpc-program-numbers.xhtml)
Additional examples of Sun RPC-based services include the Bootparams protocol, assigned program number 100026 and implemented by the rpc.bootparamd or bootparamd daemon, which supplies essential parameters like root path and swap server details to diskless clients during network booting.[](https://www.iana.org/assignments/rpc-program-numbers/rpc-program-numbers.xhtml) Similarly, the Spray service, under program number 100012 via the sprayd daemon, functions as a network load-testing tool by emitting bursts of RPC packets to evaluate throughput and latency under stress.[](https://www.iana.org/assignments/rpc-program-numbers/rpc-program-numbers.xhtml) These utilities highlight RPC's versatility in supporting bootstrapping and performance diagnostics in ONC networks.[](https://bitsavers.org/pdf/sun/sunos/4.1/800-3850-10A_Network_Programming_Guide_199003.pdf)
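All of these services share the same call-message layout on the wire. As a concrete illustration, the following Python sketch hand-encodes an ONC RPC version 2 call to the portmapper's GETPORT procedure, asking which UDP port the NFS program (100003) listens on. The field layout follows RFC 5531; the transaction ID and the helper names (`xdr_uint`, `rpc_call`) are illustrative, not part of any standard library.

```python
import struct

def xdr_uint(value):
    # XDR encodes unsigned integers as 4-byte big-endian words.
    return struct.pack(">I", value)

def rpc_call(xid, prog, vers, proc, args=b""):
    """Build an ONC RPC v2 call message (RFC 5531 layout)."""
    msg = xdr_uint(xid)                # transaction ID, chosen by the client
    msg += xdr_uint(0)                 # msg_type = CALL
    msg += xdr_uint(2)                 # RPC protocol version 2
    msg += xdr_uint(prog)              # program number
    msg += xdr_uint(vers)              # program version
    msg += xdr_uint(proc)              # procedure number
    msg += xdr_uint(0) + xdr_uint(0)   # credential: AUTH_NONE, zero-length body
    msg += xdr_uint(0) + xdr_uint(0)   # verifier:   AUTH_NONE, zero-length body
    return msg + args

# PMAPPROC_GETPORT (procedure 3) of the portmapper (program 100000, version 2)
# asks for the port of NFS (program 100003, version 3) over UDP (protocol 17).
getport_args = xdr_uint(100003) + xdr_uint(3) + xdr_uint(17) + xdr_uint(0)
packet = rpc_call(xid=0x12345678, prog=100000, vers=2, proc=3, args=getport_args)

print(len(packet))  # 40-byte call header + 16-byte argument body = 56
```

Sending `packet` as a UDP datagram to port 111 of an rpcbind host would elicit a reply whose final XDR word is the requested port number, which is exactly how clients locate services without hardcoded addresses.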
## Legacy and Modern Relevance
### Variants and Extensions
Sun RPC, originally bound to specific transports such as [UDP](/page/UDP) and TCP, has seen several variants and extensions to [support](/page/Support) diverse environments, enhanced [security](/page/Security), and improved [performance](/page/Performance) over connection-oriented protocols. These modifications maintain [compatibility](/page/Compatibility) with the core ONC RPC version 2 [protocol](/page/Protocol) while addressing specific needs such as multi-transport [support](/page/Support) and cryptographic protections.[](https://datatracker.ietf.org/doc/html/rfc5531)
TI-RPC, or Transport Independent RPC, emerged as a key extension developed by [Sun Microsystems](/page/Sun_Microsystems) as part of [UNIX System V](/page/UNIX_System_V) Release 4 (SVR4) and later integrated into [UnixWare](/page/UnixWare) systems. Unlike the original transport-specific RPC (TS-RPC), which tied applications to [UDP](/page/UDP) or [TCP](/page/TCP), TI-RPC abstracts the transport layer, allowing applications to operate over multiple protocols including [UDP](/page/UDP), [TCP](/page/TCP), and later [IPv6](/page/IPv6) stacks without code modifications. This independence is achieved through the NETPATH environment variable and the netconfig database (/etc/netconfig), which together drive dynamic transport selection from [network](/page/Network) configuration files. TI-RPC's [IPv6](/page/IPv6) [support](/page/Support), introduced in [Solaris](/page/Solaris) implementations, ensures RPC applications can leverage dual-stack environments, with [TCP](/page/TCP) or [UDP](/page/UDP) using [IPv6](/page/IPv6) addresses seamlessly.[](https://docs.oracle.com/cd/E18752_01/html/816-1435/oncintro-2.html)[](https://docs.oracle.com/cd/E18752_01/html/816-1435/portrpc-99.html)
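The transport-selection behavior TI-RPC derives from NETPATH and the netconfig database can be sketched in a few lines. The entries and the `select_transports` helper below are illustrative stand-ins, not the actual libtirpc API.

```python
import os

# A toy stand-in for /etc/netconfig: network ID -> (semantics, protocol family).
NETCONFIG = {
    "udp":  ("clts", "inet"),       # connectionless transport
    "tcp":  ("cots_ord", "inet"),   # connection-oriented, ordered
    "udp6": ("clts", "inet6"),
    "tcp6": ("cots_ord", "inet6"),
}

def select_transports(netpath=None):
    """Return transports in NETPATH order, mimicking TI-RPC's netpath loop;
    fall back to the full netconfig order when NETPATH is unset or empty."""
    if netpath is None:
        netpath = os.environ.get("NETPATH", "")
    ids = [n for n in netpath.split(":") if n] or list(NETCONFIG)
    return [(n, NETCONFIG[n]) for n in ids if n in NETCONFIG]

# A client run with NETPATH="tcp6:tcp" prefers TCP over IPv6, then IPv4,
# without recompiling -- the essence of transport independence.
print(select_transports("tcp6:tcp"))
```

The same binary can thus be steered onto new transports purely by configuration, which is how TI-RPC applications picked up IPv6 without source changes.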
For security enhancements, RPCSEC_GSS provides a framework for integrating the Generic Security Service API (GSS-API), including [Kerberos](/page/Kerberos) v5, as a credential mechanism atop ONC RPC. Specified in RFC 2203, this [protocol](/page/Protocol) defines a layered security model where RPC messages carry GSS-API tokens for authentication, integrity, and confidentiality without altering the base RPC header. It supports three service types—none, integrity, and privacy—and enables secure context establishment between client and server, making it suitable for distributed systems requiring strong [mutual authentication](/page/Mutual_authentication). RPCSEC_GSS has been widely adopted in NFS, where NFSv4 makes its support mandatory, for [Kerberos](/page/Kerberos)-based access control.[](https://www.rfc-editor.org/rfc/rfc2203.html)
Extensions to RPC over [TCP](/page/TCP) address limitations in the original record-marking scheme, which prepends a 4-byte marker to each message fragment (the high-order bit flags the final fragment; the remaining 31 bits give the fragment length) but can suffer from [head-of-line blocking](/page/Head-of-line_blocking) and inefficiency in high-latency networks. [RFC](/page/RFC) 5666 introduces RPC-over-RDMA as a transport extension, replacing traditional [TCP](/page/TCP) framing with a more efficient header that includes chunk lists for [direct memory access](/page/Direct_memory_access), supporting inline data, RDMA reads, and writes within a single RPC [transaction](/page/Transaction). This framing improvement reduces CPU overhead and enables [zero-copy](/page/Zero-copy) transfers, though it requires RDMA-capable hardware and is primarily used in [high-performance computing](/page/High-performance_computing) environments like NFS over [InfiniBand](/page/InfiniBand).[](https://datatracker.ietf.org/doc/html/rfc5666)
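The record-marking scheme that RPC-over-RDMA replaces is simple enough to sketch directly: each fragment of a TCP-carried RPC message is preceded by a 4-byte big-endian word whose high bit marks the last fragment and whose low 31 bits give the fragment length. The helper names below are illustrative.

```python
import struct

LAST_FRAGMENT = 0x80000000  # high bit of the 4-byte record mark

def frame_record(message, fragment_size=4096):
    """Split an RPC message into record-marked fragments for a TCP stream."""
    out = b""
    for off in range(0, len(message), fragment_size):
        chunk = message[off:off + fragment_size]
        header = len(chunk)
        if off + fragment_size >= len(message):
            header |= LAST_FRAGMENT        # flag the final fragment
        out += struct.pack(">I", header) + chunk
    return out

def read_record(stream):
    """Reassemble one record from a byte stream; returns (record, remainder)."""
    record = b""
    while True:
        (header,) = struct.unpack(">I", stream[:4])
        length = header & 0x7FFFFFFF       # strip the last-fragment bit
        record += stream[4:4 + length]
        stream = stream[4 + length:]
        if header & LAST_FRAGMENT:
            return record, stream

msg = b"x" * 10000                         # a message spanning three fragments
framed = frame_record(msg)
assert read_record(framed) == (msg, b"")
```

Because the receiver cannot process a fragment until its length prefix and full payload arrive in order, one slow fragment stalls everything behind it on the connection, which is the head-of-line blocking the RDMA framing avoids.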
A related but distinct lineage is [DCE/RPC](/page/DCE/RPC), the remote procedure call standard of the [Distributed Computing Environment](/page/Distributed_Computing_Environment) (DCE). Derived from Apollo's Network Computing System rather than from Sun's protocol, DCE/RPC competed with ONC RPC while introducing significant modifications for enterprise interoperability. It replaces ONC's 32-bit program numbers with 128-bit UUIDs for interface identification, allowing globally unique service definitions without central registration. It also adds support for [pipes](/page/PIPES), enabling streaming data transfer and asynchronous operations not native to ONC RPC, such as partial results and multi-part responses. Microsoft's implementation, known as MS-RPC, extends [DCE/RPC](/page/DCE/RPC) further by integrating named pipes over [SMB](/page/SMB) for local and remote communication, facilitating Windows-specific features like [COM](/page/Com)/DCOM integration while maintaining wire compatibility with DCE tools.[](https://nvlpubs.nist.gov/nistpubs/Legacy/IR/nistir5277.pdf)[](https://pubs.opengroup.org/onlinepubs/9629399/chap2.htm)[](https://learn.microsoft.com/en-us/windows/win32/rpc/using-asynchronous-rpc-with-dce-pipes)
An early security variant, Secure RPC using the AUTH_DES flavor, was introduced by Sun in the [1980s](/page/1980s) to provide encryption beyond the basic AUTH_SYS mechanism. AUTH_DES employs [Diffie-Hellman key exchange](/page/Key_exchange) for session keys and [DES](/page/DES) for encrypting credentials and data in RPC messages, authenticating both hosts and users via public keys stored in a secure database. However, due to [DES](/page/DES)'s 56-bit key length vulnerability to brute-force attacks, AUTH_DES has been deprecated in modern systems, with recommendations to migrate to stronger mechanisms like RPCSEC_GSS.[](https://docs.oracle.com/cd/E19253-01/816-1435/6m7rrfn86/index.html)[](https://docs.oracle.com/cd/E26505_01/html/E27224/auth-2.html)[](https://datatracker.ietf.org/doc/html/rfc5531)
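The key agreement at the heart of AUTH_DES can be illustrated with textbook Diffie-Hellman. The tiny modulus below is purely for demonstration (Secure RPC used a fixed 192-bit modulus, itself now far too small to be safe); the variable names are illustrative.

```python
# Toy Diffie-Hellman key agreement in the style of Secure RPC's AUTH_DES.
# Each party combines its own secret with the peer's public key to derive a
# common value, from which AUTH_DES took a 56-bit DES session key.
P = 0xFFFFFFFB  # small illustrative prime (2**32 - 5); NOT a real parameter
G = 3           # generator (Secure RPC also used base 3)

def public_key(secret):
    """Public key published in the network-wide key database."""
    return pow(G, secret, P)

def common_key(my_secret, their_public):
    """Shared value: (their_public ** my_secret) mod P."""
    return pow(their_public, my_secret, P)

alice_secret, bob_secret = 123456789, 987654321
alice_pub = public_key(alice_secret)
bob_pub = public_key(bob_secret)

# Both sides arrive at the same shared value without ever sending a secret;
# AUTH_DES used a slice of this value as the DES key protecting credentials.
assert common_key(alice_secret, bob_pub) == common_key(bob_secret, alice_pub)
```

The scheme authenticates without transmitting keys, but its security rests entirely on the size of P and the DES key, which is why AUTH_DES is deprecated in favor of RPCSEC_GSS.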
### Current Adoption and Alternatives
Sun RPC, also known as ONC RPC, remains in use in modern [Unix-like](/page/Unix-like) systems primarily for [legacy](/page/Legacy) support in network services such as the Network File System (NFS). Since the release of [glibc](/page/Glibc) 2.32 in 2020, [Linux](/page/Linux) distributions have dropped glibc's built-in Sun RPC support in favor of the libtirpc library, which provides TI-RPC and maintains ONC RPC compatibility. On [Linux](/page/Linux), the rpcbind package supplies the RPC port mapping required for ONC RPC operations: NFSv3 relies on rpcbind to locate service ports, while NFSv4 listens on a fixed port (2049) and does not require it, though both versions use ONC RPC semantics.[](https://www.gnu.org/software/libc/news+announce/2020-08-06-glibc-2.32.html)[](https://fedoraproject.org/wiki/Changes/SunRPCRemoval)[](https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/storage_administration_guide/ch-nfs) Similarly, [Oracle Solaris](/page/Oracle_Solaris) and related systems continue to integrate ONC RPC for NFS implementations, ensuring [backward compatibility](/page/backward_compatibility) in enterprise environments where older file-sharing setups persist.[](https://docs.oracle.com/cd/E36784_01/html/E36862/oncintro-5.html) NFSv3 and NFSv4 both build upon ONC RPC as their underlying transport mechanism, with NFSv4 using compound operations over the same RPC layer defined in RFC 5531.[](https://datatracker.ietf.org/doc/html/rfc7530)
Despite this ongoing role, ONC RPC adoption has declined in new deployments due to longstanding [security](/page/Security) vulnerabilities, particularly in the portmapper service, which has been targeted by exploits allowing remote code execution and unauthorized access.[](https://www.fortra.com/resources/vulnerabilities/rpc-portmapper)[](https://www.skywaywest.com/2021/01/what-is-an-open-portmapper-vulnerability/) The rise of web-oriented architectures has further contributed to this shift, as HTTP/[REST](/page/REST) APIs offer simpler integration with internet-scale services, browser compatibility, and standardized [security](/page/Security) models that address RPC's limitations in distributed, heterogeneous environments.[](https://cloud.google.com/blog/products/application-development/rest-vs-rpc-what-problems-are-you-trying-to-solve-with-your-apis)
Contemporary alternatives to ONC RPC emphasize performance, language interoperability, and modern transport protocols. [gRPC](/page/GRPC), developed by [Google](/page/Google), provides a high-performance RPC framework over [HTTP/2](/page/HTTP/2) with [Protocol Buffers](/page/Protocol_Buffers) for serialization, supporting streaming and bidirectional communication while evolving the core RPC paradigm without direct reliance on ONC mechanisms.[](https://grpc.io/docs/what-is-grpc/core-concepts/) [Apache Thrift](/page/Apache_Thrift), originally created by [Facebook](/page/Facebook), serves as another cross-language RPC system with binary serialization and support for multiple transports, enabling scalable service development as an evolution beyond traditional ONC RPC constraints.[](https://engineering.fb.com/2014/02/20/open-source/under-the-hood-building-and-open-sourcing-fbthrift/)
Migration from ONC RPC often involves refactoring to these frameworks or adapting legacy services for cloud-native environments. For instance, containerized NFS deployments in platforms like Oracle Cloud Infrastructure and [Red Hat](/page/Red_Hat) [OpenShift](/page/OpenShift) allow ONC RPC-based file systems to operate within [Docker](/page/Docker) and [Kubernetes](/page/Kubernetes), facilitating hybrid transitions while maintaining compatibility.[](https://blogs.oracle.com/cloud-infrastructure/post/mounting-oci-file-storage-and-other-nfs-shares-on-docker-containers)[](https://docs.redhat.com/en/documentation/openshift_container_platform/4.8/html/storage/configuring-persistent-storage) Overall, while ONC RPC endures in established Unix ecosystems, its role is increasingly supplanted by more secure and flexible options in [greenfield](/page/Greenfield) projects.