Public-key cryptography
Public-key cryptography, also known as asymmetric cryptography, is a class of cryptographic algorithms that utilize a pair of related keys—a public key, which can be openly shared, and a private key, which must remain secret—to perform encryption, decryption, digital signing, and verification operations. Anyone can use the public key to encrypt messages for the key owner or to verify the owner's signatures, while only the private key holder can decrypt those messages or produce valid signatures; the relationship between the keys is designed such that deriving the private key from the public key is computationally infeasible. This approach addresses the key distribution challenges of symmetric cryptography by enabling secure communication over untrusted networks without requiring parties to exchange secrets in advance.[1][2][3]

The foundational ideas of public-key cryptography emerged in the mid-1970s amid growing concerns over secure data transmission in computer networks. In their 1976 paper "New Directions in Cryptography," Whitfield Diffie and Martin E. Hellman introduced the concept of asymmetric key pairs, proposing a system in which encryption keys could be publicly listed in directories, allowing any user to send encrypted messages to another without prior coordination or secure channels for key exchange. This innovation built on earlier theoretical work in information theory but shifted the focus toward practical, computationally secure systems resistant to eavesdropping. In 1977, Ronald L. Rivest, Adi Shamir, and Leonard M. Adleman devised the RSA algorithm, the first practical implementation of a public-key cryptosystem, relying on the mathematical difficulty of factoring the product of two large prime numbers to ensure security.[1][4]

Public-key cryptography underpins essential security mechanisms in digital systems, including confidentiality via encryption of sensitive data, authentication to verify identities, integrity to detect tampering, and non-repudiation to prevent denial of actions through digital signatures. It facilitates key establishment protocols, such as Diffie-Hellman key agreement, for deriving shared symmetric keys over public channels, and supports public key infrastructures (PKI) that issue and manage digital certificates binding public keys to verified entities via trusted certification authorities. Widely deployed in protocols such as Transport Layer Security (TLS) for web browsing, Secure/Multipurpose Internet Mail Extensions (S/MIME) for email, and virtual private networks (VPNs), it enables scalable secure transactions in e-commerce, government services, and enterprise networks, with standards like RSA and elliptic curve cryptography (ECC) providing varying levels of security based on key sizes (e.g., 2048-bit RSA for at least 112 bits of security). As computational threats evolve, including those from quantum computing, ongoing standardization efforts emphasize robust key management and migration to post-quantum alternatives while maintaining compatibility with existing infrastructures.[5][3]

Fundamentals
Definition and Principles
Public-key cryptography, also known as asymmetric cryptography, is a cryptographic system that utilizes a pair of related keys—a public key and a private key—to secure communications and data. Unlike symmetric cryptography, which relies on a single shared secret key for both encryption and decryption, public-key cryptography employs distinct keys for these operations: the public key is freely distributed and used for encryption or signature verification, while the private key is kept secret and used for decryption or signature generation. This asymmetry ensures that the private key cannot be feasibly derived from the public key, providing a foundation for secure interactions without the need for prior secret key exchange.[6][7]

The core principles of public-key cryptography revolve around achieving key security objectives through the key pair mechanism. Confidentiality is ensured by encrypting messages with the recipient's public key, allowing only the private key holder to decrypt and access the plaintext. Integrity and authentication are supported via digital signatures, where the sender signs the message with their private key, enabling the recipient to verify authenticity and unaltered content using the sender's public key. Non-repudiation is also provided, as a valid signature binds the sender irrevocably to the message, preventing denial of origin. These principles rely on the computational difficulty of inverting certain mathematical functions without the private key, often referred to as trapdoor one-way functions.[6][7]

Developed in the 1970s to address the key distribution challenges inherent in symmetric systems—where securely sharing a single key over insecure channels is problematic—public-key cryptography revolutionized secure communication by enabling key exchange via public directories. In a basic workflow, the sender obtains the recipient's public key, encrypts the plaintext message to produce ciphertext, and transmits it over an open channel; the recipient then applies their private key to decrypt the message, ensuring only they can recover the original content. This approach underpins modern secure protocols without requiring trusted intermediaries for initial key setup.[7][6]
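This basic workflow can be made concrete with a brief sketch. The example below uses the Python cryptography package with RSA and OAEP padding purely as one illustrative choice; the library, algorithm, and parameters are assumptions rather than part of the definitions above.

```python
# Minimal sketch of the basic public-key workflow: the recipient publishes a
# public key, the sender encrypts with it, and only the recipient's private
# key can decrypt. Illustrative choice of library (Python "cryptography")
# and parameters (RSA-2048, OAEP with SHA-256).
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Recipient: generate a key pair and share only the public key.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# Sender: encrypt the plaintext with the recipient's public key.
plaintext = b"meet at noon"
ciphertext = public_key.encrypt(plaintext, oaep)

# Recipient: only the holder of the private key can recover the message.
recovered = private_key.decrypt(ciphertext, oaep)
assert recovered == plaintext
```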
Key Components
Public and private keys form the core of public-key cryptography, generated as a mathematically related pair through specialized algorithms. The public key is designed for open distribution to enable secure communications with multiple parties, while the private key must be kept confidential by its owner to maintain system security. This asymmetry allows anyone to encrypt messages or verify signatures using the public key, but only the private key holder can decrypt or produce valid signatures.[8][9]

Certificates play a crucial role in associating public keys with specific identities, preventing impersonation and enabling trust in distributed systems. Issued by trusted certificate authorities (CAs), a public key certificate contains the public key, the holder's identity details, and a digital signature from the CA verifying the binding. This structure, as defined in standards like X.509, allows verification of key authenticity without direct knowledge of the private key.[10][11]

Key rings provide a practical mechanism for managing multiple keys, particularly in decentralized environments. In systems like Pretty Good Privacy (PGP), a public key ring stores the public keys of other users for encryption and verification, while a separate private key ring holds the user's own private keys, protected by passphrases. These structures facilitate efficient key lookup and usage without compromising secrecy.[12][13]

Different public-key algorithms exhibit varying properties in terms of key size, computational demands, and achievable security levels, influencing their suitability for applications. The table below compares representative algorithms at equivalent security strength, based on NIST guidelines for key lengths providing at least 128 bits of security against classical attacks. Computational costs are relative, with elliptic curve cryptography (ECC) generally requiring fewer resources due to smaller keys and optimized operations compared to RSA or DSA.[3][14] A short key-generation sketch comparing these key sizes follows the table.

| Algorithm | Key Size (bits) | Relative Computation Cost | Security Level (bits) |
|---|---|---|---|
| RSA | 3072 | High (modular exponentiation intensive) | 128 |
| ECC | 256 | Low (efficient scalar multiplication) | 128 |
| DSA | 3072 (modulus) | Medium (discrete log operations) | 128 |
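As a rough illustration of the size differences in the table, the following sketch (again assuming the Python cryptography package as the implementation) generates a 3072-bit RSA key and a P-256 elliptic curve key and compares the sizes of their encoded public keys; the byte counts in the comments are approximate.

```python
# Sketch comparing key material at roughly 128-bit security, as in the table
# above: a 3072-bit RSA key versus a 256-bit elliptic-curve (P-256) key.
from cryptography.hazmat.primitives.asymmetric import rsa, ec
from cryptography.hazmat.primitives import serialization

rsa_key = rsa.generate_private_key(public_exponent=65537, key_size=3072)
ecc_key = ec.generate_private_key(ec.SECP256R1())

def der_len(key):
    # Length of the DER-encoded SubjectPublicKeyInfo structure.
    return len(key.public_key().public_bytes(
        encoding=serialization.Encoding.DER,
        format=serialization.PublicFormat.SubjectPublicKeyInfo))

print("RSA-3072 public key:", der_len(rsa_key), "bytes")  # roughly 420 bytes
print("P-256 public key:   ", der_len(ecc_key), "bytes")  # roughly 90 bytes
```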
Mathematical Foundations
Asymmetric Encryption Basics
Asymmetric encryption, a cornerstone of public-key cryptography, employs a pair of mathematically related keys: a publicly available encryption key that anyone can use to encipher a message, and a secret private key held only by the intended recipient for decryption. This approach allows secure communication over insecure channels without the need for prior secret key exchange, fundamentally differing from symmetric methods by decoupling the encryption and decryption processes. In schemes such as RSA, the underlying mathematics is rooted in modular arithmetic, where computations are confined to residues modulo a large composite integer n, enabling efficient operations while obscuring the original plaintext from anyone who lacks the private key.

At the heart of asymmetric encryption lie one-way functions: operations that are straightforward and efficient to compute in the forward direction—for instance, transforming an input x to an output y = f(x)—but computationally infeasible to reverse, meaning that finding x given y requires prohibitive resources unless augmented by a hidden "trapdoor" parameter known only to the key holder. These functions provide the asymmetry: the public key enables easy forward computation for encryption, while inversion demands the private key's trapdoor information, rendering decryption secure against adversaries.

A basic representation of the encryption process uses modular exponentiation: the ciphertext c is generated as c \equiv m^e \pmod{n}, where m is the plaintext message, e is the public exponent component of the public key, and n is the modulus. Decryption reverses this via the private exponent d, yielding m \equiv c^d \pmod{n}, with the relationship between e and d tied to the structure of n.

The security of asymmetric encryption schemes relies on well-established computational hardness assumptions, such as the integer factorization problem, where decomposing a large composite n = p \cdot q (with p and q large primes) into its prime factors is believed to be intractable for sufficiently large values using current algorithms and computing power.
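A toy numeric example, using the classic small parameters p = 61 and q = 53, illustrates these equations; real moduli are thousands of bits long, so the sketch below is purely illustrative and not secure.

```python
# Toy illustration of the modular-exponentiation equations above, with
# deliberately tiny primes. Requires Python 3.8+ for pow(e, -1, phi).
p, q = 61, 53
n = p * q                  # modulus: 3233
phi = (p - 1) * (q - 1)    # Euler's totient: 3120
e = 17                     # public exponent, coprime to phi
d = pow(e, -1, phi)        # private exponent: e*d = 1 (mod phi) -> 2753

m = 65                     # plaintext, encoded as an integer < n
c = pow(m, e, n)           # encryption: c = m^e mod n -> 2790
assert pow(c, d, n) == m   # decryption: c^d mod n recovers m
```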
Trapdoor One-Way Functions
Trapdoor one-way functions form the foundational mathematical primitive enabling public-key cryptography by providing a mechanism for reversible computation that is feasible only with privileged information. Introduced by Diffie and Hellman, these functions are defined such that forward computation is efficient for anyone, but inversion—recovering the input from the output—is computationally intractable without a secret "trapdoor" parameter, which serves as the private key.[1] With the trapdoor, inversion becomes efficient, allowing authorized parties to decrypt or verify messages while maintaining security against adversaries. This asymmetry underpins the feasibility of public-key systems, where the public key enables easy forward evaluation, but the private key (trapdoor) is required for reversal.

Trapdoor functions are typically categorized into permutation-based and function-based types, depending on whether they preserve one-to-one mappings. Permutation-based trapdoor functions, such as those underlying the RSA cryptosystem, involve bijective mappings that are easy to compute forward but hard to invert without knowledge of the trapdoor, often relying on the difficulty of factoring large composite numbers.[4] For instance, in RSA, the public operation raises a message to a power modulo a composite modulus n = pq, while inversion uses the private exponent derived from the prime factors p and q. In contrast, function-based examples like the Rabin cryptosystem employ quadratic residues modulo n: forward computation squares the input modulo n, and inversion requires extracting square roots, which is feasible only with the factorization of n.[17] These examples illustrate how trapdoor functions can be constructed from number-theoretic problems, ensuring that the public key reveals no information about the inversion process.

The inversion process in trapdoor functions can be expressed as recovering the original message m from the ciphertext c using the private key: m = \text{private\_key}(c). This operation leverages the secret trapdoor, such as the prime factors in RSA or Rabin, to efficiently compute the inverse without solving the underlying hard problem directly.[4][17]

The security of trapdoor one-way functions is established through reductions to well-studied hard problems in computational number theory, ensuring that breaking the function is at least as difficult as solving these problems. For factoring-based schemes, inverting the Rabin function is provably equivalent to factoring the modulus n, while for RSA the best known general attack is to factor n; both tasks are believed to be intractable for large semiprimes on classical computers.[4][17] Similarly, other constructions based on the discrete logarithm problem in finite fields or elliptic curve groups reduce inversion to computing discrete logarithms, so the system's hardness inherits from these foundational assumptions. This reductionist approach allows cryptographers to analyze and trust public-key schemes by linking their security to long-standing open problems.
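A toy sketch, using deliberately tiny primes, may make the trapdoor property concrete for the Rabin construction: squaring modulo n is easy for anyone, but recovering square roots is practical only with the factors p and q (chosen here with p ≡ q ≡ 3 (mod 4) so the roots have a simple closed form). The numbers are illustrative only and far too small to be secure.

```python
# Toy Rabin trapdoor function. Forward direction: squaring modulo n (public).
# Inversion: square roots modulo n, feasible only with the factors p, q.
# Requires Python 3.8+ for pow(x, -1, m) modular inverses.
p, q = 7, 11          # secret trapdoor: the prime factors
n = p * q             # public modulus

def forward(m):
    # Public, easy direction: squaring modulo n.
    return pow(m, 2, n)

def invert(c):
    # Private direction: the four square roots of c, computed with p and q.
    mp = pow(c, (p + 1) // 4, p)   # root mod p (valid since p = 3 mod 4)
    mq = pow(c, (q + 1) // 4, q)   # root mod q (valid since q = 3 mod 4)
    yp = pow(p, -1, q)             # p^{-1} mod q
    yq = pow(q, -1, p)             # q^{-1} mod p
    roots = set()
    for sp in (mp, p - mp):        # combine all sign choices via CRT
        for sq in (mq, q - mq):
            roots.add((sp * q * yq + sq * p * yp) % n)
    return sorted(roots)

c = forward(20)            # anyone can compute the forward direction
print(c, invert(c))        # only the factor holder recovers the roots (20 among them)
```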
Core Operations
Key Generation and Distribution
In public-key cryptography, key generation produces a mathematically linked pair consisting of a public key, which can be freely shared, and a private key, which must remain secret. Methods vary by algorithm; for systems like RSA based on integer factorization, the process generally involves selecting large, randomly chosen prime numbers as foundational parameters, computing a modulus from their product, and deriving public and private exponents that enable asymmetric operations based on the underlying trapdoor one-way function.[18] In general, the process uses high-entropy random bits from approved sources to select parameters suited to the algorithm's underlying hard problem (e.g., elliptic curve parameters for ECC), and must occur within a secure environment, often using approved cryptographic modules, to ensure the keys meet the required security strength.[19]

High entropy is essential during key generation to produce unpredictable values, preventing attackers from guessing the private key or deriving it from the public one. Random bit strings are sourced from approved random bit generators (RBGs), such as those compliant with NIST standards, which must provide at least as many bits of entropy as the target security level—for instance, at least 128 bits of entropy for a 128-bit security level.[19] Insufficient entropy, often from flawed or predictable sources, can render keys vulnerable; a notable example is the Debian OpenSSL vulnerability (CVE-2008-0166), introduced in 2006 and disclosed in 2008, in which a packaging change left the process ID as essentially the only source of randomness, so that only a few tens of thousands of distinct keys could be generated per architecture and key type, enabling widespread compromises.[20]

Secure distribution focuses on disseminating the public key while protecting the private key's secrecy. Methods include direct exchange through trusted channels like in-person handoff or encrypted email, publication in public directories or key servers for open retrieval, or establishment via an initial secure channel to bootstrap trust.[21] To mitigate risks like man-in-the-middle attacks, public keys are often accompanied by digital signatures (for example, in certificates) that attest to their authenticity. Private keys are never distributed and must be generated and stored with protections against extraction, such as hardware security modules. Common pitfalls in distribution, such as accepting unverified public keys, can undermine the system, emphasizing the need for integrity checks during sharing.[19]
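A minimal key-generation sketch is shown below, assuming the Python cryptography package, which draws its randomness from the operating system's cryptographically secure generator; the encoding choices and passphrase handling are illustrative only.

```python
# Sketch of key generation plus preparing keys for distribution and storage.
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.hazmat.primitives import serialization

key = rsa.generate_private_key(public_exponent=65537, key_size=3072)

# Public key: safe to publish, e.g., in a directory or certificate request.
public_pem = key.public_key().public_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PublicFormat.SubjectPublicKeyInfo,
)

# Private key: never distributed; here it is at least encrypted at rest
# with a passphrase before being written anywhere.
private_pem = key.private_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PrivateFormat.PKCS8,
    encryption_algorithm=serialization.BestAvailableEncryption(b"long passphrase"),
)

print(public_pem.decode().splitlines()[0])  # -----BEGIN PUBLIC KEY-----
```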
Encryption and Decryption Processes
In public-key cryptography, the encryption process begins with preparing the plaintext message for secure transmission using methods specific to the algorithm. The message is first converted into a numerical representation and padded using a scheme such as Optimal Asymmetric Encryption Padding (OAEP) to achieve a length compatible with the key size, prevent attacks such as chosen-ciphertext vulnerabilities, and randomize the input for semantic security.[22] In the RSA algorithm, for example, the padded message m, treated as an integer less than the modulus n, is then encrypted by raising it to the power of the public exponent e modulo n, yielding the ciphertext c = m^e \mod n.[4] Other schemes, such as ElGamal, employ different operations based on discrete logarithms. This operation ensures that only the corresponding private key can efficiently reverse it, leveraging the trapdoor property of the underlying one-way function.[4]

Decryption reverses this process using the private key. In RSA, the recipient applies the private exponent d to the ciphertext, computing m = c^d \mod n, which recovers the padded message.[4] The padding is then removed, with built-in integrity checks—such as hash verification in OAEP—used to detect errors like invalid padding or tampering, rejecting the decryption if inconsistencies arise.[22] This step ensures the original message is accurately restored only by the legitimate holder of the private key, maintaining confidentiality.[22]

Public-key encryption typically processes messages in fixed-size blocks limited by algorithm parameters, such as the modulus n in RSA (typically 2048 to 4096 bits as of 2025), unlike many symmetric stream ciphers, which process data of arbitrary length continuously.[4] This imposes a per-block size restriction, often requiring messages to be segmented or, for larger data, combined with symmetric methods in hybrid systems to encrypt bulk content efficiently. Performance-wise, public-key operations incur significant computational overhead due to large-integer arithmetic, which scales roughly cubically with the parameter size and can be thousands of times slower than symmetric counterparts at equivalent security levels.[4] For instance, encrypting a 200-digit block with RSA on general-purpose hardware of the 1970s took on the order of seconds to minutes, highlighting the need for optimization techniques such as exponentiation by squaring.[4] This overhead limits direct use for high-volume data, favoring hybrid approaches in which public-key methods secure symmetric keys.
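The per-block limit can be quantified with a short calculation: for RSAES-OAEP the padded plaintext must fit within one modulus-sized block, giving a capacity of k - 2*hLen - 2 bytes. The sketch below works this out for a 2048-bit key with SHA-256; the figures are illustrative of the restriction described above.

```python
# Per-block plaintext limit for RSA-OAEP: the padded input must fit in one
# modulus-sized block, so the capacity is k - 2*hLen - 2 bytes.
key_bits = 2048
k = key_bits // 8          # modulus size in bytes: 256
h_len = 32                 # SHA-256 digest size in bytes
max_plaintext = k - 2 * h_len - 2
print(max_plaintext)       # 190 bytes per RSA-OAEP block
# Anything larger must be split across blocks or, in practice, encrypted with
# a symmetric cipher whose key is then protected by the public-key scheme.
```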
Digital Signature Mechanisms
Digital signature mechanisms in public-key cryptography provide a means to verify the authenticity and integrity of a message, ensuring that it originated from a specific signer and has not been altered. Introduced conceptually by Diffie and Hellman in 1976, these mechanisms rely on asymmetric key pairs where the private key is used for signing and the public key for verification, leveraging the computational infeasibility of deriving the private key from the public one.[1] This approach allows anyone to verify the signature without needing to share secret keys securely.

To create a digital signature, the signer first applies a collision-resistant hash function to the message, producing a fixed-size digest that represents the message's content. The signer then applies their private key to this digest, effectively "encrypting" it to generate the signature. For instance, in the RSA algorithm proposed by Rivest, Shamir, and Adleman in 1978, the signature S is computed as S = h^d \mod n, where h is the hash of the message, d is the private exponent, and n is the modulus derived from the product of two large primes.[4] This process ensures that only the holder of the private key can produce a valid signature, as the operation exploits the trapdoor one-way function inherent in the public-key system. For longer messages, hashing is essential to reduce the input to a manageable size, avoiding the need to sign each block individually while maintaining security.[23] Other schemes, such as DSA or ECDSA, use different signing operations based on their mathematical foundations.

Verification involves the recipient recomputing the hash of the received message and using the signer's public key to "decrypt" the signature, yielding the original digest. The verifier then compares this decrypted value with the newly computed hash; if they match, the signature is valid, confirming both the message's integrity and the signer's identity. In RSA terms, this check is performed by computing h' = S^e \mod n, where e is the public exponent, and ensuring h' equals the hash of the message.[4] The use of strong hash functions is critical here, as their collision resistance makes it computationally infeasible for an attacker to find two different messages with the same hash, thereby preventing forgery of signatures on altered content.[24]

A key property of digital signatures is non-repudiation, which binds the signer irrevocably to the message, since only their private key could have produced the valid signature and the public key allows third-party verification without the signer's involvement.[1] This feature underpins applications such as secure email protocols and software distribution, where verifiable authenticity is paramount.[23]
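The hash-then-sign flow can be sketched as follows, assuming the Python cryptography package and RSA with the PSS padding scheme; this is one common instantiation and differs from the textbook S = h^d \mod n formula above in that the library applies hashing and randomized padding internally.

```python
# Sketch of signing and verification, including detection of a tampered message.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

signer_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
message = b"pay 100 to alice"

# Sign: the message is hashed, then the digest is signed with the private key.
signature = signer_key.sign(message, pss, hashes.SHA256())

# Verify with the public key; any alteration of the message is rejected.
public_key = signer_key.public_key()
public_key.verify(signature, message, pss, hashes.SHA256())  # succeeds
try:
    public_key.verify(signature, b"pay 900 to alice", pss, hashes.SHA256())
except InvalidSignature:
    print("tampered message rejected")
```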
Applications and Schemes
Secure Data Transmission
Public-key cryptography plays a pivotal role in secure data transmission by enabling the establishment of encrypted channels over open networks without requiring pre-shared secrets between parties. This addresses the key distribution problem inherent in symmetric cryptography, allowing communicators to exchange information securely even in untrusted environments like the internet. By leveraging asymmetric key pairs, it ensures confidentiality, as data encrypted with a public key can only be decrypted by the corresponding private key held by the intended recipient.[1]

In protocols such as Transport Layer Security (TLS), public-key cryptography facilitates key exchange during the initial handshake to derive symmetric session keys for bulk data encryption. For instance, TLS 1.3 mandates the use of ephemeral Diffie-Hellman (DHE) or elliptic curve Diffie-Hellman (ECDHE) key exchanges, where the parties generate temporary public values to compute a shared secret, providing forward secrecy that protects past sessions against future key compromises. This mechanism authenticates the exchange via digital signatures and encrypts subsequent handshake messages, ensuring secure transmission of application data thereafter.[25]

For email encryption, standards like OpenPGP and S/MIME rely on public-key cryptography to protect message confidentiality. OpenPGP employs a hybrid approach in which a randomly generated symmetric session key encrypts the email content, and that session key is then encrypted using the recipient's public key (e.g., via RSA or ElGamal) before transmission.[13] Similarly, S/MIME uses the Cryptographic Message Syntax (CMS) to wrap a content-encryption key with the recipient's public key through algorithms like RSA or ECDH, supporting enveloped data structures for secure delivery.[26]

In file sharing scenarios, public-key cryptography enables secure uploads and downloads by allowing senders to encrypt files with the recipient's public key prior to transmission, preventing interception on public networks. OpenPGP applies the same hybrid encryption process to files as to messages: symmetric encryption handles the data and public-key encryption secures the session key, ensuring end-to-end confidentiality without shared infrastructure.[13] This approach integrates with symmetric methods for performance, as explored in hybrid systems.[13]
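The ephemeral key-agreement step can be sketched as below, assuming the Python cryptography package and X25519; a real TLS 1.3 handshake additionally authenticates the exchange with certificates and signatures and binds the derived keys to the full handshake transcript.

```python
# Sketch of TLS-style ephemeral key agreement: each side generates a temporary
# X25519 key pair, both derive the same shared secret, and HKDF expands it
# into a symmetric AES-GCM key for bulk data. Illustrative only.
import os
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

client_eph = X25519PrivateKey.generate()
server_eph = X25519PrivateKey.generate()

# Each side combines its own ephemeral private key with the peer's public key.
client_secret = client_eph.exchange(server_eph.public_key())
server_secret = server_eph.exchange(client_eph.public_key())
assert client_secret == server_secret

# Expand the shared secret into a 256-bit AES session key.
session_key = HKDF(algorithm=hashes.SHA256(), length=32,
                   salt=None, info=b"demo handshake").derive(client_secret)

# Bulk application data is then protected with the fast symmetric cipher.
nonce = os.urandom(12)
ciphertext = AESGCM(session_key).encrypt(nonce, b"application data", None)
```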
Authentication and Non-Repudiation
Public-key cryptography enables authentication by allowing parties to verify the identity of a communicator through digital signatures, which demonstrate possession of the corresponding private key without revealing it.[27] In this context, authentication confirms that the entity claiming an identity is genuine, while non-repudiation ensures that a signer cannot later deny having performed a signing operation, providing evidentiary value in disputes.[4] These properties rely on the asymmetry of key pairs, where the private key signs data and the public key verifies it, binding actions to specific identities.[28]

Challenge-response authentication protocols leverage digital signatures to prove private key possession securely. In such protocols, a verifier sends a random challenge—a nonce or timestamped value—to the claimant, who signs it using their private key and returns the signature along with the challenge.[27] The verifier checks the signature against the claimant's public key; successful verification confirms that the claimant controls the private key, as forging the signature would require solving the underlying hard problem, such as integer factorization in RSA.[27] This method resists replay attacks when fresh challenges are used and is specified in standards like FIPS 196 for entity authentication in computer systems.[27] A sketch of this pattern appears at the end of this section.

Non-repudiation in public-key systems is achieved through timestamped digital signatures that bind a signer's identity to a document or action, making denial infeasible due to the cryptographic uniqueness of the signature. A signer applies their private key to a hash of the document together with a trusted timestamp, producing a verifiable artifact that third parties can validate with the public key.[28] This ensures the signature was created after the timestamp and before any revocation, providing legal evidentiary weight, as outlined in digital signature standards like DSS.[28] The RSA algorithm, introduced in 1977, formalized this capability by enabling signatures that are computationally infeasible to forge without the private key.[4] Another prominent application is in blockchain and cryptocurrency systems, where users generate public-private key pairs to create wallet addresses from public keys and sign transactions with private keys; verifiers use the public key to confirm authenticity and prevent unauthorized spending, ensuring non-repudiation across distributed networks.[29]

Certificate-based authentication extends these mechanisms by linking public keys to real-world identities via X.509 certificates issued by trusted authorities. Each certificate contains the subject's public key, identity attributes (e.g., name or email), and a signature from the issuing certification authority, forming a chain of trust from a root authority.[30] During authentication, the verifier validates the certificate chain, checks revocation status via certificate revocation lists, and uses the bound public key to confirm signatures, ensuring the key belongs to the claimed entity.[30] This approach, profiled in RFC 5280, supports scalable identity verification in distributed systems.[30]

In software updates, public-key cryptography facilitates code signing, where developers sign binaries with their private key to assure users of authenticity and integrity, preventing tampering during distribution.[31] For instance, operating systems like Windows verify these signatures before installation, using the associated public key or certificate to block unsigned or altered code.[31] Similarly, for legal documents, electronic signatures employ public-key digital signatures to provide non-repudiation, as recognized in frameworks like the U.S. ESIGN Act, which treats signatures verifiable via public keys as binding equivalents to handwritten ones.[32] This ensures contracts or approvals cannot be repudiated, with timestamps adding proof of creation time.[32]
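The challenge-response pattern described above can be sketched as follows, assuming the Python cryptography package and Ed25519 signatures purely for brevity; production protocols add identifiers, timestamps, and transport protections.

```python
# Sketch of challenge-response authentication: the verifier issues a fresh
# random nonce, the claimant signs it with the private key, and the verifier
# checks the signature against the enrolled public key.
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Enrollment: the claimant's public key is registered with the verifier.
claimant_key = Ed25519PrivateKey.generate()
enrolled_public_key = claimant_key.public_key()

# Verifier: send a fresh, unpredictable challenge (defeats replay).
challenge = os.urandom(32)

# Claimant: prove possession of the private key by signing the challenge.
response = claimant_key.sign(challenge)

# Verifier: accept only if the signature matches the enrolled public key.
try:
    enrolled_public_key.verify(response, challenge)
    print("authenticated")
except InvalidSignature:
    print("rejected")
```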
Hybrid Systems
Combining with Symmetric Cryptography
Public-key cryptography, while enabling secure key distribution without prior shared secrets, is computationally intensive and significantly slower for encrypting large volumes of data than symmetric cryptography. Symmetric algorithms, such as AES, excel at processing bulk data efficiently thanks to their simpler operations, but they require a secure channel for key exchange to prevent interception. This disparity in performance—public-key methods can be on the order of 1,000 times slower than symmetric ones at equivalent security levels—necessitates a hybrid approach that leverages the strengths of both paradigms.[33][34]

In hybrid systems, public-key cryptography facilitates the secure exchange of a temporary symmetric session key, which is then used to encrypt the actual payload. The standard pattern has the sender generate a random symmetric key, encrypt the message with it using a symmetric algorithm, and then encrypt that symmetric key under the recipient's public key before transmission. Upon receipt, the recipient decrypts the symmetric key with their private key and uses it to decrypt the message. This method ensures confidentiality without the overhead of applying public-key operations to the entire data stream.[35][36]

The efficiency gains from this integration are substantial; for instance, hybrid encryption achieves roughly a 1,000-fold speedup in bulk data processing relative to pure public-key encryption, making it practical for real-world applications like secure file transfers or streaming. Standards such as Hybrid Public Key Encryption (HPKE) incorporate ephemeral Diffie-Hellman key exchanges within public-key frameworks to encapsulate symmetric keys securely, enhancing forward secrecy while retaining the performance of symmetric ciphers.[33][35] This conceptual hybrid model underpins many secure communication protocols, balancing security and performance effectively.[35]
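The standard hybrid pattern can be sketched as follows, assuming the Python cryptography package, AES-GCM for the bulk data, and RSA-OAEP for wrapping the session key; real deployments follow standardized formats such as CMS, OpenPGP, or HPKE.

```python
# Sketch of hybrid encryption: a random symmetric key encrypts the bulk data,
# and only that small key is encrypted with the recipient's public key.
import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

recipient_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Sender: symmetric encryption of the payload, public-key wrap of the key.
session_key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
payload = b"large payload ..." * 1000
ciphertext = AESGCM(session_key).encrypt(nonce, payload, None)
wrapped_key = recipient_key.public_key().encrypt(session_key, oaep)

# Recipient: unwrap the session key with the private key, then decrypt the data.
unwrapped_key = recipient_key.decrypt(wrapped_key, oaep)
assert AESGCM(unwrapped_key).decrypt(nonce, ciphertext, None) == payload
```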
Protocol Examples
Public-key cryptography is integral to several widely adopted hybrid protocols, where it handles initial authentication and key agreement before transitioning to efficient symmetric encryption for the bulk of data transmission. These protocols leverage asymmetric mechanisms to establish trust and shared secrets securely over untrusted networks, ensuring confidentiality, integrity, and authenticity. Representative examples include the Transport Layer Security (TLS) handshake, Secure Shell (SSH), and Internet Protocol Security (IPsec) with Internet Key Exchange (IKE).

In the TLS 1.3 handshake, public-key cryptography is employed for server authentication and ephemeral key agreement. The server presents an X.509 certificate containing its public key, typically based on RSA or elliptic curve cryptography (ECC), which the client verifies against a trusted certificate authority. The server then signs a handshake transcript using its private key (via algorithms like RSA-PSS or ECDSA) to prove possession and authenticity. Concurrently, the client and server perform an ephemeral Diffie-Hellman (DHE) or elliptic curve Diffie-Hellman (ECDHE) exchange using supported groups such as x25519 or secp256r1 to derive a shared secret. This secret, combined with the handshake transcript via HKDF, generates symmetric keys for AES-GCM or ChaCha20-Poly1305 encryption of subsequent application data, embodying the hybrid model.[37]

The SSH protocol uses public-key cryptography for host and user authentication alongside key exchange for session establishment. During the transport layer negotiation, the server authenticates itself by signing the key exchange hash with its host private key (e.g., RSA or DSA), allowing the client to verify the signature against the server's known public key. User authentication follows via the "publickey" method, where the client proves possession of a private key by signing a challenge message, supporting algorithms like ssh-rsa or ecdsa-sha2-nistp256. Key agreement occurs through Diffie-Hellman groups (e.g., group14-sha256), producing a shared secret from which symmetric session keys are derived for ciphers like AES, with integrity provided by HMAC, securing the remote login channel.[38][39]

In IPsec, public-key cryptography is optionally integrated into the IKEv2 protocol for peer authentication and key exchange during security association setup. Authentication employs digital signatures in AUTH payloads, using RSA or DSS over certificates to verify peer identities, with support for X.509 formats and identifiers such as FQDNs or ASN.1 distinguished names. Key exchange relies on ephemeral Diffie-Hellman (e.g., Group 14: 2048-bit modulus) to establish shared secrets with perfect forward secrecy, from which symmetric keys are derived via a pseudorandom function (PRF) such as HMAC-SHA-256. These keys then protect IP traffic using Encapsulating Security Payload (ESP) with symmetric algorithms such as AES in GCM mode, enabling secure VPN tunnels. While pre-shared keys are common, public-key methods enhance scalability in large deployments.[40]

The following table compares these protocols in terms of public-key types and recommended security levels, based on NIST guidelines for at least 112-bit security strength (i.e., requiring on the order of 2^{112} operations to break). A sketch of the common signed ephemeral key exchange pattern follows the table.

| Protocol | Key Types Used for Authentication | Key Types Used for Key Exchange | Recommended Security Levels (Key Sizes) |
|---|---|---|---|
| TLS 1.3 | RSA (2048 bits), ECDSA (P-256 or P-384) | ECDHE (x25519 or secp256r1, ~256 bits) | 128-bit (ECC) or 112-bit (RSA/DH) |
| SSH | RSA (2048 bits), ECDSA (P-256 or P-384), DSA | DH (2048 bits, Group 14) | 112-bit (RSA/DH) or 128-bit (ECC) |
| IPsec (IKEv2) | RSA (2048 bits), DSS (2048 bits) | DH (2048 bits, Group 14) | 112-bit (RSA/DSS/DH) |
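The common thread in these protocols, a signed ephemeral key exchange, can be sketched as follows, assuming the Python cryptography package with Ed25519 for the long-term identity key and X25519 for the ephemeral exchange; actual protocols sign a full transcript and perform many additional checks.

```python
# Sketch of an authenticated key exchange: the server signs its ephemeral
# ECDH public key with a long-term signing key, the client verifies that
# signature against the server's known public key, and both sides derive
# the same symmetric session key. Simplified for illustration.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives.asymmetric.x25519 import (
    X25519PrivateKey, X25519PublicKey)
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes, serialization

# Long-term server identity key; its public half is known to the client,
# e.g., via a certificate (TLS) or a known-hosts entry (SSH).
server_identity = Ed25519PrivateKey.generate()

# Server: ephemeral X25519 share, signed with the identity key.
server_eph = X25519PrivateKey.generate()
server_share = server_eph.public_key().public_bytes(
    encoding=serialization.Encoding.Raw, format=serialization.PublicFormat.Raw)
signature = server_identity.sign(server_share)

# Client: verify the share really comes from the server (raises if forged),
# then contribute its own ephemeral share and derive the session key.
server_identity.public_key().verify(signature, server_share)
client_eph = X25519PrivateKey.generate()
shared = client_eph.exchange(X25519PublicKey.from_public_bytes(server_share))
session_key = HKDF(algorithm=hashes.SHA256(), length=32,
                   salt=None, info=b"handshake keys").derive(shared)
```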