Public-key cryptography

Public-key cryptography, also known as asymmetric cryptography, is a class of cryptographic algorithms that utilize a pair of related keys—a public key, which can be openly shared, and a private key, which must remain secret—to perform encryption, decryption, digital signing, and verification operations. The public key is used by anyone to encrypt messages or verify signatures intended for the key owner, while only the private key holder can decrypt those messages or produce valid signatures, with the relationship between the keys designed such that deriving the private key from the public key is computationally infeasible. This approach addresses the key distribution challenges of symmetric cryptography by enabling secure communication over untrusted networks without requiring parties to exchange secrets in advance.

The foundational ideas of public-key cryptography emerged in the mid-1970s amid growing concerns over secure data transmission in computer networks. In their 1976 paper "New Directions in Cryptography," Whitfield Diffie and Martin E. Hellman introduced the concept of asymmetric key pairs, proposing a system where encryption keys could be publicly listed in directories, allowing any user to securely send encrypted messages to another without prior coordination or secure channels for key exchange. This innovation built on earlier theoretical work in cryptography but shifted focus toward practical, computationally secure systems resistant to cryptanalysis. Independently, in 1977, Ronald L. Rivest, Adi Shamir, and Leonard M. Adleman devised the RSA algorithm, the first viable implementation of a public-key cryptosystem, relying on the mathematical difficulty of factoring the product of two large prime numbers to ensure security.

Public-key cryptography underpins essential security mechanisms in digital systems, including confidentiality via encryption of sensitive data, authentication to verify identities, integrity to detect tampering, and non-repudiation to prevent denial of actions through digital signatures. It facilitates key establishment protocols, such as Diffie-Hellman key agreement, for deriving shared symmetric keys over public channels, and supports public key infrastructures (PKI) that issue and manage digital certificates binding public keys to verified entities via trusted certification authorities. Widely deployed in protocols like Transport Layer Security (TLS) for web browsing, Secure/Multipurpose Internet Mail Extensions (S/MIME) for email, and virtual private networks (VPNs), it enables scalable secure transactions in e-commerce, government services, and enterprise networks, with standards like RSA and elliptic curve cryptography (ECC) providing varying levels of security based on key sizes (e.g., 2048-bit RSA for at least 112 bits of security). As computational threats evolve, including those from quantum computing, ongoing standardization efforts emphasize robust key management and migration to post-quantum alternatives while maintaining compatibility with existing infrastructures.

Fundamentals

Definition and Principles

Public-key cryptography, also known as asymmetric cryptography, is a cryptographic system that utilizes a pair of related keys—a public key and a private key—to secure communications and data. Unlike symmetric cryptography, which relies on a single shared secret key for both encryption and decryption, public-key cryptography employs distinct keys for these operations: the public key is freely distributed and used for encryption or signature verification, while the private key is kept secret and used for decryption or signature generation. This asymmetry ensures that the private key cannot be feasibly derived from the public key, providing a foundation for secure interactions without the need for prior secret key exchange.

The core principles of public-key cryptography revolve around achieving key security objectives through the key pair mechanism. Confidentiality is ensured by encrypting messages with the recipient's public key, allowing only the private key holder to decrypt and access the plaintext. Integrity and authentication are supported via digital signatures, where the sender signs the message with their private key, enabling the recipient to verify authenticity and unaltered content using the sender's public key. Non-repudiation is also provided, as a valid signature binds the sender irrevocably to the message, preventing denial of origin. These principles rely on the computational difficulty of inverting certain mathematical functions without the private key, often referred to as one-way functions.

Developed in the 1970s to address the challenges inherent in symmetric systems—where securely sharing a single key over insecure channels is problematic—public-key cryptography revolutionized secure communication by enabling key distribution via public directories. In a basic workflow, the sender obtains the recipient's public key, encrypts the plaintext message to produce ciphertext, and transmits it over an open channel; the recipient then applies their private key to decrypt the message, ensuring only they can recover the original content. This approach underpins modern secure protocols without requiring trusted intermediaries for initial key setup.

Key Components

Public and private keys form the core of public-key cryptography, generated as a mathematically related pair through specialized algorithms. The public key is designed for open distribution to enable secure communications with multiple parties, while the private key must be kept confidential by its owner to maintain system security. This asymmetry allows anyone to encrypt messages or verify signatures using the public key, but only the private key holder can decrypt or produce valid signatures.

Certificates play a crucial role in associating public keys with specific identities, preventing impersonation and enabling trust in distributed systems. Issued by trusted certification authorities (CAs), a certificate contains the public key, the holder's identity details, and a digital signature from the CA verifying the binding. This structure, as defined in standards like X.509, allows verification of key authenticity without direct knowledge of the private key.

Key rings provide a practical mechanism for managing multiple keys, particularly in decentralized environments. In systems like Pretty Good Privacy (PGP), a public key ring stores the public keys of other users for encryption and signature verification, while a separate private key ring holds the user's own private keys, protected by passphrases. These structures facilitate efficient key lookup and usage without compromising secrecy.

Different public-key algorithms exhibit varying properties in terms of key size, computational demands, and achievable security levels, influencing their suitability for applications. The table below compares representative algorithms for equivalent security strength, based on NIST guidelines for key lengths providing at least 128 bits of security against classical attacks. Computational costs are relative, with elliptic curve cryptography (ECC) generally requiring fewer resources due to smaller keys and optimized operations compared to RSA or DSA.
| Algorithm | Key Size (bits) | Relative Computation Cost | Security Level (bits) |
|---|---|---|---|
| RSA | 3072 | High (modular exponentiation intensive) | 128 |
| ECC | 256 | Low (efficient scalar multiplication) | 128 |
| DSA | 3072 (modulus) | Medium (discrete log operations) | 128 |
Usability of public-key systems hinges on secure random number generation during key creation, as predictable randomness can undermine the mathematical hardness assumptions underlying key pair security. Deterministic or weakly random sources risk exposing private keys through bias or predictability, necessitating cryptographically secure pseudorandom number generators compliant with standards like those in NIST SP 800-90.

Mathematical Foundations

Asymmetric Encryption Basics

Asymmetric encryption, a cornerstone of public-key cryptography, employs a pair of mathematically related keys: a publicly available key that anyone can use to encipher a message, and a secret private key held only by the intended recipient for decryption. This approach allows secure communication over insecure channels without the need for prior secret key exchange, fundamentally differing from symmetric methods by separating the encryption and decryption processes. The underlying mathematics is rooted in modular arithmetic, where computations are confined to residues modulo a large composite n, enabling efficient operations while obscuring the original message without the private key.

At the heart of asymmetric encryption lie one-way functions, which are algorithms or mathematical operations that are straightforward and efficient to compute in the forward direction—for instance, transforming an input x to an output y = f(x)—but computationally infeasible to reverse, meaning finding x given y requires prohibitive resources unless augmented by a hidden "trapdoor" parameter known only to the key holder. These functions provide the asymmetry: the public key enables easy forward computation for encryption, while inversion demands the private key's trapdoor information, rendering decryption secure against adversaries.

A basic representation of the encryption process uses modular exponentiation: the ciphertext c is generated as c \equiv m^e \pmod{n}, where m is the message, e is the public exponent component of the key, and n is the modulus. Decryption reverses this via the private exponent d, yielding m \equiv c^d \pmod{n}, with the relationship between e and d tied to the structure of n. The security of asymmetric schemes relies on well-established computational hardness assumptions, such as the integer factorization problem, where decomposing a large composite n = p \cdot q (with p and q being large primes) into its prime factors is believed to be intractable for sufficiently large values using current algorithms and computing power.
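The forward and inverse operations can be demonstrated numerically. The following minimal Python sketch uses textbook RSA with deliberately tiny, insecure parameters (all values are illustrative) to show that encryption requires only the public pair (n, e), while decryption depends on the exponent d derived from the secret factorization:

```python
# Textbook RSA with toy parameters (insecure; for illustration only).
p, q = 61, 53             # secret primes: the trapdoor information
n = p * q                 # public modulus: 3233
phi = (p - 1) * (q - 1)   # Euler's totient: 3120, computable only from p and q
e = 17                    # public exponent, coprime to phi
d = pow(e, -1, phi)       # private exponent e^-1 mod phi (Python 3.8+): 2753

m = 65                    # message encoded as an integer below n
c = pow(m, e, n)          # forward step: c = m^e mod n (easy for anyone)
assert pow(c, d, n) == m  # inversion: m = c^d mod n (requires d)
```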

Trapdoor One-Way Functions

Trapdoor one-way functions form the foundational mathematical primitive enabling public-key cryptography by providing a mechanism for reversible computation that is computationally feasible only with privileged information. Introduced by Diffie and Hellman, these functions are defined such that forward computation is efficient for anyone, but inversion—recovering the input from the output—is computationally intractable without a secret "trapdoor" parameter, which serves as the private key. With the trapdoor, inversion becomes efficient, allowing authorized parties to decrypt or verify messages while maintaining security against adversaries. This asymmetry underpins the feasibility of public-key systems, where the public key enables easy forward evaluation, but the private key (the trapdoor) is required for reversal.

Trapdoor functions are typically categorized into permutation-based and function-based types, depending on whether they preserve one-to-one mappings. Permutation-based trapdoor functions, such as those underlying the RSA cryptosystem, involve bijective mappings that are easy to compute forward but hard to invert without knowledge of the trapdoor, often relying on the difficulty of factoring large composite numbers. For instance, in RSA, the public operation raises a message to a power modulo a composite modulus n = pq, while inversion uses the private exponent derived from the prime factors p and q. In contrast, function-based examples like the Rabin cryptosystem employ quadratic residues modulo n, where forward computation squares the input modulo n, and inversion requires extracting square roots, which is feasible only with the factorization of n. These examples illustrate how trapdoor functions can be constructed from number-theoretic problems, ensuring that the public key reveals no information about the inversion process.

The inversion process in a trapdoor function f can be formally expressed as recovering the original message m from the ciphertext c using the trapdoor held in the private key: m = f^{-1}(c). This operation leverages the secret trapdoor, such as the prime factors in RSA or Rabin, to efficiently compute the inverse without solving the underlying hard problem directly.

The security of trapdoor one-way functions is established through provable reductions to well-studied hard problems in computational number theory, ensuring that breaking the function is at least as difficult as solving these problems. For permutation-based schemes like RSA and Rabin, security reduces to the integer factorization problem: an adversary who can invert the function efficiently could factor the modulus n, a task believed to be intractable for large semiprimes on classical computers. Similarly, other trapdoor constructions, such as those based on the discrete logarithm problem in elliptic curves or finite fields, reduce inversion to computing discrete logarithms, providing rigorous guarantees that the system's hardness inherits from these foundational assumptions. This reductionist approach allows cryptographers to analyze and trust public-key schemes by linking their security to long-standing open problems.
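The Rabin construction makes the role of the trapdoor especially concrete: squaring modulo n is trivial, but extracting square roots is feasible only with the factors of n. A short Python sketch with toy primes follows (all parameters illustrative and far too small for real use); it relies on the fact that for a prime p congruent to 3 mod 4, a square root of c modulo p is c^{(p+1)/4} mod p:

```python
# Sketch of the Rabin trapdoor function with toy primes (insecure sizes).
# Forward: squaring mod n is easy. Inversion: taking square roots mod n
# is feasible only with the factorization of n (the trapdoor).
p, q = 7, 11                  # secret primes, both congruent to 3 mod 4
n = p * q                     # public modulus: 77

m = 20                        # message, an integer < n
c = pow(m, 2, n)              # forward computation: c = m^2 mod n

# Inversion with the trapdoor: square roots modulo each prime factor...
mp = pow(c, (p + 1) // 4, p)  # sqrt of c mod p (valid since p = 3 mod 4)
mq = pow(c, (q + 1) // 4, q)  # sqrt of c mod q

# ...combined via the Chinese Remainder Theorem into four candidate roots.
yp = pow(p, -1, q)            # p^-1 mod q
yq = pow(q, -1, p)            # q^-1 mod p
r1 = (mp * q * yq + mq * p * yp) % n
r2 = n - r1
r3 = (mp * q * yq - mq * p * yp) % n
r4 = n - r3
assert m in {r1, r2, r3, r4}  # the true message is among the four roots
```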

Core Operations

Key Generation and Distribution

In public-key cryptography, key generation produces a mathematically linked pair consisting of a public key, which can be freely shared, and a private key, which must remain secret. Methods vary by algorithm; for systems like RSA based on integer factorization, this generally involves selecting large, randomly chosen prime numbers as foundational parameters, computing a modulus from their product, and deriving public and private exponents that enable asymmetric operations based on the underlying trapdoor one-way function. In general, the process uses high-entropy random bits from approved sources to select parameters suited to the computational hard problem of the algorithm (e.g., elliptic curve parameters for ECC), and must occur within a secure environment, often using approved cryptographic modules to ensure the keys meet the required security strength.

High entropy is essential during key generation to produce unpredictable values, preventing attackers from guessing or brute-forcing the private key from the public one. Random bit strings are sourced from approved random bit generators (RBGs), such as those compliant with NIST standards, which must provide at least as many bits of entropy as the target security level—for instance, at least 256 bits for 128-bit security. Insufficient entropy, often from flawed or predictable sources like low-variability timers, can render keys vulnerable; a notable example is the 2006-2008 Debian OpenSSL vulnerability (CVE-2008-0166), where a code change effectively reduced the randomness feeding key generation to the process ID, so that only tens of thousands of distinct SSH keys could be generated, enabling widespread compromises.

Secure distribution focuses on disseminating the public key while protecting the private key's secrecy. Methods include direct exchange through trusted channels like in-person handoff or couriers, publication in public directories or key servers for open retrieval, or establishment via an initial authenticated exchange to bootstrap trust. To mitigate risks like man-in-the-middle attacks, public keys are often accompanied by digital signatures for validation, confirming their authenticity without delving into signature mechanics. Private keys are never distributed and must be generated and stored with protections against extraction, such as hardware security modules. Common pitfalls in distribution, such as unverified public keys, can undermine the system, emphasizing the need for integrity checks during sharing.
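As a minimal illustration of the entropy requirement, the Python sketch below draws a private scalar from the operating system's cryptographically secure generator via the standard secrets module, contrasted with a seeded Mersenne Twister whose output is fully reproducible; the group order shown is that of the secp256k1 curve and is used purely as an example parameter:

```python
import secrets  # CSPRNG interface backed by os.urandom
import random   # Mersenne Twister: deterministic, never for keys

# Order of the secp256k1 base-point group (example parameter).
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

# Correct: uniform private scalar in [1, n-1] from a secure source.
private_key = secrets.randbelow(n - 1) + 1

# Flawed: a known or guessable seed makes the "key" predictable,
# as in CVE-2008-0166, where the pool collapsed to the process ID.
random.seed(12345)
weak_key = random.randrange(1, n)  # reproducible by any attacker
```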

Encryption and Decryption Processes

In public-key cryptography, the encryption process begins with preparing the plaintext message for secure transmission using methods specific to the algorithm. The message is first converted into a numerical representation and padded using a scheme such as Optimal Asymmetric Encryption Padding (OAEP) to achieve a length compatible with the modulus, prevent attacks like chosen-ciphertext vulnerabilities, and randomize the input for semantic security. In the RSA algorithm, for example, the padded message m, treated as an integer less than the modulus n, is then encrypted by raising it to the power of the public exponent e modulo n, yielding the ciphertext c = m^e \mod n. Other schemes, such as ElGamal, employ different operations based on discrete logarithms. This operation ensures that only the corresponding private key can efficiently reverse it, leveraging the trapdoor property of the underlying one-way function.

Decryption reverses this process using the private key. In RSA, the recipient applies the private exponent d to the ciphertext, computing m = c^d \mod n, which recovers the padded message. The padding is then removed, with built-in checks—such as hash verification in OAEP—to detect and handle errors like invalid padding or tampering, rejecting the decryption if inconsistencies arise. This step ensures the original message is accurately restored only by the legitimate holder of the private key, maintaining confidentiality.

Public-key encryption typically processes messages in fixed-size blocks limited by algorithm parameters, such as the modulus n in RSA (typically 2048 to 4096 bits as of 2025), unlike many symmetric stream ciphers that handle arbitrary lengths continuously. This imposes a per-block size restriction, often requiring messages to be segmented or, for larger data, combined with symmetric methods in hybrid systems to encrypt bulk content efficiently.

Performance-wise, public-key operations incur significant computational overhead due to large-integer modular exponentiation, which scales cubically with the parameter size and requires far more resources—often thousands of times slower—than symmetric counterparts for equivalent security levels. For instance, encrypting a message under a 200-digit modulus with RSA on general-purpose hardware in the late 1970s took seconds to minutes, highlighting the need for optimization techniques like exponentiation based on the Chinese Remainder Theorem. This overhead limits direct use for high-volume data, favoring hybrid approaches where public-key methods secure symmetric keys.
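A brief sketch of this pad-then-exponentiate flow in Python, assuming the third-party pyca/cryptography library is installed (key sizes and variable names are illustrative):

```python
# RSA-OAEP encryption/decryption sketch using pyca/cryptography
# (assumed available via `pip install cryptography`).
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),  # mask generation function
    algorithm=hashes.SHA256(),                    # OAEP hash
    label=None,
)

plaintext = b"per-block message, limited in size by the 2048-bit modulus"
ciphertext = public_key.encrypt(plaintext, oaep)  # randomized on every call
assert private_key.decrypt(ciphertext, oaep) == plaintext
```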

Digital Signature Mechanisms

Digital signature mechanisms in public-key cryptography provide a means to verify the authenticity and integrity of a message, ensuring that it originated from a specific signer and has not been altered. Introduced conceptually by Diffie and Hellman in 1976, these mechanisms rely on asymmetric key pairs where the private key is used for signing and the public key for verification, leveraging the computational infeasibility of deriving the private key from the public one. This approach allows anyone to verify the signature without needing to share secret keys securely.

To create a digital signature, the signer first applies a collision-resistant hash function to the message, producing a fixed-size digest that represents the message's content. The signer then applies their private key to this digest, effectively "encrypting" it to generate the signature. For instance, in the RSA algorithm proposed by Rivest, Shamir, and Adleman in 1978, the signature S is computed as S = h^d \mod n, where h is the hash of the message, d is the private exponent, and n is the modulus derived from the product of two large primes. This process ensures that only the holder of the private key can produce a valid signature, as the operation exploits the trapdoor one-way function inherent in the public-key system. For longer messages, hashing is essential to reduce the input to a manageable size, preventing the need to sign each block individually while maintaining security. Other schemes, such as DSA or ECDSA, use different signing operations based on their mathematical foundations.

Verification involves the recipient recomputing the hash of the received message and using the signer's public key to "decrypt" the signature, yielding the original digest. The verifier then compares this decrypted value with the newly computed hash; if they match, the signature is valid, confirming both the message's integrity and the signer's identity. In RSA terms, this check is performed by computing h' = S^e \mod n, where e is the public exponent, and ensuring h' equals the hash of the message. The use of strong hash functions is critical here, as their collision resistance makes it computationally infeasible for an attacker to find two different messages with the same digest, thereby preventing reuse of signatures on altered content.

A key property of digital signatures is non-repudiation, which binds the signer irrevocably to the message since only their private key could have produced the valid signature, and the public key allows third-party verification without the signer's involvement. This feature underpins applications such as secure communication protocols and electronic documents, where verifiable authenticity is paramount.
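A short Python sketch of the hash-then-sign flow, assuming the pyca/cryptography library; PKCS#1 v1.5 padding is chosen here to mirror the "encrypt the digest" description above, though PSS is the modern recommendation (names and messages are illustrative):

```python
# Hash-then-sign RSA signature sketch with pyca/cryptography.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.exceptions import InvalidSignature

signer = rsa.generate_private_key(public_exponent=65537, key_size=2048)
message = b"wire transfer: $100 to Alice"

# Sign: the library hashes the message with SHA-256, then applies the
# private-key operation to the padded digest.
signature = signer.sign(message, padding.PKCS1v15(), hashes.SHA256())

# Verify: recompute the hash and check it against the signature using
# the public key; verify() raises InvalidSignature on any mismatch.
public_key = signer.public_key()
public_key.verify(signature, message, padding.PKCS1v15(), hashes.SHA256())

# Any alteration of the message invalidates the signature.
try:
    public_key.verify(signature, b"wire transfer: $999 to Mallory",
                      padding.PKCS1v15(), hashes.SHA256())
except InvalidSignature:
    print("tampered message rejected")
```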

Applications and Schemes

Secure Data Transmission

Public-key cryptography plays a pivotal role in secure data transmission by enabling the establishment of encrypted channels over open networks without requiring pre-shared secrets between parties. This addresses the key distribution problem inherent in symmetric cryptography, allowing communicators to securely exchange information even in untrusted environments like the internet. By leveraging asymmetric key pairs, it ensures confidentiality, as data encrypted with a public key can only be decrypted by the corresponding private key held by the intended recipient.

In protocols such as Transport Layer Security (TLS), public-key cryptography facilitates key agreement during the initial handshake to derive symmetric session keys for bulk data encryption. For instance, TLS 1.3 mandates the use of ephemeral Diffie-Hellman (DHE) or elliptic curve Diffie-Hellman (ECDHE) key exchanges, where parties generate temporary public values to compute a shared secret, providing forward secrecy to protect past sessions against future key compromises. This mechanism authenticates the exchange via digital signatures and encrypts subsequent handshake messages, ensuring secure transmission of application data thereafter.

For email encryption, standards like OpenPGP and S/MIME rely on public-key cryptography to protect message confidentiality. OpenPGP employs a hybrid approach where a randomly generated symmetric session key encrypts the email content, and that session key is then encrypted using the recipient's public key (e.g., via RSA or ElGamal) before transmission. Similarly, S/MIME uses the Cryptographic Message Syntax (CMS) to wrap a content-encryption key with the recipient's public key through algorithms like RSA or ECDH, supporting enveloped data structures for secure delivery.

In file sharing scenarios, public-key cryptography enables secure uploads and downloads by allowing senders to encrypt files with the recipient's public key prior to transmission, preventing interception on public networks. OpenPGP implements this by applying the same hybrid encryption process to files as to messages, where symmetric encryption handles the data and public-key encryption secures the session key, ensuring end-to-end confidentiality without shared infrastructure. This approach integrates with symmetric methods for performance, as explored in hybrid systems.

Authentication and Non-Repudiation

Public-key cryptography enables authentication by allowing parties to verify the identity of a communicator through the use of digital signatures, which demonstrate possession of a corresponding private key without revealing it. In this context, authentication confirms that the entity claiming an identity is genuine, while non-repudiation ensures that a signer cannot later deny having performed a signing operation, providing evidentiary value in disputes. These properties rely on the asymmetry of key pairs, where the private key signs data and the public key verifies it, binding actions to specific identities.

Challenge-response authentication protocols leverage digital signatures to prove private key possession securely. In such protocols, a verifier sends a random challenge—a nonce or timestamped value—to the claimant, who then signs it using their private key and returns the signature along with the challenge. The verifier checks the signature against the claimant's public key; successful verification confirms the claimant controls the private key, as forging the signature would require solving the underlying hard problem, such as integer factorization in RSA. This method resists replay attacks when fresh challenges are used and is specified in standards like FIPS 196 for entity authentication in computer systems.

Non-repudiation in public-key systems is achieved through timestamped digital signatures that bind a signer's identity to a document or transaction, making denial infeasible due to the cryptographic uniqueness of the signature. A signer applies their private key to the message together with a trusted timestamp, producing a verifiable artifact that third parties can validate with the public key. This ensures the signature was created at the attested time, providing legal evidentiary weight, as outlined in digital signature standards like the DSS. The RSA algorithm, introduced in 1977, formalized this capability by enabling signatures that are computationally infeasible to forge without the private key. Another prominent application is in cryptocurrency and blockchain systems, where users generate public-private key pairs to create wallet addresses from public keys and sign transactions with private keys; verifiers use the public key to confirm authenticity and prevent unauthorized spending, ensuring integrity across distributed networks.

Certificate-based authentication extends these mechanisms by linking public keys to real-world identities via certificates issued by trusted authorities. Each certificate contains the subject's public key, identity attributes (e.g., name or domain), and a signature from the issuing certification authority, forming a chain of trust from a root authority. During authentication, the verifier validates the certificate chain, checks status via certificate revocation lists, and uses the bound public key to confirm signatures, ensuring the key belongs to the claimed entity. This approach, profiled in RFC 5280, supports scalable identity verification in distributed systems.

In software updates, public-key cryptography facilitates code signing, where developers sign binaries with their private key to assure users of authenticity and integrity, preventing tampering during distribution. For instance, operating systems like Windows verify these signatures before installation, using the associated public key or certificate to block unsigned or altered code. Similarly, in legal documents, electronic signatures employ public-key digital signatures to provide non-repudiation, as seen in frameworks like the U.S. ESIGN Act, which recognizes signatures verifiable via public keys as binding equivalents to handwritten ones. This ensures contracts or approvals cannot be repudiated, with timestamps adding proof of creation time.
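The challenge-response flow can be sketched in a few lines of Python, here using Ed25519 signatures from the third-party pyca/cryptography library (the steps follow the description above; all names are illustrative):

```python
# Minimal challenge-response authentication sketch with Ed25519.
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Claimant holds a long-term key pair; verifier already knows the public key.
claimant_key = Ed25519PrivateKey.generate()
known_public_key = claimant_key.public_key()

# 1. Verifier issues a fresh random challenge (nonce) to prevent replay.
challenge = os.urandom(32)

# 2. Claimant proves possession of the private key by signing the nonce.
response = claimant_key.sign(challenge)

# 3. Verifier checks the signature against the stored public key.
try:
    known_public_key.verify(response, challenge)
    print("authenticated: claimant controls the private key")
except InvalidSignature:
    print("authentication failed")
```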

Hybrid Systems

Combining with Symmetric Cryptography

Public-key cryptography, while enabling secure key exchange without prior shared secrets, is computationally intensive and significantly slower for encrypting large volumes of data compared to symmetric cryptography. Symmetric algorithms, such as AES, excel at efficiently processing bulk data due to their simpler operations, but they require a secure channel for key distribution to prevent interception. This disparity in performance—where public-key methods can be up to 1,000 times slower than symmetric ones for equivalent security levels—necessitates a hybrid approach to leverage the strengths of both paradigms.

In hybrid systems, public-key cryptography facilitates the secure exchange of a temporary symmetric key, which is then used to encrypt the actual data. The standard pattern involves the sender generating a random symmetric key, applying it to encrypt the message via a symmetric algorithm, and subsequently encrypting that symmetric key using the recipient's public key before transmission. Upon receipt, the recipient decrypts the symmetric key with their private key and uses it to decrypt the message. This method ensures confidentiality without the overhead of applying public-key operations to the entire data stream; a minimal sketch of the pattern appears below.

The efficiency gains from this integration are substantial; for instance, hybrid encryption achieves approximately 1,000-fold speedup in bulk data processing relative to pure public-key encryption, making it practical for real-world applications like secure file transfers or streaming. Standards such as Hybrid Public Key Encryption (HPKE) incorporate ephemeral Diffie-Hellman key exchanges within public-key frameworks to encapsulate symmetric keys securely, enhancing forward secrecy while maintaining compatibility with symmetric ciphers. This conceptual hybrid model underpins many secure communication protocols, balancing security and performance effectively.
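The following Python sketch illustrates the pattern just described (AES-GCM for the bulk data, RSA-OAEP to wrap the session key), assuming the third-party pyca/cryptography library; key sizes and names are illustrative:

```python
# Hybrid encryption sketch: symmetric cipher for bulk data,
# public-key wrap for the short session key.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

recipient_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Sender: encrypt the bulk payload under a fresh symmetric key...
session_key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
bulk_data = b"large payload " * 100_000
ciphertext = AESGCM(session_key).encrypt(nonce, bulk_data, None)

# ...then wrap only the 32-byte session key with the recipient's public key.
wrapped_key = recipient_key.public_key().encrypt(session_key, oaep)

# Recipient: unwrap the session key, then decrypt the bulk data.
recovered_key = recipient_key.decrypt(wrapped_key, oaep)
assert AESGCM(recovered_key).decrypt(nonce, ciphertext, None) == bulk_data
```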

Protocol Examples

Public-key cryptography is integral to several widely adopted hybrid protocols, where it facilitates initial authentication and key agreement before transitioning to efficient symmetric encryption for the bulk of data transmission. These protocols leverage asymmetric mechanisms to establish trust and shared secrets securely over untrusted networks, ensuring confidentiality, integrity, and authenticity. Representative examples include the Transport Layer Security (TLS) handshake, Secure Shell (SSH), and Internet Protocol Security (IPSec) with Internet Key Exchange (IKE).

In the TLS 1.3 handshake, public-key cryptography is employed for server authentication and ephemeral key agreement. The server presents an X.509 certificate containing its public key, typically based on RSA or elliptic curve cryptography (ECC), which the client verifies against a trusted certificate authority. The server then signs a handshake transcript using its private key (via algorithms like RSA-PSS or ECDSA) to prove possession and authenticity. Concurrently, the client and server perform an ephemeral Diffie-Hellman (DHE) or elliptic curve Diffie-Hellman (ECDHE) exchange using supported groups such as x25519 or secp256r1 to derive a shared secret. This secret, combined with the handshake transcript via the HKDF key derivation function, generates symmetric keys for AES-GCM or ChaCha20-Poly1305 encryption of subsequent application data, embodying the hybrid model.

The SSH protocol utilizes public-key cryptography primarily for host and user authentication, alongside key exchange for session establishment. During the transport layer negotiation, the server authenticates itself by signing the key exchange hash with its host private key (e.g., RSA or DSA), allowing the client to verify the signature against the server's known public key. User authentication follows via the "publickey" method, where the client proves possession of a private key by signing a challenge message, supporting algorithms like ssh-rsa or ecdsa-sha2-nistp256. Key agreement occurs through Diffie-Hellman groups (e.g., group14-sha256), producing a shared secret from which symmetric session keys are derived for ciphers like AES and integrity via HMAC, securing the remote login channel.

In IPSec, public-key cryptography is optionally integrated into the IKEv2 protocol for peer authentication and key establishment during security association setup. Authentication employs digital signatures in AUTH payloads, using RSA or DSS on X.509 certificates to verify peer identities, with support for certificate formats and identifiers like FQDN or distinguished names. Key establishment relies on ephemeral Diffie-Hellman (e.g., Group 14: 2048-bit modulus) to establish shared secrets with perfect forward secrecy, from which symmetric keys are derived via a pseudorandom function (PRF) like HMAC-SHA-256. These keys then protect IP traffic using Encapsulating Security Payload (ESP) with symmetric algorithms such as AES in GCM mode, enabling secure VPN tunnels. While pre-shared keys are common, public-key methods enhance scalability in large deployments.

The following table compares these protocols in terms of public-key types and recommended security levels, based on NIST guidelines for at least 112-bit strength (on the order of 2^112 operations to break).
| Protocol | Key Types Used for Authentication | Key Types Used for Key Exchange | Recommended Security Levels (Key Sizes) |
|---|---|---|---|
| TLS 1.3 | RSA (2048 bits), ECDSA (P-256 or P-384) | ECDHE (x25519 or secp256r1, ~256 bits) | 128-bit (ECC) or 112-bit (RSA/DH) |
| SSH | RSA (2048 bits), ECDSA (P-256 or P-384), DSA | DH (2048 bits, Group 14) | 112-bit (RSA/DH) or 128-bit (ECC) |
| IPSec (IKEv2) | RSA (2048 bits), DSS (2048 bits) | DH (2048 bits, Group 14) | 112-bit (RSA/DSS/DH) |

Security Considerations

Algorithmic Vulnerabilities

Public-key cryptographic algorithms, while mathematically sound under their assumed hardness problems, are susceptible to vulnerabilities arising from their implementations, particularly those that leak information through non-ideal execution environments. These algorithmic weaknesses, often termed side-channel or implementation attacks, exploit physical or observational characteristics of computations rather than breaking the underlying mathematical foundations. Such vulnerabilities can compromise private keys or decrypt ciphertexts without directly solving problems like integer factorization or discrete logarithms.

Side-channel attacks represent a prominent class of algorithmic vulnerabilities, where attackers infer secret information from unintended information leakage during key operations. Timing attacks, first demonstrated by Paul Kocher in 1996, exploit variations in the execution time of cryptographic operations, such as modular exponentiation in RSA or Diffie-Hellman, to recover private keys by analyzing timing differences correlated with intermediate values. For instance, in RSA implementations, the time taken for squaring and multiplication steps can reveal bits of the private exponent if not constant-time. Power analysis attacks extend this concept by measuring electrical power consumption during computations. Simple power analysis (SPA) observes direct patterns in power traces to distinguish operations like multiplications from squarings in modular exponentiation, while differential power analysis (DPA), introduced by Kocher, Jaffe, and Jun in 1999, uses statistical methods on multiple traces to isolate key-dependent signals, enabling key recovery from devices like smart cards with as few as a few hundred measurements. These attacks target public-key primitives broadly, including RSA and elliptic curve operations, by correlating power fluctuations with data manipulations.

Fault injection attacks induce computational errors during algorithm execution to reveal private keys, exploiting the sensitivity of public-key schemes to arithmetic integrity. In 1997, Boneh, DeMillo, and Lipton demonstrated that injecting a single fault into an RSA signature computation using the Chinese Remainder Theorem (CRT) allows an attacker to factor the modulus and recover the private key, as the faulty output provides equations solvable via a greatest-common-divisor computation. This vulnerability affects CRT-optimized implementations, where a fault in one modular computation yields related faulty and correct signatures that expose the factorization. Similar techniques apply to other schemes, such as elliptic curve digital signatures, where induced faults in point multiplications can leak scalar secrets. Countermeasures like error detection and redundancy are essential but increase computational overhead.

Padding oracle attacks exploit flaws in the padding schemes used within public-key encryption protocols, allowing adaptive chosen-ciphertext decryption without the private key. In 1998, Daniel Bleichenbacher described an attack on RSA with PKCS#1 v1.5 padding, where an oracle revealing whether a ciphertext decrypts to validly padded plaintext enables the attacker to iteratively refine the plaintext space, decrypting arbitrary messages with around 360,000 queries in practice. This vulnerability stems from the malleability of RSA and the receiving system's distinction between valid and invalid paddings, affecting protocols like SSL/TLS. Similarly, Serge Vaudenay's 2002 analysis of CBC-mode padding in hybrid systems revealed that padding validation can decrypt ciphertexts block-by-block, applicable to public-key wrapped symmetric keys in standards like SSL/TLS and WTLS, with decryption requiring roughly 128*b queries for a b-block message. These attacks highlight the need for padding and validity checks that leak no information, leading to the adoption of stricter schemes like OAEP and countermeasures in RFC 3447.
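As a small illustration of how such leakage arises and is avoided in practice, the Python sketch below contrasts a naive byte comparison, whose early exit creates a timing signal, with the standard library's constant-time alternative (function names are illustrative):

```python
# Timing side channel in naive comparison vs. a constant-time check.
import hmac

def leaky_equal(a: bytes, b: bytes) -> bool:
    # Returns early at the first mismatching byte, so the running time
    # reveals how long a matching prefix the attacker has guessed.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def constant_time_equal(a: bytes, b: bytes) -> bool:
    # hmac.compare_digest examines every byte regardless of mismatches,
    # removing the data-dependent timing signal.
    return hmac.compare_digest(a, b)
```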

Key Management Challenges

One significant challenge in public-key cryptography arises from key alteration attacks, where an adversary performs a man-in-the-middle (MITM) substitution by intercepting and replacing a legitimate public key with a malicious one during distribution or exchange. This allows the attacker to decrypt intercepted communications or forge signatures while remaining undetected, as the victim and intended recipient continue using the substituted key. To mitigate such attacks, public keys are typically distributed via trusted channels or certified through public key infrastructure (PKI) mechanisms that bind keys to verified identities.

Another critical issue involves key revocation and expiration, as public keys must be invalidated when compromised, superseded, or reaching the end of their cryptoperiod to prevent misuse. Mechanisms like Certificate Revocation Lists (CRLs), standardized in the X.509 framework, provide a signed list issued by a certification authority (CA) containing serial numbers of revoked certificates, along with revocation dates and reasons such as key compromise. Relying parties query CRLs periodically or use extensions like delta CRLs for incremental updates to check certificate status during validation, ensuring that expired or invalid keys are not trusted. The Online Certificate Status Protocol (OCSP) serves as an alternative for real-time queries, though CRLs remain foundational for batch revocation in large-scale PKI deployments.

Secure storage of private keys poses substantial risks, as exposure to theft or unauthorized access can compromise entire cryptographic systems. Hardware Security Modules (HSMs), validated under standards like FIPS 140, are widely recommended to protect private keys by performing cryptographic operations within tamper-resistant hardware, preventing key export and limiting access to authorized processes. These modules ensure confidentiality and integrity through physical and logical controls, such as split knowledge procedures where no single entity holds full access, thereby reducing insider threats and side-channel attacks.

Backup and recovery processes introduce further challenges, as private keys must be securely archived for continuity of operations without enabling unauthorized restoration. Key recovery systems (KRS) often involve escrowing portions of keys with trusted third parties or using encrypted backups stored in separate, physically secure locations, with recovery requiring multi-party approval to maintain accountability. NIST guidelines emphasize that backups should use approved key-wrapping techniques and be protected equivalently to operational keys, while destruction of unnecessary copies prevents accumulation of vulnerabilities over time. PKI frameworks support such recovery through CA-managed policies.

Infrastructure and Metadata Risks

Public Key Infrastructure (PKI) forms the foundational framework for managing public keys and certificates in asymmetric cryptography systems, enabling secure verification of identities and trust establishment across networks. Central to PKI are Certificate Authorities (CAs), which are trusted entities responsible for issuing, signing, and managing digital certificates that bind public keys to specific identities or entities. Registration Authorities (RAs) complement CAs by performing identity verification and authentication tasks on behalf of the CA before certificate issuance, ensuring that only legitimate requests are processed. Trust chains, or certification paths, link end-entity certificates back to a trusted root CA through a series of intermediate certificates, allowing relying parties to validate authenticity via transitive trust.

A primary infrastructure risk in PKI arises from CA compromise, where attackers gain unauthorized access to a CA's private keys or systems, enabling the issuance of fraudulent certificates that can impersonate legitimate entities. The 2011 DigiNotar breach exemplifies this vulnerability: hackers infiltrated the Dutch CA's network in June 2011, compromising its systems and issuing over 500 rogue certificates for high-profile domains like google.com, which were used in man-in-the-middle attacks targeting Iranian users. This incident eroded global trust in the affected CA, leading to its revocation from major browser trust stores and eventual bankruptcy in September 2011, while prompting widespread reforms in CA auditing and liability standards. Certificate mis-issuance represents another systemic threat, occurring when CAs erroneously validate and issue certificates to unauthorized parties due to flawed vetting processes or insider errors, potentially enabling phishing sites or unauthorized access. Studies of web PKI certificates have identified mis-issuance rates through tools like ZLint, revealing persistent errors in validation that undermine the entire trust model.

Beyond core infrastructure, metadata risks in public-key cryptography persist even when payloads are encrypted, as ancillary data such as headers, timestamps, and addressing details can leak sensitive patterns about communication endpoints, volumes, or timings. For instance, in protocols like TLS, unencrypted headers may expose server names or negotiation parameters, while embedded timestamps in certificates can reveal issuance patterns indicative of user behavior. Such leaks facilitate traffic-analysis attacks, where adversaries infer relationships or routines without decrypting content, as demonstrated in schemes where servers manipulate timings to expose communication graphs.

To mitigate these infrastructure risks, operators employ Hardware Security Modules (HSMs), tamper-resistant devices that securely generate, store, and use private keys, preventing extraction even under physical attack and ensuring compliance with standards like FIPS 140-3. For metadata exposure, emerging protocols incorporate encryption for headers and auxiliary fields, such as in forward-edge designs that obfuscate version, length, and protocol fields to hinder detection and analysis. These measures, including multi-server chaining with mathematical guarantees against pattern revelation, enhance privacy without compromising core cryptographic functions.

Historical Development

Pre-1970s Concepts

In the early 17th century, Francis Bacon described a biliteral cipher in his work De Augmentis Scientiarum, a steganographic system that encoded messages using two distinct forms of letters (e.g., variations in typefaces or shapes) embedded within an innocuous carrier text. This method allowed for the concealment of information without altering the apparent meaning of the document, representing an early exploration of communication where the encoding mechanism could be somewhat decoupled from the shared understanding of the content, though it relied on the recipient knowing the two alphabets to decode.

During the 1940s and 1950s, Claude Shannon applied information theory to cryptography in his seminal 1949 paper "Communication Theory of Secrecy Systems," formalizing the requirements for secure communication. Shannon proved that achieving perfect secrecy necessitates a secret key at least as long as the message, randomly generated, and used only once (as in the one-time pad), but he underscored the practical challenge of securely distributing such keys between parties over insecure channels without prior shared secrets—a limitation that became known as the key distribution problem. This analysis highlighted the theoretical need for alternative approaches to key establishment, setting the stage for later innovations in asymmetric systems.

In 1970, British cryptographer James H. Ellis at GCHQ proposed the concept of "non-secret encryption" in an internal existence proof, demonstrating theoretically that encryption could be performed using a public process while only the recipient held the secret needed for decryption, without requiring secure key exchange beforehand. Ellis's work outlined a lookup-table model for such a system but lacked a practical implementation, remaining classified until the late 1990s; it anticipated the core idea of public-key cryptography by separating the roles of encryption and decryption keys.

Classified and Public Innovations

In the early 1970s, researchers at the UK's Government Communications Headquarters (GCHQ), specifically within its Communications-Electronics Security Group (CESG), developed the foundational concepts of public-key cryptography in a classified environment. James H. Ellis initiated this work in 1969 by exploring solutions to key distribution problems, leading to a theoretical demonstration of the feasibility of non-secret encryption—a system where the encryption key could be publicly shared without compromising security. In January 1970, Ellis formalized this idea in an internal report titled "The Possibility of Secure Non-Secret Digital Encryption," providing an existence proof but lacking a practical implementation due to computational limitations at the time.

Building on Ellis's vision, Clifford C. Cocks, a mathematician who joined GCHQ in 1973, devised a practical scheme in November of that year, using properties of large prime numbers and modular arithmetic to enable asymmetric encryption similar to the later public RSA algorithm. Independently addressing the key distribution challenge, Malcolm J. Williamson, who joined GCHQ in 1974, developed a method in January 1974 for two parties to agree on a shared secret key over an insecure channel, akin to discrete logarithm-based systems. These innovations, kept secret for national security reasons, established the core principles of public-key cryptography by 1975, though they remained internal to GCHQ and were not deployed operationally due to hardware constraints.

The public emergence of public-key cryptography occurred in the mid-1970s through unclassified academic work in the United States. In November 1976, Whitfield Diffie and Martin E. Hellman published "New Directions in Cryptography" in IEEE Transactions on Information Theory, introducing a key agreement protocol that allowed secure key exchange without prior shared secrets, relying on the computational difficulty of the discrete logarithm problem. This paper laid the groundwork for public-key systems by emphasizing one-way functions and public directories for key distribution. Following closely, in February 1978 (submitted April 1977), Ronald L. Rivest, Adi Shamir, and Leonard M. Adleman described the RSA cryptosystem in Communications of the ACM, proposing an encryption method based on the hardness of integer factorization, which supported both confidentiality and digital signatures.

The GCHQ contributions remained classified until December 1997, when the UK government declassified key documents and publicly announced the prior inventions at a conference in Cirencester, England, acknowledging the independent public discoveries while revealing the earlier classified timeline. This disclosure highlighted that GCHQ's work predated the public papers by several years but had no immediate impact on the open field due to secrecy.

On the commercialization front, Rivest, Shamir, and Adleman filed a U.S. patent application for the RSA algorithm on December 14, 1977 (U.S. Patent 4,405,829, granted September 20, 1983), marking the first patent for a public-key system. To exploit this invention, they founded RSA Data Security Inc. in 1982, which licensed the technology to software vendors and integrated RSA into products like secure email and network protocols, driving its adoption in commercial applications despite ongoing enforcement until the patent's expiration in 2000.

Specific Algorithms

RSA and Integer Factorization

The RSA algorithm, named after its inventors Ronald Rivest, Adi Shamir, and Leonard Adleman, is a foundational public-key cryptosystem introduced in 1977 that enables secure data transmission without prior exchange of secret keys. It operates on the principle of asymmetric encryption, where a public key encrypts messages and a corresponding private key decrypts them, making it suitable for applications like secure communication and digital signatures. The algorithm's design leverages basic number theory to ensure that deriving the private key from the public key is computationally infeasible for large parameters.

Key generation in RSA begins with selecting two large, distinct prime numbers, denoted as p and q, typically of similar bit length to balance security and efficiency. The modulus n is then computed as the product n = p \times q, which forms the core of both the public and private keys. The totient \phi(n) = (p-1)(q-1) is calculated next, followed by choosing a public exponent e such that 1 < e < \phi(n) and \gcd(e, \phi(n)) = 1, often using small values like 65537 for performance. The private exponent d is derived as the modular multiplicative inverse of e modulo \phi(n), satisfying d \times e \equiv 1 \pmod{\phi(n)}. The public key consists of (n, e), while the private key is (n, d), with p and q kept secret to prevent reconstruction of \phi(n).

Encryption transforms a plaintext message m (where 0 \leq m < n) into ciphertext c using the public key via the formula c = m^e \mod n. Decryption reverses this process with the private key: m = c^d \mod n. This works because of Euler's theorem, which guarantees that m^{k \phi(n) + 1} \equiv m \pmod{n} for \gcd(m, n) = 1, and since d \equiv e^{-1} \pmod{\phi(n)}, it follows that c^d \equiv m \pmod{n}; the relation holds even if \gcd(m, n) \neq 1 due to the Chinese Remainder Theorem applied to p and q. In practice, messages longer than n are segmented or hybridized with symmetric ciphers, but raw RSA without padding is deterministic and vulnerable to certain attacks, necessitating probabilistic enhancements.

The security of RSA fundamentally relies on the computational hardness of integer factorization: computing n from p and q is easy, but recovering p and q from n is extremely difficult without exhaustive search or advanced algorithms like the General Number Field Sieve, especially for large n. Factoring n allows computation of \phi(n) and thus d, compromising the system; no efficient classical algorithm exists for factoring semiprimes of sufficient size, providing the assumed one-way trapdoor function. However, security also depends on proper implementation, as side-channel attacks or poor randomness in key generation can weaken it independently of factorization difficulty.

To mitigate vulnerabilities in raw RSA, such as chosen-ciphertext attacks, padding schemes are essential. The Optimal Asymmetric Encryption Padding (OAEP) scheme, standardized in PKCS#1, incorporates a Feistel-like structure with a hash function (e.g., SHA-256) and mask generation function to add randomness and diffusion to the plaintext before exponentiation, ensuring semantic security under the RSA assumption. OAEP transforms the message into a padded block that includes seed bytes, making identical plaintexts yield different ciphertexts and resisting adaptive attacks. For key sizes, NIST recommends at least 2048 bits for RSA moduli to achieve 112 bits of security through 2030, with 3072 bits for longer-term protection against foreseeable advances in factoring; smaller keys like 1024 bits are deprecated due to demonstrated factorizations. Implementations must generate primes using probabilistic primality tests (e.g., Miller-Rabin) and high-quality randomness to ensure their secrecy and uniformity.
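The key-generation steps above can be traced end to end with deliberately small primes; the following Python sketch is illustrative only, since real keys require 2048-bit or larger moduli and probabilistic prime generation:

```python
# Toy RSA key generation following the steps described above.
from math import gcd

p, q = 1009, 1013              # two distinct secret primes
n = p * q                      # public modulus: 1,022,117
phi = (p - 1) * (q - 1)        # totient: 1,020,096
e = 65537                      # common public exponent
assert gcd(e, phi) == 1        # e must be invertible mod phi
d = pow(e, -1, phi)            # private exponent: d*e = 1 mod phi

m = 424242                     # message as an integer with 0 <= m < n
c = pow(m, e, n)               # encrypt with public key (n, e)
assert pow(c, d, n) == m       # decrypt with private key (n, d)
```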

Elliptic Curve Cryptography

Elliptic curve cryptography (ECC) is a public-key cryptosystem that leverages the algebraic structure of elliptic curves over finite fields to provide security based on the difficulty of the elliptic curve discrete logarithm problem (ECDLP). Proposed independently by Neal Koblitz and Victor S. Miller in the mid-1980s, ECC enables efficient implementations of key agreement, encryption, and digital signatures with smaller key sizes compared to other systems offering equivalent security levels. The core operations rely on the group structure of points on the curve, where scalar multiplication serves as the foundational hard problem.

An elliptic curve over a prime finite field \mathbb{F}_p (with p > 3) is typically defined by the Weierstrass equation y^2 = x^3 + ax + b \pmod{p}, where a, b \in \mathbb{F}_p satisfy the non-singularity condition 4a^3 + 27b^2 \not\equiv 0 \pmod{p} to ensure the curve is smooth. The set of points (x, y) satisfying this equation, augmented by a point at infinity \mathcal{O} serving as the identity, forms an abelian group under a geometrically defined addition law. Point addition combines two points to yield a third on the curve: for distinct points P = (x_1, y_1) and Q = (x_2, y_2), the slope is \lambda = (y_2 - y_1)(x_2 - x_1)^{-1} \pmod{p}, and the sum R = P + Q = (x_3, y_3) satisfies x_3 = \lambda^2 - x_1 - x_2 \pmod{p}, \quad y_3 = \lambda(x_1 - x_3) - y_1 \pmod{p}. Point doubling for P = (x_1, y_1) uses \lambda = (3x_1^2 + a)(2y_1)^{-1} \pmod{p}, with the same formulas for R = 2P. These operations are efficient and enable repeated addition to compute scalar multiples kP for integer k.

Key generation in ECC involves selecting a standardized curve with a large prime order subgroup generated by a base point G, where the subgroup order n is approximately the size of the field (e.g., n \approx p). The private key is a scalar k \in \{1, 2, \dots, n-1\}, and the corresponding public key is the point Q = kG, computed via scalar multiplication. Security rests on the ECDLP: given G and Q, computing k is computationally infeasible for properly chosen curves, as no subexponential algorithms are known, unlike some discrete logarithm variants in finite fields.

The elliptic curve Diffie-Hellman (ECDH) protocol adapts key agreement to this setting: party A selects private key a and computes A = aG, while party B uses b and B = bG; the shared secret is then abG, derived as aB = bA. This is standardized in NIST SP 800-56A for pairwise key establishment using elliptic curves. For digital signatures, the Elliptic Curve Digital Signature Algorithm (ECDSA) generates a pair (r, s) where r is the x-coordinate (mod n) of kG for random k, and s = k^{-1}(e + rd) \pmod{n} with private key d and message hash e; verification checks whether the x-coordinate of s^{-1}e G + s^{-1}r Q equals r. ECDSA is specified in FIPS 186-5 as an approved method for federal systems.

A primary advantage of ECC is its efficiency: it achieves comparable security strengths to RSA with significantly smaller keys and faster computations, reducing storage, bandwidth, and processing demands—particularly beneficial for resource-constrained devices. For instance, a 256-bit ECC key provides approximately 128 bits of security, equivalent to a 3072-bit RSA key, while a 384-bit ECC key matches 192-bit security against a 7680-bit RSA key. These levels are recommended by NIST for protecting sensitive data through at least 2030 (128-bit) or longer.
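The group law translates directly into code. Below is a minimal Python sketch of affine point addition, doubling, and double-and-add scalar multiplication over a toy curve (p = 97, a = 2, b = 3, base point G = (3, 6)); all parameters are illustrative and far too small for real security:

```python
# Affine point arithmetic on y^2 = x^3 + ax + b over F_p (toy curve).
p, a, b = 97, 2, 3
O = None  # point at infinity (group identity)

def ec_add(P, Q):
    if P is O:
        return Q
    if Q is O:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return O                                            # P + (-P) = O
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p    # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p           # chord slope
    x3 = (lam * lam - x1 - x2) % p
    y3 = (lam * (x1 - x3) - y1) % p
    return (x3, y3)

def scalar_mult(k, P):
    # Double-and-add: computes kP in O(log k) group operations.
    R = O
    while k:
        if k & 1:
            R = ec_add(R, P)
        P = ec_add(P, P)
        k >>= 1
    return R

G = (3, 6)                                     # base point on the toy curve
assert (6 * 6) % p == (3**3 + a * 3 + b) % p   # G satisfies the equation
Q = scalar_mult(13, G)                         # public key for private k = 13
```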

Discrete Logarithm-Based Systems

Discrete logarithm-based systems rely on the computational hardness of the discrete logarithm problem (DLP) in finite fields, particularly in the multiplicative group of integers modulo a large prime p. The DLP is defined as follows: given a prime p, a generator g of the multiplicative group \mathbb{Z}_p^\times, and an element y \in \mathbb{Z}_p^\times, find the integer x such that g^x \equiv y \pmod{p}, where 0 \leq x < p-1. This problem is believed to be intractable for sufficiently large p (typically thousands of bits), as no efficient algorithm is known to solve it in general, even though the forward operation of modular exponentiation is easy.

The Diffie-Hellman (DH) key exchange, introduced in 1976, was the first practical application of the DLP in public-key cryptography and enables two parties to establish a shared secret over an insecure channel without prior secrets. In the protocol, both parties agree on public parameters p and g; Alice selects a private exponent a and sends g^a \pmod{p} to Bob, who selects b and sends g^b \pmod{p}. The shared secret is then g^{ab} \pmod{p}, computable by Alice as (g^b)^a \pmod{p} and by Bob as (g^a)^b \pmod{p}. Security relies on the difficulty of the computational Diffie-Hellman problem, a variant of the DLP, preventing an eavesdropper from deriving the secret from the public exchanges.

ElGamal encryption, proposed by Taher ElGamal in 1985, extends the DH mechanism to provide asymmetric encryption based on the DLP. The public key consists of p, g, and y = g^x \pmod{p} where x is the private key. To encrypt a message m (with 0 < m < p), the sender chooses a random k and computes the ciphertext as a pair (c_1, c_2), where c_1 = g^k \pmod{p} and c_2 = m \cdot y^k \pmod{p}. Decryption recovers m = c_2 \cdot (c_1^x)^{-1} \pmod{p}, leveraging the private key x. This scheme is probabilistically secure under the decisional Diffie-Hellman assumption but requires careful message encoding and padding for security in practice.

Variants of DL-based systems include the Digital Signature Algorithm (DSA), standardized by NIST in the Digital Signature Standard (DSS), which adapts ElGamal principles for digital signatures. In DSA, a signer uses private key x to produce a signature (r, s) on a message hash, verifiable with the public key y = g^x \pmod{p}; it provides unforgeability under the DLP's hardness. Parameter selection is critical for security: NIST recommends domain parameters with a prime p of at least 2048 bits (providing approximately 112 bits of security) for both DH and DSA, with subgroup order q of 256 bits, and larger sizes like 3072 bits for extended protection against advances in algorithms such as the number field sieve.
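A minimal Python sketch of the exchange follows, using a Mersenne prime as an illustrative modulus (real deployments use standardized groups with 2048-bit or larger primes):

```python
# Toy Diffie-Hellman exchange over Z_p^x (illustrative parameters only).
import secrets

p = 2**127 - 1                       # Mersenne prime M127 (far too small in practice)
g = 3                                # public base element

a = secrets.randbelow(p - 2) + 1     # Alice's private exponent
b = secrets.randbelow(p - 2) + 1     # Bob's private exponent

A = pow(g, a, p)                     # Alice sends g^a mod p
B = pow(g, b, p)                     # Bob sends g^b mod p

shared_alice = pow(B, a, p)          # (g^b)^a mod p
shared_bob = pow(A, b, p)            # (g^a)^b mod p
assert shared_alice == shared_bob    # both parties derive g^(ab) mod p
```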

Modern and Future Directions

Post-Quantum Cryptography

Public-key cryptography faces existential threats from quantum computing, particularly through Shor's algorithm, which was introduced by Peter Shor in 1994 and enables efficient integer factorization and discrete logarithm computation on a sufficiently large quantum computer. This quantum algorithm would render widely used systems like RSA, elliptic curve cryptography (ECC), and discrete logarithm-based protocols insecure by solving their underlying hard problems in polynomial time, potentially allowing decryption of encrypted data harvested today for future quantum attacks. As quantum computers advance toward practical scalability, the need for quantum-resistant alternatives has driven the development of post-quantum cryptography (PQC), which relies on mathematical problems believed to be resistant to both classical and quantum attacks.

PQC algorithms are categorized into several families, including lattice-based, hash-based, and code-based schemes, each offering key encapsulation or digital signatures with varying trade-offs in security and efficiency. Lattice-based candidates, such as CRYSTALS-Kyber, leverage the hardness of problems like Learning With Errors (LWE) and have emerged as frontrunners due to their versatility in key encapsulation and signatures. Hash-based signatures like SPHINCS+ provide provable security based on the collision resistance of hash functions, making them suitable for long-term digital signatures without relying on unproven number-theoretic assumptions. Code-based systems, exemplified by McEliece variants and the selected HQC, base their security on the difficulty of decoding random linear error-correcting codes, offering robust options with decades of cryptographic scrutiny.

The U.S. National Institute of Standards and Technology (NIST) has led the global standardization effort for PQC since 2016, conducting multiple rounds of public evaluation to select algorithms resistant to quantum threats. In 2022, NIST advanced CRYSTALS-Kyber, CRYSTALS-Dilithium, Falcon, and SPHINCS+ to the final round, culminating in the publication of Federal Information Processing Standards (FIPS) 203 (ML-KEM from Kyber), FIPS 204 (ML-DSA from Dilithium), and FIPS 205 (SLH-DSA from SPHINCS+) on August 13, 2024. In March 2025, NIST selected the code-based HQC algorithm as a backup key-encapsulation mechanism to diversify options beyond lattice-based schemes, with a draft standard expected by 2026. NIST continues evaluation of additional algorithms like Falcon for digital signatures, with standards anticipated in 2025. These selections prioritize algorithms with strong security margins, efficient implementations, and minimal side-channel vulnerabilities, as verified through extensive cryptanalysis.

Transitioning to PQC introduces significant challenges, including substantially larger key and ciphertext sizes compared to classical systems—for instance, Kyber-512 public keys are about 800 bytes versus 65 bytes for an uncompressed NIST P-256 public key—leading to increased storage, bandwidth, and latency in protocols like TLS. Performance impacts are notable, with PQC signatures and encapsulations often requiring 10-100 times more computational resources than classical equivalents, straining resource-constrained devices such as IoT endpoints and necessitating hybrid approaches during migration. Organizations must address interoperability with legacy systems, update cryptographic libraries, and conduct risk assessments to mitigate "harvest now, decrypt later" threats, with NIST recommending phased crypto-agility strategies to facilitate this shift.

Standardization and Emerging Threats

The Internet Engineering Task Force (IETF) has advanced standardization efforts for integrating post-quantum cryptography (PQC) into existing protocols, particularly through hybrid approaches that maintain compatibility with classical systems. RFC 9180, published in May 2022, defines a hybrid public key encryption (HPKE) scheme that combines key encapsulation mechanisms (KEMs) with key derivation functions and authenticated encryption, enabling the use of both classical and quantum-resistant algorithms to encrypt arbitrary-sized plaintexts. The standard's modes, such as those incorporating pre-shared keys or sender authentication, can enhance resilience against quantum threats even when paired with non-quantum-resistant KEMs. Following TLS 1.3's ratification in 2018, post-2020 updates have emphasized its role as the baseline for PQC integration, with the IETF draft on hybrid key exchange in TLS 1.3 (last updated in November 2025 and now in the RFC Editor's Queue) specifying methods to concatenate public keys and shared secrets from multiple algorithms without adding round trips, ensuring security if at least one component remains unbroken.

Emerging threats to public-key cryptography extend beyond quantum risks, including AI-assisted cryptanalysis that leverages machine learning to identify patterns in encrypted data or optimize attacks on underlying mathematical problems. Systematic reviews highlight AI's potential to accelerate differential cryptanalysis or infer keys from side-channel data, posing risks to systems like RSA and ECC by exploiting computational inefficiencies in traditional defenses. Additionally, supply-chain compromises targeting cryptographic keys have surged, with attackers exploiting vulnerabilities in software dependencies or build processes to extract private keys, as seen in incidents involving code-signing abuse, where stolen keys enable malware distribution under trusted identities. These attacks often involve side-channel techniques, such as fault injection on devices, to reveal sensitive cryptographic outputs during manufacturing or distribution.

As of November 2025, the European Union's Quantum Flagship initiative remains active in its second phase under Horizon Europe, with a budget exceeding €400 million across more than 20 new projects focused on quantum communication, computing, and sensing. Globally, migration timelines to PQC vary by region but emphasize phased transitions: the United Kingdom's National Cyber Security Centre (NCSC) recommends completing discovery of cryptographic assets by 2028 and high-priority migrations by 2031, with initial validations of PQC modules expected in 2025; Canada's Cyber Centre outlines a roadmap for non-classified systems that starts inventory in 2025 and reaches full implementation by 2035; and the Post-Quantum Cryptography Coalition's roadmap urges organizations to align preparation with their threat models, projecting widespread adoption by 2030 to protect long-lived data.

Hybrid PQC designs address transition challenges by combining classical algorithms, such as those based on discrete logarithms, with post-quantum primitives like lattice-based schemes to ensure backward compatibility and security during migration. Standardized terminology for these post-quantum/traditional (PQ/T) hybrid schemes defines them as multi-algorithm constructions for key establishment or signatures that remain secure as long as at least one component resists both classical and quantum adversaries.
For instance, protocols like TLS 1.3 can incorporate PQ/T hybrids through composite key exchanges, allowing legacy systems to interoperate while providing protection against quantum attacks, as outlined in NCSC guidance for minimal disruption in enterprise environments. This approach mitigates risks from incomplete migrations by preserving authentication and confidentiality properties in mixed deployments; the sketch below illustrates the underlying key-combination step.
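A minimal sketch of that key-combination step, assuming the pyca/cryptography library for the classical X25519 exchange and for HKDF; the post-quantum shared secret is stubbed with random placeholder bytes (standing in for what an ML-KEM encapsulation would yield), since the point here is only how a PQ/T hybrid concatenates both secrets before key derivation.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Classical component: an X25519 Diffie-Hellman exchange.
client_priv = X25519PrivateKey.generate()
server_priv = X25519PrivateKey.generate()
classical_secret = client_priv.exchange(server_priv.public_key())

# Post-quantum component: placeholder bytes standing in for the 32-byte
# shared secret an ML-KEM encapsulation would produce.
pq_secret = os.urandom(32)

# Concatenate both secrets and derive the session key. The derived key
# remains secret as long as at least one component resists attack.
session_key = HKDF(
    algorithm=hashes.SHA256(),
    length=32,
    salt=None,
    info=b"pq/t hybrid toy",
).derive(classical_secret + pq_secret)
print(session_key.hex())
```

Concatenation before a single key-derivation step mirrors the approach of the IETF hybrid key exchange draft for TLS 1.3: neither secret is used directly, so an attacker must break both components to recover the session key.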
