Code signing
Code signing is a cryptographic process used to digitally sign software executables, scripts, and other code artifacts to verify their authenticity and integrity, ensuring that the code originates from a trusted author and has not been altered or tampered with since signing.[1] This technique employs digital signatures generated using a private key, paired with a public key certificate issued by a trusted certification authority (CA), allowing verifiers such as operating systems or users to confirm the signer's identity and detect any modifications.[2][3] In practice, code signing plays a critical role in the software supply chain by mitigating risks from malicious alterations, unauthorized distribution, and supply chain attacks, as highlighted in security frameworks for firmware, operating systems, mobile applications, and container images.[1] The process typically involves three key roles: the developer, who creates and submits the code; the signer, who applies the digital signature using protected private keys; and the verifier, who validates the signature against the signer's public key and certificate chain.[1]
Platforms like macOS require code signing for app distribution to enforce Gatekeeper protections, while Microsoft's Authenticode enables similar verification for Windows drivers and executables, often placing signatures in catalog files to support integrity checks without altering the core binaries.[2][3] Beyond basic verification, code signing supports advanced features such as timestamping from a Time Stamp Authority (TSA) to prove the exact signing time, enhancing long-term validity even after certificate expiration, and integration with hardware security modules (HSMs) for key protection against theft or compromise.[1]
Security considerations include selecting robust cryptographic algorithms, managing trust anchors through root CAs, and conducting regular audits to prevent issues like rogue certificates or weak key generation, which have been implicated in major incidents.[1] Overall, widespread adoption of code signing strengthens ecosystem trust, with requirements enforced by major vendors to block unsigned or invalidly signed code from execution.[2][3]
Fundamentals
Definition and Purpose
Code signing is a security process in which software developers attach a digital signature to executables, binaries, or scripts, employing public-key cryptography to verify the software's origin and ensure it has not been altered or tampered with since signing.[4] This mechanism allows end-users and systems to confirm that the code originates from a legitimate source, thereby distinguishing trusted software from potentially malicious alterations during distribution.[5] The primary purposes of code signing are to guarantee software integrity by detecting any post-signing modifications, authenticate the developer's identity to establish provenance, and foster trust in software distribution channels by preventing malware from masquerading as legitimate applications.[6] By embedding this cryptographic assurance, code signing mitigates risks associated with unverified code execution, such as the introduction of vulnerabilities or unauthorized changes.[7]
Code signing emerged in the mid-1990s alongside the development of digital signature standards like PKCS#7, which was published by RSA Security in the early 1990s.[8] It gained widespread adoption in the late 1990s for enterprise software distribution, driven by the need to secure executable content in growing networked environments, with technologies like Microsoft's Authenticode introduced in 1996.[9] This process typically relies on digital certificates issued by trusted certificate authorities to bind the signature to a verified identity.[10] Among its key benefits, code signing significantly reduces the risk of executing malicious or compromised code by providing verifiable proof of unaltered software.[11] It also enables operating systems and platforms to implement execution policies, such as restricting or blocking the running of unsigned applications to enhance overall system security.[12]
Technical Mechanism
The technical mechanism of code signing relies on asymmetric cryptography to ensure the integrity and authenticity of software executables, scripts, or other code artifacts. Developers begin by generating a public-private key pair using established cryptographic libraries, where the private key remains secret and the public key is associated with a digital certificate.[13][4] The code is then processed through a hashing algorithm to produce a fixed-size digest representing its contents; for example, SHA-256 is commonly used to generate a 256-bit hash value that uniquely identifies the unaltered code.[4][14] This hash is encrypted with the developer's private key to create a digital signature, which serves as proof that the code has not been modified since signing.[15] The signature, along with the associated public key certificate, is embedded into the code's metadata structure, such as the Portable Executable (PE) format for Windows binaries, forming a self-contained signed package.[13]
Verification occurs at runtime or during installation: the receiving system recomputes the hash of the current code, decrypts the embedded signature with the signer's public key to recover the original hash, and compares the two values; a match confirms the code's integrity.[4] The process then validates the public key's certificate chain, tracing back through intermediate certificates to a trusted root certificate authority (CA) to ensure the signer's identity is authentic and the certificate has not expired or been revoked.[4] This chain validation relies on standards like X.509, which defines the structure for public key certificates, including fields for the subject's name, public key, validity period, and issuer signature.[16]
Code signing employs standardized formats to encapsulate signatures and certificates, primarily the Cryptographic Message Syntax (CMS) as specified in RFC 5652, which evolved from PKCS#7 and supports signed data structures with multiple signers, digest algorithms, and optional attributes.[17] In CMS, the SignedData content type includes the encapsulated content info, certificates, and signer infos, where each signer info contains the signature value computed over the digest and signed attributes.[17] Hash algorithms have evolved to address security vulnerabilities; early implementations used MD5 (128-bit) and later SHA-1 (160-bit), but due to collision attacks, modern code signing mandates stronger algorithms like SHA-256 from the SHA-2 family or SHA-3 for enhanced resistance to cryptanalytic attacks.[14]
Practical implementation involves tools for generating and applying signatures. OpenSSL, an open-source cryptography library, provides command-line utilities like openssl cms for creating CMS/PKCS#7 signatures on arbitrary data, enabling custom code signing workflows.[4] These tools integrate with build systems such as Maven or CMake, allowing automated signing during compilation to embed signatures without manual intervention.[4]
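The hash, sign, and verify cycle described above can be illustrated with a short, self-contained sketch using Python's third-party cryptography package; the in-memory artifact, key size, and padding scheme are illustrative choices rather than a prescribed format.

    # Minimal sketch of the hash/sign/verify cycle, using the third-party
    # "cryptography" package. The artifact bytes and key size are illustrative.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    # Signer side: generate a key pair (in practice the private key would be
    # created and kept inside an HSM or other protected module).
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=3072)
    public_key = private_key.public_key()

    code_artifact = b"example executable bytes"   # stand-in for a real binary
    signature = private_key.sign(code_artifact, padding.PKCS1v15(), hashes.SHA256())

    # Verifier side: recompute the SHA-256 digest over the received bytes and
    # check it against the signature using the signer's public key. Any change
    # to the artifact makes verification fail.
    received = b"example executable bytes"
    try:
        public_key.verify(signature, received, padding.PKCS1v15(), hashes.SHA256())
        print("signature valid: code unmodified since signing")
    except InvalidSignature:
        print("signature invalid: code altered or wrong key")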
Security Features
Certificate Authorities and Trusted Identification
Certificate authorities (CAs) serve as trusted third-party entities that verify the identity of software developers or organizations before issuing X.509 digital certificates for code signing. Examples include DigiCert and Sectigo, which act as independent validators to ensure that only legitimate entities receive these certificates.[18][10] The primary role of a CA in this context is to perform due diligence on the applicant's identity, thereby establishing a foundation of trust that allows end-users and systems to authenticate the origin and integrity of signed code without direct knowledge of the signer.[19][20]
The trust model underpinning code signing certificates relies on a hierarchical chain within the public key infrastructure (PKI). An end-user code signing certificate is digitally signed by an intermediate CA, which is itself signed by higher-level intermediates or ultimately by a root CA. Root CAs are pre-trusted, with their public keys embedded in operating system and application trust stores, such as those in Microsoft Windows or Apple ecosystems.[21][22] This chain enables verifiers to recursively validate each certificate against the issuer's public key, culminating in confirmation against the trusted root, thus preventing forgery or impersonation in the signing process.[23]
The issuance process begins with the developer generating a key pair and submitting a certificate signing request (CSR) along with proof of identity to the CA. For organizations, this typically includes business registration documents, tax IDs, or addresses; for individuals, government-issued photo identification such as passports or driver's licenses is required. Per CA/B Forum Baseline Requirements, effective June 1, 2023, the private key must be generated, stored, and used exclusively within a cryptographic module certified to FIPS 140-2 Level 2 or Common Criteria EAL 4+ to protect against compromise.[24] The CA then conducts organization validation (OV), which involves confirming the entity's legal existence, operational address, and operational control through independent sources like public records or phone verification.[25][26][27] Upon approval, the CA issues the X.509 certificate, which embeds the developer's public key, distinguished name, serial number, and a validity period (historically up to 39 months, though the CA/Browser Forum has mandated a reduction to a maximum of 460 days for certificates issued after March 1, 2026).[28][29][30]
To address compromised or invalid certificates, CAs implement revocation mechanisms that allow real-time or periodic checks of certificate status. Certificate Revocation Lists (CRLs) are digitally signed files published by the CA at regular intervals, listing the serial numbers of revoked certificates along with revocation reasons and dates. Alternatively, the Online Certificate Status Protocol (OCSP) enables on-demand queries to the CA's server for the status of a specific certificate, providing responses such as "good," "revoked," or "unknown."[31][32] These tools ensure that systems can detect and reject signatures from invalidated certificates, maintaining the overall security of the code signing ecosystem.[33]
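As a rough illustration of the first step of issuance, the following sketch (using Python's cryptography package) generates a key pair and a CSR that would be submitted to a CA together with identity documentation; the subject fields are placeholders, and under current CA/B Forum rules a production key would have to be generated and held in certified hardware rather than in software as shown here.

    # Illustrative key pair and CSR generation with the "cryptography" package.
    # Subject fields are placeholders; a production code signing key must be
    # generated and stored in a certified hardware module, not in software.
    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa
    from cryptography.x509.oid import NameOID

    key = rsa.generate_private_key(public_exponent=65537, key_size=3072)

    csr = (
        x509.CertificateSigningRequestBuilder()
        .subject_name(x509.Name([
            x509.NameAttribute(NameOID.COUNTRY_NAME, "US"),
            x509.NameAttribute(NameOID.ORGANIZATION_NAME, "Example Software Ltd"),
            x509.NameAttribute(NameOID.COMMON_NAME, "Example Software Ltd"),
        ]))
        .sign(key, hashes.SHA256())
    )

    # PEM-encoded CSR to submit to the CA.
    print(csr.public_bytes(serialization.Encoding.PEM).decode())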
Extended Validation Certificates
Extended Validation (EV) certificates for code signing represent a high-assurance standard established by the CA/B Forum, requiring certificate authorities to perform thorough identity vetting of the applicant organization. This process verifies legal existence by confirming registration with the relevant incorporating or registration agency in the subject's jurisdiction, physical existence through validation of a business presence at a specified address, and operational existence to ensure active business operations as of the issuance date. The vetting, which involves document review, database checks, and potential phone verification, typically spans several days to a week or more, depending on the applicant's responsiveness and the complexity of the organization.[28][34][35]
In contrast to Organization Validated (OV) or Domain Validated (DV) certificates, which rely on less stringent checks like basic domain control or organizational details, EV certificates mandate audited compliance with CA/B Forum guidelines, including ongoing CA process audits for reliability. This results in certificates featuring unique identifiers, such as the EV code signing policy Object Identifier (OID) 2.23.140.1.3, enabling operating systems to recognize and afford elevated trust to EV-signed code. Key fields in these X.509 certificates include the subject organization name, serial number for uniqueness, and additional attributes like jurisdiction of incorporation and physical address components, all encoded to provide verifiable transparency without including domain names.[28][36][37]
EV-signed executables in Microsoft Windows environments display the verified organization name as the publisher in User Account Control (UAC) prompts, replacing generic "unknown" warnings with identifiable details, while also receiving immediate positive reputation from Microsoft SmartScreen to minimize or eliminate download and execution alerts. This visual and behavioral trust enhancement helps users confidently identify legitimate software publishers.[38][39][40]
Adoption of EV code signing certificates is common in enterprise software development, where they are often required for distribution through platforms like the Microsoft Store or for compliance in regulated industries to demonstrate rigorous identity assurance. Certificate authorities such as Entrust and GlobalSign provide these certificates, with annual pricing typically ranging from $300 to $500, reflecting the intensive validation and hardware security module requirements.[41][42][43]
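A verifier that wants to treat EV-signed code differently can look for the CA/Browser Forum EV code signing policy OID in the certificate's CertificatePolicies extension; the sketch below assumes a PEM-encoded signer certificate at a hypothetical path and uses Python's cryptography package.

    # Sketch: check a signer certificate for the CA/Browser Forum EV code
    # signing policy OID (2.23.140.1.3). The certificate path is hypothetical,
    # and a certificate without a CertificatePolicies extension would need
    # extra handling.
    from cryptography import x509
    from cryptography.x509 import ObjectIdentifier

    EV_CODE_SIGNING = ObjectIdentifier("2.23.140.1.3")

    with open("signer_cert.pem", "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())

    policies = cert.extensions.get_extension_for_class(x509.CertificatePolicies).value
    is_ev = any(p.policy_identifier == EV_CODE_SIGNING for p in policies)
    print("EV code signing certificate" if is_ev else "not an EV code signing certificate")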
Time-Stamping Protocols
Time-stamping protocols in code signing attach a trusted timestamp to a digital signature, proving that the signature was created at a specific point in time and enabling verification even after the signing certificate expires. This is achieved through a Time-Stamping Authority (TSA), a trusted third party that generates time-stamp tokens using a reliable time source, as defined in the Internet X.509 Public Key Infrastructure Time-Stamp Protocol (TSP) outlined in RFC 3161.[44] Per CA/B Forum Baseline Requirements effective April 15, 2025, TSA private keys for Root and Subordinate CA certificates with validity over 72 months must be protected in a hardware cryptographic module certified to FIPS 140-2 Level 3 or Common Criteria EAL 4+ and maintained in a high-security zone. Examples of TSAs include free services like FreeTSA.org, which provides RFC 3161-compliant timestamps without cost for basic use, and commercial providers such as Sectigo (formerly Comodo), which offers timestamping via http://timestamp.sectigo.com.
The process begins after the code is signed with a private key; the signer submits a hash of the signature (typically using SHA-256 in modern implementations) to the TSA via an HTTP or TCP request formatted according to RFC 3161.[44][45] The TSA verifies the request, appends the current UTC time from a trusted source (such as NTP-synchronized clocks), signs the hash with its own certificate, and returns a TimeStampToken containing the timestamp information, including a serial number for uniqueness and the hashing algorithm used.[44] This token is then embedded into the signature envelope, often as an unsigned attribute in CMS/PKCS #7 structures, ensuring the timestamp is cryptographically bound to the original signature.[44]
These protocols provide several benefits for code signing security. By establishing the exact creation time of the signature, time-stamping prevents replay attacks, as verifiers can check that the timestamp aligns with the expected temporal context and detect any attempts to reuse outdated signatures.[46] It also supports long-term validity, allowing signatures to be verified post-certificate expiration as long as the timestamp falls within the certificate's validity period and the TSA's certificate chain remains trustworthy, which is crucial for archival integrity of software artifacts.[44][47]
Integration of time-stamping is seamless in common tools; for instance, Microsoft's SignTool.exe automates the process using the /tr option to specify a TSA URL, such as http://timestamp.sectigo.com, and supports SHA-256 hashing for requests without additional configuration for basic services.[48] Many TSAs, including FreeTSA.org and Sectigo, default to SHA-256 for compatibility and security, offering no-cost options for non-commercial or low-volume use while ensuring compliance with RFC 3161 standards.[49][45]
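The long-term validity rule described above reduces, on the verifier side, to a simple temporal check: the TSA-attested signing time must fall inside the signing certificate's validity window even if that window has since closed. The sketch below uses illustrative dates; in practice the signing time comes from the TimeStampToken and the validity bounds from the certificate itself.

    # Conceptual sketch of the long-term validity check enabled by RFC 3161
    # timestamps. All dates are illustrative.
    from datetime import datetime, timezone

    cert_not_before = datetime(2022, 3, 1, tzinfo=timezone.utc)
    cert_not_after = datetime(2025, 3, 1, tzinfo=timezone.utc)     # already expired
    tsa_signing_time = datetime(2024, 6, 15, tzinfo=timezone.utc)  # from TimeStampToken

    # The signature remains acceptable because the trusted timestamp proves it
    # was produced while the certificate was still valid.
    if cert_not_before <= tsa_signing_time <= cert_not_after:
        print("accept: timestamp falls within the certificate's validity period")
    else:
        print("reject: no trusted proof the signature predates expiration or revocation")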
Alternatives to Certificate Authorities
Self-signed certificates represent a basic alternative to traditional Certificate Authorities (CAs) in code signing, where developers generate their own public-private key pair and certificate using tools like OpenSSL or PowerShell's New-SelfSignedCertificate cmdlet.[50] These certificates are suitable for internal tools, development, or testing environments, as they allow signing without external validation, but they inherently lack third-party trust since no CA vouches for the issuer's identity.[51] Verification depends on manual distribution of the public key to recipients, who must explicitly trust it by importing it into their local certificate store, such as the Trusted People store on Windows.[50]
Web of trust models, inspired by Pretty Good Privacy (PGP), provide a decentralized approach where users mutually vouch for each other's public keys through signatures, forming chains of trust without a central authority.[52] In open-source projects, this is implemented via tools like GnuPG, with keys distributed through keyservers or repositories; for instance, the Linux kernel community uses PGP signatures on Git tags and tarballs, relying on the web of trust to verify maintainer identities following the 2011 kernel.org compromise.[53] Trust levels are calculated based on signature paths from known trusted keys, enabling collaborative verification in ecosystems like Linux distributions, where developers sign each other's keys to build collective assurance.[54]
Decentralized options extend this further by leveraging distributed technologies for identity and verification, bypassing CA hierarchies altogether. Projects like Sigstore enable keyless code signing through OpenID Connect (OIDC) providers for identity proof, issuing short-lived certificates via Fulcio and logging signatures in the tamper-evident Rekor transparency log for public auditability.[55][56] Blockchain-based methods, such as anchoring code hashes or signatures to Ethereum for timestamping, provide immutable proof of existence and integrity without centralized issuance, often combined with smart contracts for verification.[57] Hardware Security Modules (HSMs) support these by securely generating and storing keys in tamper-resistant hardware, facilitating self-signed or decentralized signing while ensuring private keys never leave the device.[58]
These alternatives offer significant trade-offs compared to CA-based systems: they reduce costs and accelerate issuance by eliminating vetting processes, making them ideal for open-source or internal use, as seen in Git's support for GPG-signed commits, where developers verify authenticity via personal keyrings. However, they increase risks of impersonation due to the absence of independent identity validation, requiring robust key distribution and user diligence to mitigate potential supply chain threats.[51][58]
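The self-signed approach described at the start of this subsection can be sketched as follows with Python's cryptography package: a certificate whose subject and issuer are the same entity, carrying the code signing extended key usage. The subject name and lifetime are illustrative, and recipients would still need to import and explicitly trust the result.

    # Sketch: self-signed certificate with the code signing extended key usage,
    # suitable only for internal or test use. Name and lifetime are illustrative.
    from datetime import datetime, timedelta, timezone
    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa
    from cryptography.x509.oid import ExtendedKeyUsageOID, NameOID

    key = rsa.generate_private_key(public_exponent=65537, key_size=3072)
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Internal Test Signing")])
    now = datetime.now(timezone.utc)

    cert = (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)                    # self-signed: subject == issuer
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + timedelta(days=365))
        .add_extension(
            x509.ExtendedKeyUsage([ExtendedKeyUsageOID.CODE_SIGNING]),
            critical=False,
        )
        .sign(key, hashes.SHA256())
    )

    with open("selfsigned_codesign.pem", "wb") as f:
        f.write(cert.public_bytes(serialization.Encoding.PEM))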
Challenges and Limitations
Common Security Problems
One major vulnerability in code signing arises from the theft or compromise of private keys associated with code signing certificates. When attackers gain access to these keys, they can sign malicious code as if it originated from a trusted entity, bypassing verification mechanisms and enabling widespread distribution of malware.[59] A prominent example is the 2011 breach of DigiNotar, a Dutch certificate authority, where intruders compromised the private keys and issued over 500 fraudulent certificates, including code signing ones, affecting millions of users primarily through man-in-the-middle attacks on services like Gmail in Iran. This incident led to the revocation of DigiNotar's root certificates across major trust stores and the company's bankruptcy.[60][61] Similarly, the 2020 SolarWinds supply chain attack involved Russian state-sponsored actors injecting malware into legitimate software updates, which were then signed using SolarWinds' legitimate code signing certificate after compromising the build process, compromising thousands of organizations including U.S. government agencies.[62][63] In 2023, attackers stole encrypted code signing certificates from GitHub, including those for GitHub Desktop and Atom, potentially allowing malicious software to be signed as legitimate GitHub releases; GitHub revoked the certificates and advised users to update affected software.[64]
Algorithmic weaknesses in hashing functions used for code signing signatures further exacerbate risks. Deprecated algorithms like SHA-1 are susceptible to collision attacks, where attackers generate two different files with identical hashes, allowing substitution of malicious code without invalidating the signature. The 2017 SHAttered attack demonstrated the first practical collision for SHA-1, producing two distinct PDFs with the same hash, highlighting its vulnerability for digital signatures including code signing; despite transitions to stronger hashes like SHA-256, legacy SHA-1-signed code remains in use, delaying full mitigation.[65][66]
Timestamping failures can undermine the long-term validity of code signatures by failing to provide reliable proof of signing time relative to certificate expiration or revocation. Outages or connectivity issues with Time-Stamping Authorities (TSAs) prevent acquisition of valid timestamps during signing, rendering signatures time-bound to the certificate's validity period and potentially invalidating them prematurely. For instance, the 2019 expiration of Comodo's TSA certificate (timestamp.comodoca.com) caused widespread errors and outages in timestamped code validation across various environments. Additionally, use of untrusted or compromised TSAs allows attackers to forge timestamps; in one described scenario, an adversary intercepts timestamp requests and supplies a response from a non-trustworthy TSA, leading verifiers to accept invalid signatures.[67][68][69][70]
Other systemic issues include signature stripping in repackaged malware, where attackers decompile legitimate signed applications, remove the original digital signature, inject malicious payloads, and redistribute the altered unsigned or re-signed binaries to evade detection.
Over-reliance on centralized trust stores amplifies risks from root CA compromises; the 2015 Symantec incidents involved multiple misissuances of rogue certificates, including an unauthorized Extended Validation certificate for google.com issued without proper validation, prompting employee terminations and widespread distrust of Symantec roots by browsers like Chrome. These events exposed how flaws in CA operations can propagate untrusted certificates into trust stores, enabling fake code signing.[71][72][73][74][75]
Mitigation Strategies
Mitigation strategies for code signing vulnerabilities focus on proactive measures to protect private keys, ensure cryptographic robustness, integrate verification into development workflows, and enable rapid detection and response to compromises. These practices help developers and organizations minimize risks such as unauthorized code distribution and supply chain attacks by emphasizing secure handling, standards compliance, and ongoing monitoring.[76]
Key management is a cornerstone of code signing security, beginning with the use of Hardware Security Modules (HSMs) for private key storage to prevent unauthorized access and extraction. HSMs provide tamper-resistant environments that isolate keys from software-based threats, ensuring that signing operations occur within protected hardware.[76] Regular key rotation, typically every one to two years or after potential exposure, limits the impact of a compromised key by reducing its lifespan and validity period.[77] Additionally, enabling multi-factor authentication (MFA) for Certificate Authority (CA) accounts and key access controls adds layers of identity verification, thwarting credential-based attacks.[78]
Updating cryptographic algorithms addresses evolving threats to hashing integrity, with a mandate to transition to SHA-256 or stronger variants following the 2017 SHAttered collision attack on SHA-1, which demonstrated practical forgery risks for code signing. The National Institute of Standards and Technology (NIST) deprecated SHA-1 for digital signatures in 2013 and plans to retire it entirely by December 31, 2030, urging immediate adoption of the SHA-2 and SHA-3 families to maintain collision resistance.[79] Microsoft accelerated this by deprecating SHA-1 code signing support in 2017, requiring SHA-256 for new certificates to align with browser and OS enforcement.[80] Organizations should monitor NIST guidelines and conduct periodic audits to ensure compliance with these post-2017 standards.[81]
Enhancing verification involves embedding code signing policies directly into Continuous Integration/Continuous Deployment (CI/CD) pipelines to automate integrity checks during builds and deployments. Tools like Cosign, developed by the Sigstore project, facilitate container image signing and verification without long-term key management, using short-lived keys and transparency logs for reproducible attestations.[82] This integration ensures that only signed artifacts proceed to production, reducing the window for tampering in automated workflows.[83]
For incident response, continuous monitoring of Online Certificate Status Protocol (OCSP) responders and Certificate Revocation Lists (CRLs) is essential to detect and enforce revocations promptly, as OCSP provides real-time status queries while CRLs offer batch updates for offline validation.[31] Supply chain risk audits, guided by the Supply-chain Levels for Software Artifacts (SLSA) framework introduced in 2021, evaluate build provenance and integrity controls to identify weaknesses before deployment.[84] SLSA's tiered levels promote verifiable builds and signed artifacts, enabling organizations to respond to breaches by revoking affected certificates and tracing impacted distributions.[84]
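As one concrete piece of the revocation-monitoring workflow mentioned above, the following sketch queries an OCSP responder for the status of a code signing certificate using Python's cryptography package; the certificate and issuer file paths are hypothetical, and the responder URL is read from the certificate's Authority Information Access extension.

    # Sketch: OCSP status check for a code signing certificate. The certificate
    # and issuer paths are hypothetical placeholders.
    import urllib.request
    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.x509 import ocsp
    from cryptography.x509.oid import AuthorityInformationAccessOID

    with open("signer_cert.pem", "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())
    with open("issuing_ca.pem", "rb") as f:
        issuer = x509.load_pem_x509_certificate(f.read())

    # Locate the OCSP responder URL advertised in the certificate.
    aia = cert.extensions.get_extension_for_class(x509.AuthorityInformationAccess).value
    ocsp_url = next(
        d.access_location.value
        for d in aia
        if d.access_method == AuthorityInformationAccessOID.OCSP
    )

    # Build the OCSP request (SHA-1 is the conventional CertID hash) and POST it.
    request = ocsp.OCSPRequestBuilder().add_certificate(cert, issuer, hashes.SHA1()).build()
    http_request = urllib.request.Request(
        ocsp_url,
        data=request.public_bytes(serialization.Encoding.DER),
        headers={"Content-Type": "application/ocsp-request"},
    )
    with urllib.request.urlopen(http_request) as resp:
        ocsp_response = ocsp.load_der_ocsp_response(resp.read())

    print("certificate status:", ocsp_response.certificate_status)  # GOOD / REVOKED / UNKNOWN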
Implementations
Apple Ecosystems
In Apple's ecosystems, code signing is a mandatory security mechanism for distributing and executing software on macOS and iOS platforms, ensuring that applications originate from verified developers and remain untampered. It integrates deeply with the App Store distribution model, where all submitted apps must be signed using Apple-issued certificates to pass review and installation checks. For macOS, Gatekeeper enforces signing by verifying Developer ID certificates on downloaded apps, preventing execution of unsigned or tampered code outside the App Store. Similarly, iOS requires signed apps bundled with provisioning profiles to install on devices, tying code to specific developer identities and device capabilities.[85][2][86]
Certificates for code signing are issued by the Apple Worldwide Developer Relations (WWDR) Certification Authority, an intermediate authority under Apple's public key infrastructure that validates developer identities through the Apple Developer Program. Developers generate certificate signing requests via Keychain Access or Xcode, then obtain identities such as development certificates for testing, distribution certificates for App Store releases, or ad-hoc certificates for limited device installations without App Store involvement. These certificates embed the developer's Team ID in the subject organizational unit field, enabling the system to enforce trust chains during validation. For non-App Store macOS distribution, Developer ID Application or Installer certificates allow direct downloads while complying with Gatekeeper, requiring membership in the Apple Developer Program.[87][88][89]
Xcode provides built-in code signing during the build process, automatically embedding signatures; the codesign command-line tool supports manual operations, applying cryptographic hashes and certificates to binaries, bundles, and frameworks. Entitlements, defined in a .entitlements property list file, grant apps specific permissions like access to the camera or sandboxing, and Xcode merges these during signing to match provisioning profiles. For debugging, Xcode generates .dSYM files containing symbol information tied to the signed build, enabling symbolication of crash reports without exposing source code. Provisioning profiles, which are signed property lists combining certificates, app IDs, and device UDIDs, are essential for iOS and extend to macOS for capabilities like push notifications; they support development (for registered devices), ad-hoc (for limited distribution), and distribution types.[90][91][92]
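To keep the examples in one language, the sketch below shells out from Python to Apple's codesign tool to sign and then verify an app bundle; the signing identity, entitlements file, and bundle path are placeholders, and the commands assume a macOS machine with the Xcode command-line tools and a suitable certificate in the keychain.

    # Sketch (macOS): sign and verify an app bundle via the codesign tool.
    # Identity, entitlements file, and bundle path are placeholders.
    import subprocess

    APP = "build/ExampleApp.app"                                  # hypothetical
    IDENTITY = "Developer ID Application: Example Software Ltd"   # hypothetical

    # Sign with a secure timestamp and the hardened runtime, applying the
    # entitlements that must match the embedded provisioning profile.
    subprocess.run(
        ["codesign", "--sign", IDENTITY, "--timestamp", "--options", "runtime",
         "--entitlements", "ExampleApp.entitlements", APP],
        check=True,
    )

    # Verify the signature, including nested code, before distribution.
    subprocess.run(
        ["codesign", "--verify", "--deep", "--strict", "--verbose=2", APP],
        check=True,
    )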
Enforcement occurs at multiple levels: on macOS, System Integrity Protection (SIP) restricts modifications to system files and blocks loading of unsigned kernel extensions (kexts), requiring them to be signed with a Developer ID certificate that carries the kext-signing capability and approved by users via System Preferences. Gatekeeper scans downloads for valid signatures and, since macOS Catalina (10.15) in 2019, mandates notarization, an automated Apple scanning service that issues a ticket which can be stapled to the signed app, confirming the absence of malware before Gatekeeper allows execution. On iOS, the system rejects apps at installation if they are unsigned or their provisioning profiles do not match, ensuring only authorized code runs on devices. Kernel extensions demand special handling, with signatures verified against the boot policy; unsigned kexts fail to load under SIP, promoting safer alternatives like DriverKit system extensions.[93][94][95]
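A rough check of how these enforcement layers see a given app can be scripted the same way: spctl asks Gatekeeper for its assessment, and the stapler tool confirms that a notarization ticket is attached; the application path is a placeholder.

    # Sketch (macOS): Gatekeeper assessment and notarization ticket check.
    # The application path is a placeholder.
    import subprocess

    APP = "/Applications/ExampleApp.app"   # hypothetical

    # Gatekeeper assessment: exits non-zero if the app would be blocked.
    subprocess.run(["spctl", "--assess", "--type", "exec", "--verbose", APP], check=True)

    # Confirm a notarization ticket is stapled to the bundle.
    subprocess.run(["xcrun", "stapler", "validate", APP], check=True)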