
Evaluation Assurance Level

The Evaluation Assurance Level (EAL) is a predefined numerical rating within the Common Criteria for Information Technology Security Evaluation (CC), an international standard (ISO/IEC 15408) that specifies a framework for assessing the security functions and assurance of IT products and systems. EALs represent points on a hierarchical assurance scale, ranging from EAL1 to EAL7, where each level defines a package of assurance requirements that increase in depth, rigor, and scope to verify that the product's security claims are met. The CC framework, managed under the Common Criteria Recognition Arrangement (CCRA) by participating nations, enables independent evaluation laboratories to test products against Protection Profiles or Security Targets, culminating in certification at a specific EAL. EAL1 provides basic functional testing for straightforward security needs, while progressively higher levels—such as EAL2 (structural testing with basic design review), EAL3 (methodical testing and configuration management), and EAL4 (methodical design, testing, and review)—demand more extensive evidence of development processes, vulnerability analysis, and independent verification. EAL5 through EAL7 incorporate advanced techniques like semi-formal or formal design verification and comprehensive testing, typically reserved for high-risk environments like government or critical infrastructure systems. EAL certifications assure consumers and acquirers of a product's reliability, with mutual recognition up to EAL4 under the CCRA, facilitating global trade while promoting consistent methodologies. However, EAL does not directly measure a product's inherent security strength but rather the thoroughness of its evaluation, emphasizing that higher levels often require significant developer investment in documentation and testing.

Overview

Definition and Purpose

The Evaluation Assurance Level (EAL) is a numerical grade ranging from 1 to 7 assigned to an IT product upon completion of a Common Criteria evaluation, signifying the rigor and depth of testing and verification conducted to confirm the product's security properties. It constitutes a predefined package of security assurance requirements (SARs) drawn from CC Part 3, providing a standardized metric for the confidence that the product correctly implements its specified security functional requirements (SFRs). The purpose of EALs is to establish a graduated scale for gauging the trustworthiness of IT products against their security requirements, enabling organizations to make informed decisions during procurement and to conduct effective risk assessments in environments where security is paramount. By delineating levels of evaluation effort, EALs help quantify the potential for undetected vulnerabilities or flaws, thereby supporting the selection of products that align with specific threat models and operational needs. At their core, EALs rely on assurance components outlined in Part 3, which include families such as development (ADV) for design and architecture documentation, guidance documents (AGD) for operational documentation, life-cycle support (ALC) for configuration management and flaw remediation processes, tests (ATE) for coverage and depth verification, and vulnerability assessment (AVA) for flaw identification. These components emphasize the systematic collection of evidence—ranging from functional specifications and test results to detailed architectural analyses—as well as escalating testing depths from basic subsystem checks to full implementation representation, complemented by vulnerability analyses that progress from basic surveys to advanced penetration testing against high attack potentials. EALs balance functional security, which defines the intended security behaviors of the product, with assurance, which evaluates the reliability of those behaviors through progressive increases in scrutiny and evidence demands, ensuring greater confidence in the product's security without expanding its functional scope. The Common Criteria provides the overarching international standard for these evaluations, harmonizing practices across participating nations.
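To make the package structure concrete, the sketch below models an EAL as a set of SAR components identified by class, family, and level, following the CLASS_FAM.n naming convention used in Part 3. It is an illustrative, non-normative summary: the abbreviated EAL1 listing and the helper function are assumptions for demonstration, not text taken from the standard.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SARComponent:
    """One security assurance requirement (SAR), e.g. ADV_FSP.1."""
    cc_class: str  # assurance class, e.g. "ADV" (development)
    family: str    # family within the class, e.g. "FSP"
    level: int     # hierarchical level within the family

    @property
    def identifier(self) -> str:
        return f"{self.cc_class}_{self.family}.{self.level}"

# Abbreviated, illustrative rendering of the EAL1 package (not a normative
# listing): a predefined set of SAR components drawn from several classes.
EAL1_PACKAGE = {
    SARComponent("ADV", "FSP", 1),  # basic functional specification
    SARComponent("AGD", "OPE", 1),  # operational user guidance
    SARComponent("AGD", "PRE", 1),  # preparative procedures
    SARComponent("ALC", "CMC", 1),  # labelling of the TOE
    SARComponent("ATE", "IND", 1),  # independent testing - conformance
    SARComponent("AVA", "VAN", 1),  # vulnerability survey
}

def components_by_class(package):
    """Group a package's components by assurance class (hypothetical helper)."""
    grouped = {}
    for comp in sorted(package, key=lambda c: c.identifier):
        grouped.setdefault(comp.cc_class, []).append(comp.identifier)
    return grouped

for cc_class, names in components_by_class(EAL1_PACKAGE).items():
    print(cc_class, "->", ", ".join(names))
```

Higher EALs can be pictured the same way: each level swaps in hierarchically higher components (for example AVA_VAN.1 becoming AVA_VAN.3 at EAL4) and adds families that the lower package omits.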

Relation to Common Criteria

The Common Criteria (CC), internationally standardized as ISO/IEC 15408, provides a framework for evaluating the security of IT products and systems. In its current iteration, CC:2022, the standard is divided into five parts. Part 1 offers an introduction and general model, establishing the foundational concepts, terminology, and evaluation principles. Part 2 defines security functional requirements, cataloging the specific security functionalities that products must demonstrate. Part 3 outlines security assurance requirements, which include predefined assurance packages such as the Evaluation Assurance Levels (EALs). Part 4 specifies the framework for evaluation methods and procedures, guiding how assurance requirements are assessed. Part 5 provides pre-defined packages of security requirements to support consistent evaluations. Within the CC, Evaluation Assurance Levels serve as a core subset of the assurance packages detailed in Part 3, offering a standardized grading scale for the rigor of evaluations. Product vendors incorporate an EAL by selecting it as the target assurance package when developing a Protection Profile (PP)—a reusable template of security requirements—or a Security Target (ST), which tailors requirements to a specific product. This selection ensures that the evaluation addresses both functional needs from Part 2 and the corresponding assurance activities from the chosen EAL. The evaluation process under the CC involves independent, certified laboratories that conduct testing and analysis in accordance with the Common Evaluation Methodology (CEM, ISO/IEC 18045). Vendors submit their product, along with the ST or PP specifying the target EAL, to an accredited lab for assessment. National certification schemes oversee this process; for example, the National Information Assurance Partnership (NIAP) in the United States accredits Common Criteria Testing Laboratories (CCTLs) and validates evaluation results to ensure compliance with scheme-specific policies. Successful evaluations culminate in the issuance of a certificate by the national scheme, affirming the product's adherence to the specified EAL. As of November 2025, the active version of the standard remains CC:2022 Revision 1, following the ongoing transition from CC version 3.1 Revision 5, initiated in 2022 with key deadlines through 2027; this version maintains the traditional structure of seven EALs (EAL1 through EAL7) without alteration, alongside new assurance packages for enhanced flexibility. Ongoing maintenance by the Common Criteria Development Board ensures alignment with evolving security needs, with certifications like those issued in 2025 explicitly referencing CC:2022.

History and Development

Origins in Predecessor Standards

The Evaluation Assurance Levels (EALs) within the Common Criteria framework trace their roots to several national and regional IT security evaluation standards developed in the 1980s and early 1990s, which established foundational concepts of assurance through graduated levels of testing, design verification, and rigor. These predecessors addressed the growing need for standardized assessments of computer system security amid increasing reliance on IT for sensitive applications, particularly in government and defense sectors. In the United States, the Trusted Computer System Evaluation Criteria (TCSEC), commonly known as the Orange Book, was first issued in 1983 by the National Computer Security Center under the National Security Agency and updated in 1985 by the Department of Defense. TCSEC defined four divisions of security protection—D (minimal), C (discretionary), B (mandatory), and A (verified)—with seven hierarchical classes from D to A1, where higher classes demanded progressively rigorous evidence of secure design, such as formal security policy models and extensive testing to ensure protection against unauthorized disclosure. This class-based structure directly influenced the escalating assurance requirements in later EALs, emphasizing comprehensive life-cycle evaluation from design to operation. Europe's Information Technology Security Evaluation Criteria (ITSEC) emerged in 1990, jointly developed by France, Germany, the Netherlands, and the United Kingdom to harmonize national approaches. Unlike TCSEC's integrated classes, ITSEC innovated by separating security functionality (rated F1 to F10 based on features like identification and audit) from assurance (levels E1 to E6, ranging from basic testing at E1 to formal design verification at E6), allowing flexible combinations that better accommodated diverse product types and international markets. This decoupling of functionality and assurance became a core principle in the EAL framework, enabling evaluations focused on implementation confidence independent of feature scope. Canada's Canadian Trusted Computer Product Evaluation Criteria (CTCPEC), published in 1993 by the Communications Security Establishment, built on TCSEC while incorporating ITSEC elements, such as explicit separation of functional and assurance requirements, with an added emphasis on commercial products and lifecycle management. These standards collectively highlighted the limitations of fragmented national criteria, which often required redundant evaluations for products entering multiple markets, motivating international collaboration. The push for unification arose from the need to streamline global trade in secure IT products by eliminating duplicate certifications and fostering mutual recognition among governments. In June 1993, sponsoring organizations behind the TCSEC, ITSEC, CTCPEC, and the Critères de Sécurité des Systèmes d'Information (CSSI) initiated development of a harmonized standard, culminating in the first version of the Common Criteria, later adopted as ISO/IEC 15408 in 1999. This unification integrated the rigor of TCSEC classes, ITSEC's separation model, and CTCPEC's practical adaptations into the EALs, providing a single international benchmark for assurance.

Evolution of the Common Criteria Framework

The Common Criteria (CC) framework was first published as version 1.0 in 1996, establishing a standardized approach to IT security evaluation that harmonized disparate national standards into a unified international model, with the ISO/IEC 15408 standard adopting version 2.1 in 1999. This initial version introduced the concept of Protection Profiles (PPs), which allow stakeholders to define reusable sets of security requirements for specific product types, promoting consistency across evaluations. Concurrently, the framework shifted toward modular assurance packages, enabling evaluators to select and combine assurance components tailored to the target's needs rather than rigid, predefined levels, thereby enhancing flexibility while maintaining rigor in assessing security functions and vulnerabilities. Subsequent revisions refined these foundations, with CC version 3.1 released in 2007 to streamline evaluation processes and reduce redundancies. Key updates included CC 3.1 Revision 4 in September 2012, which expanded the assurance component catalog by adding extended components for advanced threats, such as enhanced vulnerability analysis and testing methodologies. Further, CC 3.1 Revision 5, published in April 2017, incorporated minor clarifications to guidance without altering the core Evaluation Assurance Levels (EALs). In November 2022, CC version 2022 (CC:2022 Release 1) was published as the current version, introducing updates to assurance components, new families for contemporary threats like quantum-resistant cryptography, and better support for emerging technologies such as AI/ML security; new evaluations must use CC:2022 starting July 2024, with transitional use of CC 3.1 permitted until December 2027 for compliant Security Targets. These evolutions emphasized practical improvements, such as better support for iterative development and evidence collection, to balance assurance depth with evaluation efficiency. The framework's international adoption was bolstered by the Common Criteria Recognition Arrangement (CCRA), established in 1999 among founding members including Canada, France, Germany, the United Kingdom, and the United States, to facilitate mutual acceptance of evaluation results up to EAL4, covering a broad range of commercial products while reserving higher levels for specialized national needs. A 2014 revision to the CCRA, effective from 2017, reaffirmed and refined limitations to mutual recognition for EAL1 through EAL4 (or equivalent assurance packages up to AVA_VAN.3 when based on collaborative Protection Profiles), citing the escalating costs and complexity of higher-level evaluations, which often exceed practical benefits for most IT products. This focus on lower to mid-level assurances has driven broader participation. Adaptations under the CCRA have addressed emerging technologies, with collaborative efforts yielding extensions for cloud computing and Internet of Things (IoT) security. For instance, the CC in the Cloud (CCitC) guidance, developed through international working groups and first publicly released in February 2024 (version 1.1.1 as of 2025), provides methodologies for evaluating virtualized environments and multi-tenant systems, ensuring assurance continuity in dynamic deployments. Similarly, specialized PPs for IoT devices, such as the IoT Secure Communications Module Protection Profile released in December 2019, incorporate requirements for constrained environments, including secure communications and remote attestation, to mitigate risks like unauthorized access in resource-limited ecosystems. These developments reflect the framework's responsiveness to technological change while preserving core assurance principles.

Assurance Levels

EAL1: Functionally Tested

EAL1, or Evaluation Assurance Level 1, represents the most basic level of assurance in the Common Criteria framework, providing evidence that the target of evaluation (TOE) functions as specified in its functional requirements. It is applicable in circumstances where some confidence in correct operation is desired, but the threats to security are not considered serious, such as in low-risk environments involving the protection of personal data. This level requires a limited Security Target (ST) that states only the security functional requirements (SFRs) without deriving them from specific threats or organizational security policies. The assurance package for EAL1 consists of key components including ADV_FSP.1 (Basic functional specification), ATE_IND.1 (Independent testing – conformance), and AVA_VAN.1 (Vulnerability survey). Under ADV_FSP.1, the developer must provide a functional specification that describes the TOE security functions (TSF) and their external interfaces in sufficient detail to allow understanding of how the SFRs are realized. This specification serves as the basis for subsequent testing but does not involve any analysis of the TOE's internal design or structure. ATE_IND.1 mandates independent testing by the evaluator to confirm conformance to the functional specification. AVA_VAN.1 requires a search of public domain sources to identify potential vulnerabilities in the TOE, assessing whether any known issues could lead to exploitable weaknesses under stated assumptions, though it does not include active penetration testing or advanced analysis. The testing scope at EAL1 is confined to functional verification, where the evaluator independently tests the TOE against the SFRs to confirm that the TOE security functionality behaves as documented. No design analysis, structural examination, or configuration management beyond basic documentation is required, emphasizing a straightforward demonstration of intended functions rather than rigorous scrutiny of implementation details. This approach ensures minimal expenditure of time and resources, allowing evaluations to proceed with limited developer assistance if needed. Evidence requirements for EAL1 include the limited ST, the functional and interface specification from ADV_FSP.1, test results from ATE_IND.1, and the vulnerability survey report from AVA_VAN.1, along with basic guidance documentation for secure operation. There is no mandate for independent penetration testing, as the vulnerability assessment relies solely on publicly available information. Typically, EAL1 is suited for consumer software or systems in low-risk settings, such as basic personal information protection tools, where the shortest evaluation durations apply due to the limited scope. Higher EALs build upon this foundation by incorporating greater rigor in design and testing. Under CC:2022, all EALs include ASE components for ST evaluation, and certifications typically conform to Protection Profiles at the specified EAL.

EAL2: Structurally Tested

EAL2, known as Structurally Tested, builds on the functional testing of EAL1 by incorporating an analysis of the target's structure to provide moderate assurance that security functionality is correctly implemented. This level requires the developer to supply design information, test documentation, and evidence demonstrating that the target of evaluation (TOE) operates as specified, while evaluators perform independent testing to confirm coverage and basic resistance to attack. It is designed for scenarios where the developer cooperates fully but extensive redesign or formal analysis is not feasible, offering a cost-effective step up in rigor for products facing moderate threats. The core assurance components for EAL2 include ADV_FSP.2 (Security-enforcing functional specification), which mandates a detailed, informal specification tracing security functional requirements (SFRs) to the TOE's interfaces, including parameters, error handling, and high-level design of security mechanisms. ATE_COV.1 (Evidence of coverage) requires developers to provide test evidence showing that security functions are exercised through the TOE's interfaces, ensuring partial coverage of the functional specification via developer-conducted tests. AVA_VAN.2 (Vulnerability analysis) involves an analysis of potential weaknesses, considering basic attack potential and drawing on public sources, supplemented by evaluator penetration testing to identify exploitable flaws. These components emphasize structural elements absent in EAL1, such as a basic TOE design description under ADV_TDS.1, which reviews the security-relevant structure, subsystems, and interfaces for consistency with the functional specification. Additionally, developer testing gains some independence under ATE_IND.2, where developer tests are reviewed and selectively repeated or extended by evaluators. Evidence requirements under EAL2 include configuration management via ALC_CMC.2, which ensures a system for identifying, controlling, and tracking configuration items to maintain TOE integrity across versions, and ALC_DEL.1, mandating documented procedures for secure delivery to end-users, verifying authenticity and preventing tampering during distribution. These life-cycle controls support consistency without the methodical rigor of higher levels. EAL2 is typically applied to general commercial products and network devices in medium-risk environments, such as firewalls or routers handling enterprise traffic, where moderate assurance suffices for off-the-shelf software or legacy systems without high-threat exposure. For instance, products like routers and switches have been certified at EAL2 for secure network operations. This level balances development effort and evaluation time, typically taking 6 to 12 months at a cost of $200,000 to $500,000 depending on the TOE scope and evaluation lab, making it suitable for applications requiring structured testing over basic functionality alone.

EAL3: Methodically Tested and Checked

Evaluation Assurance Level 3 (EAL3), known as "Methodically Tested and Checked," extends the structural testing of EAL2 by incorporating systematic configuration management and greater depth in testing to enhance reliability in development processes. This level targets moderate assurance for IT products, emphasizing thorough investigation without extensive re-engineering, making it suitable for environments where security is important but not mission-critical. The core assurance components of EAL3 include ALC_CMC.3 for systematic configuration management, which requires developers to establish a configuration management system covering the target of evaluation (TOE) implementation representation with authorization controls to track changes and versions. ATE_DPT.1 addresses depth of testing by mandating that developers provide test documentation based on the TOE design, enabling evaluators to perform independent tests of the TOE security functions (TSF) at a basic level. Although ADV_IMP.1 for implementation representation is associated with higher levels, EAL3 incorporates elements of design review through related development components like ADV_TDS.2, ensuring the architectural description supports methodical evaluation. Key features of EAL3 center on a methodical review of the implementation, achieved through developer-provided evidence of testing aligned with functional specifications and TOE design, supplemented by selective evaluator confirmation. Independent vulnerability analysis is conducted under AVA_VAN.2, where evaluators assess the TOE's resistance to attackers with basic capabilities, using the provided specifications, designs, and guidance to identify potential weaknesses. This analysis demonstrates that no residual vulnerabilities allow unauthorized access or disruption beyond expected threats in controlled settings. EAL3 is typically applied to operational systems in controlled environments, where moderate assurance suffices for deployment in organizational settings without extreme risks. Evidence requirements include procedures for flaw remediation, often added via ALC_FLR augmentation, mandating developers to define processes for identifying, reporting, and correcting security flaws and distributing timely updates. Basics of a secure development environment are ensured through ALC_DVS.1, requiring identification of measures to protect the development process from unauthorized interference, and ALC_DEL.1, which outlines secure delivery procedures to maintain TOE integrity post-development.
Assurance Component | Focus Area | Key Requirement
ALC_CMC.3 | Systematic configuration management | Authorization controls and tracking for TOE changes
ATE_DPT.1 | Depth of testing | Design-based testing and independent verification
AVA_VAN.2 | Vulnerability analysis | Demonstration of resistance to basic attacks via provided evidence

EAL4: Methodically Designed, Tested, and Reviewed

Evaluation Assurance Level 4 (EAL4), known as "Methodically Designed, Tested, and Reviewed," represents a moderate to high assurance package in the Common Criteria framework, suitable for products requiring increased confidence in security without substantial additional costs beyond standard commercial development practices. It builds upon the methodical testing and checking of EAL3 by incorporating more rigorous design reviews and documentation to ensure the target of evaluation (TOE) is systematically developed. The assurance components for EAL4 include a focused set drawn from various classes, such as ADV_ARC.1 for security architecture description, which requires developers to provide evidence that the TOE's architecture supports self-protection, separation of domains, and prevention of bypasses. For internal design, ADV_TDS.3 mandates a basic modular design, detailing subsystems, modules, their purposes, interactions, and interfaces to demonstrate structured development. ALC_CMC.4 ensures comprehensive configuration management through automation and problem tracking, maintaining traceability of changes across the TOE's lifecycle. Other supporting components encompass complete functional specifications (ADV_FSP.4) and implementation representations (ADV_IMP.1) to facilitate thorough analysis. Key features of EAL4 emphasize detailed design documentation, including complete functional specifications that cover all TOE security functionality (TSF) interfaces, their parameters, and error handling. Independent testing (ATE_IND.2) verifies a sample of the TSF against developer tests, ensuring coverage of security functional requirements (ATE_COV.2). Vulnerability assessment under AVA_VAN.3 involves focused analysis using operational guidance, design, and implementation details to identify potential weaknesses, including penetration testing against enhanced-basic attack potential. EAL4 is typically applied to commercial and government products, such as secure operating systems like Red Hat Enterprise Linux used in government and enterprise environments, and hardware security modules like Entrust nShield for cryptographic operations in financial systems. It serves as a common target for network infrastructure products, where moderate assurance is required for devices like the Versa Operating System in energy grids and financial trading networks. Evidence requirements for EAL4 include robust life-cycle support through a developer-defined life-cycle model (ALC_LCD.1) and automated configuration management (ALC_CMC.4), ensuring controlled production and acceptance procedures. Development security is addressed via ALC_DVS.1, which identifies measures to protect the confidentiality and integrity of the TOE design and implementation during development, alongside well-defined development tools (ALC_TAT.1). Secure delivery procedures (ALC_DEL.1) further safeguard the TOE after development.

EAL5: Semi-Formally Designed and Tested

EAL5 provides a higher level of assurance than preceding levels by incorporating semi-formal design techniques and more rigorous testing methodologies, enabling developers to achieve substantial confidence in the security of the Target of Evaluation (TOE) through structured commercial practices. Building on EAL4's methodical design base, EAL5 emphasizes semi-formal specifications to model the TOE's architecture and behavior more precisely, while requiring extensive independent testing and vulnerability assessments. This level is designed for environments where moderate to high risks are present, balancing development rigor with practical feasibility. The core assurance components of EAL5 include ADV_TDS.4, which mandates a modular design description using semi-formal notation to specify interfaces, data flows, and internal structures, facilitating deeper analysis of dependencies. ATE_FUN.1 requires developer functional testing that traces security functions through the TOE, ensuring coverage of all external interfaces and demonstrating fulfillment of functional requirements under specified conditions. AVA_VAN.4 involves a methodical examination of potential weaknesses aligned with the security target, identifying exploitable flaws through analysis and penetration testing. These components collectively ensure that the TOE's design and implementation are scrutinized for consistency and resistance to known attack vectors. Key features of EAL5 highlight its focus on enhanced design expressiveness and testing depth, such as the application of semi-formal design notations—including state machines or finite state models—to represent the TOE's operational states and transitions, which aids in detecting inconsistencies early in development. Testing extends to comprehensive interface coverage, verifying interactions in developer-configured TOE environments to simulate real-world usage. This approach supports flexible yet secure configurations, where the TOE can be tailored by developers without compromising overall assurance. EAL5 is typically employed in high-risk systems demanding robust protection, such as smart cards for cryptographic operations and secure identity documents, as well as applications requiring resistance to sophisticated threats. For instance, platforms like NXP's SmartMX2 have achieved EAL5+ certification for use in systems supporting payment and ticketing. Evidence requirements for EAL5 certification include advanced configuration management under ALC_CMC.4, which enforces automated tools for version tracking, change control, and problem reporting to maintain TOE integrity throughout development. Additionally, TOE delivery via ALC_DEL.1 ensures procedures for secure packaging, distribution, and installation, preventing tampering during transit.

EAL6: Semi-Formally Verified Design and Tested

Evaluation Assurance Level 6 (EAL6), titled "Semi-formally Verified Design and Tested," builds upon the semi-formal design and testing of EAL5 by incorporating additional verification to ensure the implementation aligns precisely with the design, providing high assurance for systems handling significant risks to high-value assets. This level emphasizes a structured development environment with rigorous techniques, including semi-formal verification of the design and formal modeling of the security policy, to confirm that the Target of Evaluation (TOE) behaves securely under all specified conditions. EAL6 is intended for environments where the potential impact of security failures is severe, justifying the investment in extensive analysis and testing. The assurance package for EAL6 includes specific components across several classes to achieve this verification. In the development class (ADV), key elements are ADV_SPM.1 (formal TOE security policy model), which requires a formal model of the TOE security policy with proofs demonstrating its consistency and correspondence to the functional specification; ADV_IMP.2 (complete mapping of the implementation representation), ensuring a complete mapping of the implementation representation to the design; and ADV_TDS.5 (complete semi-formal modular design), providing a semi-formal modular design for detailed analysis with tracing to security functional requirements (SFRs). Semi-formal presentation at EAL6, unlike the fully formal methods demanded at EAL7, relies on structured notations rather than mathematically exhaustive proofs of the entire design. The life-cycle support class (ALC) features ALC_LCD.1 (developer-defined life-cycle model), mandating a defined life-cycle model with clear roles, responsibilities, and controls over development processes. Other supporting components include ALC_CMC.5 and ALC_CMS.5 for comprehensive configuration management, ensuring all artifacts are tracked and controlled. Key features of EAL6 center on verifying the fidelity between design and implementation while incorporating rigorous testing to uncover potential misbehaviors. Verification involves semi-formal correspondence analysis to confirm that the TOE implementation matches the semi-formal design, including modular decomposition into subsystems and modules with formal high-level proofs of the security policy model. Testing requirements under the tests class (ATE) include ATE_COV.3 (rigorous analysis of coverage), which demands analysis of test coverage for all TOE Security Functionality (TSF) interfaces, including parameter variations, boundary values, and error scenarios; and ATE_DPT.3 (testing: modular design), requiring testing at the subsystem and module levels against the corresponding design representations. Misbehavior is addressed through the vulnerability assessment class (AVA), particularly AVA_VAN.5 (advanced methodical vulnerability analysis), which involves methodical examination and penetration testing to identify weaknesses exploitable with high attack potential, including intentional misuses and environmental interactions. Independent testing by evaluators (ATE_IND.2) ensures replication and extension of developer tests, enhancing objectivity. EAL6 is typically applied to highly sensitive systems, such as cryptographic modules used in defense or national security contexts, where threats are sophisticated and the need for near-maximum assurance outweighs the elevated costs of development and evaluation. The evidence requirements emphasize a structured development environment, including compliance with implementation standards (ALC_TAT.3) using well-defined tools and minimizing the complexity of TSF internals (ADV_INT.3), alongside independent flaw analysis through vulnerability assessments.
Developers must provide comprehensive documentation, such as formal policy models with proofs, detailed test plans and results with coverage rationales, configuration item lists, and life-cycle documentation, all subjected to evaluator scrutiny to demonstrate the absence of significant vulnerabilities. This level's rigor ensures the TOE's security is verifiable to a high degree without requiring the fully formal design verification of EAL7.

EAL7: Formally Verified Design and Tested

EAL7 represents the highest assurance level in the Common Criteria framework, applicable to targets of evaluation (TOEs) in extremely high-risk environments where formal mathematical verification is essential to confirm security properties. It builds on lower levels by requiring a fully formal approach to design and verification, including mathematical proofs of correspondence between specifications, models, and implementations, typically employing theorem provers such as Isabelle for exhaustive analysis. This level surpasses EAL6's semi-formal methods by mandating complete formal specifications and proofs for all security functions. The core assurance components for EAL7 include ADV_FSP.6, which demands a complete semi-formal functional specification augmented by formal top-level specifications, using formal notation to precisely define all TOE security function interfaces (TSFIs) and their behaviors. ADV_INT.3 requires a systematic representation of the TOE's internal structure, justified as minimally complex to support formal analysis, covering subsystems, modules, and interfaces across the entire trusted computing base (TCB). ATE_DPT.4 mandates testing at the level of the implementation representation, ensuring coverage of all TSF subsystems and modules against design documentation, with independent verification of test results. Additionally, ADV_SPM.1 specifies a formal TOE security policy model with mathematical proofs demonstrating consistency and correspondence to the functional specification. Key features of EAL7 emphasize formal models and proofs to verify security policies rigorously, often using automated theorem provers to establish properties like non-interference or access control. Exhaustive testing is conducted against these formal specifications, including depth testing of implementation details to confirm the absence of deviations. Vulnerability analysis under AVA_VAN.5 extends beyond the security target (ST) by performing advanced methodical assessments against high attack potential, leveraging all available documentation, test data, and formal evidence to identify and refute exploitable flaws. EAL7 is typically employed for ultra-high security applications, such as specialized government and defense systems or cryptographic modules protecting highly sensitive information, where the risk of compromise justifies extensive resources. Its rarity stems from the substantial complexity and expertise required for formal verification, limiting certifications to specialized, tightly scoped TOEs. Evidence requirements for EAL7 encompass a measurable life-cycle model under ALC_LCD.2, detailing development phases across the TOE's lifecycle, along with comprehensive documentation including formal proofs, semi-formal modular designs (ADV_TDS.6), and complete implementation mappings (ADV_IMP.2). This includes rigorous configuration management (ALC_CMC.5) covering all development tools and artifacts, ensuring integrity and traceability throughout the TOE's lifecycle.

Implications of Assurance Levels

Impact on Cost and Schedule

Higher Evaluation Assurance Levels (EALs) in the Common Criteria framework significantly increase the financial and temporal demands on product developers due to the escalating requirements for documentation, rigorous testing, independent verification, and involvement of accredited laboratories. For instance, achieving EAL1 or EAL2 typically involves basic functional and structural testing, resulting in costs estimated at $80,000 to $150,000 and timelines of 3 to 6 months, primarily driven by initial preparation and lab assessments. In contrast, EAL4 demands methodical design reviews and comprehensive testing, pushing costs to $175,000 to $300,000 and extending schedules to 7 to 12 months, while EAL5 through EAL7 incorporate semi-formal or formal processes that can elevate expenses into the millions of dollars and prolong evaluations beyond 18 months, owing to iterative reviews and specialized expertise. These cost escalations stem from multiple components, including internal development efforts for enhanced documentation, consulting fees to bridge gaps, and laboratory charges for in-depth analysis, with vendors bearing all expenses as certification bodies like the National Information Assurance Partnership (NIAP) impose no direct fees. Schedule delays at higher EALs arise from the need for multiple iterations of design, testing, and validation, often compounded by the complexity of Protection Profiles and the availability of qualified Common Criteria Testing Laboratories (CCTLs). Total project timelines, including pre-evaluation preparation, frequently reach 1 to 2 years for EAL4 and longer for advanced levels. While pursuing higher EALs imposes substantial trade-offs in terms of delayed market entry and development overhead, it mitigates long-term risks such as undetected vulnerabilities and compliance failures, potentially yielding cost savings in post-certification maintenance and liability. Industry data indicates that a majority of certifications are at EAL4 or below, reflecting a practical balance where lower levels suffice for most commercial applications without the prohibitive burdens of EAL5-7. This distribution underscores the framework's flexibility, allowing developers to align assurance with project constraints while still achieving mutual recognition under international agreements.

Augmentation of Requirements

Augmentation of requirements in the Common Criteria framework refers to the process of enhancing a base Evaluation Assurance Level (EAL) by incorporating additional assurance components from assurance families not included in the original EAL package or by substituting them with hierarchically higher components from the same family. This approach allows for customized security evaluations that address specific needs without necessitating a full upgrade to a higher EAL. The augmentation process is specified within the Security Target (ST) document, where the developer identifies and justifies the selected additional components, demonstrating their utility and added value to the overall assurance. For instance, at EAL4, which includes a basic vulnerability analysis (AVA_VAN.3), augmentation with AVA_VAN.5 elevates the analysis to address high attack potential, providing deeper scrutiny against sophisticated threats. Common augmentations involve components such as ALC_FLR.3 for systematic flaw remediation or ALC_TDA for detailed TOE design analysis, ensuring all dependencies are resolved. The certification body reviews and approves these additions during evaluation to confirm compliance and appropriateness. This mechanism offers flexibility by tailoring assurance to unique threat environments or regulatory demands, particularly in Protection Profiles (PPs) where baseline augmentations like AVA_VAN.5 and ALC_DVS.2 are often mandated for package-augmented conformance. It enables higher confidence in security properties without the disproportionate effort of advancing to the next EAL, balancing rigor with practical constraints. However, augmentations are subject to limitations: they can only increase assurance and cannot remove or weaken components from the base EAL, and Security Targets claiming exact conformance to a Protection Profile are prohibited from adding further augmentations beyond those the profile specifies. All additions must align with the hierarchical structure of assurance components and receive evaluator verification to prevent inconsistencies in evaluation depth.
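A minimal sketch can illustrate these augmentation rules. The function below operates on plain component identifiers (for example "AVA_VAN.3") and enforces the constraint that augmentation may only add components or substitute hierarchically higher ones from the same family; the function name, the abbreviated EAL4 subset, and the overall structure are hypothetical illustrations, not part of any official CC tooling.

```python
def parse(component_id):
    """Split a SAR identifier like 'AVA_VAN.3' into (class, family, level)."""
    name, level = component_id.split(".")
    cc_class, family = name.split("_")
    return cc_class, family, int(level)

def augment(base_package, new_component):
    """Return a copy of base_package augmented with new_component.

    base_package is a set of SAR identifiers, e.g. {"ADV_FSP.4", "AVA_VAN.3"}.
    Augmentation may only increase assurance: the component is either added
    (its family is absent from the base EAL) or it substitutes a hierarchically
    higher component of the same family; downgrades are rejected.
    """
    new_class, new_family, new_level = parse(new_component)
    augmented = set(base_package)
    for existing in base_package:
        cc_class, family, level = parse(existing)
        if (cc_class, family) == (new_class, new_family):
            if new_level <= level:
                raise ValueError(
                    f"{new_component} does not exceed {existing}; "
                    "augmentation cannot reduce assurance")
            augmented.remove(existing)  # e.g. AVA_VAN.5 replaces AVA_VAN.3
    augmented.add(new_component)
    return augmented

# Abbreviated EAL4 subset (illustrative only) augmented to "EAL4+AVA_VAN.5".
eal4_subset = {"ADV_FSP.4", "ADV_TDS.3", "ALC_CMC.4", "ATE_IND.2", "AVA_VAN.3"}
print(sorted(augment(eal4_subset, "AVA_VAN.5")))
# Adding a family absent from the base, e.g. flaw remediation: "EAL4+ALC_FLR.3".
print(sorted(augment(eal4_subset, "ALC_FLR.3")))
```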

Notation and Certification Process

The notation for an Evaluation Assurance Level (EAL) follows the format EALn, where n is a number from 1 to 7 representing the predefined package of assurance requirements, with higher numbers indicating greater rigor in evaluation depth and methodology. Augmented EALs, which incorporate additional or substituted assurance components beyond the base package, are denoted as EALn augmented with [components], often abbreviated with a plus sign and specific identifiers, such as EAL4+AVA_VAN.5 to indicate augmentation with advanced methodical vulnerability analysis. The certification process under the Common Criteria commences with the definition of a Security Target (ST) or Protection Profile (PP), which specifies the Target of Evaluation (TOE), security problem definition, objectives, and functional and assurance requirements. This is followed by an evaluation conducted by an accredited laboratory, encompassing seven phases: (1) ST/PP evaluation for conformance and consistency; (2) TOE design and development assessment, including functional specifications and implementation; (3) testing and vulnerability assessment, covering independent tests and penetration analysis; (4) life-cycle support and configuration management review; (5) additional development and delivery procedures; (6) flaw remediation processes; and (7) documentation of results in an Evaluation Technical Report (ETR). The laboratory submits the ETR to the national scheme for validation, where the certification body reviews the evidence, conducts oversight, and confirms compliance with standards. Upon successful validation, the scheme issues a certificate detailing the exact EAL, any augmentations, and components, which remains valid for up to 5 years unless withdrawn or extended through re-assessment. Certificates explicitly list the achieved EAL and assurance components, enabling verification of scope and limitations, and are publicly searchable in registries such as the Common Criteria Portal's Certified Products List. As of 2025, approximately 1,800 certifications have been issued worldwide, with the majority at EAL1 through EAL4 due to their balance of assurance and practicality for commercial IT products.
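As a purely illustrative aid (the pattern below is a simplification, not an official grammar for certificate labels), a short helper can split an augmented-EAL label such as EAL4+AVA_VAN.5 into its base level and the listed components:

```python
import re

# Simplified, assumption-based pattern for labels such as "EAL2", "EAL4+",
# "EAL4+AVA_VAN.5", or "EAL5 augmented with ALC_DVS.2 and AVA_VAN.5".
LABEL = re.compile(
    r"^EAL(?P<level>[1-7])\+?\s*(?:augmented with\s*)?(?P<rest>.*)$",
    re.IGNORECASE,
)

def parse_eal_label(label):
    """Return (base_level, [augmentation component identifiers])."""
    match = LABEL.match(label.strip())
    if not match:
        raise ValueError(f"not an EAL label: {label!r}")
    components = re.findall(r"[A-Z]{3}_[A-Z]{3}\.\d", match.group("rest"))
    return int(match.group("level")), components

print(parse_eal_label("EAL4+AVA_VAN.5"))
# -> (4, ['AVA_VAN.5'])
print(parse_eal_label("EAL5 augmented with ALC_DVS.2 and AVA_VAN.5"))
# -> (5, ['ALC_DVS.2', 'AVA_VAN.5'])
```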

Usage and Recognition

International Agreements and Mutual Recognition

The Common Criteria Recognition Arrangement (CCRA), established in 1999 and ratified in its current form in September 2014, is an international agreement among government bodies to facilitate mutual recognition of IT product security evaluations based on the Common Criteria standard. This arrangement ensures that evaluations performed by accredited laboratories in participating countries are accepted across borders, promoting consistent security assurance without redundant testing, particularly for products up to Evaluation Assurance Level 4 (EAL4). As of 2025, the CCRA includes 37 member countries, encompassing both certificate-producing participants that conduct evaluations and certificate-consuming participants that rely on foreign certifications. Under the CCRA, mutual recognition applies to certificates for IT products and protection profiles that meet specific criteria, such as conformance with approved collaborative Protection Profiles (cPPs) or assurance packages up to EAL4, including additional flaw remediation components (ALC_FLR). For lower levels, evaluations at EAL1 and EAL2 are recognized regardless of whether they align with a cPP, provided they follow approved methodologies. Higher assurance levels beyond EAL4, such as EAL5 through EAL7, are not covered by the standard mutual recognition and typically require bilateral or multilateral agreements between specific countries for sensitive applications. This limitation stems from the increased rigor and national security considerations involved in higher-level evaluations. Key participants in the CCRA include major certification bodies such as the United States' National Information Assurance Partnership (NIAP), the United Kingdom's National Cyber Security Centre (NCSC), and Germany's Federal Office for Information Security (BSI), which oversee evaluations and authorize certificates within their jurisdictions. These entities collaborate to maintain the arrangement's integrity, with NIAP, for instance, serving as the U.S. representative and ensuring alignment with federal procurement requirements. Countries not participating in the CCRA operate independent schemes but may accept CCRA-recognized certificates on a voluntary basis or through separate bilateral pacts. Complementing the global CCRA is the SOG-IS Mutual Recognition Agreement (SOG-IS MRA), which provides enhanced recognition within Europe for certificates issued by member states, covering EAL1 through EAL4 generally and higher levels for specific technical domains such as smart cards. This regional agreement, operational since the late 1990s, aligns with CCRA principles to avoid duplication in intra-European trade, while integrating with emerging EU schemes like the EU Common Criteria-based cybersecurity certification scheme (EUCC). Non-CCRA participants in Europe may leverage the SOG-IS MRA for limited reciprocity, though full benefits require CCRA membership.

Examples of Certified Products and Applications

Microsoft Windows operating systems have received certifications at EAL4 augmented with additional assurance requirements under national schemes such as the U.S. scheme operated by the National Information Assurance Partnership (NIAP). Similarly, security integrated circuits from Infineon, like the SLE78 family, have achieved EAL5+ certifications from bodies such as the German Federal Office for Information Security (BSI), enabling secure applications in identity and payment systems. EAL7 certifications remain exceptionally rare due to their demand for formal verification; however, the seL4 microkernel exemplifies this level of rigor through machine-checked proofs that exceed EAL7 requirements, though it has not undergone full certification. In government sectors, EAL4 or higher certifications are often mandated; for instance, the U.S. Department of Defense includes NIAP-validated products at EAL4+ on its approved lists for network usage. Financial applications leverage EAL2 to EAL4 certifications for cryptographic devices to meet Hardware Security Module (HSM) standards, as seen in products like Thales HSMs certified at EAL4+. For Internet of Things (IoT) deployments, dedicated Protection Profiles require EAL4 conformance for sensors and devices to ensure robust security in connected environments. As of 2025, there has been a notable increase in EAL2 to EAL4 certifications for cloud services, reflecting growing adoption in cloud infrastructures. These certifications facilitate procurement processes, such as government tenders that specify minimum EAL3 or equivalent compliance to streamline vendor evaluation and ensure interoperability.

Criticisms and Limitations

Common Critiques

One prominent critique of the Evaluation Assurance Level (EAL) system within the Common Criteria framework is its high costs and extended timelines, which often deter small and medium-sized enterprises (SMEs) from pursuing certification. Evaluations at higher levels, such as EAL4, can take 12 to 24 months and cost from mid-six figures to millions of dollars, making the process prohibitive for smaller vendors who lack the resources to sustain it. This financial and temporal burden contributes to the relative rarity of higher EALs (5-7), with approximately 80% of certifications occurring at EAL4 or below, limiting the higher levels to niche, high-stakes scenarios such as defense or critical-infrastructure systems rather than broader use. Critics also argue that the EAL system's scope is inherently limited, emphasizing design-time assurance while overlooking runtime threats and operational realities. The framework primarily assesses static documentation, development processes, and testing under controlled conditions, but it does not adequately evaluate dynamic risks such as newly discovered attacks or environmental factors post-deployment. For instance, evaluations exclude comprehensive analysis of non-IT countermeasures or the product's behavior in real-world operational environments, leaving certified products vulnerable to threats that emerge after certification. A related concern is the overemphasis on procedural rigor and documentation, which does not reliably ensure security in the field. While EAL certifications require rigorous adherence and extensive evidence, they often fail to prevent post-certification vulnerabilities, as the focus remains on development artifacts rather than exhaustive runtime validation. Notable examples include flaws discovered in certified products, such as those affected by the ROCA vulnerability (CVE-2017-15361), which enabled private key recovery in Infineon implementations certified at EAL5+ levels since 2012, and other issues like Minerva (CVE-2019-15809) and TPM-Fail (CVE-2019-16863) in high-assurance ECDSA implementations. Analysis of over 1,600 certificates reveals that 59% of vulnerabilities in EAL4+ products surfaced after certification, with maintenance updates frequently lacking transparency on fixes, underscoring that certification provides no absolute guarantee against field exploits. Furthermore, the EAL system is viewed as outdated for addressing contemporary cybersecurity threats, with slow adaptation to paradigms like AI/ML integration and zero-trust architectures. Its static, configuration-focused approach struggles with the dynamic, adaptive nature of modern risks, such as AI-driven attacks or continuous verification in zero-trust models, as highlighted in critiques that point to gaps in handling evolving threats. This lag is evident in the framework's limited evolution since its core development, rendering higher EALs insufficient for evolving threat landscapes despite their intended rigor. Additionally, as of 2025, the European Union has begun transitioning to the EU Common Criteria-based cybersecurity certification scheme (EUCC) under the EU Cybersecurity Act, with national EU schemes ceasing to approve new evaluations after February 27, 2025. EUCC builds on CC standards but replaces EALs with new assurance levels (e.g., "substantial," roughly equivalent to EAL2+), introduces mandatory vulnerability handling and patch management assessment (AVA_PAM), and initially limits Protection Profiles to specific categories like smart cards and hardware devices with security boxes. This shift challenges the mutual recognition and global applicability of EAL certifications in Europe, a key market, potentially fragmenting international assurance practices.

Alternatives to EAL

Several assurance frameworks have emerged as alternatives to the Evaluation Assurance Level (EAL) approach of the Common Criteria, particularly to address its perceived rigidity, high costs, and focus on process-oriented evaluations rather than risk-based or outcome-driven approaches. These alternatives prioritize flexibility, continuous monitoring, and alignment with modern development practices, making them suitable for cloud, cryptographic, and critical-infrastructure ecosystems. NIST Special Publication 800-53 provides a catalog of security and privacy controls tailored to federal information systems, emphasizing risk-based selection without predefined assurance levels like EAL. It organizes controls into families such as access control and incident response, allowing organizations to tailor implementations based on mission impact and threat profiles. This framework underpins the Federal Risk and Authorization Management Program (FedRAMP), which authorizes cloud service providers through baseline controls derived from SP 800-53, enabling scalable assessments for dynamic environments like cloud computing. For cryptographic modules, the Federal Information Processing Standard (FIPS) 140-3 offers a targeted validation program with four levels of security requirements, focusing on design, implementation, and operational integrity of cryptographic functions. Unlike EAL's broad IT product evaluation, FIPS 140-3 aligns with ISO/IEC 19790 and typically requires less time—often several months compared to EAL's potential year-long process—due to its narrower scope on crypto-specific validations. This makes it a preferred alternative for hardware and software modules handling sensitive data. Emerging frameworks further shift toward continuous assurance and vendor-centric evaluations. The UK's National Cyber Security Centre (NCSC) Cyber Assessment Framework (CAF) provides a principles-based tool for organizations to self-assess and improve cyber resilience, structured around objectives like managing security risks and protecting against cyber threats. Released in version 4.0 on August 6, 2025, CAF supports ongoing monitoring rather than one-time certifications, aiding operators of essential services in demonstrating compliance. Similarly, initiatives under the Common Criteria Recognition Arrangement (CCRA) have been exploring vendor security evaluations with continuous assurance elements since 2023, aiming to integrate agile practices into mutual recognition. In comparison, these alternatives emphasize measurable outcomes and adaptability over EAL's formal, process-heavy methodology; for instance, DevSecOps integrates security into agile development pipelines using risk frameworks like NIST's, enabling rapid iterations and automated testing without EAL's extensive documentation burdens. This outcome-oriented focus reduces evaluation timelines and costs while maintaining assurance for evolving threats.

References

  1. [1]
    Evaluation Assurance Level - Glossary | CSRC
    Definitions: Set of assurance requirements that represent a point on the Common Criteria predefined assurance scale.
  2. [2]
    [PDF] CC2022PART1R1.pdf - Common Criteria
    Examples of provided packages include the evaluation assurance levels (EAL) ... EXAMPLE Evaluation Assurance Level 1 is also known as “EAL1”. NOTE For those ...
  3. [3]
    [PDF] Pre-defined packages of security requirements November 2022 CC ...
    Nov 20, 2022 · 4.3 Evaluation assurance level objectives. As outlined in 4.4, seven hierarchically ordered evaluation assurance levels are defined in this.
  4. [4]
    [PDF] Security assurance components September 2012 Version 3.1
    Sep 1, 2012 · EVALUATION ASSURANCE LEVELS .......................................................... 30. 8.1 Evaluation assurance level (EAL) overview ...
  5. [5]
    Common Criteria : CC Portal
    Common Criteria · Products can be evaluated by competent and independent licensed laboratories so as to determine the fulfilment of particular security ...Certified Products · Publications · CCRA · Members of the CCRA
  6. [6]
    NIAP - Homepage
    Any product accepted into evaluation under the U.S. CC Scheme must claim compliance to an NIAP-approved PP. Protection Profiles have been approved for use by ...CCRA · Common Criteria Testing Labs · Products · Login
  7. [7]
    Common Criteria Evaluation Assurance Levels - From EAL 1 To EAL 4
    Nov 30, 2023 · EAL1 through EAL7 represents a grading system assigned after a security evaluation based on the Common Criteria, a global standard since 1999.
  8. [8]
    Common Criteria Certification Services by Corsec
    Common Criteria is an internationally recognized set of guidelines (ISO 15408), which define a common framework for evaluating security features and ...
  9. [9]
    [PDF] Common Criteria - National Security Agency
    The. Common Criteria route enables more precision in evaluations, greater clarity in acquisition decisions, a better balance of features, security and ...
  10. [10]
    [PDF] National Information Assurance Partnership /Common Criteria ...
    The NIAP Common Criteria Scheme overcomes these limitations and enables consumers to obtain an impartial assessment of an IT product by an independent entity. ...
  11. [11]
    [PDF] National Information Assurance Partnership (NIAP) Common ...
    Mar 23, 2006 · • Evaluation Assurance Level (EAL). • Protection Profile (PP). • Security Target (ST). • Target of Evaluation (TOE). • Evaluators. • Validators.
  12. [12]
    [PDF] CC2022PART3R1.pdf - Common Criteria
    evaluation assurance levels along with their relationships, and the structure of the composed assurance packages (CAPs). It also characterizes the assurance ...
  13. [13]
    Common Criteria nach ISO/IEC 15408:2022 - BSI
    This ISO version of the Common Criteria now consists of five parts: Part 1: Introduction and general model · Part 2: Security functional components · Part 3 ...
  14. [14]
    Evaluation Process - NIAP
    To begin the evaluation process, a product vendor chooses an approved Common Criteria Testing Lab (CCTL) to conduct the product evaluation.
  15. [15]
    [PDF] CEM2022R1.pdf - Common Criteria
    The use of italics indicates text that has a precise meaning. For security assurance requirements the convention is for special verbs relating to evaluation.
  16. [16]
    [PDF] CC2022CEM2022TransitionPolicy.pdf - Common Criteria
    Apr 20, 2023 · CC v3.1 R5 is the last revision of version 3.1 and may optionally be used for evaluations of Products and Protection Profiles starting no ...
  17. [17]
    [PDF] EUCC-3110-2025-08-2500053-01-CR - Common Criteria
    Aug 29, 2025 · This Certification Report states the outcome of the Common Criteria security evaluation of the NXP. JCOP 8.x/9.x with eUICC extension on SN300 ...
  18. [18]
    History - Common Criteria
    The Common Criteria for Information Technology Security Evaluation (aka. Common Criteria) was developed by the governments of Canada, France, Germany, ...Missing: Assurance Level predecessors
  19. [19]
    [PDF] Introduction and general model August 1999 Version 2.1 CC
    A package consisting of assurance components from Part 3 that represents a point on the CC predefined ...
  20. [20]
    [PDF] Trusted Computer System Evaluation Criteria ["Orange Book"]
    Oct 8, 1998 · The TCB shall designate each communication channel and I/O device as either single-level or miltilevel. Any change in this designation shall be ...
  21. [21]
    [PDF] Information Technology Security Evaluation Criteria ( ITSEC ...
    Jun 28, 1991 · 2.2. In these criteria, security features are viewed at three levels. The most abstract view is of security objectives: the contribution to ...
  22. [22]
    [PDF] CEM-99/045 Part 2: Evaluation Methodology - Common Criteria
    Aug 11, 1999 · The. CEM is a companion document to the Common Criteria for Information Technology Security. Evaluation (CC) and is the result of extensive ...
  23. [23]
    [PDF] Security assurance components April 2017 Version 3.1 Revision 5
    Apr 1, 2017 · [CC-1]. Common Criteria for Information Technology. Security Evaluation, Version 3.1, revision 5, April. 2017. Part 1: Introduction and general ...
  24. [24]
    [PDF] Report BSI-DSZ-CC-0281-2005 - Common Criteria
    Dec 22, 2005 · This agreement on the mutual recognition of IT security certificates was extended to include certificates based on the CC for all evaluation ...
  25. [25]
    [PDF] IoT Secure Communications Module Protection Profile (IoT-SCM-PP)
    Dec 19, 2019 · The purpose of this Common Criteria (CC) Protection Profile (PP) is to standardize the security requirements of an IoT Secure Communications ...
  26. [26]
    [PDF] Guidelines for Developer Documentation - Common Criteria
    evaluation assessment of a PP, an ST or a TOE, against defined criteria. evaluation assurance level (EAL) an assurance package, consisting of assurance.
  27. [27]
    [PDF] Juniper Networks M,T, MX and PTX Routers and EX9200 Switches ...
    Sep 3, 2014 · the following secure network devices running Junos OS 13.3R1.8 ... Certified Products list. CM: Configuration Management. EAL: Evaluation ...
  28. [28]
    [PDF] Common Criteria Certificate
    Jan 31, 2014 · This Common Criteria Certificate is for Cisco Catalyst Switches (3560C, 3560X, and 3750X) running IOS 15.0(2)SE4, evaluated for security ...
  29. [29]
    [PDF] Lecture 80: Common Criteria Evaluations - UT Computer Science
    Testing costs are driven by a relatively small market, complexity and need for a skilled staff. EAL2 costs $100K to $170K and takes four to six months. EAL4 ...
  30. [30]
    Red Hat Achieves Common Criteria Security Certification for Red ...
    Oct 26, 2016 · Red Hat Enterprise Linux 7.1 has been awarded the Common Criteria Certification at Evaluation Assurance Level (EAL) 4+ for an unmodified commercial operating ...
  31. [31]
    Entrust nShield 5 Hardware Security Module Achieves Common ...
    Jun 27, 2024 · Examples of these certifications include the globally recognized Common Criteria certification. Entrust nShield hardware security modules (HSMs) ...
  32. [32]
    Versa Achieves Common Criteria EAL4+ Certification
    Dec 11, 2023 · Versa Operating System (VOS) Independently Certified to Meet Stringent EAL4+ Security Requirements Used in High-Assurance Deployments.
  33. [33]
    Smart card platform is 'world's first' to be EAL5+ certified
    NXP's SmartMX2 technology is the world's first secure smart card platform with MIFARE Plus & MIFARE DESFire EV1 functionality to be certified at EAL 5+.
  34. [34]
    [PDF] Smart Card Security WP
    The Department of Defense Common Access Card uses smart card technology for the credentialing of all military and civilian personnel. The Department of ...
  35. [35]
    Top 10 Things You Should Know About Common Criteria
    Apr 13, 2020 · It is used specifically to ensure that IT products meet standard security requirements for government or specific market deployments.
  36. [36]
    [PDF] Common Criteria: A Survey of Its Problems and Criticism - Jim Yuill
    This paper presents a survey of the problems and criticism reported about CC. This paper is based on a broad review of the CC literature from government, ...
  37. [37]
    About Us - What is NIAP/CCEVS
    This program includes the NIAP-managed Common Criteria Evaluation and Validation Scheme (CCEVS or Scheme), a national program for developing Protection Profile ...
  38. [38]
    [PDF] eBook - Lightship Security
    1. How much does Common Criteria certification cost? A CC evaluation is a significant undertaking and will generally cost somewhere in the hundreds of thousands ...
  39. [39]
    [PDF] NIAP End of Year Report
    NIAP completed its first Cloud evaluation, Microsoft Intune in December 2024 against the Mobile Device Management (MDM) PP 4.0 and MDM Agent Version 1.0.
  40. [40]
    [PDF] Protection Profile Cryptographic Service Provider - Common Criteria
    This PP claims package-augmented conformance to EAL4. The minimum assurance level for this protection profile is EAL4 augmented with AVA_VAN.5 and ALC_DVS.2.
  41. [41]
    [PDF] CCDB-012-v1.0-2021-Sep-30-Final-Certificate_Validity.pdf
    Sep 30, 2021 · Common Criteria certificates have in the past been issued with unlimited validity period, unless they are withdrawn.
  42. [42]
    Certified Products : CC Portal - Common Criteria
  43. [43]
    Certified Products List - Statistics : CC Portal
    A Certified Product may have multiple Categories associated with it. Certified Products by Assurance Level and Certification Date: EAL, 2015, 2016, 2017, 2018 ...
  44. [44]
    CCRA - NIAP
    The purpose of this Arrangement is to ensure IT products evaluated according to the terms of the CCRA are mutually recognized by all member nations, allowing ...
  45. [45]
    Summary of CCRA Final Update v16.5.1 (May 14, 2014)
  46. [46]
    Common Criteria Certification & Compliance - NetApp
    Seven levels describe the rigor and depth of the assessment, with EAL1 being the most basic and EAL7 the most stringent. The CCRA has agreed that EAL1 and EAL2 ...
  47. [47]
    Members of the CCRA : CC Portal - Common Criteria
    Members of the CCRA: United States, National Information Assurance Partnership, Department of Defense, ATTN: NIAP, Suite 6982, 9800 Savage Road, Ft. Meade, MD 20755 ...
  48. [48]
    Recognition of CC (Common Criteria) certificates in the context ... - BSI
    The CCRA relates to CC certificates based on a Collaborative Protection Profile (cPP) (used strictly as envisaged), to certificates for assurance components up ...
  49. [49]
    SOG-IS - Home
    For certificate producing nations there are also two levels of recognition within the agreement: Certificate recognition up to EAL4 (as in CCRA); Certificate ...
  50. [50]
    Microsoft Windows Platform Products Awarded Common Criteria ...
    Dec 14, 2005 · The following products have earned EAL 4 Augmented with ALC_FLR.3 certification from NIAP: Microsoft Windows Server™ 2003, Standard Edition (32- ...
  51. [51]
    MTCOS native generic ID solution - Infineon Technologies
    Easy backend integration via MTCOS-supported middleware. The OS is CC EAL 5+ certified. Dual-interface capability supports contactless communication (ISO/IEC ...
  52. [52]
    seL4 Proofs & Certification
    In particular, Common Criteria's EAL 7 requires a formal model of the security policy, functional specification and design specification, with formal ...
  53. [53]
    USGv6 for IPv6, Common Criteria EAL 4+, and certifications that ...
    Jun 17, 2013 · FIPS 140-2 is typically required to complete CC EAL4+ ... Department of Defense list of approved products and can be used on the US DoD networks.
  54. [54]
    Luna General Purpose HSMs - Thales
    Common Criteria EAL4+ (AVA_VAN.5 and ALC_FLR.2) Certified against the Protection Profile EN 419 221-5; Listed as Qualified Signature or Seal Creation Device ...
  55. [55]
    [PDF] IoT Secure Element Protection Profile (IoT-SE-PP) | Common Criteria
    Dec 19, 2019 · For a TOE supporting firmware update, this PP is conforming to assurance package EAL4 augmented by AVA_VAN.4 and ALC_FLR.1 as defined in Common ...
  56. [56]
    [PDF] Lessons Learned from the First High Assurance (EAL 6+) Common ...
    Lessons learned include: don't underestimate the pain of validating the PP, and reuse other cert results/artifacts.
  57. [57]
    Public disclosure: Vulnerable RSA generation CVE-2017-15361 | roca
    Oct 16, 2017 · The vulnerability is present in NIST FIPS 140-2 and CC EAL 5+ certified devices since at least the year 2012. ROCA Impact. The algorithmic ...
  58. [58]
    [PDF] arXiv:2311.17603v2 [cs.CR] 1 Jul 2024
    Jul 1, 2024 · such as Common Criteria undergo significant scrutiny during the costly ... and in the required finances (hundreds of thousands of dollars or more) ...
  59. [59]
    A Flexible Risk-Based Security Evaluation Methodology for ... - MDPI
    Some certification frameworks such as Common Criteria (CC) [14] support the comparability among the results of independent cybersecurity evaluations through ...
  60. [60]
    SP 800-53 Rev. 5, Security and Privacy Controls for Information ...
    This publication provides a catalog of security and privacy controls for information systems and organizations to protect organizational operations and assets.
  61. [61]
    Cyber Assessment Framework - NCSC.GOV.UK
    The Cyber Assessment Framework (CAF) is a tool to help organisations assess and improve their cyber security and resilience, managing cyber risks and protecting ...
  62. [62]
    [PDF] Arrangement on the Recognition of Common Criteria Certificates
    May 23, 2000 · This arrangement aims to ensure high IT evaluation standards, improve product availability, and allow use of certified products without further ...