
Trusted Computer System Evaluation Criteria

The Trusted Computer System Evaluation Criteria (TCSEC), commonly referred to as the Orange Book, is a United States Department of Defense (DoD) standard that defines technical criteria and methodologies for evaluating the security safeguards of automated data processing (ADP) systems designed to protect classified or sensitive information. Published in December 1985 as DoD 5200.28-STD, it supersedes an earlier 1983 version and serves as a basis for certifying systems, guiding manufacturers in developing secure products, and specifying security requirements in DoD acquisitions. The TCSEC emerged from efforts dating back to a 1967 Defense Science Board task force on computer security, with key developments including National Bureau of Standards (NBS) workshops in 1977–1978, contributions from the MITRE Corporation and other research organizations, and the DoD Computer Security Initiative launched in 1977. It forms the centerpiece of the DoD's Rainbow Series of security standards, which use color-coded bindings for various guidelines, and emphasizes assurance through concepts like the reference monitor from the 1972 Anderson Report and the Bell-LaPadula model.

The criteria focus on four primary areas: security policy (defining access rules), accountability (requiring audit mechanisms for security-relevant events), assurance (evaluating the trustworthiness of security mechanisms through testing and verification), and documentation (providing user and administrative guidance). At its core, the TCSEC organizes evaluation into four hierarchical divisions, each containing one or more classes that increase in rigor from minimal to verified protection: Division D (Minimal Protection), Division C (Discretionary Protection), Division B (Mandatory Protection), and Division A (Verified Protection). These classes assess the Trusted Computing Base (TCB)—the components providing security enforcement—against requirements for identification and authentication, object reuse prevention, continuous protection, and auditing. While influential in early evaluations, the TCSEC was largely superseded in the 1990s by international standards like the Common Criteria, though it remains a foundational reference for understanding hierarchical security assurance.

Introduction

Purpose and Scope

The Trusted Computer System Evaluation Criteria (TCSEC) provides a standardized framework for evaluating the security of computer systems intended to process classified or sensitive information, with the primary goal of assessing protections against unauthorized disclosure and modification. The criteria enable the U.S. Department of Defense (DoD) to measure the trustworthiness of systems handling such data up to the Top Secret classification level, serving as a basis for procurement specifications and evaluations of commercial products. The scope of the TCSEC is confined to automated data processing systems, primarily concentrating on confidentiality and integrity in multilevel secure environments typical of DoD applications. It addresses technical security features in hardware, firmware, and software but excludes considerations of physical or personnel security, focusing solely on the mechanisms within the system's trusted computing base. The TCSEC employs a hierarchical evaluation approach that categorizes systems into protection levels of increasing rigor, allowing for graduated assessments of trust based on security policy, accountability, and assurance requirements. This structure arose from the DoD's need for reliable trusted systems amid the demands of the Cold War era.

Key Terminology

The Trusted Computer System Evaluation Criteria (TCSEC), commonly known as the Orange Book, establishes a foundational vocabulary for evaluating the trustworthiness of computer systems, with key terms originating from U.S. Department of Defense (DoD) research in the 1970s that prioritized formal models over ad-hoc implementations. These concepts emphasize rigorous enforcement of security rules in systems handling sensitive information, influencing subsequent standards worldwide.

The Trusted Computing Base (TCB) refers to the totality of protection mechanisms within a computer system—including hardware, firmware, and software—that collectively enforce the system's security policy and isolate protected objects such as code and data from untrusted components. The TCB must be isolated from the rest of the system to prevent unauthorized interference, forming the core of any trusted system's security architecture.

The reference monitor is an abstract machine that mediates all access requests between subjects (e.g., users or processes) and objects (e.g., files or resources) to enforce the security policy, ensuring that it is tamper-proof, invoked for every access, and sufficiently small to allow thorough analysis and testing. This concept, central to the TCB's design, guarantees complete mediation without bypasses, a principle derived from early studies on secure system design.

A Security Policy constitutes a high-level set of rules and statements defining what information is protected, how access is controlled, and the dissemination limits for sensitive data, often modeled formally, as in the Bell-LaPadula model for confidentiality. It serves as a core requirement in TCSEC evaluations, providing the explicit framework that the TCB must implement precisely.

Assurance Levels represent the degree of confidence that a system correctly enforces its security policy throughout its lifecycle, from design through operation, with higher levels demanding more rigorous methods like formal proofs and extensive testing. These levels are integral to TCSEC's hierarchical evaluation classes, building on 1970s research to quantify trustworthiness in systems processing classified or sensitive information.
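The mediation role described above can be made concrete with a short sketch. The following Python fragment is purely illustrative: the class and function names are assumptions invented for this example and do not correspond to any TCSEC-defined interface.

```python
# Minimal sketch of reference-monitor mediation (illustrative only; the
# subject/object/policy names are assumptions, not TCSEC-defined APIs).

class ReferenceMonitor:
    """Mediates every access request and consults a single policy function."""

    def __init__(self, policy):
        # policy: callable(subject, obj, operation) -> bool
        self._policy = policy

    def access(self, subject, obj, operation):
        # Every access is funneled through this one small, analyzable choke point.
        if not self._policy(subject, obj, operation):
            raise PermissionError(f"{subject} may not {operation} {obj}")
        return f"{subject} performed {operation} on {obj}"


def demo_policy(subject, obj, operation):
    # Toy rule: anyone may read, only 'admin' may write.
    return operation == "read" or subject == "admin"


monitor = ReferenceMonitor(demo_policy)
print(monitor.access("alice", "report.txt", "read"))   # permitted
print(monitor.access("admin", "report.txt", "write"))  # permitted
# monitor.access("alice", "report.txt", "write")       # would raise PermissionError
```

The point of the design is that every access funnels through one small function that can be analyzed and tested exhaustively, which is the property the TCSEC demands of the reference monitor.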

Historical Development

Origins and DoD Initiatives

The origins of the Trusted Computer System Evaluation Criteria (TCSEC) trace back to the late 1960s, including the 1967 Defense Science Board task force on computer security (the Ware Report), which highlighted technical limitations in securing shared systems, and to the early 1970s, amid growing U.S. Department of Defense (DoD) concerns over securing computer systems during the Cold War. The development of the ARPANET, initiated by the Advanced Research Projects Agency (ARPA) in 1969, highlighted vulnerabilities in networked resource sharing. Early DoD-funded experiments in time-sharing, such as the System Development Corporation's ADEPT-50 project in the late 1960s, demonstrated significant risks in environments where users with varying clearance levels shared resources, exposing the inadequacies of ad-hoc protections against unauthorized access and data leakage.

The landmark Anderson Report, formally titled the "Computer Security Technology Planning Study" and published in October 1972 by James P. Anderson & Co. for the Electronic Systems Division (ESD) of the U.S. Air Force, identified critical gaps in existing computer architectures, emphasizing the need for standardized criteria to evaluate security effectiveness against malicious insiders in shared systems. The report advocated for a reference monitor concept—a tamper-proof mechanism to enforce access controls—and recommended a multi-year investment to develop certifiable secure systems, estimating annual losses from insecure setups at around $100 million. James Anderson and collaborators at ESD built on this foundation through the 1970s, evolving from informal evaluations to formalized criteria that introduced hierarchical security classes, influenced by mathematical models like Bell-LaPadula for access control. The DoD Computer Security Initiative, launched in 1977, further advanced these efforts, and a pivotal organizational step followed with the establishment of the DoD Computer Security Evaluation Center in 1981, later renamed the National Computer Security Center (NCSC). These pre-publication efforts transitioned into the official TCSEC document in 1983, and the criteria were subsequently mandated by DoD Directive 5200.28, issued on March 21, 1988, for all automated information systems processing classified data, requiring continuous protective features to prevent unauthorized disclosure.

Publication History and Revisions

The Trusted Computer System Evaluation Criteria (TCSEC) was initially published on August 15, 1983, as CSC-STD-001-83 by the DoD Computer Security Center (later the National Computer Security Center, NCSC) under the U.S. Department of Defense (DoD), marking the formal establishment of standardized evaluation guidelines for computer systems. This document served as the cornerstone of the DoD's Rainbow Series, a collection of approximately 20 color-coded manuals providing guidance on various security topics, with the TCSEC distinguished by its orange cover and thus nicknamed the "Orange Book." A revised version, DoD 5200.28-STD, was issued on December 26, 1985, superseding the 1983 interim standard and incorporating refinements based on early implementation feedback while maintaining the core structure of evaluation divisions and classes.

In 1987, the Trusted Network Interpretation (TNI) extended the TCSEC to networked systems through NCSC-TG-005, published on July 31, 1987, and known as the "Red Book" for its cover; this interpretation aligned network security requirements with the original criteria without altering the base document. Efforts to update the TCSEC continued into the early 1990s, including a 1991 draft proposal that aimed to incorporate explicit controls for integrity and system availability alongside confidentiality, drawing from emerging federal criteria needs, though this revision was ultimately not finalized or adopted. In 1992, the Federal Criteria for Information Technology Security were released as a transitional draft, building directly on the TCSEC to facilitate alignment with international standards while preserving its evaluation methodology during the transition. By the late 1990s, international efforts led to the TCSEC's deprecation; it was effectively superseded in 1999 with the adoption of the Common Criteria, with formal policy support via NSTISSP No. 11 promoting the Common Criteria as the successor standard for evaluating information technology security. This transition reflected the need for a globally recognized framework, ending the TCSEC's role in new evaluations while allowing legacy certifications to persist for existing systems.

Core Security Requirements

Policy Requirements

The Trusted Computer System Evaluation Criteria (TCSEC) mandates that evaluated systems implement a well-defined and explicit security policy as the foundational element of protection, ensuring that access to resources is controlled in a manner consistent with the system's intended security objectives. This encompasses either discretionary access controls, where users or resource owners specify permissions at their own discretion (such as read, write, or execute rights granted to specific users or groups), or mandatory access controls, where the system enforces access decisions through centrally administered rules independent of user actions. The policy must be clearly articulated in the system's documentation, providing evaluators with a precise framework to assess whether the Trusted Computing Base (TCB)—the components responsible for enforcing security—adequately protects against unauthorized disclosure, modification, or denial of service.

In higher evaluation classes, the security policy incorporates formal models to rigorously define confidentiality protections, with the Bell-LaPadula model serving as a seminal example for multilevel secure systems. This model enforces two core properties: the Simple Security Property (no read up), which prevents a subject at a given security level from reading an object at a higher level, and the *-Property (no write down), which prohibits a subject from writing to an object at a lower security level, thereby preventing inadvertent information leakage across sensitivity levels. These rules are applied through sensitivity labels assigned to all subjects and objects, ensuring that hierarchical classifications (e.g., unclassified, secret, top secret) and non-hierarchical categories (e.g., special access compartments) are consistently managed to maintain label integrity. The policy's reliance on such models in classes B2 and above links to assurance requirements by necessitating formal proofs that the security policy model is consistent with its axioms.

The security policy is further required to incorporate mechanisms for user identification and authentication, ensuring that every subject is verifiably linked to an authorized user or process before accessing protected resources, typically through methods like passwords or unique identifiers protected against compromise. Additionally, the principle of least privilege must be addressed, limiting subjects to the minimum access rights necessary for their functions to reduce potential damage from errors or malicious actions. During evaluation, the security policy is scrutinized for completeness—covering all aspects of the system's protection needs—and consistency, with evaluators verifying that no ambiguities or conflicts exist between policy statements and the TCB's enforcement capabilities.

TCSEC distinguishes the security policy into internal and external components to clarify enforcement boundaries: the internal policy governs the TCB's operations, such as access mediation and label manipulation within protected subsystems, ensuring isolation and integrity of security-critical functions. In contrast, the external policy pertains to user-visible interfaces, including how sensitivity labels are presented (e.g., in output markings or access denial messages) and how users interact with the system without compromising underlying protections. This separation facilitates comprehensive evaluation by allowing assessors to confirm that the TCB internally upholds the policy while externally providing transparent, non-bypassable controls that align with user expectations and operational needs.
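The two Bell-LaPadula rules described above can be expressed compactly in code. The sketch below is a simplified illustration that uses numeric sensitivity levels only, omitting the non-hierarchical categories that real TCSEC labels also carry; the names and level values are assumptions for this example.

```python
# Simplified Bell-LaPadula check (illustrative sketch; real TCSEC labels also
# carry non-hierarchical categories, omitted here for brevity).

LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top_secret": 3}

def may_read(subject_level: str, object_level: str) -> bool:
    # Simple Security Property ("no read up"): the subject must dominate the object.
    return LEVELS[subject_level] >= LEVELS[object_level]

def may_write(subject_level: str, object_level: str) -> bool:
    # *-Property ("no write down"): the object must dominate the subject.
    return LEVELS[object_level] >= LEVELS[subject_level]

assert may_read("secret", "confidential")        # read down: allowed
assert not may_read("confidential", "secret")    # read up: denied
assert may_write("confidential", "secret")       # write up: allowed
assert not may_write("secret", "confidential")   # write down: denied
```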

Accountability Requirements

The Trusted Computer System Evaluation Criteria (TCSEC) mandates accountability as a fundamental requirement to enable the identification and tracking of user actions, thereby supporting the enforcement of security policies through auditable records. This ensures that security-relevant events can be reconstructed for investigation, violation detection, and assignment of individual responsibility. Accountability mechanisms are integrated into the Trusted Computing Base (TCB) and apply across all evaluation divisions from Division C (Discretionary Protection) upward, serving as a feature that complements other requirements like policy enforcement.

Central to accountability is the requirement for unique user identification and authentication. The TCB must provide the capability to uniquely identify each individual Automated Data Processing (ADP) system user, ensuring that no two users share the same identifier. Authentication mechanisms, such as passwords or other protected methods, verify the claimed identity and associate it with all subsequent actions, preventing anonymous access and enabling individual accountability. These features enforce individual responsibility and are essential for linking events to specific actors.

Audit trails form the core of accountability by recording security-relevant events in a protected manner. The TCB is required to generate, maintain, and safeguard audit records from modification, unauthorized access, or destruction, with read access restricted to authorized personnel such as security administrators. These trails capture accesses to protected objects, including the subject's identity, the object's name, the specific operation performed (e.g., read, write, or execute), and the date, time, and outcome of the event. Additional events logged include the use of identification and authentication mechanisms, the introduction or deletion of objects into or from a user's address space, actions taken by operators and administrators, and other security policy-enforced events. This comprehensive auditing integrates with the system's security policy to classify and prioritize auditable occurrences.

To address the potential volume of audit data, TCSEC emphasizes capabilities for audit data reduction and review. The system must support selective auditing based on user identity, event type, or other criteria, allowing administrators to filter and analyze logs efficiently without overwhelming storage or processing resources. Review mechanisms enable the examination of audit records for patterns indicating violations, with tools for searching, sorting, and reporting to facilitate timely response. In higher evaluation classes, these features extend to support non-repudiation, ensuring that logged actions cannot be plausibly denied by the user. Overall, accountability contributes to the broader assurance of the system by providing verifiable evidence of compliance with security objectives.
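A minimal sketch of the kind of audit record described above, capturing the subject's identity, the object, the operation, the time, and the outcome, is shown below; the field names and the selection helper are illustrative assumptions, not a TCSEC-mandated format.

```python
# Illustrative audit-trail sketch: records the fields the TCSEC calls for
# (who, what object, what operation, when, success/failure) and shows simple
# selective review by user or event type. All names are assumptions.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AuditRecord:
    user: str          # uniquely identified user
    obj: str           # protected object (e.g., a file name)
    operation: str     # read / write / execute / login ...
    success: bool      # outcome of the attempted access
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

audit_log: list[AuditRecord] = []

def record_event(user, obj, operation, success):
    audit_log.append(AuditRecord(user, obj, operation, success))

def select(user=None, operation=None):
    # Audit reduction: filter records by user identity or event type.
    return [r for r in audit_log
            if (user is None or r.user == user)
            and (operation is None or r.operation == operation)]

record_event("alice", "payroll.db", "read", True)
record_event("bob", "payroll.db", "write", False)
print(select(user="bob"))
```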

Assurance Requirements

Assurance in the Trusted Computer System Evaluation Criteria (TCSEC) refers to the measures providing evidence that the Trusted Computing Base (TCB)—the totality of protection mechanisms within a system—correctly enforces the intended security policy without design or implementation flaws, as demonstrated through design analysis, development practices, and operational testing. This assurance is achieved by building confidence in the TCB's reliability across the system's life cycle, ensuring that security policies for access control, marking, identification, and accountability are interpreted accurately and consistently. The levels of assurance escalate progressively from informal methods in lower divisions, such as basic operational testing in Division C, to rigorous formal verification in Division A, where mathematical proofs confirm the consistency between the security policy model and the system's detailed design specifications.

Key elements include life-cycle controls that mandate security testing at every stage, from initial design through flaw remediation, along with the development of a formal top-level specification (FTLS) and descriptive top-level specification (DTLS) to validate the TCB's adherence to policy axioms. Configuration management is required to track and control changes to TCB components, employing tools for version identification, comparison, and protection of master copies to prevent unauthorized modifications. Covert channel analysis forms a critical assurance feature, involving systematic identification of potential covert storage and timing channels within the TCB, followed by bandwidth estimation to quantify information leakage risks and ensure they remain within acceptable limits. System integrity mechanisms provide for periodic validation of TCB hardware and firmware operations, confirming that the system remains in a secure state during runtime. Additionally, trusted recovery procedures ensure that, following system failures or disruptions, the TCB can be restored securely without compromising overall protection objectives.
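Covert channel analysis ultimately reduces to a bandwidth estimate. The toy calculation below, using assumed timing and error figures, sketches the arithmetic for a binary timing channel; it is illustrative only and not a prescribed TCSEC procedure.

```python
# Toy covert timing-channel bandwidth estimate (assumed figures, for
# illustration only). A sender modulates a shared resource; the receiver
# observes the delay to recover roughly one bit per trial.

import math

symbol_time_s = 0.01   # assumed: 10 ms to signal one symbol
error_rate = 0.05      # assumed: 5% of symbols are misread

def binary_capacity(p_err):
    # Shannon capacity of a binary symmetric channel, in bits per symbol.
    if p_err in (0.0, 1.0):
        return 1.0
    h = -(p_err * math.log2(p_err) + (1 - p_err) * math.log2(1 - p_err))
    return 1.0 - h

bits_per_second = binary_capacity(error_rate) / symbol_time_s
print(f"Estimated covert channel bandwidth: {bits_per_second:.1f} bits/s")
# The Division B discussion later in this article notes that bandwidths above
# roughly 100 bits per second are treated as high and require mitigation.
```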

Documentation Requirements

The Trusted Computer System Evaluation Criteria (TCSEC) mandates comprehensive documentation to support the evaluation, certification, and secure operation of trusted computer systems, encompassing descriptions of the security policy, Trusted Computing Base (TCB) design specifications, and user manuals. This ensures that evaluators can assess compliance with security requirements, while providing users and administrators with clear guidance on leveraging protection mechanisms effectively.

Key components include the Security Features User's Guide, which offers a consolidated summary of the TCB's protection mechanisms, operational guidelines, and their interactions, applicable across all evaluation classes from C1 to A1. The Trusted Facility Manual details administrative cautions, procedures for audit file management, operator functions, and secure TCB generation and recovery processes, with escalating specificity—for instance, adding audit record structures in C2 and secure startup procedures in B3. Test Documentation outlines plans, procedures, and results demonstrating the functionality of security mechanisms and their interfaces, with testing outcomes such as covert channel bandwidth analysis in B2 and formal top-level specification (FTLS) mapping in A1.

Documentation requirements intensify progressively across divisions, reflecting higher assurance needs; for example, Division C (C1 and C2) requires basic descriptions of the protection philosophy and interfaces, while Division B (B1 to B3) incorporates informal or formal models, detailed consistency analyses, and covert channel mitigations, culminating in Division A (A1) with verified formal models and extensive design specifications for formal verification. These documents must cover the system's hardware, firmware, and software to delineate TCB boundaries and interfaces, ensuring separation from untrusted components. Security testing results, including evidence of resource isolation and penetration resistance, are integral to validate mechanism efficacy. Throughout the system life cycle, documentation is maintained and updated via configuration management practices, particularly rigorous in classes B2 and above, to preserve consistency from design through deployment and maintenance, thereby supporting ongoing verification by evaluators and secure usage by operators. This recorded evidence plays a critical role in assurance validation by providing traceable artifacts for independent review.

Evaluation Divisions and Classes

Division D: Minimal Protection

Division D, designated as Minimal Protection, constitutes the lowest tier in the Trusted Computer System Evaluation Criteria (TCSEC) framework, encompassing systems that provide the least assurance against unauthorized disclosure or modification of data. This division includes a single class reserved exclusively for computer systems that have been formally evaluated but fail to satisfy the security requirements of any higher class, such as those mandating discretionary or mandatory access control mechanisms. As outlined in the original TCSEC document, "This division contains only one class. It is reserved for those systems that have been evaluated but that fail to meet the requirements for a higher evaluation class."

No specific security policy, accountability, assurance, or documentation requirements are imposed on systems in Division D, distinguishing it from higher divisions that enforce structured controls like access control, identification and authentication, and auditing. Instead, these systems rely on ad-hoc or procedural measures, with no mandated Trusted Computing Base (TCB) to mediate access or enforce a security policy. The absence of such features results in minimal protection, rendering these systems unsuitable for processing classified or sensitive unclassified information, as they offer no verified safeguards against security failures. Evaluation under this division focuses primarily on confirming the lack of discretionary access controls, auditing capabilities, or policy enforcement mechanisms that characterize Divisions C through A.

Typical systems assigned to Division D include basic, single-user operating environments or legacy software lacking inherent security features, where protection depends entirely on user discretion rather than system-enforced policies. For instance, simple operating systems without user identification, authentication, or resource isolation would qualify, as they demonstrate significant deficiencies in providing even baseline protection. The evaluation process for Division D is thus confirmatory in nature, verifying through testing and documentation review that the system does not meet the threshold for higher classes, thereby highlighting its inadequacy for trusted operations.

Division C: Discretionary Protection

Division C of the Trusted Computer System Evaluation Criteria (TCSEC) establishes requirements for systems that implement discretionary protection mechanisms, suitable for environments where access is controlled based on user discretion rather than strict system-enforced policies, such as in multi-user settings with moderate sensitivity needs. These systems rely on need-to-know access controls and basic auditing to ensure accountability, without the labeling or mandatory protections required in higher divisions. The division emphasizes informal security policy definitions, where protection is defined through user-specified rules rather than rigorous mathematical models, and assurance is achieved primarily through functional testing rather than formal proofs. Division C comprises two subclasses: C1 (Discretionary Security Protection) and C2 (Controlled Access Protection).

Class C1 focuses on basic discretionary access controls (DAC) that allow users or administrators to specify access permissions for objects, such as files or programs, using mechanisms like access control lists (ACLs) or self/group/public read/write/execute permissions. Under C1, the Trusted Computing Base (TCB) must identify users before granting access to mediated objects and authenticate them via protected mechanisms, such as passwords, to link actions to specific individuals. Accountability in C1 is limited to user identification without comprehensive auditing, while assurance involves testing the TCB for obvious flaws and ensuring no obvious bypasses exist. Documentation for C1 includes a Security Features User's Guide outlining protection mechanisms and user guidelines, a Trusted Facility Manual describing administrative procedures, test documentation verifying functionality, and basic design documentation on TCB interfaces.

Class C2 builds on C1 by introducing controlled access protections that enhance individual accountability and auditing in multi-user environments. It mandates unique user identification for all subjects, stricter DAC to prevent unauthorized propagation of access rights, and default protection for objects until explicitly modified. Key requirements include object reuse procedures to clear residual information before reallocation, ensuring no prior user data remains accessible, and session locking after inactivity or user request to prevent unauthorized access. For accountability, C2 requires audit logs to record security-relevant events, such as user logins and object accesses, creations, deletions, and modifications, including details like user identity, event type, date, time, and success/failure status. Assurance in C2 involves testing for resource isolation, where the TCB enforces distinct address spaces for processes and protects its data from modification, alongside system integrity checks. Documentation extends C1 requirements with procedures for audit file generation, maintenance, and review, as well as guidance on trusted distribution and recovery.

In both subclasses, the policy for discretionary controls centers on user-driven access decisions mediated by the TCB, with limited system enforcement to support basic auditing and prevent casual breaches in controlled settings. This approach provides sufficient protection for unclassified or moderately sensitive data but escalates to mandatory protections in Division B for higher assurance needs.
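A compact sketch of the two mechanisms most characteristic of Division C, owner-managed access control lists and clearing of storage before reuse, follows; the class and method names are assumptions made for illustration rather than any evaluated product's interface.

```python
# Illustrative sketch of Division C mechanisms: discretionary ACLs set at the
# owner's discretion, plus object reuse (clearing residual data before a
# storage object is reallocated). Names are assumptions, not a real API.

class ProtectedFile:
    def __init__(self, owner, data=b""):
        self.owner = owner
        self.data = bytearray(data)
        # The ACL maps a user to the set of operations permitted to that user.
        self.acl = {owner: {"read", "write"}}

    def grant(self, grantor, user, permission):
        # DAC: only the owner may change the ACL.
        if grantor != self.owner:
            raise PermissionError("only the owner may grant access")
        self.acl.setdefault(user, set()).add(permission)

    def read(self, user):
        if "read" not in self.acl.get(user, set()):
            raise PermissionError(f"{user} has no read access")
        return bytes(self.data)

    def release(self):
        # Object reuse (C2): scrub residual contents before reallocation.
        for i in range(len(self.data)):
            self.data[i] = 0

f = ProtectedFile("alice", b"quarterly figures")
f.grant("alice", "bob", "read")
print(f.read("bob"))
f.release()
print(f.read("alice"))  # zeroed: no residual information remains
```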

Division B: Mandatory Protection

Division B of the Trusted Computer System Evaluation Criteria (TCSEC) establishes requirements for systems that enforce mandatory access control (MAC) policies to protect multilevel sensitive data, ensuring that access decisions are mediated by the Trusted Computing Base (TCB) based on sensitivity labels rather than user discretion. These systems must implement a formal security policy model, typically the Bell-LaPadula model, which enforces the simple security property (no read up) and the *-property (no write down) to maintain confidentiality in multilevel secure environments. Assurance is provided through structured design analysis and partial verification, focusing on the TCB's policy enforcement, while accountability is enhanced via comprehensive system-wide audit mechanisms that log security-relevant events, including access attempts and label changes. Division B is divided into three subclasses—B1, B2, and B3—each building upon the previous with increasing rigor in enforcement, assurance, and documentation to handle sensitive data in trusted environments.

Class B1, known as Labeled Security Protection, requires the TCB to enforce mandatory access control over all subjects and storage objects using sensitivity labels that include hierarchical classifications and non-hierarchical categories, ensuring label integrity and proper exportation of labeled information (e.g., human-readable markings on output). Accountability in B1 includes unique user identification, protection of authentication data, and audit trails capturing events such as object accesses with associated security levels. Assurance features an informal security policy model, operational testing to identify design and implementation flaws, and process isolation through distinct address spaces for subjects. Documentation must detail the security policy model and protection mechanisms, providing evidence of Bell-LaPadula enforcement.

Class B2, Structured Protection, extends B1 by requiring MAC enforcement over all system resources, including input/output devices, with subject sensitivity labels and device labeling capabilities to prevent unauthorized information flows. It introduces formal security policy modeling and a descriptive top-level specification (DTLS) for the TCB, alongside covert channel analysis to identify and bound potential information leaks, with bandwidths exceeding 100 bits per second classified as high and requiring audits or mitigation. Accountability is bolstered by a trusted path for user authentication (e.g., during login) and enhanced auditing of events that could exploit covert channels. Assurance emphasizes a structured design with least-privilege separation, configuration management, and verification that the DTLS matches the implementation, while maintaining continuous protection. Comprehensive documentation includes the formal model, the DTLS, covert channel analysis results, and configuration management details.

Class B3, Security Domains, refines B2 by mandating stricter and more refined discretionary access controls, such as inclusion and exclusion lists, to separate security domains and prevent unauthorized inter-domain access. The TCB must support trusted recovery from failures without compromising security and must monitor auditable events for threshold breaches, notifying administrators of potential violations. Assurance requires minimizing TCB complexity for thorough testing, ensuring consistency between the formal model and the DTLS, and comprehensive covert channel analysis with bandwidth controls. Documentation encompasses trusted recovery procedures, TCB structuring for auditability, and evidence of system-wide audit capabilities that integrate with accountability requirements. Overall, Division B provides a balanced framework for mandatory protection, suitable for many Department of Defense applications handling classified information.
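Division B labels carry both a hierarchical level and a set of non-hierarchical categories, and mandatory access decisions rest on a dominance test over those labels. The sketch below illustrates that test under assumed level names and categories; it is not drawn from any evaluated system.

```python
# Illustrative sketch of label dominance for mandatory access control:
# label A dominates label B when A's level is >= B's level AND A's category
# set contains all of B's categories. Level names and categories are assumptions.

from dataclasses import dataclass, field

LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top_secret": 3}

@dataclass(frozen=True)
class Label:
    level: str
    categories: frozenset = field(default_factory=frozenset)

def dominates(a: Label, b: Label) -> bool:
    return (LEVELS[a.level] >= LEVELS[b.level]
            and a.categories >= b.categories)

subject = Label("secret", frozenset({"NATO"}))
doc_low = Label("confidential", frozenset())
doc_compartmented = Label("secret", frozenset({"NATO", "CRYPTO"}))

print(dominates(subject, doc_low))            # True: read down is permitted
print(dominates(subject, doc_compartmented))  # False: missing the CRYPTO category
```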

Division A: Verified Protection

Division A represents the highest level of assurance in the Trusted Computer System Evaluation Criteria (TCSEC), emphasizing formal verification methods to ensure that the Trusted Computing Base (TCB) rigorously protects sensitive or classified information through mandatory and discretionary security controls. Systems evaluated under this division must demonstrate that their design and implementation are consistent with a mathematically precise security policy model, providing the strongest guarantees against unauthorized access or disclosure. This level is intended for environments requiring the utmost confidence in security, such as those handling top-secret data, where any potential flaw could have catastrophic consequences.

The sole subclass in Division A is A1: Verified Design, which builds upon the requirements of class B3 by incorporating formal top-level specification and verification of the TCB design. Under A1, the security policy must be expressed through a formal model that is a mathematically precise statement, proven to meet the system's security objectives, including discretionary access control that granularly manages permissions between users and named objects, object reuse that prevents residual information access by revoking authorizations prior to reallocation, sensitivity labels for all objects and subjects, and mandatory access control enforcing a hierarchical classification scheme with non-hierarchical categories. Accountability is ensured via robust identification and authentication mechanisms, including protected authentication data and a trusted path for secure user interactions such as login and security level changes, alongside non-bypassable audit capabilities that log security-relevant events with user identity, object attributes, and outcomes for comprehensive review.

Assurance in A1 is achieved through exhaustive formal verification, requiring a documented formal model of the security policy, a formal top-level specification (FTLS) that precisely describes the TCB's mechanisms and is verified for consistency with the model, and proof that the TCB implementation aligns with the FTLS via mathematical analysis and testing. This includes formal covert channel analysis to identify and mitigate bandwidth for unauthorized information flows, configuration management to control all changes to the TCB throughout its life cycle, and system integrity validation to confirm operational security. Documentation is extensive, encompassing a Security Features User's Guide for end-users, a Trusted Facility Manual for administrators on TCB initialization and maintenance, detailed test plans and results demonstrating FTLS consistency, and design documentation with verification evidence mapping the formal model to source code.

Only a handful of systems ever achieved A1 certification due to the immense cost and complexity of formal verification, with notable examples including Boeing's Multilevel Secure Local Area Network (MLS LAN), Honeywell's Secure Communications Processor (SCOMP), and the Gemini Trusted Network Processor, all specialized network products requiring mathematical proofs of security properties. These verifications provide assurance through exhaustive analysis of all potential flaws, ensuring the TCB's policy enforcement via verified models and life-cycle controls.
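The flavor of the formal statements verified at A1 can be conveyed by writing the Bell-LaPadula properties mathematically. This is a standard textbook formulation rather than a quotation from any particular formal top-level specification.

```latex
% Bell-LaPadula properties in a standard textbook form (requires amsmath/amssymb).
% L(x) denotes the sensitivity label of subject s or object o;
% \succeq denotes label dominance (level comparison plus category containment).
\begin{align*}
\text{Simple Security Property:}\quad & s \text{ may read } o \implies L(s) \succeq L(o)\\
\text{*-Property:}\quad & s \text{ may write } o \implies L(o) \succeq L(s)
\end{align*}
```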

Application and Evaluation

Matching Classes to Security Needs

Selecting the appropriate TCSEC evaluation class involves assessing the operational environment, data sensitivity levels, and threat profile to ensure the system's security features align with the required protection level. Division D is typically suited for low-risk, unclassified environments where minimal protection suffices, such as non-sensitive administrative systems with no mandatory controls needed. Division C applies to general-purpose systems requiring discretionary controls, like those handling unclassified but sensitive information, where individual accountability and need-to-know permissions are primary concerns. For classified single-level operations, Division B provides mandatory protection through features like sensitivity labeling and auditing, while Division A is reserved for high-trust scenarios demanding formal verification, such as systems processing multiple classification levels simultaneously.

Environmental factors significantly influence class selection, including the threat environment, physical security measures, and multi-user access requirements. A high-threat setting with sophisticated adversaries and limited physical safeguards may necessitate Division B or higher to enforce mandatory controls and labeling, whereas lower-threat settings with robust physical protections might suffice with Division C. For instance, multi-user systems in shared facilities require classes that support individual accountability and resource isolation to mitigate insider threats. The Department of Defense (DoD) employed these matching principles in its Trusted Product List (now known as the Evaluated Products List, or EPL) for procurement decisions, evaluating systems against specific needs; B2-class systems, for example, were selected for tactical military applications involving partitioned operations to handle classified data in dynamic, high-risk field operations.

Balancing trade-offs between cost, performance, and assurance is central to selection, as higher divisions impose greater expenses and potential overhead from features like covert channel analysis and formal proofs, which can degrade system efficiency. For example, opting for a B3-class system in a low-threat environment increases unnecessary costs without proportional benefits, while selecting a C2-class system for a multilevel classified setup risks inadequate safeguards against unauthorized information flows, potentially leading to breaches or failures. Mismatched selections often arise from underestimating threats, such as deploying Division C in a multi-user classified network, resulting in elevated risk of discretionary access violations; conversely, over-specifying to Division A in single-level operations wastes resources on unverifiable assurances. Proper alignment, guided by risk indices that quantify data sensitivity against user clearances and environmental threats, ensures optimal protection without excessive burden.
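The risk-index idea mentioned above, comparing the most sensitive data on a system against the least cleared user, can be sketched as a small calculation. The numeric ratings and the class lookup below are simplified assumptions for illustration, not the official environment-guideline tables.

```python
# Illustrative sketch of a risk-index style calculation (inspired by the
# environment guidance that accompanied the TCSEC). The ratings and the class
# lookup are simplified assumptions, not the official tables.

CLEARANCE_RATING = {"uncleared": 0, "confidential": 1, "secret": 2, "top_secret": 3}
SENSITIVITY_RATING = {"unclassified": 0, "confidential": 1, "secret": 2, "top_secret": 3}

def risk_index(min_user_clearance: str, max_data_sensitivity: str) -> int:
    # Risk grows with the gap between the most sensitive data on the system
    # and the least cleared user allowed to access it.
    return max(0, SENSITIVITY_RATING[max_data_sensitivity]
                  - CLEARANCE_RATING[min_user_clearance])

# Assumed, simplified mapping from risk index to a minimum evaluation class.
MINIMUM_CLASS = {0: "C2", 1: "B1", 2: "B2", 3: "B3"}

idx = risk_index("confidential", "top_secret")
print(idx, MINIMUM_CLASS.get(idx, "A1 or not recommended"))
```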

Evaluation and Certification Process

The evaluation and certification process for the Trusted Computer System Evaluation Criteria (TCSEC) is a structured, multi-phase procedure managed by the National Computer Security Center (NCSC), an arm of the National Security Agency (NSA), to assess the security of computer systems against defined criteria. Vendors initiate the process by contacting the NCSC's Office of Industrial Relations and submitting a proposal package, which includes a company profile, market information, and a detailed product proposal; this leads to a program decision and, if approved, a memorandum of understanding that outlines the evaluation scope and responsibilities. Vendors conduct an initial self-evaluation, preparing design documentation, test plans, and evidence of compliance with TCSEC requirements for security policy enforcement, accountability, assurance, and documentation. Following this, the NCSC performs a formal third-party evaluation, which includes comprehensive document review of user manuals, administrative guides, hardware/software specifications, and program logic; security testing to verify implementation; and, for systems targeting B2 through A1, penetration testing to assess resistance to unauthorized access. The process encompasses several phases: pre-review for initial assessment, vendor assistance for clarification, design analysis for architectural review, formal evaluation for detailed testing and analysis, and rating maintenance for ongoing validation.

Qualified evaluation teams at the NCSC serve as independent evaluators, applying criteria checklists derived from the TCSEC to systematically verify requirements across divisions and classes; these teams analyze system design, source code, object code, and operational procedures to identify and ensure correction of security flaws before re-testing. Site visits may occur as part of the formal evaluation to observe development practices and testing in context, enhancing the thoroughness of the assessment. The TCSEC employs a hierarchical rating system, where classes within divisions (D through A) build cumulatively; a system must fully meet all requirements of a target class and all lower classes to achieve that rating, with failure in any prerequisite blocking progression to higher assurance levels. Successful completion results in assignment of a specific class rating (e.g., C2 or B3) and inclusion on the NCSC's Evaluated Products List, serving as a DoD certification of trustworthiness for procurement and deployment.

Post-certification maintenance occurs through the Rating Maintenance Program (RAMP), where vendors submit Rating Maintenance Reports detailing product changes for NCSC review; minor modifications can preserve the original rating without full re-evaluation, while significant alterations may trigger partial or complete reassessment to ensure sustained compliance. This ongoing mechanism supports rating validity over the product's lifecycle, with the NCSC overseeing updates to prevent degradation of security assurances.

Legacy and Influence

Supersession by Common Criteria

The Trusted Computer System Evaluation Criteria (TCSEC) was effectively superseded by the Common Criteria (CC) through an international harmonization effort in the mid-1990s, culminating in the adoption of the CC as a global standard. CC Version 1.0, released in 1996, integrated elements from the U.S. Federal Criteria (intended as a TCSEC successor), the European Information Technology Security Evaluation Criteria (ITSEC), and the Canadian Trusted Computer Product Evaluation Criteria (CTCPEC), among others, to create a unified framework for evaluating IT security. In 1999, CC Version 2.1 was published and adopted as the international standard ISO/IEC 15408, leading to the withdrawal of TCSEC-based evaluations by the U.S. government, with the Trusted Product Evaluation Program concluding in 2000.

A key feature of the CC is its use of seven Evaluation Assurance Levels (EAL1 to EAL7), which provide scalable assurance based on increasingly rigorous testing and documentation; these levels map approximately onto the TCSEC classes in terms of design rigor and testing depth. This structure facilitated a transition from TCSEC's rigid, U.S.-focused divisions emphasizing mandatory versus discretionary protection to a more flexible, globally applicable model centered on protection profiles—reusable templates defining security requirements for specific technology types. The CC de-emphasizes the binary mandatory/discretionary distinction in favor of customizable functional and assurance requirements, allowing broader application to commercial IT products beyond government systems.

The supersession was driven by the need to support international trade in secure IT products, as disparate criteria like the TCSEC hindered mutual recognition of evaluations across borders; additionally, TCSEC's focus on 1980s-era mainframe technology had become outdated amid the rise of networked, distributed systems. The TCSEC played a foundational role in shaping the CC's assurance concepts, providing a basis for the EAL framework while enabling legacy systems certified under the TCSEC to be mapped to roughly equivalent assurance levels for continued use—such as treating B2-certified products as approximate EAL4 equivalents during the transition period. This migration guidance, issued by bodies like NIST, ensured minimal disruption for existing deployments while promoting adoption of the new international standard.

Impact on Global Security Standards

The Trusted Computer System Evaluation Criteria (TCSEC) significantly shaped the development of the Information Technology Security Evaluation Criteria (ITSEC) in Europe during the early 1990s, serving as a foundational reference for harmonizing security evaluations across nations like France, Germany, the Netherlands, and the United Kingdom. ITSEC adapted TCSEC's hierarchical classes—such as C1, C2, B1, B2, and B3—for its functionality levels (F-C1 to F-B3), while introducing greater flexibility in assurance and coverage of integrity and availability beyond TCSEC's primary focus on confidentiality. This influence facilitated mutual recognition of evaluation results internationally, promoting a more unified approach to IT security assessment. TCSEC's concepts extended to networked environments through the Trusted Network Interpretation (TNI), which interpreted its criteria for distributed systems, emphasizing trust in communications and adding rationale for applying reference monitor principles to networks.

Globally, TCSEC promoted assurance hierarchies that influenced subsequent frameworks, including the separation of functionality and assurance levels seen in later standards, encouraging graded security requirements based on risk. Key concepts like the Trusted Computing Base (TCB)—the set of components enforcing security policy—and the reference monitor, which mediates access to prevent bypasses, were inherited in the Common Criteria (CC), where formal methods for verification at higher assurance levels echo TCSEC's B3 and A1 classes. By the late 1990s, TCSEC evaluations had certified numerous operating systems and products, with an average of about eight operating system evaluations annually, demonstrating its practical impact on secure system procurement.

Despite its influence, TCSEC faced critiques for its rigidity, particularly in adapting to commercial software and evolving networked architectures, as its confidentiality focus and documentation demands were seen as overly prescriptive for non-military applications. In the U.S. Department of Defense (DoD), TCSEC's legacy persists in classified contexts, where its evaluation classes remain a baseline for assessing automated information systems handling sensitive data, even as broader frameworks like the Risk Management Framework (RMF)—which evolved from the DoD Information Assurance Certification and Accreditation Process (DIACAP)—incorporate its emphasis on assurance through policy enforcement and verification. This enduring role underscores TCSEC's contribution to global standards prioritizing verifiable security in high-stakes environments.
