Mandatory access control
Mandatory access control (MAC) is a nondiscretionary access control mechanism that enforces centralized security policies to restrict subjects' (such as users or processes) access to objects (such as files or resources) based on sensitivity labels assigned to the information and clearance levels granted to the subjects.[1] Unlike discretionary access control (DAC), where resource owners determine permissions, MAC prevents subjects from altering access rules, ensuring uniform enforcement across the system to protect classified or sensitive data. This model originated in military and government contexts to implement multilevel security policies, such as the need-to-know principle for handling confidential information.[2] MAC policies constrain subjects from actions like passing information to unauthorized entities, granting privileges to others, modifying security attributes, or changing access rules, thereby maintaining confidentiality and integrity. Foundational MAC models include the Bell-LaPadula model, developed in 1973 for confidentiality, which enforces the "simple security property" (no read up) and the "*-property" (no write down) to prevent unauthorized data leakage from higher to lower security levels.[3] Complementing this, the Biba model, introduced in 1975, focuses on integrity with rules like "no read down" and "no write up" to avoid corruption of high-integrity data by lower-integrity sources.[4] These models are mediated by a reference monitor—a tamper-proof component that validates all access requests against the policy.
In practice, MAC is implemented in high-assurance operating systems, such as Security-Enhanced Linux (SELinux), developed by the National Security Agency, which integrates the Flask architecture for flexible MAC enforcement in the Linux kernel.[5] Other examples include AppArmor, which provides path-based MAC in Linux distributions, and systems evaluated under the Trusted Computer System Evaluation Criteria (TCSEC), commonly known as the Orange Book.[6] MAC remains essential for environments requiring strict control over information flow, including defense, intelligence, and critical infrastructure protection.
Core Principles
Definition and Characteristics
Mandatory access control (MAC) is a non-discretionary access control mechanism in which access decisions are made by a central authority based on predefined security policies, rather than allowing users to determine access permissions.[7] Unlike discretionary access control (DAC), which permits users to share resources at their discretion, MAC enforces policies uniformly across all subjects and objects within a system boundary to prevent unauthorized information flows.[8] Key characteristics of MAC include system-wide enforcement of fixed security labels assigned to subjects (such as users or processes) and objects (such as files or resources), with no provision for user overrides.[9] These labels represent sensitivity levels or classifications that dictate allowable interactions, emphasizing principles like least privilege—granting only the minimum access necessary—and need-to-know, ensuring access is limited to information required for authorized tasks.[1] This approach focuses on protecting confidentiality and integrity by controlling information flow, making MAC particularly suitable for environments requiring strict multilevel security.[10] The enforcement of MAC policies relies on the trusted computing base (TCB), which comprises the hardware, firmware, and software components responsible for mediating all access attempts and upholding the system's security policy.[11] The TCB includes a reference monitor that intercepts every access request, verifies it against the policy, and ensures isolation from untrusted elements, thereby guaranteeing that MAC rules cannot be circumvented.[11] In the basic workflow of MAC, subjects and objects are assigned fixed labels by administrators or the system policy during creation or classification.[1] When a subject attempts to access an object, the reference monitor within the TCB evaluates the request by comparing the subject's label (e.g., clearance level) against the object's label (e.g., sensitivity) according to 
predefined rules, granting or denying access accordingly to maintain policy compliance.[10] This process occurs transparently for every operation, ensuring consistent enforcement without user intervention.[9]
Labels and Security Levels
In mandatory access control (MAC), security labels are essential mechanisms for classifying subjects and objects according to their sensitivity and access requirements. These labels typically consist of two primary components: sensitivity levels and categories. Sensitivity levels form a hierarchical classification, often represented as ordered values such as unclassified, confidential, secret, and top secret, where higher levels indicate greater protection needs.[12] Categories, in contrast, are non-hierarchical compartments that provide finer-grained control, such as project-specific tags like "NATO" for alliance-related information or "financial" for economic data, allowing restrictions beyond simple hierarchy.[12] The structure of labels combines hierarchical and non-hierarchical elements to enforce policies through a dominance relation. In this relation, a label L_s (for a subject) dominates a label L_o (for an object) if the sensitivity level of L_s is greater than or equal to that of L_o, and the set of categories in L_s contains all categories in L_o. Formally, dominance is defined as:

\text{level}(L_s) \geq \text{level}(L_o) \quad \text{and} \quad \text{categories}(L_s) \supseteq \text{categories}(L_o)

This relation ensures that access decisions align with the central policy by comparing label components directly.[13][12] Subjects, such as users or processes, are assigned clearances represented as ranges of sensitivity levels and category sets, indicating the maximum and minimum authorizations they hold. Objects, like files or database records, receive fixed classifications with specific levels and categories that denote their inherent sensitivity.
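The dominance relation can be sketched in a few lines of code. The following Python fragment is purely illustrative; the numeric level ordering and the category names are invented for the example and do not come from any particular MAC implementation.

```python
from dataclasses import dataclass

# Hierarchical sensitivity levels, ordered low to high (illustrative values).
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top secret": 3}

@dataclass(frozen=True)
class Label:
    level: str
    categories: frozenset  # non-hierarchical compartments

def dominates(l_s: Label, l_o: Label) -> bool:
    """True if l_s dominates l_o: level(l_s) >= level(l_o) and
    categories(l_s) is a superset of categories(l_o)."""
    return (LEVELS[l_s.level] >= LEVELS[l_o.level]
            and l_s.categories >= l_o.categories)

subject = Label("secret", frozenset({"NATO", "financial"}))
obj = Label("confidential", frozenset({"financial"}))
print(dominates(subject, obj))  # True: higher level, superset of categories
print(dominates(obj, subject))  # False: lower level and missing "NATO"
```

Note that dominance is a partial order: two labels with disjoint category sets are incomparable, so neither dominates the other.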
For read access, a subject's clearance must dominate the object's classification, meaning the subject's authorized level meets or exceeds the object's level while encompassing its categories; write access typically requires the reverse dominance to prevent unauthorized information flow.[12] To mitigate covert channels where higher-level information might inferentially leak to lower levels, MAC systems employ polyinstantiation, which allows multiple instances of the same object to exist at different security levels. For example, a database entry might have a low-level version with sanitized data and a high-level version with full details, visible only to cleared subjects, thereby preserving security without revealing absences that could signal sensitive content.[14][15]
Access Control Models
Bell-LaPadula Model
The Bell-LaPadula model was developed in 1973 by David Elliott Bell and Leonard J. LaPadula at the MITRE Corporation under contract for the Electronic Systems Division of the U.S. Air Force.[16] This formal security model provides a mathematical foundation for enforcing confidentiality in computer systems through mandatory access control, particularly in environments handling classified information.[3] It addresses the need to prevent unauthorized disclosure by controlling information flow based on hierarchical security levels assigned to subjects (active entities like users or processes) and objects (passive entities like files).[16] The model's core properties are the Simple Security Property and the Star Property (*-property). The Simple Security Property ensures confidentiality by prohibiting a subject from reading an object at a higher security level than its own, often termed "no read up."[3] This means a subject at security level n cannot access information classified above n, preventing lower-level entities from obtaining sensitive data. The Star Property complements this by forbidding a subject from writing or appending to an object at a lower security level, known as "no write down."[3] Thus, a subject at level n can only write to objects at level n or higher, avoiding the inadvertent leakage of high-sensitivity information to lower levels through storage or transmission.[16] To formalize these rules, access is defined as follows:
- For read access from subject s to object o: \text{level}(s) \geq \text{level}(o).
- For write access from subject s to object o: \text{level}(s) \leq \text{level}(o).
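The two rules can be expressed as a pair of predicates that a reference monitor would evaluate on every access request. This is a minimal sketch under the assumption that levels are encoded as integers (higher means more sensitive); the function names are illustrative, not taken from any specific system.

```python
# Bell-LaPadula access checks; levels are integers, higher = more sensitive.

def can_read(subject_level: int, object_level: int) -> bool:
    # Simple Security Property: "no read up" --
    # the subject's level must dominate the object's.
    return subject_level >= object_level

def can_write(subject_level: int, object_level: int) -> bool:
    # *-Property: "no write down" --
    # the subject may only write at its own level or above.
    return subject_level <= object_level

SECRET, CONFIDENTIAL = 2, 1
print(can_read(SECRET, CONFIDENTIAL))   # True: reading down is permitted
print(can_write(SECRET, CONFIDENTIAL))  # False: writing down is forbidden
print(can_write(CONFIDENTIAL, SECRET))  # True: writing up is permitted
```

Note that the only accesses satisfying both predicates at once are those where subject and object share the same level, which is why real systems also rely on trusted subjects exempted from the *-property for tasks such as declassification.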
Biba Model
The Biba model, developed by Kenneth J. Biba in 1975, serves as a foundational mandatory access control framework specifically designed to enforce data integrity in computer systems, acting as a counterpart to confidentiality-focused models like Bell-LaPadula by prioritizing protection against unauthorized modifications rather than disclosures.[20] Originally formulated for secure Department of Defense environments, it addresses the need to maintain the validity and trustworthiness of information flows, ensuring that high-integrity data remains untainted by lower-integrity sources.[20] At its core, the model incorporates two primary properties to safeguard integrity. The simple integrity property prohibits a subject from reading an object of lower integrity (no read down), preventing the contamination of a subject's knowledge with potentially unreliable data. The *-integrity property (or star-integrity property) restricts a subject from writing to an object of higher integrity (no write up), thereby blocking low-integrity subjects from altering trusted data.[20] These rules invert the flow directions of confidentiality models: where Bell-LaPadula permits information to flow upward to higher sensitivity levels but never downward, Biba permits information to flow only from higher-integrity to lower-integrity entities, so that trusted data is never derived from untrusted sources.[20] Integrity levels in the Biba model form a hierarchy ranging from low (representing tainted or untrusted data) to high (indicating trusted or validated information). These levels are augmented by categories for finer compartmentalization, creating a lattice structure similar to—but inverted from—confidentiality labels, where dominance relations ensure that access respects integrity ordering.[20] Formally, read access is granted to a subject s from an object o only if the integrity level of o is greater than or equal to that of s, denoted as \text{level}(o) \geq \text{level}(s).
Write access, under the strict policy, is permitted only when the subject's integrity level is greater than or equal to the object's, \text{level}(s) \geq \text{level}(o), so that modifications never flow from lower-integrity subjects to higher-integrity objects.[20] To address practical limitations of the strict policy, Biba proposed extensions such as the low-water-mark policy, which dynamically adjusts a subject's integrity level downward to the minimum of its current level and that of any read object, allowing reads from any level but risking progressive degradation.[20] Another variant, the ring policy, maintains fixed subject levels while permitting reads from any level but confining writes to objects of equal or lower integrity, balancing flexibility with integrity controls.[20] However, these dynamic mechanisms introduce a key limitation: the potential for integrity starvation, where repeated interactions with low-integrity objects cause subjects to drop to the lowest level, eventually restricting access to higher-integrity resources and undermining system utility.[20]
Other Models
The Chinese Wall model, proposed by Brewer and Nash in 1989, addresses conflicts of interest in commercial environments by dynamically restricting access to data based on an object's membership in conflict classes and a subject's prior access history.[21] In this model, objects are grouped into conflict classes representing competing interests, such as shares in rival companies, and access to an object in a new class is denied if the subject has previously accessed data from a conflicting class, thereby preventing indirect information leakage through aggregation.[21] Unlike traditional label-based MAC, the Chinese Wall enforces a non-static policy where the system's state evolves with user actions, ensuring that once a "wall" is crossed by reading from one company's data, access to its competitors remains barred for that subject.[21] The Clark-Wilson model, introduced by Clark and Wilson in 1987, emphasizes integrity protection for commercial data processing through well-formed transactions, separation of duties, and certified procedures rather than hierarchical labels. It defines constrained data items (CDIs) that must remain valid and unconstrained data items (UDIs) as inputs, with access mediated exclusively by transformation procedures (TPs) validated against an integrity policy to prevent unauthorized modifications. Although lacking explicit security labels, the model's centralized enforcement of rules by a trusted computing base mirrors MAC principles, focusing on certifying that only authorized, auditable operations alter critical data while maintaining separation of duties to mitigate insider threats.
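The history-dependent Chinese Wall rule described above can be sketched as a small access check. The conflict classes and company names below are invented for illustration; the point is that the decision depends on the subject's accumulated access history, not on static labels.

```python
# Illustrative Chinese Wall (Brewer-Nash) check. Each conflict-of-interest
# class lists companies whose data must not be mixed by one subject.
CONFLICT_CLASSES = {
    "banks": {"bank_a", "bank_b"},
    "oil":   {"oil_x", "oil_y"},
}

def may_access(history: set, company: str) -> bool:
    """Allow access unless the subject has already accessed a *different*
    company inside the same conflict class."""
    for members in CONFLICT_CLASSES.values():
        if company in members:
            if any(prior in members and prior != company for prior in history):
                return False
    return True

history = set()
print(may_access(history, "bank_a"))  # True: no wall crossed yet
history.add("bank_a")
print(may_access(history, "bank_b"))  # False: competitor in the same class
print(may_access(history, "oil_x"))   # True: unrelated conflict class
```

Re-accessing the same company stays allowed; only competitors inside an already-entered conflict class are walled off.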
Denning's lattice model, developed in 1976, provides a general framework for secure information flow using partially ordered security lattices to support non-hierarchical policies beyond simple confidentiality or integrity.[22] In this model, information flows are governed by lattice relations where a subject at level s can read from levels \leq s and write to levels \geq s, but the lattice structure allows for arbitrary partial orders to model complex dependencies, such as in military or multilevel systems.[22] The approach unifies various flow restrictions by classifying systems based on their security objectives, enabling formal verification that no unintended flows occur through program execution or storage.[22] Hybrid models integrating mandatory access control labels with role-based access control (RBAC) enhance granularity by constraining role activations or permissions with MAC policies, allowing finer control in environments requiring both centralized enforcement and user-specific assignments.[23] For instance, in such hybrids, roles may be activated only if a user's MAC label dominates the required security level, preventing privilege escalation across sensitivity compartments while leveraging RBAC for operational separation.[23] This integration, as formalized by Sandhu in 2000, configures RBAC hierarchies to emulate both mandatory and discretionary constraints without compromising the non-discretionary nature of MAC.[23] Formal aspects of MAC extend to type enforcement and domain models, which partition subjects into constrained domains and objects into types, restricting transitions to prevent unauthorized flows. Type enforcement, originally proposed by Boebert and Kain in 1985 for the Secure Ada Target, uses a table to define allowable domain-type interactions, ensuring that processes execute within bounded privileges akin to MAC confinement.
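The domain-type table at the heart of type enforcement can be sketched as a simple lookup: every (domain, type) pair maps to the access modes it permits, and anything absent from the table is denied by default. The domain and type names below are invented for the example.

```python
# Illustrative domain-type enforcement table: (subject domain, object type)
# maps to the set of permitted access modes; absence means deny.
DOMAIN_TYPE_TABLE = {
    ("httpd_d", "web_content_t"): {"read"},
    ("httpd_d", "httpd_log_t"):   {"read", "write"},
    ("backup_d", "web_content_t"): {"read"},
}

def allowed(domain: str, obj_type: str, mode: str) -> bool:
    # Default-deny: only pairs explicitly listed in the table grant access.
    return mode in DOMAIN_TYPE_TABLE.get((domain, obj_type), set())

print(allowed("httpd_d", "web_content_t", "read"))   # True
print(allowed("httpd_d", "web_content_t", "write"))  # False: not listed
print(allowed("backup_d", "httpd_log_t", "read"))    # False: no entry
```

The default-deny lookup is what confines each process to the privileges its domain was granted, regardless of the user who started it.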
Early domain models, such as those in Xerox's Grapevine system from 1982, applied domain-based access control in distributed settings by associating resources with organizational domains and enforcing authentication-linked permissions to isolate inter-domain communications.[24]
Historical Development
Origins in Multilevel Security
The origins of mandatory access control (MAC) trace back to the Cold War era, when the U.S. military faced urgent needs to safeguard classified information in increasingly shared computing environments. As time-sharing systems emerged in the late 1950s and 1960s, the risk of unauthorized data leakage between users with varying clearance levels became a critical concern, particularly for handling sensitive defense data across multiple security classifications.[25] Early efforts at organizations like RAND Corporation and System Development Corporation (SDC) explored multilevel access mechanisms to prevent such leaks, laying groundwork for systems that could process data at different sensitivity levels without compromising security.[25] A pivotal moment came with the 1972 Anderson Report, formally titled the Computer Security Technology Planning Study, commissioned by the U.S. Air Force. Authored by James P. Anderson, this study highlighted the vulnerabilities of discretionary access controls—where users could freely share resources—in multiuser systems, noting risks such as Trojan horse attacks that could propagate classified information to unauthorized parties.[26] It recommended the development of multilevel secure (MLS) systems incorporating a reference monitor to enforce centralized, non-discretionary policies based on security labels, emphasizing the need for hardware-enforced protections to achieve reference validation.[26] Anderson's work was instrumental in formalizing these policy requirements, influencing subsequent U.S. Department of Defense (DoD) initiatives by articulating the technical and operational necessities for trusted computing in classified environments. In response to these recommendations, early prototypes emerged in the 1960s and 1970s to demonstrate feasible MLS implementations. 
Honeywell developed the Secure Communications Processor (SCOMP) in the mid-1970s, a minicomputer-based system enhanced with a hardware Security Protection Module to enforce multilevel security through segmented memory and strict access partitioning, serving as a front-end processor for secure networks. Similarly, MITRE Corporation advanced secure system prototypes during this period, including modeling and evaluation efforts for time-sharing systems like Multics, which incorporated early ring-based protections and access controls to handle multilevel data in a shared environment.[27] These prototypes validated the practicality of MAC principles, such as label-based enforcement, in real hardware-software configurations.[28] The culmination of these foundational efforts appeared in the Trusted Computer System Evaluation Criteria (TCSEC), commonly known as the Orange Book, initially published by the DoD in 1983 and formalized as a standard in 1985. This standard defined rigorous requirements for MAC in MLS systems, mandating discretionary and mandatory controls for higher evaluation classes: B2 (structured protection, requiring formal security policy models and verified designs) and B3 (security domains, with audited MAC enforcement to prevent unauthorized flows).[11] By specifying MAC as essential for confidentiality in partitioned environments, TCSEC provided a benchmark that directly built on the Anderson Report's vision and prototype experiences, guiding the certification of secure systems for government use.[11]
Evolution and Standards
Following the foundational work in multilevel security during the Cold War era, mandatory access control (MAC) evolved through standardized evaluation frameworks in the late 20th century. The U.S. Department of Defense's Trusted Computer System Evaluation Criteria (TCSEC, or "Orange Book"), initially published in 1983 and formalized as a standard in 1985, established benchmarks for secure systems, emphasizing MAC for confidentiality in multilevel environments. It was eventually supplanted by the international Common Criteria (ISO/IEC 15408), developed during the 1990s and published as an international standard in 1999, which introduced a modular approach to security functional requirements, including MAC via components like FDP_IFF (information flow control) and support for labeled security policies in protection profiles such as the Labeled Security Protection Profile (LSPP). Systems incorporating MAC are often evaluated at Evaluation Assurance Level 4 (EAL4) or higher under Common Criteria, providing evidence of robust design and testing for commercial and government applications.[11][29][30] Key milestones in the 1980s advanced MAC implementation beyond theoretical models. The Defense Advanced Research Projects Agency (DARPA) funded secure Unix initiatives, such as the Kernelized Secure Operating System (KSOS) project starting in 1980, which aimed to retrofit MAC into Unix-like systems for trusted computing while maintaining portability. Similarly, Gemini Computers developed the Gemini Multiprocessing Secure Operating System (GEMSOS) in the mid-1980s, a kernel-based design that achieved the TCSEC's highest A1 rating by 1989, enabling multilevel secure multiprocessing with object reuse and audit mechanisms for real-time applications like network gateways. These efforts demonstrated MAC's feasibility in hardware-supported, high-assurance environments.[31] The 1990s and 2000s saw MAC transition from military silos to commercial operating systems, driven by government investment in open-source security.
In 1999, the National Security Agency (NSA) launched the Security-Enhanced Linux (SELinux) project in collaboration with Secure Computing Corporation, allocating initial funding to prototype type enforcement—a flexible MAC mechanism—on the Linux kernel to support diverse policies without kernel modifications. This culminated in the first public release in December 2000 under the GNU General Public License, facilitating integration into enterprise distributions and broadening MAC adoption beyond classified systems.[5][32] International standards bodies like NIST and ISO have since extended MAC principles to modern infrastructures, particularly cloud and virtualization. NIST's Special Publication 800-210 (2020) outlines access control guidance for cloud service models, recommending MAC for protecting hypervisors in Infrastructure as a Service (IaaS) against privilege escalation and ensuring isolation in virtualized environments through label-based flow control. Complementing this, ISO/IEC 27017 (2015) provides cloud-specific controls under the ISO 27001 framework, incorporating MAC-like mandatory policies for data classification and segregation in virtual machines to address shared resource risks. These standards emphasize interoperability and assurance for global deployments.[33] From 2020 to 2025, MAC has integrated with containerization and zero-trust architectures to address distributed threats. In container ecosystems like Kubernetes, MAC extensions such as SELinux enable pod-level labeling and policy enforcement, preventing lateral movement in microservices while supporting dynamic scaling. NIST's SP 800-207 (2020) on zero-trust architecture advocates MAC-inspired policy decision points for continuous authentication and attribute-based enforcement, applied in hybrid clouds to verify access regardless of location, with adoption surging post-2022 supply chain incidents to mitigate insider and runtime risks.[34]
Implementations in Operating Systems
Linux and Unix-like Systems
In Unix-like systems, mandatory access control (MAC) has evolved from early implementations in secure variants of Unix to integrated kernel modules in modern Linux distributions. One pioneering example is Trusted Solaris, developed by Sun Microsystems in the 1990s, which introduced compartment-based labeling to enforce multilevel security (MLS) policies alongside traditional discretionary access controls.[35] This system assigned security labels to processes and objects, restricting access based on hierarchical levels and non-hierarchical compartments to prevent information leakage in classified environments.[36] SELinux (Security-Enhanced Linux), developed by the National Security Agency (NSA) and released to the open-source community in December 2000, represents a foundational MAC implementation for Linux.[5] It leverages the Linux Security Modules (LSM) framework to provide fine-grained control through Type Enforcement (TE), Role-Based Access Control (RBAC), and MLS components.[37] In SELinux, security contexts are assigned to subjects (processes) and objects (files, sockets) in the format user:role:type:level, where the type defines domain-specific permissions, roles limit transitions, and levels enforce MLS rules influenced by the Bell-LaPadula model to prevent unauthorized information flow from higher to lower security classifications.[38] Policies are written in a domain-specific language and compiled into binary modules loaded by the kernel, enabling enforcement modes such as enforcing, permissive (logging violations without blocking), and disabled.[39]
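The user:role:type:level context format can be illustrated with a small parser. This is a hedged sketch, not part of any SELinux tooling; note that the level field of an MLS context may itself contain colons (e.g., a range with categories), so only the first three separators are split.

```python
from typing import NamedTuple

class Context(NamedTuple):
    user: str
    role: str
    type: str
    level: str

def parse_context(ctx: str) -> Context:
    # Split on the first three ':' only; the MLS level/range that follows
    # may itself contain ':' (e.g., "s0-s0:c0.c15").
    user, role, type_, level = ctx.split(":", 3)
    return Context(user, role, type_, level)

ctx = parse_context("system_u:object_r:httpd_sys_content_t:s0")
print(ctx.type)   # httpd_sys_content_t
print(ctx.level)  # s0
```

Type Enforcement decisions would then be keyed on the type field of the subject's and object's contexts, with the level field consulted for the MLS checks described earlier.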
AppArmor, initially developed under the name SubDomain by Immunix in the late 1990s and later supported by Canonical since 2009, offers a path-based MAC alternative that is simpler to configure than SELinux.[40] Integrated into the Linux kernel via LSM since version 2.6.36, it confines applications using per-program profiles that specify allowed access to file paths, network capabilities, and system calls based on execution context.[41] Profiles operate in enforce or complain modes, with the former blocking violations and the latter logging them for tuning, making AppArmor suitable for desktop and server environments where administrative overhead must be minimized.[42]
Smack (Simplified Mandatory Access Control Kernel), introduced in the Linux kernel version 2.6.25 in 2008, is designed for resource-constrained embedded systems and prioritizes simplicity over complexity.[43] It assigns human-readable labels to tasks (processes) and inodes (file system objects), enforcing access decisions via a single rule set that supports read/write/execute permissions and MLS hierarchies without the extensive policy infrastructure of SELinux.[44] Smack's lightweight design avoids performance overheads associated with attribute-based systems, making it ideal for IoT and automotive applications where boot-time labeling and runtime checks must be efficient.[45]
TOMOYO Linux, launched in March 2003 and sponsored by NTT Data Corporation, is another LSM-based MAC implementation focusing on process behavior analysis and restriction.[46] It uses path-name-based policies to define security domains for processes, allowing learning modes to generate policies from observed executions and enforcing them via domain transitions. TOMOYO supports exception handling for legitimate violations and is designed for ease of policy management in enterprise environments, remaining actively maintained as of 2025.
Landlock, introduced in Linux kernel 5.13 in 2021, provides unprivileged MAC for sandboxing file system access.[47] As a stackable LSM, it allows non-root processes to restrict their own access to files and directories through hierarchical rules, preventing escalation while complementing system-wide policies like DAC and other LSMs. Landlock is particularly useful for application-level confinement, with ongoing enhancements including support for "weird files" in kernel 6.14 (2025).
Configuration and monitoring of these MAC systems in Linux typically involve dedicated tools and logging mechanisms. For SELinux, the setsebool command toggles boolean policy options to enable or disable specific rules without reloading full policies, such as allowing HTTPD to connect to databases. Enforcement events, including denials and audits, are recorded in /var/log/audit/audit.log by the Linux Audit daemon (auditd), which can be queried using tools like ausearch or sealert for troubleshooting and policy refinement.[48] AppArmor profiles are managed via aa-genprof for automatic generation from learning mode traces and aa-logprof for interactive log-based updates, with logs directed to /var/log/syslog or dedicated files.[40] Smack configuration relies on kernel boot parameters like smack.default_level and user-space tools such as chsmack for labeling files, with access denials logged through the kernel's printk mechanism or audit subsystem.[43]