
Swiss cheese model

The Swiss cheese model is a metaphor used in risk analysis and safety management, developed by psychologist James Reason in 1990, that illustrates how accidents occur in complex socio-technical systems through the failure of multiple layers of defense, represented as slices of Swiss cheese, each containing randomly positioned holes symbolizing weaknesses or potential failure points; a hazard penetrates the system only when these holes temporarily align across all layers, allowing an error trajectory to pass unimpeded. Central to the model is the distinction between active failures—immediate, unsafe acts committed by individuals at the "sharp end" of operations, such as pilots or surgeons—and latent failures, which are dormant weaknesses embedded in the system due to earlier decisions by designers, managers, or regulators, such as inadequate protocols or flawed design. These latent conditions create the "holes" in the defensive layers, which vary in size and position over time, emphasizing that no single failure is sufficient to cause an accident; rather, an accident requires a rare conjunction of contributing factors.

Originally articulated in Reason's seminal paper on latent human failures in complex systems, the model shifted focus from blaming individual errors to examining organizational and systemic vulnerabilities, influencing safety investigations by promoting proactive barrier strengthening rather than reactive fault-finding. In aviation, it informs incident analyses by mapping aligned failures across procedural, technical, and organizational defenses to prevent recurrence. The framework has broad applications beyond aviation, including healthcare, where it informs root cause analyses for incidents by identifying how latent risks—such as understaffing or poor communication protocols—combine with active errors like misadministration to breach safeguards. It has also been applied to public health, notably in layered prevention strategies during the COVID-19 pandemic. In the nuclear and chemical industries, it guides risk assessments for high-hazard environments, underscoring the need for redundant, diverse defenses to minimize alignment probabilities. Despite its enduring influence, critics note limitations in addressing non-linear interactions or adaptive human behaviors, prompting refinements such as the 2006 study "Revisiting the Swiss Cheese Model of Accidents" to incorporate trajectory variability.

Overview and History

Definition and Purpose

The Swiss cheese model is a metaphorical framework in risk analysis and safety management that depicts complex socio-technical systems as a stack of Swiss cheese slices, each representing a sequential layer of defense, barrier, or safeguard against potential hazards. The irregular holes in these slices symbolize weaknesses, such as human errors, organizational flaws, or technical vulnerabilities, which vary in size and position across layers. In ordinary circumstances, the non-alignment of these holes prevents any adverse trajectory—such as an error or hazard—from propagating through the entire system, thereby maintaining safety.

The primary purpose of the model is to illustrate that accidents emerge not from isolated, single-point failures but from the uncommon linear alignment of multiple weaknesses across all defensive layers, opening a clear path for the hazard to reach its harmful outcome. This conceptualization shifts focus from individual blame in a "person approach" to a "system approach," emphasizing how active errors at the operational level interact with latent conditions embedded in organizational structures, procedures, and equipment. By promoting this holistic view, the model aids in accident prevention by underscoring the need for robust, multi-layered defenses in high-reliability industries.

A key benefit of the Swiss cheese model lies in its encouragement of proactive risk management, in which organizations systematically identify and address the latent conditions—such as flawed design, inadequate training, or resource gaps—that create or enlarge holes in the defenses, thereby reducing the likelihood of hazardous alignments. This approach fosters a culture of continuous improvement in safety protocols rather than reliance solely on reactive measures after incidents occur. Introduced in 1990, the model was developed by psychologist James Reason as a core element of human error theory and organizational accident analysis.

Development by James Reason

James Reason, a British psychologist born on May 1, 1938, served as a professor of psychology at the University of Manchester, where he focused on human error research, particularly during the 1980s and 1990s, with continued contributions into the 2000s. His studies emphasized the cognitive and organizational aspects of errors in complex systems, building on his earlier experimental work on motion sickness. Reason first introduced the Swiss cheese model in his 1990 book Human Error, using the analogy of Swiss cheese slices to represent layered defenses in safety systems, with holes symbolizing potential weaknesses that could align to allow failures. This framework drew on his development of the Generic Error-Modelling System (GEMS), also detailed in the same publication, which categorized error types according to skill-, rule-, and knowledge-based behaviors. The model evolved through Reason's subsequent work, particularly his 1997 book Managing the Risks of Organizational Accidents, which elaborated on its application to organizational risk management. Reason died on February 4, 2025. The model was subsequently adapted for practical safety audits and investigations, influencing standards such as those of the International Civil Aviation Organization (ICAO), whose Human Factors and Flight Safety Working Group adopted it as a conceptual framework in the early 1990s.

Core Components

Slices and Holes

The Swiss cheese model, developed by James Reason in 1990, employs the metaphor of stacked slices of Swiss cheese to depict the multilayered defenses inherent in complex sociotechnical systems. Each slice represents a sequential layer of protection, such as engineered safeguards, administrative procedures, training protocols, or supervisory oversight, which collectively form a robust barrier against hazards despite individual imperfections. These layers are designed to provide defense in depth, ensuring that no single point of failure compromises the entire system.

The holes within each slice symbolize specific weaknesses or gaps in those defensive layers, manifesting as procedural oversights, equipment malfunctions, design flaws, or instances of human error. These vulnerabilities differ in size, shape, and position across slices, reflecting the unique characteristics and limitations of each protective measure—for example, a small hole might indicate a minor procedural deficiency, while a larger one could represent a systemic equipment issue. Under normal conditions, the irregular placement of holes prevents any straight path through the stack, maintaining overall system integrity.

Unlike fixed barriers, the holes in the model are inherently dynamic, capable of appearing, enlarging, or shifting due to factors such as organizational adaptations, gradual degradation, or evolving operational demands. This variability underscores the need for ongoing monitoring and reinforcement of defenses to address emerging gaps. Holes may also arise from active errors at the operational level or from latent conditions embedded within the system, further emphasizing the model's focus on systemic resilience.
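The metaphor maps naturally onto a simple data structure: a stack of layers, each with gaps at particular positions, that a hazard trajectory either threads or is stopped by. The following minimal Python sketch is purely illustrative (the layer names and hole positions are invented assumptions, not part of Reason's formulation); it shows how non-aligned holes block a trajectory even though every individual slice is imperfect.

```python
from dataclasses import dataclass, field

@dataclass
class Slice:
    """One defensive layer; holes are intervals on a unit-width barrier."""
    name: str
    holes: list = field(default_factory=list)  # list of (start, end) gaps

    def blocks(self, position: float) -> bool:
        """True if the barrier is intact at the given trajectory position."""
        return not any(start <= position <= end for start, end in self.holes)

# Hypothetical defenses: every slice has a weakness somewhere.
defenses = [
    Slice("engineered safeguards", holes=[(0.10, 0.15)]),
    Slice("administrative procedures", holes=[(0.40, 0.55)]),
    Slice("training", holes=[(0.12, 0.18)]),
    Slice("supervisory oversight", holes=[(0.70, 0.72)]),
]

# A hazard penetrates only where every slice has a hole at the same position.
trajectory = 0.13
print(all(not s.blocks(trajectory) for s in defenses))
# False: the procedures and oversight slices are intact at 0.13
```

Because the model stresses that holes open, close, and shift over time, any such representation would need the hole lists to vary between evaluations rather than stay fixed.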

Active and Latent Failures

Active failures, also referred to as active errors or violations, are unsafe acts committed directly by individuals at the frontline of operations, often termed the "sharp end" of the system. These include slips, lapses, mistakes, or deliberate rule violations by operators such as pilots, surgeons, or maintenance technicians; they occur immediately and are typically short-lived and detectable upon investigation. For instance, a pilot misreading an altitude instrument during a critical phase of flight exemplifies an active failure, as it stems from an individual's action in direct contact with the process.

Latent failures, conversely, encompass hidden or dormant deficiencies embedded within the organizational system, originating from upstream decisions by designers, managers, regulators, or policymakers. These conditions arise from factors such as flawed system design, insufficient training programs, inadequate supervision, or resource constraints like understaffing that foster fatigue among personnel. Unlike active failures, latent conditions are insidious, persisting over time and potentially remaining undetected until they combine with other elements to precipitate an incident; an example is organizational understaffing at an airline that leads to chronic pilot exhaustion, eroding overall vigilance.

Within the Swiss cheese model, active failures manifest as transient holes in the innermost defensive layer, directly impacting immediate operational safeguards, whereas latent failures generate or exacerbate persistent holes across multiple outer layers, weakening the system's overall resilience. This distinction highlights how accidents rarely result from isolated frontline errors but from the alignment of these failure types. Reason's analyses of accidents in high-reliability sectors such as aviation and nuclear power highlight the significant contribution of latent conditions to many incidents, emphasizing the need to target systemic vulnerabilities for effective risk mitigation.

Operational Mechanism

Alignment of Defenses

In the Swiss cheese model, alignment of defenses refers to the rare circumstance in which the holes—representing weaknesses or potential failure points—in multiple successive layers of protection temporarily coincide, permitting a hazard or error trajectory to traverse the entire system. This alignment creates a clear pathway akin to sighting through aligned apertures in stacked slices of Swiss cheese, where each slice symbolizes a defensive barrier such as procedures, equipment safeguards, or supervisory oversight. James Reason introduced this dynamic in his framework to illustrate how isolated vulnerabilities become consequential only when they synchronize across layers.

Several factors can precipitate this alignment by dynamically repositioning or enlarging the holes within defensive slices. Triggering events, including acute stress on operators, unforeseen operational conditions, or sequences of interdependent errors, may cause latent weaknesses to shift into alignment with active lapses, thereby opening a conduit for hazards. Active failures at the operational forefront, influenced by immediate circumstances, interact with pre-existing latent conditions to facilitate such breaches.

To counteract alignment, preventive strategies focus on fortifying individual slices and promoting persistent misalignment of holes. Approaches include bolstering redundancy through duplicate safeguards, incorporating error-proofing mechanisms like fail-safes in equipment design, and enforcing procedural tools such as checklists to shrink potential alignment windows and add resilient barriers. These measures collectively reduce the opportunities for coincidental overlaps.

Although the random distribution of holes across independent slices renders full alignment statistically improbable under normal conditions, systemic pressures can elevate this risk by systematically widening holes or inducing correlations between layers. Examples of such pressures include chronic under-resourcing, inadequate training protocols, or deferred maintenance, which erode defensive integrity and heighten vulnerability to breach.
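The statistical intuition can be made concrete with a worked sketch. Under the simplifying assumption that each layer is breached independently with a fixed probability (its "hole size"), the chance of full penetration is the product of the per-layer probabilities; a common-mode pressure that degrades all layers at once breaks that independence and can dominate the risk. The Monte Carlo estimate below is purely illustrative: the probabilities are invented, and Reason's model itself is qualitative rather than quantitative.

```python
import random

def breach_probability(p_layers, common_mode=0.0, trials=200_000):
    """Estimate the chance that a hazard defeats every defensive layer.

    p_layers: assumed independent per-layer breach probabilities.
    common_mode: probability that a shared systemic pressure (e.g.
    chronic under-resourcing) opens correlated holes in all layers.
    """
    accidents = 0
    for _ in range(trials):
        if random.random() < common_mode:
            accidents += 1  # correlated failure defeats all layers at once
        elif all(random.random() < p for p in p_layers):
            accidents += 1  # rare independent alignment of every hole
    return accidents / trials

layers = [0.10, 0.05, 0.02]  # three hypothetical hole sizes
print(breach_probability(layers))                    # ~0.0001 (0.10 * 0.05 * 0.02)
print(breach_probability(layers, common_mode=0.01))  # ~0.01: correlation dominates
```

The two printed estimates illustrate the point in the preceding paragraph: independent layers multiply small probabilities into a very small one, while even a modest common-mode term can raise the overall risk by two orders of magnitude.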

Path to Accident

In the Swiss cheese model, the path to an accident commences with the initiation of a hazard, which may arise from external threats or internal errors entering the system at its periphery. This hazard represents a potential accident trajectory that seeks to traverse the stacked defensive layers, starting with the outermost slice. If the trajectory aligns with a hole in this initial layer—stemming from either active failures at the operational level or latent conditions embedded in organizational processes—it penetrates and advances toward the next defense.

The hazard then propagates through successive slices, requiring precise alignment of holes across multiple layers to avoid being halted by intact cheese. Each layer functions as a barrier with variable weaknesses, and the hazard's progression depends on the dynamic positioning of these holes, which can shift as system conditions evolve. This step-by-step breaching continues, potentially reaching the "sharp end" of the system—where frontline operations interface with critical assets, people, or outcomes—only if the trajectory remains unobstructed throughout.

Full penetration of all defensive layers culminates in an accident, manifesting as tangible loss, harm, or system failure at the sharp end. Conversely, misalignment at any single layer can intercept and mitigate the hazard, reducing the incident's severity or preventing it entirely by redirecting or containing the threat. Central to the model is the understanding that such events constitute "organizational accidents," arising not from isolated errors but from the rare convergence of multiple aligned weaknesses across defenses, underscoring the need for robust, multi-layered protections to disrupt potential accident paths.
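This layer-by-layer traversal can be stated as a short procedure: test each defense in order from the periphery toward the sharp end, and stop at the first intact barrier. The Python sketch below is a hypothetical illustration (the layer names and hole states are invented); it returns either the defense that contained the hazard or None for a full breach.

```python
def trace_hazard(layers):
    """Walk a hazard through successive defenses, outermost first.

    layers: list of (name, has_hole) pairs, where has_hole is True when
    the hazard's trajectory coincides with a gap in that defense.
    Returns the name of the layer that intercepted the hazard, or None
    if every layer was breached (an organizational accident).
    """
    for name, has_hole in layers:
        if not has_hole:
            return name  # misalignment here contains the hazard
    return None  # full penetration: the trajectory reached the sharp end

# Hypothetical trajectory: three holes align, but one barrier holds.
defenses = [
    ("regulatory oversight", True),
    ("maintenance procedures", True),
    ("crew checklist", False),
    ("pilot monitoring", True),
]
print(trace_hazard(defenses))  # "crew checklist" stops this trajectory
```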

Applications and Extensions

Healthcare and Patient Safety

The Swiss cheese model gained significant traction in healthcare following the 1999 Institute of Medicine report To Err Is Human: Building a Safer Health System, which popularized a systems-based approach to analyzing medical errors and advocated for safer practices beyond individual blame. The report highlighted how errors often result from organizational and environmental factors rather than solely frontline actions, aligning with the model's depiction of multiple defensive layers whose weaknesses can align to permit harm. This adoption shifted efforts toward identifying latent conditions, such as inadequate protocols or resource shortages, that create vulnerabilities across system defenses.

In healthcare contexts, the model illustrates patient safety incidents such as surgical errors, where active failures—such as miscommunication between team members during handoffs—combine with latent issues like chronic understaffing to bypass safeguards. For instance, in wrong-site surgery cases, holes in the slices representing protocols (e.g., site verification checklists), alarms (e.g., electronic alerts), and supervision (e.g., senior oversight) may align if a procedure rushed because of staffing shortages leads to overlooked confirmations. These examples underscore how the model's layers, adapted to clinical environments, reveal pathways to adverse events when defenses fail collectively.

The model's influence prompted the development of tools like the World Health Organization's Surgical Safety Checklist, introduced in 2008, which adds redundant defensive layers to misalign potential holes and enhance team communication. By standardizing pre-, intra-, and postoperative verifications, the checklist addresses both active errors (e.g., procedural oversights) and latent ones (e.g., poor coordination), thereby strengthening the overall system. Implementation of such interventions has yielded measurable outcomes, with studies showing reductions in adverse events of 30-50% in surgical settings, including lowered complication rates and mortality. These improvements emphasize fostering a just culture in healthcare, in which errors are reported without punitive repercussions to encourage learning and hole-plugging, rather than blaming individuals. This approach has reinforced the model's role in promoting proactive safety enhancements across medical institutions.

Aviation and Other Industries

The Swiss cheese model has been prominently applied in aviation to analyze major accidents, such as the 1977 Tenerife airport disaster, in which a runway collision between two aircraft resulted in 583 fatalities through the alignment of multiple failures, including miscommunications, airport congestion from a bomb threat diversion, and inadequate adherence to takeoff procedures. In this incident, latent conditions like organizational pressures and active errors such as misunderstood radio transmissions created a trajectory through the defensive layers, as illustrated by James Reason's framework. The model's emphasis on layered defenses influenced the development of the Human Factors Analysis and Classification System (HFACS), which extends Reason's concepts to systematically identify causal factors in aviation mishaps.

Regulatory bodies have integrated the Swiss cheese model into safety protocols, particularly through crew resource management (CRM) training. The Federal Aviation Administration (FAA) incorporates the model in its human factors guidelines for maintenance and operations, using it to train personnel on recognizing how holes in procedural, supervisory, and organizational slices can align to cause errors. Similarly, the International Civil Aviation Organization (ICAO) endorses CRM programs that draw on the model, with guidelines updated in the 2000s to emphasize non-technical skills like communication and decision-making in high-risk environments. These adaptations, formalized in FAA Advisory Circular 120-51D (2001), have contributed to a decline in accident rates by promoting proactive identification of latent failures.

Beyond aviation, the model has been adapted in the nuclear industry to dissect events like the 1986 Chernobyl disaster, where latent design flaws in the reactor, combined with procedural violations during a safety test and inadequate emergency responses, allowed an explosion and fire to breach multiple safety barriers. Reason applied his model to incidents like Chernobyl, where organizational and regulatory weaknesses—such as insufficient training and suppressed safety reporting—aligned with active operator errors to escalate the catastrophe. In the process industries, particularly chemical processing, the model informs risk assessments for incidents involving procedural gaps, as seen in analyses of plant leaks where failures in equipment maintenance, operator oversight, and emergency protocols create aligned vulnerabilities. The American Institute of Chemical Engineers (AIChE) promotes its use in process safety to evaluate layered protections against hazards like toxic releases.

The Swiss cheese model has also extended to cybersecurity, where it conceptualizes layered defenses—such as firewalls, intrusion detection, and user training—as cheese slices with inherent weaknesses whose holes must remain misaligned to prevent breaches. In this domain, HFACS adaptations apply the model to human error in security operations, identifying how latent issues like outdated policies align with active mistakes, such as phishing susceptibility, to enable attacks. Industry standards from organizations like the International Society of Automation incorporate it for industrial control systems, emphasizing multiple barriers to mitigate cyber-physical risks in critical infrastructure.

During the COVID-19 pandemic (2020–2023), the model was adapted to public health communication to illustrate multilayered defenses against virus transmission, including vaccination, masking, physical distancing, and ventilation, emphasizing that no single measure is foolproof but that combinations reduce risk.

Criticisms and Limitations

Critics have argued that the Swiss cheese model oversimplifies accident causation by portraying failures as linear sequences of aligned weaknesses, thereby neglecting the nonlinear dynamics and feedback loops inherent in socio-technical systems. This sequential perspective, as highlighted by Erik Hollnagel in his development of the Functional Resonance Analysis Method (FRAM), treats accidents as epidemiological outcomes of failure combinations rather than emergent results of performance variability and functional resonances. Hollnagel (2012) critiques the model for ignoring adaptive behaviors and interactions that can amplify or dampen risks in real time, proposing FRAM as an alternative that models how system functions resonate to produce both successes and failures. Similarly, Sidney Dekker (2006) and Nancy Leveson (2012) describe it as inadequately capturing systemic interactions, in which holes in defenses do not remain static but evolve through ongoing processes.

A key limitation of the model lies in its focus on linear failure paths, which underemphasizes system resilience and the capacity of adaptive performance to prevent incidents. It treats defenses as independent layers, yet in practice these are interdependent, with alignments influenced by dynamic factors like operational pressures and environmental changes, making empirical quantification of "hole sizes" or alignment probabilities challenging. The model's weakly predictive nature further restricts its utility, as it offers qualitative insights into potential trajectories but struggles to specify timing, location, or likelihood without additional tools. This has led to calls for supplementation with methods like the Cognitive Reliability and Error Analysis Method (CREAM) to better account for human and organizational variability.

In response to these critiques, the Swiss cheese model has been integrated with frameworks such as the Human Factors Analysis and Classification System (HFACS), which expands its layers into detailed categories of unsafe acts, preconditions, supervision, and organizational influences, enhancing its applicability in accident investigation. Since 2010, it has also evolved within broader systems-theoretic approaches, such as STAMP (Systems-Theoretic Accident Model and Processes), to incorporate nonlinear constraints and feedback, allowing more robust analysis of complex interactions. These adaptations address some oversimplifications while preserving the model's value as a foundational metaphor for multilayered defenses.

The model also exhibits gaps in addressing rare, black-swan events, where unpredictable, high-impact incidents arise from non-linear confluences beyond simple hole alignments, as these defy the probabilistic assumptions of layered defenses. It is likewise less effective for highly automated systems, where failures stem from rigid programming and emergent software interactions rather than human latent conditions, requiring complementary models that emphasize systemic control structures.

References

  1. Reason, J. "The contribution of latent human failures to the breakdown of complex systems."
  2. "James Reason HF Model." SKYbrary Aviation Safety.
  3. "Understanding the 'Swiss Cheese Model' and Its Application to ..."
  4. "Reasons Swiss Cheese Model" (PDF). NHS Wales.
  5. "Good and bad reasons: The Swiss cheese model and its critics."
  6. "The Swiss cheese model of safety incidents: are there holes in the ..." (Nov 9, 2005).
  7. "A comprehensive review of the Swiss cheese model in risk ..." (PDF).
  8. "James Reason, Who Used Swiss Cheese to Explain Human Error ..." (Mar 13, 2025).
  9. "Professor Jim Reason FBA." The British Academy.
  10. "Swiss Cheese Model." The Decision Lab.
  11. "Human Error and Defense in Depth: From the 'Clambake' to the ..." (Aug 16, 2017).
  12. "Revisiting the 'Swiss Cheese' Model of Accidents" (PDF). EUROCONTROL.
  13. "The Swiss cheese model of safety incidents: are there holes in the ..."
  14. "Systems Approach." PSNet, Patient Safety Network, AHRQ.
  15. "Human error: models and management." PMC, PubMed Central, NIH.
  16. "Human error: models and management." The BMJ (Mar 18, 2000).
  17. "Models of Causation: Safety" (PDF). The OHS Body of Knowledge.
  18. "NASA Accident Precursor Analysis Handbook" (PDF).
  19. "To Err is Human." NCBI Bookshelf, NIH.
  20. "The impact of 'To Err Is Human' on patient safety in ..." Frontiers.
  21. "Swiss Cheese Model." PSNet, Patient Safety Network, AHRQ.
  22. "The Swiss Cheese Model of patient safety errors." Spok Inc. (Oct 29, 2020).
  23. "Safe surgery: Tool and Resources." World Health Organization (WHO).
  24. "Surgical Safety Checklist to Reduce Morbidity and Mortality."
  25. "Checklist helps reduce surgical complications, deaths" (Dec 11, 2010).
  26. "The Impact Of The Authority Gradient Created By Rank Imbalance ..." (PDF, Apr 28, 2023).
  27. "Human Factors Analysis and Classification System (HFACS)."
  28. "Human Factors Guide for Aviation Maintenance and Inspection" (PDF).
  29. "Military, Civil and International Regulations To Decrease Human ..." (Feb 20, 2022).
  30. "AC 120-51D - Crew Resource Management Training" (PDF, Feb 8, 2001).
  31. "Plug the Holes in the Swiss Cheese Model." AIChE.
  32. "Reducing human error in cyber security using the Human Factors ..." (PDF).
  33. "Excerpt #5: Industrial Cybersecurity Case Studies and Best Practices."
  34. "Systems thinking, the Swiss Cheese Model and accident analysis."
  35. "A Brief and Unofficial History of the FRAM" (Aug 19, 2025).
  36. "Systems thinking, the Swiss Cheese Model and accident analysis" (PDF).
  37. "Safety, black swans and SIFs" (PDF). BazEkon.