
Human reliability

Human reliability refers to the probability that a human performer will successfully complete a specified task without committing an error that could lead to system failure or an accident. It is fundamentally the inverse of human error probability and is influenced by factors such as task complexity, environmental conditions, training, and performance shaping factors (PSFs) like stress or time pressure. Human reliability analysis (HRA) is a structured discipline that integrates engineering and behavioral science to identify, model, and quantify human contributions to overall system risk, particularly in probabilistic risk assessments (PRA). HRA aims to predict the likelihood of human failure events (HFEs) and recommend mitigation strategies, such as improved training, interface design, or procedural barriers, to enhance safety and reliability. By combining qualitative taxonomies with quantitative probability estimates, often derived from operational data, laboratory studies, or expert judgment, HRA evaluates how human actions interact with hardware and software in socio-technical systems.

The field originated in the 1950s at Sandia National Laboratories, initially applied to aircraft and nuclear weapons systems, with the first human reliability data bank established in 1962 by the American Institute for Research. Key developments in the 1970s and 1980s included the Technique for Human Error Rate Prediction (THERP), whose definitive handbook followed the Three Mile Island accident and emphasized PSFs, and subsequent advancements such as the Standardized Plant Analysis Risk Human Reliability Analysis (SPAR-H) method in the 1990s and beyond. These evolutions reflect a shift from simplistic error probabilities to dynamic models accounting for context and cognition, as seen in post-Chernobyl analyses highlighting human factors in major incidents.

HRA methods vary by approach: first-generation techniques like THERP and the Human Error Assessment and Reduction Technique (HEART) focus on task decomposition and generic error rates, while second- and third-generation methods, such as A Technique for Human Event Analysis (ATHEANA) and MERMOS, incorporate contextual dependencies and recovery opportunities for more nuanced predictions. Data sources include incident reports, simulator exercises, and databases like the Nuclear Regulatory Commission's HRA database, enabling estimates in which human error contributes to approximately 65-90% of incidents across high-hazard sectors such as aviation, chemical processing, and nuclear power.

Applications of human reliability span safety-critical industries, including nuclear power for licensing and design optimization, chemical and offshore oil and gas for hazard mitigation, aviation for crew performance evaluation, healthcare for procedural error reduction, and manufacturing for operator-task reliability assessments. In the railway and maritime sectors, HRA supports accident investigations and safety management. Emerging applications in cybersecurity and autonomous systems address human factors in cyber-physical interactions and autonomous vehicle reliability. Recent advancements as of 2025 include machine learning-enhanced predictive models and dynamic HRA for digital and advanced reactor environments, further integrating with evolving technologies. Overall, HRA promotes resilient systems by prioritizing error prevention and continuous performance improvement.

Fundamentals

Definition and Principles

Human reliability is defined as the probability that a human performs a specified task correctly within a given time under stated conditions, without committing errors that could lead to system failure. This concept is often quantified as a reliability measure R = 1 - P_e, where P_e represents the human error probability (HEP), emphasizing the probabilistic nature of human performance in complex systems. Key principles of human reliability distinguish between human error, which involves unintentional deviations from intended actions, and human failure more broadly, which may include intentional non-compliance such as violations of procedures. In socio-technical systems, where humans interact with technology and organizational elements, human reliability assesses the contributions of these interactions to overall system performance and safety. The HEP is fundamentally a function of performance influencing factors (PIFs) and task demands, such as environmental stressors or procedural complexity, which modulate the likelihood of errors. Human reliability plays a critical role in enhancing system reliability within safety-critical domains, where human errors contribute to 70-90% of failures in industries like nuclear power and aviation. For instance, analyses of aviation accidents indicate that 60-80% involve human error to some degree, underscoring the need to integrate human reliability analysis into safety and risk assessments. The scope of human reliability encompasses both quantitative assessments, which estimate numerical probabilities of errors, and qualitative evaluations, which identify error modes and influencing conditions. It differs from human factors engineering, which primarily focuses on designing interfaces and environments to optimize human performance, whereas human reliability predicts error likelihoods in operational contexts. PIFs serve as key modulators of reliability, with deeper exploration in subsequent sections on performance factors.
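The inverse relationship between reliability and the HEP can be shown with a short worked example. The sketch below is purely illustrative: the task names and HEP values are hypothetical placeholders rather than figures from a published data table.

```python
# Minimal illustration of R = 1 - P_e for a few hypothetical tasks.
# The HEP values are illustrative placeholders, not published data.
nominal_heps = {
    "read_analog_gauge": 0.003,
    "diagnose_unfamiliar_fault": 0.1,
    "routine_valve_lineup": 0.001,
}

for task, hep in nominal_heps.items():
    reliability = 1.0 - hep  # R = 1 - P_e
    print(f"{task}: HEP = {hep:.4f}, reliability R = {reliability:.4f}")
```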

Historical Development

The roots of human reliability concepts emerged in the 1950s from advancements in behavioral sciences and engineering psychology, initially addressing human errors in military and aviation systems in the aftermath of World War II. Early analyses highlighted the limitations of treating humans solely as system components, drawing on reliability engineering to quantify error rates in man-machine interactions. This period laid the groundwork for systematic studies of human error in high-stakes environments, shifting focus from mechanical failures to behavioral contributors. A pivotal milestone occurred in 1962 when Alan Swain presented the Technique for Human Error Rate Prediction (THERP) at a symposium of the Human Factors Society, formalizing methods for predicting human error probabilities through task decomposition and performance shaping factors. The 1970s saw accelerated development, spurred by the 1975 Reactor Safety Study (WASH-1400), which pioneered the integration of human reliability analysis (HRA) into probabilistic risk assessment (PRA) for nuclear facilities by estimating error contributions to accident sequences. The 1979 Three Mile Island accident further catalyzed this integration, revealing how human factors could exacerbate system failures and prompting regulatory endorsements for enhanced PRA methodologies.

HRA evolved through distinct generations: first-generation techniques from the 1960s to the 1980s emphasized static error probabilities derived from empirical task data; second-generation approaches in the 1990s incorporated cognitive models to account for mental processes underlying errors; and third-generation methods from the 2000s emphasized dynamic contextual influences, performance recovery, and socio-technical systems. Major incidents like the 1986 Chernobyl accident exposed deficiencies in operator training and safety culture, driving advancements in the late 1980s and 1990s, while post-2000 developments, including responses to the 2011 Fukushima Daiichi accident, underscored human responses in extreme conditions. These events drove international standards, including IAEA guidelines for HRA in nuclear safety assessments that promote structured analysis of human and organizational factors. Concurrently, the field advanced toward data-driven HRA, leveraging empirical databases from operational events and simulations to refine error probability estimates beyond expert judgment. Recent developments as of 2025 include the integration of machine learning for predictive modeling of human error probabilities and specialized frameworks for advanced reactors with increased automation.

Performance Influencing Factors

Individual and Psychological Factors

Individual and psychological factors play a critical role in human reliability by influencing the cognitive and behavioral processes that determine task performance and error likelihood. Psychological elements, such as stress and fatigue, can impair attention, memory, and response accuracy, while individual attributes like experience, training, and personality modulate susceptibility to these effects. These factors are intrinsic to the person and distinct from external influences, often compounding to elevate human error probabilities (HEPs) in high-stakes environments like nuclear operations or aviation.

Stress, both acute and chronic, affects human reliability through its impact on arousal levels and cognitive function. The Yerkes-Dodson law posits an inverted U-shaped relationship between arousal (or stress) and performance, where moderate arousal enhances efficiency for simple tasks, but excessive arousal degrades performance on complex ones by overwhelming working memory and increasing error rates. In human reliability contexts, high stress can double or quintuple error rates, as seen in models adjusting HEPs for stressors like time pressure. For instance, acute stress from emergencies may lead to attentional narrowing, reducing situational awareness, while chronic stress erodes long-term resilience to errors.

Fatigue, influenced by circadian rhythms and sleep deprivation, similarly undermines reliability by diminishing vigilance and executive control. Circadian dips, particularly during night shifts, align with natural low points in alertness, exacerbating error propensity in prolonged operations. Total sleep deprivation of 24 hours or more can impair psychomotor vigilance by causing lapses comparable to those at a blood alcohol concentration of 0.10%, leading to slower reaction times and up to a threefold increase in errors on sustained tasks. Chronic partial sleep restriction compounds these effects, reducing overall cognitive throughput without full recovery during off-duty periods.

Motivation and attitude further shape reliability, with low motivation fostering complacency that manifests as skill-based slips or violations. Complacency arises from overfamiliarity with routines, prompting reduced vigilance and normalization of deviations, as observed in maintenance tasks where operators skip checks because of perceived low risk. Positive attitudes, conversely, bolster adherence to procedures, but demotivating factors like role ambiguity can inflate error rates in repetitive scenarios.

Among individual factors, experience and age exhibit a nonlinear influence on error rates. Novices exhibit peak error probabilities, often 5-10 times higher than experts, due to incomplete mental models and higher cognitive load during skill acquisition. Error rates decline sharply with 2-5 years of deliberate practice, stabilizing at low levels for mid-career professionals, but rise again in later years (post-60) from age-related declines in processing speed and working memory. Physical conditions, such as vision or hearing impairments, amplify these vulnerabilities; uncorrected visual deficits can elevate misperception errors in monitoring tasks by twofold, while hearing loss disrupts communication, contributing to coordination failures in team settings.

Personality traits, particularly risk-taking propensity, also affect reliability by predisposing individuals to unsafe decisions. High risk-takers, often characterized by elevated extraversion and low conscientiousness, are more prone to rule-based errors in ambiguous situations, with studies in high-hazard industries showing 1.5-2 times higher violation rates compared to cautious peers. These traits interact with situational demands, where impulsive personalities under time pressure may bypass safeguards, heightening overall system risk.

Quantification of these factors often involves HEP adjustments via performance shaping factors (PSFs) in human reliability analysis.
For stress, multipliers range from 1 (nominal) to 2 (high) or 5 (extreme), applied to base error probabilities to reflect intensified cognitive load. Fatigue under poor fitness for duty similarly yields multipliers of 2-3, while low experience or training can increase HEPs by factors of 3-10 depending on task type (action vs. diagnosis). These values derive from empirical data in probabilistic safety assessments and are intended to convey scale rather than exhaustive benchmarks. Interactions among factors often compound unreliability, such as stress exacerbating fatigue in extended operations, where elevated arousal amplifies sleep-deprived lapses, potentially multiplying error probabilities beyond additive effects. Organizational modulators, like training regimens, can mitigate these effects but remain secondary to intrinsic dynamics.
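One common way such multipliers are combined, broadly in the style of methods such as SPAR-H, is to scale a nominal HEP by the product of the applicable PSF factors and cap the result at 1.0. The sketch below is a simplified, hypothetical illustration of that arithmetic; the function name and task values are assumptions for the example, not part of any published tool, and SPAR-H itself applies an additional adjustment formula when several negative PSFs are present rather than the simple cap used here.

```python
# Simplified PSF-adjusted HEP calculation (hypothetical sketch, loosely SPAR-H-style).
def adjusted_hep(nominal_hep: float, psf_multipliers: list[float]) -> float:
    """Scale a nominal HEP by the product of PSF multipliers, capped at 1.0."""
    hep = nominal_hep
    for multiplier in psf_multipliers:
        hep *= multiplier
    return min(hep, 1.0)

# Illustrative case: a diagnosis task under extreme stress (x5), degraded
# fitness for duty (x3), and low experience/training for diagnosis (x10).
nominal = 0.01                      # assumed nominal diagnosis HEP
psfs = [5.0, 3.0, 10.0]             # stress, fatigue, experience multipliers
print(adjusted_hep(nominal, psfs))  # 0.01 * 150 = 1.5, capped to 1.0
```

The capped result in this example shows how compounded factors can drive the estimate toward near-certain failure, which is why HRA methods bound the adjusted probability.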

Organizational and Environmental Factors

Organizational factors play a critical role in shaping human performance reliability by influencing the broader context in which individuals operate. Safety culture, defined as the shared values and attitudes toward safety within an organization, significantly affects error reporting and learning from incidents. In a just culture, which balances accountability for at-risk choices with support for error reporting free from fear of unjust punishment, employees are more likely to disclose errors, enabling systemic improvements and reducing the recurrence of human failures. Conversely, a blame culture discourages open communication, leading to underreporting and persistent risks, as it attributes errors primarily to individual fault rather than systemic issues.

Training and competency programs are foundational organizational elements that directly affect human error probability (HEP). Inadequate training or lack of experience can substantially elevate error rates, with multipliers in human reliability analysis (HRA) methods indicating increases of up to 17 times for unfamiliar tasks due to insufficient preparation. For instance, in the HEART method, error-producing conditions related to inexperience or lack of training adjust baseline HEPs by factors ranging from 2 to 17, highlighting how organizational investment in ongoing development mitigates these risks. Similarly, staffing and workload levels influence reliability; understaffing often results in task overload, where high workload as a performance influencing factor (PIF) can multiply HEPs by up to 5 in HRA assessments, as operators face divided attention and fatigue from excessive demands.

Environmental factors encompass physical surroundings that can impair perceptual and cognitive processes, thereby degrading performance. Poor lighting conditions hinder visual tasks, increasing misreads and errors in information processing by contributing to PIFs that elevate HEPs through reduced signal detection. Extreme temperatures further compromise decision-making, with research showing that a 1°C rise in ambient heat can increase rational choice violations by approximately 1.1 percentage points, reflecting impaired cognitive function under thermal stress. System design, particularly human-machine interfaces (HMIs), also falls under environmental influences; poorly designed interfaces, such as ambiguous displays or non-intuitive controls, promote slips and lapses by failing to support error detection, and interface inadequacy is often quantified in HRA as a PIF with multipliers up to 5.

Team dynamics within organizations mediate reliability through interpersonal interactions. Communication breakdowns, exacerbated by hierarchical structures, can prevent timely error correction; for example, authority gradients, where subordinates hesitate to challenge superiors, act as a PIF that suppresses error reporting and heightens the risk of unaddressed errors. In HRA, such dynamics are modeled as organizational PIFs, adjusting HEPs to account for reduced team coordination.

Quantification of these factors in HRA relies on PIF weighting schemes to modify baseline HEPs. Organizational inefficiency, including suboptimal safety culture or resource allocation, is incorporated as a composite PIF in methods like SPAR-H, where it can adjust probabilities by factors of 1.25 to 10 depending on contextual severity. These schemes build on seminal approaches, such as those in HEART and CREAM, which integrate organizational and environmental elements to provide a holistic assessment of reliability influences.
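The HEART-style adjustment cited above can be sketched as a short calculation: each applicable error-producing condition contributes an assessed effect of ((maximum multiplier - 1) x APOA) + 1, and the final HEP is the generic task unreliability multiplied by those effects. The code below is a simplified illustration; the task and EPC values are assumed for the example rather than taken from the published HEART tables.

```python
# Simplified HEART-style HEP adjustment (illustrative values, not the official tables).
def heart_hep(generic_hep: float, epcs: list[tuple[float, float]]) -> float:
    """generic_hep: nominal unreliability of the generic task type.
    epcs: (max_multiplier, apoa) pairs for each applicable
    error-producing condition, with APOA in [0, 1]."""
    hep = generic_hep
    for max_multiplier, apoa in epcs:
        assessed_effect = (max_multiplier - 1.0) * apoa + 1.0
        hep *= assessed_effect
    return min(hep, 1.0)

# Example: a fairly routine task (assumed generic HEP 0.003) affected by
# operator inexperience (max x17, judged 40% applicable) and time shortage
# (max x11, judged 20% applicable).
print(heart_hep(0.003, [(17.0, 0.4), (11.0, 0.2)]))  # about 0.067
```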

Human Reliability Analysis Methods

First-Generation Techniques

First-generation techniques in human reliability analysis emerged in the 1960s and 1970s, primarily within probabilistic risk assessment (PRA) frameworks for the nuclear industry, focusing on decomposing tasks into basic elements to predict error probabilities using empirical data and expert judgment. These methods emphasized static, task-oriented models that quantified human error rates (HERs) through nominal probabilities adjusted by performance shaping factors (PSFs), such as stress or equipment design, but they often neglected dynamic cognitive processes.

The Technique for Human Error Rate Prediction (THERP), developed by Alan D. Swain beginning in the early 1960s and detailed in the 1983 NUREG/CR-1278 handbook, represents a foundational approach. It involves step-by-step task analysis, breaking procedures into sequences of actions and verifications, such as reading instruments or manipulating controls, to identify potential error modes like omissions or commissions. Human error probabilities (HEPs) are calculated by assigning nominal rates from a database (for instance, 0.003 for reading an analog meter under good conditions) and multiplying by PSF multipliers (e.g., a factor of 2-5 for high stress), while incorporating dependency modeling across tasks or team members via five levels from zero to complete dependence. Event trees integrate these to estimate overall task failure probabilities, with uncertainties captured using lognormal distributions and error factors (e.g., EF=10 for routine tasks).

In the 1980s, the Human Cognitive Reliability (HCR) method, introduced by G.W. Hannaman and colleagues in an EPRI report, extended predictions to time-critical cognitive tasks by correlating error likelihood with response time ratios. It classifies operator behaviors into skill-based (automatic, low error), rule-based (procedural, moderate error), and knowledge-based (diagnostic, high error) categories, drawing from Rasmussen's SRK framework. HEPs are derived from time-reliability curves, where the probability of failure decreases as available time increases relative to a nominal response time; for example, a skill-based action completed in twice the nominal time might yield an HEP near 0.001. PSFs like stress or training adjust these curves, making HCR suitable for time-dependent PRA scenarios in control rooms.

Other PRA-based techniques include the Human Error Assessment and Reduction Technique (HEART), proposed by J.C. Williams in 1986, which simplifies analysis for broader applications. HEART assigns each task to one of a small set of generic task types (e.g., a totally unfamiliar task with a nominal HEP of 0.55 or a routine, highly practiced task at 0.0004) and identifies error modes such as omission or commission, then applies up to 38 error-producing conditions (EPCs) like poor interface design. The HEP is computed as the nominal rate multiplied by the assessed effect of each applicable EPC, whose maximum multiplier (up to 17 for task unfamiliarity) is weighted by the assessed proportion of affect (APOA, a value between 0 and 1), providing a quick estimate without detailed task breakdown. Similarly, the Success Likelihood Index Method (SLIM), developed by D.E. Embrey and colleagues in 1984, relies on structured expert judgment to scale PSFs. Experts rate 4-10 key PSFs (e.g., time urgency or procedure quality) on a 0-10 success likelihood scale, deriving a composite index that is logarithmically transformed into an HEP, often calibrated against benchmark tasks with known error probabilities, such as roughly 0.01 for a moderately complex task.
These techniques, while pioneering in quantifying HERs for PRA, share limitations as static models that overlook deeper cognitive context, dependencies beyond basic levels, and non-nuclear applications, leading to subjective adjustments and limited empirical validation outside controlled settings. Primarily tailored for nuclear risk assessments, they prioritize probabilistic quantification over behavioral realism.
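The SLIM calculation outlined above can also be sketched briefly: expert ratings of the PSFs are combined into a success likelihood index (SLI), which is converted to an HEP through a logarithmic calibration of the form log10(HEP) = a*SLI + b, with a and b fitted from tasks whose error probabilities are already known. The weights, ratings, and calibration anchors below are hypothetical illustrations, not values from a published study.

```python
import math

# Simplified SLIM-style estimate (hypothetical weights, ratings, and anchors).
def success_likelihood_index(ratings: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of PSF ratings on a 0-10 scale (higher = more favourable)."""
    total_weight = sum(weights.values())
    return sum(ratings[psf] * weights[psf] for psf in ratings) / total_weight

def calibrate(sli1: float, hep1: float, sli2: float, hep2: float) -> tuple[float, float]:
    """Fit log10(HEP) = a*SLI + b from two calibration tasks with known HEPs."""
    a = (math.log10(hep1) - math.log10(hep2)) / (sli1 - sli2)
    b = math.log10(hep1) - a * sli1
    return a, b

ratings = {"time_urgency": 4.0, "procedure_quality": 7.0, "training": 6.0}
weights = {"time_urgency": 0.5, "procedure_quality": 0.3, "training": 0.2}
sli = success_likelihood_index(ratings, weights)

# Calibration anchors (assumed): SLI 9 -> HEP 1e-4, SLI 2 -> HEP 1e-1.
a, b = calibrate(9.0, 1e-4, 2.0, 1e-1)
hep = 10 ** (a * sli + b)
print(f"SLI = {sli:.2f}, estimated HEP = {hep:.4f}")  # roughly 0.004 here
```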

Second- and Third-Generation Techniques

Second-generation human reliability analysis (HRA) techniques, emerging in the 1990s, shifted focus from simple task-based error probabilities to the cognitive processes underlying human error, building on foundational probabilistic approaches by incorporating psychological models of error causation. A seminal framework in this category is the Generic Error Modeling System (GEMS), developed by James Reason in 1990, which classifies errors according to the stage of information processing at which they occur. GEMS distinguishes among skill-based errors, such as slips (unintentional actions due to attentional failures) and lapses (memory failures), which arise during routine, automatic performance; rule-based mistakes, which involve misapplying a rule or applying an inappropriate one; and knowledge-based mistakes, which involve flawed plans or decisions in novel situations. This taxonomy emphasizes how cognitive demands and environmental cues influence error likelihood, enabling analysts to predict errors in dynamic settings rather than static tasks.

Another key second-generation method is the Cognitive Reliability and Error Analysis Method (CREAM), introduced by Erik Hollnagel in 1998, which integrates cognitive engineering principles to assess error probabilities through performance modes and contextual factors. CREAM categorizes operator performance into four control modes: strategic (proactive planning), tactical (rule-following), opportunistic (time-pressured actions), and scrambled (degraded control), with error probabilities varying accordingly, such as approximately 0.00003 for strategic actions but up to 0.4 for scrambled ones under poor conditions. Central to CREAM is its Contextual Control Model, which evaluates nine common performance conditions (CPCs), including working conditions, time available, and organizational support, each judged as reducing, not significantly affecting, or improving performance in order to adjust baseline error rates. This approach allows a more nuanced quantification of human error probabilities (HEPs) by weighting cognitive and contextual elements, often reducing overestimation in complex scenarios compared to earlier methods.

Third-generation techniques, developed from the late 1990s onward, further advanced HRA by embedding error analysis within probabilistic risk assessment (PRA) frameworks and emphasizing error-forcing contexts, particularly in high-stakes environments like nuclear power. A Technique for Human Event Analysis (ATHEANA), sponsored by the U.S. Nuclear Regulatory Commission (NRC) in the mid-1990s, identifies unsafe acts by linking PRA-defined initiating events to potential human errors through triggers such as equipment failures or procedural ambiguities. ATHEANA then examines error-forcing contexts (EFCs), including time pressure and stress, to estimate HEPs, typically deriving them from context-specific expert judgment and empirical data rather than fixed tables, which supports tailored assessments of accident sequences. Similarly, MERMOS (Méthode d'Evaluation de la Réalisation des Missions Opérateur pour la Sûreté), developed by Électricité de France (EDF) in the late 1990s for nuclear control room operations, focuses on failures of emergency operation missions by modeling operator cognition under demanding conditions. MERMOS analyses operator actions by classifying failures against cognitive functions like diagnosis and decision-making and incorporates stress and other contextual factors to adjust HEPs, drawing on simulator and incident data for validation.
These generations introduced enhancements in modeling error recovery, recognizing that not all errors propagate unchecked, with probabilities of detection (P_d) often ranging from 0.1 to 0.9 depending on monitoring opportunities and feedback, derived from empirical sources such as incident reports in databases like the Human Event Repository and Analysis (HERA) system. For instance, in CREAM and ATHEANA, recovery is factored into HEP calculations by assessing post-error detection via PIFs or EFCs, allowing adjustments that reflect real-world mitigation. Data from operational events, including those compiled by the NRC, provide the empirical basis for these ranges, ensuring estimates align with error rates observed in control room simulations and actual incidents. Overall, second- and third-generation techniques offer advantages in handling dynamic, context-rich scenarios by accounting for cognitive depth and recovery, which reduces misestimation of HEPs in complex systems compared to first-generation task decompositions. This evolution enables more realistic assessments in industries with high human-system interaction, prioritizing influential cognitive and organizational factors over simplistic generic error rates. Emerging fourth-generation methods, developed since the 2010s and advancing as of 2025, incorporate dynamic modeling techniques such as Bayesian networks and machine learning to integrate real-time data and adaptive cognitive simulations, addressing evolving contexts in socio-technical systems like autonomous operations.
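A minimal sketch of the CREAM-style screening logic described above is given below: the balance of common performance conditions judged as improving versus reducing performance selects a control mode, which maps to a broad failure-probability interval. The net-score rule and the exact interval values are simplifications of the published method, and the function and variable names are assumptions for illustration.

```python
# Simplified CREAM-style screening (illustrative thresholds and intervals).
# Each of the nine common performance conditions (CPCs) is judged as
# "improved", "not_significant", or "reduced" with respect to performance.
CONTROL_MODE_INTERVALS = {
    # Approximate failure-probability intervals associated with each mode.
    "strategic":     (0.5e-5, 1e-2),
    "tactical":      (1e-3, 1e-1),
    "opportunistic": (1e-2, 0.5),
    "scrambled":     (1e-1, 1.0),
}

def control_mode(cpc_judgements: list[str]) -> str:
    """Pick a control mode from the balance of improved vs. reduced CPCs.
    This net-score rule stands in for CREAM's published lookup diagram."""
    net = cpc_judgements.count("improved") - cpc_judgements.count("reduced")
    if net >= 4:
        return "strategic"
    if net >= 0:
        return "tactical"
    if net >= -4:
        return "opportunistic"
    return "scrambled"

judgements = ["reduced", "reduced", "not_significant", "improved",
              "not_significant", "reduced", "not_significant",
              "not_significant", "not_significant"]
mode = control_mode(judgements)
print(mode, CONTROL_MODE_INTERVALS[mode])  # opportunistic, with a wide interval
```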

Specialized Frameworks

Specialized frameworks in human reliability provide structured approaches for classifying and investigating human errors after incidents, emphasizing the identification of causal chains rather than prospective probability estimation. These tools draw on performance influencing factors such as fatigue and organizational conditions to dissect error pathways, facilitating targeted interventions in high-risk domains like aviation and nuclear operations. By focusing on retrospective analysis, they enable investigators to uncover latent failures that contribute to active errors, promoting systemic improvements over blame.

The Human Factors Analysis and Classification System (HFACS), developed in the late 1990s by Scott A. Shappell and Douglas A. Wiegmann for U.S. military aviation, offers a hierarchical taxonomy for categorizing human contributions to accidents. Grounded in James Reason's Swiss Cheese model of accident causation, HFACS organizes failures into four tiers: organizational influences at the highest level, encompassing resource management, organizational climate, and process deficiencies; unsafe supervision, including inadequate oversight and failure to correct known problems; preconditions for unsafe acts, covering categories such as environmental factors (e.g., physical environment), operator conditions (e.g., fatigue), personnel factors (e.g., training), and team dynamics (e.g., crew resource management); and unsafe acts at the base, divided into errors (skill-based slips, decision mistakes, perceptual errors) and violations (routine or exceptional deviations). This four-tier structure, with multiple causal categories at each level, allows comprehensive mapping of how latent organizational issues align with active frontline errors.

Other notable frameworks include James Reason's classification of error modes, which distinguishes slips (observable execution failures, like pressing the wrong button), lapses (unobservable memory failures, like forgetting a step), mistakes (flawed planning or problem-solving), and violations (intentional rule-breaking, either routine or exceptional). Originally outlined in Reason's 1990 seminal work Human Error, this typology serves as a foundational diagnostic for dissecting cognitive and behavioral breakdowns in incident reviews. Extensions of the Human Error Assessment and Reduction Technique (HEART), first proposed by J.C. Williams in 1985 for task-based error evaluation, have been adapted beyond probabilistic risk assessment (PRA) domains, such as in healthcare and rail transport, to qualitatively assess error-prone tasks by weighting influencing factors like time pressure and procedure quality without relying on quantitative probabilities. Similarly, the Information, Decision, and Action in Crew context (IDAC) framework, developed by Ali Mosleh and colleagues in the mid-2000s, models crew performance through cognitive stages (information processing, decision-making, and action execution) in dynamic, team-based scenarios like nuclear control rooms, decomposing responses to stressors for post-event reconstruction and error tracing.

These frameworks are primarily applied in post-incident investigations to trace latent failures back through causation chains, integrating performance influencing factors like fatigue or poor supervision as diagnostic inputs. For instance, HFACS analyses of aviation accidents have revealed that while unsafe acts appear in nearly 90% of cases, organizational influences underlie a substantial portion when examined hierarchically, contributing to broader systemic patterns.
Unlike predictive human reliability methods, specialized frameworks like HFACS and IDAC adopt a primarily diagnostic orientation, prioritizing the elucidation of causation sequences over the quantification of failure probabilities. This diagnostic emphasis supports regulatory oversight and organizational learning, focusing on holistic chains of influence rather than isolated event likelihoods.
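As an illustration of how investigation findings might be coded against the HFACS tiers described above, the sketch below defines a small record type for tagging each finding with a tier and category and tallying the results. The abbreviated category lists and the record format are assumptions for the example, not a standard HFACS tool.

```python
from collections import Counter
from dataclasses import dataclass

# Abbreviated HFACS tiers with a few example categories (not the full taxonomy).
HFACS_TIERS = {
    "organizational_influences": ["resource_management", "organizational_climate", "organizational_process"],
    "unsafe_supervision": ["inadequate_supervision", "failure_to_correct_known_problem"],
    "preconditions_for_unsafe_acts": ["adverse_mental_state", "physical_environment", "crew_resource_management"],
    "unsafe_acts": ["skill_based_error", "decision_error", "perceptual_error", "violation"],
}

@dataclass
class Finding:
    description: str
    tier: str
    category: str

def tally_by_tier(findings: list[Finding]) -> Counter:
    """Count coded findings per HFACS tier to contrast latent and active failures."""
    for finding in findings:
        if finding.category not in HFACS_TIERS[finding.tier]:
            raise ValueError(f"unknown category for tier {finding.tier}: {finding.category}")
    return Counter(finding.tier for finding in findings)

findings = [
    Finding("Crew skipped a checklist item under time pressure", "unsafe_acts", "skill_based_error"),
    Finding("Scheduling practices produced chronic fatigue", "organizational_influences", "organizational_process"),
    Finding("Known procedural ambiguity left uncorrected", "unsafe_supervision", "failure_to_correct_known_problem"),
]
print(tally_by_tier(findings))
```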

Applications and Case Studies

Key Industries

In nuclear power, human reliability analysis (HRA) is integrated into probabilistic risk assessments (PRA) to evaluate operator performance during reactor operations, including shutdown sequences where errors in diagnosis and procedural execution can affect safety. For instance, HRA quantifies human error probabilities (HEPs) for such tasks, typically ranging from 0.01 to 0.1 under nominal conditions and adjusted by performance shaping factors like stress and training. This approach supports system reliability modeling and design improvements in control room environments.

In aviation, crew resource management (CRM) training applies human reliability principles to mitigate crew errors by emphasizing communication, decision-making, and teamwork in high-stakes operations. CRM is employed in simulations to predict and reduce error rates through enhanced error trapping and recovery, contributing to a dramatic decline in accident rates since the 1980s. HRA techniques have also been applied in flight simulator studies of specific emergency and abnormal scenarios.

The oil and gas sector, particularly offshore operations, focuses HRA on maintenance and drilling tasks where environmental performance influencing factors (PIFs) such as harsh weather increase error risks. Techniques like the Human Error Assessment and Reduction Technique (HEART) are adapted to quantify HEPs for these activities, incorporating error-producing conditions to prioritize interventions in dynamic settings. In healthcare, HRA methods predict errors in surgical teams by analyzing cognitive and organizational factors to enhance patient safety protocols. The Cognitive Reliability and Error Analysis Method (CREAM) has been adapted for this domain, identifying common error modes like misdiagnosis or procedural omissions that contribute significantly to adverse events, which affect approximately 10% of hospitalized patients globally (with about half deemed preventable). Similarly, in manufacturing, HRA supports error prediction along assembly lines by classifying failure modes in manual tasks, such as component misalignment, to improve process reliability and reduce defects.

In railways, HRA is used in accident investigations and to assess operator reliability in signal passing and maintenance tasks, supporting signalling and safety management enhancements. In the maritime sector, HRA evaluates crew performance in navigation and emergency response, incorporating factors like fatigue and bridge resource management to mitigate collision risks. Emerging applications include cybersecurity, where HRA models human errors in threat detection and response within IT operations centers as of 2023. In autonomous systems, such as unmanned aerial vehicles and self-driving cars, HRA addresses human oversight and handover errors, integrating with broader system reliability assessments up to 2025.

Cross-industry standards guide HRA implementation, with the International Atomic Energy Agency (IAEA) providing frameworks for integrating human factors into safety assessments across nuclear and analogous high-risk sectors. Complementary guidelines, such as those in IEC/ISO 31010 on risk assessment techniques, recommend HRA techniques to address human factors in diverse contexts. Methods like THERP serve as foundational tools in these applications for HEP estimation.

Major Incidents and Lessons

The 1979 accident at the Three Mile Island nuclear power plant in Pennsylvania exemplified operator misdiagnosis stemming from poor human-machine interfaces and high-stress conditions, where ambiguous instrumentation and inadequate training led to delays in recognizing a loss-of-coolant event. Human and organizational factors were identified as a root cause, with human errors contributing significantly to the accident's progression and partial core meltdown. Subsequent human reliability analysis (HRA) efforts, including retrospective applications of methods like MERMOS, underscored the need for enhanced operator training through realistic simulators to mitigate diagnostic errors under pressure. These insights prompted regulatory reforms, such as improved control room designs and emergency procedures, influencing nuclear safety standards worldwide.

The 1986 Chernobyl disaster in Ukraine involved rule violations and knowledge-based mistakes by operators, exacerbated by organizational pressures that prioritized production over safety and fostered a deficient safety culture. Post-accident analysis using frameworks like the Human Factors Analysis and Classification System (HFACS) revealed how latent organizational failures, including inadequate training and suppression of safety concerns, enabled the escalation from a test procedure to a catastrophic explosion and release of radioactive material. This highlighted the role of performance influencing factors (PIFs) such as poor communication and overconfidence in high-stakes environments. The findings drove international reforms, including strengthened safety culture assessments and redesigned reactor control systems to prevent unauthorized interventions.

Beyond nuclear incidents, the 1977 Tenerife airport collision between two Boeing 747 aircraft demonstrated how communication errors, compounded by stress and hierarchical dynamics in the cockpit, can lead to runway incursions. Misunderstandings in radio transmissions, such as the ambiguous phrase "we are now at takeoff," occurred amid fog and time pressure, resulting in the deadliest accident in aviation history with 583 fatalities. Similarly, the 2010 Deepwater Horizon oil rig explosion in the Gulf of Mexico arose from maintenance and monitoring oversights, including misinterpreted pressure tests and inadequate well control procedures linked to confirmation bias and fatigue among the crew. These cases illustrate the pervasive impact of human factors across sectors, with lessons emphasizing enhanced error recovery protocols, such as standardized phraseology in aviation and rigorous well integrity checks in drilling operations.

A key takeaway from these incidents is the value of mitigating PIFs through targeted interventions; for instance, the adoption of structured checklists in aviation and healthcare has significantly reduced procedural errors by standardizing actions and minimizing omissions under stress. The broader impacts include a shift toward third-generation HRA methods following the 2011 Fukushima Daiichi accident, which exposed how seismic events impose cognitive overload, impairing diagnosis and decision-making amid infrastructure damage and radiological hazards. Empirical data from these major incidents have populated HRA databases, such as those compiled by the IAEA from event reports and simulator studies, enabling more context-sensitive error probability estimates and cross-industry applications for prevention.
