
Human error

Human error is defined as the failure of a planned sequence of mental or physical activities to achieve an intended outcome, encompassing unintentional deviations such as slips (actions not executed as planned), lapses (memory failures), and mistakes (flawed plans); it is usually distinguished from violations, which are intentional deviations from rules. This phenomenon arises from interactions between human operators and complex systems, leading to adverse outcomes in domains like healthcare, aviation, nuclear power, and transportation. Human error contributes to a substantial proportion of accidents and incidents worldwide; for example, it is associated with 60-80% of aviation accidents and approximately 75% of general aviation crashes as of 2004. The study of human error employs two primary conceptual approaches: the person approach, which attributes errors to individual failings such as forgetfulness, inattention, or lack of training and advocates remedies like retraining or disciplinary measures, and the system approach, which views errors as inevitable outcomes of flawed system design and organization, emphasizing prevention through robust safeguards and error-proofing.

A seminal framework in the system approach is James Reason's Swiss cheese model, which depicts safety defenses as multiple layers of Swiss cheese with varying holes (weaknesses); accidents occur when these holes align, allowing active failures (e.g., operator slips) to combine with latent conditions (e.g., poor equipment design or inadequate supervision). This model highlights how errors often stem from upstream organizational factors rather than isolated individual actions. Causes of human error are categorized into active failures—immediate unsafe acts by frontline operators with short-lived effects—and latent conditions, which are dormant system flaws like under-resourced environments or ambiguous procedures that predispose individuals to err. In healthcare, for instance, human error is estimated to contribute to 60-80% of adverse events, frequently due to high workload, communication breakdowns, or complex processes.

Prevention strategies focus on systemic interventions, including standardized protocols, checklists to reduce reliance on memory, enhanced communication and teamwork training, and fostering a "just culture" that encourages error reporting without punitive blame to identify and mitigate underlying risks. High-reliability organizations, such as nuclear power plants, exemplify success by prioritizing vigilance and adaptive processes to harness human variability for safety.

Core Concepts

Definition and Scope

Human error refers to the unintended deviation from an individual's intended actions, plans, rules, or expectations, arising from failures in execution, memory, or planning rather than external disruptions. This core definition, widely adopted in psychological and safety research, encompasses slips—failures in action execution, such as performing the wrong movement despite correct intentions; lapses—failures in memory or storage, like forgetting a step in a sequence; and mistakes—flaws in the planning or problem-solving process, where the chosen plan or method is inappropriate. These distinctions highlight that human error stems from cognitive or behavioral shortcomings inherent to human performance, not deliberate intent.

The scope of human error primarily covers individual-level actions in both routine daily tasks and high-stakes complex environments, such as aviation or healthcare, where variability in performance can lead to discrepancies between what was planned and what occurs. It is distinctly differentiated from violations, which involve intentional rule-breaking for personal or situational reasons, and from accidents, which represent the harmful outcomes or chain reactions triggered by errors rather than the errors themselves. For instance, a slip might manifest in everyday life as adding salt to coffee instead of sugar due to a momentary attentional lapse, while in a professional setting, a lapse could involve a data-entry operator omitting a decimal point in financial records, potentially causing minor discrepancies if caught early.

A key conceptual framework for understanding how human errors propagate within systems is James Reason's Swiss cheese model, which depicts organizational defenses as multiple parallel slices of Swiss cheese stacked together. Each slice represents a layer of protection—such as procedures, training, or equipment—with irregularly shaped and sized holes symbolizing inherent weaknesses or potential failure points that vary in position and scale. Ordinarily the holes do not line up, so an error is stopped by at least one layer, but when the holes temporarily align across all layers, an error's trajectory can penetrate unimpeded, resulting in system failure. This analogy underscores the interplay between immediate active errors (like a slip by an operator) and latent conditions (underlying systemic flaws), emphasizing that errors alone rarely cause harm without aligned vulnerabilities.
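The alignment logic of the Swiss cheese model can be illustrated with a short simulation. The sketch below is a minimal, hypothetical Python illustration: the layer names and hole probabilities are invented for the example, not empirical values. It treats each defensive layer as having an independent chance of a "hole" being present when an error arrives and estimates how often an error passes every layer.

```python
import random

# Hypothetical defensive layers and illustrative probabilities that each
# layer has a "hole" (weakness) at the moment an error arrives.
# These numbers are invented for demonstration, not empirical estimates.
LAYERS = {
    "procedures": 0.10,
    "supervision": 0.05,
    "alarms_and_equipment": 0.02,
}

def error_penetrates(layers: dict[str, float]) -> bool:
    """Return True if an error passes through a hole in every layer."""
    return all(random.random() < p_hole for p_hole in layers.values())

def estimate_breach_rate(layers: dict[str, float], trials: int = 1_000_000) -> float:
    """Monte Carlo estimate of the chance that all holes align."""
    breaches = sum(error_penetrates(layers) for _ in range(trials))
    return breaches / trials

if __name__ == "__main__":
    simulated = estimate_breach_rate(LAYERS)
    analytic = 1.0
    for p_hole in LAYERS.values():
        analytic *= p_hole  # independent layers: product of hole probabilities
    print(f"simulated breach rate ~ {simulated:.6f}, analytic = {analytic:.6f}")
```

Under the independence assumption, the analytic breach probability is simply the product of the layer hole probabilities (0.10 × 0.05 × 0.02 = 0.0001 here), which is why adding or strengthening even one defensive layer sharply reduces the chance that an error reaches the patient, passenger, or plant.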

Historical Development

The study of human error emerged in the early 20th century amid the rise of scientific management practices that prioritized efficiency over worker capabilities. Frederick Winslow Taylor's 1911 publication, The Principles of Scientific Management, exemplified this approach by advocating the scientific optimization of tasks through time-motion studies, often disregarding human variability and psychological factors—an omission that later highlighted the need to address error-prone conditions in mechanized work environments. Following World War II, aviation accidents underscored the limitations of blaming individual pilots, prompting systematic analyses of design-induced errors. In 1947, Paul M. Fitts and Richard E. Jones analyzed 460 pilot-error incidents in operating aircraft controls, revealing that many "errors" stemmed from poor instrument layouts and control similarities, thus pioneering human factors engineering to mitigate such systemic contributors.

In the later decades of the 20th century, research shifted toward cognitive frameworks for understanding error across performance levels. In the 1980s, Jens Rasmussen developed the skills-rules-knowledge (SRK) framework, categorizing human performance into skill-based (automatic), rule-based (procedural), and knowledge-based (analytical) modes to explain how errors arise from mismatched mental processing in complex systems. This work was complemented by quantitative approaches, as seen in John W. Senders and Neville Moray's 1991 book Human Error: Cause, Prediction, and Reduction, which formalized error probabilities and prediction models based on empirical data from laboratory and field studies, emphasizing prevention through workload management.

The late 20th century marked a pivotal turn toward systems-oriented perspectives, influenced by major accidents. James Reason's 1990 book Human Error introduced the generic error-modeling system (GEMS), integrating slips (execution failures), lapses (memory failures), and mistakes (planning flaws) to model error causation beyond individual blame. The 1986 Chernobyl nuclear disaster accelerated this shift, with investigations revealing that operator actions were symptoms of deeper organizational issues; Reason's subsequent analyses emphasized "latent failures"—dormant conditions like inadequate training and flawed safety protocols that align to enable active errors—as detailed in his 1990 examination of the breakdown of the plant's complex defenses.

Into the 2020s, human error research has increasingly incorporated artificial intelligence (AI) in autonomous systems, focusing on hybrid human-AI interactions where overreliance or miscalibration leads to novel error types. Incidents involving Tesla's Autopilot and Full Self-Driving features, such as fatal crashes from 2019 to 2025 attributed to drivers disengaging amid AI limitations in edge cases like poor visibility, have prompted analyses of "shared responsibility" errors, where human complacency amplifies AI shortcomings. Scholarly work, including reviews of risk-informed decision-making, highlights how AI tools can detect human errors but introduce new ones through opaque algorithms, urging integrated frameworks for safer human-AI collaboration as of 2025.

Theoretical Frameworks

Models of Human Performance

Models of human performance provide theoretical frameworks for understanding how variations in operator control and cognitive demands in dynamic systems contribute to errors, emphasizing the interaction between human actions and contextual factors rather than inherent deficiencies. These models shift focus from error as a deviation from norms to variability in performance shaped by environmental pressures, time constraints, and resource availability, enabling predictions of error likelihood in complex operations such as aviation or process control.

A key example is Erik Hollnagel's Contextual Control Model (COCOM), introduced in the 1990s, which describes human performance in terms of four control modes determined by the competence of the agent, the form of control exercised, and the constructs used to match actions to goals. In the strategic mode, operators plan comprehensively with full awareness of goals and resources, achieving the highest degree of control. The tactical mode involves rule-based actions with moderate planning. In the opportunistic mode, performance relies on immediate cues with limited foresight, often leading to inefficiencies. The scrambled mode occurs under extreme stress or overload, resulting in chaotic actions. These modes illustrate how performance shaping factors like time pressure or inadequate information can degrade control, leading to errors in sociotechnical systems.

Errors can also be viewed as natural variability in performance, where normal fluctuations in attention or effort cause deviations from intended outcomes, particularly under the Efficiency-Thoroughness Trade-Off (ETTO) principle. This principle posits that individuals and organizations must balance efficiency (speed and resource optimization) against thoroughness (accuracy and completeness), often prioritizing one at the expense of the other based on contextual demands such as deadlines or safety requirements. For instance, in high-pressure scenarios, favoring efficiency can lead to overlooked checks and errors, while excessive thoroughness may slow operations and induce fatigue-related mistakes, framing errors not as failures but as adaptive responses to systemic trade-offs.

Quantitative models like the Technique for Human Error Rate Prediction (THERP) offer probabilistic assessments of performance reliability by decomposing tasks into subtasks and estimating error probabilities. THERP uses a basic formula for the probability of at least one error across n independent tasks, each with reliability r (where error probability e = 1 - r):
P(\text{error}) = 1 - r^n
For example, in an assembly line task with 10 subtasks each having a reliability of 0.99 (e = 0.01), the overall error probability is 1 - 0.99^{10} \approx 0.096, or about 9.6%, highlighting how cumulative small errors amplify in sequential operations. Performance influencing factors such as stress or training adjust these base rates through multipliers, allowing tailored predictions for industrial settings.
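The cumulative relationship above can be expressed directly in code. The following Python sketch is illustrative only: the function names and the sample performance-shaping multiplier are not taken from the THERP handbook tables, but the arithmetic matches the formula in the text.

```python
def sequence_error_probability(error_probs: list[float]) -> float:
    """Probability of at least one error across independent subtasks."""
    p_all_correct = 1.0
    for e in error_probs:
        p_all_correct *= (1.0 - e)  # subtask completed without error
    return 1.0 - p_all_correct

# Ten subtasks, each with reliability 0.99 (error probability 0.01).
base = [0.01] * 10
print(round(sequence_error_probability(base), 3))      # ~0.096

# Illustrative performance-shaping factor: stress doubles each base error rate.
stressed = [min(1.0, e * 2) for e in base]
print(round(sequence_error_probability(stressed), 3))  # ~0.183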
In applications, these models reveal how fatigue narrows the human performance envelope—the range of safe operational states—in aviation, where prolonged duty periods reduce tactical control and increase opportunistic errors. Studies show that fatigue significantly degrades performance in flight simulations, reducing situation awareness and increasing error likelihood during critical phases like takeoff, findings that have informed crew scheduling regulations. Similarly, analyses of the 1979 Three Mile Island nuclear accident applied early performance models to demonstrate how diagnostic errors, exacerbated by alarm overload and systemic design flaws, led to prolonged core damage, underscoring the need for resilient system designs.

Cognitive and Psychological Theories

Cognitive and psychological theories provide foundational explanations for human error by examining the internal mental processes that lead to deviations from intended actions. One prominent framework is dual-process theory, which posits that human cognition operates through two systems: System 1, which is fast, intuitive, and automatic, and System 2, which is slower, deliberative, and effortful. Under high-stress conditions, reliance on System 1 increases, often resulting in errors because it prioritizes speed over accuracy and is susceptible to biases and heuristics. This theory, elaborated by Daniel Kahneman, highlights how System 1's dominance in urgent situations can bypass critical evaluation, contributing to lapses in judgment.

Attention and memory failures represent another key area where cognitive limitations precipitate errors, particularly through models of working memory. Alan Baddeley's working memory model, introduced with Graham Hitch in 1974, describes a multicomponent system involving a central executive for attentional control, a phonological loop for verbal information, a visuospatial sketchpad for visual-spatial data, and a later-added episodic buffer for integration. This framework explains lapses as failures in the central executive's capacity to manage limited attentional resources, leading to overload and errors in information processing. In multitasking scenarios, prospective memory—responsible for remembering to perform intended actions in the future—often falters, as divided attention disrupts cue detection and intention retrieval, increasing error rates in complex environments.

Heuristics and biases further illuminate how systematic deviations in thinking underpin human error in judgment and decision-making. The availability heuristic, where individuals assess event likelihood based on the ease of recalling examples, can lead to overestimation of rare risks, skewing risk perceptions and responses. Similarly, confirmation bias drives people to favor information confirming preexisting beliefs while ignoring contradictory evidence, fostering persistent errors in judgment. For instance, in the 1999 Mars Climate Orbiter mission, the navigation team's reliance on prior assumptions about software compatibility without rigorous verification of unit conversions (English versus metric) exemplified confirmation bias, resulting in a trajectory error that destroyed the spacecraft.

Emotional influences, particularly stress, exacerbate cognitive vulnerabilities by inducing tunnel vision, a narrowing of perceptual focus that diminishes situational awareness. High-stress states trigger physiological responses, such as elevated arousal and stress hormones, which constrain attentional breadth and impair peripheral information processing, often leading to overlooked cues and erroneous actions. This phenomenon reduces the ability to maintain a comprehensive awareness of the situation, heightening error probability in dynamic, high-stakes contexts.

Types and Classification

Categories of Errors

Human errors are commonly categorized into slips and mistakes based on the stage of cognitive processing where they occur. Slips involve errors in the execution of an intended action, such as pressing the wrong button due to a momentary lapse in attention, where the goal is correct but the performance deviates from the plan. In contrast, mistakes arise from flaws in the planning or intention itself, such as a misdiagnosis from an incorrect interpretation of symptoms, leading to the pursuit of an inappropriate objective. This distinction, rooted in Donald Norman's execution-evaluation cycle, highlights how slips disrupt the translation of intentions into actions, while mistakes reflect deeper issues in intention formation.

Another key categorization distinguishes active errors from latent errors, emphasizing their immediacy and origin within systems. Active errors, also known as frontline errors or unsafe acts, are immediate and observable mistakes made by individuals directly interacting with the system, such as a nurse administering the wrong medication due to a mix-up in labeling during a busy shift. Latent errors, conversely, are hidden weaknesses embedded in organizational structures that may not manifest until triggered, like chronic staffing shortages in a hospital that compromise oversight and enable active errors to occur. James Reason's 1997 typology underscores that while active errors demand quick detection, latent ones require systemic analysis to prevent adverse events, as illustrated in healthcare where latent conditions like inadequate training protocols amplify risks.

Errors can also be classified as exogenous or endogenous based on their triggering mechanisms. Exogenous errors stem from external environmental factors, such as poor labeling that leads a technician to select the incorrect tool in a dimly lit workspace. Endogenous errors, by comparison, originate from internal individual states, including distractions or fatigue that cause an operator to overlook a critical step in a routine procedure. This distinction aids in identifying whether interventions should target the surrounding context or personal factors to reduce error likelihood.

In team settings, errors often manifest as shared misunderstandings or coordination failures due to breakdowns in communication and teamwork. Team errors occur when collective actions deviate from intended outcomes, such as in surgical teams where misaligned handoffs between surgeons and anesthesiologists result in delayed responses to patient changes. Communication failures occur in approximately 30% of team exchanges in operating rooms, including ambiguities in instructions that lead to procedural mismatches among members. These shared errors highlight the interdependence of high-stakes environments, where individual lapses amplify into group-level disruptions.
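The decision logic separating the basic categories can be summarized as a small classifier. The sketch below is a hypothetical Python illustration of the slip/lapse/mistake/violation distinction; the question names and ordering are our own simplification, not a standard diagnostic instrument.

```python
from enum import Enum

class ErrorType(Enum):
    VIOLATION = "violation"  # intentional deviation from a rule
    MISTAKE = "mistake"      # the plan itself was inappropriate
    LAPSE = "lapse"          # memory/storage failure (e.g., omitted step)
    SLIP = "slip"            # execution failure despite a correct plan
    NONE = "no error"

def classify(intentional_deviation: bool,
             plan_appropriate: bool,
             step_forgotten: bool,
             executed_as_planned: bool) -> ErrorType:
    """Rough decision logic for Reason/Norman-style error categories."""
    if intentional_deviation:
        return ErrorType.VIOLATION
    if not plan_appropriate:
        return ErrorType.MISTAKE
    if step_forgotten:
        return ErrorType.LAPSE
    if not executed_as_planned:
        return ErrorType.SLIP
    return ErrorType.NONE

# Example: correct plan, nothing forgotten, but the wrong button was pressed.
print(classify(False, True, False, False))  # ErrorType.SLIP
```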

Taxonomies and Frameworks

Taxonomies and frameworks provide structured approaches to systematically classify, analyze, and predict human errors, enabling researchers and practitioners to move beyond descriptive categories toward actionable insights in safety-critical domains. These systems often draw on cognitive psychology and accident causation models to map error pathways, incorporating hierarchical levels, error modes, and influencing factors for comprehensive investigation.

The Human Factors Analysis and Classification System (HFACS), developed by Wiegmann and Shappell in 2003, extends James Reason's Swiss cheese model to create a practical tool for dissecting human contributions to accidents, particularly in aviation. It structures errors across four tiers—unsafe acts (e.g., errors and violations), preconditions for unsafe acts (e.g., environmental factors, conditions of operators, and personnel factors), unsafe supervision (e.g., inadequate supervision and improperly planned operations), and organizational influences (e.g., resource management and organizational climate)—each encompassing several subcategories. Originally applied to U.S. military aviation mishaps, HFACS has been used to analyze hundreds of accidents, such as approximately 332 U.S. military mishaps from the 1990s, where unsafe acts were linked to nearly 80% of incidents and higher-level factors like unsafe supervision and organizational influences were identified as contributors. This multi-level structure facilitates the identification of latent failures, supporting targeted interventions to reduce error rates.

The Generic Error Modeling System (GEMS), introduced by James Reason in 1990, classifies errors based on Rasmussen's skill-rule-knowledge (SRK) framework, delineating three performance levels: skill-based (automatic actions prone to slips and lapses), rule-based (stored rules leading to mistakes in application or choice), and knowledge-based (problem-solving in novel situations susceptible to misdiagnoses). Slips involve execution failures, while mistakes stem from flawed intentions, with GEMS using flowcharts to trace error origins through cognitive processes like attention capture or similarity matching. Widely adopted in safety research, GEMS has informed error analysis in aviation and process industries, emphasizing how inattention and distraction exacerbate slips at the skill-based level, which are common in routine operations.

The Technique for the Retrospective and Predictive Analysis of Cognitive Errors (TRACEr), developed by Shorrock and Kirwan in 2002 and refined in 2006, offers a domain-specific taxonomy of cognitive error, originally developed for air traffic control and suited to other dynamic environments. It combines 28 error modes (e.g., omission, selection, and timing failures) with nine performance shaping factors (e.g., mental workload and distraction) across stages of information processing, action planning, and execution, allowing for both backward-looking incident reviews and forward predictive modeling. In applications like industrial tasks, TRACEr identifies cognitive error modes such as omissions and selection failures; it has been adapted for healthcare to analyze adverse events, including medication errors.

Recent advancements as of 2025 have extended these frameworks with machine-learning augmentation for cyber-physical systems, where algorithms enhance error classification by processing data on human-automation interactions. For instance, AI-integrated approaches pair HFACS with automated analysis of incident reports to support safety classification in aviation.
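In practice, frameworks like HFACS are applied by coding each incident against the tiers and tallying how often each tier is cited. The sketch below is a minimal, hypothetical illustration of that bookkeeping: the tier labels follow HFACS, but the sample incident codings are invented for demonstration.

```python
from collections import Counter

# HFACS top-level tiers; subcategories are omitted for brevity.
HFACS_TIERS = (
    "unsafe acts",
    "preconditions for unsafe acts",
    "unsafe supervision",
    "organizational influences",
)

# Invented incident codings for demonstration only.
incidents = [
    {"id": "A-001", "factors": ["unsafe acts", "preconditions for unsafe acts"]},
    {"id": "A-002", "factors": ["unsafe acts", "organizational influences"]},
    {"id": "A-003", "factors": ["unsafe supervision", "unsafe acts"]},
]

def tally_factors(records):
    """Count how often each HFACS tier is cited across coded incidents."""
    counts = Counter()
    for record in records:
        for factor in record["factors"]:
            if factor in HFACS_TIERS:
                counts[factor] += 1
    return counts

for tier, count in tally_factors(incidents).most_common():
    share = count / len(incidents)
    print(f"{tier}: cited in {count}/{len(incidents)} incidents ({share:.0%})")
```

Aggregating codings in this way is what allows analysts to report, for example, that unsafe acts appear in a given percentage of mishaps while latent organizational factors recur across many of them.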

Causes and Contributing Factors

Individual Sources

Individual sources of human error encompass personal physiological, cognitive, and experiential factors that impair performance independently of external systems. These elements can lead to lapses in judgment, slowed reactions, or misinterpretations, particularly in demanding tasks requiring sustained attention or precise execution. Understanding these sources is crucial for recognizing how internal states influence reliability in high-stakes environments like aviation, healthcare, and transportation.

Physiological factors play a central role in individual human error, with fatigue being one of the most prevalent. Disruptions to circadian rhythms, such as those experienced during shift work or irregular sleep schedules, can significantly reduce vigilance and cognitive performance. Studies of airline personnel have shown that such disruptions impair performance, with fatigue contributing to up to 20% of accidents according to some safety reports. Fatigue manifests as decreased alertness, slower response times, and heightened proneness to lapses. Illness and medication side effects further exacerbate these risks by altering physiological states. Acute illnesses like colds or influenza can diminish concentration and reaction speed, while side effects from common medications—such as drowsiness from antihistamines or impaired coordination from pain relievers—compromise cognitive and motor functions in operators. In aviation, regulatory bodies like the FAA preclude certain medications for pilots precisely because they can impair memory, concentration, alertness, and judgment, leading to errors in flight control. These physiological impairments often interact cumulatively, amplifying error potential during prolonged tasks.

Skill and training gaps at the individual level frequently result in rule-based mistakes, where operators apply incorrect procedures due to inexperience. Inexperienced individuals, lacking familiarity with situational nuances, may misapply memorized rules, leading to unintended outcomes in routine operations. For instance, pilots or technicians might select the wrong procedural step under time pressure, as rule-based errors are particularly common among those with limited exposure to varied scenarios. Even well-learned skills can decay over time without regular practice or refresher training, resulting in forgotten sequences or hesitant execution. This decay is evident in fields like maintenance, where infrequent repairs lead to reliance on outdated or incomplete knowledge, heightening the risk of procedural violations.

Perceptual errors arise from individual sensory misinterpretations, often in challenging conditions that overload or deceive the senses. In low-visibility environments, such as fog or night operations, pilots may experience visual illusions that distort spatial awareness; for example, a downsloping runway can create the illusion that the aircraft is lower than it actually is, leading pilots to fly a higher-than-optimal approach. Runway width illusions similarly mislead: a narrower-than-usual runway can make the aircraft appear higher than it is, causing pilots to descend prematurely. Sensory overload in noisy settings further disrupts focus, as excessive auditory stimuli impair selective attention and increase cognitive load. Research on noise exposure demonstrates that high-decibel environments reduce cognitive performance, leading to more frequent errors in tasks requiring auditory processing or sustained concentration. These perceptual distortions highlight how individual vulnerabilities can precipitate critical mistakes without adequate compensatory strategies.
In the 2020s, the shift to remote work has introduced new individual sources of error tied to home-based distractions and digital fatigue. Studies during and after the COVID-19 pandemic reveal that household interruptions—such as family demands or domestic noise—correlate with higher stress levels, potentially affecting performance in knowledge work. Videoconferencing fatigue ("Zoom fatigue"), characterized by cognitive exhaustion from prolonged video calls, has been linked to communication challenges stemming from the limited non-verbal cues available in virtual settings, where workers often report missing visual signals they would notice in person. These factors underscore how modern work arrangements can amplify individual vulnerabilities, contributing to errors like overlooked details or flawed collaborations. As of 2025, emerging factors such as over-reliance on artificial intelligence (AI) tools are contributing to individual errors through automation complacency and skill degradation. NASA research highlights how excessive dependence on AI in space operations can lead to reduced vigilance and errors when systems fail, emphasizing the need for balanced human-AI interaction.

Organizational and Environmental Influences

Organizational and environmental influences play a pivotal role in precipitating human errors by shaping the conditions under which individuals operate within complex systems. Safety culture deficits, characterized by high-pressure environments that prioritize production over caution, often foster shortcuts and normalize risky behaviors. In the 1986 Challenger space shuttle disaster, intense schedule pressures at NASA and contractor Morton Thiokol led to the override of engineers' warnings about O-ring failures in cold weather, resulting in the vehicle's explosion shortly after launch. This incident exemplified how organizational emphasis on meeting deadlines eroded safety protocols, contributing to a latent failure that manifested as a catastrophic error.

Design flaws in human-machine interfaces further exacerbate errors by creating mismatches between system demands and user capabilities, allowing latent conditions to enable active mistakes. The 2018 and 2019 Boeing 737 MAX crashes, involving Lion Air Flight 610 and Ethiopian Airlines Flight 302, were linked to flaws in the Maneuvering Characteristics Augmentation System (MCAS), where a single faulty angle-of-attack sensor could trigger erroneous nose-down commands without adequate pilot awareness or redundancy. Investigations revealed that Boeing's design choices, influenced by competitive pressures to minimize pilot training costs, concealed the system's reliance on one sensor, leading to uncontrollable aircraft behavior and the loss of 346 lives. Such environmental design shortcomings highlight how poorly integrated technology can amplify human error in high-stakes operations.

Communication breakdowns, often rooted in hierarchical structures or physical environmental factors, disrupt information flow and impair coordination. In intensive care units (ICUs), handoff errors during shift changes frequently result from incomplete or ambiguous information transfer, with hierarchical dynamics silencing junior staff warnings about patient risks, contributing to adverse events like medication oversights. Environmental noise, poor lighting, or cluttered workspaces in such settings further degrade focus, increasing the likelihood of miscommunications during critical transitions. These systemic barriers underscore the need to address collective contextual drivers rather than isolated individual lapses.

Recent disruptions, such as the 2023 global supply chain challenges stemming from geopolitical tensions and lingering post-pandemic effects, have elevated operational vulnerabilities through reliance on improvised processes. The Business Continuity Institute's resilience surveys indicate that such environmental pressures can amplify risks, including those from human error in process adaptations, across supply networks. This illustrates how transient organizational adaptations to external shocks can perpetuate error-prone environments.

Impacts

In High-Risk Industries

In high-risk industries, human error plays a pivotal role in incidents that can result in widespread harm, often amplifying the consequences of technical or environmental challenges. These sectors, including aviation, healthcare, nuclear energy, and transportation, demand rigorous adherence to protocols, yet lapses in judgment, communication, or oversight frequently lead to catastrophic outcomes. Understanding these errors highlights the need for enhanced training and systemic safeguards tailored to operational pressures.

In aviation, human error, particularly by pilots, accounts for approximately 70% of incidents, underscoring its dominance as a causal factor in accidents. The 1977 Tenerife airport disaster exemplifies this, where a coordination failure between pilots and air traffic control—stemming from ambiguous radio communications and dense fog—led to two Boeing 747s colliding on the runway, killing 583 people in the deadliest aviation accident in history. This event revealed how stress and poor team dynamics can cascade into tragedy, prompting global reforms in cockpit communication and crew resource management.

In healthcare, human errors contribute significantly to patient harm, with medical errors estimated to cause over 250,000 deaths annually in the United States, positioning them as a potential third leading cause of death based on a 2016 analysis by Johns Hopkins researchers that extrapolated from prior studies on adverse events (though the methodology has faced criticism for potential overestimation). Post-2020, the rapid expansion of telemedicine during the COVID-19 pandemic has introduced new vulnerabilities, such as diagnostic misses due to limited physical examinations and communication barriers, increasing error risks in remote settings.

The nuclear and energy sector illustrates how operator decisions can exacerbate failures, as seen in the 2011 Fukushima Daiichi meltdown. After an earthquake-triggered tsunami disabled cooling systems, operators delayed injecting seawater into reactors due to hesitation over permanent damage and unclear authority chains, allowing core meltdowns to worsen and releasing radioactive materials. In the 2020s, human factors have also surfaced in grid failures; for instance, inadequate forecasting and manual overrides during periods of high renewable integration have contributed to blackouts, such as those during extreme weather events straining hybrid grids.

In transportation, the shift toward autonomous vehicles has spotlighted human error in oversight roles, particularly distracted monitoring during transitions from manual to automated control. Incidents in 2024 involving Uber-partnered self-driving services, in which collisions occurred because human operators were inattentive, highlight ongoing risks as drivers become complacent in semi-autonomous systems. These events echo broader patterns where human distraction amplifies vulnerabilities in evolving vehicle technologies.

Societal and Economic Consequences

Human error in healthcare settings imposes a significant burden worldwide, with unsafe care causing over 3 million deaths annually and affecting approximately 1 in 10 patients globally. In low- and middle-income countries, the risk is particularly acute: as many as 4 in 100 patients die due to such incidents, many of which stem from preventable errors like medication mistakes or diagnostic oversights. During the COVID-19 pandemic, human errors in contact tracing—such as inefficient processes and failure to reach contacts promptly—exacerbated transmission, contributing to higher infection rates in affected regions.

The economic ramifications of error extend far beyond individual incidents, with workplace accidents and illnesses—often rooted in human factors—costing the global economy nearly 3 trillion dollars annually, equivalent to about 4% of world GDP. In healthcare alone, harm from errors reduces economic growth by 0.7% each year, with indirect costs reaching trillions of dollars through lost productivity and long-term disability. These burdens manifest in elevated insurance premiums and litigation expenses in error-prone sectors; for instance, malpractice claims arising from medical errors in the United States contribute to over $20 billion in annual healthcare system costs, driving up rates for providers and institutions.

Human error scandals have eroded social trust in institutions, as seen in the 2015 Volkswagen emissions cheating case, where deliberate software manipulation to evade regulations not only led to $14.7 billion in settlements but also severely damaged public confidence in the automotive industry and regulatory bodies. This loss of faith amplifies broader societal skepticism toward corporate and governmental accountability. Moreover, the impacts of human error disproportionately affect low-wage workers, who face heightened risks in hazardous environments with inadequate safety measures, leading to higher rates of injury and illness compared to higher-paid employees.

In the 2020s, errors in cybersecurity have triggered long-term economic volatility, particularly in energy and critical infrastructure; for example, the 2021 Colonial Pipeline ransomware attack stemmed from an oversight in password management, disrupting fuel supplies across the East Coast and causing widespread market instability and billions in economic losses. Such incidents highlight a growing trend where human factors in digital systems contribute to cascading societal disruptions, including supply interruptions and market fluctuations.

Prevention and Management

Strategies for Mitigation

Crew resource management (CRM) training programs, originating from NASA-sponsored workshops in the late 1970s and early 1980s, focus on enhancing team communication, decision-making, and resource utilization to mitigate human errors in high-stakes environments like aviation. These programs emphasize skills such as assertiveness in voicing concerns, structured briefings before critical operations, and collaborative problem-solving to prevent errors from escalating. By fostering a non-hierarchical culture, CRM has been widely adopted beyond aviation, reducing incident rates through improved interpersonal dynamics. Simulation-based error management training complements CRM by inoculating participants against errors through deliberate exposure to realistic scenarios, allowing practice of recovery techniques without real-world consequences. In aviation, error management within simulators encourages learners to identify, respond to, and learn from mistakes, such as procedural lapses during high-workload flights, thereby building resilience and reducing recurrence. This approach has demonstrated effectiveness in enhancing threat detection and avoidance in complex operations.

Procedural safeguards, such as checklists, provide structured verification to minimize slips and lapses in routine tasks. The World Health Organization's Surgical Safety Checklist, introduced in 2008, standardizes pre-, intra-, and post-operative verifications, reducing surgical complications and mortality by over 30% in implemented settings. By prompting essential checks like patient identity confirmation and equipment functionality, it catches errors that might otherwise go unnoticed. Redundancy in critical tasks involves multiple verifications or backups to detect and correct errors before they impact outcomes. In healthcare and aviation, dual-operator reviews for high-risk actions, such as medication dosing or flight path calculations, serve as a safeguard, with studies showing reduced error propagation through layered checks. This principle ensures that no single oversight compromises safety in sequential processes.

Ergonomic design principles, including Fitts' Law, guide interface layouts to reduce movement-related slips by optimizing control placement and sizing. Formulated in 1954, Fitts' Law quantifies the movement time (MT) required to reach a target as:
MT = a + b \log_2 \left( \frac{D}{W} + 1 \right)
where a and b are empirically derived constants, D is the distance to the target, and W is its width; shorter distances and larger targets decrease selection time, minimizing errors in control panels or digital interfaces (a brief computational illustration appears at the end of this subsection). Applications in cockpit design, for instance, position frequently used switches closer to the operator to reduce inadvertent activations.

Behavioral interventions target cognitive biases through deliberate practices that foster error awareness and adaptation. Mindfulness training, involving focused attention and non-judgmental awareness exercises, counters bias by improving attentional control, with studies showing reduced implicit biases and better decision-making in professional settings. Feedback loops for learning establish regular debriefs where individuals review incidents, identify patterns, and adjust behaviors, promoting a culture of continuous improvement and lower error rates in organizations. These loops ensure lessons from near-misses inform future actions without punitive repercussions.
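As referenced above, Fitts' Law can be applied directly when comparing candidate control layouts. The short Python sketch below uses illustrative values for the constants a and b (real values must be fitted empirically for a given device and user population) to compare two hypothetical switch placements.

```python
import math

def fitts_movement_time(distance_mm: float, width_mm: float,
                        a: float = 0.10, b: float = 0.15) -> float:
    """Predicted movement time in seconds; a and b are illustrative constants."""
    index_of_difficulty = math.log2(distance_mm / width_mm + 1)
    return a + b * index_of_difficulty

# Hypothetical comparison: a small, distant switch vs. a large, nearby one.
far_small = fitts_movement_time(distance_mm=400, width_mm=10)
near_large = fitts_movement_time(distance_mm=150, width_mm=30)
print(f"far/small target : {far_small:.2f} s")   # ~0.90 s
print(f"near/large target: {near_large:.2f} s")  # ~0.49 s
```

Lower predicted movement times for frequently used, safety-critical controls generally translate into fewer selection slips under time pressure, which is the rationale behind placing and sizing such controls favorably.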

Technological and Systemic Approaches

Technological and systemic approaches to human error prevention emphasize integrating advanced tools and organizational structures that detect, mitigate, or redesign processes to minimize error occurrence, particularly in environments where human limitations intersect with automated systems. These methods shift focus from individual blame to proactive safeguards, leveraging technology to augment human capabilities and systemic redesigns to foster resilience.

Automation aids, such as error-detecting artificial intelligence (AI) in medical diagnostics, have shown promise in reducing diagnostic oversights. Early AI systems like IBM Watson for Oncology showed concordance rates of around 93% with tumor board recommendations in some studies from the 2010s, though later evaluations revealed limitations and the system was discontinued in 2023. In manufacturing, fail-safe mechanisms like poka-yoke devices—simple engineering controls that prevent incorrect assembly or operation—have been widely adopted to eliminate human errors at the source, such as through physical guides or sensors that halt processes upon detecting anomalies, thereby reducing defect rates in assembly lines.

Systemic redesigns draw on high-reliability organization (HRO) principles to create error-resistant structures. Developed by Karl E. Weick and Kathleen M. Sutcliffe, these principles include a preoccupation with failure, where organizations actively scan for early signs of deviation rather than waiting for major incidents, as seen in nuclear power plants and air traffic control systems that maintain near-perfect safety records despite high-risk operations. Complementing this, just-culture policies promote error reporting by distinguishing between honest mistakes and willful violations, rewarding voluntary disclosures to enable learning; in aviation, such policies implemented by the Federal Aviation Administration have increased incident reporting since the 1990s, facilitating systemic improvements without punitive repercussions.

Monitoring tools enhance error detection through real-time behavioral analysis. Eye-tracking technology, which measures gaze patterns and fixation durations, identifies attention lapses in high-stakes tasks like surgical procedures, revealing that operators with divided attention exhibit prolonged fixations on irrelevant areas, allowing interventions to refocus efforts and reduce procedural errors in simulated environments. Research using machine learning on FAA UAS sighting data has achieved up to 95.7% accuracy in predicting small unmanned aircraft system violation risks based on historical patterns. As of 2024, the FAA's Roadmap for Artificial Intelligence Safety Assurance outlines strategies for integrating AI to enhance safety in aviation, including error detection in unmanned systems.

Hybrid human-AI systems address transition-related errors in semi-autonomous operations. In autonomous vehicles, where the system handles driving but requires human takeover in certain scenarios, handover errors—such as delayed responses due to out-of-the-loop syndrome—pose significant risks; countermeasures like adaptive takeover requests, which use auditory and haptic cues tailored to driver attention levels, have been studied to improve responses during transitions.
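The idea of adaptive takeover requests can be sketched as a simple policy: the lower the estimated driver attentiveness, the earlier and more intrusive the alert. The following Python sketch is hypothetical; the thresholds, lead times, and modality choices are invented for illustration and are not drawn from any production system or regulation.

```python
from dataclasses import dataclass

@dataclass
class TakeoverRequest:
    lead_time_s: float           # warning time before control handover
    modalities: tuple[str, ...]  # alert channels to use

def plan_takeover(attention_score: float) -> TakeoverRequest:
    """Choose warning lead time and alert channels from an attention estimate.

    attention_score is assumed to lie in [0, 1], e.g. derived from eye tracking;
    the thresholds below are illustrative only.
    """
    if attention_score >= 0.7:    # driver appears engaged
        return TakeoverRequest(4.0, ("visual",))
    if attention_score >= 0.4:    # partially distracted
        return TakeoverRequest(7.0, ("visual", "auditory"))
    # severely out of the loop: earliest, most intrusive warning
    return TakeoverRequest(10.0, ("visual", "auditory", "haptic"))

print(plan_takeover(0.25))  # expects the longest lead time and all modalities
```

The design intent captured here is that handover risk is managed by matching the warning strategy to the human's current state rather than issuing a fixed, one-size-fits-all alert.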

Debates and Controversies

Attribution of Error

The attribution of human error remains a contentious issue in safety-critical fields, often pitting individual accountability against broader systemic factors. The "person approach" emphasizes blaming individuals for errors stemming from forgetfulness, inattention, or carelessness, treating human failure as the primary cause of incidents. In contrast, the "system approach," as elaborated by Sidney Dekker, views errors as symptoms of underlying organizational deficiencies, such as flawed processes or inadequate training, rather than isolated personal failings. This distinction highlights the pitfalls of blame culture, where post-mortems frequently succumb to hindsight bias—overestimating the foreseeability of errors after the fact—leading to individual blame that overlooks contextual pressures and latent system weaknesses.

Legal implications exacerbate these debates, with increasing criminalization of professional errors underscoring tensions between punishment and learning. For instance, following the 2013 Asiana Airlines Flight 214 crash in San Francisco, which killed three people after the crew mismanaged the airplane's descent and airspeed, investigations highlighted pilot lapses amid broader questions of training and automation reliance, reflecting a punitive stance on perceived negligence. Such cases illustrate how hindsight-driven investigations can prioritize personal culpability over systemic contributors like automation complexity or training gaps. To counter this, "just culture" frameworks promote non-punitive reporting by distinguishing honest mistakes from reckless or at-risk behaviors, encouraging voluntary disclosure to enhance safety without fear of prosecution. Originating in high-risk industries such as aviation and healthcare, these models balance accountability with trust, as advocated in Dekker's analysis of error management.

Cultural variations further complicate error attribution, with Western individualistic societies tending to emphasize personal responsibility—aligning with the person approach—while collectivist Asian cultures prioritize situational and contextual factors. Cross-cultural research on causal attributions shows that East Asians are more likely to invoke external factors, such as organizational pressures or environmental constraints, in explaining errors, contrasting with Westerners' focus on internal traits. In industries like aviation or manufacturing, this manifests in Asian contexts where error blame may diffuse across teams to preserve harmony, potentially hindering individual accountability but fostering systemic improvements.

A recent high-profile example is the 2022 collapse of the cryptocurrency exchange FTX, where founder Sam Bankman-Fried's actions were attributed variably to personal fraud—resulting in his conviction on charges including wire fraud and conspiracy—or to systemic greed within the unregulated sector. Prosecutors portrayed the collapse as deliberate misappropriation of customer funds for personal gain, embodying the person approach, while defenders highlighted industry-wide lapses in oversight and risk management as enabling conditions. This case underscores ongoing debates in which hindsight amplifies calls for individual punishment amid broader questions of regulatory failure.

Resilience Engineering Perspectives

Resilience engineering represents a paradigm shift in understanding human error, viewing it not as an inherent defect or isolated failure but as a symptom of unhandled variability within complex socio-technical systems. Pioneered by Erik Hollnagel, this approach emphasizes building systems that succeed under varying conditions rather than solely preventing errors. Central to resilience engineering are four core abilities: anticipating potential disruptions, monitoring ongoing conditions, responding effectively to challenges, and learning from experiences to adapt future operations. These principles, outlined in Hollnagel's foundational work, reframe human error as a normal outcome of variability in human actions and system interactions, rather than a deviation requiring blame or elimination.

In contrast to traditional safety models, which focus on error avoidance through barriers and procedures—often labeling actions as "errors" to be minimized—resilience engineering prioritizes the capacity for success despite uncertainty and variability. Traditional views, rooted in linear cause-effect analyses, tend to bias investigations toward blame in hindsight, overlooking how systems routinely adapt to anomalies. Resilience engineering avoids this by examining how organizations maintain functionality amid pressures, such as air traffic controllers dynamically adjusting to unexpected traffic or equipment issues to prevent collisions, thereby absorbing potential errors through collective adaptation rather than rigid rules.

Applications of resilience engineering extend to diverse domains where human error intersects with systemic variability. In cybersecurity, resilient designs incorporate mechanisms to absorb and mitigate human-induced vulnerabilities, such as automated detection and isolation of compromised networks following phishing-induced breaches, allowing systems to continue operations without cascading failures. Similarly, in 2020s climate response systems, resilience principles guide the development of adaptive platforms that account for human errors in data interpretation or model assumptions, enabling real-time adjustments in disaster preparedness to handle uncertain predictions from volatile environmental data.

Despite its strengths, resilience engineering faces criticisms regarding its measurability and practical implementation. Quantifying the four abilities remains challenging, as they involve latent potentials rather than observable outcomes, leading to debates on developing reliable indicators without reverting to traditional metrics that undervalue adaptation. As the field evolves, it is integrating artificial intelligence to enhance proactive risk management, with AI-driven tools enabling predictive monitoring and automated responses to variability. This aligns with emerging regulations, such as the EU's Cyber Resilience Act (entered into force in 2024), which mandates resilient design in digital products to anticipate and recover from AI-related errors and cyber threats.

References

  1. [1]
    Why Do Errors Happen? - To Err is Human - NCBI - NIH
    The work of Reason provides a good understanding of errors. He defines an error as the failure of a planned sequence of mental or physical activities to achieve ...Why Do Accidents Happen? · Research on Human Factors · Summary
  2. [2]
    Human error: models and management - PMC - PubMed Central - NIH
    The human error problem can be viewed in two ways: the person approach and the system approach. Each has its model of error causation.The Swiss Cheese Model Of... · Some Paradoxes Of High... · High Reliability...
  3. [3]
    [PDF] Human Error and Commercial Aviation Accidents
    For example, 90 of 134 commercial aviation accidents (67%) were associated with aircrew and/or supervisory error. Page 11. 7. To aid in the process, descriptive ...Missing: prevalence | Show results with:prevalence
  4. [4]
    [PDF] If Human Error is the cause of most aviation accidents, should we ...
    During 2004 in the United States, pilot error was listed as the primary cause of 78.6% of fatal general aviation accidents, and as the primary cause of 75.5% ...Missing: prevalence | Show results with:prevalence
  5. [5]
    [PDF] HUMAN ERROR: A CONCEPT ANALYSIS
    For example, human error is used to describe the outcome or consequence of human action, the causal factor of an accident, deliberate violations, and the actual ...
  6. [6]
    Human Error Types | SKYbrary Aviation Safety
    “Slips and lapses are errors which result from some failure in the execution and/or storage stage of an action sequence.” Reason refers to these errors as ...
  7. [7]
    Human Error | Crystal Lean Solutions
    We categorise human errors as either slips, lapses, mistakes, or violations. Error. Slips – occur when an incorrect action is completed when trying to ...
  8. [8]
    Understanding the “Swiss Cheese Model” and Its Application to ...
    The Swiss Cheese Model is commonly used to guide root cause analyses (RCAs) and safety efforts across a variety of industries, including healthcare.Figure 1 · Table 1 · Implications
  9. [9]
    [PDF] Frederick Winslow Taylor, The Principles of Scientific Management
    They heartily cooperate with the men so as to insure all of the work being done in accordance with the principles of the science which has been developed.Missing: human | Show results with:human
  10. [10]
    [PDF] Reproduced by - DTIC
    FITTS, Ph. ... Underlying the study was the assumption that many s-called "pilot errors" are reY&lly due to the design characteristics of aircraft instruments.
  11. [11]
    [PDF] Skills, Rules, and Knowledge: Signals, Signs and Symbols and ...
    3, MAY/~ 1983. 257. Skills, Rules, and Knowledge; Signals, Signs, and Symbols, and Other Distinctions in. Human Performance Models. JENS RASMUSSEN, SENIOR ...
  12. [12]
    Human error: Cause, prediction, and reduction. - APA PsycNet
    Senders, J. W., & Moray, N. P. (1991). Human error: Cause, prediction, and reduction. Lawrence Erlbaum Associates, Inc. Abstract. This volume is drawn from the ...
  13. [13]
    The Contribution of Latent Human Failures to the Breakdown ... - jstor
    The test plan at Chernobyl required that the emergency core cooling system should be switched off, and the need to improvise in an unfamiliar and increasingly ...
  14. [14]
    U.S. opens Tesla probe after more crashes involving its so-called full ...
    Oct 9, 2025 · The new investigation follows a host of other probes into the FSD feature on Teslas, which has been blamed for several injuries and deaths.
  15. [15]
    A Comprehensive Review of Human Error in Risk-Informed Decision ...
    Jun 9, 2025 · This review synthesizes recent advances at the intersection of risk‐informed decision making, human reliability assessment (HRA), artificial intelligence (AI), ...Missing: 2020s | Show results with:2020s
  16. [16]
    COCOM - Erik Hollnagel
    A contextual control model is based on three main concepts: competence, control, and constructs. An essential part of control is planning what to do in the ...Missing: rates | Show results with:rates
  17. [17]
    ETTO principle - Erik Hollnagel
    The trade-off may favour thoroughness over efficiency if safety and quality are the dominant concerns, and efficiency over thoroughness if throughput and output ...
  18. [18]
    [PDF] NUREG/CR-1278, "Handbook of Human Reliability Analysis with ...
    NOTICE. This report was prepared as an account of work sponsored by an agency of the United States Government. Neither the United.Missing: formula | Show results with:formula
  19. [19]
    [PDF] The human performance envelope: Past research, present activities ...
    – Parallel development of human performance envelope model for pilots and controllers. – Collaboration of Europe and US research. • Controlled simulations ...
  20. [20]
    Human reliability data, human error and accident models ...
    Human reliability data, human error and accident models—illustration through the Three Mile Island accident analysis. Author links open overlay panel. Pierre ...
  21. [21]
    Dual Process Theory - an overview | ScienceDirect Topics
    Dual-process theories refer to cognitive frameworks that partition human cognition into two distinct types of processes: Type 1, characterized by automatic and ...
  22. [22]
    Dual Process Theory: Analyzing Our Thought Process for Decision ...
    Oct 12, 2017 · System 2 is a far slower process that engages conscious reflection and can evaluate System 1 conclusions for error. System 2 is used for more ...
  23. [23]
    [PDF] WORKING MEMORY
    An association between memory errors and errors due to acoustic masking of speech. Nature (London). 1962, 193, I~J4-1315. Conrad, R., &: Hull, A. J ...
  24. [24]
    Attention and intended action in multitasking: An understanding of ...
    Reason [12] stressed the vulnerability of individuals to prospective memory errors as among the most common form of human fallibility in everyday life.
  25. [25]
    Judgment under Uncertainty: Heuristics and Biases - Science
    A better understanding of these heuristics and of the biases to which they lead could improve judgments and decisions in situations of uncertainty.
  26. [26]
    [PDF] Lost In Translation - Sma.nasa.gov.
    September 23, 1999: The Mars Climate Orbiter approached Mars 170km too close to the surface; atmospheric forces are believed to have destroyed the spacecraft. ...
  27. [27]
    Situational awareness - what it means for clinicians, its ... - PubMed
    Loss of situation awareness can occur in many different settings, particularly during stressful and unexpected situations. Tunnel vision is a classic example ...
  28. [28]
    Situational Awareness for Clinicians: Safety & Recognition
    Jul 22, 2016 · Tunnel vision is a classic example where clinicians focus on one aspect of care, often to the detriment of overall patient management.
  29. [29]
    (PDF) Categorization of Action Slips - ResearchGate
    To understand and address the underlying causes of errors beyond behavioral outcomes, Norman (1981) proposed the classification of slips, lapses, and mistakes.
  30. [30]
    The Differences Between Human Error, At-Risk Behavior ... - ECRI
    Rating 4.6 (5) Jun 18, 2020 · Human error is an inevitable, unpredictable, and unintentional failure in the way we perceive, think, or behave. It is not a behavioral choice— ...
  31. [31]
    Team errors: definition and taxonomy - ScienceDirect.com
    Team errors are human errors that are made by individuals or groups of people in a team context. Two axes make up the taxonomy of team errors.
  32. [32]
    The team will effectively communicate and exchange critical ... - NCBI
    A study of communication failures in the operating room found that they occur in approximately 30% of team exchanges (23). Fully one third of these breakdowns ...
  33. [33]
    A Human Error Approach to Aviation Accident Analysis
    Dec 22, 2017 · Citation. Get Citation. Wiegmann, D.A., & Shappell, S.A. (2003). A Human Error Approach to Aviation Accident Analysis: The Human Factors ...
  34. [34]
    (PDF) The Human Factors Analysis and Classification System-HFACS
    PDF | On Jan 1, 2000, Scott A. Shappell and others published The Human Factors Analysis and Classification System-HFACS | Find, read and cite all the ...
  35. [35]
    Generic Error-Modelling System (GEMS) | SKYbrary Aviation Safety
    EURCONTROL describes GEMS as "an error classification scheme developed by [Dr. James] Reason that focuses on cognitive factors in human error as opposed to ...
  36. [36]
    Development and application of a human error identification tool for ...
    This paper outlines a human error identification (HEI) technique called TRACEr--technique for the retrospective and predictive analysis of cognitive errors ...Missing: 2006 healthcare
  37. [37]
    Technique for the Retrospective and Predictive Analysis of Cognitive E
    Since the 1980s, human error analysis (HEA) techniques have developed to address an established need in safetycritical industries to identify the human.Missing: healthcare | Show results with:healthcare
  38. [38]
    Applications of integrated human error identification techniques on ...
    M.T. Baysari et al. A reliability and usability study of TRACEr-RAV: the technique for the retrospective analysis of cognitive errors – for rail, Australian ...
  39. [39]
    [PDF] A Survey of Fatigue Factors in Regional Airline Operations
    A NASA study of short-haul air transport pilots flying for major airlines used both physiological and subjective measures to assess fatigue factors, and the ...
  40. [40]
    AVIATION MEDICINE, ILLNESS AND LIMITATIONS FOR FLYING
    Use of certain medications may be precluded in pilots because they may alter memory, concentration, alertness, and coordination, or otherwise compromise flight ...
  41. [41]
    Operator Response - the SRK Model - exida
    Jun 9, 2021 · Errors in rule-based and knowledge-based behavior, which are called mistakes, are common to inexperienced operators. Skill-based response ...
  42. [42]
    Memory Failure Types and Human Error | By Ginette Collazo, Ph.D.
    Over time, even well-learned memories tend to fade if they are not reviewed or used often enough. Absentmindedness: Absentmindedness is a type of memory failure ...
  43. [43]
    8 Optical Illusions Pilots Should Understand And Know How To Avoid
    A downsloping runway can create the illusion that the aircraft is lower than it actually is, leading to a higher approach. An upsloping runway can create the ...Missing: perceptual mistake
  44. [44]
    The Effect of Noise Exposure on Cognitive Performance and Brain ...
    Altered cognitive function leads to human error and subsequently increases accidents. ... The impact of low frequency noise on human mental performance.
  45. [45]
    Analyzing the effect of distractions of working from home on mental ...
    Oct 15, 2025 · This study aims to analyze the influence of workspace and personal characteristics, mediated by workspace distractions at home, on employees' stress and ...Missing: 2020s | Show results with:2020s
  46. [46]
    Workplace Communication Statistics (2025) - Pumble
    Feb 11, 2025 · 37% of the surveyed leaders claimed that having to extend timelines was the worst consequence of miscommunication, and; 32% of business leaders ...
  47. [47]
    v1ch8 - NASA
    Report of the PRESIDENTIAL COMMISSION on the Space Shuttle Challenger Accident. Chapter VIII: Pressures on the System. [164] With the 1982 completion of the ...
  48. [48]
    [PDF] Summary of the FAA's Review of the Boeing 737 MAX
    This report provides a detailed technical account of the lessons learned since the two fatal accidents involving the Boeing 737 MAX aircraft, ...
  49. [49]
    Falling through the Cracks: Information Breakdowns in Critical Care ...
    Handoffs have been recognized as a major healthcare challenge primarily due to the breakdowns in communication that occur during transitions in care.
  50. [50]
    [PDF] BCI Supply Chain Resilience Report 2023 (Sponsored by SGS)
    One respondent elaborated on this threat, reporting that the main cause of cyber incidents has been human error. The impact consisted of large amounts of ...
  51. [51]
    Patient safety - World Health Organization (WHO)
    Sep 11, 2023 · No one should be harmed in health care; however, there is compelling evidence of a huge burden of avoidable patient harm globally across the ...
  52. [52]
    COVID-19 Contact Tracing: Challenges and Future Directions - PMC
    This process is extremely time consuming, inefficient, highly error prone and not scalable.
  53. [53]
    Costs of Workplace Accidents and Illnesses Worldwide - Ludus Global
    Jan 17, 2024 · The International Labour Organization (ILO) estimates the costs of workplace accidents and illnesses at almost 3 trillion dollars annually worldwide.
  54. [54]
    Medical Error Reduction and Prevention - StatPearls - NCBI Bookshelf
    Feb 12, 2024 · 22. Singh H, Schiff GD, Graber ML, Onakpoya I, Thompson MJ. The global burden of diagnostic errors in primary care. BMJ Qual Saf. 2017 Jun ...
  55. [55]
    Volkswagen to Spend Up to $14.7 Billion to Settle Allegations of ...
    Jun 28, 2016 · Volkswagen to Spend Up to $14.7 Billion to Settle Allegations of Cheating Emissions Tests and Deceiving Customers on 2.0 Liter Diesel Vehicles.
  56. [56]
    Worksite Health Promotion for Low-wage Workers - PubMed Central
    Low-wage workers experience socioeconomic and racial disparities in health, including higher rates of morbidity and mortality, greater exposure to physical and ...
  57. [57]
    Mitigating Human Error and Insider Threats in Critical Infrastructure
    Oct 14, 2024 · For example, the 2021 Colonial Pipeline attack occurred after a single compromised password resulted in significant fuel supply disruptions ...
  58. [58]
    The Attack on Colonial Pipeline: What We've Learned & What ... - CISA
    May 7, 2023 · On May 7, 2021, a ransomware attack on Colonial Pipeline captured headlines around the world with pictures of snaking lines of cars at gas stations across the ...
  59. [59]
    [PDF] Resource Management on the Flight Deck
    The fostering of increased awareness and use of available resources by aircrews under high workload conditions is becoming a matter of greater ...
  60. [60]
    [PDF] Crew Resource Management: A Literature Review - Semantic Scholar
    Abstract: The roots of Crew Resource Management training in the United States are usually traced back to a workshop sponsored by the National Aeronautics and ...
  61. [61]
    [PDF] The Evolution of Crew Resource Management Training in ...
    Abstract. Changes in the nature of CRM training in commercial aviation are described, including its shift from Cockpit to Crew Resource Management.
  62. [62]
    [PDF] Error Management Training - Study Two - ATSB
    Each of the exercises contained within the simulator-based training syllabus can be interpreted as "threats" according to the definition within the Threat ...
  63. [63]
    the challenges facing effective error management in aviation training.
    In a study examining the use of error in simulation-based driver training, Ivancic and Hesketh (2000) demonstrate that training sessions in which participants ...
  64. [64]
    Safe surgery - World Health Organization (WHO)
    The Surgical Safety Checklist has been shown to reduce complications and mortality by over 30 percent. The Checklist is simple and can be completed in under ...
  65. [65]
    A Surgical Safety Checklist to Reduce Morbidity and Mortality in a ...
    In this study, a checklist-based program was associated with a significant decline in the rate of complications and death from surgery in a diverse group of ...
  66. [66]
    Human errors and their prevention in healthcare - PMC - NIH
    Up to 80–85% incidence of errors and near-misses is reported in anonymous surveys.
  67. [67]
  68. [68]
  69. [69]
    Meditation in the Workplace: Does Mindfulness Reduce Bias and ...
    Apr 10, 2022 · Mindfulness has been linked to reductions in implicit age bias, sunk-cost decision-making bias and increases in organisational citizenship behaviours (OCB).
  70. [70]
    Cognitive biases and mindfulness | Humanities and Social Sciences ...
    Feb 3, 2021 · Mindfulness increases rationality because it improves decision-making by reducing behavioral biases. Moreover, the Langerian method by which ...
  71. [71]
    Fostering A Culture Of Organizational Learning - SafetyStratus
    Mar 14, 2025 · Feedback Loops: Create feedback loops that ensure that lessons learned from incidents are applied to enhance systems, processes, and ...
  72. [72]
    Best Methods to Reduce Human Error in Manufacturing - Dozuki
    Using a system of double-checks and fail-safes will help limit the risk involved with actions that can result in errors. It goes back to the adage: measure ...
  73. [73]
    Principle 1: Preoccupation with Failure - Managing the Unexpected
    Sep 2, 2015 · Preoccupation with failure, the first high reliability organization (HRO) principle, captures the need for continuous attention to anomalies.
  74. [74]
    Just Culture | SKYbrary Aviation Safety
    Under “Just Culture” conditions, individuals are not blamed for 'honest errors', but are held accountable for willful violations and gross negligence. People ...
  75. [75]
    A review of eye tracking for understanding and improving diagnostic ...
    Feb 22, 2019 · The present review provides an overview of eye-tracking technology, the perceptual and cognitive processes involved in medical interpretation.
  76. [76]
    Using machine learning algorithms to predict the risk of small ...
    Aug 7, 2025 · This research uses machine learning algorithms to predict the risk of sUAS violation incidents using the FAA's UAS sighting data with a sample ...
  77. [77]
    Public perception of autonomous vehicle capability determines ...
    Studies of accidents with fully autonomous systems show that people apply inconsistent standards when making a judgment of blame on these systems compared to on ...
  78. [78]
    [PDF] Perspectives on Human Error: Hindsight Biases and Local Rationality
    “Human error” in one sense is a label invoked by stakeholders after-the-fact in a psychological and social process of causal attribution. Psychologists and ...
  79. [79]
    Causal Attributions for Industrial Accidents: A Culture-Comparative ...
    Mar 15, 2006 · Theory and research on causal attribution have primarily focused on Western population samples. Given the important cultural differences ...
  80. [80]
    Court says Sam Bankman-Fried ran FTX as a 'personal fiefdom' - BBC
    Nov 22, 2022 · Troubled crypto firm FTX collapsed after being "run as a personal fiefdom of Sam Bankman-Fried", a US bankruptcy court has heard.
  81. [81]
    [PDF] A White Paper on Resilience Engineering for ATM - Eurocontrol
    The added value of a Resilience Engineering approach is that it provides a way to address the issues of emergent accidents and the often disproportionate ...
  82. [82]
    SP 800-160 Vol. 2 Rev. 1, Developing Cyber-Resilient Systems
    Dec 9, 2021 · Cyber resiliency engineering intends to architect, design, develop, implement, maintain, and sustain the trustworthiness of systems.
  83. [83]
    Center for Engineering Resilience and Climate Adaptation
    Building resilient, equitable, and adaptive solutions to strengthen our infrastructure and communities in the face of current and projected climate conditions.
  84. [84]
  85. [85]
    Cyber Resiliency Engineering Framework | MITRE
    Sep 1, 2011 · This framework provides a way to structure discussions and analyses of cyber resiliency goals, objectives, practices, and costs.
  86. [86]