Human error
Human error is defined as the failure of a planned sequence of mental or physical activities to achieve an intended outcome, encompassing unintentional deviations such as slips (actions not executed as planned), lapses (memory failures), and mistakes (flawed plans); it is commonly distinguished from violations, which are intentional deviations from rules.[1] The phenomenon arises from interactions between human cognition and complex systems, leading to unintended consequences in domains such as healthcare, aviation, transportation, and engineering.[2] Human error contributes to a substantial proportion of accidents and incidents worldwide; for example, it is associated with 60-80% of commercial aviation accidents and approximately 75% of general aviation crashes as of 2004.[3][4]

The study of human error employs two primary conceptual approaches: the person approach, which attributes errors to individual failings such as carelessness or lack of training and advocates remedies such as discipline or further education, and the system approach, which views errors as inevitable outcomes of flawed system design and emphasizes prevention through robust safeguards and error-proofing.[2] A seminal framework in the system approach is James Reason's Swiss cheese model, which depicts safety defenses as multiple layers of Swiss cheese with varying holes (weaknesses); accidents occur when these holes align, allowing active failures (e.g., operator slips) to combine with latent conditions (e.g., poor equipment design or inadequate staffing).[2] The model highlights how errors often stem from upstream organizational factors rather than isolated individual actions.[1]

Causes of human error are categorized into active failures—immediate unsafe acts by frontline operators with short-lived effects—and latent conditions, dormant system flaws such as under-resourced environments or ambiguous procedures that predispose individuals to err.[2] In healthcare, for instance, human error is estimated to contribute to 60-80% of adverse events, frequently due to high workload, communication breakdowns, or complex processes.[1] Prevention strategies focus on systemic interventions, including standardized protocols, automation to reduce cognitive load, enhanced training in error recognition, and a "just culture" that encourages error reporting without punitive blame so that underlying risks can be identified and mitigated.[2] High-reliability organizations, such as nuclear power plants, exemplify this approach by prioritizing vigilance and adaptive processes to harness human variability for safety.[2]

Core Concepts
Definition and Scope
Human error refers to an unintended deviation from an individual's intended actions, plans, rules, or expectations, arising from failures in execution, memory, or planning rather than from external disruptions.[5] This core definition, widely adopted in psychological and safety research, encompasses slips—failures in action execution, such as performing the wrong movement despite correct intentions; lapses—failures in memory or attention, such as forgetting a step in a sequence; and mistakes—flaws in the planning or problem-solving process, where the chosen goal or method is inappropriate.[6] These distinctions highlight that human error stems from cognitive or behavioral shortcomings inherent to human performance, not deliberate intent.[2] The scope of human error primarily covers individual-level actions in both routine daily tasks and high-stakes complex environments, such as aviation or healthcare, where variability in human behavior can lead to discrepancies between what was planned and what occurs.[2] It is distinct from violations, which involve intentional rule-breaking for personal or situational reasons, and from accidents, which are the harmful outcomes or chain reactions triggered by errors rather than the errors themselves.[6] For instance, a slip might manifest in everyday life as adding salt to coffee instead of sugar during a momentary attentional failure, while in a professional setting a lapse could involve a data entry operator omitting a decimal point in financial records, causing only minor discrepancies if caught early.[7]

A key conceptual framework for understanding how human errors propagate within systems is James Reason's Swiss cheese model, which depicts organizational defenses as multiple slices of Swiss cheese stacked in parallel.[2] Each slice represents a layer of protection—such as procedures, training, or equipment—whose irregularly shaped and sized holes symbolize inherent weaknesses or potential failure points that vary in position and scale. A hole in any single slice normally does not allow an error to pass, because the other layers block it; but when holes momentarily line up across all layers, an error's trajectory can penetrate unimpeded, resulting in system failure.[8] This analogy underscores the interplay between immediate active errors (such as a slip by an operator) and latent conditions (underlying systemic flaws), emphasizing that errors alone rarely cause harm without aligned vulnerabilities.[2]
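The model's layered-defence logic can be illustrated with a small probabilistic sketch. Assuming, purely for illustration, that each layer independently fails to block an error with some small probability, the chance that an error penetrates every layer is the product of those probabilities; the per-layer values below are hypothetical and are not drawn from Reason's work.

```python
from math import prod

# Hypothetical per-layer probabilities that a layer's "hole" lines up with the
# error's trajectory (i.e., that the layer fails to block it). Values are
# illustrative only: procedures, training, supervision, equipment interlocks.
layer_breach_probs = [0.10, 0.05, 0.02, 0.01]

# Under the simplifying assumption that layers fail independently,
# an accident requires every defensive layer to be breached at once.
p_accident = prod(layer_breach_probs)
print(f"P(all {len(layer_breach_probs)} defences breached) = {p_accident:.6f}")

# Strengthening any one layer (shrinking its hole) reduces the joint probability.
layer_breach_probs[0] = 0.02  # e.g., tightened procedures
print(f"After strengthening one layer: {prod(layer_breach_probs):.6f}")
```

In practice the layers are rarely independent: latent conditions such as under-resourcing can enlarge the holes in several layers at once, which is why the model directs attention to upstream organizational factors rather than to any single slice.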
Historical Development

The study of human error emerged in the early 20th century amid the rise of industrial management practices that prioritized efficiency over worker capabilities. Frederick Winslow Taylor's 1911 publication, The Principles of Scientific Management, exemplified this approach by advocating the scientific optimization of tasks through time-motion studies while largely disregarding human variability and psychological factors, an omission that later highlighted the need to address error-prone conditions in mechanized work environments.[9] Following World War I, aviation accidents underscored the limitations of blaming individual pilots, prompting systematic analyses of design-induced errors. In 1947, Paul M. Fitts and Richard E. Jones analyzed 460 pilot-error incidents in operating aircraft controls, revealing that many "errors" stemmed from poor instrument layouts and control similarities, thus pioneering human factors engineering to mitigate such systemic contributors.[10]

By the mid-20th century, research shifted toward cognitive frameworks for understanding error across performance levels. In the 1980s, Jens Rasmussen developed the skills-rules-knowledge (SRK) framework, categorizing human behavior into skill-based (automatic), rule-based (procedural), and knowledge-based (analytical) modes to explain how errors arise from mismatched mental processing in complex systems.[11] Quantitative approaches complemented these cognitive frameworks, as seen in John W. Senders and Neville Moray's 1991 book Human Error: Cause, Prediction, and Reduction, which formalized error probabilities and prediction models based on empirical data from laboratory and field studies, emphasizing prevention through workload management.[12]

The late 20th century marked a pivotal turn toward systems-oriented perspectives, influenced by major accidents. James Reason's 1990 book Human Error introduced the generic error-modeling system (GEMS), integrating slips (execution failures), lapses (memory failures), and mistakes (planning flaws) to model error causation beyond individual blame. The 1986 Chernobyl nuclear disaster accelerated this shift, with investigations revealing that operator actions were symptoms of deeper organizational issues; Reason's subsequent analyses emphasized "latent failures"—dormant conditions like inadequate training and flawed safety protocols that align to enable active errors, as detailed in his 1990 examination of the incident's breakdown of complex defenses.[13]

Into the 2020s, human error research has increasingly incorporated artificial intelligence (AI) in autonomous systems, focusing on hybrid human-AI interactions where overreliance or miscalibration leads to novel error types. Incidents involving Tesla's Autopilot and Full Self-Driving features, such as fatal crashes in 2019–2025 attributed to drivers disengaging from monitoring amid AI limitations in edge cases like poor visibility, have prompted analyses of "shared responsibility" errors, in which human complacency amplifies AI shortcomings.[14] Scholarly work, including reviews of risk-informed decision-making, highlights how AI tools can detect human errors but also introduce new ones through opaque algorithms, urging integrated frameworks for safer human-AI collaboration through 2025.[15]

Theoretical Frameworks
Models of Human Performance
Models of human performance provide theoretical frameworks for understanding how variations in operator control and cognitive demands in dynamic systems contribute to errors, emphasizing the interaction between human actions and contextual factors rather than inherent deficiencies. These models shift the focus from error as a deviation from norms to variability in performance shaped by environmental pressures, time constraints, and resource availability, enabling predictions of error likelihood in complex operations such as nuclear power or aviation.[16]

A key example is Erik Hollnagel's Contextual Control Model (COCOM), introduced in 1993, which describes human performance in terms of four control modes determined by the competence of the agent, the form of control exercised, and the constructs used to match actions to goals. In the strategic mode, operators plan comprehensively with full awareness of goals and resources, exercising the highest degree of control. The tactical mode involves rule-based actions with moderate planning. In the opportunistic mode, performance relies on immediate cues with limited foresight, often leading to inefficiencies. The scrambled mode occurs under extreme stress or overload, resulting in chaotic actions. These modes illustrate how performance shaping factors like time pressure or inadequate feedback can degrade control, leading to errors in sociotechnical systems.[16]

Errors can also be viewed as natural variability in performance, where normal fluctuations in attention or decision-making cause deviations from intended outcomes, particularly under the Efficiency-Thoroughness Trade-Off (ETTO) principle. This principle posits that individuals and organizations must balance efficiency (speed and resource optimization) against thoroughness (accuracy and completeness), often prioritizing one at the expense of the other based on contextual demands such as deadlines or safety requirements. In high-pressure scenarios, for instance, favoring efficiency can lead to overlooked checks and errors, while excessive thoroughness may slow operations and induce fatigue-related mistakes, framing errors not as failures but as adaptive responses to systemic trade-offs.[17]

Quantitative models like the Technique for Human Error Rate Prediction (THERP) offer probabilistic assessments of performance reliability by decomposing tasks into subtasks and estimating error probabilities. THERP uses a basic formula for the probability of at least one error across n independent tasks, each with reliability r (where the per-task error probability is e = 1 - r):

$P(\text{error}) = 1 - r^n$
For example, in an assembly line task with 10 subtasks, each having a reliability of 0.99 (e = 0.01), the overall error probability is $1 - 0.99^{10} \approx 0.095$, or about 9.5%, showing how small per-step error rates accumulate across sequential operations. Performance influencing factors such as stress or training adjust these base rates through multipliers, allowing tailored predictions for industrial settings.[18]

In applications, these models reveal how fatigue narrows the human performance envelope—the range of safe operational states—in aviation, where prolonged duty periods reduce tactical control and increase opportunistic errors during critical phases such as takeoff. Flight-simulation studies show that fatigue significantly degrades situation awareness and raises error likelihood, findings that have informed crew scheduling regulations. Similarly, analyses of the 1979 Three Mile Island nuclear accident applied early performance models to demonstrate how diagnostic errors, exacerbated by alarm overload and systemic design flaws, led to prolonged core damage, underscoring the need for resilient system designs.[19][20]
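A minimal computational sketch of the THERP arithmetic above, reproducing the assembly-line example; the performance-shaping-factor multiplier is a hypothetical illustration of how base rates are adjusted, not a value taken from the THERP handbook.

```python
def therp_error_probability(n_subtasks: int, reliability: float,
                            psf_multiplier: float = 1.0) -> float:
    """Probability of at least one error across n independent subtasks.

    Each subtask has base error probability e = 1 - reliability; a
    performance-shaping-factor (PSF) multiplier scales that base rate,
    e.g., for stress or fatigue (the multiplier values used here are
    illustrative assumptions, not handbook values).
    """
    e = min(1.0, (1.0 - reliability) * psf_multiplier)  # adjusted per-subtask error probability
    return 1.0 - (1.0 - e) ** n_subtasks

# Worked example from the text: 10 subtasks, each with reliability 0.99.
print(round(therp_error_probability(10, 0.99), 4))                      # 0.0956
# Hypothetical PSF of 2.0 (e.g., high stress) doubles each subtask's error rate.
print(round(therp_error_probability(10, 0.99, psf_multiplier=2.0), 4))  # 0.1829
```

The sketch assumes independent subtasks, mirroring the $1 - r^n$ formula above; full THERP analyses also model dependence between successive subtasks, which is omitted here.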