
Hazard analysis

Hazard analysis is a systematic and proactive process for identifying potential hazards—sources of harm such as injuries, illnesses, or environmental damage—in workplaces, systems, processes, or products, followed by evaluating their likelihood and severity to prioritize controls and mitigate risks. This approach is fundamental to risk management, enabling organizations to address root causes before incidents occur and ensuring compliance with regulatory standards.

In practice, hazard analysis involves several key steps, including collecting data from sources like equipment manuals, safety data sheets, and incident reports; conducting inspections and worker consultations; and assessing categories of hazards such as chemical, physical, biological, and ergonomic. Methods range from qualitative techniques, like brainstorming potential failure modes, to more structured tools such as checklists and job safety analysis (JSA), depending on the complexity of the operation. The goal is to characterize hazards by their potential consequences and implement interim or permanent controls, prioritizing those with high severity and probability.

Hazard analysis finds broad applications across industries, including occupational safety, where the U.S. Occupational Safety and Health Administration (OSHA) mandates it for effective safety programs; chemical and manufacturing sectors, through process hazard analysis (PHA) to manage risks from hazardous materials; and food production, via Hazard Analysis and Critical Control Points (HACCP), a framework developed by Pillsbury, NASA, and the U.S. Army in the 1960s and now enforced by the FDA to prevent contamination. In nuclear and energy facilities, the U.S. Department of Energy (DOE) applies it to examine design weaknesses and potential accidents affecting workers, the public, and the environment. Overall, hazard analysis underpins safety management by integrating with broader strategies like emergency planning and continuous monitoring to foster safer operations.

Fundamentals

Definition and Scope

Hazard analysis is a systematic process for identifying potential hazards, evaluating the risks they pose, and developing controls to prevent or mitigate harm in workplaces, engineered systems, and the surrounding environment. This approach involves examining material, system, process, and operational characteristics that could lead to undesirable consequences, followed by an assessment of those consequences' likelihood, severity, and potential impacts. In essence, it provides a structured framework to anticipate threats before they materialize, ensuring safer design, operation, and maintenance across various contexts.

The core objectives of hazard analysis are to pinpoint sources of harm—including physical agents like noise or radiation, chemical substances such as solvents or toxins, biological factors like pathogens, ergonomic stressors from repetitive motions, and psychosocial elements such as workplace stress—and to assess their potential likelihood and consequences. By prioritizing these findings, the process guides the implementation of targeted safety measures to reduce exposure and protect personnel, assets, and the environment. In scope, hazard analysis extends to disciplines including safety engineering for system design, occupational health and safety for worker protection, and environmental safeguards against broader ecological impacts.

A fundamental distinction lies between hazards, defined as inherent sources of potential harm, and risks, which represent the combined probability of occurrence and the severity of resulting harm. This methodology adopts a proactive stance to foresee and avert incidents, differing from reactive strategies that respond post-event, and integrates with safety and health management systems as mandated by standards like those from OSHA.

Historical Development

The origins of hazard analysis trace back to 19th-century industrial safety movements in the United Kingdom, where the Health and Morals of Apprentices Act of 1802 marked the first legislative effort to address workplace hazards by regulating ventilation, lighting, and working hours for child apprentices in cotton mills, thereby emphasizing basic hazard identification to prevent health risks. Subsequent Factory Acts, building on this foundation through the 19th century and beyond, expanded protections to include machinery safeguards and inspections, laying early groundwork for systematic hazard evaluation in industrial settings.

In the mid-20th century, hazard analysis advanced significantly in the aerospace and nuclear industries following World War II, driven by the need to manage complex system failures in high-stakes environments. The U.S. Air Force's development of fault tree analysis in the early 1960s for the Minuteman missile program provided a deductive method to identify potential failure causes, which Boeing and NASA further refined and applied during the Apollo era to enhance reliability and safety assessments. Concurrently, the nuclear sector adopted probabilistic risk assessment techniques, with early applications in reactor safety studies that quantified hazard likelihoods to inform design and operational controls. A key influence during this period was H.W. Heinrich's 1931 publication Industrial Accident Prevention, which introduced the accident pyramid concept—positing that for every 300 near-misses and 29 minor injuries, one major accident occurs—shifting focus toward proactive analysis of minor incidents to prioritize hazards.

The 1970s and 1980s saw formalization of hazard analysis in the chemical and process industries, spurred by major disasters that exposed gaps in systematic evaluation. The 1974 Flixborough explosion in the UK, caused by a faulty pipe modification releasing cyclohexane vapor and killing 28 people, prompted the Court of Inquiry to recommend comprehensive process hazard analysis (PHA) for identifying and mitigating risks in modifications and operations. This was amplified by the 1984 Bhopal disaster in India, where a methyl isocyanate leak at a pesticide plant resulted in thousands of deaths and injuries, accelerating global adoption of rigorous hazard assessment standards to prevent catastrophic releases. In response, the U.S. Occupational Safety and Health Administration (OSHA) issued its Process Safety Management (PSM) standard in 1992, mandating PHA techniques like hazard and operability studies for highly hazardous chemicals to evaluate process risks systematically.

From the 1990s onward, hazard analysis expanded across sectors, integrating into food safety, software, and occupational health frameworks. The U.S. Food and Drug Administration (FDA) mandated Hazard Analysis and Critical Control Points (HACCP) for seafood processing in 1997, requiring identification and control of biological, chemical, and physical hazards throughout the supply chain, while the Codex Alimentarius Commission endorsed HACCP guidelines internationally that same year to standardize global practices. In software and systems engineering, the International Electrotechnical Commission (IEC) published IEC 61508 in 1998, establishing requirements for hazard analysis in electrical, electronic, and programmable systems to ensure safety integrity levels in critical applications. More recently, the International Organization for Standardization (ISO) released ISO 45001 in 2018, providing a framework for occupational health and safety management that incorporates hazard identification and risk assessment, with ongoing integration of artificial intelligence for predictive hazard detection in industrial contexts as of the 2020s.

Techniques and Methods

Qualitative Techniques

Qualitative techniques in hazard analysis involve non-numerical, descriptive methods that rely on expert judgment, structured discussions, and systematic questioning to identify potential hazards and operability issues, making them particularly suitable for early-stage design phases or for systems where quantitative data may be unavailable. These approaches emphasize brainstorming and team collaboration to explore deviations from intended operations, focusing on causes, consequences, and existing safeguards without assigning probabilities or numerical metrics. They are foundational in process safety standards, such as those outlined by OSHA, which mandate their use for initial evaluations in high-risk industries.

Key qualitative methods include the Hazard and Operability Study (HAZOP), What-If Analysis, Checklist Analysis, and Preliminary Hazard Analysis (PHA). HAZOP, developed in the 1960s by Imperial Chemical Industries (ICI) and first publicly documented in 1974, uses predefined guide words—such as "no/not," "more," "less," "part of," and "reverse"—applied to process parameters like flow, temperature, or pressure to systematically identify deviations from design intent. What-If Analysis employs open-ended questioning in a brainstorming format, prompting scenarios like "What if power fails?" or "What if a valve sticks?" to uncover potential failure modes and their impacts. Checklist Analysis draws from standardized lists of common hazards, such as those related to equipment, materials, or human factors, to ensure comprehensive coverage of known risks during reviews. PHA serves as an initial screening tool, broadly assessing system hazards by categorizing energy sources, failure modes, and environmental interactions to prioritize further analysis.

These techniques typically unfold in team-based workshops comprising multidisciplinary experts, including engineers, operators, and safety specialists, facilitated by an independent leader to maintain objectivity. The process involves dividing the system into manageable nodes or sections, applying the method's framework to probe for issues, and documenting findings in worksheets that capture deviations, underlying causes, possible consequences, and recommended safeguards or actions. For instance, in HAZOP, each node is examined sequentially, with the team recording entries only for credible deviations to avoid speculation.

Qualitative techniques offer flexibility and cost-effectiveness for initial hazard identification, leveraging collective expertise to reveal issues that might otherwise emerge late in development, while requiring minimal resources compared to data-intensive alternatives. However, their reliance on subjective judgment can introduce biases, and outcomes depend heavily on the team's composition and experience, potentially overlooking novel or interactive hazards if discussions are dominated by a few members. A representative example is the application of HAZOP in chemical plants, where the guide word "more" applied to the pressure parameter might reveal a deviation leading to overpressurization from a blocked outlet, prompting the addition of relief valves as safeguards. This structured deviation analysis helps prioritize design modifications early, enhancing overall process safety.
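
To make the guide-word mechanics concrete, the minimal Python sketch below enumerates candidate deviations for a single study node and records a worksheet entry for one credible deviation; the parameter names, causes, and safeguards are illustrative assumptions rather than output of any specific HAZOP.

```python
from itertools import product

# HAZOP-style prompt generation: pair guide words with process parameters
# to enumerate candidate deviations for one study node.
GUIDE_WORDS = ["no/not", "more", "less", "part of", "reverse"]
PARAMETERS = ["flow", "temperature", "pressure"]  # hypothetical node parameters

def candidate_deviations(guide_words, parameters):
    """Yield (guide word + parameter) phrases as deviation prompts for the team."""
    for word, param in product(guide_words, parameters):
        yield f"{word} {param}"

# The team records worksheet entries only for deviations judged credible.
worksheet = [
    {
        "deviation": "more pressure",
        "cause": "blocked outlet downstream of the vessel",        # illustrative
        "consequence": "overpressurization and possible rupture",
        "safeguards": "pressure relief valve; high-pressure alarm",
    },
]

if __name__ == "__main__":
    for prompt in candidate_deviations(GUIDE_WORDS, PARAMETERS):
        print("Consider deviation:", prompt)
    print("Credible entries recorded:", len(worksheet))
```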

Quantitative Techniques

Quantitative techniques in hazard analysis utilize numerical models and probabilistic data to estimate the likelihood and consequences of hazards, enabling evidence-based decision-making in industries such as nuclear power, aerospace, and chemical processing where precision is critical. These methods contrast with qualitative approaches by incorporating mathematical computations, often relying on historical data or statistical distributions to quantify risks. They are particularly valuable for complex systems, allowing analysts to predict failure probabilities and prioritize mitigation strategies.

Key methods include fault tree analysis (FTA), event tree analysis (ETA), and Failure Modes and Effects Analysis (FMEA). FTA employs a top-down, deductive approach, constructing a logic diagram from an undesired top event (system failure) downward to basic contributing events using Boolean gates to represent logical relationships. Originating in 1962 with H.A. Watson at Bell Laboratories under a U.S. Air Force contract for evaluating the Minuteman missile system, FTA calculates gate output probabilities for independent inputs as P_{\text{AND}} = \prod_{i=1}^{n} P_i for an AND gate and P_{\text{OR}} = 1 - \prod_{i=1}^{n} (1 - P_i) for an OR gate. ETA, an inductive forward-branching method, starts from an initiating event and maps subsequent success or failure paths of protective functions, enumerating possible accident sequences and their probabilities. Developed in the 1975 WASH-1400 Reactor Safety Study by the U.S. Nuclear Regulatory Commission, ETA quantifies overall risk by multiplying branch probabilities along each path. FMEA identifies potential failure modes at component or subsystem levels, evaluating each by severity (S, typically on a 1-10 scale for impact), occurrence (O, probability of failure), and detection (D, likelihood of identifying the failure before it causes harm), yielding a Risk Priority Number (RPN = S × O × D) to rank risks. Formalized in MIL-STD-1629A (1980) by the U.S. Department of Defense, FMEA originated earlier in 1949 military procedures for equipment reliability.

The process begins with data collection, sourcing failure rates from historical records, reliability databases (e.g., OREDA for offshore systems), or accelerated testing to populate model inputs. Specialized software facilitates modeling, such as ReliaSoft BlockSim for constructing and simulating fault tree and reliability block diagrams, and ReliaSoft XFMEA for tabulating FMEA worksheets and computing RPNs. Sensitivity analysis follows, varying input parameters (e.g., failure probabilities) to evaluate their influence on overall risk estimates and to identify critical uncertainties.

These techniques offer objectivity in risk quantification, facilitating precise prioritization, as demonstrated in nuclear safety applications where FTA/ETA combinations reduced estimated core melt probabilities by orders of magnitude. However, they are data-intensive, requiring accurate estimates that may be unavailable for novel systems, and often assume event independence, potentially underestimating correlated failures.
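
The gate formulas and the RPN calculation can be expressed directly in code. The short sketch below, assuming independent basic events and the illustrative probabilities shown, computes a two-gate fault tree top-event probability and an example FMEA Risk Priority Number; it is a teaching sketch, not a substitute for dedicated FTA or FMEA software.

```python
from math import prod

def p_and(probabilities):
    """AND gate: all independent inputs must fail, P = product of P_i."""
    return prod(probabilities)

def p_or(probabilities):
    """OR gate: any independent input failing suffices, P = 1 - product(1 - P_i)."""
    return 1 - prod(1 - p for p in probabilities)

def rpn(severity, occurrence, detection):
    """FMEA Risk Priority Number on 1-10 scales: RPN = S * O * D."""
    return severity * occurrence * detection

# Small fault tree: the top event occurs if both redundant pumps fail (AND)
# or if power is lost (OR with the pump branch). Probabilities are illustrative.
pump_branch = p_and([1e-3, 1e-3])
top_event = p_or([pump_branch, 1e-5])

print(f"Top event probability: {top_event:.2e}")
print("Example RPN:", rpn(severity=8, occurrence=3, detection=4))  # 8*3*4 = 96
```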

Step-by-Step Process

Hazard analysis follows a structured, iterative process to systematically identify, evaluate, and mitigate potential hazards within a system or operation. This structure ensures comprehensive coverage of risks while allowing flexibility for adaptation across industries. The analysis is typically conducted by a multidisciplinary team to incorporate diverse expertise, such as engineering, operations, and safety professionals, enhancing the accuracy and completeness of the assessment.

The standard steps begin with defining the system boundaries and objectives, which involves establishing the scope of the analysis, including the processes, equipment, and personnel involved, as well as the specific goals for risk reduction. This step sets clear parameters to focus efforts and avoid overextension. Next, hazards are identified through methods like brainstorming sessions, site walkthroughs, and review of historical incident data, aiming to uncover potential sources of harm such as chemical releases, mechanical failures, or human errors. Following identification, the analysis examines the causes and consequences of each hazard, determining how deviations might occur and the potential impacts on people, the environment, or property. This phase often employs worksheets or software tools, such as PHA-Pro, to organize data and facilitate detailed examination. Risks are then evaluated by combining assessments of likelihood and severity, typically using a risk matrix where risk level is calculated as the product of these factors, to prioritize hazards requiring immediate attention.

Recommendations for controls are developed next, guided by the hierarchy of controls, which prioritizes elimination of the hazard, followed by substitution with safer alternatives, engineering controls like barriers, administrative measures such as procedures and training, and personal protective equipment as a last resort. This approach maximizes the effectiveness and feasibility of risk reduction. Finally, the process concludes with documentation of findings, recommendations, and action plans, followed by review and verification, including post-implementation checks to ensure controls are effective and the analysis is updated for any system changes. The entire process is iterative, with ongoing monitoring to address new hazards or modifications.

Qualitative and quantitative methods can be integrated throughout, starting with qualitative techniques for initial screening and progressing to quantitative analysis for high-priority risks, allowing for a balanced approach that combines expert judgment with numerical rigor. Multidisciplinary teams play a crucial role in this process, providing varied perspectives that reduce oversights and biases. Tools like standardized worksheets or dedicated software support documentation and tracking, streamlining the workflow.

Best practices include involving stakeholders early to gather comprehensive input and foster buy-in, as well as scheduling regular updates to the analysis in response to changes in operations, regulations, or technology, aligning with principles of continual improvement in management standards. This keeps the process relevant and proactive. Common pitfalls to avoid include incomplete scoping, which may exclude critical elements like off-site impacts, and assessment biases arising from over-reliance on individual expertise without team review, potentially leading to underestimation of risks. Addressing these through rigorous scoping and inclusive participation enhances the reliability of the hazard analysis.

As of 2025, emerging developments are integrating artificial intelligence (AI) and digital tools into traditional techniques to enhance hazard analysis. AI-assisted methods, such as machine learning for predictive deviation analysis in HAZOP or automated data extraction for FTA inputs, enable faster identification of complex interactions and real-time risk monitoring, particularly in the process industries. These advancements, including AI-enhanced software for PHA, improve accuracy and efficiency while addressing limitations like subjectivity in qualitative approaches, though they require validation against regulatory standards like those from OSHA.
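
As a small illustration of the control-selection step, the sketch below picks the highest-ranked feasible option from the hierarchy of controls for a single hazard; the hazard description and the available options are hypothetical.

```python
# Hierarchy of controls, ranked from most to least preferred.
HIERARCHY = ["elimination", "substitution", "engineering", "administrative", "ppe"]

def best_control(feasible_options):
    """Return the feasible control type ranked highest in the hierarchy."""
    ranked = sorted(feasible_options, key=HIERARCHY.index)
    return ranked[0] if ranked else None

hazard = {
    "description": "solvent vapor exposure during parts cleaning",   # hypothetical
    "feasible_controls": {
        "substitution": "switch to an aqueous cleaner",
        "engineering": "install local exhaust ventilation",
        "ppe": "issue respirators",
    },
}

choice = best_control(hazard["feasible_controls"])
print(f"Preferred control: {choice} -> {hazard['feasible_controls'][choice]}")
```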

Risk Evaluation

Severity Assessment

Severity assessment in hazard analysis evaluates the magnitude of potential harm or adverse consequences resulting from a hazard, encompassing outcomes such as human injury or fatality, environmental damage, property loss, or financial impact. This measure focuses solely on the extent of the impact, independent of the probability of occurrence, to inform risk prioritization when combined with likelihood evaluations. Severity is typically categorized using standardized scales that range from negligible to catastrophic, often employing descriptive labels or numerical assignments such as 1 to 10, where higher values indicate greater harm. Common categories, as defined in system safety standards, include:
| Severity Category | Description | Numerical Example (Scale 1-10) | Example Consequence |
| --- | --- | --- | --- |
| Catastrophic | Results in death, multiple severe injuries, irreversible significant environmental damage, or monetary loss exceeding $10 million. | S = 10 | Multiple fatalities from a structural collapse caused by a design flaw. |
| Critical | Causes permanent partial disability, hospitalization of three or more personnel, reversible significant environmental damage, or monetary loss between $1 million and $10 million. | S = 8 | Permanent partial disability or hospitalization of three or more personnel from an equipment malfunction leading to fire. |
| Marginal | Leads to injury or illness resulting in at least one lost work day, reversible moderate environmental damage, or monetary loss between $100,000 and $1 million. | S = 4 | Minor injury requiring medical treatment from a chemical spill. |
| Negligible | Involves no lost work days, minimal environmental impact, or monetary loss under $100,000. | S = 1 | Slight equipment damage with no injuries from routine wear. |
Alternative descriptive scales, such as low, medium, and high, may be used in non-military contexts for simplicity. Assessment methods for severity include consequence modeling, which simulates potential outcomes to quantify impacts; expert judgment, where qualified professionals estimate severity based on experience; and regulatory benchmarks that classify outcomes by type and consequence. For instance, consequence modeling employs dispersion modeling to predict the spread and effects of chemical releases, estimating zones of toxic exposure or flammability to determine magnitude. Expert judgment is particularly valuable in data-scarce scenarios, drawing on structured elicitation techniques to mitigate biases in evaluating complex hazards. Regulatory benchmarks, such as those from OSHA, categorize severity by criteria like fatalities, days away from work due to injury or illness, or need for hospitalization, providing standardized thresholds for occupational hazards.

Key factors influencing severity assessment encompass direct effects, such as immediate injuries from an event, and indirect effects, like psychological stress or long-term health issues from evacuation following a disaster. Assessments must also account for vulnerable populations, including children or the elderly, who may experience amplified harm from environmental exposures due to physiological sensitivities. Documentation of severity typically involves tables defining categories with specific examples tied to identified hazards, ensuring consistency in reports; for example, an equipment failure causing a fire might be rated as critical if it endangers multiple workers, supported by modeling results or incident history.
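
Where monetary loss is the dominant consequence, the category thresholds in the table above can be applied mechanically, as in this minimal sketch; the function and the loss figures are illustrative and simply mirror the table rather than any regulatory formula.

```python
# Map an estimated monetary loss (USD) to the severity categories and example
# scores from the table above.
def severity_from_loss(loss_usd):
    if loss_usd >= 10_000_000:
        return "Catastrophic", 10
    if loss_usd >= 1_000_000:
        return "Critical", 8
    if loss_usd >= 100_000:
        return "Marginal", 4
    return "Negligible", 1

for loss in (25_000_000, 2_500_000, 250_000, 25_000):
    category, score = severity_from_loss(loss)
    print(f"${loss:>12,} -> {category} (S={score})")
```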

Likelihood Assessment

Likelihood assessment in hazard analysis involves evaluating the probability or frequency with which an identified hazard may develop into an incident, typically expressed in terms such as events per year, per operation, or over the system's lifecycle. This step focuses on estimating how often the hazard could occur under given conditions, distinct from its potential consequences, to inform risk prioritization.

Likelihood is often categorized qualitatively to facilitate consistent evaluation across teams, with common scales using five levels: Frequent, Probable, Occasional, Remote, and Improbable. For instance, Frequent hazards occur repeatedly (e.g., daily in operations, assigned a score of L=10); Probable ones happen periodically (e.g., weekly or monthly, L=7); Occasional events arise yearly (L=4); Remote occurrences span 10 or more years (L=1); and Improbable events are so rare as to be negligible over the system's life (L=0.1). These categories can be made semi-quantitative by mapping them to annual probability ranges such as Frequent (≥10^{-1}), Probable (10^{-2} to 10^{-1}), Occasional (10^{-3} to 10^{-2}), Remote (10^{-6} to 10^{-3}), and Improbable (<10^{-6}).

Several methods are employed to assess likelihood, drawing on empirical and analytical approaches. Historical data from incident databases, such as those maintained by the Center for Chemical Process Safety (CCPS), provide incident rates to estimate frequencies for similar hazards. Modeling techniques like Monte Carlo simulation generate probability distributions by simulating thousands of scenarios with variable inputs, offering robust estimates for complex systems. Human factors analysis incorporates error probabilities from operator interactions, using tools like human reliability analysis to quantify how human error influences hazard initiation. Key factors influencing likelihood include exposure time, which increases probability in proportion to the duration of hazard contact, and the effectiveness of safeguards, such as barriers or alarms that reduce occurrence rates if properly maintained. Bayesian updating refines initial estimates by incorporating new evidence, such as recent near-misses, to dynamically adjust probabilities in light of evolving conditions.

Likelihood assessments are documented in tables or matrices to ensure transparency and team alignment, often including rationale and supporting data. The following example illustrates categorized likelihoods for common hazards:
| Hazard Example | Category | Description/Frequency | Likelihood Score (L) | Rationale/Source |
| --- | --- | --- | --- | --- |
| Electrical faults in aging systems | Frequent | Multiple occurrences per year due to insulation degradation | 10 | Susceptible to aging failures per nuclear plant data |
| Chemical leaks from routine maintenance | Probable | Weekly to monthly in high-exposure operations | 7 | Based on CCPS historical incident rates |
| Structural collapse under extreme environmental loads | Occasional | Once per year in vulnerable sites | 4 | Modeled via simulation of environmental factors |
| Rare equipment failures | Remote | Once every 10+ years | 1 | Low probability adjusted via Bayesian methods |
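
The semi-quantitative ranges and the Bayesian refinement described above can be sketched as follows; the conjugate Beta prior and the observed failure counts are illustrative assumptions, not values from any cited database.

```python
# Categorize an annual event frequency using the ranges given above, and
# update an estimated failure probability with new operating experience via
# a simple Beta-Binomial (conjugate) Bayesian update.
def likelihood_category(freq_per_year):
    if freq_per_year >= 1e-1:
        return "Frequent"
    if freq_per_year >= 1e-2:
        return "Probable"
    if freq_per_year >= 1e-3:
        return "Occasional"
    if freq_per_year >= 1e-6:
        return "Remote"
    return "Improbable"

def bayesian_update(alpha, beta, failures, demands):
    """Posterior mean of a Beta(alpha, beta) prior after observing failures out of demands."""
    alpha_post = alpha + failures
    beta_post = beta + (demands - failures)
    return alpha_post / (alpha_post + beta_post)

print(likelihood_category(0.05))                                      # Probable
print(f"{bayesian_update(0.5, 49.5, failures=2, demands=100):.3f}")   # about 0.017
```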

Risk Prioritization

Risk prioritization in hazard analysis involves integrating severity and likelihood assessments to rank risks systematically, enabling decision-makers to allocate resources effectively toward the most critical threats. A common approach is to calculate a risk index by multiplying the severity score (S) by the likelihood score (L), yielding a numerical value that indicates the overall risk level: Risk Index = S × L. This multiplicative method weights severity and likelihood equally, providing a straightforward way to compare risks across scenarios. Alternatively, risks can be prioritized using qualitative or semi-quantitative matrices that plot severity against likelihood without numerical scoring, facilitating visual comparison.

Key tools for risk prioritization include the risk matrix, often structured as a 5x5 grid where rows represent likelihood levels (e.g., rare to almost certain) and columns represent severity levels (e.g., negligible to catastrophic). Cells in the matrix are color-coded to denote risk categories: green for low risk (acceptable without further action), yellow for medium risk (requiring planned mitigation), and red for high risk (demanding immediate controls). Complementing this is the ALARP (as low as reasonably practicable) principle, which classifies risks as intolerable (must be eliminated), broadly acceptable (negligible further effort needed), or tolerable only if reduced to ALARP levels through cost-effective measures. These tools ensure prioritization aligns with organizational risk tolerance and regulatory standards.

The prioritization process assigns action levels based on the derived categories: high risks prompt immediate corrective actions, medium risks necessitate scheduled mitigations, and low risks involve ongoing monitoring. Tolerability criteria provide benchmarks for acceptability, such as in the nuclear industry where individual risk of fatality must remain below 10^{-6} per year for the general public. Following prioritization, risks are reassessed after implementing controls to verify reductions, with dynamic software tools enabling updates to matrices as new information emerges. For illustration, consider a chemical processing plant where a potential leak is assessed with severity S=10 (catastrophic environmental damage) and likelihood L=7 (probable within a year), resulting in a high-risk classification requiring urgent containment upgrades.
| Severity (S) | Likelihood (L) | Risk Index (S × L) | Priority Level | Example Action |
| --- | --- | --- | --- | --- |
| 10 (Catastrophic) | 7 (Probable) | 70 | High | Implement redundant barriers immediately |
| 8 (Critical) | 4 (Occasional) | 32 | Medium | Schedule review within 6 months |
| 4 (Marginal) | 1 (Remote) | 4 | Low | Monitor annually during inspections |
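
A minimal implementation of the index and banding shown in the table might look like the sketch below; the cut-offs of 50 and 20 are illustrative assumptions chosen so the table's three examples fall into the high, medium, and low bands, not fixed regulatory values.

```python
# Risk index and priority banding consistent with the example table above.
# The cut-offs (>= 50 high, >= 20 medium) are illustrative assumptions.
def risk_index(severity, likelihood):
    return severity * likelihood

def priority(index, high=50, medium=20):
    if index >= high:
        return "High: implement controls immediately"
    if index >= medium:
        return "Medium: schedule mitigation"
    return "Low: monitor periodically"

for s, l in ((10, 7), (8, 4), (4, 1)):
    idx = risk_index(s, l)
    print(f"S={s}, L={l} -> index {idx}: {priority(idx)}")
```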

Applications

Process and Industrial Safety

Hazard analysis plays a critical role in process and industrial safety, particularly in high-hazard environments such as oil refineries, petrochemical plants, and chemical manufacturing facilities, where the handling of flammable, reactive, or toxic substances poses significant risks to workers, communities, and the environment. In the United States, the Occupational Safety and Health Administration (OSHA) mandates process hazard analysis (PHA) under the Process Safety Management (PSM) standard (29 CFR 1910.119) for facilities managing highly hazardous chemicals, requiring employers to systematically identify, evaluate, and control hazards to prevent catastrophic releases. This regulatory framework emphasizes proactive risk management to mitigate potential accidents in these sectors.

Adaptations of hazard analysis techniques in these industries include mandatory PHA revalidations at least every five years to account for process changes, new safety information, and incident history, ensuring ongoing relevance and effectiveness. Hazard and Operability (HAZOP) studies are commonly applied to piping and instrumentation diagrams (P&IDs) to systematically examine deviations in process parameters, while fault tree analysis (FTA), a quantitative method, models potential failure pathways in complex systems like reactors and pipelines. Layer of Protection Analysis (LOPA) is frequently used to evaluate independent protection layers (IPLs), such as alarms, relief valves, and interlocks, determining whether sufficient safeguards exist to reduce risk to tolerable levels without over-reliance on any single measure. These methods integrate with design and operational reviews and draw on the quantitative techniques described earlier for probabilistic quantification.

Key hazards addressed in process and industrial safety include explosions from overpressure or ignition sources and toxic releases from leaks or ruptures, which can lead to fires, environmental contamination, and loss of life. A prominent example is the Piper Alpha disaster in 1988, where a gas leak during maintenance ignited, causing explosions that killed 167 workers on the North Sea oil platform; investigations revealed failures in permit-to-work systems and emergency response, prompting global enhancements in offshore safety standards and practices. Implementation of hazard analysis often involves integration with permit-to-work (PTW) systems, where identified risks inform work authorizations, isolation procedures, and simultaneous operations controls to prevent conflicting activities in hazardous areas. The benefits are evident in reduced incident rates; for instance, diligent PHA processes have been linked to up to a 40% decrease in process incidents, while the EU's Seveso III Directive, which mandates similar hazard assessments, has contributed to overall improvements in preventing major accidents at industrial sites.
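
The LOPA arithmetic described above reduces to multiplying an initiating event frequency by the probability of failure on demand (PFD) of each independent protection layer, as in the hedged sketch below; the frequencies, PFDs, and tolerable-risk criterion are illustrative assumptions, not values from any standard or plant.

```python
from math import prod

# LOPA-style screening: mitigated frequency = initiating event frequency
# multiplied by the PFD of each independent protection layer (IPL).
def mitigated_frequency(initiating_freq_per_year, ipl_pfds):
    return initiating_freq_per_year * prod(ipl_pfds)

initiating = 0.1                 # e.g., loss of cooling, events per year (assumed)
ipl_pfds = [0.1, 0.01, 0.01]     # alarm + operator response, interlock, relief valve (assumed)
tolerable = 1e-5                 # per year, assumed corporate criterion

freq = mitigated_frequency(initiating, ipl_pfds)
verdict = "tolerable" if freq <= tolerable else "additional IPL or redesign needed"
print(f"Mitigated frequency: {freq:.1e}/yr -> {verdict}")
```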

Software and Systems Engineering

In software and systems engineering, hazard analysis is essential for ensuring the safety of critical systems such as avionics, automotive controls, and medical devices, where failures can lead to catastrophic consequences. These systems often involve complex interactions between hardware, software, and human operators, necessitating rigorous methods to identify and mitigate risks early in the design phase. Key standards guide this process: DO-178C provides objectives for software assurance in airborne systems, emphasizing hazard analysis to assign assurance levels based on failure severity, while ISO 26262 outlines a framework for functional safety in automotive electrical/electronic systems, including hazard analysis and risk assessment (HARA) to derive safety goals.

Adaptations of hazard analysis techniques address software-specific challenges, such as code faults and emergent behaviors. Software Failure Modes and Effects Analysis (Software FMEA) extends traditional FMEA to evaluate potential software deficiencies, like incorrect algorithms or data handling errors, by systematically identifying failure modes and their impacts on system safety. System-Theoretic Process Analysis (STPA), developed by Nancy Leveson, focuses on unsafe control actions and hierarchical interactions to uncover emergent hazards in complex socio-technical systems, offering a more holistic alternative to event-based methods. For cybersecurity hazards, threat modeling identifies potential vulnerabilities and attack vectors in software architectures, using structured approaches like STRIDE (Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, Elevation of Privilege) to prioritize mitigations. Qualitative techniques, such as What-If analysis, may also be applied to explore high-level system scenarios during initial threat identification.

Prominent hazards in these domains include software bugs that propagate to physical failures, as seen in the Therac-25 incidents from 1985 to 1987, where race conditions and inadequate error handling in the machine's software led to massive radiation overdoses, causing at least three deaths and highlighting the dangers of unverified software assumptions. Similarly, the Boeing 737 MAX crashes in 2018 and 2019 were linked to flaws in the Maneuvering Characteristics Augmentation System (MCAS), where hazard analysis overlooked single-point failures in sensor data processing, resulting in uncommanded nose-down inputs and 346 fatalities, underscoring oversight gaps in safety assessments.

Implementation integrates hazard analysis into structured lifecycles like the V-model, which pairs development phases (requirements to coding) with corresponding verification activities (unit testing to system validation), ensuring hazards are traced and addressed iteratively. Verification through rigorous testing, reviews, and traceability matrices confirms compliance with standards, reducing residual risks. This approach enhances reliability in autonomous systems, such as self-driving vehicles, by proactively eliminating hazards and improving overall system resilience against failures.
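
As a simple illustration of STRIDE-style enumeration, the sketch below pairs architectural elements of a generic client-server system with the threat classes commonly considered for them; the element names and the element-to-threat mapping are illustrative assumptions, not a prescribed checklist from DO-178C, ISO 26262, or any specific threat-modeling tool.

```python
# STRIDE threat classes and an illustrative element-to-threat mapping.
STRIDE = {
    "S": "Spoofing", "T": "Tampering", "R": "Repudiation",
    "I": "Information Disclosure", "D": "Denial of Service",
    "E": "Elevation of Privilege",
}

ELEMENT_THREATS = {
    "external client": "SR",       # spoofing, repudiation
    "API process": "STRIDE",       # consider all six classes
    "data store": "TID",           # tampering, information disclosure, denial of service
    "network data flow": "TID",
}

def enumerate_threats(element_threats):
    """Yield (element, threat class) pairs to seed a threat-analysis worksheet."""
    for element, letters in element_threats.items():
        for letter in letters:
            yield element, STRIDE[letter]

for element, threat in enumerate_threats(ELEMENT_THREATS):
    print(f"{element}: consider {threat}")
```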

Food Safety and HACCP

Hazard analysis plays a pivotal role in food safety by systematically identifying and controlling potential risks throughout the food supply chain, from sourcing to final consumption, thereby minimizing the incidence of foodborne illnesses. In the United States, regulatory mandates have driven the adoption of Hazard Analysis and Critical Control Points (HACCP) systems, with the U.S. Department of Agriculture (USDA) requiring HACCP for meat and poultry processing under the 1996 Pathogen Reduction Rule to address microbial hazards like E. coli. Similarly, the Food and Drug Administration (FDA) mandated HACCP for seafood in 1995 and for juice processors in 2001, following proposals earlier in the 1990s, to prevent outbreaks from pathogens and chemical contaminants. These requirements emphasize proactive hazard identification over end-product testing, significantly reducing risks in high-volume food production.

The HACCP framework, developed in the 1960s by Pillsbury and NASA for space food safety, is structured around seven core principles to ensure systematic control of hazards. Principle 1 involves conducting a thorough hazard analysis to identify potential biological (e.g., pathogens such as Salmonella or E. coli), chemical (e.g., allergens or adulterants like unauthorized pesticides), and physical (e.g., foreign objects) hazards that are reasonably likely to occur. Principle 2 requires determining critical control points (CCPs), such as cooking or pasteurization steps, where controls can prevent, eliminate, or reduce hazards to acceptable levels. Principle 3 establishes critical limits for each CCP, like a minimum cooking temperature of 71°C (160°F) for ground beef to kill pathogens. Principle 4 sets up monitoring procedures, often continuous or frequent checks, to ensure CCPs remain under control. Principle 5 outlines corrective actions, such as product disposal or process adjustments, for any deviations. Principle 6 mandates validation of the HACCP plan through scientific evidence and ongoing verification via audits and testing. Principle 7 emphasizes record-keeping to document all aspects, facilitating traceability and regulatory review.

A landmark example illustrating the need for HACCP was the 1993 E. coli O157:H7 outbreak, which sickened over 700 people and caused four child deaths due to undercooked hamburgers contaminated at the supply level, prompting accelerated U.S. regulatory adoption of HACCP to avert similar incidents. Implementation of HACCP begins with assembling a multidisciplinary team, including food safety experts, production staff, and quality assurance personnel, who receive specialized training to apply the principles effectively. A key tool is the process flow diagram, which maps every step—from receiving ingredients to packaging and distribution—to pinpoint potential hazard introduction points and CCPs, ensuring comprehensive coverage of the operation. Globally, HACCP has been standardized through the Codex Alimentarius Commission, with the 2020 revision of the General Principles of Food Hygiene (CXC 1-1969) updating guidance on hazard analysis, CCP determination, and verification to incorporate modern tools like decision trees and enhanced emphasis on allergen management, promoting uniform adoption across international supply chains.
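
Principle 4's monitoring step can be illustrated with a short sketch that checks cooking-step temperature readings against the 71°C critical limit cited above and flags a corrective action for any deviation; the batch identifiers and readings are invented for illustration.

```python
# CCP monitoring sketch: compare cooking temperatures against the critical
# limit (71 degrees C / 160 degrees F for ground beef) and record corrective
# actions for deviations.
CRITICAL_LIMIT_C = 71.0

def monitor_ccp(readings_c, limit=CRITICAL_LIMIT_C):
    records = []
    for batch, temp in readings_c:
        compliant = temp >= limit
        records.append({
            "batch": batch,
            "temperature_c": temp,
            "compliant": compliant,
            "corrective_action": None if compliant else "hold lot; re-cook or discard",
        })
    return records

for record in monitor_ccp([("B-101", 74.5), ("B-102", 68.9)]):
    print(record)
```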

Examples and Case Studies

Simple Hazard Analysis

A simple hazard analysis in an office environment can be applied to ergonomic setups, where everyday tasks like typing pose potential risks to employee health. This approach follows a basic step-by-step process of identifying hazards, evaluating risks, and recommending controls, making it accessible for small-scale assessments without specialized tools.

One common hazard is repetitive strain injury from prolonged keyboard use, often exacerbated by awkward wrist positions or fixed desk heights that force poor posture. Severity is typically assessed as marginal, involving injuries like muscle strains or tendinitis that require medical attention but do not result in permanent disability. Likelihood is frequent—rated as likely—for workers maintaining suboptimal postures over several hours daily, based on routine exposure to ergonomic risk factors such as repetition and awkward motions. To mitigate this, recommended controls include installing adjustable desks and keyboard trays to allow neutral wrist positions, along with mandatory ergonomic training to educate workers on proper setup and adjustments. These measures align with OSHA guidelines for preventing musculoskeletal disorders in general industry settings.

A worksheet for this analysis uses a straightforward method to systematically review the task. The following table illustrates a basic evaluation for an office typing task:
| Hazard | Cause | Consequence | Risk Level | Action |
| --- | --- | --- | --- | --- |
| Repetitive strain injury | Prolonged keyboard use with a fixed, non-adjustable setup | Marginal injury (e.g., wrist strain requiring treatment) | Medium (Severity: 2; Likelihood: 3) | Provide an adjustable keyboard tray and conduct annual ergonomic training |
Implementing these low-cost fixes, such as ergonomic adjustments and training programs, reduces injury incidence and absenteeism by preventing work-related strains that commonly lead to missed workdays. OSHA emphasizes that such interventions in office environments enhance overall productivity and safety under general industry standards.

Risk Management in Practice

Hazard analysis serves as a foundational step in risk management, transitioning identified hazards into actionable mitigation strategies that reduce potential impacts across organizational operations. Results from techniques like Preliminary Hazard Analysis (PHA) are directly incorporated into risk response plans, where hazard causes and severities guide the development of preventive controls and contingency measures to limit escalation during incidents. This integration ensures that hazard insights inform decisions at all levels, from design phases to ongoing operations, fostering a proactive approach to safety as outlined in established frameworks.

A practical illustration occurs on construction sites, where fall hazards—often rated as critical in severity due to the potential for fatalities and occasional in likelihood based on site-specific factors like height and access—are systematically addressed through layered controls. After hazard identification, mitigation involves installing physical barriers such as guardrails and providing worker training on fall protection equipment, followed by regular audits to verify compliance and effectiveness. These steps not only address immediate threats but also embed ongoing monitoring to adapt to changing site conditions, demonstrating how hazard analysis drives layered defenses in high-risk environments.

Key tools in this practice include bow-tie diagrams, which map threats leading to a central top event (such as loss of containment of a hazardous substance) and its consequences, while highlighting the preventive and recovery barriers that interrupt those pathways. Complementing this, residual risk evaluation quantifies the remaining exposure after controls are applied, ensuring that any unacceptable levels prompt further refinements to achieve acceptable thresholds. Such tools facilitate communication and tracking of mitigation actions, linking back to risk prioritization for efficient resource allocation in management plans.

Adopting these practices yields significant benefits, including enhanced compliance with international standards like ISO 31000, which emphasizes iterative risk management processes to protect organizational value and safety. Moreover, effective implementation contributes to substantial cost savings by averting workplace injuries; for instance, the total economic burden of U.S. work injuries in 2023 reached $176.5 billion, underscoring the financial imperative of proactive, hazard-informed risk management.
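
Residual risk evaluation can be sketched by discounting the pre-control likelihood by the assumed effectiveness of each preventive barrier and recomputing the risk index; the severity score, initial likelihood, and barrier effectiveness values below are illustrative assumptions for the construction fall-hazard example.

```python
# Residual risk sketch: reduce the initial likelihood by the combined effect
# of preventive barriers, then recompute the risk index (severity x likelihood).
def residual_likelihood(initial_likelihood, barrier_effectiveness):
    remaining = initial_likelihood
    for effectiveness in barrier_effectiveness:   # fraction of occurrences prevented
        remaining *= (1.0 - effectiveness)
    return remaining

severity = 9                      # critical fall-from-height consequence (assumed)
initial_l = 6                     # pre-control likelihood score (assumed)
barriers = [0.6, 0.5]             # guardrails; training and supervision (assumed)

resid_l = residual_likelihood(initial_l, barriers)
print(f"Initial risk index:  {severity * initial_l}")
print(f"Residual risk index: {severity * resid_l:.1f}")
```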

Real-World Case Study

The Deepwater Horizon disaster, occurring on April 20, 2010, in the Gulf of Mexico, exemplifies the catastrophic consequences of inadequate hazard analysis in offshore drilling operations. The incident involved the failure of the blowout preventer (BOP) on the Macondo well, operated by BP in partnership with Transocean and Halliburton, where unstable cement used to seal the well allowed hydrocarbons to migrate upward, leading to a pressure buildup and an uncontrolled blowout. Investigations revealed that the foamed cement slurry was contaminated and unstable, a risk that BP and Halliburton were aware of from prior tests but failed to adequately address through rigorous hazard identification. This oversight stemmed from insufficient application of hazard and operability (HAZOP) studies, which did not thoroughly evaluate potential deviations in cementing processes or well integrity under high-pressure conditions.

The explosion and subsequent fire resulted in 11 worker deaths, 17 injuries, and the largest marine oil spill in history, with an estimated 4.9 million barrels of crude oil released into the Gulf over 87 days. The spill caused extensive environmental damage, including the deaths of marine wildlife and the degradation of coastal ecosystems, alongside economic losses exceeding $60 billion for cleanup, restoration, and compensation.

In response, the U.S. Bureau of Safety and Environmental Enforcement (BSEE), established in 2011 following the incident, mandated enhanced safety measures, including the incorporation of the American Petroleum Institute's Recommended Practice 75 (API RP 75) into Safety and Environmental Management Systems (SEMS) regulations to require comprehensive hazard assessments and audits. API RP 75 was updated post-incident to emphasize contractor oversight, human factors, and independent verification of critical barriers like cementing and BOP systems. Key lessons from the disaster underscore the necessity of independent third-party reviews to challenge internal assumptions and identify overlooked hazards, as recommended by the National Commission on the BP Deepwater Horizon Oil Spill and Offshore Drilling. Additionally, the use of quantitative modeling, such as event tree analysis (ETA) for blowout scenarios, has been advocated to probabilistically assess failure pathways, including hydrocarbon influx and barrier failures, enabling better risk prioritization in deepwater operations. Globally, the incident prompted the European Union's Offshore Safety Directive (2013/30/EU), which mandates risk-based safety cases, emergency preparedness, and independent audits for all offshore installations to prevent major accidents. In the 2020s, lessons from Deepwater Horizon inform hazard analysis for net-zero transitions, particularly in offshore carbon capture and storage (CCS) projects, where similar well integrity risks—such as CO₂ leakage from unstable seals—necessitate advanced HAZOP and quantitative risk assessment to mitigate environmental hazards akin to oil spills.
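
To show how event tree analysis quantifies blowout scenarios of this kind, the sketch below multiplies an initiating kick frequency by the failure probabilities of successive well barriers; every number is an illustrative assumption and none is drawn from the Macondo investigations.

```python
from math import prod

# Event tree sketch: starting from an initiating well-control event (a kick),
# each barrier either holds or fails; multiplying branch probabilities along
# a path gives the frequency of that accident sequence.
initiating_freq = 1e-2            # kicks per well-year (assumed)
barrier_pfd = {                   # probability that each barrier fails on demand (assumed)
    "cement and casing integrity": 0.05,
    "kick detection and shut-in": 0.10,
    "blowout preventer": 0.02,
}

# Worst-case sequence: every barrier fails -> uncontrolled blowout.
blowout_freq = initiating_freq * prod(barrier_pfd.values())
print(f"Estimated blowout frequency: {blowout_freq:.1e} per well-year")

# Early termination: the primary barrier holds and the kick is contained.
contained_freq = initiating_freq * (1 - barrier_pfd["cement and casing integrity"])
print(f"Kick contained by primary barrier: {contained_freq:.1e} per well-year")
```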
