Toxicology testing

Toxicology testing encompasses the laboratory-based detection, identification, and quantification of toxic substances in biological samples such as blood, urine, tissues, or other fluids to assess exposure, diagnose poisoning, or determine cause of death. This discipline integrates analytical chemistry with biological principles to evaluate adverse effects from chemicals, drugs, or environmental agents, supporting applications in clinical medicine, forensics, occupational health, and regulatory safety assessments. Key methods include initial screening for rapid detection followed by confirmatory techniques like gas chromatography-mass spectrometry (GC-MS) or liquid chromatography-mass spectrometry (LC-MS) for precise identification and measurement, which are essential due to the limitations of screening tests in specificity and potential for false positives. Historically rooted in ancient recognition of poisons, toxicology testing formalized in the 19th century with advances in chemical analysis, evolving through 20th-century standardization of protocols like LD50 assays and regulatory frameworks to address workplace and pharmaceutical hazards. Notable developments include the shift toward high-throughput in vitro assays and computational models in recent decades, reducing reliance on animal testing while enhancing predictive accuracy for human toxicity, as pursued by agencies like the EPA. These innovations address empirical challenges in traditional methods, such as interspecies extrapolation errors and high costs, though confirmatory testing remains critical for causal determination in cases of overdose or impairment. Despite its utility, toxicology testing faces controversies over accuracy, including calibration errors, chain-of-custody issues, and incomplete substance panels that may miss novel or low-dose toxins, necessitating full transparency and peer-reviewed validation to mitigate forensic misinterpretations. Limitations in specimen types, such as urine's detection window variability or hair testing's potential ethnic biases, underscore the need for context-specific interpretation, while ethical concerns drive ongoing refinement toward non-animal alternatives without compromising evidential reliability. Advances in investigative toxicology, integrating omics technologies and machine learning, promise improved mechanistic insights but require rigorous empirical substantiation to overcome historical over-reliance on correlative data.

Fundamentals

Definition and Scope

Toxicology testing involves the systematic evaluation of chemical substances, drugs, and environmental agents to determine their potential to induce adverse effects in biological systems, including humans, animals, and ecosystems. This process quantifies toxicity through dose-response relationships, identifies no-observed-adverse-effect levels (NOAEL), and elucidates mechanisms of harm such as cellular damage or organ dysfunction. The primary aim is to assess safety for intended uses, preventing harm from exposure via ingestion, inhalation, dermal contact, or injection. The scope encompasses acute, subchronic, and chronic exposure studies, covering endpoints like lethality, genotoxicity, carcinogenicity, reproductive and developmental toxicity, neurotoxicity, and immunotoxicity. Testing protocols adhere to regulatory guidelines from agencies such as the FDA, EPA, and ICH, incorporating empirical data from controlled experiments to establish safe exposure limits. It extends beyond pharmaceuticals to industrial chemicals, pesticides, cosmetics, and food additives, integrating multidisciplinary approaches to predict real-world risks. In drug development, toxicology testing occurs after initial candidate selection to define dosing safety and identify attrition risks early, reducing late-stage failures. Environmentally, it evaluates ecological impacts and health risks from pollutants, supporting risk assessments under frameworks like REACH in the European Union or TSCA in the United States. Forensic and clinical applications include screening for intoxicants in overdoses or legal investigations, though these differ from preclinical safety evaluations by focusing on retrospective detection rather than prospective hazard identification.

Core Principles and Endpoints

The core principle of toxicology testing is the dose-response relationship, which quantifies how the magnitude of adverse effects correlates with the administered dose of a substance, underpinning hazard identification and characterization. This relationship typically exhibits a threshold below which no observable toxic effect occurs for most non-carcinogenic endpoints, reflecting biological homeostatic mechanisms that repair or tolerate low-level exposures, though some genotoxic carcinogens may follow a linear no-threshold model due to stochastic DNA damage. Dose-response data are plotted with dose on the x-axis and response (e.g., percentage affected) on the y-axis, enabling derivation of metrics like the median effective dose (ED50) or median lethal dose (LD50), which inform safety margins by extrapolating from high-dose animal data to low-dose human exposures. Key endpoints in toxicology testing evaluate specific adverse outcomes across exposure durations and biological levels, prioritizing empirical measurement of lethality, organ dysfunction, and reproductive/developmental effects to establish safe exposure limits. Acute toxicity endpoints focus on short-term high-dose effects, with the LD50 defined as the dose causing death in 50% of test subjects; for oral administration, LD50 values exceeding 2000 mg/kg indicate low acute hazard potential per regulatory classifications. Subchronic and chronic endpoints assess cumulative effects, such as the no-observed-adverse-effect level (NOAEL), the highest dose showing no statistically or biologically significant toxicity in studies lasting weeks to lifetimes, used to calculate reference doses (RfD) by applying uncertainty factors (typically 10-1000) for interspecies and intraspecies variability. Specialized endpoints target mechanisms like genotoxicity (e.g., mutagenicity via Ames test reversion rates), carcinogenicity (tumor incidence rates), and neurotoxicity (behavioral or neuropathological changes), integrated into tiered testing strategies to minimize false negatives, with the lowest-observed-adverse-effect level (LOAEL) used when NOAEL data are absent. Route-specific endpoints account for absorption differences, as intravenous exposure may yield lower effective doses than oral administration due to direct systemic entry, emphasizing exposure pathway in risk assessment. These endpoints collectively support risk assessment by linking dose to verifiable histopathological, biochemical, or functional alterations, avoiding overreliance on surrogate biomarkers without confirmatory evidence.
Endpoint | Description | Typical Application
LD50 | Median dose lethal to 50% of test subjects | Acute toxicity classification
NOAEL | Highest dose with no adverse effects | Deriving safe human exposure limits (e.g., RfD)
LOAEL | Lowest dose with observable adverse effects | Supplemental to NOAEL when data-limited
EC50 | Median effective concentration for 50% response (e.g., cell viability) | In vitro screening for potency
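
The arithmetic behind these metrics is straightforward to illustrate. The following sketch, assuming a two-parameter log-logistic dose-response model and entirely hypothetical study data, fits an LD50 from observed responses and derives an RfD from a NOAEL using standard 10x interspecies and 10x intraspecies uncertainty factors.

```python
import numpy as np
from scipy.optimize import curve_fit

def log_logistic(dose, ld50, slope):
    # Fraction of subjects responding at a given dose (two-parameter log-logistic)
    return 1.0 / (1.0 + (ld50 / dose) ** slope)

# Hypothetical acute study: doses (mg/kg) and fraction of animals responding
doses = np.array([10, 50, 100, 300, 1000, 2000], dtype=float)
response = np.array([0.0, 0.1, 0.3, 0.5, 0.8, 1.0])

# Fit the LD50 and slope to the observed dose-response points
(ld50, slope), _ = curve_fit(log_logistic, doses, response, p0=[300.0, 1.0])
print(f"Estimated LD50: {ld50:.0f} mg/kg (slope {slope:.2f})")

# Reference dose from a NOAEL: divide by 10x interspecies and 10x intraspecies factors
noael = 5.0   # hypothetical subchronic NOAEL, mg/kg/day
uf = 10 * 10  # combined uncertainty factor (can reach 1000 with extra factors)
print(f"RfD: {noael / uf:.3f} mg/kg/day")
```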

Historical Development

Pre-20th Century Origins

Early recognition of toxic substances dates to ancient civilizations, where empirical observations of plant, animal, and mineral poisons informed rudimentary testing through trial-and-error application in hunting, warfare, and assassination. In ancient Egypt around 1500 BCE, texts like the Ebers Papyrus documented over 800 medicinal recipes, some involving toxic agents such as aconite and opium, with effects assessed via observation of symptoms in patients or captives. Similar practices emerged in ancient Greece and Rome, where philosophers like Theophrastus (c. 371–287 BCE) classified poisonous plants based on observed lethality in animals and humans, laying groundwork for causal reasoning between exposure and outcome. A pivotal advancement occurred with Mithridates VI (132–63 BCE), king of Pontus, who conducted systematic self-experimentation and trials on prisoners to evaluate poison potency and antidote efficacy. Fearing assassination, he ingested sublethal doses of toxins like arsenic and viper venom daily, developing a universal antidote called mithridatium—a compound of over 60 ingredients tested for protective effects against common poisons. This marked the earliest recorded empirical protocol, emphasizing dose-response relationships and controlled exposure to predict survival outcomes. During the Renaissance, Philippus Aureolus Theophrastus Bombastus von Hohenheim, known as Paracelsus (1493–1541), formalized the dose-dependence of toxicity by rejecting Galenic humoral theory in favor of chemical causation, asserting that "all substances are poisons; there is none which is not a poison. The right dose differentiates a poison from a remedy." He pioneered animal dosing experiments with mercury, antimony, and arsenic to quantify therapeutic thresholds versus lethality, influencing iatrochemistry and establishing toxicity as dependent on exposure level rather than inherent malevolence. In the 19th century, Mathieu Joseph Bonaventure Orfila (1787–1853) advanced testing rigor through forensic-oriented protocols, publishing Traité des poisons in 1814, which detailed isolation and detection of alkaloids like strychnine via animal administration and post-mortem analysis. Orfila's experiments on dogs and rabbits quantified absorption, distribution, and elimination of toxins, refuting prior anecdotal methods and enabling courtroom verification of poisoning intent. Concurrently, James Marsh's 1836 test for arsenic—reducing it to arsine gas for visual confirmation—facilitated precise detection in tissues, reducing false negatives in medico-legal cases.

20th Century Standardization and Expansion

In the early 20th century, toxicology testing saw the introduction of quantitative methods to standardize assessments of substance potency. British pharmacologist John William Trevan developed the median lethal dose (LD50) test in 1927, which quantifies the dose required to kill 50% of a test population, primarily to compare the relative strengths of biological preparations like insulin and antitoxins rather than for absolute toxicity prediction. This metric facilitated more reproducible comparisons amid variability in early biological assays, though it later became central to regulatory toxicity evaluations despite criticisms of its statistical assumptions and ethical concerns. The 1937 Elixir Sulfanilamide disaster, where over 100 individuals died from renal failure due to the toxic solvent diethylene glycol in a liquid formulation, prompted the U.S. Congress to enact the Federal Food, Drug, and Cosmetic Act of 1938. This legislation mandated preclinical safety demonstrations, including animal toxicity testing, prior to drug marketing, marking a shift from post-market reactive measures to proactive empirical validation of safety profiles. Enforcement by the Food and Drug Administration (FDA) emphasized multi-species, multi-dose studies to identify acute and subacute effects, establishing foundational protocols that expanded toxicology's role in public health protection. Mid-century pharmaceutical expansion and the 1961-1962 thalidomide tragedy, which caused thousands of birth defects including phocomelia in Europe and was averted in the United States by FDA reviewer Frances Oldham Kelsey's scrutiny, catalyzed further standardization. The Kefauver-Harris Amendments of 1962 required manufacturers to prove both safety and efficacy through rigorous controlled studies, incorporating reproductive and developmental toxicity endpoints like teratogenicity assays in rodents and rabbits. This response highlighted causal links between inadequate testing and human harm, driving international regulatory alignment, such as the formation of the Society of Toxicology in 1961 and later harmonized guidelines by bodies like the International Council for Harmonisation (ICH) in the 1990s. By century's end, testing protocols encompassed chronic, genotoxic, and carcinogenic assessments, reflecting empirical refinements from disaster-driven evidence rather than theoretical ideals.

Methodologies

In Vivo Animal Testing

In vivo animal testing constitutes a core methodology in toxicology for assessing the adverse effects of chemicals, drugs, and environmental agents on living organisms, enabling evaluation of systemic toxicity, toxicokinetics, and long-term outcomes through physiological processes such as absorption, distribution, metabolism, and excretion. This approach utilizes intact animals to model complex biological interactions that simpler in vitro or computational methods may overlook, including organ-specific responses and compensatory mechanisms. Regulatory agencies like the FDA and EPA mandate such testing for safety assessments, drawing on standardized protocols to derive metrics like the no-observed-adverse-effect level (NOAEL) for human risk extrapolation. Commonly employed species include rodents such as rats and mice, selected for their genetic uniformity, rapid reproduction, and lower costs, with rats preferred in repeated-dose studies due to established historical databases. Non-rodent models, including dogs, minipigs, or cynomolgus monkeys, are required alongside rodents in pharmaceutical development to provide complementary data, justified by metabolic similarities to humans—such as comparable metabolic profiles for certain drug clearances. Species selection must be scientifically rationalized, prioritizing pharmacological relevance over phylogenetic proximity, as non-human primates are used sparingly due to ethical and logistical constraints. Acute toxicity studies evaluate single or short-term exposures, typically via oral gavage, dermal application, or inhalation, observing animals for 14 days post-dosing for endpoints like mortality, behavioral changes, and gross pathology. Historical LD50 tests, introduced by J.W. Trevan in 1927 to standardize potency amid variable bioassays, determined the dose lethal to 50% of a population but have been largely replaced by humane alternatives like OECD Test Guideline 423 (Acute Toxic Class method, adopted 2001), which uses sequential dosing in groups of three per step to classify acute toxicity with fewer animals—typically 9-15 per sex. Subchronic studies, per OECD Test Guideline 408 (updated 2018), involve 90-day repeated dosing in rodents (at least 10/sex/group at low doses, up to 20 for full endpoints), monitoring body weight, food consumption, clinical chemistry, and histopathology to identify target organs. Chronic toxicity and carcinogenicity tests extend to 6-24 months in rodents (OECD Test Guidelines 452 and 451), using 50-70 animals/sex/group to detect delayed effects like neoplasia. These tests adhere to the 3Rs principle—replacement, reduction, and refinement—outlined in regulatory guidance since the 1990s, minimizing animal numbers through statistical designs like up-and-down procedures (OECD 425) that dose sequentially based on prior outcomes. Advantages include capturing emergent toxicities from multi-organ interactions, as evidenced by rabbit models identifying thalidomide's teratogenicity in the 1960s despite unremarkable initial rodent trials. However, limitations persist: interspecies physiological differences yield modest predictive accuracy, with animal data failing to forecast human hepatotoxicity in approximately 40-50% of cases, prompting regulatory pushes toward integrated approaches combining in vivo data with non-animal methods. Despite advances in alternatives, animal testing remains indispensable for establishing causality in dose-response relationships, though source critiques note overreliance may stem from regulatory inertia rather than unassailable superiority.
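
The sequential logic of the up-and-down design can be sketched in code. This is a simplified toy simulation, not the maximum-likelihood estimator specified in OECD TG 425; the starting dose, progression factor, animal response model, and the crude reversal-based point estimate are all illustrative assumptions.

```python
import numpy as np

def up_and_down_ld50(true_ld50, start_dose=175.0, step=3.2, n_animals=9, seed=0):
    """Toy up-and-down procedure: dose one animal at a time, stepping the dose
    down after a death and up after survival, then take the geometric mean of
    the doses used after the first reversal as a rough LD50 estimate."""
    rng = np.random.default_rng(seed)
    dose, doses, outcomes = start_dose, [], []
    for _ in range(n_animals):
        # Probability of death rises with dose (simple log-logistic animal model)
        p_death = 1.0 / (1.0 + (true_ld50 / dose) ** 2)
        died = bool(rng.random() < p_death)
        doses.append(dose)
        outcomes.append(died)
        dose = dose / step if died else dose * step
    first_rev = next((i for i in range(1, n_animals)
                      if outcomes[i] != outcomes[i - 1]), 1)
    return float(np.exp(np.mean(np.log(doses[first_rev:]))))

print(f"Estimated LD50: {up_and_down_ld50(true_ld50=500.0):.0f} mg/kg")
```

Even this toy version shows why sequential designs economize on animals: each dose level is informed by the preceding outcome rather than fixed in advance.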

In Vitro Cellular and Tissue Models

In vitro cellular and tissue models in toxicology testing involve the use of isolated cells, engineered tissues, or organotypic cultures to evaluate the adverse effects of chemicals, drugs, and environmental agents outside of whole-organism systems. These models typically assess endpoints such as cytotoxicity, genotoxicity, oxidative stress, and metabolic disruption through high-throughput assays like MTT for cell viability or comet assays for DNA damage. Unlike in vivo methods, they enable rapid screening and mechanistic insights at the cellular level, reducing reliance on animals while prioritizing human-relevant cell sources like primary hepatocytes or induced pluripotent stem cell (iPSC)-derived lines. Traditional two-dimensional (2D) monolayer cultures, often using immortalized cell lines such as HepG2 for liver toxicity, provide foundational data but oversimplify tissue architecture and intercellular interactions, leading to underestimation of chronic effects. Advances in three-dimensional (3D) models, including spheroids, organoids, and bioprinted constructs, address these shortcomings by recapitulating cell-cell interactions, extracellular matrix, and multicellular organization; for instance, liver spheroids from primary human hepatocytes sustain phase I/II metabolism for up to 21 days, improving prediction of drug-induced liver injury compared to 2D systems. Microfluidic organs-on-chips integrate flow dynamics and co-cultures to mimic barriers like the blood-brain barrier or intestinal epithelium, enhancing relevance for toxicokinetic profiling. Despite these improvements, in vitro models face inherent limitations rooted in their isolation from systemic physiology, including absent immune responses, hormonal influences, and long-term toxicokinetics, which can result in poor translatability to in vivo outcomes; studies show concordance rates of only 60-70% for toxicity predictions between advanced models and whole-animal data. Variability arises from donor-specific differences in primary cells and challenges in scaling complex constructs for reproducibility, with technical hurdles like nutrient gradients in avascular tissues limiting culture duration to weeks rather than months. Validation efforts, such as those under the U.S. FDA's New Approach Methodologies (NAMs) program initiated in 2017, emphasize quantitative structure-activity relationship (QSAR) integration to bridge gaps, yet full replacement of animal testing remains constrained by these physiological disconnects. Regulatory acceptance has progressed for specific endpoints, with the European Chemicals Agency (ECHA) accepting validated assays for skin sensitization (e.g., the DPRA method since 2016) and eye irritation under REACH, but broader adoption for systemic toxicity requires tiered strategies combining in vitro data with in silico predictions. In the U.S., the FDA Modernization Act 2.0 (2022) encourages non-animal methods, including complex in vitro models for investigational new drug (IND) submissions, particularly in cardiotoxicity assessment where iPSC-derived cardiac tissues have supported safety evaluations. Peer-reviewed benchmarks indicate that while 3D models outperform 2D cultures in predicting nanoparticle toxicity (e.g., via improved barrier penetration assays), empirical validation against historical datasets is essential to counter biases in over-optimistic industry reports. Ongoing initiatives, like the Adverse Outcome Pathway (AOP) framework from the OECD (updated 2023), facilitate integration by linking molecular initiating events to apical outcomes, fostering causal realism in risk assessment.
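
As a concrete example of how a viability readout becomes a potency estimate, the sketch below fits a hypothetical MTT concentration-viability series to a descending log-logistic curve to obtain an IC50; all concentrations and readings are fabricated for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def viability(conc, ic50, hill):
    # Fraction of viable cells vs. concentration (descending log-logistic)
    return 1.0 / (1.0 + (conc / ic50) ** hill)

conc = np.array([0.1, 0.3, 1, 3, 10, 30, 100], dtype=float)      # uM, hypothetical
measured = np.array([0.98, 0.95, 0.85, 0.62, 0.40, 0.15, 0.05])  # normalized MTT signal

(ic50, hill), _ = curve_fit(viability, conc, measured, p0=[5.0, 1.0])
print(f"IC50 ~ {ic50:.1f} uM (Hill coefficient {hill:.2f})")
```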

In Silico Computational Predictions

In silico computational predictions in toxicology employ mathematical and algorithmic models to forecast the toxicological properties of chemicals based on their molecular structures, physicochemical properties, and known data from similar compounds, thereby enabling hazard identification without conducting wet-laboratory experiments. These approaches, often incorporated into integrated testing strategies, leverage quantitative structure-activity relationship (QSAR) models, which correlate structural descriptors—such as molecular weight, lipophilicity, and electronic features—with biological endpoints like acute toxicity, mutagenicity, or carcinogenicity. For instance, the U.S. Environmental Protection Agency's Toxicity Estimation Software Tool (TEST), released in versions up to 4.3 as of 2023, applies consensus QSAR methods to predict endpoints including fathead minnow LC50 and Tetrahymena pyriformis IGC50, achieving balanced accuracies around 70-80% for certain datasets when validated against experimental data. Key methodologies include rule-based expert systems, analog-based read-across, and machine learning-enhanced QSAR. Read-across extrapolates from data-rich analogs to target chemicals by identifying structural similarities, as formalized in the OECD QSAR Toolbox, which supports grouping chemicals into categories for endpoint prediction and has been updated through version 4.6 in 2023 to incorporate over 100,000 experimental records. Machine learning techniques, such as random forests or deep neural networks, have advanced predictive power by handling nonlinear relationships; a 2019 review noted that models combining multiple algorithms improved accuracy for endpoints like skin sensitization up to 85% on benchmark datasets. Molecular docking and modeling simulate interactions between chemicals and biological targets, predicting binding affinities for receptors implicated in toxicity pathways, such as the aryl hydrocarbon receptor for dioxin-like effects. Regulatory acceptance hinges on adherence to OECD validation principles established in 2007, which require models to have a defined endpoint, unambiguous algorithm, established applicability domain, mechanistic interpretation where possible, and robust internal/external performance metrics such as balanced accuracy exceeding 0.7 for binary classifications. The OECD's 2023 (Q)SAR Assessment Framework further guides evaluation of predictions by assessing model reliability, prediction uncertainty, and consistency across tools, emphasizing conservative approaches for borderline cases in chemical registration under REACH. Despite these advances, prediction accuracies remain variable; for carcinogenicity, descriptor-only QSAR models yield around 62% overall accuracy on external test sets, underscoring challenges in capturing complex dynamics like toxicokinetics and metabolism. Advantages of in silico methods include rapid screening of large chemical libraries—processing millions of compounds in days versus years for in vivo tests—substantial cost reductions estimated at 50-90% per compound in early stages, and ethical benefits by minimizing animal use in line with the 3Rs principle (replacement, reduction, refinement). However, limitations persist, including over-reliance on training data quality, which often derives from heterogeneous experimental sources prone to inter-laboratory variability; poor generalizability to scaffolds outside the applicability domain; and "black-box" opacity in advanced models, complicating causal mechanistic insights essential for regulatory confidence. Validation against diverse, high-quality datasets remains critical, as unaddressed biases in historical data—such as underrepresentation of industrial chemicals—can propagate errors, with studies showing up to 20-30% false positives for acute oral toxicity in conservative models. Ongoing integration with in vitro data and physiologically based pharmacokinetic modeling aims to enhance reliability, as evidenced by hybrid approaches achieving 10-15% accuracy gains in recent benchmarks.
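
The two ideas emphasized above—a descriptor-based classifier and an applicability-domain check—can be sketched with synthetic data. Everything here is illustrative: the descriptors are random stand-ins, the labels are synthetic, and the centroid-distance domain check is a deliberately crude proxy for the more rigorous definitions used in practice.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Hypothetical descriptor matrix (stand-ins for molecular weight, logP, etc.)
X = rng.normal(size=(500, 8))
# Synthetic binary toxicity labels driven by two descriptors, for illustration only
y = ((X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500)) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("Balanced accuracy:", round(balanced_accuracy_score(y_te, model.predict(X_te)), 2))

# Crude applicability-domain check: flag queries far from the training descriptors
centroid = X_tr.mean(axis=0)
radius = np.percentile(np.linalg.norm(X_tr - centroid, axis=1), 95)
query = rng.normal(size=8) * 3.0  # deliberately out-of-domain query
print("Query inside applicability domain:",
      bool(np.linalg.norm(query - centroid) <= radius))
```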

Applications

Drug Development and Safety Assessment

Toxicology testing is integral to pharmaceutical development, serving as the primary mechanism to evaluate the safety profile of candidate compounds and mitigate risks prior to human exposure. In the preclinical phase, these assessments identify potential adverse effects, determine no-observed-adverse-effect levels (NOAELs), and establish safe starting doses for clinical trials by characterizing dose-response relationships and target organ toxicities. Standard studies include single- and repeat-dose toxicity tests in rodents and non-rodents, genotoxicity evaluations via assays like the Ames test and micronucleus assay, and specialized assessments for carcinogenicity, reproductive toxicity, and immunotoxicity, all conducted under good laboratory practice (GLP) standards to ensure data reliability for regulatory submission. These evaluations directly inform investigational new drug (IND) applications to agencies like the FDA, where toxicology data must demonstrate that the proposed clinical dose poses minimal risk based on extrapolations from animal and in vitro data. Early-stage screening employs high-throughput in vitro and in silico methods to filter compounds for genotoxicity, hepatotoxicity, and cardiotoxicity, reducing the pipeline of candidates advanced to resource-intensive in vivo studies. For instance, investigative toxicology integrates biomarkers, transcriptomics, and other omics technologies to uncover mechanisms of idiosyncratic toxicities, enabling structure-activity relationship refinements during lead optimization and thereby lowering attrition rates attributable to safety issues, which historically contribute to about 25-30% of compound failures across development stages. Absorption, distribution, metabolism, and excretion (ADME) toxicology further assesses metabolite-related risks, such as reactive intermediates that may evade initial screens, with guidelines emphasizing evaluation of major human metabolites in animal models when metabolism differs across species. Safety pharmacology studies, required under ICH S7A guidelines, complement core toxicology by probing functional effects on vital systems like the cardiovascular, respiratory, and central nervous systems, often using isolated organ preparations or telemetry-equipped animals to predict human-relevant liabilities such as QT interval prolongation. In biopharmaceutical development, toxicology testing adapts to modalities like monoclonal antibodies through species-specific testing and surrogate molecules when cross-reactivity is limited, focusing on exaggerated pharmacology and cytokine release risks. Despite advances, discrepancies between preclinical and clinical outcomes persist, as evidenced by post-approval withdrawals like troglitazone in 2000 due to undetected hepatotoxicity, highlighting the need for human-relevant models while affirming toxicology testing's role in averting widespread harm. Overall, rigorous application of these tests has enabled safer progression of therapies, with FDA approvals increasingly reliant on integrated nonclinical packages that balance efficacy signals against toxicity thresholds.
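
To make the dose-setting step concrete, the sketch below applies the body-surface-area scaling from the FDA's 2005 guidance on estimating a maximum recommended starting dose (MRSD), using a hypothetical rat NOAEL; the function name and study values are illustrative.

```python
def human_equivalent_dose(noael_mg_kg, animal_kg, human_kg=60.0):
    """Allometric body-surface-area scaling from the FDA's 2005 starting-dose
    guidance: HED = NOAEL * (animal weight / human weight)^0.33."""
    return noael_mg_kg * (animal_kg / human_kg) ** 0.33

# Hypothetical rat NOAEL of 50 mg/kg/day from a 28-day repeat-dose study
hed = human_equivalent_dose(noael_mg_kg=50.0, animal_kg=0.25)
mrsd = hed / 10.0  # default 10x safety factor for the starting dose
print(f"HED ~ {hed:.1f} mg/kg; MRSD ~ {mrsd:.2f} mg/kg")
```

The result (roughly 8 mg/kg HED here) agrees with the guidance's tabulated rat conversion factor of about 6.2, illustrating that the exponent form and the standard factors are equivalent.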

Environmental Monitoring and Risk Assessment

Toxicology testing supports environmental monitoring by employing bioassays to evaluate the presence and effects of contaminants in ecosystems, including water, sediment, soil, and air. These tests expose sentinel organisms—such as fish, invertebrates like Daphnia magna, algae, and plants—to environmental samples or extracts, measuring endpoints like mortality, reproduction, and growth inhibition to detect harmful contaminant levels. Whole Effluent Toxicity (WET) methods, standardized under the U.S. Clean Water Act since 1984, assess the combined toxic potential of industrial and municipal wastewater discharges by subjecting aquatic species to serial dilutions of effluent, with test durations ranging from 24 hours for acute effects to 7-10 days for chronic impacts. Such monitoring ensures compliance with discharge permits and tracks pollutant trends, as seen in EPA programs evaluating over 1,000 permitted facilities annually for compliance with toxicity limits. In contaminated site remediation, toxicity testing verifies the effectiveness of cleanup efforts, particularly at locations designated under the Comprehensive Environmental Response, Compensation, and Liability Act of 1980. Laboratory and in situ bioassays compare pre- and post-remediation samples, quantifying reductions in bioavailable toxins; for instance, tests on sediment pore water can indicate whether dredging or capping has mitigated risks to benthic organisms. Complementary chemical analyses, such as gas chromatography-mass spectrometry for persistent organic pollutants, integrate with toxicity data to pinpoint causal agents, avoiding overreliance on concentration alone, which may not reflect ecological bioavailability. Risk assessment in environmental toxicology integrates testing data into frameworks that estimate adverse outcomes for ecosystems and human health via exposure pathways. The EPA's risk assessment process, formalized in 1983 guidelines and updated through the Integrated Risk Information System (IRIS), comprises four steps: hazard identification using acute and chronic toxicity tests to flag substances like heavy metals or pesticides; dose-response assessment deriving benchmarks such as no-observed-adverse-effect levels (NOAEL) from rodent or aquatic studies; exposure assessment modeling contaminant uptake through ingestion, inhalation, or dermal contact; and risk characterization combining these to compute probabilities, often expressed as hazard quotients, with values exceeding 1 indicating potential concern. For ecological contexts, EPA's Ecological Soil Screening Levels (Eco-SSLs), derived from species sensitivity distributions across 20+ taxa, provide protective thresholds for soil contaminants like lead or PAHs, benchmarked against 2005-2020 toxicity datasets. These assessments prioritize causal linkages, distinguishing direct toxic effects from indirect stressors like habitat alteration, and incorporate uncertainty factors (typically 10-fold for interspecies extrapolation) to account for data gaps. Recent advancements, including pathway-based testing aligned with adverse outcome pathways (AOPs), refine predictions by focusing on molecular initiating events, as demonstrated in evaluations of chemical mixtures where traditional single-substance tests underestimate synergistic toxicity. Empirical validation against field observations, such as post-Superfund monitoring showing 70-90% reductions in toxicity endpoints after remediation, underscores the frameworks' utility in informing regulatory decisions without assuming source neutrality.
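
A minimal worked example of the risk-characterization arithmetic, assuming a hypothetical drinking-water exposure and reference dose, shows how a hazard quotient is computed.

```python
def chronic_daily_intake(conc_mg_per_l, intake_l_per_day, body_weight_kg):
    # Simplified drinking-water intake equation (mg contaminant / kg body weight / day)
    return conc_mg_per_l * intake_l_per_day / body_weight_kg

cdi = chronic_daily_intake(0.2, 2.0, 70.0)  # hypothetical well-water contaminant, adult receptor
rfd = 0.003                                 # hypothetical reference dose, mg/kg/day
hq = cdi / rfd
print(f"CDI = {cdi:.4f} mg/kg/day, HQ = {hq:.1f}")  # HQ > 1 flags potential concern
```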

Forensic and Clinical Diagnostics

Forensic toxicology testing applies analytical methods to postmortem biological samples, such as blood, urine, vitreous humor, and tissues, to identify and quantify drugs, alcohols, poisons, and their metabolites that may have contributed to death or legal impairment. This sub-discipline supports criminal investigations, including drug-facilitated crimes and vehicular homicides, by establishing causal links between toxicants and outcomes through toxicological profiles correlated with autopsy findings and scene evidence. Key techniques include gas chromatography-mass spectrometry (GC-MS) for volatile compounds and confirmation, liquid chromatography-tandem mass spectrometry (LC-MS/MS) for broad-spectrum drug detection with high sensitivity down to nanogram-per-milliliter levels, and headspace gas chromatography for alcohols. These methods enable differentiation between therapeutic, recreational, and lethal concentrations, though interpretation requires consideration of postmortem redistribution, where drug levels can artificially elevate in central sites due to passive diffusion from organ reservoirs. In clinical diagnostics, toxicology testing facilitates rapid identification of xenobiotics in living patients presenting with suspected overdoses or poisonings, guiding antidote administration, decontamination, and supportive care. Blood or serum assays predominate in acute settings, detecting substances like opioids, benzodiazepines, and salicylates via immunoassays for initial screening followed by confirmatory mass spectrometry to resolve cross-reactivity issues and quantify levels for therapeutic monitoring. Urine toxicology screens, often employing enzyme-multiplied immunoassay techniques (EMIT), provide longer detection windows for drug exposure but lack precision for timing or impairment assessment without correlative clinical data. Laboratory-developed tests, predominantly mass spectrometry-based, have expanded capabilities for novel psychoactive substances since 2020, addressing gaps in commercial panels amid rising synthetic opioid cases. Challenges in both domains include matrix effects from sample degradation, co-ingestants complicating spectra, and the need for validated cutoffs; for instance, forensic thresholds for driving impairment are typically set at 0.08% blood alcohol concentration, varying by jurisdiction. Recent advances, such as high-resolution mass spectrometry in LC-MS for untargeted screening, have improved detection of emerging drugs like fentanyl analogs in blood, enhancing turnaround times to under 24 hours in equipped labs. Integration of high-resolution instrumentation has boosted specificity, reducing false positives from isobaric interferences, though forensic application demands chain-of-custody protocols to ensure evidentiary integrity. Clinical workflows increasingly incorporate point-of-care devices for rapid triage, yet confirmatory central lab analysis remains essential for medicolegal reliability.

Industry Structure

Contract Research Organizations

Contract research organizations (CROs) specialize in providing outsourced preclinical and analytical services to pharmaceutical, biotechnology, and chemical industries, with toxicology testing forming a core component to assess potential adverse effects of substances on biological systems. These entities conduct studies ranging from acute and chronic toxicity evaluations to genotoxicity, reproductive toxicity, and carcinogenicity assessments, often under good laboratory practice (GLP) standards to ensure data reliability for regulatory submissions. By leveraging specialized facilities and expertise, CROs enable sponsors to accelerate development timelines while mitigating in-house resource constraints. The global toxicology testing services market, predominantly serviced by CROs, reached USD 41.24 billion in 2025 and is projected to expand to USD 67.34 billion by 2034, driven by rising demand for safety data on drug candidates and environmental agents amid stricter regulatory scrutiny. Within the broader preclinical sector, toxicology testing accounted for approximately 22-24% of revenue in 2024, underscoring its dominance due to mandatory requirements for investigational new drug (IND) and new drug application (NDA) filings. Growth factors include increasing outsourcing by small biotech firms lacking internal capabilities and the shift toward complex biologics necessitating specialized assays. Prominent CROs in toxicology include Charles River Laboratories, which offers comprehensive GLP-compliant studies evaluating chemical, drug, and substance toxicity in in vivo and in vitro models; WuXi AppTec, providing six key toxicology study types for IND/NDA support such as single-dose and repeat-dose assessments; and Scantox, specializing in genetic toxicology with regulatory genotoxicity portfolios. Other notable players encompass Envigo (now Inotiv) for pharmacology and toxicology in early-stage development, and Pharmaron for integrated services combining advanced technology with safety testing. These organizations collectively employ about 10% of practicing toxicologists worldwide, facilitating high-throughput screening and specialized endpoints like immunotoxicity. CROs must adhere to international standards such as OECD GLP guidelines to validate data integrity, with services extending to pharmacokinetics-integrated toxicology to predict human relevance from animal data. While outsourcing reduces costs—potentially by 20-30% compared to internal labs—it introduces risks like data discrepancies if oversight is inadequate, as evidenced by occasional FDA warnings for non-compliance in toxicology studies. Nonetheless, CROs' scalability supports diverse applications, from drug development to chemical safety validation.

Market Dynamics and Key Players

The global toxicology testing market, encompassing services for drug safety assessment, environmental monitoring, and forensic analysis, was valued at approximately USD 39.05 billion in 2024 and is projected to reach USD 67.34 billion by 2034, reflecting a compound annual growth rate (CAGR) of about 5.6%. This expansion is driven primarily by escalating pharmaceutical research and development (R&D) investments, with global biopharma R&D spending exceeding USD 200 billion annually as of 2023, necessitating robust toxicity evaluations to comply with regulatory mandates from agencies like the FDA and EMA. Additional factors include heightened environmental risk assessments amid industrial pollution concerns and the forensic demand for accurate substance detection in clinical and legal contexts. Market dynamics are influenced by a shift toward non-animal testing methodologies, such as in vitro and in silico approaches, which accounted for over 40% of workflows by 2024 due to ethical pressures and advancements in cellular models, though traditional animal testing persists for regulatory validation. Challenges include high operational costs—often exceeding USD 1 million per comprehensive study—and variability in predictive accuracy across test types, prompting consolidation among providers to achieve economies of scale. Regional dynamics show North America dominating with over 40% market share in 2024, fueled by stringent U.S. regulations and proximity to major pharma hubs, while Asia-Pacific exhibits the fastest growth at a CAGR above 7%, driven by expanding contract research organizations (CROs) in China and India. Key players in the toxicology testing industry include contract research organizations and instrument providers that dominate service delivery and technology enablement. Charles River Laboratories, a leading CRO, specializes in integrated services for preclinical safety assessment, conducting thousands of studies annually and holding significant capacity in both in vivo and alternative assays. Thermo Fisher Scientific Inc. leads in instrumentation and reagents, supplying tools like mass spectrometers essential for ADME (absorption, distribution, metabolism, excretion) studies, with revenues from life sciences exceeding USD 10 billion in 2023. Other major entities encompass Eurofins Scientific, which focuses on environmental and food testing with global lab networks processing millions of samples yearly; Agilent Technologies, providing analytical instruments for precise detection; and Labcorp (formerly Covance), emphasizing preclinical safety with capabilities in genotoxicity and carcinogenicity assessments. Merck KGaA and Quest Diagnostics also feature prominently, the former in biotech reagents and the latter in clinical diagnostics, collectively controlling over 50% of the market through mergers and innovation in automated platforms.

Regulatory Frameworks

United States Regulations

In the United States, toxicology testing for pharmaceuticals, biologics, and medical devices is primarily regulated by the Food and Drug Administration (FDA) under the Federal Food, Drug, and Cosmetic Act, requiring nonclinical laboratory studies to assess safety prior to human trials or marketing. These studies must comply with good laboratory practice (GLP) standards outlined in 21 CFR Part 58, which govern the organization, personnel, facilities, equipment, testing procedures, and record-keeping for nonclinical studies supporting investigational new drug applications (INDs), new drug applications (NDAs), or biologics license applications. The FDA's Center for Drug Evaluation and Research provides detailed guidelines for designing toxicity studies, including acute, subchronic, chronic, and reproductive/developmental assessments, emphasizing dose selection, animal species selection (typically rodents and non-rodents), and histopathological evaluations to predict human risk. The FDA Modernization Act 2.0, enacted on December 29, 2022, amended the Federal Food, Drug, and Cosmetic Act to eliminate the mandatory requirement for animal testing in preclinical drug safety assessments, permitting alternatives such as cell-based assays, organ-on-chip models, and computational (in silico) predictions if they reliably demonstrate safety and efficacy. In April 2025, the FDA announced a roadmap to phase out animal testing requirements for certain drugs, including monoclonal antibodies, by integrating non-animal approaches like AI-driven modeling, while maintaining rigorous validation standards to ensure predictive accuracy comparable to traditional methods. This shift aims to reduce reliance on animal models, which have historically shown limitations in translatability to human outcomes, but regulators require case-by-case justification for alternatives to avoid underestimating risks. For environmental chemicals, pesticides, and industrial substances, the Environmental Protection Agency (EPA) enforces toxicology testing under the Toxic Substances Control Act (TSCA, enacted 1976) and the Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA, originally 1947 with amendments). TSCA mandates pre-manufacture notices for new chemicals, requiring toxicity data on health effects, environmental fate, and exposure, generated per EPA GLP standards in 40 CFR Part 792. FIFRA similarly requires registrants to submit data from studies on acute, subchronic, chronic, oncogenic, reproductive, and ecotoxicological effects using harmonized test guidelines in the Office of Chemical Safety and Pollution Prevention (OCSPP) Series 870 for health effects and Series 850 for ecological effects. These guidelines specify protocols like oral gavage for acute toxicity (e.g., LD50 determinations in rats) and 90-day repeated-dose studies, with GLP compliance monitored through inspections to ensure data integrity for risk assessments. EPA GLP regulations differ from FDA's in scope—extending to environmental and chemical fate studies—and enforcement, with Part 160 applying specifically to pesticide testing under FIFRA, while Part 792 covers TSCA-related non-pesticide substances. Both agencies conduct compliance audits, with violations potentially leading to study invalidation or enforcement actions, though EPA emphasizes data quality over methodological rigidity, increasingly accepting validated non-animal methods aligned with OECD guidelines where scientifically justified. Occupational toxicology testing falls under the Occupational Safety and Health Administration (OSHA), which references EPA/FDA data for permissible exposure limits but does not independently mandate testing. Overall, U.S. regulations prioritize reproducible, quality-controlled data to inform causal risk determinations, balancing innovation in alternatives against empirical validation needs.

European Union Directives

The European Union's primary framework for toxicological testing of chemicals is established under Regulation (EC) No 1907/2006, known as REACH (Registration, Evaluation, Authorisation and Restriction of Chemicals), which requires manufacturers and importers to submit dossiers including toxicological data for substances produced or imported in quantities exceeding 1 tonne per annum. Testing requirements are tiered by annual tonnage bands outlined in REACH Annexes VII to X, starting with basic endpoints such as acute oral toxicity, skin and eye irritation, skin sensitization, and bacterial gene mutation assays for 1-10 tonnes per year, and escalating to sub-chronic (90-day) repeated-dose toxicity, prenatal developmental toxicity, and potentially carcinogenicity studies for higher volumes above 1000 tonnes per year. REACH emphasizes the 3Rs principle (replacement, reduction, refinement) by mandating use of existing data, in vitro methods, and read-across approaches before vertebrate testing, with all studies conducted under GLP and aligned with OECD test guidelines. Complementing REACH, Regulation (EC) No 1272/2008 on classification, labelling and packaging of substances and mixtures (CLP) mandates hazard classification using toxicological evidence, such as LD50/LC50 values for acute toxicity categories (1-5) and criteria for skin corrosion, serious eye damage, respiratory sensitization, and specific target organ toxicity. Classifications derive directly from toxicological studies, informing safety data sheets and risk management, with updates to Annex VI harmonized lists reflecting peer-reviewed data from sources like ECHA committees. In the cosmetics sector, Regulation (EC) No 1223/2009 prohibits the sale of products tested on animals within the EU, with a full ban on animal testing for ingredients and finished products effective from March 11, 2013, for endpoints like repeated-dose toxicity, reproductive toxicity, and toxicokinetics, unless no validated alternatives exist as determined by the Scientific Committee on Consumer Safety (SCCS). Safety assessments must instead rely on toxicological profiles evaluating endpoints such as local irritation, sensitization, genotoxicity, and systemic effects via margin-of-safety calculations using no-observed-adverse-effect levels (NOAELs) from non-animal data, human studies, or structure-activity relationships. For pharmaceuticals, Directive 2001/83/EC, as amended, requires comprehensive non-clinical toxicological documentation for marketing authorizations, guided by scientific guidelines that specify studies including single-dose and repeat-dose toxicity (up to chronic durations), genotoxicity batteries, carcinogenicity in rodents (if warranted by treatment duration or mechanistic concern), and reproductive/developmental toxicity across multiple species. These align with ICH harmonized standards, prioritizing dose-response data to establish safe starting doses for clinical trials, with juvenile and reproductive toxicity addressed for relevant indications. Overall, EU frameworks integrate toxicological testing to inform risk assessments while increasingly favoring non-animal approaches, though vertebrate studies remain required where alternatives are insufficient for regulatory decisions.

Global Harmonization Efforts

The International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use (ICH), formed in 1990 by regulatory authorities and industry from the European Union, Japan, and the United States, leads efforts to standardize non-clinical toxicology testing for pharmaceuticals. ICH guidelines, particularly M3(R2) on non-clinical studies to support human clinical trials (adopted at Step 4 in November 2009) and the S-series (S1 to S11) addressing specific toxicities such as carcinogenicity, genotoxicity, reproductive toxicity, and immunotoxicity, promote alignment on study design, duration, and endpoints to ensure data from toxicology assessments are mutually accepted across adopting regions including the EU, the United States, Japan, Canada, and Switzerland. These guidelines emphasize integration of toxicokinetics into toxicity studies and specify requirements like repeat-dose toxicity testing scaled to clinical exposure durations, reducing discrepancies that previously required region-specific repeat studies. For industrial chemicals, pesticides, and environmental toxicants, the Organisation for Economic Co-operation and Development (OECD) drives harmonization through its Test Guidelines (TGs) and the Mutual Acceptance of Data (MAD) system. Established by a 1981 OECD Council Decision—binding on all 38 member countries and extended to non-members via adherence—the MAD system requires acceptance of toxicology data generated under OECD TGs and good laboratory practice (GLP) principles, eliminating the need for duplicate testing and estimated to save millions in animal use and costs annually. Section 4 of the TGs covers health effects, including acute oral toxicity (e.g., TG 420, 423, 425, adopted or updated post-2000 to refine animal use), repeated-dose toxicity (TG 407, 408), genetic toxicology (TG 471-489), and carcinogenicity (TG 451, 453), with over 150 guidelines developed since 1981 and regular updates incorporating validated alternatives. U.S. EPA participation in revisions ensures alignment for regulatory purposes like pesticide registration. Complementing these, the Globally Harmonized System (GHS) of Classification and Labelling of Chemicals, initiated in 1992 and first published in 2003 with revisions through 2023 (Rev. 10), standardizes hazard classification using toxicology data for categories like acute toxicity (based on LD50 values) and specific target organ toxicity, deferring testing protocols to frameworks such as OECD TGs while enabling consistent global labeling and safety data sheets across 80+ implementing countries. This system supports trade by minimizing classification disputes reliant on divergent interpretations, though it does not mandate harmonized testing methods directly.
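
Because GHS acute oral categories are defined by fixed LD50 cut-offs, the mapping is easy to express in code. The sketch below encodes the standard oral cut-off values from the GHS classification scheme (Category 5 is optional for adopting jurisdictions).

```python
def ghs_acute_oral_category(ld50_mg_kg):
    """Map an oral LD50 (mg/kg body weight) to a GHS acute toxicity category."""
    cutoffs = [(5, "Category 1"), (50, "Category 2"), (300, "Category 3"),
               (2000, "Category 4"), (5000, "Category 5")]
    for upper, category in cutoffs:
        if ld50_mg_kg <= upper:
            return category
    return "Not classified"

for ld50 in (3, 120, 1500, 8000):
    print(ld50, "->", ghs_acute_oral_category(ld50))
```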

Advances and Alternatives

New Approach Methodologies

New approach methodologies (NAMs) encompass a range of non-animal techniques designed to assess chemical hazards, including in vitro cellular assays, in chemico assays measuring chemical reactivity, and computational models that predict biological interactions based on structure-activity relationships. These methods prioritize mechanistic understanding of toxicity pathways over traditional whole-animal testing, enabling high-throughput screening of thousands of compounds for endpoints such as skin sensitization or genotoxicity. For instance, the OECD has validated assays like the Direct Peptide Reactivity Assay for skin sensitization, which detects protein binding as a key event in allergic responses, demonstrating concordance with animal data in over 80% of cases for certain datasets. Regulatory agencies have increasingly integrated NAMs into decision-making frameworks to reduce reliance on vertebrate animals. The U.S. Environmental Protection Agency (EPA) outlined a New Approach Methods Work Plan in June 2020, focusing on developing and validating NAMs for TSCA chemical evaluations, with goals to prioritize safer chemicals and minimize animal use by incorporating tools like quantitative structure-activity relationship (QSAR) models. Similarly, the FDA's 2017 Predictive Toxicology Roadmap highlights NAMs' potential for timely safety evaluation of products like food additives, emphasizing integrated strategies combining in vitro data with computational modeling. In Europe, the European Chemicals Agency (ECHA) supports NAMs aligned with the 3Rs principles (replacement, reduction, refinement), as seen in qualified assays for embryotoxicity prediction under REACH regulations. These efforts build on adverse outcome pathway (AOP) frameworks, which map molecular initiating events to apical outcomes, facilitating read-across from tested analogs to untested chemicals. Despite progress, NAMs face validation challenges, particularly for systemic or repeated-dose toxicities where in vitro models may not fully replicate organism-level toxicokinetics or metabolism. A 2023 National Academies report recommended EPA develop standardized frameworks for evaluating NAM reliability, noting gaps in bridging high-throughput bioactivity data to regulatory potency estimates, as current predictions can underperform for novel scaffolds with limited training data. Empirical comparisons show NAMs excelling in local toxicity endpoints driven by reactivity, such as eye irritation, but struggling with dynamic processes like developmental toxicity due to incomplete recapitulation of maternal-fetal interactions. Peer-reviewed analyses indicate that while NAM batteries can achieve 70-90% accuracy for acute hazards when calibrated against historical animal data, broader adoption requires overcoming inter-laboratory variability and establishing quantitative uncertainty bounds, as unvalidated extrapolations risk underestimating chronic risks. Ongoing initiatives, including AI-enhanced NAM development, aim to address these by automating data curation and pathway mapping, though human oversight remains essential to mitigate model biases from training datasets skewed toward pharmaceuticals over industrial chemicals.
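
As an illustration of how individual key-event assays combine into a defined approach, the sketch below implements simple majority voting in the spirit of the OECD "2 out of 3" defined approach for skin sensitization; the function and boolean inputs are simplified stand-ins for the guideline's full data-interpretation procedure.

```python
def two_out_of_three(dpra_positive, keratinosens_positive, hclat_positive):
    """Majority call across three key-event assays: protein reactivity (DPRA),
    keratinocyte activation (KeratinoSens), and dendritic-cell activation
    (h-CLAT). A chemical is flagged when at least two assays are positive."""
    votes = sum([dpra_positive, keratinosens_positive, hclat_positive])
    return "sensitizer" if votes >= 2 else "non-sensitizer"

print(two_out_of_three(True, True, False))   # -> sensitizer
print(two_out_of_three(False, True, False))  # -> non-sensitizer
```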

Integration of AI and Machine Learning

Artificial intelligence (AI) and machine learning (ML) have been integrated into toxicology testing primarily through predictive modeling to forecast chemical toxicity based on molecular structures, reducing reliance on time-intensive in vivo and in vitro assays. These approaches leverage algorithms trained on large datasets, such as ToxCast and Tox21, which contain high-throughput screening results for thousands of chemicals across hundreds of assays. Quantitative structure-activity relationship (QSAR) models, enhanced by ML techniques like random forests and support vector machines, enable early identification of potential toxicities, including hepatotoxicity and cardiotoxicity, with reported accuracies exceeding 80% in some validated cases. Deep learning variants, including convolutional neural networks (CNNs) and transformers, have advanced predictions by processing complex chemical representations, such as SMILES strings or molecular graphs, to capture nonlinear relationships between structure and toxicity endpoints. For instance, a 2024 transformer-based model demonstrated superior performance in predicting acute and chronic toxicities compared to traditional QSAR, achieving area under the curve (AUC) values up to 0.92 on benchmark datasets by directly extracting toxicity-specific features from chemical inputs. Multimodal deep learning frameworks, incorporating omics data alongside structural inputs, further improve drug-induced toxicity forecasts, as evidenced by models integrating transcriptomics and chemical structure for organ-specific risks like hepatotoxicity. These methods support high-throughput screening in drug discovery, potentially accelerating candidate prioritization while minimizing false positives from empirical testing. In regulatory contexts, AI/ML models aid hazard identification for environmental chemicals, with efforts like the U.S. EPA's use of ToxCast-derived predictions to prioritize substances for further evaluation. A review of 93 peer-reviewed studies since 2015 highlights the prevalence of ensemble methods and neural networks in these applications, though validation against independent datasets remains essential to mitigate overfitting. Despite efficiencies, challenges persist, including the "black-box" nature of deep models, which limits mechanistic interpretability crucial for causal toxicological reasoning, and biases from imbalanced training data that can underestimate rare toxicities. Ongoing developments emphasize explainable AI techniques, such as SHAP values, to enhance transparency and regulatory acceptance.
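
A minimal sketch of the evaluation practice described above: a toy classifier trained on synthetic fingerprint-like features and scored with cross-validated AUC to guard against the overfitting noted in the text. All data are fabricated; real workflows would use curated chemical fingerprints and experimental labels.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(7)

# Hypothetical fingerprint-like binary features for 400 compounds
X = (rng.random((400, 64)) > 0.8).astype(float)
# Synthetic "toxic" label tied to a few feature bits, standing in for assay data
y = ((X[:, :5].sum(axis=1) + rng.normal(scale=0.7, size=400)) > 1.5).astype(int)

# Cross-validated probabilities give an honest estimate of held-out performance
probs = cross_val_predict(LogisticRegression(max_iter=1000), X, y,
                          cv=5, method="predict_proba")[:, 1]
print("Cross-validated AUC:", round(roc_auc_score(y, probs), 2))
```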

Efforts to Reduce Animal Use

The 3Rs principle—Replacement, Reduction, and Refinement—serves as the foundational framework for minimizing animal use in toxicology testing, originally articulated in 1959 by W.M.S. Russell and R.L. Burch in The Principles of Humane Experimental Technique. Replacement involves substituting animal models with non-animal methods, such as in vitro assays or computational models; Reduction aims to decrease the number of animals required while maintaining scientific validity; and Refinement seeks to lessen suffering through improved procedures or housing. In toxicology, these principles have driven the validation of alternatives like cell-based assays for acute systemic toxicity, achieving reductions in animal numbers; for instance, strategic application of the 3Rs in regulatory toxicity testing has yielded major savings, with one study reporting over 50% decreases in rodent use across programs. In the United States, the Interagency Coordinating Committee on the Validation of Alternative Methods (ICCVAM), established under the National Toxicology Program, coordinates federal efforts to identify, develop, and validate test methods that reduce, refine, or replace animal use in safety assessments, including toxicology endpoints like skin sensitization and acute oral toxicity. The National Institute of Environmental Health Sciences' NICEATM supports ICCVAM by evaluating alternative approaches, such as in silico predictions and reduced-protocol tests, leading to regulatory acceptance of methods that minimize animal involvement; examples include guideline revisions for pesticide testing that cut bird use. The FDA Modernization Act 2.0, enacted on December 29, 2022, amended the Federal Food, Drug, and Cosmetic Act to authorize non-animal alternatives—including cell-based systems, microphysiological assays, and computer modeling—for demonstrating drug safety and efficacy in applications, explicitly ending the mandatory animal testing requirement previously in place since 1938. European efforts under the REACH regulation (Regulation (EC) No 1907/2006) mandate that registrants prioritize alternatives to animal testing whenever scientifically possible, prohibiting vertebrate tests if data can be obtained from existing information, in vitro methods, or read-across approaches, with a "last resort" clause for unavoidable cases. The European Chemicals Agency (ECHA) has advanced this through initiatives like the 2023 roadmap toward phasing out animal testing, focusing on non-animal methods for endpoints such as skin sensitization and carcinogenicity, though implementation faces challenges from data gaps leading to over 4 million animals used in REACH systemic studies by 2023. Globally, the International Council for Harmonisation (ICH) incorporates 3Rs-aligned guidelines to avoid duplicate testing across jurisdictions, reducing overall animal demands in pharmaceutical toxicology.

Controversies and Criticisms

Scientific Limitations and Predictive Accuracy

Animal models in toxicology testing demonstrate limited predictive accuracy for human outcomes, primarily due to interspecies differences in absorption, distribution, metabolism, and excretion (ADME), receptor profiles, and disease pathophysiology. A 2000 analysis of 150 pharmaceutical compounds reported 71% concordance for detecting the presence of human organ toxicities using combined rodent and non-rodent data, with non-rodents alone at 63% and rodents alone at 43%; highest agreement occurred for hematological, gastrointestinal, and cardiovascular effects, while cutaneous toxicities showed the lowest. Subsequent independent reviews, however, have identified methodological flaws in such surveys, including selection bias toward marketed drugs and overestimation of predictivity; for instance, only 37% of 76 animal studies replicated findings in humans, with overall agreement approximating 50%—near random chance for many endpoints. Likelihood ratios (LR) from animal toxicology data further underscore weak diagnostic utility: median negative LRs range from 1.12 to 1.82 across common test species, indicating that negative animal results barely reduce the probability of human toxicity and fail to reliably exclude risk. Positive predictive values (PPV) for specific toxicities remain modest; in an analysis of 108 drugs, PPVs for all-grade toxicities varied from 57% to 72% across species, dropping for severe (grade 3/4) events, while negative predictive values hovered at 50-57% across species, with consistent prediction limited to hematologic effects and poor performance for neurologic or cutaneous ones. These metrics contribute to substantial false negatives, as exemplified by compounds like thalidomide, which showed no teratogenicity in standard rodent models but caused severe birth defects, and TGN1412, which elicited cytokine storms in humans despite safety in preclinical animals. High clinical attrition reflects these gaps: approximately 89% of drugs advancing past preclinical testing fail in human trials, with toxicity implicated in nearly 50% of preclinical-to-phase I failures, despite regulatory mandates for extensive animal data. False positives, conversely, inflate development costs by discarding candidates with animal-specific toxicities irrelevant to humans, such as exaggerated rodent sensitivities to certain carcinogens. In vitro assays, intended as alternatives or adjuncts, exhibit comparable or greater limitations in predictive accuracy, lacking systemic organ interactions, immune dynamics, and long-term exposure modeling essential for causal pathways. While high-throughput screens can identify acute cytotoxicity with moderate correlation to human LD50 values (e.g., via basal cytotoxicity models), they underperform for chronic, multi-organ, or idiosyncratic effects, often requiring in vivo validation whose own predictivity is flawed. Overall, current paradigms prioritize hazard detection over precise human forecasting, with empirical data revealing insufficient causal fidelity to justify sole reliance on either animal or non-animal methods without integrated, human-relevant refinements.
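
These concordance statistics all derive from a standard 2x2 diagnostic table. The sketch below computes sensitivity, specificity, predictive values, and likelihood ratios for hypothetical counts chosen to echo the near-chance agreement reported above.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Diagnostic statistics for animal-test vs. human-toxicity concordance,
    treating the animal result as the 'test' and human toxicity as the truth."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return {
        "sensitivity": sens,
        "specificity": spec,
        "PPV": tp / (tp + fp),
        "NPV": tn / (tn + fn),
        "LR+": sens / (1 - spec),
        "LR-": (1 - sens) / spec,  # values near 1.0 mean a negative result is uninformative
    }

# Hypothetical counts: animal test result vs. observed human toxicity
for name, value in diagnostic_metrics(tp=50, fp=35, fn=45, tn=40).items():
    print(f"{name}: {value:.2f}")
```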

Ethical Debates and Animal Welfare

Ethical debates surrounding animal use in toxicology testing center on the moral justification of inflicting pain, distress, and death on sentient beings to assess safety for humans, with critics contending that such practices violate principles against unnecessary harm. Procedures like acute oral toxicity tests, which historically involved the LD50 protocol—forcing escalating doses until 50% mortality—have been criticized for causing severe suffering, including convulsions, organ failure, and prolonged agony without adequate analgesia, prompting reforms to phase out LD50 in favor of fixed-dose or up-and-down procedures that use fewer animals and humane endpoints. Animal welfare advocates, drawing on utilitarian frameworks, argue that speciesism underpins the acceptance of these tests, as the potential benefits to humans do not outweigh the inherent cruelty, especially given evidence of interspecies physiological differences that undermine the tests' validity. The 3Rs framework—Replacement, Reduction, and Refinement—introduced by William Russell and Rex Burch in 1959, serves as the cornerstone for addressing these concerns by promoting non-animal alternatives where feasible, minimizing animal numbers through statistical optimization, and alleviating suffering via better husbandry, anesthetics, and early humane endpoint criteria. In regulatory toxicology, application of the 3Rs has yielded measurable improvements; for instance, strategic focus on these principles in toxicity testing protocols resulted in major reductions in animal use, with one analysis reporting savings equivalent to thousands of animals spared in developmental programs. Oversight bodies, such as Institutional Animal Care and Use Committees (IACUCs) in the U.S., mandate 3Rs compliance, yet implementation varies, with persistent reports of inadequate refinement in chronic studies involving repeated dosing or dermal exposure, where animals endure weeks of distress without sufficient pain mitigation. Proponents of in vivo testing defend it as a necessary safeguard against human harm, citing empirical precedents where animal models detected hazards like thalidomide's teratogenicity before wider human exposure, arguing that ethical absolutism ignores causal chains linking animal data to regulatory bans on hazardous substances. Critics counter that the paradigm's scientific limitations—such as low concordance rates, where animal tests predict only a fraction of human toxicities—render much testing futile, with systematic reviews indicating poor translatability that inflates costs and delays without proportional safety gains. These tensions have fueled calls for ethical re-evaluation, particularly as non-animal methods gain traction, though entrenched regulatory reliance on animal data sustains the debate, highlighting a disconnect between scientific advancements and systemic inertia.

Regulatory Overreach and Economic Impacts

Critics of toxicology testing regulations contend that frameworks like the European Union's REACH impose demands for exhaustive data generation that surpass evidence-based necessities for risk management, constituting regulatory overreach by prioritizing precautionary principles over predictive efficacy. A 2009 analysis in Nature highlighted that REACH compliance could necessitate 20 times more animals for testing and incur costs sixfold higher than prior estimates, owing to requirements for in vivo studies on thousands of existing chemicals despite limited validation of their human relevance. Such mandates, proponents of reform argue, reflect an overreliance on traditional animal models with poor trans-species concordance, as evidenced by frequent post-market drug withdrawals for liver toxicity despite preclinical clearance. In the United States, the 2016 Toxic Substances Control Act (TSCA) amendments have drawn similar scrutiny for expanding EPA authority to require toxicity testing on high-priority chemicals without sufficient thresholds for data waivers or alternatives, potentially amplifying administrative burdens on industry. This has led to projections of substantial increases in animal use for compliance, as new prioritization processes mandate comprehensive evaluations absent streamlined options for low-exposure substances. Economically, these regulations exact significant tolls, with REACH's registration phase alone estimated to cost chemical firms €2.8–5.2 billion in testing and administrative expenses by 2018, diverting resources from innovation to redundant data submission. In the U.S., toxicity testing requirements for nanomaterials under TSCA-like scrutiny have been forecast to range from $250 million to $1.2 billion for existing substances, imposing barriers to market entry and escalating production costs that consumers ultimately bear through higher prices for goods like paints and coatings. Broader analyses indicate that such outlays contribute to reduced R&D investment in new chemistries, with firms relocating operations to jurisdictions with laxer standards, thereby undermining domestic competitiveness. These impacts persist despite debates over net benefits, as cost-benefit evaluations often undervalue foregone innovation while overemphasizing unquantified risk avoidance.
