An explanation is an epistemic and communicative process that provides understanding by elucidating the reasons, causes, or mechanisms accounting for why a fact, event, or phenomenon occurs, often through the relation between an explanans (the explaining factors) and an explanandum (the thing explained).[1] This concept spans disciplines, serving as a fundamental tool for inquiry, interpretation, and knowledge transmission in human cognition and discourse.

In philosophy, particularly the philosophy of science, explanations have been theorized through various models emphasizing logical, causal, or pragmatic structures. Aristotle's framework identifies four types of causes—material, formal, efficient, and final—as essential to answering "why" questions in natural inquiry.[1] John Stuart Mill advanced a deductive approach, positing that explanations involve deriving specific facts from general causal laws of invariable succession.[1] A landmark modern theory is the Deductive-Nomological (DN) model developed by Carl Hempel and Paul Oppenheim, which defines scientific explanation as the logical deduction of a singular event (explanandum) from general laws and antecedent conditions, ensuring both logical validity and empirical truth.[2] Subsequent critiques and alternatives include probabilistic models for nondeterministic cases, unificationist accounts that explain by integrating phenomena under broader principles, and causal-mechanical views focusing on underlying processes rather than mere deduction.[1] These theories debate whether explanations must be complete and objective or can be partial and context-dependent, with relata such as events, facts, or propositions requiring appropriate conceptualization for intelligibility.[1]

Beyond philosophy, explanations play a central role in scientific practice, where they enable prediction, unification, and empirical validation across fields like physics and biology. In mechanistic explanations prevalent in the life sciences, understanding arises from decomposing phenomena into component operations and their organization, as opposed to purely law-based accounts.[3] In linguistics and pragmatics, providing an explanation constitutes an assertive speech act, as classified by J.L. Austin and refined by John Searle, wherein the speaker commits to the truth of propositions that clarify, justify, or inform, distinguishing it from directives or expressives.[4]

In psychology, explanations underpin folk theories of mind, helping individuals attribute causes to behavior through intentional, causal, or enabling factors to achieve social and cognitive coherence. Bertram Malle's framework highlights explanations as dual cognitive-social acts, rooted in perceptions of intentionality and meaning-making in everyday interactions.[5] Contemporary applications extend to explainable artificial intelligence (XAI), where explanations address the opacity of machine learning models by rendering decisions transparent and interpretable to humans, fostering trust and accountability in high-stakes domains like healthcare and autonomous systems.[6] Overall, the study of explanation reveals its versatility as a bridge between objective reality and subjective comprehension, evolving with interdisciplinary advances.
Definition and Fundamentals
Core Definition
An explanation is fundamentally a communicative act in which one party seeks to render a phenomenon intelligible to another by addressing "why" or "how" questions, thereby fostering understanding without aiming to persuade or evaluate morally.[7] In philosophical terms, it involves presenting statements or narratives that make the occurrence or existence of an event, object, or state of affairs comprehensible, often by invoking covering laws, underlying mechanisms, or contextual factors that connect the phenomenon to broader principles.[7] For instance, Carl Hempel characterized explanation as an argument demonstrating that a phenomenon was to be expected given certain explanatory facts, emphasizing its role in rational inquiry.[7]

Explanations possess several key attributes that distinguish them as effective tools for comprehension. They are inherently contrastive, presupposing alternatives against which the phenomenon is evaluated—such as explaining why an event occurred rather than some expected alternative—thus highlighting what makes the actual outcome intelligible.[7] Relevance is another essential feature, requiring the explanation to directly address the specific puzzle or question at hand, often through causal or probabilistic relations that align with the inquirer's interests.[7] Additionally, explanations must be non-circular, avoiding infinite regress or tautological reasoning by grounding their claims in independent, verifiable premises rather than restating the phenomenon itself.[7]

A representative example illustrates these attributes: to explain why a bridge collapsed, one might invoke structural failure due to material fatigue under repeated stress, contrasting it with an intact bridge's resilience and providing relevant engineering principles, without merely describing the event's sequence.

The philosophical roots of explanation trace back to Aristotle, whose doctrine of the four causes—material, formal, efficient, and final—served as a precursor to modern notions by systematically addressing "why" a thing exists or changes, insisting that true knowledge requires grasping its causes.[8]
Historical Development
The concept of explanation traces its roots to ancient Greek philosophy, particularly in the work of Aristotle, who developed a framework of four causes to account for why things exist or occur. These include the material cause (the substance from which something is made), the formal cause (its structure or essence), the efficient cause (the agent that brings it about), and the final cause (its purpose or telos), with the latter emphasizing teleological explanations central to understanding natural phenomena.[8] This approach influenced Hellenistic thought and persisted through medieval scholasticism, where thinkers like Thomas Aquinas integrated Aristotelian causes with Christian theology, viewing explanations as aligning natural processes with divine purposes to resolve tensions between faith and reason.[9]

During the Enlightenment, empiricist critiques reshaped explanations around observable experience. David Hume challenged traditional causal explanations by arguing that causation is not directly perceived but inferred from constant conjunctions of events, undermining metaphysical necessities and emphasizing habitual associations derived from sensory impressions. Immanuel Kant responded by distinguishing explanatory understanding—grounded in the categories of the understanding, such as causality, which structure experience—from regulative principles of reason, which guide inquiry toward systematic unity without constituting objective knowledge.[10]

In the 19th century, positivism advanced scientific explanations as empirical generalizations. Auguste Comte's law of three stages posited that explanations evolve from theological and metaphysical to positive (scientific) forms, focusing on observable laws to describe social and natural phenomena.[11] John Stuart Mill extended this in his methods of causal inquiry, such as the method of difference, to identify explanatory regularities through inductive reasoning. By the early 20th century, logical empiricism refined these into deductive structures, culminating in Carl Hempel and Paul Oppenheim's 1948 deductive-nomological (DN) model, which formalized scientific explanation as deriving particular facts from general laws and initial conditions via deduction.[12]

Post-World War II developments shifted toward contextual and pragmatic conceptions of explanation. Thomas Kuhn's 1962 analysis of scientific paradigms portrayed explanations as embedded within incommensurable frameworks that evolve through revolutions rather than cumulative progress, challenging the universality of deductive models.[13] Paul Feyerabend's 1975 critique in Against Method further rejected rigid methodological constraints on explanations, advocating epistemological anarchism to allow diverse, context-dependent approaches that foster scientific creativity.[14] The 1960s saw intensified debates on explanatory power, with critics like Wesley Salmon questioning the DN model's adequacy in capturing causal processes and probabilistic elements in scientific practice.[15]
Key Distinctions
Explanation versus Argument
Explanations and arguments both involve reasoning from premises to a conclusion, but they serve distinct purposes in philosophical discourse. An explanation seeks to elucidate why or how a given fact or event is the case, presupposing the truth of the explanandum and aiming to increase understanding by connecting it to broader principles or mechanisms. For instance, explaining why the Earth orbits the Sun might invoke Newton's law of universal gravitation, detailing the attractive force between masses without seeking to prove the orbit's existence. In contrast, an argument endeavors to establish or defend the truth of a conclusion, often against skepticism or alternative views, by providing evidence or logical support for premises leading to that conclusion. An argument for the Earth's orbit might proceed: "All bodies with mass attract one another gravitationally; the Earth and Sun have mass; therefore, the Earth and Sun attract each other, an attraction that, combined with the Earth's tangential motion, sustains its orbit." Such an argument marshals premises to convince rather than merely inform.

Philosophically, this distinction is sharpened in Bas van Fraassen's pragmatic theory of explanation, which posits that explanations answer context-specific "why-questions" by providing relevant information that renders the phenomenon intelligible, without necessitating the defense of underlying premises. Explanations thus presuppose acceptance of the fact to be explained and the reliability of the explanatory framework, focusing on relevance to the questioner's interests rather than logical deduction from contested grounds. Arguments, however, challenge or establish premises, often employing deductive or inductive structures to build belief in the conclusion. Van Fraassen emphasizes that this pragmatic asymmetry arises because explanations are not truth-conferring arguments but responses tailored to explanatory demands, avoiding the need to justify the entire theoretical apparatus.

Illustrative examples highlight these roles in different domains. In science, an explanation of a hypothesis might describe how quantum entanglement accounts for observed particle correlations, assuming the phenomenon's occurrence to deepen comprehension. Conversely, in legal argumentation, counsel constructs an argument to persuade a jury of a defendant's guilt, marshaling evidence like witness testimony and forensic data to support the conclusion beyond reasonable doubt, rather than merely accounting for an accepted event. A common pitfall arises in pseudoscience, where purported explanations—such as astrological accounts of personality—are treated as persuasive arguments without predictive testability, leading to unfalsifiable claims that mimic scientific rigor but fail to distinguish themselves from mere rationalization.

A key criterion for distinguishing the two lies in their temporal orientation: explanations are typically backward-looking or retrodictive, accounting for past or observed events by subsuming them under laws or causes (e.g., why a bridge collapsed under Hempel's covering-law model). Arguments, by comparison, are often forward-looking, predictive, or normative, projecting outcomes or prescribing actions based on premises (e.g., arguing that reinforcing the bridge will prevent future collapses). This retrodictive focus in explanations underscores their role in understanding established facts, whereas arguments' predictive thrust supports decision-making and belief revision.[16]
Explanation versus Justification
Explanations and justifications serve distinct purposes in philosophical inquiry, with explanations focusing on accounting for why an event or phenomenon occurred through causal or mechanistic accounts, while justifications emphasize the normative rightness or evidential support for beliefs, actions, or policies. For instance, an explanation might address the question "Why did the car accident happen?" by citing icy roads as the causal factor, thereby elucidating the occurrence without evaluating its acceptability.[17] In contrast, a justification responds to "Why is this traffic policy acceptable?" by arguing that it maximizes utility for road safety, thereby validating its moral or practical legitimacy.[18] This distinction underscores that explanations are descriptive, aiming to render events intelligible, whereas justifications are prescriptive or evaluative, assessing whether something ought to be endorsed.[19]

The concepts diverge further across philosophical subfields. In ethics, John Rawls differentiates justificatory coherence—achieved through reflective equilibrium, where principles align with considered judgments—from explanatory narratives that merely recount historical or causal sequences without normative endorsement.[20] In the philosophy of science, explanations provide understanding of phenomena but do not entail their truth; a compelling explanatory model, such as a now-discredited theory, can illuminate patterns without guaranteeing factual accuracy, unlike justifications that demand evidential warrant for belief.[21] Thus, scientific explanations prioritize intelligibility over veridicality, while justifications hinge on establishing epistemic or moral validity.[7]

Illustrative examples highlight this contrast across domains. In legal philosophy, excuses offer explanations for an act's occurrence—such as duress compelling a crime—without affirming its wrongfulness, whereas defenses provide justifications that render the act permissible, like self-defense establishing moral rightness.[22] Similarly, historical events like wars can be explained through causal analyses of geopolitical tensions and resource conflicts, yet such accounts do not justify their moral acceptability, leaving ethical evaluation to separate normative scrutiny.[23]

Despite these boundaries, overlaps arise in rationalization, where explanatory reasons are invoked to mimic justifications, often post hoc, to defend actions or beliefs without genuine normative grounding.[17] Explanations remain fundamentally descriptive, detailing how or why something transpired, while justifications are prescriptive, affirming what should be upheld; this separation prevents conflating causal accounts with moral or epistemic endorsements, though rationalizations exploit the ambiguity for persuasive effect.[24]
Explanation versus Description
A fundamental distinction between explanation and description lies in their respective aims: descriptions provide neutral reports of observable facts, while explanations interpret those facts by identifying underlying causes or principles that account for why the phenomena occur. For instance, stating "the sky is blue" constitutes a mere description of an observable property, whereas explaining it through Rayleigh scattering—where shorter-wavelength blue light from sunlight is preferentially scattered by atmospheric molecules—reveals the causal mechanism responsible for the appearance.[25] This interpretive step in explanation goes beyond listing particulars to invoke general principles that connect the observation to broader natural laws.[26]

Philosophically, descriptions maintain a stance of neutrality, presenting agreed-upon facts without inherent value judgments, whereas explanations often require abstraction and idealization, which introduce interpretive commitments that may not fully align with real-world complexities. Nancy Cartwright critiques these explanatory ideals, arguing that scientific explanations rely on idealized models and laws (such as those in physics) that hold only under counterfactual conditions, involving abstractions that simplify reality to highlight causal structures but risk misrepresenting actual events.[27] This abstraction enables explanations to provide reasons for phenomena but distinguishes them from the particularity and non-inferential nature of descriptions, which avoid such theoretical overlays.[26]

Illustrative examples underscore this contrast across domains. A weather report might describe current conditions—such as temperature, precipitation, and wind speed—as a factual chronicle of observables, remaining confined to immediate particulars without inferring causes.[28] In contrast, a climate model explains long-term patterns, such as global warming trends, by unifying diverse data under principles like greenhouse gas effects and radiative forcing, interpreting why averages shift over decades. Similarly, in historiography, a chronological narrative describes events in sequence (e.g., dates and actions in a battle), but an explanation interprets those events through causal narratives, such as socioeconomic factors driving a revolution, thereby transcending mere reportage to reveal interpretive reasons.[29]

Explanations further differ by their criterion of unifying disparate facts under overarching principles, fostering a systematic understanding that descriptions lack. As Michael Friedman articulates, scientific explanation advances comprehension by deriving multiple phenomena from a smaller set of fundamental laws, reducing the apparent independence of facts—unlike descriptions, which treat observables as isolated and non-inferential.[30] This unification criterion ensures explanations provide interpretive depth, connecting particulars to generalizable answers to "why" questions about phenomena.[31]
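The explanatory surplus over bare description can be made quantitative in the sky-color case. The sketch below uses representative wavelengths of roughly 450 nm for blue and 700 nm for red light (illustrative values, not drawn from the cited sources) to show why Rayleigh scattering favors blue:

```latex
% Rayleigh scattering: scattered intensity varies as the inverse
% fourth power of wavelength, I \propto \lambda^{-4}.
\[
\frac{I_{\text{blue}}}{I_{\text{red}}}
  = \left(\frac{\lambda_{\text{red}}}{\lambda_{\text{blue}}}\right)^{4}
  \approx \left(\frac{700\ \text{nm}}{450\ \text{nm}}\right)^{4}
  \approx 5.9
\]
```

The description "the sky is blue" reports this ratio's visible effect; the scattering law explains it by deriving the ratio from a general principle.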
Types of Explanations
Causal Explanations
Causal explanations attribute phenomena to preceding events or conditions that bring them about, typically by invoking necessary or sufficient conditions, probabilistic regularities, or chains of influence. These explanations answer why-questions by identifying causes that make a difference to the occurrence of an effect, distinguishing them from mere correlations. For example, the assertion that "smoking causes lung cancer" elucidates the disease's onset through probabilistic links between tobacco exposure, genetic mutations, and tumor development, where the exposure increases the likelihood of malignancy beyond baseline rates.[32]

Key concepts in causal explanations include counterfactual dependence and manipulability. Philosopher David Lewis formalized counterfactual dependence in his 1973 analysis, defining causation such that event C causes event E if E counterfactually depends on C—meaning that, in the closest possible world where C does not occur, E also does not occur.[33] This approach captures intuitive notions of causation by emphasizing what would have happened absent the cause. Complementing this, James Woodward's interventionist or manipulability theory posits that X causes Y if an intervention on X would change Y, while holding other variables constant; this framework, detailed in his 2003 book Making Things Happen, tests causality through hypothetical or actual manipulations, proving especially useful in scientific contexts where experiments isolate variables (a computational sketch appears at the end of this subsection).[34]

Representative examples illustrate causal explanations across disciplines. In physics, the collision of billiard balls provides a classic case: the incoming ball's velocity and momentum cause the target ball's motion via elastic transfer of kinetic energy, adhering to Newtonian laws of conservation.[32] In biology, Darwinian natural selection operates as a causal process, where environmental pressures and heritable variations cause differential reproductive success, leading to adaptive traits in populations over generations, as outlined in Charles Darwin's 1859 On the Origin of Species.[35]

Causal explanations face notable challenges, including directionality problems and overdetermination. Directionality issues arise when causes and effects appear simultaneous or bidirectional, complicating the identification of temporal precedence, as seen in feedback loops where an effect reinforces its cause.[36] Overdetermination occurs when multiple independent causes are each sufficient for the effect, such as two bullets from separate assassins striking a target simultaneously, raising questions about which cause truly "makes the difference" without redundancy.[36] These challenges, rooted in philosophical inquiries like David Hume's 18th-century view of causation as habitual constant conjunction rather than inherent necessity, underscore the need for refined criteria in applying causal models.[37]
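Woodward's manipulability criterion lends itself to a computational illustration: X counts as a cause of Y if intervening to set X shifts the distribution of Y. The Python sketch below uses a toy structural model of the smoking example; the variable names and probabilities are invented purely for illustration.

```python
"""Minimal sketch of an interventionist test for causation, in the
spirit of Woodward's manipulability account. The structural model
(variables and probabilities) is hypothetical."""

import random

def simulate(do_smoking=None, n=100_000):
    """Estimate the cancer rate, optionally intervening on smoking."""
    cancer_count = 0
    for _ in range(n):
        # Exogenous background factor (e.g., a genetic predisposition).
        gene = random.random() < 0.3
        if do_smoking is None:
            # Observational regime: smoking depends on the background factor.
            smoking = random.random() < (0.6 if gene else 0.2)
        else:
            # Interventional regime: set smoking directly, severing its
            # dependence on the background factor (the "intervention").
            smoking = do_smoking
        # Outcome depends on both smoking and the background factor.
        p_cancer = 0.02 + (0.10 if smoking else 0.0) + (0.03 if gene else 0.0)
        cancer_count += random.random() < p_cancer
    return cancer_count / n

# Smoking qualifies as a cause on the manipulability criterion because
# forcing it on versus off shifts the outcome distribution.
print("P(cancer | do(smoking=1)) ~", round(simulate(True), 3))
print("P(cancer | do(smoking=0)) ~", round(simulate(False), 3))
```

The contrast between the observational and interventional regimes is the point of the sketch: mere correlation between smoking and cancer could reflect the shared background factor, whereas the forced setting of the variable isolates its difference-making contribution.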
Teleological Explanations
Teleological explanations account for phenomena by invoking goals, purposes, or functions that an entity serves or achieves, often framing the "why" of a feature in terms of its end-directed role rather than its origins. For instance, stating that "birds have wings for flight" attributes the presence of wings to the purpose of enabling flight, emphasizing prospective ends over antecedent causes.[38] This approach contrasts with purely causal explanations by prioritizing forward-looking functions, though it may reference causal regularities in natural selection to ground those functions.[38]

Philosophically, teleological explanations trace back to Aristotle's doctrine of the four causes, where the final cause represents the purpose or end (telos) for which something exists or occurs, such as the growth of teeth in animals for the sake of chewing food.[8] In modern evolutionary biology, Larry Wright revived and refined this idea through his etiological theory of functions, proposing that a trait's function is the effect for which it was selected in its evolutionary history—e.g., the function of the heart is pumping blood because past hearts that did so contributed to survival and reproduction. This etiological view, which is backward-looking and historical, differs from dispositional accounts, which define functions based on a trait's current capacity to perform a beneficial role, regardless of historical selection—such as viewing a trait's function in terms of its disposition to maintain the organism's fitness in the present environment.[38]

Examples of teleological explanations abound in biology and the design of artifacts. In biology, feathers may be explained as serving the function of insulation or mating display, selected for those ends over evolutionary time.[38] For artifacts, a hammer's shape is teleologically explained by its purpose of driving nails, where the design intentionally fulfills a human goal.[38] However, such explanations face critiques in physics, where the deterministic framework of classical mechanics, epitomized by Pierre-Simon Laplace's vision of a universe fully predictable from initial conditions and laws, rejects final causes in favor of efficient causation alone, viewing nature as a mechanism without inherent purposes.[39]

Limitations of teleological explanations include their potential reduction to underlying causal chains, as argued by Daniel Dennett through his "intentional stance," which treats purpose-attributions as useful predictive strategies rather than literal descriptions of goal-directed mechanisms, thereby dissolving teleology into physical causation. Additionally, they risk anthropomorphism by projecting human-like intentions onto non-intentional processes, such as implying that natural selection "aims" for adaptation, which can mislead if not carefully distinguished from historical etiology.[38]
Mechanistic Explanations
Mechanistic explanations focus on decomposing a phenomenon into its underlying components, operations, and interactions to reveal how the system produces its effects. This approach, often termed the "new mechanism" framework, analyzes mechanisms as consisting of entities (the parts of the system, such as objects or substances) and activities (the productive operations or interactions among those entities), which are spatially and temporally organized to generate regular changes in the phenomenon. For instance, the process of photosynthesis can be mechanistically explained by detailing entities like chlorophyll molecules and activities such as electron transport chains that convert captured light energy into chemical energy. This framework, developed by philosophers Peter Machamer, Lindley Darden, and Carl F. Craver, emphasizes productivity—how the organized entities and activities directly bring about the phenomenon—over mere correlation or external causation.[40]

In applications, mechanistic explanations are prominent in neuroscience, where they elucidate cognition through the breakdown of neural circuits into interconnected neurons, synapses, and signaling activities that produce behaviors or mental states. For example, explanations of memory formation might detail how hippocampal circuits involve entities like pyramidal cells engaging in activities such as long-term potentiation to organize information storage. In engineering, mechanistic explanations aid in diagnosing system failures by identifying component breakdowns and their interactions, such as how fatigue in metal structures under cyclic loading leads to crack propagation and eventual collapse in bridges or machinery. These analyses enable targeted interventions, like redesigning materials to mitigate stress concentrations.[41]

Mechanistic explanations offer advantages in handling complex, multilevel phenomena by bridging micro-level entities (e.g., molecular interactions) to macro-level outcomes (e.g., organismal functions), providing a structured way to address "how" questions through the productivity of organized components. Unlike broader causal accounts that link external events, mechanistic approaches delve into the internal architecture, revealing the spatial and temporal arrangements that make the phenomenon possible across scales. This multilevel productivity allows for more granular understanding, such as how subcellular processes contribute to cellular behaviors without reducing one level to another.[42]

Critiques of mechanistic explanations highlight their limitations in applicability and accuracy. Not all phenomena lend themselves to mechanistic decomposition; for example, quantum events like particle entanglement often resist description in terms of productive entities and activities, favoring non-mechanistic accounts that prioritize probabilistic or holistic features instead. Additionally, mechanistic models frequently rely on idealizations—such as abstracting away environmental influences or assuming simplified interactions—which can lead to incomplete or misleading representations if the neglected factors prove crucial to the system's behavior. These issues underscore the need for complementary explanatory strategies in domains beyond classical physical or biological systems.[43][44]
Theories of Explanation
Deductive-Nomological Model
The Deductive-Nomological (DN) model of scientific explanation, proposed by Carl Hempel and Paul Oppenheim, posits that a proper explanation consists of deducing the event or phenomenon to be explained—termed the explanandum—from a set of general laws of nature and specific initial or antecedent conditions, collectively known as the explanans.[45] This model views explanation as a logical deduction where the truth of the explanans guarantees the truth of the explanandum, ensuring that the explanation is both nomological (law-based) and deductive in structure.[21] Formulated in 1948 amid the influence of logical positivism, the DN model sought to provide a unified, formal account of explanation in the sciences, drawing on the Vienna Circle's emphasis on logical empiricism and the verification principle to treat scientific reasoning as rigorously analyzable.[45] Hempel and Oppenheim's work built on earlier positivist ideas, aiming to distinguish scientific explanations from mere descriptions by requiring universal laws as essential premises.[46]

The formal schema of the DN model can be represented as follows:
L1, L2, ..., Ln: general laws of nature (universal hypotheses).
C1, C2, ..., Ck: statements of antecedent conditions (particular facts).
Therefore, by logical deduction from the laws (L) and conditions (C): E, the explanandum event or state.
This structure ensures explanatory adequacy through three conditions: the explanans must logically imply the explanandum, the sentences in the explanans must be true, and the explanans must contain at least one general law.[45] For instance, the expansion of a gas in a container can be explained by deducing it from the ideal gas law (PV = nRT) as a general law (L) and specific conditions such as an increase in temperature (C), yielding the observed volume change (E).[21]

A key feature of the DN model is its symmetry thesis: the logical form of explanation is identical to that of prediction, meaning that any valid DN explanation could also serve as a prediction if the explanandum were unknown in advance.[45] This symmetry underscores the model's commitment to determinism in sciences where laws allow precise forecasting, such as classical mechanics.[46]

The DN model's strengths lie in its applicability to deterministic sciences, where explanations align closely with predictive success, providing a clear criterion for what counts as a scientific explanation and promoting unity across disciplines like physics.[21] It excels in cases involving strict causal laws, offering a framework that emphasizes empirical testability and logical rigor, which resonated with mid-20th-century scientific philosophy.[46] However, the model faces significant weaknesses, particularly in handling explanatory irrelevance and non-deterministic phenomena. A classic counterexample is the flagpole case: the height of a flagpole can be "deduced" from the length of its shadow and laws of optics under known conditions (e.g., noon sunlight), yet intuitively, the shadow does not explain the flagpole's height—the height explains the shadow, highlighting how the DN model permits irrelevant deductions as explanations.[21] This issue, along with failures in domains lacking strict laws, such as certain biological or historical events, reveals the model's limitations in capturing genuine causal directionality or explanatory relevance.[46]
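A worked instance of the schema, filling in the ideal-gas example above with illustrative numbers (the specific temperatures are hypothetical), makes the deductive step explicit:

```latex
% DN schema instantiated: the gas law (L) plus antecedent
% conditions (C) deductively yield the explanandum (E).
\[
\underbrace{PV = nRT}_{\text{law } L},\qquad
\underbrace{P,\ n \text{ constant};\ T:\ 300\,\mathrm{K} \to 360\,\mathrm{K}}_{\text{conditions } C}
\ \Longrightarrow\
\underbrace{\frac{V_2}{V_1} = \frac{T_2}{T_1} = 1.2}_{\text{explanandum } E}
\]
```

Because the same derivation run forward yields a forecast of the expansion, the example also exhibits the symmetry thesis: the deduction explains the volume change after the fact and would have predicted it beforehand.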
Probabilistic Theories
Probabilistic theories of explanation address phenomena where outcomes are not strictly determined but occur with certain probabilities, extending beyond deterministic models by incorporating statistical laws and inductive reasoning. The Inductive-Statistical (IS) model, developed by Carl Hempel, posits that an explanation consists of particular facts and general statistical laws that render the explanandum highly probable, typically with a probability greater than 0.5 under the requirement of maximal specificity to ensure the explanation is as precise as possible.[47] For instance, the statement that smoking increases the risk of lung cancer to approximately 15% in a specified reference class (e.g., long-term heavy smokers without other risk factors) can serve as an IS explanation if supported by epidemiological laws and initial conditions.[48]

Key developments in probabilistic theories emphasize causal underpinnings to resolve limitations in purely statistical accounts. Wesley Salmon advanced the view that explanations involve causal processes—spatio-temporally continuous entities that transmit causal influences—along with causal interactions, providing a framework for probabilistic explanations in non-deterministic systems like quantum mechanics, where outcomes such as particle decay follow probabilistic laws derived from the Schrödinger equation.[49] Similarly, Nancy Cartwright introduced the concept of capacities, arguing that probabilistic explanations rely on stable but context-dependent capacities of entities rather than universal strict laws, allowing for explanations of singular events through measured propensities under specific conditions.

Despite these advances, probabilistic theories face significant challenges. The reference class problem arises because the probability of an event depends on the chosen reference class; for example, the probability of developing lung cancer varies dramatically whether the class is "all humans," "smokers," or "smokers exposed to asbestos," complicating the selection of an appropriate class for explanation.[47] Additionally, explanatory asymmetry persists, as probabilistic relations alone do not account for why causes explain effects but not vice versa—Salmon's causal processes address this by grounding explanations in directed causal transmission, distinguishing forward-looking explanations from retrospective ones.[49] In quantum mechanics, this asymmetry manifests in explaining measurement outcomes via probabilistic wave function evolution, but not the reverse.[50]
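The reference class problem can be made concrete numerically. In the sketch below, only the 15% figure for smokers echoes the example above; the other rates are invented for illustration and are not epidemiological estimates.

```python
# Illustrative (invented) lifetime lung-cancer rates for nested
# reference classes; only the 15% figure for smokers echoes the
# example in the text above.
rates = {
    "all adults": 0.06,
    "long-term heavy smokers": 0.15,
    "long-term heavy smokers exposed to asbestos": 0.30,
}

# The same individual may belong to all three classes, yet the IS
# explanation assigns a different probability depending on which class
# is chosen. Hempel's requirement of maximal specificity says to use
# the narrowest class for which reliable statistics are available.
for reference_class, p in rates.items():
    print(f"P(lung cancer | {reference_class}) = {p:.2f}")
```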
Pragmatic and Unification Theories
The pragmatic theory of explanation, primarily developed by Bas C. van Fraassen, posits that explanations are context-dependent answers to specific why-questions, tailored to the interests and background knowledge of the audience rather than adhering to a fixed logical structure.[51] According to this view, an explanation succeeds when it provides information relevant to the contrast class and relevance relations specified in the question, such as why a patient has a fever in contrast to not having one, where a doctor's diagnosis of an infection addresses the patient's practical concerns about treatment.[51] This approach emphasizes the illocutionary aspect of explanation as a communicative act, avoiding the need for universal laws or causal necessities by focusing on pragmatic utility in scientific and everyday discourse. For instance, the same phenomenon might be explained differently depending on whether the audience seeks medical advice or epidemiological patterns, highlighting the theory's flexibility in accommodating varied explanatory demands.[51]

Building on pragmatic elements, the unification theory, advanced by Philip Kitcher, conceives of explanations as deriving from a system's ability to subsume diverse phenomena under a minimal set of argument patterns grounded in fundamental principles.[52] Kitcher argues that explanatory power arises from the economy and coherence of these derivations, where science progresses by unifying disparate facts—such as celestial and terrestrial mechanics under Newton's laws—into a cohesive framework that reveals underlying regularities without invoking ad hoc adjustments.[52] This contrasts with purely probabilistic accounts by prioritizing structural harmony over statistical coverage, as unification enhances understanding by showing how seemingly unrelated events stem from shared theoretical commitments.[52] In physics, for example, the unification of electromagnetic and weak forces in the electroweak theory exemplifies how reducing multiple phenomena to fewer principles amplifies explanatory depth.[53] A worked instance of Newtonian unification appears at the end of this subsection.

Subsequent developments have refined these theories by integrating speech-act and constitutive dimensions. Peter Achinstein extended the pragmatic framework into an illocutionary theory, defining explanation as a deliberate communicative act intended to increase the recipient's understanding of a proposition, subject to felicity conditions like sincerity and relevance, much like other speech acts in Austin's taxonomy. This views explanations not as static propositions but as dynamic interactions, where success depends on the explainer's goals and the audience's response.
Complementing this, Michael Friedman's work on constitutive explanations highlights how certain principles—such as coordinate conventions in general relativity—do not causally explain events but constitute the very framework within which empirical laws operate, unifying explanatory practices across paradigm shifts in science.

Critiques of pragmatic theories often center on their perceived subjectivity, as the dependence on context and audience relevance risks rendering explanations relativistic and lacking objective criteria for evaluation, potentially undermining their role in scientific objectivity.[21] For instance, Kitcher and Salmon have argued that van Fraassen's emphasis on tailored answers fails to distinguish genuine explanations from mere descriptions that happen to satisfy a question.[21] Unification theories, meanwhile, face objections for overemphasizing simplicity and pattern reduction at the expense of causal specificity, as unifying under broad principles may gloss over mechanisms crucial for detailed understanding, such as in biological or historical explanations where causal chains defy neat subsumption. These limitations have prompted calls for hybrid models that balance pragmatic adaptability with unification's structural insights.
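As a minimal sketch of the unification idea, the single gravitational argument pattern below derives a terrestrial regularity (the familiar 9.8 m/s² of free fall) from the same law that governs planetary motion; the constants are standard approximate values, filled in for illustration:

```latex
% One argument pattern, many phenomena: Newton's law of universal
% gravitation instantiated for free fall at the Earth's surface.
\[
F = \frac{G M m}{r^{2}}
\quad\Longrightarrow\quad
g = \frac{G M_{\oplus}}{R_{\oplus}^{2}}
  \approx \frac{(6.67\times10^{-11})\,(5.97\times10^{24})}{(6.37\times10^{6})^{2}}
  \approx 9.8\ \mathrm{m/s^{2}}
\]
```

Instantiating the same pattern with the Sun's mass and a planet's orbital radius yields Kepler-type orbital motion, which is the sense in which one schema unifies celestial and terrestrial mechanics.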
Applications and Contemporary Issues
Explanations in Science
In scientific methodology, explanations are integral to the hypothetico-deductive approach, where scientists propose explanatory hypotheses and derive testable predictions to evaluate them against observational data.[54] This method structures empirical inquiry by linking proposed explanations to general laws or principles, allowing for systematic falsification or corroboration through experimentation.[55] For instance, in physics, Albert Einstein's 1915 application of general relativity explained the anomalous 43 arcseconds per century precession of Mercury's perihelion, a discrepancy unresolved by Newtonian mechanics, by deriving the effect from the theory's geodesic equations in curved spacetime (a worked form of this prediction appears at the end of this subsection). The deductive-nomological model, which formalizes such explanations as logical deductions from laws and initial conditions, underpins many applications of this method in empirical sciences.[55]

In biology, explanations often center on evolutionary processes, where traits are accounted for through mechanisms of natural selection, genetic variation, and adaptation over time. Charles Darwin's foundational framework in On the Origin of Species posits that species diversity arises from descent with modification, driven by differential survival and reproduction favoring advantageous heritable traits in varying environments.[56] This approach provides ultimate explanations for biological phenomena, such as the development of antibiotic resistance in bacteria, by tracing patterns to historical contingencies and selective pressures rather than proximate causes alone.[57]

Social sciences employ structural explanations to understand inequality, emphasizing systemic arrangements over individual agency. For example, Pierre Bourdieu's theory of capital—encompassing economic, cultural, and social forms—explains persistent disparities in access to resources and opportunities as outcomes of relational positions within social fields, where dominant groups reproduce advantages through habitus and symbolic power.[58] Similarly, Karl Marx's analysis frames class-based inequality as rooted in the capitalist mode of production, where exploitation arises from the extraction of surplus value in labor relations, leading to stratified wealth distribution.[59] These explanations highlight how institutional structures, such as education systems or labor markets, perpetuate inequality independently of personal merit.[60]

Contemporary scientific explanations grapple with inter-theory integration, particularly in quantum gravity, where reconciling general relativity's description of spacetime curvature with quantum field theory's probabilistic particles remains elusive. Efforts like loop quantum gravity and string theory seek unified explanations for black hole entropy and cosmic inflation, positing that gravity emerges from quantized spacetime structures or higher-dimensional geometries.
As of 2025, physicists are developing laboratory experiments, such as those using entanglement to test gravity's quantum nature, to probe these questions empirically.[61][62] Recent debates have increasingly focused on simulation-based explanations, enabled by post-2020 computational advances in machine learning and high-performance modeling, which generate hypothetical scenarios to elucidate complex systems like climate dynamics or protein folding without analytical solutions.[63] These models, such as those using neural networks for inference in intractable biological processes, offer explanatory power by revealing emergent patterns and causal pathways through iterative virtual experimentation.[64]

A persistent challenge in scientific explanations is underdetermination, where available data can support multiple incompatible theories equally well, complicating theory choice.[65] This arises because observations typically constrain theories holistically, leaving room for empirically equivalent alternatives, as seen in rival interpretations of quantum mechanics or historical cases like Ptolemaic versus Copernican models.[66] Scientists address this through auxiliary criteria like simplicity, predictive novelty, and coherence with broader theoretical frameworks, though no single metric resolves all instances.[65]
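The Mercury case cited at the start of this subsection illustrates how a quantitative derivation carries the explanatory load. A rough worked form, using standard approximate orbital values, is:

```latex
% General-relativistic perihelion advance per orbit, evaluated for
% Mercury with GM_\odot ~ 1.327e20 m^3/s^2, a ~ 5.79e10 m, e ~ 0.206.
\[
\Delta\phi = \frac{6\pi G M_{\odot}}{c^{2}\, a\,(1 - e^{2})}
\approx 5.0\times10^{-7}\ \text{rad per orbit}
\]
```

Accumulated over Mercury's roughly 415 orbits per century, this comes to about 43 arcseconds per century, matching the anomaly that Newtonian mechanics left unexplained.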
Explanations in Philosophy and Everyday Contexts
In philosophy, explanations extend beyond empirical domains into metaphysics, where ontological frameworks seek to account for the nature of existence and reality. Metaphysical explanations often involve grounding relations, wherein more fundamental entities or principles provide the basis for less fundamental ones, such as how abstract universals underpin particular instances of being. This approach contrasts with scientific explanations by prioritizing conceptual dependence over empirical causation, as explored in analyses of explanatory metaphysics that connect ontology to broader issues like commitment to entities.[67][68]

In ethical philosophy, narrative explanations illuminate human actions by weaving personal histories and contextual stories to justify moral responsibility and decision-making. These narratives render actions intelligible by embedding them within broader life stories, emphasizing psychological connectedness and the coherence of an agent's choices over time, as seen in discussions of structural conditions for ownership in moral contexts. Such explanations differ from purely causal accounts by focusing on the interpretive and emplotting role of stories in ethical evaluation.[69][70]

Everyday explanations draw on folk psychology, the commonsense framework humans use to interpret intentions and behaviors in daily interactions. For instance, troubleshooting practical issues—like determining that a car's failure to start stems from a dead battery—relies on intuitive causal reasoning to identify and isolate contributing factors, often selecting a single salient cause amid multiple possibilities. Folklore and myths function as proto-explanations in pre-philosophical traditions, providing narrative accounts for natural events, origins, and social norms that prefigure systematic inquiry, such as tales attributing seasonal changes to divine interventions.[71][72][73]

Cultural variations in explanations highlight diverse ways of making sense of the world, including animistic perspectives in indigenous knowledge systems, where natural phenomena are understood through relations with spiritual essences inhabiting animals, plants, and landscapes. These cross-cultural approaches emphasize relational ontologies, viewing events as outcomes of interactions between human and non-human agents, as opposed to individualistic Western causal models. In post-2010s cognitive science and psychology, narrative explanations have emerged as key tools for modeling intuitive theories of mind, enabling individuals to construct coherent stories from social cues and personal experiences to predict and understand behavior.[74][75][76]

Despite their utility, folk explanations in philosophy and daily life are vulnerable to cognitive biases, particularly confirmation bias, which leads individuals to selectively seek or interpret evidence that aligns with preexisting beliefs while ignoring contradictory information. This bias undermines the reliability of intuitive reasoning, as seen in everyday judgments where initial assumptions about causes—such as attributing a misfortune to personal fault—persist despite alternative evidence.[77][78]
Debates on Explanatory Pluralism
Explanatory pluralism posits that no single model of scientific explanation adequately captures all instances of explanatory practice, as different domains and questions demand tailored approaches. For instance, causal explanations predominate in physics, while functional or teleological explanations are more apt in biology, reflecting the diverse aims and structures of scientific inquiry. This thesis, defended by philosophers such as Cory Wright and William Bechtel, argues against explanatory monism by highlighting how the psychological and cognitive sciences employ multiple, non-equivalent explanatory strategies without one subsuming the others. Similarly, Christopher Pincock advocates for accommodating pluralism within a unified framework, suggesting that explanations share core features while varying in relevance relations based on context.[79][80][81]

Central debates surrounding explanatory pluralism revolve around reductionism versus explanatory autonomy and the irrealist challenge. Reductionists contend that higher-level explanations can be fully derived from more fundamental ones, potentially unifying science under a single explanatory model; pluralists counter that phenomena exhibit multiple realizability, preserving the autonomy of disciplinary levels and rejecting strict reduction as empirically inadequate. This tension underscores whether pluralism undermines scientific unity or enriches it by tolerating irreducible diversity. On irrealism, Arthur Fine's fictionalist approach treats theoretical explanations as useful fictions rather than literal truths, aligning with pluralism by denying a monolithic realist commitment to explanatory posits across domains.[82][83][84]

Recent developments in the 2020s have integrated explanatory pluralism with Bayesian frameworks and explainable AI (XAI). Post-2015 Bayesian pluralism critiques grand unifying theories like the free-energy principle, arguing instead for diverse probabilistic models that explain cognitive phenomena without a single hierarchical structure. In XAI, pluralism addresses the ambiguity of "explanation" by distinguishing types such as explication (rendering model outputs understandable) and generalization (forming stable predictions), enabling tailored explanations for stakeholders in AI systems; a minimal sketch of one such post-hoc technique follows this section. As of 2025, XAI applications have expanded to clinical decision support systems, such as explaining tumor malignancy predictions in oncology, alongside advances in interpretable deep learning and frameworks like AI Explainability 360, with the XAI market projected to reach $9.77 billion.[80][85][86][87] These advancements highlight pluralism's adaptability to interdisciplinary challenges.

The implications of explanatory pluralism foster tolerance for varied scientific practices, allowing integration across disciplines without forcing uniformity, but they raise concerns about relativism, where any explanation might be deemed equally valid absent clear criteria for evaluation. This balance encourages methodological diversity while guarding against explanatory fragmentation.[82][88]
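As an illustration of the "explication" sense of explanation in XAI, the following sketch applies permutation feature importance, a standard model-agnostic, post-hoc technique, via scikit-learn. The dataset is synthetic and the setup hypothetical; it is not drawn from the clinical systems or the AI Explainability 360 toolkit cited above.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a tabular prediction task (purely illustrative;
# not a real clinical dataset).
X, y = make_classification(n_samples=500, n_features=6, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time and measure the
# drop in held-out accuracy. The result is a model-agnostic, post-hoc
# "explication" of which inputs the opaque model relies on.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: mean importance = {score:+.3f}")
```

Permutation importance explains by counterfactual degradation: a feature matters to the extent that destroying its information lowers predictive accuracy, a probabilistic, difference-making notion of relevance rather than a mechanistic one, which is itself an instance of the pluralism the section describes.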