
Semantics

Semantics is the study of meaning in language, encompassing how linguistic forms such as words, phrases, and sentences relate to concepts, entities, and the relations between those forms themselves, including phenomena like synonymy, antonymy, and hyponymy. It primarily focuses on stable, context-independent meanings, distinguishing it from pragmatics, which addresses meaning variations influenced by context and speaker intent. Emerging as a formal discipline in the late 19th century through the work of scholars like Michel Bréal, semantics gained prominence in linguistics during the 1960s and 1970s, evolving alongside developments in generative grammar and formal logic while competing with alternative terms like semasiology and semology. Within linguistics, semantics divides into key branches: lexical semantics, which investigates individual word meanings, their semantic features (e.g., +animate or -human), and inter-word relations such as entailment and opposition; and compositional semantics, which explores how the meanings of complex expressions are systematically built from their components, guided by the principle of compositionality. Central concepts include the distinction between sense (the abstract, dictionary-like meaning of an expression) and referent (the specific entity it denotes in context), as well as truth-conditional approaches that analyze meaning through conditions under which statements are true. These elements enable semantic analysis to explain phenomena like why certain sentences are semantically odd (e.g., "The dog barked the cat") based on incompatible features. Beyond linguistics, semantics intersects with the philosophy of language, where it examines foundational questions about meaning, truth, and reference to understand how expressions connect to the world. In computer science, particularly natural language processing, it underpins efforts to model meaning for artificial intelligence, enabling machines to interpret and generate human-like text by formalizing meanings in logical structures. This interdisciplinary scope highlights semantics' role in advancing fields from lexicography to artificial intelligence, with ongoing research emphasizing integration with syntax and pragmatics for a fuller account of communication.

Overview and Scope

Definition

Semantics is a subfield of linguistics that investigates the meaning of linguistic expressions, focusing on how signs—such as words, phrases, and sentences—relate to the objects, concepts, or situations they represent. This study encompasses both denotation, the literal or dictionary meaning of a term that identifies its essential properties, and connotation, the additional associations or implications that evoke emotions, cultural values, or contextual nuances beyond the core denotation. For instance, the denotation of "home" refers to a physical dwelling, while its connotation might include feelings of warmth and belonging. The term "semantics" derives from the French sémantique, coined in 1883 by the philologist Michel Bréal to describe the scientific study of meaning, ultimately tracing back to the Greek sēmantikos ("significant"), from sēma ("sign") and sēmainō ("to signify"). This etymology underscores semantics' foundational concern with signification, distinguishing it from earlier terms like semasiology. Semantics is distinct from syntax, which examines the structural rules governing how signs combine to form well-formed expressions without regard to their interpretive content, and pragmatics, which addresses how meaning varies based on context, speaker intent, and situational factors. As articulated by Charles Morris in his foundational semiotic framework, syntax pertains to relations among signs, semantics to relations between signs and their referents, and pragmatics to relations between signs and their interpreters. These boundaries, while not always rigid, highlight semantics' emphasis on stable, context-independent meanings. The scope of semantics extends beyond verbal language to include nonverbal elements like gestures and facial expressions, as well as symbolic systems such as icons, indices, and arbitrary signs that convey meaning independently of linguistic structure. This broader purview allows semantics to analyze how diverse forms of communication encode and transmit significance, from spoken words to visual symbols in cultural artifacts.

Relation to Linguistics and Philosophy

In linguistics, semantics forms one of the primary components of language analysis, integrated within a broader framework that includes phonology (the study of sounds), morphology (the study of word structure), syntax (the study of sentence structure), and pragmatics (the study of language use in context). This positioning underscores semantics' role in bridging formal linguistic structures with interpretive processes, enabling the systematic exploration of how linguistic elements convey meaning across languages. Philosophically, semantics originates in the philosophy of language and maintains strong ties to epistemology, where it examines how linguistic meaning facilitates knowledge and representation, such as through propositional attitudes like belief and assertion that depend on semantic content for epistemic justification. It also connects to metaphysics by addressing reference—the way expressions denote entities or states of affairs in the world—thus probing the nature of being and existence through linguistic structures, as seen in theories where meanings are functions mapping possible worlds to objects or truth values. A pivotal figure in linking these domains is Ferdinand de Saussure, whose structuralist account of the linguistic sign introduced the distinction between the signifier (the sound-image or form of a linguistic sign) and the signified (the concept it evokes), emphasizing their arbitrary yet inseparable union as the basis for meaning in language systems. This framework highlights semantics' relational nature, where signs gain value through differences within the linguistic structure rather than inherent properties. Semantics further overlaps with semiotics, the general study of signs across communicative systems, in which semantics specifically handles the relation between signs and their objects or meanings, as articulated in Charles Sanders Peirce's triadic model of sign, object, and interpretant. Additionally, it intersects with hermeneutics, the philosophical inquiry into interpretation, by informing how meanings are contextually unpacked in texts and discourses to achieve understanding.

Fundamental Concepts

Meaning

Meaning is the core subject of semantics, encompassing the ways in which linguistic expressions convey information about the world, intentions, or states of affairs. In semantic theory, meaning is analyzed at multiple levels, reflecting the hierarchical structure of language from words to extended texts. Semantics distinguishes three primary types of meaning based on linguistic units: lexical meaning, which pertains to the significance of words or morphemes; sentential meaning, which concerns the propositional content of entire sentences; and discourse-level meaning, which involves the coherence and implications across multiple sentences in a conversation or text. Lexical meaning captures the basic referential or conceptual content of words, such as the denotation of "dog" as a four-legged animal. Sentential meaning emerges from the combination of lexical elements according to syntactic rules, determining the truth conditions or propositional content of a complete sentence, like "The dog barked" asserting a specific event. Discourse-level meaning extends this to broader contexts, integrating sentences into a unified narrative or argument, as seen in how successive statements build a story's progression. A key challenge in understanding meaning is ambiguity, where a single expression admits multiple interpretations, and polysemy, where a word has multiple related senses. For instance, the word "bank" exhibits polysemy, referring either to a financial institution or the side of a river, with the senses connected through metaphorical extension but distinct in application. Such phenomena highlight how lexical items can carry layered significances that require disambiguation for clear communication. Meaning is often context-dependent, varying with the situational, social, or discursive setting in which an expression is used, without invoking full pragmatic inference. Indexical terms like "here" or "now" exemplify this, as their reference shifts based on the utterance's circumstances, altering the overall interpretation. This dependence ensures that semantic content adapts to real-world usage while maintaining stability in core denotations. Semantics further differentiates between conventional meaning, encoded in the linguistic system and shared across speakers, and speaker intention, which reflects an individual's communicative purpose in a particular instance. Conventional meaning provides the default interpretation of an expression, such as the literal content of "It's raining," whereas speaker intention might convey irony or emphasis, like using the same sentence to imply reluctance to go out. This distinction, rooted in theories of speaker meaning, underscores that while semantics prioritizes encoded content, actual use involves speaker-driven nuances. Within meaning, philosophical semantics introduces the sense-reference distinction, where sense is the mode of presentation of a referent, allowing co-referential terms to differ cognitively, as in Frege's analysis of proper names.

Sense and Reference

In his 1892 essay "Über Sinn und Bedeutung" (translated as "On Sense and Reference"), Gottlob Frege introduced the foundational distinction between sense (Sinn) and reference (Bedeutung) to resolve puzzles in the semantics of identity statements. Frege argued that a sign—such as a word or phrase—expresses both a sense, which is the mode of presentation or cognitive content associated with it, and a reference, which is the actual object or entity it denotes in the world. This distinction explains why certain statements can be informative despite denoting the same referent; for instance, the phrases "the morning star" and "the evening star" both refer to the planet Venus but differ in sense because they present it under different descriptive modes—one as the celestial body visible at dawn, the other at dusk—leading to the informative identity "the morning star is the evening star." Frege characterized sense as an objective, abstract entity that captures the informational content conveyed by an expression, independent of individual psychological associations, while reference pertains to the worldly item itself, such as an object for a proper name or a truth value for a sentence. For proper names, the sense provides a descriptive means of identifying the referent without being identical to it; thus, a name like "Aristotle" might carry the sense of "the pupil of Plato and teacher of Alexander the Great," which uniquely determines the historical figure without equating to the person. Predicates, or concept-words, extend this framework by having a sense that specifies a property or condition (e.g., "is a horse" as the concept of being a horse) and a reference that is an unsaturated concept awaiting completion by an object to yield a truth value. Sentences, in turn, possess a sense equivalent to the thought they express and a reference to their truth value (true or false), unifying the contributions of their parts. The distinction applies particularly to identity statements, where substituting co-referential terms preserves truth but may alter informativeness based on differing senses. Frege noted that statements like "a = a" are tautological and uninformative because the sense and reference align identically on both sides, conveying no new knowledge, whereas "a = b" can be cognitively significant if "a" and "b" share a reference but diverge in sense, as in the Venus example, thereby expanding the hearer's informational content about the world. Frege's framework faced significant criticism from Saul Kripke in lectures delivered in 1970 and published as Naming and Necessity, which challenged the descriptivist interpretation of sense for proper names. Kripke proposed a causal theory of reference, arguing that names function as rigid designators—terms that refer to the same object in all possible worlds where it exists—fixed by an initial baptism and preserved through a causal chain of communication, rather than by descriptive senses that could vary or fail to uniquely identify. This critique undermines Frege's reliance on senses as clusters of descriptions, suggesting instead that many identity statements involving names are necessary yet knowable only a posteriori, rather than grounded in descriptive senses, as seen in cases like "Hesperus is Phosphorus" (alternative names for Venus).

Compositionality

The principle of compositionality states that the meaning of a complex expression in a language is determined by the meanings of its constituent parts and the syntactic rules by which they are combined. This idea, often traced to Gottlob Frege's work on logic and language, posits that understanding novel sentences arises from constructing their meanings from familiar components, as Frege noted: "The possibility of our understanding sentences which we have never heard before rests evidently on this, that we can construct the sense of a sentence out of parts that correspond to words." Richard Montague provided the first precise formulation in his intensional semantics, defining meaning as a function from possible worlds to extensions, ensuring that complex expressions' meanings derive systematically from their structure and constituents. A straightforward example illustrates this: the phrase "red ball" conveys the concept of a spherical object possessing the property of redness, where the meaning emerges from combining the predicate "red" (describing a color attribute) with the noun "ball" (denoting a round plaything) via syntactic modification. This compositional process extends recursively to larger structures, such as sentences, allowing meanings to build hierarchically without requiring independent storage for every possible combination. However, strict compositionality faces challenges from linguistic phenomena like idioms, where the overall meaning deviates from the sum of parts. For instance, "kick the bucket" idiomatically means "to die," rather than literally propelling a pail, defying derivation from the individual verbs and nouns involved; such expressions, numbering around 25,000 in English, often require treatment as non-compositional units or via metaphorical reinterpretation. Context-dependent effects, including indexicality and pragmatic enrichment, further complicate pure compositional analysis, prompting debates on whether compositionality holds absolutely or as an approximation. In natural language processing, compositionality underpins recursive semantics, enabling algorithms to compute meanings for unseen sentences by parsing syntactic trees and applying functions to constituent representations. Seminal models, such as recursive neural networks, leverage this to handle hierarchical structures, as in vector-based composition where phrase meanings are derived iteratively from word embeddings, facilitating tasks like sentiment analysis on novel inputs. This approach aligns with formal semantics traditions, supporting scalable interpretation while accommodating deviations through hybrid methods.
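As a rough illustration of this principle (using an invented micro-domain and toy word denotations, not a real lexicon), the following Python sketch derives the meaning of "red ball" from the meanings of "red" and "ball" plus a single general combination rule, rather than storing the phrase as an unanalyzed unit.

```python
# Toy illustration of compositionality: denotations are sets of entities,
# and "red ball" is derived by a general rule (predicate intersection),
# not listed in the lexicon as an unanalyzed whole.

# Hypothetical micro-domain of entities.
DOMAIN = {"ball1", "ball2", "apple1", "sky1"}

# Lexical denotations: each word denotes the set of entities it is true of.
LEXICON = {
    "red":  {"ball1", "apple1"},
    "blue": {"ball2", "sky1"},
    "ball": {"ball1", "ball2"},
}

def modify(adjective: str, noun: str) -> set:
    """Predicate modification: an intersective adjective-noun phrase
    denotes the intersection of the two word denotations."""
    return LEXICON[adjective] & LEXICON[noun]

print(modify("red", "ball"))   # {'ball1'}
print(modify("blue", "ball"))  # {'ball2'}
```

The same rule applies to any adjective-noun pair in the toy lexicon, which is the point: one combination rule plus word meanings yields phrase meanings that never had to be listed individually.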

Truth Conditions

Truth-conditional semantics is a foundational approach in the philosophy of language that identifies the meaning of a declarative sentence with the conditions under which it would be true in a given situation. This view holds that understanding a sentence involves knowing the possible circumstances that make it true or false, thereby capturing its semantic content without invoking mental images or referential objects beyond truth itself. The approach emphasizes empirical adequacy in specifying these conditions for natural languages, treating truth as the central notion for a theory of meaning. Alfred Tarski laid the groundwork for this framework in his semantic theory of truth, developed for formal languages to avoid paradoxes like the liar paradox. Central to Tarski's account is Convention T, an adequacy condition for any definition of truth: for every sentence s in the object language, the truth theory must entail a biconditional of the form "s is true if and only if p", where p is the translation of s into the metalanguage. For example, the English sentence "Snow is white" is true if and only if snow is white. This biconditional ensures that the theory mirrors intuitive truth attributions while providing a recursive definition applicable to complex expressions. Tarski's work, originally published in Polish in 1933 and elaborated in English in 1944, established truth as a semantic notion derivable from satisfaction relations in models. Building on Tarski, Donald Davidson advanced a program for truth-conditional semantics in natural languages, arguing that a theory of meaning consists in a Tarskian-style truth theory that yields theorems specifying the truth conditions for each sentence. In his seminal essay "Truth and Meaning" (1967), Davidson proposed that such a theory interprets a speaker's utterances by providing satisfaction conditions relative to possible assignments of values to non-logical constants, effectively equating a sentence's meaning with its truth conditions across possible scenarios. This holistic approach incorporates compositionality, deriving sentence-level truth conditions from the meanings of subsentential parts via recursive rules. Davidson's framework shifted focus from formal to empirical semantics, emphasizing interpretive utility for understanding linguistic behavior. Davidson's program extends to propositional attitudes, such as belief and assertion, by analyzing them as relations to utterances under specific truth conditions. In treating indirect discourse, for instance, Davidson's paratactic analysis in his 1968 paper posits that reports like "Galileo said that the earth moves" express an attitude toward a demonstrated utterance, where the that-clause links to the truth-evaluable content of the original saying. This allows attitudes to be unpacked in terms of truth-directed mental states, facilitating a unified semantic treatment of assertion as committing to a proposition's truth and belief as holding a sentence true under certain interpretations. Despite its strengths, truth-conditional semantics encounters limitations when applied to non-declarative sentences, such as questions and commands, which lack truth values and thus resist analysis solely in terms of truth conditions. Questions like "Is the door open?" seek information rather than assert a verifiable state, while imperatives like "Close the door!" direct action without being true or false. These cases require supplementary notions, such as success conditions or force indicators, to account for their semantic roles beyond truth. Critics argue that extending the framework to such moods demands abandoning strict truth-centricity or incorporating pragmatic elements, highlighting the approach's primary suitability for declaratives.
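For a minimal sketch of the idea that a sentence's meaning can be modeled as its truth conditions, the Python fragment below (with invented situations and predicates) represents each sentence as a function from a situation to a truth value, mirroring the T-schema's pairing of sentences with the conditions that make them true.

```python
# A sentence meaning modeled as its truth conditions: a function from
# a situation (here, a simple dict of facts) to True or False.
# The situations and predicates below are invented for illustration.

def snow_is_white(situation: dict) -> bool:
    """Truth conditions for 'Snow is white'."""
    return situation.get("snow_color") == "white"

def the_door_is_open(situation: dict) -> bool:
    """Truth conditions for 'The door is open'."""
    return situation.get("door_open") is True

situation_a = {"snow_color": "white", "door_open": False}
situation_b = {"snow_color": "grey",  "door_open": True}

# Evaluating each sentence against each situation yields its truth value,
# echoing the schema: 'Snow is white' is true if and only if snow is white.
for name, sentence in [("Snow is white", snow_is_white),
                       ("The door is open", the_door_is_open)]:
    print(name, sentence(situation_a), sentence(situation_b))
```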

Semiotic Triangle

The semiotic triangle, developed by the American philosopher Charles Sanders Peirce in the late 19th and early 20th centuries, models the process of signification as a triadic relation involving a sign, an object, and an interpretant. In this framework, the sign—also termed the representamen—is the form that stands for something else, such as a word, image, or gesture that conveys meaning. The object, or referent, is the entity in the world to which the sign refers, which may be concrete (like a physical item) or abstract (like an idea). The interpretant is the effect or meaning produced in the mind of the interpreter upon encountering the sign, which can itself become a new sign in an ongoing process of semiosis. This triadic structure emphasizes that meaning arises not from the sign alone but from its dynamic relation to both the object and the interpretant's response, distinguishing Peirce's model from dyadic theories that pair only signifier and signified. Peirce further classified signs based on their relationship to the object, yielding three categories: icons, indices, and symbols. An icon signifies through resemblance or similarity to its object, such as a photograph that mirrors its subject or a diagram illustrating a structure. An index connects to its object via a direct causal or existential link, like smoke indicating fire or a pointing gesture directing attention to a location. A symbol, by contrast, relies on convention or habit for its meaning, as in linguistic words like "tree," which denote the object through learned social agreement rather than inherent likeness or causation. These distinctions highlight how signs function across degrees of arbitrariness and immediacy in producing meaning. In 1923, C.K. Ogden and I.A. Richards adapted Peirce's triangle for linguistic analysis in their work The Meaning of Meaning, renaming the components as symbol (sign), thought or reference (interpretant), and referent (object) to stress the mediated nature of signification. Their version underscores that symbols do not directly point to referents but operate through intervening thought, countering naive views of language as a straightforward labeling of reality. This adaptation influenced semantic theories by emphasizing indirect reference, paralleling the distinction between sense (interpretant-like) and reference in Frege's philosophy. Peirce's semiotic triangle extends beyond language to non-linguistic signs, analyzing how images and gestures generate meaning through the same triadic relations. For instance, a portrait functions iconically as a sign resembling its object (the depicted person), while evoking an interpretant of recognition or likeness in the viewer; a weathervane serves indexically, its direction caused by wind (object) and interpreted as indicating weather conditions. Gestures, such as a thumbs-up, often operate symbolically via cultural convention or indexically through physical pointing, demonstrating the model's versatility in interpreting visual and bodily communication.

Branches of Semantics

Lexical Semantics

Lexical semantics is the branch of semantics concerned with the meanings of individual words and the systematic relationships among them, serving as a foundation for understanding how vocabulary encodes conceptual distinctions in a language. It examines how words represent entities, actions, qualities, and relations in the world, often through networks of lexical relations that reveal the structure of the lexicon. Unlike broader semantic inquiries into sentences or texts, lexical semantics isolates word-level phenomena to uncover patterns of similarity, opposition, and hierarchy in meaning. A core aspect of lexical semantics involves the analysis of word senses and their interrelations, such as synonymy, antonymy, and hyponymy. Synonymy refers to words that share the same or highly similar meanings, allowing partial substitutability in context, as with big and large in describing size. Antonymy captures oppositional relations, including gradable pairs like hot and cold, where meanings exist on a continuum, or complementary pairs like alive and dead, which are mutually exclusive. Hyponymy denotes hierarchical inclusion, where a more specific term (hyponym) falls under a broader one (hypernym); for instance, poodle is a hyponym of dog, which itself is a hyponym of animal, forming taxonomic structures that organize lexical knowledge. These relations highlight how word meanings are not isolated but interconnected, influencing lexical choice and comprehension. Semantic fields group related words into coherent domains, such as kinship terms (mother, father, sibling) or color terms (red, blue, green), where meanings cluster around shared conceptual themes. Within these fields, Eleanor Rosch's prototype theory posits that categorization relies on prototypical examples rather than strict definitions, with central members exemplifying the category best. For example, a robin is a prototypical bird due to typical attributes like flying and singing, while a penguin is peripheral, affecting recognition speed and typicality judgments in empirical studies. This approach challenges classical theories of categories as necessary and sufficient conditions, emphasizing fuzzy boundaries and graded membership in lexical organization. Another key method in lexical semantics is componential analysis, which breaks complex word meanings into simpler components or primitives. Anna Wierzbicka's Natural Semantic Metalanguage (NSM) approach identifies a set of universal semantic primes—indecomposable concepts like I, you, good, bad, and do—present in all languages, used to explicate meanings without circularity. For instance, the English word mother can be explicated in NSM along these lines: "X is Y's mother – a. X is a woman. b. When Y was born, Y's body was inside X's body for some time. c. Because of this, people can say things like: this is Y's mother." This facilitates precise, culture-independent representations of lexical meaning. Cross-linguistic variation in lexical semantics underscores how word meanings differ across languages, potentially shaping thought patterns as suggested by the Sapir-Whorf hypothesis of linguistic relativity. For example, languages with distinct color terms, like Russian's separate words for light blue (goluboy) and dark blue (siniy), may enhance speakers' discrimination in those hues compared to English speakers. Such differences highlight that lexical structures are not universal but influenced by cultural and environmental factors, though the hypothesis emphasizes influence rather than strict determination of cognition. These word-level insights into individual meanings provide the atomic units for compositional semantics, where they combine to form phrasal interpretations.
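A small self-contained sketch (using an invented mini-taxonomy rather than a real lexical resource such as WordNet) shows how hyponymy can be encoded as a hierarchy and queried, for example to test whether poodle falls under animal.

```python
# Toy lexical taxonomy: each word maps to its immediate hypernym.
# The entries are illustrative, not drawn from an actual lexicon.
HYPERNYM = {
    "poodle": "dog",
    "beagle": "dog",
    "dog": "animal",
    "robin": "bird",
    "penguin": "bird",
    "bird": "animal",
    "animal": "entity",
}

def hypernym_chain(word: str) -> list:
    """Walk up the hierarchy from a word to the root."""
    chain = [word]
    while chain[-1] in HYPERNYM:
        chain.append(HYPERNYM[chain[-1]])
    return chain

def is_hyponym_of(specific: str, general: str) -> bool:
    """True if `specific` falls under `general` in the taxonomy."""
    return general in hypernym_chain(specific)[1:]

print(hypernym_chain("poodle"))           # ['poodle', 'dog', 'animal', 'entity']
print(is_hyponym_of("poodle", "animal"))  # True
print(is_hyponym_of("robin", "dog"))      # False
```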

Compositional Semantics

Compositional semantics investigates the mechanisms by which the meanings of phrases and sentences are systematically derived from the meanings of their lexical constituents and the syntactic rules combining them. This subfield emphasizes the principle of compositionality, which asserts that the semantic value of a complex expression is a function solely of the semantic values of its immediate parts and the compositional rules governing their combination. Developed prominently in the work of Richard Montague, this approach integrates lexical meanings into larger structures while preserving predictability and systematicity in interpretation. A fundamental aspect of compositional semantics concerns phrase structure, particularly how modifiers interact with heads. In adjective-noun combinations, such as "blue sky," the adjective's property (e.g., the set of blue entities) composes with the noun's denotation (e.g., the set of skies) typically via intersection or predicate modification, yielding the set of skies that are blue. This simple function application exemplifies how composition extends to phrasal meanings without loss of atomic properties. Similarly, verb-argument structures rely on theta roles to assign semantic relations between predicates and their arguments; for instance, in "The chef baked the cake," the verb "bake" theta-marks the subject as an agent (initiator of the action) and the object as a patient (entity affected or moved). These roles, formalized as proto-roles capturing clusters of entailments like causation and change of state, ensure that argument meanings contribute predictably to the overall event description. Quantifiers introduce complexities in scope and interpretation during composition. Consider the sentence "Every man loves a woman," which admits two readings: one where each man loves some (possibly different) woman (wide scope for "every," narrow scope for "a"), and another where there exists one woman loved by all men (narrow scope for "every," wide scope for "a"). Generalized quantifier theory addresses this by treating quantifiers as higher-order functions over predicates, with scope determined by syntactic position or quantifier raising, allowing distinct compositional derivations for each reading. Montague grammar provides a foundational framework for such compositions using lambda calculus, where meanings are represented as lambda terms denoting functions from arguments to results. For example, a transitive verb like "loves" denotes a function that takes an object denotation and returns a property of subjects, enabling stepwise application: first combining the verb with its object to form a one-place predicate, then with the subject to yield a truth-evaluable proposition. This typed lambda approach ensures that composition mirrors syntactic structure, with abstraction handling quantificational elements to bind variables systematically. While the resulting meanings can be evaluated through truth conditions in model-theoretic terms, the focus remains on the derivational process rather than final interpretation. Despite its successes, compositional semantics faces empirical challenges from phenomena like ellipsis and anaphora, which appear to violate strict bottom-up composition by relying on contextual recovery. In VP ellipsis, as in "Bill danced, and Sue did _ too," the missing verb phrase must be reconstructed from an antecedent via semantic identity, often requiring copies of the antecedent's meaning within the elliptical clause to maintain compositionality. Anaphora resolution, such as in "John entered. He was tired," demands dynamic context updates where pronouns compose with prior referents, treating sentences as incremental functions over information states rather than static propositions.
These issues have led to extensions like file change semantics, preserving core compositionality while accommodating context dependence.
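The scope ambiguity discussed above can be made concrete in code. The following Python sketch (with an invented toy model of men, women, and a loves relation) treats the quantifiers as higher-order functions over predicates, so that the two orders of combination yield the two readings of "Every man loves a woman".

```python
# Toy model: individuals and a 'loves' relation, all invented for illustration.
MEN   = {"m1", "m2"}
WOMEN = {"w1", "w2"}
LOVES = {("m1", "w1"), ("m2", "w2")}   # each man loves a different woman

loves = lambda y: lambda x: (x, y) in LOVES   # transitive verb: object first, then subject

# Quantifiers as higher-order functions from a predicate to a truth value.
every_man = lambda pred: all(pred(x) for x in MEN)
a_woman   = lambda pred: any(pred(y) for y in WOMEN)

# Reading 1 ("every" takes wide scope): for every man there is some woman he loves.
reading1 = every_man(lambda x: a_woman(lambda y: loves(y)(x)))

# Reading 2 ("a" takes wide scope): there is one woman whom every man loves.
reading2 = a_woman(lambda y: every_man(lambda x: loves(y)(x)))

print(reading1)  # True in this model
print(reading2)  # False in this model
```

In this particular model the two readings come apart, which is exactly what the compositional account predicts: the same words, combined in two different orders, yield two different truth conditions.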

Formal Semantics

Formal semantics employs mathematical and logical frameworks to provide rigorous, precise analyses of meaning in natural language, focusing on truth conditions and interpretive structures rather than psychological processes. Central to this approach is model-theoretic semantics, which interprets linguistic expressions relative to formal models that specify possible interpretations of the language's elements. These models typically consist of a domain of entities, an interpretation function assigning referents to non-logical constants, and relations or functions for predicates, enabling the evaluation of sentences as true or false under specific conditions. A foundational development in model-theoretic semantics is possible worlds semantics, introduced by Saul Kripke to handle modal notions like necessity and possibility in quantified modal logic. Kripke's framework uses a set of possible worlds connected by an accessibility relation, where the truth of a modal statement at a world depends on its truth in accessible worlds, allowing for a semantic analysis of counterfactuals and deontic expressions. David Lewis extended this approach, proposing a general semantics for natural language that treats possible worlds as indices for evaluating intensions—functions from worlds to extensions—thus providing a unified treatment of attitudes, tenses, and modalities. In this setup, the denotation of an expression α, written [[α]]^{M,w}, represents the meaning of α in model M at world w, such as the set of individuals satisfying a predicate or the truth value of a sentence. For instance, the denotation of a proper name might be a constant individual across worlds, while that of a sentence is a function mapping worlds to truth values. Richard Montague advanced formal semantics through his integration of intensional logic and type theory, developing a system to treat English fragments compositionally. Montague's approach assigns expressions to semantic types (e.g., entity type e, truth-value type t), with higher types like (e, t) for predicates, and uses intensional types such as (s, τ), where s indexes worlds or times, to capture modalities and tenses. His intensional logic extends the typed lambda calculus with operators that shift evaluation worlds, enabling precise translations of English sentences into logical forms, as exemplified in his analysis of quantification where noun phrases denote generalized quantifiers. Formal semantics often assumes the principle of compositionality, whereby the denotation of a complex expression is a function of the denotations of its parts and their mode of combination. These tools find key applications in defining semantic relations like entailment and contradiction. In model-theoretic terms, a sentence A entails a sentence B if, for every model M and world w, [[A]]^{M,w} = 1 implies [[B]]^{M,w} = 1, ensuring that truth is preserved across all possible interpretations. This allows validation of inferences, such as "Every dog barks" entailing "Some dog barks" (given a non-empty set of dogs), by checking satisfaction in all models, and supports automated inference systems in computational semantics. Kripke's possible worlds further refine reasoning under uncertainty, distinguishing necessary truths (true in all accessible worlds) from contingent ones. Lewis's framework extends this to counterpart relations for transworld identity, aiding analyses of belief reports and hypothetical scenarios.
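As a minimal illustration of the model-theoretic entailment check described above, the following Python sketch enumerates small finite models over a three-element domain and verifies that "Every dog barks" entails "Some dog barks", restricting attention to models with at least one dog so that the universal claim carries existential import.

```python
from itertools import product

# Enumerate small finite models: a fixed domain plus all possible
# extensions for the predicates 'dog' and 'barks'.
DOMAIN = ["a", "b", "c"]

def all_subsets(xs):
    """Yield every subset of xs as a set."""
    for bits in product([False, True], repeat=len(xs)):
        yield {x for x, keep in zip(xs, bits) if keep}

def every_dog_barks(dogs, barkers):
    return all(x in barkers for x in dogs)

def some_dog_barks(dogs, barkers):
    return any(x in barkers for x in dogs)

# Entailment: in every model where the premise is true, the conclusion
# must also be true.  We require a non-empty 'dog' extension (existential
# import), since with no dogs 'Every dog barks' is vacuously true while
# 'Some dog barks' is false.
entails = all(
    some_dog_barks(dogs, barkers)
    for dogs in all_subsets(DOMAIN) if dogs
    for barkers in all_subsets(DOMAIN)
    if every_dog_barks(dogs, barkers)
)
print(entails)  # True
```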

Cognitive Semantics

Cognitive semantics views meaning as emerging from human cognition, embodiment, and experiential conceptualization, emphasizing how language reflects mental structures shaped by perception, action, and interaction with the world. Unlike formal approaches that prioritize logical structures, cognitive semantics posits that semantic knowledge is grounded in bodily experiences and cognitive processes, integrating insights from psychology and linguistics to explain how meanings are constructed and understood. This perspective highlights the role of mental models, prototypes, and metaphorical mappings in semantic processing. Frame semantics, developed by Charles Fillmore, conceptualizes meaning through structured representations called frames, which are coherent background systems evoked by linguistic expressions. For instance, the verb "buy" activates a commercial transaction frame involving roles such as buyer, seller, goods, and money, providing the interpretive background for understanding the utterance. Fillmore's framework underscores that word meanings are not isolated but depend on these evoked frames, which organize knowledge about scenarios. Conceptual metaphor theory, advanced by George Lakoff and Mark Johnson, argues that abstract concepts are systematically understood and expressed through mappings from concrete, bodily source domains to abstract target domains. A canonical example is the "argument is war" metaphor, where expressions like "he attacked my position" or "I defended my point" structure reasoning about argument as a combative scenario, influencing how people conceptualize and engage in debates. This theory posits that such metaphors are not mere linguistic ornaments but fundamental to thought, shaping cognition across cultures while rooted in universal human experiences. Image schemas serve as foundational cognitive structures in cognitive semantics, representing recurring patterns of sensorimotor experience that ground more abstract meanings. Proposed by Mark Johnson, these schemas—such as containment (involving boundaries, interiors, and exteriors) or path (source, route, goal)—provide the experiential basis for understanding concepts like "in love" (containment) or "life's journey" (path). They link physical interactions to linguistic and conceptual extensions, enabling the metaphorical projection of spatial logic onto non-spatial domains. Empirical support for cognitive semantics comes from neuroimaging studies, which reveal distributed brain networks involved in semantic processing, often integrating sensory-motor areas with higher cognitive regions. Meta-analyses of functional imaging data show consistent activation in the anterior temporal lobe for semantic representations and prefrontal areas for controlled retrieval, aligning with embodied theories by demonstrating how semantic tasks engage modality-specific cortices. For example, processing action-related words activates motor regions, suggesting meanings are tied to experiential simulations. These findings validate the cognitive grounding of semantics through observable neural correlates.

Computational Semantics

Computational semantics is the subfield of semantics that focuses on computationally modeling and processing the meanings of linguistic expressions, combining formal semantic theories with algorithms from natural language processing and artificial intelligence to enable machines to interpret and generate language. This discipline addresses challenges in representing meaning in a way that supports inference, retrieval, and interaction with human language in applications like search engines, virtual assistants, and automated translation systems. A foundational technique in computational semantics is the use of word embeddings, which map words to dense vectors in a high-dimensional space where semantic relationships are preserved through geometric proximity. The Word2Vec model, developed by Mikolov et al. in 2013, employs shallow neural networks—such as the continuous bag-of-words or skip-gram architectures—to train these embeddings on massive text corpora, capturing similarities like the vector offset "king" - "man" + "woman" ≈ "queen". These representations have enabled downstream tasks in machine translation and sentiment analysis by quantifying semantic similarity without explicit rules. Subsequent extensions, like GloVe, further refined embeddings by incorporating global co-occurrence statistics. Semantic parsing extends these ideas to sentence-level meaning by converting natural language inputs into executable logical forms, such as lambda-calculus expressions or database queries, allowing systems to answer questions or execute commands. Seminal work by Zelle and Mooney in 1996 used inductive logic programming to learn parsers from examples, mapping utterances to formal queries for domains like United States geography. Modern approaches, surveyed by Kamath and Das in 2019, integrate deep neural networks to handle ambiguity and compositionality, achieving high accuracy on benchmarks like GeoQuery and over 80% exact match on complex question-answering tasks. This process relies on formal semantics for the target representations, ensuring computable truth conditions. In knowledge representation, computational semantics employs ontologies to structure domain knowledge explicitly, facilitating inference and interoperability across systems. The Semantic Web, envisioned by Berners-Lee et al. in 2001, proposes augmenting web data with machine-readable metadata to enable automated discovery and integration. Central to this is the Web Ontology Language (OWL), a W3C recommendation from 2004 that defines vocabularies for classes, individuals, and properties with description logic semantics, supporting reasoning via tools like the Pellet or HermiT reasoners. OWL ontologies, such as those used in biomedical domains, power semantic search and data federation by enabling entailment checks, such as inferring that "heart disease" is a subclass of "cardiovascular disorder". Recent advances have shifted toward contextual and multimodal representations with transformer architectures. The BERT model, introduced by Devlin et al. in 2018, pre-trains bidirectional encoders on masked language modeling and next-sentence prediction tasks, yielding contextual embeddings that outperform prior methods on GLUE benchmarks by up to 7.7% in average score. By 2024, large language models have incorporated multimodal semantics, as in the MM-LLMs surveyed by Zhang et al., which fuse vision-language pre-training to handle tasks like visual question answering, achieving state-of-the-art results on datasets like VQA v2 with cross-modal alignment techniques. These integrations extend beyond text to reason about meaning in images, videos, and hybrid inputs, supporting applications in multimodal search and interactive assistants.
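The following NumPy sketch uses tiny hand-made vectors (real embeddings such as Word2Vec's are learned from corpora and have hundreds of dimensions) to illustrate the two operations described above: cosine similarity as a measure of semantic relatedness, and the vector-offset method behind analogies like king - man + woman ≈ queen.

```python
import numpy as np

# Hand-crafted 3-dimensional "embeddings" for illustration only;
# dimensions loosely encode (royalty, gender, everyday-ness).
vectors = {
    "king":  np.array([1.0,  0.5, 0.2]),
    "queen": np.array([1.0, -0.5, 0.2]),
    "man":   np.array([0.1,  0.5, 0.9]),
    "woman": np.array([0.1, -0.5, 0.9]),
    "apple": np.array([0.0,  0.0, 1.0]),
}

def cosine(u, v):
    """Cosine similarity: closer directions score higher."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Semantic relatedness as geometric proximity.
print(cosine(vectors["king"], vectors["queen"]))  # ~0.61 (related)
print(cosine(vectors["king"], vectors["apple"]))  # ~0.18 (less related)

# Vector-offset analogy: king - man + woman should land nearest to queen.
target = vectors["king"] - vectors["man"] + vectors["woman"]
best = max((w for w in vectors if w not in {"king", "man", "woman"}),
           key=lambda w: cosine(vectors[w], target))
print(best)  # queen
```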

Theories of Meaning

Referential Theories

Referential theories of meaning posit that the significance of linguistic expressions arises primarily from their capacity to refer to entities, objects, or states of affairs in the external world, rather than from internal mental associations or contextual uses. In this framework, the meaning of a term is essentially its referent, enabling language to connect directly with reality to convey truth or falsity. This approach, prominent in the analytic tradition, emphasizes denotation over connotation, treating reference as the foundational mechanism for semantic content. A foundational contribution to referential theories came from John Stuart Mill, who distinguished proper names as denoting particular objects without implying any descriptive attributes or connotations. In his view, names like "Dartmouth" or "John" function solely to indicate the referent itself, lacking the informative content associated with general terms; thus, the meaning of a name is simply the object it picks out, with no additional semantic baggage. Mill's direct reference theory, articulated in A System of Logic (1843), underscores that proper names contribute to propositions by specifying their subjects without embedding qualities, allowing for straightforward identification in discourse. Bertrand Russell extended referential ideas through his theory of descriptions, which analyzes definite descriptions—phrases like "the present king of France"—not as singular terms referring to entities, but as incomplete symbols that can be unpacked into quantified logical forms. According to Russell, the sentence "The present king of France is bald" asserts that there exists exactly one present king of France and that this individual is bald; since no such king exists, the statement is false. This analysis, detailed in "On Denoting" (1905), resolves paradoxes in reference by treating descriptions as contributing to the truth conditions of sentences through existential claims, rather than presupposing the existence of a referent. One key criticism of referential theories, particularly Russell's, concerns their handling of fictional or non-existent entities, where reference fails and sentences become problematic. For instance, statements about fictional characters like "Sherlock Holmes lives on Baker Street" would be deemed false under Russell's analysis, as no real Sherlock Holmes exists, yet such discourse is intuitively meaningful and not straightforwardly false in everyday or literary contexts. This limitation highlights how strict referentialism struggles to accommodate empty names or mythical references without rendering large swaths of language semantically defective. The evolution of referential theories addressed these issues through Saul Kripke's causal-historical account in Naming and Necessity (1980), which posits that proper names serve as rigid designators, linking to their bearers via historical chains originating in an initial "baptism" rather than descriptive content. In Kripke's model, reference is transmitted through a causal chain of communication from the naming event, ensuring stability across possible worlds without relying on contingent attributes; for example, "Aristotle" refers to the historical figure via this lineage, even if descriptions like "the pupil of Plato" vary. This approach mitigates criticisms of direct reference by grounding meaning in social and historical practices, while preserving the core referential tie to the world. Frege's earlier distinction between sense (the mode of presentation) and reference influenced referential debates by suggesting that expressions could share referents yet differ in cognitive significance, though later theorists like Kripke prioritized pure reference over descriptive sense.
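Russell's analysis can be written out in standard first-order notation (a later formalization, not Russell's own symbolism), with K(x) for "x is a present king of France" and B(x) for "x is bald":

```latex
\exists x \,\bigl( K(x) \;\land\; \forall y\,( K(y) \rightarrow y = x ) \;\land\; B(x) \bigr)
```

Because nothing satisfies K, the existential conjunction fails, so the sentence comes out false, exactly as described above.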

Ideational Theories

Ideational theories of meaning posit that the significance of words and expressions derives from mental ideas or representations within the speaker's mind, rather than from external objects or social conventions. Rooted in empiricist philosophy, these theories emphasize that meaning arises from internal psychological states formed through sensory experience, aligning with the view that the mind begins as a tabula rasa (blank slate) upon which ideas are inscribed by experience. This approach internalizes semantics, treating language as a tool for signifying private mental contents. John Locke, in his seminal work An Essay Concerning Human Understanding (1690), articulated a foundational ideational theory by arguing that words function as signs of ideas in the mind, not directly of things in the world. Locke contended that ideas, derived from sensation and reflection, constitute the immediate objects of thought, and linguistic meaning consists in the association between words and these internal ideas; for instance, the word "gold" signifies the complex idea of a yellow, malleable substance formed in the perceiver's mind, rather than the external substance itself. This empiricist framework influenced subsequent thinkers, positing that communication succeeds when speakers share sufficiently similar ideas evoked by the same words, though Locke acknowledged challenges in ensuring such alignment across individuals. A notable critique of the purely private aspects of ideational theories appears in Lewis Carroll's Through the Looking-Glass (1871), where Humpty Dumpty declares, "When I use a word... it means just what I choose it to mean—neither more nor less," illustrating the absurdity of meanings as arbitrary personal mental associations detached from shared understanding. This "Humpty-Dumpty view" highlights the potential for arbitrariness in ideational accounts, where subjective ideas could render communication incoherent if not constrained by common experience or convention. Philosophers have invoked this episode to underscore the limitations of unrestricted internalism, emphasizing that private meanings fail to account for intersubjective agreement in language use. Internalism in semantics extends ideational principles by asserting that the meaning of an expression is fully determined by the speaker's psychological states, independent of external factors like causal histories or environmental contexts. This position, echoing Locke's emphasis on mental content, holds that semantic properties supervene solely on internal features such as beliefs, desires, and conceptual associations, allowing for methodological solipsism in psychological explanation—where twin individuals with identical internal states would have identical meanings. Critics, however, argue that internalism struggles with phenomena like natural kind terms or deference to experts, where meanings appear to depend on broader social or worldly relations. Modern variants of ideational theories include conceptual role semantics (CRS), which defines meaning in terms of the inferential and cognitive roles that expressions play within an individual's mental economy, rather than fixed references. Proponents like Ned Block and Gilbert Harman maintain that the content of a concept, such as "justice," is given by its place in a web of inferences and applications in thought—for example, linking it to fairness in distribution—thus preserving an internalist focus on psychological function.
While CRS addresses some critiques of classical ideationalism by incorporating holistic networks of associations, it grapples with externalist challenges, such as Hilary Putnam's Twin Earth thought experiment, which suggests that environmental factors can alter content without changing internal roles.

Causal Theories

Causal theories of meaning posit that the semantic content of terms, particularly proper names and natural kind terms, is determined by causal-historical chains linking words to their referents in the world, rather than by descriptive content in speakers' minds. This approach, often termed the causal-historical theory, emphasizes an initial "baptism" or reference-fixing event followed by a chain of communication that preserves the reference. Saul Kripke introduced key elements of this view in his lectures on naming, arguing that names like "Aristotle" rigidly designate their bearer through such causal links, independent of associated descriptions that may vary among speakers. Hilary Putnam extended this to natural kind terms, such as "water" or "gold," where meaning is fixed by the underlying nature discovered through scientific investigation, exemplified by "gold" referring to the element with atomic number 79 via its causal connection to samples used in the initial naming. Putnam's Twin Earth thought experiment illustrates the externalist implications of these causal theories. Imagine a planet identical to Earth except that the clear liquid called "water" there is not H₂O but a different substance, XYZ; Earthlings and Twin Earthlings uttering "water" would have different meanings due to distinct causal histories with their environments, despite identical internal mental states. This demonstrates that meaning is not fully determined by individual psychology but by external causal relations, challenging internalist views and supporting the idea that semantic content includes indexical elements tied to the actual world. Tyler Burge developed social externalism within this framework, arguing that meanings depend on communal linguistic practices and deference to experts. In his arthritis thought experiment, a speaker mistakenly applies "arthritis" to thigh pain, yet retains the correct meaning by deferring to medical authorities whose causal chain fixes the term's reference to joint inflammation. This extends causal theories beyond physical environments to social ones, where reference is preserved through interpersonal transmission. These theories have profound implications for the semantics of natural kind terms, such as those for chemical elements or biological species, where terms gain content through empirical discovery rather than a priori definitions. For instance, pre-scientific uses of "gold" succeed in referring via causal chains to the substance later identified as the element with atomic number 79, allowing rigid designation across possible worlds where superficial properties might differ. This framework resolves puzzles about necessity and identity, affirming that natural kind terms express necessary truths about their referents once the causal essence is known, influencing debates in the philosophy of science and language.

Use-Based Theories

Use-based theories of meaning posit that the significance of linguistic expressions arises from their employment in communicative practices, rather than from abstract references or mental ideas. These approaches emphasize the social, contextual, and normative dimensions of language, addressing limitations in referential theories by focusing on how words function within ongoing interactions and rule-governed activities. Central to this perspective is the idea that meaning is not fixed but emerges from patterns of use, allowing for flexibility in interpretation while grounding semantics in observable linguistic behavior. Ludwig Wittgenstein's seminal contribution to use-based theories appears in his Philosophical Investigations, where he argues that "the meaning of a word is its use in the language" (§43). Wittgenstein illustrates this through the concept of "language games," which are rule-bound activities analogous to games like chess, where words gain meaning from their roles in specific contexts, such as giving orders or describing objects (§7). He rejects the notion of meanings as private, mental entities, contending instead that meaningful use requires public criteria for correctness, as private ostensive definitions cannot establish shared rules (§258). This leads to the private language argument, which demonstrates the impossibility of a language understandable only to its solitary user, since justifying rule-following would devolve into mere whim without communal standards (§202). Wittgenstein further develops this via "family resemblances," explaining that many concepts, like "game," lack essential definitions but are unified by overlapping similarities in their applications, such as competition or skill, rather than a single common thread (§66–67). These ideas underscore how meaning is dynamic and embedded in social practices. Building on Wittgensteinian insights, speech act theory shifts attention to the performative aspects of language, positing that utterances accomplish actions beyond mere description. J.L. Austin introduced this framework in How to Do Things with Words, distinguishing between constative statements (which describe) and performative ones (which do, like promising or baptizing), arguing that all utterances have an illocutionary force derived from their conventional use in social contexts. John Searle refined this in Speech Acts, categorizing speech acts into types such as assertives (committing the speaker to truth), directives (attempting to get the hearer to act), commissives (committing the speaker to future action), expressives (expressing attitudes), and declarations (bringing about changes through utterance, like declaring war). Searle emphasizes that meaning involves not just propositional content but the illocutionary force, which depends on felicity conditions—social rules ensuring the act's success, such as sincerity and authority. This theory highlights how use in appropriate circumstances confers meaning, extending semantics to pragmatic dimensions. Inferential role semantics extends use-based ideas by defining meaning through the network of inferences an expression licenses in discourse. Robert Brandom, in Making It Explicit, proposes that word meanings stem from their inferential commitments and entitlements within a community's normative practices, where using a term like "bachelor" commits one to inferring "unmarried man" and entitles others to challenge inconsistencies. This approach treats assertions as moves in a "game of giving and asking for reasons," where semantic content is holistic, determined by relations to other expressions rather than isolated references.
Brandom distinguishes this from truth-conditional semantics by prioritizing normative scorekeeping over evaluations of truth values. Such a semantics addresses normativity and context-dependence by embedding meaning in practical reasoning. Holistic approaches within use-based theories, particularly W.V.O. Quine's, view meaning as inseparable from the entire web of beliefs and linguistic behavior. In "Two Dogmas of Empiricism," Quine argues for confirmation holism, asserting that no single sentence has meaning in isolation; instead, confirmation or disconfirmation applies to the system as a whole, as in empirical testing where observations underdetermine theory. This leads to the indeterminacy of translation thesis, where radical translation between languages admits multiple translation manuals consistent with the behavioral data, since native speakers' uses do not uniquely fix reference or truth conditions. Quine's critique challenges atomistic semantics, emphasizing that meaning is use-relative and underdetermined by evidence, requiring pragmatic choices in interpretation.

Historical Development

Ancient Origins

The study of semantics, concerned with the meaning of linguistic expressions, has its roots in ancient philosophical and grammatical traditions that sought to understand the relationship between words, thoughts, and reality. In ancient Greece, Plato's dialogue Cratylus, composed around 360 BCE, presents a foundational debate on the correctness of names, pitting conventionalism against naturalism. Hermogenes defends the view that names are arbitrary conventions established by social agreement, while Cratylus advocates for a naturalist position, claiming that names inherently resemble the essences of the things they denote through phonetic imitation or etymological derivation. Plato, through Socrates, critiques both extremes: he questions the reliability of natural imitation for capturing stable essences and suggests that true names should align with the forms, though the dialogue ultimately leaves the issue unresolved, emphasizing the need for philosophical inquiry into language's limits. This exploration laid early groundwork for referential theories of meaning, where words point to external objects or ideas. Aristotle, in his treatise On Interpretation (likely written in the mid-4th century BCE), advanced a more systematic account by distinguishing between linguistic signs and mental states. He posits that spoken sounds are symbols of affections in the soul—mental impressions or concepts—and that these affections are likenesses of actual things, though the symbols themselves vary across languages while the underlying concepts remain universal. This tripartite structure (words → thoughts → things) underscores semantics as a bridge between conventional language and cognitive representation, influencing later Western theories of signification. Concurrently in ancient India, the grammarian Pāṇini formulated a comprehensive system in his Aṣṭādhyāyī (c. 400 BCE), a succinct grammar of approximately 4,000 sūtras that generates Sanskrit forms through rule-based derivations incorporating semantic constraints. Pāṇini's framework integrates semantics via concepts like kāraka (semantic roles, such as agent or instrument), which determine case endings based on a verb's meaning and the participants' roles in the action, ensuring that grammatical structure reflects intended sense without relying solely on syntax. This approach, part of the broader Sanskrit grammatical tradition, treated language as a tool for precise ritual and philosophical expression, prioritizing semantic accuracy in Vedic usage. Medieval European philosophy extended these inquiries through Peter Abelard's nominalist treatment of universals in works like Logica Ingredientibus (early 12th century). Abelard rejected the realist view that universals (e.g., "humanity") exist as independent entities, instead arguing they are mere nomina—words or mental terms that signify commonalities among particulars through conceptual status rather than real essence. His analysis of signification emphasized how words acquire meaning via imposition (impositio), a conventional act linking terms to res (things) or sermones (common concepts), thereby addressing semantic puzzles in predication and avoiding ontological commitments to abstract forms. This position marked a shift toward language-centered semantics, influencing scholastic debates on reference and generality.

Modern Foundations

The modern foundations of semantics emerged in the late 19th and early 20th centuries, as scholars transitioned from philosophical speculation to systematic, empirical analysis of meaning, building on ancient inquiries into naming and signification. This period marked the formalization of semantics as a distinct linguistic and logical discipline, emphasizing structural relations, historical evolution, and observable patterns over introspective or metaphysical interpretations. Gottlob Frege's Begriffsschrift (1879) represented a pivotal logicist turn, introducing a formal notation system that laid the groundwork for modern predicate logic and profoundly influenced formal semantics by enabling precise analysis of logical structure and meaning independent of psychological associations. Frege's innovation allowed for the representation of complex inferences through symbolic means, shifting focus toward objective, truth-conditional interpretations of expressions. Michel Bréal, a French philologist, coined the term "semantics" (sémantique) in 1883 to denote the scientific study of meanings and their evolution, with his Essai de sémantique (1897) elaborating on semantic change as a dynamic process driven by linguistic usage, psychological tendencies, and cultural shifts. Bréal emphasized how word meanings expand, restrict, or transform over time, establishing semantics as a branch of linguistics concerned with systematic variation rather than static definitions. The term "semantics" largely supplanted earlier equivalents like "semasiology," introduced by Christian Karl Reisig around 1829 to study word meanings and their changes, and "semology," occasionally used for the broader study of linguistic meaning, though these persisted in some traditions. Ferdinand de Saussure's Course in General Linguistics (1916), compiled posthumously from his lectures, introduced the foundational signifier-signified dyad, positing the linguistic sign as an arbitrary union of a sound-image (signifier) and a concept (signified), thereby prioritizing synchronic structural relations over diachronic change. This binary model underscored the relational nature of meaning within language systems, influencing structuralist approaches by treating signs as differential entities defined by their oppositions. Leonard Bloomfield's Language (1933) advanced American structuralism through a behaviorist lens, explicitly avoiding mentalism by grounding semantics in stimulus-response correlations and distributional patterns, thus treating meaning as derivable from contextual usage without recourse to mental states. Bloomfield's framework reinforced semantics as an empirical discipline, focusing on phonetic form and distribution to infer semantic roles while critiquing subjective interpretations.

Contemporary Developments

In the mid-20th century, Noam Chomsky's transformational generative grammar introduced the concept of deep structure as a level of syntactic representation that directly interfaces with semantic interpretation, positing that meaning arises from underlying abstract structures generated by innate linguistic rules rather than from surface forms alone. This framework, outlined in Aspects of the Theory of Syntax (1965), emphasized the autonomy of syntax from semantics while allowing deep structures to capture semantic relations, influencing subsequent debates on how grammatical rules encode meaning. The late 1960s and 1970s saw semantics gain prominence in linguistics, with developments in generative semantics, led by figures like James McCawley, George Lakoff, and John Ross, which proposed that semantic representations drive syntactic transformations, contrasting with interpretive semantics, which derived meaning from syntactic deep structures. Concurrently, formal semantics emerged through Richard Montague's work of the late 1960s and early 1970s, which applied formal logic to natural language in papers like "English as a Formal Language" (1970), enabling precise, model-theoretic analyses of quantification and compositionality that bridged logic and linguistics.

By the 1980s, the rise of cognitive linguistics shifted focus toward embodied and experiential foundations of meaning, with George Lakoff and Mark Johnson's Metaphors We Live By (1980) arguing that conceptual metaphors structure everyday reasoning by mapping abstract domains onto concrete bodily experiences, such as understanding time as a resource ("time is money"). This approach challenged formalist views by integrating semantics with general cognition, proposing that metaphors are not mere linguistic ornaments but pervasive cognitive mechanisms shaping thought across languages and cultures.

The computational turn accelerated in the early 2000s with the Semantic Web initiative, proposed by Tim Berners-Lee, James Hendler, and Ora Lassila in 2001, which envisioned extending the World Wide Web with machine-readable metadata using ontologies and standardized vocabularies to enable automated reasoning over distributed knowledge. This development facilitated semantic interoperability in data exchange, powering applications like linked open data projects that integrate heterogeneous information sources for inference.

Post-2010 advances in neural networks revolutionized the computational modeling of meaning by learning distributed representations from vast corpora. Tomas Mikolov et al.'s word2vec (2013) introduced efficient skip-gram and CBOW models to generate dense vector embeddings capturing semantic similarities, such as the proximity of "king" and "queen" in vector space after subtracting "man" and adding "woman." Building on this, Jacob Devlin et al.'s BERT (2018) employed bidirectional Transformer architectures pre-trained with masked language modeling to produce contextualized embeddings that outperform prior methods in tasks like natural language inference, achieving state-of-the-art results on benchmarks such as GLUE by better handling ambiguity and long-range dependencies.

In the 2020s, large language models (LLMs) such as the GPT series have demonstrated emergent semantic capabilities but also highlighted challenges to classical compositionality, the principle that meaning composes systematically from parts, often failing on novel combinations despite excelling on familiar patterns. Studies show LLMs struggle with systematic generalization, such as recombining known lexical items into unseen structures, prompting research into techniques like iterated learning to enhance compositional reasoning. Concurrently, multimodal semantics has advanced through models like CLIP (Contrastive Language-Image Pre-training) by Alec Radford et al. (2021), which aligns image and text embeddings in a joint space via contrastive learning on 400 million image-text pairs, enabling zero-shot transfer to vision tasks and cross-modal semantic retrieval. This integration extends semantics beyond text to visual-linguistic understanding, supporting applications in image captioning and embodied AI.
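The analogy arithmetic behind such embeddings can be illustrated with a short sketch. The Python example below is an illustration only, not the word2vec training procedure: the three-dimensional vectors are invented for the example (trained embeddings typically have hundreds of dimensions), and only nearest-neighbor lookup by cosine similarity is shown.

    import numpy as np

    # Toy embeddings invented for illustration; real word2vec vectors are
    # learned from corpora and typically have 100-300 dimensions.
    emb = {
        "king":  np.array([0.8, 0.7, 0.1]),
        "queen": np.array([0.8, 0.1, 0.7]),
        "man":   np.array([0.2, 0.9, 0.1]),
        "woman": np.array([0.2, 0.1, 0.9]),
    }

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Analogy: king - man + woman should land near queen in the vector space.
    target = emb["king"] - emb["man"] + emb["woman"]
    best = max((w for w in emb if w not in {"king", "man", "woman"}),
               key=lambda w: cosine(target, emb[w]))
    print(best)  # -> "queen" with these toy vectors

With real trained embeddings the same lookup is performed over a vocabulary of hundreds of thousands of words, and the nearest neighbor of the offset vector is typically, though not always, the intuitively analogous word.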

Interdisciplinary Applications

In Logic and Philosophy

In logic, semantics provides the interpretive framework for formal systems, linking syntactic structures to their intended meanings. A cornerstone of logical semantics is the interplay between proof theory and model theory, exemplified by the concepts of soundness and completeness. Soundness ensures that every theorem provable within a formal system is semantically valid, meaning it holds true in all models of the system. Completeness, conversely, guarantees that every semantically valid formula is provable, establishing an equivalence between syntactic derivability and semantic truth. Completeness for first-order predicate logic was rigorously established by Kurt Gödel in his 1929 dissertation, which demonstrated that the axioms and rules of inference capture all logical truths. Gödel's result not only validated the expressive power of first-order logic but also underscored the semantic foundations of proof systems, influencing subsequent developments in automated theorem proving and model checking.

Philosophical semantics extends these formal tools to broader questions of meaning and existence, particularly through analyses of ontological commitment. Willard Van Orman Quine argued that the ontology of a theory is determined by the entities quantified over in its sentences, famously encapsulated in his criterion "to be is to be the value of a variable." This view, articulated in his 1948 essay "On What There Is," holds that quantification commits speakers to the existence of the objects the bound variables range over, thereby linking semantic structure to metaphysical implications. Quine's approach revolutionized philosophical semantics by emphasizing the role of logical regimentation in uncovering hidden assumptions about reality, challenging traditional metaphysics to align with scientific discourse.

A pivotal advancement in philosophical semantics came with the introduction of possible worlds semantics, particularly for modal logic dealing with necessity and possibility. Saul Kripke developed a frame-based semantics in his 1963 paper "Semantical Considerations on Modal Logic," where models consist of a set of possible worlds connected by accessibility relations. In this framework, a modal formula is true at a world if it holds in all (or some) accessible worlds, providing a precise semantic evaluation for operators like necessity (true in all accessible worlds) and possibility (true in some). Kripke frames thus offer a relational structure that captures intuitive notions of counterfactuals and alternative scenarios, bridging logic and metaphysics by enabling rigorous analysis of metaphysical modalities.

Central debates in philosophical semantics revolve around the nature of truth, pitting realism against anti-realism. Realists maintain that truth is bivalent and independent of human verification, allowing statements to be true or false even beyond evidential reach. Anti-realists, as championed by Michael Dummett, reject this by tying meaning and truth to verifiable conditions; Dummett argued in his 1959 paper "Truth" that understanding a sentence requires the ability to recognize evidence for its truth. This verificationist stance, elaborated in Dummett's later works, challenges classical truth-conditional semantics by insisting that truth must be epistemically accessible, thereby influencing discussions on the semantics of vague predicates and undecidable sentences. These debates highlight semantics' role in resolving tensions between logical formalism and philosophical intuition about meaning.
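The Kripke-style evaluation clauses can be made concrete in a few lines of code. The following sketch is a minimal illustration, not Kripke's own presentation: the two-world frame, the accessibility relation, and the valuation are invented for the example, and formulas are encoded as nested tuples rather than parsed syntax.

    # Minimal Kripke-frame evaluator for propositional modal formulas.
    # Worlds, accessibility, and the valuation below are invented for illustration.

    worlds = {"w1", "w2"}
    access = {"w1": {"w1", "w2"}, "w2": {"w2"}}      # accessibility relation R
    valuation = {"w1": {"p"}, "w2": {"p", "q"}}      # atoms true at each world

    def true_at(world, formula):
        """Evaluate a formula at a world: atoms are strings; compound
        formulas are ('not', f), ('and', f, g), ('box', f) for necessity,
        and ('dia', f) for possibility."""
        if isinstance(formula, str):                 # atomic proposition
            return formula in valuation[world]
        op = formula[0]
        if op == "not":
            return not true_at(world, formula[1])
        if op == "and":
            return true_at(world, formula[1]) and true_at(world, formula[2])
        if op == "box":                              # true in all accessible worlds
            return all(true_at(v, formula[1]) for v in access[world])
        if op == "dia":                              # true in some accessible world
            return any(true_at(v, formula[1]) for v in access[world])
        raise ValueError(f"unknown operator {op!r}")

    print(true_at("w1", ("box", "p")))   # True: p holds at both accessible worlds
    print(true_at("w1", ("box", "q")))   # False: q fails at w1, which w1 can access
    print(true_at("w1", ("dia", "q")))   # True: q holds at the accessible world w2

Changing the accessibility relation changes which modal principles hold, which is how Kripke frames distinguish systems such as T, S4, and S5.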

In Psychology and Cognition

In psychology and cognition, semantics refers to the study of meaning in mental representations, particularly how individuals process, store, and retrieve conceptual knowledge through language and thought. Semantic memory, a core component, encompasses general knowledge about the world, such as facts, concepts, and word meanings, distinct from episodic memory, which involves personal experiences tied to specific times and places. This distinction was first formalized by Endel Tulving in 1972, who argued that semantic memory operates independently of contextual recall, enabling retrieval of abstract information without autobiographical reference. The distinction has been foundational in cognitive psychology, influencing models of knowledge storage and retrieval.

A key mechanism in semantic processing is priming, where exposure to one stimulus facilitates the processing of related concepts, revealing the interconnected structure of semantic knowledge. Allan Collins and M. Ross Quillian proposed in 1969 that semantic memory is organized as hierarchical networks, with concepts represented as nodes linked by associations, such as "canary" inheriting properties like "can fly" from higher-level categories like "bird." Their model predicted verification times for sentences (e.g., "A canary is a bird" verified faster than "A canary is an animal"), supported by empirical data showing spreading activation through these networks during priming tasks. This network approach has informed experiments demonstrating how semantic priming speeds word recognition and shapes performance in cognitive tasks such as lexical decision.

The acquisition of semantic knowledge begins in early childhood, as children map words to meanings through a combination of innate cognitive abilities, social interaction, and environmental cues. Paul Bloom's 2000 analysis posits that children do not rely solely on statistical associations or simple imitation but use sophisticated mechanisms, including theory of mind and attentional focus, to narrow down word referents from underconstrained input. For instance, toddlers quickly learn that "dog" refers to a category rather than a specific instance by integrating perceptual, syntactic, and pragmatic information, achieving rapid vocabulary growth without exhaustive trial and error.

Semantic processing disruptions are evident in neurological disorders, providing insights into the neural underpinnings of meaning representation. Semantic dementia, a progressive neurodegenerative condition characterized by anterior temporal lobe atrophy, leads to profound loss of conceptual knowledge while sparing other cognitive functions such as episodic memory and perceptual skills. John Hodges and colleagues' 1992 study of patients with this disorder showed selective impairment in comprehending word meanings and naming objects, with preserved grammatical abilities, supporting the idea of a dedicated amodal semantic system. Similarly, aphasia, often resulting from left-hemisphere strokes, can manifest semantic deficits, as in Wernicke's aphasia, where patients struggle with word comprehension due to impaired access to stored meanings. Studies of deep dysphasia, such as those by Saffran and Marin (1975), highlight how semantic errors in repetition tasks reflect degraded lexical-semantic links, contrasting with phonological impairments. These disorders underscore semantics' vulnerability in cognition, with studies showing that semantic deficits correlate with reduced activation in temporal and frontal regions during semantic tasks.
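As a rough illustration of the hierarchical-network idea, the sketch below encodes a toy taxonomy in Python; the nodes, links, and properties are invented for this example, and the link-counting function is only a crude stand-in for the verification-time predictions of the original model.

    # Toy hierarchical semantic network in the spirit of Collins and Quillian (1969).
    # Node names, is-a links, and properties are invented for illustration.

    is_a = {"canary": "bird", "bird": "animal"}              # superordinate links
    properties = {
        "canary": {"is yellow", "can sing"},
        "bird":   {"can fly", "has wings"},
        "animal": {"can move", "has skin"},
    }

    def has_property(concept, prop):
        """Walk up the hierarchy until the property is found (inheritance)."""
        while concept is not None:
            if prop in properties.get(concept, set()):
                return True
            concept = is_a.get(concept)
        return False

    def levels_to_verify(concept, category):
        """Number of is-a links traversed to verify 'concept is a category';
        Collins and Quillian used such distances to predict verification times."""
        steps, node = 0, concept
        while node is not None:
            if node == category:
                return steps
            node = is_a.get(node)
            steps += 1
        return None                                          # not in the hierarchy

    print(has_property("canary", "can fly"))     # True, inherited from "bird"
    print(levels_to_verify("canary", "bird"))    # 1 link
    print(levels_to_verify("canary", "animal"))  # 2 links, predicted to be slower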

In Computer Science and AI

In computer science, semantics provides formal methods for specifying the meaning of computational artifacts, ensuring precise interpretation and verification of programs and data. Denotational semantics, a foundational approach, assigns mathematical meanings to programming constructs by translating them into elements of abstract mathematical domains, such as complete partial orders, allowing compositional analysis of program behavior. This framework, developed through the Scott-Strachey approach, enables reasoning about program equivalence and correctness without simulating execution. Gordon Plotkin's 1981 notes on a structural approach to operational semantics complement this by defining program behavior through inference rules that model individual computation steps, facilitating proofs of adequacy relative to denotational interpretations.

In database systems, semantics underpins the relational model, which treats data as sets of tuples organized into relations and defines query operations algebraically to preserve meaning and integrity. Edgar F. Codd's 1970 paper introduced this model, specifying that the semantics of a database query is the mathematical relation resulting from operations like selection, projection, and join, ensuring that declarative queries yield intended results independently of physical storage. This formalization supports data independence and enables optimization while maintaining semantic consistency, as seen in SQL's reliance on relational algebra for query evaluation.

In artificial intelligence, semantics drives applications in natural language understanding (NLU), particularly for chatbots, where it enables parsing user inputs to infer intent, entities, and context for coherent responses. For instance, semantic parsing techniques map natural language utterances to executable representations, such as lambda calculus expressions, allowing chatbots built on dialog management systems to handle complex interactions. Knowledge graphs further enhance AI semantics by encoding entities and their relationships as directed graphs, with Google's Knowledge Graph, launched in 2012, integrating billions of facts to disambiguate queries and provide context-aware results in search and recommendation systems.

Recent developments in the 2020s have integrated semantics into large language models (LLMs) for tasks like semantic role labeling (SRL), which assigns roles (e.g., agent, patient) to the arguments of a predicate to capture predicate-argument structure. Studies show that LLMs, when prompted appropriately, achieve competitive SRL performance by leveraging emergent semantic understanding, though they face limitations in producing consistent structured output without fine-tuning. As of 2025, semantics in LLMs has advanced with multimodal integration, enabling better cross-modal understanding in recent models and their successors. Ethical semantics has emerged to detect biases in AI by analyzing semantic embeddings for disparities in representation, such as gender or racial stereotypes in text generation; for example, a recent decade-spanning review of AI bias research distinguishes semantic-related biases in LLMs, probed through word associations, and proposes mitigation through fairness-aware training.
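To illustrate the compositional flavor of denotational definitions, the sketch below assigns meanings to a toy arithmetic expression language; the abstract syntax and the meaning function are invented for illustration and use plain Python functions as denotations rather than the domain-theoretic machinery of the Scott-Strachey approach.

    # Minimal sketch of a denotational-style semantics for a toy expression
    # language. Abstract syntax as nested tuples:
    #   ("num", n) | ("var", name) | ("add", e1, e2) | ("mul", e1, e2)

    def meaning(expr):
        """Map each expression to its denotation: a function from environments
        (dicts of variable values) to numbers. The definition is compositional:
        the meaning of a compound is built from the meanings of its parts."""
        tag = expr[0]
        if tag == "num":
            return lambda env: expr[1]
        if tag == "var":
            return lambda env: env[expr[1]]
        if tag == "add":
            m1, m2 = meaning(expr[1]), meaning(expr[2])
            return lambda env: m1(env) + m2(env)
        if tag == "mul":
            m1, m2 = meaning(expr[1]), meaning(expr[2])
            return lambda env: m1(env) * m2(env)
        raise ValueError(f"unknown construct {tag!r}")

    # (x + 2) * y under the environment {x: 3, y: 4} denotes 20.
    expr = ("mul", ("add", ("var", "x"), ("num", 2)), ("var", "y"))
    print(meaning(expr)({"x": 3, "y": 4}))   # 20

Because each clause refers only to the meanings of immediate subexpressions, two programs with the same denotation can be interchanged without affecting any enclosing program, which is the basis for reasoning about program equivalence mentioned above.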

References

  1. [1]
    [PDF] A Short Introduction to Semantics - Academy Publication
    Abstract—Semantics is the study of meaning in language. Although it can be conceived as concerned with meaning in general, it is often confined to those ...
  2. [2]
    Semantics - SpringerLink
    Semantics is the study of meaning in language. Introduced into English in the late 19th century, the term has since competed with such other terms as ...
  3. [3]
    Semantics - The Study of Language
    Semantics is the study of the meaning of words, phrases and sentences. In semantic analysis, there is always an attempt to focus on what the words ...
  4. [4]
    [PDF] Semantics, Meaning, Truth and Content: Disentangling Linguistic ...
    Jul 6, 2017 · Central to the philosophy of language research program is the project called “se- mantics”. Semantics, to a first approximation, is that ...
  5. [5]
    [PDF] Semantics and Computational Semantics - Rutgers University
    Computer science is the study of precise descriptions of finite processes; semantics is the study of meaning in language. Thus, computa- tional semantics ...
  6. [6]
    [PDF] SEMANTICS AS A SCIENTIFIC DIRECTION IN MODERN ...
Mar 29, 2024 · Semantic meaning is the concept of things-objects, objects, actions, signs, and at the same time embodies the main lexical ...
  7. [7]
    Semantics - Etymology, Origin & Meaning
    Semantic, from French sémantique (1883) and Greek sēmasia "meaning," refers to the study of meaning in language and the relationship between symbols and ...
  8. [8]
    An Account of the Word 'Semantics' - Taylor & Francis Online
of the French linguist Michel Bréal. In an article in 1883 in the annual of a society for Greek studies, he launched the word in the following passage ...
  9. [9]
    [PDF] Lecture 4: Formal semantics and formal pragmatics
    Mar 27, 2009 · On this view, syntax concerns properties of expressions, such as well-formedness; semantics concerns relations between expressions and what they ...
  10. [10]
    [PDF] Chapter 5: Components of Language & Reading
    Linguists have identified five basic components (phonology, morphology, syntax, semantics, and pragmatics) found across languages.
  11. [11]
    What is Linguistics? - College of Arts and Sciences
    Syntax - the study of sentence structure; Semantics - the study of linguistic meaning; Pragmatics - the study of how language is used in context; Historical ...
  12. [12]
    Theories of Meaning - Stanford Encyclopedia of Philosophy
Jan 26, 2010 · One sort of theory of meaning—a semantic theory—is a specification of the meanings of the words and sentences of some symbol system ...
  13. [13]
    Peirce's Theory of Signs - Stanford Encyclopedia of Philosophy
    Oct 13, 2006 · Peirce's Sign Theory, or Semiotic, is an account of signification, representation, reference and meaning.
  14. [14]
    Hermeneutics - Stanford Encyclopedia of Philosophy
    Dec 9, 2020 · Hermeneutics is the study of interpretation. Hermeneutics plays a role in a number of disciplines whose subject matter demands interpretative approaches.
  15. [15]
    Word Meaning - Stanford Encyclopedia of Philosophy
    Jun 2, 2015 · The task of a semantic theory is just to account for the literal meaning of sentences: context does not affect literal semantic content but “ ...
  16. [16]
    Discourse Representation Theory
    May 22, 2007 · Discourse Representation Theory (DRT) is a mentalist, representationalist theory that uses mental representations called discourse ...
  17. [17]
    Polysemy—Evidence from Linguistics, Behavioral Science, and ...
Mar 1, 2024 · Polysemy is the type of lexical ambiguity where a word has multiple distinct but related interpretations.
  18. [18]
    The Representation of Polysemy: MEG Evidence - PMC - NIH
    In polysemy, on the other hand, a single word form is associated with two or more meanings, traditionally called “senses,” that are distinct but semantically ...
  19. [19]
    Meaning and Context-Sensitivity | Internet Encyclopedia of Philosophy
    The words whose contribution to the contents of utterances depends on the context in which the words are uttered are called context-sensitive.
  20. [20]
    Two-Dimensional Semantics - Stanford Encyclopedia of Philosophy
    Dec 13, 2010 · The context-dependence of the expression 'I' is revealed when we evaluate the use of 'I' with respect to different possible contexts of use.
  21. [21]
    Semantics and Context‐Dependence: Towards a Strawsonian ...
This chapter considers a familiar argument that the ubiquity of context‐dependence threatens the project of natural language semantics, at least as that project ...
  22. [22]
    Paul Grice - Stanford Encyclopedia of Philosophy
Dec 13, 2005 · Grice's idea was to show that the abstract notion of sentence meaning was to be understood in terms of what specific speakers intend on specific ...
  23. [23]
    Implicature - Stanford Encyclopedia of Philosophy
    May 6, 2005 · The meaning of a sentence determines its conventional implicatures, but whether a speaker implicates them depends on how the sentence is used.
  24. [24]
    Gottlob Frege - Stanford Encyclopedia of Philosophy
Sep 14, 1995 · ... Frege believed that the sense of a name varies from person to person.) Using the distinction between sense and denotation, Frege can account ...
  25. [25]
    [PDF] On Sense and Reference
On Sense and Reference. Gottlob Frege. Equality* gives rise to challenging questions which are not altogether easy to answer. Is it a relation? A relation ...
  26. [26]
    [PDF] Frege's Theory of Sense and Reference
    Frege often used definite descriptions to express the senses of proper names. For instance, he claimed that the sense of 'Aristotle' (for some speakers) can be ...
  27. [27]
    [PDF] Informative Identities in the Begriffsschrift and 'On Sense and ...
    This paper is about the relationship between Frege's discussions of informative identity statements in the Begriffsschrift and 'On Sense and Reference'. The ...
  28. [28]
    [PDF] Kripke: “Naming and Necessity”
    Oct 20, 2008 · “Frege should be criticized for using the term 'sense' in two senses. For he takes the sense of a designator to be its meaning; and he also ...
  29. [29]
  30. [30]
  31. [31]
    Semiotics for Beginners: Signs - cs.Princeton
    Saussure noted that his choice of the terms signifier and signified helped to indicate 'the distinction which separates each from the other' (Saussure 1983 ...
  32. [32]
    [PDF] An Overview of Lexical Semantics - UC Irvine
Abstract. This article reviews some linguistic and philosophical work in lexical semantics. In Section 1, the general methods of lexical semantics are explored ...
  33. [33]
    [PDF] 19 LEXICAL SEMANTICS - Stanford University
    Synonymy holds between different words with the same meaning. • Hyponymy relations hold between words that are in a class-inclusion relation- ship. • Semantic ...
  34. [34]
  35. [35]
    [PDF] David Cruse - Italian Journal of Linguistics
    Synonymy is defined as bilateral hyponymy: (2). X is a synonym of Y iff there exists a meaning postulate relating X' and. Y' of the form: Vx [X'(x) ←Y'(x)].
  36. [36]
    [PDF] Principles of Categorization Eleanor Rosch, 1978 University of ...
    For natural-language categories, to speak of a single entity that is the prototype is either a gross misunderstanding of the empirical data or a covert theory.
  37. [37]
  38. [38]
    [PDF] LINGUISTIC RELATIVITY
    The linguistic relativity hypothesis, the proposal that the particular language we speak influences the way we think about reality, forms one part of the.
  39. [39]
    [PDF] The Principle of Semantic Compositionality
    1. Introduction: What is Semantic Compositionality? The Principle of Semantic Compositionality is the principle that the meaning of an expression is a function ...
  40. [40]
    [PDF] Thematic Proto-Roles and Argument Selection
    Its goals are more modest: (1I) to lay some methodological groundwork for studying thematic roles with the tools of model-theoretic semantics, and to propose ...
  41. [41]
    [PDF] The Proper Treatment of Quantification in Ordinary English
Also, by S1 and S2, every man ∈ PT; and hence, by S14, every man loves a woman such that she loves him ∈ Pt. ... ambiguity is clearly a matter of scope, and ...
  42. [42]
    [PDF] Ellipsis: A survey of analytical approaches - Knowledge Base
    Aug 29, 2016 · The term ellipsis has been applied to a wide range of phenomena across the centuries, from any situation in which words appear to be missing ...
  43. [43]
    [PDF] saul a. kripke
    The semantical completeness theorem we gave for modal propositional logic can be extended to the new systems. We can introduce existence as a predicate in the ...
  44. [44]
    [PDF] General Semantics
    I distinguish two topics: first, the description of possible languages or grammars as abstract semantic systems whereby symbols are associated with aspects of ...
  45. [45]
    FRAME SEMANTICS AND THE NATURE OF LANGUAGE* - 1976
Frame semantics and the nature of language. Charles J. Fillmore, Department of Linguistics, University of California, Berkeley.
  46. [46]
    Metaphors We Live By - The University of Chicago Press
The now-classic Metaphors We Live By changed our understanding of metaphor and its role in language and the mind.
  47. [47]
    Where Is the Semantic System? A Critical Review and Meta ...
    Using strict inclusion criteria, we analyzed 120 functional neuroimaging studies focusing on semantic processing. Reliable areas of activation in these studies ...
  48. [48]
    (PDF) Computational Semantics - ResearchGate
    Abstract. Computational semantics is concerned with computing the meanings of linguistic objects such as sentences, text fragments, and dialogue contributions.
  49. [49]
    [PDF] Computational Semantics∗ - Linguistics - UCLA
    Computational semantics is a relatively new discipline that combines insights from formal semantics, computational linguistics, and automated reasoning.
  50. [50]
    Efficient Estimation of Word Representations in Vector Space - arXiv
    Jan 16, 2013 · Efficient Estimation of Word Representations in Vector Space. Authors:Tomas Mikolov, Kai Chen, Greg Corrado, Jeffrey Dean.
  51. [51]
    [PDF] Learning Executable Semantic Parsers for Natural Language ...
    Semantic parsing maps natural language into logical forms, which are like programs that are executed to yield the desired behavior.
  52. [52]
    (PDF) The Semantic Web: A New Form of Web Content That is ...
    Aug 6, 2025 · PDF | On May 1, 2001, Tim Berners-Lee and others published The Semantic Web: A New Form of Web Content That is Meaningful to Computers Will ...
  53. [53]
    OWL Web Ontology Language Reference - W3C
Feb 10, 2004 · This document contains a structured informal description of the full set of OWL language constructs and is meant to serve as a reference for OWL users.
  54. [54]
    OWL Web Ontology Language Overview - W3C
    Feb 10, 2004 · The OWL Web Ontology Language is designed for use by applications that need to process the content of information instead of just presenting information to ...
  55. [55]
    [1810.04805] BERT: Pre-training of Deep Bidirectional Transformers ...
    Oct 11, 2018 · BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers.
  56. [56]
    MM-LLMs: Recent Advances in MultiModal Large Language Models
Jan 24, 2024 · In this paper, we provide a comprehensive survey aimed at facilitating further research of MM-LLMs. Initially, we outline general design formulations for model ...
  57. [57]
    Names - Stanford Encyclopedia of Philosophy
    Sep 17, 2008 · A proper name is a type of noun phrase. So, for example, the proper name “Alice Walker” consists of two proper nouns: “Alice” and “Walker”.
  58. [58]
    Descriptions - Stanford Encyclopedia of Philosophy
    Mar 2, 2004 · The analysis of descriptions has played an important role in debates about metaphysics, epistemology, semantics, psychology, logic and linguistics
  59. [59]
  60. [60]
    Humpty Dumpty and Verbal Meaning - jstor
    Humpty Dumpty seems to claim is the right arbitrarily to change the meaning of a word, by means of a private mental act. The first procedure is practical; the ...
  61. [61]
    Internalism and Externalism in the Philosophy of Mind and Language
Content internalism (henceforth internalism) is the position that our contents depend only on properties of our bodies, such as our brains. Internalists ...
  62. [62]
    Conceptual Role Semantics | Internet Encyclopedia of Philosophy
    Conceptual role semantics (hereafter CRS) is a theory of what constitutes the meanings possessed by expressions of natural languages.
  63. [63]
    The meaning of 'meaning' (Chapter 12) - Philosophical Papers
    Philosophical Papers - November 1975. ... 12 - The meaning of 'meaning'. Published online by Cambridge University Press: 12 January 2010. Edited by. Hilary Putnam.
  64. [64]
    [PDF] Individualism and the Mental | UCLA Philosophy
    We are to conceive of a situation in which the patient proceeds from birth through the same course of physical events that he actually does, right to and in-.
  65. [65]
  66. [66]
  67. [67]
  68. [68]
    [PDF] Convention or Nature? The Correctness of names in Plato's Cratylus
    Jun 20, 2018 · In the dialogue, Plato examines two theories on the correctness of names; conventionalism and naturalism. However, there is no clear positive ...
  69. [69]
    [PDF] Plato's Last Word on Naturalism vs. Conventionalism in the Cratylus. I
    Plato's position in the debate in the Cratylus about the principle of naming things remains debatable in scholarship. Is he a supporter of naturalism as.
  70. [70]
    [PDF] Aristotle on verbal communication: The first chapters of De ...
    This article deals with the communicational aspects of Aristotle's theory of significa- tion as laid out in the initial chapters of the De Interpretatione (Int.) ...
  71. [71]
    [PDF] An Examination of Names in Aristotle's and Plato's Philosophy of ...
    Now spoken sounds are symbols (σύμβολα) of affections in the soul. (ἐν τῇ ψυχῇ παθημάτων), and written marks symbols of spoken sounds. And just as written ...
  72. [72]
    [PDF] Paninian Linguistics - Stanford University
    The grammar is based on the spoken language (bhāṣā) of Panini's time, and also gives rules on Vedic usage and on regional variants. Its optional rules ...
  73. [73]
    [PDF] The structure and semantics in pāṇinian system of grammar
    It may be concluded from Pāṇini's attempts to exclude semantics out of the structure based derivation process that he follows the ancient Indian tradition of ...
  74. [74]
    [PDF] Pāṇini's grammar and its computerization
    Abstract: This article reviews the impact of modern theoretical views on our understanding of the nature and purpose of the grammar of Pāṇini (ca. 350.
  75. [75]
    Nominalism and Semantics in Abelard and Ockham
Jun 3, 2015 · Peter Abelard and William of Ockham represent the two main figures of the nominalism of the Middle Ages. Both share the fundamental thesis ...
  76. [76]
    [PDF] In the Wake of Abelard: Nominalisms in the Twelfth Century
    I consider Peter Abelard as a nominalist about universals,38 since he holds a version of the two theses of nominalism as I have defined them in the above:.
  77. [77]
    Frege's Logic - Stanford Encyclopedia of Philosophy
    Feb 7, 2023 · At the time of writing Begriffsschrift, Frege did not have a clear distinction between sense and reference. As a result, he instead mobilizes ...
  78. [78]
    Polysemy and Semantic Change in the Arabic Language and ... - jstor
    2 FIRTH (1957: 26) argues that “Bréal is the godfather of the words sémantique and polysémie”. In 1883, Michel Bréal used the word sémantique for the first time ...
  79. [79]
    Essai de Sémantique : (science des significations) - Internet Archive
    Apr 25, 2007 · Essai de Sémantique : (science des significations). by: Bréal, Michel, 1832-1915. Publication date: 1897. Topics: Language, Semantics. Publisher ...
  80. [80]
    Language - Leonard Bloomfield - Google Books
    Intended as an introduction to the field of linguistics, it revolutionized the field when it appeared in 1933 and became the major text of the American ...
  81. [81]
    The Semantic Web - Scientific American
    May 1, 2001 · May 1, 2001 ... The Semantic Web. A new form of Web content that is meaningful to computers will unleash a revolution of new possibilities. By Tim ...
  82. [82]
    Skills-in-Context: Unlocking Compositionality in Large Language ...
    We investigate how to elicit compositional generalization capabilities in large language models (LLMs). Compositional generalization empowers LLMs to solve ...
  83. [83]
    [PDF] Iterated Learning Improves Compositionality in Large Vision ...
In this paper, we design an iterated learning algorithm that improves the compositionality in large vision-language models, inspired by cultural transmission ...
  84. [84]
    Learning Transferable Visual Models From Natural Language ...
    The paper proposes learning visual models by predicting image-caption pairs, then using natural language for zero-shot transfer to downstream tasks.
  85. [85]
    Kurt Gödel - Stanford Encyclopedia of Philosophy
    Feb 13, 2007 · The theorem as stated by Gödel in Gödel 1930 is as follows: a countably infinite set of quantificational formulas is satisfiable if and only if ...
  86. [86]
    (PDF) Completeness: From Gödel to Henkin - ResearchGate
    Aug 10, 2025 · This paper focuses on the evolution of the notion of completeness in contemporary logic. We discuss the differences between the notions of completeness of a ...
  87. [87]
    Ontological Commitment - Stanford Encyclopedia of Philosophy
    Nov 3, 2014 · For Quine, an acceptable paraphrase is just a sentence that “serves any purposes [of the original] that seem worth serving” (Quine 1960: 214).
  88. [88]
    Inception of Quine's ontology: History and Philosophy of Logic
    This paper traces the development of Quine's ontological ideas throughout his early logical work in the period before 1948. It shows that his ontological ...
  89. [89]
    Modal Logic - Stanford Encyclopedia of Philosophy
    Feb 29, 2000 · Kripke's semantics provides a basis for translating modal formulas into sentences of first-order logic with quantification over possible worlds.
  90. [90]
    Challenges to Metaphysical Realism
Jan 11, 2001 · The first anti-realist arguments based on explicitly semantic considerations were advanced by Michael Dummett and Hilary Putnam.
  91. [91]
    Dummett, Michael | Internet Encyclopedia of Philosophy
Dummett's most celebrated original work lies in his development of anti-realism, based on the idea that to understand a sentence is to be capable of recognizing ...
  92. [92]
    [PDF] Episodic and Semantic Memory - Alice Kim, PhD
A similar parallel lies in the distinction between primary and secondary organization in free recall (Tulving, ...
  93. [93]
    [PDF] Retrieval Time from Semantic Memory
    Quillian (1967, 1969) has proposed a model for storing semantic information in a computer memory. In this model each word has stored with it a configuration ...
  94. [94]
    How Children Learn the Meanings of Words - MIT Press Direct
According to Paul Bloom, children learn words through sophisticated cognitive abilities that exist for other purposes.
  95. [95]
    SEMANTIC DEMENTIA | Brain - Oxford Academic
Abstract. We report five patients with a stereotyped clinical syndrome characterized by fluent dysphasia with severe anomia, reduced vocabulary and prominent ...
  96. [96]
    [PDF] A Structural Approach to Operational Semantics - People | MIT CSAIL
    It is the purpose of these notes to develop a simple and direct method for specifying the seman- tics of programming languages. Very little is required in ...
  97. [97]
    [PDF] A Relational Model of Data for Large Shared Data Banks
A Relational Model of Data for Large Shared Data Banks. E. F. CODD. IBM Research Laboratory, San Jose, California. Future users of large data banks must be ...
  98. [98]
    A survey on chatbots and large language models - ScienceDirect.com
    The survey also delves into the key components of chatbot systems, including Natural Language Understanding (NLU), dialogue management, and Natural Language ...
  99. [99]
    Introducing the Knowledge Graph: things, not strings - The Keyword
    The Knowledge Graph enables you to search for things, people or places that Google knows about—landmarks, celebrities, cities, sports teams, ...
  100. [100]
    [PDF] 10-Year Literature Review of AI/LLM Bias Research Reveals Narrow ...
    “We primarily categorize biases into semantic-related and semantic-agnostic biases... Semantic-related bias pertains to the bias of evaluators that is affected ...