
Semantic feature

A semantic feature is a basic, indivisible unit of meaning used in lexical semantics to represent and differentiate the senses of words, often denoted by binary oppositions such as [+animate] versus [-animate] or [+human] versus [-human]. These features form the building blocks of componential analysis, a systematic approach that decomposes the meaning of lexical items into atomic components in order to explain semantic relations such as hyponymy and incompatibility. For instance, the noun "girl" might be analyzed as [+animate], [+human], [-male], and [-adult], distinguishing it from "cow" ([+animate], [-human]) or "table" ([-animate], [-human]).

The concept of semantic features emerged in the 1960s within generative linguistics as part of efforts to formalize semantic theory. Jerrold J. Katz and Jerry A. Fodor introduced it in their influential 1963 paper "The Structure of a Semantic Theory," proposing a model in which lexical entries consist of semantic markers (features) and distinguishers, combined with projection rules to generate phrase meanings and account for ambiguities. This framework addressed selectional restrictions, rules that predict the grammaticality or oddness of sentences based on feature compatibility: the verb "ate," for example, requires a [+animate] subject, rendering "The hamburger ate the man" semantically anomalous because "hamburger" is [-animate]. Semantic features thus bridge lexical meaning and syntactic behavior, influencing how words combine in larger structures.

Beyond core lexical semantics, semantic features have applications in computational linguistics and natural language processing, where they underpin tasks such as word-sense disambiguation and lexical database construction. For verbs, decompositional approaches extend features into predicate structures, such as representing "kill" as CAUSE(x, BECOME(NOT(ALIVE(y)))), highlighting causal and state-change components. While early models assumed a finite set of universal features, later critiques emphasized gradience and fuzzy category boundaries, yet the binary feature system remains a foundational tool for analyzing semantic contrasts across languages.

Fundamentals

Definition

In linguistics, semantic features are defined as the minimal, indivisible units of meaning that serve as atomic components characterizing the conceptual content of lexical items. These features function as primitives in semantic representation, allowing words to be decomposed into basic elements rather than treated as holistic, undivided wholes; for instance, the word "dog" can be analyzed as possessing the features [+animate] and [+canine], which distinguish it from non-living objects and from other animals. This approach contrasts with viewing word meanings as monolithic entities, enabling a systematic breakdown that reveals shared and differentiating aspects across the vocabulary.

A core aspect of semantic features involves binary oppositions, where features are typically expressed as positive or negative values to capture contrasts in meaning, such as [+animate] versus [-animate] (distinguishing living beings like "dog" from inanimate objects like "table") or [+human] versus [-human] (separating people like "girl" from animals like "cow"). These oppositions show how semantic features define lexical categories by specifying the presence or absence of properties, thereby organizing vocabulary into hierarchical structures based on shared traits. Unlike phonological features, which pertain exclusively to the sound properties of linguistic units (e.g., [+voiced] for sounds like /b/ versus [-voiced] for /p/), semantic features are meaning-based and play a pivotal role in delineating conceptual categories without reference to auditory or articulatory form.

Semantic features can also be positive, negative, or unmarked, with the concept of markedness indicating that a marked feature (often positive, like [+female]) adds specificity or deviation from a default, while an unmarked feature (often negative, like [-female]) represents a broader, default category. For example, "lion" is unmarked and can encompass both genders, whereas "lioness" is marked with [+female], restricting its application; this asymmetry reflects how marked forms carry more informational load but occur less frequently in use. Markedness thus underscores the non-equivalent nature of binary pairs in semantic systems, influencing how meanings are encoded and interpreted.
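
The binary notation can be made concrete in code. Below is a minimal Python sketch, with an illustrative (not standard) feature inventory and entries, that renders lexical entries in bracket notation and isolates the features on which two words contrast:

```python
# A minimal sketch of binary semantic features; the feature names and
# entries are illustrative choices, not a standard inventory.

LEXICON = {
    "girl":  {"animate": True,  "human": True,  "female": True,  "adult": False},
    "woman": {"animate": True,  "human": True,  "female": True,  "adult": True},
    "cow":   {"animate": True,  "human": False, "female": True,  "adult": True},
    "table": {"animate": False},
}

def show(word):
    """Render an entry in the standard [+feature]/[-feature] notation."""
    feats = LEXICON[word]
    return "[" + ", ".join(("+" if v else "-") + f for f, v in feats.items()) + "]"

def contrast(w1, w2):
    """Return the features on which two entries carry opposite values."""
    f1, f2 = LEXICON[w1], LEXICON[w2]
    return {f for f in f1.keys() & f2.keys() if f1[f] != f2[f]}

print(show("girl"))               # [+animate, +human, +female, -adult]
print(contrast("girl", "woman"))  # {'adult'}  -- a single binary opposition
print(contrast("girl", "cow"))    # {'human', 'adult'}
```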

Core Principles

Semantic features operate as atomic components of meaning in lexical semantics, where multiple features combine to specify the semantic content of a lexical item. This principle of feature composition posits that the meaning of a word arises from the aggregation of relevant features, which collectively define its membership in broader semantic classes. For instance, the noun "dog" can be represented as [+animate, +animal, -human], indicating that it possesses the properties of living entities and non-human fauna while lacking human attributes. This compositional approach allows for systematic relations among words, such as hyponymy, where more specific items inherit features from superordinate classes (e.g., "dog" subsumes under "animal" by inheriting [+animate, +animal]).

Features exhibit hierarchy and dependency, forming structured networks in which certain features presuppose others to ensure coherent semantic representations. Animacy, for example, often implies biological properties, as [+animate] entities are typically subject to selectional restrictions in syntactic contexts requiring living agents (e.g., verbs like "chase" select [+animate] subjects). This dependency prevents anomalous combinations, such as an inanimate object "chasing" something, by enforcing hierarchical implications in the feature system. Such structures enable the modeling of semantic contrasts and grammatical compatibility across lexical categories.

Each distinct lexical item or sense is defined by a distinct configuration of features, ensuring differentiation from other items and facilitating precise semantic disambiguation. For example, "boy" might be [+human, +male, -adult], distinguishing it from "man" as [+human, +male, +adult], while both share core features like [+animate], implied by [+human]. Complementing this are redundancy rules and default features, which eliminate unnecessary specification by inferring implied properties; thus, [+human] defaults to [+animate], avoiding explicit listing and optimizing representational efficiency.

Feature valuation is predominantly binary, marked as present [+] or absent [-], which supports the clear-cut contrasts essential for semantic opposition (e.g., [+male] vs. [-male] for gender). This binary system underpins antonymy and compatibility tests, as in "mare" being [-male] relative to "stallion" [+male]. While some later frameworks allow scalar valuations (e.g., degrees of animacy), the core binary approach remains foundational for capturing discrete meaning differences without introducing gradations that complicate compositionality.
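
The redundancy rules and subsumption-based hyponymy described above can be sketched directly. The following Python fragment assumes features are stored as signed strings such as "+human"; the rule set and entries are illustrative only, not a proposed universal system:

```python
# Sketch of redundancy rules and feature-based hyponymy; the rules and
# entries are illustrative assumptions, not a fixed inventory.

REDUNDANCY_RULES = {
    "+human":  {"+animate"},   # [+human] defaults to [+animate]
    "+animal": {"+animate"},
}

def expand(features):
    """Close a feature set under the redundancy rules."""
    result = set(features)
    changed = True
    while changed:
        changed = False
        for f in list(result):
            for implied in REDUNDANCY_RULES.get(f, ()):
                if implied not in result:
                    result.add(implied)
                    changed = True
    return result

def is_hyponym(specific, general):
    """A hyponym inherits (and adds to) the features of its hypernym."""
    return expand(general) <= expand(specific)

dog = {"+animal", "-human"}        # [+animate] left implicit
animal = {"+animal"}
print(expand(dog))                 # '+animate' is added by redundancy
print(is_hyponym(dog, animal))     # True: 'dog' subsumes under 'animal'
```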

Historical Development

Origins in Structural Linguistics

The concept of semantic features emerged within early 20th-century structural linguistics, building on Ferdinand de Saussure's foundational view of language as a system defined by differential relations among signs, where meaning derives from oppositions rather than inherent qualities. This synchronic approach to signs as relational entities influenced subsequent thinkers to adapt oppositional structures from phonology to semantics, positing minimal contrasting units that underpin lexical and grammatical meaning.

The Prague Linguistic Circle, established in 1926, became a key hub for these developments, with Roman Jakobson extending principles of oppositional structure from phonology to other areas of linguistic analysis, including semantics, during the 1930s and 1940s. Jakobson's work emphasized functional invariants in language, linking sound contrasts to meaning oppositions and laying the groundwork for viewing semantics as a structured system of differential features.

Anthropologists drew on these linguistic ideas for cultural applications, notably Claude Lévi-Strauss, who in the 1940s applied features to kinship terminology, analyzing kinship terms as oppositional structures within systems of social exchange and alliance. In his 1949 Les Structures élémentaires de la parenté, he treated kinship categories, such as affinal versus consanguineal, as derived from contrasts like near/distant, revealing underlying semantic logics in non-Western societies.

Parallel advancements occurred in glossematics, developed by Louis Hjelmslev in the 1930s and 1940s, which formalized semantic features as content-form invariants: abstract, non-substantial units that organize meaning independently of phonetic expression. Hjelmslev's framework, outlined in works such as Prolegomena to a Theory of Language (originally 1943, with later extensions), distinguished content figurae (feature-like elements) from substance, treating semantics as a pure form amenable to rigorous decomposition without empirical contingencies.

Evolution in Generative Semantics

In Noam Chomsky's 1965 framework outlined in Aspects of the Theory of Syntax, semantic features were formally integrated into generative grammar as essential components of lexical entries, enabling the selection of words through lexical insertion rules that ensure compatibility with syntactic and semantic constraints during phrase structure formation. These features, including selectional restrictions such as [+animate] for the objects of certain verbs, distinguished categorical specifications for syntax from semantic properties that govern meaning preservation across transformations. This integration marked a shift from earlier structuralist approaches by embedding semantic considerations directly into the generative process, prioritizing the syntax-semantics interface over isolated phonological or morphological analyses.

MIT lectures and discussions on nominalizations and semantic interpretation further propelled this development, as they challenged the depth of transformations and emphasized the role of semantic features in underlying representations. By the early 1970s, these ideas fueled the generative semantics debate, particularly through the work of George Lakoff and John R. Ross, who argued that deep structures should be semantically primitive, composed of abstract features and relations rather than syntactically derived, in order to better account for a range of semantic phenomena. Their 1967 paper "Is Deep Structure Necessary?" exemplified this by proposing that semantic features drive syntactic derivations from the outset, contrasting with surface structure-oriented views.

This tension culminated in the mid-1970s split between the interpretive semantics camp, led by Chomsky, which posited that semantic interpretations are applied to syntactically generated structures via interpretive rules, and the generative semantics camp, which advocated meaning as the generative source of syntax through feature-based deep structures. The debate highlighted differing implications for semantic features, with generative semanticists like Lakoff and Ross viewing them as foundational to universal cognitive processes, while interpretive approaches limited their role to post-syntactic interpretation.

In the 1980s, Ray Jackendoff advanced these discussions in conceptual semantics by expanding semantic features to incorporate thematic roles, such as Agent and Theme, as decompositional elements within verb representations to capture argument structure and event conceptualization. His 1983 book Semantics and Cognition formalized this feature decomposition for cognitive realism, proposing a parallel architecture in which semantic features interface with syntax and visual systems, ensuring that representations align with human perceptual and inferential capacities rather than purely syntactic derivations. This work bridged the earlier camps by emphasizing lexical autonomy while retaining generative principles for feature-driven meaning construction.

Theoretical Frameworks

Componential Analysis

Componential analysis represents a foundational method in lexical semantics for decomposing word meanings into discrete semantic features, often termed markers or components, which intersect to form complex representations. Pioneered in the generative tradition, this approach treats lexical entries as bundles of binary or polar features that capture essential attributes of meaning. In the seminal model proposed by Katz and Fodor, meanings are structured through a system of semantic markers, universal elements denoting systematic conceptual categories, and distinguishers, which provide idiosyncratic details not subject to broader rules. This decomposition allows the projection of individual word meanings into phrasal and sentential interpretations via recursive rules, enabling the theory to account for how finite lexical knowledge generates infinitely many novel understandings.

Central assumptions of componential analysis include the innateness and universality of semantic markers, posited as cognitive primitives shared across languages that reflect inherent conceptual structures. These markers are not language-specific but draw from a universal metatheory, facilitating cross-linguistic comparisons and systematic semantic relations. Dictionary-style decompositions exemplify this by representing words as intersecting feature sets; for instance, "bachelor" is analyzed as comprising the markers [+human], [+adult], [+male], and [-married], distinguishing it from related terms like "spinster" while highlighting shared human attributes. The framework assumes that meanings are atomic and decomposable, with features operating independently yet combinatorially to define lexical senses.

Procedures for extracting features rely on contrastive analysis, systematically comparing lexical items within semantic domains to isolate differentiating components. By examining closely related terms, which share most features (e.g., "stallion" and "mare" are both [+equine, +adult] but differ in [+male] vs. [-male]), and antonyms, which oppose key features (e.g., "man" [+adult] vs. "boy" [-adult]), analysts identify the minimal contrasts that define sense boundaries. This method involves iterative refinement, starting with broad categories like [+animate] vs. [-animate] and narrowing to specifics, ensuring that features are minimal and non-redundant where possible.

Among its strengths, componential analysis systematically explains semantic relations such as hyponymy, where a hyponym's features form a superset of the hypernym's (e.g., "dog" [+canine, +animal] entails "animal" [+animal]), and synonymy, where terms share identical feature bundles. Semantic redundancy arises when certain features are predictable from others and is addressed through redundancy rules that omit implied markers to streamline representations, as in "widow," where [-married] is redundant given [+female, +adult, +spouse]. Feature validity is tested via cancellation and contradiction procedures: a proposed feature is core if negating it yields a contradiction (e.g., "married bachelor" violates [-married]), whereas cancellable additions indicate non-essential implicatures rather than semantic content. These mechanisms enhance the model's explanatory power for lexical coherence and relational networks.
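
As a rough illustration of the contrastive procedure, the sketch below compares items within a single domain and separates shared components from minimal contrasts; the domain and its feature assignments use the equine examples from the text under assumed labels:

```python
# A sketch of contrastive feature extraction within one semantic domain;
# the procedure simply isolates the features on which two items disagree.

DOMAIN = {
    "stallion": {"equine": True, "adult": True,  "male": True},
    "mare":     {"equine": True, "adult": True,  "male": False},
    "foal":     {"equine": True, "adult": False},
}

def minimal_contrast(w1, w2):
    """Features with opposite values: candidate distinguishing components."""
    a, b = DOMAIN[w1], DOMAIN[w2]
    return sorted(f for f in a.keys() & b.keys() if a[f] != b[f])

def shared(w1, w2):
    """Features with identical values: shared components of the domain."""
    a, b = DOMAIN[w1], DOMAIN[w2]
    return sorted(f for f in a.keys() & b.keys() if a[f] == b[f])

print(minimal_contrast("stallion", "mare"))  # ['male']
print(shared("stallion", "mare"))            # ['adult', 'equine']
print(minimal_contrast("stallion", "foal"))  # ['adult']
```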

Feature Geometry and Decomposition

Feature geometry in semantics extends the hierarchical organization of phonological features, where distinctive features are arranged in tree-like structures to capture dependencies and natural classes, as originally proposed by Clements for phonology. This model organizes semantic features into branching hierarchies or networks, allowing the decomposition of complex meanings through structured dominance relations rather than flat lists. Jackendoff applied such hierarchical conceptual structures to semantics, treating lexical items as function-argument trees that decompose into primitive semantic components, thereby mirroring the geometric approach in order to reveal underlying conceptual relations.

A key distinction within feature geometry concerns privative features, which represent unidirectional oppositions (presence versus absence of a property), versus equipollent features, which involve binary contrasts between two opposing values. In semantic domains such as tense or aspect, a privative feature like [+perfective] denotes the presence of perfectivity without implying a corresponding [-perfective] counterpart, facilitating more nuanced representations of meaning oppositions. This contrasts with the equipollent binary features of traditional componential analysis, emphasizing asymmetry in semantic hierarchies to better model phenomena like markedness in lexical items.

Decomposition of complex predicates involves breaking verbs down into semantic features arranged in a causal or telic structure, enabling systematic analysis of event composition. For instance, the verb "kill" is decomposed as CAUSE(BECOME(NOT(ALIVE))), where an agent initiates a change of state from alive to not alive in the patient. This hierarchical decomposition highlights how geometric structures encode causation and state change as nested operations, supporting cross-linguistic generalizations in verbal semantics.

Proposals for a universal inventory seek to identify a set of semantic primitives applicable across languages, linking them to argument structure via proto-roles. Dowty's proto-roles, such as Proto-Agent (entailing volition, sentience, and causation) and Proto-Patient (entailing change of state and affectedness), form clusters of features that predict argument selection without relying on discrete thematic roles, providing a geometric basis for mapping semantics to syntax.

In formal terms, a basic decomposition template for a lexical item L in feature geometry can be represented as a labeled tree, L = [F_1 \; [F_2, F_3]], where each F denotes an atomic feature and F_1 dominates the subtree [F_2, F_3], capturing hierarchical dependencies.
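
The nested decomposition of "kill" can be modeled as a small term structure. The following Python sketch uses an assumed Term type; it is one possible encoding of the CAUSE/BECOME/NOT/ALIVE nesting, not a fixed formalism from the literature:

```python
# Minimal term-structure sketch of predicate decomposition, encoding
# "kill" as CAUSE(x, BECOME(NOT(ALIVE(y)))); Term is an illustrative type.

from dataclasses import dataclass

@dataclass
class Term:
    op: str      # semantic primitive, e.g. CAUSE, BECOME, NOT, ALIVE
    args: tuple  # sub-terms or argument variables

    def __str__(self):
        return f"{self.op}({', '.join(str(a) for a in self.args)})"

def kill(x, y):
    """Decompose 'x kills y' into nested cause/state-change primitives."""
    return Term("CAUSE", (x,
           Term("BECOME", (Term("NOT", (Term("ALIVE", (y,)),)),))))

print(kill("x", "y"))   # CAUSE(x, BECOME(NOT(ALIVE(y))))
```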

Notation and Examples

Standard Notation Conventions

In linguistic literature on semantic features, the standard bracket notation employs square brackets to specify binary values, with [+feature] indicating the presence of a semantic property and [-feature] denoting its absence. This convention, rooted in early componential analysis, allows for compact representation of a lexical item's semantic profile by listing multiple features within a single bracketed structure, such as [+animate, -human]. Optional or variable features may be marked with ±, as in [+adult, ±female], to capture gradations or context-dependent applicability. Feature matrices provide a tabular format for comparing semantic features across multiple lexical items, with rows typically representing the items and columns the features, facilitating visualization of overlaps, contrasts, and intersections. For instance:
Item    animate    human    count
X       +          -        +
Y       +          +        -
This matrix notation, common in both theoretical and computational linguistics, supports systematic analysis of semantic relations without requiring exhaustive listings. In formal semantics, semantic features are often formalized set-theoretically, where a feature set such as F = \{\text{animate, human}\} defines the properties associated with an expression, and compatibility between expressions is assessed via set intersection to determine shared semantic content. Feature-based predicates can be expressed in lambda notation as \lambda x . [+F](x), representing the function that applies the feature to an argument x, though this is typically used for denotational purposes rather than exhaustive derivations. Notation conventions distinguish between universal features, which employ standardized binary symbols modeled on phonological feature notation, like [+animate], for cross-linguistically common categories, and language-specific features, which may use descriptors tailored to particular lexical domains. In hierarchical models from feature geometry, these basic notations extend to tree-like structures, but the core bracket and matrix forms remain foundational.
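
The set-theoretic formalization lends itself to a short sketch. The Python fragment below models feature sets, intersection-based compatibility, and a predicate in the spirit of the lambda notation; the feature names are illustrative:

```python
# Sketch of the set-theoretic formalization described above: features as
# sets, compatibility via intersection; names are illustrative.

girl = {"animate", "human"}
cow = {"animate"}
table = set()

def shared_content(f1, f2):
    """Shared semantic content is modeled as set intersection."""
    return f1 & f2

def has_feature(feature):
    """A feature-based predicate, analogous to  lambda x . [+F](x)."""
    return lambda x: feature in x

is_animate = has_feature("animate")
print(shared_content(girl, cow))            # {'animate'}
print(is_animate(girl), is_animate(table))  # True False
```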

Illustrative Examples

In lexical semantics, kinship terms are often decomposed into binary features that capture relational and biological attributes. For instance, the English term "mother" can be represented as [+female, +lineal, +parent], distinguishing it from "grandmother" [+female, +lineal, +grandparent] by generational distance, while sharing core components with "father" except for the gender specification. Similarly, color terms illustrate perceptual and categorical distinctions; basic color terms like "red" can be analyzed componentially to show relations within semantic fields, such as grouping with other hues based on perceptual prototypes.

Cross-linguistically, semantic features like animacy play a key role in noun classification systems. In Bantu languages such as Swahili, noun classes are marked by prefixes that encode animacy hierarchies: class 1/2 prefixes (e.g., mu-/wa-) apply to [+human] nouns like mwanadamu ("person"), distinguishing them from [-human] classes for animals or inanimates.

Verb meanings are decomposed into aspectual and causal features to explain event structures. The verb "run" exemplifies an atelic activity as [+motion, +internal cause, -telic], indicating self-initiated movement without an inherent endpoint, unlike telic verbs such as "reach," which include [+telic]. Semantic features also aid in resolving polysemy by highlighting contrasting components. The homonym "bank" is disambiguated through distinct feature sets: the riverbank sense incorporates [+natural, +geographical, -artifact], while the financial-institution sense carries [+institution, +financial, +artifact], allowing context to select the appropriate interpretation.

In Bible translation, Eugene Nida applied componential analysis to ensure semantic equivalence across languages. For example, the English "peace" (as inner tranquility) is broken into components like [+absence of anxiety, +harmony], leading to translations such as "sitting down in the heart" in Kekchi to preserve the psychological state without direct lexical borrowing; similarly, "forgive" decomposes into [+overlook offense, +restore relation], rendered in Navajo as "give back his sin" to convey release from guilt.
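
Feature-based disambiguation of "bank" can be approximated with a simple overlap score, as in the following sketch; the sense labels and context cues are assumptions made for illustration:

```python
# Sketch of feature-based polysemy resolution for "bank": choose the
# sense whose feature set overlaps most with features cued by context.

SENSES = {
    "bank/river":     {"natural", "geographical"},
    "bank/financial": {"institution", "financial", "artifact"},
}

def disambiguate(context_features):
    """Pick the sense with the largest feature overlap with the context."""
    return max(SENSES, key=lambda s: len(SENSES[s] & context_features))

print(disambiguate({"financial", "artifact"}))  # bank/financial
print(disambiguate({"natural"}))                # bank/river
```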

Applications and Extensions

In Lexical Semantics

In lexical semantics, semantic features serve as the fundamental building blocks for representing word meanings within the mental lexicon, where each lexical entry is conceptualized as a bundle of features that capture atomic components of meaning. This componential approach enables the systematic organization of the lexicon by facilitating the identification of sense relations; for instance, synonyms like "couch" and "sofa" exhibit near-complete overlap in their semantic features, while antonyms such as "hot" and "cold" differ primarily in a binary feature like [±temperature]. This feature-based structure supports efficient lexical retrieval and comprehension by allowing speakers and listeners to infer relational properties from partial feature matches.

Semantic fields emerge from this framework as clusters of words unified by shared semantic features, providing a principled way to categorize lexical items according to conceptual domains. For example, terms in the body-parts field, such as "arm" and "leg," are grouped by common features like [+bodily, +extremity, +movable], which distinguish them from internal organs like "heart" ([+bodily, +internal, -movable]). This organization highlights how features not only define individual meanings but also delineate broader lexical networks, aiding the prediction of word co-occurrence and thematic coherence in discourse.

In word-formation processes, particularly derivational morphology, semantic features play a crucial role in altering base meanings through affixation. The prefix "un-," for instance, typically negates or reverses a gradable feature in adjectives, transforming "happy" (characterized by [+positive emotion]) into "unhappy" ([-positive emotion]), thereby creating antonymic derivations that preserve the core lexical category while inverting the specified attribute. This mechanism underscores the lexicon's productivity, where feature manipulation generates novel yet interpretable entries.

Computational extensions of semantic features have been integral to projects like WordNet, developed in the 1990s at Princeton University, which structures the English lexicon into synsets, groups of near-synonyms representing discrete concepts, implicitly relying on shared features to encode relations such as hyponymy and meronymy. These synsets provide machine-readable representations of lexical knowledge, enabling applications in natural language processing by approximating human-like feature overlap for tasks like word-similarity computation.

A key application of feature-based representations appears in models of lexical access, as outlined by Levelt (1989), where ambiguity resolution during speech production occurs through the activation of, and competition among, the semantic features associated with candidate lemmas. When a concept is selected, its features spread to activate matching lexical entries, resolving homonymy or near-synonymy by selecting the lemma whose feature set best aligns with the contextual intent, thus minimizing selection errors in real-time processing.
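
A toy version of feature-driven lemma selection in the spirit of the Levelt-style account might look like the following sketch; the overlap-based activation score is an illustrative simplification, not Levelt's actual model:

```python
# Toy sketch of lemma selection: each candidate lemma is scored by
# overlap with the active conceptual features; scoring is an assumption.

LEMMAS = {
    "couch": {"artifact", "furniture", "for-sitting", "soft"},
    "sofa":  {"artifact", "furniture", "for-sitting", "soft"},
    "chair": {"artifact", "furniture", "for-sitting"},
}

def select_lemma(active_features):
    """Return the lemma whose features best match the activated concept."""
    def activation(lemma):
        feats = LEMMAS[lemma]
        # reward matched features, penalize unmatched lemma features
        return len(feats & active_features) - len(feats - active_features)
    return max(LEMMAS, key=activation)

print(select_lemma({"artifact", "furniture", "for-sitting", "soft"}))
# 'couch' and 'sofa' tie here, mirroring the near-synonym problem
```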

In Syntax and Morphology

Semantic features play a crucial role in determining argument structure within syntactic frameworks, particularly through their association with theta-roles that assign semantic interpretations to arguments. In causative constructions, for instance, the feature [+agent], or related proto-agent properties such as [+cause], identifies the external argument as the initiator of the event, ensuring that it is realized as the subject to satisfy the theta criterion. This is evident with verbs like "break," where the causer outranks the patient owing to its embedding in a causative event structure [x CAUSE [BECOME [y STATE]]], preventing instruments from surfacing as subjects when agents are present.

Agreement phenomena further illustrate the interface between semantics and morphosyntax, where semantic features like gender and number propagate through syntactic heads to trigger agreement on adjectives, verbs, and pronouns. In languages with grammatical gender, such as Spanish, natural gender features (e.g., [+feminine] for female referents) interact with grammatical gender, compelling morphological marking on agreeing elements; for example, "la casa blanca" (the white house) uses feminine agreement despite the noun's inanimate semantics, but animate nouns like "la mujer" enforce [+feminine] on the basis of referent sex. This semantic conditioning ensures that agreement reflects underlying referential properties, with violations leading to processing costs in real-time comprehension.

Case assignment is modulated by animacy features, which influence morphological realization in case-marking systems. In languages with split ergativity conditioned by the person/animacy hierarchy, the feature [+human] or [+animate] promotes different case patterns for core arguments: higher-ranked nominals (such as speech-act participants) often follow nominative-accusative alignment, while lower-ranked inanimates adhere to ergative-absolutive patterns, reflecting a semantic hierarchy in which higher animacy correlates with subject-like treatment.

Subcategorization frames encode semantic restrictions on complements, ensuring compatibility via features like [+animate]. The verb "fear," for example, subcategorizes for a direct object bearing [+animate] or [+sentient], as in "She fears the lion" but not "*She fears the rock," where the inanimate object fails the selectional requirement tied to the experiencer's emotional relation to a conscious entity. Such frames link lexical semantics to syntactic projection, preventing ill-formed structures.

In polysynthetic languages, semantic features underpin morphological fusion under Baker's incorporation theory, where nouns incorporate into verbs on the basis of theta-role compatibility, deriving complex words that encode argument structure without independent NPs. For instance, in Mohawk, a form glossed as "I-see-house" fuses the noun "house" into the verbal complex, driven by semantic features that license head movement while preserving argument structure. This mechanism highlights how semantic properties motivate morphological incorporation across the syntax-morphology interface.
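
Selectional restrictions of the kind encoded in subcategorization frames can be checked mechanically, as in this sketch; the verb frames and noun entries are assumed for illustration:

```python
# Sketch of subcategorization-style selectional checking: each verb frame
# lists required features for its arguments; entries are illustrative.

NOUN_FEATURES = {
    "lion": {"+animate", "+sentient"},
    "rock": {"-animate"},
    "man":  {"+animate", "+human", "+sentient"},
}

VERB_FRAMES = {
    "fear": {"subject": {"+sentient"}, "object": {"+sentient"}},
    "eat":  {"subject": {"+animate"}, "object": set()},
}

def check(verb, subject, obj):
    """Accept the clause only if every selectional restriction is met."""
    frame = VERB_FRAMES[verb]
    return (frame["subject"] <= NOUN_FEATURES[subject]
            and frame["object"] <= NOUN_FEATURES[obj])

print(check("fear", "man", "lion"))  # True:  'She fears the lion'
print(check("fear", "man", "rock"))  # False: '*She fears the rock'
print(check("eat", "rock", "man"))   # False: '*The rock ate the man'
```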

Criticisms and Alternatives

Key Limitations

One major limitation of the semantic feature approach lies in its assumption of universality, which fails to account for cultural and linguistic variability in how concepts are categorized. For instance, analyses of color terms across languages reveal that basic color distinctions do not follow a uniform binary structure, as some languages encode fewer or different focal colors than others, challenging the idea of innate, universal features like [+red] or [-blue]. This variability arises because color categorization evolves in predictable but non-binary stages influenced by environmental and cultural factors, rather than from fixed atomic features applicable to all languages.

Semantic features also struggle with the vagueness and gradience inherent in many categories, where binary oppositions such as [+/-tall] cannot adequately represent fuzzy boundaries or degrees of membership. Prototype theory highlights this issue by demonstrating that category membership is graded by similarity to central exemplars rather than determined by strict feature checklists; for example, "bird" includes typical instances like robins more readily than atypical ones like penguins, defying binary decompositions that predict all-or-nothing inclusion. This gradience leads to imprecise predictions about word meaning, as features overlook the probabilistic and context-dependent interpretations prevalent in everyday language use.

Another key drawback is overgeneration, where the combinatorial nature of features predicts a vast array of possible meanings that correspond to no actual lexical items or usages in any language. Decompositional models, by treating meanings as bundles of independent primitives, generate non-occurring senses, such as a hypothetical word combining [+animate, +feline, -mammal], without mechanisms to constrain implausible combinations, resulting in semantically incoherent outputs that exceed observed linguistic reality. This problem underscores the approach's inability to incorporate lexical gaps or idiomatic constraints effectively.

Empirical validation of semantic features faces significant challenges, particularly from psycholinguistic experiments in the 1980s that revealed inconsistent evidence for feature-based processing. Studies of semantic priming showed that activation patterns often align more with associative strengths or holistic word meanings than with predicted feature overlaps; for example, priming between related concepts like "doctor" and "nurse" occurs reliably but not always through shared features like [+human, +profession], suggesting distributed or network representations over atomic decomposition. These findings indicate that features are difficult to test psychologically, as behavioral data frequently fail to support their predicted role in real-time language comprehension.

In the 1970s, Charles Fillmore's development of frame semantics further exposed the atomistic limitations of semantic features by emphasizing holistic, scenario-based meaning structures over isolated primitives. Fillmore critiqued decompositional approaches for reducing complex understandings to disjointed elements, arguing that comprehension relies on evoking integrated frames, coherent structures like "buying" that link participants and events, rather than on binary traits; this reveals how feature-based decomposition neglects the relational and contextual dependencies essential to semantic interpretation.

Competing Approaches

Prototype theory, developed by Eleanor Rosch, posits that semantic categories are organized around central prototypes or exemplars rather than strict checklists of necessary and sufficient features, allowing for gradience in category membership where items vary in typicality. This approach challenges traditional componential analysis by emphasizing fuzzy boundaries and family resemblances, where peripheral members share overlapping attributes with prototypes but lack a uniform feature decomposition. In handling polysemy, prototype theory employs radial categories, structuring multiple related senses around a core prototype connected by motivated extensions, such as the central biological sense of "mother" extending to adoptive or foster roles through experiential links, providing a more flexible account than rigid feature lists.

Frame semantics, introduced by Charles Fillmore, models meaning through event-based frames, structured knowledge scenarios evoked by words, that integrate contextual elements instead of isolated atomic features, capturing how words derive significance from relational roles within larger conceptual structures. For instance, the verb "buy" activates a commercial-transaction frame involving buyer, seller, goods, and money, emphasizing situated understanding rather than inherent properties. Regarding polysemy, frame semantics addresses variability by tying senses to frame-specific contexts, so that "run" may evoke motion, machinery-operation, or perceptual frames depending on the scenario, offering a holistic alternative to componential methods that struggle with context-dependent shifts.

Conceptual metaphor theory, as articulated by George Lakoff and Mark Johnson, views meaning as arising from embodied conceptual mappings between source and target domains, contrasting with discrete feature decompositions by grounding semantics in systematic, experiential correlations rather than static attributes. Metaphors like "argument is war" project inferences from physical conflict onto argumentation (e.g., "defending a position"), structuring abstract thought through bodily knowledge. This better accommodates polysemy by unifying related senses under a single metaphorical mapping, as in "love is a journey," where expressions like "hitting a dead-end" or "reaching a crossroads" derive from journey-domain projections, avoiding the fragmentation of feature-based analyses.

Distributional semantics, originating with John R. Firth's principle that "you shall know a word by the company it keeps," represents meanings as vectors in high-dimensional spaces derived from corpus co-occurrences, serving as a data-driven alternative to manual feature specification. Modern implementations, such as word embeddings, capture semantic relations through contextual similarities without predefined decompositions. For polysemy, distributional models excel at inducing sense alternations from usage patterns, such as animal-food shifts (e.g., "chicken" as animal or as meat), using unsupervised vector clustering to generalize regular patterns across the vocabulary more scalably than feature checklists, which require exhaustive sense inventories.
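
Firth's distributional principle can be illustrated with a minimal co-occurrence model; the toy counts below are invented purely for demonstration:

```python
# Minimal distributional sketch: words as co-occurrence count vectors,
# similarity as the cosine of the angle between them; counts are invented.

import math

COOCCURRENCE = {            # counts over context words: eat, bark, money
    "chicken": [8, 0, 1],
    "duck":    [6, 0, 1],
    "bank":    [0, 0, 9],
}

def cosine(u, v):
    """Cosine similarity between two count vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

print(cosine(COOCCURRENCE["chicken"], COOCCURRENCE["duck"]))  # high (~1.0)
print(cosine(COOCCURRENCE["chicken"], COOCCURRENCE["bank"]))  # low (~0.12)
```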

References

  1. [1]
    [PDF] Semantic features
Semantic feature analysis can be found in Bever & Rosenbaum (1971) and ... In the previous chapter, we concentrated on meaning in language as a product of the ...
  2. [2]
    [PDF] 19 LEXICAL SEMANTICS - Stanford University
    Lexical semantics is the study of word meaning, using a lexeme (form and meaning) and a lemma (citation form) to represent it.
  3. [3]
    [PDF] Logical Structures in the Lexicon - ACL Anthology
    Langages 64, 39-64. Katz, Jerrold J., & Jerry A. Fodor (1963) "The structure of a semantic theory," Language.
  4. [4]
    [PDF] An Overview of Lexical Semantics - UC Irvine
    Abstract. This article reviews some linguistic and philosophical work in lexical semantics. In. Section 1, the general methods of lexical semantics are explored ...
  5. [5]
    [PDF] SEMANTIC FEATURES AND SELECTION RESTRICTIONS
    Semantic features, like [+Speech act verb], are used to predict selection restrictions and are related to semantic agreement, and are used in a broader sense ...
  6. [6]
    [PDF] A journal of languages and linguistics - ResearchGate
    Keywords: Language, Meaning, Componential Analysis, Semantics, Pragmatics. ... The English Language Origin, History and Varieties. Awka I.N.. Okoro ...
  7. [7]
    Semantics - The Study of Language
    Semantics is the study of the meaning of words, phrases and sentences. In semantic analysis, there is always an attempt to focus on what the words ...
  8. [8]
    [PDF] Study on Markedness in Linguistics - Semantic Scholar
    This paper intends to explore markedness from both formal and semantic perspectives: formal markedness, distributional markedness, and semantic markedness.
  9. [9]
    [PDF] The Structure of a Semantic Theory
    Jan 22, 2014 · Such lexical items have special theoretical significance: they are the natural language's representation of semantic categories. Cf. J. J. Katz ...
  10. [10]
  11. [11]
    (PDF) Opposition theory and the interconnectedness of language ...
Aug 5, 2025 · It was implicit in Saussure's (1916) own principle of différence. In the 1930s the Prague School linguists (Trubetzkoy 1936, 1939; Jakobson ...
  12. [12]
    [PDF] The Prague School's Early Concept of Distinctive Features in ...
    Based on the works written by Jakobson,. Trubetzkoy, Vachek and other circle members during 1931 to. 1939 in English, French, German and Czech, this essay ...
  13. [13]
    (PDF) Roman Jakobson and the birth of linguistic structuralism
    Aug 5, 2025 · The term “structuralism” was introduced into linguistics by Roman Jakobson in the early days of the Linguistic Circle of Prague, ...
  14. [14]
    [PDF] Roman Jakobson - Verbal Art, Verbal Sign, Verbal Time - Monoskop
    Prague Linguistic Circle, I have pleaded for the removal of the alleged antinomy ... The semantic invariants of the Russian cases are perhaps one of the most.
  15. [15]
    [PDF] Structural Anthropology - Monoskop
    Lévi-Strauss's anthropology emphasizes the close relationship between field work and theory, between the description of social phenomena and structural analysis ...
  16. [16]
    [PDF] Prolegomena to a Theory of Language by Louis Hjelmslev
    Jan 22, 2014 · semantic features (roughly equivalent to H content-figurae) has been made, although his procedure and conclusions remain controversial. The ...
  17. [17]
    A source of inspiration to Systemic Functional Linguistics?
    Aug 10, 2025 · Hjelmslev's Glossematics: A source of inspiration to Systemic Functional Linguistics? September 2010; Journal of Pragmatics 42(9):2562-2578.
  18. [18]
    Aspects of the Theory of Syntax
Summary of Semantic Features and Lexical Insertion Rules in the 1965 Model
  19. [19]
    On Aspects of the Theory of Syntax | Anna Maria Di Sciullo | Inference
    Chomsky also proposed to distinguish between categorical and semantic selectional features. A verb like frighten requires a [+ animate] object; not so, a ...
  20. [20]
    Generative Semantics 2: The Heresy
Generative Semanticists had extended syntactic derivations deeper, diminished the lexicon, and enriched the scope of transformations. The lectures emphasized ...
  21. [21]
    [PDF] Generative Semantics - ERIC
The earliest generative treatment of semantics, Katz and Fodor's 1963 paper, 'The structure of a semantic theory', ... The Katz-Fodor interpretive theory contains ...
  22. [22]
    The Arguments about Deep Structure - jstor
Deep structure, surface structure and semantic interpretation. (Citations as published in Chomsky 1972.)
  23. [23]
    [PDF] Interpretive vs. Generative Semantics – Two ways of modeling ...
    Within the paradigm of main-stream generative theory, two basic modes of inquiry can be distinguished: interpretive semantics (IS) and generative semantics (GS) ...
  24. [24]
    (PDF) Generative Semantics - ResearchGate
    Aug 6, 2025 · Generative semantics is (or perhaps was) a research program within linguistics, initiated by the work of George Lakoff, John R. Ross, Paul Postal and later ...
  25. [25]
    On the Semantic Content of the Notion of 'Thematic Role'
The notion of “thematic roles”, a more modern term for Fillmore's (1968) case relations, Jackendoff's (1972, 1976) and Gruber's (1965) thematic relations, and ...
  26. [26]
    Semantics and Cognition - MIT Press
    This book emphasizes the role of semantics as a bridge between the theory of language and the theories of other cognitive capacities.
  27. [27]
  28. [28]
    (PDF) CHAPTER IV METHODS OF SEMANTIC ANALYSIS 4.1 ...
    Componential analysis aims at discovering and organizing the semantic components of the words. On this view, woman would be analysed as having for its meaning ...
  29. [29]
    The geometry of phonological features* | Phonology | Cambridge Core
    Oct 20, 2008 · The geometry of phonological features*. Published online by Cambridge University Press: 20 October 2008. G. N. Clements.
  30. [30]
    [PDF] Kastovsky: “Privative opposition” and lexical semantics
    As the above examples have already shown, immediate oppositions between lexical items do not at once result in a complete feature specification of these items.
  31. [31]
    [PDF] The logic of words: A monadic decomposition of lexical meaning
    Jun 24, 2022 · kill B (CAUSE (x, (BECOME (NOT(alive x))))). (generative semantics) ... Bare words are decomposed in syntax but not in semantics. Compare ...
  32. [32]
    [PDF] Thematic Proto-Roles and Argument Selection
    LANGUAGE, VOLUME 67, NUMBER 3 (1991). The argument for this position involves discourse structure. Natural lan- guages make use of a variety of grammatical ...
  33. [33]
    [PDF] Lecture Notes Chapter 9, “Componential Analysis”
Levin uses semantic features to describe the interaction between three constructions and four classes of verbs, represented by cut, break, touch, and hit. (1) ...
  34. [34]
    [PDF] A LOGICAL SEMANTICS FOR FEATURE STRUCTURES
While feature matrices seem to be a more appealing and natural notation for displaying linguistic descriptions, logical formulas provide a precise interpre ...
  35. [35]
    [PDF] Formal Semantics - Harvard University
    Introduction. Semantics, in its most general form, is the study of how a system of signs or symbols (i.e., a language of some sort) carries information ...
  36. [36]
    [PDF] Lecture 2. Lambda abstraction, NP semantics, and a Fragment of ...
    Feb 21, 2005 · Syntactic Rules and Semantic Rules. Two different approaches to semantic interpretation of natural language syntax (both compositional, both ...
  37. [37]
    7.4: Componential analysis - Social Sci LibreTexts
Apr 9, 2022 · Componential analysis provides neat explanations for some sense relations. Synonymous senses can be represented as pairs that share all the same components of ...
  38. [38]
    Componential analysis - an overview | ScienceDirect Topics
    Componential analysis is defined as a method for systematically categorizing contrasting terms within a domain by identifying a set of relevant attribute ...
  39. [39]
    Semantic Field Definition and Examples - ThoughtCo
    May 1, 2025 · A semantic field is a group of words that share related meanings or concepts. Examples like blue, red, and yellow show how words are grouped ...
  40. [40]
    [PDF] Un- reveals antonymy in the lexicon Andrew Paczkowski 1 Introduction
    Thus un- creates antonyms from both complementary and gradable adjectives, as in true/untrue and happy/unhappy.
  41. [41]
    WordNet
WordNet is a large lexical database of English. Nouns, verbs, adjectives and adverbs are grouped into sets of cognitive synonyms (synsets), each expressing a ...
  42. [42]
    [PDF] Introduction to WordNet: An On-line Lexical Database - Brown CS
    In WordNet, this binary opposition ... The direction of the entailment may therefore be reversed in such cases, depending on the semantic features of the verb's ...
  43. [43]
    [PDF] A theory of lexical access in speech production
One is what Levelt (1989) has called the hyperonym problem. When a word's semantic features are active, then, per definition, the feature sets for all of its ...
  44. [44]
    [PDF] Semantic Prominence and Argument Realization II The Thematic ...
    One understanding of the derivative status of semantic roles: Semantic roles are labels for argument positions in an “event structure” —a structured ...
  45. [45]
  46. [46]
    The location of gender features in the syntax - Kramer - Compass Hub
    Nov 24, 2016 · In a n-analysis, gender features can be semantically interpretable, so that the same [+fem] feature is interpreted as female and causes feminine ...
  47. [47]
    Introduction to case, animacy and semantic roles - ResearchGate
    Apr 22, 2016 · The chapters of this volume scrutinize the interplay of different combinations of case, animacy and semantic roles, thus contributing to our ...
  48. [48]
    Split ergativity based on nominal type - ScienceDirect
In this paper, we examine how person/animacy-based split case marking is synchronically encoded in the grammar, focussing on the distribution of ergative case.
  49. [49]
    On the semantic content of subcategorization frames - ScienceDirect
This paper investigates relations between the meanings of verbs and the syntactic structures in which they appear.
  50. [50]
    Animacy, Generalized Semantic Roles, and Differential Object Marking
    This chapter addresses the role of case and animacy as interacting cues to role-semantic interpretation in grammar and language processing.
  51. [51]
    Is semantic priming due to association strength or feature overlap? A ...
In W. E. Cooper & E. C. T. Walker (Eds.), Sentence processing: Psycholinguistic studies presented to Merrill Garrett (pp. 27–85). Hillsdale, NJ: Erlbaum.
  52. [52]
  53. [53]
    (PDF) Polysemy, Prototypes, and Radial Categories - ResearchGate
    This article describes four features that are crucial for the cognitive linguistic approach and its relation to polysemy.
  54. [54]
    FRAME SEMANTICS AND THE NATURE OF LANGUAGE* - 1976
Frame semantics and the nature of language. Charles J. Fillmore, Department of Linguistics, University of California, Berkeley
  55. [55]
    (PDF) Some Aspects of Semantic Frames and Meaning
Jul 26, 2025 · It focuses on frame semantics as presenting a systematic description of language meaning and the role of frames in creating conceptual categories.
  56. [56]
    Metaphors We Live By - The University of Chicago Press
    In this updated edition of Lakoff and Johnson's influential book, the authors supply an afterword surveying how their theory of metaphor has developed within ...
  57. [57]
    [PDF] The Contemporary Theory of Metaphor George Lakoff Introduction
    The evidence for the existence of a system of conventional conceptual metaphors is of five types: -Generalizations governing polysemy, that is, the use of words ...
  58. [58]
    Papers in Linguistics : J R Firth - Internet Archive
    Aug 12, 2023 · Papers in Linguistics. by: J R Firth. Publication date: 1957-01-01. Publisher: Oxford University Press. Collection: internetarchivebooks; ...
  59. [59]
    (PDF) Regular polysemy: A distributional model - ResearchGate
Table 1: Notation and signatures for our framework. We decompose the score function into two parts ...