
Principle of compositionality

The principle of compositionality, also known as Frege's principle, states that the meaning of a complex expression is a function of the meanings of its constituent parts and the syntactic rules by which those parts are combined. This core tenet of semantics ensures that linguistic interpretation can be systematically built from simpler elements, enabling speakers to understand and produce an infinite number of novel sentences from a finite lexicon and grammar. Although often attributed to the philosopher Gottlob Frege on the basis of his foundational work on logic and language from 1884, the principle was not explicitly formulated by him but emerges from his ideas on contextual meaning and the structure of thought. It gained formal prominence in the 1970s through the development of model-theoretic semantics by Richard Montague, who treated natural language fragments as formal languages in which syntax and semantics form a homomorphism, preserving compositional structure. Barbara Partee's influential analyses further refined it, addressing ambiguities and context dependencies while emphasizing its role in linking syntax to interpretation. In linguistics and the philosophy of language, compositionality underpins the productivity and systematicity of human language, allowing finite cognitive resources to handle unbounded expressions, as seen in examples like noun phrases with relative clauses ("the boy who loves Mary"), where meaning composes via syntactic rules. Challenges arise from phenomena such as idioms, quantifier scope, and pragmatic context, which sometimes appear non-compositional but can often be resolved by enriching semantic representations, for example by introducing indices for time or discourse referents. Beyond linguistics, the principle extends to computer science, particularly in denotational semantics for programming languages, where the meaning of code constructs (e.g., "x + 1") is defined compositionally from environments and operations, facilitating modular reasoning and verification. In computational linguistics and artificial intelligence, it informs semantic parsing tasks and neural models for compositional generalization, promoting generalization in machine translation and dialogue systems despite tensions with data-driven approaches that may bypass strict compositionality. Overall, compositionality remains a cornerstone for theories of meaning across disciplines, balancing theoretical elegance with empirical complexities.

Definition and Formalization

Core Principle

The principle of compositionality states that the meaning of a complex linguistic expression, such as a phrase or sentence, is determined exclusively by the meanings of its immediate syntactic constituents and the rules used to combine them. This foundational idea ensures that semantic interpretation follows the structure of the syntax in a predictable manner, allowing for the systematic construction of meaning from smaller units like words or morphemes to larger wholes. Without compositionality, understanding would rely on idiosyncratic memorization of every possible combination, rendering communication inefficient and opaque. A straightforward example illustrates this: the phrase "red ball" derives its meaning from the property denoted by "red" (a color attribute) and the object denoted by "ball" (a spherical plaything), combined through a modification rule that specifies how the color applies to the object, resulting in the concept of a ball that is red. Similarly, in a sentence like "The cat chased the mouse," the overall meaning emerges from the meanings of individual words ("cat" as a feline animal, "chased" as a pursuit action, "mouse" as a small rodent) and syntactic rules governing subject-verb-object relations, yielding a complete event description without additional arbitrary elements. These examples highlight how compositionality enables recursive building of expressions, where complex structures can nest indefinitely while preserving interpretability. Syntactic structure provides the blueprint for assembly, while semantic interpretation supplies the content, with the two being interdependent yet distinct: syntax dictates how parts connect (e.g., via adjacency or hierarchical embedding), but semantics assigns meanings independently of non-compositional factors like pragmatic context or speaker intention in the core theory. This separation underscores compositionality's role in formal semantics as a bridge between form and meaning. The idea traces its foundational influence to Gottlob Frege's work on logical analysis, which emphasized systematic meaning construction. Formal mathematical versions of the principle, explored below, build on this intuitive base to model semantic operations precisely.
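To make the intuition concrete, the following minimal sketch interprets "red ball" and "The cat chased the mouse" by pairing each syntactic rule with a semantic rule. The toy domain, the predicate sets, and the rule names are invented for illustration; they are not drawn from any particular theory.

```python
# A minimal, illustrative compositional interpreter. Lexical meanings are
# modeled as sets/relations over a toy domain; each syntactic rule is paired
# with a semantic operation, so the meaning of a whole is computed only from
# the meanings of its parts and the rule that combines them.

BALLS = {"b1", "b2"}
RED = {"b1", "cherry"}
CATS = {"felix"}
MICE = {"m1"}
CHASED = {("felix", "m1")}  # pairs of (agent, patient)

lexicon = {"ball": BALLS, "red": RED, "cat": CATS,
           "mouse": MICE, "chased": CHASED}

def modify(adj, noun):
    """Semantic rule for Adj+N: intersect the two properties."""
    return adj & noun

def transitive(subj, verb, obj):
    """Semantic rule for NP V NP: true iff some subject-object pair
    stands in the verb relation."""
    return any((s, o) in verb for s in subj for o in obj)

# "red ball": [[red]] combined with [[ball]] by the modification rule
print(modify(lexicon["red"], lexicon["ball"]))                          # {'b1'}
# "the cat chased the mouse": composition via the transitive rule
print(transitive(lexicon["cat"], lexicon["chased"], lexicon["mouse"]))  # True
```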

Mathematical Formulations

The principle of compositionality can be formally defined for a language L with a specified syntax as follows: a semantics [\![\cdot]\!] is compositional if, for any complex expression formed by combining subexpressions \alpha and \beta via syntactic rule R, the meaning [\![\alpha \beta]\!] equals F([\![\alpha]\!], [\![\beta]\!], R), where F is a fixed function independent of the particular expressions involved. This ensures that the interpretation of the whole depends solely on the interpretations of the parts and the mode of syntactic combination, without reference to extraneous factors. An alternative formulation expresses compositionality via a meaning function M such that for any complex expression s composed of parts p_1, \dots, p_n under syntactic operation \sigma, M(s) = g(M(p_1), \dots, M(p_n), \sigma), where g depends only on the operation \sigma and the meanings of the parts, not on their identities. This recursive definition aligns with homomorphic mappings in formal semantics, where semantic operations mirror syntactic ones. In Montague grammar, compositionality is realized through function application within a type-theoretic framework, where denotations are assigned types and interpreted relative to models, often incorporating indices for context (e.g., worlds and times). Syntactic rules are paired with corresponding semantic rules: if a syntactic rule states that expressions of categories A and B combine to form category C, the semantic rule specifies that their denotations a' (of the type corresponding to A) and b' (of the type corresponding to B) combine via a function to yield a denotation of the type for C, typically through application where one argument is a function. Basic types include e for entities (individuals) and t for truth values, with complex types like \langle e, t \rangle for predicates (functions from entities to truth values); denotations may be intensional, as functions from indices (e.g., of type \langle s, t \rangle, where s is the type of situations or world-time indices) to truth values. For the simple sentence "John runs," the derivation proceeds via function application as follows. The proper name "John" denotes an individual j of type e, while the intransitive verb "runs" denotes a predicate \lambda x_e . run'(x) of type \langle e, t \rangle, where run' is a characteristic function mapping runners to truth values in a model. Applying the verb to the name yields [\![\text{John runs}]\!] = run'(j), a proposition of type t that evaluates to true if j satisfies run' at the relevant index and false otherwise; this application is governed by the semantic rule paired with the syntactic merger of noun phrase and verb phrase. In intensional variants, "runs" has type \langle s, \langle e, t \rangle \rangle, and the result is run'(j) evaluated at a world-time index.
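The "John runs" derivation can be sketched in code as follows. This is a toy rendering under obvious simplifications: type e is modeled as Python strings, type \langle e, t \rangle as functions to booleans, and the model (the set RUNNERS) is invented for the example.

```python
# Illustrative sketch of the "John runs" derivation. Entities (type e) are
# strings; <e,t> denotations are functions from entities to booleans; the
# sentence value (type t) results from applying [[VP]] to [[NP]].

RUNNERS = {"john", "sue"}              # the model fixes the function run'

john = "john"                          # [[John]] : e
runs = lambda x: x in RUNNERS          # [[runs]] : <e,t>, i.e. λx.run'(x)

# Semantic rule paired with the NP+VP merger: apply [[VP]] to [[NP]].
sentence_value = runs(john)            # [[John runs]] : t
print(sentence_value)                  # True in this model
```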

Historical Development

Philosophical Origins

The philosophical origins of the principle of compositionality emerged in early 19th-century German philosophy of logic, where it was debated alongside contextuality. Adolf Trendelenburg, in his Logical Investigations (1840), explored both approaches, ultimately favoring contextuality as more fundamental for understanding meaning. Hermann Lotze, in Logic (1874), balanced compositionality and contextuality depending on logical aims. Wilhelm Wundt, in his Logic (1880), initially emphasized contextuality but shifted toward compositionality in later editions (1893) for practical reasons. These discussions highlighted tensions between holistic and part-based accounts of meaning, setting the stage for formal developments. In the mid-19th century, George Boole advanced logical foundations through his algebraic treatment of logic in The Laws of Thought (1854), modeling logical relations as operations on classes—such as addition for disjunction and multiplication for conjunction. This allowed complex expressions to be derived systematically from simpler ones, treating logic as an algebraic calculus that preserved truth values and influenced later formal systems. Gottlob Frege advanced these ideas significantly in his Begriffsschrift (1879), introducing a novel notation based on a function-argument structure for logical expressions, which treated propositions not as subject-predicate forms but as saturations of unsaturated functions by arguments. For example, the sentence "Caesar conquered Gaul" could be analyzed with "conquered" as the function and "Caesar" and "Gaul" as arguments, enabling the compositional breakdown and reconstruction of complex thoughts through recursive application of this structure. This innovation allowed for the formal representation of generality and inference in a way that underscored how the content of a whole expression depends on its syntactic parts and their combinatorial rules. Frege further elaborated on the philosophical underpinnings of compositionality in The Foundations of Arithmetic (1884), where he introduced the context principle: never to ask for the meaning of a word in isolation, but only in the context of a sentence. This implies that the significance of subsentential elements emerges from their role in determining the truth conditions of the entire sentence, thereby linking individual meanings to the compositional rules governing their combination and establishing a holistic yet structured semantics.

Linguistic and Semantic Evolution

In 1933, Alfred Tarski introduced a recursive definition of truth for formalized languages, defining the truth conditions of complex expressions recursively through the truth conditions of their immediate constituents and the rules governing their syntactic combination. This recursive method ensured that the satisfaction of a formula under a given assignment depended solely on the satisfaction of its parts, establishing compositionality as a foundational constraint for semantic adequacy in formal systems. Tarski's T-schema, exemplified by the equivalence "'Snow is white' is true if and only if snow is white," extended this to molecular sentences, where truth is preserved through structural rules like conjunction and negation. Rudolf Carnap built upon this framework in his 1937 work The Logical Syntax of Language by applying compositional principles to constructs approximating natural language within logical systems. Carnap argued that the "logical syntax" of a language determines its semantic properties, such that the designation (or meaning) of a complex expression is a function of the designations of its components via formation rules. For instance, in his pseudo-natural-language examples, the intension of a compound like "the morning star is the evening star" derives compositionally from the intensions of its parts, bridging formal logic and linguistic structure without invoking non-syntactic factors. This adaptation emphasized tolerance in language construction, allowing multiple syntactic frameworks while maintaining compositional integrity for semantic interpretation. During the 1950s, Noam Chomsky's development of generative grammar further integrated compositionality into linguistic theory, particularly through the phrase structure rules of Syntactic Structures (1957). These rules generate hierarchical representations of sentences, enabling semantic interpretation to proceed compositionally from atomic elements to full structures, as in the analysis of noun phrases where modifiers combine with heads to yield derived meanings. Chomsky viewed syntax as autonomous yet providing the scaffold for semantics, with transformations preserving underlying semantic relations, thus refining Tarski's and Carnap's ideas for descriptive adequacy in grammars. This syntactic-semantic linkage emphasized recursion, allowing infinite sentence generation while ensuring meanings compose predictably. In the 1970s, Richard Montague advanced this evolution by formalizing compositionality within universal grammar, treating natural language fragments as intensional logics amenable to lambda abstraction. In his 1970 paper "Universal Grammar," Montague proposed a direct mapping between syntactic categories and semantic types, where the interpretation of a phrase like "every man runs" results from applying lambda operators to quantify over the denotations of its parts. This homomorphism ensured that semantic values compose via function-argument application, extending Carnap's intensions to handle opacity and scope, and solidifying compositionality as a core tenet of formal semantics for empirical linguistic analysis. Montague's approach, building on Frege's context principle, demonstrated that natural languages could be analyzed with the same rigor as formal ones, without substantive theoretical differences.

Theoretical Justifications

Arguments in Favor

One key theoretical justification for the principle of compositionality stems from reverse compositionality, which asserts that the meanings of the constituent parts of a complex expression are uniquely determined by the meaning of the whole. In finite languages, this unique determination implies that the semantics must be compositional, as non-compositional assignments would lead to ambiguities in decomposing complex meanings back to their parts. Another argument emphasizes expressive completeness, where compositionality enables the generation of an unbounded array of meanings from a finite lexicon and a finite set of syntactic rules, thereby accounting for the productivity observed in semantic systems. This ensures that novel complex expressions can be meaningfully interpreted without requiring an infinite number of learned meanings. A further argument supports compositionality by showing that, in models satisfying both productivity (unbounded meaning generation) and systematicity (consistent interpretation of structural relations across expressions), only compositional semantics adequately fulfills these requirements, as non-compositional alternatives fail to preserve structural inferences across novel combinations. Janssen's 1997 theorem highlights non-uniqueness in non-standard semantic models, where arbitrary meaning assignments can be reformulated compositionally by adjusting syntax or interpretations; however, in standard finite-language settings with conventional homomorphic semantics, compositionality remains the uniquely viable framework for ensuring consistent meaning determination.

Supporting Evidence from Language Use

One key piece of evidence for the principle of compositionality comes from the productivity of natural languages, which allows speakers to comprehend and generate an unlimited number of novel utterances using a finite stock of lexical items and rules. This is vividly illustrated by recursive embedding, where clauses can be nested indefinitely, such as in the sentence "The rat the cat chased died," which can be extended to "The rat the cat the dog scared chased died" without loss of grammaticality or interpretability. Such structures demonstrate how meanings combine systematically to yield predictable interpretations for arbitrarily complex expressions. Systematicity provides further observational support, as linguistic knowledge exhibits consistent patterns where understanding one construction implies comprehension of related ones through substitution or recombination of elements. For instance, a speaker who grasps the meaning of "John loves Mary" can immediately understand "Mary loves John" or "John loves the book," reflecting the compositional recombination of roles and arguments. This property holds across sentence types, enabling inferences about novel combinations based on prior knowledge of components, as evidenced in psycholinguistic experiments where participants reliably interpret permuted relational statements. Empirical studies from child language acquisition reinforce compositionality by showing that young children parse and produce utterances in a rule-based, combinatory manner from an early age. For example, toddlers aged 2 to 3 years demonstrate the ability to combine words into meaningful phrases, such as agent-action-object sequences, indicating an innate bias toward compositional rather than rote learning. Experimental paradigms, including preferential looking tasks, reveal that even 14-month-olds correctly interpret simple compositional sentences like noun-verb combinations, suggesting early emergence of meaning assembly from parts. These findings align with longitudinal data tracking how children generalize from heard examples to unencountered forms, prioritizing syntactic and semantic rules. Cross-linguistic evidence underscores the universality of compositionality, as it manifests in typologically diverse languages including English, Japanese, and sign languages. In Japanese, an agglutinative language with complex morphology, honorific expressions and particle combinations yield meanings predictably from their constituents, as shown in semantic analyses of verb forms and case markers. Similarly, sign languages like American Sign Language exhibit compositional syntax, where classifiers and spatial modifiers combine to form interpretable descriptions of motion and location, paralleling spoken-language productivity. This consistency across modalities and structures—head-initial in English, head-final in Japanese, and visuospatial in sign languages—indicates that compositionality is a fundamental feature of human language, supported by comparative linguistic corpora and acquisition studies in these systems.

Applications

In Formal Semantics

In formal semantics, the principle of compositionality is foundational to truth-conditional, model-theoretic approaches, where the meaning of a complex expression, such as a sentence, is derived systematically from the meanings of its syntactic constituents through function application. Sentence denotations are typically functions from models (structures interpreting the lexicon) to truth values in {true, false}, ensuring that the denotation of a sentence S, denoted [[S]], equals the application of the verb phrase denotation to the subject denotation: [[S]] = [[VP]]([[NP]]). This approach treats natural language fragments as formal languages, with syntactic rules paired directly with semantic rules that preserve compositionality, allowing meanings to be built recursively without reference to non-syntactic structure. To handle predicate-argument structures while maintaining compositionality, formal semantics integrates elements of the lambda calculus, representing meanings as lambda terms that abstract over variables. For instance, an intransitive verb like "run" denotes a lambda abstract λx.run(x), where x ranges over entities, which applies to a proper name like "John" to yield run(John), a proposition that is true or false in a given model. This abstraction enables flexible composition: intransitive verbs and adjectives denote functions from entities to truth values, while lambda conversion ensures that complex expressions reduce to their semantic values without violating the principle. Quantifiers pose a challenge to direct function application, as noun phrases like "every dog" must combine with verb phrases denoting predicates. In Montague's framework, such quantified NPs are treated as higher-order functions that take a predicate (a lambda abstract) as argument; for "every dog barks," the interpretation is [[every dog]]([[barks]]), where [[every dog]] maps sets to truth values based on generalized quantifier semantics (true if the set of dogs is a subset of the set of barkers). To resolve scope ambiguities and enable surface compositionality, type raising lifts simple NPs to quantifier types, allowing them to function as arguments to predicates or operators, thus preserving the compositional derivation of meanings like wide or narrow scope readings. Extensions of Montague grammar to intensional contexts, such as propositional attitude reports (e.g., "John believes that Mary runs"), incorporate possible worlds semantics to account for opacity and hyperintensionality while upholding compositionality. Here, sentences denote propositions—functions from possible worlds (or world-time pairs) to truth values—and attitude verbs like "believe" denote relations between individuals and sets of worlds (propositions) accessible to the agent; thus, [[John believes S]] holds if the intension of S is true in all worlds compatible with John's beliefs. This ensures that embeddings under attitude operators compose meanings via functions over indices, maintaining the principle without ad hoc adjustments for context-dependence.
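The generalized-quantifier treatment can be sketched as follows. The toy model and the function names are invented for illustration, not taken from Montague's fragment; the point is only that the quantified NP denotes a function from predicates to truth values.

```python
# Illustrative generalized-quantifier semantics. Predicates (type <e,t>)
# are modeled as sets of entities; a quantified NP like "every dog" is a
# higher-order function (type <<e,t>,t>) from predicates to truth values.

DOGS = {"fido", "rex"}
BARKERS = {"fido", "rex", "felix"}

def every(restrictor):
    """[[every N]]: maps a predicate P to True iff N is a subset of P."""
    return lambda predicate: restrictor <= predicate

def some(restrictor):
    """[[some N]]: maps a predicate P to True iff N and P overlap."""
    return lambda predicate: bool(restrictor & predicate)

every_dog = every(DOGS)       # [[every dog]] : <<e,t>,t>
print(every_dog(BARKERS))     # [[every dog barks]] -> True
print(some(DOGS)(BARKERS))    # [[some dog barks]]  -> True
```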

In Computational Linguistics and AI

In computational linguistics, the principle of compositionality has been operationalized through models that integrate distributional semantics with structured combinations of word representations. A foundational approach is the compositional distributional model proposed by Coecke, Sadrzadeh, and Clark, which employs tensor products to compose meanings of phrases from their constituents. For instance, in their framework, the meaning of an adjective-noun compound like "red frog" is derived by applying a tensor representing "red" to the vector for "frog," enabling the model to capture relational semantics in a vector space while preserving compositional structure. This method, often illustrated with examples involving relational words such as verbs or adjectives interacting with nouns, has influenced subsequent work on semantic spaces by providing a mathematically rigorous way to handle phrase-level meanings beyond simple averaging or concatenation. Neural network architectures, particularly Transformers introduced after 2017, have advanced compositionality by leveraging attention mechanisms to dynamically combine token representations in a way that implicitly supports hierarchical and relational structures in language. The self-attention layers in Transformers allow for parallel computation of dependencies across sequences, facilitating the emergence of compositional generalizations without explicit recursive modules. For example, attention heads can attend to specific syntactic or semantic roles, enabling the model to compose meanings incrementally, as demonstrated in analyses showing Transformers' ability to handle novel combinations in tasks requiring multi-step reasoning. This implicit enforcement has made Transformers central to modern natural language processing systems, where compositionality arises from the model's capacity to weight and integrate contextual information flexibly. In natural language inference (NLI) tasks, compositionality is incorporated via fine-tuned models like BERT, where bidirectional training captures syntactic and semantic dependencies to support robust entailment detection in premise-hypothesis pairs involving complex structures. Fine-tuning on datasets like SNLI enables the model to generalize to unseen combinations by leveraging learned representations that align with compositional principles, as evidenced by improved accuracy on benchmarks testing structural generalizations. This approach has proven effective in reducing reliance on spurious correlations, promoting robust entailment judgments through integration of contextual and relational information. Recent advancements in the 2020s have focused on hybrid neuro-symbolic models that combine large language models (LLMs) with rule-based compositional systems to enhance compositional generalization, particularly in handling recursion and novel structures. These models augment LLMs' pattern matching with symbolic rules for recursive composition, enabling better generalization to unseen linguistic combinations. For instance, neural-symbolic stack machines parse inputs into stack-based operations that enforce compositional rules, improving performance on recursive tasks like nested dependencies in semantic parsing. Such hybrids address LLMs' limitations in systematic generalization by integrating rule-based mechanisms, as shown in frameworks that achieve higher accuracy on compositionally challenging benchmarks compared to purely neural baselines.
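A minimal numerical sketch of tensor-based composition follows. The vector for "frog" and the matrix for "red" are made up for illustration (real models learn them from corpora); the point is that the adjective acts as a linear map on the noun rather than being averaged with it.

```python
import numpy as np

# In tensor-based compositional distributional semantics, a noun is a
# vector and an adjective is a second-order tensor, i.e. a matrix;
# composition is matrix-vector multiplication rather than averaging.

frog = np.array([0.9, 0.1, 0.4])        # toy vector for "frog"

# Toy matrix for "red": re-weights the noun's features, e.g. boosting a
# hypothetical color dimension (index 1) while preserving the others.
red = np.array([
    [1.0, 0.0, 0.0],
    [0.0, 2.0, 0.0],
    [0.0, 0.0, 1.0],
])

red_frog = red @ frog                   # [[red frog]] = "red" applied to "frog"
print(red_frog)                         # [0.9 0.2 0.4]
```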

Critiques and Challenges

Non-Compositional Elements

Idioms represent a prominent class of linguistic expressions that challenge the principle of compositionality, as their overall meaning cannot be straightforwardly derived from the meanings of their individual components. For instance, the English idiom "kick the bucket," which means "to die," does not involve any literal kicking of a bucket; instead, the phrase functions as a fixed unit with an arbitrary, non-predictable semantics. This non-decomposability suggests that idioms are processed through hybrid mechanisms involving both holistic retrieval and compositional analysis in the mental lexicon, rather than purely one or the other. Coordinate structures introduce ambiguities that further complicate compositional interpretation, as the meanings of conjuncts do not always combine predictably without additional syntactic or semantic mechanisms. Consider the phrase "old men and women," which can mean either "old (men and women)"—where "old" modifies the entire coordinated phrase—or "(old men) and women," where the modifier attaches only to "men." This scope ambiguity arises because coordination does not enforce a unique way to project meanings from parts to whole, requiring context or extra-grammatical resolution to disambiguate. Context-dependent meanings, such as those expressed by indexicals like "I" or "now," also deviate from strict compositionality by relying on external contextual factors beyond syntactic structure and lexical semantics. The referent of "I" shifts to the speaker in each utterance, making the sentence's truth conditions dependent on the context of use rather than solely on the composition of its elements; similarly, "now" denotes the time of utterance. This introduces a layer of variability where the same syntactic form yields different interpretations, necessitating a two-tiered semantic framework that separates character (context-invariant meaning) from content (context-dependent interpretation). Empirical evidence from neuroimaging supports the holistic processing of idioms, distinguishing it from compositional literal language comprehension. Functional MRI studies reveal that idioms elicit greater bilateral activation in prefrontal and temporal regions, including the left inferior frontal gyrus and right superior temporal gyrus, compared to literal phrases, suggesting retrieval as integrated wholes rather than piecemeal assembly. For example, during idiom comprehension, enhanced frontotemporal connectivity in the medial prefrontal cortex indicates a specialized network for selecting non-literal meanings holistically. These patterns contrast with literal sentence processing, which activates more localized parietal areas, underscoring the non-compositional demands of idiomatic expressions.
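The coordination ambiguity can be made concrete with a sketch in which the two bracketings correspond to two compositional derivations over the same words. The toy sets and rule names are invented for the example; note that each reading is itself fully compositional, so the non-determinism lies in the syntax, not in any single word's meaning.

```python
# Two syntactic derivations of "old men and women", each interpreted
# compositionally, yield different meanings; the ambiguity lies in the
# structure, not in the lexical items.

OLD = {"al", "ann"}           # old individuals in a toy domain
MEN = {"al", "bob"}
WOMEN = {"ann", "bea"}

def modify(adj, noun):        # Adj+N: intersection
    return adj & noun

def coordinate(a, b):         # N and N: union
    return a | b

# Reading 1: old (men and women) -> old people of either sex
print(modify(OLD, coordinate(MEN, WOMEN)))   # {'al', 'ann'}
# Reading 2: (old men) and women -> old men plus all women
print(coordinate(modify(OLD, MEN), WOMEN))   # {'al', 'ann', 'bea'}
```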

Responses and Modifications

Linguists and philosophers have proposed several modifications to the principle of compositionality to address apparent non-compositional phenomena, such as idioms, while maintaining its core tenet that meaning is determined by the meanings of parts and their syntactic combination. One key approach involves augmenting formal semantic systems with meaning postulates, which are axioms that constrain possible interpretations and link idiomatic expressions to their literal compositional meanings through additional semantic rules. For instance, Partee outlines how meaning postulates can specify relations between lexical items, extending this to idioms by treating them as cases where the idiomatic interpretation is derived via postulates that relate the whole phrase's meaning to the compositional sum of its parts, such as positing a rule for "kick the bucket" that equates it to dying under specific contextual conditions. This preserves compositionality by integrating non-literal meanings as inferential extensions rather than exceptions. Another response employs underspecification in frameworks like Head-Driven Phrase Structure Grammar (HPSG), where semantic representations allow multiple possible compositional derivations for ambiguous or flexible constructions, with context selecting the appropriate interpretation. In HPSG, this involves representing syntactic and semantic structures with partial constraints, enabling phenomena like variable constituent order or control in verbs to be handled through underspecified feature structures that generate several valid derivations, resolved pragmatically or by linearization rules. For example, in particle verb constructions, underspecification permits both phrasal and lexical analyses to coexist compositionally, avoiding the need for non-compositional stipulations. This approach maintains compositionality by ensuring all interpretations arise from systematic combination rules, even if initially ambiguous. Pragmatic theories offer further adjustments by distinguishing levels of meaning, positing that compositionality governs the explicit content ("what is said" or explicature), while implicatures arise from contextual enrichment. In relevance theory, Dan Sperber and Deirdre Wilson argue that the principle applies to the development of explicatures, which expand the linguistically encoded logical form through disambiguation, reference resolution, and conceptual adjustment, all guided by relevance principles that maximize cognitive effects with minimal effort. For instance, the sentence "Some dogs are mammals" compositionally yields an explicature enriched to "Some but not all dogs are mammals" via pragmatic inference, without violating compositionality at the explicit level; non-compositional appearances stem from implicatures added post-compositionally. This framework reconciles semantic compositionality with pragmatic flexibility by relocating non-literal elements to inference. In the 2020s, research on large language models (LLMs) has highlighted ongoing challenges to compositionality, with models showing limitations in handling complex compositional tasks, particularly those requiring multi-step reasoning or novel combinations, despite improvements on simpler cases through scaling. Recent benchmarks as of 2025, such as those evaluating morphological and syntactic generalization, reveal persistent failures in systematic compositionality, prompting responses like specialized instruction tuning and new evaluation frameworks to enhance modular reasoning without fully resolving tensions with data-driven training.
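As a schematic illustration of the meaning-postulate strategy (a sketch in the notation used above, not Partee's own formulation), the idiomatic reading of the phrase can be tied to the simple predicate of dying by an axiom of the form

\forall x \, [\mathit{kick\text{-}the\text{-}bucket}'(x) \leftrightarrow \mathit{die}'(x)]

where kick-the-bucket' is the denotation assigned to the whole idiomatic phrase. The postulate leaves the compositional machinery untouched and instead constrains admissible models so that the idiom's fixed meaning follows by inference.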

References

  1. Compositionality (PDF), MIT. In Compositionality in Formal Semantics: Selected Papers by Barbara H. Partee, Blackwell Publishing, 2004.
  2. Lecture 1: Introduction to Formal Semantics and Compositionality (PDF), February 12, 2004.
  3. The Principle of Semantic Compositionality (PDF).
  4. Compositionality (PDF).
  5. Compositionality in Computational Linguistics, Annual Reviews, January 17, 2023.
  6. The Principle of Semantic Compositionality, ResearchGate.
  7. Lecture 1: Introduction to Formal Semantics and Compositionality (PDF), February 20, 2006.
  8. Chapter 1: Reason, Systematicity, Judgment, and Depth in Kant's ...
  9. George Boole, Stanford Encyclopedia of Philosophy, April 21, 2010.
  10. Frege's Logic, Stanford Encyclopedia of Philosophy, February 7, 2023.
  11. Frege's Theorem and Foundations for Arithmetic, Stanford Encyclopedia of Philosophy, June 10, 1998.
  12. Compositionality: Its Historic Context (PDF), Research Explorer.
  13. Truth, Logical Structure, and Compositionality, JSTOR.
  14. Rudolf Carnap, Logical Syntax of Language (PDF), 1937.
  15. Rudolf Carnap, Logical Syntax of Language, Routledge, 1937; Taylor & Francis eBooks, June 23, 2014.
  16. Chomsky-1957.pdf (Syntactic Structures), Stanford University.
  17. A Brief History of the Syntax-Semantics Interface in Western Formal Linguistics (PDF).
  18. Putting Truth into Universal Grammar, JSTOR.
  19. Compositionality, ScienceDirect.
  20. Does the Principle of Compositionality Explain Productivity? (PDF), CEUR-WS.org, June 20, 2017.
  21. Goldberg, Compositionality (PDF), Semantics Archive.
  22. On the Systematicity of Language and Thought (PDF), UC Irvine.
  23. Probing Linguistic Systematicity (PDF), ACL Anthology.
  24. The Acquisition of Compositional Meaning, PMC (PubMed Central), December 16, 2019.
  25. Evidence for Compositional Abilities in One-Year-Old Infants, Nature, March 10, 2025.
  26. A Bayesian Model of the Acquisition of Compositional Semantics (PDF).
  27. Compositionality, Lexical Integrity, and Agglutinative Morphology.
  28. Compositionality in Different Modalities: A View from Usage-Based Linguistics, September 26, 2022.
  29. The Body as Evidence for the Nature of Language, Frontiers.
  30. The Proper Treatment of Quantification in Ordinary English (PDF).
  31. Lecture 2: Model-Theoretic Semantics, Lambdas, and NP Semantics (PDF), February 28, 2013.
  32. Richard Montague, Pragmatics and Intensional Logic.
  33. Attention Is All You Need, arXiv:1706.03762, June 12, 2017.
  34. Compositional Generalization via Neural-Symbolic Stack Machines (PDF).
  35. Toward Compositional Generalization with Neuro-Symbolic AI (PDF), HAL, September 22, 2024.
  36. How to Kick the Bucket and Not Decompose: Analyzability and Idiom Processing.
  37. On the Compositional and Noncompositional Nature of Idiomatic Expressions.
  38. Ambiguity and Compositionality (PDF), Antony Eagle.
  39. Compositionality, Semantic Flexibility, and Context-Dependence, December 7, 2020.
  40. Idiom Comprehension: A Prefrontal Task?, Oxford Academic, May 8, 2007.
  41. The Role of Left and Right Hemispheres in the Comprehension of Idiomatic Language, September 15, 2009.
  42. Lexical Semantics and Compositionality (PDF).
  43. Phrasal or Lexical Constructions? Some Comments on Underspecification of Constituent Order, Compositionality, and Control, September 27, 2007.
  44. Underspecified Semantics in HPSG, ResearchGate.
  45. Dan Sperber and Deirdre Wilson, Relevance: Communication and Cognition (PDF), Monoskop.
  46. Do Large Language Models Have Compositional Ability?, arXiv, July 22, 2024.