
Phonotactics

Phonotactics is a branch of phonology that examines the constraints governing the permissible combinations and sequencing of sounds within a language, particularly in forming syllables and words. These rules specify which phonemes can occur together in specific positions, such as onsets, nuclei, or codas, thereby defining the structural possibilities of linguistic units. Key aspects of phonotactics include restrictions on consonant clusters, vowel sequences, and segment distributions that vary across languages; for instance, English permits complex onsets like /str/ in "street" but prohibits word-initial /ŋ/ as in "*ngreen," while languages like Japanese largely avoid clusters altogether. Phonotactic patterns are language-specific and learned through exposure, influencing both production—where illegal sequences lead to errors—and perception, where legal forms are processed more efficiently. Phonotactics plays a crucial role in language acquisition, as infants and adults rapidly generalize these constraints from minimal input, aiding word learning and segmentation in continuous speech. In linguistic analysis, it informs models of syllable structure and has applications in speech recognition, speech synthesis, and understanding historical sound changes across language families.

Fundamentals

Definition and Scope

Phonotactics is a branch of phonology that examines the permissible and impermissible combinations of sounds, specifically phonemes or their features, within the words and syllables of a language. These restrictions determine which sequences of segments form valid linguistic units, influencing how speakers produce and perceive speech. Unlike phonetics, which focuses on the physical production and acoustic properties of sounds, phonotactics deals with abstract rules governing their organization, independent of actual pronunciation variations. The scope of phonotactics encompasses constraints at multiple levels: segmental, involving combinations of individual consonants and vowels, such as consonant clusters; syllabic, regulating the structure of onsets, nuclei, and codas; and prosodic, addressing broader patterns like stress or intonation boundaries that interact with sound sequences. It is distinct from morphology, which concerns the formation of words through meaningful units like roots and affixes, though phonotactic rules may sometimes align with morphological boundaries without directly governing word-building processes. Understanding phonotactics requires familiarity with foundational concepts in phonology, including phonemes—the minimal contrastive sound units that distinguish meaning—as identified through minimal pairs, pairs of words differing by only one sound (e.g., pat and bat in English). Allophones, the non-contrastive variants of a phoneme that do not affect meaning (e.g., aspirated [pʰ] in pin versus unaspirated [p] in spin), provide context for phonotactic rules by showing how sounds behave in specific environments without violating combinatory constraints. Basic phonotactic rules illustrate these principles across languages. In English, the velar nasal /ŋ/ (as in sing) cannot occur word-initially, making forms like *[ŋɪt] invalid.
In Japanese, syllables typically follow a CV structure but permit a limited set of codas, such as the moraic nasal /N/ (realized as [n], [ɲ], [ŋ], or [m] depending on the following sound), whose place of articulation assimilates to the following consonant to maintain prosodic well-formedness. These examples highlight how phonotactics enforces language-specific patterns, with violations often leading to perceptual repair or adaptation in loanwords.
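The two constraints just described can be sketched as simple checks. This is an illustrative toy, not a full phonological model: the phoneme symbols, the five-vowel set, and the function names are assumptions made for demonstration, and the CV(N) parser ignores details such as where the moraic nasal may appear.

```python
def violates_english_initial_ng(phonemes):
    """English bans the velar nasal word-initially."""
    return bool(phonemes) and phonemes[0] == "ŋ"

def fits_cv_n_template(phonemes, vowels=frozenset("aiueo")):
    """Rough check that a form parses as a string of (C)V or (C)VN
    morae, in the spirit of the Japanese template described above."""
    i, n = 0, len(phonemes)
    while i < n:
        if phonemes[i] not in vowels:       # optional single onset C
            i += 1
        if i >= n or phonemes[i] not in vowels:
            return False                     # a nucleus vowel is required
        i += 1
        if i < n and phonemes[i] == "N":     # optional moraic nasal coda
            i += 1
    return True
```

For example, `fits_cv_n_template(list("kaN"))` accepts the form, while `fits_cv_n_template(list("sta"))` rejects the /st/ cluster.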

Historical Development

The study of phonotactics traces its roots to 19th-century historical-comparative linguistics, where scholars examined sound changes and their effects on permissible combinations within languages. Jacob Grimm's formulation of Grimm's law in 1822 described systematic shifts in consonants from Proto-Indo-European to Proto-Germanic, such as the change from /p/ to /f/ (e.g., Latin pater beside English father), which implicitly constrained allowable clusters and sequences by altering the inventory and distribution of sounds across related languages. This work laid groundwork for understanding phonotactic restrictions as outcomes of historical sound laws, influencing later analyses of syllable structures in language families. Key milestones emerged in the late 19th and early 20th centuries with foundational contributions to phonological theory. Jan Baudouin de Courtenay's research in the 1870s on sound laws, particularly in Slavic languages like Polish and Kashubian, distinguished between phonetic sounds and abstract phonemes, emphasizing how positional contexts govern permissible combinations and foreshadowing phonotactic constraints. Building on this, Leonard Bloomfield's 1933 monograph Language introduced the concept of distributional classes, classifying sounds based on their environments and co-occurrence patterns, which provided a systematic framework for identifying phonotactic rules in descriptive linguistics. Concurrently, the sonority hierarchy emerged as a concept in early 20th-century work, ranking sounds by perceptual prominence to explain syllable organization. The mid-20th century marked a shift toward generative approaches, with Noam Chomsky and Morris Halle's 1968 The Sound Pattern of English integrating phonotactics into a feature-based model of generative phonology. This framework treated constraints on sound sequences as operations on binary features (e.g., [+consonantal], [+sonorant]), deriving phonotactic patterns from universal rules and language-specific adjustments during derivation.
Influential scholars like Otto Jespersen advanced related ideas; his 1904 analysis of syllable formation proposed a prominence theory in which sounds vary in sonority to determine syllable weight and structure, shaping later metrics of syllable heaviness in prosodic systems. Roman Jakobson further contributed through his 1941 exploration of phonological universals, identifying hierarchical feature oppositions that underpin cross-linguistic patterns in sound distribution. From the 1980s to the 2000s, phonotactic research evolved by incorporating typological perspectives and implicational universals, as articulated earlier in Joseph Greenberg's 1963 survey of 30 languages, which proposed conditional statements such as "if a language has phonemic fricatives, it has stops," linking inventory constraints to broader sequential rules. This integration shifted focus from isolated rules to predictive hierarchies across languages, influencing optimality-theoretic models that evaluate constraint interactions globally.

Core Principles

Sonority Sequencing Principle

The Sonority Sequencing Principle (SSP) posits that within a syllable, the sonority of speech sounds must rise from the onset to the nucleus and then fall toward the coda, ensuring a smooth perceptual and articulatory profile. Sonority refers to the relative auditory prominence or perceived loudness of a sound, determined primarily by its acoustic intensity and resonance, with vowels exhibiting the highest sonority due to their open vocal tract configuration and periodic airflow, while stops and fricatives show the lowest as a result of greater obstruction. This principle, first articulated in early work on the syllable by scholars such as Eduard Sievers and Otto Jespersen, serves as a universal guideline for phonotactic well-formedness, predicting that deviations create marked structures often repaired through processes like epenthesis or cluster simplification in loanwords or child language. The sonority hierarchy provides a ranked scale for classifying sounds, typically structured as follows: low vowels > mid vowels > high vowels > glides > liquids (e.g., /l/, /r/) > nasals (e.g., /m/, /n/) > obstruents (fricatives > stops, with voiceless lower than voiced). This hierarchy reflects articulatory ease, where transitions between sounds of increasing sonority involve less gestural overlap and smoother timing, facilitating production, while perceptual salience is enhanced by the peak in periodic energy at the nucleus, aiding detection and parsing. Violations of the hierarchy, such as falling sonority in onsets (e.g., a liquid followed by a nasal), are rare and considered highly marked, often leading to perceptual ambiguity or articulatory difficulty. Formally, the SSP can be represented through the syllable template σ = (C)(C)… V …(C)(C), where sonority increases monotonically from any onset consonant(s) to the vocalic nucleus (the sonority peak) and decreases in the coda, allowing for plateaus or gradual falls in cases like falling diphthongs (e.g., /ai/, where sonority falls gradually between elements).
For instance, in a complex onset like /bla/, sonority rises from the stop /b/ (low) through the liquid /l/ (mid) to the vowel /a/ (high), forming a valid syllable; plateaus occur when adjacent segments share similar sonority, as in /tw/, where the glide /w/ approximates the vowel's prominence without a sharp rise. Cross-linguistic evidence supports the SSP as a strong tendency, with conforming clusters (e.g., rising-sonority onsets like /pr/ or falling-sonority codas like /mp/) appearing in the majority of syllable inventories across language families, while falling-sonority onsets are virtually absent in most languages. A large-scale analysis of 496 languages reveals that while violations occur in about 40–50% of cases—often involving sibilants such as /s/ in onsets and codas—the principle still accounts for preferred patterns, such as maximal sonority rises toward the nucleus, underscoring its role in universal phonotactics.
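One way to make the hierarchy concrete is to assign each segment class an integer rank and test that sonority rises (at least weakly, allowing plateaus) through the onset to the nucleus and falls through the coda. The numeric values, segment inventory, and function name below are illustrative assumptions, not a standard scale.

```python
# Illustrative sonority scale: higher = more sonorous.
SONORITY = {
    "p": 1, "t": 1, "k": 1, "b": 2, "d": 2, "g": 2,   # stops
    "f": 3, "s": 3, "v": 4, "z": 4,                   # fricatives
    "m": 5, "n": 5, "ŋ": 5,                           # nasals
    "l": 6, "r": 6,                                   # liquids
    "j": 7, "w": 7,                                   # glides
    "i": 8, "u": 8, "e": 9, "o": 9, "a": 10,          # vowels
}

def obeys_ssp(onset, nucleus, coda):
    """True if sonority rises (weakly) through the onset to the
    nucleus and falls (weakly) from the nucleus through the coda."""
    rise = list(onset) + [nucleus]
    fall = [nucleus] + list(coda)
    ok_rise = all(SONORITY[a] <= SONORITY[b] for a, b in zip(rise, rise[1:]))
    ok_fall = all(SONORITY[a] >= SONORITY[b] for a, b in zip(fall, fall[1:]))
    return ok_rise and ok_fall
```

Under this toy scale, `obeys_ssp("bl", "a", "")` holds while the falling-sonority onset in `obeys_ssp("lb", "a", "")` does not.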

Syllable Structure Constraints

Syllables are typically composed of three main parts: an optional onset consisting of one or more consonants preceding the nucleus, a nucleus formed by a vowel or syllabic consonant that serves as the syllable's core, and an optional coda of consonants following the nucleus. Cross-linguistically, the simplest syllable structure is CV, where C represents a consonant and V a vowel, reflecting a universal preference for open syllables with minimal consonantal margins. Complex onsets and complex codas are permitted in some languages but not others, with typological variation showing that not all languages allow both types of complex margins. Phonotactic constraints often impose restrictions based on position within the syllable, such as prohibitions on certain places of articulation or voicing in codas. For instance, many languages, including German and Dutch, disallow voiced obstruents in coda position due to final devoicing, resulting in voiceless realizations of underlying voiced stops word-finally. Adjacency effects further limit permissible sequences, as seen in English, where clusters like /tl/ are banned in onsets to avoid incompatible articulatory transitions between alveolar stops and laterals. Markedness hierarchies in phonotactics favor simpler structures, with CV syllables considered unmarked and complex margins introducing greater complexity that requires phonological licensing. In frameworks like Government Phonology, the nucleus licenses the onset and coda through hierarchical relations, with codas subject to weaker licensing and therefore typically supporting fewer contrasts than onsets. This underscores a universal tendency toward neutralization in syllable margins, where codas tolerate less contrast due to reduced perceptual salience. When ill-formed sequences violate these constraints, languages employ repair mechanisms to restore well-formedness, including epenthesis to insert vowels breaking illicit clusters, deletion to excise offending consonants, or metathesis to reorder segments. Japanese commonly repairs complex codas in loanword adaptation, as in inserting /u/ after obstruents to avoid closed syllables.
Deletion targets marked codas in casual speech or historical change, while metathesis, though rarer, resolves adjacency violations by swapping sounds, as evidenced in experimental learning tasks where participants reorder clusters to align with syllable templates. Typological variation highlights the diversity of syllable structures: some languages have been analyzed as permitting no consonant onsets—resulting in all vowel-initial syllables—such as Arrernte, where underlying forms have been argued to lack syllable onsets. In contrast, languages like English allow heavy codas with up to four consonants, as in the /ksts/ of "texts" in word-final position, reflecting permissive phonotactics for complex margins.
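The epenthesis and deletion repairs described above can be sketched for a hypothetical strictly-CV target language. The vowel inventory, the default epenthetic vowel /u/, and the function names are assumptions made for illustration; real repairs are conditioned by much richer phonology.

```python
VOWELS = frozenset("aiueo")

def epenthesize(phonemes, epenthetic="u"):
    """Insert a default vowel after every consonant not already
    followed by a vowel (a crude CV-enforcing epenthesis repair)."""
    out = []
    for i, p in enumerate(phonemes):
        out.append(p)
        next_is_vowel = i + 1 < len(phonemes) and phonemes[i + 1] in VOWELS
        if p not in VOWELS and not next_is_vowel:
            out.append(epenthetic)
    return out

def delete_extra(phonemes):
    """Keep only consonants immediately preceding a vowel
    (a crude CV-enforcing deletion repair)."""
    out = []
    for i, p in enumerate(phonemes):
        if p in VOWELS:
            out.append(p)
        elif i + 1 < len(phonemes) and phonemes[i + 1] in VOWELS:
            out.append(p)
    return out
```

Applied to the illicit input /sta/, epenthesis yields /suta/ while deletion yields /ta/, mirroring the two strategies in the text.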

Language-Specific Examples

English

English phonotactics permits complex consonant clusters in syllable onsets, but generally only those exhibiting rising sonority, such as /str/ in "street" and /spl/ in "split" (with initial /s/ as a systematic exception to the rise), while prohibiting sequences with falling or equal sonority like /bn/ or /tl/ that violate this principle. These restrictions ensure that less sonorous consonants precede more sonorous ones in onsets, as observed in native word formations. In codas, English bans certain sounds in word-final position, including /h/, which occurs exclusively as a syllable onset, and the cluster /ŋg/, though /ŋ/ alone is permitted, as in "sing." Combinations of stops and sibilants are allowed in codas, however, as evidenced by the final /sts/ of "texts" (/tɛksts/). Vowel-consonant interactions in English involve glide formation in diphthongs, where sequences like /aɪ/ and /aʊ/ are sometimes analyzed as a vowel followed by a glide /j/ or /w/, as in "high" or "how." Additionally, the schwa /ə/ occurs primarily in unstressed syllables, whether open or closed, while stressed syllables favor full vowels like /i/ or /a/ (e.g., "sofa" /ˈsoʊ.fə/, with a full vowel in the stressed syllable and schwa in the unstressed one). Dialectal variations affect coda realizations, particularly with /r/, which is pronounced in American English codas as in "car" but deleted in non-rhotic British Received Pronunciation. Loanword adaptations and dialect forms frequently involve epenthesis to resolve disfavored clusters, such as inserting a schwa in "film" to yield /fɪləm/ in certain dialects like Irish English, aligning the pronunciation with local phonotactic preferences.
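These English onset facts can be approximated by a small rule-of-thumb checker: onsets are at most a consonant plus a liquid or glide, with /s/+stop(+liquid/glide) clusters as the licensed exception. The segment sets, the explicit ban list, and the function name are simplifications invented for illustration, not a complete description of English.

```python
STOPS = {"p", "t", "k", "b", "d", "g"}
LIQUIDS_GLIDES = {"l", "r", "w", "j"}
BANNED = {("t", "l"), ("d", "l")}          # e.g. the /tl/ onset ban

def legal_english_onset(onset):
    """Rough legality check for English onsets (a simplification)."""
    onset = tuple(onset)
    if len(onset) <= 1:
        return onset != ("ŋ",)             # no word-initial velar nasal
    if onset in BANNED:
        return False
    if onset[0] == "s" and onset[1] in STOPS:
        # /s/ + stop, optionally + liquid/glide: /sp/, /st/, /str/, /spl/...
        return len(onset) == 2 or (len(onset) == 3 and onset[2] in LIQUIDS_GLIDES)
    # otherwise at most consonant + liquid/glide: /bl/, /pr/, /tw/...
    return len(onset) == 2 and onset[1] in LIQUIDS_GLIDES
```

Under this sketch, /str/ and /bl/ pass while /bn/, /tl/, and initial /ŋ/ fail, matching the patterns above.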

Japanese

Japanese phonotactics is governed by a strictly moraic structure, where the fundamental unit is the mora, typically organized as (C)V or (C)VN, with N representing the moraic nasal and no consonant clusters permitted except those created by the special mora /Q/, which geminates the following obstruent. This CV(N) template ensures that onsets are single consonants or empty, while codas are limited to the moraic nasal, which assimilates in place of articulation to a following consonant, or the geminate trigger /Q/, realized as a brief closure before voiceless obstruents like /p/, /t/, /k/, and /s/. For instance, the word kitte 'stamp' features /Q/ geminating the /t/, forming a bimoraic heavy syllable. Vowel sequences in Japanese exhibit hiatus, where adjacent vowels from different morphemes, or in rarer monomorphemic cases, remain distinct without obligatory fusion, though such configurations may undergo optional glide formation or coalescence in casual speech. Long vowels, analyzed as bimoraic (VV) units, contrast with short monomoraic vowels and contribute to the language's mora-timed rhythm, as in obāsan 'grandmother' versus obasan 'aunt'. These constraints reinforce the mora's status as the basic timing unit, with the syllable serving as a grouping of moras rather than an independent phonological entity. In loanword adaptation, Japanese phonotactics enforces epenthesis to resolve illicit clusters, inserting a default high vowel /u/ (or /o/ after coronal stops), as seen in the English word strawberry becoming sutoroberī. Palatalization rules apply in older loans, transforming coronals like /t/ and /d/ before /i/ into affricates [tɕ] and [dʑ], yielding forms such as chiketto for ticket, though more recent loans permit [ti], as in tīshatsu for T-shirt. These adaptations maintain the CV(N) template while incorporating foreign elements. The standard variety exemplifies these constraints, but Ryukyuan varieties such as Okinawan diverge, permitting more complex structures such as prenasalized stops, reflecting Ryukyuan phonological diversity.
For example, Okinawan allows sequences like /mb/ or /nd/ in native words, contrasting with the simpler structures of mainland Japanese.
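The moraic scheme above can be illustrated with a toy mora counter. It assumes a romanized transcription convention (an assumption for this sketch) in which long vowels are written doubled, N marks the moraic nasal, and Q marks the geminate mora; every vowel, N, and Q then contributes exactly one mora, while onset consonants contribute none.

```python
def count_morae(word, vowels=frozenset("aiueo")):
    """Count morae in a romanized transcription: each vowel letter,
    moraic nasal N, and geminate trigger Q is one mora; onset
    consonants add nothing."""
    return sum(1 for ch in word if ch in vowels or ch in ("N", "Q"))
```

For example, kitte 'stamp' transcribed as "kiQte" counts three morae (ki-Q-te), and "niQpoN" counts four (ni-Q-po-N).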

Ancient Greek

The phonotactics of Ancient Greek permitted a relatively simple syllable structure, primarily consisting of CV (consonant-vowel), CCV (with complex onsets), and CVC (with a coda consonant) shapes, where CV syllables were light and CVC or CVV syllables counted as heavy in quantitative meter. Complex onsets were allowed in word-initial position, including clusters such as /pn/ (as in pneuma 'breath') and /ps/ (as in psūkhē 'soul'), which showed rising sonority from stop to nasal or fricative. Codas were typically single consonants but could form complex clusters in heavy syllables (CVCC), contributing to prosodic weight in the language's metrical organization. Diphthongs formed a key part of vowel phonotactics, allowing complex sequences like /ai/ (as in paidós 'of a child') and /eu/ (as in eû 'well'), which were treated as long in the quantitative metrics used in poetry, contributing to the heavy status of their syllables. These diphthongs influenced metrical patterns in epic and lyric verse, where syllable weight determined rhythmic structure, as in the dactylic hexameter. Consonant restrictions included the absence of word-initial /w/ in the Classical language, as the digamma (ϝ), representing this sound from Proto-Indo-European *w, fell out of use by the Classical era, leaving no trace in Attic or Ionic dialects. Aspiration provided phonemic contrasts among stops, distinguishing unaspirated /p/ (as in pótmos 'fate') from aspirated /pʰ/ (as in phérō 'I carry'), a feature that marked lexical differences and persisted in careful speech. Historical sound changes shaped phonotactics, including compensatory lengthening in codas when a consonant was lost, such as the deletion of /w/ or /j/ after a vowel, resulting in vowel prolongation (e.g., *sā́wōn > *sā́ōn 'safe'), thereby maintaining moraic weight and affecting syllable heaviness. In the Attic dialect, geminates were realized as doubled stops like /tt/ (as in glôtta 'tongue', beside Ionic glôssa), which were phonemically distinct from singletons and frequent in intervocalic positions, influencing prosody; some such patterns were preserved in later borrowings into Latin.
These features of Ancient Greek phonotactics, with their emphasis on syllable weight and metrical constraints, exerted lasting influence on the phonological systems of descendant and neighboring languages in the Mediterranean region.
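The quantitative weight system described above can be sketched as a small scansion helper: a syllable is heavy if its nucleus is long (a long vowel or diphthong) or if it has any coda, and light otherwise. The (onset, nucleus, coda) tuple notation and the function names are conveniences invented for this sketch.

```python
def weight(onset, nucleus, coda):
    """Heavy if the nucleus is long (two vowel symbols, i.e. a long
    vowel or diphthong) or the syllable is closed by a coda."""
    return "heavy" if len(nucleus) >= 2 or len(coda) >= 1 else "light"

def scan(syllables):
    """Map (onset, nucleus, coda) triples to a weight pattern string,
    '-' for heavy and 'u' for light, as in metrical scansion."""
    return "".join("-" if weight(*s) == "heavy" else "u" for s in syllables)
```

For example, syllabifying paidós as pai-dós gives a heavy first syllable (diphthong nucleus) and a heavy second (closed by /s/): `scan([("p", "ai", ""), ("d", "o", "s")])` yields "--".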

Formal Models

Feature-Based Approaches

Feature-based approaches to phonotactics model sound sequences by decomposing segments into bundles of binary distinctive features, enabling constraints to be formalized as bans on incompatible feature combinations. In the seminal framework of The Sound Pattern of English (SPE), Chomsky and Halle (1968) proposed a set of universal binary features, including [±sonorant], [±consonantal], [±continuant], and place features like [±anterior] and [±coronal], which capture the articulatory and acoustic properties of sounds. Phonotactic restrictions, such as prohibitions on certain consonant clusters, are then expressed as rules that prevent illicit co-occurrences of these features within prosodic domains like the syllable onset or nucleus. For example, in English, the restriction that only /s/ can precede another stop word-initially (e.g., permitting /sp/ but prohibiting */tp/) can be derived from feature-based rules involving [continuant] and place features. To address limitations in the linear matrix representation of features in SPE, feature geometry organizes features into hierarchical tree structures, reflecting natural classes and dependencies among them. Sagey (1986) introduced a model with a root node dominating major class features (e.g., [±consonantal]), which branch into manner, place, and laryngeal tiers; for instance, the laryngeal node includes features like [±voice] and [±spread glottis] to group glottal properties. This explains phonotactic assimilation in clusters, such as place agreement in nasal-obstruent sequences (e.g., /n/ becoming [ŋ] before velars), by allowing linked features under shared nodes (e.g., coronal or dorsal) to spread, enforcing co-occurrence without stipulating separate rules for each language. Such structures highlight how phonotactics emerges from feature interactions rather than arbitrary segment lists.
Phonological representations in these approaches often incorporate underspecification, where redundant or predictable features are omitted from underlying forms to streamline derivations and reflect perceptual salience. For vowels, place features are frequently underspecified; for example, non-low vowels may lack explicit backness specifications underlyingly, receiving default values by rule, as this captures asymmetries in assimilation and alternation patterns without over-specifying invariant properties. This principle, developed in work extending SPE, reduces redundancy in rule application and aligns with evidence from phonological processes where default values surface in neutral contexts. Despite their influence, feature-based models face critiques for overgeneration, as the linear arrangements in SPE permit derivations of unattested forms, such as impossible feature combinations in complex onsets, without sufficient mechanisms to block them universally. This led to the evolution toward autosegmental phonology, which introduces non-linear tiers and association lines to better model timing, tone, and harmony, curbing overgeneration by representing features as autonomous autosegments rather than strictly sequential matrices.
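The nasal place assimilation just described can be sketched as feature spreading over explicit feature bundles. The tiny feature set and segment inventory here are invented assumptions (nothing like SPE's full system), but the mechanism—copying the place value of the following segment onto a nasal and mapping the resulting bundle back to a segment—is the one the text describes.

```python
# Toy feature bundles: each segment is a dict of feature values.
FEATURES = {
    "n": {"nasal": True,  "place": "coronal"},
    "m": {"nasal": True,  "place": "labial"},
    "ŋ": {"nasal": True,  "place": "dorsal"},
    "t": {"nasal": False, "place": "coronal"},
    "p": {"nasal": False, "place": "labial"},
    "k": {"nasal": False, "place": "dorsal"},
}

def assimilate_nasal(seq):
    """Spread the place feature of a following consonant onto a
    preceding nasal, then map the bundle back to a segment symbol."""
    out = list(seq)
    for i in range(len(out) - 1):
        here, nxt = FEATURES[out[i]], FEATURES[out[i + 1]]
        if here["nasal"]:
            target = {"nasal": True, "place": nxt["place"]}
            out[i] = next(s for s, f in FEATURES.items() if f == target)
    return out
```

For instance, `assimilate_nasal(["n", "k"])` returns `["ŋ", "k"]`, the place agreement pattern cited above.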

Optimality Theory Applications

Optimality Theory (OT), developed in the early 1990s by Alan Prince and Paul Smolensky, applies to phonotactics by modeling sound patterns as the outcome of interactions among a set of ranked, violable constraints, rather than rule-based derivations. In this framework, a generator function (GEN) produces an infinite set of candidate outputs from a given underlying input, while an evaluator (EVAL) selects the optimal candidate based on the language-specific ranking of constraints from the universal constraint set (CON). Markedness constraints in CON penalize complex or unnatural structures, such as *COMPLEX (banning branching onsets and codas) or NO-CODA (banning codas), while faithfulness constraints preserve aspects of the input, like MAX-IO (no deletion) or DEP-IO (no insertion). Language-particular phonotactics emerges from the hierarchical ranking of these constraints, allowing violations of lower-ranked ones when necessary to satisfy higher-ranked ones. In phonotactic applications, OT pits markedness against faithfulness to account for permissible and impermissible sequences. For instance, in English, the sequence /ŋg/ is banned word-finally due to a high-ranked markedness constraint against /ŋg/ codas, which outranks faithfulness constraints such as MAX-IO, leading to deletion of the /g/ in forms like sing. Similarly, onsets like /str/ in "street" are permitted because the markedness constraint *COMPLEX is ranked below faithfulness constraints like DEP-IO and MAX-IO. The following tableau illustrates this for the input /str/: the faithful candidate [str] emerges as optimal by violating only the low-ranked *COMPLEX, while alternative candidates like [sətr] (with epenthesis) and [tr] (with deletion) fatally violate the higher-ranked DEP-IO and MAX-IO, respectively.
Input: /str/      DEP-IO    MAX-IO    *COMPLEX
a. ☞ [str]                            *
b.   [sətr]       *!                  *
c.   [tr]                   *!        *
This setup explains why English tolerates certain three-consonant onsets without repair, unlike languages where *COMPLEX is ranked higher. Extensions of OT address more complex phonotactic phenomena, such as opacity, where an intermediate stage of a derivation affects a later one in ways not directly visible on the surface. Correspondence Theory refines OT by introducing multiple correspondence relations—input-output (IO), output-output (OO), and base-reduplicant (BR)—to model repairs like deletion or spreading without assuming serial derivations; for example, it handles cases where an illicit cluster is repaired differently in underived versus derived contexts by aligning corresponding elements across outputs. Learnability in OT is supported by algorithms like recursive constraint demotion, which infers the correct ranking from pairs of winner-loser candidates in observed data, progressively demoting constraints violated by winners but not losers to converge on the target grammar. OT's advantages in phonotactics include its ability to explain cross-linguistic variation through simple reranking—e.g., languages with no codas rank NO-CODA above MAX-IO, while those allowing codas reverse this—and to unify disparate processes into "conspiracies" driven by a single high-ranked constraint, such as multiple repair strategies all serving to avoid the same illicit structure. However, critiques highlight persistent challenges, including the theory's difficulty with certain opacities without ad hoc extensions like sympathy theory or stratal OT, and the risk of overgeneration from an unconstrained GEN function, which can produce unattested patterns unless additional restrictions are imposed on CON.
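The EVAL step of OT can be sketched directly: score each candidate's violations per constraint, order the counts by the ranking, and take the lexicographic minimum. The candidate set and violation counts below are illustrative assumptions for the input /str/ (a fourth, fully epenthesized candidate is invented to show reranking), not a complete GEN output.

```python
def evaluate(candidates, ranking):
    """candidates: {form: {constraint: violation_count}};
    ranking: constraints ordered highest-ranked first.
    Lexicographic comparison of violation profiles picks the winner."""
    def profile(form):
        return tuple(candidates[form].get(c, 0) for c in ranking)
    return min(candidates, key=profile)

candidates = {
    "str":    {"*COMPLEX": 1},                 # faithful, complex onset
    "sətr":   {"DEP-IO": 1, "*COMPLEX": 1},    # one epenthesis, still complex
    "tr":     {"MAX-IO": 1, "*COMPLEX": 1},    # one deletion, still complex
    "sətərə": {"DEP-IO": 2},                   # fully epenthesized, all CV
}
```

With the English-like ranking `["DEP-IO", "MAX-IO", "*COMPLEX"]`, `evaluate` selects "str"; promoting *COMPLEX to the top (`["*COMPLEX", "MAX-IO", "DEP-IO"]`) instead selects the cluster-free "sətərə", modeling cross-linguistic variation as pure reranking.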

Implications

Language Acquisition

Children acquire phonotactic knowledge through a developmental progression that begins with universal patterns in early babbling and transitions to language-specific constraints by the second year of life. In the initial stage, around 6–10 months, infants produce canonical syllables (e.g., CV structures) that are largely uniform across languages, showing little adherence to the specific phonotactic rules of their ambient language. By 12–24 months, however, native phonotactic patterns emerge, as evidenced by English-learning infants' avoidance of illicit onset clusters like /bn/, which violate sonority-rise preferences and are rarely encountered in input. This shift reflects growing sensitivity to probabilistic constraints in the linguistic environment, enabling toddlers to produce and prefer well-formed syllables aligned with their language's phonotactics. Empirical evidence for phonotactic acquisition comes from experimental tasks revealing gradient knowledge rather than strict categorical rules. In nonce-word tasks akin to wug tests, children as young as 3–4 years demonstrate graded acceptability judgments for novel forms, rating high-probability clusters (e.g., /bl/) as more word-like than low-probability ones (e.g., /bn/), indicating partial internalization of phonotactic probabilities. Similarly, error patterns in child speech, such as cluster reduction, follow sonority principles: children preferentially retain the less sonorous element of a cluster (e.g., reducing /sp/ to /p/) to optimize well-formedness, even before full mastery. These patterns underscore how phonotactics guides production from an early age, with reductions decreasing as input-driven learning strengthens constraint adherence. Theoretical accounts of phonotactic acquisition debate the relative contributions of innate universals and learned mechanisms.
Innate biases, particularly sonority-based restrictions like the Sonority Sequencing Principle, appear to bootstrap learning, as infants extend these universals to novel clusters unattested in their language, suggesting an initial phonological grammar that favors rising sonority in onsets. In contrast, statistical learning from input drives language-specific refinement, with infants tracking co-occurrence probabilities of sounds to internalize language-specific patterns, as shown in habituation studies where 9-month-olds discriminate legal from illegal sequences after brief exposure. Prosody plays a facilitative role in this process, with rhythmic cues like stress enhancing sensitivity to phonotactic boundaries during word segmentation, particularly in languages with trochaic rhythm, where strong-weak patterns highlight permissible clusters. Cross-linguistically, acquisition trajectories reflect language-specific structures, such as the early mastery of moraic timing in Japanese: Japanese-learning infants segment and produce morae (e.g., CV or V units) accurately by 12–18 months, leveraging the language's rhythm to enforce phonotactic constraints like vowel epenthesis in loanwords, ahead of the acquisition of complex clusters in languages like English. In phonological disorders such as dyslexia, impaired phonotactic processing manifests as reduced sensitivity to sound-sequence probabilities, leading to difficulties in decoding novel words and sustaining phonological representations during reading acquisition. Key milestones in understanding this process emerged from 1990s research applying Optimality Theory to model constraint ranking in development. Studies by Clara Levelt and colleagues analyzed children's longitudinal speech data, revealing staged acquisition of syllable types (e.g., CV before CCV) via the gradual re-ranking of markedness constraints relative to faithfulness, predicting error orders such as cluster reduction before full onsets. This framework, extended by Boersma and Levelt's gradual learning algorithm, demonstrated how input re-ranks an initially markedness-dominant grammar to match target phonotactics, aligning with observed timelines across languages.

Computational and Typological Applications

Phonotactics plays a central role in linguistic typology through the identification of cross-linguistic patterns and universals that constrain syllable and segment combinations. Joseph Greenberg's work on universals, particularly his 1978 analysis of phonological structures, highlighted implicational hierarchies in syllable complexity, such as the tendency that languages permitting complex onsets also allow codas, while languages lacking codas rarely permit onset clusters. This reflects broader principles whereby simpler structures (e.g., CV syllables) are more common globally than complex ones (e.g., CCVC). The UCLA Phonological Segment Inventory Database (UPSID), compiled by Ian Maddieson in the 1980s and later expanded to 451 languages, has been instrumental in quantifying these patterns, supporting statistical universals derived from segment co-occurrence frequencies. In computational linguistics, phonotactics is modeled using finite-state automata (FSAs) to generate or validate permissible sound sequences, enabling efficient representation of constraints as regular languages. For instance, genetic algorithms have been employed to induce FSAs from positive phonotactic data, capturing language-specific rules like English's avoidance of /tl/ onsets. N-gram models, which estimate probabilities of sound sequences from corpus frequencies, are widely used in speech synthesis to ensure generated utterances adhere to phonotactic probabilities, improving naturalness in components like grapheme-to-phoneme conversion. Machine-learning approaches, particularly supervised models, predict loanword adaptations by learning mappings from source to target phonotactics, such as inserting epenthetic vowels to repair illicit clusters in Japanese borrowings from English. Practical applications of phonotactics extend to automatic speech recognition, where phonotactic models filter out impossible candidates to reduce search space and error rates; for example, pruning non-words like *bnif in English accelerates decoding in hidden-Markov-model-based systems.
In orthography design for under-resourced languages, phonotactic constraints guide spelling conventions to reflect permissible clusters. Forensic phonetics leverages phonotactics for speaker profiling, analyzing deviations in cluster realization to infer dialectal origins or non-native accents in audio evidence. Tools like Praat facilitate empirical analysis by enabling segmentation and measurement of phonotactic violations in acoustic data. Advances in the 1990s, including early recurrent neural networks, modeled phonotactic probabilities to simulate human-like sensitivity to sequence likelihoods, laying groundwork for modern applications. Challenges in these domains include accommodating dialectal variation, where phonotactic allowances differ systematically—for example, Scots permits /xt/ word-finally, as in nicht 'night', unlike most English varieties—complicating universal models. Predicting typological universals remains difficult, as computational metrics like FSA complexity or neural surprisal often fail to fully capture implicational hierarchies without extensive cross-linguistic training data, leading to overgeneralization in low-resource scenarios.
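The n-gram modeling mentioned above can be sketched as a bigram model over phoneme strings with add-one smoothing; the tiny training list, the "#" boundary symbol, and the alphabet size are invented assumptions for illustration. Legal-looking sequences receive higher log-probability than sequences containing unattested transitions.

```python
from collections import Counter
import math

def train_bigrams(words):
    """Count bigram and unigram-context frequencies over words,
    padded with '#' as a word-boundary symbol."""
    counts, context = Counter(), Counter()
    for w in words:
        padded = "#" + w + "#"
        for a, b in zip(padded, padded[1:]):
            counts[(a, b)] += 1
            context[a] += 1
    return counts, context

def log_prob(word, counts, context, alphabet_size):
    """Add-one-smoothed bigram log-probability of a word."""
    padded = "#" + word + "#"
    lp = 0.0
    for a, b in zip(padded, padded[1:]):
        p = (counts[(a, b)] + 1) / (context[a] + alphabet_size)
        lp += math.log(p)
    return lp
```

Trained on forms containing /bl/ onsets, the model scores the attested-style "blik" above the phonotactically odd "bnik", the kind of gradient preference such models are used to capture.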

  14. [14]
    [PDF] quantifying the sonority hierarchy - Dallas International University
    3.2.4 The early 20th century: Jespersen (1904). Analyses of sonority in the first half of the 20th century include. Jespersen (1904, 1922), de Saussure (1907 ...
  15. [15]
    [PDF] THE SOUND PATTERN OF ENGLISH - MIT
    This study of English sound structure is an interim report on work in progress rather than an attempt to present a definitive and exhaustive study of ...
  16. [16]
    Language: Its Nature Development And Origin - Project Gutenberg
    LANGUAGE ITS NATURE DEVELOPMENT AND ORIGIN. BY OTTO JESPERSEN PROFESSOR IN THE UNIVERSITY OF COPENHAGEN.
  17. [17]
    [PDF] Child Language, Aphasia and Phonological Universals - Monoskop
    PHONOLOGICAL. UNIVERSALS by. ROMAN JAKOBSON. HARVARD UNIVERSITY AND. MASSACHUSETSS INSTITUTE OF TECHNOLOGY. MOUTON PUBLISHERS · THE HAGUE · PARIS NEW YORK. Page ...Missing: phonotactics | Show results with:phonotactics
  18. [18]
    [PDF] Universals of language - Internet Archive
    This document reports on a conference about language universals, which were divided into phonology, grammar, and semantics, and syn-chronic and diachronic.
  19. [19]
    (PDF) Implicational phonological universals - ResearchGate
    Aug 7, 2025 · A principle holding for implicational phonological universals is proposed, stating that the antecedent and the consequent of an implicational ...
  20. [20]
    [PDF] clements90.pdf
    17.2 The Sonority Sequencing Principle: a historical overview. The notion that speech sounds can be ranked in terms of relative stricture or sonority can be ...
  21. [21]
    Lehrbuch der Phonetik; : Jespersen, Otto, 1860-1943 - Internet Archive
    Jul 23, 2008 · Lehrbuch der Phonetik;. by: Jespersen, Otto, 1860-1943. Publication date: 1904. Topics: Phonetics. Publisher: Leipzig, Teubner. Collection ...Missing: sonority | Show results with:sonority
  22. [22]
    Sonority sequencing and its relationship to articulatory timing in ...
    Mar 15, 2023 · Sonority is often used to explain patterns of syllable structures across the world's languages, through the Sonority Sequencing Principle ( ...
  23. [23]
    Modeling Sonority in Terms of Pitch Intelligibility With the Nucleus ...
    Jul 7, 2022 · In models that use the H hierarchy, there are four levels of obstruents (voiced and voiceless stops and fricatives) which are collapsed into one ...<|separator|>
  24. [24]
  25. [25]
    3.10 Syllables – Essentials of Linguistics, 2nd edition
    Based on a language's own sonority hierarchy, its syllables usually obey the sonority sequencing principle (SSP), which requires sonority to rise through the ...
  26. [26]
    [PDF] PHONOLOGICAL COMPLEXITY IN LINGUISTIC PATTERNING
    Aug 21, 2011 · The Syllable Index in Figure 2 is the sum of values ranging from 0-3 for the complexity of onsets, 1-2 for the complexity of nuclei, and 0-3 for ...
  27. [27]
    Syllables and typology in OT - Brian W. Smith
    Cross-linguistic tendencies regarding what can be an onset, coda, and nucleus; Contrastive syllabification? Why do phonologists think syllable structure exists?Cross-Linguistic Tendencies... · Why Do Phonologists Think... · Typology Of Onsets And CodasMissing: components | Show results with:components<|separator|>
  28. [28]
    [PDF] Positional Faithfulness and Voicing Assimilation in Optimality Theory
    Other logically possible patterns do not occur: for example, languages which preserve voicing in coda but not in onset, or languages which devoice word-final ...
  29. [29]
    Phonotactics – ENGL6360 Descriptive Linguistics for Teachers
    For example, English has phonotactic restrictions that ban [tl] and [dl] in onsets, but this is not a universal restriction. Plenty of languages allow onsets ...
  30. [30]
    [PDF] Licensing Strength and Syllable Structure in Government Phonology.
    The least marked syllable structure is that with a simplex onset and a short nucleus (CV). The second step on the scale of markedness is represented by a ...
  31. [31]
    (PDF) Licensing constraints in phonology - ResearchGate
    Aug 9, 2025 · In this article I firstly propose a general framework for formulating interconstituent relations that either 'license' or 'govern' the ...
  32. [32]
    (PDF) Epenthesis.and.deletion in loan phonology - ResearchGate
    Mar 5, 2019 · The two primary processes serving this end are clearly epenthesis and. deletion. And while it is true that examples of epenthesis and deletion ...Missing: mechanisms | Show results with:mechanisms
  33. [33]
    [PDF] EPENTHESIS, DELETION AND THE EMERGENCE OF THE ...
    Epenthesis and deletion thus are the result of the substrate ranking imposing a relatively unmarked syllable structure. The non-uniformity of these adjustment ...
  34. [34]
    Learning metathesis: Evidence for syllable structure constraints - PMC
    Phonological metathesis occurs when two adjacent sounds switch places (e.g., pronouncing 'cast' as 'cats', in which the /t/2 and the /s/ switch). Because many ...
  35. [35]
    [PDF] Syllables and Reduplication in Bella Coola (Nuxalk)* - UBCWPL
    This analysis of Bella Coola prosodic structure is largely focused on arguing for the existence of syllables with fricatives in nuclei in OBSTRUENT-ONLY words, ...Missing: phonotactics | Show results with:phonotactics
  36. [36]
    Polish Syllable Structure - Bethin - Major Reference Works
    Apr 28, 2011 · The sequencing of segments within onsets and codas is also said to be governed by sonority, in that sonority rises toward the syllable peak and ...Missing: typological Nuxalk<|control11|><|separator|>
  37. [37]
    dependent phonotactic patterns in speech and digital sequence ...
    Mar 19, 2018 · In. English, for example, the consonant /h/ can only be a syllable onset and not a coda, while /ɳ/ can only be a coda. English speakers ...
  38. [38]
    [PDF] 2 Mora and Syllable - HARUO KUBOZONO - Blackwell Publishing
    The mora in Japanese can be defined in four ways according to its roles: (i) as a basic unit of temporal regulation, (ii) as a unit by which phonological dis- ...
  39. [39]
    Representing the moraic nasal in Japanese: evidence from Tōkyō ...
    May 10, 2021 · In this paper, I revisit the phonological representation of the Japanese moraic nasal N based on data from the Tōkyo, Ōsaka and Kagoshima varieties.Missing: phonotactics | Show results with:phonotactics
  40. [40]
    [PDF] 1 The phonetics of sokuon, or geminate obstruents - Keio
    Mar 17, 2015 · With this said, the primary acoustic correlate of Japanese geminates is greater duration compared to singletons: geminate consonants are ...
  41. [41]
    [PDF] On a Certain Type of Hiatus Resolution in Japanese - Keio
    Abstract: This paper discusses one type of hiatus resolution in Japanese, which spreads the first vowel to the second syllable node in hiatus.
  42. [42]
    The Phonology of Japanese - J-Stage
    Throughout the book, Labrune develops a syllable-free model of Japa- nese phonology and claims that Tokyo Japanese is a mora-counting mora language where the ...
  43. [43]
    [PDF] Shoji, S. (2014). Japanese epenthetic vowels
    Japanese Vowel Epenthesis​​ In most Japanese loanwords, the epenthetic vowel is [ɯ], which works as the context-free default epenthetic vowel.
  44. [44]
    [PDF] Vowel Selection in Japanese Loanwords from English
    Vowel length in Japanese loanwords is determined by the phonetic length of corresponding vowels in the source language. Japanese vowel phonemes are selected ...Missing: phonotactics | Show results with:phonotactics
  45. [45]
  46. [46]
  47. [47]
    [PDF] ANCIENT GREEK PITCH ACCENT: - eScholarship
    More precisely, words ending in a light (i.e. CV, CVC) syllable have a H tone (known as acute 'V́') either on the antepenultimate syllable (1a–d) or on the ...
  48. [48]
    [PDF] Problem Set #1: Ancient Greek
    Goal: Apply Clements & Keyser's syllable theory to the Ancient Greek data given below in ... How do we know that the (a) clusters are syllable-initial clusters ...
  49. [49]
    [PDF] Intermediate Phonology Part 3: Syllables - Caroline Féry
    Jespersen, Otto. 1904. Lehrbuch der Phonetik. Leipzig und Berlin: B.G. Teubner. Page 17. 17. Syllable construction in English. 3. Coda formation. Join any ...
  50. [50]
    Greek and Latin Quantitative Metre - Oxford Academic
    This chapter discusses the quantitative metre which was formed in the Greek language between 1000 and 750 bc. The chapter aims to measure short syllables. It ...
  51. [51]
    [PDF] Introduction to Greek Meter - Aoidoi.org
    Long vowels and diphthongs are long, but... • In Epic and elegiacs, a long vowel or diphthong at the end of a word may become shortened if the following word ...
  52. [52]
    [PDF] Greek Accent: A Case for Preserving Structure - MIT
    This study shows that. Ancient Greek had a mixed accentual system: the location of the accented syllable is determined by a metrical procedure, which counts ...
  53. [53]
    [PDF] Accent, Syllable Structure, and Morphology in Ancient Greek
    A syllable containing a long vowel or diphthong can bear one of two accents, or “intonations”, either acute (phonologically V ´V) or circumflex ( ´VV). Their ...Missing: phonotactics | Show results with:phonotactics
  54. [54]
    l before a consonant becomes r (delateralization) | Greek Ancient ...
    Between Ancient Greek and Modern Greek, a sound change affected the consonant λ l when it occurred before another consonant; in particular, the λ l became ρ r.Missing: resonants | Show results with:resonants
  55. [55]
  56. [56]
    [PDF] Ancient Greek
    Geminates. • All consonants except for /dz/ and /h/ may occur as geminates. • Cf. ἵππος 'horse', ἐννέα 'nine'. • Only /ll/, /mm/, /rr/ and /tt/ are frequent in ...
  57. [57]
    [PDF] THE PHONOTACTICS AND PHONOLOGY OF OBSTRUENT ...
    It holds that the preferred contact between two adjacent syllables is when the segment ending the first syllable is higher in sonority than the segment ...
  58. [58]
    (PDF) Feature geometry and cooccurrence restrictions - ResearchGate
    Aug 6, 2025 · Sagey (1986) has argued that there are distinct Articulator nodes, Labial, Coronal and Dorsal, each of which dominates certain binary features, ...
  59. [59]
    [PDF] The representation of features and relations in non-linear phonology
    I demonstrate that the association lines among features and x-slots that connect all the tiers in the hierarchy must represent the relation of overlap in time, ...
  60. [60]
    [PDF] Feature Geometry and Feature Spreading - MIT
    in the phonology. Sagey (1986) examined feature sets that function in the phonology of different lan- guages and showed that these functionally defined sets ...
  61. [61]
    [PDF] Underspecification in phonetics* - University of California, Los Angeles
    articulators, not phonological features, this is much like saying that vowels and consonants are specified for different sets of features. There is a set of.
  62. [62]
    [PDF] Distinctive features - Faculty of Linguistics, Philology and Phonetics
    The place features were therefore identical for vowels and consonants. The features and feature organization we will defend are based on universal principles of.
  63. [63]
    [PDF] Autosegmental Phonology PhD dissertation MIT - Full-Time Faculty
    A modification of the theory of generative phonology is suggested in this thesis in the introduction of parallel tiers of segments (or "autosegments"). This is ...
  64. [64]
    [PDF] Optimality Theory in Phonology - Rutgers Center for Cognitive Science
    It is a central hypothesis of Optimality Theory that a grammar ranks all the constraints in Con and that any ranking of Con is a grammar. (In this, OT picks up ...
  65. [65]
    [PDF] OPTIMALITY THEORY
    This is an introduction to Optimality Theory, the central idea of which is that surface forms of language reflect resolutions of conflicts be-.
  66. [66]
    [PDF] Faithfulness and Reduplicative Identity - Rutgers Optimality Archive
    correspondence theory of McCarthy & Prince (1993a): that template satisfaction is a special case of autosegmental association, involving associating ...
  67. [67]
    [PDF] The Learnability of Optimality Theory:
    Tesar & Smolensky. Learnability of Optimality Theory. 3. General Constraint Demotion. In this section, we pull out the core part of the RCD algorithm, which we ...
  68. [68]
    [PDF] OPTIMALITY THEORY IN PHONOLOGY - Maria Gouskova
    OT is a theory of constraint interaction in grammar, which aims to solve a couple of related problems that have confronted generative phonological theory ...
  69. [69]
  70. [70]
    Phonological universals in early childhood: Evidence from sonority ...
    Across languages, onsets with large sonority distances are preferred to those with smaller distances (e.g., bw>bd>lb; Greenberg, 1978).
  71. [71]
    Phonotactic constraints on infant word learning - PMC - NIH
    These studies show that infants develop early sensitivity to native language phonotactic patterns: the constraints on and likelihood of occurrence of phonemes ...Missing: bn avoidance
  72. [72]
    Feature-based generalisation as a source of gradient acceptability
    Aug 6, 2025 · This paper tests the role of phonological features in helping speakers evaluate which novel combinations receive greater lexical support. A ...
  73. [73]
    The influence of sonority on children's cluster reductions - PubMed
    This hypothesis predicted that children would reduce clusters to whichever consonant would result in the least complex syllable as defined by sonority.Missing: phonotactics | Show results with:phonotactics
  74. [74]
    Finding patterns and learning words: Infant phonotactic knowledge ...
    We found that infants with smaller vocabularies showed stronger phonotactic learning than infants with larger vocabularies even after accounting for general ...Missing: bn avoidance
  75. [75]
    [PDF] Phonotactic and Prosodic Effects on Word Segmentation in Infants
    9-month-olds use phonotactic sequences and prosodic cues, especially prosodic cues, for word segmentation. Stress and pauses can also influence this.
  76. [76]
    Segmentation of Rhythmic Units in Word Speech by Japanese ...
    Apr 8, 2021 · The present study examined whether Japanese infants and toddlers could segment word speech sounds comprising basic morae (ie, rhythm units similar to syllables)
  77. [77]
    Neurophysiological responses to phonological and temporal ...
    This is the first study to simultaneously assess neural sensitivity to phonological and temporal regularities in Dutch adult dyslexic readers.
  78. [78]
    [PDF] The Acquisition of Syllable Types
    Nov 16, 2009 · Constraints are arranged from left (highest ranked constraint) to right (lowest ranked constraint), and potential linguistic analyses (i.e., ...Missing: 1990s | Show results with:1990s
  79. [79]
    [PDF] Phonological Acquisition as Weighted Constraint Interaction
    Learners begin with a ranking of Markedness constraints above Faithfulness constraints, and rerank them on the basis of evidence from the target language. A ...
  80. [80]
    [PDF] CHAPTER 1 CLUSTER PHONOTACTICS AND THE SONORITY ...
    1.2.3 The Sonority Sequencing Principle​​ One of the most general cross-linguistic patterns of syllable phonotactics is the generalization that in any syllable ...
  81. [81]
    [PDF] Discovering Phonotactic Finite-State Automata by Genetic Search
    Abstract. This paper presents a genetic algorithm based approach to the automatic discovery of finite- state automata (FSAs) from positive data.
  82. [82]
    Stemmer and Phonotactic Rules to Improve n-Gram Tagger-Based ...
    Jan 8, 2021 · It is one of the essential components in speech synthesis, speech recognition, and natural language processing. The deep learning (DL)-based ...
  83. [83]
  84. [84]
    [PDF] The Possible-Word Constraint in the Segmentation of Continuous ...
    We propose that word recognition in continuous speech is subject to constraints on what may constitute a viable word of the language.
  85. [85]
    Phonotactics in Language: Rules, Roles, and Applications - Studocu
    Rating 5.0 (1) Phonotactic rules come into play when: Identifying phonological patterns, developing treatment , monitoring progress, and research. Research on phonological ...
  86. [86]
    Recurrent neural networks as neuro-computational models of ... - NIH
    Jul 28, 2025 · RNNs trained on phoneme symbol sequence inputs can simulate human sensitivity to phonotactic regularities in speech [14,15]. RNNs developed for ...
  87. [87]
    [PDF] Phonotactic Complexity across Dialects - ACL Anthology
    May 20, 2024 · Abstract. Received wisdom in linguistic typology holds that if the structure of a language becomes more complex in.<|separator|>
  88. [88]
    Cumulative markedness effects and (non-)linearity in phonotactics
    Apr 12, 2022 · This paper addresses the relationship between the strength of phonotactic constraints and the way in which multiple coincident violations of such constraints ...