
Lexical decision task

The lexical decision task (LDT) is an experimental paradigm in psycholinguistics and cognitive psychology in which participants are presented with visual or auditory stimuli consisting of letter strings (or sound sequences) and must rapidly decide whether each is a real word or a nonword, with response times and accuracy serving as primary measures of lexical access and recognition processes. Introduced in the early 1970s, the task was pioneered by David E. Meyer and Roger W. Schvaneveldt in their 1971 study, which demonstrated semantic priming effects by showing that decisions for a target word (e.g., "nurse") were faster when preceded by a semantically related prime (e.g., "doctor") compared to an unrelated one (e.g., "bread"), suggesting dependencies between retrieval operations from memory. This facilitation arises because related pairs activate shared semantic networks in the mental lexicon, a cognitive dictionary-like structure organizing word knowledge. The LDT has become a standard tool for investigating models of word recognition, lexical access, and reading, revealing how factors such as word frequency, orthographic neighborhood density, morphological complexity, and phonological overlap influence processing speed and error rates. For instance, high-frequency words elicit faster responses than low-frequency ones, supporting interactive activation models in which bottom-up perceptual input interacts with top-down lexical knowledge. In neuroimaging and electrophysiological studies, the task has mapped neural correlates of lexical processing, including activations in left occipitotemporal regions for orthographic analysis and temporal regions for semantic integration. Variants include the paired-associate LDT (as in the original study) for priming effects and auditory versions for spoken word recognition, extending its utility to bilingualism, aging, and language disorders such as aphasia.
Despite limitations—such as potential strategic biases in nonword rejection or insensitivity to post-lexical comprehension—the LDT remains influential for its simplicity, reliability, and ability to isolate early stages of language processing.

Overview

Definition and Purpose

The lexical decision task (LDT) is a widely used experimental paradigm in psycholinguistics and cognitive psychology in which participants view strings of letters and classify each as either a real word (e.g., "cat") or a non-word (e.g., "tac") as rapidly and accurately as possible. Introduced by Meyer and Schvaneveldt in their seminal 1971 study on semantic facilitation in word recognition, the task typically involves presenting stimuli briefly on a screen, followed by a manual response via keypress. Non-words are constructed to be pronounceable and orthographically legal but absent from the language's lexicon, ensuring the decision relies on accessing stored lexical representations rather than simple orthographic familiarity. The primary purpose of the LDT is to measure the efficiency of lexical access—the process by which perceptual input activates corresponding entries in the mental lexicon—revealing underlying automatic mechanisms in reading and language comprehension. By emphasizing speeded responses, the task captures both reaction time (RT) as the principal dependent variable, which indexes the duration of lexical processing, and accuracy rates to assess decision reliability. This binary decision (word/non-word) isolates core recognition processes, minimizing confounds from higher-level semantic or syntactic integration, and has proven instrumental in probing how factors like word frequency, neighborhood density, and priming influence retrieval from memory. Theoretically, the LDT is rooted in computational models of lexical access that conceptualize RT as a reflection of activation dynamics within the mental lexicon. In the cohort model, proposed by Marslen-Wilson and Welsh, spoken or visual input activates a set of phonologically or orthographically matching candidates (a "cohort"), with decision time corresponding to the resolution of competition among them until the target is uniquely identified.
Complementarily, the interactive activation model by Rumelhart and McClelland describes lexical processing as a bidirectional flow of activation across feature, letter, and word levels, where LDT performance arises from excitatory and inhibitory interactions that propagate until a word unit reaches sufficient activation for a "yes" response. These frameworks underscore how the task elucidates the time course and selectivity of lexical activation, providing empirical benchmarks for model validation.
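The threshold dynamics these models describe can be illustrated with a toy activation loop. This is a minimal sketch with made-up parameters, not the published interactive activation simulation; it only shows why stronger input and weaker competition translate into fewer cycles, i.e. a faster "yes" response.

```python
# Toy sketch of threshold-based lexical activation (illustrative
# parameters only, not the Rumelhart & McClelland implementation).
# A word unit receives net input (excitation from matching letters
# minus inhibition from competitors) and decays toward rest; the
# number of update cycles to threshold stands in for decision time.

def cycles_to_threshold(excitation, inhibition, threshold=0.5,
                        decay=0.1, max_cycles=1000):
    """Return update cycles until the word unit reaches threshold."""
    activation = 0.0
    for cycle in range(1, max_cycles + 1):
        net = excitation - inhibition
        # Net input is scaled by distance to the activation ceiling,
        # an interactive-activation-style update rule, minus decay.
        activation += net * (1.0 - activation) - decay * activation
        if activation >= threshold:
            return cycle
    return max_cycles

# A well-supported word (strong input, weak competition) crosses
# threshold in fewer cycles, mimicking a faster "yes" response.
fast = cycles_to_threshold(excitation=0.5, inhibition=0.05)
slow = cycles_to_threshold(excitation=0.3, inhibition=0.15)
```

With these toy parameters the strongly supported unit reaches threshold in a couple of cycles while the weakly supported one needs several, qualitatively matching the RT differences the task measures.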

Historical Development

The lexical decision task was introduced in 1971 by psychologists David E. Meyer and Roger W. Schvaneveldt as a method to investigate semantic priming effects during word recognition. In their seminal experiment, participants responded faster to a target word when it was preceded by a semantically related prime (e.g., "nurse" after "doctor") compared to an unrelated one, suggesting that lexical retrieval involves spreading activation across related memory representations. This paradigm shifted focus from mere identification accuracy to response latencies, providing a sensitive measure of cognitive processing speed in language comprehension. The task quickly gained prominence in the 1970s within cognitive psychology for studying visual word recognition, building on earlier tachistoscopic methods that presented words briefly to assess perceptual thresholds. Unlike tachistoscopic recognition, which emphasized error rates under time constraints, the lexical decision task allowed for unlimited exposure durations and emphasized decision times, enabling deeper exploration of lexical access mechanisms. Early adoption highlighted its utility in revealing how word frequency influences recognition, with high-frequency words eliciting quicker decisions than low-frequency ones. A key milestone came in 1973 when Kenneth Forster and Stephen Chambers expanded on frequency effects by comparing lexical decisions to naming tasks, demonstrating that both reflect a serial search through a frequency-ordered mental lexicon. Forster's subsequent work, including his bin-search model of lexical access, further formalized these processes by positing discrete access units checked in order of frequency. In the 1980s, the task integrated with emerging connectionist frameworks, particularly through James McClelland's parallel distributed processing approach, which simulated lexical decision performance via interactive activation networks.
This era marked a transition from localist to distributed models, with Seidenberg and McClelland's 1989 implementation replicating frequency and priming effects without explicit lexical entries.

Procedure

Standard Implementation

The standard implementation of the lexical decision task is conducted in a controlled environment using computer-based stimulus presentation. Participants are seated in front of a computer monitor and instructed to categorize visually presented letter strings as either real words or non-words (pseudowords) by pressing designated keys on a keyboard, such as the "/" key for words and the "z" key for non-words, as quickly and accurately as possible while minimizing errors. This setup allows precise measurement of response times (RTs), which reflect the speed of lexical access. Each trial follows a structured sequence to standardize attention and timing. A fixation cross or asterisk appears centrally on the screen for 250–500 ms to direct gaze and prepare the participant, followed immediately by the target stimulus—a single uppercase letter string (typically 3–8 letters long) presented in a clear font such as Courier. The stimulus remains visible until the participant responds or a timeout occurs, usually after 2,000–4,000 ms, at which point feedback may indicate a missed response. A brief inter-trial interval (e.g., a 1,000 ms blank screen) then precedes the next trial. A typical experimental session includes 100–200 trials to ensure sufficient data for analysis while avoiding fatigue, divided into blocks separated by short breaks. The stimuli are balanced with a 50/50 ratio of words to non-words to equate overall task difficulty and prevent response bias. An initial set of 16–40 practice trials, also balanced, familiarizes participants with the procedure without contributing to main analyses. To maintain experimental validity, the order of stimuli is fully randomized across trials for each participant, with conditions counterbalanced across multiple lists to minimize sequential or item-order effects. Filler trials, often consisting of high-frequency words, are incorporated to balance the distribution of low-frequency targets and obscure experimental manipulations, thereby reducing learning or carryover effects.
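The list-construction constraints above (50/50 balance, full randomization, separate practice trials) can be sketched in a few lines. Item counts, labels, and the seed are arbitrary placeholders, not a normed stimulus set.

```python
import random

def build_trial_list(words, nonwords, n_practice=16, seed=0):
    """Assemble a randomized lexical decision trial list with a 50/50
    word/nonword ratio; the first n_practice shuffled items serve as
    practice trials and are excluded from the main analyses."""
    if len(words) != len(nonwords):
        raise ValueError("use equal numbers of words and nonwords")
    trials = ([(w, "word") for w in words] +
              [(nw, "nonword") for nw in nonwords])
    random.Random(seed).shuffle(trials)  # fixed seed per participant list
    return trials[:n_practice], trials[n_practice:]

# Placeholder items standing in for a frequency-matched stimulus set.
practice, main = build_trial_list(
    words=[f"word{i}" for i in range(60)],
    nonwords=[f"nonw{i}" for i in range(60)],
    n_practice=16)
```

In a real experiment each participant would receive a differently counterbalanced list; the fixed seed here just makes the sketch reproducible.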

Stimuli Presentation and Measurement

In the lexical decision task, stimuli consist primarily of real words and pseudowords, with real words selected to vary in frequency to examine effects on recognition speed. High-frequency words, such as common terms appearing frequently in language corpora, are processed more rapidly than low-frequency words, reflecting differences in lexical access efficiency. Pseudowords are constructed by altering one or more letters in real words to create pronounceable but nonexistent strings, such as "blint," while preserving orthographic and phonological neighborhood density to control for similarity to actual vocabulary. Occasionally, illegal strings—unpronounceable sequences like "xlg" that violate English orthographic rules—are included to distinguish between types of non-lexical items and assess baseline rejection processes. Stimuli are typically presented visually on a computer screen in uppercase or lowercase letters, centered at fixation to ensure foveal processing; the visual angle subtended by the stimulus is approximately 2 degrees. Presentation duration is often unlimited, with the stimulus remaining on screen until a response is made, though a maximum of 3,000–4,000 milliseconds may be imposed to prevent excessively long trials; this setup emphasizes rapid decision-making while minimizing display-time confounds. Response measurement focuses on both speed and accuracy, with reaction time (RT) recorded as the interval from stimulus onset to the participant's keypress indicating "word" or "nonword." Accuracy is determined by correct classifications, with errors noted as false positives (pseudowords accepted as words) or false negatives (words rejected). Basic analysis involves computing RTs for correct trials only, alongside error rates expressed as percentages, to capture performance without contamination from inaccuracies.
Outliers are routinely excluded, such as RTs below 200 milliseconds (anticipatory responses) or above 3,000 milliseconds, together with RTs exceeding 2–3 standard deviations from an individual's mean, ensuring robust estimates of typical response times; this trimming removes approximately 5–15% of trials depending on the dataset.
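The trimming scheme just described is straightforward to apply per participant. A minimal sketch using the absolute cutoffs above and a 2.5 SD criterion (the exact values vary by lab):

```python
from statistics import mean, stdev

def trim_rts(rts, low=200, high=3000, sd_cutoff=2.5):
    """Drop anticipatory (< low ms) and timeout (> high ms) responses,
    then remove RTs beyond sd_cutoff standard deviations of the
    participant's remaining mean."""
    in_range = [rt for rt in rts if low <= rt <= high]
    if len(in_range) < 2:
        return in_range
    m, s = mean(in_range), stdev(in_range)
    return [rt for rt in in_range if abs(rt - m) <= sd_cutoff * s]

# 150 ms (anticipation) and 5000 ms (timeout) are excluded here.
clean = trim_rts([150, 550, 600, 620, 5000, 580])
```

Trimming on correct trials only, and reporting the percentage of data removed, keeps the procedure transparent.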

Variations

Priming Paradigms

In the lexical decision task (LDT), priming paradigms involve presenting a prime stimulus—a word or nonword—prior to the target stimulus to investigate how prior exposure influences lexical recognition processes. This modification allows researchers to measure facilitative or inhibitory effects on response times (RTs), typically revealing faster decisions (e.g., a 50–100 ms reduction) for related word pairs compared to unrelated ones. The prime activates associated lexical representations, facilitating access to the target if it is semantically or associatively linked, as demonstrated in foundational experiments where pairs like "doctor–nurse" yielded quicker RTs than "doctor–butter." Semantic priming, a core type, occurs when the prime and target share meaning, such as category coordinates (e.g., "lion–tiger"), leading to RT facilitation through spreading activation in semantic networks. Associative priming extends this to word pairs connected by frequent co-occurrence (e.g., "bread–butter"), often producing similar effects but potentially modulated by expectancy. Repetition priming, another variant, involves presenting the identical word as both prime and target, resulting in substantial RT reductions (up to 100–200 ms) due to enhanced perceptual or lexical familiarity, with stronger effects for low-frequency words. Negative priming, conversely, arises when the prime is ignored or suppressed (e.g., in a distractor task), slowing subsequent RTs to that item as a target by 20–50 ms, reflecting inhibitory mechanisms in selective attention. Implementation typically uses a short interstimulus interval (ISI) of 50–250 ms between prime and target to capture automatic processes before strategic influences dominate. Masked priming, where the prime is briefly presented (e.g., for 50 ms) followed by a pattern mask such as hash marks (#####), minimizes conscious awareness and isolates unconscious semantic effects, as shown in studies where masked related primes still facilitated RTs by 30–60 ms.
Forward and backward masking further controls prime visibility, enabling examination of subliminal influences on lexical access. These paradigms collectively reveal dependencies between retrieval operations, supporting models of interactive activation in semantic memory.
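Priming effects of this kind reduce to a difference of condition means over correct trials. A minimal sketch with invented RTs (the condition labels and values are placeholders, not data from the cited studies):

```python
def condition_means(trials):
    """Mean RT (ms) per prime condition, correct trials only.
    `trials` holds (condition, rt_ms, correct) tuples."""
    sums, counts = {}, {}
    for cond, rt, correct in trials:
        if correct:
            sums[cond] = sums.get(cond, 0.0) + rt
            counts[cond] = counts.get(cond, 0) + 1
    return {c: sums[c] / counts[c] for c in sums}

def priming_effect_ms(means):
    """Facilitation in ms: unrelated minus related condition mean."""
    return means["unrelated"] - means["related"]

means = condition_means([
    ("related", 540, True), ("related", 560, True),
    ("unrelated", 610, True), ("unrelated", 590, True),
    ("related", 900, False),   # error trial, excluded
])
```

A positive `priming_effect_ms` value corresponds to facilitation; inhibitory paradigms such as negative priming yield negative values under the same arithmetic.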

Cross-Modal Adaptations

Cross-modal adaptations of the lexical decision task integrate auditory and visual stimuli to probe the integration of phonological and semantic information across sensory modalities, revealing how lexical access operates independently of input mode. In these variants, an auditory prime—typically a spoken word presented via headphones—is followed by a visual target, such as a letter string, to which participants respond by indicating whether it is a real word or nonword. This configuration, pioneered in studies of sentence comprehension, allows examination of immediate lexical activation effects without the confounds of purely visual processing. The procedure generally involves auditory primes with durations of 300–600 ms, ensuring natural speech perception, followed immediately by the onset of the visual target at prime offset to capture transient activation states. Visual targets remain on screen until response, with participants emphasizing speed and accuracy in their lexical decisions. This timing minimizes post-lexical strategic influences and facilitates measurement of cross-modal priming, where related auditory primes speed responses to semantically or phonologically associated visual targets. Such adaptations have been instrumental in demonstrating exhaustive access to multiple word meanings during processing. In applications to spoken word recognition, cross-modal LDT variants assess effects like facilitation from phonetic overlap, as when an auditory prime accelerates decisions to a visually presented target with which it shares phonological features. These measures highlight competitive dynamics in lexical cohorts and inform models of auditory word form recognition. Additionally, the paradigm's advantages include its ability to uncover modality-independent mechanisms of lexical access, where semantic priming persists across sensory shifts, supporting theories of amodal lexical representations.
It is particularly prevalent in event-related potential (ERP) studies, where it elicits the N400 component—a negativity peaking around 400 ms post-target—as an index of semantic integration difficulty, with reduced amplitudes for primed targets indicating facilitated processing.

Applications

Lexical Access Research

The lexical decision task has been instrumental in probing the mechanisms of lexical access, particularly the core research question of whether access to the mental lexicon occurs serially—scanning entries one by one—or in parallel, activating multiple candidates simultaneously. Early models like Forster's serial search posited a sequential verification process, but evidence from lexical decision response times (RTs) supports both serial and parallel aspects of access, with potential lexical entries evaluated on the basis of incoming sensory input. A serial bottleneck may constrain overt responding, as rapid presentation of multiple words reveals interference when attention shifts sequentially, supporting hybrid views of parallel pre-lexical activation followed by serial selection. A central finding driving these investigations is the word frequency effect, where high-frequency words elicit faster RTs than low-frequency ones, typically by 50–100 ms, reflecting easier activation of more familiar entries in the lexicon. This effect underscores the task's sensitivity to access dynamics, as it emerges even when controlling for orthographic or phonological confounds. The frequency effect aligns with the logogen model, proposed by Morton, which conceptualizes lexical entries as detectors (logogens) that accumulate evidence from perceptual input until a threshold is reached; higher-frequency words have lower thresholds, enabling quicker activation without serial scanning. Neighborhood density further illuminates competitive aspects of access: words in dense orthographic neighborhoods, surrounded by many similar-looking forms, can show slowed RTs due to increased competition among activated candidates, often by 20–50 ms compared to sparse neighborhoods. This inhibitory effect aligns with interactive models of lexical access, where partial matches spread activation to competitors, delaying target selection until inhibition resolves the rivalry.
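Orthographic neighborhood density is commonly operationalized as Coltheart's N, the count of same-length words differing from the target by exactly one letter. A sketch against a tiny made-up lexicon:

```python
def coltheart_n(word, lexicon):
    """Coltheart's N: number of lexicon entries of the same length
    that differ from `word` by exactly one letter substitution."""
    def one_letter_apart(a, b):
        return (len(a) == len(b) and
                sum(x != y for x, y in zip(a, b)) == 1)
    return sum(one_letter_apart(word, w) for w in lexicon)

# "cat" has neighbors "cot", "can", "cap" in this toy lexicon;
# "dog" (2 substitutions) and "cart" (wrong length) do not count.
n = coltheart_n("cat", {"cot", "can", "cap", "dog", "cart"})
```

In practice the lexicon would be a full word list with frequency norms, and N would be computed for every candidate stimulus to match dense and sparse items.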
Empirical studies using the task have demonstrated orthographic priming effects, where masked primes sharing letters with the target facilitate RTs by 20–50 ms, indicating that early sublexical overlap influences lexical entry activation independent of meaning. To isolate these effects, experimental designs in lexical access research routinely employ frequency-matched controls, pairing high- and low-frequency words with nonwords of equivalent length and letter composition to minimize extraneous variables. Additionally, megastudies like SUBTLEX provide large-scale norms derived from subtitle corpora, offering precise frequency estimates that enhance stimulus selection and replicability across experiments, outperforming earlier norms in predicting RT variance by up to 10%. These methods ensure robust testing of access models while accounting for variability in word exposure.

Semantic and Syntactic Processing

The lexical decision task (LDT) has been instrumental in examining semantic processing by measuring spreading activation within semantic networks, where activation of a prime word facilitates recognition of related targets through associative links. In this paradigm, direct semantic priming occurs when a prime like "doctor" speeds responses to a target like "nurse," reflecting automatic activation spread at short stimulus-onset asynchronies (SOAs) of 200–300 ms. Indirect or mediated priming extends this model, as in studies where a prime such as "lion" facilitates a target like "stripes" via the unpresented mediator "tiger," demonstrating deeper activation spread under controlled list compositions that minimize expectancy biases. However, mediated priming effects are often attenuated or absent in the standard LDT compared to naming tasks, suggesting task-specific constraints on activation depth, with facilitation typically ranging from 20–40 ms when observed. Contextual effects in the LDT further probe semantic processing, particularly in ambiguity resolution, where sentence contexts bias selection among multiple word meanings. For instance, preceding an ambiguous word like "bank" with a financially biasing context reduces response times to financially related targets compared to neutral or unrelated-biasing contexts, indicating rapid integration of contextual constraints to suppress irrelevant meanings. This aligns with interactive models in which semantic context modulates lexical access, with facilitation effects of approximately 30–50 ms for congruent contexts and minimal inhibition for incongruent ones. Key 1980s studies on category-specific effects, such as coordinate priming within living versus nonliving categories, revealed differential activation patterns, supporting domain-specific semantic organization in the mental lexicon. Syntactic influences in the LDT highlight how grammatical structure affects word recognition beyond semantics, with embedded variants integrating targets into sentence frames to isolate syntactic priming.
Appropriate syntactic contexts, such as a grammatically congruent sentence frame preceding a target, yield faster lexical decisions than mismatched ones, with facilitation around 25 ms, demonstrating early syntactic modulation of lexical access. Grammatical class effects further illustrate this, as nouns often elicit quicker responses than verbs in neutral contexts due to higher average frequency and distributional properties, though this asymmetry diminishes in constraining syntactic frames. In bilingual LDT adaptations, language-switching costs—slower responses when alternating languages—reveal syntactic integration challenges, with switch costs of 50–100 ms attributed to interference between grammatical systems. Methodologically, using congruent versus incongruent syntactic contexts isolates these effects, typically producing 20–50 ms facilitation for matches while controlling for semantic overlap.

Key Findings

Lateralization in Processing

Research using the lexical decision task (LDT) has provided substantial evidence for hemispheric asymmetries in language processing, particularly in how the brain handles abstract versus concrete words. The left hemisphere (LH) demonstrates superiority in processing abstract words, which lack direct sensory referents and rely more on verbal associations, while the right hemisphere (RH) shows greater involvement for concrete, imagery-rich words that evoke sensory experiences. This pattern aligns with dual-coding theory, which posits that concrete words benefit from both verbal and imagistic representations, allowing broader RH activation, whereas abstract words are predominantly verbally coded in the LH. Concrete words are generally processed faster than abstract words in LDTs, a pattern known as the concreteness effect. To investigate these asymmetries, researchers employ divided visual field (DVF) presentations in the LDT, where stimuli are briefly flashed 2–6 degrees to the left (LVF, projecting to the RH) or right (RVF, projecting to the LH) of a central fixation point. This method minimizes interhemispheric transfer, isolating hemispheric contributions. Studies consistently report an overall RVF/LH advantage for lexical decisions, reflecting the LH's specialization for lexical access. For abstract words, this RVF/LH advantage is pronounced, whereas it diminishes for concrete words, indicating RH facilitation via visual imagery pathways. Seminal work in the 1980s by Eran Zaidel using split-brain patients highlighted these differences, revealing that the isolated RH could perform lexical decisions but exhibited strengths for concrete words, while the LH excelled with abstract ones, supporting independent lexical stores in each hemisphere.
More recent integrations with neuroimaging, such as fMRI, corroborate this by showing greater LH activation, particularly in the inferior frontal gyrus, during semantic integration for abstract words in LDTs, with activity in left basal temporal cortex but no specific right-hemisphere involvement for concrete stimuli. These findings bolster asymmetric models of language processing, in which the LH handles propositional, analytic semantics and the RH contributes to holistic, imagery-based processing. Exceptions arise for emotional words, whose processing may differ due to affective content.
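The eccentricities used in divided-visual-field presentation follow from basic trigonometry. A sketch assuming a hypothetical 57 cm viewing distance, at which 1 cm on screen subtends roughly 1 degree:

```python
import math

def visual_angle_deg(size_cm, distance_cm):
    """Visual angle subtended by a stimulus:
    theta = 2 * atan(size / (2 * distance))."""
    return math.degrees(2 * math.atan(size_cm / (2 * distance_cm)))

def eccentricity_deg(offset_cm, distance_cm):
    """Angular distance of a stimulus center from fixation,
    used to keep DVF stimuli within the 2-6 degree window."""
    return math.degrees(math.atan(offset_cm / distance_cm))

# At 57 cm, a word flashed 4 cm left of fixation sits about
# 4 degrees into the left visual field.
ecc = eccentricity_deg(4, 57)
```

Fixing viewing distance with a chinrest keeps these angles constant across participants, which is why DVF studies report eccentricity in degrees rather than centimeters.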

Effects on Response Times

In the lexical decision task (LDT), response times (RTs) are significantly influenced by word frequency, with high-frequency words eliciting faster decisions than low-frequency ones; this is typically modeled as an inverse logarithmic relationship between RT and frequency, reflecting easier access to more common lexical entries. For instance, low-frequency words can take 50–100 ms longer to recognize than high-frequency counterparts, a pattern robustly demonstrated across large-scale norming studies. Similarly, a word length effect emerges, where longer words are processed more slowly, with RTs increasing by approximately 10–20 ms per additional letter, though this is more pronounced for nonwords than words due to serial visual scanning demands. Accuracy in LDT performance exhibits a clear speed-accuracy tradeoff, where faster responses correlate with higher error rates, particularly under time pressure; participants prioritizing speed show reduced precision in distinguishing words from nonwords. Error rates are notably elevated for low-frequency words (typically 5–10%) relative to high-frequency words (1–2%), as rarer items demand greater cognitive effort and are more prone to misclassification. Several factors modulate these RT patterns. Practice across sessions reduces overall RTs, with improvements of around 100 ms observed as participants become more efficient at the task, though this effect diminishes with well-constructed stimuli. Age also plays a role, with older adults (aged 60+) exhibiting RTs 200–300 ms slower than younger adults, attributed to declines in processing speed while accuracy profiles remain similar. Contextual influences further shape RTs, including facilitation from semantic or associative primes, which can shorten decision times by up to 150 ms by pre-activating related lexical representations.
In contrast, high orthographic neighborhood density—where a word has many similar competitors—produces inhibition, especially for nonwords, leading to slower RTs (20-50 ms longer) due to increased lexical competition, while words in dense neighborhoods may show mild facilitation.
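The inverse-logarithmic frequency relationship can be checked with an ordinary least-squares fit of RT on log frequency. The numbers below are fabricated solely to show the expected negative slope:

```python
import math

def frequency_slope(freqs, rts):
    """OLS slope of RT (ms) regressed on log10 word frequency;
    a negative value reproduces the word frequency effect."""
    xs = [math.log10(f) for f in freqs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(rts) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (r - my) for x, r in zip(xs, rts))
    return sxy / sxx

# In this toy data, each tenfold increase in frequency speeds
# responses by 50 ms.
slope = frequency_slope([1, 10, 100, 1000], [700, 650, 600, 550])
```

Real analyses fit the same relationship per item over thousands of words (as in megastudy datasets), usually with length and neighborhood measures as additional predictors.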

Limitations

Methodological Criticisms

One major methodological criticism of the lexical decision task (LDT) concerns stimulus artifacts, particularly in the generation of pseudowords, which can introduce systematic biases if not carefully controlled. Traditional methods often rely on simple transposition errors, such as swapping adjacent letters (e.g., "JUGDE" for "JUDGE"), leading to overly wordlike nonwords that inflate response latencies for both words and pseudowords by complicating the discrimination process. This over-reliance on transpositions can bias results toward shallower orthographic processing rather than true lexical access, as participants may detect illegality more readily in less realistic pseudowords. Additionally, the task's use of isolated words lacks ecological validity, as it does not replicate natural reading contexts involving syntactic or semantic integration, potentially limiting generalizability to real-world language processing. Response biases further undermine the LDT's reliability, especially in unbalanced designs where the proportion of words to pseudowords is unequal, encouraging strategies that distort accuracy and response times. For instance, when words outnumber pseudowords, participants may default to "word" responses, particularly for ambiguous low-frequency items, reducing the task's sensitivity to subtle lexical effects. Motor confounds from keypress responses exacerbate this, as hand dominance influences execution speed; right-handed participants typically respond faster with their dominant hand, introducing lateralization artifacts that confound linguistic measures unless counterbalanced. These biases highlight the need for balanced stimulus lists and alternative response modalities, such as vocal responses, to mitigate such issues. Reproducibility in LDT experiments is challenged by small effect sizes, often yielding Cohen's d values below 0.5 for phenomena like semantic priming, necessitating large sample sizes (e.g., over 100 participants) to achieve adequate power.
This is compounded by high inter-individual variability stemming from differences in reading proficiency: less fluent readers exhibit exaggerated length effects even for high-frequency words, while proficient readers rely more on direct lexical routes, leading to inconsistent effect magnitudes across groups. Such variability demands standardized proficiency screening and larger, diverse samples to ensure replicable findings. In cross-modal adaptations of the LDT, technical concerns such as inconsistent audio quality or screen glare can degrade data quality, as suboptimal auditory presentation introduces perceptual noise that affects lexical access timing and accuracy. These setup-dependent artifacts are particularly problematic in non-laboratory environments, underscoring the importance of calibrated equipment to maintain stimulus fidelity.
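The transposition-based pseudoword generation criticized above is easy to reproduce, which is partly why it is over-used. A sketch; candidates must still be checked against a real lexicon, since a transposition can accidentally produce a word:

```python
def transposition_pseudowords(word):
    """All strings obtained by swapping one pair of adjacent letters,
    the 'JUGDE'-style construction discussed above."""
    out = []
    for i in range(len(word) - 1):
        chars = list(word)
        chars[i], chars[i + 1] = chars[i + 1], chars[i]
        candidate = "".join(chars)
        if candidate != word:       # skip double-letter no-ops
            out.append(candidate)
    return out

candidates = transposition_pseudowords("judge")
# Filtering against a word list is still required: transposing
# "form" yields the real word "from".
```

More defensible generators match pseudowords to their base words on length, bigram frequency, and neighborhood density rather than relying on transpositions alone.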

Interpretive Challenges

One major interpretive challenge in the lexical decision task (LDT) arises from potential confounds in attributing response time (RT) effects to pure lexical access processes, as these effects may instead reflect decision biases rooted in signal detection theory (SDT). In SDT frameworks applied to the LDT, participants set a response criterion for distinguishing words from nonwords, and shifts in this criterion—such as a bias toward quicker "yes" responses for familiar stimuli—can inflate frequency effects independent of early perceptual or access stages. For instance, Balota and Chumbley (1984) demonstrated that word frequency primarily influences a post-access decision stage, where high-frequency words lower the decision criterion, leading to faster RTs that mimic enhanced lexical access but actually stem from strategic biases. This confound complicates causal inferences, as RT variations may capture task-specific strategies rather than underlying lexical mechanisms. The LDT also tends to favor interpretations aligned with feedforward models of word recognition, potentially underrepresenting top-down influences such as predictive processing that are prominent in natural reading. Feedforward accounts, such as those emphasizing bottom-up orthographic-to-phonological mapping, align well with the LDT's isolated presentation of stimuli, where RTs reflect sequential processing without contextual support. However, this setup minimizes top-down modulations, such as semantic predictions or feedback from higher-level representations, which interactive activation and competition frameworks highlight as crucial for resolving ambiguity in connected text. Balota et al. (2012) note that LDT performance correlates more strongly with isolated word identification than with context-dependent comprehension processes, suggesting the task overemphasizes unidirectional information flow and may lead to model overreach when extrapolating to dynamic reading scenarios.
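Separating sensitivity from the response criterion is exactly what SDT measures provide. A sketch computing d' and criterion c from a word/nonword confusion table, with a standard log-linear correction for extreme rates (the counts are invented):

```python
from statistics import NormalDist

def sdt_indices(hits, misses, false_alarms, correct_rejections):
    """d' (sensitivity) and criterion c for word/nonword decisions.
    A frequency effect confined to c is a bias shift, distinct from
    a genuine change in sensitivity.  Rates use a log-linear
    correction (+0.5 / +1) to avoid infinite z-scores at 0 or 1."""
    z = NormalDist().inv_cdf
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# Symmetric performance: good sensitivity, near-zero bias.
d, c = sdt_indices(hits=45, misses=5, false_alarms=5, correct_rejections=45)
```

Comparing d' and c across frequency conditions makes the Balota and Chumbley argument testable: a criterion shift without a d' change points to a post-access decision effect rather than faster access.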
Generalizability from LDT results to real-world reading is limited by the task's artificial presentation of isolated words, which neglects contextual integration and introduces cultural biases inherent in word norms. In natural reading, words are embedded in sentences, where top-down prediction facilitates recognition via semantic and syntactic constraints, processes underrepresented in the LDT's decontextualized format; consequently, LDT RTs predict isolated word decoding but fail to capture comprehension dynamics. Moreover, the word norms used to select stimuli (e.g., frequency counts) often derive from corpora sampled from Western, educated populations, embedding cultural biases that skew results for diverse language users—such as underestimating familiarity for non-dominant dialects or idioms. Brysbaert and New (2009) underscore how such norms limit cross-linguistic applicability, as LDT effects may not generalize beyond the sampled cultural contexts. Alternative explanations further challenge standard interpretations, particularly for frequency effects, which may arise from subjective familiarity rather than objective lexical frequency. While frequency is typically seen as accelerating access via repeated exposure strengthening representations, dissociations show that familiarity—perceived ease of processing—can independently drive faster RTs in the LDT, especially when meaningfulness covaries with exposure. Colombo, Pasini, & Balota (2006) found that matching words on familiarity and meaningfulness eliminated frequency effects in some conditions, indicating that RT advantages often reflect holistic familiarity judgments rather than frequency-specific lexical activation per se. This suggests caution in attributing effects solely to frequency, as familiarity-based accounts better explain variability across tasks and populations.

  12. [12]
    [PDF] the english Lexicon Project
    The English Lexicon Project is a multiuniversity effort to provide a standardized behavioral and descriptive data set for 40,481 words and 40,481 nonwords. It ...
  13. [13]
    [PDF] Responding to Nonwords in the Lexical Decision Task
    Researchers have extensively documented how various statistical properties of words (e.g., word frequency) influence lexical processing.Missing: foundation | Show results with:foundation
  14. [14]
    Word contexts enhance the neural representation of individual ...
    Jan 16, 2020 · Participants observed words or nonwords (i.e. orthographically illegal, unpronounceable strings) with a U or N as middle letter, resulting in ...
  15. [15]
    Individual differences in visual lexical decision are highly correlated ...
    Fifteen participants performed a lexical decision task. Each trial began with a fixation cross displayed for 500ms, immediately followed by a stimulus (about 2 ...
  16. [16]
    Visual word recognition: Evidence for a serial bottleneck in lexical ...
    The inter-stimulus interval (ISI) may therefore have been long enough to allow serial switching of attention to detect color in both words within one trial.
  17. [17]
    [PDF] Where Are the Effects of Frequency in Visual Word Recognition ...
    Fourth, Becker (1979) found a highly significant frequency effect (49 ms) in a lexical decision task, albeit reduced in comparison with an ...Missing: seminal | Show results with:seminal
  18. [18]
    Bodies, antibodies, and neighborhood-density effects in masked ...
    As originally defined by Coltheart et al. (1977), a word's neighborhood consists of all the other words that can be formed from this word by changing only one ...
  19. [19]
    Masked Orthographic and Phonological Priming in Visual Word ...
    The lexical decision task, on the other hand, showed priming effects independently of whether prime and targets shared onsets. These results are discussed ...
  20. [20]
    The role of syntactic context in word recognition | Memory & Cognition
    This study examines the role of syntactic information in word recognition. Subjects made a word-nonword decision regarding a target string that was precede.
  21. [21]
    An ALE meta-analytical review of the neural correlates of abstract ...
    Aug 3, 2021 · These results confirm that concrete and abstract words processing involves at least partially segregated brain areas.Clustering Procedure · Results · Discussion
  22. [22]
    The Right Hemisphere's Access to Lexical Meaning: A Function of its ...
    Zaidel E (1978) Lexical organization in the right hemisphere. In: Buser PA ... In: Benson DF, Zaidel E (eds) The dual brain. Guilford, New York. Google ...
  23. [23]
    Processing concrete words: fMRI evidence against a specific right ...
    Here we report new event-related fMRI data on the processing of concrete and abstract words in a lexical decision task. While abstract words activated a ...
  24. [24]
    The effects of word length and emotionality on hemispheric ...
    The effects of emotionality and length on lateralized lexical decision of abstract nouns were investigated in 41 normal and three commissurotomized subjects.
  25. [25]
    Comparing the Frequency Effect Between the Lexical Decision ... - NIH
    Apr 1, 2016 · Using two exemplar experiments, this paper introduces an approach to include both the lexical decision task and the naming task in a study.
  26. [26]
    Modeling the length effect for words in lexical decision
    The word length effect in Lexical Decision (LD) has been studied in many behavioral experiments but no computational models has yet simulated this effect.
  27. [27]
    Speed-Accuracy Trade-Off - an overview | ScienceDirect Topics
    The speed–accuracy trade-off refers to the phenomenon where an increase in response speed leads to a higher likelihood of making errors, particularly under ...
  28. [28]
    Practice Effects in Large-Scale Visual Word Recognition ... - Frontiers
    Our results show that when good nonwords are used, practice effects are minimal in lexical decision experiments and do not invalidate the behavioral data. For ...
  29. [29]
    A Diffusion Model Analysis of the Effects of Aging in the Lexical ...
    The effects of aging on response time (RT) are examined in 2 lexical-decision experiments with young and older subjects (age 60-75).<|separator|>
  30. [30]
    Absence of inhibitory neighborhood effects in lexical decision and ...
    The effect of neighborhood density on visual word recognition was found to be facilitatory for words by inhibitory for nonwords in 3 lexical-decision ...
  31. [31]
  32. [32]
    The Tyrion Lannister Paradox: How Small Effect Sizes can be ...
    Jun 21, 2013 · In a lexical decision task subjects make judgments about words. In a naming task they simply read the words aloud; there is no decision involved ...
  33. [33]