
Pre-attentive processing

Pre-attentive processing is the initial, automatic stage of sensory perception in which basic features of stimuli, such as color, shape, and motion, are rapidly and unconsciously analyzed in parallel across the sensory field without the involvement of focused or selective attention. This stage operates with high capacity and speed, allowing the visual system to segregate potential targets from background elements before conscious attention intervenes. The concept gained prominence through Anne Treisman's Feature Integration Theory (FIT), introduced in 1980, which describes visual perception as occurring in two sequential stages: a pre-attentive phase that registers individual features independently and a subsequent attentive phase that binds those features into unified object representations. In this framework, pre-attentive processing enables efficient detection of simple attributes like hue, orientation, size, curvature, and stereoscopic depth, which are processed effortlessly and in parallel. A hallmark demonstration is the pop-out effect in visual search tasks, where a target differing in a single pre-attentive feature—such as a red item among green distractors—appears to "pop out" immediately, with detection time remaining constant regardless of the number of surrounding elements. While most extensively studied in vision, pre-attentive processing also occurs in other sensory modalities, including audition, where basic acoustic features like pitch, duration, and spatial location are automatically evaluated, as evidenced by event-related potentials such as the mismatch negativity response to deviant sounds. These mechanisms underpin everyday perceptual efficiency, from navigating complex environments to applications in human-computer interaction and data visualization, where leveraging pre-attentive cues enhances rapid information extraction.
Ongoing research continues to refine the boundaries of pre-attentive features, confirming that they are typically elemental and irreducible, guiding attention toward salient stimuli while filtering irrelevant details.

Overview

Definition and Characteristics

Pre-attentive processing is the initial, automatic stage of perceptual processing in which the brain rapidly and subconsciously accumulates basic sensory information from the environment to identify stimuli for potential further conscious analysis. This stage involves the parallel registration of primitive features such as color, orientation, shape, and motion, allowing for the efficient filtering of relevant information without voluntary effort or focal attention. According to Anne Treisman's Feature Integration Theory, this pre-attentive phase operates as a bottom-up mechanism, organizing the visual field based on inherent stimulus properties before attention binds features into coherent objects. Key characteristics of pre-attentive processing include its automaticity, occurring involuntarily in response to incoming sensory input; its parallelism, enabling the simultaneous processing of multiple features across the visual field; and its limited duration, typically completing within 200 milliseconds or less, after which attentional mechanisms may engage if a salient stimulus is detected. These traits facilitate pop-out effects, where a stimulus differing in a basic feature—such as a red item among green distractors—emerges effortlessly through feature contrast, without serial scanning of individual items. For instance, in visual search tasks, observers can detect such a target rapidly, as the pre-attentive system highlights the discrepancy in color without requiring focused scrutiny. In practical applications, pre-attentive processing underpins the effectiveness of salient visuals in advertising and interface design, where contrasting elements like bold colors or unexpected shapes capture involuntary attention to influence perceptions. Additionally, individual differences in pre-attentive processing speed have been linked to cognitive abilities, including correlations with IQ, as faster parallel feature detection supports quicker overall information processing and problem-solving efficiency.
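The contrast between parallel feature detection and serial scanning can be illustrated with a toy reaction-time simulation. All timing parameters below are illustrative assumptions, not fitted data: feature (pop-out) search produces a flat slope across set sizes, while serial self-terminating conjunction search inspects half the items on average.

```python
import random

def search_rt(set_size, mode, base=450.0, per_item=30.0):
    """Toy reaction-time model of visual search (illustrative, not fitted).

    Feature ('pop-out') search is parallel: RT is flat across set sizes.
    Conjunction search is serial self-terminating: on average, half the
    items are inspected before the target is found.
    """
    noise = random.gauss(0, 5)  # small trial-to-trial variability
    if mode == "feature":
        return base + noise                            # flat slope
    if mode == "conjunction":
        return base + per_item * set_size / 2 + noise  # linear slope
    raise ValueError(mode)

for n in (4, 8, 16, 32):
    f = search_rt(n, "feature")
    c = search_rt(n, "conjunction")
    print(f"set size {n:2d}: feature ~ {f:5.0f} ms, conjunction ~ {c:5.0f} ms")
```

Plotting mean RT against set size for the two modes reproduces the classic flat-versus-linear search functions used to diagnose pre-attentive features.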

Historical Development

The concept of pre-attentive processing emerged from early theories of selective attention in the mid-20th century. Donald Broadbent's 1958 filter model proposed an early selection mechanism in which sensory input is filtered based on physical characteristics before deeper semantic analysis, laying groundwork for distinguishing automatic early stages from controlled later ones. Building on this, Ulric Neisser's 1967 two-stage model in Cognitive Psychology described perception as involving an initial pre-attentive synthesis of features followed by focal attention for detailed analysis, emphasizing parallel processing in the first stage. A pivotal advancement came in 1980 with Anne Treisman and Garry Gelade's Feature Integration Theory, which formalized pre-attentive processing as a parallel, automatic stage where basic visual features like color and orientation are registered across the visual field without focused attention, contrasting with serial attentive binding for feature conjunctions. This theory spurred the development of pop-out paradigms in the 1980s, where targets differing in a single feature are detected rapidly regardless of distractor number, demonstrating pre-attentive efficiency in visual search tasks. These paradigms, refined through experiments like those on search asymmetries, provided empirical support for pre-attentive feature detection as a distinct, modular process. The 1980s also saw reinforcement of strict modularity in perception via Jerry Fodor's 1983 The Modularity of Mind, which argued for domain-specific input systems operating pre-attentively and independently of central cognition. By the 1990s, however, theories evolved toward hybrid models incorporating top-down influences, as seen in Steven Yantis and James C. Johnston's 1990 proposal of a flexible attentional locus where pre-attentive processing could be modulated by task demands. Post-2000, integration with neuroscience advanced understanding, with event-related potential (ERP) studies revealing P1 and N1 components as markers of early pre-attentive visual processing, modulated within 100-200 ms of stimulus onset.
Recent developments since 2020 have extended pre-attentive concepts to computational modeling and applied technologies. Computational models for predicting visual saliency in digital applications have been validated against eye-tracking data, achieving ROC-AUC values of 0.75–0.84.
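Validation of this kind treats the saliency value at each image location as a score for classifying fixated versus non-fixated locations, then computes the area under the ROC curve. A minimal sketch using the Mann-Whitney formulation of AUC, with hypothetical placeholder data:

```python
def roc_auc(pos_scores, neg_scores):
    """AUC = probability that a randomly chosen fixated location outscores
    a randomly chosen non-fixated one (Mann-Whitney formulation; ties
    count half)."""
    wins = ties = 0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1
            elif p == n:
                ties += 1
    return (wins + 0.5 * ties) / (len(pos_scores) * len(neg_scores))

# Hypothetical saliency values at fixated vs. random control locations.
fixated = [0.9, 0.8, 0.75, 0.6, 0.4]
control = [0.5, 0.3, 0.35, 0.2, 0.7]
print(round(roc_auc(fixated, control), 2))  # 0.88
```

An AUC of 0.5 means the model predicts fixations no better than chance; the 0.75–0.84 range cited above indicates substantially above-chance prediction.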

Theoretical Frameworks

Bottom-Up and Top-Down Mechanisms

Pre-attentive processing is fundamentally driven by two complementary mechanisms: bottom-up and top-down influences. Bottom-up processing operates automatically, guided by the inherent salience of stimuli, such as their intensity, contrast, or novelty, which triggers rapid detection without conscious effort. This stimulus-driven pathway relies on parallel analysis of basic features like color, orientation, and motion, often computed through saliency maps that prioritize conspicuous elements in the sensory input. For instance, a sudden bright flash or an abrupt change in the periphery can elicit reflexive orienting due to its high salience, facilitating efficient scanning of complex scenes. In contrast, top-down mechanisms introduce goal-directed modulation into pre-attentive processing, shaped by prior expectations, task requirements, or contextual priming. These influences enhance the salience of features relevant to current objectives while suppressing irrelevant distractors, effectively biasing the sensory array toward behaviorally important information. For example, if searching for a red object, top-down signals amplify red feature detection even at early processing stages, refining the initial bottom-up signals. This modulation occurs through neural feedback from higher cortical areas, allowing flexible adaptation to varying demands. The interaction between bottom-up and top-down mechanisms forms a dynamic, weighted model, where bottom-up salience provides the initial surge of activation, but top-down control rapidly refines and sustains selection. In cueing paradigms, exogenous cues (bottom-up) produce fast but transient attentional shifts, while endogenous cues (top-down, such as arrows indicating the likely target location) yield slower but more sustained benefits, with task-relevant cues accelerating detection by up to 50 ms compared to invalid ones.
Electrophysiological evidence supports this, as the early C1 component of event-related potentials (ERPs), peaking around 90 ms post-stimulus in primary visual cortex, primarily reflects bottom-up feature encoding, with minimal top-down intrusion at this latency. As an illustration of bottom-up dominance, pure attentional capture occurs when highly salient distractors involuntarily seize attention regardless of task goals.
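The weighted-map interaction can be made concrete with a minimal sketch: bottom-up feature contrast is computed per dimension, then combined under top-down dimension weights. The feature encoding, contrast rule, and weight values below are simplifying assumptions for illustration, not a model from the literature.

```python
def feature_contrast(values):
    """Bottom-up conspicuity: each location's distance from the mean of
    its feature dimension (a crude stand-in for center-surround contrast)."""
    mean = sum(values) / len(values)
    return [abs(v - mean) for v in values]

def salience(features, weights):
    """Combine per-dimension contrast maps under top-down weights.
    `features` maps a dimension name to per-location feature values."""
    n = len(next(iter(features.values())))
    total = [0.0] * n
    for dim, values in features.items():
        contrast = feature_contrast(values)
        for i, c in enumerate(contrast):
            total[i] += weights.get(dim, 1.0) * c
    return total

# Six locations; location 3 is a colour singleton and location 5 an
# orientation singleton (toy numbers).
features = {
    "colour":      [0, 0, 0, 1, 0, 0],
    "orientation": [0, 0, 0, 0, 0, 1],
}
goal_driven = {"colour": 2.0, "orientation": 0.5}  # task: find the colour target
s = salience(features, goal_driven)
print(s.index(max(s)))  # the colour singleton wins under this weighting
```

Swapping the weights makes the orientation singleton win instead, mirroring how task set biases which bottom-up signal dominates selection.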

Pure Capture and Contingent Capture

Pure capture represents a form of attentional capture driven exclusively by bottom-up salience, where highly conspicuous stimuli, such as abrupt onsets or unique color changes, involuntarily shift attention irrespective of the observer's current goals or task demands. This phenomenon is exemplified in tasks where a flashing light or a unique color among uniform items draws attention automatically, leading to faster reaction times when the target appears at the captured location compared to uncued locations. Seminal experiments by Theeuwes (1992) demonstrated this in singleton search paradigms, showing that irrelevant color distractors disrupted performance even when participants were set to search for shape differences, supporting the idea of purely stimulus-driven guidance. In contrast, contingent capture arises when attentional shifts are modulated by top-down factors, specifically the observer's attentional set for task-relevant features. Here, a salient stimulus captures attention only if it matches the predefined goals, such as when searching for a particular color: an abrupt cue in that color aligns with the attentional set and captures attention, but a cue in an irrelevant color does not. Folk et al. (1992) provided foundational evidence through cuing tasks where capture by cues occurred solely when they shared the target's defining feature (e.g., color or abruptness), establishing the contingent involuntary orienting hypothesis. This top-down contingency ensures that attention is guided by current intentions rather than salience alone. The distinction between pure and contingent capture highlights their differential reliance on bottom-up versus top-down mechanisms, with pure capture persisting in conditions of low top-down control but being susceptible to suppression under high perceptual load, where irrelevant onsets fail to disrupt performance. Contingent capture, however, requires an active attentional set and is more robust when features align with goals, even under varying loads.
Neuroimaging evidence from fMRI studies reveals distinct neural underpinnings, with bottom-up pure capture primarily engaging parietal regions for stimulus-driven orienting, while contingent capture involves greater frontal activation for goal-directed modulation of salience. A limitation of both accounts is that not all salient stimuli reliably capture attention; for instance, in contingent scenarios, task-irrelevant salience is often inhibited, preventing capture.
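The contingent-capture logic can be caricatured in a few lines. The reaction-time parameters and the all-or-none matching rule below are illustrative assumptions; real capture effects are graded and depend on cue-target timing.

```python
def cued_rt(cue_feature, cue_valid, attentional_set,
            base=500.0, benefit=50.0, cost=50.0):
    """Toy contingent-capture model (illustrative parameters).

    A cue shifts attention only when its feature matches the current
    attentional set: a matching valid cue speeds target detection,
    a matching invalid cue slows it, and non-matching cues do nothing.
    """
    if cue_feature != attentional_set:
        return base                     # no capture by set-irrelevant cues
    return base - benefit if cue_valid else base + cost

# Observer holds an attentional set for colour targets:
print(cued_rt("colour", True,  "colour"))  # 450.0  (capture, valid cue)
print(cued_rt("colour", False, "colour"))  # 550.0  (capture, invalid cue)
print(cued_rt("onset",  True,  "colour"))  # 500.0  (no capture)
```

The signature pattern is that validity effects (valid minus invalid RT) appear only for set-matching cues, which is the empirical marker Folk et al. used to argue for contingency.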

Sensory-Specific Processing

Visual Processing

Pre-attentive visual processing begins in the retina, where photoreceptors detect light and retinal ganglion cells convey signals through magnocellular (for motion and low spatial frequencies) and parvocellular (for color and high spatial frequencies) pathways to the lateral geniculate nucleus (LGN) of the thalamus. The LGN relays these signals to the primary visual cortex (V1), where basic features such as edges and orientations are encoded by simple and complex cells with receptive fields tuned to specific properties. Processing then proceeds to the secondary visual cortex (V2), which integrates these features into representations of contours and textures, and to area V4, which specializes in color and form through further refinement of these early signals. Key pre-attentive processes in the visual system involve parallel detection of basic texton features, as described by Julesz's texton theory, which posits that textures are segmented rapidly based on differences in the density of local conspicuous elements called textons, such as elongated blobs (line segments) defined by color, orientation, and width, or terminators at their ends. This enables pre-attentive texture discrimination without focused attention, relying solely on first-order statistics of texton distributions rather than higher-order spatial relations. Complementing this, the dimension-weighting account (DWA) explains how repeated visual searches across trials dynamically allocate attentional weights to relevant feature dimensions, such as color or orientation, enhancing saliency signals for those dimensions while incurring costs when switching between them, with effects observable as early as pre-selective stages. Pop-out effects occur when a target defined by a single basic feature, like color or orientation, is detected in parallel across the visual field, independent of distractor number, as in a red item among green distractors. However, conjunctive searches requiring feature binding, such as detecting a red vertical line among green vertical and red horizontal distractors, fail at the pre-attentive stage and demand serial attention to integrate features from separate feature maps.
Emotional salience can modulate these effects, with emotional faces—particularly those expressing happiness—eliciting faster pre-attentive processing and larger mismatch responses compared to neutral or sad faces, suggesting prioritized detection of socially relevant stimuli. Electrophysiological evidence for these mechanisms includes the visual mismatch negativity (vMMN) in EEG, a negative deflection around 150-160 ms post-stimulus over parieto-occipital sites, elicited by deviant visual oddballs like changes in motion patterns, persisting even when attention is diverted and indicating automatic pre-attentive deviance detection. Recent computational studies further support this by showing that deep neural networks, trained on natural images, develop filters mimicking V1's orientation-selective receptive fields, capturing early sensitivities that align with pre-attentive feature extraction.
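Julesz's first-order-statistics rule described above can be sketched with labeled "textons": patches with identical texton frequencies should not segment pre-attentively, however their elements are arranged, while patches with different frequencies should. The patch encoding below is a toy assumption.

```python
from collections import Counter

def texton_histogram(patch):
    """First-order statistics: how often each texton type occurs,
    ignoring spatial arrangement (per Julesz's texton theory)."""
    return Counter(t for row in patch for t in row)

def preattentively_distinct(patch_a, patch_b):
    """Toy rule: a texture boundary pops out only if the patches
    differ in their first-order texton statistics."""
    return texton_histogram(patch_a) != texton_histogram(patch_b)

# 'L' and 'T' as texton labels. A and B share the same textons in
# different arrangements; C has different texton frequencies.
A = [["L", "T"], ["T", "L"]]
B = [["T", "L"], ["L", "T"]]   # same first-order statistics as A
C = [["L", "L"], ["L", "T"]]   # different texton frequencies

print(preattentively_distinct(A, B))  # False: no pre-attentive boundary
print(preattentively_distinct(A, C))  # True: boundary pops out
```

The A-versus-B case captures why scrambled arrangements of the same elements do not segment without scrutiny, while density differences do.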

Auditory Processing

Pre-attentive auditory processing begins in the cochlea, where sound vibrations are transduced into neural signals by hair cells, which are then transmitted via the auditory nerve (cranial nerve VIII) to the cochlear nuclei in the brainstem. From there, ascending pathways diverge into parallel routes: the dorsal pathway projects directly toward the midbrain and contributes to spatial processing, while the ventral pathway, involving the superior olivary complex for initial sound localization via interaural time and intensity differences, relays information onward. Signals then converge in the inferior colliculus of the midbrain before reaching the primary auditory cortex (A1) in the superior temporal gyrus of the temporal lobe, where basic feature extraction occurs automatically without conscious attention. Key pre-attentive processes in the auditory domain include automatic deviance detection, exemplified by the mismatch negativity (MMN), an event-related potential (ERP) component elicited by subtle changes in auditory stimuli such as pitch deviations from a repeating standard sequence, even during inattention. The MMN arises from a comparison between incoming sounds and a predictive memory trace formed in the auditory cortex, with a typical latency of 100-250 ms post-stimulus onset, reflecting early change-detection operations in supratemporal and frontal regions. Auditory stream segregation further enables the perceptual grouping of sounds based on coherence in frequency, timing, or spatial cues, allowing the auditory system to parse complex acoustic scenes into separate perceptual objects pre-attentively, as demonstrated in tone sequences where tones differing by more than approximately 5 semitones form distinct streams. Hemispheric lateralization modulates these processes, with the left auditory cortex specializing in rapid temporal processing for fine-grained timing analysis (e.g., phoneme distinctions) and the right auditory cortex excelling in spectral processing for holistic pitch and timbre perception.
Representative examples of these mechanisms include the pre-attentive origins of the cocktail party effect, where filtering based on stream segregation principles allows initial separation of a target voice from background noise through primitive grouping by pitch or onset timing, prior to attentional selection. Temporal acuity is highlighted by gap detection thresholds, where the minimal detectable silent gap in a continuous sound is approximately 2-3 ms in normal-hearing individuals, relying on brainstem and cortical timing circuits for pre-attentive discontinuity detection. Electrophysiological evidence from MMN studies confirms these processes' automaticity, as the component persists across sleep and inattention, underscoring its role in involuntary change detection. Recent post-2020 research has applied these insights to hearing aid design, showing that algorithms enhancing stream segregation in noisy environments—such as those preserving low-frequency cues for better grouping—improve speech intelligibility for users with hearing impairment by mimicking pre-attentive spectral and temporal analysis.
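The MMN's comparison of incoming sounds against a predictive memory trace can be sketched as a running-average model. The update rule, decay parameter, and tone frequencies below are illustrative assumptions, not a physiological model:

```python
def mmn_responses(tones, alpha=0.3):
    """Toy mismatch-negativity model: maintain an exponentially decaying
    memory trace of the standard; each tone's deviance response is its
    distance from the trace, computed before the trace is updated."""
    trace = tones[0]
    responses = []
    for t in tones:
        responses.append(abs(t - trace))         # pre-attentive mismatch signal
        trace = (1 - alpha) * trace + alpha * t  # update the predictive trace
    return responses

# Repeating 1000 Hz standard with a single 1200 Hz deviant.
seq = [1000] * 6 + [1200] + [1000] * 3
r = mmn_responses(seq)
print(r[6] > max(r[:6]))  # the deviant elicits the largest response: True
```

After the deviant, the trace is partially pulled toward 1200 Hz, so the following standards themselves produce small mismatch responses, echoing the gradual re-establishment of the standard observed empirically.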

Multisensory Integration

Integration Mechanisms

Pre-attentive multisensory integration relies on core principles that facilitate the binding of inputs from different sensory modalities without conscious awareness. One fundamental principle is the spatial and temporal coincidence rule, which posits that stimuli occurring close in space and time are more likely to be integrated as originating from a common source. For instance, in the ventriloquism effect, the perceived location of a sound is biased toward a spatially discrepant but temporally coincident visual stimulus, such as a moving puppet's mouth, demonstrating automatic capture of auditory spatial perception by visual cues. Another key principle is inverse effectiveness, whereby multisensory integration yields greater relative enhancement when individual unisensory signals are weak or unreliable, allowing the perceptual system to amplify suboptimal inputs for improved detection. These principles underpin specific processes in pre-attentive integration, including cross-modal cuing, where an exogenous cue from one modality involuntarily shifts attention and accelerates processing in another. A classic example is an abrupt sound cue that speeds up the detection of a subsequent visual target at the same location, reflecting rapid, automatic orienting across senses. Computationally, such integration is often modeled using Bayesian frameworks, which weight sensory inputs according to their reliability—assigning higher influence to more precise signals—to produce an optimal perceptual estimate. This reliability-based weighting occurs pre-attentively, enabling efficient fusion of noisy signals without deliberate effort. Illustrative examples highlight these mechanisms in everyday perception. The McGurk effect exemplifies audiovisual integration, where conflicting visual lip movements (e.g., forming "ga") alter the perception of an auditory syllable (e.g., "ba") to a fused percept like "da," driven by temporal synchrony and prior experience with speech.
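The Bayesian reliability weighting has a standard closed form: each cue is weighted by its inverse variance, and the fused estimate is more precise than either cue alone. A minimal sketch, with hypothetical variances chosen so that the sharp visual cue dominates, as in ventriloquism:

```python
def fuse(estimates):
    """Reliability-weighted (maximum-likelihood) cue combination:
    each cue is weighted by its inverse variance, yielding the optimal
    fused estimate and a variance no larger than any single cue's."""
    weights = [1.0 / var for _, var in estimates]
    fused = sum(w * x for w, (x, _) in zip(weights, estimates)) / sum(weights)
    fused_var = 1.0 / sum(weights)
    return fused, fused_var

# Hypothetical localisation cues: vision is sharp (low variance),
# audition is coarse, so the fused percept sits near the visual cue.
visual = (0.0, 1.0)     # (position in degrees, variance)
auditory = (10.0, 9.0)
pos, var = fuse([visual, auditory])
print(round(pos, 2), round(var, 2))  # 1.0 0.9
```

Despite the 10-degree conflict, the fused position lies only 1 degree from the visual cue, reproducing visual capture of auditory location; the fused variance (0.9) is below the best single cue's (1.0), which is the hallmark of optimal integration.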
Similarly, in noisy environments, visual cues from a speaker's face enhance auditory speech comprehension by integrating lip-reading with degraded sound signals, improving intelligibility through pre-attentive cross-modal facilitation. Empirical evidence supports these integration mechanisms through behavioral measures. In redundant signal tasks, reaction times are faster for combined audiovisual stimuli (e.g., a light flash paired with a beep) compared to unisensory presentations, exceeding predictions from independent processing models and indicating superadditive pre-attentive binding. Recent post-2020 virtual reality studies further demonstrate that multisensory training—pairing visual scene motion with proprioceptive feedback—enhances balance control by strengthening pre-attentive integration, with participants showing reduced postural sway after immersive sessions.
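The redundant-signal comparison above is commonly formalized with Miller's race-model inequality: if independent parallel channels raced, the audiovisual response-time distribution could never exceed the sum of the unisensory distributions, so violations imply coactive integration. A minimal sketch with hypothetical reaction-time samples:

```python
def ecdf(sample, t):
    """Empirical cumulative probability of having responded by time t."""
    return sum(rt <= t for rt in sample) / len(sample)

def race_model_violated(rt_a, rt_v, rt_av, times):
    """Miller's race-model inequality: under independent parallel channels,
    F_AV(t) <= F_A(t) + F_V(t) for all t. A violation at any time point
    suggests superadditive (coactive) multisensory integration."""
    return any(ecdf(rt_av, t) > ecdf(rt_a, t) + ecdf(rt_v, t) for t in times)

# Hypothetical reaction times (ms): audiovisual responses are faster at
# the fast tail than either unisensory distribution predicts.
rt_a  = [320, 340, 360, 380, 400]
rt_v  = [310, 335, 355, 385, 410]
rt_av = [240, 250, 260, 300, 330]
print(race_model_violated(rt_a, rt_v, rt_av, times=range(230, 420, 10)))  # True
```

In practice the inequality is tested at the fast quantiles of the RT distributions, since that is where a race of independent channels is most constrained.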

Neural Substrates

The superior colliculus (SC) serves as a primary hub for reflexive orienting in pre-attentive multisensory processing, where neurons integrate visual, auditory, and somatosensory cues to amplify responses to salient stimuli and facilitate rapid behavioral shifts. In the SC's deep layers, multisensory convergence enhances neural firing beyond unisensory levels, particularly when stimuli from different modalities occur in close spatiotemporal proximity, supporting involuntary capture of attention. The superior temporal sulcus (STS), particularly its posterior region, plays a crucial role in audiovisual integration during pre-attentive stages, combining dynamic visual cues like biological motion with auditory signals to form unified percepts without conscious effort. The thalamus, acting as a relay for cross-modal gating, modulates sensory throughput; its pulvinar nucleus filters and prioritizes salient multisensory inputs, suppressing irrelevant signals to sharpen pre-attentive detection. Electrophysiological studies reveal early event-related potentials (ERPs) as markers of pre-attentive unisensory processing, with components like the P50 and N100 reflecting initial sensory registration within 50-100 ms post-stimulus. In multisensory contexts, cross-modal modulations emerge rapidly in the 100-200 ms window, where audiovisual pairings enhance or suppress ERP amplitudes in auditory and visual cortices, indicating automatic integration that boosts signal-to-noise ratios for salient events. These early interactions underscore the pre-attentive nature of multisensory enhancement, occurring prior to volitional attention deployment. Functional magnetic resonance imaging (fMRI) demonstrates pulvinar nucleus activation during pre-attentive salience detection, with heightened BOLD signals when multisensory stimuli compete for processing resources, facilitating bottom-up prioritization.
Diffusion tensor imaging (DTI) further highlights the contribution of white matter tracts connecting sensory cortices, showing that reduced fractional anisotropy in these pathways correlates with impaired integration efficiency. Recent optogenetic investigations post-2020 have confirmed the SC's causal role in multisensory capture, where targeted activation of SC neurons elicits reflexive orienting behaviors akin to natural stimulus-driven responses, emphasizing its pre-attentive function in threat detection and spatial awareness.

Plasticity and Adaptation

Experience-Dependent Plasticity

Experience-dependent plasticity refers to the brain's ability to modify pre-attentive processing through training and expertise, primarily via mechanisms like Hebbian learning that reinforce synaptic strengths in feature-sensitive neural circuits. Hebbian principles, where correlated neural activity leads to strengthened connections, enhance the efficiency of early sensory detectors, allowing for faster and more robust pre-attentive responses to relevant stimuli. For instance, professional musicians exhibit enhanced early auditory evoked potentials, including larger P1 and N1 responses, alongside structural expansions in Heschl's gyrus, reflecting heightened sensitivity to pitch and timbre in pre-attentive stages. This plasticity manifests in domain-specific expertise, such as in bilingual individuals who demonstrate shifted perceptual boundaries for colors due to Whorfian effects from language exposure, altering pre-attentive categorization without conscious effort. Similarly, action video game players show improved visual contrast sensitivity at low spatial frequencies, enabling quicker pre-attentive detection of subtle changes in dynamic environments. Longitudinal studies provide robust evidence for these changes, with perceptual learning tasks leading to increased amplitudes in early event-related potentials (ERPs) after targeted training sessions. Sleep plays a crucial role in consolidating these gains, stabilizing synaptic modifications during non-rapid eye movement stages to prevent decay and promote generalization of pre-attentive enhancements. Recent studies as of 2025 have shown that passive multisensory associations can induce functional and structural plasticity in adult brains, enhancing multisensory pre-attentive processing without explicit training. Additionally, research highlights neural correlates of perceptual learning in the auditory cortex and brainstem, contributing to changes in sound processing over rapid and slow timescales.
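The Hebbian mechanism invoked above can be sketched with a single linear unit whose weights grow where input and output correlate, with Oja-style normalization to keep them bounded. The patterns, learning rate, and step count are illustrative assumptions:

```python
def hebbian_train(patterns, eta=0.1, steps=100):
    """Minimal Hebbian sketch: weights grow where pre- and postsynaptic
    activity correlate (delta_w = eta * x * y, with y = w . x), then are
    normalised to unit length (Oja-style) to prevent runaway growth. A
    repeatedly experienced feature pattern comes to drive the unit more
    strongly, a stand-in for expertise sharpening early detectors."""
    w = [0.1] * len(patterns[0])
    for _ in range(steps):
        for x in patterns:
            y = sum(wi * xi for wi, xi in zip(w, x))          # unit response
            w = [wi + eta * xi * y for wi, xi in zip(w, x)]   # Hebbian update
            norm = sum(wi * wi for wi in w) ** 0.5
            w = [wi / norm for wi in w]                        # normalise
    return w

# Training on a recurring feature pattern strengthens its weights.
trained = [1.0, 1.0, 0.0, 0.0]
w = hebbian_train([trained])
response_trained = sum(wi * xi for wi, xi in zip(w, trained))
response_novel = sum(wi * xi for wi, xi in zip(w, [0.0, 0.0, 1.0, 1.0]))
print(response_trained > response_novel)  # True: detector tuned to experience
```

After training, the weight vector aligns with the practiced pattern, so the unit responds more strongly to it than to an unpracticed one, paralleling the enhanced early evoked responses reported in experts.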
Pre-attentive processing undergoes significant maturation during early childhood, with basic pop-out effects emerging in infants as young as 3 to 4 months of age. Studies using visual search tasks demonstrate that young infants can detect salient features, such as orientation or color differences, in a parallel manner indicative of pre-attentive mechanisms, though efficiency is limited compared to adults. By 6 to 7 years, children exhibit more robust parallel processing, achieving adult-like performance in feature-based pop-out searches, where reaction times remain independent of distractor number for simple salient targets. This developmental trajectory reflects the tuning of sensory feature detectors during critical periods, such as the window for amblyopia treatment, which extends up to approximately 8 years and allows for plasticity in low-level visual processing if sensory input is balanced early. In aging, pre-attentive processing shows declines, including slower event-related potentials (ERPs) and diminished saliency detection for peripheral or low-contrast stimuli. Elderly individuals often display prolonged latencies in early visual ERPs, such as the P1 component, reflecting reduced speed in parallel feature extraction. Reduced saliency detection contributes to poorer performance in tasks requiring automatic capture by abrupt onsets or motion, with older adults showing decreased neural responses to behaviorally relevant spatial changes. To compensate, aging brains increasingly rely on top-down attentional mechanisms to enhance bottom-up signals, though this strategy is less effective for rapid, pre-attentive tasks. Cross-sectional studies provide key evidence for these changes, such as increased mismatch negativity (MMN) latency in auditory pre-attentive processing among the elderly, indicating slower automatic deviance detection compared to younger adults.
Early interventions, like musical training starting in infancy or early childhood, can boost auditory development by enhancing neural encoding of temporal and spectral features, leading to improved pre-attentive discrimination of sounds and even speech elements. Recent longitudinal data post-2020, drawn from large cohorts like the ABCD study, suggest that high screen media exposure in youth has subtle effects on brain regions involved in motor and cognitive function, such as modest cerebellar changes, but shows no strong direct impact on visual processing efficiency over 2-4 years. These findings highlight ongoing plasticity in development while underscoring age-related vulnerabilities in pre-attentive systems.

Deficits and Clinical Aspects

Pathological Deficits

In schizophrenia, pre-attentive processing is characterized by deficits in automatic sensory discrimination, often manifesting as reduced mismatch negativity (MMN) amplitudes in event-related potentials (ERPs), which reflect impaired detection of auditory or visual deviants and contribute to sensory overload through inadequate filtering of irrelevant stimuli. This impairment is linked to dysfunction in cortical networks, including reduced activity in temporal and prefrontal regions, leading to heightened involuntary attentional capture and perceptual disorganization. For negative emotional stimuli, such as fearful faces, patients exhibit specific reductions in MMN generation, exacerbating affective processing overload and correlating with symptom severity. Autism spectrum disorder (ASD) involves altered pre-attentive processing, particularly reduced multisensory integration, as evidenced by a weaker McGurk effect in which audiovisual speech incongruences elicit less illusory perception compared to neurotypical individuals, indicating diminished automatic binding of auditory and visual cues. This deficit stems from atypical temporal synchronization in sensory cortices, potentially rooted in excitatory-inhibitory imbalances. Conversely, enhanced local visual processing at pre-attentive stages is observed, with superior detection of fine-grained details in visual search tasks, aligning with theories of detail-focused perceptual biases that prioritize featural over holistic analysis. In Alzheimer's disease, pre-attentive processing shows delays in ERP components like the N100 and P200, alongside diminished deviance detection reflected in reduced MMN amplitudes, signaling early disruptions in automatic sensory encoding and change registration within temporal and frontal areas. These alterations, detectable even in prodromal stages, predict cognitive decline, positioning MMN as a potential non-invasive biomarker for preclinical diagnosis.
Attention-deficit/hyperactivity disorder (ADHD) presents variable pre-attentive processing, with some studies showing intact MMN responses to auditory deviants suggestive of preserved automatic deviance detection, while others report inconsistencies in attentional capture during passive tasks, potentially varying by subtype or stimulus complexity. Post-2020 research links long COVID to pre-attentive deficits, including olfactory impairments with altered ERPs indicating disrupted pre-attentive chemosensory discrimination, and auditory changes such as reduced MMN responses, contributing to persistent fatigue and cognitive fog in affected individuals.

Assessment and Implications

Assessment of pre-attentive processing relies on a combination of behavioral and neurophysiological methods designed to isolate automatic, unconscious processing from higher-order attentional mechanisms. Behavioral tasks, such as visual search experiments measuring reaction times (RTs), evaluate the efficiency of parallel feature detection; for instance, RTs remain constant regardless of distractor number when targets are defined by basic features like color or orientation, indicating pre-attentive segregation. Oddball paradigms, where infrequent stimuli deviate from a repetitive sequence, further probe pre-attentive deviance detection by recording responses to deviant targets without explicit instructions to attend, revealing automatic pop-out effects in both visual and auditory domains. Neurophysiological techniques provide direct measures of brain activity during pre-attentive stages. Event-related potentials (ERPs), particularly the mismatch negativity (MMN), capture pre-attentive deviance detection in auditory processing as a negative deflection around 150-250 ms post-stimulus, elicited passively without task demands. In visual processing, steady-state visual evoked potentials (SSVEPs) assess pre-attentive responses to periodic stimuli, with amplitude and phase stability reflecting automatic feature binding at frequencies like 10-20 Hz. These methods confirm pre-attentive processing's independence from voluntary attention, as responses persist even when participants ignore stimuli. The implications of assessing pre-attentive processing extend to clinical, technological, and educational domains, emphasizing its role in early intervention and system optimization. In neurodevelopmental disorders like autism spectrum disorder and attention-deficit/hyperactivity disorder (ADHD), reduced MMN amplitudes signal early impairments, enabling diagnosis before behavioral symptoms manifest and predicting developmental trajectories with high sensitivity.
For instance, altered pre-attentive auditory ERPs in at-risk infants correlate with later social communication deficits, supporting pre-symptomatic screening via non-invasive EEG. Therapeutically, pre-attentive assessment identifies targets for sensory integration therapy, which enhances automatic sensory binding through structured multisensory exposure, improving adaptive responses in children with processing delays. In cognitive decline, pre-attentive markers like diminished visual MMN predict progression; deficits in pre-attentive motion processing precede gray matter atrophy, offering predictive validity for conversion rates up to 80% over two years. Post-2020 advancements in wearable EEG devices facilitate pre-attentive assessment outside labs, using dry electrodes to capture MMN-like responses during daily activities, with signal-to-noise ratios approaching traditional systems for applications in remote monitoring. In artificial intelligence, pre-attentive models inspire computer vision algorithms, such as saliency maps that mimic parallel feature integration for rapid scene analysis, reducing computational load by prioritizing bottom-up cues. Educationally, understanding pre-attentive processing optimizes learning environments by designing visual aids with pop-out features (e.g., color contrasts for key concepts), enhancing automatic capture and retention in diverse classrooms without overloading attentional resources. These applications underscore pre-attentive processing's foundational role in bridging sensory input to higher cognition, with assessments revealing subtle deficits, as seen in schizophrenia, where MMN reductions index early perceptual disorganization. As of 2025, recent meta-analyses further refine MMN as a transdiagnostic biomarker across disorders, including enhanced links to excitatory-inhibitory imbalances in sensory deficits.
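The MMN measurement itself reduces to a simple computation on averaged epochs: the deviant-minus-standard difference wave. A minimal sketch, with hypothetical microvolt samples standing in for recorded epochs:

```python
def average(epochs):
    """Average ERP across trials (each epoch is one list of samples)."""
    n = len(epochs)
    return [sum(vals) / n for vals in zip(*epochs)]

def mismatch_wave(standard_epochs, deviant_epochs):
    """MMN is quantified as the deviant-minus-standard difference wave;
    a negative deflection roughly 150-250 ms after stimulus onset
    indexes automatic deviance detection."""
    std = average(standard_epochs)
    dev = average(deviant_epochs)
    return [d - s for d, s in zip(dev, std)]

# Hypothetical 2-trial epochs, 5 samples each (microvolts); the deviant
# response dips around the middle samples, as an MMN would.
standards = [[0, 1, 1, 1, 0], [0, 1, 1, 1, 0]]
deviants  = [[0, 1, -2, -1, 0], [0, 1, -2, -3, 0]]
diff = mismatch_wave(standards, deviants)
print(diff)            # [0.0, 0.0, -3.0, -3.0, 0.0]
print(min(diff) < 0)   # negative deflection present: True
```

In practice the peak or mean amplitude of this difference wave within the MMN latency window is the quantity whose reduction or delay serves as the clinical marker discussed above.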
