
Motion perception

Motion perception is the process by which the visual system infers the speed and direction of elements within a scene, primarily from changes in the pattern of light reaching the retina, while also integrating vestibular and proprioceptive inputs to distinguish self-motion from environmental movement. This perceptual attribute enables the detection of object trajectories and optic flow patterns essential for navigation and interaction with the dynamic world. Unlike physical motion, which involves positional changes over time, motion perception can arise from apparent motion in static sequences or illusions, demonstrating its constructive nature in the brain. At the neural level, motion processing begins in the retina and primary visual cortex (V1), where direction-selective neurons in layer 4B respond to local motion signals, before projecting to the middle temporal area (MT) for integration into global motion representations. The MT area, part of the dorsal visual stream, contains neurons tuned to specific directions and speeds, resolving ambiguities like the aperture problem through pooling of local cues, as seen in responses to plaid patterns or random-dot kinematograms. Further processing in the medial superior temporal area (MST) handles complex patterns such as optic flow for heading estimation, incorporating extra-retinal signals from eye and head movements. Motion perception encompasses several types, including first-order motion based on luminance variations and second-order motion defined by contrast or texture changes, each potentially involving distinct neural pathways. Biological motion, the recognition of actions from point-light displays, engages the superior temporal sulcus (STS) and is evident from infancy, aiding social functions like action recognition and threat detection. Illusions such as the motion aftereffect—where prolonged exposure to moving stimuli causes stationary objects to appear to drift oppositely—highlight adaptation in MT neurons and the system's sensitivity to temporal correlations.
This capability is vital for survival tasks like predator avoidance and navigation, with deficits such as akinetopsia from MT lesions severely impairing everyday activities like crossing streets. Developmental studies show sensitivity emerges in early infancy but matures into adulthood, influenced by visual experience during critical periods. Overall, motion perception exemplifies the brain's ability to construct coherent representations from ambiguous input, underpinning broader visual cognition.

Fundamentals of Motion Perception

Definition and Historical Overview

Motion perception refers to the process by which the visual system detects and interprets changes in the position of objects or patterns over time within the visual field, distinguishing it from the perception of static forms by emphasizing dynamic transformations in the retinal image. This perceptual attribute arises from the brain's inference of movement based on spatiotemporal variations in light patterns, enabling the differentiation of self-motion from environmental changes. Unlike static vision, which relies on spatial contrasts for shape recognition, motion perception integrates temporal cues to construct trajectories and velocities, forming a foundational aspect of visual processing. The historical study of motion perception traces back to the 19th century, when early investigators conducted pioneering experiments on the aftereffects of motion, observing how prolonged exposure to moving stimuli induced illusory perceptions of opposite-direction movement upon cessation, laying early groundwork for understanding adaptation in visual processing. In 1912, the psychologist Max Wertheimer advanced this field through his seminal work on the phi phenomenon, demonstrating apparent motion where stationary lights flashed in sequence created the illusion of continuous movement, challenging atomistic views of perception and establishing key principles of spatiotemporal integration. Early 20th-century Gestalt psychologists, building on Wertheimer's findings, further contributed by emphasizing holistic motion grouping, positing that the visual system organizes dynamic elements into coherent wholes rather than isolated parts, influencing subsequent theories of perceptual organization. Motion perception plays a critical role in navigation, object tracking, and survival, such as detecting approaching predators or coordinating actions like reaching or locomotion, by providing essential cues for anticipating environmental changes.
Its evolutionary conservation is evident across species, from insects' elementary motion detectors for obstacle avoidance to humans' advanced systems for complex scene analysis, reflecting adaptations that enhance mobility and threat response in diverse ecological niches. This shared heritage underscores motion perception as a fundamental mechanism for survival in dynamic worlds.

Basic Principles of Visual Motion Detection

Visual motion detection begins with the formation of the retinal image, where motion is perceived as the displacement of patterns across the array of photoreceptors over time. The retina, comprising rods and cones, captures these time-dependent brightness variations, which the visual system processes to infer object movement rather than static positions. This spatiotemporal change in the retinal image provides the raw input for motion computation, as the visual world is not directly encoded with velocity but derived from sequential snapshots of luminance distributions. A fundamental concept in motion detection is spatiotemporal frequency, which characterizes motion through the combination of spatial frequency (measured in cycles per degree, c/deg) and temporal frequency (measured in cycles per second, or Hz). Spatial frequency describes the periodicity of luminance variations across space, while temporal frequency captures the rate of change over time; together, they define the velocity of moving patterns in a three-dimensional frequency space. In human vision, motion sensitivity is tuned to low to moderate spatial frequencies (typically 0.5–4 c/deg) and peaks at temporal frequencies around 8–10 Hz, reflecting the optimal range for detecting coherent motion under natural viewing conditions. The Reichardt detector serves as a foundational model for understanding direction-selective motion detection, originally proposed for insect vision but influential in conceptualizing vertebrate mechanisms. This elementary motion detector operates via a delay-and-correlate process: signals from adjacent photoreceptors are temporally delayed relative to one another and then multiplied (correlated), producing a response that is sensitive to the direction of motion—enhanced for one direction and suppressed for the opposite.
This mechanism allows the visual system to distinguish true motion from flicker or noise by exploiting the spatiotemporal structure of the input, without requiring explicit velocity computation at this stage. Psychophysical studies reveal the limits of motion detection in humans, with the minimum detectable displacement typically ranging from 0.3 to 1.5 arcmin under optimal conditions, such as high-contrast stimuli in structured fields. This threshold represents the smallest positional shift between successive frames that observers can reliably perceive as motion, varying with factors like stimulus duration and eccentricity. Additionally, contrast sensitivity plays a key role, as moving patterns are detectable at lower contrasts (often 1–5% Michelson contrast) compared to static ones, enabling robust motion perception even in dim or low-contrast environments. These thresholds highlight the visual system's efficiency in extracting motion signals from noisy inputs.
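The delay-and-correlate scheme described above can be sketched in a few lines of Python. This is an illustrative toy, not a calibrated model: the two receptor signals, the one-step delay, and the moving-bar stimulus are all invented for demonstration.

```python
import numpy as np

def reichardt_response(left, right, delay=1):
    """Minimal Reichardt-style detector for two adjacent receptors.
    Each half-detector correlates one receptor's delayed signal with the
    other's current signal; subtracting the two halves yields a signed,
    direction-selective output (positive = rightward here)."""
    l_d = np.roll(left, delay)   # delayed copy of the left receptor
    r_d = np.roll(right, delay)  # delayed copy of the right receptor
    l_d[:delay] = 0              # discard wrap-around samples
    r_d[:delay] = 0
    rightward = l_d * right      # rightward motion: left fires, then right
    leftward = r_d * left        # leftward motion: right fires, then left
    return float(np.sum(rightward - leftward))

# A bar moving rightward: the right receptor sees the same event one
# time step after the left receptor does.
t = np.arange(20)
left = (t % 10 == 3).astype(float)
right = (t % 10 == 4).astype(float)   # identical pattern, shifted 1 step

assert reichardt_response(left, right) > 0   # rightward motion detected
assert reichardt_response(right, left) < 0   # reversed input = leftward
```

Note how the opponent subtraction suppresses non-directional flicker: a signal appearing simultaneously on both receptors drives both half-detectors equally and cancels.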

Types of Motion Perception

First-Order Motion Perception

First-order motion perception involves the detection of visual motion through changes in luminance, where the direction and speed of movement are encoded in the first-order statistics of spatiotemporal luminance variations. This process relies on a Fourier-based motion-energy analysis, in which linear spatiotemporal filters tuned to specific spatial frequencies and orientations extract energy from luminance-modulated signals. The seminal motion energy model describes this mechanism as involving quadrature pairs of filters (e.g., even- and odd-symmetric) whose outputs are squared, linearly summed, and then differenced in an opponent-energy stage to yield direction-selective responses, providing a phase-independent measure of motion. This energy computation effectively simulates the correlated activity of adjacent detectors with temporal delays, akin to the Reichardt detector, enabling robust detection of coherent motion in luminance-defined patterns. At the neural level, first-order motion is primarily processed via the magnocellular pathway, which dominates for low-spatial-frequency, high-temporal-frequency stimuli typical of fast movements. Signals from retinal ganglion cells project through the lateral geniculate nucleus (LGN) to layer 4Cα of primary visual cortex (V1), where simple cells respond selectively to oriented edges moving coherently across their receptive fields. These neurons exhibit speed tuning curves that peak at velocities of 4–16 degrees per second, reflecting the transient responses of magnocellular inputs to rapid changes. Further integration occurs in area MT (V5), where direction-selective cells pool V1 outputs to represent global motion direction. Classic stimuli for demonstrating first-order motion include drifting sine-wave gratings, where alternating light and dark bars move uniformly to elicit directionally tuned responses, and random dot kinematograms (RDKs) at high coherence levels (e.g., >50%), in which a subset of dots moves coherently amid noise to reveal perceptual thresholds for motion discrimination.
Psychophysical studies using such RDKs show that human observers achieve near-perfect detection at high coherence, underscoring the system's efficiency for salient, luminance-driven movements. A key limitation of first-order mechanisms is their insensitivity to motion defined by non-luminance cues, such as changes in contrast, texture, or color, which require separate processing pathways. Additionally, local measurements in this system can give rise to the aperture problem, where ambiguous motion signals from edges necessitate higher-level integration, though this primarily challenges unambiguous direction estimation in complex scenes.
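The opponent-energy computation described above can be demonstrated numerically. The sketch below is a simplified toy under invented parameters (filter frequencies, Gaussian window, and sampling grid are arbitrary choices, not values from the original model): quadrature pairs of space-time Gabors are applied to a drifting grating, squared and summed for phase invariance, then differenced across preferred directions.

```python
import numpy as np

def st_gabor(x, t, sf, tf, phase):
    """Space-time Gabor tuned to gratings drifting at velocity tf/sf
    (positive tf = rightward preference under this sign convention)."""
    envelope = np.exp(-(x**2 + t**2) / 2)
    return envelope * np.cos(2 * np.pi * (sf * x - tf * t) + phase)

def opponent_energy(stimulus, x, t, sf=0.5, tf=0.5):
    """Square and sum each quadrature pair (phase-invariant energy),
    then subtract leftward from rightward energy."""
    energy = {}
    for name, tf_signed in (("right", tf), ("left", -tf)):
        even = np.sum(stimulus * st_gabor(x, t, sf, tf_signed, 0.0))
        odd = np.sum(stimulus * st_gabor(x, t, sf, tf_signed, np.pi / 2))
        energy[name] = even**2 + odd**2
    return energy["right"] - energy["left"]

xs = np.linspace(-4, 4, 96)
x, t = np.meshgrid(xs, xs)
# Drifting gratings; the rightward one carries an arbitrary phase offset
# to show that the energy read-out is phase-independent.
rightward = np.cos(2 * np.pi * (0.5 * x - 0.5 * t) + 1.3)
leftward = np.cos(2 * np.pi * (0.5 * x + 0.5 * t))

assert opponent_energy(rightward, x, t) > 0   # positive = rightward
assert opponent_energy(leftward, x, t) < 0
```

The key design point is the quadrature pair: either filter alone responds in a phase-dependent way, but the sum of their squared outputs depends only on how much energy the stimulus has at the filter's preferred space-time orientation.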

Second-Order Motion Perception

Second-order motion perception refers to the detection of motion defined by non-luminance features, such as changes in contrast, texture, or flicker, rather than direct luminance variations. This process requires additional computational stages beyond the initial luminance-based analysis, involving nonlinear rectification of the visual input followed by motion analysis in a second-stage filter to extract directional motion signals. Seminal work by Chubb and Sperling introduced drift-balanced random stimuli, which lack net motion energy in any direction yet elicit coherent motion perception through tracking of local features like contrast envelopes, demonstrating the existence of dedicated non-Fourier pathways. Examples include contrast-modulated gratings, where a static carrier pattern's contrast is sinusoidally varied by a drifting envelope, or stereo-defined motion reliant on disparity changes without luminance cues. Neural processing of second-order motion likely occurs through integration in the middle temporal area (MT/V5), where neurons respond comparably to both first- and second-order stimuli, though with reduced sensitivity for the latter. Neuroimaging studies show robust activation in MT/V5 for second-order motion at speeds around 4-9 deg/s, with stronger responses to flicker-based cues compared to filtered noise modulations, suggesting specialized handling within this region. Unlike first-order motion, which is robust up to high velocities, second-order motion is tuned to slower speeds, with maximum detectable rates typically up to 32 deg/s but optimal below 10 deg/s, reflecting the pathway's reliance on feature tracking rather than direct energy detection. Psychophysical experiments confirm that second-order motion exhibits higher direction discrimination thresholds than first-order motion, often elevated 2-3 times, indicating greater processing demands for extracting coherent direction from contrast modulations.
For instance, equiluminant stimuli, such as red-green drifting patterns at isoluminance, are perceived as moving slower than their luminance-defined counterparts at equivalent physical speeds, with underestimation by up to 50% due to the absence of luminance cues. These findings highlight the pathway's limits, as thresholds increase more rapidly with reduced contrast or low modulation depth. In natural scenes, second-order motion enables perception of complex phenomena like transparent overlays, where multiple motion directions coexist through layered contrast modulations, or moving shadows that alter local texture without net luminance flux, such as a cast shadow traversing a textured surface. These applications underscore the pathway's role in parsing ambiguous visual environments, integrating non-luminance signals to support depth and segmentation without invoking first-order mechanisms. Brief integration with first-order signals can occur at higher levels to resolve overlapping motions, enhancing overall scene understanding.
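A small numerical sketch can make the first-/second-order distinction concrete. In this illustrative construction (sizes, frequencies, and modulation depth are arbitrary), a static binary-noise carrier is contrast-modulated by a drifting envelope: directional Fourier energy in the raw stimulus is roughly balanced, so a first-order energy analysis sees no net motion, but a full-wave rectification (the nonlinearity of the second-order pathway) exposes the envelope's drift.

```python
import numpy as np

rng = np.random.default_rng(7)
N, f = 64, 4                                   # frame size, envelope frequency
x = np.arange(N)[None, :]
t = np.arange(N)[:, None]

carrier = np.sign(rng.standard_normal((1, N))) * np.ones((N, 1))  # static +/-1 noise
envelope = 1 + 0.9 * np.cos(2 * np.pi * f * (x - t) / N)          # drifts rightward
stim = carrier * envelope                                          # contrast-modulated

def direction_index(img):
    """(rightward - leftward) / total Fourier energy at the envelope's
    temporal frequency, restricted to positive spatial frequencies."""
    F = np.abs(np.fft.fft2(img)) ** 2
    right = F[(-f) % N, 1:N // 2].sum()   # kx > 0 with kt = -f -> rightward
    left = F[f % N, 1:N // 2].sum()       # kx > 0 with kt = +f -> leftward
    return (right - left) / (right + left)

di_raw = direction_index(stim)             # near zero: no net first-order drift
di_rect = direction_index(np.abs(stim))    # rectification reveals the envelope

assert abs(di_raw) < 0.6 and di_rect > 0.9
```

Because the carrier takes only the values +/-1, rectifying the stimulus recovers the envelope exactly, turning the hidden contrast drift into an ordinary luminance-defined grating that any Fourier-based detector can pick up.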

Challenges in Motion Processing

The Aperture Problem

The aperture problem refers to the inherent ambiguity in local measurements of visual motion, where the true velocity of an object's motion cannot be uniquely determined from the orientation of its edges alone when viewed through a limited spatial window, such as a neuron's receptive field or an experimental aperture. This limitation arises because early visual processing captures only the motion component perpendicular to the edge, leaving the parallel component unspecified and allowing multiple possible true velocities consistent with the observed signal. The phenomenon was first systematically described in the context of perceived motion for extended contours. A classic illustration is the barber pole illusion, in which a diagonally drifting grating viewed through a long, narrow rectangular aperture appears to move predominantly along the aperture's long axis—vertically for a vertical aperture—rather than following its true diagonal path, due to the dominance of the perpendicular motion signals from the grating bars. Similarly, in triangular or other shaped apertures, the perceived direction biases toward the aperture's boundaries, emphasizing how local edge information misleads direction estimation without additional contextual cues. The mathematical foundation of this ambiguity derives from the optic flow constraint equation, based on the assumption that image intensity remains constant over time for a moving point. For a 1D edge, the perpendicular velocity component is constrained by the equation v_\perp = -\frac{\partial I / \partial t}{\partial I / \partial x}, where I represents image intensity, \partial I / \partial t is the temporal derivative, and \partial I / \partial x is the spatial derivative along the normal; recovering the full 2D velocity, however, requires an additional constraint to resolve the underdetermined component. In vector terms, the true velocity \vec{v} decomposes into the detectable normal component \vec{v}_\perp and an indeterminate component along the edge, highlighting the single-equation, two-unknown nature of local motion measurement.
Psychophysical experiments with plaid patterns—superpositions of two or more gratings drifting in different directions—reveal how these ambiguous local signals from individual components can combine to produce a coherent perceived motion direction for the overall pattern, often aligning with the intersection of constraints or the pattern's global form. For instance, when the gratings move at equal speeds, observers typically perceive the plaid's motion as the vector average of the components, though biases occur if speeds differ, underscoring the role of relative motion strengths in disambiguating direction. Such ambiguities are partially resolved by global features like line terminations or corners, which provide unambiguous perpendicular and parallel motion cues at those points, allowing the visual system to infer the true trajectory. Neurally, the aperture problem manifests in the primary visual cortex (V1), where direction-selective neurons with small receptive fields (~0.5–2° in diameter) respond primarily to the local perpendicular motion component of oriented contours, encoding ambiguous 1D signals rather than the full 2D object velocity. These responses thus reflect component motion, susceptible to aperture-like limitations, while veridical direction perception emerges through subsequent cortical processing that incorporates broader contextual integration. The aperture problem is ultimately addressed via motion integration mechanisms that pool these local signals across the visual field.
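The intersection-of-constraints idea can be written out directly: each grating constrains only the velocity component along its normal, and two non-parallel constraints pin down the full 2D velocity. A minimal sketch, where the normals and the true velocity are made-up example values:

```python
import numpy as np

def intersection_of_constraints(normals, speeds):
    """Solve v . n_i = s_i for the full 2D velocity, given the normal
    (perpendicular) speeds measured through apertures."""
    N = np.asarray(normals, dtype=float)   # one unit normal per row
    s = np.asarray(speeds, dtype=float)
    v, *_ = np.linalg.lstsq(N, s, rcond=None)
    return v

# Hypothetical plaid: two gratings 45 degrees apart, both carried by the
# same underlying pattern velocity v_true (in deg/s).
v_true = np.array([2.0, 1.0])
n1 = np.array([1.0, 0.0])                       # vertical grating's normal
n2 = np.array([np.sqrt(0.5), np.sqrt(0.5)])     # oblique grating's normal
s1, s2 = v_true @ n1, v_true @ n2               # only normal speeds observed

v_est = intersection_of_constraints([n1, n2], [s1, s2])
assert np.allclose(v_est, v_true)
# With a single grating the system is underdetermined (the aperture
# problem): any v satisfying v @ n1 == s1 matches the measurement.
```

Each constraint is a line in velocity space; the estimate is their intersection, which is why a lone grating (one line, infinitely many consistent velocities) is inherently ambiguous.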

Motion Integration Across Visual Field

The visual system integrates local motion signals across the visual field to construct a unified percept of global motion, resolving ambiguities in direction and speed that arise from the limited receptive fields of individual neurons. This process involves pooling mechanisms in early visual areas such as V1 and V2, where surround modulation enhances or suppresses responses based on contextual motion in extraclassical receptive fields, allowing for the combination of signals over larger spatial extents. In higher areas like the medial superior temporal (MST) region, neurons exhibit wide-field receptive fields that integrate motion over extensive portions of the visual field, contributing to the perception of complex patterns such as optic flow. Such integration is essential for overcoming local directional uncertainties, like those posed by the aperture problem, by synthesizing information from multiple neighboring regions. For stimuli like plaid patterns, composed of overlapping gratings moving in different directions, the visual system employs mechanisms such as vector averaging, which computes the global direction as the resultant vector of component motions, or winner-take-all competition, where the dominant direction suppresses alternatives. Vector averaging predominates for balanced plaids with similar component speeds and contrasts, yielding a direction bisecting the components, while winner-take-all emerges when one component is stronger, leading to perception aligned with the prevailing motion. These strategies are observed in neural responses in area MT, where population activity reflects the integrated percept. Bayesian integration provides a probabilistic framework for combining these local signals, weighting each by its reliability—such as coherence or contrast—to optimize the estimate of global motion under uncertainty. This approach accounts for robust perception in noisy environments, where more reliable cues exert greater influence on the final percept, aligning with optimal inference principles.
Neural implementations may occur through recurrent feedback connections in MT and MST, modulating responses based on contextual reliability. Psychophysically, integration is demonstrated in random dot kinematograms (RDKs), where coherent motion detection requires approximately 10-30% of dots moving in the same direction amid noise, with thresholds varying by eccentricity and speed. Occlusion cues, such as T-junctions at boundaries, further aid integration by signaling figure-ground relationships, disambiguating motion continuity across occluders and enhancing global coherence. Computationally, the intersection of constraints (IOC) model resolves ambiguities by finding the velocity consistent with multiple local measurements, such as from line ends and intersections, effectively pooling constraints to recover true object motion.
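For independent Gaussian cues, the reliability-weighted pooling described above has a simple closed form: each local estimate is weighted by its inverse variance (precision). A hedged sketch, with velocities and noise levels invented for illustration:

```python
import numpy as np

def fuse_motion_cues(velocities, sigmas):
    """Precision-weighted combination of independent Gaussian velocity
    estimates; returns the fused mean and the fused uncertainty."""
    v = np.asarray(velocities, dtype=float)
    prec = 1.0 / np.asarray(sigmas, dtype=float) ** 2   # reliabilities
    mean = (prec[:, None] * v).sum(axis=0) / prec.sum()
    return mean, np.sqrt(1.0 / prec.sum())

# A reliable high-contrast cue and a noisy peripheral cue (made-up values):
v_fused, sigma_fused = fuse_motion_cues([(1.0, 0.0), (0.0, 1.0)],
                                        sigmas=[0.5, 2.0])

# The fused estimate sits much closer to the reliable cue, and its
# uncertainty is lower than that of either cue alone.
assert np.linalg.norm(v_fused - (1, 0)) < np.linalg.norm(v_fused - (0, 1))
assert sigma_fused < 0.5
```

The second assertion illustrates why integration pays off: combining even a poor cue with a good one always reduces the posterior uncertainty under this Gaussian model.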

Advanced Motion Phenomena

Motion in Depth

Motion in depth refers to the perception of objects moving toward or away from the observer along the line of sight, which is crucial for estimating three-dimensional trajectories from two-dimensional retinal images. This process relies on dynamic changes in visual cues over time, distinguishing it from lateral motion perception by incorporating depth information to infer trajectory components in depth. Unlike planar motion, motion in depth activates specialized neural pathways that integrate disparity and flow signals to support behaviors such as obstacle avoidance. Key cues for motion in depth include binocular and monocular sources. Binocular disparity change provides a primary cue, where the relative horizontal offset between the eyes' views alters as an object approaches or recedes, often manifesting as retinal expansion (looming) for approaching objects or contraction for receding ones. Monocular cues encompass optic flow patterns, such as radial expansion (flow radiating from a focus of expansion) indicating approach, and motion parallax, where observer or object translation causes differential image motion across the retina to signal relative depth and speed. These cues can operate independently but are often combined for robust depth estimation, with motion parallax particularly effective during self-motion. Neural processing of motion in depth involves areas beyond primary motion detectors. In the middle temporal (MT) area, approximately half of neurons exhibit selectivity for motion-in-depth direction, responding to radial patterns like expansion or contraction through sensitivity to changing disparities or speed gradients. Higher-order regions such as V3A and the ventral intraparietal area (VIP) feature disparity-tuned cells that couple depth signals with velocity, enabling integration of disparity and motion for precise depth-speed judgments. VIP neurons, in particular, show tuning for near disparities and optic flow, supporting egocentric distance encoding during navigation. Psychophysical studies reveal how these cues inform time-to-collision (TTC) estimation, a core aspect of motion in depth.
Tau theory posits that TTC (\tau) can be directly perceived from the optic flow as the ratio of an object's angular size (\theta) to its rate of change (\dot{\theta}): \tau = \frac{\theta}{\dot{\theta}}. This provides an approximation of time until contact without explicit distance computation, as validated in braking tasks where drivers scale responses to tau values. Illusions like the Aubert-Fleischl phenomenon extend to depth, where pursued motion in depth leads to overestimation of approaching speed compared to fixated viewing, due to underestimation of extra-retinal eye movement signals. In applications, motion in depth underpins navigation and collision avoidance, such as estimating time to contact for braking in driving scenarios, where underestimation of TTC enhances safety margins by prompting earlier responses. This perceptual mechanism, honed through integrated cues, facilitates real-time collision risk assessment in dynamic environments.
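The tau relation is easy to verify numerically: for an object approaching at constant speed, \theta / \dot{\theta} equals distance divided by speed (the true time to contact) up to a small-angle error, even though neither distance nor speed is known individually. The object width, distance, and speed below are arbitrary example values:

```python
import numpy as np

S, D, v = 2.0, 50.0, 10.0   # object width (m), distance (m), approach speed (m/s)
dt = 1e-4                    # small interval for a numerical derivative

def angular_size(d):
    """Visual angle (radians) subtended by a width-S object at distance d."""
    return 2 * np.arctan(S / (2 * d))

theta = angular_size(D)
theta_dot = (angular_size(D - v * dt) - theta) / dt   # rate of expansion
tau = theta / theta_dot

# tau approximates the true time to contact D / v = 5 s, computed purely
# from optical quantities (angular size and its rate of change).
assert abs(tau - D / v) / (D / v) < 0.01
```

The point is that an observer tracking only the retinal expansion rate can time a braking response without ever recovering metric distance or speed.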

Biological Motion Perception

Biological motion perception refers to the visual system's ability to recognize and interpret movements that are characteristic of living organisms, such as the coordinated limb actions during walking or gesturing, even from highly abstracted stimuli. Pioneering work by Gunnar Johansson in 1973 introduced point-light displays, where small lights are attached to the major joints of a performer (typically 10-12 points corresponding to joints like elbows, knees, and hips), filmed in darkness to isolate motion cues. These displays reveal coherent patterns of form and action, conveying biological invariants such as the oscillatory trajectories of limbs and the pendulum-like swing of legs, allowing viewers to readily perceive a walking figure despite the absence of static form or contour information. This perception arises through form-from-motion processing, where the visual system integrates local dot trajectories over time to construct a global representation of the moving body. The mechanisms underlying biological motion perception demonstrate remarkable sensitivity and robustness. Observers can detect these patterns even when the signal dots are masked by dynamic noise dots moving randomly, with adult thresholds typically requiring only about 8-15% coherent signal dots for reliable detection at 75-80% accuracy. In developmental terms, sensitivity emerges early, with infants around 3-4 months of age showing preferences for upright point-light walkers over inverted or scrambled versions, indicating an innate bias for biological configurations. Perceptual learning can further enhance these thresholds, as targeted training improves detection performance in noisy conditions. At the neural level, biological motion engages specialized regions, particularly the superior temporal sulcus (STS), which supports the attribution of social intent and agency to moving forms. Neurons in the STS of nonhuman primates respond selectively to point-light displays, integrating motion with implied form cues like walking direction or emotional gestures.
Recent models frame this processing within predictive coding frameworks, where innate priors for animacy—shaped by evolutionary pressures—facilitate rapid detection by anticipating coherent biological patterns amid ambiguity, as evidenced in studies using cued point-light tasks. This perceptual specialization holds evolutionary and applied significance, conserved across species including nonhuman primates and birds, where similar sensitivities aid survival by distinguishing animate agents. In humans, variations appear in autism spectrum disorder (ASD), with meta-analyses revealing consistent impairments in interpreting emotional or intentional aspects of biological motion, though basic detection may remain intact. These differences underscore biological motion's role in social cognition, linking perceptual deficits to broader challenges in understanding others' actions.

Learning and Plasticity

Perceptual Learning of Motion

Perceptual learning of motion refers to the enhancement of motion detection and discrimination abilities through repeated exposure and practice with motion stimuli, leading to task-specific improvements in the adult visual system. Classic experiments by Ball and Sekuler demonstrated that training on direction discrimination tasks results in a gradual, specific improvement in distinguishing subtle differences between motion directions, with effects persisting for months after training. These gains are highly specific to the trained direction of motion, indicating that learning refines neural representations for particular stimulus features rather than enhancing general motion sensitivity. In modern paradigms using random dot kinematograms (RDKs), training reduces motion coherence thresholds—the minimum percentage of coherently moving dots required for accurate direction judgments—by 30-40% over multiple sessions, reflecting improved integration of motion signals into global percepts. Such improvements are task-specific and show limited transfer to untrained conditions; for instance, gains from training on high-coherence RDKs transfer only partially (around 70%) to lower-coherence versions or similar speeds and orientations, but not to dissimilar ones. Recent studies from 2023-2024 have extended this to biological motion, showing that visuomotor experience, such as in vertical dancers, enhances sensitivity to point-light displays by overcoming perceptual challenges like inversion effects, thereby improving detection of action cues relevant to social interactions. The time course of motion perceptual learning includes rapid within-session gains occurring over hours of practice, alongside long-term retention lasting weeks, as evidenced by sustained behavioral thresholds and neural decoding accuracy two weeks post-training. These changes involve Hebbian plasticity in early visual areas like V1, where repeated co-activation strengthens synaptic connections tuned to motion features.
Factors influencing learning include age, with stronger effects in youth due to superior implicit processing of motion stimuli, and the necessity of attention, which ensures specificity by suppressing irrelevant features during training.
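Coherence thresholds like those discussed above are commonly estimated with adaptive staircases. The sketch below runs a generic 2-down/1-up procedure on a simulated observer; the psychometric function and all its parameters are invented for illustration and do not come from the studies cited:

```python
import math
import random

random.seed(0)

def observer(coherence, c50=0.2, slope=0.05):
    """Simulated observer: probability correct rises from 0.5 (guessing)
    toward 1.0 with RDK coherence (logistic, invented parameters)."""
    return 0.5 + 0.5 / (1 + math.exp(-(coherence - c50) / slope))

def staircase(n_trials=2000, start=0.8, step=0.02):
    """2-down/1-up rule: two consecutive correct responses lower the
    coherence, one error raises it; converges near the ~70.7%-correct
    point of the psychometric function."""
    c, streak, track = start, 0, []
    for _ in range(n_trials):
        correct = random.random() < observer(c)
        if correct:
            streak += 1
            if streak == 2:
                c, streak = max(c - step, 0.0), 0
        else:
            c, streak = min(c + step, 1.0), 0
        track.append(c)
    return sum(track[-500:]) / 500   # average late trials as the threshold

threshold = staircase()
# For these invented parameters the 70.7%-correct point is near 0.18.
assert 0.08 < threshold < 0.30
```

Running the same staircase before and after training sessions is the standard way the 30-40% threshold reductions mentioned above are quantified.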

Neural Plasticity in Motion Perception

Neural plasticity in motion perception involves synaptic and circuit-level adaptations that refine processing in visual areas such as the middle temporal (MT) area, enabling improved discrimination of motion attributes like direction and speed. One key mechanism is Hebbian long-term potentiation (LTP) in MT, which strengthens connections to refine speed tuning curves in motion-sensitive neurons, allowing for more precise encoding of stimulus velocities following adaptive experiences. Complementing this, homeostatic plasticity adjusts synaptic strengths across MT circuits to preserve overall stability after plasticity-induced changes, preventing excessive excitation or silencing that could disrupt motion discrimination. These processes underlie observable perceptual learning outcomes in motion tasks, where repeated exposure leads to enhanced behavioral sensitivity. Evidence from electrophysiological studies and computational models demonstrates MT retuning after motion exposure, with shifts in preferred speeds and directions persisting for weeks, as simulated by spike-timing-dependent plasticity rules. Functional MRI (fMRI) further reveals changes in blood-oxygen-level-dependent (BOLD) signals in V1 and MT following motion direction training, with reduced activation in MT but increased or less reduced activation in V1 for trained directions, indicating circuit reorganization for better motion coherence detection. These findings highlight how plasticity refines motion processing at multiple cortical stages without altering basic tuning properties. At the molecular level, N-methyl-D-aspartate (NMDA) receptors mediate calcium influx critical for LTP induction in visual pathways, facilitating synaptic strengthening in motion-sensitive circuits during adaptive refinement. Brain-derived neurotrophic factor (BDNF) upregulation supports structural plasticity by promoting dendritic growth and stabilization in these pathways, enhancing connectivity for sustained motion encoding improvements.
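The interplay of Hebbian strengthening and homeostatic normalization can be caricatured in a few lines. This is a toy model with invented numbers, not a simulation from the literature: repeated exposure to one direction strengthens the synapses it co-activates, while renormalizing the total weight keeps the circuit stable.

```python
import numpy as np

n_dirs, trained, eta = 8, 2, 0.1
weights = np.ones(n_dirs) / n_dirs        # uniform initial direction weights

def input_activity(direction):
    """Population activity peaked at the presented direction (circular)."""
    idx = np.arange(n_dirs)
    d = np.minimum((idx - direction) % n_dirs, (direction - idx) % n_dirs)
    return np.exp(-0.5 * d ** 2)

for _ in range(100):                       # repeated exposure to one direction
    pre = input_activity(trained)
    post = weights @ pre                   # postsynaptic response
    weights += eta * pre * post            # Hebbian: co-activity strengthens
    weights /= weights.sum()               # homeostatic: conserve total weight

assert weights.argmax() == trained         # tuning sharpened toward exposure
assert weights[trained] > 1.0 / n_dirs
```

Without the normalization step the trained weight would grow without bound; with it, strengthening one input necessarily weakens the others, which is the stabilizing trade-off homeostatic plasticity is thought to provide.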
In pathological contexts, motion training protocols leverage this plasticity for rehabilitation; in amblyopia, targeted motion stimuli restore binocular motion integration by reinstating cortical balance in V1 and MT, improving function in adults. Similarly, post-stroke rehabilitation using motion-based training induces neuroplastic changes in motor-visual circuits, aiding recovery of motion-guided actions through enhanced BOLD responses in affected areas. Recent 2025 research further indicates that perceptual learning rewires brain connectivity, enhancing motion processing through changes in neural circuits.

Cognitive and Higher-Level Aspects

Cognitive Influences on Motion Perception

Cognitive influences on motion perception arise from top-down processes that modulate low-level sensory signals, enhancing or biasing the interpretation of dynamic visual information. Spatial attention, often described as a "spotlight" mechanism, selectively improves motion resolution in attended regions by reducing discrimination thresholds for direction and speed. For instance, exogenous cues directing attention to a specific location can lower thresholds, allowing finer-grained processing of velocity changes compared to unattended areas. In contrast, attention divided across multiple stimuli impairs the integration of motion signals, leading to reduced accuracy in perceiving coherent trajectories and increased errors in global motion judgments, particularly under high perceptual load. Expectations and prior knowledge further shape motion perception through Bayesian frameworks, where internal models predict sensory input based on probabilistic priors, minimizing prediction errors. In biological motion contexts, priors for animate actions—such as self-propelled movements against gravity—bias observers toward interpreting ambiguous dot patterns as intentional behaviors, facilitating rapid social inference. These priors are evident in enhanced sensitivity to point-light displays depicting human actions, where violations of expected kinematics (e.g., unnatural limb trajectories) elicit stronger neural prediction errors in occipitotemporal regions. Similarly, cognitive maps integrate self-motion cues via path integration during navigation, updating spatial representations to estimate heading and distance traveled, even in the absence of landmarks; this process relies on vestibular and proprioceptive priors to maintain allocentric accuracy. Interactions between object recognition in the ventral stream and motion processing in the dorsal stream exemplify higher-level modulation, where semantic knowledge alters the grouping of dynamic elements.
For example, prior identification of objects as coherent entities (e.g., a moving face) influences the perceptual binding of local motion vectors, promoting holistic trajectory perception over fragmented signals. Cultural and linguistic factors also impose biases, as languages with grammatical aspect marking (e.g., English) versus those without (e.g., German) lead to differential attention to endpoints versus trajectories in motion events, with speakers showing distinct event-related potentials, such as P3 wave amplitudes, when viewing motion animations. Recent work highlights animate motion priors in social contexts, where expectations of intentionality enhance detection of subtle biological cues, such as gravitational influences on movement, underscoring the role of developmental and experiential tuning in perceptual biases.
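The Bayesian framing sketched earlier in this section can be illustrated with Gaussians (all numbers invented): a prior favoring slow speeds pulls the percept toward zero more strongly when the sensory likelihood is broad, as it is for low-contrast stimuli.

```python
def posterior_mean(like_mean, like_sigma, prior_mean=0.0, prior_sigma=4.0):
    """Gaussian prior x Gaussian likelihood -> posterior mean, weighting
    each term by its precision (inverse variance)."""
    wl = 1.0 / like_sigma ** 2
    wp = 1.0 / prior_sigma ** 2
    return (wl * like_mean + wp * prior_mean) / (wl + wp)

true_speed = 10.0
high_contrast = posterior_mean(true_speed, like_sigma=1.0)   # sharp likelihood
low_contrast = posterior_mean(true_speed, like_sigma=4.0)    # broad likelihood

# Both estimates are biased toward the slow prior, the low-contrast one
# more so — consistent with slower perceived speed at low contrast.
assert low_contrast < high_contrast < true_speed
```

This two-line arithmetic is the core of prior-based accounts of motion biases: the same prior produces small biases when the data are reliable and large ones when they are not.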

Illusions and Perceptual Biases in Motion

Motion perception is susceptible to various illusions that highlight systematic errors and biases in how the visual system processes dynamic stimuli. One prominent example is the motion aftereffect, commonly demonstrated by the waterfall illusion, where prolonged exposure to a stimulus moving in one direction, such as cascading water, causes a subsequently viewed stationary or oppositely moving scene to appear to drift in the opposite direction. This illusion arises from adaptation to the initial motion, leading to a rebound that reveals the directional selectivity of motion-processing mechanisms. Similarly, the Duncker illusion, or induced motion, occurs when a stationary object appears to move because of motion in the surrounding background; for instance, a fixed point of light seems to shift when encircled by moving dots, as the visual system attributes motion to the target rather than the background. This effect demonstrates how contextual cues can override direct sensory input, creating false perceptions of object trajectory. Perceptual biases further illustrate these vulnerabilities, particularly in estimating speed and direction under competing or expansive stimulation. Observers tend to overestimate the speed of motion across large visual fields, such as expansive optic flow patterns exceeding 107 degrees of field of view, where peripheral stimulation amplifies perceived velocity compared to smaller displays. This bias is especially evident when central vision is occluded, resulting in systematic overestimation as the visual system interprets broad-field motion as faster to maintain perceptual stability. Direction repulsion represents another bias, occurring during motion transparency when two overlapping patterns move in slightly different directions (within about 60 degrees); the perceived direction of each component shifts away from the other, distorting the overall motion estimate. This repulsion effect is prominent in random-dot kinematograms and underscores how concurrent motions interact to bias directional judgments.
These illusions and biases stem from underlying neural and ecological principles that prioritize certain interpretations for adaptive processing. The motion aftereffect, for example, results from selective adaptation of direction-tuned neurons in the visual cortex, where prolonged stimulation fatigues cells responsive to the adapting direction, causing a rebound imbalance that favors opposite motion upon retesting. In induced-motion scenarios like the Duncker illusion, the visual system exhibits an ecological bias toward parsimonious explanations, often attributing large-scale background shifts to self-motion rather than environmental change, as seen in vection, where expansive optic flow induces a compelling sensation of self-displacement. Such biases reflect evolutionary adaptations, favoring interpretations that align with typical real-world scenarios, like assuming self-movement during vection to resolve ambiguous input efficiently. Recent neurophysiological research has explored how population coding in visual areas mitigates some of these illusions by distributing representations across ensembles of neurons, allowing contextual integration to reduce directional errors. For instance, studies in 2024 have shown that population activity encodes perceptual certainty through gain variability (r = 0.81 correlation), enabling the system to weigh ambiguous visual signals and partially counteract perceptual biases. This mechanism highlights how collective neural responses, rather than isolated cells, contribute to robust motion perception despite inherent illusions.
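A toy population-vector decoder illustrates how adaptation-induced gain changes of the kind described above can bias the decoded direction opposite the adapted one, as in the motion aftereffect. The number of neurons, tuning profile, and amount of gain suppression are illustrative assumptions:

```python
import math

N = 16
prefs = [2 * math.pi * i / N for i in range(N)]  # preferred directions of the ensemble

def decode(gains):
    """Population-vector decode: gain-weighted sum of preferred-direction unit vectors."""
    x = sum(g * math.cos(p) for g, p in zip(gains, prefs))
    y = sum(g * math.sin(p) for g, p in zip(gains, prefs))
    return math.degrees(math.atan2(y, x)) % 360

# Adaptation at 90 deg suppresses the gain of neurons tuned near that direction.
adapt = math.pi / 2
gains = [1.0 - 0.5 * math.exp(math.cos(p - adapt) - 1.0) for p in prefs]

# With otherwise balanced drive, the weakened 90-deg neurons leave a net
# population vector near 270 deg, i.e. opposite the adapted direction.
print(round(decode(gains)))  # 270
```

The point of the sketch is that the aftereffect falls out of the decoder, not out of any single cell: every neuron still responds, but the gain imbalance shifts the population readout.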

Neural Mechanisms

Direction-Selective Cells in the Visual System

Direction-selective cells are specialized neurons in the visual system that respond preferentially to stimuli moving in a particular direction, playing a crucial role in the early encoding of motion signals. These cells typically display narrow directional tuning curves, with widths of about 45-90 degrees, allowing precise representation of motion trajectories. This selectivity arises from inhibitory mechanisms that suppress responses to motion in the opposite (null) direction, enabling the visual system to encode motion robustly across various speeds and contrasts. In vertebrates, direction-selective cells first emerge in the retina, where ganglion cells of ON and OFF subtypes respond to motion in specific directions. For instance, in mice, retinal ganglion cells demonstrate direction selectivity to local motion, with some ON direction-selective ganglion cells (DSGCs) preferring upward motion and other subtypes, including ventral-preferring ON DSGCs, handling downward motion, with tuning widths of around 90 degrees. These retinal outputs provide initial direction signals to higher visual areas. In the primary visual cortex (V1), end-stopped cells contribute excitatory input to direction-selective processing, though full direction selectivity is refined in extrastriate areas such as the middle temporal (MT) area, which features a hypercolumnar organization in which neurons collectively represent all motion directions through clustered receptive fields. MT neurons maintain narrow tuning similar to retinal cells but integrate over larger fields, supporting global motion perception. In insects, direction-selective cells are prominently found in the lobula plate of the optic lobe, where large tangential cells process wide-field motion critical for behaviors like flight stabilization.
A key example is the H1 neuron in flies, which responds selectively to horizontal wide-field motion in its preferred direction, with peak sensitivity to speeds up to 100 degrees per second, far exceeding typical processing ranges for such optic flow. These cells exhibit tuning widths of approximately 45-60 degrees and are organized to cover the full range of directions, contrasting with the more localized selectivity of vertebrate retinas. The development of direction-selective cells begins in the retina prior to cortical maturation in vertebrates, with selectivity appearing as early as postnatal day 11 in mice, driven by spontaneous activity patterns that refine circuitry before visual experience. While the molecular identities of these cells, such as their specific ion channels, underlie their function, the detailed synaptic bases are explored elsewhere.
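Directional tuning of roughly the widths quoted above can be sketched with a von Mises profile. The function and the concentration parameter `kappa` are illustrative choices, not a fitted model of any particular cell type:

```python
import math

def ds_response(stim_dir_deg, pref_deg=0.0, kappa=4.0):
    """Normalized von Mises direction tuning: response is 1 at the
    preferred direction and falls to near zero at the null direction."""
    d = math.radians(stim_dir_deg - pref_deg)
    return math.exp(kappa * (math.cos(d) - 1.0))

print(ds_response(0))    # 1.0 at the preferred direction
print(ds_response(180))  # exp(-2*kappa): near-complete null suppression
# kappa = 4 gives a full width at half height of roughly 68 degrees,
# within the tuning-width ranges described above.
```

Larger `kappa` narrows the curve; an ensemble of such units with preferred directions spanning 360 degrees is the usual starting point for population models of motion coding.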

Neurophysiological Models of Motion Detection

Neurophysiological models of motion detection provide computational frameworks to explain how direction-selective responses emerge in neural circuits processing spatiotemporal visual inputs. These models, inspired by observations of direction-selective cells in the visual system, simulate the transformation of spatiotemporal luminance patterns into directional signals through mechanisms such as delay-and-correlate operations and probabilistic inference. Seminal and contemporary theories emphasize the integration of local computations to achieve robust motion perception under varying conditions. The foundational Hassenstein-Reichardt (HR) model, developed in 1956 on the basis of behavioral studies in the beetle Chlorophanus, proposes that direction selectivity arises from a correlator circuit with spatial offset and temporal delay. In this model, inputs from two adjacent spatial locations, denoted I(x, t) and I(x + \Delta x, t), are processed such that for the preferred direction (e.g., left-to-right), the temporally delayed signal from the left input multiplies the undelayed signal from the right input: R_{+}(t) = I(x, t - \tau) \cdot I(x + \Delta x, t), where \tau represents a delay tuned to the expected motion speed, so that signals from a stimulus traversing the two locations arrive at the multiplier simultaneously. The opposite-direction subunit computes R_{-}(t) = I(x + \Delta x, t - \tau) \cdot I(x, t). The net directional response is then R(t) = R_{+}(t) - R_{-}(t), often followed by temporal low-pass filtering to smooth the output and enhance velocity tuning. This multiply-and-delay operation inherently confers direction opponency and has been mathematically derived to predict responses to drifting gratings and random-dot patterns. Direction-selective cells in visual areas serve as the empirical basis for validating such models. Elaborations on the HR framework address more complex stimuli. Linear-nonlinear (LN) models extend direction detection to second-order motion, such as contrast-modulated or flicker-defined patterns, by inserting a nonlinearity (e.g., half-wave rectification or squaring) before correlation; this extracts the modulating envelope, which is then fed into an HR-like detector for processing.
Bayesian models further incorporate sensory noise and priors, framing motion estimation as maximum a posteriori inference in which unreliable signals are weighted against expectations of smooth or ecologically plausible trajectories. Recent updates integrating predictive-coding ideas (2023-2024) emphasize hierarchical prediction errors, where top-down priors suppress noise in coherent motion, enhancing robustness in ambiguous scenes such as optic flow. These models yield testable predictions, such as the reverse-phi phenomenon, in which sequentially presented contrast decrements evoke motion in the direction opposite to increments; the HR correlator naturally accounts for this by sign-inverting the correlation output. Simulations of HR-based networks also replicate direction-tuned responses of middle temporal (MT) area neurons to plaid stimuli and random-dot motion. However, limitations include poor performance on non-periodic or aperiodic stimuli, where fixed delays mismatch stimulus velocities, leading to inaccurate speed tuning. Hybrid models mitigate this by incorporating divisive normalization, which divides the correlator output by pooled local activity to achieve contrast invariance and explain adaptation effects observed in neural responses.
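A minimal HR correlator along these lines can be simulated directly; the one-frame delay (standing in for \tau), one-pixel receptor spacing, and single-bright-pixel stimulus are simplifying assumptions. The sign of the summed output distinguishes preferred from null motion, and flipping the stimulus contrast on every frame reproduces the reverse-phi sign inversion:

```python
def hr_response(frames, x=5, dx=1):
    """Net HR output summed over time:
    R(t) = I(x, t-1)*I(x+dx, t) - I(x+dx, t-1)*I(x, t)."""
    total = 0.0
    for t in range(1, len(frames)):
        r_plus = frames[t - 1][x] * frames[t][x + dx]    # delayed left x undelayed right
        r_minus = frames[t - 1][x + dx] * frames[t][x]   # mirror-symmetric subunit
        total += r_plus - r_minus
    return total

def moving_bar(n=12, length=10, step=+1):
    """A single bright pixel stepping across a dark background, one pixel per frame."""
    frames, pos = [], (2 if step > 0 else length - 3)
    for _ in range(n):
        f = [0.0] * length
        f[pos % length] = 1.0
        frames.append(f)
        pos += step
    return frames

def reverse_phi(frames):
    """Flip stimulus contrast on every frame."""
    return [[v * (-1) ** t for v in f] for t, f in enumerate(frames)]

print(hr_response(moving_bar(step=+1)))               # positive: rightward (preferred)
print(hr_response(moving_bar(step=-1)))               # negative: leftward (null)
print(hr_response(reverse_phi(moving_bar(step=+1))))  # negative: reverse-phi inversion
```

Because the delay is fixed at one frame, the detector is velocity-tuned rather than velocity-measuring, which is exactly the aperiodic-stimulus limitation noted above.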

Molecular and Synaptic Basis of Motion Selectivity

In the mammalian retina, direction selectivity emerges primarily through synaptic interactions in the inner plexiform layer involving starburst amacrine cells (SACs), which provide inhibition to direction-selective ganglion cells (DSGCs). SACs release GABA in a directionally biased manner, with stronger inhibition occurring in the null direction of motion, suppressing responses to stimuli moving opposite to the preferred direction. This inhibition is mediated by directionally selective calcium signals in SAC dendrites, where centrifugal motion outward from the soma triggers amplified calcium influx and transmitter release at dendritic tips, while centripetal motion elicits weaker responses. Null-direction suppression is further achieved via shunting inhibition, in which GABAergic conductance from SACs reduces the efficacy of excitatory inputs without hyperpolarizing the membrane, effectively gating signals in the non-preferred direction. Molecular markers distinguish DSGC subtypes, enabling precise circuit assembly. In mice, ON-DSGCs selectively express cadherin-6 (Cdh6), a cell-adhesion molecule that promotes dendritic targeting and synaptic specificity within the direction-selective circuitry. Other type II cadherins, such as Cdh7, Cdh8, Cdh9, Cdh10, and Cdh18, are expressed across DSGCs and their presynaptic partners, facilitating laminar segregation and subtype-specific connectivity in the inner plexiform layer. Optogenetic manipulations have confirmed these roles; for instance, targeted activation or silencing of SACs using channelrhodopsin or halorhodopsin disrupts direction selectivity in DSGCs, demonstrating that cadherin-mediated connectivity is essential for maintaining inhibitory inputs from SACs to specific DSGC subtypes. Synaptic dynamics at SAC-DSGC junctions contribute to the temporal asymmetry underlying motion detection.
SAC dendrites exhibit functionally asymmetric branching, with each radial sector processing motion along a roughly cardinal direction through spatially biased glutamate input from bipolar cells, leading to nonlinear integration along the dendrite. This asymmetry arises from the spatiotemporal properties of excitatory inputs, where faster receptor-mediated currents align with preferred-direction motion while slower components predominate in the null direction, creating a delay that amplifies selectivity. In SACs, "silent" synapses, which lack functional AMPA receptors, enhance motion sensitivity by providing voltage-dependent temporal filtering, allowing the coincident excitation-inhibition timing critical for direction computation. Recent advances highlight the role of electrical coupling in refining direction-selective networks. A 2024 study revealed that gap-junction coupling between glycinergic amacrine cells and DSGCs enables directional signaling beyond classical receptive fields, contributing to surround modulation and robust motion encoding in noisy environments. Additionally, comparative analyses have uncovered homologies between vertebrate and insect motion detection, where SAC-like chiral signaling in mammalian retinas parallels the asymmetric inhibitory motifs in fly T4/T5 neurons, suggesting conserved molecular principles for direction selectivity across phyla.
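The divisive character of shunting inhibition described above can be illustrated with a steady-state, single-compartment conductance model. The conductance and reversal-potential values are illustrative assumptions; potentials are measured relative to rest, so an inhibitory reversal at rest yields a pure shunt:

```python
def steady_v(g_e, g_i, g_leak=1.0, e_exc=70.0, e_inh=0.0):
    """Steady-state depolarization from rest (mV) of a single compartment:
    V = (g_e*E_e + g_i*E_i) / (g_leak + g_e + g_i).
    With E_i at rest (0), inhibition only adds conductance to the
    denominator, dividing the EPSP without ever hyperpolarizing."""
    return (g_e * e_exc + g_i * e_inh) / (g_leak + g_e + g_i)

print(steady_v(g_e=0.5, g_i=0.0))  # preferred direction: little inhibition
print(steady_v(g_e=0.5, g_i=2.0))  # null direction: the shunt divides the EPSP
```

With these numbers the same excitatory input is cut from about 23 mV to 10 mV of depolarization, and the response never goes below rest, which is the signature that distinguishes shunting from hyperpolarizing inhibition.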

    Jun 17, 2024 · Neural population activity in sensory cortex informs our perceptual interpretation of the environment. Oftentimes, this population activity will ...