
Microexpression


Microexpressions are involuntary, brief facial expressions lasting approximately 0.04 to 0.20 seconds that reveal genuine emotions despite conscious efforts to suppress or mask them.
Pioneered by psychologist Paul Ekman through research on displays of emotion in the 1960s and 1970s, these expressions are theorized to arise from rapid, subcortical neural processing that circumvents voluntary control, potentially linked to structures like the amygdala.
Ekman's work established microexpressions as part of a set of universal emotional signals, with empirical studies confirming their fleeting duration and recognizability under laboratory conditions, though detection rates without training remain low, often below chance in naturalistic settings.
Applications include security screening and deception detection, where training programs have demonstrated modest improvements in identification accuracy; however, controversies persist regarding their practical reliability for lie detection, as real-world evidence shows they occur infrequently and are confounded by cultural, contextual, and individual variability.

Definition and Characteristics

Core Definition and Duration

Microexpressions are brief, involuntary facial movements that reveal concealed emotions, typically lasting from 1/25 to 1/5 of a second (0.04 to 0.20 seconds). These expressions occur when an individual suppresses a genuine emotional response, resulting in a fleeting display that contrasts with the intended neutral or alternative demeanor. The brevity stems from rapid contractions of specific facial muscles, which activate and subside before full conscious control can intervene. The physiological basis involves spontaneous neural signals from emotional processing centers, leading to these muscle activations without voluntary modulation. Empirical studies using high-speed cameras have verified this duration range, showing that expressions shorter than 1/25 of a second are often imperceptible to the naked eye, while those up to 1/5 of a second can be recognized under controlled conditions. This short timeframe reflects the limits of cortical inhibition over subcortical emotional pathways, allowing the leakage of authentic affective states despite efforts at concealment. Detection relies on the measurable speed of facial action units, where muscle movements are quantified in milliseconds, distinguishing microexpressions from deliberate or prolonged displays. Research confirms that these durations hold across attempts to mask incongruent emotions, such as fear or anger hidden behind a smile, providing a verifiable marker of underlying affective incongruence.
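The duration boundaries described above can be summarized in a short sketch; the threshold constants mirror the figures in this section, while the function and label names are illustrative:

```python
# Minimal sketch of the duration boundaries cited in this section.
# Thresholds: 1/25 s (lower perceptual bound), 1/5 s (microexpression
# ceiling), 0.5 s (typical macroexpression floor). Labels are illustrative.

MICRO_MIN_S = 1 / 25   # 0.04 s: shorter displays are often imperceptible
MICRO_MAX_S = 1 / 5    # 0.20 s: upper bound commonly cited for microexpressions
MACRO_MIN_S = 0.5      # macroexpressions typically last 0.5 s or longer

def classify_duration(duration_s: float) -> str:
    """Label an expression by duration alone (a simplification: real
    coding also weighs intensity, AU composition, and voluntariness)."""
    if duration_s < MICRO_MIN_S:
        return "subthreshold"      # too brief for reliable human perception
    if duration_s <= MICRO_MAX_S:
        return "microexpression"
    if duration_s >= MACRO_MIN_S:
        return "macroexpression"
    return "intermediate"          # 0.2-0.5 s: ambiguous zone

print(classify_duration(0.12))  # microexpression
```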

Relation to Facial Muscles and Involuntariness

Microexpressions result from rapid, fleeting contractions of underlying facial muscles, which are anatomically discrete and coded as action units (AUs) in the Facial Action Coding System (FACS), a taxonomy developed by Paul Ekman and Wallace Friesen in 1978 to decompose expressions into their biomechanical components. For instance, the fear microexpression characteristically activates AU1, involving elevation of the inner eyebrows via the frontalis pars medialis muscle, frequently alongside AU2 (outer brow raise by frontalis pars lateralis) and AU5 (levators widening the eyes), producing a brief widening of the orbital region essential for signaling vigilance. These muscle activations follow first principles of facial anatomy, where zygomatic, orbicularis, and frontalis groups enable precise, low-amplitude movements that evade sustained voluntary recruitment due to their brevity, typically under 0.5 seconds. Their involuntariness stems from subcortically driven impulses along the extrapyramidal neural tract, which originates in limbic structures like the amygdala and projects directly to brainstem facial motor nuclei, circumventing the cortical pyramidal tract responsible for deliberate control. This pathway generates ballistic-like, synchronized muscle firing that resists suppression, as emotional elicitation precedes conscious damping—evident in attempts to conceal feelings, where microexpressions emerge as uncontrolled "leakage" despite inhibitory efforts. By contrast, macroexpressions, lasting longer and amenable to posing or inhibition, rely more on pyramidal tract modulation from the motor cortex, allowing greater volitional override through learned facial masking. This hardwired involuntariness aligns with evolutionary pressures for authentic signaling, particularly in threat detection, mirroring innate primate displays where rapid, unfeigned facial configurations—such as eyebrow raises and eye exposures in response to conspecific threats—facilitate immediate group alertness without opportunity for concealment.
Such mechanisms, conserved across primates, underscore a causal constraint on emotional signaling: complete voluntary suppression would undermine adaptive honesty verification, as partial leakage preserves the informational value of true affective states for survival in cooperative hierarchies.

Distinction from Macroexpressions and Moods

Microexpressions are distinguished from macroexpressions primarily by their brevity and involuntariness, with the former lasting approximately 0.04 to 0.2 seconds—often less than 0.5 seconds—while macroexpressions endure for 0.5 seconds or longer and are frequently subject to voluntary control. This temporal boundary arises from microexpressions' role as involuntary "leaks" of concealed emotions, rooted in subcortical neural pathways that bypass deliberate modulation, whereas macroexpressions engage cortical processes allowing for posing or masking. Empirical studies using high-speed video in controlled settings confirm that durations below 0.25 seconds yield lower recognition accuracy, underscoring the perceptual challenge tied to such rapidity, as opposed to the more deliberate extension of macroexpressions. Laboratory experiments further differentiate the two by contrasting spontaneous versus induced displays: in deception paradigms, microexpressions of negative emotions (e.g., fear or disgust) emerge briefly during incongruent emotional states, correlating with concealed authenticity, while macroexpressions align with overt, potentially fabricated narratives. Event-related potential (ERP) data from EEG recordings reveal distinct neural mechanisms, with microexpressions eliciting earlier P1 and N170 components indicative of rapid, automatic processing, unlike the later, modulated responses to prolonged macroexpressions. These findings counter conflations in non-empirical accounts by demonstrating that duration modulates not just visibility but underlying neural processing. In contrast to moods, which constitute diffuse, sustained physiological states (often lasting minutes to hours) lacking facial signatures and tied to generalized arousal rather than categorical emotions, microexpressions manifest as specific, action-unit-coded bursts corresponding to basic emotions like fear or disgust.
Controlled affective induction tasks show no equivalent brief transients in mood states, as moods influence baseline expressivity without triggering the rapid muscular activations (e.g., via the zygomaticus or corrugator) characteristic of microexpressions; pop-psychology tendencies to blur these categories overlook experimental evidence that microexpressions' brevity is causally linked to momentary emotional overrides, not protracted sentiment. This demarcation preserves microexpressions' utility in pinpointing incongruent affects, unconfounded by mood's broader, less facially anchored nature.

Historical Development

Pre-Ekman Observations

Charles Darwin's 1872 book The Expression of the Emotions in Man and Animals provided early theoretical groundwork for understanding involuntary facial expressions as evolved, innate responses shared across species. Darwin argued that emotional displays, including rapid muscle contractions associated with fear or anger, serve adaptive functions and occur spontaneously, often resisting conscious control. He documented these through observations of humans and animals, emphasizing their universality and brevity in natural contexts, though photographic evidence proved challenging due to the fleeting nature of some actions. In the decades following, evolutionary and physiological theorists built on Darwin's ideas, positing that facial movements reflect nervous system activation, with brief expressions signaling genuine affective states amid attempts at suppression. However, mid-20th-century accounts in clinical and psychoanalytic literature described anecdotal "facial slips"—rapid, unintended glimpses of concealed emotions during therapy sessions or interviews—without quantification of timing or reliability. These observations suggested causal links between suppression and involuntary leakage but relied on subjective report, lacking controlled measurement or validation. Prior to the advent of video recording, no rigorous empirical data existed on expressions lasting fractions of a second, as available technologies such as still photography failed to capture such transience, underscoring the limitations of pre-video methodologies. This gap highlighted the necessity for systematic, data-driven approaches to distinguish involuntary micro-level signals from deliberate macroexpressions, setting the stage for later advancements in observational precision.

Paul Ekman's Foundational Work (1960s-1980s)

Paul Ekman initiated his seminal cross-cultural investigations in the mid-1960s, traveling to Papua New Guinea in 1967 and 1968 to examine facial expressions among the isolated South Fore people, who had minimal contact with Western media or photography. In these studies, Ekman showed posed photographs of basic emotions to tribal members, who recognized them with high accuracy, and elicited spontaneous expressions by having participants recount emotional events, yielding configurations matching those in literate societies for emotions like happiness, sadness, anger, fear, disgust, and surprise. These results provided empirical support for innate universals in emotional signaling, countering anthropological claims of purely learned displays, though the small sample sizes—often fewer than 100 participants from preliterate groups—constrained statistical power and broader cultural extrapolation. Building on this foundation, Ekman collaborated with Wallace V. Friesen in the 1970s to create the Facial Action Coding System (FACS), a manual first published in 1978 that decomposes facial behavior into anatomically distinct Action Units (AUs) based on underlying muscle activations, such as AU1 for inner brow raising. FACS enabled objective, replicable measurement of subtle movements, moving beyond subjective judgments and facilitating verifiable breakdowns of expressions into components, which proved instrumental for dissecting involuntary signals amid voluntary masking. Its rigorous coding required extensive rater training to minimize observer bias, yet early applications highlighted challenges in field settings with variable lighting and angles. By the 1980s, Ekman advanced detection of microexpressions—fleeting, involuntary glimpses of true emotion lasting under 0.5 seconds—through frame-by-frame analysis of slowed videotape from high-stakes interrogations and clinical interviews, where subjects attempted concealment under pressure.
These brief expressions, often action-unit specific like a flash of fear (AU1+2+4), revealed concealed affects in deceptive contexts, with Ekman noting their rarity (about 1 in 100 opportunities) but high diagnostic value when present. While pioneering empirical verifiability via slow-motion review, initial identifications depended on researcher judgment, introducing potential bias absent blinded protocols, and samples from forensic or therapeutic footage were non-random, limiting controlled validation.

Evolution of Research Tools and Frameworks

In the decades following Paul Ekman's initial documentation of microexpressions in the 1960s, researchers in the 1980s and 1990s refined the Facial Action Coding System (FACS), originally developed in 1978, to accommodate micro-duration coding by emphasizing manual, frame-by-frame dissection of video sequences. This adaptation addressed the challenge of capturing involuntary action units (AUs) lasting 0.05 to 0.5 seconds, requiring protocols for precise identification of onset, apex, and offset phases in low-intensity movements that standard FACS application overlooked in longer macroexpressions. Such refinements prioritized anatomical fidelity in coding subtle muscle activations, enabling causal linkage between specific AUs and underlying emotional impulses without relying on elicited poses. The integration of digital video technology during this period facilitated empirical validation of microexpression brevity through temporal frame analysis, shifting from anecdotal slow-motion reviews to quantifiable metrics derived from standard frame rates (initially 25-30 frames per second). By the late 2000s, high-speed imaging at 100-200 frames per second, as employed in early detection studies, allowed researchers to measure durations with sub-frame accuracy, confirming microexpressions' resistance to voluntary control via inconsistent AU blending compared to deliberate expressions. This data-oriented evolution supplanted speculative models, grounding claims in replicable observations of spontaneous footage rather than laboratory simulations. A pivotal advancement occurred in the 2010s with the creation of specialized databases for spontaneous microexpressions, providing standardized corpora for algorithm development and validation. The Chinese Academy of Sciences Micro-Expression (CASME) database, released in 2013, compiled 195 sequences from over 1,500 videos of neutralized faces, recorded at 60 fps to capture unposed leakage. Its successor, CASME II (2014), expanded to 247 high-quality clips at 200 fps with refined AU annotations via FACS, enhancing interoperability for benchmarking AU-emotion mappings.
These resources fostered frameworks emphasizing empirical rigor, such as optical flow-based strain analysis for validating AU dynamics, thereby prioritizing verifiable muscle causality over interpretive bias.
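The motivation for higher frame rates is simple arithmetic: the number of frames a microexpression spans scales with recording speed. A small sketch using the durations and frame rates mentioned in this section:

```python
# How many frames an expression of a given duration occupies at a given
# recording speed. At 25-30 fps a 0.04 s event can fall on a single frame,
# which is why the CASME (60 fps) and CASME II (200 fps) corpora moved to
# higher-speed capture.

def frames_spanned(duration_s: float, fps: int) -> int:
    """Approximate frame count covered by an event of length duration_s."""
    return round(duration_s * fps)

for fps in (25, 60, 200):  # standard video, CASME, CASME II
    print(fps, frames_spanned(0.20, fps), frames_spanned(0.04, fps))
```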

Types and Classification

Basic Emotion Categories

Microexpressions manifest the six basic emotions identified by Paul Ekman through empirical research on facial displays: anger, disgust, fear, happiness, sadness, and surprise. These categories emerged from studies demonstrating consistent recognition rates above chance in judgment tasks involving posed and spontaneous expressions, with facial muscle patterns reliably differentiating each emotion. Laboratory elicitation experiments, such as those using film clips to provoke specific affects, have shown that these brief, involuntary expressions replicate the action unit (AU) combinations of their prolonged counterparts, providing evidence for their emotional specificity. The Facial Action Coding System (FACS), developed by Ekman and Friesen, links each basic emotion to characteristic AU patterns derived from anatomical analysis and observational data.
| Emotion | Key Action Units | Associated Facial Features |
| --- | --- | --- |
| Anger | AU 4 (brow lowerer), AU 5 (upper lid raiser), AU 7 (lid tightener), AU 23 (lip tightener) | Lowered brows, narrowed eyes, pressed lips |
| Disgust | AU 9 (nose wrinkler), AU 10 (upper lip raiser), AU 17 (chin raiser) | Wrinkled nose, raised upper lip, sometimes protruded tongue |
| Fear | AU 1 (inner brow raiser), AU 2 (outer brow raiser), AU 4 (brow lowerer), AU 5 (upper lid raiser), AU 20 (lip stretcher), AU 26 (jaw drop) | Raised and drawn-together brows, widened eyes, dropped jaw, stretched mouth |
| Happiness | AU 6 (cheek raiser), AU 12 (lip corner puller) | Raised cheeks, crow's feet wrinkles, smiling mouth |
| Sadness | AU 1 (inner brow raiser), AU 4 (brow lowerer), AU 15 (lip corner depressor) | Raised inner brows forming omega shape, downturned mouth corners |
| Surprise | AU 1 (inner brow raiser), AU 2 (outer brow raiser), AU 5 (upper lid raiser), AU 26 (jaw drop) | Raised brows and eyelids, dropped jaw, open mouth |
These AU combinations exhibit consistency in electromyographic (EMG) recordings during emotion induction, underscoring their involuntary nature in microexpressions lasting under 500 milliseconds. While some studies propose expanding the categories to include contempt (often AU 12 or 14 unilaterally), characterized by a one-sided lip movement, its inclusion lacks the replicable cross-modal evidence of the core six, with recognition accuracy varying more across contexts and showing weaker ties to distinct physiological responses. Prioritizing evidence from controlled elicitation and FACS coding, the six-category model remains the most empirically robust for identifying microexpressions, though ongoing challenges question strict one-to-one mappings between facial movements and inferred emotions.
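The table above maps directly to data. The following sketch encodes each emotion's key AUs and scores an observed AU set by set overlap (Jaccard similarity); this matching rule is an illustrative simplification, not the FACS or Ekman scoring procedure:

```python
# Emotion-to-AU patterns transcribed from the table above (FACS AU numbers).
EMOTION_AUS = {
    "anger":     {4, 5, 7, 23},
    "disgust":   {9, 10, 17},
    "fear":      {1, 2, 4, 5, 20, 26},
    "happiness": {6, 12},
    "sadness":   {1, 4, 15},
    "surprise":  {1, 2, 5, 26},
}

def best_emotion_match(observed_aus):
    """Return (emotion, score) for the pattern that best overlaps the
    observed AUs, using Jaccard similarity as a toy matching rule."""
    def jaccard(a, b):
        return len(a & b) / len(a | b)
    scores = {emo: jaccard(observed_aus, aus) for emo, aus in EMOTION_AUS.items()}
    best = max(scores, key=scores.get)
    return best, scores[best]

print(best_emotion_match({6, 12}))           # ('happiness', 1.0)
print(best_emotion_match({1, 2, 5, 26})[0])  # surprise: exact pattern match
```

Note that fear and surprise share AU1, AU2, AU5, and AU26, which is why a partial observation (raised brows plus widened eyes alone) remains genuinely ambiguous between the two.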

Action Units in Microexpressions

Microexpressions are analyzed through the Facial Action Coding System (FACS), which decomposes facial movements into discrete Action Units (AUs) corresponding to specific muscle activations. In microexpressions, these AUs manifest briefly, often as isolated activations due to their duration of 39 to 200 milliseconds, limiting the co-occurrence of multiple units typically seen in prolonged macroexpressions. This brevity emphasizes verifiable muscle correlates, such as the contraction of the corrugator supercilii in AU4 (brow lowerer), which signals anger by drawing the eyebrows together and downward. Key AUs associated with Ekman's six basic emotions in microexpressions include:
| Emotion | Primary Action Units |
| --- | --- |
| Happiness | AU6 (cheek raiser), AU12 (lip corner puller) |
| Sadness | AU1 (inner brow raiser), AU4 (brow lowerer), AU15 (lip corner depressor) |
| Anger | AU4 (brow lowerer), AU5 (upper lid raiser), AU7 (lid tightener), AU17 (chin raiser) |
| Fear | AU1+2 (brow raisers), AU5 (upper lid raiser), AU20 (lip stretcher), AU26 (jaw drop) |
| Surprise | AU1+2 (brow raisers), AU5 (upper lid raiser), AU26 (jaw drop) |
| Disgust | AU9 (nose wrinkler), AU10 (upper lip raiser), AU17 (chin raiser) |
Electromyography (EMG) studies quantify these activations, revealing faster onset latencies and lower intensity in microexpressions compared to macroexpressions; for instance, microexpressions show shorter, lower peaks in muscle activity, measured as a percentage of maximum voluntary contraction (MVC%), confirming their subtle, rapid nature. This rapid activation supports the causal pathway from the amygdala via amygdalo-motor projections to brainstem facial motor nuclei, circumventing cortical inhibition and enabling involuntary leakage of concealed emotions. Such subcortical routing ensures microexpressions reflect authentic limbic-driven responses before voluntary suppression.

Variations in Intensity and Context

The intensity of microexpressions varies according to the activation strength of underlying facial action units (AUs), as quantified in the Facial Action Coding System (FACS) on an ordinal scale from trace (A) to maximal (E). Microexpressions typically manifest at lower intensities (A or B), reflecting partial but involuntary muscle contractions that scale with emotional arousal without reaching the fuller engagement seen in macroexpressions. In high-control contexts, such as interrogations or negotiations where suppression is incentivized, microexpressions may exhibit reduced AU involvement or briefer durations due to conscious effort, yet their core involuntariness endures, as complete elimination proves impossible. Empirical analyses of high-stakes scenarios confirm that attempted masking attenuates overt signals but permits subtle leakage through residual low-intensity activations. Stress induction paradigms reveal contextual modulation in which microexpression variability increases in AU combinations and peak amplitudes, yet detectability remains robust via consistent physiological signatures. For example, AU-based classifiers achieve high accuracy in identifying stress-related variants, underscoring preserved neural fidelity amid environmental pressures. Microexpressions differ from culturally learned displays, which conform to display rules varying by socialization, through evidence of innate universality observable in pre-verbal infants and isolated groups unaffected by enculturation. Cross-cultural studies, including those with non-literate populations, demonstrate invariant microexpression patterns for basic emotions, contrasting with modulated macroexpressions; developmental trajectories further affirm this distinction, as microexpressions appear prior to cultural acquisition.

Detection Methods

Human Perceptual Limits and Training

Untrained observers typically recognize microexpressions with accuracies ranging from 45% to 59% in controlled tasks, performing only modestly above chance levels for multi-category identification. These baseline rates reflect inherent perceptual constraints, as microexpressions endure for approximately 40 to 200 milliseconds, often eluding conscious perception due to limitations in visual persistence and attentional capture within the human visual system. Iconic memory, which briefly retains visual traces for around 250 milliseconds, aids detection of static features but struggles with the rapid, low-intensity dynamic changes characteristic of these expressions, particularly when embedded in ongoing social interactions. Training interventions, such as Paul Ekman's Micro-Expression Training Tool (METT), address these limits by presenting facial expressions in slowed-motion sequences (e.g., 1/5 normal speed) followed by full-speed replays and immediate feedback on accuracy, enabling learners to calibrate recognition of specific action units. Empirical evaluations of METT and similar programs report post-training recognition accuracies of 70-80% in laboratory settings, with retention observed over periods up to several months in some cohorts. However, these gains are primarily validated through scripted, isolated video clips rather than naturalistic contexts, and psychophysical studies highlight that sustained attention demands further constrain spotting in real-time scenarios where microexpressions occur infrequently and amid competing facial movements. Evidence for transfer of trained skills to unprompted, real-time observation remains sparse and inconclusive, with laboratory improvements showing limited generalization to dynamic, high-stakes environments due to factors such as emotional masking and the rarity of unadulterated microexpressions in everyday interactions.
Longitudinal studies indicate that while brief training (e.g., 40 minutes) yields measurable short-term enhancements, decay occurs without repeated practice, underscoring the need for ongoing reinforcement to approach perceptual thresholds in applied settings.
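These percentages are easier to interpret relative to the chance baseline of the task. A sketch that normalizes an observed accuracy onto a 0-1 scale between chance and perfect performance, assuming a seven-alternative forced-choice format (six basic emotions plus contempt, as in METT-style tools; the task format here is an assumption):

```python
# Normalize an observed accuracy to a 0-1 scale between chance (1/k for a
# k-alternative forced choice) and perfect performance. The seven-category
# format is an assumption about METT-style tasks.

def above_chance_fraction(accuracy: float, n_categories: int) -> float:
    chance = 1 / n_categories
    return (accuracy - chance) / (1 - chance)

print(round(above_chance_fraction(0.50, 7), 2))  # untrained midpoint: 0.42
print(round(above_chance_fraction(0.75, 7), 2))  # post-training midpoint: 0.71
```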

Manual Analysis Techniques

Manual analysis of microexpressions relies on the Facial Action Coding System (FACS), a standardized protocol developed by Paul Ekman and Wallace Friesen in 1978 to decompose facial movements into discrete Action Units (AUs) corresponding to specific muscle activations. Coders trained in FACS examine video footage in slow motion or frame-by-frame to detect fleeting expressions lasting 1/25 to 1/5 of a second, identifying AUs such as AU1 (inner brow raiser) for sadness or AU4 (brow lowerer) for anger, which may not be perceptible in real time. This method prioritizes replicable scoring over subjective interpretation, with adaptations for microexpressions emphasizing high-resolution footage captured at no less than 30 frames per second to enable precise temporal annotation. The procedure begins with a global viewing of the entire clip to contextualize potential microexpression onset and offset frames, followed by localized inspection of facial regions (e.g., brows, eyes, mouth) and frame-by-frame verification to confirm AU presence, intensity (A-E scale), and timing. Coding records the exact frames of AU activation, ensuring that only verifiable muscle movements are scored, as partial or blended AUs in microexpressions demand heightened precision compared to macroexpressions. Research protocols often involve multiple passes: initial detection, AU decomposition per FACS guidelines, and cross-verification to minimize errors from motion artifacts or lighting variations. Reliability is assessed via inter-rater agreement metrics such as Cohen's kappa (κ), where certified FACS coders achieve values of 0.80-0.95 after specialized microexpression training, surpassing general FACS application due to rigorous certification requiring at least 100 hours of study and empirical validation. Training emphasizes empirical benchmarks, with coders demonstrating accuracy on validated stimulus sets before independent scoring, rather than relying on unverified expertise; studies confirm that non-certified observers score microexpressions with near-chance accuracy without such protocols.
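Cohen's kappa, the agreement metric cited above, corrects raw agreement for the agreement expected by chance. A pure-Python sketch for two coders' per-segment AU labels (the label sequences are made up for illustration):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two equal-length label sequences."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    # Chance agreement: product of each rater's marginal label frequencies.
    expected = sum((counts_a[l] / n) * (counts_b[l] / n) for l in labels)
    return (observed - expected) / (1 - expected)

# Illustrative per-segment AU codes from two hypothetical certified coders.
coder_1 = ["AU1", "AU4", "AU4", "AU12", "AU1", "AU4"]
coder_2 = ["AU1", "AU4", "AU12", "AU12", "AU1", "AU4"]
print(round(cohens_kappa(coder_1, coder_2), 2))  # 0.75
```

A kappa of 0.75 from 5/6 raw agreement illustrates how the chance correction pulls the statistic below the raw agreement rate; the 0.80-0.95 range cited above therefore reflects genuinely high coder consistency.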

Automated AI-Based Recognition

Automated AI-based recognition of microexpressions employs computer vision and machine learning algorithms to detect fleeting facial muscle activations, offering scalability for processing vast video datasets that exceeds human manual analysis capacity. These systems typically analyze spatiotemporal features such as pixel intensity changes and motion vectors to identify action units (AUs) from the Facial Action Coding System (FACS), enabling automated spotting and classification without reliance on subjective observer training. Common techniques include optical flow methods, which compute dense motion fields to capture the subtle deformations corresponding to AUs, often combined with histograms of oriented optical flow (HOOF) or facial dynamics maps (FDM) for feature extraction. Convolutional neural networks (CNNs) and long short-term memory (LSTM) networks further refine AU detection by learning hierarchical patterns from video frames, benchmarked primarily on spontaneous microexpression datasets like CASME II (247 samples, 200 fps, 5 emotion classes) and SAMM (159 samples, 200 fps, 7 emotion classes). In controlled settings, these approaches achieve recognition accuracies ranging from 60% to over 80%, with examples including 76.60% on CASME II using LBP-TOP features and up to 88.28% via specialized networks like OFF-ApexNet. Despite data-driven precision in isolating microexpression patterns, performance declines substantially in real-world scenarios, often dropping below 60% due to challenges like facial occlusion from hands or masks, variable illumination, and head pose variations that disrupt feature alignment. Systems excel at statistical matching of trained visual cues but are constrained by training-data limitations, lacking inherent causal grounding in the physiological or contextual drivers of emotional leakage, which can lead to overgeneralization across diverse populations. Ongoing refinements focus on robust preprocessing to mitigate these issues, though cross-database transfer remains inconsistent, highlighting the gap between controlled benchmarks and practical deployment.
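As a toy illustration of the spotting step, the sketch below treats frames as 2-D intensity grids and uses mean absolute frame difference as a stand-in for the dense optical flow and learned features real systems use; everything here (function names, the tiny frames) is illustrative:

```python
# Toy motion-based spotting: real pipelines use dense optical flow and
# learned features on corpora like CASME II; here "motion" is simply the
# mean absolute pixel difference between consecutive frames.

def frame_difference(f1, f2):
    """Mean absolute pixel difference between two equally sized frames."""
    total = sum(abs(a - b)
                for row1, row2 in zip(f1, f2)
                for a, b in zip(row1, row2))
    return total / (len(f1) * len(f1[0]))

def spot_onset(frames):
    """Index of the frame pair with the largest motion burst."""
    diffs = [frame_difference(frames[i], frames[i + 1])
             for i in range(len(frames) - 1)]
    return max(range(len(diffs)), key=diffs.__getitem__)

# Four 2x2 "frames": a brief change appears between frames 1 and 2.
frames = [[[10, 10], [10, 10]],
          [[10, 10], [10, 10]],
          [[10, 40], [10, 10]],
          [[10, 12], [10, 10]]]
print(spot_onset(frames))  # 1: the burst begins between frames 1 and 2
```

A production spotter would additionally threshold the motion signal, suppress global head movement, and restrict the analysis to facial regions of interest, which is where the occlusion and pose problems noted above enter.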

Empirical Evidence for Validity

Cross-Cultural Universality Studies

Paul Ekman's fieldwork in the late 1960s with the visually isolated South Fore people of Papua New Guinea demonstrated high recognition accuracy for basic emotional facial expressions, with participants correctly identifying emotions such as happiness, sadness, anger, and fear from photographs at rates exceeding 70% for most categories, comparable to literate Western samples. These studies involved both recognition tasks using posed Western faces and production tasks in which participants described emotional scenarios and enacted corresponding expressions, which were then recognized by Americans at above-chance levels, supporting innate signaling mechanisms. Extended research through the 1990s across isolated groups yielded consistent findings, with overall recognition rates of 70-90% for core emotions, privileging field data from preliterate populations to minimize cultural contamination. Subsequent replications in diverse literate populations, including studies across 10 cultures in the 1980s, confirmed cross-cultural agreement on judgments of expressions, particularly for blended emotions, with core action units (AUs) like brow raising for surprise or lip curling for disgust showing robust universality. Investigations in Asian and European contexts, such as comparisons between Japanese and American participants, affirmed recognition of basic configurations but highlighted variations due to culturally specific display rules—norms governing expression intensity or masking, such as de-intensification of negative emotions in hierarchical settings among East Asians. These rules, while modulating overt displays, do not alter the underlying signals, as evidenced by consistent AU decoding across groups. Empirical evidence indicates stronger universality in recognition than in spontaneous production, where display rules introduce variability; for instance, meta-analyses of cross-cultural studies show recognition accuracy averaging 70-80% globally, but production conformity drops in cultures emphasizing emotional restraint.
For microexpressions—brief, involuntary expressions—this distinction implies greater universality, as they represent unmasked leakage of innate patterns less susceptible to cultural modulation, though direct cross-cultural data on microexpressions remain sparser and build on basic expression findings. Challenges to strict universality, such as lower agreement for fear in some non-Western samples, underscore the need for context-specific probes but do not overturn core evidence from isolated populations.

Replication of Recognition Training Effects

Early studies provided evidence supporting the efficacy of microexpression recognition training using tools like the Micro-Expression Training Tool (METT), developed by Paul Ekman. In two experiments reported by Hurley et al. in 2011, participants who underwent METT training demonstrated significant improvements in identifying microexpressions compared to untrained controls, with gains persisting in immediate post-training assessments. Similarly, Marsh et al. (2010) found that METT training led to enhanced emotion recognition accuracy one month post-training among participants with schizophrenia, suggesting potential short-term durability in clinical populations. These results aligned with Ekman's claims that brief exposure—around 40 minutes—could boost baseline recognition rates from approximately 30% to 40%. However, replications have revealed limitations in the generalizability and longevity of these effects, particularly beyond controlled laboratory settings. A 2019 study tested METT's transfer to real-world deception detection and found no significant improvement in accuracy relative to controls, despite lab-based recognition gains, indicating weak field transfer possibly due to over-reliance on stylized stimuli. This pattern suggests that training may enhance perceptual sensitivity to isolated cues in artificial tasks but fail to build robust skills for dynamic, contextual scenarios, akin to overfitting in perceptual learning, where specific exemplars are memorized rather than generalized principles internalized. Recent empirical reviews underscore diminishing effect sizes without ongoing practice. For instance, a 2023 study of psychotherapist trainees showed initial microexpression recognition improvements following METT-style training, but these effects were not sustained at three-month follow-up, in contrast to training formats that produce more durable gains.
Such findings imply that microexpression-specific training induces transient boosts reliant on repeated practice, as isolated exposure to brief, low-intensity stimuli does not foster enduring neural adaptations for spontaneous detection. Overall, while early replications confirmed short-term efficacy, accumulating evidence highlights mixed outcomes, with durability challenged by the gap between contrived tasks and naturalistic application.

Physiological Correlates via EMG and Neuroimaging

Electromyography (EMG) studies have identified distinct physiological signatures differentiating microexpressions from macroexpressions. In an analysis of facial EMG data from 35 participants, encompassing 233 microexpressions and 147 macroexpressions, microexpressions exhibited lower EMG amplitudes and reduced participant awareness or control compared to macroexpressions, suggesting involuntary neural pathways underlie their brevity and subtlety. These findings indicate faster onset of muscle potentials in microexpressions, consistent with rapid, subcortically driven responses rather than deliberate cortical control. Neuroimaging evidence further links microexpressions to conserved brain regions involved in emotion processing. Functional magnetic resonance imaging (fMRI) research demonstrates that microexpressions elicit heightened activation in frontal areas, including the supplementary motor cortex, relative to macroexpressions, reflecting greater cognitive suppression efforts. Electroencephalography (EEG) studies reveal divergent event-related potentials (ERPs) for microexpression recognition, with enhanced early components (e.g., P1, N170) indicating automatic perceptual processing distinct from macroexpressions. These correlates extend to the amygdala, a subcortical structure pivotal for rapid emotional appraisal. While direct fMRI measures of microexpressions are limited, amygdala activation patterns in response to brief emotional stimuli align with evolutionary models of threat detection, bypassing higher cortical filters in a manner consistent with universality across cultures. Such subcortical involvement counters strict cultural constructivist views by evidencing innate, hardwired mechanisms for basic emotion signals, preserved phylogenetically from mammalian homologs. This physiological embedding supports causal realism in microexpression elicitation, rooted in autonomic arousal rather than learned display rules alone.

Applications and Practical Uses

Deception and Lie Detection Claims

Paul Ekman proposed that microexpressions function as involuntary "leakage" of genuine emotions suppressed during deception, occurring briefly when individuals attempt to mask incongruent feelings, such as fear or contempt, while maintaining a false demeanor. He suggested these fleeting displays appear in roughly 20% of high-stakes deceivers, serving as potential indicators of deceit when mismatched with spoken content. However, this estimate lacks replication; subsequent high-motivation experiments, including those by Porter and ten Brinke in 2008, observed no microexpressions or exceedingly low rates, challenging their prevalence as reliable leakage. Empirical studies on deception detection reveal microexpressions occur infrequently, with meta-analytic reviews estimating their appearance in under 1% of interactions overall, even among liars, due to successful suppression or absence of strong conflicting emotions. While sparse evidence supports utility in rare high-stakes scenarios—such as one 2018 study detecting negative microexpressions (lasting ≤0.5 seconds) distinguishing lies about future intentions from truths—these findings do not establish causality between microexpressions and deceit, as incongruent emotions can arise in truthful statements under stress. Bond and DePaulo's 2006 meta-analysis of over 200 deception studies underscores that facial cues, including purported microexpressions, yield accuracies near chance (54% overall, 47% for lies), without proving deception over other explanations like anxiety. In practice, reliance on microexpressions for deception detection underperforms baseline behavioral indicators, such as verbal inconsistencies or implausible narratives, which DePaulo et al. (2003) identified as more diagnostic across cues to deception. Training focused on microexpressions fails to surpass these general cues, with reviews emphasizing that deceivers often suppress all facial movements rather than leaking specific ones, reducing incongruence as a deceit marker.
Thus, while microexpressions offer theoretical promise for spotting emotional discord in select cases, predominant evidence highlights their rarity and non-specificity, prioritizing broader empirical cues for effective deception detection.

Clinical and Therapeutic Contexts

Microexpressions play a role in clinical assessment by aiding the identification of subtle emotional signals in populations with emotion recognition deficits, such as individuals with autism spectrum disorder (ASD). Individuals with ASD frequently demonstrate reduced accuracy in perceiving brief facial expressions, contributing to social communication challenges. A study involving children and adolescents with ASD found that a facial emotion training program, which included exposure to micro-duration expressions, significantly improved identification accuracy post-intervention compared to baseline assessments. Similarly, multimodal training incorporating microexpressions enhanced recognition accuracy in adolescents with ASD, outperforming control conditions in a pilot feasibility trial, though effects were more pronounced with combined auditory-visual cues. In posttraumatic stress disorder (PTSD), patients exhibit impaired facial emotion recognition, including sensitivity to brief negative expressions, which may perpetuate avoidance and relational strain. War veterans with lifetime PTSD showed deficits in identifying emotions from facial stimuli 40 years post-exposure, suggesting potential therapeutic value in targeted remediation. While large-scale RCTs specific to microexpression training in PTSD remain limited, small-scale interventions addressing rapid expression decoding have been proposed to uncover incongruent affect during therapy, facilitating exposure to hidden affective states without directly resolving causality. Therapeutically, microexpression recognition training has demonstrated benefits in enhancing emotion perception among clinical groups, as evidenced by a pilot study in schizophrenia patients where the Micro-Expression Training Tool (METT) led to improved scores alongside better recognition of fleeting expressions. In psychotherapy trainees, such training boosted multimodal recognition accuracy, correlating with stronger therapeutic alliances via heightened detection of client incongruences.
However, these tools reveal discrepancies between overt presentation and transient emotional leakage rather than elucidating root causal pathways in disorders; outcomes like gains in relational drills, including couples analogs, are verifiable in small cohorts but require replication to confirm generalizability beyond diagnostic aid. Integration into clinical assessments is thus supplementary, prioritizing empirical decoding metrics over holistic causal inference.

Security, Law Enforcement, and Driver Monitoring

The U.S. Transportation Security Administration (TSA) implemented the Screening of Passengers by Observation Technique (SPOT) program in the mid-2000s, training officers to detect behavioral indicators including microexpressions for threat identification at airports under the Department of Homeland Security (DHS). Evaluations by the Government Accountability Office (GAO) in 2010 and 2013 found marginal accuracy improvements, with SPOT referrals yielding only 0.6% hits on terrorist watchlists or arrests from over 2 million screenings in fiscal year 2012, primarily for minor offenses rather than security threats, prompting recommendations to curtail funding due to inadequate empirical validation. By 2013, the program had cost approximately $900 million, with high false positive rates inflating secondary screening expenses and diverting resources without proportional threat detection gains. In law enforcement contexts, microexpression analysis has been integrated into investigative interviewing protocols, such as those evaluated by the Federal Bureau of Investigation (FBI), where trained observers achieved detection accuracies exceeding chance (around 53%) in controlled scenarios, reaching up to 80% for specific behavioral cues. However, field trials reveal limited standalone validity, with GAO-linked reviews noting insufficient validation for programs like SPOT that extend into law enforcement training, emphasizing its role as a supplementary tool to established methods like polygraph testing rather than a primary deception detector. Non-invasive advantages persist, allowing observation without equipment, but persistent false positives—stemming from cultural variability and individual differences—elevate operational costs and risk erroneous escalations. For driver monitoring, a July 2025 framework utilizing Facial Action Coding System (FACS) action units for microexpression recognition targets fatigue-related emotional states, proposing automated alerts to mitigate accident risks during prolonged driving.
This approach leverages brief, involuntary facial signals to detect emotional instability non-invasively via in-vehicle cameras, with preliminary models showing potential for early intervention, though deployment trials remain nascent and emphasize fusion with physiological sensors for reliability over isolated use. Empirical outcomes prioritize adjunct applications, aligning with findings where microexpressions augment but do not replace broader behavioral assessments.
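For illustration, the AU-to-emotion mapping that FACS-based frameworks of this kind perform can be sketched as a rule table. The AU combinations below are common textbook prototypes (e.g., AU6+AU12 for happiness), not the actual rules of the 2025 driver-monitoring framework.

```python
# Hypothetical rule-based mapper from detected FACS action units (AUs)
# to prototypical emotion labels; prototype sets are illustrative.
PROTOTYPES = {
    "happiness": {6, 12},        # cheek raiser + lip corner puller
    "surprise":  {1, 2, 5, 26},  # brow raisers + upper lid raiser + jaw drop
    "anger":     {4, 5, 7, 23},  # brow lowerer + lid tighteners + lip tightener
    "sadness":   {1, 4, 15},     # inner brow raiser + brow lowerer + lip depressor
    "disgust":   {9, 15},        # nose wrinkler + lip corner depressor
}

def classify_aus(active_aus):
    """Return the prototype whose AU set is fully contained in the
    detected AUs, preferring the largest match; None if nothing fits."""
    best, best_size = None, 0
    for label, aus in PROTOTYPES.items():
        if aus <= active_aus and len(aus) > best_size:
            best, best_size = label, len(aus)
    return best

print(classify_aus({1, 2, 5, 26}))  # -> surprise
print(classify_aus({6, 12, 25}))    # -> happiness
print(classify_aus({17}))           # -> None
```

Real systems score AU intensities continuously and feed them to learned classifiers; a subset-match table like this only conveys the structure of the mapping.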

Criticisms, Limitations, and Controversies

Weaknesses in Deception Detection Reliability

Empirical investigations have consistently demonstrated that microexpressions occur infrequently during deceptive behavior, limiting their utility as diagnostic cues. Reviews of high-stakes deception scenarios, such as criminal confessions and televised emotional pleas, indicate that facial leakage via microexpressions appears in fewer than 5% of lies, often overshadowed by strategic masking or neutral facades. This rarity stems from individual variability in emotional control, where many deceivers suppress or avoid involuntary displays altogether, as evidenced in simulations where only a subset of liars exhibited detectable micros. The low base rate of such events compounds detection challenges, as the prevalence of lying in everyday or even interrogative contexts is typically below 20-30%, leading to high false positive rates when brief expressions are interpreted as leaks without corroborating evidence. Ekman's leakage hypothesis—that microexpressions involuntarily betray concealed emotions diagnostic of deceit—lacks robust replication across controlled studies. While initial claims posited these fleeting signals as reliable indicators due to their brevity and universality, subsequent experiments have failed to replicate consistent differentiation between truth-tellers and liars based solely on micros. For instance, observer accuracy in identifying deception from video clips featuring alleged micro-leaks hovers around chance levels (50-54%), even among those briefed on the hypothesis, highlighting a disconnect between theoretical expectation and empirical outcome. From a causal standpoint, the inference from a microexpression (e.g., fear or contempt) to intentional deceit falters, as such displays can arise from non-deceptive sources like situational anxiety or cognitive load, without establishing a direct pathway to hidden intent. Training protocols aimed at enhancing microexpression detection exacerbate reliability issues through response bias among observers.
Studies evaluating tools like the Micro Expression Training Tool (METT) show that trainees gain confidence and exhibit a heightened tendency to classify ambiguous behaviors as deceptive, yet their overall accuracy does not improve beyond untrained baselines. This bias manifests as over-attribution of brief facial movements to leakage, prioritizing isolated cues over aggregate verbal or contextual indicators, which meta-analyses identify as more predictive of veracity. Consequently, reliance on microexpressions in deception assessment risks systematic errors, particularly in high-stakes settings where confirmation-seeking interpretations amplify false alarms.
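The base-rate problem described above can be made concrete with Bayes' rule: even a detector far more accurate than any reported microexpression cue still produces mostly false alarms when lying is uncommon. The sensitivity, specificity, and prevalence below are illustrative numbers, not estimates from any cited study.

```python
# Back-of-envelope base-rate arithmetic for a hypothetical "leak" detector.
def ppv(sensitivity, specificity, base_rate):
    """Positive predictive value: P(lie | flagged)."""
    true_pos = sensitivity * base_rate
    false_pos = (1 - specificity) * (1 - base_rate)
    return true_pos / (true_pos + false_pos)

# Assume 90% sensitivity, 90% specificity, and that 5% of statements are lies:
# only about a third of flagged statements are actually deceptive.
print(round(ppv(0.90, 0.90, 0.05), 3))  # -> 0.321
```

At the 50-54% accuracies reported for human observers, the positive predictive value at low prevalence falls even further, which is why brief expressions alone make poor evidence of deceit.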

Overhype and Commercial Exploitation

The Paul Ekman Group markets proprietary microexpression training programs, including the Micro Expressions Training Tool (METT), with online access fees ranging from $119 for basic modules to $229 for extended practice sessions lasting 3 to 6 months. These tools promise enhanced detection of fleeting facial cues for applications like deception analysis, yet independent evaluations reveal limited verifiable benefits. A 2019 study led by forensic linguist Vincent Denault at the Université de Montréal tested METT on security professionals and found it failed to elevate accuracy above levels equivalent to untrained guessing, attributing this to the tool's overemphasis on isolated expressions without contextual integration. Further scrutiny in a 2019 controlled experiment exposed participants to METT training before assessing deception judgments in video testimonies; results showed no significant improvement over controls, with trained accuracy dipping to 52%—marginally below chance—and no evidence of sustained skill transfer to dynamic scenarios. These findings align with broader reviews highlighting the scarcity of randomized controlled trials validating commercial training beyond short-term drills, often confined to static stimuli rather than ecologically valid interactions. Commercial providers' reliance on self-reported or proprietary metrics, incentivized by recurring sales to government and corporate clients, fosters skepticism regarding unsubstantiated claims of transformative proficiency. Media portrayals have amplified this commercial hype, with series like Lie to Me (2009–2011), directly inspired by Ekman's research, depicting microexpression mastery as a near-infallible lie-detection skill, despite contemporaneous academic critiques underscoring placebo-equivalent outcomes in practical use.
Such dramatizations, echoed in true-crime adaptations, prioritize narrative appeal over empirical rigor, contributing to widespread adoption of unproven tools without demand for independent audits—a dynamic where profit-driven dissemination outpaces causal evidence for real-world utility.

Methodological and Conceptual Flaws

Research on microexpressions has frequently relied on small sample sizes, limiting statistical power and generalizability. For instance, databases used for microexpression recognition often contain fewer than 100 samples per category, exacerbating overfitting in models and hindering robust validation. Laboratory settings introduce artifacts such as demand characteristics, where participants' awareness of being observed alters spontaneous expressions, confounding results with performative behaviors rather than authentic leakage. Aldert Vrij and colleagues have highlighted these issues, noting that microexpression studies often lack controlled tests of ecological validity, with expressions elicited under artificial constraints that do not mirror real-world variability. Conceptually, the framework posits microexpressions as brief manifestations of discrete emotions, yet evidence indicates they often represent blends or partial activations rather than pure categorical signals. Facial movements in microexpressions do not consistently map onto singular emotions, as neuromuscular patterns reflect overlapping affective states influenced by immediate context, challenging the premise of isolated leakage. Claims of universality suffer from limited falsifiability, as discrepant findings are frequently attributed to cultural masking or subtlety rather than refuting innate discreteness, evading rigorous disconfirmation. Post-2010 replication efforts have exposed inconsistencies, with data leakage in datasets—such as temporal or subject overlaps between training and testing—artificially inflating recognition rates beyond independent validation. While constructivist views emphasize cultural variability, neurobiological data affirm biological priors, including conserved muscle innervations for core signals like threat or affiliation, underpinning expressions prior to extensive socialization. These flaws underscore the need for larger, ecologically valid paradigms to distinguish innate substrates from learned overlays, resisting relativist overemphasis on absent causal evidence.
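One concrete remedy for the subject-overlap leakage described above is a subject-disjoint split: all clips from a given participant stay on the same side of the train/test boundary, so identity cues cannot inflate test accuracy. The sketch below uses synthetic (subject, clip) pairs; the split fraction and data are invented for illustration.

```python
import random

def split_by_subject(samples, test_fraction=0.3, seed=0):
    """samples: list of (subject_id, clip_id). Returns (train, test) lists
    with no subject appearing on both sides."""
    subjects = sorted({s for s, _ in samples})
    rng = random.Random(seed)
    rng.shuffle(subjects)
    n_test = max(1, int(len(subjects) * test_fraction))
    test_subjects = set(subjects[:n_test])
    train = [x for x in samples if x[0] not in test_subjects]
    test = [x for x in samples if x[0] in test_subjects]
    return train, test

# Synthetic corpus: 10 subjects, 5 clips each.
data = [(f"s{i}", f"clip{j}") for i in range(10) for j in range(5)]
train, test = split_by_subject(data)
# The identity sets are disjoint, unlike a naive random clip-level split.
assert {s for s, _ in train} & {s for s, _ in test} == set()
```

Cross-validated variants (leave-one-subject-out) apply the same principle fold by fold.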

Recent Technological Advances

Deep Learning and Transformer Models (2018-2025)

Deep learning approaches, particularly convolutional neural networks (CNNs) and later transformer architectures, marked a shift in microexpression recognition (MER) from 2018 onward, leveraging large-scale feature extraction to address the subtlety and brevity of microexpressions. Early deep models focused on optical flow and spatiotemporal features, achieving initial improvements over handcrafted methods on datasets like CASME II and SAMM. By the early 2020s, transformer-based models emerged, capitalizing on self-attention mechanisms to capture long-range dependencies in facial sequences, outperforming CNNs with benchmark accuracies exceeding 80% for recognition tasks on standardized datasets. Transformer integrations, such as hierarchical transformer networks (HTNet) and vision transformers (ViT), enhanced spotting and classification by prioritizing critical facial regions like eye and mouth areas through entropy-based attention weighting. The 2025 Micro-Expression Grand Challenge (MEGC2025) highlighted these advances, introducing tasks for integrated spotting-then-recognition on microexpressions, with top models demonstrating robust performance on unseen sequences. Lightweight variants, including MobileViT and ShuffleNet adaptations, enabled real-time processing with reduced parameters (e.g., 1.53 million in self-attentive ShuffleNet), suitable for edge devices while maintaining accuracies around 75-85% on composite databases. Despite gains, models often overfit to limited spontaneous data in datasets, which comprise fewer than 300 samples per class, leading to inflated performance on posed or semi-posed augmentations not representative of naturalistic variability. Empirical evaluations show automated systems process vast video volumes faster than humans—handling thousands of clips per hour versus human limits of dozens—but falter in nuanced, context-dependent interpretation where human observers integrate cultural or situational cues.
This volume advantage stems from parallel computation, yet reliance on benchmark-specific patterns underscores gaps in generalization to diverse, unposed scenarios.
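The self-attention operation these transformer models rely on can be sketched in a few lines. The example below runs a single attention head over random stand-in frame embeddings; the sequence length, dimensions, and weights are arbitrary, not those of any published MER model.

```python
import numpy as np

def self_attention(x, wq, wk, wv):
    """Single-head self-attention over x: (frames, dim) frame embeddings,
    letting every frame (e.g. onset, apex, offset) attend to every other."""
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(k.shape[1])          # pairwise frame affinity
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)   # softmax over frames
    return weights @ v                              # context-mixed features

rng = np.random.default_rng(0)
frames, dim = 8, 16          # e.g. 8 sampled frames of a 0.2 s clip
x = rng.standard_normal((frames, dim))
wq, wk, wv = (rng.standard_normal((dim, dim)) for _ in range(3))
out = self_attention(x, wq, wk, wv)
print(out.shape)  # -> (8, 16)
```

The long-range dependency claim in the text corresponds to the dense frames-by-frames weight matrix: unlike a small convolution kernel, every frame can directly weight every other frame in the clip.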

Multimodal and Real-Time Detection Systems

Multimodal microexpression detection systems integrate facial cues with complementary physiological or behavioral signals, such as electromyography (EMG), voice prosody, or electroencephalography (EEG), to capture subtle emotional leaks more robustly than unimodal facial analysis alone. These approaches leverage the causal linkage between microexpressions—brief, involuntary facial muscle activations—and concurrent bodily responses, where EMG, for instance, directly quantifies zygomaticus or corrugator activity underlying fleeting expressions, providing verification of onset and duration not always discernible visually. A 2024 study demonstrated that fusing microexpression video with peripheral physiological signals enhanced latent emotion recognition accuracy by incorporating multimodal fusion networks, yielding improvements attributable to reduced ambiguity from isolated facial data. Similarly, combining microexpressions with EEG signals in transformer-based frameworks has shown potential for detecting hidden emotions by aligning spatiotemporal facial patterns with neural correlates, though causal enhancements depend on synchronized data alignment to avoid spurious correlations. Real-time implementations deploy these systems via edge computing paradigms, processing data locally on devices like in-vehicle cameras or wearables to minimize latency for applications requiring immediate feedback. For driver monitoring, action unit-based microexpression frameworks analyze facial dynamics in near-real-time to infer emotional states like anger or surprise, integrating with vehicle sensors for proactive alerts, as validated on datasets simulating driving scenarios. EMG-augmented systems further enable quantification of microexpression intensity through surface electrodes, detecting sub-visual muscle firings in milliseconds, which causally supplements optical flow methods by measuring electromechanical signals directly tied to emotional elicitation.
Multimodal fusion in such setups has empirically boosted detection precision by 10-15% over facial-only baselines in controlled emotion elicitation tasks, primarily by mitigating occlusions or head pose variations through cross-modal redundancy. Despite these advances, multimodal systems reveal limitations in addressing context-dependent variations unmodeled in facial-dominant pipelines, such as vocal inflections influenced by linguistic factors or EMG artifacts from non-emotional movements, which can confound causal attribution without baseline personalization. In driver contexts, while voice-EMG integration promises holistic monitoring, real-world deployment challenges include signal desynchronization under motion, underscoring that added modalities enhance reliability only when causally aligned with expression onset rather than post-hoc correlation. Ongoing research emphasizes lightweight fusion architectures for edge devices to balance computational demands with these interpretive gaps.
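To make the fusion idea concrete, the sketch below implements decision-level (late) fusion of per-class scores from a hypothetical facial model and a hypothetical EMG model. The 0.7/0.3 weights, emotion labels, and score vectors are invented for illustration, not drawn from any cited system.

```python
# Decision-level (late) fusion: weighted average of two models' class
# probabilities, renormalized to sum to one.
EMOTIONS = ["neutral", "anger", "surprise"]

def late_fusion(face_probs, emg_probs, w_face=0.7, w_emg=0.3):
    fused = [w_face * f + w_emg * e for f, e in zip(face_probs, emg_probs)]
    total = sum(fused)
    return [p / total for p in fused]

face = [0.5, 0.3, 0.2]   # facial model is uncertain (e.g. partial occlusion)
emg = [0.1, 0.8, 0.1]    # EMG channel strongly favors anger

fused = late_fusion(face, emg)
print(EMOTIONS[max(range(3), key=fused.__getitem__)])  # -> anger
```

This illustrates the cross-modal redundancy argument in the text: when the face channel is degraded, the physiological channel can still tip the fused decision, whereas feature-level fusion networks learn the combination weights instead of fixing them.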

Challenges in AI Validation Against Human Benchmarks

AI microexpression detection models often surpass human performance on benchmark datasets, reporting accuracies of 74–85% on corpora like CASME II, while trained human observers achieve only 47–50% accuracy. This apparent superiority, however, complicates validation against human benchmarks, as the latter reflect variable, error-prone judgments influenced by cognitive biases and limited perceptual acuity, rather than an infallible gold standard. In deception contexts, for instance, unaided humans detect lies at approximately 57% accuracy via behavioral cues, slightly above chance, underscoring the need to scrutinize whether AI gains stem from genuine causal signal detection or dataset-specific artifacts. Laboratory datasets underpinning these metrics, such as CASME II (under 200 samples) and SMIC (306 samples), predominantly feature elicited expressions in controlled environments, yielding low generalizability when extrapolated to spontaneous, high-stakes real-world scenarios marred by noise, occlusions, and head movements. The paucity of large-scale, balanced corpora labeled for spontaneous microexpressions—compounded by class imbalances (e.g., underrepresentation of some emotion categories in MMEW)—fosters overfitting, where models excel in isolated evaluations but falter against the holistic, context-integrated benchmarks humans employ in naturalistic interactions. Interpretability deficits in deep architectures, such as transformers and CNNs, further hinder alignment with human reasoning, as these black-box systems yield predictions without elucidating causal pathways from subtle muscular activations to concealed affect, unlike action unit-based approaches requiring explicit coding. Comprehensive 2024–2025 reviews advocate hybrid human-AI loops, integrating explainable AI techniques like saliency maps to validate outputs against empirically grounded attributions, ensuring fidelity to underlying mechanisms over mere correlative fit. Although AI-enabled scalable corpora curation holds potential for bridging these gaps, premature deployment without such rigorous, deception-grounded validation invites overconfidence detached from verifiable causal evidence.
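As a minimal illustration of the saliency-map idea mentioned above, the sketch below computes occlusion sensitivity for a toy linear classifier over four assumed facial regions: zero out a region's features and measure how much the score moves. The model, features, and region layout are all invented; real systems occlude image patches fed to a deep network, but the validation logic (compare the most influential regions with human AU annotations) is the same.

```python
import numpy as np

rng = np.random.default_rng(1)
REGIONS = ["brow", "eyes", "nose", "mouth"]
FEATS_PER_REGION = 4

w = rng.standard_normal(len(REGIONS) * FEATS_PER_REGION)  # toy model weights
x = rng.standard_normal(w.size)                           # one input sample

def region_saliency(x, w):
    """Occlusion sensitivity: score change when each region is zeroed."""
    base = float(w @ x)
    saliency = {}
    for i, name in enumerate(REGIONS):
        occluded = x.copy()
        occluded[i * FEATS_PER_REGION:(i + 1) * FEATS_PER_REGION] = 0.0
        saliency[name] = abs(base - float(w @ occluded))
    return saliency

sal = region_saliency(x, w)
print(max(sal, key=sal.get))  # most influential region for this sample
```

If the top-ranked regions disagree with the action units human coders mark for the same clip, that mismatch is exactly the fidelity gap the hybrid human-AI loops described above are meant to catch.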

  25. [25]
  26. [26]
    Darwin, Deception, and Facial Expression - ResearchGate
    Aug 4, 2025 · ... Micro-expressions are fleeting, involuntary facial expressions that unconsciously reveal themselves when individuals attempt to suppress ...
  27. [27]
    130 Years Later: Darwin's Theories Stand - NYAS
    Jan 1, 2003 · She has found that they process both visual and auditory cues to interpret emotion, with certain facial expressions and sounds having more ...
  28. [28]
    Darwin's The Expression of the Emotions in Man and Animals (1872 ...
    Darwin simultaneously worked on his “demonstration” of the universality of five basic affects (pleasure, fear, suffering or grief, rage, and disgust).
  29. [29]
    [PDF] Facial Expressions - Paul Ekman Group
    Most of the research on universals in facial expression of emotion has focused on one method-showing pictures of facial expressions to observers in different.
  30. [30]
    [PDF] emotions-revealed-by-paul-ekman1.pdf
    Renowned expert in nonverbal communication Paul Ekman has led a renaissance in our scientific under- standing of emotions, addressing just these questions. Now ...
  31. [31]
    Paul Ekman and the search for the isolated face in the 1960s
    This essay examines the detailed process of isolating facial data from the context of its emergence through the early work of psychologist Paul Ekman in the ...
  32. [32]
    The History of the Facial Action Coding System (FACS)
    Learn about the origins of the Facial Action Coding System (FACS). Dr. Paul Ekman describes his journey through developing a system to measure facial movements.
  33. [33]
    About Paul Ekman | Emotion Psychologist
    DISCOVERY OF MICRO EXPRESSIONS. Dr. Ekman worked with clinical cases in which patients lied about their emotional state. He studied patients who claimed ...Missing: pre- | Show results with:pre-<|separator|>
  34. [34]
    More about FACS — Erika Rosenberg, Ph.D.
    The Facial Action Coding System (FACS), first developed by Paul Ekman and Wallace Friesen in 1978 and revised by Ekman, Friesen, & Hager in 2002, is a ...
  35. [35]
  36. [36]
    Suppressed Emotions and Deception: The Discovery of Micro ...
    Dec 19, 2022 · Ekman has compiled over 50 years of his research to create comprehensive training tools to read the hidden emotions of those around you.Missing: brevity controlled
  37. [37]
  38. [38]
  39. [39]
    Universal Emotions | What are Emotions? - Paul Ekman Group
    WHAT ARE THE SIX BASIC EMOTIONS ACCORDING TO PAUL EKMAN? Dr. Ekman identified the six basic emotions as anger, surprise, disgust, enjoyment, fear, and sadness.Atlas of Emotions · What is Anger? · What is Contempt? · BooksMissing: peer | Show results with:peer
  40. [40]
    Facial expression and emotion. - APA PsycNet
    Ekman, P. (1989). The argument and evidence about universals in facial expressions of emotion. In H. Wagner & A. Manstead (Eds.), Handbook of social ...Missing: six | Show results with:six
  41. [41]
  42. [42]
    Experiments on real-life emotions challenge Ekman's model - PMC
    Jun 12, 2023 · Emotions he found to be universal are happiness, sadness, anger, disgust, surprise, and fear. Dimensional emotional models define emotions ...Missing: microexpressions peer
  43. [43]
    Evidence for the Universality of Facial Expressions of Emotion
    Ekman and Friesen defined six universal basic emotions: happiness, sadness, fear, disgust, surprise, and anger [2], they have produced FACS-(Facial Action ...<|separator|>
  44. [44]
    Challenges to Inferring Emotion From Human Facial Movements
    We identify three key shortcomings in the scientific research that have contributed to a general misunderstanding about how emotions are expressed and perceived ...
  45. [45]
    Could Micro-Expressions be Quantified? Electromyography Gives ...
    Aug 16, 2024 · Micro-expressions (MEs) are brief, subtle facial expressions. This paper uses EMG to quantify MEs, measuring intensity with MVC% and duration.
  46. [46]
    Differences in brain activations between micro- and macro ...
    Sep 12, 2022 · This study is the first to investigate the neural mechanisms underlying the differences between micro- and macro-expressions by using EEG signal ...
  47. [47]
    The amygdalo-motor pathways and the control of facial expressions
    As such, the amygdala might be responding primarily to the sensory consequences associated with the production of facial expressions. This finding, together ...
  48. [48]
    FACS-Based Graph Features for Real-Time Micro-Expression ...
    Nov 30, 2020 · Moreover, FACS defines AU intensities on a five-point ordinal scale (i.e., from lowest A to strongest E intensity. The main benefit of ...
  49. [49]
    Facial micro-expression recognition: A machine learning approach
    Micro-expressions are characterized by short duration and low intensity, hence, efforts to train humans in recognizing them have resulted in very low ...
  50. [50]
    Liars can't completely supress facial expressions - UB Reporter
    Jul 18, 2011 · We can reduce facial movements when trying to suppress them, but we can't eliminate them completely. “Whether we are dealing with highly skilled ...
  51. [51]
    Facial Micro-Expressions: An Overview | IEEE Journals & Magazine
    Jun 5, 2023 · Micro-expression (ME) is an involuntary, fleeting, and subtle facial expression. It may occur in high-stake situations when people attempt to conceal or ...
  52. [52]
    Stress recognition identifying relevant facial action units through ...
    This article investigates automatic acute stress recognition based on AUs using conventional Machine and Deep Learning techniques.
  53. [53]
    Interpreting individual differences in stress-induced facial expressions
    Apr 20, 2019 · We review the role of facial expressions according to the leading affective neuroscience theories, including constructed emotion and social-motivation accounts.
  54. [54]
    [PDF] Universals and Cultural Differences in Facial Expressions of Emotion
    When we began to plan our cross-cultural research, we had done very little study of facial expressions even within any one culture. Our emphasis had been on ...
  55. [55]
    Are facial expressions universal or culturally specific?
    Dr. Paul Ekman answers the question “Are facial expressions universal?”. Learn how cultural expressions and display rules change in public and private.
  56. [56]
    [PDF] Perceptual averaging of facial expressions requires visual ...
    When these dots persist after the target disappears, even for just a fraction of a second, they can disrupt discrimination of that object's features and even ...
  57. [57]
    [PDF] A test of the Micro-Expressions Training Tool (METT)
    This study investigates the efficacy of one specific lie detection training program: the Micro-Expressions Training Tool (METT; Ekman, 2006; Paul. Ekman Group, ...
  58. [58]
    Automatic Micro-Expression Analysis: Open Challenges - Frontiers
    Aug 6, 2019 · Study (Ekman, 2002) shows that for micro-expression recognition tasks, ordinary people without training only perform slightly better than chance ...
  59. [59]
    Spontaneous Facial Expressions and Micro-expressions Coding
    The supporting physiological theory for AU labeling of emotions is obtained by adding facial muscle movement patterns.
  60. [60]
    Micro-Expression Key Frame Inference - IEEE Computer Society
    The process of manually annotating MEs can be roughly divided into three steps, namely the global observation, the local observation, and the frame-by-frame ...
  61. [61]
    A Survey of Automatic Facial Micro-Expression Analysis - NIH
    One of the very first efforts to improve the human ability at recognizing MEs was conducted by Ekman where he developed the Micro-Expression Training Tool (METT) ...
  62. [62]
    Emotion-specific AUs for micro-expression recognition
    Aug 8, 2023 · As the AUs coding is dependent on the FACS manual, the reliability of the AUs in the FACS-coded datasets determines the efficacy of these AUs in ...
  63. [63]
    Review of Automatic Microexpression Recognition in the Past Decade
    May 2, 2021 · In addition, microexpressions have high reliability and ... issues of interest to researchers on automated microexpression analysis.
  64. [64]
    [PDF] Constants across cultures in the face and emotion.
    A member of the South Fore tribe recruited subjects, explained the task, and read the translated stories; a Caucasian recorded the subjects' responses ...
  65. [65]
    Evidence for the universality of facial expressions of emotion.
    This chapter explores the scientific evidence for the universal expression and recognition of facial expressions of emotions.
  66. [66]
    [PDF] Universals and Cultural Differences in the Judgments of Facial ...
    Subjects in 10 cultures performed a more complex judgment task than has been used in previous cross-cultural studies. Instead of limiting the subjects to ...
  67. [67]
    American-Japanese cultural differences in intensity ratings of facial ...
    Findings from a recent study by Ekman et al. (1987) provided evidence for cultural disagreement about the intensity ratings of universal facial expressions.
  68. [68]
    Are There Universal Facial Expressions? - Paul Ekman Group
    Dr. Ekman discovered strong evidence of universality of some facial expressions of emotion as well as why expressions may appear differently across cultures.
  69. [69]
    Emotion Recognition across Cultures: The Influence of Ethnicity on ...
    For example, Ekman and colleagues have demonstrated that people from different cultures are able to identify correctly the emotions portrayed in photographs of ...
  70. [70]
    Cultural Differences in Emotional Expression | Paul Ekman Group
    Explore the cultural differences in emotional expression through the lens of Darwin's studies and Ekman's research on emotional expressions.
  71. [71]
    Fear Facial Expressions: Basic or Social Constructivist?
    ... Ekman has found people universally recognize the fear facial expression. In ... Fore and Borneo participants showed lower recognition rates than other countries.
  72. [72]
    [PDF] Evidence for training the ability to read microexpressions - Humintell
    microexpressions were limited to book chapters and books. Many peer-reviewed articles on expression in deceptive situations do exist (e.g. ...
  73. [73]
    Trainee psychotherapists' emotion recognition accuracy improves ...
    In this study, we investigated two related, but distinct facets of ERA: dynamic multimodal (audio, video, audio-video) ERA and facial micro expression ERA.
  74. [74]
    Training Emotion Recognition Accuracy: Results for Multimodal ...
    Aug 12, 2021 · It is further argued that an untrained observer likely will not become consciously aware of micro expressions (see e.g., Ekman and Friesen, 1969 ...
  75. [75]
    [PDF] Could Micro-Expressions be Quantified? Electromyography Gives ...
    Abstract—Micro-expressions (MEs) are brief, subtle facial expressions that reveal concealed emotions, offering key behavioral cues.
  76. [76]
    Altered brain dynamics of facial emotion processing in schizophrenia
    Jan 16, 2025 · We investigated the processing and underlying neural temporal dynamics of task-irrelevant emotional face stimuli using combined EEG/fMRI in 14 individuals with ...
  77. [77]
    [PDF] The Universality of Emotional Facial Expressions across Culture and ...
    The results indicated a universal agreement, inherent survival skills, and did not signify strong cross cultural influences. Without a doubt, emotion is the ...
  78. [78]
    [PDF] Facial Expression of Emotion - Paul Ekman Group
    The study of facial expressions addresses structure, biological substrates, universality, and cultural specificity of emotion, and if they are discrete or ...
  79. [79]
    [PDF] Nonverbal-Leakage-And-Clues-To-Deception.pdf - Paul Ekman Group
    Knowledge of nonverbal leakage and deception clues could also perhaps be utilized in an attempt to develop lie detection procedures which rely upon nonverbal ...
  80. [80]
    Microexpressions Are Not the Best Way to Catch a Liar - PMC
    Their subsequent analysis of these high-stakes pleadings found only six instances of microexpressions among deceivers and slightly more (8) among genuine ...
  81. [81]
    Accuracy of deception judgments - PubMed
    People achieve an average of 54% correct lie-truth judgments, correctly classifying 47% of lies as deceptive and 61% of truths as nondeceptive.
  82. [82]
    Psychological sleuths--Detecting deception
    "Liars' answers sound more discrepant and ambivalent, the structure of their stories is less logical, and their stories sound less plausible," they say. Liars ...
  83. [83]
    (PDF) Microexpressions Are Not the Best Way to Catch a Liar
    Sep 20, 2018 · Microexpressions are lauded as a valid and reliable means of catching liars (see Porter and ten Brinke, 2010). However, there are many reasons to question what ...
  84. [84]
    Efficacy of a Facial Emotion Training Program for Children and ...
    The results showed that the facial emotion training program enabled children and adolescents with ASD to identify feelings in facial expressions more accurately ...
  85. [85]
    Feasibility of internet-based multimodal emotion recognition training ...
    Research suggests that individuals with autism spectrum disorder (ASD) have difficulties in emotion recognition (ER), which could lead to social ...
  86. [86]
    Lifetime PTSD is associated with impaired emotion recognition in ...
    In this study, lifetime PTSD in war veterans 40 years after exposure was associated with impaired ability to identify facial expression of emotions.
  87. [87]
    Recognition of facial emotions among maltreated children with high ...
    Relative to children who were not maltreated, maltreated children both with and without PTSD showed enhanced response times when identifying fearful faces.
  88. [88]
    The Effects of the Micro-Expression Training on Empathy in Patients ...
    Aug 6, 2025 · ... In addition, in one study, training of emotion recognition resulted in a significant improvement in empathy, even in comparison to a CG. 70 ...
  89. [89]
    Trainee psychotherapists' emotion recognition accuracy improves ...
    We conclude that trainee psychotherapists' emotion recognition accuracy can be effectively trained, especially multimodal emotion recognition accuracy.
  90. [90]
    A pilot study to investigate the effectiveness of emotion recognition ...
    Aug 6, 2025 · This training used the Micro-Expression Training Tool (METT) developed by Paul Ekman, which includes short video-clips to teach the facial ...
  91. [91]
    [PDF] TSA Should Limit Future Funding for Behavior Detection Activities
    Nov 8, 2013 · GAO analyzed data from fiscal years 2011 and 2012 on the rates at which BDOs referred passengers for additional screening based on behavioral.
  92. [92]
    BEHAVIORAL SCIENCE AND SECURITY: EVALUATING TSA'S ...
    ... accuracy at detecting micro expressions and accuracy at detecting lies. But ... airports would cost less than 1% of last year's DHS budget. Although I ...
  93. [93]
    Reading People: Behavioral Anomalies and Investigative Interviewing
    Mar 5, 2014 · Compared to the average accuracy rate of 53 percent—no better than chance—by observers in previous studies, the findings indicated that ...
  94. [94]
    Action unit based micro-expression recognition framework for driver ...
    Jul 30, 2025 · This study presents a micro-expression recognition framework designed to identify emotional variations in drivers by analyzing facial Action ...
  95. [95]
    (PDF) Reconsidering Facial Expressions and Deception Detection
    This chapter overviews facial expression in deception detection, separating their alleged diagnostic value as cues to deception from that of strategic ...
  96. [96]
    Veracity judgement, not accuracy: Reconsidering the role of facial ...
    Decades of deception research have consistently found that human lie detection ability is poor (Bond & DePaulo, 2006). People are also overconfident in their ...
  97. [97]
    Micro Expressions Training Tools - Paul Ekman Group
    The Facial Action Coding System (FACS) is a downloadable PDF manual. It is a ... It takes at least 100 hours of self-study to learn and it has been ...
  98. [98]
    Study finds flaws in leading security lie detection training tool
    ... research team, whose investigation found that the airport security system, METT – the Micro-Expressions Training Tool, fails to improve lie detection rates ...
  99. [99]
    Psychological Research Inspires New Television Series 'Lie to Me'
    a shift of the eyes, a curl of the upper lip, an infinitesimal sneer — can give us away without our knowing it, and ...
  100. [100]
    [PDF] Data Leakage and Evaluation Issues in Micro-Expression Analysis
    Mar 6, 2023 · With the above mentioned issues, this becomes incredibly difficult. Hence it is crucial to fix these problems. We analyze the data leakage and ...
  101. [101]
    Fact or Artifact? Demand Characteristics and Participants' Beliefs ...
    Sep 29, 2025 · In addition to creating inferential errors, demand characteristics can bias estimates of causal relationships. For example, the effects of ...
  102. [102]
    Lisa Feldman Barrett versus Paul Ekman on facial expressions ...
    Jul 19, 2023 · To falsify the anti-Ekman ... And if laughing can be a human-universal innate behavior, why not the Duchenne smile microexpression?
  103. [103]
    Impact of social context on human facial and gestural emotion ... - NIH
    Aug 3, 2024 · First, neurobiological evidence shows that not all facial muscles equally contribute to emotion signaling: humans appear to have greater ...
  104. [104]
    Dynamic Facial Expressions of Emotion Transmit an Evolving ...
    Jan 2, 2014 · Dynamic facial expressions transmit hierarchical information over time. Signals evolve from few biologically basic to complex socially specific information.
  105. [105]
    Advances in Facial Micro-Expression Detection and Recognition
    Micro-expressions are facial movements with extremely short duration and small amplitude, which can reveal an individual's potential true emotions and have ...
  106. [106]
    Leveraging vision transformers and entropy-based attention ... - Nature
    Apr 21, 2025 · This paper proposes a novel micro-expression recognition method based on the Vision Transformer. First, a new model called HTNet with LAPE
  107. [107]
    HTNet for micro-expression recognition - ScienceDirect.com
    Oct 14, 2024 · He has published over 60 peer-reviewed papers, including over 40 of them published in top-tier conferences and journals, like ICML, CVPR ...
  108. [108]
    MEGC2025: Micro-Expression Grand Challenge on Spot Then ...
    The ME grand challenge (MEGC) 2025 introduces two tasks that reflect these evolving research directions: (1) ME spot-then-recognize (ME-STR), which integrates ...
  109. [109]
    Lightweight ViT Model for Micro-Expression Recognition Enhanced ...
    Jun 29, 2022 · This approach describes training a facial expression feature extractor by transfer learning and then fine-tuning and optimizing the MobileViT model.
  110. [110]
    Micro-expression recognition model based on TV-L1 optical flow ...
    Oct 20, 2022 · In this paper, we propose the ShuffleNet model combined with a miniature self-attentive module, which has only 1.53 million training parameters.
  111. [111]
    [PDF] Deep Insights of Learning based Micro Expression Recognition - arXiv
    Oct 10, 2022 · However, pub- licly available datasets for MER consist of limited data samples and tend to cause over-fitting. Moreover, deep/dense networks ...
  112. [112]
    A survey of micro-expression recognition - ScienceDirect.com
    Recent MER systems generally focus on three important issues: overfitting caused by a lack of sufficient training data, the imbalanced distribution of samples, ...
  113. [113]
    Micro-expression spotting: A new benchmark - ScienceDirect.com
    Jul 5, 2021 · A new challenging benchmark for ME spotting; (2) we suggest a new evaluation protocol that standardizes the comparison of various ME spotting techniques.
  114. [114]
    Comparison of human vs machine performance across benchmarks
    Differently from computer vision systems which require explicit supervision, humans can learn facial expressions by observing people in their environment.
  115. [115]
    Multimodal latent emotion recognition from micro-expression and ...
    This paper aims at the benefits of incorporating multimodal data for improving latent emotion recognition accuracy, focusing on micro-expression (ME) and ...
  116. [116]
    A Transformer-Based Multimodal Framework for Hidden Emotion ...
    Jun 30, 2025 · A Transformer-Based Multimodal Framework for Hidden Emotion Recognition through Micro-Expression and EEG Fusion | Proceedings of the 2025 ...
  117. [117]
    Could Micro-Expressions be Quantified? Electromyography Gives ...
    Secondly, we conducted an empirical study to investigate the internal characteristics of MEs. We found that individuals exhibit less control and awareness over ...
  118. [118]
    A More Objective Quantification of Micro-Expression Intensity ...
    Jun 18, 2025 · With its precision, ability to detect micro-expressions, and objectivity of results, EMG enables a deeper understanding of the processes of ...
  119. [119]
    Multimodal driver emotion recognition using motor activity and facial ...
    Nov 26, 2024 · This study introduces a methodology to recognize four specific emotions using an intelligent model that processes and analyzes signals from motor activity and ...
  120. [120]
    Optimized driver fatigue detection method using multimodal neural ...
    Apr 10, 2025 · This study introduces a comprehensive approach using multimodal neural networks, leveraging the DROZY dataset, which includes physiological and facial data ...
  121. [121]
    Advances in facial expression recognition technologies for emotion ...
    Sep 23, 2025 · Despite these advances, several challenges remain, such as aligning model explanations with human reasoning, handling multimodal data fusion ...