Facial Action Coding System
The Facial Action Coding System (FACS) is a comprehensive, anatomically based framework for objectively measuring and coding facial movements by breaking them down into discrete components corresponding to specific facial muscle contractions, known as Action Units (AUs).[1] Developed by psychologists Paul Ekman and Wallace V. Friesen and first published in 1978 as a manual for behavioral science research, FACS enables detailed, standardized descriptions of facial expressions independent of inferred emotional meaning. The system draws on anatomical analysis to identify 44 principal AUs in adults, each assigned a numerical code and shorthand notation (e.g., AU 12 for lip corner puller), along with intensity ratings on a five-point scale from A (trace) to E (maximum).[2] FACS builds on earlier anatomical studies, such as those by Carl-Herman Hjortsjö in 1970, but Ekman and Friesen's innovation lies in its systematic, observer-based coding protocol that emphasizes visible changes in facial appearance rather than internal muscle states or subtle cues like skin tone.[1] In 2002, Ekman, Friesen, and collaborator Joseph C. 
Hager released a major revision, commonly cited as FACS 2002, incorporating updated observations from high-speed videography and electromyography to refine AU definitions and add new descriptors for eye, head, and body movements.[3] This revision expanded the system's precision while maintaining its core focus on anatomical fidelity, resulting in a 527-page manual that serves as the gold standard for training certified coders.[4] The system's applications span multiple disciplines, including psychology for studying universal and culture-specific emotions, neuroscience for linking facial actions to brain activity, and clinical settings for assessing pain through distinct AU patterns such as brow lowering (AU 4) and eye tightening (AU 7).[5] In computer science and human-computer interaction, FACS informs automated facial analysis tools, such as those using machine learning to detect AUs in real-time video for emotion recognition or lie detection.[2] Consumer research has also adopted FACS to evaluate affective responses to products and advertisements by coding subtle expressions such as the Duchenne smile (AUs 6 + 12), highlighting its versatility in bridging basic science and applied contexts.[1] Despite its manual coding demands, which require extensive training (typically around 100 hours), FACS remains influential for its reliability and objectivity, with inter-coder agreement rates often exceeding 80% among trained users.[2]
History and Development
Origins and Conceptual Foundations
The Facial Action Coding System (FACS) traces its conceptual roots to Charles Darwin's seminal work, The Expression of the Emotions in Man and Animals (1872), which posited that facial expressions are innate, universal signals evolved for communication and survival, inspiring later systematic analyses of facial morphology to decode emotional states.[6] Darwin's emphasis on observable facial changes as indicators of underlying emotions, drawing from Guillaume Duchenne's electrophysiology studies, laid the groundwork for objective measurement by highlighting the need to distinguish genuine from posed expressions based on anatomical mechanics.[7] FACS also built upon Swedish anatomist Carl-Herman Hjortsjö's 1970 studies, which analyzed facial musculature and identified discrete movement units, providing a direct anatomical precursor.[7] In the 1970s, psychologists Paul Ekman and Wallace V. Friesen developed FACS to address the limitations of subjective interpretations in emotion research, where prior methods like the Facial Affect Scoring Technique (FAST) relied on holistic judgments prone to observer bias.[7] Motivated by the need for a precise tool to detect subtle cues in facial behavior—such as those revealing deceit or concealed emotions—they shifted focus to an anatomically grounded approach, analyzing how individual facial muscles produce discrete visible changes rather than interpreting entire expressions as unitary emotions.[7] The initial FACS manual was published in 1978 as a comprehensive, self-instructional guide with photographic and filmed examples for training coders.[4] A major revision, FACS 2002, co-authored with Joseph C. 
Hager, incorporated updated anatomical knowledge from advances in facial musculature studies, refining action descriptions for greater accuracy while preserving the system's foundational structure.[4] At its core, FACS operates on the principle that all facial movements are discrete events resulting from specific muscle activations, enabling decomposition of expressions into measurable components for reliable, replicable analysis across contexts.[7]
Key Contributors and Evolution
The Facial Action Coding System (FACS) was primarily developed by psychologist Paul Ekman and his collaborator Wallace V. Friesen, who together created the original framework in the late 1970s. Ekman, renowned for his research on universal facial expressions of emotion, led the conceptual and theoretical development, drawing on anatomical studies to systematize facial movements. Friesen, with expertise in nonverbal communication and observational techniques, contributed significantly to the methodological aspects, including the detailed scoring protocols for real-time and video-based analysis. Their joint work culminated in the first FACS manual published in 1978, which established the system's foundation for objectively measuring facial muscle actions.[8] The system underwent a major revision in 2002, led by Ekman and Friesen in collaboration with Joseph C. Hager, who focused on refining the anatomical descriptions and expanding practical tools for coders. Hager's contributions included the development of software aids, such as the checker program for practice scoring and multimedia resources to enhance training accuracy. This update incorporated feedback from decades of use, improving the precision of action unit definitions while maintaining the core structure of approximately 44 action units plus eye, head, and visibility modifiers. Erika L. 
Rosenberg, a key collaborator in subsequent advancements, has played a pivotal role in manual revisions and educational dissemination; she developed the standardized five-day FACS training workshop in 2004 and has led efforts to update the manual for contemporary applications, including clearer criteria for complex expressions.[9][10] In the 2020s, computational tools inspired by FACS, such as AI-driven detection systems, have advanced automated action unit coding, often achieving reliability comparable to trained human coders, whose manual coding typically reaches inter-observer agreement of 80–90% with kappa values above 0.8; this enables large-scale studies while preserving the manual system's anatomical rigor.[11][4] The Paul Ekman Group oversees certification programs, including the FACS Final Test, which ensures proficiency through rigorous self-study and workshop preparation, standardizing application across research and practice.[4]
Core Methodology
Coding Process and Training
The coding process in the Facial Action Coding System (FACS) begins with the preparation of video footage, which is typically reviewed in slow motion or frame by frame to enable precise observation of facial movements. Coders systematically analyze the video, identifying the onset (beginning of the movement), apex (peak intensity), and offset (end of the movement) for each discernible facial action. This decomposition relies on anatomical criteria outlined in the FACS manual, focusing on visible changes such as wrinkles, bulges, or furrows that indicate underlying muscle contractions, ensuring that expressions are broken down into their elemental components without presupposing emotional meaning.[12][13][2] Training to become a proficient FACS coder requires substantial dedication, typically involving 50 to 100 hours of self-study using the comprehensive 527-page FACS manual and the accompanying 197-page Investigator's Guide, which cover anatomical foundations and coding rules through photographs and video examples. This self-study is often supplemented by structured workshops, such as the five-day course developed by certified trainers, which emphasize practice on sample videos to build observational skills. 
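The onset/apex/offset segmentation described above can be sketched in code. The following minimal illustration assumes per-frame intensity scores for a single AU have already been recorded (the scores, the 0–5 scale mapping, and the threshold are hypothetical conveniences for the example, not part of the FACS manual):

```python
def segment_event(intensities, threshold=1):
    """Find onset, apex, and offset frame indices for one AU event.

    intensities: per-frame intensity scores (0 = neutral; 1-5 roughly
    corresponding to FACS levels A-E), as a coder might record them.
    """
    onset = apex = offset = None
    for i, level in enumerate(intensities):
        if level >= threshold:
            if onset is None:
                onset = i          # first frame the action is visible
            if apex is None or level > intensities[apex]:
                apex = i           # frame of peak intensity
            offset = i             # last frame the action is visible
    return onset, apex, offset

# Hypothetical trace of AU 12 intensity across 10 video frames
au12 = [0, 0, 1, 2, 4, 4, 3, 1, 0, 0]
print(segment_event(au12))  # (2, 4, 7)
```

In practice a coder marks these frames by eye against the manual's appearance criteria; automated detectors apply a similar thresholding step to continuous AU-intensity estimates.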
Certification is achieved by passing the FACS Final Test, a video-based examination with 34 items that assesses the ability to accurately code action units, requiring demonstration of reliable application across varied facial behaviors.[4][14] While manual observation remains the core method, digital tools enhance efficiency; for instance, EMFACS (Emotion FACS), a specialized subset of FACS, selectively codes only those action units likely to carry emotional significance, streamlining manual coding for emotion-related research.[4] Reliability in coding is evaluated via intra-coder agreement (consistency within a single coder across repeated sessions) and inter-coder agreement (consistency across multiple coders), with certification thresholds typically demanding inter-rater reliability above 0.80 on metrics such as Cohen's kappa. Challenges often arise with subtle movements, where agreement may dip below 0.70 because low-intensity actions are perceptually ambiguous. The resulting codes from this process specify action units as the fundamental output of facial analysis.[11][15]
Action Units and Anatomical Basis
The Facial Action Coding System (FACS) defines action units (AUs) as the fundamental, discrete units of facial movement, each corresponding to the contraction of one or more specific facial muscles and producing distinct, observable changes in facial appearance. These units serve as the building blocks for describing all visually discernible facial behaviors, enabling precise decomposition of expressions into their anatomical components. In addition to AUs, FACS incorporates action descriptors (ADs) to account for broader or gross visible movements that lack a direct tie to individual muscle actions, such as full eyelid closure or head tilts, particularly when specific muscular involvement cannot be isolated.[4][16] The anatomical foundation of AUs stems from a systematic dissection of facial musculature, drawing on studies of muscle function and their visible effects, including key groups such as the zygomaticus major, which elevates the cheeks. This mapping identifies approximately 44 AUs in humans, each linked to targeted muscle activations across the face's expressive regions, ensuring that coding reflects the biomechanical realities of facial movement rather than superficial appearances.[4][17][18] Central to FACS is the principle of AU independence, whereby most units can occur separately or combine additively to form complex configurations, allowing for the representation of nuanced facial dynamics; however, anatomical constraints, such as shared muscle attachments, cause certain AUs to co-occur obligatorily, like those involving coupled lip and cheek actions.[17][19] FACS maintains a strictly descriptive orientation, cataloging muscular events without inferring emotional states; while combinations of AUs may correlate with expressions like surprise in empirical studies, the system itself avoids such interpretive labels to preserve objectivity in observation and analysis.[4][3]
Action Unit Codes
Main Facial Action Units
The main Facial Action Units (AUs) in the Facial Action Coding System (FACS) represent discrete, anatomically distinct movements of the face, each linked to the activation of one or more specific facial muscles. These units enable precise coding of facial behavior by observers trained in FACS, focusing on visible changes rather than inferred emotions. Originally outlined in the 1978 manual by Paul Ekman and Wallace V. Friesen, the system was revised in 2002 by Ekman, Friesen, and Joseph C. Hager to incorporate refinements from electromyographic studies and cadaver dissections, including updated criteria and intensity scoring for AU 25 (lips part) to better account for subtle relaxations in lip closure.[4][20] The principal facial AUs (numbered 1–28, with gaps at 3, 8, 19, and 21) are grouped by facial region (e.g., AUs 1–2 for the brows, AUs 9–12 for the upper lip and cheeks) and exclude the supplementary codes for head position, eye gaze, and visibility obstructions. Each AU is described by its muscular basis, the observable facial changes it produces, and potential interactions with other units. For instance, AU 6 (cheek raiser) often combines with AU 12 (lip corner puller) to form raised cheeks and upturned lip corners, characteristic of a full smile. Intensity levels (A–E) can modify these units, but the base descriptions below focus on peak activation.[21][22] The following table lists the principal facial AUs 1–28, drawing on the 2002 FACS manual criteria:
| AU | Name | Muscular Basis | Visible Effects |
|---|---|---|---|
| 1 | Inner Brow Raiser | Frontalis, pars medialis | Elevates medial portion of eyebrows, creating horizontal wrinkles across bridge of nose |
| 2 | Outer Brow Raiser | Frontalis, pars lateralis | Elevates lateral portion of eyebrows, arching them outward |
| 4 | Brow Lowerer | Corrugator supercilii, depressor supercilii | Draws eyebrows together and downward, producing vertical furrows between brows |
| 5 | Upper Lid Raiser | Levator palpebrae superioris | Widens eyes by raising upper eyelid, exposing more sclera |
| 6 | Cheek Raiser | Orbicularis oculi, pars orbitalis | Raises cheeks and forms crow's feet wrinkles around eyes |
| 7 | Lid Tightener | Orbicularis oculi, pars palpebralis | Narrows eye opening by tensing lower eyelid upward |
| 9 | Nose Wrinkler | Levator labii superioris alaeque nasi | Raises upper lip and nostrils, wrinkling sides of nose |
| 10 | Upper Lip Raiser | Levator labii superioris | Elevates upper lip, exposing teeth and lengthening upper lip groove |
| 11 | Nasolabial Deepener | Zygomaticus minor | Pulls skin upward from nose to lip, deepening nasolabial fold |
| 12 | Lip Corner Puller | Zygomaticus major | Draws lip corners laterally and upward, creating oblique cheek lines |
| 13 | Sharp Lip Puller (Cheek Puffer) | Levator anguli oris (caninus) | Pulls lip corners sharply upward, puffing the cheeks slightly |
| 14 | Dimpler | Buccinator | Tightens cheek, producing dimples near lip corners |
| 15 | Lip Corner Depressor | Depressor anguli oris (triangularis) | Pulls lip corners downward, creating downturned mouth |
| 16 | Lower Lip Depressor | Depressor labii inferioris | Depresses lower lip, exposing lower teeth |
| 17 | Chin Raiser | Mentalis | Pushes lower lip up and wrinkles chin |
| 18 | Lip Puckerer | Incisivii labii superioris and/or inferioris | Purses lips forward into a puckered spout |
| 20 | Lip Stretcher | Risorius, often with platysma | Stretches lips horizontally, flattening mouth |
| 22 | Lip Funneler | Orbicularis oris (superior/inferior parts) | Purses lips into a funnel shape, protruding them |
| 23 | Lip Tightener | Orbicularis oris | Tenses lips, drawing mouth into a tight line |
| 24 | Lip Pressor | Orbicularis oris | Presses lips firmly together, often with tension |
| 25 | Lips Part | Relaxation of mentalis or orbicularis oris; subtle action of depressor labii inferioris | Slightly separates lips without tension, often preparatory for speech or breathing |
| 26 | Jaw Drop | Masseter relaxed; temporalis and internal pterygoid relaxed | Jaw drops, increasing vertical distance between teeth and lips |
| 27 | Mouth Stretch | Pterygoids, digastric | Stretches mouth open vertically, lowering the jaw well beyond a relaxed drop |
| 28 | Lip Suck | Orbicularis oris | Draws lips inward, sucking or biting lower lip |
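The AU codes above combine with the A–E intensity letters into the shorthand notation described earlier (e.g., "6B+12D" for a slight cheek raise with a strong lip corner pull). A minimal sketch of decoding that notation, using a lookup built from a subset of the table; the function names and the exact string format are illustrative assumptions, not a standard from the FACS manual:

```python
import re

# AU names taken from the table above (subset sufficient for the example)
AU_NAMES = {
    1: "Inner Brow Raiser", 2: "Outer Brow Raiser", 4: "Brow Lowerer",
    5: "Upper Lid Raiser", 6: "Cheek Raiser", 7: "Lid Tightener",
    12: "Lip Corner Puller", 15: "Lip Corner Depressor", 25: "Lips Part",
}

# FACS intensity scale: A (trace) through E (maximum)
INTENSITY = {"A": "trace", "B": "slight", "C": "marked", "D": "severe", "E": "maximum"}

def decode(facs_code):
    """Parse a shorthand string such as '6B+12D' into (AU, name, intensity) tuples."""
    events = []
    for token in facs_code.split("+"):
        m = re.fullmatch(r"(\d+)([A-E]?)", token.strip())
        if not m:
            raise ValueError(f"unrecognized token: {token!r}")
        au = int(m.group(1))
        level = INTENSITY.get(m.group(2), "unscored")
        events.append((au, AU_NAMES.get(au, "unknown"), level))
    return events

def is_duchenne_smile(facs_code):
    """AU 6 + AU 12 together form the Duchenne smile discussed in the article."""
    aus = {au for au, _, _ in decode(facs_code)}
    return {6, 12} <= aus

print(decode("6B+12D"))
print(is_duchenne_smile("6B+12D"))   # True
print(is_duchenne_smile("12C+25A"))  # False
```

Automated AU-detection systems typically output a structure like the one `decode` returns, one record per AU per frame, which is then aggregated across frames using the onset/apex/offset conventions from the coding process.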