
Virtual human

A virtual human is a digital entity created through the integration of computer graphics, artificial intelligence, animation, and speech-synthesis technologies, designed to replicate human-like appearance, behavior, emotions, and interactive capabilities in virtual environments. These entities differ from real humans by existing solely in digital spaces and often emphasize simulated psychological and emotional responses to enhance user engagement. The concept of virtual humans traces its origins to early computer simulations, such as Boeing's digital human model in 1964 for ergonomic design, but gained prominence in the 1980s with the debut of virtual idols in Japanese media, such as Lynn Minmay in the 1982 series The Super Dimension Fortress Macross. Over decades, advancements in 2D and 3D modeling, neural networks for voice synthesis, and AI-driven animation have enabled progressively realistic representations, evolving from simple avatars to interactive agents capable of natural conversation and emotional expression. Recent developments as of 2025 include generative AI for more dynamic behaviors. Key challenges in their development include achieving perceptual realism in motion and appearance to avoid the "uncanny valley" effect—where near-human likeness evokes discomfort—and balancing computational efficiency with lifelike behaviors like gaze direction and nonverbal cues. Virtual humans find applications across diverse fields, including healthcare for therapeutic interviews and patient support, gaming and entertainment for immersive experiences, education for interactive simulations, and commercial sectors like retail and finance for virtual customer service. In healthcare, for instance, they simulate face-to-face conversations to support mental health assessments and behavior change interventions, demonstrating effectiveness in improving outcomes such as engagement and adherence to treatment plans. Ongoing research, with over 600 academic publications as of 2023 and continued growth into 2025, explores enhancements in multimodal interactions and ethical considerations like privacy and bias in AI-driven responses.

Introduction and Fundamentals

Definition and Characteristics

A virtual human is a computer-generated anthropomorphic entity designed to simulate human appearance, behavior, cognition, and interaction within digital environments, primarily enabled by computer graphics, artificial intelligence, and animation technologies. These entities exist in virtual worlds, exhibiting human-like traits such as realistic facial expressions, gestures, and conversational abilities to facilitate immersive human-computer interactions. Unlike static digital representations, virtual humans are malleable, allowing adaptation across diverse scenarios while maintaining anthropomorphic fidelity.

Virtual humans are related to concepts in computer graphics and artificial intelligence, with terms like "digital human" often used interchangeably to describe similar interactive, embodied entities. Digital doubles focus on replicating the physical appearance of real individuals via 3D scanning for applications like visual effects, without incorporating autonomous cognition or behavior. Avatars, by contrast, are user-controlled digital proxies that prioritize direct manipulation over inherent autonomy. Agents, meanwhile, refer to non-visual or functionality-driven systems that execute tasks without a persistent human-like form or embodiment.

Key characteristics of virtual humans include visual realism, achieved through sophisticated 3D modeling, texturing, and rendering to produce lifelike appearances ranging from detailed skin and clothing to physiological variations. Artificial intelligence enables autonomous behavior, emotional modeling, and reactive decision-making, often powered by cognitive frameworks for emotion and personality traits; recent integrations with large language models have further advanced cognitive capabilities as of 2023. Interactivity supports real-time responses to users via multimodal inputs, including speech, gestures, and environmental cues, fostering empathetic and context-aware engagements. Embodiment ensures a consistent presence, allowing navigation and interaction within simulated spaces as if physically present.

The term "virtual human" originated in academic contexts during the 1990s, building on earlier ergonomic modeling to describe interactive, autonomous digital figures capable of real-time simulation. This evolution marked a shift from rigid ergonomic models to dynamic entities integrating cognition and behavior, laying the foundation for modern applications.

Key Technologies

Virtual humans rely on foundational computer graphics techniques to construct and visualize realistic representations of human forms. At the core of 3D modeling are polygonal meshes, which approximate surfaces through collections of vertices, edges, and faces, enabling efficient representation of complex geometries in real-time applications. Subdivision surfaces extend this by refining polygonal meshes into smoother, piecewise parametric forms, combining topological flexibility with underlying continuity to model organic shapes like human anatomy. Texturing enhances these models via UV mapping, which projects 2D images onto surfaces by assigning texture coordinates to vertices, allowing detailed surface properties such as skin patterns to be applied without altering the mesh geometry. Shaders further process these textures in real-time, computing material interactions like subsurface scattering to simulate lifelike skin and hair appearance.

Rendering pipelines integrate these elements to produce photorealistic visuals, with ray tracing simulating light paths to accurately model reflections, refractions, and shadows essential for human-like realism. This technique traces rays from the camera through the scene, computing intersections with meshes and applying textures and shaders to generate outputs approaching photographic quality, particularly for virtual human faces and bodies.

Animation techniques bring virtual humans to life through skeletal rigging, where a hierarchical bone structure is embedded within the mesh to deform it realistically during motion. Inverse kinematics (IK) optimizes this by solving for joint positions to reach target endpoints, such as a hand grasping an object, automating natural pose adjustments across the skeleton. For facial animation, blend shapes—pre-defined deformations representing expressions like smiles or frowns—allow interpolation between neutral and emotive states, enabling nuanced control over subtle human-like reactions (a minimal interpolation sketch appears at the end of this section).

Audio synthesis complements visual components with text-to-speech (TTS) systems, where neural networks generate natural-sounding voice output. WaveNet, a deep generative model, produces raw audio waveforms by predicting sound samples autoregressively, capturing prosody, intonation, and timbre to mimic human speech patterns.

Integration frameworks like Unreal Engine and Unity orchestrate these technologies in real-time environments, combining graphics rendering, animation, and audio for interactive virtual humans. Unreal Engine's MetaHuman tool, for instance, leverages its pipeline to create high-fidelity characters with rigged animations and ray-traced visuals directly importable into Unreal Engine projects. Unity supports similar workflows through its animation system and shader graphs, facilitating cross-platform deployment of responsive digital avatars.

Hardware enablers accelerate these processes, with GPU acceleration handling parallel computations for rendering pipelines and animation solvers to achieve real-time performance. Motion capture systems using optical markers—reflective points tracked by cameras—provide precise input data for skeletal animation, capturing human movements to drive virtual skeletons with sub-millimeter accuracy.
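To make the blend-shape technique above concrete, the following minimal sketch treats a face as an array of vertex positions and deforms it by a weighted sum of per-shape deltas. The three-vertex "mesh," shape names, and weights are invented for illustration, not data from any production rig.

```python
import numpy as np

# A "mesh" here is just an (N, 3) array of vertex positions. Each blend
# shape is stored as a delta from the neutral mesh; weighting and summing
# the deltas interpolates between neutral and emotive states.

neutral = np.array([[0.0, 0.0, 0.0],    # jaw
                    [-1.0, 1.0, 0.0],   # left mouth corner
                    [1.0, 1.0, 0.0]])   # right mouth corner

blend_shapes = {
    # "smile": pull both mouth corners up and outward
    "smile": np.array([[0.0, 0.0, 0.0],
                       [-0.2, 0.3, 0.0],
                       [0.2, 0.3, 0.0]]),
    # "jaw_open": drop the jaw vertex
    "jaw_open": np.array([[0.0, -0.8, 0.0],
                          [0.0, 0.0, 0.0],
                          [0.0, 0.0, 0.0]]),
}

def apply_blend_shapes(neutral, blend_shapes, weights):
    """Deform the neutral mesh by a weighted sum of blend-shape deltas."""
    result = neutral.copy()
    for name, w in weights.items():
        result += w * blend_shapes[name]
    return result

# A half-strength smile with a slightly open jaw.
posed = apply_blend_shapes(neutral, blend_shapes,
                           {"smile": 0.5, "jaw_open": 0.25})
print(posed)
```

Production rigs apply the same weighted-delta arithmetic to meshes with tens of thousands of vertices and dozens of shapes, typically evaluated per frame on the GPU.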

History

Early Developments

The origins of virtual humans trace back to the 1960s, with early computer simulations for ergonomic design. A pioneering example is Boeing's "Boeing Man" model, developed in 1964 by William Fetter, which created the first digital human figure using wireframe graphics to evaluate aircraft cockpit layouts and pilot positioning. This marked the initial use of computational models to simulate human anatomy and ergonomics in virtual spaces.

Subsequent decades saw further progress in digital animations incorporating interactive or simulated human-like figures in entertainment media. In video games, advancements began with basic 2D representations, though these were limited to simple sprites rather than full simulations. The 1990s brought breakthroughs in polygonal modeling, enabling more volumetric representations of humanoid figures beyond flat sprites. Software like Virtus WalkThrough, released in 1990, allowed users to create and navigate basic 3D environments with polygonal objects, paving the way for spatial integration of human figures in virtual spaces. These models relied on simple wireframe and polygon-based rendering, transitioning virtual humans from flat screens to explorable 3D forms.

A key academic milestone was Norman Badler's development of the Jack system at the University of Pennsylvania starting in the late 1980s and expanding through the 1990s, which provided tools for creating articulated 3D human figures in simulations for ergonomics and human-factors analysis. Jack utilized polygonal meshes with segments and joint constraints to mimic human biomechanics, supporting tasks like object manipulation and motion prediction in virtual environments. This system emphasized hierarchical control structures for behaviors, representing a shift toward functional virtual humans in research applications.

Despite these advances, early virtual humans in the 1980s and 1990s faced significant limitations due to hardware constraints. Models typically featured low polygon counts—often under 1,000 polygons—to enable rendering on contemporary computers, resulting in blocky, low-fidelity appearances that lacked detailed geometric or textural realism. Animations were rigid and pre-scripted, relying on keyframing or basic motion capture without fluid natural motion synthesis, which often led to unnatural poses or collisions in simulations. Real-time interaction was minimal, as computational power restricted dynamic behaviors, confining most applications to offline rendering or simple playback rather than responsive autonomy.

Milestones in Entertainment

In the late 1990s and early 2000s, virtual humans began to appear in entertainment media, leveraging emerging technologies like motion capture to create more immersive experiences. A landmark achievement came with the 2001 film Final Fantasy: The Spirits Within, the first feature-length production to feature photorealistic synthetic human actors, such as Dr. Aki Ross, whose movements were captured using motion capture for 90% of body animations via a 16-camera optical system with actors wearing 35 markers. This approach marked a significant shift toward virtual actors indistinguishable from live performers in visual fidelity, though the film's high production costs underscored early challenges in audience reception.

Building on foundational techniques, the gaming industry introduced interactive virtual humans that enhanced player engagement through basic autonomy and responsiveness. In The Sims (2000), non-player characters (NPCs) demonstrated limited autonomous AI, allowing them to pursue needs like hunger or social interaction independently while responding to player interventions, creating emergent narratives in simulated households. Similarly, the Tomb Raider series, starting with its 1996 debut, featured Lara Croft as a pioneering interactive protagonist, whose polygonal model enabled fluid exploration and combat in 3D environments, influencing character design across action-adventure genres.

Demo projects further showcased potential for lifelike interactions, as seen in the 1997 Microsoft Persona project, which presented "Lifelike Computer Characters" through prototypes like Peedy the Parrot, capable of real-time facial expressions synchronized with speech using reactive 3D animation sequences. These demonstrations highlighted the potential for emotional expressivity, paving the way for more dynamic virtual companions in games and applications.

Technical advancements in performance capture also transformed character portrayal, exemplified by Andy Serkis's work as Gollum in The Lord of the Rings trilogy (2001–2003), where his on-set motions were digitized to drive the creature's nuanced behaviors, blending human acting with digital enhancement to achieve unprecedented emotional depth. This technique elevated virtual humans from static models to performative entities, influencing subsequent films and games by emphasizing actor-driven realism over manual keyframing.

Modern Advancements

In the 2010s, virtual humans saw significant advancements in AI-driven behaviors and deep-learning applications for enhanced realism. Researchers and companies began integrating natural language processing to enable more natural conversational interactions, with IBM's Watson platform introducing virtual agents in 2016 that allowed developers to build and train engagement bots for customer service and other dialogues. Concurrently, machine learning techniques improved facial animation, exemplified by Disney Research's FaceDirector method in 2014, which enabled continuous control and transfer of an actor's emotional facial performances to digital characters in video, preserving subtle expressions like eye gazes and head movements for high-fidelity results.

Post-2018, the field experienced rapid growth, marked by a surge in academic publications from 2020 onward, reaching 583 valid papers indexed by April 2023, with continued expansion through 2025. This proliferation reflects the evolution of virtual humans through four historical phases: the 1980s emergence via 2D imagery, the 1990s-2000s shift to 3D modeling and motion capture, the 2010s integration of artificial intelligence for behavioral realism, and the post-2010s expansion into advanced synthesis techniques. Culminating in this latest phase, innovations like neural rendering—leveraging deep learning for precise, photorealistic visual outputs—and end-to-end synthesis—using neural networks to generate coherent speech and animations directly from inputs—have driven unprecedented fidelity and efficiency in virtual human creation.

The 2020s brought further milestones, including real-time deepfake technologies that enable instantaneous face and voice manipulation for immersive virtual interactions, as seen in tools capable of generating lifelike videos during live sessions. Metaverse integrations advanced embodiment, with Meta's Codec Avatars project, announced in 2021, delivering photorealistic, full-body representations for virtual reality, supporting eye contact, gestures, and expressions to foster natural remote communication. The digital human market, encompassing virtual humans, is projected to grow from USD 50.56 billion in 2025 to USD 247.43 billion by 2029, at a compound annual growth rate (CAGR) of 48.7%, fueled by these AI and metaverse synergies. Key institutions have propelled empathetic capabilities, notably the University of Southern California's Institute for Creative Technologies (USC ICT), whose Virtual Human Therapeutics Lab develops embodied conversational AI agents for mental health applications, demonstrating how virtual humans can deliver tailored, evidence-based interventions with emotional intelligence.

Types

Identity-Based Virtual Humans

Identity-based virtual humans are digital replicas designed to accurately represent specific real individuals, capturing their physical likeness, voice, and behavioral traits to create lifelike virtual counterparts. These replicas serve as personalized avatars that preserve an individual's unique identity, distinguishing them from generic or fictional virtual entities. The creation process typically begins with high-resolution 3D scanning techniques to model the subject's anatomy and appearance. Photogrammetry, which involves capturing multiple overlapping photographs from various angles and processing them through software like RealityCapture to generate detailed 3D meshes, is a primary method for achieving this. LiDAR scanning complements this by using laser pulses to measure distances and produce precise point clouds of the human form, often integrated with mobile devices for accessible full-body or facial captures. To replicate vocal identity, AI-driven voice cloning technologies, such as those developed by Respeecher, analyze audio samples to synthesize speech that matches the original speaker's timbre, intonation, and emotional nuances.

High-fidelity modeling is essential for these virtual humans, ensuring anatomical accuracy and realistic motion. Seminal work in this area involves optimizing template-based anatomical models from multiple surface scans to derive subject-specific parameters, such as bone lengths and muscle volumes, enabling physics-based simulation that accounts for soft-tissue dynamics, collisions, and elastic deformations. This data-driven approach uses non-rigid registration and large-scale optimization to personalize rest poses and simulate natural behaviors, surpassing surface-only methods by incorporating subsurface details for more authentic emulation. Personality traits are approximated through behavioral data integrated into these models, allowing the replica to mimic idiosyncratic gestures and expressions derived from the individual's recorded performances.

Prominent examples illustrate the application of identity-based virtual humans in entertainment and preservation. In the 2022 ABBA Voyage tour, digital avatars of the band members were created using motion capture from 160 cameras to record their performances, combined with visual effects from Industrial Light & Magic to produce youthful, interactive holograms that performed live alongside a real band. Similarly, in the 2016 film Rogue One: A Star Wars Story, the late actor Peter Cushing was digitally revived as Grand Moff Tarkin through facial mapping of archival footage onto a stand-in actor's performance, achieving a seamless blend of likeness and dialogue despite ethical debates.

In the market, identity-based virtual humans play a key role in brand endorsements and legacy preservation, with the broader digital human sector projected to grow from USD 4.55 billion in 2024 to USD 14.83 billion by 2034, driven by demand for personalized content. Brands increasingly employ digital clones of celebrities and models for campaigns, enabling cost-effective, adaptable promotions without scheduling constraints—such as virtual replicas in ads that maintain consistent visual identity across markets. For legacy preservation, these replicas allow deceased individuals to "perform" in new media, extending cultural impact while raising legal considerations around likeness rights and consent. Emerging applications also explore their use in secure identity verification within virtual environments, leveraging biometric fidelity to authenticate users in metaverses.

Service-Based Virtual Humans

Service-based virtual humans are digital entities engineered primarily for practical assistance in customer-facing or operational roles, leveraging embodied avatars to deliver interactive support through digital interfaces. These systems integrate conversational AI, particularly natural language processing (NLP) for understanding and generating human-like dialogue, enabling them to handle inquiries, provide guidance, and perform routine tasks such as scheduling. Task automation is a core capability, allowing these virtual humans to process multiple interactions simultaneously while maintaining contextual awareness during conversations. Appearances are often customizable to align with brand identities, incorporating adjustable facial features, clothing, and behavioral traits to enhance user engagement and trust.

Development of service-based virtual humans typically employs hybrid models that blend rule-based systems for structured responses with machine learning techniques for adaptive, context-aware interactions. Rule-based components ensure reliability in predefined scenarios, such as following scripted dialogue flows, while machine learning enhances flexibility through classifiers for intent recognition and sentiment analysis. For instance, the SimSensei Kiosk, developed in the early 2010s, utilized rule-based dialogue policies with approximately 100 fixed utterances for conducting clinical interviews, augmented by machine learning models for facial expression and nonverbal behavior detection to assess user distress signals. This hybrid approach allows virtual humans to respond dynamically to user inputs, improving accuracy in open-ended exchanges without relying solely on rigid scripts (a minimal sketch of such a hybrid policy appears at the end of this section).

Prominent examples include Soul Machines' Digital Workers, which function as virtual receptionists and assistants in customer service, human resources, and healthcare settings, capable of empathetic listening and task handling like onboarding or query resolution. Another is the SimSensei Kiosk, a healthcare chatbot deployed as a virtual interviewer named Ellie to screen for post-traumatic stress disorder (PTSD) by analyzing verbal and nonverbal cues during face-to-face simulations, aiding clinicians in early detection. These implementations highlight the shift toward embodied agents that simulate human presence for more effective service delivery.

Key advantages of service-based virtual humans include round-the-clock availability, enabling continuous support without fatigue, and high scalability to manage increasing interaction volumes across global operations. In 2023, the virtual agents and assistants segment dominated the digital avatar market, reflecting their widespread adoption for efficient, cost-effective service automation in commercial environments.
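The sketch below illustrates the hybrid pattern described above: an exact-match rule layer handles scripted inputs, and a crude lexical-overlap scorer stands in for a machine-learned intent classifier as the fallback. All intents, utterances, and thresholds are invented for illustration and are not drawn from SimSensei or any commercial product.

```python
RULES = {
    "hello": "Hi, I'm your virtual assistant. How can I help today?",
    "bye": "Thanks for visiting. Goodbye!",
}

INTENT_EXAMPLES = {
    "billing": ["question about my bill", "charge on my invoice"],
    "scheduling": ["book an appointment", "reschedule my meeting"],
}

INTENT_RESPONSES = {
    "billing": "I can help with billing. Which charge should we review?",
    "scheduling": "Sure — what day works best for your appointment?",
}

def score(utterance: str, example: str) -> float:
    """Crude lexical overlap between user input and an intent example."""
    a, b = set(utterance.lower().split()), set(example.lower().split())
    return len(a & b) / max(len(a | b), 1)

def respond(utterance: str) -> str:
    # 1) Rule-based layer: reliable answers for scripted inputs.
    key = utterance.lower().strip("!?. ")
    if key in RULES:
        return RULES[key]
    # 2) Learning-style fallback: pick the best-scoring known intent.
    best_intent, best = None, 0.0
    for intent, examples in INTENT_EXAMPLES.items():
        for ex in examples:
            s = score(utterance, ex)
            if s > best:
                best_intent, best = intent, s
    if best_intent and best > 0.2:
        return INTENT_RESPONSES[best_intent]
    return "Could you rephrase that for me?"

print(respond("hello"))
print(respond("I need to book an appointment for Tuesday"))
```

A production system would replace the overlap scorer with a trained intent classifier and add sentiment and nonverbal-cue inputs, but the control flow — rules first, learned fallback second — follows the same shape.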

Virtual Idols

Virtual idols represent a subset of fictional virtual humans engineered for entertainment and audience captivation, emphasizing performative appeal and narrative immersion to foster dedicated fanbases. These entities feature fully synthetic designs, typically drawing from anime aesthetics with exaggerated physical traits—such as oversized eyes, vibrant hair, and stylized proportions—to maximize visual allure and emotional resonance. Their animations often rely on motion capture technology, where human performers' movements are recorded and mapped onto digital models to achieve fluid, expressive behaviors during virtual events.

One seminal example is Hatsune Miku, launched on August 31, 2007, by Crypton Future Media using Vocaloid voice synthesis software, which enables users to compose and perform music voiced by her perpetually 16-year-old persona. Miku has headlined holographic concerts globally, including a 2023 performance in Tokyo backed by live musicians, drawing thousands of fans to interactive shows that blend pre-recorded elements with real-time visuals. Another influential case is K/DA, a virtual K-pop ensemble debuted by Riot Games in 2018, reimagining League of Legends champions Ahri, Akali, Evelynn, and Kai'Sa as pop stars. Their debut at the 2018 League of Legends World Championship Finals amassed over 20 million music video views in days, sparking widespread fan creations and broadening the franchise's appeal beyond gaming.

The viability of virtual idols hinges on robust fan economies, sustained through merchandise, virtual live streams, and blockchain-based assets like NFTs, which enable exclusive digital ownership and interactions. In 2023, estimates of the sector's global market value ranged from 1.09 to 3.67 billion USD, underscoring its emergence as a pivotal entertainment niche, with over 500 annual virtual concerts engaging audiences across more than 220 countries and platforms like Hololive boasting over 80 million subscribers. Merchandise sales further amplify revenue, while NFT trades exceeded 50,000 units that year, often tied to personalized fan experiences.

This phenomenon has progressed from rudimentary depictions to immersive 3D holographic formats, reflecting technological maturation. Pioneering efforts in the 1990s, like the CG character Kyoko Date's 1996 debut with the song "Love Communication," laid groundwork for synthetic personas. By 2007, Hatsune Miku popularized 2D designs that evolved into 2.5D holograms for live tours starting around 2010, as seen in global Miku Expo events. The 2010s marked a shift to full 3D with K/DA's 2018 AR-integrated performances, paving the way for AI-enhanced groups like the four-member MAVE: in 2023, which employs realistic 3D modeling for hyper-detailed virtual stages.

Research Areas

Computer Graphics and Animation

Research in computer graphics and animation for virtual humans emphasizes algorithms that enhance visual realism and expressive motion, focusing on rendering techniques that simulate lifelike appearance under varying conditions and animation methods that capture natural dynamics.

Rendering advancements have leveraged neural radiance fields (NeRF) to achieve photorealistic representations with dynamic lighting. Introduced in 2020, NeRF models scenes as continuous functions that output volume density and view-dependent radiance from sparse input views, enabling novel view synthesis with complex geometry and appearance variations suitable for virtual human reconstruction (a minimal compositing sketch appears at the end of this section). Extensions like D-NeRF adapt this to dynamic scenes, incorporating time-dependent deformations for non-rigid motions in human avatars. Complementing these, subsurface scattering simulations replicate light diffusion through translucent materials like skin, crucial for believable facial rendering. A seminal approach uses separable approximations to compute scattering via efficient 1D convolutions, achieving real-time performance (1.05 ms per frame on 2012 hardware) while matching multi-pass methods in quality.

Animation techniques prioritize physics-based simulations for secondary elements such as cloth and hair, which interact dynamically with body motion to convey realism. For cloth, methods employ mass-spring systems or finite element models to resolve collisions and draping, with recent data-driven enhancements integrating neural networks to predict photorealistic deformations from simulated training data. Hair animation similarly relies on strand-based physics, modeling frictional contacts and elasticity to style and animate strands responsive to environmental forces; early influential work demonstrated stable simulation of thousands of hairs under gravity and self-interaction.

Expression transfer algorithms further enable expressive facial animation by mapping source performances to targets in real time. The Face2Face method captures and reenacts expressions from RGB video at 29.9 fps, preserving identity while transferring nuances like wrinkles. GAN-based variants, such as ReenactGAN, refine this by learning boundary transfers for high-fidelity reenactment across diverse identities. Key studies from Disney Research highlight emotional facial dynamics through dynamic appearance modeling. Their 2018 framework captures per-frame changes in skin albedo (reflecting blood flow for emotional cues like blushing) and specular properties using multi-view passive capture, enabling relightable animations that integrate with production pipelines.

Evaluation of these advancements employs perceptual metrics to quantify realism, such as Mean Opinion Scores (MOS) from raters assessing lifelikeness on scales of 1-5, alongside objective measures like the structural similarity index for geometric fidelity. Current trends emphasize real-time photorealism for AR/VR applications, driven by over 200 papers since 2020 on techniques like NeRF variants for avatar generation. Surveys note accelerated progress in monocular reconstruction for clothed humans, enabling interactive rendering at 30+ fps on consumer GPUs. As of 2025, advancements include Gaussian splatting integrations for faster rendering in virtual reality (VR) environments. These visual methods integrate with AI-driven behaviors to produce cohesive virtual humans, though cognitive modeling remains a distinct research area.
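As a concrete illustration of the NeRF-style rendering described above, the sketch below implements the standard volume-rendering quadrature that composites per-sample densities and colors along a camera ray into a single pixel color. The density and color values are synthetic stand-ins rather than outputs of a trained network.

```python
import numpy as np

def composite_ray(sigmas, colors, deltas):
    """NeRF-style volume rendering quadrature along one ray.

    sigmas: (S,) volume densities at S samples along the ray
    colors: (S, 3) RGB radiance at each sample
    deltas: (S,) distances between adjacent samples
    Returns the composited RGB pixel color.
    """
    # Opacity of each sample: alpha_i = 1 - exp(-sigma_i * delta_i)
    alphas = 1.0 - np.exp(-sigmas * deltas)
    # Transmittance T_i: probability the ray reaches sample i unoccluded.
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = trans * alphas  # per-sample contribution to the pixel
    return (weights[:, None] * colors).sum(axis=0)

# Synthetic example: 64 samples with density concentrated mid-ray,
# mimicking a ray hitting a skin-like surface.
s = np.linspace(0.0, 1.0, 64)
sigmas = 50.0 * np.exp(-((s - 0.5) ** 2) / 0.001)      # sharp density bump
colors = np.tile(np.array([0.9, 0.7, 0.6]), (64, 1))   # skin-toned RGB
deltas = np.full(64, 1.0 / 64)

print(composite_ray(sigmas, colors, deltas))  # approaches the surface color
```

In an actual NeRF, `sigmas` and `colors` come from querying a learned network at sampled 3D positions and view directions; the compositing arithmetic is exactly this weighted sum.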

Artificial Intelligence and Behavior

Artificial intelligence plays a central role in enabling virtual humans to exhibit realistic cognitive and emotional behaviors, simulating cognition, emotion, and social interactions that mimic human-like responses.

Behavior modeling techniques allow virtual humans to perform autonomous actions by integrating rule-based and learning-based approaches. Finite state machines (FSMs) have been a foundational method for structuring these behaviors, defining discrete states and transitions to manage behavioral logic in simulated environments, such as navigation or interaction sequences in virtual worlds (a minimal FSM sketch appears at the end of this section). More advanced models incorporate reinforcement learning (RL), where virtual agents learn optimal actions through trial-and-error interactions with their environment, often enhanced by human feedback to refine policies for complex, adaptive behaviors like multi-agent coordination. These methods enable virtual humans to demonstrate goal-directed autonomy, such as responding dynamically to environmental cues in training simulations.

Emotional simulation in virtual humans draws from affective computing principles to generate and respond to emotions, fostering believable interpersonal dynamics. Researchers adapt Paul Ekman's Facial Action Coding System (FACS), which decomposes facial expressions into action units, to drive realistic emotional displays in virtual characters, allowing them to convey states like joy or distress through synchronized animations. This integration with affective computing enables virtual humans to recognize user emotions via multimodal inputs—such as facial cues or voice tone—and simulate empathetic responses, enhancing their role in therapeutic or social scenarios. For instance, emotional AI models process affective signals to modulate behavior, ensuring virtual humans exhibit contextually appropriate reactions that align with human emotional norms.

Dialogue systems further advance behavioral realism by powering natural conversations in virtual humans, leveraging end-to-end neural architectures for seamless, context-aware exchanges. Post-2020 developments, such as integrations of transformer-based models like BlenderBot, allow virtual humans to maintain conversational context and generate empathetic, personality-infused responses during interactions. These systems train on diverse conversational datasets to handle open-domain topics, improving coherence and emotional attunement without relying on predefined scripts.

Pioneering research from the University of Southern California's Institute for Creative Technologies (USC ICT) in the 2000s and 2010s exemplifies AI-driven behavioral simulation, particularly through virtual patients designed for PTSD therapy and clinician training. Projects like BRAVEMIND and Virtual Justina created autonomous virtual humans that simulate patient responses, using natural language processing to generate personalized, emotionally nuanced interactions that guide users through therapeutic scenarios. These systems employ decision-making algorithms to adapt dialogues and behaviors based on user input, simulating symptoms like anxiety to facilitate clinical training and treatment. Believability of such virtual humans is evaluated using Turing-like tests, which assess whether their behaviors fool human judges into perceiving them as authentic counterparts, often measuring metrics like emotional consistency and response naturalness.

Recent advancements post-2020 highlight a surge in empathy modeling for virtual humans, utilizing transformer architectures to predict and generate compassionate responses in conversational settings. Studies have explored multi-output regressions to detect empathy and distress, enabling virtual agents to mirror human emotional support dynamically.
This work builds on large language models to create virtual humans capable of social emotion elicitation, with applications in virtual reality for more immersive interactions. Evaluations show these models improve perceived empathy, as transformers excel at capturing contextual nuances in emotional expression compared to earlier rule-based systems. As of 2025, large language models (LLMs) are increasingly used for automated gesture selection in virtual humans, enhancing behavioral realism in interactive environments.
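To make the FSM approach described above concrete, the following minimal sketch models a virtual human's conversational behavior as discrete states with event-driven transitions. The states, events, and behavior descriptions are illustrative placeholders, not drawn from any specific published system.

```python
# Minimal finite state machine for a virtual human's conversational behavior.
# Real systems layer animation, speech synthesis, and perception on top of
# transition tables like this one.

TRANSITIONS = {
    ("idle", "user_approaches"): "greeting",
    ("greeting", "user_speaks"): "conversing",
    ("conversing", "user_speaks"): "conversing",
    ("conversing", "user_silent"): "prompting",
    ("prompting", "user_speaks"): "conversing",
    ("prompting", "user_leaves"): "farewell",
    ("conversing", "user_leaves"): "farewell",
    ("farewell", "done"): "idle",
}

BEHAVIORS = {
    "idle": "stand with subtle breathing animation",
    "greeting": "make eye contact, wave, say hello",
    "conversing": "face user, nod, generate dialogue response",
    "prompting": "tilt head, ask an open-ended question",
    "farewell": "smile, wave goodbye",
}

class VirtualHumanFSM:
    def __init__(self):
        self.state = "idle"

    def handle(self, event: str) -> str:
        # Unknown (state, event) pairs leave the state unchanged.
        self.state = TRANSITIONS.get((self.state, event), self.state)
        return BEHAVIORS[self.state]

fsm = VirtualHumanFSM()
for event in ["user_approaches", "user_speaks", "user_silent", "user_leaves"]:
    print(f"{event} -> {fsm.state}: {fsm.handle(event)}")
```

RL-based approaches replace the hand-written transition table with a learned policy, but the notion of discrete behavioral modes often survives as the action space.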

Human-Computer Interaction

Interaction paradigms in human-computer interaction with virtual humans emphasize multimodal inputs to foster more natural and intuitive exchanges, incorporating gestures, voice commands, and eye-tracking to simulate real-world communication. These approaches allow users to engage through combined channels, such as directing a virtual human's attention via gaze while issuing verbal instructions, enhancing the seamlessness of communication in immersive environments. Research highlights that integrating these modalities reduces cognitive load and improves task efficiency, as demonstrated in surveys of virtual human design where multimodal setups outperform single-mode interactions in user satisfaction metrics.

Response latency plays a critical role in achieving perceived naturalness, with studies showing that delays exceeding 1 second can disrupt conversational flow and rapport, leading users to rate interactions as less engaging. To mitigate this, techniques like gestural fillers—such as nodding or subtle animations during processing—have been shown to mask delays of several seconds, preserving behavioral realism and user comfort in conversational settings (a minimal latency-masking sketch appears at the end of this section). These findings underscore the need for low-latency systems to align virtual human responses with human conversational rhythms, where even brief pauses can influence perceived competence and willingness to continue dialogue.

Evaluation methods for virtual human interactions often rely on user studies measuring presence—the sense of being together in a shared space—and rapport, the emotional connection formed during exchanges. For instance, the 2014 SimSensei Kiosk system conducted controlled interviews to assess psychological distress, revealing that participants disclosed more personal information to the virtual interviewer than in self-report surveys, with rapport scores correlating positively with nonverbal synchronization like gaze and posture mirroring. These studies typically employ validated scales, such as the Social Presence Questionnaire, to quantify immersion and trust, providing benchmarks for improvements in therapeutic and social applications.

Accessibility research focuses on adaptive interfaces that tailor virtual human behaviors to diverse user needs, including adjustments for motor impairments through simplified inputs or amplified visual feedback for low-vision users. Such adaptations ensure equitable engagement by dynamically modifying interaction complexity based on user performance data. Additionally, cultural sensitivity in facial expressions is vital, as cross-cultural studies reveal variations in emotional signaling—e.g., East Asian users interpreting subtle eye movements differently from Western counterparts—necessitating parameterized models that adjust expressive dynamics to avoid miscommunication and promote inclusivity.

Recent trends, particularly from 2023 onward, center on mixed reality environments to enhance co-presence, where virtual humans appear alongside physical elements to simulate shared spaces, boosting collaboration metrics like engagement duration in team simulations. Metrics such as flow modeling, formalized in standards like the Interaction Flow Modeling Language (IFML), enable precise analysis of dialogue transitions and feedback loops, helping designers optimize usability and engagement in co-located scenarios. As of 2025, haptic interfaces for digital humans in metaverse environments further advance multimodal interaction. These advancements prioritize scalable, real-time adaptations to support broader adoption in hybrid human-virtual interactions.
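The latency-masking strategy described above can be sketched in a few lines: if the dialogue backend takes longer than a threshold, the agent emits filler behaviors while it waits. The timings, filler choices, and the simulated backend are illustrative assumptions, not measurements from any cited study.

```python
import random
import threading
import time

FILLERS = ["nod slowly", "tilt head thoughtfully", "say 'hmm, let me think...'"]
FILLER_THRESHOLD_S = 1.0  # delays beyond ~1 s read as unnatural pauses

def slow_backend(query: str, out: dict) -> None:
    """Stand-in for a dialogue model with variable response latency."""
    time.sleep(random.uniform(0.2, 3.0))
    out["reply"] = f"Here's what I found about {query!r}."

def respond_with_fillers(query: str) -> str:
    out: dict = {}
    worker = threading.Thread(target=slow_backend, args=(query, out))
    start = time.monotonic()
    worker.start()
    # Wait silently up to the threshold; beyond that, play filler
    # behaviors until the real response arrives.
    worker.join(timeout=FILLER_THRESHOLD_S)
    while worker.is_alive():
        elapsed = time.monotonic() - start
        print(f"[{elapsed:4.1f}s] filler: {random.choice(FILLERS)}")
        worker.join(timeout=1.0)  # re-check roughly once per second
    return out["reply"]

print(respond_with_fillers("virtual humans"))
```

In a deployed agent the fillers would be animation clips or backchannel utterances triggered by the same timeout logic, chosen to match the conversational context.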

Applications

Entertainment and Media

Virtual humans play a pivotal role in film and television, enabling the creation of computer-generated actors and immersive environments through virtual production techniques. The Disney+ series The Mandalorian (2019) exemplifies this with its StageCraft technology, developed by Industrial Light & Magic, which uses massive LED walls for real-time rendering of backgrounds, allowing actors to perform in photorealistic settings without green screens or physical locations. This approach captured over 50% of the first season's shots in-camera, substantially reducing post-production demands and overall production costs by streamlining workflows and eliminating extensive location scouting and set builds. Such innovations have lowered expenses for blockbusters, with virtual production cutting TV production costs by 30-40% compared to traditional methods in some cases.

In video games, virtual humans manifest as non-player characters (NPCs) that enhance narrative depth and player engagement in open-world titles. Cyberpunk 2077 (2020), developed by CD Projekt Red, features thousands of highly detailed virtual human NPCs, including companions like Judy Alvarez and Panam Palmer, who exhibit lifelike animations, dialogues, and behaviors to foster immersion in its sprawling Night City setting. These procedural and hand-crafted elements allow for dynamic interactions, making the game's dystopian society feel alive and responsive, thereby boosting player retention and emotional investment.

Media broadcasts leverage virtual humans for consistent, scalable content delivery, including news anchoring and live events. In 2018, China's Xinhua News Agency unveiled the world's first AI-powered news anchor, developed with Sogou, which simulates human-like speech, facial expressions, and gestures to report news around the clock on television and websites. Virtual humans also star in interactive live events, such as virtual concerts; rapper Travis Scott's 2020 Fortnite performance as a towering digital avatar drew 12.3 million attendees in a single showing, blending music with immersive virtual worlds to expand audience reach beyond physical venues.

The media and entertainment applications of virtual humans form the largest market segment, driving innovation in content creation and audience interaction. According to Allied Market Research, as of 2023, media and entertainment accounted for the dominant share of the global virtual humans market, valued at $43.3 billion overall, with projections indicating continued leadership amid rapid growth to $182.7 billion by 2033. According to The Business Research Company, by 2025, the broader virtual humans market is expected to expand to $51.94 billion, fueled by these sectors' demand for realistic digital personas.

Education and Training

Virtual humans serve as interactive pedagogical agents in educational settings, functioning as virtual tutors to deliver personalized instruction and facilitate immersive learning experiences. For instance, systems like AutoTutor employ conversational virtual humans to support learning through natural dialogue, adapting explanations to learner needs and promoting deeper comprehension. Similarly, Duolingo integrates AI-powered chatbots resembling virtual humans for roleplay exercises in language learning, enabling realistic conversations that simulate human interaction. In historical education, AI-generated virtual humans recreate historical figures, allowing students to engage in dialogues grounded in primary sources for authentic reenactments and exploration.

In training contexts, virtual humans enable scenario-based simulations for skill development, particularly in high-stakes environments. The U.S. military has utilized lifelike virtual humans as mentors in tactical and cultural training programs since the early 2010s, where they provide real-time feedback during role-playing interactions with simulated foreign populations. In corporate settings, platforms like Talespin's virtual human technology support soft-skills training, allowing employees to practice communication and leadership in safe, repeatable virtual scenarios without real-world consequences.

These applications offer key benefits, including personalized instruction that adjusts to individual learners and safe environments for risk-free practice; meta-analyses of virtual human interventions in education confirm small but consistent positive effects on learning outcomes, attributing improvements to increased engagement and social presence. By 2023, expansions in virtual human applications included VR-based training programs in schools, where students interact with simulated characters to build emotional understanding and prosocial behaviors through immersive perspective-taking.

Healthcare

Virtual humans have emerged as valuable tools in healthcare, particularly for therapeutic applications in mental health support. One prominent example is the Ellie system, developed by the University of Southern California's Institute for Creative Technologies in the early 2010s, which functions as a virtual therapist to assist with conditions like post-traumatic stress disorder (PTSD) and depression. Ellie employs advanced facial recognition and voice analysis software to detect subtle micro-expressions and emotional cues, enabling non-judgmental conversations that encourage patients, especially military veterans, to disclose sensitive information more openly than they might with human clinicians. This approach has demonstrated effectiveness in building rapport and eliciting disclosures, with patients reporting greater comfort due to Ellie's lack of bias or fatigue.

In diagnostics and medical training, virtual humans serve as simulated patients to enhance clinical skills without risking real individuals. These AI-driven avatars replicate diverse medical scenarios, allowing students to practice history-taking, physical examinations, and clinical reasoning in interactive environments. For instance, platforms like MedSimAI, introduced in the 2020s, provide voice and chat-based simulations that offer immediate feedback on communication and diagnostic accuracy, improving learners' performance by up to 17% in controlled settings. Similarly, AI-powered anatomy tutors use virtual humans to guide dissections and visualizations, fostering deeper conceptual understanding through adaptive, personalized sessions that adjust to the user's pace and errors. Such tools have become integral to medical curricula, enabling scalable, 24/7 access to high-fidelity training.

Virtual companions, often embodied as relatable avatars, play a key role in patient engagement for chronic illness management by providing ongoing emotional and informational support. These systems help patients adhere to treatment plans, monitor symptoms, and combat loneliness through conversational interactions tailored to conditions like diabetes or heart disease. Systematic reviews of AI-based chatbots and virtual agents indicate they improve self-management and reduce hospital readmissions by enhancing daily engagement. In trials involving older adults with chronic conditions, virtual companions have led to significant reductions in loneliness, with effect sizes around 0.20 to 0.30, comparable to human interactions, thereby alleviating the burdens of long-term illness.

The integration of virtual humans into telehealth has accelerated since 2023, driven by expanded market opportunities and technological advancements. Post-pandemic regulatory flexibilities have facilitated their use in remote consultations, where avatars assist in triage, follow-ups, and behavioral coaching, contributing to the AI-in-telehealth sector's growth from $4.22 billion in 2024 at a projected CAGR of 36.4% through the decade. This expansion underscores virtual humans' role in making healthcare more accessible, particularly for underserved populations, amid a broader telehealth market valued at over $123 billion in 2024.

Commercial Services

Virtual humans have become integral to customer service in e-commerce, where they function as digital agents capable of handling inquiries, providing personalized recommendations, and facilitating transactions around the clock. For instance, Alibaba Cloud's Digital Human solution deploys realistic avatars to offer shopping guidance and support in online retail environments, enhancing user experience through natural language interactions and visual engagement. In China, AI-powered virtual livestreamers, built with technologies from Baidu and DeepSeek, manage sales queries for products ranging from consumer goods to electronics, operating 24/7 to outperform human counterparts in efficiency and availability.

In marketing, virtual humans serve as endorsers in personalized campaigns, leveraging their customizable appearances and consistent messaging to drive consumer interest. Studies from 2023 demonstrate that virtual influencers achieve average engagement rates of 5.9% in campaigns, approximately three times higher than the 1.9% rate for human influencers, attributing this to their novelty and targeted content delivery. This heightened interaction fosters brand loyalty and conversion, as virtual endorsers can simulate endorsements without the logistical challenges of human celebrities.

Within finance and retail, virtual humans act as advisors to streamline banking and shopping processes. HSBC introduced its virtual assistant Amy in 2019, an AI-driven chatbot that resolves account queries, provides transaction details, and guides users through services, evolving to incorporate more advanced conversational capabilities over time. In retail settings, companies like ZOZO in Japan employ digital humans as in-store guides via apps and virtual platforms, offering personalized styling advice and product navigation to reduce returns and boost satisfaction since 2021.

The adoption of virtual humans in commercial services is poised for substantial economic impact; according to Precedence Research, the global virtual humans market is valued at USD 5.12 billion in 2025 and projected to reach USD 14.83 billion by 2034, reflecting their role in cost savings and scalability for businesses. This growth underscores their potential to capture a meaningful share of customer-facing operations in retail, finance, and e-commerce by enhancing efficiency and personalization.

Challenges and Future Directions

Technical Challenges

One major technical challenge in developing virtual humans is achieving sufficient realism to avoid the uncanny valley effect, where near-humanlike appearances and behaviors evoke discomfort, anxiety, or disgust in users due to subtle imperfections in perceived humanness, attractiveness, and uncanniness. This effect is particularly pronounced in embodied conversational agents, where mismatches between expected human-like traits—such as facial expressions congruent with emotional scenarios—and actual rendering lead to negative appraisals like perceived eeriness. Inconsistencies become evident in prolonged interactions, as current models struggle with sustained behavioral coherence; however, most empirical studies are limited to short sessions averaging 11 minutes, highlighting a gap in understanding long-term expression fatigue and adaptation.

Scalability poses significant hurdles due to the high computational demands of real-time rendering for virtual humans, which often requires processing complex models, animations, and lighting at 30-60 frames per second to maintain immersion in interactive environments (a frame-budget sketch appears at the end of this section). These demands lead to challenges like excessive battery drain, device overheating, and latency in mobile and VR applications, restricting access to high-end hardware and limiting deployment in resource-constrained settings such as portable devices. For instance, rendering numerous virtual humans in dynamic scenes exacerbates these issues, as efficient algorithms are needed to balance visual fidelity with performance without compromising immersion.

The reliance on large datasets for training virtual human models introduces biases stemming from underrepresentation of diverse demographics, particularly in facial recognition and expression datasets that skew toward certain racial, gender, and ethnic groups. This underrepresentation results in inaccurate modeling for minority groups, such as poorer performance in recognizing emotions or attributes for darker-skinned or female individuals, perpetuating inequities in virtual human realism and interaction quality. Addressing this requires expansive, balanced datasets—including synthetic generations of over a million diverse images with varied poses—to mitigate biases and ensure equitable training, though creating such resources demands substantial computational and ethical oversight.

Integration challenges arise from synchronizing graphics, behaviors, and audio outputs in dynamic environments, where coordination of modalities like speech, eye gaze, facial expressions, and posture is essential for natural interactions but often disrupted by processing delays. For example, aligning lip movements with audio while adapting AI-driven responses to user inputs in varying scenarios requires multimodal fusion techniques, yet current systems struggle with seamless vision-based, sensor-based, and audio-based integration, leading to unnatural or disjointed virtual human performances. These hurdles are compounded in interactive applications, where brief asynchronies can break immersion, necessitating advanced algorithms for low-latency harmony across components.
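As a back-of-the-envelope illustration of the rendering budget mentioned above, the sketch below checks whether a set of per-frame pipeline stages fits within the frame time implied by a target frame rate. The stage names and millisecond costs are invented for illustration; real profiles vary widely by hardware and scene.

```python
# Hypothetical per-frame costs (milliseconds) for one virtual human's
# pipeline stages on some target device.
STAGE_COSTS_MS = {
    "skeletal_animation": 2.1,
    "blend_shape_update": 1.4,
    "cloth_and_hair_sim": 4.8,
    "skin_shading": 5.2,
    "audio_lipsync": 1.0,
}

def check_frame_budget(target_fps: float, stages: dict) -> None:
    budget_ms = 1000.0 / target_fps      # time available per frame
    total_ms = sum(stages.values())
    status = "OK" if total_ms <= budget_ms else "OVER BUDGET"
    print(f"target {target_fps:.0f} fps -> budget {budget_ms:.1f} ms/frame")
    print(f"pipeline total {total_ms:.1f} ms -> {status}")
    if total_ms > budget_ms:
        # Shortfall must be recovered, e.g., via level-of-detail models
        # or cheaper shading approximations.
        print(f"need to cut {total_ms - budget_ms:.1f} ms per frame")

check_frame_budget(60.0, STAGE_COSTS_MS)  # 16.7 ms budget: fits
check_frame_budget(90.0, STAGE_COSTS_MS)  # 11.1 ms budget: over budget
```

The same arithmetic explains why a character that runs comfortably at 60 fps on a desktop GPU can exceed the budget of a 90 fps standalone headset, forcing fidelity-performance trade-offs.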

Ethical and Societal Issues

Virtual humans, as interactive digital representations, raise significant privacy risks through the extensive data collection inherent in interactions. These systems often capture physiological and behavioral data, such as eye movements, gait patterns, and emotional responses, to enable realistic embodiment, potentially creating detailed profiles that function as "kinematic fingerprints" with identification accuracies up to 63.55% for pointing gestures and 49.67% for walking. This facilitates surveillance-like profiling, where virtual assistants or avatars personalize experiences in ways that influence behavior subconsciously, such as through targeted persuasion or ambient adjustments, often without full user awareness. Compliance with regulations like the EU's General Data Protection Regulation (GDPR) is challenging, as virtual human interactions may involve processing biometric data under Article 9, requiring explicit consent and data protection impact assessments, yet vague policy language on many platforms often fails to specify purposes clearly.

The potential for identity deception through virtual humans has intensified with the rise of deepfake technologies, which generate hyper-realistic synthetic media that can misrepresent individuals for harmful purposes. Deepfakes enable the creation of fabricated videos or audio depicting real people in false scenarios, such as political figures delivering misleading speeches, thereby spreading disinformation and eroding public trust in media and democratic processes. This misuse extends to virtual humans by allowing unauthorized digital replicas that blur the line between authentic and artificial identities, often targeting vulnerable groups like women and minors with nonconsensual intimate content. For example, the Internet Watch Foundation documented 210 web pages with AI-generated child sexual abuse material in the first half of 2025, marking a 400% increase from prior periods. In response, 2025 has seen emerging regulations, including the U.S. federal TAKE IT DOWN Act, enacted on May 19, which criminalizes the distribution of nonconsensual intimate deepfakes with penalties up to two years imprisonment and mandates platforms to remove such content within 48 hours; as of September 2025, 47 states and the District of Columbia have enacted laws specifically addressing sexual deepfakes, in addition to broader state regulations on nonconsensual intimate imagery.

Algorithmic biases in virtual humans perpetuate prejudices in appearance and behavior, undermining inclusivity and reinforcing societal inequities. For instance, avatar designs and interaction models often draw from datasets skewed toward certain demographics, leading to racial biases where non-white representations exhibit glitches or less fluid animations in immersive environments. A 2023 study using virtual reality found that while embodying an other-race avatar reduced implicit racial bias in Caucasian participants—measured by the Implicit Association Test (IAT) with a significant decrease (p = 0.04, d = 0.67)—it did not alter neurophysiological indicators of stereotyping, such as the N400 response, highlighting persistent cognitive prejudices in virtual embodiment. These biases can marginalize underrepresented groups, as avatars failing to accurately reflect diverse ethnicities or abilities limit equitable access to virtual spaces and exacerbate exclusion in applications like social VR.

Over-reliance on virtual humans as companions may have profound societal effects, potentially diminishing human relationships and social skills.
Interactions with AI-driven virtual entities, such as chatbots or avatars, can foster emotional bonds but risk social deskilling, where users develop suboptimal communication habits and reduced motivation to engage in complex human interactions. This over-dependence may replace genuine connections, leading to decreased well-being as individuals prioritize simulated interactions that lack reciprocal emotional depth, with studies noting potential erosion of moral skills and emotional reciprocity. For neurodivergent users, however, such companions offer benefits like social upskilling, though the broader societal shift toward AI-mediated relationships raises concerns about long-term social cohesion amid rising loneliness epidemics.

Future Trends

Advancements in artificial intelligence are poised to enable fully autonomous virtual humans through agentic AI systems, which integrate generative models with planning and reasoning capabilities to operate independently in dynamic environments. These systems, often powered by post-2025 multimodal large language models (LLMs), allow virtual humans to process text, vision, and audio inputs simultaneously for more natural interactions, such as real-time adaptation in conversations or tasks without human oversight. For instance, agentic AI facilitates "virtual coworkers" that plan, reason, and execute actions, enhancing the autonomy of virtual humans in professional simulations.

Integration with the metaverse is accelerating the development of embodied agents—virtual humans with physical-like presence in persistent digital worlds—enabling seamless social and collaborative experiences. These agents leverage behavioral foundation models to perform zero-shot tasks, such as navigating virtual spaces or interacting with multiple users, fostering immersive environments for socializing and work. The virtual humans market, driven by metaverse adoption, is projected to grow from $51.94 billion in 2025 to $252.61 billion by 2029, at a compound annual growth rate (CAGR) of 48.5%, reflecting increased demand for AI-driven avatars in blended realities.

Hybrid realities are emerging through augmented reality (AR) overlays that position virtual humans directly in physical spaces, providing contextual daily assistance like personalized guidance during routines or tasks. This trend blurs the boundaries between digital and physical worlds, with AR-enabled virtual humans offering real-time support in areas such as navigation, education, and healthcare via lightweight wearables. By 2025-2030, AR is expected to become an everyday tool, enhancing user experiences with interactive virtual companions that adapt to environmental cues for practical aid.

Sustainability efforts in virtual human technologies focus on energy-efficient rendering techniques to mitigate the environmental costs of high-compute graphics and AI processing. Innovations like the "You Only Render Once" (YORO) framework reduce power consumption in mobile VR by approximately 27% through optimized monocular-to-binocular image generation, minimizing redundant computations while maintaining visual fidelity. Such approaches address the broader energy footprint of AI-driven rendering, promoting greener deployment in AR/VR applications and supporting scalable, eco-friendly virtual human ecosystems.

    [PDF] FaceDirector: Continuous Control of Facial Performance in Video
    We present a method to continuously blend between mul- tiple facial performances of an actor, which can contain dif- ferent facial expressions or emotional ...
  32. [32]
  33. [33]
    Detection of real-time deep fakes and face forgery in video ... - NIH
    Aug 29, 2024 · Deep fake replaces a human face in a picture or video with a digitally created one. Deep fakes refer to realistic fake images that could be ...Missing: virtual 2020s
  34. [34]
  35. [35]
  36. [36]
    Virtual Human Therapeutics Lab - Institute for Creative Technologies
    The Virtual Human Therapeutics Lab uses embodied conversational AI agents inside evidence-based mHealth software applications to demonstrate the role Virtual ...
  37. [37]
    None
    ### Summary of Method for Reconstructing Humans Using Photogrammetry
  38. [38]
    What's the best 3D face scanner in 2024? - Artec 3D
    Feb 20, 2024 · With LIDAR and apps like Scaniverse or Polycam, you can digitize anything from a household object to the human face in high resolution – albeit ...
  39. [39]
    AI Voice Cloning - Respeecher
    Utilize our AI voice cloning technology to replicate any voice for diverse media projects, from blockbuster Hollywood films to immersive video games.Missing: virtual humans photogrammetry LiDAR
  40. [40]
    [PDF] Reconstructing Personalized Anatomical Models for Physics-based ...
    We present a method to create personalized anatomical models ready for physics-based animation, using only a set of 3D surface scans.
  41. [41]
    How ABBA Voyage was made | Ingenia
    ABBA Voyage was made using a demountable arena, motion capture of ABBA, and ILM visual effects, with the show and venue created in tandem.
  42. [42]
  43. [43]
    Virtual Humans Market Size to Hit USD 14.83 Billion by 2034
    Sep 10, 2025 · The global virtual humans market size was evaluated at USD 4.55 billion in 2024 and is predicted to hit around USD 14.83 billion by 2034, ...
  44. [44]
    Digital clones of real models are revolutionizing fashion advertising
    May 7, 2025 · Digital clones are transforming fast-fashion marketing. Always available, ageless and adaptable to any setting, these virtual figures enable brands to create ...<|control11|><|separator|>
  45. [45]
    Digital identity in virtual worlds | Shaping Europe's digital future
    Mar 31, 2025 · In virtual worlds, people may use avatars that display different attributes depending on the context of that virtual environment, either for professional or ...
  46. [46]
    Digital Humans For Virtual Assistant Services - Meegle
    Digital humans are not just chatbots with a face; they are sophisticated AI-driven entities designed to simulate human interaction. Here are the key features ...How Digital Humans Are... · Industry-Specific... · Challenges And Solutions In...
  47. [47]
    Virtual Human - artificial human being for conversational purposes
    Virtual Humans are automated agents that converse, understand, reason and exhibit emotions. They possess a three-dimensional body and perform tasks through ...Typical Usage · Background · Virtual Human Pages
  48. [48]
    [PDF] SimSensei Kiosk: A Virtual Human Interviewer for Healthcare ...
    May 5, 2014 · We present SimSensei Kiosk, an implemented virtual human interviewer designed to create an engaging face-to-face inter- action where the user ...Missing: screening | Show results with:screening
  49. [49]
    SimSensei kiosk: a virtual human interviewer for healthcare decision ...
    We present SimSensei Kiosk, an implemented virtual human interviewer designed to create an engaging face-to-face interaction where the user feels comfortable ...Missing: screening | Show results with:screening
  50. [50]
    New Soul Machines Digital Workforce | Scale Personalization
    Powered by our Experiential AI™, lifelike Digital Workers see, listen, react, remember and even empathize more like a real human, fostering trust and engagement ...Missing: receptionists | Show results with:receptionists
  51. [51]
    Digital Avatar Market Size, Share & Growth Report, 2030
    The global digital avatar market size was estimated at USD 18.19 billion in 2023 and is anticipated to reach USD 270.61 billion by 2030, growing at a CAGR ...
  52. [52]
    The Virtual Idol: Producing and Consuming Digital Femininity
    Virtual idols are digital characters created using 3D modeling, artificial intelligence, and motion capture technology (Sookkaew & Saephoo, 2021). Their ...
  53. [53]
    How Collectivism and Virtual Idol Characteristics Influence Purchase ...
    Virtual idols are digital characters created using 3D modeling, artificial intelligence, and motion capture technology (Sookkaew & Saephoo, 2021). Their essence ...
  54. [54]
    Forever 16: Fans of Japan's virtual singing idol Hatsune Miku ...
    Sep 5, 2023 · Legions of fans are celebrating the 16th anniversary of Miku's August 31, 2007, release with events including a virtual exhibition and songwriting.
  55. [55]
    League of Legends' virtual K-pop band is helping the game attract a ...
    Nov 11, 2018 · The K/DA music video for “POP/STARS” reimagines four of the murderous champions of League of Legends as internationally acclaimed artists back ...
  56. [56]
    Virtual Idol Market Size, Share & Report [2033]
    The virtual idol market reached an estimated global value ranging between 1.09 billion and 3.67 billion USD in 2023, depending on industry scope. Major virtual ...
  57. [57]
    K-pop AI & Virtual Idols: 30 Years from Miku to Girl Groups
    Sep 5, 2024 · Virtual idols have evolved over the past thirty years, transitioning from early experimentation to becoming a well-established industry that generates ...Missing: examples | Show results with:examples
  58. [58]
    NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis
    - **Title:** NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis
  59. [59]
    [PDF] D-NeRF: Neural Radiance Fields for Dynamic Scenes
    D-NeRF extends neural radiance fields to dynamic scenes, synthesizing novel views of objects under rigid and non-rigid motions from a single camera.
  60. [60]
    [PDF] Separable Subsurface Scattering - Graphics and Imaging Lab
    Jul 12, 2012 · We propose a technique to simulate subsurface scattering for human skin that runs in just over 1 millisecond per frame, making it a ...
  61. [61]
    [PDF] Inverse Dynamic Hair Modeling with Frictional Contact
    gravity and frictional contacts, including hair self ...
  62. [62]
    None
    **Title, Authors, and Year for Face2Face Real-Time Facial Reenactment**
  63. [63]
    [PDF] ReenactGAN: Learning to Reenact Faces via Boundary Transfer
    The proposed ReenactGAN is capable of manipulating a target face in a video by transferring movements and facial expressions from an arbitrary person's video.
  64. [64]
    [PDF] Practical Dynamic Facial Appearance Modeling and Acquisition
    We present a method to acquire dynamic properties of facial skin appearance, including dynamic diffuse albedo encoding blood flow, dynamic specular.
  65. [65]
    A Survey on Realistic Virtual Human Animations: Definitions ...
    Apr 30, 2024 · Virtual Humans (VHs) are digital representations of humans and present an ever-growing domains in computer graphics and computer vision. Due ...<|control11|><|separator|>
  66. [66]
    [PDF] Simulating Human Behavior in 3D Scenes - CVF Open Access
    Early approaches rely on rule-based approaches like finite-state machines [20] and behavior trees [21]. They provide a brute force way of manually crafting ...
  67. [67]
    Agent behavior modeling method based on reinforcement learning ...
    Jun 23, 2023 · In this study, a method for modeling the agent behavior based on reinforcement learning and human in the loop is proposed to improve the ability and efficiency ...
  68. [68]
    [PDF] Learnable Behavioural Model for Autonomous Virtual Agents
    May 12, 2006 · ABSTRACT. In this paper, we propose a new integration approach for simulation and behaviour in the learning context that is able to.
  69. [69]
  70. [70]
    Developing conversational Virtual Humans for social emotion ...
    Jul 15, 2024 · Affective Computing (AfC) explores the capacity for eliciting, recognizing, comprehending, and appropriately responding to human emotions. This ...
  71. [71]
    Affective computing in virtual reality: emotion recognition from brain ...
    Sep 12, 2018 · This study, on the other hand, aims to develop an emotion recognition system for affective states evoked through Immersive Virtual Environments.Material And Methods · Stimulus Elicitation · Signal Processing<|separator|>
  72. [72]
    [2208.03188] BlenderBot 3: a deployed conversational agent ... - arXiv
    Aug 5, 2022 · We present BlenderBot 3, a 175B parameter dialogue model capable of open-domain conversation with access to the internet and a long-term memory.
  73. [73]
    BlenderBot 3: An AI Chatbot That Improves Through Conversation
    Aug 5, 2022 · Our new AI research chatbot is designed to improve its conversational skills and safety through feedback from people who use it.
  74. [74]
    BRAVEMIND - USC Institute for Creative Technologies
    BRAVEMIND, ICT's virtual reality (VR) exposure therapy system has been shown to produce a meaningful reduction in PTSD symptoms in multiple clinical trials.
  75. [75]
    [PDF] Virtual Justina: A PTSD Virtual Patient for Clinical Classroom Training
    The potential of using computer generated virtual humans as standardized virtual patients (VPs) for use in clinical assessments, interviewing and diagnosis.<|separator|>
  76. [76]
    Virtual Listener: A Turing-like test for behavioral believability
    In this work, we present an approach to modeling a perceptive 3D virtual listener with emotional capabilities. The virtual character has a 3D face that performs ...
  77. [77]
    Empathy and Distress Prediction using Transformer Multi-output ...
    Empathy and Distress Prediction using Transformer Multi-output Regression and Emotion Analysis with an Ensemble of Supervised and Zero-Shot Learning Models.
  78. [78]
    Empathy and Distress Detection using Ensembles of Transformer ...
    This paper presents our approach for the WASSA 2023 Empathy, Emotion and Personality Shared Task. Empathy and distress are human feelings that are implicitly ...Missing: virtual | Show results with:virtual
  79. [79]
    Can Gestural Filler Reduce User-Perceived Latency in Conversation ...
    The results showed that the gestural fillers mitigate user-perceived latency and affect the willingness, impression, competence, and discomfort in conversations ...
  80. [80]
    Building Culturally-Valid Dynamic Facial Expressions for a ...
    Oct 19, 2020 · Our results demonstrate the power of using a culturally sensitive perception-based psychological approach to develop psychologically impactful ...
  81. [81]
    Exploring the Effects of the Virtual Human with Physicality on Co ...
    We report on the implemented results of our virtual humans and experimental results on co-presence and emotional response. KEYWORDS. Virtual human, Mixed- ...<|control11|><|separator|>
  82. [82]
    See how 'The Mandalorian' used Unreal Engine for its real-time ...
    Feb 21, 2020 · Plus, it no doubt saves on post-production costs -- according to ILM, the technique was used in fully 50 percent of The Mandalorian's shots.Missing: savings | Show results with:savings
  83. [83]
    This is the Way: How Innovative Technology Immersed Us in the ...
    May 15, 2020 · But beyond the time and cost savings of eliminating the step of meticulously replacing the green screens, StageCraft allows for better lighting ...Missing: percentage | Show results with:percentage<|separator|>
  84. [84]
  85. [85]
    Faces of Night City: A closer look at Cyberpunk 2077's weird ...
    Dec 7, 2020 · I grabbed my virtual camera and took some portraits of the people I saw, as well as a few characters V meets during the story. Here are some of my favourites.
  86. [86]
    China state media Xinhua unveils AI news anchor | CNN Business
    Nov 9, 2018 · Developed by Xinhua and Chinese search engine company Sogou, the anchor was designed to simulate human voice, facial expressions and gestures.
  87. [87]
    Big-Name Artists Use Virtual Concerts to Connect with Fans
    May 4, 2020 · John Legend, Tinashe, and Travis Scott give virtual concerts to connect with fans. They're performing as digital avatars in fantastical worlds.
  88. [88]
    Virtual Humans Market Size, Share | Industry Forecast - 2033
    The global virtual humans market size was valued at $43.3 billion in 2023, and is projected to reach $1827.65 billion by 2033, growing at a CAGR of 45.1%
  89. [89]
    Virtual Humans Global Market Report 2025
    DisponibleThe virtual humans market size has grown exponentially in recent years. It will grow from $34.88 billion in 2024 to $51.94 billion in 2025 at a compound annual ...
  90. [90]
    Learning with virtual humans: Introduction to the special issue
    Mar 2, 2021 · Virtual humans are embodied agents with a human-like appearance. In educational contexts, virtual humans are often designed to help people learn ...Missing: definition | Show results with:definition
  91. [91]
    Get to know the AI behind every Video Call with Lily - Duolingo Blog
    Apr 22, 2025 · The Duolingo app lets you have realistic video conversations with an AI-powered chatbot. Find out from one of our experts how it all works!Duolingo Uses AI · CEFR level · Sarcastic emo teenage girl · Parker Henry
  92. [92]
    Speaking with the Past: Constructing AI-Generated Historical ... - MDPI
    Recent advances in generative artificial intelligence (AI) have enabled the creation of AI-generated characters modeled after historical figures, ...
  93. [93]
    VR tool uses virtual humans for soft skills training | HR Dive
    Mar 5, 2019 · Virtual reality meets soft skills training in a new “virtual human technology” tool announced last week by business solutions firm Talespin.
  94. [94]
    Facts and Stats That Reveal The Power Of eLearning [Infographic]
    2. The Research Institute of America found that eLearning increases retention rates 25% to 60% while retention rates of face-to-face training are very ...
  95. [95]
    An empathetic VR-based learning approach to improving EFL ...
    This study proposed an empathetic VR-based learning (E-VRL) approach to provide learners with access to authentic experiences and further promote learner ...
  96. [96]
    How computer-assisted therapy helps patients and practitioners
    Jan 9, 2019 · Ellie's programming uses cutting-edge facial, body and voice recognition software that can read a person's subtle emotional and behavioral ...Missing: detection micro-
  97. [97]
    Computerized 'Ellie' has just enough humanity to aid in therapy work
    Apr 3, 2015 · A webcam and microphone allow the computer's software to “see” and “hear” her conversation partner's response. The feedback guides the direction ...Missing: detection micro-
  98. [98]
    Coping with AI Anxiety - USC Viterbi | Magazine
    Ellie is a “virtual therapist” designed by USC's Institute for Creative Technologies (ICT) to treat military veterans suffering from post-traumatic stress ...Missing: micro- | Show results with:micro-
  99. [99]
    Medical students use AI to practice communication skills
    Mar 25, 2025 · MedSimAI, an AI virtual patient, simulates doctor-patient interactions with chat and voice, providing immediate feedback and unlimited practice ...
  100. [100]
    Artificial Intelligence in Healthcare Simulation | HealthySimulation.com
    May 21, 2025 · Virtual patients are an AI based computer program which can show symptoms and are able to respond to learners' interventions in real time. The ...
  101. [101]
    Medical Students Are Loving This Virtual Patient Simulator - Memrizz
    Feb 28, 2025 · The simulator provides a safe, 24/7 environment, improves diagnostic accuracy by 17%, is cost-effective, and offers personalized learning with ...
  102. [102]
    Artificial Intelligence-Based Conversational Agents for Chronic ...
    Sep 14, 2020 · The goal of this systematic literature review is to review the characteristics, health care conditions, and AI architectures of AI-based conversational agentsMissing: companions illness
  103. [103]
    Artificial Intelligence-Based Chatbots in Chronic Disease Management
    Mar 22, 2025 · This systematic review aimed to examine the traits, medical conditions, and AI architectures of conversational agents that are based on artificial intelligence
  104. [104]
    AI Companions Reduce Loneliness | Journal of Consumer Research
    Abstract. Chatbots are now able to engage in sophisticated conversations with consumers in the domain of relationships, providing a potential coping soluti.
  105. [105]
    AI in Telehealth & Telemedicine Market Growth, Drivers, and ...
    Global AI in telehealth & telemedicine market valued at $2.85B in 2023, reached $4.22B in 2024, and is projected to grow at a robust 36.4% CAGR, ...
  106. [106]
    Telehealth Market Size, Share, Trends | Industry Report 2030
    The global telehealth market size was estimated at USD 123.26 billion in 2024 and is projected to reach USD 455.27 billion by 2030, growing at a CAGR of 24.68% ...Missing: humans | Show results with:humans
  107. [107]
  108. [108]
    Chinese 'Virtual Human' Salespeople Are Outperforming Their Real ...
    Aug 20, 2025 · Built using AI technology from Baidu and DeepSeek, the virtual livestreamers sell everything from wet wipes to printers 24 hours a day, ...Missing: 2020s | Show results with:2020s
  109. [109]
    The Rise of Virtual Influencers to Disrupt the Influencer Marketing ...
    Jan 24, 2024 · The average engagement rate for virtual influencer campaigns in 2023 was 5.9%. This is 3x higher than the average engagement rate for real ...
  110. [110]
    Retail sector: Understanding the rise and risks of the virtual influencer
    Jun 28, 2024 · The 2023 survey mentioned above shows the average engagement rate for virtual influencer campaigns was 5.9%, three times higher than the 1.9% ...Missing: percentage study
  111. [111]
    Banks Are Promoting 'Female' Chatbots To Help Customers, Raising ...
    Feb 27, 2019 · "Mia" is the virtual assistant introduced by Australian digital bank Ubank in February 2019. ... HSBC's virtual assistant is called "Amy.".
  112. [112]
    Digital humans in retail - fxguide
    Feb 9, 2021 · The VFX and games industry have long sort to solve digital humans for everything from stunt double replacements to virtual characters.
  113. [113]
    The uncanny valley effect in embodied conversational agents
    Introduction: The Uncanny Valley Effect (UVE) describes the discomfort users feel when interacting with Embodied Conversational Agents (ECAs) that display ...
  114. [114]
  115. [115]
    Real-Time Rendering Optimization for XR: 7 Challenges and Tips
    Aug 15, 2024 · Real-time rendering creates 3D visuals in XR. Challenges include battery drain, overheating, latency, and complex scenes. Solutions include ...
  116. [116]
    SMU Lab Tackles Bias in AI Facial Recognition Systems
    Oct 19, 2023 · Researchers at SMU are creating large datasets to address bias and fairness issues found in facial recognition (FR) technology.
  117. [117]
    Unveiling Biases in Human Characteristics Representation
    Dec 9, 2024 · However, the Uncanny Valley (UV) theory suggests that as Virtual Humans (VHs) become more realistic, they may evoke discomfort. This phenomenon ...Missing: underrepresentation | Show results with:underrepresentation<|separator|>
  118. [118]
    The interaction design of 3D virtual humans: A survey - ScienceDirect
    This paper hopes to help researchers quickly understand the characteristics of various modal interactions in the process of designing intelligent virtual humans ...
  119. [119]
    Virtual Reality Data and Its Privacy Regulatory Challenges: A Call to ...
    Apr 14, 2025 · This Note argues that virtual reality exposes a more fundamental problem of the GDPR: the futility of text-based informed consent in the context of virtual ...
  120. [120]
    [PDF] The impact of the General Data Protection Regulation (GDPR) on ...
    This study addresses the relationship between the General Data. Protection Regulation (GDPR) and artificial intelligence (AI). After.
  121. [121]
    Reckoning With the Rise of Deepfakes - The Regulatory Review
    Jun 14, 2025 · Before the TAKE IT DOWN Act, states individually regulated AI-generated intimate imagery. As of 2025, all 50 states and Washington, D.C. have ...Missing: virtual misinformation
  122. [122]
    Black immersive virtuality: Racialized experiences of avatar ...
    Embodied glitches demonstrate a novel form of how racial bias can be experienced when avatars are brought into immersive contexts. •. Greater discrepancy ...
  123. [123]
    Behavioral and neurophysiological indices of the racial bias ...
    Sep 28, 2023 · Immersive virtual reality (IVR) can reduce implicit racial bias through the feeling of owning (embodying) a virtual body of a different “race”.
  124. [124]
    The impacts of companion AI on human relationships: risks, benefits ...
    Apr 16, 2025 · This paper intends to make two contributions to the literature and discourse on impacts of companion AI: a categorisation of the primary risks and benefits.
  125. [125]
    McKinsey technology trends outlook 2025
    Jul 22, 2025 · Agentic AI combines the flexibility and generality of AI foundation models with the ability to act in the world by creating “virtual coworkers” ...
  126. [126]
    The agentic organization: A new operating model for AI | McKinsey
    Sep 26, 2025 · Companies are moving toward a new paradigm of humans working together with virtual and physical AI agents to create value.
  127. [127]
    Gen AI Outlook: Key trends shaping its development in 2025
    Autonomous agents are taking AI technology to a new level, by performing multiple tasks, making decisions, and engaging with their surroundings independently, ...Domain-Specific Models · Improving Model Training... · Multimodal Gen AiMissing: enhancements | Show results with:enhancements
  128. [128]
    Virtual Humans Market Size, Competitors & Forecast to 2029
    ### Key Projections for Virtual Humans Market Size (2025-2029)
  129. [129]
    [PDF] 2025 tech trends report • 18th edition - metaverse & new realities
    In the metaverse era, companies will reinvent their business models to blend physical and digital offerings, integrating virtual goods, services, and immersive ...
  130. [130]
    The Future of Augmented Reality: A Vision for 2025-2030 - Emerline
    Rating 5.0 (15) Jun 21, 2025 · Explore how Augmented Reality is evolving into an everyday tool. This vision for 2025-2030 details AR's market trends, new devices, ...
  131. [131]
    The Future of Augmented Reality in 2025: Trends - Reydar
    There will be virtual assistants and AI chatbots that support you while shopping and offer personalised recommendations when both shopping online and in a ...Missing: hybrid | Show results with:hybrid
  132. [132]
  133. [133]
  134. [134]
    [PDF] You Only Render Once: Enhancing Energy and Computation ...
    Jun 27, 2025 · Mobile Virtual Reality (VR) is essential for achieving convenient and immersive human-computer interaction and realizing emerging applications ...Missing: demands | Show results with:demands