
Computer-mediated reality

Computer-mediated reality refers to the use of computing technologies to add information to, subtract elements from, or otherwise manipulate an individual's sensory perception of the physical world, often through wearable devices or displays. Coined by wearable computing pioneer Steve Mann in the 1990s, the concept combines augmented reality, which overlays digital content onto the real environment, with diminished reality, which selectively filters out aspects of reality to alter perception. It serves as an umbrella term encompassing related technologies like virtual reality (VR), which creates fully immersive simulated environments, and mixed reality (MR), which seamlessly blends physical and digital elements.

The foundational work on computer-mediated reality emerged from advancements in wearable computing and humanistic intelligence, where human cognition integrates with computer processing in a feedback loop to mediate sensory input. Mann's early experiments in the 1980s and 1990s involved head-mounted displays and cameras that enabled real-time visual alterations, such as enhancing low-light vision or removing distracting visual noise, laying the groundwork for broader applications. By the early 2000s, the concept had expanded to include multi-sensory mediation, potentially affecting touch, hearing, or even olfaction, though visual mediation remains predominant.

In contemporary contexts, computer-mediated reality technologies (CMRT) are deployed on ubiquitous platforms like smartphones, head-mounted displays, and spatial cameras, supported by hardware components such as sensors for gesture and voice input. These systems have gained traction in healthcare since the 2010s, aiding in medical training, surgical guidance, rehabilitation, and patient monitoring by enhancing perceptual accuracy and enabling patient-centered interventions. Beyond healthcare, applications extend to education and entertainment, where augmented reality integrates with mobile devices for interactive experiences, and to accessibility tools that assist visually impaired users by augmenting environmental cues. Ongoing research emphasizes ethical considerations, such as privacy in pervasive mediation and the psychological impacts of altered perceptions.

Overview

Definition

Computer-mediated reality refers to the use of computational systems to add to, subtract from, or otherwise manipulate sensory input in real time, thereby modifying an individual's perception of their physical environment. This approach leverages wearable or portable devices to intervene in the sensory pathway, creating an altered experiential layer that integrates digital elements with the tangible world. In contrast to pure virtual reality, which immerses users in a completely synthetic environment by replacing real-world input, computer-mediated reality focuses on augmenting or mediating the existing physical reality without full substitution. For instance, it can involve overlaying digital data, such as navigational cues or contextual annotations, onto the user's visual field through transparent displays, enhancing rather than obscuring the underlying environment.

The fundamental mechanisms of computer-mediated reality involve three key stages: capturing sensory input via devices like cameras or environmental sensors, processing the data through algorithms that filter, enhance, or generate overlays, and delivering the modified output via interfaces such as head-mounted displays or haptic feedback systems. This enables real-time alterations, such as dimming bright lights for better visibility or amplifying subtle audio cues, to create a customized perceptual experience.

The term "computer-mediated reality" evolved in the context of wearable computing research, where it was coined to describe proactive alterations to reality beyond mere passive observation or recording, building on earlier concepts like Ivan Sutherland's 1968 head-mounted display as a foundational demonstration of computationally influenced perception.
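The capture-process-deliver loop described above can be illustrated with a minimal Python sketch using OpenCV. This is a hypothetical example rather than any production CMR pipeline; the camera index, glare threshold, and dimming factor are arbitrary illustrative choices.

```python
import cv2
import numpy as np

def mediate(frame: np.ndarray, glare_threshold: int = 220) -> np.ndarray:
    """Stage 2: dim over-bright regions, a simple 'subtractive' mediation."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    glare = (gray > glare_threshold).astype(np.uint8)      # 1 where too bright
    dimmed = cv2.convertScaleAbs(frame, alpha=0.4, beta=0)  # 40% brightness
    mask = cv2.merge([glare, glare, glare])
    return np.where(mask == 1, dimmed, frame)

cap = cv2.VideoCapture(0)                        # stage 1: capture sensory input
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("mediated view", mediate(frame))  # stage 3: deliver modified output
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```

A real system would run such transforms on dedicated display hardware at far higher frame rates, but the division of labor among the three stages is the same.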

Scope and Importance

Computer-mediated reality encompasses the manipulation of human perception through computational means, primarily targeting sensory modalities such as visual, auditory, haptic, and multisensory inputs to alter or augment the user's experience of the environment. This scope includes technologies that add, subtract, or modify sensory information in real time, thereby reshaping how individuals interact with their surroundings, but it excludes non-perceptual forms of computer mediation, such as pure data processing or computational simulations without direct sensory output.

The field intersects significantly with human-computer interaction (HCI), psychology, and neuroscience, providing frameworks to study and enhance how technology reshapes human cognition and sensory processing. In HCI, it enables intuitive interfaces that extend natural perceptual abilities, while psychology explores how mediated experiences influence perception, emotion, and decision-making; neuroscience contributions reveal neural adaptations to altered realities, such as changes in brain activity during immersive simulations. These intersections underscore the field's role in advancing theories of perception, where perception is not passive but actively mediated by computational tools.

Societally, computer-mediated reality holds profound importance by democratizing enhanced perception, particularly for accessibility, allowing individuals with sensory impairments to experience augmented environments through adaptive interfaces like haptic feedback for the visually impaired. It drives innovation in daily life by integrating into education, healthcare, and social interactions, fostering inclusive experiences that bridge physical limitations. The global market for related technologies, including the augmented and virtual reality systems central to mediated perception, is estimated at USD 20.43 billion in 2025 and projected to reach USD 85.56 billion by 2030, reflecting rapid adoption and economic impact.

Philosophically, computer-mediated reality prompts critical questions about the nature of reality, challenging traditional notions of authenticity by blurring distinctions between the physical and digital realms. It raises debates on whether computationally altered experiences undermine truth or enrich subjective meaning, echoing longstanding inquiries into knowing and being in an era where simulations can evoke genuine emotional and existential responses.

Historical Development

Early Concepts and Precursors

The concept of computer-mediated reality traces its roots to 19th-century optical innovations that manipulated human perception to create illusions of depth and immersion. In 1838, Charles Wheatstone invented the stereoscope, a device that used paired images viewed through mirrors to produce a three-dimensional effect from flat drawings, demonstrating how visual mediation could augment spatial awareness without altering physical reality. This precursor laid foundational ideas for perceptual enhancement, influencing later efforts to blend artificial visuals with natural sight. Similarly, early 20th-century cinematography advanced mediated vision by introducing motion and narrative immersion; the Lumière brothers' Cinématographe, introduced in 1895, enabled projected films that transported audiences into dynamic, simulated environments, fostering a collective experience of alternate realities through controlled visual stimuli.

Theoretical underpinnings emerged in the mid-20th century through cybernetics, which provided a framework for sensory augmentation via computational feedback. Norbert Wiener's 1948 book Cybernetics: Or Control and Communication in the Animal and the Machine defined the field as the study of regulatory systems involving feedback loops between humans and machines, applicable to enhancing sensory inputs for improved interaction with environments. Wiener's ideas extended to practical sensory mediation, such as his involvement in the 1948 "Hearing Glove" project at MIT's Research Laboratory of Electronics, which used vibrotactile stimulation to substitute for auditory signals for the deaf, illustrating cybernetic principles in augmenting human perception through machine-mediated loops.

By the early 1960s, these concepts materialized in experimental devices integrating multiple senses for immersive simulation. Morton Heilig's Sensorama, patented in 1962, was a single-user booth that combined stereoscopic visuals, audio, wind, vibration, and scents to simulate experiences like a motorcycle ride through Brooklyn, aiming to expand cinema into a holistic perceptual mediator. This multisensory apparatus represented an early attempt to override natural sensory inputs with synthesized ones, prefiguring computational overlays on reality.

A pivotal breakthrough occurred in 1968 when Ivan Sutherland developed the first head-mounted display system at Harvard University, dubbed the "Sword of Damocles" due to its ceiling-suspended frame. This device rendered wireframe 3D graphics in real time, tracking head movements to overlay computer-generated perspectives onto the user's view of the physical world, enabling the first interactive augmentation of visual reality. Sutherland's system demonstrated the feasibility of dynamic, perspective-correct mediation, setting the stage for computational integration with human senses.

Key Milestones and Pioneers

In the 1970s, Steve Mann pioneered wearable computing through body-borne computers designed for continuous environmental sensing and personal augmentation. At age 12 in 1974, Mann constructed his first wearable device, a Sequential Wave Imprinting Machine (SWIM) that visualized sound and radio waves via lights, creating phenomenological displays of otherwise invisible phenomena. By 1978, he had developed an early wearable camera system to capture and alter real-time perceptions, laying the groundwork for mediated reality concepts involving ongoing life recording and sensory modification for enhanced human experience.

Building on early precursors like Ivan Sutherland's 1968 head-mounted display, the 1980s and 1990s saw a surge in augmented and virtual reality developments driven by key innovators. In 1985, Jaron Lanier co-founded VPL Research, the first company to commercialize virtual reality products such as head-mounted displays and data gloves, and in 1987 he coined the term "virtual reality" to describe immersive digital environments. Complementing this, in 1992, Boeing engineers Tom Caudell and David Mizell introduced the term "augmented reality" in their seminal work on heads-up display technology for manual manufacturing, enabling digital overlays on physical workspaces to improve efficiency in industrial settings.

The 2000s marked the integration of computer-mediated reality with mobile devices, expanding accessibility beyond specialized hardware. A pivotal example was the 2010 launch of the Layar augmented reality browser for smartphones, which used GPS, compass, and camera data to overlay real-world information like location-based content and interactive media, democratizing AR applications for everyday users.

From the 2010s onward, consumer-grade devices and AI enhancements propelled practical implementations forward. Microsoft announced the HoloLens in 2015, a self-contained holographic headset that projected interactive 3D augmentations into the user's environment, fostering a range of enterprise and consumer applications. Concurrently, advancements in AI-driven mediation enabled more intuitive sensory alterations, while Steve Mann continued his work on "humanistic intelligence," a framework emphasizing ethical human-AI collaboration for augmentation that prioritizes user agency and perceptual enhancement in wearable systems. In 2024, Apple released the Vision Pro, a headset that advanced mixed reality by integrating high-resolution passthrough cameras and eye-tracking for immersive blending of digital content with the physical world, marking a major milestone in consumer adoption of computer-mediated reality.

Core Technologies

Hardware Components

Hardware components form the foundational layer of computer-mediated reality systems, enabling the capture, processing, and delivery of sensory inputs to bridge physical and digital environments. Early innovations, such as Sutherland's 1968 head-mounted display, laid the groundwork by integrating optical see-through elements and head-tracking mechanisms to overlay wireframe graphics on the real world, though the device required ceiling suspension due to its weight. Modern hardware builds on these principles, emphasizing lightweight, integrated designs for immersive experiences.

Display technologies are central to rendering mediated content, with head-mounted displays (HMDs) utilizing OLED and LCD screens to project high-resolution overlays in virtual and augmented reality setups. OLED panels offer superior contrast and color accuracy, achieving pixel densities up to 3386 pixels per inch in micro-OLED variants for compact HMDs, while LCDs provide cost-effective brightness for broader adoption. Emerging holographic lenses target clearer, more lifelike images with improved fields of view in AR/VR applications. In augmented reality glasses, see-through optics enable direct viewing of the environment, often employing waveguide displays that guide light via internal reflections to superimpose digital elements without obstructing vision; diffractive waveguides, for instance, support wide incident angle tolerances of 20-30 degrees for slim, lightweight form factors. These technologies balance field of view and eyebox size but face challenges like color bleeding and limited étendue.

Input sensors capture real-world data essential for spatial alignment and interaction. Cameras facilitate environmental capture through RGB imaging, providing rich, color-based feature detection for tracking despite vulnerabilities in low-light conditions. Inertial measurement units (IMUs), combining accelerometers, gyroscopes, and magnetometers, enable precise head tracking by delivering high-frequency motion data, often preintegrated to minimize drift in pose estimation. Depth sensors, such as LiDAR, enhance spatial mapping by generating accurate 3D point clouds for obstacle avoidance and scene reconstruction, complementing cameras in featureless environments though lacking inherent color information.

Wearable form factors prioritize mobility and user comfort, with smart glasses integrating compact displays and sensors into ordinary eyewear for hands-free augmentation. Haptic suits deliver tactile feedback across the body using vibrotactile actuators or microcurrent arrays to simulate textures and forces, enhancing immersion in full-body interactions. Body-worn processors handle on-device computation, distributing workloads to support untethered operation.

Power and integration challenges persist, as battery life constraints limit session durations in power-intensive AR/VR devices, often requiring efficient architectures to sustain multi-hour use. Miniaturization trends drive thinner profiles through advanced semiconductors, yet thermal management remains critical to prevent overheating in dense sensor-display integrations. Qualcomm's Snapdragon XR chips exemplify these advancements; the XR2 Gen 2 offers up to 2.5 times the GPU performance of its predecessor, while the 2025 XR2+ Gen 2 provides an additional 15% GPU and 20% CPU boost, powering devices like the Samsung Galaxy XR.
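To illustrate why IMU data is preintegrated and fused with other sensors, the following Python sketch naively integrates gyroscope readings for a stationary headset and shows the orientation drift that accumulates; the sample rate, bias, and noise values are made-up numbers chosen only for demonstration.

```python
import numpy as np

dt = 1.0 / 400.0           # assumed 400 Hz gyroscope sample rate
bias = 0.002               # assumed uncorrected gyro bias, rad/s
rng = np.random.default_rng(0)

yaw = 0.0                  # the headset is actually stationary, so truth stays 0
for _ in range(400 * 60):  # integrate one minute of readings
    measured = bias + rng.normal(0.0, 0.01)  # bias plus sensor noise, rad/s
    yaw += measured * dt                     # naive dead reckoning

print(f"Accumulated yaw drift after 60 s: {np.degrees(yaw):.1f} degrees")
# Several degrees of drift per minute is why camera or depth data is fused in.
```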

Software and Algorithms

Software and algorithms form the backbone of computer-mediated reality (CMR) systems, enabling the processing, integration, and rendering of digital content with real-world sensory inputs. Rendering pipelines in CMR rely on game engines to overlay virtual elements onto physical environments seamlessly. For instance, Unity and Unreal Engine are widely adopted for developing AR and VR applications, providing robust tools for handling 3D graphics, lighting, and physics in real-time scenarios. These engines facilitate the creation of immersive experiences by processing camera feeds and sensor data to align digital assets with the user's viewpoint, ensuring low-latency performance critical for perceptual stability.

A key component of these pipelines is simultaneous localization and mapping (SLAM) algorithms, which anchor virtual content to the real world by simultaneously estimating the device's position and constructing a map of the environment. Visual SLAM variants, such as those using feature points from cameras, enable robust tracking in dynamic settings, allowing AR overlays to remain stable as users move. Seminal work in SLAM has evolved from early probabilistic frameworks to modern visual-inertial methods, providing the foundational accuracy needed for CMR anchoring without external markers.

AI integration enhances CMR through machine learning models, particularly in computer vision for dynamic filtering and enhancement of sensory data. Object recognition models, such as convolutional neural networks (CNNs), identify and classify real-world elements in real time, enabling selective augmentation like highlighting interactive objects or suppressing distractions. These models process visual inputs to adapt mediations contextually, improving user immersion by responding to environmental changes.

Data fusion techniques combine heterogeneous sensor inputs, such as those from cameras, IMUs, and GPS, to achieve precise state estimation in CMR. Sensor fusion algorithms, exemplified by the extended Kalman filter, integrate noisy measurements for reliable pose estimation, updating the system state iteratively. The measurement update is given by

\hat{x}_{k|k} = \hat{x}_{k|k-1} + K_k (z_k - H_k \hat{x}_{k|k-1})

where \hat{x}_{k|k} is the updated state estimate, K_k is the Kalman gain, z_k is the measurement, and H_k is the observation model. This approach mitigates individual sensor limitations, ensuring smooth tracking in dynamic environments.

Frameworks and APIs streamline CMR development across devices. Apple's ARKit provides motion tracking and scene understanding APIs for iOS, enabling developers to build mediated experiences that leverage device cameras for environmental interaction. Similarly, Google's ARCore offers cross-platform support for Android and iOS, facilitating environmental detection and light estimation for consistent AR rendering. Open-source libraries like OpenCV support custom perceptual alterations through computer vision primitives, such as feature detection and image filtering, allowing tailored enhancements in experimental CMR setups.
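The update step quoted above translates directly into code. The following is a minimal NumPy sketch, with toy values standing in for real camera/IMU/GPS measurements rather than any production fusion stack:

```python
import numpy as np

def kalman_update(x_pred, P_pred, z, H, R):
    """One measurement update: x_pred, P_pred are the predicted state and
    covariance; z is the measurement; H the observation model; R its noise."""
    S = H @ P_pred @ H.T + R                    # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)         # Kalman gain K_k
    x = x_pred + K @ (z - H @ x_pred)           # the update equation above
    P = (np.eye(len(x_pred)) - K @ H) @ P_pred  # updated covariance
    return x, P

# Toy example: fuse a noisy 1-D position fix into a [position, velocity] state.
x_pred = np.array([2.0, 0.5])                  # predicted position and velocity
P_pred = np.array([[1.0, 0.3], [0.3, 1.0]])    # predicted uncertainty (correlated)
H = np.array([[1.0, 0.0]])                     # the sensor observes position only
z = np.array([2.3])                            # noisy measurement
R = np.array([[0.25]])                         # measurement noise covariance
x_new, P_new = kalman_update(x_pred, P_pred, z, H, R)
print(x_new)  # position pulled toward 2.3; velocity nudged via the cross-covariance
```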

Forms of Implementation

Augmented Perception Systems

Augmented perception systems enhance or modify users' sensory experiences by integrating digital elements with real-world inputs, primarily through visual means, to improve awareness or functionality without replacing the natural environment. These systems operate on the principle of mediated reality, where computational processing alters incoming sensory data in real time to augment perception. For instance, visual augmentation overlays contextual information, such as directions or object identifiers, directly onto the user's field of view, enabling more informed interactions with the physical world.

In visual augmentation, digital overlays are superimposed on live camera feeds or see-through displays to enrich sensory input. A prominent application is in assistive devices for the visually impaired, where systems like eSight eyewear use high-resolution cameras and displays to magnify and enhance contrast in central vision, achieving up to 12.3× magnification and improving distance acuity by 0.74 logMAR units immediately upon use. These devices process video at 30 frames per second, applying features like contrast enhancement and binarization to clarify scenes for users with conditions such as macular degeneration. Such enhancements maintain a direct connection to the real world while compensating for perceptual deficits.

Selective subtraction, a counterpart to augmentation, involves computationally dimming, masking, or removing undesired elements from the visual field to reduce cognitive overload or distraction. This technique, often implemented via inpainting algorithms that fill removed areas with plausible background reconstructions, can block visual noise in complex environments. For example, in urban navigation, systems may conceal billboards or other distractions to streamline wayfinding, preserving focus on essential paths.

Pioneering examples illustrate the practical evolution of these systems. Steve Mann's EyeTap device, developed in the early 2000s, exemplifies real-time visual mediation by using a head-worn camera and display to augment or diminish specific scene portions, such as altering brightness in overexposed areas or removing targeted objects via voxel-based reconstruction. In contemporary consumer applications, Pokémon Go employs smartphone-based AR to casually overlay virtual creatures onto real-world views captured by the device's camera, encouraging exploratory movement while blending digital elements seamlessly with physical surroundings.

Augmented perception systems vary in integration levels, distinguishing augmented reality (AR), which adds virtual elements to enhance the scene, from diminished reality (DR), which subtracts real elements to simplify perception. AR focuses on accurate registration of overlays, while DR prioritizes photometric and temporal consistency to avoid perceptual discontinuities, such as unnatural seams at removal boundaries. Perceptual fidelity, the degree to which mediated views match unaided human vision, is assessed through metrics like affordance judgments, where users evaluate action possibilities (e.g., passability) in AR environments, showing near-real-world accuracy with crossover ratios around 1.18 times shoulder width. High fidelity ensures that enhancements feel intuitive, minimizing artifacts like latency or resolution mismatches that could disrupt natural sensory processing. These systems often rely on head-mounted displays and accurate spatial tracking for precise alignment.
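Selective subtraction via inpainting, as described above, can be prototyped with OpenCV's built-in inpainting functions. In the sketch below, the image path and the billboard mask coordinates are hypothetical placeholders for output that a real system would obtain from an object detector or user selection.

```python
import cv2
import numpy as np

def diminish(frame: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Fill the masked region from surrounding pixels (Telea inpainting)."""
    return cv2.inpaint(frame, mask, inpaintRadius=5, flags=cv2.INPAINT_TELEA)

frame = cv2.imread("street.jpg")            # hypothetical input frame
mask = np.zeros(frame.shape[:2], np.uint8)  # single-channel removal mask
mask[100:220, 300:500] = 255                # hypothetical billboard region
clean = diminish(frame, mask)               # billboard replaced by plausible background
cv2.imwrite("street_diminished.jpg", clean)
```

Frame-by-frame inpainting like this ignores the temporal consistency that DR systems must maintain; production systems track the removed region across frames to avoid flicker at the boundaries.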

Wireless and Mobile Mediation

Wireless and mobile mediation extends augmented perception systems into portable, untethered formats, allowing real-time perceptual alterations in dynamic environments without reliance on fixed infrastructure. Smartphone-based augmented reality (AR) leverages built-in cameras and sensors to deliver perceptual mediation on mobile hardware, enabling users to experience overlaid digital content directly on their devices. For instance, Snapchat's AR filters use facial recognition algorithms to alter visual perceptions in real time, such as smoothing skin or adding virtual accessories, processed entirely on the device's processor for immediate feedback. This approach democratizes access to computer-mediated reality by utilizing ubiquitous mobile hardware, with computational demands met through optimized graphics pipelines that maintain frame rates above 30 Hz even on mid-range smartphones.

Wireless protocols facilitate seamless integration of mobile devices with wearables in AR and VR setups, supporting low-power data exchange essential for on-the-go mediation. Bluetooth Low Energy (BLE) is commonly employed for short-range syncing between smartphones and wearables like smart glasses, transmitting sensor data such as head orientation with minimal battery drain, typically under 1 mW average power consumption. Wi-Fi, particularly Wi-Fi 6, enables higher-bandwidth connections for streaming AR content from nearby access points, achieving throughputs up to 9.6 Gbps to support multi-device synchronization. Complementing these, 5G networks provide ultra-low-latency streaming for cloud-based AR rendering, with end-to-end delays reduced to below 20 ms, allowing complex visuals to be offloaded to remote servers while maintaining immersive quality on mobile endpoints.

Location-aware mediation incorporates GPS integration to deliver context-sensitive overlays, adapting perceptual content to the user's physical surroundings for enhanced mobility. GPS-enabled systems fuse positioning data with device orientation to project relevant digital annotations, such as historical markers during urban tours, updating in real time as users move through environments. In fieldwork scenarios, these systems support remote assistance by overlaying expert-guided instructions onto a worker's camera feed, using geolocation to prioritize context-specific visuals like equipment diagnostics at precise sites.

Battery life and connectivity challenges in mobile mediation are addressed through optimizations like edge computing, which processes data locally or at network edges to minimize latency and power usage. Edge servers handle rendering tasks near the user, reducing round-trip times to under 10 ms for AR interactions compared to full cloud reliance. Devices such as Meta's Quest standalone VR headsets exemplify this by integrating onboard Snapdragon processors for untethered operation, achieving up to 2 hours of continuous mediation with optimized power management that prioritizes essential sensor fusion over high-fidelity graphics when battery levels drop below 20%.
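The offloading trade-offs discussed in this section can be summarized as a simple policy. The sketch below is an illustrative decision rule, not a documented algorithm from any particular device; its thresholds mirror the approximate figures quoted above (sub-10 ms edge round trips, a roughly 20 ms end-to-end budget, and degraded fidelity below 20% battery).

```python
from dataclasses import dataclass

@dataclass
class LinkStats:
    rtt_ms: float       # measured network round-trip time, milliseconds
    battery_pct: float  # remaining device battery, percent

def choose_renderer(stats: LinkStats) -> str:
    """Pick a rendering location from link quality and battery state."""
    if stats.battery_pct < 20:
        # Low battery: offload if the edge is close, otherwise degrade locally.
        return "edge" if stats.rtt_ms < 10 else "on-device-degraded"
    if stats.rtt_ms < 10:
        return "edge"      # edge round trips under ~10 ms keep AR interactive
    if stats.rtt_ms < 20:
        return "cloud"     # still within the ~20 ms end-to-end budget
    return "on-device"     # network too slow; render everything locally

print(choose_renderer(LinkStats(rtt_ms=8.0, battery_pct=55.0)))  # -> "edge"
```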

Applications

Assistive and Therapeutic Uses

Computer-mediated reality technologies have significantly advanced assistive applications for individuals with visual impairments, particularly through devices that augment vision in real time. The OrCam MyEye, introduced in the 2010s, is a wearable artificial intelligence-based device that attaches to eyeglasses and uses a forward-facing camera to read printed and digital text aloud, recognize faces, identify products, and detect colors or banknotes, thereby enabling greater independence in daily tasks such as reading books or handling currency. Clinical studies have demonstrated its efficacy; for instance, a multicenter evaluation involving visually impaired participants showed that the device facilitated independent text reading, money handling, and face recognition, with high user satisfaction reported across real-world scenarios. Similarly, in patients with advanced vision loss, the OrCam MyEye 2.0 improved daily functioning by reducing assistance needs and enhancing independence in visual tasks.

For color-blind users, augmented reality (AR) systems provide color enhancement to differentiate colors that are otherwise indistinguishable. Wearable AR devices apply image filtering to boost color saturation and contrast, allowing users with color vision deficiency (CVD) to perceive scenes more accurately. A pilot study validated this approach, showing that AR-mediated enhancement improved color identification in clinical settings for protanopic and deuteranopic individuals by digitally remapping and enhancing colors in real time. These systems leverage neural mechanisms of color perception to approximate normal vision, offering portable augmentation without invasive interventions.

In therapeutic contexts, virtual reality (VR) has been employed since the late 1990s to treat phobias and post-traumatic stress disorder (PTSD) by immersing patients in controlled, mediated environments that replicate fear-inducing scenarios. This approach facilitates gradual desensitization, with meta-analyses confirming that VR exposure therapy (VRET) yields significant anxiety reductions comparable to in vivo exposure, particularly for specific phobias such as fear of heights or flying, where effect sizes indicate moderate to large improvements in symptom severity. For PTSD, VRET simulates trauma-relevant settings to promote emotional processing, with clinical trials demonstrating sustained reductions in hyperarousal and avoidance behaviors.

Haptic feedback integrated into computer-mediated suits and wearables supports motor rehabilitation following stroke by delivering tactile cues that guide movement and enhance sensory-motor integration. These systems, often combined with virtual environments, provide vibrations or forces to simulate proper limb positioning, aiding recovery of motor function. A scoping review of haptic-enabled robotic rehabilitation found improvements in fine motor skills and hand dexterity among stroke patients, with quantitative assessments showing enhanced grip strength and reach accuracy through repeated sensory feedback sessions. Wearable implementations enable portable use, allowing therapy to extend beyond clinical settings.

Notable case studies illustrate long-term personal augmentation; Steve Mann, a pioneer in wearable computing, has worn custom devices continuously since the 1980s to mediate his visual and auditory perceptions, documenting a 30-year empirical exploration of mediated perception and wearable computing for self-empowerment. Clinical trials of AR visual aids for low-vision users report improvements in task performance, such as reading and navigation, underscoring the scalability of these technologies for broader accessibility. As of 2025, consumer headsets enable more advanced applications, such as immersive therapeutic environments.

Educational, Professional, and Entertainment Applications

Computer-mediated reality has transformed educational practices by enabling interactive simulations that enhance student engagement and comprehension. Augmented reality (AR) applications, such as Google's Expeditions launched in 2015, allow learners to explore complex subjects like human anatomy through overlaid digital models on physical spaces, fostering hands-on interaction without specialized equipment. Virtual reality (VR) further immerses students in historical recreations, such as the Anne Frank House VR experience, where users virtually tour the secret annex to grasp the context of events, promoting deeper engagement and retention of historical narratives.

In professional settings, AR facilitates remote collaboration and precision tasks, exemplified by Boeing's use of AR wireframe overlays on smart glasses to guide aircraft assembly, reducing errors by displaying virtual schematics directly on components. This approach achieves up to 88% first-pass accuracy and cuts task times by 20%, streamlining complex wiring installations. For industrial training in hazardous environments, VR and AR simulate risks like chemical spills or equipment failures, enabling workers to practice responses safely; industry reports highlight how these simulations mitigate real-world accidents by building procedural skills without exposure to danger.

Entertainment applications leverage computer-mediated reality for immersive leisure experiences, with VR games like Beat Saber engaging players in rhythm-based swordplay synced to music, combining physical activity with futuristic visuals to captivate millions. Social platforms such as Roblox extend this through metaverse interactions, where users collaborate in shared virtual worlds with AR-enhanced elements for real-time socializing and creative play. Live events benefit from AR enhancements, as seen in Coachella's 2022 integration of AR visuals during Flume's performance, overlaying surreal imagery like giant animated cockatoos on livestreams to amplify audience immersion.

Studies underscore the efficacy of these applications: VR training has produced retention rates of around 75%, far higher than lecture-based instruction, because experiential learning reinforces knowledge through multisensory engagement. Overall, these implementations boost efficiency and enjoyment across domains, and as of 2025, ongoing hardware and AI advancements continue to support more interactive learning environments.

Challenges and Future Directions

Technical and Ethical Challenges

One significant technical challenge in computer-mediated reality systems, particularly those using head-mounted displays (HMDs), is motion-to-photon latency: delays exceeding 20 milliseconds between user head movements and the corresponding visual updates can induce disorientation and nausea. These latencies arise from processing bottlenecks in rendering real-time environments, exacerbating sensory mismatches between visual and vestibular inputs. Additionally, the computational demands of generating high-fidelity mediated experiences, such as rendering complex scenes at high frame rates, require substantial GPU resources, often pushing current hardware limits in mobile or untethered devices. Advances in GPU architecture, including on-device AI acceleration, are essential to support seamless integration of real and virtual elements without performance degradation.

Ethical dilemmas in computer-mediated reality prominently include privacy erosion due to the pervasive surveillance enabled by wearable devices, which continuously capture environmental and biometric data without explicit user awareness. For instance, AR glasses equipped with cameras and sensors can inadvertently record bystanders, raising concerns about anonymity and potential misuse by third parties. Another critical issue is obtaining informed consent in shared mediated spaces, where multiple users interact in overlaid virtual environments, complicating boundaries of permission for interactions and recordings across physical and digital realms. These challenges demand robust frameworks for transparency and user control to prevent unauthorized intrusions.

Health risks associated with computer-mediated reality primarily manifest as cybersickness, characterized by nausea, disorientation, and visual discomfort arising from sensory mismatches, such as discrepancies between expected and actual motion cues in HMDs. This condition affects up to 80% of users in prolonged sessions, stemming from the brain's inability to reconcile visual stimuli with physical sensations. Mitigation strategies include foveated rendering, which optimizes computational load by rendering high-resolution detail only in the user's central gaze area while reducing peripheral quality, thereby minimizing visual-vestibular conflicts and alleviating symptoms. Techniques like eye-tracking integration further enhance this approach by dynamically adjusting focus based on gaze direction.

Accessibility barriers further hinder widespread adoption of computer-mediated reality: although entry-level HMDs were priced around $300 as of 2025, higher-end models above $500 may still exclude low-income users from participatory experiences. This economic divide intersects with the broader digital divide, where limited access to high-speed internet and compatible devices in underserved regions prevents equitable engagement with perceptual enhancements like AR overlays. Software algorithms, such as simultaneous localization and mapping (SLAM), also face real-world inaccuracies in dynamic environments, contributing to tracking errors that disproportionately impact users with disabilities reliant on precise spatial alignment. Addressing these issues requires inclusive design standards and subsidized infrastructure to bridge gaps.
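The foveated rendering strategy described above can be approximated in a few lines for illustration. This CPU-side NumPy/OpenCV sketch is purely conceptual (real implementations work per-tile on the GPU), and the image path, gaze coordinates, and fovea radius are assumed values that would normally come from the renderer and an eye tracker.

```python
import cv2
import numpy as np

def foveate(frame: np.ndarray, gaze_xy: tuple[int, int], radius: int = 150) -> np.ndarray:
    """Keep full resolution near the gaze point; degrade the periphery."""
    h, w = frame.shape[:2]
    small = cv2.resize(frame, (w // 4, h // 4))           # cheap low-res pass
    periphery = cv2.resize(small, (w, h))                 # upsampled, detail lost
    mask = np.zeros((h, w), np.uint8)
    cv2.circle(mask, gaze_xy, radius, 255, thickness=-1)  # circular fovea
    mask3 = cv2.merge([mask, mask, mask])
    return np.where(mask3 == 255, frame, periphery)

frame = cv2.imread("scene.jpg")            # hypothetical rendered frame
out = foveate(frame, gaze_xy=(640, 360))   # gaze point from a hypothetical eye tracker
cv2.imwrite("scene_foveated.jpg", out)
```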
Future Directions

One of the key technological frontiers in computer-mediated reality involves advancements in brain-computer interfaces (BCIs) that enable direct neural mediation, bypassing traditional sensory inputs. By 2025, Neuralink had progressed to human implant trials, successfully implanting its wireless BCI device in multiple participants to restore functionalities like web navigation and gaming for individuals with paralysis. These implants detect and decode brain signals in real time, facilitating thought-controlled interactions that integrate seamlessly with augmented perceptual environments.

Complementing this, artificial intelligence is driving personalization of perceptual filters in augmented reality (AR) systems, where AI algorithms analyze user preferences, behaviors, and environmental contexts to dynamically tailor visual and auditory overlays. For instance, AI-enhanced AR applications can anticipate user needs and generate context-aware content, such as customized instructional overlays during tasks, enhancing immersion without overwhelming the senses.

Societal shifts induced by computer-mediated reality include the formation of "reality bubbles," where highly personalized virtual environments may deepen social isolation by reinforcing echo chambers of curated experiences. Research indicates that excessive engagement in social virtual reality (VR) platforms by already isolated individuals can exacerbate depression and reduce real-world interpersonal connections, as users prioritize mediated interactions over physical ones. In economic terms, the integration of AR in sectors like manufacturing is disrupting jobs that rely on spatial awareness, such as assembly and maintenance roles, by automating guidance and error detection through overlaid digital instructions. This transformation could affect up to 23 million jobs globally by 2030, shifting demand toward AR-literate workers and potentially requiring reskilling for those in routine spatial tasks.

Regulatory needs are intensifying around standards for mediated content authenticity to combat misinformation, as generative AI blurs the lines between real and fabricated perceptual inputs. International bodies like the International Telecommunication Union (ITU) are developing specifications for content provenance, including watermarking and authentication protocols to verify AR/VR content origins and prevent deceptive alterations. These efforts aim to establish global guidelines that mandate transparency in mediated realities, addressing risks like deepfakes in immersive environments.

Optimistic projections foresee widespread adoption of computer-mediated reality by 2030, enhancing human cognition through integrated BCI-AR systems that augment perception and memory. The metaverse economy, a core application of these technologies, is expected to generate between $5 trillion and $13 trillion in value by 2030, driven by sectors like e-commerce and virtual collaboration. This growth could democratize access to enhanced perceptual tools, fostering global innovation while ethical and privacy concerns spur adaptive regulations.
