ARCore

ARCore is Google's augmented reality (AR) software development kit (SDK) that enables developers to build immersive experiences by overlaying digital content onto the physical world, using a device's camera, motion sensors, and processors to track movement, understand environments, and estimate lighting conditions. Initially released as a developer preview for Android devices in August 2017, it reached stable version 1.0 in February 2018, launching on 13 initial smartphone models including the Google Pixel, Samsung Galaxy S8, and ASUS ZenFone AR. Today, ARCore offers cross-platform APIs compatible with Android (version 7.0 and later on qualified devices), iOS (version 11.0 and later on supported devices), Unity, Unreal Engine, and web browsers via WebXR, making it accessible in more than 100 countries worldwide. At its core, ARCore provides three primary functionalities: motion tracking, which combines visual feature detection from the camera with inertial measurement unit (IMU) data to precisely locate the device in 3D space; environmental understanding, which scans and classifies surfaces (such as floors, walls, and tables) to create a virtual representation of the surroundings; and light estimation, which analyzes the scene's lighting so that virtual objects appear realistically illuminated and shadowed. These features allow for placement of AR content, such as anchoring 3D models to real-world positions that persist as users move. Beyond these fundamentals, ARCore has evolved to include advanced capabilities like the Depth API, which provides per-pixel depth maps for realistic occlusion effects where virtual objects interact naturally with physical ones; the Scene Semantics API, which uses machine learning to identify and label semantic elements in outdoor environments (e.g., roads, sidewalks, buildings); and Cloud Anchors, enabling shared AR sessions across multiple users and devices for collaborative experiences. The Geospatial API, introduced in 2022, further extends functionality by integrating with Google Maps and Street View data to position AR content at precise Earth-scale locations, supporting applications in navigation, gaming, and tourism across Android and iOS. Additionally, ARCore supports session recording and playback for testing, as well as augmented image tracking for recognizing and augmenting printed images or objects. ARCore's openly published SDKs and distribution through Google Play Services for AR ensure automatic updates and broad device compatibility, powering hundreds of applications in gaming, education, retail, and enterprise sectors, while competing with platforms like Apple's ARKit. Its ongoing updates, such as enhanced machine learning models for better environmental detection, continue to improve accuracy and performance on more than a billion compatible mobile devices globally.

Introduction and History

Overview

ARCore is Google's augmented reality (AR) software development kit (SDK) designed for building immersive AR experiences on mobile devices, enabling the seamless integration of virtual content with the real world through the device's camera and sensors. It allows developers to create applications that overlay digital elements onto physical environments, fostering interactive and context-aware experiences in gaming, education, retail, and enterprise applications. The primary purpose of ARCore is to let developers use motion data, camera input, and device sensors for core AR functionalities, such as tracking the user's position and orientation, understanding the surrounding environment, and estimating lighting conditions to render realistic digital overlays. At a high level, ARCore's architecture is built on cross-platform APIs that integrate with Google Play Services for AR on Android devices, while also supporting development in environments like Unity, Unreal Engine, and web-based frameworks for broader accessibility. This software-centric approach has shifted AR development from reliance on specialized hardware to widespread availability on standard mobile hardware, with automatic updates delivered through Google Play Services for AR to maintain compatibility without manual intervention. As of 2023, ARCore was available in over 100 countries, powering AR experiences across apps, games, and professional tools on 1.4 billion devices worldwide. By leveraging features like motion tracking and environmental understanding, it enables developers to place and interact with virtual objects in real-time physical spaces.

Development Timeline

ARCore originated as the successor to Google's Project Tango, an experimental platform launched in June 2014 that relied on specialized hardware sensors for motion tracking and depth sensing but faced limitations in device adoption. Project Tango was officially discontinued in December 2017, with support ending on March 1, 2018, paving the way for a more accessible software-focused approach. Google announced ARCore on August 29, 2017, positioning it as a software-only SDK to enable AR experiences on standard devices without requiring custom hardware, thereby broadening developer and user accessibility compared to Tango. The platform's 1.0 release arrived on February 23, 2018, supporting a select group of Android devices through Google Play Services for AR, marking the shift toward mainstream mobile AR development. In May 2018, during Google I/O, ARCore introduced the Cloud Anchors API, which allowed multiple users to share persistent AR content in shared physical spaces across compatible devices, fostering collaborative experiences and expanding ARCore's scope beyond single-user applications. The following year, in February 2019, the Augmented Faces API launched, providing a high-fidelity 468-point face mesh for overlaying effects and animations on detected human faces via front-facing cameras. September 2019 brought significant cross-platform progress with official iOS support through Unity's AR Foundation framework, enabling developers to build unified apps for both Android and iOS ecosystems and reaching over a billion users. In October 2020, ARCore enhanced shared functionality with persistent Cloud Anchors, allowing anchored content to remain resolvable for up to 365 days, alongside expansion to additional devices for greater adoption. The ARCore Geospatial API debuted in May 2022 at Google I/O, integrating with Google's Visual Positioning System (VPS) to anchor AR content to real-world locations, enabling global-scale experiences in over 87 countries. At Google I/O 2023, enhancements to this API included the Streetscape Geometry API for accessing 3D street data, the Geospatial Depth API for improved environmental sensing, and the Scene Semantics API for semantic understanding of surroundings like roads and buildings. In 2024, ARCore powered interactive AR demonstrations at CES, including the Android Virtual Guide experience built with Geospatial Creator for navigating exhibit spaces. As of 2025, recent updates, such as version 1.45.0's enhanced torch mode support for low-light environments and ongoing WebXR integration for browser-based AR, continue to improve functionality, alongside continued partnerships like the one with the Singapore Tourism Board for location-based AR tours launched in 2023. In late 2025, version 1.51.0 introduced further enhancements, including a raised minimum Android SDK version of 23 and Vulkan rendering samples. Throughout its evolution, ARCore has transitioned from an Android-exclusive tool to a cross-platform solution supporting Android, iOS, Unity, and Web environments, emphasizing developer tools for single-user and multi-user applications alike, such as shared and geospatial anchoring.

Fundamental Technologies

Motion Tracking

ARCore's motion tracking system uses the device's inertial measurement unit (IMU), which includes accelerometers and gyroscopes, alongside the camera to determine the device's position and orientation relative to the real world. This enables six-degrees-of-freedom (6DoF) tracking, encompassing three translational movements along the x, y, and z axes and three rotational movements corresponding to pitch, yaw, and roll. By continuously monitoring these parameters, ARCore ensures that virtual content aligns accurately with the physical environment as the user moves. The tracking process relies on visual-inertial odometry (VIO), a simultaneous localization and mapping (SLAM)-based algorithm that fuses visual data from the camera—such as detected feature points—with inertial data from the IMU to estimate the camera's pose in real time. This fusion compensates for the limitations of individual sensors: the camera provides spatial context through image features, while the IMU delivers high-frequency motion cues to bridge gaps between camera frames. As ARCore processes successive frames, it refines pose estimates to correct for accumulated errors, maintaining stability during typical user interactions. ARCore achieves pose estimation at approximately 60 frames per second (FPS), supporting smooth experiences on compatible devices. In evaluations, the system demonstrates low final drift errors, such as 0.09 meters in indoor hallways and 0.12 meters in corridors for short walking sessions, highlighting its precision in controlled environments. These metrics underscore ARCore's capability for centimeter-level positioning over brief periods, essential for immersive applications. However, motion tracking can exhibit drift over extended sessions, particularly in low-texture environments like plain walls or uniform surfaces where feature points are scarce, leading to reduced pose reliability. Mitigation involves anchors tied to detected environmental features, which stabilize tracking by binding virtual elements to persistent real-world points rather than relying solely on raw motion data. As a foundational component, motion tracking is a prerequisite for all AR experiences in ARCore, providing the stable frame of reference on which anchor placement and further environmental integration depend.
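
A minimal Kotlin sketch of how this is typically consumed through ARCore's Java API: each call to Session.update() yields a Frame whose camera pose reflects the latest VIO estimate, and an anchor created from that pose stays bound to the tracked world model. Rendering, permission handling, and error handling are omitted here.

```kotlin
import com.google.ar.core.Anchor
import com.google.ar.core.Pose
import com.google.ar.core.Session
import com.google.ar.core.TrackingState

// Per-frame tracking step, e.g. called from a GL render loop.
fun trackAndAnchor(session: Session): Anchor? {
    val frame = session.update()                 // latest camera frame with a refined pose
    val camera = frame.camera
    if (camera.trackingState != TrackingState.TRACKING) return null  // 6DoF lock required

    // The pose encodes translation (meters) and rotation (quaternion) in world space.
    val pose: Pose = camera.pose
    println("position=(${pose.tx()}, ${pose.ty()}, ${pose.tz()}), " +
        "rotation=(${pose.qx()}, ${pose.qy()}, ${pose.qz()}, ${pose.qw()})")

    // Anchoring a point 1 m in front of the camera lets ARCore keep correcting its
    // world position as tracking refines, instead of leaving it to drift.
    return session.createAnchor(pose.compose(Pose.makeTranslation(0f, 0f, -1f)))
}
```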

Environmental Understanding

ARCore's environmental understanding capability allows it to detect and map real-world surfaces and geometry, enabling virtual objects to interact realistically with the physical environment through plane detection and point cloud generation. Plane detection identifies horizontal surfaces like floors and tabletops, as well as vertical surfaces such as walls, by analyzing clusters of feature points derived from camera images. These planes come with boundary polygons for precise placement of augmented content and are classified by their normal vector orientation into horizontal upward-facing planes (e.g., floors or tabletops), horizontal downward-facing planes (e.g., ceilings), and vertical planes (e.g., walls). Complementing plane detection, ARCore generates a sparse point cloud of oriented 3D feature points captured from the camera feed, continuously updated in real time using motion tracking data to build an environmental model. This supports surface angle estimation and overall scene reconstruction, with planes categorized as either trackable (actively refined with high confidence) or estimated (preliminary, based on initial observations). The process relies on simultaneous localization and mapping (SLAM) techniques that combine visual features with inertial sensor inputs for a robust environmental representation. In practice, this functionality facilitates anchoring virtual objects to detected planes, ensuring they remain stably positioned relative to real-world surfaces as the device moves. It supports basic interactions, such as placing objects on horizontal surfaces or aligning them against vertical ones, and simple collision behavior—for instance, preventing virtual items from intersecting with mapped planes. Detection is efficient in well-lit environments, typically identifying planes within seconds of sufficient camera coverage, but it requires visual texture for accuracy and may falter in feature-poor settings like plain walls or low-light conditions. However, since ARCore 1.45 (August 2024), developers can enable the device's camera torch (flash) mode to improve performance in low-light conditions.
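
The sketch below, assuming the same ARCore Java API in Kotlin, shows the two common uses of this subsystem: enumerating the planes detected so far, and hit-testing a screen tap against those planes to place content; app plumbing is elided.

```kotlin
import com.google.ar.core.Frame
import com.google.ar.core.Plane
import com.google.ar.core.Session
import com.google.ar.core.TrackingState

// Enumerate surfaces ARCore has mapped so far and classify them by orientation.
fun describeDetectedPlanes(session: Session) {
    for (plane in session.getAllTrackables(Plane::class.java)) {
        if (plane.trackingState != TrackingState.TRACKING) continue
        val kind = when (plane.type) {
            Plane.Type.HORIZONTAL_UPWARD_FACING -> "floor/tabletop"
            Plane.Type.HORIZONTAL_DOWNWARD_FACING -> "ceiling"
            Plane.Type.VERTICAL -> "wall"
            else -> "unknown"
        }
        // extentX/extentZ give the plane's current estimated size in meters.
        println("$kind plane, ${plane.extentX} m x ${plane.extentZ} m")
    }
}

// Place an object where a screen tap intersects a mapped plane.
fun placeOnTappedPlane(frame: Frame, tapX: Float, tapY: Float) =
    frame.hitTest(tapX, tapY)
        .firstOrNull { hit ->
            val plane = hit.trackable as? Plane
            // Only accept hits inside the plane's detected boundary polygon.
            plane != null && plane.isPoseInPolygon(hit.hitPose)
        }
        ?.createAnchor() // The anchor keeps the object fixed to the surface.
```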

Light Estimation

ARCore's light estimation feature analyzes the real-world environment to provide photometric data that enables realistic rendering of virtual objects, ensuring they blend seamlessly with the physical scene by matching lighting conditions. The API estimates key lighting parameters from camera imagery, allowing developers to apply physically based rendering (PBR) techniques for shadows, reflections, and overall illumination. By mimicking ambient and directional light cues, it enhances visual immersion in AR applications. The main directional light estimate identifies the primary light source's direction, intensity, and color in the scene. Direction is represented as a normalized vector, typically pointing from the light source toward the camera, while intensity and color are provided as RGB values in linear color space, derived from the Environmental HDR lighting mode. This data supports casting accurate shadows and specular highlights on virtual objects, assuming a single dominant light source such as the sun or an indoor lamp. Color correction factors, including RGB scale values and average pixel intensity in gamma space, further adjust for the scene's overall tone, approximating exposure effects without explicit HDR values. Ambient lighting is modeled using second-order spherical harmonics, which capture indirect diffuse illumination from all directions through 27 coefficients (9 per RGB channel). These coefficients enable the approximation of soft shading and subtle reflections on surfaces, contributing to a more natural appearance under mixed lighting conditions. The spherical harmonics exclude the main directional light's contribution, focusing instead on the surrounding environment's radiance for ambient effects. The estimation process relies on machine learning models applied to successive camera frames, detecting highlights, shadows, and other visual cues to infer parameters in real time. Updates occur dynamically as the environment or camera view changes, with a timestamp indicating when the estimate was last revised, typically aligning with frame processing for responsive adjustments. Light estimation integrates briefly with environmental understanding by leveraging detected surfaces for contextual probe placement, though it primarily operates on photometric cues. In practice, the light estimation data feeds into rendering engines like Unity, where it drives PBR pipelines by supplying directional light properties for direct illumination and spherical harmonics for ambient illumination. For instance, Unity's AR Foundation uses this data to generate cubemaps for environmental reflections on glossy materials, significantly improving the realism of virtual objects in mixed-reality scenes. Developers enable modes such as Environmental HDR for advanced features or Ambient Intensity for simpler intensity scaling, selecting based on performance needs. Despite its effectiveness, light estimation has limitations, including reduced accuracy in highly dynamic scenarios where light sources move rapidly or multiple strong lights are present, as it assumes a single main directional source. The HDR mode also imposes additional computational overhead due to cubemap generation, potentially impacting frame rates on lower-end devices. These constraints highlight the need for fallback strategies in varying real-world conditions.
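
A brief Kotlin sketch of reading the Environmental HDR values described above from a frame; how the values are wired into a renderer's light and probe objects is engine-specific and omitted here.

```kotlin
import com.google.ar.core.Config
import com.google.ar.core.Frame
import com.google.ar.core.LightEstimate
import com.google.ar.core.Session

// Opt in to the Environmental HDR mode before resuming the session.
fun enableHdrLighting(session: Session) {
    val config = Config(session)
    config.lightEstimationMode = Config.LightEstimationMode.ENVIRONMENTAL_HDR
    session.configure(config)
}

// Pull per-frame lighting parameters for a PBR renderer.
fun readLighting(frame: Frame) {
    val estimate: LightEstimate = frame.lightEstimate
    if (estimate.state != LightEstimate.State.VALID) return

    // Main directional light: a normalized direction plus linear RGB intensity.
    val direction = estimate.environmentalHdrMainLightDirection  // FloatArray(3)
    val intensity = estimate.environmentalHdrMainLightIntensity  // linear RGB

    // 27 spherical-harmonics coefficients (9 per RGB channel) modeling ambient,
    // indirect illumination; the main light's contribution is excluded.
    val sh = estimate.environmentalHdrAmbientSphericalHarmonics
    println("light dir=${direction.toList()}, rgb=${intensity.toList()}, sh[0]=${sh[0]}")

    // HDR cubemap for specular reflections; each Image must be closed after use.
    estimate.acquireEnvironmentalHdrCubeMap().forEach { face -> face.close() }
}
```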

Advanced Sensing and Interaction

Depth API

The ARCore Depth API gives developers access to depth information from the device's camera, generating depth maps that represent the distance of real-world objects from the camera in a scene. This pixel-level depth data enhances the realism of augmented reality experiences by allowing virtual content to interact more naturally with the physical environment, such as through accurate occlusion and spatial awareness. The API leverages either hardware depth sensors or software-based depth estimation to produce these maps, which are synchronized with the camera's RGB images for seamless integration into rendering pipelines. ARCore supports two primary depth modes: a full-depth mode, which generates dense depth maps using software estimation from device motion (provided in AUTOMATIC mode when depth hardware is unavailable), and a raw depth mode, which provides sparse, confidence-rated depth maps directly from compatible depth sensors like time-of-flight (ToF) hardware. In full-depth mode, the system employs algorithms such as patch-match stereo to estimate depth across the entire image based on visual features tracked over multiple frames, requiring user movement for optimal results. The raw depth mode (Config.DepthMode.RAW_DEPTH_ONLY) delivers higher-accuracy sparse data at specific pixels, accompanied by a confidence map indicating reliability (values from 0 to 255, with 255 denoting highest confidence), making it suitable for devices with integrated depth sensors. Developers can enable the automatic mode (Config.DepthMode.AUTOMATIC) to let ARCore select the best option based on device capabilities and scene conditions. As of October 2025, the Depth API is supported on over 87% of active ARCore-compatible devices. Depth maps are typically provided at resolutions lower than the camera image for performance, with higher resolutions available on supported devices, and are updated at 30 frames per second to align with ARCore's tracking rate. The effective range spans 0 to 65 meters, with the highest accuracy achieved between 0.5 and 5 meters, where depth values are represented as 16-bit unsigned integers in millimeters for precise distance calculations. Beyond this optimal range, accuracy decreases due to factors like sensor limitations or estimation errors in low-texture areas. Key applications of the Depth API include occlusion, where virtual objects are rendered behind real-world elements based on depth comparisons in shaders, improving visual fidelity without manual geometry modeling. It also supports enhanced interactions, such as collisions between virtual elements and real surfaces, and improved hand tracking by providing depth context for gestures, enabling more responsive user inputs like pointing or grabbing in AR scenes. For instance, developers can perform depth hit-tests to place anchors on non-planar surfaces, facilitating precise object positioning. In implementation, the Depth API complements ARCore's point cloud data by allowing developers to convert depth pixels into world-space 3D points using the camera's intrinsics and pose, creating denser environmental representations for advanced rendering or physics simulations. Depth images are acquired via Frame.acquireDepthImage16Bits() and can be transformed for non-frontal views by applying the camera's intrinsics and pose from the AR session's tracking data, ensuring alignment with the overall scene geometry. This supports custom shaders for effects like depth-aware blending, where functions such as DepthGetMillimeters() extract distances for GPU-accelerated processing.
Recent enhancements include 16-bit depth support in ARCore SDK 1.31, extending the maximum range to 65 meters while maintaining compatibility with machine learning models for refined estimation in challenging conditions like low motion. Additionally, ARCore for Jetpack XR, in 2025 versions, provides smoothed raw depth outputs, improving usability for on-device ML-driven applications without sacrificing performance. These updates build on the raw depth mode added in 2021, which offers unfiltered sensor data for higher precision on supported hardware.
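
A Kotlin sketch of enabling depth and sampling one pixel from the DEPTH16 image returned by Frame.acquireDepthImage16Bits(). The buffer layout (one little-endian 16-bit value per pixel, in millimeters, with 0 treated as "no estimate") follows the documented format but is an assumption worth validating against the SDK samples for a given device.

```kotlin
import com.google.ar.core.Config
import com.google.ar.core.Frame
import com.google.ar.core.Session
import com.google.ar.core.exceptions.NotYetAvailableException
import java.nio.ByteOrder

// Let ARCore pick hardware depth when present, software depth otherwise.
fun enableDepth(session: Session) {
    if (session.isDepthModeSupported(Config.DepthMode.AUTOMATIC)) {
        session.configure(Config(session).apply {
            depthMode = Config.DepthMode.AUTOMATIC
        })
    }
}

// Sample the distance (in meters) to the real surface behind one depth pixel.
fun depthAtPixel(frame: Frame, x: Int, y: Int): Float? = try {
    frame.acquireDepthImage16Bits().use { depthImage ->
        val plane = depthImage.planes[0]
        // rowStride is in bytes; each pixel is one 16-bit value in millimeters.
        val byteIndex = y * plane.rowStride + x * plane.pixelStride
        val mm = plane.buffer.order(ByteOrder.nativeOrder())
            .asShortBuffer()[byteIndex / 2].toInt() and 0xFFFF
        if (mm == 0) null else mm / 1000f  // assumption: 0 means "no estimate"
    }
} catch (e: NotYetAvailableException) {
    null  // depth needs a few frames (and some motion) before it is available
}
```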

Anchors and Cloud Anchors

In ARCore, anchors serve as virtual reference points that tie digital content to specific positions and orientations in the real world, leveraging the device's motion tracking and environmental understanding to maintain stability as the user moves. Local anchors are created either directly by specifying a pose in the AR session's world space or by attaching to detected planes for surface-aligned placement, ensuring virtual objects remain fixed relative to the physical environment during the current session. These anchors adjust dynamically to updates in the ARCore world model, which incorporates data from motion tracking and features like the Depth API for enhanced stability in complex scenes. Cloud Anchors extend this functionality to enable shared and persistent AR experiences across multiple devices and sessions by hosting anchor data on the ARCore API cloud endpoint. In the host-resolve model, a hosting device uploads an anchor's pose and associated environmental data to generate a unique Cloud Anchor ID, which is then shared with other devices—typically via a backend service like Firebase—to allow them to resolve and align to the same real-world location. This synchronization relies on visual feature matching between devices in the shared physical space, supporting multi-user AR applications without requiring precise GPS coordination. Local anchors persist only for the duration of the AR session and are stored on the device, limiting them to single-user scenarios, whereas Cloud Anchors provide cross-session persistence, with lifetimes configurable up to 365 days when using authenticated API access, enabling repeated access to the same virtual placements. For multi-user interactions, Cloud Anchors facilitate collaboration, such as in multiplayer games where participants co-place virtual objects or in collaborative design sessions built around shared virtual prototypes. However, Cloud Anchors depend on a stable network connection for hosting and resolving, introducing latency and potential failures in low-network conditions, and they require proper authorization through a Google Cloud project to access the API.
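
The host-resolve flow can be sketched in Kotlin as below, using the asynchronous Cloud Anchor calls added in newer SDK versions. The publish callback stands in for whatever sharing backend the app uses (e.g., Firebase), and cloud project authorization is assumed to be configured separately.

```kotlin
import com.google.ar.core.Anchor
import com.google.ar.core.Config
import com.google.ar.core.Session

// Cloud Anchors must be enabled before the session is resumed.
fun enableCloudAnchors(session: Session) {
    session.configure(Config(session).apply {
        cloudAnchorMode = Config.CloudAnchorMode.ENABLED
    })
}

// Host: upload a local anchor's pose and surrounding visual features.
// A TTL up to 365 days requires authenticated access on the Google Cloud project.
fun shareAnchor(session: Session, localAnchor: Anchor, publish: (String) -> Unit) {
    session.hostCloudAnchorAsync(localAnchor, /* ttlDays = */ 365) { cloudAnchorId, state ->
        if (state == Anchor.CloudAnchorState.SUCCESS) {
            publish(cloudAnchorId)  // e.g. write the ID to a shared backend
        }
    }
}

// Resolve: a second device turns the shared ID back into a world-aligned anchor.
fun joinSharedAnchor(session: Session, cloudAnchorId: String, onReady: (Anchor) -> Unit) {
    session.resolveCloudAnchorAsync(cloudAnchorId) { anchor, state ->
        if (state == Anchor.CloudAnchorState.SUCCESS) onReady(anchor)
    }
}
```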

Specialized APIs

Geospatial API

The ARCore Geospatial API enables developers to create location-based experiences tied to Earth's geographic coordinates, allowing virtual content to be anchored to real-world locations without requiring on-site mapping. Introduced in 2022, it leverages Google's Visual Positioning System (VPS) alongside GPS and device sensors to provide global-scale AR positioning. At its core, the API fuses GPS signals, Wi-Fi data, inertial sensors, and visual features captured by the device's camera with VPS, which matches the user's environment against a global localization model built from billions of Street View images. This combination delivers Earth-relative positioning with higher precision than standalone GPS, particularly in areas with VPS coverage. Under typical outdoor conditions, it achieves positional accuracy better than 5 meters, often around 1 meter, and enables stable tracking over large areas by integrating local motion tracking data to minimize drift during wide-area movement. Geospatial poses represent the device's position and orientation in a global Earth-centered coordinate frame, using the WGS84 datum for latitude, longitude, and altitude. Developers can create persistent geospatial anchors linked directly to these coordinates, with options for WGS84 (standard ellipsoidal model), Terrain (surface-elevated), or Rooftop (building-top) placement to ensure content aligns accurately with physical locations. These anchors support shared experiences across devices, as they are resolved via VPS without needing local session sharing. At Google I/O 2024, enhancements to the Geospatial API included expanded coverage, enabling more creators to build experiences in densely populated urban environments. Additional updates integrated the API with tooling for faster localization and improved anchor accuracy, and with editor workflows for scaling content across multiple sites using the Places API. A pilot program also began incorporating Geospatial content into Google Maps' Street View and Lens modes in select cities. Common applications include navigation overlays, such as guiding users to parked vehicles in lots, and large-scale games like virtual balloon-popping or garden-building that span city blocks. Optimal performance requires locations covered by Street View imagery, which favors outdoor areas with good visibility of surroundings, though the API operates on compatible devices without additional hardware.
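
A Kotlin sketch of anchoring content at fixed WGS84 coordinates once Earth tracking is established; enabling Geospatial mode also assumes the app's Google Cloud project is authorized for the ARCore API, which is not shown here.

```kotlin
import com.google.ar.core.Anchor
import com.google.ar.core.Config
import com.google.ar.core.Earth
import com.google.ar.core.Session
import com.google.ar.core.TrackingState

// Geospatial mode activates VPS-backed, Earth-relative tracking.
fun enableGeospatial(session: Session) {
    session.configure(Config(session).apply {
        geospatialMode = Config.GeospatialMode.ENABLED
    })
}

// Anchor content at fixed WGS84 coordinates once Earth tracking is available.
fun anchorAtCoordinates(session: Session, latitude: Double, longitude: Double): Anchor? {
    val earth: Earth = session.earth ?: return null
    if (earth.trackingState != TrackingState.TRACKING) return null

    // The device's own geospatial pose, with accuracy estimates in meters.
    val devicePose = earth.cameraGeospatialPose
    println("lat=${devicePose.latitude}, lng=${devicePose.longitude}, " +
        "±${devicePose.horizontalAccuracy} m")

    // Altitude here reuses the device's WGS84 altitude; Terrain/Rooftop anchors
    // (resolveAnchorOnTerrainAsync, resolveAnchorOnRooftopAsync) avoid guessing it.
    return earth.createAnchor(latitude, longitude, devicePose.altitude,
        0f, 0f, 0f, 1f)  // identity quaternion: content keeps a fixed heading
}
```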

Scene Semantics

Introduced at Google I/O 2023, the Scene Semantics API in ARCore uses a machine learning model to deliver semantic understanding of outdoor environments captured by the device's camera. It processes the camera image feed on-device to generate a semantic image, where each pixel is assigned one of 11 predefined labels representing common outdoor scene components, along with a corresponding confidence image indicating prediction reliability for each pixel. This enables developers to identify and interact with environmental elements such as buildings, roads, and vegetation without relying solely on geometric tracking. The API supports ML-based detection by mapping pixels to semantic labels derived from the scene's point cloud and image data, producing semantic masks that highlight regions of specific categories and allow for precise labeling. The supported labels are sky (open sky including clouds), building (structures like houses and attached elements), tree (non-walkable vegetation such as trees and shrubs), road (drivable surfaces), sidewalk (pedestrian paths including curbs), terrain (walkable natural ground like grass or sand), structure (non-building elements like fences or bridges), object (temporary or permanent items such as signs or poles), vehicle (motorized transport like cars or buses), person (humans including pedestrians), and water (bodies of water such as lakes or rivers). Developers can query the prevalence of these labels in the current frame—such as the fraction of the scene occupied by roads or sidewalks—to inform dynamic AR behaviors, like restricting virtual object placement to safe areas. Additionally, the API provides quality tiers for predictions, with higher reliability for larger or more common objects like buildings and roads than for smaller or transient ones like vehicles. Scene Semantics integrates with ARCore's environmental understanding by layering semantic information onto detected geometry, such as planes and point clouds, to create a richer model of the scene for more intelligent interactions. It also combines with the Depth API to project semantic labels into 3D space, enabling the derivation of volumetric representations like approximate bounding boxes for labeled regions when depth data is available. For instance, in context-aware applications, developers can use these features to place virtual signage on detected sidewalks while avoiding roads or water bodies, enhancing realism and safety in outdoor experiences. The on-device inference ensures low latency and preserves user privacy by avoiding cloud uploads of scene data.
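
In Kotlin, the per-frame queries look roughly like the sketch below: a label-fraction query for coarse decisions, and the raw label/confidence images for custom masking. Image consumption details are device-dependent and elided.

```kotlin
import com.google.ar.core.Config
import com.google.ar.core.Frame
import com.google.ar.core.SemanticLabel
import com.google.ar.core.Session
import com.google.ar.core.exceptions.NotYetAvailableException

// Turn on per-pixel semantics where the device supports it.
fun enableSemantics(session: Session) {
    if (session.isSemanticModeSupported(Config.SemanticMode.ENABLED)) {
        session.configure(Config(session).apply {
            semanticMode = Config.SemanticMode.ENABLED
        })
    }
}

// Coarse query: what fraction of the current frame is sidewalk?
// Useful for gating placement logic without touching pixels.
fun fractionOfSidewalk(frame: Frame): Float =
    frame.getSemanticLabelFraction(SemanticLabel.SIDEWALK)

// Access the raw label and confidence images for custom masking.
fun inspectSemantics(frame: Frame) = try {
    frame.acquireSemanticImage().use { labels ->              // one label per pixel
        frame.acquireSemanticConfidenceImage().use { conf ->  // per-pixel reliability
            println("semantic image: ${labels.width} x ${labels.height}")
        }
    }
} catch (e: NotYetAvailableException) {
    // The first semantic frames may lag session start.
}
```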

Augmented Faces and Images

ARCore's Augmented Faces API enables developers to overlay virtual assets onto detected human faces using the device's front-facing camera, without requiring specialized hardware such as a depth sensor. Introduced in February 2019 as part of ARCore 1.7, the API employs machine learning models to detect and track facial features in real time, generating a dense mesh of 468 vertices that represents the face's geometry relative to its center pose. This mesh updates each frame, typically at the camera's rate of up to 60 frames per second on supported devices, allowing for smooth rendering of effects such as filters or accessories. The API also provides region poses for key facial areas such as the forehead and nose tip, facilitating precise asset placement that adapts to head movements and partial occlusions. The Augmented Faces API supports tracking multiple faces simultaneously through the session's trackables, enabling applications to handle more than one person in view, though performance may vary by device. Developers can use the mesh's 3D deformations to infer facial expressions, supporting dynamic overlays that respond to movements like smiling or frowning. For realistic integration, the API works with ARCore's light estimation to adjust virtual assets' shading to environmental lighting conditions. Common applications include AR beauty tools, where users try on virtual makeup or hairstyles, and interactive selfie filters in social experiences. In parallel, ARCore's Augmented Images API allows real-time tracking of predefined 2D images, such as posters or product packaging, by detecting them and estimating their 6DoF pose (position and orientation) along with physical size. Launched at Google I/O in May 2018, the API extracts grayscale feature points from a developer-provided image database, enabling augmentation even for moving or partially occluded targets. It can track up to 20 images concurrently and refines pose estimates frame by frame at up to 60 FPS, maintaining stability when images are temporarily out of view, assuming a static environment. Augmented Images requires images to be flat and to occupy at least 25% of the camera frame for reliable detection, with a maximum database size of 1,000 images per session. This necessitates a predefined catalog of target images, which developers can evaluate and compile using tools like arcoreimg before deployment, or build at runtime as sketched below. Applications span marketing campaigns, where scanning a poster triggers interactive overlays, and educational tools that enhance printed materials with 3D content. Both APIs contribute to immersive experiences by focusing on dynamic, user-facing interactions, distinct from static environmental anchoring.
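
A Kotlin sketch of the Augmented Images flow: registering a reference image at runtime and reacting when it is tracked. The "poster" name, bitmap source, and 0.5 m width are illustrative placeholders.

```kotlin
import android.graphics.Bitmap
import com.google.ar.core.AugmentedImage
import com.google.ar.core.AugmentedImageDatabase
import com.google.ar.core.Config
import com.google.ar.core.Frame
import com.google.ar.core.Session
import com.google.ar.core.TrackingState

// Register reference images up front; ARCore matches grayscale feature points
// extracted from each entry. Supplying the printed width in meters speeds up
// the initial pose and scale estimate.
fun configureImageTracking(session: Session, poster: Bitmap) {
    val db = AugmentedImageDatabase(session)
    db.addImage("poster", poster, /* widthInMeters = */ 0.5f)
    session.configure(Config(session).apply { augmentedImageDatabase = db })
}

// Each frame, react to images whose tracking state changed.
fun onImageFrame(frame: Frame) {
    for (image in frame.getUpdatedTrackables(AugmentedImage::class.java)) {
        if (image.trackingState == TrackingState.TRACKING &&
            image.trackingMethod == AugmentedImage.TrackingMethod.FULL_TRACKING
        ) {
            // centerPose gives the image's 6DoF pose; extentX/Z its physical size.
            val anchor = image.createAnchor(image.centerPose)
            println("tracking '${image.name}', ${image.extentX} m wide; anchor=$anchor")
        }
    }
}
```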

Development and Compatibility

SDKs and Integration

ARCore offers primary software development kits (SDKs) in Java and Kotlin for native Android applications, enabling developers to integrate AR features directly into apps using the Android Studio environment. For performance-intensive scenarios, such as custom rendering or cross-platform engines, the C/C++ SDK via the Android Native Development Kit (NDK) provides low-level access to ARCore's core functionality. Additionally, JavaScript support through the WebXR Device API allows for WebAR experiences in compatible browsers like Chrome on Android, facilitating browser-based AR without native app installation. The API is structured around key components for managing AR sessions and processing real-time data. Session management involves creating an ArSession object to oversee the AR lifecycle, including starting, pausing, and resuming the session based on app state. Frame updates are delivered per camera frame via the ArFrame class, supplying updated pose data, camera intrinsics, and environmental insights like detected planes or light estimates. Configuration options, such as enabling or disabling plane finding, allow customization of tracking behavior to balance accuracy and computational load—for instance, setting Config.PlaneFindingMode.HORIZONTAL to detect only horizontal surfaces in simpler scenes. Supporting tools enhance development workflows, particularly for game engines. ARCore Extensions for Unity's AR Foundation, version 1.51.0 and later as of 2025, integrates with AR Foundation to expose advanced ARCore features like Cloud Anchors and depth sensing in a cross-platform manner, requiring Unity 2019.4 or newer with AR Foundation 4.1.5 or later for full compatibility. In 2025, Google introduced ARCore for Jetpack XR (version 1.0.0-alpha07 as of 2025), a library that integrates ARCore capabilities with Jetpack Compose for modern UI development. For debugging, Scene Viewer serves as an essential tool, letting developers preview and interact with 3D models (in glTF or GLB format) in AR mode directly from an app or web link, validating asset compatibility and rendering without building a complete application. To integrate ARCore, developers first install Google Play Services for AR via the Google Play Store on supported devices, ensuring the latest version (such as 1.51.0) is available for runtime support. In the Android project, they add the ARCore dependency to the build.gradle file (e.g., implementation 'com.google.ar:core:1.51.0') and declare the CAMERA and INTERNET permissions in AndroidManifest.xml. The session lifecycle is handled programmatically: check device compatibility with ArCoreApk_checkAvailability(), create and configure the session if supported, process frames in the render loop, and pause or close the session during app interruptions to prevent resource leaks; a Kotlin sketch of this flow follows below. For WebXR, an immersive AR session is initialized with navigator.xr.requestSession('immersive-ar') after verifying support. Best practices emphasize reliability and efficiency in ARCore integration. For error handling, always verify ARCore availability and prompt users to install or update Google Play Services for AR if the device lacks it, using asynchronous checks like ArCoreApk_checkAvailabilityAsync() to avoid blocking the UI thread. Performance optimization includes minimizing tracking overhead by selectively enabling features—such as disabling environmental probing when only motion tracking is needed—and leveraging efficient rendering pipelines like Vulkan to maintain 60 FPS on mid-range devices.
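
The lifecycle described above, sketched in Kotlin against the Java SDK (exception handling for unsupported or declined installs is elided, and camera permission is assumed to be granted already):

```kotlin
import android.app.Activity
import com.google.ar.core.ArCoreApk
import com.google.ar.core.Config
import com.google.ar.core.Session

class ArLifecycle(private val activity: Activity) {
    private var session: Session? = null
    private var userRequestedInstall = true

    // Call from Activity.onResume(): install/update Google Play Services for AR
    // if needed, then create and configure the session.
    fun onResume() {
        if (session == null) {
            when (ArCoreApk.getInstance().requestInstall(activity, userRequestedInstall)) {
                ArCoreApk.InstallStatus.INSTALL_REQUESTED -> {
                    userRequestedInstall = false  // Play Store flow launched; retry later
                    return
                }
                ArCoreApk.InstallStatus.INSTALLED -> {
                    session = Session(activity).also { s ->
                        s.configure(Config(s).apply {
                            // Restrict tracking work to what the app actually needs.
                            planeFindingMode = Config.PlaneFindingMode.HORIZONTAL
                        })
                    }
                }
            }
        }
        session?.resume()  // camera starts; frames then flow via session.update()
    }

    // Pause promptly to release the camera; close() frees native resources.
    fun onPause() { session?.pause() }
    fun onDestroy() { session?.close(); session = null }
}
```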

Supported Platforms and Devices

ARCore primarily supports Android devices running version 7.0 (API level 24) or later, with compatibility enabled through Google Play Services for AR, which handles the installation and updates of the ARCore runtime. For cross-platform development, ARCore integrates with Unity's AR Foundation framework, allowing applications to target iOS devices running iOS 11.0 or later via Apple's ARKit, where ARCore-specific features like Cloud Anchors and Augmented Faces are supported on ARKit-compatible hardware. Additionally, ARCore powers augmented reality experiences on the web through the WebXR API in Google Chrome on supported Android devices, enabling browser-based AR without native app installation. Hardware requirements for ARCore include a gyroscope, accelerometer, and rear-facing camera to enable motion tracking, environmental understanding, and light estimation; certified devices must also feature a sufficiently powerful CPU for real-time AR processing. ARCore version 1.0 and later is distributed via Google Play Services for AR, ensuring that only devices meeting these sensor and performance criteria can fully utilize the SDK. As of 2025, ARCore supports over 200 device models across various manufacturers, including the Google Pixel series, Samsung Galaxy flagships like the S23 series, and many recent mid-range models; a comprehensive, up-to-date list of certified devices is maintained by Google. On iOS, support is achieved through cross-compilation in Unity via AR Foundation, targeting ARKit-enabled devices such as the iPhone 6s and later running iOS 11 or higher. ARCore also supports foldable devices, such as Samsung's Galaxy Z Fold variants, and mid-range options like certain Motorola Moto G series phones, broadening availability beyond premium hardware. ARCore updates are delivered automatically through the Google Play Store via Google Play Services for AR on supported devices, ensuring seamless access to new features and performance improvements without manual intervention. For regions like China, where Google Play is unavailable, support is provided through alternative app stores such as those from Xiaomi and Huawei. Developers can implement in-app compatibility checks using the ArCoreApk API to verify device support at runtime, allowing applications to fall back gracefully to 2D or non-AR experiences on incompatible hardware, as in the sketch below. This API-driven verification helps ensure broad usability while adhering to the platform's hardware prerequisites.
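
A minimal Kotlin sketch of such a runtime check, assuming the app simply branches between an AR path and a 2D fallback; the 200 ms retry interval is an arbitrary illustrative choice.

```kotlin
import android.content.Context
import android.os.Handler
import com.google.ar.core.ArCoreApk

// Decide at runtime whether to offer AR or fall back to a 2D experience.
fun checkArSupport(context: Context, onResult: (arSupported: Boolean) -> Unit) {
    val availability = ArCoreApk.getInstance().checkAvailability(context)
    if (availability.isTransient) {
        // The device profile query may still be in flight; poll again shortly.
        Handler(context.mainLooper).postDelayed(
            { checkArSupport(context, onResult) }, 200
        )
        return
    }
    // Any SUPPORTED_* status means the hardware is certified; the ARCore APK
    // itself can be installed or updated later via requestInstall().
    onResult(availability.isSupported)
}
```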

    Enable AR to use augmented reality features in your new or existing app. Configure your app to be AR Required or AR Optional.Missing: 2024 | Show results with:2024