Google Tensor
Google Tensor is a family of custom-designed, ARM-based system-on-chip (SoC) processors developed by Google exclusively for its Pixel smartphones, emphasizing advanced artificial intelligence (AI) and machine learning capabilities to deliver seamless on-device experiences.[1] First unveiled in October 2021 with the Pixel 6 and Pixel 6 Pro, the original Tensor SoC marked Google's inaugural effort in mobile chip design, integrating a dedicated Tensor Processing Unit (TPU) to accelerate AI tasks such as computational photography, voice recognition, and real-time translation.[2]

Subsequent generations have built upon this foundation, iteratively improving performance, power efficiency, and AI integration. The Tensor G2, introduced in the Pixel 7 series in 2022, enhanced security with the Titan M2 module and supported expanded AI features like advanced face unlock.[3] In 2023, the Tensor G3 powered the Pixel 8 lineup, featuring upgraded ARM Cortex CPU cores, a more efficient GPU, and a next-generation TPU for running generative AI models.[4] The Tensor G4, debuting in the Pixel 9 series in 2024, was optimized for Gemini AI models and delivered faster on-device processing for features like live video summaries and enhanced accessibility tools.[5] By 2025, the Tensor G5 in the Pixel 10 series shifted to TSMC's 3 nm process node, providing up to 60% more AI processing power via its fourth-generation TPU, 34% faster CPU performance, and new capabilities such as motion deblur in imaging and proactive AI assistants.[6]

These SoCs distinguish themselves through tight hardware-software integration with Android and Google's ecosystem, prioritizing AI-driven functionality over raw benchmark performance, while incorporating robust security hardware to protect user data.[4] Early generations were fabricated by Samsung, but the transition to TSMC for the Tensor G5 reflects Google's push for greater efficiency and customization in mobile AI hardware.[6] Overall, the Tensor series has enabled Pixel devices to lead in innovative features like Magic Editor for photos, Call Screen for spam reduction, and on-device Gemini Nano for privacy-focused AI interactions.[5]
Development
Background
Google's Pixel smartphones initially relied on Qualcomm's Snapdragon system-on-chips (SoCs) for their processing power. The original Pixel and Pixel XL, released in 2016, featured the Snapdragon 821.[7] This was followed by the Snapdragon 835 in the Pixel 2 series of 2017, the Snapdragon 845 in the Pixel 3 series of 2018, the Snapdragon 855 in the Pixel 4 series of 2019, and the Snapdragon 765G in the Pixel 5 of 2020.[8] These off-the-shelf processors provided reliable performance but limited Google's ability to fully optimize hardware for its software ecosystem, particularly in areas like artificial intelligence and machine learning.

To address this, Google shifted to custom silicon with the Tensor SoC series, which debuted in the Pixel 6 and Pixel 6 Pro smartphones launched in October 2021. Development of Tensor began in 2017, following Google's acquisition of key talent from HTC's chip design team, as part of a broader effort to build in-house expertise in mobile SoC design.[9] The move aligned with industry trends toward custom silicon, exemplified by Apple's A-series processors since the iPhone 4 in 2010 and Samsung's Exynos SoCs from around the same period, which allowed device makers to tailor hardware for specific features and efficiency.[10]

Google first teased Tensor on August 2, 2021, positioning it as the company's first custom-built mobile SoC designed exclusively for Pixel devices, with a strong emphasis on accelerating on-device AI and machine learning tasks.[2] To expedite development, Google partnered with Samsung for both design collaboration (leveraging Samsung's Exynos expertise for elements like CPU cores) and fabrication on a 5 nm process node.[11] Tensor was marketed as a premium mobile SoC enabling Pixel-exclusive capabilities, such as advanced computational photography for features like Magic Eraser and real-time voice recognition enhancements.[1]
Design
The Google Tensor series employs an Arm-based architecture (Armv8 cores in early generations, with later generations adopting Armv9 designs), featuring custom-designed components optimized for integration with Google's software ecosystem and services. This foundation allows for tailored enhancements in areas like machine learning acceleration and system efficiency, enabling the SoC to prioritize tasks specific to Android and Google apps. A prominent element is the embedded Tensor Processing Unit (TPU), which serves as a dedicated accelerator for on-device AI inference.[12]

Central to the Tensor design are goals that emphasize on-device AI capabilities over conventional measures of CPU or GPU computational power. This approach shifts resources toward efficient execution of neural network models, reducing latency and enhancing user privacy by minimizing cloud dependency for tasks like image recognition and natural language processing. Dedicated hardware support for TensorFlow Lite models exemplifies this focus, providing optimized pathways for deploying lightweight AI inference in power-constrained mobile scenarios.[1]

The series adopts a modular architecture that supports progressive refinements across generations, enabling targeted upgrades without requiring a complete redesign. Examples include scaling core configurations for better parallelism and adopting smaller process nodes to improve energy efficiency, allowing the SoC to evolve in alignment with advancing AI algorithms and device requirements.[2]

Security forms an integral part of the Tensor design philosophy, featuring a dedicated Tensor Security Core that works with the Titan M2 chip for robust hardware-rooted protections. This security subsystem handles encryption of sensitive data and enforces verified boot to prevent unauthorized modifications, safeguarding against sophisticated threats. Lab validations confirm Titan M2's resistance to attacks such as voltage glitching, electromagnetic analysis, and laser-based fault injection.[1]
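The TensorFlow Lite pathway described above is reached from Android apps through standard delegate APIs rather than a Tensor-specific interface. As a minimal sketch, the Kotlin snippet below hands a bundled TensorFlow Lite model to the platform's neural-network runtime, which may route supported operations to on-device accelerators such as the TPU; the model file name and tensor shapes are hypothetical placeholders, and whether the TPU is actually used is decided by the runtime, not by the app.

```kotlin
import android.content.Context
import org.tensorflow.lite.Interpreter
import org.tensorflow.lite.nnapi.NnApiDelegate
import java.io.FileInputStream
import java.nio.MappedByteBuffer
import java.nio.channels.FileChannel

// Loads a bundled .tflite model and runs one inference through the NNAPI delegate.
// "model.tflite" and the 1x224x224x3 input shape are illustrative assumptions.
fun runOnDeviceInference(context: Context): FloatArray {
    val modelBuffer: MappedByteBuffer = context.assets.openFd("model.tflite").use { fd ->
        FileInputStream(fd.fileDescriptor).channel.map(
            FileChannel.MapMode.READ_ONLY, fd.startOffset, fd.declaredLength
        )
    }

    // NNAPI lets the OS schedule supported ops onto available accelerators (GPU, DSP, TPU).
    val nnApiDelegate = NnApiDelegate()
    val options = Interpreter.Options().addDelegate(nnApiDelegate)

    val result = Interpreter(modelBuffer, options).use { interpreter ->
        val input = Array(1) { Array(224) { Array(224) { FloatArray(3) } } }
        val output = Array(1) { FloatArray(1000) }   // assumes a 1000-class classifier
        interpreter.run(input, output)
        output[0]
    }
    nnApiDelegate.close()   // release the delegate after the interpreter is closed
    return result
}
```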
Manufacturing
The first-generation Google Tensor (G1) SoC was manufactured by Samsung Foundry on its 5 nm process node, incorporating design elements inspired by the Samsung Exynos 2100 but with customizations tailored by Google for Pixel devices.[13] Samsung Foundry continued production for subsequent generations: the Tensor G2 remained on the 5 nm node, while the G3 and G4 shifted to a 4 nm process to improve density and efficiency.[14][15][16] These Samsung-fabricated chips encountered manufacturing challenges, including lower yields on the 5 nm node that contributed to production constraints, as well as thermal throttling under sustained loads that affected device performance stability.[17][18][19]

In a significant shift, the Tensor G5 marked Google's transition to TSMC for fabrication on a 3 nm-class (N3E or N3P) process node. It is the company's first fully in-house designed SoC without Samsung's co-design involvement, enabling better power efficiency and reduced thermal concerns compared to prior generations.[20][21][22][23] Both Samsung's earlier nodes and TSMC's 3 nm process rely on extreme ultraviolet (EUV) lithography across multiple layers, allowing finer transistor patterning and higher integration densities. The foundry switch to TSMC also mitigated supply chain risks from Samsung's yield variability, supporting smoother Pixel production timelines for the 2025 lineup by leveraging TSMC's higher-volume capacity and reliability.[24][25]
Architecture
Processor Cores
The processor cores in Google Tensor SoCs adopt a multi-core Arm Cortex-based design (eight cores in most generations, nine in the Tensor G3), leveraging a big.LITTLE configuration to dynamically balance high-performance computing demands with power efficiency for mobile applications.[26] This heterogeneous setup combines high-performance cores for intensive tasks, mid-tier cores for general workloads, and efficiency cores for background operations, allowing the system to scale resources based on real-time needs.[27] The architecture relies on ARM's DynamIQ technology, which facilitates the mixing of diverse core types within a shared cluster, enabling advanced power gating and frequency scaling to minimize thermal throttling and battery drain.[28]

Over successive Tensor generations, the core lineup has progressed through ARM's Cortex-A series, starting with mid-tier cores like the Cortex-A76 and Cortex-A78 alongside Cortex-X1 prime cores in earlier iterations and advancing to more efficient variants such as the Cortex-A720 and Cortex-A725 in later designs. For example, the Tensor G5 features an unusual configuration of one Cortex-X4 prime core, five Cortex-A725 performance cores, and two Cortex-A520 efficiency cores.[29][30] Prime cores, typically the highest-performance units (drawn from the Cortex-X series), are optimized for sustained high-load scenarios like multitasking or computational bursts, while the overall layout retains enough cores to support Android's multi-threaded ecosystem.[16] This evolution prioritizes integration and software optimization over raw peak performance, aligning with Google's focus on AI-driven features and user-experience fluidity.[26]

Clock speeds for prime cores reach roughly 2.8-2.85 GHz in early generations and 3.78 GHz in the Tensor G5, with efficiency cores operating at lower frequencies to conserve energy, supported by per-core L2 caches and a shared L3 cache of approximately 4 MB for faster data retrieval and reduced latency.[31][29] The cores connect to the system bus via ARM's interconnect fabric, which ensures coherent memory access and efficient task migration between core clusters.[28] Additionally, the CPU subsystem integrates with dedicated memory controllers that support LPDDR5 RAM, delivering high bandwidth (up to 51.2 GB/s in dual-channel setups) essential for handling large datasets in imaging and machine learning pipelines.[32]
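As a quick sanity check on the bandwidth figure quoted above, peak LPDDR5 bandwidth follows directly from the transfer rate and the bus width. The sketch below assumes LPDDR5-6400 on a 64-bit-wide interface, which reproduces the 51.2 GB/s figure; sustained real-world bandwidth is lower.

```kotlin
// Peak theoretical LPDDR5 bandwidth = transfer rate (transfers/s) x bus width (bytes).
// Assumes LPDDR5-6400 on a 64-bit interface; actual sustained bandwidth is lower.
fun main() {
    val transfersPerSecond = 6_400_000_000.0   // 6400 MT/s
    val busWidthBytes = 64 / 8                 // 64-bit interface = 8 bytes per transfer
    val peakBytesPerSecond = transfersPerSecond * busWidthBytes
    println("Peak bandwidth: ${peakBytesPerSecond / 1e9} GB/s")  // prints 51.2 GB/s
}
```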
Graphics and AI Accelerators
The graphics processing unit (GPU) in Google Tensor SoCs up to the G4 is based on ARM's Mali architecture, while the G5 uses Imagination Technologies' PowerVR DXT-48-1536 GPU, enabling efficient rendering for mobile gaming and visual effects.[29] Early models such as the Tensor G1 integrate the Mali-G78 with 20 execution units and support the Vulkan API for high-performance graphics; later iterations, such as the Immortalis-G715 in the Tensor G3, add hardware ray tracing capabilities.[27][33][34] This configuration allows for smooth gaming at 1080p and higher resolutions, with optimizations for power efficiency in demanding titles.

Complementing the GPU, Tensor incorporates a dedicated Tensor Processing Unit (TPU), an edge-optimized variant of Google's AI accelerator that supports INT8 and FP16 precision for on-device machine learning inference. This TPU, evolved across generations, handles up to 12 TOPS in recent models and is tailored for compact neural networks like Gemini Nano, enabling multimodal AI processing without cloud dependency.[4][35] The architecture prioritizes low-latency operations for real-time tasks, distinguishing it from general-purpose GPUs by focusing on the matrix multiplications central to deep learning.

These accelerators power key AI features in Pixel devices, including hardware-accelerated computer vision for secure face unlock, which uses the TPU for rapid biometric matching, and natural language processing for on-device voice interactions. Photo editing tools like Magic Eraser leverage the TPU to intelligently remove or inpaint objects in images, applying generative models directly on the device for privacy-preserving enhancements.[36][37] The GPU and TPU integrate via a shared system memory architecture, allowing efficient data exchange with the CPU for low-latency inference and rendering pipelines. This unified design minimizes data transfer overhead, enabling hybrid workloads where AI-enhanced graphics, such as real-time style transfers in camera apps, benefit from direct memory access across components.[38]
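To illustrate what an INT8 throughput rating of this order means in practice, the sketch below converts a nominal 12 TOPS figure into an ideal-case inference time for a hypothetical model. The model size is an assumption chosen for illustration, and real latency is higher because memory bandwidth, operator coverage, and scheduling all reduce utilization.

```kotlin
// Hedged illustration: ideal-case inference latency implied by a 12 TOPS INT8 rating.
// The 4-billion-MAC model below is a hypothetical workload, not a measured Pixel model.
fun main() {
    val peakOpsPerSecond = 12.0e12              // 12 TOPS (INT8 operations per second)
    val macsPerInference = 4.0e9                // hypothetical: 4 billion multiply-accumulates
    val opsPerInference = macsPerInference * 2  // each MAC counts as one multiply + one add
    val idealLatencyMs = opsPerInference / peakOpsPerSecond * 1000.0
    println("Ideal-case latency: %.2f ms per inference".format(idealLatencyMs))  // ~0.67 ms
}
```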
Modem and Other Components
The Google Tensor system-on-chip (SoC) integrates a Samsung Exynos-derived 5G modem, starting with the Exynos 5123 in the first-generation Tensor for Pixel 6 devices, which supports both sub-6 GHz and mmWave bands with peak download speeds of up to 7.5 Gbps and upload speeds of 3 Gbps.[13] Subsequent generations evolved this component for improved efficiency: the Exynos 5300 in the Tensor G2 and G3 maintains sub-6 GHz and mmWave compatibility while improving power consumption and carrier aggregation for better real-world connectivity in the Pixel 7 and 8 series.[33] By the Tensor G4 in the Pixel 9 lineup, the Exynos 5400 modem achieves theoretical download speeds exceeding 10 Gbps in optimized conditions, though practical performance prioritizes thermal stability over maximum throughput.

Beyond the modem, the Tensor SoC incorporates an integrated image signal processor (ISP) tailored for advanced camera systems, supporting up to 50 MP sensors in early models like the Pixel 6's primary camera and scaling to multi-camera arrays with computational photography features such as real-time HDR+ processing. In later iterations, such as the Tensor G5 for the Pixel 10 series, the fully custom-designed ISP handles 10-bit video capture at 4K 120 FPS, motion deblur in low light, and up to 200 MP resolution for high-end sensors, optimizing noise reduction and dynamic range without relying on external processing.[6] The SoC also includes a display controller that drives QHD+ resolutions at high refresh rates (up to 120 Hz in Pixel 8 Pro implementations), ensuring smooth visuals with variable refresh rate support for power efficiency.[32] Audio components feature an on-chip audio processor (AoC) that supports high-resolution codecs like LDAC and aptX HD, facilitating spatial audio rendering and noise cancellation directly within the Tensor for Pixel devices.[39]

For storage and input/output, Tensor SoCs support UFS 3.1 in initial generations (Pixel 6 through 9), delivering sequential read speeds up to 2,100 MB/s and write speeds up to 1,200 MB/s, with inline hardware encryption via AES-XTS for secure data handling.[40] The Tensor G5 advances to UFS 4.0 compatibility in the Pixel 10, roughly doubling performance to 4,200 MB/s reads and 2,800 MB/s writes while incorporating zoned storage for AI workloads, though PCIe interfaces remain limited to internal high-speed links rather than external expansion.[41] Power management is handled through integrated ICs that leverage the SoC's process-node advancements, such as the 4 nm node in the Tensor G3 for 20% better efficiency over predecessors, enabling adaptive battery optimization that extends usage by predicting app behavior and throttling non-essential tasks.[6] In the 3 nm Tensor G5, these ICs achieve up to 30% power savings compared to 5 nm designs, supporting over 30 hours of mixed-use battery life in Pixel 10 models through dynamic voltage scaling and AI-driven idle states.[22]

Security features center on the Titan M series co-processor, with the RISC-V-based Titan M2 integrated alongside Tensor's dedicated security core, a subsystem that verifies boot integrity and isolates sensitive operations in a hardware enclave.[42] This combination performs end-to-end encryption for payments, biometric authentication, and firmware updates, earning Common Criteria EAL4+ certification for resistance to tampering, as implemented across Pixel 6 and later devices.[43] The Titan M2 handles secure key storage and rollback protection independently of the main CPU, ensuring that even if the OS is compromised, critical data like private keys remains protected within the chip's isolated memory.[36]
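Application developers reach this hardware-isolated key storage through the Android Keystore rather than by addressing the Titan M2 directly. Below is a minimal sketch, assuming API level 28 or higher, of requesting a StrongBox-backed AES key, which on Tensor-based Pixels is intended to be held in the tamper-resistant security chip; the key alias is an illustrative placeholder, and the code falls back to the regular hardware-backed keystore if StrongBox is unavailable.

```kotlin
import android.security.keystore.KeyGenParameterSpec
import android.security.keystore.KeyProperties
import android.security.keystore.StrongBoxUnavailableException
import javax.crypto.KeyGenerator
import javax.crypto.SecretKey

// Generates an AES-256 key inside the device's StrongBox security chip when available.
// "example_strongbox_key" is a hypothetical alias chosen for this sketch.
fun generateStrongBoxKey(): SecretKey {
    val keyGenerator = KeyGenerator.getInstance(
        KeyProperties.KEY_ALGORITHM_AES, "AndroidKeyStore"
    )

    fun spec(useStrongBox: Boolean) = KeyGenParameterSpec.Builder(
        "example_strongbox_key",
        KeyProperties.PURPOSE_ENCRYPT or KeyProperties.PURPOSE_DECRYPT
    )
        .setBlockModes(KeyProperties.BLOCK_MODE_GCM)
        .setEncryptionPaddings(KeyProperties.ENCRYPTION_PADDING_NONE)
        .setKeySize(256)
        .setIsStrongBoxBacked(useStrongBox)   // request the discrete secure element
        .build()

    return try {
        keyGenerator.init(spec(useStrongBox = true))
        keyGenerator.generateKey()
    } catch (e: StrongBoxUnavailableException) {
        // Device has no StrongBox implementation; fall back to the TEE-backed keystore.
        keyGenerator.init(spec(useStrongBox = false))
        keyGenerator.generateKey()
    }
}
```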
Models
Tensor G1
The Google Tensor G1 (also known as GS101) was announced on October 19, 2021, at the Pixel 6 series launch event, marking Google's entry into custom system-on-chip (SoC) design for its smartphones. It powers the Pixel 6, Pixel 6 Pro, and later the mid-range Pixel 6a released in July 2022. Fabricated on Samsung Foundry's 5 nm process node, the Tensor G1 represented a shift from third-party processors like Qualcomm's Snapdragon, enabling tighter integration of Google's software and hardware for improved AI and imaging capabilities.[1][44][45]

The SoC features an octa-core CPU configuration with two high-performance Arm Cortex-X1 cores clocked at 2.80 GHz, two mid-range Arm Cortex-A76 cores at 2.25 GHz, and four efficiency-focused Arm Cortex-A55 cores at 1.80 GHz, providing a balance of power and energy efficiency for mobile workloads. Graphics processing is handled by the Arm Mali-G78 MP20 GPU, supporting advanced rendering for gaming and visual effects. A custom Google Tensor Processing Unit (TPU) accelerates on-device machine learning tasks, while the integrated Samsung Exynos 5123 modem delivers 5G connectivity, including sub-6 GHz and mmWave support; the devices also offer Wi-Fi 6 and Bluetooth 5.2 connectivity.[27][46][15]

As the inaugural custom silicon for Pixel devices, the Tensor G1 introduced key innovations centered on AI-driven experiences, such as Real-Time HDR+, which applies computational photography enhancements during video recording at up to 4K 60 fps for more vibrant colors and dynamic range without post-processing delays. It also enabled on-device Live Translate, allowing real-time translation of conversations in apps like Messages and WhatsApp while using less power than previous cloud-dependent methods, enhancing privacy and speed for multilingual users. These features underscored Google's focus on integrating its AI expertise directly into hardware for seamless, always-on capabilities.[1]
Tensor G2
The Google Tensor G2 (GS201) is the second-generation system-on-chip (SoC) developed by Google in collaboration with Samsung, debuting in October 2022 with the Pixel 7 and Pixel 7 Pro smartphones. Announced on October 6 and released on October 13, it represents an iterative refinement of the original Tensor, retaining the Samsung 5 nm fabrication process while incorporating optimizations for better thermal management and power efficiency. This design choice allowed Google to focus on software-hardware integration rather than a node shrink, enabling targeted improvements in AI processing without major redesign costs.[47][48]

At its core, the Tensor G2 features an octa-core CPU configuration comprising two Arm Cortex-X1 prime cores clocked at 2.85 GHz for high-performance tasks, two Arm Cortex-A78 performance cores at 2.35 GHz, and four Arm Cortex-A55 efficiency cores at 1.80 GHz. The graphics processing unit (GPU) is upgraded to the Arm Mali-G710 MP7, delivering approximately 20% better performance and efficiency than the Mali-G78 in the prior generation and supporting advanced rendering for gaming and visual effects. The Tensor Processing Unit (TPU) is refined for machine learning workloads, rated at 12 TOPS (trillion operations per second) in INT8 precision, with architectural improvements that speed up relevant AI tasks by up to 60% while reducing their power draw by about 20%. These enhancements contribute to overall power consumption reductions of 15-20% over the Tensor G1 in mixed workloads, aiding battery life in AI-intensive scenarios.[49][50][51]

A hallmark of the Tensor G2 is its emphasis on on-device AI innovations, powering features like Photo Unblur, which leverages computational photography to restore sharpness in out-of-focus images captured by the Pixel's camera system, and an expanded Live Translate capability that supports real-time voice and text translation across more than 50 languages in additional apps such as WhatsApp and Instagram. These capabilities stem from the TPU's optimized matrix multiplication and neural-network acceleration, which lets such AI experiences run without cloud dependency. The SoC also includes an upgraded image signal processor (ISP) for faster photo and video processing, further streamlining user experiences in the Pixel ecosystem.[3][47]

The Tensor G2 is deployed in several Google devices, including the Pixel 7 and Pixel 7 Pro (both launched in 2022), the mid-range Pixel 7a (2023), and the Pixel Tablet (2023). Its balanced architecture prioritizes AI and camera optimizations over raw computational speed, aligning with Google's strategy for intelligent mobile computing.[47][52]
Tensor G3
The Google Tensor G3 is a system on a chip (SoC) designed by Google and manufactured by Samsung using a 4 nm process node. It was announced on October 4, 2023, and powers the Pixel 8, Pixel 8 Pro, and later the Pixel 8a.[4][53][54]

The Tensor G3 features a nine-core CPU configuration consisting of one Arm Cortex-X3 prime core clocked at 2.91 GHz for high-performance tasks, four Cortex-A715 performance cores at 2.37 GHz, and four Cortex-A510 efficiency cores at 1.7 GHz. Graphics processing is handled by an Arm Immortalis-G715 MP7 GPU, which supports advanced rendering and machine learning workloads. The SoC also includes a dual-core neural processing unit (NPU) based on Google's custom Tensor Processing Unit (TPU) architecture, optimized for on-device AI inference and enabling integration with models like Gemini Nano for multimodal tasks such as text summarization and image understanding.[55][53][4]

Key innovations in the Tensor G3 center on generative AI capabilities. The chip enables features like Video Boost, which pairs on-device capture with cloud-assisted AI processing to enhance video quality and stabilization on the Pixel 8 Pro, and introduces improved generative editing tools such as the expanded Magic Editor for photo manipulation and Audio Magic Eraser for cleaning up sound in videos, all powered by the enhanced TPU to prioritize efficiency and privacy through local computation. These advancements focus on integrating generative AI directly into everyday device interactions, setting the stage for more sophisticated on-device intelligence in the Pixel ecosystem.[4][36]
Tensor G4
The Google Tensor G4 is the fourth-generation system-on-chip (SoC) developed by Google in collaboration with Samsung, debuting in August 2024 alongside the Pixel 9 series smartphones.[56] It powers the Pixel 9, Pixel 9 Pro, Pixel 9 Pro XL, and Pixel 9 Pro Fold, and is the final Tensor iteration manufactured on Samsung's 4 nm process node, with specific optimizations for power efficiency.[16] The chip emphasizes on-device AI processing while maintaining Google's focus on custom silicon tailored for Pixel devices.

The Tensor G4 features an octa-core CPU configuration consisting of one Arm Cortex-X4 prime core clocked at 3.1 GHz for high-performance tasks, three Cortex-A720 performance cores at 2.6 GHz, and four Cortex-A520 efficiency cores at 1.92 GHz.[57] The GPU is an Arm Mali-G715, continuing the same graphics lineage as the Tensor G3 for improved rendering in AI-driven applications.[58] Additionally, the integrated NPU supports multimodal AI capabilities, enabling advanced on-device processing for features like Gemini Nano models that handle text, image, and audio inputs simultaneously.[16]

Key innovations in the Tensor G4 include enhanced satellite connectivity via the integrated Exynos 5400 modem, allowing Emergency SOS messaging in areas without cellular or Wi-Fi coverage.[59] It also powers an advanced version of Audio Magic Eraser, which uses AI to isolate and remove unwanted sounds from videos, such as crowd noise or wind, for cleaner audio output.[60] Furthermore, the chip incorporates improved thermal management through optimized packaging and process refinements, resulting in better sustained performance and less throttling during prolonged use compared to prior generations.[61]
Tensor G5
The Google Tensor G5 is the fifth-generation system-on-chip (SoC) developed by Google, marking the company's first fully custom-designed processor without reliance on third-party architectures like Samsung's Exynos base. Announced on August 20, 2025, alongside the Pixel 10 series, it powers the Pixel 10, Pixel 10 Pro, and Pixel 10 Pro XL smartphones.[6][21][62] Manufactured on TSMC's 3 nm process node, it marks a shift from previous Samsung Foundry production that enables improved power efficiency and performance scaling.[29]

The Tensor G5 features an eight-core CPU configuration consisting of one high-performance Arm Cortex-X4 prime core clocked at 3.78 GHz, five Arm Cortex-A725 performance cores at 3.05 GHz, and two Arm Cortex-A520 efficiency cores at 2.25 GHz. This setup delivers an average 34% CPU performance improvement over the Tensor G4, enhancing multitasking and responsiveness in everyday tasks. The GPU is the Imagination Technologies PowerVR IMG DXT-48-1536, optimized for mobile gaming and graphics rendering, though it omits hardware-accelerated ray tracing to prioritize efficiency.[29] Complementing these, the integrated neural processing unit (NPU), referred to as the fourth-generation Tensor Processing Unit (TPU), provides up to 60% greater computational power than its predecessor, enabling faster inference for on-device AI models.[63][6][64]

A key innovation in the Tensor G5 is its deep integration with Google's Gemini Nano large language model, allowing advanced on-device generative AI features such as real-time summarization and proactive assistance without cloud dependency. The NPU enhancement supports over 20 on-device AI experiences, including enhanced photo editing and voice processing, while maintaining privacy through local computation. Overall, these advancements reflect Google's focus on AI-centric hardware tailored for the Pixel ecosystem.[6][63]
Future Models
Google's Tensor G6 is anticipated to power the Pixel 11 series, expected to launch in 2026.[65] This next-generation SoC represents a continuation of Google's shift toward greater customization, building on the Tensor G5's foundation by emphasizing power efficiency over raw performance gains.[66] The Tensor G6 is reportedly set to be fabricated on TSMC's 2 nm process node, a significant advancement from the 3 nm node used in prior models, enabling improved thermal management and energy efficiency.[67] Additionally, Google plans to integrate a MediaTek M90 5G modem, supporting up to 12 Gbps download speeds and dual-active 5G connectivity, marking a departure from previous Samsung Exynos modems to enhance connectivity reliability.[68]

A variant of the Tensor G6 is expected to extend to non-smartphone devices, such as the Pixel Tablet 3 and Pixel 11a, though reports as of late 2024 suggested potential cancellation of the Pixel Tablet 3.[65][69] Longer term, Google has reportedly secured TSMC production capacity for Tensor chips through at least the Pixel 14, projected for 2029, underscoring a commitment to in-house design for sustained AI advancements.[70]

Developing custom SoCs like the Tensor series involves substantial upfront costs, estimated in the billions of dollars for design and tooling, which Google must balance against the need for seamless integration with the Android ecosystem and third-party components.[71] This approach allows tailored optimizations for Google services but requires ongoing investment to match the efficiency of competitors' off-the-shelf chips without compromising compatibility.[72]
Reception and Impact
Performance Reviews
The Google Tensor series has demonstrated steady performance evolution across its generations, with each iteration focusing on balancing computational power, efficiency, and specialized AI capabilities tailored for Pixel devices. Early models like the Tensor G1 prioritized integration of machine learning accelerators over raw speed, achieving moderate CPU scores in standard benchmarks while laying the foundation for on-device AI processing. Subsequent generations built on this by incorporating process-node improvements and architectural tweaks, resulting in measurable gains in both single- and multi-core tasks, though consistently trailing flagship competitors in peak throughput. The table below summarizes representative Geekbench 6 scores by generation; the sketch after the table works through the implied generational gains.

| Generation | Geekbench 6 Single-Core | Geekbench 6 Multi-Core | Source |
|---|---|---|---|
| Tensor G1 | 1317 | 3208 | nanoreview.net/en/soc/google-tensor |
| Tensor G2 | 1439 | 3765 | nanoreview.net/en/soc/google-tensor-g2 |
| Tensor G3 | 1771 | 4429 | beebom.com/tensor-g4-vs-tensor-g3 |
| Tensor G4 | 1897 | 4655 | browser.geekbench.com/android_devices/google-pixel-9-pro |
| Tensor G5 | 2285 | 6191 | gadgets.beebom.com/guides/google-tensor-g5-vs-apple-a18-pro-benchmark-specs |
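As a worked example of the generation-over-generation uplift implied by the table, the sketch below derives percentage improvements directly from the multi-core scores listed above; the scores are copied from the table, and the calculation is simple arithmetic rather than an additional benchmark run.

```kotlin
// Computes generation-over-generation uplift from the Geekbench 6 multi-core scores above.
fun main() {
    val multiCoreScores = linkedMapOf(
        "Tensor G1" to 3208,
        "Tensor G2" to 3765,
        "Tensor G3" to 4429,
        "Tensor G4" to 4655,
        "Tensor G5" to 6191,
    )
    multiCoreScores.entries.zipWithNext { previous, current ->
        val gainPercent = 100.0 * (current.value - previous.value) / previous.value
        println("%s -> %s: %+.1f%% multi-core".format(previous.key, current.key, gainPercent))
    }
    // Expected output: roughly +17.4%, +17.6%, +5.1%, and +33.0% for successive generations.
}
```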