
Draw distance

Draw distance, also known as render distance or view distance, is a key parameter in computer graphics that specifies the maximum distance from the viewpoint (typically a camera) at which objects and geometry in a scene are rendered by the graphics engine. It is primarily determined by the far clipping plane of the camera's view frustum—a truncated pyramidal volume defining the visible region—beyond which objects are culled and not processed for drawing, conserving computational resources. This mechanism ensures that only scene elements within the defined range contribute to the final image, forming the basis of visibility culling in rendering pipelines. In video games and simulations, draw distance plays a crucial role in balancing visual fidelity with hardware performance, particularly in expansive environments like open-world titles. A longer draw distance enhances immersion by allowing players to perceive greater environmental scale and detail at a distance, but it substantially increases the rendering workload, as more polygons, textures, and effects must be processed by the CPU and GPU. Conversely, shortening it improves frame rates and reduces memory usage but can lead to abrupt "pop-in" of objects or a truncated sense of space. To mitigate these trade-offs, developers employ complementary techniques such as level-of-detail (LOD) systems, which dynamically swap high-resolution models for simpler, low-poly versions as objects recede, and atmospheric effects like distance fog to obscure the draw distance boundary without harsh cutoffs. Occlusion culling further optimizes rendering by excluding hidden objects, while on mobile and other constrained platforms, "faked" extensions using silhouettes or pre-baked distant scenery extend perceived depth with minimal overhead. These approaches have evolved with hardware advancements, enabling progressively larger draw distances in modern engines like Unity and Unreal, though draw distance remains a core optimization challenge in interactive 3D applications.

Fundamentals

Definition and Purpose

Draw distance in computer graphics is defined as the maximum distance from the viewpoint at which objects in a three-dimensional scene are drawn by the rendering engine; beyond it, they are typically not rendered, in order to maintain performance. This threshold serves primarily to balance visual fidelity and computational efficiency in real-time rendering applications, such as video games and simulations, by restricting the number of polygons, textures, and effects processed, thereby preventing hardware overload. In basic operation, objects located within the draw distance are fully rendered with their complete models, textures, and shaders to provide detailed visuals close to the viewer. Those exceeding this distance are culled from the draw calls—effectively excluded from the rendering process—or substituted with simpler placeholders, such as skyboxes or low-detail imposters, to suggest distant environments without taxing resources. This mechanism operates in the early stages of the graphics pipeline, where visibility determination occurs, ensuring only relevant geometry advances to subsequent rasterization and shading steps (as explored further in the section on its role within the graphics pipeline). The concept of draw distance originated in early flight simulators, where limited computational power necessitated restricting rendered terrain and objects to achieve interactive frame rates; for instance, Bruce Artwick's 1976 master's thesis implemented a flight display running at 9 frames per second on a PDP-11 minicomputer. The term itself gained popularity in the 1990s amid the boom in consumer 3D gaming, as early 3D titles balanced visual detail against hardware constraints.
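The per-object decision described above can be sketched as a simple radial distance test. The following is a minimal illustration, not any particular engine's API; the scene layout, function names, and 500-unit threshold are assumptions chosen for the example.

```python
import math

def should_render(camera_pos, object_pos, draw_distance):
    """True when the object lies within the draw distance of the camera."""
    dx = object_pos[0] - camera_pos[0]
    dy = object_pos[1] - camera_pos[1]
    dz = object_pos[2] - camera_pos[2]
    return math.sqrt(dx * dx + dy * dy + dz * dz) <= draw_distance

# Objects beyond the threshold are skipped or swapped for a placeholder.
scene = {"tree": (10.0, 0.0, 5.0), "mountain": (900.0, 0.0, 1200.0)}
camera = (0.0, 0.0, 0.0)
visible = [name for name, pos in scene.items()
           if should_render(camera, pos, 500.0)]
print(visible)  # ['tree']
```

In a real engine this test would run against bounding volumes rather than single points, and culled objects might instead be replaced by a low-detail stand-in.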

Role in Graphics Pipeline

In the graphics rendering pipeline, draw distance is enforced during the view frustum culling and occlusion culling stages, which follow vertex processing—encompassing the model, view, and projection transformations—but precede rasterization and fragment shading. Vertex processing transforms object-space coordinates into clip space using the model-view-projection (MVP) matrices, at which point frustum culling discards geometry outside the defined view volume, enforcing the draw distance limit. Occlusion culling further refines this by eliminating objects hidden behind closer geometry, using techniques such as occlusion queries to test visibility without full rendering. This placement ensures invisible geometry is filtered early, preventing unnecessary downstream computation. Draw distance works in tandem with the projection matrix to delineate the view volume, where parameters such as zNear and zFar establish the near and far clipping planes, directly controlling the maximum rendering distance from the camera. In APIs such as OpenGL and Direct3D, this culling process filters geometry batches on the CPU side before issuing draw calls, reducing the volume of data transferred to the GPU and optimizing batch submissions. Thresholds for draw distance are evaluated with distance metrics from the camera position, such as the Euclidean distance to the center of an object's bounding volume. By curtailing the scope of processed geometry, draw distance plays a key role in performance optimization, minimizing GPU workload through fewer vertex shader invocations, reduced texture sampling, and lower memory bandwidth usage. For example, in a scene populated with thousands of objects, frustum culling tied to draw distance can eliminate approximately 85% of them from rendering, substantially decreasing computational overhead and enabling higher frame rates in resource-constrained environments. This efficiency is particularly vital for maintaining interactive performance in real-time applications.
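To illustrate how zFar bounds the view volume, the sketch below builds an OpenGL-style perspective matrix and applies the standard far-plane clip test (a point is clipped when its clip-space z exceeds its clip-space w). The field of view, aspect ratio, and distances are arbitrary example values.

```python
import math

def perspective(fov_y_deg, aspect, z_near, z_far):
    """OpenGL-style perspective matrix; z_far acts as the draw-distance limit."""
    f = 1.0 / math.tan(math.radians(fov_y_deg) / 2.0)
    return [
        [f / aspect, 0.0, 0.0, 0.0],
        [0.0, f, 0.0, 0.0],
        [0.0, 0.0, (z_far + z_near) / (z_near - z_far),
         (2.0 * z_far * z_near) / (z_near - z_far)],
        [0.0, 0.0, -1.0, 0.0],
    ]

def beyond_far_plane(m, point):
    """A point is clipped by the far plane when z_clip > w_clip."""
    x, y, z = point
    z_clip = m[2][0] * x + m[2][1] * y + m[2][2] * z + m[2][3]
    w_clip = m[3][0] * x + m[3][1] * y + m[3][2] * z + m[3][3]
    return z_clip > w_clip

m = perspective(60.0, 16 / 9, 0.1, 1000.0)  # zFar = 1000 units
# Eye space looks down -z: a point 1500 units away is past the far plane.
print(beyond_far_plane(m, (0.0, 0.0, -1500.0)))  # True
print(beyond_far_plane(m, (0.0, 0.0, -500.0)))   # False
```

Engines typically perform the equivalent test per bounding volume on the CPU, so geometry beyond zFar never generates draw calls at all.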

Implementation Techniques

Culling Methods

Culling methods are essential for enforcing draw distance by excluding objects that are either outside the visible view or occluded, thereby reducing the computational load on the GPU. These methods operate primarily by testing object bounding volumes against visibility criteria, ensuring that only potentially visible geometry reaches the GPU for rasterization. Frustum culling and distance-based culling form the core approaches, while occlusion culling provides complementary tests that approximate draw distance effects in complex scenes. Frustum culling determines visibility by testing an object's bounding volume—such as an axis-aligned bounding box (AABB) or a sphere—against the six planes of the view frustum (near, far, left, right, top, bottom). For an AABB, the test computes the signed distances from the box's vertices to each plane; if even the most negative vertex lies on the positive (outside) side of any plane, the object is culled, while partial intersections allow for potential rendering. The equation of a frustum plane \pi_{VF,i} is \mathbf{n}_i \cdot \mathbf{x} + d_i = 0, where the test for the negative vertex \mathbf{v}_n is a = \mathbf{v}_n \cdot \mathbf{n}_i + d_i > 0 (outside) and for the positive vertex \mathbf{v}_p, b = \mathbf{v}_p \cdot \mathbf{n}_i + d_i > 0 (intersecting). For spheres, the center's signed distance c = \mathbf{c}_S \cdot \mathbf{n}_i + d_i is compared to the radius r_S: assuming positive c indicates the outside half-space, if c > r_S, the sphere is fully outside; if c < -r_S, fully inside; otherwise, it intersects the plane. An object is culled if it is fully outside any frustum plane. This method efficiently avoids rendering objects beyond the frustum's far plane, directly supporting draw distance limits. Distance-based culling simplifies enforcement of draw distance by applying a radial threshold from the camera position, culling objects whose distances exceed a predefined maximum.
The core logic uses the Euclidean distance formula: cull if \sqrt{(x_c - x_o)^2 + (y_c - y_o)^2 + (z_c - z_o)^2} > \text{threshold}, where (x_c, y_c, z_c) is the camera position and (x_o, y_o, z_o) is the object position. This approach is computationally lightweight and often implemented via per-actor settings, such as size-distance pairs that adjust the threshold based on object scale to prevent premature culling of larger elements. In engines like Unreal, cull distance volumes extend this by defining spatial regions where thresholds vary, optimizing open-world scenarios without exhaustive per-object computation. Occlusion culling integrates with draw distance by approximating visibility through depth-based or structural tests, reducing over-rendering of distant but potentially visible objects. Hierarchical Z-buffering (HZB) builds a pyramid of minimum depth values from a pre-pass depth buffer, enabling conservative tests in which an object's screen-space bounding box is queried against coarser HZB levels; if the object's minimum depth exceeds the stored HZB value, it is culled without full rasterization. Portal methods, suited to architectural scenes, divide the scene into cells connected by portals (e.g., doorways), recursively culling cells whose projected portal bounding boxes do not intersect the current view, effectively limiting draw distance propagation through visibility graphs. These techniques approximate draw distance by focusing on occluder hierarchies rather than exhaustive distance checks. Implementations typically involve a CPU pre-pass for coarse frustum and distance tests using bounding volume hierarchies, followed by GPU-based refinement for occlusion via compute shaders or hardware queries. CPU execution allows precise scene management but risks bottlenecks in large worlds, while GPU offloading accelerates parallel tests at the cost of data transfer overhead and potential false positives in dynamic scenes, where over-culling can occur due to conservative approximations.
Trade-offs include balancing accuracy against cost, with hierarchical approaches mitigating false positives through multi-level testing, though they increase preprocessing costs in complex environments.
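The sphere-versus-plane test and the radial distance test above can be combined into a minimal culling routine. The sketch below uses a toy frustum consisting of only the far plane; a real frustum would supply all six planes. The plane convention (positive half-space is outside) follows the text, and the concrete coordinates are illustrative assumptions.

```python
import math

def sphere_outside_plane(center, normal, d, radius):
    """Plane test from the text: c = c_S · n_i + d_i; fully outside when c > r_S
    (positive half-space taken as outside)."""
    c = sum(cc * nn for cc, nn in zip(center, normal)) + d
    return c > radius

def frustum_cull_sphere(planes, center, radius):
    """Cull if the sphere is fully outside any of the supplied frustum planes."""
    return any(sphere_outside_plane(center, n, d, radius) for n, d in planes)

def distance_cull(camera, center, max_dist):
    """Radial draw-distance test: cull when Euclidean distance exceeds the threshold."""
    return math.dist(camera, center) > max_dist

# Toy frustum with only a far plane at z = -100 (camera at origin, looking down -z):
# the outside half-space is z < -100, so n = (0, 0, -1), d = -100 gives c = -z - 100.
far_plane = [((0.0, 0.0, -1.0), -100.0)]
print(frustum_cull_sphere(far_plane, (0.0, 0.0, -150.0), 10.0))   # True: past far plane
print(frustum_cull_sphere(far_plane, (0.0, 0.0, -50.0), 10.0))    # False: within range
print(distance_cull((0.0, 0.0, 0.0), (0.0, 0.0, -150.0), 100.0))  # True
```

The AABB variant replaces the sphere test with the negative/positive-vertex checks described above, but the per-plane loop structure is the same.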

Level of Detail Systems

Level of Detail (LOD) systems balance rendering performance and visual fidelity in real-time graphics by dynamically adjusting the complexity of models based on their distance from the viewer. These systems employ multiple precomputed versions of each model, ranging from high-polygon meshes for nearby objects to simplified low-polygon approximations for distant ones, thereby reducing the computational load without introducing abrupt cutoffs. The appropriate LOD version is commonly selected by a distance-based criterion, expressed as \text{LOD\_level} = \lfloor \frac{\text{distance}}{\text{LOD\_switch\_distance}} \rfloor, where \text{distance} is measured from the camera to the object's center and \text{LOD\_switch\_distance} is a tunable parameter defining the interval between level transitions. Discrete LOD approaches rely on predefined distance thresholds to swap entire model representations—for example, LOD0 (full detail) for distances under 50 meters, LOD1 (reduced detail) for 50–200 meters, and progressively coarser levels beyond that—which efficiently manages polygon counts but can cause noticeable "popping" artifacts during switches. In contrast, continuous LOD techniques mitigate these artifacts through methods like geomorphing, which interpolates vertex positions between adjacent detail levels to create smooth transitions, often parameterized by time or view-dependent weights to align with frame rates. This ensures seamless detail reduction as objects recede, enhancing perceptual continuity in real-time applications. Mipmapping extends LOD principles to textures, automatically selecting from a pyramid of prefiltered lower-resolution images based on the texture's projected screen-space size, which inversely scales with distance, preventing moiré and shimmering on distant surfaces.
The mip level is determined by the formula \text{mip\_level} = \log_2(\text{distance\_factor}), where \text{distance\_factor} approximates the ratio of the texture's native resolution to its coverage on screen (texels per pixel), enabling efficient hardware-accelerated sampling. LOD systems often complement culling methods by reducing the detail of drawn objects rather than excluding them outright. These techniques are staples in contemporary game engines: Unreal Engine provides automated LOD generation tools that simplify meshes via edge-collapse algorithms tied to screen-size thresholds, and Unity offers LOD Groups for manually assigning model variants to distance ranges. LOD featured prominently in early real-time rendering milestones, such as 1999-era engines that applied progressive LOD to models and curved surfaces to sustain high frame rates in complex scenes.
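The two selection formulas above translate directly into code. This is a minimal sketch: the clamping to a maximum level and the concrete switch interval are illustrative assumptions, not any specific engine's behavior.

```python
import math

def lod_level(distance, lod_switch_distance, max_level):
    """Discrete LOD selection from the text: floor(distance / switch_distance),
    clamped to the coarsest available model."""
    return min(int(distance // lod_switch_distance), max_level)

def mip_level(distance_factor):
    """Mip selection from the text: log2 of the texels-per-pixel ratio,
    clamped at the full-resolution level 0."""
    return max(0, int(math.log2(distance_factor)))

# With a 50 m switch interval and four models (LOD0 = full detail .. LOD3):
for d in (25.0, 75.0, 180.0, 900.0):
    print(d, "->", "LOD%d" % lod_level(d, 50.0, 3))
# 25.0 -> LOD0, 75.0 -> LOD1, 180.0 -> LOD3, 900.0 -> LOD3

print(mip_level(8.0))  # 3: a texture covering 1/8 of its native size uses mip 3
```

Note that the uniform-interval formula gives evenly spaced transitions; engines that use hand-tuned thresholds (such as the 50 m / 200 m example above) would replace the division with a lookup table.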

Historical Challenges

Issues in Early Computer Graphics

Early computer graphics hardware imposed constraints that severely limited draw distance. The 3dfx Voodoo Graphics chipset, released in 1996, exemplified these limits with only 4 MB of VRAM shared between the frame buffer and texture memory, restricting the storage and processing of complex scenes. This scarcity forced developers to adopt short draw distances to manage memory usage and maintain playable frame rates, often relying on fog effects to obscure the limited visibility range beyond a few hundred meters. Additionally, the Voodoo's pixel fill rate of 50 million pixels per second and triangle throughput of approximately 1 million triangles per second (for textured and fogged rendering) created bottlenecks when rendering distant objects, as even small, far-off geometry required substantial processing relative to the era's capabilities. Software limitations compounded these hardware challenges, particularly in the fixed-function pipeline of early OpenGL versions (1.0–1.x), which offered basic back-face culling and frustum clipping but lacked built-in occlusion culling mechanisms. The result was overdraw, where the hardware processed pixels from hidden or distant objects unnecessarily, further degrading performance and necessitating conservative draw distances. Flight simulators of the era employed terrain paging to dynamically load landscape data based on proximity, yet still resorted to horizon fog as a visual trick to conceal abrupt rendering cutoffs. Fill rate and geometry bottlenecks were particularly acute in custom arcade hardware of the early 1990s. Games like Virtua Fighter (1993), running on Sega's Model 1 architecture, used fixed draw distances confined to flat, square arenas with 2D backgrounds to avoid overwhelming the system's throughput of around 180,000 polygons per second.
Distant elements, if rendered, would have exacerbated fill demands on the hardware's rasterizer, leading to instability; developers therefore prioritized near-field detail over expansive views. A pivotal advance came with NVIDIA's GeForce 256 in 1999, which integrated dedicated transform and lighting (T&L) engines directly into the GPU. This offloaded vertex transformations and lighting calculations from the CPU, enabling more geometry to be prepared and submitted per frame without proportional performance loss. As a result, optimized scenes could support longer draw distances while sustaining higher frame rates, reducing the reliance on fogging techniques.

Problems in Older Video Games

In older video games from the 1990s and early 2000s, limited draw distances imposed by hardware constraints frequently caused pop-in and pop-out effects, where objects, terrain, and characters suddenly appeared or vanished as the player moved. This abrupt materialization disrupted visual continuity, particularly in open areas. For instance, in The Legend of Zelda: Ocarina of Time (1998), enemies and environmental elements would pop into view unexpectedly, contributing to noticeable visual jarring during exploration. To conceal short draw distances, developers often employed artificial distance fog as a workaround, creating atmospheric effects that gradually obscured distant geometry without rendering it. Distance fog was common in early 3D games to mask rendering limitations while enhancing atmosphere, as seen in titles like GoldenEye 007 (1997). Similarly, Grand Theft Auto III (2001) used such techniques, though cityscapes and distant buildings still suffered from severe distance clipping, with structures flickering or vanishing abruptly during high-speed driving and undermining the urban immersion. These flaws often broke player immersion, as visible geometry clipping exposed rendering boundaries, prompting community responses such as PC mods that extend draw distances.

Modern Solutions and Alternatives

Dynamic Adjustment Techniques

Dynamic adjustment techniques vary draw distance in real time to maintain consistent performance in response to fluctuating resources and scene complexity. These methods scale the rendering threshold based on metrics such as frame rate or processing load, ensuring that distant objects are culled when necessary to prioritize frame stability. For instance, adaptive draw distance systems monitor the current frame rate and reduce the rendering radius when it dips below a target, lowering visibility from its base setting to preserve smoothness during intensive scenes. One common approach is frame-time budgeting, where the engine allocates a fixed time budget per frame and prioritizes nearer objects within that limit, effectively shortening draw distance for farther elements under load. This ensures efficient resource use by dynamically culling distant objects when the budget is exceeded, allowing games to handle variable scene demands without stutters. In Crysis (2007), draw distance is adjusted through graphics sliders linked to overall quality presets, which balance GPU utilization by scaling vegetation and object visibility ranges to fit hardware capabilities. Multi-threaded implementations further enhance dynamic adjustment by distributing visibility tasks across processor cores, enabling variable draw distances in resource-constrained environments. The CPU performs preliminary visibility and culling work in parallel threads while the GPU handles rendering, allowing scalable processing of large worlds without bottlenecking the main thread. This approach was popularized on consoles like the PlayStation 3, where the Cell processor's Synergistic Processing Units (SPUs) performed parallel culling in titles such as Battlefield: Bad Company and Guerrilla Games' Killzone series, supporting dynamic visibility in expansive, deformable environments by efficiently rejecting occluded distant objects across multiple threads.
User-configurable options provide manual control over draw distance through in-game settings menus, often featuring sliders and hardware presets (low, medium, high) that let players tune visibility based on their system's capabilities. These interfaces allow users to adjust thresholds directly for elements such as vegetation or distant objects, with presets automatically scaling draw distance to match detected GPU and CPU capabilities, promoting accessibility across diverse hardware.
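An adaptive draw distance controller of the kind described above can be sketched as a simple feedback loop on frame time. The step size, bounds, and headroom factor below are hypothetical tuning values, not taken from any particular engine.

```python
def adapt_draw_distance(current, frame_ms, budget_ms,
                        min_dist=100.0, max_dist=1000.0, step=0.05):
    """Hypothetical feedback controller: shrink the draw distance when a frame
    exceeds its time budget, and grow it back when there is clear headroom."""
    if frame_ms > budget_ms:
        current *= (1.0 - step)   # over budget: pull the far limit in
    elif frame_ms < 0.9 * budget_ms:
        current *= (1.0 + step)   # spare headroom: push it back out
    return max(min_dist, min(max_dist, current))

# Simulated frame times against a 16.7 ms (60 FPS) budget: three slow frames
# shrink the radius, one fast frame lets it recover slightly.
dist = 800.0
for frame_ms in (20.0, 22.0, 25.0, 14.0):
    dist = adapt_draw_distance(dist, frame_ms, 16.7)
print(round(dist, 1))
```

Shipping implementations typically smooth the frame-time signal over a window and apply hysteresis so the draw distance does not oscillate visibly between frames.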

Advanced Rendering Approaches

Ray tracing and path tracing techniques enable effectively infinite draw distances for lighting effects by propagating rays across the entire scene without relying on precomputed probes or limited geometry rendering. In hybrid rendering pipelines, primary visibility is determined through rasterization with traditional distance-based culling to maintain performance, while secondary rays handle distant reflections, shadows, and indirect lighting. For instance, NVIDIA's RTX implementation in Cyberpunk 2077 (2020) uses ray tracing for pixel-perfect, unlimited-range reflections, simulating the contributions of distant neon signs and billboards to scene lighting without loading full-detail geometry at those ranges. Unreal Engine 5's Nanite system introduces virtualized geometry processing, clustering billions of micropolygons into a directed acyclic graph (DAG) for on-demand streaming and rendering, which bypasses conventional level-of-detail (LOD) hierarchies and supports vast open-world scales. This approach renders detail at pixel scale regardless of source mesh complexity, with costs scaling with screen resolution rather than scene size, enabling draw distances spanning several kilometers in demonstrations like the Valley of the Ancient project. By asynchronously streaming clusters and using a software rasterizer optimized for tiny triangles, Nanite eliminates the pop-in artifacts traditionally associated with LOD transitions, as visible geometry is dynamically loaded based on screen-space needs. Voxel-based rendering leverages sparse voxel octrees (SVOs) to approximate distant geometry and lighting through hierarchical traversal, allowing efficient visibility determination for procedural environments without exhaustive polygon rendering. In Teardown (2020), the engine voxelizes objects into a dense grid with mipmapped levels for accelerated empty-space skipping, using ray tracing in fragment shaders for secondary effects such as shadows and specular reflections, which extend visibility up to the procedural world's volume limits—typically an approximately 250 m × 25 m × 250 m grid (2504×256×2504 voxels at 0.1 m per voxel).
This method supports destructible, fully procedural draw distances by updating the voxel data on the fly, with sparse tracing modes reducing computation for far-field approximations while maintaining coherence in large, dynamic scenes. Integration with cloud gaming further extends draw distances by offloading rendering to remote servers with superior computational resources, streaming high-fidelity visuals to client devices and circumventing local hardware constraints on scene complexity or scale. Google Stadia (2019–2023) exemplified this by streaming open-world titles at up to 4K and 60 FPS, where server-side processing allowed for extended draw distances and detailed distant landscapes that would overwhelm consumer GPUs, limited primarily by network bandwidth and latency rather than client capabilities. This paradigm shifts traditional rendering constraints to the cloud, permitting dynamic adjustment of rendering budgets for immersive, expansive environments across diverse hardware.

References

  1. [1]
    Unity - Manual: Introduction to the camera view
    ### Summary of Draw Distance, Far Clipping Plane, and Rendering in 3D Graphics
  2. [2]
    Cameras | Stride manual
    The far plane, also known as the draw distance, is the furthest point the camera can see. Objects beyond this point aren't drawn. The default setting is ...<|control11|><|separator|>
  3. [3]
    The Metrics of Space: Tactical Level Design - Game Developer
    Sep 3, 2012 · The far clipping plane is the point at which the game engine stops rendering. We sometime hear this referred to as "draw distance". Complex ...
  4. [4]
    Fake it til' you make it - faking extended draw distance in mobile ...
    Increasing the draw distance raises the number of triangles that pass through all culling stages: more objects undergo bounding box checks on the CPU and more ...
  5. [5]
    Fallout 4 Graphics, Performance & Tweaking Guide | GeForce
    Jan 15, 2016 · If you've got performance to spare, instead consider increasing the draw distance of Distant Object Detail by tweaking the setting. Godrays ...
  6. [6]
    GPU Performance for Game Artists
    If this limit is exceeded during gameplay testing, steps must be taken such as reducing the number of objects, reducing draw distance, etc. Console games ...
  7. [7]
    How Assassin's Creed Shadows makes environments look great up ...
    Mar 19, 2025 · ... draw distance. Objects like a sword, a chair, or a painting on a wall all require multiple versions to be created. Higher-fidelity versions ...
  8. [8]
    Draw distance - Semantic Scholar
    Draw distance, also known as render distance or view distance, is a computer graphics term, defined as the maximum distance of objects in a three…
  9. [9]
  10. [10]
    [PDF] GPU-Driven Rendering Pipelines
    Modular construction using in-game level editor. • High draw distance. Background built from small objects. • No baked lighting.
  11. [11]
    Flight Simulator Gave Birth to 3D Video-Game Graphics
    Feb 26, 2023 · In 1999 Bill Gatespenned a moving tribute to the Wright brothers. He credited their winged invention as “the World Wide Web of that era,” ...<|control11|><|separator|>
  12. [12]
    [PDF] The 3D Rasterization Pipeline - cs.Princeton
    3D Rendering Scenarios. • Offline. One image generated with as much quality as possible for a particular set of rendering parameters.
  13. [13]
    [PDF] The Graphics Pipeline and OpenGL I: - Stanford University
    • graphics pipeline is a series of operations that takes 3D vertices/normals/triangles as input and generates fragments and pixels. • today, we only ...
  14. [14]
    OpenGL Projection Matrix - songho.ca
    GL_PROJECTION matrix is used for this projection transformation. First, it transforms all vertex data from the eye coordinates to the clip coordinates.Overview · Perspective Projection · Perspective Matrix with Field...
  15. [15]
    Frustum Culling - LearnOpenGL
    In this video illustrating frustum culling in a forest, the yellow and red shape on the left side is the bounding volume that contains the mesh. Red ...
  16. [16]
    [PDF] Automatic LOD selection - DiVA portal
    Oct 20, 2017 · The transition distance is then determined by multiplying the scaling factor with the render distance calculated in eq. 4.2. 7.2 View ...
  17. [17]
    3dfx Voodoo Graphics 4 MB Specs - GPU Database - TechPowerUp
    The GPU is operating at a frequency of 50 MHz, memory is running at 50 MHz. Being a single-slot card, the 3dfx Voodoo Graphics 4 MB does not require any ...
  18. [18]
    S3 ViRGE (325/VX/DX/GX/GX2) series of early 3D accelerators ...
    One frame in this resolution is 0.3Mpix. If we assume that each pixel on the screen is drawn just ~1.3x per frame, we can draw just 13 frames per second (=5/0.3 ...
  19. [19]
    [PDF] Voodoo Graphics Specification - O3ONE
    Dec 1, 1999 · The chart below provides performance characterization of advanced texture mapping rendering functionaltity for various SST-1 configurations.
  20. [20]
    GraphicsNotes 2013 -- Section 11: What's Wrong with OpenGL 1.0?
    On the embedded system side (for things like smart phones and tablets), OpenGL ES 1.0 uses only a fixed function pipeline, and OpenGL ES 2.0 and 3.0 have only a ...
  21. [21]
    Multi-Media Review - Microsoft Flight Simulator 98 - HistoryNet
    Aug 19, 2000 · Most obvious is support for 3-D graphics accelerators that improve the look of the terrain and objects in the virtual environment. Also new ...
  22. [22]
    Virtua Fighter - Sega Retro
    While considered a milestone in real-time 3D graphics, the 3D has limitations, with every arena being a flat square and backgrounds 2D in nature. It also ...
  23. [23]
    [PDF] Transform and Lighting | NVIDIA
    The GeForce 256 GPU uses separate transform and lighting engines so that each engine can run at maximum efficiency. Without discrete engines, the transform ...Missing: draw improvements
  24. [24]
    nVidia GeForce 256 (1999) - DOS Days
    $$199.00"The GeForce 256 is the first graphics chip to include transform and lighting (T&L) engines in its core design. T&L engines take a huge amount of load away ...Missing: draw distance improvements
  25. [25]
    fred_for_brains's Review of The Legend of Zelda: Ocarina of Time ...
    And though the draw distance occasionally leaves somthing to be desired (In the first world, all the people pop-up aka fade-in rather suddenly, even for the ...
  26. [26]
    In Praise of Video Gaming's Old Dalliance with Distance Fog - VICE
    Feb 7, 2017 · Distance fog was the great scourge of early 3D video gaming. Anyone who spent the late 1990s glued to a Nintendo 64 or original Sony PlayStation will remember ...
  27. [27]
    Grand Theft Auto III PS2 Review | Eurogamer.net
    Rating 10/10 · Review by Tom BramwellJul 18, 2005 · The biggest problem though is the distance clipping. There is some acceptable slow-down, but you often notice quite a lot of clipping, and ...Missing: limitations | Show results with:limitations
  28. [28]
    Postmortem: Epic Games' Unreal Tournament - Game Developer
    The teams at Epic Games and Digital Extremes survived stiff competition as they struggled to evolve a single-player game into a deathmatch-oriented design.Missing: distance | Show results with:distance
  29. [29]
    Guide:Optimizing performance | FortressCraft Evolved Wiki - Fandom
    Dynamic Draw Distance (enabled by default) will adjust your draw distance downwards to meet the frame rate target. The Draw Distance set in the menu is the ...
  30. [30]
    Crysis Remastered PC Performance Review and Optimisation Guide
    Sep 22, 2020 · Moving to high settings, draw distances are pushed out further, beyond that of the original Crysis' highest settings. Even the game's distant ...Missing: explanation | Show results with:explanation
  31. [31]
    Crysis Remastered Performance Analysis - Conclusion
    Nov 1, 2020 · I think the one possible exception to that is Vegetation, which is really more of a draw distance setting, as it determines how far away trees ...
  32. [32]
    Practical, Dynamic Visibility for Games - Self Shadow
    Sep 22, 2011 · This article covers two complementary approaches to visibility determination that have shipped in recent AAA titles across Xbox 360, PS3 and PC.
  33. [33]
    Practical Occlusion Culling on PS3 - Guerrilla Games
    Mar 4, 2011 · Occlusion culling tries to avoid rendering objects which the player can't see. Discussion of occlusion for games has tended to focus on the use ...
  34. [34]
    Cyberpunk 2077 Ray Tracing: Overdrive Mode - CD PROJEKT RED ...
    Apr 11, 2023 · NVIDIA RTX Direct Illumination (RTXDI) is used to transform lighting, most noticeably in Night City. What has the freely-available SDK ...
  35. [35]
    Nanite in UE5: The End of Polycounts? - Unreal Engine
    Aug 3, 2021 · Nanite, Unreal Engine's virtualized micropolygon geometry system, allows for millions of poly models to be rendered effortlessly in real time.Missing: draw distance 10km
  36. [36]
    [PDF] Nanite - Advances in Real-Time Rendering in Games
    Aug 9, 2021 · Today we are going to take a deep dive into Nanite, UE5's new virtual geometry ... Primary view isn't the only place we need to draw geometry.
  37. [37]
    Teardown Teardown - Blog
    Teardown is a physically-inspired deferred renderer that uses voxel ray tracing for certain secondary effects. The renderer is built on OpenGL 3.3, meaning it ...
  38. [38]
    With Google Stadia, Gaming Dreams Head For the Cloud - WIRED
    Mar 19, 2019 · Cloud gaming's theoretical benefits are many: distributed physics and complex simulations that might otherwise slow down a PC or console become ...