Decompression theory
Decompression theory is a scientific framework in physiology and hyperbaric medicine that explains the uptake, distribution, and elimination of inert gases in body tissues during exposure to elevated ambient pressures, such as in scuba diving, commercial diving, or caisson work, with the primary goal of preventing decompression sickness (DCS) by managing supersaturation and bubble formation during pressure reduction.[1] The theory posits that inert gases like nitrogen dissolve into tissues under hyperbaric conditions according to Henry's law and must be gradually off-gassed to avoid bubble nucleation, which can obstruct blood flow and cause tissue damage.[2]

The foundations of decompression theory emerged in the late 19th century amid industrial accidents involving compressed air workers, where symptoms of DCS (such as joint pain, paralysis, and death) were first systematically documented during projects like tunnel construction.[1] In 1908, John Scott Haldane, along with A.E. Boycott and G.C.C. Damant, published the seminal work establishing the first mathematical model, using goat experiments to demonstrate that tissues could tolerate limited supersaturation (a tissue-to-ambient pressure ratio of about 2:1, roughly 1.6:1 for the nitrogen fraction alone) without bubble formation, leading to the introduction of staged decompression stops and five hypothetical tissue compartments with half-times of 5, 10, 20, 40, and 75 minutes for gas exchange.[2] This Haldane model revolutionized safety protocols by replacing uniform ascents with calculated schedules that allowed initial rapid decompression followed by holds at specific depths to control inert gas elimination rates.[3]

Subsequent refinements in the 20th century expanded the theory through empirical and biophysical models, with the U.S. Navy developing tables based on Robert Workman's M-value approach in the 1960s, which quantified permissible supersaturation gradients for multiple tissues.[1] In the 1980s, Albert A. Bühlmann advanced the Haldanean framework with a 16-compartment neo-Haldanian model incorporating tissue-specific solubility and perfusion rates, validated against human dive data to produce safer, more conservative tables for mixed-gas diving.[4] Parallel developments in bubble mechanics, such as Yount's varying permeability model (1986), integrated free-phase gas dynamics to predict bubble growth and advocate for deeper stops, though validation studies using Doppler ultrasound have shown mixed results in reducing DCS incidence compared to traditional shallow-stop protocols.[5]

Modern applications rely on dive computers implementing these algorithms, often with adjustable gradient factors to tailor conservatism based on factors like exercise, hydration, and patent foramen ovale prevalence (affecting ~25% of individuals and increasing DCS risk via right-to-left shunting).[1]
Fundamentals of Decompression Physiology
Dissolved Inert Gas Dynamics
The solubility of inert gases, such as nitrogen, in biological fluids and tissues is governed by Henry's law, which states that the concentration of a dissolved gas in a liquid is directly proportional to the partial pressure of that gas above the liquid at equilibrium.[6] This principle is fundamental to understanding inert gas dynamics during pressure changes, as it predicts how increased ambient pressure during descent drives greater dissolution of breathing gas components into blood and tissues. The relationship is expressed as C = k \cdot P, where C is the concentration of the dissolved gas, P is its partial pressure, and k is the solubility coefficient specific to the gas, liquid, and temperature.[6] For nitrogen in human blood at 37°C, the solubility coefficient k is approximately 0.0148 mL N₂ per mL blood per atm.[7] In tissues, solubility varies; for instance, nitrogen is more soluble in lipid-rich tissues like fat (with k around 0.10 mL/mL/atm, approximately 7 times that of blood) than in water-rich ones like muscle (around 0.014 mL/mL/atm, similar to blood), influencing the rate and extent of gas uptake across body compartments.[8]

Once dissolved, inert gases are transported within the body primarily through diffusion across tissue boundaries and bulk flow via perfusion. Fick's first law of diffusion quantifies this transport, describing the flux of gas molecules as proportional to the concentration gradient across a membrane or tissue layer. The law is given by J = -D \frac{dC}{dx}, where J is the diffusion flux, D is the diffusion coefficient (dependent on the gas, tissue, and temperature), and \frac{dC}{dx} is the concentration gradient.[9] In decompression contexts, this governs the passive movement of inert gases from blood into tissues or vice versa, with higher gradients accelerating exchange. For nitrogen in soft tissues, D typically ranges from 1.5 × 10^{-5} to 2.5 × 10^{-5} cm²/s.[10]

Gas exchange in tissues is further modulated by whether it is perfusion-limited or diffusion-limited. In perfusion-limited exchange, prevalent in well-vascularized, blood-rich tissues like muscle or brain, the rate of inert gas uptake or elimination is primarily controlled by blood flow, as diffusion across the capillary wall occurs rapidly due to thin barriers and high surface area.[11] Conversely, in diffusion-limited exchange, common in poorly perfused, fatty tissues such as adipose, transport is bottlenecked by slow molecular diffusion through the tissue matrix, even if blood delivers gas to the periphery; this leads to slower equilibration and prolonged retention of dissolved inert gases.[10] These distinctions are critical, as fatty tissues can accumulate up to 5 to 10 times more nitrogen per unit volume than aqueous ones under equivalent partial pressures, due to higher solubility.[8]

A practical illustration occurs during scuba diving descents, where nitrogen uptake exemplifies these dynamics.
At the surface (1 atm), tissues are typically saturated with nitrogen at its ambient partial pressure of about 0.79 atm, yielding concentrations of roughly 0.0117 mL/mL in blood.[7] Descent to 10 meters (2 atm total pressure) raises the inspired nitrogen partial pressure to approximately 1.58 atm, potentially doubling tissue concentrations in fast-equilibrating compartments like blood-rich muscle within minutes, to around 0.0234 mL/mL if fully saturated.[9] At greater depths, such as 30 meters (4 atm), the partial pressure reaches 3.16 atm, allowing concentrations up to four times surface levels (about 0.0468 mL/mL in blood), though full saturation in slower tissues like fat may require hours, highlighting the interplay of perfusion, diffusion, and solubility.[12]
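These figures follow mechanically from Henry's law; the minimal sketch below reproduces them, assuming the blood solubility coefficient quoted above and the usual approximation of 1 atm of added pressure per 10 m of seawater.

```python
# A minimal sketch of the Henry's-law arithmetic above; the solubility
# coefficient and the ~1 atm per 10 m pressure rule are taken from the text.

K_BLOOD = 0.0148  # mL N2 per mL blood per atm at 37 degC (quoted above)
F_N2 = 0.79       # nitrogen fraction of air

def dissolved_n2(depth_m: float, k: float = K_BLOOD) -> float:
    """Equilibrium dissolved N2 concentration via Henry's law, C = k * P_N2."""
    ambient_atm = 1.0 + depth_m / 10.0  # roughly 1 atm added per 10 m of seawater
    return k * F_N2 * ambient_atm

for depth in (0, 10, 30):
    p_n2 = F_N2 * (1.0 + depth / 10.0)
    print(f"{depth:>2} m: P_N2 = {p_n2:.2f} atm, C = {dissolved_n2(depth):.4f} mL/mL")
# Prints ~0.0117, 0.0234, and 0.0468 mL/mL, matching the figures above.
```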
Bubble Formation and Growth
Bubble formation during decompression arises from the phase separation of supersaturated inert gases in bodily fluids and tissues, deviating from purely dissolved gas behavior by initiating non-equilibrium gas phase transitions.[13] In decompression theory, bubble nucleation is described by two primary mechanisms: homogeneous and heterogeneous. Homogeneous nucleation involves the spontaneous formation of gas clusters within a uniform liquid medium, driven by thermal fluctuations that overcome the energy barrier for creating a stable bubble interface.[14] This process requires high supersaturation levels, as the formation of a critical bubble nucleus demands significant free energy input. In contrast, heterogeneous nucleation occurs at pre-existing sites such as impurities, tissue surfaces, or crevices, which lower the energy threshold by providing favorable interfaces for gas accumulation.[13] Heterogeneous mechanisms predominate in biological systems due to the abundance of such sites, making bubble inception more probable at moderate supersaturations compared to the extreme conditions needed for homogeneous nucleation.[14]

The energy barrier for bubble formation is quantified by the change in Gibbs free energy, ΔG, which balances the surface energy cost against the volume energy gain from pressure differences: \Delta G = 4\pi r^2 \sigma - \frac{4}{3}\pi r^3 \Delta P, where r is the bubble radius, \sigma is the surface tension, and \Delta P is the pressure difference across the interface (typically the supersaturation pressure).[13] At the critical radius r_c = 2\sigma / \Delta P, ΔG reaches a maximum, representing the unstable nucleus beyond which growth becomes thermodynamically favorable.[13] In tissues, effective surface tensions are often reduced (e.g., below 5 dyn/cm due to biological surfactants), lowering this barrier and enabling nucleation at supersaturations as low as 2-3 atmospheres.[14]

Once nucleated, bubbles grow primarily through diffusion-driven mass transfer of dissolved gases from the surrounding supersaturated medium. The seminal Epstein-Plesset model describes this radial growth rate as \frac{dr}{dt} = \frac{D}{r} (C_i - C_s), where dr/dt is the rate of change of bubble radius, D is the diffusion coefficient of the gas in the liquid, C_s is the gas concentration at the bubble surface (in equilibrium with internal pressure), and C_i is the concentration far from the bubble; growth occurs while the far-field concentration exceeds that at the surface.[15] This equation assumes spherical symmetry and neglects convection, providing a foundational approximation for decompression scenarios where ambient pressure decreases, enhancing the concentration gradient and accelerating expansion.[15] In vivo applications extend this model to account for tissue perfusion limits, yielding growth rates on the order of micrometers per minute for micron-sized nuclei under typical diving supersaturations.[15]

Surfactants and tissue interfaces play crucial roles in modulating bubble dynamics by altering surface tension and stability. Pulmonary surfactants, such as those composed of dipalmitoylphosphatidylcholine, reduce interfacial tension at alveolar surfaces, inhibiting bubble coalescence and promoting dissolution during decompression.
In experimental models simulating venous flow, low concentrations of anionic surfactants like sodium dodecyl sulfate (25-50 ppm) significantly decrease bubble size and narrow size distributions by hindering coalescence, with synergistic effects when combined with electrolytes present in blood.[16] Hydrophobic tissue interfaces, such as lipid membranes, facilitate heterogeneous nucleation by providing low-energy sites, while hydrophilic surfaces coated with surfactants can stabilize smaller bubbles or prevent their growth.[16] These interactions underscore how biological surfactants mitigate bubble embolization risks in vascular and pulmonary tissues.

Historical observations of bubble-related effects trace back to early 20th-century experiments by J.S. Haldane and colleagues, who subjected goats to hyperbaric pressures followed by staged decompression in 1908. Autopsies revealed gas bubbles in vascular and neural tissues of animals that underwent rapid decompression, correlating symptoms like paralysis with bubble occlusion and establishing the link between supersaturation and bubble-induced pathology. These findings, from exposures up to 6 atmospheres, demonstrated that limiting supersaturation to 1.6-2.0 times ambient pressure prevented overt bubble formation and symptoms in most cases.
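The nucleation and growth relations above lend themselves to a short numerical sketch. The snippet below evaluates the critical radius r_c = 2\sigma / \Delta P and the closed-form solution of the simplified Epstein-Plesset equation; all parameter values are illustrative assumptions within the ranges quoted above, not measured tissue constants.

```python
import math

SIGMA = 5.0    # dyn/cm: reduced effective surface tension in tissue (see text)
DP = 1.013e6   # dyn/cm^2: ~1 atm of supersaturation pressure
D = 2.0e-5     # cm^2/s: nitrogen diffusion coefficient in soft tissue

# Critical radius separating dissolving from growing nuclei.
r_c = 2 * SIGMA / DP
print(f"critical radius ~ {r_c * 1e4:.3f} um")  # ~0.099 um

def bubble_radius(t_s: float, r0_cm: float, delta_c: float) -> float:
    """Integrates dr/dt = (D/r)(C_i - C_s) exactly: r^2 = r0^2 + 2*D*delta_c*t."""
    return math.sqrt(r0_cm**2 + 2 * D * delta_c * t_s)

# Growth of a 1 um nucleus at an assumed dimensionless excess concentration of 1e-3.
for t in (10, 60, 300):
    print(f"t = {t:>3} s: r ~ {bubble_radius(t, 1e-4, 1e-3) * 1e4:.1f} um")
```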
Isobaric Counterdiffusion
Isobaric counterdiffusion (ICD) is the process by which different inert gases diffuse into and out of body tissues in opposite directions while ambient pressure remains constant, potentially resulting in localized supersaturation and bubble formation. This phenomenon arises primarily from differences in the diffusion coefficients and solubilities of gases such as helium and nitrogen, leading to imbalances in gas exchange across tissue barriers like skin or subcutaneous layers. Unlike pressure-induced decompression, ICD occurs without depth changes, often during gas mixture switches in hyperbaric environments.[17]

The mechanism is governed by Fick's laws of diffusion, adapted for multi-gas systems, where the flux of each gas is proportional to its diffusion coefficient and concentration gradient. Helium diffuses approximately 2.65 times faster than nitrogen in biological tissues due to its lower molecular weight (D ∝ 1/√MW), giving a helium diffusion coefficient D_He ≈ 2.65 × D_N2. When switching from a nitrogen-rich mixture to helium-rich heliox, helium enters tissues more rapidly than nitrogen exits, creating transient supersaturation pockets if the inward helium flux exceeds the outward nitrogen flux. This imbalance is exacerbated by nitrogen's higher solubility in lipids (tissue-blood partition coefficient ~2.6 times that of helium), favoring net gas accumulation in adipose or epithelial layers. Mathematically, the net flux J_net at a tissue interface can be described as J_net = -D_He (dC_He/dx) + D_N2 (dC_N2/dx), where C represents concentration and x is distance; a positive J_net indicates net inward gas movement, promoting supersaturation if it exceeds the critical tension for bubble nucleation. Supersaturation is favored when the ratio D_1 S_1 / D_2 S_2 > 1, with S denoting solubility, highlighting how helium's high diffusivity and low solubility can drive bubble growth despite overall decompression benefits. These dynamics are modeled using perfusion-limited or diffusion-limited frameworks, such as the Krogh cylinder for radial gas exchange in tissue cylinders.[17]

In saturation diving, ICD risks manifest during helium-nitrogen switches intended to optimize decompression, as seen in commercial operations from the 1960s onward. Early North Sea dives, supporting offshore oil exploration, involved heliox exposures at depths up to 300 meters, where gas switches at constant pressure were used to manage inert gas loads; such procedures occasionally led to cutaneous lesions or vestibular disturbances attributed to superficial ICD. For instance, 1960s experiments and operations by consortia like Comex and Oceaneering reported transient skin bends during heliox-to-air transitions at depths around 60 meters.

Experimental evidence confirming ICD-induced bubbles comes from animal studies in the 1960s. Van Liew and Passke's rat experiments demonstrated permeation rates through subcutaneous gas pockets, showing that switching the ambient environment from nitrogen to helium caused pocket volumes to increase due to faster helium ingress, with sulfur hexafluoride pockets doubling in size over days in air. Subsequent pig studies by D'Aoust et al. further evidenced venous gas emboli after isobaric nitrogen-to-helium shifts, with bubble counts rising transiently in the vena cava, underscoring the risk of deep-tissue supersaturation without pressure reduction.
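A single-compartment sketch makes the transient supersaturation concrete. Assuming Haldanean exponential kinetics with helium equilibrating 2.65 times faster than nitrogen (per the D ∝ 1/√MW scaling above), the snippet below tracks total inert gas tension after an air-to-heliox switch at constant ambient pressure; the half-time, depth, and gas fractions are illustrative assumptions.

```python
P_AMB = 4.0             # ata: constant ambient pressure (~30 m), illustrative
TAU_N2 = 40.0           # min: assumed nitrogen half-time of this compartment
TAU_HE = TAU_N2 / 2.65  # helium exchanges ~2.65x faster (D ~ 1/sqrt(MW))

def tension(p0: float, pa: float, t: float, tau: float) -> float:
    """Haldanean exponential: P(t) = Pa + (P0 - Pa) * 2^(-t/tau)."""
    return pa + (p0 - pa) * 2.0 ** (-t / tau)

# Saturated on air (79% N2), then switched to 79/21 heliox at the same depth.
p0_n2, p0_he = 0.79 * P_AMB, 0.0
pa_n2, pa_he = 0.0, 0.79 * P_AMB

for t in (0, 5, 15, 30, 60):
    total = tension(p0_n2, pa_n2, t, TAU_N2) + tension(p0_he, pa_he, t, TAU_HE)
    flag = "exceeds ambient" if total > P_AMB else "below ambient"
    print(f"t = {t:>2} min: total inert tension = {total:.2f} ata ({flag})")
# Total inert tension transiently rises above the 4.0 ata ambient pressure
# (peaking near ~4.2 ata around 30 min) despite no change in depth; the
# comparison to ambient neglects O2, CO2, and water vapor tensions.
```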
Oxygen Toxicity and Protective Effects
Oxygen toxicity represents a significant risk in decompression diving, particularly when using enriched oxygen mixtures or rebreathers, where elevated partial pressures of oxygen (PO₂) can lead to central nervous system (CNS) and pulmonary damage. Early recognition of this hazard occurred in the 1920s during U.S. Navy experiments with closed-circuit oxygen rebreathers, such as the Davis apparatus, where divers experienced convulsions and other symptoms at depths equivalent to PO₂ levels above 1.6 atmospheres absolute (ATA), prompting the establishment of operational depth limits around 30 feet (9.1 meters) to mitigate risks. These incidents highlighted the need for controlled oxygen exposure in diving operations.

Thresholds for oxygen toxicity are well defined by authoritative guidelines, with the National Oceanic and Atmospheric Administration (NOAA) specifying limits to prevent CNS toxicity, such as 1.4 ATA for no more than 45 minutes in a single dive, and pulmonary toxicity, allowing indefinite exposure at 0.5 ATA while monitoring cumulative oxygen tolerance units (OTUs) to avoid longer-term lung irritation. Exceeding these limits can cause symptoms ranging from cough and chest tightness in pulmonary cases to nausea, visual disturbances, and seizures in CNS toxicity, with the latter being particularly acute during decompression stops on high-oxygen gases. Recent revisions to these guidelines, informed by empirical data from rebreather dives, have extended safe exposure times at moderate PO₂ levels like 1.3 ATA to up to 240 minutes of activity followed by decompression, reflecting improved understanding of risk at sub-critical pressures.[18]

Despite these risks, oxygen plays a protective role in decompression through the oxygen window concept, which exploits metabolic differences in gas partial pressures to enhance inert gas washout and reduce bubble formation. The oxygen window arises from the consumption of oxygen in tissues, creating a partial pressure gradient where the effective PO₂ driving decompression is quantified as P_{\text{O}_2 \text{ effective}} = P_{\text{IO}_2} - P_{\text{CO}_2} - P_{\text{N}_2 \text{min}}, with P_{\text{IO}_2} the inspired oxygen pressure, P_{\text{CO}_2} approximately 40-50 mmHg, and P_{\text{N}_2 \text{min}} the minimal venous nitrogen tension around 50 mmHg, allowing up to 150-200 mmHg of additional inert gas elimination without supersaturation. This mechanism is particularly beneficial in saturation diving, where breathing higher oxygen fractions during decompression accelerates safe desaturation rates by up to the full inspired PO₂ value in extended models.[19]

Furthermore, oxygen exerts protective effects against decompression sickness (DCS) via antioxidant mechanisms, whereby hyperbaric oxygen (HBO) pretreatment or therapy induces free radical scavenging to counteract bubble-induced inflammation and oxidative stress. HBO exposure upregulates endogenous antioxidants such as superoxide dismutase and glutathione peroxidase, reducing endothelial cell apoptosis and necrosis in DCS models by mitigating reactive oxygen species (ROS) generated from vascular bubble interactions. For instance, in rat studies simulating DCS, HBO pretreatment at 2.5 ATA for 60 minutes prior to decompression significantly lowered brain tissue damage by enhancing these scavenging pathways, demonstrating oxygen's role in bolstering anti-inflammatory responses without exceeding toxicity thresholds.
This dual biochemical action underscores oxygen's balanced utility in decompression protocols.[20][21]
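For a sense of scale, the simplified oxygen-window relation quoted earlier in this section can be evaluated directly. The inputs below are illustrative, and the linear form overstates the window at high inspired PO₂, where venous PO₂ is capped by metabolic consumption (hence the 150-200 mmHg ceiling cited above).

```python
P_CO2 = 45.0     # mmHg: venous CO2 tension (~40-50 mmHg per the text)
P_N2_MIN = 50.0  # mmHg: minimal venous N2 tension per the text

def oxygen_window(p_io2_mmHg: float) -> float:
    """Simplified relation above: P_O2,effective = P_IO2 - P_CO2 - P_N2,min."""
    return p_io2_mmHg - P_CO2 - P_N2_MIN

print(f"air at surface:    ~{oxygen_window(0.21 * 760):.0f} mmHg")  # ~65 mmHg
print(f"40% nitrox, 1 ata: ~{oxygen_window(0.40 * 760):.0f} mmHg")  # ~209 mmHg
```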
Decompression Sickness Overview
Pathophysiology and Symptoms
Decompression sickness (DCS) arises primarily from the formation of inert gas bubbles, such as nitrogen, in tissues and vasculature during rapid decompression, leading to mechanical obstruction and biochemical injury. These bubbles interact with endothelial cells, causing direct damage to the vascular lining and activation of platelets, which promotes aggregation and the release of pro-inflammatory mediators like cytokines and reactive oxygen species. This cascade results in endothelial dysfunction, inflammation, and vaso-occlusion, where bubbles block microcirculation and induce ischemia in affected tissues, particularly in supersaturated areas like joints, spinal cord, and brain.[22][1][23]

DCS is classified into Type I and Type II based on severity and organ involvement. Type I DCS is milder, manifesting as musculoskeletal pain (often described as "the bends" because nitrogen bubbles accumulating in joints cause deep aching in shoulders, elbows, or knees), skin symptoms like pruritus or mottled rash (cutis marmorata), and lymphatic issues such as swelling. In contrast, Type II DCS is severe, involving neurological deficits (e.g., numbness, weakness, paralysis, or confusion from spinal or cerebral involvement), cardiorespiratory symptoms (e.g., dyspnea or chest pain from pulmonary "chokes"), or cardiovascular compromise, which can lead to shock if untreated. The incidence of DCS in recreational scuba diving is low, approximately 3-4 cases per 10,000 dives, though Type II accounts for about 10-20% of cases and carries higher morbidity.[22][1][23][24]

Symptoms typically emerge shortly after surfacing: 75-80% of cases present within the first hour and over 90% within 6 hours, though delayed presentations up to 24-48 hours occur in rare instances. Early recognition is crucial, as joint pain may resolve with rest, but neurological symptoms like vertigo or sensory changes progress rapidly without intervention. Diagnosis relies on clinical history and exclusion of mimics, supported by Doppler ultrasound for detecting venous gas emboli (VGE); the Spencer scale grades bubble load from 0 (no bubbles) to 4 (continuous signals overriding cardiac sounds), with grades 3-4 correlating with higher DCS risk.[22][1][23][25]
Risk Factors and Prevention Basics
Several environmental and procedural factors elevate the risk of decompression sickness (DCS) during diving. Rapid decompression rates exceeding 10 meters per minute promote excessive bubble formation by outpacing inert gas elimination from tissues, significantly heightening DCS incidence.[26] Cold exposure, particularly during ascent, induces vasoconstriction that impairs tissue perfusion and slows off-gassing, thereby increasing bubble nucleation and DCS susceptibility.[26][23]

Physiological conditions further compound DCS vulnerability by altering gas exchange dynamics. Dehydration diminishes plasma volume and reduces overall tissue perfusion, hindering inert gas washout and elevating DCS risk, especially in prolonged or repetitive exposures.[26][27] Advancing age and obesity impair circulatory efficiency and perfusion rates, contributing to slower gas elimination and higher DCS likelihood in susceptible individuals.[26][27] A patent foramen ovale (PFO), present in approximately 25% of the population, facilitates paradoxical emboli by allowing venous bubbles to bypass pulmonary filtration, increasing DCS risk up to 2.5-fold overall and fourfold for neurological manifestations.[26][28] Repetitive diving profiles substantially amplify DCS probability due to cumulative inert gas loading; for instance, multiple dives per day can elevate risk by factors of 2 to 3 compared to single exposures.[27] Altitude exposure, such as post-dive travel above 2,400 meters, exacerbates decompression stress by further reducing ambient pressure, often leading to symptoms like joint pain or neurological deficits.[26]

Fundamental prevention strategies center on mitigating these factors through controlled procedures. Adhering to slow ascent rates of 9 to 18 meters per minute allows adequate time for off-gassing, substantially lowering bubble formation.[26] Incorporating surface intervals exceeding 1 hour between repetitive dives facilitates partial inert gas elimination, while extended intervals of at least 12 hours after no-decompression dives, ideally 18 hours for multi-day series, minimize residual effects before altitude exposure.[26] Hydration protocols, including pre-dive fluid intake to maintain plasma volume, support perfusion and may help reduce venous gas emboli formation (a DCS precursor), though direct evidence for reducing DCS incidence in humans remains limited.[1]
Key Concepts in Decompression Modeling
Tissue Compartments and Perfusion
In decompression theory, the tissue compartment model provides a foundational framework for simulating inert gas uptake and elimination in the body. Developed by John Scott Haldane in 1908, this approach represents the human body as a series of 5 to 16 parallel compartments, each exhibiting exponential saturation and desaturation curves based on tissue-specific kinetics. Haldane's original formulation used five compartments with half-times ranging from 5 to 75 minutes, derived from animal experiments exposing goats to hyperbaric conditions and observing decompression outcomes. This multi-compartment structure allows modeling of differential gas loading across tissues during dives, enabling the calculation of safe ascent profiles to prevent supersaturation beyond critical thresholds.

Central to the model is the perfusion-limited assumption, which posits that inert gas partial pressure in each tissue compartment equilibrates instantaneously with arterial blood due to adequate blood flow, making perfusion the primary rate-determining factor. Gas exchange follows first-order exponential kinetics, with the compartment's half-time defined as \tau = \frac{0.693}{k}, where k is the tissue-specific perfusion rate constant (in min^{-1}). This assumption simplifies computations by treating tissues as homogeneous units perfused uniformly, though actual rates vary with factors like cardiac output and local blood flow.[29]

Compartment half-times span a wide range to reflect physiological diversity: fast compartments, such as those modeling blood and brain (1–5 minutes), saturate rapidly during descent and drive short no-decompression limits in shallow bounce dives, while slow compartments, like those for fat (480 minutes or more), accumulate gas over extended exposures and necessitate staged decompression in deep or prolonged profiles. For example, in a typical technical dive to 30 meters for 60 minutes, fast compartments may be nearly saturated and intermediate compartments may approach 80% saturation, requiring careful ascent monitoring, whereas slow compartments remain below 50%, influencing multiday residual nitrogen considerations. These half-times, refined through subsequent models like Bühlmann's 16-compartment system, are calibrated against empirical data from human and animal trials to optimize safety margins.[29]

A key limitation of the perfusion-limited framework arises in non-perfused or avascular tissues, such as cartilage, where gas transport relies solely on diffusion across long distances without blood flow support, leading to extended half-times and potential inaccuracies in predicting bubble formation or joint-specific decompression sickness. This diffusion-limited exchange in structures like tendons and bone marrow can prolong desaturation, as evidenced by higher DCS incidence in avascular sites despite conservative profiles based on perfused tissue assumptions.[29]
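The half-time formalism reduces to a few lines of code. The sketch below tabulates the fraction of the arterial-tissue gradient taken up after a 60-minute exposure for a spread of half-times; the values chosen are illustrative, spanning Haldane- and Bühlmann-style ranges.

```python
def saturation_fraction(t_min: float, half_time_min: float) -> float:
    """Fraction of the arterial-tissue gradient equilibrated: 1 - 2^(-t/tau)."""
    return 1.0 - 2.0 ** (-t_min / half_time_min)

# Uptake after 60 min at constant depth for representative half-times.
for tau in (5, 10, 20, 40, 120, 480):
    print(f"tau = {tau:>3} min: {saturation_fraction(60, tau):6.1%}")
# Fast compartments are essentially saturated after 60 min, while a
# 480-min compartment has taken up less than 10% of the gradient.
```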
Inert Gas Ingassing and Outgassing
In decompression theory, inert gas ingassing refers to the uptake of dissolved inert gases, such as nitrogen, into body tissues during exposure to elevated ambient pressures, while outgassing describes the subsequent elimination of these gases as pressure decreases. These processes are modeled using multi-compartment tissue models, where each compartment represents a group of tissues with similar perfusion rates and gas exchange kinetics. The exchange follows Fick's law of diffusion, modulated by blood flow, leading to exponential approaches to equilibrium between arterial blood and tissue tensions.[30]

The standard equation for inert gas ingassing in a tissue compartment, assuming an initial tissue tension near zero (as at the start of a dive from the surface), is given by

P_t(t) = P_a \left(1 - e^{-(\ln 2)\, t / \tau}\right),

where P_t(t) is the tissue inert gas tension at time t, P_a is the arterial inert gas tension, and \tau is the compartment half-time (the time required to reach 50% saturation). The factor \ln 2 in the exponent follows directly from the half-time definition, and saturation is approached asymptotically, reaching about 98% equilibration after six half-times. These equations derive from Haldane's foundational perfusion-limited model, refined in modern neo-Haldanian algorithms.[30][31]
During decompression, outgassing occurs as tissues release inert gas back to the arterial blood and lungs, driven by the reversed pressure gradient. The governing equation is

P_t(t) = P_a + (P_0 - P_a)\, e^{-(\ln 2)\, t / \tau},
where P_0 is the initial tissue tension at the start of outgassing (e.g., upon ascent), and other terms are as defined above. This exponential decay ensures that faster compartments (shorter \tau) unload gas more rapidly than slower ones, influencing decompression stop requirements to prevent excessive supersaturation. The half-time \tau varies by compartment, typically ranging from 1 to 720 minutes for nitrogen in models like Bühlmann's, consistent with the tissue compartment discussion above.[30][31]

To ensure safe outgassing without decompression sickness, the concept of the M-value defines a critical limit on tissue tension. Introduced by Robert D. Workman in the 1960s, the M-value is the maximum allowable inert gas tension a compartment may carry at a given ambient pressure without unacceptable risk of bubble formation. Workman's linear formulation, M = M_0 + \Delta M \cdot d, uses a surface intercept M_0 and a slope \Delta M per unit depth d, specific to each compartment and gas, calibrated from animal and human exposure data to bound supersaturation safely. This limit guides ascent rates and stops: keeping each compartment's tissue tension below its M-value prevents bubble formation from excessive gradients.[32][33]

A representative example of nitrogen outgassing involves a 10-minute half-time compartment following a dive to 30 meters (4 atmospheres absolute, ata) on air, assuming near-saturation at depth for simplicity. At depth, arterial nitrogen tension P_a \approx 0.79 \times 4 = 3.16 ata, so the initial tissue tension is P_0 \approx 3.16 ata upon ascent to the surface (1 ata, where P_a \approx 0.79 ata). After 10 minutes at the surface, the tissue tension falls to P_t(10) = 0.79 + (3.16 - 0.79)\, e^{-(\ln 2) \cdot 10 / 10} \approx 0.79 + 2.37 \times 0.5 = 1.975 ata, representing 50% washout of the excess. This illustrates how a fast compartment halves its excess load in one half-time, while slower compartments retain more gas, necessitating staged decompression for deeper or longer exposures.[30][31]
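The worked example above is straightforward to verify numerically. The sketch below evaluates the outgassing equation for the 10-minute compartment and compares the result against a Workman-style linear M-value; the intercept and slope used here are illustrative stand-ins, not published Workman coefficients.

```python
import math

F_N2 = 0.79  # nitrogen fraction of air

def outgas(p0: float, pa: float, t_min: float, tau_min: float) -> float:
    """P_t(t) = Pa + (P0 - Pa) * e^(-ln2 * t / tau), tensions in ata."""
    return pa + (p0 - pa) * math.exp(-math.log(2.0) * t_min / tau_min)

p0 = F_N2 * 4.0  # ~3.16 ata: near-saturated at 30 m (4 ata) on air
pa = F_N2 * 1.0  # ~0.79 ata: arterial N2 tension at the surface
pt = outgas(p0, pa, t_min=10.0, tau_min=10.0)
print(f"tissue N2 after 10 min at surface: {pt:.3f} ata")  # ~1.975 ata

# Workman-style linear limit M = M0 + dM * d, evaluated at the surface (d = 0).
M0, dM, depth_m = 2.2, 0.03, 0.0  # illustrative intercept (ata) and slope (ata/m)
m_value = M0 + dM * depth_m
status = "within" if pt <= m_value else "exceeds"
print(f"M-value at surface: {m_value:.2f} ata -> tissue tension {status} the limit")
```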