
Dissipative system

A dissipative system, also known as a dissipative structure, is a thermodynamically open system that operates far from equilibrium, exchanging energy and matter with its environment to sustain organized, coherent states through irreversible processes that produce entropy. These systems emerge via fluctuations and instabilities, leading to self-organization in which non-equilibrium conditions act as a source of order rather than disorder, contrasting with the tendency toward equilibrium in closed systems. Introduced by Ilya Prigogine in his work on non-equilibrium thermodynamics, dissipative structures require continuous energy input to maintain stability, often exhibiting sensitivity to boundary conditions, system size, and nonlinear interactions such as autocatalysis. Key characteristics include nonlinear amplification, where small fluctuations grow into macroscopic patterns, and an increase in entropy production that drives the system toward more complex configurations. For instance, in physical systems like Bénard convection cells, fluid layers heated from below form hexagonal patterns when the Rayleigh number exceeds a critical threshold (Ra > 1708), maximizing the rate of heat transfer. In chemical contexts, reactions such as the Brusselator model demonstrate oscillations and spatial patterns like Turing structures, illustrating how autocatalytic feedback loops sustain temporal and spatial order. Dissipative systems extend to biological applications, where living organisms function as prime examples, maintaining internal order and organization through metabolic cycles that dissipate energy from nutrient flows while exporting entropy to the surroundings. This links self-organization to evolution, with phenomena like genetic mutations amplifying fluctuations akin to thermodynamic instabilities, enabling adaptive responses to environmental changes. In broader contexts, such as statistical physics and nonlinear dynamics, dissipative principles inform models of collective behavior in particle systems and attractors in dynamical systems, emphasizing the role of energy throughput in achieving order and adaptability.
Overall, these systems unify aspects of physics, chemistry, and biology by revealing how irreversibility fosters emergent order in far-from-equilibrium environments.

Introduction

Definition and Basic Principles

A dissipative system is a thermodynamically open system that operates far from equilibrium, exchanging energy and/or matter with its environment, which leads to the dissipation of energy and an overall increase in entropy. Unlike closed or isolated systems, open systems allow continuous flows that sustain internal dynamics, preventing them from reaching thermodynamic equilibrium, where free energy is minimized. The basic principles of dissipative systems revolve around irreversibility, where processes driven by the second law of thermodynamics convert ordered energy into disordered forms like heat, fostering complex behaviors rather than stasis. This irreversibility enables self-organization, such as temporal oscillations in chemical reactions or spatial patterns in fluid convection, and the emergence of ordered structures through amplified fluctuations that organize matter despite increasing global entropy. In contrast to conservative systems, where forces preserve mechanical energy without loss, dissipative systems inherently lose usable energy to irreversible processes, leading to damping and eventual stabilization or novel steady states. Simple examples illustrate these principles in physical contexts: a mechanical oscillator with friction, such as a damped pendulum, dissipates kinetic energy as heat through air resistance and internal material friction, causing oscillations to decay over time. Similarly, an electrical circuit with resistance converts electrical energy into thermal energy via Joule heating, reducing current flow and preventing indefinite energy storage. These cases highlight how openness to environmental exchange drives dissipation, distinguishing dissipative dynamics from idealized reversible models.
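The damped oscillator mentioned above can be sketched numerically. This is a minimal illustration with assumed parameters (m, c, k are arbitrary values, not tied to a particular device): the mechanical energy decays steadily as the damping term converts it to heat.

```python
import numpy as np

# Damped harmonic oscillator: m x'' + c x' + k x = 0.
# Parameters are illustrative; semi-implicit Euler integration.
def simulate_damped_oscillator(m=1.0, c=0.2, k=4.0, x0=1.0, v0=0.0,
                               dt=1e-3, steps=20000):
    x, v = x0, v0
    energies = []
    for _ in range(steps):
        a = (-c * v - k * x) / m          # Newton's law with viscous damping
        v += a * dt
        x += v * dt
        # mechanical energy: kinetic + elastic potential
        energies.append(0.5 * m * v**2 + 0.5 * k * x**2)
    return energies

energies = simulate_damped_oscillator()
print(energies[0], energies[-1])          # energy decays toward zero
```

In a conservative model (c = 0) the final energy would equal the initial energy; the damping term is what makes the system dissipative.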

Historical Development

The concept of dissipative systems traces its roots to 19th-century developments in thermodynamics, where the foundations of energy dissipation and irreversibility were laid. Rudolf Clausius introduced the notion of entropy in 1865 as a measure of the unavailability of energy for work in thermodynamic processes, emphasizing the dissipative nature of heat transfer and the second law of thermodynamics. Building on this, Lord Rayleigh analyzed instabilities in fluid motions during the 1880s, demonstrating how dissipative processes could lead to the breakdown of stratified states and the emergence of ordered patterns in viscous fluids under gravitational forces. In the mid-20th century, the framework for irreversible thermodynamics advanced significantly with Lars Onsager's formulation of reciprocal relations in 1931, which linked phenomenological coefficients in irreversible processes and provided a mathematical basis for understanding coupled dissipative fluxes near equilibrium. These ideas set the stage for exploring systems far from equilibrium, highlighting how dissipation could drive symmetry-breaking and self-organization. Ilya Prigogine played a pivotal role in the 1960s and 1970s by developing the theory of dissipative structures, which describe self-organizing systems maintained by continuous energy and matter exchange with their environment. For this work on dissipative structures and the role of fluctuations in self-organization, Prigogine received the Nobel Prize in Chemistry in 1977. His influential book From Being to Becoming (1980) further synthesized these concepts, arguing that irreversibility and time's arrow are intrinsic to dissipative processes, bridging classical thermodynamics with complexity in physical and biological systems. Following Prigogine's contributions, the concept expanded into systems and control theory in the early 1970s, with Jan C. Willems introducing dissipativity as a property of dynamical systems characterized by dissipation relative to a storage function. In the 1980s and 1990s, extensions to quantum mechanics emerged through models of open quantum systems, incorporating dissipation via environment interactions to describe decoherence and quantum noise.
More recently, post-2020 research has applied dissipative principles to driven matter, such as studies of dissipative adaptation enabling self-organization in driven molecular systems, and further to quantum many-body correlations and gravitational effective field theories for open systems as of 2025.

Thermodynamic Foundations

Non-Equilibrium Thermodynamics

In non-equilibrium thermodynamics, dissipative systems operate as open systems exchanging matter and energy with their environment, allowing the second law to permit local entropy decreases compensated by greater increases in the surroundings. The entropy balance is expressed as dS = d_e S + d_i S, where d_e S represents entropy exchange and d_i S \geq 0 the internal production, ensuring overall entropy growth in the system plus its surroundings. The entropy production rate \sigma = \sum J_i X_i > 0, with J_i as thermodynamic fluxes (e.g., heat or matter flows) and X_i as conjugate affinities (e.g., temperature or chemical potential gradients), quantifies dissipation as the driving force for irreversible processes. Near equilibrium, the linear regime prevails, characterized by Onsager's reciprocal relations, where fluxes depend linearly on affinities: J_i = \sum_j L_{ij} X_j, with the phenomenological coefficients satisfying L_{ij} = L_{ji} due to microscopic reversibility. In this domain, Prigogine's principle of minimum entropy production applies, positing that steady states minimize \sigma subject to fixed constraints, providing a variational criterion for stability akin to a Lyapunov function. Far from equilibrium, however, nonlinear interactions emerge, transitioning to regimes where entropy production can increase through instabilities, enabling self-organization beyond linear approximations. Bifurcations mark critical transitions in these far-from-equilibrium conditions; for instance, a Hopf bifurcation occurs when a stable steady state loses stability, giving rise to sustained oscillatory states through the emergence of a limit cycle, reflecting the onset of temporal organization. Prigogine's theory elucidates how such order arises from fluctuations: near critical points, random perturbations are amplified by nonlinearities, breaking the law of large numbers and fostering coherent structures sustained by continuous dissipation. In chemical reactions, autocatalytic sets illustrate this principle, where self-amplifying cycles (e.g., a product catalyzing reactant conversion) enhance local order via concentration patterns, while elevating global entropy production to comply with the second law.
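The linear flux-force regime can be illustrated with a toy computation (the Onsager matrix entries below are illustrative numbers, not measured coefficients): with a symmetric, positive-definite L, the entropy production \sigma = \sum_i J_i X_i is a positive quadratic form that vanishes only at equilibrium, where all affinities are zero.

```python
import numpy as np

# Illustrative symmetric Onsager matrix: L_ij = L_ji (reciprocity),
# positive definite so that sigma >= 0 for every choice of forces.
L_matrix = np.array([[2.0, 0.5],
                     [0.5, 1.0]])

def entropy_production(X):
    """sigma = sum_i J_i X_i with linear flux-force relations J = L X."""
    J = L_matrix @ X
    return float(J @ X)

X = np.array([0.3, -0.1])      # example thermodynamic forces
sigma = entropy_production(X)
print(sigma)                    # positive for any nonzero X
```

Because L is positive definite, \sigma = X^T L X > 0 away from equilibrium, mirroring d_i S \geq 0 in the entropy balance above.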

Dissipative Structures

Dissipative structures refer to spatially or temporally organized patterns that emerge and persist in far-from-equilibrium systems through the continuous dissipation of energy and matter, often involving broken symmetry in steady states. These structures arise from irreversible processes that amplify fluctuations, leading to self-organization where order is maintained by ongoing exchanges with the environment, countering the tendency toward disorder predicted by equilibrium thermodynamics. Coined by Ilya Prigogine in the context of non-equilibrium thermodynamics, the concept highlights how such patterns function as "islands of decreasing entropy" locally, while globally increasing it. These structures are characterized by excess entropy production, where the system maximizes dissipation to maintain order, as opposed to minimizing it near equilibrium. Key characteristics of dissipative structures include nonlinear dynamics driven by autocatalytic feedback loops, high reproducibility under consistent conditions, and acute sensitivity to external parameters such as temperature gradients or chemical concentrations. For instance, small perturbations near critical thresholds can trigger bifurcations, transitioning the system from uniform states to coherent patterns, with the scale and form dictated by boundary conditions and system size. This sensitivity underscores their role in understanding self-organization and pattern formation in open systems. Classic examples illustrate these principles vividly. Bénard convection cells, observed in the early 1900s, form when a thin fluid layer heated from below develops hexagonal patterns of upward and downward flows beyond a critical Rayleigh number, dissipating heat more efficiently than conduction alone. The Belousov-Zhabotinsky reaction, discovered in the 1950s, produces temporal oscillations in color and chemical concentrations through autocatalytic cycles involving bromate ions and a metal-ion catalyst, exemplifying spatiotemporal patterns in chemical systems. Lasers represent another paradigm, where stimulated emission in an excited gain medium, pumped by external energy, generates coherent light beams as a dissipative structure, with gain and loss balancing to sustain the ordered emission.
Mathematically, the formation of dissipative structures is often described by reaction-diffusion equations, which couple local reaction kinetics with spatial diffusion. A prototypical form for a two-component system is given by: \begin{align} \frac{\partial u}{\partial t} &= D_u \nabla^2 u + f(u, v), \\ \frac{\partial v}{\partial t} &= D_v \nabla^2 v + g(u, v), \end{align} where u and v are concentrations, D_u and D_v are diffusion coefficients, and f and g represent nonlinear reaction terms, such as in the Brusselator model. These equations predict instabilities like Turing patterns when diffusion rates differ, leading to stationary spatial structures sustained by energy throughput. On a planetary scale, hurricanes and tornadoes exemplify large-scale dissipative structures, where solar energy input drives atmospheric convection and rotation, forming organized vortices that dissipate heat and moisture into the environment, thereby enhancing global entropy export.
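As a sketch of how such instabilities are diagnosed, the following linear stability analysis of the Brusselator (f(u,v) = A - (B+1)u + u^2 v, g(u,v) = Bu - u^2 v; the parameter values are chosen for illustration) evaluates the growth rate of a spatial mode with wavenumber k. The homogeneous state (u*, v*) = (A, B/A) is stable against uniform perturbations, but a band of finite wavelengths grows when the inhibitor diffuses sufficiently faster than the activator — the Turing mechanism.

```python
import numpy as np

A, B = 1.0, 1.9
Du, Dv = 1.0, 10.0    # inhibitor diffuses faster: Turing instability possible

def growth_rate(k):
    """Largest real part of the eigenvalues of the mode-k Jacobian
    J - k^2 diag(Du, Dv), with J the reaction Jacobian at (A, B/A)."""
    Jk = np.array([[B - 1.0 - Du * k**2,  A**2],
                   [-B,                  -A**2 - Dv * k**2]])
    return max(np.linalg.eigvals(Jk).real)

print(growth_rate(0.0))    # negative: homogeneous steady state is stable
print(growth_rate(0.63))   # positive: a finite-wavelength mode grows
```

A positive growth rate at nonzero k with a stable k = 0 mode is the signature of a stationary spatial pattern emerging from a uniform state.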

Systems and Control Theory

Concept of Dissipativity

In systems and control theory, dissipativity refers to a property of dynamical systems where an abstract notion of "energy" or storage does not increase beyond what is supplied externally along system trajectories. Specifically, a system is dissipative with respect to a supply rate function w(u, y), which quantifies the rate of energy supply from inputs u to outputs y, if the accumulated supply bounds any growth of the internal storage. If the supply rate satisfies w(u, y) \leq 0 for all admissible u and y, the system is strictly dissipative, implying that internal storage decreases over time without external input. The foundational framework for dissipativity was established by Jan C. Willems in 1972, who defined it for state-space models through the existence of a non-negative storage function V: X \to \mathbb{R}_{\geq 0}, where x \in X is the state. For a trajectory evolving from initial state x(0) to x(t), dissipativity holds if V(x(t)) \leq V(x(0)) + \int_0^t w(u(s), y(s)) \, ds for all t \geq 0 and all input trajectories u(\cdot). This inequality ensures that any increase in storage is accounted for by the integrated supply, preventing the system from generating energy spontaneously. Willems' approach unifies various concepts by treating V(x) as an abstract measure, applicable beyond physical interpretations. From an input-output perspective, dissipativity generalizes classical notions like passivity, where the supply rate is w(u, y) = u^T y, representing power flow into the system. More broadly, arbitrary supply rates allow analysis of properties such as finite L_2 gain (w(u, y) = \gamma^2 \|u\|^2 - \|y\|^2) or sector boundedness, providing a flexible tool for characterizing input-output behavior without requiring detailed internal models. This framework applies directly to nonlinear dynamical systems of the form \dot{x} = f(x, u), y = h(x, u), assuming well-posedness such as existence and uniqueness of solutions for given inputs.
Unlike thermodynamic dissipative structures, which involve physical entropy production and openness to maintain far-from-equilibrium states, the systems-theoretic concept of dissipativity serves primarily as a tool for stability analysis and controller design. Here, the storage function V(x) is mathematical and not tied to thermodynamic entropy, focusing instead on bounding energy-like quantities to ensure robust behavior under disturbances.
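Willems' inequality can be checked numerically on a simple example. The sketch below (illustrative R, C values and an arbitrary input trajectory) simulates an RC circuit with state x = capacitor voltage, output y = x, storage V(x) = \frac{1}{2} C x^2, and passivity supply rate w(u, y) = u y, then verifies that the final stored energy never exceeds the initial storage plus the integrated supply.

```python
import numpy as np

# RC circuit: dx/dt = -x/(R C) + u/C, input u = source current,
# output y = x. Analytically, dV/dt = u*y - y^2/R <= w(u, y).
R, C = 2.0, 1.0
dt, steps = 1e-3, 5000

x = 0.5                               # initial capacitor voltage
V0 = 0.5 * C * x**2                   # initial storage
supplied = 0.0                        # running integral of w(u, y) dt
for n in range(steps):
    u = np.sin(0.01 * n)              # arbitrary admissible input
    y = x
    supplied += u * y * dt
    x += dt * (-x / (R * C) + u / C)  # forward-Euler circuit dynamics
Vt = 0.5 * C * x**2

# dissipation inequality: V(x(t)) <= V(x(0)) + integral of supply
print(Vt <= V0 + supplied + 1e-9)
```

The slack in the inequality is exactly the energy burned in the resistor, \int y^2/R \, dt, which is why the bound holds with margin for any input.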

Stability and Passivity

In control theory, the dissipativity property of a system establishes a direct link to stability when the supply rate satisfies specific conditions. For a dynamical system \dot{x} = f(x, u), y = h(x, u), dissipativity with respect to a supply rate s(u, y) implies the existence of a nonnegative storage function V(x) such that \dot{V}(x) \leq s(u, y) along system trajectories. If s(u, y) is negative semi-definite (i.e., s(u, y) \leq 0 for all u, y) and the system is zero-state observable, then V(x) serves as a Lyapunov function, ensuring asymptotic stability of the equilibrium. This connection, originally derived for nonlinear systems, unifies energy-based arguments with stability analysis, where the dissipation inequality bounds the increase in stored "energy." Passivity represents a special case of dissipativity, where the supply rate is s(u, y) = u^T y, corresponding to power flow in physical systems. Passive systems are dissipative with respect to this rate, and the passivity theorem guarantees that the feedback interconnection of two strictly passive systems is asymptotically stable, assuming detectability of the outputs. Conversely, passivity can be verified by constructing a storage function that satisfies the dissipation inequality, particularly for linear time-invariant systems via the Kalman-Yakubovich-Popov lemma. These results enable controller design that preserves or enforces passivity, facilitating stabilization through energy-dissipating interconnections. Dissipativity further supports robustness applications, such as the framework of integral quadratic constraints (IQCs), which extends dissipation inequalities to frequency-domain bounds for uncertain systems. IQCs model uncertainties (e.g., nonlinearities or delays) as constraints on input-output signals, allowing linear matrix inequality (LMI)-based tests for robust stability of feedback loops. This approach is also used in adaptive control, where passivity arguments ensure parameter convergence under persistent excitation without destabilizing the system.
A representative example is the analysis of RLC electrical networks, which are passive dissipative systems with storage function given by the total magnetic and electric energy \frac{1}{2} L i^2 + \frac{1}{2} C v^2, and supply rate s(u, y) = v i, where v and i are voltage and current; this underpins passivity-based stabilization of interconnected physical systems. Extensions of dissipativity theory post-2000 have addressed hybrid systems, incorporating discrete switching events while maintaining stability guarantees. In the 2010s, research developed notions of quadratic supply rate (QSR) dissipativity for hybrid interconnections, ensuring that mode transitions preserve overall dissipation and Lyapunov-like stability through multiple storage functions. These advancements enable analysis of cyber-physical systems, such as switched control networks, where dissipativity certifies robustness to abrupt changes.
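A quick frequency-domain sketch of the RLC example (component values are illustrative): for a series RLC branch driven by a terminal voltage with the current as output, passivity corresponds to the admittance Y(s) = 1/(L s + R + 1/(C s)) being positive real, i.e., Re Y(j\omega) \geq 0 at every frequency, so the branch absorbs net power from any sinusoidal source.

```python
import numpy as np

R, L, C = 1.0, 0.5, 0.2    # illustrative component values

def admittance(w):
    """Admittance of a series RLC branch evaluated at s = j*w."""
    s = 1j * w
    return 1.0 / (L * s + R + 1.0 / (C * s))

# sample the real part over a wide frequency range
freqs = np.logspace(-2, 3, 500)
min_real = min(admittance(w).real for w in freqs)
print(min_real)   # nonnegative at every sampled frequency
```

Here Re Y(j\omega) = R/|L j\omega + R + 1/(C j\omega)|^2, which is strictly positive whenever R > 0 — the resistor is the dissipative element that makes the network passive.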

Quantum Mechanics

Open Quantum Systems

Open quantum systems describe quantum mechanical entities that interact non-negligibly with an external environment, or "bath," leading to phenomena such as decoherence, where quantum superpositions decay, and energy dissipation, where the system loses or gains energy to the surroundings. In the quantum domain, dissipative structures emerge through non-equilibrium dynamics, extending Prigogine's classical concepts to quantum fluctuations and coherence. In contrast, closed quantum systems evolve unitarily according to the Schrödinger equation, preserving coherence and energy without external influences. This coupling to the environment fundamentally alters the dynamics, making open systems central to understanding realistic quantum processes in fields like quantum information and quantum optics. A common simplification in modeling open quantum systems is the Markovian approximation, which assumes the environment's memory effects are negligible, allowing the system's evolution to depend only on its current state. This approximation relies on the Born approximation, valid for weak system-bath coupling where the system's state has minimal back-action on the bath, and the secular approximation, which neglects rapidly oscillating terms in the interaction picture to focus on resonant energy exchanges. These assumptions enable tractable derivations of master equations but break down in strong-coupling regimes or structured environments. The dynamics of open quantum systems are rigorously described using the density operator formalism, where the system's state is represented by a density matrix \rho, capturing mixed states arising from environmental entanglement. The time evolution of \rho is governed by completely positive trace-preserving (CPTP) maps, ensuring that probabilities remain non-negative and normalized after any evolution, including dissipative effects. This framework generalizes unitary evolution and accommodates irreversible processes without violating quantum axioms.
Environments in open quantum systems are often modeled as thermal baths consisting of infinite collections of non-interacting harmonic oscillators, providing a bosonic reservoir at a given temperature. The Caldeira-Leggett model, developed in the early 1980s, exemplifies this approach by coupling a central quantum system—typically a particle in a potential—to such a bath via bilinear interactions, yielding Ohmic or sub-Ohmic spectral densities that capture realistic frictional and noisy effects. This model has become foundational for studying quantum dissipation and tunneling in condensed matter systems. Recent advancements since 2020 have emphasized non-Markovian effects in open quantum systems within quantum thermodynamics, where memory correlations in the bath can lead to information backflow and modified fluctuation relations. Reviews from 2022 onward highlight how these effects enable thermodynamic advantages, such as enhanced work extraction in quantum engines, challenging classical Markovian limits. For instance, non-Markovian dynamics have been shown to influence work and heat statistics, with fluctuation theorems extended to account for temporal correlations in structured reservoirs. This focus underscores the interplay between memory effects and dissipation in emerging quantum technologies.

Quantum Dissipative Models

Quantum dissipative models describe the dynamics of open quantum systems interacting with their environment, leading to irreversible processes such as decoherence and relaxation. The most general framework for Markovian dynamics in these systems is provided by the Lindblad master equation, which ensures complete positivity and trace preservation of the density operator \rho. The equation takes the form \frac{d\rho}{dt} = -i [H, \rho] + \sum_k \left( L_k \rho L_k^\dagger - \frac{1}{2} \{ L_k^\dagger L_k, \rho \} \right), where H is the system Hamiltonian and the L_k are Lindblad operators representing the dissipative channels, such as coupling to a bosonic bath. This form was derived independently by Gorini, Kossakowski, and Sudarshan, and by Lindblad, in 1976 as the generator of quantum dynamical semigroups. A paradigmatic example is the damped quantum harmonic oscillator, modeling phenomena like photon loss in optical cavities. For a Hamiltonian H = \omega a^\dagger a with damping via the jump operator L = \sqrt{\gamma} a (where \gamma is the decay rate and a the annihilation operator), the Lindblad equation yields exact solutions for expectation values through Heisenberg-like equations of motion. The mean photon number \langle a^\dagger a \rangle decays exponentially as \langle a^\dagger a (t) \rangle = \langle a^\dagger a (0) \rangle e^{-\gamma t}, while coherences \langle a \rangle satisfy \frac{d}{dt} \langle a \rangle = -i \omega \langle a \rangle - \frac{\gamma}{2} \langle a \rangle. These resemble the classical Bloch equations but incorporate quantum fluctuations. Another prominent model is the spin-boson model, which captures decoherence in two-level systems like qubits coupled to a bosonic bath. The Hamiltonian is H = -\frac{\Delta}{2} \sigma_x + \frac{\epsilon}{2} \sigma_z + \sigma_z \sum_j \lambda_j (b_j + b_j^\dagger) + \sum_j \omega_j b_j^\dagger b_j, with dissipation arising from the bath modes. In the Markovian limit, it reduces to a Lindblad equation with operators proportional to \sigma_- for relaxation and \sigma_z for dephasing.
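The exponential decay of the mean photon number can be verified by integrating the Lindblad equation directly in a truncated Fock space. This is a hedged sketch with illustrative \omega, \gamma, and truncation size, using plain NumPy rather than a dedicated library such as QuTiP:

```python
import numpy as np

N = 12                                       # Fock-space truncation (illustrative)
omega, gamma = 1.0, 0.5
a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # annihilation operator
ad = a.conj().T
H = omega * (ad @ a)
L = np.sqrt(gamma) * a                       # single jump operator

def lindblad_rhs(rho):
    """Right-hand side of the Lindblad master equation."""
    comm = -1j * (H @ rho - rho @ H)
    anti = L.conj().T @ L
    return comm + L @ rho @ L.conj().T - 0.5 * (anti @ rho + rho @ anti)

rho = np.zeros((N, N), dtype=complex)
rho[3, 3] = 1.0                              # initial Fock state |3>

dt, steps = 1e-3, 2000                       # evolve to t = 2 with midpoint (RK2) steps
for _ in range(steps):
    k1 = lindblad_rhs(rho)
    rho = rho + dt * lindblad_rhs(rho + 0.5 * dt * k1)

n_t = np.trace(ad @ a @ rho).real
print(n_t, 3 * np.exp(-gamma * 2.0))         # numerically close: <n(t)> = 3 e^{-gamma t}
```

The trace of \rho is preserved along the integration (the Lindblad generator is trace-free), and the computed \langle a^\dagger a \rangle matches the analytic decay law.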
Exact solutions are available in the non-interacting blip approximation for weak coupling, revealing phenomena like coherent-incoherent transitions. Dissipative phase transitions emerge in many-body systems under Lindblad dynamics, where steady states exhibit critical behavior driven by dissipation. A key example is the open Dicke model of cavity quantum electrodynamics, involving N two-level atoms coupled to a lossy cavity mode. The model H = \omega_c a^\dagger a + \omega_0 S_z + g (a + a^\dagger) S_x with cavity decay L = \sqrt{\kappa} a undergoes a second-order phase transition at critical coupling g_c = \sqrt{\omega_c \omega_0 / (2N)}, where the steady-state photon number jumps from zero to macroscopic values, signaling superradiance. Strong dissipation can suppress quantum evolution, manifesting as the quantum Zeno effect, where frequent "measurements" via environmental coupling inhibit transitions. In Lindblad terms, large jump rates confine the state to a decoherence-free subspace, effectively freezing dynamics; for instance, in a driven two-level system with strong dephasing, the off-diagonal coherences decay rapidly, stabilizing the initial state. Recent advances leverage dissipation for quantum information processing, particularly error correction. In 2023, proposals demonstrated autonomous error correction using dissipatively stabilized squeezed-cat qubits, where engineered baths correct phase-flip errors by driving the system toward cat-like steady states resilient to single-photon loss, achieving lifetimes extended by factors of 10 beyond bare qubits. Building on this, as of 2025, hardware-efficient implementations using linear arrays of bosonic modes and enhanced squeezing methods have achieved up to 160-fold improvements in bit-flip error protection, advancing fault-tolerant quantum computation. To solve Lindblad equations numerically, especially for large systems, the Monte Carlo wavefunction (MCWF) method unravels the master equation into stochastic pure-state trajectories.
Each trajectory evolves under an effective non-Hermitian Hamiltonian H_{\text{eff}} = H - \frac{i}{2} \sum_k L_k^\dagger L_k, interrupted by random quantum jumps |\psi\rangle \to L_k |\psi\rangle / \| L_k |\psi\rangle \|, occurring with rates \langle \psi | L_k^\dagger L_k | \psi \rangle. Averaging over many trajectories reconstructs the density matrix, offering efficiency for high-dimensional problems over direct integration of the master equation.
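For the simplest case—spontaneous emission of a two-level atom with H = 0 and a single jump operator L = \sqrt{\gamma}\,\sigma_-—the no-jump evolution plus renormalization leaves a trajectory in |e\rangle until a jump sends it to |g\rangle, so each MCWF trajectory reduces to one exponentially distributed jump time, and the trajectory average reproduces the Lindblad result \langle \sigma_+ \sigma_- \rangle(t) = e^{-\gamma t}. A minimal sketch with illustrative parameters, vectorized over trajectories:

```python
import numpy as np

rng = np.random.default_rng(0)
gamma, dt, steps, ntraj = 1.0, 1e-3, 2000, 5000

in_excited = np.ones(ntraj, dtype=bool)   # all trajectories start in |e>
mean_pop = np.empty(steps)
for n in range(steps):
    mean_pop[n] = in_excited.mean()       # ensemble-averaged excited population
    # jump probability this step is gamma*dt for trajectories still in |e>
    jumps = rng.random(ntraj) < gamma * dt
    in_excited &= ~jumps                  # a jump projects the state onto |g>

# compare the stochastic average against the deterministic Lindblad decay
print(mean_pop[-1], np.exp(-gamma * dt * (steps - 1)))
```

With 5000 trajectories the statistical error is well below a percent, illustrating why jump unravelings scale better than density-matrix evolution: each trajectory stores a state vector, not a full matrix.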

Applications

In Physics and Chemistry

In physics, dissipative systems underpin key phenomena in laser physics, where the operation of the laser itself represents a paradigmatic example of a dissipative structure, emerging from the interplay of pumping, stimulated emission, and energy dissipation to produce coherent light, as first theoretically described in the context of synergetics during the 1970s. Building on this, dissipative solitons—stable, localized light pulses sustained by a balance of gain, loss, nonlinearity, and dispersion—were conceptualized in the 1990s for mode-locked lasers, enabling high-energy pulses and complex spatiotemporal dynamics. In fluid dynamics, turbulence serves as a canonical instance of dissipative chaos, wherein kinetic energy is transferred across scales via nonlinear interactions, ultimately dissipated as heat at small scales, resulting in unpredictable yet statistically structured patterns. Similarly, in nonlinear optics, dissipative cavities exhibit rich behaviors such as bistability and pattern formation, where external driving and internal losses foster self-organized structures like Kerr solitons in microresonators. In chemistry, dissipative systems drive oscillatory reactions beyond the well-known Belousov-Zhabotinsky type, including the Brusselator, a theoretical model developed by the Brussels school to capture autocatalytic oscillations analogous to those in real reaction networks, where far-from-equilibrium conditions sustain periodic concentration fluctuations through continuous energy input. Oscillatory behavior on catalytic surfaces, such as during hydrogen oxidation on platinum, arises from reaction-diffusion instabilities, producing spatiotemporal waves and Turing-like structures that propagate due to adsorption-desorption kinetics and surface heterogeneity. These chemical examples highlight how dissipation enables ordered dynamics in otherwise chaotic reaction networks, often modeled via computer simulations to reveal nanoscale oscillations and chaos.
Dissipative mechanisms also facilitate self-assembly in colloidal systems, as demonstrated in 2010s experiments where out-of-equilibrium driving—such as chemical fuels or light—induces transient, non-equilibrium structures like dynamic clusters of patchy particles, overcoming the static limitations of equilibrium assembly. In climate modeling, atmospheric circulation functions as a large-scale dissipative structure, organizing convective cells and jet streams to transport heat and maintain global energy balance through irreversible processes, with post-2020 integrations in climate assessments emphasizing non-equilibrium thermodynamics for projecting circulation changes under warming scenarios. A notable modern advancement occurred in 2021, when experiments in quantum optics realized dissipative time crystals in a driven nuclear spin ensemble, observing subharmonic oscillations that persist indefinitely in an open system, defying detailed balance via continuous dissipation and periodic driving.

In Biology and Ecology

Living systems exemplify dissipative structures by sustaining ordered states far from thermodynamic equilibrium through ongoing energy and matter exchanges with their environment. Life has long been conceptualized as inherently tied to dissipation, where irreversible processes drive organization and complexity, preventing decay into equilibrium despite the second law of thermodynamics. In cells, this manifests via metabolic pathways that hydrolyze ATP to fuel biosynthesis and maintain concentration gradients, dissipating energy as heat to counteract entropic disorder and enable functions like protein synthesis and membrane integrity. Such mechanisms ensure that cellular organization persists only under continuous energy input, aligning with Prigogine's view that biological order arises from dissipative fluxes rather than isolated equilibrium states. Embryonic development illustrates dissipative principles through reaction-diffusion processes that generate morphogen gradients, crucial for patterning tissues. These gradients, often modeled by Turing instabilities, form spatial structures where activators and inhibitors diffuse at differing rates, modulated by binding to surrounding tissue, leading to self-organized patterns like somites or limb buds. This process requires non-equilibrium conditions, with energy dissipation sustaining transient instabilities that resolve into stable developmental architectures, as seen in vertebrate axis formation. In biological neural networks, dissipative self-organization emerges in oscillatory rhythms, where far-from-equilibrium interactions among neurons produce coherent firing patterns essential for perception and memory, akin to chemical dissipative waves. Ecosystems function as vast open dissipative networks, channeling energy through food webs where each trophic level dissipates a portion via respiration, maximizing energy throughput while building organizational hierarchies. Energy flows exhibit allometric scaling, with throughflow rates across trophic levels following power laws that reflect efficient energy distribution, as observed in diverse systems like bays and lakes.
Population dynamics within these networks can be captured by extended Lotka-Volterra models incorporating diffusion, which predict the spontaneous emergence of dissipative structures under non-equilibrium conditions, stabilizing heterogeneous species distributions far from uniform equilibrium. Such models highlight how ecological order arises from nonlinear interactions and energy gradients, unifying trophic efficiencies—typically 10% across levels—with thermodynamic imperatives. In conservation ecology, links between climate stress and ecosystem function reveal biodiversity loss as a dissipative collapse, where drought and heat disrupt energy flows, eroding established structures and triggering reorganization toward new configurations, as documented in post-2020 analyses of global biodiversity hotspots. This tipping amplifies feedback loops, with canopy loss altering microclimates and accelerating degradation in once-stable systems.
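As a minimal numerical illustration of dissipative population dynamics (parameters are illustrative, not fitted to any ecosystem), a Lotka-Volterra model with logistic prey growth — unlike the conservative classical model, whose orbits cycle forever — spirals into a coexistence steady state sustained by the finite resource inflow:

```python
import numpy as np

# dx/dt = r x (1 - x/K) - a x y   (prey with carrying capacity K)
# dy/dt = b a x y - m y           (predator with mortality m)
# The resource limit K makes the dynamics dissipative: trajectories
# converge to x* = m/(b a), y* = r (1 - x*/K)/a.
r, K, a, b, m = 1.0, 10.0, 0.5, 0.5, 1.0
x, y = 8.0, 0.5                    # initial populations
dt, steps = 1e-3, 100000           # integrate to t = 100 (forward Euler)
for _ in range(steps):
    dx = r * x * (1 - x / K) - a * x * y
    dy = b * a * x * y - m * y
    x += dx * dt
    y += dy * dt

x_star = m / (b * a)
y_star = r * (1 - x_star / K) / a
print(x, y)                        # close to (x*, y*) = (4.0, 1.2)
```

The damped spiral toward (x*, y*) is the population-dynamics analogue of a dissipative steady state: sustained coexistence requires the continuous prey "energy" inflow, and perturbations decay rather than persist.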

In Engineering and Complex Systems

In engineering applications, dissipative systems principles are employed to enhance stability and performance in feedback control mechanisms, particularly in robotics, where dissipative observers facilitate accurate state estimation. These observers leverage dissipation properties to reconstruct unmeasurable states in dynamic systems, such as soft robots, by integrating boundary measurements with Lyapunov-based guarantees. For instance, in soft robotic manipulators, dissipative disturbance observers enable robust trajectory tracking without acceleration sensors, compensating for unknown payloads and environmental interactions through passivity-inspired designs. In power systems, passivity concepts from dissipative theory underpin stability analysis and control design, ensuring reliable operation amid uncertainties like fluctuating renewable inputs. Passivity-based approaches model power networks as interconnected dissipative components, where dissipation in lines and converters maintains global stability during transients or faults. This is evident in microgrids, where passivity guarantees bounded outputs and facilitates decentralized stabilization without centralized coordination. Within complex systems, dissipative frameworks model economic processes as open systems where transaction costs act as friction-like dissipative forces, preventing frictionless equilibration and introducing realistic inefficiencies in markets. Drawing from 1990s research at the Santa Fe Institute on evolving complex economies, these models treat markets as dissipative structures that self-organize through agent interactions, with transaction costs dissipating excess energy analogous to thermodynamic friction. A key example of dissipative networks in complex systems is traffic flow, where nonlinear vehicle interaction leads to jam formation as emergent patterns in driven particle systems. Traffic is conceptualized as a non-equilibrium dissipative system, with driver-vehicle interactions creating asymmetric forces that trigger Hopf bifurcations, resulting in stable jam clusters propagating backward at characteristic speeds.
Empirical validations from highway traffic data confirm that these dissipative models explain jam onset at critical densities, around 0.06 vehicles per meter. Recent advances in neuromorphic computing apply dissipative principles to neural hardware for energy-efficient computation, where dissipation enhances computational performance by mimicking natural energy flows in spiking neuron models. In socioeconomic contexts, post-2008 financial crash models interpret market collapses as dissipative bifurcations, where accumulated instabilities in adaptive financial networks lead to abrupt transitions. These models, informed by critical slowing down near bifurcations, view crashes as shifts in dissipative equilibria, with leverage cycles amplifying dissipation until systemic reconfiguration occurs, as analyzed in empirical studies of the 2008 downturn.
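The jam-formation threshold can be sketched with the linear stability criterion of the optimal-velocity traffic model (Bando-type; the optimal-velocity function and parameter values here are illustrative): uniform flow at headway h destabilizes — allowing jam clusters to nucleate — when V'(h) > a/2, where a is the drivers' sensitivity (relaxation rate) and V(h) the optimal velocity.

```python
import numpy as np

def V(h):
    """Optimal-velocity function (a common tanh form, illustrative)."""
    return np.tanh(h - 2.0) + np.tanh(2.0)

def Vprime(h, eps=1e-6):
    """Numerical derivative of the optimal-velocity function."""
    return (V(h + eps) - V(h - eps)) / (2 * eps)

a = 1.0                        # driver sensitivity
# Dense traffic (small headway): V'(h) exceeds a/2, uniform flow unstable.
print(Vprime(2.0), a / 2)
# Sparse traffic (large headway): V'(h) below a/2, uniform flow stable.
print(Vprime(4.0), a / 2)
```

The steep part of V(h) at small headways is what makes dense uniform flow linearly unstable, matching the observation that jams appear only above a critical density.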