Enthalpy is a thermodynamic state function defined as the sum of a system's internal energy and the product of its pressure and volume, mathematically expressed as H = U + PV, where U is the internal energy, P is the pressure, and V is the volume.[1][2][3] As a state function, the change in enthalpy (\Delta H) depends only on the initial and final states of the system, not on the path taken between them.[1]

Enthalpy is particularly significant in processes occurring at constant pressure, where the change in enthalpy equals the heat transferred to or from the system (\Delta H = q_p). This simplifies the application of the first law of thermodynamics, which states that the change in internal energy is \Delta U = q + w, with w = -P \Delta V for pressure-volume work.[2][1] The relationship \Delta H = \Delta U + P \Delta V provides a direct measure of heat exchange for reactions carried out in vessels open to the atmosphere, such as combustion or dissolution processes.[2] Positive values of \Delta H indicate endothermic processes, in which heat is absorbed, while negative values denote exothermic processes, in which heat is released.[1]

In chemistry and engineering, enthalpy plays a central role in analyzing phase transitions, chemical reactions, and energy balances; for instance, the enthalpy of vaporization of water at 298 K is 44.0 kJ/mol, and its enthalpy of fusion at 273.15 K is 6.01 kJ/mol, quantifying the energy required for these changes.[1] Enthalpy is also essential in fields such as aeronautics for modeling gas flows in engines, where specific enthalpy changes relate to temperature variations via \Delta h = c_p \Delta T at constant pressure, aiding the design of efficient propulsion systems.[3] Overall, enthalpy facilitates the prediction and control of energy transfers in thermodynamic systems, underpinning much of modern physical chemistry and engineering.[2][3]
Fundamentals
Definition
Enthalpy, denoted as H, is a thermodynamic state function defined as the sum of the internal energy U of a system and the product of its pressure P and volume V:

H = U + PV

The formulation H = U + PV was introduced by Josiah Willard Gibbs, who referred to it as the "heat function for constant pressure," in his 1876 work on the equilibrium of heterogeneous substances.[4] The term "enthalpy" was coined in 1909 by the Dutch physicist Heike Kamerlingh Onnes.[5]

Enthalpy represents the total energy content of a thermodynamic system, encompassing the internal energy, which accounts for the microscopic kinetic and potential energies of the particles, and the additional energy associated with the work done by or on the system due to pressure-volume interactions.[6] As such, it provides a convenient measure for analyzing energy transfers in processes involving expansion or compression against external pressure.[7]

In the International System of Units (SI), the unit of enthalpy is the joule (J), consistent with its nature as an energy quantity.[6] Enthalpy is an extensive property, meaning its magnitude scales with the size or amount of substance in the system; for instance, doubling the mass of a homogeneous system doubles its enthalpy.[8]
Mathematical Expressions
Enthalpy H is characterized as a thermodynamic state function through its total differential form, derived from the first law of thermodynamics. The first law states that for a closed system, the change in internal energy is dU = \delta q - \delta w, where \delta q is the heat added to the system and \delta w is the work done by the system. For reversible processes involving only pressure-volume work, \delta w = P dV, leading to dU = \delta q - P dV. Enthalpy is introduced as H = U + PV, so its differential is dH = dU + P dV + V dP. Substituting the first law expression yields dH = \delta q + V dP.[9][10]

From the combined first and second laws of thermodynamics, dU = T dS - P dV, the enthalpy differential simplifies further to dH = T dS + V dP. This form highlights enthalpy's status as a state function, with the exact differential depending on entropy S and pressure P. The natural variables for H are thus S and P, since the coefficients T = \left( \frac{\partial H}{\partial S} \right)_P and V = \left( \frac{\partial H}{\partial P} \right)_S are the conjugate intensive and extensive variables, respectively.[9]

Enthalpy can be understood as the Legendre transform of the internal energy U(S, V), specifically transforming the volume variable V to its conjugate pressure P. The transform relation is H(S, P) = U(S, V) + P V, where V = V(S, P) is determined such that P = -\left( \frac{\partial U}{\partial V} \right)_S. This transformation shifts the natural variables from (S, V) to (S, P), preserving the thermodynamic information while adapting to constant-pressure conditions. In practice, enthalpy is often expressed as a function of temperature T and pressure P, H(T, P), by inverting S = S(T, P) from the Gibbs free energy or equation of state relations.[11][12]

Key partial derivatives of enthalpy reveal its thermodynamic relations. The heat capacity at constant pressure is given by C_P = \left( \frac{\partial H}{\partial T} \right)_P, representing the rate of enthalpy change with temperature under isobaric conditions. Another important derivative, derived using Maxwell relations from dH = T dS + V dP, is \left( \frac{\partial H}{\partial P} \right)_T = V - T \left( \frac{\partial V}{\partial T} \right)_P. Here, the Maxwell relation \left( \frac{\partial S}{\partial P} \right)_T = -\left( \frac{\partial V}{\partial T} \right)_P from the Gibbs free energy potential is applied to express the entropy dependence. These derivatives underscore enthalpy's role in describing responses to temperature and pressure variations.[9]
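The Maxwell-relation result above can be checked symbolically; the minimal sketch below (assuming the sympy package is available) verifies that for an ideal gas the isothermal pressure derivative of enthalpy vanishes, so H depends on temperature alone.

```python
# Minimal sketch (assumes sympy is installed): check that an ideal gas obeys
# (dH/dP)_T = V - T*(dV/dT)_P = 0, so its enthalpy depends on temperature only.
import sympy as sp

T, P, n, R = sp.symbols('T P n R', positive=True)

# Ideal-gas equation of state solved for V(T, P)
V = n * R * T / P

# Isothermal pressure dependence of enthalpy from the Maxwell-relation result
dH_dP_constT = V - T * sp.diff(V, T)

print(sp.simplify(dH_dP_constT))   # -> 0 for the ideal gas
```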
Physical Interpretation
Enthalpy can be intuitively understood as the total heat content of a thermodynamic system, particularly under conditions of constant pressure, where it encompasses not only the system's internal energy but also the energy associated with expansion work against the surrounding pressure. This perspective arises because, in many practical scenarios like chemical reactions or phase changes occurring in open containers, pressure remains fixed, and enthalpy effectively captures the energy available for heat transfer without needing to separately account for mechanical work done by the system.[13][1]

In processes at constant pressure, the change in enthalpy, ΔH, directly corresponds to the heat absorbed or released by the system, providing a straightforward measure of thermal energy exchange. For an exothermic process, such as combustion, ΔH is negative, indicating heat is liberated to the surroundings; conversely, an endothermic process, like dissolution of certain salts, has a positive ΔH, signifying heat uptake from the environment. This equivalence simplifies the analysis of heat flows in constant-pressure calorimetry, making enthalpy a practical tool for experimental thermodynamics.[2][1]

Enthalpy's utility extends to open systems, such as those involving fluid flow, where it represents the total energy transported across system boundaries by the moving mass, including both internal energy and the work required to push the fluid through the system. This interpretation is particularly valuable in engineering contexts like turbines or pipelines, as it allows for efficient tracking of energy balances without isolating flow work contributions.[14]

A relatable everyday example is the boiling of water in an open pot at atmospheric pressure, where the enthalpy of vaporization quantifies the heat needed to convert liquid water to steam, approximately 40.7 kJ per mole at 100°C, reflecting the energy to overcome intermolecular forces and perform expansion work against constant pressure. This process illustrates how enthalpy change governs the heat input required for phase transitions in routine applications like cooking.[15]
Thermodynamic Relations
Relation to Internal Energy and Work
Enthalpy H is defined as the sum of the internal energy U and the product of pressure P and volume V, expressed mathematically as H = U + PV[10]. The internal energy U represents the total microscopic energy of a system, encompassing the kinetic and potential energies associated with the motion and interactions of its constituent molecules[16]. In contrast, the PV term captures the macroscopic work potential of the system, specifically the reversible pressure-volume work that the system can perform or absorb during expansion or compression against an external pressure[2].

This definition arises from the first law of thermodynamics, which states that the differential change in internal energy is dU = \delta q - \delta w, where \delta q is the heat added to the system and \delta w is the work done by the system, often expressed as PdV for pressure-volume work in closed systems[10]. Substituting H = U + PV yields the differential form dH = dU + PdV + VdP. Incorporating the first law and assuming only PdV work, this simplifies to dH = \delta q + VdP[2]. At constant pressure, where dP = 0, the equation reduces to dH = \delta q_p, demonstrating that the change in enthalpy directly equals the heat transferred in constant-pressure processes[10].

In closed systems, internal energy U alone does not account for the work associated with volume changes against a constant external pressure, making it less convenient for analyzing such processes[2]. Enthalpy addresses this by incorporating the PV term, which effectively includes the expansion work P\Delta V in the energy balance, allowing \Delta H to represent the total heat content adjusted for this work[10]. This makes H particularly useful for processes like chemical reactions or phase changes occurring at constant pressure, where volume adjustments are common.

Josiah Willard Gibbs introduced the enthalpy function in his 1876 paper "On the Equilibrium of Heterogeneous Substances" to simplify the thermodynamic analysis of systems at constant pressure, defining it as \chi = \epsilon + pv (where \epsilon is internal energy and pv is the pressure-volume product) as a "heat function" that facilitates equilibrium calculations by focusing on heat exchanges without explicit work terms[17][18]. Gibbs' formulation emphasized its role in representing the total energy available for heat transfer under these conditions, laying the groundwork for its widespread use in thermodynamics[17].
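As a concrete numeric illustration of the constant-pressure case, the minimal sketch below (with assumed, illustrative values for one mole of a diatomic ideal gas) shows that \Delta H = \Delta U + P\Delta V reproduces q_p = C_P \Delta T.

```python
# Minimal numeric sketch: heating 1 mol of an ideal diatomic gas by 100 K at
# constant pressure, showing that Delta H = Delta U + P*Delta V = q_p.
R = 8.314           # J/(mol K)
n, dT = 1.0, 100.0  # amount and temperature rise (illustrative values)

Cv = 5 / 2 * R      # constant-volume molar heat capacity of a diatomic ideal gas
Cp = Cv + R         # Cp - Cv = R for an ideal gas

dU = n * Cv * dT    # change in internal energy
w_pv = n * R * dT   # P*Delta V = n*R*Delta T at constant pressure
dH = dU + w_pv      # enthalpy change

print(f"dU = {dU:.0f} J, P*dV = {w_pv:.0f} J, dH = {dH:.0f} J")
print(f"Cp*dT = {n * Cp * dT:.0f} J  (equals dH = q_p)")
```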
Relation to Heat in Processes
In thermodynamic processes conducted at constant pressure, the change in enthalpy of a system, \Delta H, is equal to the heat transferred to or from the system, denoted as q_p. This relationship arises from the first law of thermodynamics, where the enthalpy accounts for both the internal energy change and the pressure-volume work under isobaric conditions.[19][2]

Enthalpy is a state function, meaning its change between two states is independent of the path taken, whether the process is reversible or irreversible. In contrast, the heat transfer q is path-dependent and varies depending on the specific conditions of the process, such as the rate of expansion or the presence of irreversibilities like friction. This path independence allows \Delta H to be calculated reliably using standard values, even if the actual heat exchanged differs from q_p in non-ideal scenarios.[20][21]

For processes not occurring at constant pressure, \Delta H generally does not equal q, as the pressure-volume term contributes differently to the energy balance. However, enthalpy changes remain valuable in experimental contexts like constant-pressure calorimetry, where devices such as coffee-cup calorimeters approximate isobaric conditions to directly measure \Delta H through observed heat flows.[2][22]

In open systems, where mass flows across the boundaries, the first law of thermodynamics naturally incorporates enthalpy to describe the total energy transport, including both internal energy and flow work associated with the incoming and outgoing streams. This makes \Delta H essential for analyzing steady-state processes, though detailed derivations for flow applications are considered separately.[23][14]
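A constant-pressure (coffee-cup) calorimetry calculation of the kind described above can be sketched numerically; the masses, concentrations, and temperature rise used below are illustrative assumptions rather than measured data.

```python
# Minimal coffee-cup calorimetry sketch (illustrative numbers): estimating the
# molar enthalpy of neutralization from an observed temperature rise at
# constant (atmospheric) pressure, where Delta H = q_p.
c_water = 4.184      # J/(g K), specific heat of the dilute aqueous solution
mass = 100.0         # g, total solution mass (50 mL acid + 50 mL base, assumed)
dT = 6.9             # K, measured temperature rise (assumed value)
n_reacted = 0.050    # mol of H2O formed (50 mL of 1.0 M acid and base)

q_solution = mass * c_water * dT        # heat absorbed by the solution
dH_molar = -q_solution / n_reacted      # heat released by the reaction, per mole

print(f"Delta H ~ {dH_molar / 1000:.1f} kJ/mol")   # ~ -58 kJ/mol
```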
Natural Variables and Characteristic Functions
In thermodynamics, the natural variables of a potential function are those independent variables in terms of which the function is most naturally expressed, allowing for straightforward determination of equilibrium conditions and response functions under specific constraints. For enthalpy H, the natural variables are entropy S and pressure P, which contrasts with the internal energy U, whose natural variables are S and volume V.[24][25] This selection arises from the Legendre transform relating H to U, substituting pressure for volume to facilitate analysis in scenarios where pressure is controlled.[26]

Enthalpy is particularly useful as a characteristic function for processes conducted at constant pressure, where volume may vary, such as in chemical reactions occurring within a piston-cylinder device maintained at fixed external pressure. In such systems, the minimization of H at constant S and P identifies the equilibrium state, and changes in H directly correspond to heat exchanged without accounting for expansion work explicitly.[27][9]

The table below compares the four primary thermodynamic potentials, highlighting their natural variables, differential forms, and typical applications:

| Potential | Definition | Natural variables | Differential form | Typical application |
| --- | --- | --- | --- | --- |
| Internal energy U | U | S, V | dU = T dS - P dV | Isolated systems; constant-entropy, constant-volume processes |
| Enthalpy H | H = U + PV | S, P | dH = T dS + V dP | Constant-pressure heat transfer; calorimetry and flow processes |
| Helmholtz free energy F | F = U - TS | T, V | dF = -S dT - P dV | Constant-temperature, constant-volume processes; statistical mechanics |
| Gibbs free energy G | G = H - TS | T, P | dG = -S dT + V dP | Chemical and phase equilibria at constant temperature and pressure |

These potentials are interconnected via Legendre transforms, and each is extremized (minimized for stable equilibria) under its respective constraints.[24][25][27] Enthalpy also serves as a characteristic function in phase equilibria under constant entropy and pressure conditions, where its minimum denotes stability, though such cases are less common than those involving temperature and pressure.[28]
Applications in Chemistry
Enthalpy of Reaction
The enthalpy of reaction, denoted as \Delta H_{\text{rxn}}, is defined as the change in enthalpy when reactants are converted to products in a chemical reaction, as written in the balanced equation, under conditions of constant pressure. This quantity represents the heat transferred to or from the system at constant pressure, where \Delta H = q_p.

The standard enthalpy of reaction, \Delta H^\circ, is the enthalpy change for a reaction under standard conditions: a temperature of 298 K, a pressure of 1 bar, and with all reactants and products in their standard states (pure substances in their most stable form, such as ideal gases for gases, pure liquids or solids for condensed phases). These conditions ensure comparability across reactions and are tabulated for many compounds based on experimental measurements.

Hess's law states that the standard enthalpy of reaction is the same whether the reaction occurs in one step or in a series of steps, because enthalpy is a state function independent of the path taken. This additivity allows the calculation of \Delta H^\circ for complex reactions by summing the enthalpies of intermediate steps or using formation enthalpies, as \Delta H^\circ_{\text{rxn}} = \sum \Delta H^\circ_f(\text{products}) - \sum \Delta H^\circ_f(\text{reactants}). The principle was established through experimental verification in the 19th century and remains a cornerstone for thermochemical computations.

Chemical reactions are classified as exothermic if \Delta H < 0, indicating heat release to the surroundings, or endothermic if \Delta H > 0, indicating heat absorption from the surroundings. At the microscopic level, these changes arise from differences in bond energies: exothermic reactions typically form stronger bonds overall than those broken, while endothermic reactions involve net bond weakening. For instance, the combustion of methane is exothermic because the bonds in the product CO₂ and H₂O are collectively stronger than those in CH₄ and O₂.
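A minimal sketch of the Hess's-law bookkeeping just described, using commonly tabulated standard formation enthalpies (approximate values) for the combustion of methane:

```python
# Minimal Hess's-law sketch: standard reaction enthalpy for CH4 + 2 O2 -> CO2 + 2 H2O(l)
# from standard enthalpies of formation (approximate tabulated values, kJ/mol).
dHf = {"CH4(g)": -74.8, "O2(g)": 0.0, "CO2(g)": -393.5, "H2O(l)": -285.8}

reactants = {"CH4(g)": 1, "O2(g)": 2}   # stoichiometric coefficients
products = {"CO2(g)": 1, "H2O(l)": 2}

dH_rxn = (sum(nu * dHf[s] for s, nu in products.items())
          - sum(nu * dHf[s] for s, nu in reactants.items()))

print(f"Delta H_rxn ~ {dH_rxn:.1f} kJ/mol")   # ~ -890 kJ/mol (exothermic)
```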
Enthalpy Changes in Chemical Properties
Enthalpy changes in chemical reactions, denoted as ΔH, exhibit dependence on temperature through Kirchhoff's law, which relates the variation of the reaction enthalpy with temperature to the difference in heat capacities at constant pressure between products and reactants. Specifically, the law is expressed as

\frac{d(\Delta H)}{dT} = \Delta C_p

where \Delta C_p is the difference in molar heat capacities (C_{p,\text{products}} - C_{p,\text{reactants}}). This relation allows correction of standard enthalpy changes measured at one temperature, such as 298 K, to other temperatures by integrating over the heat capacity functions, assuming \Delta C_p is either constant or known as a function of temperature. For many reactions, if \Delta C_p is small or approximately constant, the enthalpy change varies linearly with temperature, providing a practical tool for thermochemical calculations in processes like combustion or synthesis where operating temperatures differ from standard conditions.[29]

The pressure dependence of the reaction enthalpy arises from the thermodynamic identity

\left( \frac{\partial (\Delta H)}{\partial P} \right)_T = \Delta \left[ V - T \left( \frac{\partial V}{\partial T} \right)_P \right],

derived from the fundamental relation for enthalpy differentials and Maxwell relations linking partial derivatives of thermodynamic potentials. This expression, often written as \Delta[V(1 - \alpha T)] where \alpha is the thermal expansion coefficient, indicates that for condensed phases like liquids and solids the term inside the brackets is small, making ΔH nearly independent of pressure. However, for gas-phase reactions, where volumes are larger and more compressible, pressure effects can be significant, especially at high pressures, necessitating corrections in applications such as industrial gas reactions or supercritical processes. For ideal gases, the pressure dependence vanishes entirely since enthalpy depends only on temperature.[30][31]

Standard enthalpies of formation, ΔH_f°, quantify the enthalpy change for forming one mole of a compound from its elements in their standard states at 298.15 K and 1 bar pressure, serving as fundamental building blocks for calculating reaction enthalpies via Hess's law. These values are tabulated in comprehensive databases, with examples including ΔH_f° for CO_2(g) at -393.51 kJ/mol and for H_2O(l) at -285.83 kJ/mol, reflecting the stability of the compound relative to its elements. Such data enable prediction of enthalpy changes for any reaction by summing the formation enthalpies of products minus those of reactants, adjusted for stoichiometric coefficients, and are essential for assessing reaction feasibility in chemical engineering and materials design. The standard state convention ensures consistency across compounds, with elements in their reference forms assigned ΔH_f° = 0 by definition.[32]

Catalysts influence the rate of chemical reactions by lowering the activation energy barrier but do not alter the overall enthalpy change ΔH, as the thermodynamic state function depends solely on initial and final states, independent of the pathway. This invariance holds because catalysts participate transiently without net consumption or production, preserving the energy balance of the reaction. In practice, this allows catalysts to accelerate processes like hydrogenation or oxidation without shifting the equilibrium position or the reaction thermodynamics, their role being kinetic rather than energetic.[33]
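As a worked illustration of the Kirchhoff correction above, the following sketch shifts a standard reaction enthalpy from 298 K to 500 K under the constant-\Delta C_p assumption; the heat-capacity and enthalpy values for ammonia synthesis are approximate literature figures used only for illustration.

```python
# Minimal Kirchhoff's-law sketch: shifting a standard reaction enthalpy from
# 298 K to 500 K assuming Delta C_p is constant over the interval.
# Illustrative data for N2 + 3 H2 -> 2 NH3 (approximate values).
dH_298 = -92.2e3                             # J/mol of reaction at 298 K
Cp = {"N2": 29.1, "H2": 28.8, "NH3": 35.1}   # J/(mol K), near-ambient values

dCp = 2 * Cp["NH3"] - (Cp["N2"] + 3 * Cp["H2"])   # products minus reactants

T1, T2 = 298.15, 500.0
dH_T2 = dH_298 + dCp * (T2 - T1)             # integral of dCp dT with constant dCp

print(f"Delta Cp ~ {dCp:.1f} J/(mol K)")
print(f"Delta H(500 K) ~ {dH_T2 / 1000:.1f} kJ/mol")
```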
Specific Enthalpy in Solutions
In chemical solutions, specific enthalpy refers to the enthalpy of the system expressed on a per-unit-mass or per-mole basis, denoted as h = H / m (where H is total enthalpy and m is mass) or molar specific enthalpy \bar{h} = H / n (where n is moles). This normalization is particularly useful in dilute solutions, where solute concentrations are low, allowing for straightforward comparisons of energy content across varying solution strengths without scaling by total volume or mass. For instance, in aqueous solutions, molar specific enthalpy helps quantify the energy associated with solute-solvent interactions per mole of solute, facilitating analysis in processes like dissolution.[34]

The enthalpy of solution, \Delta H_{\text{solution}}, represents the heat absorbed or released at constant pressure when a solute dissolves in a solvent to form a solution. It arises from the net balance of endothermic steps, such as breaking solute-solute and solvent-solvent intermolecular forces, and the exothermic step of solute-solvent interactions. For example, dissolving sodium chloride in water yields \Delta H_{\text{solution}} = +3.9 kJ/mol, indicating an endothermic process driven by entropy despite the positive enthalpy change.[35][34]

Enthalpy of solution can be expressed in integral or differential forms to account for concentration dependence. The integral enthalpy of solution is the total enthalpy change when one mole of solute dissolves in a specified amount of solvent, varying with the solvent-to-solute ratio; at infinite dilution, it approaches a limiting value representing complete dissociation without further interactions. In contrast, the differential enthalpy of solution is the enthalpy change per mole of solute when an infinitesimal amount is dissolved into a large volume of existing solution at a fixed concentration, leaving that concentration effectively unchanged. For potassium chloride, the integral value with 20 moles of water is +3.78 kcal/mol (about 15.8 kJ/mol), while at infinite dilution it is +4.4 kcal/mol (about 18.4 kJ/mol), illustrating the effect of solvation layers.[36]

In ideal solutions, which obey Raoult's law, where the partial vapor pressure of each component equals its pure vapor pressure times its mole fraction (P_A = P_A^\circ x_A), the enthalpy of mixing \Delta H_{\text{mix}} is approximately zero, as intermolecular forces between like and unlike molecules are equivalent, resulting in no net heat exchange upon mixing. Non-ideal solutions deviate from Raoult's law, exhibiting positive or negative \Delta H_{\text{mix}} due to unequal forces; for example, mixing ethanol and water (non-ideal) releases heat (\Delta H_{\text{mix}} < 0), while hexane-heptane mixtures (ideal) show no temperature change. This distinction is crucial for predicting solution behavior in dilute regimes.[37]

Applications of specific enthalpy in solutions extend to electrochemistry, notably in neutralization reactions where acids and bases in aqueous solution react to form water. The enthalpy of neutralization, a specific case of solution enthalpy change, is the heat released when one mole of water forms, typically -57 to -58 kJ/mol for strong acid-strong base pairs like HCl and NaOH, reflecting the exothermic combination of H⁺ and OH⁻ ions to form H₂O in dilute solution. For weak electrolytes, such as acetic acid with NaOH (-56.1 kJ/mol), the value is less negative due to partial ionization, highlighting concentration and solvation effects in electrochemical cells.[38]
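Using the NaCl enthalpy of solution quoted above, a minimal sketch (illustrative amounts, treating the solution's heat capacity as that of water) estimates the temperature drop in an insulated constant-pressure vessel:

```python
# Minimal sketch using the NaCl value quoted above (+3.9 kJ/mol, endothermic):
# estimated temperature drop when 0.10 mol NaCl dissolves in 100 g of water in
# an insulated, constant-pressure vessel.
dH_solution = 3.9e3                   # J/mol, enthalpy of solution of NaCl
n_salt = 0.10                         # mol dissolved (assumed amount)
m_solution = 100.0 + n_salt * 58.44   # g, water plus dissolved salt
c_solution = 4.18                     # J/(g K), approximated by water's specific heat

q_absorbed = n_salt * dH_solution           # heat drawn from the solution itself
dT = -q_absorbed / (m_solution * c_solution)

print(f"Estimated temperature change ~ {dT:.2f} K")   # ~ -0.9 K
```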
Applications in Physics and Engineering
Enthalpy Changes in Physical Properties
Enthalpy changes occur during phase transitions of pure substances, where the system absorbs or releases heat at constant temperature and pressure without a change in temperature. The enthalpy of fusion, denoted as \Delta H_{\text{fus}}, represents the heat required to melt a solid into a liquid, breaking intermolecular forces in the solid lattice. For water at its melting point of 273.15 K, \Delta H_{\text{fus}} = 6.007 \, \text{kJ/mol}.[39]

The enthalpy of vaporization, \Delta H_{\text{vap}}, is the heat absorbed to convert a liquid to a gas, overcoming cohesive forces to allow molecules to enter the vapor phase. For water at its boiling point of 373.15 K, \Delta H_{\text{vap}} = 40.66 \, \text{kJ/mol}.[39]

Sublimation involves the direct transition from solid to gas, with the enthalpy of sublimation \Delta H_{\text{sub}} approximately equal to the sum of the enthalpies of fusion and vaporization at the same temperature, as it combines both processes: \Delta H_{\text{sub}} = \Delta H_{\text{fus}} + \Delta H_{\text{vap}}. This relation holds under conditions where the intermediate liquid phase is unstable, such as below the triple point; for water at 273.15 K, \Delta H_{\text{fus}} = 6.007 \, \text{kJ/mol} and \Delta H_{\text{vap}} \approx 45.05 \, \text{kJ/mol}, yielding \Delta H_{\text{sub}} \approx 51.06 \, \text{kJ/mol}.

These phase transition enthalpies vary with temperature and pressure due to changes in molecular interactions. The Clapeyron equation describes the slope of the vapor pressure curve P(T) along a phase equilibrium in terms of \Delta H_{\text{vap}}:

\frac{dP}{dT} = \frac{\Delta H_{\text{vap}}}{T \, \Delta V}

where \Delta V is the volume change across the phase boundary. Approximating the vapor as an ideal gas and neglecting the liquid volume gives \Delta V \approx RT/P for vaporization, yielding the Clausius-Clapeyron form:

\frac{d \ln P}{dT} = \frac{\Delta H_{\text{vap}}}{RT^2}.

Integrating this form assumes \Delta H_{\text{vap}} is constant, but in reality \Delta H_{\text{vap}} decreases with increasing temperature, approaching zero at the critical point, which affects the curvature of the vapor pressure curve.[40] Pressure effects on \Delta H_{\text{fus}} and \Delta H_{\text{sub}} are smaller, as solids and liquids are nearly incompressible, but elevated pressures can slightly increase these values by reducing the volume change.[41]

For compression or expansion processes in gases, the enthalpy change depends on whether the gas behaves ideally or deviates from ideality. For an ideal gas, enthalpy is a function of temperature only, so the change in enthalpy during isothermal compression or expansion is zero, and for non-isothermal processes it is given by \Delta H = \int_{T_1}^{T_2} C_P \, dT, where C_P is the heat capacity at constant pressure, independent of pressure changes. This independence arises because intermolecular forces are negligible, making H = U + PV = f(T) + nRT, with both internal energy U and PV depending solely on temperature.[42]

Real gases exhibit deviations from ideal behavior, particularly at high pressures or low temperatures, where intermolecular attractions and repulsions affect enthalpy.
These deviations are quantified using departure functions, which represent the difference between real and ideal gas properties at the same temperature and pressure: H - H^{\text{ig}} = \int_{0}^{P} \left[ V - T \left( \frac{\partial V}{\partial T} \right)_P \right] dP, evaluated at constant temperature from an equation of state (e.g., van der Waals or Peng-Robinson). The enthalpy departure H - H^{\text{ig}} is typically negative, because attractive intermolecular forces lower the enthalpy below the ideal-gas value, and its magnitude increases with pressure, altering \Delta H for compression or expansion compared to the ideal case. For instance, in supercritical fluids near the critical point, departure functions can shift \Delta H by several percent from ideal predictions.[43]
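A short numerical sketch of the integrated Clausius-Clapeyron relation discussed above, estimating water's vapor pressure at 90 °C from its normal boiling point under the constant-\Delta H_{\text{vap}} assumption:

```python
# Minimal sketch: integrated Clausius-Clapeyron estimate of water's vapor
# pressure at 90 C from its normal boiling point, assuming Delta H_vap is
# constant over the interval (values taken from the discussion above).
import math

R = 8.314                  # J/(mol K)
dH_vap = 40.66e3           # J/mol at the normal boiling point
T1, P1 = 373.15, 101.325   # K, kPa (normal boiling point)
T2 = 363.15                # K (90 C)

# ln(P2/P1) = -(dH_vap/R) * (1/T2 - 1/T1)
P2 = P1 * math.exp(-(dH_vap / R) * (1.0 / T2 - 1.0 / T1))

print(f"Estimated vapor pressure at 90 C ~ {P2:.1f} kPa")   # ~ 70 kPa (measured ~70.1 kPa)
```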
Open Systems and Flow Processes
In open systems, where mass crosses the system boundaries, enthalpy plays a central role in energy balances because it incorporates both the internal energy of the fluid and the work associated with pushing the fluid across the boundary, known as flow work. This makes enthalpy the natural property for analyzing processes involving continuous flow, such as in pipelines, turbines, and heat exchangers. The first law of thermodynamics applied to such systems, or control volumes, accounts for energy accumulation, heat transfer, work, and the net flux of energy carried by the incoming and outgoing mass streams.

For steady-flow processes, where properties do not change with time and the mass flow rate is constant, the energy equation simplifies significantly. The steady-flow energy equation per unit mass is given by

h_2 - h_1 + \frac{V_2^2 - V_1^2}{2} + g(z_2 - z_1) = q - w_s,

where h is the specific enthalpy, V is the velocity, z is the elevation, q is the heat transfer per unit mass, and w_s is the shaft work per unit mass (with subscripts 1 and 2 denoting inlet and outlet). This equation arises from applying the conservation of energy to a fixed control volume, balancing the net energy inflow against accumulation and outflow. When changes in kinetic and potential energies are negligible, it reduces to h_2 - h_1 = q - w_s, highlighting that the enthalpy change directly reflects the net heat and work interactions.

The enthalpy flow rate, \dot{m} h, quantifies the rate at which energy is transported across the control surface by the mass flow \dot{m}, encompassing both thermal energy and the pressure-volume work required to displace the fluid. This term appears in the rate form of the steady-flow energy equation as \dot{Q} - \dot{W}_s + \dot{m} (h_1 + \frac{V_1^2}{2} + g z_1) = \dot{m} (h_2 + \frac{V_2^2}{2} + g z_2), enabling efficient analysis of devices where mass flow drives the process.

In nozzles and diffusers, which accelerate or decelerate fluid streams, the process is often adiabatic (q = 0) with no shaft work (w_s = 0). The steady-flow energy equation then becomes

h_1 + \frac{V_1^2}{2} = h_2 + \frac{V_2^2}{2},

assuming negligible potential energy changes, meaning the stagnation enthalpy h_0 = h + \frac{V^2}{2} remains constant. In a nozzle, this allows kinetic energy to increase at the expense of enthalpy (and thus temperature for ideal gases), while a diffuser converts kinetic energy back into enthalpy to raise pressure.

For unsteady open systems, where properties vary with time, the first law expands to include the rate of change of total energy within the control volume:

\frac{dE_{cv}}{dt} = \dot{Q} - \dot{W} + \sum \dot{m}_i \left( h_i + \frac{V_i^2}{2} + g z_i \right) - \sum \dot{m}_e \left( h_e + \frac{V_e^2}{2} + g z_e \right),

with E_{cv} as the total energy (internal, kinetic, and potential) inside the volume. Here, the enthalpy-based flux terms capture the varying mass inflows and outflows, making it applicable to transient events like filling or emptying tanks.
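A minimal numeric sketch of the adiabatic-nozzle balance above, with assumed inlet and outlet specific enthalpies rather than values from a particular device:

```python
# Minimal sketch of the adiabatic-nozzle energy balance h1 + V1^2/2 = h2 + V2^2/2
# (no shaft work, negligible elevation change); enthalpies are illustrative.
import math

h1, h2 = 3000.0e3, 2850.0e3   # J/kg, specific enthalpies at inlet and outlet (assumed)
V1 = 50.0                     # m/s, inlet velocity

V2 = math.sqrt(V1**2 + 2.0 * (h1 - h2))   # exit velocity from the enthalpy drop

print(f"Exit velocity ~ {V2:.0f} m/s")    # ~ 550 m/s
```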
Throttling and Joule-Thomson Effect
Throttling is an irreversible adiabatic process in which a fluid undergoes a sudden pressure drop as it flows through a restriction, such as a valve or porous plug, with no heat transfer or shaft work involved. In steady-state conditions, the first law of thermodynamics for open systems dictates that the specific enthalpy remains constant across the throttle, such that h_1 = h_2, where h is the specific enthalpy. This isenthalpic nature arises because the decrease in internal energy is exactly balanced by the work associated with the pressure-volume change during expansion.[44]

The Joule-Thomson effect refers to the temperature variation experienced by a real gas during this isenthalpic throttling process, distinguishing it from the free expansion of the Joule experiment where no work is done. The magnitude and direction of the temperature change depend on intermolecular forces and the gas's equation of state. The Joule-Thomson coefficient, \mu = \left( \frac{\partial T}{\partial P} \right)_H, measures the rate of temperature change with pressure at constant enthalpy; a positive \mu indicates cooling upon pressure reduction, while a negative \mu indicates heating. This coefficient is derived thermodynamically as \mu = \frac{1}{C_P} \left[ T \left( \frac{\partial V}{\partial T} \right)_P - V \right], where C_P is the heat capacity at constant pressure, T is temperature, V is volume, and the partial derivative reflects thermal expansion effects. For ideal gases, \mu = 0, resulting in no temperature change, but real gases deviate due to attractive and repulsive forces.[45]

The sign of \mu reverses at the inversion temperature, defined as the locus of states where \mu = 0, marking the boundary between cooling and heating regimes during throttling. Above the inversion temperature, repulsive forces dominate, leading to heating (\mu < 0); below it, attractive forces prevail, causing cooling (\mu > 0). For common gases like nitrogen and oxygen, the maximum inversion temperature exceeds room temperature (e.g., approximately 625 K for nitrogen), enabling cooling from ambient conditions, whereas for helium, it is only about 34 K, requiring pre-cooling for liquefaction. The inversion curve in the temperature-pressure plane separates regions of positive and negative \mu, with the maximum inversion temperature occurring at low pressures.[45]

In refrigeration systems, the Joule-Thomson effect is harnessed in throttling valves to achieve cooling by expanding high-pressure liquid-vapor mixtures, dropping their temperature isenthalpically before evaporation absorbs heat. This process, while irreversible and entropically dissipative, is integral to vapor-compression cycles used in billions of household and commercial units worldwide. For cryogenic applications, such as liquefying gases below ambient temperatures, multi-stage throttling exploits the effect after initial cooling. In liquefied natural gas (LNG) production, throttling facilitates cooling of natural gas streams through pressure reduction in expansion valves, promoting condensation of heavier hydrocarbons and aiding the overall liquefaction process in cascade or mixed-refrigerant cycles. This application leverages the positive \mu of methane and other components at process conditions to achieve temperatures around 111 K efficiently.[46][47]
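The sign and rough size of \mu can be illustrated with the common low-pressure van der Waals approximation, \mu \approx (2a/RT - b)/C_P; the sketch below applies it to nitrogen with commonly tabulated constants. It is an order-of-magnitude estimate under stated assumptions, not the formal derivative given above.

```python
# Minimal sketch: low-pressure van der Waals estimate of the Joule-Thomson
# coefficient, mu ~ (2a/(R*T) - b) / Cp, applied to nitrogen (illustrative
# constants; the approximation is only qualitative).
R = 8.314                 # J/(mol K)
a, b = 0.137, 3.87e-5     # Pa m^6/mol^2 and m^3/mol, van der Waals constants for N2
Cp = 29.1                 # J/(mol K), molar heat capacity of N2 near ambient
T = 300.0                 # K

mu = (2.0 * a / (R * T) - b) / Cp    # K/Pa
T_inv = 2.0 * a / (R * b)            # crude inversion-temperature estimate, K

print(f"mu ~ {mu * 1e5:.2f} K/bar at {T:.0f} K")    # ~ 0.2 K/bar, comparable to experiment
print(f"Estimated inversion temperature ~ {T_inv:.0f} K")  # overestimates the ~620 K value
```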
Practical Diagrams and Examples
Basic Thermodynamic Diagrams
Thermodynamic diagrams provide graphical representations of enthalpy and related properties, enabling engineers and scientists to visualize and analyze processes involving energy transfers in cycles and systems. These diagrams plot enthalpy against other state variables such as entropy, pressure, or temperature, allowing for the identification of state points and the determination of property changes without relying solely on tabular data or complex calculations. By incorporating lines of constant pressure, temperature, or quality, they facilitate the tracing of thermodynamic paths, such as isobaric or isothermal processes, which are common in power generation, refrigeration, and heat transfer applications.[48]

The enthalpy-entropy (h-s) diagram, commonly known as the Mollier diagram, plots specific enthalpy (h) on the vertical axis against specific entropy (s) on the horizontal axis. Developed as an extension of the temperature-entropy diagram proposed by J. Willard Gibbs, it is particularly useful for analyzing steam and vapor processes in turbines and boilers. For water and steam, the diagram features curved isobars representing constant pressure lines, along with lines of constant quality (dryness fraction) that delineate the saturation dome, separating the liquid, vapor, and two-phase regions. This layout allows users to locate states within the superheated, saturated, or wet vapor domains and to follow processes like isentropic expansion in turbines, where entropy remains constant.[48][49]

In refrigeration and air conditioning systems, the pressure-enthalpy (P-h) diagram is widely employed, with pressure (P) on the vertical axis (often logarithmic scale) and specific enthalpy (h) on the horizontal axis. This diagram highlights the saturation curve, dividing the chart into subcooled liquid, superheated vapor, and two-phase regions, and includes isotherms to show temperature variations. It is instrumental for plotting vapor-compression cycles, where processes such as isentropic compression (a line upward and to the right), isobaric condensation (horizontal line leftward), throttling (a vertical drop at constant enthalpy), and evaporation (horizontal line rightward) can be directly visualized. For refrigerants like R-134a, the diagram reveals how enthalpy changes correspond to heat absorption or rejection at specific pressures.[50]

The temperature-enthalpy (T-h) diagram, also referred to as the h-T or T-Q plot in some contexts, graphs temperature (T) against specific enthalpy (h), making it suitable for heat exchanger analysis where phase changes occur. It distinguishes sensible heat regions, where enthalpy changes linearly with temperature in single-phase flows, from latent heat regions, marked by horizontal plateaus during phase transitions like boiling or condensation. In counterflow or parallel-flow exchangers, the diagram illustrates the approach of hot and cold streams, with the area between curves representing heat transfer duties. For moist air or steam systems, it separates sensible heating/cooling from latent effects due to moisture changes, aiding in the design of evaporators or condensers.[51]

These diagrams simplify the calculation of enthalpy changes (ΔH) by leveraging their coordinate systems: on a P-h or T-h chart, the specific enthalpy change is the horizontal distance between state points, while on an h-s (Mollier) chart it is read along the vertical axis; multiplying the specific enthalpy change by the mass flow rate gives the corresponding heat or work duty.
Engineers can thus determine ΔH graphically by locating initial and final states via known properties like pressure or temperature, then measuring the enthalpy difference directly from the scale, bypassing iterative equation solving or steam table interpolations. This visual approach enhances efficiency in cycle optimization and process simulation, particularly for steady-state analyses.[52][49]
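Where printed charts are unavailable, the same state-point enthalpies can be obtained from a fluid-property library; the minimal sketch below assumes the open-source CoolProp package is installed and uses its default reference state for water.

```python
# Minimal sketch (assumes the optional CoolProp package is installed): obtaining
# the state-point enthalpies that would otherwise be read off an h-s or P-h chart,
# here saturated steam at 1 bar and superheated steam at 10 MPa and 500 C.
from CoolProp.CoolProp import PropsSI

h_sat = PropsSI("H", "P", 1e5, "Q", 1, "Water")         # J/kg, saturated vapor at 1 bar
h_sup = PropsSI("H", "P", 10e6, "T", 773.15, "Water")   # J/kg, 10 MPa and 500 C

print(f"h(sat. vapor, 1 bar) ~ {h_sat / 1e3:.0f} kJ/kg")   # ~ 2675 kJ/kg
print(f"h(10 MPa, 500 C)     ~ {h_sup / 1e3:.0f} kJ/kg")   # ~ 3375 kJ/kg
```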
Compression and Expansion Processes
In adiabatic compression processes within ideal compressors, the change in enthalpy equals the work input per unit mass, as no heat transfer occurs and the steady-flow energy equation simplifies to \Delta h = w. This relationship holds for reversible, isentropic compression where entropy remains constant, allowing the outlet enthalpy h_2 to be determined from inlet conditions using isentropic relations.[53] In real compressors, inefficiencies arise due to irreversibilities such as friction and fluid mixing, leading to higher actual work input than the isentropic ideal. The isentropic efficiency \eta is defined as the ratio of isentropic work to actual work, \eta = w_{\text{isentropic}} / w_{\text{actual}} = (h_{2s} - h_1) / (h_2 - h_1), where h_{2s} is the isentropic outlet enthalpy; typical values range from 0.85 to 0.95 for modern axial compressors.[53][54]

For expansion processes in turbines, polytropic expansion accounts for non-isentropic behavior, where the enthalpy drop drives the work output. In an ideal steady-flow turbine, the work per unit mass equals the decrease in enthalpy, w = h_{\text{in}} - h_{\text{out}}, with no heat transfer under adiabatic conditions. Polytropic processes model real turbine behavior using a polytropic index n between 1 (isothermal) and \gamma (isentropic, where \gamma is the specific heat ratio), enabling more accurate prediction of enthalpy changes across varying pressure ratios.[55] This is particularly relevant in gas turbine cycles, where the enthalpy drop \Delta h = c_p (T_{\text{in}} - T_{\text{out}}) determines shaft power, with efficiencies often exceeding 0.90 for high-performance units.[55][56]

Pressure-enthalpy (P-h) diagrams are essential tools for visualizing and calculating enthalpy changes in compression and expansion within thermodynamic cycles like the Rankine cycle. In the Rankine cycle, the turbine expansion is represented as a near-vertical line on the P-h diagram from high-pressure superheated vapor to lower-pressure conditions, where the horizontal distance quantifies the enthalpy drop \Delta h available for work extraction. For example, in a steam Rankine plant operating at 10 MPa inlet pressure, the diagram reveals an enthalpy drop of approximately 1000–1200 kJ/kg across the turbine, depending on outlet conditions, facilitating efficiency assessments without iterative calculations.[57] These diagrams also highlight phase changes and subcooling effects, aiding in the selection of operating parameters for optimal performance.[57]

Real gas behaviors in compressors deviate from ideal gas assumptions, particularly at high pressures and near critical points, impacting enthalpy calculations and overall efficiency. For ideal gases, enthalpy depends solely on temperature (h = h(T)), simplifying compression analysis via \Delta h = c_p \Delta T; however, real gases exhibit pressure-dependent enthalpy due to intermolecular forces and non-zero molecular volume, requiring equations of state like the Benedict-Webb-Rubin-Starling (BWRS) equation for accurate predictions. In high-pressure natural gas compressors (e.g., >10 MPa), ideal gas models can overestimate enthalpy changes by up to 10–15%, leading to errors in work requirements and temperature rise; real gas corrections via departure functions keep deviations below about 2% for reliable design.[58] This distinction is critical in applications like supercritical CO2 cycles, where real gas effects enhance compressor efficiency by 5–10% compared to ideal predictions.[58]
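A cold-air-standard sketch of the isentropic-efficiency bookkeeping above, with assumed inlet conditions, pressure ratio, and efficiency (illustrative values only):

```python
# Minimal sketch: ideal-gas, cold-air-standard estimate of compressor work using
# enthalpy changes, with an assumed isentropic efficiency.
cp, gamma = 1005.0, 1.4    # J/(kg K) and heat-capacity ratio for air
T1, PR = 300.0, 10.0       # inlet temperature (K) and pressure ratio (assumed)
eta_isentropic = 0.85      # assumed isentropic efficiency

T2s = T1 * PR ** ((gamma - 1.0) / gamma)   # isentropic outlet temperature
w_isentropic = cp * (T2s - T1)             # ideal specific work = Delta h_s
w_actual = w_isentropic / eta_isentropic   # actual specific work input
T2 = T1 + w_actual / cp                    # actual outlet temperature

print(f"T2s ~ {T2s:.0f} K, ideal work ~ {w_isentropic / 1e3:.0f} kJ/kg")
print(f"T2  ~ {T2:.0f} K, actual work ~ {w_actual / 1e3:.0f} kJ/kg")
```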
Case Studies in Devices
In steam turbines, enthalpy changes during expansion are critical for determining efficiency, often analyzed using the pressure-enthalpy (P-h) diagram. Consider a Rankine cycle steam turbine where superheated steam enters at 10 MPa and 500°C (h_in ≈ 3375 kJ/kg from steam tables) and exhausts at 0.1 MPa after isentropic expansion (h_out ≈ 2393 kJ/kg, with s constant at 6.599 kJ/kg·K). The ideal enthalpy drop ΔH = h_in - h_out ≈ 982 kJ/kg, yielding a turbine work output of approximately 982 kJ/kg under isentropic conditions. In practice, with an isentropic efficiency of 85%, the actual ΔH is reduced to about 835 kJ/kg, as irreversibilities like friction dissipate energy, highlighting how P-h diagrams facilitate efficiency calculations by tracing the expansion path from inlet to outlet states.[59]

For air compressors in the Brayton cycle, used in gas turbines, enthalpy changes distinguish isentropic ideals from real processes with losses. Assume air as an ideal gas with constant specific heats (c_p = 1.005 kJ/kg·K, γ = 1.4), entering at 1 atm and 300 K (h_in ≈ 300 kJ/kg, referenced to 0 K). For isentropic compression to 10 atm, the outlet temperature is T_out = T_in (P_out/P_in)^((γ-1)/γ) ≈ 579 K, so ΔH_isentropic = c_p (T_out - T_in) ≈ 281 kJ/kg. Accounting for an 80% isentropic efficiency due to irreversibilities such as friction, the actual outlet temperature reaches about 649 K, increasing ΔH_actual to ≈ 351 kJ/kg and requiring more input work, which underscores the role of enthalpy in optimizing compressor performance within the cycle.

Heat exchangers, such as counterflow types in power plants, rely on enthalpy balances to transfer heat without mixing fluids, visualized on temperature-enthalpy (T-h) diagrams. In a steam-water counterflow heat exchanger, hot water enters at 200°C (h_hot_in ≈ 852 kJ/kg) and cools to 100°C (h_hot_out ≈ 419 kJ/kg), while cold water enters at 20°C (h_cold_in ≈ 84 kJ/kg) and heats to 120°C (h_cold_out ≈ 503 kJ/kg), using saturated liquid approximations from steam tables at the respective saturation pressures. The enthalpy balance requires that the heat given up by the hot stream equal the heat absorbed by the cold stream, m_hot ΔH_hot = m_cold ΔH_cold; with these states the hot stream releases about 433 kJ/kg while the cold stream absorbs about 419 kJ/kg, so the mass flow rates (or outlet temperatures) must adjust slightly to close the balance, and the T-h diagram illustrates the corresponding enthalpy shifts for effective design and sizing.[60]
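The steam-turbine case study can be tied together numerically; the sketch below uses the enthalpies quoted above and an assumed steam mass flow rate of 50 kg/s to convert specific work into shaft power.

```python
# Minimal sketch for the steam-turbine case study: ideal and actual specific
# work from the quoted enthalpies, and shaft power for an assumed mass flow.
h_in, h_out_s = 3375.0, 2393.0   # kJ/kg, inlet and isentropic outlet enthalpies
eta_turbine = 0.85               # isentropic efficiency from the example
m_dot = 50.0                     # kg/s, assumed steam mass flow rate

w_ideal = h_in - h_out_s             # kJ/kg, isentropic enthalpy drop
w_actual = eta_turbine * w_ideal     # kJ/kg, actual specific work
h_out_actual = h_in - w_actual       # kJ/kg, actual exit enthalpy

print(f"Ideal work ~ {w_ideal:.0f} kJ/kg, actual work ~ {w_actual:.0f} kJ/kg")
print(f"Actual exit enthalpy ~ {h_out_actual:.0f} kJ/kg")
print(f"Shaft power ~ {m_dot * w_actual / 1e3:.1f} MW")   # ~ 42 MW
```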
Historical Development
Origins and Etymology
The term "enthalpy" originates from the Greek verb enthalpein, meaning "to heat in," combining the prefix en- (in) with thalpein (to heat). This etymology reflects the concept's association with heat transfer under constant pressure conditions. The word was coined by the DutchphysicistHeike Kamerlingh Onnes around 1908, though it first appeared in print in 1909 in a paper by J. P. Dalton, who attributed the naming directly to Onnes while discussing the thermodynamic quantity ε + pv (internal energy plus pressure-volume product).[61]Although Onnes provided the name, the underlying concept of enthalpy as a thermodynamic potential predates it by several decades. In 1876–1878, American physicist Josiah Willard Gibbs introduced the function H = U + PV in his seminal papers "On the Equilibrium of Heterogeneous Substances," published in the Transactions of the Connecticut Academy of Sciences. Gibbs described it as the "heat function for constant pressure" (using the symbol χ) without assigning the term "enthalpy," emphasizing its role in analyzing phase equilibria and chemical reactions at constant pressure. This formulation laid the foundational mathematical structure for what would later be called enthalpy.[61]Prior to Gibbs, related ideas of "heat content" appeared in 19th-century thermodynamics, often linked to efforts to quantify total thermal energy in systems. For instance, Rudolf Clausius in his 1864 work used the German term Wärmeinhalt (heat content) to describe the thermal component of a body's energy, distinguishing it from work potential in the context of the first law of thermodynamics. This term captured an early intuitive sense of enthalpy as a measure of heat available at constant pressure, influencing later adoption in German-speaking scientific literature.[62]
Key Contributions and Evolution
The concept of enthalpy was formally introduced by Josiah Willard Gibbs in the 1870s as a thermodynamic potential function, specifically termed the "heat function for constant pressure," defined as the sum of the internal energy and the product of pressure and volume, H = E + PV. This formulation appeared in his seminal work "On the Equilibrium of Heterogeneous Substances," where Gibbs utilized it to analyze phase equilibria and chemical reactions under constant pressure conditions, laying the groundwork for modern thermodynamics. Gibbs' contribution emphasized enthalpy's role in describing heat transfer at constant pressure, distinguishing it from internal energy changes at constant volume.

The term "enthalpy" itself was coined by the Dutch physicist Heike Kamerlingh Onnes in 1909 during his pioneering experiments in low-temperature physics, particularly in studies of gases near liquefaction points. Onnes adopted the Greek root "enthalpein" (to heat in) to describe this function, applying it to quantify thermodynamic properties like specific heats and entropies of substances such as argon and helium at cryogenic temperatures. His experimental verification, using precise calorimetric measurements, confirmed enthalpy's utility in open systems and irreversible processes, bridging theoretical constructs with empirical data from superconductivity and liquefaction research. Alfred W. Porter later proposed the symbol H for enthalpy in 1922, standardizing its notation in scientific literature.[61]

In the 20th century, enthalpy's integration into chemical thermodynamics advanced significantly through the work of Gilbert N. Lewis and Merle Randall in their 1923 textbook "Thermodynamics and the Free Energy of Chemical Substances," which systematized its application to reaction equilibria, standard states, and free energy calculations. Lewis and Randall extended Gibbs' potentials to practical chemical problems, such as predicting reaction spontaneity via ΔH and its relation to entropy, enabling quantitative analysis of ionic solutions and gas-phase reactions. This framework became foundational for physical chemistry, influencing fields from electrochemistry to biochemistry. Enthalpy calculations gained prominence in engineering applications, particularly in rocketry programs, where they were essential for optimizing propellant combustion, nozzle expansion, and specific impulse through isentropic flow models.[63][64]

Modern extensions of enthalpy concepts emerged post-1980s with computational quantum chemistry methods for accurately determining enthalpy changes (ΔH), such as enthalpies of formation. John A. Pople's Gaussian-2 (G2) theory, introduced in 1991, combined ab initio molecular orbital calculations with empirical corrections to achieve chemical accuracy (within 1 kcal/mol) for thermochemical properties of first- and second-row elements. This composite method, involving frozen-core approximations and higher-order correlation energies, revolutionized predictions of reaction enthalpies without experimental input, impacting drug design and materials science. Subsequent refinements, like G3 and density functional theory hybrids, further enhanced computational efficiency for larger systems.[65]