Enthalpy of fusion
The enthalpy of fusion, denoted as ΔH_fus, is the change in enthalpy associated with the phase transition of a substance from solid to liquid at its melting point and constant pressure, without any accompanying change in temperature; it represents the energy required to overcome intermolecular forces in the solid lattice.[1] The process absorbs heat, increasing the potential energy of the molecules while their average kinetic energy, and hence the temperature, remains constant.[1] It is typically reported as a molar quantity in kilojoules per mole (kJ/mol), though specific enthalpies per unit mass (kJ/kg or J/g) are also common in practical applications.[2] A well-known example is water, whose molar enthalpy of fusion is 6.00678 kJ/mol at 0°C and 101.325 kPa, equivalent to approximately 333.55 J/g for ice melting into liquid water.[3] This value reflects the relatively strong hydrogen bonding in ice, which requires significant energy to disrupt.[4] In general, the magnitude of ΔH_fus correlates with the strength of intermolecular forces: substances with stronger bonding, such as metals or ionic solids, exhibit higher values than molecular solids held together by weaker van der Waals interactions.[1]
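Converting between the molar and per-mass figures quoted above is a simple scaling by molar mass. A minimal Python sketch, assuming a rounded molar mass of 18.015 g/mol for water:

```python
# Convert a molar enthalpy of fusion (kJ/mol) to a specific enthalpy (J/g).
# The molar mass of water (18.015 g/mol) is an assumed rounded literature value.

def molar_to_specific(dh_fus_kj_per_mol: float, molar_mass_g_per_mol: float) -> float:
    """Return the specific enthalpy of fusion in J/g."""
    return dh_fus_kj_per_mol * 1000.0 / molar_mass_g_per_mol

print(f"{molar_to_specific(6.00678, 18.015):.2f} J/g")  # ~333.43 J/g, close to the ~333.55 J/g quoted above
```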
Core Concepts

Definition
The enthalpy of fusion, denoted as \Delta H_\text{fus}, is the change in enthalpy accompanying the phase transition of one mole of a substance from the solid to the liquid state at constant pressure and its melting temperature T_m.[1] This transition is an endothermic process in which heat is absorbed to overcome intermolecular forces, disrupting the ordered solid structure without altering the temperature of the substance during the melting phase.[1] The enthalpy of fusion is synonymous with the latent heat of fusion for the process at constant pressure, where the heat absorbed equals \Delta H_\text{fus}, since \Delta H = q_p.[1][5] Mathematically, it is expressed as \Delta H_\text{fus} = H_\text{liquid} - H_\text{solid} evaluated at T_m.[2] The standard units are kilojoules per mole (kJ/mol) for molar quantities or joules per gram (J/g) for specific values, with the SI unit being joules per mole (J/mol).[6][1]
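Because \Delta H = q_p, the heat needed to melt a given amount of a substance held at its melting point follows directly from \Delta H_\text{fus}. A minimal sketch of this arithmetic, using the rounded value of 6.01 kJ/mol for water:

```python
# Heat absorbed at constant pressure to melt n moles of a solid held at its
# melting point: q_p = n * dH_fus, with no change in temperature.

def heat_to_melt(n_mol: float, dh_fus_kj_per_mol: float) -> float:
    """Return q_p in kJ for melting n_mol of substance at T_m."""
    return n_mol * dh_fus_kj_per_mol

print(heat_to_melt(2.0, 6.01))  # 12.02 kJ absorbed to melt 2.0 mol of ice at 0 °C
```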
Thermodynamic Interpretation

The enthalpy of fusion, denoted as \Delta H_\text{fus}, represents the heat absorbed during the phase transition from solid to liquid at constant pressure and is fundamentally linked to the first law of thermodynamics. From the definition of enthalpy, H = U + PV, where U is the internal energy, P is pressure, and V is volume, the change in enthalpy for the fusion process is \Delta H_\text{fus} = \Delta U_\text{fus} + P \Delta V_\text{fus}. Here, \Delta U_\text{fus} accounts for the change in molecular interactions and vibrational freedom as the ordered solid structure breaks into the more disordered liquid, while P \Delta V_\text{fus} captures the work associated with the typically small volume change upon melting, since liquids generally occupy slightly more space than the corresponding solids for most substances.[7]

From the perspective of the second law of thermodynamics, the enthalpy of fusion connects to the Gibbs free energy change at the equilibrium melting temperature T_m, where \Delta G_\text{fus} = 0. This condition implies \Delta G_\text{fus} = \Delta H_\text{fus} - T_m \Delta S_\text{fus} = 0, yielding the key relation \Delta S_\text{fus} = \Delta H_\text{fus} / T_m, which quantifies the entropy increase due to the greater disorder of the liquid phase compared to the solid. Unlike the entropy of vaporization, which follows Trouton's rule with a roughly constant value of about 85–88 J/(mol·K) for many non-associated liquids, the entropy of fusion varies more widely (typically 10–60 J/(mol·K), depending on the substance type, such as metals or molecular solids) but obeys the same thermodynamic equality at equilibrium.[8]

In phase diagrams, the enthalpy of fusion determines the slope of the solid–liquid equilibrium line through the Clapeyron equation, derived from the equality of chemical potentials across the two phases: \frac{dT}{dP} = \frac{T_m \Delta V_\text{fus}}{\Delta H_\text{fus}}. This equation shows how pressure influences the melting point: for most substances, \Delta V_\text{fus} > 0, so increasing pressure raises T_m, with the always-positive \Delta H_\text{fus} (melting is endothermic) moderating the effect alongside the volume change. Exceptions occur for substances such as water, where \Delta V_\text{fus} < 0, so the melting point decreases with increasing pressure.

For reversible fusion at constant pressure, the heat absorbed q_\text{rev} equals the enthalpy change \Delta H_\text{fus}, which in turn equals T_m \Delta S_\text{fus} because the transition proceeds reversibly at equilibrium. This equivalence underscores the isothermal character of the process: the system absorbs latent heat without a temperature change, and the total entropy change of system plus surroundings is zero.[2]
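As a worked illustration of these relations, the sketch below evaluates \Delta S_\text{fus} and the Clapeyron slope for water at its normal melting point. The densities of ice and liquid water at 0 °C (0.9167 and 0.99984 g/cm³) and \Delta H_\text{fus} = 6010 J/mol are approximate literature values used only for illustration.

```python
# Entropy of fusion and Clapeyron slope for water at its normal melting point.
# Densities and dH_fus below are approximate literature values, for illustration only.

dh_fus = 6010.0          # J/mol, enthalpy of fusion of ice
t_m = 273.15             # K, normal melting point
molar_mass = 18.015e-3   # kg/mol

# dS_fus = dH_fus / T_m at equilibrium (dG_fus = 0)
ds_fus = dh_fus / t_m
print(f"dS_fus ~ {ds_fus:.1f} J/(mol*K)")       # ~22.0 J/(mol*K)

# Clapeyron slope dT/dP = T_m * dV_fus / dH_fus
v_solid = molar_mass / 916.7     # m^3/mol, molar volume of ice
v_liquid = molar_mass / 999.84   # m^3/mol, molar volume of liquid water at 0 °C
dv_fus = v_liquid - v_solid      # negative for water
dT_dP = t_m * dv_fus / dh_fus    # K/Pa
print(f"dT/dP ~ {dT_dP * 101325:.4f} K/atm")    # ~ -0.0075 K/atm: melting point falls with pressure
```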
Measurement and Data

Experimental Determination
The experimental determination of the enthalpy of fusion, denoted as \Delta H_\text{fus}, traces its origins to the late 18th century, when Antoine Lavoisier and Pierre-Simon Laplace developed an ice calorimeter in 1782 to measure latent heats relative to the heat required to raise water from 0°C to 60°C.[9] In their apparatus, a sample was enclosed within concentric ice-filled containers; heat absorbed or released by the sample melted a quantifiable amount of ice (approximately 489.5 g per "pound" equivalent), allowing relative determinations of specific and latent heats with an estimated accuracy of about 1.7%, though modern recalibrations show deviations of 10–15% for some values.[9]

Classical methods for measuring \Delta H_\text{fus} relied on calorimetry via the method of mixtures, in which a known mass of solid at or below its melting point is added to a calorimeter containing a liquid (often water) initially above the melting temperature, and the equilibrium temperature change is used to calculate the latent heat absorbed during melting.[10] This approach quantifies heat input by monitoring temperature equilibration, assuming no heat loss and complete phase change, and has been applied to substances like ice since the 19th century.[10] After 1900, these techniques evolved through the establishment of absolute calorimetric standards by national metrology institutes, incorporating electrical calibration via the Joule effect for improved traceability to SI units and greater precision in heat flux measurements.[9]

Differential scanning calorimetry (DSC) is the modern standard for determining \Delta H_\text{fus}. It operates by heating a sample and a reference at a controlled rate while measuring the differential heat flow required to keep them at the same temperature.[11] Melting produces an endothermic peak on the heat flow versus temperature plot; \Delta H_\text{fus} is obtained by integrating the peak area between the onset (where the curve deviates from the baseline) and the final temperature (where it returns to the baseline), often normalized to sample mass and calibrated against standards such as indium (\Delta H_\text{fus} = 28.47 J/g at 156.4°C).[12] The method is applicable to thermally stable materials from about -120°C to 600°C and provides rapid results for quality control and research, though outcomes can vary with sample form and heating rate.[11]

For precise low-temperature measurements, adiabatic calorimetry isolates the sample from external heat exchange, allowing accurate tracking of heat capacity and phase transitions by adding electrical energy incrementally while maintaining near-zero temperature gradients.[13] This technique has been used to determine \Delta H_\text{fus} for metals such as gallium, yielding values such as 5.59 kJ/mol at 29.78°C for high-purity samples.[14]

Key challenges in these measurements include supercooling, where the liquid phase persists below the melting point without crystallizing, leading to underestimation of \Delta H_\text{fus} if the end of the latent-heat period is misidentified.[15] Impurities depress the melting point and reduce the observed \Delta H_\text{fus} through eutectic formation or incomplete phase purity, with effects quantifiable via DSC purity analysis that compares experimental onsets to theoretical 100%-purity values.[16] Incomplete melting, arising from kinetic barriers or insufficient heat supply, likewise yields lower calculated enthalpies, since unmelted fractions do not contribute to the full endothermic response.[17]
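The peak-integration step can be illustrated numerically. The sketch below uses a synthetic Gaussian heat-flow peak, an assumed linear baseline, a 10 K/min heating rate, and a 5 mg sample; none of these numbers come from an actual instrument, and real DSC software applies more careful baseline construction. The peak height is chosen so the result lands near the indium calibration value mentioned above.

```python
# Minimal sketch of DSC peak integration: dH_fus is the area of the endothermic
# peak above a baseline, normalized by sample mass. The Gaussian "peak" below is
# synthetic data standing in for a measured heat-flow curve.
import numpy as np

temp = np.linspace(150.0, 165.0, 1501)                 # °C, scan window
heating_rate = 10.0 / 60.0                             # K/s (assumed 10 K/min scan)
baseline = 0.0005 * (temp - 150.0)                     # mW, assumed linear baseline
peak = 19.0 * np.exp(-((temp - 156.6) ** 2) / 0.5)     # mW, synthetic melting endotherm
heat_flow = baseline + peak                            # mW, "measured" signal

sample_mass_mg = 5.0                                   # mg, assumed sample mass

# Convert the temperature axis to time using the constant heating rate, then
# integrate (heat_flow - baseline) over time: mW * s = mJ, and mJ/mg = J/g.
time = (temp - temp[0]) / heating_rate                 # s
area_mJ = np.trapz(heat_flow - baseline, time)         # peak area in mJ
dh_fus_J_per_g = area_mJ / sample_mass_mg
print(f"dH_fus ~ {dh_fus_J_per_g:.1f} J/g")            # ~28.6 J/g for this synthetic peak
```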
Tabulated Values and Examples
The enthalpy of fusion, denoted as ΔH_fus, varies significantly across substances depending on their bonding type, molecular structure, and atomic mass. Representative values for elements illustrate the differences between noble gases, alkali metals, and transition metals, with noble gases exhibiting particularly low values due to weak interatomic forces. For instance, helium has an anomalously small ΔH_fus of 0.02 kJ/mol, reflecting its quantum mechanical behavior near absolute zero, while metals like sodium show lower values than heavier transition metals like iron.[18][19] The corresponding entropies of fusion can be estimated directly from these values, as in the sketch following the tables below.

| Element | ΔH_fus (kJ/mol) | Melting Point (°C) |
|---|---|---|
| Helium (He) | 0.02 | -272 (under pressure) |
| Sodium (Na) | 2.60 | 97.8 |
| Iron (Fe) | 13.8 | 1538 |

| Compound | ΔH_fus (kJ/mol) | Melting Point (°C) |
|---|---|---|
| Water (H₂O) | 6.01 | 0.00 |
| Naphthalene (C₁₀H₈) | 18.8 | 80.2 |
| α-D-Glucose (C₆H₁₂O₆) | 31.4 | 141 |
| Sodium chloride (NaCl) | 28.2 | 801 |
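
As an illustration of the relation ΔS_fus = ΔH_fus / T_m discussed earlier, the sketch below estimates the entropy of fusion for several of the tabulated substances (helium is omitted because it melts only under pressure).

```python
# Entropy of fusion dS_fus = dH_fus / T_m for some of the tabulated substances,
# using the values listed above (melting points converted from °C to K).

substances = {
    # name: (dH_fus in kJ/mol, melting point in °C)
    "Sodium":          (2.60, 97.8),
    "Iron":            (13.8, 1538.0),
    "Water":           (6.01, 0.0),
    "Naphthalene":     (18.8, 80.2),
    "Sodium chloride": (28.2, 801.0),
}

for name, (dh_kj, t_c) in substances.items():
    t_m = t_c + 273.15                 # K
    ds = dh_kj * 1000.0 / t_m          # J/(mol*K)
    print(f"{name:16s} dS_fus ~ {ds:5.1f} J/(mol*K)")
```

The metals give entropies of fusion near the gas constant R (about 8.3 J/(mol·K)), while molecular solids such as naphthalene give much larger values, consistent with the wide range noted in the thermodynamic discussion above.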