Phase
In physics and chemistry, a phase is a distinct, homogeneous region within a material system in which all physical and chemical properties, such as density, composition, and structure, are uniform throughout.[1] This concept encompasses the familiar states of matter (solid, liquid, gas, and plasma), each representing a phase separated from the others by boundaries across which properties change abruptly.[2] Phases are mechanically separable and play a central role in understanding material behavior under varying conditions of temperature, pressure, and composition.[3]

In the context of wave mechanics and oscillations, phase describes the position or stage of a periodic waveform at a given time and location, typically measured as a fraction of the complete cycle from 0 to 360 degrees (or 0 to 2π radians).[4] For instance, the phase constant specifies the wave's configuration at the origin (t = 0, x = 0), while the phase difference between two waves determines whether they interfere constructively or destructively.[5] This property is fundamental to phenomena such as sound propagation, electromagnetic radiation, and quantum wave functions, where phase coherence enables applications in optics, acoustics, and signal processing.[6]

Phase transitions occur when a system shifts between phases, often driven by changes in external variables such as temperature or pressure, and result in discontinuities in properties such as volume or entropy.[7] These transitions are mapped on phase diagrams, which delineate stable phase regions and coexistence lines (e.g., melting or boiling curves) for pure substances or alloys.[8] Notable examples include the liquid-gas critical point, beyond which the two phases become indistinguishable, and the superconducting phases that appear in some materials at low temperatures.[9]

Physical Sciences
Phases of Matter
In thermodynamics, a phase of matter is defined as a homogeneous region within a system that exhibits uniform physical properties, such as density, composition, and structure, throughout its volume.[10] This concept allows matter to be classified according to how its constituent particles (atoms, molecules, or ions) are arranged and interact under varying conditions of temperature and pressure.[11] Phases are distinguishable because crossing certain boundaries in thermodynamic state space produces abrupt changes in properties, even though each phase remains internally uniform.[12] The modern understanding of phases originated with Josiah Willard Gibbs, who introduced the concept in his 1876 paper "On the Equilibrium of Heterogeneous Substances," laying the foundation for the phase rule and for equilibrium analysis of multiphase systems.[13] A key experimental illustration is the triple point of water, where solid, liquid, and vapor phases coexist in equilibrium at precisely 0.01°C and 611.657 Pa, demonstrating the exact conditions under which multiple phases can stably exist together.[14]

Classical phases include the solid, liquid, and gas states, each characterized by distinct macroscopic properties arising from the degree of particle ordering and mobility. In the solid phase, particles are tightly packed with limited movement, resulting in high rigidity and very low compressibility; solids can be crystalline, featuring a regular, repeating lattice structure (as in diamond), or amorphous, with a disordered arrangement (as in glass).[15] Liquids exhibit moderate particle mobility, allowing flow while maintaining a fixed volume, with low compressibility but higher viscosity than gases owing to stronger intermolecular forces.[16] Gases, in contrast, have particles that are far apart and move freely, leading to high compressibility and low viscosity and enabling easy expansion to fill the available space.[17] A representative example is the water system: ice (solid) has a rigid hexagonal crystal structure with minimal flow, liquid water flows with moderate viscosity and resists compression, and water vapor behaves as a highly compressible gas. These phases highlight how properties such as density and viscosity vary distinctly within the same substance.[18]

Beyond the classical phases, non-classical states emerge under extreme conditions, expanding the classification of matter. Plasma, often considered the fourth state, consists of ionized gas with free electrons and ions, exhibiting collective electromagnetic behavior and high electrical conductivity, as seen in stars and lightning.[19] The Bose-Einstein condensate (BEC) forms at temperatures near absolute zero, where bosons occupy the same quantum state, creating a macroscopic wave-like entity with superfluid properties; it was first experimentally realized in 1995 using ultracold rubidium atoms.[20] Supercritical fluids occur above a substance's critical point, where the distinction between liquid and gas vanishes, yielding a dense, diffusive state with properties intermediate between the two, such as the enhanced solubility exploited in industrial extractions like decaffeinating coffee.[21] These non-classical phases illustrate how matter can adopt novel forms when quantum effects or extreme pressures dominate; transitions between any phases involve changes in energy and entropy, as detailed in phase transition theory.[17]
Wave and Oscillation Phase
In wave physics, the phase of a periodic wave refers to the position within its cycle at a given point in time and space, and serves as the argument of the periodic function describing the wave.[22] Phase is typically measured in radians, ranging from 0 to 2\pi, or equivalently in degrees from 0° to 360°, corresponding to one full oscillation.[22] For a sinusoidal wave, the displacement y at position x and time t is given by y = A \sin(\omega t - kx + \phi_0), where A is the amplitude, k is the wave number, \omega is the angular frequency, and \phi_0 is the initial phase. The phase \phi = \omega t - kx + \phi_0 thus tracks the wave's progression through its cycle. At a fixed position (e.g., x = 0), the phase simplifies to \phi = \omega t + \phi_0, where \phi_0 sets the starting point of the oscillation.[22] This form follows directly from the sinusoidal expression y = A \sin(\omega t + \phi_0): the argument \omega t + \phi_0 is the phase, advancing linearly with time at rate \omega.[22]

The phase difference between two waves is the spatial or temporal offset between their cycles, quantified as \Delta\phi = \phi_1 - \phi_2.[23] When \Delta\phi = 2m\pi (for integer m), the waves are in phase, leading to constructive interference in which the amplitudes add; conversely, \Delta\phi = (2m+1)\pi results in destructive interference, in which the amplitudes cancel.[24] In optics, phase shifts in light waves arise from path-length differences or reflections, producing diffraction patterns such as those in single-slit experiments, where the varying phase across the aperture creates intensity minima.[25] For instance, reflection from a medium of higher refractive index introduces a \pi phase shift, contributing to interference fringes in thin films.[23] In acoustics, the phase relationship between sound waves underlies phenomena such as beats and spatial audio: in-phase alignment enhances perceived loudness through constructive interference, while out-of-phase waves reduce it.[26] Phase differences in sound propagation can also be used to align wave fronts in arrays, as in directional microphones.[26]

Wave phases are measured with interferometers such as the Michelson interferometer, which splits a light beam into two paths, recombines them, and detects phase differences through shifts in the interference fringes; each fringe shift corresponds to a mirror displacement of \lambda/2, i.e., an optical path-length change of one wavelength.[27] The setup quantifies phase by observing the resulting intensity pattern, in which each fringe represents a 2\pi phase shift.[28]
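To make the interference rule concrete, the following minimal Python sketch superposes two equal-amplitude sinusoids at a fixed position; the amplitude, 440 Hz frequency, and sample grid are assumed values chosen only for illustration. With \Delta\phi = 0 the peak of the sum is twice the single-wave amplitude, while with \Delta\phi = \pi the sum vanishes to within floating-point error.

    # Minimal sketch: constructive vs. destructive interference of two
    # sinusoids at a fixed position (x = 0). Amplitude and frequency are
    # assumed example values, not taken from the text.
    import numpy as np

    A = 1.0                              # amplitude
    f = 440.0                            # frequency in Hz (assumed)
    omega = 2 * np.pi * f                # angular frequency, rad/s
    t = np.linspace(0.0, 1.0 / f, 1000)  # one full cycle

    def wave(t, phi0):
        """Displacement y = A sin(omega t + phi0) at x = 0."""
        return A * np.sin(omega * t + phi0)

    # Delta phi = 0: in phase, amplitudes add (peak ~ 2A).
    print(np.abs(wave(t, 0.0) + wave(t, 0.0)).max())

    # Delta phi = pi: out of phase, amplitudes cancel (peak ~ 0).
    print(np.abs(wave(t, 0.0) + wave(t, np.pi)).max())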
Phase Transitions
Phase transitions are processes by which a thermodynamic system changes from one phase of matter to another, typically induced by variations in temperature, pressure, or composition.[17] These transformations involve rearrangements of the molecular or atomic structure, such as the breaking or forming of bonds, and can be classified by the behavior of thermodynamic properties at the transition point.[29] The Ehrenfest classification, introduced in 1933, categorizes phase transitions by the lowest order of derivative of the thermodynamic potential that exhibits a discontinuity.[30] First-order transitions, such as melting or boiling, show a discontinuous change in a first derivative of the free energy, such as volume or entropy, and involve the absorption or release of latent heat with no temperature change during the process. Second-order transitions, in contrast, have continuous first derivatives but discontinuities in higher-order ones, such as the specific heat, so they involve no latent heat and proceed more smoothly, as seen in certain magnetic or superconducting transitions.[31]

Common examples include melting, in which a solid transforms to a liquid (e.g., ice to water at 0°C and 1 atm); boiling, which converts liquid to gas (e.g., water to steam at 100°C and 1 atm); and sublimation, a direct solid-to-gas transition (e.g., dry ice).[17] A notable feature is the critical point, beyond which the distinction between the liquid and gas phases vanishes, forming a supercritical fluid with properties intermediate between the two; for water, this occurs at 374°C and 218 atm.[32]

The Gibbs phase rule governs the conditions under which phases coexist in equilibrium:

F = C - P + 2

where F is the number of degrees of freedom (variables such as temperature and pressure that can be changed without altering the number of phases), C is the number of components, and P is the number of phases.[33] For a unary system such as pure water (C = 1), the rule predicts invariant conditions (F = 0) at the triple point, where solid, liquid, and gas coexist (0.01°C, 611 Pa); univariant lines such as the melting curve (F = 1, the temperature is fixed at a given pressure); and bivariant regions such as the liquid phase (F = 2, temperature and pressure independently variable).[34]

In modern contexts, second-order transitions appear in high-T_c superconductors such as the cuprate YBa_2Cu_3O_7, in which the material passes from the normal to the superconducting state below a critical temperature (around 90 K) without latent heat, enabling zero-resistance current flow.[35] Experimental measurement of the latent heat of first-order transitions relies on calorimetry, in which the heat absorbed or released is quantified by monitoring temperature changes in a controlled system, often using differential scanning calorimetry for precise enthalpy determination.[36]
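As a minimal sketch of the phase rule, the short Python function below (a hypothetical helper, named here only for illustration) evaluates F = C - P + 2 and reproduces the pure-water cases cited above.

    # Minimal sketch of the Gibbs phase rule F = C - P + 2, reproducing the
    # pure-water (C = 1) cases discussed in the text.
    def degrees_of_freedom(components: int, phases: int) -> int:
        """Gibbs phase rule: F = C - P + 2."""
        return components - phases + 2

    print(degrees_of_freedom(1, 3))  # 0 -> triple point (invariant)
    print(degrees_of_freedom(1, 2))  # 1 -> coexistence curves (univariant)
    print(degrees_of_freedom(1, 1))  # 2 -> single-phase regions (bivariant)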
Mathematics and Engineering
Phase in Complex Numbers
In complex analysis, the phase of a nonzero complex number z = x + iy, where x and y are real, is its argument \theta: the angle between the positive real axis and the line from the origin to the point (x, y) in the complex plane. This representation expresses z in polar form as z = r e^{i\theta}, with magnitude r = |z| = \sqrt{x^2 + y^2}. The phase \theta is typically computed with the two-argument arctangent function, \theta = \operatorname{atan2}(y, x), which accounts for the correct quadrant and yields values in the interval (-\pi, \pi].[37][38][39]

The argument function is inherently multi-valued, because angles differing by integer multiples of 2\pi represent the same complex number, so \arg(z) = \theta + 2\pi k for any integer k. To make it single-valued in practice, the principal value \operatorname{Arg}(z) is defined on the principal branch, conventionally -\pi < \operatorname{Arg}(z) \leq \pi, with the negative real axis taken as the branch cut. The principal argument gives a unique determination while preserving continuity everywhere in the complex plane except along the cut. The multi-valuedness arises from the periodic wrapping of the exponential map, necessitating branch choices in applications involving logarithms or roots.[40][41][42]

A foundational relation underpinning the phase is Euler's formula, e^{i\theta} = \cos\theta + i\sin\theta, which links the exponential function to the trigonometric functions and enables the polar representation of complex numbers. Leonhard Euler introduced the formula in his 1748 treatise Introductio in analysin infinitorum, deriving it through series expansions without relying on geometric interpretations of the complex plane. The equation shows how the phase \theta encodes rotational information in the complex plane.[43][44]

In applications, the phase plays a key role in Fourier analysis, where signals are decomposed into complex exponential components e^{i 2\pi f t}; the phase of each Fourier coefficient determines the temporal shift of the corresponding frequency component, which is essential for reconstructing the original signal. In signal processing, for instance, the phase spectrum of the discrete Fourier transform reveals alignments or delays between harmonics, aiding tasks such as filtering or phase correction without altering magnitudes. This decomposition uses the argument's properties to separate amplitude and phase information, providing a complete frequency-domain representation.[45][46]
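These computations map directly onto Python's standard library. The sketch below (the value of z is an arbitrary example) uses cmath.polar, cmath.rect, cmath.phase, and math.atan2 to illustrate the principal argument, Euler's formula, and the multi-valued wrapping described above.

    # Minimal sketch: phase (argument) of a complex number with Python's
    # standard library; z is an arbitrary example value.
    import cmath
    import math

    z = 1 + 1j
    r, theta = cmath.polar(z)              # magnitude and principal argument
    print(r)                               # ~1.4142 = sqrt(2)
    print(theta)                           # ~0.7854 = pi/4
    print(math.atan2(z.imag, z.real))      # same angle; atan2 picks the quadrant

    # Euler's formula in reverse: r * e^{i*theta} reconstructs z.
    print(cmath.rect(r, theta))            # ~(1+1j)

    # Multi-valuedness: adding 2*pi to the argument gives the same point.
    print(cmath.rect(r, theta + 2 * math.pi))  # ~(1+1j) again

    # Branch cut: on the negative real axis the principal argument is pi.
    print(cmath.phase(complex(-1.0, 0.0)))     # 3.14159...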
Electrical and Power Systems
In electrical and power systems, phase refers to the timing offset between alternating current (AC) waveforms, representing the angular position of one waveform relative to another within a cycle.[47] This concept is fundamental to AC circuits, where voltage and current vary sinusoidally and the phase difference between them influences power delivery and system efficiency.[48]

The development of polyphase systems, particularly three-phase power, traces back to Nikola Tesla's inventions in the late 19th century. In 1888, Tesla was granted patents for his polyphase AC motor and system, which used multiple phases to enable efficient power transmission and utilization, forming the basis for modern AC electrical grids.[49] These innovations allowed for the practical generation, transmission, and distribution of electricity over long distances, overcoming the limitations of direct-current systems.[50]

Single-phase AC systems use a single waveform for power delivery; they are suitable for residential and light commercial loads but suffer from pulsating power output.[47] In contrast, polyphase systems, most commonly three-phase, employ three identical AC waveforms offset from one another by 120 degrees in phase, providing smoother and more constant power flow.[51] This configuration is widely used in industrial power transmission because of its higher efficiency: it reduces the conductor material required and minimizes power fluctuations compared with single-phase setups.[52]

Three-phase systems can be connected in wye (star) or delta configurations, each offering distinct advantages for voltage and current distribution. In a wye connection, the phases connect to a common neutral point, giving access to both the phase voltage (line-to-neutral) and the line voltage (line-to-line, which is \sqrt{3} times the phase voltage).[53] Delta connections link the phases in a closed loop, so the phase voltage equals the line voltage but the line current is \sqrt{3} times the phase current, making the configuration suitable for high-power applications without a neutral conductor.[54] These configurations enhance transmission efficiency by balancing loads and enabling flexible voltage levels for diverse applications.

A key metric in these systems is the power factor, defined as \cos\phi, where \phi is the phase angle between the voltage and current waveforms; it quantifies the efficiency of real power utilization relative to apparent power.[48] The phase voltage is the potential across a single phase winding, while the line voltage is the potential between two lines; the two differ by a factor of \sqrt{3} in wye systems owing to the vector addition of phases.[53] The total real power in a balanced three-phase system is calculated as

P = \sqrt{3} \, V_L I_L \cos \phi
where V_L is the line voltage, I_L is the line current, and \cos \phi is the power factor; this formula accounts for the three phases' contributions.[55] Three-phase systems offer significant advantages over single-phase for motors and generators, including constant torque and power delivery, which reduces vibration and improves reliability in industrial machinery.[50] Generators produce more power with less material, and motors start more smoothly without auxiliary windings, leading to higher efficiency and smaller designs for equivalent output.[52] Phasor representations, using complex numbers, are often employed to analyze these phase relationships in circuit design.[53]
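As a worked numeric check of this formula, the short Python sketch below evaluates P and the wye phase voltage; the 400 V line voltage, 10 A line current, and 0.9 power factor are assumed example ratings, not values from the text.

    # Worked check of P = sqrt(3) * V_L * I_L * cos(phi) for a balanced
    # three-phase load; the ratings are assumed example values.
    import math

    V_line = 400.0        # line-to-line voltage, volts (assumed)
    I_line = 10.0         # line current, amperes (assumed)
    power_factor = 0.9    # cos(phi)

    real_power = math.sqrt(3) * V_line * I_line * power_factor
    print(f"P = {real_power:.0f} W")                    # ~6235 W

    # Wye connection: phase (line-to-neutral) voltage is V_L / sqrt(3).
    print(f"V_phase = {V_line / math.sqrt(3):.1f} V")   # ~230.9 V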