Second
The second, symbol s, is the base unit of time in the International System of Units (SI). It is defined by taking the fixed numerical value of the caesium frequency Δν_Cs, the unperturbed ground-state hyperfine transition frequency of the caesium-133 atom, to be 9 192 631 770 when expressed in the unit Hz, which is equal to s⁻¹.[1] This definition, adopted in 2019, ensures the second remains a stable and reproducible standard independent of astronomical observations.[2] Historically, the second originated as a subdivision of the day, initially defined as 1/86 400 of the mean solar day based on Earth's rotation.[2] This astronomical definition, used before 1960, suffered from irregularities in Earth's rotation, leading to imprecision.[3] In 1960, the 11th General Conference on Weights and Measures (CGPM) redefined it as a fraction of the tropical year 1900 for better stability.[2] The modern atomic definition was established in 1967 by the 13th CGPM, linking the second to the caesium-133 atom's hyperfine transition frequency, which provided unprecedented accuracy.[2] The 2018 revision by the 26th CGPM fixed the numerical value of this frequency exactly, aligning with the broader redefinition of SI units based on fundamental constants.[2] The second underpins precise timekeeping essential to modern science, technology, and daily life.[3] It enables applications such as satellite navigation systems like GPS, which rely on atomic clocks synchronized to within nanoseconds for accurate positioning.[3] In physics, the second serves as a foundational unit for deriving other quantities, including speed, frequency, and energy, facilitating experiments in quantum mechanics and relativity.[4] Accurate realizations of the second, achieved through caesium atomic clocks and emerging optical lattice clocks, support advancements in telecommunications, financial transactions, and fundamental research, with current optical clocks potentially stable to within 1 second over the age of the universe.[3]
Definition and Etymology
Etymology
The term "second" for the unit of time originates from the Medieval Latin secunda, a shortening of pars minuta secunda, meaning "second small part," which distinguished it from the pars minuta prima or "first small part," referring to the minute as the initial subdivision of the hour.[5] This nomenclature arose in the context of medieval astronomical and mathematical calculations, where time and angular measurements were divided hierarchically into increasingly finer portions.[5] In medieval European timekeeping, the second emerged as the smallest practical division of the hour, building on the sexagesimal (base-60) system that subdivided the minute into 60 parts.[6] This usage first appeared in written records around the late 14th century, primarily in scientific treatises on astronomy and geometry, where precise divisions were essential for computations.[5] The sexagesimal framework underpinning the second traces back to the ancient Babylonians, whose base-60 numeral system influenced Greek astronomers like Ptolemy and was later adopted across Europe for time and angle measurements.[7] This legacy ensured the second's integration into the standardized divisions of the hour and minute that persisted through the Middle Ages.[7]
Current SI Definition
The second, symbol s, is the SI base unit of time. It is defined by taking the fixed numerical value of the caesium frequency Δν_Cs, the unperturbed ground-state hyperfine transition frequency of the caesium-133 atom, to be 9 192 631 770 when expressed in the unit Hz, which is equal to s⁻¹.[8] This definition corresponds to the duration of 9 192 631 770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium-133 atom at rest relative to the observer and at a thermodynamic temperature of 0 K.[2] This atomic definition was established by Resolution 1 of the 13th General Conference on Weights and Measures (CGPM) in 1967, replacing earlier ephemeris-based definitions to achieve greater precision and universality. It was revised by Resolution 1 of the 26th CGPM in 2018, effective from 20 May 2019, to explicitly fix the numerical value of Δν_Cs at exactly 9 192 631 770 Hz as part of the broader redefinition of SI units using fundamental constants. The hyperfine transition in question occurs between the two hyperfine sublevels (F=3 and F=4) of the ground electronic state (6s ^2S_{1/2}) of the caesium-133 atom, producing microwave radiation at approximately 9.192 GHz.[8] Caesium-133 was selected for this definition due to the exceptional stability and reproducibility of its hyperfine transition frequency, which provides a highly accurate, invariant standard independent of environmental variations and suitable for precise metrological realizations in atomic clocks.[2]
Uses and Applications
In Timekeeping and Daily Life
In mechanical clocks, the second is marked by the escapement mechanism, which regulates the release of energy from a wound spring or weight, typically advancing the second hand once per oscillation of the balance wheel or pendulum, accumulating into larger units like minutes and hours. Quartz clocks, introduced in the mid-20th century, generate seconds through a crystal oscillator vibrating at 32,768 Hz, divided electronically to produce one pulse per second that drives the timekeeping display. Digital clocks similarly derive seconds from quartz-based electronic circuits, displaying them numerically as the fundamental unit that builds to 86,400 seconds in a mean solar day.[9][10] In sports timing, the second serves as the base unit for measuring performance in events like Olympic track races, where official times are recorded to the nearest hundredth of a second using photo-finish cameras and electronic sensors to determine winners with high precision. Transportation systems, such as GPS, rely on sub-second timing synchronization between satellites and receivers, achieving accuracies better than 100 nanoseconds to enable precise location calculations essential for navigation.[11][12] The second hand on analog watches provides a visual representation of passing time, first appearing in the late 16th century and becoming a common feature on timepieces in the 18th century, serving as a cultural symbol of punctuality and the relentless progression of moments in daily life.[13][14] Human reaction times to visual stimuli average 0.2 to 0.3 seconds, influencing perceptions of time in activities like driving or gaming, where delays beyond this range can affect responsiveness.[15]
In Science and Technology
In physics, the second is a fundamental base unit in the International System of Units (SI), serving as the denominator for derived units such as frequency, where the hertz (Hz) is defined as one cycle per second.[16] This makes the second essential for quantifying oscillatory phenomena, like electromagnetic waves or mechanical vibrations. For instance, the frequency of visible light ranges from about 4 × 10^14 Hz to 7.5 × 10^14 Hz, illustrating how the inverse second captures rapid periodic events at the atomic and subatomic scales. The second also underpins units of speed and acceleration; speed is expressed in meters per second (m/s), while acceleration uses meters per second squared (m/s²).[16] A representative example is the speed of sound in dry air at 20°C, which is approximately 343 m/s, demonstrating the second's role in wave propagation and fluid dynamics calculations. In kinematics, acceleration due to gravity near Earth's surface is about 9.8 m/s², highlighting the unit's application in describing changes in velocity over time. In computing, the second measures processor performance through clock cycles, with modern central processing units (CPUs) operating at gigahertz (GHz) frequencies—billions of cycles per second—to execute instructions efficiently.[17] For example, a 3 GHz CPU performs 3 × 10^9 cycles per second, enabling rapid data processing in applications from simulations to real-time graphics rendering.[18] Data transfer rates in networks and storage systems are similarly quantified in bits per second (bps), where high-speed Ethernet can reach 100 Gbps, or 10^11 bits per second, to handle large-scale information flow.[19] Astronomy employs the second for precise timing of celestial phenomena, such as the rotation periods of pulsars—rapidly spinning neutron stars that emit beams detectable as pulses, some occurring hundreds of times per second.[20] The Crab Pulsar, for instance, rotates 30 times per second, with its pulse timing providing a natural clock for verifying atomic time standards over vast distances.[21] Additionally, the second defines convenient distance units like the light-second, the distance light travels in vacuum in one second, approximately 299,792 kilometers, which scales to larger measures such as light-years for interstellar navigation and event timing.[22]
Relation to Broader Time Measurements
Clocks and Solar Time
Apparent solar time, as measured by sundials, reflects the actual position of the Sun in the sky and varies throughout the year due to the Earth's elliptical orbit and axial tilt, resulting in differences from mean solar time known as the equation of time, which can reach up to ±16 minutes.[23][24] Mean solar time, in contrast, assumes a uniform rate of solar motion and forms the basis for civil timekeeping, where each second is a consistent fraction of the average solar day, ensuring clocks maintain steady intervals independent of daily solar variations.[25] Early mechanical clocks regulated their seconds through escapement mechanisms that controlled the release of energy from a weight or spring, with pendulum clocks providing precise timing by leveraging gravitational oscillation. In 1656, Christiaan Huygens developed the first practical pendulum clock, featuring a seconds pendulum with a period of 2 seconds—meaning each swing took 1 second—to drive the escapement and mark uniform seconds, dramatically improving accuracy to within seconds per day compared to prior designs.[26][27] These escapements, often anchor-style, ticked at each pendulum swing, dividing the day into equal seconds aligned with mean solar time. Modern quartz clocks achieve even greater precision by using a piezoelectric quartz crystal that oscillates at a resonant frequency when electrified, typically 32,768 Hz in wristwatches and similar devices, which is divided down through binary counters to produce 1 Hz pulses for second increments.[28][29] This frequency, a power of 2 (2^15), allows efficient digital division to generate exact 1-second intervals, making quartz mechanisms standard in contemporary timepieces for their stability and low deviation from mean solar seconds.
Larger Units and Events Measured in Seconds
The second serves as the foundational unit for larger time measurements in the International System of Units (SI). By definition, there are 60 seconds in a minute. A standard day consists of 24 hours, equating to exactly 86,400 seconds. The average length of a Gregorian year is 365.2425 days, or approximately 31,556,952 seconds.[30] On human scales, durations measured in seconds illustrate everyday and lifetime experiences. For instance, the time required for light to travel from the Sun to Earth, covering approximately 149.6 million kilometers at the speed of light, is about 499 seconds, or roughly 8 minutes and 19 seconds. An average human lifespan, based on a global life expectancy of around 73 years, corresponds to approximately 2.3 billion seconds, though estimates often round to 2.4 billion seconds when considering slight variations in annual length and regional differences.[31][32] At cosmic scales, the second quantifies vast epochs. The current estimated age of the universe, derived from measurements of the cosmic microwave background by the Planck mission, is about 13.8 billion years, equivalent to roughly 4.36 × 10^{17} seconds. This immense duration underscores the second's role in expressing the timeline of cosmic evolution from the Big Bang onward.[33]
Timekeeping Standards
Atomic Time Standards
Atomic time standards form the basis for realizing the SI second, with caesium fountain clocks serving as the primary frequency standards. These clocks measure the frequency of the microwave transition between two hyperfine energy levels in the ground state of the caesium-133 atom, as defined by the International System of Units. The most advanced caesium fountain clocks achieve fractional frequency uncertainties as low as 1 × 10^{-16}, corresponding to an accuracy where the clock would lose or gain no more than 1 second over approximately 300 million years.[34] The operation of a caesium fountain clock involves laser cooling caesium atoms to temperatures near absolute zero, typically around 1 microkelvin, to reduce thermal motion and enable precise measurement. The cooled atoms are then launched upward in a vacuum chamber, forming a "fountain" trajectory, during which they pass through a microwave cavity twice—once ascending and once descending. This allows a Ramsey interrogation sequence, where two separated microwave pulses interact with the atoms to determine the precise frequency of the 9,192,631,770 Hz hyperfine transition with minimal perturbation.[34] The extended flight time of about 1 second enhances the interrogation precision compared to earlier atomic beam clocks.[35] International comparisons of these primary standards are coordinated by the BIPM Time Department, which collects calibration data from caesium fountain clocks at national metrology institutes worldwide to ensure consistency in the realization of the SI second. Over 450 atomic clocks, including hydrogen masers for short-term stability and caesium fountains for long-term accuracy, contribute to the stability of International Atomic Time (TAI), but the accuracy is anchored by a subset of about a dozen high-precision caesium fountains through regular key comparisons like CCTF-K001.UTC. 
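The quoted accuracy figure follows from simple arithmetic: a clock with fractional frequency uncertainty f accumulates at most f seconds of error for every elapsed second, so the time needed to drift by one full second is 1/f. A minimal sketch (the 300-million-year figure in the text comes from the same calculation):

```python
# How long until a clock with fractional frequency uncertainty f drifts by 1 s?
SECONDS_PER_YEAR = 365.25 * 86_400  # Julian year in seconds

def years_to_drift_one_second(fractional_uncertainty: float) -> float:
    """Elapsed years before the accumulated timing error reaches 1 second."""
    return (1.0 / fractional_uncertainty) / SECONDS_PER_YEAR

# Caesium fountain clock at 1e-16 fractional uncertainty:
print(f"{years_to_drift_one_second(1e-16):.3g} years")  # ~3.17e8, i.e. ~300 million years
```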
These comparisons use techniques such as GPS carrier-phase measurements and two-way satellite time and frequency transfer to synchronize and evaluate clock performances globally.[36][37]
Coordinated Universal Time
Coordinated Universal Time (UTC) serves as the international reference time scale, maintained by the International Bureau of Weights and Measures (BIPM) in collaboration with the International Earth Rotation and Reference Systems Service (IERS). It combines International Atomic Time (TAI), a continuous scale defined by the weighted average of highly stable atomic clocks worldwide, with adjustments to align it to Universal Time UT1, which tracks Earth's irregular rotation relative to distant stars. As of November 2025, TAI leads UTC by exactly 37 seconds; UTC always differs from TAI by an integer number of seconds, chosen so that the offset between UTC and UT1 remains within ±0.9 seconds.[38] To maintain this alignment, leap seconds—positive adjustments of one second—are inserted into UTC as needed, typically at the end of June or December, producing the extra timestamp 23:59:60 between 23:59:59 and 00:00:00. Since their introduction in 1972, 27 leap seconds have been added to UTC, the most recent on December 31, 2016; none has been required since, because Earth's rotation has slowed less rapidly than anticipated.[39] In 2022, the General Conference on Weights and Measures adopted Resolution 4, directing the development of a plan to discontinue leap seconds by 2035 and to let the difference |UT1 − UTC| grow beyond the traditional ±0.9 second limit without adjustment until at least 2135, to simplify global timekeeping systems.[40] UTC underpins critical applications requiring precise synchronization, including the Global Positioning System (GPS), where satellite signals broadcast UTC to enable accurate positioning and timing for navigation worldwide. In telecommunications, UTC ensures coordinated operations across networks, such as in mobile and internet protocols, preventing disruptions from timing mismatches.
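The 37-second offset decomposes into the 10-second adjustment applied when the leap-second system began in 1972 plus the 27 leap seconds inserted since. A toy conversion illustrating the fixed integer offset (the helper name `utc_from_tai` is hypothetical, and real conversions must consult a leap-second table for historical dates):

```python
from datetime import datetime, timedelta

INITIAL_OFFSET = 10   # seconds: TAI - UTC when leap seconds were introduced in 1972
LEAP_SECONDS = 27     # leap seconds inserted from 1972 through the end of 2016

def utc_from_tai(tai: datetime) -> datetime:
    """Convert a TAI timestamp to UTC, valid only for dates after 2016-12-31."""
    return tai - timedelta(seconds=INITIAL_OFFSET + LEAP_SECONDS)

tai = datetime(2025, 11, 1, 0, 0, 37)
print(utc_from_tai(tai).isoformat())  # 2025-11-01T00:00:00
```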
Legally, UTC forms the basis for civil time zones globally, with countries offsetting their standard times from it to regulate daily activities, broadcasting, and international agreements.
Historical Development
Ancient and Sexagesimal Systems
The ancient Babylonians adopted the sexagesimal (base-60) numeral system, inherited from the Sumerians around the third millennium BCE, for astronomical and timekeeping purposes by approximately 2000 BCE. This system facilitated the division of the 24-hour day—itself derived from earlier Egyptian practices of splitting daylight and nighttime into 12 parts each—into 60 minutes per hour and 60 seconds per minute, enabling more accurate tracking of celestial movements and seasonal cycles.[41] Medieval astronomers built upon this foundation, refining sexagesimal divisions for enhanced precision in time measurement. In the 2nd century CE, Claudius Ptolemy detailed in his Almagest the subdivision of hours into 60 "first small parts" (minutes) and further into 60 "second small parts" (seconds), primarily for angular calculations in astronomy but directly applicable to temporal intervals due to the linkage between time and celestial arcs. By the 11th century, the scholar Abu Rayhan al-Biruni advanced these methods through his work on astrolabes, instruments that allowed observers to measure stellar altitudes and derive local time with accuracy approaching the second, as his designs achieved angular resolutions of up to 10 arcminutes, equivalent to roughly 40 seconds of time.[41][42] In medieval Europe, the practical adoption of these divisions occurred through monastic water clocks, which emerged in the 11th to 13th centuries to enforce the rhythm of canonical prayers by segmenting the day into unequal seasonal hours. These clepsydrae, often elaborate devices with floats and gears powered by regulated water flow, marked subdivisions into minutes for liturgical timing, while astronomical tables incorporated seconds for finer computational adjustments, bridging ancient theoretical systems with emerging mechanical timekeeping.[43]
Astronomical Definitions
In the 19th century, astronomers established the mean solar second as precisely 1/86,400 of a mean solar day, a definition that formalized the division of the day into 24 hours of 60 minutes each, with each minute comprising 60 seconds.[10] This unit, rooted in the apparent motion of the Sun across the sky, provided a practical standard for timekeeping but was inherently variable due to irregularities in Earth's rotation.[44] By the mid-20th century, the limitations of relying on Earth's daily rotation became evident, prompting a shift toward a more stable astronomical reference. In 1956, the International Committee on Weights and Measures redefined the second as the ephemeris second, equivalent to 1/31,556,925.9747 of the tropical year for 1900 January 0 at 12 hours ephemeris time.[45] The tropical year, defined as the time interval between successive vernal equinoxes, offered a longer-term orbital benchmark less affected by short-term rotational fluctuations, with the specific fraction calculated from precise ephemeris observations of celestial bodies.[46] However, even this orbital-based definition faced challenges from secular changes in Earth's dynamics. Observations revealed that tidal friction from the Moon and Sun causes Earth's rotation to slow gradually, lengthening the mean solar day by approximately 1.7 milliseconds per century, which complicates the consistency of pre-atomic time standards and necessitates ongoing astronomical corrections.[47] This variability underscored the need for definitions tied to invariant natural phenomena, while the 86,400-fold division of the day traces back to ancient Babylonian sexagesimal systems that influenced modern subdivisions.[48]
Transition to Atomic Standard
The development of atomic clocks in the mid-20th century marked a pivotal shift from astronomical to atomic time standards, driven by the need for greater stability and reproducibility. In 1955, Louis Essen and Jack Parry at the National Physical Laboratory (NPL) in the United Kingdom constructed the first operational cesium-beam atomic clock, which measured the hyperfine transition frequency of cesium-133 atoms at approximately 9.192 GHz, achieving stability far superior to existing quartz or astronomical methods.[49] This innovation built on earlier work by Isidor Rabi and others on molecular beam resonance, providing a foundation for redefining the unit of time independent of Earth's irregular rotation.[50] By 1967, international consensus recognized atomic time's advantages, leading the 13th General Conference on Weights and Measures (CGPM) to formally adopt the atomic definition of the second. The second was defined as the duration of 9,192,631,770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the cesium-133 atom, unperturbed by external fields, at rest, and at a temperature of 0 K.[51] The primary rationale was atomic time's invariance, unaffected by astronomical variability such as tidal friction or geophysical changes that cause fluctuations in solar or ephemeris time.[2] To maintain continuity, the definition was calibrated so the atomic second matched the ephemeris second—previously defined as 1/31,556,925.9747 of the tropical year 1900—to within 1 part in 10^{10}.[2] This transition profoundly impacted practical timekeeping and technology.
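The 1967 definition can be unpacked numerically: one period of the caesium hyperfine radiation lasts exactly 1/9 192 631 770 s, and counting exactly that many periods reconstructs one second. A minimal sketch using exact rational arithmetic:

```python
from fractions import Fraction

CAESIUM_HZ = 9_192_631_770  # transition frequency in Hz, exact by definition

# Duration of a single period of the hyperfine transition radiation:
period = Fraction(1, CAESIUM_HZ)
print(float(period))  # ~1.0878e-10 s per period

# Counting exactly 9,192,631,770 periods yields exactly one second:
assert period * CAESIUM_HZ == 1
```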
It enabled the precise calibration of quartz crystal oscillators, culminating in the 1969 launch of the Seiko Quartz Astron, the world's first quartz wristwatch, which achieved accuracy within 5 seconds per month by referencing atomic standards.[52] In the 1970s, atomic time became essential for the Global Positioning System (GPS), where cesium and rubidium clocks on satellites ensure the nanosecond-level synchronization required for trilateration-based positioning accurate to meters.[49]
Historical Summary Table
| Era | Definition | Precision/Notes | Key Date/Event |
|---|---|---|---|
| Ancient Sexagesimal | The second as the 1/60th division of a minute in the base-60 (sexagesimal) system, yielding 86,400 seconds in a day.[53] | Approximate; limited by observational tools like sundials and water clocks, with daily errors of minutes to hours.[54] | c. 2000 BC, developed in Babylonian astronomy.[53] |
| Mean Solar | 1/86,400 of the mean solar day (average length of a day based on Earth's rotation relative to the Sun).[2] | Variable due to irregularities in Earth's rotation, including a gradual slowing of about 1.7 milliseconds per century; accuracy limited to roughly 1 part in 10^8 over short periods.[2] | Late 19th century standardization, e.g., 1884 International Meridian Conference establishing Greenwich as prime meridian.[55] |
| Ephemeris | The fraction 1/31,556,925.9747 of the tropical year for 1900 January 0 at 12 hours ephemeris time.[56] | More stable than mean solar time, with precision around 1 part in 10^8 based on astronomical ephemerides; independent of daily rotation variations.[54] | Adopted in 1956 by the International Committee for Weights and Measures (CIPM); ratified by the 11th General Conference on Weights and Measures (CGPM) in 1960.[57] |
| Atomic | The duration of 9,192,631,770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium-133 atom at rest at 0 K.[51] | Initial precision of about 1 part in 10^13, representing a gain of approximately 5 orders of magnitude in accuracy over the ephemeris second; enabled highly reproducible timekeeping.[54][58] | Adopted in 1967 by the 13th CGPM.[51] |
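The precision figures in the table can be translated into worst-case drift per year, which makes the roughly five-order-of-magnitude jump to the atomic era tangible. A back-of-envelope sketch (order-of-magnitude inputs only, interpreting each fractional accuracy as a bound on accumulated error):

```python
SECONDS_PER_YEAR = 365.25 * 86_400  # Julian year in seconds

# Fractional accuracies quoted in the table above (orders of magnitude only).
eras = {
    "mean solar (~1e-8)":   1e-8,
    "ephemeris (~1e-8)":    1e-8,
    "atomic 1967 (~1e-13)": 1e-13,
}

for name, frac in eras.items():
    drift = frac * SECONDS_PER_YEAR  # worst-case accumulated error per year
    print(f"{name}: ~{drift:.2g} s/year")
```

The mean solar and ephemeris standards work out to roughly a third of a second per year, while the 1967 atomic standard drops that to a few microseconds per year.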
Future Redefinition
Motivations and Technical Requirements
The current definition of the second, based on the caesium-133 hyperfine transition, achieves a relative frequency uncertainty of approximately 10^{-16} in state-of-the-art realizations, limiting further advancements in precision timekeeping and metrology.[59] This precision falls short of the 10^{-18} level required to align the second with the accuracy of other SI base units, such as the metre, which benefits from a definition tied to the invariant speed of light, thereby hindering interdisciplinary applications in fundamental physics, geodesy, and telecommunications.[60] The primary motivation for redefinition stems from the need to enhance the second's stability and accuracy to support scientific progress, including tests of general relativity and improved global navigation systems, as the caesium standard's limitations increasingly constrain technological innovation.[61] The Consultative Committee for Time and Frequency (CCTF), under the International Bureau of Weights and Measures (BIPM), has outlined specific technical requirements for any new definition to ensure it surpasses the current standard while preserving compatibility.[62] These include linking the second to a fixed numerical value of an invariant physical constant, such as the Rydberg constant, to provide an unalterable foundation akin to other SI redefinitions.[63] The new standard must be globally realizable with relative uncertainties below 10^{-17} in primary standards operated by national metrology institutes, enabling widespread adoption without disrupting existing time scales like Coordinated Universal Time (UTC).[64] Additionally, continuity must be maintained by ensuring the new definition reproduces the current second to within 10^{-16} or better, avoiding any abrupt shifts in international timekeeping infrastructure.[61] Discussions on redefining the second have intensified since the 2010s, driven by advancements in atomic frequency standards that outpace caesium technology.[59] The 
CCTF roadmap targets a potential adoption by 2030, contingent on achieving sufficient portability, stability, and intercomparison accuracy among candidate standards at the 10^{-18} level.[60] This timeline aligns with quadrennial meetings of the General Conference on Weights and Measures (CGPM), with a draft proposal anticipated for review in 2026 if technical criteria are met.[61]
Optical Clocks and Rydberg Constant
Optical clocks represent advanced atomic timekeeping devices that utilize electronic transitions in the optical frequency domain, offering significantly higher precision than the current caesium-based standard. These clocks probe narrow linewidth transitions in trapped ions or neutral atoms, enabling fractional frequency uncertainties on the order of 10^{-18}, which corresponds to an accuracy of about 1 second over 15 billion years.[59] For instance, ion-based optical clocks using the ^1S_0 to ^3P_0 transition in ^{27}Al^+ achieve systematic uncertainties as low as 5.5 × 10^{-19}, while neutral atom systems like the ^{87}Sr clock operate at a transition frequency of approximately 429 THz with uncertainties around 8 × 10^{-19}.[65] Such performance surpasses the limitations of caesium microwave clocks, which are constrained to around 10^{-16} accuracy due to environmental sensitivities.[60] A key feature of optical lattice clocks, particularly those using neutral atoms like strontium or ytterbium, is the use of optical lattices to trap and interrogate ensembles of atoms simultaneously.
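The jump from microwave to optical frequencies can be made concrete: the strontium clock transition near 429 THz oscillates tens of thousands of times faster than the 9.19 GHz caesium transition, which is a large part of what buys the extra precision. A sketch with rounded values:

```python
C = 299_792_458          # speed of light in vacuum, m/s (exact)
F_CS = 9_192_631_770     # caesium hyperfine transition frequency, Hz (exact)
F_SR = 429e12            # strontium clock transition frequency, Hz (approximate)

print(f"Cs wavelength: {C / F_CS * 100:.2f} cm")  # microwave, ~3.26 cm
print(f"Sr wavelength: {C / F_SR * 1e9:.0f} nm")  # optical, ~699 nm
print(f"frequency ratio: {F_SR / F_CS:,.0f}")     # roughly 47,000x faster
```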
In this design, counter-propagating laser beams tuned to a "magic wavelength" (typically near 813 nm for strontium) form a one-dimensional standing wave, creating a periodic potential that confines thousands to millions of laser-cooled atoms in individual sites.[66] This lattice minimizes differential light shifts between the clock states, allowing coherent interrogation of the transition with a probe laser at the clock frequency, thereby enhancing stability and reducing decoherence from atomic collisions.[67] The standing wave structure effectively arrays the atoms in a crystal-like formation, enabling superradiant effects in larger ensembles for even better short-term stability.[68] Linking optical clocks to fundamental constants offers a pathway for redefining the second in a manner independent of specific atomic species, potentially fixing the Rydberg constant R_∞ to eliminate measurement uncertainties. The Rydberg frequency is defined as ν_Ry = c·R_∞, where c is the exact speed of light and R_∞ ≈ 1.097 × 10^7 m⁻¹ is the infinite-mass Rydberg constant, yielding ν_Ry ≈ 3.29 × 10^{15} Hz.[61] This frequency can be realized through calculable transitions, such as the 1S–2S line in hydrogen, which optical clocks can measure with high precision to tie the second to quantum electrodynamics predictions.[69] A potential redefinition could thus express the second as 1/(k·ν_Ry), where k is a fixed integer multiple chosen to align with practical time scales, ensuring the new standard maintains continuity with UTC while anchoring time to a universal constant with relative uncertainty below 10^{-12}.[70] This approach leverages the superior accuracy of optical clocks to refine the value of R_∞, currently known to 1.9 parts in 10^{12}, and supports applications in fundamental physics tests.[61]
Recent Advances in 2025
In June 2025, an international collaboration established the largest coordinated network of optical clocks to date, simultaneously comparing ten such clocks across six countries including Finland, France, Germany, Italy, the United Kingdom, and Japan.[71] This effort utilized optical fiber links for regional connections in Europe and short-range fibers for local comparisons, supplemented by satellite links for global synchronization, achieving frequency ratio measurements with uncertainties as low as 4.4 × 10^{-18} between specific clocks like indium and ytterbium ions.[72] These sub-10^{-18} uncertainties represent a significant improvement over previous satellite-only methods, demonstrating the feasibility of a stable global optical time scale essential for verifying clock consistency ahead of potential SI redefinition.[71] Building on this, intercontinental comparisons in July 2025 extended the network's reach, linking clocks across Europe and Asia with transcontinental fiber and satellite infrastructure to confirm optical stability at levels between 10^{-16} and 10^{-18}.[73] For instance, offsets such as a 4 × 10^{-16} discrepancy in an Italian ytterbium clock were identified and resolved, ensuring agreement across the network and highlighting the robustness of these systems against propagation errors.[74] This milestone paves the way for a 2030 redefinition of the second by establishing the precision needed for an international optical standard to replace the cesium-based definition.[73] Earlier in January 2025, proposals advanced the development of optical ion clocks using thorium nuclei, with theoretical models for a Th^{5+} ion clock projecting relative uncertainties below 10^{-19} due to its closed-shell structure minimizing shifts from black-body radiation and external fields. 
These thorium-based designs, building on the low-energy nuclear isomer transition in thorium-229, support Rydberg constant-linked frequency standards by offering immunity to environmental perturbations that affect electronic transitions in conventional optical clocks. Such record projected accuracies underscore thorium's potential to enable a more stable redefinition of the second, with black-body radiation shifts calculated at just 4.3 × 10^{-24} at 300 K. In October 2025, the CCTF meeting highlighted ongoing progress toward redefining the second, including evaluations of optical frequency standards through international and regional comparison campaigns to assess candidate transitions and uncertainty budgets. No formal decision was made, but the discussions emphasized advances in optical clocks as key to achieving the necessary consensus for a potential redefinition by the end of the decade, around 2030.[75] Also in October 2025, research on the thorium-229 nuclear clock transition demonstrated its sensitivity to the fine-structure constant, enabling investigations into its stability with unprecedented precision. This breakthrough, published on October 15, 2025, supports the viability of nuclear clocks for probing fundamental physics and advancing toward a redefinition of the second independent of atomic electronic transitions.[76] Further work on October 27, 2025, confirmed that thorium nuclear clocks can detect variations in fundamental constants with accuracy 6,000 times greater than existing methods, reinforcing their role in future time standards.[77]
Derived Units and Multiples
Units Incorporating Seconds
Derived SI units incorporating the second typically express rates, velocities, or powers where the second appears in the dimensional formula, often in inverse form to denote quantities per unit time. These units combine the second with other base units like the metre or kilogram, enabling the measurement of dynamic phenomena in physics and engineering. The International System of Units (SI) defines these coherently to ensure consistency across scientific applications.[78]

The hertz (Hz) is the SI derived unit of frequency, defined as the number of cycles or events occurring per second, with the dimensional formula s^{-1}. It quantifies periodic phenomena, such as vibrations or oscillations. For example, alternating current (AC) mains electricity in North America operates at a standard frequency of 60 Hz.[78][79]

The metre per second (m/s) serves as the coherent SI unit for speed or velocity, expressed dimensionally as m·s^{-1}. This unit measures the rate of change of position with respect to time. A representative application is the escape velocity from Earth's surface, approximately 11.2 km/s, the minimum speed required for an object to overcome Earth's gravitational pull without further propulsion.[78][80]

Other derived units highlight the inverse role of the second in angular and energetic contexts. Angular frequency, or angular velocity, uses the radian per second (rad·s^{-1}), with the radian itself dimensionless, to describe rotational rates. Similarly, the watt (W), the SI unit of power, incorporates the second as kg·m^{2}·s^{-3}, representing energy transfer per unit time: one watt is one joule per second. These units underscore the second's foundational role in quantifying time-dependent processes.[78]

SI Prefixes for Seconds
The SI prefixes provide a systematic way to express multiples and submultiples of the second (s), allowing concise notation of time intervals ranging from ultrafast processes to extended durations. These prefixes, standardized by the General Conference on Weights and Measures and maintained by the International Bureau of Weights and Measures (BIPM), follow decimal powers of ten and apply uniformly across SI units. While all 24 prefixes (from quecto- to quetta-) are permissible for the second, practical usage is selective, favoring submultiples for short timescales in physics, chemistry, and engineering, and limited multiples for scientific measurements where traditional units like minutes or days are insufficient.[81]

Submultiples of the second are prevalent in fields requiring high temporal resolution. The millisecond (ms = 10^{-3} s) measures events on the order of human physiological responses, such as reaction times. The microsecond (µs = 10^{-6} s) is standard in electronics for signal processing and radar timing. The nanosecond (ns = 10^{-9} s), an interval in which light travels roughly 30 cm, appears in telecommunications and particle physics. The picosecond (ps = 10^{-12} s) supports applications in laser spectroscopy and semiconductor characterization. The femtosecond (fs = 10^{-15} s) enables the study of molecular dynamics in chemistry and ultrafast processes in laser-matter interactions, as demonstrated in early femtochemistry experiments using precisely timed pulses to observe atomic rearrangements. The attosecond (as = 10^{-18} s) facilitates probing electron motion in atoms and molecules, central to attosecond physics for resolving quantum-scale phenomena in intense laser fields.[81][82][83]

For multiples, adoption is more restrained due to the sexagesimal conventions in everyday timekeeping, but they find utility in specialized contexts.
The kilosecond (ks = 10^{3} s ≈ 16.7 minutes) occasionally denotes short operational intervals in engineering and computing simulations. The megasecond (Ms = 10^{6} s ≈ 11.6 days) measures extended observation periods in astronomy, such as the cumulative exposure time in X-ray telescope surveys of galactic centers. Larger multiples like the gigasecond (Gs = 10^{9} s ≈ 31.7 years) emerge rarely in cosmological modeling, while prefixes such as deca- (das = 10 s) and hecto- (hs = 10^{2} s ≈ 1.67 minutes) lack common application for time, overshadowed by conventional units.[81][84]

| Prefix | Symbol | Factor | Approximate Duration | Example Usage Context |
|---|---|---|---|---|
| Milli- | m | 10^{-3} | 1 ms | Physiological timings, audio processing |
| Micro- | µ | 10^{-6} | 1 µs | Digital signal timing, computing clocks |
| Nano- | n | 10^{-9} | 1 ns | Microwave propagation, collider events |
| Pico- | p | 10^{-12} | 1 ps | Optical spectroscopy, charge carrier dynamics |
| Femto- | f | 10^{-15} | 1 fs | Femtochemistry, laser filamentation |
| Atto- | a | 10^{-18} | 1 as | Electron dynamics in attosecond pulses |
| Kilo- | k | 10^{3} | 1 ks ≈ 16.7 min | Simulation runtimes, short missions |
| Mega- | M | 10^{6} | 1 Ms ≈ 11.6 days | Astronomical exposure times |
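The prefix scalings in the table can be illustrated with a short Python sketch. The `si_time` helper below is a hypothetical utility written for this example (not a standard-library function); it selects the nearest power-of-three prefix covered by the table and formats a duration accordingly:

```python
import math

# SI prefixes from the table above (powers of ten in steps of three);
# the full SI range extends further in both directions.
PREFIXES = {-18: "as", -15: "fs", -12: "ps", -9: "ns",
            -6: "µs", -3: "ms", 0: "s", 3: "ks", 6: "Ms"}

def si_time(seconds: float) -> str:
    """Format a positive duration with the nearest SI prefix from the table."""
    exp = math.floor(math.log10(seconds))    # decimal exponent of the value
    exp3 = min(max(3 * (exp // 3), -18), 6)  # round down to a multiple of 3, clamp to table
    return f"{seconds / 10**exp3:.3g} {PREFIXES[exp3]}"

print(si_time(1 / 60))        # period of 60 Hz AC mains -> 16.7 ms
print(si_time(2.5e-9))        # a nanosecond-scale signal delay -> 2.5 ns
print(si_time(11.6 * 86400))  # ~11.6 days of exposure time -> 1 Ms
```

The first example also ties back to the derived units discussed earlier: the period of a periodic signal is the reciprocal of its frequency, T = 1/f, so 60 Hz mains corresponds to a cycle of about 16.7 ms.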