
University Physics

University physics refers to the calculus-based introductory physics curriculum offered at colleges and universities, primarily targeting students majoring in science, engineering, or related fields. This multi-semester sequence emphasizes the application of mathematical tools, such as calculus, vectors, and differential equations, to explore fundamental physical laws governing matter, energy, motion, and interactions in the universe. The curriculum typically unfolds over two to three semesters, beginning with classical mechanics—including topics like kinematics, Newton's laws, work, energy, momentum, rotational dynamics, gravitation, and fluid statics and dynamics—followed by waves and acoustics. Subsequent courses cover thermodynamics, electricity and magnetism (encompassing electric fields, Gauss's law, potential, capacitance, current, resistance, circuits, magnetic fields, Ampère's law, and induction), and optics alongside modern physics concepts such as relativity, quantum mechanics, atomic structure, and nuclear physics. A hallmark of university physics education is its integration of theoretical principles with experimental verification through laboratory components, fostering skills in data analysis, error assessment, and scientific inquiry. This approach not only builds a rigorous understanding of natural phenomena but also prepares students for advanced coursework, research, and professional applications in fields like engineering, medicine, and applied science.

Introduction

Definition and Scope

University physics constitutes a foundational sequence of two- or three-semester calculus-based courses tailored for undergraduate majors in science, engineering, mathematics, and related fields, providing essential training in physical principles through quantitative problem-solving. This sequence differs from algebra-based introductory physics offerings, which emphasize conceptual understanding without advanced mathematics and are geared toward non-STEM students such as those in the life sciences or health professions. The scope of university physics encompasses core areas of classical and modern physics, including mechanics, thermodynamics, waves and acoustics, electricity and magnetism, optics, and introductory topics in relativity and quantum mechanics. These courses build a comprehensive framework for understanding natural phenomena, often structured across three volumes or semesters to allow progressive depth: the first focusing on mechanics and waves, the second on thermodynamics, electricity, and magnetism, and the third on optics and modern physics. Each course in the sequence typically spans 10 to 15 weeks per semester and carries 3 to 4 credit hours, including lectures and laboratory components to reinforce theoretical concepts through experimentation. The curriculum stresses mathematical rigor, employing calculus for deriving physical laws, vector analysis for multidimensional problems, and introductory differential equations for dynamic systems, assuming prior familiarity with high school physics and single-variable calculus.

Prerequisites and Course Structure

University physics courses typically require students to have a solid foundation in mathematics and introductory sciences to handle the quantitative rigor of the material. Prerequisite knowledge generally includes single-variable differential and integral calculus, which are essential for deriving and applying physical laws, as well as basic trigonometry for vector analysis and geometric problems. These prerequisites ensure students can engage with calculus-based formulations from the outset, with many institutions allowing concurrent enrollment in calculus if prior completion is not possible. The standard course structure for university physics is organized as a multi-semester sequence designed to build progressively from classical to modern concepts, often spanning the first two years of an undergraduate program. Physics I focuses on classical mechanics and introductory waves, covering topics such as kinematics, dynamics, and oscillatory motion. Physics II addresses electricity, magnetism, and sometimes optics, emphasizing electric and magnetic fields, circuits, and wave propagation. Physics III introduces modern physics, including relativity, quantum mechanics, and atomic and nuclear physics, bridging classical foundations to contemporary theories. Laboratory components are integral, either integrated into lecture courses for hands-on experimentation or offered as separate sessions to develop experimental skills in measurement, data analysis, and error assessment. Variations in course structure exist across institutions to accommodate different student needs and institutional emphases, such as honors tracks that accelerate pacing and incorporate advanced problem sets or research components for high-achieving students. Some programs offer integrated formats combining lectures, labs, and discussions in studio-style classrooms to foster active learning, while others maintain separate courses for greater specialization. These adaptations may also include flexible sequences allowing students to mix tracks based on prior credits, ensuring accessibility without compromising depth.
A core element of university physics course design is the emphasis on problem-solving and conceptual understanding, which are cultivated through a balance of analytical exercises and qualitative reasoning to prepare students for real-world applications. Problem-solving activities require applying mathematical tools to physical scenarios, reinforcing quantitative skills while highlighting the underlying principles. Conceptual understanding is prioritized via targeted assessments and activities that probe intuitive grasp over rote calculation, as evidenced in curricula using tools like ConcepTests to address common misconceptions. This dual focus ensures students not only compute solutions but also interpret physical phenomena, with research showing improved outcomes when conceptual strategies precede quantitative work.

Historical Development

Origins in the 19th Century

The emergence of university physics as a formalized discipline began in early 19th-century Germany, particularly with the founding of the University of Berlin in 1810 under the vision of Wilhelm von Humboldt, which integrated research and teaching in the natural sciences, including physics. This institution quickly became a hub for experimental research, attracting students and faculty who emphasized hands-on experimentation alongside theoretical study, setting a model for modern research universities across Europe. Key figures such as Hermann von Helmholtz, who taught at Königsberg from 1849, advanced physiological physics and energy conservation principles, further solidifying experimental methods in the curriculum. In Britain, the work of Michael Faraday and James Clerk Maxwell profoundly shaped physics education by establishing electromagnetism as a foundational topic. Faraday's discoveries in the 1830s, including electromagnetic induction, provided an experimental basis that influenced university lectures and laboratories, promoting field concepts over action-at-a-distance models. Maxwell's mathematical synthesis in the 1860s, building on Faraday's ideas, unified electricity, magnetism, and optics through equations describing electromagnetic waves, which were incorporated into advanced university courses by the late 19th century. Across the Atlantic, American colleges transitioned from natural philosophy—encompassing broad moral and speculative inquiries—to a more empirical, specialized physics discipline starting in the mid-19th century, driven by growing scientific professionalism and European influences. Early textbooks like Denison Olmsted's An Introduction to Natural Philosophy (1829), used at Yale, compiled authorities on mechanics, electricity, and astronomy for college students, marking a shift toward structured, evidence-based instruction. By mid-century, institutions formalized physics teaching; Yale's physics efforts, rooted in the early 1800s within the Department of Philosophy and the Arts, advanced with the 1847 establishment of a graduate program emphasizing sciences, awarding the first physics doctorate in 1861.
Calculus-based approaches also integrated gradually, with Yale and similar schools offering mathematical treatments of motion and forces by the 1840s, enhancing analytical rigor in curricula.

Expansion in the 20th Century

The advent of Albert Einstein's theory of special relativity in 1905 and the formulation of quantum mechanics in the 1920s fundamentally transformed the landscape of physics, necessitating the integration of modern physics modules into university curricula. These developments revealed the limitations of classical Newtonian mechanics as an approximation valid only at low speeds and macroscopic scales, prompting educators to incorporate topics such as relativistic kinematics, wave-particle duality, and quantization principles by the mid-20th century, while advanced courses aligned with the variational frameworks underpinning both theories. Following World War II, university physics experienced a significant expansion driven by policy initiatives that boosted enrollment and standardized curricula. The Servicemen's Readjustment Act of 1944, commonly known as the G.I. Bill, provided educational benefits to millions of veterans, dramatically increasing college attendance and swelling physics departments across U.S. institutions. This surge was further amplified by the Soviet launch of Sputnik in 1957, which sparked national concerns over scientific competitiveness and led to the National Defense Education Act of 1958; the act allocated substantial federal funding for science education, including low-interest loans and grants that prioritized physics and mathematics sequences. These measures resulted in the standardization of more uniform multi-semester introductory physics courses, often structured around calculus-based treatments of classical and modern topics to meet growing demands for scientifically trained professionals. Influential textbooks played a pivotal role in this modernization, with David Halliday and Robert Resnick's Physics for Students of Science and Engineering (first published in 1960) emerging as a cornerstone. This comprehensive volume introduced a modern pedagogical approach by blending rigorous conceptual explanations with problem-solving exercises, making advanced topics accessible to undergraduates and influencing curricula worldwide for decades.
By the 1980s, curricula began incorporating computational tools to address complex simulations beyond analytical solutions, exemplified by projects like the Microcomputer-Based Laboratory (MBL) and the Maryland University Project in Physics and Educational Technology (M.U.P.P.E.T.), which embedded numerical methods and programming into introductory courses. The expansion of university physics also extended globally, particularly in non-Western contexts during the post-1950s era. In India, following independence in 1947, Prime Minister Jawaharlal Nehru prioritized scientific education through the University Education Commission (1948–1949), which recommended enhancing physics programs; this led to the establishment of the Indian Institutes of Technology (IITs) starting in 1951, where curricula adopted Western models but adapted them to emphasize engineering applications and national development needs. Similarly, in Japan, post-war reconstruction in the 1950s and 1960s involved rapid university expansion to fuel economic recovery, with physics departments incorporating modern curricula influenced by U.S. and European standards while focusing on applied fields like electronics and materials science to support industrial growth. These adaptations ensured that university physics became a key component of higher education in emerging economies, tailored to local technological priorities.

Classical Mechanics

Kinematics

Kinematics is the branch of physics that describes the motion of objects without considering the forces causing that motion. It focuses on the geometric aspects of motion, such as position, velocity, and acceleration, typically analyzed using calculus in university-level treatments. This approach allows for a precise mathematical description applicable to one, two, or three dimensions. The position of an object is represented by the position vector \vec{r}(t), which specifies its location in space as a function of time relative to a chosen origin. The velocity \vec{v}(t) is the time derivative of the position, given by \vec{v}(t) = \frac{d\vec{r}}{dt}, representing the instantaneous rate of change of position and including both magnitude (speed) and direction. Similarly, the acceleration \vec{a}(t) is the time derivative of the velocity, \vec{a}(t) = \frac{d\vec{v}}{dt}, indicating how the velocity changes over time. These definitions form the foundation of kinematic analysis, enabling the prediction of an object's trajectory from initial conditions. In one-dimensional motion, these concepts simplify to scalar quantities along a line, but university physics extends to higher dimensions for more realistic scenarios. For two-dimensional motion, projectile motion exemplifies kinematics where an object follows a parabolic trajectory under constant acceleration in the vertical direction (due to gravity) and uniform motion horizontally. The position components are x(t) = x_0 + v_{0x} t and y(t) = y_0 + v_{0y} t - \frac{1}{2} g t^2, with g \approx 9.8 \, \text{m/s}^2. Uniform circular motion, another two-dimensional case, involves constant speed along a circular path, where the position vector is \vec{r}(t) = R \cos(\omega t) \hat{i} + R \sin(\omega t) \hat{j}, with angular frequency \omega = v / R and centripetal acceleration a = \omega^2 R directed toward the center. Three-dimensional motion combines these, often using vector components in Cartesian coordinates: \vec{r}(t) = x(t) \hat{i} + y(t) \hat{j} + z(t) \hat{k}. Relative motion accounts for observations from different reference frames, which are coordinate systems that may move relative to each other.
In non-relativistic mechanics, if two frames move at constant velocity relative to one another (inertial frames), the velocity of an object P in one frame is \vec{v}_{PA} = \vec{v}_{PB} + \vec{v}_{BA}, where \vec{v}_{BA} is the velocity of frame B relative to frame A. This Galilean velocity addition preserves the form of kinematic equations across inertial frames. For example, a boat's velocity relative to the water plus the current's velocity gives its velocity relative to the ground. Graphical analysis provides visual tools for understanding motion, particularly in one dimension. A displacement-time graph plots position versus time, where the slope at any point equals the instantaneous velocity; a straight line indicates constant velocity, while curvature shows acceleration. The velocity-time graph's slope represents acceleration, and the area under the curve gives displacement. These graphs extend to higher dimensions by plotting components separately, aiding in the analysis of complex trajectories like projectiles.
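The projectile equations above lend themselves to direct numerical evaluation. The following Python sketch (function names are illustrative, not from any standard library) computes the position at time t and the time of flight for a launch from the origin:

```python
import math

def projectile_position(x0, y0, v0, theta_deg, t, g=9.8):
    """Position (x, y) at time t, from x = x0 + v0x t and
    y = y0 + v0y t - (1/2) g t^2."""
    theta = math.radians(theta_deg)
    v0x, v0y = v0 * math.cos(theta), v0 * math.sin(theta)
    return x0 + v0x * t, y0 + v0y * t - 0.5 * g * t**2

def time_of_flight(v0, theta_deg, g=9.8):
    """Time to return to launch height, from solving y(t) = 0 for t > 0."""
    return 2 * v0 * math.sin(math.radians(theta_deg)) / g

# Launch at 20 m/s and 45 degrees from the origin
t_land = time_of_flight(20.0, 45.0)
x_land, y_land = projectile_position(0.0, 0.0, 20.0, 45.0, t_land)
```

For the 45° launch, the computed landing distance agrees with the closed-form range v_0^2 \sin(2\theta)/g, about 40.8 m for v_0 = 20 m/s.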

Dynamics and Newton's Laws

Dynamics in university physics examines the causes of motion through the application of forces, fundamentally described by Isaac Newton's three laws of motion as formulated in his Philosophiæ Naturalis Principia Mathematica. The first law, known as the law of inertia, states that an object at rest remains at rest, and an object in uniform motion continues in a straight line at constant speed unless acted upon by a net external force. This principle establishes the concept of inertial reference frames, where no net force implies no acceleration. The second law quantifies the relationship between force, mass, and acceleration: the net force \vec{F}_{net} on an object equals its mass m times its acceleration \vec{a}, expressed as \vec{F}_{net} = m \vec{a}. The third law asserts that for every action, there is an equal and opposite reaction; that is, if object A exerts a force on object B, then B exerts an equal force in the opposite direction on A. These laws, originally derived from observations of planetary motion and terrestrial experiments, provide the foundation for analyzing mechanical systems in classical physics. To apply Newton's laws, physicists use free-body diagrams (FBDs), which isolate an object and depict all external forces acting on it as vectors, excluding internal forces or those from other objects. In FBDs, forces such as weight (mg, where g is the gravitational acceleration), normal force, tension, and friction are represented, allowing the resolution of components along chosen axes for calculation. Equilibrium occurs when the net force on an object is zero, resulting in no acceleration (\vec{a} = 0), as per the first law; this condition is analyzed by summing force vectors to zero in both horizontal and vertical directions. The SI unit of force is the newton (N), defined as the force required to accelerate a 1 kg mass by 1 m/s², so 1 \, \mathrm{N} = 1 \, \mathrm{kg \cdot m/s^2}. Common applications illustrate these principles.
For an Atwood's machine—two masses connected by a string over a pulley—Newton's second law determines the acceleration a = \frac{(m_1 - m_2)g}{m_1 + m_2} (assuming m_1 > m_2) by drawing FBDs for each mass and equating tensions. On an inclined plane, the component of gravity parallel to the plane (mg \sin \theta) drives motion, balanced or opposed by friction or other forces; for equilibrium, the net force along the plane must be zero. Friction introduces resistive forces: static friction f_s \leq \mu_s N prevents motion until exceeded, where \mu_s is the coefficient of static friction and N is the normal force, while kinetic friction f_k = \mu_k N acts during sliding, with \mu_k < \mu_s typically. These coefficients, empirical values varying by surface (e.g., \mu_s \approx 0.6 for wood on wood), are determined experimentally using Newton's laws. Newton's laws hold rigorously in inertial frames but require modifications, such as fictitious forces, in non-inertial frames like accelerating elevators or rotating systems.
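The Atwood's machine result above can be checked numerically. This Python sketch (function names are illustrative) applies the second law to each mass under the usual idealizations of a massless string and frictionless pulley:

```python
def atwood_acceleration(m1, m2, g=9.8):
    """a = (m1 - m2) g / (m1 + m2) for an ideal Atwood machine;
    positive when m1 descends."""
    return (m1 - m2) * g / (m1 + m2)

def atwood_tension(m1, m2, g=9.8):
    """String tension from the free-body diagram of m2: T = m2 (g + a)."""
    return m2 * (g + atwood_acceleration(m1, m2))

a = atwood_acceleration(3.0, 2.0)   # 1.96 m/s^2
T = atwood_tension(3.0, 2.0)        # 23.52 N
```

A useful consistency check is that the FBD of m_1 gives the same tension, T = m_1 (g - a), confirming the two equations were solved correctly.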

Work, Energy, and Power

In physics, work is defined as the transfer of energy that occurs when a force is applied to an object over a displacement, quantified by the line integral of the force along the path taken. Mathematically, the work W done by a constant force \vec{F} on an object undergoing displacement \vec{d} is given by the scalar (dot) product W = \vec{F} \cdot \vec{d}, which for variable forces generalizes to W = \int \vec{F} \cdot d\vec{r}. This quantity is a scalar, depending only on the component of the force parallel to the displacement, and its SI unit is the joule (J), equivalent to one newton-meter. The work-energy theorem establishes a fundamental link between work and motion, stating that the net work done on a particle by all forces equals the change in its kinetic energy. Kinetic energy KE is the energy associated with an object's motion, expressed as KE = \frac{1}{2} m v^2, where m is the mass and v is the speed. This theorem, derived by integrating Newton's second law along the path of motion, implies that positive net work increases kinetic energy, while negative net work decreases it; for example, pushing a block across a frictionless surface accelerates it, converting the applied work directly into increased KE. For conservative forces—those where the work done is path-independent and can be expressed as the negative gradient of a potential function—potential energy U stores the capacity to do work. In a uniform gravitational field near Earth's surface, the gravitational potential energy is U = mgh, where g is the acceleration due to gravity, h is the height above a reference level, and this form arises from the work done against gravity to elevate the mass m. Similarly, for an ideal spring obeying Hooke's law, the elastic potential energy is U = \frac{1}{2} k x^2, with k as the spring constant and x as the displacement from equilibrium, representing the energy stored by deforming the spring.
These expressions highlight how potential energy quantifies stored work in position-dependent configurations. The principle of conservation of mechanical energy asserts that, in an isolated system acted upon solely by conservative forces, the total mechanical energy—sum of kinetic and potential energies—remains constant, as the work done by conservative forces merely interconverts these forms without dissipation. For instance, a pendulum bob swinging without friction converts gravitational potential energy at its highest point entirely to kinetic energy at the bottom, and vice versa. However, non-conservative forces, such as friction or air resistance, perform path-dependent work that dissipates mechanical energy into thermal forms, violating strict conservation; the work done by these forces equals the change in mechanical energy, necessitating inclusion of additional energy terms for analysis. Power quantifies the rate at which work is done or energy is transferred, defined as P = \frac{dW}{dt}, the time derivative of work. For a constant force, this simplifies to P = \vec{F} \cdot \vec{v}, the dot product of force and velocity, emphasizing that power is maximized when force aligns with motion. The SI unit is the watt (W), or one joule per second, as seen in applications like engines where sustained power output drives continuous energy conversion.
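Conservation of mechanical energy is easy to verify numerically. The following Python sketch (function names are illustrative) converts gravitational potential energy into kinetic energy during a frictionless drop, then stores that kinetic energy in an ideal spring:

```python
def speed_after_drop(m, h, v0=0.0, g=9.8):
    """Speed after a frictionless drop of height h, from
    (1/2) m v0^2 + m g h = (1/2) m v^2 (the mass cancels)."""
    return (v0**2 + 2 * g * h) ** 0.5

def spring_compression(m, v, k):
    """Compression that stops a mass m moving at speed v against an
    ideal spring: (1/2) m v^2 = (1/2) k x^2."""
    return v * (m / k) ** 0.5

v_bottom = speed_after_drop(2.0, 5.0)             # ~9.9 m/s after a 5 m drop
x_stop = spring_compression(2.0, v_bottom, 200.0)  # compression in meters
```

Evaluating \frac{1}{2} k x^2 at the computed compression reproduces the kinetic energy at the bottom of the drop, illustrating the interconversion of energy forms described above.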

Momentum and Collisions

Linear momentum, denoted as \vec{p}, is defined as the product of an object's mass m and its velocity \vec{v}, providing a vector quantity that quantifies the motion of the object in a given direction. This concept arises from Newton's second law, where the net force \vec{F} on an object equals the time rate of change of its linear momentum, \vec{F} = \frac{d\vec{p}}{dt}. Impulse \vec{J} represents the effect of a force acting over a time interval and is given by the integral \vec{J} = \int \vec{F} \, dt, which equals the change in linear momentum \Delta \vec{p}. This impulse-momentum theorem explains how forces alter an object's motion, such as in impacts where short-duration forces produce significant velocity changes. In an isolated system, where no net external forces act, the total linear momentum is conserved, meaning \sum \vec{p}_i = \text{constant} before and after any internal interactions. This principle, derived from the cancellation of internal forces, which occur in equal and opposite pairs per Newton's third law, applies to systems ranging from colliding particles to macroscopic bodies. Collisions between objects illustrate momentum conservation distinctly in elastic and inelastic cases. In elastic collisions, both total momentum and kinetic energy are conserved, allowing objects to rebound with no permanent deformation. Inelastic collisions conserve only momentum, with kinetic energy converted to other forms like heat or sound, often resulting in objects sticking together in perfectly inelastic scenarios. The nature of a collision is quantified by the coefficient of restitution e, defined as the negative ratio of the relative velocity of separation to the relative velocity of approach along the line of impact: e = -\frac{v_{2f} - v_{1f}}{v_{2i} - v_{1i}} for one-dimensional (1D) collisions, where subscripts i and f denote initial and final states.
Here, e = 1 for perfectly elastic collisions, e = 0 for perfectly inelastic, and 0 < e < 1 for partially elastic cases; this extends to two-dimensional (2D) collisions by applying the definition to the normal components of relative velocities. For example, in a 1D head-on collision between two masses, solving the conservation equations with e yields final velocities that match experimental observations in billiard ball impacts or atomic scattering. The center of mass (CM) of a system plays a key role in collision dynamics, as its motion remains uniform in an isolated system, determined solely by external forces via \vec{F}_{\text{net}} = M \frac{d\vec{v}_{\text{CM}}}{dt}, where M is total mass and \vec{v}_{\text{CM}} is CM velocity. During collisions, internal interactions do not affect CM motion, simplifying analysis by transforming to the CM frame where total momentum is zero, and velocities relative to CM determine post-collision outcomes. This approach is particularly useful in 2D collisions, where resolving velocities into components perpendicular and parallel to the contact line aids in applying conservation laws. An important application of momentum conservation in variable-mass systems is the rocket equation, derived for a rocket expelling mass at exhaust velocity v_e relative to itself: m \, dv = -v_e \, dm, where m is instantaneous mass, dv is change in rocket velocity, and dm < 0 for mass loss. Integrating this yields the maximum velocity change \Delta v = v_e \ln(m_0 / m_f), with initial mass m_0 and final mass m_f, assuming no external forces like gravity. This equation, fundamental to rocketry, highlights how high exhaust velocities enable significant speed gains despite mass ejection. In elastic collisions, kinetic energy conservation complements momentum to fully determine outcomes, distinguishing them from inelastic processes.
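The 1D collision and rocket-equation results above can be computed directly. This Python sketch (function names are illustrative) solves momentum conservation together with the restitution definition, and integrates the rocket equation in closed form:

```python
import math

def collide_1d(m1, v1, m2, v2, e=1.0):
    """Final velocities for a 1D collision, from momentum conservation
    combined with e = -(v2f - v1f) / (v2i - v1i)."""
    p = m1 * v1 + m2 * v2                      # total momentum, conserved
    v1f = (p - m2 * e * (v1 - v2)) / (m1 + m2)
    v2f = (p + m1 * e * (v1 - v2)) / (m1 + m2)
    return v1f, v2f

def rocket_delta_v(v_e, m0, mf):
    """Tsiolkovsky rocket equation: delta-v = v_e ln(m0 / mf)."""
    return v_e * math.log(m0 / mf)
```

For equal masses with e = 1 the velocities are exchanged, the familiar billiard-ball result; setting e = 0 reproduces the common final velocity of a perfectly inelastic collision.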

Rotational Motion

Rotational motion describes the dynamics of rigid bodies rotating about a fixed axis, extending the principles of linear kinematics and dynamics to angular quantities. In university physics, this topic builds on linear motion by introducing angular displacement \theta, measured in radians, which quantifies the change in orientation of a body. The angular velocity \vec{\omega} is the time derivative of \theta, given by \vec{\omega} = \frac{d\vec{\theta}}{dt}, representing the rate of rotation in radians per second. Similarly, angular acceleration \vec{\alpha} is the time derivative of \vec{\omega}, \vec{\alpha} = \frac{d\vec{\omega}}{dt}, in radians per second squared. For constant angular acceleration, the kinematic equations parallel those of linear motion: \omega = \omega_0 + \alpha t, \theta = \theta_0 + \omega_0 t + \frac{1}{2} \alpha t^2, and \omega^2 = \omega_0^2 + 2\alpha (\theta - \theta_0). These relations connect angular quantities to their linear counterparts through the radius r from the axis of rotation, such that linear displacement s = r \theta, tangential velocity v = r \omega, and tangential acceleration a_t = r \alpha. This analogy facilitates understanding how rotational motion manifests in physical systems like spinning wheels or orbiting satellites. Torque \vec{\tau} is the rotational equivalent of force, defined as the cross product \vec{\tau} = \vec{r} \times \vec{F}, where \vec{r} is the position vector from the axis to the point of force application and \vec{F} is the applied force, yielding a magnitude \tau = r F \sin \phi with \phi the angle between \vec{r} and \vec{F}. Torque causes changes in rotational motion, much like force alters linear motion. The moment of inertia I, analogous to mass, measures a body's resistance to angular acceleration and is calculated as I = \int r^2 \, dm for a continuous body or I = \sum m_i r_i^2 for discrete particles, where r is the perpendicular distance from the axis. 
For common shapes, such as a uniform disk rotating about its central axis, I = \frac{1}{2} M R^2. Newton's second law for rotation states that the net torque about an axis equals the moment of inertia times the angular acceleration: \sum \vec{\tau} = I \vec{\alpha}. This equation, derived from the linear form \vec{F} = m \vec{a} by considering rotational analogs, applies to rigid bodies undergoing fixed-axis rotation and enables prediction of angular motion under applied torques. In vector form, the directions align along the axis of rotation. Angular momentum \vec{L} for a rigid body is \vec{L} = I \vec{\omega}, representing the rotational analog of linear momentum \vec{p} = m \vec{v}. The time rate of change of angular momentum equals the net torque: \vec{\tau} = \frac{d\vec{L}}{dt}. Consequently, if no external torque acts on a system (\sum \vec{\tau} = 0), angular momentum is conserved, a principle central to analyzing isolated rotating systems like figure skaters pulling in their arms to increase spin rate. This conservation law stems directly from the rotational form of Newton's second law and holds for both fixed and varying axes under appropriate conditions. Rolling motion combines translation and rotation without slipping, where the point of contact with the surface is instantaneously at rest. For a rolling object, the linear velocity of the center of mass relates to the angular velocity by v = r \omega, and the linear acceleration by a = r \alpha, ensuring consistency between the two types of motion. This condition allows application of both translational and rotational dynamics, such as using \sum F = m a alongside \sum \tau = I \alpha, to solve for acceleration down an incline, where friction provides the necessary torque.
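Combining \sum F = m a with \sum \tau = I \alpha for rolling without slipping gives a compact closed form. Writing the moment of inertia as I_{cm} = c M R^2, eliminating the friction force yields a = g \sin\theta / (1 + c), sketched below in Python (names are illustrative):

```python
import math

# Moment-of-inertia coefficients c in I_cm = c M R^2 for common shapes
SOLID_SPHERE, SOLID_DISK, HOOP = 2.0 / 5.0, 1.0 / 2.0, 1.0

def rolling_acceleration(theta_deg, c, g=9.8):
    """Acceleration down an incline for rolling without slipping.
    Combining M a = M g sin(theta) - f with f R = (c M R^2)(a / R)
    gives a = g sin(theta) / (1 + c)."""
    return g * math.sin(math.radians(theta_deg)) / (1 + c)
```

Because a smaller c means less rotational inertia per unit mass, a solid sphere rolls down faster than a disk, which in turn beats a hoop, independent of mass and radius.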

Gravitation

In university physics, gravitation is described by Isaac Newton's law of universal gravitation, which states that every particle in the universe attracts every other particle with a force directly proportional to the product of their masses and inversely proportional to the square of the distance between their centers. The modern form of this law, incorporating the gravitational constant G, gives the magnitude of this force F between two masses m_1 and m_2 separated by distance r as F = G \frac{m_1 m_2}{r^2}, where G = 6.67430 \times 10^{-11} \, \mathrm{m^3 \, kg^{-1} \, s^{-2}} (CODATA 2022 value). This inverse-square law unifies terrestrial gravity, such as the fall of objects on Earth, with celestial mechanics, explaining planetary and lunar motions under the same principle. The gravitational field \mathbf{g} due to a point mass M, such as a planet, is the force per unit mass experienced by a test mass m at distance r, expressed as g = \frac{GM}{r^2}. For Earth, with mass M_E \approx 5.972 \times 10^{24} \, \mathrm{kg} and mean radius r \approx 6.371 \times 10^6 \, \mathrm{m}, this yields a surface field strength of approximately 9.81 \, \mathrm{m/s^2}. The field points toward the mass's center and decreases with distance, enabling calculations for satellites or falling bodies. The gravitational potential energy U between two masses M and m is derived from the work done against the gravitational force, given by U = -G \frac{Mm}{r}, with the negative sign indicating an attractive interaction and zero potential at infinite separation. This scalar potential simplifies the analysis of conservative gravitational forces in systems like binary stars or planetary escapes. The value of the gravitational constant G was first experimentally determined by Henry Cavendish in 1797–1798 using a torsion balance apparatus consisting of a horizontal rod suspended by a wire, with small lead spheres at each end attracted to larger fixed spheres, causing measurable deflection.
Cavendish's measurements yielded G \approx 6.74 \times 10^{-11} \, \mathrm{m^3 \, kg^{-1} \, s^{-2}} (modern CODATA 2022 value: 6.67430 \times 10^{-11} \, \mathrm{m^3 \, kg^{-1} \, s^{-2}}), allowing the computation of Earth's density from its gravitational field. Newton's law provides the dynamical foundation for orbital motion, reproducing Johannes Kepler's empirical laws derived from Tycho Brahe's observations. Kepler's first law states that planets move in elliptical orbits with the Sun at one focus. The second law asserts that a line joining a planet to the Sun sweeps out equal areas in equal times, implying constant angular momentum. For circular orbits, a special case of the ellipse, the orbital speed v satisfies v = \sqrt{GM/r}, where M is the central mass. Kepler's third law relates the orbital period T to the semi-major axis a: T^2 \propto a^3, or precisely T^2 = \frac{4\pi^2}{GM} a^3 under Newton's theory. This holds for elliptical orbits, with the period depending on the semi-major axis, as seen in Earth's 365-day orbit at a \approx 1 AU. In orbital contexts, weightlessness arises because a satellite and everything inside it fall freely with the same acceleration, so no contact forces act between them, producing apparent zero gravity. The escape velocity v_{esc}, the minimum speed needed for an object to escape a body's gravitational pull to infinity without further propulsion, is v_{esc} = \sqrt{\frac{2GM}{r}}. For Earth at the surface, this is about 11.2 km/s. Energy conservation governs such motions, equating initial kinetic energy to the magnitude of potential energy for escape.
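The escape-velocity and Kepler's-third-law formulas above can be evaluated with the constants quoted in this section; the Python sketch below (names are illustrative) does so for Earth:

```python
import math

G = 6.67430e-11        # m^3 kg^-1 s^-2 (CODATA 2022)
M_EARTH = 5.972e24     # kg
R_EARTH = 6.371e6      # m

def escape_velocity(M, r):
    """v_esc = sqrt(2 G M / r)."""
    return math.sqrt(2 * G * M / r)

def orbital_period(M, a):
    """Kepler's third law in Newtonian form: T = 2 pi sqrt(a^3 / (G M))."""
    return 2 * math.pi * math.sqrt(a**3 / (G * M))

v_esc = escape_velocity(M_EARTH, R_EARTH)   # ~11.2 km/s, as quoted above
```

Evaluating the period for a circular orbit 400 km above the surface gives roughly 92 minutes, consistent with low-Earth-orbit satellites.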

Fluid Statics and Dynamics

Fluid mechanics in university physics examines the behavior of liquids and gases at rest (statics) and in motion (dynamics), applying concepts from pressure, density, and forces to systems like hydrostatics and flow. Fluids are substances that deform continuously under shear stress, unlike solids, and are characterized by density \rho = m/V, where m is mass and V is volume. In fluid statics, pressure P is the force per unit area, isotropic in all directions at a point. Hydrostatic pressure increases with depth in a fluid at rest: P = P_0 + \rho g h, where P_0 is surface pressure, \rho is density, g is gravity, and h is depth. This leads to Pascal's principle: a pressure change in an enclosed fluid is transmitted undiminished throughout, enabling hydraulic systems like lifts. Buoyancy, described by Archimedes' principle, states that the upward buoyant force on an immersed object equals the weight of the displaced fluid: F_b = \rho_\mathrm{fluid} V_\mathrm{displaced} g. Objects float if their density is less than the fluid's, sink if greater, with applications in ship design and density measurements. Fluid dynamics addresses moving fluids, assuming incompressibility for liquids. The continuity equation for steady flow conserves mass: A_1 v_1 = A_2 v_2, where A is cross-sectional area and v is speed, implying narrowing pipes increase velocity. Bernoulli's principle, for ideal (inviscid, incompressible) flow along a streamline, relates pressure, kinetic energy, and potential: P + \frac{1}{2} \rho v^2 + \rho g h = \text{constant}. This explains phenomena like airplane lift (faster flow over wing reduces pressure) and Venturi meters. Real fluids exhibit viscosity, introducing drag, but ideal models provide foundational analysis for pipes, blood flow, and aerodynamics.
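The hydrostatic, continuity, and Bernoulli relations above translate directly into code. This Python sketch (function names and the default values for water at 1 atm are illustrative assumptions) models pressure at depth and flow through a narrowing pipe:

```python
def hydrostatic_pressure(depth, p0=101325.0, rho=1000.0, g=9.8):
    """P = P0 + rho g h for a fluid at rest (defaults: water under 1 atm)."""
    return p0 + rho * g * depth

def continuity_speed(v1, a1, a2):
    """A1 v1 = A2 v2 for steady incompressible flow; returns v2."""
    return v1 * a1 / a2

def bernoulli_pressure_drop(rho, v1, v2):
    """P1 - P2 = (1/2) rho (v2^2 - v1^2) along a horizontal streamline."""
    return 0.5 * rho * (v2**2 - v1**2)
```

For water entering a Venturi constriction at 2 m/s, halving the cross-sectional area doubles the speed, and the corresponding Bernoulli pressure drop is 6 kPa, the effect a Venturi meter measures.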

Thermodynamics and Kinetic Theory

Temperature, Heat, and the Zeroth Law

Temperature is a measure of the average kinetic energy of the particles in a system, providing a quantitative indication of the system's thermal state. The Celsius scale defines the freezing point of water at 0 °C and the boiling point at 100 °C under standard atmospheric pressure. The Kelvin scale, the SI unit for temperature, sets absolute zero at 0 K, equivalent to -273.15 °C, where molecular motion theoretically ceases. To convert between scales, add 273.15 to a Celsius temperature to obtain Kelvin. The zeroth law of thermodynamics establishes the concept of thermal equilibrium, stating that if two systems are each in thermal equilibrium with a third system, then they are in thermal equilibrium with each other. This transitive property allows for the reliable measurement of temperature using thermometers, which reach thermal equilibrium with the system being measured. Thermometers operate on principles such as the expansion of liquids like mercury or changes in electrical resistance, calibrated against fixed points like ice and steam. Heat is the transfer of thermal energy between systems due to a temperature difference, occurring through three primary mechanisms: conduction, convection, and radiation. Conduction involves the direct transfer of energy through molecular collisions in solids or stationary fluids, such as heat flowing through a metal rod from hot to cold end. Convection requires fluid motion to carry heat, as seen in boiling water where hotter fluid rises and cooler fluid sinks. Radiation transmits energy via electromagnetic waves, independent of matter, with all objects above absolute zero emitting thermal radiation. For blackbodies, the total radiated power follows the Stefan-Boltzmann law: P = \sigma A T^4, where \sigma = 5.67 \times 10^{-8} \, \mathrm{W/(m^2 \, K^4)} is the Stefan-Boltzmann constant, A is the surface area, and T is the absolute temperature in kelvins.
Specific heat capacity quantifies the heat required to change the temperature of a unit mass of a substance by one degree, given by Q = m c \Delta T, where Q is heat transferred, m is mass, c is specific heat capacity, and \Delta T is temperature change. For water, c = 4186 J/kg·K, making it an effective thermal reservoir. Calorimetry measures heat exchange by observing temperature changes in isolated systems, often using the principle that heat lost by one object equals heat gained by another, as in coffee-cup calorimeters for specific heat determination. Most materials expand when heated due to increased atomic vibrations, with linear thermal expansion for solids described by \Delta L = \alpha L \Delta T, where \alpha is the coefficient of linear expansion, L is original length, and \Delta T is temperature change. For steel, \alpha \approx 12 \times 10^{-6} K⁻¹, leading to measurable effects like gaps in bridges to accommodate expansion. This phenomenon underpins the kinetic theory view that temperature reflects average molecular kinetic energy, influencing macroscopic properties like expansion.
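The calorimetry and thermal-expansion formulas above can be evaluated directly; the masses, temperature changes, and beam length below are illustrative assumptions, while the specific heat of water and the expansion coefficient of steel are the values quoted in the text.

```python
# Heat to warm water: Q = m c dT
m = 0.5          # mass of water, kg (illustrative)
c = 4186.0       # specific heat of water, J/(kg K)
dT = 30.0        # temperature rise, K (illustrative)
Q = m * c * dT   # joules required

# Linear expansion of a steel bridge span: dL = alpha L dT
alpha = 12e-6    # coefficient of linear expansion for steel, 1/K
L = 100.0        # span length, m (illustrative)
dT_span = 40.0   # seasonal temperature swing, K (illustrative)
dL = alpha * L * dT_span   # expansion of a few centimeters
```

A 4.8 cm expansion over a 100 m span is exactly the scale that expansion gaps in bridges are designed to absorb.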

First Law of Thermodynamics

The first law of thermodynamics states that the change in the internal energy of a closed system equals the heat added to the system minus the work done by the system, expressed as \Delta U = Q - W. This principle embodies the conservation of energy applied to thermal processes, recognizing heat and work as interchangeable forms of energy transfer. The formulation emerged from experimental work by James Prescott Joule in the 1840s, who demonstrated the mechanical equivalent of heat through precise paddle-wheel measurements, showing that a fixed amount of work always produces the same quantity of heat. Rudolf Clausius later formalized it in 1850 as a general law, integrating it with the concept of internal energy to describe energy balances in thermodynamic systems. Internal energy U represents the total microscopic energy of a system, including molecular kinetic and potential energies, and is a state function depending only on the system's current state, not its history. For an ideal gas, U depends solely on temperature, so \Delta U = n C_v \Delta T, where C_v is the molar heat capacity at constant volume, the quantity relating heat input to temperature change under fixed-volume conditions. In the first law, Q > 0 indicates heat absorbed by the system, while W > 0 denotes work performed by the system on its surroundings; the opposite signs apply for heat rejected or work done on the system. In gaseous systems, work arises primarily from volume changes against external pressure, calculated as W = \int P \, dV, where the integral represents the area under the curve on a pressure-volume (PV) diagram. This expression quantifies the expansion or compression work in reversible processes, assuming quasi-static conditions where pressure equilibrates throughout. For irreversible processes, such as sudden expansions, the work is instead W = P_{\text{ext}} \Delta V, using the constant external pressure.
Thermodynamic processes are classified by constraints on state variables, each illustrated on PV diagrams to visualize paths and compute energy changes via the first law. An isobaric process occurs at constant pressure, appearing as a horizontal line on a PV diagram; here, W = P \Delta V, and \Delta U = Q - P \Delta V, so the heat input is Q = n C_p \Delta T, where C_p = C_v + R is the molar heat capacity at constant pressure. An isochoric process maintains constant volume (\Delta V = 0), yielding a vertical line on the diagram with W = 0, thus \Delta U = Q = n C_v \Delta T. An isothermal process holds temperature constant, resulting in \Delta U = 0 for an ideal gas, so Q = W = nRT \ln(V_f / V_i) for reversible expansion, tracing a hyperbolic curve on the PV diagram since PV = \text{constant}. An adiabatic process involves no heat exchange (Q = 0), leading to \Delta U = -W; for an ideal gas, it follows PV^\gamma = \text{constant}, where \gamma = C_p / C_v > 1, producing a curve steeper than the isothermal on the PV diagram, with temperature changing as work is done. Heat engines convert thermal energy into mechanical work by cycling through processes like those above, with net work W_{\text{net}} = Q_h - Q_c from the first law applied over a closed loop, where Q_h is the heat absorbed from a hot reservoir and Q_c is the heat rejected to a cold reservoir. The efficiency \eta = W_{\text{net}} / Q_h = 1 - Q_c / Q_h measures performance, limited by the reservoir temperature ratio in reversible cycles. The Carnot cycle, proposed by Sadi Carnot in 1824, exemplifies this as an ideal reversible engine comprising two isothermal and two adiabatic processes, achieving the maximum efficiency for given reservoir temperatures without violating the second law. Applications of the first law include analyzing gas expansions, such as reversible isothermal expansion, in which all heat input becomes work and the internal energy stays constant, or adiabatic free expansion into vacuum, in which W = 0 and Q = 0, so \Delta U = 0 and the temperature remains unchanged despite the volume increase. These examples illustrate how the first law governs energy redistribution in thermodynamic systems, underpinning engine design and energy analysis.
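A short numeric sketch of the process relations above for one mole of a monatomic ideal gas; the temperatures and the volume-doubling are illustrative assumptions.

```python
import math

R = 8.314          # gas constant, J/(mol K)
n, T = 1.0, 300.0  # one mole at 300 K (illustrative)

# Reversible isothermal expansion (volume doubles): Q = W = nRT ln(Vf/Vi)
W_iso = n * R * T * math.log(2.0)

# Isochoric heating by 50 K: W = 0, so dU = Q = n Cv dT (monatomic Cv = 3R/2)
Cv = 1.5 * R
dU_isochoric = n * Cv * 50.0

# Carnot limit between 500 K and 300 K reservoirs: eta = 1 - Tc/Th
Th, Tc = 500.0, 300.0
eta_max = 1 - Tc / Th   # 40% at best, regardless of working substance
```

Any real engine operating between these reservoirs must have efficiency below eta_max, which is the content of Carnot's theorem.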

Second Law and Entropy

The second law of thermodynamics introduces the concept of irreversibility, dictating the direction of natural processes and explaining why certain phenomena, such as heat flow from hot to cold bodies, occur spontaneously while the reverse does not. This law prohibits perpetual motion machines of the second kind, which would convert heat entirely into work without other effects. Two classical statements encapsulate this principle: the Clausius statement, which asserts that heat cannot spontaneously transfer from a colder body to a hotter one without external work, and the Kelvin-Planck statement, which states that it is impossible for a cyclic process to extract heat from a single reservoir and convert it completely into work without rejecting heat to a colder reservoir. These formulations, originally proposed by Rudolf Clausius in 1854 and William Thomson (Lord Kelvin) in 1851, respectively, highlight the inherent limitations on energy transformations in thermodynamic systems. Central to the second law is the concept of entropy, a state function that quantifies the degree of disorder, or the unavailability of energy for work, in a system. For a reversible process, the infinitesimal change in entropy dS is defined as dS = \frac{\delta Q_{\text{rev}}}{T}, where \delta Q_{\text{rev}} is the reversible heat transfer and T is the absolute temperature in kelvins; this relation was introduced by Clausius in 1865 as a measure of the transformation potential of heat. In an isolated system, the second law implies that entropy either remains constant for reversible processes or increases for irreversible ones, such that \Delta S \geq 0 for the universe, ensuring the directionality of thermodynamic processes. For an ideal gas undergoing reversible isothermal expansion, the entropy change is \Delta S = nR \ln \left( \frac{V_f}{V_i} \right), where n is the number of moles, R is the gas constant, and V_f, V_i are the final and initial volumes, respectively; this illustrates how expansion increases the number of accessible states and thus the entropy.
Reversible processes, idealized as quasi-static with infinitesimal driving forces, maintain constant entropy in adiabatic cases, while irreversible processes, involving finite gradients like sudden expansions or friction, generate additional entropy, driving systems toward equilibrium. From a statistical mechanics perspective, entropy arises from the multiplicity \Omega, the number of microscopic configurations consistent with a macroscopic state, as formulated by Ludwig Boltzmann in 1877: S = k \ln \Omega, where k is Boltzmann's constant; higher multiplicity corresponds to greater disorder and probability, explaining why isolated systems evolve toward states of maximum entropy. This interpretation reconciles the macroscopic second law with microscopic dynamics, as the overwhelming probability of entropy-increasing configurations enforces the observed directionality. In heat engines, which convert heat to mechanical work via cyclic processes between hot (T_h) and cold (T_c) reservoirs, the second law sets the maximum efficiency for a reversible Carnot cycle as \eta = 1 - \frac{T_c}{T_h}, derived by Sadi Carnot in 1824; real engines achieve lower efficiencies due to irreversibilities, underscoring entropy's role in limiting useful work extraction. Thus, the second law and entropy provide a unified framework for understanding the inevitability of disorder and the constraints on energy utilization in physical processes.
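The thermodynamic and statistical definitions of entropy give the same answer for the isothermal volume-doubling discussed above, which the sketch below verifies numerically (the mole count is an illustrative assumption).

```python
import math

R = 8.314            # gas constant, J/(mol K)
kB = 1.380649e-23    # Boltzmann constant, J/K
N_A = 6.02214076e23  # Avogadro constant, 1/mol

# Thermodynamic route: dS = nR ln(Vf/Vi) for reversible isothermal expansion
n = 2.0              # moles (illustrative)
dS_thermo = n * R * math.log(2.0)

# Statistical route: doubling the volume available to each of the n*N_A
# molecules multiplies the multiplicity Omega by 2^(n*N_A), so
# dS = k ln(2^(n*N_A)) = n*N_A * k * ln 2
dS_stat = n * N_A * kB * math.log(2.0)
```

The two results agree because k N_A = R, tying Boltzmann's microscopic formula to the macroscopic Clausius definition.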

Kinetic Theory of Gases

The kinetic theory of gases provides a microscopic model for understanding the macroscopic behavior of gases by treating them as collections of particles in constant random motion. Developed in the 19th century, it posits that gas pressure arises from collisions of these particles with container walls, while volume and temperature relate to particle spacing and average kinetic energy, respectively. This framework explains the empirical gas laws without invoking forces between particles under ideal conditions. Central to the theory are several key assumptions for an ideal gas: molecules are point particles with negligible volume compared to the container; there are no attractive or repulsive forces between molecules except during instantaneous, elastic collisions; molecules move in straight lines at constant speeds between collisions; and the number of molecules is large, allowing statistical averaging. These assumptions lead to the ideal gas law, PV = nRT, where P is pressure, V is volume, n is the number of moles, R is the gas constant, and T is temperature in kelvins. The law emerges from equating macroscopic observations to microscopic momentum transfer during wall collisions. The speeds of gas molecules follow the Maxwell-Boltzmann distribution, which describes the probability distribution of molecular speeds in an ideal gas at thermal equilibrium. Derived by assuming random elastic collisions, the distribution is given by f(v) = 4\pi v^2 \left( \frac{m}{2\pi kT} \right)^{3/2} \exp\left( -\frac{m v^2}{2kT} \right), where v is speed, m is the molecular mass, k is Boltzmann's constant, and T is temperature. This yields the root-mean-square speed v_{rms} = \sqrt{\frac{3kT}{m}}, the square root of the mean of the squared speeds, which characterizes the gas's thermal motion. For example, at a given temperature, hydrogen molecules have a much higher v_{rms} than oxygen molecules due to their lower mass. Gas pressure derives from the momentum imparted by molecules colliding with the container walls. Consider a cubic container of side length L; a molecule of mass m and velocity component v_x perpendicular to a wall rebounds with -v_x, changing its momentum by 2mv_x. The number of such collisions per unit time on one wall is \frac{1}{2} N \frac{v_x}{L}, where N is the total number of molecules.
Averaging over all directions and using isotropy, the pressure is P = \frac{1}{3} \rho v_{rms}^2, with \rho = \frac{Nm}{V} as the mass density. This connects pressure directly to the average translational kinetic energy. The internal energy U of an ideal gas arises solely from molecular kinetic energy, governed by the equipartition theorem, which states that each quadratic degree of freedom contributes \frac{1}{2} kT per molecule at temperature T. For a monatomic gas, there are three translational degrees of freedom, yielding U = \frac{3}{2} nRT. Diatomic gases add two rotational degrees, increasing this to U = \frac{5}{2} nRT at room temperature, though vibrational modes contribute at higher temperatures. This explains heat capacities and specific heats in gases. The mean free path \lambda is the average distance a molecule travels between collisions, approximated as \lambda = \frac{1}{\sqrt{2} \pi d^2 n}, where d is the molecular diameter and n = \frac{N}{V} is the number density. Shorter paths occur in denser or larger-molecule gases, limiting straight-line motion. This concept underlies diffusion, where net particle transport arises from random walks across concentration gradients, with diffusion coefficient D \approx \frac{1}{3} v_{rms} \lambda, explaining phenomena like gas mixing without bulk flow.
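The rms-speed and mean-free-path formulas can be evaluated for nitrogen at room conditions; the molecular diameter used below is a typical textbook value and, like the choice of gas, is an assumption for the example.

```python
import math

kB = 1.380649e-23       # Boltzmann constant, J/K
T = 300.0               # temperature, K
m_N2 = 28.0 * 1.6605e-27  # N2 molecular mass, kg (approximate)

# Root-mean-square speed: v_rms = sqrt(3kT/m), roughly 500 m/s for N2
v_rms = math.sqrt(3 * kB * T / m_N2)

# Mean free path at atmospheric pressure: lambda = 1/(sqrt(2) pi d^2 n)
d = 3.7e-10             # effective N2 diameter, m (typical assumed value)
P = 101325.0            # pressure, Pa
n_density = P / (kB * T)  # number density from the ideal gas law
mfp = 1 / (math.sqrt(2) * math.pi * d**2 * n_density)
```

The result, tens of nanometers, shows why a molecule at atmospheric pressure collides billions of times per second despite moving at hundreds of meters per second.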

Oscillations and Waves

Simple Harmonic Motion

Simple harmonic motion (SHM) describes the oscillatory behavior of a system where a restoring force acts proportionally to the displacement from an equilibrium position, directed opposite to the displacement. This results in periodic motion that is sinusoidal in time, governed by the differential equation \frac{d^2x}{dt^2} + \omega^2 x = 0, where \omega is the angular frequency. The general solution for the position as a function of time is x(t) = A \cos(\omega t + \phi), where A is the amplitude (maximum displacement), \omega determines the period T = 2\pi / \omega, and \phi is the phase constant set by the initial conditions. A classic example is the mass-spring system, where Hooke's law provides the restoring force F = -kx with spring constant k, leading to \omega = \sqrt{k/m} for mass m. This motion repeats indefinitely in the ideal, frictionless case, with velocity v(t) = -A \omega \sin(\omega t + \phi) and acceleration a(t) = -A \omega^2 \cos(\omega t + \phi). In SHM, the total mechanical energy remains constant and equals \frac{1}{2} k A^2, the maximum potential energy at maximum displacement. This energy partitions between kinetic energy \frac{1}{2} m v^2 and elastic potential energy \frac{1}{2} k x^2, with the sum always equaling the total energy. At the equilibrium position (x = 0), all energy is kinetic, reaching a maximum \frac{1}{2} m (A \omega)^2 = \frac{1}{2} k A^2; at the amplitude (x = \pm A), it is entirely potential.
This conservation arises from the system's conservative nature, allowing prediction of the motion solely from energy considerations without solving the full dynamics. The simple pendulum approximates SHM for small angular displacements, where the restoring torque from gravity yields a period T = 2\pi \sqrt{\frac{L}{g}}, independent of mass or amplitude, with L as the pendulum length and g as the gravitational acceleration. This formula emerges from the small-angle approximation \sin \theta \approx \theta, linearizing the nonlinear pendulum equation \frac{d^2 \theta}{dt^2} + \frac{g}{L} \sin \theta = 0. Galileo first observed the isochronous property of pendulums in the early 17th century, later formalized by Huygens in 1673 for clock applications. Deviations occur for larger angles, where the period increases slightly. Real systems experience damping due to friction or air resistance, modifying the equation to m \frac{d^2 x}{dt^2} + b \frac{dx}{dt} + k x = 0, with b as the damping coefficient. Solutions decay exponentially, with underdamping (b < 2\sqrt{km}) producing oscillations that diminish over time, critical damping (b = 2\sqrt{km}) returning to equilibrium fastest without oscillation, and overdamping (b > 2\sqrt{km}) yielding a slow, non-oscillatory approach. The damped angular frequency is \omega_d = \sqrt{\omega_0^2 - (b/2m)^2} for underdamped cases, where \omega_0 = \sqrt{k/m} is the natural frequency. Externally driven oscillators follow m \frac{d^2 x}{dt^2} + b \frac{dx}{dt} + k x = F_0 \cos(\omega_{\text{drive}} t), where F_0 is the driving-force amplitude and \omega_{\text{drive}} the driving frequency. In resonance, the amplitude peaks at a driving frequency near \sqrt{\omega_0^2 - 2 (b/2m)^2} for low damping, maximizing energy transfer from the driver. This underlies applications like RLC circuits. In phase space, SHM trajectories are closed ellipses in the position-velocity plane (x vs. v), with area 2\pi E / \omega proportional to the total energy, providing a geometric tool for analysis.
For two perpendicular SHMs with commensurate frequencies, the trajectory forms Lissajous figures, closed curves whose shapes depend on the frequency ratio and phase difference, such as figure-eights for 1:2 ratios. These patterns visualize coupled oscillations, observable on oscilloscopes or in computer plots.
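The damping classification above is a simple inequality on b versus 2\sqrt{km}; the sketch below encodes it directly, using illustrative spring parameters.

```python
import math

# Classify the damping regime of m x'' + b x' + k x = 0
def damping_regime(m, b, k):
    crit = 2 * math.sqrt(k * m)   # critical damping coefficient
    if b < crit:
        return "underdamped"
    if b == crit:
        return "critically damped"
    return "overdamped"

m, k = 0.5, 200.0             # mass (kg) and spring constant (N/m), illustrative
omega0 = math.sqrt(k / m)     # natural angular frequency, 20 rad/s
b = 4.0                       # damping coefficient, kg/s (illustrative)
regime = damping_regime(m, b, k)            # b < 2 sqrt(km) = 20, so underdamped
omega_d = math.sqrt(omega0**2 - (b / (2 * m))**2)  # damped frequency < omega0
```

Note that the damped frequency is only slightly below \omega_0 here, which is typical of lightly damped oscillators such as tuning forks.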

Mechanical Waves

Mechanical waves are disturbances that propagate through an elastic medium, transferring energy from one location to another without any net transport of the medium itself. These waves require a medium, such as a solid, liquid, or gas, to travel, distinguishing them from electromagnetic waves, which can propagate through vacuum. In university physics, mechanical waves are analyzed through their characteristics, governed by the interplay of the medium's elastic and inertial properties. Mechanical waves are classified into two primary types based on the direction of particle displacement relative to the wave propagation direction: transverse and longitudinal. In transverse waves, the particles of the medium oscillate perpendicular to the direction of wave travel, producing crests and troughs. Examples include waves on a stretched string, where the string vibrates up and down while the disturbance moves horizontally. In contrast, longitudinal waves involve particle motion parallel to the propagation direction, resulting in regions of compression and rarefaction. These occur in solids, liquids, and gases, such as the pressure variations of sound or the pulses traveling along a coiled spring. Polarization refers to the orientation of the oscillation in transverse mechanical waves, which can be linearly polarized if the displacements occur in a single plane or circularly polarized if they rotate in a plane perpendicular to the propagation direction. Longitudinal waves, however, do not exhibit polarization in the same manner because their particle motion is along the propagation axis, limiting directional variability. This distinction arises from the vector nature of transverse displacements, which can be restricted by the medium or external constraints, whereas longitudinal oscillations are inherently confined to one axis. The behavior of mechanical waves is often modeled using sinusoidal functions, which emerge as solutions to the wave equation and connect to simple harmonic motion (SHM) as the underlying oscillatory pattern for small amplitudes.
The one-dimensional wave equation, derived from Newton's second law applied to elements of the medium, describes transverse waves on a string as \frac{\partial^2 y}{\partial t^2} = v^2 \frac{\partial^2 y}{\partial x^2}, where y(x,t) is the transverse displacement and v is the wave speed. For a uniform string under tension T with linear mass density \mu, the speed is given by v = \sqrt{T / \mu}, obtained by balancing the net transverse force from tension components acting on a small string element. This formula shows that wave speed increases with tension and decreases with mass density, reflecting the medium's elastic and inertial properties. Longitudinal waves in a fluid medium follow a similar wave equation, with speed v = \sqrt{B / \rho}, where B is the bulk modulus and \rho is the density, but the focus here is on transverse waves in strings. Key parameters characterize periodic mechanical waves: the wavelength \lambda, the distance between consecutive crests (or troughs) in a transverse wave or between compressions in a longitudinal wave; the period T, the time for one complete oscillation; and the frequency f = 1/T, the number of cycles per second in hertz. These relate through the fundamental wave speed equation v = f \lambda, which holds for both transverse and longitudinal waves, indicating that speed is the product of frequency and wavelength. For a given medium, increasing the frequency shortens the wavelength to maintain constant v. The superposition principle governs interactions between mechanical waves, stating that when two or more waves overlap in a linear medium, the resultant displacement is the vector sum of the individual displacements. This linearity assumes small amplitudes, for which the medium's response is proportional to the disturbance. For waves of the same frequency traveling in opposite directions, such as on a string reflected at a boundary, superposition produces standing waves. In a standing wave, specific points called nodes remain stationary at zero displacement, while antinodes exhibit maximum amplitude. Standing waves form harmonics on bounded media like strings or pipes. For a string fixed at both ends, the wavelengths of allowed modes satisfy \lambda_n = 2L / n, where L is the string length and n = 1, 2, 3, \ldots is the harmonic number, with nodes at the ends and antinodes in between.
The fundamental mode (n=1) has one antinode at the center, and the higher harmonics are integer multiples of the fundamental frequency f_1 = v / (2L). In pipes, boundary conditions differ: an open pipe (pressure nodes at both ends) supports harmonics with \lambda_n = 2L / n, similar to the string, while a closed pipe (pressure antinode at the closed end, pressure node at the open end) has odd harmonics only, \lambda_n = 4L / (2n-1). These modes arise from constructive interference at antinodes and destructive interference at nodes. Mechanical waves transport energy through the medium, quantified by the intensity I, the average power per unit cross-sectional area perpendicular to the propagation direction. For sinusoidal waves, I is proportional to the square of the amplitude A^2 and to the wave speed and frequency, with the energy including both kinetic (from particle motion) and potential (from medium deformation) contributions. In a transverse wave on a string, the average power transmitted is P = \frac{1}{2} \mu v \omega^2 A^2, where \omega = 2\pi f, illustrating how energy flows at rate I = P / A_{\text{eff}} without net mass transport. This propagation underscores mechanical waves' role in phenomena like energy transfer in structures.
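The string-harmonic relations above reduce to a few arithmetic steps; the tension, mass density, and length below are illustrative assumptions for a short laboratory string.

```python
import math

# Standing waves on a string fixed at both ends: f_n = n v / (2L)
T_tension = 100.0   # string tension, N (illustrative)
mu = 0.01           # linear mass density, kg/m (illustrative)
L = 0.5             # string length, m (illustrative)

v = math.sqrt(T_tension / mu)                   # wave speed from v = sqrt(T/mu)
harmonics = [n * v / (2 * L) for n in (1, 2, 3)]   # f_1, f_2, f_3 in Hz
wavelengths = [2 * L / n for n in (1, 2, 3)]       # lambda_n = 2L/n in meters
```

Doubling the tension raises every harmonic by a factor of \sqrt{2}, which is how string instruments are tuned.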

Sound and Doppler Effect

Sound is a longitudinal mechanical wave that propagates through elastic media such as air, characterized by compressions and rarefactions of the medium. In air, sound waves travel at speeds of typically around 343 m/s at room temperature (20 °C), depending on factors like temperature and humidity. The speed of sound v in an ideal gas is given by v = \sqrt{\frac{\gamma P}{\rho}}, where \gamma is the adiabatic index (ratio of specific heats), P is the pressure, and \rho is the density; equivalently, for fluids, it can be expressed as v = \sqrt{\frac{B}{\rho}}, with B being the bulk modulus measuring the medium's resistance to compression. The intensity I of a sound wave represents the power per unit area carried by the wave, and the perceived loudness is quantified using the sound level in decibels (dB), defined as \beta = 10 \log\left(\frac{I}{I_0}\right), where I_0 = 10^{-12} \, \mathrm{W/m^2} is the reference intensity corresponding to the threshold of human hearing at 1 kHz. This logarithmic scale accommodates the vast range of human auditory sensitivity, spanning from the faintest detectable sounds at 0 dB to painful levels exceeding 120 dB, such as those from jet engines. Human hearing is limited to frequencies between approximately 20 Hz and 20 kHz, with greatest sensitivity around 2–5 kHz; sensitivity decreases at the extremes, making very low and very high frequencies less perceptible. The Doppler effect describes the apparent change in frequency of a wave due to relative motion between source and observer. For sound, the observed frequency f' is f' = f \frac{v \pm v_o}{v \mp v_s}, where f is the source frequency, v is the speed of sound, v_o is the observer's speed, and v_s is the source's speed; the upper signs apply for motion toward the other party, giving a higher frequency on approach and a lower one on recession. This effect arises because the wavelength compresses or stretches depending on the motion: for a moving source, waves bunch up ahead and spread out behind, while a moving observer encounters waves more or less frequently. Everyday examples include the varying pitch of an ambulance siren as it passes by.
When a source moves faster than the speed of sound, it produces a shock wave, a sudden pressure front forming a cone-shaped wavefront behind the source, with the cone angle determined by the Mach number M = \frac{v_s}{v} through \sin\theta = \frac{1}{M}. For M > 1, the source is supersonic, and no sound reaches ahead of the shock; the intersection of this cone with the ground creates a sonic boom, a loud impulsive noise from the abrupt pressure change, as experienced when aircraft exceed Mach 1. Sonic booms can cause structural vibrations and are a key consideration in supersonic flight design. Resonance occurs in air columns when the driving frequency matches natural modes, amplifying sound as in musical instruments like organ pipes or flutes. In a closed pipe (open at one end, closed at the other), the fundamental frequency is f_1 = \frac{v}{4L}, with L the length, corresponding to a quarter-wavelength; the higher harmonics are odd multiples (f_3 = 3f_1, f_5 = 5f_1), due to the displacement antinode at the open end and displacement node at the closed end. For an open pipe (open at both ends), the fundamental is f_1 = \frac{v}{2L}, twice that of a closed pipe of equal length, with all harmonics as integer multiples (f_n = nf_1), reflecting displacement antinodes at both ends. End corrections, typically about 0.6 times the pipe radius, adjust the effective lengths of real tubes, influencing precise tuning in wind instruments.
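A quick sketch of the Doppler shift and pipe-resonance formulas above; the siren frequency, source speed, and pipe length are illustrative assumptions.

```python
# Doppler shift for a stationary observer: f' = f v/(v - v_s) on approach,
# f' = f v/(v + v_s) on recession
v = 343.0       # speed of sound in air at 20 C, m/s
f = 440.0       # source frequency, Hz (illustrative siren tone)
v_s = 30.0      # source speed, m/s (illustrative)
f_approach = f * v / (v - v_s)   # pitch rises as the siren approaches
f_recede = f * v / (v + v_s)     # pitch falls as it recedes

# Fundamental frequencies of pipes of length L = 0.5 m (end corrections ignored)
L = 0.5
f_open = v / (2 * L)     # open-open pipe
f_closed = v / (4 * L)   # open-closed pipe, exactly half the open-pipe value
```

The jump from f_approach to f_recede as the source passes is the familiar drop in pitch of a passing ambulance.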

Electromagnetism

Electrostatics

Electrostatics is the branch of electromagnetism that deals with the interactions between stationary electric charges and the resulting electric fields. It forms the foundation for understanding electric forces in the absence of charge motion or time-varying fields, providing key principles for phenomena ranging from atomic structure to lightning protection. The core concepts were developed in the 18th and 19th centuries through experimental and theoretical work, emphasizing the inverse-square nature of electric forces and the use of symmetry to simplify calculations. The fundamental law governing the force between two point charges at rest is Coulomb's law, formulated by Charles-Augustin de Coulomb in 1785 based on torsion balance experiments measuring repulsive and attractive forces between charged spheres. It states that the magnitude of the electrostatic force \vec{F} between two point charges q_1 and q_2 separated by a distance r in vacuum is directly proportional to the product of the charges and inversely proportional to the square of the distance, directed along the line joining them: \vec{F} = k_e \frac{q_1 q_2}{r^2} \hat{r}, where k_e = \frac{1}{4\pi \epsilon_0} is Coulomb's constant, \epsilon_0 is the vacuum permittivity (8.85 \times 10^{-12} \, \mathrm{C^2/N \cdot m^2}), and \hat{r} is the unit vector from one charge to the other. This law is analogous to Newton's law of gravitation but is vastly stronger; like charges repel and opposite charges attract. For multiple charges, the total force on any charge is the vector sum of the pairwise forces, enabling calculations for complex charge distributions. To describe the influence of a charge distribution without reference to a test charge, the concept of the electric field \vec{E} was introduced by Michael Faraday in the 19th century as a vector field representing the force per unit charge at any point in space.
The field due to a point charge q at a distance r is \vec{E} = \frac{1}{4\pi \epsilon_0} \frac{q}{r^2} \hat{r}, derived from \vec{E} = \vec{F}/q_0, where q_0 is a small positive test charge. For a continuous distribution, \vec{E} is obtained by integrating contributions from each infinitesimal charge element. Faraday visualized electric fields using field lines, which emanate from positive charges and terminate on negative charges, with line density proportional to field strength; lines never cross and are closer together where \vec{E} is stronger. This representation aids in understanding field patterns, such as radial lines around a point charge or uniform fields between parallel plates. A powerful tool for calculating electric fields from symmetric charge distributions is Gauss's law, first formulated by Carl Friedrich Gauss in 1835 as part of his work on gravitational and electric potentials, though its flux form was anticipated earlier. It relates the total electric flux through a closed surface to the enclosed charge: \oint_S \vec{E} \cdot d\vec{A} = \frac{Q_\mathrm{encl}}{\epsilon_0}, where the integral is over a closed Gaussian surface enclosing charge Q_\mathrm{encl}. The law exploits symmetry: for a uniformly charged sphere of radius R and total charge Q, a spherical Gaussian surface of radius r > R yields E = \frac{1}{4\pi \epsilon_0} \frac{Q}{r^2}, identical to the field of a point charge at the center. Inside the sphere (r < R), with uniform volume charge density \rho = Q/( \frac{4}{3}\pi R^3 ), the field is E = \frac{\rho r}{3\epsilon_0}, linear in r.
For an infinite plane with surface charge density \sigma, a cylindrical Gaussian surface gives E = \frac{\sigma}{2\epsilon_0}, independent of distance. For an infinite line charge with linear density \lambda, a cylindrical surface yields E = \frac{\lambda}{2\pi \epsilon_0 r}. These applications highlight how Gauss's law simplifies problems where direct integration via Coulomb's law would be cumbersome. In electrostatic equilibrium, conductors exhibit unique properties due to the mobility of their free charges. When isolated or in contact with a charged object, excess charge resides on the surface, as any internal charge would create a field causing redistribution until \vec{E} = 0 inside. The surface charge density \sigma is higher at sharper points, leading to field enhancement and potential corona discharge. For a conductor with an internal cavity containing no charge, Gauss's law applied to a surface inside the conductor shows \vec{E} = 0 within the cavity, shielding it from external fields. If charge Q is placed inside the cavity, it induces -Q on the inner surface and +Q on the outer, maintaining zero field in the conductor material.
This principle underlies the Faraday cage, demonstrated by Michael Faraday in 1836 using an electrified room: a conducting enclosure blocks external electric fields, protecting its contents from electrostatic effects like lightning strikes, as induced charges on the outer surface cancel the internal field. A practical device illustrating electrostatic principles is the Van de Graaff generator, invented by Robert J. Van de Graaff in 1929 and patented in 1935, which produces high voltages for research and demonstrations. It uses a moving insulating belt to transport charge from a high-voltage source to a hollow metal dome, where charge accumulates due to the conductor's properties, building potentials up to several million volts. The strong field near the dome causes sparks or corona discharge, visualizing field lines and demonstrating charge separation in equilibrium. Originally developed for particle acceleration, it remains a staple in physics education for exploring electrostatic forces and fields.
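The Coulomb-force and charged-sphere results above can be evaluated numerically; the charge values, separations, and sphere radius below are illustrative assumptions.

```python
import math

eps0 = 8.854e-12                 # vacuum permittivity, C^2/(N m^2)
k_e = 1 / (4 * math.pi * eps0)   # Coulomb constant, about 8.99e9 N m^2/C^2

# Force between two 1 microcoulomb charges 10 cm apart (illustrative)
q1 = q2 = 1e-6
r = 0.10
F = k_e * q1 * q2 / r**2         # roughly 0.9 N

# Uniformly charged sphere (Q = 1 uC, R = 5 cm), via Gauss's law:
Q, R = 1e-6, 0.05
E_out = k_e * Q / 0.10**2        # outside (r = 10 cm): same as a point charge
E_in = k_e * Q * 0.02 / R**3     # inside (r = 2 cm): E = kQr/R^3, linear in r
E_surface = k_e * Q / R**2       # the interior field peaks at the surface
```

Both expressions agree at r = R, as they must, since the field is continuous across the surface of a volume charge distribution.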

Electric Potential and Capacitance

The electric potential at a point in an electrostatic field is defined as the work done per unit charge by an external agent to bring a positive test charge from a reference point (usually infinity, where V=0) to that point, given by the line integral V = -\int_{\infty}^{r} \vec{E} \cdot d\vec{l}, where \vec{E} is the electric field. This scalar quantity simplifies calculations in electrostatics compared to the vector nature of the electric field, as potentials add algebraically. For a uniform electric field, such as between parallel plates, the potential difference simplifies to \Delta V = -E d, where d is the distance along the field direction. The electric potential energy U of a charge q at a point with potential V is U = q V, representing the work required to assemble the charge distribution from infinity. This energy arises because the electrostatic force is conservative, meaning the work done depends only on initial and final positions, not the path taken. In electrostatics, the electric field can be derived from the potential as \vec{E} = -\nabla V, linking the two concepts directly. A capacitor is a device consisting of two conductors separated by an insulator, designed to store electric charge and potential energy. The capacitance C is defined as the ratio of the magnitude of the charge Q stored on each conductor to the potential difference V between them, C = Q / V, with units of farads (coulombs per volt). For an ideal parallel-plate capacitor with plate area A and separation d (where d \ll linear dimensions of the plates, so fringing fields can be neglected), the capacitance is C = \epsilon_0 A / d, where \epsilon_0 = 8.85 \times 10^{-12} F/m is the vacuum permittivity. Inserting a dielectric material between the plates increases the capacitance by a factor \kappa, the dielectric constant (relative permittivity), yielding C' = \kappa C, where \kappa > 1 for all materials because dielectric polarization reduces the effective field.
For example, water has \kappa \approx 80, allowing significantly more charge storage for the same voltage compared to vacuum. The energy stored in a charged capacitor is U = \frac{1}{2} C V^2, equivalent to the work done to charge it, with the energy density in the field being u = \frac{1}{2} \epsilon_0 E^2 in vacuum (or \frac{1}{2} \kappa \epsilon_0 E^2 with a dielectric). This energy formula derives from integrating the incremental work dU = V dq as charge builds up. Equipotential surfaces are loci of points where the electric potential is constant, forming spheres around point charges or planes parallel to charged plates in uniform fields. The electric field is always perpendicular to these surfaces, with magnitude proportional to the rate of change of potential normal to the surface; no work is done moving a charge along an equipotential surface.
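As a quick numerical check of C = \epsilon_0 A / d, C' = \kappa C, and U = \frac{1}{2} C V^2, here is a minimal Python sketch; the plate size, gap, and voltage are illustrative values, not from the text:

```python
EPS0 = 8.85e-12  # vacuum permittivity, F/m

def parallel_plate_capacitance(area_m2, sep_m, kappa=1.0):
    """C = kappa * eps0 * A / d for an ideal parallel-plate capacitor."""
    return kappa * EPS0 * area_m2 / sep_m

def stored_energy(c_farads, v_volts):
    """U = (1/2) C V^2, the work done to charge the capacitor."""
    return 0.5 * c_farads * v_volts**2

# Assumed geometry: 1 cm x 1 cm plates, 1 mm apart, with and without water (kappa ~ 80)
c_vac = parallel_plate_capacitance(1e-4, 1e-3)
c_water = parallel_plate_capacitance(1e-4, 1e-3, kappa=80)
print(c_vac)                        # ~8.85e-13 F
print(c_water / c_vac)              # 80.0: the dielectric boosts C by kappa
print(stored_energy(c_vac, 100.0))  # ~4.4e-9 J at 100 V
```

Note how small a farad is: even centimeter-scale plates give less than a picofarad without a dielectric.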

Current and Resistance

Electric current is the rate at which charge passes through a surface, defined mathematically as I = \frac{dQ}{dt}, where I is the current in amperes, Q is the charge in coulombs, and t is time in seconds. This definition applies to steady currents in conductors, where the flow of charge carriers, typically electrons in metals, remains constant over time. The direction of conventional current is taken as the direction of positive charge motion, opposite to the electron flow. At the microscopic level, current arises from the drift of charge carriers under an applied electric field. The current density \vec{j}, which represents current per unit area, is related to the electric field \vec{E} by \vec{j} = \sigma \vec{E}, where \sigma is the conductivity of the material. This relation, derived from the classical model of electron motion in solids, shows that higher fields accelerate carriers, leading to greater current density, with \sigma depending on carrier density, charge, and mobility. Ohm's law describes the linear relationship between voltage and current in many conductors: V = IR, where V is the potential difference in volts, I is current in amperes, and R is resistance in ohms. Formulated by Georg Simon Ohm in his 1827 publication Die galvanische Kette, mathematisch bearbeitet, this law holds for ohmic materials where resistance is constant. Resistance R of a conductor is given by R = \rho \frac{L}{A}, with \rho as resistivity (the reciprocal of conductivity, \rho = 1/\sigma), L as length, and A as cross-sectional area. 
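The relations R = \rho L / A and V = IR translate directly into a short calculation; the copper resistivity and wire dimensions below are illustrative assumptions:

```python
import math

def resistance(resistivity, length_m, area_m2):
    """R = rho * L / A for a uniform conductor."""
    return resistivity * length_m / area_m2

# Assumed values: 10 m of copper wire (rho ~ 1.68e-8 ohm*m at 20 C), 1 mm diameter
rho_cu = 1.68e-8
area = math.pi * (0.5e-3) ** 2   # cross-sectional area of the wire
r = resistance(rho_cu, 10.0, area)
i = 1.5 / r                      # Ohm's law: current driven by a 1.5 V cell
print(r)  # ~0.21 ohm
print(i)  # ~7 A
```

The tiny resistance of household-gauge copper explains why long runs of wire waste little power at modest currents.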
Resistivity varies with temperature, typically increasing for metals due to enhanced electron-phonon scattering, following \rho(T) = \rho_0 [1 + \alpha (T - T_0)], where \alpha is the temperature coefficient and \rho_0 is resistivity at reference temperature T_0. For example, copper's \alpha \approx 0.0039\ (^\circ\mathrm{C})^{-1}, meaning resistance rises about 0.39% per degree Celsius increase. In semiconductors, resistivity decreases with temperature as more charge carriers are thermally excited. The power dissipated as heat in a resistor, known as Joule heating, is P = I^2 R, representing the rate of electrical energy conversion to thermal energy. This arises from the work done against resistive forces on charge carriers, with total energy dissipated over time t as Pt = I^2 Rt. In practical applications, such as electric heaters, this effect is harnessed by designing high-resistance elements to maximize heat output for given currents. For analyzing complex circuits with multiple branches and loops, Kirchhoff's laws apply conservation principles. 
Kirchhoff's junction rule states that the algebraic sum of currents entering a junction equals zero, reflecting charge conservation: \sum I = 0. Kirchhoff's loop rule asserts that the algebraic sum of potential differences around any closed loop is zero, embodying energy conservation: \sum V = 0. These rules, introduced by Gustav Kirchhoff in 1845, enable solving for unknown currents and voltages in steady-state DC circuits. In circuits combining resistors and capacitors, such as an RC series circuit connected to a battery, the capacitor charges exponentially. The charge on the capacitor builds as q(t) = Q (1 - e^{-t/RC}), where Q = CV is the maximum charge, C is capacitance, and RC is the time constant determining the charging rate. Initially, current is maximum (I_0 = V/R), decreasing to zero as the capacitor fully charges, with the voltage across it approaching the battery potential. This behavior models transient responses in filters and timing circuits.
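The RC charging equations above can be evaluated directly; a minimal sketch, with illustrative component values:

```python
import math

def rc_charge(t, V, R, C):
    """Charge on the capacitor while charging: q(t) = C V (1 - e^{-t/RC})."""
    return C * V * (1.0 - math.exp(-t / (R * C)))

def rc_current(t, V, R, C):
    """Circuit current: I(t) = (V/R) e^{-t/RC}, maximal at t = 0."""
    return (V / R) * math.exp(-t / (R * C))

# Assumed components: 9 V battery, 1 kOhm resistor, 100 uF capacitor
V, R, C = 9.0, 1e3, 100e-6
tau = R * C                                   # time constant: 0.1 s
print(rc_charge(5 * tau, V, R, C) / (C * V))  # ~0.993: essentially fully charged
print(rc_current(0.0, V, R, C))               # 0.009 A = V/R at the first instant
```

A common rule of thumb follows from the numbers: after five time constants the capacitor is more than 99% charged.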

Magnetostatics and Induction

Magnetostatics is the study of magnetic fields in systems where currents are steady, meaning the magnetic fields do not change with time. These fields arise from electric currents, building on the concept of current as the flow of charge in conductors. The fundamental relation between currents and magnetic fields is described by the Biot-Savart law, which was experimentally established by Jean-Baptiste Biot and Félix Savart in 1820. This law allows calculation of the infinitesimal magnetic field d\vec{B} produced by a small current element I d\vec{l} at a distance \vec{r} from the element, given by d\vec{B} = \frac{\mu_0}{4\pi} \frac{I d\vec{l} \times \hat{r}}{r^2}, where \mu_0 = 4\pi \times 10^{-7} \, \mathrm{T \cdot m/A} is the permeability of free space and \hat{r} is the unit vector along \vec{r}. The cross product ensures the field is perpendicular to both the current direction and the line connecting the element to the observation point, following the right-hand rule for direction. Integrating this expression over a continuous current distribution yields the total magnetic field \vec{B}, enabling predictions for fields around wires, loops, or arbitrary shapes. For example, the field around an infinite straight wire carrying current I is \vec{B} = \frac{\mu_0 I}{2\pi r} \hat{\phi}, circling the wire in the azimuthal direction. A more powerful tool for symmetric current configurations is Ampère's law, formulated by André-Marie Ampère in the 1820s following his experiments on forces between current-carrying wires. It states that the line integral of the magnetic field around any closed loop equals \mu_0 times the total current I_\mathrm{encl} enclosed by that loop: \oint \vec{B} \cdot d\vec{l} = \mu_0 I_\mathrm{encl}. This integral form simplifies calculations for high-symmetry cases, such as the infinite straight wire mentioned earlier, where symmetry dictates \vec{B} is constant in magnitude and tangential along a circular Amperian loop. 
For a long solenoid—a helical coil of many tightly wound turns with n turns per unit length carrying current I—Ampère's law yields a uniform magnetic field inside of B = \mu_0 n I, directed along the axis, while the field outside is negligible. Similarly, for a toroid (a solenoid bent into a ring shape with mean radius R and N total turns), the field inside is B = \frac{\mu_0 N I}{2\pi R}, circumferential and confined within the core. These results highlight Ampère's law's utility in engineering applications like electromagnets and transformers. Electromagnetic induction arises when changing magnetic fields produce electric fields, a phenomenon discovered by Michael Faraday in 1831 through experiments showing that a varying magnetic flux through a loop induces an electromotive force (emf). Faraday's law quantifies this as the induced emf \mathcal{E} in a closed loop equaling the negative rate of change of magnetic flux \Phi_B through the loop: \mathcal{E} = -\frac{d\Phi_B}{dt}, where \Phi_B = \int \vec{B} \cdot d\vec{A} is the flux over the area enclosed by the loop. For N loops, the emf is multiplied by N. The negative sign reflects Lenz's rule, formulated by Heinrich Lenz in 1834, which states that the induced current creates a magnetic field opposing the change in flux, conserving energy by resisting the flux variation. For instance, if a bar magnet approaches a conducting loop, the induced current generates a field repelling the magnet. This opposition explains why energy must be supplied to change currents in inductive circuits. Inductance quantifies the ability of a circuit to store energy in its magnetic field due to a flowing current. Self-inductance L for a single circuit is defined such that the induced emf is \mathcal{E} = -L \frac{dI}{dt}, measuring how much magnetic flux links the circuit per unit current; for a solenoid, L = \mu_0 n^2 A l, where A is cross-sectional area and l is the length. Mutual inductance M between two circuits describes the emf induced in one by a changing current in the other, \mathcal{E}_2 = -M \frac{dI_1}{dt}, depending on their geometric coupling; for two coaxial solenoids, M equals the self-inductance of the inner solenoid times the fraction of its flux linking the outer. 
The energy stored in an inductor's magnetic field is U = \frac{1}{2} L I^2, analogous to electrostatic energy in a capacitor, derived by integrating the power delivered against the back emf during current buildup. This energy has density \frac{B^2}{2\mu_0} throughout space. In RL circuits, combining resistance R and inductance L, the inductor's opposition to current changes leads to transient behavior. When a battery is connected, current grows as I(t) = \frac{\mathcal{E}}{R} \left(1 - e^{-t/(L/R)}\right), approaching its steady-state value \mathcal{E}/R with time constant \tau = L/R. Upon disconnection, current decays exponentially as I(t) = I_0 e^{-t/(L/R)}, with energy dissipated as heat in the resistor. This decay illustrates Lenz's rule, as the inductor induces an emf maintaining the current against the decreasing flux. RL circuits model real-world devices like relays and filters, where \tau determines response time.
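The RL growth curve and the stored-energy formula can be checked with a short script; the emf, resistance, and inductance values are illustrative:

```python
import math

def rl_current_growth(t, emf, R, L):
    """Current after connecting a battery: I(t) = (emf/R)(1 - e^{-t R/L})."""
    return (emf / R) * (1.0 - math.exp(-t * R / L))

def inductor_energy(L, I):
    """Energy stored in the inductor's magnetic field: U = (1/2) L I^2."""
    return 0.5 * L * I**2

# Assumed components: 12 V emf, 10 ohm resistor, 0.5 H inductor
emf, R, L = 12.0, 10.0, 0.5
tau = L / R                 # time constant: 50 ms
i_final = emf / R           # steady-state current: 1.2 A
print(rl_current_growth(tau, emf, R, L) / i_final)  # ~0.632 after one time constant
print(inductor_energy(L, i_final))                  # 0.36 J at steady state
```

As with the RC circuit, one time constant brings the current to 1 - 1/e (about 63%) of its final value.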

Maxwell's Equations

Maxwell's equations constitute the foundational set of four partial differential equations that describe the behavior of electric and magnetic fields in vacuum and in media, unifying previously disparate phenomena such as electrostatics, magnetostatics, and electromagnetic induction into a coherent dynamical theory. Formulated by James Clerk Maxwell in his 1865 paper "A Dynamical Theory of the Electromagnetic Field," these equations reveal the interdependence of electric and magnetic fields, predicting their propagation as electromagnetic waves at the speed of light and laying the groundwork for modern technologies like radio communication. The equations are expressed in both integral and differential forms, with the latter derived by Oliver Heaviside in 1885 using vector calculus to condense Maxwell's original twenty equations into a more compact and widely used notation. In integral form, the equations relate the fields to their sources over closed surfaces and loops. Gauss's law for electricity states that the flux of the electric field \vec{E} through a closed surface is proportional to the enclosed charge: \oint_S \vec{E} \cdot d\vec{A} = \frac{Q_{\text{enc}}}{\epsilon_0}, where \epsilon_0 is the vacuum permittivity and Q_{\text{enc}} is the enclosed charge. Gauss's law for magnetism asserts that the magnetic flux through any closed surface is zero, implying the absence of magnetic monopoles: \oint_S \vec{B} \cdot d\vec{A} = 0, with \vec{B} denoting the magnetic field. Faraday's law of induction describes how a changing magnetic flux \Phi_B = \int \vec{B} \cdot d\vec{A} induces an electromotive force around a closed loop: \oint_C \vec{E} \cdot d\vec{l} = -\frac{d\Phi_B}{dt}. This captures electromagnetic induction, where time-varying magnetic fields generate electric fields. 
The Ampère-Maxwell law extends Ampère's circuital law by including the displacement current, linking the circulation of the magnetic field to both conduction current and the rate of change of electric flux: \oint_C \vec{B} \cdot d\vec{l} = \mu_0 \left( I_{\text{enc}} + \epsilon_0 \frac{d\Phi_E}{dt} \right), where \mu_0 is the vacuum permeability, I_{\text{enc}} is the enclosed current, and \Phi_E = \int \vec{E} \cdot d\vec{A} is the electric flux. The differential forms, which apply locally at each point in space, are obtained via the divergence and Stokes' theorems and are particularly useful for deriving field behaviors in vacuum or media. These are: \nabla \cdot \vec{E} = \frac{\rho}{\epsilon_0}, \quad \nabla \cdot \vec{B} = 0, \quad \nabla \times \vec{E} = -\frac{\partial \vec{B}}{\partial t}, \quad \nabla \times \vec{B} = \mu_0 \vec{J} + \mu_0 \epsilon_0 \frac{\partial \vec{E}}{\partial t}, where \rho is the charge density and \vec{J} is the current density. The zero divergence of \vec{B} confirms the nonexistence of isolated magnetic charges, a key implication distinguishing magnetism from electricity. Faraday's law in differential form highlights how spatially varying electric fields arise from time-dependent magnetic fields, enabling applications like transformers and generators. A significant consequence of Maxwell's equations is the Poynting vector, which quantifies the directional energy flux of an electromagnetic field. Defined as \vec{S} = \frac{1}{\mu_0} \vec{E} \times \vec{B}, it represents the power per unit area carried by the fields, with units of watts per square meter, and its integral over a closed surface gives the rate of energy leaving a volume. Introduced by John Henry Poynting in 1884, this vector illustrates how electromagnetic energy flows, for instance, from a charging capacitor's electric field interacting with the surrounding magnetic field. Together, these elements form the complete classical description of electromagnetism, emphasizing the symmetry and dynamism of electric and magnetic interactions.
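A celebrated consequence of the vacuum equations is that the two constants fix the wave speed, c = 1/\sqrt{\mu_0 \epsilon_0}; a one-line numerical check (using the rounded \epsilon_0 quoted earlier):

```python
import math

MU0 = 4 * math.pi * 1e-7   # vacuum permeability, T*m/A
EPS0 = 8.85e-12            # vacuum permittivity, F/m (rounded)

# Maxwell's equations in vacuum predict waves travelling at c = 1/sqrt(mu0 * eps0)
c = 1.0 / math.sqrt(MU0 * EPS0)
print(c)  # ~3.0e8 m/s, the measured speed of light
```

Recovering the measured speed of light from two electrical constants was the decisive evidence that light is an electromagnetic wave.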

Optics

Geometric Optics

Geometric optics is the branch of optics that describes light propagation using rays, approximating light as straight lines that bend only at interfaces between media, valid when the wavelength is much smaller than the structures involved. This ray approximation neglects wave phenomena like interference and diffraction, focusing instead on reflection, refraction, and image formation. The principles underpin the design of mirrors, lenses, and optical instruments, treating light paths as reversible and independent. Reflection occurs when light rays encounter a boundary between two media and bounce back, governed by the law of reflection: the angle of incidence equals the angle of reflection, both measured from the normal to the surface. This law holds for plane, spherical, and other mirror surfaces, enabling the formation of virtual or real images depending on the geometry. Refraction, the bending of rays at an interface, arises from the change in light speed between media and is described by Snell's law: n_1 \sin \theta_1 = n_2 \sin \theta_2, where n is the refractive index (a dimensionless measure of the medium's optical density, approximately 1 for vacuum, 1.33 for water, and 1.5 for glass) and \theta the angles from the normal. The refractive index n = c / v, with c the speed in vacuum and v in the medium, quantifies how much light slows and bends. For mirrors and lenses, image location and size are predicted using paraxial approximations, assuming small angles. Spherical mirrors form images via ray tracing, with focal length f = R/2 for radius R. Lenses, typically thin, obey the thin lens equation: \frac{1}{f} = \frac{1}{d_o} + \frac{1}{d_i}, where d_o is object distance, d_i image distance, and f focal length (positive for converging lenses, negative for diverging). Linear magnification m = -\frac{d_i}{d_o} = \frac{h_i}{h_o} gives image height h_i relative to object height h_o, with the negative sign indicating inversion for real images. 
Converging lenses produce real images for distant objects, while diverging lenses yield virtual, upright images. Total internal reflection (TIR) occurs when light in a denser medium strikes the interface at an angle of incidence greater than the critical angle \theta_c = \sin^{-1}(n_2 / n_1) (with n_1 > n_2), causing complete reflection without transmission. This phenomenon, derived from Snell's law when it would require \sin \theta_2 > 1, enables applications like fiber optics, where light signals propagate through thin glass or plastic cores (refractive index ~1.5) clad with lower-index material (~1.46), confining rays via repeated TIR for long-distance data transmission with minimal loss. Optical aberrations are deviations from ideal ray paths that blur images. Spherical aberration arises in lenses or mirrors with spherical surfaces, as peripheral rays focus closer than paraxial ones due to varying path lengths, worsening with larger apertures; it is minimized by aspheric surfaces or stopped-down apertures. Chromatic aberration results from wavelength-dependent refractive indices, causing blue light to focus at shorter distances than red, leading to color fringing; achromatic doublets, combining crown and flint glass, correct this by countering dispersion. Optical instruments combine lenses to enhance resolution or magnification. The compound microscope uses an objective lens (short focal length f_o, ~1-2 mm) to form a real, magnified intermediate image near the focal point of the eyepiece (focal length f_e, ~2-5 cm), yielding total angular magnification m \approx (L / f_o) (25 / f_e), where L is tube length (~16 cm) and 25 cm the near-point distance. Telescopes, for distant objects, employ an objective (long f_o, e.g., 1 m for refractors) to create a real image at the eyepiece focal point (short f_e, ~2 cm), providing angular magnification m = -f_o / f_e; astronomical versions yield inverted images, while terrestrial ones add erecting lenses.
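The thin lens equation and the magnification formula are easy to exercise numerically; a minimal sketch, with an illustrative focal length and object distance:

```python
def image_distance(f, d_o):
    """Thin lens equation 1/f = 1/d_o + 1/d_i, solved for d_i = f d_o / (d_o - f)."""
    return f * d_o / (d_o - f)

def magnification(d_o, d_i):
    """m = -d_i / d_o; a negative m means an inverted (real) image."""
    return -d_i / d_o

# Converging lens (f = +10 cm) with the object at 30 cm
d_i = image_distance(10.0, 30.0)
print(d_i)                       # 15.0 cm: a real image beyond the focal point
print(magnification(30.0, d_i))  # -0.5: inverted and half-size
```

Trying d_o < f in the same function gives a negative d_i, the signature of the virtual, upright image a magnifying glass produces.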

Wave Optics and Interference

Wave optics describes the behavior of light as electromagnetic waves, emphasizing phenomena like interference and diffraction that arise from wave superposition and cannot be accounted for by ray-based geometric models. These effects demonstrate light's wave nature, where coherent sources produce constructive and destructive patterns, influencing applications from thin-film coatings to the resolution limits of optical instruments. Unlike geometric optics, which approximates light as straight-line rays for large-scale phenomena, wave optics applies Huygens' principle to predict bending and spreading around obstacles. Huygens' principle, proposed by Christiaan Huygens in 1690, posits that every point on a wavefront serves as a source of secondary spherical wavelets that propagate forward, with the new wavefront forming as their common envelope. This geometric construction explains wave propagation and diffraction qualitatively, forming the basis for later quantitative theories by Fresnel and others. For instance, it illustrates how waves bend around edges, leading to observable patterns in slits and apertures. Diffraction occurs when light encounters an obstacle or aperture comparable in size to its wavelength, causing the wave to spread and interfere. In single-slit diffraction, the condition for destructive interference minima is given by \sin \theta = \frac{m \lambda}{a}, where a is the slit width, \lambda is the wavelength, \theta is the angle from the center, and m = \pm 1, \pm 2, \dots is an integer. This derives from the path length difference across the slit equaling an integer multiple of \lambda, resulting in zero net amplitude at those angles; the central maximum broadens as a decreases, highlighting the wave's spreading. The intensity distribution follows I(\theta) = I_0 \left( \frac{\sin \beta}{\beta} \right)^2, where \beta = \frac{\pi a \sin \theta}{\lambda}, showing a broad central peak flanked by subsidiary maxima. Young's double-slit experiment, conducted in 1801, provided early evidence for light's wave nature through interference fringes on a screen. 
Monochromatic light from a single source passes through two narrow slits separated by distance d, producing bright fringes where the path difference satisfies d \sin \theta = m \lambda for constructive interference (m = 0, \pm 1, \pm 2, \dots). The resulting intensity pattern is I(\theta) = 4 I_0 \cos^2 \left( \frac{\pi d \sin \theta}{\lambda} \right), with maximum intensity four times that of a single slit at the center, modulated by the single-slit envelope. Fringe spacing decreases with increasing d, confirming the wavelength dependence. Thin-film interference arises when light reflects from the top and bottom surfaces of a thin transparent layer, such as a soap bubble or oil slick, creating path-dependent phase differences. A phase shift of \pi (or \lambda/2) occurs upon reflection from a medium of higher refractive index, while no shift happens at the higher-to-lower transition; the total phase difference is thus \delta = \frac{2\pi}{\lambda} (2 n t \cos \phi) + \pi or 0, depending on the reflections, where n is the film's refractive index, t its thickness, and \phi the angle inside the film. Constructive interference for reflected light requires \delta = 2\pi m, producing iridescent colors in soap films where thicknesses yield visible wavelengths. For normal incidence, minima appear at 2 n t = m \lambda if exactly one reflection shifts the phase, while maxima appear at 2 n t = (m + 1/2) \lambda. Polarization describes the orientation of light's electric-field oscillations, which can be linear, circular, or elliptical; unpolarized light from sources like the sun has random orientations. When polarized light passes through an analyzer at angle \theta to its polarization direction, Malus' law states the transmitted intensity is I = I_0 \cos^2 \theta, where I_0 is the incident intensity. This law, derived empirically by Étienne-Louis Malus in 1809, arises because only the component parallel to the analyzer's transmission axis passes through, reducing intensity to zero at \theta = 90^\circ. Polarizers exploit this for applications like polarized sunglasses, filtering glare from horizontal surfaces. 
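The double-slit pattern modulated by the single-slit envelope can be evaluated directly from the two formulas above; the wavelength, slit spacing, and slit width below are illustrative:

```python
import math

def two_slit_intensity(theta, wavelength, d, a, I0=1.0):
    """I = 4 I0 cos^2(pi d sin(theta)/lambda) * (sin(beta)/beta)^2,
    with beta = pi a sin(theta)/lambda the single-slit envelope factor."""
    s = math.sin(theta)
    beta = math.pi * a * s / wavelength
    envelope = 1.0 if beta == 0 else (math.sin(beta) / beta) ** 2
    return 4 * I0 * math.cos(math.pi * d * s / wavelength) ** 2 * envelope

# Assumed setup: green light, 20 um slit spacing, 2 um slit width
lam, d, a = 550e-9, 20e-6, 2e-6
theta_1 = math.asin(lam / d)   # first bright fringe: d sin(theta) = lambda
print(two_slit_intensity(0.0, lam, d, a))      # 4.0: central maximum
print(two_slit_intensity(theta_1, lam, d, a))  # slightly below 4, cut by the envelope
```

The envelope factor is why fringes far from the center grow dim even though the two-slit condition predicts equal maxima.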
The resolution limit in optical instruments is set by diffraction, with the Rayleigh criterion defining two point sources as just resolvable when the central maximum of one diffraction pattern falls on the first minimum of the other. For a circular aperture of diameter D, the minimum angular separation is \theta = 1.22 \frac{\lambda}{D}, where the factor 1.22 comes from the first zero of the Airy disk pattern. This criterion, formulated by Lord Rayleigh in the late 19th century, applies to telescopes and microscopes, where shorter \lambda or larger D improves resolution; for example, visible light (\lambda \approx 550 nm) limits the unaided eye to about 1 arcminute. Beyond this, diffraction patterns overlap indistinguishably, though superresolution techniques circumvent it under specific conditions.
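A short calculation with the Rayleigh criterion reproduces the quoted ~1 arcminute limit of the eye; the 3 mm pupil diameter is an assumed typical value:

```python
import math

def rayleigh_min_angle(wavelength, aperture_d):
    """Rayleigh criterion for a circular aperture: theta = 1.22 lambda / D, in radians."""
    return 1.22 * wavelength / aperture_d

# Assumed values: 550 nm green light, 3 mm pupil
theta = rayleigh_min_angle(550e-9, 3e-3)
arcmin = math.degrees(theta) * 60
print(arcmin)  # ~0.77 arcminutes, consistent with the ~1 arcmin quoted for the eye
```

Substituting a 10 m telescope mirror for the pupil shrinks the limit by more than three orders of magnitude, which is the whole case for large apertures.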

Modern Physics

Special Relativity

Special relativity, formulated by Albert Einstein in 1905, revolutionizes the understanding of space, time, and motion for objects traveling at constant velocities, especially near the speed of light, by unifying them into a four-dimensional spacetime framework. Unlike classical Newtonian mechanics, which assumes absolute space and time, special relativity reveals that measurements of length and duration depend on the observer's reference frame, leading to counterintuitive effects that have been experimentally verified in particle accelerators and cosmic-ray studies. This theory maintains the invariance of the laws of physics under transformations between inertial frames while ensuring the consistency of electromagnetism, particularly the invariance of the speed of light. The foundation of special relativity rests on two postulates introduced by Einstein. The principle of relativity asserts that the laws of physics, including those of mechanics and electromagnetism, take the same form in all inertial reference frames—frames moving at constant velocity relative to one another. The second postulate states that the speed of light in vacuum, c \approx 3 \times 10^8 m/s, is constant for all observers, regardless of the motion of the light source or the observer. These postulates, which resolve apparent conflicts between Newtonian mechanics and electromagnetic theory, imply that no object with mass can reach or exceed c, as accelerating to such speeds would require infinite energy. From these postulates, the Lorentz transformations emerge as the coordinate shifts between two inertial frames, say S and S', where S' moves at velocity v along the x-axis relative to S. These transformations, derived by requiring the invariance of the speed of light and the relativity principle, are: \begin{align*} x' &= \gamma (x - vt), \\ t' &= \gamma \left( t - \frac{vx}{c^2} \right), \\ y' &= y, \\ z' &= z, \end{align*} with the Lorentz factor \gamma = \frac{1}{\sqrt{1 - \frac{v^2}{c^2}}}. For v \ll c, these reduce to the Galilean transformations, recovering classical results, but they predict profound deviations at relativistic speeds. 
A key consequence is time dilation, where a clock moving at speed v relative to an observer runs slow compared to one at rest in the observer's frame. The proper time interval \Delta t_0, measured by the clock in its rest frame, relates to the dilated time \Delta t observed in the other frame by \Delta t = \gamma \Delta t_0. This effect has been confirmed experimentally, for instance, in muon decay experiments where cosmic-ray muons reach Earth's surface in greater numbers than expected classically due to their extended lifetimes from time dilation. Length contraction complements time dilation, affecting measurements in the direction of motion. An object's proper length L_0, measured in its rest frame, appears contracted to L = \frac{L_0}{\gamma} in a frame where it moves at speed v. This contraction ensures consistency with the Lorentz transformations and has been observed in high-energy particle collisions, where subatomic particles exhibit shortened dimensions along their velocity vector. The relativity of simultaneity arises directly from the time transformation equation, demonstrating that events simultaneous in one frame (\Delta t = 0) are not necessarily so in another unless they occur at the same location (\Delta x = 0). Specifically, \Delta t' = -\gamma \frac{v \Delta x}{c^2} for simultaneous events separated by \Delta x in the original frame. This undermines the classical notion of absolute time, showing that the order of spatially separated events can depend on the observer's motion. The twin paradox, a thought experiment highlighting time dilation and the relativity of simultaneity, considers identical twins where one remains on Earth while the other travels at relativistic speed to a distant star and returns. Upon reunion, the traveling twin has aged less, because their path through spacetime accumulates less proper time; the apparent symmetry between the twins' perspectives is broken by the traveling twin's frame change during the turnaround and by the non-simultaneity of the separation and reunion events in the two inertial frames. 
Although the twin formulation was popularized by Paul Langevin in 1911, the underlying clock desynchronization effect was analyzed by Einstein in 1905 as a direct outcome of the theory's kinematics.
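The muon example mentioned above can be made quantitative with the Lorentz factor; a minimal sketch, using the typical textbook values of 0.995c and a 2.2 µs proper lifetime:

```python
import math

C = 3.0e8  # speed of light, m/s

def gamma(v):
    """Lorentz factor 1/sqrt(1 - v^2/c^2)."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

# Cosmic-ray muon: proper lifetime 2.2 microseconds, speed 0.995c (illustrative)
v = 0.995 * C
tau0 = 2.2e-6
g = gamma(v)
dist_relativistic = v * g * tau0   # mean range with time dilation
dist_classical = v * tau0          # mean range without it
print(g)                  # ~10: the muon's clock runs ~10x slow in Earth's frame
print(dist_relativistic)  # ~6.6 km: enough to cross much of the atmosphere
print(dist_classical)     # ~0.66 km: far too short without dilation
```

The factor-of-ten difference in range is exactly why ground detectors see far more muons than a classical lifetime would allow.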

Photons and Photoelectric Effect

The concept of photons emerged from efforts to resolve inconsistencies in classical theories of light and radiation, particularly in explaining blackbody radiation. In the late 19th century, classical physics predicted that the spectral energy density of blackbody radiation would diverge to infinity at short wavelengths, a problem known as the ultraviolet catastrophe. To address this, Max Planck proposed in 1900 that electromagnetic energy is emitted and absorbed in discrete quanta, leading to his law for the spectral energy density u(f, T) = \frac{8\pi h f^3}{c^3} \frac{1}{e^{hf/kT} - 1}, where h is Planck's constant, f is frequency, T is temperature, c is the speed of light, and k is Boltzmann's constant. This quantization resolved the catastrophe by suppressing high-frequency contributions, matching experimental observations of blackbody spectra. Building on Planck's quanta, Albert Einstein extended the idea to light itself in 1905, proposing that light consists of discrete particles, or photons, each carrying energy E = h f. This particle model explained the photoelectric effect, where light incident on a metal surface ejects electrons only if the light's frequency exceeds a material-specific threshold frequency f_0. Below f_0, no electrons are emitted regardless of intensity, contradicting classical wave theory, which predicted ejection dependent on intensity alone. Einstein attributed this to the work function \phi = h f_0, the minimum energy needed to free an electron from the metal. In the photoelectric effect, the maximum kinetic energy of ejected electrons is given by K_{\max} = h f - \phi, with excess photon energy beyond \phi converted to electron kinetic energy. Experimentally, this is measured via the stopping potential V_s, the retarding voltage that halts the fastest electrons, satisfying e V_s = h f - \phi, where e is the electron charge. Plots of V_s versus f yield a straight line with slope h/e and intercept -f_0, confirming the linear frequency dependence and enabling determination of h. 
These relations hold for various metals, with \phi ranging from about 2 to 5 eV, establishing the quantized nature of light-matter interactions. Photons also possess momentum p = h / \lambda, where \lambda is the wavelength, implying particle-like collisions with matter. This was demonstrated in 1923 by Arthur Compton through experiments scattering X-rays off light elements like graphite, where the scattered photon's wavelength shifts by \Delta \lambda = \frac{h}{m_e c} (1 - \cos \theta), with m_e the electron mass, c the speed of light, and \theta the scattering angle. The shift arises from conservation of energy and momentum in photon-electron collisions, treating the photon as a particle with zero rest mass, and matches observations for wavelengths around 0.07 nm. This effect, peaking at \theta = 180^\circ, provides direct evidence of photon momentum and rules out pure wave models. The photoelectric effect underpins key technologies exploiting photon absorption. Photodiodes convert incident light into electrical current via electron-hole pair generation in semiconductors, with sensitivity tuned to photon energies above the material's bandgap, enabling applications in optical communication and sensors. Solar cells, based on the photovoltaic variant of the effect in p-n junctions, generate power from sunlight, with laboratory efficiencies exceeding 27% and commercial efficiencies up to 26% as of 2025 in silicon devices by capturing photons whose energy exceeds silicon's bandgap of about 1.1 eV. These devices have scaled to gigawatt-level production, demonstrating the practical impact of quantized light.
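Both the photoelectric and Compton relations are easy to evaluate; a minimal sketch, where the light frequency and the 4.0 eV work function are illustrative choices and the constants are rounded:

```python
import math

H = 6.626e-34         # Planck's constant, J*s
E_CHARGE = 1.602e-19  # elementary charge, C
M_E = 9.109e-31       # electron mass, kg
C_LIGHT = 2.998e8     # speed of light, m/s

def max_kinetic_energy_eV(freq_hz, work_function_eV):
    """Photoelectric effect: K_max = h f - phi, in eV (negative => no emission)."""
    return H * freq_hz / E_CHARGE - work_function_eV

def compton_shift(theta_rad):
    """Compton wavelength shift: (h / m_e c)(1 - cos theta)."""
    return (H / (M_E * C_LIGHT)) * (1.0 - math.cos(theta_rad))

# Assumed inputs: UV light at 1.5e15 Hz on a metal with phi = 4.0 eV
k_max = max_kinetic_energy_eV(1.5e15, 4.0)
shift = compton_shift(math.pi)   # backscattering, the maximum shift
print(k_max)  # ~2.2 eV: electrons are ejected
print(shift)  # ~4.85e-12 m: twice the Compton wavelength h/(m_e c)
```

Halving the frequency in this example drops h f below \phi, making K_max negative and reproducing the threshold behavior that classical wave theory could not explain.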

Atomic Structure and Spectra

The structure of matter is fundamentally described by quantum models that explain the nature of atomic spectra, where atoms emit or absorb light at specific wavelengths corresponding to transitions between quantized energy levels. Early attempts to model the atom, building on Rutherford's nuclear model, culminated in Bohr's 1913 semiclassical theory, which introduced quantization to resolve the instability of classical orbiting electrons. In the Bohr model, electrons orbit the nucleus in "stationary" states, with angular momentum quantized as L = n \hbar, where n is a positive integer (the principal quantum number) and \hbar = h / 2\pi is the reduced Planck's constant. This quantization prevents electrons from spiraling into the nucleus due to continuous radiation, as predicted by classical electrodynamics. For the hydrogen atom, the energy levels are given by E_n = -\frac{13.6}{n^2} eV, derived from balancing centripetal and Coulomb forces with the quantization condition. These levels become less negative (higher energy) as n increases, approaching zero as n \to \infty, corresponding to the ionization threshold. The Bohr model successfully explained the observed emission spectra of hydrogen, where electrons excited to higher levels decay to lower ones, emitting photons with energies equal to the level differences \Delta E = h \nu = E_{n_2} - E_{n_1} (with n_2 > n_1). This led to the prediction of spectral series, empirically discovered earlier. The Balmer series, visible in hydrogen spectra, consists of lines converging to a limit in the near ultraviolet, described by the empirical formula \frac{1}{\lambda} = R \left( \frac{1}{n_1^2} - \frac{1}{n_2^2} \right), where n_1 = 2 and n_2 = 3, 4, \dots; here, R is the Rydberg constant, approximately 1.097 \times 10^7 m^{-1}. Johann Balmer proposed this formula in 1885 based on measured wavelengths, initially fitting the first four lines without theoretical justification. 
Johannes Rydberg generalized it in 1890 to all hydrogen series (e.g., Lyman for n_1 = 1, Paschen for n_1 = 3), introducing the universal Rydberg constant and confirming its value through spectroscopic data. Bohr's theory derived this formula theoretically by substituting the quantized energies into the frequency relation, marking a triumph in linking atomic structure to spectral lines. Despite its successes for hydrogen-like atoms, the Bohr model failed for multi-electron atoms and finer spectral details, prompting the development of fully quantum wave mechanics. Louis de Broglie hypothesized in 1924 that particles like electrons possess wave properties, with wavelength \lambda = h / p, where p is momentum; this matter-wave duality suggested electrons in atoms form standing waves around the nucleus, aligning with Bohr's orbital quantization. Erwin Schrödinger formalized this in 1926 through his wave equation, i \hbar \frac{\partial \psi}{\partial t} = \hat{H} \psi, whose time-independent form for stationary states yields solutions \psi(\mathbf{r}, t) = \psi(\mathbf{r}) e^{-i E t / \hbar}. The bound-state wavefunctions for hydrogen are characterized by four quantum numbers: the principal quantum number n = 1, 2, \dots determining energy; the orbital angular momentum quantum number l = 0, 1, \dots, n-1; the magnetic quantum number m_l = -l, \dots, l specifying orientation; and the spin magnetic quantum number m_s = \pm 1/2 for electron spin. These arise naturally from the separability of the wave equation in spherical coordinates, with n and l introduced by Sommerfeld in 1916 to refine Bohr's model for relativistic effects, m_l from quantization of orbital angular momentum, and m_s proposed by Uhlenbeck and Goudsmit in 1925 to explain spin-orbit coupling. A fundamental limit to simultaneous knowledge of position and momentum is given by Heisenberg's uncertainty principle, \Delta x \Delta p \geq \hbar / 2, underscoring the probabilistic nature of wavefunctions. 
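The Rydberg formula and the Bohr energy levels can be cross-checked numerically using the constants quoted above; a minimal sketch:

```python
R_H = 1.097e7   # Rydberg constant, m^-1

def hydrogen_wavelength(n1, n2):
    """Rydberg formula: 1/lambda = R (1/n1^2 - 1/n2^2) with n2 > n1."""
    return 1.0 / (R_H * (1.0 / n1**2 - 1.0 / n2**2))

def bohr_energy_eV(n):
    """Bohr energy levels of hydrogen: E_n = -13.6 / n^2 eV."""
    return -13.6 / n**2

# H-alpha, the first Balmer line (n = 3 -> n = 2)
lam_ha = hydrogen_wavelength(2, 3)
e_photon = bohr_energy_eV(3) - bohr_energy_eV(2)   # energy of the emitted photon
print(lam_ha * 1e9)  # ~656 nm: the red H-alpha line
print(e_photon)      # ~1.89 eV
```

Swapping in n1 = 1 or n1 = 3 reproduces the Lyman and Paschen series with no change to the code, which is exactly the generalization Rydberg made.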
The Zeeman effect provides experimental evidence for these quantum numbers, revealing how magnetic fields influence atomic spectra. In 1896, Pieter Zeeman observed that spectral lines split into multiple components when light is emitted in a magnetic field. In the normal Zeeman effect, a line splits into three components (an unshifted π component and shifted σ components) through the interaction of the orbital magnetic moment with the field, which shifts energies by \Delta E = \mu_B B m_l, where \mu_B = e \hbar / 2 m_e is the Bohr magneton and B the field strength. The anomalous Zeeman effect, which is more common, involves additional splitting from electron spin (energy shifts of g \mu_B B m_j, with g the Landé g-factor), as explained by vector models combining \mathbf{l} and \mathbf{s}. This effect confirmed the quantized angular momenta and enabled measurement of atomic magnetic moments, bridging classical and quantum descriptions of atomic structure.
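The size of the normal Zeeman shift ΔE = μ_B B m_l is easy to estimate; the sketch below uses the standard value of the Bohr magneton in eV/T, and the 1 T field strength is an illustrative choice, not a figure from the text.

```python
# Sketch of the normal Zeeman shift: Delta_E = mu_B * B * m_l.
# The field value below is illustrative.

MU_B_EV_PER_T = 5.788e-5   # Bohr magneton in eV/T (standard value)

def zeeman_shift(B: float, m_l: int) -> float:
    """Energy shift (eV) of a level with magnetic quantum number m_l in field B (tesla)."""
    return MU_B_EV_PER_T * B * m_l

# In a 1 T field the m_l = +1 component shifts by ~5.8e-5 eV -- tiny compared
# with eV-scale level spacings, which is why the splitting is so fine.
print(zeeman_shift(1.0, 1))
```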

Nuclear Physics

The atomic nucleus consists of protons and neutrons, collectively known as nucleons, bound together by the strong nuclear force, which overcomes the electrostatic repulsion between protons. Protons carry a positive charge equal to the elementary charge, determining the atomic number Z, while neutrons are electrically neutral; the total number of nucleons defines the mass number A = Z + N, where N is the neutron number. The stability of a nucleus depends on the balance between protons and neutrons, with light nuclei favoring roughly equal numbers and heavier ones requiring more neutrons to dilute proton repulsion. The binding energy of a nucleus quantifies the energy required to separate it into individual nucleons, calculated from the mass defect Δm between the nucleus and its free constituents via Einstein's relation BE = \Delta m c^2, where c is the speed of light. This energy arises from the conversion of a portion of the nucleons' rest mass into binding potential. The binding energy per nucleon, BE/A, peaks at approximately 8.8 MeV near iron-56 and nickel-62, indicating maximum stability; lighter nuclei have lower BE/A, while heavier ones exhibit a gradual decline, explaining tendencies toward fusion for light elements and fission for heavy ones. The curve of binding energy per nucleon thus illustrates stability trends across isotopes. Radioactive decay occurs in unstable nuclei, emitting particles or radiation to achieve a more stable configuration. Alpha decay involves ejection of a helium-4 nucleus (two protons and two neutrons), reducing A by 4 and Z by 2, common in heavy nuclei like uranium-238. Beta-minus decay transforms a neutron into a proton, electron, and antineutrino, increasing Z by 1 while conserving A, as seen in carbon-14; beta-plus decay does the reverse, converting a proton into a neutron, positron, and neutrino. Gamma decay releases high-energy photons from excited nuclear states without changing A or Z, often following alpha or beta decay to de-excite the daughter nucleus.
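The mass-defect calculation BE = Δm c² can be carried out directly in atomic mass units, where 1 u of defect corresponds to 931.494 MeV; the sketch below applies it to helium-4, whose tabulated nuclear mass (~4.001506 u) is a standard reference value.

```python
# Hedged example: nuclear binding energy from the mass defect,
# BE = Delta_m * c^2, worked in atomic mass units (u).

M_PROTON = 1.007276    # proton mass, u
M_NEUTRON = 1.008665   # neutron mass, u
U_TO_MEV = 931.494     # energy equivalent of 1 u, MeV

def binding_energy(Z: int, N: int, m_nucleus: float) -> float:
    """Binding energy (MeV) of a nucleus with Z protons, N neutrons, mass m_nucleus (u)."""
    delta_m = Z * M_PROTON + N * M_NEUTRON - m_nucleus
    return delta_m * U_TO_MEV

# Helium-4 (nuclear mass ~4.001506 u): BE ~28.3 MeV, i.e. ~7.07 MeV per nucleon,
# below the ~8.8 MeV/nucleon peak near iron.
be = binding_energy(2, 2, 4.001506)
print(round(be, 1), round(be / 4, 2))
```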
The rate of decay follows the exponential law N = N_0 e^{-\lambda t}, where N is the number of undecayed nuclei at time t, N_0 is the initial number, and λ is the decay constant; the half-life t_{1/2} = \ln(2)/\lambda is the time for half the nuclei to decay, varying from milliseconds to billions of years depending on the isotope. Nuclear fission and fusion release energy by altering nuclear binding. Fission, discovered by Otto Hahn and Fritz Strassmann in 1938 through neutron bombardment of uranium, splits a heavy nucleus like uranium-235 into lighter fragments, such as barium and krypton, plus neutrons. The Q-value, the energy released, is Q = (initial mass − final mass)c², typically about 200 MeV per event for uranium-235, far exceeding chemical reactions because of the slope of the binding-energy curve. In sustained chain reactions, emitted neutrons induce further fissions if criticality is achieved, as in nuclear reactors. Fusion combines light nuclei into heavier ones, releasing energy when the binding energy per nucleon increases. For example, the deuterium–tritium (D–T) reaction, used in experimental devices, releases about 17.6 MeV. In stars like the Sun, the primary process is the proton–proton (pp) chain, converting four protons into a helium-4 nucleus and releasing a total of approximately 26.7 MeV per helium nucleus formed, with the energy stemming from mass conversion via E = mc². Fusion requires overcoming Coulomb barriers at high temperatures, as in stellar cores or experimental reactors. Recent experiments, such as 2025 runs in which China's EAST and France's WEST tokamaks sustained plasmas for over 1,000 seconds, highlight ongoing progress toward controlled fusion. Nuclear models provide theoretical frameworks for understanding structure and behavior. The liquid drop model, proposed by Niels Bohr in 1936 and applied to fission by Bohr and John Wheeler in 1939, treats the nucleus as an incompressible fluid of nucleons, accounting for binding energy through volume, surface, Coulomb, asymmetry, and pairing terms in the semi-empirical mass formula.
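The exponential decay law quoted above can be sketched in a few lines; carbon-14's half-life of about 5,730 years is a standard tabulated value, used here purely as an example.

```python
# Minimal sketch of the decay law N = N0 * exp(-lambda * t),
# with lambda = ln(2) / t_half. Carbon-14: t_half ~5730 years.

import math

def remaining_fraction(t: float, t_half: float) -> float:
    """Fraction of nuclei undecayed after time t, given half-life t_half (same units)."""
    lam = math.log(2) / t_half   # decay constant
    return math.exp(-lam * t)

# After one half-life 50% remains; after two half-lives, 25%.
print(round(remaining_fraction(5730, 5730), 3))    # one half-life
print(round(remaining_fraction(11460, 5730), 3))   # two half-lives
```

Inverting the same relation, t = t_half · log2(N0/N), is the basis of radiocarbon dating.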
This collective approach explains macroscopic properties like fission barriers but neglects individual nucleon motion. The nuclear shell model, independently developed by Maria Goeppert Mayer and J. Hans D. Jensen in 1949, posits nucleons occupying discrete energy levels in a mean nuclear potential, akin to atomic electrons, with spin–orbit coupling explaining the magic numbers (2, 8, 20, 28, 50, 82, 126) at which shells fill completely, yielding exceptionally stable nuclei such as lead-208. These models complement each other, the shell model succeeding for low-energy structure and the liquid drop model for collective dynamics such as fission. Detectors visualize or quantify nuclear radiation for experimentation and safety. The Geiger–Müller counter, invented by Hans Geiger in 1908 and refined with Walther Müller in 1928, operates on gas ionization: radiation enters a cylindrical tube filled with a low-pressure inert gas such as argon, ionizing atoms to create electron avalanches between a central wire and the wall under high voltage (typically 400–900 V), producing detectable pulses whose count rate reflects radiation intensity. It registers alpha, beta, and gamma radiation but cannot distinguish among the types without absorbers. The cloud chamber, devised by Charles T. R. Wilson in 1911, reveals particle tracks in a supersaturated vapor (e.g., water or alcohol vapor in air); ionizing radiation creates ion trails that serve as condensation nuclei for droplets, forming visible fog lines under expansion cooling, as used in the early discoveries of positrons and muons. Wilson's design earned the 1927 Nobel Prize in Physics, though modern variants like diffusion chambers improve portability.

Pedagogy and Resources

Teaching Approaches

University physics courses traditionally rely on lecture-based instruction, where instructors deliver content through presentations and derivations, often emphasizing mathematical formalism. However, research in physics education has shown that this approach can limit student engagement and conceptual understanding, prompting a shift toward strategies that encourage student participation and problem-solving during class time. Active learning methods, such as peer instruction developed by Eric Mazur in the 1990s, involve posing conceptual questions during lectures and having students discuss answers in pairs before voting on responses; this has been demonstrated to roughly double performance on conceptual assessments compared to traditional lectures in introductory physics courses. Flipped classrooms extend this by assigning lecture videos or readings as homework, freeing class time for interactive activities like group problem-solving; meta-analyses indicate that this model yields moderate gains in student achievement, equivalent to about half a standard deviation improvement over lecture-based formats in disciplines including physics. Demonstrations and interactive simulations enhance these approaches by providing visual and hands-on exploration of abstract concepts. The PhET Interactive Simulations project, initiated in 2002 at the University of Colorado Boulder, offers free online tools for topics like electricity and magnetism, enabling students to manipulate variables and observe outcomes, which research shows supports deeper understanding and retention in university physics settings. A key emphasis in modern teaching is addressing common student misconceptions through multiple representations, including diagrams, mathematical equations, and analogies, which help bridge intuitive ideas with formal physics. Studies in physics education research highlight that integrating these representations in instruction reduces errors in problem-solving and fosters conceptual connections, particularly for topics like forces and motion where preconceptions persist.
The integration of computational tools, such as Python and MATLAB, into university physics curricula allows students to model physical systems numerically, simulating trajectories or electromagnetic fields to visualize behavior beyond analytical solutions. This approach, adopted increasingly since the 1980s, develops quantitative skills and reveals the limitations of analytic approximations, with implementations in introductory labs showing improved student proficiency in numerical methods and modeling. To promote inclusivity, teaching approaches increasingly incorporate strategies to address gender and racial gaps in enrollment and retention, such as inclusive pedagogies that value diverse perspectives and provide structured support. Systematic reviews of diversity-focused interventions in postsecondary STEM education, including physics, demonstrate that active and collaborative methods combined with instructor training can narrow achievement disparities by enhancing belonging and participation among underrepresented groups. These techniques complement laboratory experiments by reinforcing theoretical concepts through practical application. Since the COVID-19 pandemic, university physics pedagogy has further evolved to include hybrid learning models that blend in-person and online instruction, improving accessibility and flexibility. Additionally, as of 2025, AI-assisted tools, such as tutoring platforms and AI-driven simulations, are being integrated to personalize problem-solving support and provide instant feedback, enhancing engagement in introductory courses.
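The kind of numerical modelling described above can be illustrated with a short Euler integration of projectile motion with quadratic air drag, a case with no simple closed-form solution; the drag coefficient, launch speed, and time step below are arbitrary illustrative choices.

```python
# Illustrative sketch of numerical modelling in introductory physics:
# Euler integration of a projectile with quadratic air drag.
# All parameter values are arbitrary choices for demonstration.

def trajectory(v0x, v0y, drag=0.01, g=9.8, dt=0.001):
    """Integrate until the projectile returns to launch height; return horizontal range (m)."""
    x, y, vx, vy = 0.0, 0.0, v0x, v0y
    while y >= 0.0:
        speed = (vx**2 + vy**2) ** 0.5
        ax = -drag * speed * vx        # drag opposes the velocity direction
        ay = -g - drag * speed * vy
        x += vx * dt                   # explicit Euler update
        y += vy * dt
        vx += ax * dt
        vy += ay * dt
    return x

# With drag the range falls short of the vacuum result 2*v0x*v0y/g (~81.6 m here).
no_drag = trajectory(20.0, 20.0, drag=0.0)
with_drag = trajectory(20.0, 20.0, drag=0.01)
print(round(no_drag, 1), round(with_drag, 1))
```

Comparing the two runs makes the limitation of the standard vacuum formula visible, which is exactly the pedagogical point of such exercises.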

Laboratory Experiments

Laboratory experiments in university physics courses serve to reinforce theoretical concepts from lectures by providing hands-on experience with measurement techniques and error analysis, enabling students to quantify uncertainties in data and assess the precision of instruments such as vernier calipers for linear measurements and oscilloscopes for voltage and time signals. These labs emphasize systematic and random errors, propagation of uncertainties, and statistical methods to evaluate experimental results against theoretical predictions, fostering skills in data validation and instrument calibration. In mechanics laboratories, the Atwood machine experiment demonstrates Newton's second law and conservation of mechanical energy by measuring the acceleration of two masses connected over a pulley, allowing determination of the gravitational acceleration g by timing the motion and applying kinematic equations. Students typically vary the mass difference and plot acceleration versus mass difference, verifying the linear relationship and computing g with typical values around 9.8 m/s², while analyzing pulley friction as a source of systematic error. The ballistic pendulum experiment combines conservation of momentum and energy to measure projectile speed or g: a projectile embeds in a pendulum bob, raising it to a measurable height, and the initial speed is calculated from the height rise, yielding results that typically agree with independent measurements at the 90–95% level. Electromagnetism labs include verification of Ohm's law, where students measure current through resistors using ammeters and voltmeters across varying voltages, plotting V versus I to confirm linear proportionality with slopes equal to the resistance, typically achieving agreement within 5% for ohmic devices like carbon resistors. RC circuit experiments focus on timing charge–discharge processes, using oscilloscopes to observe the exponential voltage decay and measure the time constant τ = RC, often comparing experimental values (e.g., 1–10 ms for standard components) to calculated ones, highlighting the effect of the instruments' internal resistance.
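The Atwood-machine analysis above reduces to a one-line formula: since a = g(m₁ − m₂)/(m₁ + m₂) for an ideal (frictionless, massless) pulley, a measured acceleration can be inverted to recover g. The sketch below uses made-up sample data, not measurements from the text.

```python
# Sketch of the Atwood-machine lab analysis: invert
# a = g * (m1 - m2) / (m1 + m2) to infer g from a measured acceleration.
# The masses and acceleration below are made-up sample data.

def g_from_atwood(m1: float, m2: float, a: float) -> float:
    """Infer g (m/s^2) from masses m1 > m2 (kg) and measured acceleration a (m/s^2)."""
    return a * (m1 + m2) / (m1 - m2)

# Sample run: m1 = 0.110 kg, m2 = 0.090 kg, measured a = 0.98 m/s^2.
print(round(g_from_atwood(0.110, 0.090, 0.98), 2))
```

In practice pulley friction and pulley inertia bias the measured a low, which is why the lab treats them as systematic errors.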
Optics laboratories employ diffraction gratings to measure the wavelengths of light, such as from mercury lamps or helium–neon lasers, by observing interference maxima on a screen and applying the grating equation d sin θ = mλ, where d is the slit spacing, θ the diffraction angle, m the order, and λ the wavelength; students typically resolve lines such as the 546 nm green mercury emission with 1–2% accuracy. In modern physics labs, the Thomson method determines the electron charge-to-mass ratio e/m by accelerating electrons in a cathode-ray tube through a known voltage and deflecting them with perpendicular magnetic and electric fields, balancing forces to find e/m ≈ 1.76 × 10¹¹ C/kg from the beam's path radii, replicating J. J. Thomson's 1897 measurement within about 10%. The Millikan oil-drop experiment measures the elementary charge e by balancing gravitational and electric forces on charged oil droplets between parallel plates, tracking terminal velocities with and without an applied voltage; quantized charges in multiples of 1.60 × 10⁻¹⁹ C are observed, with modern setups achieving precision better than 1%. Safety standards in university physics labs mandate protective eyewear, proper handling of electrical apparatus to prevent shocks, and appropriate precautions for any chemical use, with all accidents reported immediately to ensure compliance with institutional protocols. Data reporting requires clear documentation of raw measurements, uncertainty estimates, and graphical analyses in laboratory notebooks, adhering to standards such as quoting uncertainties and discussing discrepancies to promote reproducibility.
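The grating calculation is a direct application of d sin θ = mλ; the sketch below assumes a 600 lines/mm grating (so d = 1/600 mm) and a sample first-order angle near where the 546 nm mercury line would appear — both illustrative numbers, not data from the text.

```python
# Sketch of the diffraction-grating analysis: lambda = d * sin(theta) / m.
# Grating density and angle below are illustrative sample values.

import math

def wavelength_nm(lines_per_mm: float, theta_deg: float, order: int) -> float:
    """Wavelength (nm) from grating density, diffraction angle (deg), and order m."""
    d_nm = 1e6 / lines_per_mm   # slit spacing in nanometres (1 mm = 1e6 nm)
    return d_nm * math.sin(math.radians(theta_deg)) / order

# For a 600 lines/mm grating, the first-order green mercury line (~546 nm)
# appears near 19.1 degrees.
print(round(wavelength_nm(600, 19.1, 1)))
```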

Common Textbooks

One of the most enduring textbooks in university physics is Fundamentals of Physics by David Halliday, Robert Resnick, and Jearl Walker, a lineage dating to Halliday and Resnick's 1960 text and now in its 12th edition (2021). This comprehensive resource emphasizes problem-solving skills, conceptual understanding, and real-world applications through detailed examples, interactive simulations, and a wide array of practice problems with solutions. Successive editions have incorporated contemporary research topics and applications while integrating tools like videos and online assessments via WileyPLUS. University Physics with Modern Physics by Hugh D. Young and Roger A. Freedman, descended from the 1949 work by Francis Sears and Mark Zemansky, has been refined through 15 editions, the latest in 2020. Known for clear, accessible explanations and a strong emphasis on vectors—introduced early in Chapter 1 and reinforced throughout—it covers mechanics, thermodynamics, electromagnetism, and modern physics with a focus on systematic problem-solving strategies. The text supports student learning through guided practice and AI-enhanced tools in its digital formats. Another widely adopted option is Physics for Scientists and Engineers by Raymond A. Serway and John W. Jewett, first published in 1982 and currently in its 11th edition (2025). Its modular structure allows flexible chapter sequencing, making it suitable for customized curricula, while its conceptual questions and context-rich exercises build understanding beyond rote calculation. The book integrates worked examples with online homework via WebAssign for targeted feedback. For open-source alternatives, OpenStax's University Physics (Volumes 1–3), released in 2016 under a Creative Commons license, provides free digital access to high-quality, peer-reviewed content equivalent to traditional texts. This resource spans classical and modern physics, including mechanics, electromagnetism, and optics, with downloadable PDFs and web views to promote accessibility.
In 21st-century editions of these textbooks, updates have increasingly incorporated modern topics such as quantum applications, alongside multimedia elements such as simulations and interactive e-books, enhancing engagement without altering core pedagogical foundations.

References

  1. [1]
    Preface - University Physics Volume 1 | OpenStax
    Sep 19, 2016 · Welcome to University Physics, an OpenStax resource. This textbook was written to increase student access to high-quality learning materials ...
  2. [2]
    Undergraduate Physics Curriculum | Department of Physics
    The Physics curriculum includes a two-year introductory sequence, courses in Mechanics, Electromagnetism, Quantum Mechanics, Thermodynamics, Optics, and ...
  3. [3]
    University Physics Volume 1 - OpenStax
    Study conceptual physics online free by downloading OpenStax's University Physics Volume 1 textbook and using our accompanying online resources.Introduction · 1.1 The Scope and Scale of... · 7.1 Work · 2.1 Scalars and Vectors
  4. [4]
    Choosing the Right Physics Course - Lafayette College
    science majors, the calculus-based introductory sequence Phys 131 – Phys 133 is appropriate. These courses give an introduction to physics, including ...
  5. [5]
    Undergraduate Course Outlines - University of Chicago Physics
    The Course Outlines and syllabi on this web page represent the best descriptions of some of the undergraduate courses that are available at this time.
  6. [6]
    Undergraduate Requirements - MIT Physics
    The Flex track requires: 8.03, 8.04 or 8.041, 8.044, 18.03 (Differential Equations); 8.21 Physics of Energy or 8.223 Classical Mechanics II (choose one); 8.033 ...
  7. [7]
  8. [8]
    Courses for Physics and Astronomy | University of Alabama
    This course is usually offered in the studio format (integrated lectures and labs). Degree credit can only be awarded for one of the following: PH 101, PH ...
  9. [9]
    Conceptual Problem Solving in Physics - ScienceDirect.com
    The central thesis of the chapter is that teaching learners to use CPS provides both a deeper understanding of the domain and can even help in solving problems.
  10. [10]
  11. [11]
    Problem-Solving Skills in Introductory Physics | CIRCLE - WashU
    In that study, we found that the Active Physics course better cultivates students' conceptual understanding, with students achieving significantly better gains ...
  12. [12]
    the development of German physics in the nineteenth century: part two
    Aug 6, 2025 · The University of Berlin opened officially in 1810 and, within a few decades, had become central to the education and training system of the ...<|control11|><|separator|>
  13. [13]
    Electromagnetism and Electrodynamics in the 19th Century
    ### Summary of Michael Faraday and James Clerk Maxwell's Influence on Electromagnetism in University Physics Curricula
  14. [14]
    A brief history of physics education in the United States
    May 1, 2015 · ... physics teaching was undergoing a rapid and wide-ranging transformation. Physics (known originally as “natural philosophy”) had been taught ...
  15. [15]
    History of US Physics Education - Master Bibliography - Google Sites
    CONSOLIDATED BIBLIOGRAPHY OF ALL REFERENCES. (in chronological order, by date of original publication). Supported in part by NSF DUE 1256333.
  16. [16]
    Yale University. Physics Department
    The Physics Department at Yale University began in the early 1800s as part of the Department of Philosophy and the Arts. Today the department offers studies in ...Missing: 1847 | Show results with:1847
  17. [17]
    [PDF] One Hundred and Fifty Years of Teaching Calculus
    You need to understand that during this period, if calculus was being taught, it was being taught to all students. The curriculum had no options. Furthermore, ...
  18. [18]
    1.6: The 20th Century Revolution in Physics
    Nov 21, 2020 · The two greatest achievements of modern physics occurred in the beginning of the 20th century. The first was Einstein's development of the Theory of Relativity.
  19. [19]
    The impact of Sputnik on education - Physics Today
    Oct 14, 2007 · What the GI bill had been to college education, the NDEA was to graduate study, with an emphasis on science and engineering..... © 2007 ...
  20. [20]
    How Sputnik changed U.S. education - Harvard Gazette
    Oct 11, 2007 · The post-Sputnik reforms were put in the hands of scientists, much to the dismay of some educators and concerned citizens who had previously had ...
  21. [21]
    [PDF] The Undergraduate Introductory Physics Textbook and the Future
    May 10, 2012 · In 1960 David Halliday and Robert Resnick released the gorilla introductory physics text of the twentieth century, Physics for Students of ...
  22. [22]
    The benefit of computational modelling in physics teaching
    There were two major course structure changes to introductory physics in the US in the 1980s that included computational modelling. The M.U.P.P.E.T. project ...
  23. [23]
    INDIA: Science, technology and development - University World News
    Feb 1, 2009 · A key element in this growth story has been the base of science and technology that India created in a planned manner and, once this was done, ...
  24. [24]
    [PDF] MORRIS LOW* Science and civil society in Japan - UQ eSpace
    physicists were likely to take up positions as university professors and rely on government support for their activities.11 This required Japanese physicists to.
  25. [25]
    Week 1: Kinematics | Classical Mechanics - MIT OpenCourseWare
    Lesson 1: 1D Kinematics - Position and Velocity, Lesson 2: 1D Kinematics - Acceleration, Lesson 3: 2D Kinematics - Position, Velocity, and Acceleration.Derivatives in Kinematics · Week 1 Introduction · 1.1 Coordinate Systems and...Missing: sources | Show results with:sources
  26. [26]
    [PDF] Chapter 3 Motion in Two and Three Dimensions
    For the purposes of doing physics, it is important to consider reference frames which move at constant velocity with respect to one another; for these cases ...
  27. [27]
    [PDF] Chapter 6 Circular Motion - MIT OpenCourseWare
    We shall begin by describing the kinematics of circular motion, the position, velocity, and acceleration, as a special case of two-dimensional motion. We will ...Missing: university | Show results with:university
  28. [28]
    42. 7.1 Work: The Scientific Definition - University of Iowa Pressbooks
    Work is the transfer of energy by a force acting on an object as it is displaced. · The work · The SI unit for work and energy is the joule (J), where · The work ...
  29. [29]
    7.3 Work-Energy Theorem – General Physics Using Calculus I
    The work-energy theorem states that the net work done on a particle equals the change in its kinetic energy.
  30. [30]
    7.2 Kinetic Energy – University Physics Volume 1 - UCF Pressbooks
    The kinetic energy of a particle is the product of one-half its mass and the square of its speed, for non-relativistic speeds. The kinetic energy of a system is ...
  31. [31]
    44. 7.3 Gravitational Potential Energy - University of Iowa Pressbooks
    \boldsymbol{W=Fd=mgh}. We define this to be the gravitational potential energy \boldsymbol{(\textbf{PE}_{\textbf{g} put into (or ...
  32. [32]
    Energy in a spring system - Physics
    Spring Potential Energy. The potential energy of a spring is given by: U = ½ kx2. Energy in a spring system. A block connected to a horizontal spring sits ...
  33. [33]
    8.3: Conservation of Energy - Maricopa Open Digital Press
    However, the conservation of mechanical energy, in one of the forms in (Figure) or (Figure), is a fundamental law of physics and applies to any system.
  34. [34]
    7.5 Nonconservative Forces – College Physics chapters 1-17
    A nonconservative force is one for which work depends on the path taken. Friction is a good example of a nonconservative force.
  35. [35]
    7.7 Power – College Physics - University of Iowa Pressbooks
    Power is the rate at which work is done, or in equation form, for the average power \boldsymbol{P} · The SI unit for power is the watt (W), where \boldsymbol{1\ ...
  36. [36]
    8.1 Linear Momentum, Force, and Impulse - Physics | OpenStax
    Mar 26, 2020 · Momentum, Impulse, and the Impulse-Momentum Theorem. Linear momentum is the product of a system's mass and its velocity. In equation form, ...
  37. [37]
    9.3 Conservation of Linear Momentum - University Physics Volume 1
    Sep 19, 2016 · Define a system whose momentum is conserved; Mathematically express conservation of momentum for a given system; Calculate an unknown quantity ...Missing: source | Show results with:source
  38. [38]
    8.5: Relative Velocity and the Coefficient of Restitution
    Sep 20, 2023 · We can quantify how inelastic a collision is by the ratio of the final to the initial magnitude of the relative velocity. This ratio is denoted by e and is ...
  39. [39]
    10.3: The center of mass - Physics LibreTexts
    Mar 28, 2024 · The center of mass is that position in a system that is described by Newton's Second Law when it is applied to the system as a whole. The center ...
  40. [40]
    Ideal Rocket Equation | Glenn Research Center - NASA
    Nov 21, 2023 · Ideal Rocket Equation · change in rocket momentum=M(u+du)−Mu=Mdu · change in exhaust momentum=dm(u−v)−udm=−vdm · change in system momentum=Mdu−vdm.Missing: dv = source
  41. [41]
    Rotational Motion | PHYS 1433 - City Tech OpenLab
    With torque and moment of inertia defined we can now right Newton's second law for rotating objects. \sum \vec{\tau} = I \vec{\omega}. Here is an example of ...
  42. [42]
    Week 10: Rotational Motion | Classical Mechanics | Physics
    Week 10: Rotational Motion. Lesson 28: Motion of a Rigid Body. Lesson 29: Moment of Inertia. Lesson 30: Torque. Lesson 31: Rotational Dynamics.Week 11: Angular Momentum · 29.5 Moment of Inertia of a... · Week 10 IntroductionMissing: second | Show results with:second
  43. [43]
    10.7 Newton's Second Law for Rotation – University Physics Volume 1
    In this section, we introduce the rotational equivalent to Newton's second law of motion and apply it to rigid bodies with fixed-axis rotation.
  44. [44]
    [PDF] Chapter 2 Rolling Motion; Angular Momentum
    (b) Rotational inertia is related to net torque and angular acceleration by way of τ = Iα. It is true that in this problem the rotating object is also ...Missing: university | Show results with:university
  45. [45]
    Philosophiae naturalis principia mathematica : Newton, Isaac, 1642 ...
    Jan 19, 2017 · This is the first edition of Newton's Principia, in which he elucidates the universal laws of gravitation and motion that underlay the phenomena described by ...
  46. [46]
    XXI. Experiments to determine the density of the earth - Journals
    The apparatus is very simple; it consists of a wooden arm, 6 feet long, made so as to unite great strength with little weight.
  47. [47]
    Astronomia nova aitiologetos [romanized] : sev physica coelestis ...
    Jul 24, 2012 · Astronomia nova aitiologetos [romanized] : sev physica coelestis, tradita commentariis de motibvs stellæ Martis, ex observationibus G. V. Tychonis Brahe.
  48. [48]
    Ioannis Keppleri Harmonices mundi libri V ... - Internet Archive
    May 15, 2012 · Kepler, Johannes, 1571-1630; Ptolemy, 2nd cent; Fludd, Robert ... FULL TEXT download · download 1 file · HOCR download · download 1 file.
  49. [49]
    13.1 Temperature – College Physics - University of Iowa Pressbooks
    Absolute zero is the temperature at which there is no molecular motion. There are three main temperature scales: Celsius, Fahrenheit, and Kelvin. Temperatures ...
  50. [50]
    SI Units – Temperature | NIST
    The temperature 0 K is commonly referred to as "absolute zero." On the widely used Celsius temperature scale, water freezes at 0 °C and boils at about 100 °C.
  51. [51]
    Temperature Scales - 16.04.07: Thermodynamics - Yale University
    To convert from a temperature in Kelvin to degrees Celsius, simply subtract 273.15 and to convert a Celsius temperature to a Kelvin temperature just add 273.15.
  52. [52]
    Zeroth Law - Thermal Equilibrium | Glenn Research Center - NASA
    May 2, 2024 · The zeroth law of thermodynamics is an observation. When two objects are separately in thermodynamic equilibrium with a third object, they are in equilibrium ...
  53. [53]
    16. TEMPERATURE
    If the thermometer is also in thermal equilibrium with a second body than the two bodies are also in thermal equilibrium. This is called the zeroth law of ...
  54. [54]
    1.1 Temperature and Thermal Equilibrium - UCF Pressbooks
    It is through the concepts of thermal equilibrium and the zeroth law of thermodynamics that we can say that a thermometer measures the temperature of something ...
  55. [55]
    Principles of Heating and Cooling - Department of Energy
    Heat is transferred to and from objects -- such as you and your home -- through three processes: conduction, radiation, and convection.
  56. [56]
    Conduction - UCAR Center for Science Education
    Conduction is one of the three main ways that heat energy moves from place to place. The other two ways heat moves around are radiation and convection.
  57. [57]
    The Transfer of Heat Energy - NOAA
    Jan 2, 2024 · Convection. Convection is the transfer of heat energy in a fluid. In the kitchen, this type of heating is most commonly seen as the circulation ...
  58. [58]
    Mechanisms of Heat Loss or Transfer | EGEE 102 - Dutton Institute
    Radiation is the transfer of heat through electromagnetic waves through space. Unlike convection or conduction, where energy from gases, liquids, and solids is ...
  59. [59]
    14.2 Temperature Change and Heat Capacity – College Physics
    \boldsymbol{Q=mc\Delta{T}}, where \boldsymbol{c} is the specific heat of the material. This relationship can also be considered as the definition of specific ...
  60. [60]
    [PDF] Heat and 1st Law of Thermodynamics
    Heat Capacity and Specific heat. The heat capacity C of a substance is ... Q = mc ΔT. Example. How much heat is required to raise the temperature of 100 ...
  61. [61]
    Calorimetry
    Then use Equation 5.42 to determine the heat capacity of the calorimeter (C bomb) from q comb and ΔT. ... Equation 5.41: ΔH rxn = q rxn = −q calorimeter = −mC sΔT.
  62. [62]
    coefficient of linear expansion. - Physics
    Basic features: The length change is proportional to the temperature change. Here α is the coefficient of linear expansion.
  63. [63]
    [PDF] Temperature, Expansion, Ideal Gas Law - Galileo and Einstein
    Thermal Expansion Notation. • The coefficient of linear expansion, denoted by α, is defined by Δℓ/ℓ. 0. = αΔT. • α = 1.2 x 10-5 for iron, 0.9 x 10-5 for glass.
  64. [64]
    [PDF] LECTURE NOTES ON THERMODYNAMICS
    May 17, 2025 · These are lecture notes for AME 20231, Thermodynamics, a sophomore-level undergraduate course taught in the Department of Aerospace and ...
  65. [65]
    The Laws of Thermodynamics and Limits on Engine Efficiency
    The first law of thermodynamics: total energy, including heat energy, is always conserved. He explicitly assumed that heat was just the kinetic energy of the ...Missing: history | Show results with:history
  66. [66]
    Thermodynamic Foundations – Introduction to Aerospace Flight ...
    The historical roots of this law can be traced back to the nineteenth century, when James Joule established the equivalence of heat and mechanical work. Rudolf ...
  67. [67]
    Thermodynamic processes
    An isothermal process occurs at constant temperature. Since the internal energy of a gas is only a function of its temperature, ΔU = 0 for an isothermal process ...
  68. [68]
    The First Law of Thermodynamics and Some Simple Processes
    ... thermodynamic processes: isobaric, isochoric, isothermal, and adiabatic. Compute the total work done during a cyclical thermodynamic process using a PV diagram.
  69. [69]
    Ch20; Heat and the First Law of Thermodynamics - General Physics II
    When a gas expands it does work on its surroundings. That work is equal to the area under the curve on a PV diagram which describes that expansion.
  70. [70]
    15.2 The First Law of Thermodynamics and Some Simple Processes
    Among them are the isobaric, isochoric, isothermal and adiabatic processes. These processes differ from one another based on how they affect pressure, volume, ...
  71. [71]
    [PDF] Chapter 19 The First Law of Thermodynamics
    Isobaric – no change in pressure (∆p = 0). 3. Page 4. 3. Isochoric – no change in volume (∆V = 0). 4. Adiabatic – no exchange of thermal energy (Q = 0).
  72. [72]
    3.3 The Carnot Cycle - MIT
    The efficiency can be 100% only if the temperature at which the heat is rejected is zero. The heat and work transfers to and from the system are shown ...
  73. [73]
    44 The Laws of Thermodynamics - Feynman Lectures - Caltech
    In fact, the science of thermodynamics began with an analysis, by the great engineer Sadi Carnot, of the problem of how to build the best and most efficient ...
  74. [74]
    4.4 Statements of the Second Law of Thermodynamics - OpenStax
    Oct 6, 2016 · It is impossible to convert the heat from a single source into work without any other effect. This is known as the Kelvin statement of the ...
  75. [75]
    Lord Kelvin | On the Dynamical Theory of Heat
    Now, according to the dynamical theory of heat, the temperature of a substance can only be raised by working upon it in some way so as to produce increased ...
  76. [76]
    4.6 Entropy - University Physics Volume 2 | OpenStax
    Oct 6, 2016 · Second Law of Thermodynamics (Entropy statement): The entropy of a closed system and the entire universe never decreases. We can show that this ...
  77. [77]
    [PDF] The mechanical theory of heat - University of Notre Dame
    During the ten years which have elapsed since the first volume of papers appeared, many fresh investigations into the Mechanical Theory of Heat have been.
  78. [78]
    4.1 Reversible and Irreversible Processes - University Physics Volume 2 | OpenStax
    Summary of Reversible and Irreversible Processes, Second Law, and Entropy
  79. [79]
    4.7 Entropy on a Microscopic Scale - University Physics Volume 2
    Oct 6, 2016 · The second law of thermodynamics makes clear that the entropy of the universe never decreases during any thermodynamic process. For any other ...
  80. [80]
    Translation of Ludwig Boltzmann's Paper “On the Relationship ...
    Translation of the seminal 1877 paper by Ludwig Boltzmann which for the first time established the probabilistic basis of entropy.
  81. [81]
    4.2 Heat Engines - University Physics Volume 2 | OpenStax
    Summary of Carnot Efficiency Formula and Heat Engines
  82. [82]
    39 The Kinetic Theory of Gases - Feynman Lectures
    So for a monatomic gas, the kinetic energy is the total energy. In general, we are going to call U the total energy (it is sometimes called the total internal ...
  83. [83]
    1.4: The Kinetic Molecular Theory of Ideal Gases
    Jul 25, 2019 · The various gas laws can be derived from the assumptions of the KMT, which have led chemists to believe that the assumptions of the theory ...
  84. [84]
    F00-notes.14
    Kinetic Theory of Gases - Derivation of the Ideal Gas Law. In the preceding section we discussed the derivation of the Ideal Gas Law from an experimental ...
  85. [85]
    2.4 Distribution of Molecular Speeds - University Physics Volume 2
    Oct 6, 2016 · That is, the probability that a molecule's speed is between v and v + dv is f(v)dv. We can now quote Maxwell's result, although the ...
  86. [86]
    13.4 Kinetic Theory: Atomic and Molecular Explanation of Pressure ...
    Jul 13, 2022 · We gain a better understanding of pressure and temperature from the kinetic theory of gases, which assumes that atoms and molecules are in ...
  87. [87]
    2.3 Heat Capacity and Equipartition of Energy - University Physics ...
    Oct 6, 2016 · In the case of an ideal gas, determine the number d of degrees of freedom from the number of atoms in the gas molecule and use it to calculate C ...
  88. [88]
    27.6: Mean Free Path - Chemistry LibreTexts
    Mar 8, 2025 · This page discusses particle interactions in gases, focusing on collision energy, cross-section, collision frequency, and mean free path.
  89. [89]
    15.1 Simple Harmonic Motion – General Physics Using Calculus I
    Simple harmonic motion (SHM) is oscillatory motion where the restoring force is proportional to the displacement and acts in the opposite direction.
  90. [90]
    Simple Harmonic Motion - HyperPhysics
    Simple harmonic motion is the motion of a mass on a spring with a linear elastic restoring force, and it is sinusoidal in time.
  91. [91]
    The Pendulum - Galileo
    Pendulums of Arbitrary Shape: for small angles the period T = 2π√(I/(mgl)), and for the simple pendulum we considered first I = ml², giving the previous result.
  92. [92]
    [PDF] Driven Harmonic Motion - UCSB Physics
    Jul 13, 2015 · Figure 1: Resonance behaviour in the driven harmonic oscillator, for the case ω = 1, and β2 = 0.1 (blue curve), β2 = 0.5 (orange curve), and ...
  93. [93]
    Further Understanding for Lissajous Figures | The Physics Teacher
    Jan 1, 2021 · Firstly, the three-dimensional space curve is projected onto the xoy plane, and the Lissajous figure is obtained, whose equations of motion are ...
  94. [94]
    [PDF] Mechanical waves - Duke Physics
    Mechanical waves are created by the interaction between neighboring particles in the medium. Energy and momentum are transferred from one particle to the next ...
  95. [95]
    [PDF] Chapter 15 Mechanical Waves 1 Types of Mechanical Waves
    There are basically two kinds of waves–transverse and longitudinal waves. Waves propagate through the medium at a definite speed called the wave speed.
  96. [96]
    [PDF] Chapter 12: Mechanical Waves and Sound - Laulima!
    Transverse – the wave disturbance is perpendicular to the direction of propagation. • Longitudinal – the wave disturbance is parallel to the direction of ...
  97. [97]
    [PDF] Mechanical Waves - NJIT
    A wave on a string is a type of mechanical wave. • The hand moves the string up and then returns, producing a transverse wave that moves to the right.
  98. [98]
    [PDF] Chapter 15 Mechanical Waves
    Types of mechanical waves. • A mechanical wave is a disturbance traveling through a medium. • Figure below illustrates transverse waves and longitudinal waves.
  99. [99]
    [PDF] layton@physics.ucla.edu W1 4. Waves have characteristic ...
    Students know how to identify transverse and longitudinal waves in mechanical ... Polarization only occurs with transverse waves. If a transverse wave is ...
  100. [100]
    [PDF] MITOCW | 3.2 Waves
    Mechanical waves can be transverse and longitudinal, polarized, or have transverse and longitudinal components. Electromagnetic waves in free space are ...
  101. [101]
    The Feynman Lectures on Physics Vol. I Ch. 51: Waves - Caltech
    In all cases, the shear wave speed is less than the speed of longitudinal waves. The shear waves are somewhat more analogous, so far as their polarizations are ...
  102. [102]
    [PDF] Vibrations and Waves
    In this chapter we will focus on transverse waves. 1. Page 2. Wave equation. The following wave equation describes a transverse wave oscillating in the y ...
  103. [103]
    [PDF] Lecture 07: Wave Equation and Standing Waves - The Black Hole
    In these notes we derive the wave equation for a string by considering the vertical displacement of a chain of coupled oscillators. In finding the general ...
  104. [104]
    16.3 Wave Speed on a Stretched String – University Physics Volume 1
    The speed of the wave can be found from the linear density and the tension: v = √(F_T/μ). From the equation v = √(F_T/μ), if the linear density is increased by a ...
  105. [105]
    [PDF] Wave Motion 1 - Duke Physics
    The important points in these formulas: • The speed of a wave in a string is proportional to the square root of the tension and inversely to the square root of ...
  106. [106]
    16.1 Traveling Waves – General Physics Using Calculus I
    Wave velocity and wavelength are related to the wave's frequency and period by v = λ T = λ f . Mechanical waves are disturbances that move through a medium and ...
  107. [107]
    [PDF] Superposition and Standing Waves
    Shown below are six standing wave systems in strings. These systems vary in frequency of oscillation, tension in the strings, and number of nodes. The ...
  108. [108]
    [PDF] Chapter 16 - Superposition and Standing Waves - UMD Physics
    Sound waves in a pipe. • The open end of a pipe will be a pressure node – the pressure will be constant. • A closed end of the pipe will be a pressure antinode – ...
  109. [109]
    Standing waves - Oregon State University
    Standing waves are created when waves of equal length and amplitude interfere. The position of the mth node is a function of the wavelength.
  110. [110]
  111. [111]
    [PDF] y1 y2 y1+ y2 t t t - UNL Physics and Astronomy
    The waves that result from this are called standing waves. If I move my hand faster up and down, you see that I can change the number of nodes and antinodes.
  112. [112]
    [PDF] Wave Motion 3 - Duke Physics
    A closed end of a pipe is an antinode for pressure variation. A pipe open at both ends is thus like a string fixed at both ends. The same formulas apply for the ...
  113. [113]
    16.4 Energy and Power of a Wave – University Physics Volume 1
    The definition of intensity is valid for any energy in transit, including that carried by waves. The SI unit for intensity is watts per square meter (W/m2).
  114. [114]
    [PDF] Lecture 10: Energy and Power in Waves
    The potential energy depends on how stretched the string is. Of course, having a string with some tension T automatically has some potential energy due to ...
  115. [115]
    17.2 Speed of Sound – University Physics Volume 1
    The equation for the speed of sound in air v = √(γRT/M) can be simplified to give the equation for the speed of sound in air as a function of absolute ...
  116. [116]
    47 Sound. The wave equation - Feynman Lectures - Caltech
    We may summarize this description of a wave by saying simply that f(x − ct) = f(x + Δx − c(t + Δt)) when Δx = cΔt ...
  117. [117]
    17.3 Sound Intensity and Sound Level – College Physics chapters 1 ...
    Intensity is the same for a sound wave as was defined for all waves; it is I = P/A, where P is the power crossing area A. · Sound intensity level in units of ...
  118. [118]
    The range of human hearing - Physics
    Humans are sensitive to a particular range of frequencies, typically from 20 Hz to 20000 Hz. Whether you can hear a sound also depends on its intensity - we're ...
  119. [119]
    17.7 The Doppler Effect – University Physics Volume 1
    Use the following equation: f_o = [f_s(v ± v_o)/v](v/(v ∓ v_s)). The quantity in the square brackets is the Doppler-shifted frequency due to a moving ...
  120. [120]
    17.8 Shock Waves – University Physics Volume 1 - UCF Pressbooks
    The Mach number is the velocity of a source divided by the speed of sound, M = v_s/v. · When a sound source moves faster than the speed of sound, a shock wave ...
  121. [121]
    Sound Interference & Resonance: Standing Waves in Air Columns
    In air columns, the lowest-frequency resonance is called the fundamental, whereas all higher resonant frequencies are called overtones. Collectively, they are ...
  122. [122]
    [PDF] Charles-Augustin Coulomb First Memoir on Electricity and Magnetism
    In a memoir presented to the Academy, in 1784, I have determined from experiments the laws governing the torsional resistance of a filament of metal and I.
  123. [123]
    June 1785: Coulomb Measures the Electric Force
    Jun 1, 2016 · Charles Augustin Coulomb (top) used a calibrated torsion balance (bottom) to measure the force between electric charges.
  124. [124]
    18.3 Electric Field - Physics | OpenStax
    Mar 26, 2020 · Michael Faraday, an English physicist of the nineteenth century, proposed the concept of an electric field. If you know the electric field, then ...
  125. [125]
    Electric Field Lines - The Physics Classroom
    The concept of the electric field was first introduced by 19th century physicist Michael Faraday. It was Faraday's perception that the pattern of lines ...
  126. [126]
    5 Application of Gauss' Law - The Feynman Lectures on Physics
    Using Gauss' law, it follows that the magnitude of the field is given by E = ρr/(3ε₀) (r < R). You can see that this formula gives the proper result for r = ...
  127. [127]
    Gauss's Law for Electric Fields - EM GeoSci
    It states that the electric flux through any closed surface is proportional to the total electric charge enclosed by this surface.
  128. [128]
    Electric Fields and Conductors - The Physics Classroom
    Any closed, conducting surface can serve as a Faraday's cage, shielding whatever it surrounds from the potentially damaging effects of electric fields. This ...
  129. [129]
    February 12, 1935: Patent granted for Van de Graaff generator
    Feb 1, 2011 · A patent for the Van de Graaff generator was awarded in February, 1935. The device won the admiration of none other than Nikola Tesla.
  130. [130]
    Electrostatic generator - US1991236A - Google Patents
    R. J. VAN DE GRAAFF, ELECTROSTATIC GENERATOR, Filed Dec. 16, 1931, granted 1935 ... An object of this invention is to provide an electrostatic generator ...
  131. [131]
    Van de Graaff Generator - Magnet Academy - National MagLab
    The Van de Graaf generator creates a buildup of static electricity around a metal sphere. Electric charge in the form of electrons builds until the voltage is ...
  132. [132]
    [PDF] Chapter 3 Electric Potential - MIT
    In the presence of an electric field E, in analogy to the gravitational field g, we define the electric potential difference between two points as ...
  133. [133]
    Electrostatic Potential - Ximera - The Ohio State University
    Electric potential is defined as the work we need to do to move the charge divided by the amount of charge.
  134. [134]
    [PDF] PHY481 - Lecture 7: The electrostatic potential and potential energy
    Physical definition. The electric potential energy (U) is the potential energy due to the electrostatic force. As always only differences.
  135. [135]
    Electric Potential Energy - Richard Fitzpatrick
    A force which has the special property that the work done in overcoming it in order to move a body between two points in space is independent of the path taken ...
  136. [136]
    [PDF] Chapter 4 The Electric Potential
    An electrostatic force of 3.9×10⁻¹⁵ N acts on an electron placed anywhere between the two plates. (Neglect ...
  137. [137]
    19.5 Capacitors and Dielectrics – College Physics - UCF Pressbooks
    Capacitance of a Parallel Plate Capacitor: C = ε₀A/d, where A is the area of one plate in square meters and d is the distance between the plates in meters.
  138. [138]
    Capacitor
    The capacitance of a parallel plate capacitor with two plates of area A separated by a distance d and no dielectric material between the plates is C = ε0A/d.
  139. [139]
    The Dielectric Constant - Physics
    Completely filling the space between capacitor plates with a dielectric increases the capacitance by a factor of the dielectric constant: C = κ Co, where Co is ...
  140. [140]
    CAPACITORS AND DIELECTRICS - Home Page of Frank LH Wolfs
    Since the final electric field E can never exceed the free electric field Efree, the dielectric constant κ must be larger than 1. Since κ is larger ...
  141. [141]
    Energy in a Capacitor - Physics
    If ΔV is the final potential difference on the capacitor, and Q is the magnitude of the charge on each plate, the energy stored in the capacitor is: U = 1/2 QΔ ...
  142. [142]
    8.3 Energy Stored in a Capacitor – University Physics Volume 2
    The total work W needed to charge a capacitor is the electrical potential energy U C stored in it, or U C = W . When the charge is expressed in coulombs, ...
  143. [143]
    7.5 Equipotential Surfaces and Conductors - UCF Pressbooks
    This means that equipotential surfaces around a point charge are spheres of constant radius, as shown earlier, with well-defined locations. Example. Potential ...
  144. [144]
    19.4 Equipotential Lines – College Physics
    An equipotential line is a line along which the electric potential is constant. An equipotential surface is a three-dimensional version of equipotential lines.
  145. [145]
    9.1 Electrical Current - University Physics Volume 2 | OpenStax
    Oct 6, 2016 · Electrical current is defined to be the rate at which charge flows. When there is a large current present, such as that used to run a ...
  146. [146]
    Microscopic View of Ohm's Law - HyperPhysics
    Ohm's Law, where current is proportional to voltage, microscopically involves an electric field causing a drift velocity in free electrons. This drift is small ...
  147. [147]
    Georg Ohm - Scientist of the Day - Linda Hall Library
    Mar 17, 2025 · He formulated this as a law: the voltage divided by the current is equal to a quantity that we now call “resistance.” This law, universally ...
  148. [148]
    Ampere: History | NIST
    May 15, 2018 · The story of the ampere began when a Danish physicist named Hans Christian Ørsted discovered that magnetism and electricity were two aspects of the same thing.
  149. [149]
    [PDF] Michael Faraday· Discovery of Electromagnetic Induction
    Michael Faraday began his studies on electricity in 1821, i.e. a year after Oersted's discovery of magnetic effects of electric currents.
  150. [150]
  151. [151]
    Electrical papers - Internet Archive
    in Section ii. of "Electromagnetic Induction." These developments are contained in the second half of that article (Art. XXXV., vol. ii.) and in the article ...
  152. [152]
    [PDF] derivation of the basic laws of geometric optics
    Feb 6, 2017 · Hence we have the Law of Refraction, also known as Snell's law: n₁ sin(θᵢ) = n₂ sin(θₜ). This important law forms the basis for ...
  153. [153]
    Thin lenses
    The lens equation and the mirror equation are written as 1/xo + 1/xi = 1/f. But the sign conventions for xo, xi, and f are different for lenses and mirrors.
  154. [154]
    [PDF] Chapter 23. Geometric Optics
    reflection equals the angle of incidence – the law of reflection, θ₁ = θ′₁. Snell's Law of Refraction. Consider “light” propagating in one ...
  155. [155]
  156. [156]
  157. [157]
    Thin-Lens Equation:Cartesian Convention - HyperPhysics
    For a thin lens, the lens power P is the sum of the surface powers. For thicker lenses, Gullstrand's equation can be used to get the equivalent power. To common ...
  158. [158]
    Total Internal Reflection - Physics
    A fiber optic cable is an excellent application of total internal reflection. An optical fiber is simply a long strand of glass, usually surrounded by a ...
  159. [159]
    Anatomy of the Microscope - Optical Aberrations
    Sep 11, 2018 · Spherical Aberration - These artifacts occur when light waves passing through the periphery of a lens are not brought into focus with those ...
  160. [160]
    Aberrations
    Spherical aberrations dominate when a wide beam, which is parallel to the optic axis, is focused by a converging lens with spherical surfaces. The focal length ...
  161. [161]
    Optical instruments
    If the eye is relaxed for distant viewing, the telescope simply produces an angular magnification equal to the ratio of the focal length of the objective to the ...
  162. [162]
    Optical instruments - Physics
    Aug 3, 2000 · The telescope is designed so the real, inverted image created by the first lens is just a little closer to the second lens than its focal length ...
  163. [163]
    Huygens' Principle: Derivation & Wave Elimination
    Oct 12, 2021 · Huygens' Principle (1678) implies that every point on a wave front serves as a source of secondary wavelets, and the new wave front is the tangential surface ...
  164. [164]
  165. [165]
    4.2 Intensity in Single-Slit Diffraction - University Physics Volume 3
    Sep 29, 2016 · sin(ϕ/2) = E/(2r), where E is the amplitude of the resultant field. Solving the second equation for E and then substituting r from the first ...
  166. [166]
    II. The Bakerian Lecture. On the theory of light and colours - Journals
    The object of the present dissertation is not so much to propose any opinions which are absolutely new, as to refer some theories, which have been already ...
  167. [167]
    [PDF] ON THE ELECTRODYNAMICS OF MOVING BODIES - Fourmilab
    This edition of Einstein's On the Electrodynamics of Moving Bodies is based on the English translation of his original 1905 German-language paper. (published as ...
  168. [168]
    [PDF] 11.1 Principles of special relativity 11.2 Time dilation - MIT
    Mar 15, 2005 · Length and time dilation are specific manifestations of a general consequence of special relativity: what we consider to be “time” and “space” ...
  169. [169]
    [PDF] ON THE ELECTRODYNAMICS OF MOVING BODIES
    This edition of Einstein's On the Electrodynamics of Moving Bodies is based on the English translation of his original 1905 German-language paper. (published ...
  170. [170]
    [PDF] On the Law of Distribution of Energy in the Normal Spectrum
    Beckmann, show that the law of energy distribution in the normal spectrum, first derived by W. Wien from molecular-kinetic considerations and later by me ...
  171. [171]
    October 1900: Planck's Formula for Black Body Radiation
    It was Max Planck's profound insight into thermodynamics culled from his work on black body radiation that set the stage for the revolution to come.
  172. [172]
    [PDF] Einstein's Proposal of the Photon Concept-a Translation
    The American Journal of Physics is publishing the following translation in recognition of the sixtieth anniversary of the appearance of the original work.
  173. [173]
    [PDF] Einstein's First Paper on Quanta - The Information Philosopher
    As Einstein pointed out in his paper, this theory of the photoelectric effect had definite experimental consequences that had not yet been studied. The maximum ...
  174. [174]
    [PDF] A Quantum Theory of the Scattering of X-Rays by Light Elements
    Compton, Bull. Nat. Research Council, No. 20, p. 10 (Oct., 1922) ... been able to show that only a small part ...
  175. [175]
    Photoelectric Solar Power Revisited - ScienceDirect.com
    Dec 20, 2017 · Here we present a model photoelectric solar power device that does not require charge transport through vacuum, opening the possibility of lower-cost ...
  176. [176]
    Nuclear Structure and Stability – Chemistry - UH Pressbooks
    The binding energy per nucleon is largest for nuclides with mass number of approximately 56. A graph is shown where the x-axis is labeled “binding energy per ...
  177. [177]
    [PDF] 12.748 The Basic Rules, Nuclear Stability, Radioactive Decay and ...
    The binding energy curve peaks in the Fe, Ni region: these are the most stable nuclei. Neutrons, protons and isotopes: nuclei consist of a mix of neutrons and ...
  178. [178]
    [PDF] chapter 13 – nuclear structure
    Jan 3, 2025 · The nuclear force requires a balance between the number of protons and neutrons ... The nuclear binding energy curve can be represented by a ...
  179. [179]
    Nuclear Binding Energy - HyperPhysics
    The binding energy curve is obtained by dividing the total nuclear binding energy by the number of nucleons. The fact that there is a peak in the binding energy ...
  180. [180]
    Radioactive Decay - Nuclear Chemistry
    The half-life for the decay of a radioactive nuclide is the length of time it takes for exactly half of the nuclei in the sample to decay. In our discussion ...
  181. [181]
    CH103 - CHAPTER 3: Radioactivity and Nuclear Chemistry
    Each radioactive nuclide has a characteristic, constant half-life (t1/2), the time required for half of the atoms in a sample to decay.
  182. [182]
  183. [183]
    Manhattan Project: The Discovery of Fission, 1938-1939 - OSTI.GOV
    It was December 1938 when the radiochemists Otto Hahn (above, with Lise Meitner) and Fritz Strassmann, while bombarding elements with neutrons in their Berlin ...
  184. [184]
    Q-value - Energetics of Nuclear Reactions | nuclear-power.com
    The Q-value of the reaction is defined as the difference between the sum of the masses of the initial reactants and the sum of the masses of the final products ...
  185. [185]
    Fission and Fusion - EdTech Books
    Explain nuclear fission and fusion processes; Relate the concepts of critical mass and nuclear chain reactions; Summarize basic requirements for nuclear ...
  186. [186]
    Nuclear energy
    E = mc². Whenever a system loses energy, it loses mass. Let us compare the energy released per kg of fuel for various energy-releasing reactions. Terrestrial ...
  187. [187]
    Theoretical Models
    Aug 9, 2000 · The Liquid Drop Model treats the nucleus as a liquid. Nuclear properties, such as the binding energy, are described in terms of volume energy, ...
  188. [188]
    June 1911: Invention of the Geiger Counter
    Jun 1, 2012 · It used a Crookes tube as one electrode, with a thin wire running through the middle of the tube as a second electrode.
  189. [189]
    This Month in Physics History | American Physical Society
    By 1910, Wilson was using his cloud chamber device to detect charged particles, since they would leave a trail of ions–and water droplets–as they passed ...
  190. [190]
    Detectors
    Aug 9, 2000 · Geiger Counter: The detector most common to the public is the Geiger-Mueller counter, commonly called the Geiger counter. It uses a gas-filled ...
  191. [191]
    Peer Instruction: Ten years of experience and results - AIP Publishing
    Sep 1, 2001 · We report data from ten years of teaching with Peer Instruction (PI) in the calculus- and algebra-based introductory physics courses for nonmajors.
  192. [192]
    The flipped classroom: A meta-analysis of effects on student ...
    Overall, flipping a classroom has a positive, moderate effect on student performance. Specifically, the moderate effect reflects half a standard deviation ...
  193. [193]
    PhET: Interactive Simulations for Teaching and Learning Physics
    Jan 1, 2006 · The Physics Education Technology (PhET) project creates useful simulations for teaching and learning physics and makes them freely available from the PhET ...
  194. [194]
    [PDF] The Use of Multiple Representations in Undergraduate Physics ...
    Abstract. Using multiple representations (MR) such as graphs, symbols, diagrams, and text, is central to teaching and learning in physics classrooms.
  195. [195]
    Integrating numerical modeling into an introductory physics laboratory
    Jul 1, 2021 · In this article, we document the process of redesigning a calculus-based introductory physics laboratory course to incorporate computational modeling.
  196. [196]
    Inclusion in practice: a systematic review of diversity-focused STEM ...
    Jan 6, 2023 · This systematic review investigates the literature on diversity-focused “STEM intervention programs” (SIPs) at the postsecondary level.
  197. [197]
    [PDF] Experiment 1 Measurement, Random Error & Error analysis
    This experiment aims to learn to measure lengths using rulers, vernier and micrometer calipers, and to analyze types of error and statistical methods.
  198. [198]
    Error Analysis - Physics LibreTexts
    Jun 2, 2019 · Error analysis in physics is how experimentalists determine errors in measurements, using mathematical and statistical procedures, sometimes ...
  199. [199]
    Measurements and Error Analysis - WebAssign
    The process of evaluating the uncertainty associated with a measurement result is often called uncertainty analysis or error analysis.
  200. [200]
    Experiment of The Month | Millersville University
    The Atwood's machine is traditionally used to measure the gravitational field strength (aka free fall acceleration). This experiment uses the machine and ...
  201. [201]
    PHY214L Atwood Machine Lab Report Analysis and Results - Studocu
    Purpose: The purpose of this experiment is to demonstrate Newton's 2nd law, which is that an object will only accelerate when there is a net force acting on the ...
  202. [202]
    [PDF] Lab 9: Ballistic Pendulum
    By measuring the maximum vertical height that the projectile plus catcher swing up to, one can apply energy and momentum conservation to determine the initial ...
  203. [203]
    [PDF] Experiment 2.03: Ohm's Law - NSUWorks
    Ohm's law is used to determine the resistance of several resistors. Formulas: V = IR; R_eq = R₁ + R₂ + ...
  204. [204]
    [PDF] Experiment 21 RC Time Constants
    The objective of this experiment is to measure the time constants for two RC circuits and to determine the effect of a voltmeter on the circuit. Theory: A ...
  205. [205]
    223 Physics Lab: The RC Circuit - Clemson University
    This laboratory experiment is designed to investigate the behavior of capacitor responses of RC circuits, the basis for most electronic timing circuits.
  206. [206]
    [PDF] Physics 102 Lab 8: Measuring wavelengths with a diffraction grating
    Diffraction gratings split light into wavelengths. By measuring the angle of the light, and using the grating equation, the wavelength can be measured.
  207. [207]
    Laser Wavelength - Experiment of The Month | Millersville University
    The experiment uses a diffraction grating to measure a laser's wavelength. Students measure distances on a screen to calculate the wavelength.
  208. [208]
    [PDF] Measurement of Charge-to-Mass (e/m) Ratio for the Electron
    J.J. Thomson first measured the charge-to-mass ratio of the fundamental particle of charge in a cathode ray tube in 1897. A cathode ray tube basically ...
  209. [209]
    [PDF] WPI Physics Dept. Intermediate Lab 2651 Thomson's experiment
    Jan 19, 2015 · The objective of the experiment is to measure the ratio of charge/mass of an electron, in the spirit of the classic experiment of J.J. ...
  210. [210]
    [PDF] The Millikan Oil-Drop Experiment - University of Toronto
    This experiment first described by [Millikan, 1913] is based on the fact that different forces act on an electrically charged oil drop moving in the homogeneous ...
  211. [211]
    [PDF] Millikan's Oil Drop Experiment
    This was achieved by measuring the charge of oil drops in a known electric field. If all electrons have the same charge, then the measured charge on the oil ...
  212. [212]
    [PDF] Guidelines for Chemical Laboratory Safety in Academic Institutions
    While these safety education guidelines are focused on students who complete a bachelor's degree in chemistry, they also cover those students who take chemistry.
  213. [213]
    [PDF] Safety Rules for Physics Laboratories
    The following guidelines and policies are designed to protect students from injuries and exposure to hazardous chemicals in the academic laboratories. The ...
  214. [214]
    [PDF] Measurement and Uncertainty Analysis Guide - UNC Physics
    Random uncertainties are statistical fluctuations (in either direction) in the measured data. These uncertainties may have their origin in the measuring device, ...
  215. [215]
    [PDF] (revised 12/27/08) MILLIKAN OIL-DROP EXPERIMENT
    The charge of the electron is measured using the classic technique of Millikan. Measurements are made of the rise and fall times of oil drops illuminated ...
  216. [216]
    Fundamentals of Physics, Extended, 12th Edition | Wiley
    Fundamentals of Physics, 12th Edition guides students through the process of learning how to effectively read scientific material.
  217. [217]
    University Physics with Modern Physics
    Summary of University Physics with Modern Physics, 15th Edition
  218. [218]