A formula is a concise, symbolic expression or conventional statement that represents a general rule, principle, relationship, or method, often used in mathematics, science, and other fields to communicate complex ideas efficiently.[1] In its broadest sense, it can denote a fixed form of words for rituals or a prescribed plan for achieving a result, but its most prominent applications are in technical domains where precision is essential.[1]
In mathematics, a formula is a general fact, rule, or principle expressed using symbols, variables, and operations to describe relationships or perform calculations, such as deriving values for geometric shapes or solving equations.[1] These expressions enable the representation of abstract concepts in a standardized way, facilitating computations and proofs across algebra, calculus, and other branches; for instance, they form the basis of formalized languages where syntactically correct statements are built over sets of variables and logical structures.[2] Mathematical formulas are foundational to scientific modeling, appearing in everything from basic arithmetic to advanced theoretical physics.[3]
In chemistry, a formula specifies the composition and structure of a substance, using elemental symbols and subscripts to indicate the types and numbers of atoms involved, such as H₂O for water, which denotes two hydrogen atoms bonded to one oxygen atom.[4] This notation serves as a precise communication tool for molecules, compounds, and reactions, distinguishing between molecular formulas (showing atom counts) and structural formulas (depicting arrangements).[5] Chemical formulas are critical for laboratory work, industrial processes, and understanding material properties.[6]
Beyond science, the term extends to other contexts, including infant formula, a manufactured food product designed as a nutritional substitute for breast milk, providing essential nutrients for infants up to 12 months old when human milk is unavailable.[7] In motorsport, "formula" refers to a regulatory framework outlining technical specifications for vehicles in competitive racing series, ensuring safety, fairness, and innovation, as seen in the Formule Internationale rules governing Grand Prix events since 1938.[8] These diverse applications highlight the term's versatility in encapsulating structured approaches across disciplines.
In Mathematics
In mathematics, an arithmetic or algebraic formula is a mathematical expression that defines a relationship between quantities using symbols, operations, and variables, often expressed with an equals sign to facilitate calculations or solve for unknowns.[9] These formulas form the basis for performing computations in arithmetic and manipulations in algebra, enabling the representation of general rules for specific operations.[10]
Arithmetic formulas typically involve basic operations on numerical quantities or variables to compute measures or values in everyday applications. For instance, the area of a rectangle is given by A = l \times w, where l is the length and w is the width, allowing direct calculation of space enclosed by rectangular boundaries.[11] Similarly, the simple interest formula I = P \times r \times t determines the interest accrued on a principal amount P at rate r over time t, widely used in financial contexts to assess monetary growth without compounding.[12]
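As an illustration (not drawn from the cited sources), these two formulas translate directly into a few lines of Python; the function names and sample values below are chosen for the example:

def rectangle_area(length, width):
    # A = l * w
    return length * width

def simple_interest(principal, rate, time):
    # I = P * r * t, with the rate expressed as a decimal fraction per period
    return principal * rate * time

print(rectangle_area(4, 3))            # 12
print(simple_interest(1000, 0.05, 2))  # 100.0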
Algebraic formulas extend these concepts through identities and expansions that hold true for all values of the variables involved, aiding in simplification and factorization. A fundamental identity is the difference of squares, a^2 - b^2 = (a - b)(a + b), which factors a binomial difference into a product of linear terms, essential for solving equations and polynomial manipulations.[13] Another key algebraic formula is the binomial theorem, which expands powers of a binomial as (a + b)^n = \sum_{k=0}^{n} \binom{n}{k} a^{n-k} b^k, where \binom{n}{k} denotes the binomial coefficient, providing a systematic way to generate terms in expansions for higher-degree polynomials.
A prominent algebraic formula is the quadratic formula, which solves the equation ax^2 + bx + c = 0 for x, yielding x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}.[14] This formula is derived by completing the square on the general quadratic equation. Start with ax^2 + bx + c = 0, divide through by a to get x^2 + \frac{b}{a}x + \frac{c}{a} = 0, then move the constant term: x^2 + \frac{b}{a}x = -\frac{c}{a}. Add \left( \frac{b}{2a} \right)^2 to both sides to form a perfect square: x^2 + \frac{b}{a}x + \left( \frac{b}{2a} \right)^2 = -\frac{c}{a} + \left( \frac{b}{2a} \right)^2, simplifying to \left( x + \frac{b}{2a} \right)^2 = \frac{b^2 - 4ac}{4a^2}. Taking square roots gives x + \frac{b}{2a} = \pm \frac{\sqrt{b^2 - 4ac}}{2a}, and solving for x produces the quadratic formula.[14]
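As a worked example of applying the formula, consider x^2 - 5x + 6 = 0, where a = 1, b = -5, and c = 6: x = \frac{5 \pm \sqrt{(-5)^2 - 4 \cdot 1 \cdot 6}}{2 \cdot 1} = \frac{5 \pm \sqrt{1}}{2}, giving the roots x = 3 and x = 2.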
The development of arithmetic and algebraic formulas traces back to ancient civilizations, with Babylonian mathematicians around 2000 BCE employing rhetorical algebra—word-based descriptions of equations—on clay tablets to solve practical problems like land measurement, marking the earliest systematic use of algebraic relations without symbols.[15] This evolved through syncopated notation in medieval Islamic scholarship, but modern symbolic algebraic formulas emerged in the 16th century with François Viète, who introduced consistent vowel symbols for unknowns and consonant symbols for knowns in works like In artem analyticem isagoge (1591), enabling general formulas and paving the way for symbolic manipulation.[16]
In mathematical logic, formulas are well-formed expressions in a formal language that represent propositions or statements capable of being true or false. These formulas serve as the syntactic backbone for deductive reasoning and model-theoretic interpretations, distinguishing logical structures from informal natural language. Logical formulas build upon earlier algebraic traditions by introducing symbolic representations for inference and quantification, enabling precise analysis of validity and entailment.[17]
Propositional logic formulas consist of atomic formulas, which are basic propositional variables such as p, q, or r, denoting simple statements without internal structure. Compound formulas are recursively constructed by applying logical connectives to atomic or other compound formulas, including negation (\neg), conjunction (\wedge), disjunction (\vee), and implication (\rightarrow). For instance, p \wedge q represents the statement "p and q," while \neg (p \rightarrow q) expresses the negation of the implication. These connectives ensure formulas are well-formed strings, adhering to syntactic rules that require proper bracketing to avoid ambiguity.[18][17]
In predicate logic, also known as first-order logic, formulas extend propositional structures by incorporating predicates, terms, and quantifiers to handle relations and variables. Atomic formulas here take the form P(t_1, \dots, t_n), where P is an n-ary predicate symbol and t_i are terms (constants, variables, or functions). Compound formulas combine these with the same connectives as in propositional logic. Quantifiers introduce generality: the universal quantifier \forall binds variables to express "for all," as in \forall x (P(x) \rightarrow Q(x)), meaning "for every x, if P(x) then Q(x)"; the existential quantifier \exists asserts "there exists," as in \exists x P(x). These elements allow formulas to capture complex inferences about objects and their properties in a domain.[19][17]
The syntax of first-order logic formulas is defined recursively: terms include variables and constants; atomic formulas involve predicates applied to terms or equality between terms; compound formulas result from negation, binary connectives, or quantifiers applied to well-formed subformulas. Parentheses ensure unique parsing. Semantically, an interpretation assigns a non-empty domain of discourse to variables and interprets predicates as relations over that domain, determining truth values recursively: a formula is true in a model under a variable assignment if atomic parts hold and connectives/quantifiers satisfy their conditions—for quantifiers, \forall v \theta is true if \theta holds for every reassignment of v in the domain, while \exists v \theta requires at least one such assignment. Free variables in a formula are those not bound by quantifiers, allowing open formulas like P(x) to function as predicates; bound variables, such as x in \forall x P(x), are scoped within the quantifier and do not affect the formula's truth outside that scope. This distinction is crucial for substitution and proof theory.[17][19]
A pivotal historical milestone in the development of logical formulas was Gottlob Frege's Begriffsschrift (1879), which introduced the first formal system of predicate logic using a two-dimensional notation. Frege employed a concavity symbol to denote universal quantification, effectively binding variables in generality statements, and strokes for connectives like implication and negation, shifting from Aristotelian syllogisms to a function-argument analysis of propositions. This innovation enabled the expression of arbitrary relational inferences, laying the foundation for modern first-order logic despite its initial two-dimensional complexity, which was later simplified by linear notations from Peano and Russell.[20][19]
Parsing and evaluation of logical formulas involve syntactic analysis to verify well-formedness and semantic assessment to determine truth. In propositional logic, truth tables provide a complete method for evaluation by enumerating all possible truth assignments to atomic propositions and computing compound values via connectives. For the conjunction p \wedge q, the truth table is:

p   q   p \wedge q
T   T   T
T   F   F
F   T   F
F   F   T

This table shows p \wedge q is true only when both p and q are true, illustrating the connective's truth function exhaustively for two variables (yielding 2^2 = 4 rows). Such tables confirm tautologies, contradictions, or contingencies, underpinning automated theorem proving and circuit design. In first-order logic, evaluation extends to models, but propositional cases remain foundational for subformula analysis.[18][17]
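The same enumeration is easy to mechanize; the following is a minimal Python sketch (the helper name and the lambda used for the connective are illustrative, not part of any standard library):

from itertools import product

def truth_table(connective, names=("p", "q")):
    # Enumerate all 2^n truth assignments and evaluate the compound formula
    for values in product([True, False], repeat=len(names)):
        assignment = ", ".join(f"{n}={v}" for n, v in zip(names, values))
        print(f"{assignment} -> {connective(*values)}")

# Conjunction p AND q: true only when both inputs are true
truth_table(lambda p, q: p and q)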
Geometric formulas provide essential tools for calculating properties of shapes in Euclidean space. Fundamental among these is the Pythagorean theorem, which states that in a right-angled triangle, the square of the hypotenuse equals the sum of the squares of the other two sides: a^2 + b^2 = c^2. This relation, proven geometrically in Euclid's Elements (Book I, Proposition 47, c. 300 BCE), underpins many spatial measurements by relating distances in planar figures. Euclid derived it using area comparisons of squares constructed on the triangle's sides, without algebraic notation, emphasizing congruence and parallel lines.[21]
For curved shapes, the area of a circle is given by A = \pi r^2, where r is the radius and \pi approximates 3.14159. This formula emerged from ancient approximations; Archimedes (c. 287–212 BCE) bounded \pi between \frac{223}{71} and \frac{22}{7} in Measurement of a Circle, enabling precise area computations via inscribed and circumscribed polygons. Similarly, the volume of a sphere is V = \frac{4}{3} \pi r^3. Archimedes established this in On the Sphere and Cylinder by comparing the sphere to a circumscribed cylinder, showing the sphere's volume as two-thirds that of the cylinder through mechanical balancing and exhaustion methods.[22]
Analytic geometry extends these ideas to coordinate systems, where points are represented as ordered pairs or triples. The distance d between two points (x_1, y_1) and (x_2, y_2) in the plane derives directly from the Pythagorean theorem: d = \sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2}. This formula applies the theorem to the horizontal and vertical segments forming the line between points, as detailed in early coordinate treatments by René Descartes in La Géométrie (1637). The equation of a straight line in slope-intercept form is y = mx + c, where m is the slope (rise over run) and c the y-intercept; this linear relation facilitates graphing and intersections in Cartesian planes.[23]
Calculus introduces analytic formulas for rates of change and accumulation. The power rule for differentiation states that the derivative of x^n is n x^{n-1}, for any real exponent n. Isaac Newton developed this in his fluxion method (c. 1665–1666), treating variables as flowing quantities whose "fluxions" capture instantaneous rates, as outlined in his unpublished manuscript De Methodis Serierum et Fluxionum. The corresponding integral formula is \int x^n \, dx = \frac{x^{n+1}}{n+1} + C for n \neq -1, reversing the differentiation process to compute areas under curves. These rules, independently formalized by Gottfried Wilhelm Leibniz in the 1670s using infinitesimals, enable analysis of geometric objects' varying properties, such as arc lengths or surface areas.[24]
In coordinate systems, these formulas intersect: the distance metric supports vector analysis in geometry, while calculus derivatives describe tangents to curves, linking static shapes to dynamic processes. For instance, deriving the distance formula via Pythagoras allows embedding Euclidean geometry in algebraic frameworks, foundational to modern analytic applications.[25]
In Natural Sciences
Chemical formulas are symbolic representations used in chemistry to denote the composition of chemical compounds and the stoichiometry of reactions, providing a concise way to express the types and numbers of atoms involved. These notations emerged as essential tools for communicating chemical structures and transformations accurately.[26]
Molecular formulas indicate the exact number of atoms of each element in a molecule, using chemical symbols followed by subscripts for quantities greater than one; for example, the molecular formula for water is H₂O, signifying two hydrogen atoms and one oxygen atom. In contrast, empirical formulas represent the simplest whole-number ratio of atoms in a compound, without specifying the actual number of atoms; for instance, the empirical formula for benzene is CH, reflecting a 1:1 ratio of carbon to hydrogen, even though the molecular formula is C₆H₆. These formulas are fundamental for identifying compounds and calculating molar masses.[26]
Structural formulas go beyond mere composition by illustrating the arrangement of atoms and the bonds between them, offering insight into molecular geometry. In full structural notation, bonds are drawn as lines connecting atomic symbols, so ethanol appears as a chain that shows its carbon-carbon and carbon-oxygen bonds explicitly. Condensed structural formulas simplify this by grouping atoms and using parentheses for branches, such as CH₃CH₂OH for ethanol, which implies the single bonds without drawing them. These representations are crucial for understanding reactivity and isomerism in organic compounds.[26]
For ionic compounds, formulas denote the ratio of cations to anions required for electrical neutrality, typically without subscripts if the charges balance at 1:1, as in sodium chloride (NaCl), where one sodium ion (Na⁺) pairs with one chloride ion (Cl⁻). More complex ionic formulas include subscripts, such as calcium chloride (CaCl₂), indicating one calcium ion to two chloride ions. These formulas adhere to the principle that the total positive charge equals the total negative charge in the compound.[27]
Balancing chemical equations ensures conservation of mass by adjusting stoichiometric coefficients—the numerical multipliers placed before formulas—to equalize the number of each type of atom on both reactant and product sides. The process involves: (1) writing the unbalanced equation, (2) identifying elements and counting atoms, (3) starting with the most complex substance and adjusting coefficients iteratively for each element while avoiding fractions initially, and (4) verifying the balance. For the combustion of hydrogen, the unbalanced equation H₂ + O₂ → H₂O becomes balanced as 2H₂ + O₂ → 2H₂O, where coefficients of 2, 1, and 2 maintain two oxygen atoms and four hydrogen atoms throughout.[28]
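As a minimal sketch of the verification step (4), the snippet below simply counts atoms on each side of 2H₂ + O₂ → 2H₂O; the data layout and helper name are invented for this illustration:

from collections import Counter

def count_atoms(side):
    # side: list of (coefficient, {element: atoms per formula unit}) pairs
    totals = Counter()
    for coefficient, formula in side:
        for element, n in formula.items():
            totals[element] += coefficient * n
    return totals

reactants = [(2, {"H": 2}), (1, {"O": 2})]   # 2 H2 + O2
products = [(2, {"H": 2, "O": 1})]           # 2 H2O
print(count_atoms(reactants) == count_atoms(products))  # True: the equation balances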
Stoichiometric coefficients in balanced equations quantify the relative proportions of reactants and products in terms of moles, enabling predictions of reaction yields and limiting reagents; for example, in 2H₂ + O₂ → 2H₂O, the coefficient 2 for H₂ indicates that two moles of hydrogen react with one mole of oxygen to produce two moles of water. These coefficients are the smallest whole numbers that satisfy the balance, forming the basis for stoichiometric calculations.[29]
The evolution of chemical formulas traces back to the late 18th century, when Antoine Lavoisier, along with Guyton de Morveau, Berthollet, and Fourcroy, introduced a systematic nomenclature in their 1787 publication Méthode de nomenclature chimique, replacing alchemical terms with names based on composition to reflect chemical reality. This laid the groundwork for modern formulas by emphasizing elemental symbols and proportions. The International Union of Pure and Applied Chemistry (IUPAC), established in 1919, formalized and expanded these standards through commissions starting in 1923, issuing recommendations for organic and inorganic nomenclature that standardized formulas globally, with ongoing updates to accommodate new discoveries.[30][31]
Physical formulas encompass the mathematical equations that describe the fundamental laws governing natural phenomena in physics, from motion and forces to energy and quantum behavior. These formulas provide quantitative predictions for physical systems, building on mathematical concepts such as algebra and calculus. They have evolved through empirical observations and theoretical advancements, enabling precise modeling of the universe's dynamics.
The foundations of physical formulas trace back to the 17th century with Galileo's contributions to kinematics, where he established the principles of uniformly accelerated motion through experiments like inclined planes, laying the groundwork for later mechanical laws. Galileo's work in "Two New Sciences" (1638) demonstrated that objects in free fall accelerate at a constant rate independent of mass, challenging Aristotelian views and introducing the concept of inertia as resistance to motion changes. This kinematic framework influenced Isaac Newton's synthesis in the late 17th century.[32]
In classical mechanics, Newton's second law relates force, mass, and acceleration, originally stated in his "Philosophiæ Naturalis Principia Mathematica" (1687) as the change in motion proportional to the motive force impressed and occurring along the line of action. The modern vector form, F = ma, where F is the net force, m is mass, and a is acceleration, emerges from interpreting this law under constant mass, as clarified in subsequent analyses. This law quantifies how forces alter an object's momentum, fundamental to predicting trajectories in everyday and celestial mechanics.[33]
Newton's law of universal gravitation, also from the Principia, states that every particle attracts every other with a force proportional to the product of their masses and inversely proportional to the square of the distance between them:
F = G \frac{m_1 m_2}{r^2}
where G is the gravitational constant, m_1 and m_2 are the masses, and r is the separation distance. This inverse-square law unified terrestrial and celestial motion, explaining planetary orbits as derived from Kepler's laws.[34]
Kinematic equations describe motion under constant acceleration, derived from Newton's laws and calculus integration. The first equation, v = u + at, arises from integrating constant acceleration a over time t, where v is final velocity and u is initial velocity. A key derivation for the third equation, v^2 = u^2 + 2as, where s is displacement, follows from the work-energy theorem or the area under a velocity-time graph: starting with v = u + at and s = (u + v)t/2, substitute t = (v - u)/a to eliminate t, yielding v^2 - u^2 = 2as. This equation relates the change in velocity directly to displacement, paralleling the work-energy theorem for constant forces.[35][36]
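Written out explicitly, the substitution proceeds as

s = \frac{(u + v)t}{2}, \quad t = \frac{v - u}{a} \;\Rightarrow\; s = \frac{(u + v)(v - u)}{2a} = \frac{v^2 - u^2}{2a} \;\Rightarrow\; v^2 = u^2 + 2as.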
In thermodynamics, the ideal gas law, PV = nRT, relates pressure P, volume V, amount n, temperature T, and gas constant R for dilute gases behaving ideally. Developed incrementally—Boyle's law (PV = constant, 1662), Charles's law (V ∝ T, 1787), and Avogadro's hypothesis (1811)—it was unified by Clapeyron in 1834 as PV = RT for one mole, extended to nRT later. This equation models gas expansion and heat engines, assuming negligible molecular interactions.[37]
Electromagnetism features Coulomb's law, quantifying the electrostatic force between point charges:
F = k \frac{q_1 q_2}{r^2}
where k is Coulomb's constant, q_1 and q_2 are the charges, and r is the distance between them; the force is attractive for opposite charges and repulsive for like charges. Derived experimentally by Charles-Augustin de Coulomb in 1785 using a torsion balance, it parallels gravitation but with a sign dependence on the charges, and it is foundational to field theory.
Albert Einstein's special relativity (1905) introduced mass-energy equivalence: E = mc^2, where E is rest energy, m is rest mass, and c is the speed of light, showing mass as a form of energy convertible under relativistic conditions. It was derived from thought experiments on light and inertia, resolving electromagnetic inconsistencies with Newtonian mechanics. General relativity (1915) extended this to gravity as spacetime curvature, with field equations describing massive bodies' influence on geometry.[38][39]
In quantum mechanics, the Schrödinger equation governs wave function evolution:
i \hbar \frac{\partial \psi}{\partial t} = \hat{H} \psi
where i is the imaginary unit, ℏ is the reduced Planck constant, ψ is the wave function, t is time, and Ĥ is the Hamiltonian operator. Postulated by Erwin Schrödinger in 1926 based on de Broglie's wave-particle duality and Hamiltonian mechanics, it predicts probabilistic outcomes for microscopic systems, unifying wave and matrix mechanics.[40]
In biology, formulas play a crucial role in modeling genetic inheritance, population dynamics, biochemical reactions, and ecological diversity, providing quantitative frameworks for understanding living systems. The foundations trace back to Gregor Mendel's pioneering experiments on pea plants in 1866, where he established the principles of inheritance through ratios such as 3:1 for dominant to recessive traits in monohybrid crosses, laying the groundwork for modern genetics.[41] These empirical observations evolved into formal mathematical expressions, particularly with the advent of population genetics in the early 20th century. By the mid-1900s, formulas extended to biochemical kinetics and ecological metrics, and in contemporary bioinformatics, they incorporate computational notations for analyzing genetic sequences and evolutionary processes, such as probabilistic models for allele frequencies in large-scale genomic data.
A seminal genetic formula is the Hardy-Weinberg equilibrium, which describes the stable allele and genotype frequencies in a large, randomly mating population under no evolutionary influences. Introduced independently by G.H. Hardy and Wilhelm Weinberg in 1908, it states that for a gene with two alleles whose frequencies are p (dominant allele) and q (recessive allele), with p + q = 1, the genotype frequencies are given by:
p^2 + 2pq + q^2 = 1
Here, p² represents homozygous dominant, 2pq heterozygous, and q² homozygous recessive individuals.[42] This equilibrium serves as a null model to detect deviations due to selection, mutation, migration, or drift, and remains foundational in genetic studies, with applications in testing for evolutionary forces in human populations.[43]
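As a numerical illustration (the allele frequency here is arbitrary, not from the cited studies), the genotype proportions follow directly from p:

def hardy_weinberg(p):
    # Two-allele locus with p + q = 1
    q = 1 - p
    return {"AA": p**2, "Aa": 2 * p * q, "aa": q**2}

print(hardy_weinberg(0.7))  # approximately {'AA': 0.49, 'Aa': 0.42, 'aa': 0.09}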
Population growth in biology is often modeled using the logistic equation, developed by Pierre-François Verhulst in 1838 to account for environmental carrying capacity limiting exponential increase. The differential equation is:
\frac{dN}{dt} = rN \left(1 - \frac{N}{K}\right)
where N is population size, r is the intrinsic growth rate, and K is the carrying capacity.[44] This S-shaped curve predicts initial exponential growth slowing as resources deplete, influencing models in ecology and epidemiology, such as predicting bacterial colony expansion or wildlife population limits. Verhulst's work, based on demographic data from 19th-century Europe, highlighted how density-dependent factors stabilize populations around K.[45]
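A minimal numerical sketch of this behavior, stepping the differential equation forward with Euler's method (the parameter values, step size, and function name are chosen for illustration):

def logistic_growth(n0, r, k, dt=0.1, steps=400):
    # Integrate dN/dt = r * N * (1 - N/K) with forward Euler steps
    n = n0
    for _ in range(steps):
        n += r * n * (1 - n / k) * dt
    return n

# Starting well below K, the population rises toward the carrying capacity
print(logistic_growth(n0=10, r=0.5, k=1000))  # close to 1000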
Biochemical processes, particularly enzyme kinetics, rely on the Michaelis-Menten equation, formulated by Leonor Michaelis and Maud Menten in 1913 through experiments on invertase activity. It models reaction velocity v as:
v = \frac{V_{\max} [S]}{K_m + [S]}
where [S] is substrate concentration, V_max is maximum velocity, and K_m is the Michaelis constant (substrate concentration at half V_max).[46] This hyperbolic relationship assumes steady-state enzyme-substrate binding and underpins drug design and metabolic pathway analysis, with K_m indicating enzyme-substrate affinity. The equation's derivation from earlier quasi-steady-state assumptions revolutionized quantitative biochemistry.[47]
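A brief sketch of the saturation behavior (parameter values chosen arbitrarily for the example):

def michaelis_menten(substrate, v_max, k_m):
    # v = V_max * [S] / (K_m + [S])
    return v_max * substrate / (k_m + substrate)

# At [S] = K_m the velocity is half of V_max
print(michaelis_menten(substrate=2.0, v_max=10.0, k_m=2.0))  # 5.0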
Ecological formulas quantify community structure, notably the Shannon diversity index, adapted from Claude Shannon's 1948 information theory to measure species diversity in ecosystems. The index H' is calculated as:
H' = -\sum_{i=1}^{S} p_i \ln p_i
where S is the number of species, and p_i is the proportion of individuals in species i.[48] Higher H' values indicate greater diversity, reflecting evenness and richness; it has been widely applied since the 1960s in conservation biology to assess habitat quality, such as in forest or marine communities. Shannon's entropy concept, originally for communication efficiency, was extended to biology to capture uncertainty in species proportions.[49]
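A short sketch computing H' from raw abundance counts (the counts are invented for the example):

import math

def shannon_index(counts):
    # H' = -sum(p_i * ln p_i) over species with nonzero counts
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

# A perfectly even community of four species gives H' = ln 4
print(shannon_index([25, 25, 25, 25]))  # approximately 1.386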
In modern bioinformatics, these classical formulas integrate with notations like probabilistic sequence alignments (e.g., log-odds scores in BLAST) and Markov models for gene prediction, building on Mendelian ratios to analyze vast genomic datasets for evolutionary patterns.[50] Such extensions enable simulations of genetic drift or diversity in microbial metagenomes, maintaining conceptual ties to equilibrium and growth principles while scaling to computational biology.
In Computing and Technology
In the early days of computing during the 1940s and 1950s, mathematical formulas were implemented using low-level machine code or assembly language, often input via punch cards on machines like the IBM 701, which required programmers to manually encode arithmetic operations without high-level abstractions.[51] This labor-intensive process limited the expression of complex scientific formulas, as each instruction had to be painstakingly translated into binary equivalents punched onto cards for batch processing.
The development of Fortran (Formula Translation) in 1957 by a team at IBM, led by John Backus, marked a pivotal shift toward high-level languages designed specifically for scientific computing, allowing formulas to be written in a syntax closer to mathematical notation.[51] Fortran's introduction of algebraic expressions, such as X = A + B * C, enabled direct translation of equations into executable code, dramatically reducing programming time for numerical computations in fields like physics and engineering.[51] Subsequent languages like Python and C++ built on this foundation, incorporating libraries such as Python's math module for standard functions like square roots and exponents.
A classic example of implementing a mathematical formula in programming is the quadratic formula, x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}, which solves equations of the form ax^2 + bx + c = 0. In Python, this can be expressed as follows, using the cmath module to handle complex roots:
import cmath

def quadratic_roots(a, b, c):
    # The discriminant determines whether the roots are real or complex
    discriminant = b**2 - 4*a*c
    # cmath.sqrt returns complex values, so negative discriminants are handled
    root1 = (-b + cmath.sqrt(discriminant)) / (2*a)
    root2 = (-b - cmath.sqrt(discriminant)) / (2*a)
    return root1, root2

# Example usage
roots = quadratic_roots(1, 3, 2)
print(roots)  # ((-1+0j), (-2+0j))
This code computes the discriminant first to check for real or complex solutions, demonstrating how programming languages encapsulate mathematical operations for reuse.[52]
In algorithmic contexts, formulas like Big O notation provide a mathematical framework to describe the computational complexity of code, focusing on the upper bound of resource usage as input size n grows. Introduced in number theory by Paul Bachmann and Edmund Landau around the turn of the 20th century and popularized in algorithm analysis by Donald Knuth, Big O notation abstracts performance; for instance, nested loops iterating over an n \times n array yield O(n^2) time complexity, as each inner loop runs n times for each of the n outer iterations.[53]
Pseudocode offers a high-level, language-agnostic way to outline algorithms incorporating formulas, such as the binary search for finding an element in a sorted array of size n. The standard iterative pseudocode is:
function binary_search(sorted_array, target):
    low = 0
    high = length(sorted_array) - 1
    while low <= high:
        mid = floor((low + high) / 2)
        if sorted_array[mid] == target:
            return mid
        else if sorted_array[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return not_found
The time complexity follows the recurrence relation T(n) = T(n/2) + O(1), where each step halves the search space with constant-time operations, solving to O(\log n) via the master theorem.[54]
Implementing formulas in code introduces challenges like floating-point precision errors, stemming from the IEEE 754 standard's binary representation of real numbers, which cannot exactly store many decimals (e.g., 0.1 requires infinite bits).[55] For instance, in the quadratic formula, catastrophic cancellation occurs when subtracting nearly equal large values in the discriminant, amplifying rounding errors up to 70 ulps (units in the last place); this is mitigated by rearranging the formula to compute the root with the larger magnitude first, reducing error to about 1 ulp.[55] Programmers address such issues using guard digits in hardware, compensated summation algorithms like Kahan's, or arbitrary-precision libraries to ensure numerical stability in formula evaluations.[55]
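A sketch of that rearrangement in Python, assuming a nonzero leading coefficient and a positive discriminant (one common way to organize the computation, not the only one):

import math

def stable_quadratic_roots(a, b, c):
    # Assumes a != 0 and b**2 - 4*a*c > 0 (two distinct real roots)
    discriminant = b * b - 4 * a * c
    # Fold the sign of b into the square root so the addition never cancels,
    # producing the larger-magnitude root first
    q = -0.5 * (b + math.copysign(math.sqrt(discriminant), b))
    # The second root follows from the product of roots, x1 * x2 = c / a
    return q / a, c / q

print(stable_quadratic_roots(1, 3, 2))  # (-2.0, -1.0)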
Formulas in data processing and spreadsheets enable users to perform calculations on tabular data without extensive programming knowledge, facilitating tasks from basic arithmetic to complex analysis in tools like Microsoft Excel and Google Sheets.[56] The concept originated with VisiCalc, the first electronic spreadsheet program released in 1979 for the Apple II, which introduced grid-based computation and became a key driver for personal computer adoption by allowing instant recalculation of values.[57]
In modern spreadsheets, functions like SUM aggregate values across ranges, as in =SUM(A1:A10), which adds the contents of cells A1 through A10, supporting both individual values and mixed references for efficient totaling.[58] Lookup functions such as VLOOKUP search for a value in the first column of a table and return a corresponding item from another column in the same row, exemplified by =VLOOKUP("Part123", A1:D10, 3, FALSE), which retrieves pricing data for a part number from the third column; the FALSE argument requests an exact rather than approximate match.[59]
Statistical formulas are implemented via built-in functions that apply mathematical principles, such as the sample standard deviation, calculated as s = \sqrt{\frac{\sum (x_i - \bar{x})^2}{n-1}} using Excel's STDEV.S function, as in =STDEV.S(A1:A10), which estimates the dispersion of a sample about its mean.[60]
In database systems, aggregate functions in SQL process query results similarly, with SUM computing the total of a column via SELECT SUM(column_name) FROM table_name and AVG finding the arithmetic mean through SELECT AVG(column_name) FROM table_name, both ignoring null values to produce summarized outputs.[61] These operations scale to large datasets, supporting group-by clauses for segmented analysis.
Business intelligence tools like Tableau extend this through calculated fields, where users define custom expressions such as [Profit] / [Sales] to compute profit margins, integrating with visualizations for dynamic data exploration.[62]
A common pitfall in spreadsheet formulas is the circular reference, in which a cell depends, directly or indirectly, on its own value, producing errors or unstable results unless iterative calculation is deliberately enabled for scenarios like iterative solving; Excel flags such references via status bar indicators and provides trace tools to help resolve them.[63]
In Measurement and Units
Dimensional analysis is a mathematical technique used to examine the relationships between physical quantities by considering their fundamental dimensions, such as mass (M), length (L), and time (T), ensuring that equations remain consistent regardless of the unit system employed. This approach verifies the homogeneity of physical equations, meaning all terms must share identical dimensions, and facilitates the derivation of dimensionless parameters essential for scaling and modeling in engineering and science. By focusing on dimensions rather than numerical values, it simplifies complex problems involving multiple variables, revealing inherent similarities without solving the full governing equations.[64]
The origins of dimensional analysis trace back to Joseph Fourier's 1822 treatise Théorie Analytique de la Chaleur, where he first articulated the principle of dimensional homogeneity, asserting that physical equations must balance dimensionally to be valid across different measurement systems. In the 1870s, Lord Rayleigh advanced the method through his work on acoustics, introducing a systematic "method of dimensions" to estimate functional forms of physical laws by assuming power-law relationships and balancing dimensions, as detailed in his 1877 book The Theory of Sound. Rayleigh's approach, applied to problems like sound propagation, emphasized practical utility in deriving approximate formulas without complete theoretical derivation.
Dimensional formulas represent physical quantities as products of powers of base dimensions, providing a compact way to express their structure. For instance, the dimensional formula for force derives from Newton's second law, F = ma, where mass m has dimensions [M] and acceleration a has dimensions [L T^{-2}], yielding [F] = [M L T^{-2}].[64] This notation underscores how derived quantities build from primaries, aiding in quick assessments of physical relations. Another application is verifying equation consistency: for Einstein's mass-energy equivalence E = mc^2, energy E has dimensions [M L^2 T^{-2}], while mass m is [M] and the speed of light c is [L T^{-1}], so c^2 is [L^2 T^{-2}] and mc^2 matches [M L^2 T^{-2}], confirming dimensional balance.
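This bookkeeping is straightforward to mechanize; the sketch below encodes dimensions as (M, L, T) exponent tuples and re-checks E = mc^2 (the tuple convention and names are choices made for this illustration):

def dim_product(*dims):
    # Multiplying quantities adds their (M, L, T) exponents
    return tuple(sum(axis) for axis in zip(*dims))

MASS = (1, 0, 0)       # [M]
VELOCITY = (0, 1, -1)  # [L T^-1]
ENERGY = (1, 2, -2)    # [M L^2 T^-2]

# m * c^2 has dimensions [M][L T^-1]^2 = [M L^2 T^-2], matching energy
print(dim_product(MASS, VELOCITY, VELOCITY) == ENERGY)  # True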
The Buckingham π theorem formalizes Rayleigh's method, providing a rigorous framework for reducing dimensional equations. Stated by Edgar Buckingham in 1914, it posits that any physical relation involving n dimensional variables and k independent fundamental dimensions can be reformulated as a relation among n - k independent dimensionless products, known as π groups. For equations with multiple variables, such as those in fluid dynamics or heat transfer, the theorem identifies dimensionless numbers (e.g., Reynolds number) that govern the system's behavior, enabling similarity between scaled models and prototypes without dependence on specific units. The proof relies on linear algebra: the dimensional matrix of exponents for the variables has rank k, so the kernel yields n - k independent combinations that are dimensionless. This theorem is foundational for engineering problems, as it minimizes experimental variables by focusing on scale-invariant forms.[65]
In engineering applications, dimensional homogenization—ensuring and deriving dimensionally consistent forms—involves structured steps via the Buckingham method to form π groups. First, identify all relevant physical variables in the problem, such as dependent and independent quantities affecting the phenomenon. Second, express each variable's dimensions in terms of the base set (typically M, L, T, and sometimes θ for temperature or Q for charge). Third, determine the number of π groups as n - k, where n is the total variables and k is the rank of the dimensional matrix. Fourth, select k repeating variables that span the base dimensions and are dimensionally independent, often including those with the highest powers or key parameters like length scales. Fifth, form each π group by combining one non-repeating variable with the repeating variables raised to unknown exponents, solving the resulting system of equations for dimensional balance (e.g., for \pi_i = Q \, X_1^{a} X_2^{b} \cdots, set the exponents of M, L, and T to zero). Finally, express the relation as a function of the π groups, such as φ(π_1, π_2, ..., π_{n-k}) = 0, which homogenizes the equation into a dimensionless form suitable for scaling or experimentation.[65] This process ensures consistency and reduces complexity, as demonstrated in problems like drag force on an object, where π groups yield the drag coefficient.
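As a rough sketch of steps three through five, the exponent-solving can be delegated to linear algebra: the drag-force problem with variables F, ρ, v, D, and μ is encoded below as a dimensional matrix whose nullspace basis gives the exponents of dimensionless products (sympy, the variable ordering, and the example problem are assumptions of this sketch; the raw basis vectors may need rescaling into conventional groups such as the Reynolds number):

from sympy import Matrix

# Columns: F, rho, v, D, mu; rows: exponents of M, L, T
dimensional_matrix = Matrix([
    [1, 1, 0, 0, 1],     # M
    [1, -3, 1, 1, -1],   # L
    [-2, 0, -1, 0, -1],  # T
])

print(dimensional_matrix.rank())  # 3, so n - k = 5 - 3 = 2 pi groups
for vector in dimensional_matrix.nullspace():
    # Each basis vector lists exponents (F, rho, v, D, mu) of a dimensionless product
    print(vector.T)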
Unit conversion formulas enable the transformation of measurements between different systems, ensuring consistency in scientific and engineering applications. A common example is the conversion between Fahrenheit (°F) and Celsius (°C) temperature scales, given by the formula C = \frac{(F - 32) \times 5}{9}, where the factor \frac{5}{9} (or equivalently \frac{1}{1.8}) accounts for the different size of degree intervals, and the -32 offset aligns the zero points.[66] This formula derives from the historical definitions: the Celsius scale sets 0 °C at water's freezing point and 100 °C at boiling, while Fahrenheit uses 32 °F and 212 °F for the same points.[66]
In the International System of Units (SI), derived units are formed by combining base units through formulas that express physical quantities. For instance, the joule (J), the unit of energy or work, is defined as 1 \, \mathrm{J} = 1 \, \mathrm{kg \cdot m^2 \cdot s^{-2}}, derived from the work formula W = F \times d, where force F is in newtons (\mathrm{N} = \mathrm{kg \cdot m \cdot s^{-2}}) and distance d in meters.[67] This coherent derivation ensures that products and quotients of SI units yield other SI units without additional numerical factors.[68]
Multi-step conversions rely on chained conversion factors—each a proportionality constant equating two units—applied sequentially to transform quantities across systems. For example, to convert 1 mile per hour to meters per second, apply the factors 1 \, \mathrm{mile} = 1609.34 \, \mathrm{m} and 1 \, \mathrm{hour} = 3600 \, \mathrm{s}, yielding v = \frac{1609.34}{3600} \, \mathrm{m/s} \approx 0.447 \, \mathrm{m/s}; each factor cancels an intermediate unit, preserving the equality. These constants, such as 1609.34 for miles to meters, are fixed ratios derived from the base definitions of the units.
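The Fahrenheit-to-Celsius and mile-per-hour conversions discussed above are simple enough to sketch directly (the function names are chosen for this illustration):

def fahrenheit_to_celsius(f):
    # C = (F - 32) * 5 / 9
    return (f - 32) * 5 / 9

def mph_to_mps(mph):
    # Chain two factors: miles to meters, then hours to seconds
    return mph * 1609.34 / 3600

print(fahrenheit_to_celsius(212))  # 100.0
print(round(mph_to_mps(1), 3))     # 0.447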
The standardization of units traces to the metric system's origins in the 1790s during the French Revolution, when the National Assembly commissioned a decimal-based framework to replace inconsistent local measures, leading to prototypes for the metre and kilogram in 1799.[69] This evolved into the SI in 1960 at the 11th General Conference on Weights and Measures (CGPM), formalizing seven base units and derived units for international coherence.[69] Ongoing refinements include the 2019 redefinition, effective May 20, 2019, which fixed the kilogram to the Planck constant h = 6.62607015 \times 10^{-34} \, \mathrm{J \cdot s}, eliminating artifact-based definitions and enhancing precision.[70]
Adjusting formulas for unit changes involves scaling constants to match the new system's dimensions. The gravitational constant G, for example, is 6.67430 \times 10^{-11} \, \mathrm{m^3 \cdot kg^{-1} \cdot s^{-2}} in SI units but 6.67430 \times 10^{-8} \, \mathrm{cm^3 \cdot g^{-1} \cdot s^{-2}} in the centimeter-gram-second (CGS) system, reflecting the factors of 10^{6} from cubic meters to cubic centimeters and 10^{-3} from per kilogram to per gram, a net scaling of 10^{3} that takes the value from 10^{-11} to 10^{-8}.[71] Such adaptations maintain the universal applicability of Newton's law of universal gravitation, F = G \frac{m_1 m_2}{r^2}.[71]