
Parameter

In mathematics, a parameter is a quantity that influences the output or behavior of a mathematical object, such as a function or equation, but is viewed as being held constant within a specific context. Unlike variables, which are manipulated to produce different outputs in a given instance, parameters remain fixed for that instance while allowing variation across a family of related objects; for example, in the equation of an ellipse \frac{x^2}{a^2} + \frac{y^2}{b^2} = 1, the values a and b serve as parameters that define the shape and size without varying during evaluation.

In statistics, a parameter refers to a numerical characteristic or summary measure that describes an entire population, such as the population mean \mu or standard deviation \sigma, which is typically unknown and estimated from sample data. This contrasts with a statistic, which is a similar measure computed from a sample subset of the population.

In computer science, a parameter is a value or variable passed to a function, method, or subroutine during its invocation, enabling reusable code by specifying inputs like data or configuration options; for instance, formal parameters are placeholders declared in the function definition, while actual parameters supply the concrete values at call time. Parameters facilitate modularity and abstraction in programming, appearing in diverse contexts from algorithm design to machine learning models where they are tuned to optimize performance.

Beyond these core disciplines, parameters play a critical role in fields like engineering and physics, where they quantify system properties—such as coefficients in differential equations modeling physical phenomena—that are adjusted to fit experimental data or simulate behaviors. Their consistent use across domains underscores their utility in defining boundaries, constraints, and tunable elements within complex models and analyses.

Fundamentals

Definition and Usage

A parameter is a quantity or variable that defines or characterizes a system, function, or model, often held constant during a specific analysis while remaining adjustable to explore different scenarios or variations. In mathematical contexts, it serves as an input that shapes the behavior or properties of the entity under study without being the primary focus of variation.

The term "parameter" originates from the Greek roots para- meaning "beside" or "subsidiary" and metron meaning "measure," reflecting its role as a supplementary measure that accompanies the main elements of a system. This etymology underscores its historical use in geometry as a line or quantity parallel to another, which evolved into a broader concept for fixed descriptors in analytical frameworks. The English term "parameter" entered mathematical usage in the 1650s, initially referring to quantities in conic sections.

Unlike variables, which vary freely within a given domain to represent changing states or inputs, parameters are typically fixed within a particular context to maintain the structure of the model or equation. This distinction allows parameters to provide stability and specificity, while variables enable exploration of dynamic relationships. Common examples include the radius in the equation describing a circle, which determines the shape's size and is held constant for that geometric figure, or the growth rate in a population model, which characterizes the rate of expansion and can be adjusted to simulate different environmental conditions. These cases illustrate parameters' utility in simplifying complex systems without delving into field-specific computations.

Parameters facilitate abstraction in scientific and mathematical modeling by encapsulating essential characteristics, enabling the creation of generalizable frameworks that can be applied or adapted across diverse contexts with minimal reconfiguration. This role promotes efficiency in representing real-world phenomena, allowing researchers to focus on core dynamics rather than unique details for each instance.
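As a minimal illustration of this distinction, the following Python sketch (the function names and rate values are invented for the example) holds a growth-rate parameter fixed inside each model instance while the variable t ranges freely:

    import math

    def make_exponential_growth(rate):
        # `rate` is the parameter: held constant for this instance of the model,
        # but adjustable to generate a whole family of related growth functions.
        def population(t):
            # t is the variable: it ranges freely while `rate` stays fixed.
            return math.exp(rate * t)
        return population

    slow = make_exponential_growth(0.1)   # one member of the family
    fast = make_exponential_growth(0.5)   # another member, different parameter
    print(slow(10.0), fast(10.0))         # same variable value, different parameters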

Historical Context

The concept of a parameter traces its roots to ancient Greek geometry, where it referred to a constant quantity used to define the properties of conic sections. Although the modern term "parameter" derives from the Greek words para- (beside) and metron (measure), denoting a subsidiary measure, early applications appear in the works of mathematicians like Euclid and Archimedes, who described conic sections through proportional relations and auxiliary lines that functioned parametrically. For instance, Archimedes utilized analogous fixed measures in his quadrature of the parabola around 250 BCE to determine areas. Apollonius of Perga further systematized this approach in his Conics circa 200 BCE, using the term orthia pleura (upright side) for the fixed chord parallel to the tangent at the vertex—now known as the parameter or latus rectum—essential for classifying ellipses, parabolas, and hyperbolas.

Advancements in the 17th and 18th centuries integrated parameters into analytic geometry and curve theory. René Descartes, in his 1637 treatise La Géométrie, revolutionized the field by representing geometric curves algebraically using coordinates, where constants in the equations served as parameters defining the loci, bridging algebra and geometry without relying solely on synthetic methods. This laid the groundwork for parametric equations in modern form. Leonhard Euler expanded on this in the 18th century, developing parametric representations for complex curves, such as in his studies of elastic curves (elastica) and spirals during the 1740s, where parameters like arc length and curvature enabled precise descriptions of plane figures and variational problems. Euler's work, including his 1744 paper on the elastica, emphasized parameters as tools for solving differential equations governing curve shapes.

In the 19th and early 20th centuries, parameters gained prominence in statistics, physics, and estimation theory. Carl Friedrich Gauss introduced parameter estimation via the least squares method in his 1809 Theoria Motus Corporum Coelestium, applying it to astronomical data to minimize errors in orbital parameters, marking the birth of rigorous statistical inference. Ronald A. Fisher advanced this in the 1920s with maximum likelihood estimation, detailed in his 1922 paper "On the Mathematical Foundations of Theoretical Statistics," where parameters represent unknown population characteristics maximized for observed data likelihood. In physics, James Clerk Maxwell incorporated parameters like permittivity and permeability in his 1865 electromagnetic theory, formalized in equations that unified electricity, magnetism, and light, treating these as constants scaling field interactions.

The mid-20th century saw parameters adopted across interdisciplinary fields, particularly computing and artificial intelligence. In computing, the term emerged in the 1950s with the development of subroutines in early programming languages like FORTRAN (1957), where parameters passed values between procedures, enabling modular code as seen in IBM's mathematical subroutine libraries. In AI, parameters proliferated in the 1980s amid the expert systems boom and the revival of neural networks; for example, backpropagation algorithms optimized network parameters (weights) for learning, as in Rumelhart, Hinton, and Williams' 1986 seminal work, scaling AI from rule-based to data-driven models.
Notably, although parameters have been central to generative linguistics since Chomsky's principles-and-parameters framework of 1981, pre-20th-century linguistic usage of the concept remains underexplored: evidence in 19th-century descriptive grammars is sparse, treating structural constants analogously but without the formalized term.

Mathematics

Parameters in Functions

In mathematics, a parameter is a quantity that influences the output or behavior of a function but is viewed as being held constant during the evaluation of that function for varying inputs. This distinguishes parameters from variables, which are the inputs that change to produce different outputs. Parameters effectively define the specific form or characteristics of the function, allowing it to be part of a broader family of related functions.

Functions with parameters are often denoted using a semicolon to separate the variable from the parameter, such as f(x; \theta), where x is the independent variable and \theta represents one or more parameters. Here, \theta is fixed for a given function instance, but varying \theta generates different functions within the same family, enabling the modeling of diverse behaviors through a single parameterized expression. For instance, the family of exponential functions f(x; \theta) = \theta^x for \theta > 0 illustrates how a parameter creates a versatile class of functions applicable in various mathematical contexts.

Key properties of parameters in functions include linearity, identifiability, and sensitivity. A function is linear in its parameters if the output can be expressed as a linear combination of those parameters, meaning no products, powers, or other nonlinear operations involving the parameters appear in the expression. This linearity simplifies analysis and estimation, as seen in polynomial functions where parameters multiply powers of the variable but not each other. Identifiability refers to the ability to uniquely determine parameter values from the function's observed behavior; for example, in a linear function, parameters are identifiable provided the inputs span the necessary range to distinguish their effects. Sensitivity measures how changes in a parameter affect the function's output, typically quantified by the partial derivative with respect to the parameter, \frac{\partial f}{\partial \theta}, which indicates the rate of change in the function for small perturbations in \theta.

Basic examples highlight these concepts. Consider the linear function y = mx + b, where m is the slope parameter controlling the steepness and b is the intercept parameter setting the y-value at x=0. Varying m and b produces a family of straight lines, with the function linear in both parameters. Similarly, the quadratic function y = ax^2 + bx + c involves three parameters: a determines the parabola's direction and curvature, b affects its tilt, and c shifts it vertically. This form is also linear in a, b, and c, allowing straightforward adjustments to fit observed data patterns.

Parameter estimation in functions typically involves curve fitting, where observed data points are used to determine parameter values that best match the function to the data. A fundamental method is least squares fitting, which minimizes the sum of squared differences between observed values and the function's predictions. For linear and quadratic functions, this approach yields closed-form solutions for the parameters, such as solving normal equations derived from the data. This method, dating back to the work of Gauss and Legendre in the early 19th century, provides reliable estimates when the data noise is minimal and the function form is appropriate.
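Because the quadratic above is linear in its parameters, the least squares solution reduces to solving the normal equations. The following NumPy sketch illustrates this; the synthetic data, true parameter values, and random seed are assumptions of the example, not taken from any source:

    import numpy as np

    # Synthetic data from a quadratic y = a x^2 + b x + c with additive noise.
    rng = np.random.default_rng(0)
    a_true, b_true, c_true = 2.0, -1.0, 0.5
    x = np.linspace(-3, 3, 50)
    y = a_true * x**2 + b_true * x + c_true + rng.normal(0, 0.3, x.size)

    # The model is linear in the parameters (a, b, c), so fitting reduces to
    # least squares on the design matrix whose columns are x^2, x, and 1.
    X = np.column_stack([x**2, x, np.ones_like(x)])
    params, *_ = np.linalg.lstsq(X, y, rcond=None)
    a_hat, b_hat, c_hat = params
    print(f"a={a_hat:.3f}, b={b_hat:.3f}, c={c_hat:.3f}")

    # Sensitivity of the fitted curve to each parameter is the corresponding
    # column of X: df/da = x^2, df/db = x, df/dc = 1.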

Parameters in Models

In mathematical models, parameters act as tunable components that encapsulate key system properties, enabling the simulation and prediction of dynamic behaviors through differential equations or computational frameworks. These parameters allow models to represent real-world processes by adjusting rates of change, interactions, or thresholds, thereby facilitating the exploration of scenarios that would otherwise be infeasible to observe directly. For example, in epidemiological simulations, the parameter β in the SIR model quantifies the transmission rate of infection from susceptible to infected individuals, influencing the spread dynamics within a population.

Parameters in models are broadly categorized into structural ones, which define the underlying form and assumptions of the model—such as the choice of differential equation structure—and observational ones, which are empirically fitted to align model outputs with available data. Structural parameters establish the model's architecture, often derived from theoretical principles, while observational parameters are adjusted during calibration to reflect measurement outcomes. A critical challenge is identifiability, where parameters may not be uniquely recoverable from outputs due to correlations or insufficient data, leading to non-unique solutions that undermine prediction reliability; this issue is particularly pronounced in nonlinear systems.

Model calibration involves optimizing parameters to minimize discrepancies between simulated results and empirical observations, with least squares fitting being a foundational technique that minimizes the sum of squared residuals. In the Lotka-Volterra predator-prey model, for instance, the parameters α (prey growth rate), β (predation efficiency), γ (predator mortality rate), and δ (predator conversion efficiency from prey) are calibrated to capture oscillatory population dynamics, often using time-series data on species abundances. The calibrated model is given by the system:

\begin{align*}
\frac{dx}{dt} &= \alpha x - \beta x y, \\
\frac{dy}{dt} &= \delta x y - \gamma y,
\end{align*}

where x and y denote prey and predator populations, respectively; least squares methods integrate numerical solutions of these equations with data to estimate the parameters.

Post-2000 advancements have emphasized sensitivity analysis to evaluate parameter influence on model robustness, particularly through global methods that explore parameter ranges holistically rather than locally. These techniques, such as variance-based decomposition, quantify how variations in individual or combined parameters propagate to output uncertainty, aiding in model simplification and prioritization of calibration efforts in complex simulations.
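A hypothetical calibration workflow along these lines can be sketched with SciPy's generic integrator and least squares optimizer. The "observations" below are synthetic, and all parameter values and the initial guess are placeholders chosen for illustration:

    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import least_squares

    def lotka_volterra(t, z, alpha, beta, gamma, delta):
        x, y = z
        return [alpha * x - beta * x * y, delta * x * y - gamma * y]

    # Synthetic "observations" generated from known parameters, standing in
    # for field time-series data on species abundances.
    true = (1.0, 0.1, 1.5, 0.075)
    t_obs = np.linspace(0, 15, 40)
    sol = solve_ivp(lotka_volterra, (0, 15), [10, 5], t_eval=t_obs, args=true)
    data = sol.y + np.random.default_rng(1).normal(0, 0.2, sol.y.shape)

    def residuals(params):
        # Integrate the model for a candidate parameter vector, then compare
        # the simulated trajectories against the observed data.
        fit = solve_ivp(lotka_volterra, (0, 15), [10, 5],
                        t_eval=t_obs, args=tuple(params))
        return (fit.y - data).ravel()

    est = least_squares(residuals, x0=[0.8, 0.08, 1.2, 0.05])
    print("estimated (alpha, beta, gamma, delta):", est.x)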

Analytic Geometry

In analytic geometry, parametric equations provide a method to represent geometric objects such as curves and surfaces by expressing their coordinates as functions of one or more parameters, offering a flexible alternative to Cartesian or implicit forms. For instance, a straight line passing through the point (x_0, y_0) with direction vector (a, b) can be parameterized as x = x_0 + a t, y = y_0 + b t, where t is the parameter that traces points along the line. Similarly, a circle of radius r centered at the origin is given by x = r \cos t, y = r \sin t, with t ranging from 0 to 2\pi to complete the loop. For an ellipse centered at the origin with semi-major axis a and semi-minor axis b, the equations become x = a \cos t, y = b \sin t, allowing the parameter t to control the position around the ellipse.

Parametric representations offer several advantages over Cartesian equations, particularly in handling intersections, tracing paths, and facilitating animations, as they explicitly incorporate direction and parameterization by time or angle. For example, computing intersections between curves is often simpler parametrically, as it involves solving for parameter values rather than eliminating variables from implicit equations. A historical development in this area includes Plücker coordinates, introduced by Julius Plücker in the mid-19th century, which use six homogeneous parameters to describe lines in three-dimensional projective space, advancing the analytic treatment of line geometry.

In higher dimensions, parametric equations extend to curves and surfaces, enabling descriptions of complex shapes. A sphere of radius r can be parameterized using two parameters, \theta and \phi, as:

\begin{align*}
x &= r \sin \theta \cos \phi, \\
y &= r \sin \theta \sin \phi, \\
z &= r \cos \theta,
\end{align*}

where \theta \in [0, \pi] and \phi \in [0, 2\pi], covering the entire surface. A helix, as a space curve, is represented by x = r \cos t, y = r \sin t, z = c t, with t as the parameter controlling both rotation and linear ascent. Unlike implicit forms, which define surfaces via equations like F(x, y, z) = 0 (e.g., x^2 + y^2 + z^2 = r^2 for a sphere), parametric forms allow direct mapping from parameter domains to the surface, aiding in visualization and computation without solving for coordinates implicitly.

These parametric approaches find foundational applications in computer graphics, where they model smooth curves and surfaces for rendering and animation, such as tracing object paths or generating wireframe models without delving into algorithmic implementation.
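The direct mapping from parameter domain to points can be demonstrated with a short NumPy sketch; the radii, semi-axes, and pitch constant below are arbitrary illustrative values:

    import numpy as np

    # Sample a circle, an ellipse, and a helix directly from their parameters.
    t = np.linspace(0, 2 * np.pi, 200)

    r = 2.0                        # circle radius
    circle = np.stack([r * np.cos(t), r * np.sin(t)])

    a, b = 3.0, 1.5                # ellipse semi-axes
    ellipse = np.stack([a * np.cos(t), b * np.sin(t)])

    c = 0.5                        # vertical rise of the helix per radian
    helix = np.stack([r * np.cos(t), r * np.sin(t), c * t])

    # The parametric form makes path-tracing trivial: successive columns are
    # successive positions, so velocity is approximated by finite differences.
    velocity = np.diff(helix, axis=1) / np.diff(t)
    print(circle.shape, ellipse.shape, helix.shape, velocity.shape)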

Mathematical Analysis

In mathematical analysis, parameters often appear in functions where their variation affects the behavior of limits, derivatives, and integrals, enabling the study of how solutions depend continuously or differentiably on these parameters. A key tool for handling parameter dependence in integrals is the Leibniz integral rule, which allows differentiation under the integral sign. This rule states that if f(x, t) is continuous in x and t, and differentiable in t, with the partial derivative \frac{\partial f}{\partial t} continuous, then for fixed limits of integration,

\frac{d}{dt} \int_a^b f(x, t) \, dx = \int_a^b \frac{\partial}{\partial t} f(x, t) \, dx.

The rule, first employed by Gottfried Wilhelm Leibniz in the late 17th century, facilitates the analysis of parameter-dependent integrals by interchanging differentiation and integration under suitable conditions on the domain and function regularity.

For series expansions, parameter-dependent functions can be approximated using Taylor series centered at a point, where the coefficients involve derivatives with respect to the primary variable but may themselves depend on the parameter. Consider a function f(x; p) analytic in x near x_0; its Taylor series is

f(x; p) = \sum_{n=0}^\infty \frac{f^{(n)}(x_0; p)}{n!} (x - x_0)^n,

allowing assessment of how the approximation varies with p. This parametric form underpins perturbation theory, where a small parameter \epsilon perturbs a solvable base problem P_0(x) = 0 to P_\epsilon(x) = P_0(x) + \epsilon Q(x) + O(\epsilon^2) = 0, and solutions are sought as asymptotic series x(\epsilon) = x_0 + \epsilon x_1 + \epsilon^2 x_2 + \cdots. In regular perturbation cases, these series converge for small \epsilon, providing quantitative dependence; singular cases require rescaling for uniform validity across domains.

The properties of continuity and differentiability for parameter-dependent functions f(x; p) rely on convergence behaviors of approximating sequences or series. Uniform convergence of a sequence of continuous functions f_n(x; p) to f(x; p) on a domain preserves continuity in both x and p, ensuring the limit function inherits these properties uniformly. For differentiability, if \{f_n\} converges uniformly and their derivatives \{f_n'\} converge uniformly to some g(x; p), then f is differentiable with f' = g, critical for analyzing parameter sensitivity in limits and integrals. This framework extends to parameter families, where uniform convergence prevents pathologies like pointwise limits yielding discontinuous parameter dependence.

In advanced settings, such as dynamical systems, parameters serve as bifurcation points where qualitative solution structures change abruptly. A bifurcation parameter r in an ordinary differential equation \dot{x} = f(x; r) induces transitions like the supercritical pitchfork bifurcation, governed by the normal form \dot{x} = r x - x^3, where the origin shifts from stable (for r < 0) to unstable (for r > 0), spawning two new stable equilibria. These phenomena, analyzed via local Taylor expansions around equilibria, reveal how small parameter variations destabilize systems and generate complex behaviors like periodic orbits. Complementing this, the implicit function theorem provides local solvability for parameter-dependent equations F(x; p) = 0, ensuring unique, differentiable solutions x(p) near points where \frac{\partial F}{\partial x} \neq 0.
In early 20th-century analysis, Ulisse Dini's rigorous formulations (1907–1915) applied the theorem to real analytic implicit functions and differential geometry, enabling studies of singularities and manifold structures, while extensions by Gilbert A. Bliss (1909) addressed existence in higher dimensions for Riemann surfaces.
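A numerical spot-check of the Leibniz rule is straightforward. The sketch below uses an arbitrarily chosen integrand satisfying the rule's continuity hypotheses, and compares a finite-difference derivative of the integral against the integral of the partial derivative:

    import numpy as np
    from scipy.integrate import quad

    # Check d/dt ∫_0^1 f(x, t) dx = ∫_0^1 ∂f/∂t dx for f(x, t) = exp(-t x^2).
    f = lambda x, t: np.exp(-t * x**2)
    df_dt = lambda x, t: -x**2 * np.exp(-t * x**2)

    t0, h = 1.0, 1e-5
    I = lambda t: quad(f, 0.0, 1.0, args=(t,))[0]

    lhs = (I(t0 + h) - I(t0 - h)) / (2 * h)      # derivative of the integral
    rhs = quad(df_dt, 0.0, 1.0, args=(t0,))[0]   # integral of the derivative
    print(lhs, rhs)  # the two values agree to high precision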

Statistics and Econometrics

In statistics, parameters are unknown quantities in a model that are estimated from data to describe the underlying population or process. Parameter estimation involves methods that use observed data to infer these values, with two foundational approaches being the method of moments and maximum likelihood estimation. The method of moments, introduced by Karl Pearson, equates sample moments to population moments to solve for parameters; for the normal distribution, the first moment yields the mean \mu as the sample average, and the second central moment gives the variance \sigma^2 as the sample variance (adjusted for bias). Maximum likelihood estimation, developed by Ronald Fisher, maximizes the likelihood function—the probability of observing the data given the parameters—to obtain point estimates; for the normal distribution, this produces the same estimators for \mu and \sigma^2 as the method of moments, but the approach generalizes more efficiently to complex distributions by leveraging the data's joint density.

Once estimated, statistical inference assesses the reliability of parameters through confidence intervals and hypothesis tests. Confidence intervals provide a range within which the true parameter likely lies, with coverage probability determined by the interval's construction; for example, a 95% confidence interval for \mu in a normal model with known variance uses the sample mean plus or minus 1.96 standard errors. Hypothesis testing evaluates specific claims about parameters, such as equality to a null value, using test statistics like the t-statistic in Student's t-test, which William Sealy Gosset introduced to handle small-sample inference on means when the variance is unknown, comparing the sample mean to the hypothesized value under a t-distribution.

In econometrics, parameters often represent relationships between economic variables, estimated via regression models to inform policy and prediction. Ordinary least squares (OLS) estimates regression coefficients \beta_0 and \beta_1 in the linear model y = \beta_0 + \beta_1 x + \epsilon by minimizing the sum of squared residuals, a method formalized by Carl Friedrich Gauss for error-prone observations. When endogeneity biases OLS estimates—such as due to omitted variables or reverse causality—instrumental variables (IV) estimation uses exogenous instruments correlated with the regressor but not the error term to identify causal parameters, as advanced in modern causal inference frameworks.

Time-series analysis treats parameters as characterizing temporal dependencies in data, with autoregressive integrated moving average (ARIMA) models specifying orders p, d, and q for autoregressive, differencing, and moving average components, respectively; estimation typically employs maximum likelihood on differenced series to achieve stationarity. While classical ARIMA relies on frequentist methods, Bayesian approaches for parameter inference, incorporating priors to handle uncertainty in volatile series like economic indicators, have been integrated since the late 1990s.
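The normal-model estimators and the closed-form OLS solution discussed above can be sketched as follows; the data are synthetic and the generating parameter values are arbitrary assumptions of the example:

    import numpy as np

    rng = np.random.default_rng(2)

    # Method of moments / maximum likelihood for a normal sample: both give
    # the sample mean for mu and the (uncorrected) sample variance for sigma^2.
    sample = rng.normal(loc=5.0, scale=2.0, size=1000)
    mu_hat = sample.mean()
    sigma2_hat = ((sample - mu_hat) ** 2).mean()  # MLE; scale by n/(n-1) to unbias

    # OLS for y = b0 + b1 x + e via the closed-form normal equations.
    x = rng.uniform(0, 10, 200)
    y = 1.5 + 0.7 * x + rng.normal(0, 1.0, 200)
    b1_hat = np.cov(x, y, bias=True)[0, 1] / x.var()   # slope = cov(x, y) / var(x)
    b0_hat = y.mean() - b1_hat * x.mean()              # intercept from the means
    print(mu_hat, sigma2_hat, b0_hat, b1_hat)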

Probability Theory

In probability theory, parameters specify the properties of probability distributions, enabling the modeling of random phenomena. These parameters are typically classified into location, scale, and shape categories. The location parameter, often denoted by μ, determines the central tendency or shift of the distribution, such as the mean in the normal distribution. The scale parameter, denoted by σ, controls the spread or dispersion, representing the standard deviation in the normal case. Shape parameters alter the form of the distribution, for instance, the success probability p in the Bernoulli distribution, which governs the probability mass at 0 or 1, or the rate λ in the Poisson distribution, which sets the expected number of events in a fixed interval.

A prominent class of distributions unified by their parametric structure is the exponential family, which encompasses many common distributions like the normal, Poisson, and Bernoulli. In this family, the probability density or mass function can be expressed as

f(x \mid \eta) = h(x) \exp\left( \eta^T T(x) - A(\eta) \right),

where η is the natural parameter vector, T(x) is the sufficient statistic, h(x) is the base measure, and A(η) is the log-partition function ensuring normalization. The natural parameters η reparameterize the distribution in a form that simplifies inference, as they directly multiply the sufficient statistic T(x), facilitating properties like convexity of the log-partition function. This parameterization highlights the role of parameters in capturing the essential variability across family members.

Parameters also define stochastic processes, which model sequences of random variables evolving over time. In Markov chains, a discrete-time stochastic process with the Markov property, the transition probabilities p_{ij} = P(X_{t+1} = j \mid X_t = i) serve as the key parameters, forming the rows of the transition matrix that dictate the probability of moving between states. These parameters fully characterize the chain's stationary behavior and long-term dynamics when the matrix is stochastic. For continuous-time processes like Brownian motion, also known as the Wiener process, the parameters include the drift μ, which specifies the expected linear trend, and the volatility σ, which measures the instantaneous variance per unit time, yielding the stochastic differential equation dX_t = μ dt + σ dW_t where W_t is standard Brownian motion.

Sufficient statistics play a crucial role in parameter inference within probability theory by encapsulating all relevant information about the parameters from the data. A statistic T(X) is sufficient for a parameter θ if the conditional distribution of the data given T(X) is independent of θ, allowing inference to proceed solely from T(X) without loss of information. In exponential families, the natural sufficient statistic T(x) directly informs estimation of η, as it appears linearly in the likelihood. This concept underpins efficient inference procedures, reducing dimensionality while preserving probabilistic structure.

Lévy processes, a broad class of stochastic processes with independent and stationary increments generalizing Brownian motion, developed through contributions starting in the early 1900s, received key formalization by Paul Lévy in the 1930s. These processes are parameterized by a triplet (b, σ², ν), where b is the drift vector, σ² is the Gaussian covariance matrix for the diffusion component, and ν is the Lévy measure describing the intensity and size of jumps.
This parameterization captures jumps, diffusion, and drift, enabling the modeling of heavy-tailed phenomena beyond Gaussian assumptions.
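Returning to the drift-diffusion case above, a minimal simulation sketch (all numeric values are illustrative) shows how the exact Gaussian increments of dX_t = μ dt + σ dW_t act as a sufficient summary for recovering the parameters μ and σ:

    import numpy as np

    rng = np.random.default_rng(3)
    mu, sigma, dt, n = 0.5, 1.2, 0.001, 100_000

    # Simulate dX_t = mu dt + sigma dW_t by its exact discrete increments:
    # each step is Normal(mu*dt, sigma^2*dt), independent of the past.
    increments = rng.normal(mu * dt, sigma * np.sqrt(dt), n)
    path = np.concatenate([[0.0], np.cumsum(increments)])

    # The increments summarize the data for (mu, sigma): estimate the drift
    # from their mean and the volatility from their variance.
    mu_hat = increments.mean() / dt
    sigma_hat = np.sqrt(increments.var() / dt)
    print(mu_hat, sigma_hat)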

Computing

Computer Programming

In computer programming, parameters serve as placeholders for values or references passed to functions, subroutines, or methods, enabling modular code by allowing external inputs to influence execution without hardcoding specifics. They facilitate reusability and abstraction, distinguishing between formal parameters (defined in the function signature) and actual arguments (provided during invocation). Early programming languages emphasized parameters for numerical computations, evolving to support diverse passing mechanisms and scoping rules in modern contexts.

The concept of parameters originated in the 1950s with FORTRAN, developed by IBM for scientific computing on the IBM 704. FORTRAN I (1957) introduced function statements using dummy arguments in an assignment-like syntax, such as function(arg) = expression, where arguments were passed by address, allowing functions to modify values indirectly. FORTRAN II (1958) enhanced this by supporting user-defined subroutines with separate compilation, retaining symbolic information for parameter references, while FORTRAN III (late 1950s) permitted function and subroutine names as arguments themselves, expanding flexibility for alphanumeric handling. These innovations marked parameters as essential for procedural abstraction in early high-level languages.

Function parameters vary by type and passing mechanism across languages. Positional parameters are matched by order of declaration, as in Python's def add(a, b): return a + b, invoked as add(2, 3). Keyword parameters allow named passing for clarity and flexibility, such as greet(name="Alice", greeting="Hello") in def greet(name, greeting="Hello"): .... Default parameters provide fallback values, evaluated once at definition; for example, def power(base, exponent=2): return base ** exponent yields 9 when called as power(3).

Parameters can be passed by value or by reference, affecting mutability. Pass-by-value copies the argument's value into the formal parameter, isolating changes; in C++, void swapByVal(int num1, int num2) leaves originals unchanged (e.g., inputs 10 and 20 remain 10 and 20 post-call). Pass-by-reference passes the address, enabling modifications to the original; C++'s void swapByRef(int& num1, int& num2) swaps values (10 and 20 become 20 and 10). This distinction balances efficiency for small types (value) with avoidance of copies for large structures (reference, often with const for read-only access).

Configuration parameters govern program behavior at runtime, often supplied via command-line arguments or APIs. In C++, the main function receives them as int main(int argc, char* argv[]): argc counts arguments (minimum 1), and argv is an array where argv[0] is the program name and subsequent elements are strings; for instance, invoking ./program input.txt -v sets argc=3 and argv[1]="input.txt". This mechanism supports scripting and external control without recompilation.

Parameter scope defines accessibility, while binding associates names with values or types. Local parameters, declared within a function or block, are confined to that lexical scope (e.g., Python's function arguments act as locals), preventing unintended interference. Global parameters, declared at module level, are accessible program-wide but risk namespace pollution; languages like C++ use static binding at compile time for types (e.g., int x) and dynamic binding at runtime for values. Type systems enforce parameter constraints, such as strong typing in Java to prevent mismatches.
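The following Python sketch gathers several of these mechanisms in one place; it is illustrative only, and the helper functions are invented for the example:

    import sys

    def power(base, exponent=2):
        # `base` and `exponent` are formal parameters; `exponent` has a
        # default value, evaluated once when the function is defined.
        return base ** exponent

    print(power(3))              # positional argument; default exponent -> 9
    print(power(2, exponent=5))  # keyword argument overrides the default -> 32

    def rebind(x):
        x = 99          # rebinds the local name; the caller is unaffected

    def mutate(xs):
        xs.append(99)   # mutates a shared object; the caller sees the change

    value, items = 1, []
    rebind(value)
    mutate(items)
    print(value, items)  # 1 [99]: Python passes object references by value

    # Command-line configuration parameters, analogous to argc/argv in C++:
    # sys.argv[0] is the program name; later entries are user-supplied strings.
    print(sys.argv)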
Modern languages extend parameters with generics for type-safe reusability. TypeScript's generics use type parameters like <Type> in functions: function identity<Type>(arg: Type): Type { return arg; }, callable as identity<string>("hello") to infer and preserve string type.

In functional programming paradigms, lambda expressions with parameters gained mainstream adoption in the 2000s and 2010s, enabling anonymous functions for concise higher-order operations; C++11 (2011) and Java 8 (2014) integrated them, building on earlier influences like Python's 1994 lambda but accelerating use in object-oriented contexts for tasks like event handling.

Artificial Intelligence

In artificial intelligence, particularly in machine learning models, parameters refer to the internal variables that are learned during training to optimize model performance. Model parameters, often denoted as \theta, include the weights and biases in neural networks, which are adjusted to minimize a loss function that quantifies the difference between predicted and actual outputs. This optimization process typically employs backpropagation, an algorithm that computes gradients of the loss with respect to each parameter and updates them iteratively using gradient descent. The seminal introduction of backpropagation in multilayer networks enabled efficient training of deep architectures by propagating errors backward through the layers. For instance, in deep feedforward networks, the objective is to find \theta that minimizes the expected loss \mathbb{E}_{(x,y)}[L(y, f(x; \theta))], where f is the model function.

Hyperparameters, in contrast, are configuration settings external to the model that are not learned from data but must be specified before training. Common examples include the learning rate \alpha, which controls the step size in parameter updates during optimization, and the batch size, which determines the number of training examples processed per iteration. These are tuned through methods such as grid search, which exhaustively evaluates combinations on a predefined grid, or more efficient approaches like random search, which samples hyperparameters randomly and often outperforms grid search by focusing on promising regions of the space. Bayesian optimization further advances this by modeling the hyperparameter-performance relationship as a probabilistic surrogate, such as a Gaussian process, to intelligently select configurations that balance exploration and exploitation.

Specific examples illustrate the role of parameters in AI architectures. In transformer models, the model dimension d_{model} serves as a key hyperparameter that sets the size of input embeddings and hidden states, influencing the network's capacity to capture complex dependencies; for instance, the original transformer used d_{model} = 512. In reinforcement learning, the discount factor \gamma \in [0, 1) is a hyperparameter that weights future rewards in the value function, balancing immediate versus long-term gains in agents trained via methods like Q-learning.

Recent developments have emphasized parameter efficiency and scaling in large-scale AI systems. Transfer learning, prominent since the 2010s, involves initializing model parameters with representations learned on large source tasks and fine-tuning them on target tasks, leveraging the transferability of lower-layer features while adapting higher layers. Studies show that early layers retain general features transferable across domains, whereas later layers become task-specific, guiding efficient fine-tuning strategies. In large language models, scaling laws have revealed optimal parameter regimes; for example, the Chinchilla model demonstrated that compute-optimal training allocates equal resources to model size (parameters) and data tokens, achieving better performance than larger undertrained models like Gopher by using 70 billion parameters trained on 1.4 trillion tokens.
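A stripped-down sketch of the parameter/hyperparameter split, using plain NumPy gradient descent on a linear model (the data, architecture, and all settings are invented for illustration):

    import numpy as np

    rng = np.random.default_rng(4)
    x = rng.uniform(-1, 1, (256, 1))
    y = 3.0 * x + 0.5 + rng.normal(0, 0.1, (256, 1))

    # Model parameters theta = (w, b) are learned; the learning rate, batch
    # size, and epoch count are hyperparameters fixed before training begins.
    w, b = np.zeros((1, 1)), np.zeros(1)
    learning_rate, batch_size, epochs = 0.1, 32, 50

    for _ in range(epochs):
        for i in range(0, len(x), batch_size):
            xb, yb = x[i:i + batch_size], y[i:i + batch_size]
            err = xb @ w + b - yb                 # prediction error
            grad_w = 2 * xb.T @ err / len(xb)     # dL/dw for squared loss
            grad_b = 2 * err.mean()               # dL/db for squared loss
            w -= learning_rate * grad_w           # gradient-descent update
            b -= learning_rate * grad_b

    print(w.ravel(), b)  # approaches the generating values (3.0, 0.5)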

Applied Sciences

Engineering

In engineering, parameters are essential variables that define the behavior, performance, and constraints of systems during design, analysis, and optimization. These include physical properties, operational settings, and dimensional specifications that engineers adjust to meet functional requirements while ensuring reliability and efficiency. For instance, in mechanical and electrical systems, parameters such as material strengths, load conditions, and circuit elements directly influence system stability and output.

In control theory, system parameters like the proportional gain K_p and time constant \tau are critical for tuning proportional-integral-derivative (PID) controllers, which regulate processes in applications ranging from robotics to industrial automation. The gain K_p determines the controller's responsiveness to error, amplifying the corrective action proportionally, while \tau represents the system's inherent response time, affecting settling and overshoot in dynamic systems. Proper selection of these parameters, often through methods like Ziegler-Nichols tuning, ensures stable operation by balancing speed and accuracy.

Design optimization in engineering relies on parameters such as tolerances in manufacturing, which specify allowable deviations in part dimensions to achieve interchangeability and precision. For example, in machining processes, tolerance parameters define limits for linear dimensions (e.g., ±0.1 mm for general fits) to minimize defects and assembly issues. Similarly, finite element analysis (FEA) uses input parameters like material Young's modulus, Poisson's ratio, and boundary loads to simulate stress distributions and predict failure modes in structures. These parameters enable iterative optimization, reducing material use while maintaining safety factors.

Representative examples illustrate parameter roles in specific domains. In electrical circuits, resistance R (measured in ohms) quantifies opposition to current flow, while capacitance C (in farads) indicates charge storage capacity, both governing time constants in RC networks for filtering and timing applications. In fluid dynamics, the Reynolds number Re, defined as Re = \frac{\rho v D}{\mu} where \rho is density, v is velocity, D is diameter, and \mu is viscosity, serves as a dimensionless parameter to characterize flow regimes—low Re (<2000) indicates laminar flow, while high Re (>4000) signals turbulence, guiding pipeline and aerodynamic designs.

Engineering standards formalize these parameters for consistency. The ISO 2768 standard establishes general tolerances for linear and angular dimensions in manufacturing, categorizing them into fine (f), medium (m), coarse (c), and very coarse (v) classes to suit production methods like casting or milling. In sustainable engineering, parameters such as carbon footprint factors—quantifying embodied emissions per unit mass—have evolved in the 2020s to incorporate lifecycle assessments, aiding low-carbon material selection and reducing global warming potential by up to 30% in building designs.
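A small sketch of the Reynolds number as a design parameter, implementing the formula above; the fluid property values are illustrative (roughly water at room temperature), and the regime thresholds follow the conventional pipe-flow values cited:

    def reynolds_number(density, velocity, diameter, viscosity):
        # Re = rho * v * D / mu, the dimensionless flow-regime parameter.
        return density * velocity * diameter / viscosity

    def flow_regime(re):
        # Conventional pipe-flow thresholds from the text above.
        if re < 2000:
            return "laminar"
        if re > 4000:
            return "turbulent"
        return "transitional"

    # Illustrative case: water-like fluid in a 5 cm pipe at 1 m/s.
    re = reynolds_number(density=998.0, velocity=1.0,
                         diameter=0.05, viscosity=1.0e-3)
    print(re, flow_regime(re))  # ~49,900 -> turbulent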

Environmental Science

In environmental science, parameters are essential for modeling complex systems such as climate and ecosystems, enabling predictions of natural processes and human impacts. Climate models, particularly general circulation models (GCMs), rely on key inputs like the climate sensitivity parameter λ, defined as the equilibrium change in global mean surface temperature per unit radiative forcing (ΔT = λ × ΔF), typically around 0.5 K/(W m⁻²) in one-dimensional radiative-convective models and exhibiting 20-30% variation in three-dimensional atmosphere-ocean GCMs due to feedbacks such as water vapor and clouds. This parameter quantifies the Earth's radiative response and is nearly invariant across forcings like well-mixed greenhouse gases and solar radiation, though it varies more for stratospheric ozone perturbations. Surface properties in GCMs, including albedo (the fraction of incident solar radiation reflected by the surface) and emissivity (the efficiency of surface thermal radiation emission), further govern energy balance; albedo influences absorbed solar energy and atmospheric circulation, while emissivity determines upward longwave flux in radiative transfer equations like F↑ = ε_s σ T_s^4, where σ is the Stefan-Boltzmann constant. These parameters, often assumed constant over global surfaces in simplified GCMs, are tuned to match observed climate states and drive simulations of future scenarios.

Ecological models incorporate parameters to capture population dynamics and community structure, with the carrying capacity K in the logistic growth equation dN/dt = rN(1 - N/K) representing the maximum sustainable population size limited by resources, where r is the intrinsic growth rate and N is population size. This parameter levels off exponential growth as environmental constraints intensify, providing a foundational metric for predicting species responses to habitat changes in metapopulation contexts.

Biodiversity indices, used as parameters to evaluate ecosystem diversity, include the Shannon index H = -∑ p_i ln(p_i), where p_i is the proportion of species i, measuring uncertainty in species identity and balancing rare and common taxa via geometric mean rarity. Similarly, the Simpson index D = 1 / ∑ p_i^2 quantifies the probability of interspecific encounters, emphasizing dominant species through harmonic mean rarity and serving as a robust input for models assessing community stability and resilience. These indices enable comparisons across ecosystems, informing conservation strategies by parameterizing diversity gradients standardized by sampling coverage.

Uncertainty in environmental parameters arises from incomplete knowledge and model variability, with the Intergovernmental Panel on Climate Change (IPCC) addressing it through qualitative confidence levels (e.g., high based on evidence quality and agreement) and probabilistic likelihood terms (e.g., likely: 66-100% probability). For instance, IPCC models report parameter ranges like climate sensitivity with 90-95% confidence intervals to capture tails of distributions, ensuring traceable judgments in projections. Scenario analysis employs Representative Concentration Pathways (RCPs) to parameterize emissions trajectories, such as RCP2.6 (low forcing, ~2.6 W m⁻² by 2100) limiting warming to 1.6°C and RCP8.5 (high forcing, ~8.5 W m⁻²) projecting 4.3°C, influencing outcomes like sea level rise (0.43 m vs. 0.84 m by 2100).
These pathways integrate parameters for greenhouse gas concentrations, land use, and socioeconomic drivers to evaluate risks across low- and high-emission futures.

Recent assessments of biodiversity loss highlight parameters quantifying exploitation and decline, as detailed in a 2020 IPBES workshop report. For example, unsustainable wildlife trade affects 72% (6,241 species) of threatened or near-threatened vertebrates, with regional hunting risking 13% of Southeast Asian threatened mammals (113 species) and 8% in Africa (91 species). African elephant populations have declined 30-fold to ~400,000 over the past century, exacerbated by poaching exceeding 100,000 individuals between 2010 and 2012, serving as key metrics for modeling overexploitation drivers. More recent assessments, such as a 2024 PNAS study, confirm ongoing declines, with savanna elephant populations decreasing by an average of 70% at surveyed sites over the past 50 years (1964–2016) and forest elephants by 90%. Land use change contributes over 30% to emerging infectious diseases since 1960, linking biodiversity erosion to health risks through enhanced human-wildlife interfaces. These parameters underscore the economic scale of loss, with illegal trade valued at US$7-23 billion annually and prevention costs estimated at US$17.7-31.2 billion yearly.
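The indices and growth model defined earlier in this section translate directly into code; the sketch below uses invented species counts and parameter values purely for illustration:

    import numpy as np

    def shannon_index(counts):
        # H = -sum(p_i ln p_i) over observed species proportions.
        p = np.asarray(counts, dtype=float)
        p = p[p > 0] / p.sum()
        return -np.sum(p * np.log(p))

    def inverse_simpson(counts):
        # D = 1 / sum(p_i^2), the probability-of-encounter form given above.
        p = np.asarray(counts, dtype=float)
        p = p / p.sum()
        return 1.0 / np.sum(p ** 2)

    def logistic_step(n, r, k, dt=1.0):
        # One Euler step of dN/dt = r N (1 - N/K), parameterized by r and K.
        return n + r * n * (1 - n / k) * dt

    counts = [50, 30, 15, 5]                 # individuals per species (invented)
    print(shannon_index(counts), inverse_simpson(counts))
    print(logistic_step(n=100.0, r=0.3, k=500.0))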

Other Disciplines

Linguistics

In linguistics, the concept of parameters forms a core component of generative grammar, particularly within Noam Chomsky's Principles and Parameters (P&P) theory developed in the 1980s. This framework posits that Universal Grammar (UG) consists of invariant principles shared across all human languages, alongside a finite set of parameters that account for cross-linguistic variation. Parameters are binary switches that children "set" during language acquisition based on environmental input, allowing the grammar to generate language-specific structures while adhering to universal constraints. The theory emerged as an evolution of earlier generative models, aiming to explain both the uniformity and diversity of natural languages through a biologically endowed language faculty.

A key example of a parameter is the head-directionality parameter, which determines whether the head of a phrase (such as a verb in a verb phrase or a preposition in a prepositional phrase) precedes or follows its complements. In head-initial languages like English, heads typically come first (e.g., "eat the apple"), whereas in head-final languages like Japanese, they follow (e.g., "apple eat"). Setting this parameter influences broader syntactic patterns, such as word order in clauses. Another prominent parameter is the pro-drop parameter, which licenses the null realization of subjects in finite clauses. Pro-drop languages like Spanish allow sentences such as "Habla inglés" (meaning "He/She speaks English"), where the subject pronoun is omitted due to rich verbal morphology, unlike non-pro-drop languages like English, which require overt subjects ("He/She speaks English"). These parameters illustrate how subtle settings can yield significant typological differences.

In language acquisition, parameter setting explains how children rapidly converge on their target grammar despite limited and noisy input, guided by UG. The process involves hypothesizing parameter values that match the linguistic evidence, with mechanisms like the subset principle ensuring learnability. Formulated in learnability work by Robert Berwick and by Kenneth Wexler and Rita Manzini, the subset principle states that if one parameter value generates a proper subset of structures compared to another (the superset), learners initially adopt the subset value to avoid overgeneralization, only resetting to the superset upon unambiguous evidence. For instance, in acquiring the pro-drop parameter, learners start with the conservative non-pro-drop setting (the subset) and shift to the pro-drop setting only upon encountering null subjects in the input. This principle resolves potential learnability paradoxes in P&P theory, preventing erroneous generalizations from impoverished data.

The P&P framework evolved into Chomsky's Minimalist Program (MP) in the 1990s, where parameters are refined to minimize the computational burden on the language faculty, focusing on core operations like Merge and Agree. In MP, parameters are increasingly localized to the lexicon or functional features, reducing their number and shifting emphasis from syntax to interfaces with other cognitive systems. Post-2010 developments have explored how these parameters interface with computational linguistics, such as in probabilistic models of parameter optimization that simulate acquisition via Bayesian inference, bridging theoretical syntax with statistical learning algorithms. This extension addresses gaps in earlier models by integrating minimalist assumptions into computational simulations of language variation and change.

Logic

In formal logic, parameters often manifest as free variables within proofs and formulas, serving as placeholders for arbitrary but fixed elements from the domain of discourse. A free variable in a logical formula is one that is not bound by a quantifier, allowing it to function as a parameter that can be instantiated with specific terms during proof construction or interpretation. This treatment enables the generalization of proofs: for instance, a proof of a formula \phi(x) with free variable x (as a parameter) can be universally quantified to \forall x \phi(x) if x does not occur free in the assumptions. In automated reasoning and resolution-based theorem proving, free variables as parameters facilitate unification and substitution, ensuring that proofs remain schematic and applicable across instances.

The Herbrand universe plays a central role in this context, providing a domain constructed from the constant and function symbols of the language (introducing an auxiliary constant if none is present), without requiring variables. Defined as the set of all ground terms (variable-free terms) generated by applying function symbols to each other, the Herbrand universe allows proofs to be analyzed over a countable, term-generated structure, where free variables in open formulas are effectively parameterized by these ground terms. Herbrand's theorem, which states that a set of first-order clauses is satisfiable if and only if it has a Herbrand model (an interpretation over the Herbrand universe), relies on this parameterization to reduce satisfiability to propositional logic over ground instances, eliminating the need for infinite domains in proof search.

In model theory, parameters refer to specific elements of a structure incorporated into the defining formulas of sets or relations, enabling more expressive definitions than those without parameters. A set A in a structure \mathcal{M} is definable with parameters if there exists a first-order formula \phi(x, \bar{a}) with free variable x and parameters \bar{a} \in \mathcal{M} such that A = \{ b \in |\mathcal{M}| \mid \mathcal{M} \models \phi(b, \bar{a}) \}. This contrasts with parameter-free definability, where \phi(x) has no additional elements from \mathcal{M}, and allows for capturing structure-specific properties. For example, in (\mathbb{Z}, +) the set of even integers is definable without parameters by \exists y (x = y + y), whereas a singleton such as \{5\} is definable only with a parameter, via \phi(x, a): x = a with a = 5, since the automorphism x \mapsto -x moves 5 and thus no parameter-free formula can isolate it. Historically, model theory's emphasis on parameters emerged in the 1940s through works by Alfred Tarski and Abraham Robinson, who used them to study definable subsets and stability in structures, influencing the field's shift toward algebraic and geometric applications.

Examples in first-order logic illustrate parameters' utility: consider a sentence with parameters like \forall x (P(x, c) \rightarrow Q(x)), where c is a constant parameter naming an element in the model; this asserts a property relative to c, generalizing to arbitrary interpretations when c is treated as a free variable in proofs. Skolem functions extend this by replacing existentially quantified variables with functions of universal parameters, as in Skolemization: the formula \forall x \exists y P(x, y) becomes \forall x P(x, f(x)), where f is a Skolem function depending on the parameter x. Introduced by Thoralf Skolem in the 1920s as part of his work on the Löwenheim-Skolem theorem, these functions preserve satisfiability while eliminating existentials, facilitating resolution and model construction in first-order logic.
Historically, Kurt Gödel's completeness theorem of 1930 established that every consistent first-order theory has a model, with proofs involving the systematic instantiation of free variables (parameters) to construct Henkin-style witnesses or canonical models. Gödel's original proof reduces the problem to sentences in prenex form, using parameters to build a countable model from the theory's axioms and ensuring that free variables are adequately substituted to satisfy all instances. However, in intuitionistic logic, while free variables function similarly as parameters in proof rules (e.g., for quantifier introduction, provided they do not occur free in assumptions), a gap arises in completeness: Gödel's semantic completeness over classical models does not directly extend, requiring instead Kripke or Heyting semantics for an analogous theorem, as intuitionistic provability demands constructive witnesses rather than classical truth preservation. This distinction highlights how parameters in intuitionistic systems emphasize realizability over mere existence, addressing foundational differences in logical deduction.
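A naive enumeration of the Herbrand universe illustrates the term-generated domain described earlier in this section; the language (one constant, one unary function) and the depth bound are assumptions of this sketch:

    from itertools import product

    def herbrand_universe(constants, functions, depth):
        # Ground terms built from constants and function symbols up to `depth`.
        # `functions` maps each symbol to its arity; terms are nested tuples.
        terms = set(constants)
        for _ in range(depth):
            new_terms = set(terms)
            for symbol, arity in functions.items():
                for args in product(terms, repeat=arity):
                    new_terms.add((symbol,) + args)
            terms = new_terms
        return terms

    # A language with one constant a and one unary function f: the universe
    # is the infinite set {a, f(a), f(f(a)), ...}, enumerated here to depth 3.
    for term in sorted(herbrand_universe({"a"}, {"f": 1}, 3), key=str):
        print(term)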

Music

In music, acoustic parameters describe the fundamental properties of sound waves that contribute to auditory perception. Frequency, denoted as f, determines the pitch of a sound, corresponding to the number of vibrations per second in hertz (Hz), with higher frequencies producing higher pitches. Amplitude, represented as A, governs the loudness or intensity of the sound, directly related to the magnitude of pressure variations in the wave. Timbre, often described as the "color" or quality of a sound, arises from the complex interplay of harmonic overtones and their relative amplitudes, distinguishing, for example, a violin from a trumpet even at the same pitch and volume.

In musical composition, parameters such as tempo and key signatures provide structural guidelines within scores. Tempo, measured in beats per minute (BPM), dictates the overall pace, influencing the emotional character; for instance, a tempo of 60 BPM evokes a steady, deliberate feel, while 120 BPM suggests vivacity. Key signatures specify the tonal center by indicating the sharps or flats applied throughout a piece, establishing whether it is in a major (typically brighter) or minor (often somber) mode, and facilitating modulation between sections. These elements allow composers to manipulate time and harmony systematically, as seen in classical scores where tempo markings and key changes guide performer interpretation.

Sound synthesis employs adjustable parameters to generate and shape tones electronically. In frequency modulation (FM) synthesis, the modulation index, often denoted as I, controls the depth of frequency deviation applied by a modulator to a carrier wave, producing rich timbres through sidebands; values of I around 5 can yield bell-like sounds, while lower values approximate simpler tones. MIDI (Musical Instrument Digital Interface) controls extend this by transmitting parameters like note velocity (for dynamics), aftertouch (for expressive modulation), and continuous controllers for real-time adjustments to parameters such as filter cutoff or volume, enabling performers to interact with synthesizers intuitively.

Historically, the adoption of 12-tone equal temperament in the 18th century standardized tuning parameters, dividing the octave into 12 equal semitones (each approximately 100 cents), which facilitated chromatic harmony and modulation across all keys, as popularly associated with Johann Sebastian Bach's The Well-Tempered Clavier. This system, where the frequency ratio between consecutive semitones is 2^{1/12}, became the foundation for Western music, enabling keyboard instruments to play in any key without retuning. In post-2000 algorithmic composition, parameters like mapping functions extract and transform gestural data (e.g., pitch density or rhythmic variance) into musical outputs, allowing composers to define probabilistic rules for structure, as in tools that map input gestures to polyphonic textures for experimental pieces.
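The tuning and synthesis parameters above lend themselves to a brief sketch; the note choice, sample rate, and FM settings below are arbitrary illustrations:

    import math

    def equal_temperament_freq(semitones_from_a4, a4=440.0):
        # 12-TET: each semitone multiplies frequency by 2**(1/12).
        return a4 * 2.0 ** (semitones_from_a4 / 12.0)

    def fm_sample(t, carrier=440.0, modulator=220.0, index=5.0, amplitude=1.0):
        # One sample of simple FM: A sin(2 pi f_c t + I sin(2 pi f_m t)).
        # The modulation index I controls sideband richness; I around 5
        # gives the bell-like spectra described above.
        return amplitude * math.sin(
            2 * math.pi * carrier * t
            + index * math.sin(2 * math.pi * modulator * t)
        )

    print(equal_temperament_freq(3))   # three semitones above A4 -> ~523.25 Hz (C5)
    print([round(fm_sample(n / 44100.0), 3) for n in range(5)])  # first samples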

References

  1. [1]
    Parameter definition - Math Insight
    A parameter is a quantity that influences the output or behavior of a mathematical object but is viewed as being held constant.
  2. [2]
    Parameter -- from Wolfram MathWorld
    In math, a parameter is an argument of a function that is not varied in situations of interest, unlike variables. For example, a and b in an ellipse equation.
  3. [3]
    S.1 Basic Terminology | STAT ONLINE - Penn State
    A parameter is any summary number, like an average or percentage, that describes the entire population. The population mean μ (the greek letter ...
  4. [4]
    Parameter passing mechanism: pass-by-value - Emory CS
    Definitions: Formal parameter = a parameter variable. Actual parameter = a variable whose value is to be passed to some formal parameter.
  5. [5]
    Parameter - Etymology, Origin & Meaning
    From Greek para- "beside" + metron "measure," parameter originated in the 1650s in geometry, meaning a quantity defining conic sections' properties.
  6. [6]
    Model Parameter - an overview | ScienceDirect Topics
    Model parameters are defined as the values that influence the performance of a forecasting model, where optimal values are necessary for effective ...
  7. [7]
    [PDF] Some high points of Greek mathematics after Euclid
    Archimedes is probably the best-known of the Greek mathematicians following Euclid (certainly more commonly read than Apollonius – more accessible in several ...
  8. [8]
    [PDF] 3 Ancient Greek Mathematics
    Apollonius (225 BC) writes an eight-volume book on conic sections building on earlier work of. Menaechmusus (350 BC). • By 146 BC the Greek empire had fallen ...<|separator|>
  9. [9]
    Descartes' Mathematics - Stanford Encyclopedia of Philosophy
    Nov 28, 2011 · To speak of René Descartes' contributions to the history of mathematics is to speak of his La Géométrie (1637), a short tract included with ...
  10. [10]
    [PDF] Mathematical Technique and Physical Conception in Euler's ...
    Introduction. In the 1730s and early 1740s Euler studied the statics of thin elastic bands or laminae, researches that he first began to publish in 1732.
  11. [11]
    Gauss on least-squares and maximum-likelihood estimation
    Apr 2, 2022 · Gauss' 1809 discussion of least squares, which can be viewed as the beginning of mathematical statistics, is reviewed.
  12. [12]
    [PDF] The Euler spiral: a mathematical history - UC Berkeley EECS
    Sep 2, 2008 · This report traces the history of the Euler spiral, a beautiful and useful curve known by several other names, including “clothoid,” and “Cornu ...
  13. [13]
    The Modern History of Computing
    Dec 18, 2000 · During the late 1940s and early 1950s, with the advent of electronic computing machines, the phrase 'computing machine' gradually gave way ...
  14. [14]
    [PDF] The Dawn of Commercial Computing in the 1950s
    Stored program computer. ○ optimized for scientific calculations. ○ First machine installed in IBM World Hdqtrs. in NYC in 1952. “Clink, clank, ...
  15. [15]
    5.3 - The Multiple Linear Regression Model | STAT 501
    The word "linear" in "multiple linear regression" refers to the fact that the model is linear in the parameters, \beta_0, \beta_1, \ldots ... function is a sum of these ...
  16. [16]
    (PDF) Identifiability of Linear and Linear-in-Parameters Dynamical ...
    Sep 2, 2025 · In the case of a linear model, we provide precise definitions of several forms of identifiability, and we derive some novel, interrelated ...
  17. [17]
    [PDF] Exploring Parameter Sensitivity Analysis in Mathematical Modeling ...
    Oct 10, 2023 · Parameter sensitivity analysis explores how model output changes with input variations, using local and global methods, to understand how model ...
  18. [18]
    4.1.4.1. Linear Least Squares Regression
    Summary of curve fitting for parameter estimation.
  19. [19]
    A contribution to the mathematical theory of epidemics - Journals
    Luckhaus S and Stevens A (2023) Kermack and McKendrick Models on a Two-Scale Network and Connections to the Boltzmann Equations Mathematics Going Forward ...
  20. [20]
    Parameter identifiability and model selection for partial differential ...
    Mar 6, 2024 · The first notion is structural identifiability, which refers to the ability to uniquely determine the values of the model parameters given an ...
  21. [21]
    On structural and practical identifiability - ScienceDirect.com
    A model is structurally identifiable if a unique parameterization exists for any given model output. A parameter p_i is globally structurally identifiable [3], ...
  22. [22]
    Parameter Estimation for Differential Equation Models Using a ...
    In this paper, we propose parameter estimation methods for ordinary differential equation models (ODE) based on the local smoothing approach and a pseudo-least ...
  23. [23]
    [PDF] Global Sensitivity Analysis. The Primer - Andrea Saltelli
    Contents: 1 Introduction to Sensitivity Analysis; 1.1 Models and Sensitivity Analysis; 1.1.1 Definition; 1.1.2 Models; 1.1.3 Models and Uncertainty.
  24. [24]
    Calculus II - Parametric Equations and Curves
    Apr 10, 2025 · In this section we will introduce parametric equations and parametric curves (i.e. graphs of parametric equations). We will graph several ...
  25. [25]
    Parametric Equation of a Circle - Math Open Reference
    The parametric equations of a circle are x = r cos(t) and y = r sin(t), where r is the radius and t is the angle. If the center is not at the origin, add h and ...
  26. [26]
    Parametric Equations for Circles and Ellipses | CK-12 Foundation
    The standard equation for an ellipse is \frac{(x-h)^2}{a^2} + \frac{(y-k)^2}{b^2} = 1, where (h, k) is the center of the ellipse, and 2a and 2b are the lengths of ...
  27. [27]
    10.6: Parametric Equations - Mathematics LibreTexts
    Dec 26, 2024 · One of the reasons we parameterize a curve is because the parametric equations yield more information: specifically, the direction of the ...
  28. [28]
    Julius Plücker (1801 - 1868) - Biography - MacTutor
    The point coordinates used in both volumes are nonhomogeneous affine; in volume II the homogeneous line coordinates in a plane, formerly known as Plücker's ...
  29. [29]
    Calculus III - Parametric Surfaces - Pauls Online Math Notes
    Mar 25, 2024 · In this section we will take a look at the basics of representing a surface with parametric equations.
  30. [30]
    Parametric Surfaces | Calculus III - Lumen Learning
    The right circular cone with radius r = kh and height h has parameterization s(u, v) = \langle kv \cos u, kv \sin u, v \rangle, 0 \le u \le 2\pi, 0 \le v \le h. With a ...
  31. [31]
    World Web Math: Vector Calculus: Parametric versus Implicit - MIT
    You can think of an implicit definition as giving a test for whether or not a point lies on the shape. Conversely, an explicit or parametric definition ...
  32. [32]
  33. [33]
  34. [34]
    [PDF] Asymptotic Analysis and Singular Perturbation Theory
    We will begin by illustrating some basic issues in perturbation theory with simple algebraic equations. 1.2 Algebraic equations. The first two ...
  35. [35]
    [PDF] 172-184 • Uniform convergence and derivatives - UCLA Mathematics
    The first question we can ask is: if f_n converges uniformly to f, and the functions f_n are differentiable, does this imply that f is also differentiable? And ...
  36. [36]
  37. [37]
    [PDF] On the role played by the work of Ulisse Dini on implicit function ...
    Dec 21, 2012 · The prolegomena to the idea for the implicit function theorem can be traced both in the works of I. Newton, G.W. Leibniz, J. Bernoulli and L. ...
  38. [38]
    III. Contributions to the mathematical theory of evolution - Journals
    The object of the present paper is to discuss the dissection of abnormal frequency-curves into normal curves. The equations for the dissection of a frequency- ...
  39. [39]
    Gauss and the Invention of Least Squares - jstor
    When Gauss did publish on least squares, he went far beyond Legendre in both conceptual and technical development, linking the method to probability and ...
  40. [40]
    Parameter of a distribution - StatLect
    Examples of parametric families: Bernoulli (probability of success); Binomial (probability of success and number of trials); Poisson (expected value); Uniform ...
  41. [41]
    Normal distribution — Probability Distribution Explorer documentation
    The Normal distribution has two parameters, the location parameter μ, which determines the location of its peak, and the scale parameter σ, which is strictly ...
  42. [42]
    [PDF] Common Families of Distributions - Purdue Department of Statistics
    The location parameter µ simply shifts the pdf f(x) so that the shape of the graph is unchanged but the point on the graph that was above x = 0 for f(x) is ...
  43. [43]
    [PDF] Chapter 8 The exponential family: Basics - People @EECS
    The natural parameter space N is convex (as a set) and the cumulant function. A(η) is convex (as a function). If the family is minimal then A(η) is strictly ...
  44. [44]
    [PDF] 18 The Exponential Family and Statistical Applications
    For a distribution in the canonical one parameter Exponential family, the parameter η is called the natural parameter, and T is called the natural parameter ...
  45. [45]
    [PDF] Chapter 8: Markov Chains
    The transition probabilities of the Markov chain are p_{ij} = P(X_{t+1} = j | X_t ... We have been calculating hitting probabilities for Markov chains since Chapter 2, ...
  46. [46]
    [PDF] MARKOV CHAINS: BASIC THEORY 1.1. Definition and First ...
    The numbers p(i,j) are called the transition probabilities of the chain. Example 1. The simple random walk on the integer lattice Z^d is the Markov chain whose ...
  47. [47]
    [PDF] 1 Geometric Brownian motion
    Letting S(t) = S_0 e^{X(t)}, where X(t) = \sigma B(t) + \mu t is Brownian motion with drift \mu and variance \sigma^2, we solve for new values of \mu and \sigma (denoted \mu^*, \sigma^*), under which ...
  48. [48]
    Lesson 24: Sufficient Statistics - STAT ONLINE
    In general, if Y is a sufficient statistic for a parameter θ , then every one-to-one function of Y not involving θ is also a sufficient statistic for θ .
  49. [49]
    [PDF] 1 Sufficient statistics - Arizona Math
    We say T is a sufficient statistic if the statistician who knows the value of T can do just as good a job of estimating the unknown parameter θ as the ...
  50. [50]
    [PDF] Lévy processes - University of Warwick
    Historically they have always played a central role in the study of stochastic processes with some of the earliest work dating back to the early 1900s. The.
  51. [51]
    [PDF] Principles of Programming Languages - Names, Scopes, and Bindings
    Binding is the process of associating attributes with names. Names denote entities, and attributes describe their meaning. Scope is the region where a binding ...
  52. [52]
    The history of Fortran I, II, and III - ACM Digital Library
    Attitudes about Automatic Programming in the 1950s. Before 1954 almost all programming was done in machine language or assembly language.
  53. [53]
  54. [54]
  55. [55]
  56. [56]
    Function pass by value vs. pass by reference
    Pass by value means you are making a copy in memory of the actual parameter's value that is passed in, a copy of the contents of the actual parameter.
  57. [57]
    `main` function and command-line arguments (C++) | Microsoft Learn
    Aug 26, 2024 · The arguments for main allow convenient command-line parsing of arguments. The types for argc and argv are defined by the language. The names ...
  58. [58]
    Documentation - Generics - TypeScript
    A generic class has a similar shape to a generic interface. Generic classes have a generic type parameter list in angle brackets ( <> ) following the name of ...
  59. [59]
    [PDF] Understanding the Use of Lambda Expressions in Java - Danny Dig
    Over the last years, several mainstream object-oriented languages, such as Java 8, C# 3 and C++ 11, added lambda expressions to facilitate the functional ...
  60. [60]
    Learning representations by back-propagating errors - Nature
    Rumelhart, D. E., Hinton, G. E. & Williams, R. J. in Parallel Distributed Processing: Explorations in the Microstructure of Cognition. Vol. 1: Foundations ...
  61. [61]
    [PDF] Random Search for Hyper-Parameter Optimization
    This paper shows empirically and theoretically that randomly chosen trials are more efficient for hyper-parameter optimization than trials on a grid. Empirical ...
  62. [62]
    What is Finite Element Analysis (FEA)? - Ansys
    Finite element analysis (FEA) is the process of predicting an object's behavior based on calculations made with the finite element method (FEM).
  63. [63]
    Introduction: PID Controller Design
    In this tutorial we will introduce a simple, yet versatile, feedback compensator structure: the Proportional-Integral-Derivative (PID) controller.
  64. [64]
    [PDF] PID Control
    The controller gain is normalized by multiplying it either with the static process gain K or with the parameter a = K T_{del}/T. Integral gain is normalized by ...
  65. [65]
    Standard Tolerances in Manufacturing: ISO 2768, ISO 286, and GD&T
    Sep 23, 2024 · ISO 2768 is a widely used standard that defines general tolerances for parts manufactured through machining or other material removal processes.
  66. [66]
    5.3: Finite Element Analysis - Engineering LibreTexts
    Mar 28, 2025 · In this module we will outline the principles underlying most current finite element stress analysis codes, limiting the discussion to linear elastic analysis ...
  67. [67]
    10.6: RC Circuits - Physics LibreTexts
    Mar 2, 2025 · An RC circuit is a circuit containing resistance and capacitance. As presented in Capacitance, the capacitor is an electrical component that stores electric ...
  68. [68]
    What is Reynolds Number? - Ansys
    Jan 3, 2023 · The Reynolds number (Re), which calculates the ratio of inertial force to viscous force in a flow. A low Reynolds number below a certain threshold is known to ...
  69. [69]
    General ISO Geometrical Tolerances Per. ISO 2768 - Engineers Edge
    ISO 2768 and derivative geometrical tolerance standards are intended to simplify drawing specifications for mechanical tolerances.
  70. [70]
    [PDF] Systematic Review of Embodied Carbon Assessment and Reduction ...
    Sep 18, 2024 · The impact of embodied carbon can be assessed using the 100-year Global Warming Potential (GWP), which quantifies the energy that the emissions ...
  71. [71]
    [PDF] Radiative Forcing of Climate Change
    global mean climate sensitivity parameter, λ) which is similar for all the different types of forcings. Model investigations of responses to many of the ...
  72. [72]
    [PDF] The 20-parameter general circulation model
    4 parameters for the surface properties: emissivity \epsilon_s, albedo A_s, drag coefficient C_D and thermal inertia I, assumed to be constant over the whole surface ...
  73. [73]
    Carrying Capacity and Logistic Growth Rate - UTSA
    Oct 24, 2021 · A typical application of the logistic equation is a common model of population growth, originally due to Pierre-François Verhulst in 1838.
  74. [74]
    Review Carrying Capacity of Spatially Distributed Metapopulations
    Jun 10, 2020 · The logistic equation, with carrying capacity, K, and growth rate, r, has traditionally been used to describe dynamics of ecological populations ...
  75. [75]
    A conceptual guide to measuring species diversity - Roswell - 2021
    Feb 9, 2021 · Three metrics of species diversity – species richness, the Shannon index and the Simpson index – are still widely used in ecology, ...
  76. [76]
    [PDF] Uncertainties Guidance Note - IPCC AR5
    These guidance notes are intended to assist Lead Authors of the Fifth Assessment Report (AR5) in the consistent treatment of uncertainties across all three ...
  77. [77]
    [PDF] Summary for Policymakers
    A pathway with lower emissions (RCP1.9), which would correspond to a lower level of projected warming than RCP2.6, was not part of CMIP5. 16.
  78. [78]
    IPBES biodiversity loss parameters (merged summary)
    Merged summary of biodiversity loss parameters from the IPBES Post-2020 Reports and the IPBES Workshop on Biodiversity and Pandemics Report (2020).
  79. [79]
    Lectures on government and binding : Chomsky, Noam
    Mar 30, 2022 · Lectures on government and binding ; Publication date: 1981 ; Topics: Generative grammar, Government-binding theory (Linguistics) ; Publisher ...
  80. [80]
    Formal Principles of Language Acquisition | Semantic Scholar
    The authors of this book have developed a rigorous and unified theory that opens the study of language learnability to discoveries about the mechanisms of ...
  81. [81]
    [PDF] Second Language Acquisition and the Subset Principle
    Within the Principles and Parameters approach to Universal Grammar, children acquire language by setting the parameters to match the input data. Although UG ...
  82. [82]
    [PDF] The Minimalist Program - 20th Anniversary Edition Noam Chomsky
    the parameters of the initial state in one of the permissible ways. A specific choice of parameter settings determines a language in the technical sense that.
  83. [83]
    [PDF] Computational perspectives on minimalism - UCLA Linguistics
    (2010) Oxford Handbook of Linguistic Minimalism, pp.616-641. Edward P. Stabler. Computational perspectives on minimalism. While research in 'principles and ...
  84. [84]
    Intuitionistic Logic - Stanford Encyclopedia of Philosophy
    Sep 1, 1999 · Intuitionistic logic encompasses the general principles of logical reasoning which have been abstracted by logicians from intuitionistic mathematics.
  85. [85]
    Proof Theory - Stanford Encyclopedia of Philosophy
    Aug 13, 2018 · has no use for free variables. Thus free variables are discarded and all terms will be closed. All formulae of this system are therefore ...
  86. [86]
    BOOK REVIEWS 403 - jstor
    in the Herbrand universe, the value of f_n with arguments h_1, ..., h_n is just f_n(h_1, ..., h_n) (also an element of the Herbrand universe). One of the first steps ...
  87. [87]
    Model Theory - Stanford Encyclopedia of Philosophy
    Nov 10, 2001 · Model theory is the study of the interpretation of any language, formal or natural, by means of set-theoretic structures, with Alfred Tarski's truth definition ...
  88. [88]
    Model Theory
    In fact, identification of the definable sets in a structure has become a central ... φ(x) is an example of a first-order formula with free variable x. Traditional ...
  89. [89]
    CS 540 Lecture Notes: First-Order Logic - cs.wisc.edu
    Oct 14, 1998 · For example, (Ax)(Ey)P(x,y) is converted to (Ax)P(x, f(x)). f is called a Skolem function, and must be a brand new function name that does not ...
  90. [90]
    Skolem's Paradox - Stanford Encyclopedia of Philosophy
    Jan 12, 2009 · The Löwenheim-Skolem theorem says that if a first-order theory has infinite models, then it has models whose domains are only countable.
  91. [91]
    Kurt Gödel - Stanford Encyclopedia of Philosophy
    Feb 13, 2007 · The theorem as stated by Gödel in Gödel 1930 is as follows: a countably infinite set of quantificational formulas is satisfiable if and only if ...
  92. [92]
    [PDF] Henkin's Method and the Completeness Theorem
    Gödel's proof appeared in print in 1930 in [3]. An English translation of it can be found in the Collected Works of Kurt Gödel [5]. Gödel's proof was rather ...
  93. [93]
    Intuitionistic completeness of first-order logic - ScienceDirect.com
    We constructively prove completeness for intuitionistic first-order logic, iFOL, showing that a formula is provable in iFOL if and only if it is uniformly ...
  94. [94]
    [PDF] Timbre perception - MIT OpenCourseWare
    Mar 13, 2009 · In music, timbre is the quality of a musical note or sound or tone that distinguishes different types of sound production.
  95. [95]
    [PDF] Tone Quality - Timbre
    Tone Quality - Timbre. A pure tone (aka simple tone) consists of a single frequency, e.g. f = 100 Hz. Pure tones are rare in nature – natural sounds are ...
  96. [96]
    The Parameters of Sound - Accessible Oceans
    We often rely on five sound parameters in mapping data streams. I like to state them as Frequency, Amplitude, Timbre, Location, and Duration, or simply FATLD.
  97. [97]
    Basic Music Theory for Beginners – The Complete Guide
    Time Signature – The number of beats per measure; Tempo (BPM) – Indicates how fast or slow a piece of music plays; Strong and Weak Beats – Strong beats are ...
  98. [98]
    8. Major Keys and Key Signatures – Fundamentals, Function, and ...
    In this chapter, we will discuss how a key is established using the pitches of a major scale, how to determine a major key from a given key signature, and how ...
  99. [99]
    FM Synthesis - Carnegie Mellon University
    The index of modulation, \(I=\frac{D}{M}\), allows us to relate the depth of modulation, \(D\), the modulation frequency, \(M\), and the index of the Bessel ...
  100. [100]
    Overview of MIDI Controllers - Berklee Online
    Oct 24, 2025 · These parameters include everything from what notes are played, when they start/end, the attack velocity, etc. By changing the software ...
  101. [101]
    [PDF] The Establishment of Equal Temperament
    Without equal temperament, future Impressionist composers could not compose music that seemed to have no tonal center and future twelve-tone row composers ...