
Weight function

A weight function is a non-negative mathematical function that assigns relative importance or influence to different elements, points, or regions within a set, domain, or measure space, thereby modifying the outcome of computations to prioritize certain contributions over others. In its most general form, it appears as a function w: D \to [0, \infty), where D is the domain, and it is often used to weight data points or variables in statistical analyses, numerical methods, or optimization problems. In approximation theory and numerical analysis, weight functions play a crucial role in defining inner products for orthogonal polynomials and functions, ensuring convergence and normalization in expansions such as Chebyshev series. For instance, on the interval [-1, 1], the constant weight function w(x) = 1 yields the Legendre polynomials, while w(x) = \frac{1}{\sqrt{1 - x^2}} produces the Chebyshev polynomials of the first kind, facilitating efficient approximations in numerical integration and spectral methods. These functions must satisfy integrability conditions, such as w(x) \geq 0 and w not identically zero on any subinterval, to guarantee the existence of orthogonal bases.

In graph theory and network analysis, a weight function assigns real-valued weights to edges or vertices of a graph, enabling the modeling of distances, costs, or capacities in algorithms for shortest paths, minimum spanning trees, and network flows. Formally, for a graph G = (V, E), it is defined as w: E \to \mathbb{R}, where positive weights often represent lengths or expenses, influencing optimization outcomes such as shortest-path routes. Applications extend to engineering fields such as fracture mechanics, where weight functions compute stress intensity factors for crack propagation analysis. In statistics and probability, weight functions adjust for biases or variances in data, as seen in weighted regression or meta-analysis, where they amplify reliable observations to improve estimate accuracy. They also appear in physics and signal processing, emphasizing frequency bands in transforms or balancing terms in variational principles. Overall, weight functions provide a versatile tool across disciplines, adapting computations to reflect real-world asymmetries and priorities.

Discrete Weight Functions

Definition and Properties

A discrete weight function is a non-negative function w: S \to [0, \infty), where S is a discrete set, typically finite or countable, that assigns relative importance to elements of S. It induces weighted sums of the form \sum_{s \in S} f(s) w(s) for suitable functions f: S \to \mathbb{R}. Such functions are typically required to satisfy the summability condition \sum_{s \in S} w(s) < \infty to ensure the total weight is finite. A common property is non-negativity (w(s) \geq 0 for all s \in S), which preserves the positive nature of weighted sums. Normalization often applies, especially when \sum_{s \in S} w(s) = 1, allowing w to serve as a probability mass function in probabilistic contexts. Examples include the uniform weight function on a finite set S = \{1, 2, \dots, n\}, defined by w(s) = 1/n, which yields the arithmetic mean \frac{1}{n} \sum_{s=1}^n f(s). In probability, the weights of a binomial distribution, w(k) = \binom{n}{k} p^k (1-p)^{n-k} for k = 0, 1, \dots, n and 0 < p < 1, form a normalized discrete weight function summing to 1.
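As a concrete illustration, the short Python sketch below evaluates weighted sums for the two examples above; the helper name weighted_sum and the toy values are assumptions made for this example only.

```python
import numpy as np
from math import comb

def weighted_sum(f_vals, weights):
    """Weighted sum sum_s f(s) w(s) over a finite set (illustrative helper)."""
    f_vals = np.asarray(f_vals, dtype=float)
    weights = np.asarray(weights, dtype=float)
    assert np.all(weights >= 0), "weights must be non-negative"
    return float(np.sum(f_vals * weights))

# Uniform weights w(s) = 1/n reproduce the arithmetic mean.
f = [2.0, 4.0, 6.0, 8.0]
uniform_w = [1.0 / len(f)] * len(f)
print(weighted_sum(f, uniform_w))            # 5.0, the arithmetic mean

# Binomial weights w(k) = C(n,k) p^k (1-p)^(n-k) sum to 1,
# so they act as a probability mass function.
n, p = 10, 0.3
binom_w = [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]
print(sum(binom_w))                          # ~1.0 (normalized total weight)
print(weighted_sum(range(n + 1), binom_w))   # ~3.0, the binomial mean n*p
```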

Applications in Statistics

In statistics, discrete weight functions play a central role in estimation procedures that account for unequal reliability or representation among data points, enhancing the accuracy of inferences from heterogeneous samples. The weighted arithmetic mean provides a basic yet essential example, defined as \mu_w = \frac{\sum_{i=1}^n x_i w_i}{\sum_{i=1}^n w_i}, where x_i are the observations and w_i > 0 are positive weights; it reduces to the unweighted arithmetic mean when all w_i are equal. This estimator is unbiased for the population mean when weights reflect inverse sampling probabilities or relative precisions. In survey sampling, weights are typically proportional to subgroup population sizes to correct for oversampling or undersampling; for instance, in stratified designs, the weight for stratum h is w_h = N_h / n_h, where N_h is the stratum population size and n_h is the sample size, yielding a population-representative mean from disproportionate allocation.

Inverse-variance weighting extends this principle in meta-analysis to combine effect estimates from independent studies, assigning w_i = 1 / \sigma_i^2 to each study's estimate, where \sigma_i^2 is its sampling variance. The resulting pooled estimate is \hat{\theta} = \frac{\sum_{i=1}^k w_i \theta_i}{\sum_{i=1}^k w_i}, which achieves minimum variance among unbiased linear combinations under fixed-effect models, with overall variance 1 / \sum_i w_i. This reduces estimation error relative to equal weighting by emphasizing studies with higher precision (smaller \sigma_i^2), since the pooled precision is the sum of the individual precisions; for example, combining two studies with variances 1 and 4 yields a pooled variance of 0.8, versus 1.25 for unweighted averaging.

Weighted least squares (WLS) regression applies discrete weights to address heteroscedasticity in linear models, minimizing the objective \sum_{i=1}^n w_i (y_i - \mathbf{x}_i^T \boldsymbol{\beta})^2, with w_i = 1 / \mathrm{Var}(\epsilon_i) to downweight noisier observations. Ordinary least squares (OLS), by contrast, assumes homoscedasticity and weights all residuals equally, potentially leading to inefficient estimates or biased standard errors. The WLS estimator is \hat{\boldsymbol{\beta}} = (X^T W X)^{-1} X^T W \mathbf{y}, where W is diagonal with entries w_i, producing more reliable coefficient estimates when variances are known or estimated. Beyond these linear models, discrete weights handle heteroscedasticity in broader regression contexts by stabilizing variance.

In Monte Carlo simulation, importance sampling employs weights w_i = p(x_i) / q(x_i) for samples x_i drawn from a proposal distribution q to unbiasedly estimate target expectations E_p[f(X)], often drastically cutting variance for rare events compared to crude Monte Carlo. In kernel density estimation for discrete data, weights adjust contributions from observed points to form smoothed probability mass functions via discrete kernels, accommodating clustered or importance-sampled inputs.
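A minimal Python sketch of inverse-variance pooling, reproducing the two-study variance example above; the function names and the illustrative effect estimates are assumptions, not taken from any particular package.

```python
import numpy as np

def weighted_mean(x, w):
    """Weighted arithmetic mean sum(w_i x_i) / sum(w_i)."""
    x, w = np.asarray(x, float), np.asarray(w, float)
    return np.sum(w * x) / np.sum(w)

def inverse_variance_pool(estimates, variances):
    """Fixed-effect pooling with weights w_i = 1 / sigma_i^2."""
    w = 1.0 / np.asarray(variances, float)
    pooled = weighted_mean(estimates, w)
    pooled_var = 1.0 / np.sum(w)      # variance of the pooled estimate
    return pooled, pooled_var

# Two studies with sampling variances 1 and 4 (the example in the text):
est, var = inverse_variance_pool([0.5, 0.9], [1.0, 4.0])
print(var)                  # 0.8, the pooled variance under inverse-variance weighting
print((1.0 + 4.0) / 4)      # 1.25, the variance of the plain unweighted average
```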

Applications in Mechanics

In classical mechanics, discrete weight functions play a fundamental role in describing the dynamics of systems composed of point masses, where the weights w_i correspond to the masses m_i of individual particles. The position vector of the center of mass \mathbf{r}_{cm} for such a system is given by the weighted average \mathbf{r}_{cm} = \frac{\sum_i m_i \mathbf{r}_i}{\sum_i m_i}, which determines the overall translational motion as if all mass were concentrated at this point. This formulation allows the acceleration of the center of mass to be computed from the net external force via \sum_i m_i \mathbf{a}_i = M \mathbf{a}_{cm}, where M = \sum_i m_i, simplifying the analysis of multi-particle interactions under Newtonian laws. An extension of this weighted summation appears in rotational dynamics, particularly for moments of inertia, where I = \sum_i m_i d_i^2 quantifies the distribution of mass relative to an axis of rotation, with d_i the perpendicular distance from the axis to the i-th particle.

In equilibrium conditions for rigid bodies, the system requires both a vanishing net force \sum_i \mathbf{F}_i = 0 and a vanishing net torque \sum_i \boldsymbol{\tau}_i = 0, where the torques \boldsymbol{\tau}_i = \mathbf{r}_i \times \mathbf{F}_i incorporate lever arms as effective weights in the rotational balance. Here, the weights w_i can represent either masses in inertial contexts or lever arms in static torque calculations, ensuring no net rotation or translation. Practical examples illustrate these principles, such as balancing a mobile, where weights w_i suspended at signed positions d_i along a horizontal beam achieve equilibrium when the clockwise and counterclockwise torques balance, i.e., \sum_i w_i d_i = 0 about the pivot. In Newtonian particle systems modeling rigid bodies, the discrete masses interact via constraints that maintain fixed distances, enabling computation of overall motion through weighted aggregates like the center of mass.

This discrete approach also serves as an approximation for continuous mass distributions, replacing integrals like \mathbf{r}_{cm} = \frac{1}{M} \int \mathbf{r} \, dm with finite sums over point masses that converge to the continuum limit as the number of particles increases. Conceptually, these mechanical applications parallel the weighted mean in statistics but apply to quantities in physical space, describing deterministic equilibrium and motion.
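The discrete formulas above translate directly into code; the following Python sketch (the helper names, toy masses, and axis choice are assumptions for illustration) computes a center of mass and a moment of inertia about the z-axis.

```python
import numpy as np

def center_of_mass(masses, positions):
    """r_cm = sum(m_i r_i) / sum(m_i) for a system of point masses."""
    m = np.asarray(masses, float)            # weights w_i = m_i
    r = np.asarray(positions, float)         # shape (n_particles, dim)
    return (m[:, None] * r).sum(axis=0) / m.sum()

def moment_of_inertia(masses, positions, axis_point, axis_dir):
    """I = sum(m_i d_i^2), with d_i the perpendicular distance to the axis."""
    m = np.asarray(masses, float)
    r = np.asarray(positions, float) - np.asarray(axis_point, float)
    u = np.asarray(axis_dir, float)
    u = u / np.linalg.norm(u)
    # Perpendicular component: r_i - (r_i . u) u, then take its length.
    d = np.linalg.norm(r - np.outer(r @ u, u), axis=1)
    return float(np.sum(m * d**2))

masses = [1.0, 2.0, 3.0]
pos = [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 2.0, 0.0]]
print(center_of_mass(masses, pos))                           # [0.333..., 1.0, 0.0]
print(moment_of_inertia(masses, pos, [0, 0, 0], [0, 0, 1]))  # 2*1 + 3*4 = 14.0
```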

Continuous Weight Functions

Definition and Properties

A weight function in the continuous case is a non-negative measurable function w: \Omega \to [0, \infty), where \Omega \subseteq \mathbb{R}^n is a measurable domain, that induces weighted integrals of the form \int_{\Omega} f(x) w(x) \, dx for suitable integrable functions f: \Omega \to \mathbb{R}. Such functions are typically taken to be locally integrable and often satisfy the global integrability condition \int_{\Omega} w(x) \, dx < \infty, ensuring the total weight is finite. The weighted integral defines a new measure \mu on \Omega via d\mu(x) = w(x) \, dx, where dx denotes the Lebesgue measure; in this context, w acts as the Radon-Nikodym derivative of \mu with respect to the Lebesgue measure, since \mu is absolutely continuous with respect to it. Key properties include non-negativity (w(x) \geq 0 almost everywhere), which preserves the positive nature of the induced measure, and integrability, which controls the scale of the weighting. Normalization is common in applications, particularly when \int_{\Omega} w(x) \, dx = 1, rendering w a probability density function.

Representative examples illustrate these concepts. The Gaussian weight w(x) = \frac{1}{\sqrt{2\pi}} \exp\left( -\frac{x^2}{2} \right) on \mathbb{R} serves as the probability density of the standard normal distribution and is normalized with total weight 1. The uniform weight w(x) = 1 on the interval [0,1] yields the standard Lebesgue measure restricted to that domain, with total weight 1. Power weights, such as w(x) = x^{\alpha} for \alpha > -1 on [0,1], are integrable and commonly used in the computation of moments \int_0^1 x^k x^{\alpha} \, dx, arising in the study of orthogonal polynomials and probability distributions. The conceptual roots of continuous weight functions trace to the Riemann-Stieltjes integral, introduced by Thomas Joannes Stieltjes in 1894 as a generalization of the Riemann integral allowing integration against a non-constant integrator function, and to Henri Lebesgue's theory of measure and integration of 1902, which formalized weighted measures through absolute continuity and the Radon-Nikodym theorem.
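A quick numerical check of these examples, using a simple midpoint rule in Python; the grid resolution, truncation interval, and variable names are arbitrary choices made for this illustration.

```python
import numpy as np

# Midpoint-rule check that the Gaussian weight
# w(x) = exp(-x^2/2) / sqrt(2*pi) has total weight ~ 1 on a wide interval.
dx = 1e-4
x = np.arange(-10, 10, dx) + dx / 2
w_gauss = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
print(np.sum(w_gauss) * dx)          # ~1.0: w is a probability density

# Power weight w(x) = x^alpha on [0, 1]: the moment
# int_0^1 x^k * x^alpha dx equals 1 / (k + alpha + 1).
alpha, k = 0.5, 2
t = np.arange(0, 1, dx) + dx / 2
print(np.sum(t**k * t**alpha) * dx)  # ~1/3.5 = 0.2857...
```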

Applications in Analysis

In analysis, continuous weight functions play a fundamental role in defining weighted integrals, which generalize the standard Lebesgue integral by incorporating a non-constant density. For a measurable set E in \mathbb{R}^n and a positive integrable weight function w: \mathbb{R}^n \to (0, \infty), the weighted volume is given by V_w(E) = \int_E w(x) \, dx, where dx denotes the Lebesgue measure; this measures the "size" of E with varying importance across regions based on w(x). In multivariable settings, change-of-variables formulas apply similarly to weighted integrals, preserving the structure under diffeomorphisms, while Fubini's theorem extends to iterated weighted integrals over product spaces, allowing computation as \int_{E \times F} w(x,y) \, dx \, dy = \int_E \left( \int_F w(x,y) \, dy \right) dx provided the integrals exist, facilitating evaluations in higher dimensions.

Weighted averages, another key application, compute a function's "center of mass" relative to the weight. For an integrable function f on a domain \Omega, the weighted average is \langle f \rangle_w = \frac{\int_\Omega f(x) w(x) \, dx}{\int_\Omega w(x) \, dx}, assuming the denominator is finite and positive; this normalizes the integral to account for the total weight. In probability theory, this aligns with the expectation of a continuous random variable X with density p(x), where E[X^k] = \int x^k p(x) \, dx serves as the k-th moment, treating p as the weight and illustrating how weights encode distributional emphasis.

Weight functions also underpin measure theory by inducing absolutely continuous measures. Specifically, the measure \mu(E) = \int_E w(x) \, dx is absolutely continuous with respect to the Lebesgue measure \lambda, with Radon-Nikodym derivative d\mu / d\lambda = w, ensuring \mu vanishes on Lebesgue-null sets. This framework supports convergence theorems adapted to weights; for instance, a weighted version of the dominated convergence theorem states that if |f_n(x)| \leq g(x) pointwise for a dominating function g and f_n \to f almost everywhere, then \int f_n w \, dx \to \int f w \, dx under suitable integrability of g w, enabling limits under weighted domination in analytical proofs.

Representative examples highlight these applications. In the study of orthogonal polynomials, the moments \int_a^b x^k w(x) \, dx (for an interval [a,b] and suitable w > 0) determine the polynomial sequence's properties, as these integrals supply the data driving orthogonalization via Gram-Schmidt processes. In geometry, barycentric coordinates express a point P inside a simplex as a weighted combination P = \sum_i \lambda_i V_i of its vertices V_i, with \sum_i \lambda_i = 1 and \lambda_i \geq 0, where the weights \lambda_i are proportional to signed volumes of sub-simplices, facilitating interpolation and convex combinations. Discrete weighted sums can approximate these continuous constructs, such as via Riemann sums converging to weighted integrals.
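As a sketch of the weighted-average formula, the following Python snippet approximates \langle f \rangle_w by a midpoint rule; with the unnormalized Gaussian weight w(x) = e^{-x^2/2} it recovers the first two moments of the standard normal distribution. The helper name, grid size, and truncation interval are assumptions of this example.

```python
import numpy as np

def weighted_average(f, w, a, b, n=200000):
    """Approximate <f>_w = (int f w dx) / (int w dx) on [a, b] by a midpoint rule."""
    dx = (b - a) / n
    x = a + (np.arange(n) + 0.5) * dx
    return np.sum(f(x) * w(x)) / np.sum(w(x))   # the factor dx cancels in the ratio

# Unnormalized Gaussian weight; the normalizing constant cancels in <f>_w.
w = lambda x: np.exp(-x**2 / 2)
print(weighted_average(lambda x: x,    w, -10, 10))  # ~0.0, the mean
print(weighted_average(lambda x: x**2, w, -10, 10))  # ~1.0, the second moment
```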

Applications in Approximation Theory

In approximation theory, continuous weight functions play a pivotal role in defining norms for measuring errors, particularly in spaces where certain regions of the domain require emphasis. The weighted L^p norm is defined as \|f\|_{p,w} = \left( \int_a^b |f(x)|^p w(x) \, dx \right)^{1/p} for 1 \leq p < \infty, where w(x) > 0 is a continuous weight function on the interval [a, b]. This norm facilitates the study of best approximation by polynomials or other functions in weighted Lebesgue spaces, with convergence rates depending on the function's smoothness and the weight's membership in classes like the Muckenhoupt A_p class, which ensures boundedness of the Hardy-Littlewood maximal operator. Such weighted spaces are essential for handling functions with singularities or varying behavior, enabling tailored error minimization.

Chebyshev approximation exemplifies the use of weights in minimax problems, where the objective is to minimize the maximum weighted error \|(f - p) w\|_\infty = \max_{x \in [-1,1]} |f(x) - p(x)| \, w(x) for a polynomial p of degree at most n. A canonical choice is w(x) = 1 / \sqrt{1 - x^2} on [-1, 1], which corresponds to the weight for Chebyshev polynomials of the first kind and leads to near-minimax approximations via truncated Chebyshev series. The equioscillation theorem characterizes the unique best approximation: the weighted error attains its maximum magnitude at n+2 points with alternating signs, ensuring optimality and guiding computational algorithms like the Remez exchange method. This framework extends to generalized weights, preserving equioscillation properties for broader function classes.

Weight functions also enhance interpolation techniques, adapting classical methods like Lagrange and Hermite interpolation to non-uniform sampling or varying importance by incorporating weights that reflect data density or error priorities. In Lagrange interpolation, weights can modify the basis functions to emphasize regions with higher w(x), improving stability for unevenly distributed nodes; similarly, Hermite interpolation with weights accounts for derivative conditions in weighted norms. A key application arises in numerical quadrature, where Gauss quadrature rules are constructed for a given weight w(x), using the roots of the orthogonal polynomials associated with w as nodes and the corresponding Christoffel numbers as quadrature weights, yielding exact results for polynomials of degree up to 2n-1. These rules underpin efficient computation of weighted integrals, directly supporting error bounds and adaptive schemes in function reconstruction.

Representative examples illustrate these concepts in practice. Weighted least-squares approximation seeks the polynomial p minimizing \int_a^b |f(x) - p(x)|^2 w(x) \, dx, equivalent to orthogonal projection onto the polynomial subspace with respect to the inner product \langle f, g \rangle_w = \int_a^b f(x) g(x) w(x) \, dx, which generates orthogonal bases like weighted Legendre or Chebyshev polynomials for stable fitting. This approach is particularly effective for functions with heteroscedastic errors, as in geophysical data analysis. In Padé approximation, weights enter through moment problems or Stieltjes integrals, where approached Padé approximants built from truncated orthogonal expansions with respect to w approximate Stieltjes functions, improving convergence for analytic functions near branch points or poles.
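Gauss quadrature rules for specific weights are available in NumPy's polynomial module; the brief sketch below uses them to integrate monomials exactly against the weights w(x) = 1 and w(x) = 1/\sqrt{1-x^2}, as an illustration rather than a prescribed implementation (the degrees and test integrands are arbitrary).

```python
import numpy as np

# Gauss-Legendre rule: nodes and weights for the weight w(x) = 1 on [-1, 1].
# An n-point rule integrates polynomials up to degree 2n - 1 exactly.
nodes, weights = np.polynomial.legendre.leggauss(5)
print(np.sum(weights * nodes**8), 2 / 9)      # int_{-1}^{1} x^8 dx = 2/9, exact

# Gauss-Chebyshev rule: weight w(x) = 1/sqrt(1 - x^2) on [-1, 1].
# Here int_{-1}^{1} x^2 / sqrt(1 - x^2) dx = pi / 2.
cnodes, cweights = np.polynomial.chebyshev.chebgauss(5)
print(np.sum(cweights * cnodes**2), np.pi / 2)
```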

Advanced Mathematical Structures

Inner Products and Orthogonality

In the analysis of continuous functions, a weighted inner product is defined on the space of square-integrable functions L^2_w(\Omega) over a domain \Omega \subseteq \mathbb{R} as \langle f, g \rangle_w = \int_\Omega f(x) \overline{g(x)} w(x) \, dx, where w(x) > 0 is a positive weight function ensuring the integral converges, and the bar denotes complex conjugation for generality. This form satisfies the axioms of an inner product, making L^2_w(\Omega) a Hilbert space when equipped with the induced norm \|f\|_w = \sqrt{\langle f, f \rangle_w}. The norm measures the "size" of functions with respect to the measure w(x) \, dx, which is particularly useful for spaces where uniform weighting (w \equiv 1) fails to capture the relevant geometry or probability distribution.

Orthogonal polynomials arise as bases in these weighted spaces, forming a sequence \{p_n\}_{n=0}^\infty of polynomials satisfying \langle p_m, p_n \rangle_w = h_n \delta_{mn}, where h_n > 0 is a normalization constant and \delta_{mn} is the Kronecker delta (zero for m \neq n). Classical families include the Legendre polynomials \{P_n\} on [-1, 1] with w(x) = 1, where \langle P_m, P_n \rangle_w = \frac{2}{2n+1} \delta_{mn}, and the Hermite polynomials \{H_n\} on \mathbb{R} with w(x) = e^{-x^2}, where \langle H_m, H_n \rangle_w = \sqrt{\pi} \, 2^n n! \, \delta_{mn}. These polynomials satisfy three-term recurrence relations of the form x p_n(x) = a_n p_{n+1}(x) + b_n p_n(x) + c_n p_{n-1}(x), with coefficients a_n, b_n, c_n determined by the weight, enabling efficient computation and stability in expansions.

Key properties of orthogonal polynomials in L^2_w include completeness: the linear span of \{p_n\} is dense in L^2_w(\Omega), allowing any function in the space to be approximated arbitrarily well by finite linear combinations. The Christoffel-Darboux formula expresses the reproducing kernel K_n(x, y) = \sum_{k=0}^n \frac{p_k(x) p_k(y)}{h_k} as K_n(x, y) = \frac{a_n}{h_n} \left[ p_{n+1}(x) p_n(y) - p_n(x) p_{n+1}(y) \right] / (x - y), which facilitates summation of series, interpolation, and analysis of convergence in weighted spaces.

These structures find applications in spectral methods for partial differential equations (PDEs), where solutions are expanded in bases tailored to the problem's weight (e.g., Hermite polynomials for unbounded domains), yielding exponential convergence for smooth data. Generalized Fourier series extend classical expansions by using such bases for non-uniform measures, enabling decomposition of functions in L^2_w. Additionally, the Gram-Schmidt orthogonalization process adapts naturally to the weighted inner product, iteratively projecting out components along previously constructed elements to build orthogonal bases from arbitrary starting sets.
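The orthogonality relations quoted above can be verified numerically with Gauss quadrature matched to each weight; the Python sketch below does so for Legendre and Hermite polynomials using NumPy's polynomial module (the chosen degrees and quadrature order are arbitrary illustrative choices).

```python
import numpy as np
from numpy.polynomial import legendre, hermite

# Legendre: <P_m, P_n>_w with w(x) = 1 on [-1, 1] equals 2/(2n+1) * delta_mn.
x, w = legendre.leggauss(20)                  # quadrature for the weight w(x) = 1
P = lambda n, t: legendre.Legendre.basis(n)(t)
print(np.sum(w * P(2, x) * P(3, x)))          # ~0, orthogonal
print(np.sum(w * P(3, x) * P(3, x)), 2 / 7)   # ~2/7, the normalization h_3

# Hermite: <H_m, H_n>_w with w(x) = exp(-x^2) equals sqrt(pi) * 2^n * n! * delta_mn.
xh, wh = hermite.hermgauss(20)                # Gauss-Hermite absorbs exp(-x^2)
H = lambda n, t: hermite.Hermite.basis(n)(t)
print(np.sum(wh * H(2, xh) * H(4, xh)))       # ~0, orthogonal
print(np.sum(wh * H(3, xh)**2), np.sqrt(np.pi) * 2**3 * 6)  # ~sqrt(pi)*48
```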

Weight Functions in Optimization

In optimization, weight functions play a crucial role in prioritizing specific regions or components within objective functions or constraints, allowing for more nuanced control over the solution process. In variational settings, such as the calculus of variations, weight functions are incorporated into the objective functional to emphasize certain domains; for instance, the problem of minimizing \int f(x, u(x)) w(x) \, dx over a class of admissible functions u uses w(x) to amplify the influence of u(x) in regions where high accuracy is desired, such as in optimal control or path planning problems. This weighted formulation ensures that the Euler-Lagrange equations derived from the functional account for spatially varying priorities, leading to solutions that balance global and local optimality.

In constrained optimization, weight functions appear in penalty methods to enforce feasibility while approximating the original problem. Specifically, the penalty term is scaled by a weight parameter \mu > 0, as in the exact penalty function P(x) = f(x) + \mu \sum_i |g_i(x)| for constraints g_i(x) \leq 0, where \mu is chosen sufficiently large so that minimizers of the unconstrained penalized problem coincide with those of the constrained one under suitable regularity conditions like constraint qualifications. In infinite-dimensional settings, such as functional optimization, weight functions can scale penalties according to violation severity across the domain to promote convergence to feasible solutions. This approach is particularly effective for nonsmooth exact penalties, whose exactness guarantees recovery of constrained minimizers. A special case arises in weighted least-squares optimization, where weights prioritize data points in the objective, serving as a foundational technique in statistical estimation.

Examples of weight functions in optimization algorithms include weighted gradient descent variants, which adapt the contribution of individual samples to focus on challenging directions; for instance, in re-weighted gradient descent, sample weights w_i = \exp(\gamma \min(\ell_i, \tau)) are computed from individual losses \ell_i to upweight harder examples via distributionally robust optimization, accelerating convergence in tasks like image classification and achieving up to 1% accuracy gains over standard methods. In optimal control problems, weighted controls incorporate w(x) into the dynamics, such as in the controlled heat equation \partial_t y - \Delta y = w(x) u(t), where w(x) reflects the varying effect of the control across the state space with spatially heterogeneous actuators, as seen in time-optimal bang-bang controls and norm-optimal problems that minimize the control norm.

For hybrid discrete-continuous optimization, such as mixed-integer programming approximations, weight functions guide branching decisions to approximate solutions efficiently. In weighted iterated local branching, binary variables are grouped by weights combining objective coefficients, constraint impacts, and violation counts (e.g., F = F_1 + F_2 + F_3, where F_1 weights contributions to the objective), allowing more flips in promising groups when exploring neighborhoods and outperforming unweighted methods on benchmark instances by reducing search time. This weighted strategy facilitates continuous relaxations of discrete constraints, bridging discrete and continuous domains in large-scale approximations.
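A minimal sketch of the exact-penalty idea on a one-dimensional toy problem; the objective, constraint, penalty weights, and grid search below are invented purely for illustration. Once the penalty weight exceeds the constraint's multiplier (here 2), the unconstrained minimizer of the penalized objective lands exactly on the constrained optimum.

```python
import numpy as np

# Minimize f(x) = (x - 2)^2 subject to x <= 1 via the exact penalty
# P(x) = f(x) + mu * max(0, x - 1), scanning mu over a few values.
f = lambda x: (x - 2.0) ** 2
g = lambda x: x - 1.0                        # constraint g(x) <= 0

def penalized_argmin(mu, grid=np.linspace(-1, 3, 400001)):
    P = f(grid) + mu * np.maximum(0.0, g(grid))   # weighted penalty term
    return grid[np.argmin(P)]

for mu in [0.5, 1.0, 4.0]:
    print(mu, penalized_argmin(mu))
# mu = 0.5 and 1.0: the minimizer sits at x > 1 (penalty weight too small).
# mu = 4.0: the minimizer is x = 1.0, the constrained optimum.
```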

Applications in Computing and Engineering

Machine Learning and Data Analysis

In machine learning and data analysis, weight functions are essential for addressing class imbalance and facilitating attention mechanisms by modulating the influence of data points in optimization objectives. These functions, whether discrete assignments to samples or continuous variations over feature spaces, enable models to prioritize underrepresented classes or relevant regions, improving performance on skewed datasets. Building on foundational weighted averages from statistics, modern applications integrate weights directly into loss functions and kernel computations to mitigate bias toward majority classes.

Class weights in classification algorithms assign higher penalties to errors on minority classes, typically denoted as w_c for class c, to balance training datasets. In weighted cross-entropy loss, the objective becomes \sum_c w_c \sum_{i \in c} -\log p(y_i | x_i), where p is the predicted probability, effectively upweighting rare classes to prevent model dominance by frequent ones. This approach is particularly effective in multi-class settings with severe imbalance, as demonstrated in surveys of imbalanced learning techniques. For support vector machines (SVMs), class weights are often set as w_i = 1/n_c for samples i in class c with size n_c, adjusting the hinge loss to \sum_i w_i \max(0, 1 - y_i (\mathbf{w} \cdot \mathbf{x}_i + b)), which enhances separation of imbalanced classes without resampling. This weighting strategy has been shown to improve SVM performance on datasets where minority classes constitute less than 10% of samples.

Attention mechanisms in transformer models employ weight functions computed via a softmax over similarity scores, yielding weighted sums of input embeddings \sum_j w_{ij} h_j, where h_j are hidden states and w_{ij} = \frac{\exp(Q K^T_{ij}/\sqrt{d_k})}{\sum_k \exp(Q K^T_{ik}/\sqrt{d_k})}. These weights dynamically emphasize relevant tokens in sequences, enabling efficient parallel computation and state-of-the-art results in sequence modeling tasks. Introduced in the Transformer architecture, this mechanism discards recurrent structures in favor of pure attention, achieving BLEU scores approximately 2 points higher than prior state-of-the-art RNN-based models on machine translation benchmarks.

In kernel methods, weighted kernels adapt Gaussian processes to non-stationary data by incorporating weight functions into the covariance function, such as K(x,y) = k(x,y) w(x) w(y), where k is a base kernel like the RBF kernel and w modulates amplitude variations across inputs. This formulation allows Gaussian processes to model heteroscedastic noise or spatially varying smoothness, as explored in foundational treatments of covariance functions. For non-stationary extensions, spectral mixture kernels further parameterize w via frequency components, improving predictive accuracy on datasets with trend shifts, such as environmental monitoring data.

Boosting algorithms exemplify discrete weight updates for ensemble learning; in AdaBoost, sample weights w_i are iteratively adjusted based on errors, increasing for misclassified points via w_i \leftarrow w_i \exp(\alpha_t I(h_t(x_i) \neq y_i)), where \alpha_t is the weak learner's weight. This multiplicative update focuses subsequent rounds on hard examples, converging to low training error under weak learning assumptions, and the original formulation demonstrated exponential improvement over single weak classifiers. Similarly, in neural networks for image segmentation, spatial weights in the loss function assign higher values to boundary or foreground pixels, modifying the cross-entropy loss to \sum_{i,j} -w_{i,j} \log p(y_{i,j} | x), where w_{i,j} depends on pixel position or distance to edges. This enhances delineation of small objects in medical images, improving segmentation scores by approximately 2% over uniform losses in prostate MRI segmentation tasks.
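As an illustration of class weighting, the following Python sketch implements a weighted cross-entropy loss on a toy imbalanced batch; the function name, class weights, and predicted probabilities are assumptions of this example rather than values from any cited work.

```python
import numpy as np

def class_weighted_cross_entropy(probs, labels, class_weights):
    """Mean of w_{y_i} * (-log p_i[y_i]) with per-class weights w_c."""
    probs = np.asarray(probs, float)          # shape (n_samples, n_classes)
    labels = np.asarray(labels, int)
    w = np.asarray(class_weights, float)[labels]          # weight of each sample's class
    nll = -np.log(probs[np.arange(len(labels)), labels])  # per-sample negative log-likelihood
    return float(np.mean(w * nll))

# Toy imbalanced batch: class 1 is rare, so it receives a larger weight,
# e.g. roughly inverse to its frequency.
probs = np.array([[0.9, 0.1], [0.8, 0.2], [0.7, 0.3], [0.4, 0.6]])
labels = np.array([0, 0, 0, 1])
uniform  = class_weighted_cross_entropy(probs, labels, [1.0, 1.0])
weighted = class_weighted_cross_entropy(probs, labels, [1.0, 3.0])  # upweight the rare class
print(uniform, weighted)   # the weighted loss penalizes the rare-class error more heavily
```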

Signal Processing and Filtering

In signal processing, weight functions are essential for manipulating finite-duration signals to mitigate artifacts in frequency-domain analysis. Window functions, denoted as w(t) for continuous-time signals or w(n) for discrete-time sequences, taper the signal amplitude at the edges to reduce discontinuities when analyzing non-periodic data. This application is particularly prominent in the discrete Fourier transform (DFT), where abrupt truncation of finite signals causes spectral leakage, spreading energy across frequency bins and degrading resolution. By applying a window, the effective signal spectrum is smoothed, concentrating energy near the true frequencies while suppressing sidelobes. A seminal example is the Hann window, defined as w(n) = 0.5 \left(1 - \cos\left(\frac{2\pi n}{N}\right)\right), \quad 0 \leq n \leq N-1, which provides a good balance between mainlobe width and sidelobe attenuation, reducing the peak sidelobe level to approximately -31 dB, compared with about -13 dB for the rectangular window. This window, analyzed in detail for harmonic analysis with the DFT, ensures better spectral estimation in applications like audio processing and vibration analysis.

Weighted filters extend this concept to direct signal transformation, where coefficients w_k define the filter's response for smoothing or spectral shaping. In finite impulse response (FIR) filters, the output is computed as y_n = \sum_k w_k x_{n-k}, with the weights normalized by \sum_k w_k to preserve the signal's mean level. A simple yet effective instance is the weighted moving average filter, which attenuates high-frequency noise while retaining low-frequency trends, making it optimal for reducing random noise while preserving a sharp step response. For instance, in discrete-time implementations, exponentially weighted variants assign higher w_k to recent samples, achieving smoother outputs than uniform averages with little computational overhead. Infinite impulse response (IIR) filters can incorporate similar weights in recursive structures, though stability requires careful design. These structures are foundational in real-time systems, such as denoising sensor data.

Adaptive filtering leverages time-varying weight functions w(t) to dynamically adjust to changing signal conditions, often optimizing based on the signal-to-noise ratio (SNR). In environments with non-stationary noise, such as acoustic noise cancellation, the weights update via algorithms like least mean squares (LMS), where the adaptation rate scales with the estimated SNR to converge faster during high-SNR periods and stabilize in low-SNR ones. A time-sequenced approach uses multiple weight sets selected at each time step, enabling tracking of time-varying responses with minimal misadjustment. Beamforming in sensor arrays employs spatial weights w(\theta), where \theta represents direction, to steer nulls toward interferers and enhance desired signals. By weighting array elements according to steering vectors, the beamformer forms a spatial response that maximizes gain in the look direction, as in uniform linear arrays for wireless communications, achieving up to 10-20 dB of interference suppression depending on array size. These techniques are critical in radar and sonar, where environmental variations demand real-time weight adaptation.

Matched filters exemplify weight functions optimized for detection, deriving weights in the frequency domain as w(f) = S^*(f) / N(f), where S(f) is the signal spectrum and N(f) the noise power spectral density, to maximize the output SNR under additive noise. This linear filter correlates the received signal with a time-reversed replica of the known waveform, yielding an SNR improvement proportional to the signal energy over the noise variance. In wavelet transforms, scalable weight functions arise through the scaling parameter a, modulating the mother wavelet \psi(t) as \psi\left(\frac{t-b}{a}\right) to analyze signals at multiple resolutions. These weights facilitate orthogonal decompositions via inner products, enabling efficient compression and feature extraction in non-stationary signals like seismic data, with scalability ensuring adaptability across frequency bands.
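The leakage-reduction effect of the Hann window can be seen in a few lines of NumPy; the signal frequency, window length, and the bin cutoff used for the comparison below are arbitrary illustrative choices.

```python
import numpy as np

# Hann window w(n) = 0.5 * (1 - cos(2*pi*n/N)), applied before a DFT to
# reduce spectral leakage from a sinusoid that does not fall on a DFT bin.
N = 256
n = np.arange(N)
hann = 0.5 * (1 - np.cos(2 * np.pi * n / N))   # the weight function w(n)

f0 = 10.37                                     # frequency between DFT bins
x = np.sin(2 * np.pi * f0 * n / N)

spec_rect = np.abs(np.fft.rfft(x))             # rectangular (no) window
spec_hann = np.abs(np.fft.rfft(x * hann))      # Hann-weighted signal

# Fraction of spectral energy leaked into bins far from the tone (bin 30 and up):
# the Hann-windowed spectrum leaks far less energy into distant bins.
print(np.sum(spec_rect[30:]**2) / np.sum(spec_rect**2))   # noticeable leakage
print(np.sum(spec_hann[30:]**2) / np.sum(spec_hann**2))   # far smaller
```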
