Moving least squares

Moving least squares (MLS) is a method for approximating and interpolating continuous functions from a set of unorganized, scattered points, achieved through local polynomial fits at each evaluation point. The technique constructs a globally defined approximant by minimizing a weighted L^2-norm of residuals using basis polynomials and position-dependent weights that decay with distance from the evaluation point, enabling smooth reconstruction without a fixed mesh. Originally formulated by Peter Lancaster and Kestutis Šalkauskas in 1981 for generating surfaces from scattered data, MLS supports both smoothing and exact interpolation via singular weights, projecting functions onto spaces of differentiable polynomials while preserving smoothness determined by the weight functions and basis degree. Key properties include exact reproduction of polynomials up to the basis degree m, yielding approximation errors of order O(h^{m+1}) where h is the local point spacing, and computational efficiency from solving small local systems rather than global ones. MLS has been extended in reproducing kernel formulations for applications in meshfree methods, such as solving partial differential equations on particle distributions without meshes. In computational geometry, it enables provably correct surface reconstruction from noisy point clouds, ensuring the output manifold is geometrically close (Hausdorff distance bounded by sampling density) and topologically equivalent to the underlying surface under uniform sampling assumptions. Further adaptations appear in image processing for image deformation via affine, similarity, or rigid transformations that minimize distortions.

Introduction

Overview

Moving least squares (MLS) is a meshfree approximation technique that reconstructs continuous functions from scattered, unorganized points by performing a local weighted least squares fit of basis functions, typically low-degree polynomials, to nearby points at each evaluation location. Unlike traditional methods that rely on a fixed mesh or grid, MLS generates the approximation on-the-fly, ensuring flexibility in handling irregular data distributions without predefined connectivity. This approach addresses the challenges of approximating functions in high-dimensional spaces or irregular domains, where mesh-based discretization fails due to the difficulty of generating and maintaining structured grids. MLS offers key advantages, including robust handling of noisy or sparse data and the production of smooth approximations up to C^\infty class when using sufficiently smooth weight functions, making it suitable for applications like surface reconstruction and general scattered data fitting in computer graphics. A basic use case is reconstructing a curve from point samples, where MLS locally fits weighted polynomials to produce a continuous, smooth curve that closely follows the data while avoiding the oscillations common in global methods.

Historical development

The method of least squares, foundational to moving least squares (MLS), originated in the late 18th and early 19th centuries as a technique for fitting models to observational data by minimizing the sum of squared residuals. Carl Friedrich Gauss developed the core principles in 1795 while working on astronomical calculations, though he formally published them in 1809, emphasizing probabilistic justification under Gaussian error assumptions. Adrien-Marie Legendre independently introduced the method in 1805 as a practical tool for orbit determination, presenting it without the statistical framework in his appendix on comet orbits. Extensions to weighted least squares, which assign varying importance to data points based on reliability, emerged in the 19th century, enabling more flexible approximations for heterogeneous datasets. Meshfree approximation concepts, precursors to MLS, gained traction in the mid-20th century with Shepard's 1968 work on inverse distance weighting for irregularly spaced data, which localized influence via distance-based weights without requiring a mesh.

The formalization of MLS as a distinct method occurred in the early 1980s, introduced by Peter Lancaster and Kestutis Šalkauskas for curve and surface fitting from scattered points, using locally weighted least squares fits to produce smooth, continuous approximants. Their seminal 1981 paper analyzed MLS for generating surfaces, and their 1986 book provided a comprehensive treatment, establishing MLS as a robust tool for scattered data fitting. In the 1990s, MLS saw significant advancements in computational mechanics, particularly through Ted Belytschko and colleagues' 1994 element-free Galerkin method, which applied MLS interpolants to solve partial differential equations in meshfree particle-based simulations, enhancing flexibility for problems with large deformations. By the early 2000s, MLS extended to computer graphics, with Marc Alexa and co-authors' 2003 framework for point set surfaces, leveraging MLS to implicitly define smooth surfaces from unoriented point clouds, influencing rendering and modeling pipelines. Key reviews, such as Holger Wendland's 2005 monograph on scattered data approximation, synthesized these developments, dedicating sections to MLS's theoretical underpinnings and practical implementations.

Post-2000 research deepened MLS's theoretical foundations, including error analyses in Sobolev spaces; for instance, Davoud Mirzaei's 2015 analysis provided bounds for MLS approximations of functions in fractional-order Sobolev spaces, improving convergence guarantees for irregular data. More recently, MLS has integrated with machine learning, particularly neural networks, for enhanced data fitting and reconstruction; the 2021 Neural-IMLS approach combined self-supervised implicit MLS with neural networks to generate high-fidelity surfaces from point clouds, while subsequent works, such as a 2024 study of optimal approximation spaces for MLS on spheres, have refined theoretical efficiency. These evolutions underscore MLS's enduring role in bridging classical approximation theory with modern computational paradigms.

Mathematical formulation

Basis functions

In moving least squares (MLS), basis functions define the subspace for the local least squares fit at each evaluation point, enabling approximation of the underlying function using low-order polynomials such as linear or quadratic forms. These functions form the column space in the weighted least squares problem, where coefficients are determined to minimize the error at neighboring data points. The choice of basis ensures the local approximant captures essential geometric features without introducing artifacts like dimpling in sparse regions. Common basis selections include monomial polynomials up to degree m, expressed as p(\mathbf{z}) = [1, z_1, \dots, z_d, z_1^2, z_1 z_2, \dots, z_d^m]^T in d dimensions, where \mathbf{z} represents coordinates relative to the evaluation point. For applications requiring enhanced smoothness, radial basis functions are employed; for example, thin-plate splines provide infinite support, while compactly supported radial functions reduce computational cost compared to higher-degree polynomials. The size of a polynomial basis of degree up to m in d dimensions is \binom{m + d}{d}, determining the number of coefficients solved for in the local fit. Selection of the basis depends on the required reproduction accuracy; a linear basis (m=1) suffices for many applications, while a quadratic basis (m=2) supports higher-order consistency. For a quadratic fit in two dimensions, the basis vector at local coordinates is p(\mathbf{z}) = [1, z_x, z_y, z_x^2, z_x z_y, z_y^2]^T, where \mathbf{z} = \mathbf{x}_i - \mathbf{x} for a data point \mathbf{x}_i relative to the evaluation point \mathbf{x}; this shifted formulation enhances numerical conditioning by centering the basis at the evaluation location.
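
To make the basis construction concrete, the following minimal Python sketch (an illustration with NumPy assumed, not drawn from the sources above) builds the shifted quadratic basis vector in two dimensions and checks its size against the binomial formula:

from math import comb
import numpy as np

def quadratic_basis_2d(x_i, x):
    # Shifted monomial basis p(z) with z = x_i - x, centered at the evaluation point
    zx, zy = x_i[0] - x[0], x_i[1] - x[1]
    return np.array([1.0, zx, zy, zx * zx, zx * zy, zy * zy])

p = quadratic_basis_2d((1.0, 2.0), (0.5, 0.5))
assert len(p) == comb(2 + 2, 2)  # binom(m + d, d) = 6 coefficients for m = d = 2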

Weight functions

In moving least squares (MLS), weight functions w_i(x) are designed to decay with the distance d(x, x_i) between the evaluation point x and each data point x_i, thereby assigning greater influence to nearby points and ensuring the locality of the approximation. This local support is central to the "moving" aspect of MLS, as it allows the weighted least-squares fit to adapt continuously across the domain without global computation. Common weight functions include exponential forms, such as the Gaussian w_i(x) = \exp\left( - \frac{d(x, x_i)^2}{h^2} \right), where h controls the decay rate. Another frequent choice is the inverse quadratic w_i(x) = \frac{1}{1 + \left( \frac{d(x, x_i)}{h} \right)^2 }, which provides infinite support but diminishes rapidly with distance. For compact support, Wendland-type spline functions are widely used, such as w_i(x) = \left(1 - \frac{d(x, x_i)}{h}\right)_+^4 \left(4 \frac{d(x, x_i)}{h} + 1\right), offering finite extent and higher smoothness. The key parameter is the support radius h, which can be fixed or adaptively chosen based on local point density to balance locality and accuracy; larger h promotes smoother approximations but may degrade the conditioning of the local systems. Weight functions are typically positive and monotonically decreasing in distance, ensuring stable and well-behaved fits, though careful selection avoids issues like singularities at d = 0 in forms such as 1/d^2. The choice of weight influences both the smoothness of the resulting approximant (higher differentiability in the weight yields smoother outputs) and the computational cost, with compact supports improving efficiency by limiting the active points per evaluation. A representative example is the Gaussian weight, often expressed as w(x - x_i) = e^{-\|x - x_i\|^2 / (2 \sigma^2)}, where \sigma serves as the scale parameter analogous to h, providing infinite but rapidly decaying support for smooth local fitting.
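
The common weight families above are compact to implement; the Python sketch below (illustrative only, assuming NumPy, with h the support radius) collects the Gaussian, inverse quadratic, and Wendland-type choices:

import numpy as np

def gaussian_weight(d, h):
    return np.exp(-(d / h) ** 2)                # infinite support, rapid decay

def inverse_quadratic_weight(d, h):
    return 1.0 / (1.0 + (d / h) ** 2)           # infinite support, slower decay

def wendland_weight(d, h):
    # Compactly supported (1 - r)_+^4 (4r + 1) kernel; zero for d >= h
    r = d / h
    return np.where(r < 1.0, (1.0 - r) ** 4 * (4.0 * r + 1.0), 0.0)

d = np.linspace(0.0, 2.0, 5)                    # sample distances
print(wendland_weight(d, 1.0))                  # vanishes beyond the support radius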

Approximation scheme

The moving least squares (MLS) approximation scheme constructs a local polynomial fit to scattered data at each point x in the domain, using a weighted least-squares minimization that incorporates basis functions and weight functions to ensure smoothness and locality. To improve numerical conditioning, the basis functions are evaluated in local shifted coordinates \mathbf{z} = \mathbf{x}_i - \mathbf{x}. Given a set of data points \{x_i\}_{i=1}^n with associated values f_i, the MLS approximant u^h(x) at a point x is obtained by solving a local minimization problem: find the coefficients a(x) that minimize the weighted sum of squared residuals J(x) = \sum_{i=1}^n w_i(x) \left( p(x_i - x)^T a(x) - f_i \right)^2, where p(\cdot) is a vector of basis functions (e.g., monomials up to a certain degree), and w_i(x) are positive weight functions centered at x_i that decay with distance from x. The solution to this minimization yields the coefficients as a(x) = A(x)^{-1} B(x) f, where f = [f_1, \dots, f_n]^T collects the data values, the moment matrix A(x) = \sum_{i=1}^n w_i(x) p(x_i - x) p(x_i - x)^T is symmetric and positive definite (assuming sufficiently many well-placed points and basis completeness), and B(x) = [w_1(x) p(x_1 - x), \dots, w_n(x) p(x_n - x)] is the weighted basis evaluation matrix. Substituting this into the local approximation gives the MLS approximant u^h(x) = p(0)^T a(x) = p(0)^T A(x)^{-1} \sum_{i=1}^n w_i(x) p(x_i - x) f_i. This can equivalently be expressed as a linear combination of the data values: u^h(x) = \sum_{i=1}^n f_i \phi_i(x), where the shape functions are \phi_i(x) = p(0)^T A(x)^{-1} \left[ w_i(x) p(x_i - x) \right]. These shape functions \phi_i(x) form a partition of unity under appropriate choices of basis and weights, ensuring the approximant reproduces constants and possesses desirable smoothness properties. Computationally, the MLS scheme is evaluated pointwise by assembling and inverting the small moment matrix A(x) locally at each x, avoiding the need for a global system solve and enabling efficient handling of scattered or unstructured data.
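
The scheme above translates directly into a few lines of NumPy. The following hedged sketch (assuming a 1D domain, a shifted linear basis p(z) = [1, z]^T, and Gaussian weights) assembles the moment matrix A(x), solves for a(x), and returns u^h(x) = p(0)^T a(x):

import numpy as np

def mls_eval(x, xi, fi, h=0.5):
    z = xi - x                                   # shifted coordinates z_i = x_i - x
    w = np.exp(-(z / h) ** 2)                    # weights w_i(x)
    P = np.column_stack([np.ones_like(z), z])    # rows are p(z_i)^T
    A = P.T @ (w[:, None] * P)                   # moment matrix A(x)
    b = P.T @ (w * fi)                           # B(x) f
    a = np.linalg.solve(A, b)                    # local coefficients a(x)
    return a[0]                                  # u^h(x) = p(0)^T a(x) = a_0

xi = np.linspace(0.0, 1.0, 11)
fi = np.sin(2 * np.pi * xi)
print(mls_eval(0.37, xi, fi))                    # smooth estimate of sin(2*pi*0.37)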

Properties and analysis

Reproduction properties

Moving least squares (MLS) exhibits polynomial reproduction, whereby it exactly recovers any polynomial of degree at most m from scattered data samples of that polynomial, provided the basis functions span the space of polynomials up to degree m and the weight functions possess sufficient smoothness, such as being at least C^{m+1}. This reproduction capability arises because the local weighted least squares fit enforces constraints that align the approximant with the polynomial subspace. A sketch of the proof considers data f(x_i) = q(x_i) for all nodes x_i in the local support, where q is a polynomial with \deg(q) \leq m. Substituting q into the weighted least squares objective yields a zero residual, as the basis coefficients solve the normal equations precisely when the weighted moments match those of q; thus, the MLS approximant evaluates to q(x) at any point x. Reproduction holds under conditions that the support radius h exceeds the diameter of the local stencil, ensuring inclusion of at least m+1 points for a non-degenerate fit, and that the moment matrix A(x), formed from the basis evaluated at weighted nodes, remains well-conditioned to prevent numerical instability. These requirements guarantee the local solvability of the fitting problem without rank deficiency. The property underpins the consistency of MLS approximations, ensuring convergence to the underlying function as node density increases, while selecting higher m yields better accuracy for smooth data at the expense of greater computational demands due to larger basis dimensions. For instance, linear reproduction with m=1 exactly fits affine functions, a feature commonly exploited in surface reconstruction for recovering smooth surfaces from unorganized point clouds by projecting onto local tangent planes.
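
The reproduction property is straightforward to verify numerically. The sketch below (reusing the 1D linear-basis evaluator from the approximation scheme section; the affine test function is an arbitrary choice) recovers q(x) = 2x + 3 exactly up to roundoff:

import numpy as np

def mls_eval(x, xi, fi, h=0.5):
    z = xi - x
    w = np.exp(-(z / h) ** 2)
    P = np.column_stack([np.ones_like(z), z])
    a = np.linalg.solve(P.T @ (w[:, None] * P), P.T @ (w * fi))
    return a[0]

xi = np.sort(np.random.rand(20))                 # scattered nodes in [0, 1]
fi = 2.0 * xi + 3.0                              # samples of q(x) = 2x + 3
x = 0.4
print(abs(mls_eval(x, xi, fi) - (2.0 * x + 3.0)))  # ~1e-15: exact up to roundoff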

Error estimates

The error estimates for moving least squares (MLS) approximations quantify the deviation between the true function u and its MLS approximant u^h for smooth functions, typically in Sobolev norms. A fundamental bound is given by \|u - u^h\|_{L^2(\Omega)} \leq C h^{m+1} |u|_{H^{m+1}(\Omega)}, where h is the fill distance of the scattered point set (measuring the maximal hole in the domain coverage), m is the polynomial reproduction degree, C is a constant independent of h, and |\cdot|_{H^{m+1}(\Omega)} denotes the Sobolev semi-norm of order m+1. This estimate holds under assumptions of quasi-uniform point distributions, where the separation between points is bounded relative to h, and sufficient smoothness of u \in H^{m+1}(\Omega). Similar bounds apply to derivatives, such as \|\nabla (u - u^h)\|_{L^2(\Omega)} \leq C h^m |u|_{H^{m+1}(\Omega)}. Early theoretical results for uniform data distributions were established by Lancaster and Salkauskas, demonstrating convergence rates consistent with the above form for one-dimensional cases and polynomial bases. For scattered data, the error scales as O(h^k) when approximating C^k functions, with k = m+1 under the same regularity and distribution conditions. These bounds build upon the reproduction properties of MLS, which guarantee exact approximation for polynomials of degree at most m. Extensions to fractional Sobolev spaces W^{m+s}_p(\Omega) with 0 < s < 1 yield refined estimates, such as \|u - s_{u,X}\|_{W^{|\alpha|}_q(\Omega)} \leq C h_X^{m+s-|\alpha|-d(1/p-1/q)} \|u\|_{W^{m+s}_p(\Omega)} for a multi-index \alpha, where d is the dimension, h_X the fill distance, and the point set X quasi-uniform. The accuracy of these estimates depends on the numerical stability of the local least-squares problems, governed by the condition number of the moment matrix A(x) formed by the basis and weight functions; ill-conditioning arises if the support radius is too small relative to the point spacing or if weights decay too rapidly. Quasi-uniformity of the points further ensures the constant C remains bounded as h \to 0. Recent advances in the 2020s include a posteriori error estimators using MLS interpolation to guide adaptive refinement, incorporating local variations in h to achieve sharper bounds in finite element and meshfree contexts. As an illustrative example, for quadratic reproduction (m=2) on scattered points, the error scales as O(h^3) times a bound on the third derivatives of u, highlighting the method's cubic convergence for sufficiently smooth functions under optimal conditions.
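
The O(h^{m+1}) rate can be checked empirically. In the hedged sketch below (same 1D linear-basis evaluator as before, with the support radius tied to the node spacing), halving the spacing should cut the maximum error by roughly a factor of four, consistent with second-order convergence for m = 1:

import numpy as np

def mls_eval(x, xi, fi, h):
    z = xi - x
    w = np.exp(-(z / h) ** 2)
    P = np.column_stack([np.ones_like(z), z])
    a = np.linalg.solve(P.T @ (w[:, None] * P), P.T @ (w * fi))
    return a[0]

f = lambda t: np.sin(2 * np.pi * t)
xq = np.linspace(0.1, 0.9, 101)                  # interior evaluation points
errs = []
for n in (20, 40, 80, 160):
    xi = np.linspace(0.0, 1.0, n + 1)
    h = 2.0 / n                                  # support scaled with the spacing
    u = np.array([mls_eval(x, xi, f(xi), h) for x in xq])
    errs.append(np.max(np.abs(u - f(xq))))
rates = np.log2(np.array(errs[:-1]) / np.array(errs[1:]))
print(errs, rates)                               # observed rates should approach ~2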

Algorithms and variations

Standard algorithm

The standard algorithm for evaluating the moving least squares (MLS) approximant at a query point \mathbf{x} relies on a local weighted least squares minimization, using data from a set of scattered points \{\mathbf{x}_i, f_i\}_{i=1}^N \subset \mathbb{R}^d \times \mathbb{R}. For each query \mathbf{x}, the first step identifies the neighboring points \mathbf{x}_i within a compact support radius h, typically via a KD-tree data structure after O(N \log N) preprocessing to enable efficient O(\log N + k) queries, where k is the local stencil size (number of neighbors). Weights w_i(\mathbf{x}) are then computed for these neighbors as w_i(\mathbf{x}) = w(\|\mathbf{x} - \mathbf{x}_i\| / h), where w(\cdot) is a positive, monotonically decreasing, compactly supported kernel such as the quartic spline w(r) = (1 - r^2)^2 for r \leq 1 and 0 otherwise. Next, using a monomial basis \mathbf{p}(\mathbf{z}) = [1, z_1, \dots, z_d, z_1^2, \dots]^T of degree up to q (with dimension m = \binom{d+q}{q}), assemble the symmetric moment matrix A(\mathbf{x}) = \sum_{i \in \mathcal{N}(\mathbf{x})} w_i(\mathbf{x}) \mathbf{p}(\mathbf{x}_i) \mathbf{p}(\mathbf{x}_i)^T \in \mathbb{R}^{m \times m} and the right-hand side vector B(\mathbf{x}) = \sum_{i \in \mathcal{N}(\mathbf{x})} w_i(\mathbf{x}) f_i \mathbf{p}(\mathbf{x}_i) \in \mathbb{R}^m, where \mathcal{N}(\mathbf{x}) denotes the neighbor set. Solve the linear system A(\mathbf{x}) \mathbf{a}(\mathbf{x}) = B(\mathbf{x}) for the local coefficients \mathbf{a}(\mathbf{x}), often via Cholesky decomposition since A is symmetric positive definite once regularized. The approximant is then u^h(\mathbf{x}) = \mathbf{p}(\mathbf{x})^T \mathbf{a}(\mathbf{x}). The total cost over N_q queries is O(N_q k m^2 + N_q m^3), dominated by matrix assembly and the solve; since m is typically small (e.g., m = 3 for a linear basis in 2D or m = 6 for a quadratic basis in 2D) and k \gg m, this approximates to O(N_q k) in practice, though higher-order bases increase the m^3 term. Preprocessing remains O(N \log N). Ill-conditioning of A(\mathbf{x}) can occur with sparse or aligned neighbors, leading to numerical instability; this is addressed by regularization, such as adding a small ridge parameter \lambda I (e.g., \lambda = 10^{-10}) to A before solving. The support radius h must be selected adaptively to the local density, often as h \approx 1.3 \bar{h}, where \bar{h} is the average distance to the k-th nearest neighbor, ensuring k \approx 20{-}50 points for stability without excessive smoothing. The procedure can be outlined in pseudocode as follows:
# Preprocessing: Build KD-tree from {x_i}
kd_tree = build_kd_tree({x_i for i=1 to N})

# For each query
for x in queries:
    neighbors = kd_tree.query_ball_point(x, h)  # indices of neighbors
    k = len(neighbors)
    A = zeros(m, m)
    B = zeros(m, 1)
    for i in neighbors:
        dist = norm(x - x_i) / h
        w_i = weight_function(dist)  # e.g., (1 - dist^2)^2 if dist <= 1 else 0
        p_i = basis(x_i)  # [1, x_i[0], ..., up to degree q]
        A += w_i * outer(p_i, p_i)
        B += w_i * f_i * p_i
    A_reg = A + lambda * eye(m)  # regularization
    a = solve(A_reg, B)  # e.g., Cholesky or LU
    u = dot(basis(x), a)
    output u
This assumes a linear algebra library for the solve. A simple 1D implementation snippet in MATLAB-like syntax, for linear basis (m=2, \mathbf{p}(t) = [1, t]^T) and quartic weights, illustrates the steps on points x = [0, 1, 2], f = [1, 2, 5] at query x_q = 0.5, h=1.3:
m = 2;
x = [0 1 2]; f = [1 2 5];
xq = 0.5; h = 1.3; lambda = 1e-10;
A = zeros(m); B = zeros(m,1);
for i = 1:length(x)
    r = abs(xq - x(i)) / h;
    if r > 1
        wi = 0;
    else
        wi = (1 - r^2)^2;  % quartic weight
    end
    p_i = [1; x(i)];  % basis vector; avoid shadowing MATLAB's built-in pi
    A = A + wi * (p_i * p_i');
    B = B + wi * f(i) * p_i;
end
A = A + lambda * eye(m);
a = A \ B;
u = [1; xq]' * a;
fprintf('Approximant at %.1f: %.3f\n', xq, u);
This yields u \approx 1.50, the exact linear fit through the first two points (the third is excluded as it lies outside the support radius h).
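
For completeness, a runnable Python analogue of the full 2D pipeline (illustrative; it assumes NumPy, SciPy's cKDTree for the neighbor query, a linear basis, and the quartic kernel from the pseudocode) follows the same steps end to end:

import numpy as np
from scipy.spatial import cKDTree

def mls_2d(queries, pts, vals, h, lam=1e-10):
    tree = cKDTree(pts)                               # O(N log N) preprocessing
    out = np.empty(len(queries))
    for qi, x in enumerate(queries):
        idx = tree.query_ball_point(x, h)             # neighbors within radius h
        A = lam * np.eye(3)                           # ridge regularization built in
        b = np.zeros(3)
        for i in idx:
            r = np.linalg.norm(x - pts[i]) / h
            w = (1.0 - r * r) ** 2 if r <= 1.0 else 0.0   # quartic kernel
            p = np.array([1.0, pts[i, 0], pts[i, 1]])     # linear basis, m = 3
            A += w * np.outer(p, p)
            b += w * vals[i] * p
        a = np.linalg.solve(A, b)
        out[qi] = np.array([1.0, x[0], x[1]]) @ a
    return out

pts = np.random.rand(500, 2)
vals = pts[:, 0] + 2.0 * pts[:, 1]                    # affine test data
print(mls_2d(np.array([[0.5, 0.5]]), pts, vals, h=0.2))  # ~1.5 by linear reproduction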

Interpolating variants

The standard moving least squares (MLS) approximation does not generally pass through the given data points, as it minimizes the error over a local neighborhood using smooth, bounded weights, leading to a smooth but non-interpolating fit. Interpolating variants of MLS, often denoted interpolating MLS (IMLS), modify the formulation to enforce exact reproduction of the data values at the scattered points while maintaining the flexibility of local approximations. A prominent approach, introduced by Lancaster and Šalkauskas, employs modified weight functions that become singular (e.g., diverging as 1/r^\alpha with \alpha > 0) at the data points to prioritize exact matching there, while remaining compactly supported elsewhere; this direct strategy avoids basis augmentation but requires careful selection to prevent ill-conditioning. Such weight modifications trace back to extensions of radial basis techniques integrated with MLS. Additionally, the partition-of-unity MLS framework reformulates the approximation using reproducing kernels to ensure global consistency and interpolation when combined with local Shepard-like weighting, providing a stable pathway for exact fits in meshfree contexts. To address the ill-conditioning of singular weights, nonsingular interpolating formulations have been developed, such as those using special non-singular weighting functions or weighted nodal constraints, which achieve exactness without singularities. These interpolating variants come with trade-offs, including increased computational cost from solving potentially singular or larger moment matrices and a higher risk of oscillatory behavior away from the data points due to the enforced exactness. Nonetheless, they are particularly valuable in scenarios demanding nodal interpolation, such as the direct imposition of essential boundary conditions in meshfree methods. For instance, in IMLS, the resulting shape functions \phi_i(\mathbf{x}) are constructed to satisfy \phi_i(\mathbf{x}_j) = \delta_{ij} (the Kronecker delta), guaranteeing that the approximation satisfies u^h(\mathbf{x}_i) = f_i at each data point (\mathbf{x}_i, f_i) by design.
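
A hedged 1D sketch of the singular-weight idea (illustrative only; a tiny eps regularizes the 1/d^2 singularity so the code never divides by zero, and exact interpolation is approached as eps shrinks) shows the fit being pulled through the nodal values:

import numpy as np

def imls_eval(x, xi, fi, eps=1e-12):
    d2 = (xi - x) ** 2
    w = 1.0 / (d2 + eps)                          # singular weight, regularized
    P = np.column_stack([np.ones_like(xi), xi - x])
    A = P.T @ (w[:, None] * P)
    a = np.linalg.solve(A, P.T @ (w * fi))
    return a[0]

xi = np.array([0.0, 1.0, 2.0])
fi = np.array([1.0, 2.0, 5.0])
print(imls_eval(1.0, xi, fi))                     # ~2.0: matches the nodal value
print(imls_eval(0.5, xi, fi))                     # smooth blend between nodes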

Applications

Surface reconstruction

In surface reconstruction, moving least squares (MLS) is applied to fit an implicit function, such as a signed distance or indicator function, to a set of oriented points from a point cloud, where the surface is defined as the zero level set of this function. This approach, introduced by Alexa et al., enables the creation of smooth manifold surfaces directly from unorganized point data without requiring explicit connectivity or meshing. The method leverages local MLS approximations, typically using low-degree polynomials, to estimate the surface at query points by minimizing the weighted least squares error based on nearby oriented points. The process involves iterative projection: for a point in space, an initial local reference plane is estimated via weighted fitting, followed by offsetting along its normal to approximate the surface, with iterations refining the fit until convergence to the manifold. This fitting draws from the MLS approximation scheme, ensuring consistency across the point cloud. Local polynomial fits also compute surface normals and offsets, allowing for robust handling of oriented point data from scans. The result is an implicit representation that can be sampled or rendered efficiently, producing watertight and smooth models even from sparse or irregular inputs. MLS surface reconstruction excels in managing noise and outliers inherent in 3D scanning data, as the weighted least squares inherently smooths perturbations while preserving overall geometry through local low-pass filtering effects. It generates closed, watertight surfaces suitable for further processing, outperforming triangulation-based methods in scenarios with incomplete coverage. In practice, it has been widely used for reconstructing objects from laser scans, such as cultural artifacts or mechanical parts, where high-fidelity geometry is needed from raw point clouds. For animation and deformation, MLS techniques enable as-rigid-as-possible transformations of point-based models, allowing intuitive sculpting and rigging of dynamic surfaces. Recent developments as of 2025 include deep learning-enhanced variants, such as deep implicit MLS functions, for reconstructing surfaces from unoriented point clouds. Despite these strengths, MLS reconstruction remains sensitive to variations in point density, potentially leading to uneven smoothing or artifacts in under-sampled regions. Computational demands are high for large-scale point clouds, often scaling linearly with neighborhood size, though mitigated by hierarchical or multiresolution variants that employ partitioning or GPU acceleration to process millions of points efficiently.
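
The projection step admits a compact 2D illustration. The sketch below (a simplified single-step version of the procedure, not the full iterative method; it estimates a local tangent by weighted PCA and then fits a quadratic height function over it) pulls a query point toward the MLS curve of a noisy circle sample:

import numpy as np

def mls_project_2d(x, pts, h):
    d = np.linalg.norm(pts - x, axis=1)
    w = np.exp(-(d / h) ** 2)
    c = (w[:, None] * pts).sum(axis=0) / w.sum()  # weighted centroid
    Q = (pts - c) * np.sqrt(w)[:, None]
    _, _, Vt = np.linalg.svd(Q, full_matrices=False)
    t, n = Vt[0], Vt[1]                           # tangent and normal estimates
    s = (pts - c) @ t                             # tangent coordinates
    g = (pts - c) @ n                             # normal offsets
    P = np.column_stack([np.ones_like(s), s, s * s])
    a = np.linalg.solve(P.T @ (w[:, None] * P), P.T @ (w * g))
    s0 = (x - c) @ t
    return c + s0 * t + (a[0] + a[1] * s0 + a[2] * s0 ** 2) * n

theta = np.linspace(0.0, np.pi, 60)
pts = np.column_stack([np.cos(theta), np.sin(theta)])
pts += 0.02 * np.random.randn(*pts.shape)         # noisy samples of a unit circle
print(mls_project_2d(np.array([0.0, 1.1]), pts, h=0.3))  # pulled toward radius ~1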

Meshfree methods for PDEs

Moving least squares (MLS) serves as a foundational approximation technique in meshfree methods for solving partial differential equations (PDEs), particularly in settings where traditional mesh-based approaches like the finite element method (FEM) face challenges with domain irregularities or deformations. In these methods, the computational domain is discretized using a set of scattered particles or nodes without predefined connectivity, and MLS constructs smooth shape functions \phi_i(\mathbf{x}) that approximate the solution field u(\mathbf{x}) as u^h(\mathbf{x}) = \sum_{i=1}^n \phi_i(\mathbf{x}) u_i, where u_i are nodal parameters. This is integrated into the weak form of the PDE via the element-free Galerkin (EFG) method, pioneered by Belytschko et al. in 1994, which applies MLS interpolants to elasticity and heat conduction problems by formulating the variational principle over the particle cloud and enforcing equilibrium through Galerkin weighting. The EFG approach avoids explicit meshing, enabling flexible handling of complex geometries and adaptive refinement by simply adding or moving particles. To enforce essential boundary conditions in MLS-based meshfree methods, penalty formulations are commonly employed, where a penalty term \beta \int_{\Gamma_u} (u^h - \bar{u})^2 \, d\Gamma is added to the weak form, with \beta a large penalty parameter (typically 10^8 or higher) to constrain the approximation u^h to the prescribed value \bar{u} on the Dirichlet boundary \Gamma_u. This method, detailed by Kaljević and Saigal, integrates seamlessly with the MLS shape functions and avoids the need for Lagrange multipliers, which can introduce ill-conditioning. For the Galerkin discretization, the resulting system involves assembling the stiffness matrix with entries K_{ij} = \int_{\Omega} \nabla \phi_i \cdot \nabla \phi_j \, d\Omega for elliptic PDEs like the Poisson equation, where background quadrature cells or direct point evaluation approximate the integrals without element connectivity. This formulation ensures consistency and stability, leveraging the polynomial reproduction properties of MLS. MLS meshfree methods excel in applications involving fracture and large deformations, where crack propagation or material failure induces significant domain changes. In fracture simulations, the EFG method models dynamic crack growth by enriching MLS approximations near discontinuities without remeshing, as demonstrated in Belytschko et al.'s 1995 work on static and dynamic fracture in elasticity. For large deformations, hybrid smoothed particle hydrodynamics (SPH)-MLS approaches enhance accuracy by replacing SPH kernel approximations with MLS for improved stability and tensile instability mitigation, applied to high-velocity concrete fragmentation under explosive loads. A key advantage over FEM is the elimination of remeshing requirements during severe distortions or adaptive simulations, reducing computational overhead in problems like propagating cracks or fluid-structure interactions. Recent advancements in the 2020s include adaptive MLS variants for fluid simulations, such as generalized MLS (GMLS) on manifolds for hydrodynamic flows, enabling efficient discretization of curved surfaces and multi-phase problems without fixed grids. Additionally, couplings with isogeometric analysis (IGA) have emerged, combining MLS meshfree flexibility for crack modeling with IGA's smooth NURBS basis for geometry representation, as in hybrid formulations for static and dynamic crack propagation. As an illustrative example, consider solving the 2D Poisson equation \nabla^2 u = f over a domain \Omega with Dirichlet boundaries.
The MLS-Galerkin method approximates u^h(\mathbf{x}) = \sum \phi_i(\mathbf{x}) u_i, substitutes into the weak form \int_\Omega \nabla u^h \cdot \nabla v^h \, d\Omega = \int_\Omega f v^h \, d\Omega + \int_{\Gamma_t} \bar{t} v^h \, d\Gamma for test functions v^h, and yields the discrete system \mathbf{K} \mathbf{u} = \mathbf{f} with stiffness entries K_{ij} = \int_\Omega \nabla \phi_i \cdot \nabla \phi_j \, d\Omega. Numerical results for potential problems, including Poisson benchmarks on irregular particle distributions, show convergence rates approaching fourth-order accuracy with cubic MLS bases, outperforming low-order FEM in adaptive settings.
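
A compact numerical illustration in Python (hedged throughout: 1D rather than 2D, Gaussian weights, a linear basis, penalty boundary enforcement, and finite-difference shape-function derivatives are all simplifying assumptions) solves -u'' = pi^2 sin(pi x) on [0, 1] with homogeneous Dirichlet data via the MLS-Galerkin steps above:

import numpy as np

def shape(x, nodes, h):
    # MLS shape functions phi_i(x) = p(0)^T A(x)^{-1} w_i(x) p(x_i - x), linear basis
    z = nodes - x
    w = np.exp(-(z / h) ** 2)
    P = np.column_stack([np.ones_like(z), z])
    A = P.T @ (w[:, None] * P)
    return np.linalg.solve(A, P.T)[0] * w

n = 21
nodes = np.linspace(0.0, 1.0, n)
h = 2.5 / (n - 1)                                 # support tied to node spacing
f = lambda x: np.pi ** 2 * np.sin(np.pi * x)      # exact solution u = sin(pi x)

nq = 80                                           # background midpoint quadrature
xq = (np.arange(nq) + 0.5) / nq
dx = 1.0 / nq
delta = 1e-6                                      # FD step for shape derivatives
K = np.zeros((n, n))
F = np.zeros(n)
for x in xq:
    dphi = (shape(x + delta, nodes, h) - shape(x - delta, nodes, h)) / (2 * delta)
    K += dx * np.outer(dphi, dphi)                # K_ij = int phi_i' phi_j'
    F += dx * f(x) * shape(x, nodes, h)           # F_i = int f phi_i

beta = 1e8                                        # penalty enforcement of u(0)=u(1)=0
for xb in (0.0, 1.0):
    phib = shape(xb, nodes, h)
    K += beta * np.outer(phib, phib)

u = np.linalg.solve(K, F)
uh = np.array([shape(x, nodes, h) @ u for x in xq])
print(np.max(np.abs(uh - np.sin(np.pi * xq))))    # error should be small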
