
Radial basis function

A radial basis function (RBF) is a real-valued univariate function \phi: [0, \infty) \to \mathbb{R} whose value depends only on the radial distance from a fixed point (typically the origin), extended to multivariate settings by composing with the Euclidean norm to form \phi(\|\mathbf{x} - \mathbf{c}\|), where \mathbf{c} is a center; these functions serve as basis elements for approximating multivariate functions through linear combinations s(\mathbf{x}) = \sum_i \lambda_i \phi(\|\mathbf{x} - \boldsymbol{\xi}_i\|), with coefficients \lambda_i and centers \boldsymbol{\xi}_i typically selected from the data points. RBFs were first introduced by R. L. Hardy in 1971, who used multiquadric functions for interpolating scattered data, with subsequent theoretical development rooted in approximation theory; foundational work on thin-plate splines followed by Jean Duchon in the late 1970s, establishing their use for interpolating scattered data in \mathbb{R}^n while minimizing energy semi-norms akin to thin-plate bending. Key properties include positive definiteness for certain RBFs (e.g., Gaussians), ensuring unique and stable interpolants via nonsingular systems, and conditional positive definiteness for others (e.g., polyharmonic splines), which require augmentation with polynomials for well-posedness. Common types encompass infinitely smooth functions like the Gaussian \phi(r) = \exp(-\epsilon^2 r^2) and the multiquadric \phi(r) = \sqrt{r^2 + c^2}, as well as piecewise smooth ones like the thin-plate spline \phi(r) = r^2 \log r, each characterized by a shape parameter influencing smoothness, conditioning, and accuracy. In applications, RBFs excel in meshfree collocation methods for solving partial differential equations (PDEs), as pioneered by Edward Kansa in 1990 for elliptic and time-dependent problems on irregular domains, offering advantages in flexibility and computational efficiency over grid-based techniques. They are also integral to radial basis function networks, a class of feedforward artificial neural networks for tasks such as function approximation, classification, and time-series prediction, where a hidden layer of localized RBF neurons (often Gaussian) maps inputs to a linear output layer, enabling faster training via least-squares optimization compared to backpropagation in multilayer perceptrons. Additional uses span computer graphics for surface modeling, image processing for warping, and interpolation of scattered datasets, with convergence guarantees such as O(h^{k+1/2}) error rates for compactly supported RBFs under suitable smoothness assumptions. Recent developments include stable evaluation for small shape parameters via algorithms like RBF-QR and hybrid methods combining RBFs with other approximation techniques for enhanced scalability in high-dimensional problems. As of 2025, further advancements include deep radial basis function networks and explicit RBF Runge-Kutta methods for improved performance in classification tasks and time-dependent PDEs.

History

Early Developments

The initial theoretical foundations for radial basis functions emerged in the mid-20th century within approximation theory, particularly for handling scattered data in multivariate settings. During the 1960s and 1970s, researchers began exploring methods to minimize energy norms in Sobolev spaces, providing a variational framework for spline-like functions that depend only on radial distance. A key contribution came from Jean Duchon, who in the late 1970s introduced interpolation techniques based on the principle of thin-plate bending, establishing rotation-invariant semi-norms for functions in Sobolev spaces and laying groundwork for spline-based radial functions suitable for scattered data problems. A pivotal practical advancement occurred in 1971 when Rolland L. Hardy proposed multiquadric functions for modeling irregular surfaces, motivated by the need to approximate topographic data in geodesy and geophysics, where traditional methods failed to capture complex terrain without oscillations. In his seminal paper published in the Journal of Geophysical Research, Hardy demonstrated how sums of multiquadrics, functions of the form \sqrt{r^2 + c^2} where r is the radial distance and c a constant parameter, could fit scattered points accurately, offering flexibility for representing both smooth and sharply varying features in surface fitting. This work addressed real-world challenges in terrain mapping and contouring, marking an early application of radial basis functions beyond pure approximation theory. Building on these foundations, thin-plate splines gained traction in the late 1970s and 1980s for scattered data interpolation, particularly in fields requiring smooth surfaces reconstructed from irregularly spaced points. These functions, derived from Duchon's variational approach, were applied to the modeling of deformable surfaces in image registration and computer graphics and to the generation of accurate topographic maps from sparse measurements, providing a physically motivated model that minimized bending energy while ensuring exact interpolation.

Key Milestones

A pivotal early contribution to radial basis functions was the introduction of the multiquadric form by Rolland L. Hardy in 1971 for geophysical surface modeling, which laid groundwork for subsequent applications. In 1986, Charles A. Micchelli published a seminal work on interpolation of scattered data using distance matrices and conditionally positive definite functions, providing theoretical foundations for RBFs in multivariate interpolation and establishing the nonsingularity of the associated distance matrices under broad conditions. The year 1988 marked a significant shift toward machine learning applications with David S. Broomhead and David Lowe's paper, which introduced Gaussian radial basis functions within layered network architectures, framing RBFs as a method for multivariable functional interpolation and adaptive networks. Their approach emphasized the localized response of the basis functions, bridging approximation theory with learning tasks and influencing the development of RBF networks. In 1990, Edward J. Kansa proposed a method for solving partial differential equations using unsymmetric collocation with multiquadric radial basis functions, enabling meshless numerical solutions for complex boundary value problems. This innovation expanded RBFs beyond pure approximation into computational physics, demonstrating spectral accuracy for elliptic, parabolic, and hyperbolic PDEs. Throughout the 1990s, advancements in scattered data approximation solidified RBFs' theoretical robustness, including Holger Wendland's 1995 construction of compactly supported radial basis functions with associated error bounds, which improved computational efficiency by limiting the support radius while preserving positive definiteness and approximation order. By the 2000s, radial basis functions gained prominence in kernel methods, particularly as the Gaussian RBF kernel in support vector machines, enhancing nonlinear classification and regression tasks through implicit high-dimensional mappings, as explored in works by Bernhard Schölkopf and Alexander J. Smola. This integration underscored RBFs' versatility in machine learning, though direct RBF interpolation remained a more specialized tool in numerical analysis and scientific computing.

Definition and Properties

Formal Definition

A radial basis function (RBF) \phi is a univariate function \phi: [0, \infty) \to \mathbb{R} whose radialization \phi(\|\mathbf{x}\|) depends solely on the Euclidean norm \|\mathbf{x}\| of the input vector \mathbf{x} \in \mathbb{R}^d. This composition ensures radial symmetry, making \phi suitable for isotropic approximations in multivariate settings. The general form for function approximation using RBFs is given by f(\mathbf{x}) = \sum_{i=1}^N w_i \phi(\|\mathbf{x} - \mathbf{c}_i\|), where \mathbf{c}_i \in \mathbb{R}^d are the centers (or nodes), and w_i \in \mathbb{R} are the weights determined by the problem, such as interpolation or regression conditions. This linear combination traces its origins to Hardy's multiquadric method for geophysical data interpolation. For the approximation to yield unique solutions, particularly in interpolation, \phi must satisfy positive definiteness conditions. Specifically, \phi is (strictly) positive definite on \mathbb{R}^d if, for any finite set of distinct points \{\mathbf{x}_1, \dots, \mathbf{x}_m\} \subset \mathbb{R}^d and any coefficients \lambda_1, \dots, \lambda_m \in \mathbb{R} not all zero, the quadratic form \sum_{j=1}^m \sum_{k=1}^m \lambda_j \lambda_k \phi(\|\mathbf{x}_j - \mathbf{x}_k\|) > 0. Equivalently, the interpolation matrix A_{jk} = \phi(\|\mathbf{x}_j - \mathbf{x}_k\|) is positive definite and thus invertible, ensuring a unique set of weights. \phi is unconditionally positive definite if this property holds for every dimension d \geq 1, independent of the specific space. This unconditional nature is crucial for applications across varying data dimensionalities, as established in early characterizations of positive definite functions. Associated with such \phi is a native space, a reproducing kernel Hilbert space \mathcal{N}_\phi where the RBF acts as the kernel, providing a framework for error bounds and convergence in approximations.
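
The construction above can be sketched directly in code. The following Python example (using NumPy) forms the interpolation matrix and solves for the weights; the Gaussian basis, shape parameter, and test data are illustrative assumptions rather than part of the definition:

```python
import numpy as np

def gaussian_rbf(r, eps=3.0):
    """Gaussian RBF phi(r) = exp(-(eps*r)^2); strictly positive definite in any dimension."""
    return np.exp(-(eps * r) ** 2)

def rbf_interpolate(centers, values, phi=gaussian_rbf):
    """Solve A w = y with A_jk = phi(||x_j - x_k||) and return the interpolant f(x)."""
    diff = centers[:, None, :] - centers[None, :, :]
    A = phi(np.linalg.norm(diff, axis=-1))      # interpolation matrix
    w = np.linalg.solve(A, values)              # unique weights for distinct centers

    def f(x):
        r = np.linalg.norm(x[None, :] - centers, axis=-1)
        return phi(r) @ w
    return f

# Interpolate f(x1, x2) = sin(x1) * cos(x2) at 40 scattered points in [0, 1]^2.
rng = np.random.default_rng(0)
X = rng.random((40, 2))
y = np.sin(X[:, 0]) * np.cos(X[:, 1])
s = rbf_interpolate(X, y)
print(s(X[0]), y[0])   # the interpolant reproduces the data (up to round-off)
```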

Mathematical Properties

Radial basis functions (RBFs) exhibit varying degrees of smoothness depending on their specific form, which is crucial for applications requiring differentiable approximations. A radial basis function \phi is said to belong to the class C^k if it possesses k continuous derivatives. For instance, the Gaussian RBF, defined as \phi(r) = e^{-(\epsilon r)^2}, is infinitely differentiable (C^\infty), ensuring smooth interpolants without discontinuities in higher-order derivatives. In contrast, compactly supported RBFs, such as those constructed by Wendland, achieve a prescribed smoothness level C^{2k} through repeated application of an integral operator to truncated power functions, allowing control over differentiability while maintaining finite support. The decay behavior of RBFs significantly influences the conditioning of interpolation matrices and the locality of approximations. Functions with fast decay, such as the Gaussian, fall off rapidly away from the origin, which promotes well-conditioned systems for large shape parameters but leads to severe ill-conditioning as the parameter approaches zero due to near-flatness. Conversely, RBFs like the multiquadric \phi(r) = \sqrt{1 + (\epsilon r)^2} exhibit polynomial growth for large r, resulting in global influence and potentially poorer conditioning for large datasets. Compactly supported RBFs, by design, have zero value beyond a finite radius, balancing locality and computational efficiency while avoiding the dense, potentially ill-conditioned systems associated with globally supported functions. Uniqueness and stability in RBF interpolation arise from the ability of these functions to generate well-posed, data-dependent bases under specific conditions. For strictly positive definite RBFs, the associated interpolation matrices are nonsingular for distinct points, ensuring a unique solution to the interpolation problem. Conditionally positive definite RBFs of order k achieve non-singularity when augmented with polynomials from \pi_{k-1}, as the matrices become invertible on the subspace orthogonal to these polynomials, a result establishing RBFs as stable bases for scattered data approximation. This property guarantees that the interpolation operator is well-posed and stable with respect to perturbations in the data points. Error estimates for RBF approximations rely on the (conditional) positive definiteness of the basis, which enables stable interpolation even for large numbers of centers N. In native spaces associated with conditionally positive definite RBFs, the interpolation error is bounded by terms involving the fill distance h of the data points and the smoothness of the target function, such as O(h^m) where m relates to the RBF's reproduction order. For large N, while positive definiteness ensures nonsingularity, the condition number of the interpolation matrix often grows rapidly, potentially leading to ill-conditioning that requires stabilization techniques to sustain reliable convergence rates. Certain RBFs, particularly compactly supported ones like Wendland functions, satisfy the partition of unity property when appropriately normalized and combined in local approximations, allowing the sum of basis functions to equal one over subdomains. This enables efficient local schemes by partitioning the domain into overlapping regions, where each local approximant reproduces constants and facilitates hierarchical or multiscale methods without global coupling.
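
The flat-limit ill-conditioning described above is easy to observe numerically. A minimal sketch, assuming a Gaussian basis and arbitrary scattered points:

```python
import numpy as np

# Conditioning of the Gaussian interpolation matrix as the shape parameter eps
# shrinks (the "flat limit"). Points and eps values are illustrative choices.
rng = np.random.default_rng(1)
X = rng.random((40, 2))                       # 40 scattered centers in [0, 1]^2
R = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)

for eps in (10.0, 3.0, 1.0, 0.3, 0.1):
    A = np.exp(-(eps * R) ** 2)               # Gaussian interpolation matrix
    print(f"eps = {eps:5.2f}   cond(A) = {np.linalg.cond(A):.2e}")
# The condition number grows rapidly as eps -> 0, even though A stays
# symmetric positive definite for distinct centers.
```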

Common Types

Gaussian Functions

The Gaussian radial basis function (RBF) is one of the most widely used types in approximation theory and machine learning, defined by the formula \phi(r) = \exp(-\varepsilon^2 r^2), where r = \| \mathbf{x} - \mathbf{x}_i \| is the Euclidean distance between a point \mathbf{x} and a center \mathbf{x}_i, and \varepsilon > 0 is the shape parameter that controls the function's width and decay rate. This form ensures the function is radially symmetric and centered at \mathbf{x}_i. The Gaussian RBF possesses several distinctive properties that make it suitable for scattered data approximation. It is infinitely differentiable, providing smooth interpolants without discontinuities in higher derivatives. Additionally, it is strictly positive definite on \mathbb{R}^d for all dimensions d \geq 1, guaranteeing the invertibility of the associated interpolation matrix for distinct centers and thus a unique solution to interpolation problems. The rapid decay of \phi(r) as r increases imparts effectively local behavior, where the influence of each basis function diminishes quickly away from its center, which is advantageous for localized approximations but requires careful center placement to avoid gaps in coverage. Tuning the shape parameter \varepsilon involves a fundamental trade-off: larger values yield narrower, more localized functions with better-conditioned matrices but potentially reduced accuracy for global features, while smaller values produce flatter functions that enhance accuracy yet lead to severe ill-conditioning of the system due to near-linear dependence among the basis functions. In the flat limit as \varepsilon \to 0, the Gaussian approaches a constant function, effectively degenerating the standard method, though specialized algorithms like RBF-QR can maintain stability even in this regime. The Gaussian RBF gained prominence in the context of radial basis function networks through the work of Broomhead and Lowe, who adopted it for multivariable functional interpolation due to its similarity to localized receptive-field responses in neural models, enabling efficient nonlinear mapping via linear output weights. The Fourier transform of the Gaussian RBF further elucidates its properties in native spaces, given by \hat{\phi}(\omega) = \left( \frac{\pi}{\varepsilon^2} \right)^{d/2} \exp\left( -\frac{\| \omega \|^2}{4 \varepsilon^2} \right), which is positive and decays exponentially, confirming its positive definiteness and supporting error estimates in reproducing kernel Hilbert spaces.
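
As a sanity check on the transform formula, the following sketch compares the closed-form expression in one dimension (d = 1) against a simple numerical quadrature; the grid extent and the values of \varepsilon and \omega are arbitrary assumptions:

```python
import numpy as np

eps, omega = 1.5, 2.0
x = np.linspace(-40.0, 40.0, 200_001)                # wide grid so the integrand has decayed
phi = np.exp(-(eps * x) ** 2)

# Numerical Fourier transform int phi(x) exp(-i*omega*x) dx (real part; phi is even).
numeric = np.trapz(phi * np.cos(omega * x), x)
closed_form = (np.pi / eps**2) ** 0.5 * np.exp(-omega**2 / (4 * eps**2))
print(numeric, closed_form)                          # the two values agree closely
```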

Multiquadric Functions

Multiquadric radial basis functions (RBFs) are defined by the form \phi(r) = \sqrt{r^2 + \epsilon^2}, where r is the Euclidean distance between points and \epsilon > 0 is a shape parameter; this function is conditionally positive definite of order 1. The inverse multiquadric variant takes the form \phi(r) = 1 / \sqrt{r^2 + \epsilon^2}, which is strictly positive definite in any dimension. These functions exhibit slow polynomial growth for the multiquadric and slow decay for the inverse, unlike the rapid exponential decay of Gaussian RBFs, enabling effective global approximations over large domains. The shape parameter \epsilon governs the "flatness" of the basis function, with larger values producing flatter profiles that emphasize global behavior and smaller values yielding sharper, more localized responses; as \epsilon \to 0, the multiquadric approaches the cone \phi(r) \approx r and the inverse approaches the rational form 1/r. This slow-growth character contributes to favorable numerical behavior when interpolating large datasets compared to exponentially decaying RBFs, though increasingly flat profiles lead to ill-conditioned interpolation systems. Originally introduced by Hardy in 1971 for modeling irregular surfaces such as topography, multiquadric RBFs were selected for their minimal-oscillation properties, which reduce overshoot in approximations of geophysical data. In terms of error bounds, multiquadric RBFs demonstrate superiority in high-dimensional settings over exponential kernels like the Gaussian, as their effective mean dimension remains bounded near 1 as dimensionality increases, facilitating approximations that avoid the curse of dimensionality.
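
A short sketch illustrating the limiting behavior as \epsilon \to 0; the evaluation points and parameter values are illustrative assumptions:

```python
import numpy as np

def multiquadric(r, eps):
    return np.sqrt(r ** 2 + eps ** 2)

def inverse_multiquadric(r, eps):
    return 1.0 / np.sqrt(r ** 2 + eps ** 2)

r = np.linspace(0.5, 5.0, 4)                 # sample radii away from the origin
for eps in (2.0, 0.5, 0.01):
    mq = multiquadric(r, eps)
    imq = inverse_multiquadric(r, eps)
    print(f"eps = {eps:4.2f}  max |MQ - r| = {np.max(np.abs(mq - r)):.4f}"
          f"  max |IMQ - 1/r| = {np.max(np.abs(imq - 1.0 / r)):.4f}")
# As eps -> 0 the multiquadric approaches the cone phi(r) = r and the
# inverse multiquadric approaches 1/r, illustrating the flat-parameter limit.
```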

Spline Functions

Spline functions serve as a significant class of radial basis functions (RBFs) characterized by their piecewise-polynomial or polyharmonic nature, which enables efficient interpolation of scattered data while minimizing certain energy functionals derived from physical analogies. Unlike infinitely smooth RBFs, spline-based variants often exhibit conditional positive definiteness, allowing for unique interpolants when augmented with low-degree polynomials to ensure stability. These functions are particularly valued for their approximation properties, achieving optimal convergence orders in Sobolev spaces for irregularly spaced points. The thin-plate spline, a cornerstone of this category, is defined in two dimensions by the radial function \phi(r) = r^2 \log r, where r denotes the radial distance. This form minimizes the bending energy of a thin elastic plate under point loads, analogous to the physical deformation that resists curvature while allowing flexibility. In higher even dimensions d \geq 2, it generalizes to \phi(r) = r^{2m-d} \log r for integer m > d/2, preserving the semi-norm minimization in appropriate Sobolev spaces. Introduced by Duchon, these splines minimize an order-m Sobolev semi-norm and are conditionally positive definite of order m. Polyharmonic splines extend this framework using simpler power functions \phi(r) = r^k for odd positive integers k, which are conditionally positive definite of order m = (k+1)/2. These splines correspond to fundamental solutions of the iterated Laplacian \Delta^m, enabling interpolation that reproduces polynomials exactly up to degree m-1 when combined with a polynomial term. For instance, in one dimension, k=3 yields the cubic spline \phi(r) = r^3, widely used for univariate data fitting. Their piecewise nature arises from the limited differentiability at the origin, but the resulting interpolants remain globally smooth away from the centers. A key theoretical foundation for spline RBFs lies in their derivation from Sobolev norms, particularly the semi-norm \|f\|_{H^m}^2 = \int |\Delta^{m/2} f|^2 \, d\mathbf{x}, which for m = 2 quantifies thin-plate bending energy for scattered data problems. This connection ensures that spline interpolants minimize the bending energy subject to interpolation constraints, providing a variational justification for their use in approximation theory. To address computational challenges like dense matrices in large-scale applications, compactly supported variants promote sparsity. Wendland's construction from 1995 yields explicit positive definite radial functions with finite support, given by \phi(r) = (1-r)_+^\ell p(r), where ( \cdot )_+ denotes the positive part, \ell is chosen for smoothness, and p(r) is a polynomial of minimal degree ensuring positive definiteness in \mathbb{R}^d. These functions achieve the same polynomial reproduction properties as their infinite-support counterparts while limiting interactions to nearby points, thus enabling sparse approximations with controlled error. For example, the Wendland function \phi_{3,1}(r) = (1-r)_+^4 (4r + 1) is C^2 smooth and positive definite in \mathbb{R}^3.
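
The sparsity benefit of compactly supported functions can be seen in a minimal sketch using the Wendland function \phi_{3,1}; the number of centers and the support radius below are arbitrary illustrative choices:

```python
import numpy as np

def wendland_31(r):
    """Wendland phi_{3,1}(r) = (1 - r)_+^4 (4r + 1): C^2, supported on [0, 1],
    positive definite in R^3 (and hence also in R^1 and R^2)."""
    return np.where(r < 1.0, (1.0 - r) ** 4 * (4.0 * r + 1.0), 0.0)

rng = np.random.default_rng(2)
X = rng.random((200, 3))                                   # scattered centers in [0, 1]^3
R = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)

support = 0.3                                              # illustrative support radius
A = wendland_31(R / support)                               # scaled compactly supported RBF
print("fraction of nonzero entries:", np.count_nonzero(A) / A.size)
# Only nearby centers interact, so A is sparse; it remains symmetric positive
# definite for distinct centers, enabling sparse direct or iterative solvers.
```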

Approximation Theory

Interpolation

Radial basis function (RBF) interpolation addresses the problem of constructing a function s: \mathbb{R}^d \to \mathbb{R} that exactly reproduces given values y_j = f(x_j) at distinct points x_1, \dots, x_N \in \mathbb{R}^d. The interpolant takes the form s(x) = \sum_{i=1}^N w_i \phi(\|x - x_i\|), where \phi: [0, \infty) \to \mathbb{R} is a radial basis function and \| \cdot \| denotes the Euclidean norm. Substituting the interpolation conditions yields the linear system A \mathbf{w} = \mathbf{y}, with entries A_{ij} = \phi(\|x_i - x_j\|) and \mathbf{y} = (y_1, \dots, y_N)^T. This setup allows RBFs to handle scattered data without requiring a mesh, making it suitable for irregular point distributions. For radial basis functions that are only conditionally positive definite of order k, such as the thin-plate spline \phi(r) = r^2 \log r (order 2) in two dimensions, the basic form may not guarantee solvability. To ensure a unique solution, the interpolant is augmented with a polynomial term of degree at most k-1: s(x) = \sum_{i=1}^N w_i \phi(\|x - x_i\|) + p(x), where p is a polynomial satisfying \deg(p) \leq k-1. Additional side conditions enforce orthogonality to polynomials: \sum_{i=1}^N w_i q(x_i) = 0 for all polynomials q of degree at most k-1. This extended system maintains the interpolation conditions while stabilizing the solution. When \phi is (strictly) positive definite, the interpolation matrix A is symmetric positive definite for distinct centers, ensuring the existence and uniqueness of the weights \mathbf{w} in the native space associated with \phi, which is a reproducing kernel Hilbert space. For conditionally positive definite functions of order k, uniqueness holds in the semi-normed native space after augmentation, provided the centers are unisolvent for polynomials of degree k-1. These results extend to scattered data in any dimension, confirming solvability for common choices like multiquadrics. Error analysis for RBF interpolants relies on the fill distance h = \sup_{x \in \Omega} \min_{i=1,\dots,N} \|x - x_i\|, which measures the largest hole in the data distribution over a bounded domain \Omega. For target functions f in a native space of smoothness \tau > d/2, the error satisfies |f(x) - s(x)| \leq C h^{\tau - d/2} \|f\|_{\mathcal{N}}, where \|\cdot\|_{\mathcal{N}} is the native space (semi-)norm and C is a constant independent of h. Saturation theorems further bound the attainable order: rates faster than O(h^{\tau}) imply that f satisfies certain differential equations, such as harmonicity conditions for thin-plate splines. In d dimensions, for basis functions of smoothness order k, typical rates are O(h^{k - d/2}) in the L^\infty norm under quasi-uniform distributions. The choice of centers significantly impacts interpolation quality, particularly through the Lebesgue constant \Lambda_N = \max_{x \in \Omega} \sum_{i=1}^N |l_i(x)|, where the l_i are the cardinal basis functions. Uniform or quasi-uniform distributions, such as those generated by space-filling curves or geometric selection, minimize \Lambda_N, achieving growth like O(\sqrt{N}) and reducing sensitivity to data noise. Data-dependent selections, such as greedy algorithms that maximize the power function P(x) = \sup \{ |g(x)| : \|g\|_{\mathcal{N}} \leq 1, \ g(x_j) = 0 \ \forall j \}, adapt to the target function and can yield faster error decay but may increase \Lambda_N for certain \phi, such as Gaussians. Optimal center sets balance fill distance and separation distance to ensure stability and accuracy.
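
A minimal sketch of the augmented interpolation system for the two-dimensional thin-plate spline (order k = 2, linear polynomial term); the data set is an arbitrary illustrative choice:

```python
import numpy as np

def tps(r):
    """Thin-plate spline phi(r) = r^2 log r (with phi(0) = 0)."""
    with np.errstate(divide="ignore", invalid="ignore"):
        out = r ** 2 * np.log(r)
    return np.where(r > 0.0, out, 0.0)

# Solve the block system [[A, P], [P^T, 0]] [w; c] = [y; 0], where P holds the
# linear polynomial basis 1, x1, x2 and the zero block enforces the side conditions.
rng = np.random.default_rng(3)
X = rng.random((30, 2))
y = np.cos(3 * X[:, 0]) + X[:, 1] ** 2            # illustrative target values

R = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
A = tps(R)
P = np.hstack([np.ones((len(X), 1)), X])
M = np.block([[A, P], [P.T, np.zeros((3, 3))]])
coef = np.linalg.solve(M, np.concatenate([y, np.zeros(3)]))
w, c = coef[:len(X)], coef[len(X):]

def s(x):
    return tps(np.linalg.norm(x - X, axis=-1)) @ w + np.array([1.0, *x]) @ c

print(s(X[0]), y[0])                              # exact reproduction at the data points
```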

Regression

Radial basis function regression provides an approximate fit to values y_j at points x_1, \dots, x_N \in \mathbb{R}^d by solving a least-squares minimization problem, in contrast to the exact reproduction required in interpolation. The approximant takes the form s(x) = \sum_{i=1}^M w_i \phi(\|x - \xi_i\|), where the centers \xi_1, \dots, \xi_M (with M \leq N) may be a subset of the data points or otherwise selected, and the weights \mathbf{w} minimize \|\mathbf{A} \mathbf{w} - \mathbf{y}\|_2^2. Here, \mathbf{A} is the N \times M matrix with entries A_{ji} = \phi(\|x_j - \xi_i\|). For strictly positive definite \phi, the solution is unique if \mathbf{A} has full rank, often computed via the normal equations \mathbf{w} = (\mathbf{A}^T \mathbf{A})^{-1} \mathbf{A}^T \mathbf{y} or a QR decomposition. To handle ill-conditioning or noisy data, regularization is commonly applied, such as ridge regression minimizing \|\mathbf{A} \mathbf{w} - \mathbf{y}\|_2^2 + \lambda \|\mathbf{w}\|_2^2, or more generally using the native space norm: \min_{\mathbf{w}} \frac{1}{2} \mathbf{w}^T \mathbf{Q} \mathbf{w} + \omega \|\mathbf{A} \mathbf{w} - \mathbf{y}\|_2^2, where \mathbf{Q} is the Gram matrix of the centers and \omega > 0 is a regularization parameter. This yields stable approximants with error bounds depending on the choice of \lambda or \omega, the fill distance, and the smoothness of the target function, typically achieving convergence rates similar to interpolation under suitable conditions. RBF regression is particularly useful for overdetermined systems and extends naturally to multivariate approximation.
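
A minimal sketch of the ridge-regularized least-squares formulation, assuming a Gaussian basis, a fixed grid of centers, and arbitrary values for the shape and regularization parameters:

```python
import numpy as np

rng = np.random.default_rng(4)
X = np.sort(rng.random(200))                                  # 1-D sample points
y = np.sin(2 * np.pi * X) + 0.1 * rng.standard_normal(200)    # noisy observations

centers = np.linspace(0.0, 1.0, 15)                           # M = 15 centers (M < N)
eps, lam = 8.0, 1e-3

A = np.exp(-(eps * (X[:, None] - centers[None, :])) ** 2)     # N x M design matrix
# Ridge-regularized normal equations: (A^T A + lam I) w = A^T y.
w = np.linalg.solve(A.T @ A + lam * np.eye(len(centers)), A.T @ y)

def s(x):
    return np.exp(-(eps * (x - centers)) ** 2) @ w

print(float(np.mean((A @ w - y) ** 2)))                       # training mean-squared error
```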

RBF Networks

Architecture

Radial basis function (RBF) networks are feedforward neural networks consisting of three layers: an input layer, a hidden layer, and an output layer. The input layer receives the input vector \mathbf{x} \in \mathbb{R}^n and passes it unchanged to the hidden layer. The hidden layer comprises m RBF units, each computing a radially symmetric activation based on the distance from the input to a center \mathbf{c}_j. A common choice is the Gaussian RBF: h_j(\mathbf{x}) = \exp\left( -\frac{\|\mathbf{x} - \mathbf{c}_j\|^2}{r_j^2} \right), where r_j controls the width. The output layer performs a linear combination of the hidden layer outputs: y_k = \sum_{j=1}^m w_{kj} h_j(\mathbf{x}) + b_k, for the k-th output, with weights w_{kj} and bias b_k. This structure allows localized responses in the hidden layer and a global linear mapping in the output layer.
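
A minimal sketch of this architecture as a forward pass; the layer sizes and parameter values are illustrative assumptions:

```python
import numpy as np

class RBFNetwork:
    def __init__(self, centers, widths, weights, biases):
        self.centers = centers      # (m, n) hidden-unit centers c_j
        self.widths = widths        # (m,)   hidden-unit widths r_j
        self.weights = weights      # (k, m) output weights w_kj
        self.biases = biases        # (k,)   output biases b_k

    def hidden(self, x):
        # h_j(x) = exp(-||x - c_j||^2 / r_j^2)
        d2 = np.sum((x - self.centers) ** 2, axis=1)
        return np.exp(-d2 / self.widths ** 2)

    def forward(self, x):
        # y_k = sum_j w_kj h_j(x) + b_k: linear combination of hidden outputs
        return self.weights @ self.hidden(x) + self.biases

rng = np.random.default_rng(5)
net = RBFNetwork(rng.random((10, 3)), np.full(10, 0.5),
                 rng.standard_normal((2, 10)), np.zeros(2))
print(net.forward(rng.random(3)))   # two-dimensional output for a 3-D input
```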

Training Algorithms

Training RBF networks typically involves a two-phase process to determine the parameters efficiently. In the first phase, the centers \mathbf{c}_j and widths r_j of the hidden layer RBFs are selected using unsupervised methods, such as k-means clustering on the input data to place centers at cluster means, with widths set based on cluster variances or nearest-neighbor distances. In the second phase, the output weights w_{kj} are optimized using supervised least-squares methods. Given training data \{(\mathbf{x}_i, \mathbf{t}_i)\}_{i=1}^p, the hidden layer outputs form the design matrix H \in \mathbb{R}^{p \times m} with H_{ij} = h_j(\mathbf{x}_i). The weights are solved as: \mathbf{w} = (H^T H + \lambda I)^{-1} H^T \mathbf{T}, where \mathbf{T} contains the target vectors, \lambda is a regularization parameter to prevent overfitting, and I is the identity matrix. This linear system is solved directly, avoiding iterative gradient descent. Alternative approaches include orthogonal least squares (OLS) for basis selection, which incrementally adds RBFs to minimize residual error, or full nonlinear optimization via gradient descent to adjust all parameters jointly, though the two-phase method is preferred for its speed and stability.
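
A sketch of the two-phase procedure, assuming k-means clustering (here via scikit-learn) for the centers, a nearest-center width heuristic, and a regularized least-squares solve for the output weights; the data set and hyperparameters are illustrative:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(6)
X = rng.random((500, 2))
T = np.sin(4 * X[:, 0]) * X[:, 1]                     # scalar targets

m, lam = 20, 1e-4
# Phase 1: unsupervised placement of centers and widths.
km = KMeans(n_clusters=m, n_init=10, random_state=0).fit(X)
centers = km.cluster_centers_
D = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
widths = np.sort(D, axis=1)[:, 1]                     # distance to nearest other center

# Phase 2: design matrix H_ij = h_j(x_i), then solve (H^T H + lam I) w = H^T T.
H = np.exp(-np.sum((X[:, None, :] - centers[None, :, :]) ** 2, axis=-1) / widths ** 2)
w = np.linalg.solve(H.T @ H + lam * np.eye(m), H.T @ T)
print(float(np.mean((H @ w - T) ** 2)))               # training mean-squared error
```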

Applications in PDEs

Collocation Methods

Collocation methods using radial basis functions (RBFs) provide a meshfree approach to solving partial differential equations (PDEs) by enforcing the governing equations at a set of scattered points. The solution is approximated as u(\mathbf{x}) = \sum_{i=1}^{N} w_i \phi(\|\mathbf{x} - \mathbf{c}_i\|) + p(\mathbf{x}), where \phi is the RBF centered at points \mathbf{c}_i, w_i are coefficients to be determined, and p(\mathbf{x}) is a low-degree polynomial added to ensure non-singularity of the system. For a PDE of the form L u(\mathbf{x}) = f(\mathbf{x}) in the interior of the domain, collocation enforces L u(\mathbf{x}_j) = f(\mathbf{x}_j) at interior points \mathbf{x}_j, while boundary conditions are satisfied at boundary points. This setup allows for flexible point distributions without requiring a structured mesh. The unsymmetric formulation, known as the Kansa method, distinguishes between interior and boundary collocation points, leading to a non-Hermitian linear system A \mathbf{w} = \mathbf{b}, where the matrix entries incorporate the operator L applied to the RBFs. In this approach, interior rows correspond to the PDE residual, while boundary rows enforce Dirichlet or Neumann conditions, resulting in an asymmetric system that avoids the need for symmetric RBF constructions but introduces challenges in guaranteeing solvability. The multiquadric RBF, \phi(r) = \sqrt{1 + (r/\epsilon)^2}, is commonly employed due to its conditional positive definiteness and suitability for such unsymmetric systems. Solving the resulting system is straightforward for small numbers of points N using direct methods such as LU factorization, but for larger N, iterative solvers such as GMRES are preferred to handle the ill-conditioning inherent in RBF matrices, which worsens as the basis functions become flatter. Preconditioning techniques, such as domain decomposition or fast multipole methods, can mitigate this for high-dimensional problems. Error convergence in collocation methods exhibits spectral rates (exponential in the number of points) for analytic solutions, while algebraic rates of order O(h^{k+1}) hold for functions in C^k spaces, where h is the fill distance of the point set. Optimizing the shape parameter \epsilon is crucial for balancing accuracy and stability; fixed \epsilon values scaled to the domain size often suffice, but multi-scale strategies using multiple \epsilon levels enhance accuracy and robustness for irregular geometries. These optimizations ensure robust performance across varying point densities without sacrificing the method's meshfree advantages.
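
A minimal sketch of Kansa-style unsymmetric collocation for a one-dimensional Poisson problem with Dirichlet boundary conditions, using multiquadric basis functions; the node count, shape parameter, and test problem are illustrative assumptions:

```python
import numpy as np

# Solve u''(x) = f(x) on (0, 1) with u(0) = u(1) = 0, exact solution u(x) = sin(pi x).
def mq(x, xi, c):        # phi(|x - xi|) = sqrt((x - xi)^2 + c^2)
    return np.sqrt((x - xi) ** 2 + c ** 2)

def mq_xx(x, xi, c):     # second derivative of the multiquadric with respect to x
    return c ** 2 / ((x - xi) ** 2 + c ** 2) ** 1.5

N, c = 40, 0.1
x = np.linspace(0.0, 1.0, N)                     # collocation points (centers coincide)
f = lambda t: -np.pi ** 2 * np.sin(np.pi * t)

A = np.empty((N, N))
b = np.empty(N)
interior = slice(1, N - 1)
A[interior, :] = mq_xx(x[interior, None], x[None, :], c)   # PDE rows: L phi at interior points
b[interior] = f(x[interior])
A[[0, -1], :] = mq(x[[0, -1], None], x[None, :], c)        # boundary rows: Dirichlet conditions
b[[0, -1]] = 0.0

w = np.linalg.solve(A, b)
u = mq(x[:, None], x[None, :], c) @ w
print(np.max(np.abs(u - np.sin(np.pi * x))))     # maximum error against the exact solution
```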

Meshless Approaches

Meshless approaches using RBFs for PDEs encompass a variety of techniques beyond global collocation, focusing on local approximations to improve computational efficiency and scalability. One prominent method is the radial basis function-generated finite difference (RBF-FD) approach, which constructs local differentiation stencils by fitting RBFs to small neighborhoods (stencils) of points, approximating differential operators as weighted sums of nearby nodal values for time-marching or implicit PDE solvers on unstructured node sets. This enables handling complex geometries and adaptive refinement while maintaining meshfree flexibility. Other meshless variants include the partition of unity method (PUM) combined with local RBF interpolants for domain decomposition, reducing global matrix sizes, and Petrov-Galerkin formulations that use RBF trial functions with test functions derived from weak forms to stabilize solutions for convection-dominated problems. These methods, often applied in fluid dynamics and geophysical modeling, offer advantages in parallelization and extension to high dimensions compared to global RBF collocation.
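
A minimal sketch of computing RBF-FD differentiation weights for a second derivative on a small one-dimensional stencil; the stencil, Gaussian basis, and shape parameter are illustrative assumptions:

```python
import numpy as np

# Compute weights w_i so that u''(x_c) is approximated by sum_i w_i u(x_i)
# on a local stencil, by solving a small RBF interpolation system.
eps = 2.0
phi    = lambda s: np.exp(-(eps * s) ** 2)
phi_xx = lambda s: (4 * eps ** 4 * s ** 2 - 2 * eps ** 2) * np.exp(-(eps * s) ** 2)

nodes = np.array([-0.2, -0.1, 0.0, 0.1, 0.2])          # local stencil around x_c = 0
xc = 0.0

A   = phi(nodes[:, None] - nodes[None, :])             # local RBF interpolation matrix
rhs = phi_xx(xc - nodes)                                # d^2/dx^2 of each basis at x_c
w   = np.linalg.solve(A, rhs)                           # RBF-FD differentiation weights

u = np.sin(nodes + 0.3)                                 # test function u(x) = sin(x + 0.3)
print(w @ u, -np.sin(0.3))                              # approximation vs exact u''(0)
```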