
Basis function

In functional analysis, a basis function refers to a member of a set of functions that spans a vector space of functions, enabling any element in that space to be expressed uniquely as a linear combination of the basis functions. Such sets are fundamental in infinite-dimensional spaces, such as Hilbert and Banach spaces of functions, where they facilitate the decomposition and analysis of functions in a way analogous to finite-dimensional vector bases.

Key types of bases are distinguished by their topological properties and convergence requirements. A Schauder basis for a Banach space X is a sequence \{e_n\}_{n=1}^\infty \subset X such that every x \in X admits a unique representation x = \sum_{n=1}^\infty c_n e_n, where the series converges in the norm of X. In contrast, a Hamel basis (or algebraic basis) consists of a linearly independent set that spans the space via finite linear combinations only; such bases are often uncountable and are less practical in infinite-dimensional settings because their existence is non-constructive, resting on the axiom of choice. For Hilbert spaces equipped with an inner product, orthonormal bases (complete sets of functions \{\alpha_j\} satisfying (\alpha_j, \alpha_k) = \delta_{jk}) allow generalized Fourier expansions in which the coefficients are inner products, simplifying expansions such as classical Fourier series.

Basis functions play a central role in approximation theory, where they enable the representation of arbitrary functions by linear combinations of simpler, predefined forms, such as polynomials or trigonometric functions, to achieve uniform or least-squares approximations. This is crucial for numerical methods, including spectral approximations of partial differential equations, where the choice of basis (e.g., global vs. local support) affects convergence rates and computational efficiency. In broader applications, such as signal processing and data analysis, orthonormal bases underpin transforms like the Fourier transform and wavelet expansions, providing tools for decomposition, compression, and analysis of complex data.

Mathematical Foundations

Vector Spaces and Linear Independence

A vector space over a field, such as the real numbers \mathbb{R} or the complex numbers \mathbb{C}, is a nonempty set V equipped with operations of vector addition and scalar multiplication that satisfy specific axioms. These include closure under addition and scalar multiplication, associativity and commutativity of addition, existence of a zero vector, existence of additive inverses, distributivity of scalar multiplication over vector addition and over field addition, compatibility of scalar multiplication with field multiplication, and the existence of a multiplicative identity in the field.

A set of vectors \{v_1, \dots, v_n\} in a vector space V is linearly independent if the only solution to a_1 v_1 + \dots + a_n v_n = 0, where a_1, \dots, a_n are scalars from the field, is a_1 = \dots = a_n = 0. This ensures that no vector in the set can be expressed as a nontrivial linear combination of the others. A spanning set S for V is a subset such that every vector in V can be written as a finite linear combination of elements of S.

A basis for a vector space V is a set that is both linearly independent and spans V, allowing every vector in V to be uniquely expressed as a linear combination of basis elements. The dimension of V, denoted \dim V, is the number of vectors in any basis for V, which is well-defined and finite for finite-dimensional spaces. In finite-dimensional examples such as \mathbb{R}^n, the standard basis consists of the vectors e_1 = (1, 0, \dots, 0), e_2 = (0, 1, \dots, 0), up to e_n = (0, \dots, 0, 1), which are linearly independent and span \mathbb{R}^n. In infinite-dimensional vector spaces, such as many function spaces, a Hamel basis (also called an algebraic basis) exists but is typically non-constructive, its existence requiring the axiom of choice; every vector is a finite linear combination of basis elements, though such bases are rarely used in practice because of their pathological properties.
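As a minimal numerical sketch of these definitions (the vectors v1, v2, v3 below are arbitrary choices for illustration, not taken from the text), linear independence of a finite set in \mathbb{R}^3 can be checked via the rank of the matrix whose columns are the candidate basis vectors, and the unique coordinates of a vector in that basis are obtained by solving a linear system:

```python
import numpy as np

# Three candidate basis vectors for R^3 (chosen arbitrarily for illustration),
# stacked as the columns of a matrix.
v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([1.0, 1.0, 0.0])
v3 = np.array([1.0, 1.0, 1.0])
A = np.column_stack([v1, v2, v3])

# Linear independence: the only solution of A c = 0 is c = 0, which for a
# square matrix is equivalent to A having full rank.
print("rank:", np.linalg.matrix_rank(A))            # 3 -> linearly independent

# Since the set also spans R^3, every x has unique coordinates c with A c = x.
x = np.array([2.0, 3.0, 5.0])
c = np.linalg.solve(A, x)
print("coordinates of x in this basis:", c)
print("reconstruction matches x:", np.allclose(A @ c, x))
```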

Function Spaces and Norms

Function spaces are infinite-dimensional vector spaces consisting of functions satisfying certain properties, equipped with the algebraic operations of pointwise addition and scalar multiplication. A prominent example is the space C[0,1], which comprises all continuous real-valued functions on the closed interval [0,1]. Another fundamental class is the L^p spaces for 1 \leq p < \infty, defined as equivalence classes of measurable functions f on a measure space (such as [0,1] with Lebesgue measure) for which \int |f|^p \, d\mu < \infty, with functions considered equivalent if they differ only on a set of measure zero.

Hilbert spaces form a special category of function spaces: they are complete inner product spaces, enabling the study of orthogonality and projections. The space L^2[0,1] exemplifies a Hilbert space, consisting of square-integrable functions with the inner product \langle f, g \rangle = \int_0^1 f(x) \overline{g(x)} \, dx, which induces the norm \|f\|_2 = \sqrt{\int_0^1 |f(x)|^2 \, dx}. Completeness in these spaces ensures that Cauchy sequences converge to an element within the space, a property essential for the convergence of infinite series expansions, such as those in terms of orthonormal basis functions.

Norms on function spaces quantify the size of functions and define convergence topologies, distinguishing between pointwise and integral behaviors. In C[0,1], the uniform norm \|f\|_\infty = \sup_{x \in [0,1]} |f(x)| measures the maximum deviation and governs uniform convergence, whereas the L^p norm \|f\|_p = \left( \int_0^1 |f(x)|^p \, dx \right)^{1/p} emphasizes integral averages, leading to convergence in mean. This distinction is critical, as sequences converging in L^p need not converge uniformly, which affects the approximation properties of bases.

Banach spaces are complete normed linear spaces, providing a framework for rigorous analysis in infinite dimensions; C[0,1] under the uniform norm is a canonical Banach space, in which every Cauchy sequence of continuous functions converges uniformly to a continuous limit. Unlike finite-dimensional spaces like \mathbb{R}^n, which admit finite bases spanning the entire space via linear combinations, infinite-dimensional Banach spaces lack finite bases and require more sophisticated constructs such as Schauder bases (countable sequences allowing unique infinite series representations with norm convergence) to span the space effectively. This shift addresses the limitations of algebraic (Hamel) bases, which rely on finite combinations and become unmanageable in infinite dimensions.
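The difference between the two modes of convergence can be seen numerically. The following sketch uses the sequence f_n(x) = x^n on [0,1] (a standard textbook example, with grid resolution chosen arbitrarily): the L^2 norm tends to zero while the uniform norm stays at 1.

```python
import numpy as np

# f_n(x) = x**n on [0,1] converges to 0 in the L^2 norm but not in the
# uniform norm (its supremum is always 1, attained at x = 1).
x = np.linspace(0.0, 1.0, 200001)

for n in (5, 50, 500):
    fn = x**n
    sup_norm = np.max(np.abs(fn))        # uniform norm ||f_n||_inf
    l2_norm = np.sqrt(np.mean(fn**2))    # L^2 norm, approximated by a mean
                                         # since the interval has length 1
    print(f"n = {n:3d}   ||f_n||_inf = {sup_norm:.3f}   ||f_n||_2 = {l2_norm:.4f}")

# ||f_n||_inf stays at 1, while ||f_n||_2 = 1/sqrt(2n+1) tends to 0.
```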

Definition and Properties

Formal Definition

In the context of function spaces, which are vector spaces consisting of functions equipped with a suitable topology, a basis function refers to a member of a family \{\phi_n\}_{n \in I} that forms a basis for the space. Specifically, every function f in the space can be uniquely represented as f = \sum_{n \in I} c_n \phi_n, where the sum converges in the topology of the space and the coefficients c_n are scalars determined uniquely by f. This representation generalizes the notion of a basis from finite-dimensional vector spaces to the infinite-dimensional settings encountered in analysis.

In finite-dimensional function spaces, such as the space of polynomials of degree at most d with basis \{1, x, \dots, x^d\}, the expansion is a finite sum yielding exact equality, f = \sum_{n=0}^{d} c_n \phi_n. In infinite-dimensional spaces, however, the sum is typically infinite and requires convergence in a norm or other topology; for instance, the partial sums satisfy \|f - \sum_{n=1}^N c_n \phi_n\| \to 0 as N \to \infty, where \|\cdot\| denotes the norm of the space. The uniqueness of the coefficients c_n follows from the linear independence of the \{\phi_n\}, which ensures that no nontrivial linear combination vanishes.

A more precise framework for infinite-dimensional cases is provided by the concept of a Schauder basis in a Banach space, which applies directly to many function spaces such as C[0,1] or the L^p spaces. A Schauder basis \{\phi_n\} for a Banach space X is a sequence such that every f \in X has a unique expansion f = \sum_{n=1}^\infty c_n \phi_n with \|f - \sum_{n=1}^N c_n \phi_n\| \to 0 as N \to \infty. This notion was introduced by Juliusz Schauder in 1927 to handle topological aspects absent in algebraic bases. In Hilbert spaces such as L^2, an orthonormal Schauder basis further satisfies Parseval's identity, \sum_{n=1}^\infty |c_n|^2 = \|f\|^2, so the expansion preserves the norm. Unlike Schauder bases, which require unique coefficients, frames or overcomplete systems in function spaces allow multiple representations of the same f, providing redundancy but sacrificing uniqueness for robustness in applications such as signal processing.
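A concrete sketch of this norm convergence uses the orthonormal sine system \phi_n(x) = \sqrt{2}\sin(n\pi x) in L^2[0,1] and the target f(x) = x(1-x); both choices, and the crude quadrature, are illustrative assumptions rather than anything prescribed above.

```python
import numpy as np

# Norm convergence of partial sums, sketched with the orthonormal system
# phi_n(x) = sqrt(2) sin(n*pi*x) in L^2[0,1] and the target f(x) = x(1 - x).
x = np.linspace(0.0, 1.0, 20001)
dx = x[1] - x[0]
f = x * (1.0 - x)

def integral(g):
    return np.sum(g) * dx        # crude quadrature on [0,1], adequate here

def phi(n):
    return np.sqrt(2.0) * np.sin(n * np.pi * x)

for N in (1, 3, 7, 15, 31):
    # Coefficients c_n = <f, phi_n>, valid because the system is orthonormal.
    S_N = sum(integral(f * phi(n)) * phi(n) for n in range(1, N + 1))
    err = np.sqrt(integral((f - S_N) ** 2))
    print(f"N = {N:2d}   ||f - S_N f||_2 = {err:.2e}")
# The error decreases toward zero, illustrating ||f - sum_{n<=N} c_n phi_n|| -> 0.
```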

Key Properties and Schauder Bases

Basis functions in Hilbert spaces often exhibit orthogonality, a property that simplifies the representation of elements. For an orthonormal basis \{\phi_n\} in a Hilbert space, the inner product satisfies \langle \phi_m, \phi_n \rangle = \delta_{mn}, where \delta_{mn} is the Kronecker delta. This orthogonality allows the coefficients in the expansion f = \sum c_n \phi_n to be computed directly as c_n = \langle f, \phi_n \rangle, enhancing computational efficiency and stability in approximations.

In more general Banach spaces, Schauder bases lack this orthogonality but possess biorthogonality. A Schauder basis \{\phi_n\} admits a dual (biorthogonal) sequence \{\psi_n\} in the dual space such that \langle \phi_m, \psi_n \rangle = \delta_{mn}. Coefficients are then extracted via c_n = \langle f, \psi_n \rangle, ensuring unique representations despite the absence of an inner product structure. This duality is fundamental to the theory, as it guarantees that the basis spans the space densely.

The stability of a Schauder basis is quantified by its basis constant, defined as \Lambda = \sup_N \|P_N\|, where P_N is the bounded projection onto the span of the first N basis elements. A smaller basis constant indicates greater stability; \Lambda \geq 1 always holds, with equality for monotone bases, whose partial-sum projections are contractive. This measure is crucial for assessing how perturbations affect expansions.

Unconditional bases are a stronger variant of Schauder bases in which series convergence holds independently of the ordering of the terms. For instance, any orthonormal basis of L^2, such as the Fourier basis, is unconditional, so rearrangements do not alter convergence. Such bases are permutation-invariant and lead to further structural properties of the space.

Riesz bases extend orthonormal concepts to general Hilbert spaces: a Riesz basis is the image of an orthonormal basis under a bounded invertible linear operator. Riesz bases preserve essential properties such as unconditional convergence and boundedness of projections, with bounds A, B > 0 satisfying A \|f\|^2 \leq \sum_n | \langle f, \phi_n \rangle |^2 \leq B \|f\|^2 for all f. This makes Riesz bases topologically equivalent to orthonormal ones.

Regarding existence, Banach conjectured that every separable Banach space admits a Schauder basis, but Per Enflo constructed a counterexample in 1973: a separable Banach space without such a basis, which also lacks the approximation property. While many classical separable spaces like \ell^p and L^p possess bases, this result shows that Schauder bases are not universal even in separable settings.
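A finite-dimensional sketch of the Riesz-basis and biorthogonality ideas (the operator T, the dimension, and the random seed below are arbitrary choices for the example): applying an invertible matrix T to the standard orthonormal basis of \mathbb{R}^n produces a basis whose coefficient energy \sum_k |\langle f, \phi_k\rangle|^2 is squeezed between A\|f\|^2 and B\|f\|^2, with A and B given by the squared extreme singular values of T, and whose biorthogonal dual system comes from (T^{-1})^{T}.

```python
import numpy as np

# Finite-dimensional sketch of a Riesz basis and its biorthogonal dual.
# T, the dimension n, and the random seed are arbitrary choices for the example.
rng = np.random.default_rng(0)
n = 6
T = np.eye(n) + 0.3 * rng.standard_normal((n, n))   # bounded invertible operator
Phi = T @ np.eye(n)                                  # Riesz basis phi_k = T e_k (columns)

# Frame-type bounds A, B from the extreme singular values of T.
sigma = np.linalg.svd(T, compute_uv=False)
A, B = sigma.min() ** 2, sigma.max() ** 2

f = rng.standard_normal(n)
coeff_energy = np.sum((Phi.T @ f) ** 2)              # sum_k |<f, phi_k>|^2
print(A * (f @ f) <= coeff_energy <= B * (f @ f))    # True: A||f||^2 <= ... <= B||f||^2

# Biorthogonal dual system psi_k, with <phi_j, psi_k> = delta_{jk}.
Psi = np.linalg.inv(T).T
print(np.allclose(Phi.T @ Psi, np.eye(n)))           # True
```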

Types of Basis Functions

Polynomial Bases

Polynomial bases are fundamental in approximation theory, particularly for representing functions on finite intervals. The monomial basis, consisting of the set \{1, x, x^2, \dots, x^n\}, spans the space P_n of all polynomials of degree at most n. This basis is complete within P_n, allowing any polynomial in this space to be expressed uniquely as a finite linear combination \sum_{k=0}^n c_k x^k.

In the space of continuous functions C[0,1] equipped with the uniform norm, the monomials do not form a Schauder basis, since not every continuous function admits a unique uniformly convergent series expansion in this basis; such expansions converge only for analytic functions. However, the linear span of the monomials is dense in C[0,1], as established by the Weierstrass approximation theorem, which asserts that for any continuous function f on a compact interval [a,b] and any \epsilon > 0 there exists a polynomial p such that \|f - p\|_\infty < \epsilon. This density justifies the use of monomials for approximating continuous functions, though the basis is numerically ill-conditioned, leading to instability in computations, particularly evident in the poor conditioning of the associated Vandermonde matrix. For functions analytic near a point, the monomials do serve as a basis in the appropriate topology, with expansions given by Taylor series that converge uniformly on compact subsets of the region of convergence. A representative example is the expansion of an analytic function f around x = 0: f(x) = \sum_{n=0}^\infty \frac{f^{(n)}(0)}{n!} x^n, where the series converges to f(x) in a neighborhood of 0 and the coefficients are uniquely determined by the derivatives.

To mitigate the ill-conditioning of monomials, orthogonal polynomial bases are often preferred. The Legendre polynomials \{P_n(x)\} form an orthogonal basis for L^2[-1,1] with respect to the weight function 1, satisfying \int_{-1}^1 P_m(x) P_n(x) \, dx = \frac{2}{2n+1} \delta_{mn}. They are generated by the recurrence P_0(x) = 1, P_1(x) = x, and, for n \geq 1, (n+1) P_{n+1}(x) = (2n+1) x P_n(x) - n P_{n-1}(x). This facilitates efficient projections and expansions in function spaces. Another important orthogonal family is the Chebyshev polynomials of the first kind \{T_n(x)\}, which possess the minimax property: among all monic polynomials of degree n on [-1,1], the scaled Chebyshev polynomial T_n(x)/2^{n-1} has the smallest maximum norm, equal to 1/2^{n-1}. Defined by T_n(\cos \theta) = \cos(n \theta) for \theta \in [0, \pi], they are particularly useful for interpolation because of their equioscillation property, which minimizes the maximum approximation error.
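The two computational points above, the ill-conditioning of the monomial basis and the convenience of the Legendre recurrence, can be checked numerically; the degrees, grid, and crude quadrature below are illustrative choices.

```python
import numpy as np

# (1) Ill-conditioning of the monomial (Vandermonde) basis on equispaced points:
# the condition number grows rapidly with the degree.
for deg in (5, 10, 20):
    pts = np.linspace(0.0, 1.0, deg + 1)
    print(f"degree {deg:2d}: cond(Vandermonde) = {np.linalg.cond(np.vander(pts)):.2e}")

# (2) Legendre polynomials from the three-term recurrence
#     (n+1) P_{n+1}(x) = (2n+1) x P_n(x) - n P_{n-1}(x),
# with a numerical check of orthogonality on [-1, 1].
x = np.linspace(-1.0, 1.0, 200001)
dx = x[1] - x[0]
P = [np.ones_like(x), x.copy()]
for n in range(1, 6):
    P.append(((2 * n + 1) * x * P[n] - n * P[n - 1]) / (n + 1))

print(np.sum(P[3] * P[3]) * dx, 2.0 / (2 * 3 + 1))   # ~2/7, matching 2/(2n+1)
print(np.sum(P[3] * P[4]) * dx)                      # ~0: orthogonality
```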

Trigonometric and Fourier Bases

Trigonometric bases form a fundamental class of orthonormal bases in the space of square-integrable functions on periodic intervals, enabling frequency-based decompositions of functions. The standard trigonometric system on the interval [0,1] consists of the constant function 1 together with the functions \cos(2\pi n x) and \sin(2\pi n x) for n = 1, 2, \dots. These functions are orthogonal with respect to the L^2[0,1] inner product \langle f, g \rangle = \int_0^1 f(x) g(x) \, dx, with \langle 1, 1 \rangle = 1, \langle \cos(2\pi n x), \cos(2\pi m x) \rangle = \frac{1}{2} \delta_{nm}, and similarly for the sines, while all cross terms vanish. To achieve orthonormality, the constant is already normalized, while the cosine and sine functions are scaled by \sqrt{2}, yielding the orthonormal set \left\{1, \sqrt{2} \cos(2\pi n x), \sqrt{2} \sin(2\pi n x) \mid n \in \mathbb{N}\right\}.

Any function f \in L^2[0,1] can be expanded in this basis via its Fourier series: f(x) = \frac{a_0}{2} + \sum_{n=1}^\infty \left( a_n \cos(2\pi n x) + b_n \sin(2\pi n x) \right), where the coefficients are given by a_0 = 2 \int_0^1 f(x) \, dx, \quad a_n = 2 \int_0^1 f(x) \cos(2\pi n x) \, dx, \quad b_n = 2 \int_0^1 f(x) \sin(2\pi n x) \, dx for n \geq 1. These formulas arise from the orthogonality relations and ensure that the partial sums converge to f in the L^2 norm. The trigonometric polynomials, i.e., finite linear combinations of these basis functions, are dense in L^2[0,1], establishing the system's completeness as an orthonormal basis; completeness together with the Riesz-Fischer theorem identifies L^2[0,1] with the space of square-summable coefficient sequences.

An equivalent formulation uses complex exponentials, particularly on the interval [-\pi, \pi]. The set \left\{ \frac{e^{i n x}}{\sqrt{2\pi}} \mid n \in \mathbb{Z} \right\} forms an orthonormal basis for L^2[-\pi, \pi], with inner product \langle f, g \rangle = \int_{-\pi}^\pi f(x) \overline{g(x)} \, dx. The corresponding Fourier series is f(x) = \sum_{n=-\infty}^\infty c_n e^{i n x}, where c_n = \frac{1}{2\pi} \int_{-\pi}^\pi f(x) e^{-i n x} \, dx, and the equality holds in L^2. This exponential basis connects directly to the Fourier transform on the real line, which decomposes non-periodic functions into continuous superpositions of complex exponentials, extending the periodic theory.

Despite convergence in the L^2 sense, partial sums of the Fourier series exhibit the Gibbs phenomenon near discontinuities of f, manifesting as persistent overshoots and undershoots that do not diminish as more terms are added; the overshoot amplitude approaches approximately 8.95% of the size of the jump. This behavior arises from the slow decay of the Fourier coefficients of non-smooth functions and produces ringing artifacts in the approximation. A classic example is the square wave f(x) = -1 for -\pi < x < 0 and f(x) = 1 for 0 < x < \pi, extended periodically. Its Fourier series is \frac{4}{\pi} \sum_{k=1,3,5,\dots}^\infty \frac{\sin(k x)}{k}, which converges slowly near the jumps at x = 0, \pm \pi, with the Gibbs overshoots clearly visible even in high-order partial sums, illustrating the limitations of trigonometric bases for discontinuous data.
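The Gibbs overshoot for this square wave can be measured directly from the partial sums; the particular truncation orders and grid below are arbitrary illustration choices.

```python
import numpy as np

# Partial sums of the square-wave Fourier series (4/pi) * sum_{k odd} sin(kx)/k,
# illustrating the Gibbs overshoot near the jump at x = 0 (jump size 2).
x = np.linspace(-np.pi, np.pi, 400001)

def partial_sum(N):
    S = np.zeros_like(x)
    for k in range(1, N + 1, 2):              # odd harmonics only
        S += np.sin(k * x) / k
    return 4.0 / np.pi * S

for N in (9, 49, 199):
    overshoot = partial_sum(N).max() - 1.0    # excess above the limiting value 1
    print(f"N = {N:3d}: overshoot = {overshoot:.4f} ({overshoot / 2.0:.2%} of the jump)")
# The overshoot does not shrink as N grows; it tends to about 0.179,
# i.e. roughly 8.95% of the jump, matching the figure quoted above.
```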

Wavelet and Other Orthogonal Bases

Wavelet bases are a class of orthonormal bases that provide localized representations in both time and frequency, offering advantages over the globally supported functions of Fourier or polynomial bases by capturing non-stationary features efficiently. These bases are generated from a mother wavelet \psi(x) through dilations and translations, forming the family \psi_{j,k}(x) = 2^{j/2} \psi(2^j x - k) for j, k \in \mathbb{Z}, which constitutes an orthonormal basis for L^2(\mathbb{R}) under appropriate conditions. Central to their construction is multiresolution analysis (MRA), a framework that decomposes L^2(\mathbb{R}) into nested subspaces V_j spanned by dilates and translates of a scaling function \phi, with the translates of the wavelet \psi spanning the detail space W_j, the orthogonal complement of V_j in V_{j+1}.

The Haar basis is the simplest example of a wavelet system, defined by the mother wavelet \psi(x) = 1 for x \in [0, 0.5), \psi(x) = -1 for x \in [0.5, 1), and \psi(x) = 0 elsewhere; the resulting family forms an orthonormal basis for L^2(\mathbb{R}). Despite its piecewise constant nature, which limits smoothness, the Haar system is computationally efficient and provides perfect reconstruction in discrete settings. For applications requiring higher regularity, Daubechies wavelets extend this construction to compactly supported orthogonal wavelets with prescribed numbers of vanishing moments, allowing better approximation of smooth functions; for instance, the D4 wavelet has two vanishing moments and compact support determined by its four filter coefficients.

Beyond wavelets, other orthogonal bases include the Hermite functions, defined as \psi_n(x) = \frac{1}{\sqrt{2^n n! \sqrt{\pi}}} H_n(x) e^{-x^2/2}, where the H_n(x) are the Hermite polynomials; they form an orthonormal basis for L^2(\mathbb{R}). These functions are eigenfunctions of the Fourier transform and are particularly suited to problems involving quadratic potentials, such as the quantum harmonic oscillator.

In general, wavelet bases such as the Haar and Daubechies families form Riesz bases in L^2(\mathbb{R}), meaning they are boundedly equivalent to orthonormal bases, with bounds A and B satisfying A \|f\|^2 \leq \sum_{j,k} | \langle f, \psi_{j,k} \rangle |^2 \leq B \|f\|^2 for all f \in L^2(\mathbb{R}), where 0 < A \leq B < \infty. This property ensures stable expansions and reconstructions, highlighting their utility for localized analysis compared with the delocalized nature of Fourier bases.
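A minimal discrete sketch of the Haar averaging/differencing step and its perfect reconstruction, under the assumption of a short, even-length sample signal chosen arbitrarily for the example:

```python
import numpy as np

# One level of the discrete Haar transform: orthonormal averaging and
# differencing, followed by exact reconstruction. The sample signal is arbitrary.
signal = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])

pairs = signal.reshape(-1, 2)
approx = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2.0)   # scaling (coarse) coefficients
detail = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2.0)   # wavelet (detail) coefficients

# Perfect reconstruction from the two coefficient sequences.
rec = np.empty_like(signal)
rec[0::2] = (approx + detail) / np.sqrt(2.0)
rec[1::2] = (approx - detail) / np.sqrt(2.0)
print(np.allclose(rec, signal))                       # True

# Orthonormality preserves energy: ||signal||^2 = ||approx||^2 + ||detail||^2.
print(np.isclose(np.sum(signal**2), np.sum(approx**2) + np.sum(detail**2)))   # True
```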

Applications

Approximation and Interpolation

Basis functions play a central role in approximation theory by enabling the representation of functions within finite-dimensional subspaces. The best approximation of a function f in a normed linear space V by elements of a subspace W spanned by the first N basis functions is defined through the infimum \inf_{g \in W} \|f - g\|, where \|\cdot\| denotes a norm such as the L^2 or L^\infty norm. For finite-dimensional W, a best approximation w^* exists for any f \in V, and in inner product spaces w^* is characterized by the orthogonality condition (f - w^*, w) = 0 for all w \in W. This orthogonal projection onto the span of the basis functions minimizes the error and is unique in strictly convex spaces.

Interpolation using basis functions seeks an exact match of f at specified nodes, constructing a function that passes through given points. In the case of polynomial bases, Lagrange interpolation provides such an approximant: for distinct nodes x_1, \dots, x_n and values y_i = f(x_i), the interpolating polynomial is P(x) = \sum_{j=1}^n y_j \prod_{\substack{k=1 \\ k \neq j}}^n \frac{x - x_k}{x_j - x_k}, which is of degree at most n-1 and satisfies P(x_i) = y_i for each i. This construction leverages the polynomial basis to ensure uniqueness within the space of polynomials of that degree.

Theoretical guarantees on approximation errors are provided by results such as Jackson's theorem, which bounds the error of the best uniform approximation. For trigonometric (Fourier) bases, the theorem states that if f is k-times continuously differentiable on [-\pi, \pi], the error in approximating f by the best trigonometric polynomial of degree N is O(1/N^k), with constants depending on the modulus of continuity of the k-th derivative. This establishes the convergence rate for smooth functions in the L^\infty norm and shows that partial sums of Fourier series are near-optimal approximants.

A constructive approach to the Weierstrass approximation theorem, which asserts that continuous functions on [0,1] can be uniformly approximated by polynomials, is given by the Bernstein polynomials. These are defined as B_n(f; x) = \sum_{k=0}^n f\left(\frac{k}{n}\right) \binom{n}{k} x^k (1-x)^{n-k}, where \binom{n}{k} is the binomial coefficient. As n \to \infty, B_n(f; x) \to f(x) uniformly for continuous f, with the proof relying on the weak law of large numbers and the interpretation of the binomial terms as a probability distribution concentrating around x. This provides an explicit sequence of polynomial approximants built from the Bernstein basis.

However, polynomial interpolation can exhibit instabilities, as illustrated by the Runge phenomenon. When interpolating on equidistant nodes in [-1,1] with high-degree polynomials, large oscillations occur near the endpoints, even for smooth functions such as f(x) = 1/(1 + 25x^2). For a degree-10 interpolant at 11 equidistant points, the error grows markedly at the boundaries because of the ill-conditioning of the Vandermonde system underlying the polynomial basis expansion. This highlights the need for non-equidistant nodes, such as Chebyshev points, to mitigate such artifacts in practice; a numerical comparison is sketched below.

Basis function expansions also underpin collocation methods for the approximate solution of partial differential equations (PDEs). In these methods, the solution is sought as an expansion u(x) \approx \sum_{i=1}^N c_i \phi_i(x) in a basis \{\phi_i\}, where the coefficients c_i are chosen so that the PDE is satisfied exactly at selected collocation points. For elliptic PDEs with random coefficients, combining reduced basis techniques with sparse grid collocation reduces dimensionality while preserving accuracy, as the selection of basis functions captures the dominant features of the solution manifold. This approach yields efficient approximations, with error bounds tied to the reduced dimension and the grid density.
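The Runge comparison mentioned above can be reproduced with a few lines of code; the degree, node counts, and error grid are the standard textbook choices and are assumed here only for illustration.

```python
import numpy as np

# Runge phenomenon: degree-10 interpolation of f(x) = 1/(1 + 25 x^2) on [-1, 1],
# comparing equidistant nodes with Chebyshev nodes.
f = lambda x: 1.0 / (1.0 + 25.0 * x**2)
xx = np.linspace(-1.0, 1.0, 5001)          # fine grid for measuring the error
deg = 10

equi = np.linspace(-1.0, 1.0, deg + 1)
cheb = np.cos((2 * np.arange(deg + 1) + 1) * np.pi / (2 * (deg + 1)))

for name, nodes in (("equidistant", equi), ("Chebyshev", cheb)):
    p = np.polyfit(nodes, f(nodes), deg)   # coefficients of the interpolant
    err = np.max(np.abs(np.polyval(p, xx) - f(xx)))
    print(f"{name:12s} nodes: max |f - p_10| = {err:.3f}")
# Equidistant nodes give large endpoint oscillations (error near 1.9), while
# Chebyshev nodes keep the maximum error small (around 0.1).
```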

Signal Processing and Analysis

In signal processing, basis functions enable the decomposition of signals into constituent components, facilitating analysis, feature extraction, and manipulation in both the time and frequency domains. Orthogonal bases, such as those arising from Fourier analysis and wavelet theory, form the foundation for many techniques, allowing signals to be represented as linear combinations of basis elements for efficient processing.

The Fourier transform is a cornerstone of frequency-domain analysis, expressing a signal as a superposition of complex exponentials, which are basis functions of the form e^{i \omega x}. The continuous Fourier transform of a signal f(x) is given by \hat{f}(\omega) = \int_{-\infty}^{\infty} f(x) e^{-i \omega x} \, dx, revealing the frequency content of continuous-time signals such as audio or electromagnetic waves. For digital signals, the discrete Fourier transform (DFT) adapts this to finite sequences, enabling practical implementation via algorithms like the fast Fourier transform (FFT) for spectrum estimation and filtering.

To handle non-stationary signals whose frequency content varies over time, the short-time Fourier transform (STFT) applies a sliding window to localize the analysis. The STFT uses basis functions e^{i \omega (t - \tau)} modulated by a window function centered at \tau, producing a time-frequency representation whose resolution trade-offs are governed by the uncertainty principle. This approach is widely used in audio and speech analysis for tracking evolving spectral features.

For signals with transient or localized features, such as shocks or bursts, the continuous wavelet transform (CWT) provides superior time-frequency localization using scaled and shifted basis functions called wavelets. The CWT of a signal f(t) with respect to a mother wavelet \psi is defined as W_f(a, b) = \frac{1}{\sqrt{|a|}} \int_{-\infty}^{\infty} f(t) \overline{\psi\left( \frac{t - b}{a} \right)} \, dt, where a controls scale (inversely related to frequency) and b controls translation, making it well suited to detecting abrupt changes in non-stationary processes such as seismic events.

In data compression, basis functions allow energy compaction by representing signals with a small number of significant coefficients, as in the JPEG standard, which employs the discrete cosine transform (DCT) built from cosine basis functions. The DCT maps image blocks to coefficients in which low-frequency components dominate, enabling quantization and retention of only the large coefficients, with compression ratios up to about 100:1 while preserving perceptual quality.

Signal denoising exploits basis expansions by thresholding small coefficients, which typically capture noise rather than signal structure, and then reconstructing. In wavelet or Fourier bases, soft or hard thresholding (setting coefficients below a noise-estimated threshold to zero) achieves near-optimal mean-squared error reduction, particularly for signals that are sparse in the chosen basis, as demonstrated in seminal work on nonlinear wavelet shrinkage.

A practical example is electrocardiogram (ECG) analysis, where wavelet bases decompose heart signals to isolate QRS complexes and detect anomalies such as arrhythmias. By applying multi-resolution wavelet transforms, subtle deviations in P-waves or T-waves indicative of ischemia can be extracted with high sensitivity, outperforming traditional filters in noisy ambulatory recordings.
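The thresholding idea can be sketched in the discrete Fourier basis, which keeps the example self-contained; practical denoisers typically use wavelet bases and principled threshold rules, whereas the test signal, noise level, and threshold below are arbitrary choices for illustration.

```python
import numpy as np

# Denoising by coefficient thresholding, sketched in the discrete Fourier basis:
# expand the noisy signal, zero the small coefficients, and transform back.
rng = np.random.default_rng(1)
N = 1024
t = np.linspace(0.0, 1.0, N, endpoint=False)
clean = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 12 * t)
noisy = clean + 0.4 * rng.standard_normal(N)

coeffs = np.fft.rfft(noisy)                    # expansion coefficients
threshold = 0.2 * np.max(np.abs(coeffs))       # ad hoc threshold for illustration
coeffs[np.abs(coeffs) < threshold] = 0.0       # hard thresholding
denoised = np.fft.irfft(coeffs, n=N)

rmse = lambda a, b: np.sqrt(np.mean((a - b) ** 2))
print(f"RMSE of noisy signal:    {rmse(noisy, clean):.3f}")
print(f"RMSE of denoised signal: {rmse(denoised, clean):.3f}")   # noticeably smaller
```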

References

  1. [1]
    [PDF] An introduction to functional analysis for science and engineering
    all the mathematical definitions, theorems and proofs below are found in texts in functional analysis. ... 40 We can regard the basis functions themselves ...
  2. [2]
    [PDF] A discussion of bases in Banach spaces and some of their properties
    Definition 3.1. Let X be a Banach space. A sequence {e_n}_{n≥1} ⊂ X is called a Schauder basis or simply a basis for X if for any x ∈ X there exists a unique.
  3. [3]
    [PDF] AA215A Lecture 2 Approximation Theory - Aerospace Computing Lab
    Jan 7, 2016 · Suppose that we wish to approximate a given function f by a linear combination of independent basis functions φj,j = 0, .., n. In general we ...
  4. [4]
    [PDF] The vector space axioms
    A vector space over a field F is a set V , equipped with. • an element 0 ∈ V called zero, • an addition law α : V × V → V (usually written α(v, w) = v + w), ...
  5. [5]
    Linear space with (Hamel) basis and the axiom of choice
    Mar 10, 2016 · It is true that the axiom of choice is equivalent to the statement that every linear space has a Hamel basis.
  6. [6]
    [PDF] 4. bases in banach spaces
    We shall see in Theorem 4.11 that every basis is a Schauder basis, i.e., the coefficient functionals a_n are always continuous. First, however, we require some ...
  7. [7]
    [PDF] Chapter 2: Continuous Functions - UC Davis Math
    These function spaces are our first examples of infinite-dimensional normed linear spaces, and we explore the concepts of convergence, completeness, density,.
  8. [8]
    [PDF] Lp spaces - UC Davis Math
    This inequality means, as stated previously, that ‖·‖_{L^p} is a norm on L^p(X) for 1 ≤ p ≤ ∞. If 0 < p < 1, then the reverse inequality holds.
  9. [9]
    [PDF] Chapter 6: Hilbert Spaces - UC Davis Math
    Definition 6.2 A Hilbert space is a complete inner product space. In particular, every Hilbert space is a Banach space with respect to the norm in (6.1).
  10. [10]
    [PDF] Banach Spaces - UC Davis Math
    The concept of a Schauder basis is not as straightforward as it may appear. ... Applied functional analysis is discussed in Lusternik and. Sobolev [33]. For ...
  11. [11]
    [PDF] Schauder basis
    Dec 4, 2012 · This property makes the Hamel basis unwieldy for infinite-dimensional Banach spaces; as a Hamel basis for an infinite-dimensional Banach space ...
  12. [12]
    [PDF] C. Heil, A Basis Theory Primer, Expanded Edition, Birkhäuser ...
    A weak basis is a weak Schauder basis if each coefficient functional a_m is weakly continuous on X, i.e., if y_n → y weakly in X implies a_m(y_n) → a_m(y). (c) ...
  13. [13]
    [PDF] FUNCTIONAL ANALYSIS | Second Edition Walter Rudin
    Rudin, Walter. Functional Analysis, 2nd ed. (International Series in Pure and Applied Mathematics).
  14. [14]
    Developments in Schauder basis theory1 - Project Euclid
    1. Introduction. In the forty-four years since 1927 when J. Schauder [101] introduced the notion of a topological basis for a Banach space, well over two ...
  15. [15]
    Polynomial Bases - Prof. Michael T. Heath
    The basis matrix for the monomial basis functions is particularly ill-conditioned, and its conditioning worsens as the dimension is increased. This ...
  16. [16]
    [PDF] MATH 421/510 Assignment 2
    The monomials {x^n : n ≥ 1} do not form a Schauder basis for C[0,1]. Indeed, if they did, then given any f ∈ C[0,1], there is a unique representation f = ∑_{n=0}^∞ ...
  17. [17]
    [PDF] Weierstrass' proof of the Weierstrass Approximation Theorem
    At age 70 Weierstrass published the proof of his well-known Approximation Theorem. In this note we will present a self-contained version, ...
  18. [18]
    6.4 Working with Taylor Series - Calculus Volume 2 | OpenStax
    Mar 30, 2016 · Learning Objectives. 6.4.1 Write the terms of the binomial series. 6.4.2 Recognize the Taylor series expansions of common functions.
  19. [19]
    [PDF] 5.8 Chebyshev Approximation
    The minimax polynomial is very difficult to find; the Chebyshev approximating polynomial is almost identical and is very easy to compute! So, given some ( ...
  20. [20]
    [PDF] Chapter 7: Fourier Series - UC Davis Math
    What makes Hilbert spaces so powerful in many applications is the possibility of expressing a problem in terms of a suitable orthonormal basis.
  21. [21]
    [PDF] CHAPTER 4 FOURIER SERIES AND INTEGRALS
    This section explains three Fourier series: sines, cosines, and exponentials e^{ikx}. Square waves (1 or 0 or −1) are great examples, with delta functions in ...
  22. [22]
    [PDF] on the Convergence of Fourier Series
    This is called the Gibbs phenomenon. The height of the overshoot is typically about 10% of the size of the jump. The simplest example is provided by a square ...
  23. [23]
    [PDF] Approximation Theory
    If the space V is an inner product space, then a complete analysis of the best approximation problem in finite-dimensional subspaces can be given. With V.
  24. [24]
    Lagrange Interpolating Polynomial -- from Wolfram MathWorld
    The Lagrange interpolating polynomial is the polynomial P(x) of degree <=(n-1) that passes through the n points (x_1,y_1=f(x_1)).
  25. [25]
    Jackson's Theorem -- from Wolfram MathWorld
    Jackson's theorem is a statement about the error E_n(f) of the best uniform approximation to a real function f(x) on [-1,1] by real polynomials of degree at ...
  26. [26]
    [PDF] Weierstrass approximation theorem by S. Bernstein
    There is a lovely proof of the Weierstrass approximation theorem by S. Bernstein. We shall show that any function, continuous on the closed interval [0,1] ...
  27. [27]
    [PDF] The Runge Phenomenon and Piecewise Polynomial Interpolation
    Aug 16, 2017 · The Runge phenomenon shows high-degree polynomial interpolation can cause spurious oscillations, especially near singularities, making it ...
  28. [28]
    Reduced Basis Collocation Methods for Partial Differential ...
    The sparse grid stochastic collocation method is a new method for solving partial differential equations with random coefficients.
  29. [29]
    [PDF] EE 261 - The Fourier Transform and its Applications
    1 Fourier Series. 1. 1.1 Introduction and Choices to Make . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1. 1.2 Periodic Phenomena .
  30. [30]
    [PDF] Chapter 4: Frequency Domain and Fourier Transforms
    Frequency domain analysis and Fourier transforms are key for signal and system analysis, breaking down time signals into sinusoids.
  31. [31]
    The Short-Time Fourier Transform - Stanford CCRMA
    The Short-Time Fourier Transform (STFT) (or short-term Fourier transform) is a powerful general-purpose tool for audio signal processing.
  32. [32]
    [PDF] Wavelet Signal Processing for Transient Feature Extraction - DTIC
    Wavelet transform techniques were developed to extract low dimensional feature data that allowed a simple classification scheme to easily separate the various ...
  33. [33]
    [PDF] Image Compression Using the Discrete Cosine Transform
    It is widely used in image compression. Here we develop some simple functions to compute the DCT and to compress images.
  34. [34]
    [PDF] DE-NOISING BY SOFT-THRESHOLDING - Stanford University
    Our proof of these results develops new facts about abstract statistical inference and its connection with an optimal recovery model. Key Words and Phrases.
  35. [35]
    [PDF] On denoising and best signal representation
    As shown in Fig. 1, a signal reconstruction is obtained by thresholding a set of coefficients in a given basis and then applying an inverse transformation. ...
  36. [36]
    Robust and Accurate Anomaly Detection in ECG Artifacts ... - NIH
    Therefore, this work proposes a novel anomaly detection technique that is highly robust and accurate in the presence of ECG artifacts which can effectively ...
  37. [37]
    A Survey of Heart Anomaly Detection Using Ambulatory ... - MDPI
    Wavelet transform is a powerful method for analyzing non-stationary signals, such as ECGs [37]. The DWT noise removal method is used in [38,39,40]. This method ...