
Hilbert transform

The Hilbert transform is a specific linear operator in mathematics and signal processing that acts on a real-valued function f(t) to produce another function \hat{f}(t), defined as the Cauchy principal value integral \hat{f}(t) = \frac{1}{\pi} \mathrm{P.V.} \int_{-\infty}^{\infty} \frac{f(\tau)}{t - \tau} \, d\tau, where the principal value handles the singularity at \tau = t. This transform, introduced by David Hilbert in 1905 as part of his work on integral equations and boundary value problems for analytic functions, effectively shifts the phase of frequency components in the Fourier domain by -90^\circ for positive frequencies and +90^\circ for negative frequencies, corresponding to multiplication by -i \operatorname{sgn}(\xi) in the frequency domain.

Key properties of the Hilbert transform include its linearity, meaning H(af + bg) = aH(f) + bH(g) for scalars a, b and functions f, g; its self-inverse nature up to a sign, with H^2 = -\mathrm{Id}; and its boundedness as an isometry on the L^2(\mathbb{R}) space of square-integrable functions, preserving the L^2-norm. In the context of Fourier analysis, it serves as a prototypical singular integral operator, enabling the decomposition of signals into analytic representations where the original function and its transform form the real and imaginary parts of a complex analytic signal z(t) = f(t) + i \hat{f}(t), which has no negative frequency components.

In signal processing, the Hilbert transform is fundamental for applications such as envelope detection, instantaneous frequency estimation, and single-sideband (SSB) modulation, where it facilitates the suppression of unwanted sidebands in communication systems like quadrature amplitude modulation (QAM).
Mathematically, it underpins harmonic analysis by providing tools for studying maximal operators, interpolation theorems, and the behavior of Fourier series on boundaries, with extensions to higher dimensions and directional variants influencing modern research in functional analysis. Its role as essentially the only singular integral operator in one dimension (any bounded operator on L^2(\mathbb{R}) that commutes with translations and positive dilations is a linear combination of the identity and the Hilbert transform) highlights its unique position in understanding linear operators on function spaces.

Mathematical Foundations

Definition

The Hilbert transform is a linear integral operator that acts on a real-valued function f defined on the real line, producing another function \hat{f} interpreted as the imaginary part of the corresponding analytic signal. It is formally defined by the Cauchy principal value integral \hat{f}(t) = \frac{1}{\pi} \, \mathrm{p.v.} \int_{-\infty}^{\infty} \frac{f(\tau)}{t - \tau} \, d\tau, where \mathrm{p.v.} denotes the Cauchy principal value, ensuring the integral converges in a symmetric limiting sense around the singularity at \tau = t. This operator is central in signal processing and harmonic analysis, transforming the input function while preserving its energy in the L^2 sense for suitable functions.

In the frequency domain, the Hilbert transform corresponds to a phase shift of -90^\circ (or -\pi/2 radians) for positive frequencies and +90^\circ (or \pi/2 radians) for negative frequencies, effectively converting cosines to sines (and sines to negative cosines) without altering amplitudes. This property makes it a quadrature filter, useful for extracting envelope and instantaneous phase information from real signals. For a real-valued function f(t), the analytic signal is then given by z(t) = f(t) + i \hat{f}(t), where \hat{f}(t) provides the imaginary component, suppressing negative frequency contributions to yield a complex representation concentrated in the positive frequency domain.

A simple illustration of the transform's behavior is its action on constant functions: the Hilbert transform of any constant c is zero, as the principal value integral of c/(t - \tau) over symmetric limits vanishes due to the odd integrand. This result underscores the transform's insensitivity to DC components, aligning with its role in emphasizing oscillatory content.
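The frequency-domain characterization above can be checked numerically. The sketch below is a minimal illustration, assuming a uniform periodic grid and NumPy's FFT conventions (the discrete transform only approximates the continuous operator, and `hilbert_fft` is an ad hoc helper name): applying the multiplier -i \operatorname{sgn}(\xi) recovers H[\cos] = \sin.

```python
import numpy as np

def hilbert_fft(x):
    """Ad hoc discrete Hilbert transform: apply the frequency-domain
    multiplier -i*sgn(xi) to the FFT of a real periodic sample."""
    X = np.fft.fft(x)
    xi = np.fft.fftfreq(len(x))
    mult = -1j * np.sign(xi)        # -i*sgn(xi), with sgn(0) = 0
    return np.real(np.fft.ifft(mult * X))

t = np.linspace(0, 2 * np.pi, 1024, endpoint=False)
ht = hilbert_fft(np.cos(t))         # H[cos] should equal sin
print(np.max(np.abs(ht - np.sin(t))))  # small (machine precision)
```

Note that the multiplier sends the DC bin to zero (sgn(0) = 0), matching the transform's insensitivity to constants described above.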

Notation and Conventions

The Hilbert transform operator is commonly denoted by H, so that the transform of a function f is written as Hf, H[f], or \hat{f}. This notation emphasizes the operator's role as a linear map on suitable function spaces. A distinction is drawn between the two-sided Hilbert transform, which operates symmetrically over the entire real line, and one-sided variants associated with analytic extensions to the upper or lower half-plane; the latter project onto positive or negative frequency components, respectively, and are often expressed using combinations like f_+ = \frac{1}{2} (f + i Hf) for the upper half-plane. Sign conventions for the Hilbert transform vary across fields: signal processing typically adopts the convention in which positive frequencies receive a phase shift of -90^\circ (a clockwise rotation in the complex plane, equivalent to multiplication by -i in the frequency domain), while some mathematical treatments employ the opposite sign, yielding a +90^\circ (counterclockwise) shift and aligning with boundary values from the lower half-plane. The prefactor in the principal value integral representation is standardly 1/\pi under the former convention, though -1/\pi appears in literature favoring the latter; this discrepancy arises from the choice of kernel sign and ensures consistency with the desired phase behavior. In multivariable generalizations, such as the Riesz transforms that extend the Hilbert transform to higher dimensions, vector arguments are conventionally denoted in boldface (e.g., \mathbf{x}) to distinguish them from scalar variables.

Initial Domain of Definition

The Hilbert transform is initially defined for functions in the Schwartz space \mathcal{S}(\mathbb{R}), consisting of smooth functions that decay rapidly at infinity along with all their derivatives, ensuring the principal value integral converges absolutely. For f \in \mathcal{S}(\mathbb{R}), the transform is given by (Hf)(t) = \frac{1}{\pi} \mathrm{P.V.} \int_{-\infty}^{\infty} \frac{f(\tau)}{t - \tau} \, d\tau, where the Cauchy principal value addresses the singularity at \tau = t in the kernel 1/(t - \tau), taken as the limit \lim_{\epsilon \to 0^+} \left( \int_{-\infty}^{t - \epsilon} + \int_{t + \epsilon}^{\infty} \right) \frac{f(\tau)}{t - \tau} \, d\tau. This formulation guarantees that Hf is well-defined and smooth; note, however, that Hf need not itself belong to \mathcal{S}(\mathbb{R}): unless f has vanishing moments, Hf decays only like 1/|t| at infinity. The domain extends naturally to the Lebesgue space L^2(\mathbb{R}) through the Plancherel theorem, which equates the L^2 norms of a function and its Fourier transform, allowing the Hilbert transform to be represented as multiplication by -i \operatorname{sgn}(\xi) in the frequency domain. This extension preserves the L^2 norm, making H an isometry on L^2(\mathbb{R}) and ensuring convergence in the L^2 sense for dense subsets like the Schwartz space. Further, the Hilbert transform maps L^p(\mathbb{R}) boundedly to itself for 1 < p < \infty, with the principal value integral converging pointwise almost everywhere for functions in these spaces. These L^p bounds are first established on well-behaved dense classes such as the Schwartz functions and then extended to all of L^p via density arguments.
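The symmetric excision in the limit above can be illustrated numerically. A minimal sketch, assuming SciPy's adaptive quadrature and a Gaussian test function (the cutoff A = 50 is an arbitrary truncation): each one-sided piece diverges like \log \epsilon, but their sum converges as \epsilon \to 0^+, here to roughly 0.607, which agrees with the known closed form \frac{2}{\sqrt{\pi}} D(1) in terms of Dawson's function.

```python
import numpy as np
from scipy.integrate import quad

# Symmetric excision of the singularity: integrate f(tau)/(t - tau)
# over (-A, t - eps) and (t + eps, A). Each piece diverges like log(eps)
# as eps -> 0, but the divergences cancel in the sum.
f = lambda u: np.exp(-u**2)   # Gaussian test function
t, A = 1.0, 50.0

values = []
for eps in [1e-1, 1e-2, 1e-4, 1e-6]:
    left, _ = quad(lambda u: f(u) / (t - u), -A, t - eps, limit=300)
    right, _ = quad(lambda u: f(u) / (t - u), t + eps, A, limit=300)
    values.append((left + right) / np.pi)
    print(eps, values[-1])
```

The printed values stabilize quickly, since the symmetric limit removes the odd part of the singularity exactly.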

Historical Context

Origins and Early Development

The Hilbert transform originated in David Hilbert's foundational work on linear integral equations, where he introduced it in 1905 to address the Dirichlet problem for the Laplace equation as part of solving the Riemann-Hilbert boundary value problem for analytic functions. In his paper "Grundzüge einer allgemeinen Theorie der linearen Integralgleichungen. Erste Mitteilung," published in the Nachrichten von der Gesellschaft der Wissenschaften zu Göttingen, Hilbert formulated the transform as a principal value integral to connect the real and imaginary parts of boundary values for analytic functions in the unit disk, enabling the solution of boundary value problems through integral representations. This approach transformed the Dirichlet problem into a Fredholm-type integral equation, providing a method to recover the harmonic function inside the domain from its boundary data. The concept drew early inspiration from late 19th-century developments in complex analysis, particularly Henri Poincaré's 1885 investigations into analytic functions of two variables and conformal mappings, which emphasized the role of conjugate harmonic functions and the Cauchy integral formula in representing solutions to boundary problems. Poincaré's work highlighted how the real and imaginary parts of analytic functions behave as conjugate harmonics, laying groundwork for integral operators that link boundary values to interior solutions, a theme Hilbert extended through his singular integral. These connections underscored the transform's utility in unifying complex analysis with integral methods for potential-theoretic problems. In potential theory, the Hilbert transform found initial applications in representing harmonic conjugates, allowing the construction of analytic functions from given real parts on the boundary, essential for solving elliptic partial differential equations (PDEs). 
Hilbert's framework addressed boundary value problems for such PDEs, aligning with the broader program of his 1900 International Congress of Mathematicians address, notably the 20th problem, which asks for general methods of establishing the solvability of boundary value problems. This integration of integral equations with PDE boundary conditions marked a pivotal step in early 20th-century mathematical physics, emphasizing the transform's role in rigorous solution theory without relying on series expansions.

Key Contributors and Milestones

In the early 20th century, building on David Hilbert's foundational 1905 work on boundary value problems for analytic functions, significant advancements in the theory of conjugate functions—closely related to the Hilbert transform—emerged through the efforts of key mathematicians. A pivotal contribution came from Edward Charles Titchmarsh in 1925, who established a theorem linking conjugate trigonometric integrals to Poisson integrals, providing deeper insights into the representation and properties of harmonic functions in the upper half-plane. This result, detailed in his paper on conjugate trigonometrical integrals, clarified the relationship between the Hilbert transform and boundary values of analytic functions, influencing subsequent studies in complex analysis. During the 1930s, Antoni Zygmund advanced the understanding of the Hilbert transform through his work on bounded analytic functions and their connections to Hardy spaces. In his seminal 1935 treatise Trigonometric Series, Zygmund explored the boundedness of conjugate functions in these spaces, laying groundwork for applications in Fourier analysis and establishing key inequalities that bounded the norms of Hilbert transforms on relevant function classes. The brothers Frigyes and Marcel Riesz made foundational contributions in the 1910s and 1920s to the boundary behavior of analytic functions, notably the F. and M. Riesz theorem on the absolute continuity of analytic measures, work that later connected the Hilbert transform to broader developments in harmonic analysis after World War II. Marcel Riesz's 1928 proof of the L^p-boundedness (1 < p < ∞) of the conjugate function operator on the real line was particularly influential, providing a cornerstone for modern operator theory and maximal inequalities in harmonic analysis. A major milestone occurred in the 1950s with the recognition of the Hilbert transform as the prototypical singular integral operator within the Calderón-Zygmund theory.
Alberto Calderón and Antoni Zygmund's collaborative work, including their 1952 paper on singular integrals, developed a general framework for the boundedness of such operators on L^p spaces, unifying the Hilbert transform with other Calderón-Zygmund kernels and enabling extensions to higher dimensions and more complex settings in analysis.

Core Relationships and Computations

Connection to the Fourier Transform

The Fourier transform of a function f \in L^1(\mathbb{R}) is defined as \hat{f}(\xi) = \int_{-\infty}^{\infty} f(t) e^{-2\pi i \xi t} \, dt, with the convention extending naturally to L^2(\mathbb{R}) via density of the Schwartz class and Plancherel's theorem, which establishes that the Fourier transform is a unitary operator on L^2(\mathbb{R}). The Hilbert transform H admits a simple representation in the frequency domain: for f \in L^2(\mathbb{R}), \widehat{H}(\xi) = -i \, \sgn(\xi) \, \hat{f}(\xi), where \sgn(\xi) is the sign function defined by \sgn(\xi) = 1 if \xi > 0, \sgn(\xi) = -1 if \xi < 0, and \sgn(0) = 0. This multiplier property characterizes the Hilbert transform as a Fourier multiplier operator with symbol -i \, \sgn(\xi). To derive this, recall that the Hilbert transform is a convolution H[f](t) = f * k (t), where k(t) = \frac{1}{\pi t} in the principal value sense. By the convolution theorem for the Fourier transform, \widehat{H}(\xi) = \hat{f}(\xi) \cdot \hat{k}(\xi). The Fourier transform of the kernel k is computed in the sense of tempered distributions: \hat{k}(\xi) = -i \, \sgn(\xi), which follows from the distributional Fourier transform of the principal value distribution \mathrm{p.v.} \, \frac{1}{t} being -\pi i \, \sgn(\xi), scaled by the factor $1/\pi. Plancherel's theorem then justifies the extension of this pointwise multiplier formula to all L^2 functions, as the operator norm of multiplication by -i \, \sgn(\xi) is bounded by 1 on L^2. This frequency-domain representation implies that the Hilbert transform acts as a phase shifter: it multiplies the Fourier coefficients by e^{-i \pi/2} = -i for positive frequencies (\xi > 0) and by e^{i \pi/2} = i for negative frequencies (\xi < 0), while preserving the magnitude spectrum |\hat{f}(\xi)| since |-i \, \sgn(\xi)| = 1. Thus, H rotates the phase of each frequency component by \pm \pi/2 without altering amplitudes, a property central to its role in analytic signal construction.
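SciPy's `hilbert` routine constructs the analytic signal f + iH[f] by the same multiplier reasoning; a small cross-check sketch (the grid and test signal are illustrative choices) compares its imaginary part against a direct application of -i \operatorname{sgn}(\xi) in the discrete frequency domain:

```python
import numpy as np
from scipy.signal import hilbert  # returns the analytic signal f + i*H[f]

t = np.linspace(0, 1, 2000, endpoint=False)
f = np.cos(2*np.pi*50*t) + 0.5*np.sin(2*np.pi*80*t)

# Direct frequency-domain construction: multiply spectrum by -i*sgn(xi)
F = np.fft.fft(f)
Hf = np.real(np.fft.ifft(-1j * np.sign(np.fft.fftfreq(len(f))) * F))

err = np.max(np.abs(np.imag(hilbert(f)) - Hf))
print(err)   # agreement up to rounding error
```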

Table of Selected Hilbert Transforms

The Hilbert transform provides explicit closed-form expressions for many common functions, facilitating both theoretical analysis and practical computations in signal processing and harmonic analysis. These transforms are typically derived using the Fourier domain representation, where the Fourier transform of the function is multiplied by -i \operatorname{sgn}(\omega) before applying the inverse Fourier transform. The following table presents selected examples, focusing on the Heaviside step function, the Gaussian, and a representative power-related function such as the Cauchy distribution density. Each entry includes the input function f(t), its Hilbert transform \mathcal{H}\{f\}(t), a brief note on verification via the Fourier method, and a citation to a primary source.
Input function f(t) | Hilbert transform \mathcal{H}\{f\}(t) | Verification note | Citation
Heaviside step: H(t) | \frac{1}{\pi} \ln |t| (up to an additive constant, in the distributional sense) | Obtained as the inverse Fourier transform of -i \operatorname{sgn}(\omega) \cdot \frac{1}{i \omega} (principal value), yielding the logarithmic form. | https://doi.org/10.1017/CBO9780511721458 (King, 2009, Vol. 1, App. A)
Gaussian: e^{-t^2} | \frac{2}{\sqrt{\pi}} D(t), where D(t) = e^{-t^2} \int_0^t e^{u^2} \, du (Dawson's function, related to the imaginary error function via D(t) = \frac{\sqrt{\pi}}{2} e^{-t^2} \operatorname{erfi}(t)) | Obtained as the inverse Fourier transform of -i \operatorname{sgn}(\omega) \cdot \sqrt{\pi} e^{-\omega^2 / 4}, resulting in the Dawson expression equivalent to error function forms. | https://doi.org/10.1016/S0096-3003(08)00578-X (Abdullah et al., 2009)
Cauchy density: \frac{1}{1 + t^2} | \frac{t}{1 + t^2} | Obtained as the inverse Fourier transform of -i \operatorname{sgn}(\omega) \cdot \pi e^{-|\omega|}, yielding the shifted Lorentzian form. |
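The Gaussian and Cauchy rows can be spot-checked with a principal-value quadrature; under the convention \mathcal{H}\{f\}(t) = \frac{1}{\pi}\mathrm{P.V.}\int f(\tau)/(t-\tau)\,d\tau used throughout this article, the signs come out as tabulated. A sketch assuming SciPy's Cauchy-weight quadrature (`hilbert_num` is an ad hoc helper, and the cutoff \pm 50 an arbitrary truncation):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import dawsn

def hilbert_num(f, t, a=-50.0, b=50.0):
    # H[f](t) = (1/pi) PV int f(tau)/(t - tau) dtau. SciPy's 'cauchy'
    # weight computes PV int f(tau)/(tau - wvar) dtau, hence the minus.
    val, _ = quad(f, a, b, weight='cauchy', wvar=t)
    return -val / np.pi

gauss = hilbert_num(lambda u: np.exp(-u**2), 1.0)
cauchy = hilbert_num(lambda u: 1.0 / (1.0 + u**2), 0.7)
print(gauss, 2 / np.sqrt(np.pi) * dawsn(1.0))   # Gaussian row
print(cauchy, 0.7 / (1 + 0.7**2))               # Cauchy-density row
```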

Fundamental Properties

Boundedness and Continuity

The Hilbert transform H is a bounded operator on L^2(\mathbb{R}), satisfying \|Hf\|_2 = \|f\|_2 for all f \in L^2(\mathbb{R}). This isometry follows from the Plancherel theorem applied to the Fourier multiplier representation of H, where the multiplier -i \sgn(\xi) has absolute value 1 for almost every frequency \xi \in \mathbb{R}. Using the known weak-type (1,1) bound together with the L^2 bound, the Marcinkiewicz interpolation theorem shows that H is bounded on L^p(\mathbb{R}) for 1 < p \leq 2; the range 2 < p < \infty then follows by duality, since the adjoint of H is -H. Altogether, \|Hf\|_p \leq C_p \|f\|_p for all 1 < p < \infty, where C_p is a constant depending only on p. However, H fails to be bounded on L^1(\mathbb{R}) and on L^\infty(\mathbb{R}), and a single counterexample covers both cases. For the characteristic function f = \chi_{[0,1]}, the Hilbert transform is Hf(x) = \frac{1}{\pi} \log \left| \frac{x}{x-1} \right| for x \notin \{0, 1\}, which behaves asymptotically as \frac{1}{\pi x} as x \to +\infty. Hence Hf \notin L^1(\mathbb{R}) since \int_2^\infty \frac{dx}{\pi x} = \infty, demonstrating failure on L^1, even though f \in L^1(\mathbb{R}). Similarly, Hf diverges logarithmically as x \to 0 and x \to 1, so Hf \notin L^\infty(\mathbb{R}) although f \in L^\infty(\mathbb{R}), showing failure on L^\infty. Pointwise, mere continuity of f is not sufficient for the principal value to exist; for functions satisfying a Dini- or Hölder-type modulus of continuity with suitable decay, however, Hf is again continuous, reflecting the transform's behavior as a singular integral operator.
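The L^2 isometry can be observed numerically. A sketch using the discrete multiplier (the helper `H` is ad hoc): the DC and Nyquist bins are zeroed first because the discrete multiplier is degenerate there, an artifact of the finite grid rather than of the continuous operator.

```python
import numpy as np

def H(x):
    X = np.fft.fft(x)
    return np.real(np.fft.ifft(-1j * np.sign(np.fft.fftfreq(len(x))) * X))

rng = np.random.default_rng(0)
x = rng.standard_normal(4096)

# Zero the DC and Nyquist bins, where the discrete multiplier loses
# energy (sgn(0) = 0; the Nyquist bin has no conjugate partner).
X = np.fft.fft(x)
X[0] = 0.0
X[len(x) // 2] = 0.0
x = np.real(np.fft.ifft(X))

print(np.linalg.norm(x), np.linalg.norm(H(x)))  # equal norms
```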

Self-Adjointness and Inversion

The Hilbert transform H, defined on the space L^2(\mathbb{R}), is an anti-self-adjoint operator, satisfying H^* = -H, where H^* denotes the adjoint with respect to the L^2 inner product. This property arises from its representation as a Fourier multiplier by -i \sgn(\xi): since the multiplier is purely imaginary, the adjoint corresponds to the complex-conjugate multiplier i \sgn(\xi), which is the multiplier of -H. A direct consequence of anti-self-adjointness together with the isometry property is that the inverse of the Hilbert transform is simply its negative, H^{-1} = -H. Applying H twice yields H^2 f = -f for f \in L^2(\mathbb{R}), establishing invertibility and providing a simple recovery formula f = -H(Hf). On L^2 this identity is exact; in wider distributional settings, inversion determines f only up to an additive constant, since H annihilates constants. The explicit inversion can also be expressed through a double principal value integral by substituting the definition of Hf: f(x) = -\frac{1}{\pi} \pv\int_{-\infty}^{\infty} \frac{Hf(t)}{x - t} \, dt = -\frac{1}{\pi^2} \pv\int_{-\infty}^{\infty} \frac{1}{x - t} \left( \pv\int_{-\infty}^{\infty} \frac{f(s)}{t - s} \, ds \right) dt, where the inner integral is the principal value form of the Hilbert transform. This double integral form underscores the singular integral nature of the inversion process. Furthermore, the Hilbert transform is tied to boundary values of analytic functions in the upper half-plane: for real-valued f \in L^2(\mathbb{R}), the combination f + i Hf is the boundary value of a function in the Hardy space H^2 of the upper half-plane, so that \frac{1}{2}(I + iH) acts (up to the zero-frequency component) as the projection onto boundary values of such analytic functions.
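A quick numerical check of H^2 = -\mathrm{Id} on a zero-mean signal (the grid and test signal are arbitrary illustrative choices; the DC component must be absent since H annihilates constants):

```python
import numpy as np

def H(x):
    X = np.fft.fft(x)
    return np.real(np.fft.ifft(-1j * np.sign(np.fft.fftfreq(len(x))) * X))

t = np.linspace(0, 2 * np.pi, 1024, endpoint=False)
f = np.sin(3 * t) + 0.4 * np.cos(7 * t)   # zero-mean test signal
print(np.max(np.abs(H(H(f)) + f)))        # H(Hf) = -f, so this is ~0
```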

Differentiation and Convolution Properties

The Hilbert transform exhibits linearity as a fundamental property, operating as a linear operator on appropriate function spaces. Specifically, for scalars \alpha, \beta \in \mathbb{R} and functions f, g in the Schwartz space \mathcal{S}(\mathbb{R}), the relation H(\alpha f + \beta g) = \alpha H f + \beta H g holds, where H denotes the Hilbert transform. This linearity follows directly from the integral definition of the transform and extends to broader spaces such as L^2(\mathbb{R}) under suitable conditions. A related dilation-invariance property governs the transform's behavior under time scaling. For a scalar \lambda > 0, the Hilbert transform satisfies H[f(\lambda \cdot)](t) = (H f)(\lambda t); combined with linearity, H[\lambda f(\lambda \cdot)](t) = \lambda (H f)(\lambda t), where the output is evaluated at the scaled time variable. This arises from the convolution form of the transform and the scaling properties of the Fourier transform, and it reflects the fact that the transform preserves the magnitude spectrum while introducing a uniform \pm \pi/2 phase adjustment across frequencies. When \lambda < 0, an additional sign flip occurs, H[f(\lambda \cdot)](t) = -(H f)(\lambda t), due to the odd kernel; the property is therefore typically stated for positive scaling to maintain orientation. The Hilbert transform commutes with differentiation for sufficiently smooth functions. In particular, if f \in L^2(\mathbb{R}) is differentiable with f' \in L^2(\mathbb{R}), then H[f'] = (H f)', where the equality holds in the L^2 sense. This commutation is established via the frequency-domain representation: the Hilbert transform multiplies the Fourier transform by -i \operatorname{sgn}(\omega) and differentiation multiplies it by i \omega, and the two multipliers commute, so the operators interchange under composition.
The property generalizes to higher-order derivatives: H[f^{(n)}] = (H f)^{(n)} for n \in \mathbb{N}, assuming the necessary smoothness and integrability conditions on f and its derivatives. Regarding convolution, the Hilbert transform distributes over the operation under decay conditions that ensure convergence. For functions f, g \in L^1(\mathbb{R}) \cap L^2(\mathbb{R}), H(f * g) = (H f) * g = f * (H g), where * denotes the convolution (f * g)(t) = \int_{-\infty}^{\infty} f(t - s) g(s) \, ds. This follows from the Fourier multiplier property: the transform of a convolution is the product of the transforms, and the multiplier -i \operatorname{sgn}(\omega) may be attached to either factor of that product. The equality holds symmetrically, reflecting the transform's role as a linear filter that interacts compatibly with convolutions in analytic signal constructions.
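Both commutation identities can be verified on a periodic grid, where differentiation and circular convolution are themselves Fourier multipliers; a sketch with illustrative test signals (the helpers `H`, `deriv`, and `conv` are ad hoc names):

```python
import numpy as np

n = 1024
t = np.linspace(0, 2 * np.pi, n, endpoint=False)

def H(x):
    X = np.fft.fft(x)
    return np.real(np.fft.ifft(-1j * np.sign(np.fft.fftfreq(n)) * X))

def deriv(x):
    k = np.fft.fftfreq(n) * n          # integer wavenumbers on [0, 2*pi)
    return np.real(np.fft.ifft(1j * k * np.fft.fft(x)))

def conv(x, y):                        # circular convolution via FFT
    return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(y))) * (2 * np.pi / n)

f = np.cos(3 * t)
g = np.sin(5 * t) + 0.2 * np.cos(2 * t)

e1 = np.max(np.abs(H(deriv(f)) - deriv(H(f))))      # H[f'] = (Hf)'
e2 = np.max(np.abs(H(conv(f, g)) - conv(H(f), g)))  # H(f*g) = (Hf)*g
print(e1, e2)   # both ~0: the multipliers commute
```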

Extensions and Generalizations

Hilbert Transform for Distributions

The Hilbert transform extends to the space of tempered distributions \mathcal{S}'(\mathbb{R}) through its representation as a Fourier multiplier. For a tempered distribution u \in \mathcal{S}'(\mathbb{R}), the Hilbert transform \tilde{u} is defined by the relation \hat{\tilde{u}}(\xi) = -i \sgn(\xi) \hat{u}(\xi), where \hat{u} is the Fourier transform of u and \sgn(\xi) is the sign function, taking values 1 for \xi > 0, -1 for \xi < 0, and 0 at \xi = 0. This multiplier -i \sgn(\xi) is a bounded, slowly growing function, ensuring the operation maps \mathcal{S}' continuously to itself. This distributional definition aligns seamlessly with the classical principal value integral on the Schwartz space \mathcal{S}(\mathbb{R}), as the Fourier multiplier formula reproduces the convolution with \pv \frac{1}{\pi t} for test functions, thereby providing a consistent extension. Illustrative examples highlight the behavior on basic distributions. The Hilbert transform of the Dirac delta distribution \delta yields \tilde{\delta} = \pv \frac{1}{\pi t}, where \pv denotes the Cauchy principal value, reflecting the inverse Fourier transform of -i \sgn(\xi). For derivatives, the Hilbert transform commutes with differentiation in the distributional sense, so \widetilde{\delta'} = \left( \pv \frac{1}{\pi t} \right)', the derivative being the Hadamard finite-part distribution -\operatorname{fp} \frac{1}{\pi t^2}, homogeneous of degree -2. Higher-order derivatives follow analogously, with \widetilde{\delta^{(n)}} = \left( \pv \frac{1}{\pi t} \right)^{(n)} for n \in \mathbb{N}, preserving the order of the singularity at the origin. In the distributional framework, the Hilbert transform preserves the location and order of the singularity but not the support: \delta, supported at the origin, is mapped to \pv \frac{1}{\pi t}, which is singular at t = 0 yet supported on all of \mathbb{R}, as expected of a nonlocal operator.
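The identity \tilde{\delta} = \pv \frac{1}{\pi t} can be made concrete by approximating \delta with unit-mass Gaussians. Using the closed form H[e^{-u^2}] = \frac{2}{\sqrt{\pi}} D(u) (Dawson's function) together with the dilation property, the transforms of the mollified deltas converge pointwise to 1/(\pi t); a sketch (the evaluation point t = 0.5 is arbitrary):

```python
import numpy as np
from scipy.special import dawsn

# The unit-mass Gaussian g_eps(u) = exp(-(u/eps)^2)/(eps*sqrt(pi))
# has Hilbert transform (2/(pi*eps)) * D(u/eps) by dilation invariance.
t = 0.5
for eps in [0.2, 0.05, 0.01]:
    h_geps = 2.0 / (np.pi * eps) * dawsn(t / eps)
    print(eps, h_geps, 1.0 / (np.pi * t))   # -> pv 1/(pi*t) as eps -> 0
```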

Hilbert Transform for Bounded Functions

The Hilbert transform for essentially bounded functions on the real line, f \in L^\infty(\mathbb{R}), addresses the challenge that the classical principal value integral may not converge pointwise for all such functions due to the lack of integrability at infinity. To define it rigorously, one approach uses the non-tangential maximal function associated with the harmonic extension of f to the upper half-plane, where the transform is recovered as the boundary value of the imaginary part of the corresponding analytic function; alternatively, for functions in BMO spaces, it is characterized via the principal value where the limit exists in the sense of distributions or maximal operators. Unlike the boundedness on L^p(\mathbb{R}) for 1 < p < \infty, the Hilbert transform H does not map L^\infty(\mathbb{R}) into itself, as simple counterexamples show. Instead, H extends to a bounded operator from L^\infty(\mathbb{R}) to the space BMO(\mathbb{R}) of functions of bounded mean oscillation, with \|Hf\|_{\mathrm{BMO}} \lesssim \|f\|_{L^\infty}. The BMO norm is defined as \|g\|_{\mathrm{BMO}} = \sup_I \frac{1}{|I|} \int_I \left| g(x) - \frac{1}{|I|} \int_I g(y)\, dy \right| dx, where the supremum is over all finite intervals I \subset \mathbb{R}, capturing functions with controlled local oscillations despite potential logarithmic growth. This mapping's sharpness is illustrated by the Hilbert transform of the characteristic function \chi_{[a,b]} of a finite interval [a, b], a prototypical L^\infty function. Explicitly, H(\chi_{[a,b]})(x) = \frac{1}{\pi} \log \left| \frac{x - a}{x - b} \right|, which diverges logarithmically as x approaches the endpoints a and b and thus lies outside L^\infty(\mathbb{R}), but belongs to BMO(\mathbb{R}) since its mean oscillations over intervals are uniformly bounded.
The Fefferman-Stein theorem establishes the profound connection between these spaces, proving that BMO(\mathbb{R}) is the dual of the real Hardy space H^1(\mathbb{R}), with every continuous linear functional on H^1(\mathbb{R}) given by \phi(f) = \int_\mathbb{R} f(x) g(x)\, dx for a unique g \in BMO(\mathbb{R}). The Hilbert transform is compatible with this duality: H maps H^1(\mathbb{R}) boundedly into L^1(\mathbb{R}) (indeed, H is bounded on H^1(\mathbb{R}) itself), linking the atomic decomposition and maximal-function characterization of H^1 functions to the mean-oscillation characterization of BMO.
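The closed form for H(\chi_{[a,b]}) is easy to confirm away from the interval, where the integral is not singular and ordinary quadrature applies; a sketch with an arbitrary interval [0, 1] and sample points outside it:

```python
import numpy as np
from scipy.integrate import quad

a, b = 0.0, 1.0   # arbitrary finite interval
closed_form = lambda x: np.log(abs((x - a) / (x - b))) / np.pi

results = []
for x in [2.0, 5.0, -3.0]:            # points outside [a, b]: no singularity
    num, _ = quad(lambda u: 1.0 / (x - u), a, b)
    results.append((num / np.pi, closed_form(x)))
    print(x, num / np.pi, closed_form(x))
```

Near the endpoints a and b the same formula exhibits the logarithmic blow-up described above.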

Hilbert Transform on the Unit Circle

The Hilbert transform on the unit circle, denoted \mathbb{T} = \{ e^{i\theta} : \theta \in [0, 2\pi) \}, adapts the operator to periodic functions f: \mathbb{T} \to \mathbb{C} with period 2\pi. For f(\theta) = \sum_{n \in \mathbb{Z}} a_n e^{i n \theta} in its Fourier series expansion, the Hilbert transform is defined by omitting the constant term and applying the multiplier -i \operatorname{sgn}(n) to the coefficients, yielding H[f](\theta) = \sum_{n \neq 0} (-i \operatorname{sgn}(n)) a_n e^{i n \theta}, where \operatorname{sgn}(n) = 1 if n > 0, -1 if n < 0, and the series converges in the appropriate sense for integrable f. This formulation arises as the boundary value of the conjugate harmonic function and can also be expressed via the principal value integral H[f](\theta) = \frac{1}{2\pi} \mathrm{PV} \int_0^{2\pi} f(t) \cot\left( \frac{\theta - t}{2} \right) dt [http://www.diva-portal.org/smash/get/diva2:623719/FULLTEXT01.pdf]. This operator relates closely to the conjugate Poisson integral, which extends harmonic functions from the boundary $\mathbb{T}$ into the unit disk $\mathbb{D} = \{ z : |z| < 1 \}$. For a real-valued $f \in L^1(\mathbb{T})$, the Poisson integral provides the harmonic extension $u_f(re^{i\theta}) = \int_0^{2\pi} P_r(\theta - t) f(t) \frac{dt}{2\pi}$, where $P_r(\phi) = \frac{1 - r^2}{1 - 2r \cos \phi + r^2}$ is the Poisson kernel. The conjugate Poisson integral then defines the harmonic conjugate $v_f(re^{i\theta}) = \int_0^{2\pi} Q_r(\theta - t) f(t) \frac{dt}{2\pi}$, with kernel $Q_r(\phi) = \frac{2r \sin \phi}{1 - 2r \cos \phi + r^2}$, such that $u_f + i v_f$ is analytic in $\mathbb{D}$ and $v_f(e^{i\theta}) = H[f](\theta)$ almost everywhere as $r \to 1^-$ [https://open.library.ubc.ca/media/stream/pdf/24/1.0400091/4] [https://gauss.math.yale.edu/~ws442/harmonicnotes_old.pdf]. These extensions preserve harmonicity and enable the study of boundary behavior for functions analytic inside the disk.
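The conjugate Poisson construction can be checked numerically for the simplest boundary datum f(\theta) = \cos\theta, whose conjugate function is \sin\theta (the harmonic extension of \cos\theta is r\cos\theta, with conjugate r\sin\theta). A sketch using trapezoidal quadrature on a uniform periodic grid, which is spectrally accurate for r < 1; the helper name `conj_poisson` is ad hoc:

```python
import numpy as np

def conj_poisson(f_vals, thetas, r, theta):
    # v_f(r e^{i theta}) = (1/2pi) int Q_r(theta - t) f(t) dt,
    # approximated by the periodic trapezoid rule (mean over the grid).
    phi = theta - thetas
    Q = 2 * r * np.sin(phi) / (1 - 2 * r * np.cos(phi) + r**2)
    return np.mean(Q * f_vals)

thetas = np.linspace(0, 2 * np.pi, 4096, endpoint=False)
f_vals = np.cos(thetas)   # boundary data; conjugate function is sin(theta)

for r in [0.5, 0.9, 0.99]:
    v = conj_poisson(f_vals, thetas, r, 1.0)
    print(r, v, r * np.sin(1.0))   # matches the harmonic conjugate r*sin(theta)
```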
The Hilbert transform on $\mathbb{T}$ inherits key properties from its real-line counterpart, notably boundedness on Lebesgue spaces. It defines a bounded linear operator on $L^p(\mathbb{T})$ for $1 < p < \infty$, with $\|Hf\|_{L^p} \leq C_p \|f\|_{L^p}$ where $C_p$ is a constant depending only on $p$, and it is an isometry (up to sign) on the mean-zero subspace of $L^2(\mathbb{T})$ since the multiplier has modulus 1 for $n \neq 0$ [http://www.diva-portal.org/smash/get/diva2:623719/FULLTEXT01.pdf] [https://gauss.math.yale.edu/~ws442/harmonicnotes_old.pdf] [https://open.library.ubc.ca/media/stream/pdf/24/1.0400091/4]. Unlike on the line, it fails to be bounded on $L^1(\mathbb{T})$ or $L^\infty(\mathbb{T})$, but weak-type estimates hold, such as $\mu\{ \theta : |H[f](\theta)| > \lambda \} \leq C \lambda^{-1} \|f\|_{L^1}$ for integrable $f$ [https://gauss.math.yale.edu/~ws442/harmonicnotes_old.pdf]. In applications to Fourier series, the Hilbert transform facilitates analytic continuation inside the unit disk by separating positive and negative frequency components. For a function analytic in $\mathbb{D}$ with boundary values $F(e^{i\theta}) = \sum_{n=0}^\infty a_n e^{i n \theta}$, the real part $u(\theta) = \sum_{n=0}^\infty a_n \cos(n\theta)$ (assuming real coefficients) satisfies $H[u](\theta) = \sum_{n=1}^\infty a_n \sin(n\theta)$, so $u + i H[u]$ recovers the boundary values of the analytic function almost everywhere, up to the constant term [https://open.library.ubc.ca/media/stream/pdf/24/1.0400091/4]. This decomposition aids in proving $L^p$-convergence of Fourier series for $1 < p < \infty$ by leveraging the analyticity and maximal function estimates in the disk [http://www.diva-portal.org/smash/get/diva2:623719/FULLTEXT01.pdf].

Analytic Extensions and Conjugate Functions

Titchmarsh's Convolution Theorem

Titchmarsh's convolution theorem provides a key characterization of conjugate functions in the context of the Hilbert transform.
Specifically, if $f \in L^p(\mathbb{R})$ for $1 < p < \infty$ and $g = Hf$ is its Hilbert transform, also in $L^p(\mathbb{R})$, then the convolution $f * g^\vee = 0$ almost everywhere if and only if the Fourier transform $\hat{f}(\xi) = 0$ for all $\xi < 0$ (up to sets of measure zero), where $g^\vee(t) = g(-t)$. This property arises from the frequency-domain representation, with $\hat{g}(\xi) = -i \sgn(\xi) \hat{f}(\xi)$, and implies that the complex function $f + i g$ is the boundary value of an analytic function in the upper half-plane [https://www.cambridge.org/core/books/hilbert-transforms/9FEE707FB7587A9A1A90C0038FB77E09]. To sketch the proof, consider the Fourier transforms. The Hilbert transform corresponds to multiplication by $-i \sgn(\xi)$ in the frequency domain, so $\hat g(\xi) = -i \sgn(\xi) \hat f(\xi)$. The reflected function $g^\vee$ has Fourier transform $\hat g^\vee(\xi) = \hat g(-\xi) = -i \sgn(-\xi) \hat f(-\xi) = i \sgn(\xi) \hat f(-\xi)$. The Fourier transform of the convolution $f * g^\vee$ is then $\hat f(\xi) \hat g^\vee(\xi) = \hat f(\xi) \cdot i \sgn(\xi) \hat f(-\xi)$. For the convolution to vanish, the product must be zero almost everywhere, which holds if $\hat f(\xi) = 0$ for $\xi < 0$ (up to sets of measure zero). This one-sided support ensures the inverse Fourier transform of $f + i g$ extends analytically to the upper half-plane by the Paley-Wiener theorem for $L^p$ functions [https://see.stanford.edu/materials/lsoftaee261/book-fall-07.pdf]. A direct corollary is the uniqueness of harmonic conjugates under this condition: given $f \in L^p(\mathbb{R})$ with $\hat f(\xi) = 0$ for $\xi < 0$, there is a unique $g = Hf \in L^p(\mathbb{R})$ (up to additive constants in some contexts) such that $f * g^\vee = 0$ almost everywhere, corresponding to the unique analytic extension in the upper half-plane.
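The one-sided spectral support at the heart of the theorem is visible numerically: the analytic signal f + iHf has (discrete) Fourier coefficients that vanish on the strictly negative-frequency bins. A sketch assuming SciPy's `hilbert` routine and an arbitrary two-tone test signal:

```python
import numpy as np
from scipy.signal import hilbert

t = np.linspace(0, 1, 1024, endpoint=False)
f = np.cos(2 * np.pi * 60 * t) + 0.3 * np.sin(2 * np.pi * 100 * t)

z = hilbert(f)                    # analytic signal f + i*H[f]
Z = np.fft.fft(z)
neg = np.max(np.abs(Z[513:]))     # strictly negative-frequency bins
pos = np.max(np.abs(Z[:512]))     # positive-frequency bins
print(neg, pos)                   # neg is ~0; pos carries all the energy
```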
Another important corollary is a Paley-Wiener-type result for bandlimited signals: if $f$ is bandlimited to $[0, B]$, then $g = H[f]$ satisfies the convolution property, and $f + i g$ is analytic in the upper half-plane with growth controlled by the bandwidth $B$. These results underscore the deep connection between the Hilbert transform and analytic function theory ([math.purdue.edu](https://www.math.purdue.edu/~eremenko/dvi/novik1011.pdf)). The theorem was first published by Edward Charles Titchmarsh in 1926, marking an early contribution to the theory of conjugate functions and their Fourier representations ([link.springer.com](https://link.springer.com/content/pdf/10.1007/BF01283842.pdf)).

Riemann-Hilbert Boundary Value Problems

The Riemann-Hilbert boundary value problem, in its scalar form for the upper half-plane, requires finding a function $\phi(z)$ analytic in $\operatorname{Im} z > 0$ and continuous up to the real boundary such that $\phi(t) + \overline{\phi(t)} = f(t)$ for $t \in \mathbb{R}$, where $f$ is a prescribed real-valued Hölder-continuous function satisfying suitable decay conditions at infinity. This boundary condition amounts to $2 \operatorname{Re} \phi(t) = f(t)$, linking the real part of the boundary values directly to the given data. The solution is constructed via the Cauchy integral formula:
\[
\phi(z) = \frac{1}{2\pi i} \int_{-\infty}^{\infty} \frac{f(t)}{t - z} \, dt + i c,
\]
where $c \in \mathbb{R}$ is a constant chosen to ensure appropriate behavior at infinity (often $c = 0$ for bounded solutions). The imaginary part of the boundary values, which serves as the harmonic conjugate, is then given by the Hilbert transform of $f/2$:
\[
\operatorname{Im} \phi(t) = \frac{1}{\pi} \, \mathrm{P.V.} \int_{-\infty}^{\infty} \frac{f(s)/2}{t - s} \, ds = H[f/2](t).
\]
This representation follows from the Plemelj formulas applied to the Cauchy integral, which decompose the boundary limits into principal-value integrals involving the Hilbert kernel.
A generalization of this problem incorporates a given complex coefficient $G(t)$ on the boundary, seeking $\phi(z)$ analytic in the upper half-plane such that $\phi(t) G(t) + \overline{\phi(t)} \, \overline{G(t)} = f(t)$ for $t \in \mathbb{R}$, where $f$ is real-valued and $G$ is typically normalized with $|G(t)| = 1$ to ensure solvability under index conditions. This form arises when the boundary condition mixes the function and its conjugate with a phase factor, and it reduces to a singular integral equation of the form $a(t) \phi(t) + b(t) H[\phi](t) = g(t)$, in which the Hilbert transform appears as the associated operator. The solution proceeds by factorizing the coefficient (in the scalar case, via canonical functions) and expressing $\phi(z)$ as a Cauchy integral over the adjusted boundary data, with the conjugate component recovered through application of the Hilbert transform to the real part. For the index-zero case, the solution is unique up to a multiplicative constant, and existence holds for Hölder-continuous data with compact support.

These boundary value problems, leveraging the Hilbert transform for conjugate recovery, find applications in modeling mixed boundary conditions in fluid dynamics, such as free-surface flows around dissolving or eroding bodies, where the analytic function represents the complex velocity potential. In fracture mechanics, they facilitate the analysis of crack propagation under dynamic loads by reducing elasticity problems to vector Riemann-Hilbert formulations on the crack edges.
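The conjugate recovery described above can be checked numerically. The sketch below, a minimal illustration assuming the classical transform pair H[1/(1+t²)] = t/(1+t²) and the Fourier multiplier −i·sgn(ξ), approximates the line transform by an FFT over a wide window (all grid parameters here are ad hoc):

```python
import numpy as np

# Approximate the Hilbert transform of f(t) = 1/(1+t^2) by applying the
# multiplier -i*sgn(xi) on a wide periodic grid; the periodization error
# shrinks as the window grows.
N = 1 << 14
T = 400.0                                # total window length (illustrative)
t = (np.arange(N) - N // 2) * (T / N)
f = 1.0 / (1.0 + t**2)

F = np.fft.fft(f)
xi = np.fft.fftfreq(N)                   # only the sign of xi matters
Hf = np.fft.ifft(-1j * np.sign(xi) * F).real

# Classical pair: H[1/(1+t^2)] = t/(1+t^2); compare away from the edges.
expected = t / (1.0 + t**2)
center = np.abs(t) < 10
err = np.max(np.abs(Hf[center] - expected[center]))
```

Since 1/(1+t²) is the real part of the boundary values of 1/(1−iz), analytic in the upper half-plane, the recovered imaginary part t/(1+t²) is exactly the harmonic conjugate predicted by the Plemelj formulas.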

Applications in Signal Processing

Analytic Signal Representation

The analytic signal of a real-valued signal f(t) is a complex-valued function defined as z(t) = f(t) + i \hat{f}(t), where \hat{f}(t) denotes the Hilbert transform of f(t). This representation, introduced by Dennis Gabor in 1946, constructs a signal whose Fourier transform contains only the non-negative frequency components of the original signal, effectively projecting f(t) onto the subspace of positive frequencies by zeroing out the negative-frequency contributions. In the frequency domain, the Fourier transform Z(\omega) of the analytic signal satisfies Z(\omega) = 2 F(\omega) for \omega > 0, Z(0) = F(0), and Z(\omega) = 0 for \omega < 0, where F(\omega) is the Fourier transform of f(t). This doubling of the positive-frequency components ensures that the real part of z(t) exactly recovers the original signal f(t), while the imaginary part \hat{f}(t) captures the quadrature component, shifted by -\pi/2 radians for positive frequencies.

The instantaneous amplitude and phase of the signal are derived from the analytic signal as A(t) = |z(t)| = \sqrt{f^2(t) + \hat{f}^2(t)} and \phi(t) = \arg(z(t)) = \operatorname{atan2}(\hat{f}(t), f(t)), respectively, providing a time-localized description of the signal's envelope and oscillation. These quantities enable the representation of f(t) in polar form as f(t) = A(t) \cos(\phi(t)), with the Hilbert transform ensuring the uniqueness of this decomposition for bandlimited signals.

Bedrosian's product theorem complements this framework by specifying that, for signals f(t) and g(t) whose spectra are separated (such that the support of F(\omega) lies entirely below the support of G(\omega)), the Hilbert transform of their product simplifies to \widehat{f g}(t) = f(t) \hat{g}(t). This identity, established in 1963, supports the extraction of envelopes in analytic signal representations without cross-term interference.
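The envelope and phase extraction above can be sketched with `scipy.signal.hilbert`, which returns the analytic signal z(t) directly. The AM test signal, sample rate, and tolerances below are illustrative choices, not from the source:

```python
import numpy as np
from scipy.signal import hilbert

# Envelope detection via the analytic signal: an amplitude-modulated tone
# whose slow envelope and fast carrier are well separated in frequency,
# so Bedrosian's condition holds and the envelope is recovered cleanly.
fs = 1000.0                                       # sample rate, Hz
t = np.arange(0, 1.0, 1.0 / fs)
A_true = 1.0 + 0.5 * np.cos(2 * np.pi * 3 * t)    # 3 Hz envelope
f_sig = A_true * np.cos(2 * np.pi * 100 * t)      # 100 Hz carrier

z = hilbert(f_sig)                 # z(t) = f(t) + i*H[f](t)
A_est = np.abs(z)                  # instantaneous amplitude |z(t)|
phase = np.unwrap(np.angle(z))     # instantaneous phase arg z(t)

env_err = np.max(np.abs(A_est - A_true))
```

The real part of `z` reproduces `f_sig` exactly, and because the chosen tones fit whole periods into the window, the estimated envelope matches `A_true` to floating-point accuracy.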
In engineering contexts, such as communications and audio processing, the analytic signal adopts a normalization where the Hilbert transform acts as an ideal quadrature filter with unit magnitude gain across all frequencies, resulting in the factor of 2 for positive frequencies to preserve the original signal's power in the real part. This convention facilitates practical implementations, including finite impulse response filters approximating the Hilbert transform for discrete signals.

Bedrosian's Theorem and Modulation

Bedrosian's theorem, established in 1963, provides a key identity for the Hilbert transform of a product of two functions under specific spectral conditions. Specifically, if a function f(t) has its Fourier spectrum confined to the interval [-\omega_0, \omega_0] and g(t) has its spectrum confined to |\omega| \geq \omega_0, then the Hilbert transform satisfies \mathcal{H}[f(t) g(t)] = f(t) \mathcal{H}[g(t)]. This result holds for finite-energy signals and relies on the non-overlapping nature of the spectra, ensuring that the convolution integral defining the Hilbert transform separates appropriately in the frequency domain.

In the context of angle modulation, Bedrosian's theorem facilitates the extraction of instantaneous amplitude and phase from modulated signals. Consider a signal s(t) = a(t) \cos(\phi(t)), where a(t) is a low-frequency envelope with spectrum in [-\omega_0, \omega_0] and \cos(\phi(t)) acts as a high-frequency carrier with spectrum outside this band, assuming \phi'(t) \geq \omega_0. Applying the theorem yields \mathcal{H}[s(t)] = a(t) \sin(\phi(t)), so the analytic signal is z(t) = a(t) e^{j \phi(t)}. The instantaneous frequency is then \omega_i(t) = \frac{d \phi(t)}{dt}, providing a direct measure of the signal's local frequency variation.

This framework applies directly to frequency modulation (FM) and phase modulation (PM). In FM, the phase is \phi(t) = \omega_c t + k_f \int_{-\infty}^t m(\tau) \, d\tau, where m(t) is the modulating signal and k_f is the frequency deviation constant, leading to an instantaneous frequency \omega_i(t) = \omega_c + k_f m(t). For PM, the phase is \phi(t) = \omega_c t + k_p m(t), with k_p the phase deviation constant, yielding \omega_i(t) = \omega_c + k_p m'(t). In both cases, the Hilbert transform, via Bedrosian's identity, enables recovery of these parameters from the analytic representation, provided the modulating signal's bandwidth does not overlap with the carrier's spectrum.
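The instantaneous-frequency recovery described above can be sketched for a single FM tone. The carrier, modulation rate, and modulation index below are ad hoc choices; the expected instantaneous frequency f_c + β f_m cos(2π f_m t) follows from differentiating the chosen phase:

```python
import numpy as np
from scipy.signal import hilbert

# FM demodulation via the analytic signal: differentiate the unwrapped
# phase of z(t) = a(t) exp(j*phi(t)) to estimate instantaneous frequency.
fs = 8000.0                        # sample rate, Hz
t = np.arange(0, 1.0, 1.0 / fs)
f_c, f_m, beta = 500.0, 10.0, 5.0  # carrier, modulation rate, index
s = np.cos(2 * np.pi * f_c * t + beta * np.sin(2 * np.pi * f_m * t))

z = hilbert(s)
phase = np.unwrap(np.angle(z))
f_inst = np.gradient(phase, t) / (2 * np.pi)     # (d phi/dt) / (2*pi), Hz

# For this phase, the true instantaneous frequency is known in closed form.
f_true = f_c + beta * f_m * np.cos(2 * np.pi * f_m * t)
mid = slice(200, -200)             # ignore edge samples
max_dev = np.max(np.abs(f_inst[mid] - f_true[mid]))
```

The spectral separation here (10 Hz modulation against a 500 Hz carrier) keeps Bedrosian's condition comfortably satisfied, so the phase-derivative estimate tracks the true frequency closely.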
The theorem's utility in modulation analysis is limited by its spectral separation assumption; if the envelope and carrier spectra overlap, the identity fails, potentially distorting the estimated instantaneous frequency and requiring alternative methods for broadband signals.

Single Sideband Modulation and Causality

Single sideband (SSB) modulation is a technique in communication systems that transmits only one sideband of the modulated signal, reducing bandwidth compared to double sideband (DSB) modulation while preserving information content. The Hilbert transform plays a central role in generating SSB signals by providing the necessary 90-degree phase shift for the quadrature component. Specifically, for a baseband message signal m(t) bandlimited to frequencies below the carrier f_c, the lower-sideband SSB signal is s_{\text{LSB}}(t) = m(t) \cos(2\pi f_c t) + \hat{m}(t) \sin(2\pi f_c t), where \hat{m}(t) denotes the Hilbert transform of m(t). Similarly, the upper-sideband signal is s_{\text{USB}}(t) = m(t) \cos(2\pi f_c t) - \hat{m}(t) \sin(2\pi f_c t). This formulation arises from the analytic signal representation: the complex envelope m(t) + j \hat{m}(t) is modulated onto the carrier, and taking the real part yields the SSB output. The Hilbert transform acts as an ideal quadrature filter, imparting a -90-degree phase shift to positive frequencies and +90 degrees to negative frequencies, enabling precise sideband selection without additional filtering.

Bedrosian's theorem extends the applicability of the Hilbert transform to SSB modulation by addressing the product of signals with non-overlapping spectra. The theorem states that if f(t) has a spectrum confined to |f| < a and g(t) to |f| > a, then the Hilbert transform of their product satisfies \mathcal{H}\{f(t) g(t)\} = f(t) \mathcal{H}\{g(t)\}. In SSB, the message m(t) (low-pass) and carrier \cos(2\pi f_c t) (high-pass, with f_c much larger than the message bandwidth) satisfy this condition, giving \mathcal{H}\{m(t) \cos(2\pi f_c t)\} = m(t) \sin(2\pi f_c t) and \mathcal{H}\{m(t) \sin(2\pi f_c t)\} = -m(t) \cos(2\pi f_c t). This spectral separation justifies the SSB form as the real part of the analytic signal modulated by the carrier, ensuring the spectrum occupies only one sideband.
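The one-sided spectrum claimed above can be verified directly. The sketch below builds the upper-sideband signal from a two-tone message (all frequencies and amplitudes are illustrative) and checks that essentially no energy remains below the carrier:

```python
import numpy as np
from scipy.signal import hilbert

# Hilbert-based upper-sideband SSB: s_USB = m*cos(w_c t) - m_hat*sin(w_c t).
fs = 8000.0
t = np.arange(0, 1.0, 1.0 / fs)
f_c = 1000.0                                     # carrier, Hz
m = np.cos(2 * np.pi * 50 * t) + 0.5 * np.cos(2 * np.pi * 120 * t)
m_hat = np.imag(hilbert(m))                      # Hilbert transform of message

s_usb = m * np.cos(2 * np.pi * f_c * t) - m_hat * np.sin(2 * np.pi * f_c * t)

# All spectral energy should sit above the carrier (at 1050 Hz and 1120 Hz).
S = np.abs(np.fft.rfft(s_usb))
freqs = np.fft.rfftfreq(len(s_usb), 1.0 / fs)
upper = S[freqs > f_c].sum()                     # wanted sideband
lower = S[(freqs > 0) & (freqs < f_c)].sum()     # residual lower sideband
ratio = lower / upper
```

Because the message tones fit whole periods into the analysis window, the lower sideband cancels to floating-point accuracy; with real-world finite-length filters the suppression is merely large rather than exact.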
The theorem, originally derived for such products, underpins the efficiency of Hilbert-based SSB in separating carrier and message components without distortion.

A key challenge in implementing the Hilbert transform for SSB is its non-causal nature: the ideal transform has impulse response h(t) = \frac{1}{\pi t}, which extends infinitely in both time directions and cannot be realized in real-time systems. Because the impulse response is nonzero for t < 0, exact computation requires future signal values. In practice, the ideal Hilbert filter is approximated using finite impulse response (FIR) or infinite impulse response (IIR) digital filters. FIR approximations, often designed by windowing the ideal impulse response or by equiripple methods (e.g., Parks-McClellan), introduce a linear-phase delay to enforce causality, typically half the filter length, which must be compensated in the in-phase path for coherent demodulation. IIR approximations, such as allpass structures, provide sharper transitions with lower order but may introduce nonlinear phase, making them suitable for applications that tolerate minor distortions. These approximations enable real-time SSB generation in digital signal processors while maintaining bandwidth efficiency.
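The windowed-FIR approach with delay compensation can be sketched as follows. The half-length M, Hamming window, and test frequency are illustrative choices; the ideal coefficients come from the discrete impulse response h[n] = (1 − (−1)^n)/(πn):

```python
import numpy as np

# Causal FIR Hilbert transformer: truncate the ideal response to |n| <= M,
# apply a Hamming window, and accept a group delay of M samples.
M = 40
n = np.arange(-M, M + 1)
h = np.zeros(2 * M + 1)
odd = n % 2 != 0
h[odd] = 2.0 / (np.pi * n[odd])          # ideal response: 2/(pi*n) at odd n, else 0
h *= np.hamming(2 * M + 1)               # taper to control sidelobes

# Filter a test tone; the output should be the -90-degree-shifted tone,
# delayed by M samples (the delay the in-phase path must match).
w0 = 0.3 * np.pi
k = np.arange(1000)
x = np.cos(w0 * k)
y = np.convolve(x, h)[:len(k)]           # causal filtering with delay M

expected = np.sin(w0 * (k - M))          # quadrature output, delay-compensated
steady = slice(2 * M, 800)               # skip the filter's start-up transient
err = np.max(np.abs(y[steady] - expected[steady]))
```

The windowed coefficients stay antisymmetric (a Type III linear-phase design), which is what guarantees the exact 90-degree phase shift; the window only perturbs the magnitude response near DC and Nyquist.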

Discrete and Number-Theoretic Variants

Discrete Hilbert Transform

The discrete Hilbert transform extends the continuous Hilbert transform to sequences of data, particularly in digital signal processing where signals are represented as discrete-time samples. For an infinite sequence f = \{f_n\}_{n \in \mathbb{Z}}, the transform is defined as the principal value sum H[f](n) = \frac{1}{\pi} \sum_{k \neq n} \frac{f_k}{n - k}, where the sum is taken in the Cauchy principal value sense to handle the singularity at k = n. This definition ensures the transform maps sequences in \ell^p spaces (for p > 1) to themselves, preserving norms up to a constant factor, though it may map \ell^1 sequences outside \ell^1.

In practice, finite-length signals require approximations of this transform. One common method uses the discrete Fourier transform (DFT): compute the DFT of the input sequence, multiply by -i \operatorname{sgn}(k) (where \operatorname{sgn}(k) is the sign function adjusted for the frequency bins, typically -i for positive frequencies and i for negative frequencies, with special handling at DC and Nyquist), and then apply the inverse DFT. This approach leverages the frequency-domain representation of the Hilbert transform, which imparts a \mp 90^\circ phase shift to positive and negative frequency components, respectively, analogous to the continuous case. However, for finite N-point DFTs, this introduces periodic extension artifacts, necessitating zero-padding to mitigate spectral leakage.

For real-time implementation, finite impulse response (FIR) Hilbert transformers are preferred due to their stability and linear-phase properties. These are designed by truncating and windowing the ideal infinite impulse response h_n = \frac{1 - (-1)^n}{\pi n} for n \neq 0 (with h_0 = 0), which decays slowly and requires window functions such as Hamming or Kaiser to control sidelobes and transition bandwidth.
The resulting FIR filter has an antisymmetric impulse response, enabling efficient odd-length (Type III) or even-length (Type IV) designs, with the filter length trading off approximation accuracy against computational cost. Truncation error shrinks as the window length increases, though the slowly decaying ideal response can never be reproduced exactly for band-unlimited signals.

In digital signal processing, the discrete Hilbert transform is widely applied for real-time 90-degree phase shifting, essential in generating analytic signals and single-sideband modulation without introducing unwanted sidebands. For instance, it facilitates envelope detection and instantaneous frequency estimation in communications systems, but distortion from finite approximations can corrupt low-frequency components, since every FIR Hilbert transformer has a transition band near DC; bandpass preprocessing of narrowband signals helps maintain accuracy.
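The DFT-based method described above is short enough to state in full. The sketch below implements the −i·sgn(k) multiplier with the DC and Nyquist bins zeroed, and checks it against `scipy.signal.hilbert`, whose analytic-signal construction uses the same spectral weighting:

```python
import numpy as np
from scipy.signal import hilbert

def dht(x):
    """DFT-based discrete Hilbert transform of a real sequence:
    multiply the spectrum by -i*sgn(k), zeroing DC and (for even N) Nyquist."""
    N = len(x)
    X = np.fft.fft(x)
    mult = np.zeros(N, dtype=complex)
    mult[1:(N + 1) // 2] = -1j           # positive-frequency bins
    mult[N // 2 + 1:] = 1j               # negative-frequency bins
    return np.fft.ifft(mult * X).real    # DC/Nyquist bins stay zero

x = np.random.default_rng(0).standard_normal(256)
xh = dht(x)

# imag(hilbert(x)) applies exactly the same construction.
diff = np.max(np.abs(xh - np.imag(hilbert(x))))
```

Note that this computes the circular (periodic) Hilbert transform of the finite block; for signals that are not periodic in the block length, the periodic-extension artifacts mentioned above appear at the ends.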

Hilbert Transform on Arithmetic Functions

The Hilbert transform can be applied to arithmetic functions supported on the integers, capturing singular integral behavior analogous to the classical case but tailored to discrete structures in analytic number theory. For an arithmetic function \alpha: \mathbb{Z} \to \mathbb{C} with finite support or suitable decay, it is given by the principal value H[\alpha](n) = \frac{1}{\pi} \, \mathrm{P.V.} \sum_{m \in \mathbb{Z} \setminus \{n\}} \frac{\alpha(m)}{n - m}, where the principal value excludes the singular term at m = n and symmetrizes the sum over positive and negative indices for convergence. In practice, truncated versions H_N[\alpha](n) = \sum_{0 < |m - n| \le N} \frac{\alpha(m)}{n - m} are studied for asymptotic estimates as N \to \infty. This operator arises in the analysis of convolution-type sums with the kernel 1/k, and its boundedness on \ell^p(\mathbb{Z}) for 1 < p < \infty follows from Calderón-Zygmund theory adapted to discrete groups. Alternatively, via Dirichlet series, if D_\alpha(s) = \sum_{n=1}^\infty \alpha(n) n^{-s}, the transform relates to a multiplier operator on the series, approximately -i \operatorname{sgn}(\Im s) in the critical strip, facilitating analytic continuation and growth estimates.

A fundamental connection to the Riemann zeta function and the prime number theorem emerges when \alpha = \Lambda, the von Mangoldt function, for which \sum_{n \le x} \Lambda(n) \sim x by the PNT. The transform H[\Lambda](n) encodes information about the oscillation of \log \zeta(1 + it), as the partial sums of H[\Lambda] link to the argument change of \zeta(s) along vertical lines, with bounds depending on zero-free regions near \Re s = 1. Hardy and Littlewood's foundational work on Fourier methods for Dirichlet series provided key estimates for such oscillatory sums, showing that the mean square of the conjugate function (Hilbert transform) over intervals controls error terms in the PNT, improving explicit constants in de la Vallée Poussin's zero-free region.
Their approach, detailed in early contributions to multiplicative number theory, underscores how the operator's \ell^2-boundedness implies asymptotic formulas for prime-counting functions.

In modern applications, this operator features prominently in additive combinatorics for bounding exponential sums like \sum_{n \le X} \Lambda(n) e(\alpha n), where boundedness on sparse sets (e.g., the primes) yields progress on prime gaps and Waring's problem via the Hardy-Littlewood circle method. For instance, variational inequalities for truncated variants over thin subsets of the primes ensure pointwise convergence of ergodic averages weighted by \Lambda, with \ell^p-norms for p > 1 providing sharp decay rates O((\log X)^\epsilon) for \epsilon > 0. These results, building on Lacey-Thiele theory for bilinear forms, bear on work toward bounded gaps between primes (e.g., Maynard's 2015 improvements), and their asymptotic focus distinguishes them from the finite-length approximations used in signal processing.
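The truncated sums H_N[\Lambda](n) studied above are easy to compute for small ranges. The following sketch is illustrative only: the bound X, truncation N, and the helper `mangoldt` are ad hoc names for this example, and the values carry no asymptotic content at this scale:

```python
import numpy as np

def mangoldt(X):
    """von Mangoldt function Lambda(1..X): Lambda(p^k) = log p, else 0,
    computed with a simple sieve over prime powers."""
    lam = np.zeros(X + 1)
    sieve = np.ones(X + 1, dtype=bool)
    sieve[:2] = False
    for p in range(2, X + 1):
        if sieve[p]:
            sieve[2 * p::p] = False      # mark composites
            q = p
            while q <= X:                # set Lambda at each prime power p^k
                lam[q] = np.log(p)
                q *= p
    return lam

X, N = 200, 50
lam = mangoldt(X)

def H_trunc(n):
    """Truncated sum (1/pi) * sum_{0 < |m-n| <= N} Lambda(m)/(n-m),
    clipped to the range [1, X] where Lambda was computed."""
    m = np.arange(max(1, n - N), min(X, n + N) + 1)
    m = m[m != n]
    return np.sum(lam[m] / (n - m)) / np.pi

vals = np.array([H_trunc(n) for n in range(60, 141)])
```

Near-cancellation between the contributions of prime powers below and above n is visible even at this toy scale; the analytic results cited above quantify that cancellation as N and X grow.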

References

  1. [PDF] The Hilbert Transform - University of Toronto
  2. [PDF] The Hilbert Transform
  3. [PDF] A field guide for Hilbert transforms - UBC Library Open Collections
  4. [PDF] The Hilbert transform (Feb 14, 2017)
  5. Hilbert Transform
  6. Analytic Signal and Hilbert Transform - MATLAB & Simulink
  7. [PDF] Hilbert Transform and Applications - IntechOpen (Apr 25, 2012)
  8. [PDF] Lecture Notes 4 for 247A: The Hilbert transform
  9. [PDF] Numerical Transforms - Chester F. Carlson Center for Imaging Science
  10. Steerable Wavelet Frames Based on the Riesz Transform
  11. [PDF] The Hilbert transform (Jul 29, 2020)
  12. Grundzüge einer allgemeinen Theorie der linearen Integralgleichungen - David Hilbert (1912)
  13. Introduction - Cambridge University Press
  14. Mathematical Problems by David Hilbert - Clark University
  15. Fourier transform of the principal value distribution - Mathematics Stack Exchange (Jun 8, 2015)
  16. [PDF] A sparse spectral method for fractional differential equations - arXiv (Oct 15, 2022)
  17. [PDF] Hilbert transforms and the Cauchy integral in Euclidean space
  18. [PDF] arXiv:2404.02609v1 [math.CA] (Apr 3, 2024)
  20. Summary of Hilbert Transform for Tempered Distributions
  21. [PDF] The Hilbert Transform - DiVA portal
  22. [PDF] Fourier Series: Convergence and Summability - Yale Math
  24. [PDF] EE 261 - The Fourier Transform and its Applications
  25. [PDF] Oscillation of Fourier Integrals with a spectral gap - Purdue Math (May 30, 2003)
  26. Reciprocal formulae involving series and integrals - E. C. Titchmarsh
  27. [PDF] Contents 1. Review: Complex numbers and functions of a complex ...
  28. Boundary value problems - F. D. Gakhov (1966)
  29. [PDF] Riemann-Hilbert Formalism in the Study of Crack Propagation
  30. [PDF] Basics of Analytic Signals - UC Davis Mathematics (May 17, 2023)
  31. Analytic Signals and Hilbert Transform Filters - Stanford CCRMA
  32. A product theorem for Hilbert transforms - Proceedings of the IEEE, vol. 51, no. 5 (May 1963)
  33. [PDF] The Hilbert Transform - University of Toronto
  34. [PDF] Chapter 7: Single-Sideband Modulation (SSB) and Frequency ...
  35. [PDF] A Product Theorem for Hilbert Transforms - E. Bedrosian, RAND
  36. [PDF] Hilbert transform: Mathematical theory and applications to signal processing (Nov 19, 2015)
  37. [PDF] Digital FIR Hilbert Transformers: Fundamentals and Efficient Design ...
  38. Single Sideband Modulation via the Hilbert Transform - MathWorks