
Hartley transform

The Hartley transform is an integral transform closely related to the Fourier transform, which maps real-valued input functions to real-valued output functions using a real kernel known as the cas function, defined as \cas(\theta) = \cos(\theta) + \sin(\theta). It was originally proposed by electrical engineer Ralph V. L. Hartley in 1942 as a more symmetrical alternative to traditional Fourier analysis, specifically designed to address transmission problems in linear systems by treating the time and frequency domains equivalently and avoiding complex numbers. The mathematical formulation of the continuous Hartley transform for a function f(t) is H(\omega) = \int_{-\infty}^{\infty} f(t) \cas(\omega t) \, dt, where the transform is self-reciprocal—meaning the inverse operation uses the identical kernel to recover f(t) from H(\omega). This self-inverse property, combined with its entirely real arithmetic, distinguishes it from the complex-valued Fourier transform and offers computational efficiencies, such as reduced storage and faster processing for real signals in applications like convolution and filtering. The transform decomposes a signal into symmetric sinusoidal components, enabling straightforward analysis of even and odd parts without phase complications.

The Hartley transform relates directly to the Fourier transform through linear combinations: the real part of the Fourier transform equals half the sum of the Hartley transform and its negative-frequency counterpart, while the imaginary part equals half the difference of the negative- and positive-frequency values, allowing seamless conversion between the two for many practical purposes. Although Hartley's original work laid the theoretical foundation, the transform gained prominence in the 1980s with Ronald N. Bracewell's 1983 introduction of the discrete Hartley transform (DHT), a finite version amenable to fast algorithms analogous to the fast Fourier transform (FFT), with complexity O(N \log N) for N-point data.

Key applications of the Hartley transform include efficient spectral analysis in signal and image processing, where its real-valued outputs simplify hardware implementation and reduce numerical errors compared to complex transforms. It has been employed in geophysical data interpretation for estimating power spectra without phase retrieval issues, as well as in oceanographic studies for analyzing wave and current time series. More recently, extensions like the multidimensional DHT support medical image compression and quantum signal processing variants.

History and Development

Origins in Signal Theory

The origins of the Hartley transform trace back to early 20th-century advancements in signal processing and communication theory, particularly through the work of Ralph V. L. Hartley at Bell Telephone Laboratories. In his seminal 1928 paper, Hartley introduced a quantitative measure of information transmission, defining it as proportional to the logarithm of the number of possible symbols and the number of selections, which laid foundational principles for analyzing signal capacity in communication systems. This work emerged amid Bell Labs' intensive research into telephony and radio signals during the 1920s and 1930s, where engineers grappled with distortion, bandwidth limitations, and the need for efficient frequency-domain representations to handle intersymbol interference and energy storage effects in transmission lines. Hartley's approach emphasized the maximum transmission rate as tied to the product of frequency-range width and available time, influencing subsequent developments in frequency analysis for real-world signals like voice and telegraphy.

Building on this foundation, Hartley proposed the transform itself in 1942 as a more symmetrical alternative to traditional Fourier analysis, specifically tailored for transmission problems in communication networks. Developed at Bell Telephone Laboratories, where frequency analysis was crucial for mitigating issues like phase distortion and echoes in telephone and radio systems, the transform decomposes real signals into even and odd components using real-valued sinusoidal functions, avoiding the complexities of imaginary numbers inherent in Fourier methods. This innovation stemmed from the era's growing demand for practical tools to analyze steady-state and transient behaviors in signal propagation, enabling clearer insights into how systems preserve or alter information across frequency bands. The primary motivation was to simplify computations for real-valued signals prevalent in engineering applications, such as audio transmitted over telephone lines, by providing a unified real-arithmetic framework that treated positive and negative frequencies symmetrically. By focusing on even (cosine-like) and odd (sine-like) parts without complex conjugation, Hartley's method reduced the algebraic burden while maintaining analytical power for problems like aberration in transmission channels. Subsequent adoption of the transform's discrete variant by Ronald N. Bracewell in the 1980s further extended its utility in computational signal processing.

Key Contributions and Evolution

Following its initial definition by Ralph V. L. Hartley in 1942, the Hartley transform experienced a resurgence in the 1980s through key advancements in its discrete form and computational efficiency. Hartley's original proposal, published in the Proceedings of the IRE, was largely overlooked until Ronald N. Bracewell independently developed the discrete version. In 1983, Bracewell introduced the discrete Hartley transform (DHT), adapting the continuous transform for finite sequences of real data, with particular relevance to astronomy and imaging applications at Stanford University. This formalization addressed the need for real-valued spectral analysis in digital processing, building on Hartley's foundational work while enabling practical implementation on computers.

The 1980s marked a pivotal evolution with the development of fast algorithms for the DHT, mirroring the efficiency of the fast Fourier transform. Bracewell proposed the fast Hartley transform (FHT) in 1984, achieving O(N log N) computational complexity through a radix-2 decomposition that avoids complex arithmetic, thus reducing storage and operations for real-valued inputs. Subsequent contributions by researchers including H. S. Hou and others refined these methods, introducing variants like split-radix and prime-factor algorithms to further optimize performance for specific sequence lengths and hardware constraints. These innovations positioned the DHT as a viable alternative for signal analysis, particularly in resource-limited environments.

Later in the decade, optical implementations of the Hartley transform were proposed to exploit its real-valued properties for analog processing. In 1985, R. N. Bracewell, H. Bartelt, A. W. Lohmann, and N. Streibl described an optical system using lenses to compute the two-dimensional Hartley transform, analogous to optical Fourier-transform systems but without complex conjugation. Additional proposals, such as those in 1987 exploring interferometric setups, highlighted potential for high-speed image processing. However, these optical approaches saw limited adoption: the established computational advantages of the fast Fourier transform in digital systems—with comparisons showing only marginal speed gains for the FHT in real-data DFT computations—favored the FFT's widespread infrastructure.

Mathematical Definition

Continuous Hartley Transform

The continuous Hartley transform provides a real-valued alternative to the Fourier transform for analyzing real-valued signals in the time domain. For a real-valued function f(t), the transform is defined by the integral H(\omega) = \int_{-\infty}^{\infty} f(t) \operatorname{cas}(\omega t) \, dt, where \omega represents the angular frequency and the integration is performed over the entire real line t \in (-\infty, \infty). This formulation, originally proposed by Ralph V. L. Hartley (with variations in normalization conventions), maps the input signal to a real-valued frequency-domain representation H(\omega) defined over \omega \in (-\infty, \infty). The transform decomposes the signal f(t) into sinusoidal components using the \operatorname{cas} kernel, which combines cosine and sine terms to capture both even (cosine-like) and odd (sine-like) parts of the signal's sinusoidal structure. This kernel basis enables a decomposition that preserves the reality of the output for real inputs, facilitating applications in signal processing where complex arithmetic is undesirable. The continuous nature of the transform suits it for theoretical analysis of infinite-duration signals, such as those in physical systems or transmission problems.
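
As a concrete check of this definition, the sketch below (assuming NumPy; the helper names cas and hartley_numeric are illustrative, not standard library functions) approximates the unnormalized integral by quadrature on a truncated grid for a Gaussian input, whose Hartley transform equals \sqrt{2\pi}\, e^{-\omega^2/2} because the Fourier transform of a Gaussian is purely real.

```python
import numpy as np

def cas(x):
    # cas(x) = cos(x) + sin(x), the Hartley kernel
    return np.cos(x) + np.sin(x)

def hartley_numeric(f, omegas, t_max=20.0, n=20001):
    # Approximate H(w) = integral of f(t) cas(w t) dt by trapezoidal quadrature
    # on the truncated interval [-t_max, t_max].
    t = np.linspace(-t_max, t_max, n)
    ft = f(t)
    return np.array([np.trapz(ft * cas(w * t), t) for w in omegas])

omegas = np.linspace(-3.0, 3.0, 7)
numeric = hartley_numeric(lambda t: np.exp(-t**2 / 2), omegas)
analytic = np.sqrt(2 * np.pi) * np.exp(-omegas**2 / 2)   # known closed form for the Gaussian
print(np.max(np.abs(numeric - analytic)))                # small (truncation/quadrature error only)
```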

The cas Function

The cas function, introduced by Ralph V. L. Hartley in 1942 as part of a symmetrical approach to Fourier analysis, is defined as \operatorname{cas}(t) = \cos(t) + \sin(t). This definition combines the real and imaginary parts of the complex exponential kernel in a real-valued manner, serving as the core of the Hartley transform kernel. Equivalently, using angle addition formulas for sine, the cas function can be rewritten as \operatorname{cas}(t) = \sqrt{2} \sin\left(t + \frac{\pi}{4}\right). This phase-shifted form underscores its sinusoidal nature and facilitates analysis of its periodicity and amplitude scaling by \sqrt{2}. The cas function relates directly to standard trigonometric functions, enabling decomposition of the Hartley transform into cosine and sine components for handling even and odd signals. Specifically, the integral kernel \operatorname{cas}(\omega t) separates into an even cosine term \cos(\omega t) and an odd sine term \sin(\omega t); for an even signal f_e(t), the sine integral vanishes over symmetric limits, yielding twice the Fourier cosine transform, while for an odd signal f_o(t), the cosine integral vanishes, yielding twice the Fourier sine transform. Key identities for the cas function include addition and product rules derived from trigonometric expansions. The addition formula, involving the complementary function \operatorname{cas}^c(t) = \cos(t) - \sin(t) (which is the derivative of \operatorname{cas}(t)), is \operatorname{cas}(a + b) = \operatorname{cas}(a) \cos(b) + \operatorname{cas}^c(a) \sin(b). This identity, along with its counterpart \operatorname{cas}(a - b) = \operatorname{cas}(a) \cos(b) - \operatorname{cas}^c(a) \sin(b), aids in expanding the kernel in derivations involving the Hartley transform. A useful product identity is 2 \operatorname{cas}(a) \cos(b) = \operatorname{cas}(a + b) + \operatorname{cas}(a - b), analogous to the cosine product-to-sum formula but adapted to the cas kernel; a similar relation holds for the sine product as 2 \operatorname{cas}^c(a) \sin(b) = \operatorname{cas}(a + b) - \operatorname{cas}(a - b). These rules support proofs of linearity, symmetry, and convolution properties in the Hartley framework.
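
The identities above are straightforward to verify numerically; the short sketch below (assuming NumPy; the helper names cas and casc are illustrative) checks the addition, product, and phase-shifted forms at random angles.

```python
import numpy as np

rng = np.random.default_rng(0)
a, b = rng.uniform(-np.pi, np.pi, size=(2, 1000))   # random test angles

cas  = lambda x: np.cos(x) + np.sin(x)               # cas(x)
casc = lambda x: np.cos(x) - np.sin(x)               # complementary function cas^c(x)

assert np.allclose(cas(a + b), cas(a) * np.cos(b) + casc(a) * np.sin(b))   # addition rule
assert np.allclose(cas(a - b), cas(a) * np.cos(b) - casc(a) * np.sin(b))   # subtraction rule
assert np.allclose(2 * cas(a) * np.cos(b), cas(a + b) + cas(a - b))        # product-to-sum rules
assert np.allclose(2 * casc(a) * np.sin(b), cas(a + b) - cas(a - b))
assert np.allclose(cas(a), np.sqrt(2) * np.sin(a + np.pi / 4))             # phase-shifted form
print("all cas identities hold")
```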

Fundamental Properties

Inversion and Self-Duality

The inversion of the continuous Hartley transform is achieved using an identical form to the forward transform, making it uniquely self-dual among common integral transforms. Specifically, if the forward Hartley transform of a function f(t) is defined as H(\omega) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} f(t) \operatorname{cas}(\omega t) \, dt, then the inverse transform recovers the original function via f(t) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} H(\omega) \operatorname{cas}(\omega t) \, d\omega.[2] This symmetric structure arises from the real-valued nature of the transform and the properties of the \operatorname{cas} kernel, \operatorname{cas}(x) = \cos(x) + \sin(x), which ensure that the operator is its own inverse under this unitary normalization. The self-duality property is expressed mathematically as \mathcal{H}^{-1} = \mathcal{H}, where \mathcal{H} denotes the Hartley transform operator. Applying the transform twice thus yields the original function exactly: f(t) = \mathcal{H} \left\{ \mathcal{H} \left\{ f(t) \right\} \right\}. This involutory behavior contrasts with the Fourier transform, where forward and inverse operations differ due to complex conjugation and exponential kernels, but aligns with the Hartley transform's design for real signals. The property was highlighted in the seminal introduction of the modern Hartley transform, emphasizing its computational symmetry. For real-valued signals, this self-duality simplifies round-trip computations, such as filtering or spectral weighting in the transform domain followed by reconstruction, by eliminating the need for separate forward and inverse algorithms or arithmetic adjustments. In practice, this reduces implementation overhead in applications, as a single routine can handle both directions up to the normalization factor.
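
A discrete illustration of this involution, assuming NumPy: the N × N matrix with entries \operatorname{cas}(2\pi k n / N) satisfies C^2 = N I, so the transform scaled by 1/\sqrt{N} is its own inverse, the finite-dimensional counterpart of \mathcal{H}^{-1} = \mathcal{H}.

```python
import numpy as np

N = 8
k = np.arange(N)
arg = 2 * np.pi * np.outer(k, k) / N
C = np.cos(arg) + np.sin(arg)                 # cas matrix C[k, n] = cas(2*pi*k*n/N)

print(np.allclose(C @ C, N * np.eye(N)))      # C^2 = N * I

U = C / np.sqrt(N)                            # unitary scaling makes the matrix involutory
x = np.random.default_rng(1).normal(size=N)
print(np.allclose(U @ (U @ x), x))            # applying the scaled transform twice returns x
```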

Normalization Conventions

The Hartley transform exhibits several normalization conventions in the literature, primarily differing in scaling factors to achieve unitarity or convenience in specific applications. In the original formulation, the transform and its inverse both incorporate a factor of \sqrt{\frac{1}{2\pi}}, yielding a unitary operator that preserves the L^2-norm of the function. This symmetric normalization ensures the transform is orthonormal and self-inverse under the chosen convention. Alternative conventions, common in engineering and signal processing contexts, adopt an asymmetrical approach in which the forward transform is unscaled—defined simply as an integral without a prefactor—and the inverse includes a factor of \frac{1}{2\pi} to recover the original function. This choice aligns with traditional engineering practices and simplifies computations in non-unitary settings, though it requires careful adjustment when comparing results across frameworks. Kernel variants further diversify the conventions, with the original and standard definition using the cas function as \operatorname{cas}(\omega t) = \cos(\omega t) + \sin(\omega t), emphasizing symmetry in positive and negative frequencies. A complementary variant, \operatorname{cas}'(\omega t) = \cos(\omega t) - \sin(\omega t), appears in some works and corresponds to the real part minus the imaginary part of the complex exponential kernel, facilitating certain even and odd decompositions. The sign difference influences phase interpretations but does not alter the real-valued nature of the output. Both Hartley and Bracewell employed the standard cas with plus sign. Regardless of the chosen normalization or kernel sign, consistency is essential for the inversion property to hold, ensuring that the composition of forward and inverse transforms yields the original function up to the specified scaling. This requirement underpins the transform's self-duality and practical utility in signal processing.

Relation to Fourier Transform

Derivation from Fourier Components

The Fourier transform of a real-valued function f(t) is defined as F(\omega) = \int_{-\infty}^{\infty} f(t) e^{-i \omega t} \, dt, which expands to F(\omega) = \int_{-\infty}^{\infty} f(t) \cos(\omega t) \, dt - i \int_{-\infty}^{\infty} f(t) \sin(\omega t) \, dt. The real part is thus \Re\{F(\omega)\} = \int_{-\infty}^{\infty} f(t) \cos(\omega t) \, dt, and the imaginary part is \Im\{F(\omega)\} = -\int_{-\infty}^{\infty} f(t) \sin(\omega t) \, dt. The Hartley transform H(\omega) employs the kernel \cas(\omega t) = \cos(\omega t) + \sin(\omega t), yielding H(\omega) = \int_{-\infty}^{\infty} f(t) \cas(\omega t) \, dt = \int_{-\infty}^{\infty} f(t) \cos(\omega t) \, dt + \int_{-\infty}^{\infty} f(t) \sin(\omega t) \, dt. Substituting the Fourier components gives H(\omega) = \Re\{F(\omega)\} - \Im\{F(\omega)\}. Due to the Hermitian symmetry of the Fourier transform for real f(t), where F(-\omega) = \overline{F(\omega)}, it follows that \Im\{F(-\omega)\} = -\Im\{F(\omega)\}. Therefore, an equivalent expression is H(\omega) = \Re\{F(\omega)\} + \Im\{F(-\omega)\}. Conversely, the Fourier transform can be recovered from the Hartley transform via decomposition into its real and imaginary parts. Specifically, F(\omega) = \frac{H(\omega) + H(-\omega)}{2} - i \frac{H(\omega) - H(-\omega)}{2}, which solves the system formed by H(\omega) and H(-\omega) using the even and odd symmetries of the cosine and sine components, respectively. The decomposition highlights the separation into even and odd components: the cosine integral corresponds to the even part of f(t), while the sine integral corresponds to the odd part. For an even function f(t) = f(-t), the sine integral vanishes, yielding H(-\omega) = H(\omega). For an odd function f(t) = -f(-t), the cosine integral vanishes, yielding H(-\omega) = -H(\omega). These symmetries parallel those of the Fourier transform's real (even) and imaginary (odd) parts.
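
These relations carry over directly to the discrete case; the sketch below (assuming NumPy, with F computed by numpy.fft.fft) obtains the discrete Hartley coefficients as \Re\{F(k)\} - \Im\{F(k)\}, rebuilds F from H(k) and H(-k) using the formula above, and cross-checks against the defining cas sum.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=16)
N = len(x)

F = np.fft.fft(x)
H = F.real - F.imag                          # Hartley coefficients from the DFT
H_rev = np.roll(H[::-1], 1)                  # H(-k), i.e. H((N - k) mod N)

# F(w) = [H(w) + H(-w)]/2 - i [H(w) - H(-w)]/2, in discrete index form
F_rebuilt = (H + H_rev) / 2 - 1j * (H - H_rev) / 2
print(np.allclose(F_rebuilt, F))             # True

# Direct check against H(k) = sum_n x(n) cas(2*pi*k*n/N)
n = np.arange(N)
arg = 2 * np.pi * np.outer(n, n) / N
C = np.cos(arg) + np.sin(arg)
print(np.allclose(C @ x, H))                 # True
```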

Comparative Advantages

The Hartley transform offers significant advantages over the Fourier transform in scenarios involving real-valued signals, primarily due to its production of real-valued outputs. For a real input function, the discrete Fourier transform generates complex coefficients, requiring storage of both real and imaginary parts—effectively doubling the data volume to 2N real numbers for an N-point sequence. In contrast, the Hartley transform yields N real numbers, halving storage needs and simplifying data handling in memory-constrained environments. This real-valued nature stems from the transform's basis functions, which combine cosine and sine components without introducing imaginary units, making it particularly suitable for applications like image processing where inputs are inherently real.

Another key benefit is the Hartley transform's self-inversion property, which eliminates the need for a distinct inverse formula or additional operations like complex conjugation required in the Fourier case. The inverse Hartley transform is identical to the forward transform, up to a scaling factor, allowing seamless switching between domains with the same computational routine. This reduces implementation complexity in software and hardware, as a single routine can handle both forward and inverse operations, avoiding errors associated with complex arithmetic in the inverse step.

Computationally, the Hartley transform provides efficiency gains, especially for real signals, by relying solely on real arithmetic, which was particularly advantageous in earlier hardware generations when complex multiplications were expensive. Fast Hartley transform (FHT) algorithms perform operations in O(N log N) time using only real additions and multiplications, often requiring fewer total operations than a complex fast Fourier transform (FFT); for instance, symmetric convolutions in the Hartley domain can demand about one-quarter the real multiplications of their Fourier-domain counterparts. These efficiencies made the Hartley transform appealing in early systems running on limited processors, though modern hardware has diminished some of these gaps. The mathematical relation to the Fourier transform, where the Hartley output combines the real and imaginary parts of the Fourier result, further underscores its role as a real-valued alternative without sacrificing core frequency-domain insights.

Additional Properties

Linearity and Symmetry

The Hartley transform exhibits linearity, a fundamental property shared with other integral transforms such as the Fourier transform. Specifically, for any scalar constants a and b, and square-integrable functions f(t) and g(t), the transform satisfies \mathcal{H}\{a f + b g\}(\omega) = a \mathcal{H}\{f\}(\omega) + b \mathcal{H}\{g\}(\omega). This linearity follows directly from the definition of the transform and enables superposition in applications like signal decomposition. The Hartley transform also preserves symmetry properties related to the parity of the input function. If f(t) is an even function (f(-t) = f(t)), then its Hartley transform H(\omega) is even (H(-\omega) = H(\omega)), as the sine component integrates to zero over symmetric limits. Conversely, for an odd function (f(-t) = -f(t)), the transform is odd (H(-\omega) = -H(\omega)), with the cosine component vanishing. These parity preservations arise from the structure of the cas kernel, where \cos(\omega t) is even in t and \sin(\omega t) is odd, mirroring the behavior in Fourier analysis but in a real-valued setting. Additionally, under appropriate normalization (typically with a factor of 1/\sqrt{2\pi}), the Hartley transform is a unitary operator on the L^2(\mathbb{R}) space, preserving the L^2 norm via a Parseval-type relation: \int_{-\infty}^{\infty} |f(t)|^2 \, dt = \int_{-\infty}^{\infty} |H(\omega)|^2 \, d\omega. This norm preservation ensures that energy in the time domain equals energy in the Hartley domain, facilitating applications in signal processing where power conservation is essential. The shift property further highlights the transform's symmetry: for a time-shifted function f(t - t_0), the Hartley transform is given by \mathcal{H}\{f(t - t_0)\}(\omega) = \cos(\omega t_0) H(\omega) + \sin(\omega t_0) H(-\omega). This expression combines the values H(\omega) and H(-\omega), providing a real-valued analogue of the Fourier shift theorem without complex exponentials.
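
The shift property can be checked in discrete form, where a circular shift by s samples plays the role of the delay t_0; the sketch below (assuming NumPy, with dht a direct O(N^2) helper written purely for illustration and anticipating the discrete transform defined later) compares the transform of the shifted sequence against \cos(2\pi k s/N) H(k) + \sin(2\pi k s/N) H(-k).

```python
import numpy as np

def dht(x):
    # Direct O(N^2) discrete Hartley transform: H(k) = sum_n x(n) cas(2*pi*k*n/N)
    n = np.arange(len(x))
    arg = 2 * np.pi * np.outer(n, n) / len(x)
    return (np.cos(arg) + np.sin(arg)) @ x

rng = np.random.default_rng(3)
x = rng.normal(size=12)
N, s = len(x), 5

H = dht(x)
H_rev = np.roll(H[::-1], 1)                        # H(-k), i.e. H((N - k) mod N)
k = np.arange(N)
predicted = np.cos(2*np.pi*k*s/N) * H + np.sin(2*np.pi*k*s/N) * H_rev
print(np.allclose(dht(np.roll(x, s)), predicted))  # True: shift theorem in discrete form
```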

Convolution and Theorems

The convolution theorem for the Hartley transform provides a frequency-domain representation of the convolution of two real-valued functions f(t) and g(t), defined as (f * g)(t) = \int_{-\infty}^{\infty} f(\tau) g(t - \tau) \, d\tau. Specifically, the Hartley transform of this convolution is given by \mathcal{H}\{f * g\}(\omega) = \frac{1}{2} \left[ H(\omega) G(\omega) + H(-\omega) G(\omega) + H(\omega) G(-\omega) - H(-\omega) G(-\omega) \right], where H(\omega) = \mathcal{H}\{f\}(\omega) and G(\omega) = \mathcal{H}\{g\}(\omega); when either function is even, the expression reduces to the single product H(\omega) G(\omega). This relation holds under the standard unnormalized definition of the continuous Hartley transform, \mathcal{H}\{f\}(\omega) = \int_{-\infty}^{\infty} f(t) \cas(\omega t) \, dt, with \cas(\theta) = \cos \theta + \sin \theta. All quantities involved are real, so the theorem preserves the real-valued nature of the transform, though practical computations must combine values at both \omega and -\omega. A similar form governs the correlation theorem, where the cross-correlation (f \star g)(t) = \int_{-\infty}^{\infty} f(\tau) g(\tau + t) \, d\tau (equivalently, the convolution of the time-reversed f(-t) with g) transforms as \mathcal{H}\{f \star g\}(\omega) = \frac{1}{2} \left[ H(\omega) G(\omega) + H(-\omega) G(\omega) - H(\omega) G(-\omega) + H(-\omega) G(-\omega) \right]. This follows from the convolution theorem together with the property that \mathcal{H}\{f(-t)\}(\omega) = H(-\omega). For the autocorrelation case f = g, the expression simplifies to \frac{1}{2} \left[ H(\omega)^2 + H(-\omega)^2 \right], an even function of \omega. The derivations of both theorems stem from substituting the convolution or correlation integral into the Hartley transform definition and exploiting the algebraic identity for the product of cas functions: \cas(\alpha) \cas(\beta) = \cos(\alpha - \beta) + \sin(\alpha + \beta). This identity generates terms at sum and difference frequencies, leading to the paired evaluations at \omega and -\omega after integration over the shifted variables. These properties leverage the self-duality and real-valued nature of the Hartley transform, distinguishing it from the Fourier counterpart while maintaining computational efficiency for real signals.
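
The discrete analogue of the convolution theorem can be verified directly; the sketch below (assuming NumPy, with dht a direct O(N^2) helper written for illustration and anticipating the discrete transform defined in the next section) computes a circular convolution and compares its transform with the combination of X(k), Y(k), X(-k), and Y(-k) stated above.

```python
import numpy as np

def dht(x):
    # Direct O(N^2) discrete Hartley transform
    n = np.arange(len(x))
    arg = 2 * np.pi * np.outer(n, n) / len(x)
    return (np.cos(arg) + np.sin(arg)) @ x

def rev(X):
    # Index reversal modulo N: rev(X)[k] = X[(N - k) mod N], i.e. X(-k)
    return np.roll(X[::-1], 1)

rng = np.random.default_rng(4)
N = 16
x, y = rng.normal(size=(2, N))

# Circular convolution z(n) = sum_m x(m) y((n - m) mod N)
z = np.array([sum(x[m] * y[(n - m) % N] for m in range(N)) for n in range(N)])

X, Y, Z = dht(x), dht(y), dht(z)
Xr, Yr = rev(X), rev(Y)
print(np.allclose(Z, 0.5 * (X*Y + Xr*Y + X*Yr - Xr*Yr)))   # True: discrete convolution theorem
```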

Discrete Hartley Transform

Definition and Formulation

The discrete Hartley transform (DHT) is a real-valued, invertible linear transform applied to sequences of real numbers, serving as a computationally efficient alternative to the discrete Fourier transform (DFT) for processing real-valued data. Introduced by Ronald N. Bracewell in 1983, it leverages the cas function, defined as \cas \theta = \cos \theta + \sin \theta, to map input sequences into the frequency domain without requiring complex arithmetic. For a finite sequence of N real numbers x(n), where n = 0, 1, \dots, N-1, the one-dimensional DHT is formulated as H(k) = \sum_{n=0}^{N-1} x(n) \cas\left( \frac{2\pi k n}{N} \right), \quad k = 0, 1, \dots, N-1. This transform produces a real-valued output H(k) of the same length N, representing the Hartley spectrum. The kernel combines cosine and sine components symmetrically, ensuring that the transform remains entirely real for real inputs, which halves the storage requirements compared to the DFT. The DHT exhibits self-duality, meaning its inverse is nearly identical to the forward transform. The inverse DHT (IDHT) recovers the original sequence via x(n) = \frac{1}{N} \sum_{k=0}^{N-1} H(k) \cas\left( \frac{2\pi k n}{N} \right), \quad n = 0, 1, \dots, N-1. Applying the forward DHT twice yields N times the original sequence, confirming invertibility with the 1/N scaling factor in the inverse. Normalization conventions vary; the unnormalized form (as above) is common in algorithmic implementations, while some divide by \sqrt{N} in both directions for unitary properties. The DHT relates directly to the DFT through real and imaginary parts: if F(k) is the DFT of x(n), then H(k) = \Re\{F(k)\} - \Im\{F(k)\}, allowing conversion between the two with minimal additional computation. This connection facilitates the use of existing FFT algorithms for DHT computation, though dedicated fast Hartley transform (FHT) algorithms exploit the real-valued nature for further efficiency gains. For multidimensional data, such as 2D images, the DHT extends separably by applying the 1D transform along each dimension.
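
A minimal reference implementation of these formulas, assuming NumPy and written for clarity rather than speed (O(N^2)), is sketched below; it confirms the round-trip inversion with the 1/N factor and the H(k) = \Re\{F(k)\} - \Im\{F(k)\} relation against numpy.fft.

```python
import numpy as np

def dht(x):
    # H(k) = sum_n x(n) cas(2*pi*k*n/N), evaluated as a matrix-vector product
    x = np.asarray(x, dtype=float)
    N = len(x)
    n = np.arange(N)
    arg = 2 * np.pi * np.outer(n, n) / N
    return (np.cos(arg) + np.sin(arg)) @ x

def idht(H):
    # Same kernel as the forward transform, scaled by 1/N
    return dht(H) / len(H)

x = np.random.default_rng(5).normal(size=10)
H = dht(x)
print(np.allclose(idht(H), x))               # round trip recovers x

F = np.fft.fft(x)
print(np.allclose(H, F.real - F.imag))       # H(k) = Re F(k) - Im F(k)
```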

Fast Algorithms and Computation

The Fast Hartley Transform (FHT) is an efficient algorithm for computing the discrete Hartley transform of length N, achieving a computational complexity of O(N \log N) through a divide-and-conquer approach analogous to the Cooley-Tukey fast Fourier transform, but relying exclusively on real arithmetic operations. This eliminates the need for complex number handling, making it particularly suitable for real-valued signals. Introduced by Ronald N. Bracewell in 1984, the FHT employs a radix-2 decomposition that recursively splits the input sequence into even- and odd-indexed subsequences, performing \log_2 N stages of processing on power-of-2 lengths. At each stage, butterfly operations combine the results of smaller subtransforms, leveraging the addition theorems of the cas function—defined as \cas(\theta) = \cos(\theta) + \sin(\theta)—to efficiently map the transform kernel without introducing imaginary components. These butterflies typically require 4 real multiplications and 6 real additions, approximately half the operations of equivalent fast Fourier transform butterflies for real data. In practice, the FHT's real-only arithmetic yields significant advantages in both software and hardware implementations, reducing execution time by up to 50% compared to complex fast transforms for purely real inputs and halving memory requirements by avoiding storage for imaginary parts. For instance, on typical computing platforms, an N = 1024 FHT completes in about 0.75 \log_2 N multiplications and 1.75 \log_2 N additions per datum, enabling faster processing in resource-constrained environments. Adaptations of established libraries facilitate widespread use; notably, fast Fourier transform packages like FFTPACK can be converted to FHT routines via simple index remapping, preserving efficiency across radix-2, radix-3, radix-4, radix-5, and mixed-radix variants without altering core arithmetic. Such conversions have been demonstrated to maintain floating-point operation counts equivalent to the original while simplifying code for real-data applications. More recent research has introduced specialized fast algorithms for small-size, odd-length, and quantum-enhanced DHT computations, further optimizing performance in niche applications.
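
A compact sketch of the radix-2 decimation-in-time recursion is given below, assuming NumPy and power-of-two lengths; it follows from the cas addition theorem, H(k) = E(k \bmod N/2) + \cos(2\pi k/N)\, O(k \bmod N/2) + \sin(2\pi k/N)\, O((N/2 - k) \bmod N/2), where E and O are the DHTs of the even- and odd-indexed samples. It is written in vectorized recursive form for clarity, not as an optimized in-place butterfly implementation.

```python
import numpy as np

def fht(x):
    # Recursive radix-2 fast Hartley transform for power-of-two lengths.
    x = np.asarray(x, dtype=float)
    N = len(x)
    if N == 1:
        return x.copy()
    E = fht(x[0::2])                        # DHT of even-indexed samples (length N/2)
    O = fht(x[1::2])                        # DHT of odd-indexed samples  (length N/2)
    k = np.arange(N)
    km = k % (N // 2)                       # k mod N/2
    kr = (N // 2 - k) % (N // 2)            # "retrograde" index for the sine term
    return E[km] + np.cos(2*np.pi*k/N) * O[km] + np.sin(2*np.pi*k/N) * O[kr]

x = np.random.default_rng(6).normal(size=64)
F = np.fft.fft(x)
print(np.allclose(fht(x), F.real - F.imag))   # matches the DFT-derived Hartley spectrum
```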

Applications

Signal and Image Processing

The Hartley transform finds significant application in digital signal processing due to its convolution theorem, which enables efficient computation of convolutions for real-valued signals. For two functions f_1(x) and f_2(x) with Hartley transforms H_1(\omega) and H_2(\omega), the Hartley transform of their convolution combines products of the two transforms evaluated at both \omega and -\omega, as given by the convolution theorem above. This property simplifies to a single real multiplication per frequency when one function is even, as often occurs in filtering operations, reducing computational cost compared to the discrete Fourier transform (DFT), which involves complex arithmetic. The transform's real-to-real mapping avoids phase unwrapping issues and Hermitian symmetry redundancy, making it advantageous for tasks like finite impulse response (FIR) filtering and transient analysis in power systems. In adaptive signal enhancement, fast algorithms for the running Hartley transform (RDHT) facilitate processing of signals such as noisy biomedical recordings by leveraging recursive computations that update the transform incrementally. The transform's self-inverse nature further supports inverse operations without additional conjugation, enhancing efficiency in iterative filtering schemes like Wiener filtering, where it estimates optimal filters for denoising by minimizing error in the transform domain. For instance, parametric variants of the Hartley transform have been applied to Wiener filtering and related enhancement tasks, demonstrating improved performance in low-signal-to-noise environments typical of biomedical signals.

In image processing, the discrete Hartley transform (DHT) serves as an alternative to the DFT for convolution-based operations, particularly in two-dimensional separable forms that process rows and columns independently. This is exploited in filtering tasks such as blurring removal, where the transform's efficiency shines for even-symmetric kernels, requiring about one-quarter the operations of the DFT. A notable application is image deblurring via Wiener filtering, where the DHT computes the restoration filter in the transform domain, achieving high fidelity with reduced computational load. Additionally, the DHT enables phase retrieval from intensity measurements, aiding reconstruction of images from partial data in optical systems.

For volumetric data compression, especially in medical imaging, the three-dimensional DHT (3D DHT) compresses data such as MRI volumes by approximating the transform with multiplierless algorithms, achieving a 100% reduction in multiplicative complexity and approximately 70% faster execution compared to standard 3D DHT implementations. When integrated into DICOM-compliant codecs, these approximations maintain structural similarity indices (SSIM) near 1.0, preserving diagnostic quality while enabling deployment on resource-limited devices like embedded processors. The transform's real-valued output also supports lossless variants, making it suitable for high-fidelity storage of multidimensional medical datasets.
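
As a small illustration of why even-symmetric kernels are attractive in the Hartley domain, the sketch below (assuming NumPy, with dht a direct helper and a hypothetical three-tap smoothing kernel) shows that circular filtering with a zero-phase kernel reduces to a single pointwise multiplication of real Hartley spectra.

```python
import numpy as np

def dht(x):
    # Direct O(N^2) discrete Hartley transform
    n = np.arange(len(x))
    arg = 2 * np.pi * np.outer(n, n) / len(x)
    return (np.cos(arg) + np.sin(arg)) @ x

N = 32
rng = np.random.default_rng(7)
x = rng.normal(size=N)

h = np.zeros(N)                       # even-symmetric kernel: h(n) = h((N - n) mod N)
h[[0, 1, N - 1]] = [0.5, 0.25, 0.25]  # simple three-tap smoother (illustrative)

# Circular filtering y(n) = sum_m h(m) x((n - m) mod N)
y = np.array([sum(h[m] * x[(n - m) % N] for m in range(N)) for n in range(N)])

# Because h is even-symmetric, its Hartley spectrum is even, and the general
# convolution theorem collapses to a pointwise product of real spectra.
print(np.allclose(dht(y), dht(h) * dht(x)))   # True
```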

Optics and Other Fields

In the 1980s, optical implementations of the two-dimensional Hartley transform were developed using lens-based systems to enable real-time image processing. These setups synthesized the transform optically by combining two Fourier transforms with specific amplitude, phase, and orientation adjustments, achieved through configurations involving cylindrical and spherical lenses. The real-valued output of the Hartley transform facilitated efficient convolution and filtering of images without the need for complex arithmetic, offering potential speed advantages over Fourier-based optical methods at the time. However, practical errors in lens alignment and transform realization could distort results, though corrections were possible with measurements from the transform plane.

Beyond optics, the Hartley transform has found use in quantum computing. In quantum signal analysis, the quantum Hartley transform serves as a unitary operation for handling real-valued signals in quantum circuits, preserving noise distributions and enabling efficient generative modeling in the Hartley basis before transformation back to the computational basis. Its self-inverse property supports reversibility in these quantum operations. Recent extensions include multidimensional quantum generative modeling (as of 2024) and libraries like QRTlib for efficient quantum real transforms (as of October 2025). Despite these specialized uses, the Hartley transform has seen limited adoption in modern optics and related fields, overshadowed by the fast Fourier transform's established role in digital and holographic processing; it persists in niche analog hardware scenarios where real-valued computations reduce complexity.

References

  1. [1]
    Hartley Transform -- from Wolfram MathWorld
    The Hartley transform produces real output for a real input, and is its own inverse. It therefore can have computational advantages over the discrete Fourier ...
  2. [2]
    [PDF] A More Symmetrical Fourier Analysis Applied to Transmission ...
    Some typical aberration curves as determined graphically from the screen patterns are shown in Fig. 8. These show the decrease in focal distance as the ray.
  3. [3]
    Hartley Transform—An Alternate Tool for Digital Signal Processing
    Jun 2, 2015 · The Hartley transform is an efficient and economical alternative to the Fourier transform in its applications to the field of digital signal ...
  4. [4]
    The use of the Hartley transform in geophysical applications
    The Hartley transform (HT) is an integral transform similar to the Fourier transform (FT). It has most of the characteristics of the FT.
  5. [5]
    [PDF] Hartley transform: basic theory and applications in oceanographic ...
    The Hartley transform is used to estimate the spectral density fhnction of ocean surface waves and coastal current time series. 1 Introduction. In physical ...
  6. [6]
    Low-complexity three-dimensional discrete Hartley transform ...
    The discrete Hartley transform (DHT) is a useful tool for medical image coding. The three-dimensional DHT (3D DHT) can be employed to compress medical image ...
  7. [7]
    Development of Quantum Hartley Transform for Signal Processing ...
    Aug 10, 2025 · The present paper deals with the q-analogue of Hartley transform which is the q-extension of Hartley transform.
  8. [8]
    [PDF] BSTJ 7: 3. July 1928: Transmission of Information. (Hartley, R.V.L.)
    bell system technical journal. The Measurement of Information. When we speak of the capacity of a system to transmit information we imply some sort of ...
  9. [9]
    [PDF] Memories: A Personal History of Bell Telephone Laboratories
    Aug 6, 2015 · Bell Labs researchers who worked there, mostly on antennas and radio waves. ... In a 1928 paper, Ralph Hartley presented a measure for the ...
  10. [10]
    The Hartley Transform (Ronald N. Bracewell) | SIAM Review
    The fast Fourier transform (FFT) is a way to compute y in just O(nlogn) operations. This represents a dramatic reduction in complexity. The FFT is best ...
  11. [11]
    [PDF] Discrete Hartley transform - Semantic Scholar
    Dec 1, 1983 · A More Symmetrical Fourier Analysis Applied to Transmission Problems · R. Hartley. Engineering, Physics. Proceedings of the IRE. 1942. The ...
  12. [12]
    The fast Hartley transform - IEEE Xplore
    A fast algorithm has been worked out for performing the Discrete Hartley Transform (DHT) of a data sequence of N elements in a time proportional to Nlog2N.
  13. [13]
    Optical phase obtained by analogue Hartley transformation - Nature
    Dec 31, 1987 · Here we describe the construction of an optical system in the form of a modified Michelson interferometer which physically demonstrates that it ...
  14. [14]
    [PDF] The Hartley Transform - DSP-Book
    Mar 4, 2024 · Table of Hartley Transforms. Tables 4.10 to 4.12 contain the Hartley transforms of commonly encountered signals in engineering applications.
  15. [15]
    Hartley transform and the use of the Whitened Hartley spectrum as a ...
    Mar 23, 2015 · ... function of the Hartley transform is the cas(ωt) function. The cas function was introduced by Hartley in 1942 and is defined as cas(ωt) ...
  16. [16]
    [PDF] Chapter 14 - The Hartley Transform - DSP-Book
    The Hartley transform is the real part of the Fourier transform minus the imaginary part. TABLE 14.3 Variations of the cas Function. Quadrant cas. 1st. +1 → +1 ...
  17. [17]
    [PDF] A Separable Two - Dimensional Discrete Hartley Transform
    Bracewell has proposed the Discrete Hartley Transform (DHT) as a substitute for the Discrete Fourier Transform (DFT), particularly as a means of convolution ( ...
  18. [18]
    [PDF] The Hartley Transform Via SUSY Quantum Mechanics
    Jun 9, 2020 · The Hartley transform has the following properties: E(τxf)(λ) = cas(λx)E(f)(λ), E(f ∗ g)(λ) = E(f)(λ)E(g)(λ). 3 SUSY QM with reflection. Let us ...
  21. [21]
    [1403.3848] On the half-Hartley transform, its iteration and ... - arXiv
    Mar 15, 2014 · Abstract page for arXiv paper 1403.3848: On the half-Hartley transform, its iteration and composition with Fourier transforms.
  23. [23]
    The Discrete Hartley Transform (FFTW 3.3.10)
    Thus, the DHT transforms n real numbers to n real numbers, and has the convenient property of being its own inverse. In FFTW, a DHT (of any positive n ) can be ...
  24. [24]
    [PDF] optimized fast hartley transform for the mc68000
    This work describes the development of code which takes advantage of recent advancements in trans- form algorithms and microprocessor technology to make ...
  25. [25]
    Conversion of FFT's to Fast Hartley Transforms
    View PDF. References. 1. R. V. L. Hartley, A more symmetrical Fourier analysis applied to transmission problems, Proc. I. R. E., 30 (1942), 144–150. Crossref.
  26. [26]
    (PDF) A fast running Hartley transform algorithm and its application ...
    Aug 6, 2025 · A fast recursive algorithm for computation of the running discrete Hartley transform (RDHT) is presented. This method is based on the relation ...
  28. [28]
    A Fast Image Deblurring Algorithm Using the Wiener Filter and the ...
    The least-squares (Wiener) filter is an optimal statistical filter in an average sense and it can be applied to deconvolve an image [5]. The constrained least- ...
  31. [31]
    Medical image compression using 3-D Hartley transform
    Aug 5, 2025 · In this paper, 3-D discrete Hartley transform is applied for the compression of two medical modalities, namely, magnetic resonance images ...