The sinc function, also known as the cardinal sine function, is a fundamental mathematical function in analysis, signal processing, and engineering, typically defined in its unnormalized form as \operatorname{sinc}(x) = \frac{\sin x}{x} for x \neq 0 and \operatorname{sinc}(0) = 1, or in its normalized form as \operatorname{sinc}(x) = \frac{\sin(\pi x)}{\pi x} for x \neq 0 with the same value at zero.[1][2] This function is even, symmetric about the y-axis, exhibits decaying oscillations with lobes that decrease in amplitude as |x| increases, and has zeros at all nonzero integer multiples of \pi in the unnormalized case or at all nonzero integers in the normalized case.[2][3] The sinc function plays a central role in Fourier analysis as the Fourier transform of the rectangular function, representing the ideal low-pass filter in the frequency domain and the interpolation kernel for bandlimited signals in the Shannon sampling theorem.[1][4] Its integral over the real line equals \pi for the unnormalized form, highlighting its utility in normalization and approximation theory, while in numerical methods it underpins sinc interpolation for efficient function reconstruction.[2][5] In practical applications such as digital signal processing and optics, the sinc function models diffraction patterns and aliasing effects, though its infinite extent often necessitates truncation or windowing in computations.[3][4]
Fundamental Definitions
Unnormalized Form
The unnormalized sinc function is defined mathematically as

\operatorname{sinc}(x) =
\begin{cases}
\frac{\sin x}{x} & x \neq 0, \\
1 & x = 0.
\end{cases}

This piecewise definition addresses the indeterminate form at x = 0, where the expression \frac{\sin x}{x} encounters division by zero.[1] The point x = 0 represents a removable singularity, resolved by assigning the value given by the limit \lim_{x \to 0} \frac{\sin x}{x} = 1, which ensures the function is continuous everywhere, including at the origin.[1] This limit follows from the standard Taylor series expansion of the sine function around zero, \sin x = x - \frac{x^3}{6} + O(x^5), yielding \frac{\sin x}{x} = 1 - \frac{x^2}{6} + O(x^4) \to 1 as x \to 0. In certain contexts, particularly the signal processing literature, the unnormalized sinc is denoted \mathrm{Sa}(x) to distinguish it from scaled variants.[6] The function features an oscillatory profile that decays as |x|^{-1} for large |x|, with zeros at x = n\pi for every nonzero integer n, corresponding to the roots of \sin x = 0 away from the origin.[1] Its graph displays a principal lobe centered at zero, symmetric about the y-axis, enveloped by successively smaller sidelobes that diminish in amplitude.[1] Unlike the normalized form, which scales the argument by \pi to give unit integral over the real line, the unnormalized version integrates to \pi from -\infty to \infty.[1]
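To make the piecewise definition concrete, here is a minimal NumPy sketch; the helper name sinc_unnormalized is ours (NumPy's own numpy.sinc implements the normalized form instead):

```python
import numpy as np

def sinc_unnormalized(x):
    """Unnormalized sinc: sin(x)/x for x != 0, with the limit value 1 at x = 0."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    out = np.ones_like(x)            # fills in sinc(0) = 1
    nz = x != 0.0
    out[nz] = np.sin(x[nz]) / x[nz]  # safe: division only where x is nonzero
    return out
```

Evaluated at the zeros x = n\pi, the result vanishes to machine precision, matching the analysis above.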
Normalized Form
The normalized sinc function is defined as

\sinc(x) =
\begin{cases}
\frac{\sin(\pi x)}{\pi x} & x \neq 0, \\
1 & x = 0,
\end{cases}

where the value at x = 0 is obtained by taking the limit as x approaches 0, yielding 1 by the standard limit \sin(u)/u \to 1 as u \to 0 with u = \pi x.[7][3] This form differs from the unnormalized sinc by incorporating a \pi-scaling in the argument, which normalizes the function so that its integral over the entire real line equals 1: \int_{-\infty}^{\infty} \sinc(x) \, dx = 1.[7][3] The normalized sinc has zeros at all nonzero integers, i.e., \sinc(n) = 0 for any integer n \neq 0, because \sin(\pi n) = 0 while the denominator \pi n \neq 0.[7][2] This interpolation property at integer points (\sinc(0) = 1 and \sinc(n) = 0 for n \neq 0) makes the integer samples of sinc behave like a Kronecker delta; in particular, the sum over all integers is 1: \sum_{n=-\infty}^{\infty} \sinc(n) = 1.[7] This variant is particularly valued in digital signal processing and sampling theory, where the summation of its integer translates equals 1 for every real argument, facilitating ideal bandlimited signal reconstruction.[7][3]
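The integer-zero behavior is easy to check numerically; note that NumPy's numpy.sinc computes exactly this normalized form:

```python
import numpy as np

# np.sinc implements the normalized sinc: sin(pi x)/(pi x), with value 1 at 0.
n = np.arange(-5, 6)
vals = np.sinc(n)   # 1 at n = 0, and 0 (to machine precision) at nonzero integers
```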
Origins
Etymology
The term "sinc" was coined by Philip M. Woodward in a 1952 technical report co-authored with I. L. Davies on applications of information theory to telecommunication and radar systems.[8] In this work, Woodward introduced the abbreviation to denote the function \frac{\sin x}{x}, noting that it "occurs so often in Fourier analysis and its applications that it does seem to merit a name of its own." The name is a shorthand derived directly from the mathematical expression "sin x / x" for the unnormalized form of the function.[8] Woodward specified that "sinc" should be pronounced like "sink," emphasizing its phonetic simplicity in technical discourse.[9] This introduction occurred amid post-World War II advancements in radar signal processing, where the function's role in waveform analysis necessitated efficient notation. The term's first formal publication appeared in Woodward's 1953 monograph Probability and Information Theory, with Applications to Radar, where it was defined on page 29 as \operatorname{sinc} x = \frac{\sin x}{x}.[10] Initially used informally within radar research circles, "sinc" gained standardization through its adoption in subsequent signal processing literature, becoming a staple in texts on Fourier transforms and communications engineering by the late 20th century.[9]
Historical Development
The function \sin(x)/x, now recognized as the unnormalized sinc function, first appeared in mathematical literature through Joseph Fourier's foundational 1822 treatise Théorie analytique de la chaleur, where it emerged in series expansions for solving the heat equation via Fourier series, foreshadowing the development of transform theory.[11] In optics and wave theory, the sinc function gained significance as the envelope of diffraction patterns. Joseph von Fraunhofer described the single-slit diffraction pattern around 1821, which mathematically corresponds to the sinc function for a rectangular aperture. George Biddell Airy described the related Airy disk pattern for circular apertures in 1835, involving a form akin to \frac{J_1(x)}{x} that parallels the sinc profile for linear apertures.[12] Gustav Kirchhoff provided a rigorous scalar diffraction theory in 1882, deriving the Fresnel-Kirchhoff integral that yields the sinc function as the intensity distribution for diffraction through a rectangular slit, formalizing its application in wave optics.[13] The sinc function played an implicit yet pivotal role in sampling theory through the Nyquist-Shannon theorem. Harry Nyquist's 1928 analysis of telegraph transmission established the minimum bandwidth requirements for signal reconstruction without aliasing, implying the need for adequate sampling rates.[14] Claude Shannon explicitly formulated the theorem in 1949 at Bell Laboratories, demonstrating that ideal bandlimited signal reconstruction from samples uses sinc interpolation, a breakthrough tied to wartime communications research.[15] Following World War II, the sinc function saw increased adoption in radar and telecommunications engineering, particularly at Bell Laboratories during the 1940s, where it informed signal processing for pulse compression and bandwidth optimization.
Mathematical Properties
Basic Analytic Properties
The sinc function, in both its unnormalized form \operatorname{sinc}(x) = \frac{\sin x}{x} (with \operatorname{sinc}(0) = 1) and normalized form \operatorname{sinc}(x) = \frac{\sin(\pi x)}{\pi x} (with \operatorname{sinc}(0) = 1), is an even function, satisfying \operatorname{sinc}(-x) = \operatorname{sinc}(x) for all real x.[1] This symmetry follows because both the numerator and the denominator are odd functions, so their ratio is even.[1] Additionally, the function is bounded on the real line, with |\operatorname{sinc}(x)| \leq 1 for all x, achieving equality only at x = 0.[1] The sinc function is continuous everywhere on \mathbb{R}, including at x = 0, where the removable singularity is resolved by defining the value as the limit \lim_{x \to 0} \operatorname{sinc}(x) = 1.[1] Moreover, it is infinitely differentiable on \mathbb{R}, as the function extends analytically to the entire complex plane, forming an entire function.[1] This smooth behavior ensures that all derivatives exist and are continuous at every point, including the origin.[16] For large |x|, the sinc function displays asymptotic decay characterized by |\operatorname{sinc}(x)| \sim \frac{1}{|x|}, modulated by the oscillatory numerator, leading to persistent ripples that diminish in amplitude.[1] This algebraic 1/|x| decay is slow compared with exponentially decaying kernels, which contributes to the function's utility in applications requiring gradual roll-off.[1] Beyond the central peak at x = 0, the unnormalized sinc function features alternating local maxima and minima in its side lobes, with successive lobes showing progressively decreasing amplitudes. These extrema occur at the nonzero solutions of \tan x = x, reflecting the interplay between the oscillatory sine and the decaying envelope. The normalized form exhibits the analogous structure with the argument scaled by \pi, which is particularly relevant in discrete signal processing contexts.[1]
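The sidelobe extrema can be located numerically from the condition \tan x = x, rewritten as x \cos x - \sin x = 0 to avoid the poles of the tangent; a brief sketch, assuming SciPy is available (the bracketing interval is our choice):

```python
import numpy as np
from scipy.optimize import brentq

# Extrema of sin(x)/x away from the origin satisfy tan(x) = x,
# equivalently g(x) = x*cos(x) - sin(x) = 0 (g is pole-free).
g = lambda x: x * np.cos(x) - np.sin(x)

x1 = brentq(g, 4.3, 4.6)        # first sidelobe extremum, near x = 4.4934
level = np.sin(x1) / x1         # sidelobe amplitude, about -0.217
```

The first sidelobe level of roughly -0.217 (about -13.3 dB) is a standard figure in filter design.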
Integral and Summation Formulas
The unnormalized sinc function, defined as \sinc(x) = \frac{\sin x}{x} (with \sinc(0) = 1), satisfies the principal definite integral \int_{-\infty}^{\infty} \sinc(x) \, dx = \pi.[1] This result, known as the Dirichlet integral in its half-range form, implies \int_{0}^{\infty} \sinc(x) \, dx = \frac{\pi}{2}.[1] These integrals arise naturally in Fourier analysis, where the sinc function is the inverse transform of the rectangular function.[1]For the normalized sinc function, \sinc(x) = \frac{\sin(\pi x)}{\pi x} (again with \sinc(0) = 1), the corresponding integral evaluates to \int_{-\infty}^{\infty} \sinc(x) \, dx = 1.[1] This normalization ensures the function integrates to unity over the real line, making it suitable as an interpolating kernel in signal processing.[3]A key summation property holds for the normalized form: \sum_{n=-\infty}^{\infty} \sinc(x + n) = 1 for all real x. This identity, central to the Shannon-Nyquist sampling theorem, can be derived via the Poisson summation formula, which equates the discrete sum to a sum over the Fourier transform of sinc (the rect function) evaluated at integer frequencies, yielding a constant value of 1. Alternatively, a direct derivation uses the partial fraction expansion \pi \csc(\pi x) = \sum_{n=-\infty}^{\infty} \frac{(-1)^n}{x + n}; substituting into the expression for the sum gives \sum_{n=-\infty}^{\infty} \frac{\sin(\pi (x + n))}{\pi (x + n)} = \frac{\sin(\pi x)}{\pi} \sum_{n=-\infty}^{\infty} \frac{(-1)^n}{x + n} = \frac{\sin(\pi x)}{\pi} \cdot \frac{\pi}{\sin(\pi x)} = 1.
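A truncated numerical check of the translate-sum identity; the cutoff N and tolerance below are illustrative, and the alternating tail makes the truncation error O(1/N):

```python
import numpy as np

def sinc_translate_sum(x, N=100_000):
    """Truncated version of sum_n sinc(x + n) over integer n in [-N, N]."""
    n = np.arange(-N, N + 1)
    return np.sum(np.sinc(x + n))
```

For any real x the truncated sum lies within the O(1/N) tail of the exact value 1.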
Series Expansions
The unnormalized sinc function, defined as \sinc(x) = \frac{\sin x}{x} for x \neq 0 and \sinc(0) = 1, possesses a Taylor series expansion around x = 0 derived from the power series of the sine function divided by x:

\sinc(x) = \sum_{n=0}^{\infty} \frac{(-1)^n x^{2n}}{(2n+1)!}.

This expansion arises because the sinc function is infinitely differentiable at x = 0, with all odd-order derivatives vanishing there. The series converges for all complex x, reflecting the fact that the unnormalized sinc function is an entire function. For the normalized sinc function, defined as \sinc(x) = \frac{\sin(\pi x)}{\pi x} for x \neq 0 and \sinc(0) = 1, the Taylor series around x = 0 adjusts for the \pi scaling in the argument of the sine:

\sinc(x) = \sum_{n=0}^{\infty} \frac{(-1)^n \pi^{2n} x^{2n}}{(2n+1)!}.

Like its unnormalized counterpart, this series converges everywhere in the complex plane, as the normalized sinc is also an entire function. These power series representations are especially valuable for approximating the sinc function when |x| is small, where truncating after a few terms yields high accuracy with minimal computational effort, such as in initial-value problems or local evaluations in numerical algorithms.
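A truncated-series evaluator for the normalized sinc, built from the running ratio of consecutive terms; the helper name and default term count are our choices:

```python
import numpy as np

def sinc_taylor(x, terms=6):
    """Truncated Taylor series of sin(pi x)/(pi x) about x = 0."""
    u2 = (np.pi * x) ** 2
    total = term = 1.0
    for n in range(1, terms):
        # ratio of term n to term n-1 is -u^2 / ((2n)(2n+1))
        term *= -u2 / ((2 * n) * (2 * n + 1))
        total += term
    return total
```

Near the origin a handful of terms already matches the library evaluation to near machine precision.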
Relations to Other Concepts
Connection to Dirac Delta Distribution
The Dirac delta distribution \delta(x) admits an integral representation involving a limit of cosine functions, which directly connects to the sinc function through explicit evaluation of the integral. Specifically,

\delta(x) = \lim_{\alpha \to \infty} \frac{1}{\pi} \int_{0}^{\alpha} \cos(k x) \, dk.

Evaluating the integral yields \int_{0}^{\alpha} \cos(k x) \, dk = \frac{\sin(\alpha x)}{x}, so the expression simplifies to

\delta(x) = \lim_{\alpha \to \infty} \frac{\sin(\alpha x)}{\pi x},

where \frac{\sin(\alpha x)}{\pi x} is a scaled form of the unnormalized sinc function \operatorname{sinc}(u) = \frac{\sin u}{u}. This limit holds in the distributional sense, meaning that for any smooth test function \phi(x) with compact support, \int_{-\infty}^{\infty} \frac{\sin(\alpha x)}{\pi x} \phi(x) \, dx \to \phi(0) as \alpha \to \infty.[17] Equivalently, the sinc function provides a sequence of approximations to the delta distribution via scaling. For the unnormalized sinc,

\delta(x) = \lim_{\epsilon \to 0^+} \frac{1}{\pi \epsilon} \operatorname{sinc}\left( \frac{x}{\epsilon} \right),

where the factor \frac{1}{\pi \epsilon} ensures the integral over the real line remains unity for each \epsilon > 0. Note that the sinc function is not itself a test function in the Schwartz space: although infinitely smooth, it decays only like 1/|x|, too slowly for rapid decrease. It does define a tempered distribution, however, and the key connection here is the limiting process that generates the delta from scaled sincs. This approximation is particularly useful in contexts requiring a smooth cutoff, as the sinc's oscillatory tails provide a natural bandwidth limitation. Historically, this sinc-based representation has been employed in physics to regularize the Dirac delta distribution, replacing singular impulses with finite-bandwidth approximations to facilitate computations in quantum mechanics and field theory while preserving key distributional properties in the limit.
For instance, early applications in wave mechanics used such limits to handle point sources without introducing divergences prematurely. Seminal treatments appear in Fourier analysis texts, where the connection arises from the inverse transform of a rectangular spectrum approaching a constant, yielding the delta.[18][19]
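The distributional limit can be probed numerically by pairing the scaled kernel with a Gaussian test function \phi(x) = e^{-x^2}, so the pairing should approach \phi(0) = 1 as \alpha grows; the grid size and \alpha values below are illustrative:

```python
import numpy as np

# Pair sin(alpha x)/(pi x) with phi(x) = exp(-x^2) on a fine grid.
# Note sin(alpha x)/(pi x) = (alpha/pi) * sinc(alpha x / pi), np.sinc normalized.
x = np.linspace(-10.0, 10.0, 200_001)
phi = np.exp(-x**2)

pairings = []
for alpha in (5.0, 50.0):
    kernel = (alpha / np.pi) * np.sinc(alpha * x / np.pi)
    pairings.append(np.trapz(kernel * phi, x))
# pairings holds values close to phi(0) = 1
```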
Hyperbolic Variant (Sinhc)
The hyperbolic variant of the sinc function, often denoted \sinhc(x), is defined as

\sinhc(x) = \frac{\sinh x}{x}

for x \neq 0, with the value at x = 0 taken as the limit \sinhc(0) = 1.[20] This definition parallels the structure of the trigonometric sinc function but replaces the sine with its hyperbolic counterpart, yielding a function that grows exponentially for large |x| rather than oscillating.[20] The function \sinhc(x) is an entire function of the complex variable, analytic everywhere in the complex plane, as its Taylor series expansion converges for all z \in \mathbb{C}:

\sinhc(z) = \sum_{n=0}^{\infty} \frac{z^{2n}}{(2n+1)!}.[21]

This series arises directly from the Taylor expansion of \sinh z. It has no real zeros, since \sinh x = 0 only at x = 0 for real x, where \sinhc(x) approaches 1; for x > 0, \sinhc(x) is strictly monotonically increasing.[22] Unlike the sinc function, whose integral over the real line equals \pi, the improper integral \int_{-\infty}^{\infty} \sinhc(x) \, dx diverges. This follows from the asymptotic behavior \sinhc(x) \sim \frac{1}{2} e^{|x|} / |x| as |x| \to \infty, which grows too rapidly for convergence.[21] The hyperbolic sinc appears in the analysis of modified Bessel functions, where it serves as a bounding or normalizing factor in inequalities and representations for generalized forms.[21] It also arises in solutions to certain diffusion equations, particularly in contexts involving hyperbolic modifications that account for finite propagation speeds.[23]
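A small sketch of \sinhc with a Taylor branch covering the origin; the cutoff 1e-4 and the helper name are our choices:

```python
import math

def sinhc(x):
    """sinh(x)/x with the limit value sinhc(0) = 1."""
    if abs(x) < 1e-4:
        # leading terms of the series 1 + x^2/3! + x^4/5! handle x = 0 exactly
        x2 = x * x
        return 1.0 + x2 / 6.0 + x2 * x2 / 120.0
    return math.sinh(x) / x
```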
Multidimensional Extensions
The sinc function extends naturally to higher dimensions through separable Cartesian and isotropic radial forms, providing foundational tools for multidimensional signal representation and analysis. In the Cartesian form, the n-dimensional unnormalized sinc function is defined as the product

\sinc(\mathbf{x}) = \prod_{i=1}^n \frac{\sin x_i}{x_i},

where \mathbf{x} = (x_1, \dots, x_n) \in \mathbb{R}^n. This separable structure allows independent computation along each coordinate axis, facilitating efficient numerical evaluation in applications requiring rectangular frequency-domain support. The radial form generalizes the function isotropically. In two dimensions, a common extension is the sombrero function, defined (up to normalization conventions) as

\operatorname{somb}(\mathbf{r}) = \frac{J_1(|\mathbf{r}|)}{|\mathbf{r}|},

where J_1 is the Bessel function of the first kind of order one and |\mathbf{r}| = \sqrt{r_1^2 + r_2^2}; a scaled variant 2 J_1(\pi \rho)/(\pi \rho), which equals 1 at the origin, is also widespread. This yields the rotationally symmetric "sombrero" profile, which is the Fourier transform of the circularly symmetric (disk-shaped) rectangular function and is used in filter design and diffraction modeling. In three dimensions, the isotropic form corresponding to spherical frequency support is

\frac{3 (\sin |\mathbf{r}| - |\mathbf{r}| \cos |\mathbf{r}|)}{|\mathbf{r}|^3},

employed in volume rendering and tomography for approximating reconstruction kernels.[24] A key property of the Cartesian form is its separability, enabling decomposition into one-dimensional operations along orthogonal axes, which preserves orthogonality in discrete bases like the sinc discrete variable representation (DVR). The integral of the unnormalized Cartesian sinc over \mathbb{R}^n evaluates to \pi^n, reflecting the product of individual one-dimensional integrals each equaling \pi. To achieve unit integral normalization in higher dimensions, the function is scaled by 1/\pi^n, analogous to the one-dimensional case where the normalized form \sin(\pi x)/(\pi x) integrates to 1; the multidimensional product then maintains unit volume. These extensions find use in multidimensional imaging and tomography, where the separable form supports efficient reconstruction on Cartesian grids, and radial variants aid in modeling isotropic point spread functions.[25]
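Separability in practice: a 2-D normalized sinc kernel assembled as an outer product of 1-D evaluations (the grid choice is illustrative):

```python
import numpy as np

# Build sinc(x)*sinc(y) on a 7x7 integer grid via two 1-D evaluations.
x = np.linspace(-3.0, 3.0, 7)             # the integers -3 .. 3
one_d = np.sinc(x)
kernel2d = np.outer(one_d, one_d)         # separable: outer product of 1-D sincs
```

On the integer grid the kernel reduces to a discrete impulse, 1 at the center and 0 elsewhere, the 2-D analogue of the interpolation property.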
Applications
In Signal Processing
In digital signal processing, the sinc function plays a central role as the impulse response of the ideal low-pass filter, whose frequency response is a rectangular function that passes all frequencies below a cutoff and attenuates all above it to zero. This filter is theoretically perfect for bandlimiting signals but non-causal and of infinite duration due to the sinc's unbounded support.[26][27] The sinc function enables perfect reconstruction of bandlimited continuous-time signals from their discrete samples via the Shannon interpolation formula, also known as the Whittaker–Shannon interpolation formula. For a signal bandlimited to less than half the sampling frequency, the reconstructed signal is given by

f(t) = \sum_{n=-\infty}^{\infty} f(nT) \cdot \sinc\left(\frac{t - nT}{T}\right),

where T is the sampling interval and f(nT) are the samples; this formula stems from the Nyquist–Shannon sampling theorem, ensuring no information loss if the sampling rate exceeds twice the signal's bandwidth.[28][29] To prevent aliasing distortion during sampling, an anti-aliasing filter, ideally a low-pass filter with sinc impulse response, is applied beforehand to remove frequency components above the Nyquist frequency, thereby avoiding spectral folding that would otherwise corrupt the signal.[28][27] In practice, the sinc function's infinite extent makes ideal implementation impossible, leading to truncation and windowing of the impulse response to create finite impulse response (FIR) filters; for example, Lanczos resampling employs a sinc kernel windowed by another sinc function to approximate bandlimited interpolation while reducing computational demands. Truncation introduces artifacts such as the Gibbs phenomenon, manifesting as overshoots and ringing near signal discontinuities, with ripple amplitudes approaching 9% of the jump height regardless of filter length.[30][31]
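A direct, truncated implementation of the interpolation formula; the function name is ours, and because the sum is necessarily finite, accuracy degrades near the ends of the sample window:

```python
import numpy as np

def shannon_interpolate(samples, T, t):
    """Reconstruct f(t) from uniform samples f(nT) via the (truncated)
    Whittaker-Shannon formula: sum_n f(nT) * sinc((t - nT)/T)."""
    n = np.arange(len(samples))
    return np.array([np.dot(samples, np.sinc((ti - n * T) / T)) for ti in t])

# Example: a 1 Hz sine sampled at 10 Hz, reconstructed at off-grid times.
T = 0.1
samples = np.sin(2 * np.pi * np.arange(2000) * T)
t = np.array([100.05, 100.13])
recon = shannon_interpolate(samples, T, t)
```

Well inside the window the reconstruction matches the continuous signal to within the truncation error of the finite sum.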
In Fourier Analysis
The sinc function plays a pivotal role in Fourier analysis due to its relationship with the rectangular function through the Fourier transform. The Fourier transform of the rectangular function \rect(t), defined as 1 for |t| < 1/2 and 0 otherwise, yields the normalized sinc function: \hat{\rect}(f) = \sinc(f), where \sinc(f) = \frac{\sin(\pi f)}{\pi f}. This pair is fundamental, as the sinc function represents the frequency response of a time-limited uniform pulse.[32] A key symmetry arises from the duality property of the Fourier transform, which states that if x(t) \leftrightarrow X(f), then X(t) \leftrightarrow x(-f). Applying this to the rect-sinc pair gives the inverse relationship: the Fourier transform of \sinc(t) is \rect(f). This duality highlights the interchangeability between time-limited and bandlimited signals in the frequency domain, underscoring the sinc function's role in bridging compact support in one domain with infinite extent in the other.[32] Parseval's theorem further illustrates the sinc function's properties by equating energy across domains: \int_{-\infty}^{\infty} |x(t)|^2 \, dt = \int_{-\infty}^{\infty} |\hat{x}(f)|^2 \, df. For the sinc function, whose transform is the rect function of unit height and width 1, the time-domain energy is \int_{-\infty}^{\infty} \sinc^2(t) \, dt = 1, matching the frequency-domain integral over the rect's support. This confirms unit energy preservation and is used to evaluate sinc-related integrals analytically.[33] In convolution operations, the sinc function acts as the ideal kernel for bandlimiting signals. Convolving an arbitrary signal with \sinc(t) implements ideal low-pass filtering, retaining only the frequency components within the passband |f| < 1/2 of the corresponding rect and suppressing all higher frequencies. This is because the sinc's Fourier transform, being a rect, enforces a sharp cutoff in the frequency domain.[34] The scaling property of the Fourier transform affects the sinc function's behavior in the frequency domain inversely. If the time-domain function is scaled as \rect(at), its transform becomes \frac{1}{|a|} \sinc\left(\frac{f}{a}\right), compressing or expanding the sinc's width proportionally to 1/|a| while adjusting the amplitude. This demonstrates how time scaling alters bandwidth, essential for analyzing resolution in Fourier representations.[35]
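The Parseval identity for the sinc can be checked by brute-force quadrature; the truncation to |t| \le 1000 leaves an O(10^{-4}) tail, since \sinc^2 decays like 1/t^2 (the grid parameters are illustrative):

```python
import numpy as np

# Energy of the normalized sinc over [-1000, 1000]; the exact value over R is 1,
# equal to the energy of its Fourier transform rect on [-1/2, 1/2].
t = np.linspace(-1000.0, 1000.0, 2_000_001)    # dt = 0.001
energy = np.trapz(np.sinc(t) ** 2, t)
```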
Numerical Computation and Approximations
The sinc function is typically evaluated using the direct formula \sinc(x) = \frac{\sin(\pi x)}{\pi x} for x \neq 0, with the value defined as \sinc(0) = 1 to resolve the removable singularity at the origin. This definition ensures continuity and is implemented in standard numerical libraries; for example, NumPy's numpy.sinc function computes this expression element-wise for input arrays, returning 1 at zero via the limit. Similarly, MATLAB's built-in sinc function follows the same normalized definition and handles the zero case accordingly.[36] For small |x|, particularly near zero where direct evaluation might introduce minor floating-point cancellation in the numerator and denominator, a truncated Taylor series provides an accurate alternative. The series for the unnormalized \frac{\sin u}{u} (with u = \pi x) is derived from the Taylor expansion of \sin u divided by u:

\frac{\sin u}{u} = \sum_{n=0}^{N} \frac{(-1)^n u^{2n}}{(2n+1)!} + R_{N}(u),

where the remainder R_{N}(u) is bounded by the first omitted term for small u, enabling efficient polynomial evaluation up to the desired order N. This approach is useful for high-precision arithmetic or when avoiding trigonometric function calls. Computing \sinc(x) for large |x| relies on the same direct formula, but the \sin(\pi x) term requires argument reduction to bring the effective input into the principal range. Naively reducing \pi x modulo 2\pi loses accuracy because \pi is irrational: for very large |x| (approaching 2^{53} in IEEE 754 double precision), the fractional part of \pi x becomes imprecise, and the computed \sin(\pi x) can deviate from its true value by up to the full range [-1, 1]. A standard remedy is a sinpi-style evaluation that reduces x modulo 2 exactly in floating point before multiplying by \pi; more generally, libraries employ higher-precision constants for reduction or extended-precision intermediates to maintain accuracy in such cases.
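A sinpi-style evaluation sketch: because \sin(\pi x) is periodic in x with period 2, reducing x with fmod (exact in IEEE arithmetic) before multiplying by \pi keeps the sine argument small. The helper name is ours, and production libraries add further refinements:

```python
import math

def sinc_normalized(x):
    """sin(pi x)/(pi x) with sinc(0) = 1, using exact reduction of x modulo 2."""
    if x == 0.0:
        return 1.0
    r = math.fmod(x, 2.0)    # exact in IEEE 754: no rounding error introduced
    return math.sin(math.pi * r) / (math.pi * x)
```

The reduced argument r stays in (-2, 2), so the rounding of math.pi * r remains benign even when |x| is enormous.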