Analog signal
An analog signal is a continuous-time signal that varies smoothly in amplitude, frequency, or phase to represent physical quantities such as sound, light, or temperature, taking any of an infinite range of values within a defined continuum, such as voltage levels from -12V to +12V.[1][2][3] Unlike digital signals, which are discrete and quantized into binary steps, analog signals maintain a continuous waveform that mirrors natural phenomena without artificial limits on resolution.[2][3] This continuous nature allows analog signals to convey information with high fidelity in real-world applications, though they are susceptible to noise and distortion during transmission or processing.[1][3]

Key characteristics of analog signals include their time-varying properties: the signal's value at any instant can take any point within its range, producing a smooth curve when plotted against time.[1][3] In audio signals, for instance, the instantaneous voltage corresponds directly to the pressure variations of a sound wave, enabling natural reproduction of complex waveforms like music or speech.[1] Analog signals often require amplification and filtering in electronic circuits to maintain integrity, and their digitization is governed by principles such as the Nyquist theorem, which states that sampling for digital conversion must occur at a rate of at least twice the highest frequency component to avoid information loss.[1] Despite their vulnerability to environmental interference, such as electromagnetic noise, analog signals offer advantages in bandwidth efficiency and direct interfacing with physical sensors, making them foundational in many systems.[3]

Analog signals find widespread use in applications ranging from audio recording and reproduction, where they capture the nuances of human hearing, to temperature and pressure sensors in industrial control systems.[3][2] In telecommunications, they form the basis for radio transmissions and traditional telephone lines, while in imaging, analog signals from sensors represent light intensity variations before potential digitization.[3][1] Their role persists in modern mixed-signal designs, such as biomedical devices for EEG analysis, where low-power analog processing ensures precise data acquisition from wearable sensors.[1] Overall, analog signals remain essential for bridging the physical world with electronic systems, complementing digital technologies in hybrid environments.[3]

Fundamentals
Definition
An analog signal is a continuous-time signal in which the amplitude varies smoothly and continuously over time, representing physical quantities such as voltage, current, sound pressure, or light intensity in an unbroken manner without discrete steps or quantization.[4] This continuity allows the signal to capture infinitely many possible values within its range, mirroring the gradual changes observed in natural phenomena.

Historically, analog signals originated from early devices designed to replicate continuous natural processes, predating electronic implementations. Mechanical clocks, for instance, used gear mechanisms to model the smooth progression of time,[2] while vinyl records encoded audio as continuous helical grooves that a stylus traced to reproduce sound waves faithfully.[5] Such analog systems trace back to ancient devices like the Antikythera mechanism of around 85-60 B.C.E., and they emphasized direct physical analogies to real-world variations, avoiding any form of numerical discretization.[6] As carriers of information, analog signals play a foundational role in conveying data through proportional physical representations, with their defining continuity distinguishing them from other signal types by enabling seamless variation rather than sampled approximations.

Characteristics
Analog signals are defined by key properties that vary continuously, enabling them to represent real-world phenomena with smooth transitions rather than discrete steps. Amplitude refers to the signal's strength or magnitude, typically measured in volts for electrical signals, which determines the peak deviation from the baseline. Frequency denotes the rate of oscillation, expressed in hertz (Hz), representing the number of cycles per second. Phase indicates the position of the signal within its cycle, measured in degrees or radians relative to a reference point.[7][4][8] Wavelength, applicable particularly to propagating waves, is the spatial distance covered in one complete cycle, inversely related to frequency in a given medium.[9] These properties collectively allow analog signals to capture nuanced variations in physical quantities, such as pressure or voltage, without inherent quantization limits. Unlike discrete systems, the continuous nature of these attributes means amplitude, frequency, phase, and wavelength can take on any value within their physical constraints, facilitating precise representation of continuous phenomena.[3]

A hallmark of analog signals is their theoretically infinite resolution, permitting an infinite number of distinguishable values within a bounded range, such as between -12V and +12V. This stems from the signal's ability to fluctuate smoothly without predefined steps, though real-world implementations are constrained by inherent physical noise that introduces uncertainty.[3][10]

In the time domain, analog signals exist over continuous intervals, evolving without abrupt interruptions. Sinusoidal waves serve as a fundamental model for analog signals, approximating the continuous variations in sound pressure waves or light intensity over time.[1]

Mathematical Representation
Continuous-Time Signals
Continuous-time analog signals are mathematically modeled in the time domain as functions x(t), where the independent variable t represents continuous time, typically measured in seconds, and x(t) can assume any real value within a continuous range.[11] This representation captures the inherent continuity of physical phenomena, such as voltage variations in electrical circuits or pressure waves in acoustics, where the signal evolves smoothly without discrete jumps.[12] Unlike discrete signals, x(t) is defined for all real values of t in some interval, potentially infinite, allowing for an uncountably infinite set of amplitude values.[13]

The continuity of these signals stems from their role as solutions to ordinary differential equations (ODEs) that describe the dynamics of physical systems, such as RLC circuits or mechanical oscillators, where state variables change continuously over time.[14] For instance, solving a first-order linear ODE like \frac{dx(t)}{dt} + \alpha x(t) = 0 yields exponential solutions that model natural decay processes, illustrating how analog signals represent infinitely many points across the time continuum rather than finite approximations.[15] This differential equation framework underscores the analog signal's ability to reflect real-world processes without temporal discretization.[16]

A fundamental example is the sinusoidal signal, expressed as x(t) = A \sin(2\pi f t + \phi), where A > 0 denotes the amplitude (peak value), f is the frequency in hertz (cycles per second), and \phi is the phase offset in radians, capturing periodic oscillations like those in alternating current.[17] Another basic form is the unit ramp signal, x(t) = t \, u(t), with u(t) as the unit step function (u(t) = 1 for t \geq 0, 0 otherwise), which models linearly increasing quantities such as integrator outputs in analog electronics.[18] Exponential signals provide further insight, such as the decaying form x(t) = e^{-\alpha t} u(t), \quad \alpha > 0, which describes transient responses like the voltage across a discharging capacitor in an RC circuit.[17]

In analog audio applications, continuous-time periodic functions, often superpositions of sinusoids, represent acoustic waveforms; for example, a pure tone is a sine wave at audible frequencies (20 Hz to 20 kHz), while complex sounds like speech arise from modulated continuous envelopes.[19] These examples highlight the versatility of continuous-time modeling in capturing smooth, real-valued evolutions essential to analog signal processing.[12]

Frequency Domain Analysis
The frequency domain analysis of analog signals involves representing the signal's characteristics in terms of its frequency components, providing insights into its spectral content and behavior that complement time-domain descriptions. The continuous-time Fourier transform (CTFT) is the primary tool for this purpose, decomposing an analog signal into a continuum of sinusoids. Specifically, the CTFT of a continuous-time signal x(t) is given by X(f) = \int_{-\infty}^{\infty} x(t) e^{-j 2 \pi f t} \, dt, where f denotes frequency in hertz, and X(f) represents the amplitude and phase of each sinusoidal component at frequency f. This integral formulation reveals that any aperiodic analog signal can be expressed as an infinite superposition of complex exponentials e^{j 2 \pi f t}, which correspond to sinusoids, allowing analysis of the signal's frequency composition. The inverse CTFT reconstructs the original signal via x(t) = \int_{-\infty}^{\infty} X(f) e^{j 2 \pi f t} \, df, ensuring perfect recovery under conditions where the transform exists, such as for square-integrable signals.[20]

A key concept emerging from the frequency domain is bandwidth, defined as the range of frequencies over which the signal's power is concentrated, typically encompassing the frequencies where most of the signal energy resides. For instance, human-audible audio signals, which are low-pass analog signals, have a bandwidth of approximately 20 kHz, spanning from 0 Hz to 20 kHz, as this captures the full spectrum perceivable by the human ear. Bandwidth quantifies the signal's information-carrying capacity and influences system design, such as filter requirements in transmission channels.[21]

The power spectral density (PSD) further elucidates the energy distribution across frequencies for power signals, such as stationary random processes, by describing the power per unit frequency.
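Although the CTFT is defined for continuous signals, its spectral picture can be illustrated numerically by applying a discrete Fourier transform to a sampled waveform. The following is a minimal sketch using only the Python standard library; the two tone frequencies and the 64 Hz sampling rate are illustrative values, not taken from the text:

```python
import cmath
import math

def dft_magnitudes(x):
    """Naive discrete Fourier transform magnitudes (O(N^2));
    a numerical stand-in for the CTFT of a sampled signal."""
    n_samples = len(x)
    return [abs(sum(x[n] * cmath.exp(-2j * math.pi * k * n / n_samples)
                    for n in range(n_samples)))
            for k in range(n_samples)]

# Sample at fs = 64 Hz for 1 s, so DFT bin k corresponds to k Hz.
fs = 64
n_total = fs * 1
t = [n / fs for n in range(n_total)]

# Two-tone test signal: 5 Hz at amplitude 1.0 plus 12 Hz at amplitude 0.5.
x = [math.sin(2 * math.pi * 5 * ti) + 0.5 * math.sin(2 * math.pi * 12 * ti)
     for ti in t]

mags = dft_magnitudes(x)
# Dominant bins below the Nyquist frequency (fs/2 = 32 Hz):
peaks = sorted(range(1, n_total // 2), key=lambda k: mags[k], reverse=True)[:2]
print(sorted(peaks))  # → [5, 12]
```

Because each tone falls exactly on a DFT bin, the magnitudes come out as N·A/2 (32 and 16 here), mirroring how the CTFT concentrates a sinusoid's energy at its frequency.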
The PSD \Phi_{xx}(f) is the Fourier transform of the signal's autocorrelation function and indicates how average power is allocated in the spectrum; for example, integrating the PSD over a frequency band yields the power in that band. In analog communications, baseband signals exhibit PSD concentrated at low frequencies near zero (e.g., voice signals from 0 to 4 kHz), while bandpass signals, formed by modulating a baseband signal onto a carrier, have PSD shifted to a higher frequency band around the carrier frequency, enabling efficient transmission over radio channels without low-frequency interference.[22][23]

Comparison to Digital Signals
Key Differences
Analog signals are continuous in both time and amplitude, allowing them to represent information with theoretically infinite precision without the need for sampling, whereas digital signals are discrete, consisting of sampled values at specific time intervals and quantized to a finite number of amplitude levels. This discreteness in digital signals requires adherence to the Nyquist sampling theorem, which mandates a sampling rate at least twice the highest frequency component of the signal to prevent aliasing and accurately reconstruct the original waveform. In contrast, analog signals inherently capture the full continuum of variations without such constraints, enabling seamless representation of natural phenomena like sound waves or light intensity.

Regarding information fidelity, analog signals preserve the exact shape and nuances of the original waveform, providing high-fidelity reproduction in ideal conditions, but they degrade continuously when subjected to noise or interference, as distortions accumulate along the transmission path. Digital signals, however, represent information in binary form (0s and 1s), which facilitates regeneration and error correction techniques, such as parity checks or forward error correction codes, allowing the signal to maintain integrity even after multiple transmissions despite initial quantization errors. This regenerative property makes digital signals more robust over long distances, as receivers can reconstruct clean versions from noisy inputs, unlike analog signals where degradation is irreversible without amplification that may itself introduce further noise.

The implications for processing differ markedly: analog signals are typically handled using linear circuits, such as operational amplifiers and passive filters, which operate on continuous voltages or currents to perform operations like amplification or modulation directly on the waveform. Digital signals, on the other hand, rely on logic gates (e.g., AND, OR, NOT) and binary arithmetic in integrated circuits, enabling complex computations through algorithms but requiring prior analog-to-digital conversion. These approaches lead to distinct trade-offs in implementation, as summarized in the following table:

| Aspect | Analog Processing | Digital Processing |
|---|---|---|
| Circuit Type | Linear components (e.g., resistors, capacitors, op-amps) | Logic gates and sequential circuits (e.g., flip-flops) |
| Noise Handling | Susceptible; noise adds directly to signal | Largely immune via thresholding and error correction |
| Precision | Infinite in theory, limited by hardware | Finite, determined by bit depth (e.g., 8-24 bits) |
| Complexity | Simpler for basic operations, but harder to scale | Highly scalable for complex tasks, easier to integrate |
| Power Efficiency | Often lower power for simple analog tasks | Higher for computation-intensive operations due to CMOS scaling |
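The noise-handling contrast in the table can be made concrete with a toy sketch: additive noise corrupts an analog sample permanently, while a digital level can be regenerated by thresholding as long as the noise stays below half the level spacing. All voltages and the threshold here are illustrative values:

```python
def regenerate(voltage, high=1.0, low=0.0, threshold=0.5):
    """Digital repeater: snap a noisy level back to the nearest logic value."""
    return high if voltage >= threshold else low

noise = 0.2  # volts of additive interference picked up on the line

# Analog path: the noise adds directly to the waveform and stays there.
analog_sample = 0.60                     # original continuous value
analog_received = analog_sample + noise  # 0.80 -- the error is irreversible

# Digital path: bit 1 sent as 1.0 V, corrupted, then regenerated exactly.
digital_sent = 1.0
digital_received = regenerate(digital_sent - noise)  # 0.8 V snaps back to 1.0

print(round(analog_received, 3))  # → 0.8 (distortion persists)
print(digital_received)           # → 1.0 (bit restored exactly)
```

The same thresholding step is what allows digital repeaters to be chained indefinitely, whereas each analog amplification stage re-amplifies accumulated noise along with the signal.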
Signal Conversion
Analog-to-digital conversion (ADC) transforms continuous analog signals into discrete digital representations through two primary stages: sampling and quantization. Sampling involves measuring the amplitude of the analog signal at uniform time intervals, determined by the sampling frequency f_s. According to the Nyquist-Shannon sampling theorem, to accurately reconstruct the original signal without loss of information, the sampling frequency must satisfy f_s \geq 2 f_{\max}, where f_{\max} is the highest frequency component in the signal's bandwidth.[24] Failure to meet this criterion results in aliasing, a distortion where higher-frequency components masquerade as lower frequencies in the sampled signal, potentially corrupting the data. To prevent aliasing, an anti-aliasing filter—a low-pass analog filter—is applied before sampling to attenuate frequencies above f_{\max}.[25]

Quantization follows sampling by mapping each continuous amplitude sample to the nearest discrete level from a finite set of quantization levels, typically represented in binary form. The number of levels, determined by the ADC's bit resolution (e.g., 8 bits yield 256 levels), introduces quantization error, as the exact amplitude cannot always be precisely represented; this error manifests as noise with a magnitude up to half the step size between levels.[25] Higher resolution reduces this error but increases computational complexity and power consumption in the conversion process.

Digital-to-analog conversion (DAC) reverses the ADC process by reconstructing a continuous analog signal from discrete digital samples.
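The sampling and quantization stages described above can be sketched numerically. In this minimal illustration (the 10 Hz sampling rate, 7 Hz tone, and 8-bit resolution are example values), a sub-Nyquist sample rate makes a 7 Hz tone indistinguishable from a folded-down 3 Hz alias, and uniform quantization keeps the error within half a step:

```python
import math

fs = 10.0  # sampling rate in Hz

# Aliasing: a 7 Hz sine sampled at 10 Hz violates the Nyquist criterion
# (fs would need to be at least 14 Hz), so its samples are identical to
# those of a tone folded down to -(7 - 10) = -3 Hz.
samples_7hz = [math.sin(2 * math.pi * 7 * n / fs) for n in range(20)]
samples_alias = [math.sin(2 * math.pi * (-3) * n / fs) for n in range(20)]
assert all(abs(a - b) < 1e-9 for a, b in zip(samples_7hz, samples_alias))

# Quantization: mapping samples in [-1, 1] onto 2^8 = 256 uniform levels.
bits = 8
step = 2.0 / (2 ** bits)  # spacing between adjacent quantization levels

def quantize(sample):
    """Round a sample to the nearest level, clamped to the 8-bit range."""
    level = round(sample / step)
    return max(min(level, 2 ** (bits - 1) - 1), -2 ** (bits - 1)) * step

errors = [abs(quantize(s) - s) for s in samples_7hz]
print(max(errors) <= step / 2 + 1e-12)  # → True: error bounded by step/2
```

In a real converter the anti-aliasing filter would remove the 7 Hz component before sampling; the sketch shows why that filter is necessary.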
The ideal reconstruction method, derived from the Nyquist-Shannon theorem, employs sinc interpolation, where the continuous signal x(t) is expressed as: x(t) = \sum_{n=-\infty}^{\infty} x(nT) \cdot \operatorname{sinc}\left( \frac{t - nT}{T} \right) with T = 1/f_s and \operatorname{sinc}(u) = \sin(\pi u)/(\pi u), ensuring perfect recovery of the bandlimited original signal.[24] In practice, however, DACs often use a zero-order hold (ZOH), which maintains each sample's value constant over the sampling period, producing a stairstep waveform that approximates the original but introduces attenuation and phase distortion, particularly at higher frequencies due to the ZOH's inherent sinc-like frequency response.[26] A low-pass reconstruction filter follows the ZOH to smooth the output and remove imaging artifacts above the Nyquist frequency.

Hybrid systems integrate ADC and DAC to enable digital processing of analog signals, with pulse code modulation (PCM) serving as a foundational example in telephony. In PCM telephony, the analog voice signal undergoes ADC via uniform sampling at 8 kHz (exceeding twice the 4 kHz voice bandwidth), 8-bit logarithmic quantization to match human auditory perception, and binary encoding into a serial bitstream for transmission over digital lines.[27] At the receiver, DAC reconstructs the signal through decoding to pulse amplitude modulation, followed by ZOH and low-pass filtering to approximate the original waveform, enabling noise-resistant long-distance communication as demonstrated in early Bell System experiments. The conversion stages form a chain: analog input → anti-aliasing filter → sampler → quantizer → encoder (ADC side); decoder → ZOH DAC → reconstruction filter → analog output, bridging continuous and discrete domains efficiently.[27]

Noise and Distortions
Types of Noise
Noise in analog signals is generally modeled as an additive random process superimposed on the desired signal, degrading its fidelity and introducing uncertainty in measurement or transmission. This additive nature implies that the total received signal is the sum of the original analog waveform and the noise component, where the noise power is independent of the signal. The signal-to-noise ratio (SNR) serves as a key metric to evaluate this degradation, defined as the ratio of the signal power to the noise power, typically expressed in decibels as \text{SNR} = 10 \log_{10} \left( \frac{P_{\text{signal}}}{P_{\text{noise}}} \right), where P_{\text{signal}} and P_{\text{noise}} represent the average powers of the signal and noise, respectively.[28]

Thermal noise, also known as Johnson-Nyquist noise, arises from the random thermal motion of charge carriers in resistive components, such as conductors or resistors, and is present in all electronic circuits at temperatures above absolute zero. It is characterized by a white noise spectrum, meaning its power spectral density is constant across frequencies, and its mean-square noise voltage can be calculated using the Johnson-Nyquist formula: v_n^2 = 4 k T B R, where v_n^2 is the mean-square noise voltage, k is Boltzmann's constant (1.38 \times 10^{-23} J/K), T is the absolute temperature in kelvin, B is the bandwidth in hertz, and R is the resistance in ohms. This noise is unavoidable and fundamentally limits the sensitivity of analog systems, particularly in low-signal applications like amplifiers or sensors.[29]

Shot noise originates from the discrete nature of electric charge carriers, such as electrons, crossing a potential barrier or junction in devices like diodes, transistors, or photodetectors, resulting in random fluctuations in current akin to the statistical arrival of particles in a Poisson process.
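The SNR and Johnson-Nyquist expressions above can be evaluated directly. A minimal sketch follows; the 300 K temperature, 10 kHz bandwidth, 1 kΩ resistance, and 1 mV signal level are illustrative assumptions, not figures from the text:

```python
import math

k_B = 1.38e-23  # Boltzmann's constant, J/K

def thermal_noise_vrms(T, B, R):
    """RMS Johnson-Nyquist noise voltage: sqrt(4 k T B R)."""
    return math.sqrt(4 * k_B * T * B * R)

def snr_db(p_signal, p_noise):
    """Signal-to-noise ratio in decibels: 10 log10(Ps / Pn)."""
    return 10 * math.log10(p_signal / p_noise)

# Hypothetical sensor front end: 1 kOhm source resistance at room
# temperature (300 K) over a 10 kHz measurement bandwidth.
v_noise = thermal_noise_vrms(T=300, B=10e3, R=1e3)
print(f"{v_noise * 1e6:.2f} uV rms")  # → 0.41 uV rms of thermal noise

# A 1 mV rms signal across the same resistance: powers scale as V^2,
# so the resistance cancels in the power ratio.
print(f"{snr_db(1e-3 ** 2, v_noise ** 2):.1f} dB")  # → 67.8 dB
```

The sub-microvolt noise floor illustrates why millivolt-level sensor outputs are usually amplified close to the source, before long cable runs can add further noise.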
Shot noise manifests as current pulses with a white noise spectrum and a variance proportional to the average current and bandwidth, becoming prominent in scenarios with low carrier densities or high-speed operation. Unlike thermal noise, shot noise depends on the DC bias current rather than temperature alone, impacting the performance of analog circuits in photon detection or low-current amplification.[30]

Flicker noise, commonly referred to as 1/f noise due to its power spectral density inversely proportional to frequency (S(f) \propto 1/f), occurs in electronic devices and arises from imperfections such as material defects, surface traps, or fluctuations in carrier mobility within semiconductors. This low-frequency noise dominates at frequencies below a few kilohertz and exhibits a non-white spectrum that increases as frequency decreases, making it particularly detrimental in analog systems requiring stability, such as audio amplifiers or precision oscillators. Its origins are linked to phenomena like trapping and detrapping of charges at interfaces, contributing to long-term drift in signal levels.[31]

Interference, encompassing electromagnetic interference (EMI) and radio-frequency interference (RFI), represents external noise sources that couple into analog signals through conductive, capacitive, or radiative paths, often from nearby electrical equipment, power lines, or wireless transmissions. EMI typically involves broadband or narrowband disturbances in the radio frequency range that induce unwanted voltages or currents, while RFI specifically denotes interference from radio signals, leading to crosstalk or distortion in sensitive analog channels. These forms of interference are non-intrinsic and can be deterministic or random, severely affecting signal integrity in unshielded environments like communication lines or instrumentation.[32]

Mitigation Techniques
Filtering techniques are essential for mitigating noise in analog signals by selectively attenuating unwanted frequency components while preserving the desired signal bandwidth. Low-pass filters allow frequencies below a cutoff point to pass, effectively removing high-frequency noise such as thermal or shot noise that may overlay the signal.[33] A simple RC low-pass filter, consisting of a resistor in series with a capacitor to ground, achieves this with a cutoff frequency given by f_c = \frac{1}{2\pi RC}, where R is resistance and C is capacitance; for instance, with R = 1 kΩ and C = 0.1 μF, the cutoff is approximately 1.59 kHz, blocking higher-frequency interference.[34] High-pass filters, conversely, attenuate low-frequency noise like 1/f noise or DC offsets, passing signals above the cutoff; an RC high-pass configuration places the capacitor in series and the resistor to ground, with the same cutoff formula, and is useful in audio applications to eliminate rumble below 20 Hz.[35] Bandpass filters combine low-pass and high-pass elements to isolate a specific frequency band, such as in radio receivers to select a carrier signal while rejecting out-of-band noise; cascading an RC low-pass and high-pass yields a basic second-order bandpass with adjustable center frequency and bandwidth.[36]

Amplification and shielding address noise pickup and signal weakening in analog systems, particularly for low-level signals vulnerable to environmental interference.
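The RC cutoff relation above can be checked numerically, along with the first-order magnitude response it implies. A minimal sketch, using the component values quoted in the text and assuming ideal components:

```python
import math

def rc_cutoff_hz(R, C):
    """Cutoff (-3 dB) frequency of a first-order RC filter: 1 / (2*pi*R*C)."""
    return 1.0 / (2 * math.pi * R * C)

def lowpass_gain(f, f_c):
    """Magnitude response |H(f)| = 1 / sqrt(1 + (f/f_c)^2) of an RC low-pass."""
    return 1.0 / math.sqrt(1 + (f / f_c) ** 2)

# Values from the text: R = 1 kOhm, C = 0.1 uF.
f_c = rc_cutoff_hz(R=1e3, C=0.1e-6)
print(f"{f_c:.0f} Hz")  # → 1592 Hz, i.e. about 1.59 kHz

# At the cutoff the amplitude drops to 1/sqrt(2) (-3 dB); a noise component
# a decade above the cutoff is attenuated to roughly a tenth (-20 dB).
print(f"{lowpass_gain(f_c, f_c):.3f}")  # → 0.707
print(lowpass_gain(10 * f_c, f_c) < 0.1)  # → True
```

The -20 dB/decade roll-off is characteristic of any first-order filter; steeper rejection requires cascading stages or active filter topologies.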
Preamplifiers boost weak analog signals early in the chain to improve the signal-to-noise ratio (SNR) before further processing, minimizing the relative impact of additive noise; for example, instrumentation preamplifiers with high input impedance are used in sensor applications to amplify millivolt-level outputs without introducing significant thermal noise.[37] Shielding protects against electromagnetic interference (EMI) by enclosing signal paths in conductive materials connected to ground, which diverts induced currents away from the signal lines; grounded metal shields around cables or circuits reduce capacitive and inductive coupling from nearby sources like power lines, often achieving 20-40 dB of EMI attenuation in industrial settings.[38] Twisted-pair wiring within shields further cancels differential-mode EMI by equalizing magnetic field exposure on both conductors.[39]

Feedback systems, particularly negative feedback, stabilize analog amplifiers against distortions by subtracting a portion of the output from the input, thereby linearizing the response and reducing harmonic and intermodulation distortions. In an operational amplifier, negative feedback reduces nonlinear distortion by a factor of 1 + A\beta, where A is the open-loop gain and \beta is the feedback fraction, making the closed-loop behavior more predictable.[40] The closed-loop voltage gain is thus A_f = \frac{A}{1 + A\beta}, which, for large A, approximates \frac{1}{\beta}, trading some gain for improved linearity and bandwidth; this technique is widely applied in audio amplifiers to lower total harmonic distortion from 1% to below 0.01%.[41][42]

Applications
In Communications
Analog signals play a central role in communication systems by enabling the transmission of continuous information such as voice, music, and video over various media. In these systems, the baseband analog signal, which represents the original information, is modulated onto a higher-frequency carrier signal to facilitate efficient propagation through channels like airwaves or wires. This modulation process allows the signal to travel longer distances with reduced attenuation and interference, forming the backbone of early broadcasting and telephony technologies.[23]

Key modulation techniques for analog signals include amplitude modulation (AM), frequency modulation (FM), and phase modulation (PM). In AM, the amplitude of the carrier wave is varied in proportion to the message signal m(t), while the frequency and phase remain constant; the modulated signal is expressed as s(t) = [A_c + m(t)] \cos(2\pi f_c t),
where A_c is the carrier amplitude and f_c is the carrier frequency. This technique is straightforward and was widely used for its simplicity in early radio systems. FM, on the other hand, varies the instantaneous frequency of the carrier in accordance with the message signal, providing better resistance to amplitude noise at the cost of increased bandwidth, as approximated by Carson's rule: bandwidth ≈ 2(Δf + f_m), where Δf is the peak frequency deviation and f_m is the maximum message frequency. PM similarly alters the phase of the carrier proportional to the message signal, often generated using voltage-controlled oscillators, and is closely related to FM since frequency is the derivative of phase. These methods collectively enable the encoding of analog information for reliable transmission.[23][43][44]

Analog signals are transmitted via diverse media, including radio broadcasting, telephone lines, and cable television. In radio broadcasting, AM signals were pivotal in the 1920s, with the first scheduled commercial broadcast occurring on November 2, 1920, by station KDKA in Pittsburgh, which aired presidential election results and marked the onset of widespread public radio. Traditional telephone systems, known as Plain Old Telephone Service (POTS), transmitted voice as analog electrical signals over twisted-pair copper wires, converting acoustic waves into varying voltages that propagated at audio frequencies up to 4 kHz. However, as of 2025, POTS lines are being phased out by major providers, with discontinuations authorized by the FCC starting in October 2025 and continuing through 2029, transitioning to digital alternatives.[45][46][47] Cable TV systems historically delivered analog video and audio signals over coaxial cables, originating in the late 1940s as community antenna television (CATV) to enhance reception in rural areas by amplifying over-the-air broadcasts before the shift to multichannel distribution.
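The AM expression and Carson's-rule approximation above can be checked with a short numerical sketch. The carrier and tone parameters here are hypothetical, while the 75 kHz deviation and 15 kHz message limit in the Carson computation are the standard broadcast-FM figures:

```python
import math

# AM waveform s(t) = [A_c + m(t)] cos(2*pi*f_c*t) with a single-tone message.
# Hypothetical parameters: 2 V carrier, 0.5 V / 100 Hz tone, 10 kHz carrier.
A_c, A_m, f_m, f_c = 2.0, 0.5, 100.0, 10e3
fs = 1e6  # dense time grid standing in for continuous time
t = [n / fs for n in range(int(fs * 0.01))]  # one 100 Hz message period

def am(ti):
    """Amplitude-modulated signal at time ti."""
    return (A_c + A_m * math.sin(2 * math.pi * f_m * ti)) * \
        math.cos(2 * math.pi * f_c * ti)

# The envelope of the modulated wave tracks A_c + m(t), peaking at A_c + A_m.
peak = max(abs(am(ti)) for ti in t)
print(round(peak, 2))  # → 2.5

# Carson's rule for FM bandwidth: B ≈ 2 (Δf + f_m), with broadcast-FM
# figures of 75 kHz peak deviation and 15 kHz maximum message frequency:
bandwidth = 2 * (75e3 + 15e3)
print(bandwidth)  # → 180000.0 Hz
```

The 2.5 V peak confirms that an envelope detector recovering A_c + m(t) sees the message directly, which is why simple diode detectors sufficed in early AM receivers.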
These media leverage the continuous nature of analog signals to carry information without quantization, though they are susceptible to noise during propagation.[48][49]

To support multiple analog signals over a shared medium, frequency-division multiplexing (FDM) allocates distinct frequency bands to each channel, combining them into a composite signal for transmission and separating them at the receiver using bandpass filters. This analog technique was essential in telephony for grouping voice channels on long-haul lines and in radio for simultaneous broadcasting of multiple stations within the AM band. FDM enables efficient spectrum utilization in systems like early transatlantic telephone cables, where thousands of conversations were multiplexed onto coaxial lines.[50]
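The FDM principle can be sketched in a few lines: two channels are shifted to separate carrier frequencies, summed onto one "wire," and recovered independently at the receiver. For exactness this toy example uses constant channel levels and coherent detection with simple averaging in place of a bandpass filter; all parameters are illustrative:

```python
import math

fs = 48_000
n_total = 480              # an integer number of cycles of every tone involved
f1, f2 = 1_000.0, 3_000.0  # the two carrier frequencies in Hz
m1, m2 = 0.7, 0.3          # constant test levels on the two channels

t = [n / fs for n in range(n_total)]

# Composite FDM signal: each channel occupies its own carrier band.
composite = [m1 * math.cos(2 * math.pi * f1 * ti) +
             m2 * math.cos(2 * math.pi * f2 * ti) for ti in t]

def demodulate(signal, f_carrier):
    """Coherent detector: mix down with the carrier, then average.
    Averaging over whole cycles acts as a crude low-pass filter that
    rejects the other channel and the double-frequency terms."""
    mixed = [2 * s * math.cos(2 * math.pi * f_carrier * ti)
             for s, ti in zip(signal, t)]
    return sum(mixed) / len(mixed)

print(round(demodulate(composite, f1), 6))  # → 0.7 (channel 1 recovered)
print(round(demodulate(composite, f2), 6))  # → 0.3 (channel 2 recovered)
```

Separation works because the cross-products land at f1 ± f2 and 2f1 or 2f2, all of which average to zero over the window, mirroring how a receiver's bandpass filter isolates one channel's band from the composite.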