
Complex random variable

A complex random variable is a random variable that takes values in the complex numbers, formally defined as a measurable function from a probability space to the set of complex numbers \mathbb{C}, often expressed as Z = X + iY where X and Y are real-valued random variables representing the real and imaginary parts, respectively. This construction allows the joint distribution of X and Y to determine the probabilistic behavior of Z, which can be viewed equivalently as a two-dimensional real random vector [X, Y]^T. The statistical properties of complex random variables extend those of real random variables, with the mean defined as \mathbb{E}[Z] = \mathbb{E}[X] + i \mathbb{E}[Y] and the variance as \mathrm{Var}(Z) = \mathbb{E}[|Z - \mathbb{E}[Z]|^2] = \mathbb{E}[|Z|^2] - |\mathbb{E}[Z]|^2, where |\cdot| denotes the modulus. For two complex random variables Z_1 and Z_2, the covariance is \mathrm{Cov}(Z_1, Z_2) = \mathbb{E}[(Z_1 - \mathbb{E}[Z_1])(Z_2^* - \mathbb{E}[Z_2^*])], where ^* indicates the complex conjugate, resulting in a Hermitian positive semi-definite matrix in the vector case. A notable subclass is the circularly symmetric complex random variable, which has zero mean and uncorrelated real and imaginary parts with equal variance, exhibiting rotational invariance in the complex plane. Complex random variables are fundamental in fields such as signal processing, communications, and array processing, where they model complex-valued signals, noise in channels, and multidimensional data arising in acoustics, optics, and oceanography. In these applications, distributions like the complex Gaussian, characterized by a mean vector and a Hermitian covariance matrix, play a central role, enabling analysis of phenomena such as fading channels and additive noise.

Fundamentals

Definition

A complex random variable Z is formally defined as a measurable function Z: \Omega \to \mathbb{C}, where (\Omega, \mathcal{F}, P) is a probability space and \mathbb{C} is the set of complex numbers equipped with the Borel \sigma-algebra generated by the standard topology. This measurability ensures that events involving Z, such as \{ Z \in B \} for Borel sets B \subseteq \mathbb{C}, belong to \mathcal{F}, allowing the probability measure P to be applied consistently. Any complex random variable Z can be decomposed into its real and imaginary parts as Z = X + iY, where X and Y are real-valued random variables on the same probability space. This representation highlights the bivariate nature of complex random variables, with X = \Re(Z) and Y = \Im(Z), both of which inherit measurability from Z.

Relation to Real Random Variables

A complex random variable Z can be expressed as Z = X + iY, where X and Y are real-valued random variables defined on the same probability space. This representation establishes a direct equivalence between Z and the bivariate real random vector (X, Y), whose joint probability measure is defined on \mathbb{R}^2. The distribution of Z is fully characterized by the joint distribution of X and Y, allowing properties of complex random variables to be analyzed through the lens of multivariate probability theory.

The \sigma-algebra generated by Z, denoted \sigma(Z), consists of all preimages Z^{-1}(B) where B is a Borel set in \mathbb{C}. This \sigma-algebra is identical to the \sigma-algebra generated by the pair (X, Y), denoted \sigma(X, Y), because the mapping from Z to (X, Y) via the real and imaginary parts is measurable and bijective, ensuring that events definable in terms of Z are precisely those definable in terms of X and Y. The complex plane \mathbb{C} is identified with the real plane \mathbb{R}^2 through the canonical bijection z \mapsto (\operatorname{Re} z, \operatorname{Im} z), which is a homeomorphism preserving the topological structure. Measure-theoretically, this identification extends to an isomorphism between the Borel \sigma-algebra on \mathbb{C} and the product Borel \sigma-algebra on \mathbb{R}^2, generated by rectangles of the form A \times B with A, B \in \mathcal{B}(\mathbb{R}). This structure ensures that probability measures on \mathbb{C} correspond uniquely to probability measures on \mathbb{R}^2.

Consequently, complex random variables with distinct distributions induce distinct joint distributions for their respective component pairs (X_1, Y_1) and (X_2, Y_2), as the bijective identification guarantees that any difference in the distribution of Z manifests as a difference in the joint distribution on \mathbb{R}^2. This uniqueness underpins the consistent translation of probabilistic concepts between complex and real settings.

Examples

Basic Examples

A simple discrete example of a complex random variable is one that takes the value 1 with probability \frac{1}{2} and the value i with probability \frac{1}{2}. In this case, P(Z = 1) = \frac{1}{2} and P(Z = i) = \frac{1}{2}. Another discrete example is a uniform distribution over the points \{1, i, -1, -i\}, each with probability \frac{1}{4}. These values are the fourth roots of unity, located at angles 0, \frac{\pi}{2}, \pi, and \frac{3\pi}{2} on the unit circle and forming the vertices of a square centered at the origin in the complex plane. This setup is commonly used to model quadrature phase-shift keying (QPSK) symbols in digital communications, where the transmitted signal is equally likely to be any of these points. For a basic continuous example, consider Z = U + iV, where U and V are independent real-valued random variables uniformly distributed on the interval [-1, 1]. The support of this distribution forms a square in the complex plane with vertices at 1+i, 1-i, -1+i, and -1-i, illustrating how the joint density over the real and imaginary components fills a rectangular region. In these examples, the modulus and argument of Z emerge naturally from its decomposition into real part X and imaginary part Y: the modulus captures the distance from the origin and the argument encodes the angular position, offering an intuitive polar view of the variability in the complex plane.
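The discrete QPSK-style example above can be simulated and its symmetry checked empirically; a minimal sketch assuming NumPy (the seed and sample size are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# Uniform draw over the fourth roots of unity {1, i, -1, -i},
# matching the QPSK-style example above.
symbols = np.array([1, 1j, -1, -1j])
z = rng.choice(symbols, size=100_000)

# Every outcome lies on the unit circle, and by symmetry the mean is 0.
print(np.allclose(np.abs(z), 1.0))   # True
print(abs(z.mean()))                 # close to 0
```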

Key Distributions

The uniform distribution on the unit disk is a continuous distribution for complex random variables confined to the closed disk of radius 1 centered at the origin in the complex plane. Its probability density function is given by f_Z(z) = \frac{1}{\pi} \quad \text{for } |z| \leq 1, and f_Z(z) = 0 otherwise. This distribution arises naturally in contexts requiring rotational invariance within a bounded region, such as certain models in random matrix theory. The mean is zero due to symmetry, and the variance of the real or imaginary part is 1/4.

The complex normal distribution, in its circularly symmetric form, is the complex analogue of the real normal distribution and is widely used in signal processing and communications to model noise. A complex random variable Z follows Z \sim \mathcal{CN}(\mu, \sigma^2) if its probability density function is f_Z(z) = \frac{1}{\pi \sigma^2} \exp\left( -\frac{|z - \mu|^2}{\sigma^2} \right), where \mu \in \mathbb{C} is the mean parameter and \sigma^2 > 0 is the variance parameter, representing E[|Z - \mu|^2]. This form assumes circular symmetry of the fluctuation about \mu, meaning the real and imaginary parts are uncorrelated and identically distributed.

The magnitude of a zero-mean complex normal random variable follows the Rayleigh distribution, which describes the signal envelope or amplitude in many engineering applications. Specifically, if Z \sim \mathcal{CN}(0, \sigma^2), then R = |Z| has the density f_R(r) = \frac{2r}{\sigma^2} \exp\left( -\frac{r^2}{\sigma^2} \right) \quad \text{for } r \geq 0, and f_R(r) = 0 otherwise. This distribution is parameterized by \sigma^2, the variance of the underlying complex normal. The complex normal distribution \mathcal{CN}(0, \sigma^2) corresponds to a real bivariate normal distribution for the components: if Z = X + iY, then (X, Y) follows a bivariate normal \mathcal{N}(0, (\sigma^2/2) I_2), with X and Y independent and each having variance \sigma^2/2.
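A brief numerical check of the component decomposition of \mathcal{CN}(0, \sigma^2) and the Rayleigh law of its magnitude; a sketch assuming NumPy (sigma2, the seed, and the sample size are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
sigma2 = 2.0  # variance parameter of CN(0, sigma^2)

# Z ~ CN(0, sigma^2): real and imaginary parts are independent
# N(0, sigma^2/2), as in the bivariate correspondence above.
n = 200_000
x = rng.normal(0.0, np.sqrt(sigma2 / 2), n)
y = rng.normal(0.0, np.sqrt(sigma2 / 2), n)
z = x + 1j * y

# E[|Z|^2] = sigma^2, and |Z| is Rayleigh with mean sigma * sqrt(pi) / 2.
print(np.mean(np.abs(z) ** 2))   # ~ 2.0
print(np.mean(np.abs(z)))        # ~ sqrt(2) * sqrt(pi) / 2 ~ 1.2533
```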

Distribution Functions

Cumulative Distribution Function

The cumulative distribution function (CDF) of a complex random variable Z = X + jY, where X and Y are real-valued random variables, cannot be defined using a total order on the complex plane \mathbb{C} in the same way as for real random variables, since \mathbb{C} lacks a natural ordering. Instead, the CDF is typically defined component-wise as the joint CDF of the real and imaginary parts: F_Z(z) = F_{X,Y}(\operatorname{Re}(z), \operatorname{Im}(z)) = P(X \leq \operatorname{Re}(z), Y \leq \operatorname{Im}(z)), where z \in \mathbb{C}. Alternatively, in a more general measure-theoretic framework, the distribution of Z is characterized by the probability measure \mu_Z on the Borel \sigma-algebra of \mathbb{C}, with \mu_Z(A) = P(Z \in A) for Borel sets A \subset \mathbb{C}. This component-wise CDF inherits the properties of the bivariate CDF for real random variables. It is non-decreasing in each argument: if \operatorname{Re}(z_1) \leq \operatorname{Re}(z_2) and \operatorname{Im}(z_1) \leq \operatorname{Im}(z_2), then F_Z(z_1) \leq F_Z(z_2). It is right-continuous in each component, meaning \lim_{\epsilon_r, \epsilon_i \to 0^+} F_Z(z + \epsilon_r + j\epsilon_i) = F_Z(z). Additionally, the CDF satisfies the boundary conditions F_Z(z) \to 0 as \operatorname{Re}(z) \to -\infty or \operatorname{Im}(z) \to -\infty, and F_Z(z) \to 1 as both \operatorname{Re}(z) \to \infty and \operatorname{Im}(z) \to \infty. The CDF of a complex random variable directly corresponds to the joint CDF of its real and imaginary components viewed as a bivariate real random vector (X, Y), evaluated at (\operatorname{Re}(z), \operatorname{Im}(z)).
This equivalence allows the use of standard results from multivariate probability theory, such as computing probabilities of rectangles in the complex plane via inclusion-exclusion: for x_1 < x_2 and y_1 < y_2, P(x_1 < X \leq x_2, y_1 < Y \leq y_2) = F_{X,Y}(x_2, y_2) - F_{X,Y}(x_1, y_2) - F_{X,Y}(x_2, y_1) + F_{X,Y}(x_1, y_1). The CDF uniquely determines the distribution measure \mu_Z on \mathbb{C}, as any two complex random variables with the same CDF must induce the same probability measure on the Borel sets, ensuring that all probabilities P(Z \in A) coincide for Borel A. This uniqueness follows from the corresponding property of multivariate CDFs on \mathbb{R}^2.
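The component-wise CDF can be estimated directly from samples. A minimal sketch assuming NumPy; Z uniform on the square [-1, 1]^2 is a convenient test case because its joint CDF factors:

```python
import numpy as np

rng = np.random.default_rng(2)

# Z = X + iY with X, Y independent and uniform on [-1, 1].
n = 100_000
x = rng.uniform(-1, 1, n)
y = rng.uniform(-1, 1, n)

def cdf(z):
    # F_Z(z) = P(X <= Re z, Y <= Im z), the component-wise definition.
    return np.mean((x <= z.real) & (y <= z.imag))

# Independence makes F_Z(z) = F_X(Re z) * F_Y(Im z) here.
z0 = 0.5 + 0.0j
exact = ((0.5 + 1) / 2) * ((0.0 + 1) / 2)   # 0.75 * 0.5 = 0.375
print(cdf(z0))   # ~ 0.375
```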

Probability Density Function

A complex random variable Z possesses a probability density function (PDF) if and only if the probability measure it induces on \mathbb{C}, identified with \mathbb{R}^2 via the map z \mapsto (\Re z, \Im z), is absolutely continuous with respect to the Lebesgue measure on \mathbb{R}^2. Under this condition, the real and imaginary parts X = \Re Z and Y = \Im Z admit a joint PDF f_{X,Y}(x,y) such that for any Borel set A \subset \mathbb{C}, P(Z \in A) = \iint_{A} f_{X,Y}(x,y) \, dx \, dy, where the integral is over the corresponding region in the plane. This joint PDF fully characterizes the distribution of Z, and the absolute continuity ensures no singular components with respect to Lebesgue measure. In complex notation, the PDF of Z is denoted f_Z(z) = f_{X,Y}(\Re z, \Im z), with probabilities computed via integration against the area element dx \, dy (or equivalently \frac{dz \, d\bar{z}}{2i}, though the standard Lebesgue form is used). This representation treats f_Z as a function on \mathbb{C}, but it is not generally holomorphic, as it fails to satisfy the Cauchy-Riemann equations unless the distribution has special symmetry. The non-holomorphic nature arises because the density is defined with respect to the real plane measure, not a complex one-dimensional measure. To derive the PDF of a transformed variable W = g(Z), where g: \mathbb{C} \to \mathbb{C} is a diffeomorphism (or locally invertible), apply the change-of-variables formula from multivariable probability. Treating g as a map \mathbb{R}^2 \to \mathbb{R}^2, the PDF is f_W(w) = f_Z(g^{-1}(w)) \cdot \frac{1}{|\det J_g(g^{-1}(w))|}, where J_g is the Jacobian matrix of the real transformation. If g is holomorphic, then |\det J_g(z)| = |g'(z)|^2, simplifying the adjustment factor to 1/|g'(g^{-1}(w))|^2. This framework preserves the absolute continuity under suitable transformations.
A representative example is the uniform distribution on the unit disk, where Z is supported on \{ z \in \mathbb{C} : |z| < 1 \} with PDF f_Z(z) = 1/\pi for |z| < 1 and 0 otherwise. The corresponding joint PDF is f_{X,Y}(x,y) = 1/\pi for x^2 + y^2 < 1, reflecting the constant density over the area \pi. This distribution illustrates a rotationally invariant case without singularities.
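The holomorphic change-of-variables rule can be spot-checked on the uniform disk. In this sketch (assuming NumPy; the scale a, seed, and sample size are arbitrary), g(z) = a z maps the unit disk to a disk of radius |a|, and the image stays uniform since |g'|^2 = |a|^2 is constant:

```python
import numpy as np

rng = np.random.default_rng(3)

# Uniform samples on the unit disk via rejection sampling.
n = 400_000
pts = rng.uniform(-1, 1, (n, 2))
pts = pts[pts[:, 0] ** 2 + pts[:, 1] ** 2 < 1]
z = pts[:, 0] + 1j * pts[:, 1]

# Holomorphic map g(z) = a z: real Jacobian determinant is |g'(z)|^2 = |a|^2,
# so W = g(Z) is uniform on the disk of radius |a| with density 1/(pi |a|^2).
a = 2.0 + 1.0j
w = a * z

# Uniformity check: the sub-disk |w| < |a|/2 carries a quarter of the mass.
frac = np.mean(np.abs(w) < abs(a) / 2)
print(frac)   # ~ 0.25
```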

Moments and Characteristics

Expectation

The expectation of a complex random variable Z, denoted E[Z], is defined as the complex number E[Z] = E[X] + i E[Y], where X = \Re(Z) and Y = \Im(Z) are the real and imaginary parts of Z, respectively. In measure-theoretic terms, this is expressed as the integral E[Z] = \int_{\mathbb{C}} z \, d\mu_Z(z), where \mu_Z is the probability measure on the complex plane induced by Z. The expectation exists provided that E[|Z|] < \infty, ensuring the integrals defining E[X] and E[Y] converge absolutely. A key property is the inequality |E[Z]| \leq E[|Z|], which follows from Jensen's inequality applied to the convex modulus function. The expectation operator is linear over the complex numbers: for complex constants a, b and complex random variables Z, W, E[aZ + bW] = a E[Z] + b E[W]. This linearity holds regardless of dependence between Z and W. For computational purposes, if Z is discrete with possible values \{z_k\} and corresponding probabilities \{p_k\}, then E[Z] = \sum_k z_k p_k. If Z is continuous with joint probability density function f_{X,Y}(x,y) for its real and imaginary parts, then E[Z] = \iint_{\mathbb{R}^2} (x + i y) f_{X,Y}(x,y) \, dx \, dy. This double integral can be separated into real and imaginary components: E[Z] = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} x f_{X,Y}(x,y) \, dx \, dy + i \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} y f_{X,Y}(x,y) \, dx \, dy.
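The discrete formula E[Z] = \sum_k z_k p_k applies directly to the two-point example from the Examples section; a minimal sketch assuming NumPy:

```python
import numpy as np

# P(Z = 1) = P(Z = i) = 1/2, as in the basic discrete example.
values = np.array([1.0 + 0.0j, 0.0 + 1.0j])
probs = np.array([0.5, 0.5])

# E[Z] = sum_k z_k p_k = 1/2 + i/2.
mean = np.sum(values * probs)
print(mean)   # (0.5+0.5j)

# |E[Z]| <= E[|Z|]: here sqrt(1/2) <= 1.
print(abs(mean) <= np.sum(np.abs(values) * probs))   # True
```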

Characteristic Function

The characteristic function of a complex random variable Z = X + i Y, where X and Y are real-valued random variables, is defined as the bivariate characteristic function of the vector (X, Y). Specifically, for \omega = (u, v)^\top \in \mathbb{R}^2, it is given by \phi_Z(\omega) = \mathbb{E}\left[ \exp\left( i \omega^\top \begin{pmatrix} X \\ Y \end{pmatrix} \right) \right] = \mathbb{E}\left[ \exp\left( i (u X + v Y) \right) \right]. This form directly extends the characteristic function from the univariate real case to the bivariate real case underlying the complex structure. An alternative definition, used in some contexts to facilitate analytic continuation, parameterizes the characteristic function over the complex plane: for t \in \mathbb{C}, \phi_Z(t) = \mathbb{E}\left[ \exp\left( i \operatorname{Re}(\bar{t} Z) \right) \right]. This expression coincides with the bivariate form when t = u + i v for real u, v, as \operatorname{Re}(\bar{t} Z) = u X + v Y. The complex-parameter form allows for extensions to moment-generating functions by replacing i with a complex variable in appropriate regions. The characteristic function possesses several fundamental properties. It satisfies \phi_Z(0) = 1 and |\phi_Z(\omega)| \leq 1 for all \omega; the modulus equals 1 at some \omega = (u, v) \neq 0 only when the linear combination u X + v Y is concentrated on a translate of a lattice in \mathbb{R}, the degenerate one-point distribution being a special case. It is uniformly continuous and positive semi-definite, meaning that for any finite set of points \omega_k \in \mathbb{R}^2 and coefficients c_k \in \mathbb{C}, \sum_k \sum_l c_k \overline{c_l} \phi_Z(\omega_k - \omega_l) \geq 0. In the complex-parameter form, \phi_Z(t) extends analytically to a horizontal strip of the complex plane containing the real axis when Z possesses suitable exponential moments, with the width of the strip determined by those moments. The characteristic function uniquely determines the probability law of Z, in the sense that if \phi_Z(\omega) = \phi_W(\omega) for all \omega \in \mathbb{R}^2, then Z and W have the same distribution.
This uniqueness theorem extends the classical result for real random variables to the complex setting via the underlying bivariate structure. An inversion formula allows recovery of the distribution from the characteristic function. For instance, if Z admits a joint probability density function f_{X,Y}(x,y), then f_{X,Y}(x,y) = \frac{1}{(2\pi)^2} \int_{-\infty}^\infty \int_{-\infty}^\infty \phi_Z(u,v) \exp\left( -i (u x + v y) \right) \, du \, dv, provided the integral converges absolutely. Similar formulas exist for the cumulative distribution function using the Gil-Pelaez inversion or other variants, enabling reconstruction of the full distribution of Z.
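The bivariate definition \phi_Z(u, v) = E[\exp(i(uX + vY))] is easy to approximate empirically. A sketch assuming NumPy; for Z \sim \mathcal{CN}(0, \sigma^2) the exact value \exp(-\sigma^2 (u^2 + v^2)/4) follows from the real Gaussian characteristic function applied to each component:

```python
import numpy as np

rng = np.random.default_rng(4)
sigma2 = 1.0

# Z ~ CN(0, sigma2): components are independent N(0, sigma2/2).
n = 200_000
x = rng.normal(0.0, np.sqrt(sigma2 / 2), n)
y = rng.normal(0.0, np.sqrt(sigma2 / 2), n)

def phi(u, v):
    # Empirical characteristic function phi_Z(u, v) = E[exp(i (u X + v Y))].
    return np.mean(np.exp(1j * (u * x + v * y)))

u, v = 0.7, -1.2
exact = np.exp(-sigma2 * (u ** 2 + v ** 2) / 4)
print(abs(phi(u, v) - exact))   # small
```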

Second-Order Statistics

Variance and Pseudo-Variance

For a complex random variable Z with finite second moments, the mean is defined as \mu = E[Z]. The variance of Z is given by \operatorname{Var}(Z) = E[|Z - \mu|^2] = E[(Z - \mu)(\bar{Z} - \bar{\mu})], where \bar{Z} denotes the complex conjugate of Z. This quantity is real-valued and non-negative, representing the expected squared deviation from the mean in the complex plane. An equivalent expression is \operatorname{Var}(Z) = E[|Z|^2] - |\mu|^2, which parallels the variance formula for real random variables. In addition to the variance, complex random variables admit a pseudo-variance, defined as \operatorname{PVar}(Z) = E[(Z - \mu)^2]. Unlike the variance, the pseudo-variance is generally complex-valued and captures the second-moment structure involving the non-conjugated components of Z. It measures the extent to which the distribution of Z deviates from circular symmetry around the mean. For a real-valued random variable Z, where \bar{Z} = Z, the pseudo-variance coincides with the ordinary variance, as (Z - \mu)^2 = (Z - \mu)(\bar{Z} - \bar{\mu}). A key relation arises in the context of proper complex random variables: \operatorname{PVar}(Z) = 0 holds precisely when the real and imaginary parts of the centered variable have equal variance and zero covariance, in which case Z is called proper; for complex Gaussian variables this condition is furthermore equivalent to circular symmetry about \mu.
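Variance and pseudo-variance separate cleanly in a non-circular example. A sketch assuming NumPy with independent components of unequal variance, where \operatorname{Var}(Z) = \operatorname{Var}(X) + \operatorname{Var}(Y) and \operatorname{PVar}(Z) = \operatorname{Var}(X) - \operatorname{Var}(Y):

```python
import numpy as np

rng = np.random.default_rng(5)

# Non-circular Z: Var(X) = 4, Var(Y) = 1, with X and Y independent.
n = 300_000
x = rng.normal(0.0, 2.0, n)
y = rng.normal(0.0, 1.0, n)
z = x + 1j * y

zc = z - z.mean()
var = np.mean(np.abs(zc) ** 2)   # Var(Z)  = 4 + 1 = 5
pvar = np.mean(zc ** 2)          # PVar(Z) = 4 - 1 = 3 (real here, X, Y uncorrelated)
print(var, pvar)
```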

Covariance and Pseudo-Covariance

For complex random variables, second-order statistics extend beyond individual moments to capture dependencies between pairs. The covariance provides a measure analogous to the real case but accounts for the complex structure through conjugation, while the pseudo-covariance captures complementary information without it. These quantities together fully characterize the second-order properties of non-circular complex variables. The covariance between two complex random variables Z and W with means \mu_Z = E[Z] and \mu_W = E[W] is defined as \operatorname{Cov}(Z, W) = E\left[ (Z - \mu_Z) \overline{(W - \mu_W)} \right], where the overline denotes complex conjugation. This form ensures that the covariance is Hermitian, satisfying \operatorname{Cov}(Z, W) = \overline{\operatorname{Cov}(W, Z)}. The covariance matrix for a vector of such variables is Hermitian and positive semi-definite, reflecting the inner product structure in complex Hilbert space. In contrast, the pseudo-covariance, also known as the relation or complementary covariance, is given by \operatorname{Pcov}(Z, W) = E\left[ (Z - \mu_Z) (W - \mu_W) \right]. Unlike the covariance, this quantity is not Hermitian but symmetric, \operatorname{Pcov}(Z, W) = \operatorname{Pcov}(W, Z), and it vanishes for proper (circularly symmetric) complex variables. The pseudo-covariance captures the degree of non-circularity, providing essential information that the covariance alone misses for a complete second-order description. Key properties arise in the context of dependence. If Z and W are uncorrelated in the complex sense, both \operatorname{Cov}(Z, W) = 0 and \operatorname{Pcov}(Z, W) = 0; the single condition \operatorname{Cov}(Z, W) = 0 is insufficient, unlike in the real case. For independent zero-mean variables, both quantities are zero, as independence implies E[ZW] = 0 and E[Z \overline{W}] = 0. 
When Z = W, these reduce to the variance \operatorname{Var}(Z) = \operatorname{Cov}(Z, Z) and pseudo-variance \operatorname{Pvar}(Z) = \operatorname{Pcov}(Z, Z). These complex measures relate directly to the covariances of their real and imaginary components. Let Z = X + iY and W = U + iV, where X, Y, U, V are real random variables. Then, \operatorname{Cov}(Z, W) = \operatorname{Cov}(X, U) + \operatorname{Cov}(Y, V) + i \left( \operatorname{Cov}(Y, U) - \operatorname{Cov}(X, V) \right). This decomposition highlights how the real part combines the in-phase and quadrature covariances, while the imaginary part reflects their cross-terms, underscoring the intertwined nature of real and imaginary components in complex statistics.
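The component decomposition of the complex covariance can be verified numerically; in this sketch (assuming NumPy; the mixing coefficients are arbitrary), the two sides agree to floating-point precision because the identity holds sample-by-sample:

```python
import numpy as np

rng = np.random.default_rng(6)

# Correlated real components for Z = X + iY and W = U + iV.
n = 100_000
x, y, u, v = rng.normal(size=(4, n))
u = u + 0.5 * x
v = v - 0.3 * y
z, w = x + 1j * y, u + 1j * v

def cov(a, b):
    # Sample covariance E[(a - E a) conj(b - E b)].
    return np.mean((a - a.mean()) * np.conj(b - b.mean()))

lhs = cov(z, w)
rhs = (cov(x, u) + cov(y, v)) + 1j * (cov(y, u) - cov(x, v))
print(abs(lhs - rhs))   # ~ 0 (floating-point error only)
```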

Covariance Matrix of Components

For a complex random variable Z = X + jY, where X and Y are the real and imaginary parts, respectively, the covariance matrix of the components is the 2 \times 2 real symmetric matrix \Sigma = \begin{pmatrix} \operatorname{Var}(X) & \operatorname{Cov}(X, Y) \\ \operatorname{Cov}(Y, X) & \operatorname{Var}(Y) \end{pmatrix}, which is positive semi-definite by the properties of real covariance matrices. This matrix relates to the complex variance through \operatorname{Var}(Z) = E[|Z|^2] = \operatorname{Var}(X) + \operatorname{Var}(Y), assuming zero mean for simplicity, and the trace satisfies \operatorname{Trace}(\Sigma) = \operatorname{Var}(Z). The off-diagonal entry \operatorname{Cov}(X, Y) connects to the complex structure via the imaginary part of the pseudo-variance: \operatorname{Im}(E[Z^2]) = 2 \operatorname{Cov}(X, Y). In the case of circular symmetry, the components X and Y are uncorrelated with equal variances, yielding \operatorname{Cov}(X, Y) = 0, \operatorname{Var}(X) = \operatorname{Var}(Y) = \sigma^2 / 2, and thus \Sigma = (\sigma^2 / 2) I_2, where I_2 is the 2 \times 2 identity matrix and \sigma^2 = \operatorname{Var}(Z).
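The trace identity and the relation \operatorname{Im}(E[Z^2]) = 2 \operatorname{Cov}(X, Y) can be checked on correlated components; a sketch assuming NumPy (the coefficient 0.6 and noise level are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(7)

# Zero-mean correlated components: Y = 0.6 X + independent noise.
n = 300_000
x = rng.normal(0.0, 1.0, n)
y = 0.6 * x + rng.normal(0.0, 0.8, n)
z = x + 1j * y

Sigma = np.cov(np.vstack([x, y]))   # 2x2 component covariance matrix
var_z = np.mean(np.abs(z) ** 2) - abs(z.mean()) ** 2

print(np.trace(Sigma), var_z)                      # trace(Sigma) = Var(Z)
print(np.imag(np.mean(z ** 2)), 2 * Sigma[0, 1])   # Im E[Z^2] = 2 Cov(X, Y)
```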

Special Properties

Circular Symmetry

A complex random variable Z is said to be circularly symmetric if its probability distribution is invariant under multiplication by any complex number of unit modulus. Specifically, for every real number \theta, the random variable e^{i \theta} Z has the same distribution as Z. This property implies that the distribution of Z is rotationally invariant in the complex plane. An equivalent characterization uses the polar decomposition: Z is circularly symmetric if and only if the argument \arg(Z) is uniformly distributed over [0, 2\pi) and independent of the magnitude |Z|. Circular symmetry also forces the pseudo-variance to vanish, \mathbb{E}[Z^2] = 0 in the scalar case, although this moment condition alone is weaker than full rotational invariance. For complex Gaussian random variables, zero mean together with zero pseudo-variance is both necessary and sufficient for circular symmetry. Key properties of a circularly symmetric complex random variable Z include a zero mean, \mathbb{E}[Z] = 0 (provided the mean exists), which follows directly from the rotational invariance since \mathbb{E}[Z] = e^{i \theta} \mathbb{E}[Z] for all \theta implies the mean cannot be nonzero. A canonical example of a circularly symmetric complex random variable is the complex normal distribution Z \sim \mathcal{CN}(0, \sigma^2), where the real and imaginary parts are independent and identically distributed as \mathcal{N}(0, \sigma^2 / 2). This distribution satisfies the rotational invariance and has zero mean and zero pseudo-variance.
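Rotational invariance can be checked on samples of the canonical example \mathcal{CN}(0, 1); a sketch assuming NumPy (seed, rotation angle, and sample size are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(8)

# Circularly symmetric Z ~ CN(0, 1).
n = 200_000
z = (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)

# e^{i theta} Z has the same distribution as Z.
theta = 0.9
zr = np.exp(1j * theta) * z

print(abs(np.mean(np.abs(z) ** 2) - np.mean(np.abs(zr) ** 2)))   # ~ 0
print(abs(np.mean(z ** 2)), abs(np.mean(zr ** 2)))               # both ~ 0

# arg(Z) is uniform on [0, 2 pi), so its sample mean is ~ pi.
print(np.mean(np.angle(z) % (2 * np.pi)))   # ~ pi
```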

Proper Complex Random Variables

A proper complex random variable Z is defined as one satisfying \mathbb{E}[|Z|^2] < \infty and \mathbb{E}[Z^2] = 0 (stated here for zero-mean Z; in general the conditions are applied to the centered variable Z - \mathbb{E}[Z]), where the former ensures a finite second moment and the latter indicates vanishing pseudo-variance. This condition implies that the real part X = \Re(Z) and imaginary part Y = \Im(Z) are uncorrelated, with \mathbb{E}[XY] = 0, and possess equal variances, \mathrm{Var}(X) = \mathrm{Var}(Y) = \frac{1}{2} \mathbb{E}[|Z|^2]. Circular symmetry implies that a complex random variable is proper and has zero mean. For Gaussian distributions, zero-mean properness is equivalent to circular symmetry, but in general zero-mean properness does not imply circular symmetry. A representative example is the zero-mean complex normal distribution, where Z \sim \mathcal{CN}(0, \sigma^2) has probability density function f_Z(z) = \frac{1}{\pi \sigma^2} \exp\left( -\frac{|z|^2}{\sigma^2} \right), which satisfies the properness conditions and exhibits circular symmetry.
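A standard counterexample distinguishing properness from circular symmetry is the four-point QPSK constellation: its first and second moments vanish, yet its distribution is invariant only under rotations by multiples of \pi/2, not by arbitrary angles. A minimal sketch assuming NumPy:

```python
import numpy as np

# Z uniform on {1, i, -1, -i}: proper but not circularly symmetric.
vals = np.array([1, 1j, -1, -1j])
p = np.full(4, 0.25)

mean = np.sum(vals * p)                  # E[Z]     = 0
pseudo = np.sum(vals * vals * p)         # E[Z^2]   = 0 (vanishing pseudo-variance)
second = np.sum(np.abs(vals) ** 2 * p)   # E[|Z|^2] = 1
print(mean, pseudo, second)

# A pi/4 rotation moves the support off {1, i, -1, -i},
# so e^{i pi/4} Z does not have the same distribution as Z.
```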

Cauchy-Schwarz Inequality

The Cauchy-Schwarz inequality provides a fundamental bound on the covariance between two complex random variables Z and W with finite second moments, stated as |\operatorname{Cov}(Z, W)|^2 \leq \operatorname{Var}(Z) \operatorname{Var}(W), where \operatorname{Cov}(Z, W) = \mathbb{E}[(Z - \mathbb{E}[Z])(\overline{W - \mathbb{E}[W]})] and the variances are \operatorname{Var}(Z) = \mathbb{E}[|Z - \mathbb{E}[Z]|^2], \operatorname{Var}(W) = \mathbb{E}[|W - \mathbb{E}[W]|^2]. Equality holds if and only if Z and W are linearly dependent over the complex numbers, meaning there exist complex scalars \alpha, \beta, not both zero, such that \alpha Z + \beta W = 0 almost surely. To sketch the proof, consider the centered variables \tilde{Z} = Z - \mathbb{E}[Z] and \tilde{W} = W - \mathbb{E}[W]. For any complex scalars a, b not both zero, \mathbb{E}[|a \tilde{Z} + b \tilde{W}|^2] \geq 0. Expanding the expectation gives |a|^2 \operatorname{Var}(Z) + 2 \operatorname{Re}(a \overline{b} \operatorname{Cov}(Z, W)) + |b|^2 \operatorname{Var}(W) \geq 0. This quadratic form in a and b is positive semi-definite, implying the discriminant is non-positive, which yields |\operatorname{Cov}(Z, W)|^2 \leq \operatorname{Var}(Z) \operatorname{Var}(W). A similar extension applies to the pseudo-covariance \operatorname{Pcov}(Z, W) = \mathbb{E}[(Z - \mathbb{E}[Z])(W - \mathbb{E}[W])], where |\operatorname{Pcov}(Z, W)|^2 \leq \operatorname{Var}(Z) \operatorname{Var}(W), derived analogously by applying the inequality to the non-conjugated product, though for proper (circularly symmetric) variables the pseudo-covariance vanishes and this bound is trivially satisfied. This inequality establishes key bounds in estimation theory for complex signals, such as limiting the magnitude of covariance terms when deriving Cramer-Rao-type bounds for parameter estimation in noisy environments.
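A quick empirical illustration (a sketch assuming NumPy; the mixing coefficient 0.4 is arbitrary) that the sample moments obey the same bound, since sample covariances are inner products in \mathbb{C}^n:

```python
import numpy as np

rng = np.random.default_rng(9)

# Two correlated complex random variables.
n = 50_000
z = rng.normal(size=n) + 1j * rng.normal(size=n)
w = 0.4 * z + (rng.normal(size=n) + 1j * rng.normal(size=n))

def centered(a):
    return a - a.mean()

cov_zw = np.mean(centered(z) * np.conj(centered(w)))
var_z = np.mean(np.abs(centered(z)) ** 2)
var_w = np.mean(np.abs(centered(w)) ** 2)

# |Cov(Z, W)|^2 <= Var(Z) Var(W); equality would require linear dependence.
print(abs(cov_zw) ** 2 <= var_z * var_w)   # True
```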

Multivariate Extensions

Vector Complex Random Variables

A vector complex random variable, denoted as \mathbf{Z} = (Z_1, \dots, Z_n)^T \in \mathbb{C}^n, consists of n complex random variables Z_k, each taking values in the complex plane \mathbb{C}. The joint distribution of \mathbf{Z} is specified on the space \mathbb{C}^n, which is isomorphic to \mathbb{R}^{2n} through the mapping of each complex component to its real and imaginary parts, allowing the use of real-valued statistical tools while preserving complex structure. The mean vector is defined as \boldsymbol{\mu} = E[\mathbf{Z}], providing the expected value in \mathbb{C}^n. The second-order statistics of \mathbf{Z} are characterized by the complex covariance matrix \boldsymbol{\Gamma} = E[(\mathbf{Z} - \boldsymbol{\mu})(\mathbf{Z} - \boldsymbol{\mu})^H], where ^H denotes the conjugate transpose, capturing linear dependencies under complex conjugation. Complementing this is the pseudo-covariance matrix \tilde{\boldsymbol{\Gamma}} = E[(\mathbf{Z} - \boldsymbol{\mu})(\mathbf{Z} - \boldsymbol{\mu})^T], which accounts for dependencies without conjugation and is essential for a complete description of non-circular behaviors. For the case n=1, these reduce to the scalar covariance and pseudo-variance of a single complex random variable. Key properties include the Hermitian symmetry of \boldsymbol{\Gamma}, ensuring \boldsymbol{\Gamma}^H = \boldsymbol{\Gamma}, and its positive semi-definiteness, meaning \mathbf{w}^H \boldsymbol{\Gamma} \mathbf{w} \geq 0 for any \mathbf{w} \in \mathbb{C}^n, which guarantees the validity of the covariance as a measure of dispersion. The vector \mathbf{Z} exhibits circular symmetry, also known as properness, if \tilde{\boldsymbol{\Gamma}} = \mathbf{0}, implying that the distribution is invariant under multiplication by e^{i\phi} for any real \phi, simplifying analysis in many applications.
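The matrices \boldsymbol{\Gamma} and \tilde{\boldsymbol{\Gamma}} can be estimated from samples; in this sketch (assuming NumPy; the mixing matrix A and the dimensions are arbitrary), \mathbf{Z} = A\mathbf{W} with proper Gaussian \mathbf{W} is itself proper, so the pseudo-covariance estimate is near zero:

```python
import numpy as np

rng = np.random.default_rng(10)

# p = 3 jointly Gaussian complex components, mixed by a fixed matrix A.
p, n = 3, 200_000
A = rng.normal(size=(p, p)) + 1j * rng.normal(size=(p, p))
W = (rng.normal(size=(p, n)) + 1j * rng.normal(size=(p, n))) / np.sqrt(2)
Z = A @ W   # each column is one sample of the vector Z = A W

Zc = Z - Z.mean(axis=1, keepdims=True)
Gamma = (Zc @ Zc.conj().T) / n   # covariance matrix (Hermitian, PSD)
Gamma_t = (Zc @ Zc.T) / n        # pseudo-covariance matrix (symmetric)

print(np.allclose(Gamma, Gamma.conj().T))          # True (Hermitian)
print(np.min(np.linalg.eigvalsh(Gamma)) > -1e-8)   # True (PSD up to rounding)
print(np.linalg.norm(Gamma_t))                     # ~ 0: Z is proper
```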

Complex Wishart Distribution

The complex Wishart distribution, denoted CW_p(n, \Sigma), is the distribution of the sample covariance matrix W = X X^H, where X is a p \times n matrix whose columns are independent and identically distributed as complex normal vectors \sim \mathcal{CN}_p(0, \Sigma), with p the dimension, n \geq p the degrees of freedom, and \Sigma a p \times p positive definite Hermitian covariance matrix. This distribution was introduced by Goodman in 1963 as an extension of the real Wishart distribution to handle multivariate complex Gaussian data in statistical analysis. The probability density function of W, supported on the cone of p \times p positive definite Hermitian matrices, is given by f(W) = \frac{|\det(W)|^{n-p} \exp\left(-\operatorname{tr}(\Sigma^{-1} W)\right)}{|\det(\Sigma)|^n \, \pi^{p(p-1)/2} \prod_{j=1}^p \Gamma(n - j + 1)} for W > 0, where \Gamma denotes the gamma function. This form generalizes the case for the identity covariance \Sigma = I_p, where the density simplifies to a constant times |\det(W)|^{n-p} \exp(-\operatorname{tr}(W)). Key properties include the expectation \mathbb{E}[W] = n \Sigma, which follows directly from the linearity of expectation applied to the quadratic form, and the inverse expectation \mathbb{E}[W^{-1}] = \Sigma^{-1}/(n - p) for n > p. The covariance structure of elements in W mirrors that of the real Wishart but accounts for complex conjugation, enabling its use in estimating covariance matrices within multivariate complex analysis, such as hypothesis testing and confidence regions for \Sigma.
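The moment identity \mathbb{E}[W] = n\Sigma can be verified by Monte Carlo; a sketch assuming NumPy (p, the degrees of freedom, \Sigma, and the trial count are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(11)

# W = X X^H with the columns of X i.i.d. CN_p(0, Sigma), Sigma = A A^H.
p, dof, trials = 2, 8, 20_000
A = np.array([[1.0, 0.0], [0.5, 1.0]])
Sigma = A @ A.conj().T

# Proper Gaussian G with identity covariance, colored by A.
G = (rng.normal(size=(trials, p, dof)) + 1j * rng.normal(size=(trials, p, dof))) / np.sqrt(2)
X = A @ G
Ws = X @ X.conj().transpose(0, 2, 1)   # one Wishart sample per trial

mean_W = Ws.mean(axis=0)
print(np.linalg.norm(mean_W - dof * Sigma))   # small: E[W] = n Sigma
```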

Applications

Signal Processing

In signal processing, complex random variables provide a powerful framework for modeling signals that exhibit both amplitude and phase variations, particularly in the frequency domain where noisy sinusoidal components are prevalent. The phasor representation treats a sinusoidal signal corrupted by noise as a complex random variable, encapsulating the signal's magnitude and phase shift in a single complex number, which simplifies analysis of linear time-invariant systems. A common application arises in noise modeling, where additive white Gaussian noise (AWGN) is represented using circularly symmetric complex Gaussian random variables; this assumption implies that the real and imaginary components are independent and identically distributed with zero mean and equal variance, capturing the isotropic nature of the noise in the complex plane. This model is foundational for simulating and analyzing broadband signals in domains such as communications and audio processing, as it aligns with the statistical properties of thermal noise. In detection and estimation tasks, the properness of complex random variables, meaning the pseudo-covariance is zero, enables simplified algorithms by reducing the dimensionality of the problem, such as in matched filtering, where the filter coefficients are conjugates of the signal to maximize the signal-to-noise ratio. This property, often tied to circular symmetry as a key assumption, streamlines computations without loss of optimality. Furthermore, the characteristic function of a complex random variable, viewed as the two-dimensional Fourier transform of its joint density for the real and imaginary parts, facilitates connections to frequency-domain analysis, where it relates directly to the power spectrum of associated processes through the Wiener-Khinchin theorem adapted for complex variables. This linkage is essential for deriving spectral estimates in applications such as spectral analysis and filtering.
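The matched-filtering remark can be made concrete. In this sketch (assuming NumPy; the tone frequency, length N, and noise level are arbitrary), the filter w = s/\|s\| applied to r = s + n in circular AWGN yields output SNR \|s\|^2/\sigma^2:

```python
import numpy as np

rng = np.random.default_rng(13)

# Known complex baseband signal: a unit-modulus tone of length N.
N = 64
s = np.exp(1j * 2 * np.pi * 0.1 * np.arange(N))
sigma2 = 0.5   # per-sample variance of the circular AWGN

trials = 20_000
noise = (rng.normal(size=(trials, N)) + 1j * rng.normal(size=(trials, N))) * np.sqrt(sigma2 / 2)
r = s + noise

# Matched filter: coefficients are conjugates of the (normalized) signal.
w = s / np.linalg.norm(s)
out = r @ w.conj()

# Output SNR = ||s||^2 / sigma^2 = N / sigma^2 = 128 here.
snr_out = abs(np.mean(out)) ** 2 / np.var(out)
print(snr_out)   # ~ 128
```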

Array Processing and Other Fields

Complex random variables are extensively used in array processing for modeling signals received by sensor arrays, such as in beamforming and direction-of-arrival estimation, where the array output is treated as a vector of complex Gaussian random variables to account for phase differences and correlations across sensors. In acoustics, they model sound fields in enclosures or underwater environments, enabling statistical analysis of acoustic signals with complex envelopes. Similarly, in optics, complex random variables represent fluctuating electromagnetic fields in coherence theory and statistical optics, facilitating the study of phenomena like speckle patterns and partial coherence. In oceanography, they are applied to model random ocean dynamics, such as wave propagation and underwater acoustic scattering, using complex Gaussian processes to simulate environmental variability and its impact on signal transmission.

Communications

In digital communications, complex random variables provide a natural framework for modeling modulation schemes such as quadrature amplitude modulation (QAM) and phase-shift keying (PSK), where transmitted symbols are points in the complex plane. The symbol s is treated as a complex random variable with real and imaginary components representing the in-phase and quadrature amplitudes, respectively, allowing for the joint analysis of both dimensions. Additive channel noise is modeled as a circularly symmetric complex Gaussian random variable z \sim \mathcal{CN}(0, \sigma^2), yielding the received signal y = s + z. This approach facilitates the derivation of error performance metrics, such as symbol error probability, for constellations like 16-QAM or QPSK, where the transmitted signals exhibit circular symmetry due to their phase distribution. Channel models in wireless systems frequently utilize complex random variables to capture fading phenomena. In Rayleigh fading scenarios, prevalent in urban non-line-of-sight environments, the channel coefficient h is represented as a zero-mean unit-variance complex Gaussian random variable h \sim \mathcal{CN}(0, 1), resulting in the envelope |h| following a Rayleigh distribution. The received signal then becomes y = h x + n, where x is the transmitted symbol and n is the additive noise, enabling statistical characterization of signal attenuation due to multipath propagation. This model underpins performance evaluations for various modulation schemes over fading channels, emphasizing the role of the complex envelope in predicting outage probabilities and diversity orders. The application of complex random variables extends to information-theoretic analyses in communications, particularly through the mutual information I(X; Y) for complex-valued channels. For the additive white Gaussian noise (AWGN) channel Y = X + Z with Z \sim \mathcal{CN}(0, N_0), the mutual information quantifies the reduction in uncertainty about the input X given the output Y, and it is maximized when X is circularly symmetric complex Gaussian, achieving the capacity C = \log_2 \left(1 + \frac{P}{N_0}\right) bits per channel use, where P is the average power constraint.
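The y = s + z model above can be exercised directly. This sketch (the 10 dB SNR and sample count are illustrative choices) simulates Gray-mapped QPSK over circularly symmetric complex Gaussian noise and compares the empirical symbol error rate with the exact per-dimension expression 1 - (1 - Q(sqrt(Es/N0)))^2:

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(2)

def q(x):
    """Gaussian tail function Q(x)."""
    return 0.5 * erfc(x / sqrt(2))

snr_db = 10.0
snr = 10 ** (snr_db / 10)      # Es / N0 with unit symbol energy Es = 1
sigma2 = 1 / snr               # complex noise variance N0

n = 500_000
# Gray-mapped QPSK symbols (+-1 +- 1j)/sqrt(2) chosen uniformly.
bits = rng.integers(0, 2, size=(n, 2))
s = ((1 - 2 * bits[:, 0]) + 1j * (1 - 2 * bits[:, 1])) / np.sqrt(2)

# Circularly symmetric complex Gaussian noise z ~ CN(0, sigma2).
z = np.sqrt(sigma2 / 2) * (rng.normal(size=n) + 1j * rng.normal(size=n))
y = s + z

# Minimum-distance detection = independent sign decisions per quadrature.
s_hat = (np.sign(y.real) + 1j * np.sign(y.imag)) / np.sqrt(2)
ser_sim = np.mean(s_hat != s)

p = q(np.sqrt(snr))                # per-dimension error probability
ser_theory = 1 - (1 - p) ** 2      # exact QPSK symbol error probability
print(f"simulated SER = {ser_sim:.4f}, theory = {ser_theory:.4f}")
```

Because the noise is proper, the in-phase and quadrature decisions decouple, which is exactly the dimensionality reduction the properness discussion points to.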
In fading channels, the ergodic capacity involves averaging I(X; Y) over the complex Gaussian channel distribution, providing bounds on reliable transmission rates. In multiple-input multiple-output (MIMO) systems, complex random variables model the channel as a matrix \mathbf{H} with independent and identically distributed entries h_{ij} \sim \mathcal{CN}(0, 1), representing the complex gains between transmit and receive antennas. The system equation is \mathbf{y} = \mathbf{H} \mathbf{x} + \mathbf{n}, where \mathbf{y} and \mathbf{x} are the receive and transmit vectors, and \mathbf{n} \sim \mathcal{CN}(\mathbf{0}, N_0 \mathbf{I}). This formulation supports capacity analysis, where the ergodic capacity is the expectation of the instantaneous capacity, C = \mathbb{E} \left[ \log_2 \det \left( \mathbf{I} + \frac{P}{n_t N_0} \mathbf{H} \mathbf{H}^H \right) \right], with n_t transmit antennas, demonstrating how channel randomness enables multiplexing gains proportional to the minimum of the numbers of transmit and receive antennas.
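The ergodic-capacity expectation can be estimated by Monte Carlo. This sketch assumes a hypothetical 2x2 system at 10 dB SNR (values chosen only for illustration) and compares the result against the single-antenna AWGN capacity \log_2(1 + P/N_0):

```python
import numpy as np

rng = np.random.default_rng(3)
nt, nr = 2, 2          # transmit / receive antennas (illustrative 2x2 system)
snr = 10.0             # P / N0, i.e. 10 dB
trials = 20_000

caps = np.empty(trials)
for t in range(trials):
    # i.i.d. Rayleigh-fading channel matrix: entries h_ij ~ CN(0, 1).
    H = (rng.normal(size=(nr, nt)) + 1j * rng.normal(size=(nr, nt))) / np.sqrt(2)
    # Instantaneous capacity log2 det(I + (P / (nt N0)) H H^H).
    M = np.eye(nr) + (snr / nt) * H @ H.conj().T
    caps[t] = np.log2(np.real(np.linalg.det(M)))

siso = np.log2(1 + snr)    # AWGN capacity of a single-antenna link
print(f"ergodic 2x2 MIMO capacity = {caps.mean():.2f} bit/use")
print(f"SISO AWGN capacity        = {siso:.2f} bit/use")
```

The averaged log-determinant exceeds the single-antenna figure, illustrating the multiplexing gain that the random channel matrix provides.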
