
Complex normal distribution

The complex normal distribution, also known as the complex Gaussian distribution, is a probability distribution defined for complex-valued random variables or vectors, extending the multivariate normal distribution from the real to the complex domain by considering the joint normality of the real and imaginary parts. A random vector Z \in \mathbb{C}^n follows a complex normal distribution, denoted Z \sim \mathcal{CN}_n(\mu, \Gamma, C), where \mu = \mathbb{E}[Z] is the mean vector, \Gamma = \mathbb{E}[(Z - \mu)(Z - \mu)^*] is the Hermitian covariance matrix (positive semi-definite), and C = \mathbb{E}[(Z - \mu)(Z - \mu)^T] is the pseudo-covariance matrix. This distribution is fully characterized by these three parameters, as the real and imaginary components \begin{pmatrix} \operatorname{Re} Z \\ \operatorname{Im} Z \end{pmatrix} follow a 2n-dimensional real multivariate normal distribution \mathcal{N}_{2n} \left( \begin{pmatrix} \operatorname{Re} \mu \\ \operatorname{Im} \mu \end{pmatrix}, \frac{1}{2} \begin{pmatrix} \operatorname{Re}(\Gamma + C) & \operatorname{Im}(-\Gamma + C) \\ \operatorname{Im}(\Gamma + C) & \operatorname{Re}(\Gamma - C) \end{pmatrix} \right). For the scalar case (n=1), the probability density function of Z \sim \mathcal{CN}(\mu, \gamma, c) simplifies to a form derived from the bivariate real normal, but in vector settings, it is given by f_Z(z) = \frac{1}{\pi^n \det(\Gamma)} \exp\left( -(z - \mu)^* \Gamma^{-1} (z - \mu) \right) when the distribution is proper (i.e., C = 0), which implies circular symmetry if \mu = 0. Key properties include forming a full exponential family, closure under linear transformations, and the need for both \Gamma and C for complete specification, as the pseudo-covariance captures dependencies between Z and its conjugate that are absent in real normals. 
The complex normal distribution is fundamental in fields such as signal processing, wireless communications, and statistical analysis of complex data like neural oscillations or phase-amplitude couplings, where circular symmetry often simplifies models for rotationally invariant processes. It generalizes to complex random fields and processes, uniquely determined by mean, covariance, and pseudo-covariance functions, enabling applications in areas such as array signal processing and multivariate analysis.

Definitions

Scalar Complex Gaussian Random Variable

A scalar complex Gaussian random variable, often denoted as Z, is a complex-valued random variable defined as Z = X + i Y, where X and Y represent the real and imaginary parts, respectively, and both are real-valued random variables that are jointly Gaussian. The joint distribution of (X, Y) follows a bivariate normal distribution, denoted (X, Y) \sim \mathcal{N}_2(\boldsymbol{\mu}, \boldsymbol{\Sigma}), where the mean vector is \boldsymbol{\mu} = \begin{pmatrix} \mu_X \\ \mu_Y \end{pmatrix} with \mu_X = \mathbb{E}[X] and \mu_Y = \mathbb{E}[Y], and the covariance matrix is \boldsymbol{\Sigma} = \begin{pmatrix} \sigma_{XX} & \sigma_{XY} \\ \sigma_{YX} & \sigma_{YY} \end{pmatrix}, with \sigma_{XX} = \mathrm{Var}(X), \sigma_{YY} = \mathrm{Var}(Y), and \sigma_{XY} = \sigma_{YX} = \mathrm{Cov}(X, Y). This structure ensures that the complex distribution inherits the properties of the underlying real bivariate normal, providing a foundation for modeling complex-valued data in probabilistic frameworks. For the distribution to be non-degenerate, the covariance matrix \boldsymbol{\Sigma} must be positive definite, meaning its eigenvalues are positive or, equivalently for a covariance matrix, \det(\boldsymbol{\Sigma}) > 0, which prevents the random variable from being confined to a lower-dimensional subset of the complex plane. The concept of the scalar complex Gaussian random variable was introduced by N. R. Goodman in 1963 as part of a broader framework for multivariate complex Gaussian distributions, primarily motivated by applications in signal processing, where complex representations simplify the analysis of phenomena like noise in communication systems.
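As a numerical sketch of this definition (the parameter values below are illustrative, not taken from the text), one can draw Z = X + iY from a bivariate normal and check that the empirical complex parameters match the formulas \Gamma = \Sigma_{11} + \Sigma_{22} and \Pi = (\Sigma_{11} - \Sigma_{22}) + 2i\Sigma_{12} stated later in the article:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative real parameters of (X, Y); Sigma must be positive definite.
mu_r = np.array([1.0, -0.5])                # (mu_X, mu_Y)
Sigma = np.array([[1.0, 0.3],
                  [0.3, 0.5]])

xy = rng.multivariate_normal(mu_r, Sigma, size=200_000)
Z = xy[:, 0] + 1j * xy[:, 1]                # Z = X + iY

# Complex-domain parameters implied by (mu_r, Sigma):
mu = mu_r[0] + 1j * mu_r[1]                           # mean E[Z]
Gamma = Sigma[0, 0] + Sigma[1, 1]                     # covariance E[|Z - mu|^2]
Pi = (Sigma[0, 0] - Sigma[1, 1]) + 2j * Sigma[0, 1]   # pseudo-covariance E[(Z - mu)^2]

# Monte Carlo estimates agree with the formulas up to sampling error.
Zc = Z - Z.mean()
Gamma_hat = np.mean(np.abs(Zc) ** 2)
Pi_hat = np.mean(Zc ** 2)
```

The same construction works for any positive definite \Sigma; degenerate choices (zero determinant) would confine Z to a line in the complex plane.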

Standard Scalar Complex Gaussian Random Variable

The standard scalar complex Gaussian random variable, denoted Z \sim \mathcal{CN}(0,1), is defined as a complex-valued random variable with zero mean, E[Z] = 0, unit variance, E[ZZ^*] = 1, and vanishing pseudo-covariance, E[ZZ] = 0, ensuring circular symmetry. This normalization serves as a foundational building block for more general complex Gaussian distributions. A canonical representation of Z is given by Z = \frac{U + iV}{\sqrt{2}}, where U and V are independent standard real Gaussian random variables, each distributed as N(0,1). This form arises from the joint bivariate normal distribution of the real and imaginary parts, each with variance 1/2 and zero correlation, which yields unit complex variance while keeping the parts independent. The circular symmetry of Z implies that its distribution is rotationally invariant in the complex plane: for any real angle \theta, the rotated variable e^{i\theta} Z follows the same \mathcal{CN}(0,1) distribution as Z, rendering the standard Gaussian unique up to phase rotation. A general scalar complex Gaussian arises from this standard form via a widely linear transformation plus a shift, aZ + b\overline{Z} + \mu, which can introduce a non-zero mean and pseudo-variance; a purely linear map \alpha Z + \mu suffices only in the proper case.
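The canonical construction above can be sketched numerically (sample size and rotation angle are arbitrary choices): generate Z = (U + iV)/\sqrt{2} and verify the defining moments, and that those moments are unchanged after a phase rotation.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
U, V = rng.standard_normal(n), rng.standard_normal(n)
Z = (U + 1j * V) / np.sqrt(2)       # samples of CN(0, 1)

var = np.mean(np.abs(Z) ** 2)       # E[Z Z*]  ~ 1
pseudo = np.mean(Z ** 2)            # E[Z Z]   ~ 0

W = np.exp(1j * 0.7) * Z            # rotated copy e^{i theta} Z
var_rot = np.mean(np.abs(W) ** 2)   # unchanged by rotation
pseudo_rot = np.mean(W ** 2)        # still ~ 0
```

A full distributional test of rotation invariance would compare histograms; matching first and second moments is the quick second-order check.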

Vector Complex Gaussian Random Variable

A complex Gaussian random vector generalizes the scalar case to n > 1 dimensions, where \mathbf{Z} = [Z_1, \dots, Z_n]^T with each Z_k a scalar complex Gaussian, and the joint distribution is defined such that the real representation [\Re(\mathbf{Z})^T, \Im(\mathbf{Z})^T]^T follows a 2n-dimensional multivariate real Gaussian distribution with a 2n-dimensional mean vector and a 2n \times 2n covariance matrix. This equivalence highlights that n complex dimensions correspond to 2n real dimensions, allowing the full machinery of real multivariate Gaussians to characterize the complex distribution. For the distribution to be non-degenerate, the underlying real covariance matrix must be positive definite, ensuring the probability density is well-defined over the complex space. This formulation encompasses both proper (circularly symmetric) and improper complex Gaussian vectors, where the pseudo-covariance matrix E[(\mathbf{Z} - \mu)(\mathbf{Z} - \mu)^T] may be non-zero, capturing correlations between real and imaginary parts that violate circular symmetry; earlier treatments often restricted attention to the proper case, overlooking these improper distributions.

Standard Vector Complex Gaussian Random Variable

The standard vector complex Gaussian random variable, denoted \mathbf{Z} \sim \mathcal{CN}_n(\mathbf{0}, \mathbf{I}_n), is an n-dimensional zero-mean random vector whose components are independent and identically distributed as standard scalar complex Gaussians. This distribution is circularly symmetric by construction, satisfying E[\mathbf{Z}] = \mathbf{0}, E[\mathbf{ZZ}^H] = \mathbf{I}_n, and E[\mathbf{ZZ}^T] = \mathbf{0}. In its real-valued representation, \mathbf{Z} = \mathbf{X} + i\mathbf{Y} with \mathbf{X}, \mathbf{Y} \in \mathbb{R}^n, the stacked vector \begin{pmatrix} \mathbf{X} \\ \mathbf{Y} \end{pmatrix} follows a 2n-dimensional real normal distribution consisting of 2n independent \mathcal{N}(0, 1/2) components. This ensures that each component Z_k has unit complex variance, E[|Z_k|^2] = 1, while the real and imaginary parts each have variance 1/2. The components of \mathbf{Z} are pairwise uncorrelated, with E[Z_k \overline{Z_l}] = \delta_{kl}, and also uncorrelated with their own conjugates across all indices, E[Z_k Z_l] = 0. These orthogonality properties arise directly from the identity covariance and zero pseudo-covariance. Conventions for scaling can vary across texts; for instance, some define the standard form with real and imaginary variances of 1, leading to complex variance 2, but the unit complex variance convention adopted here aligns with common usage in signal processing and statistics. A general vector complex Gaussian random variable can be expressed as a widely linear transformation of this standard form (a linear map of \mathbf{Z} and \overline{\mathbf{Z}}) plus a shift; a purely linear map \mathbf{A}\mathbf{Z} + \boldsymbol{\mu} suffices only in the proper case.
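A brief numerical sketch of the 2n real-component construction (dimension and sample size are illustrative): stack independent \mathcal{N}(0, 1/2) real and imaginary parts and check the second-order identities E[\mathbf{ZZ}^H] = \mathbf{I}_n and E[\mathbf{ZZ}^T] = \mathbf{0}.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 3, 100_000
X = rng.normal(0.0, np.sqrt(0.5), size=(m, n))   # real parts, variance 1/2
Y = rng.normal(0.0, np.sqrt(0.5), size=(m, n))   # imaginary parts, variance 1/2
Z = X + 1j * Y                                   # each row ~ CN_n(0, I_n)

cov = Z.T @ Z.conj() / m             # estimates E[Z Z^H], should approach I_n
pseudo = Z.T @ Z / m                 # estimates E[Z Z^T], should approach 0
err_cov = np.max(np.abs(cov - np.eye(n)))
err_pseudo = np.max(np.abs(pseudo))
```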

Statistical Parameters

Mean Vector

The mean vector of a complex normal random vector Z \in \mathbb{C}^p is defined as the expectation \mu = E[Z], where \mu \in \mathbb{C}^p serves as the location parameter in the distribution Z \sim \mathrm{CN}_p(\mu, \Gamma, C), with \Gamma the covariance matrix and C the relation (pseudo-covariance) matrix. This definition extends naturally to the scalar case, where a complex normal random variable z \in \mathbb{C} has mean \mu = E[z]. The expectation operator in the complex domain inherits the linearity property from standard probability theory: for complex scalars a, b \in \mathbb{C} and complex normal random vectors Z, W, E[aZ + bW] = a \mu_Z + b \mu_W. This linearity facilitates affine transformations and linear combinations within the class of complex normal distributions. A complex normal distribution is termed central if its mean vector is zero, i.e., \mu = 0; in this case, the distribution simplifies to Z \sim \mathrm{CN}_p(0, \Gamma, C). For a general non-central distribution, centering is achieved by subtracting the mean vector, yielding Z - \mu \sim \mathrm{CN}_p(0, \Gamma, C), which preserves the second-order structure. Given n independent and identically distributed observations Z_1, \dots, Z_n from \mathrm{CN}_p(\mu, \Gamma, C), the sample mean \hat{\mu} = \frac{1}{n} \sum_{i=1}^n Z_i is the maximum likelihood estimator of \mu and is unbiased, satisfying E[\hat{\mu}] = \mu. The covariance of this estimator is \mathrm{Cov}(\hat{\mu}) = \Gamma / n, reflecting the scaling of the population covariance by the inverse sample size due to the independence of the observations.
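The unbiasedness and the \Gamma/n covariance of the sample mean can be sketched numerically in the scalar circular case (all parameter and replication values below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
mu, Gamma = 2.0 - 1.0j, 1.0                 # scalar circular case, Pi = 0
n, R = 25, 4_000                            # n draws per experiment, R experiments

# Each row is one experiment of n iid draws from CN(mu, Gamma, 0).
Z = mu + np.sqrt(Gamma / 2) * (rng.standard_normal((R, n))
                               + 1j * rng.standard_normal((R, n)))
mu_hat = Z.mean(axis=1)                     # R independent sample means

bias = np.abs(mu_hat.mean() - mu)           # ~ 0 (unbiased estimator)
var_hat = np.mean(np.abs(mu_hat - mu) ** 2) # ~ Gamma / n = 0.04
```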

Covariance Matrix

The covariance matrix of a complex random vector \mathbf{Z} \in \mathbb{C}^n with mean vector \boldsymbol{\mu} is defined as \boldsymbol{\Gamma} = \mathbb{E}[(\mathbf{Z} - \boldsymbol{\mu})(\mathbf{Z} - \boldsymbol{\mu})^H], where ^H denotes the conjugate transpose (Hermitian transpose). This matrix \boldsymbol{\Gamma} is Hermitian, meaning \boldsymbol{\Gamma}^H = \boldsymbol{\Gamma}, and positive semi-definite, ensuring that all eigenvalues are non-negative and that the quadratic form \mathbf{w}^H \boldsymbol{\Gamma} \mathbf{w} \geq 0 for any \mathbf{w} \in \mathbb{C}^n. In the scalar case, where Z \in \mathbb{C} is a complex normal random variable with mean \mu, the covariance reduces to the scalar \Gamma = \mathbb{E}[|Z - \mu|^2], which is a real, non-negative number representing the variance of Z. This scalar covariance captures the total spread in the complex plane around the mean. For the vector case, the trace of \boldsymbol{\Gamma}, given by \operatorname{Tr}(\boldsymbol{\Gamma}) = \sum_{i=1}^n \Gamma_{ii}, equals the total variance \mathbb{E}[\|\mathbf{Z} - \boldsymbol{\mu}\|^2], as the diagonal elements \Gamma_{ii} are real and represent the individual component variances. The eigenvalues of \boldsymbol{\Gamma} provide the principal variances along the directions of the eigenvectors, quantifying the dispersion in the eigenspaces of the distribution. The complex covariance matrix relates to the real-valued representation of \mathbf{Z} through a block structure in the covariance of the real and imaginary parts. Specifically, letting \mathbf{Z} = \mathbf{X} + j \mathbf{Y} with \mathbf{X}, \mathbf{Y} \in \mathbb{R}^n, the 2n \times 2n real covariance matrix of [\mathbf{X}^T, \mathbf{Y}^T]^T takes the block form \begin{pmatrix} \operatorname{Var}(\mathbf{X}) & \operatorname{Cov}(\mathbf{X}, \mathbf{Y}) \\ \operatorname{Cov}(\mathbf{Y}, \mathbf{X}) & \operatorname{Var}(\mathbf{Y}) \end{pmatrix}, where \operatorname{Cov}(\mathbf{Y}, \mathbf{X}) = \operatorname{Cov}(\mathbf{X}, \mathbf{Y})^T. 
The complex covariance is then \boldsymbol{\Gamma} = \operatorname{Var}(\mathbf{X}) + \operatorname{Var}(\mathbf{Y}) + j \left( \operatorname{Cov}(\mathbf{Y}, \mathbf{X}) - \operatorname{Cov}(\mathbf{X}, \mathbf{Y}) \right); the cross-covariance blocks are zero if there is no linear correlation between the real and imaginary parts. This structure highlights how the Hermitian form of \boldsymbol{\Gamma} generalizes the symmetric real covariance to account for phase relationships in the complex domain.
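The block relation \boldsymbol{\Gamma} = \operatorname{Var}(\mathbf{X}) + \operatorname{Var}(\mathbf{Y}) + j(\operatorname{Cov}(\mathbf{Y}, \mathbf{X}) - \operatorname{Cov}(\mathbf{X}, \mathbf{Y})) is an algebraic identity, so it holds exactly (to rounding) when both sides are computed from the same sample moments. A minimal sketch with arbitrary correlated data:

```python
import numpy as np

rng = np.random.default_rng(4)
n, m = 2, 500
X = rng.standard_normal((m, n))
Y = 0.5 * X + rng.standard_normal((m, n))     # correlated real/imaginary parts
Z = X + 1j * Y

Xc, Yc, Zc = X - X.mean(0), Y - Y.mean(0), Z - Z.mean(0)
Gamma = Zc.T @ Zc.conj() / m                  # sample E[(Z - mu)(Z - mu)^H]

VarX = Xc.T @ Xc / m                          # sample real covariance blocks
VarY = Yc.T @ Yc / m
CovXY = Xc.T @ Yc / m
CovYX = Yc.T @ Xc / m
Gamma_blocks = VarX + VarY + 1j * (CovYX - CovXY)

err = np.max(np.abs(Gamma - Gamma_blocks))    # exact up to floating point
```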

Relation to Pseudo-Covariance Matrix

The pseudo-covariance matrix, often denoted as \Pi, provides a complementary measure to the covariance matrix for characterizing the second-order statistics of complex random vectors. For a scalar Z with mean \mu, the pseudo-covariance is defined as \Pi = \mathbb{E}[(Z - \mu)^2]. In the vector case, for a random vector \mathbf{Z} \in \mathbb{C}^n with mean vector \boldsymbol{\mu}, it is given by the matrix \Pi = \mathbb{E}[(\mathbf{Z} - \boldsymbol{\mu})(\mathbf{Z} - \boldsymbol{\mu})^T]. A complex normal distribution is fully specified by its mean vector \boldsymbol{\mu}, the Hermitian positive-definite covariance matrix \Gamma = \mathbb{E}[(\mathbf{Z} - \boldsymbol{\mu})(\mathbf{Z} - \boldsymbol{\mu})^H], and the pseudo-covariance matrix \Pi. Together, these parameters capture the complete second-moment structure, with \Gamma describing the Hermitian part of the correlations and \Pi accounting for the complementary non-Hermitian dependencies. When \Pi \neq 0, the complex normal distribution is termed improper or non-circular, indicating that the real and imaginary components exhibit correlations not captured by the covariance alone. In contrast, proper complex normals have \Pi = 0, simplifying the second-order description to \boldsymbol{\mu} and \Gamma only. Any improper complex normal random vector can be whitened to a standard complex normal form through a linear transformation derived from Cholesky-like decompositions of the augmented matrix formed by \Gamma and \Pi, which jointly ensure uncorrelated components with unit variance. This involves constructing the block matrix \begin{pmatrix} \Gamma & \Pi \\ \Pi^* & \Gamma^* \end{pmatrix} and applying its Cholesky factorization to decorrelate the real and imaginary parts in the augmented representation. Early treatments of complex normals in fields like signal processing often overlooked the pseudo-covariance matrix, assuming circularity and producing incomplete models for non-circular signals.
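The pair (\Gamma, \Pi) and the real block covariance of [\mathbf{X}^T, \mathbf{Y}^T]^T carry the same information. The round trip below sketches this algebraically, using the block formulas \Sigma_{11} = \frac{1}{2}\Re(\Gamma + \Pi), \Sigma_{22} = \frac{1}{2}\Re(\Gamma - \Pi), \Sigma_{12} = \frac{1}{2}\Im(\Pi - \Gamma), \Sigma_{21} = \frac{1}{2}\Im(\Gamma + \Pi) given later in the article (the matrices generated here are illustrative; a valid distribution additionally requires the augmented covariance to be positive semi-definite):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 3
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
C = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Gamma = A @ A.conj().T + n * np.eye(n)   # Hermitian positive definite
Pi = C + C.T                             # complex symmetric

# Real block covariance of [X; Y] built from (Gamma, Pi):
S11 = 0.5 * np.real(Gamma + Pi)
S22 = 0.5 * np.real(Gamma - Pi)
S12 = 0.5 * np.imag(Pi - Gamma)
S21 = 0.5 * np.imag(Gamma + Pi)

# Recover both complex parameters from the real blocks:
Gamma_rt = (S11 + S22) + 1j * (S21 - S12)
Pi_rt = (S11 - S22) + 1j * (S21 + S12)
err = max(np.max(np.abs(Gamma_rt - Gamma)), np.max(np.abs(Pi_rt - Pi)))
```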

Density Functions

Scalar Case Density

The scalar complex normal distribution is defined for a complex random variable Z \in \mathbb{C} whose real and imaginary parts, X = \Re(Z) and Y = \Im(Z), form a bivariate real normal random vector with mean vector \boldsymbol{\mu}_r = \begin{pmatrix} \Re(\mu) \\ \Im(\mu) \end{pmatrix} and positive definite covariance matrix \Sigma \in \mathbb{R}^{2 \times 2}. The probability density function of Z, with respect to the Lebesgue measure on \mathbb{R}^2 identified with the complex plane via d\Re(z) \, d\Im(z), is the standard bivariate normal density: f(z) = \frac{1}{2\pi \sqrt{\det \Sigma}} \exp\left( -\frac{1}{2} \begin{pmatrix} \Re(z - \mu) \\ \Im(z - \mu) \end{pmatrix}^T \Sigma^{-1} \begin{pmatrix} \Re(z - \mu) \\ \Im(z - \mu) \end{pmatrix} \right). This derivation follows directly from the definition of the multivariate normal distribution applied to the two-dimensional real vector (X, Y)^T, where the exponent represents the Mahalanobis distance scaled by the inverse covariance, and the normalization constant ensures the integral over the complex plane equals 1. The distribution is proper only if the pseudo-covariance vanishes, but in general, \Sigma encodes both the covariance and pseudo-covariance effects. The normalization constant arises from the Gaussian integral property: for the standard bivariate case with identity covariance, \int_{\mathbb{R}^2} \frac{1}{2\pi} \exp\left( -\frac{1}{2} \mathbf{r}^T \mathbf{r} \right) d\mathbf{r} = 1, which generalizes via the transformation determinant \sqrt{\det \Sigma}. Singularity is avoided when \det \Sigma > 0, ensuring \Sigma is invertible and the density is strictly positive everywhere without collapsing to a lower-dimensional support. In terms of complex parameters, let \mu \in \mathbb{C} be the mean, \Gamma = E[(Z - \mu)(Z - \mu)^*] > 0 the (scalar) covariance, and \Pi = E[(Z - \mu)^2] \in \mathbb{C} the pseudo-covariance. 
These relate to \Sigma via \Sigma = \frac{1}{2} \begin{pmatrix} \Gamma + \Re(\Pi) & \Im(\Pi) \\ \Im(\Pi) & \Gamma - \Re(\Pi) \end{pmatrix}, yielding \det \Sigma = \frac{1}{4} (\Gamma^2 - |\Pi|^2) and thus \sqrt{\det \Sigma} = \frac{1}{2} \sqrt{\Gamma^2 - |\Pi|^2}. The normalization constant simplifies to \frac{1}{\pi \sqrt{\Gamma^2 - |\Pi|^2}}, with the non-degeneracy condition \Gamma > |\Pi|. The exponent can be expressed purely in complex notation as -\frac{\Gamma |z - \mu|^2 - \Re \left[ \overline{\Pi} (z - \mu)^2 \right]}{\Gamma^2 - |\Pi|^2}, obtained by substituting the expressions for x^2, y^2, and xy in terms of |z - \mu|^2 and (z - \mu)^2, then simplifying the resulting quadratic form. This form reveals how non-zero \Pi distorts the circular contours of the proper case (\Pi = 0), where the density reduces to \frac{1}{\pi \Gamma} \exp\left( -\frac{|z - \mu|^2}{\Gamma} \right).
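Since the complex-parameter density and the bivariate real density describe the same distribution, they must agree pointwise. A sketch with illustrative values of (\mu, \Gamma, \Pi) satisfying \Gamma > |\Pi|:

```python
import numpy as np

mu, Gamma, Pi = 0.4 + 0.2j, 2.0, 0.5 + 0.8j     # illustrative; Gamma > |Pi|
assert Gamma > abs(Pi)                          # non-degeneracy condition

def f_complex(z):
    # Density in complex notation with exponent -(G|d|^2 - Re[conj(Pi) d^2]) / (G^2 - |Pi|^2).
    d = z - mu
    num = Gamma * abs(d) ** 2 - (np.conj(Pi) * d ** 2).real
    den = Gamma ** 2 - abs(Pi) ** 2
    return np.exp(-num / den) / (np.pi * np.sqrt(den))

# Underlying real 2x2 covariance of (Re Z, Im Z):
Sigma = 0.5 * np.array([[Gamma + Pi.real, Pi.imag],
                        [Pi.imag, Gamma - Pi.real]])
Sinv = np.linalg.inv(Sigma)

def f_real(z):
    r = np.array([(z - mu).real, (z - mu).imag])
    return np.exp(-0.5 * r @ Sinv @ r) / (2 * np.pi * np.sqrt(np.linalg.det(Sigma)))

pts = [0.1 + 0.3j, -1.2 + 0.5j, 0.7 - 0.9j]
max_err = max(abs(f_complex(z) - f_real(z)) for z in pts)
```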

Vector Case Density

The vector complex normal distribution generalizes the scalar case to \mathbf{z} \in \mathbb{C}^n, where the real and imaginary parts form a jointly normal random vector in \mathbb{R}^{2n}. The full characterization requires the mean vector \boldsymbol{\mu} \in \mathbb{C}^n, the Hermitian covariance matrix \Gamma = E[(\mathbf{z} - \boldsymbol{\mu})(\mathbf{z} - \boldsymbol{\mu})^H], and the pseudo-covariance matrix \Pi = E[(\mathbf{z} - \boldsymbol{\mu})(\mathbf{z} - \boldsymbol{\mu})^T]. When \Pi = \mathbf{0}, the distribution is proper (circularly symmetric), and the density simplifies to a closed form with respect to the Lebesgue measure on \mathbb{C}^n \cong \mathbb{R}^{2n}: f(\mathbf{z}) = \frac{1}{\pi^n \det(\Gamma)} \exp\left( - (\mathbf{z} - \boldsymbol{\mu})^H \Gamma^{-1} (\mathbf{z} - \boldsymbol{\mu}) \right). This form arises because the underlying 2n-dimensional real Gaussian has a specific block covariance structure that aligns with the complex representation, and the \pi^n normalization ensures the integral over \mathbb{C}^n equals 1. In the general case with \Pi \neq \mathbf{0} (improper distribution), no such simple complex-form density exists; instead, the density is expressed via the equivalent 2n-dimensional real normal density for \mathbf{w} = [\Re(\mathbf{z})^T, \Im(\mathbf{z})^T]^T \in \mathbb{R}^{2n}, with mean [\Re(\boldsymbol{\mu})^T, \Im(\boldsymbol{\mu})^T]^T and covariance matrix \Sigma of block form \Sigma = \begin{bmatrix} \Sigma_{11} & \Sigma_{12} \\ \Sigma_{12}^T & \Sigma_{22} \end{bmatrix}, where \Sigma_{11} = \frac{1}{2} \Re(\Gamma + \Pi), \Sigma_{22} = \frac{1}{2} \Re(\Gamma - \Pi), and \Sigma_{12} = \frac{1}{2} \Im(\Pi - \Gamma). 
The density is then f(\mathbf{x}, \mathbf{y}) = \frac{1}{(2\pi)^n \sqrt{\det(\Sigma)}} \exp\left( -\frac{1}{2} \begin{bmatrix} \mathbf{x} - \Re(\boldsymbol{\mu}) \\ \mathbf{y} - \Im(\boldsymbol{\mu}) \end{bmatrix}^T \Sigma^{-1} \begin{bmatrix} \mathbf{x} - \Re(\boldsymbol{\mu}) \\ \mathbf{y} - \Im(\boldsymbol{\mu}) \end{bmatrix} \right), with respect to the Lebesgue measure d\mathbf{x} \, d\mathbf{y} on \mathbb{R}^{2n}, where \mathbf{z} = \mathbf{x} + i \mathbf{y}. The integral over \mathbb{C}^n uses the identification d^2\mathbf{z} = d\mathbf{x} \, d\mathbf{y}. The log-likelihood function, essential for maximum likelihood inference in parameter estimation, follows directly from either form. For the proper case, -\log f(\mathbf{z}) = n \log \pi + \log \det(\Gamma) + (\mathbf{z} - \boldsymbol{\mu})^H \Gamma^{-1} (\mathbf{z} - \boldsymbol{\mu}). In the general case, it is n \log(2\pi) + \frac{1}{2} \log \det(\Sigma) + \frac{1}{2} [\mathbf{w} - E(\mathbf{w})]^T \Sigma^{-1} [\mathbf{w} - E(\mathbf{w})]. These expressions facilitate optimization in applications like signal processing and array calibration. Computationally, evaluating the density or log-likelihood leverages the Hermitian positive-definite structure of \Gamma in the proper case, enabling efficient Cholesky decompositions or eigenvalue methods for inversion and determinant computation, which scale as O(n^3) but remain stable in high dimensions via specialized algorithms for Hermitian matrices. For the general case, block-matrix operations on \Sigma exploit its symmetry, though inversion requires careful handling of potential ill-conditioning when \Pi introduces near-singularity; preconditioning or augmented-statistics approaches mitigate this for large n.
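The Cholesky route for the proper negative log-likelihood can be sketched as follows (random illustrative parameters): with \Gamma = LL^H, the log-determinant is 2\sum_k \log L_{kk} and the quadratic form is \|L^{-1}(\mathbf{z} - \boldsymbol{\mu})\|^2, so no explicit inverse is needed.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 4
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Gamma = A @ A.conj().T + n * np.eye(n)         # Hermitian positive definite
mu = rng.standard_normal(n) + 1j * rng.standard_normal(n)
z = rng.standard_normal(n) + 1j * rng.standard_normal(n)

d = z - mu
# Direct evaluation: n log(pi) + log det(Gamma) + d^H Gamma^{-1} d
nll_direct = (n * np.log(np.pi)
              + np.log(np.linalg.det(Gamma)).real
              + (d.conj() @ np.linalg.solve(Gamma, d)).real)

# Cholesky evaluation: Gamma = L L^H with real positive diagonal on L.
L = np.linalg.cholesky(Gamma)
w = np.linalg.solve(L, d)                      # one triangular system
nll_chol = (n * np.log(np.pi)
            + 2.0 * np.sum(np.log(np.diag(L).real))
            + np.vdot(w, w).real)
```

Both evaluations agree; the Cholesky form avoids forming \Gamma^{-1} and is the numerically preferred route for repeated likelihood evaluations.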

Circularly Symmetric Density

The circularly symmetric complex normal distribution is a special case of the vector complex normal distribution in which the pseudo-covariance matrix \Pi = 0, ensuring that the centered vector remains unchanged in distribution under multiplication by any complex scalar of unit modulus e^{i\theta}, which corresponds to rotational invariance in the complex plane. This condition eliminates the dependence on the pseudo-covariance term in the general density, simplifying the form while preserving the Hermitian positive definite covariance matrix \Gamma. The probability density function for an n-dimensional circularly symmetric complex normal random vector z with mean \mu and covariance \Gamma is given by f(z) = \frac{1}{\pi^n \det(\Gamma)} \exp\left( -(z - \mu)^H \Gamma^{-1} (z - \mu) \right), where the superscript H denotes the Hermitian transpose. This expression highlights the quadratic form in the exponent, which measures the Mahalanobis distance in the complex domain, and the normalization factor ensures the density integrates to unity over \mathbb{C}^n. In the central case where \mu = 0, the density simplifies further to f(z) = \frac{1}{\pi^n \det(\Gamma)} \exp\left( -z^H \Gamma^{-1} z \right). This form is particularly prevalent in modeling noise processes in signal processing, as it centers the distribution at the origin. For the isotropic subcase, where \Gamma = \sigma^2 I_n with \sigma^2 > 0 and I_n the n \times n identity matrix, the density becomes independent of direction and depends solely on the Euclidean norm \|z - \mu\|, yielding f(z) = \frac{1}{(\pi \sigma^2)^n} \exp\left( -\frac{\|z - \mu\|^2}{\sigma^2} \right). In the central isotropic version (\mu = 0), it reduces to f(z) = \frac{1}{(\pi \sigma^2)^n} \exp\left( -\frac{\|z\|^2}{\sigma^2} \right), representing uniform variance in all complex dimensions.
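The phase invariance of the central circular density is easy to check numerically: the quadratic form z^H \Gamma^{-1} z is unchanged when every component of z is rotated by a common phase, so f(e^{i\theta} z) = f(z) for any \theta (the matrix and evaluation point below are arbitrary illustrations).

```python
import numpy as np

rng = np.random.default_rng(7)
n = 3
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Gamma = A @ A.conj().T + n * np.eye(n)         # Hermitian positive definite

def f(z):
    # Central circularly symmetric density f(z) = exp(-z^H Gamma^{-1} z) / (pi^n det Gamma)
    q = (z.conj() @ np.linalg.solve(Gamma, z)).real
    return np.exp(-q) / (np.pi ** n * np.linalg.det(Gamma).real)

z = rng.standard_normal(n) + 1j * rng.standard_normal(n)
vals = [f(np.exp(1j * th) * z) for th in (0.0, 0.9, 2.4)]
spread = max(vals) - min(vals)                 # should be ~ 0
```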

Characteristic Functions

Scalar Case Characteristic Function

The characteristic function of a scalar complex normal random variable Z \sim \mathrm{CN}(\mu, \Gamma, \Pi), where \mu \in \mathbb{C} is the mean, \Gamma > 0 is the covariance parameter, and \Pi \in \mathbb{C} is the pseudo-covariance parameter, is defined as \phi(t) = \mathbb{E}\left[ \exp\left( i \operatorname{Re}(t^* Z) \right) \right] for t \in \mathbb{C}. This function can be derived by viewing Z = X + i Y as arising from a bivariate real normal random vector [X, Y]^\top \sim \mathcal{N}_2\left( [\operatorname{Re} \mu, \operatorname{Im} \mu]^\top, \Sigma \right), where the 2 \times 2 covariance matrix \Sigma satisfies \Gamma = \Sigma_{11} + \Sigma_{22} and \Pi = (\Sigma_{11} - \Sigma_{22}) + 2i \Sigma_{12}. Letting t = u + i v with u, v \in \mathbb{R}, the argument \operatorname{Re}(t^* Z) = u X + v Y, so \phi(t) equals the characteristic function of this bivariate normal, given by \exp\left( i (u \operatorname{Re} \mu + v \operatorname{Im} \mu) - \frac{1}{2} \begin{pmatrix} u & v \end{pmatrix} \Sigma \begin{pmatrix} u \\ v \end{pmatrix} \right). The linear term simplifies to i \operatorname{Re}(t^* \mu). The quadratic form \begin{pmatrix} u & v \end{pmatrix} \Sigma \begin{pmatrix} u \\ v \end{pmatrix} expands to \frac{1}{2} \Gamma |t|^2 + \frac{1}{2} \operatorname{Re}(\bar{\Pi} t^2), yielding the full expression \phi(t) = \exp\left( i \operatorname{Re}(t^* \mu) - \frac{\Gamma}{4} |t|^2 - \frac{1}{4} \operatorname{Re}(\bar{\Pi} t^2) \right). When \Pi = 0 (circularly symmetric case), this reduces to \phi(t) = \exp\left( i \operatorname{Re}(t^* \mu) - \frac{\Gamma}{4} |t|^2 \right). As a consequence of the quadratic form in the exponent, \phi(t) is an entire function, analytic throughout the complex plane t \in \mathbb{C}, reflecting the Gaussian structure.
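The derivation above can be checked pointwise: the complex-parameter expression must match the bivariate-normal characteristic function built from \Sigma (illustrative parameter values, with \Gamma > |\Pi|).

```python
import numpy as np

mu, Gamma, Pi = 1.0 - 0.5j, 2.0, 0.3 + 0.4j     # illustrative scalar parameters

# Real 2x2 covariance implied by (Gamma, Pi):
Sigma = 0.5 * np.array([[Gamma + Pi.real, Pi.imag],
                        [Pi.imag, Gamma - Pi.real]])

def phi_complex(t):
    # phi(t) = exp(i Re(t* mu) - Gamma |t|^2 / 4 - Re(conj(Pi) t^2) / 4)
    return np.exp(1j * (np.conj(t) * mu).real
                  - Gamma * abs(t) ** 2 / 4
                  - (np.conj(Pi) * t ** 2).real / 4)

def phi_real(t):
    # Bivariate normal CF evaluated at (u, v) = (Re t, Im t).
    uv = np.array([t.real, t.imag])
    m = np.array([mu.real, mu.imag])
    return np.exp(1j * (uv @ m) - 0.5 * uv @ Sigma @ uv)

ts = [0.5 + 0.2j, -1.0 + 1.5j, 2.0 - 0.7j]
max_err = max(abs(phi_complex(t) - phi_real(t)) for t in ts)
```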

Vector Case Characteristic Function

The characteristic function of a complex normal random vector Z \in \mathbb{C}^n is defined as \phi(t) = \mathbb{E}\left[ \exp\left( i \Re(t^H Z) \right) \right], where t \in \mathbb{C}^n and t^H denotes the conjugate transpose. This formulation generalizes the scalar case to the multivariate setting by incorporating the Hermitian inner product, ensuring the argument of the exponential corresponds to a real linear functional on the underlying real and imaginary components of Z. The use of the real part aligns the complex case with the standard Fourier transform properties in the real multivariate normal distribution. In the circularly symmetric case, where the pseudo-covariance matrix \Pi = 0, the characteristic function simplifies to \phi(t) = \exp\left( i \Re(t^H \mu) - \frac{1}{4} t^H \Gamma t \right), with \mu \in \mathbb{C}^n as the mean vector and \Gamma \in \mathbb{C}^{n \times n} as the Hermitian positive semi-definite covariance matrix satisfying \Gamma = \mathbb{E}[(Z - \mu)(Z - \mu)^H]. This follows from the joint normality of the real and imaginary parts in the circular case, each following a real normal distribution with appropriately scaled covariances. For the general non-circular case, the characteristic function incorporates the pseudo-covariance \Pi = \mathbb{E}[(Z - \mu)(Z - \mu)^T], yielding \phi(t) = \exp\left( i \Re(t^H \mu) - \frac{1}{4} t^H \Gamma t - \frac{1}{4} \Re\left( t^T \bar{\Pi} t \right) \right), where \bar{\Pi} denotes the entry-wise conjugate of \Pi. This extension accounts for correlations between Z and its conjugate that violate circular symmetry. The characteristic function uniquely determines the complex normal distribution, as it is an entire function whose values allow reconstruction of the distribution via inversion formulas, leveraging properties such as positive definiteness from Bochner's theorem. 
Evaluation of the characteristic function in high dimensions is computationally efficient, relying on the direct calculation of quadratic forms like t^H \Gamma t and t^T \bar{\Pi} t, which can be performed using matrix-vector products in O(n^2) time, or faster for structured matrices such as Toeplitz or low-rank forms common in applications. While evaluating the scalar exponential is straightforward, related tasks such as deriving moments involve matrix derivatives of these quadratic forms, whose cost grows with n.

Moments from Characteristic Function

The moments of a complex normal random vector can be derived from the analytic characteristic function \phi(\mathbf{t}) = \mathbb{E}[\exp(i \mathbf{t}^H \mathbf{Z})], distinct from the standard characteristic function used above, where \mathbf{Z} follows a complex normal distribution and \mathbf{t} is a complex vector argument. The cumulant-generating function is \log \phi(\mathbf{t}), and its derivatives at \mathbf{t} = \mathbf{0} yield the cumulants, which coincide with the central moments up to second order for this distribution. The first moment, or mean vector \boldsymbol{\mu} = \mathbb{E}[\mathbf{Z}], is obtained from the first-order derivative: \boldsymbol{\mu} = -i \nabla_{\mathbf{t}} \log \phi(\mathbf{t}) \bigg|_{\mathbf{t}=\mathbf{0}}, where \nabla_{\mathbf{t}} denotes the gradient with respect to \mathbf{t}. This follows the standard relation for characteristic functions, adapted via Wirtinger calculus for the complex vector case. The second central moments are captured by the covariance matrix \boldsymbol{\Gamma} = \mathbb{E}[(\mathbf{Z} - \boldsymbol{\mu})(\mathbf{Z} - \boldsymbol{\mu})^H] and the pseudo-covariance matrix \boldsymbol{\Pi} = \mathbb{E}[(\mathbf{Z} - \boldsymbol{\mu})(\mathbf{Z} - \boldsymbol{\mu})^T], derived from second-order mixed derivatives: \boldsymbol{\Gamma} = -\frac{\partial^2}{\partial \mathbf{t} \partial \mathbf{t}^H} \log \phi(\mathbf{t}) \bigg|_{\mathbf{t}=\mathbf{0}}. The pseudo-covariance arises from the complementary derivative \frac{\partial^2}{\partial \mathbf{t} \partial \mathbf{t}^T} \log \phi(\mathbf{t}) \big|_{\mathbf{t}=\mathbf{0}} = -\boldsymbol{\Pi}^*, reflecting the non-circular structure when \boldsymbol{\Pi} \neq \mathbf{0}. These second-order terms fully parameterize the distribution. Higher-order even moments, such as the fourth moment \mathbb{E}[Z_i Z_j Z_k Z_l], are computed using an analog of Isserlis' theorem (also known as Wick's theorem) for complex Gaussians. 
This expresses the moment as a sum over all pairings of the indices, with each pair contributing the appropriate second moment, and odd-order central moments vanishing. For a zero-mean vector, moments with no conjugated factors involve only the pseudo-covariance, \mathbb{E}[Z_i Z_j Z_k Z_l] = \Pi_{ij} \Pi_{kl} + \Pi_{ik} \Pi_{jl} + \Pi_{il} \Pi_{jk}, while mixed moments combine both matrices, as in \mathbb{E}[Z_i \overline{Z_j} Z_k \overline{Z_l}] = \Gamma_{ij} \Gamma_{kl} + \Gamma_{il} \Gamma_{kj} + \Pi_{ik} \overline{\Pi_{jl}}, so that the Hermitian (\Gamma) and complementary (\Pi) correlations both enter the Wick pairings. This generalization extends the real-case Isserlis theorem to handle the bilinear structure of complex variables. The cumulants beyond second order are all zero, as the cumulant-generating function \log \phi(\mathbf{t}) is quadratic in \mathbf{t}, confirming the Gaussian character of the distribution. For the fourth-order cumulant, explicit computation yields zero regardless of \boldsymbol{\Pi}, though the corresponding fourth moment incorporates \boldsymbol{\Pi} terms; this holds even for non-circular cases, distinguishing the complex normal from non-Gaussian complex distributions.
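The Wick pairings can be checked by Monte Carlo in the scalar improper case, where with all indices equal they give E[Z^4] = 3\Pi^2 and E[|Z|^4] = 2\Gamma^2 + |\Pi|^2 (sample size and standard deviations below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(8)
sx, sy = 1.0, 0.6                       # independent real/imaginary std devs
Gamma = sx ** 2 + sy ** 2               # = 1.36
Pi = sx ** 2 - sy ** 2                  # = 0.64 (real-valued here)

m = 1_000_000
Z = sx * rng.standard_normal(m) + 1j * sy * rng.standard_normal(m)

m4 = np.mean(Z ** 4)                    # Wick prediction: 3 Pi^2
a4 = np.mean(np.abs(Z) ** 4)            # Wick prediction: 2 Gamma^2 + |Pi|^2
```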

Key Properties

Linear Transformations

A fundamental property of the complex normal distribution is its closure under affine transformations. If \mathbf{Z} \sim \mathcal{CN}_n(\boldsymbol{\mu}, \boldsymbol{\Gamma}, \boldsymbol{\Pi}), where \boldsymbol{\Gamma} is the Hermitian covariance matrix and \boldsymbol{\Pi} is the symmetric pseudo-covariance matrix, then for an m \times n complex matrix \mathbf{A} and an m \times 1 complex vector \mathbf{b}, the transformed vector \mathbf{Y} = \mathbf{A} \mathbf{Z} + \mathbf{b} follows \mathcal{CN}_m(\mathbf{A} \boldsymbol{\mu} + \mathbf{b}, \mathbf{A} \boldsymbol{\Gamma} \mathbf{A}^H, \mathbf{A} \boldsymbol{\Pi} \mathbf{A}^T). This result holds because the complex normal distribution arises from jointly normal real and imaginary parts, and complex affine transformations correspond to real linear transformations that preserve joint normality. The transformation of parameters occurs covariantly: the mean vector shifts affinely, the covariance matrix incorporates the Hermitian transpose to preserve positive semi-definiteness and Hermitian symmetry, while the pseudo-covariance uses the plain transpose to maintain its complex symmetric structure. This ensures the resulting distribution remains complex normal, facilitating analytical tractability in applications involving signal processing and communications. In the scalar case, where Z \sim \mathcal{CN}(\mu, \gamma, \pi), applying a complex scalar \alpha and shift b yields Y = \alpha Z + b \sim \mathcal{CN}(\alpha \mu + b, |\alpha|^2 \gamma, \alpha^2 \pi). Here, the magnitude |\alpha| scales the variance by |\alpha|^2, while the phase of \alpha rotates the distribution in the complex plane, altering its orientation without changing its normality. If \mathbf{A} is singular, the transformed covariance \mathbf{A} \boldsymbol{\Gamma} \mathbf{A}^H becomes singular, resulting in a degenerate complex normal distribution concentrated on a lower-dimensional affine subspace.
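The parameter transformation rules are algebraic identities for sample moments too: transforming a set of draws by \mathbf{Y} = \mathbf{A}\mathbf{Z} + \mathbf{b} and recomputing the empirical mean, covariance, and pseudo-covariance reproduces \mathbf{A}\hat{\boldsymbol{\mu}} + \mathbf{b}, \mathbf{A}\hat{\boldsymbol{\Gamma}}\mathbf{A}^H, and \mathbf{A}\hat{\boldsymbol{\Pi}}\mathbf{A}^T exactly (the data and transform below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(9)
n, m_dim, m = 3, 2, 400
Z = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
Z[:, 1] += 0.5 * Z[:, 0].conj()                 # inject some impropriety
A = rng.standard_normal((m_dim, n)) + 1j * rng.standard_normal((m_dim, n))
b = rng.standard_normal(m_dim) + 1j * rng.standard_normal(m_dim)

def params(S):
    # Empirical mean, covariance, pseudo-covariance of row-wise samples.
    muS = S.mean(0)
    Sc = S - muS
    return muS, Sc.T @ Sc.conj() / len(S), Sc.T @ Sc / len(S)

muZ, GZ, PZ = params(Z)
Y = Z @ A.T + b                                 # rows: y = A z + b
muY, GY, PY = params(Y)

err = max(np.max(np.abs(muY - (A @ muZ + b))),
          np.max(np.abs(GY - A @ GZ @ A.conj().T)),
          np.max(np.abs(PY - A @ PZ @ A.T)))
```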

Marginal and Conditional Distributions

For a complex normal random vector \mathbf{Z} \sim \mathrm{CN}(\boldsymbol{\mu}, \boldsymbol{\Gamma}, \boldsymbol{\Pi}), where \boldsymbol{\Gamma} is the covariance matrix and \boldsymbol{\Pi} is the pseudo-covariance matrix, the marginal distribution of a subvector is obtained by partitioning the parameters into conforming blocks. Consider the partition \mathbf{Z} = \begin{bmatrix} \mathbf{Z}_1 \\ \mathbf{Z}_2 \end{bmatrix}, \boldsymbol{\mu} = \begin{bmatrix} \boldsymbol{\mu}_1 \\ \boldsymbol{\mu}_2 \end{bmatrix}, \boldsymbol{\Gamma} = \begin{bmatrix} \boldsymbol{\Gamma}_{11} & \boldsymbol{\Gamma}_{12} \\ \boldsymbol{\Gamma}_{21} & \boldsymbol{\Gamma}_{22} \end{bmatrix}, and \boldsymbol{\Pi} = \begin{bmatrix} \boldsymbol{\Pi}_{11} & \boldsymbol{\Pi}_{12} \\ \boldsymbol{\Pi}_{21} & \boldsymbol{\Pi}_{22} \end{bmatrix}. The marginal distribution of \mathbf{Z}_1 is then \mathbf{Z}_1 \sim \mathrm{CN}(\boldsymbol{\mu}_1, \boldsymbol{\Gamma}_{11}, \boldsymbol{\Pi}_{11}). The conditional distribution \mathbf{Z}_1 \mid \mathbf{Z}_2 = \mathbf{z}_2 is also complex normal, with parameters given by \boldsymbol{\mu}_{1 \mid 2} = \boldsymbol{\mu}_1 + \boldsymbol{\Gamma}_{12} \boldsymbol{\Gamma}_{22}^{-1} (\mathbf{z}_2 - \boldsymbol{\mu}_2), \boldsymbol{\Gamma}_{1 \mid 2} = \boldsymbol{\Gamma}_{11} - \boldsymbol{\Gamma}_{12} \boldsymbol{\Gamma}_{22}^{-1} \boldsymbol{\Gamma}_{21}, and \boldsymbol{\Pi}_{1 \mid 2} = \boldsymbol{\Pi}_{11} - \boldsymbol{\Pi}_{12} \boldsymbol{\Gamma}_{22}^{-1} \boldsymbol{\Gamma}_{21} + \boldsymbol{\Gamma}_{12} \boldsymbol{\Gamma}_{22}^{-1} \boldsymbol{\Pi}_{21} - \boldsymbol{\Gamma}_{12} \boldsymbol{\Gamma}_{22}^{-1} \boldsymbol{\Pi}_{22} \boldsymbol{\Gamma}_{22}^{-1} \boldsymbol{\Gamma}_{21}. Thus, \mathbf{Z}_1 \mid \mathbf{Z}_2 = \mathbf{z}_2 \sim \mathrm{CN}(\boldsymbol{\mu}_{1 \mid 2}, \boldsymbol{\Gamma}_{1 \mid 2}, \boldsymbol{\Pi}_{1 \mid 2}). These expressions parallel the real multivariate normal case for the mean and covariance but incorporate the pseudo-covariance to capture non-circular symmetry; for improper vectors, a fully general treatment conditions on the augmented vector [\mathbf{z}_2^T, \overline{\mathbf{z}_2}^T]^T, which yields a widely linear conditional mean involving both \mathbf{z}_2 and its conjugate. 
An alternative parameterization uses the precision matrix \boldsymbol{\Lambda} = \boldsymbol{\Gamma}^{-1}, which is also Hermitian positive definite. Partition \boldsymbol{\Lambda} = \begin{bmatrix} \boldsymbol{\Lambda}_{11} & \boldsymbol{\Lambda}_{12} \\ \boldsymbol{\Lambda}_{21} & \boldsymbol{\Lambda}_{22} \end{bmatrix}. The conditional covariance is then \boldsymbol{\Gamma}_{1 \mid 2} = \boldsymbol{\Lambda}_{11}^{-1}, and the conditional mean is \boldsymbol{\mu}_{1 \mid 2} = \boldsymbol{\mu}_1 - \boldsymbol{\Lambda}_{11}^{-1} \boldsymbol{\Lambda}_{12} (\mathbf{z}_2 - \boldsymbol{\mu}_2). This precision-based form simplifies computations involving conditional independence, since zeros in off-diagonal blocks of \boldsymbol{\Lambda} directly indicate such structure, and it is particularly valuable in Gaussian processes and graphical models for complex data. Block inversion of the Hermitian precision matrix employs the Schur complement \boldsymbol{\Lambda}_{11.2} = \boldsymbol{\Lambda}_{11} - \boldsymbol{\Lambda}_{12} \boldsymbol{\Lambda}_{22}^{-1} \boldsymbol{\Lambda}_{21}, but for conditionals, the direct block \boldsymbol{\Lambda}_{11} suffices. Unlike the real case, the complex normal's pseudo-covariance often receives less emphasis in derivations, yet it is crucial for accurate marginal and conditional specifications in non-proper distributions; this distinction underscores the importance of Hermitian block partitioning in the complex domain.
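As a numerical sanity check, the covariance-partition and precision-based forms of the conditional parameters can be compared directly. The NumPy sketch below uses a hypothetical 3×3 Hermitian covariance (not taken from any reference) for the proper case \boldsymbol{\Pi} = \mathbf{0} with \boldsymbol{\mu} = \mathbf{0}, and verifies that the Schur complement \boldsymbol{\Gamma}_{11} - \boldsymbol{\Gamma}_{12}\boldsymbol{\Gamma}_{22}^{-1}\boldsymbol{\Gamma}_{21} equals \boldsymbol{\Lambda}_{11}^{-1}, and that both conditional-mean formulas agree:

```python
import numpy as np

# Hypothetical Hermitian positive-definite covariance for a proper
# (Pi = 0, mu = 0) complex normal vector; block 1 = first component,
# block 2 = remaining two components.
Gamma = np.array([[2.0,        0.5 + 0.3j, 0.2 - 0.1j],
                  [0.5 - 0.3j, 1.5,        0.4 + 0.2j],
                  [0.2 + 0.1j, 0.4 - 0.2j, 1.8]])
G11, G12 = Gamma[:1, :1], Gamma[:1, 1:]
G21, G22 = Gamma[1:, :1], Gamma[1:, 1:]

# Covariance-partition form of the conditional parameters.
G22_inv = np.linalg.inv(G22)
Gamma_cond = G11 - G12 @ G22_inv @ G21          # Schur complement of Gamma_22
z2 = np.array([0.3 - 0.2j, -0.1 + 0.5j])        # arbitrary conditioning value
mu_cond = G12 @ G22_inv @ z2                    # conditional mean (mu = 0)

# Precision-based form: Gamma_{1|2} = Lambda_11^{-1},
# mu_{1|2} = -Lambda_11^{-1} Lambda_12 z2.
Lam = np.linalg.inv(Gamma)
Gamma_cond_p = np.linalg.inv(Lam[:1, :1])
mu_cond_p = -np.linalg.solve(Lam[:1, :1], Lam[:1, 1:] @ z2)

assert np.allclose(Gamma_cond, Gamma_cond_p)
assert np.allclose(mu_cond, mu_cond_p)
```

The agreement follows from the standard block-inversion identity \boldsymbol{\Lambda}_{12} = -\boldsymbol{\Lambda}_{11}\boldsymbol{\Gamma}_{12}\boldsymbol{\Gamma}_{22}^{-1}.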

Independence Criteria

In the context of complex normal distributions, independence of random variables or subvectors is determined by their second-order statistics, specifically the covariance and pseudo-covariance. For jointly complex normal random variables, uncorrelatedness in both the standard and pseudo senses implies independence, analogous to the real Gaussian case but requiring consideration of the relation matrix (pseudo-covariance) due to potential noncircularity. For two scalar complex normal random variables Z_1 and Z_2 (assuming zero mean for simplicity), independence holds if and only if E[Z_1 Z_2] = 0, E[Z_1 Z_2^*] = 0, E[Z_1^* Z_2] = 0, and E[Z_1^* Z_2^*] = 0. These conditions ensure that all cross terms vanish; the last two are redundant, being the complex conjugates of the first two moments. This uncorrelatedness guarantees that the joint density factors into the product of the marginals. The joint characteristic function of Z_1 and Z_2 is the product of their marginal characteristic functions if and only if they are independent. For complex normals, the characteristic function is given by \phi(\omega) = \exp\left(j \operatorname{Re}(\mu^H \omega) - \frac{1}{4}\left(\omega^H \Gamma \omega + \operatorname{Re}(\omega^H \Pi \overline{\omega})\right)\right), where \Gamma is the covariance matrix and \Pi is the pseudo-covariance matrix. Independence requires the off-diagonal blocks of both \Gamma and \Pi to be zero, making the exponent separate into independent quadratic forms. For vector-valued complex normals, consider a partition Z = \begin{pmatrix} Z_a \\ Z_b \end{pmatrix}, where Z_a and Z_b are subvectors. Independence of Z_a and Z_b occurs if and only if the cross-covariance E[Z_a Z_b^H] = 0 and the cross-pseudo-covariance E[Z_a Z_b^T] = 0. This condition is equivalent to the overall covariance matrix \Gamma = E[ZZ^H] and pseudo-covariance matrix \Pi = E[ZZ^T] being block-diagonal with respect to the partition. In noncircular cases, where \Pi \neq 0, these zero cross terms are necessary to ensure the joint distribution factors, preventing dependence induced by conjugate correlations.
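The four cross-moment conditions are easy to probe by simulation. In the illustrative sketch below (parameters chosen arbitrarily, not from any reference), Z_1 is proper while Z_2 is deliberately made improper, so its own pseudo-variance E[Z_2^2] is nonzero; because the two variables are generated independently, all four empirical cross moments should still be near zero:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000

# Z1: proper (circular) unit-variance complex normal.
z1 = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)

# Z2: drawn independently of Z1, but with correlated real and imaginary
# parts, so E[Z2^2] = 1.6j is nonzero (noncircular) by construction.
x2 = rng.standard_normal(N)
z2 = x2 + 1j * (0.8 * x2 + 0.6 * rng.standard_normal(N))

# All four cross moments vanish under independence.
cross = [np.mean(z1 * z2),
         np.mean(z1 * np.conj(z2)),
         np.mean(np.conj(z1) * z2),
         np.mean(np.conj(z1) * np.conj(z2))]

pseudo_var_z2 = np.mean(z2 ** 2)   # nonzero: Z2 alone is noncircular
```

The nonzero pseudo-variance of Z_2 illustrates why zero cross-covariance alone would be insufficient in the noncircular case: the cross-pseudo-covariance must vanish as well.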

Circularly Symmetric Central Case

Definition and Conditions

The circularly symmetric central complex normal distribution, often denoted as \mathbf{Z} \sim \mathcal{CN}(\mathbf{0}, \boldsymbol{\Gamma}), represents a fundamental subclass of the complex normal family characterized by a zero mean vector \boldsymbol{\mu} = \mathbf{0} and a zero relation (or pseudo-covariance) matrix \boldsymbol{\Pi} = \mathbf{0}. This distribution is fully specified by its Hermitian positive semi-definite covariance matrix \boldsymbol{\Gamma} = \mathbb{E}[\mathbf{Z}\mathbf{Z}^\dagger], where \mathbf{Z}^\dagger denotes the conjugate transpose, ensuring that the second-order statistics capture all relevant information without correlation between \mathbf{Z} and its conjugate. A key implication is the total expected power \mathbb{E}[\|\mathbf{Z}\|^2] = \operatorname{trace}(\boldsymbol{\Gamma}), which quantifies the overall variance across components. A defining property is its rotational invariance: for any real angle \theta, the rotated vector e^{i\theta} \mathbf{Z} follows the same distribution \mathcal{CN}(\mathbf{0}, \boldsymbol{\Gamma}), reflecting the absence of preferred phase directions. This symmetry arises directly from \boldsymbol{\Pi} = \mathbb{E}[\mathbf{Z}\mathbf{Z}^T] = \mathbf{0}, which enforces that the distribution is invariant under phase rotations. Equivalently, in terms of real and imaginary parts, if \mathbf{Z} = \mathbf{X} + i\mathbf{Y} with \mathbf{X} = \mathrm{Re}(\mathbf{Z}) and \mathbf{Y} = \mathrm{Im}(\mathbf{Z}), then \mathbf{X} and \mathbf{Y} are uncorrelated zero-mean real Gaussian vectors with identical covariance matrices \mathbb{E}[\mathbf{X}\mathbf{X}^T] = \mathbb{E}[\mathbf{Y}\mathbf{Y}^T] = \frac{1}{2} \mathrm{Re}(\boldsymbol{\Gamma}), and zero cross-covariance \mathbb{E}[\mathbf{X}\mathbf{Y}^T] = \mathbf{0}. Moreover, if \boldsymbol{\Gamma} is diagonal, the components of \mathbf{Z} are independent, simplifying analysis in decoupled systems.
A special isotropic subcase occurs when \boldsymbol{\Gamma} = \sigma^2 \mathbf{I}_n for dimension n and variance \sigma^2 > 0, yielding a distribution that is uniform in all directions and invariant under arbitrary unitary transformations, akin to white noise in the complex plane. This form is particularly tractable for multidimensional modeling. Historically, the circularly symmetric central complex normal has been central to wireless communications since the 1960s, where it underpins models for channel coefficients in multipath environments without line-of-sight paths.
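Sampling from \mathcal{CN}(\mathbf{0}, \boldsymbol{\Gamma}) is straightforward: color standard circularly symmetric noise with a Cholesky factor of \boldsymbol{\Gamma}. The NumPy sketch below (a hypothetical 2×2 covariance, illustrative only) checks empirically that \boldsymbol{\Gamma} is recovered, the pseudo-covariance is near zero, the total power matches \operatorname{trace}(\boldsymbol{\Gamma}), and a fixed phase rotation leaves the second-order statistics unchanged:

```python
import numpy as np

rng = np.random.default_rng(1)
n, N = 2, 100_000

# Hypothetical Hermitian positive-definite covariance; trace = 3.
Gamma = np.array([[2.0,        0.7 - 0.4j],
                  [0.7 + 0.4j, 1.0]])
L = np.linalg.cholesky(Gamma)

# Standard circularly symmetric samples: real and imaginary parts are
# i.i.d. N(0, 1/2), so E[w w^H] = I and E[w w^T] = 0.
W = (rng.standard_normal((n, N)) + 1j * rng.standard_normal((n, N))) / np.sqrt(2)
Z = L @ W                        # Z ~ CN(0, Gamma)

cov = Z @ Z.conj().T / N         # empirical covariance, approx Gamma
pcov = Z @ Z.T / N               # empirical pseudo-covariance, approx 0
power = np.mean(np.sum(np.abs(Z) ** 2, axis=0))   # approx trace(Gamma) = 3

# Rotating every sample by a fixed phase preserves the statistics.
Zr = np.exp(1j * 0.9) * Z
cov_r = Zr @ Zr.conj().T / N     # still approx Gamma
```

Note that the rotation multiplies the pseudo-covariance by e^{2i\theta}, so only \boldsymbol{\Pi} = \mathbf{0} is compatible with exact rotational invariance.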

Real and Imaginary Part Distributions

For a circularly symmetric central complex random vector Z \sim \mathrm{CSCN}(0, \Gamma, 0), where \Gamma is an n \times n Hermitian positive semi-definite covariance matrix with zero imaginary part (i.e., \Gamma is real symmetric), the decomposition into real and imaginary parts is given by Z = X + i Y, with X = \mathrm{Re}(Z) and Y = \mathrm{Im}(Z) being real n-dimensional random vectors. The joint distribution of the stacked vector \begin{bmatrix} X \\ Y \end{bmatrix} is multivariate normal, \mathcal{N}_{2n}\left(0, \frac{1}{2} \mathrm{blkdiag}(\Gamma, \Gamma)\right), reflecting the block-diagonal structure due to the absence of cross-covariance terms. This structure arises because the circular-symmetry condition, combined with \mathrm{Im}(\Gamma) = 0, ensures that the real and imaginary components are uncorrelated. The vectors X and Y are independent, each marginally distributed as X \sim \mathcal{N}_n\left(0, \frac{1}{2} \Gamma\right) and Y \sim \mathcal{N}_n\left(0, \frac{1}{2} \Gamma\right). This independence holds specifically under circular symmetry with real \Gamma, as the vanishing pseudo-covariance E[ZZ^T] = 0 eliminates correlations between the real and imaginary parts across components. The moment relations confirm this: E[XY^T] = 0, E[XX^T] = E[YY^T] = \frac{1}{2} \Gamma. In the scalar case (n=1), where Z \sim \mathrm{CSCN}(0, \sigma^2, 0) with real variance \sigma^2 > 0, the magnitude |Z| follows a Rayleigh distribution, |Z| \sim \mathrm{Rayleigh}(\sigma / \sqrt{2}), since X and Y are i.i.d. \mathcal{N}(0, \sigma^2 / 2). This distribution captures the radial symmetry inherent in the circularly symmetric case.
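The scalar Rayleigh connection can be verified numerically. In the sketch below (illustrative parameter choices), X and Y are drawn i.i.d. \mathcal{N}(0, \sigma^2/2), and the empirical mean of |Z| is compared with the Rayleigh prediction s\sqrt{\pi/2} for scale s = \sigma/\sqrt{2}:

```python
import numpy as np

rng = np.random.default_rng(2)
sigma2, N = 4.0, 200_000        # total variance E|Z|^2 and sample count

# X, Y i.i.d. N(0, sigma2/2) give Z = X + iY ~ CSCN(0, sigma2, 0).
x = rng.normal(0.0, np.sqrt(sigma2 / 2), N)
y = rng.normal(0.0, np.sqrt(sigma2 / 2), N)
r = np.hypot(x, y)              # |Z|

# Rayleigh(s) with scale s = sigma/sqrt(2) has mean s * sqrt(pi/2).
s = np.sqrt(sigma2 / 2)
rayleigh_mean = s * np.sqrt(np.pi / 2)
```

With \sigma^2 = 4 the predicted mean magnitude is \sqrt{\pi} \approx 1.77, and the empirical mean should match it to within Monte Carlo error; similarly, the empirical mean of |Z|^2 recovers \sigma^2.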

Unique Properties

One basic property of the circularly symmetric central complex normal distribution is that the expected squared norm of a random vector \mathbf{Z} \sim \mathcal{CN}_n(\mathbf{0}, \boldsymbol{\Gamma}) equals the trace of the covariance matrix: \mathbb{E}[\|\mathbf{Z}\|^2] = \operatorname{trace}(\boldsymbol{\Gamma}). This relation simplifies power calculations in vector-valued settings. Additionally, for any Hermitian matrix \mathbf{A}, the quadratic form \mathbf{Z}^H \mathbf{A} \mathbf{Z} follows a distribution expressible as a weighted sum of chi-squared variables, reflecting the interplay between the quadratic form's structure and the underlying Gaussianity. The distribution demonstrates rotational invariance, remaining unchanged under unitary transformations \mathbf{U} \mathbf{Z} where \mathbf{U} is unitary and satisfies \mathbf{U} \boldsymbol{\Gamma} \mathbf{U}^H = \boldsymbol{\Gamma}. This property underscores the isotropic nature of the distribution, allowing equivalent representations in rotated bases without altering statistical behavior. In estimation contexts, independent and identically distributed samples from this distribution yield the sample covariance matrix as a minimal sufficient statistic, capturing all information about \boldsymbol{\Gamma} while leveraging the zero-mean and circularity assumptions. Higher-order moments exhibit characteristic patterns: all odd moments vanish due to the symmetry of the central distribution, \mathbb{E}[\mathbf{Z}^{\otimes (2k+1)}] = \mathbf{0} for any integer k \geq 0. Even moments, however, are nonzero and can be systematically computed via an adaptation of Wick's (Isserlis') theorem to the complex domain, expressing them as sums over pairings involving the covariance entries. This adaptation replaces real Gaussian contractions with complex Hermitian ones, facilitating moment evaluations in signal models.
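The complex pairing rule is easiest to see in the scalar case: for z \sim \mathcal{CN}(0, \sigma^2), only moments with equal numbers of conjugated and unconjugated factors survive, and E[z z^* z z^*] = 2\sigma^4 (two valid pairings of z with z^*), while, e.g., E[z^2] = 0. A Monte Carlo sketch (illustrative, \sigma^2 = 1):

```python
import numpy as np

rng = np.random.default_rng(3)
N = 500_000

# Scalar standard circularly symmetric Gaussian: E[|z|^2] = 1.
z = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)

# Complex Wick/Isserlis pairing: E[z z* z z*] sums over the 2 pairings
# of unconjugated with conjugated factors, giving 2 * (E[|z|^2])^2 = 2.
m4 = np.mean(np.abs(z) ** 4)

# Moments with unmatched conjugates vanish under circular symmetry.
m2 = np.mean(z ** 2)
```

The factor of 2 (rather than 3, as for E[x^4] of a real Gaussian) reflects that complex pairings only contract z against z^*.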

Applications in Signal Processing

In wireless communications, the circularly symmetric complex normal distribution serves as the standard model for additive white Gaussian noise (AWGN) in the complex baseband representation of signals. This modeling choice captures the noise as a zero-mean circularly symmetric complex Gaussian random variable with independent real and imaginary parts of equal variance, enabling accurate analysis of channel impairments in systems like orthogonal frequency-division multiplexing (OFDM). Central to this application is the computation of channel capacity, where the Shannon formula C = \log_2(1 + \mathrm{SNR}) is applied to the complex AWGN channel, providing the theoretical maximum data rate under power constraints. In array signal processing, the complex normal distribution underpins the statistical modeling of noise in multi-antenna systems, particularly for adaptive beamforming techniques. The covariance matrix of the received signals, assuming spatially white complex Gaussian noise, is estimated to derive optimal beam weights that steer nulls toward interferers and enhance signals from desired directions. Robust adaptive algorithms reconstruct the interference-plus-noise covariance matrix using projections onto low-rank subspaces, mitigating steering vector mismatches while relying on the complex normal assumption for tractable statistical derivations. This approach has been widely adopted to improve direction-of-arrival estimation and signal recovery in sensor arrays. For signal detection tasks, the circularly symmetric complex normal distribution facilitates the derivation of optimal detectors in noisy environments. Likelihood ratio tests compare the hypotheses of signal presence versus noise-only scenarios, where the noise follows a complex normal distribution, yielding the matched filter as the maximum-SNR receiver. This filter correlates the received signal with a known template, achieving the Neyman-Pearson optimality criterion for detecting deterministic signals in complex Gaussian noise, as commonly applied in communications and radar systems.
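As a concrete illustration, the matched filter for a known template in circularly symmetric white noise reduces to a correlation. The NumPy sketch below uses hypothetical parameters (template, seed, and noise level are not from the cited systems); here the output SNR, defined as |E[\text{stat}]|^2 / \mathrm{Var}[\text{stat}], equals E_s/N_0, and the corresponding Shannon capacity at that SNR follows directly:

```python
import numpy as np

rng = np.random.default_rng(4)
L, N0 = 64, 1.0                 # template length; noise power per complex sample

# Known deterministic template (hypothetical complex exponential).
s = np.exp(1j * 2 * np.pi * 0.1 * np.arange(L))

# Circularly symmetric AWGN: i.i.d. real/imag parts of variance N0/2 each.
noise = np.sqrt(N0 / 2) * (rng.standard_normal(L) + 1j * rng.standard_normal(L))
r = s + noise                   # received signal, "signal present" hypothesis

# Matched filter = correlation with the template; the likelihood ratio test
# compares Re{<s, r>} to a threshold.
stat = np.vdot(s, r)            # <s, r> = s^H r
Es = np.vdot(s, s).real         # template energy, here L = 64
out_snr = Es / N0               # |E[stat]|^2 / Var[stat] = Es / N0

# Shannon capacity of a complex AWGN channel at this SNR (bits per use).
capacity = np.log2(1 + out_snr)
```

Under the signal-present hypothesis the statistic concentrates around E_s, well above a threshold set for the noise-only case, which is the Neyman-Pearson detection picture described above.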
In contemporary wireless systems, the circularly symmetric complex normal remains the baseline for noise modeling, though extensions to non-circular (improper) distributions address uplink scenarios with real-valued modulations and hardware asymmetries. These improper models exploit augmented covariance structures to enhance sum-rate performance in non-orthogonal multiple access (NOMA) and reconfigurable intelligent surface (RIS)-aided links. Meanwhile, 2020s research integrates machine learning for blind parameter estimation of complex variances and covariances, using neural networks to handle sparse pilot data and reduce overhead in massive antenna deployments. The circular symmetry of the baseline model simplifies these computational pipelines by ensuring uncorrelated real and imaginary components.

References

  1. [1]
    [PDF] Phase, amplitude, and the complex normal distribution
    Thus, the complex normal distribution is well- defined by the parameter set (m,Γ,C), and we denote the distribution by X ⇠ CN(m,Γ,C). Recall that the ...
  2. [2]
    [PDF] arXiv:1808.07280v2 [math.PR] 19 Mar 2019
    Mar 19, 2019 · 7.5 Complex normal distribution . ... A complex Gaussian random field is uniquely determined by its mean function and its covariance.
  3. [3]
    [PDF] Lecture 12
    A complex gaussian random variable z = x + iy has components x and y described by a real bivariate gaussian random variable (x, y). A complex gaussian random ...
  4. [4]
    Statistical Analysis Based on a Certain Multivariate Complex ...
    March, 1963 Statistical Analysis Based on a Certain Multivariate Complex Gaussian Distribution (An Introduction). N. R. Goodman.
  5. [5]
    [PDF] The Complex Multivariate Gaussian Distribution - The R Journal
    Complex-valued random variables find applications in many areas of science such as signal process- ing (Kay, 1989), radio engineering (Ozarow, 1994), ...
  6. [6]
    [PDF] arXiv:1010.6219v2 [math.PR] 5 Nov 2010
    Nov 5, 2010 · A random variable γ : Ω → K is called a (complex) Gauss- ian random variable if γ ∈ L2(Ω) and E(γ) = 0 and γ/(E|γ|2)1/2 is a (complex) standard ...
  7. [7]
    [PDF] Complex Random Vectors and ICA Models - arXiv
    Dec 15, 2005 · If pcov[x] = 0p×p. , then the r.vc. is called second order circular (or circularly symmetric). ... covariance and the pseudo-covariance matrices.
  8. [8]
    [PDF] Random matrices - arXiv
    Sep 10, 2020 · Definition 2.9. A standard complex Gaussian random variable Z is of the form. Z = X + iY. √. 2. , where X and Y are independent standard real ...
  9. [9]
    [PDF] Transmit Optimization with Improper Gaussian Signaling for ... - arXiv
    Mar 11, 2013 · Abstract—This paper studies the achievable rates of Gaus- sian interference channels with additive white Gaussian noise.
  10. [10]
  11. [11]
    [PDF] Complex-Valued Random Vectors and Channels - arXiv
    May 4, 2011 · Hence, a complex-valued random vector x will be called Gaussian distributed if x(r) is (multivariate) Gaussian distributed.
  12. [12]
    [PDF] arXiv:1704.03486v1 [math.CO] 11 Apr 2017
    Apr 11, 2017 · We say that a random vector v ∈ Cn is distributed according to a standard complex normal, which we denote by v ∼ CN(0, I), iff v1,...,vn are ...
  13. [13]
    [PDF] Circularly-Symmetric Gaussian random vectors - RLE at MIT
    Jan 1, 2008 · Abstract. A number of basic properties about circularly-symmetric Gaussian random vectors are stated and proved here.
  14. [14]
    Second-order complex random vectors and normal distributions
    Complex random vectors are usually described by their covariance matrix. This is insufficient for a complete description of second-order statistics.
  15. [15]
    [PDF] Complex Random Variables - Casualty Actuarial Society
    The standard complex normal random variable is formed from two independent real normal variables whose means equal zero and whose variances equal one half:.
  16. [16]
    [PDF] Multivariate normal distributions: characteristic functions
    Nov 3, 2008 · A random vector X has a (multivariate) normal distribution if for every real vector a, the random variable aT X is normal. PROOF OF EQUIVALENCE.
  17. [17]
    [PDF] The complex multinormal distribution, quadratic forms in complex ...
    Sep 21, 2014 · goodness-of-fit test for the complex normal distribution with unknown parameters, based on the empirical characteristic function. Monte ...
  18. [18]
  19. [19]
  20. [20]
    [PDF] Proper Complex Random Processes with Applications to Information ...
    For instance, the probability density function and the entropy of a proper complex Gaussian random vector are specified solely by the vector of means and the ...<|control11|><|separator|>
  21. [21]
    Statistical Signal Processing of Complex-Valued Data
    Statistical Signal Processing of Complex-Valued Data: The Theory of Improper and Noncircular Signals. Search within full text.
  22. [22]
  23. [23]
  24. [24]
    [PDF] Circularly Symmetric Gaussian Random Vectors - EE IIT Bombay
    Oct 1, 2013 · A complex Gaussian vector is circularly symmetric if and only if its mean and pseudocovariance are zero. Proof. • The forward direction was ...
  25. [25]
    [PDF] Topic 1. Complex Random Vector and Circularly Symmetric ...
    Jan 17, 2020 · Σx = E{(x − E[x])(x − E[x])T. } ... – Each channel coefficient is modeled as a circularly symmetric complex Gaussian random variable with zero mean.
  26. [26]
    The complex multinormal distribution, quadratic forms in complex ...
    Sep 21, 2014 · This paper first reviews some basic properties of the (noncircular) complex multinormal distribution and presents a few characterizations of ...
  27. [27]
    Computing the Moments of the Complex Gaussian: Full and Sparse ...
    As it is in the real case, CGD's are characterised by a complex covariance matrix Σ = E Z Z * , which an Hermitian operator, Σ * = Σ . In some cases, we assume ...Missing: structure | Show results with:structure
  28. [28]
    [PDF] Capacity Limits of MIMO Systems - Stanford University
    Thus, the Shannon capacity of the MIMO AWGN channel is based on its maximum mutual information, as described in the next section. When the channel is time- ...
  29. [29]
    [PDF] On Capacity-Achieving Distributions for Complex AWGN Channels ...
    Apr 24, 2020 · It is shown that the capacity of an AWGN channel under transmit average power and receiver delivered power constraints is the same as the ...Missing: normal | Show results with:normal
  30. [30]
    Robust Adaptive Beamforming Algorithm Based on Complex Gauss ...
    Sep 7, 2023 · This study addresses the poor robustness of the current beamforming algorithm for covariance matrix reconstruction and the high ...
  31. [31]
    Adaptive beamforming algorithm for coprime array based on ...
    Dec 2, 2021 · Based on the idea of interference plus noise covariance matrix (INCM) reconstruction, this study proposed a robust adaptive beamforming algorithm using the ...
  32. [32]
    Signal Detection in White Gaussian Noise - MATLAB & Simulink
    A matched filter is often used at the receiver front end to enhance the SNR. From the discrete signal point of view, matched filter coefficients are simply ...
  33. [33]
    [PDF] Generalized Likelihood Ratio Test for Detection of Gaussian ... - HAL
    Jan 10, 2017 · Abstract—We consider the classical radar problem of detecting a target in Gaussian noise with unknown covariance matrix. In.
  34. [34]
    (PDF) Improper Gaussian Signaling for two-user MISO-NOMA ...
    In this paper, we study the advantages of improper Gaussian signaling (IGS) with the existence of hardware impairments (HWI) and imperfect successive ...
  35. [35]
    [PDF] Low-Complexity Blind Parameter Estimation in Wireless Systems ...
    This paper proposes low-complexity blind estimators for noise power, signal power, SNR, and MSE in wireless systems, using sparse data to reduce pilot overhead.