Here I introduce package cmvnorm, a complex generalization of the mvtnorm package. A complex generalization of the Gaussian process is suggested, and numerical results are presented using the package. An application in the context of approximating the Weierstrass \(\sigma\)-function using a complex Gaussian process is given.
Complex-valued random variables find applications in many areas of science such as signal processing (Kay 1989), radio engineering (Ozarow 1994), and atmospheric physics (Mandic et al. 2009). In this short paper I introduce cmvnorm (Hankin 2015), a package for investigating one commonly encountered complex-valued probability distribution, the complex Gaussian.
The real multivariate Gaussian distribution is well supported in R by package mvtnorm (Genz et al. 2014), having density function
\[\label{eq:realgaussianPDF} f\left(\mathbf x;\boldsymbol\mu,\Sigma\right) = \frac{e^{-\frac{1}{2}\left(\mathbf x-\boldsymbol\mu\right)^T\Sigma^{-1}\left(\mathbf x-\boldsymbol\mu\right)}}{\sqrt{\left|2\pi\Sigma\right|}}\qquad\mathbf x\in\mathbb{R}^n, \tag{1} \]
where \(\left|M\right|\) denotes the determinant of matrix \(M\). Here, \(\boldsymbol\mu=\mathbb{E}\left[\mathbf X\right]\in\mathbb{R}^n\) is the mean vector and \(\Sigma=\mathbb{E}\left[\left(\mathbf X-\boldsymbol\mu\right)\left(\mathbf X-\boldsymbol\mu\right)^T\right]\) the covariance of random vector \(\mathbf X\); we write \(\mathbf X\sim\mathcal{N}_n\left(\boldsymbol\mu,\Sigma\right)\). One natural generalization would be to consider \(\mathbf Z\sim\mathcal{N}\mathcal{C}_n\left(\boldsymbol\mu,\Gamma\right)\), the complex multivariate Gaussian, with density function
\[\label{eq:complexGaussianPDF} f\left(\mathbf z;\boldsymbol\mu,\Gamma\right) = \frac{e^{-{\left(\mathbf z-\boldsymbol\mu\right)}^\ast\Gamma^{-1}\left(\mathbf z-\boldsymbol\mu\right)}}{\left|\pi\Gamma\right|}\qquad\mathbf z\in\mathbb{C}^n, \tag{2} \]
where \({\mathbf z}^\ast\) denotes the Hermitian transpose of complex vector \(\mathbf z\). Now \(\boldsymbol\mu\in\mathbb{C}^n\) is the complex mean and \(\Gamma=\mathbb{E}\left[\left(\mathbf Z-\boldsymbol\mu\right){\left(\mathbf Z-\boldsymbol\mu\right)}^\ast\right]\) is the complex variance; \(\Gamma\) is a Hermitian positive definite matrix. Note the simpler form of (2), essentially due to Gauss’s integral operating more cleanly over the complex plane than the real line:
\[\int_\mathbb{C}e^{-{z}^\ast z}\,dz= \int_{x\in\mathbb{R}}\int_{y\in\mathbb{R}}e^{-\left(x^2+y^2\right)}\,dx\,dy= \int_{\theta=0}^{2\pi}\int_{r=0}^\infty e^{-r^2}r\,dr\,d\theta=\pi.\]
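As a minimal numerical sanity check of this integral (a base-R sketch using a truncated grid; the exact value is \(\pi\)):

> x <- seq(-6, 6, length.out = 601)      # grid wide enough that the truncated tails are negligible
> h <- x[2] - x[1]
> sum(exp(-outer(x^2, x^2, "+"))) * h^2  # Riemann sum approximating the double integral; close to pi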
A zero mean complex random vector \(\mathbf Z\) is said to be circularly symmetric (Goodman 1963) if \(\mathbb{E}\left[\mathbf Z\mathbf Z^T\right]=0\), or equivalently \(\mathbf Z\) and \(e^{i\alpha}\mathbf Z\) have identical distributions for any \(\alpha\in\mathbb{R}\). Equation (2) clearly has this property.
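Circular symmetry is easy to illustrate numerically using the package's rcmvnorm() sampler (introduced below). The following sketch, with an arbitrary \(\Gamma\) and sample size, compares the empirical pseudo-covariance \(\mathbb{E}\left[\mathbf Z\mathbf Z^T\right]\) (which should be close to zero) with the empirical covariance \(\mathbb{E}\left[\mathbf Z{\mathbf Z}^\ast\right]\) (which should be close to \(\Gamma\)):

> library("cmvnorm", quietly = TRUE)
> G <- matrix(c(2, 1i, -1i, 2), 2, 2)       # Hermitian positive definite
> Z <- rcmvnorm(10000, mean = c(0+0i, 0+0i), sigma = G)
> round(t(Z) %*% Z / nrow(Z), 2)            # pseudo-covariance: approximately the zero matrix
> round(t(Z) %*% Conj(Z) / nrow(Z), 2)      # covariance: approximately G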
Most results from real multivariate analysis have a direct generalization to the complex case, as long as “transpose” is replaced by “Hermitian transpose”. For example, \(\mathbf X\sim\mathcal{N}_n\left(\mathbf 0,\Sigma\right)\) implies \(B\mathbf X\sim\mathcal{N}_m\left(\mathbf 0,B\Sigma B^T\right)\) for any constant matrix \(B\in\mathbb{R}^{m\times n}\), and analogously \(\mathbf Z\sim\mathcal{N}\mathcal{C}_n\left(\mathbf 0,\Gamma\right)\) implies \(B\mathbf Z\sim\mathcal{N}\mathcal{C}_m\left(\mathbf 0,B\Gamma{B}^\ast\right)\), \(B\in\mathbb{C}^{m\times n}\). Similar generalizations operate for Schur complement methods on partitioned matrices.
Also, linear regression generalizes similarly. Specifically, consider \(\mathbf y\in\mathbb{R}^n\). If \(\mathbf y=X\boldsymbol\beta+\boldsymbol\epsilon\) where \(X\) is an \(n\times p\) design matrix, \(\boldsymbol\beta\in\mathbb{R}^p\) a vector of regression coefficients and \(\boldsymbol\epsilon\sim\mathcal{N}_n\left(\mathbf 0,\Sigma\right)\) is a vector of errors, then \(\hat{\boldsymbol\beta} = \left(X^T\Sigma^{-1} X\right)^{-1}X^T\Sigma^{-1}\mathbf y\) is the maximum likelihood estimator for \(\boldsymbol\beta\). The complex generalization is to write \(\mathbf z=Z\boldsymbol\beta+\boldsymbol\epsilon\), \(Z\in\mathbb{C}^{n\times p}\), \(\boldsymbol\beta\in\mathbb{C}^p\), \(\boldsymbol\epsilon\sim\mathcal{N}\mathcal{C}_n\left(\mathbf 0,\Gamma\right)\), which gives \(\hat{\boldsymbol\beta}=\left({Z}^\ast\Gamma^{-1}Z\right)^{-1}{Z}^\ast\Gamma^{-1}\mathbf z\); a short numerical sketch is given below. Such considerations suggest a natural complex generalization of the Gaussian process.
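As an illustration, the complex generalized least squares estimator can be computed directly with base R complex arithmetic. The sketch below simulates data purely for this purpose; all names and values are illustrative only:

> set.seed(0)
> n <- 10; p <- 2
> Zd <- matrix(complex(real = rnorm(n * p), imaginary = rnorm(n * p)), n, p)  # design matrix
> beta <- c(1 + 2i, -1i)                                                      # true coefficients
> G <- diag(n) + 0i                          # simplest Hermitian positive definite error variance
> eps <- complex(real = rnorm(n), imaginary = rnorm(n)) / sqrt(2)             # errors
> zz <- drop(Zd %*% beta + eps)
> ht <- function(M) Conj(t(M))               # Hermitian transpose
> solve(ht(Zd) %*% solve(G) %*% Zd) %*% ht(Zd) %*% solve(G) %*% zz            # the estimate of beta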
This short vignette introduces the cmvnorm package which furnishes some functionality for the complex multivariate Gaussian distribution, and applies it in the context of a complex generalization of the emulator package (Hankin 2005), which implements functionality for investigating (real) Gaussian processes.
Random complex vectors are generated using the rcmvnorm() function, analogous to rmvnorm():
> set.seed(1)
> library("cmvnorm", quietly = TRUE)
> cm <- c(1, 1i)
> cv <- matrix(c(2, 1i, -1i, 2), 2, 2)
> (z <- rcmvnorm(6, mean = cm, sigma = cv))
                      [,1]                  [,2]
[1,] 0.9680986+0.5525419i  0.0165969+2.9770976i
[2,] 0.2044744-1.4994889i  1.8320765+0.8271259i
[3,] 1.0739973+0.2279914i -0.7967020+0.1736071i
[4,] 1.3171073-0.9843313i  0.9257146+0.5524913i
[5,] 1.3537303-0.8086236i -0.0571337+0.3935375i
[6,] 2.9751506-0.1729231i  0.3958585+3.3128439i
Function dcmvnorm() returns the density according to (2):
> dcmvnorm(z, cm, cv)
[1] 5.103754e-04 1.809636e-05 2.981718e-03 1.172242e-03 4.466836e-03 6.803356e-07
So it is possible to determine a maximum likelihood estimate for the mean using direct numerical optimization:
> helper <- function(x) c(x[1] + 1i * x[2], x[3] + 1i * x[4])
> objective <- function(x, cv)
+ -sum(dcmvnorm(z, mean = helper(x), sigma = cv, log = TRUE))
> helper(optim(c(1, 0, 1, 0), objective, cv = cv)$par)
[1] 1.315409-0.447863i 0.385704+1.372762i
(helper functions are needed because optim() optimizes over \(\mathbb{R}^n\) as opposed to \(\mathbb{C}^n\)). This shows reasonable agreement with the true value of the mean and indeed the analytic value of the MLE, specifically
> colMeans(z)
[1] 1.315426-0.447472i 0.386068+1.372784i
In the context of the emulator, a (real) Gaussian process is usually defined as a random function \(\eta\colon\mathbb{R}^p\longrightarrow\mathbb{R}\) such that, for any set of points \(\left\{\mathbf x_1,\ldots,\mathbf x_n\right\}\) in its domain \({\mathcal D}\), the random vector \(\left\{\eta\left(\mathbf x_1\right),\ldots,\eta\left(\mathbf x_n\right)\right\}\) is multivariate Gaussian.
It is convenient to specify \(\mathbb{E}\left[\left.\eta\left(\mathbf x\right)\right|\boldsymbol\beta\right]=h\left(\mathbf x\right)^T\boldsymbol\beta\), that is, the expectation of the process at point \(\mathbf x\in{\mathcal D}\) conditional on the (unknown) vector of coefficients \(\boldsymbol\beta\). Here \(h\colon\mathbb{R}^p\longrightarrow\mathbb{R}^q\) specifies the \(q\) known regressor functions of \(\mathbf x=\left(x_1,\ldots,x_p\right)^T\); a common choice is \(h\left(\mathbf x\right)=\left(1,x_1,\ldots,x_p\right)^T\) [giving \(q=p+1\)], but one is in principle free to choose any function of \(\mathbf x\). One writes \(H^T=\left(h\left(\mathbf x_1\right),\ldots,h\left(\mathbf x_n\right)\right)\) when considering the entire design matrix \(H\); the R idiom is regressor.multi().
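For concreteness, a small sketch of this idiom, assuming the default behaviour of regressor.basis() and regressor.multi() in the emulator package (which prepend a constant regressor):

> library("emulator")
> x <- c(0.3, 0.7)
> regressor.basis(x)                        # h(x) = (1, x1, x2)
> regressor.multi(rbind(x, c(0.1, 0.9)))    # H: one row h(x_i)^T per design point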
The covariance is typically given by \[cov\left(\eta(\mathbf x),\eta(\mathbf x')\right)=V\left(\mathbf x-\mathbf x'\right),\] where \(V\colon\mathbb{R}^p\longrightarrow\mathbb{R}\) must be chosen so that the variance matrix of any finite set of observations is always positive-definite. Bochner’s theorem (Feller 1971 XIX) shows that \(V\left(\cdot\right)\) must be proportional to the characteristic function (CF) of a symmetric probability Borel measure.
Oakley (1999) uses techniques which have clear complex analogues to show that the posterior mean of \(\eta\left(\mathbf x\right)\) is given by
\[\label{eq:emulator} h\left(\mathbf x\right)^T\hat{\boldsymbol\beta}+ \left(cov\left(\mathbf x,\mathbf x_1\right),\ldots,cov\left(\mathbf x,\mathbf x_n\right)\right)^TA^{-1}\left(\mathbf y-H\hat{\boldsymbol\beta}\right). \tag{3} \]
Here \(A\) is an \(n\times n\) matrix of correlations between the observations, \(\sigma^2A_{ij}=cov\left(\eta\left(\mathbf x_i\right),\eta\left(\mathbf x_j\right)\right)\) where \(\sigma^2\) is an overall variance term; and \(\hat{\boldsymbol\beta}=\left(H^TA^{-1} H\right)^{-1}H^TA^{-1}\mathbf y\) is the maximum likelihood estimator for \(\boldsymbol\beta\).
Equation (3) furnishes a cheap approximation to \(\eta\left(\mathbf x\right)\) and is known as the ‘emulator’.
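A minimal base-R sketch of (3) might look as follows; the argument names here (covfun, h, Ainv, and so on) are placeholders rather than functions from the emulator package, and all inputs are assumed to be available already:

> emulator_mean <- function(x, X, y, H, Ainv, betahat, covfun, h) {
+   tx <- apply(X, 1, function(xi) covfun(x, xi))       # (cov(x, x_1), ..., cov(x, x_n))
+   drop(h(x) %*% betahat + tx %*% Ainv %*% (y - H %*% betahat))
+ }

In the package itself this calculation is carried out (for the complex case) by interpolant.quick.complex(), demonstrated below.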
The complex case is directly analogous, with \(\eta\colon\mathbb{C}^p\longrightarrow\mathbb{C}\) and \(\boldsymbol\beta\in\mathbb{C}^q\). Writing \(cov\left(\eta\left(\mathbf z_1\right),\ldots,\eta\left(\mathbf z_n\right)\right)\) \(=\Omega\), so that element \((i,j)\) of matrix \(\Omega\) is \(cov\left(\eta\left(\mathbf z_i\right),\eta\left(\mathbf z_j\right)\right)\), we may relax the requirement that \(\Omega\) be symmetric positive definite to requiring only Hermitian positive definiteness. This allows one to use the characteristic function of any, possibly non-symmetric, random variable \(\Psi\) with density function \(f\colon\mathbb{R}^p\longrightarrow\mathbb{R}\) and characteristic function \(\phi\):
\[\Omega_{ij}= cov\left(\eta\left(\mathbf z_i\right),\eta\left(\mathbf z_j\right)\right) = \phi\left(\mathbf z_i-\mathbf z_j\right).\]
That \(\Omega\) remains Hermitian positive definite may be shown by evaluating a quadratic form with it and arbitrary \(\mathbf w\in\mathbb{C}^n\) and establishing that it is real and non-negative:
\[\begin{aligned} {\mathbf w}^\ast\Omega\mathbf w&= \sum_{i,j}\overline{\mathbf w_i}cov\left(\eta\left(\mathbf z_i\right),\eta\left(\mathbf z_j\right)\right){\mathbf w_j}&\mbox{\small definition of quadratic form}\\ &= \sum_{i,j}\overline{\mathbf w_i}\phi\left(\mathbf z_i-\mathbf z_j\right)\mathbf w_j&\mbox{\small covariance function is the CF of~$\Psi$}\\ &= \sum_{i,j}\overline{\mathbf w_i}\left[\int_{\mathbf t\in\mathbb{C}^n}e^{\mathrm{i}\operatorname{Re}{\mathbf t}^\ast(\mathbf z_i-\mathbf z_j)}f\left(\mathbf t\right)\,d\mathbf t\right]{\mathbf w_j}\qquad&\mbox{\small definition of CF of~$\Psi$}\\ &= \int_{\mathbf t\in\mathbb{C}^n}\left[\sum_{i,j}\overline{\mathbf w_i}e^{\mathrm{i}\operatorname{Re}{\mathbf t}^\ast(\mathbf z_i-\mathbf z_j)}{\mathbf w_j}f\left(\mathbf t\right)\right]\,d\mathbf t & \mbox{\small integration and summation commute}\\ &= \int_{\mathbf t\in\mathbb{C}^n}\left[\sum_{i,j}\overline{\mathbf w_i}e^{\mathrm{i}\operatorname{Re}({\mathbf t}^\ast\mathbf z_i)}\overline{\overline{\mathbf w_j}e^{\mathrm{i}\operatorname{Re}({\mathbf t}^\ast\mathbf z_j)}}f\left(\mathbf t\right)\right]\,d\mathbf t&\mbox{\small expand and rearrange}\\ &= \int_{\mathbf t\in\mathbb{C}^n}\left|\sum_{i}\overline{\mathbf w_i}e^{\mathrm{i}\operatorname{Re}({\mathbf t}^\ast\mathbf z_i)}\right|^2 f\left(\mathbf t\right)\,d\mathbf t&\mbox{\small algebra}\\ &\geqslant 0. &\mbox{\small integral of sum of real positive functions} \end{aligned}\]
(This motivates the definition of the characteristic function of a complex multivariate random variable \({\mathbf Z}\) as \(\mathbb{E}\left[e^{i\operatorname{Re}\left({\mathbf t}^\ast{\mathbf Z}\right)}\right]\)). Thus the covariance matrix is Hermitian positive definite: although its entries are not necessarily real, its eigenvalues are all nonnegative.
In the real case one typically chooses \(\Psi\) to be a zero-mean Gaussian distribution; in the complex case one can use the complex multivariate distribution given in equation (2) which has characteristic function
\[\label{eq:complexgaussianCF} \exp\left(i\operatorname{Re}\left({\mathbf t}^\ast\boldsymbol\mu\right) - \frac{1}{4}{\mathbf t}^\ast\Gamma\mathbf t\right) \tag{4} \]
and, following Hankin (2012) in writing \(\mathfrak{B}=\Gamma/4\), we can write the variance matrix as a product of a (real) scalar \(\sigma^2\) term and
\[\label{eq:ct} c\left(\mathbf t\right) = \exp\left(i\operatorname{Re}\left({\mathbf t}^\ast\boldsymbol\mu\right) - {\mathbf t}^\ast\mathfrak{B}\mathbf t\right). \tag{5} \]
Thus the covariance matrix \(\Omega\) is given by
\[\Omega_{ij}=cov\left(\eta\left(\mathbf z_i\right),\eta\left(\mathbf z_j\right)\right)= \sigma^2c\left(\mathbf z_i-\mathbf z_j\right).\]
In (5), \(\mathfrak{B}\) has the same meaning as in conventional emulator techniques and controls the modulus of the covariance between \(\eta\left(\mathbf z\right)\) and \(\eta\left(\mathbf z'\right)\); \(\boldsymbol\mu\) governs the phase.
Given the above, it seems to be reasonable to follow Oakley (1999) and admit only diagonal \(\mathfrak{B}\); but now distributions with nonzero mean can be considered (compare the real case, which requires a zero mean). A parametrization using diagonal \(\mathfrak{B}\) and complex mean vector requires \(3p\) (real) hyperparameters; compare \(2p\) if \(\mathbb{C}^p\) is identified with \(\mathbb{R}^{2p}\).
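The covariance function (5) is also easy to write down directly. The sketch below assumes the parametrization \(\mathfrak{B}=\mathrm{diag}(\mathtt{scales})\) and \(\boldsymbol\mu=\mathtt{means}\) used by corr_complex() in the next section (worth checking against the package documentation):

> cfun <- function(t, mu, B) exp(1i * Re(Conj(t) %*% mu) - Conj(t) %*% B %*% t)
> z1 <- c(0.1 + 0.2i, 0.3 - 0.1i)           # two arbitrary points in C^2
> z2 <- c(0.5 + 0.1i, 0.2 + 0.4i)
> cfun(z1 - z2, mu = c(1, 1i), B = diag(c(1, 2)))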
Analytic functions of several complex variables are an important and interesting class of objects; Krantz (1987) motivates and discusses the discipline. Formally, consider \(f\colon\mathbb{C}^n\longrightarrow\mathbb{C}\), \(n\geqslant 2\) and write \(f\left(z_1,\ldots,z_n\right)\). Function \(f\) is analytic if it satisfies the Cauchy-Riemann conditions in each variable separately, that is \(\partial f/\partial\overline{z}_j=0\), \(1\leqslant j\leqslant n\).
Such an \(f\) is continuous (due to a “non-trivial theorem of Hartogs”) and continuously differentiable to arbitrarily high order. Krantz (1987) goes on to state some results which are startling if one’s exposure to complex analysis is restricted to functions of a single variable: for example, any isolated singularity is removable.
The natural definition of complex Gaussian processes above, together with the features of analytic functions of several complex variables, suggests that a complex emulation of analytic functions of several complex variables might be a useful technique.
The ideas presented above, and the cmvnorm package, can now be used to sample directly from an appropriate complex Gaussian distribution and estimate the roughness parameters:
> val <- latin.hypercube(40, 2, names = c("a", "b"), complex = TRUE)
> head(val)
                  a              b
[1,] 0.7375+0.2375i 0.2375+0.7125i
[2,] 0.6875+0.5875i 0.1375+0.3375i
[3,] 0.4625+0.5375i 0.9875+0.5875i
[4,] 0.7875+0.0625i 0.0625+0.7875i
[5,] 0.3875+0.0375i 0.5875+0.7625i
[6,] 0.2125+0.5625i 0.7625+0.9625i
(function latin.hypercube() is used to generate a random complex design matrix). We may now specify a variance matrix using simple values for the roughness hyperparameters \(\mathfrak{B}=\left(\begin{smallmatrix}1&0\\0&2\end{smallmatrix}\right)\) and \(\boldsymbol\mu=\left(1,i\right)^T\):
> true_scales <- c(1, 2)
> true_means <- c(1, 1i)
> A <- corr_complex(val, means = true_means, scales = true_scales)
> round(A[1:4, 1:4], 2)
           [,1]       [,2]       [,3]       [,4]
[1,] 1.00+0.00i 0.59-0.27i 0.25-0.10i 0.89+0.11i
[2,] 0.59+0.27i 1.00+0.00i 0.20+0.00i 0.42+0.26i
[3,] 0.25+0.10i 0.20+0.00i 1.00+0.00i 0.10+0.06i
[4,] 0.89-0.11i 0.42-0.26i 0.10-0.06i 1.00+0.00i
Function corr_complex() is a complex generalization of corr(); matrix A is Hermitian positive-definite:
> all(eigen(A)$values > 0)
[1] TRUE
It is now possible to make a single multivariate observation \(d\) of this process, using \(\boldsymbol\beta=\left(1,1+i,1-2i\right)^T\):
> true_beta <- c(1, 1+1i, 1-2i)
> d <- drop(rcmvnorm(n = 1, mean = regressor.multi(val) %*% true_beta, sigma = A))
> head(d)
[1] 3.212719+1.594901i 1.874278+0.345517i 3.008503-0.767618i 3.766526+2.071882i
[5] 3.712913+0.800983i 3.944167+0.924833i
Thus d is a single observation from a complex multivariate Gaussian distribution. Most of the functions of the emulator package operate without modification. Thus betahat.fun(), which calculates the maximum likelihood estimate \(\hat{\boldsymbol\beta}=\left({H}^\ast A^{-1}H\right)^{-1}{H}^\ast A^{-1}\mathbf y\), takes complex values directly:
> betahat.fun(val, solve(A), d)
              const                   a                   b 
0.593632-0.0128655i 0.843608+1.0920437i 1.140372-2.5053751i 
However, because the likelihood function is different, the interpolant() functionality is implemented in the cmvnorm package by interpolant.quick.complex(), named in analogy to function interpolant.quick() of package emulator.
For example, it is possible to evaluate the posterior distribution of the process at \((0.5,0.3+0.1i)\), a point at which no observation has been made:
> interpolant.quick.complex(rbind(c(0.5, 0.3+0.1i)), d, val, solve(A),
+ scales = true_scales, means = true_means, give.Z = TRUE)
$mstar.star
[1] 1.706402-1.008601i

$Z
[1] 0.203295

$prior
[1] 1.608085-0.104419i
Thus the posterior distribution for the process is complex Gaussian at this point with a mean of about \(1.71-1.01i\) and a variance of about \(0.2\).
These techniques are now used to emulate an analytic function of several complex variables. A complex function’s being analytic is a very strong restriction; Needham (2004) uses ‘rigidity’ to describe the severe constraint that analyticity represents.
Here the Weierstrass \(\sigma\)-function (Chandrasekharan 1985) is chosen as an example, on the grounds that Littlewood and Offord (1948) consider it to be a typical entire function in a well-defined sense. The elliptic package (Hankin 2006) is used for numerical evaluation.
The \(\sigma\)-function takes a primary argument \(z\) and two invariants \(g_1,g_2\), so a three-column complex design matrix is required:
> library("elliptic")
> valsigma <- 2 + 1i + round(latin.hypercube(30, 3,
+ names = c("z", "g1", "g2"), complex = TRUE)/4, 2)
> head(valsigma)
              z         g1         g2
[1,] 2.17+1.15i 2.09+1.22i 2.21+1.09i
[2,] 2.11+1.01i 2.04+1.03i 2.25+1.15i
[3,] 2.10+1.04i 2.15+1.00i 2.22+1.20i
[4,] 2.13+1.10i 2.24+1.21i 2.01+1.16i
[5,] 2.20+1.00i 2.20+1.06i 2.08+1.08i
[6,] 2.05+1.10i 2.19+1.04i 2.11+1.03i
(an offset is needed because \(\sigma\left(z,g_1,g_2\right)=z+\operatorname{\mathcal{O}}\left(z^5\right)\)). The \(\sigma\)-function can now be evaluated at the points of the design matrix:
> dsigma <- apply(valsigma, 1, function(u) sigma(u[1], g = u[2:3]))
One way of estimating the roughness parameters is to use maximum likelihood. The likelihood for any set of roughness parameters is given by Oakley (1999) as \(\left(\sigma^2\right)^{-\frac{n-q}{2}}\left|A\right|^{-1/2} \left|H^TA^{-1}H\right|^{-1/2}\), with complex generalization \(\left(\sigma^2\right)^{-(n-q)}\left|A\right|^{-1} \left|{H}^\ast A^{-1}H\right|^{-1}\), which is calculated in the package by function scales.likelihood.complex(); this can be used to return the log-likelihood for a specific set of roughness parameters:
> scales.likelihood.complex(scales = c(1, 1, 2), means = c(1, 1+1i, 1-2i),
+ zold = valsigma, z = dsigma, give_log = TRUE)
[1] 144.5415
Numerical methods can then be used to find the maximum likelihood estimate. Because function optim() optimizes over \(\mathbb{R}^n\), helper functions are again needed which translate from the optimand to scales and means:
> scales <- function(x) exp(x[c(1, 2, 2)])
> means <- function(x) x[c(3, 4, 4)] + 1i * x[c(5, 6, 6)]
Because the diagonal elements of \(\mathfrak{B}\) are strictly positive, their logarithms are optimized, following Hankin (2005); it is implicitly assumed that the scales and means associated with \(g_1\) and \(g_2\) are equal.
> objective <- function(x, valsigma, dsigma)
+ -scales.likelihood.complex(scales = scales(x), means = means(x),
+ zold = valsigma, z = dsigma)
> start <- c(-0.538, -5.668, 0.6633, -0.0084, -1.73, -0.028)
> jj <- optim(start, objective, valsigma = valsigma, dsigma = dsigma,
+ method = "SANN", control = list(maxit = 100))
> (u <- jj$par)
[1] -0.5380 -5.6680  0.6633 -0.0084 -1.7300 -0.0280
Function corr_complex() may now be used to calculate the covariance of the observations:
> Asigma <- corr_complex(z1 = valsigma, scales = scales(u), means = means(u))
So now we can compare the emulator against the “true” value:
> interpolant.quick.complex(rbind(c(2+1i, 2+1i, 2+1i)), zold = valsigma,
+ d = dsigma, Ainv = solve(Asigma), scales = scales(u), means = means(u))
[1] 3.078956+1.259993i
> sigma(2 + 1i, g = c(2 + 1i, 2 + 1i))
[1] 3.078255+1.257819i
showing reasonable agreement. It is also possible to test the hypothesis \(H_\mathbb{R}\colon\boldsymbol\mu\in\mathbb{R}^2\) (that is, the variance matrix \(A\) is real), by calculating the likelihood ratio of the unconstrained model (5) to that obtained by \(H_\mathbb{R}\). This may be achieved by constraining the optimization to satisfy \(\boldsymbol\mu\in\mathbb{R}^2\):
> ob2 <- function(x, valsigma, dsigma)
+ -scales.likelihood.complex(scales = scales(x), means = c(0, 0, 0),
+ zold = valsigma, z = dsigma)
> jjr <- optim(u[1:2], ob2, method = "SANN", control = list(maxit = 1000),
+ valsigma = valsigma, dsigma = dsigma)
> (ur <- jjr$par)
[1]  0.2136577 -4.2640825
so the test statistic \(D\) is given by
> LR <- scales.likelihood.complex(scales = scales(ur), means = c(0, 0, 0),
+ zold = valsigma, z = dsigma)
> LC <- scales.likelihood.complex(scales = scales(u), means = means(u),
+ zold = valsigma, z = dsigma)
> (D <- 2 * (LC - LR))
[1] 22.17611
Observing that \(D\) is in the tail region of its asymptotic distribution, \(\chi^2_{3}\), the hypothesis \(H_\mathbb{R}\) may be rejected.
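A quick check of the corresponding tail probability, assuming the \(\chi^2_3\) asymptotics quoted above:

> pchisq(D, df = 3, lower.tail = FALSE)     # roughly 6e-05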
The cmvnorm package for the complex multivariate Gaussian distribution has been introduced and motivated. The Gaussian process has been generalized to the complex case, and a complex generalization of the emulator technique has been applied to an analytic function of several complex variables. The complex variance matrix was specified using a novel parameterization which accommodated non-real covariances in the context of circularly symmetric random variables. Further work might include numerical support for the complex multivariate Student \(t\) distribution.