orthoDr is an R package that solves dimension reduction problems using an orthogonality constrained optimization approach. The package serves as a unified framework for many regression and survival analysis dimension reduction models that utilize semiparametric estimating equations. The main computational machinery of orthoDr is a first-order algorithm developed by (Wen and Yin 2012) for optimization on the Stiefel manifold. We implement the algorithm through Rcpp and OpenMP for fast computation. In addition, we developed a general-purpose solver for such constrained problems with user-specified objective functions, which works as a drop-in version of optim(). The package also serves as a platform for future methodology developments along this line of work.
Dimension reduction is a long-standing problem in statistics and data science. While the traditional principal component analysis (Jolliffe 1986) and related works provide a way of reducing the dimension of the covariates, the term “sufficient dimension reduction” more commonly refers to a series of regression works originating from the seminal paper on sliced inverse regression (Li 1991). In such problems, we observe an outcome \(Y \in \mathbb{R}\), along with a set of covariates \(X = (X_1, \ldots, X_p)^{ \mathrm{\scriptscriptstyle T} }\in \mathbb{R}^p\). Dimension reduction models aim to model the conditional distribution of \(Y\) given \(X\), where their relationship satisfies, for some \(p \times d\) matrix \({ \mathbf{B} }= (\boldsymbol\beta_1, \ldots, \boldsymbol\beta_d)\),
\[\label{dr:model} Y = h(X, \epsilon) = h({ \mathbf{B} }^{ \mathrm{\scriptscriptstyle T} }X, \epsilon) = h(\boldsymbol\beta_1^{ \mathrm{\scriptscriptstyle T} }X, \ldots, \boldsymbol\beta_d^{ \mathrm{\scriptscriptstyle T} }X, \epsilon), \tag{1}\] where \(\epsilon\) represents any error terms and \(h\), with a slight abuse of notation, represents the link function using \(X\) or \({ \mathbf{B} }^{ \mathrm{\scriptscriptstyle T} }X\). One can easily notice that when \(d\), the number of columns in \({ \mathbf{B} }\), is less than \(p\), a dimension reduction is achieved, in the sense that only a \(d\)-dimensional summary of the covariates is necessary for fully describing the relationship (Cook 2009). Alternatively, this relationship can be represented as (Zeng and Zhu 2010) \[\label{dr:perp} Y \perp X \mid { \mathbf{B} }^{ \mathrm{\scriptscriptstyle T} }X, \tag{2}\] which again describes the sufficiency of \({ \mathbf{B} }^{ \mathrm{\scriptscriptstyle T} }X\). Following the work of (Li 1991), a variety of methods have been proposed. An incomplete list of the literature includes (Cook and Weisberg 1991; Cook and Lee 1999; Chiaromonte et al. 2002; Yin and Cook 2002; Zhu et al. 2006; Li and Wang 2007; Cook et al. 2010; Zhu et al. 2010a; Zhu et al. 2010b; Lee et al. 2013; Cook and Zhang 2014; Li and Zhang 2017). For a more comprehensive review of the literature, we refer the readers to (Ma and Zhu 2013a). One advantage of many early developments in dimension reduction models is that only a singular value decomposition is required to obtain the reduced space parameters \({ \mathbf{B} }\) through inverse sliced averaging. However, this comes at the price of the linearity assumption (Li 1991), which is almost the same as assuming that the covariates follow an elliptical distribution (Li and Dong 2009; Dong and Li 2010). Moreover, some methods require more restrictive assumptions on the covariance structure (Cook and Weisberg 1991). Many methods attempt to avoid these assumptions by resorting to nonparametric estimation. The most successful ones include (Xia et al. 2002) and (Xia 2007). However, a recent line of work started by (Ma and Zhu 2012a,b; 2013b) shows that by formulating the problem into semiparametric estimating equations, not only can we avoid many distributional assumptions on the covariates, but the obtained estimator of \({ \mathbf{B} }\) also enjoys efficiency. Extending this idea, (Sun et al. 2017) developed a framework for dimension reduction in survival analysis using counting process based estimating equations. The method performs significantly better than existing dimension reduction methods for censored data such as (Li et al. 1999; Xia et al. 2010) and (Lu and Li 2011). Another recent development that also utilizes this semiparametric formulation is (Zhao et al. 2017), in which an efficient estimator is derived.
Although there are celebrated theoretical and methodological advances, estimating \({ \mathbf{B} }\) through the semiparametric estimating equations is still not a trivial task. Two challenges remain: first, with a careful look at the model definition (1), we quickly notice that the parameters are not identifiable unless certain constraints are placed. In fact, if we let \(\mathbf{A}\) be any \(d \times d\) full rank matrix, then \(({ \mathbf{B} }\mathbf{A})^{ \mathrm{\scriptscriptstyle T} }X\) preserves the same column space information as \({ \mathbf{B} }^{ \mathrm{\scriptscriptstyle T} }X\); hence, we can define \(h^\ast(({ \mathbf{B} }\mathbf{A})^{ \mathrm{\scriptscriptstyle T} }X, \epsilon)\) accordingly to retain exactly the same model as (1). While traditional methods can utilize a singular value decomposition (SVD) of the estimation matrix to identify the column space of \({ \mathbf{B} }\) instead of recovering each parameter (Cook and Lee 1999), this appears to be a difficult task in the semiparametric estimating equation framework. One challenge is that if we let \({ \mathbf{B} }\) change freely, the rank of the \({ \mathbf{B} }\) matrix cannot be guaranteed, which makes the formulation meaningless. Hence, for both computational and theoretical concerns, (Ma and Zhu 2012a) resorts to an approach that fixes the upper \(d \times d\) block of \({ \mathbf{B} }\) as an identity matrix, i.e., \({ \mathbf{B} }= (\mathbf{I}_{d \times d}, \,{ \mathbf{B} }^{\ast{ \mathrm{\scriptscriptstyle T} }})^{ \mathrm{\scriptscriptstyle T} }\), where \({ \mathbf{B} }^\ast\) is a \((p-d) \times d\) matrix that sits in the lower block of \({ \mathbf{B} }\). Hence, in this formulation, only \({ \mathbf{B} }^\ast\) needs to be solved. While the solution is guaranteed to be rank \(d\) in this formulation, as pointed out by (Sun et al. 2017), this approach still requires correctly identifying and reordering the covariate vector \(\mathbf{x}\) such that the first \(d\) entries are indeed important, which creates another daunting task. Another challenge is that solving semiparametric estimating equations requires the estimation of nonparametric components. These components need to be computed through kernel estimations, usually of the Nadaraya-Watson type, which significantly increases the computational intensity of the method considering that these components need to be recalculated at each iteration of the optimization. To date, these drawbacks remain the strongest criticism of the semiparametric approaches. Hence, although they enjoy superior asymptotic properties, they are not as attractive as traditional sliced inverse type approaches such as (Li 1991) and (Cook and Weisberg 1991).
The goal of our orthoDr package is to develop a computationally efficient optimization platform for solving the semiparametric estimating equation approaches proposed in (Ma and Zhu 2013b), (Sun et al. 2017) and possibly any future work along this line. Revisiting the rank preserving problem of \({ \mathbf{B} }\) mentioned above, we can essentially set a constraint that \[\label{B:constraint} { \mathbf{B} }^{ \mathrm{\scriptscriptstyle T} }{ \mathbf{B} }= \mathbf{I}, \tag{3}\] where \(\mathbf{I}\) is a \(d \times d\) identity matrix. A solution of the estimating equations that satisfies the constraint will correctly identify the dimensionality-reduced subspace. This is known as optimizing on the Stiefel manifold, which is a class of well-studied problems (Edelman et al. 1998). A recent R development (Martin et al. 2016) utilizes quasi-Newton methods such as the well known BFGS method on the Riemannian manifold (Huang et al. 2018). However, second-order optimization methods require forming and storing large Hessian matrices. In addition, they may not be easily adapted to penalized optimization problems, which often appear in high dimensional statistical problems (Zhu et al. 2006; Li and Yin 2008). On the other hand, first-order optimization methods are faster in each iteration, and may also incorporate penalization in a more convenient way (Wen et al. 2010). By utilizing the techniques developed by (Wen and Yin 2012), we can effectively search for the solution on the Stiefel manifold, and this becomes the main machinery of our package. Further incorporating the popular Rcpp (Eddelbuettel and François 2011) and RcppArmadillo (Eddelbuettel and Sanderson 2014) toolboxes and OpenMP parallel computing, the computational time of our package is comparable to state-of-the-art existing implementations (such as ManifoldOptim), making the semiparametric dimension reduction models more accessible in practice.
The purpose of this article is to provide a general overview of the orthoDr package (version 0.6.2) and to give some concrete examples that demonstrate its advantages. orthoDr is available from the Comprehensive R Archive Network (CRAN) at https://CRAN.R-project.org/package=orthoDr and GitHub at https://github.com/teazrq/orthoDr. We begin by explaining the underlying formulation of the estimating equation problem and the parameter updating scheme that preserves orthogonality. Next, the software is introduced in detail using simulated data and real data as examples. We further demonstrate an example that utilizes the package as a general purpose solver. We also investigate the computational time of the package compared with existing solvers. Future plans for extending the package to other dimension reduction problems are also discussed.
To give a concrete example of the estimating equations, we use the semiparametric inverse regression approach defined in (Sun et al. 2017) to demonstrate the calculation. Following the common notations in the survival analysis literature, let \(X_i\) be the observed \(p\) dimensional covariate values of subject \(i\), \(Y_i = \min(T_i, C_i)\) be the observed survival time, with failure time \(T_i\) and censoring time \(C_i\), and \(\delta_i = I(T_i \leq C_i)\) be the censoring indicator. A set of i.i.d. observations \(\{X_i, Y_i, \delta_i\}_{i=1}^n\) is observed. We are interested in the situation where the conditional distribution of the failure time \(T_i | X_i\) depends only on the reduced space \({ \mathbf{B} }^{ \mathrm{\scriptscriptstyle T} }X_i\). Hence, to estimate \({ \mathbf{B} }\), the estimating equation is given by \[\label{eeq:IR-Semi} \widehat{\psi}_n \big({ \mathbf{B} }\big)\!=\!\text{vec} \Bigg[ \!\frac{1}{n}\sum_{i=1}^n \sum_{\substack{j=1 \\ \delta_j = 1}}^n \!\left\{ X_i \!-\! \widehat E \big(X \big| Y \geq Y_j, { \mathbf{B} }^{ \mathrm{\scriptscriptstyle T} }X_i\big) \right\} \widehat \varphi^{ \mathrm{\scriptscriptstyle T} }(Y_j) \left\{\delta_i I(j \!=\! i) \!-\! \widehat \lambda\big(Y_j | { \mathbf{B} }^{ \mathrm{\scriptscriptstyle T} }X_i\big) \right\} \Bigg], \tag{4}\] where the operator \(\text{vec}(\cdot)\) is the vectorization of a matrix. Several components are estimated nonparametrically: the function \(\widehat\varphi(u)\) is estimated by sliced averaging, \[\label{eq:phi} \widehat \varphi(u)=\frac{\sum_{i=1}^n X_i I\big( u \leq Y_i < u + \triangle u, \delta_i = 1\big) }{\sum_{i=1}^n I\big( u \leq Y_i < u+\triangle u, \delta_i = 1\big)} - \frac{\sum_{i=1}^n X_i I\big(Y_i \geq u\big)}{\sum_{i=1}^n I\big(Y_i \geq u\big)}, \tag{5}\] where \(\triangle u\) is chosen such that \(hn\) observations lie between \(u\) and \(u + \triangle u\). The conditional mean function \(\widehat E \big(X | Y \geq u, { \mathbf{B} }^{ \mathrm{\scriptscriptstyle T} }X \! = \! z\big)\!\) is estimated through the Nadaraya-Watson kernel estimator \[\label{eq:condX} \widehat E \big(X | Y \geq u, { \mathbf{B} }^{ \mathrm{\scriptscriptstyle T} }X \! = \! z\big)\! = \frac{\sum_{i=1}^n X_i K_h({ \mathbf{B} }^{ \mathrm{\scriptscriptstyle T} }X_i \!-\! z)I(Y_i \geq u) }{\textstyle\sum_{i=1}^n K_h({ \mathbf{B} }^{ \mathrm{\scriptscriptstyle T} }X_i \!-\! z) I(Y_i \geq u)}. \tag{6}\] In addition, the conditional hazard function at any time point \(u\) can be estimated by \[\label{eq:chf} \widehat \lambda(u | { \mathbf{B} }^{ \mathrm{\scriptscriptstyle T} }X = z) = \frac{\sum_{i=1}^n K_b(Y_i-u) \delta_i K_h\big({ \mathbf{B} }^{ \mathrm{\scriptscriptstyle T} }X_i - z\big)}{\sum_{j=1}^n I\big(Y_j \geq u\big) K_h \big({ \mathbf{B} }^{ \mathrm{\scriptscriptstyle T} }X_j - z\big)}. \tag{7}\]
However, this substantially increases the computational burden since the double kernel estimator requires \({\cal O}(n^2)\) flops to calculate the hazard at any given \(u\) and \(z\). Instead, an alternative version based on (Dabrowska 1989) can greatly reduce the computational cost without compromising the performance. Hence, we estimate the conditional hazard function by
\[ \label{eq:lambda} \widehat \lambda(u | { \mathbf{B} }^{ \mathrm{\scriptscriptstyle T} }X = z) = \frac{\sum_{i=1}^n I\big(Y_i = u\big)I\big(\delta_i = 1\big)K_h\big({ \mathbf{B} }^{ \mathrm{\scriptscriptstyle T} }X_i - z\big)}{\sum_{i=1}^n I\big(Y_i \geq u\big) K_h\big({ \mathbf{B} }^{ \mathrm{\scriptscriptstyle T} }X_i - z\big)}, \tag{8}\]
which requires only \({\cal O}(n)\) flops. In the above equations (5), (6) and (8), \(h\) is a pre-specified kernel bandwidth and \(K_h(\cdot) = K(\cdot/h)/h\), where \(K(\cdot)\) is the Gaussian kernel function. By utilizing the method of moments estimators (Hansen 1982) and noticing our constraint for identifying the column space of \({ \mathbf{B} }\), solving for the solution of the estimating equations (4) is equivalent to \[\begin{aligned} \label{eq:GMM} \text{minimize} \quad & f({ \mathbf{B} }) = \widehat{\psi}_n({ \mathbf{B} })^{ \mathrm{\scriptscriptstyle T} }\widehat{\psi}_n({ \mathbf{B} })\\ \text{subject to} & \quad { \mathbf{B} }^{ \mathrm{\scriptscriptstyle T} }{ \mathbf{B} }= \mathbf{I}. \end{aligned} \tag{9}\] Essentially all other semiparametric dimension reduction models described in (Ma and Zhu 2013b), and more recently (Ma and Zhang 2015), (Xu et al. 2016), (Sun et al. 2017), (Huang and Chiang 2017) and many others, can be estimated in a similar fashion through the above optimization problem. However, due to the difficulty of the constraints and for the purpose of identifiability, all of these methods resort to either fixing the upper block of the \({ \mathbf{B} }\) matrix as an identity matrix or adding a penalty of \(\lVert { \mathbf{B} }^{ \mathrm{\scriptscriptstyle T} }{ \mathbf{B} }- \mathbf{I} \rVert_F\) to preserve the orthogonality constraint. There appears to be no existing method that solves (9) directly. Here, we utilize the approach of (Wen and Yin 2012), which can effectively tackle this problem.
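To make the nonparametric components concrete, the following is a minimal R sketch of the estimators in (6) and (8). It is for illustration only: orthoDr evaluates these quantities in compiled C++ code, and the function and argument names below are ours, not part of the package interface.

# Gaussian product kernel (unnormalized; the constants cancel in the ratios below)
Kh <- function(u, h) exp(-0.5 * rowSums((u / h)^2))

# Equation (6): Nadaraya-Watson estimate of E(X | Y >= u, B'X = z)
cond_mean_x <- function(u, z, B, x, y, h) {
  w <- Kh(sweep(x %*% B, 2, z), h) * (y >= u)
  colSums(x * w) / sum(w)
}

# Equation (8): simplified conditional hazard estimate at time u and B'X = z
cond_hazard <- function(u, z, B, x, y, censor, h) {
  k <- Kh(sweep(x %*% B, 2, z), h)
  sum((y == u) * (censor == 1) * k) / sum((y >= u) * k)
}

Each call costs \({\cal O}(n)\) operations, but these quantities must be re-evaluated over all subjects and event times at every trial value of \({ \mathbf{B} }\), which is why the objective in (9) is expensive to compute.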
The algorithm works in the same fashion as regular gradient descent, except that we need to preserve the orthogonality at each iteration of the update. As described in (Wen and Yin 2012), given any feasible point \({ \mathbf{B} }_0\), i.e., \({{ \mathbf{B} }_0}^{ \mathrm{\scriptscriptstyle T} }{ \mathbf{B} }_0 = \mathbf{I}\), which can always be generated randomly, we update \({ \mathbf{B} }_0\) as follows. Let the \(p \times d\) gradient matrix be \[\begin{aligned} \label{eq:gradient} \mathbf{G} = \left( \frac{\partial f({ \mathbf{B} }_0)}{\partial { \mathbf{B} }_0(i,j) } \right)_{\{i,j\}}. \end{aligned} \tag{10}\] Then, utilizing the Cayley transformation, we have \[\begin{aligned} \label{eq:update} { \mathbf{B} }_{\text{new}} &= \Big(\mathbf{I} + \frac{\tau}{2} \mathbf{A} \Big)^{-1} \Big(\mathbf{I} - \frac{\tau}{2} \mathbf{A} \Big) { \mathbf{B} }_0, \end{aligned} \tag{11}\] with the orthogonality preserving property \({ \mathbf{B} }_{\text{new}}^{ \mathrm{\scriptscriptstyle T} }{ \mathbf{B} }_{\text{new}} = \mathbf{I}\). Here, \(\mathbf{A} = \mathbf{G} {{ \mathbf{B} }_0}^{ \mathrm{\scriptscriptstyle T} }- { \mathbf{B} }_0 \mathbf{G}^{ \mathrm{\scriptscriptstyle T} }\) is a skew-symmetric matrix. It can be shown that \(\{{ \mathbf{B} }_{\text{new}}(\tau)\}_{\tau \geq 0}\) is a descent path. Similar to line search algorithms, we can then find a proper step size \(\tau\) through a curvilinear search. Recursively updating the current value of \({ \mathbf{B} }\), the algorithm stops when the tolerance level is reached. An initial value is also important for the performance of nonconvex optimization problems. A convenient initial value for our framework is the computationally efficient approach developed in (Sun et al. 2017), which only requires an SVD of the estimation matrix.
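As a quick numerical check of the update (11), the following R sketch (with an arbitrary matrix standing in for the gradient in (10), so the values are purely illustrative) verifies that the Cayley transformation keeps the iterate on the Stiefel manifold:

p <- 6; d <- 2
B0 <- qr.Q(qr(matrix(rnorm(p * d), p, d)))  # a feasible starting point, B0'B0 = I
G  <- matrix(rnorm(p * d), p, d)            # placeholder gradient of f at B0, see (10)
A  <- G %*% t(B0) - B0 %*% t(G)             # skew-symmetric matrix
tau <- 0.1                                  # step size, normally chosen by the curvilinear search
Bnew <- solve(diag(p) + tau / 2 * A, (diag(p) - tau / 2 * A) %*% B0)
crossprod(Bnew)                             # equals the identity matrix up to numerical error

In the package, this update and the curvilinear search over \(\tau\) are carried out in C++.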
There are several main functions in the orthoDr package: orthoDr_surv, orthoDr_reg and ortho_optim. They correspond to the survival model described previously (Sun et al. 2017), the regression model in (Ma and Zhu 2012a), and a general constrained optimization function, respectively. In this section, we demonstrate the details of using these main functions and illustrate them with examples.
The orthoDr_surv function implements the optimization problem defined in Equation (9), where the kernel estimations and various other quantities are implemented and calculated within C++. Note that, in addition to the method defined previously, some simplified versions are also implemented, such as the counting process inverse regression models and the forward regression models, which are all described in (Sun et al. 2017). These specifications can be made using the method parameter. A routine call of the function orthoDr_surv proceeds as
orthoDr_surv(x, y, censor, method, ndr, B.initial, bw, keep.data,
control, maxitr, verbose, ncore)
- x: A matrix or data.frame for features (numerical only).
- y: A vector of observed survival times.
- censor: A vector of censoring indicators.
- method: The estimating equation method used:
  - "dm" (default): semiparametric inverse regression given in (4).
  - "dn": counting process inverse regression.
  - "forward": forward regression model with one structural dimension.
- ndr: The number of structural dimensions. For method = "dn" or "dm", the default is 2. For method = "forward" only one structural dimension is allowed, hence the parameter is suppressed.
- B.initial: Initial B values. Unless specifically interested, this should be left as the default, which uses the computationally efficient approach (the CPSIR() function) in (Sun et al. 2017) as the initial value. If specified, it must be a matrix with ncol(x) rows and ndr columns. The matrix will be processed by Gram-Schmidt if it does not satisfy the orthogonality constraint.
- bw: A kernel bandwidth, assuming each variable has unit variance. By default we use the Silverman rule-of-thumb formula (Silverman 1986) to determine the bandwidth, \[\text{\texttt{bw}} = 1.06\times\left(\frac4{d+2}\right)^\frac1{d+4}n^{-\frac1{d+4}}.\] This bandwidth can be computed using the silverman(n, d) function in our package.
- keep.data: Should the original data be kept for prediction? Default is FALSE.
- control: A list of tuning variables for optimization, including the convergence criteria. In particular, epsilon is the step size for numerically approximating the gradient; ftol, gtol, and btol are tolerance levels for the objective function, the gradient, and the parameter estimates, respectively, used to judge convergence. The default values are selected based on (Wen and Yin 2012).
- maxitr: Maximum number of iterations. Default is 500.
- verbose: Should information be displayed? Default is FALSE.
- ncore: Number of cores for parallel computing when approximating the gradients numerically. The default is the maximum number of threads.

We demonstrate the usage of the orthoDr_surv function by solving a problem with generated survival data.
# generate some survival data with two structural dimensions
R> set.seed(1)
R> N = 350; P = 6; dataX = matrix(rnorm(N*P), N, P)
R> failEDR = as.matrix(cbind(c(1, 1, 0, 0, 0, 0, rep(0, P-6)),
R+                           c(0, 0, 1, -1, 0, 0, rep(0, P-6))))
R> censorEDR = as.matrix(c(0, 1, 0, 1, 1, 1, rep(0, P-6)))
R> T = exp(-2.5 + dataX %*% failEDR[,1] + 0.5*(dataX %*%
R+         failEDR[,1])*(dataX %*% failEDR[,2]) + 0.25*log(-log(1-runif(N))))
R> C = exp( -0.5 + dataX %*% censorEDR + log(-log(1-runif(N))))
R> Y = pmin(T, C)
R> Censor = (T < C)

# fit the model
R> orthoDr.fit = orthoDr_surv(dataX, Y, Censor, ndr = 2)
R> orthoDr.fit

             [,1]        [,2]
[1,] -0.689222616  0.20206497
[2,] -0.670750726  0.19909057
[3,] -0.191817963 -0.66623300
[4,]  0.192766630  0.68605407
[5,]  0.005897188  0.02021414
[6,]  0.032829356  0.06773089
To evaluate the accuracy of this estimation, a distance function distance() can be used. This function calculates the distance between the column spaces generated by the true \({ \mathbf{B} }\) and the estimated version \(\widehat{{ \mathbf{B} }}\). Note that the sine angle distance between the two column spaces is closely related to the canonical correlation between the two matrices \({ \mathbf{B} }\) and \(\widehat{{ \mathbf{B} }}\).

distance(s1, s2, method, x)

- s1: A matrix for the first column space (e.g., \({ \mathbf{B} }\)).
- s2: A matrix for the second column space (e.g., \(\widehat{{ \mathbf{B} }}\)).
- method: The distance measure to be used:
  - "dist": the Frobenius norm distance between the projection matrices of the two given matrices, where for any given matrix \({ \mathbf{B} }\), the projection matrix is \(\mathbf{P} = { \mathbf{B} }({ \mathbf{B} }^{ \mathrm{\scriptscriptstyle T} }{ \mathbf{B} })^{-1} { \mathbf{B} }^{ \mathrm{\scriptscriptstyle T} }\).
  - "trace": the trace correlation between the two projection matrices, \(\textnormal{tr}(\mathbf{P} \widehat{\mathbf{P}})/d\), where \(d\) is the number of columns of the given matrix.
  - "canonical": the canonical correlation between \({ \mathbf{B} }^{ \mathrm{\scriptscriptstyle T} }X\) and \(\widehat{{ \mathbf{B} }}^{ \mathrm{\scriptscriptstyle T} }X\).
  - "sine": the sine angle distance \(\|\sin{\Theta} \|_F\) obtained from \(\mathbf{P}_1(\mathbf{I} - \mathbf{P}_2) = \mathbf{U} \sin{\Theta} \mathbf{V}^{ \mathrm{\scriptscriptstyle T} }\).
- x: The design matrix \(X\) (default = NULL), required only if method = "canonical" is used.

We compare the accuracy of the estimates obtained with method = "dm" and method = "dn". Note that the "dm" method enjoys the double robustness property of the estimating equations, hence its result is usually better.
# Calculate the distance to the true parameters
R> distance(failEDR, orthoDr.fit$B, "dist")
[1] 0.1142773

# Compare with the counting process inverse regression model
R> orthoDr.fit1 = orthoDr_surv(dataX, Y, Censor, method = "dn", ndr = 2)
R> distance(failEDR, orthoDr.fit1$B, "dist")
[1] 0.1631814
The orthoDr_reg function implements the semiparametric dimension reduction methods proposed in (Ma and Zhu 2012a). A routine call of the function orthoDr_reg proceeds as

orthoDr_reg(x, y, method, ndr, B.initial, bw, keep.data, control,
            maxitr, verbose, ncore)

- x: A matrix or data.frame for features (numerical only).
- y: A vector of observed continuous outcomes.
- method: We currently implement two methods, the semiparametric sliced inverse regression method ("sir") and the semiparametric principal Hessian directions method ("phd"):
  - "sir": the semiparametric sliced inverse regression method solves the sample version of the estimating equation \[E\Big(\big[E(X|Y)-E\{E(X|Y)|{ \mathbf{B} }^{ \mathrm{\scriptscriptstyle T} }X\}\big]\big[X-E(X|{ \mathbf{B} }^{ \mathrm{\scriptscriptstyle T} }X)\big]^{ \mathrm{\scriptscriptstyle T} }\Big) = 0.\]
  - "phd": the semiparametric principal Hessian directions method estimates \({ \mathbf{B} }\) by solving the sample version of \[E\big[\{Y-E(Y|{ \mathbf{B} }^{ \mathrm{\scriptscriptstyle T} }X)\}\{XX^{ \mathrm{\scriptscriptstyle T} }- E(XX^{ \mathrm{\scriptscriptstyle T} }| { \mathbf{B} }^{ \mathrm{\scriptscriptstyle T} }X)\}\big] = 0.\]
- ndr: The number of structural dimensions (default is 2).
- B.initial: Initial B values. For each method, the initial values are taken from the corresponding traditional inverse regression approach using the dr package. The obtained matrix will be processed by Gram-Schmidt for orthogonality.
- bw, keep.data, control, maxitr, verbose and ncore are exactly the same as those in the orthoDr_surv function.

To demonstrate the usage of orthoDr_reg, we consider the problem of dimension reduction by fitting a semi-PHD model proposed by (Ma and Zhu 2012a).
R> set.seed(1)
R> N = 100; P = 4; dataX = matrix(rnorm(N*P), N, P)
R> Y = -1 + dataX[,1] + rnorm(N)
R> orthoDr_reg(dataX, Y, ndr = 1, method = "phd")

Subspace for regression model using phd approach:
            [,1]
[1,]  0.99612339
[2,]  0.06234337
[3,] -0.04257601
[4,] -0.04515279
The estimating equations of the dimension reduction problems in the survival and regression settings usually have a complicated form. In particular, multiple kernel estimations are involved, which makes it difficult to derive the gradients analytically. As an alternative, numerically approximated gradients are implemented using OpenMP. A comparison between a single core and multiple cores (4 cores) is given in the following example. Results from 20 independent simulation runs are summarized in Table 1. The data generating procedure used in this example is the same as the survival data used in Section 3.1. All simulations are performed on an i7-4770K CPU.
R> t0 = Sys.time()
R> dn.fit = orthoDr_surv(dataX, Y, Censor, method = "dn", ndr = ndr,
R+                       ncore = 4, control = list(ftol = 1e-6))
R> Sys.time() - t0
|                             | # of cores: 1 | # of cores: 4 |
|-----------------------------|---------------|---------------|
| \(n\!=\!350\), \(p\!=\!6\)  | 3.9831        | 1.2741        |
| \(n\!=\!350\), \(p\!=\!12\) | 12.7780       | 3.4850        |

Table 1: Computation time comparison between a single core and 4 cores.
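The numerical gradient that drives these timings is a simple entrywise finite-difference approximation of (10). A serial R sketch is given below; the package performs this step in C++ with OpenMP, and num_grad is an illustrative helper rather than part of the orthoDr interface (eps plays the role of the epsilon element of the control list).

# central-difference approximation of the gradient matrix G in (10)
num_grad <- function(fn, B, eps = 1e-6, ...) {
  G <- matrix(0, nrow(B), ncol(B))
  for (i in seq_len(nrow(B))) {
    for (j in seq_len(ncol(B))) {
      Bp <- B; Bp[i, j] <- Bp[i, j] + eps
      Bm <- B; Bm[i, j] <- Bm[i, j] - eps
      G[i, j] <- (fn(Bp, ...) - fn(Bm, ...)) / (2 * eps)
    }
  }
  G
}

Each entry requires two evaluations of the objective function, and each evaluation involves kernel estimation, so distributing these evaluations across cores yields the speedup reported in Table 1.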
ortho_optim is a general purpose optimization function that can incorporate any user defined objective function \(f\) (and gradient function if supplied). The usage of ortho_optim is similar to the widely used optim() function. A routine call of the function proceeds as

ortho_optim(B, fn, grad, ..., maximize, control, maxitr, verbose)

- B: Initial B values. Must be a matrix, and the columns are subject to the orthogonality constraints. It will be processed by Gram-Schmidt if not orthogonal.
- fn: A function that calculates the objective function value. The first argument should be B. Returns a single value.
- grad: A function that calculates the gradient. The first argument should be B. Returns a matrix with the same dimension as B. If not specified, a numerical approximation is used.
- ...: Arguments passed to fn and grad besides B.
- maximize: By default, the solver will try to minimize the objective function unless maximize = TRUE.
- maxitr, verbose and ncore work in the same way as introduced in the previous sections.

To demonstrate the simple usage of ortho_optim as a drop-in replacement for optim(), we consider the problem of searching for the first principal component of a data matrix.
# an example of searching for the first principal component
R> library(pracma)
R> set.seed(1)
R> N = 400; P = 100; X = scale(matrix(rnorm(N*P), N, P), scale = FALSE)
R> w = gramSchmidt(matrix(rnorm(P), P, 1))$Q
R> fx <- function(w, X) t(w) %*% t(X) %*% X %*% w
R> gx <- function(w, X) 2*t(X) %*% X %*% w

# fit the model
R> fit = ortho_optim(w, fx, gx, X = X, maximize = TRUE, verbose = 0)
R> head(fit$B)

            [,1]
[1,]  0.01268226
[2,] -0.09065592
[3,] -0.01471700
[4,]  0.10583958
[5,] -0.02656409
[6,] -0.04186199

# compare results with the prcomp() function
R> distance(fit$B, as.matrix(prcomp(X)$rotation[, 1]), type = "dist")
[1] 1.417268e-05
The ManifoldOptim (Martin et al. 2016) package is known for solving optimization problems on manifolds. We consider the problem of optimizing the Brockett cost function (Huang et al. 2018) on the Stiefel manifold with objective and gradient functions written in R. The problem can be stated as
\[\min_{{ \mathbf{B} }^{ \mathrm{\scriptscriptstyle T} }{ \mathbf{B} }= \mathbf{I}_{p}, \, \, { \mathbf{B} }\in \mathbb{R}^{n\times p}} \text{trace}({ \mathbf{B} }^{ \mathrm{\scriptscriptstyle T} }X { \mathbf{B} }D),\]
where \(X \in \mathbb{R}^{n\times n}\), \(X = X^{ \mathrm{\scriptscriptstyle T} }\), \(D = \text{diag}(\mu_1,\mu_2,...,\mu_p)\) with \(\mu_1\geq\mu_2\geq...\geq\mu_p\). We generate the data with exactly the same procedure as the documentation file provided in the ManifoldOptim package, with only a change of notation. For our orthoDr package, the following code is used to specify the objective and gradient functions and solve for the optimal \({ \mathbf{B} }\).
R> n = 150; p = 5; set.seed(1)

R> X <- matrix(rnorm(n*n), nrow=n)
R> X <- X + t(X)
R> D <- diag(p:1, p)

R> f1 <- function(B, X, D) { Trace( t(B) %*% X %*% B %*% D ) }
R> g1 <- function(B, X, D) { 2 * X %*% B %*% D }

R> b1 = gramSchmidt(matrix(rnorm(n*p), nrow=n, ncol=p))$Q
R> res2 = ortho_optim(b1, fn = f1, grad = g1, X, D)
R> head(res2$B)

             [,1]         [,2]         [,3]        [,4]         [,5]
[1,] -0.110048632 -0.060656649 -0.001113691 -0.03451514 -0.063626067
[2,] -0.035495670 -0.142148873 -0.011204859  0.01784039  0.129255824
[3,]  0.052141162  0.015140614 -0.034893426  0.02600569  0.006868275
[4,]  0.151239722 -0.008553174 -0.096884087  0.01398827  0.132756189
[5,] -0.001144864 -0.056849007  0.080050182  0.23351751 -0.007219738
[6,] -0.140444290 -0.112932425  0.082197835  0.18644089 -0.057003273
Furthermore, we compare the performance with the ManifoldOptim package, using four of its optimization methods: "LRBFGS", "LRTRSR1", "RBFGS" and "RTRSR1" (Huang et al. 2018). We wrote the same required functions for the Brockett problem in R. Since different algorithms implement slightly different stopping criteria, we run each algorithm for a fixed number of iterations with a single core. We consider three smaller settings with \(n = 150\), and \(p = 5, 10\) and 15, and a larger setting with \(n = 500\) and \(p = 50\). Each simulation is repeated 100 times. The functional value progression (Figures 1 and 2) and the total time cost up to a certain number of iterations (Table 2) are presented.
We found that "LRBFGS" and our orthoDr package usually achieve the best performance, with the functional value decreasing the steepest on the log scale. In terms of computing time, "LRBFGS" and orthoDr perform similarly. Although "LRTRSR1" has similar computational time, its functional value falls behind. This is mainly because the theoretical complexity of second-order algorithms is similar to that of first-order algorithms; both are of order \({\cal O}(p^3)\). However, it should be noted that for a semiparametric dimension reduction method, the major computational cost is not due to the parameter updates; rather, it is calculating the gradient, since complicated kernel estimations are involved. Hence, we believe there is no significant advantage in using either "LRBFGS" or our orthoDr package regarding the efficiency of the algorithm. However, first-order algorithms may have an advantage when developing methods for penalized high-dimensional models.
| \(n\)   | \(p\) | iterations | LRBFGS | LRTRSR1 | RBFGS  | RTRSR1 | orthoDr |
|---------|-------|------------|--------|---------|--------|--------|---------|
| \(150\) | 5     | 250        | 0.053  | 0.062   | 0.451  | 0.452  | 0.065   |
| \(150\) | 10    | 500        | 0.176  | 0.201   | 4.985  | 5.638  | 0.221   |
| \(150\) | 20    | 750        | 0.526  | 0.589   | 28.084 | 36.142 | 0.819   |
| \(150\) | 50    | 1000       | 2.469  | 2.662   | –      | –      | 6.929   |
| \(500\) | 5     | 250        | 0.403  | 0.414   | 7.382  | 7.426  | 0.423   |
| \(500\) | 10    | 500        | 1.234  | 1.305   | 57.047 | 67.738 | 1.332   |
| \(500\) | 20    | 750        | 3.411  | 3.6     | –      | –      | 3.974   |
| \(500\) | 50    | 1000       | 13.775 | 14.43   | –      | –      | 19.862  |

Table 2: Total computation time up to the given number of iterations; LRBFGS, LRTRSR1, RBFGS and RTRSR1 are implemented in the ManifoldOptim package.
We use the Concrete Compressive Strength (Yeh 1998) dataset as
an example to further demonstrate the orthoDr_reg
function and to
visualize the results. The dataset is obtained from the UCI Machine
Learning Repository.
Concrete is the most important material in civil engineering. The concrete compressive strength is a highly nonlinear function of age and ingredients. These ingredients include cement, blast furnace slag, fly ash, water, superplasticizer, coarse aggregate, and fine aggregate. In this dataset, we have \(n=1030\) observations, 8 quantitative input variables, and 1 quantitative output variable. We present the two estimated directions for the structural dimension and further plot the observed data in these two directions. A nonparametric kernel estimation surface is further included to approximate the mean concrete strength.
R> concrete_data = read.csv(choose.files())
R> X = as.matrix(concrete_data[,1:8])
R> colnames(X) = c("Cement", "Blast Furnace Slag", "Fly Ash", "Water",
R+                 "Superplasticizer", "Coarse Aggregate", "Fine Aggregate", "Age")
R> Y = as.matrix(concrete_data[,9])

R> result = orthoDr_reg(X, Y, ndr = 2, method = "sir", maxitr = 1000,
R+                      keep.data = TRUE)
R> rownames(result$B) = colnames(X)
R> result$B

                          [,1]         [,2]
Cement              0.08354280 -0.297899899
Blast Furnace Slag  0.27563507  0.320304097
Fly Ash             0.82665328 -0.468889856
Water               0.20738201  0.460314093
Superplasticizer    0.43496780  0.540733516
Coarse Aggregate    0.01141892  0.011870495
Fine Aggregate      0.02936740 -0.004718979
Age                 0.02220664 -0.290444936
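The two estimated directions can be visualized along the following lines. This is a sketch rather than the article's original plotting code; it reuses X, Y, and result from above and approximates the mean strength surface with loess in place of the kernel estimate described earlier.

# project the covariates onto the two estimated directions
Z <- X %*% result$B
dat <- data.frame(strength = as.numeric(Y), z1 = Z[, 1], z2 = Z[, 2])

# smooth surface of the mean compressive strength over the reduced space
fit <- loess(strength ~ z1 + z2, data = dat,
             control = loess.control(surface = "direct"))
g1 <- seq(min(Z[, 1]), max(Z[, 1]), length.out = 40)
g2 <- seq(min(Z[, 2]), max(Z[, 2]), length.out = 40)
surf <- matrix(predict(fit, expand.grid(z1 = g1, z2 = g2)), 40, 40)
persp(g1, g2, surf, theta = 30, phi = 20, col = "lightblue",
      xlab = "Direction 1", ylab = "Direction 2", zlab = "Compressive strength")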
Using the algorithm proposed by (Wen and Yin 2012) for optimization on the Stiefel manifold, we developed the orthoDr package that serves specifically for semiparametric dimension reduction problems. A variety of dimension reduction models are implemented for censored survival outcomes and regression problems. In addition, we implemented parallel computing for numerically approximating the gradient function. This is particularly useful for semiparametric estimating equation methods because the objective function usually involves kernel estimations and the gradients are difficult to calculate. Our package can also be used as a general purpose solver and is comparable with existing manifold optimization approaches. However, since the performance of different optimization approaches could be problem dependent, it could be interesting to investigate other choices such as the “LRBFGS” approach in the ManifoldOptim package.
Our package also serves as a platform for future methodology developments along this line of work. For example, we are currently developing a personalized dose-finding model with a dimension reduction structure (Zhou and Zhu 2018). Also, when the number of covariates \(p\) is large, the model can be over-parameterized. Hence, applying an \(L_1\) penalty can force sparsity and allow the model to handle high-dimensional data. To this end, first-order optimization approaches can have advantages over second-order approaches. However, preserving the orthogonality during the Cayley transformation while also preserving the sparsity can be a challenging task and requires new methodologies. Furthermore, tuning parameters can be selected through a cross-validation approach, which can be implemented in the future.
Xin Zhang’s research was supported in part by grants DMS-1613154 and CCF-1617691 from U.S. National Science Foundation. Ruoqing Zhu’s research was supported in part by grant RB19046 from UIUC.