Variable Clustering in High-Dimensional Linear Regression: The R Package clere

Dimension reduction is one of the biggest challenges in high-dimensional regression models. We recently introduced a new methodology based on variable clustering as a means to reduce dimensionality. We present here an R package implementing this methodology. An overview of the package functionalities as well as examples for running an analysis are given. Numerical experiments on real data illustrate the good predictive performance of our parsimonious method compared to standard dimension reduction approaches.


Introduction
High dimensionality is increasingly ubiquitous in numerous scientific fields including genetics, economics and physics. Reducing dimensionality is a challenge that most statistical methodologies must meet, not only to remain interpretable but also to achieve reliable predictions. In linear regression models, dimension reduction techniques often refer to variable selection. Approaches for variable selection are implemented in publicly available software, including the well-known R packages glmnet [Friedman et al. (2010)] and spikeslab [Ishwaran et al. (2013)]. The R package glmnet implements the Elastic net methodology [Zou and Hastie (2005)], which is a generalization of both the LASSO [Tibshirani (1996)] and ridge regression (RR) [Hoerl and Kennard (1970)]. The R package spikeslab, in turn, implements the Spike and Slab methodology [Ishwaran and Rao (2005)], a Bayesian approach for variable selection.
Dimension reduction cannot, however, be restricted to variable selection. Indeed, the field extends to approaches whose aim is to create surrogate covariates summarizing the information carried by the initial covariates. Since the emblematic Principal Component Regression (PCR) [Jolliffe (1982)], many other methods have spread in the recent literature. Specific examples include the OSCAR methodology [Bondell and Reich (2008)] and the PACS methodology [Sharma et al. (2013)], a generalization of OSCAR. These methods mainly propose variable clustering within a regression model as a way to reduce dimensionality. Despite their theoretical and practical appeal, implementations of these methods were often released only as Matlab or R scripts, thus limiting the flexibility and computational efficiency of their use. The CLusterwise Effect REgression (CLERE) methodology [Yengo et al. (2014)] was recently introduced as a novel methodology for simultaneous variable clustering and regression. CLERE is based on the assumption that each regression coefficient is an unobserved random variable sampled from a mixture of Gaussian distributions with an arbitrary number g of components. In addition, all components in the mixture are assumed to have different means (b_1, ..., b_g) and equal variances γ².
In this paper, we propose two new features for the CLERE model. First, the stochastic EM (SEM) algorithm is proposed as a more computationally efficient alternative to the Monte Carlo EM (MCEM) algorithm previously introduced in [Yengo et al. (2014)]. Secondly, the CLERE model is enhanced with the possibility of constraining the first component to have its mean equal to 0, i.e. b_1 = 0. This enhancement mainly aims at facilitating the interpretation of the model. Indeed, when b_1 is set to 0, variables assigned to the cluster associated with b_1 may be considered less relevant than the other variables, provided γ² is small enough. These two new features are implemented in the R package clere. The core of the package is a C++ program interfaced with R using the R packages Rcpp [Eddelbuettel and François (2011)] and RcppEigen [Bates and Eddelbuettel (2013)]. The R package clere can be downloaded from the Comprehensive R Archive Network (CRAN) at http://cran.r-project.org/web/packages/clere/.
The outline of the paper is as follows. In the next section, the definition of the model is recalled and the strategy to estimate its parameters is presented. The main functionalities of the R package clere are then described. Real data analyses follow, illustrating the good predictive performance of CLERE, with noticeable parsimony, compared to standard dimension reduction methods. Finally, perspectives and potential improvements of the package are discussed in the last section.

Model definition and notation
Our model is defined by the following hierarchical relationships:

y_i | β ~ N(β_0 + ∑_{j=1}^p β_j x_ij, σ²),
β_j | z_jk = 1 ~ N(b_k, γ²),
z_j = (z_j1, ..., z_jg) ~ M(1, π_1, ..., π_g),     (1)

where N(µ, σ²) is the normal distribution with mean µ and variance σ², and M(1, π_1, ..., π_g) the one-order multinomial distribution with parameters π = (π_1, ..., π_g) such that π_k > 0 for all k = 1, ..., g and ∑_{k=1}^g π_k = 1, and β_0 is a constant term. For an individual i = 1, ..., n, y_i is the response and x_ij is the observed value of the j-th covariate. β_j is the regression coefficient associated with the j-th covariate (j = 1, ..., p), which is assumed to follow a mixture of g Gaussians. The variable z_j indicates from which mixture component β_j is drawn (z_jk = 1 if β_j comes from component k of the mixture, z_jk = 0 otherwise). Note that model (1) can be considered as a variable selection-like model by constraining the model parameter b_1 to be equal to 0. Indeed, assuming that one of the components is centered at zero means that a cluster of regression coefficients has null expectation, and thus that the corresponding variables are not relevant for explaining the response variable. This functionality is available in the package. Let β = (β_1, ..., β_p)', y = (y_1, ..., y_n)', X = (x_ij), Z = (z_jk), b = (b_1, ..., b_g)' and π = (π_1, ..., π_g)'. Moreover, log p(y|X; θ) denotes the log-likelihood of model (1) assessed for the parameter θ = (β_0, b, π, σ², γ²). Model (1) can be interpreted as a Bayesian approach. However, to be fully Bayesian a prior distribution for the parameter θ would be necessary. Instead, we propose to estimate θ by maximizing the (marginal) log-likelihood, log p(y|X; θ). This partially Bayesian approach is referred to as Empirical Bayes (EB) [Casella (1985)]. Let Z be the set of p × g matrices partitioning the p covariates into g groups, i.e.

Z = { Z = (z_jk) ∈ {0, 1}^{p×g} : ∑_{k=1}^g z_jk = 1 for all j = 1, ..., p }.

The log-likelihood log p(y|X; θ) is then defined as

log p(y|X; θ) = log [ ∑_{Z∈Z} ∫_{R^p} p(y, β, Z|X; θ) dβ ].
Since it requires summing over the set Z, whose cardinality is g^p, evaluating the likelihood rapidly becomes computationally unaffordable.
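To make the generative process concrete, the following short R sketch simulates a dataset from model (1); all numerical values (n, p, g, b, π, σ², γ²) are arbitrary choices made for illustration only.

set.seed(42)
n <- 100; p <- 20; g <- 3
b      <- c(0, 2, -1)          # component means b_k (arbitrary)
prob   <- c(0.5, 0.3, 0.2)     # mixing proportions pi_k (arbitrary)
gamma2 <- 0.1                  # within-component variance of the beta_j
sigma2 <- 1                    # residual variance
beta0  <- 0                    # intercept
z    <- sample(seq_len(g), size = p, replace = TRUE, prob = prob)  # cluster labels z_j
beta <- rnorm(p, mean = b[z], sd = sqrt(gamma2))                   # beta_j given z_j
x    <- matrix(rnorm(n * p), nrow = n, ncol = p)
y    <- as.numeric(beta0 + x %*% beta + rnorm(n, sd = sqrt(sigma2)))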
Nonetheless, maximum likelihood estimation is still achievable using the expectation maximization (EM) algorithm [Dempster et al. (1977)]. The latter is an iterative method which starts with an initial estimate of the parameter and updates this estimate until convergence. Each iteration of the algorithm consists of two steps, denoted the E and the M steps. At iteration d of the algorithm, the E step consists in calculating the expectation of the log-likelihood of the complete data (observed + unobserved) with respect to p(β, Z|y, X; θ^(d)), the conditional distribution of the unobserved data given the observed data and the value of the parameter at the current iteration, θ^(d). This expectation, often denoted Q(θ|θ^(d)), is then maximized with respect to θ at the M step.
In model (1), the E step is analytically intractable. A broad literature devoted to intractable E steps recommends the use of a stochastic approximation of Q(θ|θ^(d)) through Monte Carlo (MC) simulations [Wei and Tanner (1990), Levine and Casella (2001)]. This approach is referred to as the MCEM algorithm. Besides, mean-field-type approximations have also been proposed [Govaert and Nadif (2008), Mariadassou et al. (2010)]. Despite their computational appeal, the latter approximations do not in general ensure convergence to the maximum likelihood [Gunawardana and Byrne (2005)]. Alternatively, the SEM algorithm [Celeux et al. (1996)] was introduced as a stochastic version of the EM algorithm. In this algorithm, the E step is replaced with a simulation step (S step) that consists in generating a complete sample by simulating the unobserved data from p(β, Z|y, X; θ^(d)), thus providing a sample (β^(d), Z^(d)). Note that the Monte Carlo algorithm we use to perform this simulation is the Gibbs sampler. The S step is followed by the M step, which consists in maximizing p(β^(d), Z^(d)|y, X; θ) with respect to θ. Alternating these two steps generates a sequence (θ^(d)), which is a Markov chain whose stationary distribution (when it exists) concentrates around a local maximum of the likelihood.

Estimation and model selection
In this section, two algorithms for model inference are presented: the Monte-Carlo Expectation Maximization (MCEM) algorithm and the Stochastic Expectation Maximization (SEM) algorithm. The section starts with the initialization strategy common to both algorithms and continues with the detailed description of each algorithm. Then, model selection (for choosing g) and variable selection are discussed.

Initialization
The two algorithms presented in this section are initialized using a primary estimate β_j^(0) of each β_j. The latter can be chosen either at random, or obtained from univariate regression coefficients or from penalized approaches like the LASSO and ridge regression. For long SEM or MCEM chains, initialization is not a critical issue; the choice of the initialization strategy is therefore only meant to speed up the convergence of the chains. A Gaussian mixture model with g component(s) is then fitted, using β^(0) as observed data, to produce starting values b^(0), π^(0) and γ²^(0) for the parameters b, π and γ². Maximum a posteriori (MAP) clustering then yields an initial partition Z^(0) of the covariates, and β_0 and σ² are initialized from β^(0).

The E step

Sampling directly from p(β, Z|y, X; θ^(d)) is not straightforward. However, a Gibbs sampling scheme can be used to simulate the unobserved data, taking advantage of the conditional distributions p(β|Z, y, X; θ^(d)) and p(Z|β, y, X; θ^(d)), from which it is easy to simulate. These distributions, respectively Gaussian and multinomial, are described below in Equations (2) and (3).
In Equation (2), I_p and 1_p stand respectively for the identity matrix of dimension p and the vector of R^p whose coordinates all equal 1; note that p(Z|β, y, X; θ^(d)) in Equation (3) depends on neither X nor y. To sample efficiently from p(β|Z, y, X; θ^(d)), a preliminary singular value decomposition of the matrix X is necessary. Once this decomposition is performed, the overall complexity of the approximated E step is O(M(p² + pg)).

The M step
Using the M draws obtained by Gibbs sampling at iteration d, the M step is straightforward, as detailed in Equations (4) to (8). The overall computational complexity of that step is O(Mpg).

SEM algorithm
In most situations, the SEM algorithm can be considered as a special case of the MCEM algorithm [Celeux et al. (1996)], obtained by setting M = 1. In model (1), such a direct derivation leads to an algorithm whose computational complexity remains quadratic in p. To reduce that complexity, we propose a SEM algorithm based on the integrated complete data likelihood p(y, Z|X; θ) rather than p(y, β, Z|X; θ). A closed form of p(y, Z|X; θ) is available and given below.

Closed form of the integrated complete data likelihood
Let the SVD decomposition of the matrix X be USV', where U and V are respectively n × n and p × p orthogonal matrices, and S is the n × p rectangular diagonal matrix whose diagonal terms are λ_1, ..., λ_n, with λ_1², ..., λ_n² the eigenvalues of the matrix XX'. We now define X^u = U'X and y^u = U'y. Let M be the n × (g + 1) matrix whose first column is made of 1's and whose additional columns are those of the matrix X^u Z. Let also t = (β_0, b')' ∈ R^(g+1) and let R be the n × n diagonal matrix whose i-th diagonal term equals σ² + γ²λ_i². With these notations, the complete data likelihood integrated over β can be expressed as

log p(y, Z|X; θ) = −(n/2) log(2π) − (1/2) log|R| − (1/2) (y^u − Mt)' R^{−1} (y^u − Mt) + ∑_{j=1}^p ∑_{k=1}^g z_jk log π_k.     (9)
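The SVD preprocessing described above can be sketched in a few lines of base R; X is assumed to be the n × p design matrix and y the length-n response. This is only an illustration of the notation, not the package's internal C++ code.

n <- nrow(X)
s <- svd(X, nu = n)                  # full left singular vectors: X = U S V'
Xu <- crossprod(s$u, X)              # X^u = U'X
yu <- as.numeric(crossprod(s$u, y))  # y^u = U'y
lambda2 <- rep(0, n)                 # eigenvalues lambda_i^2 of XX'
lambda2[seq_along(s$d)] <- s$d^2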

Simulation step
To sample from p(Z|y, X; θ), we use a Gibbs sampling strategy based on the conditional distributions p(z_j|y, Z^{-j}, X; θ), where Z^{-j} denotes the set of cluster membership indicators for all covariates but the j-th; in the corresponding expression (Equation (10)), x^u_j is the j-th column of X^u. In the classical SEM algorithm, convergence to p(Z|y, X; θ) should be reached before updating θ. However, a valid inference can still be ensured in settings where θ is updated after only one or a few Gibbs iterations. Such approaches are referred to as SEM-Gibbs algorithms [Biernacki and Jacques (2013)]. The overall computational complexity of the simulation step is O(npg), thus linear in p and no longer quadratic as in the MCEM algorithm above. To improve the mixing of the generated Markov chain, we start the simulation step at each iteration by drawing a random permutation of {1, ..., p}. Then, following the order defined by that permutation, we update each z_jk using p(z_jk = 1|Z^{-j}, y, X; θ).

Maximization step
log p(y, Z|X; θ) corresponds to the marginal log-likelihood of a linear mixed model [Searle et al. (1992)], which can be written as

y^u = Mt + λv + ε,     (11)

where v is an unobserved random vector such that v ~ N(0, γ²I_n), ε ~ N(0, σ²I_n) and λ = diag(λ_1, ..., λ_n). The estimation of the parameters of model (11) can be performed using the EM algorithm, as in [Searle et al. (1992)]. We adapt the EM equations of [Searle et al. (1992)] to our notations. At iteration s of this internal EM algorithm, we define R^(s) = σ²^(s) I_n + γ²^(s) λλ'. Given a non-negative user-specified threshold δ and a maximum number N_max of iterations, the internal E and M steps are alternated until the convergence criterion defined by δ is met or N_max iterations have been performed. The computational complexity of the M step is O(g³ + ngN_max), thus not involving p.

Attracting and absorbing states
Absorbing states. The SEM algorithm described above defines a Markov chain whose stationary distribution is concentrated around values of θ corresponding to local maxima of the likelihood function. This chain has absorbing states at values of θ such that σ² = 0 or γ² = 0. Indeed, the internal M step reveals that updated values for σ² and γ² are proportional to their previous values.
Attracting states. We empirically observed that attraction around σ² = 0 was quite frequent when using the MCEM algorithm, especially when p > n and when the number M of draws was small. We therefore advocate using at least 5 draws (M ≥ 5, via the option nsamp of the function fitClere()).
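For instance, assuming that the algorithm argument accepts the value "MCEM" (see help(fitClere) for the exact option names), an MCEM fit with a safer number of draws could be requested as follows, y and x being the data at hand:

mod_mcem <- fitClere(y = y, x = x, g = 3, algorithm = "MCEM", nsamp = 25)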

Model selection
Once the maximum likelihood estimate θ̂ is calculated (using either algorithm), the maximum log-likelihood and the posterior clustering matrix E[Z|y, X; θ̂] are approximated using MC simulations based on Equations (9) and (10). The approximated maximum log-likelihood, denoted l̂, is then used to calculate the AIC [Akaike (1974)] and BIC [Schwarz (1978)] criteria for model selection. In model (1), these criteria can be written as BIC = −2l̂ + 2(g + 1) log(n) and AIC = −2l̂ + 4(g + 1).
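For illustration, both criteria can be computed directly from an approximated maximum log-likelihood. The small helpers below are only a sketch based on the formulas given above (2(g + 1) free parameters); they are not functions of the package.

clereAIC <- function(loglik, g)    -2 * loglik + 4 * (g + 1)
clereBIC <- function(loglik, g, n) -2 * loglik + 2 * (g + 1) * log(n)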
An additional criterion for model selection, namely the ICL criterion [Biernacki et al. (2000)], is also implemented in the R package clere. The latter criterion can be written as

ICL = BIC − ∑_{j=1}^p ∑_{k=1}^g π_jk log(π_jk),

where π_jk = E[z_jk|y, X; θ̂].

Interpretation of the special group of variables associated with b_1 = 0
The constraint b_1 = 0 is mainly driven by interpretation purposes. The meaning of this group depends on both the total number g of groups and the estimated value of the parameter γ². Indeed, when g > 1 and γ² is small, covariates assigned to that group are likely to be less relevant for explaining the response. Determining whether γ² is small enough is not straightforward. However, when this property holds, we may expect the groups of covariates to be well separated, which would, for example, translate into the posterior probabilities π_j1 being larger than 0.7. In addition to the benefit in interpretation, the constraint b_1 = 0 reduces the number of parameters to be estimated and consequently the variance of the predictions performed using the model.

Package functionalities
The R package clere mainly implements a function for parameter estimation and model selection: the function fitClere(). Four additional methods are also implemented in the package: plot() for graphical representation, summary() for summarizing the results, clusters() for getting the predicted clusters of variables, and predict() for making predictions from new design matrices. Examples of calls to the functions presented in this section are given in the next section.

The main function fitClere()
The main function fitClere() has only three mandatory arguments: the vector of responses y (size n), the matrix of explanatory variables x (size n × p) and the expected number g of groups of regression coefficients. The optional argument analysis, when it takes the value "aic", "bic" or "icl", makes it possible to test all numbers of groups between 1 and g. The choice between the two proposed algorithms is made through the argument algorithm, but we encourage users to keep the default value, the SEM algorithm, which generally outperforms the MCEM algorithm (see the first experiment of the next section). Several other arguments tune the numbers of iterations of the estimation algorithm. In general, the larger these values, the better the quality of the estimation, but the heavier the computational burden. We advise using the default values and checking the quality of the estimation graphically, with plots such as those in Figure 2: if the values of the model parameters are stable over a sufficiently large part of the iterations, the estimation can be trusted; if stability is not reached sufficiently early before the end of the run, larger numbers of iterations should be chosen. Finally, among the remaining arguments (the complete list can be obtained with help(fitClere)), two are especially important: parallel allows parallel computations (if supported by the user's machine) and sparse constrains the first component mean b_1 to equal 0, thus introducing a cluster of non-influential explanatory variables.
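As an illustration, a minimal call could look as follows; the data y and x are assumed to be available (for instance simulated as in the sketch given after the model definition), and the argument values are only suggestions.

library(clere)
mod <- fitClere(y = y, x = x, g = 3,
                analysis = "aic",   # compare all numbers of groups from 1 to 3 using AIC
                nItEM = 2000,       # number of SEM iterations
                sparse = TRUE,      # constrain the first component mean b1 to be 0
                parallel = FALSE)   # no parallel computations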

Secondary methods summary(), plot(), clusters() and predict()
The summary() function prints an overview of the estimated parameters and returns the estimated likelihood and the information-based model selection criteria (AIC, BIC and ICL).
The function plot() is called in the same way as summary(); it produces graphs such as the ones presented in Figure 2.
The function clusters() takes an object of class Clere and a threshold argument. It assigns each variable to the group whose associated conditional probability of membership exceeds the given threshold. If the conditional probabilities of membership exceed the threshold for more than one group, then the group with the largest probability is returned and a warning is printed. If, moreover, several groups tie on that largest probability, the group with the smallest index is returned. When threshold = NULL, the maximum a posteriori (MAP) rule is used to infer the clusters.
The predict() function has two arguments: an object of class Clere and a new design matrix X_new. Using that new design matrix, predict() returns an approximation of E[X_new β|y, X; θ̂].
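Continuing with a fitted object mod such as the one obtained above, a typical sequence of calls might be the following; xnew is any new design matrix with p columns, and the threshold value 0.7 echoes the interpretation guideline given earlier.

summary(mod)                     # estimated parameters, log-likelihood, AIC/BIC/ICL
plot(mod)                        # convergence plots, as in Figure 2
clusters(mod, threshold = 0.7)   # groups assigned with posterior probability > 0.7
clusters(mod, threshold = NULL)  # maximum a posteriori assignment
xnew <- matrix(rnorm(5 * ncol(x)), nrow = 5)
predict(mod, xnew)               # approximation of E[xnew %*% beta | y, x]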

Numerical experiments
This section presents two sets of numerical experiments. The first set aims at comparing the MCEM and SEM algorithms in terms of computational time and estimation or prediction accuracy. The second set aims at comparing CLERE to standard dimension reduction techniques. The latter comparison is performed on both simulated and real data.

Description of the simulation study
In this section, a comparison between the SEM algorithm and the MCEM algorithm is performed. This comparison relies on the four following performance indicators:

1. Computational time (CT) needed to run a pre-defined number of SEM/MCEM iterations. This number was set to 2,000 in this simulation study.

2. Mean squared estimation error (MSEE), defined for algorithm a ∈ {"SEM", "MCEM"} as MSEE_a = E[(θ_a − θ)'(θ_a − θ)], where θ_a is the estimate of θ obtained with algorithm a. Since θ is only known up to a permutation of the group labels, we chose the permutation leading to the smallest MSEE value.

3. Mean squared prediction error (MSPE), defined as MSPE_a = E[(y^v − ŷ^v_a)'(y^v − ŷ^v_a)], where y^v and X^v are respectively a vector of responses and a design matrix from a validation dataset, and ŷ^v_a is the prediction of y^v obtained from X^v with the parameters estimated by algorithm a.

4. The maximum log-likelihood reached by each algorithm.
Three versions of the MCEM algorithm were compared with the SEM algorithm, differing in the number M (argument nsamp) of Gibbs iterations used to approximate the E step. That number was set to 5, 25 or 125. These numbers of iterations were chosen so as to cover different situations, from one in which the number of iterations is too small to one in which it seems sufficient to expect convergence of the simulated Markov chain. The corresponding versions are denoted MCEM_5, MCEM_25 and MCEM_125 respectively. The comparison was performed using 200 simulated datasets. In order to consider high-dimensional situations with sizes allowing multiple simulations to be run in a reasonable time, each training dataset consisted of n = 25 individuals and p = 50 variables. Validation datasets used to calculate the MSPE consisted of 1,000 individuals each. All covariates were simulated independently according to the standard Gaussian distribution: for all (i, j), x_ij ~ N(0, 1).
Both training and validation datasets were simulated according to model (1) using β_0 = 0, b = (0, 3, 15)', π = (0.64, 0.20, 0.16)', σ² = 1 and γ² = 0. Since γ² = 0, this amounts to simulating data from a standard linear regression model in which each regression coefficient equals 0, 3 or 15 (in proportions 0.64, 0.20 and 0.16 respectively) and the residual variance equals 1. All algorithms were run using 10 different random starting points. Estimates yielding the largest likelihood were then used for the comparison. Table 1 summarizes the results of the comparison between the algorithms. The MCEM_5 algorithm was 1.3-fold faster than the SEM algorithm; however, it performed poorly regarding all other performance criteria (estimation quality, prediction error, likelihood maximization). This observation illustrates the importance of setting a large number M of draws to improve the estimation. Indeed, when increasing this number to 25 or 125, we observed an improvement in estimation accuracy but no noticeable improvement in the likelihood. In turn, the SEM algorithm was quite efficient compared to the MCEM_25 and MCEM_125 algorithms. It not only ran faster (between 3- and 13-fold faster than MCEM_25 and MCEM_125, see Table 1) but also reached predictive performances close to the oracle (i.e. using the true parameter). Those good performances were mainly explained by the fact that the SEM algorithm reached a better likelihood than the other algorithms most of the time (66.5% of the runs).

Results of the comparison
The results of this simulation study were made available as an internal dataset named algoComp in the R package clere. More details can be obtained using the command help(algoComp).

Description of the methods
In this section, we compare our model to standard dimension reduction approaches in terms of MSPE. Although a more detailed comparison was proposed in [Yengo et al. (2014)], we propose here a quick illustration of the relative predictive performance of our model. The comparison uses data simulated according to the scenario described above in the Section Description of the simulation study. The methods selected for comparison are ridge regression [Hoerl and Kennard (1970)], the Elastic net [Zou and Hastie (2005)], the LASSO [Tibshirani (1996)], PACS [Sharma et al. (2013)], the method of Park and colleagues [Park et al. (2007)] (subsequently denoted AVG) and the Spike and Slab model [Ishwaran and Rao (2005)] (subsequently denoted SS). The first three methods are implemented in the freely available R package glmnet. For those methods, the tuning parameter lambda was selected by cross-validation with the function cv.glmnet so as to minimize the mean squared error (option type.measure = "mse"). In particular, for the Elastic net, the second tuning parameter alpha (measuring the amount of mixing between the L1 and L2 penalties) was jointly selected with lambda to minimize the mean squared error. The R package glmnet proposes a procedure for automatically selecting values of lambda; we used this default procedure, while alpha was selected among {0, 0.1, 0.2, ..., 0.9, 1}. The PACS methodology estimates the regression coefficients by solving a penalized least squares problem. It imposes a constraint on β that is a weighted combination of the L1 norm and the pairwise L∞ norm. Upper-bounding the pairwise L∞ norm forces covariates to have close coefficients; when the constraint is strong enough, closeness translates into equality, thus achieving a grouping property. For PACS, no software was available; only an R script was released on Bondell's webpage. Since this R script ran very slowly, we reimplemented it in C++ and observed a 30-fold speed-up in computational time. Similarly to Bondell's R script, our implementation uses two parameters, lambda and betawt. Our reimplementation of Bondell's script is included in the R package clere as the function fitPacs(). In [Sharma et al. (2013)], the authors suggest assigning to betawt the coefficients obtained from a ridge regression model whose tuning parameter is selected using AIC. In this simulation study we used the same strategy; however, the ridge parameter was selected via 5-fold cross-validation, since selecting it with AIC always led to estimated coefficients equal to zero. Once betawt was obtained, lambda was chosen via 5-fold cross-validation among the following values: 0.01, 0.02, 0.05, 0.1, 0.2, 0.5, 1, 2, 5, 10, 20, 50, 100, 200 and 500. All other default parameters of the script were left unchanged. The AVG method is a two-step approach: the first step uses hierarchical clustering of covariates to create surrogate covariates by averaging the variables within each group; these new predictors are then included in a linear regression model, replacing the primary variables, and a variable selection algorithm is applied to select the most predictive groups of covariates. To implement this method, we followed the algorithm described in [Park et al. (2007)] and programmed it in R.
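As an example, the joint selection of alpha and lambda for the Elastic net described above can be sketched as follows; x and y denote the training data, and the grid over alpha matches the one given in the text.

library(glmnet)
alphas <- seq(0, 1, by = 0.1)
cvfits <- lapply(alphas, function(a) cv.glmnet(x, y, alpha = a, type.measure = "mse"))
best   <- which.min(sapply(cvfits, function(cv) min(cv$cvm)))
enet   <- glmnet(x, y, alpha = alphas[best], lambda = cvfits[[best]]$lambda.min)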
The Spike and Slab model is a Bayesian approach for variable selection. It is based on the assumption that the regression coefficients are distributed according to a mixture of two centered Gaussian distributions with different variances. One component of the mixture (the spike) has a small variance, while the other component (the slab) is allowed to have a large variance. Variables assigned to the spike are dropped from the model. We used the R package spikeslab to run the Spike and Slab models; in particular, we used the function spikeslab from that package to detect influential variables. The number of iterations used to run the function spikeslab was 2,000 (1,000 discarded). When running fitClere(), the number nItEM of SEM iterations was set to 2,000. The number g of groups for CLERE was chosen between 1 and 5 using AIC (option analysis = "aic"). Two versions of CLERE were considered: one with all parameters estimated and one with b_1 set to 0. The latter is subsequently denoted CLERE_0 (option sparse = TRUE). Figure 1 summarizes the comparison between the methods. In this simulated scenario, CLERE outperformed the other methods in terms of prediction error. Those good performances were further improved when parameter b_1 was set to 0. CLERE was also the most parsimonious approach, with an average number of estimated parameters equal to 7.7 (6.9 when b_1 = 0). The second best approach was PACS, which also led to parsimonious models. The superiority of such methods could be expected since, in the simulation model, the regression coefficients are gathered in three groups. Variable selection approaches as a whole yielded the largest prediction errors in this simulation. CLERE, PACS and Spike and Slab had the largest computational times (CT). For CLERE and PACS this loss in CT was compensated by a strong improvement in prediction error, as explained above, while Spike and Slab yielded the worst prediction error in addition to being the slowest approach in this example.

Results of the comparison
The results of this simulation study were made available as an internal dataset named numExpSimData in the R package clere. More details can be obtained using the command help(numExpSimData).

Figure 1: Comparison between CLERE and some standard dimension reduction approaches. The number of estimated parameters (df: +/- standard error) is given on the right along with the name of the method. The average computational time and its standard error (in parentheses) are also provided for each method.

Description of the datasets
In this section we used the real datasets Prostate and eyedata from the R packages lasso2 and flare respectively. The Prostate dataset comes from a study that examined the correlation between the level of prostate specific antigen and a number of clinical measures in n = 97 men who were about to receive a radical prostatectomy. It is a benchmark dataset used in multiple publications about high-dimensional regression models, including [Tibshirani (1996)], and was chosen here to illustrate the performance of CLERE in comparison with the competing methods. We used the prostate specific antigen (variable lpsa) as the response variable and the p = 8 other measurements as covariates. The eyedata dataset is extracted from the published study of [Scheetz (2006)]. It consists of gene expression levels measured at p = 200 probes in n = 120 rats. The response variable was the expression of the TRIM32 gene, a biomarker of Bardet-Biedl Syndrome (BBS). We chose this dataset to illustrate the performance of CLERE on a (manageable) high-dimensional problem, which is the actual context for which the method was developed [Yengo et al. (2014)].
These two datasets were used to compare CLERE to the same methods as in the simulation study presented above. All methods were compared in terms of out-of-sample prediction error, estimated using 5-fold cross-validation (CV). The CV statistics were then averaged and compared across methods in Table 2.
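The fitted object mod used below can be obtained with a script along the following lines; the split of the Prostate data into a training set (y, x) and a validation set (yv, xv) is arbitrary and given only to make the subsequent calls concrete.

library(clere)
data(Prostate, package = "lasso2")
set.seed(1234)                                         # arbitrary seed
itrain <- sample(seq_len(nrow(Prostate)), size = 80)   # hypothetical train/validation split
y  <- Prostate[itrain, "lpsa"]
x  <- as.matrix(Prostate[itrain, setdiff(names(Prostate), "lpsa")])
yv <- Prostate[-itrain, "lpsa"]
xv <- as.matrix(Prostate[-itrain, setdiff(names(Prostate), "lpsa")])
mod <- fitClere(y = y, x = x, g = 2, nItEM = 2000, sparse = TRUE)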

The matrix of posterior probabilities of group membership is stored in the slot P of the fitted object:

> mod@P

The covariates were each assigned to their group with a probability larger than 0.7. Moreover, given that the parameter γ² had a very small estimated value (γ̂² = 4.065 × 10⁻⁸), we can argue that cancer volume and prostate weight are the only relevant explanatory covariates. To assess the prediction error associated with the model, we can call predict() as follows:

> error <- mean((yv - predict(mod, xv))^2)
> error
[1] 1.543122

Table 2 summarizes the prediction errors and the numbers of parameters obtained for all the methods. CLERE_0 had the lowest prediction error in the analysis of the Prostate dataset and the second best performance on the eyedata dataset. The AVG method was also very competitive compared to the variable selection approaches, stressing the relevance of creating groups of variables to reduce the dimensionality (especially for the eyedata dataset). It is worth noting that, in both datasets, imposing the constraint b_1 = 0 improved the predictive performance of CLERE.

Results of the analysis
In the Prostate dataset, CLERE robustly identified two groups of variables, representing influential (b_2 > 0) and non-relevant (b_1 = 0) variables. In the eyedata dataset, in turn, AIC led to selecting only one group of variables. However, this did not lessen the predictive performance of the model, since CLERE_0 was second best after AVG while requiring significantly fewer parameters. PACS performed poorly in both datasets. The Elastic net showed good predictive performance compared to variable selection methods like the LASSO or the Spike and Slab model. Ridge regression and the Elastic net had comparable results in both datasets. In line with the results of the simulation study, we observed that, despite a larger computational time (CT), CLERE and CLERE_0 had a reduced mean squared error compared to the fastest methods. However, this improvement was less substantial than in the simulation study, given the differences in CT. The increased CT may be explained by the fact that no simple stopping rule is proposed when fitting CLERE; a smaller number of SEM iterations could probably have been used to yield a similar prediction error. Indeed, Figure 2 shows that convergence was reached within roughly the first 10 iterations. Still, the observed CT for CLERE, around 22 s for the eyedata dataset and around 3 s for the Prostate dataset, remains affordable in these examples.
The results of this analysis on real data were made available as an internal dataset named numExpRealData in the R package clere. More details can be obtained using the command help(numExpRealData).

Table 2: Real data analysis. Out-of-sample prediction error (averaged CV statistic) was estimated using cross-validation over 100 splits of the data. The numbers of parameters reported for CLERE/CLERE_0 were selected using AIC. CT stands for the average computational time.

Conclusions
We presented in this paper the R package clere. This package implements two efficient algorithms for fitting the CLusterwise Effect REgression model: the MCEM and the SEM algorithms. While the MCEM algorithm is to be preferred when p < n, the SEM algorithm is more efficient for high-dimensional datasets (n < p). The good performance of SEM over MCEM could be expected in view of the computational complexities of the two algorithms, O(npg + g³ + N_max ng) and O(M(p² + pg)) respectively. In fact, as long as p > n, the SEM algorithm has a lower complexity. However, the computational time to run our SEM algorithm is more variable than that of MCEM, as its M step does not have a closed form. We therefore advocate the use of the MCEM algorithm only when p ≪ n. Another important feature for model interpretation is obtained by constraining the model parameter b_1 to equal 0, which amounts to carrying out variable selection. Such a constraint may also lead to a reduced prediction error. We illustrated on a real dataset how to run an analysis using a detailed R script. Although our numerical experiments showed that the CLERE method tends to be slower than variable selection methods, it still has better or competitive predictive performance. In addition, the CLERE model is often more parsimonious than other models and remains easy to interpret, since groups of regression coefficients/variables can be summarized using a single parameter. Our model can easily be extended to the analysis of binary responses; this extension will be proposed in a forthcoming version of the package. Another direction for future research will be to develop an efficient stopping rule for the proposed SEM algorithm, specific to our context. Such a criterion is expected to improve the computational performance of our estimation algorithm.