dimRed and coRanking – Unifying Dimensionality Reduction in R

This document is based on the manuscript of Kraemer et al. (2018), which was published in the R Journal, and has been modified and extended to fit the format of a package vignette and to match the extended functionality of the dimRed package. "Dimensionality reduction" (DR) is a widely used approach to find low dimensional and interpretable representations of data that are natively embedded in high-dimensional spaces. DR can be realized by a plethora of methods with different properties, objectives, and, hence, (dis)advantages. The resulting low-dimensional data embeddings are often difficult to compare with objective criteria. Here, we introduce the dimRed and coRanking packages for the R language. These open source software packages enable users to easily access multiple classical and advanced DR methods using a common interface. The packages also provide quality indicators for the embeddings and easy visualization of high dimensional data. The coRanking package provides the functionality for assessing DR methods in the co-ranking matrix framework. In tandem, these packages allow for uncovering complex structures in high dimensional data. Currently 15 DR methods are available in the package, some of which were not previously available to R users. Here, we outline the dimRed and coRanking packages and make the implemented methods understandable to the interested reader.


Introduction
Dimensionality Reduction (DR) essentially aims to find low dimensional representations of data while preserving their key properties. Many methods exist in the literature, optimizing different criteria: maximizing the variance or the statistical independence of the projected data, minimizing the reconstruction error under different constraints, or optimizing for different error metrics, just to name a few. Choosing an inadequate method may imply that much of the underlying structure remains undiscovered. Often the structures of interest in a data set can be well represented by fewer dimensions than exist in the original data. Data compression of this kind has the additional benefit of making the encoded information easier to grasp for further analysis tasks like classification or regression problems.
For example, the morphology of a plant's leaves, stems, and seeds reflects the environmental conditions the species usually grows in (e.g., plants with large soft leaves will never grow in a desert but might have an advantage in a humid and shady environment). Because the morphology of the entire plant depends on the environment, many morphological combinations will never occur in nature, and the morphological space of all plant species is tightly constrained. Díaz et al. (2016) found that, out of six observed morphological characteristics, only two embedding dimensions were enough to represent three quarters of the total observed variability.
DR is a widely used approach for the detection of structure in multivariate data and has applications in a variety of fields. In climatology, DR is used to find the modes of some phenomenon, e.g., the first Empirical Orthogonal Function of monthly mean sea surface temperature of a given region over the Pacific is often linked to the El Niño Southern Oscillation or ENSO (e.g., Hsieh, 2004). In ecology, the comparison of sites with different species abundances is a classical multivariate problem: each observed species adds an extra dimension, and because species are often bound to certain habitats, there is a lot of redundant information. DR is a popular technique to represent the sites in few dimensions, e.g., Aart (1972) matches wolf spider communities to habitat and Morrall (1974) matches soil fungi data to soil types. (In ecology the general name for DR is ordination or indirect gradient analysis.) Today, hyperspectral satellite imagery collects so many bands that it is very difficult to analyze and interpret the data directly. Summarizing the data in a few, yet independent, components is one way to reduce complexity (e.g., see Laparra et al., 2015). DR can also be used to visualize the interiors of deep neural networks (e.g., see Han et al., 2017), where the high dimensionality comes from the large number of weights used in a neural network and convergence can be visualized by means of DR. Many more example applications could be given, but this is not the main focus of this publication.
The difficulty in applying DR is that each DR method is designed to maintain certain aspects of the original data and therefore may be appropriate for one task and inappropriate for another. Most methods also have parameters to tune and follow different assumptions. The quality of the outcome may strongly depend on this tuning, which adds additional complexity. DR methods can be based on physical models with attracting and repelling forces (force directed methods), projections onto low dimensional planes (PCA, ICA), divergences between statistical distributions (SNE family), or the reconstruction of local spaces or points by their neighbors (LLE).
As an example of how changing internal parameters of a method can have a great impact, the breakthrough for Stochastic Neighborhood Embedding (SNE) methods came when a Student's t-distribution was used instead of a normal distribution to model probabilities in low dimensional space, to avoid the "crowding problem": a sphere in high dimensional space has a much larger volume than in low dimensional space and may contain too many points to be represented accurately in few dimensions. The t-distribution allows medium distances to be accurately represented in few dimensions by larger distances due to its heavier tails. The result is called t-SNE and is especially good at preserving local structures in very few dimensions. This feature made t-SNE useful for a wide array of data visualization tasks, and the method became much more popular than standard SNE (around six times more citations of van der Maaten and Hinton (2008) compared to Hinton and Roweis (2003) in Scopus (Elsevier, 2017)).
There are a number of software packages for other languages providing collections of methods: In Python there is scikit-learn (Pedregosa et al., 2011), which contains a module for DR. In Julia we currently find ManifoldLearning.jl for nonlinear and MultivariateStats.jl for linear DR methods. There are several toolboxes for DR implemented in Matlab (Van Der Maaten et al., 2009; Arenas-Garcia et al., 2013). The Shogun toolbox (Sonnenburg et al., 2017) implements a variety of methods for dimensionality reduction in C++ and offers bindings for many common high level languages (including R, but the installation is anything but simple, as there is no CRAN package). However, there is no comprehensive package for R, and none of the aforementioned software packages provides means to consistently compare the quality of different methods for DR.
For many applications it can be difficult to objectively find the right method or parameterization for the DR task. This paper presents the dimRed and coRanking packages for the popular programming language R. Together, they provide a standardized interface to various dimensionality reduction methods and quality metrics for embeddings. They are implemented using the S4 class system of R, making the packages both easy to use and to extend.
The design goal for these packages is to enable researchers, who may not necessarily be experts in DR, to apply the methods in their own work and to objectively identify the most suitable methods for their data. This paper provides an overview of the methods collected in the packages and contains examples of how to use the packages.
The notation in this paper will be as follows: the data matrix is $X = [x_i]^T_{1 \le i \le n} \in \mathbb{R}^{n \times p}$ with observations $x_i \in \mathbb{R}^p$. These observations may be transformed prior to the dimensionality reduction step (e.g., centering and/or standardization), resulting in $X' = [x'_i]^T_{1 \le i \le n} \in \mathbb{R}^{n \times p}$. A DR method then embeds each vector in $X'$ onto a vector in $Y = [y_i]^T_{1 \le i \le n} \in \mathbb{R}^{n \times q}$ with $y_i \in \mathbb{R}^q$, ideally with $q \ll p$. Some methods provide an explicit mapping $f(x'_i) = y_i$. Some even offer an inverse mapping $f^{-1}(y_i) = \hat{x}'_i$, such that one can reconstruct a (usually approximate) sample from the low-dimensional representation. For some methods, pairwise distances between points are needed; we set $d_{ij} = d(x_i, x_j)$ and $\hat{d}_{ij} = d(y_i, y_j)$, where $d$ is an appropriate distance function.
When referring to functions in the dimRed package or base R, simply the function name is mentioned; functions from other packages are referenced with their namespace, as in package::function.

Dimensionality Reduction Methods
In the following section we do not aim for an exhaustive explanation of every method in dimRed but rather to provide a general idea of how the methods work. An overview and classification of the most commonly used DR methods can be found in Figure 1.
In all methods, parameters have to be optimized or decisions have to be made, even if it is just about the preprocessing steps of the data. The dimRed package tries to make the optimization process for parameters as easy as possible, but, if possible, the parameter space should be narrowed down using prior knowledge. Often decisions can be made based on theoretical knowledge. For example, sometimes an analysis requires data to be kept in their original scales, and sometimes this is exactly what has to be avoided, e.g., when comparing different physical units. Sometimes decisions can be based on the experience of others, e.g., the Gaussian kernel is probably the most universal kernel and therefore should be tested first if there is a choice.
All methods presented here have the embedding dimensionality, $q$, as a parameter (or ndim as a parameter for embed). For methods based on eigenvector decomposition, the result generally does not depend on the number of dimensions, i.e., the first dimension will be the same no matter whether we calculate only two dimensions or more. If more dimensions are added, more information is maintained; the first dimension is the most important and higher dimensions are successively less important. This means that a method based on eigenvalue decomposition only has to be run once if one wishes to compare the embedding in different dimensions. For optimization-based methods this is generally not the case: the number of dimensions has to be chosen a priori, embeddings into 2 and 3 dimensions may differ significantly, and there is no ordered importance of dimensions. This means that comparing dimensions of optimization-based methods is computationally much more expensive.
We try to give the computational complexity of the methods. Because of the actual implementation, computation times may differ considerably. R is an interpreted language, so parts of an algorithm that are implemented in plain R tend to be slow compared to methods that call efficient implementations in a compiled language. Methods that spend most of their computing time on eigenvalue decomposition have very efficient implementations, as R uses optimized linear algebra libraries, although eigenvalue decomposition itself does not scale very well in naive implementations ($O(n^3)$).

PCA
Principal Component Analysis (PCA) is the most basic technique for reducing dimensions. It dates back to Pearson (1901). PCA finds a linear projection ($U$) of the high dimensional space into a low dimensional space $Y = XU$, maintaining maximum variance of the data. It is based on solving the eigenvalue problem

$(C_{XX} - \lambda_k I)\, u_k = 0, \qquad (1)$

where $C_{XX} = \frac{1}{n} X^T X$ is the covariance matrix, $\lambda_k$ and $u_k$ are the $k$-th eigenvalue and eigenvector, and $I$ is the identity matrix. The equation has several solutions for different values of $\lambda_k$ (leaving aside the trivial solution $u_k = 0$). PCA can be efficiently applied to large data sets, because it computationally scales as $O(np^2 + p^3)$, that is, it scales linearly with the number of samples, and R uses specialized linear algebra libraries for this kind of computation.
PCA is a rotation around the origin and there exist forward and inverse mappings. PCA may suffer from a scale problem: one variable may dominate the variance simply because it is measured on a larger scale. To remedy this, the data can be scaled to zero mean and unit variance, depending on the use case and whether this is necessary or desired.
Base R implements PCA in the functions prcomp and princomp, but several other implementations exist, e.g., pcaMethods from Bioconductor, which implements versions of PCA that can deal with missing data. The dimRed package wraps prcomp.
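As a minimal sketch of the common interface applied to PCA: the call below assumes that the wrapper forwards the center and scale. arguments to prcomp (these argument names are an assumption; see the method's documentation):

library(dimRed)
## load an example manifold and embed it with PCA; `center` and `scale.`
## are assumed to be passed through to prcomp()
data_set <- loadDataSet("3D S Curve", n = 500)
emb_pca <- embed(data_set, "PCA", ndim = 2, center = TRUE, scale. = FALSE)
plot(emb_pca, type = "2vars")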

kPCA
Kernel Principal Component Analysis (kPCA) extends PCA to deal with nonlinear dependencies among variables. The idea behind kPCA is to map the data into a high dimensional space using a possibly non-linear function ϕ and then to perform a PCA in this high dimensional space. Some mathematical tricks are used for efficient computation.
If the columns of $X$ are centered around 0, then the principal components can also be computed from the inner product matrix $K = XX^T$. Because of this way of calculating a PCA, we do not need to explicitly map all points into the high dimensional space and do the calculations there; it is enough to obtain the inner product matrix or kernel matrix $K \in \mathbb{R}^{n \times n}$ of the mapped points (Schölkopf et al., 1998).
For example, the kernel matrix can be calculated using a Gaussian kernel:

$K_{ij} = \exp\left(-\frac{\| x_i - x_j \|^2}{2\sigma^2}\right), \qquad (2)$

where $\sigma$ is a length scale parameter accounting for the width of the kernel. The other trick used is known as the "representer theorem"; the interested reader is referred to Schölkopf et al. (2001). The kPCA method is very flexible and there exist many kernels for special purposes. The most common kernel function is the Gaussian kernel (Equation 2). The flexibility comes at the price that the method has to be finely tuned for the data set, because some parameter combinations are simply unsuitable for certain data. The method is not suitable for very large data sets, because memory scales with $O(n^2)$ and computation time with $O(n^3)$.
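The kernel matrix from Equation 2 can be written out in a few lines of plain R; the sketch below (dense matrices, the helper name gauss_kernel is ours) only illustrates what the kernel trick operates on:

## Gaussian kernel matrix (Equation 2) from pairwise Euclidean distances
gauss_kernel <- function(x, sigma = 1) {
  d2 <- as.matrix(dist(x))^2      # squared Euclidean distances, n x n
  exp(-d2 / (2 * sigma^2))        # kernel matrix K
}
K <- gauss_kernel(as.matrix(iris[, 1:4]), sigma = 2)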
Diffusion Maps, Isomap, Locally Linear Embedding, and some other techniques can be seen as special cases of kPCA. In these cases, an out-of-sample extension using the Nyström formula can be applied (Bengio et al., 2004). This also opens up applications to bigger data, where an embedding is trained with a subsample of all data and the remaining data are then embedded using the Nyström formula.
Kernel PCA in R is implemented in the kernlab package using the function kernlab::kpca, which supports a number of kernels as well as user-defined functions. For details see the help page for kernlab::kpca.
The dimRed package wraps kernlab::kpca but additionally provides forward and inverse methods (Bakir et al., 2004) which can be used to fit out-of-sample data or to visualize the transformation of the data space.

Classical Scaling
What today is called Classical Scaling was first introduced by Torgerson (1952). It uses an eigenvalue decomposition of a transformed distance matrix to find an embedding that maintains the distances of the distance matrix. The method works for the same reason that kPCA works, i.e., classical scaling can be seen as a kPCA with kernel $x^T y$. A matrix of Euclidean distances can be transformed into an inner product matrix by some simple transformations and therefore yields the same result as a PCA. Classical scaling is conceptually more general than PCA in that arbitrary distance matrices can be used, i.e., the method does not even need the original coordinates, just a distance matrix $D$. It then tries to find an embedding $Y$ so that $\hat{d}_{ij}$ is as similar to $d_{ij}$ as possible.
The disadvantage is that it is computationally much more demanding, i.e., an eigenvalue decomposition of an $n \times n$ matrix has to be computed. This step requires $O(n^2)$ memory and $O(n^3)$ computation time, while PCA requires only the eigenvalue decomposition of a $p \times p$ matrix, and usually $n \gg p$. R implements classical scaling in the cmdscale function.
The dimRed package wraps cmdscale and allows the specification of arbitrary distance functions for calculating the distance matrix. Additionally, a forward method is implemented.
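A minimal sketch of classical scaling through the common interface, reusing data_set from the PCA sketch above; the method name "MDS" and the distance-function argument d are assumptions about the wrapper's interface:

## classical scaling with a non-default distance; "MDS" and `d`
## are assumed names (see the method's documentation)
emb_mds <- embed(data_set, "MDS", ndim = 2,
                 d = function(x) dist(x, method = "manhattan"))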

Isomap
As Classical Scaling can deal with arbitrarily defined distances, Tenenbaum et al. (2000) suggested approximating the structure of the manifold by using geodesic distances. In practice, a graph is created by either keeping only the connections between every point and its k nearest neighbors to produce a k-nearest neighbor graph (k-NNG), or simply by keeping all distances smaller than a value ε, producing an ε-neighborhood graph (ε-NNG). Geodesic distances are obtained by recording the distance on the graph, and classical scaling is used to find an embedding in fewer dimensions. This leads to an "unfolding" of possibly convoluted structures (see Figure 3).
Isomap's computational cost is dominated by the eigenvalue decomposition and therefore scales with $O(n^3)$. Other related techniques can use more efficient algorithms because the distance matrix becomes sparse due to a different preprocessing.
In R, Isomap is implemented in the vegan package: vegan::isomap calculates an Isomap embedding and vegan::isomapdist calculates a geodesic distance matrix. The dimRed package uses its own implementation. This implementation is faster, mainly due to using a KD-tree for the nearest neighbor search (from the RANN package) and a faster implementation of the shortest path search in the k-NNG (from the igraph package). The implementation in dimRed also includes a forward method that can be used to train the embedding on a subset of data points and then use these points to approximate an embedding for the remaining points. This technique is generally referred to as landmark Isomap (De Silva and Tenenbaum, 2004).

Locally Linear Embedding
Points that lie on a manifold in a high dimensional space can be reconstructed through linear combinations of their neighborhoods if the manifold is well sampled and the neighborhoods lie on locally linear patches. These reconstruction weights, $W$, are the same in the high dimensional space as in the internal coordinates of the manifold. Locally Linear Embedding (LLE; Roweis and Saul, 2000) is a technique that constructs a weight matrix $W \in \mathbb{R}^{n \times n}$ with elements $w_{ij}$ so that

$\sum_{i=1}^{n} \left\| x_i - \sum_{j=1}^{n} w_{ij} x_j \right\|^2 \qquad (3)$

is minimized under the constraint that $w_{ij} = 0$ if $x_j$ does not belong to the neighborhood of $x_i$, and the constraint that $\sum_{j=1}^{n} w_{ij} = 1$. Finally, the embedding $Y$ is chosen so that the cost function

$\sum_{i=1}^{n} \left\| y_i - \sum_{j=1}^{n} w_{ij} y_j \right\|^2 \qquad (4)$

is minimized. This can be solved using an eigenvalue decomposition. Conceptually the method is similar to Isomap, but it is computationally much nicer because the weight matrix is sparse and there exist efficient solvers. In R, LLE is implemented by the lle package; the embedding can be calculated with lle::lle. Unfortunately the implementation does not make use of the sparsity of the weight matrix $W$. The manifold must be well sampled and the neighborhood size must be chosen appropriately for LLE to give good results.
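A short sketch through the common interface, reusing data_set from the PCA sketch above; the name of the neighborhood-size parameter (knn) is an assumption about the wrapper:

## LLE via the common interface; `knn` as the neighborhood-size argument
## is an assumption (see the method's documentation)
emb_lle <- embed(data_set, "LLE", knn = 15, ndim = 2)
plot(emb_lle, type = "2vars")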

Laplacian Eigenmaps
Laplacian Eigenmaps were originally developed under the name spectral clustering to separate non-convex clusters. Later they were also used for graph embedding and DR (Belkin and Niyogi, 2003).
A number of variants have been proposed. First, a graph is constructed, usually from a distance matrix; the graph can be made sparse by keeping only the k nearest neighbors or by specifying an ε neighborhood. Then a similarity matrix $W$ is calculated using a Gaussian kernel (see Equation 2). If $c = 2\sigma^2 = \infty$, all distances are treated equally; the smaller $c$, the more emphasis is given to differences in distance.
The degree of vertex $i$ is $d_i = \sum_{j=1}^{n} w_{ij}$, and the degree matrix $D$ is the diagonal matrix with entries $d_i$. Then we can form the graph Laplacian $L = D - W$. From there, there are several ways to proceed; an overview can be found in Luxburg (2007).
The dimRed package implements the algorithm from Belkin and Niyogi (2003). Analogously to LLE, Laplacian Eigenmaps avoid computational complexity by creating a sparse matrix and not having to estimate the distances between all pairs of points. The eigenvectors corresponding to the lowest eigenvalues larger than 0 of either the matrix $L$ or the normalized Laplacian $D^{-1/2} L D^{-1/2}$ are then computed and form the embedding.
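The construction described above can be written out in a few lines of plain R; the sketch below uses a dense similarity matrix without any sparsification, purely to illustrate the algebra:

## illustrative only: unnormalized graph Laplacian on a dense Gaussian
## similarity matrix; the embedding consists of the eigenvectors of the
## smallest non-zero eigenvalues (eigen() sorts eigenvalues decreasingly)
x <- as.matrix(iris[, 1:4])
W <- exp(-as.matrix(dist(x))^2 / (2 * 1^2))  # similarity matrix (Equation 2)
D <- diag(rowSums(W))                        # degree matrix
L <- D - W                                   # graph Laplacian
e <- eigen(L, symmetric = TRUE)
n <- nrow(W)
Y <- e$vectors[, c(n - 2, n - 1)]            # 2d embedding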

Diffusion Maps
Diffusion Maps (Coifman and Lafon, 2006) take a distance matrix as input and calculate the transition probability matrix $P$ of a diffusion process between the points to approximate the manifold. The embedding is then obtained from an eigenvalue decomposition of $P$, which yields the coordinates of the embedding. The algorithm for calculating Diffusion Maps shares some elements with the way Laplacian Eigenmaps are calculated. Both algorithms start from the same weight matrix; Diffusion Maps calculate the transition probability on the graph after $t$ time steps and do the embedding on this probability matrix.
The idea is to simulate a diffusion process between the nodes of the graph, which is more robust to short-circuiting than the k-NNG from Isomap (see Figure 3, bottom right). Diffusion Maps in R are accessible via the diffusionMap::diffuse() function. Additional points can be approximated into an existing embedding using the Nyström formula (Bengio et al., 2004). The implementation in dimRed is based on the diffusionMap::diffuse function.

non-Metric Dimensional Scaling
While Classical Scaling and derived methods (see section Classical Scaling) use an eigenvector decomposition to embed the data in such a way that the given distances are maintained, non-Metric Dimensional Scaling (nMDS, Kruskal, 1964a,b) uses optimization methods to reach the same goal. For this, a stress function,

$S = \sqrt{\frac{\sum_{i<j} \left( d_{ij} - \hat{d}_{ij} \right)^2}{\sum_{i<j} \hat{d}_{ij}^2}},$

is used, and the algorithm tries to embed $y_i$ in such a way that the order of the $\hat{d}_{ij}$ is the same as the order of the $d_{ij}$. Because optimization methods can fit a wide variety of problems, very loose limits are set on the form of the error or stress function. For instance, Mahecha et al. (2007) found that nMDS using geodesic distances can be almost as powerful as Isomap for embedding biodiversity patterns. Because of the flexibility of nMDS, there is a whole package in R devoted to Multidimensional Scaling, smacof (de Leeuw and Mair, 2009). Several packages provide implementations of nMDS in R, for example MASS and vegan with the functions MASS::isoMDS and vegan::monoMDS. Related methods include Sammon's mapping, which can be found as MASS::sammon. The dimRed package wraps vegan::monoMDS.
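A minimal sketch of nMDS on a plain distance matrix, calling vegan directly; the method name "nMDS" for the dimRed wrapper is an assumption:

## nMDS on Euclidean distances via vegan; k is the embedding dimension
d_high <- dist(iris[, 1:4])
fit <- vegan::monoMDS(d_high, k = 2)
head(fit$points)
## or, through the common interface (method name "nMDS" assumed):
## emb_nmds <- embed(iris[, 1:4], "nMDS", ndim = 2)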

Force Directed Methods
The data $X$ can be considered as a graph with weighted edges, where the weights are the distances between points. Force directed algorithms treat the edges of the graph as springs, or the nodes as electrically charged particles, resulting in attractive or repulsive forces between the nodes; the algorithms then try to minimize the overall energy of the graph, e.g.,

$E = \sum_{i<j} k_{ij} \left( \hat{d}_{ij} - d_{ij} \right)^2,$

where $k_{ij}$ is the spring constant for the spring connecting points $i$ and $j$. Graph embedding algorithms generally suffer from long running times (though compared to other methods presented here they do not scale as badly) and many local optima. This is why a number of methods have been developed that try to deal with some of these shortcomings, for example the Kamada-Kawai (Kamada and Kawai, 1989), the Fruchterman-Reingold (Fruchterman and Reingold, 1991), and the DrL (Martin et al., 2007) algorithms.
A number of graph embedding algorithms are included in the igraph package; they can be accessed using the igraph::layout_with_* function family. The dimRed package only wraps the three algorithms mentioned above; many of the others are not interesting for dimensionality reduction.

t-SNE
Stochastic Neighbor Embedding (SNE; Hinton and Roweis, 2003) is a technique that minimizes the Kullback-Leibler divergence between scaled similarities of the points $i$ and $j$ in high dimensional space, $p_{ij}$, and in low dimensional space, $q_{ij}$:

$KL(P \| Q) = \sum_{i \neq j} p_{ij} \log \frac{p_{ij}}{q_{ij}}.$

SNE uses a Gaussian kernel (see Equation 2) to compute similarities in high and low dimensional space. The t-Distributed Stochastic Neighborhood Embedding (t-SNE; van der Maaten and Hinton, 2008) improves on SNE by using a t-distribution as the kernel in low dimensional space. Because of the heavy-tailed t-distribution, t-SNE maintains local neighborhoods of the data better and penalizes wrong embeddings of dissimilar points. This property makes it especially suitable for representing clustered data and complex structures in few dimensions. The t-SNE method has one parameter, perplexity, to tune, which determines the neighborhood size of the kernels used.
The general runtime of t-SNE is $O(n^2)$, but an efficient implementation using tree search algorithms that scales as $O(n \log n)$ exists and can be found in the Rtsne package in R. The t-SNE implementation in dimRed wraps the Rtsne package.
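A sketch through the common interface, reusing data_set from the PCA sketch above; the method name "tSNE" and the pass-through of perplexity to Rtsne are assumptions about the wrapper:

## t-SNE via dimRed; "tSNE" as the method name and `perplexity` as a
## pass-through argument are assumptions (see the method's documentation)
emb_tsne <- embed(data_set, "tSNE", ndim = 2, perplexity = 30)
plot(emb_tsne, type = "2vars")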
A number of derived techniques for dimensionality reduction exist, e.g., NeRV (Venna et al., 2010) and JNE (Lee et al., 2013), that improve results but for which no CRAN packages exist yet.

ICA
Independent Component Analysis (ICA) interprets the data $X$ as a mixture of independent signals, e.g., a number of sound sources recorded by several microphones, and tries to "un-mix" them to find the original signals in the recorded data. ICA is a linear rotation of the data, just like PCA, but instead of recovering the maximum variance, it recovers statistically independent components. A signal matrix $S$ and a mixing matrix $A$ are estimated so that $X = AS$.
There are a number of algorithms for ICA; the most widely used is FastICA (Hyvarinen, 1999) because it provides a fast and robust way to estimate $A$ and $S$. FastICA maximizes a measure for non-Gaussianity called negentropy $J$ (Comon, 1994), which is equivalent to minimizing the mutual information between the resulting components. Negentropy $J$ is defined as

$J(u) = H(u_{\text{gauss}}) - H(u), \qquad H(u) = -\int f(u) \log f(u)\, du,$

where $u = (u_1, \ldots, u_n)^T$ is a random vector with density $f(\cdot)$, $H$ is the differential entropy, and $u_{\text{gauss}}$ is a Gaussian random variable with the same covariance structure as $u$.
FastICA uses a very efficient approximation to calculate negentropy. Because ICA can be translated into a simple linear projection, a forward and an inverse method can be supplied.
There are a number of packages in R that implement algorithms for ICA; the dimRed package wraps the fastICA::fastICA() function from the fastICA package.
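A short sketch using fastICA directly (S and A are part of the object returned by fastICA); the dimRed method name "FastICA" is an assumption:

## two independent components with fastICA; S holds the estimated
## sources, A the estimated mixing matrix
ica_fit <- fastICA::fastICA(as.matrix(iris[, 1:4]), n.comp = 2)
str(ica_fit$S)
## or via the common interface (method name "FastICA" assumed):
## emb_ica <- embed(iris[, 1:4], "FastICA", ndim = 2)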

DRR
Dimensionality Reduction via Regression (DRR) is a very recent technique extending PCA (Laparra et al., 2015). Starting from a rotated (PCA) solution $X' = XU$, it predicts redundant information from the remaining components using non-linear regression:

$y_{\cdot i} = x'_{\cdot i} - f_i(x'_{\cdot 1}, \ldots, x'_{\cdot i-1}),$

with $x'_{\cdot i}$ and $y_{\cdot i}$ being the loadings of the observations on the $i$-th axis. In theory, any kind of regression can be used. The authors of the original paper chose Kernel Ridge Regression (KRR; Saunders et al., 1998) because it is a flexible nonlinear regression technique for which computational optimizations for fast calculation exist. DRR has another advantage over other techniques presented here: it provides an exact forward and inverse function.
The usage of KRR also has the advantage of making the method convex; we list it here under non-convex methods because other types of regression may make it non-convex.
Mathematically, functions are limited to mapping one input to a single output point. Therefore, DRR reduces to PCA if manifolds are too complex, but it seems very useful for slightly curved manifolds. The initial rotation is important, because the result strongly depends on the order of dimensions in high dimensional space.
DRR is implemented in the package DRR. The package provides forward and inverse functions which can be used to train on a subset.

Quality criteria
The advantage of unsupervised learning is that one does not need to specify classes or a target variable for the data under scrutiny. Instead, the chosen algorithm arranges the input data, for example into clusters or into a lower dimensional representation. In contrast to a supervised problem, there is no natural way to directly measure the quality of any output or to compare two methods by an objective measure like, for instance, modeling efficiency or classification error. The reason is that every method optimizes a different error function, and it would be unfair to compare t-SNE and PCA by means of either recovered variance or KL-divergence. One fair measure would be the reconstruction error, i.e., reconstructing the original data from a limited number of dimensions, but as discussed above, not many methods provide forward and inverse mappings.
However, there is a series of independent estimators of the quality of a low-dimensional embedding. The dimRed package provides a number of quality measures which have been proposed in the literature to measure the performance of dimensionality reduction techniques.

Co-ranking matrix based measures
The co-ranking matrix (Lee and Verleysen, 2009) is a way to capture the changes in ordinal distance. As before, let $d_{ij} = d(x_i, x_j)$ be the distances in high dimensional space and $\hat{d}_{ij} = d(y_i, y_j)$ the distances in low dimensional space. Then we can define the rank of $y_j$ with respect to $y_i$ as

$\hat{r}_{ij} = |\{k : \hat{d}_{ik} < \hat{d}_{ij} \text{ or } (\hat{d}_{ik} = \hat{d}_{ij} \text{ and } 1 \le k < j \le n)\}|,$

and, analogously, the rank in high dimensional space as

$r_{ij} = |\{k : d_{ik} < d_{ij} \text{ or } (d_{ik} = d_{ij} \text{ and } 1 \le k < j \le n)\}|,$

where $|A|$ denotes the number of elements in the set $A$. This means that we simply replace the distances in a distance matrix column-wise by their ranks. Therefore $r_{ij}$ is an integer which indicates that $x_i$ is the $r_{ij}$-th closest neighbor of $x_j$ in the set $X$. The co-ranking matrix $Q$ then has elements

$q_{kl} = |\{(i, j) : \hat{r}_{ij} = k \text{ and } r_{ij} = l\}|,$

which is the 2d-histogram of the ranks. That is, $q_{kl}$ is an integer which counts how many points of distance rank $l$ became rank $k$ in the embedding. In a perfect DR, this matrix will only have non-zero entries on the diagonal; if most of the non-zero entries are in the lower triangle, then the DR collapsed far away points onto each other; if most of the non-zero entries are in the upper triangle, then the DR tore close points apart. For a detailed description of the properties of the co-ranking matrix the reader is referred to Lueks et al. (2011). The co-ranking matrix can be computed using the function coRanking::coranking() and visualized using coRanking::imageplot(). A good embedding scatters the values around the diagonal of the matrix. Because this assessment requires visual inspection of the matrix, an automated assessment of quality needs a scalar value that assigns a quality to an embedding.
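A short sketch of computing and inspecting a co-ranking matrix for a simple PCA embedding; the argument order (high dimensional data first, embedded data second) is an assumption to be checked against ?coRanking::coranking:

## co-ranking matrix for a 2d PCA embedding of a small data set;
## argument order (high-dim first, low-dim second) is assumed here
x_high <- as.matrix(iris[, 1:4])
x_low  <- prcomp(x_high)$x[, 1:2]
Q <- coRanking::coranking(x_high, x_low)
coRanking::imageplot(Q)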
A number of metrics can be computed from the co-ranking matrix, for example

$Q_{NX}(k) = \frac{1}{kn} \sum_{i=1}^{k} \sum_{j=1}^{k} q_{ij},$

which is the proportion of points that belong to the $k$ nearest neighbors in both high and low dimensional space, normalized to give a maximum of 1 (Lee and Verleysen, 2009). This quantity can be adjusted for random embeddings, giving the Local Continuity Meta Criterion (Chen and Buja, 2009):

$LCMC(k) = Q_{NX}(k) - \frac{k}{n-1}.$

The above measures still depend on $k$, but $LCMC$ has a well defined maximum at some $k_{\max}$. Two measures without parameters are then defined:

$Q_{\text{local}} = \frac{1}{k_{\max}} \sum_{k=1}^{k_{\max}} Q_{NX}(k) \quad \text{and} \quad Q_{\text{global}} = \frac{1}{n - 1 - k_{\max}} \sum_{k=k_{\max}+1}^{n-1} Q_{NX}(k).$

These measure the preservation of local and global distances, respectively. The original authors advised using $Q_{\text{local}}$ over $Q_{\text{global}}$, but this depends on the application.
$LCMC(k)$ can be normalized to a maximum of 1, yielding the following measure for the quality of an embedding (Lee et al., 2013):

$R_{NX}(k) = \frac{(n-1)\, Q_{NX}(k) - k}{n - 1 - k},$

where a value of 0 corresponds to a random embedding and a value of 1 to a perfect embedding into the $k$-ary neighborhood. To transform $R_{NX}(k)$ into a parameterless measure, the area under its curve can be used:

$AUC_{\ln k}\big(R_{NX}(k)\big) = \left( \sum_{k=1}^{n-2} \frac{R_{NX}(k)}{k} \right) \bigg/ \left( \sum_{k=1}^{n-2} \frac{1}{k} \right).$

This measure is normalized to 1 and weights $k$ on a log scale; it therefore favors methods that preserve local distances.
In R, the co-ranking matrix can be calculated using the coRanking::coranking function. The dimRed package contains the functions Q_local, Q_global, Q_NX, LCMC, and R_NX to calculate the above quality measures, in addition to AUC_lnK_R_NX.
Calculating the co-ranking matrix is a relatively expensive operation because it requires sorting every row of the distance matrix twice. It therefore scales with $O(n^2 \log n)$. There is also a plotting function, plot_R_NX, which plots the $R_{NX}$ values against log-scaled $k$ and adds the $AUC_{\ln k}$ to the legend (see Figure 2).
There are a number of other measures that can be computed from a co-ranking matrix, e.g., see Lueks et al. (2011); Verleysen (2009), or Babaee et al. (2013).

Cophenetic correlation
An old measure, originally developed to compare clustering methods in the field of phylogenetics, is the cophenetic correlation (Sokal and Rohlf, 1962). It is simply the correlation between the upper or lower triangles of the distance matrices (in dendrograms these are called cophenetic matrices, hence the name) in high and low dimensional space. Additionally, the distance measure and correlation method can be varied. In the dimRed package this is implemented in the cophenetic_correlation function.
Some studies use a measure called "residual variance" (Tenenbaum et al., 2000; Mahecha et al., 2007), which is defined as

$1 - r^2(D, \hat{D}),$

where $r$ is the Pearson correlation and $D$, $\hat{D}$ are the distance matrices consisting of the elements $d_{ij}$ and $\hat{d}_{ij}$, respectively.
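The residual variance is straightforward to compute by hand; the following sketch uses the vectorized lower triangles of the two distance matrices of a plain PCA embedding:

## residual variance between original and embedded distances
D_high <- dist(iris[, 1:4])                   # distances d_ij
D_low  <- dist(prcomp(iris[, 1:4])$x[, 1:2])  # distances d^_ij
res_var <- 1 - cor(as.vector(D_high), as.vector(D_low))^2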

Reconstruction error
The fairest and most common way to assess the quality of a dimensionality reduction, when the method provides an inverse mapping, is the reconstruction error. The dimRed package includes a function to calculate the root mean squared error, which is defined as

$\mathrm{RMSE} = \sqrt{\frac{1}{n} \sum_{i=1}^{n} \left\| x'_i - \hat{x}'_i \right\|^2},$

with $\hat{x}'_i = f^{-1}(y_i)$, $f^{-1}$ being the function that maps an embedded value back to feature space.
The dimRed package provides the reconstruction_rmse and reconstruction_error functions.
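A minimal usage sketch, reusing data_set from the PCA sketch above: the RMSE can only be computed for methods that provide an inverse mapping, such as PCA; "reconstruction_rmse" as a score name accepted by quality() is an assumption (check dimRedQualityList()):

## reconstruction RMSE of a PCA embedding; the score name is assumed
## to appear in dimRedQualityList()
emb_pca <- embed(data_set, "PCA", ndim = 2)
quality(emb_pca, "reconstruction_rmse")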

Test data sets
There are a number of test data sets that are often used to showcase dimensionality reduction techniques, common ones being the 3d S-curve and the Swiss roll. These data sets usually have three dimensions and well defined manifolds. Real world examples usually have more dimensions and are often much noisier; the manifolds may not be well sampled, may exhibit holes, and large pieces may be missing. Additionally, we cannot be sure that we observe all the relevant variables.
The dimRed package provides a number of test data sets that are used in the literature to benchmark methods via the function dimRed::loadDataSet(). For artificial data sets the number of points and the noise level can be adjusted; the function also returns the internal coordinates.
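A small sketch; the data set name "Swiss Roll", the listing function dataSetList(), and the noise argument sigma are assumptions about the package's interface:

## list the available data sets and load a noisy Swiss roll;
## dataSetList() and the `sigma` noise argument are assumed names
dataSetList()
swiss <- loadDataSet("Swiss Roll", n = 1000, sigma = 0.05)
plot(swiss, type = "3vars")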

The dimRed Package
The dimRed package collects DR methods readily implemented in R, implements missing methods, and offers means to compare the quality of embeddings. The package is open source and available under the GPL3 license. Released versions of the package are available through CRAN (https://cran.r-project.org/package=dimRed) and development versions are hosted on GitHub (https://github.com/gdkrmr/dimRed). The dimRed package provides a common interface and convenience functions for a variety of different DR methods, making it easier to use and compare different methods. An overview of the package's main functions can be found in Table 1.
Internally, the package uses S4 classes, but for normal usage the user does not need any knowledge of the inner workings of the S4 class system in R (cf. Table 2). The package contains simple conversion functions from and to standard R objects like a data.frame or a matrix. The "dimRedData" class provides a container for the data to be processed. The slot data contains a matrix with dimensions in columns and observations in rows; the slot meta may contain a data frame with additional information, e.g., categories or other information about the data points.
Each embedding method is a class which inherits from "dimRedMethod", which means that it contains a function to generate "dimRedResult" objects and a list of standard parameters. The class "dimRedResult" contains the data in reduced dimensions, the original meta information along with the original data, and, if possible, functions for the forward and inverse mapping.

Table 1: The main functions of the dimRed package.

  Function            Description
  embed               Embed data using a DR method.
  quality             Calculate a quality score from the result of embed.
  plot                Plot a "dimRedData" or "dimRedResult" object; colors the points automatically, for exploring the data.
  plot_R_NX           Compares the quality of various embeddings.
  dimRedMethodList    Returns a character vector that contains all implemented DR methods.
  dimRedQualityList   Returns a character vector that contains all implemented quality measures.

Table 2: The S4 classes of the dimRed package.

  Class Name        Function
  "dimRedData"      Holds the data for a DR; fed to embed(). as.dimRedData() methods exist for "data.frame", "matrix", and "formula".
  "dimRedMethod"    Virtual class, ancestor of all DR methods.
  "dimRedResult"    The result of embed(); contains the embedded data.
From a user perspective, the central function of the package is embed, which is called in the form embed(data, method, ...). Here, data can be a standard R object such as an instance of "data.frame", "matrix", or "formula", and method is given as a character vector. All available methods can be listed by calling dimRedMethodList(). Method-specific parameters can be passed through ...; when no method-specific parameters are given, defaults are chosen. The embed function returns an object of class "dimRedResult".
For comparing different embeddings, dimRed contains the function quality, which takes the output of embed and a method name and returns a scalar quality score; a vector containing the names of all quality functions is returned by calling dimRedQualityList().
For easy visual examination, the package contains plot methods for "dimRedData" and "dimRedResult" objects in order to plot high dimensional data using parallel plots and pairwise scatter plots. Automatic coloring of data points is done using the available metadata.

Examples
The comparison of different DR methods, choosing the right parameters for a method, and the inspection of the results are simplified by dimRed. This section contains a number of examples to highlight the usage of the package.
To compare methods of dimensionality reduction, first a test data set is loaded using loadDataSet, then the embed function is used for DR (embed can also handle standard R types like matrix and data.frame). This makes it very simple to apply different DR methods to the same data, e.g., by defining a character vector of method names and then iterating over these, say with lapply. For inspection, dimRed provides methods for the plot function to visualize the resulting embedding (Figure 2b and d); internal coordinates of the manifold are represented by color gradients. To visualize how well embeddings represent different neighborhood sizes, the function plot_R_NX is used on a list of embedding results (Figure 2c). The plots in Figure 2 are produced by the following code:

## define which methods to apply
embed_methods <- c("Isomap", "PCA")
## load test data set
data_set <- loadDataSet("3D S Curve", n = 1000)
## apply dimensionality reduction
data_emb <- lapply(embed_methods, function(x) embed(data_set, x))
names(data_emb) <- embed_methods
## figure 2a, the data set
plot(data_set, type = "3vars")
## figures 2b (Isomap) and 2d (PCA)
lapply(data_emb, plot, type = "2vars")
## figure 2c, quality analysis
plot_R_NX(data_emb)

The function plot_R_NX produces a figure that plots the neighborhood size ($k$ on a log scale) against the quality measure $R_{NX}(k)$ (see the section on co-ranking matrix based measures). This gives an overview of the general behavior of methods: if $R_{NX}$ is high for low values of $k$, then local neighborhoods are maintained well; if $R_{NX}$ is high for large values of $k$, then global gradients are maintained well. It also provides a way to directly compare methods by plotting more than one $R_{NX}$ curve, and an overall quality of the embedding by taking the area under the curve, $AUC_{\ln k}$, which is shown as a number in the legend.
From Figure 2c we can therefore see that Isomap is very good at maintaining close and medium distances for the given data set, whereas PCA is only better at maintaining the very large distances. The large distances are dominated by the overall bent shape of the S in 3D space, while the close distances are not affected by this bending. This is reflected in the properties recovered by the different methods: the PCA embedding recovers the S-shape, while Isomap ignores the S-shape and recovers the inner structure of the manifold.
Often the quality of an embedding strongly depends on the choice of parameters; the interface of dimRed can be used to facilitate searching the parameter space. Isomap has one parameter, k, which determines the number of neighbors used to construct the k-NNG. If this number is too large, then Isomap will resemble an MDS (Figure 3e); if the number is too small, the resulting embedding contains holes (Figure 3c). The following code finds the optimal value, $k_{\max}$, for k using the $Q_{\text{local}}$ criterion; the results are visualized in Figure 3.

[Figure 3: (a) The $Q_{\text{local}}$ score for different values of k. (b) The original data set, a 2-dimensional manifold bent into an S-shape in 3-dimensional space. Bottom row: embeddings and k-NNG for different values of k. (c) k = 5 is too small, resulting in holes in the embedding, although the manifold itself is still unfolded correctly. (d) k = $k_{\max}$ gives the best representation of the original manifold in two dimensions achievable with Isomap. (e) k = 100 is too large; the k-NNG no longer approximates the manifold.]
## Load data
ss <- loadDataSet("3D S Curve", n = 500)
## Parameter space
kk <- floor(seq(5, 100, length.out = 40))
## Embedding over parameter space
emb <- lapply(kk, function(x) embed(ss, "Isomap", knn = x))
## Quality over embeddings
qual <- sapply(emb, function(x) quality(x, "Q_local"))
## Find best value for K
ind_max <- which.max(qual)
k_max <- kk[ind_max]

Figure 3a shows how the $Q_{\text{local}}$ criterion changes when varying the neighborhood size k for Isomap; the gray lines in Figure 3 represent the edges of the k-NN graph. If the value for k is too low, the inner structure of the manifold will still be recovered, but it will be imperfect (Figure 3c; note that the holes appear in places that are not covered by the edges of the k-NN graph), and therefore the $Q_{\text{local}}$ score is lower than optimal. If k is too large, the error of the embedding is much larger due to short circuiting, and we observe a very steep drop in the $Q_{\text{local}}$ score. The short circuiting can be observed in Figure 3e, where edges cross the gap between the tips and the center of the S-shape.
It is also very easy to compare across methods and quality scores. The following code produces a matrix of quality scores and methods, where dimRedMethodList returns a character vector with all methods. A visualization of the matrix can be found in Figure 4, where the methods are ordered by mean quality score. The reconstruction error was omitted, because for this measure a higher value means a worse embedding, while for the other scores a higher value means a better embedding. Parameters were not tuned for this example, so it should not be seen as a general quality assessment of the methods.
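A minimal sketch of such a loop, pairing every method from dimRedMethodList() with every score from dimRedQualityList(); try() guards against methods or scores that fail on this data set, and excluding the reconstruction error assumes it is listed under the name "reconstruction_rmse":

## embed the S-curve with every available method and score every
## embedding with every available quality measure
embed_methods <- dimRedMethodList()
quality_methods <- setdiff(dimRedQualityList(), "reconstruction_rmse")
scurve <- loadDataSet("3D S Curve", n = 1000)
quality_results <- matrix(
  NA, length(embed_methods), length(quality_methods),
  dimnames = list(embed_methods, quality_methods)
)
for (e in embed_methods) {
  emb <- try(embed(scurve, e))
  for (q in quality_methods)
    try(quality_results[e, q] <- quality(emb, q))
}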

Conclusion
This paper presents the dimRed and coRanking packages and provides a brief overview of the methods implemented therein. The dimRed package is written in the R language, one of the most popular languages for data analysis, and is freely available from CRAN. The package is object oriented and completely open source and therefore easily available and extensible. Although most of the DR methods already had implementations in R, dimRed adds some new methods for dimensionality reduction, and coRanking adds methods for an independent quality control of DR methods to the R ecosystem. DR is a widely used technique, but due to the lack of easily usable tools, choosing the right method for DR is complex and depends upon a variety of factors. The dimRed package aims to facilitate experimentation with different techniques, parameters, and quality measures so that choosing the right method becomes easier. It enables the user to objectively compare methods that rely on very different algorithmic approaches. It also makes the life of the programmer easier, because all methods are aggregated in one place and there is a single interface and standardized classes to access the functionality.