auditor: an R Package for Model-Agnostic Visual Validation and Diagnostics

Machine learning models have been successfully applied to challenges in biology, medicine, finance, physics, and other fields. With modern software it is easy to train even a complex model that fits the training data and achieves high accuracy on a test set. However, problems often arise when models are confronted with real-world data. This paper describes methodology and tools for model-agnostic auditing. It provides functions for assessing and comparing the goodness of fit and performance of models. In addition, the package may be used for analysis of the similarity of residuals and for identification of outliers and influential observations. The examination is carried out by diagnostic scores and visual verification. The code presented in this paper is implemented in the auditor package. Its flexible and consistent grammar facilitates the validation of a large class of models.

Alicja Gosiewska (Faculty of Mathematics and Information Science, Warsaw University of Technology) , Przemysław Biecek (Faculty of Mathematics and Information Science, Warsaw University of Technology)
2019-08-18

1 Introduction

Predictive modeling is a process that uses mathematical and computational methods to forecast outcomes. Many algorithms in this area have been developed and novel ones are continuously being proposed. Therefore, there are countless possible models to choose from and many ways to train a new complex model. A poorly fitted or overfitted model will usually be of no use when confronted with future data. Its predictions will be misleading (Sheather 2009) or harmful (O’Neil 2016). That is why methods that support model diagnostics are important.

Diagnostics are often carried out only by checking model assumptions. However, they are often neglected for complex machine learning models, which may be used as if they were assumption-free. Still, there is a need to verify their quality. We strongly believe that a genuine diagnosis or audit incorporates a broad approach to model exploration. The audit includes three objectives: evaluation of model performance (Objective 1), identification of outliers and influential observations (Objective 2), and examination of the distribution of residuals (Objective 3).

In this paper, we introduce the auditor package for R, which is a tool for diagnostics and visual verification. As it focuses on residuals and does not require any additional model assumptions, most of the presented methods are model-agnostic. A consistent grammar across various tools reduces the amount of effort needed to create informative plots and makes the validation more convenient and accessible.

Figure 1: Anscombe Quartet data sets are identical when examined with the use of simple summary statistics. The difference is noticeable after plotting the data.

Diagnostic methods have been a subject of much research (Atkinson 1985). Atkinson and M. Riani (2012) focus on graphical methods of diagnostic regression analysis. Liu, X. Wang, M. Liu, and J. Zhu (2017) present an overview of interactive visual model validation. Among the most popular tools for verification are measures of the differences between the values predicted by a model and the observed values (Willmott, S. G. Ackleson, R. E. Davis, J. J. Feddema, K. M. Klink, D. R. Legates, J. O’Donnell, and C. M. Rowe 1985). These tools include the Root Mean Square Error (RMSE) and the Mean Absolute Error (MAE) (Hastie, R. Tibshirani, and J. Friedman 2001). Such measures are used both for well-researched and easily interpretable linear models and for complex models such as random forests (Ho 1995), gradient-boosted trees (Chen and C. Guestrin 2016), or neural networks (Venables and B. D. Ripley 2002).

However, no matter which measure of model performance we use, it does not reflect all aspects of the model. For example, Breiman (2001) points out that a linear regression model validated only on the basis of \(R^2\) may lead to many false conclusions. The best-known example of this issue is the Anscombe Quartet (Anscombe 1973). It contains four different data sets constructed to have nearly identical simple statistical properties such as mean, variance, correlation, etc. These measures directly correspond to the coefficients of the linear models. Therefore, by fitting a linear regression to the Anscombe Quartet we obtain four almost identical models (see Figure 1). However, the residuals of these models are very different. The Anscombe Quartet is used to highlight that numerical measures should be supplemented by graphical data visualizations.
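This effect is easy to reproduce; a minimal sketch using the anscombe data set that ships with base R:

  # Fit the four Anscombe regressions: nearly identical coefficients,
  # clearly different residuals.
  models <- lapply(1:4, function(i) {
    lm(reformulate(paste0("x", i), response = paste0("y", i)), data = anscombe)
  })
  t(sapply(models, coef))    # four almost identical pairs of coefficients
  lapply(models, residuals)  # four very different residual patterns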

The analysis of diagnostics is well-researched for linear and generalized linear models. It is typically done by extracting raw, studentized, deviance, or Pearson residuals and examining residual plots. Common problems with model fit and basic diagnostic methods are presented in Faraway (2002) and Harrell Jr. (2006).

Model validation may involve both checking the overall trend in residuals and looking at residual values of individual observations (Littell, G. A. Milliken, W. W. Stroup, R. D. Wolfinger, and O. Schabenberger 2007). Gałecki and T. Burzykowski (2013) discussed methods based on residuals for individual observations and for groups of observations.

Diagnostic methods are commonly used for linear regression (Faraway 2004). Complex models are treated as if they were assumption-free, which is why their diagnostics are often ignored. Considering the above, there is a need for more extensive methods and software dedicated to model auditing. Many diagnostic tools, such as plots and statistics developed for linear models, are still useful for exploring machine learning models. Applying the same tools to all models facilitates their comparison.

The paper is organized as follows. Section 2 summarizes related work and the state of the art. Section 3 describes the architecture of the auditor package. Section 4 introduces the notation. Selected tools that help to validate models are presented in Section 5, and conclusions can be found in Section 6.

2 Related work

In this section, we give an overview of common methods and tools for auditing and examining the validity of models. There are several approaches to validation. They include diagnostics for predictor variables before and after model fit, methods dedicated to specific models, and model-agnostic approaches.

Data diagnostics before model fitting

The problem of data diagnostics is related to Objective 2 presented in Section 1, that is, the identification of problems with observations. There are several tools that address this issue. We review the most popular of them.

Diagnostics methods for linear models

As linear models have a very simple structure and do not require high computational power, they have been, and still are, used very frequently. Therefore, there are many tools that validate different aspects of linear models. Below, we overview the most widely known tools implemented in R packages.

Other model-specific approaches

There are also several tools to generate validation plots for time series, principal component analysis, clustering, and others.

Model-agnostic approach

The tools presented above target specific model classes. The model-agnostic approach allows us to compare different models.

Model-agnostic audit

In this paper, we present the auditor package for R, which fills the gap of model-agnostic validation. As it extends methods used for linear regression, it may be used to verify any predictive model.

3 Package Architecture

The auditor package works for any predictive model that returns a numeric value. It offers a consistent grammar of model validation, which is an efficient and convenient way to generate plots and diagnostic scores. A diagnostic score is a number that evaluates one of the properties of a model, for example, the accuracy of the model, the independence of residuals, or the influence of an observation.

Figure 2 presents the architecture of the package. The auditor provides two pipelines for model validation. The first of them consists of two steps. The audit function wraps up the model with meta-data, then the result is passed to the plot or score function. The second pipeline includes an additional step, which consists of calling one of the functions that generate computations for plots and scores. These functions are: modelResiduals, modelEvaluation, modelFit, modelPerformance, and observationInfluence. In the following, we call them computational functions. Results of these functions are tidy data frames (Wickham 2014).

Figure 2: Architecture of the auditor. The blue color indicates the first pipeline, while orange indicates the second. The audit function takes a model and data or an "explainer" object created with the DALEX package.

Both pipelines for model audit are compared below.

  1. model %>% audit() %>% computational function %>% plot(type=…)
    We recommend this pipeline. The audit function wraps up a model with the meta-data used for modeling and creates a "modelAudit" object. One of the computational functions takes the "modelAudit" object and computes the results of validation. Then, the outputs may be printed or passed to the score and plot functions with a defined type. We describe the types of plots in Section 5. This approach requires one additional function within the pipeline. However, once created, the output of the computational function contains all necessary calculations for the related plots. Therefore, generating multiple plots is fast.

  2. model %>% audit() %>% plot(type=…)
    This pipeline is shorter than the previous one. The only difference is that it does not include a computational function. Calculations are carried out every time the generic plot function is called. Omitting one step might be convenient for ad-hoc model analyses. A minimal sketch of both pipelines is given below.
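The two pipelines can be written out as follows (an illustrative sketch, assuming a fitted model named model and the magrittr pipe; "ACF" is one of the plot types from Table 1):

  library(magrittr)

  # Pipeline 1: an explicit computational function; its output can be
  # reused for several plots and scores
  model %>% audit() %>% modelResiduals() %>% plot(type = "ACF")

  # Pipeline 2: calculations are performed on the fly by plot()
  model %>% audit() %>% plot(type = "ACF")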

Implemented types of plots are presented in Table 1. Scores are presented in Table 2. All plots are generated with ggplot2, which provides a convenient way to modify and combine plots.
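For example, since the plots are generated with ggplot2, a returned plot can be modified with standard ggplot2 layers. A minimal sketch, assuming a "modelResiduals" object such as lm_res_fitted created in Section 5, and assuming the returned object is a ggplot object:

  library(ggplot2)

  p <- plot(lm_res_fitted, type = "Residual")  # a ggplot object
  p + theme_light() + ggtitle("Residuals of the linear model")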

Table 1: Columns contain, respectively: the name of the plot, the name of the computational function, the value of the type parameter of the plot function, and indications whether the plot can be applied to regression and classification tasks.
Plot Function plot(type = ...) Reg. Class.
Autocorrelation Function modelResiduals "ACF" + +
Autocorrelation modelResiduals "Autocorrelation" + +
Cook’s Distance observationInfluence "CooksDistance" + +
Half-Normal modelFit "HalfNormal" + +
LIFT Chart modelEvaluation "LIFT" +
Model Correlation modelResiduals "ModelCorrelation" + +
Model PCA modelResiduals "ModelPCA" + +
Model Ranking modelPerformance "ModelRanking" + +
Predicted Response modelPerformance "ModelPerformance" + +
REC Curve modelResiduals "REC" + +
Residuals modelResiduals "Residual" + +
Residual Boxplot modelResiduals "ResidualBoxplot" + +
Residual Density modelResiduals "ResidualDensity" + +
ROC Curve modelEvaluation "ROC" +
RROC Curve modelResiduals "RROC" + +
Scale-Location modelResiduals "ScaleLocation" + +
Two-sided ECDF modelResiduals "TwoSidedECDF" + +
Table 2: Columns contain, respectively: the name of the score, the name of the computational function, the value of the type parameter of the score function, and indications whether the score can be applied to regression and classification tasks.
Score Function score(type = ...) Reg. Class.
Cook’s Distance observationInfluence "CooksDistance" + +
Durbin-Watson modelResiduals "DW" + +
Half-Normal modelFit "HalfNormal" + +
Mean Absolute Error modelResiduals "MAE" + +
Mean Squared Error modelResiduals "MSE" + +
Area Over the REC modelResiduals "REC" + +
Root Mean Squared Error modelResiduals "RMSE" + +
Area Under the ROC modelEvaluation "ROC" +
Area Over the RROC modelResiduals "RROC" + +
Runs modelResiduals "Runs" + +
Peak modelResiduals "Peak" + +

4 Notation

Let us use the following notation: \(x_i = (x_i^{(1)}, x_i^{(2)}, ..., x_i^{(p)}) \in \mathcal{X} \subset \mathbb{R}^{p}\) is a vector in space \(\mathcal{X}\), and \(y_i \in \mathbb{R}\) is an observed response associated with \(x_i\). We denote a single observation as a pair \((y_i, x_i)\), and \(n\) is the number of observations.

Let us denote a model as a function \(f: \mathcal{X} \to \mathbb{R}\). We denote the prediction of model \(f\) for a particular observation as \[f(x_i) = \hat{y_i}.\] The raw residual, or simply the residual, is the difference between the observed value \(y_i\) and the predicted value \(\hat{y_i}\). We denote the residual of a particular observation as \[r_i = y_i - \hat{y_i}.\]
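In R, these raw residuals correspond to a simple vector operation; a sketch, assuming a fitted model named model and a data frame df with the observed response in column y (all names are illustrative):

  # r_i = y_i - f(x_i) for every observation
  y_hat <- predict(model, newdata = df)
  r <- df$y - y_hat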

5 Illustrations

Diagnostics allow for the evaluation of different properties of a model. They may be related to the following questions: Which model has better performance? Does the model fit the data? Which observations are abnormal? These questions are directly related to the diagnostic objectives described in Section 1. The first of them refers to the evaluation of model performance, which was proposed as Objective 1. The second question concerns the examination of the distribution of residuals (Objective 3). The last one refers to outliers and influential observations (Objective 2).

In this section, we illustrate selected validation tools that allow for exploration of the above issues. To demonstrate applications of the auditor, we use the apartments data set available in the DALEX package. First, we fit two models: a simple linear regression and a random forest.

  library("auditor")
  library("DALEX")
  library("randomForest")

  lm_model <- lm(m2.price ~ ., data = apartments)
  set.seed(59)
  rf_model <- randomForest(m2.price ~ ., data = apartments)

The next step creates "modelAudit" objects related to these two models.

  lm_audit <- audit(lm_model, label = "lm", 
                data = apartmentsTest, y = apartmentsTest$m2.price)
  rf_audit <- audit(rf_model, label = "rf", 
                data = apartmentsTest, y = apartmentsTest$m2.price)

Below, we create objects of class "modelResidual", which are needed to generate plots. The variable parameter determines the order of residuals in the plot. When the variable argument is set to "Fitted values", residuals are sorted by the values of the predicted responses. Entering the name of a variable, such as "m2.price", means that residuals will be ordered by the values of this variable.

  lm_res_fitted <- modelResiduals(lm_audit, variable = "Fitted values")
  rf_res_fitted <- modelResiduals(rf_audit, variable = "Fitted values")
  
  lm_res_observed <- modelResiduals(lm_audit, variable = "m2.price")
  rf_res_observed <- modelResiduals(rf_audit, variable = "m2.price")

Model Ranking Plot

In this subsection, we propose the Model Ranking plot, which compares the performance of models across multiple measures (see Figure 3). The implemented measures are listed in Table 2 in Section 3. Descriptions of all scores are given in Gosiewska and P. Biecek (2018).

The Model Ranking Radar plot consists of two parts. On the left side there is a radar plot. Colors correspond to models, edges to values of scores. Score values are inverted and rescaled to \([0,1]\).

Let us use the following notation: \(m_i \in \mathcal{M}\) is a model in a finite set of models \(\mathcal{M}\), where \(|\mathcal{M}| = k\), and \(score: \mathcal{M} \to \mathbb{R}\) is a loss function for the model under consideration. Higher values mean worse model performance. The value \(score(m_i)\) is the performance of model \(m_i\).

Definition 1. We define the inverted score of model \(m_i\) as \[invscore(m_i) = \frac{1}{score(m_i)} \min_{j=1...k}{score(m_j)}. \tag{1}\]

Models with a larger \(invscore\) are closer to the center. Therefore, the best model is located the farthest from the center of the plot. On the right side of the plot there is a table with the results of scoring. The third column contains scores scaled to one of the models.

Let \(m_l \in \mathcal{M}\) where \(l \in \{ 1,2, ..., k \}\) be a model to which we scale.

Definition 2. We define the scaled score of model \(m_i\) to model \(m_l\) as \[scaled_l(m_i) = \frac{score(m_l)}{score(m_i)}.\]

As values of \(scaled_l(m_i)\) are always between \(0\) and \(1\), comparison of models is easy, regardless of the ranges of scores.
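Both definitions reduce to simple arithmetic on a vector of scores; a sketch with illustrative helper names and made-up score values (not part of the auditor API):

  inv_score    <- function(scores) min(scores) / scores   # Definition 1
  scaled_score <- function(scores, l) scores[l] / scores  # Definition 2

  scores <- c(lm = 124.7, rf = 102.3)  # e.g., two hypothetical RMSE values
  inv_score(scores)                    # the best model gets 1, others less
  scaled_score(scores, l = 2)          # all scores scaled to model 2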

The plot below is generated by the plot function with parameter type = "ModelRanking" or by the plotModelRanking function. The scores included in the plot may be specified with the scores parameter.

  rf_mp <- modelPerformance(rf_audit)
  lm_mp <- modelPerformance(lm_audit)
  plot(rf_mp, lm_mp, type = "ModelRanking")
Figure 3: Model Ranking Plot. The random forest (red) has better performance in terms of the MAE and REC scores, while the linear model (blue) is better in terms of the MSE and RROC scores.

REC Curve Plot

The Regression Error Characteristic (REC) curve (see Figure 4) is a generalization of the Receiver Operating Characteristic (ROC) curve for binary classification (Swets 1988).

The REC curve estimates the Cumulative Distribution Function of the error. On the x axis of the plot there is the error tolerance, and on the y axis there is the accuracy at the given tolerance level. Bi and K. P. Bennett (2003) define the accuracy at tolerance \(\epsilon\) as the percentage of observations predicted within the tolerance \(\epsilon\). In other words, residuals larger than \(\epsilon\) are considered errors.

Let us consider the pairs \((y_i, x_i)\) introduced in Section 4. Bi and K. P. Bennett (2003) define accuracy as follows.

Definition 3. The accuracy at tolerance level \(\epsilon\) is given by \[acc(\epsilon) = \frac{|\{ (x_i, y_i): loss(f(x_i), y_i) \leq \epsilon, \; i = 1, ..., n \}|}{n}.\]

REC curves implemented in the auditor are plotted for a special case of Definition 3, where the loss is defined as \[loss(f(x_i), y_i) = |f(x_i) - y_i| = |r_i|.\] The shape of the curve illustrates the behavior of errors. The quality of the model can be evaluated and compared for different tolerance levels. A stable growth of the accuracy does not indicate any problems with the model. A small increase of accuracy near \(0\) and areas where the growth is fast signal a bias of the model predictions.
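The empirical version of this accuracy is a one-liner; a sketch of how an REC curve could be computed from residuals (illustrative code, not the auditor implementation):

  # acc(eps) with absolute-error loss: fraction of |r_i| within tolerance
  rec_accuracy <- function(r, eps) mean(abs(r) <= eps)

  r <- residuals(lm(m2.price ~ ., data = DALEX::apartments))
  eps_grid <- seq(0, max(abs(r)), length.out = 100)
  acc <- vapply(eps_grid, function(e) rec_accuracy(r, e), numeric(1))
  plot(eps_grid, acc, type = "l")  # an empirical REC curve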

The plot below is generated by the plot function with parameter type = "REC" or by the plotREC function.

  plot(rf_res_fitted, lm_res_fitted, type = "REC")
Figure 4: REC curve. The curve for the linear model (blue) suggests that the model is biased. It displays poor accuracy when the tolerance ϵ is small. However, once ϵ exceeds 130, accuracy rapidly increases. The random forest (red) has a stable increase of accuracy when compared to the linear model. However, there is a fraction of large residuals.

As it is often difficult to compare models on the plot, there is an REC score implemented in the auditor. This score is the Area Over the REC Curve (AOC), which is a biased estimate of the expected error for a regression model. As Bi and K. P. Bennett (2003) proved, the AOC provides a measure of the overall performance of a regression model.

Scores may be obtained by the score function with type = "REC" or by the scoreREC function.

  scoreREC(lm_res_fitted)
  scoreREC(rf_res_fitted)

Residual Boxplot Plot

The residual boxplot shows the distribution of the absolute values of residuals \(r_i\). It may be used for analysis and comparison of residuals. Example plots are presented in Figure 5. Boxplots (Tukey 1977) usually consist of five components. The box itself corresponds to the first quartile, median, and third quartile. The whiskers extend to the smallest and largest values, no further than 1.5 IQR (Interquartile Range) from the first and third quartile, respectively. Residual boxplots contain a sixth component, namely a red dot which stands for the Root Mean Square Error (RMSE). In the case of an appropriate model, most of the residuals should lie near zero. A large spread of values indicates problems with a model.
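The components described above can be computed by hand; a sketch for a residual vector r (illustrative code, not the auditor implementation):

  r <- residuals(lm(m2.price ~ ., data = DALEX::apartments))
  a <- abs(r)                           # the boxplot is drawn for |r_i|
  q <- quantile(a, c(0.25, 0.5, 0.75))  # box: first quartile, median, third quartile
  upper_whisker <- max(a[a <= q[3] + 1.5 * IQR(a)])
  rmse <- sqrt(mean(r^2))               # marked as the red dot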

The plot presented below is generated by the plotResidualBoxplot function or by the plot function with parameter type = "ResidualBoxplot".

  plot(lm_res_fitted, rf_res_fitted, type = "ResidualBoxplot")
Figure 5: Boxplots of absolute values of residuals. The dots are in similar places, hence the RMSE for both models is almost identical. However, the distributions of residuals of these two models are different. For the linear model (blue), most of the residuals are around the average. For the random forest (red), most residuals are small. Nevertheless, there is also a fraction of large residuals.

Residual Density Plot

The Residual Density plot detects incorrect behavior of residuals. An example is presented in Figure 6. The plot shows the estimated densities of residuals. For some models, the expected shape of the density derives from the model assumptions. For example, residuals of a simple linear model should be normally distributed. However, even if the model does not have an assumption about the distribution of residuals, such a plot may be informative. If most of the residuals are not concentrated around zero, it is likely that the model predictions are biased. Values of errors are displayed as marks along the x axis. This makes it possible to ascertain whether there are individual observations or groups of observations with residuals significantly larger than others.

The plot below is generated by the plotResidualDensity function or by the plot function with parameter type = "ResidualDensity".

  plot(rf_res_observed, lm_res_observed, type = "ResidualDensity")
Figure 6: Residual Density Plot. The density of residuals for the linear model (blue) forms two peaks. There are no residuals with values around zero. The residuals do not follow the normal distribution, which is one of the assumptions of simple linear regression. There is an asymmetry of the residuals generated by the random forest (red).

Two-sided ECDF Plot

The Two-sided ECDF plot (see Figure 7) shows Empirical Cumulative Distribution Functions (ECDF) for positive and negative values of residuals separately.

Let \(x_1, ..., x_n\) be a random sample from a cumulative distribution function \(F(t)\). The following definition comes from van der Vaart (2000).

Definition 4. The empirical cumulative distribution function is given by \[F_n(t) = \frac{1}{n} \sum_{i=1}^n \mathbb{1} \{ x_i \leq t\}.\] The empirical cumulative distribution function gives the fraction of observations that are less than or equal to \(t\). It is an estimator of the cumulative distribution function \(F(t)\).

On the positive side of the x-axis, there is the ECDF of the positive values of residuals. On the negative side, there is a transformation of the ECDF: \[F_{rev}(t) = 1 - F(t).\] Let \(n_N\) and \(n_P\) be the numbers of negative and positive values of residuals, respectively. The negative part of the plot is normalized by multiplying it by the ratio \(n_N / (n_N + n_P)\). Similarly, the positive part is normalized by multiplying it by the ratio \(n_P / (n_N + n_P)\). Due to the applied scale, the ends of the curves add up to \(100\%\) in total. The plot shows the distribution of residuals divided into groups with positive and negative values. It helps to identify the asymmetry of the residuals. Points represent individual error values, which makes it possible to identify ‘outliers’.
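The normalization can be expressed directly; a sketch of the scaled positive and negative parts (illustrative code, not the auditor implementation):

  r   <- residuals(lm(m2.price ~ ., data = DALEX::apartments))
  neg <- r[r < 0];  pos <- r[r >= 0]
  n_N <- length(neg);  n_P <- length(pos)

  # positive side: ECDF of positive residuals, scaled by n_P / (n_N + n_P)
  F_pos <- function(t) mean(pos <= t) * n_P / (n_N + n_P)
  # negative side: reversed ECDF 1 - F(t), scaled by n_N / (n_N + n_P)
  F_neg <- function(t) (1 - mean(neg <= t)) * n_N / (n_N + n_P)

  F_pos(max(pos)) + F_neg(min(neg) - 1)  # the two ends add up to 1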

The plot below is generated by the plotTwoSidedECDF function or by the plot function with parameter type = "TwoSidedECDF".

  plot(rf_res_fitted, lm_res_fitted, type = "TwoSidedECDF")
Figure 7: Two-sided ECDF plot. The plot shows that the majority of residuals for the random forest (red) are smaller than the residuals for the linear model (blue). However, the random forest also has a fraction of large residuals.

6 Conclusion and future work

In this article, we presented the auditor package and selected diagnostic scores and plots. We discussed the existing methods of model validation and proposed new visual approaches. We also specified three objectives of a model audit (see Section 1), proposed relevant verification tools, and demonstrated their usage. The Model Ranking Plot and the REC Curve enrich the information about model performance (Objective 1). The Residual Boxplot, Residual Density, and Two-sided ECDF plots expand the knowledge about the distribution of residuals (Objective 3). What is more, the latter two tools allow for the identification of outliers (Objective 2). Among these, the Model Ranking Plot and the Two-sided ECDF Plot are the two new plots proposed in this paper.

We implemented all the presented scores and plots in the auditor package for R. The included functions are based on the uniform grammar introduced in Figure 2. Documentation and examples are available at https://mi2datalab.github.io/auditor/. The stable version of the package is on CRAN; the development version is on GitHub (https://github.com/MI2DataLab/auditor). A more detailed description of the methodology is available in the extended version of this paper on arXiv: https://arxiv.org/abs/1809.07763 (Gosiewska and P. Biecek 2018).

There are many potential areas for future work that we would like to explore, including more extensions of model-specific diagnostics to model-agnostic methods and residual-based methods for investigating interactions. Another potential aim would be to develop methods for local audit based on the diagnostics of a model around a single observation or a group of observations.

7 Acknowledgements

We would like to acknowledge and thank Aleksandra Grudziąż and Mateusz Staniak for valuable discussions. Also, we wish to thank Dr. Rafael De Andrade Moral for his assistance and help related to the hnp package.

The work was supported by NCN Opus grant 2016/21/B/ST6/02176.


References

T. W. Anderson and D. A. Darling. Asymptotic theory of certain goodness of fit criteria based on stochastic processes. Ann. Math. Statist. 23 (2), 1952. URL https://doi.org/10.1214/aoms/1177729437.
F. J. Anscombe. Graphs in statistical analysis. The American Statistician 27 (1), 1973. URL https://doi.org/10.1080/00031305.1973.10478966.
A. C. Atkinson. Plots, Transformations, and Regression: An Introduction to Graphical Methods of Diagnostic Regression Analysis. Oxford Statistical Science Series. Clarendon Press, 1985. URL https://books.google.pl/books?id=oFjgnQEACAAJ.
A. Atkinson and M. Riani. Robust Diagnostic Regression Analysis. Springer Series in Statistics. Springer-Verlag, 2012. URL https://books.google.pl/books?id=sZ3SBwAAQBAJ.
J. Bi and K. P. Bennett. Regression error characteristic curves. In ICML, 2003.
P. Biecek. DALEX: Explainers for complex predictive models. ArXiv e-prints, 2018.
G. E. P. Box and D. R. Cox. An analysis of transformations. Journal of the Royal Statistical Society B, 1964.
L. Breiman. Statistical modeling: The two cultures (with comments and a rejoinder by the author). Statist. Sci. 16 (3), 2001. URL https://doi.org/10.1214/ss/1009213726.
T. S. Breusch and A. R. Pagan. A simple test for heteroscedasticity and random coefficient variation. Econometrica 47 (5), ISSN 0012-9682, 1979. URL http://www.jstor.org/stable/1911963.
T. Chen and C. Guestrin. XGBoost: A scalable tree boosting system. CoRR abs/1603.02754, 2016. URL http://arxiv.org/abs/1603.02754.
H. Cramer. On the composition of elementary errors: Second paper: Statistical applications. Scandinavian Actuarial Journal (1), 1928.
E. de Jonge and M. van der Loo. validatetools: Checking and Simplifying Validation Rule Sets. R package version 0.4.3, 2018. URL https://CRAN.R-project.org/package=validatetools.
J. J. Faraway. Linear Models with R. Chapman & Hall/CRC Texts in Statistical Science. Taylor & Francis, 2004. URL https://books.google.pl/books?id=fvenzpofkagC.
J. J. Faraway. Practical Regression and Anova Using R. University of Bath, 2002. URL https://books.google.pl/books?id=UjhBnwEACAAJ.
J. Fox and S. Weisberg. An R Companion to Applied Regression. Sage, Thousand Oaks, CA, 2nd edition, 2011. URL http://socserv.socsci.mcmaster.ca/jfox/Books/Companion.
M. Friendly. Corrgrams: Exploratory displays for correlation matrices. The American Statistician 56 (4), 2002.
A. Gałecki and T. Burzykowski. Linear Mixed-Effects Models Using R: A Step-by-Step Approach. Springer Texts in Statistics. Springer-Verlag, 2013. URL https://books.google.pl/books?id=rbk_AAAAQBAJ.
S. M. Goldfeld and R. E. Quandt. Some tests for homoscedasticity. Journal of the American Statistical Association 60 (310), 1965. URL https://doi.org/10.1080/01621459.1965.10480811.
A. Gosiewska and P. Biecek. auditor: An R package for model-agnostic visual validation and diagnostics. ArXiv e-prints, 2018.
J. Gross and U. Ligges. nortest: Tests for Normality. R package version 1.0-4, 2015. URL https://CRAN.R-project.org/package=nortest.
F. E. Harrell Jr. rms: Regression Modeling Strategies. R package version 5.1-2, 2018. URL https://CRAN.R-project.org/package=rms.
F. E. Harrell Jr. Regression Modeling Strategies. Springer-Verlag, Berlin, Heidelberg, 2006.
M. J. Harrison and B. P. M. McCabe. A test for heteroscedasticity based on ordinary least squares residuals. Journal of the American Statistical Association 74 (366), ISSN 0162-1459, 1979. URL http://www.jstor.org/stable/2286361.
A. C. Harvey and P. Collier. Testing for functional misspecification in regression analysis. Journal of Econometrics 6 (1): 103-119, ISSN 0304-4076, 1977. URL https://doi.org/10.1016/0304-4076(77)90057-4.
T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning. Springer Series in Statistics. Springer-Verlag, New York, NY, USA, 2001.
T. K. Ho. Random decision forests. In Proceedings of the Third International Conference on Document Analysis and Recognition (Volume 1), ICDAR ’95, Washington, DC, USA: IEEE Computer Society, 1995. URL http://dl.acm.org/citation.cfm?id=844379.844681.
A. Liaw and M. Wiener. Classification and regression by randomForest. R News 2 (3), 2002. URL https://CRAN.R-project.org/doc/Rnews/.
R. C. Littell, G. A. Milliken, W. W. Stroup, R. D. Wolfinger, and O. Schabenberger. SAS for Mixed Models, Second Edition. SAS Institute, 2007. URL https://books.google.pl/books?id=z9qv32OyEu4C.
S. Liu, X. Wang, M. Liu, and J. Zhu. Towards better analysis of machine learning models: A visual analytics perspective. Visual Informatics 1 (1): 48-56, ISSN 2468-502X, 2017. URL https://doi.org/10.1016/j.visinf.2017.01.006.
C. Molnar, B. Bischl, and G. Casalicchio. iml: An R package for interpretable machine learning. JOSS 3 (26): 786, 2018. URL https://doi.org/10.21105/joss.00786.
R. Moral, J. Hinde, and C. Demétrio. Half-normal plots and overdispersed models in R: The hnp package. Journal of Statistical Software 81 (10), ISSN 1548-7660, 2017. URL https://doi.org/10.18637/jss.v081.i10.
C. O’Neil. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Crown Publishing Group, New York, NY, USA, 2016. ISBN 9780553418811.
E. A. Peña and E. H. Slate. Global validation of linear model assumptions. Journal of the American Statistical Association 101 (473), PMID 20157621, 2006. URL https://doi.org/10.1198/016214505000000637.
A. H. Petersen and C. T. Ekstrom. dataMaid: A Suite of Checks for Identification of Potential Errors in a Data Frame as Part of the Data Screening Process. R package version 1.1.2, 2018. URL https://CRAN.R-project.org/package=dataMaid.
J. B. Ramsey. Tests for specification errors in classical linear least-squares regression analysis. Journal of the Royal Statistical Society B 31 (2), ISSN 0035-9246, 1969. URL http://www.jstor.org/stable/2984219.
D. Robinson. broom: Convert Statistical Analysis Objects into Tidy Data Frames. R package version 0.4.4, 2018. URL https://CRAN.R-project.org/package=broom.
K. Pearson. On the criterion that a given system of deviations from the probable in the case of a correlated system of variables is such that it can be reasonably supposed to have arisen from random sampling. Volume 50. Taylor & Francis, 1900. URL https://doi.org/10.1080/14786440009463897.
S. S. Shapiro and R. S. Francia. An approximate analysis of variance test for normality. Journal of the American Statistical Association 67, 1972.
S. Sheather. A Modern Approach to Regression with R. Springer Texts in Statistics. Springer-Verlag, 2009. URL https://books.google.pl/books?id=zS3Jiyxqr98C.
C. Sievert, C. Parmer, T. Hocking, S. Chamberlain, K. Ram, M. Corvellec, and P. Despouy. plotly: Create Interactive Web Graphics via ’plotly.js’. R package version 4.7.1, 2017. URL https://CRAN.R-project.org/package=plotly.
M. A. Stephens. EDF statistics for goodness of fit and some comparisons. Journal of the American Statistical Association 69 (347), ISSN 0162-1459, 1974. URL http://www.jstor.org/stable/2286009.
J. Swets. Measuring the accuracy of diagnostic systems. Science 240 (4857), ISSN 0036-8075, 1988. URL https://doi.org/10.1126/science.3287615.
Y. Tang, M. Horikoshi, and W. Li. ggfortify: Unified interface to visualize statistical results of popular R packages. The R Journal 8 (2), 2016. URL https://journal.r-project.org/archive/2016/RJ-2016-060/index.html.
Y. Tang. autoplotly: Automatic generation of interactive visualizations for popular statistical results. ArXiv e-prints, 2018.
J. W. Tukey. Exploratory Data Analysis. Addison-Wesley Series in Behavioral Science. Addison-Wesley Publishing Company, 1977. URL https://books.google.pl/books?id=UT9dAAAAIAAJ.
J. M. Utts. The rainbow test for lack of fit in regression. Communications in Statistics - Theory and Methods 11 (24), 1982. URL https://doi.org/10.1080/03610928208828423.
M. van der Loo. lumberjack: Track Changes in Data. R package version 0.2.0, 2017. URL https://CRAN.R-project.org/package=lumberjack.
A. W. van der Vaart. Asymptotic Statistics. Cambridge University Press, 2000. URL https://books.google.pl/books?id=UEuQEM5RjWgC.
W. N. Venables and B. D. Ripley. Modern Applied Statistics with S. Springer-Verlag, New York, 4th edition, 2002. URL http://www.stats.ox.ac.uk/pub/MASS4.
R. von Mises. Wahrscheinlichkeit, Statistik und Wahrheit. Number 3 in Schriften zur wissenschaftlichen Weltauffassung. Springer-Verlag, 1928. URL https://books.google.pl/books?id=W1IaAAAAIAAJ.
H. Wickham. ggplot2: Elegant Graphics for Data Analysis. Springer-Verlag, 2009. URL http://ggplot2.org.
H. Wickham. Tidy data. The Journal of Statistical Software 59, 2014. URL http://www.jstatsoft.org/v59/i10/.
C. J. Willmott, S. G. Ackleson, R. E. Davis, J. J. Feddema, K. M. Klink, D. R. Legates, J. O’Donnell, and C. M. Rowe. Statistics for the evaluation and comparison of models. Journal of Geophysical Research 90 (C5): 8995, 1985. URL https://doi.org/10.1029/jc090ic05p08995.
M. N. Wright and A. Ziegler. ranger: A fast implementation of random forests for high dimensional data in C++ and R. Journal of Statistical Software 77 (1), 2017. URL https://doi.org/10.18637/jss.v077.i01.
K. Wright. corrgram: Plot a Correlogram. R package version 1.13, 2018. URL https://CRAN.R-project.org/package=corrgram.
A. Zeileis and T. Hothorn. Diagnostic checking in regression relationships. R News 2 (3), 2002. URL https://CRAN.R-project.org/doc/Rnews/.
A. Zeileis, F. Leisch, K. Hornik, and C. Kleiber. strucchange: An R package for testing for structural change in linear regression models. Journal of Statistical Software 7 (2), ISSN 1548-7660, 2002. URL https://doi.org/10.18637/jss.v007.i02.


Reuse

Text and figures are licensed under Creative Commons Attribution CC BY 4.0. The figures that have been reused from other sources don't fall under this license and can be recognized by a note in their caption: "Figure from ...".

Citation

For attribution, please cite this work as

Gosiewska & Biecek, "auditor: an R Package for Model-Agnostic Visual Validation and Diagnostics", The R Journal, 2019

BibTeX citation

@article{RJ-2019-036,
  author = {Gosiewska, Alicja and Biecek, Przemysław},
  title = {auditor: an R Package for Model-Agnostic Visual Validation and Diagnostics},
  journal = {The R Journal},
  year = {2019},
  note = {https://rjournal.github.io/},
  volume = {11},
  issue = {2},
  issn = {2073-4859},
  pages = {85-98}
}