ROC curve analysis is a fundamental tool for evaluating the performance of a marker in a number of research areas, e.g., biomedicine, bioinformatics and engineering, and is frequently used for discriminating cases from controls. A number of analysis tools guide researchers through their analysis. Some of these tools are commercial and provide basic methods for ROC curve analysis, while others offer advanced analysis techniques through a command-based user interface, such as the R environment. The R environment includes comprehensive tools for ROC curve analysis; however, using a command-based interface can be challenging and time consuming when a quick evaluation is desired, especially for non-R users such as physicians. Hence, a quick, comprehensive, free and easy-to-use analysis tool is required. For this purpose, we developed a user-friendly web-tool based on the R language. This tool provides ROC statistics, graphical tools, optimal cutpoint calculation, comparison of several markers, and sample size estimation to support researchers in their decisions without writing R code. easyROC can be used from any device with an internet connection, independently of the operating system. The web interface of easyROC is constructed with the R package shiny. This tool is freely available through www.biosoft.hacettepe.edu.tr/easyROC.
The receiver operating characteristics (ROC) curve is a graphical
approach used to visualize and assess the performance of a binary
classifier system. This unique feature of ROC curve analysis makes it
one of the most extensively used methods in various fields of science.
It was originally developed during World War II to detect whether a
signal on the radar screen represented an object or a noise
(Egan 1975; Swets et al. 2000; Fan et al. 2006) and today it is widely used in
medicine, radiology, biometrics, bioinformatics and various applications
of machine learning and data mining research
(Fawcett 2006; Sonego et al. 2008). ROC curve analysis can be implemented
for several reasons: (i) to assess the overall performance of a
classifier using several performance measures, (ii) to compare the
performances of classifiers, and (iii) to determine the optimal cutpoint
for a given classifier, diagnostic test or marker/biomarker. For
simplicity of language, we will use the terms classifier and diagnostic
test throughout the manuscript. The performance of a classifier can be
summarized using the point estimations and confidence intervals of
several basic performance measures such as sensitivity, specificity or
combined measures of sensitivity and specificity such as likelihood
ratios, accuracy, area under the ROC curve (AUC), etc. A ROC curve is
basically a plot of a classifier’s true positive rates (TPR:
sensitivity) versus false positive rates (FPR: 1 − specificity).
There are a number of commercial (e.g., IBM SPSS, MedCalc, Stata, etc.) and open-source (R) software packages which are used to guide researchers through their ROC curve analysis. Some of these software packages provide basic features for ROC curve analysis while others, such as R, offer advanced features but also a command-based user interface. The R environment includes comprehensive tools for ROC curve analysis, such as ROCR (Sing et al. 2005), pROC (Robin et al. 2011), ROC (Carey and Redestig 2015) and OptimalCutpoints (Lopez-Raton et al. 2014).
All of the R packages mentioned above perform ROC curve analysis using
the related package functions. Although these packages are comprehensive
and flexible, they require a good programming knowledge of the R
language. However, working with a command-based interface might be
challenging and time consuming when a quick evaluation is desired
especially for non-R users, such as physicians and other health care
professionals. Fortunately, the R package
shiny (Chang et al. 2015) allows
users to create interactive web-tools with a nicely designed,
user-friendly and easy-to-use user interface. In this context, we
developed a web-tool, easyROC, for ROC curve analysis. The user
interface of easyROC is constructed via shiny and HTML codes. easyROC
combines several R packages for ROC curve analysis. This tool has three
main parts including ROC statistics, cutpoint calculations and sample
size estimation. Detailed information about easyROC and the related
methods together with mathematical background are given in
Section 2. easyROC is freely available at
http://www.biosoft.hacettepe.edu.tr/easyROC, and all the source code
is available on GitHub (http://www.github.com/dncR/easyROC).
Let us consider the binary classification problem where the true class of each subject is either positive (diseased) or negative (non-diseased), and a continuous classifier $T$ is dichotomized at a cutpoint $c$. The estimated class label is

$$\hat{D}(c) = \begin{cases} \text{positive}, & T \geq c, \\ \text{negative}, & T < c. \end{cases} \quad (1)$$
The parametric ROC curve is plotted using the FPR ($1 -$ specificity) and TPR (sensitivity) pairs obtained over all possible cutpoints, where

$$FPR(c) = P(T \geq c \mid D = \text{negative}), \qquad TPR(c) = P(T \geq c \mid D = \text{positive}). \quad (2)$$
When the distribution of the classifier is Normal, the parametric ROC
curve is fitted using binormal ROC properties. Suppose $T \sim N(\mu_1, \sigma_1^2)$ for cases and $T \sim N(\mu_0, \sigma_0^2)$ for controls. The binormal ROC curve is then

$$ROC(t) = \Phi\left(a + b\,\Phi^{-1}(t)\right), \quad 0 \leq t \leq 1, \quad (3)$$

where $a = (\mu_1 - \mu_0)/\sigma_1$, $b = \sigma_0/\sigma_1$, $t$ is the false positive rate and $\Phi(\cdot)$ is the cumulative distribution function of the standard normal distribution. The corresponding area under the binormal ROC curve is

$$AUC = \Phi\left(\frac{a}{\sqrt{1 + b^2}}\right). \quad (4)$$
Fitting the ROC curve by using Equation (3) has two major drawbacks: (i) incorrect ROC curves may arise when the underlying distribution is not normal, (ii) ROC lines are improper when within class variations are not similar, i.e., heteroscedasticity. An example of improper ROC curves is given in Figure 1. To overcome these problems, one may nonparametrically fit the ROC curve without considering distributional assumptions or use parametric/semiparametric alternatives to the binormal model (Gönen and Heller 2010).
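For illustration, the binormal ROC curve in Equation (3) can be fitted directly in base R from the class-specific sample means and standard deviations. The following sketch is not part of easyROC; the simulated marker values and object names are our own.

```r
## Binormal ROC curve, Equation (3): ROC(t) = pnorm(a + b * qnorm(t))
set.seed(1)
cases    <- rnorm(50, mean = 2, sd = 1)   # simulated classifier values for cases
controls <- rnorm(50, mean = 0, sd = 1)   # simulated classifier values for controls

a <- (mean(cases) - mean(controls)) / sd(cases)
b <- sd(controls) / sd(cases)

t   <- seq(0, 1, length.out = 200)        # grid of false positive rates
roc <- pnorm(a + b * qnorm(t))            # binormal TPR at each FPR

plot(t, roc, type = "l", xlab = "FPR (1 - specificity)",
     ylab = "TPR (sensitivity)")
abline(0, 1, lty = 2)                     # chance line
pnorm(a / sqrt(1 + b^2))                  # binormal AUC, Equation (4)
```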
Consider the estimated class labels in Equation (1). The FPR and TPR given in Equation (2) are estimated empirically as given in Equation (5),

$$\widehat{FPR}(c) = \frac{1}{n_0} \sum_{j=1}^{n_0} I(T_{0j} \geq c), \qquad \widehat{TPR}(c) = \frac{1}{n_1} \sum_{i=1}^{n_1} I(T_{1i} \geq c), \quad (5)$$

where $T_{1i}$ and $T_{0j}$ denote the classifier values of cases and controls, $n_1$ and $n_0$ are the corresponding sample sizes, and $I(\cdot)$ is the indicator function. The empirical ROC curve is plotted by joining the $(\widehat{FPR}(c), \widehat{TPR}(c))$ pairs computed at each observed value of the classifier taken as a cutpoint.
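The empirical estimates in Equation (5) are equally easy to compute by hand. The helper function below is a minimal sketch (our own naming, not an easyROC function) that evaluates TPR and FPR at every observed cutpoint:

```r
## Empirical ROC coordinates, Equation (5)
empirical_roc <- function(marker, status) {
  # status: 1 = case, 0 = control; higher marker values indicate higher risk
  cutpoints <- sort(unique(marker), decreasing = TRUE)
  tpr <- sapply(cutpoints, function(c) mean(marker[status == 1] >= c))
  fpr <- sapply(cutpoints, function(c) mean(marker[status == 0] >= c))
  data.frame(cutpoint = cutpoints, FPR = fpr, TPR = tpr)
}

set.seed(1)
dat <- data.frame(marker = c(rnorm(20, 1), rnorm(20, 0)),
                  status = rep(c(1, 0), each = 20))
emp <- empirical_roc(dat$marker, dat$status)
plot(emp$FPR, emp$TPR, type = "s", xlab = "FPR", ylab = "TPR")
```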
The predicted and actual classes, i.e., gold standard test results, can
be shown with a 2 × 2 cross table:

| Predicted labels | Actual positive | Actual negative | Total |
|---|---|---|---|
| Positive | TP | FP | TP + FP |
| Negative | FN | TN | FN + TN |
| Total | TP + FN | FP + TN | n |

TP: true positive, FP: false positive, TN: true negative, FN: false negative. Basic diagnostic measures, including the negative predictive value (NPV), positive predictive value (PPV), positive likelihood ratio (PLR) and negative likelihood ratio (NLR), are derived from this table.
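The diagnostic measures listed above follow directly from the counts in this table. As a quick sketch with hypothetical counts:

```r
## Diagnostic measures from a 2 x 2 table (hypothetical counts)
TP <- 18; FP <- 4; FN <- 2; TN <- 16

Se  <- TP / (TP + FN)            # sensitivity (true positive rate)
Sp  <- TN / (TN + FP)            # specificity (true negative rate)
PPV <- TP / (TP + FP)            # positive predictive value
NPV <- TN / (TN + FN)            # negative predictive value
PLR <- Se / (1 - Sp)             # positive likelihood ratio
NLR <- (1 - Se) / Sp             # negative likelihood ratio
ACC <- (TP + TN) / (TP + FP + FN + TN)   # accuracy
c(Se = Se, Sp = Sp, PPV = PPV, NPV = NPV, PLR = PLR, NLR = NLR, ACC = ACC)
```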
Although researchers are usually interested in the overall diagnostic
performance of a classifier, it is sometimes useful to focus on a
portion of the ROC curve to compute the partial AUCs (pAUC). pAUC is an
extension of the AUC measure which considers the trapezoids within a
given interval of sensitivity and/or specificity. Let us consider the
pAUC where specificity (or sensitivity) lies within a prespecified interval $(t_1, t_2)$.
As the interval $(t_1, t_2)$ widens towards $(0, 1)$, the pAUC approaches the overall AUC.
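In R, the pROC package (which easyROC builds on for pAUCs) computes partial areas over a chosen specificity or sensitivity range; the simulated data below are our own illustration:

```r
## Partial AUC over a specificity interval with pROC
library(pROC)

set.seed(1)
status <- rep(c(1, 0), each = 50)
marker <- c(rnorm(50, 1.2), rnorm(50, 0))

roc_obj <- roc(status, marker, direction = "<")   # controls have lower values

## pAUC where specificity lies within (0.8, 1); 'correct' standardizes the value
auc(roc_obj, partial.auc = c(1, 0.8), partial.auc.focus = "specificity")
auc(roc_obj, partial.auc = c(1, 0.8), partial.auc.focus = "specificity",
    partial.auc.correct = TRUE)
```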
Identification of the optimal cutpoint is an important task to avoid incorrect conclusions. Various methods are available in the literature to determine the optimal cutpoint. Most of these methods are based on the sensitivity and specificity measures, but other methods based on cost-benefit, prevalence, predictive values and diagnostic likelihood ratios are also available. Two popular methods, for example, are the Youden index and the minimization of the distance from the point on the curve to the top-left corner of the ROC plot, i.e., the point indicating perfect discrimination.
Table 1 gives the list of optimal cutpoint methods we consider in easyROC. For detailed information and mathematical background, see Lopez-Raton et al. (2014).
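For example, the Youden index cutpoint can be obtained from the OptimalCutpoints package, which easyROC uses internally for cutpoint determination; the simulated data frame below is our own:

```r
## Optimal cutpoint by the Youden index with OptimalCutpoints
library(OptimalCutpoints)

set.seed(1)
dat <- data.frame(marker = c(rnorm(20, 1), rnorm(20, 0)),
                  status = rep(c(1, 0), each = 20))

oc <- optimal.cutpoints(X = "marker", status = "status", tag.healthy = 0,
                        methods = "Youden", data = dat)
summary(oc)   # optimal cutpoint with sensitivity and specificity at that point
```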
A common subject of interest in ROC analysis is to compare the
performances of several classifiers in order to select the one that best
discriminates cases from controls. For a classifier with random chance
discrimination ability, the equality $AUC = 0.5$ holds; hence, the discrimination ability of a single classifier is assessed by testing the hypotheses $H_0: AUC = 0.5$ versus $H_1: AUC \neq 0.5$.
Under large sample theory, the significance of the AUC is tested using the Wald test statistic given in Equation (9),

$$z = \frac{\widehat{AUC} - 0.5}{\sqrt{\widehat{Var}(\widehat{AUC})}}, \quad (9)$$

which is compared with the standard normal distribution.
When the parametric approach is used, the variance of the AUC is estimated using Equation (10) (McClish 1989; Zhou et al. 2002), which applies the delta method to the binormal AUC in Equation (4): the variance is a function of the binormal parameters $a$ and $b$ and of the estimated variances and covariance of $\hat{a}$ and $\hat{b}$, which are obtained from the class-specific sample means and variances. The estimated values of $\hat{a}$ and $\hat{b}$ are then plugged in to obtain the estimated variance of the AUC.
Mann-Whitney version of rank-sum test:
Hanley and McNeil (1982) propose the variance estimation given in Equation (12). This method estimates the variance using an approximation based on the exponential distribution as

$$\widehat{Var}(\widehat{AUC}) = \frac{\widehat{AUC}(1-\widehat{AUC}) + (n_1 - 1)(Q_1 - \widehat{AUC}^2) + (n_0 - 1)(Q_2 - \widehat{AUC}^2)}{n_1 n_0}, \quad (12)$$

where $Q_1 = \widehat{AUC}/(2 - \widehat{AUC})$, $Q_2 = 2\widehat{AUC}^2/(1 + \widehat{AUC})$, and $n_1$ and $n_0$ are the numbers of cases and controls, respectively.
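Equation (12) is simple enough to compute directly; the small helper below is our own sketch, not an easyROC function:

```r
## Hanley & McNeil (1982) variance of the AUC, Equation (12)
var_auc_hm <- function(auc, n1, n0) {
  # n1: number of cases, n0: number of controls
  q1 <- auc / (2 - auc)
  q2 <- 2 * auc^2 / (1 + auc)
  (auc * (1 - auc) + (n1 - 1) * (q1 - auc^2) + (n0 - 1) * (q2 - auc^2)) /
    (n1 * n0)
}

var_auc_hm(auc = 0.85, n1 = 40, n0 = 60)
```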
DeLong et al. (1988)’s estimate:
Since the exponential distribution approximation in Equation
(12) gives biased variance estimates, DeLong et al. (1988)
suggest an alternative method which is free from distributional
assumptions. Define the components

$$V_{10}(T_{1i}) = \frac{1}{n_0}\sum_{j=1}^{n_0}\psi(T_{1i}, T_{0j}), \qquad V_{01}(T_{0j}) = \frac{1}{n_1}\sum_{i=1}^{n_1}\psi(T_{1i}, T_{0j}), \quad (13)$$

where $\psi(T_{1i}, T_{0j}) = 1$ if $T_{1i} > T_{0j}$, $1/2$ if $T_{1i} = T_{0j}$ and $0$ if $T_{1i} < T_{0j}$. Using Equation (13), the variance of the AUC is estimated as

$$\widehat{Var}(\widehat{AUC}) = \frac{1}{n_1}S_{10}^2 + \frac{1}{n_0}S_{01}^2, \quad (14)$$

where $S_{10}^2$ and $S_{01}^2$ are the sample variances of the components $V_{10}$ and $V_{01}$, respectively.
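The DeLong components and variance in Equations (13) and (14) can be computed with a few lines of R; the function below is a minimal sketch with our own naming:

```r
## DeLong et al. (1988) variance of the AUC, Equations (13)-(14)
var_auc_delong <- function(cases, controls) {
  psi <- outer(cases, controls,
               FUN = function(x, y) (x > y) + 0.5 * (x == y))
  v10 <- rowMeans(psi)    # one component per case
  v01 <- colMeans(psi)    # one component per control
  list(auc = mean(psi),
       var = var(v10) / length(cases) + var(v01) / length(controls))
}

set.seed(1)
var_auc_delong(cases = rnorm(40, 1), controls = rnorm(60, 0))
```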
Normal approximation of binomial proportion:
Another alternative for variance estimation is to use the binomial approximation under large sample theory, as given in Equation (16). For small samples, this method may give biased estimates.
The estimated variance derived from one of the methods described above is used to construct confidence intervals for the AUC. A common method is the large sample approximation

$$\widehat{AUC} \pm z_{1-\alpha/2}\sqrt{\widehat{Var}(\widehat{AUC})}. \quad (17)$$
When the area under the curve is close to 1 or the sample size is relatively small, the large sample approximation in Equation (17) produces improper confidence intervals, since the upper limit may exceed 1. To solve this problem, Agresti and Coull (1998) proposed the score confidence interval, which guarantees that the upper limit is less than or equal to 1. Another alternative is to construct the binomial exact confidence interval given in Equation (18) using the relationship between the binomial and F-distributions (Morisette and Khorram 1998), where the confidence limits are obtained from the appropriate quantiles of the F-distribution.
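As an illustration, the Wald interval in Equation (17) can be compared with the DeLong-based interval returned by pROC; the simulated data are our own and the pROC calls assume a recent package version:

```r
## Large-sample (Wald) confidence interval for the AUC, Equation (17)
library(pROC)

set.seed(1)
status <- rep(c(1, 0), each = 50)
marker <- c(rnorm(50, 1), rnorm(50, 0))

roc_obj <- roc(status, marker, direction = "<")
auc_hat <- as.numeric(auc(roc_obj))
se_hat  <- sqrt(var(roc_obj, method = "delong"))   # DeLong variance estimate

auc_hat + c(-1, 1) * qnorm(0.975) * se_hat         # 95% Wald interval
ci.auc(roc_obj, method = "delong")                 # interval computed by pROC
```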
In most studies, determining the required sample size is an important step for the research to be able to detect significant results. Sample size determination is required for both constructing the confidence interval of the unknown population parameter and testing a research hypothesis. Obuchowski (1998) reviewed sample size determination for several study designs. In this paper, we cover the sample size determination for three types of studies based on AUCs. In addition, the following sample size calculations can be extended to other performance measures such as sensitivity, specificity, etc.
The variance estimates of AUCs can be obtained using one of the Equations (12), (14) and (16). While Equation (12) is a good approximation for a variety of underlying distributions, the variance will be underestimated if the test results are in a discrete rating format. To overcome this problem, Obuchowski (1998) and Obuchowski et al. (2004) suggest an alternative variance estimation method for rating data, the variance function given in Equation (19), which is based on an underlying binormal distribution and depends only on the anticipated AUC and the allocation ratio of controls to cases. In this section, we focus on sample size calculation for discrete scale data. However, the same formulas are valid for continuous scale diagnostic tests, since the only difference lies in estimating the variance of the diagnostic test accuracy.
Hypothesis test to determine the AUC of a single classifier:
In most of the studies with a single classifier, the aim of the
study is to determine whether the diagnostic test performs well for
discriminating diseased patients from controls. Consider the
hypotheses

$$H_0: AUC = 0.5 \quad \text{versus} \quad H_1: AUC > 0.5.$$

The required number of cases is obtained as

$$n_1 = \frac{\left[z_{1-\alpha}\sqrt{V_0(\widehat{AUC})} + z_{1-\beta}\sqrt{V_a(\widehat{AUC})}\right]^2}{(AUC_a - 0.5)^2},$$

where $V_0(\cdot)$ and $V_a(\cdot)$ are the variance functions under the null and alternative hypotheses, $AUC_a$ is the anticipated AUC of the classifier, $\alpha$ is the type I error rate and $1-\beta$ is the desired power. The number of controls is then obtained from the allocation ratio.
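The calculation can be sketched in a few lines of R. Note that easyROC's exact variance function is not reproduced here; as a stand-in we use the large-sample Hanley and McNeil (1982) approximation, so the resulting numbers should be treated as rough, and the function name and defaults are our own:

```r
## Approximate number of cases to show AUC > 0.5 for a single classifier
## (uses the Hanley-McNeil large-sample variance as the variance function)
n_cases_single_auc <- function(auc_alt, alpha = 0.05, power = 0.80, ratio = 1) {
  vf <- function(auc) {                      # n1 * Var(AUC); ratio = controls/cases
    q1 <- auc / (2 - auc)
    q2 <- 2 * auc^2 / (1 + auc)
    (q1 - auc^2) / ratio + (q2 - auc^2)
  }
  num <- (qnorm(1 - alpha) * sqrt(vf(0.5)) +
          qnorm(power)     * sqrt(vf(auc_alt)))^2
  ceiling(num / (auc_alt - 0.5)^2)
}

n_cases_single_auc(auc_alt = 0.75)   # cases required; controls = ratio * cases
```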
Comparing the AUCs of two classifiers:
When the aim of a study is to compare two classifiers, one may
consider the hypotheses

$$H_0: AUC_1 = AUC_2 \quad \text{versus} \quad H_1: AUC_1 \neq AUC_2.$$

The required number of cases is obtained analogously to the single-classifier case by replacing the denominator with the squared difference $(AUC_1 - AUC_2)^2$ and using the variance of the difference, $V(\widehat{AUC}_1) + V(\widehat{AUC}_2) - 2\,Cov(\widehat{AUC}_1, \widehat{AUC}_2)$, in place of the variance functions.
The total sample size is calculated using the allocation ratio. When two classifiers are performed on the same subjects, the design will be paired yielding the covariance term to be a nonzero (usually positive) quantity. However, the covariance term will be zero (i.e., independent classifiers) if each test is performed on different subjects. Detailed information on the calculation of the covariance term can be found in Zhou et al. (2002).
Non-inferiority of a new classifier to a standard one:
In addition to comparing two classifiers, some studies are designed
to explore the performance of a new classifier to that of a standard
one. The new classifier should perform as well as but not
necessarily better than the standard test
(Obuchowski et al. 2004). The hypotheses are

$$H_0: AUC_{new} \leq AUC_{std} - \Delta \quad \text{versus} \quad H_1: AUC_{new} > AUC_{std} - \Delta,$$

where $\Delta$ is the non-inferiority margin, i.e., the largest difference between the AUCs of the standard and new classifiers that is still considered acceptable.
ROC curve analysis is one of the standard procedures included in most statistical analysis tools, such as IBM SPSS, Stata, MedCalc and R, and each tool offers different features for it. Among commercial packages, IBM SPSS, one of the most widely used, plots the ROC curve and computes some basic statistics such as the AUC together with its standard error, confidence interval and statistical significance. However, it does not provide any method for sample size calculation or cutpoint determination. Stata offers a variety of calculations for ROC curve analysis, including partial AUC, multiple comparisons of ROC curves, optimal cutpoint determination using the Youden index and several performance measures. Another commercial alternative is MedCalc, which has comprehensive features compared with most other commercial packages and is developed especially for biomedical research. MedCalc provides sample size estimation for a single diagnostic test, but it does not have an option for pAUC calculation.
Unlike commercial software packages, R is open source and free, and it covers the features of the commercial packages and more through several packages such as ROC, ROCR, pROC and OptimalCutpoints. ROC is an R/Bioconductor package which plots the ROC curve and calculates the AUC; it also calculates pAUCs based on false positive rates. This package was originally developed for ROC analysis with DNA microarrays. ROCR is a comprehensive R package providing over 25 different performance measures (based on package version 1.0-7) and allows users to create two-dimensional performance curves. Although ROCR is one of the most comprehensive packages for assessing performance measures, it provides limited options for selecting the optimal cutpoint; one may use any of the two-dimensional performance graphs to determine the optimal cutpoint graphically. It computes the AUC and its confidence interval; however, it does not provide a statistical test for performance measures.
pROC, on the other hand, offers more comprehensive and flexible features than its free and commercial counterparts. It performs statistical tests for the comparison of ROC curves using DeLong et al. (1988), Venkatraman and Begg (1996) and Venkatraman (2000) for AUC, and Hanley and McNeil (1983) and Pepe et al. (2009) for both AUC and pAUC. It also calculates the confidence intervals for the sensitivity, specificity, ROC curves, pAUC, and smoothed ROC curves. The confidence intervals are computed using DeLong et al. (1988)’s method for AUCs and using bootstrap for pAUCs, sensitivity and specificity at given threshold(s). Bootstrap confidence intervals and pAUC regions are shown in the ROC curve plot. Several diagnostic measures, such as sensitivity, specificity, negative and positive predictive values, are computed for a given threshold. Like ROCR, pROC also offers limited features for detecting the optimal cutpoint. Two methods, i.e., Youden index and closest point to the top-left corner, are available to find the optimal cutpoint. In addition, pROC is an alternative among the ROC packages on CRAN to find the required sample size for a single diagnostic test or the comparison of two diagnostic tests. Two versions of pROC are available: (i) for the R programming language and (ii) with a graphical user interface for the S-PLUS statistical software package.
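For instance, a paired DeLong comparison of two markers measured on the same subjects takes only a few lines with pROC; the simulated markers below are our own illustration:

```r
## Comparing the AUCs of two classifiers measured on the same subjects
library(pROC)

set.seed(1)
status  <- rep(c(1, 0), each = 50)
marker1 <- c(rnorm(50, 1.0), rnorm(50, 0))
marker2 <- c(rnorm(50, 0.5), rnorm(50, 0))

roc1 <- roc(status, marker1, direction = "<")
roc2 <- roc(status, marker2, direction = "<")

roc.test(roc1, roc2, method = "delong", paired = TRUE)   # DeLong test for paired AUCs
```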
There are several packages providing optimal cutpoint calculations
through R. OptimalCutpoints is a sophisticated R package specifically
developed to determine the optimal cutpoint of a test or biomarker
(Lopez-Raton et al. 2014). It includes 34 different cutpoint calculation
methods based on sensitivity/specificity measures, cost-benefit
analysis, predictive values, diagnostic likelihood ratios, prevalences
and the minimum $p$-value criterion.
Another R package worth mentioning is plotROC (Sachs 2016) which is available on CRAN and also for shiny platforms. plotROC is a flexible and sophisticated R package which can be used to create nice-looking and interactive ROC graphs. Unlike the packages described above, plotROC has a web-based user interface which is very useful for non-R users. Researchers can use its web service to create ROC graphics and download the figures to their local computer. However, it does not provide any statistical tests or sample size calculations.
Feature | IBM SPSS | Stata | MedCalc | ROC | ROCR | pROC | easyROC |
---|---|---|---|---|---|---|---|
Plots | Yes | Yes | Yes* | Yes | Yes* | Yes* | Yes* |
Conf. intervals | Yes | Yes* | Yes | Yes | Yes | Yes* | Yes* |
pAUC | No | Yes | No | Yes | Yes | Yes* | Yes* |
Statistical tests | No | Yes | No | Yes | Yes | Yes* | Yes* |
Diagnostic measures | No | Yes | Yes | No | Yes* | Yes | Yes |
Multiple comp. | No | Yes | Yes* | No | No | Yes* | Yes |
Cutpoints | No | Yes | Yes | No | No | Yes | Yes* |
Sample size | No | No | Yes | No | No | Yes | Yes* |
Free license | No | No | No | Yes* | Yes* | Yes* | Yes* |
Open source | No | No | No | Yes* | Yes* | Yes* | Yes* |
Web-tool access | No | No | No | No | No | No | Yes* |
User interface | Yes | Yes* | Yes* | No | No | Yes* | Yes* |
* Comprehensive implementations of the corresponding feature.
easyROC aims to extend the features of several ROC packages in R and allows researchers to conduct their ROC curve analysis through a single, easy-to-use interface without writing any R code. This tool is a web-based application created via shiny and HTML programming. easyROC makes use of the R packages plyr (Wickham 2011), pROC and OptimalCutpoints for conducting ROC analysis; plyr is used for manipulating data, while pROC is used for estimation and hypothesis testing of pAUCs. easyROC offers comprehensive options for ROC curve analysis, several of which are not available (or only partially available) in other tools. The ROC curve can be estimated using parametric or nonparametric approaches. easyROC offers four different methods for calculating the standard error and confidence interval of the AUC. Researchers can calculate pAUCs based on sensitivity or specificity, if necessary. One may perform pairwise comparisons to find the classifiers which have similar or different discrimination ability; however, pairwise comparisons should be carried out carefully, since the type I error increases with the number of comparisons. easyROC therefore offers multiple test corrections to keep the type I error at a given level: multiple comparisons of diagnostic tests can be adjusted using either the Bonferroni or false discovery rate correction. Furthermore, the optimal cutpoints are determined using the methods from OptimalCutpoints, and the corresponding measures at a given cutpoint, including sensitivity, specificity, positive and negative predictive values, and positive and negative likelihood ratios, are also returned. One can also determine the required sample size for ROC curve analysis for three different study designs. All these features are accessible through a graphical user interface, which makes the analysis process easier for all users. The comparison with other tools is given in Table 2 and the features of each module are given in Table 3.
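To give a flavour of the kind of pipeline easyROC wraps behind its interface, the sketch below runs a per-marker ROC analysis with pROC, all pairwise DeLong comparisons and a false discovery rate correction. The function and column names are our own and do not correspond to easyROC internals:

```r
## Per-marker ROC analysis with pairwise comparisons and FDR correction
library(pROC)

analyse_markers <- function(data, markers, status) {
  rocs <- lapply(markers, function(m)
    roc(data[[status]], data[[m]], direction = ">"))  # lower values = higher risk
  names(rocs) <- markers

  aucs  <- sapply(rocs, function(r) as.numeric(auc(r)))
  pairs <- combn(markers, 2, simplify = FALSE)
  pvals <- sapply(pairs, function(p)
    roc.test(rocs[[p[1]]], rocs[[p[2]]], method = "delong", paired = TRUE)$p.value)

  list(auc = aucs,
       comparisons = data.frame(
         pair  = sapply(pairs, paste, collapse = " vs "),
         p     = pvals,
         p_adj = p.adjust(pvals, method = "BH")))      # false discovery rate
}

## e.g., analyse_markers(nafld, c("mir197", "mir146b", "mir181d", "mir99a"), "Grup")
```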
| Modules (Tab panels) | Features |
|---|---|
| ROC curve | Parametric and nonparametric ROC curves; AUC and pAUC estimation; standard errors, confidence intervals and significance tests; multiple comparisons with Bonferroni or false discovery rate correction |
| Cutpoints | Optimal cutpoint determination via the methods of OptimalCutpoints; sensitivity, specificity, predictive values and likelihood ratios at the selected cutpoint |
| Sample size | Sample size estimation for a single classifier, for the comparison of two classifiers, and for non-inferiority designs |
To illustrate our application, we used the non-alcoholic fatty liver disease (NAFLD) dataset of Celikbilek et al. (2014). This study was designed to identify non-invasive miRNA biomarkers of NAFLD. The authors obtained serum samples from 20 healthy and 20 NAFLD subjects and quantified the expression levels of eight miRNAs using quantitative real-time PCR (qPCR) technology. After the necessary statistical analyses, the authors reported that miR-197, miR-146b, miR-181d and miR-99a may be potential biomarkers for identifying NAFLD. The normalized expression values of these miRNAs and the class information (the column named “Group”, where 0 refers to controls and 1 refers to cases) of each observation are given in Supplementary 5. This file can be used directly as input to the easyROC web-tool, and users can arrange their own data based on this file. Two example datasets, Mayo and PBC (Murtaugh et al. 1994), are also available in the web-tool for users to practice with the application. In our example, the aim is to investigate the discriminative performance of each miRNA, to compare the miRNAs with each other, and to identify the optimal cutpoint of each miRNA for identifying NAFLD.
The data are uploaded to the easyROC interface using the Data upload tab (Figure 2). easyROC accepts a delimited text file with variable names in the first row. The status variable is also set by the same tab panel. easyROC automatically detects the variable names and exports them into related fields. When data are correctly uploaded, researchers may proceed with ROC curve analysis, cutpoint estimations or sample size calculations. The area under the curve, confidence intervals and significance tests for AUC, multiple comparisons (if multiple markers are selected) and pAUCs are calculated with the ROC curve tab (Figures 3 and 4). The ROC curve is estimated using the nonparametric approach. The advanced option allows researchers to select a method for standard error estimation and confidence intervals. easyROC selects the DeLong et al. (1988) method by default.
Here, we select mir197, mir146b, mir181d and mir99a miRNAs to assess
their performances and to compare them with each other in identifying
NAFLD. Since all of these miRNAs are underexpressed in
the NAFLD group, lower values will indicate higher risk and therefore we
should uncheck the “Higher values indicate higher risks” box. Using
DeLong et al. (1988) standard error estimation, we obtained the ROC curve and the
corresponding AUC estimate for each miRNA biomarker.
Finding a suitable cutpoint is one of the aims of ROC curve analysis. We
made use of the OptimalCutpoints package from R (Lopez-Raton et al. 2014),
which has 34 different methods, to calculate cutpoints for each marker.
An optimal cutpoint can be computed via the Cut point tab by selecting
a marker and a method. Then, the application will calculate an optimal
cutpoint and performance measures such as sensitivity, specificity,
positive and negative predictive values, and positive and negative
likelihood ratios based on the corresponding cutpoint value. The “ROC01”
method, for example, determines the optimal cutpoint as the point on the ROC curve closest to the top-left corner, i.e., the point (0, 1) of perfect discrimination.
Since ROC curve analysis is one of the principal statistical analysis methods, it is used across a wide range of the scientific community, and both commercial and free software tools are available for performing it. Generally, easy-to-use and nicely designed interfaces are offered by commercial software packages, whereas flexible and comprehensive tools are available in free, open-access, code-based software environments such as R. The first novelty of our tool is that it gives users free and open-source software with an easy-to-use interface; in other words, it combines the power of an open-source and free language with a nicely designed and easily accessible interface. This tool also offers more comprehensive features and a wider variety of implementations for ROC curve analysis than its commercial and free counterparts, which is another novelty of this application. Unlike general-purpose commercial software packages such as IBM SPSS, Stata and MedCalc, it is constructed specifically for ROC curve analysis.
This web-based application is intended for research purposes only, not for clinical or commercial use. Since it is a non-profit service to the scientific community, it comes with no warranty and no data security. However, since this web server uses the R package shiny, each user performs his/her analyses in a new R session. After uploading data, the application only saves responses within its R session and prints the results instantly. After a user has quit the application, the corresponding R session will be closed and any uploaded data, responses or outputs will not be saved locally or remotely.
This tool is freely available through http://www.biosoft.hacettepe.edu.tr/easyROC/ and all the source code is available at http://www.github.com/dncR/easyROC under GPL version 3. It will be updated regularly as the R packages it depends on, including shiny and OptimalCutpoints, are updated, and new features will be added as they are developed.
Method | Description |
---|---|
Youden | The Youden index identifies the cutpoint that maximizes the sum of sensitivity and specificity, i.e., $Se + Sp - 1$. |
CB | CB is a measure based on the cost-benefit method and is calculated from the slope of the ROC curve. |
MinValueSe | For a given minimum value of sensitivity, MinValueSe identifies the optimal cutpoint as the one that maximizes specificity while satisfying this condition. |
ValueSe | For a given particular value of sensitivity, ValueSe identifies the cutpoint whose sensitivity is closest to that value. |
MinValueSpSe | For given minimum values of specificity and sensitivity, MinValueSpSe identifies the optimal cutpoint among those satisfying both conditions. |
MaxSe, MaxSp | MaxSe and MaxSp are two measures based on the maximization of sensitivity and specificity, respectively. |
MaxSpSe | MaxSpSe is a measure based on the simultaneous maximization of both sensitivity and specificity. |
MaxProdSpSe | MaxProdSpSe is a measure based on the maximization of the product of sensitivity and specificity. |
ROC01 | ROC01 identifies the optimal cutpoint that is closest to the upper-left corner (0, 1) of the ROC plot, i.e., the point of perfect discrimination. |
SpEqualSe | SpEqualSe is a measure based on the minimization of the absolute difference between sensitivity and specificity. |
MaxEfficiency | MaxEfficiency is a measure based on the minimization of the misclassification error, i.e., the maximization of the overall accuracy. |
Minimax | Minimax is a measure based on the minimization of the most frequent error, i.e., it minimizes the larger of the false positive and false negative proportions. |
MaxDOR | MaxDOR is a measure based on the maximization of the diagnostic odds ratio, calculated as $(Se \times Sp)/[(1 - Se)(1 - Sp)]$. |
MinValuePPV | For a given minimum value of the positive predictive value, MinValuePPV identifies the optimal cutpoint among those satisfying this condition. |
ValuePPV | For a given particular value of the positive predictive value, ValuePPV identifies the cutpoint whose PPV is closest to that value. |
MinValueNPVPPV | For given minimum values of the predictive values, MinValueNPVPPV identifies the optimal cutpoint among those satisfying both conditions. |
PROC01 | PROC01 identifies the optimal cutpoint that is closest to the upper-left corner of the curve based on the predictive values. |
NPVEqualPPV | NPVEqualPPV is a measure based on the minimization of the absolute difference between NPV and PPV. |
MaxNPVPPV | MaxNPVPPV is a measure based on the simultaneous maximization of both NPV and PPV. |
MaxSumNPVPPV | MaxSumNPVPPV is a measure based on the maximization of the sum of NPV and PPV. |
MaxProdNPVPPV | MaxProdNPVPPV is a measure based on the maximization of the product of NPV and PPV. |
ValueDLR.Positive, ValueDLR.Negative | These two measures are based on setting particular target values for the positive and negative diagnostic likelihood ratios, respectively. |
MinPvalue | MinPvalue is a measure based on the minimization of the $p$-value of the test of association between the dichotomized classifier and the true disease status. |
ObservedPrev | ObservedPrev is a measure which identifies the optimal cutpoint closest to the observed prevalence. |
MeanPrev | MeanPrev is a measure which identifies the optimal cutpoint closest to the average of the diagnostic test values. It is suggested to use this measure if the diagnostic test takes values between 0 and 1. |
PrevalenceMatching | PrevalenceMatching is a measure based on the equality of actual and predicted prevalence; the cutpoint minimizes the absolute difference between them. |

For details, see Lopez-Raton et al. (2014).
Supplementary 5: Normalized expression values of the four miRNAs for NAFLD cases (Grup = 1) and controls (Grup = 0).

Grup | mir197 | mir146b | mir181d | mir99a | Grup | mir197 | mir146b | mir181d | mir99a |
---|---|---|---|---|---|---|---|---|---|
1 | 0.921 | 0.687 | 0.474 | 0.941 | 0 | 1.214 | 1.122 | 0.882 | 1.610 |
1 | 0.967 | 1.059 | 0.474 | 0.575 | 0 | 1.401 | 0.148 | 0.444 | 0.625 |
1 | 0.854 | 1.105 | 0.722 | 0.936 | 0 | 0.494 | 0.179 | 1.386 | 0.134 |
1 | 1.088 | 1.353 | 0.577 | 1.077 | 0 | 1.608 | 1.386 | 2.242 | 0.926 |
1 | 0.107 | 0.515 | 0.286 | 0.560 | 0 | 1.274 | 1.609 | 0.769 | 1.108 |
1 | 0.547 | 1.191 | 0.583 | 1.119 | 0 | 0.827 | 1.128 | 0.452 | 0.374 |
1 | 1.081 | 1.445 | 1.303 | 1.202 | 0 | 0.147 | 0.545 | 0.878 | 0.044 |
1 | 1.081 | 1.308 | 1.276 | 1.066 | 0 | 0.353 | 0.320 | 0.225 | 0.367 |
1 | 0.841 | 0.463 | 0.290 | 0.747 | 0 | 1.635 | 0.677 | 0.838 | 0.543 |
1 | 1.188 | 0.975 | 1.407 | 2.123 | 0 | 1.848 | 1.523 | 1.712 | 0.940 |
1 | 1.014 | 0.649 | 1.194 | 1.786 | 0 | 0.987 | 0.606 | 0.626 | 0.542 |
1 | 1.081 | 1.256 | 1.229 | 0.679 | 0 | 0.020 | 0.503 | 0.600 | 0.367 |
1 | 1.295 | 1.204 | 1.607 | 2.216 | 0 | 1.061 | 1.518 | 1.217 | 0.209 |
1 | 1.081 | 1.268 | 0.829 | 0.658 | 0 | 0.474 | 0.572 | 0.292 | 0.786 |
1 | 1.081 | 1.365 | 1.376 | 1.457 | 0 | 0.868 | 0.505 | 0.408 | 0.117 |
1 | 1.081 | 1.371 | 0.812 | 1.804 | 0 | 0.414 | 0.259 | 0.665 | 0.363 |
1 | 1.081 | 0.769 | 1.359 | 0.156 | 0 | 0.394 | 0.417 | 1.000 | 0.130 |
1 | 0.854 | 1.243 | 0.444 | 1.460 | 0 | 0.941 | 0.543 | 0.431 | 1.083 |
1 | 1.074 | 1.365 | 1.572 | 0.339 | 0 | 0.387 | 0.202 | 0.568 | 0.345 |
1 | 0.634 | 0.276 | 0.130 | 0.081 | 0 | 0.674 | 0.689 | 0.995 | 0.893 |