Title: | Computing Sensitivity at a Fixed Value of Specificity and Vice Versa, as Well as Bootstrap Metrics for ROC Curves |
---|---|
Description: | This software assesses the receiver operating characteristic (ROC) curve at adjusted thresholds, enabling the comparison of sensitivity and specificity across multiple binary classification models. Instead of comparing different models with varied cutoff values in their risk thresholds, all models can be compared at a fixed threshold of sensitivity, a fixed threshold of specificity, or the crossing point between sensitivity and specificity. If a threshold for specificity is given (e.g., specificity = 0.9), sensitivity and its confidence interval are computed, and vice versa. If the threshold for either sensitivity or specificity is not provided, the crossing point between the sensitivity and specificity curves is returned, along with their confidence intervals. For bootstrap procedures, the software evaluates the mean and CI bootstrap values for sensitivity, specificity, and the crossing point between specificity and sensitivity. This allows users to discern whether the performance of a model (based on adjusted sensitivity or adjusted specificity) is significantly different from other models. This software addresses the issue of comparing different classification models with varying predefined cutoff thresholds, which often leads to inconclusive results due to the fluctuating values of both sensitivity and specificity. |
Authors: | E. F. Haghish |
Maintainer: | E. F. Haghish <[email protected]> |
License: | MIT + file LICENSE |
Version: | 0.4 |
Built: | 2024-11-22 04:37:47 UTC |
Source: | https://github.com/haghish/adjroc |
computes adjusted sensitivity, adjusted specificity, or the crossing point between sensitivity and specificity for different thresholds
adjroc( score, class, method = "emp", sensitivity = NULL, specificity = NULL, plot = FALSE, scale = FALSE )
score |
A numeric array of diagnostic scores, i.e., the estimated probability of each diagnosis |
class |
A numeric array of the same length as score, giving the observed class of each observation |
method |
Specifies the method for estimating the ROC curve. Three methods are supported: "emp" (empirical), "bin" (binormal), and "non" (non-parametric) |
sensitivity |
numeric. The sensitivity threshold at which the adjusted specificity is computed |
specificity |
numeric. The specificity threshold at which the adjusted sensitivity is computed |
plot |
logical. If TRUE, the sensitivity and specificity curves are plotted |
scale |
logical. If TRUE, the estimated probabilities (cutoffs) are scaled to range from 0 to 1. |
data.frame including the cutoff point and the adjusted sensitivity and specificity at the specified threshold
# random classification and probability score
score <- runif(10000, min = 0, max = 1)
class <- sample(x = c(1, 0), 10000, replace = TRUE)

# calculate adjusted sensitivity, when the specificity threshold is 0.90
adjroc(score = score, class = class, specificity = 0.9, plot = TRUE)

# calculate adjusted specificity, when the sensitivity threshold is 0.90
adjroc(score = score, class = class, sensitivity = 0.9, plot = TRUE)

# calculate the crossing point between sensitivity and specificity
adjroc(score = score, class = class, plot = TRUE)
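The package's internal estimator is not reproduced here, but the idea behind the empirical ("emp") approach can be sketched outside R. The following Python function (a hypothetical helper, not part of adjroc) scans the candidate cutoffs in ascending order and reports sensitivity at the first cutoff whose empirical specificity reaches the requested threshold:

```python
import numpy as np

def adjusted_sensitivity(score, cls, specificity=0.9):
    """Empirical sketch: find the lowest cutoff whose specificity reaches
    the requested threshold, then report sensitivity at that cutoff."""
    score = np.asarray(score, dtype=float)
    cls = np.asarray(cls)
    for c in np.unique(score):           # candidate cutoffs, ascending
        pred = score >= c                # predicted positive at this cutoff
        tn = np.sum(~pred & (cls == 0))
        fp = np.sum(pred & (cls == 0))
        spec = tn / (tn + fp)
        if spec >= specificity:          # first cutoff reaching the target
            tp = np.sum(pred & (cls == 1))
            fn = np.sum(~pred & (cls == 1))
            return c, tp / (tp + fn), spec
    return None
```

This is a rough illustration only; adjroc additionally reports confidence intervals, which this sketch omits.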
computes bootstrap adjusted sensitivity, bootstrap adjusted specificity, or bootstrap crossing point between sensitivity and specificity for different thresholds
boot.adjroc( score, class, n = 100, method = "emp", sensitivity = NULL, specificity = NULL )
score |
A numeric array of diagnostic scores, i.e., the estimated probability of each diagnosis |
class |
A numeric array of the same length as score, giving the observed class of each observation |
n |
number of bootstrap samples. |
method |
Specifies the method for estimating the ROC curve. Three methods are supported: "emp" (empirical), "bin" (binormal), and "non" (non-parametric) |
sensitivity |
numeric. Specify the threshold of sensitivity. |
specificity |
numeric. Specify the threshold of specificity. |
list including the mean and CI of the bootstrap values (sensitivity, specificity, or the crossing point) and the bootstrap data.
# random classification and probability score
score <- runif(10000, min = 0, max = 1)
class <- sample(x = c(1, 0), 10000, replace = TRUE)

# calculate bootstrap adjusted sensitivity, when the specificity threshold is 0.90
boot.adjroc(score = score, class = class, n = 100, specificity = 0.9)

# calculate bootstrap adjusted specificity, when the sensitivity threshold is 0.90
boot.adjroc(score = score, class = class, n = 100, sensitivity = 0.9)

# calculate the bootstrap crossing point between sensitivity and specificity
boot.adjroc(score = score, class = class, n = 100)
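The general shape of a percentile bootstrap for any of these statistics can be sketched as follows. This Python function is a hypothetical illustration of the procedure (resample observations with replacement, recompute the statistic, summarize the replicates), not adjroc's actual code:

```python
import numpy as np

def bootstrap_metric(score, cls, stat, n=100, alpha=0.05, seed=1):
    """Percentile-bootstrap sketch: resample cases with replacement,
    recompute the statistic, and return its bootstrap mean and
    (1 - alpha) percentile confidence interval."""
    rng = np.random.default_rng(seed)
    score = np.asarray(score, dtype=float)
    cls = np.asarray(cls)
    m = len(score)
    reps = np.empty(n)
    for i in range(n):
        idx = rng.integers(0, m, m)          # resample observation indices
        reps[i] = stat(score[idx], cls[idx])
    lo, hi = np.quantile(reps, [alpha / 2, 1 - alpha / 2])
    return reps.mean(), (lo, hi)
```

Here stat could be any per-sample statistic, e.g. the adjusted sensitivity at a fixed specificity.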
computes bootstrap AUC and AUCPR for the ROC curve
boot.roc( score, class, metric = "AUC", n = 100, method = "emp", event_level = "first" )
score |
A numeric array of diagnostic scores, i.e., the estimated probability of each diagnosis |
class |
A numeric array of the same length as score, giving the observed class of each observation |
metric |
character. Specifies the metric of interest, which can be "AUC" or "AUCPR" |
n |
number of bootstrap samples. |
method |
Specifies the method for estimating the ROC curve. Three methods are supported: "emp" (empirical), "bin" (binormal), and "non" (non-parametric) |
event_level |
character. Only needed for bootstrapping AUCPR. Specifies which level of class should be considered the positive event; the value can be "first" (the default) or "second" |
list including the mean and CI of the bootstrap values of the specified metric (AUC or AUCPR) and the bootstrap data.
# random classification and probability score
score <- runif(10000, min = 0, max = 1)
class <- sample(x = c(1, 0), 10000, replace = TRUE)

# calculate the bootstrap AUC of the ROC curve
boot.roc(score = score, class = class, n = 100, metric = "AUC")

# calculate the bootstrap AUCPR of the ROC curve
boot.roc(score = score, class = class, n = 100, metric = "AUCPR")
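For reference, the empirical AUC that such a bootstrap resamples has a closed form via the Mann-Whitney rank identity: it is the probability that a randomly chosen positive case outscores a randomly chosen negative one, with ties counted as half. This Python sketch illustrates the statistic itself, not the package's implementation:

```python
import numpy as np

def empirical_auc(score, cls):
    """Empirical AUC via the Mann-Whitney identity: the fraction of
    (positive, negative) pairs where the positive case scores higher,
    counting ties as half a win."""
    score = np.asarray(score, dtype=float)
    cls = np.asarray(cls)
    pos = score[cls == 1]
    neg = score[cls == 0]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))
```

Perfect separation gives 1.0, reversed ranking gives 0.0, and an uninformative score gives 0.5.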
computes bootstrap AUC and AUCPR for two different ROC curves (models) and performs significance testing
compare.roc( score, class, score2, class2, metric = "AUC", n = 100, method = "emp", event_level = "first" )
score |
A numeric array of diagnostic scores, i.e., the estimated probability of each diagnosis, for model 1 |
class |
A numeric array of the same length as score, giving the observed class of each observation for model 1 |
score2 |
A numeric array of diagnostic scores, i.e., the estimated probability of each diagnosis, for model 2 |
class2 |
A numeric array of the same length as score2, giving the observed class of each observation for model 2 |
metric |
character. Specifies the metric of interest, which can be "AUC" or "AUCPR" |
n |
number of bootstrap samples. |
method |
Specifies the method for estimating the ROC curve. Three methods are supported: "emp" (empirical), "bin" (binormal), and "non" (non-parametric) |
event_level |
character. Only needed for bootstrapping AUCPR. Specifies which level of class should be considered the positive event; the value can be "first" (the default) or "second" |
list including the mean and CI of the bootstrap values of the specified metric (AUC or AUCPR) for each model, the bootstrap data, and the result of the significance test comparing the two models.
# two models' classifications and probability scores
score <- runif(10000, min = 0, max = 1)
class <- sample(x = c(1, 0), 10000, replace = TRUE)
score2 <- runif(10000, min = 0, max = 1)
class2 <- sample(x = c(1, 0), 10000, replace = TRUE)

# compare the bootstrap AUC of the two ROC curves
compare.roc(score = score, class = class, score2 = score2, class2 = class2,
            n = 100, metric = "AUC")

# compare the bootstrap AUCPR of the two ROC curves
compare.roc(score = score, class = class, score2 = score2, class2 = class2,
            n = 100, metric = "AUCPR")
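One common way to turn bootstrap replicates into a significance statement is to bootstrap the difference in the metric between the two models and check whether its percentile CI excludes zero. The following Python function is a hypothetical sketch of that idea, not compare.roc's actual procedure:

```python
import numpy as np

def bootstrap_difference(score1, cls1, score2, cls2, stat, n=200, seed=7):
    """Sketch of a bootstrap comparison: resample each model's data
    independently, take the difference in the chosen metric, and flag
    significance when the 95% percentile CI of the difference excludes 0."""
    rng = np.random.default_rng(seed)
    score1, cls1 = np.asarray(score1, dtype=float), np.asarray(cls1)
    score2, cls2 = np.asarray(score2, dtype=float), np.asarray(cls2)
    diffs = np.empty(n)
    for i in range(n):
        i1 = rng.integers(0, len(score1), len(score1))
        i2 = rng.integers(0, len(score2), len(score2))
        diffs[i] = stat(score1[i1], cls1[i1]) - stat(score2[i2], cls2[i2])
    lo, hi = np.quantile(diffs, [0.025, 0.975])
    significant = not (lo <= 0.0 <= hi)      # CI excluding zero
    return diffs.mean(), (lo, hi), significant
```

Here stat would be the metric of interest (e.g. an empirical AUC function); any suitable per-sample statistic works.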