Package 'adjROC'

Title: Computing Sensitivity at a Fixed Value of Specificity and Vice Versa as Well as Bootstrap Metrics for ROC Curves
Description: This software assesses the receiver operating characteristic (ROC) curve at adjusted thresholds, enabling the comparison of sensitivity and specificity across multiple binary classification models. Instead of comparing different models with varied cutoff values in their risk thresholds, all models can be compared at a fixed threshold of sensitivity, a fixed threshold of specificity, or the crossing point between sensitivity and specificity. If a threshold for specificity is given (e.g., specificity = 0.9), sensitivity and its confidence interval are computed, and vice versa. If the threshold for either sensitivity or specificity is not provided, the crossing point between the sensitivity and specificity curves is returned, along with their confidence intervals. For bootstrap procedures, the software evaluates the bootstrap mean and CI for sensitivity, specificity, and the crossing point between specificity and sensitivity. This allows users to discern whether the performance of a model (based on adjusted sensitivity or adjusted specificity) is significantly different from that of other models. This software addresses the issue of comparing different classification models with varying predefined cutoff thresholds, which often leads to inconclusive results due to the fluctuating values of both sensitivity and specificity.
Authors: E. F. Haghish
Maintainer: E. F. Haghish <[email protected]>
License: MIT + file LICENSE
Version: 0.4
Built: 2024-11-22 04:37:47 UTC
Source: https://github.com/haghish/adjroc
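
For example, two classifiers can be evaluated at the same fixed specificity rather than at their individual default cutoffs. A minimal sketch, assuming two hypothetical vectors of predicted probabilities (score1, score2) for the same observed classes:

library(adjROC)

# hypothetical predicted probabilities from two models and the observed classes
score1 <- runif(500)
score2 <- runif(500)
class  <- sample(c(1, 0), 500, replace = TRUE)

# adjusted sensitivity of each model when specificity is held at 0.90
adjroc(score = score1, class = class, specificity = 0.9)
adjroc(score = score2, class = class, specificity = 0.9)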

Help Index


adjroc

Description

Computes adjusted sensitivity, adjusted specificity, or the crossing point between sensitivity and specificity for a given threshold

Usage

adjroc(
  score,
  class,
  method = "emp",
  sensitivity = NULL,
  specificity = NULL,
  plot = FALSE,
  scale = FALSE
)

Arguments

score

A numeric vector of diagnostic scores, i.e., the estimated probability of each diagnosis

class

A numeric vector of the same length as "score", containing the actual class of the observations

method

Specifies the method for estimating the ROC curve. Three methods are supported, which are "empirical", "binormal", and "nonparametric"

sensitivity

numeric. Specify the threshold of sensitivity.

specificity

numeric. Specify the threshold of specificity.

plot

logical. If TRUE, the sensitivity and specificity curves will be plotted.

scale

logical. If TRUE, the estimated probabilities (cutoffs) will be scaled to range from 0 to 1.

Value

data.frame including the cutoff point and the adjusted sensitivity and specificity based on the specified threshold

Examples

# random classification and probability score
score <- runif(10000, min=0, max=1)
class <- sample(x = c(1,0), 10000, replace=TRUE)

# calculate adjusted sensitivity, when specificity threshold is 0.90:
adjroc(score = score, class = class, specificity = 0.9, plot = TRUE)

# calculate adjusted specificity, when sensitivity threshold equals 0.9
adjroc(score = score, class = class, sensitivity = 0.9, plot = TRUE)

# calculate the meeting point between sensitivity and specificity
adjroc(score = score, class = class, plot = TRUE)
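
# The estimation method and cutoff scaling can also be varied. A minimal
# sketch, assuming the "binormal" method accepts the same arguments as the
# default empirical estimator and returns the data.frame described in the
# Value section:
result <- adjroc(score = score, class = class, specificity = 0.9,
                 method = "binormal", scale = TRUE)
print(result)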

boot.adjroc

Description

Computes bootstrap adjusted sensitivity, bootstrap adjusted specificity, or the bootstrap crossing point between sensitivity and specificity for a given threshold

Usage

boot.adjroc(
  score,
  class,
  n = 100,
  method = "emp",
  sensitivity = NULL,
  specificity = NULL
)

Arguments

score

A numeric vector of diagnostic scores, i.e., the estimated probability of each diagnosis

class

A numeric vector of the same length as "score", containing the actual class of the observations

n

number of bootstrap samples.

method

Specifies the method for estimating the ROC curve. Three methods are supported, which are "empirical", "binormal", and "nonparametric"

sensitivity

numeric. Specify the threshold of sensitivity.

specificity

numeric. Specify the threshold of specificity.

Value

list including the mean and CI of the bootstrap values (sensitivity, specificity, or the crossing point) and the bootstrap data.

Examples

# random classification and probability score
score <- runif(10000, min=0, max=1)
class <- sample(x = c(1,0), 10000, replace=TRUE)

# calculate bootstrap adjusted sensitivity, when specificity threshold is 0.90:
boot.adjroc(score = score, class = class, n = 100, specificity = 0.9)

# calculate bootstrap adjusted specificity, when sensitivity threshold equals 0.9
boot.adjroc(score = score, class = class, n = 100, sensitivity = 0.9)

# calculate the bootstrap meeting point between sensitivity and specificity
boot.adjroc(score = score, class = class, n = 100)
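
# The returned object can be stored and inspected. A sketch that does not
# assume particular element names beyond the mean, CI, and bootstrap data
# described in the Value section:
res <- boot.adjroc(score = score, class = class, n = 100, specificity = 0.9)
str(res)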

boot.roc

Description

Computes bootstrap AUC or AUCPR for the ROC curve

Usage

boot.roc(
  score,
  class,
  metric = "AUC",
  n = 100,
  method = "emp",
  event_level = "first"
)

Arguments

score

A numeric vector of diagnostic scores, i.e., the estimated probability of each diagnosis

class

A numeric vector of the same length as "score", containing the actual class of the observations

metric

character. Specifies the metric of interest, which can be "AUC" (Area Under the Curve, default) or "AUCPR" (Area Under the Precision-Recall Curve).

n

number of bootstrap samples.

method

Specifies the method for estimating the ROC curve. Three methods are supported, which are "empirical", "binormal", and "nonparametric"

event_level

character. Only needed for bootstrapping AUCPR. This argument specifies which level of "class" should be considered the positive event. The value can only be "first" or "second".

Value

list including the mean and CI of the bootstrap metric (AUC or AUCPR) and the bootstrap data.

Examples

# random classification and probability score
score <- runif(10000, min=0, max=1)
class <- sample(x = c(1,0), 10000, replace=TRUE)

# calculate bootstrap AUC of the ROC curve
boot.roc(score = score, class = class, n = 100, metric = "AUC")

# calculate bootstrap AUCPR of the ROC curve
boot.roc(score = score, class = class, n = 100, metric = "AUCPR")

compare.roc

Description

Computes bootstrap AUC or AUCPR for two different ROC curves (models) and performs significance testing

Usage

compare.roc(
  score,
  class,
  score2,
  class2,
  metric = "AUC",
  n = 100,
  method = "emp",
  event_level = "first"
)

Arguments

score

A numeric vector of diagnostic scores, i.e., the estimated probability of each diagnosis, for model 1

class

A numeric vector of the same length as "score", containing the actual class of the observations for model 1

score2

A numeric vector of diagnostic scores, i.e., the estimated probability of each diagnosis, for model 2

class2

A numeric vector of the same length as "score2", containing the actual class of the observations for model 2

metric

character. Specifies the metric of interest, which can be "AUC" (Area Under the Curve, default), "AUCPR" (Area Under the Precision-Recall Curve), or "meeting_point", which evaluates the crossing point between the sensitivity and specificity of the two models.

n

number of bootstrap samples.

method

Specifies the method for estimating the ROC curve. Three methods are supported, which are "empirical", "binormal", and "nonparametric"

event_level

character. Only needed for bootstrapping AUCPR. This argument specifies which level of "class" should be considered the positive event. The value can only be "first" or "second".

Value

list including the mean and CI of the bootstrap metric (AUC, AUCPR, or the meeting point) and the bootstrap data.

Examples

# random classification and probability scores for two models
score <- runif(10000, min=0, max=1)
class <- sample(x = c(1,0), 10000, replace=TRUE)
score2 <- runif(10000, min=0, max=1)
class2 <- sample(x = c(1,0), 10000, replace=TRUE)

# compare the bootstrap AUC of the two ROC curves
compare.roc(score = score, class = class, score2 = score2, class2 = class2,
            n = 100, metric = "AUC")

# compare the bootstrap AUCPR of the two ROC curves
compare.roc(score = score, class = class, score2 = score2, class2 = class2,
            n = 100, metric = "AUCPR")