Sensitivity At Specificity

Module Interface

class torchmetrics.SensitivityAtSpecificity(**kwargs)[source]

Compute the highest possible sensitivity value given the minimum specificity thresholds provided.

This is done by first calculating the Receiver Operating Characteristic (ROC) curve for different thresholds and then finding the sensitivity for a given specificity level.

This function is a simple wrapper to get the task-specific versions of this metric, which is done by setting the task argument to either 'binary', 'multiclass' or 'multilabel'. See the documentation of BinarySensitivityAtSpecificity, MulticlassSensitivityAtSpecificity and MultilabelSensitivityAtSpecificity for the specific details of each argument's influence and examples.

static __new__(cls, task, min_specificity, thresholds=None, num_classes=None, num_labels=None, ignore_index=None, validate_args=True, **kwargs)[source]

Initialize task metric.

Return type:

Metric
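
A minimal usage sketch of the wrapper, reusing the binary data from the example further below; with task='binary' the call dispatches to BinarySensitivityAtSpecificity, so the result matches the binary example:

>>> from torch import tensor
>>> from torchmetrics import SensitivityAtSpecificity
>>> preds = tensor([0, 0.5, 0.4, 0.1])
>>> target = tensor([0, 1, 1, 1])
>>> metric = SensitivityAtSpecificity(task='binary', min_specificity=0.5, thresholds=None)
>>> metric(preds, target)
(tensor(1.), tensor(0.1000))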

BinarySensitivityAtSpecificity

class torchmetrics.classification.BinarySensitivityAtSpecificity(min_specificity, thresholds=None, ignore_index=None, validate_args=True, **kwargs)[source]

Compute the highest possible sensitivity value given the minimum specificity thresholds provided.

This is done by first calculating the Receiver Operating Characteristic (ROC) curve for different thresholds and then finding the sensitivity for a given specificity level.

Accepts the following input tensors:

  • preds (float tensor): (N, ...). Preds should be a tensor containing probabilities or logits for each observation. If preds has values outside the [0,1] range, we consider the input to be logits and will automatically apply sigmoid per element.

  • target (int tensor): (N, ...). Target should be a tensor containing ground truth labels, and therefore only contain {0,1} values (except if ignore_index is specified).

Additional dimension ... will be flattened into the batch dimension.

The implementation supports calculating the metric in both a non-binned but accurate version and a binned version that is less accurate but more memory efficient. Setting the thresholds argument to None will activate the non-binned version that uses memory of size \(\mathcal{O}(n_{samples})\), whereas setting the thresholds argument to either an integer, list or a 1d tensor will use a binned version that uses memory of size \(\mathcal{O}(n_{thresholds})\) (constant memory).

Parameters:
  • min_specificity (float) – float value specifying minimum specificity threshold.

  • thresholds (Union[int, list[float], Tensor, None]) –

    Can be one of:

    • None, will use a non-binned approach where thresholds are dynamically calculated from all the data. It is the most accurate but also the most memory-consuming approach.

    • int (larger than 1), will use that number of thresholds linearly spaced from 0 to 1 as bins for the calculation.

    • list of floats, will use the indicated thresholds in the list as bins for the calculation.

    • 1d tensor of floats, will use the indicated thresholds in the tensor as bins for the calculation.

  • ignore_index (Optional[int]) – Specifies a target value that is ignored and does not contribute to the metric calculation

  • validate_args (bool) – bool indicating if input arguments and tensors should be validated for correctness. Set to False for faster computations.

  • kwargs (Any) – Additional keyword arguments, see Advanced metric settings for more info.

Returns:

a tuple of 2 tensors containing:

  • sensitivity: a scalar tensor with the maximum sensitivity for the given specificity level

  • threshold: a scalar tensor with the corresponding threshold level

Return type:

(tuple)

Example

>>> from torchmetrics.classification import BinarySensitivityAtSpecificity
>>> from torch import tensor
>>> preds = tensor([0, 0.5, 0.4, 0.1])
>>> target = tensor([0, 1, 1, 1])
>>> metric = BinarySensitivityAtSpecificity(min_specificity=0.5, thresholds=None)
>>> metric(preds, target)
(tensor(1.), tensor(0.1000))
>>> metric = BinarySensitivityAtSpecificity(min_specificity=0.5, thresholds=5)
>>> metric(preds, target)
(tensor(0.6667), tensor(0.2500))
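
Like other modular metrics, state can be accumulated over several mini-batches with update() and read out with compute(); a minimal sketch reusing the data above, split into two batches (with thresholds=None the accumulated result should match the one-shot call):

>>> from torch import tensor
>>> from torchmetrics.classification import BinarySensitivityAtSpecificity
>>> metric = BinarySensitivityAtSpecificity(min_specificity=0.5, thresholds=None)
>>> metric.update(tensor([0.0, 0.5]), tensor([0, 1]))  # first mini-batch
>>> metric.update(tensor([0.4, 0.1]), tensor([1, 1]))  # second mini-batch
>>> sensitivity, threshold = metric.compute()
>>> metric.reset()  # clear accumulated state before the next evaluation loop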

MulticlassSensitivityAtSpecificity

class torchmetrics.classification.MulticlassSensitivityAtSpecificity(num_classes, min_specificity, thresholds=None, ignore_index=None, validate_args=True, **kwargs)[source]

Compute the highest possible sensitivity value given the minimum specificity thresholds provided.

This is done by first calculating the Receiver Operating Characteristic (ROC) curve for different thresholds and then finding the sensitivity for a given specificity level.

For multiclass the metric is calculated by iteratively treating each class as the positive class and all other classes as the negative, which is referred to as the one-vs-rest approach. One-vs-one is currently not supported by this metric.

Accepts the following input tensors:

  • preds (float tensor): (N, C, ...). Preds should be a tensor containing probabilities or logits for each observation. If preds has values outside the [0,1] range, we consider the input to be logits and will automatically apply softmax per sample.

  • target (int tensor): (N, ...). Target should be a tensor containing ground truth labels, and therefore only contain values in the [0, n_classes-1] range (except if ignore_index is specified).

Additional dimension ... will be flattened into the batch dimension.

The implementation supports calculating the metric in both a non-binned but accurate version and a binned version that is less accurate but more memory efficient. Setting the thresholds argument to None will activate the non-binned version that uses memory of size \(\mathcal{O}(n_{samples})\), whereas setting the thresholds argument to either an integer, list or a 1d tensor will use a binned version that uses memory of size \(\mathcal{O}(n_{thresholds} \times n_{classes})\) (constant memory).

Parameters:
  • num_classes (int) – Integer specifying the number of classes.

  • min_specificity (float) – float value specifying minimum specificity threshold.

  • thresholds (Union[int, list[float], Tensor, None]) –

    Can be one of:

    • None, will use a non-binned approach where thresholds are dynamically calculated from all the data. It is the most accurate but also the most memory-consuming approach.

    • int (larger than 1), will use that number of thresholds linearly spaced from 0 to 1 as bins for the calculation.

    • list of floats, will use the indicated thresholds in the list as bins for the calculation.

    • 1d tensor of floats, will use the indicated thresholds in the tensor as bins for the calculation.

  • ignore_index (Optional[int]) – Specifies a target value that is ignored and does not contribute to the metric calculation

  • validate_args (bool) – bool indicating if input arguments and tensors should be validated for correctness. Set to False for faster computations.

  • kwargs (Any) – Additional keyword arguments, see Advanced metric settings for more info.

Returns:

a tuple of either 2 tensors or 2 lists containing:

  • sensitivity: a 1d tensor of size (n_classes, ) with the maximum sensitivity for the given specificity level per class

  • thresholds: a 1d tensor of size (n_classes, ) with the corresponding threshold level per class

Return type:

(tuple)

Example

>>> from torchmetrics.classification import MulticlassSensitivityAtSpecificity
>>> from torch import tensor
>>> preds = tensor([[0.75, 0.05, 0.05, 0.05, 0.05],
...                 [0.05, 0.75, 0.05, 0.05, 0.05],
...                 [0.05, 0.05, 0.75, 0.05, 0.05],
...                 [0.05, 0.05, 0.05, 0.75, 0.05]])
>>> target = tensor([0, 1, 3, 2])
>>> metric = MulticlassSensitivityAtSpecificity(num_classes=5, min_specificity=0.5, thresholds=None)
>>> metric(preds, target)
(tensor([1., 1., 0., 0., 0.]), tensor([0.7500, 0.7500, 1.0000, 1.0000, 1.0000]))
>>> metric = MulticlassSensitivityAtSpecificity(num_classes=5, min_specificity=0.5, thresholds=5)
>>> metric(preds, target)
(tensor([1., 1., 0., 0., 0.]), tensor([0.7500, 0.7500, 1.0000, 1.0000, 1.0000]))
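
Since preds with values outside the [0,1] range are treated as logits (softmax applied per sample), raw model outputs can be passed directly; a hedged sketch with made-up logits, assigning the result rather than printing it:

>>> from torch import tensor
>>> from torchmetrics.classification import MulticlassSensitivityAtSpecificity
>>> logits = tensor([[ 2.0, -1.0, -1.0, -1.0, -1.0],
...                  [-1.0,  2.0, -1.0, -1.0, -1.0],
...                  [-1.0, -1.0,  2.0, -1.0, -1.0],
...                  [-1.0, -1.0, -1.0,  2.0, -1.0]])
>>> target = tensor([0, 1, 3, 2])
>>> metric = MulticlassSensitivityAtSpecificity(num_classes=5, min_specificity=0.5, thresholds=None)
>>> sensitivity, thresholds = metric(logits, target)  # softmax is applied internally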

MultilabelSensitivityAtSpecificity

class torchmetrics.classification.MultilabelSensitivityAtSpecificity(num_labels, min_specificity, thresholds=None, ignore_index=None, validate_args=True, **kwargs)[source]

Compute the highest possible sensitivity value given the minimum specificity thresholds provided.

This is done by first calculating the Receiver Operating Characteristic (ROC) curve for different thresholds and then finding the sensitivity for a given specificity level.

Accepts the following input tensors:

  • preds (float tensor): (N, C, ...). Preds should be a tensor containing probabilities or logits for each observation. If preds has values outside the [0,1] range, we consider the input to be logits and will automatically apply sigmoid per element.

  • target (int tensor): (N, C, ...). Target should be a tensor containing ground truth labels, and therefore only contain {0,1} values (except if ignore_index is specified).

Additional dimension ... will be flattened into the batch dimension.

The implementation supports calculating the metric in both a non-binned but accurate version and a binned version that is less accurate but more memory efficient. Setting the thresholds argument to None will activate the non-binned version that uses memory of size \(\mathcal{O}(n_{samples})\), whereas setting the thresholds argument to either an integer, list or a 1d tensor will use a binned version that uses memory of size \(\mathcal{O}(n_{thresholds} \times n_{labels})\) (constant memory).

Parameters:
  • num_labels (int) – Integer specifying the number of labels.

  • min_specificity (float) – float value specifying minimum specificity threshold.

  • thresholds (Union[int, list[float], Tensor, None]) –

    Can be one of:

    • None, will use a non-binned approach where thresholds are dynamically calculated from all the data. It is the most accurate but also the most memory-consuming approach.

    • int (larger than 1), will use that number of thresholds linearly spaced from 0 to 1 as bins for the calculation.

    • list of floats, will use the indicated thresholds in the list as bins for the calculation.

    • 1d tensor of floats, will use the indicated thresholds in the tensor as bins for the calculation.

  • ignore_index (Optional[int]) – Specifies a target value that is ignored and does not contribute to the metric calculation

  • validate_args (bool) – bool indicating if input arguments and tensors should be validated for correctness. Set to False for faster computations.

  • kwargs (Any) – Additional keyword arguments, see Advanced metric settings for more info.

Returns:

a tuple of either 2 tensors or 2 lists containing:

  • sensitivity: a 1d tensor of size (n_labels, ) with the maximum sensitivity for the given specificity level per label

  • thresholds: a 1d tensor of size (n_labels, ) with the corresponding threshold level per label

Return type:

(tuple)

Example

>>> from torchmetrics.classification import MultilabelSensitivityAtSpecificity
>>> from torch import tensor
>>> preds = tensor([[0.75, 0.05, 0.35],
...                 [0.45, 0.75, 0.05],
...                 [0.05, 0.55, 0.75],
...                 [0.05, 0.65, 0.05]])
>>> target = tensor([[1, 0, 1],
...                  [0, 0, 0],
...                  [0, 1, 1],
...                  [1, 1, 1]])
>>> metric = MultilabelSensitivityAtSpecificity(num_labels=3, min_specificity=0.5, thresholds=None)
>>> metric(preds, target)
(tensor([0.5000, 1.0000, 0.6667]), tensor([0.7500, 0.5500, 0.3500]))
>>> metric = MultilabelSensitivityAtSpecificity(num_labels=3, min_specificity=0.5, thresholds=5)
>>> metric(preds, target)
(tensor([0.5000, 1.0000, 0.6667]), tensor([0.7500, 0.5000, 0.2500]))
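
The bins can also be supplied explicitly; a minimal sketch passing a custom 1d tensor of thresholds (the binned result depends on the chosen bin edges):

>>> from torch import tensor
>>> from torchmetrics.classification import MultilabelSensitivityAtSpecificity
>>> preds = tensor([[0.75, 0.05, 0.35],
...                 [0.45, 0.75, 0.05],
...                 [0.05, 0.55, 0.75],
...                 [0.05, 0.65, 0.05]])
>>> target = tensor([[1, 0, 1],
...                  [0, 0, 0],
...                  [0, 1, 1],
...                  [1, 1, 1]])
>>> bins = tensor([0.1, 0.3, 0.5, 0.7, 0.9])
>>> metric = MultilabelSensitivityAtSpecificity(num_labels=3, min_specificity=0.5, thresholds=bins)
>>> sensitivity, thresholds = metric(preds, target)  # one value per label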

Functional Interface

torchmetrics.functional.classification.sensitivity_at_specificity(preds, target, task, min_specificity, thresholds=None, num_classes=None, num_labels=None, ignore_index=None, validate_args=True)[source]

Compute the highest possible sensitivity value given the minimum specificity thresholds provided.

This is done by first calculating the Receiver Operating Characteristic (ROC) curve for different thresholds and then finding the sensitivity for a given specificity level.

This function is a simple wrapper to get the task-specific versions of this metric, which is done by setting the task argument to either 'binary', 'multiclass' or 'multilabel'. See the documentation of binary_sensitivity_at_specificity(), multiclass_sensitivity_at_specificity() and multilabel_sensitivity_at_specificity() for the specific details of each argument's influence and examples.

Return type:

Union[Tensor, tuple[Tensor, Tensor, Tensor], tuple[List[Tensor], List[Tensor], List[Tensor]]]
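
A minimal sketch of the functional wrapper, reusing the binary example data from below; with task='binary' the call is dispatched to binary_sensitivity_at_specificity() and returns a (sensitivity, threshold) pair:

>>> from torch import tensor
>>> from torchmetrics.functional.classification import sensitivity_at_specificity
>>> preds = tensor([0, 0.5, 0.4, 0.1])
>>> target = tensor([0, 1, 1, 1])
>>> sensitivity, threshold = sensitivity_at_specificity(preds, target, task='binary', min_specificity=0.5)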

binary_sensitivity_at_specificity

torchmetrics.functional.classification.binary_sensitivity_at_specificity(preds, target, min_specificity, thresholds=None, ignore_index=None, validate_args=True)[source]

Compute the highest possible sensitivity value given the minimum specificity levels provided for binary tasks.

This is done by first calculating the Receiver Operating Characteristic (ROC) curve for different thresholds and then finding the sensitivity for a given specificity level.

Accepts the following input tensors:

  • preds (float tensor): (N, ...). Preds should be a tensor containing probabilities or logits for each observation. If preds has values outside the [0,1] range, we consider the input to be logits and will automatically apply sigmoid per element.

  • target (int tensor): (N, ...). Target should be a tensor containing ground truth labels, and therefore only contain {0,1} values (except if ignore_index is specified).

Additional dimension ... will be flattened into the batch dimension.

The implementation supports calculating the metric in both a non-binned but accurate version and a binned version that is less accurate but more memory efficient. Setting the thresholds argument to None will activate the non-binned version that uses memory of size \(\mathcal{O}(n_{samples})\), whereas setting the thresholds argument to either an integer, list or a 1d tensor will use a binned version that uses memory of size \(\mathcal{O}(n_{thresholds})\) (constant memory).

Parameters:
  • preds (Tensor) – Tensor with predictions

  • target (Tensor) – Tensor with true labels

  • min_specificity (float) – float value specifying minimum specificity threshold.

  • thresholds (Union[int, list[float], Tensor, None]) –

    Can be one of:

    • None, will use a non-binned approach where thresholds are dynamically calculated from all the data. It is the most accurate but also the most memory-consuming approach.

    • int (larger than 1), will use that number of thresholds linearly spaced from 0 to 1 as bins for the calculation.

    • list of floats, will use the indicated thresholds in the list as bins for the calculation.

    • 1d tensor of floats, will use the indicated thresholds in the tensor as bins for the calculation.

  • ignore_index (Optional[int]) – Specifies a target value that is ignored and does not contribute to the metric calculation

  • validate_args (bool) – bool indicating if input arguments and tensors should be validated for correctness. Set to False for faster computations.

Returns:

a tuple of 2 tensors containing:

  • sensitivity: a scalar tensor with the maximum sensitivity for the given specificity level

  • threshold: a scalar tensor with the corresponding threshold level

Return type:

(tuple)

Example

>>> from torchmetrics.functional.classification import binary_sensitivity_at_specificity
>>> import torch
>>> preds = torch.tensor([0, 0.5, 0.4, 0.1])
>>> target = torch.tensor([0, 1, 1, 1])
>>> binary_sensitivity_at_specificity(preds, target, min_specificity=0.5, thresholds=None)
(tensor(1.), tensor(0.1000))
>>> binary_sensitivity_at_specificity(preds, target, min_specificity=0.5, thresholds=5)
(tensor(0.6667), tensor(0.2500))
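
Samples can be excluded from the calculation via ignore_index; a hedged sketch where target entries equal to -1 are ignored (the hypothetical fifth sample below does not contribute):

>>> import torch
>>> from torchmetrics.functional.classification import binary_sensitivity_at_specificity
>>> preds = torch.tensor([0, 0.5, 0.4, 0.1, 0.9])
>>> target = torch.tensor([0, 1, 1, 1, -1])  # the last sample is masked out
>>> sensitivity, threshold = binary_sensitivity_at_specificity(
...     preds, target, min_specificity=0.5, thresholds=None, ignore_index=-1)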

multiclass_sensitivity_at_specificity

torchmetrics.functional.classification.multiclass_sensitivity_at_specificity(preds, target, num_classes, min_specificity, thresholds=None, ignore_index=None, validate_args=True)[source]

Compute the highest possible sensitivity value given the minimum specificity level provided for multiclass tasks.

This is done by first calculating the Receiver Operating Characteristic (ROC) curve for different thresholds and then finding the sensitivity for a given specificity level.

Accepts the following input tensors:

  • preds (float tensor): (N, C, ...). Preds should be a tensor containing probabilities or logits for each observation. If preds has values outside the [0,1] range, we consider the input to be logits and will automatically apply softmax per sample.

  • target (int tensor): (N, ...). Target should be a tensor containing ground truth labels, and therefore only contain values in the [0, n_classes-1] range (except if ignore_index is specified).

Additional dimension ... will be flattened into the batch dimension.

The implementation supports calculating the metric in both a non-binned but accurate version and a binned version that is less accurate but more memory efficient. Setting the thresholds argument to None will activate the non-binned version that uses memory of size \(\mathcal{O}(n_{samples})\), whereas setting the thresholds argument to either an integer, list or a 1d tensor will use a binned version that uses memory of size \(\mathcal{O}(n_{thresholds} \times n_{classes})\) (constant memory).

Parameters:
  • preds (Tensor) – Tensor with predictions

  • target (Tensor) – Tensor with true labels

  • num_classes (int) – Integer specifying the number of classes.

  • min_specificity (float) – float value specifying minimum specificity threshold.

  • thresholds (Union[int, list[float], Tensor, None]) –

    Can be one of:

    • None, will use a non-binned approach where thresholds are dynamically calculated from all the data. It is the most accurate but also the most memory-consuming approach.

    • int (larger than 1), will use that number of thresholds linearly spaced from 0 to 1 as bins for the calculation.

    • list of floats, will use the indicated thresholds in the list as bins for the calculation.

    • 1d tensor of floats, will use the indicated thresholds in the tensor as bins for the calculation.

  • ignore_index (Optional[int]) – Specifies a target value that is ignored and does not contribute to the metric calculation

  • validate_args (bool) – bool indicating if input arguments and tensors should be validated for correctness. Set to False for faster computations.

Returns:

a tuple of either 2 tensors or 2 lists containing:

  • sensitivity: a 1d tensor of size (n_classes, ) with the maximum sensitivity for the given specificity level per class

  • thresholds: a 1d tensor of size (n_classes, ) with the corresponding threshold level per class

Return type:

(tuple)

Example

>>> from torchmetrics.functional.classification import multiclass_sensitivity_at_specificity
>>> import torch
>>> preds = torch.tensor([[0.75, 0.05, 0.05, 0.05, 0.05],
...                       [0.05, 0.75, 0.05, 0.05, 0.05],
...                       [0.05, 0.05, 0.75, 0.05, 0.05],
...                       [0.05, 0.05, 0.05, 0.75, 0.05]])
>>> target = torch.tensor([0, 1, 3, 2])
>>> multiclass_sensitivity_at_specificity(preds, target, num_classes=5, min_specificity=0.5, thresholds=None)
(tensor([1., 1., 0., 0., 0.]), tensor([0.7500, 0.7500, 1.0000, 1.0000, 1.0000]))
>>> multiclass_sensitivity_at_specificity(preds, target, num_classes=5, min_specificity=0.5, thresholds=5)
(tensor([1., 1., 0., 0., 0.]), tensor([0.7500, 0.7500, 1.0000, 1.0000, 1.0000]))
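
thresholds may also be given as an explicit list of floats; a minimal sketch reusing the data above with hand-picked bin edges:

>>> import torch
>>> from torchmetrics.functional.classification import multiclass_sensitivity_at_specificity
>>> preds = torch.tensor([[0.75, 0.05, 0.05, 0.05, 0.05],
...                       [0.05, 0.75, 0.05, 0.05, 0.05],
...                       [0.05, 0.05, 0.75, 0.05, 0.05],
...                       [0.05, 0.05, 0.05, 0.75, 0.05]])
>>> target = torch.tensor([0, 1, 3, 2])
>>> sensitivity, thresholds = multiclass_sensitivity_at_specificity(
...     preds, target, num_classes=5, min_specificity=0.5, thresholds=[0.0, 0.25, 0.5, 0.75, 1.0])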

multilabel_sensitivity_at_specificity

torchmetrics.functional.classification.multilabel_sensitivity_at_specificity(preds, target, num_labels, min_specificity, thresholds=None, ignore_index=None, validate_args=True)[source]

Compute the highest possible sensitivity value given the minimum specificity level provided for multilabel tasks.

This is done by first calculating the Receiver Operating Characteristic (ROC) curve for different thresholds and then finding the sensitivity for a given specificity level.

Accepts the following input tensors:

  • preds (float tensor): (N, C, ...). Preds should be a tensor containing probabilities or logits for each observation. If preds has values outside the [0,1] range, we consider the input to be logits and will automatically apply sigmoid per element.

  • target (int tensor): (N, C, ...). Target should be a tensor containing ground truth labels, and therefore only contain {0,1} values (except if ignore_index is specified).

Additional dimension ... will be flattened into the batch dimension.

The implementation supports calculating the metric in both a non-binned but accurate version and a binned version that is less accurate but more memory efficient. Setting the thresholds argument to None will activate the non-binned version that uses memory of size \(\mathcal{O}(n_{samples})\), whereas setting the thresholds argument to either an integer, list or a 1d tensor will use a binned version that uses memory of size \(\mathcal{O}(n_{thresholds} \times n_{labels})\) (constant memory).

Parameters:
  • preds (Tensor) – Tensor with predictions

  • target (Tensor) – Tensor with true labels

  • num_labels (int) – Integer specifying the number of labels.

  • min_specificity (float) – float value specifying minimum specificity threshold.

  • thresholds (Union[int, list[float], Tensor, None]) –

    Can be one of:

    • None, will use a non-binned approach where thresholds are dynamically calculated from all the data. It is the most accurate but also the most memory-consuming approach.

    • int (larger than 1), will use that number of thresholds linearly spaced from 0 to 1 as bins for the calculation.

    • list of floats, will use the indicated thresholds in the list as bins for the calculation.

    • 1d tensor of floats, will use the indicated thresholds in the tensor as bins for the calculation.

  • ignore_index (Optional[int]) – Specifies a target value that is ignored and does not contribute to the metric calculation

  • validate_args (bool) – bool indicating if input arguments and tensors should be validated for correctness. Set to False for faster computations.

Returns:

a tuple of either 2 tensors or 2 lists containing:

  • sensitivity: a 1d tensor of size (n_labels, ) with the maximum sensitivity for the given specificity level per label

  • thresholds: a 1d tensor of size (n_labels, ) with the corresponding threshold level per label

Return type:

(tuple)

Example

>>> from torchmetrics.functional.classification import multilabel_sensitivity_at_specificity
>>> import torch
>>> preds = torch.tensor([[0.75, 0.05, 0.35],
...                       [0.45, 0.75, 0.05],
...                       [0.05, 0.55, 0.75],
...                       [0.05, 0.65, 0.05]])
>>> target = torch.tensor([[1, 0, 1],
...                        [0, 0, 0],
...                        [0, 1, 1],
...                        [1, 1, 1]])
>>> multilabel_sensitivity_at_specificity(preds, target, num_labels=3, min_specificity=0.5, thresholds=None)
(tensor([0.5000, 1.0000, 0.6667]), tensor([0.7500, 0.5500, 0.3500]))
>>> multilabel_sensitivity_at_specificity(preds, target, num_labels=3, min_specificity=0.5, thresholds=5)
(tensor([0.5000, 1.0000, 0.6667]), tensor([0.7500, 0.5000, 0.2500]))
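
Because preds with values outside the [0,1] range are treated as logits (sigmoid applied per element), raw scores can be passed directly; a hedged sketch built from roughly the logits of the probabilities used above, assigning rather than printing the result:

>>> import torch
>>> from torchmetrics.functional.classification import multilabel_sensitivity_at_specificity
>>> logits = torch.tensor([[ 1.1, -2.9, -0.6],
...                        [-0.2,  1.1, -2.9],
...                        [-2.9,  0.2,  1.1],
...                        [-2.9,  0.6, -2.9]])  # roughly the inverse-sigmoid of the probabilities above
>>> target = torch.tensor([[1, 0, 1],
...                        [0, 0, 0],
...                        [0, 1, 1],
...                        [1, 1, 1]])
>>> sensitivity, thresholds = multilabel_sensitivity_at_specificity(
...     logits, target, num_labels=3, min_specificity=0.5, thresholds=None)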