Calibration Error
Module Interface
- class torchmetrics.CalibrationError(**kwargs)[source]
The expected calibration error can be used to quantify how well a given model is calibrated, i.e. how well the predicted output probabilities of the model match the actual probabilities of the ground truth distribution. Three different norms are implemented, each corresponding to a variation of the calibration error metric.
\[\text{ECE} = \sum_i^N b_i |p_i - c_i|, \quad \text{L1 norm (Expected Calibration Error)}\]
\[\text{MCE} = \max_{i} |p_i - c_i|, \quad \text{Infinity norm (Maximum Calibration Error)}\]
\[\text{RMSCE} = \sqrt{\sum_i^N b_i (p_i - c_i)^2}, \quad \text{L2 norm (Root Mean Square Calibration Error)}\]
where \(p_i\) is the top-1 prediction accuracy in bin \(i\), \(c_i\) is the average confidence of predictions in bin \(i\), and \(b_i\) is the fraction of data points in bin \(i\). Bins are constructed uniformly over the [0,1] range.
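To make the L1 definition above concrete, the following sketch (illustrative only, not part of the torchmetrics API; the confidences and correctness indicators are made up) bins hypothetical top-1 confidences into uniform bins and accumulates the weighted per-bin gap between accuracy and mean confidence:
>>> import torch
>>> # hypothetical top-1 confidences and 0/1 correctness indicators
>>> conf = torch.tensor([0.25, 0.25, 0.55, 0.75, 0.75])
>>> correct = torch.tensor([0.0, 1.0, 1.0, 1.0, 1.0])
>>> n_bins = 2
>>> edges = torch.linspace(0, 1, n_bins + 1)
>>> bin_idx = torch.bucketize(conf, edges[1:-1])   # uniform bins over [0, 1]
>>> ece = torch.tensor(0.0)
>>> for i in range(n_bins):                        # both bins are non-empty for these made-up values
...     mask = bin_idx == i
...     b_i = mask.float().mean()                  # fraction of samples in bin i
...     p_i = correct[mask].mean()                 # accuracy in bin i
...     c_i = conf[mask].mean()                    # mean confidence in bin i
...     ece = ece + b_i * (p_i - c_i).abs()        # L1 (ECE) term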
This class is a simple wrapper that selects the task-specific version of this metric, which is done by setting the task argument to either 'binary' or 'multiclass'. See the documentation of BinaryCalibrationError and MulticlassCalibrationError for the specific influence of each argument and for examples.
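A minimal usage sketch of the wrapper (illustrative only; the prediction and target values are reused from the binary example further below):
>>> from torch import tensor
>>> from torchmetrics import CalibrationError
>>> preds = tensor([0.25, 0.25, 0.55, 0.75, 0.75])
>>> target = tensor([0, 0, 1, 1, 1])
>>> ece = CalibrationError(task='binary', n_bins=2, norm='l1')
>>> score = ece(preds, target)  # should match the BinaryCalibrationError example below, tensor(0.2900)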
BinaryCalibrationError
- class torchmetrics.classification.BinaryCalibrationError(n_bins=15, norm='l1', ignore_index=None, validate_args=True, **kwargs)[source]
Top-label Calibration Error for binary tasks.
The expected calibration error can be used to quantify how well a given model is calibrated, i.e. how well the predicted output probabilities of the model match the actual probabilities of the ground truth distribution. Three different norms are implemented, each corresponding to a variation of the calibration error metric.
\[\text{ECE} = \sum_i^N b_i |p_i - c_i|, \quad \text{L1 norm (Expected Calibration Error)}\]
\[\text{MCE} = \max_{i} |p_i - c_i|, \quad \text{Infinity norm (Maximum Calibration Error)}\]
\[\text{RMSCE} = \sqrt{\sum_i^N b_i (p_i - c_i)^2}, \quad \text{L2 norm (Root Mean Square Calibration Error)}\]
where \(p_i\) is the top-1 prediction accuracy in bin \(i\), \(c_i\) is the average confidence of predictions in bin \(i\), and \(b_i\) is the fraction of data points in bin \(i\). Bins are constructed uniformly over the [0,1] range.
As input to forward and update the metric accepts the following input:
- preds (Tensor): A float tensor of shape (N, ...) containing probabilities or logits for each observation. If preds has values outside the [0,1] range, the input is treated as logits and sigmoid is applied automatically per element (see the logits sketch after the example below).
- target (Tensor): An int tensor of shape (N, ...) containing ground truth labels, which should therefore only contain {0,1} values (except if ignore_index is specified). The value 1 always encodes the positive class.
As output to forward and compute the metric returns the following output:
- bce (Tensor): A scalar tensor containing the calibration error
Additional dimensions ... will be flattened into the batch dimension.
- Parameters:
- n_bins (int) – Number of bins to use when computing the metric.
- norm (Literal['l1', 'l2', 'max']) – Norm used to compare empirical and expected probability bins.
- ignore_index (Optional[int]) – Specifies a target value that is ignored and does not contribute to the metric calculation.
- validate_args (bool) – Whether input arguments and tensors should be validated for correctness. Set to False for faster computations.
- kwargs (Any) – Additional keyword arguments, see Advanced metric settings for more info.
Example
>>> from torch import tensor
>>> from torchmetrics.classification import BinaryCalibrationError
>>> preds = tensor([0.25, 0.25, 0.55, 0.75, 0.75])
>>> target = tensor([0, 0, 1, 1, 1])
>>> metric = BinaryCalibrationError(n_bins=2, norm='l1')
>>> metric(preds, target)
tensor(0.2900)
>>> bce = BinaryCalibrationError(n_bins=2, norm='l2')
>>> bce(preds, target)
tensor(0.2918)
>>> bce = BinaryCalibrationError(n_bins=2, norm='max')
>>> bce(preds, target)
tensor(0.3167)
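The docstring example above passes probabilities. The variant below is an illustrative sketch (the logit values are made up) relying on the documented behaviour that values outside [0,1] are treated as logits and passed through a sigmoid element-wise:
>>> from torch import tensor
>>> from torchmetrics.classification import BinaryCalibrationError
>>> logits = tensor([-1.10, -1.10, 0.20, 1.10, 1.10])  # hypothetical raw logits
>>> target = tensor([0, 0, 1, 1, 1])
>>> metric = BinaryCalibrationError(n_bins=2, norm='l1')
>>> score = metric(logits, target)  # sigmoid is applied internally before binning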
- plot(val=None, ax=None)[source]
Plot a single or multiple values from the metric.
- Parameters:
- val (Union[Tensor, Sequence[Tensor], None]) – Either a single result from calling metric.forward or metric.compute, or a list of these results. If no value is provided, metric.compute will be called automatically and that result plotted.
- ax (Optional[Axes]) – A matplotlib axis object. If provided, the plot is added to that axis.
- Returns:
Figure object and Axes object
- Raises:
ModuleNotFoundError – If matplotlib is not installed
>>> from torch import rand, randint
>>> # Example plotting a single value
>>> from torchmetrics.classification import BinaryCalibrationError
>>> metric = BinaryCalibrationError(n_bins=2, norm='l1')
>>> metric.update(rand(10), randint(2, (10,)))
>>> fig_, ax_ = metric.plot()
>>> from torch import rand, randint
>>> # Example plotting multiple values
>>> from torchmetrics.classification import BinaryCalibrationError
>>> metric = BinaryCalibrationError(n_bins=2, norm='l1')
>>> values = []
>>> for _ in range(10):
...     values.append(metric(rand(10), randint(2, (10,))))
>>> fig_, ax_ = metric.plot(values)
MulticlassCalibrationError
- class torchmetrics.classification.MulticlassCalibrationError(num_classes, n_bins=15, norm='l1', ignore_index=None, validate_args=True, **kwargs)[source]
Top-label Calibration Error for multiclass tasks.
The expected calibration error can be used to quantify how well a given model is calibrated, i.e. how well the predicted output probabilities of the model match the actual probabilities of the ground truth distribution. Three different norms are implemented, each corresponding to a variation of the calibration error metric.
\[\text{ECE} = \sum_i^N b_i |p_i - c_i|, \quad \text{L1 norm (Expected Calibration Error)}\]
\[\text{MCE} = \max_{i} |p_i - c_i|, \quad \text{Infinity norm (Maximum Calibration Error)}\]
\[\text{RMSCE} = \sqrt{\sum_i^N b_i (p_i - c_i)^2}, \quad \text{L2 norm (Root Mean Square Calibration Error)}\]
where \(p_i\) is the top-1 prediction accuracy in bin \(i\), \(c_i\) is the average confidence of predictions in bin \(i\), and \(b_i\) is the fraction of data points in bin \(i\). Bins are constructed uniformly over the [0,1] range.
As input to forward and update the metric accepts the following input:
- preds (Tensor): A float tensor of shape (N, C, ...) containing probabilities or logits for each observation. If preds has values outside the [0,1] range, the input is treated as logits and softmax is applied automatically per sample (see the logits sketch after the example below).
- target (Tensor): An int tensor of shape (N, ...) containing ground truth labels, which should therefore only contain values in the [0, n_classes-1] range (except if ignore_index is specified).
Tip
Additional dimensions ... will be flattened into the batch dimension.
As output to forward and compute the metric returns the following output:
- mcce (Tensor): A scalar tensor containing the calibration error
- Parameters:
- num_classes (int) – Integer specifying the number of classes.
- n_bins (int) – Number of bins to use when computing the metric.
- norm (Literal['l1', 'l2', 'max']) – Norm used to compare empirical and expected probability bins.
- ignore_index (Optional[int]) – Specifies a target value that is ignored and does not contribute to the metric calculation.
- validate_args (bool) – Whether input arguments and tensors should be validated for correctness. Set to False for faster computations.
- kwargs (Any) – Additional keyword arguments, see Advanced metric settings for more info.
Example
>>> from torch import tensor
>>> from torchmetrics.classification import MulticlassCalibrationError
>>> preds = tensor([[0.25, 0.20, 0.55],
...                 [0.55, 0.05, 0.40],
...                 [0.10, 0.30, 0.60],
...                 [0.90, 0.05, 0.05]])
>>> target = tensor([0, 1, 2, 0])
>>> metric = MulticlassCalibrationError(num_classes=3, n_bins=3, norm='l1')
>>> metric(preds, target)
tensor(0.2000)
>>> mcce = MulticlassCalibrationError(num_classes=3, n_bins=3, norm='l2')
>>> mcce(preds, target)
tensor(0.2082)
>>> mcce = MulticlassCalibrationError(num_classes=3, n_bins=3, norm='max')
>>> mcce(preds, target)
tensor(0.2333)
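The example above passes per-class probabilities. The sketch below (logit values made up, illustrative only) relies on the documented behaviour that values outside [0,1] are treated as logits and softmaxed per sample:
>>> from torch import tensor
>>> from torchmetrics.classification import MulticlassCalibrationError
>>> logits = tensor([[ 1.2, -0.5,  0.3],
...                  [-0.7,  2.1,  0.0],
...                  [ 0.4,  0.4,  1.5],
...                  [ 2.2, -1.0, -0.3]])   # hypothetical unnormalized class scores
>>> target = tensor([0, 1, 2, 0])
>>> metric = MulticlassCalibrationError(num_classes=3, n_bins=3, norm='l1')
>>> score = metric(logits, target)          # softmax is applied internally per sample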
- plot(val=None, ax=None)[source]
Plot a single or multiple values from the metric.
- Parameters:
- val (Union[Tensor, Sequence[Tensor], None]) – Either a single result from calling metric.forward or metric.compute, or a list of these results. If no value is provided, metric.compute will be called automatically and that result plotted.
- ax (Optional[Axes]) – A matplotlib axis object. If provided, the plot is added to that axis.
- Returns:
Figure object and Axes object
- Raises:
ModuleNotFoundError – If matplotlib is not installed
>>> from torch import randn, randint
>>> # Example plotting a single value
>>> from torchmetrics.classification import MulticlassCalibrationError
>>> metric = MulticlassCalibrationError(num_classes=3, n_bins=3, norm='l1')
>>> metric.update(randn(20, 3).softmax(dim=-1), randint(3, (20,)))
>>> fig_, ax_ = metric.plot()
>>> from torch import randn, randint
>>> # Example plotting multiple values
>>> from torchmetrics.classification import MulticlassCalibrationError
>>> metric = MulticlassCalibrationError(num_classes=3, n_bins=3, norm='l1')
>>> values = []
>>> for _ in range(20):
...     values.append(metric(randn(20, 3).softmax(dim=-1), randint(3, (20,))))
>>> fig_, ax_ = metric.plot(values)
Functional Interface
- torchmetrics.functional.calibration_error(preds, target, task, n_bins=15, norm='l1', num_classes=None, ignore_index=None, validate_args=True)[source]
The expected calibration error can be used to quantify how well a given model is calibrated, i.e. how well the predicted output probabilities of the model match the actual probabilities of the ground truth distribution. Three different norms are implemented, each corresponding to a variation of the calibration error metric.
\[\text{ECE} = \sum_i^N b_i |p_i - c_i|, \quad \text{L1 norm (Expected Calibration Error)}\]
\[\text{MCE} = \max_{i} |p_i - c_i|, \quad \text{Infinity norm (Maximum Calibration Error)}\]
\[\text{RMSCE} = \sqrt{\sum_i^N b_i (p_i - c_i)^2}, \quad \text{L2 norm (Root Mean Square Calibration Error)}\]
where \(p_i\) is the top-1 prediction accuracy in bin \(i\), \(c_i\) is the average confidence of predictions in bin \(i\), and \(b_i\) is the fraction of data points in bin \(i\). Bins are constructed uniformly over the [0,1] range.
- Return type: Tensor
This function is a simple wrapper that selects the task-specific version of this metric, which is done by setting the task argument to either 'binary' or 'multiclass'. See the documentation of binary_calibration_error() and multiclass_calibration_error() for the specific influence of each argument and for examples.
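A short dispatch sketch (illustrative only; the values are reused from the binary example below):
>>> from torch import tensor
>>> from torchmetrics.functional import calibration_error
>>> preds = tensor([0.25, 0.25, 0.55, 0.75, 0.75])
>>> target = tensor([0, 0, 1, 1, 1])
>>> err = calibration_error(preds, target, task='binary', n_bins=2, norm='l1')
>>> # err should equal the binary_calibration_error result below, tensor(0.2900)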
binary_calibration_error
- torchmetrics.functional.classification.binary_calibration_error(preds, target, n_bins=15, norm='l1', ignore_index=None, validate_args=True)[source]
Top-label Calibration Error for binary tasks.
The expected calibration error can be used to quantify how well a given model is calibrated, i.e. how well the predicted output probabilities of the model match the actual probabilities of the ground truth distribution. Three different norms are implemented, each corresponding to a variation of the calibration error metric.
\[\text{ECE} = \sum_i^N b_i |p_i - c_i|, \quad \text{L1 norm (Expected Calibration Error)}\]
\[\text{MCE} = \max_{i} |p_i - c_i|, \quad \text{Infinity norm (Maximum Calibration Error)}\]
\[\text{RMSCE} = \sqrt{\sum_i^N b_i (p_i - c_i)^2}, \quad \text{L2 norm (Root Mean Square Calibration Error)}\]
where \(p_i\) is the top-1 prediction accuracy in bin \(i\), \(c_i\) is the average confidence of predictions in bin \(i\), and \(b_i\) is the fraction of data points in bin \(i\). Bins are constructed uniformly over the [0,1] range.
Accepts the following input tensors:
- preds (float tensor): (N, ...). Preds should be a tensor containing probabilities or logits for each observation. If preds has values outside the [0,1] range, the input is treated as logits and sigmoid is applied automatically per element.
- target (int tensor): (N, ...). Target should be a tensor containing ground truth labels, which should therefore only contain {0,1} values (except if ignore_index is specified). The value 1 always encodes the positive class.
Additional dimensions ... will be flattened into the batch dimension.
- Parameters:
- n_bins (int) – Number of bins to use when computing the metric.
- norm (Literal['l1', 'l2', 'max']) – Norm used to compare empirical and expected probability bins.
- ignore_index (Optional[int]) – Specifies a target value that is ignored and does not contribute to the metric calculation.
- validate_args (bool) – Whether input arguments and tensors should be validated for correctness. Set to False for faster computations.
- Return type: Tensor
Example
>>> import torch
>>> from torchmetrics.functional.classification import binary_calibration_error
>>> preds = torch.tensor([0.25, 0.25, 0.55, 0.75, 0.75])
>>> target = torch.tensor([0, 0, 1, 1, 1])
>>> binary_calibration_error(preds, target, n_bins=2, norm='l1')
tensor(0.2900)
>>> binary_calibration_error(preds, target, n_bins=2, norm='l2')
tensor(0.2918)
>>> binary_calibration_error(preds, target, n_bins=2, norm='max')
tensor(0.3167)
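An illustrative sketch of the ignore_index argument (the padded value -1 and its position are made up): target entries equal to ignore_index are excluded from the calculation.
>>> import torch
>>> from torchmetrics.functional.classification import binary_calibration_error
>>> preds = torch.tensor([0.25, 0.25, 0.55, 0.75, 0.75, 0.50])
>>> target = torch.tensor([0, 0, 1, 1, 1, -1])   # -1 marks an unlabeled position
>>> err = binary_calibration_error(preds, target, n_bins=2, norm='l1', ignore_index=-1)
>>> # the last element is ignored, so err should match the 5-element example above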
multiclass_calibration_error
- torchmetrics.functional.classification.multiclass_calibration_error(preds, target, num_classes, n_bins=15, norm='l1', ignore_index=None, validate_args=True)[source]
Top-label Calibration Error for multiclass tasks.
The expected calibration error can be used to quantify how well a given model is calibrated, i.e. how well the predicted output probabilities of the model match the actual probabilities of the ground truth distribution. Three different norms are implemented, each corresponding to a variation of the calibration error metric.
\[\text{ECE} = \sum_i^N b_i |p_i - c_i|, \quad \text{L1 norm (Expected Calibration Error)}\]
\[\text{MCE} = \max_{i} |p_i - c_i|, \quad \text{Infinity norm (Maximum Calibration Error)}\]
\[\text{RMSCE} = \sqrt{\sum_i^N b_i (p_i - c_i)^2}, \quad \text{L2 norm (Root Mean Square Calibration Error)}\]
where \(p_i\) is the top-1 prediction accuracy in bin \(i\), \(c_i\) is the average confidence of predictions in bin \(i\), and \(b_i\) is the fraction of data points in bin \(i\). Bins are constructed uniformly over the [0,1] range.
Accepts the following input tensors:
- preds (float tensor): (N, C, ...). Preds should be a tensor containing probabilities or logits for each observation. If preds has values outside the [0,1] range, the input is treated as logits and softmax is applied automatically per sample.
- target (int tensor): (N, ...). Target should be a tensor containing ground truth labels, which should therefore only contain values in the [0, n_classes-1] range (except if ignore_index is specified).
Additional dimensions ... will be flattened into the batch dimension.
- Parameters:
- num_classes (int) – Integer specifying the number of classes.
- n_bins (int) – Number of bins to use when computing the metric.
- norm (Literal['l1', 'l2', 'max']) – Norm used to compare empirical and expected probability bins.
- ignore_index (Optional[int]) – Specifies a target value that is ignored and does not contribute to the metric calculation.
- validate_args (bool) – Whether input arguments and tensors should be validated for correctness. Set to False for faster computations.
- Return type: Tensor
Example
>>> import torch
>>> from torchmetrics.functional.classification import multiclass_calibration_error
>>> preds = torch.tensor([[0.25, 0.20, 0.55],
...                       [0.55, 0.05, 0.40],
...                       [0.10, 0.30, 0.60],
...                       [0.90, 0.05, 0.05]])
>>> target = torch.tensor([0, 1, 2, 0])
>>> multiclass_calibration_error(preds, target, num_classes=3, n_bins=3, norm='l1')
tensor(0.2000)
>>> multiclass_calibration_error(preds, target, num_classes=3, n_bins=3, norm='l2')
tensor(0.2082)
>>> multiclass_calibration_error(preds, target, num_classes=3, n_bins=3, norm='max')
tensor(0.2333)
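An illustrative sketch of the extra-dimension handling described above (random inputs, so no fixed expected output is shown): trailing dimensions of preds (N, C, ...) and target (N, ...) are flattened into the batch dimension.
>>> import torch
>>> from torchmetrics.functional.classification import multiclass_calibration_error
>>> preds = torch.randn(4, 3, 5).softmax(dim=1)   # hypothetical (N, C, extra) probabilities
>>> target = torch.randint(3, (4, 5))             # matching (N, extra) labels
>>> err = multiclass_calibration_error(preds, target, num_classes=3, n_bins=3, norm='l1')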