Short-Time Objective Intelligibility (STOI)

Module Interface

class torchmetrics.audio.stoi.ShortTimeObjectiveIntelligibility(fs, extended=False, **kwargs)[source]

Calculate STOI (Short-Time Objective Intelligibility) metric for evaluating speech signals.

STOI is an intelligibility measure that is highly correlated with the intelligibility of degraded speech signals, e.g., due to additive noise, single-/multi-channel noise reduction, binary masking, and vocoded speech as in cochlear-implant (CI) simulations. The STOI measure is intrusive, i.e., a function of both the clean and the degraded speech signal. STOI can be a good alternative to the speech intelligibility index (SII) or the speech transmission index (STI) when you are interested in the effect of nonlinear processing of noisy speech, e.g., noise reduction or binary masking algorithms, on speech intelligibility. Description taken from Cees Taal’s website; for further details see STOI ref1 and STOI ref2.

This metric is a wrapper for the pystoi package. As the backend implementation only supports calculations on the CPU, all input will automatically be moved to the CPU to perform the metric calculation before being moved back to the original device.
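
A minimal sketch of this device handling, assuming a CUDA device may be available (the guard makes it a no-op otherwise):

>>> import torch
>>> from torchmetrics.audio import ShortTimeObjectiveIntelligibility
>>> if torch.cuda.is_available():
...     stoi = ShortTimeObjectiveIntelligibility(fs=8000).to("cuda")
...     preds = torch.randn(8000, device="cuda")
...     target = torch.randn(8000, device="cuda")
...     score = stoi(preds, target)  # pystoi runs on CPU; result returned on the metric's device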

As input to forward and update, the metric accepts the following input:

  • preds (Tensor): float tensor with shape (...,time)

  • target (Tensor): float tensor with shape (...,time)

As output of forward and compute, the metric returns the following output:

  • stoi (Tensor): float scalar tensor

Note

Using this metric requires you to have pystoi installed. Either install as pip install torchmetrics[audio] or pip install pystoi.

Parameters:
  • fs (int) – sampling frequency (Hz)

  • extended (bool) – whether to use the extended STOI described in STOI ref3

  • kwargs (Any) – Additional keyword arguments, see Advanced metric settings for more info.

Raises:

ModuleNotFoundError – If pystoi package is not installed

Example

>>> from torch import randn
>>> from torchmetrics.audio import ShortTimeObjectiveIntelligibility
>>> preds = randn(8000)
>>> target = randn(8000)
>>> stoi = ShortTimeObjectiveIntelligibility(8000, False)
>>> stoi(preds, target)
tensor(-0.084...)
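
Since preds and target may carry leading batch dimensions ((..., time)), batched input works as well, and forward reduces it to a single scalar. A minimal sketch, assuming a hypothetical batch of four 8 kHz signals:

>>> from torch import randn
>>> from torchmetrics.audio import ShortTimeObjectiveIntelligibility
>>> stoi = ShortTimeObjectiveIntelligibility(8000, False)
>>> preds = randn(4, 8000)   # batch of four one-second signals at 8 kHz
>>> target = randn(4, 8000)
>>> score = stoi(preds, target)  # scalar tensor: average STOI over the batch
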
compute()[source]

Compute metric.

Return type:

Tensor
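
compute() returns the STOI value aggregated over everything passed to update() so far. A minimal sketch of the usual accumulate-then-compute loop (batch sizes here are illustrative):

>>> from torch import randn
>>> from torchmetrics.audio import ShortTimeObjectiveIntelligibility
>>> stoi = ShortTimeObjectiveIntelligibility(8000)
>>> for _ in range(3):  # e.g. batches from a dataloader
...     stoi.update(randn(2, 8000), randn(2, 8000))
>>> score = stoi.compute()  # aggregated STOI over all updates
>>> stoi.reset()            # clear accumulated state before the next evaluation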

plot(val=None, ax=None)[source]

Plot a single or multiple values from the metric.

Parameters:
  • val (Union[Tensor, Sequence[Tensor], None]) – Either a single result from calling metric.forward or metric.compute or a list of these results. If no value is provided, will automatically call metric.compute and plot that result.

  • ax (Optional[Axes]) – A matplotlib axis object. If provided, will add plot to that axis.

Return type:

Tuple[Figure, Union[Axes, ndarray]]

Returns:

Figure and Axes object

Raises:

ModuleNotFoundError – If matplotlib is not installed

>>> # Example plotting a single value
>>> from torch import randn
>>> from torchmetrics.audio import ShortTimeObjectiveIntelligibility
>>> preds = randn(8000)
>>> target = randn(8000)
>>> metric = ShortTimeObjectiveIntelligibility(8000, False)
>>> metric.update(preds, target)
>>> fig_, ax_ = metric.plot()
>>> # Example plotting multiple values
>>> from torch import randn
>>> from torchmetrics.audio import ShortTimeObjectiveIntelligibility
>>> metric = ShortTimeObjectiveIntelligibility(8000, False)
>>> preds = randn(8000)
>>> target = randn(8000)
>>> values = []
>>> for _ in range(10):
...     values.append(metric(preds, target))
>>> fig_, ax_ = metric.plot(values)
update(preds, target)[source]

Update state with predictions and targets.

Return type:

None

Functional Interface

torchmetrics.functional.audio.stoi.short_time_objective_intelligibility(preds, target, fs, extended=False, keep_same_device=False)[source]

Calculate STOI (Short-Time Objective Intelligibility) metric for evaluating speech signals.

STOI is an intelligibility measure that is highly correlated with the intelligibility of degraded speech signals, e.g., due to additive noise, single-/multi-channel noise reduction, binary masking, and vocoded speech as in cochlear-implant (CI) simulations. The STOI measure is intrusive, i.e., a function of both the clean and the degraded speech signal. STOI can be a good alternative to the speech intelligibility index (SII) or the speech transmission index (STI) when you are interested in the effect of nonlinear processing of noisy speech, e.g., noise reduction or binary masking algorithms, on speech intelligibility. Description taken from Cees Taal’s website; for further details see STOI ref1 and STOI ref2.

This metric is a wrapper for the pystoi package. As the backend implementation only supports calculations on the CPU, all input will automatically be moved to the CPU to perform the metric calculation before being moved back to the original device.

Note

Using this metric requires you to have pystoi installed. Either install as pip install torchmetrics[audio] or pip install pystoi.

Parameters:
  • preds (Tensor) – float tensor with shape (...,time)

  • target (Tensor) – float tensor with shape (...,time)

  • fs (int) – sampling frequency (Hz)

  • extended (bool) – whether to use the extended STOI described in STOI ref3.

  • keep_same_device (bool) – whether to move the stoi value to the device of preds

Return type:

Tensor

Returns:

STOI value of shape [...]

Raises:

ModuleNotFoundError – If pystoi package is not installed

Example

>>> from torch import randn
>>> from torchmetrics.functional.audio.stoi import short_time_objective_intelligibility
>>> preds = randn(8000)
>>> target = randn(8000)
>>> short_time_objective_intelligibility(preds, target, 8000).float()
tensor(-0.084...)
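
Because the return value keeps the leading dimensions of the input, a batched call yields one STOI value per signal; keep_same_device=True additionally keeps the result on the device of preds. A minimal sketch (the batch shape and the extended flag here are illustrative):

>>> from torch import randn
>>> from torchmetrics.functional.audio.stoi import short_time_objective_intelligibility
>>> preds = randn(4, 8000)   # batch of four signals at 8 kHz
>>> target = randn(4, 8000)
>>> scores = short_time_objective_intelligibility(preds, target, fs=8000, extended=True)
>>> scores.shape  # one extended-STOI value per batch element
torch.Size([4])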