Inception Score

Module Interface

class torchmetrics.image.inception.InceptionScore(feature='logits_unbiased', splits=10, normalize=False, **kwargs)[source]

Calculate the Inception Score (IS), which is used to assess how realistic generated images are.

\[IS = \exp\left(\mathbb{E}_x \, KL(p(y \mid x) \,\|\, p(y))\right)\]

where \(KL(p(y \mid x) \,\|\, p(y))\) is the KL divergence between the conditional distribution \(p(y|x)\) and the marginal distribution \(p(y)\). Both the conditional and marginal distributions are estimated from features extracted from the images. The score is calculated on random splits of the images, so both a mean and a standard deviation of the score are returned. The metric was originally proposed in inception ref1.
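The formula above can be sketched directly from a matrix of conditional class probabilities (a simplified illustration of the math, without the Inception network or the splitting into subsets that the metric itself performs):

```python
import torch

def inception_score(probs: torch.Tensor) -> torch.Tensor:
    """Compute IS from an (N, C) matrix of conditional probabilities p(y|x)."""
    marginal = probs.mean(dim=0, keepdim=True)                # p(y), shape (1, C)
    kl = (probs * (probs.log() - marginal.log())).sum(dim=1)  # KL(p(y|x) || p(y)) per image
    return kl.mean().exp()                                    # exp of the expectation over x

# 100 "images" with softmax class probabilities over 10 classes
probs = torch.softmax(torch.randn(100, 10), dim=1)
score = inception_score(probs)
```

Since KL divergence is non-negative, the score is bounded below by 1; it reaches 1 exactly when every conditional distribution equals the marginal.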

Using the default feature extraction (InceptionV3 with the original weights from inception ref2), the input is expected to be mini-batches of 3-channel RGB images of shape (3xHxW). If the argument normalize is True, images are expected to have dtype float and values in the [0, 1] range; if normalize is False, images are expected to have dtype uint8 and values in the [0, 255] range. All images are resized to 299 x 299, the size of the original training data.
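The two input conventions can be sketched as follows (tensor shapes here are illustrative; the metric itself is not instantiated, since torch-fidelity may not be installed):

```python
import torch

# With normalize=False (the default), inputs should be uint8 in [0, 255]:
imgs_uint8 = torch.randint(0, 255, (16, 3, 64, 64), dtype=torch.uint8)

# With normalize=True, inputs should instead be float in [0, 1]:
imgs_float = imgs_uint8.float() / 255.0

# Either way, the metric resizes images to 299 x 299 internally,
# so H and W need not be 299 on input.
```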


Note: Using this metric with the default feature extractor requires that torch-fidelity is installed. Install it with either pip install torchmetrics[image] or pip install torch-fidelity.

As input to forward and update the metric accepts the following input

  • imgs (Tensor): tensor with images fed to the feature extractor

As output of forward and compute the metric returns the following output

  • inception_mean (Tensor): float scalar tensor with mean inception score over subsets

  • inception_std (Tensor): float scalar tensor with standard deviation of inception score over subsets

Parameters:

  • feature (Union[str, int, Module]) –

    Either a str, an integer or an nn.Module:

    • a str or integer indicates the InceptionV3 feature layer to choose. Can be one of the following: ‘logits_unbiased’, 64, 192, 768, 2048

    • an nn.Module for using a custom feature extractor. Expects that its forward method returns an (N, d) matrix, where N is the batch size and d is the feature size.

  • splits (int) – integer determining how many splits the images should be divided into when computing the score

  • normalize (bool) – if True, expects float images in the [0, 1] range; if False (default), expects uint8 images in the [0, 255] range

  • kwargs (Any) – Additional keyword arguments, see Advanced metric settings for more info.

Raises:

  • ValueError – If feature is set to a str or int and torch-fidelity is not installed

  • ValueError – If feature is set to a str or int and not one of ('logits_unbiased', 64, 192, 768, 2048)

  • TypeError – If feature is not a str, int or torch.nn.Module
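The (N, d) contract that a custom feature extractor must satisfy can be sketched as follows (the PooledFeatures class is hypothetical, chosen only to illustrate the required forward signature; a real extractor would produce more informative features):

```python
import torch
from torch import nn

class PooledFeatures(nn.Module):
    """Hypothetical minimal feature extractor: global-average-pools each
    channel, returning an (N, d) matrix as the metric expects."""

    def forward(self, imgs: torch.Tensor) -> torch.Tensor:
        x = imgs.float()           # accept uint8 or float input
        return x.mean(dim=(2, 3))  # (N, C) feature matrix

extractor = PooledFeatures()
feats = extractor(torch.randint(0, 255, (8, 3, 32, 32), dtype=torch.uint8))
```

Such a module can be passed as feature=PooledFeatures(); per the Raises list above, the torch-fidelity requirement applies only when feature is a str or int.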


Example:
>>> import torch
>>> _ = torch.manual_seed(123)
>>> from torchmetrics.image.inception import InceptionScore
>>> inception = InceptionScore()
>>> # generate some images
>>> imgs = torch.randint(0, 255, (100, 3, 299, 299), dtype=torch.uint8)
>>> inception.update(imgs)
>>> inception.compute()
(tensor(1.0544), tensor(0.0117))

plot(val=None, ax=None)[source]

Plot a single or multiple values from the metric.

Parameters:

  • val (Union[Tensor, Sequence[Tensor], None]) – Either a single result from calling metric.forward or metric.compute, or a list of these results. If no value is provided, will automatically call metric.compute and plot that result.

  • ax (Optional[Axes]) – A matplotlib axis object. If provided, will add the plot to that axis.

Return type:

Tuple[Figure, Union[Axes, ndarray]]


Returns:

Figure and Axes object


Raises:

ModuleNotFoundError – If matplotlib is not installed

>>> # Example plotting a single value
>>> import torch
>>> from torchmetrics.image.inception import InceptionScore
>>> metric = InceptionScore()
>>> metric.update(torch.randint(0, 255, (50, 3, 299, 299), dtype=torch.uint8))
>>> fig_, ax_ = metric.plot()  # the returned plot only shows the mean value by default
>>> # Example plotting multiple values
>>> import torch
>>> from torchmetrics.image.inception import InceptionScore
>>> metric = InceptionScore()
>>> values = [ ]
>>> for _ in range(3):
...     # we index by 0 such that only the mean value is plotted
...     values.append(metric(torch.randint(0, 255, (50, 3, 299, 299), dtype=torch.uint8))[0])
>>> fig_, ax_ = metric.plot(values)