## Module Interface¶

$AMI(U,V) = \frac{MI(U,V) - E(MI(U,V))}{M_p(U,V) - E(MI(U,V))}$

Where $U$ is a tensor of target values, $V$ is a tensor of predictions, $M_p(U,V)$ is the generalized mean of order $p$ of the entropies $H(U)$ and $H(V)$ (selected by the average_method parameter), $E(MI(U,V))$ is the expected mutual information under random labelings with the same cluster sizes, and $MI(U,V)$ is the mutual information score between clusters $U$ and $V$. The metric is symmetric: swapping $U$ and $V$ yields the same mutual information score.
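The four `average_method` options correspond to special cases of the generalized (power) mean applied to the two entropies. A minimal pure-Python sketch of that correspondence (illustrative only, not the library's implementation):

```python
import math

def generalized_mean(x, y, method):
    """Generalized mean of two non-negative values, matching the
    average_method options: 'min' (p -> -inf), 'geometric' (p = 0),
    'arithmetic' (p = 1), and 'max' (p -> +inf)."""
    if method == "min":
        return min(x, y)
    if method == "geometric":
        return math.sqrt(x * y)
    if method == "arithmetic":
        return (x + y) / 2
    if method == "max":
        return max(x, y)
    raise ValueError(f"unknown method: {method!r}")

# For non-negative inputs the options are ordered:
# min <= geometric <= arithmetic <= max
h_u, h_v = 1.0, 1.5  # example entropy values H(U), H(V)
for m in ("min", "geometric", "arithmetic", "max"):
    print(m, generalized_mean(h_u, h_v, m))
```

A larger mean in the denominator shrinks the normalized score, so 'max' is the most conservative choice and 'min' the most generous.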

This clustering metric is an extrinsic measure: it requires ground-truth cluster labels, which may not be available in practice, since clustering is generally used in unsupervised learning.
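Both the symmetry of the mutual information term and its invariance to how clusters are named can be checked directly. The sketch below (illustrative, not the library's code) computes $MI$ in nats from the contingency counts of two labelings:

```python
from collections import Counter
from math import log

def mutual_info(u, v):
    """Mutual information (in nats) between two labelings of the
    same n points, computed from the joint and marginal counts."""
    n = len(u)
    cu, cv = Counter(u), Counter(v)
    joint = Counter(zip(u, v))
    # MI = sum over label pairs of p(a,b) * log(p(a,b) / (p(a) p(b)))
    return sum((c / n) * log(c * n / (cu[a] * cv[b]))
               for (a, b), c in joint.items())

preds  = [2, 1, 0, 1, 0]
target = [0, 2, 1, 1, 0]
# Symmetry: swapping the two labelings gives the same score.
assert abs(mutual_info(preds, target) - mutual_info(target, preds)) < 1e-12
# Label names do not matter: [0,0,1,1] vs [1,1,0,0] is a perfect match,
# so MI equals the full entropy log(2) of an even two-way split.
assert abs(mutual_info([0, 0, 1, 1], [1, 1, 0, 0]) - log(2)) < 1e-9
```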

As input to `forward` and `update` the metric accepts the following input:

• preds (Tensor): single integer tensor with shape (N,) with predicted cluster labels

• target (Tensor): single integer tensor with shape (N,) with ground truth cluster labels

As output of `forward` and `compute` the metric returns the following output:

• ami_score (Tensor): A tensor with the Adjusted Mutual Information Score

Parameters:
• average_method (Literal['min', 'geometric', 'arithmetic', 'max']) – Method used to calculate generalized mean for normalization. Choose between 'min', 'geometric', 'arithmetic', 'max'.

• kwargs (Any) – Additional keyword arguments, see Advanced metric settings for more info.

Example::
>>> import torch
>>> from torchmetrics.clustering import AdjustedMutualInfoScore
>>> preds = torch.tensor([2, 1, 0, 1, 0])
>>> target = torch.tensor([0, 2, 1, 1, 0])
>>> ami_score = AdjustedMutualInfoScore()
>>> ami_score(preds, target)
tensor(-0.2500)

plot(val=None, ax=None)[source]

Plot a single or multiple values from the metric.

Parameters:
• val (Union[Tensor, Sequence[Tensor], None]) – Either a single result from calling metric.forward or metric.compute, or a list of these results. If no value is provided, will automatically call metric.compute and plot that result.

• ax (Optional[Axes]) – A matplotlib axis object. If provided, will add the plot to that axis.

Return type:

Tuple of Figure and Axes

Returns:

Figure and Axes object

Raises:

ModuleNotFoundError – If matplotlib is not installed

>>> # Example plotting a single value
>>> import torch
>>> from torchmetrics.clustering import AdjustedMutualInfoScore
>>> metric = AdjustedMutualInfoScore()
>>> metric.update(torch.randint(0, 4, (10,)), torch.randint(0, 4, (10,)))
>>> fig_, ax_ = metric.plot(metric.compute())

>>> # Example plotting multiple values
>>> import torch
>>> from torchmetrics.clustering import AdjustedMutualInfoScore
>>> metric = AdjustedMutualInfoScore()
>>> values = []
>>> for _ in range(10):
...     values.append(metric(torch.randint(0, 4, (10,)), torch.randint(0, 4, (10,))))
>>> fig_, ax_ = metric.plot(values)

higher_is_better: Optional[bool] = None

## Functional Interface¶

Compute adjusted mutual information between two clusterings.

Parameters:
• preds (Tensor) – single integer tensor with shape (N,) with predicted cluster labels

• target (Tensor) – single integer tensor with shape (N,) with ground truth cluster labels

• average_method (Literal['min', 'geometric', 'arithmetic', 'max']) – Method used to calculate generalized mean for normalization

Return type:

Tensor

Returns:

Scalar tensor with the adjusted mutual information score. The score is at most 1.0; values near zero (possibly slightly negative) indicate chance-level agreement between the clusterings.

Example

>>> import torch
>>> from torchmetrics.functional.clustering import adjusted_mutual_info_score
>>> preds = torch.tensor([2, 1, 0, 1, 0])
>>> target = torch.tensor([0, 2, 1, 1, 0])
>>> adjusted_mutual_info_score(preds, target)
tensor(-0.2500)
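For intuition, the whole computation can be sketched in pure Python. This is an illustrative reimplementation (not the library's code), using natural-log entropies, the expected mutual information under a hypergeometric model of random labelings with fixed cluster sizes (Vinh et al., 2010), and the default 'arithmetic' average_method:

```python
from collections import Counter
from math import log, factorial as f

def entropy(labels):
    """Shannon entropy (in nats) of a labeling."""
    n = len(labels)
    return -sum((c / n) * log(c / n) for c in Counter(labels).values())

def mutual_info(u, v):
    """Mutual information (in nats) from contingency counts."""
    n = len(u)
    cu, cv, joint = Counter(u), Counter(v), Counter(zip(u, v))
    return sum((c / n) * log(c * n / (cu[a] * cv[b]))
               for (a, b), c in joint.items())

def expected_mi(u, v):
    """Expected MI over random labelings with the same cluster sizes."""
    n = len(u)
    emi = 0.0
    for a_i in Counter(u).values():
        for b_j in Counter(v).values():
            # Sum over all feasible joint counts n_ij for this cell.
            for nij in range(max(a_i + b_j - n, 1), min(a_i, b_j) + 1):
                w = (f(a_i) * f(b_j) * f(n - a_i) * f(n - b_j)) / (
                    f(n) * f(nij) * f(a_i - nij) * f(b_j - nij)
                    * f(n - a_i - b_j + nij))
                emi += (nij / n) * log(n * nij / (a_i * b_j)) * w
    return emi

def ami(u, v):
    """AMI with the 'arithmetic' generalized mean in the denominator."""
    mi, emi = mutual_info(u, v), expected_mi(u, v)
    return (mi - emi) / ((entropy(u) + entropy(v)) / 2 - emi)
```

Under these definitions, identical clusterings score exactly 1 regardless of label names, and the labelings from the example above come out near -0.25, matching tensor(-0.2500).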