Panoptic Quality¶
Module Interface¶
- class torchmetrics.detection.PanopticQuality(things, stuffs, allow_unknown_preds_category=False, return_sq_and_rq=False, return_per_class=False, **kwargs)[source]¶
Compute the Panoptic Quality for panoptic segmentations.
\[PQ = \frac{IOU}{TP + 0.5 FP + 0.5 FN}\]

where IOU, TP, FP and FN are respectively the sum of the intersection over union for true positives, the number of true positives, false positives and false negatives. This metric is inspired by the PQ implementation of panopticapi, a standard implementation for the PQ metric for panoptic segmentation.
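The per-class score also factorises into segmentation quality (SQ, the mean IoU of matched segments) and recognition quality (RQ, an F1-style detection term), with PQ = SQ * RQ for each class. As a minimal sketch of how the aggregate formula evaluates from precomputed counts, consider the helper below; it is illustrative only (made-up numbers) and not part of the TorchMetrics API:

>>> def pq_from_counts(iou_sum, tp, fp, fn):
...     """Panoptic quality for one class from its matched-IoU sum and TP/FP/FN counts."""
...     denom = tp + 0.5 * fp + 0.5 * fn
...     return iou_sum / denom if denom > 0 else 0.0
>>> # Equivalently: SQ = iou_sum / TP, RQ = TP / (TP + 0.5 * FP + 0.5 * FN), PQ = SQ * RQ
>>> round(pq_from_counts(iou_sum=1.5, tp=2, fp=1, fn=0), 4)
0.6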
As input to forward and update the metric accepts the following input:

- preds (Tensor): An int tensor of shape (B, *spatial_dims, 2) containing the pair (category_id, instance_id) for each point, where there needs to be at least one spatial dimension.
- target (Tensor): An int tensor of shape (B, *spatial_dims, 2) containing the pair (category_id, instance_id) for each point, where there needs to be at least one spatial dimension.
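If a pipeline produces separate per-pixel category and instance maps, they can be stacked along a trailing dimension to obtain this layout. A minimal sketch with made-up maps (the variable names are illustrative, not part of the API):

>>> import torch
>>> category_map = torch.tensor([[[6, 6, 0], [7, 0, 0]]])  # category_id per pixel, shape (1, 2, 3)
>>> instance_map = torch.tensor([[[0, 0, 1], [0, 1, 2]]])  # instance_id per pixel, shape (1, 2, 3)
>>> preds = torch.stack([category_map, instance_map], dim=-1)  # (category_id, instance_id) pairs
>>> preds.shape
torch.Size([1, 2, 3, 2])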
As output to forward and compute the metric returns the following output:

- quality (Tensor): If return_sq_and_rq=False and return_per_class=False, a single scalar tensor is returned with the average panoptic quality over all classes. If return_sq_and_rq=True and return_per_class=False, a tensor of length 3 is returned with panoptic, segmentation and recognition quality (in that order). If return_sq_and_rq=False and return_per_class=True, a tensor of length equal to the number of classes is returned, with the panoptic quality for each class. The order of classes is things first and then stuffs, numerically sorted within each (e.g. with things=[4, 1], stuffs=[3, 2], the output classes are ordered as [1, 4, 2, 3]). Finally, if both arguments are True, a tensor of shape (3, C) is returned with individual panoptic, segmentation and recognition quality for each class.
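Because the per-class ordering depends on things and stuffs, a small helper can recover which entry of the per-class output belongs to a given category_id. A sketch based on the ordering rule above (the helper is illustrative, not part of the TorchMetrics API):

>>> things, stuffs = {4, 1}, {3, 2}
>>> class_order = sorted(things) + sorted(stuffs)  # things first, then stuffs, each numerically sorted
>>> class_order
[1, 4, 2, 3]
>>> index_of = {cat: i for i, cat in enumerate(class_order)}
>>> index_of[3]  # category_id 3 sits in the last per-class slot
3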
- Parameters:
  - things (Collection[int]) – Set of category_id for countable things.
  - stuffs (Collection[int]) – Set of category_id for uncountable stuffs.
  - allow_unknown_preds_category (bool) – Boolean flag to specify if unknown categories in the predictions are to be ignored in the metric computation or raise an exception when found (see the sketch after this list).
  - return_sq_and_rq (bool) – Boolean flag to specify if Segmentation Quality and Recognition Quality should also be returned.
  - return_per_class (bool) – Boolean flag to specify if the per-class values should be returned or the class average.
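A minimal sketch of allow_unknown_preds_category with made-up tensors, in which the prediction contains a category_id (9) that is in neither things nor stuffs; with the default False such a prediction raises an exception, while True ignores the unknown category:

>>> from torch import tensor
>>> from torchmetrics.detection import PanopticQuality
>>> preds = tensor([[[[9, 0], [6, 0]],
...                  [[0, 0], [7, 0]]]])
>>> target = tensor([[[[0, 0], [6, 0]],
...                   [[0, 0], [7, 0]]]])
>>> metric = PanopticQuality(things={0}, stuffs={6, 7}, allow_unknown_preds_category=True)
>>> score = metric(preds, target)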
- Raises:
  - ValueError – If things and stuffs have at least one common category_id.
  - TypeError – If things or stuffs contain non-integer category_id.
Example
>>> from torch import tensor
>>> from torchmetrics.detection import PanopticQuality
>>> preds = tensor([[[[6, 0], [0, 0], [6, 0], [6, 0]],
...                  [[0, 0], [0, 0], [6, 0], [0, 1]],
...                  [[0, 0], [0, 0], [6, 0], [0, 1]],
...                  [[0, 0], [7, 0], [6, 0], [1, 0]],
...                  [[0, 0], [7, 0], [7, 0], [7, 0]]]])
>>> target = tensor([[[[6, 0], [0, 1], [6, 0], [0, 1]],
...                   [[0, 1], [0, 1], [6, 0], [0, 1]],
...                   [[0, 1], [0, 1], [6, 0], [1, 0]],
...                   [[0, 1], [7, 0], [1, 0], [1, 0]],
...                   [[0, 1], [7, 0], [7, 0], [7, 0]]]])
>>> panoptic_quality = PanopticQuality(things = {0, 1}, stuffs = {6, 7})
>>> panoptic_quality(preds, target)
tensor(0.5463, dtype=torch.float64)
- You can also return the segmentation and recognition quality alongside the PQ
>>> from torch import tensor
>>> from torchmetrics.detection import PanopticQuality
>>> preds = tensor([[[[6, 0], [0, 0], [6, 0], [6, 0]],
...                  [[0, 0], [0, 0], [6, 0], [0, 1]],
...                  [[0, 0], [0, 0], [6, 0], [0, 1]],
...                  [[0, 0], [7, 0], [6, 0], [1, 0]],
...                  [[0, 0], [7, 0], [7, 0], [7, 0]]]])
>>> target = tensor([[[[6, 0], [0, 1], [6, 0], [0, 1]],
...                   [[0, 1], [0, 1], [6, 0], [0, 1]],
...                   [[0, 1], [0, 1], [6, 0], [1, 0]],
...                   [[0, 1], [7, 0], [1, 0], [1, 0]],
...                   [[0, 1], [7, 0], [7, 0], [7, 0]]]])
>>> panoptic_quality = PanopticQuality(things = {0, 1}, stuffs = {6, 7}, return_sq_and_rq=True)
>>> panoptic_quality(preds, target)
tensor([0.5463, 0.6111, 0.6667], dtype=torch.float64)
- You can also specify to return the per-class metrics
>>> from torch import tensor
>>> from torchmetrics.detection import PanopticQuality
>>> preds = tensor([[[[6, 0], [0, 0], [6, 0], [6, 0]],
...                  [[0, 0], [0, 0], [6, 0], [0, 1]],
...                  [[0, 0], [0, 0], [6, 0], [0, 1]],
...                  [[0, 0], [7, 0], [6, 0], [1, 0]],
...                  [[0, 0], [7, 0], [7, 0], [7, 0]]]])
>>> target = tensor([[[[6, 0], [0, 1], [6, 0], [0, 1]],
...                   [[0, 1], [0, 1], [6, 0], [0, 1]],
...                   [[0, 1], [0, 1], [6, 0], [1, 0]],
...                   [[0, 1], [7, 0], [1, 0], [1, 0]],
...                   [[0, 1], [7, 0], [7, 0], [7, 0]]]])
>>> panoptic_quality = PanopticQuality(things = {0, 1}, stuffs = {6, 7}, return_per_class=True)
>>> panoptic_quality(preds, target)
tensor([[0.5185, 0.0000, 0.6667, 1.0000]], dtype=torch.float64)
- plot(val=None, ax=None)[source]¶
Plot a single or multiple values from the metric.
- Parameters:
  - val (Union[Tensor, Sequence[Tensor], None]) – Either a single result from calling metric.forward or metric.compute, or a list of these results. If no value is provided, will automatically call metric.compute and plot that result.
  - ax (Optional[Axes]) – A matplotlib axis object. If provided, the plot will be added to that axis.
- Return type:
- Returns:
Figure object and Axes object
- Raises:
ModuleNotFoundError – If matplotlib is not installed
>>> from torch import tensor
>>> from torchmetrics.detection import PanopticQuality
>>> preds = tensor([[[[6, 0], [0, 0], [6, 0], [6, 0]],
...                  [[0, 0], [0, 0], [6, 0], [0, 1]],
...                  [[0, 0], [0, 0], [6, 0], [0, 1]],
...                  [[0, 0], [7, 0], [6, 0], [1, 0]],
...                  [[0, 0], [7, 0], [7, 0], [7, 0]]]])
>>> target = tensor([[[[6, 0], [0, 1], [6, 0], [0, 1]],
...                   [[0, 1], [0, 1], [6, 0], [0, 1]],
...                   [[0, 1], [0, 1], [6, 0], [1, 0]],
...                   [[0, 1], [7, 0], [1, 0], [1, 0]],
...                   [[0, 1], [7, 0], [7, 0], [7, 0]]]])
>>> metric = PanopticQuality(things = {0, 1}, stuffs = {6, 7})
>>> metric.update(preds, target)
>>> fig_, ax_ = metric.plot()
>>> # Example plotting multiple values
>>> from torch import tensor
>>> from torchmetrics.detection import PanopticQuality
>>> preds = tensor([[[[6, 0], [0, 0], [6, 0], [6, 0]],
...                  [[0, 0], [0, 0], [6, 0], [0, 1]],
...                  [[0, 0], [0, 0], [6, 0], [0, 1]],
...                  [[0, 0], [7, 0], [6, 0], [1, 0]],
...                  [[0, 0], [7, 0], [7, 0], [7, 0]]]])
>>> target = tensor([[[[6, 0], [0, 1], [6, 0], [0, 1]],
...                   [[0, 1], [0, 1], [6, 0], [0, 1]],
...                   [[0, 1], [0, 1], [6, 0], [1, 0]],
...                   [[0, 1], [7, 0], [1, 0], [1, 0]],
...                   [[0, 1], [7, 0], [7, 0], [7, 0]]]])
>>> metric = PanopticQuality(things = {0, 1}, stuffs = {6, 7})
>>> vals = []
>>> for _ in range(20):
...     vals.append(metric(preds, target))
>>> fig_, ax_ = metric.plot(vals)
Functional Interface¶
- torchmetrics.functional.detection.panoptic_quality(preds, target, things, stuffs, allow_unknown_preds_category=False, return_sq_and_rq=False, return_per_class=False)[source]¶
Compute Panoptic Quality for panoptic segmentations.
\[PQ = \frac{IOU}{TP + 0.5 FP + 0.5 FN}\]

where IOU, TP, FP and FN are respectively the sum of the intersection over union for true positives, the number of true positives, false positives and false negatives. This metric is inspired by the PQ implementation of panopticapi, a standard implementation for the PQ metric for panoptic segmentation.
- Parameters:
  - preds (Tensor) – torch tensor with panoptic detection of shape [height, width, 2] containing the pair (category_id, instance_id) for each pixel of the image. If the category_id refers to a stuff, the instance_id is ignored (see the sketch after this list).
  - target (Tensor) – torch tensor with ground truth of shape [height, width, 2] containing the pair (category_id, instance_id) for each pixel of the image. If the category_id refers to a stuff, the instance_id is ignored.
  - things (Collection[int]) – Set of category_id for countable things.
  - stuffs (Collection[int]) – Set of category_id for uncountable stuffs.
  - allow_unknown_preds_category (bool) – Boolean flag to specify if unknown categories in the predictions are to be ignored in the metric computation or raise an exception when found.
  - return_sq_and_rq (bool) – Boolean flag to specify if Segmentation Quality and Recognition Quality should also be returned.
  - return_per_class (bool) – Boolean flag to specify if the per-class values should be returned or the class average.
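Since the instance_id is ignored for stuff categories, two predictions that differ only in the instance ids assigned to stuff pixels are expected to score identically. A minimal sketch of that property with made-up tensors (not taken from the examples below):

>>> from torch import tensor
>>> from torchmetrics.functional.detection import panoptic_quality
>>> target = tensor([[[[6, 0], [0, 1]],
...                   [[6, 0], [0, 1]]]])
>>> preds_a = tensor([[[[6, 0], [0, 1]],
...                    [[6, 0], [0, 1]]]])
>>> preds_b = tensor([[[[6, 5], [0, 1]],
...                    [[6, 7], [0, 1]]]])  # same stuff pixels, different instance ids
>>> score_a = panoptic_quality(preds_a, target, things={0}, stuffs={6})
>>> score_b = panoptic_quality(preds_b, target, things={0}, stuffs={6})
>>> # score_a and score_b should match, since instance ids on stuff pixels are not used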
- Raises:
  - ValueError – If things and stuffs have at least one common category_id.
  - TypeError – If things or stuffs contain non-integer category_id.
  - TypeError – If preds or target is not a torch.Tensor.
  - ValueError – If preds and target have different shapes.
  - ValueError – If preds has less than 3 dimensions.
  - ValueError – If the final dimension of preds has size != 2.
- Return type:
  Tensor
Example
>>> from torch import tensor
>>> from torchmetrics.functional.detection import panoptic_quality
>>> preds = tensor([[[[6, 0], [0, 0], [6, 0], [6, 0]],
...                  [[0, 0], [0, 0], [6, 0], [0, 1]],
...                  [[0, 0], [0, 0], [6, 0], [0, 1]],
...                  [[0, 0], [7, 0], [6, 0], [1, 0]],
...                  [[0, 0], [7, 0], [7, 0], [7, 0]]]])
>>> target = tensor([[[[6, 0], [0, 1], [6, 0], [0, 1]],
...                   [[0, 1], [0, 1], [6, 0], [0, 1]],
...                   [[0, 1], [0, 1], [6, 0], [1, 0]],
...                   [[0, 1], [7, 0], [1, 0], [1, 0]],
...                   [[0, 1], [7, 0], [7, 0], [7, 0]]]])
>>> panoptic_quality(preds, target, things = {0, 1}, stuffs = {6, 7})
tensor(0.5463, dtype=torch.float64)
- You can also return the segmentation and recognition quality alongside the PQ
>>> from torch import tensor
>>> from torchmetrics.functional.detection import panoptic_quality
>>> preds = tensor([[[[6, 0], [0, 0], [6, 0], [6, 0]],
...                  [[0, 0], [0, 0], [6, 0], [0, 1]],
...                  [[0, 0], [0, 0], [6, 0], [0, 1]],
...                  [[0, 0], [7, 0], [6, 0], [1, 0]],
...                  [[0, 0], [7, 0], [7, 0], [7, 0]]]])
>>> target = tensor([[[[6, 0], [0, 1], [6, 0], [0, 1]],
...                   [[0, 1], [0, 1], [6, 0], [0, 1]],
...                   [[0, 1], [0, 1], [6, 0], [1, 0]],
...                   [[0, 1], [7, 0], [1, 0], [1, 0]],
...                   [[0, 1], [7, 0], [7, 0], [7, 0]]]])
>>> panoptic_quality(preds, target, things = {0, 1}, stuffs = {6, 7}, return_sq_and_rq=True)
tensor([0.5463, 0.6111, 0.6667], dtype=torch.float64)
- You can also specify to return the per-class metrics
>>> from torch import tensor
>>> from torchmetrics.functional.detection import panoptic_quality
>>> preds = tensor([[[[6, 0], [0, 0], [6, 0], [6, 0]],
...                  [[0, 0], [0, 0], [6, 0], [0, 1]],
...                  [[0, 0], [0, 0], [6, 0], [0, 1]],
...                  [[0, 0], [7, 0], [6, 0], [1, 0]],
...                  [[0, 0], [7, 0], [7, 0], [7, 0]]]])
>>> target = tensor([[[[6, 0], [0, 1], [6, 0], [0, 1]],
...                   [[0, 1], [0, 1], [6, 0], [0, 1]],
...                   [[0, 1], [0, 1], [6, 0], [1, 0]],
...                   [[0, 1], [7, 0], [1, 0], [1, 0]],
...                   [[0, 1], [7, 0], [7, 0], [7, 0]]]])
>>> panoptic_quality(preds, target, things = {0, 1}, stuffs = {6, 7}, return_per_class=True)
tensor([[0.5185, 0.0000, 0.6667, 1.0000]], dtype=torch.float64)
- You can also specify to return the per-class metrics and the segmentation and recognition quality
>>> from torch import tensor
>>> from torchmetrics.functional.detection import panoptic_quality
>>> preds = tensor([[[[6, 0], [0, 0], [6, 0], [6, 0]],
...                  [[0, 0], [0, 0], [6, 0], [0, 1]],
...                  [[0, 0], [0, 0], [6, 0], [0, 1]],
...                  [[0, 0], [7, 0], [6, 0], [1, 0]],
...                  [[0, 0], [7, 0], [7, 0], [7, 0]]]])
>>> target = tensor([[[[6, 0], [0, 1], [6, 0], [0, 1]],
...                   [[0, 1], [0, 1], [6, 0], [0, 1]],
...                   [[0, 1], [0, 1], [6, 0], [1, 0]],
...                   [[0, 1], [7, 0], [1, 0], [1, 0]],
...                   [[0, 1], [7, 0], [7, 0], [7, 0]]]])
>>> panoptic_quality(preds, target, things = {0, 1}, stuffs = {6, 7},
...                  return_per_class=True, return_sq_and_rq=True)
tensor([[0.5185, 0.7778, 0.6667],
        [0.0000, 0.0000, 0.0000],
        [0.6667, 0.6667, 1.0000],
        [1.0000, 1.0000, 1.0000]], dtype=torch.float64)