Can torchmetrics BinaryAccuracy incorrectly interpret logits as probabilities?

The torchmetrics.classification.BinaryAccuracy documentation states:

If preds is a floating point tensor with values outside [0,1] range we consider the input to be logits and will auto apply sigmoid per element.

However, logit values range from -inf to +inf. There is thus a non-zero probability that a set of computed logits all lie within the [0,1] range, in which case they are interpreted as probabilities and used as-is.
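As I understand it, the heuristic described in the documentation is roughly equivalent to the following (my own paraphrase, not the actual torchmetrics source):

# My paraphrase of the auto-sigmoid heuristic, not the actual torchmetrics code:
# sigmoid is applied only when at least one value falls outside [0, 1].
if preds.is_floating_point() and not ((preds >= 0) & (preds <= 1)).all():
    preds = preds.sigmoid()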

Am I correct?

If so, when working with logits, is it safer to manually apply the sigmoid function before calling the BinaryAccuracy computation?

Here is a small Python program illustrating the issue.

import torch
from torch import Tensor
from torchmetrics.classification import BinaryAccuracy

# BinaryAccuracy documentation:
# https://lightning.ai/docs/torchmetrics/stable/classification/accuracy.html#torchmetrics.classification.BinaryAccuracy

def main():
    accuracy = BinaryAccuracy()

    # Both values lie within [0, 1], so they are treated as probabilities.
    logit_example_01 = Tensor([0.01, 0.99])
    print(torch.sigmoid(logit_example_01))  # tensor([0.5025, 0.7291])

    # 1.01 lies outside [0, 1], so the whole tensor is treated as logits.
    logit_example_02 = Tensor([0.01, 1.01])
    print(torch.sigmoid(logit_example_02))  # tensor([0.5025, 0.7330])

    # Logits erroneously interpreted as probabilities? 0.01 falls below
    # the default 0.5 threshold, so only one of two predictions matches.
    assert accuracy(logit_example_01, Tensor([1, 1])) == 0.5
    # Here sigmoid is auto-applied and both predictions exceed the threshold.
    assert accuracy(logit_example_02, Tensor([1, 1])) == 1.0
    # Applying sigmoid manually yields the expected value.
    assert accuracy(torch.sigmoid(logit_example_01), Tensor([1, 1])) == 1.0

if __name__ == "__main__":
    main()
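
If the answer is yes, the fix I have in mind is a small wrapper that applies sigmoid unconditionally, so the metric always receives probabilities (a minimal sketch; accuracy_from_logits is my own helper name, not a torchmetrics API):

import torch
from torch import Tensor
from torchmetrics.classification import BinaryAccuracy

accuracy = BinaryAccuracy()

def accuracy_from_logits(logits: Tensor, target: Tensor) -> Tensor:
    # Map logits to probabilities ourselves, so BinaryAccuracy never
    # has to guess whether its input is logits or probabilities.
    return accuracy(torch.sigmoid(logits), target)

# With the wrapper, logit_example_01 from above gives 1.0 as expected.
assert accuracy_from_logits(Tensor([0.01, 0.99]), Tensor([1, 1])) == 1.0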