Why is there an anomaly in bf16-mixed precision support?

I was checking my GPU's precision support: Lightning Fabric can use bf16-mixed successfully, while torch says my GPU does not support bf16 precision. Check this output:

>>> import torch
>>> from lightning.fabric import Fabric
>>> fabric = Fabric(accelerator="cuda", devices=1, precision="bf16-mixed")
Using bfloat16 Automatic Mixed Precision (AMP)
>>> torch.cuda.is_bf16_supported()
False
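
For reference, this is a minimal standalone check that prints the GPU name and compute capability next to torch's bf16 flag and then builds the same Fabric object. It is only a sketch: torch.cuda.get_device_capability, torch.cuda.get_device_name, and torch.cuda.is_bf16_supported are standard torch.cuda calls, and the lightning.fabric import path assumes lightning >= 2.0.

import torch
from lightning.fabric import Fabric

if torch.cuda.is_available():
    # Compute capability of the current device (e.g. 7.5 for Turing, 8.x for Ampere)
    major, minor = torch.cuda.get_device_capability()
    print(f"{torch.cuda.get_device_name()}: compute capability {major}.{minor}")
    # torch's own view of native bf16 support on this device
    print("torch.cuda.is_bf16_supported():", torch.cuda.is_bf16_supported())

# Fabric still accepts bf16-mixed on the same device, as in the session above
fabric = Fabric(accelerator="cuda", devices=1, precision="bf16-mixed")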