To properly utilize them, you should set `torch.set_float32_matmul_precision('medium' | 'high')` which will trade-off precision for performance

I have set the precision in the Trainer to `16-mixed`, yet I still get this warning. I don't understand why.

It looks like you're encountering the Tensor Core warning that PyTorch Lightning emits for CUDA GPUs. It is not a compatibility problem, and it is independent of the `precision` flag: Lightning raises it whenever it detects a device with Tensor Cores while the global float32 matmul precision is still at its default of `"highest"`. Setting `precision="16-mixed"` only enables autocast for the forward and backward passes; any float32 matrix multiplications that run outside autocast still use that default, so Lightning keeps suggesting that you opt in to TF32 via `torch.set_float32_matmul_precision`.
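If you just want to act on the warning, a minimal sketch is below (assuming the Lightning 2.x `import lightning as L` style; the Trainer arguments are placeholders for your own setup). The key point is to call `torch.set_float32_matmul_precision` once, near the top of the script, before training starts:

```python
import torch
import lightning as L  # Lightning 2.x import style; use `pytorch_lightning` on older versions

# Opt in to TF32/Tensor Core matmuls for float32 operations.
# "high" keeps more precision than "medium"; either value silences the warning.
torch.set_float32_matmul_precision("high")

# `precision="16-mixed"` only enables autocast for the training loop;
# it does not change the global float32 matmul setting configured above.
trainer = L.Trainer(accelerator="gpu", devices=1, precision="16-mixed")
```

Choosing `"high"` keeps most of the float32 accuracy while still using Tensor Cores, whereas `"medium"` trades more precision for speed; either choice makes the warning go away.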

What are the steps to solve this?