TPUBf16Precision
- class lightning.fabric.plugins.precision.TPUBf16Precision
Bases: TPUPrecision
Plugin that enables bfloat16 precision on TPUs.
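A minimal usage sketch (illustrative, not from this page): the plugin is constructed with no arguments and handed to Fabric through its `plugins` argument. It assumes a TPU runtime with `torch_xla` installed; the `accelerator` and `devices` values are placeholders.

```python
from lightning.fabric import Fabric
from lightning.fabric.plugins.precision import TPUBf16Precision

# Assumes a TPU runtime with torch_xla installed; instantiating the
# plugin outside such an environment is expected to fail.
precision = TPUBf16Precision()

# Hand the plugin to Fabric explicitly via the `plugins` argument.
fabric = Fabric(accelerator="tpu", devices=8, plugins=precision)
fabric.launch()
```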
- convert_input(data)
Convert model inputs (forward) to the floating point precision type of this plugin.
This is a no-op in the base precision plugin, since we assume the data already has the desired type (default is torch.float32).
- Return type: Tensor
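A short sketch of calling convert_input directly (illustrative only, assuming `torch_xla` is available): per the note above, the call hands back data that already has the desired type, while the bfloat16 handling itself happens at the XLA level.

```python
import torch
from lightning.fabric.plugins.precision import TPUBf16Precision

plugin = TPUBf16Precision()   # assumes torch_xla is installed
batch = torch.randn(4, 3)     # a float32 batch (the default dtype)

# Per the docstring above, this is a no-op for data that already has
# the desired type; the device-side bfloat16 mapping is done by XLA.
batch = plugin.convert_input(batch)
```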