TPUBf16Precision

class lightning.fabric.plugins.precision.TPUBf16Precision

Bases: TPUPrecision

Plugin that enables bfloat16 precision on TPUs.
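A minimal usage sketch, assuming Lightning Fabric's standard entry points: bf16 on TPU is normally requested through Fabric's precision flag, which creates this plugin internally, though a plugin instance can also be passed explicitly. The exact precision string ("bf16" below) may differ between versions.

    from lightning.fabric import Fabric
    from lightning.fabric.plugins.precision import TPUBf16Precision

    # Request bfloat16 on TPU via the precision flag; Fabric then
    # instantiates a TPUBf16Precision plugin under the hood.
    fabric = Fabric(accelerator="tpu", precision="bf16")

    # Equivalent sketch passing the plugin instance explicitly.
    fabric = Fabric(accelerator="tpu", plugins=TPUBf16Precision())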

convert_input(data)

Convert model inputs (the arguments to forward) to the floating-point precision used by this plugin.

In the base precision plugin this is a no-op, since the data is assumed to already have the desired dtype (torch.float32 by default).

Return type: Any
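In this subclass the method would cast floating-point tensors in the inputs to torch.bfloat16. A minimal sketch of that behavior, assuming the collection helper apply_to_collection from lightning_utilities (not the exact library source):

    import torch
    from lightning_utilities.core.apply_func import apply_to_collection

    def convert_input(data):
        # Cast every floating-point tensor in the (possibly nested)
        # input collection to bfloat16; leave everything else untouched.
        def _to_bf16(t: torch.Tensor) -> torch.Tensor:
            return t.to(torch.bfloat16) if t.is_floating_point() else t

        return apply_to_collection(data, dtype=torch.Tensor, function=_to_bf16)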

convert_output(data)

Convert outputs to the floating-point precision expected after the model's forward pass.

In the base precision plugin this is a no-op, since the data is assumed to already have the desired dtype (torch.float32 by default).

Return type: Any
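Conversely, this subclass would map bfloat16 outputs back to the default dtype so downstream code sees torch.float32 again. A hedged sketch along the same lines as convert_input:

    import torch
    from lightning_utilities.core.apply_func import apply_to_collection

    def convert_output(data):
        # Cast floating-point tensors in the (possibly nested) output
        # collection back to the default dtype (torch.float32 unless changed).
        def _to_default(t: torch.Tensor) -> torch.Tensor:
            return t.to(torch.get_default_dtype()) if t.is_floating_point() else t

        return apply_to_collection(data, dtype=torch.Tensor, function=_to_default)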

teardown()

This method is called to tear down the training process.

It is the right place to release memory and free other resources.

Return type: None
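On TPUs, bfloat16 is commonly switched on through the XLA_USE_BF16 environment variable, so a plausible teardown restores the environment. This sketch assumes the plugin set XLA_USE_BF16 when it was constructed:

    import os

    def teardown() -> None:
        # Remove the flag assumed to be set at construction time, so later
        # runs in the same process are not silently forced into bf16.
        os.environ.pop("XLA_USE_BF16", None)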