HalfPrecision

class lightning.fabric.plugins.precision.HalfPrecision(precision='16-true')[source]

Bases: Precision

Plugin for training with half precision.

Parameters:

precision (Literal['bf16-true', '16-true']) – Whether to use torch.float16 ('16-true') or torch.bfloat16 ('bf16-true').
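
A minimal usage sketch (assuming the lightning package is installed; Fabric also accepts the shorthand precision='bf16-true' in place of an explicit plugin):

>>> from lightning.fabric import Fabric
>>> from lightning.fabric.plugins.precision import HalfPrecision
>>> fabric = Fabric(plugins=HalfPrecision(precision="bf16-true"))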

convert_input(data)[source]

Convert model inputs (forward) to the floating point precision type of this plugin.

Unlike the base Precision plugin, where this is a no-op, HalfPrecision casts every floating-point tensor in the input collection to the dtype selected by precision; non-floating-point tensors are left unchanged.

Return type:

Any
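
Example of the cast applied to a collection of inputs (a minimal sketch; dtypes assume the default '16-true' setting):

>>> import torch
>>> from lightning.fabric.plugins.precision import HalfPrecision
>>> plugin = HalfPrecision("16-true")
>>> batch = {"x": torch.rand(2, 3), "y": torch.tensor([0, 1])}
>>> plugin.convert_input(batch)["x"].dtype  # floating-point tensors are cast
torch.float16
>>> plugin.convert_input(batch)["y"].dtype  # integer tensors are left untouched
torch.int64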

convert_module(module)[source]

Convert the module parameters to the precision type this plugin handles.

HalfPrecision moves the module's floating-point parameters and buffers to the selected half-precision dtype, equivalent to calling module.to(dtype=...).

Return type:

Module
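
For example (a minimal sketch):

>>> import torch
>>> from lightning.fabric.plugins.precision import HalfPrecision
>>> model = torch.nn.Linear(4, 4)  # parameters start in torch.float32
>>> model = HalfPrecision("16-true").convert_module(model)
>>> model.weight.dtype
torch.float16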

convert_output(data)[source]

Convert outputs to the floating point precision type expected after the model’s forward.

Unlike the base Precision plugin, where this is a no-op, HalfPrecision casts floating-point tensors in the output back to the current default dtype (typically torch.float32).

Return type:

Any
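
Example of the reverse cast (a minimal sketch; assumes the default dtype is the stock torch.float32):

>>> import torch
>>> from lightning.fabric.plugins.precision import HalfPrecision
>>> out = torch.rand(2, dtype=torch.float16)  # e.g. produced by a half-precision forward
>>> HalfPrecision("16-true").convert_output(out).dtype
torch.float32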

forward_context()[source]

A context manager wrapping the model forward and the training/evaluation/prediction step hooks. While it is active, the default floating-point dtype is set to this plugin's dtype and is restored on exit.

Return type:

ContextManager
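
Example (a minimal sketch; assumes a PyTorch version where torch.set_default_dtype accepts half-precision dtypes, as this plugin relies on):

>>> import torch
>>> from lightning.fabric.plugins.precision import HalfPrecision
>>> with HalfPrecision("16-true").forward_context():
...     torch.rand(2).dtype  # new floating-point tensors default to the plugin dtype
torch.float16
>>> torch.rand(2).dtype  # the previous default dtype is restored on exit
torch.float32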

module_init_context()[source]

Instantiate module parameters or tensors in the precision type this plugin handles.

For HalfPrecision, the default floating-point dtype is switched to the selected half-precision dtype while the context is active, so newly created parameters are allocated directly in half precision rather than created in full precision and cast afterwards.

Return type:

ContextManager
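
Creating a module under this context avoids a full-precision allocation followed by a cast (a minimal sketch):

>>> import torch
>>> from lightning.fabric.plugins.precision import HalfPrecision
>>> with HalfPrecision("bf16-true").module_init_context():
...     layer = torch.nn.Linear(4, 4)
>>> layer.weight.dtype
torch.bfloat16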

tensor_init_context()[source]

Controls how tensors get created (device, dtype). Under this context manager, the default floating-point dtype is set to the plugin's half-precision dtype.

Return type:

ContextManager
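
Example (a minimal sketch):

>>> import torch
>>> from lightning.fabric.plugins.precision import HalfPrecision
>>> with HalfPrecision("16-true").tensor_init_context():
...     torch.empty(3).dtype
torch.float16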