MixedPrecision
- class lightning.fabric.plugins.precision.MixedPrecision(precision, device, scaler=None)[source]
Bases: Precision
Plugin for Automatic Mixed Precision (AMP) training with torch.autocast.
- Parameters:
  - precision (Literal['16-mixed', 'bf16-mixed']) – Whether to use torch.float16 ('16-mixed') or torch.bfloat16 ('bf16-mixed').
  - device (str) – The device for torch.autocast.
  - scaler (Optional[GradScaler]) – An optional torch.cuda.amp.GradScaler to use.
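A minimal construction sketch, assuming a single CUDA device and that the plugin is passed to Fabric via its plugins argument; the scaler is optional and only relevant for '16-mixed':

```python
import torch
from lightning.fabric import Fabric
from lightning.fabric.plugins.precision import MixedPrecision

# Build the plugin explicitly instead of using the precision="16-mixed" shorthand.
# A GradScaler is only meaningful for float16; bf16-mixed needs no scaler.
scaler = torch.cuda.amp.GradScaler()
plugin = MixedPrecision(precision="16-mixed", device="cuda", scaler=scaler)

fabric = Fabric(accelerator="cuda", devices=1, plugins=plugin)
fabric.launch()
```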
- backward(tensor, model, *args, **kwargs)[source]
Performs the actual backpropagation.
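A hedged sketch of how this method is typically reached when training with Fabric: fabric.backward() delegates to the precision plugin, which applies GradScaler loss scaling for '16-mixed' before backpropagating. The model, optimizer, and data below are placeholders, and the `fabric` object is assumed from the construction sketch above:

```python
import torch
import torch.nn as nn

# Assumes the `fabric` object from the construction sketch above.
model = nn.Linear(32, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
model, optimizer = fabric.setup(model, optimizer)

x = torch.randn(8, 32, device=fabric.device)
loss = model(x).pow(2).mean()
fabric.backward(loss)   # routed to MixedPrecision.backward(); loss is scaled for float16
optimizer.step()        # the scaler unscales gradients before the real optimizer step
optimizer.zero_grad()
```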
- convert_input(data)[source]
Convert model inputs (forward) to the floating point precision type of this plugin.
This is a no-op in the base precision plugin, since we assume the data already has the desired type (default is torch.float32).
- Return type:
- convert_output(data)[source]
Convert outputs to the floating point precision type expected after model’s forward.
This is a no-op in the base precision plugin, since we assume the data already has the desired type (default is torch.float32).
- Return type:
- forward_context()[source]
A contextmanager for managing model forward/training_step/evaluation_step/predict_step.
- Return type:
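As a small illustration (the surrounding setup is chosen here, not taken from the docs), the returned context manager can be entered directly, and operations inside it run under torch.autocast with the configured dtype. Using bf16-mixed on CPU keeps the sketch runnable without a GPU:

```python
import torch
from lightning.fabric.plugins.precision import MixedPrecision

# bf16-mixed on CPU: no GradScaler is required, and autocast targets the CPU device.
plugin = MixedPrecision(precision="bf16-mixed", device="cpu")
linear = torch.nn.Linear(4, 4)

with plugin.forward_context():          # an autocast region for the configured device/dtype
    out = linear(torch.randn(2, 4))

print(out.dtype)  # expected: torch.bfloat16, produced inside the autocast region
```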
- load_state_dict(state_dict)[source]
Called when loading a checkpoint; implement to reload the precision plugin state from the given precision plugin state_dict.
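A hedged round-trip sketch: state_dict() (the saving counterpart) captures the plugin state, which for '16-mixed' is essentially the GradScaler state, and load_state_dict() restores it into a freshly constructed plugin. The checkpoint key below is illustrative:

```python
import torch
from lightning.fabric.plugins.precision import MixedPrecision

# Save: capture the plugin state alongside the rest of a checkpoint.
plugin = MixedPrecision(precision="16-mixed", device="cuda", scaler=torch.cuda.amp.GradScaler())
checkpoint = {"precision_plugin": plugin.state_dict()}   # illustrative checkpoint key

# Load: restore the state into a new plugin with the same configuration.
restored = MixedPrecision(precision="16-mixed", device="cuda", scaler=torch.cuda.amp.GradScaler())
restored.load_state_dict(checkpoint["precision_plugin"])
```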