MixedPrecision
- class lightning.pytorch.plugins.precision.MixedPrecision(precision, device, scaler=None)[source]
Bases: Precision
Plugin for Automatic Mixed Precision (AMP) training with torch.autocast.
- Parameters:
  - precision (Literal['16-mixed', 'bf16-mixed']) – Whether to use torch.float16 ('16-mixed') or torch.bfloat16 ('bf16-mixed').
  - device (str) – The device for torch.autocast.
  - scaler (Optional[GradScaler]) – An optional torch.cuda.amp.GradScaler to use.
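For example, a minimal sketch of constructing the plugin and passing it to a Trainer (the accelerator and device settings here are illustrative):

```python
import torch
from lightning.pytorch import Trainer
from lightning.pytorch.plugins.precision import MixedPrecision

# float16 autocast with a GradScaler to protect small gradients from underflow
plugin = MixedPrecision(
    precision="16-mixed",
    device="cuda",
    scaler=torch.cuda.amp.GradScaler(),
)

# Passing the plugin explicitly; Trainer(precision="16-mixed") is the usual
# shorthand when the default scaler is acceptable.
trainer = Trainer(plugins=[plugin], accelerator="gpu", devices=1)
```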
- clip_gradients(optimizer, clip_val=0.0, gradient_clip_algorithm=GradClipAlgorithmType.NORM)[source]
Clips the gradients.
- Return type: None
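This hook is normally invoked by the Trainer rather than called by user code; a sketch of enabling clipping through the Trainer (the clip value is illustrative):

```python
from lightning.pytorch import Trainer

# Gradient clipping requests are routed through the active precision plugin;
# with '16-mixed', gradients are unscaled before the clip is applied.
trainer = Trainer(
    precision="16-mixed",
    gradient_clip_val=1.0,
    gradient_clip_algorithm="norm",  # matches GradClipAlgorithmType.NORM
)
```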
- load_state_dict(state_dict)[source]
Called when loading a checkpoint; implement to reload precision plugin state given the precision plugin state_dict.
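A sketch of a manual save/restore round trip, assuming the Trainer's usual checkpoint flow is bypassed:

```python
import torch
from lightning.pytorch.plugins.precision import MixedPrecision

plugin = MixedPrecision("16-mixed", device="cuda",
                        scaler=torch.cuda.amp.GradScaler())

# The plugin state is essentially the GradScaler state (scale, growth tracker).
state = plugin.state_dict()

# On checkpoint load, a fresh plugin restores that state.
restored = MixedPrecision("16-mixed", device="cuda",
                          scaler=torch.cuda.amp.GradScaler())
restored.load_state_dict(state)
```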
- optimizer_step(optimizer, model, closure, **kwargs)[source]
Hook to run the optimizer step.
- Return type: Any
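When a GradScaler is present, the step follows the standard torch.cuda.amp recipe; below is a self-contained approximation, not the actual implementation, which also runs the closure and skips the scaler logic for 'bf16-mixed' (names here are generic, not Lightning internals, and a CUDA device is required):

```python
import torch

model = torch.nn.Linear(4, 1).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scaler = torch.cuda.amp.GradScaler()

x = torch.randn(8, 4, device="cuda")
with torch.autocast(device_type="cuda", dtype=torch.float16):
    loss = model(x).pow(2).mean()

optimizer.zero_grad()
scaler.scale(loss).backward()  # backward on the scaled loss
scaler.unscale_(optimizer)     # unscale so clipping sees true gradients
scaler.step(optimizer)         # skipped internally if grads contain inf/NaN
scaler.update()                # adjust the scale factor for the next step
```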
- pre_backward(tensor, module)[source]
Runs before the precision plugin executes backward.
- Parameters:
  - tensor (Tensor) – The tensor that will be used for backpropagation
  - module (LightningModule) – The module that was involved in producing the tensor and whose parameters need the gradients
- Return type: Tensor
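With a scaler active, this hook returns the tensor scaled by the GradScaler so that backward runs on the scaled loss; a minimal stand-alone illustration of that scaling (requires a CUDA device):

```python
import torch

scaler = torch.cuda.amp.GradScaler()

# A tiny float32 loss standing in for the tensor passed to pre_backward.
loss = (torch.randn(4, device="cuda", requires_grad=True) ** 2).mean()

# Equivalent of what the hook does before backward: scale the loss so that
# small float16 gradients do not underflow to zero.
scaled = scaler.scale(loss)
scaled.backward()
```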