NativeMixedPrecisionPlugin
- class pytorch_lightning.plugins.precision.NativeMixedPrecisionPlugin(precision, device, scaler=None)[source]
Bases:
pytorch_lightning.plugins.precision.mixed.MixedPrecisionPlugin
Plugin for Native Mixed Precision (AMP) training with torch.autocast.
- Parameters
  - precision (Union[str, int]) – Whether to use torch.float16 (16) or torch.bfloat16 ('bf16').
  - device (str) – The device for torch.autocast.
  - scaler (Optional[GradScaler]) – An optional torch.cuda.amp.GradScaler to use.
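The mechanism the plugin wraps can be sketched with plain torch: the forward pass runs inside a torch.autocast region. This is an illustrative sketch, not the plugin's implementation; it uses torch.bfloat16 on CPU so no GPU is required, whereas with device_type="cuda" the typical dtype is torch.float16.

```python
import torch

# Illustrative sketch of what the plugin automates: run the forward
# pass inside a torch.autocast region. bfloat16 on CPU needs no GPU;
# with device_type="cuda" you would typically use torch.float16.
model = torch.nn.Linear(4, 2)
x = torch.randn(8, 4)

with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    out = model(x)

# Inside the region, eligible ops ran in the lower-precision dtype.
```

In Lightning, you normally do not enter this context yourself; the plugin applies it around your forward/training step when the matching precision is selected on the Trainer.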
- load_state_dict(state_dict)[source]
Called when loading a checkpoint, implement to reload precision plugin state given precision plugin state_dict.
- optimizer_step(model, optimizer, optimizer_idx, closure, **kwargs)[source]
Hook to run the optimizer step.
- Return type
  Any
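When a GradScaler is in use, the step this hook performs follows the standard scaled-update pattern from torch.cuda.amp. A minimal sketch in plain torch (the scaler is constructed disabled so the snippet also runs without CUDA):

```python
import torch

# Standard scaled optimizer step, as automated by the hook:
# scale the loss for backward, then step and update via the scaler.
model = torch.nn.Linear(4, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scaler = torch.cuda.amp.GradScaler(enabled=False)  # disabled: runs without CUDA

loss = model(torch.randn(2, 4)).sum()
scaler.scale(loss).backward()  # backward on the (possibly scaled) loss
scaler.step(optimizer)         # unscales gradients, then optimizer.step()
scaler.update()                # adjusts the scale factor for the next step
```

With bf16 precision no scaler is used, so the hook reduces to a plain optimizer step.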
- pre_backward(model, closure_loss)[source]
Run before precision plugin executes backward.
- Parameters
  - model (LightningModule) – the model to be optimized
  - closure_loss (Tensor) – the loss value obtained from the closure
- Return type
  Tensor
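In the scaled case, this amounts to multiplying the closure loss by the current scale factor before backward runs. A sketch with plain torch, using a disabled scaler (no CUDA needed) so scale() is the identity here:

```python
import torch

# Sketch of the loss scaling applied before backward when a GradScaler
# is in use. With the scaler disabled, scale() returns the loss
# unchanged; when enabled, it multiplies by the current scale factor.
scaler = torch.cuda.amp.GradScaler(enabled=False)

params = torch.tensor([1.0, 2.0], requires_grad=True)
closure_loss = params.sum()
scaled_loss = scaler.scale(closure_loss)
scaled_loss.backward()
```

Scaling the loss before backward keeps small float16 gradients from underflowing; the scaler later unscales them before the optimizer step.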