NativeMixedPrecisionPlugin

class pytorch_lightning.plugins.precision.NativeMixedPrecisionPlugin(precision, device, scaler=None)[source]

Bases: pytorch_lightning.plugins.precision.mixed.MixedPrecisionPlugin

Plugin for Native Mixed Precision (AMP) training with torch.autocast.

Parameters:
  • precision (Union[str, int]) – Whether to use torch.float16 (16) or torch.bfloat16 ("bf16").

  • device (str) – The device for torch.autocast.

  • scaler (Optional[GradScaler]) – An optional torch.cuda.amp.GradScaler to use.

forward_context()[source]

Enable autocast context.

Return type:

Generator[None, None, None]
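The real method yields inside a torch.autocast context so that the forward pass runs under automatic mixed precision. A minimal pure-Python sketch of the same context-manager pattern (the AutocastSketch class and its `active` flag are illustrative stand-ins, not part of the library):

```python
from contextlib import contextmanager


class AutocastSketch:
    """Toy stand-in mimicking how the plugin wraps the forward pass."""

    def __init__(self):
        self.active = False

    @contextmanager
    def forward_context(self):
        # The real plugin does `with torch.autocast(self.device): yield`.
        self.active = True
        try:
            yield
        finally:
            self.active = False  # state is restored even if forward raises


plugin = AutocastSketch()
with plugin.forward_context():
    inside = plugin.active  # True: "forward" runs under the context
after = plugin.active       # False: context cleanly exited
```

The try/finally mirrors why the return type is a generator: the context must be torn down even when the wrapped forward pass raises.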

load_state_dict(state_dict)[source]

Called when loading a checkpoint; implement to reload the precision plugin's state from the given state_dict.

Parameters:

state_dict (Dict[str, Any]) – the precision plugin state returned by state_dict.

Return type:

None

optimizer_step(model, optimizer, optimizer_idx, closure, **kwargs)[source]

Hook to run the optimizer step.

Return type:

Any
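With native AMP, the optimizer step is routed through the gradient scaler: gradients are unscaled first, and the step is skipped when non-finite gradients indicate overflow. A simplified pure-Python sketch of that control flow (ScalerSketch and its halve-on-overflow policy are illustrative assumptions, not torch.cuda.amp.GradScaler itself):

```python
import math


class ScalerSketch:
    """Toy stand-in for the GradScaler logic behind optimizer_step."""

    def __init__(self, scale=2.0 ** 16):
        self.scale = scale

    def step(self, grads, apply_fn):
        # Unscale gradients, then skip the step if any are non-finite.
        unscaled = [g / self.scale for g in grads]
        if any(not math.isfinite(g) for g in unscaled):
            self.scale /= 2.0  # back off the loss scale after overflow
            return False       # optimizer step skipped
        apply_fn(unscaled)     # normal optimizer step on unscaled grads
        return True


scaler = ScalerSketch()
params = [1.0]
# A scaled gradient of 0.5 leads to an ordinary SGD-style update.
stepped = scaler.step(
    [0.5 * scaler.scale],
    lambda gs: params.__setitem__(0, params[0] - 0.1 * gs[0]),
)
# A non-finite gradient causes the step to be skipped and the scale reduced.
skipped = scaler.step([float("inf")], lambda gs: None)
```

This is why the hook returns `Any`: when the step is skipped, there is no meaningful optimizer result to propagate.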

pre_backward(model, closure_loss)[source]

Run before precision plugin executes backward.

Parameters:
  • model (LightningModule) – the model to be optimized

  • closure_loss (Tensor) – the loss value obtained from the closure

Return type:

Tensor
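Before backward runs, the plugin multiplies the closure loss by the current loss scale (via the scaler) so that small fp16 gradients do not underflow. A minimal sketch of that transformation on a plain float (PreBackwardSketch is an illustrative name; the real method calls the GradScaler on a Tensor):

```python
class PreBackwardSketch:
    """Toy stand-in: scale the closure loss before backward."""

    def __init__(self, scale=2.0 ** 16):
        self.scale = scale

    def pre_backward(self, closure_loss):
        # The real plugin returns `self.scaler.scale(closure_loss)`.
        return closure_loss * self.scale


plugin = PreBackwardSketch()
scaled = plugin.pre_backward(0.25)  # 0.25 * 65536
```

The returned (scaled) Tensor is what backward is then called on; the matching unscale happens later, inside the optimizer step.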

state_dict()[source]

Called when saving a checkpoint; implement to generate the precision plugin's state_dict.

Return type:

Dict[str, Any]

Returns:

A dictionary containing precision plugin state.
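Together, state_dict and load_state_dict let the loss-scale state survive a checkpoint round trip, so resumed training does not restart from the default scale. A pure-Python sketch of that round trip (StateDictSketch and its two fields are illustrative; the real plugin delegates to its GradScaler's state):

```python
class StateDictSketch:
    """Toy stand-in for the plugin's checkpoint state round trip."""

    def __init__(self, scale=2.0 ** 16, growth_interval=2000):
        self.scale = scale
        self.growth_interval = growth_interval

    def state_dict(self):
        # Called when saving a checkpoint.
        return {"scale": self.scale, "growth_interval": self.growth_interval}

    def load_state_dict(self, state_dict):
        # Called when loading a checkpoint.
        self.scale = state_dict["scale"]
        self.growth_interval = state_dict["growth_interval"]


saved = StateDictSketch(scale=1024.0).state_dict()  # written into the checkpoint
restored = StateDictSketch()                        # fresh plugin, default scale
restored.load_state_dict(saved)                     # scale restored to 1024.0
```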