ApexMixedPrecisionPlugin

class pytorch_lightning.plugins.precision.ApexMixedPrecisionPlugin(amp_level='O2')[source]

Bases: pytorch_lightning.plugins.precision.precision_plugin.PrecisionPlugin

Mixed Precision Plugin based on NVIDIA/Apex (https://github.com/NVIDIA/apex).
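
A minimal usage sketch, assuming the PyTorch Lightning 1.x Trainer API (where a precision plugin instance can be passed through the Trainer's plugins argument) and an environment with Apex installed:

    from pytorch_lightning import Trainer
    from pytorch_lightning.plugins.precision import ApexMixedPrecisionPlugin

    # "O2" is Apex's mixed-precision optimization level: the model runs in
    # FP16 while Apex keeps FP32 master weights for the optimizer.
    trainer = Trainer(
        accelerator="gpu",
        devices=1,
        plugins=[ApexMixedPrecisionPlugin(amp_level="O2")],
    )
    trainer.fit(model)  # `model` is an assumed LightningModule instance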

backward(tensor, model, optimizer, *args, **kwargs)[source]

Performs the actual backpropagation.

Parameters:
  • tensor (Tensor) – the loss value obtained from the closure

  • model (LightningModule) – the model to be optimized

  • optimizer (Optional[Optimizable]) – current optimizer being used. None if using manual optimization

  • *args (Any) – Positional arguments intended for the actual function that performs the backward, like backward().

  • **kwargs (Any) – Keyword arguments for the same purpose as *args.

Return type:

None
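
For context, Apex scales the loss before backpropagation to avoid FP16 gradient underflow. A hedged sketch of the raw Apex calls this hook corresponds to, assuming the model and optimizer were already prepared with amp.initialize:

    from apex import amp

    # Scale the loss into a range that is safe for FP16 gradients,
    # then backpropagate through the scaled value.
    with amp.scale_loss(loss, optimizer) as scaled_loss:
        scaled_loss.backward()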

dispatch(trainer)[source]

Hook that runs when Strategy.dispatch() is called.

Return type:

None
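
The hook can be extended in a subclass; a hypothetical sketch that adds logging while preserving the plugin's own dispatch behavior through super():

    class LoggingApexPlugin(ApexMixedPrecisionPlugin):
        def dispatch(self, trainer):
            # Hypothetical override: observe the dispatch point, then defer
            # to the parent so the plugin's own dispatch logic still runs.
            print(f"dispatch at global step {trainer.global_step}")
            super().dispatch(trainer)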

load_state_dict(state_dict)[source]

Called when loading a checkpoint; implement to reload the precision plugin state given the precision plugin state_dict.

Parameters:

state_dict (Dict[str, Any]) – the precision plugin state returned by state_dict.

Return type:

None

main_params(optimizer)[source]

The main parameters of the model.

Returns the plain model parameters here; this may differ in other precision plugins.

Return type:

Iterator[Parameter]
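
A hedged sketch of a typical consumer: gradient clipping should iterate over whatever parameters the plugin exposes, so it uses main_params() rather than reaching into the model directly. The plugin and optimizer variables are assumed to exist:

    import torch

    # Clip gradient norms over the parameter iterator the plugin exposes.
    torch.nn.utils.clip_grad_norm_(plugin.main_params(optimizer), max_norm=1.0)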

optimizer_step(optimizer, model, optimizer_idx, closure, **kwargs)[source]

Hook to run the optimizer step.

Return type:

Any
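
A hedged sketch of the closure contract: the forward/backward work is wrapped in a callable that the plugin invokes as part of the step. compute_loss and the surrounding variables are illustrative, not part of the API:

    def closure():
        # Hypothetical closure: compute the loss and run the plugin's
        # backward so gradients exist before the optimizer step.
        optimizer.zero_grad()
        loss = compute_loss()
        plugin.backward(loss, model, optimizer)
        return loss

    plugin.optimizer_step(optimizer, model, optimizer_idx=0, closure=closure)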

state_dict()[source]

Called when saving a checkpoint; implement to generate the precision plugin state_dict.

Returns:

A dictionary containing precision plugin state.

Return type:

Dict[str, Any]
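
A minimal round-trip sketch pairing state_dict() with load_state_dict() above; the checkpoint key and file name are illustrative:

    import torch

    # Save: capture the plugin state alongside other checkpoint content.
    torch.save({"precision_plugin": plugin.state_dict()}, "checkpoint.pt")

    # Load: restore the plugin state from the saved dictionary.
    checkpoint = torch.load("checkpoint.pt")
    plugin.load_state_dict(checkpoint["precision_plugin"])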