PrecisionPlugin

class pytorch_lightning.plugins.precision.PrecisionPlugin[source]

Bases: pytorch_lightning.plugins.base_plugin.Plugin, pytorch_lightning.core.hooks.CheckpointHooks

Base class for all plugins that handle the precision-specific parts of training. The class attribute precision must be overridden in child classes; the default value reflects fp32 training.
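
A minimal sketch of a custom subclass, assuming only what the description above states (the precision class attribute must be overridden; everything else is inherited; the class name is hypothetical):

    from pytorch_lightning.plugins.precision import PrecisionPlugin

    class MyPrecisionPlugin(PrecisionPlugin):
        """Hypothetical subclass: only the required class attribute is set."""

        precision = 16  # overrides the base class default, which reflects fp32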

backward(model, closure_loss, optimizer, *args, **kwargs)[source]

Performs the actual backpropagation.

Parameters
  • model (LightningModule) – the model to be optimized

  • closure_loss (Tensor) – the loss value obtained from the closure

  • optimizer (Optional[Optimizer]) – the current optimizer being used, or None when using manual optimization

Return type

None
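
As an illustration of this hook's contract (not the library's actual implementation), an override might look like the following sketch; the class name and the scaling factor are hypothetical:

    from pytorch_lightning.plugins.precision import PrecisionPlugin

    class HalvedLossPlugin(PrecisionPlugin):
        precision = 32

        def backward(self, model, closure_loss, optimizer, *args, **kwargs):
            # Hypothetical: backpropagate a scaled copy of the closure loss.
            # The hook returns None; its only job is to run backpropagation.
            (closure_loss * 0.5).backward(*args, **kwargs)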

clip_grad_by_norm(optimizer, clip_val)[source]

Clips gradients by norm.

Return type

None
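
The entry does not spell out the strategy, but the method name maps directly onto torch.nn.utils.clip_grad_norm_. A sketch of the norm-based approach, assuming the parameters are gathered from the optimizer's param groups (not the library's actual code):

    import torch

    def clip_grad_by_norm_sketch(optimizer, clip_val):
        # Collect every parameter the optimizer manages and rescale their
        # gradients so the combined norm does not exceed clip_val.
        params = [p for group in optimizer.param_groups for p in group["params"]]
        torch.nn.utils.clip_grad_norm_(params, clip_val)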

clip_grad_by_value(optimizer, clip_val)[source]

Clips gradients by value.

Return type

None
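
Analogously, value-based clipping maps onto torch.nn.utils.clip_grad_value_; again a sketch under the same assumption about where the parameters come from:

    import torch

    def clip_grad_by_value_sketch(optimizer, clip_val):
        # Clamp each gradient element into the range [-clip_val, clip_val].
        params = [p for group in optimizer.param_groups for p in group["params"]]
        torch.nn.utils.clip_grad_value_(params, clip_val)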

clip_gradients(optimizer, clip_val, gradient_clip_algorithm=GradClipAlgorithmType.NORM, model=None)[source]

Clips the gradients.

Return type

None
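
Given the signature, this method plausibly dispatches to the two helpers above based on gradient_clip_algorithm. A sketch of that dispatch; the early return for a non-positive clip value, and the assumption that the enum also defines a VALUE member, are not confirmed by this page:

    from pytorch_lightning.utilities import GradClipAlgorithmType

    def clip_gradients_sketch(plugin, optimizer, clip_val,
                              algorithm=GradClipAlgorithmType.NORM):
        if clip_val <= 0:
            return  # assumption: a non-positive value disables clipping
        if algorithm == GradClipAlgorithmType.VALUE:
            plugin.clip_grad_by_value(optimizer, clip_val)
        elif algorithm == GradClipAlgorithmType.NORM:
            plugin.clip_grad_by_norm(optimizer, clip_val)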

connect(model, optimizers, lr_schedulers)[source]

Connects this plugin to the accelerator and the training process.

Return type

Tuple[Module, List[Optimizer], List[Any]]
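
The return type mirrors the inputs, so the base class likely passes everything through unchanged; subclasses such as mixed-precision plugins would be the place to wrap or replace these objects. A sketch of that pass-through contract, not the library's verified implementation:

    def connect_sketch(model, optimizers, lr_schedulers):
        # Sketch of the pass-through contract: the base plugin has nothing
        # to wrap, so the inputs come back unchanged as the documented tuple.
        return model, optimizers, lr_schedulers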

master_params(optimizer)[source]

The master params of the model. The base class returns the plain model params here; this may differ in other precision plugins.

Return type

Iterator[Parameter]
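
Per the docstring, the base class simply yields the plain model parameters. Assuming they are read from the optimizer's param groups, the iterator could look like this sketch:

    def master_params_sketch(optimizer):
        # Yield every plain model parameter the optimizer manages.
        for group in optimizer.param_groups:
            yield from group["params"]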

post_backward(model, closure_loss)[source]

Runs after the precision plugin executes backward.

Parameters
  • model (LightningModule) – the model to be optimized

  • closure_loss (Tensor) – the loss value obtained from the closure

Return type

Tensor
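
A hypothetical override of this hook; the non-finite-gradient check is purely illustrative, and the only documented contract is that a Tensor (the possibly transformed loss) is returned:

    import torch
    from pytorch_lightning.plugins.precision import PrecisionPlugin

    class FiniteGradPlugin(PrecisionPlugin):
        precision = 32

        def post_backward(self, model, closure_loss):
            # Hypothetical check: fail fast on non-finite gradients.
            for name, p in model.named_parameters():
                if p.grad is not None and not torch.isfinite(p.grad).all():
                    raise ValueError(f"non-finite gradient in {name}")
            return super().post_backward(model, closure_loss)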

post_optimizer_step(optimizer, optimizer_idx)[source]

Hook that runs after each optimizer step.

Return type

None
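
An illustrative override; the step counter is hypothetical bookkeeping, and nothing needs to be returned per the documented return type:

    from pytorch_lightning.plugins.precision import PrecisionPlugin

    class StepCountingPlugin(PrecisionPlugin):
        precision = 32

        def __init__(self):
            super().__init__()
            self.steps_taken = 0  # hypothetical bookkeeping

        def post_optimizer_step(self, optimizer, optimizer_idx):
            # Runs after every optimizer step; the hook returns None.
            self.steps_taken += 1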

pre_backward(model, closure_loss)[source]

Runs before the precision plugin executes backward.

Parameters
  • model (LightningModule) – the model to be optimized

  • closure_loss (Tensor) – the loss value obtained from the closure

Return type

Tensor
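
Because this hook returns a Tensor, an override can substitute the loss, assuming (as the return type suggests, though this page does not confirm it) that the caller uses the returned value for the subsequent backward pass. A hypothetical sketch:

    from pytorch_lightning.plugins.precision import PrecisionPlugin

    class LossScalingPlugin(PrecisionPlugin):
        precision = 32

        def pre_backward(self, model, closure_loss):
            # Hypothetical: hand back a scaled loss; the returned tensor is
            # assumed to be what the subsequent backward pass operates on.
            return closure_loss * 2.0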

pre_optimizer_step(model, optimizer, optimizer_idx, lambda_closure, **kwargs)[source]

Hook that runs before each optimizer step.

Return type

bool
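
An override sketch; the logging is hypothetical, and the bool return value is assumed (not confirmed by this page) to tell the caller whether it should still run the optimizer step itself:

    from pytorch_lightning.plugins.precision import PrecisionPlugin

    class VerbosePlugin(PrecisionPlugin):
        precision = 32

        def pre_optimizer_step(self, model, optimizer, optimizer_idx,
                               lambda_closure, **kwargs):
            # Hypothetical: log, then defer to the base hook. The returned
            # bool is assumed to signal whether the caller should proceed
            # with the optimizer step.
            print(f"about to step optimizer {optimizer_idx}")
            return super().pre_optimizer_step(
                model, optimizer, optimizer_idx, lambda_closure, **kwargs
            )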