MixedPrecision

class lightning_fabric.plugins.precision.MixedPrecision(precision, device, scaler=None)[source]

Bases: lightning_fabric.plugins.precision.precision.Precision

Plugin for Automatic Mixed Precision (AMP) training with torch.autocast.

Parameters:
  • precision (Literal['16-mixed', 'bf16-mixed']) – Whether to use torch.float16 ('16-mixed') or torch.bfloat16 ('bf16-mixed')

  • device (str) – The device string passed to torch.autocast, e.g. 'cuda'

  • scaler (Optional[GradScaler]) – An optional torch.cuda.amp.GradScaler to use; with '16-mixed', one is created automatically if none is given
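
For orientation, a minimal sketch of constructing the plugin and handing it to Fabric. In practice Fabric(precision="16-mixed") builds an equivalent plugin internally, so the explicit form is shown only for illustration; the CUDA device is an assumption.

    import torch
    from lightning_fabric import Fabric
    from lightning_fabric.plugins.precision import MixedPrecision

    # Build the plugin explicitly (assumes a CUDA device is available);
    # Fabric(precision="16-mixed") would construct an equivalent plugin.
    precision = MixedPrecision(precision="16-mixed", device="cuda")
    fabric = Fabric(accelerator="cuda", devices=1, plugins=[precision])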
backward(tensor, model, *args, **kwargs)[source]

Performs the actual backpropagation.

Parameters:
  • tensor (Tensor) – The tensor that will be used for backpropagation

  • model (Optional[Module]) – The module that was involved in producing the tensor and whose parameters need the gradients

Return type:

None
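
A hedged sketch of calling backward() directly, reusing the precision plugin from the construction example above; model and x are placeholder names. Inside Fabric, fabric.backward(loss) performs this call for you.

    model = torch.nn.Linear(8, 1).to("cuda")
    x = torch.randn(4, 8, device="cuda")

    with precision.forward_context():
        loss = model(x).sum()

    # Under "16-mixed" the loss is assumed to be scaled by the plugin's
    # GradScaler before tensor.backward() is called.
    precision.backward(loss, model)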

convert_input(data)[source]

Convert model inputs (forward) to the floating point precision type of this plugin.

This is a no-op for tensors that are not of floating-point type or already have the desired type.

Return type:

Tensor
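
A small sketch of the documented conversion behavior, reusing the plugin from the construction example and assuming the desired dtype for '16-mixed' is torch.float16:

    x32 = torch.randn(8, 8)                  # float32 input
    x16 = precision.convert_input(x32)       # cast to the plugin's dtype
    mask = torch.zeros(8, dtype=torch.bool)
    mask = precision.convert_input(mask)     # non-floating-point: returned as-is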

forward_context()[source]

A context manager that wraps the model forward, training_step, evaluation_step, and predict_step in torch.autocast.

Return type:

Generator[None, None, None]
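
A minimal usage sketch, reusing precision, model, and x from the snippets above:

    # Ops inside the context run under torch.autocast and may execute in
    # float16/bfloat16 where that is considered numerically safe.
    with precision.forward_context():
        out = model(x)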

load_state_dict(state_dict)[source]

Called when loading a checkpoint. Restores the precision plugin state from the given state_dict (see the round-trip sketch after state_dict() below).

Parameters:

state_dict (Dict[str, Any]) – The precision plugin state returned by state_dict().

Return type:

None

optimizer_step(optimizer, **kwargs)[source]

Hook to run the optimizer step.

Return type:

Any
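
A hedged sketch of a full step, reusing the earlier placeholder names; under '16-mixed' this call is assumed to route through the GradScaler (unscale, step, update) and to fall back to a plain optimizer step otherwise.

    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

    with precision.forward_context():
        loss = model(x).sum()
    precision.backward(loss, model)
    precision.optimizer_step(optimizer)  # assumed to go through the GradScaler
    optimizer.zero_grad()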

state_dict()[source]

Called when saving a checkpoint. Generates the precision plugin state to include in the checkpoint.

Return type:

Dict[str, Any]

Returns:

A dictionary containing precision plugin state.
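
A round-trip sketch covering state_dict() and load_state_dict(); for '16-mixed' the saved state is assumed to be the GradScaler's state, and the file name is a placeholder.

    # Save the plugin state alongside the rest of the checkpoint ...
    torch.save({"precision": precision.state_dict()}, "checkpoint.pt")

    # ... and restore it into a freshly constructed plugin.
    restored = MixedPrecision(precision="16-mixed", device="cuda")
    restored.load_state_dict(torch.load("checkpoint.pt")["precision"])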

