Precision
- class lightning.fabric.plugins.precision.Precision[source]
Bases: object
Base class for all plugins handling the precision-specific parts of the training.
The class attribute precision must be overridden in child classes. The default value reflects fp32 training.
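For orientation, a minimal subclass sketch is shown below; the class name and the chosen precision string are illustrative assumptions, not part of the documented API.

```python
import torch
from lightning.fabric.plugins.precision import Precision


class Float16Precision(Precision):
    # Hypothetical subclass: announces fp16 training by overriding the class attribute.
    precision = "16-true"

    def convert_module(self, module: torch.nn.Module) -> torch.nn.Module:
        # Cast parameters and buffers to half precision (illustrative, not the library's code).
        return module.to(torch.float16)
```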
- backward(tensor, model, *args, **kwargs)[source]
Performs the actual backpropagation.
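In the base plugin this essentially delegates to the tensor's own backward call; a minimal usage sketch (the toy model and loss are assumptions for illustration):

```python
import torch
from lightning.fabric.plugins.precision import Precision

# Let the plugin drive backpropagation instead of calling loss.backward() directly.
precision = Precision()
model = torch.nn.Linear(4, 1)
loss = model(torch.randn(8, 4)).mean()
precision.backward(loss, model)  # the base class simply delegates to loss.backward()
print(model.weight.grad.shape)   # gradients are now populated
```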
- convert_input(data)[source]
Convert model inputs (forward) to the floating point precision type of this plugin.
This is a no-op in the base precision plugin, since we assume the data already has the desired type (default is torch.float32).
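A lower-precision subclass would typically override this hook to cast floating-point inputs; the sketch below is an assumption about how such an override might look, not the library's implementation.

```python
import torch
from typing import Any
from lightning.fabric.plugins.precision import Precision


class HalfInputPrecision(Precision):
    # Hypothetical subclass used only to illustrate convert_input.
    precision = "16-true"

    def convert_input(self, data: Any) -> Any:
        # Cast floating-point tensors to fp16; leave integer tensors and other objects untouched.
        if isinstance(data, torch.Tensor) and data.is_floating_point():
            return data.to(torch.float16)
        return data
```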
- convert_module(module)[source]
Convert the module parameters to the precision type this plugin handles.
This is optional and depends on the precision limitations during optimization.
- convert_output(data)[source]
Convert outputs to the floating point precision type expected after model’s forward.
This is a no-op in the base precision plugin, since we assume the data already has the desired type (default is torch.float32).
- forward_context()[source]
A context manager for wrapping the model's forward, training_step, evaluation_step, and predict_step calls.
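In a mixed-precision subclass this is the natural place for autocasting; a sketch assuming CUDA autocast (illustrative, not the library's own implementation):

```python
import torch
from contextlib import AbstractContextManager
from lightning.fabric.plugins.precision import Precision


class AutocastPrecision(Precision):
    # Hypothetical subclass: runs forward passes under torch.autocast.
    precision = "16-mixed"

    def forward_context(self) -> AbstractContextManager:
        # Everything executed inside this context uses fp16 autocasting on CUDA.
        return torch.autocast(device_type="cuda", dtype=torch.float16)
```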
- load_state_dict(state_dict)[source]
Called when loading a checkpoint. Implement to reload precision-plugin state from the given state_dict.
- main_params(optimizer)[source]
The main params of the model.
Returns the plain model params here; this may be different in other precision plugins.
- module_init_context()[source]
Instantiate module parameters or tensors in the precision type this plugin handles.
This is optional and depends on the precision limitations during optimization.
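One way a subclass can honor this is to temporarily switch the default dtype while parameters are being created; the body below is an assumed sketch, not the library's implementation.

```python
import torch
from contextlib import contextmanager
from typing import Generator
from lightning.fabric.plugins.precision import Precision


class HalfInitPrecision(Precision):
    # Hypothetical subclass used only to illustrate module_init_context.
    precision = "16-true"

    @contextmanager
    def module_init_context(self) -> Generator[None, None, None]:
        # Parameters created inside this context are allocated directly in fp16,
        # avoiding a full fp32 allocation followed by a cast.
        previous = torch.get_default_dtype()
        torch.set_default_dtype(torch.float16)
        try:
            yield
        finally:
            torch.set_default_dtype(previous)
```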
- post_backward(tensor, module)[source]
Runs after the precision plugin executes backward.
- pre_backward(tensor, module)[source]
Runs before the precision plugin executes backward.
- state_dict()[source]
Called when saving a checkpoint. Implement to generate the precision plugin's state_dict.
- teardown()[source]
This method is called to tear down the training process.
It is the right place to release memory and free other resources.
- tensor_init_context()[source]
Controls how tensors get created (device, dtype).
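A plugin built from these hooks is handed to Fabric through its plugins argument; a minimal usage sketch, reusing the illustrative AutocastPrecision subclass from above:

```python
from lightning.fabric import Fabric

# Fabric calls the plugin's hooks (convert_module, forward_context, backward, ...)
# at the appropriate points of the training loop.
fabric = Fabric(accelerator="cuda", devices=1, plugins=AutocastPrecision())
fabric.launch()
```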