FSDPPrecision
- class lightning.fabric.plugins.precision.FSDPPrecision(precision, scaler=None)[source]
- Bases: Precision
Precision plugin for training with Fully Sharded Data Parallel (FSDP).
Warning
This is an experimental feature.
- Parameters:
  - precision (Literal['32-true', '16-true', 'bf16-true', '16-mixed', 'bf16-mixed']) – Full precision (32-true), half precision (16-true, bf16-true), or mixed precision (16-mixed, bf16-mixed).
  - scaler (Optional[ShardedGradScaler]) – An optional torch.distributed.fsdp.sharded_grad_scaler.ShardedGradScaler to use.
- Raises:
  ValueError – If an unsupported precision is provided.
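A minimal construction sketch, assuming the plugin is handed to Fabric together with an FSDP strategy (Fabric can otherwise build an equivalent plugin from its own precision argument); devices=2 is an arbitrary choice for illustration:

```python
from lightning.fabric import Fabric
from lightning.fabric.plugins.precision import FSDPPrecision
from lightning.fabric.strategies import FSDPStrategy

# "16-mixed" selects mixed float16; the plugin may then manage a
# ShardedGradScaler internally (assumption; pass one via `scaler` to control it).
precision = FSDPPrecision(precision="16-mixed")

# An unsupported string such as "64-true" would raise the ValueError noted above.
fabric = Fabric(strategy=FSDPStrategy(), plugins=[precision], devices=2)
```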
- convert_input(data)[source]
Convert model inputs (forward) to the floating point precision type of this plugin.
This is a no-op in the base precision plugin, since we assume the data already has the desired type (default is torch.float32).
- Return type: Any
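A sketch of the expected behavior, assuming this subclass overrides the base no-op and casts floating-point inputs to the configured dtype:

```python
import torch
from lightning.fabric.plugins.precision import FSDPPrecision

precision = FSDPPrecision(precision="bf16-true")
batch = torch.randn(4, 8)               # created in the default torch.float32
batch = precision.convert_input(batch)  # expected to come back as torch.bfloat16
```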
- convert_module(module)[source]
Convert the module parameters to the precision type this plugin handles.
This is optional and depends on the precision limitations during optimization.
- Return type: Module
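A sketch of the contract only; with FSDP, parameter casting is typically deferred to FSDP's own mixed-precision configuration, so this call may leave the module unchanged (assumption):

```python
import torch
from lightning.fabric.plugins.precision import FSDPPrecision

precision = FSDPPrecision(precision="16-mixed")
module = torch.nn.Linear(8, 8)
# Always use the returned module, whether or not the call changed anything.
module = precision.convert_module(module)
```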
- convert_output(data)[source]
Convert outputs to the floating point precision type expected after model’s forward.
This is a no-op in the base precision plugin, since we assume the data already has the desired type (default is torch.float32).
- Return type: Any
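The mirror image of convert_input: a sketch assuming outputs are cast back to the default dtype once the forward pass is done:

```python
import torch
from lightning.fabric.plugins.precision import FSDPPrecision

precision = FSDPPrecision(precision="16-true")
out = torch.randn(2, 2, dtype=torch.float16)  # stand-in for a half-precision forward output
out = precision.convert_output(out)           # expected dtype: torch.float32 (the default)
```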
- forward_context()[source]
A context manager for managing model forward/training_step/evaluation_step/predict_step.
- Return type: ContextManager
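A usage sketch, assuming the mixed-precision settings enable an autocast-style region around the forward pass:

```python
import torch
from lightning.fabric.plugins.precision import FSDPPrecision

precision = FSDPPrecision(precision="bf16-mixed")
model = torch.nn.Linear(8, 8)
with precision.forward_context():
    # Ops in here run under the plugin's precision rules (e.g. autocast for "-mixed").
    out = model(torch.randn(4, 8))
```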
- load_state_dict(state_dict)[source]
Called when loading a checkpoint; implement this to reload the precision plugin state given the precision plugin state_dict (see the round-trip sketch after state_dict() below).
- module_init_context()[source]
Instantiate module parameters or tensors in the precision type this plugin handles.
This is optional and depends on the precision limitations during optimization.
- Return type: ContextManager
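A sketch, assuming a "true" half-precision setting makes new parameters materialize directly in that dtype at construction time:

```python
import torch
from lightning.fabric.plugins.precision import FSDPPrecision

precision = FSDPPrecision(precision="16-true")
with precision.module_init_context():
    layer = torch.nn.Linear(8, 8)
print(layer.weight.dtype)  # expected: torch.float16, no post-hoc .half() needed
```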
- state_dict()[source]
Called when saving a checkpoint; implement this to generate the precision plugin state_dict.
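A round-trip sketch for state_dict() and load_state_dict(), assuming the state of interest is the sharded gradient scaler used with "16-mixed":

```python
from lightning.fabric.plugins.precision import FSDPPrecision

precision = FSDPPrecision(precision="16-mixed")
state = precision.state_dict()  # written into the checkpoint alongside model/optimizer state

# On resume, a fresh plugin restores the saved scaler state (assumption).
restored = FSDPPrecision(precision="16-mixed")
restored.load_state_dict(state)
```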