FSDPStrategy¶
- class lightning.pytorch.strategies.FSDPStrategy(accelerator=None, parallel_devices=None, cluster_environment=None, checkpoint_io=None, precision_plugin=None, process_group_backend=None, timeout=datetime.timedelta(seconds=1800), cpu_offload=None, mixed_precision=None, auto_wrap_policy=None, activation_checkpointing=None, activation_checkpointing_policy=None, sharding_strategy='FULL_SHARD', state_dict_type='full', device_mesh=None, **kwargs)[source]¶
Bases: ParallelStrategy
Strategy for Fully Sharded Data Parallel provided by torch.distributed.
Fully Sharded Training shards the entire model across all available GPUs, allowing you to scale model size, whilst using efficient communication to reduce overhead. In practice, this means we can remain at parity with PyTorch DDP, whilst scaling our model sizes dramatically. The technique is similar to ZeRO-Stage 3.
For more information, check out this blogpost.
Defaults have been set and options have been exposed, but they may require tuning depending on the memory/speed trade-off you need. We suggest having a look at this tutorial for more information.
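For example, a minimal usage sketch (assuming a single machine with 4 GPUs; LitModel and the random dataloader are placeholders for your own module and data):

```python
import torch
import torch.nn as nn
import lightning.pytorch as pl
from lightning.pytorch.strategies import FSDPStrategy
from torch.utils.data import DataLoader, TensorDataset


class LitModel(pl.LightningModule):
    # Minimal placeholder module; substitute your own LightningModule.
    def __init__(self):
        super().__init__()
        self.layer = nn.Linear(32, 2)

    def training_step(self, batch, batch_idx):
        (x,) = batch  # TensorDataset yields a one-element tuple
        return self.layer(x).sum()

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.1)


train_loader = DataLoader(TensorDataset(torch.randn(64, 32)), batch_size=8)

# Shard the model across 4 GPUs; strategy="fsdp" is the equivalent string shorthand.
trainer = pl.Trainer(accelerator="gpu", devices=4, strategy=FSDPStrategy())
trainer.fit(LitModel(), train_loader)
```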
- Parameters:
  - cpu_offload¶ (Union[bool, CPUOffload, None]) – See cpu_offload parameter in torch.distributed.fsdp.FullyShardedDataParallel.
  - mixed_precision¶ (Optional[MixedPrecision]) – See mixed_precision parameter in torch.distributed.fsdp.FullyShardedDataParallel.
  - auto_wrap_policy¶ (Union[set[type[Module]], Callable[[Module, bool, int], bool], ModuleWrapPolicy, None]) – Same as auto_wrap_policy parameter in torch.distributed.fsdp.FullyShardedDataParallel. For convenience, this also accepts a set of the layer classes to wrap.
  - activation_checkpointing¶ (Union[type[Module], list[type[Module]], None]) – Deprecated. Use activation_checkpointing_policy.
  - activation_checkpointing_policy¶ (Union[set[type[Module]], Callable[[Module, bool, int], bool], ModuleWrapPolicy, None]) – Same as auto_wrap_policy parameter in torch.distributed.fsdp.FullyShardedDataParallel, but used when selecting the modules for which you want to enable activation checkpointing. Enabling this can free up a significant amount of memory at the cost of speed since activations in these layers need to be recomputed during backpropagation. For convenience, this also accepts a set of the layer classes to wrap.
  - sharding_strategy¶ (Union[ShardingStrategy, Literal['FULL_SHARD', 'SHARD_GRAD_OP', 'NO_SHARD', 'HYBRID_SHARD']]) – Select whether to shard model parameters, gradients, optimizer states, or a combination of them. Available values are:
    - "FULL_SHARD": Shards model parameters, gradients, and optimizer states (default).
    - "SHARD_GRAD_OP": Shards gradients and optimizer states only. Model parameters get replicated.
    - "NO_SHARD": No sharding (identical to regular DDP).
    - "HYBRID_SHARD": Shards model parameters, gradients, and optimizer states within a single machine, but replicates across machines. See also the device_mesh parameter below.

    Also accepts a torch.distributed.fsdp.ShardingStrategy enum value.
  - device_mesh¶ (Union[tuple[int], DeviceMesh, None]) – A tuple (replication size, sharding size) that defines over how many devices to shard and replicate the model. The product of the two numbers must equal the world size. Only valid in combination with the "HYBRID_SHARD" sharding strategy.
  - state_dict_type¶ (Literal['full', 'sharded']) – The format in which the state of the model and optimizers gets saved into the checkpoint.
    - "full": The full weights and optimizer states get assembled on rank 0 and saved to a single file.
    - "sharded": Each rank saves its shard of weights and optimizer states to a file. The checkpoint is a folder with as many files as the world size.
  - **kwargs¶ (Any) – See available parameters in torch.distributed.fsdp.FullyShardedDataParallel.
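A configuration sketch tying several of these parameters together (the transformer encoder layer class is only an illustrative choice; substitute the blocks of your own model):

```python
import torch.nn as nn
from lightning.pytorch.strategies import FSDPStrategy

# Wrap and activation-checkpoint each transformer encoder layer as its own FSDP unit.
strategy = FSDPStrategy(
    auto_wrap_policy={nn.TransformerEncoderLayer},
    activation_checkpointing_policy={nn.TransformerEncoderLayer},
    sharding_strategy="FULL_SHARD",  # shard parameters, gradients, and optimizer states
    state_dict_type="sharded",       # each rank writes its own shard of the checkpoint
)

# Hypothetical hybrid setup: shard within each 4-GPU node and replicate across 2 nodes
# (world size must be 2 * 4 = 8).
hybrid_strategy = FSDPStrategy(
    sharding_strategy="HYBRID_SHARD",
    device_mesh=(2, 4),  # (replication size, sharding size)
)
```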
- barrier(name=None)[source]¶
Synchronizes all processes, blocking them until the whole group enters this function.
- model_sharded_context()[source]¶
Provides a hook to create modules in a distributed-aware context. This is useful when we want to shard the model immediately, which for extremely large models saves memory and initialization time.
Returns: Model parallel context.
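In practice the Trainer enters this context for you. One common pattern, sketched here under the assumption that the large layers are created in the LightningModule's configure_model hook rather than in __init__:

```python
import torch.nn as nn
import lightning.pytorch as pl


class MyModel(pl.LightningModule):
    def configure_model(self):
        # This hook runs under the strategy's model_sharded_context(), so the
        # layers can be created in a distributed-aware way instead of first
        # being fully materialized on every rank.
        self.model = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=1024, nhead=16), num_layers=48
        )
```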
- optimizer_state(optimizer)[source]¶
Returns state of an optimizer.
Allows for syncing/collating optimizer state from processes in custom strategies.
- reduce(tensor, group=None, reduce_op='mean')[source]¶
Reduces a tensor from several distributed processes to one aggregated tensor.
- Parameters:
  - tensor¶ (Union[Tensor, Any]) – the tensor to sync and reduce
  - group¶ (Optional[Any]) – the process group to gather results from. Defaults to all processes (world)
  - reduce_op¶ (Union[ReduceOp, str, None]) – the reduction operation. Defaults to 'mean'/'avg'. Can also be a string 'sum' to calculate the sum during reduction.
- Return type:
- Returns:
reduced value, except when the input was not a tensor, in which case the output remains unchanged
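A small sketch of how this could be called from user code (assuming an already-running Trainer, so that trainer.strategy is this FSDPStrategy):

```python
import torch

# Hypothetical: average a per-rank scalar across all processes.
local_value = torch.tensor(1.0, device=trainer.strategy.root_device)
mean_value = trainer.strategy.reduce(local_value, reduce_op="mean")
```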
- save_checkpoint(checkpoint, filepath, storage_options=None)[source]¶
Save model/training states as a checkpoint file through state-dump and file-write.
- setup(trainer)[source]¶
Sets up the accelerator, plugins and initializes the optimizers (if needed).
- setup_environment()[source]¶
Set up any processes or distributed connections.
This is called before the LightningModule/DataModule setup hook, which allows the user to access the accelerator environment before setup is complete.
- Return type:
- teardown()[source]¶
This method is called to tear down the training process.
It is the right place to release memory and free other resources.
- Return type:
- property lightning_restore_optimizer: bool¶
Override to disable Lightning restoring optimizers/schedulers.
This is useful for strategies which manage restoring optimizers/schedulers.