DDPFullyShardedNativeStrategy

class pytorch_lightning.strategies.DDPFullyShardedNativeStrategy(accelerator=None, parallel_devices=None, cluster_environment=None, checkpoint_io=None, precision_plugin=None, process_group_backend=None, cpu_offload=None, backward_prefetch=None, mixed_precision=None, activation_checkpointing=None, **kwargs)[source]

Bases: pytorch_lightning.strategies.parallel.ParallelStrategy

Strategy for Fully Sharded Data Parallel provided by torch.distributed.

Warning

DDPFullyShardedNativeStrategy is in BETA and subject to change. The interface may introduce breaking changes and new features with the next release of PyTorch.

Fully Sharded Training shards the entire model across all available GPUs, allowing you to scale model size, whilst using efficient communication to reduce overhead. In practice, this means we can remain at parity with PyTorch DDP, whilst scaling our model sizes dramatically. The technique is similar to ZeRO-Stage 3.

For more information, check out the PyTorch blog post introducing the Fully Sharded Data Parallel API.

Defaults have been set and options have been exposed, but they may require tuning depending on your memory/speed trade-off. We suggest having a look at this tutorial for more information. A minimal usage sketch follows the parameter list below.

Parameters:
  • cpu_offload (Union[bool, CPUOffload, None]) – Enable offloading parameters and gradients to CPU to save GPU memory at the cost of speed. You can also pass a config: cpu_offload=CPUOffload(offload_params=True). Note that this currently implicitly enables gradient offloading to CPU so that parameters and gradients are on the same device to work with the optimizer. This API is subject to change. Default: no offloading

  • backward_prefetch (Optional[BackwardPrefetch]) – This is an experimental feature that is subject to change in the near future. It allows users to enable two different backward prefetching algorithms to help backward communication and computation overlap. The pros and cons of each algorithm are explained in the BackwardPrefetch class.

  • mixed_precision (Optional[MixedPrecision]) – Mixed Precision config. By default, Lightning will enable FP16 if precision=16 or BF16 if precision=bf16 unless a config is passed in. This is only available in PyTorch 1.12 and later.

  • activation_checkpointing (Union[Type[Module], List[Type[Module]], None]) – A single layer or a list of layer classes for which you want to enable activation checkpointing. This is typically your transformer block (including attention + feed-forward). Enabling this can free up a significant amount of memory at the cost of speed since activations in these layers need to be recomputed during backpropagation.

  • **kwargs (Any) – Passed to the FSDP context manager which will configure the FSDP class when wrapping modules.
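
A minimal usage sketch, assuming PyTorch 1.12+ and a machine with 4 GPUs. MyTransformerBlock stands in for a layer class from your own model and is commented out here:

    from pytorch_lightning import Trainer
    from pytorch_lightning.strategies import DDPFullyShardedNativeStrategy
    from torch.distributed.fsdp import CPUOffload

    strategy = DDPFullyShardedNativeStrategy(
        cpu_offload=CPUOffload(offload_params=True),    # trade step speed for GPU memory
        # activation_checkpointing=MyTransformerBlock,  # hypothetical layer class whose activations are recomputed in backward
    )

    trainer = Trainer(
        accelerator="gpu",
        devices=4,
        precision=16,  # per the note above, Lightning builds an FP16 MixedPrecision config unless one is passed
        strategy=strategy,
    )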

barrier(name=None)[source]

Synchronizes all processes, blocking each process until the whole group enters this function (see the sketch below).

Parameters:

name (Optional[str]) – an optional name to pass into barrier.

Return type:

None
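
A hypothetical usage sketch (not from the Lightning docs), assuming the barrier is called from a LightningModule hook on a multi-GPU run:

    import os

    import pytorch_lightning as pl


    class BarrierExample(pl.LightningModule):  # minimal illustrative module, other hooks omitted
        def on_fit_start(self):
            # Rank 0 prepares a shared directory; every rank waits at the barrier
            # so no process touches it before it exists.
            if self.trainer.is_global_zero:
                os.makedirs("shared_cache", exist_ok=True)
            self.trainer.strategy.barrier("shared_cache_ready")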

broadcast(obj, src=0)[source]

Broadcasts an object to all processes.

Parameters:
  • obj (TypeVar(TBroadcast)) – the object to broadcast

  • src (int) – source rank

Return type:

TypeVar(TBroadcast)
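
A hypothetical sketch along the same lines, where rank 0 builds a Python object and every rank receives an identical copy:

    import pytorch_lightning as pl


    class BroadcastExample(pl.LightningModule):  # minimal illustrative module, other hooks omitted
        def on_train_start(self):
            # Only rank 0 constructs the object; broadcast hands the same object to all ranks.
            run_config = {"seed": 1234} if self.trainer.is_global_zero else None
            self.run_config = self.trainer.strategy.broadcast(run_config, src=0)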

model_sharded_context()[source]

Provide a hook to create modules in a distributed-aware context. This is useful when we would like to shard the model instantly, which can save memory and initialization time for extremely large models.

Returns: Model parallel context.

Return type:

Generator
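
With this strategy, layer creation is typically deferred to the LightningModule's configure_sharded_model hook, which Lightning runs inside this context. A sketch with an illustrative module:

    import torch
    import pytorch_lightning as pl


    class ShardedModel(pl.LightningModule):  # minimal illustrative module, other hooks omitted
        def configure_sharded_model(self):
            # Layers created here are instantiated in a distributed-aware context,
            # so they can be sharded as they are built instead of being fully
            # materialized on every rank first.
            self.block = torch.nn.Sequential(
                torch.nn.Linear(32, 32),
                torch.nn.ReLU(),
                torch.nn.Linear(32, 2),
            )

        def forward(self, x):
            return self.block(x)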

model_to_device()[source]

Moves the model to the correct device.

Return type:

None

predict_step(*args, **kwargs)[source]

The actual predict step.

See predict_step() for more details.

Return type:

Union[Tensor, Dict[str, Any]]

reduce(tensor, group=None, reduce_op='mean')[source]

Reduces a tensor from several distributed processes to one aggregated tensor.

Parameters:
  • tensor (Union[Tensor, Any]) – the tensor to sync and reduce

  • group (Optional[Any]) – the process group to gather results from. Defaults to all processes (world)

  • reduce_op (Union[ReduceOp, str, None]) – the reduction operation. Defaults to 'mean'/'avg'. Can also be a string 'sum' to calculate the sum during reduction.

Return type:

Tensor

Returns:

the reduced value, except when the input was not a tensor, in which case the output is returned unchanged
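
A hypothetical sketch that averages a per-rank scalar across processes from a LightningModule hook:

    import torch
    import pytorch_lightning as pl


    class ReduceExample(pl.LightningModule):  # minimal illustrative module, other hooks omitted
        def on_validation_epoch_end(self):
            local_value = torch.tensor(123.0, device=self.device)  # placeholder per-rank value
            # With reduce_op="mean", every rank gets the global average back.
            global_mean = self.trainer.strategy.reduce(local_value, reduce_op="mean")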

setup(trainer)[source]

Sets up plugins for the trainer fit and creates optimizers.

Parameters:

trainer (Trainer) – the trainer instance

Return type:

None

setup_environment()[source]

Sets up any processes or distributed connections.

This is called before the LightningModule/DataModule setup hook which allows the user to access the accelerator environment before setup is complete.

Return type:

None

setup_optimizers(trainer)[source]

Creates optimizers and schedulers.

Parameters:

trainer (Trainer) – the Trainer, these optimizers should be connected to

Return type:

None

teardown()[source]

This method is called to tear down the training process.

It is the right place to release memory and free other resources.

Return type:

None

test_step(*args, **kwargs)[source]

The actual test step.

See test_step() for more details.

Return type:

Union[Tensor, Dict[str, Any], None]

training_step(*args, **kwargs)[source]

The actual training step.

See training_step() for more details.

Return type:

Union[Tensor, Dict[str, Any]]

validation_step(*args, **kwargs)[source]

The actual validation step.

See validation_step() for more details.

Return type:

Union[Tensor, Dict[str, Any], None]

property root_device: torch.device

Return the root device.