DDPStrategy

class lightning.fabric.strategies.DDPStrategy(accelerator=None, parallel_devices=None, cluster_environment=None, checkpoint_io=None, precision=None, process_group_backend=None, timeout=datetime.timedelta(seconds=1800), start_method='popen', **kwargs)[source]

Bases: ParallelStrategy

Strategy for multi-process single-device training on one or multiple nodes.
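A minimal sketch of typical usage, assuming a two-GPU machine: the strategy is constructed explicitly and handed to Fabric (the shortcut string strategy="ddp" selects the same class with defaults).

    from datetime import timedelta

    from lightning.fabric import Fabric
    from lightning.fabric.strategies import DDPStrategy

    strategy = DDPStrategy(
        process_group_backend="nccl",     # "gloo" for CPU-only training
        timeout=timedelta(seconds=1800),  # collective-op timeout (the default)
        start_method="popen",             # how worker processes are started
    )
    fabric = Fabric(accelerator="cuda", devices=2, strategy=strategy)
    fabric.launch()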

_configure_launcher()[source]

Attaches the launcher appropriate for this strategy and the current environment.

Return type:

None

all_reduce(tensor, group=None, reduce_op='mean')[source]

Reduces a tensor from several distributed processes to one aggregated tensor.

Parameters:
  • tensor (Tensor) – the tensor to sync and reduce

  • group (Optional[Any]) – the process group to gather results from. Defaults to all processes (world)

  • reduce_op (Union[ReduceOp, str, None]) – the reduction operation. Defaults to 'mean'/'avg'. Can also be the string 'sum' to calculate the sum during reduction.

Return type:

Tensor

Returns:

the reduced value, except if the input was not a tensor, in which case the output is returned unchanged
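A short sketch, assuming `strategy` is a fully set-up instance inside a launched process (in practice this is usually reached through the equivalent collective on Fabric, which delegates to the strategy):

    import torch

    # Per-rank scalar, placed on this process's device:
    loss = torch.tensor(0.5, device=strategy.root_device)

    avg_loss = strategy.all_reduce(loss, reduce_op="mean")  # same result on all ranks
    total = strategy.all_reduce(loss, reduce_op="sum")      # sum across ranks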

barrier(*args, **kwargs)[source]

Synchronizes all processes, blocking each one until the whole group has entered this function.

Parameters:

name – an optional name to pass into barrier.

Return type:

None
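A sketch of the classic use case, assuming a running Fabric whose barrier() delegates to this method; download_dataset and load_dataset are hypothetical helpers:

    if fabric.global_rank == 0:
        download_dataset("/data")    # hypothetical: only rank 0 downloads
    fabric.barrier()                 # everyone waits until rank 0 is done
    dataset = load_dataset("/data")  # hypothetical: now safe on every rank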

broadcast(obj, src=0)[source]

Broadcasts an object to all processes.

Parameters:
  • obj (TypeVar(TBroadcast)) – the object to broadcast

  • src (int) – source rank

Return type:

TypeVar(TBroadcast)
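A sketch, assuming a running Fabric: rank 0 picks a value and every other rank receives it (the object can be any picklable value, not only a tensor):

    import random

    seed = random.randint(0, 2**31 - 1) if fabric.global_rank == 0 else None
    seed = fabric.broadcast(seed, src=0)  # every rank now holds rank 0's value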

get_module_state_dict(module)[source]

Returns the model's state dict.

Return type:

Dict[str, Union[Any, Tensor]]

load_module_state_dict(module, state_dict, strict=True)[source]

Loads the given state into the model.

Return type:

None
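A sketch of a checkpoint round trip built on these two hooks, assuming `strategy` and `model` come from a running setup (Fabric.save and Fabric.load are the higher-level entry points):

    import torch

    # Save: extract a plain state dict from the (possibly wrapped) model.
    state = strategy.get_module_state_dict(model)
    torch.save(state, "model.ckpt")

    # Load: restore the weights, failing on missing or unexpected keys.
    strategy.load_module_state_dict(model, torch.load("model.ckpt"), strict=True)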

module_to_device(module)[source]

Moves the model to the correct device.

Return type:

None

setup_environment()[source]

Sets up any processes or distributed connections.

This must be called by the framework at the beginning of every process, before any distributed communication takes place.

Return type:

None

setup_module(module)[source]

Wraps the model in a DistributedDataParallel module.

Return type:

DistributedDataParallel
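A sketch of the order in which the framework applies these hooks; the return value is a regular torch.nn.parallel.DistributedDataParallel wrapper around the original model:

    import torch.nn as nn

    model = nn.Linear(32, 2)
    strategy.module_to_device(model)          # move parameters to root_device
    ddp_model = strategy.setup_module(model)  # wrap for gradient synchronization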

property distributed_sampler_kwargs: Dict[str, Any]

Arguments for the DistributedSampler.

If this property is not defined, or it returns None, then the DistributedSampler will not be used.
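A sketch of how these keyword arguments feed a DistributedSampler so each process draws a disjoint shard of the data; `dataset` is a hypothetical map-style dataset (Fabric.setup_dataloaders applies this automatically):

    from torch.utils.data import DataLoader, DistributedSampler

    sampler = DistributedSampler(dataset, **strategy.distributed_sampler_kwargs)
    loader = DataLoader(dataset, batch_size=64, sampler=sampler)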

property root_device: device

Returns the root device.