DDPStrategy
- class lightning_fabric.strategies.DDPStrategy(accelerator=None, parallel_devices=None, cluster_environment=None, checkpoint_io=None, precision=None, process_group_backend=None, timeout=datetime.timedelta(seconds=1800), start_method='popen', **kwargs)
Bases: lightning_fabric.strategies.parallel.ParallelStrategy
Strategy for multi-process single-device training on one or multiple nodes.
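A minimal usage sketch, assuming the strategy is passed to Fabric; the accelerator and device count are illustrative, and extra keyword arguments are forwarded to DistributedDataParallel:

    from lightning_fabric import Fabric
    from lightning_fabric.strategies import DDPStrategy

    # Extra kwargs such as find_unused_parameters are forwarded to
    # torch.nn.parallel.DistributedDataParallel when the model is wrapped.
    strategy = DDPStrategy(find_unused_parameters=False)
    fabric = Fabric(accelerator="gpu", devices=2, strategy=strategy)
    fabric.launch()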
- all_reduce(tensor, group=None, reduce_op='mean')
Reduces a tensor from several distributed processes to one aggregated tensor.
- Parameters:
  - tensor – the tensor to sync and reduce.
  - group – the process group to reduce results across. Defaults to all processes (world).
  - reduce_op – the reduction operation. Defaults to 'mean'. Can also be a string such as 'sum', or a torch.distributed.ReduceOp.
- Return type:
  Tensor
- Returns:
  The reduced value, except when the input was not a tensor, in which case the output remains unchanged.
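For illustration, a hedged sketch of averaging a per-process metric across ranks; it assumes the launched fabric from the snippet above, whose all_reduce call delegates to the active strategy:

    import torch

    # Each rank holds its own value; after the reduction every rank sees the mean.
    local_loss = torch.tensor(0.25, device=fabric.device)
    global_loss = fabric.all_reduce(local_loss, reduce_op="mean")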
- barrier(*args, **kwargs)
Synchronizes all processes, blocking each process until the whole group enters this function.
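A hedged sketch of a common barrier pattern, again assuming the launched fabric from above; prepare_data is a hypothetical stand-in for work that should run on one process only:

    import os

    def prepare_data() -> None:
        # Hypothetical placeholder for one-time, single-process work.
        os.makedirs("data", exist_ok=True)

    if fabric.global_rank == 0:
        prepare_data()
    fabric.barrier()  # all other ranks block here until rank 0 arrives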
- setup_environment()
Set up any processes or distributed connections.
This must be called by the framework at the beginning of every process, before any distributed communication takes place.
- Return type:
  None
- setup_module(module)
Wraps the model into a DistributedDataParallel module.
- Return type:
  DistributedDataParallel
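A hedged sketch, assuming distributed initialization has already taken place (in practice Fabric calls this method for you via fabric.setup(model)); after the call the model behaves like a DistributedDataParallel instance that synchronizes gradients across processes:

    import torch.nn as nn

    model = nn.Linear(32, 4).to(strategy.root_device)
    ddp_model = strategy.setup_module(model)  # now wrapped in DistributedDataParallel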
- property distributed_sampler_kwargs: Dict[str, Any]
Arguments for the DistributedSampler. If this method is not defined, or it returns None, then the DistributedSampler will not be used.
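These kwargs typically carry num_replicas and rank so that each process samples a distinct shard of the dataset. A hedged sketch of the pattern (Fabric's setup_dataloaders applies it automatically):

    import torch
    from torch.utils.data import DataLoader, DistributedSampler, TensorDataset

    dataset = TensorDataset(torch.randn(100, 32))
    sampler = DistributedSampler(dataset, **strategy.distributed_sampler_kwargs)
    loader = DataLoader(dataset, sampler=sampler, batch_size=8)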
- property root_device: torch.device
Returns the root device.