DDPStrategy¶
- class lightning.pytorch.strategies.DDPStrategy(accelerator=None, parallel_devices=None, cluster_environment=None, checkpoint_io=None, precision_plugin=None, ddp_comm_state=None, ddp_comm_hook=None, ddp_comm_wrapper=None, model_averaging_period=None, process_group_backend=None, timeout=datetime.timedelta(seconds=1800), start_method='popen', **kwargs)[source]¶
- Bases: lightning.pytorch.strategies.parallel.ParallelStrategy
- Strategy for multi-process single-device training on one or multiple nodes.
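A minimal usage sketch: passing a configured DDPStrategy to the Trainer instead of the string shorthand strategy="ddp". The fp16 compression hook comes from torch.distributed.algorithms.ddp_comm_hooks, which ships with PyTorch; the commented-out model is a placeholder.

```python
from datetime import timedelta

from lightning.pytorch import Trainer
from lightning.pytorch.strategies import DDPStrategy
from torch.distributed.algorithms.ddp_comm_hooks import default_hooks

# Configure the strategy explicitly, e.g. to pick the process group backend
# or attach a gradient-compression communication hook.
strategy = DDPStrategy(
    process_group_backend="nccl",                    # use "gloo" on CPU-only machines
    timeout=timedelta(minutes=30),                   # collective-operation timeout (the default)
    ddp_comm_hook=default_hooks.fp16_compress_hook,  # compress gradients to fp16 before all-reduce
)

trainer = Trainer(accelerator="gpu", devices=2, strategy=strategy)
# trainer.fit(MyLightningModule())  # placeholder LightningModule
```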
- barrier(*args, **kwargs)[source]¶
- Synchronizes all processes, blocking each process until the whole group enters this function.
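A sketch of the typical pattern, assuming a LightningModule hook: rank zero performs some one-off work while the remaining ranks wait at the barrier (prepare_dataset is a placeholder).

```python
import lightning.pytorch as pl


def prepare_dataset():
    """Placeholder for one-time work such as downloading data."""


class MyModel(pl.LightningModule):
    def setup(self, stage):
        # Do the one-off work on rank 0 only ...
        if self.trainer.is_global_zero:
            prepare_dataset()
        # ... and block every process here until the whole group has arrived,
        # so no rank reads the data before it exists.
        self.trainer.strategy.barrier()
```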
 - on_exception(exception)[source]¶
- Called when the trainer execution is interrupted by an exception.
- Return type: None

 - optimizer_step(optimizer, closure, model=None, **kwargs)[source]¶
- Performs the actual optimizer step.
- Parameters:
  - optimizer – the optimizer performing the step
  - closure – closure calculating the loss value
  - model – reference to the model, optionally defining optimizer step related hooks
  - **kwargs – any extra arguments passed to optimizer.step
- Return type: Any
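The closure bundles the forward pass, loss computation, and backward pass, and the strategy ultimately hands it to optimizer.step(), mirroring the plain PyTorch contract. A minimal sketch of that contract with a vanilla optimizer (the model and data are illustrative):

```python
import torch

model = torch.nn.Linear(4, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(8, 4), torch.randn(8, 1)


def closure():
    # Recompute loss and gradients; some optimizers (e.g. LBFGS) call
    # this more than once per step, which is why it is a closure.
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(model(x), y)
    loss.backward()
    return loss


optimizer.step(closure)  # the call Strategy.optimizer_step ultimately drives
```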
 
 - reduce(tensor, group=None, reduce_op='mean')[source]¶
- Reduces a tensor from several distributed processes to one aggregated tensor.
- Parameters:
  - tensor – the tensor to sync and reduce
  - group – the process group to gather results from; defaults to all processes (world)
  - reduce_op – the reduction operation; defaults to 'mean'. Can also be the string 'sum' to calculate the sum during reduction
- Return type: Tensor
- Returns: the reduced value, except when the input was not a tensor, in which case the output is returned unchanged.
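A sketch of averaging a per-process scalar across all ranks from inside a LightningModule; self.trainer.strategy is the standard handle on the active strategy, and the metric value here is illustrative.

```python
import torch
import lightning.pytorch as pl


class MyModel(pl.LightningModule):
    def on_validation_epoch_end(self):
        # Each process holds its own local value ...
        local_metric = torch.tensor(0.5, device=self.device)  # illustrative value
        # ... and reduce averages it across every process in the group.
        global_metric = self.trainer.strategy.reduce(local_metric, reduce_op="mean")
        if self.trainer.is_global_zero:
            print(f"mean over all ranks: {global_metric.item()}")
```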
 
 - setup_environment()[source]¶
- Set up any processes or distributed connections.
- This is called before the LightningModule/DataModule setup hook, which allows the user to access the accelerator environment before setup is complete.
- Return type: None
 
 - teardown()[source]¶
- This method is called to tear down the training process.
- It is the right place to release memory and free other resources.
- Return type: None
 
 - training_step(*args, **kwargs)[source]¶
- The actual training step.
- See training_step() for more details.
 - validation_step(*args, **kwargs)[source]¶
- The actual validation step.
- See validation_step() for more details.
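Under this strategy both hooks run independently in every process on that process's shard of the data, while gradient synchronization happens during the backward pass, so the hooks look exactly like single-device code. A minimal sketch (the module itself is illustrative):

```python
import torch
import lightning.pytorch as pl


class LitRegressor(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(4, 1)

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = torch.nn.functional.mse_loss(self.layer(x), y)
        self.log("train_loss", loss, sync_dist=True)  # average the logged value across ranks
        return loss

    def validation_step(self, batch, batch_idx):
        x, y = batch
        loss = torch.nn.functional.mse_loss(self.layer(x), y)
        self.log("val_loss", loss, sync_dist=True)

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.1)
```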
 - property root_device: torch.device¶
- Return the root device.
- Return type: torch.device
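A small sketch of reading the root device from a callback, for example to allocate a buffer directly on the strategy's primary device (the callback is illustrative):

```python
import torch
import lightning.pytorch as pl


class DevicePrinter(pl.Callback):
    def on_fit_start(self, trainer, pl_module):
        device = trainer.strategy.root_device    # e.g. device(type='cuda', index=0)
        scratch = torch.zeros(8, device=device)  # allocated on the root device
        print(f"rank {trainer.global_rank}: root device {device}, buffer on {scratch.device}")
```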