ParallelStrategy
class lightning.pytorch.strategies.ParallelStrategy(accelerator=None, parallel_devices=None, cluster_environment=None, checkpoint_io=None, precision_plugin=None)[source]
Bases: lightning.pytorch.strategies.strategy.Strategy, abc.ABC

Plugin for training with multiple processes in parallel.

all_gather(tensor, group=None, sync_grads=False)[source]

Perform an all_gather on all processes.

Return type
    Tensor
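
For illustration, a minimal sketch of how this collective is usually reached from user code, assuming a LightningModule running under a multi-process strategy; LightningModule.all_gather forwards to the active strategy, and the metric tensor here is made up:

```python
import torch
import lightning.pytorch as pl


class GatherExample(pl.LightningModule):
    def validation_step(self, batch, batch_idx):
        # hypothetical per-rank scalar metric
        local_score = torch.rand(1, device=self.device)
        # forwards to the strategy's all_gather; with N processes the result
        # gains a leading dimension of size N. sync_grads=True would keep the
        # collective differentiable for the backward pass.
        all_scores = self.all_gather(local_score, sync_grads=False)
        if self.trainer.is_global_zero:
            self.print("mean score across ranks:", all_scores.mean())
```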
 
block_backward_sync()[source]

Blocks DDP gradient synchronization during the backward pass.

This is useful for skipping the sync when accumulating gradients, reducing communication overhead.

Returns: a context manager with sync behaviour off.

Return type
    Generator
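
A sketch of the gradient-accumulation pattern the description refers to, written as a standalone helper; compute_loss, the batch iterable, and the window size are hypothetical stand-ins for a real training loop:

```python
from typing import Callable, Iterable

import torch
from lightning.pytorch.strategies import ParallelStrategy


def accumulate_and_step(
    strategy: ParallelStrategy,
    batches: Iterable,
    compute_loss: Callable[..., torch.Tensor],  # hypothetical: returns a scalar loss
    optimizer: torch.optim.Optimizer,
    accumulate_grad_batches: int = 4,
) -> None:
    for step, batch in enumerate(batches):
        last_in_window = (step + 1) % accumulate_grad_batches == 0
        if not last_in_window:
            # intermediate micro-batch: skip the gradient all-reduce
            with strategy.block_backward_sync():
                compute_loss(batch).backward()
        else:
            # final micro-batch in the window: let gradients synchronize, then step
            compute_loss(batch).backward()
            optimizer.step()
            optimizer.zero_grad()
```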
 
reduce_boolean_decision(decision, all=True)[source]

Reduces a boolean decision over distributed processes. By default this is analogous to all from the standard library, returning True only if all input decisions evaluate to True. If all is set to False, it behaves like any instead.
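
For illustration, two small helpers that wrap the all/any semantics around a per-rank flag (the helper names are made up):

```python
from lightning.pytorch.strategies import ParallelStrategy


def all_ranks_agree(strategy: ParallelStrategy, local_flag: bool) -> bool:
    # True only if every process passed in True (default behaviour, like all())
    return strategy.reduce_boolean_decision(local_flag, all=True)


def any_rank_raised(strategy: ParallelStrategy, local_flag: bool) -> bool:
    # True if at least one process passed in True (like any())
    return strategy.reduce_boolean_decision(local_flag, all=False)
```
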
teardown()[source]

This method is called to tear down the training process.

It is the right place to release memory and free other resources.

Return type
    None
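
A sketch of a custom strategy that adds its own cleanup on top of the base behaviour; the DDPStrategy subclass and the cache-emptying call are illustrative assumptions, not part of this class:

```python
import torch
from lightning.pytorch.strategies import DDPStrategy


class CleanupDDPStrategy(DDPStrategy):
    # hypothetical subclass that frees extra resources at the end of a run
    def teardown(self) -> None:
        if torch.cuda.is_available():
            torch.cuda.empty_cache()  # release cached GPU memory held by this process
        super().teardown()  # let the base strategy do its own cleanup
```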
 
property is_global_zero: bool

Whether the current process is the rank zero process, not only on the local node but across all nodes.

Return type
    bool
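
A sketch of the usual pattern for work that must happen exactly once across the whole cluster, such as writing a file (the helper name and path are hypothetical):

```python
from lightning.pytorch.strategies import ParallelStrategy


def write_summary(strategy: ParallelStrategy, text: str, path: str = "summary.txt") -> None:
    # only the process with global rank 0 writes, regardless of how many nodes there are
    if strategy.is_global_zero:
        with open(path, "w") as f:
            f.write(text)
    # make every rank wait so none of them reads the file before it exists
    strategy.barrier()
```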
 
abstract property root_device: torch.device

Return the root device.

Return type
    device
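
Concrete strategies implement this property to report the device owned by the current process; a sketch of using it to place data there (the helper is made up):

```python
import torch
from lightning.pytorch.strategies import ParallelStrategy


def move_to_root_device(strategy: ParallelStrategy, tensor: torch.Tensor) -> torch.Tensor:
    # each concrete strategy reports the torch.device this process trains on
    return tensor.to(strategy.root_device)
```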