ParallelStrategy¶
- class pytorch_lightning.strategies.ParallelStrategy(accelerator=None, parallel_devices=None, cluster_environment=None, checkpoint_io=None, precision_plugin=None)[source]¶
Bases: pytorch_lightning.strategies.strategy.Strategy, abc.ABC
Plugin for training with multiple processes in parallel.
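ParallelStrategy itself is abstract; in practice a concrete subclass such as DDPStrategy is selected through the Trainer rather than instantiated directly. A minimal sketch (the accelerator and device count are illustrative):

```python
from pytorch_lightning import Trainer
from pytorch_lightning.strategies import DDPStrategy, ParallelStrategy

# The Trainer owns the strategy and drives the parallel processes.
trainer = Trainer(accelerator="gpu", devices=2, strategy=DDPStrategy())
assert isinstance(trainer.strategy, ParallelStrategy)
```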
- all_gather(tensor, group=None, sync_grads=False)[source]¶
Perform an all_gather on all processes.
- Return type: Tensor
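A usage sketch inside a LightningModule hook, where `self.trainer.strategy` is the active ParallelStrategy; the `local_score` attribute is hypothetical, not part of the Lightning API:

```python
import torch
import pytorch_lightning as pl

class MyModule(pl.LightningModule):
    def on_validation_epoch_end(self):
        # `self.local_score` is an assumed per-process scalar.
        local = torch.tensor([self.local_score], device=self.device)
        # The result gains a leading dimension of size world_size:
        # one entry per process.
        gathered = self.trainer.strategy.all_gather(local, sync_grads=False)
        if self.trainer.strategy.is_global_zero:
            print("mean across ranks:", gathered.mean().item())
```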
- block_backward_sync()[source]¶
Blocks DDP gradient synchronization behaviour on the backward pass. This is useful for skipping synchronization when accumulating gradients, reducing communication overhead.
Returns: a context manager with sync behaviour off.
- Return type: Generator
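A simplified sketch of the gradient-accumulation pattern this enables; `strategy`, `model`, `optimizer`, `loss_fn`, and `train_batches` are assumed to exist:

```python
accumulate_grad_batches = 4  # illustrative value

for i, batch in enumerate(train_batches):
    is_last_microbatch = (i + 1) % accumulate_grad_batches == 0
    if not is_last_microbatch:
        # Suppress DDP's gradient all-reduce for this backward pass;
        # gradients simply accumulate locally.
        with strategy.block_backward_sync():
            loss_fn(model(batch)).backward()
    else:
        # Synchronization happens here, once per accumulation window.
        loss_fn(model(batch)).backward()
        optimizer.step()
        optimizer.zero_grad()
```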
- reduce_boolean_decision(decision, all=True)[source]¶
Reduces a boolean decision over distributed processes. By default this is analogous to `all` from the standard library, returning `True` only if all input decisions evaluate to `True`. If `all` is set to `False`, it behaves like `any` instead.
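For example, combining a per-process early-stopping signal; `strategy`, `val_loss`, and `stop_threshold` are assumed:

```python
# Each process may reach a different local decision.
local_should_stop = val_loss > stop_threshold

# Default (all=True): True only if *every* process wants to stop.
stop_if_all_agree = strategy.reduce_boolean_decision(local_should_stop)

# all=False: True if *any* process wants to stop.
stop_if_any_agrees = strategy.reduce_boolean_decision(local_should_stop, all=False)
```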
- teardown()[source]¶
This method is called to tear down the training process.
It is the right place to release memory and free other resources.
- Return type: None
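A sketch of overriding teardown in a hypothetical custom subclass; the other abstract members it would need are omitted, so the class is illustrative only:

```python
import torch
from pytorch_lightning.strategies import ParallelStrategy

class MyParallelStrategy(ParallelStrategy):
    # ... other required abstract members omitted for brevity ...

    def teardown(self) -> None:
        # Release strategy-specific resources first (illustrative).
        if self.root_device.type == "cuda":
            torch.cuda.empty_cache()
        # Then let the base class run its own cleanup.
        super().teardown()
```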
- property is_global_zero: bool¶
Whether the current process is the rank zero process, not only on the local node but across all nodes.
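A common usage sketch: guard side effects so they run once per job rather than once per process (`trainer` is assumed to be a configured Trainer):

```python
if trainer.strategy.is_global_zero:
    # Runs on exactly one process across the whole cluster,
    # e.g. for writing summary files or printing final metrics.
    print("final results written by global rank 0")
```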
- abstract property root_device: torch.device¶
Return the root device.
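A sketch of how a concrete subclass might satisfy this property by indexing `parallel_devices` with the local rank, a common pattern in Lightning's built-in strategies; other abstract members are omitted, so the class is illustrative only:

```python
import torch
from pytorch_lightning.strategies import ParallelStrategy

class MyDeviceStrategy(ParallelStrategy):
    @property
    def root_device(self) -> torch.device:
        # The device assigned to this process on the local node.
        return self.parallel_devices[self.local_rank]
```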