ParallelStrategy
- class pytorch_lightning.strategies.ParallelStrategy(accelerator=None, parallel_devices=None, cluster_environment=None, checkpoint_io=None, precision_plugin=None)[source]
Bases: pytorch_lightning.strategies.strategy.Strategy, abc.ABC
Plugin for training with multiple processes in parallel.
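ParallelStrategy is abstract, so in practice one of its concrete subclasses (for example DDPStrategy) is selected through the Trainer. A minimal sketch, assuming a machine with two GPUs; the device count and the find_unused_parameters flag are illustrative:

```python
import pytorch_lightning as pl
from pytorch_lightning.strategies import DDPStrategy

# DDPStrategy is one concrete ParallelStrategy subclass; the accelerator and
# device count are illustrative and depend on the available hardware.
trainer = pl.Trainer(
    accelerator="gpu",
    devices=2,
    strategy=DDPStrategy(find_unused_parameters=False),
)
```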
- all_gather(tensor, group=None, sync_grads=False)[source]
Perform an all_gather on all processes.
- Return type: Tensor
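In user code this is usually reached through LightningModule.all_gather, which delegates to the active strategy. A minimal sketch, assuming a parallel strategy is in use; the hook and tensor shape are illustrative:

```python
import torch
import pytorch_lightning as pl

class GatherExample(pl.LightningModule):
    def validation_step(self, batch, batch_idx):
        local_scores = torch.rand(4)  # per-process tensor, shape is illustrative
        # Delegates to the strategy's all_gather; with a ParallelStrategy the
        # result gains a leading world-size dimension, here (world_size, 4).
        all_scores = self.all_gather(local_scores, sync_grads=False)
        return all_scores.mean()
```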
- block_backward_sync()[source]
Blocks DDP gradient synchronization behaviour on the backward pass.
This is useful for skipping sync when accumulating gradients, reducing communication overhead.
Returns: a context manager with sync behaviour off.
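This is mainly useful with manual optimization, where gradient-accumulation steps can skip DDP's all-reduce. A sketch assuming a DDP-style strategy is active; the accumulation length and the compute_loss helper are hypothetical:

```python
import pytorch_lightning as pl

class ManualAccumulation(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.automatic_optimization = False

    def training_step(self, batch, batch_idx):
        opt = self.optimizers()
        loss = self.compute_loss(batch)  # hypothetical helper
        if (batch_idx + 1) % 4 != 0:  # accumulate over 4 batches (illustrative)
            # Skip gradient sync on accumulation steps to reduce communication.
            with self.trainer.strategy.block_backward_sync():
                self.manual_backward(loss)
        else:
            self.manual_backward(loss)
            opt.step()
            opt.zero_grad()
```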
- reconciliate_processes(trace)[source]
Function to reconcile processes on failure.
- reduce_boolean_decision(decision)[source]
Reduce the early stopping decision across all processes.
- Return type: bool
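This is the primitive callbacks such as EarlyStopping use to keep all ranks in agreement. A sketch of combining a per-process decision inside a custom callback; the metric name and threshold are illustrative:

```python
import pytorch_lightning as pl

class ThresholdStopping(pl.Callback):
    """Hypothetical callback: stop only when every process agrees val_loss is low enough."""

    def on_validation_end(self, trainer, pl_module):
        val_loss = trainer.callback_metrics.get("val_loss")  # metric name is illustrative
        if val_loss is None:
            return
        local_decision = bool(val_loss < 0.1)  # threshold is illustrative
        # Reduce the per-process decision across all ranks so they stop together.
        should_stop = trainer.strategy.reduce_boolean_decision(local_decision)
        trainer.should_stop = trainer.should_stop or should_stop
```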
- teardown()[source]
This method is called to tear down the training process.
It is the right place to release memory and free other resources.
- Return type: None
- property is_global_zero: bool
Whether the current process is the rank zero process not only on the local node, but for all nodes.
- Return type: bool
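A common use is guarding side effects such as writing files so they happen once per job rather than once per process. A small sketch; the hook and file path are illustrative:

```python
import pytorch_lightning as pl

class RankZeroWriter(pl.LightningModule):
    def on_fit_end(self):
        # Every process runs this hook; only the global rank-zero process writes.
        if self.trainer.strategy.is_global_zero:
            with open("summary.txt", "w") as f:  # path is illustrative
                f.write("written once per job, not once per process\n")
```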
- property lightning_module: Optional[pytorch_lightning.core.lightning.LightningModule]
Returns the pure LightningModule without potential wrappers.
- Return type: Optional[LightningModule]
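With distributed strategies the strategy may hold a wrapped module (for example a DistributedDataParallel wrapper), while this property always yields the underlying LightningModule. A small sketch of the distinction inside a callback hook:

```python
import pytorch_lightning as pl

class InspectWrapping(pl.Callback):
    def on_train_start(self, trainer, pl_module):
        wrapped = trainer.strategy.model              # possibly a DDP wrapper
        plain = trainer.strategy.lightning_module     # always the bare LightningModule
        print(type(wrapped).__name__, type(plain).__name__)
        assert plain is pl_module
```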
- abstract property root_device: torch.device
Return the root device.
- Return type: torch.device
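Because root_device is abstract, every concrete parallel strategy supplies its own implementation. A sketch of how a subclass might do so, loosely following DDP-style strategies; the class name is hypothetical:

```python
import torch
from pytorch_lightning.strategies import ParallelStrategy

class MyParallelStrategy(ParallelStrategy):
    """Hypothetical subclass showing only the root_device implementation."""

    @property
    def root_device(self) -> torch.device:
        # Return this process's entry from the devices managed by the strategy.
        return self.parallel_devices[self.local_rank]
```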