ParallelStrategy

class pytorch_lightning.strategies.ParallelStrategy(accelerator=None, parallel_devices=None, cluster_environment=None, checkpoint_io=None, precision_plugin=None)[source]

Bases: pytorch_lightning.strategies.strategy.Strategy, abc.ABC

Strategy for training with multiple processes in parallel.
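
ParallelStrategy itself is abstract; in practice a concrete subclass such as DDPStrategy is selected, either by alias or by passing an instance to the Trainer. A minimal sketch (device counts and constructor arguments are illustrative):

    import pytorch_lightning as pl
    from pytorch_lightning.strategies import DDPStrategy

    # Select a concrete ParallelStrategy subclass by its alias ...
    trainer = pl.Trainer(accelerator="gpu", devices=2, strategy="ddp")

    # ... or by passing an instance, which allows custom constructor arguments.
    trainer = pl.Trainer(
        accelerator="gpu",
        devices=2,
        strategy=DDPStrategy(find_unused_parameters=False),
    )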

all_gather(tensor, group=None, sync_grads=False)[source]

Perform an all_gather on all processes.

Return type:

Tensor
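
A hedged sketch of how this method is typically reached from user code: LightningModule.all_gather delegates to the strategy's all_gather. The module, tensor value, and metric name below are illustrative.

    import torch
    import pytorch_lightning as pl

    class MyModule(pl.LightningModule):
        def validation_step(self, batch, batch_idx):
            # Per-process scalar; the gathered result gains a leading
            # world-size dimension, i.e. shape (world_size,).
            local_loss = torch.tensor(0.5, device=self.device)
            gathered = self.all_gather(local_loss, sync_grads=False)
            if self.trainer.is_global_zero:
                self.log("val_loss_mean", gathered.mean(), rank_zero_only=True)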

block_backward_sync()[source]

Blocks DDP gradient synchronization behaviour during the backward pass.

This is useful for skipping synchronization when accumulating gradients, which reduces communication overhead.

Returns:

A context manager with gradient synchronization turned off.

Return type:

Generator
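
The context manager is used internally by Lightning during gradient accumulation; below is a hedged sketch of the pattern it enables. trainer, dataloader, compute_loss and optimizer are hypothetical names, and the accumulation window is illustrative.

    accumulate_grad_batches = 4                     # illustrative window
    for i, batch in enumerate(dataloader):          # hypothetical dataloader
        loss = compute_loss(batch)                  # hypothetical helper
        if (i + 1) % accumulate_grad_batches != 0:
            # Skip inter-process gradient communication on this backward pass.
            with trainer.strategy.block_backward_sync():
                loss.backward()
        else:
            loss.backward()        # gradients are synchronized (all-reduced) here
            optimizer.step()       # hypothetical optimizer
            optimizer.zero_grad()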

reconciliate_processes(trace)[source]

Function to reconcile processes on failure.

Return type:

None

reduce_boolean_decision(decision)[source]

Reduce a boolean decision across all processes.

Return type:

bool
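
A hedged sketch of a typical use: turning a per-process flag into a decision every rank agrees on (this is the kind of reduction Lightning's EarlyStopping callback performs across ranks). The loss value and threshold are placeholders.

    # Each process computes its own local flag ...
    local_should_stop = local_val_loss > stop_threshold   # placeholder values
    # ... and the strategy reduces the flags so all ranks reach the same answer.
    should_stop = trainer.strategy.reduce_boolean_decision(local_should_stop)
    if should_stop:
        trainer.should_stop = True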

teardown()[source]

This method is called to tear down the training process.

It is the right place to release memory and free other resources.

Return type:

None
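
A hedged sketch of overriding this hook in a custom strategy; the subclass, attribute, and cleanup are hypothetical, and the parent call is kept so Lightning can release its own state.

    from pytorch_lightning.strategies import DDPStrategy

    class MyStrategy(DDPStrategy):          # hypothetical subclass
        def teardown(self) -> None:
            # Free custom resources held by this strategy ...
            self._my_cache = None           # hypothetical attribute
            # ... then let the parent release its state
            # (model references, process groups, device memory).
            super().teardown()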

property is_global_zero: bool

Whether the current process is the rank zero process not only on the local node, but for all nodes.

Return type:

bool
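
A hedged sketch of the usual guard: run once-per-job side effects only on the global rank-zero process and make the other ranks wait. download_dataset is a hypothetical helper.

    if trainer.is_global_zero:
        download_dataset()           # hypothetical one-off work (downloads, file writes)
    trainer.strategy.barrier()       # other ranks wait until rank zero is done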

property lightning_module: Optional[pytorch_lightning.core.module.LightningModule]

Returns the pure LightningModule without potential wrappers.

Return type:

Optional[LightningModule]
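
A hedged sketch of the difference between the wrapped and unwrapped module; the attribute access is illustrative.

    wrapped = trainer.strategy.model              # may be a DistributedDataParallel wrapper
    pure = trainer.strategy.lightning_module      # always the original LightningModule
    print(type(wrapped), type(pure))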

abstract property root_device: torch.device

Return the root device.

Return type:

device
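
A hedged sketch: in the parallel strategies, root_device is the device assigned to the current process (for example cuda:0 on one rank and cuda:1 on another), so it is the natural target when moving tensors manually. The tensor shape is illustrative.

    import torch

    device = trainer.strategy.root_device
    batch = torch.randn(8, 3)        # illustrative tensor
    batch = batch.to(device)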

property torch_distributed_backend: str

Deprecated property.

Return type:

str