DDPSpawnShardedStrategy
- class pytorch_lightning.strategies.DDPSpawnShardedStrategy(accelerator=None, parallel_devices=None, cluster_environment=None, checkpoint_io=None, precision_plugin=None, ddp_comm_state=None, ddp_comm_hook=None, ddp_comm_wrapper=None, process_group_backend=None, timeout=datetime.timedelta(seconds=1800), start_method='spawn', **kwargs)[source]
Bases: pytorch_lightning.strategies.ddp_spawn.DDPSpawnStrategy
Optimizer-sharded training provided by FairScale, built on top of spawn-based DDP.
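A minimal usage sketch (not part of the official docstring), assuming PyTorch Lightning 1.x with fairscale installed; the registry alias "ddp_sharded_spawn" and the GPU setup below are for illustration:

```python
# A minimal sketch, assuming PyTorch Lightning 1.x with fairscale installed.
# `MyLightningModule` and `my_datamodule` are placeholders for user code.
import pytorch_lightning as pl
from pytorch_lightning.strategies import DDPSpawnShardedStrategy

# Select the strategy via its registered alias (assumed to be "ddp_sharded_spawn") ...
trainer = pl.Trainer(accelerator="gpu", devices=4, strategy="ddp_sharded_spawn")

# ... or instantiate it explicitly to customize, e.g. the process group backend.
trainer = pl.Trainer(
    accelerator="gpu",
    devices=4,
    strategy=DDPSpawnShardedStrategy(process_group_backend="nccl"),
)

# trainer.fit(MyLightningModule(), datamodule=my_datamodule)
```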
- block_backward_sync()[source]
Blocks gradient synchronization during the backward pass.
This is useful for skipping synchronization when accumulating gradients, which reduces communication overhead.
- Returns: a context manager with gradient synchronization turned off
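A short sketch of the gradient-accumulation pattern this context manager enables; `strategy`, `model`, `batches`, and `optimizer` below are placeholders, not Lightning internals:

```python
# A minimal sketch, assuming `strategy` is an initialized DDPSpawnShardedStrategy
# whose model has been wrapped; `model`, `batches`, and `optimizer` are placeholders.
def accumulate_then_step(strategy, model, batches, optimizer):
    *accumulation_batches, final_batch = batches
    for batch_idx, batch in enumerate(accumulation_batches):
        # Gradient all-reduce is skipped while this context manager is active.
        with strategy.block_backward_sync():
            loss = model.training_step(batch, batch_idx)
            loss.backward()
    # The final backward runs outside the context, so gradients are synchronized.
    loss = model.training_step(final_batch, len(accumulation_batches))
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```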
- optimizer_state(optimizer)[source]
Returns the state of an optimizer.
Allows for syncing/collating optimizer state from all processes in custom plugins.
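For a sharded strategy, collating typically means gathering each rank's optimizer shard before checkpointing. The sketch below illustrates that idea with FairScale's OSS optimizer; it is an illustration of the concept, not the actual override:

```python
# A hedged sketch of state collation with FairScale's OSS optimizer; this is an
# illustration of the concept, not the Lightning implementation.
from typing import Any, Dict

from fairscale.optim import OSS


def consolidated_optimizer_state(optimizer: OSS) -> Dict[str, Any]:
    # Each rank holds only its shard of the state; gather the full state first
    # (by default FairScale collects it on rank 0).
    optimizer.consolidate_state_dict()
    # After consolidation, rank 0 can return the complete state dict.
    return optimizer.state_dict()
```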
- property lightning_module: Optional[pytorch_lightning.core.module.LightningModule]
Returns the pure LightningModule without potential wrappers.
- Return type: Optional[pytorch_lightning.core.module.LightningModule]
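A tiny illustrative snippet (assuming `trainer` is a pl.Trainer already running with this strategy): the strategy's wrapped `model` attribute and the unwrapped module differ, and this property recovers the latter:

```python
# A minimal sketch; `trainer` is assumed to be a pl.Trainer using this strategy.
strategy = trainer.strategy
wrapped = strategy.model            # distributed wrapper around the module (assumption)
plain = strategy.lightning_module   # the user's LightningModule, without wrappers
assert plain is trainer.lightning_module
```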