DDPShardedStrategy
- class pytorch_lightning.strategies.DDPShardedStrategy(*args, **kwargs)[source]
Bases: pytorch_lightning.strategies.ddp.DDPStrategy
Optimizer and gradient sharded training provided by FairScale.
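A minimal usage sketch, assuming FairScale is installed and a multi-GPU machine is available: the strategy can be selected either through its registered `"ddp_sharded"` alias or by passing an instance directly to the Trainer. The device counts here are illustrative.

```python
import pytorch_lightning as pl
from pytorch_lightning.strategies import DDPShardedStrategy

# Select the strategy via its registered string alias ...
trainer = pl.Trainer(accelerator="gpu", devices=4, strategy="ddp_sharded")

# ... or pass a strategy instance explicitly.
trainer = pl.Trainer(accelerator="gpu", devices=4, strategy=DDPShardedStrategy())
```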
- block_backward_sync()[source]
Blocks gradient synchronization on the backward pass.
This is useful for skipping synchronization when accumulating gradients, reducing communication overhead.
- Returns:
A context manager with gradient syncing disabled.
- Return type:
Generator
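A sketch of how a custom loop might use this context manager during gradient accumulation; the `strategy`, `loss`, and `is_last_accumulation_step` names are illustrative, not part of the API.

```python
from contextlib import nullcontext

def backward_with_optional_sync(strategy, loss, is_last_accumulation_step: bool) -> None:
    # Only synchronize gradients on the final micro-batch of the accumulation
    # window; block the all-reduce on all earlier micro-batches.
    ctx = nullcontext() if is_last_accumulation_step else strategy.block_backward_sync()
    with ctx:
        loss.backward()
```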
- connect(model)[source]
Called by the accelerator to connect the accelerator and the model with this plugin.
- Return type:
None
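An illustrative sketch only: `connect()` is normally invoked by the Trainer/accelerator internals rather than user code, and the helper name below is an assumption.

```python
from pytorch_lightning.strategies import DDPShardedStrategy

def attach_model(strategy: DDPShardedStrategy, lightning_module) -> None:
    # Hands the LightningModule to the strategy; the sharded wrapping of the
    # model and optimizers happens later, during strategy setup.
    strategy.connect(lightning_module)
```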