DDPSpawnShardedStrategy

class pytorch_lightning.strategies.DDPSpawnShardedStrategy(*args, **kwargs)[source]

Bases: pytorch_lightning.strategies.ddp_spawn.DDPSpawnStrategy

Optimizer sharded training provided by FairScale.
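
The idea behind sharded training is that each rank keeps the optimizer state (e.g. Adam moments) for only a slice of the parameters, rather than a full copy. The following is a toy sketch of that partitioning idea, not FairScale's implementation; the function `shard_params` is a made-up name for illustration.

```python
# Toy sketch of ZeRO-style optimizer state sharding, as provided in
# practice by FairScale's OSS optimizer. `shard_params` is hypothetical.

def shard_params(param_ids, world_size):
    """Partition parameter ids round-robin across ranks, so each rank
    stores optimizer state for only its own shard."""
    shards = [[] for _ in range(world_size)]
    for i, pid in enumerate(param_ids):
        shards[i % world_size].append(pid)
    return shards

# With 4 parameters and 2 ranks, each rank owns half the optimizer state.
shards = shard_params(["w1", "w2", "b1", "b2"], world_size=2)
```

In the real strategy, this partitioning (and the corresponding gradient and state communication) is handled by FairScale; users only select the strategy on the `Trainer`.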

block_backward_sync()[source]

Blocks gradient synchronization behaviour on the backward pass.

This is useful for skipping gradient synchronization while accumulating gradients, reducing communication overhead.

Returns: a context manager with sync behaviour off

Return type

Generator
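
A minimal sketch of the sync-skipping pattern this context manager enables, e.g. disabling synchronization for the first N-1 micro-batches when accumulating gradients. `ToyStrategy` and its `require_backward_grad_sync` flag are illustrative stand-ins (modelled on the flag that torch's DDP `no_sync()` toggles), not Lightning's internals.

```python
from contextlib import contextmanager

class ToyStrategy:
    """Hypothetical stand-in for a DDP-style strategy."""
    def __init__(self):
        self.require_backward_grad_sync = True

    @contextmanager
    def block_backward_sync(self):
        # Turn gradient syncing off for the duration of the `with` block,
        # then restore it even if the backward pass raises.
        self.require_backward_grad_sync = False
        try:
            yield
        finally:
            self.require_backward_grad_sync = True

strategy = ToyStrategy()
with strategy.block_backward_sync():
    inside = strategy.require_backward_grad_sync   # False: sync skipped
after = strategy.require_backward_grad_sync        # True: sync restored
```

Lightning calls this for you when `accumulate_grad_batches > 1`; users normally never invoke it directly.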

connect(model)[source]

Called by the accelerator to connect the accelerator and the model with this plugin.

Return type

None

pre_backward(closure_loss)[source]

Run before precision plugin executes backward.

Return type

None