HorovodStrategy¶
- class pytorch_lightning.strategies.HorovodStrategy(accelerator=None, parallel_devices=None, checkpoint_io=None, precision_plugin=None)[source]¶
 Bases: pytorch_lightning.strategies.parallel.ParallelStrategy

 Plugin for Horovod distributed training integration.
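A minimal usage sketch, assuming PyTorch Lightning 1.x with Horovod installed; `MyLightningModule` is a placeholder for any user-defined LightningModule, and selecting the strategy by its `"horovod"` alias builds a `HorovodStrategy` for the run.

```python
# Minimal sketch, assuming Horovod and PyTorch Lightning 1.x are installed.
import pytorch_lightning as pl

# MyLightningModule is a hypothetical user-defined LightningModule.
model = MyLightningModule()

# "horovod" selects HorovodStrategy; each Horovod process drives one device.
trainer = pl.Trainer(accelerator="gpu", devices=1, strategy="horovod", max_epochs=1)
trainer.fit(model)
```

The script is then launched with Horovod's launcher, e.g. `horovodrun -np 4 python train.py`, so that one training process runs per device.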
- all_gather(result, group=None, sync_grads=False)[source]¶
 Perform an all_gather on all processes.
- Return type:
 Tensor
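A hedged sketch of how this is typically reached: inside a LightningModule hook, `self.all_gather` delegates to the active strategy, so with this strategy the call below collects the per-process tensor from every Horovod worker. `GatheringModule` and the gathered value are illustrative only.

```python
import torch
import pytorch_lightning as pl

class GatheringModule(pl.LightningModule):  # illustrative; training_step etc. omitted
    def validation_step(self, batch, batch_idx):
        # One value per process; self.global_rank identifies this worker.
        local_score = torch.tensor([float(self.global_rank)], device=self.device)
        # self.all_gather forwards to the strategy's all_gather and returns
        # the values from all processes combined into one tensor.
        gathered = self.all_gather(local_score)
        if self.trainer.is_global_zero:
            print(gathered)
```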
- barrier(*args, **kwargs)[source]¶
 Synchronizes all processes, blocking each one until the whole group enters this function.
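A small sketch of calling the barrier directly, assuming `trainer` is an already-configured Trainer using this strategy; `prepare_shared_artifacts` is a hypothetical rank-zero-only setup step.

```python
# Hypothetical rank-0-only work that the other processes must wait for.
if trainer.is_global_zero:
    prepare_shared_artifacts()

# Blocks every process here until the whole group has reached this point.
trainer.strategy.barrier()
```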
- reduce(tensor, group=None, reduce_op='mean')[source]¶
 Reduces a tensor from several distributed processes to one aggregated tensor.
- Parameters:
 - tensor (Union[Any, Tensor]) – the tensor to sync and reduce
 - group (Optional[Any]) – the process group to gather results from. Defaults to all processes (world)
 - reduce_op (Union[ReduceOp, str, None]) – the reduction operation. Defaults to ‘mean’/’avg’. Can also be a string ‘sum’ to calculate the sum during reduction.
- Return type:
 Union[Any, Tensor]
 - Returns:
 reduced value, except when the input was not a tensor the output remains unchanged
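A hedged sketch of averaging a per-process value with reduce, assuming it runs inside a LightningModule hook while this strategy is active; the metric value is a placeholder.

```python
import torch

def on_validation_epoch_end(self):
    # Placeholder per-process value; in practice this would be a real metric.
    local_metric = torch.tensor(0.5, device=self.device)
    # reduce_op="mean" averages across all Horovod processes;
    # pass reduce_op="sum" to accumulate instead.
    global_metric = self.trainer.strategy.reduce(local_metric, reduce_op="mean")
    self.print(global_metric)
```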
- teardown()[source]¶
 This method is called to tear down the training process.
It is the right place to release memory and free other resources.
- Return type:
 None
- property handles_gradient_accumulation: bool¶
 Whether the plugin handles gradient accumulation internally.
- Return type:
 bool
- property root_device: torch.device¶
 Return the root device.
- Return type:
 torch.device