HPUParallelStrategy
- class pytorch_lightning.strategies.HPUParallelStrategy(accelerator=None, parallel_devices=None, checkpoint_io=None, precision_plugin=None, process_group_backend='hccl')
Bases: pytorch_lightning.strategies.ddp.DDPStrategy
Strategy for distributed training on multiple HPU devices.
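A minimal sketch of how this strategy might be wired into a Trainer, assuming a machine with eight available HPU devices; the device count and other Trainer arguments are illustrative, not prescribed by this class:

```python
import pytorch_lightning as pl
from pytorch_lightning.strategies import HPUParallelStrategy

# Run DDP-style data-parallel training across multiple Gaudi (HPU)
# devices. The strategy defaults to the 'hccl' process group backend.
trainer = pl.Trainer(
    accelerator="hpu",
    devices=8,  # assumed device count; adjust for your hardware
    strategy=HPUParallelStrategy(),
)
```

Because the class derives from DDPStrategy, each HPU device hosts one replica of the model and gradients are synchronized across processes, with HCCL taking the role that NCCL plays for GPUs.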