HPUParallelStrategy
- class lightning.pytorch.strategies.HPUParallelStrategy(accelerator=None, parallel_devices=None, cluster_environment=None, checkpoint_io=None, precision_plugin=None, ddp_comm_state=None, ddp_comm_hook=None, ddp_comm_wrapper=None, model_averaging_period=None, process_group_backend='hccl', **kwargs)[source]
Bases: DDPStrategy
Strategy for distributed training on multiple HPU devices.
Warning
This is an experimental feature.
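A minimal usage sketch, assuming a host with Gaudi HPUs and the Habana software stack available; the device count and MyLightningModule are placeholders for your own setup:

    from lightning.pytorch import Trainer
    from lightning.pytorch.strategies import HPUParallelStrategy

    # Distribute training across 8 HPU devices over the default "hccl" backend.
    trainer = Trainer(
        accelerator="hpu",
        devices=8,
        strategy=HPUParallelStrategy(),
    )
    trainer.fit(MyLightningModule())  # MyLightningModule is your own LightningModule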
- broadcast(obj, src=0)[source]
Broadcasts an object to all processes.
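A short sketch of how the method might be called from user code; the on_train_start hook and the broadcast payload are illustrative assumptions, not part of this API reference:

    import lightning.pytorch as pl

    class MyModule(pl.LightningModule):
        def on_train_start(self):
            # Rank 0 decides the value; every other rank receives the same object.
            self.run_id = self.trainer.strategy.broadcast("run-001", src=0)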
- optimizer_step(optimizer, closure, model=None, **kwargs)[source]
Performs the actual optimizer step.
- Parameters:
  - optimizer – the optimizer performing the step
  - closure – closure computing the loss value
  - model – reference to the LightningModule (or underlying module), if available
  - **kwargs – any extra arguments forwarded to optimizer.step()
- Return type:
  Any
- setup_environment()[source]
Set up any processes or distributed connections.
This is called before the LightningModule/DataModule setup hook, which allows the user to access the accelerator environment before setup is complete.
- Return type:
  None