HPUParallelStrategy

class pytorch_lightning.strategies.HPUParallelStrategy(accelerator=None, parallel_devices=None, cluster_environment=None, checkpoint_io=None, precision_plugin=None, ddp_comm_state=None, ddp_comm_hook=None, ddp_comm_wrapper=None, model_averaging_period=None, process_group_backend='hccl', **kwargs)[source]

Bases: pytorch_lightning.strategies.ddp.DDPStrategy

Strategy for distributed training on multiple HPU devices.
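
For example, the strategy can be passed explicitly to the Trainer together with the HPU accelerator. The snippet below is a minimal sketch: the device count and the commented-out model and dataloader are placeholders, not values taken from this documentation.

    import pytorch_lightning as pl
    from pytorch_lightning.strategies import HPUParallelStrategy

    # Select the HPU accelerator and this strategy explicitly; the strategy
    # defaults to the "hccl" process-group backend.
    trainer = pl.Trainer(
        accelerator="hpu",
        devices=8,  # assumed number of HPU devices on the node
        strategy=HPUParallelStrategy(),
    )
    # trainer.fit(MyLightningModule(), train_dataloaders=my_dataloader)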

broadcast(obj, src=0)[source]

Broadcasts an object to all processes.

Parameters:
  • obj (object) – the object to broadcast

  • src (int) – source rank

Return type:

object
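
From user code this method is usually reached through the Trainer's active strategy. The sketch below, placed in a LightningModule hook, is only an illustration: the best checkpoint path is just one example of a picklable object worth sharing from rank 0.

    def on_fit_end(self):
        # Only rank 0 holds the value of interest; the other ranks pass None.
        path = (
            self.trainer.checkpoint_callback.best_model_path
            if self.trainer.is_global_zero
            else None
        )
        # broadcast() returns the object supplied by the source rank on every process.
        path = self.trainer.strategy.broadcast(path, src=0)
        print(f"rank {self.global_rank} received: {path}")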

setup_environment()[source]

Sets up any processes or distributed connections.

This is called before the LightningModule/DataModule setup hook, which allows the user to access the accelerator environment before setup is complete.

Return type:

None
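
Because this runs before the setup hooks, a LightningModule can already query the distributed environment there. A minimal sketch, assuming a standard LightningModule; the print is illustrative only:

    def setup(self, stage=None):
        # The process group created by setup_environment() is already live,
        # so rank and world-size information is available in this hook.
        print(f"global rank {self.global_rank} of {self.trainer.world_size} is ready")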

teardown()[source]

This method is called to tear down the training process.

It is the right place to release memory and free other resources.

Return type:

None
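
Subclasses that allocate extra resources can extend this hook. The sketch below is hypothetical: the _scratch_buffer attribute is a placeholder, and the super().teardown() call preserves the default cleanup.

    from pytorch_lightning.strategies import HPUParallelStrategy

    class MyHPUStrategy(HPUParallelStrategy):
        def teardown(self) -> None:
            # Release anything this subclass allocated (placeholder attribute).
            self._scratch_buffer = None
            super().teardown()  # default cleanup: free memory, close connections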