HPUParallelStrategy

class pytorch_lightning.strategies.HPUParallelStrategy(accelerator=None, parallel_devices=None, checkpoint_io=None, precision_plugin=None, process_group_backend='hccl')[source]

Bases: pytorch_lightning.strategies.ddp.DDPStrategy

Strategy for distributed training on multiple HPU devices.
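
A minimal usage sketch, assuming the habana-frameworks package and HPU devices are available; exact Trainer arguments depend on the Lightning version, and `model` stands in for any LightningModule:

    import pytorch_lightning as pl
    from pytorch_lightning.strategies import HPUParallelStrategy

    # Passing the strategy explicitly; with accelerator="hpu" and devices > 1,
    # the Trainer may also select HPUParallelStrategy automatically.
    trainer = pl.Trainer(
        accelerator="hpu",
        devices=8,
        strategy=HPUParallelStrategy(),
    )
    trainer.fit(model)  # `model` is any LightningModule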

broadcast(obj, src=0)[source]

Broadcasts an object to all processes.

Parameters
  • obj (object) – the object to broadcast

  • src (int) – source rank

Return type

object
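
A hedged sketch of calling broadcast() from inside a LightningModule hook; the class name and hook choice are illustrative, and the usual LightningModule methods are omitted:

    import pytorch_lightning as pl

    class BroadcastDemo(pl.LightningModule):
        def on_train_start(self):
            # Rank 0 builds the object; all other ranks pass None and
            # receive rank 0's copy back from broadcast().
            run_config = {"lr": 1e-3} if self.trainer.is_global_zero else None
            run_config = self.trainer.strategy.broadcast(run_config, src=0)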

setup_environment()[source]

Sets up any processes or distributed connections.

This is called before the LightningModule/DataModule setup hook, which allows the user to access the accelerator environment before setup is complete.

Return type

None

teardown()[source]

This method is called to tear down the training process.

It is the right place to release memory and free other resources.

Return type

None
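
A hypothetical subclass, shown only to illustrate where setup_environment() and teardown() fit in the strategy lifecycle; the class name and print statements are assumptions, not part of the library:

    from pytorch_lightning.strategies import HPUParallelStrategy

    class LoggingHPUParallelStrategy(HPUParallelStrategy):
        def setup_environment(self) -> None:
            # The distributed connection (hccl process group) is established
            # by the parent call, before the LightningModule/DataModule
            # setup hooks run.
            super().setup_environment()
            print(f"Process group ready on global rank {self.global_rank}")

        def teardown(self) -> None:
            # Right place to release memory and free other resources once
            # training ends.
            print("Tearing down HPU parallel strategy")
            super().teardown()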
