
HPUParallelStrategy

class pytorch_lightning.strategies.HPUParallelStrategy(accelerator=None, parallel_devices=None, cluster_environment=None, checkpoint_io=None, precision_plugin=None, ddp_comm_state=None, ddp_comm_hook=None, ddp_comm_wrapper=None, model_averaging_period=None, process_group_backend='hccl', **kwargs)[source]

Bases: pytorch_lightning.strategies.ddp.DDPStrategy

Strategy for distributed training on multiple HPU devices.
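
A minimal usage sketch, assuming an 8-card HPU machine; MyLightningModule and MyDataModule are hypothetical placeholders for your own classes, and Trainer(accelerator="hpu", devices=8) would normally select this strategy for you automatically:

import pytorch_lightning as pl
from pytorch_lightning.strategies import HPUParallelStrategy

# MyLightningModule / MyDataModule are hypothetical placeholders for user code.
trainer = pl.Trainer(
    accelerator="hpu",
    devices=8,
    strategy=HPUParallelStrategy(),  # uses the "hccl" process group backend by default
)
trainer.fit(MyLightningModule(), datamodule=MyDataModule())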

broadcast(obj, src=0)[source]

Broadcasts an object to all processes.

Parameters
  • obj (object) – the object to broadcast

  • src (int) – source rank

Return type

object
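
A hedged sketch of calling broadcast from inside a LightningModule hook; on_train_start and the run_config dict are purely illustrative, and self.trainer.strategy resolves to the active HPUParallelStrategy:

def on_train_start(self):
    # Build the object on rank 0 only, then broadcast it to every process.
    run_config = {"seed": 1234} if self.trainer.is_global_zero else None
    run_config = self.trainer.strategy.broadcast(run_config, src=0)
    # Every rank now holds the same dict.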

optimizer_step(optimizer, opt_idx, closure, model=None, **kwargs)[source]

Performs the actual optimizer step.

Parameters
  • optimizer (Optimizer) – the optimizer performing the step

  • opt_idx (int) – index of the current optimizer

  • closure (Callable[[], Any]) – closure calculating the loss value

  • model (Union[LightningModule, Module, None]) – reference to the model, optionally defining optimizer step related hooks

  • **kwargs – any extra keyword arguments passed to optimizer.step()

Return type

Any
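
A sketch of a hypothetical subclass that wraps optimizer_step to time each step; TimedHPUParallelStrategy and _last_step_time are made-up names used only for illustration:

import time

from pytorch_lightning.strategies import HPUParallelStrategy

class TimedHPUParallelStrategy(HPUParallelStrategy):  # hypothetical subclass
    def optimizer_step(self, optimizer, opt_idx, closure, model=None, **kwargs):
        start = time.perf_counter()
        result = super().optimizer_step(optimizer, opt_idx, closure, model=model, **kwargs)
        self._last_step_time = time.perf_counter() - start  # illustrative attribute
        return result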

setup_environment()[source]

Set up any processes or distributed connections.

It is called before the LightningModule/DataModule setup hook, so the user can access the accelerator environment before setup is complete.

Return type

None
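
A small sketch of what this ordering implies: because setup_environment() runs first, distributed state such as the global rank is already available inside the LightningModule setup hook (the print below is purely illustrative):

def setup(self, stage=None):
    # The process group has already been initialized by setup_environment().
    print(f"rank {self.trainer.global_rank} / world size {self.trainer.world_size}")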

teardown()[source]

This method is called to tear down the training process.

It is the right place to release memory and free other resources.

Return type

None
