HPUParallelStrategy¶
- class lightning.pytorch.strategies.HPUParallelStrategy(accelerator=None, parallel_devices=None, cluster_environment=None, checkpoint_io=None, precision_plugin=None, ddp_comm_state=None, ddp_comm_hook=None, ddp_comm_wrapper=None, model_averaging_period=None, process_group_backend='hccl', **kwargs)[source]¶
- Bases: lightning.pytorch.strategies.ddp.DDPStrategy

  Strategy for distributed training on multiple HPU devices.

  Warning

  This is an experimental feature.
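A minimal usage sketch, assuming a machine with Gaudi HPUs and the Habana software stack installed; the device count of 8 is only an example:

```python
from lightning.pytorch import Trainer
from lightning.pytorch.strategies import HPUParallelStrategy

# Launch one process per HPU and use the HCCL process-group backend
# (the default) for collective communication between devices.
trainer = Trainer(
    accelerator="hpu",
    devices=8,  # example device count; adjust to the available HPUs
    strategy=HPUParallelStrategy(process_group_backend="hccl"),
)
```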
  - optimizer_step(optimizer, closure, model=None, **kwargs)[source]¶

    Performs the actual optimizer step.

    - Parameters
      - optimizer – the optimizer performing the step
      - closure – closure calculating the loss value
      - model – reference to the model, optionally defining optimizer step related hooks
      - **kwargs – any extra arguments to optimizer.step
    - Return type
      - Any
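As a sketch of how this hook composes, a hypothetical subclass can wrap optimizer_step to run extra logic around each step; the subclass name and the logging are illustrative only, the step itself is delegated to the parent strategy:

```python
from typing import Any, Callable, Optional

import torch
from lightning.pytorch.strategies import HPUParallelStrategy


class LoggingHPUParallelStrategy(HPUParallelStrategy):
    """Hypothetical subclass that logs after every optimizer step."""

    def optimizer_step(
        self,
        optimizer: torch.optim.Optimizer,
        closure: Callable[[], Any],
        model: Optional[torch.nn.Module] = None,
        **kwargs: Any,
    ) -> Any:
        # The Trainer passes a closure that computes the loss; the parent
        # strategy evaluates it and performs optimizer.step(), returning
        # the closure's output, which is passed through unchanged here.
        output = super().optimizer_step(optimizer, closure, model=model, **kwargs)
        print("optimizer step completed")
        return output
```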