SingleDevicePlugin¶
- class pytorch_lightning.plugins.training_type.SingleDevicePlugin(device)[source]¶
Bases:
pytorch_lightning.plugins.training_type.training_type_plugin.TrainingTypePlugin
Plugin that handles communication on a single device.
- all_gather(tensor, group=None, sync_grads=False)[source]¶
Performs an all_gather on all processes.
- Return type: Tensor
- barrier(*args, **kwargs)[source]¶
Forces all possibly joined processes to wait for each other.
- Return type: None
- reduce(tensor, *args, **kwargs)[source]¶
Reduces a tensor from several distributed processes to one aggregated tensor. As this plugin only operates with a single device, the reduction is simply the identity.
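With a single device there are no peer processes, so the collectives above degenerate as described: all_gather and reduce return their input unchanged and barrier is a no-op. A minimal sketch of that behavior, using a hypothetical stand-in class (not the actual pytorch_lightning implementation) and a plain Python list in place of a tensor:

```python
class SingleDeviceSketch:
    """Illustrative stand-in for the single-device collectives described above."""

    def all_gather(self, tensor, group=None, sync_grads=False):
        # With one process, gathering across all processes yields the input itself.
        return tensor

    def barrier(self, *args, **kwargs):
        # There are no other processes to wait for: a no-op.
        return None

    def reduce(self, tensor, *args, **kwargs):
        # Reduction over a single device is the identity.
        return tensor


plugin = SingleDeviceSketch()
data = [1.0, 2.0, 3.0]  # stands in for a tensor
assert plugin.reduce(data) is data
assert plugin.all_gather(data) is data
assert plugin.barrier() is None
```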
- teardown()[source]¶
This method is called to tear down the training process. It is the right place to release memory and free other resources.
- Return type: None
- property is_global_zero: bool¶
Whether the current process is the rank zero process not only on the local node, but for all nodes.
- property on_gpu: bool¶
Returns whether the current process runs on a GPU.
- property on_tpu: bool¶
Returns whether the current process runs on a TPU.
- property root_device: torch.device¶
Returns the root device.
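The properties above are similarly trivial on a single device: the only process is always rank zero, and the root device is simply the device the plugin was constructed with. A hedged sketch, again using a hypothetical stand-in class with a device string in place of the real plugin's torch.device:

```python
class SingleDeviceSketch:
    """Illustrative stand-in for the single-device properties described above."""

    def __init__(self, device="cpu"):
        # The real plugin stores a torch.device; a string stands in here.
        self._device = device

    @property
    def is_global_zero(self):
        # The only process is always the global rank-zero process.
        return True

    @property
    def on_gpu(self):
        # The real plugin inspects the torch.device type; here we check the string.
        return self._device.startswith("cuda")

    @property
    def root_device(self):
        return self._device


p = SingleDeviceSketch("cpu")
assert p.is_global_zero
assert not p.on_gpu
assert p.root_device == "cpu"
```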