CUDAAccelerator
class pytorch_lightning.accelerators.CUDAAccelerator [source]
Bases: pytorch_lightning.accelerators.accelerator.Accelerator
Accelerator for NVIDIA CUDA devices.
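In practice this accelerator is usually selected through the Trainer rather than instantiated directly; a minimal sketch, assuming PyTorch Lightning is installed and at least two CUDA devices are visible:

    import pytorch_lightning as pl
    from pytorch_lightning.accelerators import CUDAAccelerator

    # Select the accelerator by name ...
    trainer = pl.Trainer(accelerator="cuda", devices=2)

    # ... or pass an instance explicitly.
    trainer = pl.Trainer(accelerator=CUDAAccelerator(), devices=2)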
static auto_device_count() [source]
Get the devices when set to auto.
Return type:
int
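A minimal sketch of querying this directly (a static call, no instance needed; the returned count depends on the machine):

    from pytorch_lightning.accelerators import CUDAAccelerator

    # Number of CUDA devices that devices="auto" would resolve to on this machine.
    num_devices = CUDAAccelerator.auto_device_count()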
get_device_stats(device) [source]
Gets stats for the given GPU device.
Parameters:
device (Union[torch.device, str, int]) – GPU device for which to get stats
Return type:
Dict[str, Any]
Returns:
A dictionary mapping the metrics to their values.
Raises:
FileNotFoundError – If the nvidia-smi installation is not found.
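A usage sketch; the exact metric keys returned depend on the torch and driver/nvidia-smi versions, so treat the loop below as illustrative only:

    import torch
    from pytorch_lightning.accelerators import CUDAAccelerator

    accelerator = CUDAAccelerator()
    stats = accelerator.get_device_stats(torch.device("cuda", 0))
    for name, value in stats.items():  # e.g. memory and utilization figures
        print(name, value)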
static get_parallel_devices(devices) [source]
Gets parallel devices for the Accelerator.
Return type:
List[torch.device]
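For example (a sketch; the device indices are assumptions), a list of integer indices is turned into torch.device objects:

    from pytorch_lightning.accelerators import CUDAAccelerator

    devices = CUDAAccelerator.get_parallel_devices([0, 1])
    # Expected result: [torch.device("cuda", 0), torch.device("cuda", 1)]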
static is_available() [source]
Detect if the hardware is available.
Return type:
bool
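A small sketch of using the check as a guard before GPU-specific configuration:

    from pytorch_lightning.accelerators import CUDAAccelerator

    if CUDAAccelerator.is_available():
        # Safe to request accelerator="cuda" on this machine.
        ...
    else:
        # Fall back to a CPU-only configuration.
        ...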
static parse_devices(devices) [source]
Accelerator device parsing logic.
Return type:
Optional[List[int]]
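A sketch of the kinds of inputs the parser accepts (mirroring the Trainer devices argument); the concrete return values shown in the comments are assumptions and depend on how many GPUs are visible:

    from pytorch_lightning.accelerators import CUDAAccelerator

    CUDAAccelerator.parse_devices(2)       # e.g. [0, 1]
    CUDAAccelerator.parse_devices([0, 2])  # e.g. [0, 2]
    CUDAAccelerator.parse_devices("0,1")   # e.g. [0, 1]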
setup(trainer) [source]
Sets up plugins for the trainer fit and creates optimizers.
Parameters:
trainer (Trainer) – the trainer instance
Return type:
None
setup_environment(root_device) [source]
Raises:
MisconfigurationException – If the selected device is not a GPU.
Return type:
None
teardown() [source]
Clean up any state created by the accelerator.
Return type:
None
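These three hooks are normally driven by the Trainer rather than called from user code; the sketch below only illustrates the assumed order of the calls, not the real training loop:

    import torch
    import pytorch_lightning as pl
    from pytorch_lightning.accelerators import CUDAAccelerator

    trainer = pl.Trainer(accelerator="cuda", devices=1)
    accelerator = CUDAAccelerator()

    accelerator.setup_environment(torch.device("cuda", 0))  # raises if the device is not a GPU
    accelerator.setup(trainer)                              # per-trainer GPU setup
    # ... fitting / validation / testing would run here ...
    accelerator.teardown()                                  # clean up accelerator state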