DataParallelPlugin¶
- class pytorch_lightning.plugins.training_type.DataParallelPlugin(parallel_devices)[source]¶
Bases: pytorch_lightning.plugins.training_type.parallel.ParallelPlugin
Implements data-parallel training in a single process, i.e., the model gets replicated to each device and each replica processes a split of the data.
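A minimal configuration sketch, assuming PyTorch Lightning 1.x and a machine with two available GPUs: selecting the "dp" accelerator makes the Trainer train with this plugin.

```python
import pytorch_lightning as pl

# "dp" selects single-process data-parallel training; the model is
# replicated to each of the 2 GPUs and each replica sees a data split.
trainer = pl.Trainer(gpus=2, accelerator="dp")
# trainer.fit(model)  # model would be a LightningModule (not defined here)
```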
- reduce(collection, *args, **kwargs)[source]¶
Reduces a collection of tensors from all devices. It can also be applied to a single tensor.
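Conceptually, a mean reduction averages the per-device results into one value. The sketch below is an illustrative stand-in in plain Python, not the plugin's actual implementation:

```python
def mean_reduce(values):
    """Average a list of per-device scalar results (illustrative
    stand-in for a mean reduction across device replicas)."""
    return sum(values) / len(values)

# Hypothetical per-device losses from 2 GPUs:
per_device_losses = [0.8, 1.2]
reduced = mean_reduce(per_device_losses)  # 1.0
```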
- reduce_boolean_decision(decision)[source]¶
Reduces the early stopping decision across all processes.
- Return type
bool
- property root_device¶
Returns the root device.