DataParallelPlugin¶
- class pytorch_lightning.plugins.training_type.DataParallelPlugin(parallel_devices=None, checkpoint_io=None)[source]¶
Bases:
pytorch_lightning.plugins.training_type.parallel.ParallelPlugin
Implements data-parallel training in a single process, i.e., the model gets replicated to each device and each gets a split of the data.
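As an illustration of the replicate-and-split idea, here is a minimal pure-Python sketch (the function `data_parallel_forward` is hypothetical and only models the concept; the actual plugin wraps the model with `torch.nn.DataParallel`):

```python
import copy

def data_parallel_forward(model, batch, num_devices):
    """Replicate *model* onto each (pretend) device and give each
    replica its own split of *batch* -- a toy model of DP training."""
    replicas = [copy.deepcopy(model) for _ in range(num_devices)]
    chunk = (len(batch) + num_devices - 1) // num_devices
    splits = [batch[i * chunk:(i + 1) * chunk] for i in range(num_devices)]
    # Each replica processes only its own shard of the data.
    return [replica(split) for replica, split in zip(replicas, splits) if split]

# Usage: a "model" that sums its inputs.
model = lambda xs: sum(xs)
print(data_parallel_forward(model, [1, 2, 3, 4], 2))  # -> [3, 7]
```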
- barrier(*args, **kwargs)[source]¶
Synchronizes all processes, blocking each process until the whole group has entered this function.
- Parameters
name¶ – an optional name to pass into barrier.
- reduce(collection, *args, **kwargs)[source]¶
Reduces a collection of tensors from all processes. It can be applied to just a single tensor.
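Conceptually, the default reduction averages the per-device values. A hedged stand-in (not the plugin's actual implementation, which operates on gathered tensors):

```python
def reduce_mean(collection):
    """Average a flat list of per-device scalars -- a toy stand-in for
    a mean reduction over values gathered from each device."""
    return sum(collection) / len(collection)

# One value per device, reduced to a single result.
print(reduce_mean([0.5, 1.5, 2.5, 3.5]))  # -> 2.0
```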
- reduce_boolean_decision(decision)[source]¶
Reduce the early stopping decision across all processes.
- Return type
bool
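As a sketch of the semantics only (not the library's code): one common way to reduce such a decision is a logical AND across processes, so early stopping triggers only when every process agrees:

```python
def reduce_decision(decisions):
    # Combine per-process early-stopping votes: stop only if all agree.
    return all(decisions)

print(reduce_decision([True, True]))   # -> True
print(reduce_decision([True, False]))  # -> False
```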
- teardown()[source]¶
This method is called to teardown the training process.
It is the right place to release memory and free other resources.
- Return type
None
- property root_device¶
Return the root device.