DataParallelStrategy
- class pytorch_lightning.strategies.DataParallelStrategy(accelerator=None, parallel_devices=None, checkpoint_io=None, precision_plugin=None)[source]
Bases: pytorch_lightning.strategies.parallel.ParallelStrategy
Implements data-parallel training in a single process, i.e., the model is replicated to each device and each device receives a split of the data.
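A minimal usage sketch, assuming PyTorch Lightning 1.x and a machine with two GPUs; the string shorthand strategy="dp" selects the same class:

```python
import pytorch_lightning as pl
from pytorch_lightning.strategies import DataParallelStrategy

trainer = pl.Trainer(
    accelerator="gpu",
    devices=2,
    strategy=DataParallelStrategy(),  # equivalent to strategy="dp"
)
# trainer.fit(model, datamodule=dm)  # model: your LightningModule
```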
- barrier(*args, **kwargs)[source]
Synchronizes all processes, blocking them until the whole group enters this function.
- batch_to_device(batch, device=None, dataloader_idx=0)[source]
Moves the batch to the correct device.
The output has the same type as the input.
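A small sketch of calling this hook directly, assuming the trainer built in the sketch above (in practice the Trainer invokes it for you):

```python
import torch

# Hypothetical batch of tensors; structure and types are preserved.
batch = {"x": torch.randn(8, 32), "y": torch.randint(0, 2, (8,))}
batch = trainer.strategy.batch_to_device(batch, device=torch.device("cuda:0"))
```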
- broadcast(obj, src=0)[source]
Broadcasts an object to all processes.
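Because this strategy runs in a single process, broadcasting is effectively a pass-through; a small sketch, again assuming the trainer from above:

```python
# With only one process, the object from src is returned unchanged.
metrics = {"best_val_acc": 0.92}
metrics = trainer.strategy.broadcast(metrics, src=0)
```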
- predict_step(*args, **kwargs)[source]
The actual predict step.
See predict_step() for more details.
- reduce(collection, *args, **kwargs)[source]
Reduces a collection of tensors from all processes. It can be applied to just a single tensor.
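A sketch of reducing a single tensor, assuming the trainer from above; for this strategy the reduction amounts to averaging the collected values:

```python
import torch

# e.g. a per-device loss collected during a step
per_device_loss = torch.tensor([0.50, 0.70])
reduced = trainer.strategy.reduce(per_device_loss)  # a single averaged value
```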
- reduce_boolean_decision(decision)[source]
Reduces the early-stopping decision across all processes.
- Return type: bool
- setup(trainer)[source]
Sets up plugins for trainer fit and creates optimizers.
- teardown()[source]
This method is called to tear down the training process.
It is the right place to release memory and free other resources.
- Return type: None
- test_step(*args, **kwargs)[source]
The actual test step.
See test_step() for more details.
- training_step(*args, **kwargs)[source]
The actual training step.
See training_step() for more details.
- validation_step(*args, **kwargs)[source]
The actual validation step.
See validation_step() for more details.