DataParallelStrategy
- class lightning.fabric.strategies.DataParallelStrategy(accelerator=None, parallel_devices=None, checkpoint_io=None, precision=None)
Bases: ParallelStrategy
Implements data-parallel training in a single process, i.e., the model is replicated to each device and each device gets a split of the data.
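A minimal usage sketch, assuming two local CUDA devices are available; the toy model and batch are placeholders, and the strategy is selected through Fabric:

```python
import torch
from lightning.fabric import Fabric
from lightning.fabric.strategies import DataParallelStrategy

# Single process, model replicated across two local GPUs (assumed available).
fabric = Fabric(accelerator="cuda", devices=2, strategy=DataParallelStrategy())

model = torch.nn.Linear(32, 4)                     # toy model for illustration
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
model, optimizer = fabric.setup(model, optimizer)  # wraps the model via setup_module

batch = fabric.to_device(torch.randn(8, 32))       # scattering happens inside DataParallel
loss = model(batch).sum()
fabric.backward(loss)
optimizer.step()
```

The sketches below reuse this `fabric` and `model`.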
- all_reduce(collection, group=None, reduce_op='mean')
Reduces the given tensor (e.g. across GPUs/processes).
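As a hedged illustration, averaging a per-device metric through the Fabric front end, which forwards to the strategy's `all_reduce` (the loss value is made up):

```python
import torch

# Reuses `fabric` from the first sketch; 0.25 is an invented local value.
local_loss = torch.tensor(0.25, device=fabric.device)

# reduce_op="mean" averages the tensor across the participating devices/processes.
mean_loss = fabric.all_reduce(local_loss, reduce_op="mean")
```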
- barrier(*args, **kwargs)
Synchronizes all processes, blocking each one until the whole group has entered this function.
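A sketch of the common pattern, assuming the `fabric` object from above; rank zero writes a checkpoint and everyone waits before proceeding (the filename is hypothetical):

```python
if fabric.global_rank == 0:
    fabric.save("checkpoint.ckpt", {"model": model})  # hypothetical path
fabric.barrier()  # no process continues until all have reached this line
```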
- batch_to_device(batch, device=None)
Moves the batch to the correct device.
The returned batch is of the same type as the input batch, just having all tensors on the correct device.
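For illustration, a sketch with an invented nested batch; `fabric.to_device` delegates to this method and preserves the container structure:

```python
import torch

# Made-up batch: a dict of CPU tensors.
batch = {"x": torch.randn(8, 32), "y": torch.randint(0, 4, (8,))}

# Same dict back, with every tensor moved to fabric.device.
batch = fabric.to_device(batch)
```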
- load_module_state_dict(module, state_dict, strict=True)
Loads the given state into the model.
- Return type: None
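A hedged sketch of calling the strategy method directly on the wrapped model from the first example; in practice you would usually go through `fabric.load`, and the checkpoint path is hypothetical:

```python
import torch

# Assumes "checkpoint.ckpt" was written by the earlier fabric.save sketch.
state_dict = torch.load("checkpoint.ckpt")["model"]

# strict=True raises if keys are missing or unexpected.
fabric.strategy.load_module_state_dict(model, state_dict, strict=True)
```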
- reduce_boolean_decision(decision, all=True)
Reduces a boolean decision over distributed processes. By default this is analogous to `all` from the standard library, returning `True` only if all input decisions evaluate to `True`. If `all` is set to `False`, it behaves like `any` instead.
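As a sketch, agreeing on early stopping across processes; the local condition is invented, and `all=False` gives any-semantics so one affirmative vote suffices:

```python
# Hypothetical per-process condition.
local_val_loss = 2.0
should_stop = local_val_loss > 1.0

# With all=False this behaves like any(): True if ANY process votes True.
should_stop = fabric.strategy.reduce_boolean_decision(should_stop, all=False)
```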
- setup_module(module)
Wraps the given model into a DataParallel module.
- Return type: DataParallel
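For illustration, a sketch of the wrapping in isolation, assuming the configured `fabric` from the first example; `fabric.setup` normally calls this for you:

```python
import torch

raw_model = torch.nn.Linear(32, 4)  # toy model

# Returns the model wrapped in torch.nn.parallel.DataParallel.
wrapped = fabric.strategy.setup_module(raw_model)
```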