ParallelStrategy
- class lightning_fabric.strategies.ParallelStrategy(accelerator=None, parallel_devices=None, cluster_environment=None, checkpoint_io=None, precision=None)[source]
Bases: lightning_fabric.strategies.strategy.Strategy, abc.ABC
Strategy for training with multiple processes in parallel.
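For orientation, here is a minimal sketch of running a concrete ParallelStrategy through Fabric; the choice of the "ddp" strategy with two CPU processes is purely illustrative:

    from lightning_fabric import Fabric

    def run(fabric):
        # Each of the parallel processes coordinated by the strategy reports
        # its position in the process group.
        print(f"rank {fabric.global_rank} of {fabric.world_size}")

    if __name__ == "__main__":
        # Fabric builds a concrete ParallelStrategy (DDP here) that launches
        # and coordinates the two worker processes.
        fabric = Fabric(accelerator="cpu", devices=2, strategy="ddp")
        fabric.launch(run)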
- all_gather(tensor, group=None, sync_grads=False)[source]
Perform an all_gather on all processes.
- Return type
Tensor
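A minimal sketch of what all_gather does, using Fabric's public all_gather wrapper, which delegates to the configured strategy; the two-process CPU/DDP setup is an assumption for illustration:

    import torch
    from lightning_fabric import Fabric

    def run(fabric):
        # Every process contributes a tensor containing its own rank ...
        local = torch.tensor([float(fabric.global_rank)])
        # ... and receives the tensors from all processes, stacked along a new
        # leading dimension: shape (world_size, 1) on every rank.
        gathered = fabric.all_gather(local)
        print(fabric.global_rank, gathered.shape)

    if __name__ == "__main__":
        fabric = Fabric(accelerator="cpu", devices=2, strategy="ddp")
        fabric.launch(run)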
- reduce_boolean_decision(decision, all=True)[source]
Reduces a boolean decision over distributed processes. By default this is analogous to the built-in all(), returning True only if all input decisions evaluate to True. If the all argument is set to False, it behaves like any() instead.
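A minimal sketch of reducing a per-process decision so that all ranks agree, for example an early-stopping flag; it assumes the configured strategy is reachable through the Fabric object's strategy attribute:

    from lightning_fabric import Fabric

    def run(fabric):
        # Hypothetical per-process criterion: only rank 0 "wants" to stop here.
        local_decision = fabric.global_rank == 0
        # With all=True (the default) every rank would have to agree; with
        # all=False a single True suffices, so every process receives True.
        should_stop = fabric.strategy.reduce_boolean_decision(local_decision, all=False)
        print(fabric.global_rank, should_stop)

    if __name__ == "__main__":
        fabric = Fabric(accelerator="cpu", devices=2, strategy="ddp")
        fabric.launch(run)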
- teardown()[source]
This method is called to tear down the training process.
It is the right place to release memory and free other resources.
- Return type
None
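A minimal sketch of where a custom strategy might hook into teardown, using the library's DDPStrategy as an illustrative base; the scratch-buffer attribute is hypothetical:

    from lightning_fabric.strategies import DDPStrategy

    class MyDDPStrategy(DDPStrategy):
        def setup_environment(self):
            super().setup_environment()
            # Hypothetical per-process resource allocated for the run.
            self._scratch_buffers = []

        def teardown(self):
            # Free what this strategy allocated, then let the base class
            # perform its own cleanup of the distributed setup.
            self._scratch_buffers = None
            super().teardown()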
- property distributed_sampler_kwargs: Optional[Dict[str, Any]]
Arguments for the DistributedSampler. If this method is not defined, or it returns None, then the DistributedSampler will not be used.
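A minimal sketch of how a parallel strategy typically implements this property; the keys mirror what DistributedSampler expects, and subclassing DDPStrategy here is only illustrative:

    from typing import Any, Dict

    from lightning_fabric.strategies import DDPStrategy

    class MyStrategy(DDPStrategy):
        @property
        def distributed_sampler_kwargs(self) -> Dict[str, Any]:
            # DistributedSampler shards the dataset across processes using the
            # total number of replicas and this process's rank.
            return {"num_replicas": self.world_size, "rank": self.global_rank}

Fabric uses these keyword arguments when it injects a DistributedSampler into dataloaders set up under a parallel strategy.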