ParallelStrategy
- class lightning.fabric.strategies.ParallelStrategy(accelerator=None, parallel_devices=None, cluster_environment=None, checkpoint_io=None, precision=None)[source]
Strategy for training with multiple processes in parallel.
- all_gather(tensor, group=None, sync_grads=False)[source]
Perform an all_gather on all processes.
- Return type:
Tensor
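To make the collective's behavior concrete, here is a minimal single-process simulation. The function name and the per-rank input list are hypothetical stand-ins: the real all_gather operates on one tensor per process across a process group, but the resulting data layout is the same.

```python
def simulate_all_gather(per_rank_values):
    """Simulate all_gather: each list entry is one rank's contribution.

    After the collective, every rank holds the full, rank-ordered
    list of all contributions (modeled here with plain Python values
    instead of tensors).
    """
    gathered = list(per_rank_values)
    # Every rank receives an identical copy of the gathered sequence.
    return [gathered[:] for _ in per_rank_values]
```

For example, with three ranks contributing 10, 20, and 30, every rank ends up holding [10, 20, 30].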
- reduce_boolean_decision(decision, all=True)[source]
Reduces a boolean decision over distributed processes. By default this is analogous to all from the standard library, returning True only if all input decisions evaluate to True. If all is set to False, it behaves like any instead.
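The all/any semantics described above can be sketched in plain Python. The helper name and its list-of-decisions argument are illustrative, not part of the Fabric API: the real method reduces a single local decision across processes, whereas this sketch takes all per-process decisions at once.

```python
def reduce_boolean_decision_sim(decisions, all_mode=True):
    """Simulate reducing per-process boolean decisions.

    ``decisions`` stands in for the boolean each process contributes;
    ``all_mode`` mirrors the ``all`` argument documented above.
    """
    # all=True: every process must agree; all=False: one True suffices.
    return all(decisions) if all_mode else any(decisions)
```

So reduce_boolean_decision_sim([True, False]) yields False, while the same input with all_mode=False yields True.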
- teardown()[source]
This method is called to tear down the training process. It is the right place to release memory and free other resources.
- Return type:
None
- property distributed_sampler_kwargs: Optional[dict[str, Any]]
Arguments for the DistributedSampler. If this property is not defined, or it returns None, then the DistributedSampler will not be used.
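As a sketch of how such kwargs are typically produced and consumed: the num_replicas and rank keys below are the standard torch.utils.data.DistributedSampler parameters, but the helper function itself is illustrative, not Fabric's actual implementation.

```python
def distributed_sampler_kwargs_sim(world_size, global_rank):
    """Build the kwargs dict a parallel strategy would typically return.

    The keys match torch.utils.data.DistributedSampler's parameters;
    a real strategy derives these values from its cluster environment.
    """
    return {"num_replicas": world_size, "rank": global_rank}

# A trainer would then construct the sampler roughly like:
#   sampler = DistributedSampler(dataset, **strategy.distributed_sampler_kwargs)
kwargs = distributed_sampler_kwargs_sim(world_size=4, global_rank=1)
```

If the property instead returned None, the sampler construction step would simply be skipped, as the description above notes.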
- property is_global_zero: bool
Whether the current process is the rank zero process not only on the local node, but for all nodes.