ParallelStrategy

class lightning_fabric.strategies.ParallelStrategy(accelerator=None, parallel_devices=None, cluster_environment=None, checkpoint_io=None, precision=None)[source]

Bases: lightning_fabric.strategies.strategy.Strategy, abc.ABC

Strategy for training with multiple processes in parallel.
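Concrete parallel strategies such as DDPStrategy subclass this; in typical use you select one through Fabric rather than instantiating it directly. A minimal sketch (the accelerator and devices values are illustrative):

    from lightning_fabric import Fabric

    # Select a concrete ParallelStrategy subclass (here DDP) via Fabric.
    # accelerator/devices are illustrative; adjust to your hardware.
    fabric = Fabric(accelerator="cpu", devices=2, strategy="ddp")
    fabric.launch()  # starts the parallel processes

    # The active strategy instance is exposed on the Fabric object.
    assert fabric.strategy is not None

The examples for the members below assume this launched fabric object.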

all_gather(tensor, group=None, sync_grads=False)[source]

Perform an all_gather on all processes.

Return type:

Tensor
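A sketch of calling this from user code, assuming the launched two-process run above; Fabric.all_gather delegates to the active strategy:

    import torch

    # Each process contributes its own tensor; the result gains a
    # leading dimension of size world_size.
    local = torch.tensor([float(fabric.global_rank)])
    gathered = fabric.all_gather(local)  # shape: (world_size, 1)

    # With sync_grads=True, gradients flow back to the local tensor
    # during backward.
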

reduce_boolean_decision(decision, all=True)[source]

Reduces a boolean decision over distributed processes. By default, this is analogous to all from the standard library, returning True only if all input decisions evaluate to True. If all is set to False, it behaves like any instead.

Parameters:
  • decision (bool) – A single input decision.

  • all (bool) – Whether to logically emulate all or any. Defaults to True.

Returns:

The reduced boolean decision.

Return type:

bool
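A sketch of the two modes, accessing the strategy through the launched fabric object from above:

    # Each rank votes; here only global rank 0 votes True.
    decision = fabric.global_rank == 0

    # all=True (default): behaves like all() across ranks -> False
    # whenever more than one process is running.
    unanimous = fabric.strategy.reduce_boolean_decision(decision, all=True)

    # all=False: behaves like any() across ranks -> True here.
    any_true = fabric.strategy.reduce_boolean_decision(decision, all=False)
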

teardown()[source]

This method is called to tear down the training process.

It is the right place to release memory and free other resources.

Return type:

None
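Custom strategies can extend teardown to release resources they own; a hypothetical sketch subclassing DDPStrategy (the cache attribute is illustrative):

    from lightning_fabric.strategies import DDPStrategy

    class MyDDPStrategy(DDPStrategy):
        """Hypothetical subclass that frees an extra cache on teardown."""

        _cache = None  # illustrative resource owned by this strategy

        def teardown(self):
            # Release our own resources before the base class tears down
            # the wrapped model and distributed state.
            self._cache = None
            super().teardown()
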

property distributed_sampler_kwargs: Optional[Dict[str, Any]]

Arguments for the DistributedSampler.

If this property is not defined, or it returns None, then the DistributedSampler will not be used.
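A sketch of how these kwargs feed a DistributedSampler; for parallel strategies they typically carry num_replicas and rank (continuing the launched run above, roughly what Fabric does when setting up a dataloader for you):

    import torch
    from torch.utils.data import DataLoader, DistributedSampler, TensorDataset

    dataset = TensorDataset(torch.arange(8.0))

    kwargs = fabric.strategy.distributed_sampler_kwargs
    if kwargs is not None:  # e.g. {"num_replicas": 2, "rank": 0}
        sampler = DistributedSampler(dataset, **kwargs)
        loader = DataLoader(dataset, sampler=sampler)
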

property is_global_zero: bool

Whether the current process is the rank zero process, not only on the local node but across all nodes.
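
A common use is guarding side effects so they run once per job rather than once per process; Fabric mirrors this property (a sketch, continuing the launched run above):

    # Runs on exactly one process across all nodes.
    if fabric.is_global_zero:
        print("write checkpoints or logs once for the whole job")
    fabric.barrier()  # keep the other ranks in step afterwards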

