lightning.fabric.strategies

Strategies

Strategy

Base class for all strategies that change the behaviour of the training, validation, and test loops.

DDPStrategy

Strategy for multi-process single-device training on one or multiple nodes.

DataParallelStrategy

Implements data-parallel training in a single process: the model is replicated on each device, and each replica receives a split of the data.

FSDPStrategy

Strategy for Fully Sharded Data Parallel provided by torch.distributed.

DeepSpeedStrategy

Provides capabilities to run training using the DeepSpeed library, with training optimizations for large, billion-parameter models.
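DeepSpeedStrategy accepts a config dict or a path to a JSON file via its `config` argument. A hedged sketch of a minimal ZeRO stage-2 config with optimizer-state offloading (field names follow the DeepSpeed config schema; values here are illustrative, not tuned):

```json
{
  "zero_optimization": {
    "stage": 2,
    "offload_optimizer": {
      "device": "cpu"
    }
  }
}
```

For common cases the strategy also exposes shortcut constructor arguments such as `stage`, so a full config file is not always required.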

XLAStrategy

Strategy for training on multiple TPU devices using the torch_xla.distributed.xla_multiprocessing.spawn() method.

XLAFSDPStrategy

Strategy for training on multiple XLA devices using the torch_xla.distributed.xla_fully_sharded_data_parallel.XlaFullyShardedDataParallel class.

ParallelStrategy

Strategy for training with multiple processes in parallel.

SingleDeviceStrategy

Strategy that handles communication on a single device.

SingleDeviceXLAStrategy

Strategy for training on a single XLA device.

ModelParallelStrategy

Enables user-defined parallelism applied to a model.