API References¶
Accelerator API¶
- The Accelerator base class.
- Accelerator for CPU devices.
- Accelerator for GPU devices.
- Accelerator for TPU devices.
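Accelerators are usually not instantiated directly; in 1.x they are selected through Trainer flags. A minimal, hedged sketch (flag names assume a 1.x Trainer, and each option requires the corresponding hardware):

```python
# Hedged sketch: selecting an accelerator indirectly via Trainer flags.
from pytorch_lightning import Trainer

cpu_trainer = Trainer()              # defaults to the CPU accelerator
gpu_trainer = Trainer(gpus=2)        # GPU accelerator on 2 devices (requires CUDA GPUs)
tpu_trainer = Trainer(tpu_cores=8)   # TPU accelerator on 8 cores (requires a TPU + XLA)
```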
Core API¶
- LightningDataModule for loading DataLoaders with ease.
- Decorator for LightningModule methods.
- Various hooks to be used in the Lightning code.
- The LightningModule - an nn.Module with many additional features.
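The classes above are the core building blocks of a Lightning project. As a hedged sketch (the class names, data, and hyperparameters below are illustrative, not part of this reference), a minimal LightningModule and LightningDataModule trained on random data might look like:

```python
# Minimal illustrative sketch of the Core API: a LightningModule plus a
# LightningDataModule trained on random data for one epoch.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl


class LitRegressor(pl.LightningModule):
    """Tiny LightningModule: a single linear layer trained with MSE."""

    def __init__(self):
        super().__init__()
        self.layer = nn.Linear(32, 1)

    def forward(self, x):
        return self.layer(x)

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = nn.functional.mse_loss(self(x), y)
        self.log("train_loss", loss)
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)


class RandomDataModule(pl.LightningDataModule):
    """Tiny LightningDataModule serving random tensors."""

    def train_dataloader(self):
        ds = TensorDataset(torch.randn(256, 32), torch.randn(256, 1))
        return DataLoader(ds, batch_size=32)


if __name__ == "__main__":
    trainer = pl.Trainer(max_epochs=1)
    trainer.fit(LitRegressor(), datamodule=RandomDataModule())
```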
Callbacks API¶
- Abstract base class used to build new callbacks.
- Early Stopping
- GPU Stats Monitor
- Gradient Accumulator
- Learning Rate Monitor
- Model Checkpointing
- Progress Bars
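As a hedged sketch of how these callbacks are typically attached to a Trainer (the monitored metric, patience, and the placeholder model are illustrative assumptions, not part of this reference):

```python
# Hedged sketch: wiring a few of the callbacks above into a Trainer.
from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import EarlyStopping, LearningRateMonitor, ModelCheckpoint

callbacks = [
    EarlyStopping(monitor="val_loss", patience=3),       # stop when val_loss stops improving
    ModelCheckpoint(monitor="val_loss", save_top_k=1),   # keep the best checkpoint only
    LearningRateMonitor(logging_interval="epoch"),       # log learning-rate schedules
]
trainer = Trainer(max_epochs=10, callbacks=callbacks)
# trainer.fit(model, datamodule=dm)   # `model` / `dm` are placeholders
```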
Loggers API¶
- Abstract base class used to build new loggers.
- Comet Logger
- CSV Logger
- MLflow Logger
- Neptune Logger
- TensorBoard Logger
- Test Tube Logger
- Weights and Biases Logger
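A hedged sketch of attaching one of the loggers above to the Trainer (the save directory and experiment name are illustrative assumptions):

```python
# Hedged sketch: passing a logger instance to the Trainer.
from pytorch_lightning import Trainer
from pytorch_lightning.loggers import CSVLogger, TensorBoardLogger

logger = TensorBoardLogger(save_dir="logs/", name="my_experiment")
# logger = CSVLogger(save_dir="logs/")   # dependency-free alternative
trainer = Trainer(max_epochs=5, logger=logger)
```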
Plugins API¶
Training Type Plugins¶
- Base class for all training type plugins that change the behaviour of the training, validation and test loop.
- Plugin that handles communication on a single device.
- Plugin for training with multiple processes in parallel.
- Implements data-parallel training in a single process, i.e., the model gets replicated to each device and each gets a split of the data.
- Plugin for multi-process single-device training on one or multiple nodes.
- DDP2 behaves like DP in one node, but synchronization across nodes behaves like in DDP.
- Optimizer and gradient sharded training provided by FairScale.
- Optimizer sharded training provided by FairScale.
- Spawns processes using torch.multiprocessing.spawn and joins them after training finishes.
- Provides capabilities to run training using the DeepSpeed library, with training optimizations for large billion-parameter models.
- Plugin for Horovod distributed training integration.
- Plugin for training on a single TPU device.
- Plugin for training on multiple TPU devices by spawning one process per TPU core.
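Training type plugins are normally selected implicitly through Trainer flags, but one can be passed explicitly to customise distributed behaviour. A hedged sketch assuming a 1.x Trainer, two GPUs, and the DDPPlugin listed above:

```python
# Hedged sketch: customising the training type plugin (1.x flag names).
from pytorch_lightning import Trainer
from pytorch_lightning.plugins import DDPPlugin

trainer = Trainer(
    gpus=2,
    accelerator="ddp",                                 # multi-process, one device per process
    plugins=DDPPlugin(find_unused_parameters=False),   # skip the unused-parameter search
)
```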
Precision Plugins¶
- Base class for all plugins handling the precision-specific parts of training.
- Plugin for native mixed precision training with torch.cuda.amp.
- Mixed precision for sharded training.
- Mixed precision plugin based on NVIDIA Apex (https://github.com/NVIDIA/apex).
- Precision plugin for DeepSpeed integration.
- Plugin that enables bfloat16 precision on TPUs.
- Plugin for training with double (torch.float64) precision.
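Precision plugins are usually chosen through Trainer flags rather than constructed by hand. A hedged sketch (flag values assume a 1.x Trainer and hardware that supports the chosen mode):

```python
# Hedged sketch: selecting precision behaviour via Trainer flags.
from pytorch_lightning import Trainer

amp_trainer = Trainer(gpus=1, precision=16)                        # native torch.cuda.amp
apex_trainer = Trainer(gpus=1, precision=16, amp_backend="apex")   # NVIDIA Apex backend
double_trainer = Trainer(precision=64)                             # double (float64) precision
```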
Cluster Environments¶
- Specification of a cluster environment.
- The default environment used by Lightning for a single node or a free cluster (not managed).
- An environment for running on clusters managed by the LSF resource manager.
- Environment for fault-tolerant and elastic training with torchelastic.
- Environment for distributed training using the PyTorchJob operator from Kubeflow.
- Cluster environment for training on a cluster managed by SLURM.
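Cluster environments are normally auto-detected from the scheduler's environment variables. A hedged sketch of selecting one explicitly (whether an environment instance can be passed through plugins, and the constructor arguments it accepts, depend on the Lightning version):

```python
# Hedged sketch: explicitly choosing the SLURM cluster environment.
from pytorch_lightning import Trainer
from pytorch_lightning.plugins.environments import SLURMEnvironment

trainer = Trainer(
    gpus=2,
    num_nodes=4,
    accelerator="ddp",
    plugins=[SLURMEnvironment()],   # usually unnecessary: detected automatically on SLURM
)
```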
Profiler API¶
- Specification of a profiler.
- This profiler uses Python's cProfile module to record more detailed information about time spent in each function call during a given action.
- If you wish to write a custom profiler, you should inherit from this class.
- This class should be used when you don't want the (small) overhead of profiling.
- This profiler uses PyTorch's autograd profiler and lets you inspect the cost of different operators inside your model, both on the CPU and GPU.
- This profiler simply records the duration of actions (in seconds) and reports the mean duration of each action and the total time spent over the entire training run.
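A hedged sketch of enabling the profilers above through their 1.x string shortcuts (profiler objects can also be passed directly):

```python
# Hedged sketch: requesting a profiler by name on the Trainer.
from pytorch_lightning import Trainer

simple_trainer = Trainer(profiler="simple")      # mean/total duration per action
advanced_trainer = Trainer(profiler="advanced")  # cProfile-based per-function statistics
pytorch_trainer = Trainer(profiler="pytorch")    # autograd-level operator costs (CPU and GPU)
```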
Trainer API¶
- Trainer to automate the training.
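A hedged sketch of the usual workflow (LitRegressor and RandomDataModule are the placeholder classes from the Core API sketch above, not part of this reference):

```python
# Hedged sketch: the typical Trainer loop.
from pytorch_lightning import Trainer

model = LitRegressor()          # placeholder LightningModule
dm = RandomDataModule()         # placeholder LightningDataModule
trainer = Trainer(max_epochs=3)
trainer.fit(model, datamodule=dm)
```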
Tuner API¶
- Tuner class to tune your model.
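The Tuner is typically driven through Trainer flags and trainer.tune(). A hedged sketch assuming 1.x flag names and a placeholder model:

```python
# Hedged sketch: learning-rate finding and batch-size scaling via the Tuner.
from pytorch_lightning import Trainer

trainer = Trainer(auto_lr_find=True, auto_scale_batch_size="binsearch")
# trainer.tune(model)   # runs the LR finder and batch-size scaler, updating `model`
```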
Utilities API¶
- Helper functions to help with reproducibility of models.
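A hedged sketch of the seeding helper (the workers argument assumes Lightning 1.3 or newer):

```python
# Hedged sketch: seeding Python, NumPy, and PyTorch for reproducible runs.
from pytorch_lightning import seed_everything

seed_everything(42, workers=True)   # also seeds DataLoader workers on 1.3+
```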