Trainer¶
Once you’ve organized your PyTorch code into a LightningModule, the Trainer automates everything else.
This abstraction achieves the following:
You maintain control over all aspects via PyTorch code without an added abstraction.
The trainer uses best practices embedded by contributors and users from top AI labs such as Facebook AI Research, NYU, MIT, Stanford, etc…
The trainer allows overriding any key part that you don’t want automated.
Basic use¶
This is the basic use of the trainer:
model = MyLightningModule()
trainer = Trainer()
trainer.fit(model, train_dataloader, val_dataloader)
Under the hood¶
Under the hood, the Lightning Trainer handles the training loop details for you, some examples include:
Automatically enabling/disabling grads
Running the training, validation and test dataloaders
Calling the Callbacks at the appropriate times
Putting batches and computations on the correct devices
Here’s the pseudocode for what the trainer does under the hood (showing the train loop only):
# put model in train mode
model.train()
torch.set_grad_enabled(True)

losses = []
for batch in train_dataloader:
    # calls hooks like this one
    on_train_batch_start()

    # train step
    loss = training_step(batch)

    # clear gradients
    optimizer.zero_grad()

    # backward
    loss.backward()

    # update parameters
    optimizer.step()

    losses.append(loss)
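For completeness, here is a rough sketch of the validation pass that runs in between (simplified; the real loop also handles hooks, multiple dataloaders and device placement):
# put model in eval mode and disable grads
model.eval()
torch.set_grad_enabled(False)

outputs = []
for batch in val_dataloader:
    out = validation_step(batch)
    outputs.append(out)
validation_epoch_end(outputs)

# restore training mode
model.train()
torch.set_grad_enabled(True)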
Trainer in Python scripts¶
In Python scripts, it’s recommended you use a main function to call the Trainer.
from argparse import ArgumentParser


def main(hparams):
    model = LightningModule()
    trainer = Trainer(accelerator=hparams.accelerator, devices=hparams.devices)
    trainer.fit(model)


if __name__ == "__main__":
    parser = ArgumentParser()
    parser.add_argument("--accelerator", default=None)
    parser.add_argument("--devices", default=None)
    args = parser.parse_args()

    main(args)
You can then run it like this:
python main.py --accelerator 'gpu' --devices 2
Note
Pro-tip: You don’t need to define all flags manually. Lightning can add them automatically
from argparse import ArgumentParser


def main(args):
    model = LightningModule()
    trainer = Trainer.from_argparse_args(args)
    trainer.fit(model)


if __name__ == "__main__":
    parser = ArgumentParser()
    parser = Trainer.add_argparse_args(parser)
    args = parser.parse_args()

    main(args)
You can then run it like this:
python main.py --accelerator 'gpu' --devices 2 --max_steps 10 --limit_train_batches 10 --any_trainer_arg x
Note
If you want to stop a training run early, you can press “Ctrl + C” on your keyboard. The trainer will catch the KeyboardInterrupt and attempt a graceful shutdown, including running the accelerator callback on_train_end to clean up memory. The trainer object will also set an attribute interrupted to True in such cases. If you have a callback which shuts down compute resources, for example, you can conditionally run the shutdown logic for only uninterrupted runs.
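As a minimal sketch of that pattern (the callback name and the shutdown helper are hypothetical), such a callback could check trainer.interrupted before tearing anything down:
from pytorch_lightning.callbacks import Callback


class ShutdownCallback(Callback):
    def on_train_end(self, trainer, pl_module):
        # only release compute resources for runs that finished normally
        if not trainer.interrupted:
            shutdown_compute_resources()  # hypothetical helper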
Validation¶
You can perform an evaluation epoch over the validation set, outside of the training loop, using pytorch_lightning.trainer.trainer.Trainer.validate(). This might be useful if you want to collect new metrics from a model right at its initialization or after it has already been trained.
trainer.validate(dataloaders=val_dataloaders)
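You can also evaluate a specific checkpoint by passing ckpt_path (see the validate() API below). For instance, assuming a checkpoint callback was configured during a previous trainer.fit call:
# evaluate the best checkpoint saved during the previous trainer.fit call
trainer.validate(model, dataloaders=val_dataloaders, ckpt_path="best")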
Testing¶
Once you’re done training, feel free to run the test set! (Only right before publishing your paper or pushing to production)
trainer.test(dataloaders=test_dataloaders)
Reproducibility¶
To ensure full reproducibility from run to run you need to set seeds for pseudo-random generators, and set the deterministic flag in Trainer.
Example:
from pytorch_lightning import Trainer, seed_everything
seed_everything(42, workers=True)
# sets seeds for numpy, torch and python.random.
model = Model()
trainer = Trainer(deterministic=True)
By setting workers=True in seed_everything(), Lightning derives unique seeds across all dataloader workers and processes for torch, numpy and stdlib random number generators. When turned on, it ensures that e.g. data augmentations are not repeated across workers.
Trainer flags¶
accelerator¶
Supports passing different accelerator types ("cpu", "gpu", "tpu", "ipu", "auto") as well as custom accelerator instances.
# CPU accelerator
trainer = Trainer(accelerator="cpu")
# Training with GPU Accelerator using 2 GPUs
trainer = Trainer(devices=2, accelerator="gpu")
# Training with TPU Accelerator using 8 tpu cores
trainer = Trainer(devices=8, accelerator="tpu")
# Training with GPU Accelerator using the DistributedDataParallel strategy
trainer = Trainer(devices=4, accelerator="gpu", strategy="ddp")
Note
The "auto" option recognizes the machine you are on, and selects the respective Accelerator.
# If your machine has GPUs, it will use the GPU Accelerator for training
trainer = Trainer(devices=2, accelerator="auto")
You can also modify hardware behavior by subclassing an existing accelerator to adjust for your needs.
Example:
class MyOwnAcc(CPUAccelerator):
    ...


Trainer(accelerator=MyOwnAcc())
Note
If the devices flag is not defined, it will assume devices to be "auto" and fetch the auto_device_count from the accelerator.
# This is part of the built-in `GPUAccelerator`
class GPUAccelerator(Accelerator):
    """Accelerator for GPU devices."""

    @staticmethod
    def auto_device_count() -> int:
        """Get the devices when set to auto."""
        return torch.cuda.device_count()


# Training with GPU Accelerator using total number of gpus available on the system
Trainer(accelerator="gpu")
Warning
Passing training strategies (e.g., "ddp") to accelerator has been deprecated in v1.5.0 and will be removed in v1.7.0. Please use the strategy argument instead.
accumulate_grad_batches¶
Accumulates gradients over k batches before stepping the optimizer, or per the schedule given in the dict. The Trainer also calls optimizer.step() for the last, possibly incomplete, accumulation window at the end of the epoch.
# default used by the Trainer (no accumulation)
trainer = Trainer(accumulate_grad_batches=1)
Example:
# accumulate every 4 batches (effective batch size is batch*4)
trainer = Trainer(accumulate_grad_batches=4)
# no accumulation for epochs 1-4. accumulate 3 for epochs 5-10. accumulate 20 after that
trainer = Trainer(accumulate_grad_batches={5: 3, 10: 20})
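Conceptually, gradient accumulation is roughly equivalent to the following manual loop (a simplified sketch that ignores the end-of-epoch step for a partial window):
accumulate_grad_batches = 4

for i, batch in enumerate(train_dataloader):
    loss = training_step(batch)
    # scale the loss so the accumulated gradient matches the larger effective batch
    (loss / accumulate_grad_batches).backward()

    if (i + 1) % accumulate_grad_batches == 0:
        optimizer.step()
        optimizer.zero_grad()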
amp_backend¶
Use PyTorch AMP (‘native’), or NVIDIA apex (‘apex’).
# using PyTorch built-in AMP, default used by the Trainer
trainer = Trainer(amp_backend="native")
# using NVIDIA Apex
trainer = Trainer(amp_backend="apex")
amp_level¶
The optimization level to use (O1, O2, etc…) for 16-bit GPU precision (using NVIDIA apex under the hood).
Check the NVIDIA apex docs for details on each level.
Example:
# default used by the Trainer
trainer = Trainer(amp_level='O2')
auto_scale_batch_size¶
Automatically tries to find the largest batch size that fits into memory, before any training.
# default used by the Trainer (no scaling of batch size)
trainer = Trainer(auto_scale_batch_size=None)
# run batch size scaling, result overrides hparams.batch_size
trainer = Trainer(auto_scale_batch_size="binsearch")
# call tune to find the batch size
trainer.tune(model)
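For the result to take effect, the model (or its hparams) needs a batch_size attribute that its dataloaders read. A minimal sketch, where the module and dataset names are illustrative:
class LitModel(LightningModule):
    def __init__(self, batch_size=32):
        super().__init__()
        self.batch_size = batch_size  # the batch size finder overwrites this

    def train_dataloader(self):
        return DataLoader(my_dataset, batch_size=self.batch_size)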
auto_select_gpus¶
If enabled and devices is an integer, pick available GPUs automatically. This is especially useful when GPUs are configured to be in “exclusive mode”, such that only one process at a time can access them.
Example:
# no auto selection (picks first 2 GPUs on system, may fail if other process is occupying)
trainer = Trainer(accelerator="gpu", devices=2, auto_select_gpus=False)
# enable auto selection (will find two available GPUs on system)
trainer = Trainer(accelerator="gpu", devices=2, auto_select_gpus=True)
# specifies all GPUs regardless of their availability
Trainer(accelerator="gpu", devices=-1, auto_select_gpus=False)
# specifies all available GPUs (if only one GPU is not occupied, uses one gpu)
Trainer(accelerator="gpu", devices=-1, auto_select_gpus=True)
auto_lr_find¶
Runs a learning rate finder algorithm (see this paper) when calling trainer.tune(), to find an optimal initial learning rate.
# default used by the Trainer (no learning rate finder)
trainer = Trainer(auto_lr_find=False)
Example:
# run learning rate finder, results override hparams.learning_rate
trainer = Trainer(auto_lr_find=True)
# call tune to find the lr
trainer.tune(model)
Example:
# run learning rate finder, results override hparams.my_lr_arg
trainer = Trainer(auto_lr_find='my_lr_arg')
# call tune to find the lr
trainer.tune(model)
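The suggested learning rate is written to self.lr or self.learning_rate (or the attribute named by the string you pass), so the module should read it in configure_optimizers. A minimal sketch with an illustrative module name:
class LitModel(LightningModule):
    def __init__(self, learning_rate=1e-3):
        super().__init__()
        self.learning_rate = learning_rate  # the learning rate finder overwrites this

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=self.learning_rate)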
Note
See the learning rate finder guide.
benchmark¶
Defaults to True if deterministic is not set.
This flag sets the torch.backends.cudnn.benchmark flag. You can read more about its impact here.
This is likely to increase the speed of your system if your input sizes don’t change. However, if they do, then it might make your system slower. The CUDNN auto-tuner will try to find the best algorithm for the hardware when a new input size is encountered. Read more about it here.
Example:
# defaults to True if not deterministic (which is False by default)
trainer = Trainer()
# you can overwrite the value
trainer = Trainer(benchmark=False)
deterministic¶
If True, enables cudnn.deterministic. Might make your system slower, but ensures reproducibility. Also sets $HOROVOD_FUSION_THRESHOLD=0.
For more info check the PyTorch docs.
Example:
# default used by the Trainer
trainer = Trainer(deterministic=False)
callbacks¶
Add a list of Callback. Callbacks run sequentially in the order defined here, with the exception of ModelCheckpoint callbacks, which run after all others to ensure all states are saved to the checkpoints.
# a list of callbacks
callbacks = [PrintCallback()]
trainer = Trainer(callbacks=callbacks)
Example:
from pytorch_lightning.callbacks import Callback


class PrintCallback(Callback):
    def on_train_start(self, trainer, pl_module):
        print("Training is started!")

    def on_train_end(self, trainer, pl_module):
        print("Training is done.")
Model-specific callbacks can also be added inside the LightningModule through configure_callbacks(). Callbacks returned in this hook will extend the list initially given to the Trainer argument, and replace the trainer callbacks should there be two or more of the same type. ModelCheckpoint callbacks always run last.
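A minimal sketch of such a hook inside a LightningModule (the callback choice here is purely illustrative):
from pytorch_lightning.callbacks import EarlyStopping


class LitModel(LightningModule):
    def configure_callbacks(self):
        # these extend the callbacks passed to the Trainer
        return [EarlyStopping(monitor="val_loss")]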
check_val_every_n_epoch¶
Check val every n train epochs.
Example:
# default used by the Trainer
trainer = Trainer(check_val_every_n_epoch=1)
# run val loop every 10 training epochs
trainer = Trainer(check_val_every_n_epoch=10)
checkpoint_callback¶
Warning
checkpoint_callback has been deprecated in v1.5 and will be removed in v1.7.
To disable checkpointing, pass enable_checkpointing = False
to the Trainer instead.
default_root_dir¶
Default path for logs and weights when no logger or pytorch_lightning.callbacks.ModelCheckpoint callback is passed. On certain clusters you might want to separate where logs and checkpoints are stored. If you don’t need that separation, this argument is a convenient single location. Paths can be local paths or remote paths such as s3://bucket/path or ‘hdfs://path/’. Credentials will need to be set up to use remote filepaths.
# default used by the Trainer
trainer = Trainer(default_root_dir=os.getcwd())
devices¶
Number of devices to train on (int), which devices to train on (list or str), or "auto".
It will be mapped to either gpus, tpu_cores, num_processes or ipus, based on the accelerator type ("cpu", "gpu", "tpu", "ipu", "auto").
# Training with CPU Accelerator using 2 processes
trainer = Trainer(devices=2, accelerator="cpu")
# Training with GPU Accelerator using GPUs 1 and 3
trainer = Trainer(devices=[1, 3], accelerator="gpu")
# Training with TPU Accelerator using 8 tpu cores
trainer = Trainer(devices=8, accelerator="tpu")
Tip
The "auto" option recognizes the devices to train on, depending on the Accelerator being used.
# If your machine has GPUs, it will use all the available GPUs for training
trainer = Trainer(devices="auto", accelerator="auto")
# Training with CPU Accelerator using 1 process
trainer = Trainer(devices="auto", accelerator="cpu")
# Training with TPU Accelerator using 8 tpu cores
trainer = Trainer(devices="auto", accelerator="tpu")
# Training with IPU Accelerator using 4 ipus
trainer = Trainer(devices="auto", accelerator="ipu")
Note
If the devices flag is not defined, it will assume devices to be "auto" and fetch the auto_device_count from the accelerator.
# This is part of the built-in `GPUAccelerator`
class GPUAccelerator(Accelerator):
    """Accelerator for GPU devices."""

    @staticmethod
    def auto_device_count() -> int:
        """Get the devices when set to auto."""
        return torch.cuda.device_count()


# Training with GPU Accelerator using total number of gpus available on the system
Trainer(accelerator="gpu")
enable_checkpointing¶
By default Lightning saves a checkpoint for you in your current working directory, with the state of your last training epoch. Checkpoints capture the exact value of all parameters used by a model. To disable automatic checkpointing, set this to False.
# default used by Trainer, saves the most recent model to a single checkpoint after each epoch
trainer = Trainer(enable_checkpointing=True)
# turn off automatic checkpointing
trainer = Trainer(enable_checkpointing=False)
You can override the default behavior by initializing the ModelCheckpoint callback and adding it to the callbacks list. See Saving and Loading Checkpoints for how to customize checkpointing.
from pytorch_lightning.callbacks import ModelCheckpoint
# Init ModelCheckpoint callback, monitoring 'val_loss'
checkpoint_callback = ModelCheckpoint(monitor="val_loss")
# Add your callback to the callbacks list
trainer = Trainer(callbacks=[checkpoint_callback])
fast_dev_run¶
Runs n batch(es) of train, val and test if set to n (int), or 1 batch of each if set to True, to find any bugs (i.e. a sort of unit test).
Under the hood the pseudocode looks like this when running fast_dev_run with a single batch:
# loading
__init__()
prepare_data()

# test training step
training_batch = next(train_dataloader)
training_step(training_batch)

# test val step
val_batch = next(val_dataloader)
out = validation_step(val_batch)
validation_epoch_end([out])
# default used by the Trainer
trainer = Trainer(fast_dev_run=False)
# runs 1 train, val, test batch and program ends
trainer = Trainer(fast_dev_run=True)
# runs 7 train, val, test batches and program ends
trainer = Trainer(fast_dev_run=7)
Note
This argument is a bit different from limit_train/val/test_batches. Setting this argument will disable the tuner, checkpoint callbacks, early stopping callbacks, loggers and logger callbacks like LearningRateLogger, and will run for only 1 epoch. It must be used only for debugging purposes. limit_train/val/test_batches only limits the number of batches and won’t disable anything.
flush_logs_every_n_steps¶
Warning
flush_logs_every_n_steps
has been deprecated in v1.5 and will be removed in v1.7.
Please configure flushing directly in the logger instead.
Writes logs to disk this often.
# default used by the Trainer
trainer = Trainer(flush_logs_every_n_steps=100)
gpus¶
Number of GPUs to train on (int), or which GPUs to train on (list or str).
# default used by the Trainer (ie: train on CPU)
trainer = Trainer(gpus=None)
# equivalent
trainer = Trainer(gpus=0)
Example:
# int: train on 2 gpus
trainer = Trainer(gpus=2)
# list: train on GPUs 1, 4 (by bus ordering)
trainer = Trainer(gpus=[1, 4])
trainer = Trainer(gpus='1, 4') # equivalent
# -1: train on all gpus
trainer = Trainer(gpus=-1)
trainer = Trainer(gpus='-1') # equivalent
# combine with num_nodes to train on multiple GPUs across nodes
# uses 8 gpus in total
trainer = Trainer(gpus=2, num_nodes=4)
# train only on GPUs 1 and 4 across nodes
trainer = Trainer(gpus=[1, 4], num_nodes=4)
gradient_clip_val¶
Gradient clipping value. A value of 0 means don’t clip.
# default used by the Trainer
trainer = Trainer(gradient_clip_val=0.0)
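For example, to actually clip gradients (by norm, which is the default algorithm; pass gradient_clip_algorithm="value" to clip by value instead):
# clip gradients whose norm exceeds 0.5
trainer = Trainer(gradient_clip_val=0.5)

# clip gradient values element-wise at 0.5
trainer = Trainer(gradient_clip_val=0.5, gradient_clip_algorithm="value")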
limit_train_batches¶
How much of training dataset to check. Useful when debugging or testing something that happens at the end of an epoch.
# default used by the Trainer
trainer = Trainer(limit_train_batches=1.0)
Example:
# default used by the Trainer
trainer = Trainer(limit_train_batches=1.0)
# run through only 25% of the training set each epoch
trainer = Trainer(limit_train_batches=0.25)
# run through only 10 batches of the training set each epoch
trainer = Trainer(limit_train_batches=10)
limit_test_batches¶
How much of test dataset to check.
# default used by the Trainer
trainer = Trainer(limit_test_batches=1.0)
# run through only 25% of the test set each epoch
trainer = Trainer(limit_test_batches=0.25)
# run for only 10 batches
trainer = Trainer(limit_test_batches=10)
In the case of multiple test dataloaders, the limit applies to each dataloader individually.
limit_val_batches¶
How much of validation dataset to check. Useful when debugging or testing something that happens at the end of an epoch.
# default used by the Trainer
trainer = Trainer(limit_val_batches=1.0)
# run through only 25% of the validation set each epoch
trainer = Trainer(limit_val_batches=0.25)
# run for only 10 batches
trainer = Trainer(limit_val_batches=10)
In the case of multiple validation dataloaders, the limit applies to each dataloader individually.
log_every_n_steps¶
How often to add logging rows (does not write to disk)
# default used by the Trainer
trainer = Trainer(log_every_n_steps=50)
logger¶
Logger (or iterable collection of loggers) for experiment tracking. A True value uses the default TensorBoardLogger shown below. False will disable logging.
from pytorch_lightning.loggers import TensorBoardLogger
# default logger used by trainer
logger = TensorBoardLogger(save_dir=os.getcwd(), version=1, name="lightning_logs")
Trainer(logger=logger)
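Since an iterable of loggers is accepted, you can also track the same run with several loggers at once. A small sketch (CSVLogger is just one possible second choice):
from pytorch_lightning.loggers import CSVLogger, TensorBoardLogger

tb_logger = TensorBoardLogger(save_dir=os.getcwd(), name="lightning_logs")
csv_logger = CSVLogger(save_dir=os.getcwd(), name="csv_logs")
trainer = Trainer(logger=[tb_logger, csv_logger])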
max_epochs¶
Stop training once this number of epochs is reached
# default used by the Trainer
trainer = Trainer(max_epochs=1000)
If neither max_epochs nor max_steps is specified, max_epochs will default to 1000.
To enable infinite training, set max_epochs = -1.
min_epochs¶
Force training for at least this many epochs
# default used by the Trainer
trainer = Trainer(min_epochs=1)
max_steps¶
Stop training after this number of global steps. Training will stop when either max_steps or max_epochs is reached, whichever comes first.
# Default (disabled)
trainer = Trainer(max_steps=-1)
# Stop after 100 steps
trainer = Trainer(max_steps=100)
If max_steps is not specified, max_epochs will be used instead (and max_epochs defaults to 1000 if it is also not specified). To enable infinite training, set max_epochs = -1.
min_steps¶
Force training for at least this number of global steps. The Trainer will train the model for at least min_steps or min_epochs, whichever is reached last.
# Default (disabled)
trainer = Trainer(min_steps=None)
# Run at least for 100 steps (disable min_epochs)
trainer = Trainer(min_steps=100, min_epochs=0)
max_time¶
Set the maximum amount of time for training. Training will get interrupted mid-epoch.
For customizable options use the Timer
callback.
# Default (disabled)
trainer = Trainer(max_time=None)
# Stop after 12 hours of training or when reaching 10 epochs (string)
trainer = Trainer(max_time="00:12:00:00", max_epochs=10)
# Stop after 1 day and 5 hours (dict)
trainer = Trainer(max_time={"days": 1, "hours": 5})
In case max_time is used together with min_steps or min_epochs, the min_* requirement always has precedence.
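A minimal sketch of the more configurable route via the Timer callback (assuming its duration argument accepts the same DD:HH:MM:SS format as max_time):
from pytorch_lightning.callbacks import Timer

# stop training after 12 hours
timer = Timer(duration="00:12:00:00")
trainer = Trainer(callbacks=[timer])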
num_nodes¶
Number of GPU nodes for distributed training.
# default used by the Trainer
trainer = Trainer(num_nodes=1)
# to train on 8 nodes
trainer = Trainer(num_nodes=8)
num_processes¶
Number of processes to train with. Automatically set to the number of GPUs when using strategy="ddp". Set to a number greater than 1 when using accelerator="cpu" and strategy="ddp" to mimic distributed training on a machine without GPUs. This is useful for debugging, but will not provide any speedup, since single-process Torch already makes efficient use of multiple CPUs. While it would typically spawn subprocesses for training, setting num_nodes > 1 and keeping num_processes = 1 runs training in the main process.
# Simulate DDP for debugging on your GPU-less laptop
trainer = Trainer(accelerator="cpu", strategy="ddp", num_processes=2)
num_sanity_val_steps¶
Sanity check runs n batches of val before starting the training routine. This catches any bugs in your validation without having to wait for the first validation check. The Trainer uses 2 steps by default. Turn it off or modify it here.
# default used by the Trainer
trainer = Trainer(num_sanity_val_steps=2)
# turn it off
trainer = Trainer(num_sanity_val_steps=0)
# check all validation data
trainer = Trainer(num_sanity_val_steps=-1)
This option will reset the validation dataloader unless num_sanity_val_steps=0
.
overfit_batches¶
Uses this much data of the training set. If nonzero, will turn off validation. If the training dataloaders have shuffle=True, Lightning will automatically disable it.
Useful for quickly debugging or trying to overfit on purpose.
# default used by the Trainer
trainer = Trainer(overfit_batches=0.0)
# use only 1% of the train set
trainer = Trainer(overfit_batches=0.01)
# overfit on 10 of the same batches
trainer = Trainer(overfit_batches=10)
plugins¶
Plugins allow you to connect arbitrary backends, precision libraries, clusters etc.
To define your own behavior, subclass the relevant class and pass it in. Here’s an example linking up your own ClusterEnvironment.
from pytorch_lightning.plugins.environments import ClusterEnvironment


class MyCluster(ClusterEnvironment):
    def main_address(self):
        return your_main_address

    def main_port(self):
        return your_main_port

    def world_size(self):
        return the_world_size


trainer = Trainer(plugins=[MyCluster()], ...)
prepare_data_per_node¶
Warning
prepare_data_per_node has been deprecated in v1.5 and will be removed in v1.7. Please set its value inside LightningDataModule and/or LightningModule directly, as described in the following code:
class LitDataModule(LightningDataModule):
    def __init__(self):
        super().__init__()
        self.prepare_data_per_node = True
If set to True, will call prepare_data() on LOCAL_RANK=0 for every node. If set to False, will only call from NODE_RANK=0, LOCAL_RANK=0.
# default
Trainer(prepare_data_per_node=True)
# use only NODE_RANK=0, LOCAL_RANK=0
Trainer(prepare_data_per_node=False)
precision¶
Lightning supports either double (64), float (32), bfloat16 (bf16), or half (16) precision training.
Half precision, or mixed precision, is the combined use of 32 and 16 bit floating points to reduce memory footprint during model training. This can result in improved performance, achieving +3X speedups on modern GPUs.
# default used by the Trainer
trainer = Trainer(precision=32)
# 16-bit precision
trainer = Trainer(precision=16, accelerator="gpu", devices=1) # works only on CUDA
# bfloat16 precision
trainer = Trainer(precision="bf16")
# 64-bit precision
trainer = Trainer(precision=64)
Note
When running on TPUs, torch.bfloat16 will be used but tensor printing will still show torch.float32.
If you are interested in using Apex 16-bit training: NVIDIA Apex and DDP have instability problems. We recommend using the native AMP for 16-bit precision with multiple GPUs. To use Apex 16-bit training, set the precision trainer flag to 16. You can customize the Apex optimization level by setting the amp_level flag.

# turn on 16-bit
trainer = Trainer(amp_backend="apex", amp_level="O2", precision=16, accelerator="gpu", devices=1)
process_position¶
Warning
process_position has been deprecated in v1.5 and will be removed in v1.7. Please pass TQDMProgressBar with process_position directly to the Trainer’s callbacks argument instead.
Orders the progress bar. Useful when running multiple trainers on the same node.
# default used by the Trainer
trainer = Trainer(process_position=0)
Note
This argument is ignored if a custom callback is passed to callbacks
.
profiler¶
Profiles individual steps during training to assist in identifying bottlenecks. See the profiler documentation for more details.
from pytorch_lightning.profiler import SimpleProfiler, AdvancedProfiler
# default used by the Trainer
trainer = Trainer(profiler=None)
# to profile standard training events, equivalent to `profiler=SimpleProfiler()`
trainer = Trainer(profiler="simple")
# advanced profiler for function-level stats, equivalent to `profiler=AdvancedProfiler()`
trainer = Trainer(profiler="advanced")
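You can also pass a configured profiler instance. A small sketch, assuming SimpleProfiler accepts dirpath and filename arguments for writing its report to a file:
from pytorch_lightning.profiler import SimpleProfiler

# write the profiler report to a file instead of printing it
profiler = SimpleProfiler(dirpath=".", filename="perf_logs")
trainer = Trainer(profiler=profiler)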
progress_bar_refresh_rate¶
Warning
progress_bar_refresh_rate has been deprecated in v1.5 and will be removed in v1.7. Please pass TQDMProgressBar with refresh_rate directly to the Trainer’s callbacks argument instead. To disable the progress bar, pass enable_progress_bar = False to the Trainer.
How often to refresh progress bar (in steps).
# default used by the Trainer
trainer = Trainer(progress_bar_refresh_rate=1)
# disable progress bar
trainer = Trainer(progress_bar_refresh_rate=0)
Note
In Google Colab notebooks, faster refresh rates (a lower number) are known to crash them because of their screen refresh rates. Lightning will set it to 20 in these environments if the user does not provide a value.
This argument is ignored if a custom callback is passed to callbacks.
enable_progress_bar¶
Whether to enable or disable the progress bar. Defaults to True.
# default used by the Trainer
trainer = Trainer(enable_progress_bar=True)
# disable progress bar
trainer = Trainer(enable_progress_bar=False)
reload_dataloaders_every_n_epochs¶
Set to a positive integer to reload dataloaders every n epochs.
# if 0 (default)
train_loader = model.train_dataloader()
for epoch in epochs:
    for batch in train_loader:
        ...

# if a positive integer
for epoch in epochs:
    if not epoch % reload_dataloaders_every_n_epochs:
        train_loader = model.train_dataloader()
    for batch in train_loader:
        ...
replace_sampler_ddp¶
Enables auto adding of DistributedSampler. In PyTorch, you must use it in distributed settings such as TPUs or multi-node. The sampler makes sure each GPU sees the appropriate part of your data.
By default it will add shuffle=True for the train sampler and shuffle=False for the val/test samplers. If you want to customize it, you can set replace_sampler_ddp=False and add your own distributed sampler. If replace_sampler_ddp=True and a distributed sampler was already added, Lightning will not replace the existing one.
# default used by the Trainer
trainer = Trainer(replace_sampler_ddp=True)
If you set this to False, you have to add your own distributed sampler:
# in your LightningModule or LightningDataModule
def train_dataloader(self):
    # default used by the Trainer
    sampler = torch.utils.data.distributed.DistributedSampler(dataset, shuffle=True)
    dataloader = DataLoader(dataset, batch_size=32, sampler=sampler)
    return dataloader
Note
For iterable datasets, we don’t do this automatically.
resume_from_checkpoint¶
Warning
resume_from_checkpoint is deprecated in v1.5 and will be removed in v2.0. Please pass trainer.fit(ckpt_path="some/path/to/my_checkpoint.ckpt") instead.
To resume training from a specific checkpoint pass in the path here. If resuming from a mid-epoch checkpoint, training will start from the beginning of the next epoch.
# default used by the Trainer
trainer = Trainer(resume_from_checkpoint=None)
# resume from a specific checkpoint
trainer = Trainer(resume_from_checkpoint="some/path/to/my_checkpoint.ckpt")
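The recommended replacement passes the checkpoint path to fit directly:
# resume from a specific checkpoint via fit (preferred, non-deprecated API)
trainer = Trainer()
trainer.fit(model, ckpt_path="some/path/to/my_checkpoint.ckpt")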
strategy¶
Supports passing different training strategies with aliases (ddp, ddp_spawn, etc) as well as custom strategies.
# Training with the DistributedDataParallel strategy on 4 GPUs
trainer = Trainer(strategy="ddp", accelerator="gpu", devices=4)
# Training with the DDP Spawn strategy using 4 cpu processes
trainer = Trainer(strategy="ddp_spawn", accelerator="cpu", devices=4)
Note
Additionally, you can pass your custom strategy to the strategy argument.
from pytorch_lightning.strategies import DDPStrategy


class CustomDDPStrategy(DDPStrategy):
    def configure_ddp(self):
        self._model = MyCustomDistributedDataParallel(
            self.model,
            device_ids=...,
        )


trainer = Trainer(strategy=CustomDDPStrategy(), accelerator="gpu", devices=2)
sync_batchnorm¶
Enable synchronization between batchnorm layers across all GPUs.
trainer = Trainer(sync_batchnorm=True)
track_grad_norm¶
-1 means no tracking (default). Otherwise, tracks that p-norm (e.g. 2 for the 2-norm).
# default used by the Trainer
trainer = Trainer(track_grad_norm=-1)
# track the 2-norm
trainer = Trainer(track_grad_norm=2)
tpu_cores¶
How many TPU cores to train on (1 or 8), or which single TPU core to train on, given as a list with one index in [1-8].
A single TPU v2 or v3 has 8 cores. A TPU pod has up to 2048 cores. A slice of a POD means you get as many cores as you request.
Your effective batch size is batch_size * total TPU cores.
This parameter can be either 1 or 8.
Example:
# your_trainer_file.py
# default used by the Trainer (ie: train on CPU)
trainer = Trainer(tpu_cores=None)
# int: train on a single core
trainer = Trainer(tpu_cores=1)
# list: train on a single selected core
trainer = Trainer(tpu_cores=[2])
# int: train on all 8 cores
trainer = Trainer(tpu_cores=8)
# for 8+ cores must submit via xla script with
# a max of 8 cores specified. The XLA script
# will duplicate script onto each TPU in the POD
trainer = Trainer(tpu_cores=8)
To train on more than 8 cores (ie: a POD), submit this script using the xla_dist script.
Example:
python -m torch_xla.distributed.xla_dist \
    --tpu=$TPU_POD_NAME \
    --conda-env=torch-xla-nightly \
    --env=XLA_USE_BF16=1 \
    -- python your_trainer_file.py
val_check_interval¶
How often within one training epoch to check the validation set. Can specify as float or int.
- Pass a float in the range [0.0, 1.0] to check after a fraction of the training epoch.
- Pass an int to check after a fixed number of training batches.
# default used by the Trainer
trainer = Trainer(val_check_interval=1.0)
# check validation set 4 times during a training epoch
trainer = Trainer(val_check_interval=0.25)
# check validation set every 1000 training batches
# use this when using iterableDataset and your dataset has no length
# (ie: production cases with streaming data)
trainer = Trainer(val_check_interval=1000)
# Here is the computation to estimate the total number of batches seen within an epoch.
# Find the total number of train batches
total_train_batches = total_train_samples // (train_batch_size * world_size)
# Compute how many times we will call validation during the training loop
val_check_batch = max(1, int(total_train_batches * val_check_interval))
val_checks_per_epoch = total_train_batches / val_check_batch
# Find the total number of validation batches
total_val_batches = total_val_samples // (val_batch_size * world_size)
# Total number of batches run
total_fit_batches = total_train_batches + total_val_batches
weights_save_path¶
Warning
weights_save_path has been deprecated in v1.6 and will be removed in v1.8. Please pass
dirpath
directly to the ModelCheckpoint
callback.
Directory of where to save weights if specified.
# default used by the Trainer
trainer = Trainer(weights_save_path=os.getcwd())
# save to your custom path
trainer = Trainer(weights_save_path="my/path")
Example:
# if checkpoint callback used, then overrides the weights path
# **NOTE: this saves weights to some/path NOT my/path
checkpoint = ModelCheckpoint(dirpath="some/path")
trainer = Trainer(
    callbacks=[checkpoint],
    weights_save_path="my/path",
)
weights_summary¶
Warning
weights_summary is deprecated in v1.5 and will be removed in v1.7. Please pass ModelSummary directly to the Trainer’s callbacks argument instead. To disable the model summary, pass enable_model_summary = False to the Trainer.
Prints a summary of the weights when training begins. Options: ‘full’, ‘top’, None.
# default used by the Trainer (ie: print summary of top level modules)
trainer = Trainer(weights_summary="top")
# print full summary of all modules and submodules
trainer = Trainer(weights_summary="full")
# don't print a summary
trainer = Trainer(weights_summary=None)
enable_model_summary¶
Whether to enable or disable the model summarization. Defaults to True.
# default used by the Trainer
trainer = Trainer(enable_model_summary=True)
# disable summarization
trainer = Trainer(enable_model_summary=False)
# enable custom summarization
from pytorch_lightning.callbacks import ModelSummary
trainer = Trainer(enable_model_summary=True, callbacks=[ModelSummary(max_depth=-1)])
Trainer class API¶
Methods¶
init¶
- Trainer.__init__(logger=True, checkpoint_callback=None, enable_checkpointing=True, callbacks=None, default_root_dir=None, gradient_clip_val=None, gradient_clip_algorithm=None, process_position=0, num_nodes=1, num_processes=None, devices=None, gpus=None, auto_select_gpus=False, tpu_cores=None, ipus=None, log_gpu_memory=None, progress_bar_refresh_rate=None, enable_progress_bar=True, overfit_batches=0.0, track_grad_norm=-1, check_val_every_n_epoch=1, fast_dev_run=False, accumulate_grad_batches=None, max_epochs=None, min_epochs=None, max_steps=-1, min_steps=None, max_time=None, limit_train_batches=None, limit_val_batches=None, limit_test_batches=None, limit_predict_batches=None, val_check_interval=None, flush_logs_every_n_steps=None, log_every_n_steps=50, accelerator=None, strategy=None, sync_batchnorm=False, precision=32, enable_model_summary=True, weights_summary='top', weights_save_path=None, num_sanity_val_steps=2, resume_from_checkpoint=None, profiler=None, benchmark=None, deterministic=False, reload_dataloaders_every_n_epochs=0, auto_lr_find=False, replace_sampler_ddp=True, detect_anomaly=False, auto_scale_batch_size=False, prepare_data_per_node=None, plugins=None, amp_backend='native', amp_level=None, move_metrics_to_cpu=False, multiple_trainloader_mode='max_size_cycle', stochastic_weight_avg=False, terminate_on_nan=None)[source]
Customize every aspect of training via flags.
- Parameters
accelerator (Union[str, Accelerator, None]) – Supports passing different accelerator types ("cpu", "gpu", "tpu", "ipu", "hpu", "auto") as well as custom accelerator instances. Deprecated since v1.5: Passing training strategies (e.g., "ddp") to accelerator has been deprecated in v1.5.0 and will be removed in v1.7.0. Please use the strategy argument instead.
accumulate_grad_batches (Union[int, Dict[int, int], None]) – Accumulates grads every k batches or as set up in the dict. Default: None.
amp_backend (str) – The mixed precision backend to use ("native" or "apex"). Default: 'native'.
amp_level (Optional[str]) – The optimization level to use (O1, O2, etc…). By default it will be set to "O2" if amp_backend is set to "apex".
auto_lr_find (Union[bool, str]) – If set to True, will make trainer.tune() run a learning rate finder, trying to optimize initial learning for faster convergence. trainer.tune() will set the suggested learning rate in self.lr or self.learning_rate in the LightningModule. To use a different key, set a string instead of True with the key name. Default: False.
auto_scale_batch_size (Union[str, bool]) – If set to True, will initially run a batch size finder trying to find the largest batch size that fits into memory. The result will be stored in self.batch_size in the LightningModule. Additionally, can be set to either "power", which estimates the batch size through a power search, or "binsearch", which estimates the batch size through a binary search. Default: False.
auto_select_gpus (bool) – If enabled and gpus is an integer, pick available gpus automatically. This is especially useful when GPUs are configured to be in "exclusive mode", such that only one process at a time can access them. Default: False.
benchmark (Optional[bool]) – Sets torch.backends.cudnn.benchmark. Defaults to True if deterministic is False. Overwrite to manually set a different value. Default: None.
callbacks (Union[List[Callback], Callback, None]) – Add a callback or list of callbacks. Default: None.
checkpoint_callback (Optional[bool]) – If True, enable checkpointing. Default: None. Deprecated since v1.5: checkpoint_callback has been deprecated in v1.5 and will be removed in v1.7. Please consider using enable_checkpointing instead.
enable_checkpointing (bool) – If True, enable checkpointing. It will configure a default ModelCheckpoint callback if there is no user-defined ModelCheckpoint in callbacks. Default: True.
check_val_every_n_epoch (int) – Check val every n train epochs. Default: 1.
default_root_dir (Optional[str]) – Default path for logs and weights when no logger/ckpt_callback passed. Default: os.getcwd(). Can be remote file paths such as s3://mybucket/path or 'hdfs://path/'.
detect_anomaly (bool) – Enable anomaly detection for the autograd engine. Default: False.
deterministic (bool) – If True, sets whether PyTorch operations must use deterministic algorithms. Default: False.
devices (Union[List[int], str, int, None]) – Will be mapped to either gpus, tpu_cores, num_processes or ipus, based on the accelerator type.
fast_dev_run (Union[int, bool]) – Runs n batch(es) of train, val and test if set to n (int), or 1 batch of each if set to True, to find any bugs (i.e. a sort of unit test). Default: False.
flush_logs_every_n_steps (Optional[int]) – How often to flush logs to disk (defaults to every 100 steps). Deprecated since v1.5: flush_logs_every_n_steps has been deprecated in v1.5 and will be removed in v1.7. Please configure flushing directly in the logger instead.
gpus (Union[List[int], str, int, None]) – Number of GPUs to train on (int) or which GPUs to train on (list or str), applied per node. Default: None.
gradient_clip_val (Union[int, float, None]) – The value at which to clip gradients. Passing gradient_clip_val=None disables gradient clipping. If using Automatic Mixed Precision (AMP), the gradients will be unscaled before. Default: None.
gradient_clip_algorithm (Optional[str]) – The gradient clipping algorithm to use. Pass gradient_clip_algorithm="value" to clip by value, and gradient_clip_algorithm="norm" to clip by norm. By default it will be set to "norm".
limit_train_batches (Union[int, float, None]) – How much of training dataset to check (float = fraction, int = num_batches). Default: 1.0.
limit_val_batches (Union[int, float, None]) – How much of validation dataset to check (float = fraction, int = num_batches). Default: 1.0.
limit_test_batches (Union[int, float, None]) – How much of test dataset to check (float = fraction, int = num_batches). Default: 1.0.
limit_predict_batches (Union[int, float, None]) – How much of prediction dataset to check (float = fraction, int = num_batches). Default: 1.0.
logger (Union[LightningLoggerBase, Iterable[LightningLoggerBase], bool]) – Logger (or iterable collection of loggers) for experiment tracking. A True value uses the default TensorBoardLogger. False will disable logging. If multiple loggers are provided and the save_dir property of that logger is not set, local files (checkpoints, profiler traces, etc.) are saved in default_root_dir rather than in the log_dir of any of the individual loggers. Default: True.
log_gpu_memory (Optional[str]) – None, 'min_max', 'all'. Might slow performance. Deprecated since v1.5: Deprecated in v1.5.0 and will be removed in v1.7.0. Please use the DeviceStatsMonitor callback directly instead.
log_every_n_steps (int) – How often to log within steps. Default: 50.
prepare_data_per_node (Optional[bool]) – If True, each LOCAL_RANK=0 will call prepare data. Otherwise only NODE_RANK=0, LOCAL_RANK=0 will prepare data. Deprecated since v1.5: Deprecated in v1.5.0 and will be removed in v1.7.0. Please set prepare_data_per_node in LightningDataModule and/or LightningModule directly instead.
process_position (int) – Orders the progress bar when running multiple models on the same machine. Deprecated since v1.5: process_position has been deprecated in v1.5 and will be removed in v1.7. Please pass TQDMProgressBar with process_position directly to the Trainer’s callbacks argument instead.
progress_bar_refresh_rate (Optional[int]) – How often to refresh the progress bar (in steps). Value 0 disables the progress bar. Ignored when a custom progress bar is passed to callbacks. Default: None, meaning a suitable value will be chosen based on the environment (terminal, Google COLAB, etc.). Deprecated since v1.5: progress_bar_refresh_rate has been deprecated in v1.5 and will be removed in v1.7. Please pass TQDMProgressBar with refresh_rate directly to the Trainer’s callbacks argument instead. To disable the progress bar, pass enable_progress_bar = False to the Trainer.
enable_progress_bar (bool) – Whether to enable the progress bar by default. Default: True.
profiler (Union[BaseProfiler, str, None]) – To profile individual steps during training and assist in identifying bottlenecks. Default: None.
overfit_batches (Union[int, float]) – Overfit a fraction of training data (float) or a set number of batches (int). Default: 0.0.
plugins (Union[Strategy, PrecisionPlugin, ClusterEnvironment, CheckpointIO, LayerSync, str, List[Union[Strategy, PrecisionPlugin, ClusterEnvironment, CheckpointIO, LayerSync, str]], None]) – Plugins allow modification of core behavior like ddp and amp, and enable custom lightning plugins. Default: None.
precision (Union[int, str]) – Double precision (64), full precision (32), half precision (16) or bfloat16 precision (bf16). Can be used on CPU, GPU, TPUs, HPUs or IPUs. Default: 32.
max_epochs (Optional[int]) – Stop training once this number of epochs is reached. Disabled by default (None). If both max_epochs and max_steps are not specified, defaults to max_epochs = 1000. To enable infinite training, set max_epochs = -1.
min_epochs (Optional[int]) – Force training for at least this many epochs. Disabled by default (None).
max_steps (int) – Stop training after this number of steps. Disabled by default (-1). If max_steps = -1 and max_epochs = None, will default to max_epochs = 1000. To enable infinite training, set max_epochs to -1.
min_steps (Optional[int]) – Force training for at least this number of steps. Disabled by default (None).
max_time (Union[str, timedelta, Dict[str, int], None]) – Stop training after this amount of time has passed. Disabled by default (None). The time duration can be specified in the format DD:HH:MM:SS (days, hours, minutes, seconds), as a datetime.timedelta, or a dictionary with keys that will be passed to datetime.timedelta.
num_nodes (int) – Number of GPU nodes for distributed training. Default: 1.
num_processes (Optional[int]) – Number of processes for distributed training with accelerator="cpu". Default: 1.
num_sanity_val_steps (int) – Sanity check runs n validation batches before starting the training routine. Set it to -1 to run all batches in all validation dataloaders. Default: 2.
reload_dataloaders_every_n_epochs (int) – Set to a non-negative integer to reload dataloaders every n epochs. Default: 0.
replace_sampler_ddp (bool) – Explicitly enables or disables sampler replacement. If not specified, this will be toggled automatically when DDP is used. By default it will add shuffle=True for the train sampler and shuffle=False for the val/test samplers. If you want to customize it, you can set replace_sampler_ddp=False and add your own distributed sampler.
resume_from_checkpoint (Union[str, Path, None]) – Path/URL of the checkpoint from which training is resumed. If there is no checkpoint file at the path, an exception is raised. If resuming from a mid-epoch checkpoint, training will start from the beginning of the next epoch. Deprecated since v1.5: resume_from_checkpoint is deprecated in v1.5 and will be removed in v2.0. Please pass the path to Trainer.fit(..., ckpt_path=...) instead.
strategy (Union[str, Strategy, None]) – Supports different training strategies with aliases as well as custom strategies. Default: None.
sync_batchnorm (bool) – Synchronize batch norm layers between process groups/whole world. Default: False.
terminate_on_nan (Optional[bool]) – If set to True, will terminate training (by raising a ValueError) at the end of each training batch, if any of the parameters or the loss are NaN or +/-inf. Deprecated since v1.5: Trainer argument terminate_on_nan was deprecated in v1.5 and will be removed in v1.7. Please use detect_anomaly instead.
tpu_cores (Union[List[int], str, int, None]) – How many TPU cores to train on (1 or 8) / single TPU core to train on (1). Default: None.
ipus (Optional[int]) – How many IPUs to train on. Default: None.
track_grad_norm (Union[int, float, str]) – -1 means no tracking. Otherwise tracks that p-norm. May be set to 'inf' for the infinity-norm. If using Automatic Mixed Precision (AMP), the gradients will be unscaled before logging them. Default: -1.
val_check_interval (Union[int, float, None]) – How often to check the validation set. Pass a float in the range [0.0, 1.0] to check after a fraction of the training epoch. Pass an int to check after a fixed number of training batches. Default: 1.0.
enable_model_summary (bool) – Whether to enable model summarization by default. Default: True.
weights_summary (Optional[str]) – Prints a summary of the weights when training begins. Deprecated since v1.5: weights_summary has been deprecated in v1.5 and will be removed in v1.7. To disable the summary, pass enable_model_summary = False to the Trainer. To customize the summary, pass ModelSummary directly to the Trainer’s callbacks argument.
weights_save_path (Optional[str]) – Where to save weights if specified. Will override default_root_dir for checkpoints only. Use this if for whatever reason you need the checkpoints stored in a different place than the logs written in default_root_dir. Can be remote file paths such as s3://mybucket/path or 'hdfs://path/'. Defaults to default_root_dir. Deprecated since v1.6: weights_save_path has been deprecated in v1.6 and will be removed in v1.8. Please pass dirpath directly to the ModelCheckpoint callback.
move_metrics_to_cpu (bool) – Whether to force internal logged metrics to be moved to cpu. This can save some gpu memory, but can make training slower. Use with attention. Default: False.
multiple_trainloader_mode (str) – How to loop over the datasets when there are multiple train loaders. In 'max_size_cycle' mode, the trainer ends one epoch when the largest dataset is traversed, and smaller datasets reload when running out of their data. In 'min_size' mode, all the datasets reload when reaching the minimum length of datasets. Default: "max_size_cycle".
stochastic_weight_avg (bool) – Whether to use Stochastic Weight Averaging (SWA). Default: False. Deprecated since v1.5: stochastic_weight_avg has been deprecated in v1.5 and will be removed in v1.7. Please pass StochasticWeightAveraging directly to the Trainer’s callbacks argument instead.
fit¶
- Trainer.fit(model, train_dataloaders=None, val_dataloaders=None, datamodule=None, ckpt_path=None)[source]
Runs the full optimization routine.
- Parameters
model (LightningModule) – Model to fit.
train_dataloaders (Union[DataLoader, Sequence[DataLoader], Sequence[Sequence[DataLoader]], Sequence[Dict[str, DataLoader]], Dict[str, DataLoader], Dict[str, Dict[str, DataLoader]], Dict[str, Sequence[DataLoader]], LightningDataModule, None]) – A collection of torch.utils.data.DataLoader or a LightningDataModule specifying training samples. In the case of multiple dataloaders, please see this section.
val_dataloaders (Union[DataLoader, Sequence[DataLoader], None]) – A torch.utils.data.DataLoader or a sequence of them specifying validation samples.
ckpt_path (Optional[str]) – Path/URL of the checkpoint from which training is resumed. If there is no checkpoint file at the path, an exception is raised. If resuming from a mid-epoch checkpoint, training will start from the beginning of the next epoch.
datamodule (Optional[LightningDataModule]) – An instance of LightningDataModule.
- Return type
None
validate¶
- Trainer.validate(model=None, dataloaders=None, ckpt_path=None, verbose=True, datamodule=None)[source]
Perform one evaluation epoch over the validation set.
- Parameters
model (Optional[LightningModule]) – The model to validate.
dataloaders (Union[DataLoader, Sequence[DataLoader], LightningDataModule, None]) – A torch.utils.data.DataLoader or a sequence of them, or a LightningDataModule specifying validation samples.
ckpt_path (Optional[str]) – Either best or a path to the checkpoint you wish to validate. If None and the model instance was passed, use the current weights. Otherwise, the best model checkpoint from the previous trainer.fit call will be loaded if a checkpoint callback is configured.
datamodule (Optional[LightningDataModule]) – An instance of LightningDataModule.
- Returns
List of dictionaries with metrics logged during the validation phase, e.g., in model- or callback hooks like validation_step(), validation_epoch_end(), etc. The length of the list corresponds to the number of validation dataloaders used.
test¶
- Trainer.test(model=None, dataloaders=None, ckpt_path=None, verbose=True, datamodule=None)[source]
Perform one evaluation epoch over the test set. It’s separated from fit to make sure you never run on your test set until you want to.
- Parameters
model (Optional[LightningModule]) – The model to test.
dataloaders (Union[DataLoader, Sequence[DataLoader], LightningDataModule, None]) – A torch.utils.data.DataLoader or a sequence of them, or a LightningDataModule specifying test samples.
ckpt_path (Optional[str]) – Either best or a path to the checkpoint you wish to test. If None and the model instance was passed, use the current weights. Otherwise, the best model checkpoint from the previous trainer.fit call will be loaded if a checkpoint callback is configured.
datamodule (Optional[LightningDataModule]) – An instance of LightningDataModule.
- Returns
List of dictionaries with metrics logged during the test phase, e.g., in model- or callback hooks like test_step(), test_epoch_end(), etc. The length of the list corresponds to the number of test dataloaders used.
predict¶
- Trainer.predict(model=None, dataloaders=None, datamodule=None, return_predictions=None, ckpt_path=None)[source]
Run inference on your data. This will call the model forward function to compute predictions. Useful to perform distributed and batched predictions. Logging is disabled in the predict hooks.
- Parameters
model (Optional[LightningModule]) – The model to predict with.
dataloaders (Union[DataLoader, Sequence[DataLoader], LightningDataModule, None]) – A torch.utils.data.DataLoader or a sequence of them, or a LightningDataModule specifying prediction samples.
datamodule (Optional[LightningDataModule]) – The datamodule with a predict_dataloader method that returns one or more dataloaders.
return_predictions (Optional[bool]) – Whether to return predictions. True by default except when an accelerator that spawns processes is used (not supported).
ckpt_path (Optional[str]) – Either best or a path to the checkpoint you wish to predict with. If None and the model instance was passed, use the current weights. Otherwise, the best model checkpoint from the previous trainer.fit call will be loaded if a checkpoint callback is configured.
- Returns
A list of dictionaries, one for each provided dataloader, containing their respective predictions.
tune¶
- Trainer.tune(model, train_dataloaders=None, val_dataloaders=None, datamodule=None, scale_batch_size_kwargs=None, lr_find_kwargs=None)[source]
Runs routines to tune hyperparameters before training.
- Parameters
model (LightningModule) – Model to tune.
train_dataloaders (Union[DataLoader, Sequence[DataLoader], Sequence[Sequence[DataLoader]], Sequence[Dict[str, DataLoader]], Dict[str, DataLoader], Dict[str, Dict[str, DataLoader]], Dict[str, Sequence[DataLoader]], LightningDataModule, None]) – A collection of torch.utils.data.DataLoader or a LightningDataModule specifying training samples. In the case of multiple dataloaders, please see this section.
val_dataloaders (Union[DataLoader, Sequence[DataLoader], None]) – A torch.utils.data.DataLoader or a sequence of them specifying validation samples.
datamodule (Optional[LightningDataModule]) – An instance of LightningDataModule.
scale_batch_size_kwargs (Optional[Dict[str, Any]]) – Arguments for scale_batch_size().
lr_find_kwargs (Optional[Dict[str, Any]]) – Arguments for lr_find().
Properties¶
callback_metrics¶
The metrics available to callbacks. These are automatically set when you log via self.log.
def training_step(self, batch, batch_idx):
    self.log("a_val", 2)


callback_metrics = trainer.callback_metrics
assert callback_metrics["a_val"] == 2
current_epoch¶
The number of epochs run.
if trainer.current_epoch >= 10:
    ...
global_step¶
The number of optimizer steps taken (does not reset each epoch). This includes multiple optimizers and TBPTT steps (if enabled).
if trainer.global_step >= 100:
    ...
logger¶
The current logger being used. Here’s an example using tensorboard
logger = trainer.logger
tensorboard = logger.experiment
loggers¶
The list of loggers currently being used by the Trainer.
# List of LightningLoggerBase objects
loggers = trainer.loggers
for logger in loggers:
    logger.log_metrics({"foo": 1.0})
logged_metrics¶
The metrics sent to the logger (visualizer).
def training_step(self, batch, batch_idx):
    self.log("a_val", 2, logger=True)


logged_metrics = trainer.logged_metrics
assert logged_metrics["a_val"] == 2
log_dir¶
The directory for the current experiment. Use this to save images to, etc…
def training_step(self, batch, batch_idx):
    img = ...
    save_img(img, self.trainer.log_dir)
is_global_zero¶
Whether this process is the global zero in multi-node training.
def training_step(self, batch, batch_idx):
    if self.trainer.is_global_zero:
        print("in node 0, accelerator 0")
progress_bar_metrics¶
The metrics sent to the progress bar.
def training_step(self, batch, batch_idx):
    self.log("a_val", 2, prog_bar=True)


progress_bar_metrics = trainer.progress_bar_metrics
assert progress_bar_metrics["a_val"] == 2
estimated_stepping_batches¶
Check out estimated_stepping_batches().
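The estimated number of stepping batches is handy when a scheduler needs the total number of optimizer steps up front. A minimal sketch inside a LightningModule (the scheduler choice is illustrative):
def configure_optimizers(self):
    optimizer = torch.optim.Adam(self.parameters(), lr=1e-3)
    scheduler = torch.optim.lr_scheduler.OneCycleLR(
        optimizer,
        max_lr=1e-3,
        total_steps=self.trainer.estimated_stepping_batches,
    )
    return [optimizer], [scheduler]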