
Regular User


| If | Then | Ref |
|---|---|---|
| used Python 3.7 | upgrade to Python 3.8 or higher | #16579 |
| used PyTorch 1.10 | upgrade to PyTorch 1.11 or higher | #16492 |
| used Trainer's flag gpus | use devices with the same number (see the examples after this table) | #16171 |
| used Trainer's flag tpu_cores | use devices with the same number | #16171 |
| used Trainer's flag ipus | use devices with the same number | #16171 |
| used Trainer's flag num_processes | use devices with the same number | #16171 |
| used Trainer's flag resume_from_checkpoint | pass the path to the Trainer.fit(ckpt_path="...") method | #10061 |
| used Trainer's flag auto_select_gpus | use devices="auto" | #16184 |
| called the pl.tuner.auto_gpu_select.pick_single_gpu function | use Trainer's flag devices="auto" | #16184 |
| called the pl.tuner.auto_gpu_select.pick_multiple_gpus function | use Trainer's flag devices="auto" | #16184 |
| used Trainer's flag accumulate_grad_batches with a scheduling dictionary value | use the GradientAccumulationScheduler callback and configure it (see the examples after this table) | #16729 |
| imported profilers from pl.profiler | import from pl.profilers | #16359 |
| used Tuner as part of Trainer in any form | move to a standalone Tuner object or use the LearningRateFinder and BatchSizeFinder callbacks | https://lightning.ai/docs/pytorch/latest/advanced/training_tricks.html#batch-size-finder |
| used Trainer's flag auto_scale_batch_size | use the BatchSizeFinder callback instead; the Trainer.tune() method was removed (see the examples after this table) | |
| used Trainer's flag auto_lr_find | use the LearningRateFinder callback instead; the Trainer.tune() method was removed | |
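
For the hardware-selection and checkpoint-resume rows above, the change amounts to moving arguments: device selection goes through accelerator/devices, and the checkpoint path moves from the Trainer constructor to Trainer.fit(). Below is a minimal, self-contained sketch; TinyModel and the random dataset are placeholders, and the import assumes the 2.0 package layout (import lightning.pytorch as pl, equivalent to pytorch_lightning).

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
import lightning.pytorch as pl  # 2.0 layout; `import pytorch_lightning as pl` also works


class TinyModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 2)

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = torch.nn.functional.cross_entropy(self.layer(x), y)
        self.log("loss", loss, prog_bar=True)  # replaces the removed automatic running-loss display
        return loss

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.1)


train_data = DataLoader(
    TensorDataset(torch.randn(64, 32), torch.randint(0, 2, (64,))), batch_size=8
)

# 1.x style (flags removed in 2.0):
#   trainer = pl.Trainer(gpus=2, resume_from_checkpoint="last.ckpt")
# 2.0 style: pick hardware via accelerator/devices and pass the checkpoint path to fit().
trainer = pl.Trainer(accelerator="auto", devices="auto", max_epochs=1)
trainer.fit(TinyModel(), train_data)
# To resume a previous run, point fit() at the saved checkpoint instead:
#   trainer.fit(TinyModel(), train_data, ckpt_path="last.ckpt")
```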
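
The accumulate_grad_batches scheduling dictionary moves to the GradientAccumulationScheduler callback; a plain integer is still accepted by the Trainer flag. A minimal sketch:

```python
import lightning.pytorch as pl
from lightning.pytorch.callbacks import GradientAccumulationScheduler

# 1.x style (removed in 2.0):
#   trainer = pl.Trainer(accumulate_grad_batches={0: 4, 10: 8})
# 2.0 style: the epoch -> accumulation-factor schedule lives in the callback.
scheduler = GradientAccumulationScheduler(scheduling={0: 4, 10: 8})
trainer = pl.Trainer(callbacks=[scheduler])

# A constant factor still works directly on the Trainer:
trainer = pl.Trainer(accumulate_grad_batches=4)
```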
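
For the auto_scale_batch_size and auto_lr_find rows, the searches now run either through a standalone Tuner object or through the BatchSizeFinder / LearningRateFinder callbacks. The sketch below assumes a module that exposes batch_size and learning_rate hyperparameters, which is what the tuner looks for; TunableModel and its data are placeholders.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
import lightning.pytorch as pl
from lightning.pytorch.tuner import Tuner
from lightning.pytorch.callbacks import BatchSizeFinder, LearningRateFinder


class TunableModel(pl.LightningModule):
    def __init__(self, batch_size: int = 8, learning_rate: float = 0.1):
        super().__init__()
        self.save_hyperparameters()  # exposes batch_size / learning_rate to the tuner
        self.layer = torch.nn.Linear(32, 2)

    def train_dataloader(self):
        data = TensorDataset(torch.randn(256, 32), torch.randint(0, 2, (256,)))
        return DataLoader(data, batch_size=self.hparams.batch_size)

    def training_step(self, batch, batch_idx):
        x, y = batch
        return torch.nn.functional.cross_entropy(self.layer(x), y)

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=self.hparams.learning_rate)


model = TunableModel()

# Option A: standalone Tuner object (replaces Trainer.tune()).
trainer = pl.Trainer(max_epochs=1)
tuner = Tuner(trainer)
tuner.scale_batch_size(model, mode="power")  # replaces Trainer(auto_scale_batch_size=True)
tuner.lr_find(model)                          # replaces Trainer(auto_lr_find=True)

# Option B: callbacks that run the same searches at the start of fit().
trainer = pl.Trainer(max_epochs=1, callbacks=[BatchSizeFinder(), LearningRateFinder()])
```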

Advanced User


| If | Then | Ref |
|---|---|---|
| used the pl.lite module | switch to lightning_fabric | #15953 |
| used Trainer's flag strategy='dp' | use DDP with strategy='ddp' or DeepSpeed instead | #16748 |
| implemented the LightningModule.training_epoch_end hook | port your logic to the LightningModule.on_train_epoch_end hook (see the examples after this table) | #16520 |
| implemented the LightningModule.validation_epoch_end hook | port your logic to the LightningModule.on_validation_epoch_end hook | #16520 |
| implemented the LightningModule.test_epoch_end hook | port your logic to the LightningModule.on_test_epoch_end hook | #16520 |
| used Trainer's flag multiple_trainloader_mode | switch to CombinedLoader(..., mode=...) and set the mode directly (see the examples after this table) | #16800 |
| used Trainer's flag move_metrics_to_cpu | implement the offload logic in your custom metric or enable it in torchmetrics | #16358 |
| used Trainer's flag track_grad_norm | override the on_before_optimizer_step hook and log the gradient norm there yourself, e.g. via the LightningModule.log_grad_norm() hook | #16745 |
| used Trainer's flag replace_sampler_ddp | use use_distributed_sampler; the sampler now gets created not only for the DDP strategies | |
| relied on the on_tpu argument in the LightningModule.optimizer_step hook | switch to manual optimization | #16537, Manual Optimization |
| relied on the using_lbfgs argument in the LightningModule.optimizer_step hook | switch to manual optimization | #16538, Manual Optimization |
| were using nvidia/apex in any form | switch to PyTorch native mixed precision (torch.amp) instead | #16039, Precision |
| used Trainer's flag using_native_amp | use PyTorch native mixed precision | #16039, Precision |
| used Trainer's flag amp_backend | use PyTorch native mixed precision | #16039, Precision |
| used Trainer's flag amp_level | use PyTorch native mixed precision | #16039, Precision |
| used Trainer's attribute using_native_amp | use PyTorch native mixed precision | #16039, Precision |
| used Trainer's attribute amp_backend | use PyTorch native mixed precision | #16039, Precision |
| used Trainer's attribute amp_level | use PyTorch native mixed precision | #16039, Precision |
| used the FairScale integration | consider using PyTorch's native FSDP implementation or the integration outsourced into its own project | https://github.com/Lightning-Sandbox/lightning-Fairscale |
| used pl.overrides.fairscale.LightningShardedDataParallel | use native FSDP instead | #16400, FSDP |
| used pl.plugins.precision.fully_sharded_native_amp.FullyShardedNativeMixedPrecisionPlugin | use native FSDP instead | #16400, FSDP |
| used pl.plugins.precision.sharded_native_amp.ShardedNativeMixedPrecisionPlugin | use native FSDP instead | #16400, FSDP |
| used pl.strategies.fully_sharded.DDPFullyShardedStrategy | use native FSDP instead | #16400, FSDP |
| used pl.strategies.sharded.DDPShardedStrategy | use native FSDP instead | #16400, FSDP |
| used pl.strategies.sharded_spawn.DDPSpawnShardedStrategy | use native FSDP instead | #16400, FSDP |
| used the save_config_overwrite parameter in LightningCLI | pass this option via the save_config_kwargs dictionary parameter | #14998 |
| used the save_config_multifile parameter in LightningCLI | pass this option via the save_config_kwargs dictionary parameter | #14998 |
| customized loops via Loop.replace() | implement your training loop with Fabric (see the examples after this table) | #14998, Fabric |
| customized loops via Loop.run() | implement your training loop with Fabric | #14998, Fabric |
| customized loops via Loop.connect() | implement your training loop with Fabric | #14998, Fabric |
| used the Trainer's trainer.fit_loop property | implement your training loop with Fabric | #14998, Fabric |
| used the Trainer's trainer.validate_loop property | implement your training loop with Fabric | #14998, Fabric |
| used the Trainer's trainer.test_loop property | implement your training loop with Fabric | #14998, Fabric |
| used the Trainer's trainer.predict_loop property | implement your training loop with Fabric | #14998, Fabric |
| used the Trainer.loop and fetching classes | they are now marked as protected | |
| used the opt_idx argument in BaseFinetuning.finetune_function | use manual optimization | #16539 |
| used the opt_idx argument in Callback.on_before_optimizer_step | use manual optimization | #16539, Manual Optimization |
| used optimizer_idx as an optional argument in LightningModule.training_step | use manual optimization (see the examples after this table) | #16539, Manual Optimization |
| used the optimizer_idx argument in LightningModule.on_before_optimizer_step | use manual optimization | #16539, Manual Optimization |
| used the optimizer_idx argument in LightningModule.configure_gradient_clipping | use manual optimization | #16539, Manual Optimization |
| used the optimizer_idx argument in LightningModule.optimizer_step | use manual optimization | #16539, Manual Optimization |
| used the optimizer_idx argument in LightningModule.optimizer_zero_grad | use manual optimization | #16539, Manual Optimization |
| used the optimizer_idx argument in LightningModule.lr_scheduler_step | use manual optimization | #16539, Manual Optimization |
| declared optimizer frequencies in the dictionary returned from LightningModule.configure_optimizers | use manual optimization | #16539, Manual Optimization |
| used the optimizer argument in LightningModule.backward | use manual optimization | #16539, Manual Optimization |
| used the optimizer_idx argument in LightningModule.backward | use manual optimization | #16539, Manual Optimization |
| used the optimizer_idx argument in PrecisionPlugin.optimizer_step | use manual optimization | #16539, Manual Optimization |
| used the optimizer_idx argument in PrecisionPlugin.backward | use manual optimization | #16539, Manual Optimization |
| used the optimizer_idx argument in Strategy.backward | use manual optimization | #16539, Manual Optimization |
| used the optimizer_idx argument in Strategy.optimizer_step | use manual optimization | #16539, Manual Optimization |
| used the Trainer.optimizer_frequencies attribute | use manual optimization | Manual Optimization |
| used the PL_INTER_BATCH_PARALLELISM environment flag | | #16355 |
| used the training integration with Horovod | install the standalone package/project | https://github.com/Lightning-AI/lightning-Horovod |
| used the training integration with ColossalAI | install the standalone package/project | https://lightning.ai/docs/pytorch/latest/advanced/third_party/colossalai.html |
| used the QuantizationAwareTraining callback | use Torch's Quantization directly | #16750 |
| had any logic except reducing the DP outputs in the LightningModule.training_step_end hook | port it to the LightningModule.on_train_batch_end hook | #16791 |
| had any logic except reducing the DP outputs in the LightningModule.validation_step_end hook | port it to the LightningModule.on_validation_batch_end hook | #16791 |
| had any logic except reducing the DP outputs in the LightningModule.test_step_end hook | port it to the LightningModule.on_test_batch_end hook | #16791 |
| used pl.strategies.DDPSpawnStrategy | switch to the general DDPStrategy(start_method='spawn') with the proper start method | #16809 |
| used the automatic addition of a moving average of the training_step loss in the progress bar | use self.log("loss", ..., prog_bar=True) instead | #16192 |
| relied on the outputs argument from the on_predict_epoch_end hook | access them via trainer.predict_loop.predictions | #16655 |
| need to pass a dictionary to self.log() | log each key/value pair independently | #16389 |
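
The *_epoch_end hooks were removed together with their outputs argument; the on_*_epoch_end hooks receive no outputs, so anything you need at epoch end has to be collected during the steps. A minimal sketch of the ported pattern (the attribute name training_step_outputs is just a convention):

```python
import torch
import lightning.pytorch as pl


class EpochEndModel(pl.LightningModule):
    """Porting training_epoch_end (removed in 2.0) to on_train_epoch_end."""

    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 2)
        self.training_step_outputs = []  # collect outputs yourself; 2.0 no longer does it for you

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = torch.nn.functional.cross_entropy(self.layer(x), y)
        self.training_step_outputs.append(loss.detach())
        return loss

    # 1.x style (removed):
    # def training_epoch_end(self, outputs):
    #     self.log("train_loss_epoch", torch.stack([o["loss"] for o in outputs]).mean())

    # 2.0 style: no `outputs` argument; use what you collected during the steps.
    def on_train_epoch_end(self):
        self.log("train_loss_epoch", torch.stack(self.training_step_outputs).mean())
        self.training_step_outputs.clear()  # free memory before the next epoch

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.1)
```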
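
Everywhere the optimizer_idx / opt_idx arguments or optimizer frequencies appear in the table, the replacement is manual optimization: set automatic_optimization = False, fetch the optimizers with self.optimizers(), and drive the updates yourself. A sketch with two optimizers (the two sub-modules and their losses are placeholders):

```python
import torch
import lightning.pytorch as pl


class TwoOptimizerModel(pl.LightningModule):
    """Replacing the removed optimizer_idx argument with manual optimization."""

    def __init__(self):
        super().__init__()
        self.automatic_optimization = False  # required for multiple optimizers in 2.0
        self.gen = torch.nn.Linear(32, 32)
        self.disc = torch.nn.Linear(32, 1)

    def training_step(self, batch, batch_idx):
        # 1.x style: def training_step(self, batch, batch_idx, optimizer_idx): ...
        opt_g, opt_d = self.optimizers()
        x, _ = batch

        # update the first sub-module
        g_loss = self.gen(x).pow(2).mean()
        opt_g.zero_grad()
        self.manual_backward(g_loss)
        opt_g.step()

        # update the second sub-module
        d_loss = self.disc(x).pow(2).mean()
        opt_d.zero_grad()
        self.manual_backward(d_loss)
        opt_d.step()

    def configure_optimizers(self):
        return (
            torch.optim.SGD(self.gen.parameters(), lr=0.1),
            torch.optim.SGD(self.disc.parameters(), lr=0.1),
        )
```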
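
The multiple_trainloader_mode flag moves onto CombinedLoader itself. A minimal sketch (loader names and tensors are placeholders):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from lightning.pytorch.utilities import CombinedLoader

# 1.x style (removed in 2.0):
#   Trainer(multiple_trainloader_mode="max_size_cycle") with a dict of loaders
# 2.0 style: wrap the loaders in a CombinedLoader and set the mode there.
iterables = {
    "a": DataLoader(TensorDataset(torch.randn(64, 32)), batch_size=8),
    "b": DataLoader(TensorDataset(torch.randn(16, 32)), batch_size=8),
}
combined = CombinedLoader(iterables, mode="max_size_cycle")
# Return `combined` from LightningModule.train_dataloader() or pass it to trainer.fit(...);
# each batch then arrives as a dict with keys "a" and "b".
```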
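
For the customized Loop.* and trainer.*_loop rows, the recommended path is to write the loop yourself with Fabric, which keeps device placement, precision, and strategy handling while giving you a plain Python loop. A minimal sketch (model, data, and hyperparameters are placeholders):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from lightning.fabric import Fabric

# Hand-written training loop with Fabric, replacing customized Loop subclasses.
fabric = Fabric(accelerator="auto", devices=1)
fabric.launch()

model = torch.nn.Linear(32, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
model, optimizer = fabric.setup(model, optimizer)  # moves the model and hooks up precision/strategy

dataset = TensorDataset(torch.randn(64, 32), torch.randint(0, 2, (64,)))
dataloader = fabric.setup_dataloaders(DataLoader(dataset, batch_size=8))

for epoch in range(2):
    for x, y in dataloader:
        optimizer.zero_grad()
        loss = torch.nn.functional.cross_entropy(model(x), y)
        fabric.backward(loss)  # replaces loss.backward(); respects the configured precision
        optimizer.step()
```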

Developer


| If | Then | Ref |
|---|---|---|
| passed the pl_module argument to distributed module wrappers | pass the (required) forward_module argument | #16386 |
| used DataParallel and the LightningParallelModule wrapper | use DDP or DeepSpeed instead | #16748, DDP |
| used the pl_module argument from the distributed module wrappers | use DDP or DeepSpeed instead | #16386, DDP |
| called the pl.overrides.base.unwrap_lightning_module function | use DDP or DeepSpeed instead | #16386, DDP |
| used or derived from the pl.overrides.distributed.LightningDistributedModule class | use DDP instead | #16386, DDP |
| used the pl.plugins.ApexMixedPrecisionPlugin plugin | use PyTorch native mixed precision | #16039 |
| used the pl.plugins.NativeMixedPrecisionPlugin plugin | switch to the pl.plugins.MixedPrecisionPlugin plugin | #16039 |
| used the fit_loop.min_steps setter | implement your training loop with Fabric | #16803 |
| used the fit_loop.max_steps setter | implement your training loop with Fabric | #16803 |
| used the data_parallel attribute in Trainer | check the same with isinstance(trainer.strategy, ParallelStrategy) | #16703 |
| used any function from pl.utilities.xla_device | switch to pl.accelerators.TPUAccelerator.is_available() | #14514, #14550 |
| imported functions from pl.utilities.device_parser.* | import them from lightning_fabric.utilities.device_parser.* | #14492, #14753 |
| imported functions from pl.utilities.cloud_io.* | import them from lightning_fabric.utilities.cloud_io.* | #14515 |
| imported functions from pl.utilities.apply_func.* | import them from lightning_utilities.core.apply_func.* | #14516, #14537 |
| used any code from pl.core.mixins | use the base classes | #16424 |
| used any code from pl.utilities.distributed | rely on PyTorch's native functions | #16390 |
| used any code from pl.utilities.data | it was removed | #16440 |
| used any code from pl.utilities.optimizer | it was removed | #16439 |
| used any code from pl.utilities.seed | it was removed | #16422 |
| were using truncated backpropagation through time (TBPTT) with LightningModule.truncated_bptt_steps | use manual optimization | #16172, Manual Optimization |
| were using truncated backpropagation through time (TBPTT) with LightningModule.tbptt_split_batch | use manual optimization | #16172, Manual Optimization |
| were using truncated backpropagation through time (TBPTT) and passing hiddens to LightningModule.training_step | use manual optimization | #16172, Manual Optimization |
| used the pl.utilities.finite_checks.print_nan_gradients function | it was removed | |
| used the pl.utilities.finite_checks.detect_nan_parameters function | it was removed | |
| used the pl.utilities.parsing.flatten_dict function | it was removed | |
| used the pl.utilities.metrics.metrics_to_scalars function | it was removed | |
| used the pl.utilities.memory.get_model_size_mb function | it was removed | |
| used the pl.strategies.utils.on_colab_kaggle function | it was removed | #16437 |
| used the LightningDataModule.add_argparse_args() method | switch to using LightningCLI | #16708 |
| used the LightningDataModule.parse_argparser() method | switch to using LightningCLI | #16708 |
| used the LightningDataModule.from_argparse_args() method | switch to using LightningCLI | #16708 |
| used the LightningDataModule.get_init_arguments_and_types() method | switch to using LightningCLI | #16708 |
| used the Trainer.default_attributes() method | switch to using LightningCLI | #16708 |
| used the Trainer.from_argparse_args() method | switch to using LightningCLI (see the example after this table) | #16708 |
| used the Trainer.parse_argparser() method | switch to using LightningCLI | #16708 |
| used the Trainer.match_env_arguments() method | switch to using LightningCLI | #16708 |
| used the Trainer.add_argparse_args() method | switch to using LightningCLI | #16708 |
| used the pl.utilities.argparse.from_argparse_args() function | switch to using LightningCLI | #16708 |
| used the pl.utilities.argparse.parse_argparser() function | switch to using LightningCLI | #16708 |
| used the pl.utilities.argparse.parse_env_variables() function | switch to using LightningCLI | #16708 |
| used the get_init_arguments_and_types() function | switch to using LightningCLI | #16708 |
| used the pl.utilities.argparse.add_argparse_args() function | switch to using LightningCLI | #16708 |
| used the pl.utilities.parsing.str_to_bool() function | switch to using LightningCLI | #16708 |
| used the pl.utilities.parsing.str_to_bool_or_int() function | switch to using LightningCLI | #16708 |
| used the pl.utilities.parsing.str_to_bool_or_str() function | switch to using LightningCLI | #16708 |
| derived from the pl.utilities.distributed.AllGatherGrad class | switch to the PyTorch native equivalent | #15364 |
| used the PL_RECONCILE_PROCESS=1 env. variable | customize your logger | #16204 |
| derived from the mixin method pl.core.saving.ModelIO.load_from_checkpoint | rely on pl.core.module.LightningModule | #16999 |
| used the Accelerator.setup_environment method | switch to Accelerator.setup_device | #16436 |
| used the PL_FAULT_TOLERANT_TRAINING env. variable | implement your own logic with Fabric | #16516, #16533 |
| used or derived from the public pl.overrides.distributed.IndexBatchSamplerWrapper class | it is now protected | #16826 |
| used the DataLoaderLoop class | use manual optimization | #16726, Manual Optimization |
| used the EvaluationEpochLoop class | use manual optimization | #16726, Manual Optimization |
| used the PredictionEpochLoop class | use manual optimization | #16726, Manual Optimization |
| used the trainer.reset_*_dataloader() methods | use Loop.setup_data() for the top-level loops | #16726 |
| used the LightningModule.precision attribute | rely on the Trainer's precision attribute | #16203 |
| used the Trainer.model setter | pass the model to the fit/test/predict methods instead | #16462 |
| relied on the pl.utilities.supporters.CombinedLoaderIterator class | pass the dataloaders directly | #16714 |
| accessed the ProgressBarBase.train_batch_idx property | rely on the Trainer's internal loop properties | #16760 |
| accessed the ProgressBarBase.val_batch_idx property | rely on the Trainer's internal loop properties | #16760 |
| accessed the ProgressBarBase.test_batch_idx property | rely on the Trainer's internal loop properties | #16760 |
| accessed the ProgressBarBase.predict_batch_idx property | rely on the Trainer's internal loop properties | #16760 |
| used the Trainer.prediction_writer_callbacks property | rely on the precision plugin | #16759 |
| used PrecisionPlugin.dispatch | it was removed | #16618 |
| used Strategy.dispatch | it was removed | #16618 |
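
All of the argparse helper removals above point to LightningCLI, which generates the command-line interface from the Trainer, model, and datamodule constructor signatures. A minimal sketch (CLIModel and its arguments are placeholders):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
import lightning.pytorch as pl
from lightning.pytorch.cli import LightningCLI


class CLIModel(pl.LightningModule):
    def __init__(self, hidden: int = 32, learning_rate: float = 0.1):
        super().__init__()
        self.save_hyperparameters()  # constructor args become --model.* CLI options
        self.layer = torch.nn.Linear(hidden, 2)

    def train_dataloader(self):
        data = TensorDataset(torch.randn(64, self.hparams.hidden), torch.randint(0, 2, (64,)))
        return DataLoader(data, batch_size=8)

    def training_step(self, batch, batch_idx):
        x, y = batch
        return torch.nn.functional.cross_entropy(self.layer(x), y)

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=self.hparams.learning_rate)


if __name__ == "__main__":
    # 1.x style (removed in 2.0):
    #   parser = ArgumentParser()
    #   parser = pl.Trainer.add_argparse_args(parser)
    #   trainer = pl.Trainer.from_argparse_args(parser.parse_args())
    # 2.0 style: LightningCLI exposes the Trainer and model constructor arguments, e.g.
    #   python train.py fit --trainer.max_epochs=1 --model.learning_rate=0.01
    LightningCLI(CLIModel)
```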

