Upgrade from 1.8 to 2.0

Regular User

reg. user 1.8

| If | Then | Ref |
|---|---|---|
| used seed_everything_default=None in LightningCLI | set seed_everything_default=False instead | PR12804 |
| used Trainer.reset_train_val_dataloaders() | call Trainer.fit_loop.setup_data() | PR12184 |
| imported pl.core.lightning | import pl.core.module instead | PR12740 |
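
For the renamed module and the new LightningCLI seeding default, a minimal before/after sketch may help; MyModel stands in for your own LightningModule subclass and is not part of the library:

```python
# 1.8 spellings (kept as comments) next to their 2.0 replacements.

# 1.8:
# from pytorch_lightning.core.lightning import LightningModule
# cli = LightningCLI(MyModel, seed_everything_default=None)

# 2.0:
from pytorch_lightning.core.module import LightningModule
from pytorch_lightning.cli import LightningCLI

# cli = LightningCLI(MyModel, seed_everything_default=False)
```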

reg. user 1.9

| If | Then | Ref |
|---|---|---|
| used Python 3.7 | upgrade to Python 3.8 or higher | PR16579 |
| used PyTorch 1.10 | upgrade to PyTorch 1.11 or higher | PR16492 |
| used Trainer's flag gpus | use devices with the same number | PR16171 |
| used Trainer's flag tpu_cores | use devices with the same number | PR16171 |
| used Trainer's flag ipus | use devices with the same number | PR16171 |
| used Trainer's flag num_processes | use devices with the same number | PR16171 |
| used Trainer's flag resume_from_checkpoint | pass the path to the Trainer.fit(ckpt_path="...") method | PR10061 |
| used Trainer's flag auto_select_gpus | use devices="auto" | PR16184 |
| called the pl.tuner.auto_gpu_select.pick_single_gpu function | use Trainer's flag devices="auto" | PR16184 |
| called the pl.tuner.auto_gpu_select.pick_multiple_gpus function | use Trainer's flag devices="auto" | PR16184 |
| used Trainer's flag accumulate_grad_batches with a scheduling dictionary value | use the GradientAccumulationScheduler callback and configure it | PR16729 |
| imported profilers from pl.profiler | import from pl.profilers instead | PR16359 |
| used Tuner as part of Trainer in any form | move to a standalone Tuner object or use the LearningRateFinder and BatchSizeFinder callbacks | Batch Size Finder, Learning Rate Finder |
| used Trainer's flag auto_scale_batch_size | use the BatchSizeFinder callback instead; the Trainer.tune() method was removed | |
| used Trainer's flag auto_lr_find | use the LearningRateFinder callback instead; the Trainer.tune() method was removed | |

Advanced User

adv. user 1.8

| If | Then | Ref |
|---|---|---|
| imported pl.callbacks.base | import pl.callbacks.callback instead | PR13031 |
| imported pl.loops.base | import pl.loops.loop instead | PR13043 |
| imported pl.utilities.cli | import pl.cli instead | PR13767 |
| imported profiler classes from pl.profiler.* | import pl.profilers instead | PR12308 |
| used pl.accelerators.GPUAccelerator | use pl.accelerators.CUDAAccelerator | PR13636 |
| used LightningDeepSpeedModule | use strategy="deepspeed" or strategy=DeepSpeedStrategy(...) | DeepSpeedStrategy |
| used the init_meta_context() context manager from pl.utilities.meta | switch to DeepSpeed ZeRO Stage 3 | DeepSpeed ZeRO Stage 3 |
| used the Lightning Hydra multi-run integration | support for it was removed, as it caused issues with processes hanging | PR15689 |
| used pl.utilities.memory.get_gpu_memory_map | use pl.accelerators.cuda.get_nvidia_gpu_stats | PR9921 |
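
For example, DeepSpeed is now selected purely through the strategy argument rather than a wrapper module. A rough sketch (requires the deepspeed package and GPUs; the constructor arguments shown are only illustrative):

```python
import pytorch_lightning as pl
from pytorch_lightning.strategies import DeepSpeedStrategy

# select DeepSpeed by name...
trainer = pl.Trainer(accelerator="gpu", devices=2, strategy="deepspeed")

# ...or configure it explicitly, e.g. ZeRO Stage 3 with parameter offloading
trainer = pl.Trainer(
    accelerator="gpu",
    devices=2,
    strategy=DeepSpeedStrategy(stage=3, offload_parameters=True),
    precision="16-mixed",
)
```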

adv. user 1.9

| If | Then | Ref |
|---|---|---|
| used the pl.lite module | switch to lightning_fabric | PR15953 |
| used Trainer's flag strategy='dp' | use DDP with strategy='ddp' or DeepSpeed instead | PR16748 |
| implemented the LightningModule.training_epoch_end hook | port your logic to the LightningModule.on_train_epoch_end hook | PR16520 |
| implemented the LightningModule.validation_epoch_end hook | port your logic to the LightningModule.on_validation_epoch_end hook | PR16520 |
| implemented the LightningModule.test_epoch_end hook | port your logic to the LightningModule.on_test_epoch_end hook | PR16520 |
| used Trainer's flag multiple_trainloader_mode | switch to CombinedLoader(..., mode=...) and set the mode directly | PR16800 |
| used Trainer's flag move_metrics_to_cpu | implement the offload logic in your custom metric, or enable it in torchmetrics | PR16358 |
| used Trainer's flag track_grad_norm | override the on_before_optimizer_step hook and log the norms with the LightningModule.log_grad_norm() hook | PR16745 |
| used Trainer's flag replace_sampler_ddp | use use_distributed_sampler; the sampler now gets created not only for the DDP strategies | |
| relied on the on_tpu argument in the LightningModule.optimizer_step hook | switch to manual optimization | PR16537, Manual Optimization |
| relied on the using_lbfgs argument in the LightningModule.optimizer_step hook | switch to manual optimization | PR16538, Manual Optimization |
| were using nvidia/apex in any form | switch to PyTorch native mixed precision (torch.amp) instead | PR16039, Precision |
| used Trainer's flag using_native_amp | use PyTorch native mixed precision | PR16039, Precision |
| used Trainer's flag amp_backend | use PyTorch native mixed precision | PR16039, Precision |
| used Trainer's flag amp_level | use PyTorch native mixed precision | PR16039, Precision |
| used Trainer's attribute using_native_amp | use PyTorch native mixed precision | PR16039, Precision |
| used Trainer's attribute amp_backend | use PyTorch native mixed precision | PR16039, Precision |
| used Trainer's attribute amp_level | use PyTorch native mixed precision | PR16039, Precision |
| used the FairScale integration | consider using PyTorch's native FSDP implementation, or the integration outsourced into its own project | lightning-Fairscale |
| used pl.overrides.fairscale.LightningShardedDataParallel | use native FSDP instead | PR16400, FSDP |
| used pl.plugins.precision.fully_sharded_native_amp.FullyShardedNativeMixedPrecisionPlugin | use native FSDP instead | PR16400, FSDP |
| used pl.plugins.precision.sharded_native_amp.ShardedNativeMixedPrecisionPlugin | use native FSDP instead | PR16400, FSDP |
| used pl.strategies.fully_sharded.DDPFullyShardedStrategy | use native FSDP instead | PR16400, FSDP |
| used pl.strategies.sharded.DDPShardedStrategy | use native FSDP instead | PR16400, FSDP |
| used pl.strategies.sharded_spawn.DDPSpawnShardedStrategy | use native FSDP instead | PR16400, FSDP |
| used the save_config_overwrite parameter in LightningCLI | pass this option via the save_config_kwargs dictionary parameter | PR14998 |
| used the save_config_multifile parameter in LightningCLI | pass this option via the save_config_kwargs dictionary parameter | PR14998 |
| had customized loops using Loop.replace() | implement your training loop with Fabric | PR14998, Fabric |
| had customized loops using Loop.run() | implement your training loop with Fabric | PR14998, Fabric |
| had customized loops using Loop.connect() | implement your training loop with Fabric | PR14998, Fabric |
| used the Trainer's trainer.fit_loop property | implement your training loop with Fabric | PR14998, Fabric |
| used the Trainer's trainer.validate_loop property | implement your training loop with Fabric | PR14998, Fabric |
| used the Trainer's trainer.test_loop property | implement your training loop with Fabric | PR14998, Fabric |
| used the Trainer's trainer.predict_loop property | implement your training loop with Fabric | PR14998, Fabric |
| used the Trainer loop and fetching classes | these are now marked as protected | |
| used the opt_idx argument in BaseFinetuning.finetune_function | use manual optimization | PR16539 |
| used the opt_idx argument in Callback.on_before_optimizer_step | use manual optimization | PR16539, Manual Optimization |
| used optimizer_idx as an optional argument in LightningModule.training_step | use manual optimization | PR16539, Manual Optimization |
| used the optimizer_idx argument in LightningModule.on_before_optimizer_step | use manual optimization | PR16539, Manual Optimization |
| used the optimizer_idx argument in LightningModule.configure_gradient_clipping | use manual optimization | PR16539, Manual Optimization |
| used the optimizer_idx argument in LightningModule.optimizer_step | use manual optimization | PR16539, Manual Optimization |
| used the optimizer_idx argument in LightningModule.optimizer_zero_grad | use manual optimization | PR16539, Manual Optimization |
| used the optimizer_idx argument in LightningModule.lr_scheduler_step | use manual optimization | PR16539, Manual Optimization |
| declared optimizer frequencies in the dictionary returned from LightningModule.configure_optimizers | use manual optimization | PR16539, Manual Optimization |
| used the optimizer argument in LightningModule.backward | use manual optimization | PR16539, Manual Optimization |
| used the optimizer_idx argument in LightningModule.backward | use manual optimization | PR16539, Manual Optimization |
| used the optimizer_idx argument in PrecisionPlugin.optimizer_step | use manual optimization | PR16539, Manual Optimization |
| used the optimizer_idx argument in PrecisionPlugin.backward | use manual optimization | PR16539, Manual Optimization |
| used the optimizer_idx argument in Strategy.backward | use manual optimization | PR16539, Manual Optimization |
| used the optimizer_idx argument in Strategy.optimizer_step | use manual optimization | PR16539, Manual Optimization |
| used the Trainer.optimizer_frequencies attribute | use manual optimization | Manual Optimization |
| used the PL_INTER_BATCH_PARALLELISM environment flag | it was removed | PR16355 |
| used the training integration with Horovod | install the standalone package/project | lightning-Horovod |
| used the training integration with ColossalAI | install the standalone package/project | lightning-ColossalAI |
| used the QuantizationAwareTraining callback | use PyTorch's quantization directly | PR16750 |
| had any logic except reducing the DP outputs in the LightningModule.training_step_end hook | port it to the LightningModule.on_train_batch_end hook | PR16791 |
| had any logic except reducing the DP outputs in the LightningModule.validation_step_end hook | port it to the LightningModule.on_validation_batch_end hook | PR16791 |
| had any logic except reducing the DP outputs in the LightningModule.test_step_end hook | port it to the LightningModule.on_test_batch_end hook | PR16791 |
| used pl.strategies.DDPSpawnStrategy | switch to the general DDPStrategy(start_method='spawn') with the proper start method | PR16809 |
| used the automatic addition of a moving average of the training_step loss in the progress bar | use self.log("loss", ..., prog_bar=True) instead | PR16192 |
| relied on the outputs argument of the on_predict_epoch_end hook | access the predictions via trainer.predict_loop.predictions | PR16655 |
| needed to pass a dictionary to self.log() | pass the entries independently | PR16389 |
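
Two of the changes above come up in nearly every project, so here is a minimal, hedged sketch of each. First, the *_epoch_end hooks: the on_*_epoch_end replacements no longer receive the collected outputs, so the module keeps what it needs itself (attribute and metric names below are illustrative):

```python
import torch
import pytorch_lightning as pl


class LitModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 2)
        self.training_step_outputs = []

    def training_step(self, batch, batch_idx):
        loss = self.layer(batch).sum()
        self.training_step_outputs.append(loss.detach())
        return loss

    # 1.x: def training_epoch_end(self, outputs): ...
    def on_train_epoch_end(self):
        epoch_mean = torch.stack(self.training_step_outputs).mean()
        self.log("train_loss_epoch", epoch_mean, prog_bar=True)
        self.training_step_outputs.clear()  # free memory before the next epoch

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.1)
```

Second, since optimizer_idx is gone from training_step and the related hooks, models with multiple optimizers move to manual optimization. The generator/discriminator structure below is purely illustrative:

```python
class MultiOptModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.automatic_optimization = False  # required for manual optimization
        self.gen = torch.nn.Linear(32, 32)
        self.disc = torch.nn.Linear(32, 1)

    def training_step(self, batch, batch_idx):
        opt_g, opt_d = self.optimizers()

        # was: training_step(self, batch, batch_idx, optimizer_idx=0)
        g_loss = self.gen(batch).sum()
        opt_g.zero_grad()
        self.manual_backward(g_loss)
        opt_g.step()

        # was: optimizer_idx=1
        d_loss = self.disc(batch).pow(2).mean()
        opt_d.zero_grad()
        self.manual_backward(d_loss)
        opt_d.step()

    def configure_optimizers(self):
        return (
            torch.optim.SGD(self.gen.parameters(), lr=0.01),
            torch.optim.SGD(self.disc.parameters(), lr=0.01),
        )
```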

Developer

devel 1.8

| If | Then | Ref |
|---|---|---|
| derived from pytorch_lightning.loggers.base.LightningLoggerBase | derive from pytorch_lightning.loggers.logger.Logger | PR12014 |
| derived from pytorch_lightning.profiler.base.BaseProfiler | derive from pytorch_lightning.profilers.profiler.Profiler | PR12150 |
| derived from pytorch_lightning.profiler.base.AbstractProfiler | derive from pytorch_lightning.profilers.profiler.Profiler | PR12106 |
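
For example, a custom logger now derives from the relocated Logger base class. A minimal sketch (the PrintLogger class and its behaviour are illustrative, not part of the library):

```python
# was: from pytorch_lightning.loggers.base import LightningLoggerBase
from pytorch_lightning.loggers.logger import Logger
from pytorch_lightning.utilities import rank_zero_only


class PrintLogger(Logger):
    @property
    def name(self):
        return "print"

    @property
    def version(self):
        return "0"

    @rank_zero_only
    def log_hyperparams(self, params):
        print("hparams:", params)

    @rank_zero_only
    def log_metrics(self, metrics, step=None):
        print(f"step={step}", metrics)
```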

devel 1.9

| If | Then | Ref |
|---|---|---|
| passed the pl_module argument to distributed module wrappers | pass the (required) forward_module argument | PR16386 |
| used DataParallel and the LightningParallelModule wrapper | use DDP or DeepSpeed instead | PR16748, DDP |
| used the pl_module argument from the distributed module wrappers | use DDP or DeepSpeed instead | PR16386, DDP |
| called the pl.overrides.base.unwrap_lightning_module function | use DDP or DeepSpeed instead | PR16386, DDP |
| used or derived from the pl.overrides.distributed.LightningDistributedModule class | use DDP instead | PR16386, DDP |
| used the pl.plugins.ApexMixedPrecisionPlugin plugin | use PyTorch native mixed precision | PR16039 |
| used the pl.plugins.NativeMixedPrecisionPlugin plugin | switch to the pl.plugins.MixedPrecisionPlugin plugin | PR16039 |
| used the fit_loop.min_steps setter | implement your training loop with Fabric | PR16803 |
| used the fit_loop.max_steps setter | implement your training loop with Fabric | PR16803 |
| used the data_parallel attribute in Trainer | check the same using isinstance(trainer.strategy, ParallelStrategy) | PR16703 |
| used any function from pl.utilities.xla_device | switch to pl.accelerators.XLAAccelerator.is_available() | PR14514, PR14550 |
| imported functions from pl.utilities.device_parser.* | import them from lightning_fabric.utilities.device_parser.* | PR14492, PR14753 |
| imported functions from pl.utilities.cloud_io.* | import them from lightning_fabric.utilities.cloud_io.* | PR14515 |
| imported functions from pl.utilities.apply_func.* | import them from lightning_utilities.core.apply_func.* | PR14516, PR14537 |
| used any code from pl.core.mixins | use the base classes | PR16424 |
| used any code from pl.utilities.distributed | rely on PyTorch's native functions | PR16390 |
| used any code from pl.utilities.data | it was removed | PR16440 |
| used any code from pl.utilities.optimizer | it was removed | PR16439 |
| used any code from pl.utilities.seed | it was removed | PR16422 |
| were using truncated backpropagation through time (TBPTT) with LightningModule.truncated_bptt_steps | use manual optimization | PR16172, Manual Optimization |
| were using truncated backpropagation through time (TBPTT) with LightningModule.tbptt_split_batch | use manual optimization | PR16172, Manual Optimization |
| were using truncated backpropagation through time (TBPTT) and passing hidden to LightningModule.training_step | use manual optimization | PR16172, Manual Optimization |
| used the pl.utilities.finite_checks.print_nan_gradients function | it was removed | |
| used the pl.utilities.finite_checks.detect_nan_parameters function | it was removed | |
| used the pl.utilities.parsing.flatten_dict function | it was removed | |
| used the pl.utilities.metrics.metrics_to_scalars function | it was removed | |
| used the pl.utilities.memory.get_model_size_mb function | it was removed | |
| used the pl.strategies.utils.on_colab_kaggle function | it was removed | PR16437 |
| used the LightningDataModule.add_argparse_args() method | switch to using LightningCLI | PR16708 |
| used the LightningDataModule.parse_argparser() method | switch to using LightningCLI | PR16708 |
| used the LightningDataModule.from_argparse_args() method | switch to using LightningCLI | PR16708 |
| used the LightningDataModule.get_init_arguments_and_types() method | switch to using LightningCLI | PR16708 |
| used the Trainer.default_attributes() method | switch to using LightningCLI | PR16708 |
| used the Trainer.from_argparse_args() method | switch to using LightningCLI | PR16708 |
| used the Trainer.parse_argparser() method | switch to using LightningCLI | PR16708 |
| used the Trainer.match_env_arguments() method | switch to using LightningCLI | PR16708 |
| used the Trainer.add_argparse_args() method | switch to using LightningCLI | PR16708 |
| used the pl.utilities.argparse.from_argparse_args() function | switch to using LightningCLI | PR16708 |
| used the pl.utilities.argparse.parse_argparser() function | switch to using LightningCLI | PR16708 |
| used the pl.utilities.argparse.parse_env_variables() function | switch to using LightningCLI | PR16708 |
| used the get_init_arguments_and_types() function | switch to using LightningCLI | PR16708 |
| used the pl.utilities.argparse.add_argparse_args() function | switch to using LightningCLI | PR16708 |
| used the pl.utilities.parsing.str_to_bool() function | switch to using LightningCLI | PR16708 |
| used the pl.utilities.parsing.str_to_bool_or_int() function | switch to using LightningCLI | PR16708 |
| used the pl.utilities.parsing.str_to_bool_or_str() function | switch to using LightningCLI | PR16708 |
| derived from the pl.utilities.distributed.AllGatherGrad class | switch to the PyTorch native equivalent | PR15364 |
| used the PL_RECONCILE_PROCESS=1 environment variable | customize your logger | PR16204 |
| derived from the mixin method pl.core.saving.ModelIO.load_from_checkpoint | rely on pl.core.module.LightningModule | PR16999 |
| used the Accelerator.setup_environment method | switch to Accelerator.setup_device | PR16436 |
| used the PL_FAULT_TOLERANT_TRAINING environment variable | implement your own logic with Fabric | PR16516, PR16533 |
| used or derived from the public pl.overrides.distributed.IndexBatchSamplerWrapper class | it is now marked as protected | PR16826 |
| used the DataLoaderLoop class | use manual optimization | PR16726, Manual Optimization |
| used the EvaluationEpochLoop class | use manual optimization | PR16726, Manual Optimization |
| used the PredictionEpochLoop class | use manual optimization | PR16726, Manual Optimization |
| used the trainer.reset_*_dataloader() methods | use Loop.setup_data() for the top-level loops | PR16726 |
| used the LightningModule.precision attribute | rely on the Trainer's precision attribute | PR16203 |
| used the Trainer.model setter | pass the model to the fit/test/predict method instead | PR16462 |
| relied on the pl.utilities.supporters.CombinedLoaderIterator class | pass dataloaders directly | PR16714 |
| used pl.callbacks.progress.base.ProgressBarBase | rename to pl.callbacks.progress.ProgressBar | PR17058 |
| accessed the ProgressBarBase.train_batch_idx property | rely on the Trainer's internal loop properties | PR16760 |
| accessed the ProgressBarBase.val_batch_idx property | rely on the Trainer's internal loop properties | PR16760 |
| accessed the ProgressBarBase.test_batch_idx property | rely on the Trainer's internal loop properties | PR16760 |
| accessed the ProgressBarBase.predict_batch_idx property | rely on the Trainer's internal loop properties | PR16760 |
| used the Trainer.prediction_writer_callbacks property | rely on the precision plugin | PR16759 |
| used PrecisionPlugin.dispatch | it was removed | PR16618 |
| used Strategy.dispatch | it was removed | PR16618 |
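
A few of the developer-facing replacements above in one hedged sketch. It assumes a CPU-only environment; the XLAAccelerator name follows the table above (in some 2.x releases the same accelerator is still exposed as TPUAccelerator), and the availability check simply returns False when torch_xla is not installed. The checkpoint path is a placeholder:

```python
import pytorch_lightning as pl
from pytorch_lightning.accelerators import XLAAccelerator
from pytorch_lightning.strategies import ParallelStrategy
from lightning_fabric.utilities.cloud_io import get_filesystem

trainer = pl.Trainer(accelerator="cpu", devices=1, logger=False)

# was: trainer.data_parallel
is_parallel = isinstance(trainer.strategy, ParallelStrategy)

# was: helpers from pl.utilities.xla_device
tpu_available = XLAAccelerator.is_available()

# was: pl.utilities.cloud_io.*, now provided by lightning_fabric
fs = get_filesystem("checkpoints/last.ckpt")

print(is_parallel, tpu_available, type(fs).__name__)
```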