Upgrade from 1.7 to 2.0

Regular User

reg. user 1.7

If

Then

Ref

have wrapped your loggers with LoggerCollection

directly pass a list of loggers to the Trainer and access the list via the trainer.loggers attribute.

#12147

used Trainer.lr_schedulers

access trainer.lr_scheduler_configs instead, which contains dataclasses instead of dictionaries.

#11443

used neptune-client API in the NeptuneLogger

upgrade to the latest API

#14727

used LightningDataModule.on_save hook

use LightningDataModule.on_save_checkpoint instead

#11887

used LightningDataModule.on_load hook

use LightningDataModule.on_load_checkpoint hook instead

#11887

used LightningModule.on_hpc_load hook

switch to general purpose hook LightningModule.on_load_checkpoint

#14315

used LightningModule.on_hpc_save hook

switch to general purpose hook LightningModule.on_save_checkpoint

#14315

used Trainer’s flag weights_save_path

use the dirpath argument of the ModelCheckpoint callback directly.

#14424

used Trainer’s property Trainer.weights_save_path

it was dropped

#14424
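
A minimal sketch of the logger and checkpoint-path rows above, assuming TensorBoardLogger and CSVLogger as stand-in loggers and an illustrative checkpoints/ directory:

    # Before: LoggerCollection([...]) and Trainer(weights_save_path=...)
    # Now: a plain list of loggers and ModelCheckpoint(dirpath=...)
    from pytorch_lightning import Trainer
    from pytorch_lightning.callbacks import ModelCheckpoint
    from pytorch_lightning.loggers import CSVLogger, TensorBoardLogger

    loggers = [TensorBoardLogger("logs/tb"), CSVLogger("logs/csv")]
    checkpoint = ModelCheckpoint(dirpath="checkpoints/")

    trainer = Trainer(logger=loggers, callbacks=[checkpoint])
    print(trainer.loggers)  # the list is exposed via trainer.loggers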

reg. user 1.8

If

Then

Ref

used seed_everything_default=None in LightningCLI

set seed_everything_default=False instead

#12804

used Trainer.reset_train_val_dataloaders()

call Trainer.reset_train_dataloaders() and Trainer.reset_val_dataloaders() separately

#12184

imported pl.core.lightning

import pl.core.module instead

#12740
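
A minimal sketch of the 1.8 rows above; DemoModel is a placeholder module used only to exercise the new import path:

    from pytorch_lightning.cli import LightningCLI
    from pytorch_lightning.core.module import LightningModule  # was: pytorch_lightning.core.lightning


    class DemoModel(LightningModule):
        """Placeholder model, only here to illustrate the new import location."""


    if __name__ == "__main__":
        # Before: seed_everything_default=None disabled the automatic seeding.
        # Now: pass seed_everything_default=False instead.
        LightningCLI(DemoModel, seed_everything_default=False)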

reg. user 1.9

If

Then

Ref

used Python 3.7

upgrade to Python 3.8 or higher

#16579

used PyTorch 1.10

upgrade to PyTorch 1.11 or higher

#16492

used Trainer’s flag gpus

use devices with the same number

#16171

used Trainer’s flag tpu_cores

use devices with the same number

#16171

used Trainer’s flag ipus

use devices with the same number

#16171

used Trainer’s flag num_processes

use devices with the same number

#16171

used Trainer’s flag resume_from_checkpoint

pass the path to the Trainer.fit(ckpt_path="...") method

#10061

used Trainer’s flag auto_select_gpus

use devices="auto"

#16184

called the pl.tuner.auto_gpu_select.pick_single_gpu function

use Trainer’s flag devices="auto"

#16184

called the pl.tuner.auto_gpu_select.pick_multiple_gpus functions

use Trainer’s flag devices="auto"

#16184

used Trainer’s flag accumulate_grad_batches with a scheduling dictionary value

use the GradientAccumulationScheduler callback and configure it

#16729

imported profiles from pl.profiler

import from pl.profilers

#16359

used Tuner as part of Trainer in any form

move to the standalone Tuner object or use the LearningRateFinder and BatchSizeFinder callbacks

https://lightning.ai/docs/pytorch/latest/advanced/training_tricks.html#batch-size-finder

used Trainer’s flag auto_scale_batch_size

use the BatchSizeFinder callback instead; the Trainer.tune() method was removed

used Trainer’s flag auto_lr_find

use the LearningRateFinder callback instead; the Trainer.tune() method was removed
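
A minimal sketch of the Trainer-flag rows above (devices, passing ckpt_path at fit time, and the standalone Tuner). The tiny DemoModel, the checkpoint path, and the pytorch_lightning import namespace are assumptions for illustration:

    import torch
    from torch.utils.data import DataLoader, TensorDataset

    from pytorch_lightning import LightningModule, Trainer
    from pytorch_lightning.tuner import Tuner


    class DemoModel(LightningModule):
        """Tiny model so the snippet is self-contained."""

        def __init__(self, batch_size: int = 32, learning_rate: float = 1e-3):
            super().__init__()
            self.save_hyperparameters()
            self.layer = torch.nn.Linear(8, 1)

        def training_step(self, batch, batch_idx):
            x, y = batch
            return torch.nn.functional.mse_loss(self.layer(x), y)

        def configure_optimizers(self):
            return torch.optim.Adam(self.parameters(), lr=self.hparams.learning_rate)

        def train_dataloader(self):
            x, y = torch.randn(256, 8), torch.randn(256, 1)
            return DataLoader(TensorDataset(x, y), batch_size=self.hparams.batch_size)


    model = DemoModel()

    # Before: Trainer(gpus=2), Trainer(tpu_cores=8), Trainer(num_processes=4), ...
    # Now: express the count through `devices` (and pick an accelerator).
    trainer = Trainer(accelerator="auto", devices="auto", max_epochs=1)

    # Before: Trainer(auto_scale_batch_size=True, auto_lr_find=True) + trainer.tune()
    # Now: the standalone Tuner (or the BatchSizeFinder / LearningRateFinder callbacks).
    tuner = Tuner(trainer)
    tuner.scale_batch_size(model)
    tuner.lr_find(model)

    # Before: Trainer(resume_from_checkpoint="last.ckpt")
    # Now: pass the checkpoint path to fit() directly.
    trainer.fit(model)  # or: trainer.fit(model, ckpt_path="last.ckpt")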

Advanced User

adv. user 1.7

If

Then

Ref

used DDP2Strategy

switch to DDPStrategy

#14026

used Trainer.training_type_plugin property

now use Trainer.strategy and update the references

#11141

used any TrainingTypePlugin

rename them to Strategy

#11120

used DistributedType

rely on protected _StrategyType

#10505

used DeviceType

rely on protected _AcceleratorType

#10503

used pl.utilities.meta functions

switch to built-in https://github.com/pytorch/torchdistx support

#13868

have implemented Callback.on_configure_sharded_model hook

move your implementation to Callback.setup

#14834

have implemented the Callback.on_before_accelerator_backend_setup hook

move your implementation to Callback.setup

#14834

have implemented the Callback.on_batch_start hook

move your implementation to Callback.on_train_batch_start

#14834

have implemented the Callback.on_batch_end hook

move your implementation to Callback.on_train_batch_end

#14834

have implemented the Callback.on_epoch_start hook

move your implementation to Callback.on_train_epoch_start, Callback.on_validation_epoch_start, or Callback.on_test_epoch_start

#14834

have implemented the Callback.on_pretrain_routine_{start,end} hook

move your implementation to Callback.on_fit_start

#14834

used Callback.on_init_start hook

use Callback.on_train_start instead

#10940

used Callback.on_init_end hook

use Callback.on_train_start instead

#10940

used Trainer’s attribute Trainer.num_processes

it was replaced by Trainer.num_devices

#12388

used Trainer’s attribute Trainer.gpus

it was replaced by Trainer.num_devices

#12436

used Trainer’s attribute Trainer.num_gpus

use Trainer.num_devices instead

#12384

used Trainer’s attribute Trainer.ipus

use Trainer.num_devices instead

#12386

used Trainer’s attribute Trainer.tpu_cores

use Trainer.num_devices instead

#12437

used Trainer.num_processes attribute

switch to using Trainer.num_devices

#12388

used LightningIPUModule

it was removed

#14830

logged with LightningLoggerBase.agg_and_log_metrics

switch to LightningLoggerBase.log_metrics

#11832

used agg_key_funcs parameter from LightningLoggerBase

log metrics explicitly

#11871

used agg_default_func parameters in LightningLoggerBase

log metrics explicitly

#11871

used Trainer.validated_ckpt_path attribute

rely on the generic read-only property Trainer.ckpt_path, which is set when checkpoints are loaded via Trainer.validate(ckpt_path=...)

#11696

used Trainer.tested_ckpt_path attribute

rely on the generic read-only property Trainer.ckpt_path, which is set when checkpoints are loaded via Trainer.test(ckpt_path=...)

#11696

used Trainer.predicted_ckpt_path attribute

rely on the generic read-only property Trainer.ckpt_path, which is set when checkpoints are loaded via Trainer.predict(ckpt_path=...)

#11696

rely on the returned dictionary from Callback.on_save_checkpoint

call Callback.state_dict directly instead

#11887
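
A minimal sketch tying together several of the callback rows above (setup, the train-specific batch hooks, and state_dict/load_state_dict replacing the returned dictionary); the counter is illustrative only:

    from pytorch_lightning.callbacks import Callback


    class BatchCounter(Callback):
        def __init__(self):
            self.batches_seen = 0

        # Before: on_before_accelerator_backend_setup / on_configure_sharded_model
        # Now: put that logic in setup().
        def setup(self, trainer, pl_module, stage):
            pass

        # Before: on_batch_start / on_batch_end
        # Now: the train-specific hooks.
        def on_train_batch_end(self, trainer, pl_module, outputs, batch, batch_idx):
            self.batches_seen += 1

        # Before: returning a dictionary from on_save_checkpoint.
        # Now: implement state_dict() / load_state_dict() explicitly.
        def state_dict(self):
            return {"batches_seen": self.batches_seen}

        def load_state_dict(self, state_dict):
            self.batches_seen = state_dict["batches_seen"]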

adv. user 1.8

If

Then

Ref

imported pl.callbacks.base

import pl.callbacks.callback

#13031

imported pl.loops.base

import pl.loops.loop instead

#13043

imported pl.utilities.cli

import pl.cli instead

#13767

imported profiler classes from pl.profiler.*

import pl.profilers instead

#12308

used pl.accelerators.GPUAccelerator

use pl.accelerators.CUDAAccelerator

#13636

used LightningDeepSpeedModule

use strategy="deepspeed" or strategy=DeepSpeedStrategy(...)

https://pytorch-lightning.readthedocs.io/en/stable/api/pytorch_lightning.strategies.DeepSpeedStrategy.html

used the init_meta_context() context manager from pl.utilities.meta

switch to DeepSpeed ZeRO Stage 3

https://pytorch-lightning.readthedocs.io/en/stable/advanced/model_parallel.html#deepspeed-zero-stage-3

used the Lightning Hydra multi-run integration

support was removed because it caused issues with hanging processes

#15689

used pl.utilities.memory.get_gpu_memory_map

use pl.accelerators.cuda.get_nvidia_gpu_stats

#9921
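
A minimal sketch of the 1.8 import renames above (new locations in the imports, old ones in the comments):

    from pytorch_lightning.accelerators import CUDAAccelerator  # was: GPUAccelerator
    from pytorch_lightning.callbacks.callback import Callback   # was: pytorch_lightning.callbacks.base
    from pytorch_lightning.cli import LightningCLI              # was: pytorch_lightning.utilities.cli
    from pytorch_lightning.profilers import SimpleProfiler      # was: pytorch_lightning.profiler

    profiler = SimpleProfiler()
    print(CUDAAccelerator.is_available())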

adv. user 1.9

If

Then

Ref

used the pl.lite module

switch to lightning_fabric

#15953

used Trainer’s flag strategy='dp'

use DDP with strategy='ddp' or DeepSpeed instead

#16748

implemented LightningModule.training_epoch_end hook

port your logic to LightningModule.on_train_epoch_end hook

#16520

implemented LightningModule.validation_epoch_end hook

port your logic to LightningModule.on_validation_epoch_end hook

#16520

implemented LightningModule.test_epoch_end hook

port your logic to LightningModule.on_test_epoch_end hook

#16520

used Trainer’s flag multiple_trainloader_mode

switch to CombinedLoader(..., mode=...) and set the mode directly

#16800

used Trainer’s flag move_metrics_to_cpu

implement the offloading logic in your custom metric or enable it in torchmetrics

#16358

used Trainer’s flag track_grad_norm

override the LightningModule.on_before_optimizer_step hook and compute and log the gradient norm there yourself (the LightningModule.log_grad_norm() hook was removed as well)

#16745

used Trainer’s flag replace_sampler_ddp

use use_distributed_sampler; the sampler now gets created not only for the DDP strategies

relied on the on_tpu argument in LightningModule.optimizer_step hook

switch to manual optimization

#16537 Manual Optimization

relied on the using_lbfgs argument in LightningModule.optimizer_step hook

switch to manual optimization

#16538 Manual Optimization

were using nvidia/apex in any form

switch to PyTorch native mixed precision torch.amp instead

#16039 Precision

used Trainer’s flag using_native_amp

use PyTorch native mixed precision

#16039 Precision

used Trainer’s flag amp_backend

use PyTorch native mixed precision

#16039 Precision

used Trainer’s flag amp_level

use PyTorch native mixed precision

#16039 Precision

used Trainer’s attribute using_native_amp

use PyTorch native mixed precision

#16039 Precision

used Trainer’s attribute amp_backend

use PyTorch native mixed precision

#16039 Precision

used Trainer’s attribute amp_level

use PyTorch native mixed precision

#16039 Precision

used the FairScale integration

consider using PyTorch’s native FSDP implementation, or the implementation that was moved out into its own project

https://github.com/Lightning-Sandbox/lightning-Fairscale

used pl.overrides.fairscale.LightningShardedDataParallel

use native FSDP instead

#16400 FSDP

used pl.plugins.precision.fully_sharded_native_amp.FullyShardedNativeMixedPrecisionPlugin

use native FSDP instead

#16400 FSDP

used pl.plugins.precision.sharded_native_amp.ShardedNativeMixedPrecisionPlugin

use native FSDP instead

#16400 FSDP

used pl.strategies.fully_sharded.DDPFullyShardedStrategy

use native FSDP instead

#16400 FSDP

used pl.strategies.sharded.DDPShardedStrategy

use native FSDP instead

#16400 FSDP

used pl.strategies.sharded_spawn.DDPSpawnShardedStrategy

use native FSDP instead

#16400 FSDP

used save_config_overwrite parameters in LightningCLI

pass this option via the save_config_kwargs dictionary parameter

#14998

used save_config_multifile parameters in LightningCLI

pass this option via the save_config_kwargs dictionary parameter

#14998

have customized loops Loop.replace()

implement your training loop with Fabric.

#14998 Fabric

have customized loops Loop.run()

implement your training loop with Fabric.

#14998 Fabric

have customized loops Loop.connect()

implement your training loop with Fabric.

#14998 Fabric

used the Trainer’s trainer.fit_loop property

implement your training loop with Fabric

#14998 Fabric

used the Trainer’s trainer.validate_loop property

implement your training loop with Fabric

#14998 Fabric

used the Trainer’s trainer.test_loop property

implement your training loop with Fabric

#14998 Fabric

used the Trainer’s trainer.predict_loop property

implement your training loop with Fabric

#14998 Fabric

used the Trainer loop and data-fetching classes

they were marked as protected

used opt_idx argument in BaseFinetuning.finetune_function

use manual optimization

#16539

used opt_idx argument in Callback.on_before_optimizer_step

use manual optimization

#16539 Manual Optimization

used optimizer_idx as an optional argument in LightningModule.training_step

use manual optimization

#16539 Manual Optimization

used optimizer_idx argument in LightningModule.on_before_optimizer_step

use manual optimization

#16539 Manual Optimization

used optimizer_idx argument in LightningModule.configure_gradient_clipping

use manual optimization

#16539 Manual Optimization

used optimizer_idx argument in LightningModule.optimizer_step

use manual optimization

#16539 Manual Optimization

used optimizer_idx argument in LightningModule.optimizer_zero_grad

use manual optimization

#16539 Manual Optimization

used optimizer_idx argument in LightningModule.lr_scheduler_step

use manual optimization

#16539 Manual Optimization

declared optimizer frequencies in the dictionary returned from LightningModule.configure_optimizers

use manual optimization

#16539 Manual Optimization

used optimizer argument in LightningModule.backward

use manual optimization

#16539 Manual Optimization

used optimizer_idx argument in LightningModule.backward

use manual optimization

#16539 Manual Optimization

used optimizer_idx argument in PrecisionPlugin.optimizer_step

use manual optimization

#16539 Manual Optimization

used optimizer_idx argument in PrecisionPlugin.backward

use manual optimization

#16539 Manual Optimization

used optimizer_idx argument in Strategy.backward

use manual optimization

#16539 Manual Optimization

used optimizer_idx argument in Strategy.optimizer_step

use manual optimization

#16539 Manual Optimization

used Trainer’s Trainer.optimizer_frequencies attribute

use manual optimization

Manual Optimization

used PL_INTER_BATCH_PARALLELISM environment flag

it was removed

#16355

used training integration with Horovod

install standalone package/project

https://github.com/Lightning-AI/lightning-Horovod

used training integration with ColossalAI

install standalone package/project

https://lightning.ai/docs/pytorch/latest/advanced/third_party/colossalai.html

used QuantizationAwareTraining callback

use Torch’s Quantization directly

#16750

had any logic except reducing the DP outputs in LightningModule.training_step_end hook

port it to LightningModule.on_train_batch_end hook

#16791

had any logic except reducing the DP outputs in LightningModule.validation_step_end hook

port it to LightningModule.on_validation_batch_end hook

#16791

had any logic except reducing the DP outputs in LightningModule.test_step_end hook

port it to LightningModule.on_test_batch_end hook

#16791

used pl.strategies.DDPSpawnStrategy

switch to the general DDPStrategy(start_method='spawn') with the proper start method

#16809

used the automatic addition of a moving average of the training_step loss in the progress bar

use self.log("loss", ..., prog_bar=True) instead.

#16192

rely on the outputs argument from the on_predict_epoch_end hook

access them via trainer.predict_loop.predictions

#16655

need to pass a dictionary to self.log()

pass them independently.

#16389
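
The *_epoch_end rows above touch most existing LightningModules, so here is a minimal sketch of that migration together with the progress-bar loss row: collect the per-step values yourself and consume them in on_validation_epoch_end. The model, the loss metric, and the validation_step_outputs attribute name are illustrative, not a required API:

    import torch
    from pytorch_lightning import LightningModule


    class DemoModel(LightningModule):
        def __init__(self):
            super().__init__()
            self.layer = torch.nn.Linear(8, 1)
            self.validation_step_outputs = []  # collect what Lightning used to pass in

        def training_step(self, batch, batch_idx):
            x, y = batch
            loss = torch.nn.functional.mse_loss(self.layer(x), y)
            # Before: the progress bar showed a moving average of the loss automatically.
            # Now: log it explicitly if you want it there.
            self.log("loss", loss, prog_bar=True)
            return loss

        def validation_step(self, batch, batch_idx):
            x, y = batch
            loss = torch.nn.functional.mse_loss(self.layer(x), y)
            self.validation_step_outputs.append(loss)
            return loss

        # Before (<= 1.9): def validation_epoch_end(self, outputs): ...
        # Now: the hook takes no outputs; use what you collected yourself.
        def on_validation_epoch_end(self):
            epoch_mean = torch.stack(self.validation_step_outputs).mean()
            self.log("val_loss_epoch", epoch_mean)
            self.validation_step_outputs.clear()  # free memory for the next epoch

        def configure_optimizers(self):
            return torch.optim.Adam(self.parameters())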

Developer

devel 1.7

If

Then

Ref

used the legacy Trainer.get_deprecated_arg_names() method

it was removed

#14415

used the generic method Trainer.run_stage

switch to the specific method for your purpose: Trainer.{fit,validate,test,predict}

#11000

used rank_zero_only from pl.utilities.distributed

import it from pl.utilities.rank_zero

#11747

used rank_zero_debug from pl.utilities.distributed

import it from pl.utilities.rank_zero

#11747

used rank_zero_info from pl.utilities.distributed

import it from pl.utilities.rank_zero

#11747

used rank_zero_warn from pl.utilities.warnings

import it from pl.utilities.rank_zero

#11747

used rank_zero_deprecation from pl.utilities.warnings

import it from pl.utilities.rank_zero

#11747

used LightningDeprecationWarning from pl.utilities.warnings

import it from pl.utilities.rank_zero

#11747

used Trainer.data_parallel_device_ids attribute

switch to Trainer.device_ids

#12072

derived it from TrainerCallbackHookMixin

use Trainer base class

#14401

used base class pytorch_lightning.profiler.BaseProfiler

switch to pytorch_lightning.profiler.Profiler instead

#12150

set distributed backend via the environment variable PL_TORCH_DISTRIBUTED_BACKEND

use process_group_backend in the strategy constructor

#11745

used PrecisionPlugin.on_load_checkpoint hooks

switch to PrecisionPlugin.load_state_dict

#11978

used PrecisionPlugin.on_save_checkpoint hooks

switch to PrecisionPlugin.state_dict

#11978

used Trainer.root_gpu attribute

use Trainer.strategy.root_device.index when GPU is used

#12262

used Trainer.use_amp attribute

rely on Torch native AMP

#12312

used LightningModule.use_amp attribute

rely on Torch native AMP

#12315

used Trainer’s attribute Trainer.verbose_evaluate

rely on loop constructor EvaluationLoop(verbose=...)

#10931

used Trainer’s attribute Trainer.should_rank_save_checkpoint

it was removed

#11068

derived from TrainerOptimizersMixin

rely on core/optimizer.py

#11155

derived from TrainerDataLoadingMixin

rely on methods from Trainer and DataConnector

#11282

used Trainer’s attribute Trainer.lightning_optimizers

switch to the Strategy and its attributes.

#11444

used Trainer.call_hook

it was replaced by the protected methods Trainer._call_callback_hooks, Trainer._call_lightning_module_hook, Trainer._call_ttp_hook, and Trainer._call_accelerator_hook, which shall not be used.

#10979

used Profiler’s attribute SimpleProfiler.profile_iterable

it was removed

#12102

used Profiler’s attribute AdvancedProfiler.profile_iterable

it was removed

#12102

used the device_stats_monitor.prefix_metric_keys

it was removed

#11254

used on_train_batch_end(outputs, ...) with 2d list with sizes (n_optimizers, tbptt_steps)

change it to (tbptt_steps, n_optimizers). You can update your code by adding the following parameter to your hook signature: on_train_batch_end(outputs, ..., new_format=True).

#12182

used training_epoch_end(outputs) with a 3d list with sizes (n_optimizers, n_batches, tbptt_steps)

change it to (n_batches, tbptt_steps, n_optimizers). You can update your code by adding the following parameter to your hook signature: training_epoch_end(outputs, new_format=True).

#12182
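
A minimal sketch of the rank-zero utility imports and the distributed-backend change above; the gloo backend is just an example value:

    from pytorch_lightning.strategies import DDPStrategy
    from pytorch_lightning.utilities.rank_zero import (  # was: pl.utilities.distributed / pl.utilities.warnings
        rank_zero_info,
        rank_zero_warn,
    )

    rank_zero_info("logged once, on rank 0 only")
    rank_zero_warn("same idea for warnings")

    # Before: PL_TORCH_DISTRIBUTED_BACKEND=gloo python train.py
    # Now: pick the backend in the strategy constructor.
    strategy = DDPStrategy(process_group_backend="gloo")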

devel 1.8

If

Then

Ref

derived from pytorch_lightning.loggers.base.LightningLoggerBase

derive from pytorch_lightning.loggers.logger.Logger

#12014

derived from pytorch_lightning.profiler.base.BaseProfiler

derive from pytorch_lightning.profilers.profiler.Profiler

#12150

derived from pytorch_lightning.profiler.base.AbstractProfiler

derive from pytorch_lightning.profilers.profiler.Profiler

#12106
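
A minimal sketch of a custom logger under the renamed base class above; the print-based hooks are illustrative only:

    from pytorch_lightning.loggers.logger import Logger  # was: pytorch_lightning.loggers.base.LightningLoggerBase
    from pytorch_lightning.utilities.rank_zero import rank_zero_only


    class PrintLogger(Logger):
        @property
        def name(self):
            return "print"

        @property
        def version(self):
            return "0"

        @rank_zero_only
        def log_hyperparams(self, params):
            print("hparams:", params)

        @rank_zero_only
        def log_metrics(self, metrics, step=None):
            print(f"step {step}:", metrics)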

devel 1.9

If

Then

Ref

passed the pl_module argument to distributed module wrappers

pass the (required) forward_module argument instead

#16386

used DataParallel and the LightningParallelModule wrapper

use DDP or DeepSpeed instead

#16748 DDP

used pl_module argument from the distributed module wrappers

use DDP or DeepSpeed instead

#16386 DDP

called pl.overrides.base.unwrap_lightning_module function

use DDP or DeepSpeed instead

#16386 DDP

used or derived from pl.overrides.distributed.LightningDistributedModule class

use DDP instead

#16386 DDP

used the pl.plugins.ApexMixedPrecisionPlugin plugin

use PyTorch native mixed precision

#16039

used the pl.plugins.NativeMixedPrecisionPlugin plugin

switch to the pl.plugins.MixedPrecisionPlugin plugin

#16039

used the fit_loop.min_steps setters

implement your training loop with Fabric

#16803

used the fit_loop.max_steps setters

implement your training loop with Fabric

#16803

used the data_parallel attribute in Trainer

check the same using isinstance(trainer.strategy, ParallelStrategy)

#16703

used any function from pl.utilities.xla_device

switch to pl.accelerators.TPUAccelerator.is_available()

#14514 #14550

imported functions from pl.utilities.device_parser.*

import them from lightning_fabric.utilities.device_parser.*

#14492 #14753

imported functions from pl.utilities.cloud_io.*

import them from lightning_fabric.utilities.cloud_io.*

#14515

imported functions from pl.utilities.apply_func.*

import them from lightning_utilities.core.apply_func.*

#14516 #14537

used any code from pl.core.mixins

use the base classes

#16424

used any code from pl.utilities.distributed

rely on PyTorch’s native functions

#16390

used any code from pl.utilities.data

it was removed

#16440

used any code from pl.utilities.optimizer

it was removed

#16439

used any code from pl.utilities.seed

it was removed

#16422

were using truncated backpropagation through time (TBPTT) with LightningModule.truncated_bptt_steps

use manual optimization

#16172 Manual Optimization

were using truncated backpropagation through time (TBPTT) with LightningModule.tbptt_split_batch

use manual optimization

#16172 Manual Optimization

were using truncated backpropagation through time (TBPTT) and passing hidden to LightningModule.training_step

use manual optimization

#16172 Manual Optimization

used pl.utilities.finite_checks.print_nan_gradients function

it was removed

used pl.utilities.finite_checks.detect_nan_parameters function

it was removed

used pl.utilities.parsing.flatten_dict function

it was removed

used pl.utilities.metrics.metrics_to_scalars function

it was removed

used pl.utilities.memory.get_model_size_mb function

it was removed

used pl.strategies.utils.on_colab_kaggle function

it was removed

#16437

used LightningDataModule.add_argparse_args() method

switch to using LightningCLI

#16708

used LightningDataModule.parse_argparser() method

switch to using LightningCLI

#16708

used LightningDataModule.from_argparse_args() method

switch to using LightningCLI

#16708

used LightningDataModule.get_init_arguments_and_types() method

switch to using LightningCLI

#16708

used Trainer.default_attributes() method

switch to using LightningCLI

#16708

used Trainer.from_argparse_args() method

switch to using LightningCLI

#16708

used Trainer.parse_argparser() method

switch to using LightningCLI

#16708

used Trainer.match_env_arguments() method

switch to using LightningCLI

#16708

used Trainer.add_argparse_args() method

switch to using LightningCLI

#16708

used pl.utilities.argparse.from_argparse_args() function

switch to using LightningCLI

#16708

used pl.utilities.argparse.parse_argparser() function

switch to using LightningCLI

#16708

used pl.utilities.argparse.parse_env_variables() function

switch to using LightningCLI

#16708

used get_init_arguments_and_types() function

switch to using LightningCLI

#16708

used pl.utilities.argparse.add_argparse_args() function

switch to using LightningCLI

#16708

used pl.utilities.parsing.str_to_bool() function

switch to using LightningCLI

#16708

used pl.utilities.parsing.str_to_bool_or_int() function

switch to using LightningCLI

#16708

used pl.utilities.parsing.str_to_bool_or_str() function

switch to using LightningCLI

#16708

derived from pl.utilities.distributed.AllGatherGrad class

switch to PyTorch native equivalent

#15364

used PL_RECONCILE_PROCESS=1 env. variable

customize your logger

#16204

derived from the mixin method pl.core.saving.ModelIO.load_from_checkpoint

rely on pl.core.module.LightningModule

#16999

used Accelerator.setup_environment method

switch to Accelerator.setup_device

#16436

used PL_FAULT_TOLERANT_TRAINING env. variable

implement your own logic with Fabric

#16516 #16533

used or derived from public pl.overrides.distributed.IndexBatchSamplerWrapper class

it was marked as protected

#16826

used the DataLoaderLoop class

use manual optimization

#16726 Manual Optimization

used the EvaluationEpochLoop class

use manual optimization

#16726 Manual Optimization

used the PredictionEpochLoop class

use manual optimization

#16726 Manual Optimization

used trainer.reset_*_dataloader() methods

use Loop.setup_data() for the top-level loops

#16726

used LightningModule.precision attribute

rely on Trainer precision attribute

#16203

used Trainer.model setter

pass the model to the fit/test/predict method instead

#16462

relied on pl.utilities.supporters.CombinedLoaderIterator class

pass the dataloaders directly

#16714

accessed ProgressBarBase.train_batch_idx property

rely on Trainer internal loops’ properties

#16760

accessed ProgressBarBase.val_batch_idx property

rely on Trainer internal loops’ properties

#16760

accessed ProgressBarBase.test_batch_idx property

rely on Trainer internal loops’ properties

#16760

accessed ProgressBarBase.predict_batch_idx property

rely on Trainer internal loops’ properties

#16760

used Trainer.prediction_writer_callbacks property

rely on precision plugin

#16759

used PrecisionPlugin.dispatch

it was removed

#16618

used Strategy.dispatch

it was removed

#16618
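
A minimal sketch of the trainer.data_parallel replacement above:

    from pytorch_lightning import Trainer
    from pytorch_lightning.strategies import ParallelStrategy

    trainer = Trainer(accelerator="cpu", devices=1)

    # Before: trainer.data_parallel
    # Now: inspect the strategy directly.
    print(isinstance(trainer.strategy, ParallelStrategy))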

