
Upgrade from 1.4 to 2.0

Regular User

reg. user 1.4

If

Then

Ref

relied on the outputs in your LightningModule.on_train_epoch_end or Callback.on_train_epoch_end hooks

rely on the on_train_epoch_end hook, or set the outputs as attributes on your LightningModule instance and access them from that hook

#7339

accessed Trainer.truncated_bptt_steps

switch to manual optimization

#7323

called LightningModule.write_predictions and LightningModule.write_predictions_dict

rely on predict_step and Trainer.predict + callbacks to write out predictions

#7066

passed the period argument to the ModelCheckpoint callback

pass the every_n_epochs argument to the ModelCheckpoint callback (see the example at the end of this section)

#6146

passed the output_filename argument to Profiler

now pass dirpath and filename, that is Profiler(dirpath=..., filename=...)

#6621

passed the profiled_functions argument in PyTorchProfiler

now pass the record_functions argument

#6349

relied on the @auto_move_data decorator to use the LightningModule outside of the Trainer for inference

use Trainer.predict

#6993

implemented on_load_checkpoint with a checkpoint only argument, as in Callback.on_load_checkpoint(checkpoint)

now update the signature to include pl_module and trainer, as in Callback.on_load_checkpoint(trainer, pl_module, checkpoint)

#7253

relied on pl.metrics

now import the separate torchmetrics package

https://torchmetrics.readthedocs.io/en/stable

accessed datamodule attribute of LightningModule, that is model.datamodule

now access Trainer.datamodule, that is model.trainer.datamodule

#7168
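
For example, the period → every_n_epochs rename above can look like this (a minimal sketch; the dirpath value is just a placeholder):

    from pytorch_lightning import Trainer
    from pytorch_lightning.callbacks import ModelCheckpoint

    # `period=2` becomes `every_n_epochs=2`
    checkpoint_cb = ModelCheckpoint(dirpath="checkpoints/", every_n_epochs=2)
    trainer = Trainer(callbacks=[checkpoint_cb])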

reg. user 1.5

If

Then

Ref

used trainer.fit(train_dataloader=...)

use trainer.fit(train_dataloaders=...)

#7431

used trainer.validate(val_dataloaders=...)

use trainer.validate(dataloaders=...)

#7431

passed num_nodes to DDPPlugin and DDPSpawnPlugin

remove them since these parameters are now passed from the Trainer

#7026

passed sync_batchnorm to DDPPlugin and DDPSpawnPlugin

remove them since these parameters are now passed from the Trainer

#7026

didn’t provide a monitor argument to the EarlyStopping callback and just relied on the default value

pass monitor, as it is now a required argument (see the example at the end of this section)

#7907

used every_n_val_epochs in ModelCheckpoint

change the argument to every_n_epochs

#8383

used Trainer’s flag reload_dataloaders_every_epoch

pass reload_dataloaders_every_n_epochs

#5043

used Trainer’s flag distributed_backend

use strategy

#8575
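
The EarlyStopping and strategy changes above can be combined as in this minimal sketch (the metric name "val_loss" is just a placeholder for whatever you log):

    from pytorch_lightning import Trainer
    from pytorch_lightning.callbacks import EarlyStopping

    # monitor is now a required argument
    early_stop = EarlyStopping(monitor="val_loss", mode="min")

    # distributed_backend="ddp" becomes strategy="ddp",
    # reload_dataloaders_every_epoch becomes reload_dataloaders_every_n_epochs
    trainer = Trainer(strategy="ddp", callbacks=[early_stop], reload_dataloaders_every_n_epochs=1)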

reg. user 1.6

If

Then

Ref

used Trainer’s flag terminate_on_nan

set detect_anomaly instead, which enables detecting anomalies in the autograd engine

#9175

used Trainer’s flag weights_summary

pass a ModelSummary callback with max_depth instead (see the example at the end of this section)

#9699

used Trainer’s flag checkpoint_callback

set enable_checkpointing. If you set enable_checkpointing=True, a default ModelCheckpoint callback is configured automatically if none is provided

#9754

used Trainer’s flag stochastic_weight_avg

add the StochasticWeightAveraging callback directly to the list of callbacks, so for example, Trainer(..., callbacks=[StochasticWeightAveraging(), ...])

#8989

used Trainer’s flag flush_logs_every_n_steps

pass it to the logger init if it is supported for the particular logger

#9366

used Trainer’s flag max_steps with max_steps=None, which no longer has any effect

turn off the limit by passing Trainer(max_steps=-1) which is the default

#9460

used Trainer’s flag resume_from_checkpoint="..."

pass the same path to the fit function instead, trainer.fit(ckpt_path="...")

#9693

used Trainer’s flags log_gpu_memory or gpu_metrics

use the DeviceStatsMonitor callback instead

#9921

used Trainer’s flag progress_bar_refresh_rate

set the ProgressBar callback and set refresh_rate there, or pass enable_progress_bar=False to disable the progress bar

#9616

called LightningModule.summarize()

use the utility function pl.utilities.model_summary.summarize(model)

#8513

used the LightningModule.model_size property

use the utility function pl.utilities.memory.get_model_size_mb(model)

#8495

relied on the on_train_dataloader() hooks in LightningModule and LightningDataModule

use train_dataloader

#9098

relied on the on_val_dataloader() hooks in LightningModule and LightningDataModule

use val_dataloader

#9098

relied on the on_test_dataloader() hooks in LightningModule and LightningDataModule

use test_dataloader

#9098

relied on the on_predict_dataloader() hooks in LightningModule and LightningDataModule

use predict_dataloader

#9098

implemented the on_keyboard_interrupt callback hook

implement the on_exception hook, and specify the exception type

#9260

relied on the TestTubeLogger

use another logger like TensorBoardLogger

#9065

used the basic progress bar ProgressBar callback

use the TQDMProgressBar callback instead with the same arguments

#10134

were using GPUStatsMonitor callbacks

use DeviceStatsMonitor callback instead

#9924

were using XLAStatsMonitor callbacks

use DeviceStatsMonitor callback instead

#9924
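
Several of the removed Trainer flags above map onto callbacks, roughly as in this sketch:

    from pytorch_lightning import Trainer
    from pytorch_lightning.callbacks import DeviceStatsMonitor, ModelSummary, TQDMProgressBar

    trainer = Trainer(
        callbacks=[
            ModelSummary(max_depth=2),         # replaces weights_summary
            TQDMProgressBar(refresh_rate=10),  # replaces progress_bar_refresh_rate
            DeviceStatsMonitor(),              # replaces log_gpu_memory / GPUStatsMonitor / XLAStatsMonitor
        ],
        enable_checkpointing=True,             # replaces checkpoint_callback
    )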

reg. user 1.7

If

Then

Ref

have wrapped your loggers with LoggerCollection

directly pass a list of loggers to the Trainer and access the list via the trainer.loggers attribute (see the example at the end of this section)

#12147

used Trainer.lr_schedulers

access trainer.lr_scheduler_configs instead, which contains dataclasses instead of dictionaries.

#11443

used neptune-client API in the NeptuneLogger

upgrade to the latest API

#14727

used LightningDataModule.on_save hook

use LightningDataModule.on_save_checkpoint instead

#11887

used LightningDataModule.on_load_checkpoint hook

use LightningDataModule.load_state_dict instead

#11887

used LightningModule.on_hpc_load hook

switch to general purpose hook LightningModule.on_load_checkpoint

#14315

used LightningModule.on_hpc_save hook

switch to general purpose hook LightningModule.on_save_checkpoint

#14315

used Trainer’s flag weights_save_path

use the dirpath argument of the ModelCheckpoint callback directly

#14424

used Trainer’s property Trainer.weights_save_path

this property was dropped

#14424
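
The LoggerCollection change above can look like this minimal sketch (the "logs/" directory is just a placeholder):

    from pytorch_lightning import Trainer
    from pytorch_lightning.loggers import CSVLogger, TensorBoardLogger

    # pass a plain list of loggers instead of wrapping them in LoggerCollection
    trainer = Trainer(logger=[TensorBoardLogger("logs/"), CSVLogger("logs/")])

    # the loggers are then available as a list
    for logger in trainer.loggers:
        print(type(logger).__name__)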

reg. user 1.8

If

Then

Ref

used seed_everything_default=None in LightningCLI

set seed_everything_default=False instead

#12804

used Trainer.reset_train_val_dataloaders()

call Trainer.reset_train_dataloaders() and Trainer.reset_val_dataloaders() separately

#12184

imported pl.core.lightning

import pl.core.module instead

#12740

reg. user 1.9

If

Then

Ref

used Python 3.7

upgrade to Python 3.8 or higher

#16579

used PyTorch 1.10

upgrade to PyTorch 1.11 or higher

#16492

used Trainer’s flag gpus

use devices with the same number

#16171

used Trainer’s flag tpu_cores

use devices with the same number

#16171

used Trainer’s flag ipus

use devices with the same number

#16171

used Trainer’s flag num_processes

use devices with the same number

#16171

used Trainer’s flag resume_from_checkpoint

pass the path to the Trainer.fit(ckpt_path="...") method

#10061

used Trainer’s flag auto_select_gpus

use devices="auto"

#16184

called the pl.tuner.auto_gpu_select.pick_single_gpu function

use Trainer’s flag devices="auto"

#16184

called the pl.tuner.auto_gpu_select.pick_multiple_gpus functions

use Trainer’s flag devices="auto"

#16184

used Trainer’s flag accumulate_grad_batches with a scheduling dictionary value

use the GradientAccumulationScheduler callback and configure it

#16729

imported profiles from pl.profiler

import from pl.profilers

#16359

used Tuner as part of Trainer in any form

move to a standalone Tuner object or use the LearningRateFinder and BatchSizeFinder callbacks (see the sketch at the end of this section)

https://lightning.ai/docs/pytorch/latest/advanced/training_tricks.html#batch-size-finder

used Trainer’s flag auto_scale_batch_size

use the BatchSizeFinder callback instead; the Trainer.tune() method was removed

used Trainer’s flag auto_lr_find

use the LearningRateFinder callback instead; the Trainer.tune() method was removed
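
The devices and Tuner changes above come together roughly as in this sketch (model stands for your own LightningModule instance and is assumed to exist):

    from pytorch_lightning import Trainer
    from pytorch_lightning.tuner.tuning import Tuner

    # replaces gpus=2; the same pattern applies to tpu_cores, ipus, and
    # num_processes with the matching accelerator
    trainer = Trainer(accelerator="gpu", devices=2)

    model = ...  # your LightningModule instance

    # tuning now happens through a standalone Tuner object instead of Trainer flags
    tuner = Tuner(trainer)
    tuner.scale_batch_size(model)  # replaces auto_scale_batch_size
    tuner.lr_find(model)           # replaces auto_lr_find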

Advanced User

adv. user 1.4

If

Then

Ref

called ModelCheckpoint.save_function

now call Trainer.save_checkpoint

#7201

accessed the Trainer.running_sanity_check property

now access the Trainer.sanity_checking property

#4945

used LightningModule.grad_norm

now use the pl.utilities.grads.grad_norm utility function instead

#7292

used TrainerTrainingTricksMixin.detect_nan_tensors

now use pl.utilities.finite_checks.detect_nan_parameters

#6834

used TrainerTrainingTricksMixin.print_nan_gradients

now use pl.utilities.finite_checks.print_nan_gradients

#6834

relied on TrainerLoggingMixin.metrics_to_scalars

now use pl.utilities.metrics.metrics_to_scalars

#7180

selected the i-th GPU with Trainer(gpus="i,j")

this now sets the number of GPUs, just like passing Trainer(devices=i); you can still select specific GPUs by setting the CUDA_VISIBLE_DEVICES=i,j environment variable (see the sketch at the end of this section)

#6388
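
For the GPU-selection change above, a minimal sketch:

    # select the physical GPUs through the environment, e.g. run:
    #   CUDA_VISIBLE_DEVICES=0,2 python train.py
    from pytorch_lightning import Trainer

    # devices now only sets how many of the visible GPUs to use
    trainer = Trainer(accelerator="gpu", devices=2)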

adv. user 1.5

If

Then

Ref

used self.log(sync_dist_op=...)

use self.log(reduce_fx=...) instead. Passing "mean" will still work, but it also takes a callable (see the sketch at the end of this section)

#7891

used the argument model from pytorch_lightning.utilities.model_helpers.is_overridden

use instance instead

#7918

returned values from training_step that had .grad defined (e.g., a loss) and expected .detach() to be called for you

call .detach() manually

#7994

imported pl.utilities.distributed.rank_zero_warn

import pl.utilities.rank_zero.rank_zero_warn

relied on DataModule.has_prepared_data attribute

manage the data lifecycle in your custom methods

#7657

relied on DataModule.has_setup_fit attribute

manage the data lifecycle in your custom methods

#7657

relied on DataModule.has_setup_validate attribute

manage the data lifecycle in your custom methods

#7657

relied on DataModule.has_setup_test attribute

manage the data lifecycle in your custom methods

#7657

relied on DataModule.has_setup_predict attribute

manage the data lifecycle in your custom methods

#7657

relied on DataModule.has_teardown_fit attribute

manage the data lifecycle in your custom methods

#7657

relied on DataModule.has_teardown_validate attribute

manage the data lifecycle in your custom methods

#7657

relied on DataModule.has_teardown_test attribute

manage the data lifecycle in your custom methods

#7657

relied on DataModule.has_teardown_predict attribute

manage the data lifecycle in your custom methods

#7657

used DDPPlugin.task_idx

use DDPStrategy.local_rank

#8203

used Trainer.disable_validation

use the condition not Trainer.enable_validation

#8291
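
The sync_dist_op → reduce_fx change above in context, as a minimal self-contained sketch (LitModel is a hypothetical module used only to host the logging call):

    import torch
    from pytorch_lightning import LightningModule

    class LitModel(LightningModule):
        def __init__(self):
            super().__init__()
            self.layer = torch.nn.Linear(32, 1)

        def training_step(self, batch, batch_idx):
            loss = self.layer(batch).sum()
            # sync_dist_op="mean" becomes reduce_fx="mean"; a callable such as torch.max also works
            self.log("train_loss", loss, reduce_fx="mean", sync_dist=True)
            return loss

        def configure_optimizers(self):
            return torch.optim.SGD(self.parameters(), lr=0.1)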

adv. user 1.6

If

Then

Ref

passed prepare_data_per_node to the Trainer

set it as a property of DataHooks, accessible in the LightningModule and LightningDataModule instead (see the sketch at the end of this section)

#8958

used process_position flag

specify your ProgressBar callback and set process_position on it directly

#9222

used distributed training attributes add_to_queue and get_from_queue in LightningModule

use the same methods in DDPStrategy(start_method='spawn')

#9118

called LightningModule.get_progress_bar_dict

use the utility function pl.callbacks.progress.base.get_standard_metrics(module.trainer)

#9118

used LightningModule.on_post_move_to_device

remove it, as parameter tying happens automatically without the need to implement your own logic

#9525

relied on Trainer.progress_bar_dict

use ProgressBarBase.get_metrics

#9118

used LightningDistributed

rely on the logic in DDPStrategy(start_method=...)

#9691

used the Accelerator collective API Accelerator.barrier, Accelerator.broadcast, and Accelerator.all_gather

call Strategy collectives API directly, without going through Accelerator

#9677

used pytorch_lightning.core.decorators.parameter_validation

rely on automatic parameters tying with pytorch_lightning.utilities.params_tying.set_shared_parameters

#9525

used LearningRateMonitor.lr_sch_names

access them using LearningRateMonitor.lrs.keys() which will return the names of all the optimizers, even those without a scheduler.

#10066

implemented DataModule train_transforms, val_transforms, test_transforms, size, dims

switch to LightningDataModule

#8851
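
For the prepare_data_per_node change above, a minimal sketch (MyDataModule is hypothetical; the same attribute can be set on a LightningModule):

    from pytorch_lightning import LightningDataModule

    class MyDataModule(LightningDataModule):
        def __init__(self):
            super().__init__()
            # previously Trainer(prepare_data_per_node=...); now set on the module itself
            self.prepare_data_per_node = True

        def prepare_data(self):
            pass  # download or preprocess data here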

adv. user 1.7

If

Then

Ref

used DDP2Strategy

switch to DDPStrategy

#14026

used Trainer.training_type_plugin property

now use Trainer.strategy and update the references

#11141

used any TrainingTypePlugin

rename them to Strategy

#11120

used DistributedType

rely on protected _StrategyType

#10505

used DeviceType

rely on protected _AcceleratorType

#10503

used pl.utilities.meta functions

switch to the built-in torchdistx support (https://github.com/pytorch/torchdistx)

#13868

have implemented Callback.on_configure_sharded_model hook

move your implementation to Callback.setup

#14834

have implemented the Callback.on_before_accelerator_backend_setup hook

move your implementation to Callback.setup

#14834

have implemented the Callback.on_batch_start hook

move your implementation to Callback.on_train_batch_start

#14834

have implemented the Callback.on_batch_end hook

move your implementation to Callback.on_train_batch_end

#14834

have implemented the Callback.on_epoch_start hook

move your implementation to Callback.on_train_epoch_start, Callback.on_validation_epoch_start, or Callback.on_test_epoch_start

#14834

have implemented the Callback.on_pretrain_routine_{start,end} hook

move your implementation to Callback.on_fit_start

#14834

used Callback.on_init_start hook

use Callback.on_train_start instead

#10940

used Callback.on_init_end hook

use Callback.on_train_start instead

#10940

used Trainer’s attribute Trainer.num_processes

it was replaced by Trainer.num_devices

#12388

used Trainer’s attribute Trainer.gpus

it was replaced by Trainer.num_devices

#12436

used Trainer’s attribute Trainer.num_gpus

use Trainer.num_devices instead

#12384

used Trainer’s attribute Trainer.ipus

use Trainer.num_devices instead

#12386

used Trainer’s attribute Trainer.tpu_cores

use Trainer.num_devices instead

#12437

used Trainer.num_processes attribute

switch to using Trainer.num_devices

#12388

used LightningIPUModule

it was removed

#14830

logged with LightningLoggerBase.agg_and_log_metrics

switch to LightningLoggerBase.log_metrics

#11832

used agg_key_funcs parameter from LightningLoggerBase

log metrics explicitly

#11871

used agg_default_func parameters in LightningLoggerBase

log metrics explicitly

#11871

used Trainer.validated_ckpt_path attribute

rely on generic read-only property Trainer.ckpt_path, which is set when checkpoints are loaded via Trainer.validate(ckpt_path=...)

#11696

used Trainer.tested_ckpt_path attribute

rely on generic read-only property Trainer.ckpt_path, which is set when checkpoints are loaded via Trainer.test(ckpt_path=...)

#11696

used Trainer.predicted_ckpt_path attribute

rely on generic read-only property Trainer.ckpt_path, which is set when checkpoints are loaded via Trainer.predict(ckpt_path=...)

#11696

rely on the returned dictionary from Callback.on_save_checkpoint

call Callback.state_dict directly instead (see the sketch at the end of this section)

#11887
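
The Callback.state_dict change above, as a minimal sketch (CounterCallback is a hypothetical stateful callback):

    from pytorch_lightning.callbacks import Callback

    class CounterCallback(Callback):
        def __init__(self):
            self.batches_seen = 0

        def on_train_batch_end(self, trainer, pl_module, outputs, batch, batch_idx):
            self.batches_seen += 1

        # instead of returning a dict from on_save_checkpoint, expose the state explicitly
        def state_dict(self):
            return {"batches_seen": self.batches_seen}

        def load_state_dict(self, state_dict):
            self.batches_seen = state_dict["batches_seen"]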

adv. user 1.8

If

Then

Ref

imported pl.callbacks.base

import pl.callbacks.callback

#13031

imported pl.loops.base

import pl.loops.loop instead

#13043

imported pl.utilities.cli

import pl.cli instead

#13767

imported profiler classes from pl.profiler.*

import pl.profilers instead

#12308

used pl.accelerators.GPUAccelerator

use pl.accelerators.CUDAAccelerator

#13636

used LightningDeepSpeedModule

use strategy="deepspeed" or strategy=DeepSpeedStrategy(...) (see the sketch at the end of this section)

https://pytorch-lightning.readthedocs.io/en/stable/api/pytorch_lightning.strategies.DeepSpeedStrategy.html

used the init_meta_context() context manager from pl.utilities.meta

switch to deepspeed-zero-stage-3

https://pytorch-lightning.readthedocs.io/en/stable/advanced/model_parallel.html#deepspeed-zero-stage-3

used the Lightning Hydra multi-run integration

support for it was removed, as it caused issues with processes hanging

#15689

used pl.utilities.memory.get_gpu_memory_map

use pl.accelerators.cuda.get_nvidia_gpu_stats

#9921
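
The DeepSpeed-related rows above, as a minimal sketch (running it requires the deepspeed package and CUDA devices):

    from pytorch_lightning import Trainer
    from pytorch_lightning.strategies import DeepSpeedStrategy

    # GPUAccelerator is now CUDAAccelerator, selectable via accelerator="cuda"
    trainer = Trainer(accelerator="cuda", devices=2, strategy=DeepSpeedStrategy(stage=3))
    # or simply: Trainer(accelerator="cuda", devices=2, strategy="deepspeed")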

adv. user 1.9

If

Then

Ref

used the pl.lite module

switch to lightning_fabric

#15953

used Trainer’s flag strategy='dp'

use DDP with strategy='ddp' or DeepSpeed instead

#16748

implemented LightningModule.training_epoch_end hooks

port your logic to the LightningModule.on_train_epoch_end hook (see the sketch at the end of this section)

#16520

implemented LightningModule.validation_epoch_end hook

port your logic to LightningModule.on_validation_epoch_end hook

#16520

implemented LightningModule.test_epoch_end hooks

port your logic to LightningModule.on_test_epoch_end hook

#16520

used Trainer’s flag multiple_trainloader_mode

switch to CombinedLoader(..., mode=...) and set the mode directly

#16800

used Trainer’s flag move_metrics_to_cpu

implement particular offload logic in your custom metric or turn it on in torchmetrics

#16358

used Trainer’s flag track_grad_norm

override the on_before_optimizer_step hook and compute or log the gradient norm there, for example via the LightningModule.log_grad_norm() hook

#16745

used Trainer’s flag replace_sampler_ddp

use use_distributed_sampler; the sampler now gets created not only for the DDP strategies

relied on the on_tpu argument in LightningModule.optimizer_step hook

switch to manual optimization (see the sketch at the end of this section)

#16537 Manual Optimization

relied on the using_lbfgs argument in LightningModule.optimizer_step hook

switch to manual optimization

#16538 Manual Optimization

were using nvidia/apex in any form

switch to PyTorch native mixed precision torch.amp instead

#16039 Precision

used Trainer’s flag using_native_amp

use PyTorch native mixed precision

#16039 Precision

used Trainer’s flag amp_backend

use PyTorch native mixed precision

#16039 Precision

used Trainer’s flag amp_level

use PyTorch native mixed precision

#16039 Precision

used Trainer’s attribute using_native_amp

use PyTorch native mixed precision

#16039 Precision

used Trainer’s attribute amp_backend

use PyTorch native mixed precision

#16039 Precision

used Trainer’s attribute amp_level

use PyTorch native mixed precision

#16039 Precision

use the FairScale integration

consider using PyTorch’s native FSDP implementation or the implementation that was outsourced into its own project

https://github.com/Lightning-Sandbox/lightning-Fairscale

used pl.overrides.fairscale.LightningShardedDataParallel

use native FSDP instead

#16400 FSDP

used pl.plugins.precision.fully_sharded_native_amp.FullyShardedNativeMixedPrecisionPlugin

use native FSDP instead

#16400 FSDP

used pl.plugins.precision.sharded_native_amp.ShardedNativeMixedPrecisionPlugin

use native FSDP instead

#16400 FSDP

used pl.strategies.fully_sharded.DDPFullyShardedStrategy

use native FSDP instead

#16400 FSDP

used pl.strategies.sharded.DDPShardedStrategy

use native FSDP instead

#16400 FSDP

used pl.strategies.sharded_spawn.DDPSpawnShardedStrategy

use native FSDP instead

#16400 FSDP

used save_config_overwrite parameters in LightningCLI

pass this option via the save_config_kwargs dictionary parameter

#14998

used save_config_multifile parameters in LightningCLI

pass this option via the save_config_kwargs dictionary parameter

#14998

have customized loops Loop.replace()

implement your training loop with Fabric.

#14998 Fabric

have customized loops Loop.run()

implement your training loop with Fabric.

#14998 Fabric

have customized loops Loop.connect()

implement your training loop with Fabric.

#14998 Fabric

used the Trainer’s trainer.fit_loop property

implement your training loop with Fabric

#14998 Fabric

used the Trainer’s trainer.validate_loop property

implement your training loop with Fabric

#14998 Fabric

used the Trainer’s trainer.test_loop property

implement your training loop with Fabric

#14998 Fabric

used the Trainer’s trainer.predict_loop property

implement your training loop with Fabric

#14998 Fabric

used the Trainer.loop and fetching classes

they are now marked as protected

used opt_idx argument in BaseFinetuning.finetune_function

use manual optimization

#16539

used opt_idx argument in Callback.on_before_optimizer_step

use manual optimization

#16539 Manual Optimization

used optimizer_idx as an optional argument in LightningModule.training_step

use manual optimization

#16539 Manual Optimization

used optimizer_idx argument in LightningModule.on_before_optimizer_step

use manual optimization

#16539 Manual Optimization

used optimizer_idx argument in LightningModule.configure_gradient_clipping

use manual optimization

#16539 Manual Optimization

used optimizer_idx argument in LightningModule.optimizer_step

use manual optimization

#16539 Manual Optimization

used optimizer_idx argument in LightningModule.optimizer_zero_grad

use manual optimization

#16539 Manual Optimization

used optimizer_idx argument in LightningModule.lr_scheduler_step

use manual optimization

#16539 Manual Optimization

declared optimizer frequencies in the dictionary returned from LightningModule.configure_optimizers

use manual optimization

#16539 Manual Optimization

used optimizer argument in LightningModule.backward

use manual optimization

#16539 Manual Optimization

used optimizer_idx argument in LightningModule.backward

use manual optimization

#16539 Manual Optimization

used optimizer_idx argument in PrecisionPlugin.optimizer_step

use manual optimization

#16539 Manual Optimization

used optimizer_idx argument in PrecisionPlugin.backward

use manual optimization

#16539 Manual Optimization

used optimizer_idx argument in Strategy.backward

use manual optimization

#16539 Manual Optimization

used optimizer_idx argument in Strategy.optimizer_step

use manual optimization

#16539 Manual Optimization

used Trainer’s Trainer.optimizer_frequencies attribute

use manual optimization

Manual Optimization

used PL_INTER_BATCH_PARALLELISM environment flag

it was removed

#16355

used training integration with Horovod

install standalone package/project

https://github.com/Lightning-AI/lightning-Horovod

used training integration with ColossalAI

install standalone package/project

https://lightning.ai/docs/pytorch/latest/advanced/third_party/colossalai.html

used QuantizationAwareTraining callback

use Torch’s Quantization directly

#16750

had any logic except reducing the DP outputs in LightningModule.training_step_end hook

port it to the LightningModule.on_train_batch_end hook

#16791

had any logic except reducing the DP outputs in LightningModule.validation_step_end hook

port it to the LightningModule.on_validation_batch_end hook

#16791

had any logic except reducing the DP outputs in LightningModule.test_step_end hook

port it to the LightningModule.on_test_batch_end hook

#16791

used pl.strategies.DDPSpawnStrategy

switch to the general DDPStrategy(start_method='spawn') with the proper start method

#16809

used the automatic addition of a moving average of the training_step loss in the progress bar

use self.log("loss", ..., prog_bar=True) instead.

#16192

rely on the outputs argument from the on_predict_epoch_end hook

access them via trainer.predict_loop.predictions

#16655

need to pass a dictionary to self.log()

pass them independently.

#16389
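
Two of the recurring migrations in this section, as minimal self-contained sketches (LitModel and ManualLitModel are hypothetical modules). First, the *_epoch_end hooks no longer receive outputs, so accumulate them yourself:

    import torch
    from pytorch_lightning import LightningModule

    class LitModel(LightningModule):
        def __init__(self):
            super().__init__()
            self.layer = torch.nn.Linear(32, 1)
            self.validation_outputs = []  # accumulate manually

        def validation_step(self, batch, batch_idx):
            loss = self.layer(batch).sum()
            self.validation_outputs.append(loss)
            return loss

        def on_validation_epoch_end(self):  # replaces validation_epoch_end(outputs)
            epoch_mean = torch.stack(self.validation_outputs).mean()
            self.log("val_loss_epoch", epoch_mean)
            self.validation_outputs.clear()

        def configure_optimizers(self):
            return torch.optim.SGD(self.parameters(), lr=0.1)

Second, the rows that say "use manual optimization" refer to opting out of automatic optimization and stepping the optimizer yourself:

    class ManualLitModel(LitModel):
        def __init__(self):
            super().__init__()
            self.automatic_optimization = False  # take control of the optimization loop

        def training_step(self, batch, batch_idx):
            opt = self.optimizers()
            opt.zero_grad()
            loss = self.layer(batch).sum()
            self.manual_backward(loss)  # instead of loss.backward()
            opt.step()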

Developer

devel 1.5

If

Then

Ref

called CheckpointConnector.hpc_load()

just call CheckpointConnector.restore()

#7652

used TrainerModelHooksMixin

now rely on the corresponding utility functions in pytorch_lightning.utilities.signature_utils

#7422

assigned the Trainer.train_loop property

now assign the equivalent Trainer.fit_loop property

#8025

accessed LightningModule.loaded_optimizer_states_dict

the property has been removed

#8229

devel 1.6

If

Then

Ref

called LightningLoggerBase.close

switch to LightningLoggerBase.finalize.

#9422

called LoggerCollection.close

switch to LoggerCollection.finalize.

#9422

used AcceleratorConnector.is_slurm_managing_tasks attribute

it is now set as protected and its direct use is discouraged

#10101

used AcceleratorConnector.configure_slurm_ddp attributes

it is now set as protected and its direct use is discouraged

#10101

used ClusterEnvironment.creates_children() method

change it to ClusterEnvironment.creates_processes_externally, which is now a property

#10106

called PrecisionPlugin.master_params()

update it to PrecisionPlugin.main_params()

#10105

devel 1.7

If

Then

Ref

used the legacy Trainer.get_deprecated_arg_names()

it was removed

#14415

used the generic method Trainer.run_stage

switch to a specific one depending on your purpose: Trainer.{fit,validate,test,predict}

#11000

used rank_zero_only from pl.utilities.distributed

import it from pl.utilities.rank_zero

#11747

used rank_zero_debug from pl.utilities.distributed

import it from pl.utilities.rank_zero

#11747

used rank_zero_info from pl.utilities.distributed

import it from pl.utilities.rank_zero

#11747

used rank_zero_warn from pl.utilities.warnings

import it from pl.utilities.rank_zero

#11747

used rank_zero_deprecation from pl.utilities.warnings

import it from pl.utilities.rank_zero

#11747

used LightningDeprecationWarning from pl.utilities.warnings

import it from pl.utilities.rank_zero

#11747

used Trainer.data_parallel_device_ids attribute

switch it to Trainer.device_ids

#12072

derived it from TrainerCallbackHookMixin

use Trainer base class

#14401

used the base class pytorch_lightning.profiler.BaseProfiler

switch to pytorch_lightning.profiler.Profiler instead

#12150

set distributed backend via the environment variable PL_TORCH_DISTRIBUTED_BACKEND

use process_group_backend in the strategy constructor (see the sketch at the end of this section)

#11745

used PrecisionPlugin.on_load_checkpoint hooks

switch to PrecisionPlugin.load_state_dict

#11978

used PrecisionPlugin.on_save_checkpoint hooks

switch to PrecisionPlugin.state_dict

#11978

used Trainer.root_gpu attribute

use Trainer.strategy.root_device.index when GPU is used

#12262

used Trainer.use_amp attribute

rely on Torch native AMP

#12312

used LightningModule.use_amp attribute

rely on Torch native AMP

#12315

used Trainer’s attribute Trainer.verbose_evaluate

rely on loop constructor EvaluationLoop(verbose=...)

#10931

used Trainer’s attribute Trainer.should_rank_save_checkpoint

it was removed

#11068

derived from TrainerOptimizersMixin

rely on core/optimizer.py

#11155

derived from TrainerDataLoadingMixin

rely on methods from Trainer and DataConnector

#11282

used Trainer’s attribute Trainer.lightning_optimizers

switch to the Strategy and its attributes.

#11444

used Trainer.call_hook

it was split into the protected methods Trainer._call_callback_hooks, Trainer._call_lightning_module_hook, Trainer._call_ttp_hook, and Trainer._call_accelerator_hook, which shall not be used

#10979

used Profiler’s attribute SimpleProfiler.profile_iterable

it was removed

#12102

used Profiler’s attribute AdvancedProfiler.profile_iterable

it was removed

#12102

used the device_stats_monitor.prefix_metric_keys

it was removed

#11254

used on_train_batch_end(outputs, ...) with 2d list with sizes (n_optimizers, tbptt_steps)

change it to (tbptt_steps, n_optimizers). You can update your code by adding the following parameter to your hook signature: on_train_batch_end(outputs, ..., new_format=True).

#12182

used training_epoch_end(outputs) with a 3d list with sizes (n_optimizers, n_batches, tbptt_steps)

change it to (n_batches, tbptt_steps, n_optimizers). You can update your code by adding the following parameter to your hook signature: training_epoch_end(outputs, new_format=True).

#12182
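
The PL_TORCH_DISTRIBUTED_BACKEND change above, as a minimal sketch:

    from pytorch_lightning import Trainer
    from pytorch_lightning.strategies import DDPStrategy

    # instead of exporting PL_TORCH_DISTRIBUTED_BACKEND=gloo
    trainer = Trainer(accelerator="cpu", devices=2, strategy=DDPStrategy(process_group_backend="gloo"))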

devel 1.8

If

Then

Ref

derived from pytorch_lightning.loggers.base.LightningLoggerBase

derive from pytorch_lightning.loggers.logger.Logger

#12014

derived from pytorch_lightning.profiler.base.BaseProfiler

derive from pytorch_lightning.profilers.profiler.Profiler

#12150

derived from pytorch_lightning.profiler.base.AbstractProfiler

derive from pytorch_lightning.profilers.profiler.Profiler

#12106

devel 1.9

If

Then

Ref

passed the pl_module argument to distributed module wrappers

pass the (required) forward_module argument instead

#16386

used DataParallel and the LightningParallelModule wrapper

use DDP or DeepSpeed instead

#16748 DDP

used pl_module argument from the distributed module wrappers

use DDP or DeepSpeed instead

#16386 DDP

called pl.overrides.base.unwrap_lightning_module function

use DDP or DeepSpeed instead

#16386 DDP

used or derived from pl.overrides.distributed.LightningDistributedModule class

use DDP instead

#16386 DDP

used the pl.plugins.ApexMixedPrecisionPlugin plugin

use PyTorch native mixed precision

#16039

used the pl.plugins.NativeMixedPrecisionPlugin plugin

switch to the pl.plugins.MixedPrecisionPlugin plugin

#16039

used the fit_loop.min_steps setters

implement your training loop with Fabric

#16803

used the fit_loop.max_steps setters

implement your training loop with Fabric

#16803

used the data_parallel attribute in Trainer

check the same using isinstance(trainer.strategy, ParallelStrategy) (see the sketch at the end of this section)

#16703

used any function from pl.utilities.xla_device

switch to pl.accelerators.TPUAccelerator.is_available()

#14514 #14550

imported functions from pl.utilities.device_parser.*

import them from lightning_fabric.utilities.device_parser.*

#14492 #14753

imported functions from pl.utilities.cloud_io.*

import them from lightning_fabric.utilities.cloud_io.*

#14515

imported functions from pl.utilities.apply_func.*

import them from lightning_utilities.core.apply_func.*

#14516 #14537

used any code from pl.core.mixins

use the base classes

#16424

used any code from pl.utilities.distributed

rely on PyTorch’s native functions

#16390

used any code from pl.utilities.data

it was removed

#16440

used any code from pl.utilities.optimizer

it was removed

#16439

used any code from pl.utilities.seed

it was removed

#16422

were using truncated backpropagation through time (TBPTT) with LightningModule.truncated_bptt_steps

use manual optimization

#16172 Manual Optimization

were using truncated backpropagation through time (TBPTT) with LightningModule.tbptt_split_batch

use manual optimization

#16172 Manual Optimization

were using truncated backpropagation through time (TBPTT) and passing hidden to LightningModule.training_step

use manual optimization

#16172 Manual Optimization

used pl.utilities.finite_checks.print_nan_gradients function

it was removed

used pl.utilities.finite_checks.detect_nan_parameters function

it was removed

used pl.utilities.parsing.flatten_dict function

it was removed

used pl.utilities.metrics.metrics_to_scalars function

it was removed

used pl.utilities.memory.get_model_size_mb function

it was removed

used pl.strategies.utils.on_colab_kaggle function

it was removed

#16437

used LightningDataModule.add_argparse_args() method

switch to using LightningCLI

#16708

used LightningDataModule.parse_argparser() method

switch to using LightningCLI

#16708

used LightningDataModule.from_argparse_args() method

switch to using LightningCLI

#16708

used LightningDataModule.get_init_arguments_and_types() method

switch to using LightningCLI

#16708

used Trainer.default_attributes() method

switch to using LightningCLI

#16708

used Trainer.from_argparse_args() method

switch to using LightningCLI

#16708

used Trainer.parse_argparser() method

switch to using LightningCLI

#16708

used Trainer.match_env_arguments() method

switch to using LightningCLI

#16708

used Trainer.add_argparse_args() method

switch to using LightningCLI

#16708

used pl.utilities.argparse.from_argparse_args() function

switch to using LightningCLI

#16708

used pl.utilities.argparse.parse_argparser() function

switch to using LightningCLI

#16708

used pl.utilities.argparse.parse_env_variables() function

switch to using LightningCLI

#16708

used get_init_arguments_and_types() function

switch to using LightningCLI

#16708

used pl.utilities.argparse.add_argparse_args() function

switch to using LightningCLI

#16708

used pl.utilities.parsing.str_to_bool() function

switch to using LightningCLI

#16708

used pl.utilities.parsing.str_to_bool_or_int() function

switch to using LightningCLI

#16708

used pl.utilities.parsing.str_to_bool_or_str() function

switch to using LightningCLI

#16708

derived from pl.utilities.distributed.AllGatherGrad class

switch to PyTorch native equivalent

#15364

used PL_RECONCILE_PROCESS=1 env. variable

customize your logger

#16204

derived from the mixin method pl.core.saving.ModelIO.load_from_checkpoint

rely on pl.core.module.LightningModule

#16999

used Accelerator.setup_environment method

switch to Accelerator.setup_device

#16436

used PL_FAULT_TOLERANT_TRAINING env. variable

implement own logic with Fabric

#16516 #16533

used or derived from public pl.overrides.distributed.IndexBatchSamplerWrapper class

it is now set as protected

#16826

used the DataLoaderLoop class

use manual optimization

#16726 Manual Optimization

used the EvaluationEpochLoop class

use manual optimization

#16726 Manual Optimization

used the PredictionEpochLoop class

use manual optimization

#16726 Manual Optimization

used trainer.reset_*_dataloader() methods

use Loop.setup_data() for the top-level loops

#16726

used LightningModule.precision attribute

rely on Trainer precision attribute

#16203

used Trainer.model setter

pass the model to the fit/test/predict method instead

#16462

relied on pl.utilities.supporters.CombinedLoaderIterator class

pass dataloaders directly

#16714

accessed ProgressBarBase.train_batch_idx property

rely on Trainer internal loops’ properties

#16760

accessed ProgressBarBase.val_batch_idx property

rely on Trainer internal loops’ properties

#16760

accessed ProgressBarBase.test_batch_idx property

rely on Trainer internal loops’ properties

#16760

accessed ProgressBarBase.predict_batch_idx property

rely on Trainer internal loops’ properties

#16760

used Trainer.prediction_writer_callbacks property

rely on precision plugin

#16759

used PrecisionPlugin.dispatch

it was removed

#16618

used Strategy.dispatch

it was removed

#16618
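
The trainer.data_parallel replacement above, as a minimal sketch:

    from pytorch_lightning import Trainer
    from pytorch_lightning.strategies import ParallelStrategy

    trainer = Trainer(accelerator="cpu", devices=2, strategy="ddp")
    # instead of trainer.data_parallel
    is_parallel = isinstance(trainer.strategy, ParallelStrategy)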

