
Upgrade from 1.5 to 2.0

Regular User

reg. user 1.5

If

Then

Ref

used trainer.fit(train_dataloader=...)

use trainer.fit(train_dataloaders=...)

PR7431

used trainer.validate(val_dataloaders=...)

use trainer.validate(dataloaders=...)

PR7431

passed num_nodes to DDPPlugin and DDPSpawnPlugin

remove them since these parameters are now passed from the Trainer

PR7026

passed sync_batchnorm to DDPPlugin and DDPSpawnPlugin

remove them since these parameters are now passed from the Trainer

PR7026

didn’t provide a monitor argument to the EarlyStopping callback and just relied on the default value

pass monitor as it is now a required argument

PR7907

used every_n_val_epochs in ModelCheckpoint

change the argument to every_n_epochs

PR8383

used Trainer’s flag reload_dataloaders_every_epoch

pass reload_dataloaders_every_n_epochs instead

PR5043

used Trainer’s flag distributed_backend

use strategy

PR8575
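
Put together, the 1.5 renames above translate roughly as follows. The TinyModel module and the random tensors below are placeholders invented for this sketch; only the Trainer arguments and the fit/validate keywords come from the table.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl
from pytorch_lightning.callbacks import EarlyStopping, ModelCheckpoint

class TinyModel(pl.LightningModule):
    """Minimal stand-in model used only to make this sketch runnable."""
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(8, 1)

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = torch.nn.functional.mse_loss(self.layer(x), y)
        self.log("train_loss", loss)
        return loss

    def validation_step(self, batch, batch_idx):
        x, y = batch
        self.log("val_loss", torch.nn.functional.mse_loss(self.layer(x), y))

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.1)

def loader():
    return DataLoader(TensorDataset(torch.randn(32, 8), torch.randn(32, 1)), batch_size=8)

trainer = pl.Trainer(
    max_epochs=1,
    strategy="auto",                      # was: distributed_backend=...
    reload_dataloaders_every_n_epochs=1,  # was: reload_dataloaders_every_epoch=True
    callbacks=[
        EarlyStopping(monitor="val_loss"),  # monitor is now a required argument
        ModelCheckpoint(every_n_epochs=1),  # was: every_n_val_epochs=1
    ],
)
trainer.fit(TinyModel(), train_dataloaders=loader(), val_dataloaders=loader())  # was: train_dataloader=...
trainer.validate(TinyModel(), dataloaders=loader())                             # was: val_dataloaders=...
```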

reg. user 1.6

If

Then

Ref

used Trainer’s flag terminate_on_nan

set detect_anomaly instead, which enables detecting anomalies in the autograd engine

PR9175

used Trainer’s flag weights_summary

pass a ModelSummary callback with max_depth instead

PR9699

used Trainer’s flag checkpoint_callback

set enable_checkpointing. If you set enable_checkpointing=True, a default ModelCheckpoint callback is configured if none is provided in Trainer(callbacks=...)

PR9754

used Trainer’s flag stochastic_weight_avg

add the StochasticWeightAveraging callback directly to the list of callbacks, so for example, Trainer(..., callbacks=[StochasticWeightAveraging(), ...])

PR8989

used Trainer’s flag flush_logs_every_n_steps

pass it to the logger init if it is supported for the particular logger

PR9366

passed Trainer’s flag max_steps=None, which no longer has any effect

turn off the limit by passing Trainer(max_steps=-1) which is the default

PR9460

used Trainer’s flag resume_from_checkpoint="..."

pass the same path to the fit function instead, trainer.fit(ckpt_path="...")

PR9693

used Trainer’s flag log_gpu_memory, gpu_metrics

use the DeviceStatsMonitor callback instead

PR9921

used Trainer’s flag progress_bar_refresh_rate

set the ProgressBar callback and set refresh_rate there, or pass enable_progress_bar=False to disable the progress bar

PR9616

called LightningModule.summarize()

use the utility function pl.utilities.model_summary.summarize(model)

PR8513

used the LightningModule.model_size property

use the utility function pl.utilities.memory.get_model_size_mb(model)

PR8495

relied on the on_train_dataloader() hooks in LightningModule and LightningDataModule

use train_dataloader

PR9098

relied on the on_val_dataloader() hooks in LightningModule and LightningDataModule

use val_dataloader

PR9098

relied on the on_test_dataloader() hooks in LightningModule and LightningDataModule

use test_dataloader

PR9098

relied on the on_predict_dataloader() hooks in LightningModule and LightningDataModule

use predict_dataloader

PR9098

implemented the on_keyboard_interrupt callback hook

implement the on_exception hook, and specify the exception type

PR9260

relied on the TestTubeLogger

Use another logger like TensorBoardLogger

PR9065

used the basic progress bar ProgressBar callback

use the TQDMProgressBar callback instead with the same arguments

PR10134

were using GPUStatsMonitor callbacks

use DeviceStatsMonitor callback instead

PR9924

were using XLAStatsMonitor callbacks

use DeviceStatsMonitor callback instead

PR9924
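
A minimal sketch of how several of the removed 1.6 flags map onto callbacks and the remaining Trainer arguments; the refresh rate, SWA learning rate, and checkpoint path are arbitrary example values.

```python
import pytorch_lightning as pl
from pytorch_lightning.callbacks import (
    DeviceStatsMonitor,         # was: log_gpu_memory / GPUStatsMonitor / XLAStatsMonitor
    ModelSummary,               # was: weights_summary=...
    StochasticWeightAveraging,  # was: stochastic_weight_avg=True
    TQDMProgressBar,            # was: progress_bar_refresh_rate / ProgressBar
)

trainer = pl.Trainer(
    detect_anomaly=True,        # was: terminate_on_nan=True
    enable_checkpointing=True,  # was: checkpoint_callback=True
    max_steps=-1,               # was: max_steps=None
    callbacks=[
        ModelSummary(max_depth=2),
        StochasticWeightAveraging(swa_lrs=1e-2),
        TQDMProgressBar(refresh_rate=20),
        DeviceStatsMonitor(),
    ],
)
# was: Trainer(resume_from_checkpoint="last.ckpt"); the path now goes to fit(), e.g.:
# trainer.fit(TinyModel(), train_dataloaders=loader(), ckpt_path="last.ckpt")
```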

reg. user 1.7

If

Then

Ref

have wrapped your loggers with LoggerCollection

directly pass a list of loggers to the Trainer and access the list via the trainer.loggers attribute.

PR12147

used Trainer.lr_schedulers

access trainer.lr_scheduler_configs instead, which contains dataclasses instead of dictionaries.

PR11443

used neptune-client API in the NeptuneLogger

upgrade to the latest API

PR14727

used LightningDataModule.on_save_checkpoint hook

use LightningDataModule.state_dict hook instead

PR11887

used LightningDataModule.on_load_checkpoint hook

use LightningDataModule.load_state_dict hook instead

PR11887

used LightningModule.on_hpc_load hook

switch to general purpose hook LightningModule.on_load_checkpoint

PR14315

used LightningModule.on_hpc_save hook

switch to general purpose hook LightningModule.on_save_checkpoint

PR14315

used Trainer’s flag weights_save_path

pass the dirpath argument directly to the ModelCheckpoint callback.

PR14424

used Trainer’s property Trainer.weights_save_path

it was dropped

PR14424
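
For example, the multi-logger and checkpoint-path changes above look roughly like this; the logger directories and the ./checkpoints path are arbitrary placeholders.

```python
import pytorch_lightning as pl
from pytorch_lightning.callbacks import ModelCheckpoint
from pytorch_lightning.loggers import CSVLogger, TensorBoardLogger

trainer = pl.Trainer(
    # was: LoggerCollection([...]); now pass a plain list of loggers
    logger=[TensorBoardLogger("logs/"), CSVLogger("logs/")],
    # was: Trainer(weights_save_path="./checkpoints"); now set dirpath on ModelCheckpoint
    callbacks=[ModelCheckpoint(dirpath="./checkpoints")],
)
print(trainer.loggers)               # list access replaces LoggerCollection
print(trainer.lr_scheduler_configs)  # dataclasses, replaces Trainer.lr_schedulers
```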

reg. user 1.8

If

Then

Ref

used seed_everything_default=None in LightningCLI

set seed_everything_default=False instead

PR12804

used Trainer.reset_train_val_dataloaders()

call Trainer.reset_train_dataloaders() and Trainer.reset_val_dataloaders() separately

PR12184

imported pl.core.lightning

import pl.core.module instead

PR12740

reg. user 1.9

If

Then

Ref

used Python 3.7

upgrade to Python 3.8 or higher

PR16579

used PyTorch 1.10

upgrade to PyTorch 1.11 or higher

PR16492

used Trainer’s flag gpus

use devices with the same number

PR16171

used Trainer’s flag tpu_cores

use devices with the same number

PR16171

used Trainer’s flag ipus

use devices with the same number

PR16171

used Trainer’s flag num_processes

use devices with the same number

PR16171

used Trainer’s flag resume_from_checkpoint

pass the path to the Trainer.fit(ckpt_path="...") method,

PR10061

used Trainer’s flag auto_select_gpus

use devices="auto"

PR16184

called the pl.tuner.auto_gpu_select.pick_single_gpu function

use Trainer’s flag devices="auto"

PR16184

called the pl.tuner.auto_gpu_select.pick_multiple_gpus functions

use Trainer’s flag devices="auto"

PR16184

used Trainer’s flag accumulate_grad_batches with a scheduling dictionary value

use the GradientAccumulationScheduler callback and configure it

PR16729

imported profilers from pl.profiler

import from pl.profilers

PR16359

used Tuner as part of Trainer in any form

move to a standalone Tuner object or use the LearningRateFinder and BatchSizeFinder callbacks

Batch Size Finder Learning Rate Finder

used Trainer’s flag auto_scale_batch_size

use the BatchSizeFinder callback instead; the Trainer.tune() method was removed

used Trainer’s flag auto_lr_find

use the LearningRateFinder callback instead; the Trainer.tune() method was removed
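
A rough sketch of the 1.9 device and tuning changes above; the accumulation schedule is an arbitrary example value, and the finder callbacks only take effect once fit() runs.

```python
import pytorch_lightning as pl
from pytorch_lightning.callbacks import (
    BatchSizeFinder,                # was: auto_scale_batch_size=True + trainer.tune()
    GradientAccumulationScheduler,  # was: accumulate_grad_batches={...}
    LearningRateFinder,             # was: auto_lr_find=True + trainer.tune()
)

trainer = pl.Trainer(
    accelerator="auto",
    devices="auto",  # was: gpus / tpu_cores / ipus / num_processes / auto_select_gpus
    callbacks=[
        GradientAccumulationScheduler(scheduling={0: 1, 4: 2}),
        BatchSizeFinder(),
        LearningRateFinder(),
    ],
)
# was: Trainer(resume_from_checkpoint="some.ckpt"); now, for example:
# trainer.fit(TinyModel(), train_dataloaders=loader(), ckpt_path="some.ckpt")
```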

Advanced User

adv. user 1.5

If

Then

Ref

used self.log(sync_dist_op=...)

use self.log(reduce_fx=...) instead. Passing "mean" will still work, but it also takes a callable

PR7891

used the model argument of pytorch_lightning.utilities.model_helpers.is_overridden

use instance instead

PR7918

returned values from training_step that had .grad defined (e.g., a loss) and expected .detach() to be called for you

call .detach() manually

PR7994

imported pl.utilities.distributed.rank_zero_warn

import pl.utilities.rank_zero.rank_zero_warn

relied on DataModule.has_prepared_data attribute

manage the data lifecycle in custom methods

PR7657

relied on DataModule.has_setup_fit attribute

manage the data lifecycle in custom methods

PR7657

relied on DataModule.has_setup_validate attribute

manage the data lifecycle in custom methods

PR7657

relied on DataModule.has_setup_test attribute

manage the data lifecycle in custom methods

PR7657

relied on DataModule.has_setup_predict attribute

manage the data lifecycle in custom methods

PR7657

relied on DataModule.has_teardown_fit attribute

manage the data lifecycle in custom methods

PR7657

relied on DataModule.has_teardown_validate attribute

manage the data lifecycle in custom methods

PR7657

relied on DataModule.has_teardown_test attribute

manage the data lifecycle in custom methods

PR7657

relied on DataModule.has_teardown_predict attribute

manage the data lifecycle in custom methods

PR7657

used DDPPlugin.task_idx

use DDPStrategy.local_rank

PR8203

used Trainer.disable_validation

use the condition not Trainer.enable_validation

PR8291
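
For instance, a hypothetical training_step reflecting the reduce_fx rename and the manual .detach() noted above might look like this:

```python
import torch
import pytorch_lightning as pl

class AdvModel(pl.LightningModule):
    """Hypothetical module illustrating the 1.5 advanced changes above."""
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(8, 1)

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = torch.nn.functional.mse_loss(self.layer(x), y)
        # was: self.log("loss", loss, sync_dist_op="mean")
        self.log("loss", loss, reduce_fx="mean")
        # extra returned tensors with .grad are no longer detached for you
        return {"loss": loss, "preds": self.layer(x).detach()}

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.1)
```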

adv. user 1.6

If

Then

Ref

passed prepare_data_per_node to the Trainer

set it as a property of DataHooks, accessible in the LightningModule and LightningDataModule instead

PR8958

used process_position flag

specify your ProgressBar callback and set process_position on it directly

PR9222

used distributed training attributes add_to_queue and get_from_queue in LightningModule

use the same methods in DDPStrategy(start_method='spawn')

PR9118

called LightningModule.get_progress_bar_dict

use the utility function pl.callbacks.progress.base.get_standard_metrics(module.trainer)

PR9118

used LightningModule.on_post_move_to_device

remove it, as parameter tying happens automatically without the need to implement your own logic

PR9525

relied on Trainer.progress_bar_dict

use ProgressBarBase.get_metrics

PR9118

used LightningDistributed

rely on the logic in DDPStrategy(start_method='...')

PR9691

used the Accelerator collective API Accelerator.barrier, Accelerator.broadcast, and Accelerator.all_gather

call Strategy collectives API directly, without going through Accelerator

PR9677

used pytorch_lightning.core.decorators.parameter_validation

rely on automatic parameter tying with pytorch_lightning.utilities.params_tying.set_shared_parameters

PR9525

used LearningRateMonitor.lr_sch_names

access them using LearningRateMonitor.lrs.keys() which will return the names of all the optimizers, even those without a scheduler.

PR10066

implemented DataModule train_transforms, val_transforms, test_transforms, size, dims

switch to LightningDataModule

PR8851
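
A short sketch of the 1.6 advanced changes around prepare_data_per_node and the spawn-based strategy; the module below is a hypothetical placeholder with only the relevant attribute shown.

```python
import pytorch_lightning as pl
from pytorch_lightning.strategies import DDPStrategy

class SpawnModel(pl.LightningModule):
    """Hypothetical module; only the attribute relevant to the 1.6 changes is shown."""
    def __init__(self):
        super().__init__()
        # was: Trainer(prepare_data_per_node=...); now a DataHooks property on the module
        self.prepare_data_per_node = True

# was: LightningDistributed / add_to_queue / get_from_queue tied to DDP spawn plugins
trainer = pl.Trainer(strategy=DDPStrategy(start_method="spawn"), accelerator="cpu", devices=2)
```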

adv. user 1.7

If

Then

Ref

used DDP2Strategy

switch to DDPStrategy

PR14026

used Trainer.training_type_plugin property

now use Trainer.strategy and update the references

PR11141

used any TrainingTypePlugins

rename them to Strategy

PR11120

used DistributedType

rely on protected _StrategyType

PR10505

used DeviceType

rely on protected _AcceleratorType

PR10503

used pl.utilities.meta functions

switch to built-in https://github.com/pytorch/torchdistx support

PR13868

have implemented Callback.on_configure_sharded_model hook

move your implementation to Callback.setup

PR14834

have implemented the Callback.on_before_accelerator_backend_setup hook

move your implementation to Callback.setup

PR14834

have implemented the Callback.on_batch_start hook

move your implementation to Callback.on_train_batch_start

PR14834

have implemented the Callback.on_batch_end hook

move your implementation to Callback.on_train_batch_end

PR14834

have implemented the Callback.on_epoch_start hook

move your implementation to Callback.on_train_epoch_start, Callback.on_validation_epoch_start, or Callback.on_test_epoch_start

PR14834

have implemented the Callback.on_pretrain_routine_{start,end} hook

move your implementation to Callback.on_fit_start

PR14834

used Callback.on_init_start hook

use Callback.on_train_start instead

PR10940

used Callback.on_init_end hook

use Callback.on_train_start instead

PR10940

used Trainer’s attribute Trainer.num_processes

it was replaced by Trainer.num_devices

PR12388

used Trainer’s attribute Trainer.gpus

it was replaced by Trainer.num_devices

PR12436

used Trainer’s attribute Trainer.num_gpus

use Trainer.num_devices instead

PR12384

used Trainer’s attribute Trainer.ipus

use Trainer.num_devices instead

PR12386

used Trainer’s attribute Trainer.tpu_cores

use Trainer.num_devices instead

PR12437

used Trainer.num_processes attribute

switch to using Trainer.num_devices

PR12388

used LightningIPUModule

it was removed

PR14830

logged with LightningLoggerBase.agg_and_log_metrics

switch to LightningLoggerBase.log_metrics

PR11832

used agg_key_funcs parameter from LightningLoggerBase

log metrics explicitly

PR11871

used agg_default_func parameters in LightningLoggerBase

log metrics explicitly

PR11871

used Trainer.validated_ckpt_path attribute

rely on the generic read-only property Trainer.ckpt_path, which is set when checkpoints are loaded via Trainer.validate(ckpt_path=...)

PR11696

used Trainer.tested_ckpt_path attribute

rely on the generic read-only property Trainer.ckpt_path, which is set when checkpoints are loaded via Trainer.test(ckpt_path=...)

PR11696

used Trainer.predicted_ckpt_path attribute

rely on the generic read-only property Trainer.ckpt_path, which is set when checkpoints are loaded via Trainer.predict(ckpt_path=...)

PR11696

rely on the returned dictionary from Callback.on_save_checkpoint

call Callback.state_dict directly instead

PR11887
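
A hypothetical callback showing where implementations move under the renamed hooks and the state_dict/load_state_dict API from the table above:

```python
import pytorch_lightning as pl

class MyCallback(pl.Callback):
    """Hypothetical callback illustrating the 1.7 hook renames above."""

    def setup(self, trainer, pl_module, stage):
        # was: on_before_accelerator_backend_setup / on_configure_sharded_model
        pass

    def on_train_batch_start(self, trainer, pl_module, batch, batch_idx):
        # was: on_batch_start
        pass

    def on_fit_start(self, trainer, pl_module):
        # was: on_pretrain_routine_start / on_init_start / on_init_end
        pass

    def state_dict(self):
        # was: returning a dictionary from on_save_checkpoint
        return {"count": 0}

    def load_state_dict(self, state_dict):
        pass
```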

adv. user 1.8

If

Then

Ref

imported pl.callbacks.base

import pl.callbacks.callback

PR13031

imported pl.loops.base

import pl.loops.loop instead

PR13043

imported pl.utilities.cli

import pl.cli instead

PR13767

imported profiler classes from pl.profiler.*

import pl.profilers instead

PR12308

used pl.accelerators.GPUAccelerator

use pl.accelerators.CUDAAccelerator

PR13636

used LightningDeepSpeedModule

use strategy="deepspeed" or strategy=DeepSpeedStrategy(...)

DeepSpeedStrategy

used the init_meta_context() context manager from pl.utilities.meta

switch to deepspeed-zero-stage-3

DeepSpeed ZeRO Stage 3

used the Lightning Hydra multi-run integration

removed support for it as it caused issues with processes hanging

PR15689

used pl.utilities.memory.get_gpu_memory_map

use pl.accelerators.cuda.get_nvidia_gpu_stats

PR9921
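
For example, the 1.8 import and accelerator renames above translate roughly as follows; the deepspeed strategy line is commented out because it needs the optional deepspeed package.

```python
import pytorch_lightning as pl
from pytorch_lightning.profilers import SimpleProfiler       # was: pl.profiler.*
from pytorch_lightning.accelerators import CUDAAccelerator   # was: GPUAccelerator

trainer = pl.Trainer(
    accelerator="cuda" if CUDAAccelerator.is_available() else "cpu",
    profiler=SimpleProfiler(),
    # strategy="deepspeed",  # replaces LightningDeepSpeedModule (requires deepspeed)
)
```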

adv. user 1.9

If

Then

Ref

used the pl.lite module

switch to lightning_fabric

PR15953

used Trainer’s flag strategy='dp'

use DDP with strategy='ddp' or DeepSpeed instead

PR16748

implemented LightningModule.training_epoch_end hooks

port your logic to LightningModule.on_train_epoch_end hook

PR16520

implemented LightningModule.validation_epoch_end hook

port your logic to LightningModule.on_validation_epoch_end hook

PR16520

implemented LightningModule.test_epoch_end hooks

port your logic to LightningModule.on_test_epoch_end hook

PR16520

used Trainer’s flag multiple_trainloader_mode

switch to CombinedLoader(..., mode=...) and set mode directly now

PR16800

used Trainer’s flag move_metrics_to_cpu

implement particular offload logic in your custom metric or turn it on in torchmetrics

PR16358

used Trainer’s flag track_grad_norm

override the on_before_optimizer_step hook and compute and log the gradient norms there yourself; this replaces both track_grad_norm and the LightningModule.log_grad_norm() hook

PR16745

used Trainer’s flag replace_sampler_ddp

use use_distributed_sampler instead; the sampler now gets created for all strategies, not only DDP

relied on the on_tpu argument in LightningModule.optimizer_step hook

switch to manual optimization

PR16537 Manual Optimization

relied on the using_lbfgs argument in LightningModule.optimizer_step hook

switch to manual optimization

PR16538 Manual Optimization

were using nvidia/apex in any form

switch to PyTorch native mixed precision torch.amp instead

PR16039 Precision

used Trainer’s flag using_native_amp

use PyTorch native mixed precision

PR16039 Precision

used Trainer’s flag amp_backend

use PyTorch native mixed precision

PR16039 Precision

used Trainer’s flag amp_level

use PyTorch native mixed precision

PR16039 Precision

used Trainer’s attribute using_native_amp

use PyTorch native mixed precision

PR16039 Precision

used Trainer’s attribute amp_backend

use PyTorch native mixed precision

PR16039 Precision

used Trainer’s attribute amp_level

use PyTorch native mixed precision

PR16039 Precision

used the FairScale integration

consider using PyTorch’s native FSDP implementation or the implementation outsourced into its own project

lightning-Fairscale

used pl.overrides.fairscale.LightningShardedDataParallel

use native FSDP instead

PR16400 FSDP

used pl.plugins.precision.fully_sharded_native_amp.FullyShardedNativeMixedPrecisionPlugin

use native FSDP instead

PR16400 FSDP

used pl.plugins.precision.sharded_native_amp.ShardedNativeMixedPrecisionPlugin

use native FSDP instead

PR16400 FSDP

used pl.strategies.fully_sharded.DDPFullyShardedStrategy

use native FSDP instead

PR16400 FSDP

used pl.strategies.sharded.DDPShardedStrategy

use native FSDP instead

PR16400 FSDP

used pl.strategies.sharded_spawn.DDPSpawnShardedStrategy

use native FSDP instead

PR16400 FSDP

used save_config_overwrite parameters in LightningCLI

pass this option via the save_config_kwargs dictionary parameter

PR14998

used save_config_multifile parameters in LightningCLI

pass this option via the save_config_kwargs dictionary parameter

PR14998

have customized loops Loop.replace()

implement your training loop with Fabric.

PR14998 Fabric

have customized loops Loop.run()

implement your training loop with Fabric.

PR14998 Fabric

have customized loops Loop.connect()

implement your training loop with Fabric.

PR14998 Fabric

used the Trainer’s trainer.fit_loop property

implement your training loop with Fabric

PR14998 Fabric

used the Trainer’s trainer.validate_loop property

implement your training loop with Fabric

PR14998 Fabric

used the Trainer’s trainer.test_loop property

implement your training loop with Fabric

PR14998 Fabric

used the Trainer’s trainer.predict_loop property

implement your training loop with Fabric

PR14998 Fabric

used the Trainer.loop and fetching classes

they are now marked as protected

used opt_idx argument in BaseFinetuning.finetune_function

use manual optimization

PR16539

used opt_idx argument in Callback.on_before_optimizer_step

use manual optimization

PR16539 Manual Optimization

used optimizer_idx as an optional argument in LightningModule.training_step

use manual optimization

PR16539 Manual Optimization

used optimizer_idx argument in LightningModule.on_before_optimizer_step

use manual optimization

PR16539 Manual Optimization

used optimizer_idx argument in LightningModule.configure_gradient_clipping

use manual optimization

PR16539 Manual Optimization

used optimizer_idx argument in LightningModule.optimizer_step

use manual optimization

PR16539 Manual Optimization

used optimizer_idx argument in LightningModule.optimizer_zero_grad

use manual optimization

PR16539 Manual Optimization

used optimizer_idx argument in LightningModule.lr_scheduler_step

use manual optimization

PR16539 Manual Optimization

used declaring optimizer frequencies in the dictionary returned from LightningModule.configure_optimizers

use manual optimization

PR16539 Manual Optimization

used optimizer argument in LightningModule.backward

use manual optimization

PR16539 Manual Optimization

used optimizer_idx argument in LightningModule.backward

use manual optimization

PR16539 Manual Optimization

used optimizer_idx argument in PrecisionPlugin.optimizer_step

use manual optimization

PR16539 Manual Optimization

used optimizer_idx argument in PrecisionPlugin.backward

use manual optimization

PR16539 Manual Optimization

used optimizer_idx argument in Strategy.backward

use manual optimization

PR16539 Manual Optimization

used optimizer_idx argument in Strategy.optimizer_step

use manual optimization

PR16539 Manual Optimization

used Trainer’s Trainer.optimizer_frequencies attribute

use manual optimization

Manual Optimization

used PL_INTER_BATCH_PARALLELISM environment flag

it was removed

PR16355

used training integration with Horovod

install standalone package/project

lightning-Horovod

used training integration with ColossalAI

install standalone package/project

lightning-ColossalAI

used QuantizationAwareTraining callback

use Torch’s Quantization directly

PR16750

had any logic except reducing the DP outputs in LightningModule.training_step_end hook

port it to LightningModule.on_train_batch_end hook

PR16791

had any logic except reducing the DP outputs in LightningModule.validation_step_end hook

port it to LightningModule.on_validation_batch_end hook

PR16791

had any logic except reducing the DP outputs in LightningModule.test_step_end hook

port it to LightningModule.on_test_batch_end hook

PR16791

used pl.strategies.DDPSpawnStrategy

switch to general DDPStrategy(start_method='spawn') with proper starting method

PR16809

used the automatic addition of a moving average of the training_step loss in the progress bar

use self.log("loss", ..., prog_bar=True) instead.

PR16192

rely on the outputs argument from the on_predict_epoch_end hook

access them via trainer.predict_loop.predictions

PR16655

need to pass a dictionary to self.log()

pass them independently.

PR16389
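
The *_epoch_end removal above is usually handled by collecting outputs yourself, as in this minimal sketch; the attribute name training_step_outputs is just a convention, not an API.

```python
import torch
import pytorch_lightning as pl

class EpochEndModel(pl.LightningModule):
    """Hypothetical module showing the *_epoch_end -> on_*_epoch_end migration above."""
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(8, 1)
        self.training_step_outputs = []  # collect what the old hook used to receive

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = torch.nn.functional.mse_loss(self.layer(x), y)
        self.log("loss", loss, prog_bar=True)  # replaces the automatic running loss in the bar
        self.training_step_outputs.append(loss.detach())
        return loss

    def on_train_epoch_end(self):  # was: training_epoch_end(self, outputs)
        epoch_mean = torch.stack(self.training_step_outputs).mean()
        # dictionary values can no longer be passed to self.log(); log each key on its own
        self.log("epoch_mean", epoch_mean)
        self.training_step_outputs.clear()

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.1)
```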

Developer

devel 1.5

If

Then

Ref

called CheckpointConnector.hpc_load()

just call CheckpointConnector.restore()

PR7652

used TrainerModelHooksMixin

now rely on the corresponding utility functions in pytorch_lightning.utilities.signature_utils

PR7422

assigned the Trainer.train_loop property

now assign the equivalent Trainer.fit_loop property

PR8025

accessed LightningModule.loaded_optimizer_states_dict

the property has been removed

PR8229

devel 1.6

If

Then

Ref

called LightningLoggerBase.close

switch to LightningLoggerBase.finalize.

PR9422

called LoggerCollection.close

switch to LoggerCollection.finalize.

PR9422

used AcceleratorConnector.is_slurm_managing_tasks attribute

it is now set as protected and direct use is discouraged

PR10101

used AcceleratorConnector.configure_slurm_ddp attributes

it is now set as protected and direct use is discouraged

PR10101

used ClusterEnvironment.creates_children() method

change it to ClusterEnvironment.creates_processes_externally, which is now a property.

PR10106

called PrecisionPlugin.master_params()

update it to PrecisionPlugin.main_params()

PR10105

devel 1.7

If

Then

Ref

used the legacy Trainer.get_deprecated_arg_names()

it was removed

PR14415

used the generic method Trainer.run_stage

switch to a specific one depending on your purpose: Trainer.{fit,validate,test,predict}.

PR11000

used rank_zero_only from pl.utilities.distributed

import it from pl.utilities.rank_zero

PR11747

used rank_zero_debug from pl.utilities.distributed

import it from pl.utilities.rank_zero

PR11747

used rank_zero_info from pl.utilities.distributed

import it from pl.utilities.rank_zero

PR11747

used rank_zero_warn from pl.utilities.warnings

import it from pl.utilities.rank_zero

PR11747

used rank_zero_deprecation from pl.utilities.warnings

import it from pl.utilities.rank_zero

PR11747

used LightningDeprecationWarning from pl.utilities.warnings

import it from pl.utilities.rank_zero

PR11747

used Trainer.data_parallel_device_ids attribute

switch it to Trainer.device_ids

PR12072

derived it from TrainerCallbackHookMixin

use Trainer base class

PR14401

used the base class pytorch_lightning.profiler.BaseProfiler

switch to pytorch_lightning.profiler.Profiler instead

PR12150

set distributed backend via the environment variable PL_TORCH_DISTRIBUTED_BACKEND

use process_group_backend in the strategy constructor

PR11745

used PrecisionPlugin.on_load_checkpoint hooks

switch to PrecisionPlugin.load_state_dict

PR11978

used PrecisionPlugin.on_save_checkpoint hooks

switch to PrecisionPlugin.state_dict

PR11978

used Trainer.root_gpu attribute

use Trainer.strategy.root_device.index when GPU is used

PR12262

used Trainer.use_amp attribute

rely on Torch native AMP

PR12312

used LightingModule.use_amp attribute

rely on Torch native AMP

PR12315

used Trainer’s attribute Trainer.verbose_evaluate

rely on loop constructor EvaluationLoop(verbose=...)

PR10931

used Trainer’s attribute Trainer.should_rank_save_checkpoint

it was removed

PR11068

derived from TrainerOptimizersMixin

rely on core/optimizer.py

PR11155

derived from TrainerDataLoadingMixin

rely on methods from Trainer and DataConnector

PR11282

used Trainer’s attribute Trainer.lightning_optimizers

switch to the Strategy and its attributes.

PR11444

used Trainer.call_hook

it was set as a protected method Trainer._call_callback_hooks, Trainer._call_lightning_module_hook, Trainer._call_ttp_hook, Trainer._call_accelerator_hook and shall not be used.

PR10979

used Profiler’s attribute SimpleProfiler.profile_iterable

it was removed

PR12102

used Profiler’s attribute AdvancedProfiler.profile_iterable

it was removed

PR12102

used the device_stats_monitor.prefix_metric_keys

it was removed

PR11254

used on_train_batch_end(outputs, ...) with 2d list with sizes (n_optimizers, tbptt_steps)

change it to (tbptt_steps, n_optimizers). You can update your code by adding the following parameter to your hook signature: on_train_batch_end(outputs, ..., new_format=True).

PR12182

used training_epoch_end(outputs) with a 3d list with sizes (n_optimizers, n_batches, tbptt_steps)

change it to (n_batches, tbptt_steps, n_optimizers). You can update your code by adding the following parameter to your hook signature: training_epoch_end(outputs, new_format=True).

PR12182
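
For example, the rank-zero import moves and the PL_TORCH_DISTRIBUTED_BACKEND replacement above look roughly like this; the gloo backend is an arbitrary example choice.

```python
from pytorch_lightning.strategies import DDPStrategy
from pytorch_lightning.utilities.rank_zero import rank_zero_info  # was: pl.utilities.distributed / pl.utilities.warnings

# was: PL_TORCH_DISTRIBUTED_BACKEND=gloo python train.py
strategy = DDPStrategy(process_group_backend="gloo")
rank_zero_info("configured the gloo process group backend")
```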

devel 1.8

If

Then

Ref

derived from pytorch_lightning.loggers.base.LightningLoggerBase

derive from pytorch_lightning.loggers.logger.Logger

PR12014

derived from pytorch_lightning.profiler.base.BaseProfiler

derive from pytorch_lightning.profilers.profiler.Profiler

PR12150

derived from pytorch_lightning.profiler.base.AbstractProfiler

derive from pytorch_lightning.profilers.profiler.Profiler

PR12106

devel 1.9

If

Then

Ref

passed the pl_module argument to distributed module wrappers

pass the (required) forward_module argument instead

PR16386

used DataParallel and the LightningParallelModule wrapper

use DDP or DeepSpeed instead

PR16748 DDP

used pl_module argument from the distributed module wrappers

use DDP or DeepSpeed instead

PR16386 DDP

called pl.overrides.base.unwrap_lightning_module function

use DDP or DeepSpeed instead

PR16386 DDP

used or derived from pl.overrides.distributed.LightningDistributedModule class

use DDP instead

PR16386 DDP

used the pl.plugins.ApexMixedPrecisionPlugin plugin

use PyTorch native mixed precision

PR16039

used the pl.plugins.NativeMixedPrecisionPlugin plugin

switch to the pl.plugins.MixedPrecisionPlugin plugin

PR16039

used the fit_loop.min_steps setters

implement your training loop with Fabric

PR16803

used the fit_loop.max_steps setters

implement your training loop with Fabric

PR16803

used the data_parallel attribute in Trainer

check the same using isinstance(trainer.strategy, ParallelStrategy)

PR16703

used any function from pl.utilities.xla_device

switch to pl.accelerators.TPUAccelerator.is_available()

PR14514 PR14550

imported functions from pl.utilities.device_parser.*

import them from lightning_fabric.utilities.device_parser.*

PR14492 PR14753

imported functions from pl.utilities.cloud_io.*

import them from lightning_fabric.utilities.cloud_io.*

PR14515

imported functions from pl.utilities.apply_func.*

import them from lightning_utilities.core.apply_func.*

PR14516 PR14537

used any code from pl.core.mixins

use the base classes

PR16424

used any code from pl.utilities.distributed

rely on PyTorch’s native functions

PR16390

used any code from pl.utilities.data

it was removed

PR16440

used any code from pl.utilities.optimizer

it was removed

PR16439

used any code from pl.utilities.seed

it was removed

PR16422

were using truncated backpropagation through time (TBPTT) with LightningModule.truncated_bptt_steps

use manual optimization

PR16172 Manual Optimization

were using truncated backpropagation through time (TBPTT) with LightningModule.tbptt_split_batch

use manual optimization

PR16172 Manual Optimization

were using truncated backpropagation through time (TBPTT) and passing hidden to LightningModule.training_step

use manual optimization

PR16172 Manual Optimization

used pl.utilities.finite_checks.print_nan_gradients function

it was removed

used pl.utilities.finite_checks.detect_nan_parameters function

it was removed

used pl.utilities.parsing.flatten_dict function

it was removed

used pl.utilities.metrics.metrics_to_scalars function

it was removed

used pl.utilities.memory.get_model_size_mb function

it was removed

used pl.strategies.utils.on_colab_kaggle function

it was removed

PR16437

used LightningDataModule.add_argparse_args() method

switch to using LightningCLI

PR16708

used LightningDataModule.parse_argparser() method

switch to using LightningCLI

PR16708

used LightningDataModule.from_argparse_args() method

switch to using LightningCLI

PR16708

used LightningDataModule.get_init_arguments_and_types() method

switch to using LightningCLI

PR16708

used Trainer.default_attributes() method

switch to using LightningCLI

PR16708

used Trainer.from_argparse_args() method

switch to using LightningCLI

PR16708

used Trainer.parse_argparser() method

switch to using LightningCLI

PR16708

used Trainer.match_env_arguments() method

switch to using LightningCLI

PR16708

used Trainer.add_argparse_args() method

switch to using LightningCLI

PR16708

used pl.utilities.argparse.from_argparse_args() function

switch to using LightningCLI

PR16708

used pl.utilities.argparse.parse_argparser() function

switch to using LightningCLI

PR16708

used pl.utilities.argparse.parse_env_variables() function

switch to using LightningCLI

PR16708

used get_init_arguments_and_types() function

switch to using LightningCLI

PR16708

used pl.utilities.argparse.add_argparse_args() function

switch to using LightningCLI

PR16708

used pl.utilities.parsing.str_to_bool() function

switch to using LightningCLI

PR16708

used pl.utilities.parsing.str_to_bool_or_int() function

switch to using LightningCLI

PR16708

used pl.utilities.parsing.str_to_bool_or_str() function

switch to using LightningCLI

PR16708

derived from pl.utilities.distributed.AllGatherGrad class

switch to PyTorch native equivalent

PR15364

used PL_RECONCILE_PROCESS=1 env. variable

customize your logger

PR16204

derived from the mixin method pl.core.saving.ModelIO.load_from_checkpoint

rely on pl.core.module.LightningModule

PR16999

used Accelerator.setup_environment method

switch to Accelerator.setup_device

PR16436

used PL_FAULT_TOLERANT_TRAINING env. variable

implement own logic with Fabric

PR16516 PR16533

used or derived from public pl.overrides.distributed.IndexBatchSamplerWrapper class

it is set as protected

PR16826

used the DataLoaderLoop class

use manual optimization

PR16726 Manual Optimization

used the EvaluationEpochLoop class

use manual optimization

PR16726 Manual Optimization

used the PredictionEpochLoop class

use manual optimization

PR16726 Manual Optimization

used trainer.reset_*_dataloader() methods

use Loop.setup_data() for the top-level loops

PR16726

used LightningModule.precision attribute

rely on Trainer precision attribute

PR16203

used Trainer.model setter

pass the model to the fit/test/predict methods instead

PR16462

relied on pl.utilities.supporters.CombinedLoaderIterator class

pass dataloaders directly

PR16714

accessed ProgressBarBase.train_batch_idx property

rely on Trainer internal loops’ properties

PR16760

accessed ProgressBarBase.val_batch_idx property

rely on Trainer internal loops’ properties

PR16760

accessed ProgressBarBase.test_batch_idx property

rely on Trainer internal loops’ properties

PR16760

accessed ProgressBarBase.predict_batch_idx property

rely on Trainer internal loops’ properties

PR16760

used Trainer.prediction_writer_callbacks property

rely on precision plugin

PR16759

used PrecisionPlugin.dispatch

it was removed

PR16618

used Strategy.dispatch

it was removed

PR16618
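
Many of the removed argparse helpers above are covered by LightningCLI; a minimal, self-contained sketch (CLIModel is a placeholder, and data would be supplied through the CLI config):

```python
import torch
import pytorch_lightning as pl
from pytorch_lightning.cli import LightningCLI  # was: pl.utilities.cli

class CLIModel(pl.LightningModule):
    """Tiny placeholder model so this CLI sketch is self-contained."""
    def __init__(self, hidden: int = 8):
        super().__init__()
        self.layer = torch.nn.Linear(hidden, 1)

    def training_step(self, batch, batch_idx):
        x, y = batch
        return torch.nn.functional.mse_loss(self.layer(x), y)

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.1)

if __name__ == "__main__":
    # Replaces Trainer.add_argparse_args / Trainer.from_argparse_args and the
    # pl.utilities.argparse helpers listed above; model and trainer arguments become
    # CLI options, e.g. `python train.py fit --model.hidden=16 ...`, with a datamodule
    # or dataloaders supplied via the CLI configuration.
    LightningCLI(CLIModel)
```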

