# Upgrade from 1.4 to 2.0

## Regular User

| If | Then | Ref | 
|---|---|---|
| relied on the  | rely on either  | #7339 | 
| accessed  | switch to manual optimization | #7323 | 
| called   | rely on  | #7066 | 
| passed the  | pass the  | #6146 | 
| passed the  | now pass  | #6621 | 
| passed the  | now pass the   | #6349 | 
| relied on the  | use  | #6993 | 
| implemented  | now update the signature to include  | #7253 | 
| relied on  | now import separate package  | |
| accessed  | now access  | #7168 | 

| If | Then | Ref | 
|---|---|---|
| used  | use  | #7431 | 
| used  | use   | #7431 | 
| passed  | remove them since these parameters are now passed from the  | #7026 | 
| passed  | remove them since these parameters are now passed from the  | #7026 | 
| didn’t provide a  | pass   | #7907 | 
| used  | change the argument to  | #8383 | 
| used Trainer’s flag  | use pass  | #5043 | 
| used Trainer’s flag  | use  | #8575 | 

| If | Then | Ref | 
|---|---|---|
| used Trainer’s flag  | set  | #9175 | 
| used Trainer’s flag  | pass a  | #9699 | 
| used Trainer’s flag  | set  | #9754 | 
| used Trainer’s flag  | add the  | #8989 | 
| used Trainer’s flag  | pass it to the logger init if it is supported by the particular logger | #9366 | 
| used Trainer’s flag  | turn off the limit by passing  | #9460 | 
| used Trainer’s flag  | pass the same path to the fit function instead,  | #9693 | 
| used Trainer’s flag  | use the  | #9921 | 
| used Trainer’s flag  | set the  | #9616 | 
| called  | use the utility function  | #8513 | 
| used the  | use the utility function  | #8495 | 
| relied on the  | use  | #9098 | 
| relied on the  | use  | #9098 | 
| relied on the  | use  | #9098 | 
| relied on the  | use  | #9098 | 
| implemented the  | implement the  | #9260 | 
| relied on the  | use another logger like  | #9065 | 
| used the basic progress bar  | use the  | #10134 | 
| were using  | use  | #9924 | 
| were using  | use  | #9924 | 

| If | Then | Ref | 
|---|---|---|
| have wrapped your loggers with  | directly pass a list of loggers to the Trainer and access the list via the  (see the sketch after this table) | #12147 | 
| used  | access  | #11443 | 
| used  | upgrade to the latest API | #14727 | 
| used   | use   | #11887 | 
| used   | use   | #11887 | 
| used   | switch to general purpose hook  | #14315 | 
| used   | switch to general purpose hook  | #14315 | 
| used Trainer’s flag  | use directly  | #14424 | 
| used Trainer’s property  | | #14424 | 
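
For the multi-logger row above (#12147), the following is a minimal sketch of the 2.0-style setup. The property name `Trainer.loggers` and the two logger classes are assumptions for illustration; the table does not spell out the exact identifiers.

```python
import pytorch_lightning as pl
from pytorch_lightning.loggers import CSVLogger, TensorBoardLogger

# A plain list of loggers is passed straight to the Trainer (no wrapper object).
trainer = pl.Trainer(logger=[TensorBoardLogger("logs/"), CSVLogger("logs/")])

# The individual loggers are then available as a list on the Trainer
# (assumed here to be the `trainer.loggers` property).
for logger in trainer.loggers:
    print(type(logger).__name__)
```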

| If | Then | Ref | 
|---|---|---|
| used  | set  | #12804 | 
| used  | call  | #12184 | 
| imported  | import  | #12740 | 

| If | Then | Ref | 
|---|---|---|
| used Python 3.7 | upgrade to Python 3.8 or higher | #16579 | 
| used PyTorch 1.10 | upgrade to PyTorch 1.11 or higher | #16492 | 
| used Trainer’s flag  | use  | #16171 | 
| used Trainer’s flag  | use  | #16171 | 
| used Trainer’s flag  | use  | #16171 | 
| used Trainer’s flag  | use  | #16171 | 
| used Trainer’s flag  | pass the path to the  | #10061 | 
| used Trainer’s flag  | use  | #16184 | 
| called the  | use Trainer’s flag `devices="auto"` | #16184 | 
| called the  | use Trainer’s flag `devices="auto"` (see the sketch after this table) | #16184 | 
| used Trainer’s flag   | use the   | #16729 | 
| imported profiles from  | import from  | #16359 | 
| used  | move to a standalone  | https://lightning.ai/docs/pytorch/latest/advanced/training_tricks.html#batch-size-finder | 
| used Trainer’s flag  | use  | |
| used Trainer’s flag  | use callbacks  | | 
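
For the `devices="auto"` rows above (#16184), a minimal sketch of the Trainer flag; the `accelerator="auto"` argument shown alongside it is only an illustrative default and is not part of the table.

```python
import pytorch_lightning as pl

# Let Lightning select the device count automatically via the `devices` flag.
trainer = pl.Trainer(accelerator="auto", devices="auto")
```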

## Advanced User


| If | Then | Ref | 
|---|---|---|
| called  | now call  | #7201 | 
| accessed the  | now  access the  | #4945 | 
| used  | now use the  | #7292 | 
| used  | now use  | #6834 | 
| used  | now use  | #6834 | 
| relied on  | now use  | #7180 | 
| selected the i-th GPU with  | now this will set the number of GPUs, just like passing  | #6388 | 

| If | Then | Ref | 
|---|---|---|
| used  | use  | #7891 | 
| used the argument  | use  | #7918 | 
| returned values from  | call  | #7994 | 
| imported  | import  | |
| relied on  | manage data lifecycle in custom methods | #7657 | 
| relied on  | manage data lifecycle in custom methods | #7657 | 
| relied on  | manage data lifecycle in custom methods | #7657 | 
| relied on  | manage data lifecycle in custom methods | #7657 | 
| relied on  | manage data lifecycle in custom methods | #7657 | 
| relied on  | manage data lifecycle in custom methods | #7657 | 
| relied on  | manage data lifecycle in custom methods | #7657 | 
| relied on  | manage data lifecycle in custom methods | #7657 | 
| relied on  | manage data lifecycle in custom methods | #7657 | 
| used  | use  | #8203 | 
| used  | use the condition  | #8291 | 

| If | Then | Ref | 
|---|---|---|
| passed `prepare_data_per_node` to the Trainer | set it as a property of `DataHooks`, accessible in the `LightningModule` and `LightningDataModule` instead (see the sketch after this table) | #8958 | 
| used `process_position` flag | specify your `ProgressBar` callback and set it as `process_position` directly | #9222 | 
| used distributed training attributes `add_to_queue` and `get_from_queue` in `LightningModule` | use the same methods in `DDPStrategy(start_method="spawn")` | #9118 | 
| called `LightningModule.get_progress_bar_dict` | use the utility function `pl.callbacks.progress.base.get_standard_metrics(module.trainer)` | #9118 | 
| used `LightningModule.on_post_move_to_device` | remove it, as parameter tying happens automatically without the need to implement your own logic | #9525 | 
| relied on `Trainer.progress_bar_dict` | use `ProgressBarBase.get_metrics` | #9118 | 
| used `LightningDistributed` | rely on the logic in `DDPStrategy(start_method=…)` | #9691 | 
| used the Accelerator collective API `Accelerator.barrier`, `Accelerator.broadcast`, and `Accelerator.all_gather` | call the Strategy collectives API directly, without going through Accelerator | #9677 | 
| used `pytorch_lightning.core.decorators.parameter_validation` | rely on automatic parameter tying with `pytorch_lightning.utilities.params_tying.set_shared_parameters` | #9525 | 
| used `LearningRateMonitor.lr_sch_names` | access them using `LearningRateMonitor.lrs.keys()`, which will return the names of all the optimizers, even those without a scheduler | #10066 | 
| implemented DataModule `train_transforms`, `val_transforms`, `test_transforms`, `size`, `dims` | switch to `LightningDataModule` | #8851 | 
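
A minimal sketch covering the `prepare_data_per_node` (#8958) and `process_position` (#9222) rows above; `TQDMProgressBar` is used purely as an example progress-bar callback.

```python
import pytorch_lightning as pl
from pytorch_lightning.callbacks import TQDMProgressBar


class MyDataModule(pl.LightningDataModule):
    def __init__(self):
        super().__init__()
        # formerly Trainer(prepare_data_per_node=...); now a DataHooks property
        # set on the LightningDataModule (or LightningModule) itself
        self.prepare_data_per_node = True


# formerly Trainer(process_position=...); now configured on the progress-bar
# callback directly (TQDMProgressBar shown here as one possible choice)
trainer = pl.Trainer(callbacks=[TQDMProgressBar(process_position=1)])
```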

| If | Then | Ref | 
|---|---|---|
| used  | switch to  | #14026 | 
| used   | now use  | #11141 | 
| used any   | rename them to   | #11120 | 
| used  | rely on protected  | #10505 | 
| used  | rely on protected   | #10503 | 
| used  | switch to built-in https://github.com/pytorch/torchdistx support | #13868 | 
| have implemented  | move your implementation to  | #14834 | 
| have implemented the  | move your implementation to  | #14834 | 
| have implemented the  | move your implementation to  | #14834 | 
| have implemented the  | move your implementation to  | #14834 | 
| have implemented the  | move your implementation  to  | #14834 | 
| have implemented the  | move your implementation to  | #14834 | 
| used  | use  | #10940 | 
| used  | use  | #10940 | 
| used Trainer’s attribute  | it was replaced by   | #12388 | 
| used Trainer’s attribute  | it was replaced by   | #12436 | 
| used Trainer’s attribute  | use  | #12384 | 
| used Trainer’s attribute  | use   | #12386 | 
| used Trainer’s attribute  | use  | #12437 | 
| used   | switch to using  | #12388 | 
| used  | it was removed | #14830 | 
| logged with  | switch to  | #11832 | 
| used   | log metrics explicitly | #11871 | 
| used   | log metrics explicitly | #11871 | 
| used   | rely on generic read-only property  | #11696 | 
| used   | rely on generic read-only property  | #11696 | 
| used   | rely on generic read-only property  | #11696 | 
| relied on the returned dictionary from  | call directly  | #11887 | 

| If | Then | Ref | 
|---|---|---|
| imported  | import  | #13031 | 
| imported  | import  | #13043 | 
| imported  | import   | #13767 | 
| imported profiler classes from  | import  | #12308 | 
| used  | use  | #13636 | 
| used  | use  | |
| used the  | switch to  | |
| used the Lightning Hydra multi-run integration | removed support for it as it caused issues with processes hanging | #15689 | 
| used  | use   | #9921 | 

| If | Then | Ref | 
|---|---|---|
| used the  | switch to  | #15953 | 
| used Trainer’s flag  | use DDP with  | #16748 | 
| implemented  | port your logic to   | #16520 | 
| implemented  | port your logic to   | #16520 | 
| implemented  | port your logic to   | #16520 | 
| used Trainer’s flag  | switch to   | #16800 | 
| used Trainer’s flag  | implement particular offload logic in your custom metric or turn it on in  | #16358 | 
| used Trainer’s flag  | overwrite  | #16745 | 
| used Trainer’s flag  | use   | |
| relied on the  | switch to manual optimization | #16537 Manual Optimization | 
| relied on the  | switch to manual optimization | #16538 Manual Optimization | 
| were using  | switch to PyTorch native mixed precision (see the sketch after this table) | #16039 Precision | 
| used Trainer’s flag  | use PyTorch native mixed precision | #16039 Precision | 
| used Trainer’s flag  | use PyTorch native mixed precision | #16039 Precision | 
| used Trainer’s flag  | use PyTorch native mixed precision | #16039 Precision | 
| used Trainer’s attribute  | use PyTorch native mixed precision | #16039 Precision | 
| used Trainer’s attribute  | use PyTorch native mixed precision | #16039 Precision | 
| used Trainer’s attribute  | use PyTorch native mixed precision | #16039 Precision | 
| used the  | consider using PyTorch’s native FSDP implementation or outsource the implementation into your own project | | 
| used  | use native FSDP instead | #16400 FSDP | 
| used  | use native FSDP instead | #16400 FSDP | 
| used  | use native FSDP instead | #16400 FSDP | 
| used  | use native FSDP instead | #16400 FSDP | 
| used  | use native FSDP instead | #16400 FSDP | 
| used  | use native FSDP instead | #16400 FSDP | 
| used  | pass this option and via dictionary of  | #14998 | 
| used  | pass this option and via dictionary of  | #14998 | 
| have customized loops  | implement your training loop with Fabric. | #14998 Fabric | 
| have customized loops  | implement your training loop with Fabric. | #14998 Fabric | 
| have customized loops  | implement your training loop with Fabric. | #14998 Fabric | 
| used the Trainer’s  | implement your training loop with Fabric | #14998 Fabric | 
| used the Trainer’s  | implement your training loop with Fabric | #14998 Fabric | 
| used the Trainer’s  | implement your training loop with Fabric | #14998 Fabric | 
| used the Trainer’s  | implement your training loop with Fabric | #14998 Fabric | 
| used the  | it is being marked as protected | | 
| used  | use manual optimization | #16539 | 
| used  | use manual optimization | #16539 Manual Optimization | 
| used  | use manual optimization | #16539 Manual Optimization | 
| used  | use manual optimization | #16539 Manual Optimization | 
| used  | use manual optimization | #16539 Manual Optimization | 
| used  | use manual optimization | #16539 Manual Optimization | 
| used  | use manual optimization | #16539 Manual Optimization | 
| used  | use manual optimization | #16539 Manual Optimization | 
| declared optimizer frequencies in the dictionary returned from  | use manual optimization | #16539 Manual Optimization | 
| used  | use manual optimization | #16539 Manual Optimization | 
| used  | use manual optimization | #16539 Manual Optimization | 
| used  | use manual optimization | #16539 Manual Optimization | 
| used  | use manual optimization | #16539 Manual Optimization | 
| used  | use manual optimization | #16539 Manual Optimization | 
| used  | use manual optimization | #16539 Manual Optimization | 
| used  | use manual optimization | #16539 Manual Optimization | 
| used Trainer’s  | use manual optimization | |
| used  | | #16355 | 
| used training integration with Horovod | install standalone package/project | |
| used training integration with ColossalAI | install standalone package/project | https://lightning.ai/docs/pytorch/latest/advanced/third_party/colossalai.html | 
| used  | use Torch’s Quantization directly | #16750 | 
| had any logic except reducing the DP outputs in   | port it to  | #16791 | 
| had any logic except reducing the DP outputs in   | port it to  | #16791 | 
| had any logic except reducing the DP outputs in   | port it to  | #16791 | 
| used  | switch to general   | #16809 | 
| used the automatic addition of a moving average of the  | use  | #16192 | 
| relied on the  | access them via  | #16655 | 
| needed to pass a dictionary to  | pass them independently | #16389 | 
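
For the mixed-precision rows above (#16039), PyTorch-native automatic mixed precision is selected through the Trainer's `precision` argument in 2.0. A minimal sketch, assuming the 2.0 precision identifiers `"16-mixed"` and `"bf16-mixed"` (the table does not spell out the replacement values):

```python
import pytorch_lightning as pl

# PyTorch-native float16 AMP (requires a CUDA device)
trainer = pl.Trainer(accelerator="gpu", devices=1, precision="16-mixed")

# or bfloat16 AMP on hardware that supports it:
# trainer = pl.Trainer(precision="bf16-mixed")
```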

## Developer


| If | Then | Ref | 
|---|---|---|
| called  | just call  | #7652 | 
| used  | now rely on the corresponding utility functions in  | #7422 | 
| assigned the  | now assign the equivalent  | #8025 | 
| accessed  | the property has been removed | #8229 | 

| If | Then | Ref | 
|---|---|---|
| called  | switch to  | #9422 | 
| called  | switch to  | #9422 | 
| used  | it is now set as protected and discouraged from direct use | #10101 | 
| used  | it is now set as protected and discouraged from direct use | #10101 | 
| used  | change it to  | #10106 | 
| called  | update it   | #10105 | 

| If | Then | Ref | 
|---|---|---|
| Removed the legacy  | | #14415 | 
| used the generic method  | switch to a specific one depending on your purpose  | #11000 | 
| used  | import it from  | #11747 | 
| used  | import it from  | #11747 | 
| used  | import it from  | #11747 | 
| used  | import it from  | #11747 | 
| used  | import it from  | #11747 | 
| used  | import it from  | #11747 | 
| used  | import it from  | #11747 | 
| used  | switch it to  | #12072 | 
| derived it from  | use Trainer base class | #14401 | 
| used base class  | switch to use  | #12150 | 
| set distributed backend via the environment variable  | use  | #11745 | 
| used  | switch to   | #11978 | 
| used  | switch to   | #11978 | 
| used  | use  | #12262 | 
| used  | rely on Torch native AMP | #12312 | 
| used  | rely on Torch native AMP | #12315 | 
| used Trainer’s attribute  | rely on loop constructor   | #10931 | 
| used Trainer’s attribute  | it was removed | #11068 | 
| derived from  | rely on  | #11155 | 
| derived from  | rely on methods from  | #11282 | 
| used Trainer’s attribute  | switch to the  | #11444 | 
| used  | it was set as a protected method  | #10979 | 
| used Profiler’s attribute   | it was removed | #12102 | 
| used Profiler’s attribute   | it was removed | #12102 | 
| used the  | | #11254 | 
| used  | change it to (tbptt_steps, n_optimizers). You can update your code by adding the following parameter to your hook signature:  | #12182 | 
| used  | change it to (n_batches, tbptt_steps, n_optimizers). You can update your code by adding the following parameter to your hook signature:  | #12182 | 

| If | Then | Ref | 
|---|---|---|
| derived from  | derive from  | #12014 | 
| derived from  | derive from  | #12150 | 
| derived from  | derive from  | #12106 | 

| If | Then | Ref | 
|---|---|---|
| passed the  | pass the (required)  | #16386 | 
| used  | use DDP or DeepSpeed instead | #16748 DDP | 
| used  | use DDP or DeepSpeed instead | #16386 DDP | 
| called  | use DDP or DeepSpeed instead | #16386 DDP | 
| used or derived from  | use DDP instead | #16386 DDP | 
| used the `pl.plugins.ApexMixedPrecisionPlugin` plugin | use PyTorch native mixed precision | #16039 | 
| used the  | switch to the  | #16039 | 
| used the  | implement your training loop with Fabric | #16803 | 
| used the  | implement your training loop with Fabric | #16803 | 
| used the  | check the same using  | #16703 | 
| used any function from  | switch to  | #14514 #14550 | 
| imported functions from   | import them from  | #14492 #14753 | 
| imported functions from  | import them from  | #14515 | 
| imported functions from  | import them from  | #14516 #14537 | 
| used any code from  | use the base classes | #16424 | 
| used any code from  | rely on PyTorch’s native functions | #16390 | 
| used any code from  | it was removed | #16440 | 
| used any code from  | it was removed | #16439 | 
| used any code from  | it was removed | #16422 | 
| were using truncated backpropagation through time (TBPTT) with  | use manual optimization (see the sketch after this table) | #16172 Manual Optimization | 
| were using truncated backpropagation through time (TBPTT) with  | use manual optimization | #16172 Manual Optimization | 
| were using truncated backpropagation through time (TBPTT) and passing  | use manual optimization | #16172 Manual Optimization | 
| used  | it was removed | |
| used  | it was removed | |
| used  | it was removed | |
| used  | it was removed | |
| used  | it was removed | |
| used  | it was removed | #16437 | 
| used  | switch to using  | #16708 | 
| used  | switch to using  | #16708 | 
| used  | switch to using  | #16708 | 
| used  | switch to using  | #16708 | 
| used  | switch to using  | #16708 | 
| used  | switch to using  | #16708 | 
| used  | switch to using  | #16708 | 
| used  | switch to using  | #16708 | 
| used  | switch to using  | #16708 | 
| used  | switch to using  | #16708 | 
| used  | switch to using  | #16708 | 
| used  | switch to using  | #16708 | 
| used  | switch to using  | #16708 | 
| used  | switch to using  | #16708 | 
| used  | switch to using  | #16708 | 
| used  | switch to using  | #16708 | 
| used  | switch to using  | #16708 | 
| derived from  | switch to PyTorch native equivalent | #15364 | 
| used  | customize your logger | #16204 | 
| derived from mixin’s method  | rely on  | #16999 | 
| used   | switch to  | #16436 | 
| used  | implement own logic with Fabric | #16516 #16533 | 
| used or derived from public  | it is set as protected | #16826 | 
| used the  | use manual optimization | #16726 Manual Optimization | 
| used the  | use manual optimization | #16726 Manual Optimization | 
| used the  | use manual optimization | #16726 Manual Optimization | 
| used  | use   | #16726 | 
| used  | rely on Trainer precision attribute | #16203 | 
| used  | pass the  | #16462 | 
| relied on  | pass dataloaders directly | #16714 | 
| relied on  | pass dataloaders directly | #16714 | 
| accessed  | rely on Trainer internal loops’ properties | #16760 | 
| accessed  | rely on Trainer internal loops’ properties | #16760 | 
| accessed  | rely on Trainer internal loops’ properties | #16760 | 
| accessed  | rely on Trainer internal loops’ properties | #16760 | 
| used  | rely on precision plugin | #16759 | 
| used  | it was removed | #16618 | 
| used  | it was removed | #16618 |
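
Many of the removals above point to manual optimization, e.g. the TBPTT rows (#16172) and the optimizer-related rows (#16726). The following is a minimal sketch of a manually optimized `LightningModule` using the standard manual-optimization calls (`automatic_optimization`, `self.optimizers()`, `self.manual_backward()`); any former TBPTT chunking would be reimplemented inside `training_step`.

```python
import torch
import pytorch_lightning as pl


class ManualOptModule(pl.LightningModule):
    def __init__(self):
        super().__init__()
        # opt out of automatic optimization
        self.automatic_optimization = False
        self.layer = torch.nn.Linear(32, 1)

    def training_step(self, batch, batch_idx):
        opt = self.optimizers()      # optimizer configured below
        x, y = batch
        loss = torch.nn.functional.mse_loss(self.layer(x), y)
        opt.zero_grad()
        self.manual_backward(loss)   # replaces the automatic loss.backward()
        opt.step()
        # a former TBPTT loop would go here: split `batch` into time chunks
        # and repeat the zero_grad / manual_backward / step cycle per chunk
        return loss

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.1)
```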