{"id":1376,"date":"2022-03-29T23:08:45","date_gmt":"2022-03-29T23:08:45","guid":{"rendered":"https:\/\/www.grid.ai\/?p=1376"},"modified":"2022-11-01T15:26:39","modified_gmt":"2022-11-01T19:26:39","slug":"pytorch-lightning-v1-6","status":"publish","type":"post","link":"https:\/\/lightning.ai\/pages\/community\/lightning-releases\/pytorch-lightning-v1-6\/","title":{"rendered":"Lightning 1.6: Habana Accelerator, Bagua Distributed, Fault-Tolerance Improvements"},"content":{"rendered":"<h3>PyTorch Lightning 1.6 Now Available<\/h3>\n<p>The PyTorch Lightning team released version 1.6 with support for Intel&#8217;s Habana Accelerator, new efficient DDP strategy (Bagua), manual Fault-tolerance, and other stability and reliability changes.<\/p>\n<p>\u26a1Visit the <a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/releases\/tag\/1.6.0\"><u>release page on GitHub<\/u><\/a> to download.\u26a1<\/p>\n<ul>\n<li>Lightning Highlights<\/li>\n<li>New Hooks<\/li>\n<li>New Properties<\/li>\n<li>Experimental Features<\/li>\n<li>Backward Incompatible Changes<\/li>\n<li>Full Lightning Changelog<\/li>\n<\/ul>\n<h3>Lightning Highlights<\/h3>\n<p>PyTorch Lightning 1.6 is the work of 99 contributors who have worked on features, bug-fixes, and documentation for a total of over 750 commits since 1.5. Here are some highlights:<\/p>\n<h4>Introducing Intel&#8217;s Habana Accelerator<\/h4>\n<p>Lightning 1.6 now supports the Habana\u00ae framework, which includes Gaudi\u00ae AI training processors. 
Their heterogeneous architecture includes a cluster of fully programmable Tensor Processing Cores (TPC) along with its associated development tools and libraries and a configurable Matrix Math engine.<\/p>\n<p>You can leverage the\u00a0<a href=\"https:\/\/pytorch-lightning.readthedocs.io\/en\/stable\/accelerators\/hpu.html\" rel=\"nofollow\">Habana<\/a>\u00a0hardware to accelerate your Deep Learning training workloads simply by passing:<\/p>\n<pre>trainer = pl.Trainer(accelerator=\"hpu\")\r\n\r\n# single Gaudi training\r\ntrainer = pl.Trainer(accelerator=\"hpu\", devices=1)\r\n\r\n# distributed training with 8 Gaudi\r\ntrainer = pl.Trainer(accelerator=\"hpu\", devices=8)<\/pre>\n<h4>The Bagua Strategy<\/h4>\n<p>The\u00a0<a href=\"https:\/\/pytorch-lightning.readthedocs.io\/en\/stable\/accelerators\/gpu.html#bagua\" rel=\"nofollow\">Bagua Strategy<\/a>\u00a0is a deep learning acceleration framework that supports multiple advanced distributed training algorithms with state-of-the-art system relaxation techniques. Enabling\u00a0<a href=\"https:\/\/github.com\/BaguaSys\/bagua\">Bagua<\/a>, which can be considerably faster than vanilla PyTorch DDP, is as simple as:<\/p>\n<pre>trainer = pl.Trainer(strategy=\"bagua\")\r\n\r\n# or to choose a custom algorithm\r\ntrainer = pl.Trainer(strategy=BaguaStrategy(algorithm=\"gradient_allreduce\"))  # default<\/pre>\n<h4>Towards stable Accelerator, Strategy, and Plugin APIs<\/h4>\n<p>The\u00a0<code>Accelerator<\/code>,\u00a0<code>Strategy<\/code>, and\u00a0<code>Plugin<\/code>\u00a0APIs are a core part of PyTorch Lightning. They&#8217;re where all the distributed boilerplate lives, and we&#8217;re constantly working to improve both them and the overall PyTorch Lightning platform experience.<\/p>\n<p>In this release, we&#8217;ve made some large changes to achieve that goal. Not to worry, though! 
The only users affected by these changes are those who use custom implementations of Accelerator and Strategy (<code>TrainingTypePlugin<\/code>) as well as certain Plugins. In particular, we want to highlight the following changes:<\/p>\n<ul>\n<li>All\u00a0<code>TrainingTypePlugin<\/code>s have been renamed to\u00a0<code>Strategy<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11120\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11120\/hovercard\">#11120<\/a>). Strategy is a more appropriate name because it encompasses more than simply training communication. This change is now aligned with the changes we implemented in 1.5, which introduced the new\u00a0<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/releases\/tag\/1.5.0#strategy-and-devices\"><code>strategy<\/code>\u00a0and\u00a0<code>devices<\/code>\u00a0flags to the Trainer<\/a>.\n<pre># Before\r\nfrom pytorch_lightning.plugins import DDPPlugin\r\n\r\n# New\r\nfrom pytorch_lightning.strategies import DDPStrategy<\/pre>\n<\/li>\n<li>The\u00a0<code>Accelerator<\/code>\u00a0and\u00a0<code>PrecisionPlugin<\/code>\u00a0have moved into\u00a0<code>Strategy<\/code>. 
All strategies now accept optional\u00a0<code>accelerator<\/code>\u00a0and\u00a0<code>precision_plugin<\/code>\u00a0parameters (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11022\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11022\/hovercard\">#11022<\/a>,\u00a0<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/10570\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/10570\/hovercard\">#10570<\/a>).<\/li>\n<li>Custom Accelerator implementations must now implement two new abstract methods:\u00a0<code>is_available()<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11797\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11797\/hovercard\">#11797<\/a>) and\u00a0<code>auto_device_count()<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/10222\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/10222\/hovercard\">#10222<\/a>). The latter determines how many devices get used by default when specifying\u00a0<code>Trainer(accelerator=..., devices=\"auto\")<\/code>.<\/li>\n<li>We redesigned the process creation for spawn-based strategies such as\u00a0<code>DDPSpawnStrategy<\/code>\u00a0and\u00a0<code>TPUSpawnStrategy<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/10896\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/10896\/hovercard\">#10896<\/a>). 
All spawn-based strategies now spawn processes immediately upon calling\u00a0<code>Trainer.{fit,validate,test,predict}<\/code>, which means the hooks\/callbacks\u00a0<code>prepare_data<\/code>,\u00a0<code>setup<\/code>,\u00a0<code>configure_sharded_model<\/code>\u00a0and\u00a0<code>teardown<\/code>\u00a0all run under an initialized process group. These changes align the spawn-based strategies with their non-spawn counterparts (such as\u00a0<code>DDPStrategy<\/code>).<\/li>\n<\/ul>\n<p>We&#8217;ve also made the process group backend configurable. For example, you can now easily enable\u00a0<a href=\"https:\/\/github.com\/facebookresearch\/fairring\"><code>fairring<\/code><\/a>\u00a0like this:<\/p>\n<pre># Explicitly specify the process group backend if you choose to\r\nddp = pl.strategies.DDPStrategy(process_group_backend=\"fairring\")\r\ntrainer = Trainer(strategy=ddp, accelerator=\"gpu\", devices=8)<\/pre>\n<p>Similarly, with\u00a0<code>torch&gt;=1.11<\/code>\u00a0installed, you can enable\u00a0<a href=\"https:\/\/pytorch.org\/blog\/pytorch-1.11-released\/#stable-ddp-static-graph\" rel=\"nofollow\">DDP static graph<\/a>\u00a0to apply special runtime optimizations:<\/p>\n<pre>trainer = Trainer(devices=4, strategy=DDPStrategy(static_graph=True))<\/pre>\n<h4><code>LightningCLI<\/code>\u00a0improvements<\/h4>\n<p>In the previous release, we added shorthand notation support for registered components. 
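<\/p>\n<p>For reference, shorthand notation lets you refer to registered classes by name directly on the command line; for instance, a registered callback can be selected and configured like this (the exact flags shown here are illustrative):<\/p>\n<pre>$ python script.py fit --trainer.callbacks=EarlyStopping --trainer.callbacks.patience=5<\/pre>\n<p>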
In this release, we added a flag to\u00a0<a href=\"https:\/\/pytorch-lightning.readthedocs.io\/en\/stable\/common\/lightning_cli.html#subclass-registration\" rel=\"nofollow\">automatically register<\/a>\u00a0all available components:<\/p>\n<pre>from pytorch_lightning.utilities.cli import LightningCLI\r\n\r\nLightningCLI(auto_registry=True)<\/pre>\n<p>We have also added support for the\u00a0<code>ReduceLROnPlateau<\/code>\u00a0scheduler with shorthand notation:<\/p>\n<pre>$ python script.py fit --optimizer=Adam --lr_scheduler=ReduceLROnPlateau --lr_scheduler.monitor=metric_to_track<\/pre>\n<p>If you need to\u00a0<a href=\"https:\/\/pytorch-lightning.readthedocs.io\/en\/stable\/common\/lightning_cli.html#optimizers-and-learning-rate-schedulers\" rel=\"nofollow\">customize the learning rate scheduler configuration<\/a>, you can do so by overriding:<\/p>\n<pre>class MyLightningCLI(LightningCLI):\r\n    @staticmethod\r\n    def configure_optimizers(lightning_module, optimizer, lr_scheduler=None):\r\n        return {\"optimizer\": optimizer, \"lr_scheduler\": {\"scheduler\": lr_scheduler, ...}}<\/pre>\n<p>Finally, loggers are also now configurable with shorthand:<\/p>\n<pre>$ python script.py fit --trainer.logger=WandbLogger --trainer.logger.name=\"my_lightning_run\"<\/pre>\n<h4>Control SLURM&#8217;s re-queueing<\/h4>\n<p>We&#8217;ve added the ability to turn the\u00a0<a href=\"https:\/\/pytorch-lightning.readthedocs.io\/en\/stable\/clouds\/cluster.html#wall-time-auto-resubmit\" rel=\"nofollow\">automatic resubmission<\/a>\u00a0on or off when a job gets interrupted by the SLURM controller (via signal handling). 
Users who prefer to let their code handle the resubmission (for example, when submitit is used) can now pass:<\/p>\n<pre>from pytorch_lightning.plugins.environments import SLURMEnvironment\r\n\r\ntrainer = pl.Trainer(plugins=SLURMEnvironment(auto_requeue=False))<\/pre>\n<h4>Fault-tolerance improvements<\/h4>\n<p><a href=\"https:\/\/pytorch-lightning.readthedocs.io\/en\/stable\/advanced\/fault_tolerant_training.html\" rel=\"nofollow\">Fault-tolerant training<\/a>\u00a0under manual optimization now tracks optimization progress. We also changed the graceful exit signal from\u00a0<code>SIGUSR1<\/code>\u00a0to\u00a0<code>SIGTERM<\/code>\u00a0for better support inside cloud instances.<\/p>\n<p>An additional feature we&#8217;re excited to announce is support for consecutive <code>trainer.fit()<\/code>\u00a0calls.<\/p>\n<pre>trainer = pl.Trainer(max_epochs=2)\r\ntrainer.fit(model)\r\n\r\n# now, run 2 more epochs\r\ntrainer.fit_loop.max_epochs = 4\r\ntrainer.fit(model)<\/pre>\n<h4>Loop customization improvements<\/h4>\n<p>The\u00a0<a href=\"https:\/\/pytorch-lightning.readthedocs.io\/en\/stable\/extensions\/loops.html\" rel=\"nofollow\"><code>Loop<\/code><\/a>&#8217;s state is now included as part of the checkpoints saved by the library. This enables finer restoration of custom loops.<\/p>\n<p>We&#8217;ve also made it easier to replace Lightning&#8217;s loops with your own. For example:<\/p>\n<pre>class MyCustomLoop(pl.loops.TrainingEpochLoop):\r\n    ...\r\n\r\ntrainer = pl.Trainer(...)\r\ntrainer.fit_loop.replace(epoch_loop=MyCustomLoop)\r\n# Trainer runs the fit loop with your new epoch loop!\r\ntrainer.fit(model)<\/pre>\n<h4>Data-Loading improvements<\/h4>\n<p>In previous versions, Lightning required that the\u00a0<code>DataLoader<\/code>\u00a0instance set its input arguments as instance attributes. This meant that custom\u00a0<code>DataLoader<\/code>s also had this hidden requirement. 
In this release, we do this automatically for the user, easing the passing of custom loaders:<\/p>\n<pre>class MyDataLoader(torch.utils.data.DataLoader):\r\n    def __init__(self, a=123, *args, **kwargs):\r\n-       # this was required before\r\n-       self.a = a\r\n        super().__init__(*args, **kwargs)\r\n\r\ntrainer.fit(model, train_dataloaders=MyDataLoader())<\/pre>\n<p>As of this release, Lightning no longer pre-fetches 1 extra batch if it doesn&#8217;t need to. Previously, doing so would conflict with the internal pre-fetching done by optimized data loaders such as\u00a0<a href=\"https:\/\/ffcv.io\/\" rel=\"nofollow\">FFCV&#8217;s<\/a>. You can now define your own pre-fetching value like this:<\/p>\n<pre>class MyCustomLoop(pl.loops.FitLoop):\r\n    @property\r\n    def prefetch_batches(self):\r\n        return 7  # lucky number 7\r\n\r\ntrainer = pl.Trainer(...)\r\nfit_loop = MyCustomLoop()\r\n# attach the default epoch loop to the new fit loop, then swap it in\r\nfit_loop.connect(epoch_loop=trainer.fit_loop.epoch_loop)\r\ntrainer.fit_loop = fit_loop<\/pre>\n<h3>New Hooks<\/h3>\n<h3><code>LightningModule.lr_scheduler_step<\/code><\/h3>\n<p>Lightning now allows the use of\u00a0<a href=\"https:\/\/pytorch-lightning.readthedocs.io\/en\/stable\/common\/optimization.html#bring-your-own-custom-learning-rate-schedulers\" rel=\"nofollow\">custom learning rate schedulers<\/a>\u00a0that aren&#8217;t natively available in\u00a0<a href=\"https:\/\/pytorch.org\/docs\/stable\/optim.html#how-to-adjust-learning-rate\" rel=\"nofollow\">PyTorch<\/a>. 
A great example of this is\u00a0<a href=\"https:\/\/github.com\/rwightman\/pytorch-image-models\/blob\/master\/timm\/scheduler\/scheduler.py\">Timm Schedulers<\/a>.<\/p>\n<p>When using custom learning rate schedulers relying on an API other than PyTorch&#8217;s, you can now define\u00a0<code>LightningModule.lr_scheduler_step<\/code>\u00a0with your desired logic.<\/p>\n<pre>from timm.scheduler import TanhLRScheduler\r\n\r\n\r\nclass MyLightningModule(pl.LightningModule):\r\n    def configure_optimizers(self):\r\n        optimizer = ...\r\n        scheduler = TanhLRScheduler(optimizer, ...)\r\n        return {\"optimizer\": optimizer, \"lr_scheduler\": {\"scheduler\": scheduler, \"interval\": \"epoch\"}}\r\n\r\n    def lr_scheduler_step(self, scheduler, optimizer_idx, metric):\r\n        scheduler.step(epoch=self.current_epoch)  # timm schedulers need the epoch value<\/pre>\n<h4>A new stateful API<\/h4>\n<p>This release introduces new hooks to standardize all stateful components to use\u00a0<code>state_dict<\/code>\u00a0and\u00a0<code>load_state_dict<\/code>, mimicking the PyTorch API. 
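<\/p>\n<p>As a framework-free sketch of the idea (the <code>Counter<\/code> class below is a made-up example, not a Lightning API), any component that exposes this pair of methods can be round-tripped through a checkpoint dictionary:<\/p>\n<pre>class Counter:\r\n    # a stateful component following the new protocol\r\n    def __init__(self):\r\n        self.count = 0\r\n\r\n    def state_dict(self):\r\n        # everything needed to restore this component later\r\n        return {\"count\": self.count}\r\n\r\n    def load_state_dict(self, state_dict):\r\n        self.count = state_dict[\"count\"]\r\n\r\n\r\ncounter = Counter()\r\ncounter.count = 5\r\ncheckpoint = {\"callback_state\": counter.state_dict()}  # what gets saved\r\n\r\nrestored = Counter()\r\nrestored.load_state_dict(checkpoint[\"callback_state\"])\r\nassert restored.count == 5<\/pre>\n<p>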
The new hooks receive their own component&#8217;s state and replace most usages of the previous\u00a0<code>on_save_checkpoint<\/code>\u00a0and\u00a0<code>on_load_checkpoint<\/code>\u00a0hooks.<\/p>\n<pre>class MyCallback(pl.Callback):\r\n-   def on_save_checkpoint(self, trainer, pl_module, checkpoint):\r\n-       return {'x': self.x}\r\n\r\n-   def on_load_checkpoint(self, trainer, pl_module, checkpoint):\r\n-       self.x = checkpoint['x']\r\n\r\n+   def state_dict(self):\r\n+       return {'x': self.x}\r\n\r\n+   def load_state_dict(self, state_dict):\r\n+       self.x = state_dict['x']<\/pre>\n<h3>New Properties<\/h3>\n<p><strong><code>Trainer.estimated_stepping_batches<\/code><\/strong><\/p>\n<p>You can use the built-in\u00a0<code>Trainer.estimated_stepping_batches<\/code>\u00a0property to compute the total number of stepping batches needed for the complete training.<\/p>\n<p>The property takes the gradient accumulation factor and the distributed setting into consideration when performing this computation so that you don&#8217;t have to derive it manually:<\/p>\n<pre>class MyLightningModule(pl.LightningModule):\r\n    def configure_optimizers(self):\r\n        optimizer = ...\r\n        scheduler = torch.optim.lr_scheduler.OneCycleLR(\r\n            optimizer, max_lr=1e-3, total_steps=self.trainer.estimated_stepping_batches\r\n        )\r\n        return {\"optimizer\": optimizer, \"lr_scheduler\": scheduler}<\/pre>\n<p><strong><code>Trainer.num_devices<\/code>\u00a0and\u00a0<code>Trainer.device_ids<\/code><\/strong><\/p>\n<p>In the past, retrieving the number of devices used, or their IDs, posed a considerable challenge. Additionally, doing so required knowing which property to access based on the current\u00a0<code>Trainer<\/code>\u00a0configuration.<\/p>\n<p>To simplify this process, we&#8217;ve deprecated the per-accelerator properties in favor of accelerator-agnostic ones. 
For example:<\/p>\n<pre>- num_devices = max(1, trainer.num_gpus, trainer.num_processes)\r\n- if trainer.tpu_cores:\r\n-    num_devices = max(num_devices, trainer.tpu_cores)\r\n+ num_devices = trainer.num_devices<\/pre>\n<h3>Experimental Features<\/h3>\n<p><strong>Manual Fault-tolerance<\/strong><\/p>\n<p><a href=\"https:\/\/pytorch-lightning.readthedocs.io\/en\/latest\/advanced\/fault_tolerant_training.html#:~:text=Fault%2Dtolerant%20Training%20is%20an,can%20shutdown%20at%20any%20time.\" rel=\"nofollow\">Fault Tolerance<\/a>\u00a0has limitations that require specific information about your data-loading structure.<\/p>\n<p>It is now possible to resolve those limitations by enabling manual fault tolerance where you can write your own logic and specify how exactly to checkpoint your own datasets and samplers. You can do so using this environment flag:<\/p>\n<pre>$ PL_FAULT_TOLERANT_TRAINING=MANUAL python script.py<\/pre>\n<p>Check out\u00a0<a href=\"https:\/\/www.youtube.com\/watch?v=-HRh_szyuhE\" rel=\"nofollow\">this video<\/a>\u00a0for a dive into the internals of this flag.<\/p>\n<p><strong>Customizing the layer synchronization<\/strong><\/p>\n<p>We introduced a new plugin class for wrapping layers of a model with synchronization logic for multiprocessing.<\/p>\n<pre>class MyLayerSync(pl.plugins.LayerSync):\r\n    ...\r\n\r\nlayer_sync = MyLayerSync(...)\r\ntrainer = Trainer(sync_batchnorm=True, plugins=layer_sync, strategy=\"ddp\")<\/pre>\n<p><strong>Registering Custom Accelerators<\/strong><\/p>\n<p>There has been much progress in the field of ML Accelerators, and the list of accelerators is constantly expanding.<\/p>\n<p>We&#8217;ve made it easier for users to try out new accelerators by enabling support for registering custom\u00a0<code>Accelerator<\/code>\u00a0classes in Lightning.<\/p>\n<pre>from pytorch_lightning.accelerators import Accelerator, AcceleratorRegistry\r\n\r\nclass SOTAAccelerator(Accelerator):\r\n    def __init__(self, x):\r\n        
...\r\n\r\nAcceleratorRegistry.register(\"sota_accelerator\", SOTAAccelerator, x=123)\r\n# the following works now:\r\ntrainer = Trainer(accelerator=\"sota_accelerator\")<\/pre>\n<h3>Backward Incompatible Changes<\/h3>\n<p>Here is a selection of notable changes that are not backward compatible with previous versions. The full list of changes and removals can be found in the CHANGELOG below.<\/p>\n<h4>Drop PyTorch 1.7 support<\/h4>\n<p>Following our four-release PyTorch support window, this release supports PyTorch 1.8 to 1.11. Support for PyTorch 1.7 has been removed.<\/p>\n<h4>Drop Python 3.6 support<\/h4>\n<p>Following\u00a0<a href=\"https:\/\/endoflife.date\/python\" rel=\"nofollow\">Python&#8217;s end-of-life<\/a>, support for Python 3.6 has been removed.<\/p>\n<h4><code>AcceleratorConnector<\/code>\u00a0rewrite<\/h4>\n<p>To support new accelerator and strategy features, we completely rewrote our internal\u00a0<code>AcceleratorConnector<\/code>\u00a0class. No backward compatibility was maintained, so code that relied on this class is likely to break.<\/p>\n<h4>Re-define the\u00a0<code>current_epoch<\/code>\u00a0boundary<\/h4>\n<p>To resolve fault-tolerance issues, we changed where the current epoch value gets increased.<\/p>\n<p><code>trainer.current_epoch<\/code>\u00a0is now increased by 1 in\u00a0<code>on_train_end<\/code>. This means that if a model is run for 3 epochs (0, 1, 2),\u00a0<code>trainer.current_epoch<\/code>\u00a0will now return 3 instead of 2 after\u00a0<code>trainer.fit()<\/code>. This can also impact custom callbacks that access this property inside this hook.<\/p>\n<p>This also impacts checkpoints saved during an epoch (e.g.\u00a0<code>on_train_epoch_end<\/code>). 
For example, a\u00a0<code>Trainer(max_epochs=1, limit_train_batches=1)<\/code>\u00a0instance that saves a checkpoint will have the\u00a0<code>current_epoch=0<\/code>\u00a0value saved instead of\u00a0<code>current_epoch=1<\/code>.<\/p>\n<h4>Re-define the\u00a0<code>global_step<\/code>\u00a0boundary<\/h4>\n<p>To resolve fault-tolerance issues, we changed where the global step value gets increased.<\/p>\n<p>Access to\u00a0<code>trainer.global_step<\/code>\u00a0during an intra-training validation hook will now correctly return the number of optimizer steps taken already. In pseudocode:<\/p>\n<pre>  training_step()\r\n+ global_step += 1\r\n  validation_if_necessary()\r\n- global_step += 1<\/pre>\n<p>Saved checkpoints that use the global step value as part of the filename are now increased by 1 for the same reason. A checkpoint saved after 1 step will now be named\u00a0<code>step=1.ckpt<\/code>\u00a0instead of\u00a0<code>step=0.ckpt<\/code>.<\/p>\n<p>The\u00a0<code>trainer.global_step<\/code>\u00a0value will now account for TBPTT or multiple optimizers. Users setting\u00a0<code>Trainer({min,max}_steps=...)<\/code>\u00a0under these circumstances will need to adjust their values.<\/p>\n<h4>Removed automatic reduction of outputs in\u00a0<code>training_step<\/code>\u00a0when using DataParallel<\/h4>\n<p>When using\u00a0<code>Trainer(strategy=\"dp\")<\/code>,\u00a0<em>all<\/em>\u00a0the tensors returned by\u00a0<code>training_step<\/code>\u00a0were previously reduced to a scalar (<a class=\"issue-link js-issue-link\" href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11594\" data-error-text=\"Failed to load title\" data-id=\"1112588746\" data-permission-text=\"Title is private\" data-url=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/issues\/11594\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11594\/hovercard\">#11594<\/a>). 
This behavior was especially confusing when outputs needed to be collected into the\u00a0<code>training_epoch_end<\/code>\u00a0hook.<\/p>\n<p>From now on, outputs are no longer reduced except for the\u00a0<code>loss<\/code>\u00a0tensor, unless you implement\u00a0<code>training_step_end<\/code>, in which case the loss won&#8217;t get reduced either.<\/p>\n<h4>No longer fallback to CPU with no devices<\/h4>\n<p>Previous versions were lenient in that the lack of GPU devices defaulted to running on CPU. This meant that users&#8217; code could be running much slower without them ever noticing that it was running on CPU.<\/p>\n<p>We suggest passing\u00a0<code>Trainer(accelerator=\"auto\")<\/code>\u00a0when this leniency is desired.<\/p>\n<h3>LIGHTNING CHANGELOG<\/h3>\n<h4>Added<\/h4>\n<ul>\n<li>Allow logging to an existing run ID in MLflow with\u00a0<code>MLFlowLogger<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/12290\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/12290\/hovercard\">#12290<\/a>)<\/li>\n<li>Enable gradient accumulation using Horovod&#8217;s\u00a0<code>backward_passes_per_step<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11911\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11911\/hovercard\">#11911<\/a>)<\/li>\n<li>Add new\u00a0<code>DETAIL<\/code>\u00a0log level to provide useful logs for improving monitoring and debugging of batch jobs (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11008\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11008\/hovercard\">#11008<\/a>)<\/li>\n<li>Added a flag\u00a0<code>SLURMEnvironment(auto_requeue=True|False)<\/code>\u00a0to control whether Lightning handles the requeuing (<a 
href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/10601\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/10601\/hovercard\">#10601<\/a>)<\/li>\n<li>Fault Tolerant Manual\n<ul>\n<li>Add\u00a0<code>_Stateful<\/code>\u00a0protocol to detect if classes are stateful (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/10646\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/10646\/hovercard\">#10646<\/a>)<\/li>\n<li>Add\u00a0<code>_FaultTolerantMode<\/code>\u00a0enum used to track different supported fault tolerant modes (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/10645\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/10645\/hovercard\">#10645<\/a>)<\/li>\n<li>Add a\u00a0<code>_rotate_worker_indices<\/code>\u00a0utility to reload the state according the latest worker (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/10647\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/10647\/hovercard\">#10647<\/a>)<\/li>\n<li>Add stateful workers (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/10674\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/10674\/hovercard\">#10674<\/a>)<\/li>\n<li>Add an utility to collect the states across processes (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/10639\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/10639\/hovercard\">#10639<\/a>)<\/li>\n<li>Add logic to reload the states across data loading components (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/10699\" data-hovercard-type=\"pull_request\" 
data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/10699\/hovercard\">#10699<\/a>)<\/li>\n<li>Cleanup some fault tolerant utilities (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/10703\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/10703\/hovercard\">#10703<\/a>)<\/li>\n<li>Enable Fault Tolerant Manual Training (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/10707\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/10707\/hovercard\">#10707<\/a>)<\/li>\n<li>Broadcast the\u00a0<code>_terminate_gracefully<\/code>\u00a0to all processes and add support for DDP (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/10638\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/10638\/hovercard\">#10638<\/a>)<\/li>\n<\/ul>\n<\/li>\n<li>Added support for re-instantiation of custom (subclasses of)\u00a0<code>DataLoaders<\/code>\u00a0returned in the\u00a0<code>*_dataloader()<\/code>\u00a0methods, i.e., automatic replacement of samplers now works with custom types of\u00a0<code>DataLoader<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/10680\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/10680\/hovercard\">#10680<\/a>)<\/li>\n<li>Added a function to validate if fault tolerant training is supported. 
(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/10465\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/10465\/hovercard\">#10465<\/a>)<\/li>\n<li>Added a private callback to manage the creation and deletion of fault-tolerance checkpoints (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11862\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11862\/hovercard\">#11862<\/a>)<\/li>\n<li>Show a better error message when a custom\u00a0<code>DataLoader<\/code>\u00a0implementation is not well implemented and we need to reconstruct it (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/10719\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/10719\/hovercard\">#10719<\/a>)<\/li>\n<li>Show a better error message when frozen dataclass is used as a batch (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/10927\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/10927\/hovercard\">#10927<\/a>)<\/li>\n<li>Save the\u00a0<code>Loop<\/code>&#8216;s state by default in the checkpoint (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/10784\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/10784\/hovercard\">#10784<\/a>)<\/li>\n<li>Added\u00a0<code>Loop.replace<\/code>\u00a0to easily switch one loop for another (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/10324\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/10324\/hovercard\">#10324<\/a>)<\/li>\n<li>Added support for\u00a0<code>--lr_scheduler=ReduceLROnPlateau<\/code>\u00a0to the\u00a0<code>LightningCLI<\/code>\u00a0(<a 
href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/10860\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/10860\/hovercard\">#10860<\/a>)<\/li>\n<li>Added\u00a0<code>LightningCLI.configure_optimizers<\/code>\u00a0to override the\u00a0<code>configure_optimizers<\/code>\u00a0return value (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/10860\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/10860\/hovercard\">#10860<\/a>)<\/li>\n<li>Added\u00a0<code>LightningCLI(auto_registry)<\/code>\u00a0flag to register all subclasses of the registerable components automatically (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/12108\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/12108\/hovercard\">#12108<\/a>)<\/li>\n<li>Added a warning that shows when\u00a0<code>max_epochs<\/code>\u00a0in the\u00a0<code>Trainer<\/code>\u00a0is not set (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/10700\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/10700\/hovercard\">#10700<\/a>)<\/li>\n<li>Added support for returning a single Callback from\u00a0<code>LightningModule.configure_callbacks<\/code>\u00a0without wrapping it into a list (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11060\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11060\/hovercard\">#11060<\/a>)<\/li>\n<li>Added\u00a0<code>console_kwargs<\/code>\u00a0for\u00a0<code>RichProgressBar<\/code>\u00a0to initialize inner Console (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/10875\" data-hovercard-type=\"pull_request\" 
data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/10875\/hovercard\">#10875<\/a>)<\/li>\n<li>Added support for shorthand notation to instantiate loggers with the\u00a0<code>LightningCLI<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11533\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11533\/hovercard\">#11533<\/a>)<\/li>\n<li>Added a\u00a0<code>LOGGER_REGISTRY<\/code>\u00a0instance to register custom loggers to the\u00a0<code>LightningCLI<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11533\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11533\/hovercard\">#11533<\/a>)<\/li>\n<li>Added info message when the\u00a0<code>Trainer<\/code>\u00a0arguments\u00a0<code>limit_*_batches<\/code>,\u00a0<code>overfit_batches<\/code>, or\u00a0<code>val_check_interval<\/code>\u00a0are set to\u00a0<code>1<\/code>\u00a0or\u00a0<code>1.0<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11950\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11950\/hovercard\">#11950<\/a>)<\/li>\n<li>Added a\u00a0<code>PrecisionPlugin.teardown<\/code>\u00a0method (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/10990\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/10990\/hovercard\">#10990<\/a>)<\/li>\n<li>Added\u00a0<code>LightningModule.lr_scheduler_step<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/10249\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/10249\/hovercard\">#10249<\/a>)<\/li>\n<li>Added support for no pre-fetching to\u00a0<code>DataFetcher<\/code>\u00a0(<a 
href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11606\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11606\/hovercard\">#11606<\/a>)<\/li>\n<li>Added support for optimizer step progress tracking with manual optimization (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11848\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11848\/hovercard\">#11848<\/a>)<\/li>\n<li>Return the output of\u00a0<code>optimizer.step<\/code>. This can be useful for\u00a0<code>LightningLite<\/code>\u00a0users, manual optimization users, or users overriding\u00a0<code>LightningModule.optimizer_step<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11711\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11711\/hovercard\">#11711<\/a>)<\/li>\n<li>Tear down the active loop and strategy on exception (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11620\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11620\/hovercard\">#11620<\/a>)<\/li>\n<li>Added a\u00a0<code>MisconfigurationException<\/code>\u00a0raised when the user-provided\u00a0<code>opt_idx<\/code>\u00a0in the scheduler config doesn&#8217;t match the actual optimizer index of its respective optimizer (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11247\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11247\/hovercard\">#11247<\/a>)<\/li>\n<li>Added a\u00a0<code>loggers<\/code>\u00a0property to\u00a0<code>Trainer<\/code>\u00a0which returns a list of loggers provided by the user (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11683\" data-hovercard-type=\"pull_request\" 
data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11683\/hovercard\">#11683<\/a>)<\/li>\n<li>Added a\u00a0<code>loggers<\/code>\u00a0property to\u00a0<code>LightningModule<\/code>\u00a0which retrieves the\u00a0<code>loggers<\/code>\u00a0property from\u00a0<code>Trainer<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11683\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11683\/hovercard\">#11683<\/a>)<\/li>\n<li>Added support for DDP when using a\u00a0<code>CombinedLoader<\/code>\u00a0for the training data (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11648\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11648\/hovercard\">#11648<\/a>)<\/li>\n<li>Added a warning when using\u00a0<code>DistributedSampler<\/code>\u00a0during validation\/testing (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11479\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11479\/hovercard\">#11479<\/a>)<\/li>\n<li>Added support for\u00a0<code>Bagua<\/code>\u00a0training strategy (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11146\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11146\/hovercard\">#11146<\/a>)<\/li>\n<li>Added support for manually returning a\u00a0<code>poptorch.DataLoader<\/code>\u00a0in a\u00a0<code>*_dataloader<\/code>\u00a0hook (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/12116\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/12116\/hovercard\">#12116<\/a>)<\/li>\n<li>Added\u00a0<code>rank_zero<\/code>\u00a0module to centralize utilities (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11747\" 
data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11747\/hovercard\">#11747<\/a>)<\/li>\n<li>Added\u00a0<code>_Stateful<\/code>\u00a0support for\u00a0<code>LightningDataModule<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11637\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11637\/hovercard\">#11637<\/a>)<\/li>\n<li>Added\u00a0<code>_Stateful<\/code>\u00a0support for\u00a0<code>PrecisionPlugin<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11638\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11638\/hovercard\">#11638<\/a>)<\/li>\n<li>Added\u00a0<code>Accelerator.is_available<\/code>\u00a0to check device availability (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11797\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11797\/hovercard\">#11797<\/a>)<\/li>\n<li>Enabled static type-checking on the signature of\u00a0<code>Trainer<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11888\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11888\/hovercard\">#11888<\/a>)<\/li>\n<li>Added utility functions for moving optimizers to devices (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11758\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11758\/hovercard\">#11758<\/a>)<\/li>\n<li>Added a warning when saving an instance of\u00a0<code>nn.Module<\/code>\u00a0with\u00a0<code>save_hyperparameters()<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/12068\" data-hovercard-type=\"pull_request\" 
data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/12068\/hovercard\">#12068<\/a>)<\/li>\n<li>Added\u00a0<code>estimated_stepping_batches<\/code>\u00a0property to\u00a0<code>Trainer<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11599\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11599\/hovercard\">#11599<\/a>)<\/li>\n<li>Added support for pluggable Accelerators (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/12030\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/12030\/hovercard\">#12030<\/a>)<\/li>\n<li>Added profiling for\u00a0<code>on_load_checkpoint<\/code>\/<code>on_save_checkpoint<\/code>\u00a0callback and LightningModule hooks (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/12149\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/12149\/hovercard\">#12149<\/a>)<\/li>\n<li>Added\u00a0<code>LayerSync<\/code>\u00a0and\u00a0<code>NativeSyncBatchNorm<\/code>\u00a0plugins (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11754\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11754\/hovercard\">#11754<\/a>)<\/li>\n<li>Added optional\u00a0<code>storage_options<\/code>\u00a0argument to\u00a0<code>Trainer.save_checkpoint()<\/code>\u00a0to pass to custom\u00a0<code>CheckpointIO<\/code>\u00a0implementations (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11891\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11891\/hovercard\">#11891<\/a>)<\/li>\n<li>Added support to explicitly specify the process group backend for parallel strategies (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11745\" 
data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11745\/hovercard\">#11745<\/a>)<\/li>\n<li>Added\u00a0<code>device_ids<\/code>\u00a0and\u00a0<code>num_devices<\/code>\u00a0property to\u00a0<code>Trainer<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/12151\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/12151\/hovercard\">#12151<\/a>)<\/li>\n<li>Added\u00a0<code>Callback.state_dict()<\/code>\u00a0and\u00a0<code>Callback.load_state_dict()<\/code>\u00a0methods (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/12232\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/12232\/hovercard\">#12232<\/a>)<\/li>\n<li>Added\u00a0<code>AcceleratorRegistry<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/12180\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/12180\/hovercard\">#12180<\/a>)<\/li>\n<li>Added support for Habana Accelerator (HPU) (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11808\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11808\/hovercard\">#11808<\/a>)<\/li>\n<li>Added support for dataclasses in\u00a0<code>apply_to_collections<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11889\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11889\/hovercard\">#11889<\/a>)<\/li>\n<\/ul>\n<h4>Changed<\/h4>\n<ul>\n<li>Drop PyTorch 1.7 support (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/12191\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/12191\/hovercard\">#12191<\/a>), (<a 
href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/12432\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/12432\/hovercard\">#12432<\/a>)<\/li>\n<li>Make\u00a0<code>benchmark<\/code>\u00a0flag optional and set its value based on the deterministic flag (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11944\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11944\/hovercard\">#11944<\/a>)<\/li>\n<li>Implemented a new native and rich format in\u00a0<code>_print_results<\/code>\u00a0method of the\u00a0<code>EvaluationLoop<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11332\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11332\/hovercard\">#11332<\/a>)<\/li>\n<li>Do not print an empty table at the end of the\u00a0<code>EvaluationLoop<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/12427\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/12427\/hovercard\">#12427<\/a>)<\/li>\n<li>Set the\u00a0<code>prog_bar<\/code>\u00a0flag to False in\u00a0<code>LightningModule.log_grad_norm<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11472\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11472\/hovercard\">#11472<\/a>)<\/li>\n<li>Raised exception in\u00a0<code>init_dist_connection()<\/code>\u00a0when torch distributed is not available (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/10418\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/10418\/hovercard\">#10418<\/a>)<\/li>\n<li>The\u00a0<code>monitor<\/code>\u00a0argument in 
the\u00a0<code>EarlyStopping<\/code>\u00a0callback is no longer optional (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/10328\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/10328\/hovercard\">#10328<\/a>)<\/li>\n<li>Do not fail if the batch size could not be inferred for logging when using DeepSpeed (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/10438\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/10438\/hovercard\">#10438<\/a>)<\/li>\n<li>Raised\u00a0<code>MisconfigurationException<\/code>\u00a0when\u00a0<code>enable_progress_bar=False<\/code>\u00a0and a progress bar instance has been passed in the callback list (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/10520\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/10520\/hovercard\">#10520<\/a>)<\/li>\n<li>Moved\u00a0<code>trainer.connectors.env_vars_connector._defaults_from_env_vars<\/code>\u00a0to\u00a0<code>utilities.argparse._defaults_from_env_vars<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/10501\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/10501\/hovercard\">#10501<\/a>)<\/li>\n<li>Changes in\u00a0<code>LightningCLI<\/code>\u00a0required for the new major release of jsonargparse v4.0.0 (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/10426\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/10426\/hovercard\">#10426<\/a>)<\/li>\n<li>Renamed the\u00a0<code>refresh_rate_per_second<\/code>\u00a0parameter to\u00a0<code>refresh_rate<\/code>\u00a0in the\u00a0<code>RichProgressBar<\/code>\u00a0signature (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/10497\" 
data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/10497\/hovercard\">#10497<\/a>)<\/li>\n<li>Moved ownership of the\u00a0<code>PrecisionPlugin<\/code>\u00a0into\u00a0<code>TrainingTypePlugin<\/code>\u00a0and updated all references (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/10570\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/10570\/hovercard\">#10570<\/a>)<\/li>\n<li>Fault-tolerant training now relies on\u00a0<code>signal.SIGTERM<\/code>\u00a0instead of\u00a0<code>signal.SIGUSR1<\/code>\u00a0to exit gracefully (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/10605\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/10605\/hovercard\">#10605<\/a>)<\/li>\n<li><code>Loop.restarting=...<\/code>\u00a0now sets the value recursively for all subloops (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11442\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11442\/hovercard\">#11442<\/a>)<\/li>\n<li>Raised an error if the\u00a0<code>batch_size<\/code>\u00a0cannot be inferred from the current batch because it contains a string or is a custom batch object (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/10541\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/10541\/hovercard\">#10541<\/a>)<\/li>\n<li>The validation loop is now disabled when\u00a0<code>overfit_batches &gt; 0<\/code>\u00a0is set in the Trainer (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/9709\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/9709\/hovercard\">#9709<\/a>)<\/li>\n<li>Moved optimizer-related logic 
from\u00a0<code>Accelerator<\/code>\u00a0to\u00a0<code>TrainingTypePlugin<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/10596\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/10596\/hovercard\">#10596<\/a>)<\/li>\n<li>Moved ownership of the lightning optimizers from the\u00a0<code>Trainer<\/code>\u00a0to the\u00a0<code>Strategy<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11444\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11444\/hovercard\">#11444<\/a>)<\/li>\n<li>Moved ownership of the data fetchers from the DataConnector to the Loops (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11621\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11621\/hovercard\">#11621<\/a>)<\/li>\n<li>Moved\u00a0<code>batch_to_device<\/code>\u00a0method from\u00a0<code>Accelerator<\/code>\u00a0to\u00a0<code>TrainingTypePlugin<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/10649\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/10649\/hovercard\">#10649<\/a>)<\/li>\n<li>The\u00a0<code>DDPSpawnPlugin<\/code>\u00a0no longer overrides the\u00a0<code>post_dispatch<\/code>\u00a0plugin hook (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/10034\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/10034\/hovercard\">#10034<\/a>)<\/li>\n<li>Integrate the progress bar implementation with progress tracking (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11213\" data-hovercard-type=\"pull_request\" 
data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11213\/hovercard\">#11213<\/a>)<\/li>\n<li>The\u00a0<code>LightningModule.{add_to_queue,get_from_queue}<\/code>\u00a0hooks no longer get a\u00a0<code>torch.multiprocessing.SimpleQueue<\/code>\u00a0and instead receive a list based queue (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/10034\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/10034\/hovercard\">#10034<\/a>)<\/li>\n<li>Changed\u00a0<code>training_step<\/code>,\u00a0<code>validation_step<\/code>,\u00a0<code>test_step<\/code>\u00a0and\u00a0<code>predict_step<\/code>\u00a0method signatures in\u00a0<code>Accelerator<\/code>\u00a0and updated input from caller side (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/10908\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/10908\/hovercard\">#10908<\/a>)<\/li>\n<li>Changed the name of the temporary checkpoint that the\u00a0<code>DDPSpawnPlugin<\/code>\u00a0and related plugins save (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/10934\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/10934\/hovercard\">#10934<\/a>)<\/li>\n<li><code>LoggerCollection<\/code>\u00a0returns only unique logger names and versions (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/10976\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/10976\/hovercard\">#10976<\/a>)<\/li>\n<li>Redesigned process creation for spawn-based plugins (<code>DDPSpawnPlugin<\/code>,\u00a0<code>TPUSpawnPlugin<\/code>, etc.) 
(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/10896\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/10896\/hovercard\">#10896<\/a>)\n<ul>\n<li>All spawn-based plugins now spawn processes immediately upon calling\u00a0<code>Trainer.{fit,validate,test,predict}<\/code><\/li>\n<li>The hooks\/callbacks\u00a0<code>prepare_data<\/code>,\u00a0<code>setup<\/code>,\u00a0<code>configure_sharded_model<\/code>\u00a0and\u00a0<code>teardown<\/code>\u00a0now run under an initialized process group for spawn-based plugins, just like their non-spawn counterparts<\/li>\n<li>Some configuration errors that were previously raised as\u00a0<code>MisconfigurationException<\/code>s will now be raised as\u00a0<code>ProcessRaisedException<\/code>\u00a0(torch&gt;=1.8) or as\u00a0<code>Exception<\/code>\u00a0(torch&lt;1.8)<\/li>\n<li>Removed the\u00a0<code>TrainingTypePlugin.pre_dispatch()<\/code>\u00a0method and merged it with\u00a0<code>TrainingTypePlugin.setup()<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11137\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11137\/hovercard\">#11137<\/a>)<\/li>\n<\/ul>\n<\/li>\n<li>Changed the profiler to index and display the names of the hooks with a new naming pattern 
(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11026\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11026\/hovercard\">#11026<\/a>)<\/li>\n<li>Changed\u00a0<code>batch_to_device<\/code>\u00a0entry in profiling from stage-specific to generic, to match profiling of other hooks (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11031\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11031\/hovercard\">#11031<\/a>)<\/li>\n<li>Changed the info message for finalizing ddp-spawn worker processes to a debug-level message (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/10864\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/10864\/hovercard\">#10864<\/a>)<\/li>\n<li>Removed duplicated file extension when uploading model checkpoints with\u00a0<code>NeptuneLogger<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11015\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11015\/hovercard\">#11015<\/a>)<\/li>\n<li>Removed\u00a0<code>__getstate__<\/code>\u00a0and\u00a0<code>__setstate__<\/code>\u00a0of\u00a0<code>RichProgressBar<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11100\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11100\/hovercard\">#11100<\/a>)<\/li>\n<li>The\u00a0<code>DDPPlugin<\/code>\u00a0and\u00a0<code>DDPSpawnPlugin<\/code>\u00a0and their subclasses now remove the\u00a0<code>SyncBatchNorm<\/code>\u00a0wrappers in\u00a0<code>teardown()<\/code>\u00a0to enable proper support at inference after fitting (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11078\" data-hovercard-type=\"pull_request\" 
data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11078\/hovercard\">#11078<\/a>)<\/li>\n<li>Moved ownership of the\u00a0<code>Accelerator<\/code>\u00a0instance to the\u00a0<code>TrainingTypePlugin<\/code>; all training-type plugins now take an optional parameter\u00a0<code>accelerator<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11022\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11022\/hovercard\">#11022<\/a>)<\/li>\n<li>Renamed the\u00a0<code>TrainingTypePlugin<\/code>\u00a0to\u00a0<code>Strategy<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11120\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11120\/hovercard\">#11120<\/a>)\n<ul>\n<li>Renamed the\u00a0<code>ParallelPlugin<\/code>\u00a0to\u00a0<code>ParallelStrategy<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11123\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11123\/hovercard\">#11123<\/a>)<\/li>\n<li>Renamed the\u00a0<code>DataParallelPlugin<\/code>\u00a0to\u00a0<code>DataParallelStrategy<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11183\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11183\/hovercard\">#11183<\/a>)<\/li>\n<li>Renamed the\u00a0<code>DDPPlugin<\/code>\u00a0to\u00a0<code>DDPStrategy<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11142\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11142\/hovercard\">#11142<\/a>)<\/li>\n<li>Renamed the\u00a0<code>DDP2Plugin<\/code>\u00a0to\u00a0<code>DDP2Strategy<\/code>\u00a0(<a 
href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11185\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11185\/hovercard\">#11185<\/a>)<\/li>\n<li>Renamed the\u00a0<code>DDPShardedPlugin<\/code>\u00a0to\u00a0<code>DDPShardedStrategy<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11186\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11186\/hovercard\">#11186<\/a>)<\/li>\n<li>Renamed the\u00a0<code>DDPFullyShardedPlugin<\/code>\u00a0to\u00a0<code>DDPFullyShardedStrategy<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11143\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11143\/hovercard\">#11143<\/a>)<\/li>\n<li>Renamed the\u00a0<code>DDPSpawnPlugin<\/code>\u00a0to\u00a0<code>DDPSpawnStrategy<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11145\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11145\/hovercard\">#11145<\/a>)<\/li>\n<li>Renamed the\u00a0<code>DDPSpawnShardedPlugin<\/code>\u00a0to\u00a0<code>DDPSpawnShardedStrategy<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11210\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11210\/hovercard\">#11210<\/a>)<\/li>\n<li>Renamed the\u00a0<code>DeepSpeedPlugin<\/code>\u00a0to\u00a0<code>DeepSpeedStrategy<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11194\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11194\/hovercard\">#11194<\/a>)<\/li>\n<li>Renamed the\u00a0<code>HorovodPlugin<\/code>\u00a0to\u00a0<code>HorovodStrategy<\/code>\u00a0(<a 
href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11195\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11195\/hovercard\">#11195<\/a>)<\/li>\n<li>Renamed the\u00a0<code>TPUSpawnPlugin<\/code>\u00a0to\u00a0<code>TPUSpawnStrategy<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11190\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11190\/hovercard\">#11190<\/a>)<\/li>\n<li>Renamed the\u00a0<code>IPUPlugin<\/code>\u00a0to\u00a0<code>IPUStrategy<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11193\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11193\/hovercard\">#11193<\/a>)<\/li>\n<li>Renamed the\u00a0<code>SingleDevicePlugin<\/code>\u00a0to\u00a0<code>SingleDeviceStrategy<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11182\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11182\/hovercard\">#11182<\/a>)<\/li>\n<li>Renamed the\u00a0<code>SingleTPUPlugin<\/code>\u00a0to\u00a0<code>SingleTPUStrategy<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11182\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11182\/hovercard\">#11182<\/a>)<\/li>\n<li>Renamed the\u00a0<code>TrainingTypePluginsRegistry<\/code>\u00a0to\u00a0<code>StrategyRegistry<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11233\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11233\/hovercard\">#11233<\/a>)<\/li>\n<\/ul>\n<\/li>\n<li>Marked the\u00a0<code>ResultCollection<\/code>,\u00a0<code>ResultMetric<\/code>, 
and\u00a0<code>ResultMetricCollection<\/code>\u00a0classes as protected (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11130\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11130\/hovercard\">#11130<\/a>)<\/li>\n<li>Marked\u00a0<code>trainer.checkpoint_connector<\/code>\u00a0as protected (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11550\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11550\/hovercard\">#11550<\/a>)<\/li>\n<li>The epoch start\/end hooks are now called by the\u00a0<code>FitLoop<\/code>\u00a0instead of the\u00a0<code>TrainingEpochLoop<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11201\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11201\/hovercard\">#11201<\/a>)<\/li>\n<li>DeepSpeed no longer requires the LightningModule to be partitioned for ZeRO Stage 3 (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/10655\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/10655\/hovercard\">#10655<\/a>)<\/li>\n<li>Moved\u00a0<code>Strategy<\/code>\u00a0classes to the\u00a0<code>strategies<\/code>\u00a0directory (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11226\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11226\/hovercard\">#11226<\/a>)<\/li>\n<li>Renamed the\u00a0<code>training_type_plugin<\/code>\u00a0file to\u00a0<code>strategy<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11239\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11239\/hovercard\">#11239<\/a>)<\/li>\n<li>Changed\u00a0<code>DeviceStatsMonitor<\/code>\u00a0to group metrics based on 
the logger&#8217;s\u00a0<code>group_separator<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11254\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11254\/hovercard\">#11254<\/a>)<\/li>\n<li>Raised\u00a0<code>UserWarning<\/code>\u00a0if evaluation is triggered with\u00a0<code>best<\/code>\u00a0ckpt and trainer is configured with multiple checkpoint callbacks (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11274\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11274\/hovercard\">#11274<\/a>)<\/li>\n<li><code>Trainer.logged_metrics<\/code>\u00a0now always contains scalar tensors, even when a Python scalar was logged (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11270\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11270\/hovercard\">#11270<\/a>)<\/li>\n<li>The tuner now uses the checkpoint connector to copy and restore its state (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11518\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11518\/hovercard\">#11518<\/a>)<\/li>\n<li>Changed\u00a0<code>MisconfigurationException<\/code>\u00a0to\u00a0<code>ModuleNotFoundError<\/code>\u00a0when\u00a0<code>rich<\/code>\u00a0isn&#8217;t available (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11360\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11360\/hovercard\">#11360<\/a>)<\/li>\n<li>The\u00a0<code>trainer.current_epoch<\/code>\u00a0value is now increased by 1 during and after\u00a0<code>on_train_end<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/8578\" data-hovercard-type=\"pull_request\" 
data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/8578\/hovercard\">#8578<\/a>)<\/li>\n<li>The\u00a0<code>trainer.global_step<\/code>\u00a0value now accounts for multiple optimizers and TBPTT splits (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11805\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11805\/hovercard\">#11805<\/a>)<\/li>\n<li>The\u00a0<code>trainer.global_step<\/code>\u00a0value is now increased right after the\u00a0<code>optimizer.step()<\/code>\u00a0call which will impact users who access it during an intra-training validation hook (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11805\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11805\/hovercard\">#11805<\/a>)<\/li>\n<li>The filename of checkpoints created with\u00a0<code>ModelCheckpoint(filename='{step}')<\/code>\u00a0is different compared to previous versions. 
A checkpoint saved after 1 step will be named\u00a0<code>step=1.ckpt<\/code>\u00a0instead of\u00a0<code>step=0.ckpt<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11805\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11805\/hovercard\">#11805<\/a>)<\/li>\n<li>Inherit from\u00a0<code>ABC<\/code>\u00a0for\u00a0<code>Accelerator<\/code>: Users need to implement\u00a0<code>auto_device_count<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11521\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11521\/hovercard\">#11521<\/a>)<\/li>\n<li>Changed\u00a0<code>parallel_devices<\/code>\u00a0property in\u00a0<code>ParallelStrategy<\/code>\u00a0to be lazy initialized (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11572\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11572\/hovercard\">#11572<\/a>)<\/li>\n<li>Updated\u00a0<code>TQDMProgressBar<\/code>\u00a0to run a separate progress bar for each eval dataloader (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11657\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11657\/hovercard\">#11657<\/a>)<\/li>\n<li>Sorted\u00a0<code>SimpleProfiler(extended=False)<\/code>\u00a0summary based on mean duration for each hook (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11671\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11671\/hovercard\">#11671<\/a>)<\/li>\n<li>Avoid enforcing\u00a0<code>shuffle=False<\/code>\u00a0for eval dataloaders (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11575\" data-hovercard-type=\"pull_request\" 
data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11575\/hovercard\">#11575<\/a>)<\/li>\n<li>When using DP (data-parallel), Lightning will no longer automatically reduce all tensors returned in\u00a0<code>training_step<\/code>; it will only reduce the loss unless\u00a0<code>training_step_end<\/code>\u00a0is overridden (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11594\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11594\/hovercard\">#11594<\/a>)<\/li>\n<li>When using DP (data-parallel), the\u00a0<code>training_epoch_end<\/code>\u00a0hook will no longer receive reduced outputs from\u00a0<code>training_step<\/code>\u00a0and instead get the full tensor of results from all GPUs (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11594\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11594\/hovercard\">#11594<\/a>)<\/li>\n<li>Changed default logger name to\u00a0<code>lightning_logs<\/code>\u00a0for consistency (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11762\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11762\/hovercard\">#11762<\/a>)<\/li>\n<li>Rewrote\u00a0<code>accelerator_connector<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11448\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11448\/hovercard\">#11448<\/a>)<\/li>\n<li>When manual optimization is used with DDP, we no longer force\u00a0<code>find_unused_parameters=True<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/12425\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/12425\/hovercard\">#12425<\/a>)<\/li>\n<li>Disabled loading dataloaders if 
corresponding\u00a0<code>limit_batches=0<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11576\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11576\/hovercard\">#11576<\/a>)<\/li>\n<li>Removed\u00a0<code>is_global_zero<\/code>\u00a0check in\u00a0<code>training_epoch_loop<\/code>\u00a0before\u00a0<code>logger.save<\/code>. If you have a custom logger that implements\u00a0<code>save<\/code>\u00a0the Trainer will now call\u00a0<code>save<\/code>\u00a0on all ranks by default. To change this behavior add\u00a0<code>@rank_zero_only<\/code>\u00a0to your\u00a0<code>save<\/code>\u00a0implementation (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/12134\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/12134\/hovercard\">#12134<\/a>)<\/li>\n<li>Disabled tuner with distributed strategies (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/12179\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/12179\/hovercard\">#12179<\/a>)<\/li>\n<li>Marked\u00a0<code>trainer.logger_connector<\/code>\u00a0as protected (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/12195\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/12195\/hovercard\">#12195<\/a>)<\/li>\n<li>Move\u00a0<code>Strategy.process_dataloader<\/code>\u00a0function call from\u00a0<code>fit\/evaluation\/predict_loop.py<\/code>\u00a0to\u00a0<code>data_connector.py<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/12251\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/12251\/hovercard\">#12251<\/a>)<\/li>\n<li><code>ModelCheckpoint(save_last=True, every_n_epochs=N)<\/code>\u00a0now saves a 
&#8220;last&#8221; checkpoint every epoch (disregarding\u00a0<code>every_n_epochs<\/code>) instead of only once at the end of training (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/12418\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/12418\/hovercard\">#12418<\/a>)<\/li>\n<li>The strategies that support\u00a0<code>sync_batchnorm<\/code>\u00a0now only apply it when fitting (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11919\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11919\/hovercard\">#11919<\/a>)<\/li>\n<li>Avoided fallback on CPU if no devices are provided for other accelerators (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/12410\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/12410\/hovercard\">#12410<\/a>)<\/li>\n<li>Modified\u00a0<code>supporters.py<\/code>\u00a0so that the accumulator element (for loss) is created directly on the device (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/12430\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/12430\/hovercard\">#12430<\/a>)<\/li>\n<li>Removed\u00a0<code>EarlyStopping.on_save_checkpoint<\/code>\u00a0and\u00a0<code>EarlyStopping.on_load_checkpoint<\/code>\u00a0in favor of\u00a0<code>EarlyStopping.state_dict<\/code>\u00a0and\u00a0<code>EarlyStopping.load_state_dict<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11887\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11887\/hovercard\">#11887<\/a>)<\/li>\n<li>Removed\u00a0<code>BaseFinetuning.on_save_checkpoint<\/code>\u00a0and\u00a0<code>BaseFinetuning.on_load_checkpoint<\/code>\u00a0in favor 
of\u00a0<code>BaseFinetuning.state_dict<\/code>\u00a0and\u00a0<code>BaseFinetuning.load_state_dict<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11887\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11887\/hovercard\">#11887<\/a>)<\/li>\n<li>Removed\u00a0<code>BackboneFinetuning.on_save_checkpoint<\/code>\u00a0and\u00a0<code>BackboneFinetuning.on_load_checkpoint<\/code>\u00a0in favor of\u00a0<code>BackboneFinetuning.state_dict<\/code>\u00a0and\u00a0<code>BackboneFinetuning.load_state_dict<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11887\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11887\/hovercard\">#11887<\/a>)<\/li>\n<li>Removed\u00a0<code>ModelCheckpoint.on_save_checkpoint<\/code>\u00a0and\u00a0<code>ModelCheckpoint.on_load_checkpoint<\/code>\u00a0in favor of\u00a0<code>ModelCheckpoint.state_dict<\/code>\u00a0and\u00a0<code>ModelCheckpoint.load_state_dict<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11887\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11887\/hovercard\">#11887<\/a>)<\/li>\n<li>Removed\u00a0<code>Timer.on_save_checkpoint<\/code>\u00a0and\u00a0<code>Timer.on_load_checkpoint<\/code>\u00a0in favor of\u00a0<code>Timer.state_dict<\/code>\u00a0and\u00a0<code>Timer.load_state_dict<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11887\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11887\/hovercard\">#11887<\/a>)<\/li>\n<li>Replaced PostLocalSGDOptimizer with a dedicated model averaging component (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/12378\" data-hovercard-type=\"pull_request\" 
data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/12378\/hovercard\">#12378<\/a>)<\/li>\n<\/ul>\n<h4>Deprecated<\/h4>\n<ul>\n<li>Deprecated\u00a0<code>training_type_plugin<\/code>\u00a0property in favor of\u00a0<code>strategy<\/code>\u00a0in\u00a0<code>Trainer<\/code>\u00a0and updated the references (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11141\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11141\/hovercard\">#11141<\/a>)<\/li>\n<li>Deprecated\u00a0<code>Trainer.{validated,tested,predicted}_ckpt_path<\/code>\u00a0and replaced with read-only property\u00a0<code>Trainer.ckpt_path<\/code>\u00a0set when checkpoints loaded via\u00a0<code>Trainer.{fit,validate,test,predict}<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11696\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11696\/hovercard\">#11696<\/a>)<\/li>\n<li>Deprecated\u00a0<code>ClusterEnvironment.master_{address,port}<\/code>\u00a0in favor of\u00a0<code>ClusterEnvironment.main_{address,port}<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/10103\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/10103\/hovercard\">#10103<\/a>)<\/li>\n<li>Deprecated\u00a0<code>DistributedType<\/code>\u00a0in favor of\u00a0<code>_StrategyType<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/10505\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/10505\/hovercard\">#10505<\/a>)<\/li>\n<li>Deprecated the\u00a0<code>precision_plugin<\/code>\u00a0constructor argument from\u00a0<code>Accelerator<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/10570\" data-hovercard-type=\"pull_request\" 
data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/10570\/hovercard\">#10570<\/a>)<\/li>\n<li>Deprecated\u00a0<code>DeviceType<\/code>\u00a0in favor of\u00a0<code>_AcceleratorType<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/10503\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/10503\/hovercard\">#10503<\/a>)<\/li>\n<li>Deprecated the property\u00a0<code>Trainer.slurm_job_id<\/code>\u00a0in favor of the new\u00a0<code>SLURMEnvironment.job_id()<\/code>\u00a0method (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/10622\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/10622\/hovercard\">#10622<\/a>)<\/li>\n<li>Deprecated the access to the attribute\u00a0<code>IndexBatchSamplerWrapper.batch_indices<\/code>\u00a0in favor of\u00a0<code>IndexBatchSamplerWrapper.seen_batch_indices<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/10870\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/10870\/hovercard\">#10870<\/a>)<\/li>\n<li>Deprecated\u00a0<code>on_init_start<\/code>\u00a0and\u00a0<code>on_init_end<\/code>\u00a0callback hooks (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/10940\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/10940\/hovercard\">#10940<\/a>)<\/li>\n<li>Deprecated\u00a0<code>Trainer.call_hook<\/code>\u00a0in favor of\u00a0<code>Trainer._call_callback_hooks<\/code>,\u00a0<code>Trainer._call_lightning_module_hook<\/code>,\u00a0<code>Trainer._call_ttp_hook<\/code>, and\u00a0<code>Trainer._call_accelerator_hook<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/10979\" data-hovercard-type=\"pull_request\" 
data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/10979\/hovercard\">#10979<\/a>)<\/li>\n<li>Deprecated\u00a0<code>TrainingTypePlugin.post_dispatch<\/code>\u00a0in favor of\u00a0<code>TrainingTypePlugin.teardown<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/10939\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/10939\/hovercard\">#10939<\/a>)<\/li>\n<li>Deprecated\u00a0<code>ModelIO.on_hpc_{save\/load}<\/code>\u00a0in favor of\u00a0<code>CheckpointHooks.on_{save\/load}_checkpoint<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/10911\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/10911\/hovercard\">#10911<\/a>)<\/li>\n<li>Deprecated\u00a0<code>Trainer.run_stage<\/code>\u00a0in favor of\u00a0<code>Trainer.{fit,validate,test,predict}<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11000\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11000\/hovercard\">#11000<\/a>)<\/li>\n<li>Deprecated\u00a0<code>Trainer.lr_schedulers<\/code>\u00a0in favor of\u00a0<code>Trainer.lr_scheduler_configs<\/code>\u00a0which returns a list of dataclasses instead of dictionaries (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11443\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11443\/hovercard\">#11443<\/a>)<\/li>\n<li>Deprecated\u00a0<code>Trainer.verbose_evaluate<\/code>\u00a0in favor of\u00a0<code>EvaluationLoop(verbose=...)<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/10931\" data-hovercard-type=\"pull_request\" 
data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/10931\/hovercard\">#10931<\/a>)<\/li>\n<li>Deprecated\u00a0<code>Trainer.should_rank_save_checkpoint<\/code>\u00a0Trainer property (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11068\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11068\/hovercard\">#11068<\/a>)<\/li>\n<li>Deprecated\u00a0<code>Trainer.lightning_optimizers<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11444\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11444\/hovercard\">#11444<\/a>)<\/li>\n<li>Deprecated\u00a0<code>TrainerOptimizersMixin<\/code>\u00a0and moved functionality to\u00a0<code>core\/optimizer.py<\/code>(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11155\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11155\/hovercard\">#11155<\/a>)<\/li>\n<li>Deprecated the\u00a0<code>on_train_batch_end(outputs)<\/code>\u00a0format when multiple optimizers are used and TBPTT is enabled (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/12182\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/12182\/hovercard\">#12182<\/a>)<\/li>\n<li>Deprecated the\u00a0<code>training_epoch_end(outputs)<\/code>\u00a0format when multiple optimizers are used and TBPTT is enabled (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/12182\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/12182\/hovercard\">#12182<\/a>)<\/li>\n<li>Deprecated\u00a0<code>TrainerCallbackHookMixin<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11148\" data-hovercard-type=\"pull_request\" 
data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11148\/hovercard\">#11148<\/a>)<\/li>\n<li>Deprecated\u00a0<code>TrainerDataLoadingMixin<\/code>\u00a0and moved functionality to\u00a0<code>Trainer<\/code>\u00a0and\u00a0<code>DataConnector<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11282\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11282\/hovercard\">#11282<\/a>)<\/li>\n<li>Deprecated function\u00a0<code>pytorch_lightning.callbacks.device_stats_monitor.prefix_metric_keys<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11254\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11254\/hovercard\">#11254<\/a>)<\/li>\n<li>Deprecated\u00a0<code>Callback.on_epoch_start<\/code>\u00a0hook in favour of\u00a0<code>Callback.on_{train\/val\/test}_epoch_start<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11578\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11578\/hovercard\">#11578<\/a>)<\/li>\n<li>Deprecated\u00a0<code>Callback.on_epoch_end<\/code>\u00a0hook in favour of\u00a0<code>Callback.on_{train\/val\/test}_epoch_end<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11578\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11578\/hovercard\">#11578<\/a>)<\/li>\n<li>Deprecated\u00a0<code>LightningModule.on_epoch_start<\/code>\u00a0hook in favor of\u00a0<code>LightningModule.on_{train\/val\/test}_epoch_start<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11578\" data-hovercard-type=\"pull_request\" 
data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11578\/hovercard\">#11578<\/a>)<\/li>\n<li>Deprecated\u00a0<code>LightningModule.on_epoch_end<\/code>\u00a0hook in favor of\u00a0<code>LightningModule.on_{train\/val\/test}_epoch_end<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11578\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11578\/hovercard\">#11578<\/a>)<\/li>\n<li>Deprecated\u00a0<code>on_before_accelerator_backend_setup<\/code>\u00a0callback hook in favour of\u00a0<code>setup<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11568\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11568\/hovercard\">#11568<\/a>)<\/li>\n<li>Deprecated\u00a0<code>on_batch_start<\/code>\u00a0and\u00a0<code>on_batch_end<\/code>\u00a0callback hooks in favor of\u00a0<code>on_train_batch_start<\/code>\u00a0and\u00a0<code>on_train_batch_end<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11577\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11577\/hovercard\">#11577<\/a>)<\/li>\n<li>Deprecated\u00a0<code>on_configure_sharded_model<\/code>\u00a0callback hook in favor of\u00a0<code>setup<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11627\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11627\/hovercard\">#11627<\/a>)<\/li>\n<li>Deprecated\u00a0<code>pytorch_lightning.utilities.distributed.rank_zero_only<\/code>\u00a0in favor of\u00a0<code>pytorch_lightning.utilities.rank_zero.rank_zero_only<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11747\" data-hovercard-type=\"pull_request\" 
data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11747\/hovercard\">#11747<\/a>)<\/li>\n<li>Deprecated\u00a0<code>pytorch_lightning.utilities.distributed.rank_zero_debug<\/code>\u00a0in favor of\u00a0<code>pytorch_lightning.utilities.rank_zero.rank_zero_debug<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11747\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11747\/hovercard\">#11747<\/a>)<\/li>\n<li>Deprecated\u00a0<code>pytorch_lightning.utilities.distributed.rank_zero_info<\/code>\u00a0in favor of\u00a0<code>pytorch_lightning.utilities.rank_zero.rank_zero_info<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11747\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11747\/hovercard\">#11747<\/a>)<\/li>\n<li>Deprecated\u00a0<code>pytorch_lightning.utilities.warnings.rank_zero_warn<\/code>\u00a0in favor of\u00a0<code>pytorch_lightning.utilities.rank_zero.rank_zero_warn<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11747\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11747\/hovercard\">#11747<\/a>)<\/li>\n<li>Deprecated\u00a0<code>pytorch_lightning.utilities.warnings.rank_zero_deprecation<\/code>\u00a0in favor of\u00a0<code>pytorch_lightning.utilities.rank_zero.rank_zero_deprecation<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11747\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11747\/hovercard\">#11747<\/a>)<\/li>\n<li>Deprecated\u00a0<code>pytorch_lightning.utilities.warnings.LightningDeprecationWarning<\/code>\u00a0in favor 
of\u00a0<code>pytorch_lightning.utilities.rank_zero.LightningDeprecationWarning<\/code><\/li>\n<li>Deprecated\u00a0<code>on_pretrain_routine_start<\/code>\u00a0and\u00a0<code>on_pretrain_routine_end<\/code>\u00a0callback hooks in favor of\u00a0<code>on_fit_start<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11794\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11794\/hovercard\">#11794<\/a>)<\/li>\n<li>Deprecated\u00a0<code>LightningModule.on_pretrain_routine_start<\/code>\u00a0and\u00a0<code>LightningModule.on_pretrain_routine_end<\/code>\u00a0hooks in favor of\u00a0<code>on_fit_start<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/12122\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/12122\/hovercard\">#12122<\/a>)<\/li>\n<li>Deprecated\u00a0<code>agg_key_funcs<\/code>\u00a0and\u00a0<code>agg_default_func<\/code>\u00a0parameters from\u00a0<code>LightningLoggerBase<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11871\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11871\/hovercard\">#11871<\/a>)<\/li>\n<li>Deprecated\u00a0<code>LightningLoggerBase.update_agg_funcs<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11871\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11871\/hovercard\">#11871<\/a>)<\/li>\n<li>Deprecated\u00a0<code>LightningLoggerBase.agg_and_log_metrics<\/code>\u00a0in favor of\u00a0<code>LightningLoggerBase.log_metrics<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11832\" data-hovercard-type=\"pull_request\" 
data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11832\/hovercard\">#11832<\/a>)<\/li>\n<li>Deprecated passing\u00a0<code>weights_save_path<\/code>\u00a0to the\u00a0<code>Trainer<\/code>\u00a0constructor in favor of adding the\u00a0<code>ModelCheckpoint<\/code>\u00a0callback with\u00a0<code>dirpath<\/code>\u00a0directly to the list of callbacks (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/12084\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/12084\/hovercard\">#12084<\/a>)<\/li>\n<li>Deprecated\u00a0<code>pytorch_lightning.profiler.AbstractProfiler<\/code>\u00a0in favor of\u00a0<code>pytorch_lightning.profiler.Profiler<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/12106\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/12106\/hovercard\">#12106<\/a>)<\/li>\n<li>Deprecated\u00a0<code>pytorch_lightning.profiler.BaseProfiler<\/code>\u00a0in favor of\u00a0<code>pytorch_lightning.profiler.Profiler<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/12150\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/12150\/hovercard\">#12150<\/a>)<\/li>\n<li>Deprecated\u00a0<code>BaseProfiler.profile_iterable<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/12102\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/12102\/hovercard\">#12102<\/a>)<\/li>\n<li>Deprecated\u00a0<code>LoggerCollection<\/code>\u00a0in favor of\u00a0<code>trainer.loggers<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/12147\" data-hovercard-type=\"pull_request\" 
data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/12147\/hovercard\">#12147<\/a>)<\/li>\n<li>Deprecated\u00a0<code>PrecisionPlugin.on_{save,load}_checkpoint<\/code>\u00a0in favor of\u00a0<code>PrecisionPlugin.{state_dict,load_state_dict}<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11978\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11978\/hovercard\">#11978<\/a>)<\/li>\n<li>Deprecated\u00a0<code>LightningDataModule.on_save\/load_checkpoint<\/code>\u00a0in favor of\u00a0<code>state_dict\/load_state_dict<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11893\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11893\/hovercard\">#11893<\/a>)<\/li>\n<li>Deprecated\u00a0<code>Trainer.use_amp<\/code>\u00a0in favor of\u00a0<code>Trainer.amp_backend<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/12312\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/12312\/hovercard\">#12312<\/a>)<\/li>\n<li>Deprecated\u00a0<code>LightningModule.use_amp<\/code>\u00a0in favor of\u00a0<code>Trainer.amp_backend<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/12315\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/12315\/hovercard\">#12315<\/a>)<\/li>\n<li>Deprecated specifying the process group backend through the environment variable\u00a0<code>PL_TORCH_DISTRIBUTED_BACKEND<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11745\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11745\/hovercard\">#11745<\/a>)<\/li>\n<li>Deprecated\u00a0<code>ParallelPlugin.torch_distributed_backend<\/code>\u00a0in 
favor of\u00a0<code>DDPStrategy.process_group_backend<\/code>\u00a0property (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11745\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11745\/hovercard\">#11745<\/a>)<\/li>\n<li>Deprecated\u00a0<code>ModelCheckpoint.save_checkpoint<\/code>\u00a0in favor of\u00a0<code>Trainer.save_checkpoint<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/12456\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/12456\/hovercard\">#12456<\/a>)<\/li>\n<li>Deprecated\u00a0<code>Trainer.devices<\/code>\u00a0in favor of\u00a0<code>Trainer.num_devices<\/code>\u00a0and\u00a0<code>Trainer.device_ids<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/12151\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/12151\/hovercard\">#12151<\/a>)<\/li>\n<li>Deprecated\u00a0<code>Trainer.root_gpu<\/code>\u00a0in favor of\u00a0<code>Trainer.strategy.root_device.index<\/code>\u00a0when GPU is used (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/12262\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/12262\/hovercard\">#12262<\/a>)<\/li>\n<li>Deprecated\u00a0<code>Trainer.num_gpus<\/code>\u00a0in favor of\u00a0<code>Trainer.num_devices<\/code>\u00a0when GPU is used (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/12384\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/12384\/hovercard\">#12384<\/a>)<\/li>\n<li>Deprecated\u00a0<code>Trainer.ipus<\/code>\u00a0in favor of\u00a0<code>Trainer.num_devices<\/code>\u00a0when IPU is used (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/12386\" 
data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/12386\/hovercard\">#12386<\/a>)<\/li>\n<li>Deprecated\u00a0<code>Trainer.num_processes<\/code>\u00a0in favor of\u00a0<code>Trainer.num_devices<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/12388\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/12388\/hovercard\">#12388<\/a>)<\/li>\n<li>Deprecated\u00a0<code>Trainer.data_parallel_device_ids<\/code>\u00a0in favor of\u00a0<code>Trainer.device_ids<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/12072\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/12072\/hovercard\">#12072<\/a>)<\/li>\n<li>Deprecated returning state from\u00a0<code>Callback.on_save_checkpoint<\/code>\u00a0in favor of returning state in\u00a0<code>Callback.state_dict<\/code>\u00a0for checkpointing (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11887\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11887\/hovercard\">#11887<\/a>)<\/li>\n<li>Deprecated passing only the callback state to\u00a0<code>Callback.on_load_checkpoint(callback_state)<\/code>\u00a0in favor of passing the callback state to\u00a0<code>Callback.load_state_dict<\/code>\u00a0and in 1.8, passing the entire checkpoint dictionary to\u00a0<code>Callback.on_load_checkpoint(checkpoint)<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11887\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11887\/hovercard\">#11887<\/a>)<\/li>\n<li>Deprecated\u00a0<code>Trainer.gpus<\/code>\u00a0in favor of\u00a0<code>Trainer.device_ids<\/code>\u00a0or\u00a0<code>Trainer.num_devices<\/code>\u00a0(<a 
href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/12436\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/12436\/hovercard\">#12436<\/a>)<\/li>\n<li>Deprecated\u00a0<code>Trainer.tpu_cores<\/code>\u00a0in favor of\u00a0<code>Trainer.num_devices<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/12437\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/12437\/hovercard\">#12437<\/a>)<\/li>\n<\/ul>\n<h4>Removed<\/h4>\n<ul>\n<li>Removed deprecated parameter\u00a0<code>method<\/code>\u00a0in\u00a0<code>pytorch_lightning.utilities.model_helpers.is_overridden<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/10507\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/10507\/hovercard\">#10507<\/a>)<\/li>\n<li>Removed deprecated method\u00a0<code>ClusterEnvironment.creates_children<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/10339\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/10339\/hovercard\">#10339<\/a>)<\/li>\n<li>Removed deprecated\u00a0<code>TrainerModelHooksMixin.is_function_implemented<\/code>\u00a0and\u00a0<code>TrainerModelHooksMixin.has_arg<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/10322\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/10322\/hovercard\">#10322<\/a>)<\/li>\n<li>Removed deprecated\u00a0<code>pytorch_lightning.utilities.device_dtype_mixin.DeviceDtypeModuleMixin<\/code>\u00a0in favor of\u00a0<code>pytorch_lightning.core.mixins.device_dtype_mixin.DeviceDtypeModuleMixin<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/10442\" 
data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/10442\/hovercard\">#10442<\/a>)<\/li>\n<li>Removed deprecated\u00a0<code>LightningModule.loaded_optimizer_states_dict<\/code>\u00a0property (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/10346\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/10346\/hovercard\">#10346<\/a>)<\/li>\n<li>Removed deprecated\u00a0<code>Trainer.fit(train_dataloader=)<\/code>,\u00a0<code>Trainer.validate(val_dataloaders=)<\/code>, and\u00a0<code>Trainer.test(test_dataloader=)<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/10325\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/10325\/hovercard\">#10325<\/a>)<\/li>\n<li>Removed deprecated\u00a0<code>every_n_val_epochs<\/code>\u00a0parameter of\u00a0<code>ModelCheckpoint<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/10366\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/10366\/hovercard\">#10366<\/a>)<\/li>\n<li>Removed deprecated\u00a0<code>import pytorch_lightning.profiler.profilers<\/code>\u00a0in favor of\u00a0<code>import pytorch_lightning.profiler<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/10443\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/10443\/hovercard\">#10443<\/a>)<\/li>\n<li>Removed deprecated property\u00a0<code>configure_slurm_ddp<\/code>\u00a0from the accelerator connector (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/10370\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/10370\/hovercard\">#10370<\/a>)<\/li>\n<li>Removed deprecated 
arguments\u00a0<code>num_nodes<\/code>\u00a0and\u00a0<code>sync_batchnorm<\/code>\u00a0from\u00a0<code>DDPPlugin<\/code>,\u00a0<code>DDPSpawnPlugin<\/code>, and\u00a0<code>DeepSpeedPlugin<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/10357\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/10357\/hovercard\">#10357<\/a>)<\/li>\n<li>Removed deprecated property\u00a0<code>is_slurm_managing_tasks<\/code>\u00a0from\u00a0<code>AcceleratorConnector<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/10353\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/10353\/hovercard\">#10353<\/a>)<\/li>\n<li>Removed deprecated\u00a0<code>LightningModule.log(tbptt_reduce_fx, tbptt_reduce_token, sync_dist_op)<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/10423\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/10423\/hovercard\">#10423<\/a>)<\/li>\n<li>Removed deprecated\u00a0<code>Plugin.task_idx<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/10441\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/10441\/hovercard\">#10441<\/a>)<\/li>\n<li>Removed deprecated method\u00a0<code>master_params<\/code>\u00a0from\u00a0<code>PrecisionPlugin<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/10372\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/10372\/hovercard\">#10372<\/a>)<\/li>\n<li>Removed the automatic detachment of &#8220;extras&#8221; returned from\u00a0<code>training_step<\/code>. 
For example,\u00a0<code>return {'loss': ..., 'foo': foo.detach()}<\/code>\u00a0will now be necessary if\u00a0<code>foo<\/code>\u00a0has gradients which you do not want to store (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/10424\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/10424\/hovercard\">#10424<\/a>)<\/li>\n<li>Removed deprecated passthrough methods and properties from\u00a0<code>Accelerator<\/code>\u00a0base class:\n<ul>\n<li>(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/10403\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/10403\/hovercard\">#10403<\/a>)<\/li>\n<li>(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/10448\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/10448\/hovercard\">#10448<\/a>)<\/li>\n<\/ul>\n<\/li>\n<li>Removed deprecated signature for\u00a0<code>transfer_batch_to_device<\/code>\u00a0hook. 
The new argument\u00a0<code>dataloader_idx<\/code>\u00a0is now required (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/10480\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/10480\/hovercard\">#10480<\/a>)<\/li>\n<li>Removed deprecated\u00a0<code>utilities.distributed.rank_zero_{warn\/deprecation}<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/10451\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/10451\/hovercard\">#10451<\/a>)<\/li>\n<li>Removed deprecated\u00a0<code>mode<\/code>\u00a0argument from\u00a0<code>ModelSummary<\/code>\u00a0class (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/10449\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/10449\/hovercard\">#10449<\/a>)<\/li>\n<li>Removed deprecated\u00a0<code>Trainer.train_loop<\/code>\u00a0property in favor of\u00a0<code>Trainer.fit_loop<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/10482\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/10482\/hovercard\">#10482<\/a>)<\/li>\n<li>Removed deprecated\u00a0<code>disable_validation<\/code>\u00a0property from\u00a0<code>Trainer<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/10450\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/10450\/hovercard\">#10450<\/a>)<\/li>\n<li>Removed 
deprecated\u00a0<code>CheckpointConnector.hpc_load<\/code>\u00a0property in favor of\u00a0<code>CheckpointConnector.restore<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/10525\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/10525\/hovercard\">#10525<\/a>)<\/li>\n<li>Removed deprecated\u00a0<code>reload_dataloaders_every_epoch<\/code>\u00a0from\u00a0<code>Trainer<\/code>\u00a0in favour of\u00a0<code>reload_dataloaders_every_n_epochs<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/10481\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/10481\/hovercard\">#10481<\/a>)<\/li>\n<li>Removed the\u00a0<code>precision_plugin<\/code>\u00a0attribute from\u00a0<code>Accelerator<\/code>\u00a0in favor of its equivalent attribute\u00a0<code>precision_plugin<\/code>\u00a0in the\u00a0<code>TrainingTypePlugin<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/10570\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/10570\/hovercard\">#10570<\/a>)<\/li>\n<li>Removed\u00a0<code>DeepSpeedPlugin.{precision,amp_type,amp_level}<\/code>\u00a0properties (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/10657\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/10657\/hovercard\">#10657<\/a>)<\/li>\n<li>Removed patching of\u00a0<code>on_before_batch_transfer<\/code>,\u00a0<code>transfer_batch_to_device<\/code>\u00a0and\u00a0<code>on_after_batch_transfer<\/code>\u00a0hooks in\u00a0<code>LightningModule<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/10603\" data-hovercard-type=\"pull_request\" 
data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/10603\/hovercard\">#10603<\/a>)<\/li>\n<li>Removed argument\u00a0<code>return_result<\/code>\u00a0from the\u00a0<code>DDPSpawnPlugin.spawn()<\/code>\u00a0method (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/10867\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/10867\/hovercard\">#10867<\/a>)<\/li>\n<li>Removed the property\u00a0<code>TrainingTypePlugin.results<\/code>\u00a0and corresponding properties in subclasses (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/10034\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/10034\/hovercard\">#10034<\/a>)<\/li>\n<li>Removed the\u00a0<code>mp_queue<\/code>\u00a0attribute from\u00a0<code>DDPSpawnPlugin<\/code>\u00a0and\u00a0<code>TPUSpawnPlugin<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/10034\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/10034\/hovercard\">#10034<\/a>)<\/li>\n<li>Removed unnecessary\u00a0<code>_move_optimizer_state<\/code>\u00a0method overrides from\u00a0<code>TPUSpawnPlugin<\/code>\u00a0and\u00a0<code>SingleTPUPlugin<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/10849\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/10849\/hovercard\">#10849<\/a>)<\/li>\n<li>Removed\u00a0<code>should_rank_save_checkpoint<\/code>\u00a0property from\u00a0<code>TrainingTypePlugin<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11070\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11070\/hovercard\">#11070<\/a>)<\/li>\n<li>Removed\u00a0<code>model_sharded_context<\/code>\u00a0method 
from\u00a0<code>Accelerator<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/10886\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/10886\/hovercard\">#10886<\/a>)<\/li>\n<li>Removed method\u00a0<code>pre_dispatch<\/code>\u00a0from the\u00a0<code>PrecisionPlugin<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/10887\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/10887\/hovercard\">#10887<\/a>)<\/li>\n<li>Removed method\u00a0<code>setup_optimizers_in_pre_dispatch<\/code>\u00a0from the\u00a0<code>strategies<\/code>, moving the same logic into the\u00a0<code>setup<\/code>\u00a0and\u00a0<code>pre_dispatch<\/code>\u00a0methods (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/10906\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/10906\/hovercard\">#10906<\/a>)<\/li>\n<li>Removed methods\u00a0<code>pre_dispatch<\/code>,\u00a0<code>dispatch<\/code>\u00a0and\u00a0<code>post_dispatch<\/code>\u00a0from the\u00a0<code>Accelerator<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/10885\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/10885\/hovercard\">#10885<\/a>)<\/li>\n<li>Removed methods\u00a0<code>training_step<\/code>,\u00a0<code>test_step<\/code>,\u00a0<code>validation_step<\/code>\u00a0and\u00a0<code>predict_step<\/code>\u00a0from the\u00a0<code>Accelerator<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/10890\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/10890\/hovercard\">#10890<\/a>)<\/li>\n<li>Removed\u00a0<code>TrainingTypePlugin.start_{training,evaluating,predicting}<\/code>\u00a0hooks and their counterparts 
in all subclasses (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/10989\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/10989\/hovercard\">#10989<\/a>,\u00a0<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/10896\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/10896\/hovercard\">#10896<\/a>)<\/li>\n<li>Removed\u00a0<code>Accelerator.on_train_start<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/10999\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/10999\/hovercard\">#10999<\/a>)<\/li>\n<li>Removed support for Python 3.6 (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11117\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11117\/hovercard\">#11117<\/a>)<\/li>\n<li>Removed\u00a0<code>Strategy.init_optimizers<\/code>\u00a0in favor of\u00a0<code>Strategy.setup_optimizers<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11236\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11236\/hovercard\">#11236<\/a>)<\/li>\n<li>Removed\u00a0<code>profile(\"training_step_and_backward\")<\/code>\u00a0in the\u00a0<code>Closure<\/code>\u00a0class, since the\u00a0<code>training_step<\/code>\u00a0and\u00a0<code>backward<\/code>\u00a0calls are already profiled (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11222\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11222\/hovercard\">#11222<\/a>)<\/li>\n<li>Removed\u00a0<code>Strategy.optimizer_zero_grad<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11246\" 
data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11246\/hovercard\">#11246<\/a>)<\/li>\n<li>Removed\u00a0<code>Strategy.on_gpu<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11537\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11537\/hovercard\">#11537<\/a>)<\/li>\n<li>Removed\u00a0<code>Strategy.on_tpu<\/code>\u00a0property (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11536\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11536\/hovercard\">#11536<\/a>)<\/li>\n<li>Removed the abstract property\u00a0<code>LightningLoggerBase.experiment<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11603\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11603\/hovercard\">#11603<\/a>)<\/li>\n<li>Removed\u00a0<code>FitLoop.current_epoch<\/code>\u00a0getter and setter (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11562\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11562\/hovercard\">#11562<\/a>)<\/li>\n<li>Removed access to\u00a0<code>_short_id<\/code>\u00a0in\u00a0<code>NeptuneLogger<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11517\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11517\/hovercard\">#11517<\/a>)<\/li>\n<li>Removed\u00a0<code>log_text<\/code>\u00a0and\u00a0<code>log_image<\/code>\u00a0from the\u00a0<code>LightningLoggerBase<\/code>\u00a0API (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11857\" data-hovercard-type=\"pull_request\" 
data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11857\/hovercard\">#11857<\/a>)<\/li>\n<li>Removed calls to\u00a0<code>profile(\"model_forward\")<\/code>\u00a0in favor of profiling\u00a0<code>training_step<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/12032\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/12032\/hovercard\">#12032<\/a>)<\/li>\n<li>Removed\u00a0<code>get_mp_spawn_kwargs<\/code>\u00a0from\u00a0<code>DDPSpawnStrategy<\/code>\u00a0and\u00a0<code>TPUSpawnStrategy<\/code>\u00a0in favor of configuration in the\u00a0<code>_SpawnLauncher<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11966\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11966\/hovercard\">#11966<\/a>)<\/li>\n<li>Removed\u00a0<code>_aggregate_metrics<\/code>,\u00a0<code>_reduce_agg_metrics<\/code>, and\u00a0<code>_finalize_agg_metrics<\/code>\u00a0from\u00a0<code>LightningLoggerBase<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/12053\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/12053\/hovercard\">#12053<\/a>)<\/li>\n<li>Removed the\u00a0<code>AcceleratorConnector.device_type<\/code>\u00a0property (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/12081\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/12081\/hovercard\">#12081<\/a>)<\/li>\n<li>Removed\u00a0<code>AcceleratorConnector.num_nodes<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/12107\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/12107\/hovercard\">#12107<\/a>)<\/li>\n<li>Removed\u00a0<code>AcceleratorConnector.has_ipu<\/code>\u00a0property 
(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/12111\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/12111\/hovercard\">#12111<\/a>)<\/li>\n<li>Removed\u00a0<code>AcceleratorConnector.use_ipu<\/code>\u00a0property (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/12110\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/12110\/hovercard\">#12110<\/a>)<\/li>\n<li>Removed\u00a0<code>AcceleratorConnector.has_tpu<\/code>\u00a0property (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/12109\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/12109\/hovercard\">#12109<\/a>)<\/li>\n<li>Removed\u00a0<code>AcceleratorConnector.use_dp<\/code>\u00a0property (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/12112\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/12112\/hovercard\">#12112<\/a>)<\/li>\n<li>Removed\u00a0<code>configure_sync_batchnorm<\/code>\u00a0from\u00a0<code>ParallelStrategy<\/code>\u00a0and all other strategies that inherit from it (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11754\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11754\/hovercard\">#11754<\/a>)<\/li>\n<li>Removed public attribute\u00a0<code>sync_batchnorm<\/code>\u00a0from strategies (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11754\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11754\/hovercard\">#11754<\/a>)<\/li>\n<li>Removed\u00a0<code>AcceleratorConnector.root_gpu<\/code>\u00a0property (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/12262\" 
data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/12262\/hovercard\">#12262<\/a>)<\/li>\n<li>Removed\u00a0<code>AcceleratorConnector.tpu_id<\/code>\u00a0property (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/12387\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/12387\/hovercard\">#12387<\/a>)<\/li>\n<li>Removed\u00a0<code>AcceleratorConnector.num_gpus<\/code>\u00a0property (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/12384\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/12384\/hovercard\">#12384<\/a>)<\/li>\n<li>Removed\u00a0<code>AcceleratorConnector.num_ipus<\/code>\u00a0property (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/12386\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/12386\/hovercard\">#12386<\/a>)<\/li>\n<li>Removed\u00a0<code>AcceleratorConnector.num_processes<\/code>\u00a0property (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/12388\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/12388\/hovercard\">#12388<\/a>)<\/li>\n<li>Removed\u00a0<code>AcceleratorConnector.parallel_device_ids<\/code>\u00a0property (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/12072\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/12072\/hovercard\">#12072<\/a>)<\/li>\n<li>Removed\u00a0<code>AcceleratorConnector.devices<\/code>\u00a0property (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/12435\" data-hovercard-type=\"pull_request\" 
data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/12435\/hovercard\">#12435<\/a>)<\/li>\n<li>Removed\u00a0<code>AcceleratorConnector.parallel_devices<\/code>\u00a0property (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/12075\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/12075\/hovercard\">#12075<\/a>)<\/li>\n<li>Removed\u00a0<code>AcceleratorConnector.tpu_cores<\/code>\u00a0property (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/12437\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/12437\/hovercard\">#12437<\/a>)<\/li>\n<\/ul>\n<h4>Fixed<\/h4>\n<ul>\n<li>Fixed an issue where\u00a0<code>ModelCheckpoint<\/code>\u00a0could delete the last checkpoint from the old directory when\u00a0<code>dirpath<\/code>\u00a0changed during resumed training (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/12225\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/12225\/hovercard\">#12225<\/a>)<\/li>\n<li>Fixed an issue where\u00a0<code>ModelCheckpoint<\/code>\u00a0could delete older checkpoints when\u00a0<code>dirpath<\/code>\u00a0changed during resumed training (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/12045\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/12045\/hovercard\">#12045<\/a>)<\/li>\n<li>Fixed an issue where\u00a0<code>HorovodStrategy.teardown()<\/code>\u00a0did not complete gracefully if an exception was thrown during callback setup (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11752\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11752\/hovercard\">#11752<\/a>)<\/li>\n<li>Fixed security vulnerabilities\u00a0<a 
title=\"CVE-2020-1747\" href=\"https:\/\/github.com\/advisories\/GHSA-6757-jp84-gxfx\" data-hovercard-type=\"advisory\" data-hovercard-url=\"\/advisories\/GHSA-6757-jp84-gxfx\/hovercard\">CVE-2020-1747<\/a>\u00a0and\u00a0<a title=\"CVE-2020-14343\" href=\"https:\/\/github.com\/advisories\/GHSA-8q59-q68h-6hv4\" data-hovercard-type=\"advisory\" data-hovercard-url=\"\/advisories\/GHSA-8q59-q68h-6hv4\/hovercard\">CVE-2020-14343<\/a>\u00a0caused by the\u00a0<code>PyYAML<\/code>\u00a0dependency (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11099\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11099\/hovercard\">#11099<\/a>)<\/li>\n<li>Fixed security vulnerability &#8220;CWE-94: Improper Control of Generation of Code (Code Injection)&#8221; (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/12212\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/12212\/hovercard\">#12212<\/a>)<\/li>\n<li>Fixed logging on\u00a0<code>{test,validation}_epoch_end<\/code>\u00a0with multiple dataloaders (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11132\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11132\/hovercard\">#11132<\/a>)<\/li>\n<li>Reset the validation progress tracking state after sanity checking (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11218\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11218\/hovercard\">#11218<\/a>)<\/li>\n<li>Fixed double evaluation bug with fault-tolerance enabled where the second call was completely skipped (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11119\" data-hovercard-type=\"pull_request\" 
data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11119\/hovercard\">#11119<\/a>)<\/li>\n<li>Fixed an issue with the\u00a0<code>TPUSpawnPlugin<\/code>\u00a0handling the\u00a0<code>XLA_USE_BF16<\/code>\u00a0environment variable incorrectly (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/10990\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/10990\/hovercard\">#10990<\/a>)<\/li>\n<li>Fixed a wrong type hint for\u00a0<code>Trainer.lightning_optimizers<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11155\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11155\/hovercard\">#11155<\/a>)<\/li>\n<li>Fixed the lr-scheduler state not being dumped to the checkpoint when using the DeepSpeed strategy (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11307\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11307\/hovercard\">#11307<\/a>)<\/li>\n<li>Fixed a bug that forced overriding\u00a0<code>configure_optimizers<\/code>\u00a0with the CLI (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11672\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11672\/hovercard\">#11672<\/a>)<\/li>\n<li>Fixed type promotion when tensors of higher category than float are logged (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11401\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11401\/hovercard\">#11401<\/a>)<\/li>\n<li>Fixed\u00a0<code>SimpleProfiler<\/code>\u00a0summary (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11414\" data-hovercard-type=\"pull_request\" 
data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11414\/hovercard\">#11414<\/a>)<\/li>\n<li>No longer set a\u00a0<code>DistributedSampler<\/code>\u00a0to the\u00a0<code>poptorch.DataLoader<\/code>\u00a0when IPUs are used (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/12114\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/12114\/hovercard\">#12114<\/a>)<\/li>\n<li>Fixed bug where progress bar was not being disabled when not in rank zero during predict (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11377\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11377\/hovercard\">#11377<\/a>)<\/li>\n<li>Fixed the mid-epoch warning call while resuming training (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11556\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11556\/hovercard\">#11556<\/a>)<\/li>\n<li>Fixed\u00a0<code>LightningModule.{un,}toggle_model<\/code>\u00a0when only 1 optimizer is used (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/12088\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/12088\/hovercard\">#12088<\/a>)<\/li>\n<li>Fixed an issue in\u00a0<code>RichProgressbar<\/code>\u00a0to display the metrics logged only on main progress bar (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11690\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11690\/hovercard\">#11690<\/a>)<\/li>\n<li>Fixed\u00a0<code>RichProgressBar<\/code>\u00a0progress when refresh rate does not evenly divide the total counter (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11668\" data-hovercard-type=\"pull_request\" 
data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11668\/hovercard\">#11668<\/a>)<\/li>\n<li>Fixed\u00a0<code>RichProgressBar<\/code>\u00a0progress validation bar total when using multiple validation runs within a single training epoch (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11668\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11668\/hovercard\">#11668<\/a>)<\/li>\n<li>Configure native Deepspeed schedulers with interval=&#8217;step&#8217; (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11788\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11788\/hovercard\">#11788<\/a>), (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/12031\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/12031\/hovercard\">#12031<\/a>)<\/li>\n<li>Update\u00a0<code>RichProgressBarTheme<\/code>\u00a0styles after detecting light theme on colab (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/10993\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/10993\/hovercard\">#10993<\/a>)<\/li>\n<li>Fixed passing\u00a0<code>_ddp_params_and_buffers_to_ignore<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11949\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11949\/hovercard\">#11949<\/a>)<\/li>\n<li>Fixed an\u00a0<code>AttributeError<\/code>\u00a0when calling\u00a0<code>save_hyperparameters<\/code>\u00a0and no parameters need saving (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11827\" data-hovercard-type=\"pull_request\" 
data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11827\/hovercard\">#11827<\/a>)<\/li>\n<li>Fixed environment variable priority for global rank determination (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11406\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11406\/hovercard\">#11406<\/a>)<\/li>\n<li>Fixed an issue that caused the Trainer to produce identical results on subsequent runs without explicit re-seeding (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11870\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11870\/hovercard\">#11870<\/a>)<\/li>\n<li>Fixed an issue that caused the Tuner to affect the random state (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11870\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/11870\/hovercard\">#11870<\/a>)<\/li>\n<li>Fixed to avoid common hook warning if no hook is overridden (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/12131\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/12131\/hovercard\">#12131<\/a>)<\/li>\n<li>Fixed deepspeed keeping old sub-folders in same ckpt path (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/12194\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/12194\/hovercard\">#12194<\/a>)<\/li>\n<li>Fixed returning logged metrics instead of callback metrics during evaluation (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/12224\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/12224\/hovercard\">#12224<\/a>)<\/li>\n<li>Fixed the case where\u00a0<code>logger=None<\/code>\u00a0is passed to 
the Trainer (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/12249\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/12249\/hovercard\">#12249<\/a>)<\/li>\n<li>Fixed bug where the global step tracked by\u00a0<code>ModelCheckpoint<\/code>\u00a0was still set even if no checkpoint was saved (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/12418\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/12418\/hovercard\">#12418<\/a>)<\/li>\n<li>Fixed bug where\u00a0<code>ModelCheckpoint<\/code>\u00a0was overriding the\u00a0<code>epoch<\/code>\u00a0and\u00a0<code>step<\/code>\u00a0logged values (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/12418\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/12418\/hovercard\">#12418<\/a>)<\/li>\n<li>Fixed bug where monitoring the default\u00a0<code>epoch<\/code>\u00a0and\u00a0<code>step<\/code>\u00a0values with\u00a0<code>ModelCheckpoint<\/code>\u00a0would fail (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/12418\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/12418\/hovercard\">#12418<\/a>)<\/li>\n<li>Fixed initializing optimizers unnecessarily in\u00a0<code>DDPFullyShardedStrategy<\/code>\u00a0(<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/12267\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/12267\/hovercard\">#12267<\/a>)<\/li>\n<li>Fixed check for horovod module (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/12377\" data-hovercard-type=\"pull_request\" data-hovercard-url=\"\/PyTorchLightning\/pytorch-lightning\/pull\/12377\/hovercard\">#12377<\/a>)<\/li>\n<li>Fixed logging to loggers with 
multiple eval dataloaders (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/12454\">#12454<\/a>)<\/li>\n<li>Fixed an issue with resuming from a checkpoint trained with QAT (<a href=\"https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/pull\/11346\">#11346<\/a>)<\/li>\n<\/ul>\n","protected":false}}