{"id":5646481,"date":"2022-08-31T11:15:28","date_gmt":"2022-08-31T15:15:28","guid":{"rendered":"https:\/\/lightning.ai\/pages\/?p=5646481"},"modified":"2022-09-10T11:31:41","modified_gmt":"2022-09-10T15:31:41","slug":"pytorch-lightning-1-7-release","status":"publish","type":"post","link":"https:\/\/lightning.ai\/pages\/community\/lightning-releases\/pytorch-lightning-1-7-release\/","title":{"rendered":"Lightning 1.7: Apple Silicon, Multi-GPU and more"},"content":{"rendered":"<p>We&#8217;re excited to announce the release of PyTorch Lightning 1.7 \u26a1\ufe0f (<span style=\"text-decoration: underline; color: #9329e5;\"><strong><a style=\"color: #9329e5; text-decoration: underline;\" href=\"https:\/\/github.com\/Lightning-AI\/lightning\/releases\/tag\/1.7.0\">release notes!<\/a><\/strong><\/span>).<\/p>\n<p>v1.7 of PyTorch Lightning is the culmination of work from 106 contributors who have worked on features, bug fixes, and documentation for a total of over 492 commits since 1.6.0.<\/p>\n<h1>Highlights<\/h1>\n<ul>\n<li>Support for Apple Silicon<\/li>\n<li>Native FSDP<\/li>\n<li>Newly-enabled support for multi-GPU in notebooks<\/li>\n<li>Collaborative training<\/li>\n<\/ul>\n<p>In addition to a host of bug fixes as well as feature upgrades and implementations, these four highlights embody the latest and greatest aspects of PyTorch Lightning. As models get larger, more complex, and require more resources to train, we all need the ability to train ever-expanding models with more flexible requirements for hardware without sacrificing speed and performance.<\/p>\n<p><span style=\"text-decoration: underline;\"><a href=\"http:\/\/lightning.ai\"><strong><span style=\"color: #a65500; text-decoration: underline;\">Our mission<\/span><\/strong><\/a><\/span> has always been to make machine learning faster, easier, and more accessible, and these four key points of PyTorch Lightning 1.7 reflect that goal. 
Whether it&#8217;s new training strategies or novel ways to interact with your projects, Lightning enables you to build faster for less money.<\/p>\n<h4 style=\"text-align: center;\">. . .<\/h4>\n<h2>Apple Silicon Support<\/h2>\n<h5><strong>What it is<\/strong>: Accelerated GPU training on Apple M1\/M2 machines<\/h5>\n<h5><strong>Why we built it<\/strong>: Apple&#8217;s Metal Performance Shaders (MPS) framework helps you more easily extract data from images, run neural networks, and more.<\/h5>\n<p>For those using\u00a0<span style=\"text-decoration: underline; color: #9329e5;\"><strong><a style=\"color: #9329e5; text-decoration: underline;\" href=\"https:\/\/pytorch.org\/blog\/pytorch-1.12-released\/#prototype-introducing-accelerated-pytorch-training-on-mac\" rel=\"nofollow\">PyTorch 1.12<\/a><\/strong><\/span>\u00a0on M1 or M2 Apple machines, we have created the\u00a0<code>MPSAccelerator<\/code>.\u00a0<code>MPSAccelerator<\/code>\u00a0enables accelerated GPU training using Apple\u2019s Metal Performance Shaders (MPS) as the backend.<\/p>\n<hr \/>\n<p><strong>NOTE<\/strong><\/p>\n<p>Support for this accelerator is currently marked as\u00a0<strong>experimental<\/strong> in PyTorch. Because many operators are still missing, you may run into a few rough edges.<\/p>\n<h4><script src=\"https:\/\/gist.github.com\/c646a498d85977919f5736ff9b637313.js\"><\/script><\/h4>\n<h4 style=\"text-align: center;\">. . 
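.<\/h4>
<p>As a quick sketch (assuming <code>pytorch_lightning<\/code> 1.7+ and a Mac with an MPS-enabled PyTorch 1.12 build; the guarded import is only there so the snippet degrades gracefully elsewhere), selecting the new accelerator is a single <code>Trainer<\/code> flag:<\/p>

```python
# Hedged sketch: select the MPS backend through the Trainer (PL 1.7+, PyTorch 1.12+).
# accelerator='mps' picks the new MPSAccelerator; the try/except only guards
# environments where Lightning or an MPS-enabled torch is unavailable.
trainer_kwargs = {'accelerator': 'mps', 'devices': 1}

try:
    from pytorch_lightning import Trainer
    trainer = Trainer(**trainer_kwargs)
except Exception:
    trainer = None  # Lightning / MPS not available in this environment
```

<p>With that in place, <code>trainer.fit(model)<\/code> runs on the Apple GPU rather than the CPU.<\/p>
<h4 style=\"text-align: center;\">. . 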
.<\/h4>\n<h2>Native Fully Sharded Data Parallel Strategy<\/h2>\n<h5><strong>What it is<\/strong>: Support for FSDP directly within PyTorch<\/h5>\n<h5><strong>Why we built it<\/strong>: This natively supported strategy makes it easier to train large models, saving you time.<\/h5>\n<p>PyTorch 1.12 also added native support for\u00a0<span style=\"text-decoration: underline; color: #9329e5;\"><strong><a style=\"color: #9329e5; text-decoration: underline;\" href=\"https:\/\/pytorch.org\/blog\/introducing-pytorch-fully-sharded-data-parallel-api\" rel=\"nofollow\">Fully Sharded Data Parallel (FSDP)<\/a><\/strong><\/span>. Previously, Lightning enabled this by using the <a href=\"https:\/\/engineering.fb.com\/2021\/07\/15\/open-source\/fsdp\/\" rel=\"nofollow\"><code>fairscale<\/code><\/a>\u00a0project. You can now choose between both options.<\/p>\n<hr \/>\n<p><strong>NOTE<\/strong><\/p>\n<p>Support for this strategy is marked as\u00a0<strong>beta<\/strong> in PyTorch.<\/p>\n<hr \/>\n<h4><script src=\"https:\/\/gist.github.com\/37d9383c1dda119ef75c57865dc58d51.js\"><\/script><\/h4>\n<h4 style=\"text-align: center;\">. . .<\/h4>\n<h2>A Collaborative Training strategy using Hivemind<\/h2>\n<h5><strong>What it is<\/strong>: Easily train across multiple machines.<\/h5>\n<h5><strong>Why we built it<\/strong>: Collaborative training removes the need for, and cost of, expensive multi-GPU servers.<\/h5>\n<p>Collaborative Training eliminates the need for top-tier multi-GPU servers by allowing you to train across unreliable machines, such as local ones or even preemptible cloud compute, over the Internet.<\/p>\n<p>Under the hood, we use\u00a0<span style=\"text-decoration: underline; color: #9329e5;\"><strong><a style=\"color: #9329e5; text-decoration: underline;\" href=\"https:\/\/github.com\/learning-at-home\/hivemind\">Hivemind<\/a><\/strong><\/span>. 
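<\/p>
<p>A minimal sketch of enabling it (assuming <code>pytorch_lightning<\/code> 1.7 with the <code>hivemind<\/code> extra installed; the <code>target_batch_size<\/code> below is an illustrative value, and the guard lets the snippet run without those packages):<\/p>

```python
# Hedged sketch: collaborative training with the HivemindStrategy (PL 1.7+).
# Peers accumulate gradients until 'target_batch_size' samples have been
# processed globally, then take a synchronized optimizer step.
strategy_kwargs = {'target_batch_size': 8192}  # illustrative value

try:
    from pytorch_lightning import Trainer
    from pytorch_lightning.strategies import HivemindStrategy
    trainer = Trainer(strategy=HivemindStrategy(**strategy_kwargs),
                      accelerator='gpu', devices=1)
except Exception:
    trainer = None  # Lightning / hivemind not installed in this environment
```

<p>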
This provides de-centralized training across the Internet.<\/p>\n<script src=\"https:\/\/gist.github.com\/34298735f80e429238fd6de75856b5e3.js\"><\/script>\n<p>For more information, check out the\u00a0<span style=\"text-decoration: underline; color: #9329e5;\"><strong><a style=\"color: #9329e5; text-decoration: underline;\" href=\"https:\/\/pytorch-lightning.readthedocs.io\/en\/1.7.0\/strategies\/hivemind.html\" rel=\"nofollow\">docs<\/a><\/strong><\/span>.<\/p>\n<h4 style=\"text-align: center;\">. . .<\/h4>\n<h2>Distributed support in Jupyter Notebooks<\/h2>\n<h5><strong>What it is<\/strong>: Scale to multiple devices, even when prototyping in Jupyter.<\/h5>\n<h5><strong>Why we built it<\/strong>: Distributed training means faster training \u2014 now available in Jupyter Notebooks.<\/h5>\n<p>So far, the only multi-GPU strategy supported in\u00a0<span style=\"text-decoration: underline; color: #9329e5;\"><strong><a style=\"color: #9329e5; text-decoration: underline;\" href=\"https:\/\/jupyter.org\/\" rel=\"nofollow\">Jupyter notebooks<\/a><\/strong><\/span>\u00a0(including\u00a0<span style=\"text-decoration: underline; color: #9329e5;\"><strong><a style=\"color: #9329e5; text-decoration: underline;\" href=\"https:\/\/www.grid.ai\/\" rel=\"nofollow\">Grid.ai<\/a><\/strong><\/span>,\u00a0<span style=\"text-decoration: underline; color: #9329e5;\"><strong><a style=\"color: #9329e5; text-decoration: underline;\" href=\"https:\/\/research.google.com\/colaboratory\/\" rel=\"nofollow\">Google Colab<\/a><\/strong><\/span>, and\u00a0<span style=\"text-decoration: underline; color: #9329e5;\"><strong><a style=\"color: #9329e5; text-decoration: underline;\" href=\"https:\/\/www.kaggle.com\/\" rel=\"nofollow\">Kaggle<\/a><\/strong><\/span>, for example) has been the\u00a0<span style=\"text-decoration: underline; color: #9329e5;\"><strong><a style=\"color: #9329e5; text-decoration: underline;\" href=\"https:\/\/pytorch.org\/docs\/stable\/generated\/torch.nn.DataParallel.html\" 
rel=\"nofollow\">Data-Parallel<\/a>\u00a0<\/strong><\/span>(DP) strategy (<code>strategy=\"dp\"<\/code>). DP, however, has several limitations that often obstruct users&#8217; workflows. It can be slow, it&#8217;s incompatible with\u00a0<span style=\"text-decoration: underline; color: #9329e5;\"><strong><a style=\"color: #9329e5; text-decoration: underline;\" href=\"https:\/\/torchmetrics.readthedocs.io\/en\/stable\/\" rel=\"nofollow\">TorchMetrics<\/a><\/strong><\/span>, it doesn&#8217;t persist state changes on replicas, and it&#8217;s difficult to use with non-primitive input and output structures.<\/p>\n<p>In this release, we&#8217;ve added support for Distributed Data Parallel in Jupyter notebooks using the fork mechanism to address these shortcomings. This is only available for macOS and Linux (sorry, Windows!).<\/p>\n<hr \/>\n<p><strong>NOTE<\/strong><\/p>\n<p>This feature is\u00a0<strong>experimental<\/strong>.<\/p>\n<hr \/>\n<p>Here is how you use multiple devices in notebooks now:<\/p>\n<script src=\"https:\/\/gist.github.com\/a78c2ad082a5a97ad9f391c6791793e9.js\"><\/script>\n<p>By default, the Trainer detects the interactive environment and selects the right strategy for you. Learn more in the\u00a0<span style=\"text-decoration: underline; color: #9329e5;\"><strong><a style=\"color: #9329e5; text-decoration: underline;\" href=\"https:\/\/pytorch-lightning.readthedocs.io\/en\/1.7.0\/accelerators\/gpu_intermediate.html?highlight=fork#distributed-data-parallel-fork\" rel=\"nofollow\">full documentation<\/a><\/strong><\/span>.<\/p>\n<h4 style=\"text-align: center;\">. . 
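.<\/h4>
<p>Concretely, a sketch of the fork-based strategy (assuming <code>pytorch_lightning<\/code> 1.7 on macOS or Linux with two GPUs; the guard keeps the snippet harmless elsewhere):<\/p>

```python
# Hedged sketch: fork-based multi-GPU DDP inside a notebook.
# 'ddp_notebook' is the strategy alias for interactive environments.
trainer_kwargs = {'accelerator': 'gpu', 'devices': 2, 'strategy': 'ddp_notebook'}

try:
    from pytorch_lightning import Trainer
    trainer = Trainer(**trainer_kwargs)
except Exception:
    trainer = None  # no Lightning installation / no GPUs in this environment
```

<p>Calling <code>trainer.fit(model)<\/code> then forks one process per device instead of falling back to DP.<\/p>
<h4 style=\"text-align: center;\">. . 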
.<\/h4>\n<h2>Other New Features<\/h2>\n<h3>Versioning of &#8220;last&#8221; checkpoints<\/h3>\n<p>If a run is configured to save to the same directory as a previous run and\u00a0<code>ModelCheckpoint(save_last=True)<\/code>\u00a0is enabled, the &#8220;last&#8221; checkpoint is now versioned with a simple\u00a0<code>-v1<\/code> suffix to avoid overwriting the existing &#8220;last&#8221; checkpoint. This mimics the behavior for checkpoints that monitor a metric.<\/p>\n<h3><\/h3>\n<h3>Automatically reload the &#8220;last&#8221; checkpoint<\/h3>\n<p>In certain scenarios, such as when running on a cloud spot instance with fault-tolerant training enabled, it is useful to load the latest available checkpoint. It is now possible to pass the string\u00a0<code>ckpt_path=\"last\"<\/code> in order to load the latest available checkpoint from the set of existing checkpoints.<\/p>\n<h3><script src=\"https:\/\/gist.github.com\/2a675730c21f96d1d6306ee9b9e234a3.js\"><\/script><\/h3>\n<p>&nbsp;<\/p>\n<h3>Validation every N batches across epochs<\/h3>\n<p>In some cases, such as iteration-based training, it is useful to run validation after every\u00a0<code>N<\/code> training batches without being limited by the epoch boundary. Now, you can enable validation based on total training batches.<\/p>\n<script src=\"https:\/\/gist.github.com\/c9e07f7ecf39cfd954a63e8678e825a5.js\"><\/script>\n<p>For example, given 5 epochs of 10 batches, setting\u00a0<code>N=25<\/code>\u00a0would run validation in the 3rd and 5th epoch.<\/p>\n<h3><\/h3>\n<h3>CPU stats monitoring<\/h3>\n<p>Lightning provides the <a href=\"https:\/\/pytorch-lightning.readthedocs.io\/en\/1.7.0\/api\/pytorch_lightning.callbacks.DeviceStatsMonitor.html#pytorch_lightning.callbacks.DeviceStatsMonitor\" rel=\"nofollow\"><code>DeviceStatsMonitor<\/code><\/a> callback to monitor the stats of the hardware currently used. However, users often also want to monitor the stats of other hardware. 
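<\/p>
<p>As a quick sanity check of the validation-interval example above, the epoch arithmetic can be reproduced in plain Python (no Lightning required):<\/p>

```python
# Which epochs contain a validation run for val_check_interval=25,
# given 5 epochs of 10 batches each? Pure arithmetic, no Lightning needed.
epochs, batches_per_epoch, interval = 5, 10, 25
total_batches = epochs * batches_per_epoch

# 1-based global batch indices at which validation triggers: 25 and 50
val_batches = list(range(interval, total_batches + 1, interval))

# Map each trigger back to its 1-based epoch: batch 25 -> epoch 3, batch 50 -> epoch 5
val_epochs = sorted({(b - 1) // batches_per_epoch + 1 for b in val_batches})
print(val_epochs)  # [3, 5]
```

<p>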
In this release, we have added an option to additionally monitor CPU stats:<\/p>\n<script src=\"https:\/\/gist.github.com\/05c619b6612b1aeeca766ef78b208b7b.js\"><\/script>\n<p>The CPU stats are gathered using the\u00a0<a href=\"https:\/\/github.com\/giampaolo\/psutil\"><code>psutil<\/code><\/a>\u00a0package.<\/p>\n<h3><\/h3>\n<h3>Automatic distributed samplers<\/h3>\n<p>It is now possible to use custom samplers in a distributed environment without the need to set\u00a0<code>replace_sampler_ddp=False<\/code>\u00a0and wrap your sampler manually with the\u00a0<a href=\"https:\/\/pytorch.org\/docs\/stable\/data.html#torch.utils.data.distributed.DistributedSampler\" rel=\"nofollow\"><code>DistributedSampler<\/code><\/a>.<\/p>\n<h3><\/h3>\n<h3>Inference mode support<\/h3>\n<p>PyTorch 1.9 introduced\u00a0<a href=\"https:\/\/pytorch.org\/docs\/stable\/generated\/torch.inference_mode.html\" rel=\"nofollow\"><code>torch.inference_mode<\/code><\/a>, which is a faster alternative to\u00a0<a href=\"https:\/\/pytorch.org\/docs\/stable\/generated\/torch.no_grad.html\" rel=\"nofollow\"><code>torch.no_grad<\/code><\/a>. Lightning will now use\u00a0<code>inference_mode<\/code>\u00a0wherever possible during evaluation.<\/p>\n<h3><\/h3>\n<h3>Support for warn-level determinism<\/h3>\n<p>In PyTorch 1.11, operations that do not have a\u00a0<span style=\"text-decoration: underline; color: #9329e5;\"><strong><a style=\"color: #9329e5; text-decoration: underline;\" href=\"https:\/\/pytorch.org\/docs\/stable\/generated\/torch.use_deterministic_algorithms.html#torch.use_deterministic_algorithms\" rel=\"nofollow\">deterministic<\/a><\/strong><\/span>\u00a0implementation can be set to throw a warning instead of an error when run in deterministic mode. 
This is now supported by our\u00a0<code>Trainer<\/code>:<\/p>\n<h3><script src=\"https:\/\/gist.github.com\/1b508af925b1ebe40878296e7cc19307.js\"><\/script><\/h3>\n<p>&nbsp;<\/p>\n<h3>LightningCLI improvements<\/h3>\n<p>After the latest updates to\u00a0<a href=\"https:\/\/github.com\/omni-us\/jsonargparse\"><code>jsonargparse<\/code><\/a>, the library supporting the\u00a0<a href=\"https:\/\/pytorch-lightning.readthedocs.io\/en\/1.7.0\/cli\/lightning_cli.html\" rel=\"nofollow\"><code>LightningCLI<\/code><\/a>, there&#8217;s now complete support for shorthand notation. This includes automatic support for shorthand notation for all arguments, not just the ones that are part of the registries, plus support inside configuration files.<\/p>\n<script src=\"https:\/\/gist.github.com\/ac74a33016c099cc154d69f191324dac.js\"><\/script>\n<p>A header with the version that generated the config is now included.<\/p>\n<p>All subclasses of a given base class can be specified by name, so there&#8217;s no need to explicitly register them. 
The only requirement is that the module where the subclass is defined is imported prior to parsing.<\/p>\n<script src=\"https:\/\/gist.github.com\/9a54bc81658a6326361e351d1a71b1ce.js\"><\/script>\n<p>The new version renders the registries and the\u00a0<code>auto_registry<\/code>\u00a0flag, introduced in 1.6.0, unnecessary, so we have deprecated them.<\/p>\n<p>Support was also added for list appending; for example, to add a callback to an existing list that might already be configured:<\/p>\n<h3><script src=\"https:\/\/gist.github.com\/9908a8a647a15851fc2e5a2bca3ccc17.js\"><\/script><\/h3>\n<p>&nbsp;<\/p>\n<h3>Callback registration through entry points<\/h3>\n<p><span style=\"text-decoration: underline; color: #9329e5;\"><strong><a style=\"color: #9329e5; text-decoration: underline;\" href=\"https:\/\/setuptools.pypa.io\/en\/stable\/userguide\/entry_point.html\" rel=\"nofollow\">Entry Points<\/a><\/strong><\/span>\u00a0are an advanced feature in Python&#8217;s setuptools that allow packages to expose metadata to other packages. In <span style=\"text-decoration: underline; color: #9329e5;\"><strong><a style=\"color: #9329e5; text-decoration: underline;\" href=\"https:\/\/lightning.ai\/lightning-docs\/\">Lightning<\/a><\/strong><\/span>, we allow an arbitrary package to include callbacks that the Lightning Trainer can automatically use when installed, without you having to manually add them to the Trainer. 
This is useful in production environments where it is common to provide specialized monitoring and logging callbacks globally for every application.<\/p>\n<p>A\u00a0<code>setup.py<\/code> file for a callbacks plugin package could look something like this:<\/p>\n<script src=\"https:\/\/gist.github.com\/a57d855dcc7103e279246005e1d5202e.js\"><\/script>\n<p>Read more about callback entry points in our\u00a0<span style=\"text-decoration: underline; color: #9329e5;\"><strong><a style=\"color: #9329e5; text-decoration: underline;\" href=\"https:\/\/pytorch-lightning.readthedocs.io\/en\/1.7.0\/extensions\/entry_points.html?highlight=entry%20points\" rel=\"nofollow\">documentation<\/a><\/strong><\/span>.<\/p>\n<h3><\/h3>\n<h3>Rank-zero only\u00a0<code>EarlyStopping<\/code>\u00a0messages<\/h3>\n<p>Our\u00a0<a href=\"https:\/\/pytorch-lightning.readthedocs.io\/en\/1.7.0\/api\/pytorch_lightning.callbacks.EarlyStopping.html\" rel=\"nofollow\"><code>EarlyStopping<\/code><\/a> callback implementation, by default, logs the stopping messages on every rank when it&#8217;s run in a distributed environment. This was done in case the monitored values were not synchronized. However, some users found this verbose. To avoid this, you can now set a flag:<\/p>\n<h3><script src=\"https:\/\/gist.github.com\/512c1d00cbccf235e1787cd57c2b9ca5.js\"><\/script><\/h3>\n<p>&nbsp;<\/p>\n<h3>A base\u00a0<code>Checkpoint<\/code>\u00a0class for extra customization<\/h3>\n<p>If you want to customize the\u00a0<a href=\"https:\/\/pytorch-lightning.readthedocs.io\/en\/1.7.0\/api\/pytorch_lightning.callbacks.ModelCheckpoint.html#pytorch_lightning.callbacks.ModelCheckpoint\" rel=\"nofollow\"><code>ModelCheckpoint<\/code><\/a>\u00a0callback without all the extra functionality this class provides, this release includes an empty\u00a0<code>Checkpoint<\/code>\u00a0class for easier inheritance. 
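<\/p>
<p>For example (a sketch: the subclass and file name below are hypothetical, and the fallback stub only lets the snippet run without Lightning installed):<\/p>

```python
# Hedged sketch: a lean checkpoint callback built on the new empty base class.
try:
    from pytorch_lightning.callbacks import Checkpoint
except Exception:
    class Checkpoint:  # stand-in when Lightning is not installed
        pass

class EveryEpochCheckpoint(Checkpoint):
    """Hypothetical callback: save one checkpoint after every training epoch."""

    def on_train_epoch_end(self, trainer, pl_module):
        trainer.save_checkpoint('latest.ckpt')  # hypothetical fixed path
```

<p>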
In all internal code, the check is made against the\u00a0<code>Checkpoint<\/code>\u00a0class in order to ensure everything works properly for custom classes.<\/p>\n<h3><\/h3>\n<h3>Validation now runs in overfitting mode<\/h3>\n<p>Setting\u00a0<code>overfit_batches=N<\/code>\u00a0now enables validation and runs\u00a0<code>N<\/code>\u00a0validation batches during\u00a0<code>trainer.fit<\/code>.<\/p>\n<h3><script src=\"https:\/\/gist.github.com\/b5a115041d76edf4f43bbce63398cc3e.js\"><\/script><\/h3>\n<p>&nbsp;<\/p>\n<h3>Device Stats Monitoring support for HPUs<\/h3>\n<p>The <a href=\"https:\/\/pytorch-lightning.readthedocs.io\/en\/1.7.0\/api\/pytorch_lightning.callbacks.DeviceStatsMonitor.html#pytorch_lightning.callbacks.DeviceStatsMonitor\" rel=\"nofollow\"><code>DeviceStatsMonitor<\/code><\/a> callback can now be used to automatically monitor and log device stats during the training stage with Habana devices.<\/p>\n<h2><script src=\"https:\/\/gist.github.com\/a5ce3e4044273f7b4647d4804abf0962.js\"><\/script><\/h2>\n<p>&nbsp;<\/p>\n<h3>New Hooks<\/h3>\n<h3><code>LightningDataModule.load_from_checkpoint<\/code><\/h3>\n<p>Now, hyperparameters from\u00a0<code>LightningDataModule<\/code>\u00a0are saved to checkpoints and reloaded when training is resumed. And just like you use\u00a0<code>LightningModule.load_from_checkpoint<\/code>\u00a0to load a model using a checkpoint filepath, you can now load\u00a0<code>LightningDataModule<\/code> using the same hook.<\/p>\n<h2><script src=\"https:\/\/gist.github.com\/80e7603ab486cc4af785c5d992997300.js\"><\/script><\/h2>\n<h4 style=\"text-align: center;\">. . 
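.<\/h4>
<p>A sketch of the new hook (assuming <code>pytorch_lightning<\/code> 1.7; <code>MyDataModule<\/code> and the checkpoint path are hypothetical, and the guard lets the snippet run without Lightning):<\/p>

```python
# Hedged sketch: LightningDataModule hyperparameters now round-trip
# through checkpoints, mirroring LightningModule.load_from_checkpoint.
try:
    from pytorch_lightning import LightningDataModule

    class MyDataModule(LightningDataModule):  # hypothetical datamodule
        def __init__(self, batch_size: int = 32):
            super().__init__()
            self.save_hyperparameters()  # stored in the checkpoint since 1.7

    # After a run has produced a checkpoint (hypothetical path):
    # dm = MyDataModule.load_from_checkpoint('some/checkpoint.ckpt')
    has_hook = hasattr(MyDataModule, 'load_from_checkpoint')
except Exception:
    has_hook = True  # Lightning not installed in this environment
```

<h4 style=\"text-align: center;\">. . 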
.<\/h4>\n<h2>Experimental Features<\/h2>\n<p>&nbsp;<\/p>\n<h3>ServableModule and its Servable Module Validator Callback<\/h3>\n<p>When serving models in production, it is generally good practice to ensure that the model can be served and optimized before starting training, to avoid wasting money.<\/p>\n<p>To do so, you can import a\u00a0<code>ServableModule<\/code>\u00a0(an\u00a0<code>nn.Module<\/code>) and add it as an extra base class to your base model as follows:<\/p>\n<script src=\"https:\/\/gist.github.com\/56be95b7eced84f771bd60b27aa91be0.js\"><\/script>\n<p>To make your model servable, you would need to implement three hooks:<\/p>\n<ul>\n<li><code>configure_payload<\/code>: Describe the format of the payload (data sent to the server).<\/li>\n<li><code>configure_serialization<\/code>: Describe the functions used to convert the payload to tensors (de-serialization) and tensors to payload (serialization).<\/li>\n<li><code>serve_step<\/code>: The method used to transform the input tensors to a dictionary of prediction tensors.<\/li>\n<\/ul>\n<script src=\"https:\/\/gist.github.com\/5bc17160352dbc8927365341154ec5c0.js\"><\/script>\n<p>Finally, add the\u00a0<code>ServableModuleValidator<\/code>\u00a0callback to the Trainer to validate that the model is servable during\u00a0<code>on_train_start<\/code>. 
This uses a\u00a0<span style=\"text-decoration: underline; color: #9329e5;\"><strong><a style=\"color: #9329e5; text-decoration: underline;\" href=\"https:\/\/fastapi.tiangolo.com\/\" rel=\"nofollow\">FastAPI<\/a><\/strong><\/span> server.<\/p>\n<script src=\"https:\/\/gist.github.com\/b558a03cdcb53139f932222b061d1489.js\"><\/script>\n<p>Have a look at the full example\u00a0<span style=\"text-decoration: underline; color: #9329e5;\"><strong><a style=\"color: #9329e5; text-decoration: underline;\" href=\"https:\/\/pytorch-lightning.readthedocs.io\/en\/1.7.0\/deploy\/production_advanced.html#validate-a-model-is-servable\" rel=\"nofollow\">here<\/a><\/strong><\/span>.<\/p>\n<h3><\/h3>\n<h3>Asynchronous Checkpointing<\/h3>\n<p>You can now save checkpoints asynchronously using the\u00a0<a href=\"https:\/\/pytorch-lightning.readthedocs.io\/en\/1.7.0\/api\/pytorch_lightning.plugins.io.AsyncCheckpointIO.html\" rel=\"nofollow\"><code>AsyncCheckpointIO<\/code><\/a>\u00a0plugin without blocking your training process. To enable this, you can pass an\u00a0<code>AsyncCheckpointIO<\/code>\u00a0plugin to the\u00a0<code>Trainer<\/code>.<\/p>\n<script src=\"https:\/\/gist.github.com\/2b42298e5928cb4e52d48d2981b713d4.js\"><\/script>\n<p>Have a look at the full example\u00a0<span style=\"text-decoration: underline; color: #9329e5;\"><strong><a style=\"color: #9329e5; text-decoration: underline;\" href=\"https:\/\/pytorch-lightning.readthedocs.io\/en\/1.7.0\/common\/checkpointing_expert.html#asynchronous-checkpointing\" rel=\"nofollow\">here<\/a><\/strong><\/span>.<\/p>\n<p>&nbsp;<\/p>\n<p>We&#8217;re very excited about this new release, and we hope you enjoy it. Stay tuned for upcoming posts where we will dive deeper into some of its key features. 
If you have any feedback, we&#8217;d love to hear from you on the <span style=\"text-decoration: underline;\"><strong><a href=\"https:\/\/pytorch-lightning.slack.com\/join\/shared_invite\/zt-1dm4phlc0-84Jv9_8Mp_tWraICOJ467Q\">Community Slack<\/a><\/strong><\/span>!<\/p>\n","protected":false},"excerpt":{"rendered":"<p>We&#8217;re excited to announce the release of PyTorch Lightning 1.7 \u26a1\ufe0f (release notes!). v1.7 of PyTorch Lightning is the culmination of work from 106 contributors who have worked on features, bug fixes, and documentation for a total of over 492 commits since 1.6.0. Highlights Support for Apple Silicon Native FSDP Newly-enabled support for multi-GPU in<a class=\"excerpt-read-more\" href=\"https:\/\/lightning.ai\/pages\/community\/lightning-releases\/pytorch-lightning-1-7-release\/\" title=\"ReadLightning 1.7: Apple Silicon, Multi-GPU and more\">&#8230; Read more &raquo;<\/a><\/p>\n","protected":false},"author":16,"featured_media":5646568,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"inline_featured_image":false,"footnotes":"","_links_to":"","_links_to_target":""},"categories":[104],"tags":[96,61,97],"glossary":[],"acf":{"hide_from_archive":null,"content_type":null,"code_embed":null,"code_shortcode":null,"custom_styles":null,"sticky":null,"additional_authors":null,"mathjax":null,"default_editor":null,"sections":null,"show_table_of_contents":null,"table_of_contents":null,"tabs":null,"tab_group":null},"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v24.5 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Lightning 1.7: Apple Silicon, Multi-GPU and more<\/title>\n<meta name=\"description\" content=\"The release of Lightning 1.7 includes Apple Silicon support, native FSDP, and multi-GPU support for notebooks.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" 
href=\"https:\/\/lightning.ai\/pages\/community\/lightning-releases\/pytorch-lightning-1-7-release\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Lightning 1.7: Apple Silicon, Multi-GPU and more\" \/>\n<meta property=\"og:description\" content=\"The release of Lightning 1.7 includes Apple Silicon support, native FSDP, and multi-GPU support for notebooks.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/lightning.ai\/pages\/community\/lightning-releases\/pytorch-lightning-1-7-release\/\" \/>\n<meta property=\"og:site_name\" content=\"Lightning AI\" \/>\n<meta property=\"article:published_time\" content=\"2022-08-31T15:15:28+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2022-09-10T15:31:41+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2022\/08\/Lightning-17.png\" \/>\n\t<meta property=\"og:image:width\" content=\"2320\" \/>\n\t<meta property=\"og:image:height\" content=\"1200\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/png\" \/>\n<meta name=\"author\" content=\"JP Hennessy\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@LightningAI\" \/>\n<meta name=\"twitter:site\" content=\"@LightningAI\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"JP Hennessy\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"7 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/lightning.ai\/pages\/community\/lightning-releases\/pytorch-lightning-1-7-release\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/lightning.ai\/pages\/community\/lightning-releases\/pytorch-lightning-1-7-release\/\"},\"author\":{\"name\":\"JP Hennessy\",\"@id\":\"https:\/\/lightning.ai\/pages\/#\/schema\/person\/2518f4d5541f8e98016f6289169141a6\"},\"headline\":\"Lightning 1.7: Apple Silicon, Multi-GPU and more\",\"datePublished\":\"2022-08-31T15:15:28+00:00\",\"dateModified\":\"2022-09-10T15:31:41+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/lightning.ai\/pages\/community\/lightning-releases\/pytorch-lightning-1-7-release\/\"},\"wordCount\":1685,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\/\/lightning.ai\/pages\/#organization\"},\"image\":{\"@id\":\"https:\/\/lightning.ai\/pages\/community\/lightning-releases\/pytorch-lightning-1-7-release\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2022\/08\/Lightning-17.png\",\"keywords\":[\"ai\",\"lightning\",\"ml\"],\"articleSection\":[\"Lightning Releases\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\/\/lightning.ai\/pages\/community\/lightning-releases\/pytorch-lightning-1-7-release\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/lightning.ai\/pages\/community\/lightning-releases\/pytorch-lightning-1-7-release\/\",\"url\":\"https:\/\/lightning.ai\/pages\/community\/lightning-releases\/pytorch-lightning-1-7-release\/\",\"name\":\"Lightning 1.7: Apple Silicon, Multi-GPU and 
more\",\"isPartOf\":{\"@id\":\"https:\/\/lightning.ai\/pages\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/lightning.ai\/pages\/community\/lightning-releases\/pytorch-lightning-1-7-release\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/lightning.ai\/pages\/community\/lightning-releases\/pytorch-lightning-1-7-release\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2022\/08\/Lightning-17.png\",\"datePublished\":\"2022-08-31T15:15:28+00:00\",\"dateModified\":\"2022-09-10T15:31:41+00:00\",\"description\":\"The release of Lightning 1.7 includes Apple Silicon support, native FSDP, and multi-GPU support for notebooks.\",\"breadcrumb\":{\"@id\":\"https:\/\/lightning.ai\/pages\/community\/lightning-releases\/pytorch-lightning-1-7-release\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/lightning.ai\/pages\/community\/lightning-releases\/pytorch-lightning-1-7-release\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/lightning.ai\/pages\/community\/lightning-releases\/pytorch-lightning-1-7-release\/#primaryimage\",\"url\":\"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2022\/08\/Lightning-17.png\",\"contentUrl\":\"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2022\/08\/Lightning-17.png\",\"width\":2320,\"height\":1200},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/lightning.ai\/pages\/community\/lightning-releases\/pytorch-lightning-1-7-release\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/lightning.ai\/pages\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Lightning 1.7: Apple Silicon, Multi-GPU and more\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/lightning.ai\/pages\/#website\",\"url\":\"https:\/\/lightning.ai\/pages\/\",\"name\":\"Lightning AI\",\"description\":\"The platform for teams to build 
AI.\",\"publisher\":{\"@id\":\"https:\/\/lightning.ai\/pages\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/lightning.ai\/pages\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/lightning.ai\/pages\/#organization\",\"name\":\"Lightning AI\",\"url\":\"https:\/\/lightning.ai\/pages\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/lightning.ai\/pages\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/02\/image-17.png\",\"contentUrl\":\"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/02\/image-17.png\",\"width\":1744,\"height\":856,\"caption\":\"Lightning AI\"},\"image\":{\"@id\":\"https:\/\/lightning.ai\/pages\/#\/schema\/logo\/image\/\"},\"sameAs\":[\"https:\/\/x.com\/LightningAI\"]},{\"@type\":\"Person\",\"@id\":\"https:\/\/lightning.ai\/pages\/#\/schema\/person\/2518f4d5541f8e98016f6289169141a6\",\"name\":\"JP Hennessy\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/lightning.ai\/pages\/#\/schema\/person\/image\/\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/28ade268218ae45f723b0b62499f527a?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/28ade268218ae45f723b0b62499f527a?s=96&d=mm&r=g\",\"caption\":\"JP Hennessy\"},\"url\":\"https:\/\/lightning.ai\/pages\/author\/jplightning-ai\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. 
-->","yoast_head_json":{"title":"Lightning 1.7: Apple Silicon, Multi-GPU and more","description":"The release of Lightning 1.7 includes Apple Silicon support, native FSDP, and multi-GPU support for notebooks.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/lightning.ai\/pages\/community\/lightning-releases\/pytorch-lightning-1-7-release\/","og_locale":"en_US","og_type":"article","og_title":"Lightning 1.7: Apple Silicon, Multi-GPU and more","og_description":"The release of Lightning 1.7 includes Apple Silicon support, native FSDP, and multi-GPU support for notebooks.","og_url":"https:\/\/lightning.ai\/pages\/community\/lightning-releases\/pytorch-lightning-1-7-release\/","og_site_name":"Lightning AI","article_published_time":"2022-08-31T15:15:28+00:00","article_modified_time":"2022-09-10T15:31:41+00:00","og_image":[{"width":2320,"height":1200,"url":"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2022\/08\/Lightning-17.png","type":"image\/png"}],"author":"JP Hennessy","twitter_card":"summary_large_image","twitter_creator":"@LightningAI","twitter_site":"@LightningAI","twitter_misc":{"Written by":"JP Hennessy","Est. 
reading time":"7 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/lightning.ai\/pages\/community\/lightning-releases\/pytorch-lightning-1-7-release\/#article","isPartOf":{"@id":"https:\/\/lightning.ai\/pages\/community\/lightning-releases\/pytorch-lightning-1-7-release\/"},"author":{"name":"JP Hennessy","@id":"https:\/\/lightning.ai\/pages\/#\/schema\/person\/2518f4d5541f8e98016f6289169141a6"},"headline":"Lightning 1.7: Apple Silicon, Multi-GPU and more","datePublished":"2022-08-31T15:15:28+00:00","dateModified":"2022-09-10T15:31:41+00:00","mainEntityOfPage":{"@id":"https:\/\/lightning.ai\/pages\/community\/lightning-releases\/pytorch-lightning-1-7-release\/"},"wordCount":1685,"commentCount":0,"publisher":{"@id":"https:\/\/lightning.ai\/pages\/#organization"},"image":{"@id":"https:\/\/lightning.ai\/pages\/community\/lightning-releases\/pytorch-lightning-1-7-release\/#primaryimage"},"thumbnailUrl":"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2022\/08\/Lightning-17.png","keywords":["ai","lightning","ml"],"articleSection":["Lightning Releases"],"inLanguage":"en-US","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/lightning.ai\/pages\/community\/lightning-releases\/pytorch-lightning-1-7-release\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/lightning.ai\/pages\/community\/lightning-releases\/pytorch-lightning-1-7-release\/","url":"https:\/\/lightning.ai\/pages\/community\/lightning-releases\/pytorch-lightning-1-7-release\/","name":"Lightning 1.7: Apple Silicon, Multi-GPU and 
more","isPartOf":{"@id":"https:\/\/lightning.ai\/pages\/#website"},"primaryImageOfPage":{"@id":"https:\/\/lightning.ai\/pages\/community\/lightning-releases\/pytorch-lightning-1-7-release\/#primaryimage"},"image":{"@id":"https:\/\/lightning.ai\/pages\/community\/lightning-releases\/pytorch-lightning-1-7-release\/#primaryimage"},"thumbnailUrl":"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2022\/08\/Lightning-17.png","datePublished":"2022-08-31T15:15:28+00:00","dateModified":"2022-09-10T15:31:41+00:00","description":"The release of Lightning 1.7 includes Apple Silicon support, native FDSP, and multi-gpu support for notebooks.","breadcrumb":{"@id":"https:\/\/lightning.ai\/pages\/community\/lightning-releases\/pytorch-lightning-1-7-release\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/lightning.ai\/pages\/community\/lightning-releases\/pytorch-lightning-1-7-release\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/lightning.ai\/pages\/community\/lightning-releases\/pytorch-lightning-1-7-release\/#primaryimage","url":"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2022\/08\/Lightning-17.png","contentUrl":"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2022\/08\/Lightning-17.png","width":2320,"height":1200},{"@type":"BreadcrumbList","@id":"https:\/\/lightning.ai\/pages\/community\/lightning-releases\/pytorch-lightning-1-7-release\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/lightning.ai\/pages\/"},{"@type":"ListItem","position":2,"name":"Lightning 1.7: Apple Silicon, Multi-GPU and more"}]},{"@type":"WebSite","@id":"https:\/\/lightning.ai\/pages\/#website","url":"https:\/\/lightning.ai\/pages\/","name":"Lightning AI","description":"The platform for teams to build 
AI.","publisher":{"@id":"https:\/\/lightning.ai\/pages\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/lightning.ai\/pages\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/lightning.ai\/pages\/#organization","name":"Lightning AI","url":"https:\/\/lightning.ai\/pages\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/lightning.ai\/pages\/#\/schema\/logo\/image\/","url":"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/02\/image-17.png","contentUrl":"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/02\/image-17.png","width":1744,"height":856,"caption":"Lightning AI"},"image":{"@id":"https:\/\/lightning.ai\/pages\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/x.com\/LightningAI"]},{"@type":"Person","@id":"https:\/\/lightning.ai\/pages\/#\/schema\/person\/2518f4d5541f8e98016f6289169141a6","name":"JP Hennessy","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/lightning.ai\/pages\/#\/schema\/person\/image\/","url":"https:\/\/secure.gravatar.com\/avatar\/28ade268218ae45f723b0b62499f527a?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/28ade268218ae45f723b0b62499f527a?s=96&d=mm&r=g","caption":"JP 
Hennessy"},"url":"https:\/\/lightning.ai\/pages\/author\/jplightning-ai\/"}]}},"_links":{"self":[{"href":"https:\/\/lightning.ai\/pages\/wp-json\/wp\/v2\/posts\/5646481"}],"collection":[{"href":"https:\/\/lightning.ai\/pages\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/lightning.ai\/pages\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/lightning.ai\/pages\/wp-json\/wp\/v2\/users\/16"}],"replies":[{"embeddable":true,"href":"https:\/\/lightning.ai\/pages\/wp-json\/wp\/v2\/comments?post=5646481"}],"version-history":[{"count":0,"href":"https:\/\/lightning.ai\/pages\/wp-json\/wp\/v2\/posts\/5646481\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/lightning.ai\/pages\/wp-json\/wp\/v2\/media\/5646568"}],"wp:attachment":[{"href":"https:\/\/lightning.ai\/pages\/wp-json\/wp\/v2\/media?parent=5646481"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/lightning.ai\/pages\/wp-json\/wp\/v2\/categories?post=5646481"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/lightning.ai\/pages\/wp-json\/wp\/v2\/tags?post=5646481"},{"taxonomy":"glossary","embeddable":true,"href":"https:\/\/lightning.ai\/pages\/wp-json\/wp\/v2\/glossary?post=5646481"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}