{"id":5648737,"date":"2023-09-07T07:12:57","date_gmt":"2023-09-07T11:12:57","guid":{"rendered":"https:\/\/lightning.ai\/pages\/?p=5648737"},"modified":"2023-09-07T16:06:10","modified_gmt":"2023-09-07T20:06:10","slug":"pytorch-lightning-and-fabric","status":"publish","type":"post","link":"https:\/\/lightning.ai\/pages\/blog\/pytorch-lightning-and-fabric\/","title":{"rendered":"When to Use PyTorch Lightning or Lightning Fabric"},"content":{"rendered":"<div class=\"takeaways card-glow p-4 my-4\"><h3 class=\"w-100 d-block\">Takeaways<\/h3> <span style=\"font-weight: 400;\">Discover how to use PyTorch Lightning or Lightning Fabric to pretrain your own custom models or finetune LLMs like <\/span><a href=\"https:\/\/about.fb.com\/news\/2023\/08\/code-llama-ai-for-coding\/\"><span style=\"font-weight: 400;\">Code Llama<\/span><\/a><span style=\"font-weight: 400;\"> and <\/span><a href=\"https:\/\/ai.meta.com\/llama\/\"><span style=\"font-weight: 400;\">Llama 2<\/span><\/a><span style=\"font-weight: 400;\">.<\/span> <\/div>\n<h2>Introduction<\/h2>\n<p><a href=\"https:\/\/lightning.ai\/docs\/pytorch\/latest\/\" target=\"_blank\" rel=\"noopener\">PyTorch Lightning<\/a> and <a href=\"https:\/\/lightning.ai\/docs\/fabric\/stable\/\" target=\"_blank\" rel=\"noopener\">Lightning Fabric<\/a> enable researchers and machine learning engineers to train PyTorch models at scale. Both frameworks do the heavy lifting for you and orchestrate training across multi-GPU and multi-node environments. All you need to bring is a PyTorch module! And maybe a GPU \ud83d\ude06. So, why are there two frameworks?<\/p>\n<p>Short answer: varying levels of abstraction and control provide the versatility you need for a great development experience.<\/p>\n<p>In other words \u2013 we\u2019ve built some really useful features for you so that you don\u2019t have to. You can choose either framework based on the amount of abstraction and control that you need or want. 
This is all about placing the power in your hands to get the job done in the way that works best for you.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-5648753 size-large\" src=\"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/09\/PyTorch-Lightning-and-Fabric-1024x576.png\" alt=\"\" width=\"1024\" height=\"576\" srcset=\"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/09\/PyTorch-Lightning-and-Fabric-1024x576.png 1024w, https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/09\/PyTorch-Lightning-and-Fabric-300x169.png 300w, https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/09\/PyTorch-Lightning-and-Fabric-1536x864.png 1536w, https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/09\/PyTorch-Lightning-and-Fabric.png 1600w, https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/09\/PyTorch-Lightning-and-Fabric-300x169@2x.png 600w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" \/><\/p>\n<h2>Similarities of PyTorch Lightning and Lightning Fabric<\/h2>\n<p>Becoming familiar with the shared features is a good starting point for understanding why we have two frameworks that focus on model training. Why do they share features? Because they are both built with the same tasks in mind, and we want to make the development experience as seamless as possible for you.<\/p>\n<p>A common benefit of PyTorch Lightning and Lightning Fabric is that both frameworks enable researchers and machine learning engineers to train in multi-device and multi-node environments with common flags such as devices, num_nodes, and strategy. Here, devices is either the number of devices (CPUs, GPUs, TPUs, or others) to train on, or the specific devices to train on in a multi-device environment. The num_nodes flag sets the number of machines (nodes) in a multi-node environment. And strategy selects the distributed training strategy, e.g. 
DDP, FSDP, or DeepSpeed.<\/p>\n<div class=\"perfect-pullquote vcard pullquote-align-full pullquote-border-placement-left\"><blockquote><p>Use the accelerator flag along with devices to set the device type, i.e. CPU, GPU, or TPU.<br \/>\n<\/p><\/blockquote><\/div>\n<p>Why is this a good thing? The ecosystem has a standard naming convention, so we can easily port concepts from one framework to the other. That means we won\u2019t have to learn new argument names to do the same task.<\/p>\n<div class=\"tabs-wrap\"><div class=\"tab-group\"><button class=\"btn active\">PyTorch Lightning<\/button><button class=\"btn\">Lightning Fabric<\/button><\/div><div class=\"tab-content\"><div class=\"tab-item active\"><pre><pre class=\"code-shortcode dark-theme window- collapse-false \" style=\"--height:falsepx\"><code class=\"language-python \">\r\nimport pytorch_lightning as pl\r\n\r\ntrainer = pl.Trainer(\r\n    accelerator=\"gpu\",\r\n    devices=2,\r\n    num_nodes=1,\r\n    strategy=\"ddp\",\r\n)<\/pre>\n<\/code><div class=\"copy-button\"><button class=\"expand-button\">Expand<\/button><button class=\"copy\">Copy<\/button><\/div><\/pre><\/p>\n<\/div><div class=\"tab-item\"><pre><pre class=\"code-shortcode dark-theme window- collapse-false \" style=\"--height:falsepx\"><code class=\"language-python \">\r\nfrom lightning_fabric import Fabric\r\n\r\nfabric = Fabric(\r\n    accelerator=\"gpu\",\r\n    devices=2,\r\n    num_nodes=1,\r\n    strategy=\"ddp\",\r\n)<\/pre>\n<\/code><div class=\"copy-button\"><button class=\"expand-button\">Expand<\/button><button class=\"copy\">Copy<\/button><\/div><\/pre><\/p>\n<\/div><\/div><\/div>\n<h2>When to Use PyTorch Lightning or Lightning Fabric<\/h2>\n<p>So, what makes the two frameworks different? Lightning Fabric offers more control by integrating directly into the PyTorch training loop, while PyTorch Lightning offers a fully managed training solution with Trainer, LightningModule, and LightningDataModule. 
The image shown below represents this relationship to the PyTorch training loop, and the number of features that we\u2019ve built for you in either framework.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-large wp-image-5648742\" src=\"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/09\/Screenshot-2023-09-05-at-12.16.41-PM-1024x189.png\" alt=\"\" width=\"1024\" height=\"189\" srcset=\"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/09\/Screenshot-2023-09-05-at-12.16.41-PM-1024x189.png 1024w, https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/09\/Screenshot-2023-09-05-at-12.16.41-PM-300x55.png 300w, https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/09\/Screenshot-2023-09-05-at-12.16.41-PM.png 1442w, https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/09\/Screenshot-2023-09-05-at-12.16.41-PM-300x55@2x.png 600w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" \/><\/p>\n<p>Now, let\u2019s discuss some of the trade-offs implied by the abstraction-and-control spectrum shown in the image \u2013 and how these trade-offs may affect which framework to choose.<\/p>\n<h3>Lightning Fabric<\/h3>\n<p>Lightning Fabric is a lightweight framework that has just one core API class \u2013 Fabric. That core class handles tasks such as autocasting, broadcasting, gathering, and checkpoint loading and saving. Fabric has what you need to run training at scale without a tedious refactoring process. 
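To make the checkpointing part concrete, here is a minimal sketch of the save-and-resume pattern that Fabric's helpers manage for you. This example uses plain torch.save\/torch.load for illustration; with Fabric you would pass the same kind of state dict to fabric.save and fabric.load. The model, path, and "step" counter below are illustrative, not part of any Lightning API.

```python
# Illustrative sketch (plain PyTorch) of the checkpoint save/load pattern
# that Fabric's helpers wrap. With Fabric, the equivalent calls would be
# fabric.save(path, state) and fabric.load(path, state) on the same dict.
import os
import tempfile

import torch

model = torch.nn.Linear(4, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# Bundle everything needed to resume training into a single state dict.
state = {
    "model": model.state_dict(),
    "optimizer": optimizer.state_dict(),
    "step": 100,  # hypothetical training-progress counter
}

path = os.path.join(tempfile.mkdtemp(), "checkpoint.pt")
torch.save(state, path)

# Later (or in another process): rebuild the objects, then restore their state.
restored = torch.load(path)
model.load_state_dict(restored["model"])
optimizer.load_state_dict(restored["optimizer"])
```

The design point is that the whole training state travels as one dictionary, which is what lets a framework like Fabric shard, gather, or broadcast it transparently in distributed settings.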
The image below shows how easy it is to integrate Lightning Fabric into existing PyTorch training logic.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-5648752 size-large\" src=\"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/09\/Screenshot-2023-09-07-at-3.51.49-PM-1024x736.png\" alt=\"\" width=\"1024\" height=\"736\" srcset=\"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/09\/Screenshot-2023-09-07-at-3.51.49-PM-1024x736.png 1024w, https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/09\/Screenshot-2023-09-07-at-3.51.49-PM-300x216.png 300w, https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/09\/Screenshot-2023-09-07-at-3.51.49-PM.png 1416w, https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/09\/Screenshot-2023-09-07-at-3.51.49-PM-300x216@2x.png 600w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" \/><\/p>\n<p>The code snippet above, with a highlighted diff, illustrates how few changes are needed to migrate vanilla PyTorch code to Fabric. 
Beyond this minor code change (nothing more is needed except setting some arguments\/parameters), you gain all the power of Fabric &#8211; mixed precision, scaling to multiple devices\/nodes, checkpointing, and much more!<\/p>\n<p>When should you use Lightning Fabric?<\/p>\n<ol>\n<li style=\"list-style-type: none;\">\n<ol>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">When you have an existing PyTorch training loop and need to scale it.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">When you are an experienced PyTorch or PyTorch Lightning user who wants to stay close to the training loop.<\/span><\/li>\n<\/ol>\n<\/li>\n<\/ol>\n<div class=\"perfect-pullquote vcard pullquote-align-full pullquote-border-placement-left\"><blockquote><p><\/p>\n<p><span style=\"font-weight: 400;\">See <\/span><a href=\"https:\/\/github.com\/Lightning-AI\/lit-gpt\"><span style=\"font-weight: 400;\">Lit-GPT<\/span><\/a><span style=\"font-weight: 400;\"> and <\/span><a href=\"https:\/\/github.com\/Eclectic-Sheep\/sheeprl\"><span style=\"font-weight: 400;\">SheepRL<\/span><\/a><span style=\"font-weight: 400;\"> for Lightning Fabric use cases.<\/span><\/p>\n<p><\/p><\/blockquote><\/div>\n<pre class=\"code-shortcode dark-theme window- collapse-false \" style=\"--height:falsepx\"><code class=\"language-python \">\n\n<pre>import torch\r\nfrom lightning_fabric import Fabric\r\nfrom pytorch_lightning.demos import WikiText2, Transformer\r\n\r\n\r\nfabric = Fabric(\r\n    accelerator=\"gpu\",\r\n    devices=2,\r\n    num_nodes=1,\r\n    strategy=\"ddp\",\r\n)\r\nfabric.launch()\r\n\r\ndataset = WikiText2()\r\ndataloader = torch.utils.data.DataLoader(dataset)\r\nmodel = Transformer(vocab_size=dataset.vocab_size)\r\noptimizer = torch.optim.SGD(model.parameters(), lr=0.1)\r\n\r\nmodel, optimizer = fabric.setup(model, optimizer)\r\ndataloader = 
fabric.setup_dataloaders(dataloader)\r\n\r\nmodel.train()\r\nfor epoch in range(20):\r\n    for batch in dataloader:\r\n        inputs, target = batch\r\n        optimizer.zero_grad()\r\n        output = model(inputs, target)\r\n        loss = torch.nn.functional.nll_loss(output, target.view(-1))\r\n        fabric.backward(loss)\r\n        optimizer.step()\r\n<\/pre>\n<\/code><div class=\"copy-button\"><button class=\"expand-button\">Expand<\/button><button class=\"copy\">Copy<\/button><\/div><\/pre>\n<h3>PyTorch Lightning<\/h3>\n<p>PyTorch Lightning\u2019s core API consists of three classes \u2013 LightningModule, Trainer, and LightningDataModule. Trainer offers a robust managed training experience, LightningModule wraps PyTorch\u2019s nn.Module with several methods to clearly define the training process, and LightningDataModule encapsulates all the data processing. These three core classes pack a lot of functionality into PyTorch Lightning and enable research such as Stability AI\u2019s <a href=\"https:\/\/github.com\/Stability-AI\/generative-models\/tree\/main\" target=\"_blank\" rel=\"noopener\">Generative Models<\/a>.<\/p>\n<p>Why would you choose PyTorch Lightning over raw PyTorch or Lightning Fabric?<\/p>\n<ol>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">When you use mainstream models and\/or standard training loops.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">When your goal is simple, boilerplate-free code that is easy to share.<\/span><\/li>\n<\/ol>\n<div class=\"perfect-pullquote vcard pullquote-align-full pullquote-border-placement-left\"><blockquote><p><\/p>\n<p><span style=\"font-weight: 400;\">See <\/span><a href=\"https:\/\/github.com\/lightly-ai\/lightly\"><span style=\"font-weight: 400;\">Lightly<\/span><\/a><span style=\"font-weight: 400;\">, <\/span><a href=\"https:\/\/github.com\/asteroid-team\/asteroid\"><span 
style=\"font-weight: 400;\">Asteroid<\/span><\/a><span style=\"font-weight: 400;\">, <\/span><a href=\"https:\/\/github.com\/jdb78\/pytorch-forecasting\"><span style=\"font-weight: 400;\">PyTorch Forecasting<\/span><\/a><span style=\"font-weight: 400;\">, <\/span><a href=\"https:\/\/github.com\/scverse\/scvi-tools\"><span style=\"font-weight: 400;\">scvi-tools<\/span><\/a><span style=\"font-weight: 400;\">, and <\/span><a href=\"https:\/\/github.com\/HazyResearch\/hyena-dna\/\"><span style=\"font-weight: 400;\">Hyena-DNA<\/span><\/a><span style=\"font-weight: 400;\"> for additional PyTorch Lightning use cases.<\/span><\/p>\n<p><\/p><\/blockquote><\/div>\n<pre class=\"code-shortcode dark-theme window- collapse-670 \" style=\"--height:670px\"><code class=\"language-python \">\n\n<pre>import torch\r\nfrom torch.utils.data import random_split, DataLoader\r\n\r\nimport pytorch_lightning as pl\r\nfrom pytorch_lightning import Trainer\r\nfrom pytorch_lightning.demos import Transformer, WikiText2\r\n\r\n\r\nclass LightningTransformer(pl.LightningModule):\r\n    def __init__(self, vocab_size):\r\n        super().__init__()\r\n        self.model = Transformer(vocab_size=vocab_size)\r\n\r\n    def forward(self, batch):\r\n        inputs, target = batch\r\n        return self.model(inputs.view(1, -1), target.view(1, -1))\r\n\r\n    def training_step(self, batch, batch_idx):\r\n        inputs, target = batch\r\n        output = self.model(inputs.view(1, -1), target.view(1, -1))\r\n        loss = torch.nn.functional.nll_loss(output, target.view(-1))\r\n        return loss\r\n\r\n    def configure_optimizers(self):\r\n        return torch.optim.SGD(self.model.parameters(), lr=0.1)\r\n\r\n\r\nif __name__ == \"__main__\":\r\n    dataset = WikiText2()\r\n    train, val = random_split(dataset, [0.8, 0.2])\r\n    train_dataloader = DataLoader(train)\r\n    val_dataloader = DataLoader(val)\r\n\r\n    model = LightningTransformer(vocab_size=dataset.vocab_size)\r\n\r\n    trainer = 
Trainer(\r\n        max_epochs=1,\r\n        accelerator=\"gpu\",\r\n        devices=2,\r\n        num_nodes=1,\r\n        strategy=\"ddp\",\r\n    )\r\n    trainer.fit(\r\n        model,\r\n        train_dataloaders=train_dataloader,\r\n        val_dataloaders=val_dataloader,\r\n    )\r\n<\/pre>\n<\/code><div class=\"copy-button\"><button class=\"expand-button\">Expand<\/button><button class=\"copy\">Copy<\/button><\/div><\/pre>\n<h2>Powered by Lightning AI<\/h2>\n<p>Together with <a href=\"https:\/\/torchmetrics.readthedocs.io\/en\/stable\/\" target=\"_blank\" rel=\"noopener\">TorchMetrics<\/a>, PyTorch Lightning and Lightning Fabric exist as part of an ecosystem that drives the latest research and brings it to production. The following industry and community projects are powered by Lightning AI\u2019s open source frameworks under the hood:<\/p>\n<ul>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><a href=\"https:\/\/github.com\/Lightning-AI\/lit-gpt\" target=\"_blank\" rel=\"noopener\"><span style=\"font-weight: 400;\">Lit-GPT<\/span><\/a><span style=\"font-weight: 400;\"> is a hackable implementation of state-of-the-art open-source LLMs built by Lightning AI.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><a href=\"https:\/\/github.com\/jzhang38\/TinyLlama\" target=\"_blank\" rel=\"noopener\"><span style=\"font-weight: 400;\">TinyLlama<\/span><\/a>,<span style=\"font-weight: 400;\">\u00a0by <a href=\"https:\/\/www.linkedin.com\/in\/lance-peiyuan-zhang-5b2886194\/\" target=\"_blank\" rel=\"noopener\">Lance Zhang,<\/a> is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><a href=\"https:\/\/stability.ai\/blog\/stable-diffusion-sdxl-1-announcement\" target=\"_blank\" rel=\"noopener\"><span style=\"font-weight: 400;\">SDXL<\/span><\/a><span style=\"font-weight: 400;\"> is the successor to Stability AI\u2019s Stable Diffusion text-to-image generation models.<\/span><\/li>\n<li 
style=\"font-weight: 400;\" aria-level=\"1\"><a href=\"https:\/\/github.com\/NVIDIA\/NeMo\" target=\"_blank\" rel=\"noopener\"><span style=\"font-weight: 400;\">NVIDIA&#8217;s NeMo<\/span><\/a><span style=\"font-weight: 400;\"> is a cloud-native framework for building and deploying generative AI.<\/span><\/li>\n<li aria-level=\"1\"><a href=\"https:\/\/github.com\/HazyResearch\/hyena-dna\" target=\"_blank\" rel=\"noopener\">HyenaDNA<\/a>, by Stanford&#8217;s <a href=\"https:\/\/hazyresearch.stanford.edu\/\" target=\"_blank\" rel=\"noopener\">Hazy Research<\/a>, is the first recurrent model that is competitive with large-scale transformers.<\/li>\n<\/ul>\n<h2>Still Have Questions?<\/h2>\n<p>We have an amazing community and team of core engineers ready to answer any questions you might have about PyTorch Lightning and Lightning Fabric. So, join us on <a href=\"https:\/\/lightning.ai\/forums\/\" target=\"_blank\" rel=\"noopener\">Discourse<\/a>, <a href=\"https:\/\/github.com\/Lightning-AI\/lightning\/discussions\" target=\"_blank\" rel=\"noopener\">GitHub Discussions<\/a>, or <a href=\"https:\/\/discord.gg\/XncpTy7DSt\" target=\"_blank\" rel=\"noopener\">Discord<\/a>. See you there!<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Introduction PyTorch Lightning and Lightning Fabric enable researchers and machine learning engineers to train PyTorch models at scale. Both frameworks do the heavy lifting for you and orchestrate training across multi-GPU and multi-Node environments. All you need to bring is a PyTorch module! And maybe a GPU \ud83d\ude06. So, why are there two frameworks? 
Short<a class=\"excerpt-read-more\" href=\"https:\/\/lightning.ai\/pages\/blog\/pytorch-lightning-and-fabric\/\" title=\"ReadWhen to Use PyTorch Lightning or Lightning Fabric\">&#8230; Read more &raquo;<\/a><\/p>\n","protected":false},"author":16,"featured_media":5648751,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"inline_featured_image":false,"footnotes":"","_links_to":"","_links_to_target":""},"categories":[27,29],"tags":[],"glossary":[],"acf":{"custom_scripts":"","additional_authors":false,"hide_from_archive":false,"content_type":"Blog Post","sticky":false,"tabs":true,"tab_group":[{"tab_group_title":"Common Flags","tab_items":[{"tab_title":"PyTorch Lightning","tab_content":"<pre><pre class=\"code-shortcode dark-theme window- collapse-false \" style=\"--height:falsepx\"><code class=\"language-python \">\r\nimport pytorch_lightning as pl\r\n\r\ntrainer = pl.Trainer(\r\n    accelerator=\"gpu\",\r\n    devices=2,\r\n    num_nodes=1,\r\n    strategy=\"ddp\",\r\n)<\/pre>\n<\/code><div class=\"copy-button\"><button class=\"expand-button\">Expand<\/button><button class=\"copy\">Copy<\/button><\/div><\/pre><\/p>\n"},{"tab_title":"Lightning Fabric","tab_content":"<pre><pre class=\"code-shortcode dark-theme window- collapse-false \" style=\"--height:falsepx\"><code class=\"language-python \">\r\nfrom lightning_fabric import Fabric\r\n\r\nfabric = Fabric(\r\n    accelerator=\"gpu\",\r\n    devices=2,\r\n    num_nodes=1,\r\n    strategy=\"ddp\",\r\n)<\/pre>\n<\/code><div class=\"copy-button\"><button class=\"expand-button\">Expand<\/button><button class=\"copy\">Copy<\/button><\/div><\/pre><\/p>\n"}]}],"custom_styles":".tab-group button.btn.active:hover {\r\n    background: var(--purple);\r\n}"},"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v24.5 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>When to Use PyTorch Lightning or Lightning Fabric - Lightning AI<\/title>\n<meta 
name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/lightning.ai\/pages\/blog\/pytorch-lightning-and-fabric\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"When to Use PyTorch Lightning or Lightning Fabric - Lightning AI\" \/>\n<meta property=\"og:description\" content=\"Introduction PyTorch Lightning and Lightning Fabric enable researchers and machine learning engineers to train PyTorch models at scale. Both frameworks do the heavy lifting for you and orchestrate training across multi-GPU and multi-Node environments. All you need to bring is a PyTorch module! And maybe a GPU \ud83d\ude06. So, why are there two frameworks? Short... Read more &raquo;\" \/>\n<meta property=\"og:url\" content=\"https:\/\/lightning.ai\/pages\/blog\/pytorch-lightning-and-fabric\/\" \/>\n<meta property=\"og:site_name\" content=\"Lightning AI\" \/>\n<meta property=\"article:published_time\" content=\"2023-09-07T11:12:57+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2023-09-07T20:06:10+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/09\/Templates-PL_and_Fabric-2-1024x1024.png\" \/>\n\t<meta property=\"og:image:width\" content=\"1024\" \/>\n\t<meta property=\"og:image:height\" content=\"1024\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/png\" \/>\n<meta name=\"author\" content=\"JP Hennessy\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@LightningAI\" \/>\n<meta name=\"twitter:site\" content=\"@LightningAI\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"JP Hennessy\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"6 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/lightning.ai\/pages\/blog\/pytorch-lightning-and-fabric\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/lightning.ai\/pages\/blog\/pytorch-lightning-and-fabric\/\"},\"author\":{\"name\":\"JP Hennessy\",\"@id\":\"https:\/\/lightning.ai\/pages\/#\/schema\/person\/2518f4d5541f8e98016f6289169141a6\"},\"headline\":\"When to Use PyTorch Lightning or Lightning Fabric\",\"datePublished\":\"2023-09-07T11:12:57+00:00\",\"dateModified\":\"2023-09-07T20:06:10+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/lightning.ai\/pages\/blog\/pytorch-lightning-and-fabric\/\"},\"wordCount\":980,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\/\/lightning.ai\/pages\/#organization\"},\"image\":{\"@id\":\"https:\/\/lightning.ai\/pages\/blog\/pytorch-lightning-and-fabric\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/09\/Templates-PL_and_Fabric-2.png\",\"articleSection\":[\"Articles\",\"Blog\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\/\/lightning.ai\/pages\/blog\/pytorch-lightning-and-fabric\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/lightning.ai\/pages\/blog\/pytorch-lightning-and-fabric\/\",\"url\":\"https:\/\/lightning.ai\/pages\/blog\/pytorch-lightning-and-fabric\/\",\"name\":\"When to Use PyTorch Lightning or Lightning Fabric - Lightning 
AI\",\"isPartOf\":{\"@id\":\"https:\/\/lightning.ai\/pages\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/lightning.ai\/pages\/blog\/pytorch-lightning-and-fabric\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/lightning.ai\/pages\/blog\/pytorch-lightning-and-fabric\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/09\/Templates-PL_and_Fabric-2.png\",\"datePublished\":\"2023-09-07T11:12:57+00:00\",\"dateModified\":\"2023-09-07T20:06:10+00:00\",\"breadcrumb\":{\"@id\":\"https:\/\/lightning.ai\/pages\/blog\/pytorch-lightning-and-fabric\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/lightning.ai\/pages\/blog\/pytorch-lightning-and-fabric\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/lightning.ai\/pages\/blog\/pytorch-lightning-and-fabric\/#primaryimage\",\"url\":\"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/09\/Templates-PL_and_Fabric-2.png\",\"contentUrl\":\"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/09\/Templates-PL_and_Fabric-2.png\",\"width\":2600,\"height\":2600},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/lightning.ai\/pages\/blog\/pytorch-lightning-and-fabric\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/lightning.ai\/pages\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"When to Use PyTorch Lightning or Lightning Fabric\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/lightning.ai\/pages\/#website\",\"url\":\"https:\/\/lightning.ai\/pages\/\",\"name\":\"Lightning AI\",\"description\":\"The platform for teams to build 
AI.\",\"publisher\":{\"@id\":\"https:\/\/lightning.ai\/pages\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/lightning.ai\/pages\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/lightning.ai\/pages\/#organization\",\"name\":\"Lightning AI\",\"url\":\"https:\/\/lightning.ai\/pages\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/lightning.ai\/pages\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/02\/image-17.png\",\"contentUrl\":\"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/02\/image-17.png\",\"width\":1744,\"height\":856,\"caption\":\"Lightning AI\"},\"image\":{\"@id\":\"https:\/\/lightning.ai\/pages\/#\/schema\/logo\/image\/\"},\"sameAs\":[\"https:\/\/x.com\/LightningAI\"]},{\"@type\":\"Person\",\"@id\":\"https:\/\/lightning.ai\/pages\/#\/schema\/person\/2518f4d5541f8e98016f6289169141a6\",\"name\":\"JP Hennessy\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/lightning.ai\/pages\/#\/schema\/person\/image\/\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/28ade268218ae45f723b0b62499f527a?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/28ade268218ae45f723b0b62499f527a?s=96&d=mm&r=g\",\"caption\":\"JP Hennessy\"},\"url\":\"https:\/\/lightning.ai\/pages\/author\/jplightning-ai\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. 
-->","yoast_head_json":{"title":"When to Use PyTorch Lightning or Lightning Fabric - Lightning AI","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/lightning.ai\/pages\/blog\/pytorch-lightning-and-fabric\/","og_locale":"en_US","og_type":"article","og_title":"When to Use PyTorch Lightning or Lightning Fabric - Lightning AI","og_description":"Introduction PyTorch Lightning and Lightning Fabric enable researchers and machine learning engineers to train PyTorch models at scale. Both frameworks do the heavy lifting for you and orchestrate training across multi-GPU and multi-Node environments. All you need to bring is a PyTorch module! And maybe a GPU \ud83d\ude06. So, why are there two frameworks? Short... Read more &raquo;","og_url":"https:\/\/lightning.ai\/pages\/blog\/pytorch-lightning-and-fabric\/","og_site_name":"Lightning AI","article_published_time":"2023-09-07T11:12:57+00:00","article_modified_time":"2023-09-07T20:06:10+00:00","og_image":[{"width":1024,"height":1024,"url":"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/09\/Templates-PL_and_Fabric-2-1024x1024.png","type":"image\/png"}],"author":"JP Hennessy","twitter_card":"summary_large_image","twitter_creator":"@LightningAI","twitter_site":"@LightningAI","twitter_misc":{"Written by":"JP Hennessy","Est. 
reading time":"6 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/lightning.ai\/pages\/blog\/pytorch-lightning-and-fabric\/#article","isPartOf":{"@id":"https:\/\/lightning.ai\/pages\/blog\/pytorch-lightning-and-fabric\/"},"author":{"name":"JP Hennessy","@id":"https:\/\/lightning.ai\/pages\/#\/schema\/person\/2518f4d5541f8e98016f6289169141a6"},"headline":"When to Use PyTorch Lightning or Lightning Fabric","datePublished":"2023-09-07T11:12:57+00:00","dateModified":"2023-09-07T20:06:10+00:00","mainEntityOfPage":{"@id":"https:\/\/lightning.ai\/pages\/blog\/pytorch-lightning-and-fabric\/"},"wordCount":980,"commentCount":0,"publisher":{"@id":"https:\/\/lightning.ai\/pages\/#organization"},"image":{"@id":"https:\/\/lightning.ai\/pages\/blog\/pytorch-lightning-and-fabric\/#primaryimage"},"thumbnailUrl":"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/09\/Templates-PL_and_Fabric-2.png","articleSection":["Articles","Blog"],"inLanguage":"en-US","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/lightning.ai\/pages\/blog\/pytorch-lightning-and-fabric\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/lightning.ai\/pages\/blog\/pytorch-lightning-and-fabric\/","url":"https:\/\/lightning.ai\/pages\/blog\/pytorch-lightning-and-fabric\/","name":"When to Use PyTorch Lightning or Lightning Fabric - Lightning 
AI","isPartOf":{"@id":"https:\/\/lightning.ai\/pages\/#website"},"primaryImageOfPage":{"@id":"https:\/\/lightning.ai\/pages\/blog\/pytorch-lightning-and-fabric\/#primaryimage"},"image":{"@id":"https:\/\/lightning.ai\/pages\/blog\/pytorch-lightning-and-fabric\/#primaryimage"},"thumbnailUrl":"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/09\/Templates-PL_and_Fabric-2.png","datePublished":"2023-09-07T11:12:57+00:00","dateModified":"2023-09-07T20:06:10+00:00","breadcrumb":{"@id":"https:\/\/lightning.ai\/pages\/blog\/pytorch-lightning-and-fabric\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/lightning.ai\/pages\/blog\/pytorch-lightning-and-fabric\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/lightning.ai\/pages\/blog\/pytorch-lightning-and-fabric\/#primaryimage","url":"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/09\/Templates-PL_and_Fabric-2.png","contentUrl":"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/09\/Templates-PL_and_Fabric-2.png","width":2600,"height":2600},{"@type":"BreadcrumbList","@id":"https:\/\/lightning.ai\/pages\/blog\/pytorch-lightning-and-fabric\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/lightning.ai\/pages\/"},{"@type":"ListItem","position":2,"name":"When to Use PyTorch Lightning or Lightning Fabric"}]},{"@type":"WebSite","@id":"https:\/\/lightning.ai\/pages\/#website","url":"https:\/\/lightning.ai\/pages\/","name":"Lightning AI","description":"The platform for teams to build 
AI.","publisher":{"@id":"https:\/\/lightning.ai\/pages\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/lightning.ai\/pages\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/lightning.ai\/pages\/#organization","name":"Lightning AI","url":"https:\/\/lightning.ai\/pages\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/lightning.ai\/pages\/#\/schema\/logo\/image\/","url":"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/02\/image-17.png","contentUrl":"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/02\/image-17.png","width":1744,"height":856,"caption":"Lightning AI"},"image":{"@id":"https:\/\/lightning.ai\/pages\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/x.com\/LightningAI"]},{"@type":"Person","@id":"https:\/\/lightning.ai\/pages\/#\/schema\/person\/2518f4d5541f8e98016f6289169141a6","name":"JP Hennessy","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/lightning.ai\/pages\/#\/schema\/person\/image\/","url":"https:\/\/secure.gravatar.com\/avatar\/28ade268218ae45f723b0b62499f527a?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/28ade268218ae45f723b0b62499f527a?s=96&d=mm&r=g","caption":"JP 
Hennessy"},"url":"https:\/\/lightning.ai\/pages\/author\/jplightning-ai\/"}]}},"_links":{"self":[{"href":"https:\/\/lightning.ai\/pages\/wp-json\/wp\/v2\/posts\/5648737"}],"collection":[{"href":"https:\/\/lightning.ai\/pages\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/lightning.ai\/pages\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/lightning.ai\/pages\/wp-json\/wp\/v2\/users\/16"}],"replies":[{"embeddable":true,"href":"https:\/\/lightning.ai\/pages\/wp-json\/wp\/v2\/comments?post=5648737"}],"version-history":[{"count":0,"href":"https:\/\/lightning.ai\/pages\/wp-json\/wp\/v2\/posts\/5648737\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/lightning.ai\/pages\/wp-json\/wp\/v2\/media\/5648751"}],"wp:attachment":[{"href":"https:\/\/lightning.ai\/pages\/wp-json\/wp\/v2\/media?parent=5648737"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/lightning.ai\/pages\/wp-json\/wp\/v2\/categories?post=5648737"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/lightning.ai\/pages\/wp-json\/wp\/v2\/tags?post=5648737"},{"taxonomy":"glossary","embeddable":true,"href":"https:\/\/lightning.ai\/pages\/wp-json\/wp\/v2\/glossary?post=5648737"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}