{"id":5647584,"date":"2023-03-20T16:33:12","date_gmt":"2023-03-20T20:33:12","guid":{"rendered":"https:\/\/lightning.ai\/pages\/?p=5647584"},"modified":"2023-04-18T16:34:51","modified_gmt":"2023-04-18T20:34:51","slug":"introduction-to-lightning-fabric","status":"publish","type":"post","link":"https:\/\/lightning.ai\/pages\/blog\/introduction-to-lightning-fabric\/","title":{"rendered":"Introduction to Lightning Fabric"},"content":{"rendered":"<p><a href=\"https:\/\/lightning.ai\/pages\/open-source\/fabric\/\">Lightning Fabric<\/a> is a new, open-source library that allows you to quickly and easily scale models while maintaining full control over your training loop.<\/p>\n<p>In the past, getting PyTorch code to run efficiently on GPUs and scaling it up to many machines and large datasets was possible with PyTorch Lightning. As time went on, however, we became aware of the need to provide a scaling option that landed somewhere between a raw deep learning framework like PyTorch on the one hand, and a high-level, feature-rich framework like PyTorch Lightning. <a href=\"https:\/\/lightning.ai\/pages\/open-source\/fabric\/\">Lightning Fabric<\/a> is just that.<\/p>\n<p>While <a href=\"https:\/\/lightning.ai\/pages\/open-source\/pytorch-lightning\/\">PyTorch Lightning<\/a> provides many features to save time and improve readability and collaboration, there are complex use cases where full control over the training loop is needed. That\u2019s why we built Fabric.<\/p>\n<p>&nbsp;<\/p>\n<h2>What&#8217;s inside?<\/h2>\n<p>Fabric is part of the Lightning 2.0 package. You can install or upgrade with:<\/p>\n<pre class=\"code-shortcode dark-theme window- collapse-false \" style=\"--height:falsepx\"><code class=\"language-python\">\n\npip install -U lightning\n\n<\/code><div class=\"copy-button\"><button class=\"expand-button\">Expand<\/button><button class=\"copy\">Copy<\/button><\/div><\/pre>\n<p>The newest addition is the new module <code>lightning.fabric<\/code>. 
Here&#8217;s what\u2019s inside:<\/p>\n<ul>\n<li>Accelerators (CPU, GPU, TPU, \u2026)<\/li>\n<li>Distributed Training (DDP, DeepSpeed, FSDP, \u2026)<\/li>\n<li>Mixed Precision Training (FP32, FP16, Bfloat16, \u2026)<\/li>\n<li>Loggers (TensorBoard, CSV, \u2026)<\/li>\n<li>Callback system<\/li>\n<li>Checkpointing primitives (supports distributed checkpoints)<\/li>\n<li>Distributed Collectives<\/li>\n<li>Gradient Accumulation<\/li>\n<li>Lots more!<\/li>\n<\/ul>\n<p>All of these features are already available in PyTorch Lightning, but the key difference with Fabric is how they&#8217;re applied to your code:<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-5647585\" src=\"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/03\/PTL-vs-LF.png\" alt=\"\" width=\"1500\" height=\"1000\" srcset=\"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/03\/PTL-vs-LF.png 1500w, https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/03\/PTL-vs-LF-300x200.png 300w, https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/03\/PTL-vs-LF-1024x683.png 1024w, https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/03\/PTL-vs-LF-300x200@2x.png 600w\" sizes=\"(max-width: 1500px) 100vw, 1500px\" \/><\/p>\n<p>How Fabric works is best demonstrated with a short example:<\/p>\n<p>&nbsp;<\/p>\n<h2>Accelerate your code without the boilerplate<\/h2>\n<p>Let&#8217;s start with a simple PyTorch training script:<\/p>\n<pre class=\"code-shortcode dark-theme window- collapse-false \" style=\"--height:falsepx\"><code class=\"language-python\"><br \/>\nimport torch<br \/>\nimport torch.nn.functional as F<br \/>\nfrom lightning.pytorch.demos import Transformer, WikiText2\n\ndataset = WikiText2()<br \/>\ndataloader = torch.utils.data.DataLoader(dataset)<br \/>\nmodel = Transformer(vocab_size=dataset.vocab_size)<br \/>\noptimizer = torch.optim.SGD(model.parameters(), lr=0.1)\n\nmodel.train()<br \/>\nfor epoch in range(20):<br \/>\n    for batch in dataloader:<br \/>\n        input, target = batch<br \/>\n        optimizer.zero_grad()<br \/>\n        output = model(input, target)<br \/>\n        loss = F.nll_loss(output, target.view(-1))<br \/>\n        loss.backward()<br \/>\n        optimizer.step()<br \/>\n<\/code><div class=\"copy-button\"><button class=\"expand-button\">Expand<\/button><button class=\"copy\">Copy<\/button><\/div><\/pre>\n<p>Unfortunately, this code only runs on the CPU, and that\u2019s not ideal.<\/p>\n<p>To run this on a GPU, it only takes a couple of <code>.to(\"cuda\")<\/code> calls on the model and data. But with larger models, bigger data samples (e.g., high-res images), or bigger batch sizes, we&#8217;re limited by the available GPU memory. To get around this, we could implement mixed precision training and distributed training across multiple GPUs, but we would have to change our code once again.\n
On top of that, doing this correctly and efficiently is difficult, time-consuming, and produces a lot of boilerplate code.<\/p>\n<p>With Lightning Fabric, you add a few calls to your code once, and you gain the flexibility to run it anywhere, like so:<\/p>\n<pre class=\"code-shortcode dark-theme window- collapse-false \" style=\"--height:falsepx\"><code class=\"language-python\"><br \/>\nimport torch<br \/>\nimport torch.nn.functional as F\n\nimport lightning as L<br \/>\nfrom lightning.pytorch.demos import Transformer, WikiText2\n\nfabric = L.Fabric()\n\ndataset = WikiText2()<br \/>\ndataloader = torch.utils.data.DataLoader(dataset)<br \/>\nmodel = Transformer(vocab_size=dataset.vocab_size)<br \/>\noptimizer = torch.optim.SGD(model.parameters(), lr=0.1)\n\nmodel, optimizer = fabric.setup(model, optimizer)<br \/>\ndataloader = fabric.setup_dataloaders(dataloader)\n\nmodel.train()<br \/>\nfor epoch in range(20):<br \/>\n    for batch in dataloader:<br \/>\n        input, target = batch<br \/>\n        optimizer.zero_grad()<br \/>\n        output = model(input, target)<br \/>\n        loss = F.nll_loss(output, target.view(-1))<br \/>\n        fabric.backward(loss)<br \/>\n        optimizer.step()<br \/>\n<\/code><div class=\"copy-button\"><button class=\"expand-button\">Expand<\/button><button class=\"copy\">Copy<\/button><\/div><\/pre>\n<p>Now, unleash the full power of Fabric on your Python script.<\/p>\n<p>Run on the M1\/M2 GPU of your MacBook:<\/p>\n<pre class=\"code-shortcode dark-theme window- collapse-false \" style=\"--height:falsepx\"><code class=\"language-python\">\n\nlightning run model train.py --accelerator=mps\n\n<\/code><div class=\"copy-button\"><button class=\"expand-button\">Expand<\/button><button class=\"copy\">Copy<\/button><\/div><\/pre>\n<p>Or on a beefy GPU server in bfloat16 precision:<\/p>\n<pre class=\"code-shortcode dark-theme window- collapse-false \" style=\"--height:falsepx\"><code\n
class=\"language-python\">\n\nlightning run model train.py \\<br \/>\n--accelerator=cuda \\<br \/>\n--devices=8 \\<br \/>\n--precision=bf16\n\n<\/code><div class=\"copy-button\"><button class=\"expand-button\">Expand<\/button><button class=\"copy\">Copy<\/button><\/div><\/pre>\n<p>Or across multiple machines in your cluster:<\/p>\n<pre class=\"code-shortcode dark-theme window- collapse-false \" style=\"--height:falsepx\"><code class=\"language-python\">\n\nlightning run model train.py \\<br \/>\n--accelerator=cuda \\<br \/>\n--devices=8 \\<br \/>\n--num-nodes=4 \\<br \/>\n--main-address=10.10.10.24 \\<br \/>\n--node-rank=1\n\n<\/code><div class=\"copy-button\"><button class=\"expand-button\">Expand<\/button><button class=\"copy\">Copy<\/button><\/div><\/pre>\n<p>You can find plenty of other configurations in the official Fabric documentation.<\/p>\n<p>Most importantly: <strong>None of these configurations require any code changes on your side.<\/strong><\/p>\n<p>&nbsp;<\/p>\n<div class=\"takeaways card-glow p-4 my-4\"><h3 class=\"w-100 d-block\">More use cases for Fabric<\/h3> While the examples above demonstrate some potential use cases for Fabric, we&#8217;ve also got plenty of more robust examples that you can find in our <a href=\"https:\/\/lightning.ai\/docs\/fabric\/stable\/\">official documentation<\/a>. These include image classification, reinforcement learning, large language model pre-training with nanoGPT, and much more. <\/div>\n<p>&nbsp;<\/p>\n<h2>Make your own trainer<\/h2>\n<p>Accelerating your training code is just the tip of the iceberg. Fabric brings a toolset that helps you build a fully-fledged trainer from the ground up.<\/p>\n<p>Here&#8217;s a selection of key building blocks you can add to your own custom Trainer (all optional, of course!):<\/p>\n<p>&nbsp;<\/p>\n<h3>Loggers\/Experiment Trackers<\/h3>\n<p>Fabric doesn\u2019t track metrics by default. It\u2019s all up to the user to decide what and when they want to log a metric value. 
To add logging capabilities to your Trainer, you can choose one or several loggers from the <code>lightning.fabric.loggers<\/code> module:<\/p>\n<pre class=\"code-shortcode dark-theme window- collapse-false \" style=\"--height:falsepx\"><code class=\"language-python\">\n\nimport lightning as L<br \/>\nfrom lightning.fabric.loggers import TensorBoardLogger\n\n# Pick a logger and add it to Fabric<br \/>\nlogger = TensorBoardLogger(root_dir=\"logs\")<br \/>\nfabric = L.Fabric(loggers=logger)\n\n# Log any Python or tensor scalar<br \/>\nfabric.log(\"some_value\", value)\n\n# Works with TorchMetrics too<br \/>\nfabric.log(\"my_metric\", metric.compute())\n\n<\/code><div class=\"copy-button\"><button class=\"expand-button\">Expand<\/button><button class=\"copy\">Copy<\/button><\/div><\/pre>\n<p>You can control many more aspects of logging, such as the logging frequency, logging media (images, audio, etc.), using multiple loggers at once, and more.<\/p>\n<a target=\"_blank\" href=\"https:\/\/lightning.ai\/docs\/fabric\/stable\/guide\/logging.html\" class=\"d-inline-block btn btn-\">Read about logging in our docs.<\/a>\n<p>&nbsp;<\/p>\n<h3>Checkpoints<\/h3>\n<p>Saving and resuming training state is important for development and long-running experiments.\n
Fabric supports this through the <code>.save<\/code> and <code>.load<\/code> primitives:<\/p>\n<pre class=\"code-shortcode dark-theme window- collapse-false \" style=\"--height:falsepx\"><code class=\"language-python\">\n\n# Define the state of your program\/loop<br \/>\nstate = {\"model1\": model1, \"model2\": model2, \"optimizer\": optimizer, \"iteration\": iteration, \"hparams\": ...}\n\n# Save it to a file<br \/>\nfabric.save(\"path\/to\/checkpoint.ckpt\", state)\n\n# Load the state (in-place)<br \/>\nfabric.load(\"path\/to\/checkpoint.ckpt\", state)\n\n<\/code><div class=\"copy-button\"><button class=\"expand-button\">Expand<\/button><button class=\"copy\">Copy<\/button><\/div><\/pre>\n<p>As you can see, the syntax is almost identical to <code>torch.save<\/code> and <code>torch.load<\/code>, making it easy to convert your existing PyTorch code. On top of that, Fabric\u2019s saving and loading methods take care of correctly saving sharded models in distributed settings. Without Fabric, this would require the user to write a lot of boilerplate code.<\/p>\n<a target=\"_blank\" href=\"https:\/\/lightning.ai\/docs\/fabric\/stable\/guide\/checkpoint.html\" class=\"d-inline-block btn btn-\">Read about checkpoints in our docs.<\/a>\n<p>&nbsp;<\/p>\n<h3>Callbacks<\/h3>\n<p>When you build a Trainer\/framework for your team, a community, or even just for yourself, it can be useful to have a callback system that hooks into the machinery and extends its functionality without changing the actual source code.\n
Fabric brings the building blocks so you don\u2019t have to reinvent the wheel:<\/p>\n<pre class=\"code-shortcode dark-theme window- collapse-false \" style=\"--height:falsepx\"><code class=\"language-python\"><br \/>\nimport torch<br \/>\nimport torch.nn.functional as F\n\nimport lightning as L<br \/>\nfrom lightning.pytorch.demos import Transformer, WikiText2\n\n# The code of a callback can live anywhere, away from your Trainer<br \/>\nclass MyCallback:<br \/>\n    def on_train_batch_end(self, loss, output):<br \/>\n        # Do something with the loss and output<br \/>\n        print(\"current loss:\", loss)\n\n# Add one or several callbacks:<br \/>\nfabric = L.Fabric(callbacks=[MyCallback()])\n\ndataset = WikiText2()<br \/>\ndataloader = torch.utils.data.DataLoader(dataset)<br \/>\nmodel = Transformer(vocab_size=dataset.vocab_size)<br \/>\noptimizer = torch.optim.SGD(model.parameters(), lr=0.1)\n\nmodel, optimizer = fabric.setup(model, optimizer)<br \/>\ndataloader = fabric.setup_dataloaders(dataloader)\n\n# Anywhere in your Trainer, call the appropriate callback methods<br \/>\nmodel.train()<br \/>\nfor epoch in range(20):<br \/>\n    for batch in dataloader:<br \/>\n        input, target = batch<br \/>\n        optimizer.zero_grad()<br \/>\n        output = model(input, target)<br \/>\n        loss = F.nll_loss(output, target.view(-1))<br \/>\n        fabric.backward(loss)<br \/>\n        optimizer.step()\n\n        # Let a callback add some arbitrary processing at the appropriate place<br \/>\n        # Give the callback access to some variables<br \/>\n        fabric.call(\"on_train_batch_end\", loss=loss, output=output)<br \/>\n<\/code><div class=\"copy-button\"><button class=\"expand-button\">Expand<\/button><button class=\"copy\">Copy<\/button><\/div><\/pre>\n<a target=\"_blank\" href=\"https:\/\/lightning.ai\/docs\/fabric\/stable\/guide\/callbacks.html\" class=\"d-inline-block btn btn-\">Read about callbacks in our\n
docs.<\/a>\n<p>&nbsp;<\/p>\n<h3>LightningModule<\/h3>\n<p>As mentioned before, Fabric can wrap around your PyTorch code no matter how it is organized, giving you maximum flexibility! However, maybe you prefer to standardize and separate the research code (model, loss, optimization, etc.) from the \u201ctrainer\u201d code (training loop, checkpointing, logging, etc.). This is exactly what the LightningModule was made for!<\/p>\n<pre class=\"code-shortcode dark-theme window- collapse-false \" style=\"--height:falsepx\"><code class=\"language-python\"><br \/>\nimport torch<br \/>\nimport torch.nn.functional as F\n\nimport lightning as L<br \/>\nfrom lightning.pytorch.demos import Transformer, WikiText2\n\n# Organize code in LightningModule hooks<br \/>\nclass LitModel(L.LightningModule):<br \/>\n    def __init__(self):<br \/>\n        super().__init__()<br \/>\n        self.dataset = dataset = WikiText2()<br \/>\n        self.model = Transformer(vocab_size=dataset.vocab_size)\n\n    def training_step(self, batch):<br \/>\n        input, target = batch<br \/>\n        output = self.model(input, target)<br \/>\n        loss = F.nll_loss(output, target.view(-1))<br \/>\n        return loss\n\n    def train_dataloader(self):<br \/>\n        return torch.utils.data.DataLoader(self.dataset)\n\n    def configure_optimizers(self):<br \/>\n        return torch.optim.SGD(self.model.parameters(), lr=0.1)\n\n# Instantiate the LightningModule<br \/>\nmodel = LitModel()<br \/>\n<\/code><div class=\"copy-button\"><button class=\"expand-button\">Expand<\/button><button class=\"copy\">Copy<\/button><\/div><\/pre>\n<p>Now, you can make your Trainer accept an instance of LightningModule.\n
The Trainer needs to call the LightningModule hooks, e.g., <code>training_step<\/code>, at the right time:<\/p>\n<pre class=\"code-shortcode dark-theme window- collapse-false \" style=\"--height:falsepx\"><code class=\"language-python\"><br \/>\nfabric = L.Fabric()\n\n# Call `configure_optimizers` and dataloader hooks<br \/>\nmodel, optimizer = fabric.setup(model, model.configure_optimizers())<br \/>\ndataloader = fabric.setup_dataloaders(model.train_dataloader())\n\n# Call the hooks at the right time<br \/>\nmodel.on_train_start()\n\nmodel.train()<br \/>\nfor epoch in range(20):<br \/>\n    for batch in dataloader:<br \/>\n        optimizer.zero_grad()<br \/>\n        # Call training_step at the right place<br \/>\n        loss = model.training_step(batch)<br \/>\n        fabric.backward(loss)<br \/>\n        optimizer.step()<br \/>\n<\/code><div class=\"copy-button\"><button class=\"expand-button\">Expand<\/button><button class=\"copy\">Copy<\/button><\/div><\/pre>\n<a target=\"_blank\" href=\"https:\/\/lightning.ai\/docs\/fabric\/stable\/guide\/lightning_module.html\" class=\"d-inline-block btn btn-\">Learn more about how the LightningModule works with Fabric in our docs.<\/a>\n<p>&nbsp;<\/p>\n<p>These are the essentials, but there is more cool stuff that didn\u2019t fit into this blog post: You can read up on more advanced topics like <a href=\"https:\/\/lightning.ai\/docs\/fabric\/stable\/advanced\/gradient_accumulation.html\">gradient accumulation<\/a>, <a href=\"https:\/\/lightning.ai\/docs\/fabric\/stable\/advanced\/distributed_communication.html\">distributed communication<\/a>, <a href=\"https:\/\/lightning.ai\/docs\/fabric\/stable\/advanced\/multiple_setup.html\">multiple models and optimizers<\/a>, and more in our docs.<\/p>\n<p>An important takeaway here is that all the tools we just saw are opt-in: You pick what you find useful, and leave what you don\u2019t!<\/p>\n<p>&nbsp;<\/p>\n<h2>The future of Fabric and PyTorch Lightning<\/h2>\n<p>Fabric is an\n
important step towards modularizing and \u201cunbundling\u201d our beloved Lightning Trainer as we know it. In the near future, we will refactor the internals of the Trainer to rely heavily on Fabric as it becomes the core framework for building <em>any<\/em> Trainer. This means that, from now on, Lightning offers two distinct experiences that target different user groups:<\/p>\n<p>&nbsp;<\/p>\n<p><strong>PyTorch Lightning:<\/strong> This is our general-purpose, battle-tested, fast, and reliable Trainer with the most popular features built in. It is the best solution for ML researchers who want to get started quickly and iterate fast.<\/p>\n<p><strong>Lightning Fabric:<\/strong> This is for the ML researchers and framework builders out there who want to hack together their own trainers. It is the most flexible way to work with PyTorch while still getting the most valuable benefits of Lightning.<\/p>\n<p>Starting with 2.0, Lightning becomes more modular, easier to customize and hack, easier to understand in terms of code readability, and overall a more lightweight package with fewer dependencies.<\/p>\n<p>&nbsp;<\/p>\n<div class=\"takeaways card-glow p-4 my-4\"><h3 class=\"w-100 d-block\">Wrap-up!<\/h3> With Lightning 2.0 and the introduction of Fabric, we are resolving several existing challenges and addressing feedback from the PyTorch community. Users who faced difficulties building domain-specific frameworks on top of PyTorch Lightning can now use Fabric, a toolset that is much better suited to these tasks. At the same time, users who love the pre-built, feature-rich Trainer but were struggling to understand its internals now get a cleaner, more readable, and more debuggable experience. This also significantly lowers the bar for contributing to the Lightning source code and the discussions around it. Moreover, and we believe this is the best part, Fabric will reach new users who were previously hesitant to adopt Lightning due to the abstractions in the Trainer.\n
Fabric doesn\u2019t force abstractions, follows an opt-in philosophy, and can immediately add value for <strong>any PyTorch code base<\/strong>. We hope you give Fabric a try. Please reach out to us with feedback on <a href=\"https:\/\/discord.gg\/tfXFetEZxv\">Discord<\/a>, ask questions on the Forums, and report bugs and feature requests on our GitHub!<\/div>\n","protected":false},"excerpt":{"rendered":"<p>Lightning Fabric is a new, open-source library that allows you to quickly and easily scale models while maintaining full control over your training loop. In the past, getting PyTorch code to run efficiently on GPUs and scaling it up to many machines and large datasets was possible with PyTorch Lightning. As time went on, however,<a class=\"excerpt-read-more\" href=\"https:\/\/lightning.ai\/pages\/blog\/introduction-to-lightning-fabric\/\" title=\"Read Introduction to Lightning Fabric\">&#8230; Read more &raquo;<\/a><\/p>\n","protected":false},"author":16,"featured_media":5647587,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"inline_featured_image":false,"footnotes":"","_links_to":"","_links_to_target":""},"categories":[29,41],"tags":[96,178,97,62],"glossary":[],"acf":{"additional_authors":false,"hide_from_archive":false,"content_type":"Blog Post","sticky":false,"custom_styles":"","default_editor":true,"show_table_of_contents":false},"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v24.5 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Introduction to Lightning Fabric<\/title>\n<meta name=\"description\" content=\"Lightning Fabric is an open-source library that allows you to easily scale models while maintaining full control over your training loop.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\"\n
href=\"https:\/\/lightning.ai\/pages\/blog\/introduction-to-lightning-fabric\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Introduction to Lightning Fabric\" \/>\n<meta property=\"og:description\" content=\"Lightning Fabric is an open-source library that allows you to easily scale models while maintaining full control over your training loop.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/lightning.ai\/pages\/blog\/introduction-to-lightning-fabric\/\" \/>\n<meta property=\"og:site_name\" content=\"Lightning AI\" \/>\n<meta property=\"article:published_time\" content=\"2023-03-20T20:33:12+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2023-04-18T20:34:51+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/03\/Intro-to-Fabric.png\" \/>\n\t<meta property=\"og:image:width\" content=\"1160\" \/>\n\t<meta property=\"og:image:height\" content=\"600\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/png\" \/>\n<meta name=\"author\" content=\"JP Hennessy\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:image\" content=\"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/03\/Intro-to-Fabric.png\" \/>\n<meta name=\"twitter:creator\" content=\"@LightningAI\" \/>\n<meta name=\"twitter:site\" content=\"@LightningAI\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"JP Hennessy\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"8 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/lightning.ai\/pages\/blog\/introduction-to-lightning-fabric\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/lightning.ai\/pages\/blog\/introduction-to-lightning-fabric\/\"},\"author\":{\"name\":\"JP Hennessy\",\"@id\":\"https:\/\/lightning.ai\/pages\/#\/schema\/person\/2518f4d5541f8e98016f6289169141a6\"},\"headline\":\"Introduction to Lightning Fabric\",\"datePublished\":\"2023-03-20T20:33:12+00:00\",\"dateModified\":\"2023-04-18T20:34:51+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/lightning.ai\/pages\/blog\/introduction-to-lightning-fabric\/\"},\"wordCount\":1943,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\/\/lightning.ai\/pages\/#organization\"},\"image\":{\"@id\":\"https:\/\/lightning.ai\/pages\/blog\/introduction-to-lightning-fabric\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/03\/Intro-to-Fabric.png\",\"keywords\":[\"ai\",\"lightning fabric\",\"ml\",\"pytorch lightning\"],\"articleSection\":[\"Blog\",\"Tutorials\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\/\/lightning.ai\/pages\/blog\/introduction-to-lightning-fabric\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/lightning.ai\/pages\/blog\/introduction-to-lightning-fabric\/\",\"url\":\"https:\/\/lightning.ai\/pages\/blog\/introduction-to-lightning-fabric\/\",\"name\":\"Introduction to Lightning 
Fabric\",\"isPartOf\":{\"@id\":\"https:\/\/lightning.ai\/pages\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/lightning.ai\/pages\/blog\/introduction-to-lightning-fabric\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/lightning.ai\/pages\/blog\/introduction-to-lightning-fabric\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/03\/Intro-to-Fabric.png\",\"datePublished\":\"2023-03-20T20:33:12+00:00\",\"dateModified\":\"2023-04-18T20:34:51+00:00\",\"description\":\"Lightning Fabric is an open-source library that allows you to easily scale models while maintaining full control over your training loop.\",\"breadcrumb\":{\"@id\":\"https:\/\/lightning.ai\/pages\/blog\/introduction-to-lightning-fabric\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/lightning.ai\/pages\/blog\/introduction-to-lightning-fabric\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/lightning.ai\/pages\/blog\/introduction-to-lightning-fabric\/#primaryimage\",\"url\":\"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/03\/Intro-to-Fabric.png\",\"contentUrl\":\"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/03\/Intro-to-Fabric.png\",\"width\":1160,\"height\":600},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/lightning.ai\/pages\/blog\/introduction-to-lightning-fabric\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/lightning.ai\/pages\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Introduction to Lightning Fabric\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/lightning.ai\/pages\/#website\",\"url\":\"https:\/\/lightning.ai\/pages\/\",\"name\":\"Lightning AI\",\"description\":\"The platform for teams to build 
AI.\",\"publisher\":{\"@id\":\"https:\/\/lightning.ai\/pages\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/lightning.ai\/pages\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/lightning.ai\/pages\/#organization\",\"name\":\"Lightning AI\",\"url\":\"https:\/\/lightning.ai\/pages\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/lightning.ai\/pages\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/02\/image-17.png\",\"contentUrl\":\"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/02\/image-17.png\",\"width\":1744,\"height\":856,\"caption\":\"Lightning AI\"},\"image\":{\"@id\":\"https:\/\/lightning.ai\/pages\/#\/schema\/logo\/image\/\"},\"sameAs\":[\"https:\/\/x.com\/LightningAI\"]},{\"@type\":\"Person\",\"@id\":\"https:\/\/lightning.ai\/pages\/#\/schema\/person\/2518f4d5541f8e98016f6289169141a6\",\"name\":\"JP Hennessy\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/lightning.ai\/pages\/#\/schema\/person\/image\/\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/28ade268218ae45f723b0b62499f527a?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/28ade268218ae45f723b0b62499f527a?s=96&d=mm&r=g\",\"caption\":\"JP Hennessy\"},\"url\":\"https:\/\/lightning.ai\/pages\/author\/jplightning-ai\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. 
-->","yoast_head_json":{"title":"Introduction to Lightning Fabric","description":"Lightning Fabric is an open-source library that allows you to easily scale models while maintaining full control over your training loop.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/lightning.ai\/pages\/blog\/introduction-to-lightning-fabric\/","og_locale":"en_US","og_type":"article","og_title":"Introduction to Lightning Fabric","og_description":"Lightning Fabric is an open-source library that allows you to easily scale models while maintaining full control over your training loop.","og_url":"https:\/\/lightning.ai\/pages\/blog\/introduction-to-lightning-fabric\/","og_site_name":"Lightning AI","article_published_time":"2023-03-20T20:33:12+00:00","article_modified_time":"2023-04-18T20:34:51+00:00","og_image":[{"width":1160,"height":600,"url":"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/03\/Intro-to-Fabric.png","type":"image\/png"}],"author":"JP Hennessy","twitter_card":"summary_large_image","twitter_image":"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/03\/Intro-to-Fabric.png","twitter_creator":"@LightningAI","twitter_site":"@LightningAI","twitter_misc":{"Written by":"JP Hennessy","Est. 
reading time":"8 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/lightning.ai\/pages\/blog\/introduction-to-lightning-fabric\/#article","isPartOf":{"@id":"https:\/\/lightning.ai\/pages\/blog\/introduction-to-lightning-fabric\/"},"author":{"name":"JP Hennessy","@id":"https:\/\/lightning.ai\/pages\/#\/schema\/person\/2518f4d5541f8e98016f6289169141a6"},"headline":"Introduction to Lightning Fabric","datePublished":"2023-03-20T20:33:12+00:00","dateModified":"2023-04-18T20:34:51+00:00","mainEntityOfPage":{"@id":"https:\/\/lightning.ai\/pages\/blog\/introduction-to-lightning-fabric\/"},"wordCount":1943,"commentCount":0,"publisher":{"@id":"https:\/\/lightning.ai\/pages\/#organization"},"image":{"@id":"https:\/\/lightning.ai\/pages\/blog\/introduction-to-lightning-fabric\/#primaryimage"},"thumbnailUrl":"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/03\/Intro-to-Fabric.png","keywords":["ai","lightning fabric","ml","pytorch lightning"],"articleSection":["Blog","Tutorials"],"inLanguage":"en-US","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/lightning.ai\/pages\/blog\/introduction-to-lightning-fabric\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/lightning.ai\/pages\/blog\/introduction-to-lightning-fabric\/","url":"https:\/\/lightning.ai\/pages\/blog\/introduction-to-lightning-fabric\/","name":"Introduction to Lightning Fabric","isPartOf":{"@id":"https:\/\/lightning.ai\/pages\/#website"},"primaryImageOfPage":{"@id":"https:\/\/lightning.ai\/pages\/blog\/introduction-to-lightning-fabric\/#primaryimage"},"image":{"@id":"https:\/\/lightning.ai\/pages\/blog\/introduction-to-lightning-fabric\/#primaryimage"},"thumbnailUrl":"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/03\/Intro-to-Fabric.png","datePublished":"2023-03-20T20:33:12+00:00","dateModified":"2023-04-18T20:34:51+00:00","description":"Lightning Fabric is an open-source library that allows you to 
easily scale models while maintaining full control over your training loop.","breadcrumb":{"@id":"https:\/\/lightning.ai\/pages\/blog\/introduction-to-lightning-fabric\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/lightning.ai\/pages\/blog\/introduction-to-lightning-fabric\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/lightning.ai\/pages\/blog\/introduction-to-lightning-fabric\/#primaryimage","url":"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/03\/Intro-to-Fabric.png","contentUrl":"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/03\/Intro-to-Fabric.png","width":1160,"height":600},{"@type":"BreadcrumbList","@id":"https:\/\/lightning.ai\/pages\/blog\/introduction-to-lightning-fabric\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/lightning.ai\/pages\/"},{"@type":"ListItem","position":2,"name":"Introduction to Lightning Fabric"}]},{"@type":"WebSite","@id":"https:\/\/lightning.ai\/pages\/#website","url":"https:\/\/lightning.ai\/pages\/","name":"Lightning AI","description":"The platform for teams to build AI.","publisher":{"@id":"https:\/\/lightning.ai\/pages\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/lightning.ai\/pages\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/lightning.ai\/pages\/#organization","name":"Lightning 
AI","url":"https:\/\/lightning.ai\/pages\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/lightning.ai\/pages\/#\/schema\/logo\/image\/","url":"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/02\/image-17.png","contentUrl":"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/02\/image-17.png","width":1744,"height":856,"caption":"Lightning AI"},"image":{"@id":"https:\/\/lightning.ai\/pages\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/x.com\/LightningAI"]},{"@type":"Person","@id":"https:\/\/lightning.ai\/pages\/#\/schema\/person\/2518f4d5541f8e98016f6289169141a6","name":"JP Hennessy","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/lightning.ai\/pages\/#\/schema\/person\/image\/","url":"https:\/\/secure.gravatar.com\/avatar\/28ade268218ae45f723b0b62499f527a?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/28ade268218ae45f723b0b62499f527a?s=96&d=mm&r=g","caption":"JP Hennessy"},"url":"https:\/\/lightning.ai\/pages\/author\/jplightning-ai\/"}]}},"_links":{"self":[{"href":"https:\/\/lightning.ai\/pages\/wp-json\/wp\/v2\/posts\/5647584"}],"collection":[{"href":"https:\/\/lightning.ai\/pages\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/lightning.ai\/pages\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/lightning.ai\/pages\/wp-json\/wp\/v2\/users\/16"}],"replies":[{"embeddable":true,"href":"https:\/\/lightning.ai\/pages\/wp-json\/wp\/v2\/comments?post=5647584"}],"version-history":[{"count":0,"href":"https:\/\/lightning.ai\/pages\/wp-json\/wp\/v2\/posts\/5647584\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/lightning.ai\/pages\/wp-json\/wp\/v2\/media\/5647587"}],"wp:attachment":[{"href":"https:\/\/lightning.ai\/pages\/wp-json\/wp\/v2\/media?parent=5647584"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/lightning.ai\/pages\/wp-json\/wp\/v2\/categories?post=5647584"},{"taxonomy":"post_tag","
embeddable":true,"href":"https:\/\/lightning.ai\/pages\/wp-json\/wp\/v2\/tags?post=5647584"},{"taxonomy":"glossary","embeddable":true,"href":"https:\/\/lightning.ai\/pages\/wp-json\/wp\/v2\/glossary?post=5647584"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}