{"id":5647589,"date":"2023-03-22T12:52:10","date_gmt":"2023-03-22T16:52:10","guid":{"rendered":"https:\/\/lightning.ai\/pages\/?p=5647589"},"modified":"2023-03-27T15:05:22","modified_gmt":"2023-03-27T19:05:22","slug":"accelerate-pytorch-code-with-fabric","status":"publish","type":"post","link":"https:\/\/lightning.ai\/pages\/blog\/accelerate-pytorch-code-with-fabric\/","title":{"rendered":"Accelerate PyTorch Code with Fabric"},"content":{"rendered":"","protected":false},"excerpt":{"rendered":"","protected":false},"author":16,"featured_media":5647597,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"inline_featured_image":false,"footnotes":"","_links_to":"","_links_to_target":""},"categories":[29,41],"tags":[96,179,97,51,62],"glossary":[],"acf":{"additional_authors":false,"hide_from_archive":false,"content_type":"Blog Post","sticky":false,"custom_styles":"","default_editor":false,"sections":[{"acf_fc_layout":"section","heading":"","label":"Intro","content":"<div class=\"takeaways card-glow p-4 my-4\"><h3 class=\"w-100 d-block\">Learn how to:<\/h3> Use <a href=\"https:\/\/lightning.ai\/pages\/open-source\/fabric\/\">Lightning Fabric<\/a> to train and accelerate a PyTorch model using mixed precision and distributed training. <\/div>\n<p>Lightning Fabric provides a unified and simple API to easily switch devices, as well as training strategies that can handle training large <a href=\"https:\/\/github.com\/Lightning-AI\/nanoGPT\">SOTA models<\/a>. 
We&#8217;ll also show you how to convert your raw PyTorch code so that you can accelerate it with Fabric in just a few lines of code.<\/p>\n<p>Fabric allows you to easily leverage underlying hardware such as CPUs, CUDA GPUs, TPUs, or Apple Silicon, and train your model on multiple GPUs or nodes.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-5647591\" src=\"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/03\/ezgif.com-optimize.gif\" alt=\"\" width=\"800\" height=\"300\" \/><\/p>\n","subsection":false},{"acf_fc_layout":"section","heading":"Fabric and PyTorch","label":"","content":"<p>PyTorch is by far the most commonly used framework for implementing papers. As part of these implementations, especially as models and datasets grow in size, training and inference optimizations become increasingly important.<\/p>\n<div id=\"attachment_5647593\" style=\"width: 1134px\" class=\"wp-caption alignnone\"><img aria-describedby=\"caption-attachment-5647593\" loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-5647593\" src=\"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/03\/Untitled-2.png\" alt=\"\" width=\"1124\" height=\"458\" srcset=\"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/03\/Untitled-2.png 1124w, https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/03\/Untitled-2-300x122.png 300w, https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/03\/Untitled-2-1024x417.png 1024w, https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/03\/Untitled-2-300x122@2x.png 600w\" sizes=\"(max-width: 1124px) 100vw, 1124px\" \/><p id=\"caption-attachment-5647593\" class=\"wp-caption-text\">Paper implementations by framework. 
Source: Papers with Code<\/p><\/div>\n<p>Fabric allows you to accelerate raw PyTorch with just a few lines of code.<\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n","subsection":false},{"acf_fc_layout":"section","heading":"How to use Fabric and PyTorch","label":"","content":"<p>Using Fabric with PyTorch is straightforward.<\/p>\n<p>&nbsp;<\/p>\n<div class=\"takeaways card-glow p-4 my-4\"><h3 class=\"w-100 d-block\">1. Install<\/h3> First, install Fabric (it ships as part of the <code>lightning<\/code> package) using pip. <\/div>\n<pre class=\"code-shortcode dark-theme window- collapse-false \" style=\"--height:falsepx\"><code class=\"language-python\">\n\npip install lightning\n\n<\/code><div class=\"copy-button\"><button class=\"expand-button\">Expand<\/button><button class=\"copy\">Copy<\/button><\/div><\/pre>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n","subsection":false},{"acf_fc_layout":"section","heading":"","label":"Initialize","content":"<div class=\"takeaways card-glow p-4 my-4\"><h3 class=\"w-100 d-block\">2. Initialize<\/h3> Once you have installed Fabric, accelerate your PyTorch code by creating a <span class=\"notion-enable-hover\" spellcheck=\"false\" data-token-index=\"1\">Fabric object<\/span>, then set up your model, optimizer, and dataloaders. <\/div>\n<pre class=\"code-shortcode dark-theme window- collapse-false \" style=\"--height:falsepx\"><code class=\"language-python\">\n\nfrom lightning.fabric import Fabric\n\nfabric = Fabric(accelerator=\"auto\", devices=\"auto\", strategy=\"auto\")<br \/>\nfabric.launch()\n\n<\/code><div class=\"copy-button\"><button class=\"expand-button\">Expand<\/button><button class=\"copy\">Copy<\/button><\/div><\/pre>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n","subsection":true},{"acf_fc_layout":"section","heading":"","label":"Set up your code","content":"<div class=\"takeaways card-glow p-4 my-4\"><h3 class=\"w-100 d-block\">3. 
Set up your code<\/h3> To set up the model, optimizer, and dataloaders, we&#8217;ll use the <span class=\"notion-enable-hover\" spellcheck=\"false\" data-token-index=\"1\">fabric.setup()<\/span> and <span class=\"notion-enable-hover\" spellcheck=\"false\" data-token-index=\"3\">fabric.setup_dataloaders()<\/span> APIs.\u00a0<\/div>\n<pre class=\"code-shortcode dark-theme window- collapse-false \" style=\"--height:falsepx\"><code class=\"language-python\">\n\nmodel, optimizer = fabric.setup(model, optimizer)<br \/>\ndataloader = fabric.setup_dataloaders(dataloader)\n\n<\/code><div class=\"copy-button\"><button class=\"expand-button\">Expand<\/button><button class=\"copy\">Copy<\/button><\/div><\/pre>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n","subsection":true},{"acf_fc_layout":"section","heading":"","label":"Remove manual .to(device) calls","content":"<div class=\"takeaways card-glow p-4 my-4\"><h3 class=\"w-100 d-block\">4. Remove manual .to(device) calls<\/h3> Once you&#8217;ve set up your code with Fabric, you don\u2019t need to manually move your tensors from the CPU to the accelerator (CUDA\/MPS\/TPU), so you should remove the <span class=\"notion-enable-hover\" spellcheck=\"false\" data-token-index=\"1\">model.to(device)<\/span> and <span class=\"notion-enable-hover\" spellcheck=\"false\" data-token-index=\"3\">batch.to(device)<\/span> calls from your code.\u00a0<\/div>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n","subsection":true},{"acf_fc_layout":"section","heading":"","label":"Backward with Fabric","content":"<div class=\"takeaways card-glow p-4 my-4\"><h3 class=\"w-100 d-block\">5. 
Backward with Fabric<\/h3> To do back-propagation from the loss, replace <code>loss.backward()<\/code> with <code>fabric.backward(loss)<\/code>.<\/div>\n<pre class=\"code-shortcode dark-theme window- collapse-false \" style=\"--height:falsepx\"><code class=\"language-python\">\n\n# pip install lightning timm\n\nimport torch<br \/>\nimport torch.nn as nn<br \/>\nimport torchvision<br \/>\nimport torchvision.transforms as transforms<br \/>\nfrom lightning.fabric import Fabric<br \/>\nfrom timm import create_model<br \/>\nfrom tqdm import tqdm\n\n# \u26a1\ufe0f\u26a1\ufe0f\u26a1\ufe0f\u26a1\ufe0f\u26a1\ufe0f Init Fabric \u26a1\ufe0f\u26a1\ufe0f\u26a1\ufe0f\u26a1\ufe0f\u26a1\ufe0f<br \/>\nfabric = Fabric(accelerator=\"auto\", devices=2, strategy=\"auto\")<br \/>\nfabric.launch()  # call launch() for distributed training\n\ndef load_data():<br \/>\n    transform = transforms.Compose(<br \/>\n        [transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))]<br \/>\n    )\n\n    batch_size = 32\n\n    train_set = torchvision.datasets.CIFAR10(<br \/>\n        root=\"~\/data\", train=True, download=True, transform=transform<br \/>\n    )<br \/>\n    train_loader = torch.utils.data.DataLoader(<br \/>\n        train_set, batch_size=batch_size, shuffle=True, num_workers=4<br \/>\n    )\n\n    return train_loader\n\ntrain_loader = load_data()\n\nmodel = create_model(\"resnet50\", num_classes=10)<br \/>\ncriterion = nn.CrossEntropyLoss()<br \/>\noptimizer = torch.optim.Adam(model.parameters(), lr=0.001)\n\n# \u26a1\ufe0f\u26a1\ufe0f\u26a1\ufe0f\u26a1\ufe0f\u26a1\ufe0f Setup model and optimizer with Fabric \u26a1\ufe0f\u26a1\ufe0f\u26a1\ufe0f\u26a1\ufe0f\u26a1\ufe0f<br \/>\nmodel, optimizer = fabric.setup(model, optimizer)<br \/>\n# setup dataloader with Fabric<br \/>\ntrain_loader = fabric.setup_dataloaders(train_loader)\n\n# \u26a1\ufe0f\u26a1\ufe0f\u26a1\ufe0f\u26a1\ufe0f\u26a1\ufe0f Access the device and strategy 
\u26a1\ufe0f\u26a1\ufe0f\u26a1\ufe0f\u26a1\ufe0f\u26a1\ufe0f<br \/>\nprint(f\"training on {fabric.device} with {fabric.strategy} strategy\")\n\nfor epoch in range(2):<br \/>\n    for x, y in tqdm(train_loader):<br \/>\n        # no need to move x, y to the device<br \/>\n        optimizer.zero_grad()<br \/>\n        logits = model(x)<br \/>\n        loss = criterion(logits, y)<br \/>\n        # \u26a1\ufe0f\u26a1\ufe0f\u26a1\ufe0f\u26a1\ufe0f\u26a1\ufe0f fabric.backward(...) instead of loss.backward() \u26a1\ufe0f\u26a1\ufe0f\u26a1\ufe0f\u26a1\ufe0f\u26a1\ufe0f<br \/>\n        fabric.backward(loss)<br \/>\n        optimizer.step()\n\n<\/code><div class=\"copy-button\"><button class=\"expand-button\">Expand<\/button><button class=\"copy\">Copy<\/button><\/div><\/pre>\n<p>With these minimal changes, you\u2019re all set to leverage distributed training strategies and multiple devices, and to easily switch hardware.<\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n","subsection":true},{"acf_fc_layout":"section","heading":"Mixed precision training with PyTorch and Fabric","label":"","content":"<p>You can save memory and speed up training by running your model at lower precision. In a mixed precision setting, most operations use half precision (FP16), which gives a significant computational speedup, while a small set of values is kept in single precision (FP32) to maintain model stability and accuracy. Fabric makes it simple to enable mixed precision training with its unified API.<\/p>\n<p>The supported precision settings include <code>64<\/code>, <code>32<\/code>, <code>16-mixed<\/code>, and <code>bf16-mixed<\/code>. To choose a precision, simply pass one of these values as the <code>precision<\/code> argument to the <code>Fabric<\/code> class. 
You can read more about mixed precision training with Fabric <a href=\"https:\/\/lightning.ai\/docs\/fabric\/stable\/fundamentals\/precision.html\">here<\/a>.<\/p>\n<pre class=\"code-shortcode dark-theme window- collapse-false \" style=\"--height:falsepx\"><code class=\"language-python\">\n\nfabric = Fabric(precision=\"16-mixed\")\n\n<\/code><div class=\"copy-button\"><button class=\"expand-button\">Expand<\/button><button class=\"copy\">Copy<\/button><\/div><\/pre>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n","subsection":false},{"acf_fc_layout":"section","heading":"Training on multiple GPUs","label":"","content":"<p>You can run distributed training on multiple GPUs and even multiple nodes. PyTorch provides the <a class=\"notion-link-token notion-enable-hover\" href=\"https:\/\/pytorch.org\/docs\/stable\/generated\/torch.nn.parallel.DistributedDataParallel.html\" target=\"_blank\" rel=\"noopener noreferrer\" data-token-index=\"1\"><span class=\"link-annotation-unknown-block-id--1566309371\">DistributedDataParallel<\/span><\/a> (DDP) class for distributed model training. To use DDP in raw PyTorch, you have to initialize the process group, launch one process per GPU, and change your code to move the model and data to the correct device. With Fabric, enabling distributed training is as simple as updating the flags of the <span class=\"notion-enable-hover\" spellcheck=\"false\" data-token-index=\"3\">Fabric<\/span> class. 
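<\/p>\n<p>To scale beyond a single machine, you can also set the <code>num_nodes<\/code> flag (a sketch; the device and node counts below are placeholders):<\/p>\n<pre class=\"code-shortcode dark-theme window- collapse-false \" style=\"--height:falsepx\"><code class=\"language-python\">\n\n# train on 4 nodes with 8 GPUs each (32 GPUs total)<br \/>\nfabric = Fabric(devices=8, num_nodes=4, strategy=\"ddp\")\n\n<\/code><div class=\"copy-button\"><button class=\"expand-button\">Expand<\/button><button class=\"copy\">Copy<\/button><\/div><\/pre>\n<p>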
Apart from DDP, Fabric also supports <span class=\"notion-enable-hover\" spellcheck=\"false\" data-token-index=\"5\">DeepSpeed<\/span> and <span class=\"notion-enable-hover\" spellcheck=\"false\" data-token-index=\"7\">FSDP<\/span> out of the box.<\/p>\n<pre class=\"code-shortcode dark-theme window- collapse-false \" style=\"--height:falsepx\"><code class=\"language-python\">\n\n# train on 4 GPUs<br \/>\nfabric = Fabric(devices=4, strategy=\"ddp\")\n\n# train on 100 GPUs using DeepSpeed<br \/>\nfabric = Fabric(devices=100, strategy=\"deepspeed\")\n\n<\/code><div class=\"copy-button\"><button class=\"expand-button\">Expand<\/button><button class=\"copy\">Copy<\/button><\/div><\/pre>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n","subsection":false},{"acf_fc_layout":"section","heading":"","label":"Conclusion","content":"<div class=\"takeaways card-glow p-4 my-4\"><h3 class=\"w-100 d-block\">Conclusion<\/h3> With Fabric, you can accelerate any PyTorch code to be lightning fast <span role=\"img\" aria-label=\"\u26a1\">\u26a1<\/span>. It was designed to handle Large Language Models (LLMs) and complex training pipelines such as reinforcement learning. 
With the unified API, you can control the number of devices, distributed strategy, and precision settings, making your code less redundant and more easily reproducible.\u00a0<\/div>\n<p>&nbsp;<\/p>\n<p style=\"text-align: center;\"><a target=\"blank\" href=\"https:\/\/discord.gg\/tfXFetEZxv\" class=\"d-inline-block btn btn-\">Join the Lightning AI Discord!<\/a><\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n","subsection":false}],"show_table_of_contents":true},"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v24.5 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Accelerate PyTorch Code with Fabric<\/title>\n<meta name=\"description\" content=\"Learn how to leverage distributed training strategies, multiple devices, and easily switch hardware with Lightning Fabric.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/lightning.ai\/pages\/blog\/accelerate-pytorch-code-with-fabric\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Accelerate PyTorch Code with Fabric\" \/>\n<meta property=\"og:description\" content=\"Learn how to leverage distributed training strategies, multiple devices, and easily switch hardware with Lightning Fabric.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/lightning.ai\/pages\/blog\/accelerate-pytorch-code-with-fabric\/\" \/>\n<meta property=\"og:site_name\" content=\"Lightning AI\" \/>\n<meta property=\"article:published_time\" content=\"2023-03-22T16:52:10+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2023-03-27T19:05:22+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/03\/Screen-Shot-2023-03-27-at-12.05.03-PM.png\" \/>\n\t<meta property=\"og:image:width\" content=\"790\" \/>\n\t<meta property=\"og:image:height\" 
content=\"438\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/png\" \/>\n<meta name=\"author\" content=\"JP Hennessy\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:image\" content=\"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/03\/Screen-Shot-2023-03-27-at-12.05.03-PM.png\" \/>\n<meta name=\"twitter:creator\" content=\"@LightningAI\" \/>\n<meta name=\"twitter:site\" content=\"@LightningAI\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"JP Hennessy\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/lightning.ai\/pages\/blog\/accelerate-pytorch-code-with-fabric\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/lightning.ai\/pages\/blog\/accelerate-pytorch-code-with-fabric\/\"},\"author\":{\"name\":\"JP Hennessy\",\"@id\":\"https:\/\/lightning.ai\/pages\/#\/schema\/person\/2518f4d5541f8e98016f6289169141a6\"},\"headline\":\"Accelerate PyTorch Code with Fabric\",\"datePublished\":\"2023-03-22T16:52:10+00:00\",\"dateModified\":\"2023-03-27T19:05:22+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/lightning.ai\/pages\/blog\/accelerate-pytorch-code-with-fabric\/\"},\"wordCount\":5,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\/\/lightning.ai\/pages\/#organization\"},\"image\":{\"@id\":\"https:\/\/lightning.ai\/pages\/blog\/accelerate-pytorch-code-with-fabric\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/03\/Fabric-speedup.png\",\"keywords\":[\"ai\",\"fabric\",\"ml\",\"pytorch\",\"pytorch 
lightning\"],\"articleSection\":[\"Blog\",\"Tutorials\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\/\/lightning.ai\/pages\/blog\/accelerate-pytorch-code-with-fabric\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/lightning.ai\/pages\/blog\/accelerate-pytorch-code-with-fabric\/\",\"url\":\"https:\/\/lightning.ai\/pages\/blog\/accelerate-pytorch-code-with-fabric\/\",\"name\":\"Accelerate PyTorch Code with Fabric\",\"isPartOf\":{\"@id\":\"https:\/\/lightning.ai\/pages\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/lightning.ai\/pages\/blog\/accelerate-pytorch-code-with-fabric\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/lightning.ai\/pages\/blog\/accelerate-pytorch-code-with-fabric\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/03\/Fabric-speedup.png\",\"datePublished\":\"2023-03-22T16:52:10+00:00\",\"dateModified\":\"2023-03-27T19:05:22+00:00\",\"description\":\"Learn how to leverage distributed training strategies, multiple devices, and easily switch hardware with Lightning 
Fabric.\",\"breadcrumb\":{\"@id\":\"https:\/\/lightning.ai\/pages\/blog\/accelerate-pytorch-code-with-fabric\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/lightning.ai\/pages\/blog\/accelerate-pytorch-code-with-fabric\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/lightning.ai\/pages\/blog\/accelerate-pytorch-code-with-fabric\/#primaryimage\",\"url\":\"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/03\/Fabric-speedup.png\",\"contentUrl\":\"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/03\/Fabric-speedup.png\",\"width\":1305,\"height\":675},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/lightning.ai\/pages\/blog\/accelerate-pytorch-code-with-fabric\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/lightning.ai\/pages\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Accelerate PyTorch Code with Fabric\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/lightning.ai\/pages\/#website\",\"url\":\"https:\/\/lightning.ai\/pages\/\",\"name\":\"Lightning AI\",\"description\":\"The platform for teams to build AI.\",\"publisher\":{\"@id\":\"https:\/\/lightning.ai\/pages\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/lightning.ai\/pages\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/lightning.ai\/pages\/#organization\",\"name\":\"Lightning 
AI\",\"url\":\"https:\/\/lightning.ai\/pages\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/lightning.ai\/pages\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/02\/image-17.png\",\"contentUrl\":\"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/02\/image-17.png\",\"width\":1744,\"height\":856,\"caption\":\"Lightning AI\"},\"image\":{\"@id\":\"https:\/\/lightning.ai\/pages\/#\/schema\/logo\/image\/\"},\"sameAs\":[\"https:\/\/x.com\/LightningAI\"]},{\"@type\":\"Person\",\"@id\":\"https:\/\/lightning.ai\/pages\/#\/schema\/person\/2518f4d5541f8e98016f6289169141a6\",\"name\":\"JP Hennessy\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/lightning.ai\/pages\/#\/schema\/person\/image\/\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/28ade268218ae45f723b0b62499f527a?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/28ade268218ae45f723b0b62499f527a?s=96&d=mm&r=g\",\"caption\":\"JP Hennessy\"},\"url\":\"https:\/\/lightning.ai\/pages\/author\/jplightning-ai\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. 
-->","yoast_head_json":{"title":"Accelerate PyTorch Code with Fabric","description":"Learn how to leverage distributed training strategies, multiple devices, and easily switch hardware with Lightning Fabric.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/lightning.ai\/pages\/blog\/accelerate-pytorch-code-with-fabric\/","og_locale":"en_US","og_type":"article","og_title":"Accelerate PyTorch Code with Fabric","og_description":"Learn how to leverage distributed training strategies, multiple devices, and easily switch hardware with Lightning Fabric.","og_url":"https:\/\/lightning.ai\/pages\/blog\/accelerate-pytorch-code-with-fabric\/","og_site_name":"Lightning AI","article_published_time":"2023-03-22T16:52:10+00:00","article_modified_time":"2023-03-27T19:05:22+00:00","og_image":[{"width":790,"height":438,"url":"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/03\/Screen-Shot-2023-03-27-at-12.05.03-PM.png","type":"image\/png"}],"author":"JP Hennessy","twitter_card":"summary_large_image","twitter_image":"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/03\/Screen-Shot-2023-03-27-at-12.05.03-PM.png","twitter_creator":"@LightningAI","twitter_site":"@LightningAI","twitter_misc":{"Written by":"JP Hennessy"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/lightning.ai\/pages\/blog\/accelerate-pytorch-code-with-fabric\/#article","isPartOf":{"@id":"https:\/\/lightning.ai\/pages\/blog\/accelerate-pytorch-code-with-fabric\/"},"author":{"name":"JP Hennessy","@id":"https:\/\/lightning.ai\/pages\/#\/schema\/person\/2518f4d5541f8e98016f6289169141a6"},"headline":"Accelerate PyTorch Code with 
Fabric","datePublished":"2023-03-22T16:52:10+00:00","dateModified":"2023-03-27T19:05:22+00:00","mainEntityOfPage":{"@id":"https:\/\/lightning.ai\/pages\/blog\/accelerate-pytorch-code-with-fabric\/"},"wordCount":5,"commentCount":0,"publisher":{"@id":"https:\/\/lightning.ai\/pages\/#organization"},"image":{"@id":"https:\/\/lightning.ai\/pages\/blog\/accelerate-pytorch-code-with-fabric\/#primaryimage"},"thumbnailUrl":"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/03\/Fabric-speedup.png","keywords":["ai","fabric","ml","pytorch","pytorch lightning"],"articleSection":["Blog","Tutorials"],"inLanguage":"en-US","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/lightning.ai\/pages\/blog\/accelerate-pytorch-code-with-fabric\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/lightning.ai\/pages\/blog\/accelerate-pytorch-code-with-fabric\/","url":"https:\/\/lightning.ai\/pages\/blog\/accelerate-pytorch-code-with-fabric\/","name":"Accelerate PyTorch Code with Fabric","isPartOf":{"@id":"https:\/\/lightning.ai\/pages\/#website"},"primaryImageOfPage":{"@id":"https:\/\/lightning.ai\/pages\/blog\/accelerate-pytorch-code-with-fabric\/#primaryimage"},"image":{"@id":"https:\/\/lightning.ai\/pages\/blog\/accelerate-pytorch-code-with-fabric\/#primaryimage"},"thumbnailUrl":"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/03\/Fabric-speedup.png","datePublished":"2023-03-22T16:52:10+00:00","dateModified":"2023-03-27T19:05:22+00:00","description":"Learn how to leverage distributed training strategies, multiple devices, and easily switch hardware with Lightning 
Fabric.","breadcrumb":{"@id":"https:\/\/lightning.ai\/pages\/blog\/accelerate-pytorch-code-with-fabric\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/lightning.ai\/pages\/blog\/accelerate-pytorch-code-with-fabric\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/lightning.ai\/pages\/blog\/accelerate-pytorch-code-with-fabric\/#primaryimage","url":"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/03\/Fabric-speedup.png","contentUrl":"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/03\/Fabric-speedup.png","width":1305,"height":675},{"@type":"BreadcrumbList","@id":"https:\/\/lightning.ai\/pages\/blog\/accelerate-pytorch-code-with-fabric\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/lightning.ai\/pages\/"},{"@type":"ListItem","position":2,"name":"Accelerate PyTorch Code with Fabric"}]},{"@type":"WebSite","@id":"https:\/\/lightning.ai\/pages\/#website","url":"https:\/\/lightning.ai\/pages\/","name":"Lightning AI","description":"The platform for teams to build AI.","publisher":{"@id":"https:\/\/lightning.ai\/pages\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/lightning.ai\/pages\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/lightning.ai\/pages\/#organization","name":"Lightning AI","url":"https:\/\/lightning.ai\/pages\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/lightning.ai\/pages\/#\/schema\/logo\/image\/","url":"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/02\/image-17.png","contentUrl":"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/02\/image-17.png","width":1744,"height":856,"caption":"Lightning 
AI"},"image":{"@id":"https:\/\/lightning.ai\/pages\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/x.com\/LightningAI"]},{"@type":"Person","@id":"https:\/\/lightning.ai\/pages\/#\/schema\/person\/2518f4d5541f8e98016f6289169141a6","name":"JP Hennessy","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/lightning.ai\/pages\/#\/schema\/person\/image\/","url":"https:\/\/secure.gravatar.com\/avatar\/28ade268218ae45f723b0b62499f527a?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/28ade268218ae45f723b0b62499f527a?s=96&d=mm&r=g","caption":"JP Hennessy"},"url":"https:\/\/lightning.ai\/pages\/author\/jplightning-ai\/"}]}},"_links":{"self":[{"href":"https:\/\/lightning.ai\/pages\/wp-json\/wp\/v2\/posts\/5647589"}],"collection":[{"href":"https:\/\/lightning.ai\/pages\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/lightning.ai\/pages\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/lightning.ai\/pages\/wp-json\/wp\/v2\/users\/16"}],"replies":[{"embeddable":true,"href":"https:\/\/lightning.ai\/pages\/wp-json\/wp\/v2\/comments?post=5647589"}],"version-history":[{"count":0,"href":"https:\/\/lightning.ai\/pages\/wp-json\/wp\/v2\/posts\/5647589\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/lightning.ai\/pages\/wp-json\/wp\/v2\/media\/5647597"}],"wp:attachment":[{"href":"https:\/\/lightning.ai\/pages\/wp-json\/wp\/v2\/media?parent=5647589"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/lightning.ai\/pages\/wp-json\/wp\/v2\/categories?post=5647589"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/lightning.ai\/pages\/wp-json\/wp\/v2\/tags?post=5647589"},{"taxonomy":"glossary","embeddable":true,"href":"https:\/\/lightning.ai\/pages\/wp-json\/wp\/v2\/glossary?post=5647589"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}