{"id":5646658,"date":"2022-09-22T16:03:08","date_gmt":"2022-09-22T20:03:08","guid":{"rendered":"https:\/\/lightning.ai\/pages\/?p=5646658"},"modified":"2023-03-07T17:33:40","modified_gmt":"2023-03-07T22:33:40","slug":"fully-sharded-data-parallel-fsdp-pytorch","status":"publish","type":"post","link":"https:\/\/lightning.ai\/pages\/community\/tutorial\/fully-sharded-data-parallel-fsdp-pytorch\/","title":{"rendered":"How to Enable Native Fully Sharded Data Parallel in PyTorch"},"content":{"rendered":"<h2>PyTorch&#8217;s native FSDP, now in Lightning<\/h2>\n<h3><em><strong>tl;dr\u00a0<\/strong><\/em>this tutorial teaches you how to overcome hardware constraints when training large models using PyTorch&#8217;s new model sharding strategy<\/h3>\n<p>Model size has grown exponentially in recent years, producing significantly better results in many domains. However, this expansion has been hampered by hardware constraints, as not everyone has access to the necessary hardware to train large-scale models. To tackle this issue, engineers and researchers have been working on strategies for efficient distributed model training, including Fully Sharded Data Parallel (FSDP).<\/p>\n<p>One way to reduce memory overhead is by sharding the optimizer states. Currently, each device handles all the weight updates and gradient computation, which consumes a large chunk of memory. Optimizer sharding comes in handy by reducing the memory footprint on each device. Sometimes, even optimizer sharding isn&#8217;t enough; in such cases, we would shard models as well.<\/p>\n<p>Model Sharding is one technique in which model weights are sharded across devices to reduce memory overhead.<\/p>\n<p>In the release of 1.11, PyTorch added native support for Fully Sharded Data Parallel (FSDP).<\/p>\n<div style=\"width: 4382px\" class=\"wp-caption alignnone\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/pytorch.org\/assets\/images\/fsdp_workflow.png\" alt=\"https:\/\/pytorch.org\/assets\/images\/fsdp_workflow.png\" width=\"4372\" height=\"1975\" \/><p class=\"wp-caption-text\">FSDP workflow (via PyTorch)<\/p><\/div>\n<p>FSDP initially appeared in <strong><span style=\"color: #7f30ec;\"><a style=\"color: #7f30ec;\" href=\"https:\/\/github.com\/facebookresearch\/fairscale\">fairscale<\/a><\/span><\/strong> and later in the official PyTorch repository. Lightning Trainer now supports both of them<a href=\"https:\/\/pytorch-lightning.readthedocs.io\/en\/1.7.0\/advanced\/model_parallel.html#sharded-training\">.<\/a><\/p>\n<h2>Using FSDP with Lightning<\/h2>\n<p>In the <strong><span style=\"color: #7f30ec;\"><a style=\"color: #7f30ec;\" href=\"https:\/\/lightning.ai\/pages\/community\/lightning-releases\/pytorch-lightning-1-7-release\/\">Lightning v1.7.0 release<\/a><\/span><\/strong>, we&#8217;ve added support for this Fully Sharded Native Strategy, which can help you leverage native FSDP support by setting the <code>strategy<\/code> flag as <code>\"fsdp_native\"<\/code>.<\/p>\n<p><script src=\"https:\/\/gist.github.com\/a88d8bf546368280c75fff6090ec0f99.js\"><\/script><code class=\"language-python\"><br \/>\n<\/code><\/p>\n<p>You can customize the strategy configuration by adjusting the arguments of <code>DDPFullyShardedNativeStrategy<\/code> and passing that to the <code>strategy<\/code> inside the <code>Trainer<\/code>.<\/p>\n<p><script src=\"https:\/\/gist.github.com\/4784760c9709a853670c5711f514e99c.js\"><\/script><code class=\"language-python\"><br \/>\n<\/code><\/p>\n<p>Read more about its usage in the <a href=\"https:\/\/pytorch-lightning.readthedocs.io\/en\/latest\/advanced\/model_parallel.html#pytorch-fully-sharded-training\"><span style=\"color: #7f30ec;\"><strong>docs here<\/strong><\/span><\/a>.<\/p>\n<h2>How does FSDP work internally?<\/h2>\n<p>In regular <strong><span style=\"color: #7f30ec;\"><a style=\"color: #7f30ec;\" href=\"https:\/\/pytorch.org\/tutorials\/intermediate\/ddp_tutorial.html\">DDP<\/a><\/span><\/strong>, every GPU holds an exact copy of the model. In contrast, Fully Sharded Training shards the entire model weights across all available GPUs, allowing you to scale model size while using efficient communication to reduce overhead. In practice, this means we can remain at parity with PyTorch DDP while dramatically scaling our model sizes. The technique is similar to <strong><span style=\"color: #7f30ec;\"><a style=\"color: #7f30ec;\" href=\"https:\/\/deepspeed.readthedocs.io\/en\/latest\/zero3.html\">ZeRO-Stage 3<\/a><\/span><\/strong>.<\/p>\n<p>You can read more about this in PyTorch&#8217;s blog post <a href=\"https:\/\/pytorch.org\/blog\/introducing-pytorch-fully-sharded-data-parallel-api\"><strong><span style=\"color: #7f30ec;\">here<\/span><\/strong>.<\/a><\/p>\n<p>We also suggest looking at this <strong><span style=\"color: #7f30ec;\"><a style=\"color: #7f30ec;\" href=\"https:\/\/pytorch.org\/tutorials\/intermediate\/FSDP_tutorial.html\">tutorial<\/a><\/span><\/strong>.<\/p>\n<h6>Note: Since PyTorch has labeled native support for FSDP as beta, the new strategy is in beta as well and therefore subject to change. The interface can bring breaking changes and new features with the next release of PyTorch.<\/h6>\n<h2>Key Points and Differences From the Native FSDP Release<\/h2>\n<ul>\n<li><span style=\"font-size: 12pt;\">This implementation borrows from FairScale\u2019s version while bringing streamlined APIs and additional performance improvements.<\/span><\/li>\n<li><span style=\"font-size: 12pt;\">When we enabled CPU offloading, native FSDP implementation significantly improved model initialization time when compared against FairScale&#8217;s original.<\/span><\/li>\n<li><span style=\"font-size: 12pt;\">Soon, FairScale FSDP will remain in the FairScale repository for research projects. At the same time, generic and widely adopted features will be incrementally upstreamed to PyTorch and hardened as needed.<\/span><\/li>\n<\/ul>\n<h2>Acknowledgment<\/h2>\n<p>We thank <span style=\"color: #c15de8;\"><strong><a style=\"color: #c15de8;\" href=\"https:\/\/github.com\/sisilmehta2000\">Sisil Mehta<\/a><\/strong><\/span> for spearheading the native FSDP integration with Lightning Trainer.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>PyTorch&#8217;s native FSDP, now in Lightning tl;dr\u00a0this tutorial teaches you how to overcome hardware constraints when training large models using PyTorch&#8217;s new model sharding strategy Model size has grown exponentially in recent years, producing significantly better results in many domains. However, this expansion has been hampered by hardware constraints, as not everyone has access to<a class=\"excerpt-read-more\" href=\"https:\/\/lightning.ai\/pages\/community\/tutorial\/fully-sharded-data-parallel-fsdp-pytorch\/\" title=\"ReadHow to Enable Native Fully Sharded Data Parallel in PyTorch\">&#8230; Read more &raquo;<\/a><\/p>\n","protected":false},"author":16,"featured_media":5646660,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"inline_featured_image":false,"footnotes":"","_links_to":"","_links_to_target":""},"categories":[41],"tags":[],"glossary":[],"acf":{"additional_authors":false,"hide_from_archive":false,"content_type":"Blog Post","custom_styles":""},"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v24.5 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>How to Enable Native Fully Sharded Data Parallel in PyTorch<\/title>\n<meta name=\"description\" content=\"This tutorial teaches you how to enable PyTorch&#039;s native Fully Sharded Data Parallel (FSDP) technique in PyTorch Lightning.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/lightning.ai\/pages\/community\/tutorial\/fully-sharded-data-parallel-fsdp-pytorch\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"How to Enable Native Fully Sharded Data Parallel in PyTorch\" \/>\n<meta property=\"og:description\" content=\"This tutorial teaches you how to enable PyTorch&#039;s native Fully Sharded Data Parallel (FSDP) technique in PyTorch Lightning.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/lightning.ai\/pages\/community\/tutorial\/fully-sharded-data-parallel-fsdp-pytorch\/\" \/>\n<meta property=\"og:site_name\" content=\"Lightning AI\" \/>\n<meta property=\"article:published_time\" content=\"2022-09-22T20:03:08+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2023-03-07T22:33:40+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2022\/09\/FSDP.png\" \/>\n\t<meta property=\"og:image:width\" content=\"2320\" \/>\n\t<meta property=\"og:image:height\" content=\"1200\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/png\" \/>\n<meta name=\"author\" content=\"JP Hennessy\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@LightningAI\" \/>\n<meta name=\"twitter:site\" content=\"@LightningAI\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"JP Hennessy\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"3 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/lightning.ai\/pages\/community\/tutorial\/fully-sharded-data-parallel-fsdp-pytorch\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/lightning.ai\/pages\/community\/tutorial\/fully-sharded-data-parallel-fsdp-pytorch\/\"},\"author\":{\"name\":\"JP Hennessy\",\"@id\":\"https:\/\/lightning.ai\/pages\/#\/schema\/person\/2518f4d5541f8e98016f6289169141a6\"},\"headline\":\"How to Enable Native Fully Sharded Data Parallel in PyTorch\",\"datePublished\":\"2022-09-22T20:03:08+00:00\",\"dateModified\":\"2023-03-07T22:33:40+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/lightning.ai\/pages\/community\/tutorial\/fully-sharded-data-parallel-fsdp-pytorch\/\"},\"wordCount\":508,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\/\/lightning.ai\/pages\/#organization\"},\"image\":{\"@id\":\"https:\/\/lightning.ai\/pages\/community\/tutorial\/fully-sharded-data-parallel-fsdp-pytorch\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2022\/09\/FSDP.png\",\"articleSection\":[\"Tutorials\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\/\/lightning.ai\/pages\/community\/tutorial\/fully-sharded-data-parallel-fsdp-pytorch\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/lightning.ai\/pages\/community\/tutorial\/fully-sharded-data-parallel-fsdp-pytorch\/\",\"url\":\"https:\/\/lightning.ai\/pages\/community\/tutorial\/fully-sharded-data-parallel-fsdp-pytorch\/\",\"name\":\"How to Enable Native Fully Sharded Data Parallel in PyTorch\",\"isPartOf\":{\"@id\":\"https:\/\/lightning.ai\/pages\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/lightning.ai\/pages\/community\/tutorial\/fully-sharded-data-parallel-fsdp-pytorch\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/lightning.ai\/pages\/community\/tutorial\/fully-sharded-data-parallel-fsdp-pytorch\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2022\/09\/FSDP.png\",\"datePublished\":\"2022-09-22T20:03:08+00:00\",\"dateModified\":\"2023-03-07T22:33:40+00:00\",\"description\":\"This tutorial teaches you how to enable PyTorch's native Fully Sharded Data Parallel (FSDP) technique in PyTorch Lightning.\",\"breadcrumb\":{\"@id\":\"https:\/\/lightning.ai\/pages\/community\/tutorial\/fully-sharded-data-parallel-fsdp-pytorch\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/lightning.ai\/pages\/community\/tutorial\/fully-sharded-data-parallel-fsdp-pytorch\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/lightning.ai\/pages\/community\/tutorial\/fully-sharded-data-parallel-fsdp-pytorch\/#primaryimage\",\"url\":\"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2022\/09\/FSDP.png\",\"contentUrl\":\"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2022\/09\/FSDP.png\",\"width\":2320,\"height\":1200},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/lightning.ai\/pages\/community\/tutorial\/fully-sharded-data-parallel-fsdp-pytorch\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/lightning.ai\/pages\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"How to Enable Native Fully Sharded Data Parallel in PyTorch\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/lightning.ai\/pages\/#website\",\"url\":\"https:\/\/lightning.ai\/pages\/\",\"name\":\"Lightning AI\",\"description\":\"The platform for teams to build AI.\",\"publisher\":{\"@id\":\"https:\/\/lightning.ai\/pages\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/lightning.ai\/pages\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/lightning.ai\/pages\/#organization\",\"name\":\"Lightning AI\",\"url\":\"https:\/\/lightning.ai\/pages\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/lightning.ai\/pages\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/02\/image-17.png\",\"contentUrl\":\"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/02\/image-17.png\",\"width\":1744,\"height\":856,\"caption\":\"Lightning AI\"},\"image\":{\"@id\":\"https:\/\/lightning.ai\/pages\/#\/schema\/logo\/image\/\"},\"sameAs\":[\"https:\/\/x.com\/LightningAI\"]},{\"@type\":\"Person\",\"@id\":\"https:\/\/lightning.ai\/pages\/#\/schema\/person\/2518f4d5541f8e98016f6289169141a6\",\"name\":\"JP Hennessy\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/lightning.ai\/pages\/#\/schema\/person\/image\/\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/28ade268218ae45f723b0b62499f527a?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/28ade268218ae45f723b0b62499f527a?s=96&d=mm&r=g\",\"caption\":\"JP Hennessy\"},\"url\":\"https:\/\/lightning.ai\/pages\/author\/jplightning-ai\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"How to Enable Native Fully Sharded Data Parallel in PyTorch","description":"This tutorial teaches you how to enable PyTorch's native Fully Sharded Data Parallel (FSDP) technique in PyTorch Lightning.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/lightning.ai\/pages\/community\/tutorial\/fully-sharded-data-parallel-fsdp-pytorch\/","og_locale":"en_US","og_type":"article","og_title":"How to Enable Native Fully Sharded Data Parallel in PyTorch","og_description":"This tutorial teaches you how to enable PyTorch's native Fully Sharded Data Parallel (FSDP) technique in PyTorch Lightning.","og_url":"https:\/\/lightning.ai\/pages\/community\/tutorial\/fully-sharded-data-parallel-fsdp-pytorch\/","og_site_name":"Lightning AI","article_published_time":"2022-09-22T20:03:08+00:00","article_modified_time":"2023-03-07T22:33:40+00:00","og_image":[{"width":2320,"height":1200,"url":"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2022\/09\/FSDP.png","type":"image\/png"}],"author":"JP Hennessy","twitter_card":"summary_large_image","twitter_creator":"@LightningAI","twitter_site":"@LightningAI","twitter_misc":{"Written by":"JP Hennessy","Est. reading time":"3 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/lightning.ai\/pages\/community\/tutorial\/fully-sharded-data-parallel-fsdp-pytorch\/#article","isPartOf":{"@id":"https:\/\/lightning.ai\/pages\/community\/tutorial\/fully-sharded-data-parallel-fsdp-pytorch\/"},"author":{"name":"JP Hennessy","@id":"https:\/\/lightning.ai\/pages\/#\/schema\/person\/2518f4d5541f8e98016f6289169141a6"},"headline":"How to Enable Native Fully Sharded Data Parallel in PyTorch","datePublished":"2022-09-22T20:03:08+00:00","dateModified":"2023-03-07T22:33:40+00:00","mainEntityOfPage":{"@id":"https:\/\/lightning.ai\/pages\/community\/tutorial\/fully-sharded-data-parallel-fsdp-pytorch\/"},"wordCount":508,"commentCount":0,"publisher":{"@id":"https:\/\/lightning.ai\/pages\/#organization"},"image":{"@id":"https:\/\/lightning.ai\/pages\/community\/tutorial\/fully-sharded-data-parallel-fsdp-pytorch\/#primaryimage"},"thumbnailUrl":"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2022\/09\/FSDP.png","articleSection":["Tutorials"],"inLanguage":"en-US","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/lightning.ai\/pages\/community\/tutorial\/fully-sharded-data-parallel-fsdp-pytorch\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/lightning.ai\/pages\/community\/tutorial\/fully-sharded-data-parallel-fsdp-pytorch\/","url":"https:\/\/lightning.ai\/pages\/community\/tutorial\/fully-sharded-data-parallel-fsdp-pytorch\/","name":"How to Enable Native Fully Sharded Data Parallel in PyTorch","isPartOf":{"@id":"https:\/\/lightning.ai\/pages\/#website"},"primaryImageOfPage":{"@id":"https:\/\/lightning.ai\/pages\/community\/tutorial\/fully-sharded-data-parallel-fsdp-pytorch\/#primaryimage"},"image":{"@id":"https:\/\/lightning.ai\/pages\/community\/tutorial\/fully-sharded-data-parallel-fsdp-pytorch\/#primaryimage"},"thumbnailUrl":"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2022\/09\/FSDP.png","datePublished":"2022-09-22T20:03:08+00:00","dateModified":"2023-03-07T22:33:40+00:00","description":"This tutorial teaches you how to enable PyTorch's native Fully Sharded Data Parallel (FSDP) technique in PyTorch Lightning.","breadcrumb":{"@id":"https:\/\/lightning.ai\/pages\/community\/tutorial\/fully-sharded-data-parallel-fsdp-pytorch\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/lightning.ai\/pages\/community\/tutorial\/fully-sharded-data-parallel-fsdp-pytorch\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/lightning.ai\/pages\/community\/tutorial\/fully-sharded-data-parallel-fsdp-pytorch\/#primaryimage","url":"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2022\/09\/FSDP.png","contentUrl":"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2022\/09\/FSDP.png","width":2320,"height":1200},{"@type":"BreadcrumbList","@id":"https:\/\/lightning.ai\/pages\/community\/tutorial\/fully-sharded-data-parallel-fsdp-pytorch\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/lightning.ai\/pages\/"},{"@type":"ListItem","position":2,"name":"How to Enable Native Fully Sharded Data Parallel in PyTorch"}]},{"@type":"WebSite","@id":"https:\/\/lightning.ai\/pages\/#website","url":"https:\/\/lightning.ai\/pages\/","name":"Lightning AI","description":"The platform for teams to build AI.","publisher":{"@id":"https:\/\/lightning.ai\/pages\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/lightning.ai\/pages\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/lightning.ai\/pages\/#organization","name":"Lightning AI","url":"https:\/\/lightning.ai\/pages\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/lightning.ai\/pages\/#\/schema\/logo\/image\/","url":"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/02\/image-17.png","contentUrl":"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/02\/image-17.png","width":1744,"height":856,"caption":"Lightning AI"},"image":{"@id":"https:\/\/lightning.ai\/pages\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/x.com\/LightningAI"]},{"@type":"Person","@id":"https:\/\/lightning.ai\/pages\/#\/schema\/person\/2518f4d5541f8e98016f6289169141a6","name":"JP Hennessy","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/lightning.ai\/pages\/#\/schema\/person\/image\/","url":"https:\/\/secure.gravatar.com\/avatar\/28ade268218ae45f723b0b62499f527a?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/28ade268218ae45f723b0b62499f527a?s=96&d=mm&r=g","caption":"JP Hennessy"},"url":"https:\/\/lightning.ai\/pages\/author\/jplightning-ai\/"}]}},"_links":{"self":[{"href":"https:\/\/lightning.ai\/pages\/wp-json\/wp\/v2\/posts\/5646658"}],"collection":[{"href":"https:\/\/lightning.ai\/pages\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/lightning.ai\/pages\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/lightning.ai\/pages\/wp-json\/wp\/v2\/users\/16"}],"replies":[{"embeddable":true,"href":"https:\/\/lightning.ai\/pages\/wp-json\/wp\/v2\/comments?post=5646658"}],"version-history":[{"count":0,"href":"https:\/\/lightning.ai\/pages\/wp-json\/wp\/v2\/posts\/5646658\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/lightning.ai\/pages\/wp-json\/wp\/v2\/media\/5646660"}],"wp:attachment":[{"href":"https:\/\/lightning.ai\/pages\/wp-json\/wp\/v2\/media?parent=5646658"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/lightning.ai\/pages\/wp-json\/wp\/v2\/categories?post=5646658"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/lightning.ai\/pages\/wp-json\/wp\/v2\/tags?post=5646658"},{"taxonomy":"glossary","embeddable":true,"href":"https:\/\/lightning.ai\/pages\/wp-json\/wp\/v2\/glossary?post=5646658"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}