{"id":5649158,"date":"2023-11-06T10:58:15","date_gmt":"2023-11-06T15:58:15","guid":{"rendered":"https:\/\/lightning.ai\/pages\/?p=5649158"},"modified":"2023-11-07T11:14:52","modified_gmt":"2023-11-07T16:14:52","slug":"4-bit-quantization-with-lightning-fabric","status":"publish","type":"post","link":"https:\/\/lightning.ai\/pages\/blog\/4-bit-quantization-with-lightning-fabric\/","title":{"rendered":"4-Bit Quantization with Lightning Fabric"},"content":{"rendered":"<div class=\"takeaways card-glow p-4 my-4\"><h3 class=\"w-100 d-block\">Takeaways<\/h3> Readers will learn the basics of Lightning Fabric\u2019s plugin for 4-bit quantization. <\/div>\n<h2>Introduction<\/h2>\n<p>The aim of 4-bit quantization is to reduce the memory usage of the model parameters by using lower precision types than full (float32) or half (bfloat16) precision. Meaning \u2013 4-bit quantization compresses models that have billions of parameters like Llama 2 or SDXL and makes them require less memory.<\/p>\n<p>Thankfully, Lightning Fabric makes quantization as easy as setting a <code>mode<\/code> flag in a plugin!<\/p>\n<h2>4-bit Quantization<\/h2>\n<p>4-bit quantization is discussed in the popular paper QLoRA: Efficient Finetuning of Quantized LLMs. QLoRA is a finetuning method that uses 4-bit quantization. The paper introduces this finetuning technique and demonstrates how it can be used to \u201cfinetune a 65B parameter model on a single 48GB GPU while preserving full 16-bit finetuning task performance\u201d by using the NF4 (normal float) format.<\/p>\n<p>Lightning Fabric can use 4-bit quantization by setting the <code>mode<\/code> flag to either <code>nf4<\/code> or <code>fp4<\/code>.<\/p>\n<pre class=\"snippet-shortcode code-shortcode dark-theme collapse-false\"><code class=\"hljs language-python\">from lightning.fabric import Fabric\r\nfrom lightning.fabric.plugins import BitsandbytesPrecision\r\n\r\n# available 4-bit quantization modes\r\n# (\"nf4\", \"fp4\")\r\n\r\nmode = \"nf4\"\r\nplugin = BitsandbytesPrecision(mode=mode)\r\nfabric = Fabric(plugins=plugin)\r\n\r\nmodel = CustomModule() # your PyTorch model\r\nmodel = fabric.setup_module(model) # quantizes the layers<\/code><div class=\"copy-button\"><button class=\"expand-button active\">Expand<\/button><button class=\"copy\">Copy<\/button><\/div><\/pre>\n<h2>Double Quantization<\/h2>\n<p>Double quantization exists as an extra 4-bit quantization setting introduced alongside NF4 in QLoRA: Efficient Finetuning of Quantized LLMs. 
## Double Quantization

Double quantization is an additional 4-bit quantization setting introduced alongside NF4 in QLoRA: Efficient Finetuning of Quantized LLMs. It works by quantizing the quantization constants that are internal to bitsandbytes' procedures, trimming a little more memory on top of plain 4-bit quantization.

Lightning Fabric enables 4-bit double quantization by setting the `mode` flag to either `nf4-dq` or `fp4-dq`.

```python
from lightning.fabric import Fabric
from lightning.fabric.plugins import BitsandbytesPrecision

# available 4-bit double quantization modes
# ("nf4-dq", "fp4-dq")

mode = "nf4-dq"
plugin = BitsandbytesPrecision(mode=mode)
fabric = Fabric(plugins=plugin)

model = CustomModule()  # your PyTorch model
model = fabric.setup_module(model)  # quantizes the layers
```

## Conclusion

Quantization is a must for most production systems: edge devices and consumer-grade hardware typically require models with a much smaller memory footprint than more powerful hardware such as NVIDIA's A100 80GB can accommodate. Understanding this technique will give you a better picture of how models like Llama 2 and SDXL are deployed, and of the requirements of edge devices in robotics, vehicles, and other systems.

**Note:** 4-bit quantization and double quantization only quantize the linear layers.
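Because only the linear layers are quantized, it can be useful to confirm which modules actually ended up in 4-bit after `fabric.setup_module`. The sketch below is a minimal, hypothetical check: it assumes a CUDA machine with bitsandbytes installed and that Lightning swaps `torch.nn.Linear` layers for subclasses of `bnb.nn.Linear4bit` (worth verifying against your installed versions).

```python
import torch
import bitsandbytes as bnb
from lightning.fabric import Fabric
from lightning.fabric.plugins import BitsandbytesPrecision

fabric = Fabric(accelerator="cuda", devices=1, plugins=BitsandbytesPrecision(mode="nf4"))

# A toy model standing in for CustomModule from the snippets above.
model = torch.nn.Sequential(
    torch.nn.Linear(128, 256),
    torch.nn.ReLU(),
    torch.nn.Linear(256, 10),
)
model = fabric.setup_module(model)  # quantizes the linear layers

# The Linear layers should now be bitsandbytes 4-bit modules;
# the ReLU (and any other non-linear layer) is left untouched.
for name, module in model.named_modules():
    if isinstance(module, bnb.nn.Linear4bit):
        print(f"{name}: 4-bit (NF4)")
    elif name:  # skip the root wrapper itself
        print(f"{name}: not quantized ({type(module).__name__})")
```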
## Still have questions?

We have an amazing community and team of core engineers ready to answer your questions. Join us on [Discord](https://discord.gg/XncpTy7DSt) or [Discourse](https://lightning.ai/forums/). See you there!

## Resources and References

- [Quantization in Lightning Fabric](https://lightning.ai/docs/fabric/latest/fundamentals/precision.html#quantization-via-bitsandbytes)
- [Introduction to Quantization on PyTorch](https://pytorch.org/blog/introduction-to-quantization-on-pytorch/)
- [Introduction to Quantization and API Summary](https://pytorch.org/docs/stable/quantization.html)
- [Quantization in Practice](https://pytorch.org/blog/quantization-in-practice/)
- [Post-Training Quantization](https://pytorch.org/TensorRT/tutorials/ptq.html)
- [QLoRA: Efficient Finetuning of Quantized LLMs](https://arxiv.org/abs/2305.14314)
- [GPTQ: Accurate Post-Training Quantization for Generative Pre-trained Transformers](https://arxiv.org/abs/2210.17323)
- [Automatic Mixed Precision for Deep Learning](https://developer.nvidia.com/automatic-mixed-precision)