{"id":5647602,"date":"2023-03-23T17:01:26","date_gmt":"2023-03-23T21:01:26","guid":{"rendered":"https:\/\/lightning.ai\/pages\/?p=5647602"},"modified":"2023-03-24T14:59:41","modified_gmt":"2023-03-24T18:59:41","slug":"how-to-speed-up-pytorch-model-training","status":"publish","type":"post","link":"https:\/\/lightning.ai\/pages\/community\/tutorial\/how-to-speed-up-pytorch-model-training\/","title":{"rendered":"How to Speed Up PyTorch Model Training"},"content":{"rendered":"<div class=\"takeaways card-glow p-4 my-4\"><h3 class=\"w-100 d-block\">Takeaways<\/h3> Learn how to improve the training performance of your PyTorch model without compromising its accuracy. To do so, we will wrap a PyTorch model in a LightningModule and use the Trainer class to enable various training optimizations. By changing only a few lines of code, we can reduce the training time on a single GPU from 22.53 minutes to 2.75 minutes while maintaining the model\u2019s prediction accuracy. Yes, that\u2019s a 8x performance boost! 
<\/div>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-5647603 \" src=\"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/03\/Screenshot-2023-03-22-at-2.02.54-PM.png\" alt=\"\" width=\"1075\" height=\"483\" srcset=\"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/03\/Screenshot-2023-03-22-at-2.02.54-PM.png 2024w, https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/03\/Screenshot-2023-03-22-at-2.02.54-PM-300x135.png 300w, https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/03\/Screenshot-2023-03-22-at-2.02.54-PM-1024x460.png 1024w, https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/03\/Screenshot-2023-03-22-at-2.02.54-PM-1536x691.png 1536w, https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/03\/Screenshot-2023-03-22-at-2.02.54-PM-300x135@2x.png 600w\" sizes=\"(max-width: 1075px) 100vw, 1075px\" \/><\/p>\n<p>&nbsp;<\/p>\n<h2 id=\"introduction\">Introduction<\/h2>\n<p>In this tutorial, we will finetune a\u00a0<a href=\"https:\/\/arxiv.org\/abs\/1910.01108\">DistilBERT model<\/a>, a distilled version of BERT that is 40% smaller at almost identical predictive performance. There are several ways we can finetune a pretrained language model. 
The figure below depicts the three most common approaches.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-5647604 \" src=\"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/03\/3-techniques.png\" alt=\"\" width=\"613\" height=\"344\" srcset=\"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/03\/3-techniques.png 2180w, https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/03\/3-techniques-300x168.png 300w, https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/03\/3-techniques-1024x575.png 1024w, https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/03\/3-techniques-1536x862.png 1536w, https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/03\/3-techniques-2048x1150.png 2048w, https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/03\/3-techniques-300x168@2x.png 600w\" sizes=\"(max-width: 613px) 100vw, 613px\" \/><\/p>\n<p>All three approaches above (a-c) assume we have pretrained the model on an unlabeled dataset using self-supervised learning. Then, in step 2, when we transfer the model to the target task, we either<\/p>\n<ul>\n<li>a) extract the embeddings and train a classifier on these (this can be a support vector machine from scikit-learn, for example);<\/li>\n<li>b) replace\/add an output layer and finetune the last layer(s) of the transformer;<\/li>\n<li>c) replace\/add an output layer and finetune all layers.<\/li>\n<\/ul>\n<p>The approaches a-c are ordered by computational efficiency, where a) is typically the fastest. 
In my experience, this sorting order also reflects the model\u2019s predictive performance, where c) usually yields the highest prediction accuracy.<\/p>\n<p>In this tutorial, we will use approach c) and train a model to predict the sentiment of movie reviews in the\u00a0<a href=\"https:\/\/ai.stanford.edu\/~amaas\/data\/sentiment\/\">IMDB Large Movie Review<\/a>\u00a0dataset, which consists of 50,000 movie reviews in total.<\/p>\n<h2 id=\"1-plain-pytorch-baseline\">1) Plain PyTorch Baseline<\/h2>\n<p>As a warm-up exercise, let\u2019s start with the plain PyTorch baseline for training the DistilBERT model on the IMDB movie review dataset. If you want to run the code yourself, you can set up a virtual environment with the relevant Python libraries as follows:<\/p>\n<pre class=\"code-shortcode dark-theme window- collapse-false \" style=\"--height:falsepx\"><code class=\"language-python\">\nconda create -n faster-blog python=3.9\nconda activate faster-blog\npip install watermark transformers datasets torchmetrics lightning\n<\/code><div class=\"copy-button\"><button class=\"expand-button\">Expand<\/button><button class=\"copy\">Copy<\/button><\/div><\/pre>\n<p>For reference, the relevant software versions I was using are the following (they will be printed to the terminal when you run the code later in this article):<\/p>\n<pre class=\"code-shortcode dark-theme window- collapse-false \" style=\"--height:falsepx\"><code class=\"language-python\">\nPython version: 3.9.15\ntorch        : 2.0.0+cu118\nlightning    : 2.0.0\ntransformers : 4.26.1\n<\/code><div class=\"copy-button\"><button class=\"expand-button\">Expand<\/button><button 
class=\"copy\">Copy<\/button><\/div><\/pre>\n<p>To avoid bloating this article with boring data-loading utilities, I will skip over the\u00a0<a href=\"https:\/\/github.com\/rasbt\/faster-pytorch-blog\/blob\/main\/local_dataset_utilities.py\">local_dataset_utilities.py<\/a>\u00a0file, which contains code to load the dataset. The only relevant information here is that we are partitioning the dataset into 35,000 training examples, 5,000 validation set records, and 10,000 test records.<\/p>\n<p>Let\u2019s get to the main PyTorch code. This code is self-contained except for the dataset loading utilities I placed in the\u00a0<a href=\"https:\/\/github.com\/rasbt\/faster-pytorch-blog\/blob\/main\/local_dataset_utilities.py\">local_dataset_utilities.py<\/a> file. Have a look at the PyTorch code before we discuss it below:<\/p>\n<pre class=\"code-shortcode dark-theme window- collapse-false \" style=\"--height:falsepx\"><code class=\"language-python\">\n\n<p class=\"p1\">import os\n\n<p class=\"p1\">import os.path <b>as<\/b> op\n\n<p class=\"p1\">import time\n\n<p class=\"p1\">from datasets import load_dataset\n\n<p class=\"p1\">import torch\n\n<p class=\"p1\">from torch.utils.data import DataLoader\n\n<p class=\"p1\">import torchmetrics\n\n<p class=\"p1\">from transformers import AutoTokenizer\n\n<p class=\"p1\">from transformers import AutoModelForSequenceClassification\n\n<p class=\"p1\">from watermark import watermark\n\n<p class=\"p1\">from local_dataset_utilities import (\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span>download_dataset,\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span>load_dataset_into_to_dataframe,\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span>partition_dataset,\n\n<p class=\"p1\">)\n\n<p class=\"p1\">from local_dataset_utilities import IMDBDataset\n\n<p class=\"p1\"><b>def<\/b> <b>tokenize_text<\/b>(batch):\n\n<p class=\"p1\"><span 
class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span><b>return<\/b> tokenizer(batch[\"text\"], truncation<b>=<\/b>True, padding<b>=<\/b>True)\n\n<p class=\"p1\"><b>def<\/b> <b>train<\/b>(num_epochs, model, optimizer, train_loader, val_loader, device):\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span><b>for<\/b> epoch <b>in<\/b> range(num_epochs):\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 <\/span>train_acc <b>=<\/b> torchmetrics.Accuracy(task<b>=<\/b>\"multiclass\", num_classes<b>=<\/b>2).to(device)\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 <\/span><b>for<\/b> batch_idx, batch <b>in<\/b> enumerate(train_loader):\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 <\/span>model.train()\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 <\/span><b>for<\/b> s <b>in<\/b> [\"input_ids\", \"attention_mask\", \"label\"]:\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 <\/span>batch[s] <b>=<\/b> batch[s].to(device)\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 <\/span><i>### FORWARD AND BACK PROP<\/i>\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 <\/span>outputs <b>=<\/b> model(\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 <\/span>batch[\"input_ids\"],\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 <\/span>attention_mask<b>=<\/b>batch[\"attention_mask\"],\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 <\/span>labels<b>=<\/b>batch[\"label\"],\n\n<p class=\"p1\"><span 
class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 <\/span>)\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 <\/span>optimizer.zero_grad()\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 <\/span>outputs[\"loss\"].backward()\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 <\/span><i>### UPDATE MODEL PARAMETERS<\/i>\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 <\/span>optimizer.step()\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 <\/span><i>### LOGGING<\/i>\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 <\/span><b>if<\/b> <b>not<\/b> batch_idx <b>%<\/b> 300:\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 <\/span><b>print<\/b>(\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 <\/span>f\"Epoch: {epoch<b>+<\/b>1:04d}\/{num_epochs:04d} | Batch {batch_idx:04d}\/{len(train_loader):04d} | Loss: {outputs['loss']:.4f}\"\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 <\/span>)\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 <\/span>model.eval()\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 <\/span><b>with<\/b> torch.no_grad():\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 <\/span>predicted_labels <b>=<\/b> torch.argmax(outputs[\"logits\"], 1)\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 
<\/span>train_acc.update(predicted_labels, batch[\"label\"])\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 <\/span><i>### MORE LOGGING<\/i>\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 <\/span><b>with<\/b> torch.no_grad():\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 <\/span>model.eval()\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 <\/span>val_acc <b>=<\/b> torchmetrics.Accuracy(task<b>=<\/b>\"multiclass\", num_classes<b>=<\/b>2).to(device)\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 <\/span><b>for<\/b> batch <b>in<\/b> val_loader:\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 <\/span><b>for<\/b> s <b>in<\/b> [\"input_ids\", \"attention_mask\", \"label\"]:\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 <\/span>batch[s] <b>=<\/b> batch[s].to(device)\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 <\/span>outputs <b>=<\/b> model(\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 <\/span>batch[\"input_ids\"],\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 <\/span>attention_mask<b>=<\/b>batch[\"attention_mask\"],\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 <\/span>labels<b>=<\/b>batch[\"label\"],\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 <\/span>)\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 
\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 <\/span>predicted_labels <b>=<\/b> torch.argmax(outputs[\"logits\"], 1)\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 <\/span>val_acc.update(predicted_labels, batch[\"label\"])\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 <\/span><b>print<\/b>(\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 <\/span>f\"Epoch: {epoch<b>+<\/b>1:04d}\/{num_epochs:04d} | Train acc.: {train_acc.compute()<b>*<\/b>100:.2f}% | Val acc.: {val_acc.compute()<b>*<\/b>100:.2f}%\"\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 <\/span>)\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span><b>print<\/b>(watermark(packages<b>=<\/b>\"torch,lightning,transformers\", python<b>=<\/b>True))\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span><b>print<\/b>(\"Torch CUDA available?\", torch.cuda.is_available())\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span>device <b>=<\/b> \"cuda:0\" <b>if<\/b> torch.cuda.is_available() <b>else<\/b> \"cpu\"\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span>torch.manual_seed(123)\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span><i>##########################<\/i>\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span><i>### 1 Loading the Dataset<\/i>\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span><i>##########################<\/i>\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span>download_dataset()\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span>df <b>=<\/b> load_dataset_into_to_dataframe()\n\n<p class=\"p1\"><span 
class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span><b>if<\/b> <b>not<\/b> (op.exists(\"train.csv\") <b>and<\/b> op.exists(\"val.csv\") <b>and<\/b> op.exists(\"test.csv\")):\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 <\/span>partition_dataset(df)\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span>imdb_dataset <b>=<\/b> load_dataset(\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 <\/span>\"csv\",\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 <\/span>data_files<b>=<\/b>{\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 <\/span>\"train\": \"train.csv\",\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 <\/span>\"validation\": \"val.csv\",\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 <\/span>\"test\": \"test.csv\",\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 <\/span>},\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span>)\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span><i>#########################################<\/i>\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span><i>### 2 Tokenization and Numericalization<\/i>\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span><i>#########################################<\/i>\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span>tokenizer <b>=<\/b> AutoTokenizer.from_pretrained(\"distilbert-base-uncased\")\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span><b>print<\/b>(\"Tokenizer input max length:\", tokenizer.model_max_length, flush<b>=<\/b>True)\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 
<\/span><b>print<\/b>(\"Tokenizer vocabulary size:\", tokenizer.vocab_size, flush<b>=<\/b>True)\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span><b>print<\/b>(\"Tokenizing ...\", flush<b>=<\/b>True)\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span>imdb_tokenized <b>=<\/b> imdb_dataset.map(tokenize_text, batched<b>=<\/b>True, batch_size<b>=<\/b>None)\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span><b>del<\/b> imdb_dataset\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span>imdb_tokenized.set_format(\"torch\", columns<b>=<\/b>[\"input_ids\", \"attention_mask\", \"label\"])\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span>os.environ[\"TOKENIZERS_PARALLELISM\"] <b>=<\/b> \"false\"\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span><i>#########################################<\/i>\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span><i>### 3 Set Up DataLoaders<\/i>\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span><i>#########################################<\/i>\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span>train_dataset <b>=<\/b> IMDBDataset(imdb_tokenized, partition_key<b>=<\/b>\"train\")\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span>val_dataset <b>=<\/b> IMDBDataset(imdb_tokenized, partition_key<b>=<\/b>\"validation\")\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span>test_dataset <b>=<\/b> IMDBDataset(imdb_tokenized, partition_key<b>=<\/b>\"test\")\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span>train_loader <b>=<\/b> DataLoader(\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 <\/span>dataset<b>=<\/b>train_dataset,\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 
\u00a0 \u00a0 \u00a0 <\/span>batch_size<b>=<\/b>12,\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 <\/span>shuffle<b>=<\/b>True,\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 <\/span>num_workers<b>=<\/b>1,\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 <\/span>drop_last<b>=<\/b>True,\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span>)\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span>val_loader <b>=<\/b> DataLoader(\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 <\/span>dataset<b>=<\/b>val_dataset,\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 <\/span>batch_size<b>=<\/b>12,\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 <\/span>num_workers<b>=<\/b>1,\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 <\/span>drop_last<b>=<\/b>True,\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span>)\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span>test_loader <b>=<\/b> DataLoader(\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 <\/span>dataset<b>=<\/b>test_dataset,\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 <\/span>batch_size<b>=<\/b>12,\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 <\/span>num_workers<b>=<\/b>1,\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 <\/span>drop_last<b>=<\/b>True,\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span>)\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span><i>#########################################<\/i>\n\n<p class=\"p1\"><span 
class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span><i>### 4 Initializing the Model<\/i>\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span><i>#########################################<\/i>\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span>model <b>=<\/b> AutoModelForSequenceClassification.from_pretrained(\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 <\/span>\"distilbert-base-uncased\", num_labels<b>=<\/b>2\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span>)\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span>model.to(device)\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span>optimizer <b>=<\/b> torch.optim.Adam(model.parameters(), lr<b>=<\/b>5e-5)\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span><i>#########################################<\/i>\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span><i>### 5 Finetuning<\/i>\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span><i>#########################################<\/i>\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span>start <b>=<\/b> time.time()\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span>train(\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 <\/span>num_epochs<b>=<\/b>3,\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 <\/span>model<b>=<\/b>model,\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 <\/span>optimizer<b>=<\/b>optimizer,\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 <\/span>train_loader<b>=<\/b>train_loader,\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 
<\/span>val_loader<b>=<\/b>val_loader,\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 <\/span>device<b>=<\/b>device,\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span>)\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span>end <b>=<\/b> time.time()\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span>elapsed <b>=<\/b> end <b>-<\/b> start\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span><b>print<\/b>(f\"Time elapsed {elapsed<b>\/<\/b>60:.2f} min\")\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span><b>with<\/b> torch.no_grad():\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 <\/span>model.eval()\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 <\/span>test_acc <b>=<\/b> torchmetrics.Accuracy(task<b>=<\/b>\"multiclass\", num_classes<b>=<\/b>2).to(device)\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 <\/span><b>for<\/b> batch <b>in<\/b> test_loader:\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 <\/span><b>for<\/b> s <b>in<\/b> [\"input_ids\", \"attention_mask\", \"label\"]:\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 <\/span>batch[s] <b>=<\/b> batch[s].to(device)\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 <\/span>outputs <b>=<\/b> model(\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 <\/span>batch[\"input_ids\"],\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 <\/span>attention_mask<b>=<\/b>batch[\"attention_mask\"],\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 
\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 <\/span>labels<b>=<\/b>batch[\"label\"],\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 <\/span>)\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 <\/span>predicted_labels <b>=<\/b> torch.argmax(outputs[\"logits\"], 1)\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 <\/span>test_acc.update(predicted_labels, batch[\"label\"])\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span><b>print<\/b>(f\"Test accuracy {test_acc.compute()<b>*<\/b>100:.2f}%\")<br \/>\n<\/code><div class=\"copy-button\"><button class=\"expand-button\">Expand<\/button><button class=\"copy\">Copy<\/button><\/div><\/pre>\n<p>(You can also find this code on GitHub here:\u00a0<a href=\"https:\/\/github.com\/rasbt\/faster-pytorch-blog\/blob\/main\/1_pytorch-distilbert.py\">1_pytorch-distilbert.py<\/a>.)<\/p>\n<p>To keep this article focused, I will skip over the PyTorch basics and focus on describing the main outline of this script. However, if you are new to PyTorch, I recommend checking out my free\u00a0<a href=\"https:\/\/lightning.ai\/pages\/courses\/deep-learning-fundamentals\/\">Deep Learning Fundamentals course<\/a>, where I teach PyTorch in great detail in Units 1-4.<\/p>\n<p>The code above is structured into two parts, the function definitions and the code executed under\u00a0<code class=\"language-plaintext highlighter-rouge\">if __name__ == \"__main__\"<\/code>. This recommended structure is necessary to avoid issues with Python\u2019s multiprocessing when using multiple GPUs later.<\/p>\n<p>The first three sections of the\u00a0<code class=\"language-plaintext highlighter-rouge\">if __name__ == \"__main__\"<\/code>\u00a0part contain the code to set up the dataset loaders. 
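The guard mentioned above matters because, with certain multiprocessing start methods, worker processes re-import the main module, so any unguarded top-level code would run once per worker. A minimal standalone sketch (unrelated to the tutorial's training script) of the failure mode the guard prevents:

```python
import multiprocessing as mp


def square(x):
    # Work function executed in the worker processes.
    return x * x


if __name__ == "__main__":
    # Without this guard, a "spawn"-started worker re-importing this module
    # would re-execute the pool creation below, erroring out or recursing.
    with mp.Pool(processes=2) as pool:
        print(pool.map(square, [1, 2, 3]))  # prints [1, 4, 9]
```

PyTorch's `DataLoader` with `num_workers > 0` and multi-GPU strategies rely on the same re-import mechanism, which is why the script keeps all executable code under the guard.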
The fourth part is where we initialize the model: a pretrained DistilBERT model we will finetune. Then, in the fifth part, we run our training function and evaluate the finetuned model on the test set.<\/p>\n<p>After running the code on an A100 GPU, I got the following results:<\/p>\n<pre class=\"code-shortcode dark-theme window- collapse-false \" style=\"--height:falsepx\"><code class=\"language-python\">\nEpoch: 0001\/0003 | Batch 0000\/2916 | Loss: 0.6867\nEpoch: 0001\/0003 | Batch 0300\/2916 | Loss: 0.3633\nEpoch: 0001\/0003 | Batch 0600\/2916 | Loss: 0.4122\nEpoch: 0001\/0003 | Batch 0900\/2916 | Loss: 0.3046\nEpoch: 0001\/0003 | Batch 1200\/2916 | Loss: 0.3859\nEpoch: 0001\/0003 | Batch 1500\/2916 | Loss: 0.4489\nEpoch: 0001\/0003 | Batch 1800\/2916 | Loss: 0.5721\nEpoch: 0001\/0003 | Batch 2100\/2916 | Loss: 0.6470\nEpoch: 0001\/0003 | Batch 2400\/2916 | Loss: 0.3116\nEpoch: 0001\/0003 | Batch 2700\/2916 | Loss: 0.2002\nEpoch: 0001\/0003 | Train acc.: 89.81% | Val acc.: 92.17%\nEpoch: 0002\/0003 | Batch 0000\/2916 | Loss: 0.0935\nEpoch: 0002\/0003 | Batch 0300\/2916 | Loss: 0.0674\nEpoch: 0002\/0003 | Batch 0600\/2916 | Loss: 0.1279\nEpoch: 0002\/0003 | Batch 0900\/2916 | Loss: 0.0686\nEpoch: 0002\/0003 | Batch 1200\/2916 | Loss: 0.0104\nEpoch: 0002\/0003 | Batch 1500\/2916 | Loss: 0.0888\nEpoch: 0002\/0003 | Batch 1800\/2916 | Loss: 0.1151\nEpoch: 0002\/0003 | Batch 2100\/2916 | Loss: 0.0648\nEpoch: 0002\/0003 | Batch 2400\/2916 | Loss: 0.0656\nEpoch: 0002\/0003 | Batch 2700\/2916 | Loss: 0.0354\nEpoch: 0002\/0003 | Train acc.: 95.02% | Val acc.: 92.09%\nEpoch: 0003\/0003 | Batch 0000\/2916 | Loss: 0.0143\nEpoch: 0003\/0003 | Batch 0300\/2916 | Loss: 0.0108\nEpoch: 0003\/0003 | Batch 0600\/2916 | Loss: 0.0228\nEpoch: 0003\/0003 | Batch 0900\/2916 | Loss: 0.0140\nEpoch: 0003\/0003 | Batch 1200\/2916 | Loss: 0.0220\nEpoch: 0003\/0003 | Batch 1500\/2916 | Loss: 0.0123\nEpoch: 0003\/0003 | Batch 1800\/2916 | Loss: 0.0495\nEpoch: 0003\/0003 | Batch 2100\/2916 | Loss: 0.0039\nEpoch: 0003\/0003 | Batch 2400\/2916 | Loss: 0.0168\nEpoch: 0003\/0003 | Batch 2700\/2916 | Loss: 0.1293\nEpoch: 0003\/0003 | Train acc.: 97.28% | Val acc.: 89.88%\nTime elapsed 21.33 min\nTest accuracy 89.92%\n<\/code><div class=\"copy-button\"><button class=\"expand-button\">Expand<\/button><button class=\"copy\">Copy<\/button><\/div><\/pre>\n<p>As we can see above, the model starts overfitting slightly from epochs 2 to 3, and the validation accuracy decreased from 92.09% to 89.88%. 
The final test accuracy is 89.92%, which we reached after finetuning the model for 21.33 min.<\/p>\n<h2 id=\"2-using-the-trainer-class\">2) Using the Trainer Class<\/h2>\n<p>Now, let\u2019s wrap our PyTorch model in a <code class=\"language-plaintext highlighter-rouge\">LightningModule<\/code> so that we can use the <code class=\"language-plaintext highlighter-rouge\">Trainer<\/code> class from Lightning:<\/p>\n<pre class=\"code-shortcode dark-theme window- collapse-false \" style=\"--height:falsepx\"><code class=\"language-python\">
import os
import os.path as op
import time

from datasets import load_dataset
import lightning as L
from lightning.pytorch.callbacks import ModelCheckpoint
from lightning.pytorch.loggers import CSVLogger
import matplotlib.pyplot as plt
import pandas as pd
import torch
from torch.utils.data import DataLoader
import torchmetrics
from transformers import AutoTokenizer
from transformers import AutoModelForSequenceClassification
from watermark import watermark

from local_dataset_utilities import (
    download_dataset,
    load_dataset_into_to_dataframe,
    partition_dataset,
)
from local_dataset_utilities import IMDBDataset


def tokenize_text(batch):
    return tokenizer(batch[\"text\"], truncation=True, padding=True)


class LightningModel(L.LightningModule):
    def __init__(self, model, learning_rate=5e-5):
        super().__init__()
        self.learning_rate = learning_rate
        self.model = model
        self.train_acc = torchmetrics.Accuracy(task=\"multiclass\", num_classes=2)
        self.val_acc = torchmetrics.Accuracy(task=\"multiclass\", num_classes=2)
        self.test_acc = torchmetrics.Accuracy(task=\"multiclass\", num_classes=2)

    def forward(self, input_ids, attention_mask, labels):
        return self.model(input_ids, attention_mask=attention_mask, labels=labels)

    def training_step(self, batch, batch_idx):
        outputs = self(
            batch[\"input_ids\"],
            attention_mask=batch[\"attention_mask\"],
            labels=batch[\"label\"],
        )
        self.log(\"train_loss\", outputs[\"loss\"])
        with torch.no_grad():
            logits = outputs[\"logits\"]
            predicted_labels = torch.argmax(logits, 1)
            self.train_acc(predicted_labels, batch[\"label\"])
            self.log(\"train_acc\", self.train_acc, on_epoch=True, on_step=False)
        return outputs[\"loss\"]  # this is passed to the optimizer for training

    def validation_step(self, batch, batch_idx):
        outputs = self(
            batch[\"input_ids\"],
            attention_mask=batch[\"attention_mask\"],
            labels=batch[\"label\"],
        )
        self.log(\"val_loss\", outputs[\"loss\"], prog_bar=True)
        logits = outputs[\"logits\"]
        predicted_labels = torch.argmax(logits, 1)
        self.val_acc(predicted_labels, batch[\"label\"])
        self.log(\"val_acc\", self.val_acc, prog_bar=True)

    def test_step(self, batch, batch_idx):
        outputs = self(
            batch[\"input_ids\"],
            attention_mask=batch[\"attention_mask\"],
            labels=batch[\"label\"],
        )
        logits = outputs[\"logits\"]
        predicted_labels = torch.argmax(logits, 1)
        self.test_acc(predicted_labels, batch[\"label\"])
        self.log(\"accuracy\", self.test_acc, prog_bar=True)

    def configure_optimizers(self):
        optimizer = torch.optim.Adam(
            self.trainer.model.parameters(), lr=self.learning_rate
        )
        return optimizer


if __name__ == \"__main__\":
    print(watermark(packages=\"torch,lightning,transformers\", python=True), flush=True)
    print(\"Torch CUDA available?\", torch.cuda.is_available(), flush=True)
    torch.manual_seed(123)

    ##########################
    ### 1 Loading the Dataset
    ##########################
    download_dataset()
    df = load_dataset_into_to_dataframe()
    if not (op.exists(\"train.csv\") and op.exists(\"val.csv\") and op.exists(\"test.csv\")):
        partition_dataset(df)

    imdb_dataset = load_dataset(
        \"csv\",
        data_files={
            \"train\": \"train.csv\",
            \"validation\": \"val.csv\",
            \"test\": \"test.csv\",
        },
    )

    #########################################
    ### 2 Tokenization and Numericalization
    #########################################
    tokenizer = AutoTokenizer.from_pretrained(\"distilbert-base-uncased\")
    print(\"Tokenizer input max length:\", tokenizer.model_max_length, flush=True)
    print(\"Tokenizer vocabulary size:\", tokenizer.vocab_size, flush=True)

    print(\"Tokenizing ...\", flush=True)
    imdb_tokenized = imdb_dataset.map(tokenize_text, batched=True, batch_size=None)
    del imdb_dataset
    imdb_tokenized.set_format(\"torch\", columns=[\"input_ids\", \"attention_mask\", \"label\"])
    os.environ[\"TOKENIZERS_PARALLELISM\"] = \"false\"

    #########################################
    ### 3 Set Up DataLoaders
    #########################################
    train_dataset = IMDBDataset(imdb_tokenized, partition_key=\"train\")
    val_dataset = IMDBDataset(imdb_tokenized, partition_key=\"validation\")
    test_dataset = IMDBDataset(imdb_tokenized, partition_key=\"test\")

    train_loader = DataLoader(
        dataset=train_dataset,
        batch_size=12,
        shuffle=True,
        num_workers=1,
        drop_last=True,
    )
    val_loader = DataLoader(
        dataset=val_dataset,
        batch_size=12,
        num_workers=1,
        drop_last=True,
    )
    test_loader = DataLoader(
        dataset=test_dataset,
        batch_size=12,
        num_workers=1,
        drop_last=True,
    )

    #########################################
    ### 4 Initializing the Model
    #########################################
    model = AutoModelForSequenceClassification.from_pretrained(
        \"distilbert-base-uncased\", num_labels=2
    )

    #########################################
    ### 5 Finetuning
    #########################################
    lightning_model = LightningModel(model)
    callbacks = [
        ModelCheckpoint(save_top_k=1, mode=\"max\", monitor=\"val_acc\")  # save top 1 model
    ]
    logger = CSVLogger(save_dir=\"logs\/\", name=\"my-model\")
    trainer = L.Trainer(
        max_epochs=3,
        callbacks=callbacks,
        accelerator=\"gpu\",
        devices=[1],
        logger=logger,
        log_every_n_steps=10,
        deterministic=True,
    )

    start = time.time()
    trainer.fit(
        model=lightning_model,
        train_dataloaders=train_loader,
        val_dataloaders=val_loader,
    )
    end = time.time()
    elapsed = end - start
    print(f\"Time elapsed {elapsed\/60:.2f} min\")

    test_acc = trainer.test(lightning_model, dataloaders=test_loader, ckpt_path=\"best\")
    print(test_acc)

    with open(op.join(trainer.logger.log_dir, \"outputs.txt\"), \"w\") as f:
        f.write(f\"Time elapsed {elapsed\/60:.2f} min\\n\")
        f.write(f\"Test acc: {test_acc}\")
<\/code><div class=\"copy-button\"><button class=\"expand-button\">Expand<\/button><button class=\"copy\">Copy<\/button><\/div><\/pre>\n<p>(You can also find this code on GitHub here: <a href=\"https:\/\/github.com\/rasbt\/faster-pytorch-blog\/blob\/main\/2_pytorch-with-trainer.py\">2_pytorch-with-trainer.py<\/a>.)<\/p>\n<p>Again, I am skipping the details of the <code class=\"language-plaintext highlighter-rouge\">LightningModule<\/code> to keep this article focused on the performance aspects. However, I will cover the <code class=\"language-plaintext highlighter-rouge\">LightningModule<\/code> and <code class=\"language-plaintext highlighter-rouge\">Trainer<\/code> classes in more detail in Unit 5 of my <a href=\"https:\/\/lightning.ai\/pages\/courses\/deep-learning-fundamentals\/\">Deep Learning Fundamentals course<\/a>, which is set to come out in March. In the meantime, I recommend the <a href=\"https:\/\/pytorch-lightning.readthedocs.io\/en\/stable\/starter\/introduction.html\">official PyTorch Lightning tutorial<\/a>.<\/p>\n<p>In short, we set up a <code class=\"language-plaintext highlighter-rouge\">LightningModule<\/code> that defines how the training, validation, and test steps are executed. Then, the main change is in code section 5, where we finetune the model. 
What\u2019s new is that we are now wrapping the PyTorch model in the <code class=\"language-plaintext highlighter-rouge\">LightningModel<\/code> class and using the <code class=\"language-plaintext highlighter-rouge\">Trainer<\/code> class to fit the model:<\/p>\n<pre class=\"code-shortcode dark-theme window- collapse-false \" style=\"--height:falsepx\"><code class=\"language-python\">
    #########################################
    ### 5 Finetuning
    #########################################
    lightning_model = LightningModel(model)
    callbacks = [
        ModelCheckpoint(save_top_k=1, mode=\"max\", monitor=\"val_acc\")  # save top 1 model
    ]
    logger = CSVLogger(save_dir=\"logs\/\", name=\"my-model\")
    trainer = L.Trainer(
        max_epochs=3,
        callbacks=callbacks,
        accelerator=\"gpu\",
        devices=1,
        logger=logger,
        log_every_n_steps=10,
        deterministic=True,
    )
    trainer.fit(
        model=lightning_model,
        train_dataloaders=train_loader,
        val_dataloaders=val_loader,
    )
<\/code><div class=\"copy-button\"><button class=\"expand-button\">Expand<\/button><button class=\"copy\">Copy<\/button><\/div><\/pre>\n<p>Since we previously noticed that the validation accuracy decreases from epoch 2 to 3, we use a <code class=\"language-plaintext highlighter-rouge\">ModelCheckpoint<\/code> callback to load the best model (based on the highest validation accuracy) for model evaluation on the test set. 
Moreover, we will log the performance to a CSV file (my preferred method for record-keeping) and set the PyTorch behavior to deterministic.<\/p>\n<p>On the same machine, this model reached a test accuracy of 92.6% in 21.79 min:<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-5647605 \" src=\"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/03\/2-trainer.png\" alt=\"\" width=\"821\" height=\"277\" srcset=\"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/03\/2-trainer.png 1700w, https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/03\/2-trainer-300x101.png 300w, https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/03\/2-trainer-1024x346.png 1024w, https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/03\/2-trainer-1536x519.png 1536w, https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/03\/2-trainer-300x101@2x.png 600w\" sizes=\"(max-width: 821px) 100vw, 821px\" \/><\/p>\n<p>Note that if we disable checkpointing and allow PyTorch to run in non-deterministic mode, we would get the same runtime as with plain PyTorch.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-5647606 \" src=\"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/03\/benchmark-1.png\" alt=\"\" width=\"531\" height=\"66\" srcset=\"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/03\/benchmark-1.png 1673w, https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/03\/benchmark-1-300x37.png 300w, https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/03\/benchmark-1-1024x127.png 1024w, https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/03\/benchmark-1-1536x191.png 1536w, https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/03\/benchmark-1-300x37@2x.png 600w\" sizes=\"(max-width: 531px) 100vw, 531px\" \/><\/p>\n<h2 id=\"3-automatic-mixed-precision-training\">3) Automatic Mixed Precision 
Training<\/h2>\n<p>If our GPU supports mixed precision training, enabling it is often one of the main ways to boost computational efficiency. In particular, we use automatic mixed precision training, which switches between 32-bit and 16-bit floating point representations during training without sacrificing accuracy.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-5647607 \" src=\"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/03\/mixed-precision.png\" alt=\"\" width=\"508\" height=\"458\" srcset=\"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/03\/mixed-precision.png 1276w, https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/03\/mixed-precision-300x270.png 300w, https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/03\/mixed-precision-1024x923.png 1024w, https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/03\/mixed-precision-300x270@2x.png 600w\" sizes=\"(max-width: 508px) 100vw, 508px\" \/><\/p>\n<p>Using the <code class=\"language-plaintext highlighter-rouge\">Trainer<\/code> class, we can enable automatic mixed precision training with one line of code:<\/p>\n<pre class=\"code-shortcode dark-theme window- collapse-false \" style=\"--height:falsepx\"><code class=\"language-python\">
    trainer = L.Trainer(
        max_epochs=3,
        callbacks=callbacks,
        accelerator=\"gpu\",
        precision=\"16\",  # &lt;-- NEW
        devices=[1],
        logger=logger,
        log_every_n_steps=10,
        deterministic=True,
    )
<\/code><div class=\"copy-button\"><button class=\"expand-button\">Expand<\/button><button class=\"copy\">Copy<\/button><\/div><\/pre>\n<p>Using mixed precision training, as shown above, reduces the training time from 21.79 min to 8.25 min! That\u2019s almost 3x faster!<\/p>\n<p>The test set accuracy is 93.2% \u2013 even slightly improved compared to the 92.6% before (likely due to rounding-induced differences when switching between the different precision modes).<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-5647610 \" src=\"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/03\/benchmark-2.png\" alt=\"\" width=\"785\" height=\"142\" srcset=\"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/03\/benchmark-2.png 1676w, https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/03\/benchmark-2-300x54.png 300w, https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/03\/benchmark-2-1024x185.png 1024w, https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/03\/benchmark-2-1536x278.png 1536w, https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/03\/benchmark-2-300x54@2x.png 600w\" sizes=\"(max-width: 785px) 100vw, 785px\" \/><\/p>\n<div class=\"language-python highlighter-rouge\">\n<h2 id=\"5-training-on-4-gpus-with-distributed-data-parallel\">5) Training on 4 GPUs with Distributed Data Parallel<\/h2>\n<p>After adding mixed 
precision training (and trying to add graph compilation) above to speed up our code on a single GPU, let\u2019s now explore multi-GPU strategies. In particular, we will now run the same code on four instead of one GPU.<\/p>\n<p>Note that there are several different multi-GPU training techniques out there that I summarized in the figure below.<\/p>\n<p>To keep this blog post focused and brief, I recommend checking out my\u00a0<a href=\"https:\/\/leanpub.com\/machine-learning-q-and-ai\/\">Machine Learning Q and AI<\/a>\u00a0book for more details on the different multi-GPU training paradigms. The section is included in the free preview version. Moreover, I will also cover these in my Deep Learning Fundamentals course Unit 9, which is scheduled to be released in April.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-5647612 \" src=\"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/03\/multi-gpu.png\" alt=\"\" width=\"621\" height=\"317\" srcset=\"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/03\/multi-gpu.png 2132w, https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/03\/multi-gpu-300x153.png 300w, https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/03\/multi-gpu-1024x523.png 1024w, https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/03\/multi-gpu-1536x784.png 1536w, https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/03\/multi-gpu-2048x1045.png 2048w, https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/03\/multi-gpu-300x153@2x.png 600w\" sizes=\"(max-width: 621px) 100vw, 621px\" \/><\/p>\n<p>We will start with the simplest technique, data parallelism via\u00a0<code class=\"language-plaintext highlighter-rouge\">DistributedDataParallel<\/code>. 
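Before looking at the code change, a quick back-of-the-envelope sketch shows where the speedup comes from. DDP keeps a full model replica on each GPU and feeds each replica a different shard of every batch, so with the per-GPU batch size of 12 used throughout this article, four GPUs process an effective batch of 48, and each epoch needs a quarter of the optimization steps (the 2,916 steps per epoch visible in the single-GPU training log):

```python
# Back-of-the-envelope effect of DDP, based on this article's numbers:
# the single-GPU log shows 2,916 optimization steps per epoch at batch size 12.
per_gpu_batch_size = 12
num_gpus = 4

num_training_examples = 2916 * per_gpu_batch_size  # 34,992 examples per epoch

steps_single_gpu = num_training_examples // per_gpu_batch_size
steps_ddp = num_training_examples // (per_gpu_batch_size * num_gpus)

print(steps_single_gpu)  # 2916
print(steps_ddp)  # 729 -- each GPU runs a quarter of the steps per epoch
```

Gradient synchronization between the GPUs adds overhead, which is why the measured speedup below (8.25 min down to 3.07 min) stays somewhat below the ideal 4x.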
Using the <code class=\"language-plaintext highlighter-rouge\">Trainer<\/code>, we only have to modify two lines of code:<\/p>\n<pre class=\"code-shortcode dark-theme window- collapse-false \" style=\"--height:falsepx\"><code class=\"language-python\">
    trainer = L.Trainer(
        max_epochs=3,
        callbacks=callbacks,
        accelerator=\"gpu\",
        devices=4,  # &lt;-- NEW
        strategy=\"ddp\",  # &lt;-- NEW
        precision=\"16\",
        logger=logger,
        log_every_n_steps=10,
        deterministic=True,
    )
<\/code><div class=\"copy-button\"><button class=\"expand-button\">Expand<\/button><button class=\"copy\">Copy<\/button><\/div><\/pre>\n<p>On my computer, with four A100 GPUs, this code ran in 3.07 min, reaching a test accuracy of 93.1%. 
Again, the test set improvement is likely due to the gradient averaging when using data parallelism.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-5647613 \" src=\"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/03\/benchmark-4.png\" alt=\"\" width=\"622\" height=\"189\" srcset=\"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/03\/benchmark-4.png 1684w, https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/03\/benchmark-4-300x91.png 300w, https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/03\/benchmark-4-1024x311.png 1024w, https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/03\/benchmark-4-1536x467.png 1536w, https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/03\/benchmark-4-300x91@2x.png 600w\" sizes=\"(max-width: 622px) 100vw, 622px\" \/><\/p>\n<p>(Explaining data parallelism in detail is another great topic for a future article.)<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-5647614 \" src=\"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/03\/ddp.png\" alt=\"\" width=\"761\" height=\"288\" srcset=\"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/03\/ddp.png 1110w, https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/03\/ddp-300x114.png 300w, https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/03\/ddp-1024x387.png 1024w, https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/03\/ddp-300x114@2x.png 600w\" sizes=\"(max-width: 761px) 100vw, 761px\" \/><\/p>\n<h2 id=\"6-deepspeed\">6) DeepSpeed<\/h2>\n<p>Lastly, let us explore the\u00a0<a href=\"https:\/\/github.com\/microsoft\/DeepSpeed\">DeepSpeed<\/a>\u00a0multi-GPU strategies we can use from within the\u00a0<code class=\"language-plaintext highlighter-rouge\">Trainer<\/code>.<\/p>\n<p>But before trying it out in practice, I wanted to share my multi-GPU usage recommendations. 
Which strategy to use largely depends on the model, the number of GPUs, and the memory size of the GPUs. For example, when pretraining large models that do not fit on a single GPU, it\u2019s a good idea to start with the simple\u00a0<code class=\"language-plaintext highlighter-rouge\">\"ddp_sharded\"<\/code>\u00a0strategy, which shards gradients and optimizer states across GPUs on top of\u00a0<code class=\"language-plaintext highlighter-rouge\">\"ddp\"<\/code>. Using the previous code,\u00a0<code class=\"language-plaintext highlighter-rouge\">\"ddp_sharded\"<\/code>\u00a0takes 2.58 min to run.<\/p>\n<p>Alternatively, we can also consider the more sophisticated\u00a0<code class=\"language-plaintext highlighter-rouge\">\"deepspeed_stage_2\"<\/code>\u00a0strategy, which shards the optimizer states and gradients. If this is not enough to fit the model into GPU memory, try the\u00a0<code class=\"language-plaintext highlighter-rouge\">\"deepspeed_stage_2_offload\"<\/code>\u00a0variant, which offloads optimizer and gradient states to CPU memory (at a performance cost). You can find more information about the DeepSpeed strategies and the underlying ZeRO (zero-redundancy optimizer) technique in the official\u00a0<a href=\"https:\/\/www.deepspeed.ai\/tutorials\/zero\/\">ZeRO tutorial<\/a>. In addition, see the\u00a0<a href=\"https:\/\/www.deepspeed.ai\/tutorials\/zero-offload\/\">ZeRO offload tutorial<\/a>\u00a0for more information about offloading.<\/p>\n<p>Returning to the recommendations, if you want to finetune a model, computational throughput is usually less of a concern than being able to fit the model into the memory of a smaller number of GPUs. 
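<\/p>\n<p>To build intuition for how much memory each strategy needs, here is a rough per-GPU estimate following the common 2 + 2 + 12 bytes-per-parameter accounting for mixed-precision Adam training (fp16 weights and gradients plus fp32 master weights, momentum, and variance). Activations, buffers, and fragmentation are ignored, so treat these numbers as ballpark lower bounds rather than DeepSpeed\u2019s exact behavior:<\/p>

```python
# Back-of-the-envelope per-GPU memory for ZeRO stages 0-3 with Adam and
# fp16 mixed precision. Accounting per parameter: 2 bytes (fp16 weights),
# 2 bytes (fp16 gradients), 12 bytes (fp32 master weights + Adam m and v).

def per_gpu_gb(n_params, n_gpus, stage):
    params = 2.0 * n_params   # fp16 model weights
    grads = 2.0 * n_params    # fp16 gradients
    optim = 12.0 * n_params   # fp32 master weights + Adam states
    if stage >= 1:            # ZeRO stage 1 shards the optimizer states
        optim /= n_gpus
    if stage >= 2:            # stage 2 additionally shards the gradients
        grads /= n_gpus
    if stage >= 3:            # stage 3 additionally shards the parameters
        params /= n_gpus
    return (params + grads + optim) / 1e9

n = 1_500_000_000  # a hypothetical 1.5B-parameter model on 4 GPUs
for stage in range(4):
    print(f"stage {stage}: {per_gpu_gb(n, 4, stage):.2f} GB per GPU")
# stage 0: 24.00, stage 1: 10.50, stage 2: 8.25, stage 3: 6.00
```

<p>This is why the stage-2 and especially the stage-3 strategies can fit models that plain <code class=\"language-plaintext highlighter-rouge\">\"ddp\"<\/code> cannot, at the price of extra communication between GPUs.<\/p>\n<p>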
In this case, you can explore the\u00a0<code class=\"language-plaintext highlighter-rouge\">\"stage_3\"<\/code>\u00a0variants of DeepSpeed, which shard everything: optimizer states, gradients, and parameters, i.e.,<\/p>\n<ul>\n<li><code class=\"language-plaintext highlighter-rouge\">strategy=\"deepspeed_stage_3\"<\/code><\/li>\n<li><code class=\"language-plaintext highlighter-rouge\">strategy=\"deepspeed_stage_3_offload\"<\/code><\/li>\n<\/ul>\n<p>Since GPU memory is not a concern with a small model like DistilBERT, let\u2019s try out\u00a0<code class=\"language-plaintext highlighter-rouge\">\"deepspeed_stage_2\"<\/code>.<\/p>\n<p>First, we have to install the DeepSpeed Python library:<\/p>\n<p class=\"p1\"><pre class=\"code-shortcode dark-theme window- collapse-false \" style=\"--height:falsepx\"><code class=\"language-python\">pip install -U deepspeed\n<\/code><div class=\"copy-button\"><button class=\"expand-button\">Expand<\/button><button class=\"copy\">Copy<\/button><\/div><\/pre><\/p>\n<p>(On my machine, this installed deepspeed-0.8.2.)<\/p>\n<p>Next, we can enable\u00a0<code class=\"language-plaintext highlighter-rouge\">\"deepspeed_stage_2\"<\/code>\u00a0by changing only one line of code:<\/p>\n<p class=\"p1\"><pre class=\"code-shortcode dark-theme window- collapse-false \" style=\"--height:falsepx\"><code class=\"language-python\">trainer = L.Trainer(\n    max_epochs=3,\n    callbacks=callbacks,\n    accelerator=\"gpu\",\n    devices=4,\n    strategy=\"deepspeed_stage_2\",  # &lt;-- NEW\n    precision=\"16\",\n    logger=logger,\n    log_every_n_steps=10,\n    deterministic=True,\n)\n<\/code><div class=\"copy-button\"><button class=\"expand-button\">Expand<\/button><button class=\"copy\">Copy<\/button><\/div><\/pre><\/p>\n<p>This took 2.75 min to run on my machine and achieved 92.6% test accuracy.<\/p>\n<p>Note that PyTorch now also has its own alternative to DeepSpeed, called Fully Sharded Data Parallel (FSDP), which we can use via\u00a0<code class=\"language-plaintext highlighter-rouge\">strategy=\"fsdp\"<\/code>.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-5647615 \" src=\"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/03\/benchmark-5.png\" alt=\"\" width=\"842\" height=\"300\" srcset=\"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/03\/benchmark-5.png 1690w, https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/03\/benchmark-5-300x107.png 300w, https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/03\/benchmark-5-1024x365.png 1024w, https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/03\/benchmark-5-1536x547.png 1536w, https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/03\/benchmark-5-300x107@2x.png 600w\" sizes=\"(max-width: 842px) 100vw, 842px\" \/><\/p>\n<h2 
id=\"7-fabric\">7) Fabric<\/h2>\n<p>With the recent Lightning 2.0 release, Lightning AI released the new\u00a0<a href=\"https:\/\/lightning.ai\/docs\/fabric\/stable\/\">Fabric open-source library for PyTorch<\/a>. Fabric is essentially an alternative way to scale PyTorch code without using the\u00a0<code class=\"language-plaintext highlighter-rouge\">LightningModule<\/code>\u00a0and\u00a0<code class=\"language-plaintext highlighter-rouge\">Trainer<\/code>\u00a0I introduced above in section\u00a0<em>2) Using the Trainer Class<\/em>.<\/p>\n<p>Fabric only requires changing a few lines of code, as shown in the code below. The\u00a0<code class=\"language-plaintext highlighter-rouge\">-<\/code>\u00a0indicate lines that were removed and\u00a0<code class=\"language-plaintext highlighter-rouge\">+<\/code>\u00a0were the lines that were added to convert the Python code to use Fabric.<\/p>\n<p class=\"p1\"><pre class=\"code-shortcode dark-theme window- collapse-false \" style=\"--height:falsepx\"><code class=\"language-python\">\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0\u00a0 <\/span>import os\n\n<p class=\"p1\">import os.path <b>as<\/b> op\n\n<p class=\"p1\">import time\n\n<p class=\"p1\"><b>+<\/b> from lightning import Fabric\n\n<p class=\"p1\">from datasets import load_dataset\n\n<p class=\"p1\">import matplotlib.pyplot <b>as<\/b> plt\n\n<p class=\"p1\">import pandas <b>as<\/b> pd\n\n<p class=\"p1\">import torch\n\n<p class=\"p1\">from torch.utils.data import DataLoader\n\n<p class=\"p1\">import torchmetrics\n\n<p class=\"p1\">from transformers import AutoTokenizer\n\n<p class=\"p1\">from transformers import AutoModelForSequenceClassification\n\n<p class=\"p1\">from watermark import watermark\n\n<p class=\"p1\">from local_dataset_utilities import download_dataset, load_dataset_into_to_dataframe, partition_dataset\n\n<p class=\"p1\">from local_dataset_utilities import IMDBDataset\n\n<p class=\"p1\"><b>def<\/b> <b>tokenize_text<\/b>(batch):\n\n<p 
class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span><b>return<\/b> tokenizer(batch[\"text\"], truncation<b>=<\/b>True, padding<b>=<\/b>True)\n\n<p class=\"p1\"><b>def<\/b> <b>plot_logs<\/b>(log_dir):\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span>metrics <b>=<\/b> pd.read_csv(op.join(log_dir, \"metrics.csv\"))\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span>aggreg_metrics <b>=<\/b> []\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span>agg_col <b>=<\/b> \"epoch\"\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span><b>for<\/b> i, dfg <b>in<\/b> metrics.groupby(agg_col):\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 <\/span>agg <b>=<\/b> dict(dfg.mean())\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 <\/span>agg[agg_col] <b>=<\/b> i\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 <\/span>aggreg_metrics.append(agg)\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span>df_metrics <b>=<\/b> pd.DataFrame(aggreg_metrics)\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span>df_metrics[[\"train_loss\", \"val_loss\"]].plot(\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 <\/span>grid<b>=<\/b>True, legend<b>=<\/b>True, xlabel<b>=<\/b>\"Epoch\", ylabel<b>=<\/b>\"Loss\"\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span>)\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span>plt.savefig(op.join(log_dir, \"loss.pdf\"))\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span>df_metrics[[\"train_acc\", \"val_acc\"]].plot(\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 <\/span>grid<b>=<\/b>True, legend<b>=<\/b>True, 
xlabel<b>=<\/b>\"Epoch\", ylabel<b>=<\/b>\"Accuracy\"\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span>)\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span>plt.savefig(op.join(log_dir, \"acc.pdf\"))\n\n<p class=\"p1\"><b>-<\/b> <b>def<\/b> <b>train<\/b>(num_epochs, model, optimizer, train_loader, val_loader, device):\n\n<p class=\"p1\"><b>+<\/b> <b>def<\/b> <b>train<\/b>(num_epochs, model, optimizer, train_loader, val_loader, fabric):\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 <\/span><b>for<\/b> epoch <b>in<\/b> range(num_epochs):\n\n<p class=\"p1\"><b>-<\/b> <span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 <\/span>train_acc <b>=<\/b> torchmetrics.Accuracy(task<b>=<\/b>\"multiclass\", num_classes<b>=<\/b>2).to(device)\n\n<p class=\"p1\"><b>+<\/b> <span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 <\/span>train_acc <b>=<\/b> torchmetrics.Accuracy(task<b>=<\/b>\"multiclass\", num_classes<b>=<\/b>2).to(fabric.device)\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 <\/span>model.train()\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 <\/span><b>for<\/b> batch_idx, batch <b>in<\/b> enumerate(train_loader):\n\n<p class=\"p1\"><b>-<\/b> <span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 <\/span><b>for<\/b> s <b>in<\/b> [\"input_ids\", \"attention_mask\", \"label\"]:\n\n<p class=\"p1\"><b>-<\/b> <span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 <\/span>batch[s] <b>=<\/b> batch[s].to(device)\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 <\/span>outputs <b>=<\/b> model(batch[\"input_ids\"], attention_mask<b>=<\/b>batch[\"attention_mask\"], labels<b>=<\/b>batch[\"label\"])<span class=\"Apple-converted-space\">\u00a0<\/span>\n\n<p class=\"p1\"><span 
class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 <\/span>optimizer.zero_grad()\n\n<p class=\"p1\"><b>-<\/b><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 <\/span>outputs[\"loss\"].backward()\n\n<p class=\"p1\"><b>+<\/b><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 <\/span>fabric.backward(outputs[\"loss\"])\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 <\/span><i>### UPDATE MODEL PARAMETERS<\/i>\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 <\/span>optimizer.step()\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 <\/span><i>### LOGGING<\/i>\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 <\/span><b>if<\/b> <b>not<\/b> batch_idx <b>%<\/b> 300:\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 <\/span><b>print<\/b>(f\"Epoch: {epoch<b>+<\/b>1:04d}\/{num_epochs:04d} | Batch {batch_idx:04d}\/{len(train_loader):04d} | Loss: {outputs['loss']:.4f}\")\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 <\/span>model.eval()\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 <\/span><b>with<\/b> torch.no_grad():\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 <\/span>predicted_labels <b>=<\/b> torch.argmax(outputs[\"logits\"], 1)\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 <\/span>train_acc.update(predicted_labels, batch[\"label\"])\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 <\/span><i>### MORE LOGGING<\/i>\n\n<p class=\"p1\"><span 
class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 <\/span>model.eval()\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 <\/span><b>with<\/b> torch.no_grad():\n\n<p class=\"p1\"><b>-<\/b><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 <\/span>val_acc <b>=<\/b> torchmetrics.Accuracy(task<b>=<\/b>\"multiclass\", num_classes<b>=<\/b>2).to(device)\n\n<p class=\"p1\"><b>+<\/b><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 <\/span>val_acc <b>=<\/b> torchmetrics.Accuracy(task<b>=<\/b>\"multiclass\", num_classes<b>=<\/b>2).to(fabric.device)\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 <\/span><b>for<\/b> batch <b>in<\/b> val_loader:\n\n<p class=\"p1\"><b>-<\/b><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 <\/span><b>for<\/b> s <b>in<\/b> [\"input_ids\", \"attention_mask\", \"label\"]:\n\n<p class=\"p1\"><b>-<\/b><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 <\/span>batch[s] <b>=<\/b> batch[s].to(device)\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 <\/span>outputs <b>=<\/b> model(batch[\"input_ids\"], attention_mask<b>=<\/b>batch[\"attention_mask\"], labels<b>=<\/b>batch[\"label\"])\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 <\/span>predicted_labels <b>=<\/b> torch.argmax(outputs[\"logits\"], 1)\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 <\/span>val_acc.update(predicted_labels, batch[\"label\"])\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 <\/span><b>print<\/b>(f\"Epoch: {epoch<b>+<\/b>1:04d}\/{num_epochs:04d} | Train acc.: 
{train_acc.compute()<b>*<\/b>100:.2f}% | Val acc.: {val_acc.compute()<b>*<\/b>100:.2f}%\")\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 <\/span>train_acc.reset(), val_acc.reset()\n\n<p class=\"p1\"><b>if<\/b> __name__ <b>==<\/b> \"__main__\":\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span><b>print<\/b>(watermark(packages<b>=<\/b>\"torch,lightning,transformers\", python<b>=<\/b>True))\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span><b>print<\/b>(\"Torch CUDA available?\", torch.cuda.is_available()) <span class=\"Apple-converted-space\">\u00a0 \u00a0<\/span>\n\n<p class=\"p1\"><b>-<\/b> <span class=\"Apple-converted-space\">\u00a0 <\/span>device <b>=<\/b> \"cuda\" <b>if<\/b> torch.cuda.is_available() <b>else<\/b> \"cpu\"\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span>torch.manual_seed(123)\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span><i>##########################<\/i>\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span><i>### 1 Loading the Dataset<\/i>\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span><i>##########################<\/i>\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span>download_dataset()\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span>df <b>=<\/b> load_dataset_into_to_dataframe()\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span><b>if<\/b> <b>not<\/b> (op.exists(\"train.csv\") <b>and<\/b> op.exists(\"val.csv\") <b>and<\/b> op.exists(\"test.csv\")):\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 <\/span>partition_dataset(df)\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span>imdb_dataset <b>=<\/b> load_dataset(\n\n<p class=\"p1\"><span 
class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 <\/span>\"csv\",\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 <\/span>data_files<b>=<\/b>{\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 <\/span>\"train\": \"train.csv\",\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 <\/span>\"validation\": \"val.csv\",\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 <\/span>\"test\": \"test.csv\",\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 <\/span>},\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span>)\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span><i>#########################################<\/i>\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span><i>### 2 Tokenization and Numericalization<\/i>\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span><i>#########################################<\/i>\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span>tokenizer <b>=<\/b> AutoTokenizer.from_pretrained(\"distilbert-base-uncased\")\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span><b>print<\/b>(\"Tokenizer input max length:\", tokenizer.model_max_length, flush<b>=<\/b>True)\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span><b>print<\/b>(\"Tokenizer vocabulary size:\", tokenizer.vocab_size, flush<b>=<\/b>True)\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span><b>print<\/b>(\"Tokenizing ...\", flush<b>=<\/b>True)\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span>imdb_tokenized <b>=<\/b> imdb_dataset.map(tokenize_text, batched<b>=<\/b>True, batch_size<b>=<\/b>None)\n\n<p class=\"p1\"><span 
class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span><b>del<\/b> imdb_dataset\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span>imdb_tokenized.set_format(\"torch\", columns<b>=<\/b>[\"input_ids\", \"attention_mask\", \"label\"])\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span>os.environ[\"TOKENIZERS_PARALLELISM\"] <b>=<\/b> \"false\"\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span><i>#########################################<\/i>\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span><i>### 3 Set Up DataLoaders<\/i>\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span><i>#########################################<\/i>\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span>train_dataset <b>=<\/b> IMDBDataset(imdb_tokenized, partition_key<b>=<\/b>\"train\")\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span>val_dataset <b>=<\/b> IMDBDataset(imdb_tokenized, partition_key<b>=<\/b>\"validation\")\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span>test_dataset <b>=<\/b> IMDBDataset(imdb_tokenized, partition_key<b>=<\/b>\"test\")\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span>train_loader <b>=<\/b> DataLoader(\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 <\/span>dataset<b>=<\/b>train_dataset,\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 <\/span>batch_size<b>=<\/b>12,\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 <\/span>shuffle<b>=<\/b>True,<span class=\"Apple-converted-space\">\u00a0<\/span>\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 <\/span>num_workers<b>=<\/b>2,\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 
<\/span>drop_last<b>=<\/b>True,\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span>)\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span>val_loader <b>=<\/b> DataLoader(\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 <\/span>dataset<b>=<\/b>val_dataset,\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 <\/span>batch_size<b>=<\/b>12,\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 <\/span>num_workers<b>=<\/b>2,\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 <\/span>drop_last<b>=<\/b>True,\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span>)\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span>test_loader <b>=<\/b> DataLoader(\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 <\/span>dataset<b>=<\/b>test_dataset,\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 <\/span>batch_size<b>=<\/b>12,\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 <\/span>num_workers<b>=<\/b>2,\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 <\/span>drop_last<b>=<\/b>True,\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span>)\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span><i>#########################################<\/i>\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span><i>### 4 Initializing the Model<\/i>\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span><i>#########################################<\/i>\n\n<p class=\"p1\"><b>+<\/b><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span>fabric <b>=<\/b> Fabric(accelerator<b>=<\/b>\"cuda\", devices<b>=<\/b>4,<span 
class=\"Apple-converted-space\">\u00a0<\/span>\n\n<p class=\"p1\"><b>+<\/b><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 <\/span>strategy<b>=<\/b>\"deepspeed_stage_2\", precision<b>=<\/b>\"16-mixed\")\n\n<p class=\"p1\"><b>+<\/b><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span>fabric.launch()\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span>model <b>=<\/b> AutoModelForSequenceClassification.from_pretrained(\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 <\/span>\"distilbert-base-uncased\", num_labels<b>=<\/b>2)\n\n<p class=\"p1\"><b>-<\/b> <span class=\"Apple-converted-space\">\u00a0 <\/span>model.to(device)\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span>optimizer <b>=<\/b> torch.optim.Adam(model.parameters(), lr<b>=<\/b>5e-5)\n\n<p class=\"p1\"><b>+<\/b><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span>model, optimizer <b>=<\/b> fabric.setup(model, optimizer)\n\n<p class=\"p1\"><b>+<\/b><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span>train_loader, val_loader, test_loader <b>=<\/b> fabric.setup_dataloaders(\n\n<p class=\"p1\"><b>+<\/b><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 <\/span>train_loader, val_loader, test_loader)\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span><i>#########################################<\/i>\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span><i>### 5 Finetuning<\/i>\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span><i>#########################################<\/i>\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span>start <b>=<\/b> time.time()\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span>train(\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 
<\/span>num_epochs<b>=<\/b>3,\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 <\/span>model<b>=<\/b>model,\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 <\/span>optimizer<b>=<\/b>optimizer,\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 <\/span>train_loader<b>=<\/b>train_loader,\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 <\/span>val_loader<b>=<\/b>val_loader,\n\n<p class=\"p1\"><b>-<\/b> <span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 <\/span>device<b>=<\/b>device\n\n<p class=\"p1\"><b>+<\/b> <span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 <\/span>fabric<b>=<\/b>fabric\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span>)\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span>end <b>=<\/b> time.time()\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span>elapsed <b>=<\/b> end<b>-<\/b>start\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span><b>print<\/b>(f\"Time elapsed {elapsed<b>\/<\/b>60:.2f} min\")\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span><b>with<\/b> torch.no_grad():\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 <\/span>model.eval()\n\n<p class=\"p1\"><b>-<\/b> <span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 <\/span>test_acc <b>=<\/b> torchmetrics.Accuracy(task<b>=<\/b>\"multiclass\", num_classes<b>=<\/b>2).to(device)\n\n<p class=\"p1\"><b>+<\/b> <span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 <\/span>test_acc <b>=<\/b> torchmetrics.Accuracy(task<b>=<\/b>\"multiclass\", num_classes<b>=<\/b>2).to(fabric.device)\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 <\/span><b>for<\/b> batch <b>in<\/b> test_loader:\n\n<p class=\"p1\"><b>-<\/b> <span 
class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 <\/span><b>for<\/b> s <b>in<\/b> [\"input_ids\", \"attention_mask\", \"label\"]:\n\n<p class=\"p1\"><b>-<\/b> <span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 <\/span>batch[s] <b>=<\/b> batch[s].to(device)\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 <\/span>outputs <b>=<\/b> model(batch[\"input_ids\"], attention_mask<b>=<\/b>batch[\"attention_mask\"], labels<b>=<\/b>batch[\"label\"])\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 <\/span>predicted_labels <b>=<\/b> torch.argmax(outputs[\"logits\"], 1)\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 <\/span>test_acc.update(predicted_labels, batch[\"label\"])\n\n<p class=\"p1\"><span class=\"Apple-converted-space\">\u00a0 \u00a0 <\/span><b>print<\/b>(f\"Test accuracy {test_acc.compute()<b>*<\/b>100:.2f}%\")\n\n<p class=\"p1\"><\/code><div class=\"copy-button\"><button class=\"expand-button\">Expand<\/button><button class=\"copy\">Copy<\/button><\/div><\/pre><\/p>\n<p>As we can see, the modifications are really lightweight! How well does it run? Fabric completed the finetuning in just 1.8 min! Fabric is a bit more lightweight than the Trainer \u2013 although it\u2019s capable using callbacks and logging as well, we haven\u2019t enabled these features here to demonstrate Fabric with a minimalist example. 
It\u2019s blazing fast, isn\u2019t it?<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-5647603 aligncenter\" src=\"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/03\/Screenshot-2023-03-22-at-2.02.54-PM.png\" alt=\"\" width=\"905\" height=\"407\" srcset=\"https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/03\/Screenshot-2023-03-22-at-2.02.54-PM.png 2024w, https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/03\/Screenshot-2023-03-22-at-2.02.54-PM-300x135.png 300w, https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/03\/Screenshot-2023-03-22-at-2.02.54-PM-1024x460.png 1024w, https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/03\/Screenshot-2023-03-22-at-2.02.54-PM-1536x691.png 1536w, https:\/\/lightningaidev.wpengine.com\/wp-content\/uploads\/2023\/03\/Screenshot-2023-03-22-at-2.02.54-PM-300x135@2x.png 600w\" sizes=\"(max-width: 905px) 100vw, 905px\" \/><\/p>\n<p>When to use the Lightning Trainer or Fabric depends on your personal preference. As a rule of thumb, if you prefer a light wrapper around existing PyTorch code, check out Fabric. On the other hand, if you move towards bigger projects and prefer the code organization that Lightning provides, I recommend the Trainer.<\/p>\n<h2 id=\"conclusion\">Conclusion<\/h2>\n<p>In this article, we explored various techniques to improve the training speed of PyTorch models. If we use the Lightning Trainer, we can toggle between these options with one line of code, which is very convenient \u2013 especially if you are toggling between a CPU and GPU machine when debugging your code.<\/p>\n<p>Another aspect we haven\u2019t explored yet is maximizing the batch size, which could further improve the throughput of our model. 
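<\/p>\n<p>For the curious, such a batch-size search usually boils down to a simple doubling search. Here is a framework-free sketch in which the <code class=\"language-plaintext highlighter-rouge\">fits_in_memory<\/code> callback is a hypothetical stand-in for attempting one forward and backward pass at the candidate batch size:<\/p>

```python
# Sketch of a power-of-two batch-size search. The fits_in_memory callback
# is hypothetical: in practice it would run one training step at the given
# batch size and report whether it hit an out-of-memory error.

def find_max_batch_size(fits_in_memory, start=1, limit=4096):
    best = None
    batch_size = start
    while batch_size <= limit and fits_in_memory(batch_size):
        best = batch_size   # largest size known to fit so far
        batch_size *= 2     # double and try again
    return best

# Pretend the hardware can hold at most 100 samples per batch:
print(find_max_batch_size(lambda bs: bs <= 100))  # -> 64
```

<p>A binary search between the last successful and first failing size can then refine the result; Lightning also ships a batch-size finder that automates this idea.<\/p>\n<p>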
However, we will leave this optimization for another day.<\/p>\n<p><a href=\"https:\/\/github.com\/rasbt\/faster-pytorch-blog\">If you want to try the code yourself, I shared it all on GitHub here.<\/a><\/p>\n","protected":false}}