4.6 Speeding Up Model Training Using GPUs


What we covered in this video lecture

In this lecture, we learned how to transfer tensors from the CPU to GPU memory to train neural networks more efficiently — GPUs are especially great for linear algebra operations that can be parallelized, for example, dot products and matrix multiplication.
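
The following minimal sketch (not the exact code from the lecture; the matrix size is arbitrary) illustrates the basic pattern: select a device, transfer tensors from CPU to GPU memory with .to(device), and run a large matrix multiplication there.

    import time
    import torch

    # Use the GPU if one is available, otherwise fall back to the CPU
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    # Create two large matrices in CPU memory, then transfer them to the GPU
    a = torch.randn(4096, 4096)
    b = torch.randn(4096, 4096)
    a_gpu, b_gpu = a.to(device), b.to(device)

    # Matrix multiplication is highly parallelizable, which is where GPUs shine
    start = time.time()
    c = a_gpu @ b_gpu
    if device.type == "cuda":
        torch.cuda.synchronize()  # wait for the GPU kernel to finish before timing
    print(f"Matrix multiplication on {device}: {time.time() - start:.4f} s")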

If you have any questions or need tips or help with your PyTorch GPU setup, please don’t hesitate to reach out via the Discussion Forum.

Also, please note that this was just a short introduction to using GPUs in PyTorch. We will revisit this topic many times in this course. For instance, GPUs will become more relevant in Unit 7, where we work with computer vision models. Also, GPUs are essential for modern large language models, which we will cover in Unit 8. Finally, how do we train neural networks using not one but multiple GPUs? That’s a topic we will talk about in Unit 9!

Additional resources if you want to learn more

If you don’t have a suitable GPU in your computer, consider cloud resources. For example, as of this writing, you receive $30 worth of free GPU credits when you sign up for a Lightning account, which you can use to train models on a cloud GPU. Alternative resources include Google Colab and Kaggle Notebooks.

If you are interested in learning more about why GPUs are so efficient for deep learning, or if you are considering purchasing your own GPU instead of using cloud resources, check out the excellent guide Which GPU(s) to Get for Deep Learning: My Experience and Advice for Using GPUs in Deep Learning.

 


Quiz: 4.6 Speeding Up Model Training Using GPUs

Which of the following do we have to move to GPU memory in order to take advantage of GPU-accelerated training? (All of the options below are correct.)

  • The training features: we need the features on the GPU since we use them to compute the forward and backward pass.

  • The labels (targets): they need to be in GPU memory because the loss is computed from them together with the predictions, which already reside in GPU memory.

  • The model parameters (weights): the model weights need to be on the GPU for both the forward and the backward pass.
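
To make this concrete, here is a small, self-contained training-loop sketch (using made-up toy data rather than the course's dataset) in which the features, the labels, and the model parameters are all transferred to GPU memory:

    import torch
    from torch.utils.data import DataLoader, TensorDataset

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    # Toy stand-in data; in practice these come from your actual dataset
    features = torch.randn(1000, 100)
    labels = torch.randint(0, 2, (1000,))
    train_loader = DataLoader(TensorDataset(features, labels), batch_size=64, shuffle=True)

    model = torch.nn.Linear(100, 2).to(device)  # model parameters in GPU memory
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

    for batch_features, batch_labels in train_loader:
        batch_features = batch_features.to(device)  # features in GPU memory
        batch_labels = batch_labels.to(device)      # labels in GPU memory (needed for the loss)

        logits = model(batch_features)              # forward pass runs on the GPU
        loss = torch.nn.functional.cross_entropy(logits, batch_labels)

        optimizer.zero_grad()
        loss.backward()                             # backward pass runs on the GPU
        optimizer.step()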
