
Overview: Organizing your Code with PyTorch Lightning

The previous units focused on learning how deep neural networks work from scratch. Along the way, we introduced PyTorch in units 2 and 3, and we trained our first multilayer neural networks in Unit 4. Personally, I really like PyTorch’s balance between customizability and user-friendliness.

However, as we start working with more sophisticated features, including model checkpointing, logging, multi-GPU training, and distributed computing, PyTorch can sometimes be a bit too verbose. Hence, in this unit, we will introduce the Lightning Trainer, which helps us organize our PyTorch code and takes care of much of the mundane boilerplate code.

So, in Unit 5, we will learn how to …

  • organize our PyTorch code with Lightning;
  • compute metrics efficiently with TorchMetrics;
  • make our code reproducible;
  • organize our data loaders via DataModules;
  • log results during training;
  • add extra functionality with callbacks.
