How-to Guides

Avoid overfitting: Learn how to add validation and test loops
Build a model: Step-by-step guide to building your model
Configure hyperparameters from the CLI: Make your experiments modular via a command line interface
Customize the progress bar: Change the progress bar monitoring and tracking
Deploy models into production: Deploy models with different levels of scale
Optimize training: Explore advanced training techniques
Find bottlenecks in your code: Learn how to profile your experiments to find bottlenecks
Finetune a model: Learn how to use pretrained models
Manage data: How to use basic to advanced data techniques
Manage experiments: Learn to track and visualize with experiment managers
Organize existing PyTorch into Lightning: Convert your vanilla PyTorch code to Lightning (see the sketch after this list)
Run on an on-prem cluster: Learn to run on your own cluster
Save and load model progress: Save and load progress with checkpoints
Save memory with half-precision: Use precision techniques to train faster and save memory
Set up large models efficiently: Avoid memory peaks and speed up the initialization of large models
Train models with billions of parameters: Scale GPU training for models with billions of parameters
Train in a notebook: Train models in interactive notebooks (Jupyter, Colab, Kaggle, etc.)
Train on single or multiple GPUs: Train models faster with GPU accelerators
Train on single or multiple HPUs: Train models faster with HPU accelerators
Train on single or multiple IPUs: Train models faster with IPU accelerators
Train on single or multiple TPUs: Train models faster with TPU accelerators
Train on MPS: Train models faster with Apple Silicon GPUs
Track and Visualize Experiments: Learn to track and visualize experiments
Use a pretrained model: Improve results with transfer learning on pretrained models
Use a pure PyTorch training loop: Run your pure PyTorch loop with Lightning
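As a taste of what the "Organize existing PyTorch into Lightning" guide covers, here is a minimal sketch of a plain PyTorch model wrapped in a LightningModule. The class name, layer sizes, and random dataset are illustrative assumptions, not content from this page; the guide itself walks through the conversion in detail.

```python
# Minimal sketch (illustrative, not from the guide): a plain PyTorch model
# moved into a LightningModule so the Trainer owns the training loop.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl


class LitClassifier(pl.LightningModule):
    def __init__(self):
        super().__init__()
        # The model itself stays ordinary PyTorch.
        self.model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 2))

    def training_step(self, batch, batch_idx):
        # Replaces the body of a hand-written training loop.
        x, y = batch
        loss = nn.functional.cross_entropy(self.model(x), y)
        self.log("train_loss", loss)  # feeds the progress bar and loggers
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)


if __name__ == "__main__":
    # Random tensors stand in for a real dataset.
    dataset = TensorDataset(torch.randn(256, 32), torch.randint(0, 2, (256,)))
    trainer = pl.Trainer(max_epochs=1)  # handles devices, loops, checkpointing
    trainer.fit(LitClassifier(), DataLoader(dataset, batch_size=32))
```

Once the model is organized this way, the other guides build on the same structure; for example, the Trainer checkpoints automatically, and "Save and load model progress" shows how to restore with LitClassifier.load_from_checkpoint.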