
5.7 Evaluating and Using Models on New Data


What we covered in this video lecture

In this lecture, we explored the concept of checkpoints. As a model is exposed to more data during training, its performance evolves. It is therefore good practice to save the model's state at regular intervals throughout the training process. Once training is complete, we can load the checkpoint that corresponds to the best performance.

Additionally, checkpoints allow training to be resumed from its previous state after an interruption. Lightning checkpoints are fully compatible with plain PyTorch and can be used in either framework.

Additional resources if you want to learn more

If you want to learn more about checkpoints in Lightning, check out the official documentation here.


Quiz: 5.7 Evaluating and Using Models on New Data (Part 1)

Suppose you are looking at a confusion matrix for a handwritten digit classifier, and the model mistakenly predicts 13 images of the digit nine as the digit three. Which of the following statements is correct?

Incorrect. Since the question does not specify which class is the positive class, we cannot say whether these misclassifications are false positives or false negatives.

Incorrect. Since the question does not specify which class is the positive class, we cannot say whether these misclassifications are false positives or false negatives.

Correct. Since the question does not specify which class is the positive class, we cannot say whether these misclassifications are false positives or false negatives.
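A small sketch makes the point above concrete: whether a misclassification counts as a false positive or a false negative depends entirely on which class we designate as positive. The helper function below is illustrative, not part of the lecture.

```python
# Count false positives and false negatives for a chosen positive class.
def count_fp_fn(y_true, y_pred, positive_class):
    # False positive: predicted positive, but the true label is not positive
    fp = sum(1 for t, p in zip(y_true, y_pred)
             if p == positive_class and t != positive_class)
    # False negative: true label is positive, but predicted as something else
    fn = sum(1 for t, p in zip(y_true, y_pred)
             if t == positive_class and p != positive_class)
    return fp, fn

y_true = [9] * 13  # 13 images whose true label is the digit 9
y_pred = [3] * 13  # all mistakenly predicted as the digit 3

# If 9 is the positive class, these are 13 false negatives ...
print(count_fp_fn(y_true, y_pred, positive_class=9))  # → (0, 13)
# ... but if 3 is the positive class, they are 13 false positives.
print(count_fp_fn(y_true, y_pred, positive_class=3))  # → (13, 0)
```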


Quiz: 5.7 Evaluating and Using Models on New Data (Part 2)

When we load a model from a checkpoint file using LightningModel.load_from_checkpoint(...), we provide the PyTorch model as an additional argument because the LightningModel.load_from_checkpoint(...) method does not load hyperparameter settings by default.

Incorrect. By default, a LightningModule treats all of its __init__ parameters as hyperparameters that are automatically saved to the checkpoint file and automatically restored via .load_from_checkpoint(...). Because our LightningModel receives a PyTorch model as an __init__ argument that we explicitly excluded from the hyperparameter list, we have to pass it in explicitly via LightningModel.load_from_checkpoint(..., model=pytorch_model).

Correct. By default, a LightningModule treats all of its __init__ parameters as hyperparameters that are automatically saved to the checkpoint file and automatically restored via .load_from_checkpoint(...). Because our LightningModel receives a PyTorch model as an __init__ argument that we explicitly excluded from the hyperparameter list, we have to pass it in explicitly via LightningModel.load_from_checkpoint(..., model=pytorch_model).

DL Fundamentals 5: PyTorch Lightning, by Sebastian Raschka