5.4 Making Code Reproducible
What we covered in this video lecture
This lecture covered some common sources of randomness that we face when training neural networks. Randomness is not necessarily a bad thing. Sometimes, a particular initial random weight configuration can lead to a bad local minimum, so it is recommended to train a network multiple times with different initial random weights. However, randomness can be a nuisance when we want to share or reproduce results, which is especially true when we are working on research papers or writing unit tests. So, in this lecture, we covered some methods for reducing the sources of randomness in our neural network training code, as sketched below.
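As a minimal sketch of one such method, assuming PyTorch as the framework, the helper below (the name `set_all_seeds` is illustrative, not from the lecture) seeds the random number generators that typically affect training:

```python
import random

import numpy as np
import torch


def set_all_seeds(seed: int = 123) -> None:
    # Illustrative helper: seed Python's built-in RNG, NumPy,
    # and PyTorch (CPU and all visible GPUs) in one place.
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)


# Call once at the start of a training script, before creating
# the model, the dataloaders, or any random tensors.
set_all_seeds(123)
```

Note that seeding alone removes only one source of randomness; nondeterministic GPU kernels and data-loading order can still cause run-to-run differences.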
(Note that deterministic settings on a GPU, for example, when we work with convolutional networks later, make neural networks train a tad slower, which is why they are typically not enabled by default.)
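For completeness, here is a minimal sketch of what such deterministic GPU settings can look like in PyTorch; these are standard PyTorch flags, though the exact combination shown is an assumption, not the lecture's exact code:

```python
import torch

# Force deterministic algorithm choices; PyTorch raises an error
# if an operation has no deterministic implementation.
torch.use_deterministic_algorithms(True)

# For convolutional layers, also disable cuDNN's autotuner, which
# may otherwise pick different (nondeterministic) kernels per run,
# and request the deterministic cuDNN kernels instead.
torch.backends.cudnn.benchmark = False
torch.backends.cudnn.deterministic = True
```

Deterministic kernels are often slower than the autotuned ones, which matches the note above about the speed trade-off.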
Additional resources if you want to learn more
If you want to learn more about determinism when using GPUs, you may enjoy this recorded talk, Deep Learning Determinism, by Duncan Riach at the Nvidia GTC conference.