Seeding when resume_from_checkpoint
I wonder how pytorch_lightning handles seeding when I resume from a checkpoint with something like Trainer(resume_from_checkpoint="my_ckpt"). Does it set a new seed for everything, reuse the seed I set when I started training from scratch, or restore the random states saved in the checkpoint? My version is 1.9.3. Thanks for your help!

It doesn’t restore any of those random states. You can call Lightning’s seed_everything(seed) before loading the checkpoint to get that behavior.
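For reference, here is a minimal stdlib sketch of the idea, using Python's random module as a stand-in (the real pytorch_lightning.seed_everything additionally seeds numpy, torch, and torch.cuda):

```python
import os
import random

def seed_everything(seed: int) -> None:
    # Stand-in for pytorch_lightning.seed_everything: reseed every RNG
    # the training run uses. The real function also seeds numpy/torch.
    os.environ["PYTHONHASHSEED"] = str(seed)
    random.seed(seed)

# Seed *before* resuming, so the resumed run is reproducible:
seed_everything(123)
first = [random.random() for _ in range(3)]

# Re-seeding with the same value replays the same random stream.
seed_everything(123)
second = [random.random() for _ in range(3)]
print(first == second)  # True
```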

FYI, if you ever wonder what Lightning saves into a checkpoint, you can always load it with torch.load() and inspect the dictionary items 🙂
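For example (a self-contained sketch: it builds a toy checkpoint first, since "my_ckpt" is your local file, and the keys shown are illustrative, not the full set Lightning writes):

```python
import os
import tempfile

import torch

# Build a toy checkpoint so the snippet runs on its own; in practice
# you would point torch.load at your own file, e.g. "my_ckpt".
path = os.path.join(tempfile.mkdtemp(), "toy_ckpt")
torch.save({"epoch": 3, "global_step": 120, "state_dict": {}}, path)

# A checkpoint is just a serialized dict, so you can inspect it directly:
ckpt = torch.load(path, map_location="cpu")
print(sorted(ckpt.keys()))  # ['epoch', 'global_step', 'state_dict']
```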


Thanks! Just a follow-up question. In one of my experiments, I called seed_everything(seed) at the start of training and later interrupted the run. When I resumed with Trainer(resume_from_checkpoint="my_ckpt"), I used seed_everything to set a different seed. Could the different seed cause training problems in this case? I ask because the trend of the training and validation accuracy looked different from before the interruption, and I'm not sure whether this conflicts with some internal mechanism of PyTorch Lightning when resuming training. Thanks!
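One way to see why the curves can shift even if nothing is broken: a different seed changes the order in which shuffled samples arrive after resuming. A small stdlib illustration, with random.shuffle as a stand-in for the DataLoader's shuffling:

```python
import random

indices = list(range(10))

# Shuffle the same index list under two different seeds, the way a
# resumed run with a new seed would reorder its training batches.
random.seed(1)
order_a = indices[:]
random.shuffle(order_a)

random.seed(2)
order_b = indices[:]
random.shuffle(order_b)

# Same samples overall, but presented in a different order, which can
# change the short-term shape of the loss/accuracy curves.
print(order_a != order_b)
```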