Training epochs and validation epochs are not matched

I solved it!!!
For the people who want to know: just set `num_sanity_val_steps` to 0. By default, the PyTorch Lightning Trainer runs a couple of validation batches as a sanity check before training starts, which makes the training and validation epoch counts look mismatched.
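Here is a minimal sketch of how to pass that option to the Trainer (the model class and dataloader names are placeholders for your own code):

```python
import pytorch_lightning as pl

# MyLightningModule, train_loader and val_loader are assumed to be defined elsewhere.
trainer = pl.Trainer(
    max_epochs=10,
    num_sanity_val_steps=0,  # skip the pre-training sanity-check validation batches (default is 2)
)
trainer.fit(MyLightningModule(), train_loader, val_loader)
```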