I have tens of thousands of images for training a semantic segmentation model. I am therefore using `limit_val_batches` and `val_check_interval`.
However, I note (see "Pytorch Lightning limit_val_batches and val_check_interval behavior" on Stack Overflow) that when using `limit_val_batches=N`, the first `N` batches from the underlying dataloader are returned on every validation run. Training therefore only ever sees the first `N` validation batches.
Rather than validating on the same `N` batches starting at index 0 every time, I would like my dataloader, when using `limit_val_batches`, to chunk sequentially through all of the validation data (e.g. batches 0-19 on the first validation run, batches 20-39 on the next, and so on).
How would I go about implementing this behavior with `limit_val_batches`? Or is this not the expected thing to do?
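For context, here is a minimal sketch of the kind of thing I have in mind: a stateful sampler that advances its window each time the dataloader is re-iterated. `ChunkedSampler` and its parameters are names I made up for illustration, not Lightning API, and I haven't verified how this interacts with Lightning's dataloader handling (e.g. I'd expect `reload_dataloaders_every_n_epochs` to reset the sampler state):

```python
from torch.utils.data import Sampler, DataLoader

class ChunkedSampler(Sampler):
    """Yield a different contiguous window of indices each time the
    dataloader is iterated, wrapping around at the end of the dataset."""

    def __init__(self, dataset_len: int, chunk_size: int):
        self.dataset_len = dataset_len
        # Samples per validation run, i.e. N batches * batch_size.
        self.chunk_size = chunk_size
        self.offset = 0

    def __iter__(self):
        # Indices for the current window, wrapping at the dataset end.
        indices = [(self.offset + i) % self.dataset_len
                   for i in range(self.chunk_size)]
        # Advance so the next validation run sees the next chunk.
        self.offset = (self.offset + self.chunk_size) % self.dataset_len
        return iter(indices)

    def __len__(self):
        return self.chunk_size

# Hypothetical usage: with limit_val_batches=20 and batch_size=8,
# each validation run would cover the next 20 * 8 = 160 samples.
# sampler = ChunkedSampler(len(val_dataset), chunk_size=20 * 8)
# val_loader = DataLoader(val_dataset, batch_size=8, sampler=sampler)
```

My understanding is that for a map-style dataset the sampler is iterated in the main process, so the offset should carry over between validation runs as long as the same `DataLoader` object is reused, but I haven't confirmed that. Is a stateful sampler like this a reasonable approach, or is there an idiomatic Lightning way to do it?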