Hello there all!
I’m training a self-supervised model and would like to probe the quality of the encoder (its ability to generalise) by running several small classification tasks every couple of epochs. Training of the main net is distributed; I’m happy to run the small validation classifier on a single GPU if that’s simpler.
Any thoughts or references on how best to go about this? Will anything fail or interfere if I wrap this in a Callback or validation_epoch_end and either use a second Trainer or vanilla PyTorch?
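For context, what I have in mind is roughly: every few epochs, freeze the encoder, extract features for a small labelled dataset, and fit a linear classifier with vanilla PyTorch, logging its validation accuracy. A minimal sketch of the probe itself (all names are placeholders, not actual Lightning API; the feature extraction and Callback wiring are omitted):

```python
import torch
from torch import nn


def linear_probe(feats_train, y_train, feats_val, y_val,
                 num_classes, epochs=100, lr=1e-2):
    """Fit a linear classifier on frozen encoder features and
    return validation accuracy (a cheap proxy for encoder quality)."""
    clf = nn.Linear(feats_train.shape[1], num_classes)
    opt = torch.optim.Adam(clf.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        # Full-batch gradient step on the (small) probe training set.
        opt.zero_grad()
        loss = loss_fn(clf(feats_train), y_train)
        loss.backward()
        opt.step()
    with torch.no_grad():
        preds = clf(feats_val).argmax(dim=1)
        return (preds == y_val).float().mean().item()
```

The idea would be to call something like this from a Callback (e.g. in `on_validation_epoch_end`, guarded to run on rank zero only) after collecting features from the frozen encoder.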
Cheers and all the best,
Jonas