I wasn’t sure how exactly to categorize this, but my guess is that it’s some change in Bolts leading up to the SimCLR release, so I’ve put it here for now.
Training in the traditional self-supervised linear-evaluation setup for MoCo, I'm seeing a substantial performance drop after updating versions this weekend. The drop appears in the supervised (linear evaluation) stage but not in the self-supervised pretraining, which makes me think it's something in the Bolts DataModules rather than base Lightning, though I could be interpreting this incorrectly.
With the STL10 Bolts DataModule, top-1 validation accuracy drops by 16 points (88% -> 72%) and cross-entropy loss increases by 0.5 (0.3 -> 0.8). On CIFAR10 I see a smaller but consistent drop of 2 points top-1 validation accuracy. Both runs use exactly the same code; the only difference is switching conda envs. (I changed both the Lightning version and the Bolts version together because that seems to be required for compatibility between the two.)
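For concreteness, the two conda envs differ only in the Lightning/Bolts pins, roughly like this (a sketch: exact package names and any transitive pins in my envs are assumptions, not verified from a lockfile):

```shell
# Env A: the setup that produced the good results
conda create -n ssl-old python=3.7 -y
conda activate ssl-old
pip install pytorch-lightning==0.8.5 pytorch-lightning-bolts==0.1.0

# Env B: identical code, degraded linear-eval results
conda create -n ssl-new python=3.7 -y
conda activate ssl-new
pip install pytorch-lightning==0.9.0 pytorch-lightning-bolts==0.1.1
```

Running the same pretraining + linear-evaluation script in each env reproduces the gap described above.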
I have some more details in the GitHub issue ("Linear classification performance on CIFAR10/STL10 DataModules drops from Lightning 0.8.5/Bolts 0.1.0 to Lightning 0.9.0/Bolts 0.1.1", Issue #175 on Lightning-AI/lightning-bolts) — sorry for the double post; I didn't realize Bolts-related issues could go here. I'm happy to provide the self-supervised pretrained models and hparam configs to speed things along. If something changed in the DataModules, or elsewhere in Bolts, that could cause this kind of drop, a pointer to what changed and why would be much appreciated.