Checkpoint model predictions not same as original model

Actually, model.eval() doesn’t turn off gradients; it toggles the behavior of BatchNorm and Dropout layers, since they behave differently during training and evaluation. To disable gradient tracking you need torch.no_grad(). In short, use both model.eval() and torch.no_grad() for inference.
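A minimal sketch of that recipe (the tiny Dropout model and input here are made up for illustration):

```python
import torch
import torch.nn as nn

# A model containing Dropout, which behaves differently in train vs eval mode.
model = nn.Sequential(nn.Linear(4, 4), nn.Dropout(p=0.5))

x = torch.randn(1, 4)

# eval() makes Dropout a no-op (and BatchNorm use running stats);
# no_grad() disables autograd bookkeeping for the forward passes.
model.eval()
with torch.no_grad():
    out1 = model(x)
    out2 = model(x)

# In eval mode the two forward passes are identical and track no gradients.
print(torch.equal(out1, out2))   # True
print(out1.requires_grad)        # False
```

Without model.eval(), the two passes would differ because Dropout randomly zeroes activations; without torch.no_grad(), the outputs would carry grad history.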

Also, use model = FFNPL.load_from_checkpoint(s, param=prm), since load_from_checkpoint is a class method: it constructs and returns a new model, so calling it on an existing instance discards the returned model instead of updating the instance.
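The restored model only reproduces the original predictions when it is rebuilt with the same hyperparameters and weights. A sketch of that round-trip in plain PyTorch (FFN and its hidden size are hypothetical stand-ins for FFNPL and prm):

```python
import os
import tempfile

import torch
import torch.nn as nn

# Hypothetical stand-in for the FFNPL module from the question.
class FFN(nn.Module):
    def __init__(self, hidden: int):
        super().__init__()
        self.fc = nn.Linear(4, hidden)

    def forward(self, x):
        return self.fc(x)

model = FFN(hidden=8)
path = os.path.join(tempfile.mkdtemp(), "ckpt.pt")
torch.save(model.state_dict(), path)

# Rebuild with the SAME hyperparameters, then load the saved weights;
# mismatched sizes here would raise a load_state_dict error.
restored = FFN(hidden=8)
restored.load_state_dict(torch.load(path))

# With both models in eval mode and gradients off, predictions match exactly.
model.eval()
restored.eval()
x = torch.randn(2, 4)
with torch.no_grad():
    same = torch.equal(model(x), restored(x))
print(same)  # True
```

Lightning's load_from_checkpoint does this construct-and-load step for you, which is why it must be called on the class rather than an instance.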
