Difference between trained model and model loaded from checkpoint

Let’s assume I have a small training loop for my model:

import torch
import lightning as L

BATCH_SIZE = 4
EPOCHS = 5

# kf (a KFold splitter), patients, BrainDataset, and Model are defined elsewhere.
for i, (train_index, test_index) in enumerate(kf.split(patients)):
    train_patients = [patients[index] for index in train_index]
    test_patients = [patients[index] for index in test_index]

    train_ds = BrainDataset(train_patients)
    valid_ds = BrainDataset(test_patients)  # validation data from the held-out fold

    # No trailing comma after the closing parenthesis here, otherwise the
    # assignment would create a one-element tuple instead of a DataLoader.
    train_dataloader = torch.utils.data.DataLoader(
        train_ds,
        batch_size=BATCH_SIZE,
        shuffle=True,
    )
    valid_dataloader = torch.utils.data.DataLoader(
        valid_ds,
        batch_size=BATCH_SIZE,
        shuffle=False,
    )

    model = Model("deeplabv3plus", "resnet50", in_channels=1, out_classes=1)

    trainer = L.Trainer(max_epochs=EPOCHS, enable_checkpointing=False)

    trainer.fit(
        model,
        train_dataloaders=train_dataloader,
        val_dataloaders=valid_dataloader,
    )

    trainer.save_checkpoint("./weights/temp/5.ckpt")

    break  # only the first fold for now
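
To rule out a broken checkpoint, one quick check (a minimal sketch, assuming the model object from the loop above is still in memory) is to reload the file into a second instance and compare the raw weights:

loaded = Model.load_from_checkpoint("weights/temp/5.ckpt")
loaded_state = loaded.state_dict()

# If the checkpoint round-trips correctly, every parameter and buffer
# should match bit for bit.
for key, value in model.state_dict().items():
    assert torch.equal(value.cpu(), loaded_state[key].cpu()), key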

When I run a forward pass on this freshly trained model

result = model(torch.ones([4, 1, 256, 256]))

print(result)

I get a result like this:
[-1.4535, -1.7266, -1.9997, ..., -1.7220, -1.7167, -1.7115],
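
One thing I have not controlled for here is the train/eval mode of the network: batch-norm and dropout layers behave differently in the two modes, so a fair comparison should pin the mode explicitly. A sketch of a mode-controlled forward pass (plain torch.nn semantics, nothing Lightning-specific):

model.eval()  # use running batch-norm statistics and disable dropout
with torch.no_grad():  # pure inference, no autograd graph needed
    result = model(torch.ones([4, 1, 256, 256]))
print(result)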

However, when I load the model from the checkpoint, I get completely different results:

model = Model.load_from_checkpoint("weights/temp/5.ckpt")
model.freeze()  # disables gradients and puts the module into eval mode

result = model(torch.ones([4, 1, 256, 256]))

print(result)

[-0.7397, -0.8071, -0.8746,  ..., -1.0166, -1.0332, -1.0499],
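
To quantify "completely different", a direct elementwise comparison can be used (a sketch; result_trained and result_loaded are hypothetical names for the two outputs above):

# True only if the two outputs agree elementwise within the tolerance.
print(torch.allclose(result_trained, result_loaded, atol=1e-6))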

How can this happen, and what should I change to get the same output?