Transfer learning

Hello! I use PyTorch Lightning for deep reinforcement learning, and I would like to incorporate transfer learning into my work. Specifically, I first train model A and then use this pre-trained model as the starting point for training model B, as follows:

from pytorch_lightning import LightningModule, Trainer

class DDPG(LightningModule):
    def __init__(self, some_hparams):
        …

# Train model A
model_A = DDPG(some_hparams)
trainer = Trainer(max_epochs=1000)
trainer.fit(model_A)

# Use the pre-trained model A as the starting point
model_B = DDPG.load_from_checkpoint("path_modelA.ckpt")
trainer = Trainer(max_epochs=1000)
trainer.fit(model_B)
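
As I understand it, an alternative would be to build a fresh model B and copy over only the weights from A's checkpoint by hand. A minimal sketch of what I mean (using the same checkpoint path as above; I believe Lightning stores the weights under the "state_dict" key):

import torch

# Sketch: construct model B normally, then load only the weights from A's checkpoint
checkpoint = torch.load("path_modelA.ckpt", map_location="cpu")
model_B = DDPG(some_hparams)
model_B.load_state_dict(checkpoint["state_dict"])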

I don't encounter any errors. However, even after several experiments, I'm still unable to see any gain in convergence speed from using the pre-trained model compared to training from scratch. I would therefore like to confirm whether I'm doing this correctly with the code above: does model B indeed start its training from the (loaded) weights of pre-trained model A?
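
For reference, here is the sanity check I have in mind (a minimal sketch; it assumes, as I believe is the case, that Lightning checkpoints keep the weights under the "state_dict" key):

import torch

# Compare model B's parameters against the raw checkpoint of model A
checkpoint = torch.load("path_modelA.ckpt", map_location="cpu")
model_B = DDPG.load_from_checkpoint("path_modelA.ckpt")

for name, param in model_B.state_dict().items():
    # Every parameter should match the checkpointed value exactly
    assert torch.equal(param.cpu(), checkpoint["state_dict"][name]), f"Mismatch in {name}"
print("model_B's weights match the checkpoint")

Is this a valid way to verify it? Thank you.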