Hi, I’d like to call .manual_backward() without performing the usual optimizer step. Then, once Trainer.fit() has finished, I’d like to read out the parameter gradients, but unfortunately all of them come back as zeros. Below is the relevant code:
def training_step(self, batch, batch_idx):
    opt = self.optimizers()
    x, y = batch
    logits = self(x)
    loss = F.nll_loss(logits, y)
    # backward only -- intentionally no opt.step() / opt.zero_grad()
    self.manual_backward(loss, opt, retain_graph=True)
    ...
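For context, the rest of the module looks roughly like the sketch below. The layer sizes, the Adam optimizer in configure_optimizers, and the way manual optimization is switched on are simplified placeholders I'm adding just for illustration (depending on the Lightning version it is either self.automatic_optimization = False in the module or automatic_optimization=False on the Trainer); the training_step is the one from the snippet above.

import torch
import torch.nn.functional as F
import pytorch_lightning as pl

class MNISTNet(pl.LightningModule):
    def __init__(self, channels, width, height, num_classes):
        super().__init__()
        # assumption: manual optimization enabled here; on older Lightning
        # versions this is done with Trainer(automatic_optimization=False)
        self.automatic_optimization = False
        self.model = torch.nn.Sequential(
            torch.nn.Flatten(),
            torch.nn.Linear(channels * width * height, 64),
            torch.nn.ReLU(),
            torch.nn.Linear(64, num_classes),
        )

    def forward(self, x):
        return F.log_softmax(self.model(x), dim=1)

    def training_step(self, batch, batch_idx):
        opt = self.optimizers()
        x, y = batch
        logits = self(x)
        loss = F.nll_loss(logits, y)
        # backward only -- intentionally no opt.step() / opt.zero_grad()
        self.manual_backward(loss, opt, retain_graph=True)
        return loss

    def configure_optimizers(self):
        # placeholder optimizer, only here so self.optimizers() returns something
        return torch.optim.Adam(self.parameters(), lr=1e-3)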
dm = MNISTDataModule()
# Init model from datamodule's attributes
model = MNISTNet(*dm.size(), dm.num_classes)
# Init trainer
trainer = pl.Trainer(gpus=-1, max_epochs=3, progress_bar_refresh_rate=20)
# Train
trainer.fit(model, dm)
post_trained_parameters = model.named_parameters()
Finally, the gradients collected below all come back as zeros:
[p.grad for (_, p) in post_trained_parameters]
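For completeness, this is the slightly more verbose check I run, equivalent to the comprehension above (the abs().sum() is only there to make the zeros easy to spot):

post_trained_parameters = list(model.named_parameters())  # named_parameters() returns a generator
for name, p in post_trained_parameters:
    # every gradient tensor prints as 0.0 here
    print(name, None if p.grad is None else p.grad.abs().sum().item())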
Any suggestions are welcome, thanks in advance!