v: 1.5.10
The docs (Optimization — PyTorch Lightning 1.8.0dev documentation) state that:
* Before v1.3, Lightning automatically called `lr_scheduler.step()` in both automatic and manual optimization. From 1.3, `lr_scheduler.step()` is now for the user to call at arbitrary intervals.
* Note that the `lr_scheduler_config` keys, such as `"frequency"` and `"interval"`, will be ignored during manual optimization, even if they are provided in your `configure_optimizers()` (see the sketch after this list).
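For reference, here is a minimal sketch of the manual-optimization pattern the docs describe. This is a hypothetical toy module (the network, optimizer, and scheduler are arbitrary choices), not the code from this report:

```python
import torch
import pytorch_lightning as pl

class ManualOptimModule(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.automatic_optimization = False  # manual optimization
        self.layer = torch.nn.Linear(32, 2)
        self.loss = torch.nn.CrossEntropyLoss()

    def training_step(self, batch, batch_idx):
        opt = self.optimizers()
        x, y = batch
        loss = self.loss(self.layer(x), y)
        opt.zero_grad()
        self.manual_backward(loss)
        opt.step()
        # In manual optimization the user is responsible for stepping the scheduler
        self.lr_schedulers().step()
        return loss

    def configure_optimizers(self):
        optimizer = torch.optim.Adam(self.parameters(), lr=1e-3)
        scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10)
        # Per the docs, "interval"/"frequency" keys would be ignored here
        return {"optimizer": optimizer, "lr_scheduler": scheduler}
```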
But this is not the case.
If you do end up specifying an interval in your scheduler config, for example:
```python
def configure_optimizers(self):
    optim_cls = {'Adam': torch.optim.Adam, 'RAdam': torch.optim.RAdam, 'AdamW': torch.optim.AdamW}[self.args.optim]
    optimizer = optim_cls(filter(lambda p: p.requires_grad, self.parameters()),
                          lr=self.args.lr, weight_decay=self.args.wd)
    lr_scheduler = torch.optim.lr_scheduler.OneCycleLR(optimizer, max_lr=self.args.lr,
                                                       steps_per_epoch=self.dataloader_len,
                                                       epochs=self.args.num_epochs,
                                                       anneal_strategy='linear', div_factor=100)
    lr_scheduler_config = {"scheduler": lr_scheduler,
                           "interval": "step"}
    return {"optimizer": optimizer, "lr_scheduler": lr_scheduler_config}
```
and then also manually call `step()`:

```python
def training_step(self, batch, batch_nb):
    if self.pairs:
        x1, x2, y, mask = batch
        x = [x1, x2]
    else:
        x, y = batch
    y_hat = self(x)
    loss = self.loss(y_hat, y)
    self.lr_schedulers().step()
    return loss
```
you will end up doubling the scheduler's `_step_count` at each training step and run into this error. So don't call `step()` manually if you also specify `"interval"` in the scheduler config. This is reproducible with Lightning version 1.5.10.
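A minimal sketch of the workaround under these assumptions (automatic optimization; `self.args`, `self.dataloader_len`, `self.loss`, and `self.pairs` are the same hypothetical attributes used above): keep `"interval": "step"` in the config and drop the manual call, so the scheduler is stepped exactly once per training step.

```python
def configure_optimizers(self):
    optimizer = torch.optim.Adam(filter(lambda p: p.requires_grad, self.parameters()),
                                 lr=self.args.lr, weight_decay=self.args.wd)
    lr_scheduler = torch.optim.lr_scheduler.OneCycleLR(optimizer, max_lr=self.args.lr,
                                                       steps_per_epoch=self.dataloader_len,
                                                       epochs=self.args.num_epochs,
                                                       anneal_strategy='linear', div_factor=100)
    # Lightning steps this scheduler once per training step because of "interval": "step"
    return {"optimizer": optimizer,
            "lr_scheduler": {"scheduler": lr_scheduler, "interval": "step"}}

def training_step(self, batch, batch_nb):
    x, y = batch
    y_hat = self(x)
    loss = self.loss(y_hat, y)
    # no self.lr_schedulers().step() here, so _step_count advances only once per step
    return loss
```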