Using SequentialLR with Step, Epoch and ReduceLROnPlateau

As the title says, I am trying to use two schedulers:
First, perform a learning rate warmup over N epochs or M steps (depending on whether the dataset is very big or not); for that I use LambdaLR.
Then, use ReduceLROnPlateau.

To chain them, I use SequentialLR.
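Concretely, the setup looks like this (a minimal sketch of what I mean; `model`, the base lr, and the warmup length are placeholders):

```python
import torch
from torch.optim.lr_scheduler import LambdaLR, ReduceLROnPlateau, SequentialLR

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # `model` is assumed

warmup_steps = 500  # placeholder for the M warmup steps
# linear warmup from ~0 up to the base lr over `warmup_steps` calls to step()
warmup = LambdaLR(optimizer, lr_lambda=lambda s: min(1.0, (s + 1) / warmup_steps))
plateau = ReduceLROnPlateau(optimizer, mode='min', factor=0.1, patience=10)

sequentiallr_scheduler = SequentialLR(
    optimizer, schedulers=[warmup, plateau], milestones=[warmup_steps]
)
```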

There are multiple issues:

  1. From SequentialLR, the step() method has no support for ReduceLROnPlateau, so I did:
```python
from bisect import bisect_right
from torch.optim.lr_scheduler import ReduceLROnPlateau, SequentialLR

class SequentialLR2(SequentialLR):
    def step(self, monitor=None):
        self.last_epoch += 1
        idx = bisect_right(self._milestones, self.last_epoch)
        scheduler = self._schedulers[idx]
        if idx > 0 and self._milestones[idx - 1] == self.last_epoch:
            # at the milestone, ReduceLROnPlateau takes a metric, not an
            # epoch, so only restart the non-plateau schedulers
            if not isinstance(scheduler, ReduceLROnPlateau):
                scheduler.step(0)
        elif isinstance(scheduler, ReduceLROnPlateau):
            scheduler.step(monitor)
        else:
            scheduler.step()
        if isinstance(scheduler, ReduceLROnPlateau):
            # ReduceLROnPlateau has no get_last_lr(); read from the optimizer
            self._last_lr = [g['lr'] for g in scheduler.optimizer.param_groups]
        else:
            self._last_lr = scheduler.get_last_lr()
```
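With that subclass, a manual loop would drive it like this (a sketch; `train_step` and `maybe_validate` are made-up helpers):

```python
scheduler = SequentialLR2(optimizer, schedulers=[warmup, plateau],
                          milestones=[warmup_steps])
for step, batch in enumerate(train_loader):
    train_step(batch)                # made-up helper: forward/backward/optimizer.step()
    val_loss = maybe_validate(step)  # made-up helper: returns the monitored metric
    # monitor may stay None during warmup, but must be a real number
    # once the ReduceLROnPlateau phase is reached
    scheduler.step(monitor=val_loss)
```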
  2. From PyTorch Lightning, you have to return a dict with your scheduler (the full hook is sketched right after this list):

```python
lr_scheduler = {
    'scheduler': sequentiallr_scheduler,
    'interval': 'step',  # 'step' is necessary here since LambdaLR may work at the step level
    'monitor': plt_params.monitor,
    'strict': True,
    'name': 'SequentialLR',
    'reduce_on_plateau': True,
}
```
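For context, this dict is what I return from configure_optimizers (a sketch; the optimizer and the dict above are assumed to be built beforehand):

```python
def configure_optimizers(self):
    # `optimizer` and `lr_scheduler` built as above; shown out of scope
    # here for brevity, normally you would construct them in this hook
    return {'optimizer': optimizer, 'lr_scheduler': lr_scheduler}
```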

The line with 'reduce_on_plateau': True causes an issue: if it is True, then during the warmup the monitored metric does not exist yet, which raises an error; if it is False, the second phase with ReduceLROnPlateau never triggers.

Is there a workaround for this issue?

Also, on another note: since I use 'interval': 'step', I guess I have to take that into account and multiply the patience of the ReduceLROnPlateau by the number of batches in the dataloader.
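Something like this, if I understand patience correctly (a sketch; the numbers are placeholders):

```python
# patience is counted per scheduler.step() call, so with
# 'interval': 'step' it is measured in batches, not epochs
patience_epochs = 10                     # the patience I actually want, in epochs
steps_per_epoch = len(train_dataloader)  # number of batches per epoch
plateau = ReduceLROnPlateau(optimizer, mode='min', factor=0.1,
                            patience=patience_epochs * steps_per_epoch)
```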