LearningRateMonitor
class lightning.pytorch.callbacks.LearningRateMonitor(logging_interval=None, log_momentum=False)[source]

Bases: lightning.pytorch.callbacks.callback.Callback

Automatically monitors and logs the learning rate of learning rate schedulers during training.

Parameters

- logging_interval (Optional[str]) – set to 'epoch' or 'step' to log the lr of all optimizers at the same interval, or set to None to log at the individual interval given by the interval key of each scheduler (illustrated in the short sketch below). Defaults to None.
- log_momentum (bool) – option to also log the momentum values of the optimizer, if the optimizer has the momentum or betas attribute. Defaults to False.

Raises

- MisconfigurationException – If logging_interval is none of "step", "epoch", or None.
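A minimal sketch (not part of the official docstring) of the three logging_interval settings; the variable names below are illustrative only:

    from lightning.pytorch.callbacks import LearningRateMonitor

    # log the lr of every optimizer at every training step
    per_step = LearningRateMonitor(logging_interval='step')

    # log once per epoch, and also log momentum/betas where the optimizer exposes them
    per_epoch = LearningRateMonitor(logging_interval='epoch', log_momentum=True)

    # default: each scheduler is logged at its own 'interval' ('step' or 'epoch')
    per_scheduler = LearningRateMonitor()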
Example:

    >>> from lightning.pytorch import Trainer
    >>> from lightning.pytorch.callbacks import LearningRateMonitor
    >>> lr_monitor = LearningRateMonitor(logging_interval='step')
    >>> trainer = Trainer(callbacks=[lr_monitor])

Logging names are automatically determined based on the optimizer class name. In case of multiple optimizers of the same type, they will be named Adam, Adam-1, etc. If an optimizer has multiple parameter groups, they will be named Adam/pg1, Adam/pg2, etc. To control naming, pass a name keyword in the construction of the learning rate schedulers. A name keyword can also be used for parameter groups in the construction of the optimizer.

Example:

    def configure_optimizers(self):
        optimizer = torch.optim.Adam(...)
        lr_scheduler = {
            'scheduler': torch.optim.lr_scheduler.LambdaLR(optimizer, ...),
            'name': 'my_logging_name'
        }
        return [optimizer], [lr_scheduler]

Example:

    def configure_optimizers(self):
        optimizer = torch.optim.SGD(
            [{
                'params': [p for p in self.parameters()],
                'name': 'my_parameter_group_name'
            }],
            lr=0.1
        )
        lr_scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, ...)
        return [optimizer], [lr_scheduler]
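For context, here is a minimal end-to-end sketch, not taken from the official docstring: the DemoModule class, the random tensors used as data, and the scheduler name 'my_logging_name' are illustrative assumptions. It attaches the callback with log_momentum=True so momentum is recorded alongside the learning rate.

    import torch
    from torch.utils.data import DataLoader, TensorDataset

    from lightning.pytorch import LightningModule, Trainer
    from lightning.pytorch.callbacks import LearningRateMonitor


    class DemoModule(LightningModule):
        # DemoModule is a hypothetical toy model used only for illustration
        def __init__(self):
            super().__init__()
            self.layer = torch.nn.Linear(8, 1)

        def training_step(self, batch, batch_idx):
            x, y = batch
            return torch.nn.functional.mse_loss(self.layer(x), y)

        def configure_optimizers(self):
            optimizer = torch.optim.SGD(self.parameters(), lr=0.1, momentum=0.9)
            lr_scheduler = {
                # halve the lr every epoch; the 'name' key controls the logged name
                'scheduler': torch.optim.lr_scheduler.StepLR(optimizer, step_size=1, gamma=0.5),
                'name': 'my_logging_name',
            }
            return [optimizer], [lr_scheduler]


    if __name__ == '__main__':
        dataset = TensorDataset(torch.randn(64, 8), torch.randn(64, 1))
        lr_monitor = LearningRateMonitor(logging_interval='epoch', log_momentum=True)
        trainer = Trainer(max_epochs=3, callbacks=[lr_monitor])
        trainer.fit(DemoModule(), DataLoader(dataset, batch_size=16))

Because the scheduler config carries a name key, the learning rate is logged under that name rather than under the auto-generated name derived from the SGD class.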
on_train_batch_start(trainer, *args, **kwargs)[source]

Called when the train batch begins.

Return type