LearningRateMonitor
- class lightning.pytorch.callbacks.LearningRateMonitor(logging_interval=None, log_momentum=False)
  Bases: Callback
Automatically monitors and logs the learning rate for learning rate schedulers during training.
- Parameters
  - logging_interval (Optional[str]) – set to 'epoch' or 'step' to log the lr of all optimizers at the same interval, set to None to log at individual intervals according to the interval key of each scheduler. Defaults to None.
  - log_momentum (bool) – option to also log the momentum values of the optimizer, if the optimizer has the momentum or betas attribute. Defaults to False.
- Raises
  MisconfigurationException – If logging_interval is none of "step", "epoch", or None.
Example:

>>> from lightning.pytorch import Trainer
>>> from lightning.pytorch.callbacks import LearningRateMonitor
>>> lr_monitor = LearningRateMonitor(logging_interval='step')
>>> trainer = Trainer(callbacks=[lr_monitor])
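If logging_interval is left as None, the monitor instead follows each scheduler's own interval key. A minimal sketch (assuming the model, data, and schedulers are defined elsewhere) that also enables momentum logging:

from lightning.pytorch import Trainer
from lightning.pytorch.callbacks import LearningRateMonitor

# Log at each scheduler's own interval and additionally record momentum/betas
# for optimizers that expose them.
lr_monitor = LearningRateMonitor(logging_interval=None, log_momentum=True)
trainer = Trainer(callbacks=[lr_monitor])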
Logging names are automatically determined based on the optimizer class name. In case of multiple optimizers of the same type, they will be named Adam, Adam-1, etc. If an optimizer has multiple parameter groups, they will be named Adam/pg1, Adam/pg2, etc. To control naming, pass in a name keyword in the construction of the learning rate schedulers. A name keyword can also be used for parameter groups in the construction of the optimizer.

Example:
def configure_optimizers(self):
    optimizer = torch.optim.Adam(...)
    lr_scheduler = {
        'scheduler': torch.optim.lr_scheduler.LambdaLR(optimizer, ...),
        'name': 'my_logging_name'
    }
    return [optimizer], [lr_scheduler]
Example:

def configure_optimizers(self):
    optimizer = torch.optim.SGD(
        [{
            'params': [p for p in self.parameters()],
            'name': 'my_parameter_group_name'
        }],
        lr=0.1
    )
    lr_scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, ...)
    return [optimizer], [lr_scheduler]
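The two patterns can also be combined in a single configure_optimizers. A minimal sketch (the group name 'backbone', the scheduler name 'backbone_lr', and the LambdaLR schedule are hypothetical, used only for illustration):

import torch

def configure_optimizers(self):
    # Parameter group with a hypothetical 'name' key, which LearningRateMonitor
    # uses when naming the logged learning rate for this group.
    optimizer = torch.optim.SGD(
        [{'params': self.parameters(), 'name': 'backbone'}],
        lr=0.1,
    )
    # Scheduler dict with a hypothetical 'name' key, used by LearningRateMonitor
    # instead of the auto-generated optimizer class name.
    lr_scheduler = {
        'scheduler': torch.optim.lr_scheduler.LambdaLR(
            optimizer, lr_lambda=lambda epoch: 0.95 ** epoch
        ),
        'name': 'backbone_lr',
    }
    return [optimizer], [lr_scheduler]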
- on_train_batch_start(trainer, *args, **kwargs)
Called when the train batch begins.
- Return type
  None