Callback
- class pytorch_lightning.callbacks.Callback
Bases: abc.ABC
Abstract base class used to build new callbacks.
Subclass this class and override any of the relevant hooks.
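For orientation, a minimal sketch of a custom callback; the class name and printed messages are illustrative, not part of the library:

```python
import pytorch_lightning as pl


class PrintingCallback(pl.Callback):
    """Hypothetical callback that prints simple progress messages."""

    def on_init_end(self, trainer):
        print("Trainer is initialized, model not yet attached.")

    def on_train_epoch_end(self, trainer, pl_module):
        print(f"Epoch {trainer.current_epoch} finished.")


# Callbacks are passed to the Trainer on construction:
trainer = pl.Trainer(callbacks=[PrintingCallback()])
```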
- on_after_backward(trainer, pl_module)
Called after loss.backward() and before optimizers are stepped.
- Return type: None
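Because this hook runs while gradients are still populated, it is a natural place for gradient inspection. A minimal sketch, with a hypothetical class name and NaN check:

```python
import torch
import pytorch_lightning as pl


class GradInspector(pl.Callback):
    """Hypothetical: flag NaN gradients before the optimizer consumes them."""

    def on_after_backward(self, trainer, pl_module):
        for name, param in pl_module.named_parameters():
            if param.grad is not None and torch.isnan(param.grad).any():
                print(f"NaN gradient detected in {name}")
```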
- on_before_accelerator_backend_setup(trainer, pl_module)
Called before the accelerator is set up.
- Return type: None
- on_before_optimizer_step(trainer, pl_module, optimizer, opt_idx)
Called before optimizer.step().
- Return type: None
- on_before_zero_grad(trainer, pl_module, optimizer)
Called before optimizer.zero_grad().
- Return type: None
- on_configure_sharded_model(trainer, pl_module)
Called before configure_sharded_model is called.
- Return type: None
- on_epoch_end(trainer, pl_module)
Called when a train, validation, or test epoch ends.
- Return type: None
- on_epoch_start(trainer, pl_module)
Called when a train, validation, or test epoch begins.
- Return type: None
- on_exception(trainer, pl_module, exception)
Called when any trainer execution is interrupted by an exception.
- Return type: None
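A common use is persisting work before the run dies. A minimal sketch; the class name and checkpoint path are illustrative, and the exception still propagates after the callbacks run:

```python
import pytorch_lightning as pl


class EmergencyCheckpoint(pl.Callback):
    """Hypothetical: persist a checkpoint when a run is interrupted."""

    def on_exception(self, trainer, pl_module, exception):
        # Illustrative filename; trainer.save_checkpoint writes a full checkpoint.
        trainer.save_checkpoint("on_exception.ckpt")
```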
- on_init_end(trainer)
Called when the trainer initialization ends; the model has not yet been set.
- Return type: None
- on_init_start(trainer)
Called when the trainer initialization begins; the model has not yet been set.
- Return type: None
- on_keyboard_interrupt(trainer, pl_module)
Deprecated since version v1.5: This callback hook was deprecated in v1.5 in favor of on_exception and will be removed in v1.7.
Called when any trainer execution is interrupted by KeyboardInterrupt.
- Return type: None
- on_load_checkpoint(trainer, pl_module, callback_state)
Called when loading a model checkpoint; use to reload state.
- Parameters
pl_module (LightningModule) – the current LightningModule instance.
callback_state (Dict[str, Any]) – the callback state returned by on_save_checkpoint.
Note
on_load_checkpoint won't be called with an undefined state. If your on_load_checkpoint hook behavior doesn't rely on a state, you will still need to override on_save_checkpoint to return a dummy state.
- Return type: None
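A minimal sketch of a stateful callback that round-trips its state through a checkpoint; the class name and counter are illustrative. As the note above says, on_save_checkpoint must return a (possibly dummy) state for on_load_checkpoint to fire on resume:

```python
import pytorch_lightning as pl


class BatchCounter(pl.Callback):
    """Hypothetical stateful callback that survives checkpoint resume."""

    def __init__(self):
        self.batches_seen = 0

    def on_train_batch_end(self, trainer, pl_module, outputs, batch, batch_idx, unused=0):
        self.batches_seen += 1

    def on_save_checkpoint(self, trainer, pl_module, checkpoint):
        # Returning a state here is what makes on_load_checkpoint fire on resume.
        return {"batches_seen": self.batches_seen}

    def on_load_checkpoint(self, trainer, pl_module, callback_state):
        self.batches_seen = callback_state["batches_seen"]
```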
- on_predict_batch_end(trainer, pl_module, outputs, batch, batch_idx, dataloader_idx)
Called when the predict batch ends.
- Return type: None
- on_predict_batch_start(trainer, pl_module, batch, batch_idx, dataloader_idx)
Called when the predict batch begins.
- Return type: None
- on_predict_epoch_end(trainer, pl_module, outputs)
Called when the predict epoch ends.
- Return type: None
- on_predict_epoch_start(trainer, pl_module)
Called when the predict epoch begins.
- Return type: None
- on_pretrain_routine_end(trainer, pl_module)
Called when the pretrain routine ends.
- Return type: None
- on_pretrain_routine_start(trainer, pl_module)
Called when the pretrain routine begins.
- Return type: None
- on_sanity_check_end(trainer, pl_module)
Called when the validation sanity check ends.
- Return type: None
- on_sanity_check_start(trainer, pl_module)
Called when the validation sanity check starts.
- Return type: None
- on_save_checkpoint(trainer, pl_module, checkpoint)
Called when saving a model checkpoint; use to persist state.
- Parameters
pl_module (LightningModule) – the current LightningModule instance.
checkpoint (Dict[str, Any]) – the checkpoint dictionary that will be saved.
- Return type: dict
- Returns: The callback state.
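Beyond returning its own state, a callback receives the full checkpoint dictionary here, so it can also record extra metadata. A sketch assuming a hypothetical CheckpointStamper class:

```python
import pytorch_lightning as pl


class CheckpointStamper(pl.Callback):
    """Hypothetical: write extra metadata into the checkpoint dictionary itself."""

    def on_save_checkpoint(self, trainer, pl_module, checkpoint):
        # `checkpoint` is the full dictionary that will be written to disk;
        # keys added here are saved alongside the model weights.
        checkpoint["stamp"] = {"global_step": trainer.global_step}
        # The return value becomes this callback's entry under checkpoint["callbacks"].
        return {}
```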
- on_test_batch_end(trainer, pl_module, outputs, batch, batch_idx, dataloader_idx)
Called when the test batch ends.
- Return type: None
- on_test_batch_start(trainer, pl_module, batch, batch_idx, dataloader_idx)
Called when the test batch begins.
- Return type: None
- on_train_batch_end(trainer, pl_module, outputs, batch, batch_idx, unused=0)
Called when the train batch ends.
- Return type: None
- on_train_batch_start(trainer, pl_module, batch, batch_idx, unused=0)
Called when the train batch begins.
- Return type: None
- on_train_epoch_end(trainer, pl_module)
Called when the train epoch ends.
To access all batch outputs at the end of the epoch, either:
- Implement training_epoch_end in the LightningModule and access outputs via the module, or
- Cache data across train batch hooks inside the callback implementation to post-process in this hook (see the sketch below).
- Return type: None
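A sketch of the second option; the class name is illustrative, and it assumes training_step returns a dict containing a "loss" tensor:

```python
import torch
import pytorch_lightning as pl


class MeanLossReporter(pl.Callback):
    """Hypothetical: cache per-batch outputs and post-process them at epoch end."""

    def __init__(self):
        self.batch_losses = []

    def on_train_batch_end(self, trainer, pl_module, outputs, batch, batch_idx, unused=0):
        # Assumes training_step returned a dict with a "loss" tensor.
        self.batch_losses.append(outputs["loss"].detach())

    def on_train_epoch_end(self, trainer, pl_module):
        if self.batch_losses:
            mean_loss = torch.stack(self.batch_losses).mean()
            print(f"mean train loss: {mean_loss.item():.4f}")
        self.batch_losses.clear()
```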
- on_validation_batch_end(trainer, pl_module, outputs, batch, batch_idx, dataloader_idx)
Called when the validation batch ends.
- Return type: None
- on_validation_batch_start(trainer, pl_module, batch, batch_idx, dataloader_idx)
Called when the validation batch begins.
- Return type: None
- on_validation_epoch_start(trainer, pl_module)
Called when the validation epoch begins.
- Return type: None
- on_validation_start(trainer, pl_module)
Called when the validation loop begins.
- Return type: None
- setup(trainer, pl_module, stage=None)
Called when fit, validate, test, predict, or tune begins.
- Return type: None
- teardown(trainer, pl_module, stage=None)
Called when fit, validate, test, predict, or tune ends.
- Return type: None
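setup and teardown pair naturally for per-stage resource management. A minimal sketch; the class name, file path, and stage check are illustrative:

```python
import pytorch_lightning as pl


class StageLogger(pl.Callback):
    """Hypothetical: acquire a resource per stage and release it in teardown."""

    def setup(self, trainer, pl_module, stage=None):
        # stage is e.g. "fit", "validate", "test", or "predict"
        if stage == "fit":
            self.log_file = open("fit_events.log", "a")

    def teardown(self, trainer, pl_module, stage=None):
        if stage == "fit":
            self.log_file.close()
```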
- property state_key: str
Identifier for the state of the callback.
Used to store and retrieve a callback's state from the checkpoint dictionary by checkpoint["callbacks"][state_key]. Implementations of a callback need to provide a unique state key if 1) the callback has state and 2) it is desired to maintain the state of multiple instances of that callback.
- Return type: str
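A sketch of such an override; the class name, constructor argument, and key format are illustrative:

```python
import pytorch_lightning as pl


class ThresholdMonitor(pl.Callback):
    """Hypothetical stateful callback that can be added to a Trainer more than once."""

    def __init__(self, monitor: str):
        self.monitor = monitor
        self.best = None

    @property
    def state_key(self) -> str:
        # Encode the configuration into the key so each instance gets its
        # own slot in checkpoint["callbacks"].
        return f"ThresholdMonitor[monitor={self.monitor}]"
```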