TrainingEpochLoop
- class pytorch_lightning.loops.epoch.TrainingEpochLoop(min_steps=None, max_steps=-1)[source]
Bases:
pytorch_lightning.loops.base.Loop[List[List[Union[Dict[int, Dict[str, Any]], Dict[str, Any]]]]]
Runs over all batches in a dataloader (one epoch).
- Parameters
min_steps – the minimum number of steps (batches) to process; None means no minimum.
max_steps – the maximum number of steps (batches) to process; -1 means no maximum.
- advance(data_fetcher)[source]
Runs a single training batch.
- Raises
StopIteration – When the epoch is canceled by the user returning -1
- Return type
None
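The min_steps/max_steps bounds and the StopIteration behavior of advance can be sketched with a simplified stand-in. SketchEpochLoop below is hypothetical and not part of the pytorch_lightning API; it only illustrates the contract described above:

```python
class SketchEpochLoop:
    """Hypothetical, simplified stand-in for TrainingEpochLoop."""

    def __init__(self, min_steps=None, max_steps=-1):
        # min_steps only matters together with early stopping; it is kept
        # here purely for signature parity with the real class.
        self.min_steps = min_steps
        self.max_steps = max_steps
        self.global_step = 0

    @property
    def done(self):
        # max_steps=-1 means "no step limit", mirroring the default.
        return self.max_steps != -1 and self.global_step >= self.max_steps

    def advance(self, data_iter):
        # next() raises StopIteration when the dataloader is exhausted,
        # which ends the epoch.
        next(data_iter)
        self.global_step += 1

    def run(self, dataloader):
        data_iter = iter(dataloader)
        while not self.done:
            try:
                self.advance(data_iter)
            except StopIteration:
                break
        return self.global_step


loop = SketchEpochLoop(max_steps=3)
print(loop.run(range(10)))  # stops after 3 steps even though 10 batches exist
```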
- connect(batch_loop=None, val_loop=None)[source]
Optionally connect a custom batch or validation loop to this training epoch loop.
- Return type
None
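The optional-replacement behavior of connect can be illustrated with a small sketch: an omitted argument leaves the existing sub-loop in place. The class and the default attribute values below are hypothetical, used only to show the pattern:

```python
class SketchTrainingEpochLoop:
    """Hypothetical container holding two replaceable sub-loops."""

    def __init__(self):
        self.batch_loop = "default_batch_loop"
        self.val_loop = "default_val_loop"

    def connect(self, batch_loop=None, val_loop=None):
        # Only replace a sub-loop when a replacement is actually passed,
        # so a partial call leaves the other loop untouched.
        if batch_loop is not None:
            self.batch_loop = batch_loop
        if val_loop is not None:
            self.val_loop = val_loop


loop = SketchTrainingEpochLoop()
loop.connect(val_loop="custom_val_loop")
print(loop.batch_loop, loop.val_loop)  # default_batch_loop custom_val_loop
```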
- on_load_checkpoint(state_dict)[source]
Called when loading a model checkpoint, use to reload loop state.
- Return type
None
- on_run_end()[source]
Hook to be called at the end of the run. Its return argument is returned from run.
- on_run_start(data_fetcher)[source]
Hook to be called as the first thing after entering run (except the state reset). Accepts all arguments passed to run.
- Return type
None
- on_save_checkpoint()[source]
Called when saving a model checkpoint, use to persist loop state.
- Return type
Dict
- Returns
The current loop state.
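Together, on_save_checkpoint and on_load_checkpoint form a round-trip for loop progress: the dict returned by the former is the state_dict later passed to the latter. A minimal sketch of that contract (the class and the state key are illustrative, not the real checkpoint schema):

```python
class SketchLoop:
    """Hypothetical loop tracking one piece of resumable state."""

    def __init__(self):
        self.global_step = 0

    def on_save_checkpoint(self):
        # Persist loop-local progress so training can resume where it left off.
        return {"global_step": self.global_step}

    def on_load_checkpoint(self, state_dict):
        # Restore the progress saved above.
        self.global_step = state_dict["global_step"]


src = SketchLoop()
src.global_step = 42
state = src.on_save_checkpoint()

dst = SketchLoop()
dst.on_load_checkpoint(state)
print(dst.global_step)  # 42
```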
- update_lr_schedulers(interval, update_plateau_schedulers)[source]
Updates the LR schedulers based on the given interval.
- Return type
None
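The interval/update_plateau_schedulers filtering can be sketched as a free function. The scheduler-config dict layout below is an assumption made for illustration; the key idea is that only schedulers configured for the current interval are stepped, and plateau-style schedulers are handled in a separate pass because they need a monitored metric value:

```python
def sketch_update_lr_schedulers(scheduler_cfgs, interval,
                                update_plateau_schedulers, monitor_val=None):
    # Step only schedulers configured for this interval ("step" or "epoch").
    # The update_plateau_schedulers flag selects which pass this call is:
    # plateau schedulers need a metric value, so they are stepped separately.
    for cfg in scheduler_cfgs:
        if cfg["interval"] != interval:
            continue
        if cfg["reduce_on_plateau"] != update_plateau_schedulers:
            continue
        if cfg["reduce_on_plateau"]:
            cfg["scheduler"].step(monitor_val)
        else:
            cfg["scheduler"].step()


class CountingScheduler:
    """Toy scheduler that records how often it was stepped."""

    def __init__(self):
        self.steps = 0

    def step(self, metric=None):
        self.steps += 1


cfgs = [
    {"scheduler": CountingScheduler(), "interval": "step",
     "reduce_on_plateau": False},
    {"scheduler": CountingScheduler(), "interval": "epoch",
     "reduce_on_plateau": True},
]
sketch_update_lr_schedulers(cfgs, interval="step", update_plateau_schedulers=False)
print(cfgs[0]["scheduler"].steps, cfgs[1]["scheduler"].steps)  # 1 0
```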