TrainingEpochLoop
- class pytorch_lightning.loops.epoch.TrainingEpochLoop(min_steps=0, max_steps=-1)
Bases: pytorch_lightning.loops.base.Loop[List[List[Union[Dict[int, Dict[str, Any]], Dict[str, Any]]]]]

Runs over all batches in a dataloader (one epoch).
- Parameters
min_steps – Forces training to a minimum of this many steps
max_steps – Stops training after this number of steps
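For orientation, a minimal sketch of where this loop lives at runtime, assuming the default wiring of recent PyTorch Lightning versions in which the Trainer's fit loop owns one TrainingEpochLoop instance:

```python
import pytorch_lightning as pl
from pytorch_lightning.loops.epoch import TrainingEpochLoop

trainer = pl.Trainer(max_epochs=3)

# Assumption: the fit loop exposes this epoch loop as `epoch_loop`,
# as in recent PyTorch Lightning releases.
epoch_loop = trainer.fit_loop.epoch_loop
print(epoch_loop.min_steps, epoch_loop.max_steps)

# A replacement loop with explicit step limits can be attached instead
# (assumes FitLoop.connect accepts the epoch loop, mirroring the
# connect() method documented below).
trainer.fit_loop.connect(TrainingEpochLoop(min_steps=0, max_steps=1_000))
```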
- advance(*args, **kwargs)
Runs a single training batch.
- Parameters
dataloader_iter – the iterator over the dataloader producing the new batch
- Raises
StopIteration – When the epoch is canceled by the user returning -1
- Return type
None
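The StopIteration path above is driven by user hooks: returning -1 from on_train_batch_start cancels the remainder of the epoch. A minimal sketch (should_stop_epoch is a hypothetical flag used only for illustration):

```python
import pytorch_lightning as pl

class EarlyExitModule(pl.LightningModule):
    def on_train_batch_start(self, batch, batch_idx):
        # Returning -1 tells the TrainingEpochLoop to end the current
        # epoch early (it raises StopIteration internally).
        # `should_stop_epoch` is a hypothetical flag for illustration.
        if getattr(self, "should_stop_epoch", False):
            return -1
```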
- connect(batch_loop=None, val_loop=None)
Optionally connect a custom batch or validation loop to this training epoch loop.
- Return type
None
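A sketch of connecting a customized batch loop, assuming the default TrainingBatchLoop is importable from pytorch_lightning.loops.batch (MyBatchLoop is hypothetical):

```python
from pytorch_lightning.loops.batch import TrainingBatchLoop
from pytorch_lightning.loops.epoch import TrainingEpochLoop

class MyBatchLoop(TrainingBatchLoop):
    """Hypothetical batch loop that adds a print around each batch."""

    def advance(self, *args, **kwargs):
        print("running one training batch")
        return super().advance(*args, **kwargs)

epoch_loop = TrainingEpochLoop(min_steps=0, max_steps=-1)
epoch_loop.connect(batch_loop=MyBatchLoop())
```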
- on_advance_end()
Runs validation and checkpointing if necessary.
- Raises
StopIteration – if done evaluates to True, to finish this epoch
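Whether a given call actually runs validation depends on Trainer settings such as val_check_interval; for example:

```python
import pytorch_lightning as pl

# Validation (and any checkpointing that follows it) is triggered from
# on_advance_end four times per training epoch with this setting.
trainer = pl.Trainer(val_check_interval=0.25)
```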
- on_load_checkpoint(state_dict)
Called when loading a model checkpoint. Use to reload loop state.
- Return type
None
- on_run_end()
Calls the on_epoch_end hook.
- Return type
List[List[Union[Dict[int, Dict[str, Any]], Dict[str, Any]]]]
- Returns
The output of each training step for each optimizer
- Raises
MisconfigurationException – if train_epoch_end does not return None
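The raised exception refers to the LightningModule's epoch-end hook, which must consume the step outputs and return None. A minimal conforming sketch:

```python
import pytorch_lightning as pl

class MyModule(pl.LightningModule):
    def training_epoch_end(self, outputs):
        # `outputs` holds the collected results of each training step;
        # aggregate them here and return nothing. Returning any value
        # raises MisconfigurationException.
        avg_loss = sum(o["loss"] for o in outputs) / len(outputs)
        self.log("train_loss_epoch", avg_loss)
```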
- on_run_start(data_fetcher, **kwargs)
Hook to be called as the first thing after entering run (except the state reset).
Accepts all arguments passed to run.
- Return type
None
- on_save_checkpoint()
Called when saving a model checkpoint. Use to persist loop state.
- Return type
Dict
- Returns
The current loop state.
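Together with on_load_checkpoint above, this gives a loop its state round-trip. A sketch of persisting extra state in a subclass (CountingEpochLoop and its counter are hypothetical):

```python
from pytorch_lightning.loops.epoch import TrainingEpochLoop

class CountingEpochLoop(TrainingEpochLoop):
    """Hypothetical subclass that checkpoints a batch counter."""

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self._seen_batches = 0

    def on_save_checkpoint(self):
        state_dict = super().on_save_checkpoint()
        state_dict["seen_batches"] = self._seen_batches
        return state_dict

    def on_load_checkpoint(self, state_dict):
        super().on_load_checkpoint(state_dict)
        self._seen_batches = state_dict.get("seen_batches", 0)
```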
- update_lr_schedulers(interval, update_plateau_schedulers)
Updates the LR schedulers based on the given interval.
- Return type
None
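The interval argument corresponds to the "interval" key of the lr_scheduler configuration returned from configure_optimizers: "step" schedules are updated after every optimizer step, "epoch" schedules once per epoch. For example:

```python
import torch
import pytorch_lightning as pl

class MyModule(pl.LightningModule):
    def configure_optimizers(self):
        optimizer = torch.optim.SGD(self.parameters(), lr=0.1)
        scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=100)
        return {
            "optimizer": optimizer,
            "lr_scheduler": {
                "scheduler": scheduler,
                # Stepped by update_lr_schedulers(interval="step", ...)
                # after every optimizer step; "epoch" would defer it to
                # the end of each epoch.
                "interval": "step",
            },
        }
```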