TrainingEpochLoop

class pytorch_lightning.loops.epoch.TrainingEpochLoop(min_steps=None, max_steps=-1)[source]

Bases: pytorch_lightning.loops.loop.Loop[List[List[Union[Dict[int, Dict[str, Any]], Dict[str, Any]]]]]

Runs over all batches in a dataloader (one epoch).

Parameters:
  • min_steps (Optional[int]) – The minimum number of steps (batches) to process

  • max_steps (int) – The maximum number of steps (batches) to process
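
As a quick orientation, the sketch below subclasses this loop and attaches it to a Trainer. It is a minimal sketch, assuming the parent FitLoop of the same Lightning release accepts the replacement epoch loop through its own connect method; VerboseEpochLoop is a hypothetical name.

    import pytorch_lightning as pl
    from pytorch_lightning.loops import TrainingEpochLoop

    class VerboseEpochLoop(TrainingEpochLoop):
        """Hypothetical subclass that announces each epoch run."""

        def on_run_start(self, data_fetcher):
            print(f"starting epoch at global batch {self.total_batch_idx + 1}")
            return super().on_run_start(data_fetcher)

    trainer = pl.Trainer(max_epochs=3)
    # assumption: FitLoop.connect accepts the replacement epoch loop
    trainer.fit_loop.connect(VerboseEpochLoop(min_steps=None, max_steps=-1))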

advance(data_fetcher)[source]

Runs a single training batch.

Raises:

StopIteration – When the epoch is canceled by the user returning -1 (for example from the on_train_batch_start hook)

Return type:

None
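
Because advance processes exactly one batch, overriding it and delegating to the parent implementation is a convenient place for per-batch instrumentation. A minimal sketch; the timing logic is purely illustrative:

    import time

    from pytorch_lightning.loops import TrainingEpochLoop

    class InstrumentedEpochLoop(TrainingEpochLoop):
        """Hypothetical subclass that times each training batch."""

        def advance(self, data_fetcher):
            start = time.monotonic()
            super().advance(data_fetcher)  # run the standard single-batch step
            print(f"batch {self.batch_idx} took {time.monotonic() - start:.3f}s")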

connect(batch_loop=None, val_loop=None)[source]

Optionally connect a custom batch or validation loop to this training epoch loop.

Return type:

None
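
For example, a customized batch loop can be swapped in before training starts. In this sketch, CustomBatchLoop is a hypothetical stand-in for a user-defined subclass of the default TrainingBatchLoop; the val_loop argument is left at its default:

    from pytorch_lightning.loops import TrainingBatchLoop, TrainingEpochLoop

    class CustomBatchLoop(TrainingBatchLoop):
        """Hypothetical batch loop with user-defined behaviour."""

    epoch_loop = TrainingEpochLoop(min_steps=None, max_steps=1000)
    epoch_loop.connect(batch_loop=CustomBatchLoop())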

on_advance_end()[source]

Hook to be called after each call to advance.

Return type:

None

on_load_checkpoint(state_dict)[source]

Called when loading a model checkpoint; use this hook to reload loop state.

Return type:

None

on_run_end()[source]

Hook to be called at the end of the run.

Its return value is returned by run.

Return type:

List[List[Union[Dict[int, Dict[str, Any]], Dict[str, Any]]]]

on_run_start(data_fetcher)[source]

Hook to be called as the first thing after entering run (only the state reset runs before it).

Accepts all arguments passed to run.

Return type:

None

on_save_checkpoint()[source]

Called when saving a model checkpoint; use this hook to persist loop state.

Return type:

Dict

Returns:

The current loop state.
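
Together with on_load_checkpoint above, this hook lets a subclass round-trip extra state through the checkpoint. A minimal sketch, where seen_tokens is a hypothetical counter the subclass would maintain elsewhere:

    from pytorch_lightning.loops import TrainingEpochLoop

    class TokenCountingEpochLoop(TrainingEpochLoop):
        """Hypothetical subclass that persists an extra counter in checkpoints."""

        def __init__(self, min_steps=None, max_steps=-1):
            super().__init__(min_steps=min_steps, max_steps=max_steps)
            self.seen_tokens = 0  # hypothetical state, updated elsewhere in the loop

        def on_save_checkpoint(self):
            state_dict = super().on_save_checkpoint()
            state_dict["seen_tokens"] = self.seen_tokens
            return state_dict

        def on_load_checkpoint(self, state_dict):
            super().on_load_checkpoint(state_dict)
            self.seen_tokens = state_dict.get("seen_tokens", 0)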

reset()[source]

Resets the internal state of the loop for a new run.

Return type:

None

teardown()[source]

Use this hook to release memory and other resources.

Return type:

None

update_lr_schedulers(interval, update_plateau_schedulers)[source]

Updates the learning rate schedulers configured for the given interval ("step" or "epoch"). The update_plateau_schedulers flag selects whether this call updates plateau (metric-monitoring) schedulers or regular ones.

Return type:

None

property batch_idx: int

Returns the current batch index (within this epoch).

property done: bool

True when the loop should stop: all batches in the epoch were processed, max_steps was reached, or the trainer was signaled to stop.

property total_batch_idx: int

Returns the current batch index (across epochs).
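
Both counters can be read off a Trainer through its fit loop, which is handy for progress reporting. A small sketch, assuming the surrounding FitLoop of the same Lightning release exposes this loop as epoch_loop:

    import pytorch_lightning as pl

    def report_progress(trainer: pl.Trainer) -> None:
        """Print where the training epoch loop currently stands."""
        epoch_loop = trainer.fit_loop.epoch_loop
        print(f"batch {epoch_loop.batch_idx} of this epoch, "
              f"{epoch_loop.total_batch_idx + 1} batches overall")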