EvaluationEpochLoop

class pytorch_lightning.loops.epoch.EvaluationEpochLoop[source]

Bases: pytorch_lightning.loops.loop.Loop

This is the loop performing the evaluation.

It mainly loops over the given dataloader and runs the validation or test step (depending on the trainer’s current state).
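The control flow this class implements can be sketched without the library. The sketch below is a minimal, library-free stand-in (the class `SimpleEvalLoop` and its toy "evaluation step" are hypothetical, not the real implementation) showing the hook order: `reset` → `on_run_start` → repeated `advance` until `done` (or `StopIteration`) → `on_run_end`:

```python
from collections import OrderedDict


class SimpleEvalLoop:
    """Hypothetical stand-in mimicking the hook order of an evaluation epoch loop."""

    def __init__(self):
        self.batches_processed = 0
        self.outputs = []

    @property
    def done(self):
        # Mirrors the `done` property: stop once the batch budget is exhausted.
        return self.batches_processed >= self.dl_max_batches

    def reset(self):
        self.batches_processed = 0
        self.outputs = []

    def on_run_start(self, data_fetcher, dl_max_batches, kwargs):
        # Store the passed arguments on the loop's state.
        self.data_fetcher = iter(data_fetcher)
        self.dl_max_batches = dl_max_batches
        self.kwargs = kwargs

    def advance(self):
        batch = next(self.data_fetcher)
        if batch is None:
            # Mirrors the documented behavior: a None batch raises StopIteration.
            raise StopIteration
        # The "evaluation step" is a toy transformation here.
        self.outputs.append(batch * 2)
        self.batches_processed += 1

    def on_run_end(self):
        # Return the outputs of the whole run.
        return self.outputs

    def run(self, data_fetcher, dl_max_batches, kwargs):
        self.reset()
        self.on_run_start(data_fetcher, dl_max_batches, kwargs)
        while not self.done:
            try:
                self.advance()
            except StopIteration:
                break
        return self.on_run_end()


loop = SimpleEvalLoop()
result = loop.run([1, 2, 3], dl_max_batches=2, kwargs=OrderedDict())
print(result)  # only 2 of the 3 batches are processed: [2, 4]
```

The real loop delegates the per-batch work to the trainer's validation or test step; the sketch only preserves the sequencing of the hooks documented below.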

advance(data_fetcher, dl_max_batches, kwargs)[source]

Calls the evaluation step with the corresponding hooks and updates the logger connector.

Parameters:
  • data_fetcher (AbstractDataFetcher) – iterator over the dataloader

  • dl_max_batches (int) – maximum number of batches the dataloader can produce

  • kwargs (OrderedDict) – the kwargs passed down to the hooks

Raises:

StopIteration – If the current batch is None

Return type:

None

on_load_checkpoint(state_dict)[source]

Called when loading a model checkpoint; use this hook to restore the loop's state.

Return type:

None

on_run_end()[source]

Returns the outputs of the whole run.

Return type:

List[Union[Tensor, Dict[str, Any]]]

on_run_start(data_fetcher, dl_max_batches, kwargs)[source]

Adds the passed arguments to the loop’s state if necessary.

Parameters:
  • data_fetcher (AbstractDataFetcher) – the current data_fetcher wrapping the dataloader

  • dl_max_batches (int) – maximum number of batches the dataloader can produce

  • kwargs (OrderedDict) – the kwargs passed down to the hooks

Return type:

None

on_save_checkpoint()[source]

Called when saving a model checkpoint; use this hook to persist the loop's state.

Return type:

Dict

Returns:

The current loop state.
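Together, on_save_checkpoint and on_load_checkpoint round-trip the loop's state through a checkpoint dict. A minimal sketch of that round-trip, assuming a hypothetical `_seen_batch_indices` attribute (the real loop persists different internal state):

```python
class TinyLoop:
    """Hypothetical minimal loop illustrating checkpoint round-tripping."""

    def __init__(self):
        self._seen_batch_indices = []

    def on_save_checkpoint(self):
        # Persist whatever internal state must survive a restart.
        return {"seen_batch_indices": list(self._seen_batch_indices)}

    def on_load_checkpoint(self, state_dict):
        # Restore that state when the checkpoint is loaded.
        self._seen_batch_indices = list(state_dict["seen_batch_indices"])


src = TinyLoop()
src._seen_batch_indices = [0, 1, 2]
ckpt = src.on_save_checkpoint()

dst = TinyLoop()
dst.on_load_checkpoint(ckpt)
print(dst._seen_batch_indices)  # [0, 1, 2]
```

Copying into plain lists on both save and load keeps the checkpoint dict decoupled from the live loop object, so later mutations of one do not leak into the other.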

reset()[source]

Resets the loop’s internal state.

Return type:

None

teardown()[source]

Use this hook to release memory and other resources at the end of the run.

Return type:

None

property done: bool

Returns True once the number of batches processed in the current iteration reaches the number of batches the dataloader can produce.
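A minimal illustration of how such a `done` property might be computed (the attribute names `batches_processed` and `dl_max_batches` are assumptions for illustration, not the real implementation):

```python
class BatchCounter:
    """Hypothetical holder of the two quantities `done` compares."""

    def __init__(self, dl_max_batches):
        self.dl_max_batches = dl_max_batches
        self.batches_processed = 0

    @property
    def done(self):
        # True once every available batch has been consumed.
        return self.batches_processed >= self.dl_max_batches


c = BatchCounter(dl_max_batches=3)
print(c.done)  # False
c.batches_processed = 3
print(c.done)  # True
```

Because `done` is a property, the loop re-evaluates it on every iteration rather than caching a stale flag.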