RichProgressBar¶
- class pytorch_lightning.callbacks.RichProgressBar(refresh_rate=1, leave=False, theme=RichProgressBarTheme(description='white', progress_bar='#6206E0', progress_bar_finished='#6206E0', progress_bar_pulse='#6206E0', batch_progress='white', time='grey54', processing_speed='grey70', metrics='white'), console_kwargs=None)[source]¶
Bases: pytorch_lightning.callbacks.progress.base.ProgressBarBase
Create a progress bar with rich text formatting.
Install it with pip:
pip install rich
from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import RichProgressBar

trainer = Trainer(callbacks=RichProgressBar())
- Parameters:
  - refresh_rate¶ (int) – Determines at which rate (in number of batches) the progress bars get updated. Set it to 0 to disable the display.
  - leave¶ (bool) – Leaves the finished progress bar in the terminal at the end of the epoch. Default: False.
  - theme¶ (RichProgressBarTheme) – Contains styles used to stylize the progress bar.
  - console_kwargs¶ (Optional[Dict[str, Any]]) – Args for constructing a Console.
- Raises:
ModuleNotFoundError – If the required rich package is not installed on the device.
Note
PyCharm users will need to enable “emulate terminal” in output console option in run/debug configuration to see styled output. Reference: https://rich.readthedocs.io/en/latest/introduction.html#requirements
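For reference, a minimal sketch of a customized configuration. It assumes that RichProgressBarTheme is importable from pytorch_lightning.callbacks.progress.rich_progress and that the style strings and the force_terminal keyword (forwarded to rich's Console) are valid in your installed rich version; the colours chosen are purely illustrative.

from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import RichProgressBar
from pytorch_lightning.callbacks.progress.rich_progress import RichProgressBarTheme

# Illustrative styles only; any valid rich style string works for the theme fields.
progress_bar = RichProgressBar(
    refresh_rate=1,    # update every batch; 0 disables the display
    leave=True,        # keep finished bars in the terminal
    theme=RichProgressBarTheme(
        description="green_yellow",
        progress_bar="green1",
        progress_bar_finished="green1",
        metrics="grey82",
    ),
    console_kwargs={"force_terminal": True},  # forwarded to rich.console.Console
)

trainer = Trainer(callbacks=progress_bar)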
- enable()[source]¶
You should provide a way to enable the progress bar. The Trainer will call this in e.g. pre-training routines like the learning rate finder to temporarily enable and disable the main progress bar.
- Return type: None
- on_exception(trainer, pl_module, exception)[source]¶
Called when any trainer execution is interrupted by an exception.
- Return type: None
- on_predict_batch_end(trainer, pl_module, outputs, batch, batch_idx, dataloader_idx)[source]¶
Called when the predict batch ends.
- Return type: None
- on_predict_batch_start(trainer, pl_module, batch, batch_idx, dataloader_idx)[source]¶
Called when the predict batch begins.
- Return type: None
- on_sanity_check_end(trainer, pl_module)[source]¶
Called when the validation sanity check ends.
- Return type: None
- on_sanity_check_start(trainer, pl_module)[source]¶
Called when the validation sanity check starts.
- Return type: None
- on_test_batch_end(trainer, pl_module, outputs, batch, batch_idx, dataloader_idx)[source]¶
Called when the test batch ends.
- Return type: None
- on_test_batch_start(trainer, pl_module, batch, batch_idx, dataloader_idx)[source]¶
Called when the test batch begins.
- Return type: None
- on_train_batch_end(trainer, pl_module, outputs, batch, batch_idx)[source]¶
Called when the train batch ends.
- Return type: None
Note
The value outputs["loss"] here will be the normalized value w.r.t. accumulate_grad_batches of the loss returned from training_step.
- on_train_epoch_end(trainer, pl_module)[source]¶
Called when the train epoch ends.
To access all batch outputs at the end of the epoch, either:
  - Implement training_epoch_end in the LightningModule and access outputs via the module, or
  - Cache data across train batch hooks inside the callback implementation to post-process in this hook (a sketch follows below).
- Return type: None
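As a minimal sketch of the second option: a callback can accumulate per-batch values in on_train_batch_end and post-process them in on_train_epoch_end. The callback name and the metric collected (per-batch loss) are illustrative, and it assumes training_step returns a loss so that outputs["loss"] is available, as described in the note above.

from pytorch_lightning.callbacks import Callback

class EpochLossSummary(Callback):
    """Illustrative callback: caches per-batch losses and reports the epoch mean."""

    def __init__(self):
        self.batch_losses = []

    def on_train_batch_end(self, trainer, pl_module, outputs, batch, batch_idx):
        # outputs["loss"] is the (accumulation-normalized) loss returned from training_step
        self.batch_losses.append(outputs["loss"].detach().item())

    def on_train_epoch_end(self, trainer, pl_module):
        # Post-process the cached values once the epoch is finished
        mean_loss = sum(self.batch_losses) / max(len(self.batch_losses), 1)
        print(f"epoch {trainer.current_epoch}: mean train loss = {mean_loss:.4f}")
        self.batch_losses.clear()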
- on_validation_batch_end(trainer, pl_module, outputs, batch, batch_idx, dataloader_idx)[source]¶
Called when the validation batch ends.
- Return type: None
- on_validation_batch_start(trainer, pl_module, batch, batch_idx, dataloader_idx)[source]¶
Called when the validation batch begins.
- Return type: None