For this, you need to override the relevant methods (init_train_tqdm, init_validation_tqdm, …) of ProgressBar according to your requirements, as @asvskartheek mentioned.
PS: you can use this to remove the it/s rate.
I was able to modify the progress bar to suit my needs.
Here is how I did it:
Remove the number of iterations per second
@goku was right: I created a LitProgressBar class that inherits from pytorch_lightning.callbacks.progress.ProgressBar and overrode the init_[sanity|train|validation|test]_tqdm methods so that each tqdm bar is built without the rate field.
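Something along these lines (a minimal sketch, assuming tqdm's default bar layout; the BAR_FORMAT string and the _strip_rate helper are illustrative names of mine, not part of the Lightning API):

from tqdm import tqdm
from pytorch_lightning.callbacks.progress import ProgressBar

# tqdm's default layout with the ", {rate_fmt}" field removed,
# which is the part that prints iterations per second
BAR_FORMAT = "{l_bar}{bar}| {n_fmt}/{total_fmt} [{elapsed}<{remaining}{postfix}]"

class LitProgressBar(ProgressBar):
    def _strip_rate(self, bar: tqdm) -> tqdm:
        # keep the bar built by the base class, just change its format
        bar.bar_format = BAR_FORMAT
        return bar

    def init_sanity_tqdm(self) -> tqdm:
        return self._strip_rate(super().init_sanity_tqdm())

    def init_train_tqdm(self) -> tqdm:
        return self._strip_rate(super().init_train_tqdm())

    def init_validation_tqdm(self) -> tqdm:
        return self._strip_rate(super().init_validation_tqdm())

    def init_test_tqdm(self) -> tqdm:
        return self._strip_rate(super().init_test_tqdm())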
Remove the version number (v_num) and print the loss in a different format (e.g. {loss:.3e})
To solve these two problems, I overrode get_progress_bar_dict in my custom LightningModule in the following way:
def get_progress_bar_dict(self):
    items = super().get_progress_bar_dict()
    # discard the version number
    items.pop("v_num", None)
    # discard the default-formatted loss
    items.pop("loss", None)
    # recompute the running loss, mirroring the base get_progress_bar_dict:
    # call .item() only once and store the value detached from the graph
    running_train_loss = self.trainer.running_loss.mean()
    avg_training_loss = (
        running_train_loss.cpu().item()
        if running_train_loss is not None
        else float("NaN")
    )
    # re-add the loss in the desired format
    items["loss"] = f"{avg_training_loss:.3e}"
    return items
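For completeness, here is how the two pieces could be wired together (a sketch; MyModel is a hypothetical stand-in for your own LightningModule that defines get_progress_bar_dict as above):

import pytorch_lightning as pl

# LitProgressBar is the custom progress bar from above;
# MyModel is a placeholder for your LightningModule
trainer = pl.Trainer(callbacks=[LitProgressBar()])
trainer.fit(MyModel())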
Hope that helps!
By the way, the loss format could be handled directly by PL in get_progress_bar_dict (currently at line 1350), where it is hard-coded as {loss:.3f}, which I find quite limiting. Do you think it warrants a change/PR?