Trainer prints every step in validation

Hello everyone,
absolute beginner here.
I read an article about the difference between pytorch and pytorch lightning (From PyTorch to PyTorch Lightning — A gentle introduction | by William Falcon | Towards Data Science)

I tried the code from the example myself and am a bit confused by the progress output.
For training, a single progress bar is shown for the entire epoch.
For validation, on the other hand, a new line is printed every time the progress bar updates, and I can't see any reason for this.

Epoch 2: 100%|██████████| 938/938 [00:13<00:00, 68.42it/s, v_num=0]
Validation: 0it [00:00, ?it/s]
Validation:   0%|          | 0/157 [00:00<?, ?it/s]
Validation DataLoader 0:   0%|          | 0/157 [00:00<?, ?it/s]
Validation DataLoader 0:   1%|          | 1/157 [00:00<00:00, 999.60it/s]
Validation DataLoader 0:   1%|▏         | 2/157 [00:00<00:01, 153.81it/s]
Validation DataLoader 0:   2%|▏         | 3/157 [00:00<00:01, 119.97it/s]
Validation DataLoader 0:   3%|▎         | 4/157 [00:00<00:01, 111.08it/s]

The code I used can be seen in raw form here on github
How can I get the validation progress shown as a single bar for the entire epoch, instead of a new line for every batch?

What terminal is this running in? If it is something like PyCharm's console, nested tqdm bars are unfortunately not well supported, and there is not much we can do on the Lightning side.

See the GitHub issue here: Progress Bar prints many lines when validation_step is defined · Issue #15283 · Lightning-AI/lightning · GitHub
And tqdm’s known issues: GitHub - tqdm/tqdm: A Fast, Extensible Progress Bar for Python and CLI
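If you want to keep the training bar but suppress the per-batch validation lines, one workaround (a sketch, assuming the `TQDMProgressBar.init_validation_tqdm` hook from `pytorch_lightning.callbacks`; the import path may differ across Lightning versions) is to subclass the progress bar and disable just the validation bar. The Lightning-specific part is shown in comments; the underlying mechanism, tqdm's `disable` flag, is demonstrated standalone below.

```python
# Sketch of the workaround mechanism. The Lightning part (commented out)
# assumes the TQDMProgressBar callback API; the class/callback names here
# are hypothetical.
#
# from pytorch_lightning import Trainer
# from pytorch_lightning.callbacks import TQDMProgressBar
#
# class QuietValBar(TQDMProgressBar):
#     def init_validation_tqdm(self):
#         bar = super().init_validation_tqdm()
#         bar.disable = True  # suppress per-batch validation lines
#         return bar
#
# trainer = Trainer(callbacks=[QuietValBar()])

# The mechanism itself: a disabled tqdm bar writes nothing to its stream.
import io
from tqdm import tqdm

buf = io.StringIO()
for _ in tqdm(range(157), file=buf, disable=True):
    pass
print(repr(buf.getvalue()))  # disabled bar produced no output
```

Disabling the bar only affects the console output; validation itself still runs and metrics are still logged.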


Sorry for the late reply.
Yes, you are right. The problem occurred in PyCharm, but not when running the code from the command line.
Thank you for the answer.
