Clarification on log_every_n_steps with accumulate_grad_batches

Hi, I'm on Lightning 2.0.0 using the `TensorBoardLogger` and need some clarification on logging when `accumulate_grad_batches > 1`.
I think the confusion comes from the term "step", which can refer to both "training step" and "optimizer step".

When `accumulate_grad_batches > 1`, does `log_every_n_steps` refer to training steps or optimizer steps? Is the x-axis "step" in TensorBoard counted in training steps or optimizer steps? Is any kind of reduction done on the logged values across the accumulated batches?
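
For reference, this is the kind of setup I mean (the values here are just example placeholders):

```python
import pytorch_lightning as pl
from pytorch_lightning.loggers import TensorBoardLogger

trainer = pl.Trainer(
    max_epochs=10,
    accumulate_grad_batches=4,   # optimizer steps once every 4 training steps
    log_every_n_steps=50,        # ...50 of which kind of step?
    logger=TensorBoardLogger("lightning_logs"),
)
```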

Thanks!

Hey @johan-sightic

The logger isn't aware of gradient accumulation, so "step" there means training steps: whatever you log in each `training_step` call. There is no special reduction applied to the logged values over the accumulation window.
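
If you do want a value averaged over the accumulation window, you can do the reduction yourself inside `training_step`. Here's a minimal sketch of one way to do that; the module, metric name, and buffer are just placeholders, not anything Lightning provides:

```python
import torch
from torch import nn
import pytorch_lightning as pl


class MyModel(pl.LightningModule):
    def __init__(self, accumulate_grad_batches: int = 4):
        super().__init__()
        self.layer = nn.Linear(32, 1)
        self.accumulate_grad_batches = accumulate_grad_batches
        self._loss_buffer = []  # per-batch losses within one accumulation window

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = nn.functional.mse_loss(self.layer(x), y)

        # self.log(...) on its own would record this value every training step,
        # with no averaging over the accumulation window.
        self._loss_buffer.append(loss.detach())

        # Log the window average once per optimizer step instead.
        if (batch_idx + 1) % self.accumulate_grad_batches == 0:
            window_mean = torch.stack(self._loss_buffer).mean()
            self.log("train_loss_window_mean", window_mean, on_step=True, prog_bar=True)
            self._loss_buffer.clear()

        return loss

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.1)
```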