Hi, I’m on Lightning 2.0.0 with the TensorBoardLogger and need some clarification on logging when `accumulate_grad_batches > 1`.
I think the confusion comes from the term "step", which can refer to both a training step and an optimizer step.
When `accumulate_grad_batches > 1`:

1. Does `log_every_n_steps` refer to training steps or optimizer steps?
2. Is the x-axis "step" in TensorBoard counting training steps or optimizer steps?
3. Is any kind of reduction (e.g. averaging) applied to values logged across the accumulated batches?
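
For concreteness, here's a minimal sketch of the kind of setup I'm asking about (the module, data, and hyperparameter values are just placeholders):

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
import lightning.pytorch as pl
from lightning.pytorch.loggers import TensorBoardLogger

class ToyModule(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = nn.Linear(32, 1)

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = nn.functional.mse_loss(self.layer(x), y)
        # Logged per training step -- is the "step" that
        # log_every_n_steps and the TensorBoard x-axis count
        # this batch_idx, or the optimizer step after accumulation?
        self.log("train_loss", loss, on_step=True, on_epoch=False)
        return loss

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.1)

dataset = TensorDataset(torch.randn(256, 32), torch.randn(256, 1))
trainer = pl.Trainer(
    max_epochs=1,
    logger=TensorBoardLogger("tb_logs"),
    accumulate_grad_batches=4,  # 4 training steps per optimizer step
    log_every_n_steps=10,       # ...10 of which kind of step?
)
trainer.fit(ToyModule(), DataLoader(dataset, batch_size=8))
```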
Thanks!