Does self.log(xxx) respect the trainer parameter log_every_n_steps?
If I call self.log(xxx) inside training_step, does that mean it will not log anything until it reaches the required step?
If xxx is expensive to compute, how should one do this?
It depends. If self.log(xxx) is called with on_epoch=False, the value is only sent to the logger on steps that line up with log_every_n_steps and is otherwise ignored.
But if on_epoch=True, every call contributes to the aggregate that is logged at the end of the epoch.
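Roughly something like this (a minimal sketch; LitModel, the placeholder model, and the metric names are just examples, not your code):

```python
import torch
import torch.nn.functional as F
import pytorch_lightning as pl


class LitModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 10)  # placeholder model

    def forward(self, x):
        return self.layer(x)

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = F.cross_entropy(self(x), y)

        # on_step=True, on_epoch=False: the value is only written to the logger
        # on steps that line up with the Trainer's log_every_n_steps.
        self.log("loss_step", loss, on_step=True, on_epoch=False)

        # on_epoch=True: every call is accumulated (mean by default) and the
        # aggregated value is logged once at the end of the epoch, independent
        # of log_every_n_steps.
        self.log("loss_epoch", loss, on_step=False, on_epoch=True)

        return loss

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.1)
```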
Hey!
Thank you for your answer, but I don’t understand it. Can you try to explain it one more time?
So if I set log_every_n_steps=100 in the Trainer, and in training_step I compute a loss that I want logged every 100 steps and also once at the end of each epoch, do I just set on_step=True, on_epoch=True?
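For reference, something like this is what I have in mind (just a sketch of my training_step; the names are placeholders):

```python
import torch.nn.functional as F

# inside my LightningModule
def training_step(self, batch, batch_idx):
    x, y = batch
    loss = F.cross_entropy(self(x), y)
    # logged every 100 steps (log_every_n_steps=100) and aggregated at epoch end?
    self.log("train_loss", loss, on_step=True, on_epoch=True)
    return loss
```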
Also, does self.log (when on_epoch=True) call compute() internally? Say I want a torchmetrics.Accuracy metric computed over all the batches processed by the end of the epoch, and I set on_epoch=True: should I call .compute() before logging and .reset() afterwards, or is that done automatically?
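To make that concrete, this is the kind of setup I mean (assuming a recent torchmetrics where the task argument is required; num_classes=10 and the attribute name train_acc are just placeholders):

```python
import torch.nn.functional as F
import torchmetrics

# inside my LightningModule
def __init__(self):
    super().__init__()
    self.train_acc = torchmetrics.Accuracy(task="multiclass", num_classes=10)

def training_step(self, batch, batch_idx):
    x, y = batch
    logits = self(x)
    self.train_acc(logits, y)  # update the metric state with this batch
    # does on_epoch=True call .compute() / .reset() for me at epoch end?
    self.log("train_acc", self.train_acc, on_step=False, on_epoch=True)
    return F.cross_entropy(logits, y)
```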
Thank you again!