I am aware that when we use self.log("train_loss", loss), for instance, the loss tensor is automatically detached to avoid a memory leak.
However, if I am logging something else directly through self.logger.experiment.add_scalars() or self.logger.experiment.add_image(), do I need to manually detach what is being logged?
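For context, here is a minimal sketch of what I mean by detaching manually before calling the raw logger. The tensor names and shapes are just placeholders; the commented-out lines show how I would pass the detached tensors to add_scalars()/add_image() inside a LightningModule:

```python
import torch

# Stand-ins for tensors produced during training_step (shapes are arbitrary).
loss = (torch.randn(4, requires_grad=True) ** 2).mean()
image = torch.sigmoid(torch.randn(3, 32, 32, requires_grad=True))

# Detach (and move to CPU for images) before handing tensors to the
# underlying experiment logger, e.g. inside a LightningModule:
#   self.logger.experiment.add_scalars("losses", {"train": safe_loss}, step)
#   self.logger.experiment.add_image("sample", safe_image, step)
safe_loss = loss.detach()
safe_image = image.detach().cpu()

# The detached copies no longer hold a reference to the autograd graph.
print(safe_loss.requires_grad, safe_image.requires_grad)  # False False
```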