Hey,
I have trained a model to a checkpoint and want to resume training from it. For this, I use the resume_from_checkpoint argument of the Trainer class. For logging, I use the TensorBoardLogger. Unfortunately, the resumed run creates a new logger (and with it a new version directory), but ideally I want logging to continue with the old logger. I tried saving the logger in the LightningModule of the checkpoint, but a new logging directory is still created. Is there a way to reuse the old logger?