I know you can create a list of loggers and pass it to the trainer; however, creating a TensorBoard logger and an MLflow logger isn't giving me what I want.
Is there a way to keep the default logging (as if I don't pass in a logger) and also add the MLflow logger?
Specifically, I want everything stored locally in ./lightning_logs (logs, checkpoints, the usual default stuff) while also logging to my MLflow server.
Thanks so much for responding.
I tried your suggestion and got really close; however, it created two directories, “lightning_logs” and “1_lightning_logs”.
It seems that “1_lightning_logs” has all the checkpoints, while “lightning_logs” has a version folder with an events.out.tfevents file along with the hparams.yaml.
Is there any way to get the checkpoints into the lightning_logs folder and avoid creating the 1_lightning_logs folder?