Loggers¶
Lightning supports the most popular logging frameworks (TensorBoard, Comet, Neptune, etc.). TensorBoard is used by default, but you can pass any combination of the following loggers to the Trainer.
Note
All loggers log to os.getcwd() by default. To change the path without creating a logger, set Trainer(default_root_dir='/your/path/to/save/checkpoints').
Read more about logging options.
To log arbitrary artifacts like images or audio samples, use the trainer.log_dir property to resolve the path.
def training_step(self, batch, batch_idx):
    img = ...
    log_image(img, self.trainer.log_dir)
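The log_image call above is a placeholder. A minimal sketch of such a helper, assuming the artifact is already encoded as bytes (save_artifact and its arguments are hypothetical helpers, not part of Lightning's API):

```python
import os


def save_artifact(data: bytes, log_dir: str, name: str) -> str:
    """Write already-encoded bytes (e.g. a PNG image) under the trainer's log_dir.

    Hypothetical helper: Lightning only resolves the directory via
    trainer.log_dir; writing the file is up to you.
    """
    os.makedirs(log_dir, exist_ok=True)  # the directory may not exist yet early in training
    path = os.path.join(log_dir, name)
    with open(path, "wb") as f:
        f.write(data)
    return path
```

Inside training_step you would then call save_artifact(img_bytes, self.trainer.log_dir, "sample.png").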
Comet.ml¶
Comet.ml is a third-party logger.
To use CometLogger as your logger, do the following.
First, install the package:
pip install comet-ml
Then configure the logger and pass it to the Trainer:
import os
from pytorch_lightning.loggers import CometLogger
comet_logger = CometLogger(
    api_key=os.environ.get("COMET_API_KEY"),
    workspace=os.environ.get("COMET_WORKSPACE"),  # Optional
    save_dir=".",  # Optional
    project_name="default_project",  # Optional
    rest_api_key=os.environ.get("COMET_REST_API_KEY"),  # Optional
    experiment_name="default",  # Optional
)
trainer = Trainer(logger=comet_logger)
The CometLogger is available anywhere except __init__ in your LightningModule.
class MyModule(LightningModule):
    def any_lightning_module_function_or_hook(self):
        some_img = fake_image()
        self.logger.experiment.add_image("generated_images", some_img, 0)
See also the CometLogger docs.
MLflow¶
MLflow is a third-party logger.
To use MLFlowLogger as your logger, do the following.
First, install the package:
pip install mlflow
Then configure the logger and pass it to the Trainer:
from pytorch_lightning.loggers import MLFlowLogger
mlf_logger = MLFlowLogger(experiment_name="default", tracking_uri="file:./ml-runs")
trainer = Trainer(logger=mlf_logger)
See also the MLFlowLogger docs.
Neptune.ai¶
Neptune.ai is a third-party logger.
To use NeptuneLogger as your logger, do the following.
First, install the package:
pip install neptune-client
or with conda:
conda install -c conda-forge neptune-client
Then configure the logger and pass it to the Trainer:
from pytorch_lightning.loggers import NeptuneLogger
neptune_logger = NeptuneLogger(
    api_key="ANONYMOUS",  # replace with your own
    project="common/pytorch-lightning-integration",  # format "<WORKSPACE/PROJECT>"
    tags=["training", "resnet"],  # optional
)
trainer = Trainer(logger=neptune_logger)
The NeptuneLogger is available anywhere except __init__ in your LightningModule.
class MyModule(LightningModule):
    def any_lightning_module_function_or_hook(self):
        # generic recipe for logging custom metadata (neptune specific)
        metadata = ...
        self.logger.experiment["your/metadata/structure"].log(metadata)
Note that the syntax self.logger.experiment["your/metadata/structure"].log(metadata) is specific to Neptune and extends the logger's capabilities. Specifically, it allows you to log various types of metadata, such as scores, files, images, interactive visuals, and CSVs. Refer to the Neptune docs for more detailed explanations.
You can always use the regular logger methods log_metrics() and log_hyperparams(), as these are also supported.
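Conceptually, a Neptune-style key such as "your/metadata/structure" addresses a nested namespace, and each .log() call appends a value to the series at that location. A toy illustration of that addressing scheme (plain dicts standing in for Neptune's run object; log_namespaced is not Neptune code):

```python
def log_namespaced(store: dict, key: str, value) -> None:
    """Append value to the series at a slash-separated key, creating
    intermediate namespaces as needed (toy model of Neptune's run[...] access)."""
    *namespaces, leaf = key.split("/")
    node = store
    for name in namespaces:
        node = node.setdefault(name, {})  # descend, creating namespaces on demand
    node.setdefault(leaf, []).append(value)  # each .log() extends a series
```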
Tensorboard¶
To use TensorBoard as your logger do the following.
from pytorch_lightning.loggers import TensorBoardLogger
logger = TensorBoardLogger("tb_logs", name="my_model")
trainer = Trainer(logger=logger)
The TensorBoardLogger is available anywhere except __init__ in your LightningModule.
class MyModule(LightningModule):
    def any_lightning_module_function_or_hook(self):
        some_img = fake_image()
        self.logger.experiment.add_image("generated_images", some_img, 0)
See also the TensorBoardLogger docs.
Test Tube¶
Test Tube is a TensorBoard logger with a nicer file structure.
To use TestTubeLogger as your logger, do the following.
First, install the package:
pip install test_tube
Then configure the logger and pass it to the Trainer:
from pytorch_lightning.loggers import TestTubeLogger
logger = TestTubeLogger("tb_logs", name="my_model")
trainer = Trainer(logger=logger)
The TestTubeLogger is available anywhere except __init__ in your LightningModule.
class MyModule(LightningModule):
    def any_lightning_module_function_or_hook(self):
        some_img = fake_image()
        self.logger.experiment.add_image("generated_images", some_img, 0)
See also the TestTubeLogger docs.
Weights and Biases¶
Weights and Biases is a third-party logger.
To use WandbLogger as your logger, do the following.
First, install the package:
pip install wandb
Then configure the logger and pass it to the Trainer:
from pytorch_lightning.loggers import WandbLogger
# instrument experiment with W&B
wandb_logger = WandbLogger(project="MNIST", log_model="all")
trainer = Trainer(logger=wandb_logger)
# log gradients and model topology
wandb_logger.watch(model)
The WandbLogger is available anywhere except __init__ in your LightningModule.
import wandb


class MyModule(LightningModule):
    def any_lightning_module_function_or_hook(self):
        some_img = fake_image()
        # log rich media through the underlying wandb run
        self.logger.experiment.log({"generated_images": [wandb.Image(some_img, caption="...")]})
See also the WandbLogger docs, and the demo in Google Colab with hyperparameter search and model logging.
Multiple Loggers¶
Lightning supports the use of multiple loggers; just pass a list to the Trainer.
from pytorch_lightning.loggers import TensorBoardLogger, TestTubeLogger
logger1 = TensorBoardLogger("tb_logs", name="my_model")
logger2 = TestTubeLogger("tb_logs", name="my_model")
trainer = Trainer(logger=[logger1, logger2])
The loggers are available as a list anywhere except __init__ in your LightningModule.
class MyModule(LightningModule):
    def any_lightning_module_function_or_hook(self):
        some_img = fake_image()
        # Option 1
        self.logger.experiment[0].add_image("generated_images", some_img, 0)
        # Option 2
        self.logger[0].experiment.add_image("generated_images", some_img, 0)
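Under the hood, passing a list means each logging call is fanned out to every configured logger. A minimal sketch of that fan-out idea (FanOutLogger is a hypothetical illustration, not Lightning's actual logger-collection class):

```python
class FanOutLogger:
    """Forward log_metrics calls to every wrapped logger (illustrative only)."""

    def __init__(self, loggers):
        self.loggers = list(loggers)

    def log_metrics(self, metrics, step=None):
        # every child logger receives the same metrics dict and step
        for logger in self.loggers:
            logger.log_metrics(metrics, step)
```

With this picture in mind, self.log("loss", loss) in your LightningModule reaches all loggers in the list, while the indexed forms shown above target one specific logger.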