NeptuneLogger

class lightning.pytorch.loggers.NeptuneLogger(*, api_key=None, project=None, name=None, run=None, log_model_checkpoints=True, prefix='training', **neptune_run_kwargs)[source]

Bases: lightning.pytorch.loggers.logger.Logger

Log using Neptune.

Install it with pip:

pip install neptune-client

or conda:

conda install -c conda-forge neptune-client

Quickstart

Pass NeptuneLogger instance to the Trainer to log metadata with Neptune:

from lightning.pytorch import Trainer
from lightning.pytorch.loggers import NeptuneLogger

neptune_logger = NeptuneLogger(
    api_key="ANONYMOUS",  # replace with your own
    project="common/pytorch-lightning-integration",  # format "<WORKSPACE/PROJECT>"
    tags=["training", "resnet"],  # optional
)
trainer = Trainer(max_epochs=10, logger=neptune_logger)

How to use NeptuneLogger?

Use the logger anywhere in your LightningModule as follows:

from neptune.new.types import File
from lightning.pytorch import LightningModule


class LitModel(LightningModule):
    def training_step(self, batch, batch_idx):
        # log metrics
        loss = ...
        self.log("train/loss", loss)

    def any_lightning_module_function_or_hook(self):
        # log images
        img = ...
        self.logger.experiment["train/misclassified_images"].log(File.as_image(img))

        # generic recipe
        metadata = ...
        self.logger.experiment["your/metadata/structure"].log(metadata)

Note that the syntax self.logger.experiment["your/metadata/structure"].log(metadata) is specific to Neptune and extends the logger's capabilities. Specifically, it lets you log various types of metadata, such as scores, files, images, interactive visuals, and CSVs. Refer to the Neptune docs for more detailed explanations. You can also use the regular logger methods log_metrics() and log_hyperparams() with NeptuneLogger, as these are also supported.

Log after fitting or testing is finished

You can log objects after the fitting or testing methods are finished:

from lightning.pytorch import Trainer
from lightning.pytorch.loggers import NeptuneLogger

neptune_logger = NeptuneLogger(project="common/pytorch-lightning-integration")

trainer = Trainer(logger=neptune_logger)
model = ...
datamodule = ...
trainer.fit(model, datamodule=datamodule)
trainer.test(model, datamodule=datamodule)

# Log objects after `fit` or `test` methods
# model summary
neptune_logger.log_model_summary(model=model, max_depth=-1)

# generic recipe
metadata = ...
neptune_logger.experiment["your/metadata/structure"].log(metadata)

Log model checkpoints

If you have ModelCheckpoint configured, the Neptune logger automatically logs model checkpoints. Model weights are uploaded to the "model/checkpoints" namespace in the Neptune Run. You can disable this option:

neptune_logger = NeptuneLogger(project="common/pytorch-lightning-integration", log_model_checkpoints=False)
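
Conversely, to keep checkpoint logging enabled, attach a ModelCheckpoint callback to the Trainer. A minimal sketch, assuming a "train/loss" metric is being logged (the monitored metric name and save_top_k value are illustrative, not part of the NeptuneLogger API):

from lightning.pytorch import Trainer
from lightning.pytorch.callbacks import ModelCheckpoint
from lightning.pytorch.loggers import NeptuneLogger

neptune_logger = NeptuneLogger(project="common/pytorch-lightning-integration")

# Illustrative callback configuration; with log_model_checkpoints=True (the default),
# saved weights are uploaded to the "model/checkpoints" namespace of the Neptune Run.
checkpoint_callback = ModelCheckpoint(monitor="train/loss", save_top_k=2)

trainer = Trainer(logger=neptune_logger, callbacks=[checkpoint_callback])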

Pass additional parameters to the Neptune run

You can also pass neptune_run_kwargs to specify the run in greater detail, such as tags or description:

from lightning.pytorch import Trainer
from lightning.pytorch.loggers import NeptuneLogger

neptune_logger = NeptuneLogger(
    project="common/pytorch-lightning-integration",
    name="lightning-run",
    description="mlp quick run with pytorch-lightning",
    tags=["mlp", "quick-run"],
)
trainer = Trainer(max_epochs=3, logger=neptune_logger)

Check the Neptune run documentation for more info about additional run parameters.

Details about Neptune run structure

Runs can be viewed as nested dictionary-like structures that you define in your code. This lets you organize your metadata in whatever way is most convenient for you.

The hierarchical structure that you apply to your metadata will be reflected later in the UI.

You can organize any type of metadata this way - images, parameters, metrics, model checkpoints, CSV files, etc.
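
A minimal sketch of such a structure, assuming a logger instance called neptune_logger (the namespace names and values below are illustrative):

# Single values are assigned, series are appended with .log()
neptune_logger.experiment["params/lr"] = 0.07
neptune_logger.experiment["data/version"] = "v1"
neptune_logger.experiment["metrics/train/loss"].log(0.31)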

Parameters
  • api_key (Optional[str]) – Optional. Neptune API token, found on https://neptune.ai upon registration. Read how to find and set the Neptune API token in the Neptune docs. It is recommended to keep the token in the NEPTUNE_API_TOKEN environment variable, in which case you can omit api_key (see the sketch after this list).

  • project (Optional[str]) – Optional. Name of a project in the form "my_workspace/my_project", for example "tom/mask-rcnn". If None, the value of the NEPTUNE_PROJECT environment variable is used. You need to create the project on https://neptune.ai first.

  • name (Optional[str]) – Optional. Editable name of the run. Run name appears in the “all metadata/sys” section in Neptune UI.

  • run (Optional[Run]) – Optional. Default is None. The Neptune Run object. If specified, this run will be used for logging instead of a new run being created. When a run object is passed, you can't specify other Neptune properties.

  • log_model_checkpoints (Optional[bool]) – Optional. Default is True. Log model checkpoints to Neptune. Works only if ModelCheckpoint is passed to the Trainer.

  • prefix (str) – Optional. Default is "training". Root namespace for all metadata logging.

  • **neptune_run_kwargs – Additional arguments like tags, description, capture_stdout, etc., used when a new run is created.
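
As mentioned for api_key and project above, both can come from environment variables. A minimal sketch, assuming NEPTUNE_API_TOKEN and NEPTUNE_PROJECT are already set in the environment (the tag value is illustrative):

import os

from lightning.pytorch.loggers import NeptuneLogger

# api_key and project are omitted here; they are read from the
# NEPTUNE_API_TOKEN and NEPTUNE_PROJECT environment variables.
assert "NEPTUNE_API_TOKEN" in os.environ and "NEPTUNE_PROJECT" in os.environ

neptune_logger = NeptuneLogger(tags=["env-configured"])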

Raises
  • ModuleNotFoundError – If required Neptune package is not installed.

  • ValueError – If argument passed to the logger’s constructor is incorrect.

after_save_checkpoint(checkpoint_callback)[source]

Automatically log the checkpointed model. Called after the model checkpoint callback saves a new checkpoint.

Parameters

checkpoint_callback (Checkpoint) – the model checkpoint callback instance

Return type

None

finalize(status)[source]

Do any processing that is necessary to finalize an experiment.

Parameters

status (str) – Status that the experiment finished with (e.g. success, failed, aborted)

Return type

None

log_hyperparams(params)[source]

Log hyper-parameters to the run.

Hyperparams will be logged under the “<prefix>/hyperparams” namespace.

Note

You can also log parameters by directly using the logger instance: neptune_logger.experiment["model/hyper-parameters"] = params_dict.

This way, you keep the hierarchical structure of the parameters.

Parameters

params (Union[Dict[str, Any], Namespace]) – Python dictionary (or Namespace) with the parameters.

Example:

from lightning.pytorch.loggers import NeptuneLogger

PARAMS = {
    "batch_size": 64,
    "lr": 0.07,
    "decay_factor": 0.97
}

neptune_logger = NeptuneLogger(
    api_key="ANONYMOUS",
    project="common/pytorch-lightning-integration"
)

neptune_logger.log_hyperparams(PARAMS)

Return type

None
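
As a complement to the example above, a minimal sketch of the direct-assignment style mentioned in the note, which preserves the hierarchy (the nested dictionary is illustrative):

# Assigning a nested dict keeps the hierarchical structure in the Neptune UI.
neptune_logger.experiment["model/hyper-parameters"] = {
    "optimizer": {"name": "SGD", "lr": 0.07},
    "batch_size": 64,
}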

log_metrics(metrics, step=None)[source]

Log metrics (numeric values) in Neptune runs.

Parameters
  • metrics (Dict[str, Union[Tensor, float]]) – Dictionary with metric names as keys and measured quantities as values.

  • step (Optional[int]) – Step number at which the metrics should be recorded, currently ignored.

Return type

None
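
A minimal sketch of a direct call on the logger (the metric names and values are illustrative; inside a LightningModule you would normally go through self.log instead):

# step is accepted but, as noted above, currently ignored.
neptune_logger.log_metrics({"val/loss": 0.23, "val/acc": 0.89}, step=10)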

property experiment: Run

Actual Neptune run object. Allows you to use neptune logging features in your LightningModule.

Example:

class LitModel(LightningModule):
    def training_step(self, batch, batch_idx):
        # log metrics
        acc = ...
        self.logger.experiment["train/acc"].log(acc)

        # log images
        img = ...
        self.logger.experiment["train/misclassified_images"].log(File.as_image(img))

Note that the syntax self.logger.experiment["your/metadata/structure"].log(metadata) is specific to Neptune and extends the logger's capabilities. Specifically, it lets you log various types of metadata, such as scores, files, images, interactive visuals, and CSVs. Refer to the Neptune docs for more detailed explanations. You can also use the regular logger methods log_metrics() and log_hyperparams() with NeptuneLogger, as these are also supported.

Return type

Run

property name: Optional[str]

Return the experiment name, or "offline-name" when the experiment is run in offline mode.

Return type

Optional[str]

property save_dir: Optional[str]

Gets the save directory of the experiment, which in this case is None because Neptune does not save locally.

Return type

Optional[str]

Returns

the root directory where experiment logs get saved

property version: Optional[str]

Return the experiment version.

It's the Neptune Run's short_id.

Return type

Optional[str]
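
A minimal sketch of reading these properties, for example to label local artifacts with the run id (the print format is illustrative):

run_id = neptune_logger.version  # the Neptune Run's short_id
run_name = neptune_logger.name   # or "offline-name" in offline mode
print(f"Logging to Neptune run {run_name} ({run_id})")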

