DeviceStatsMonitor

class pytorch_lightning.callbacks.DeviceStatsMonitor(cpu_stats=None)[source]

Bases: pytorch_lightning.callbacks.callback.Callback

Automatically monitors and logs device stats during the training stage. DeviceStatsMonitor is a special callback: it requires a logger to be passed to the Trainer.

Parameters:

cpu_stats (Optional[bool]) – if None, CPU stats are logged only when the accelerator is CPU (until v1.9.0, a warning is raised if psutil is not installed). If True, CPU stats are logged regardless of the accelerator, and an exception is raised if psutil is not installed. If False, CPU stats are not logged regardless of the accelerator.

Raises:

MisconfigurationException – If Trainer has no logger.

Example

>>> from pytorch_lightning import Trainer
>>> from pytorch_lightning.callbacks import DeviceStatsMonitor
>>> device_stats = DeviceStatsMonitor() 
>>> trainer = Trainer(callbacks=[device_stats]) 
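Since a MisconfigurationException is raised when the Trainer has no logger, a typical setup attaches one explicitly. A minimal sketch, assuming a CSVLogger writing to a "logs" directory (the path and the cpu_stats=True choice are illustrative):

>>> from pytorch_lightning import Trainer
>>> from pytorch_lightning.callbacks import DeviceStatsMonitor
>>> from pytorch_lightning.loggers import CSVLogger
>>> device_stats = DeviceStatsMonitor(cpu_stats=True)  # log CPU stats on any accelerator
>>> trainer = Trainer(callbacks=[device_stats], logger=CSVLogger("logs"))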
on_train_batch_end(trainer, pl_module, outputs, batch, batch_idx)[source]

Called when the train batch ends.

Return type:

None

Note

The value of outputs["loss"] here is the loss returned from training_step, normalized by accumulate_grad_batches.
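To illustrate, a custom callback hooking the same point would observe the scaled value. A minimal sketch, assuming accumulate_grad_batches=4; the LossPrinter class is a hypothetical helper, not part of the library:

>>> from pytorch_lightning import Trainer
>>> from pytorch_lightning.callbacks import Callback
>>> class LossPrinter(Callback):  # hypothetical callback for illustration
...     def on_train_batch_end(self, trainer, pl_module, outputs, batch, batch_idx):
...         # with accumulate_grad_batches=4, this prints the training_step loss / 4
...         print(outputs["loss"])
>>> trainer = Trainer(accumulate_grad_batches=4, callbacks=[LossPrinter()])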

on_train_batch_start(trainer, pl_module, batch, batch_idx)[source]

Called when the train batch begins.

Return type:

None

setup(trainer, pl_module, stage)[source]

Called when fit, validate, test, predict, or tune begins.

Return type:

None
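
As with the hooks above, setup can be overridden in a custom callback; the stage argument identifies which phase is beginning. A minimal sketch (the StageAnnouncer name is hypothetical):

>>> from pytorch_lightning import Trainer
>>> from pytorch_lightning.callbacks import Callback
>>> class StageAnnouncer(Callback):  # hypothetical callback for illustration
...     def setup(self, trainer, pl_module, stage):
...         # stage names the beginning phase, e.g. "fit", "validate", "test", or "predict"
...         print(f"setup called for stage={stage}")
>>> trainer = Trainer(callbacks=[StageAnnouncer()])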