DeviceStatsMonitor¶
- class pytorch_lightning.callbacks.DeviceStatsMonitor(cpu_stats=None)[source]¶
Bases: pytorch_lightning.callbacks.callback.Callback
Automatically monitors and logs device stats during the training, validation, and testing stages. DeviceStatsMonitor is a special callback: it requires a logger to be passed as an argument to the Trainer.
- Parameters:
cpu_stats¶ (Optional[bool]) – If None, CPU stats are logged only if the accelerator is CPU; until v1.9.0 a warning is raised if psutil is not installed. If True, CPU stats are logged regardless of the accelerator, and an exception is raised if psutil is not installed. If False, CPU stats are not logged regardless of the accelerator.
- Raises:
MisconfigurationException – If Trainer has no logger.
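The three-way behavior of cpu_stats described above can be sketched as a plain-Python decision function. This is a hypothetical helper for illustration only (the function name and signature are not part of Lightning's API):

```python
def should_log_cpu_stats(cpu_stats, accelerator_is_cpu):
    """Sketch of the cpu_stats decision described in the Parameters section.

    Hypothetical helper -- not Lightning's internal implementation.
    """
    # None: log CPU stats only when the accelerator is the CPU.
    if cpu_stats is None:
        return accelerator_is_cpu
    # True / False: log (or don't) regardless of the accelerator.
    return cpu_stats

print(should_log_cpu_stats(None, True))    # True  (CPU accelerator)
print(should_log_cpu_stats(None, False))   # False (e.g. GPU accelerator)
print(should_log_cpu_stats(True, False))   # True  (forced on)
print(should_log_cpu_stats(False, True))   # False (forced off)
```

Note that the psutil requirement (warning for None, exception for True) is handled separately by the callback and is omitted from this sketch.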
Example
>>> from pytorch_lightning import Trainer
>>> from pytorch_lightning.callbacks import DeviceStatsMonitor
>>> device_stats = DeviceStatsMonitor()
>>> trainer = Trainer(callbacks=[device_stats])
- on_test_batch_end(trainer, pl_module, outputs, batch, batch_idx, dataloader_idx)[source]¶
Called when the test batch ends.
- Return type:
None
- on_test_batch_start(trainer, pl_module, batch, batch_idx, dataloader_idx)[source]¶
Called when the test batch begins.
- Return type:
None
- on_train_batch_end(trainer, pl_module, outputs, batch, batch_idx)[source]¶
Called when the train batch ends.
- Return type:
None
Note
The value outputs["loss"] here will be the normalized value w.r.t. accumulate_grad_batches of the loss returned from training_step.
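The normalization mentioned in the note can be illustrated in plain Python. This is a sketch under the assumption that Lightning divides the loss returned from training_step by accumulate_grad_batches before handing it to this hook; the variables below are illustrative, not Lightning internals:

```python
# Sketch: how the loss seen in on_train_batch_end relates to the loss
# returned from training_step when gradient accumulation is enabled.
# (Illustrative only; Lightning performs this scaling internally.)

accumulate_grad_batches = 4      # as in Trainer(accumulate_grad_batches=4)
raw_loss = 2.0                   # value returned from training_step

# The loss is scaled so that gradients accumulated over
# `accumulate_grad_batches` batches match the gradient of the mean loss.
outputs = {"loss": raw_loss / accumulate_grad_batches}

print(outputs["loss"])           # 0.5
```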
- on_train_batch_start(trainer, pl_module, batch, batch_idx)[source]¶
Called when the train batch begins.
- Return type:
None
- on_validation_batch_end(trainer, pl_module, outputs, batch, batch_idx, dataloader_idx)[source]¶
Called when the validation batch ends.
- Return type:
None
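The per-batch hooks listed above all follow the same pattern: a *_batch_start hook fires before each batch and the matching *_batch_end hook fires after it. The ordering can be sketched with a plain-Python stub, with no Lightning imports; only the hook names mirror pytorch_lightning.callbacks.Callback, while the trainer loop and class below are hypothetical stand-ins:

```python
# Minimal stub sketching the order in which per-batch Callback hooks fire.
# Hook names mirror pytorch_lightning.callbacks.Callback; the loop is a
# hypothetical stand-in, not Lightning's real training loop.

class StubStatsCallback:
    def __init__(self):
        self.calls = []

    def on_train_batch_start(self, trainer, pl_module, batch, batch_idx):
        self.calls.append(("train_batch_start", batch_idx))

    def on_train_batch_end(self, trainer, pl_module, outputs, batch, batch_idx):
        self.calls.append(("train_batch_end", batch_idx))

def run_train_loop(callback, batches):
    # Hypothetical loop: start hook, "training step", end hook, per batch.
    for batch_idx, batch in enumerate(batches):
        callback.on_train_batch_start(None, None, batch, batch_idx)
        outputs = {"loss": float(batch)}   # stand-in for training_step output
        callback.on_train_batch_end(None, None, outputs, batch, batch_idx)

cb = StubStatsCallback()
run_train_loop(cb, [10, 20])
print(cb.calls)
# [('train_batch_start', 0), ('train_batch_end', 0),
#  ('train_batch_start', 1), ('train_batch_end', 1)]
```

The validation and test hooks follow the same start/end pattern, with an extra dataloader_idx argument as shown in their signatures above.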