DeviceStatsMonitor

class lightning.pytorch.callbacks.DeviceStatsMonitor(cpu_stats=None)[source]

Bases: Callback

Automatically monitors and logs device stats during the training, validation, and testing stages. DeviceStatsMonitor is a special callback: it requires a logger to be passed as an argument to the Trainer.

Parameters:
cpu_stats (Optional[bool]) – If None, CPU stats are logged only when the accelerator is the CPU. If True, CPU stats are logged regardless of the accelerator. If False, CPU stats are never logged.

Raises:
MisconfigurationException – If the Trainer has no logger.

ModuleNotFoundError – If psutil is not installed and CPU stats are monitored.
Example:

    from lightning import Trainer
    from lightning.pytorch.callbacks import DeviceStatsMonitor

    device_stats = DeviceStatsMonitor()
    trainer = Trainer(callbacks=[device_stats])
on_test_batch_end(trainer, pl_module, outputs, batch, batch_idx, dataloader_idx=0)[source]

Called when the test batch ends.

Return type: None

on_test_batch_start(trainer, pl_module, batch, batch_idx, dataloader_idx=0)[source]

Called when the test batch begins.

Return type: None
on_train_batch_end(trainer, pl_module, outputs, batch, batch_idx)[source]

Called when the train batch ends.

Return type: None

Note: The value outputs["loss"] here will be the loss returned from training_step, normalized with respect to accumulate_grad_batches.
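The normalization mentioned in the note can be illustrated with plain arithmetic; the training_step loss of 1.0 and accumulate_grad_batches=2 below are hypothetical values chosen for the example.

```python
# With gradient accumulation, Lightning divides the reported loss by
# accumulate_grad_batches so that accumulated gradients sum to the
# magnitude of a single full batch.
accumulate_grad_batches = 2
raw_loss = 1.0  # hypothetical value returned from training_step
normalized_loss = raw_loss / accumulate_grad_batches  # value seen in outputs["loss"]
```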
on_train_batch_start(trainer, pl_module, batch, batch_idx)[source]

Called when the train batch begins.

Return type: None

on_validation_batch_end(trainer, pl_module, outputs, batch, batch_idx, dataloader_idx=0)[source]

Called when the validation batch ends.

Return type: None