The docs for ModelCheckpoint say:
> every_n_epochs: Number of epochs between checkpoints.
> If `every_n_epochs == None or every_n_epochs == 0`, we skip saving when the epoch ends.
> To disable, set `every_n_epochs = 0`. This value must be `None` or non-negative.
However, `every_n_epochs` defaults to `None`, and in the simplest case of
```python
from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import ModelCheckpoint

checkpoint_callback = ModelCheckpoint(monitor="val_loss")
trainer = Trainer(callbacks=[checkpoint_callback])
```
it seems like checkpoints are still saved. Are the docs correct?
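For context, here is a minimal script I would use to check the behavior. The `ToyModel`, the random data, and `max_epochs=3` are placeholders I made up for the repro, not part of any real setup; the only relevant part is that `every_n_epochs` is left at its default:

```python
import os
import torch
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl
from pytorch_lightning.callbacks import ModelCheckpoint


class ToyModel(pl.LightningModule):
    """Tiny linear model, just enough to run fit() with a val_loss metric."""

    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 1)

    def training_step(self, batch, batch_idx):
        x, y = batch
        return torch.nn.functional.mse_loss(self.layer(x), y)

    def validation_step(self, batch, batch_idx):
        x, y = batch
        loss = torch.nn.functional.mse_loss(self.layer(x), y)
        # Log the metric that ModelCheckpoint monitors.
        self.log("val_loss", loss)

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.01)


def make_loader():
    # Random data, purely for the repro.
    x = torch.randn(64, 32)
    y = torch.randn(64, 1)
    return DataLoader(TensorDataset(x, y), batch_size=16)


# every_n_epochs is left at its default (None); monitor="val_loss" as in the question.
checkpoint_callback = ModelCheckpoint(monitor="val_loss")
trainer = pl.Trainer(max_epochs=3, callbacks=[checkpoint_callback])
trainer.fit(ToyModel(), make_loader(), make_loader())

# If the docs were literal, no checkpoint file should exist here.
print("best_model_path:", checkpoint_callback.best_model_path)
print("file exists:", os.path.isfile(checkpoint_callback.best_model_path))
```

The two prints at the end show whether a checkpoint file was actually written even though `every_n_epochs` was never set.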