Why does pytorch_lightning run validation for one batch after resuming from a checkpoint?

I am resuming training as follows:

trainer.fit(model, ckpt_path=resume_ckpt)
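
For context, here is a fuller sketch of my setup. MyModel, the monitored metric name, and the checkpoint path are placeholders for my actual code:

import pytorch_lightning as pl
from pytorch_lightning.callbacks import ModelCheckpoint

# Checkpoint selection is driven by the monitored validation metric.
checkpoint_callback = ModelCheckpoint(monitor="val_loss", save_top_k=1)

trainer = pl.Trainer(
    max_epochs=100,
    callbacks=[checkpoint_callback],
    num_sanity_val_steps=0,  # disabling the sanity check changes nothing
)

model = MyModel()  # placeholder LightningModule
resume_ckpt = "checkpoints/last.ckpt"  # placeholder path
trainer.fit(model, ckpt_path=resume_ckpt)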

Right after resuming, the trainer runs validation for exactly one batch. Worse, my ModelCheckpoint then saves a checkpoint based on the metric from this incomplete validation pass. This is really unreasonable.
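
To confirm that only a single batch runs, I counted validation batches with a small debugging callback (the callback and its counter are mine, not part of the Lightning API):

from pytorch_lightning.callbacks import Callback

class ValBatchCounter(Callback):
    """Counts how many batches each validation loop actually processes."""

    def on_validation_epoch_start(self, trainer, pl_module):
        self.count = 0

    def on_validation_batch_end(self, trainer, pl_module, outputs, batch, batch_idx, dataloader_idx=0):
        self.count += 1

    def on_validation_epoch_end(self, trainer, pl_module):
        # Right after resuming, this reports 1 instead of the full number
        # of batches in my validation dataloader.
        print(f"validation batches in this loop: {self.count}")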

My ModelCheckpoint is pytorch_lightning.callbacks.ModelCheckpoint, and my pytorch_lightning version is 2.0.8.

Setting num_sanity_val_steps=0 does not help, so this is not the sanity-check validation.