Modifying the Trainer when calling Trainer.fit() multiple times

Hi, I want to change some parameters of the Trainer (e.g. modify the callbacks or change the logging frequency) after each call to Trainer.fit(). The training loop would then look something like this:

import pytorch_lightning as pl

model = ...
train_dataloader = ...
max_steps = ...  # step budget per training stage
trainer = pl.Trainer(max_steps=max_steps)

for _ in range(num_stages):
    trainer.fit(model, train_dataloader)
    # Raising the step budget works, so the next fit() call trains further
    trainer.fit_loop.max_steps += max_steps
    # Changing the parameters as shown below does not work!
    trainer.log_every_n_steps += 100
    trainer.callbacks = [...]

An alternative would be to create a new Trainer instance in each iteration. However, that way the optimizer state (and with it, e.g., the current learning rate) would not be preserved.

Is there some way to modify parameters of the Trainer after the fact, or can I somehow create a new instance that continues training with the same learning rate and optimizer state?
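
Concretely, the checkpoint route I have in mind would look roughly like the sketch below. I am assuming (but am not sure) that trainer.save_checkpoint() stores the optimizer and LR scheduler state, and that passing ckpt_path to fit() restores it, so that a fresh Trainer would continue at the same learning rate:

import pytorch_lightning as pl

model = ...
train_dataloader = ...
ckpt_path = None

for stage in range(num_stages):
    trainer = pl.Trainer(
        max_steps=(stage + 1) * max_steps,   # extend the step budget each stage
        log_every_n_steps=50 + stage * 100,  # hypothetical per-stage setting
        callbacks=[...],                     # different callbacks per stage
    )
    # ckpt_path=None on the first stage starts from scratch; afterwards the
    # checkpoint should restore weights, optimizer, and LR scheduler state
    trainer.fit(model, train_dataloader, ckpt_path=ckpt_path)
    trainer.save_checkpoint("last_stage.ckpt")
    ckpt_path = "last_stage.ckpt"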

Thanks a lot!

I have a similar question. I am trying to find a way to call trainer.fit() in a loop, updating the dataloader in each iteration while continuing training with the same optimizer state.

Is there any way to do this?


It’s not possible to do exactly what you described. But couldn’t you just have a single trainer.fit() call and instead re-load the dataloader every epoch with Trainer(reload_dataloaders_every_n_epochs=1)? That way you can train the same model/optimizer for many epochs and switch dynamically between your dataloader(s).
You would just put conditional logic in your train_dataloader() method, either in the LightningModule or in the LightningDataModule.
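
A rough sketch of that approach, assuming a reasonably recent Lightning version (the stage boundary, datasets, and batch size below are placeholders):

import pytorch_lightning as pl
from torch.utils.data import DataLoader

class MyModel(pl.LightningModule):
    # ... training_step, configure_optimizers, etc. as usual ...

    def train_dataloader(self):
        # Re-invoked every epoch because of reload_dataloaders_every_n_epochs=1,
        # so the data can change while model and optimizer state carry over.
        if self.current_epoch < 10:  # hypothetical stage boundary
            dataset = self.stage_one_dataset
        else:
            dataset = self.stage_two_dataset
        return DataLoader(dataset, batch_size=32)

trainer = pl.Trainer(max_epochs=20, reload_dataloaders_every_n_epochs=1)
trainer.fit(MyModel())

Since everything stays inside a single fit() call, the Trainer is never re-created, and the optimizer keeps its state across the switch.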