I am training a model where I need to calculate the gradients of the loss function with respect to the model parameters only after all the training batches have been used. Can I use train_epoch_end(), pass the losses of each individual batch to it, and call optimizer_step() there? Otherwise the model parameters will be updated after every batch.
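Below is a minimal sketch of one way this could be done, assuming a recent PyTorch Lightning (2.x) release: manual optimization is enabled, each batch's loss is stored in `training_step`, and a single backward pass plus optimizer step is performed in the `on_train_epoch_end` hook (the current name of the epoch-end hook). `EpochEndOptimizationModule` and the toy linear layer are hypothetical names for illustration only.

```python
import torch
import lightning.pytorch as pl


class EpochEndOptimizationModule(pl.LightningModule):
    """Sketch: defer the optimizer step until the end of the training epoch."""

    def __init__(self):
        super().__init__()
        # Disable automatic optimization so Lightning does not step per batch.
        self.automatic_optimization = False
        self.layer = torch.nn.Linear(10, 1)  # hypothetical toy model
        self._epoch_losses = []

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = torch.nn.functional.mse_loss(self.layer(x), y)
        # Keep the per-batch losses (and their graphs) so we can backpropagate later.
        self._epoch_losses.append(loss)
        return loss

    def on_train_epoch_end(self):
        opt = self.optimizers()
        opt.zero_grad()
        # Single backward pass and parameter update over the whole epoch.
        total_loss = torch.stack(self._epoch_losses).sum()
        self.manual_backward(total_loss)
        opt.step()
        self._epoch_losses.clear()

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.01)
```

One caveat with this approach: storing every batch's loss keeps the computation graphs of the whole epoch in memory. If that is too costly, an alternative under the same manual-optimization setup is to call `self.manual_backward(loss)` inside `training_step` for each batch (accumulating gradients) and only call `opt.step()` / `opt.zero_grad()` in `on_train_epoch_end`, which yields the same summed-loss gradient.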