How to implement Linear Probing for first N epochs and then switch to fine-tuning?

Hi @Konrad, you can use the BaseFinetuning callback to achieve this.

You will need to override the freeze_before_training and finetune_function methods: freeze the backbone in freeze_before_training so only the head trains during the linear-probing phase, then in finetune_function unfreeze it once the desired epoch is reached (or unfreeze one top layer at the start of each epoch if you prefer gradual unfreezing). A sketch is shown below. Let me know if you run into any issues while implementing it.
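Here is a minimal sketch of that idea, assuming a pytorch_lightning 1.x install and a LightningModule that exposes a `backbone` attribute; the class name `LinearProbeThenFinetune` and the `unfreeze_at_epoch` parameter are just illustrative and should be adapted to your model.

```python
import pytorch_lightning as pl
from pytorch_lightning.callbacks import BaseFinetuning


class LinearProbeThenFinetune(BaseFinetuning):
    def __init__(self, unfreeze_at_epoch: int = 10):
        super().__init__()
        self._unfreeze_at_epoch = unfreeze_at_epoch

    def freeze_before_training(self, pl_module):
        # Linear probing: keep the backbone frozen so only the head trains.
        self.freeze(pl_module.backbone)

    def finetune_function(self, pl_module, current_epoch, optimizer, opt_idx):
        # Switch to full fine-tuning once `unfreeze_at_epoch` is reached.
        # (Note: in Lightning >= 2.0 this hook no longer receives `opt_idx`.)
        if current_epoch == self._unfreeze_at_epoch:
            self.unfreeze_and_add_param_group(
                modules=pl_module.backbone,
                optimizer=optimizer,
                train_bn=True,
            )


# Pass the callback to the Trainer, e.g.:
# trainer = pl.Trainer(callbacks=[LinearProbeThenFinetune(unfreeze_at_epoch=5)])
```

unfreeze_and_add_param_group also accepts an lr argument if you want the newly unfrozen backbone to train with a smaller learning rate than the head.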

Also, we are moving support and community discussions from this forum to GitHub Discussions, as it makes questions more discoverable and keeps all the knowledge in one place!