Fine-Tuning Scheduler¶
Author: Dan Dale
License: CC BY-SA
Generated: 2023-10-04T00:59:41.547882
This notebook introduces the Fine-Tuning Scheduler extension and demonstrates its use to fine-tune a small foundation model on the RTE task of SuperGLUE, with iterative early-stopping defined according to a user-specified schedule. It uses Hugging Face’s datasets and transformers libraries to retrieve the relevant benchmark data and foundation model weights. The required dependencies are installed via the finetuning-scheduler [examples] extra.
Setup¶
This notebook requires some packages besides pytorch-lightning.
[1]:
! pip install --quiet "finetuning-scheduler[examples]>=2.0.0" "setuptools>=68.0.0, <68.3.0" "urllib3" "torchmetrics>=0.7, <1.3" "torch>=1.12.1, <2.1.0" "matplotlib>=3.0.0, <3.9.0" "pytorch-lightning>=1.4, <2.1.0" "ipython[notebook]>=8.0.0, <8.17.0"
Scheduled Fine-Tuning with the Fine-Tuning Scheduler Extension¶
The Fine-Tuning Scheduler extension accelerates and enhances model experimentation with flexible fine-tuning schedules.
Training with the extension is simple and confers a host of benefits:
- dramatically increases fine-tuning flexibility
- expedites and facilitates exploration of model tuning dynamics
- enables marginal performance improvements of fine-tuned models
Setup is straightforward: just install from PyPI! Since this notebook-based example requires a few additional packages (e.g. transformers, sentencepiece), we installed the finetuning-scheduler package with the [examples] extra above. Once the finetuning-scheduler package is installed, the FinetuningScheduler callback (FTS) is available for use with Lightning. For additional installation options, please see the Fine-Tuning Scheduler README.
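Concretely, enabling scheduled fine-tuning is just a matter of adding the callback to a Trainer. The sketch below assumes the pytorch-lightning < 2.1 environment pinned in the install cell above; the Trainer import path would differ under the unified lightning package.

[ ]:
from pytorch_lightning import Trainer
from finetuning_scheduler import FinetuningScheduler

# Adding FinetuningScheduler with no arguments is sufficient: a default
# fine-tuning schedule will be generated and applied during training.
trainer = Trainer(callbacks=[FinetuningScheduler()])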
Fundamentally, Fine-Tuning Scheduler enables scheduled, multi-phase fine-tuning of foundation models. Gradual unfreezing (i.e. thawing) can help maximize foundation model knowledge retention while allowing (typically the upper layers of) the model to optimally adapt to new tasks during transfer learning [1, 2, 3].
The FinetuningScheduler callback orchestrates the gradual unfreezing of models via a fine-tuning schedule that is either implicitly generated (the default) or explicitly provided by the user (more computationally efficient). Fine-tuning phase transitions are driven by FTSEarlyStopping criteria (a multi-phase extension of EarlyStopping packaged with FinetuningScheduler), user-specified epoch transitions, or a composition of the two (the default mode). A FinetuningScheduler training session completes when the final phase of the schedule has its stopping criteria met. See the early stopping documentation for more details on that callback’s configuration.
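To make the explicit-schedule mode concrete, here is a small sketch that writes an illustrative schedule to disk and passes it to the callback via its ft_schedule argument. The three-phase layout and the params, max_transition_epoch, and lr fields follow the schedule format described in the FTS documentation, but the parameter-name patterns themselves are hypothetical placeholders and would need to match the named parameters of your model.

[ ]:
from finetuning_scheduler import FinetuningScheduler

# An illustrative three-phase schedule: phase 0 thaws the task head, later
# phases progressively thaw lower layers. The parameter-name patterns below
# are hypothetical, not taken from a specific model.
explicit_schedule = """
0:
  params:
  - model.classifier.*
1:
  params:
  - model.pooler.*
  max_transition_epoch: 3
2:
  params:
  - model.encoder.*
  lr: 1.0e-06
"""

with open("explicit_schedule.yaml", "w") as f:
    f.write(explicit_schedule)

# Phase transitions are then driven by FTSEarlyStopping criteria and/or the
# per-phase max_transition_epoch values above.
fts_callback = FinetuningScheduler(ft_schedule="explicit_schedule.yaml")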
Basic Usage¶
If no fine-tuning schedule is provided by the user, FinetuningScheduler will generate a default schedule and proceed to fine-tune according to the generated schedule, using default FTSEarlyStopping and FTSCheckpoint callbacks with monitor=val_loss.
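In other words, the default configuration corresponds roughly to the explicit composition sketched below (assuming your LightningModule logs a val_loss metric; the Trainer import path again reflects the pytorch-lightning < 2.1 pin above).

[ ]:
from pytorch_lightning import Trainer
from finetuning_scheduler import FinetuningScheduler, FTSCheckpoint, FTSEarlyStopping

# Explicitly composing the callbacks that FinetuningScheduler would otherwise
# add by default, each monitoring the logged "val_loss" metric.
callbacks = [
    FinetuningScheduler(),
    FTSEarlyStopping(monitor="val_loss"),
    FTSCheckpoint(monitor="val_loss"),
]
trainer = Trainer(callbacks=callbacks)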