
Fine-Tuning Scheduler

  • Author: Dan Dale

  • License: CC BY-SA

  • Generated: 2023-03-15T10:44:42.950992

This notebook introduces the Fine-Tuning Scheduler extension and demonstrates its use to fine-tune a small foundation model on the RTE task of SuperGLUE, with iterative early-stopping defined according to a user-specified schedule. It uses Hugging Face’s datasets and transformers libraries to retrieve the relevant benchmark data and foundation model weights. The required dependencies are installed via the finetuning-scheduler [examples] extra.


Open In Colab

Give us a ⭐ on GitHub | Check out the documentation | Join us on Slack

Setup

This notebook requires some packages besides pytorch-lightning.

[1]:
! pip install --quiet "setuptools==67.4.0" "lightning>=2.0.0rc0" "ipython[notebook]>=8.0.0, <8.12.0" "datasets<2.8.0" "torch>=1.8.1, <1.14.0" "pytorch-lightning>=1.4, <2.0.0" "finetuning-scheduler[examples]>=0.4.0" "torchmetrics>=0.7, <0.12"

Scheduled Fine-Tuning with the Fine-Tuning Scheduler Extension

Fine-Tuning Scheduler logo

The Fine-Tuning Scheduler extension accelerates and enhances model experimentation with flexible fine-tuning schedules.

Training with the extension is simple and confers a host of benefits:

  • dramatically increases fine-tuning flexibility

  • expedites and facilitates exploration of model tuning dynamics

  • enables marginal performance improvements of fine-tuned models

Setup is straightforward: just install from PyPI! Since this notebook-based example requires a few additional packages (e.g. transformers, sentencepiece), we installed the finetuning-scheduler package with the [examples] extra above. Once the finetuning-scheduler package is installed, the FinetuningScheduler callback is available for use with PyTorch Lightning. For additional installation options, please see the Fine-Tuning Scheduler README.

Fundamentally, Fine-Tuning Scheduler enables scheduled, multi-phase fine-tuning of foundation models. Gradual unfreezing (i.e. thawing) can help maximize foundation model knowledge retention while allowing (typically the upper layers of) the model to optimally adapt to new tasks during transfer learning [1, 2, 3].

The FinetuningScheduler callback orchestrates the gradual unfreezing of models via a fine-tuning schedule that is either implicitly generated (the default) or explicitly provided by the user (more computationally efficient). Fine-tuning phase transitions are driven by FTSEarlyStopping criteria (a multi-phase extension of EarlyStopping packaged with FinetuningScheduler), user-specified epoch transitions or a composition of the two (the default mode). A FinetuningScheduler training session completes when the final phase of the schedule has its stopping criteria met. See the early stopping documentation for more details on that callback’s configuration.

FinetuningScheduler explicit loss animation
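
The default FTSEarlyStopping and FTSCheckpoint callbacks can also be composed explicitly, for example to adjust the early-stopping patience applied to each phase. The following is a minimal sketch; the monitor and patience values shown are illustrative assumptions rather than settings used in this notebook:

from pytorch_lightning import Trainer
from finetuning_scheduler import FinetuningScheduler, FTSCheckpoint, FTSEarlyStopping

# explicitly configure the FTS callbacks rather than relying on the defaults
callbacks = [
    FinetuningScheduler(),  # implicit schedule; transitions driven by FTSEarlyStopping
    FTSEarlyStopping(monitor="val_loss", patience=2),  # illustrative patience value
    FTSCheckpoint(monitor="val_loss", save_top_k=1),
]
trainer = Trainer(callbacks=callbacks)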

Basic Usage

If no fine-tuning schedule is provided by the user, FinetuningScheduler will generate a default schedule and proceed to fine-tune according to the generated schedule, using default FTSEarlyStopping and FTSCheckpoint callbacks with monitor=val_loss.

from pytorch_lightning import Trainer
from finetuning_scheduler import FinetuningScheduler
trainer = Trainer(callbacks=[FinetuningScheduler()])

The Default Fine-Tuning Schedule

Schedule definition is facilitated via the gen_ft_schedule method, which dumps a default fine-tuning schedule (by default using a naive, 2-parameters-per-level heuristic) that can be adjusted as desired by the user and/or subsequently passed to the callback. Using the default/implicitly generated schedule will likely be less computationally efficient than a user-defined fine-tuning schedule, but it is useful for exploring a model’s fine-tuning behavior and can serve as a good baseline for subsequent explicit schedule refinement. While the current version of FinetuningScheduler only supports a single optimizer and (optional) lr_scheduler configuration, per-phase maximum learning rates can be set as demonstrated in the next section.

Specifying a Fine-Tuning Schedule

To specify a fine-tuning schedule, it’s convenient to first generate the default schedule and then alter the thawed/unfrozen parameter groups associated with each fine-tuning phase as desired. Fine-tuning phases are zero-indexed and executed in ascending order.

  1. First, generate the default schedule to Trainer.log_dir. It will be named after your LightningModule subclass with the suffix _ft_schedule.yaml.

from pytorch_lightning import Trainer
from finetuning_scheduler import FinetuningScheduler
trainer = Trainer(callbacks=[FinetuningScheduler(gen_ft_sched_only=True)])
  2. Alter the schedule as desired.

side_by_side_yaml

  3. Once the fine-tuning schedule has been altered as desired, pass it to FinetuningScheduler to commence scheduled training:

from pytorch_lightning import Trainer
from finetuning_scheduler import FinetuningScheduler

trainer = Trainer(callbacks=[FinetuningScheduler(ft_schedule="/path/to/my/schedule/my_schedule.yaml")])

Early-Stopping and Epoch-Driven Phase Transition Criteria

By default, FTSEarlyStopping and epoch-driven transition criteria are composed. If a max_transition_epoch is specified for a given phase, the next fine-tuning phase will begin at that epoch unless FTSEarlyStopping criteria are met first. If FinetuningScheduler.epoch_transitions_only is True, FTSEarlyStopping will not be used and transitions will be exclusively epoch-driven.
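
For example, phase transitions can be made exclusively epoch-driven as in the sketch below (the schedule path is the hypothetical one used above; in this mode every phase of the schedule is expected to define a max_transition_epoch):

from pytorch_lightning import Trainer
from finetuning_scheduler import FinetuningScheduler

# phase transitions will occur only at each phase's max_transition_epoch
trainer = Trainer(
    callbacks=[
        FinetuningScheduler(
            ft_schedule="/path/to/my/schedule/my_schedule.yaml",
            epoch_transitions_only=True,
        )
    ]
)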

Tip: Use of regular expressions can be convenient for specifying more complex schedules. Also, a per-phase base maximum lr can be specified:
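
For instance, a schedule along the following lines combines fully enumerated parameters, regex-specified parameter groups, a per-phase lr and epoch-driven transition caps. The parameter names and values below are illustrative placeholders rather than the actual layer names of the model used in this notebook:

0:
  params: # parameters can be fully enumerated...
  - model.classifier.bias
  - model.classifier.weight
  max_transition_epoch: 3
1:
  params: # ...or specified via regex
  - model.pooler.*
  lr: 1.0e-06 # per-phase maximum lr
  max_transition_epoch: 2
2:
  params: # remaining parameters, thawed in the final phase
  - model.encoder.*
  - model.embeddings.*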