Prefix tuning
Prefix-tuning is a lightweight alternative to fine-tuning for natural language generation tasks. It keeps the language model's parameters frozen and instead optimizes a small, continuous, task-specific vector called the prefix. Inspired by prompting, prefix-tuning lets subsequent tokens attend to the prefix as if it were a sequence of "virtual tokens".
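The idea can be sketched in a toy attention layer: the base projection weights stay frozen, and the only trainable parameters are prefix key/value vectors prepended to every attention computation. This is a minimal NumPy illustration with random weights and made-up dimensions, not the paper's actual implementation (which inserts prefixes at every transformer layer and reparameterizes them through an MLP during training):

```python
import numpy as np

rng = np.random.default_rng(0)
d, prefix_len, seq_len = 8, 4, 5  # toy sizes, chosen for illustration

# Frozen "language model" projection weights (never updated).
W_q = rng.standard_normal((d, d))
W_k = rng.standard_normal((d, d))
W_v = rng.standard_normal((d, d))

# The only trainable parameters: a continuous prefix of key/value
# vectors ("virtual tokens") that real tokens can attend to.
prefix_k = rng.standard_normal((prefix_len, d)) * 0.1
prefix_v = rng.standard_normal((prefix_len, d)) * 0.1

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_with_prefix(x):
    """Self-attention where tokens also attend to the learned prefix."""
    q = x @ W_q
    k = np.concatenate([prefix_k, x @ W_k], axis=0)  # prepend prefix keys
    v = np.concatenate([prefix_v, x @ W_v], axis=0)  # prepend prefix values
    scores = softmax(q @ k.T / np.sqrt(d))           # (seq_len, prefix_len + seq_len)
    return scores @ v

x = rng.standard_normal((seq_len, d))  # embeddings of the real input tokens
out = attention_with_prefix(x)
print(out.shape)  # one output vector per real token: (5, 8)
```

During training, gradients would flow only into `prefix_k` and `prefix_v`, so the stored per-task state is just the prefix, a tiny fraction of the full model.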