

Instruction Tuning

Instruction tuning, in the context of large language models, is a fine-tuning technique in which a pretrained model is further trained on a dataset of instruction–response pairs. By learning to map natural-language instructions (and any accompanying input) to the desired outputs, the model becomes better at following prompts it has not seen before, producing more relevant and accurate responses for a given instruction or context.
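
Below is a minimal sketch of the idea: instruction–response pairs are formatted with a prompt template and the model is trained with a standard causal language-modeling loss. The Alpaca-style template, the `gpt2` checkpoint, and the toy example data are illustrative assumptions, not a prescribed recipe.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Toy instruction/response pairs; in practice these come from an
# instruction dataset such as Alpaca or Dolly (assumption for illustration).
examples = [
    {"instruction": "Summarize: The cat sat on the mat.",
     "response": "A cat sat on a mat."},
]

# Prompt template in the style popularized by Alpaca (one common choice,
# not a fixed standard).
TEMPLATE = "### Instruction:\n{instruction}\n\n### Response:\n{response}"

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # small model for illustration
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained("gpt2")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
for example in examples:
    text = TEMPLATE.format(**example) + tokenizer.eos_token
    batch = tokenizer(text, return_tensors="pt")
    # Causal language-modeling loss: the model learns to produce the
    # response conditioned on the instruction. (Many recipes mask the loss
    # on the instruction tokens; omitted here for brevity.)
    outputs = model(**batch, labels=batch["input_ids"])
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

In practice this loop runs over a large instruction dataset for multiple epochs, often combined with parameter-efficient methods such as LoRA (see the related articles below).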

Related content

How To Finetune GPT Like Large Language Models on a Custom Dataset
Finetuning Falcon LLMs More Efficiently With LoRA and Adapters
Finetuning LLMs with LoRA and QLoRA: Insights from Hundreds of Experiments