Pretraining
Pretraining an LLM means training the model on a large corpus of unlabeled text with a self-supervised objective (typically next-token prediction) so it learns general language patterns and broad foundational knowledge. The pretrained model can then be fine-tuned on a smaller, task-specific dataset, which reduces the labeled data and training time required while still achieving strong performance on downstream NLP tasks.
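As a rough illustration of the self-supervised objective, the sketch below shows a single pretraining step in PyTorch: a tiny toy transformer (`TinyLM`) predicts each next token in a batch of unlabeled sequences, so the "labels" are simply the input shifted by one position. The model size, hyperparameters, and random token IDs are illustrative assumptions, not a real training setup.

```python
import torch
import torch.nn as nn

vocab_size, d_model, seq_len, batch_size = 1000, 64, 32, 8

class TinyLM(nn.Module):
    """A deliberately small causal language model for illustration only."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, tokens):
        # Causal mask so each position only attends to earlier tokens.
        mask = nn.Transformer.generate_square_subsequent_mask(tokens.size(1))
        hidden = self.encoder(self.embed(tokens), mask=mask)
        return self.head(hidden)

model = TinyLM()
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)
loss_fn = nn.CrossEntropyLoss()

# Stand-in for tokenized unlabeled text; no human annotation is needed
# because the targets are just the inputs shifted by one token.
tokens = torch.randint(0, vocab_size, (batch_size, seq_len))
inputs, targets = tokens[:, :-1], tokens[:, 1:]

logits = model(inputs)
loss = loss_fn(logits.reshape(-1, vocab_size), targets.reshape(-1))
loss.backward()
optimizer.step()
print(f"pretraining loss: {loss.item():.3f}")
```

Fine-tuning reuses the same weights but continues training on a smaller labeled dataset (for example, swapping the language-modeling head for a classification head), which is why far less task-specific data is needed.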