
LLaMA

LLaMA is a family of foundational large language models released by Meta AI. LLaMA comes in four size variants: 7B, 13B, 33B, and 65B parameters. The accompanying paper shows that training smaller foundation models on a sufficiently large number of tokens is desirable: such models can match or exceed much larger models while requiring far less compute and memory at inference time; for example, LLaMA-13B outperforms GPT-3 (175B) on most benchmarks. The 65B-parameter model was trained on 1.4 trillion tokens, while the LLaMA 7B model was trained on 1 trillion tokens.
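As a minimal sketch of how a LLaMA checkpoint can be used in practice, the snippet below loads the 7B variant for text generation with the Hugging Face transformers library. It assumes the weights have already been obtained and converted to the Hugging Face format; the local path is a placeholder, not an official model identifier.

```python
from transformers import LlamaForCausalLM, LlamaTokenizer

# Placeholder path: point this at a locally converted LLaMA 7B checkpoint.
checkpoint = "path/to/llama-7b-hf"

tokenizer = LlamaTokenizer.from_pretrained(checkpoint)
model = LlamaForCausalLM.from_pretrained(checkpoint)

# Generate a short continuation from a prompt.
inputs = tokenizer("Large language models are", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The same pattern applies to the larger variants; only the checkpoint path and the hardware requirements change.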

Related content

Accelerating LLaMA with Fabric: A Comprehensive Guide to Training and Fine-Tuning LLaMA
LLaMA-Adapter pseudo-code
Understanding Parameter-Efficient Finetuning of Large Language Models: From Prefix Tuning to LLaMA-Adapters
How to Contribute to Lit-GPT and Lit-LLaMA
The Ultimate Battle of Language Models: Lit-LLaMA vs GPT3.5 vs Bloom vs …