Author: Sebastian Raschka

Doubling Neural Network Finetuning Efficiency with 16-bit Precision Techniques
Finetuning LLMs with LoRA and QLoRA: Insights from Hundreds of Experiments
Optimizing LLMs from a Dataset Perspective
The NeurIPS 2023 LLM Efficiency Challenge Starter Guide
Optimizing Memory Usage for Training LLMs and Vision Transformers in PyTorch
Finetuning Falcon LLMs More Efficiently With LoRA and Adapters
Accelerating Large Language Models with Mixed-Precision Techniques
Parameter-Efficient LLM Finetuning With Low-Rank Adaptation (LoRA)
Understanding Parameter-Efficient Finetuning of Large Language Models: From Prefix Tuning to LLaMA-Adapters
Finetuning LLMs on a Single GPU Using Gradient Accumulation