Introducing Lit-GPT: Hackable implementation of open-source large language models released under Apache 2.0

Author: Sebastian Raschka

Optimizing LLMs from a Dataset Perspective
The NeurIPS 2023 LLM Efficiency Challenge Starter Guide
Optimizing Memory Usage for Training LLMs and Vision Transformers in PyTorch
Finetuning Falcon LLMs More Efficiently With LoRA and Adapters
Accelerating Large Language Models with Mixed-Precision Techniques
Parameter-Efficient LLM Finetuning With Low-Rank Adaptation (LoRA)
Understanding Parameter-Efficient Finetuning of Large Language Models: From Prefix Tuning to LLaMA-Adapters
Finetuning LLMs on a Single GPU Using Gradient Accumulation
How to Speed Up PyTorch Model Training
How To Build a Super Resolution GAN Demo in Lightning AI