
Mixed-precision Training


https://arxiv.org/pdf/1710.03740.pdf

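The paper linked above keeps an FP32 master copy of the weights, runs the forward and backward passes in FP16, and scales the loss so that small gradient values are not flushed to zero in FP16. Below is a minimal sketch of that recipe using PyTorch's AMP utilities; the toy model, batch, and hyperparameters are placeholders, not from the paper.

```python
import torch
import torch.nn as nn

model = nn.Linear(1024, 1024).cuda()              # toy model (placeholder)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scaler = torch.cuda.amp.GradScaler()              # dynamic loss scaling

for step in range(10):
    x = torch.randn(32, 1024, device="cuda")      # dummy batch
    target = torch.randn(32, 1024, device="cuda")

    optimizer.zero_grad(set_to_none=True)
    with torch.cuda.amp.autocast():               # forward/backward in FP16 where safe
        loss = nn.functional.mse_loss(model(x), target)

    scaler.scale(loss).backward()                 # scale loss so small grads survive FP16
    scaler.step(optimizer)                        # unscale grads, step the FP32 parameters
    scaler.update()                               # adjust the loss scale dynamically
```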


Example: Adam optimizer

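As a rough illustration of why the optimizer dominates memory in this setup: per parameter, a common mixed-precision layout holds an FP16 weight and gradient plus an FP32 master weight and two FP32 Adam moments. A back-of-envelope calculation under that assumed layout (not figures from the paper):

```python
def adam_mixed_precision_bytes(num_params: int) -> dict:
    weights_fp16 = 2 * num_params   # FP16 weights
    grads_fp16 = 2 * num_params     # FP16 gradients
    master_fp32 = 4 * num_params    # FP32 master copy of the weights
    adam_m_fp32 = 4 * num_params    # FP32 first moment
    adam_v_fp32 = 4 * num_params    # FP32 second moment
    return {
        "weights + grads (FP16)": weights_fp16 + grads_fp16,
        "optimizer states (FP32 master + m + v)": master_fp32 + adam_m_fp32 + adam_v_fp32,
        "total": 16 * num_params,
    }

# A 7B-parameter model: ~112 GB before counting activations or framework overhead.
for name, nbytes in adam_mixed_precision_bytes(7_000_000_000).items():
    print(f"{name}: {nbytes / 1e9:.0f} GB")
```

Under this accounting, about 12 of the 16 bytes per parameter sit in optimizer state (counting the master copy), which is exactly the part GaLore targets below.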

https://github.com/jiaweizzhao/galore

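GaLore (Gradient Low-Rank Projection) cuts optimizer memory by projecting each weight matrix's gradient into a low-rank subspace, keeping the Adam moments in that small space, and projecting the update back before applying it. The sketch below is a simplified, single-matrix illustration of that idea, not the repository's implementation; galore_adam_step and its defaults are invented here, and the hyperparameter names (rank, update_proj_gap, scale) are chosen for readability rather than taken from the repo's exact API.

```python
import torch

def galore_adam_step(W, grad, state, rank=128, update_proj_gap=200,
                     lr=1e-3, betas=(0.9, 0.999), eps=1e-8, scale=0.25):
    # One GaLore-style Adam step for a single 2-D weight matrix (illustrative only).
    step = state.setdefault("step", 0)

    # Refresh the projector every `update_proj_gap` steps from the top-`rank`
    # left singular vectors of the current gradient.
    if "P" not in state or step % update_proj_gap == 0:
        U, _, _ = torch.linalg.svd(grad, full_matrices=False)
        state["P"] = U[:, :rank]                        # (m, rank) projector
    P = state["P"]

    R = P.T @ grad                                      # low-rank gradient, (rank, n)
    m = state.setdefault("m", torch.zeros_like(R))      # Adam moments live in the
    v = state.setdefault("v", torch.zeros_like(R))      # small (rank, n) space
    m.mul_(betas[0]).add_(R, alpha=1 - betas[0])
    v.mul_(betas[1]).addcmul_(R, R, value=1 - betas[1])
    m_hat = m / (1 - betas[0] ** (step + 1))
    v_hat = v / (1 - betas[1] ** (step + 1))

    W -= lr * scale * (P @ (m_hat / (v_hat.sqrt() + eps)))  # project back and apply
    state["step"] = step + 1

# Toy usage: one 4096x1024 weight with random gradients.
W = torch.randn(4096, 1024)
state = {}
for _ in range(3):
    galore_adam_step(W, torch.randn(4096, 1024), state)
```

In practice the repository packages this as drop-in optimizer classes rather than a per-matrix helper like the one above; see its README for the supported API, including the 8-bit variant.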


From the GaLore paper's abstract: the approach reduces memory usage by up to 65.5% in optimizer states while maintaining both efficiency and performance for pre-training LLaMA 1B and 7B architectures on the C4 dataset with up to 19.7B tokens, and for fine-tuning RoBERTa on GLUE tasks. The 8-bit GaLore variant further reduces optimizer memory by up to 82.5% and total training memory by 63.3% compared to a BF16 baseline. Notably, the authors demonstrate, for the first time, the feasibility of pre-training a 7B model on a consumer GPU with 24 GB of memory (e.g., an NVIDIA RTX 4090) without model parallelism, checkpointing, or offloading strategies.
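To see where optimizer-state savings of this magnitude can come from, compare full-rank Adam moments against rank-r moments for a single m x n weight matrix. The shapes, rank, and percentage below are illustrative arithmetic with assumed values, not the paper's measured numbers.

```python
def adam_state_bytes(m, n, dtype_bytes=4):
    # Full Adam keeps two moment tensors of shape (m, n).
    return 2 * m * n * dtype_bytes

def galore_state_bytes(m, n, rank, dtype_bytes=4):
    # GaLore-style Adam keeps two (rank, n) moment tensors plus an (m, rank) projector.
    return (2 * rank * n + m * rank) * dtype_bytes

m, n, r = 4096, 4096, 512   # assumed layer shape and rank, purely for illustration
full, low = adam_state_bytes(m, n), galore_state_bytes(m, n, r)
print(f"full-rank Adam states:  {full / 2**20:.0f} MiB")
print(f"rank-{r} GaLore states: {low / 2**20:.0f} MiB "
      f"({100 * (1 - low / full):.1f}% smaller)")
```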