Bit-by-Bit: Progressive QAT Strategy with Outlier Channel Splitting for Stable Low-Bit LLMs
Authors: Binxing Xu, Hao Gu, Lujun Li, Hao Wang, Bei Liu, Jiacheng Liu, Qiyuan Zhu, Xintong Yang, Chao Li, Sirui Han, Yike Guo

Training LLMs at ultra-low precision remains a formidable challenge. Direct low-bit QAT often suffers from convergence instability and substantial training costs, exacerbated by quantization noise from heavy-tailed outlier channels and error accumulation across layers.
To address these issues, we present Bit-by-Bit, a progressive QAT framework with outlier channel splitting. Our approach integrates three key components: (1) block-wise progressive training that lowers precision stage by stage, providing a stable initialization for low-bit optimization; (2) nested integer quantization grids that enable a "train once, deploy at any precision" paradigm, allowing a single model to support multiple bit-widths without retraining; and (3) rounding-aware outlier channel splitting, which reduces quantization error while acting as an identity transform on the quantized outputs.
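For intuition, outlier channel splitting in its classic form is an exact identity in full precision: an outlier input channel's weight column is halved and duplicated, and the corresponding activation column is duplicated, so the layer output is unchanged while the outlier's magnitude is halved before quantization. The sketch below illustrates only that identity; the channel index, threshold-free selection, and helper name are illustrative assumptions, and the paper's rounding-aware variant (which additionally preserves the quantized outputs) is not reproduced here.

```python
import numpy as np

def split_outlier_channel(W, X, ch):
    """Split input channel `ch`: halve its weight column and duplicate the
    corresponding activation column. Exact identity in full precision."""
    W_split = np.concatenate([W, W[:, ch:ch + 1] / 2], axis=1)  # append halved copy
    W_split[:, ch] /= 2                                          # halve the original column
    X_split = np.concatenate([X, X[:, ch:ch + 1]], axis=1)       # duplicate the activation
    return W_split, X_split

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 16))          # [out_features, in_features]
W[:, 3] *= 20.0                           # make channel 3 a heavy-tailed outlier
X = rng.standard_normal((4, 16))          # [batch, in_features]

W_s, X_s = split_outlier_channel(W, X, ch=3)
print(np.allclose(X @ W.T, X_s @ W_s.T))                 # True: splitting preserves the output
print(np.abs(W[:, 3]).max(), np.abs(W_s[:, 3]).max())    # outlier magnitude is halved
```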
Furthermore, we adopt microscaling groups with E4M3 scales, capturing dynamic activation ranges in line with the OCP/NVIDIA microscaling standards. To address the lack of efficient 2-bit kernels, we develop custom operators for both W2A2 and W2A16 configurations, achieving up to 11$\times$ speedup over BF16. Under the W2A2 setting, Bit-by-Bit significantly outperforms baselines such as BitDistiller and EfficientQAT on both Llama2 and Llama3, with a WikiText2 perplexity gap of only 2.25 relative to the full-precision models.
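As a point of reference, microscaling-style group quantization assigns one shared FP8 (E4M3) scale to each small group of contiguous elements. The sketch below is a minimal PyTorch illustration under assumed choices (32-element groups as in the OCP MX convention, a symmetric integer quantizer, and a round-trip through torch.float8_e4m3fn to store the scales); it is not the paper's kernel implementation.

```python
import torch

def mx_group_quantize(x, bits=4, group_size=32):
    """Symmetric integer quantization with one E4M3 scale per group of
    `group_size` contiguous elements along the last dimension."""
    qmax = 2 ** (bits - 1) - 1                       # e.g. 7 for INT4
    orig_shape = x.shape
    g = x.reshape(-1, group_size)                    # [num_groups, group_size]
    scale = g.abs().amax(dim=-1, keepdim=True) / qmax
    scale = scale.clamp(min=1e-8)
    # Store each group's scale in FP8 E4M3 (round-trip through the dtype).
    scale = scale.to(torch.float8_e4m3fn).to(torch.float32)
    q = torch.clamp(torch.round(g / scale), -qmax, qmax)
    return (q * scale).reshape(orig_shape)           # dequantized tensor

x = torch.randn(2, 128)
x_q4 = mx_group_quantize(x, bits=4)
x_q2 = mx_group_quantize(x, bits=2)
print((x - x_q4).abs().mean().item(), (x - x_q2).abs().mean().item())
```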
Subjects: Machine Learning (cs.LG)
Cite as: arXiv:2604.07888 [cs.LG] (or arXiv:2604.07888v1 [cs.LG] for this version)
DOI: https://doi.org/10.48550/arXiv.2604.07888
Submission history
From: Hao Gu
[v1] Thu, 9 Apr 2026 06:56:39 UTC (1,542 KB)