Initialisation Determines the Basin: Efficient Codebook Optimisation for Extreme LLM Quantization
Additive quantization enables extreme LLM compression with O(1) lookup-table dequantization, making it attractive for edge deployment. Yet at 2-bit precision, it often fails catastrophically, even with extensive search and fine-tuning. We show that the dominant bottleneck is codebook initialisation. Greedy sequential initialisation frequently places the model in poor optimisation regions that subsequent beam search and PV-tuning struggle to overcome.
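To make the dequantization cost concrete, the sketch below reconstructs each weight group as a sum of codewords drawn from M codebooks, so decoding a group needs only a constant number of table lookups and additions regardless of bit width. The shapes, the two-codebook setup, and the NumPy names are illustrative assumptions, not the paper's configuration.

# Minimal sketch of additive-quantization dequantization (NumPy).
# The 2-codebook, 256-codeword, group-size-8 setup (2 bits per weight)
# is an illustrative assumption, not the paper's exact configuration.
import numpy as np

rng = np.random.default_rng(0)

M, K, g = 2, 256, 8          # M codebooks, K codewords each, group size g
num_groups = 1024            # number of weight groups to reconstruct

# Learned codebooks: M tables of K codewords, each of dimension g.
codebooks = rng.standard_normal((M, K, g)).astype(np.float32)

# Stored compressed weights: one index per codebook per group
# (2 * log2(256) = 16 bits per group of 8 weights -> 2 bits per weight).
codes = rng.integers(0, K, size=(num_groups, M))

# Dequantization: for every group, look up the selected codeword in each
# codebook and add them, a constant amount of work per group.
weights = codebooks[np.arange(M), codes].sum(axis=1)   # (num_groups, g)
print(weights.shape)                                    # (1024, 8)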
We analyse this behaviour through the representational ratio ρ = N/KM, which characterises the relationship between weight groups and codebook capacity, and propose OA-EM, an output-aware EM initialisation method using Hessian-weighted Mahalanobis distance. Across compression rates, search budgets, and three architectures (Llama 3.2 3B, Llama 3.1 8B, Qwen 2.5 3B), OA-EM consistently produces better solutions after PV-tuning and dominates the quality-compute frontier.
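As a rough illustration of the ingredients named above, the following sketch runs a k-means-style EM loop whose assignment step uses a Hessian-weighted Mahalanobis distance (w - c)ᵀ H (w - c). The shared Hessian proxy H built from calibration activations, the plain-mean update step, and the name oa_em_init are assumptions for illustration only, not the paper's exact OA-EM procedure.

# Illustrative sketch: EM-style codebook initialisation whose E-step uses a
# Hessian-weighted Mahalanobis distance. Helper names, the shared-H
# simplification, and the plain-mean M-step are assumptions, not the
# paper's exact OA-EM algorithm.
import numpy as np

def oa_em_init(groups, H, K, iters=10, seed=0):
    """groups: (N, g) weight groups; H: (g, g) PSD proxy for output sensitivity."""
    rng = np.random.default_rng(seed)
    centroids = groups[rng.choice(len(groups), K, replace=False)].copy()
    for _ in range(iters):
        # E-step: Mahalanobis distance of every group to every centroid.
        diff = groups[:, None, :] - centroids[None, :, :]        # (N, K, g)
        dist = np.einsum('nkg,gh,nkh->nk', diff, H, diff)        # (N, K)
        assign = dist.argmin(axis=1)
        # M-step: with a single shared H, the per-cluster minimiser of the
        # quadratic objective is the plain mean of the assigned groups.
        for k in range(K):
            members = groups[assign == k]
            if len(members):
                centroids[k] = members.mean(axis=0)
    return centroids, assign

# Toy usage, assuming H ~ XᵀX from calibration activations (a common
# output-aware Hessian proxy) and K = 16 codewords.
rng = np.random.default_rng(1)
X = rng.standard_normal((256, 8)).astype(np.float32)    # calibration activations
H = X.T @ X / len(X)                                     # (g, g) Hessian proxy
W = rng.standard_normal((2048, 8)).astype(np.float32)   # 2048 groups of size 8
centroids, assign = oa_em_init(W, H, K=16)
print(centroids.shape, assign.shape)                     # (16, 8) (2048,)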
The severity of the bottleneck scales with ρ: moderate at 3 bpp but extreme at 2 bpp, where poor initialisation can degrade perplexity by orders of magnitude. More broadly, our results highlight the importance of optimisation geometry in compressed model spaces, where initialisation can dominate subsequent search and fine-tuning.

Comments: 9 pages (+ references and appendix). Under review at ACL Rolling Review
Subjects: Computation and Language (cs.CL); Machine Learning (cs.LG)
Cite as: arXiv:2604.08118 [cs.CL] (or arXiv:2604.08118v1 [cs.CL] for this version)
DOI: https://doi.org/10.48550/arXiv.2604.08118
Submission history: [v1] Thu, 9 Apr 2026 11:38:24 UTC (39 KB), submitted by Ian Kennedy