Published April 24, 2026 at 4:00 AM
MixLLM: LLM Quantization with Global Mixed-precision between Output-features and Highly-efficient System Design
Publisher summary (verbatim):
arXiv:2412.14590v2 Announce Type: replace Abstract: Quantization has become one of the most effective methodologies for compressing LLMs into a smaller size. However, existing quantization solutions still suffer from either a non-negligible accuracy drop or low system efficiency. In this paper, …
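To make the abstract's core idea concrete, the sketch below illustrates one plausible reading of mixed-precision quantization between output features: each output feature (weight-matrix row) is quantized symmetrically, and the most salient rows receive a wider bit-width than the rest. This is a minimal toy sketch, not the paper's actual algorithm; all function names, the salience proxy, and the 8-bit/4-bit split are illustrative assumptions.

```python
# Hedged sketch of per-output-feature mixed-precision quantization.
# None of these names come from the MixLLM paper; they are illustrative.

def quantize_row(row, bits):
    """Symmetric uniform quantization of one output feature (row)."""
    qmax = 2 ** (bits - 1) - 1                # e.g. 127 for 8-bit, 7 for 4-bit
    scale = max(abs(x) for x in row) / qmax or 1.0
    q = [max(-qmax - 1, min(qmax, round(x / scale))) for x in row]
    return [v * scale for v in q]             # dequantized approximation

def mixed_precision_quantize(weight, salience, high_frac=0.25):
    """Assign 8 bits to the top `high_frac` most salient output
    features and 4 bits to the rest (assumed split, for illustration)."""
    order = sorted(range(len(weight)), key=lambda i: -salience[i])
    high = set(order[: max(1, int(high_frac * len(weight)))])
    return [quantize_row(row, 8 if i in high else 4)
            for i, row in enumerate(weight)]

# Toy weight matrix: rows are output features.
weight = [[0.12, -0.5, 0.33], [2.0, -1.7, 0.9], [0.05, 0.07, -0.02]]
salience = [sum(abs(x) for x in row) for row in weight]  # toy salience proxy
wq = mixed_precision_quantize(weight, salience)
```

Here the second row, which dominates the toy salience measure, keeps 8-bit precision while the others drop to 4-bit, trading a small reconstruction error on low-salience rows for a smaller overall footprint.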
Originally published on arXiv.