arXiv · April 13, 2026 at 4:00 AM
Quantisation Reshapes the Metacognitive Geometry of Language Models
arXiv:2604.08976v1 Announce Type: new
Abstract: We report that model quantisation restructures domain-level metacognitive efficiency in LLMs rather than degrading it uniformly. Evaluating Llama-3-8B-Instruct on the same 3,000 questions at Q5_K_M and f16 precision, we find that M-ratio profiles across...
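For readers unfamiliar with the metric: M-ratio is the standard metacognitive-efficiency index from the meta-d' framework of Fleming and Lau, and presumably (an assumption, since the truncated abstract does not spell it out) the quantity being profiled per domain here:

    % M-ratio compares metacognitive sensitivity (meta-d', how well
    % confidence ratings track answer correctness) against first-order
    % task sensitivity (d') on the same signal-detection scale.
    \[
      \text{M-ratio} \;=\; \frac{\text{meta-}d'}{d'}
    \]

An M-ratio near 1 means confidence ratings exploit essentially all the evidence available to the first-order decision; values below 1 indicate metacognitive inefficiency. A per-domain M-ratio profile therefore captures where a model knows what it knows, which is the structure the abstract claims quantisation reshapes rather than uniformly degrades.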