Revisiting the Reliability of Language Models in Instruction-Following
Abstract: Advanced LLMs have achieved near-ceiling instruction-following accuracy on benchmarks such as IFEval. However, these impressive scores do not necessarily translate into reliable service in real-world use, where users often vary their phrasing, contextual framing, and task formulations. In this paper, we study nuance-oriented reliability: whether models exhibit consistent competence across cousin prompts that convey analogous user intents with subtle nuances. To quantify this, we introduce a new metric, reliable@k, and develop an automated pipeline that generates high-quality cousin prompts via data augmentation. Building on this, we construct IFEval++ for systematic evaluation. Across 20 proprietary and 26 open-source LLMs, we find that current models exhibit substantial deficits in nuance-oriented reliability: their performance can drop by up to 61.8% under nuanced prompt modifications. We further characterize this phenomenon and explore three potential improvement recipes. Our findings highlight nuance-oriented reliability as a crucial yet underexplored next step toward more dependable and trustworthy LLM behavior. Our code and benchmark are available at this https URL.

Comments: ACL 2026 main
Subjects: Software Engineering (cs.SE); Artificial Intelligence (cs.AI); Computation and Language (cs.CL)
Cite as: arXiv:2512.14754 [cs.SE] (or arXiv:2512.14754v2 [cs.SE] for this version)
DOI: https://doi.org/10.48550/arXiv.2512.14754

Submission history
From: Jianshuo Dong
[v1] Mon, 15 Dec 2025 02:57:55 UTC (545 KB)
[v2] Tue, 14 Apr 2026 06:00:19 UTC (534 KB)
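The abstract names the reliable@k metric but does not define it. A plausible reading is that a task counts toward reliability only if the model succeeds on the original prompt and all k of its cousin prompts; the sketch below implements that all-or-nothing interpretation, which is an assumption and may differ from the paper's exact definition.

```python
def reliable_at_k(results: list[list[bool]]) -> float:
    """Hypothetical reliable@k: fraction of tasks on which the model
    passes the original prompt AND every one of its k cousin prompts.

    results[i] holds pass/fail outcomes for task i (original + k cousins).
    Assumption: the paper's actual metric may aggregate differently.
    """
    if not results:
        return 0.0
    # A task is "reliable" only if every outcome in its group passes.
    return sum(all(task) for task in results) / len(results)


# Example: 3 tasks, each with an original prompt and k=2 cousin prompts.
outcomes = [
    [True, True, True],   # consistent across all variants -> reliable
    [True, False, True],  # fails one cousin prompt -> not reliable
    [True, True, True],   # reliable
]
print(round(reliable_at_k(outcomes), 3))  # 0.667
```

Under this strict definition a single failed cousin prompt disqualifies the whole task, which is what makes reliable@k harsher than per-prompt accuracy and explains how accuracy can drop sharply under nuanced modifications.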