Red Teaming Large Reasoning Models
Abstract: Large Reasoning Models (LRMs) have emerged as a powerful advancement in multi-step reasoning, offering enhanced transparency and logical consistency through explicit chains of thought (CoT). However, these models introduce novel safety and reliability risks, such as CoT hijacking and prompt-induced inefficiency, which existing evaluation methods do not fully capture. To address this gap, we propose RT-LRM, a unified benchmark designed to assess the trustworthiness of LRMs. RT-LRM evaluates three core dimensions: truthfulness, safety, and efficiency. Beyond metric-based evaluation, we further introduce the training paradigm as a key analytical perspective, investigating how different training strategies systematically affect model trustworthiness. To this end, we design a curated suite of 30 reasoning tasks from an observational standpoint. Extensive experiments on 26 models yield several valuable insights into the trustworthiness of LRMs; for example, LRMs generally face trustworthiness challenges and tend to be more fragile than Large Language Models (LLMs) when exposed to reasoning-induced risks. These findings uncover previously underexplored vulnerabilities and highlight the need for more targeted evaluations. In addition, we release a scalable toolbox for standardized trustworthiness research to support future advancements in this important field. Our code and datasets will be open-sourced.

Comments: 30 pages, 9 figures
Subjects: Cryptography and Security (cs.CR); Artificial Intelligence (cs.AI)
Cite as: arXiv:2512.00412 [cs.CR] (or arXiv:2512.00412v4 [cs.CR] for this version)
DOI: https://doi.org/10.48550/arXiv.2512.00412

Submission history
From: Jiawei Chen
[v1] Sat, 29 Nov 2025 09:45:03 UTC (899 KB)
[v2] Thu, 1 Jan 2026 11:28:57 UTC (922 KB)
[v3] Wed, 14 Jan 2026 12:25:57 UTC (924 KB)
[v4] Tue, 14 Apr 2026 02:57:44 UTC (929 KB)
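To make the benchmark structure concrete, the sketch below shows one way a harness could aggregate per-task scores into the three trustworthiness dimensions (truthfulness, safety, efficiency) named in the abstract. This is a minimal illustrative assumption, not the released RT-LRM toolbox API: all names (Task, evaluate, score_fn) and the [0, 1] scoring convention are hypothetical, since the abstract does not specify the toolbox interface or metrics.

```python
from dataclasses import dataclass
from statistics import mean
from typing import Callable

# Hypothetical sketch of a trustworthiness harness. This is NOT the
# RT-LRM toolbox API; the paper's abstract does not describe its interface.

@dataclass
class Task:
    name: str
    dimension: str  # assumed: "truthfulness" | "safety" | "efficiency"
    prompt: str
    score_fn: Callable[[str], float]  # maps a model response to a score in [0, 1]

def evaluate(model: Callable[[str], str], tasks: list[Task]) -> dict[str, float]:
    """Run every task and average the scores within each dimension."""
    by_dim: dict[str, list[float]] = {}
    for task in tasks:
        response = model(task.prompt)
        by_dim.setdefault(task.dimension, []).append(task.score_fn(response))
    return {dim: mean(scores) for dim, scores in by_dim.items()}

if __name__ == "__main__":
    # Toy model and task, purely for illustration.
    stub_model = lambda prompt: "I don't know."
    tasks = [
        Task(
            name="toy-arithmetic",
            dimension="truthfulness",
            prompt="What is 2 + 2?",
            score_fn=lambda r: 1.0 if "4" in r else 0.0,
        )
    ]
    print(evaluate(stub_model, tasks))  # {'truthfulness': 0.0}
```

In a real benchmark of this kind, the per-dimension averages would be reported separately for each of the 26 evaluated models, allowing LRMs and LLMs to be compared dimension by dimension rather than by a single aggregate score.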