Published April 24, 2026 at 4:00 AM
Learning Reasoning Reward Models from Expert Demonstration via Inverse Reinforcement Learning
arXiv:2510.01857v3 · Announce Type: replace

Abstract: Current approaches to improving reasoning in large language models (LLMs) primarily rely on either supervised fine-tuning (SFT) over expert traces or reinforcement learning (RL) with outcome-level rewards. However, SFT is fundamentally imitative, w
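The abstract contrasts imitative SFT with learning a reward model from expert demonstrations. As a rough illustration of the inverse-RL idea (not the paper's actual method), the following toy sketch learns linear reward weights over hand-picked trace features so that an expert reasoning trace scores above a sampled one, using a Bradley–Terry-style preference update; all feature names and hyperparameters here are illustrative assumptions.

```python
import math

# Toy IRL-style sketch: traces are reduced to feature vectors, and we fit
# linear reward weights so the expert trace outranks a sampled trace.
# This is an illustrative assumption, not the method from the paper.

def reward(w, feats):
    # Linear reward: dot product of weights and trace features.
    return sum(wi * fi for wi, fi in zip(w, feats))

def irl_step(w, expert_feats, sampled_feats, lr=0.1):
    # Bradley-Terry preference loss: maximize sigmoid(r(expert) - r(sampled)).
    margin = reward(w, expert_feats) - reward(w, sampled_feats)
    grad_scale = 1.0 / (1.0 + math.exp(margin))  # gradient of -log sigmoid(margin)
    return [wi + lr * grad_scale * (e - s)
            for wi, e, s in zip(w, expert_feats, sampled_feats)]

w = [0.0, 0.0]
expert = [1.0, 0.2]   # hypothetical features, e.g. "has verification step"
sampled = [0.1, 0.9]
for _ in range(200):
    w = irl_step(w, expert, sampled)

assert reward(w, expert) > reward(w, sampled)
```

After training, the learned reward ranks the expert demonstration above the sampled trace, which is the minimal property an IRL-derived reasoning reward model must satisfy.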
Originally published on arXiv