Incoherence in Goal-Conditioned Autoregressive Models
Abstract: We investigate mathematically the notion of incoherence: a structural issue with reinforcement learning policies derived by naive goal-conditioning of autoregressive models. We focus on the process of re-training models on their own actions, that is, fine-tuning offline-learned policies with online RL. We prove that this process decreases incoherence and leads to an improvement in return, and we characterize the resulting trajectory of policies. By re-framing standard notions of control-as-inference and soft Q-learning, we establish a three-way correspondence with two other ways of understanding the iterative re-training process: as folding the posterior into the reward and, in the deterministic case, as decreasing the temperature parameter; the correspondence has computational content via the training-inference trade-off. Through soft-conditioning generative models, we discuss the link between incoherence and the effective horizon.

Comments: To appear in the Proceedings of the 29th International Conference on Artificial Intelligence and Statistics (AISTATS) 2026
Subjects: Machine Learning (cs.LG); Artificial Intelligence (cs.AI)
Cite as: arXiv:2510.06545 [cs.LG] (or arXiv:2510.06545v2 [cs.LG] for this version)
DOI: https://doi.org/10.48550/arXiv.2510.06545

Submission history
From: Jacek Karwowski
[v1] Wed, 8 Oct 2025 00:52:13 UTC (263 KB)
[v2] Wed, 1 Apr 2026 11:58:51 UTC (1,029 KB)
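The temperature correspondence mentioned in the abstract can be illustrated with a toy calculation. In a KL-regularized (soft) setting on a one-step problem, each round of re-training a policy on its own reward-weighted actions multiplies the current policy by exp(r/tau) and renormalizes; iterating k such rounds from a uniform prior yields exactly the softmax policy at temperature tau/k, i.e. iterative re-training is equivalent to decreasing the temperature. The sketch below is not code from the paper; the bandit rewards and update rule are illustrative assumptions for the deterministic one-step case.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a vector of logits.
    z = x - x.max()
    e = np.exp(z)
    return e / e.sum()

def soft_update(p, r, tau):
    # One round of "re-training on its own actions" in the soft/KL-regularized
    # sense: fold exp(r/tau) into the current policy and renormalize
    # (a posterior-into-reward style update).
    q = p * np.exp(r / tau)
    return q / q.sum()

# Toy one-step bandit with 4 actions (rewards are illustrative).
r = np.array([1.0, 0.5, 0.0, -0.5])
tau = 1.0

# Uniform prior policy.
p = np.ones_like(r) / len(r)

# Iterate k re-training rounds.
k = 5
for _ in range(k):
    p = soft_update(p, r, tau)

# Equivalent to a single softmax at the lower temperature tau/k.
print(np.allclose(p, softmax(r / (tau / k))))
```

As k grows, the policy concentrates on the argmax action, matching the intuition that repeated re-training sharpens a goal-conditioned policy toward the (soft-)optimal one.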