StructRL: Recovering Dynamic Programming Structure from Learning Dynamics in Distributional Reinforcement Learning
Abstract: Reinforcement learning is typically treated as a uniform, data-driven optimization process, where updates are guided by rewards and temporal-difference errors without explicitly exploiting global structure. In contrast, dynamic programming methods rely on structured information propagation, enabling efficient and stable learning. In this paper, we provide evidence that such structure can be recovered from the learning dynamics of distributional reinforcement learning. By analyzing the temporal evolution of return distributions, we identify signals that capture when and where learning occurs in the state space. In particular, we introduce a temporal learning indicator t*(s) that reflects when a state undergoes its strongest learning update during training. Empirically, this signal induces an ordering over states that is consistent with a dynamic programming-style propagation of information. Building on this observation, we propose StructRL, a framework that exploits these signals to guide sampling in alignment with the emerging propagation structure. Our preliminary results suggest that distributional learning dynamics provide a mechanism to recover and exploit dynamic programming-like structure without requiring an explicit model. This offers a new perspective on reinforcement learning, where learning can be interpreted as a structured propagation process rather than a purely uniform optimization procedure.

Subjects: Machine Learning (cs.LG); Artificial Intelligence (cs.AI)
Cite as: arXiv:2604.08620 [cs.LG] (or arXiv:2604.08620v1 [cs.LG] for this version)
DOI: https://doi.org/10.48550/arXiv.2604.08620
Submission history: From: Ivo Nowak, [v1] Thu, 9 Apr 2026 09:21:20 UTC (569 KB)
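The abstract defines t*(s) only informally, as the time at which a state's return distribution undergoes its strongest update. Since the paper's concrete definition is not reproduced here, the following is a minimal illustrative sketch, assuming a quantile-based return-distribution representation (as in QR-DQN-style methods) and a 1-Wasserstein-like change measure via mean absolute quantile displacement; the function name and shapes are hypothetical, not the authors' implementation.

```python
import numpy as np

def temporal_learning_indicator(dist_history):
    """
    Hypothetical sketch of a t*(s)-style indicator.

    dist_history: array of shape (T, S, K) holding, at each of T
    training checkpoints, a K-quantile approximation of the return
    distribution for each of S states.

    Returns t_star: array of shape (S,), the checkpoint index at which
    each state's return distribution changed the most between
    consecutive checkpoints (mean absolute quantile displacement as a
    crude 1-Wasserstein proxy).
    """
    # Per-step change in the quantile representation of every state.
    deltas = np.abs(np.diff(dist_history, axis=0)).mean(axis=2)  # (T-1, S)
    # t*(s): the step where the strongest distributional update occurred.
    return deltas.argmax(axis=0) + 1

# Toy usage: 100 checkpoints, 10 states, 32 quantiles of synthetic drift.
rng = np.random.default_rng(0)
history = np.cumsum(rng.normal(size=(100, 10, 32)) * 0.01, axis=0)
t_star = temporal_learning_indicator(history)
print(t_star)  # per-state timing of the largest learning update
```

Under this reading, the ordering of states by t*(s) is what the abstract compares to dynamic programming-style information propagation, and StructRL's sampling would then prioritize states in a way aligned with that ordering.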