Early-Warning Signals of Grokking via Loss-Landscape Geometry
Abstract: Grokking -- the abrupt transition from memorization to generalization after prolonged training -- has been linked to confinement on low-dimensional execution manifolds in modular arithmetic. Whether this mechanism extends beyond arithmetic remains open. We study two sequence-learning benchmarks: SCAN compositional generalization and Dyck-1 depth prediction. Across both tasks and a wide range of learning rates, the commutator defect -- a curvature measure derived from non-commuting gradient updates -- rises well before generalization, with lead times following a superlinear power law (α ≈ 1.18 for SCAN, α ≈ 1.13 for Dyck), consistent with prior results on modular arithmetic. Weight-space PCA reveals that spectral concentration is not a universal precursor; the commutator defect is. Causal interventions demonstrate a mechanistic role: amplifying non-commutativity accelerates grokking (by roughly 32% on SCAN and roughly 50% on Dyck), while suppressing orthogonal gradient flow delays or prevents it. The three task families form a spectrum of causal sensitivity -- modular arithmetic is rigid, Dyck is responsive, SCAN is intermediate -- yet suppression delays or prevents grokking in all cases, establishing necessity as a universal finding. These results identify the commutator defect as a robust, architecture-agnostic, causally implicated early-warning signal for delayed generalization in transformers.

Comments: 33 pages, 16 figures
Subjects: Machine Learning (cs.LG); Artificial Intelligence (cs.AI)
Cite as: arXiv:2602.16967 [cs.LG] (or arXiv:2602.16967v3 [cs.LG] for this version)
DOI: https://doi.org/10.48550/arXiv.2602.16967

Submission history
From: Yongzhong Xu
[v1] Thu, 19 Feb 2026 00:14:36 UTC (2,649 KB)
[v2] Sat, 14 Mar 2026 04:52:44 UTC (4,226 KB)
[v3] Fri, 3 Apr 2026 00:26:34 UTC (4,228 KB)
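The abstract's central quantity, a "commutator defect" derived from non-commuting gradient updates, is not defined on this page. As a minimal sketch of the general idea (the function names, toy quadratic losses, and step size below are illustrative assumptions, not the paper's actual construction), one can measure how far two minibatch SGD updates are from commuting by applying them in both orders and comparing the resulting parameters:

```python
import numpy as np

def sgd_step(theta, grad_fn, lr):
    """One SGD update using the gradient of a single minibatch loss."""
    return theta - lr * grad_fn(theta)

def commutator_defect(theta, grad_a, grad_b, lr):
    """Norm of the gap between applying updates a-then-b vs. b-then-a.

    For linear gradients g(theta) = H @ theta the gap is exactly
    lr**2 * (H_b @ H_a - H_a @ H_b) @ theta, so it vanishes iff the
    minibatch Hessians commute -- hence a curvature-sensitive signal.
    """
    ab = sgd_step(sgd_step(theta, grad_a, lr), grad_b, lr)
    ba = sgd_step(sgd_step(theta, grad_b, lr), grad_a, lr)
    return np.linalg.norm(ab - ba)

# Toy quadratic minibatch losses with Hessians A and B (assumed example).
rng = np.random.default_rng(0)
theta = rng.normal(size=2)

A = np.diag([1.0, 10.0])                  # Hessian of minibatch a
B = np.array([[3.0, 1.0], [1.0, 2.0]])    # does not commute with A
C = np.diag([3.0, 2.0])                   # diagonal, commutes with A

defect_noncomm = commutator_defect(theta, lambda t: A @ t, lambda t: B @ t, lr=0.1)
defect_comm = commutator_defect(theta, lambda t: A @ t, lambda t: C @ t, lr=0.1)
# defect_noncomm is strictly positive; defect_comm is zero up to rounding.
```

In a training loop one would evaluate this on consecutive minibatch gradients and track the resulting scalar over epochs; the paper's claim is that such a non-commutativity measure rises well before the test-accuracy jump.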