Modeling LLM Unlearning as an Asymmetric Two-Task Learning Problem
Abstract: Machine unlearning for large language models (LLMs) aims to remove targeted knowledge while preserving general capability. In this paper, we recast LLM unlearning as an asymmetric two-task problem: retention is the primary objective and forgetting is an auxiliary one. From this perspective, we propose a retention-prioritized gradient synthesis framework that decouples task-specific gradient extraction from conflict-aware combination. Instantiating the framework, we adapt the established PCGrad method to resolve gradient conflicts and introduce SAGO, a novel retention-prioritized gradient synthesis method. Theoretically, both variants ensure non-negative cosine similarity with the retain gradient, while SAGO achieves strictly tighter alignment through constructive sign-constrained synthesis. Empirically, on the WMDP Bio/Cyber and RWKU benchmarks, SAGO consistently pushes the Pareto frontier: on WMDP Bio (SimNPO+GD), for example, recovery of the target model's MMLU performance improves from 44.6% (naive) to 94.0% (+PCGrad) and further to 96.0% (+SAGO), while maintaining comparable forgetting strength. Our results show that reshaping gradient geometry, rather than rebalancing losses, is the key to mitigating unlearning-retention trade-offs.
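The abstract does not spell out SAGO's construction, but the PCGrad-based variant follows the published projection rule of Yu et al. (2020), applied one-sidedly so that the retain gradient is never modified. A minimal sketch, assuming flattened 1-D gradient tensors and hypothetical names g_retain and g_forget (with any sign convention for the forget objective already folded into its gradient):

```python
import torch

def retention_prioritized_pcgrad(g_retain: torch.Tensor,
                                 g_forget: torch.Tensor) -> torch.Tensor:
    """Combine retain and forget gradients without opposing retention.

    One-sided PCGrad: only the forget gradient is projected when the
    two gradients conflict (negative inner product); the retain
    gradient is left untouched.
    """
    dot = torch.dot(g_forget, g_retain)
    if dot < 0:
        # Remove the component of g_forget that points against g_retain,
        # i.e. project g_forget onto the subspace orthogonal to g_retain.
        g_forget = g_forget - (dot / g_retain.norm().pow(2)) * g_retain
    return g_retain + g_forget
```

This sketch matches the abstract's theoretical claim: after projection, dot(g_forget, g_retain) = 0, so the synthesized update g satisfies dot(g, g_retain) = ||g_retain||^2 >= 0, i.e. non-negative cosine similarity with the retain gradient. In practice one would flatten and concatenate per-parameter gradients before combining, then scatter the result back.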
Comments: ACL 2026
Subjects: Computation and Language (cs.CL)
Cite as: arXiv:2604.14808 [cs.CL] (or arXiv:2604.14808v1 [cs.CL] for this version)
DOI: https://doi.org/10.48550/arXiv.2604.14808

Submission history
From: Zeguan Xiao
[v1] Thu, 16 Apr 2026 09:31:36 UTC (1,208 KB)