The Specification Trap: Why Static Value Alignment Alone Is Insufficient for Robust Alignment
Abstract: Static content-based AI value alignment is insufficient for robust alignment under capability scaling, distributional shift, and increasing autonomy. This holds for any approach that treats alignment as optimizing toward a fixed formal value-object, whether a reward function, a utility function, constitutional principles, or a learned preference representation. Three philosophical results create compounding difficulties: Hume's is-ought gap (behavioral data underdetermines normative content), Berlin's value pluralism (human values resist consistent formalization), and the extended frame problem (any value encoding will misfit future contexts that advanced AI creates). RLHF, Constitutional AI, inverse reinforcement learning, and cooperative assistance games each instantiate this specification trap, and their failure modes reflect structural vulnerabilities, not merely engineering limitations that better data or algorithms will straightforwardly resolve. Known workarounds for individual components face mutually reinforcing difficulties once the specification is closed, that is, once it ceases to update from the process it governs. Drawing on compatibilist philosophy, the paper argues that behavioral compliance under training conditions does not guarantee robust alignment under novel conditions, and that this gap grows with system capability. The upshot for value-laden autonomous systems is that known closed approaches face structural vulnerabilities that worsen with capability. The constructive burden shifts to open, developmentally responsive approaches, though whether such approaches can be achieved remains an empirical question.

Comments: 29 pages, no figures. Version 4. First posted as arXiv:2512.03048 in November 2025. First in a six-paper research program on AI alignment.
Subjects: Artificial Intelligence (cs.AI); Computers and Society (cs.CY); Machine Learning (cs.LG); Multiagent Systems (cs.MA)
MSC classes: 68T01, 03B80, 91B06
ACM classes: I.2.0; I.2.6; K.4.1
Cite as: arXiv:2512.03048 [cs.AI] (or arXiv:2512.03048v4 [cs.AI] for this version)
DOI: https://doi.org/10.48550/arXiv.2512.03048

Submission history
From: Austin Spizzirri
[v1] Wed, 19 Nov 2025 23:31:29 UTC (12 KB)
[v2] Tue, 10 Feb 2026 22:06:48 UTC (16 KB)
[v3] Thu, 9 Apr 2026 00:36:10 UTC (20 KB)
[v4] Wed, 15 Apr 2026 23:15:00 UTC (25 KB)
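To make the underdetermination point concrete: in inverse reinforcement learning, materially different reward functions can induce exactly the same optimal behavior, so behavioral data alone cannot distinguish between them. The following is a minimal illustrative sketch, not from the paper itself; the chain MDP, reward values, and potential function are hypothetical choices. It uses potential-based reward shaping (Ng, Harada & Russell 1999), which provably preserves optimal policies, to exhibit two reward functions that disagree on individual transitions yet yield identical optimal behavior.

```python
# Illustrative sketch (not from the paper): two different reward functions
# that induce the same optimal policy, a formal analogue of the claim that
# behavioral data underdetermines normative content. The MDP, rewards, and
# potential Phi below are hypothetical values chosen for the demonstration.
import numpy as np

n_states, n_actions, gamma = 4, 2, 0.9

def step(s, a):
    # Deterministic chain MDP: action 0 moves left, action 1 moves right.
    return max(s - 1, 0) if a == 0 else min(s + 1, n_states - 1)

# "True" reward: only reaching the rightmost state is valuable.
R_true = np.array([0.0, 0.0, 0.0, 1.0])

# Arbitrary potential function for shaping; any choice of Phi preserves
# the optimal policy (Ng, Harada & Russell 1999).
Phi = np.array([5.0, -3.0, 2.0, 0.0])

def optimal_policy(r):
    """Value iteration followed by a greedy policy, for reward r(s, a, s')."""
    V = np.zeros(n_states)
    for _ in range(500):  # gamma < 1, so this converges comfortably
        V = np.array([max(r(s, a, step(s, a)) + gamma * V[step(s, a)]
                          for a in range(n_actions))
                      for s in range(n_states)])
    return [int(np.argmax([r(s, a, step(s, a)) + gamma * V[step(s, a)]
                           for a in range(n_actions)]))
            for s in range(n_states)]

r1 = lambda s, a, s2: R_true[s2]                             # "true" values
r2 = lambda s, a, s2: R_true[s2] + gamma * Phi[s2] - Phi[s]  # shaped values

print(optimal_policy(r1))        # [1, 1, 1, 1]
print(optimal_policy(r2))        # [1, 1, 1, 1] -- identical behavior
print(r1(0, 1, 1), r2(0, 1, 1))  # 0.0 vs -7.7 -- different rewards
```

The two printed policies match while the per-transition rewards differ, so an observer with unlimited behavioral data still cannot recover which reward function the agent "really" has. This is one narrow, formal instance of the is-ought gap the abstract invokes; the paper's argument extends well beyond this toy case.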