Knowledge Capsules: Structured Nonparametric Memory Units for LLMs
arXiv:2604.20487v1

Abstract: Large language models (LLMs) encode knowledge in parametric weights, making it costly to update or extend that knowledge without retraining. Retrieval-augmented generation (RAG) mitigates this limitation by appending retrieved text to the input, but operates purely
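The RAG baseline the abstract contrasts against can be sketched minimally: retrieve the passages most similar to a query and prepend them to the model input. The corpus, scoring scheme, and function names below are illustrative assumptions, not taken from the paper.

```python
from collections import Counter
import math

# Toy corpus standing in for a nonparametric knowledge store (illustrative).
CORPUS = [
    "The Eiffel Tower is located in Paris, France.",
    "Python was created by Guido van Rossum.",
    "LLMs encode knowledge in parametric weights.",
]

def bow(text):
    """Bag-of-words count vector, used for cosine similarity."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, k=1):
    """Return the k corpus passages most similar to the query."""
    q = bow(query)
    return sorted(CORPUS, key=lambda d: cosine(q, bow(d)), reverse=True)[:k]

def augment_prompt(query):
    """RAG-style augmentation: prepend retrieved text to the model input."""
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}"

print(augment_prompt("Where is the Eiffel Tower located?"))
```

In a real system the bag-of-words retriever would be replaced by a dense embedding index, but the interface (query in, augmented prompt out) is the same.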