arXiv
Published April 24, 2026 at 4:00 AM
Pre-trained LLMs Meet Sequential Recommenders: Efficient User-Centric Knowledge Distillation
arXiv:2604.21536v1 · Announce Type: cross

Abstract: Sequential recommender systems have achieved significant success in modeling temporal user behavior but remain limited in capturing rich user semantics beyond interaction patterns. Large Language Models (LLMs) present opportunities to enhance user un
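The abstract is truncated, so the paper's actual distillation objective is not visible here. As a rough illustration only, user-centric knowledge distillation of the kind the title suggests is often framed as aligning a student's user embeddings (from a sequential recommender) with a frozen teacher's user embeddings (from a pre-trained LLM). The function below is a minimal, hypothetical sketch of such an embedding-alignment loss; the name `distill_loss`, the MSE-on-normalized-embeddings objective, and the toy dimensions are all assumptions, not the paper's method.

```python
import numpy as np

def distill_loss(teacher_emb: np.ndarray, student_emb: np.ndarray) -> float:
    """Hypothetical embedding-alignment distillation loss (NOT the paper's
    objective): mean squared distance between L2-normalized user embeddings.
    teacher_emb: (n_users, d) user representations from a pre-trained LLM.
    student_emb: (n_users, d) user representations from a sequential model.
    """
    t = teacher_emb / np.linalg.norm(teacher_emb, axis=1, keepdims=True)
    s = student_emb / np.linalg.norm(student_emb, axis=1, keepdims=True)
    # Per-user squared Euclidean distance on the unit sphere, averaged.
    return float(np.mean(np.sum((t - s) ** 2, axis=1)))

# Toy example: 4 users with 8-dimensional embeddings; the student starts
# as a slightly perturbed copy of the teacher, so the loss is small.
rng = np.random.default_rng(0)
teacher = rng.normal(size=(4, 8))
student = teacher + 0.1 * rng.normal(size=(4, 8))
loss = distill_loss(teacher, student)
```

In practice such a term would be added, with a weighting coefficient, to the recommender's usual next-item prediction loss, and the "efficient" part of the title likely refers to avoiding LLM calls at inference time by baking the semantics into the student; neither detail is confirmed by the visible abstract.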