A Scoping Review of Large Language Model-Based Pedagogical Agents
This scoping review examines the emerging field of Large Language Model (LLM)-based pedagogical agents in educational settings. While traditional pedagogical agents have been extensively studied, the integration of LLMs represents a transformative advancement with unprecedented capabilities in natural language understanding, reasoning, and adaptation. Following PRISMA-ScR guidelines, we analyzed 52 studies across five major databases from November 2022 to January 2025.
Our findings reveal diverse LLM-based agents spanning K-12, higher education, and informal learning contexts across multiple subject domains. We identified four key design dimensions characterizing these agents: interaction approach (reactive vs. proactive), domain scope (domain-specific vs. general-purpose), role complexity (single-role vs. multi-role), and system integration (standalone vs. integrated). Emerging trends include multi-agent systems that simulate naturalistic learning environments, virtual student simulation for agent evaluation, integration with immersive technologies, and combinations with learning analytics.
We also discuss significant research gaps and ethical considerations regarding privacy, accuracy, and student autonomy. This review provides researchers and practitioners with a comprehensive understanding of LLM-based pedagogical agents while identifying crucial areas for future development in this rapidly evolving field.

Subjects: Artificial Intelligence (cs.AI)
Cite as: arXiv:2604.12253 [cs.AI] (arXiv:2604.12253v1 for this version)
DOI: https://doi.org/10.48550/arXiv.2604.12253 (arXiv-issued DOI via DataCite, pending registration)
Submission history: [v1] Submitted by Shan Li on Tue, 14 Apr 2026 03:58:11 UTC