DLink: Distilling Layer-wise and Dominant Knowledge from EEG Foundation Models
Abstract: EEG foundation models (FMs) achieve strong cross-subject and cross-task generalization but impose substantial computational and memory costs that hinder deployment on embedded BCI systems. Knowledge distillation is a natural solution; however, conventional methods fail for EEG FMs because task-relevant semantics are often distributed across intermediate layers, and aggressive dimensionality reduction can distort oscillatory structure through representational collapse and aliasing. To address these challenges, we propose DLink (Distilling Layer-wise and Dominant Knowledge), a unified framework for transferring knowledge from large EEG FMs to compact students, built on three key innovations: (1) a dynamic Router that adaptively aggregates teacher layers to capture dominant intermediate representations; (2) an EEG MiC student with a Mimic-then-Compress pipeline, which inherits high-dimensional teacher features and then applies structured spatio-temporal compression to avoid a heavy classification head; and (3) spectral distillation, which aligns teacher and student representations in the frequency domain to regularize compression and mitigate aliasing and temporal jitter. Experiments on four EEG benchmarks show that DLink enables compact students to outperform lightweight baselines while approaching fully fine-tuned FM performance with substantially smaller model size and lower inference cost.

Subjects: Machine Learning (cs.LG)
Cite as: arXiv:2604.15016 [cs.LG] (or arXiv:2604.15016v1 [cs.LG] for this version)
DOI: https://doi.org/10.48550/arXiv.2604.15016
Submission history: [v1] From: Jingyuan Wang, Thu, 16 Apr 2026 13:43:51 UTC (1,698 KB)
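To make the first and third innovations concrete, below is a minimal PyTorch sketch of a layer-aggregating router and a frequency-domain alignment loss, assuming teacher and student hidden states are available as tensors of shape [batch, time, dim] with matching time length. The names LayerRouter and spectral_distill_loss, and all implementation details, are illustrative assumptions and are not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LayerRouter(nn.Module):
    """Softmax-weighted aggregation of teacher layer outputs (a plausible reading
    of "adaptively aggregates teacher layers"; not the paper's exact design)."""
    def __init__(self, num_layers: int, dim: int):
        super().__init__()
        self.gate = nn.Linear(dim, num_layers)  # scores each teacher layer from a pooled summary

    def forward(self, layer_feats: list[torch.Tensor]) -> torch.Tensor:
        # layer_feats: list of L teacher hidden states, each [B, T, D]
        stacked = torch.stack(layer_feats, dim=1)           # [B, L, T, D]
        summary = stacked.mean(dim=(1, 2))                  # [B, D] pooled over layers and time
        weights = F.softmax(self.gate(summary), dim=-1)     # [B, L] per-sample layer weights
        return (weights[:, :, None, None] * stacked).sum(dim=1)  # [B, T, D] aggregated target

def spectral_distill_loss(student_feat: torch.Tensor, teacher_feat: torch.Tensor) -> torch.Tensor:
    """Align log-magnitude spectra along the time axis, one way to regularize
    compression against aliasing and temporal jitter."""
    s_spec = torch.fft.rfft(student_feat, dim=1).abs().clamp_min(1e-8).log()
    t_spec = torch.fft.rfft(teacher_feat, dim=1).abs().clamp_min(1e-8).log()
    return F.mse_loss(s_spec, t_spec)
```

In this sketch the router produces a single aggregated teacher target per sample, which a student could mimic with a standard feature-matching loss, while the spectral term compares magnitudes only, so it is insensitive to small temporal shifts. This is a sketch under the stated assumptions, not the authors' implementation.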