EEGDM: Learning EEG Representation with Latent Diffusion Model
Abstract: Recent advances in self-supervised learning for EEG representation have largely relied on masked reconstruction, where models are trained to recover randomly masked signal segments. While effective at modeling local dependencies, such objectives are inherently limited in capturing the global dynamics and long-range dependencies essential for characterizing neural activity. To address this limitation, we propose EEGDM, a novel self-supervised framework that uses latent diffusion models to generate EEG signals as its training objective. Unlike masked reconstruction, diffusion-based generation progressively denoises pure noise into realistic signals, compelling the model to capture holistic temporal patterns and cross-channel relationships. Specifically, EEGDM incorporates an EEG encoder that distills raw signals and their channel augmentations into a compact representation, which serves as conditioning information to guide the diffusion model in generating EEG signals. This design endows EEGDM with a compact latent space that not only affords fine-grained control over the generative process but can also be leveraged for downstream tasks. Experimental results show that EEGDM (1) reconstructs high-quality EEG signals, (2) learns robust representations, and (3) achieves competitive performance across diverse downstream tasks, opening a new direction for self-supervised EEG representation learning.

Subjects: Machine Learning (cs.LG); Artificial Intelligence (cs.AI)
Cite as: arXiv:2508.20705 [cs.LG] (or arXiv:2508.20705v3 [cs.LG] for this version)
DOI: https://doi.org/10.48550/arXiv.2508.20705

Submission history
From: Tong Liu
[v1] Thu, 28 Aug 2025 12:23:28 UTC (473 KB)
[v2] Fri, 19 Dec 2025 02:47:26 UTC (677 KB)
[v3] Thu, 16 Apr 2026 01:44:53 UTC (674 KB)
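To make the objective concrete, the following is a minimal toy sketch (not the paper's implementation) of conditional diffusion training as the abstract describes it: a hypothetical encoder distills the multi-channel EEG signal into a compact latent, the clean signal is noised to a random diffusion step, and a toy denoiser, conditioned on that latent, is trained to predict the injected noise. All shapes, the linear encoder/denoiser, and the noise schedule here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes: 4 EEG channels, 128 time samples, 16-dim latent.
C, T, D = 4, 128, 16

def encode(x, W_enc):
    """Toy stand-in for the EEG encoder: distill the multi-channel
    signal into a compact latent via a linear projection of
    per-channel statistics (the paper's encoder is learned, not this)."""
    feats = np.concatenate([x.mean(axis=1), x.std(axis=1)])  # (2C,)
    return feats @ W_enc                                     # (D,)

def diffusion_loss(x0, W_enc, W_den, t, alphas_bar):
    """One conditional-diffusion training step: noise the clean signal
    to level t, then ask a toy linear denoiser, conditioned on the
    encoder latent z, to predict the injected noise (denoising MSE)."""
    z = encode(x0, W_enc)
    eps = rng.standard_normal(x0.shape)
    a = alphas_bar[t]
    x_t = np.sqrt(a) * x0 + np.sqrt(1.0 - a) * eps  # forward noising
    cond = np.concatenate([x_t.ravel(), z])         # condition on z
    eps_hat = (cond @ W_den).reshape(x0.shape)      # predicted noise
    return np.mean((eps_hat - eps) ** 2)

# Illustrative linear noise schedule over 100 diffusion steps.
betas = np.linspace(1e-4, 0.02, 100)
alphas_bar = np.cumprod(1.0 - betas)

x0 = rng.standard_normal((C, T))                 # fake "EEG" segment
W_enc = rng.standard_normal((2 * C, D)) * 0.1    # toy encoder weights
W_den = rng.standard_normal((C * T + D, C * T)) * 0.01  # toy denoiser

loss = diffusion_loss(x0, W_enc, W_den, t=50, alphas_bar=alphas_bar)
print(loss)
```

In an actual system the linear maps would be deep networks trained by gradient descent, and after pretraining the encoder's latent z, rather than the denoiser, is what gets reused for downstream tasks.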