A Novel Automatic Framework for Speaker Drift Detection in Synthesized Speech
Abstract: Recent diffusion-based text-to-speech (TTS) models achieve high naturalness and expressiveness, yet often suffer from speaker drift: a subtle, gradual shift in perceived speaker identity within a single utterance. This underexplored phenomenon undermines the coherence of synthetic speech, especially in long-form or interactive settings. We introduce the first automatic framework for detecting speaker drift, formulating it as a binary classification task over utterance-level speaker consistency. Our method computes cosine similarity across overlapping segments of synthesized speech and prompts large language models (LLMs) with structured representations to assess drift. We provide theoretical guarantees for cosine-based drift detection and demonstrate that speaker embeddings exhibit meaningful geometric clustering on the unit sphere. To support evaluation, we construct a high-quality synthetic benchmark with human-validated speaker drift annotations. Experiments with multiple state-of-the-art LLMs confirm the viability of this embedding-to-reasoning pipeline. Our work establishes speaker drift as a standalone research problem and bridges geometric signal analysis with LLM-based perceptual reasoning in modern TTS.

Comments: Accepted at the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2026
Subjects: Sound (cs.SD); Artificial Intelligence (cs.AI)
Cite as: arXiv:2604.06327 [cs.SD] (or arXiv:2604.06327v1 [cs.SD] for this version)
DOI: https://doi.org/10.48550/arXiv.2604.06327

Submission history
From: Jia-Hong Huang
[v1] Tue, 7 Apr 2026 18:05:28 UTC (198 KB)
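To make the abstract's cosine-based drift signal concrete, below is a minimal sketch of the geometric stage only: segment an utterance into overlapping windows, embed each segment, and flag drift when similarity to the opening segment falls below a threshold. The window length, hop, threshold, and the `embed` function are illustrative assumptions, not values or components from the paper, and the LLM reasoning stage is not reproduced here.

```python
# Minimal sketch of a cosine-similarity drift signal over overlapping
# segments. Window/hop sizes, the threshold, and the `embed` callable
# are assumptions for illustration, not the paper's configuration.
import numpy as np

def frame_segments(wave: np.ndarray, sr: int,
                   win_s: float = 2.0, hop_s: float = 1.0):
    """Split a waveform into overlapping segments (assumed 2 s / 1 s)."""
    win, hop = int(win_s * sr), int(hop_s * sr)
    return [wave[i:i + win]
            for i in range(0, max(len(wave) - win, 0) + 1, hop)]

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity with a small epsilon for numerical safety."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def drift_score(wave: np.ndarray, sr: int, embed) -> float:
    """Minimum cosine similarity between each segment's speaker embedding
    and the first segment's embedding; low values suggest identity drift.
    `embed` is any speaker-embedding function mapping audio to a vector
    (e.g., a pretrained speaker-verification model)."""
    embs = [embed(seg) for seg in frame_segments(wave, sr)]
    if len(embs) < 2:
        return 1.0  # too short to compare; treat as consistent
    ref = embs[0]
    return min(cosine(ref, e) for e in embs[1:])

def is_drifting(wave: np.ndarray, sr: int, embed,
                threshold: float = 0.6) -> bool:
    """Binary utterance-level consistency decision (threshold assumed)."""
    return drift_score(wave, sr, embed) < threshold
```

Comparing every segment to the opening segment (rather than only to its neighbor) is one plausible design choice for catching the gradual, cumulative shifts the abstract describes; pairwise-consecutive similarity can stay high even as identity slowly wanders.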