OmniVoice: Towards Omnilingual Zero-Shot Text-to-Speech with Diffusion Language Models
Abstract: We present OmniVoice, a massively multilingual zero-shot text-to-speech (TTS) model that scales to over 600 languages. At its core is a novel diffusion language model-style discrete non-autoregressive (NAR) architecture. Unlike conventional discrete NAR models, which suffer from performance bottlenecks in complex two-stage (text-to-semantic-to-acoustic) pipelines, OmniVoice maps text directly to multi-codebook acoustic tokens. This simplified approach rests on two key technical innovations: (1) a full-codebook random masking strategy for efficient training, and (2) initialization from a pre-trained LLM for superior intelligibility. Leveraging a 581k-hour multilingual dataset curated entirely from open-source data, OmniVoice achieves the broadest language coverage to date and delivers state-of-the-art performance across Chinese, English, and diverse multilingual benchmarks. Our code and pre-trained models are publicly available at this https URL.

Subjects: Computation and Language (cs.CL); Audio and Speech Processing (eess.AS)
Cite as: arXiv:2604.00688 [cs.CL] (or arXiv:2604.00688v2 [cs.CL] for this version)
DOI: https://doi.org/10.48550/arXiv.2604.00688

Submission history
From: Han Zhu
[v1] Wed, 1 Apr 2026 09:45:51 UTC (136 KB)
[v2] Thu, 2 Apr 2026 10:24:50 UTC (139 KB)
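The abstract does not spell out how the full-codebook random masking strategy works. A minimal sketch of one plausible reading, in which a randomly chosen subset of frames is masked simultaneously in every codebook so the model learns to reconstruct all codebooks of a frame jointly; the function name, `MASK_ID` value, and token shapes below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

MASK_ID = 1024  # hypothetical mask-token id, chosen outside the codebook range [0, 1024)


def full_codebook_random_mask(tokens: np.ndarray, mask_ratio: float, rng: np.random.Generator):
    """Mask a random subset of frames across *all* codebooks at once.

    tokens: (num_codebooks, num_frames) int array of discrete acoustic token ids.
    Returns (masked_tokens, frame_mask), where frame_mask marks the frames
    the NAR model would be trained to reconstruct.
    """
    num_codebooks, num_frames = tokens.shape
    frame_mask = rng.random(num_frames) < mask_ratio  # one Bernoulli draw per frame
    masked = tokens.copy()
    masked[:, frame_mask] = MASK_ID  # a masked frame is hidden in every codebook
    return masked, frame_mask


# Toy usage: 8 codebooks, 50 frames, 50% of frames masked.
rng = np.random.default_rng(0)
toks = rng.integers(0, 1024, size=(8, 50))
masked, frame_mask = full_codebook_random_mask(toks, mask_ratio=0.5, rng=rng)
```

Masking whole frames across every codebook (rather than masking each codebook independently) is one way to avoid the model trivially copying a frame's coarse codebook when predicting its fine ones; whether OmniVoice uses exactly this scheme is not stated in the abstract.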