arxiv
Published April 24, 2026 at 4:00 AM
Basic syntax from speech: Spontaneous concatenation in unsupervised deep neural networks
Publisher summary · verbatim
arXiv:2305.01626v4 (replace-cross)

Abstract: Computational models of syntax are predominantly text-based. Here we propose that the most basic first step in the evolution of syntax can be modeled directly from raw speech in a fully unsupervised way. We focus on one of the most ubiquitous […]