A Self-Supervised Neural Network Learning Representations Similar to the Brain
Toward a realistic model of speech processing in the brain with self-supervised learning
We compare the representations of a self-supervised neural network to the brain activity of 412 individuals, recorded with functional magnetic resonance imaging (fMRI) while they listened to ~1h of audio books, and to the behavior of 386 additional participants.
We show that this algorithm learns brain-like representations with as little as 600 hours of unlabelled speech, a quantity comparable to what infants can be exposed to during language acquisition.
Moreover, its functional hierarchy aligns with the cortical hierarchy of speech processing.
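Comparisons of this kind typically rest on an encoding analysis: predicting fMRI responses from the network's activations with a regularized linear model and scoring held-out predictions per voxel. The sketch below illustrates that general recipe only; the array names, sizes, and random placeholder data are hypothetical and do not reproduce the authors' pipeline.

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
n_trs, n_features, n_voxels = 500, 256, 100   # hypothetical data sizes
X = rng.standard_normal((n_trs, n_features))  # model activations per fMRI volume (placeholder)
Y = rng.standard_normal((n_trs, n_voxels))    # BOLD responses per voxel (placeholder)

n_splits = 5
scores = np.zeros(n_voxels)
for train, test in KFold(n_splits=n_splits).split(X):
    # Cross-validated ridge mapping from network activations to voxel responses
    mapping = RidgeCV(alphas=np.logspace(-1, 6, 8)).fit(X[train], Y[train])
    pred = mapping.predict(X[test])
    # Voxel-wise Pearson correlation between predicted and measured activity
    for v in range(n_voxels):
        scores[v] += np.corrcoef(pred[:, v], Y[test, v])[0, 1] / n_splits

print(f"mean brain score across voxels: {scores.mean():.3f}")
```

With real data, the per-voxel scores are what would be mapped onto the cortical surface to compare the model's layers with the brain's speech hierarchy.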
Authors
Juliette Millet, Charlotte Caucheteux, Pierre Orhan, Yves Boubenec, Alexandre Gramfort, Ewan Dunbar, Christophe Pallier, Jean-Rémi King