Talk to me

Anumanchipalli et al. reported speech synthesis from neural decoding of spoken sentences via a brain-machine interface (ECoG) using recurrent neural networks (RNNs). It's remarkable in many ways.

  • They decoded signals representing the vocal articulators: a biomimetic approach that mimics normal motor function (see the sketch after this list)

  • Decoded articulatory representations were highly conserved across speakers, enabling a component of the decoder to be transferable across participants

  • The decoder could synthesize speech even when a participant silently mimed sentences

  • Intelligibility of the synthesized speech was assessed with easily replicable listening tests involving human volunteers
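To make the two-stage idea concrete, here is a minimal sketch in PyTorch of an ECoG-to-kinematics-to-acoustics pipeline. This is only an illustration of the general approach described above, not the authors' actual architecture; the layer sizes, feature dimensions, and class names are assumptions for the example.

```python
# Minimal sketch (PyTorch) of a two-stage recurrent decoder in the spirit of
# the paper: ECoG features -> articulatory kinematics -> acoustic features.
# All dimensions, layer sizes, and names are illustrative assumptions, not
# the authors' exact architecture or hyperparameters.

import torch
import torch.nn as nn


class ArticulatoryDecoder(nn.Module):
    """Stage 1: map neural (ECoG) features to articulatory kinematic features."""

    def __init__(self, n_ecog: int = 256, n_kinematic: int = 33, hidden: int = 128):
        super().__init__()
        self.rnn = nn.LSTM(n_ecog, hidden, num_layers=2,
                           batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_kinematic)

    def forward(self, ecog):              # ecog: (batch, time, n_ecog)
        h, _ = self.rnn(ecog)
        return self.out(h)                # (batch, time, n_kinematic)


class AcousticDecoder(nn.Module):
    """Stage 2: map articulatory kinematics to acoustic features for a vocoder."""

    def __init__(self, n_kinematic: int = 33, n_acoustic: int = 32, hidden: int = 128):
        super().__init__()
        self.rnn = nn.LSTM(n_kinematic, hidden, num_layers=2,
                           batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_acoustic)

    def forward(self, kinematics):        # kinematics: (batch, time, n_kinematic)
        h, _ = self.rnn(kinematics)
        return self.out(h)                # (batch, time, n_acoustic)


if __name__ == "__main__":
    stage1, stage2 = ArticulatoryDecoder(), AcousticDecoder()
    ecog = torch.randn(1, 200, 256)       # fake ECoG features: 200 time steps
    acoustics = stage2(stage1(ecog))      # would then be fed to a vocoder
    print(acoustics.shape)                # torch.Size([1, 200, 32])
```

The intermediate articulatory layer is the key design choice: because articulator movements are similar across speakers, that stage is what allows part of the decoder to transfer between participants.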

Hopefully this work will lead to implantable devices that restore speech to people who have lost the ability to speak. As the authors put it, "These findings advance the clinical viability of using speech neuroprosthetic technology to restore spoken communication."

Photo: Pexels.com

#ECoG
