A man who lost
the ability to speak can now hold real-time conversations and even sing through
a brain-controlled synthetic voice. The brain-computer interface reads the
man’s neural activity via electrodes implanted in his brain and then instantaneously
generates speech sounds that reflect his intended pitch, intonation and
emphasis. To synthesise speech more realistically, researchers implanted 256
electrodes into the parts of the man’s brain that help control the facial
muscles used for speaking. Then, across multiple sessions, the researchers
showed him thousands of sentences on a screen and asked him to try saying them
aloud, sometimes with specific intonations, while recording his brain activity.

Next, the team
fed that data into an artificial intelligence model that was trained to
associate specific patterns of neural activity with the words and inflections
the man was trying to express. The model then generated speech from those
brain signals, producing a voice that reflected both what he intended to say
and how he wanted to say it. The researchers even trained the AI on voice
recordings from before the man’s condition progressed, using voice-cloning
technology to make the synthetic voice sound like his own. In another part of
the experiment, the researchers had him try to sing simple melodies using
different pitches. Their model decoded his intended pitch in real time and then
adjusted the singing voice it produced.
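As a purely illustrative sketch of the idea, not the researchers' actual system: the pipeline described above amounts to learning a mapping from windows of neural features to intended speech parameters during a calibration phase, then applying that mapping to each new window as it arrives. The toy example below stands in for the real neural network with a simple nearest-centroid classifier over simulated feature vectors; all labels, dimensions, and data here are invented for demonstration.

```python
import random
import math

# Toy stand-in for the study's decoder: simulated neural feature windows are
# mapped to an intended pitch label. The real system decodes 256-channel
# electrode recordings into full speech sounds in real time.

LABELS = ["low_pitch", "mid_pitch", "high_pitch"]

def make_sample(label, rng):
    """Simulate one window of neural features for a given intended pitch."""
    center = LABELS.index(label) * 5.0
    return [center + rng.gauss(0, 0.5) for _ in range(8)]

def train_centroids(samples):
    """'Calibration' phase: average the recorded windows for each label,
    analogous to training on attempted sentences with known targets."""
    centroids = {}
    for label, vecs in samples.items():
        dim = len(vecs[0])
        centroids[label] = [sum(v[i] for v in vecs) / len(vecs)
                            for i in range(dim)]
    return centroids

def decode(window, centroids):
    """Map an incoming feature window to the nearest learned pattern."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(centroids, key=lambda lbl: dist(window, centroids[lbl]))

rng = random.Random(0)
training = {lbl: [make_sample(lbl, rng) for _ in range(20)] for lbl in LABELS}
centroids = train_centroids(training)

# "Real-time" streaming: decode each new window as it arrives.
stream_labels = ["low_pitch", "high_pitch", "mid_pitch"]
stream = [make_sample(lbl, rng) for lbl in stream_labels]
decoded = [decode(w, centroids) for w in stream]
print(decoded)
```

In the actual study the decoder is a trained neural network producing continuous speech audio (including pitch and emphasis) rather than discrete labels, but the train-then-stream structure is the same.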
More information: https://www.newscientist.com/article/2483913-mind-reading-ai-turns-paralysed-mans-brainwaves-into-instant-speech/