Scientists reported that they have developed a virtual prosthetic voice, a system that decodes the brain’s vocal intentions and translates them into mostly understandable speech, with no need to move a muscle, not even those of the mouth. The system deciphers the brain’s motor commands guiding vocal movement during speech (the tap of the tongue, the narrowing of the lips) and generates intelligible sentences that approximate a speaker’s natural cadence. Experts said the new work represented a proof of principle, a preview of what may be possible after further experimentation and refinement. The system was tested on people who speak normally; it has not been tested in people whose neurological conditions or injuries, such as strokes, could make the decoding difficult or impossible.
For the new trial, scientists at the University of California, San Francisco, and the University of California, Berkeley, recruited five people who were in the hospital being evaluated for epilepsy surgery. Each had been implanted with one or two electrode arrays: stamp-size pads, containing hundreds of tiny electrodes, that were placed on the surface of the brain. As each participant recited hundreds of sentences, the electrodes recorded the firing patterns of neurons in the motor cortex. The researchers associated those patterns with the subtle movements of the participant’s lips, tongue, larynx and jaw that occur during natural speech. The team then translated those movements into spoken sentences. Native English speakers were asked to listen to the sentences to test the intelligibility of the virtual voices. As much as 70 percent of what was spoken by the virtual system was intelligible, the study found.
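The passage above describes a two-stage pipeline: cortical activity is first mapped to the movements of the vocal tract, and those movements are then mapped to sound. The toy sketch below illustrates that structure on synthetic data only. It is not the authors’ implementation: the actual study trained neural networks on real electrode recordings, whereas here ridge regression stands in for both stages, and every count (electrodes, articulator trajectories, acoustic features) is an assumed placeholder.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Illustrative two-stage decoder mirroring the structure described above:
# (1) cortical recordings -> articulator kinematics,
# (2) kinematics -> acoustic features (which a vocoder would render as audio).
# All data here is synthetic; shapes and models are stand-ins, not the study's.

rng = np.random.default_rng(0)

n_samples = 5000        # time points (frames of recorded activity)
n_electrodes = 256      # "hundreds of tiny electrodes" on the implanted pads
n_articulators = 20     # lip, tongue, larynx, jaw trajectories (assumed count)
n_acoustic = 32         # acoustic features per frame (assumed count)

# Synthetic signals standing in for real recordings and measurements.
neural = rng.standard_normal((n_samples, n_electrodes))
w_kin = 0.1 * rng.standard_normal((n_electrodes, n_articulators))
kinematics = neural @ w_kin + 0.05 * rng.standard_normal((n_samples, n_articulators))
w_ac = 0.1 * rng.standard_normal((n_articulators, n_acoustic))
acoustics = kinematics @ w_ac + 0.05 * rng.standard_normal((n_samples, n_acoustic))

train = slice(0, 4000)   # sentences used for fitting
test = slice(4000, None) # held-out sentences

# Stage 1: decode vocal-tract movements from cortical activity.
stage1 = Ridge(alpha=1.0).fit(neural[train], kinematics[train])

# Stage 2: map the *decoded* movements to acoustic features.
stage2 = Ridge(alpha=1.0).fit(stage1.predict(neural[train]), acoustics[train])

# Full pipeline on held-out data: brain activity in, acoustics out.
decoded = stage2.predict(stage1.predict(neural[test]))
corr = np.corrcoef(decoded.ravel(), acoustics[test].ravel())[0, 1]
print(f"held-out acoustic correlation: {corr:.2f}")
```

Training the second stage on the first stage’s outputs, rather than on the measured movements, keeps the two stages consistent at decoding time, when the system sees only brain activity.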