Scientists Create Device to Turn Brain Signals into Speech
By Bryan Lynn
Scientists say they have created a new device that can turn brain signals into electronic speech.
The invention could one day give people who have lost the ability to speak a better way of communicating than current methods.
The device was developed by researchers from the University of California, San Francisco. Their results were recently published in a study in the journal Nature.
Scientists created a "brain machine interface" that is implanted in the brain. The device was built to read and record brain signals that help control the muscles that produce speech. These include the lips, larynx, tongue and jaw. The experiment involved a two-step process. First, the researchers used a "decoder" to turn electrical brain signals into representations of human vocal movements. A synthesizer then turns the representations into spoken sentences.
Other brain-computer interfaces already exist to help people who cannot speak on their own. Often these systems are trained to follow eye or facial movements of people who have learned to spell out their thoughts letter-by-letter.
But researchers say this method can produce many errors and is very slow, permitting at most about 10 spoken words per minute. This compares to between 100 and 150 words per minute used in natural speech.
Edward Chang is a professor of neurological surgery and a member of the university's Weill Institute for Neuroscience. He was a lead researcher on the project. In a statement, he said the new two-step method presents a "proof of principle" with great possibilities for "real-time communication" in the future.
"For the first time, this study demonstrates that we can generate entire spoken sentences based on an individual's brain activity," Chang said.
The study involved five volunteer patients who were being treated for epilepsy. The individuals had the ability to speak and already had electrodes implanted in their brains. The volunteers were asked to read several hundred sentences aloud while the researchers recorded their brain activity.
The researchers used audio recordings of the voice readings to reproduce the vocal muscle movements needed to produce human speech. This process permitted the scientists to create a realistic "virtual voice" for each individual, controlled by their brain activity.
Future studies will test the technology on people who are unable to speak.
Josh Chartier is a speech scientist and doctoral student at the University of California, San Francisco. He said the research team was "shocked" when it first heard the synthesized speech results.
The study reports the spoken sentences were understandable to hundreds of human listeners asked to write out what they heard. The listeners were able to write out 43 percent of sentences with perfect accuracy.
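The 43 percent figure is a sentence-level, exact-match score: a sentence counts only if a listener wrote it out word for word. A minimal sketch of that kind of scoring, using invented example sentences rather than study data, might look like this:

```python
# Hedged sketch of exact-match scoring for a listening test.
# The sentences and transcripts below are invented examples, not study data.
def exact_match_accuracy(spoken, transcribed):
    """Return the percentage of sentences transcribed with perfect accuracy."""
    assert len(spoken) == len(transcribed)
    matches = sum(1 for s, t in zip(spoken, transcribed)
                  if s.strip().lower() == t.strip().lower())
    return 100.0 * matches / len(spoken)

spoken = ["the ship was torn apart", "she had your dark suit", "ask me to carry a book"]
heard  = ["the ship was torn apart", "she had your dark suit", "ask me to carry the book"]
print(exact_match_accuracy(spoken, heard))  # about 66.7 for this made-up example
```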
The researchers noted that, as is the case with natural speech, listeners had the highest success rate identifying shorter sentences. The team also reported more success synthesizing slower speech sounds like "sh," and less success with harder sounds like "b" or "p."

Chartier admitted that much more research of the system will be needed to reach the goal of perfectly reproducing spoken language. But he added: "The levels of accuracy we produced here would be an amazing improvement in real-time communication compared to what's currently available."
Source: VOA