Originally published January 2, 2019
Here is an excerpt:
Finally, neurosurgeon Edward Chang and his team at the University of California, San Francisco, reconstructed entire sentences from brain activity captured from speech and motor areas while three epilepsy patients read aloud. In an online test, 166 people heard one of the sentences and had to select it from among 10 written choices. Some sentences were correctly identified more than 80% of the time. The researchers also pushed the model further: They used it to re-create sentences from data recorded while people silently mouthed words. That's an important result, Herff says—"one step closer to the speech prosthesis that we all have in mind."
However, "What we're really waiting for is how [these methods] are going to do when the patients can't speak," says Stephanie Riès, a neuroscientist at San Diego State University in California who studies language production. The brain signals when a person silently "speaks" or "hears" their voice in their head aren't identical to signals of speech or hearing. Without external sound to match to brain activity, it may be hard for a computer even to sort out where inner speech starts and ends.
Decoding imagined speech will require "a huge jump," says Gerwin Schalk, a neuroengineer at the National Center for Adaptive Neurotechnologies at the New York State Department of Health in Albany. "It's really unclear how to do that at all."
One approach, Herff says, might be to give feedback to the user of the brain-computer interface: If they can hear the computer's speech interpretation in real time, they may be able to adjust their thoughts to get the result they want. With enough training of both users and neural networks, brain and computer might meet in the middle.
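The co-adaptation idea above can be illustrated with a toy simulation. This is a minimal sketch under invented assumptions, not the actual method discussed in the article: the "neural signal" is a single number, the "decoder" a single scalar weight, and both the simulated user and the decoder nudge themselves toward the intended target after hearing each round of feedback.

```python
# Toy closed-loop co-adaptation sketch. All names, dynamics, and learning
# rates here are hypothetical illustrations of the feedback idea, not a
# real brain-computer interface.

def run_closed_loop(target=1.0, steps=500, user_lr=0.1, decoder_lr=0.05):
    signal = 0.2   # user's initial "neural signal" (arbitrary starting point)
    weight = 0.5   # decoder's initial mapping (arbitrary starting point)
    for _ in range(steps):
        output = weight * signal               # decoder's real-time interpretation
        error = target - output                # user hears output, senses mismatch
        signal += user_lr * error * weight     # user adjusts their "thought"
        weight += decoder_lr * error * signal  # decoder retrains on the same error
    return weight * signal

final_output = run_closed_loop()
```

With small step sizes, neither side has to solve the problem alone: the user's signal and the decoder's weight each move partway, and the product converges on the target, which is the "meet in the middle" dynamic the passage describes.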