Scientists Have Found a Way to Convert Human Brain Signals Directly Into Speech
Columbia neuroengineers have created a system that translates thought into intelligible, recognizable speech.
By monitoring someone’s brain activity, the system harnesses the power of speech synthesizers and artificial intelligence, and could lead to new ways for computers to communicate directly with the brain.
A paper on the work appears in the Journal of Neuroscience.
The team envisions the technology drawing on the brain’s own encoding of speech sounds, together with the commands that control the muscles of the lips, tongue, palate, and voice box that produce them.
“Our voices help connect us to our friends, family and the world around us, which is why losing the power of one’s voice due to injury or disease is so devastating,” says one of the team, Nima Mesgarani from Columbia University in New York.
“With today’s study, we have a potential way to restore that power. We’ve shown that, with the right technology, these people’s thoughts could be decoded and understood by any listener.”
More ‘intuitive’ than Hawking’s technology
Theoretical physicist Stephen Hawking, who died in 2018 at the age of 76, spent much of his life in a wheelchair. He had a rare disease called amyotrophic lateral sclerosis that left him paralyzed and unable to speak naturally for most of his life.
However, thanks to a computer interface that he could control by moving his cheek, he could write words and sentences that a speech synthesizer then read out.
Although the method does the job, it is slow and laborious, and it does not articulate the speech that the brain encodes and sends to the muscles that produce the sounds.
Instead, it requires the person to go through a process more akin to writing; they have to think, for instance, about the written form of the words and sentences they wish to articulate, not just their sounds.
The authors explain:
“These findings suggest that speech production shares a similar critical organizational structure with movement of other body parts.”
“This has important implications,” they conclude, “both for our understanding of speech production and for the design of brain-machine interfaces to restore communication to people who cannot speak.”
Based on their results, they now plan to build a brain-machine interface algorithm that, in addition to decoding gestures, will be able to form words by combining them.