
A newly developed brain-computer interface brings hope to people who have lost their ability to speak.
Researchers from the University of California, Davis, demonstrated how their new technology can instantaneously translate brain activity into audible speech.
A study participant with amyotrophic lateral sclerosis (ALS) was able to “speak” via a computer in real time. He could even change his intonation and sing some tunes.
What sets this system apart from earlier assistive technologies is that conversation no longer feels delayed.

“By comparison, this new real-time voice synthesis is more like a voice call,” said Sergey Stavisky, senior author of the paper published in the scientific journal Nature and an assistant professor in the UC Davis Department of Neurological Surgery.
With instantaneous translation of brain activity, users will feel more “included” in a conversation.
“They can interrupt, and people are less likely to interrupt them accidentally,” Stavisky said.
The study participant has a surgically implanted investigational brain-computer interface (BCI) consisting of four microelectrode arrays. Placed in the region of the brain responsible for producing speech, the arrays record neural activity and send it to a computer, which interprets the signals and reconstructs the voice.
“Our algorithms map neural activity to intended sounds at each moment of time. This makes it possible to synthesize nuances in speech and give the participant control over the cadence of his BCI-voice,” said Maitreyee Wairagkar, first author of the study and project scientist in the Neuroprosthetics Lab at UC Davis.
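The study’s actual decoding models are not reproduced here, but the moment-by-moment mapping Wairagkar describes can be sketched as a causal streaming loop: read a short frame of neural features, map it to acoustic parameters, and synthesize that frame of audio immediately rather than waiting for a full sentence. The Python sketch below uses entirely hypothetical names and numbers – the channel count, frame size, and a random linear map standing in for the trained decoder are all assumptions made for illustration – to show the streaming structure, not the paper’s implementation.

```python
import numpy as np

# Illustrative sketch of a causal, frame-by-frame "brain-to-voice" loop.
# All names and numbers are assumptions, not the study's implementation:
# 256 neural channels standing in for the four microelectrode arrays,
# a 10 ms frame hop, and a random linear map in place of the trained decoder.

N_CHANNELS = 256   # neural features per frame (assumed)
N_ACOUSTIC = 25    # acoustic parameters per frame, e.g. pitch + spectrum (assumed)
FRAME_MS = 10      # decode step size in milliseconds (assumed)

rng = np.random.default_rng(0)
decoder = rng.standard_normal((N_ACOUSTIC, N_CHANNELS)) * 0.05  # stand-in model

def read_neural_frame() -> np.ndarray:
    """Stand-in for reading one frame of binned spike features from the arrays."""
    return rng.standard_normal(N_CHANNELS)

def synthesize(acoustic: np.ndarray) -> np.ndarray:
    """Stand-in vocoder: turns acoustic parameters into a 10 ms audio chunk.

    160 samples per 10 ms frame corresponds to a 16 kHz sample rate.
    The first acoustic parameter modulates pitch, mimicking intonation control.
    """
    t = np.linspace(0, FRAME_MS / 1000, 160, endpoint=False)
    pitch_hz = 100 + 20 * np.tanh(acoustic[0])
    return np.sin(2 * np.pi * pitch_hz * t).astype(np.float32)

audio_chunks = []
for _ in range(100):                          # 100 frames = 1 second of speech
    features = read_neural_frame()            # 1) record brain activity
    acoustic = decoder @ features             # 2) map neural activity to intended sounds
    audio_chunks.append(synthesize(acoustic)) # 3) reconstruct the voice, frame by frame

waveform = np.concatenate(audio_chunks)       # streamed out as it is produced
```

The key design point the sketch illustrates is causality: each audio chunk depends only on neural data already recorded, which is what allows output to stream continuously instead of arriving as delayed, sentence-sized blocks.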

The delay is very short: the “translation” takes only one-fortieth of a second, about 25 milliseconds. That speed offers real hope for people who cannot talk.