Thanks to brain implant and AI, stroke survivor speaks as digital avatar


Translating brain activity into speech is one of the ways that neurotech can be a force for good. Recent progress bodes well for the future: implants and artificial intelligence have helped two patients who had lost the ability to speak communicate again.

Thanks to recent leaps in neurotechnology and AI, various headsets and implants can now read signals directly from our brains. Sure, this might seem dangerous, but, as this story shows, it can also be extremely useful. There are now devices that can help people who can’t move or speak, for example.

Indeed, two new scientific studies published in Nature this week have demonstrated major advances in the effort to translate brain activity into speech.

Each study involved a woman who had lost her ability to speak intelligibly. One lost the ability after a stroke 18 years ago, and the other couldn’t talk because of ALS, a progressive neurodegenerative disease.

Thanks to recording devices implanted in their brains, both study participants managed to speak at a rate of about 60 to 70 words per minute. That is roughly half the rate of normal speech, but more than four times faster than had previously been reported.

One team, led by Edward Chang, a neurosurgeon at the University of California, San Francisco, even managed to create a digital avatar that represented the woman’s speech almost in real time.

They did it by capturing the brain signals that control the small muscle movements producing facial expressions.

In 2021, Chang’s team had already demonstrated that it could capture brain activity from a person who had suffered a brain-stem stroke and translate those signals into written words and sentences. The process was slow, though.

In its latest paper, the team used a larger implant with double the number of electrodes – a device about the size of a credit card – to capture signals from the study participant’s brain.

To be clear, the implant does not record thoughts – it captures the electrical signals that control the muscle movements of the lips, tongue, jaw, and voice box. These are all movements that enable speech.

The signals are then transferred to a computer, where AI algorithms decode them and a language model improves accuracy, working much like autocorrect on a phone.
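To get a feel for what that autocorrect-style step does, here is a minimal, purely illustrative Python sketch. It is not the researchers' code: it assumes we already have a decoder's noisy word-level guesses (the real systems use neural networks trained on each participant's brain signals) and simply snaps each guess to the closest word in a small, made-up vocabulary.

```python
import difflib

# Hypothetical vocabulary for illustration only; the real systems work with
# tens of thousands of words and a trained language model.
VOCABULARY = ["hello", "how", "are", "you", "today", "thank", "thanks", "fine"]

def language_model_correct(decoded_words, vocabulary):
    """Snap each decoded word to the closest vocabulary entry.

    This stands in for the 'autocorrect' step described above: the neural
    decoder's raw output is noisy, and a separate model constrains it to
    likely words.
    """
    corrected = []
    for word in decoded_words:
        matches = difflib.get_close_matches(word, vocabulary, n=1, cutoff=0.0)
        corrected.append(matches[0] if matches else word)
    return corrected

if __name__ == "__main__":
    # Noisy guesses a decoder might produce from muscle-movement signals.
    raw_output = ["helo", "hw", "ar", "yu", "tuday"]
    print(language_model_correct(raw_output, VOCABULARY))
    # -> ['hello', 'how', 'are', 'you', 'today']
```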

The second team, led by researchers from Stanford, gave a participant with ALS four much smaller implants – each about the size of an aspirin – that can record signals from single neurons. The participant trained the system by reading syllables, words, and sentences over the course of 25 sessions.

The researchers then tested the technology by having her read sentences that hadn’t been used during training. When the sentences were drawn from a vocabulary of 50 words, the error rate was about 9%. However, when the team expanded the vocabulary to 125,000 words, the error rate rose to about 24%.
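Error rates like these are typically measured as a word error rate: the number of word-level mistakes the decoder makes, divided by the number of words the participant actually tried to say. Below is a minimal sketch of that calculation, assuming the standard edit-distance definition; the papers' exact scoring may differ, and the example sentence is invented.

```python
def word_error_rate(reference, hypothesis):
    """Word-level edit distance (insertions, deletions, substitutions)
    divided by the number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming table for Levenshtein distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[len(ref)][len(hyp)] / len(ref)

if __name__ == "__main__":
    # One wrong word out of four -> 25% word error rate.
    print(word_error_rate("i am very thirsty", "i am very thirty"))  # 0.25
```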

This is a timely reminder that translating brain signals into speech isn’t perfect yet, and that the technology is still a long way from being available to the wider public.

Neurotech devices can already track slowing activity in brain regions associated with conditions like Alzheimer’s disease, schizophrenia, and dementia. They can also warn people with epilepsy of an impending seizure.