Ann, a participant in the study, uses a digital link wired to her cortex to interface with an avatar © Noah Berger

Two research teams in California have developed brain implants that they say are much more effective than previous devices at giving a voice to people who cannot speak.

The two groups, working independently at the University of California, San Francisco and Stanford University, have used new electrode arrays and artificial intelligence programs to turn thoughts into text and speech. The UCSF scientists also designed a lifelike avatar to speak the decoded words.

The details of both brain-computer interfaces were jointly published in the journal Nature on Wednesday.

“Both studies are a big leap forward towards decoding brain activity with the speed and accuracy required to restore fluent communications to people with [voice] paralysis,” said Frank Willett, a member of the Stanford team.

The two implants were significantly different in design. The UCSF team placed a paper-thin rectangle with 253 electrodes on the surface of the cortex to record brain activity from an area known to be critical for speech. The Stanford device inserted two smaller arrays with a total of 128 microelectrodes deeper into the brain.

Each team worked with a single volunteer: Stanford with Pat Bennett, 68, who has amyotrophic lateral sclerosis, and UCSF with a 47-year-old stroke patient known as Ann.

Signals from Ann’s brain are converted into decoded speech via an avatar © Nature/Chang Lab. Examples of decoded sentences include: “Tell me about yourself.” “What are you looking for?” “Let me tell you what I did.” “Will I see you later?” “I am doing well today.” “Anything is possible.”

Despite the differences between the implants and research participants, the results from the two studies were broadly similar. Both achieved average speech rates of about 60 to 80 words per minute: almost half the speed of a normal conversation, but at least three times faster than any previous brain-computer interface.

Both projects used an artificial intelligence algorithm to decode electrical signals from each subject’s brain, teaching itself to distinguish the distinct patterns associated with individual phonemes, the subunits of speech that form spoken words. The systems needed long training sessions: 25 in Bennett’s case, each lasting four hours, during which she repeated in her mind different sentences chosen from a large data set of phone conversations.
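The core idea can be illustrated with a toy sketch, which is not either team’s actual model: assume each phoneme evokes a characteristic pattern of activity across the electrodes, and a classifier learns to tell those patterns apart. Here the electrode count, the template patterns and the nearest-template classifier are all hypothetical stand-ins for the real recurrent neural networks the researchers trained.

```python
import numpy as np

rng = np.random.default_rng(0)

PHONEMES = ["HH", "EH", "L", "OW"]  # a short target sequence
N_ELECTRODES = 16                   # hypothetical electrode count

# Hypothetical "template" activity pattern evoked by each phoneme.
templates = {p: rng.normal(size=N_ELECTRODES) for p in PHONEMES}

def record_window(phoneme: str, noise: float = 0.3) -> np.ndarray:
    """Simulate one noisy window of neural activity for an attempted phoneme."""
    return templates[phoneme] + rng.normal(scale=noise, size=N_ELECTRODES)

def decode_window(window: np.ndarray) -> str:
    """Nearest-template classifier: pick the phoneme whose pattern is closest."""
    return min(PHONEMES, key=lambda p: np.linalg.norm(window - templates[p]))

# Decode a noisy attempt at the phoneme sequence.
attempted = [record_window(p) for p in PHONEMES]
decoded = [decode_window(w) for w in attempted]
print(decoded)
```

In the published systems, the classifier is a neural network trained on those long repetition sessions, and a language model then strings the predicted phonemes into likely words, which is why large sentence data sets were needed for training.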

“These initial results have proven the concept, and eventually technology will catch up to make it easily accessible to people who cannot speak,” Bennett wrote. “For those who are non-verbal, this means they can stay connected to the bigger world, perhaps continue to work, maintain friends and family relationships.”

A researcher connects a neural data port to Ann’s head during the study © Noah Berger

The UCSF scientists, working with colleagues at UC Berkeley, created a personalised voice for Ann, based on a recording of her speaking at her wedding. They also created an avatar for her, animated with software that simulates facial muscle movements from Speech Graphics, an Edinburgh-based facial animation company.

Signals from Ann’s brain as she tried to speak were converted into corresponding movements on the avatar’s face. “When the subject first used this system to speak and move the avatar’s face in tandem, I knew that this was going to be something that would have a real impact,” said Kaylo Littlejohn of UC Berkeley.

Much more development work will be needed to translate the laboratory proof of concept into devices simple and safe enough for patients and their carers to operate at home. An important step, the researchers say, will be to produce a wireless version that would not require the user to be wired to the implant.

In a Nature editorial two neurologists who were not involved in the research, Nick Ramsey of Utrecht University and Nathan Crone of Johns Hopkins University, called the results “a great advance in neuroscientific and neuro-engineering research, [showing] great promise in alleviating the suffering of individuals who have lost their voice as a result of paralysing neurological injuries and diseases”.

Copyright The Financial Times Limited 2024. All rights reserved.