New device helps 47-year-old stroke survivor speak after 18 years

Other brain-computer interfaces, or BCIs, for speech typically have a slight delay between the user thinking of a sentence and a computer verbalizing it. Such delays can disrupt the natural flow of conversation, potentially leading to miscommunication and frustration, researchers said.

This is “a pretty big advance in our field,” said Jonathan Brumberg of the Speech and Applied Neuroscience Lab at the University of Kansas, who was not part of the study.

A team in California recorded the woman’s brain activity with electrodes while she silently spoke sentences in her head. The scientists built a synthesizer from recordings of her voice before her injury to recreate the sound she would have made, and they trained an AI model that translates her neural activity into units of sound.
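The article does not describe the model itself, so the following is only a rough, hypothetical sketch of that kind of pipeline: a small network mapping windows of neural activity to discrete sound units, which a personalized synthesizer would then render as audio. The channel count, layer sizes, vocabulary size, and class names here are assumptions, not details from the study.

```python
# Hypothetical sketch only: the study's actual model, features, and sound units
# are not described in the article. All names and sizes below are illustrative.
import torch
import torch.nn as nn

N_CHANNELS = 253      # assumed number of electrode channels
N_SOUND_UNITS = 100   # assumed size of the sound-unit vocabulary

class NeuralToSoundUnits(nn.Module):
    """Maps a window of neural activity to a sequence of discrete sound units."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.GRU(input_size=N_CHANNELS, hidden_size=256, batch_first=True)
        self.classifier = nn.Linear(256, N_SOUND_UNITS)

    def forward(self, neural_window):
        # neural_window: (batch, time_steps, channels)
        hidden, _ = self.encoder(neural_window)
        return self.classifier(hidden)  # per-time-step logits over sound units

model = NeuralToSoundUnits()
fake_window = torch.randn(1, 50, N_CHANNELS)   # stand-in for recorded activity
unit_logits = model(fake_window)
unit_ids = unit_logits.argmax(dim=-1)          # predicted sound units
# A personalized synthesizer (built from pre-injury voice recordings) would
# then render these units as audio in the participant's own voice.
```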

It works similarly to existing systems that transcribe meetings or phone calls in real time, said Gopala Anumanchipalli of the University of California, Berkeley.

The implant itself sits on the speech center of the brain, listening in, and those signals are translated into pieces of speech that make up sentences. It’s a “streaming approach,” Anumanchipalli said, with each 80-millisecond chunk of speech – about half a syllable – sent into a recorder.

“It’s not waiting for a sentence to finish,” Anumanchipalli said. “It’s processing it on the fly.”
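To illustrate the streaming idea described above, here is a minimal sketch of chunk-by-chunk decoding rather than whole-sentence processing. The 80-millisecond chunk length comes from the article; the sampling rate, the placeholder decoder, and everything else are assumptions, not the study’s code.

```python
# Illustrative sketch of the streaming approach described above, not the study's code.
import numpy as np

CHUNK_MS = 80
SAMPLE_RATE_HZ = 200                      # assumed neural sampling rate
SAMPLES_PER_CHUNK = SAMPLE_RATE_HZ * CHUNK_MS // 1000

def decode_chunk(chunk):
    """Stand-in for the trained model: one chunk in, one piece of speech out."""
    return f"<unit:{int(abs(chunk).mean() * 100) % 50}>"   # placeholder output

def stream_decode(neural_signal):
    """Emit speech pieces chunk by chunk instead of waiting for a full sentence."""
    for start in range(0, len(neural_signal), SAMPLES_PER_CHUNK):
        chunk = neural_signal[start:start + SAMPLES_PER_CHUNK]
        yield decode_chunk(chunk)          # roughly half a syllable per chunk

signal = np.random.randn(SAMPLE_RATE_HZ * 2)    # two seconds of fake activity
for piece in stream_decode(signal):
    print(piece, end=" ")                        # pieces appear as they are decoded
```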
