AI helped restore speech for a woman with paralysis: "She felt embodied"
The technology that lets you transcribe your work meetings can also help people with paralysis talk again.
Researchers at UC Berkeley and UC San Francisco used generative AI to reduce the lag between when a person with severe paralysis attempts to speak and when a computer device plays the sound. The work helped a woman named Ann, who suffered a stroke in 2005 at age 30, communicate in near real time. Ann spoke in a voice that sounded like her own because the model was trained on recordings of her from before her injury.
Deploying gen AI in several different ways allowed the researchers to make improvements in neuroprosthetics that might otherwise have taken much longer, said Cheol Jun Cho, a UC Berkeley Ph.D. student in electrical engineering and computer science and co-author of the study, which appeared in March in Nature Neuroscience.
This is an example of how generative AI tools, built on the same underlying technology that powers chatbots like ChatGPT and Anthropic's Claude or the transcription tools in Google Meet, are helping doctors and researchers solve problems that could otherwise take far longer, Cho told me. Experts and advocates have pointed to the technology's potential in medicine as huge, whether in developing new drugs or providing better testing and diagnostics.
"AI speeds up progress," Cho said. "Sometimes we imagined the timeline would be a decade or two. Now the pace is more like three years."
The technology that helped Ann is a proof of concept, Cho said, but it points toward tools that could become more plug-and-play in the future.
Acceleration
The problem with existing neuroprostheses is latency: there is a lag between when a person begins trying to speak and when a sentence is actually generated and heard. Cho said previous technology meant Ann had to wait until one sentence finished before starting the next.
Ann, seen here during the first study in 2023, was able to communicate through computers that read the signals her brain tried to send to the muscles that control speech.
"The main breakthrough here is that she doesn't need to wait until the sentence ends," he said. "Now we can actually stream the decoding procedure whenever she intends to talk."
The prosthesis involves an array of electrodes implanted on the surface of her brain and connected by cable to a bank of computers. It decodes the control signals that Ann's brain sends to the muscles that control speech. Once Ann chooses the words she intends to say, the AI reads those signals from the motor cortex and gives them voice.
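The streaming idea Cho describes can be sketched in a few lines of Python: rather than buffering a whole sentence of neural data before decoding, the system emits output chunk by chunk as signals arrive. Everything below is a hypothetical stand-in (the function names, window sizes, and the placeholder decoder); the study's actual decoder is a trained neural network running on recorded brain activity.

```python
def decode_chunk(neural_window):
    """Hypothetical stand-in for a trained decoder that maps a short
    window of motor-cortex signals to a snippet of speech output."""
    return f"<audio for {len(neural_window)} samples>"

def stream_decode(signal_chunks):
    """Decode and emit output as each chunk of neural data arrives,
    instead of waiting for a full sentence to finish."""
    for chunk in signal_chunks:
        yield decode_chunk(chunk)  # output starts after the first chunk

# Simulated stream of short windows of electrode data
chunks = [[0.0] * 16 for _ in range(5)]
for out in stream_decode(chunks):
    print(out)
```

The point of the generator here is latency: the first output appears after a single window of data rather than after the entire utterance, which is the gap between the 2023 system and this one.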
To train the model, the team had Ann attempt to speak sentences shown on a screen. They then used the data on that brain activity to map the signals in the motor cortex, using gen AI to fill in the gaps.
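The training setup, as described, amounts to supervised pairs of brain activity and cue sentences. A minimal sketch, with entirely hypothetical names and toy data in place of real recordings:

```python
def make_dataset(trials):
    """trials: list of (neural_recording, cue_sentence) pairs, where the
    cue sentence was shown on screen while the participant attempted to
    speak it. Returns inputs and targets for supervised training."""
    inputs = [recording for recording, _ in trials]
    targets = [sentence for _, sentence in trials]
    return inputs, targets

# Toy trials standing in for recorded motor-cortex activity
trials = [
    ([0.2, 0.8, 0.1], "hello there"),
    ([0.5, 0.3, 0.9], "thank you"),
]
X, y = make_dataset(trials)
print(len(X), y[0])  # 2 hello there
```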
Cho said the team hopes the breakthrough will lead to devices that scale and are more accessible.
"We are still in the ongoing effort to make it more accurate and lower latency," he said. "We are trying to build something that can be more plug-and-play."
Using AI to move from thought to speech
Cho said the team used gen AI in several different ways. One was to recreate Ann's voice from before her injury: they used recordings made before her injury to train a model that could produce the sound of her voice.
"She was very excited when she first heard her own voice again," Cho said.
https://www.youtube.com/watch?v=mgsokgbbxk
The other big change was real-time transcription. Cho compared it to tools that transcribe presentations or meetings as they happen.
The work built on a 2023 study that used AI tools to help Ann communicate. That earlier work still had a significant delay between when Ann tried to speak and when the words were produced. The new study reduced that delay, and Ann told the team it felt more natural.
"She reported that she felt embodied, that it was her own speech," Cho said.