Two women in the United States, one paralyzed by ALS and the other by a stroke, have lost the ability to speak but can now communicate with their surroundings again thanks to brain implants that decode their neural activity and translate it into words. The two technologies, presented on Wednesday in articles in the journal Nature, enable communication with a speed, precision and richness of language previously unheard of.
A team from the University of California, San Francisco (UCSF) demonstrated in 2021 that it was possible to decode the brain signals a person produces when trying to speak and transform them into text. That first attempt enabled a man with severe paralysis to communicate using a vocabulary of 50 words. The system showed that translation was possible, but limited: it got one out of every four words wrong and transcribed signals at a rate of 18 words per minute, far slower than a normal conversation, which runs at about 160.
The implants presented on Wednesday by UCSF and Stanford University multiply the speed and richness of communication. The UCSF system achieved a rate of 78 words per minute, forming sentences from a vocabulary of more than 1,000 terms, while Stanford’s reached 62 words per minute with a far larger vocabulary of 125,000 words. The results bring closer the day when people who have lost their voice can hold fluent conversations with those around them, and “are a milestone in this field,” according to Edward Chang, a neurosurgeon at UCSF who led one of the studies.
Both technologies record the neural activity that would normally drive the muscles of the patients’ tongue, pharynx, jaw and face, allowing them to speak if they were not paralyzed, and use artificial intelligence to transform those signals into words. However, the groups differ in how they collect the data and train the AI.
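Neither study’s actual code accompanies this article; purely as an illustration of that pipeline (a window of neural features goes in, a predicted speech unit comes out), here is a toy Python sketch. Every name, shape and the linear model itself are invented assumptions, not the researchers’ method, which relies on far more elaborate neural networks.

```python
import numpy as np

# Toy illustration of the decode loop described above.
# All shapes and parameters are hypothetical stand-ins.

rng = np.random.default_rng(0)

N_CHANNELS = 128  # hypothetical number of recording electrodes
N_UNITS = 40      # hypothetical output classes (e.g., speech sounds)

# Stand-in for neural features extracted from one time window of recording
features = rng.normal(size=N_CHANNELS)

# Stand-in for a trained decoder (here just a random linear map)
weights = rng.normal(size=(N_UNITS, N_CHANNELS))
bias = rng.normal(size=N_UNITS)

def decode_window(x: np.ndarray) -> int:
    """Map one window of neural features to the most likely speech unit."""
    logits = weights @ x + bias
    return int(np.argmax(logits))

print(decode_window(features))  # index of the predicted class
```

In the real systems, a sequence of such per-window predictions is assembled into words and sentences, often with a language model smoothing the output.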
While the UCSF scientists read the collective activity of groups of cells from electrodes placed on the surface of the brain, the Stanford team inserted electrodes into the patient’s cerebral cortex to read the activity neuron by neuron.
The fact that the two approaches have given similar results fills the researchers with optimism. “The most important message is that there is hope, that this will continue to improve and that it will provide a solution for years to come,” concludes the UCSF neurosurgeon. For now, both technologies are purely experimental.
To translate the signal into words, the research teams and patients trained the artificial intelligence for hundreds of hours. Stanford University asked its volunteer to repeat more than 10,000 different sentences, taken randomly from telephone conversations, over the course of 25 days. The algorithm was able to translate the neural impulses into words from a very wide vocabulary of more than 125,000 terms, getting 24% of the words wrong. Although that error rate is high, it matches what the 2021 system achieved with a far poorer vocabulary of only 50 words; restricted to such a small vocabulary, the new technology errs only once in every ten words.
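The papers report accuracy as the fraction of words decoded incorrectly. Assuming this corresponds to the standard word error rate used in speech research (word-level edit distance divided by the length of the reference sentence), a minimal computation looks like the sketch below; the example sentence is invented.

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level Levenshtein distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j]: edit distance between first i reference words
    # and first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution or match
    return dp[len(ref)][len(hyp)] / len(ref)

# One substituted word out of five -> an error rate of 0.2
print(word_error_rate("i want to drink water", "i want a drink water"))
```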
The University of California, by contrast, chose to train its AI by having the patient repeat, over and over, sentences built from a vocabulary of about 1,000 words. With this approach, the system missed only 5% of the terms when verbalizing sentences from a repertoire of 50 statements. For new formulations, however, the error rate returned to one in every four words.