Letting paralyzed aphasia patients “talk”: a brain-computer interface decodes complete sentences from brain activity for the first time, in research published in the New England Journal of Medicine

This technology is an important milestone in the field of neuroscience.

Project Steno, the brain-computer interface project run by Facebook and the Chang Lab at the University of California, San Francisco (UCSF), has reported new progress. The research decodes the brain signals sent from the motor cortex to the vocal tract, allowing a severely paralyzed patient with aphasia to regain the ability to communicate.

In recent years, brain-computer interface (BCI) research has attracted growing interest from research institutions and technology companies, and has produced several widely noted results: Elon Musk’s company Neuralink has implanted BCI devices in pigs and monkeys, and a Stanford University BCI has allowed paralyzed patients to achieve “mind writing.” These results offer new hope to paralyzed patients who want to interact with the world again.

Facebook has long been committed to brain-computer interface research, focusing on the development of head-mounted BCI devices. However, Facebook recently published a blog post stating that it will stop developing head-mounted BCI technology and instead pursue a different neural interface method: a wristband device driven by electromyography. The stated reason is that the company believes wrist devices can reach the market in the short term, although Facebook still believes in the long-term prospects of head-mounted BCI technology.

At the same time, Facebook announced that Project Steno, its brain-computer interface project with the UCSF Chang Lab, has made new progress. The team launched a new study called “Brain-Computer Interface Restoration of Arm and Voice,” whose subject was a 36-year-old man who was paralyzed, bedridden, and had been unable to speak for many years. Researchers implanted an electrode array in the region of the subject’s brain that controls speech; when he tried to answer questions displayed on a screen, machine-learning algorithms identified the words he was attempting to say and converted them into sentences in real time.

This is reportedly the first time that complete sentences have been successfully decoded directly from brain activity in the speech cortex of a paralyzed patient with aphasia.

The paper, “Neuroprosthesis for Decoding Speech in a Paralyzed Person with Anarthria,” was published in the New England Journal of Medicine.


Paper address: https://www.nejm.org/doi/full/10.1056/NEJMoa2027540?query=featured_home

Decoding complete sentences directly from brain activity

The subject was paralyzed by a stroke 16 years ago and has been unable to speak for many years. He usually communicates by typing on a computer with the assistance of head-motion control.


Subject.

In the experiment, the researchers implanted a subdural, high-density, multi-electrode array over the area of the subject’s sensorimotor cortex that controls speech.


The multi-electrode array implanted in the experiment.

The researchers recorded 22 hours of cortical activity while the subject attempted to say individual words from a 50-word vocabulary set (including words essential to daily life, such as “water,” “family,” and “good”). As the subject attempted each word, the researchers used deep-learning algorithms to build computational models that detect and classify words from the patterns in the recorded cortical activity.
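To make the classification step concrete, here is a minimal sketch (with synthetic data, not the authors’ code) of the kind of computation involved: windows of multi-electrode activity are reduced to feature vectors and mapped to probabilities over the vocabulary. The array size, feature choice, and softmax classifier here are illustrative assumptions; the study’s actual models are deep neural networks trained on the recorded cortical signals.

```python
import numpy as np

rng = np.random.default_rng(0)

VOCAB = ["water", "family", "good"]   # stand-ins for the 50-word set
N_ELECTRODES, N_TIMESTEPS = 128, 200  # hypothetical array and window sizes

def extract_features(window: np.ndarray) -> np.ndarray:
    """Collapse an (electrodes x time) window of neural activity into a
    fixed-length feature vector (here: per-electrode mean and variance)."""
    return np.concatenate([window.mean(axis=1), window.var(axis=1)])

# Synthetic "recordings": one labeled window per training example.
X = np.stack([extract_features(rng.normal(size=(N_ELECTRODES, N_TIMESTEPS)))
              for _ in range(300)])
y = rng.integers(len(VOCAB), size=300)

# Softmax (multinomial logistic) classifier trained by gradient descent,
# standing in for the paper's deep-learning word classifier.
W = np.zeros((X.shape[1], len(VOCAB)))
for _ in range(200):
    logits = X @ W
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    W -= 0.01 * X.T @ (p - np.eye(len(VOCAB))[y]) / len(X)

def word_probabilities(window: np.ndarray) -> np.ndarray:
    """Return P(word | neural window) over the vocabulary."""
    logits = extract_features(window) @ W
    p = np.exp(logits - logits.max())
    return p / p.sum()

test_window = rng.normal(size=(N_ELECTRODES, N_TIMESTEPS))
print(dict(zip(VOCAB, word_probabilities(test_window).round(3))))
```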

In addition to these computational models, they used a natural-language model, which gives the probability of the next word given the previous words in a sequence, to decode complete sentences. As shown in the figure below, the pipeline combines techniques such as neural signal processing, speech detection, word classification, and language modeling.
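The following sketch illustrates how such a language model can be combined with per-word classifier probabilities to decode a sentence via beam search. The tiny bigram table and the specific scores are invented for illustration; the study’s language model and decoding procedure are more sophisticated.

```python
import math

# Hypothetical bigram LM: P(next word | previous word). "<s>" starts a sentence.
BIGRAM = {
    "<s>":  {"i": 0.5, "no": 0.4, "water": 0.1},
    "i":    {"am": 0.9, "good": 0.1},
    "am":   {"good": 0.8, "water": 0.2},
    "no":   {"i": 0.6, "water": 0.4},
}

def lm_logprob(prev: str, word: str) -> float:
    # Unseen transitions get a tiny floor probability.
    return math.log(BIGRAM.get(prev, {}).get(word, 1e-6))

def beam_decode(emissions, beam_width=3):
    """Each element of `emissions` maps words to classifier probabilities for
    one detected word attempt; return the best-scoring word sequence."""
    beams = [([], 0.0)]  # (words so far, total log-probability)
    for probs in emissions:
        candidates = []
        for words, score in beams:
            prev = words[-1] if words else "<s>"
            for w, p in probs.items():
                candidates.append((words + [w],
                                   score + math.log(p) + lm_logprob(prev, w)))
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_width]
    return beams[0][0]

# Noisy classifier output for three word attempts: the classifier slightly
# prefers "water" for the second word, but the LM rescues "am".
emissions = [{"i": 0.7, "no": 0.3},
             {"water": 0.5, "am": 0.45, "good": 0.05},
             {"good": 0.6, "water": 0.4}]
print(" ".join(beam_decode(emissions)))  # -> "i am good"
```

The point of the language model is exactly what the example shows: when the neural classifier is uncertain or slightly wrong about a word, the prior over plausible word sequences can correct it.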


The schematic diagram below shows the complete process: the subject attempts to answer (Panel A), cortical signals are recorded (Panel B), and then neural signal processing (Panel C), speech detection (Panel D), word classification (Panel E), and language modeling (Panel F) produce the final decoded response (Panel G).


To test whether the method works, the researchers displayed questions on the screen, such as “How are you doing today” and “Do you want to drink some water,” and the subject responded accordingly: “I am fine,” “No, I don’t want to drink water,” and so on.


Experimental results show that the brain-computer interface system decoded an average of 15.2 words per minute in real time, with an average accuracy of 74%. At its best, it decoded 18 words per minute with an accuracy of up to 93%. In post hoc analyses, the researchers detected the subject’s attempts to produce individual words with 98% probability, and over the 81-week study period his words were classified from stable cortical signals with 47.1% accuracy.
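For reference, accuracy figures like these are typically derived from the word error rate (WER): the word-level edit distance between the decoded sentence and the intended one, divided by the length of the intended sentence. Here is a minimal sketch of that calculation, using made-up sentences rather than trial transcripts.

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / len(ref)

# One substituted word out of four -> 25% WER, i.e., 75% accuracy.
wer = word_error_rate("i am very good", "i am water good")
print(f"WER: {wer:.0%}, accuracy: {1 - wer:.0%}")
```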


Brain-computer interface technology still needs improvement

In 2019 and 2020, Chang Lab published early Project Steno research showing that electrode arrays and predictive models could support a relatively fast and sophisticated thought-based typing system. Previous typing methods involved using brain implants to move a cursor on a screen, and other researchers have tried approaches such as visualizing handwritten letters, but the results were not ideal. The lab’s early research decoded the brain activity of people who could speak aloud; the latest research shows that the approach works even if the subject does not (or cannot) speak aloud.


Facebook Reality Labs headsets, which were not used in this study.

Eddie Chang, director of neurosurgery at UCSF, said the next step is to improve the system and have more people test it. “On the hardware side, we need to build systems with higher data resolution that can record more information from the brain, faster. On the algorithm side, we need systems that can translate these very complex brain signals into spoken words, not just text but actual, audible speech. Most important of all, we need to greatly expand the vocabulary.”

This research is very valuable for people who cannot use keyboards or other existing interfaces; even with a limited vocabulary, it can help them communicate better. But it is still far from the grand goal Facebook set in 2017: a non-invasive BCI system that lets people type 100 words per minute, on par with typing on a traditional keyboard. UCSF’s latest research uses implanted technology and has not reached that number; it has not even reached the speed at which most people type on a phone keyboard.

Meanwhile, Facebook acquired the EMG wristband company CTRL-Labs in 2019, giving it alternative control options for AR and VR. “We are still in the early stages of unlocking the potential of wrist-based electromyography (EMG), but we believe it will become the core input for AR glasses, and understanding BCI will help us achieve this goal faster,” said Sean Keller, research director at Facebook Reality Labs. Facebook will not completely abandon head-mounted brain-computer interfaces; instead, it plans to open-source the software and share hardware prototypes with external researchers, while halting its own development of head-mounted BCI technology in favor of a different neural interface method.

Reference link:

https://tech.fb.com/bci-milestone-new-research-from-ucsf-with-support-from-facebook-shows-the-potential-of-brain-computer-interfaces-for-restoring-speech-communication/
