In recent years, Facebook, Valve, and Neuralink have all entered the brain-computer interface (BCI) field. The technology was originally created for patients who have lost the ability to speak, such as those with cerebral palsy, and it also looks likely to serve next-generation computing platforms such as AR/VR. To explore the prospects of brain-computer interfaces in AR/VR applications, Facebook has spent the past four years researching BCI technology and has achieved notable results.
According to Qingting.com, Facebook’s BCI project comprises two parts: 1) a wearable optical BCI researched by an internal team; 2) an implantable BCI developed in cooperation with the University of California, San Francisco (UCSF). Of the two, Facebook is more focused on non-invasive brain-computer interfaces, and the technology and algorithms from the implantable work may later be applied to non-invasive interfaces such as EMG wristbands. Future application scenarios include all-day AR glasses.
Recently, Facebook and UCSF published their latest brain-computer interface research program, Project Steno, in the New England Journal of Medicine. The system reads activity from the speech-related region of the cerebral cortex in paralyzed patients and decodes it into complete text. Notably, people with severe aphasia performed well even the first time they used it.
At present, there are various invasive and non-invasive brain-computer interface technologies on the market. Most can control robotic arms, operate menu interfaces, or identify biological signals, but few can translate brain signals directly into text. Moreover, small, lightweight brain-computer interface designs are rare.
Turning the thoughts in your mind into words sounds like something out of a science-fiction movie, but Project Steno has implemented this function. The solution rapidly interprets the signals the cerebral motor cortex sends to the vocal tract, recognizing simple conversation such as greetings and descriptions of one’s state. It can distinguish 50 words, which can be combined into roughly 1,000 sentences. Text-recognition accuracy reaches 93%, at speeds of up to 18 words per minute.
In fact, Facebook Reality Labs has been exploring the BCI project since 2017. Its long-term goal is to develop a silent, non-invasive speech interface that converts the ideas in a user’s brain into text, helping people with aphasia restore natural communication.
Project Steno began research and development in 2019. The technology marks an important milestone in neuroscience; at the same time, it marks the end of the Facebook-UCSF collaboration. Now that the collaboration has concluded, Facebook will open-source the brain-computer interface software and share the head-mounted BCI prototype with researchers and peers, hoping to advance the BCI ecosystem and its technology, and to apply it in medical-assistance scenarios such as helping patients with language disabilities communicate.
Facebook said the brain-computer interface technology can be used in clinical trial settings, and may also find its way into non-invasive consumer-grade products such as optical BCIs and EMG wristbands, or even serve as an input method for AR glasses.
However, in the short term, Facebook appears to have little interest in implantable technology. Next, Facebook will shift its focus from the brain-computer interface to the EMG wristband, to accelerate combining a wrist-worn neural interface with AR/VR. In other words, Facebook will use the BCI team’s foundational research to optimize wristband input, while shelving the silent, non-invasive speech interface.
About Project Steno
According to Facebook, the participant described Project Steno as lighter than any BCI solution he had used in 16 years, and the technology is a key milestone in the field of neuroscience.
Two years ago, the program could recognize a large number of words in real time with a very low error rate. Then, late last year, the researchers used machine learning to achieve recognition of complete sentences.
Edward Chang, director of the Department of Neurosurgery at UCSF, said: Our research team at UCSF has worked on “speech neuroprosthetics” for more than ten years. In the past five years, advances in machine learning have allowed speech neuroprosthesis technology to develop by leaps and bounds. With Facebook’s machine-learning expertise and financial support, we have accelerated the development of brain-computer interfaces and gained a better understanding of how the brain processes language tasks.
However, the earlier studies were conducted on people who could speak aloud. To verify how Project Steno performs for patients with aphasia, the researchers performed elective surgery on a participant and implanted an electrode module over his cortex. The patient had suffered multiple strokes and could not speak or communicate normally.
During the experiment, the patient provided several dozen hours of attempted-speech data, which was then used to train machine learning models to identify speech intent and classify individual words.
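The pipeline described above — recorded neural activity paired with attempted words, used to train a classifier over a fixed vocabulary — can be sketched roughly as follows. Everything here is illustrative: the data is synthetic, the five-word vocabulary stands in for the study’s 50 words, and the nearest-centroid model is an assumption, not the actual UCSF model.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB = ["hello", "thirsty", "good", "family", "outside"]  # stand-in for the 50-word vocabulary
N_FEATURES = 64  # e.g. per-electrode band-power features (an assumption)

# Synthetic training data: each attempted word yields feature vectors
# drawn from a word-specific distribution.
true_centers = rng.normal(size=(len(VOCAB), N_FEATURES))
X = np.vstack([c + 0.5 * rng.normal(size=(200, N_FEATURES)) for c in true_centers])
y = np.repeat(np.arange(len(VOCAB)), 200)

# "Training": a nearest-centroid classifier learns one template per word.
templates = np.stack([X[y == i].mean(axis=0) for i in range(len(VOCAB))])

def decode_word(features):
    """Classify one neural feature window as the nearest word template."""
    dists = np.linalg.norm(templates - features, axis=1)
    return VOCAB[int(np.argmin(dists))]

# Decode a fresh "attempted word" from a new neural feature window.
probe = true_centers[1] + 0.5 * rng.normal(size=N_FEATURES)
print(decode_word(probe))  # → thirsty
```

A real decoder would of course work on noisy time-series cortical recordings rather than clean fixed-length vectors, but the train-then-classify structure is the same.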
Facebook’s brain-computer interface strategy
In this collaboration, Facebook provided UCSF researchers with machine-learning advice and feedback, as well as research funding, while the project’s experiments were all designed and supervised by UCSF.
Facebook said: We are not involved in any form of data collection, nor are we interested in developing implantable BCIs ourselves. Our support for UCSF is mainly to help the researchers expand server capacity, accelerate model testing, and obtain more accurate experimental results.
In addition to Project Steno, Facebook has also previously studied schemes for detecting brain blood oxygenation based on near-infrared light, as well as non-invasive brain-computer interface schemes for detecting tissue movement. Facebook’s wearable optical BCI is reportedly based on near-infrared technology (in the future, it may use LiDAR or phone-camera components), and in appearance and size it resembles a large pair of headphones. By contrast, the implantable BCI is smaller, but requires surgery.
Facebook said the goal of the implantable BCI program was to verify whether a silent brain-computer interface could reach a decoding speed of 100 words per minute, and to explore which types of neural signals need to be identified.
Project Steno is the first demonstration of a BCI that combines decoded speech intent with a language model, showing the potential of incorporating the statistical features of language into BCI technology. By predicting likely word sequences and inferring how words combine into sentences (similar to a phone keyboard’s autocorrect and word suggestions), BCI accuracy can be greatly improved.
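As a rough illustration of the idea (not Facebook’s or UCSF’s actual decoder), the sketch below combines noisy per-word classifier probabilities with a toy bigram language model using Viterbi decoding. All vocabulary, probabilities, and function names are invented for the example; the point is that the language model can overrule a classifier that slightly prefers the wrong word at one position.

```python
import math

# Toy bigram language model (probabilities are invented for the example).
bigram = {
    ("<s>", "i"): 0.8, ("<s>", "family"): 0.2,
    ("i", "am"): 0.9, ("i", "thirsty"): 0.1,
    ("am", "thirsty"): 0.7, ("am", "family"): 0.3,
}
def lm(prev, w):  # small floor probability for unseen bigrams
    return bigram.get((prev, w), 0.01)

# Per-position word probabilities from a hypothetical neural classifier.
# Note the classifier is noisy: at position 1 it slightly prefers "thirsty".
classifier_probs = [
    {"i": 0.7, "family": 0.3},
    {"am": 0.4, "thirsty": 0.45, "family": 0.15},
    {"thirsty": 0.6, "am": 0.4},
]

def decode(frames):
    """Viterbi search for the word sequence maximizing
    classifier likelihood x language-model probability."""
    paths = {"<s>": 0.0}  # log-prob of the best path ending in each word
    back = []
    for frame in frames:
        new_paths, pointers = {}, {}
        for w, p in frame.items():
            best_prev, best_score = max(
                ((prev, score + math.log(p) + math.log(lm(prev, w)))
                 for prev, score in paths.items()),
                key=lambda t: t[1])
            new_paths[w], pointers[w] = best_score, best_prev
        paths, back = new_paths, back + [pointers]
    # Trace back from the best final word.
    word = max(paths, key=paths.get)
    seq = [word]
    for pointers in reversed(back[1:]):
        word = pointers[word]
        seq.append(word)
    return list(reversed(seq))

print(decode(classifier_probs))  # → ['i', 'am', 'thirsty']
```

Picking the single most likely word at each position would yield the ungrammatical ["i", "thirsty", "thirsty"]; the language-model prior corrects position 1 to "am", which is the kind of accuracy gain the autocorrect analogy describes.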
AR/VR interaction focus will shift to EMG (electromyography) control
After completing the Project Steno work, Facebook Reality Labs began to reassess the overall goals of its BCI program. Although FRL’s long-term goal is still head-mounted optical BCI technology, it has decided to focus AR/VR interaction development on the EMG wristband, because that form of neural interface has a faster, shorter path to market.
FRL Research Director Sean Keller said: We are developing a more natural and intuitive way to interact with all-day AR glasses, one that will not interfere with the user’s everyday behavior. Although research on EMG wristbands is still in its infancy, FRL believes they will become the core input method for AR glasses. Going forward, we will apply our BCI research experience to accelerate the development of EMG wristband technology.
Mugler said: We realized that the biofeedback and real-time decoding algorithms used in the optical BCI research can also improve EMG wristband technology, allowing intent to be read within minutes of putting on the wristband. In addition, to improve the wristband’s accuracy, the real-time decoding algorithm can infer the user’s input intention from the statistical features of language.
It seems, then, that BCI technology may improve the text-input capability of EMG wristbands and enable fast gesture-based typing.
In short, Facebook has spent the past four years looking for a natural input method for AR glasses, but found that BCI is unlikely to become an AR input method in the short term. Later, as EMG wristband technology matured, Facebook saw promise in the wristband solution and now plans to use EMG in place of the more complex and costly BCI.
Next, Facebook also plans to publish its latest research on wearable haptic devices, technology that will enhance the user’s sense of presence in AR/VR and enable more forms of interaction.
Posted by: CoinYuppie. Reprinted with attribution to: https://coinyuppie.com/facebooks-brain-computer-interface-welcomes-new-breakthroughs-but-ar-vr-interaction-turns-to-emg-emg-control/