Human thoughts can now be converted into words in real time, but Facebook does not intend to continue

Facebook has announced the open-sourcing of its brain-computer interface software LabGraph, and is sharing its head-mounted hardware prototypes with researchers and other collaborators to help advance exploration in this direction.

Facebook is abandoning its research and development of brain-computer interface (BCI) headsets, even though a related research project it funded recently made substantial progress: restoring a form of communication to a man with severe aphasia.

In recent years, brain-computer interfaces have been among the most closely watched frontier technologies, with both their technical progress and their path to commercialization drawing attention. But Facebook admits that consumer-grade brain-computer interfaces for the mass market are still far away. At the same time, to keep pushing the whole field of optical BCI forward, Facebook announced that it is open-sourcing its BCI software LabGraph and sharing its head-mounted hardware prototypes with researchers and other collaborators to help advance exploration in this direction.

Facebook’s “mind reading”

The spring of 2017 may go down as a notable moment in this history, as several large technology companies launched efforts to "read people's minds" in quick succession. First, Elon Musk founded the brain-computer interface company Neuralink and said it was studying how to implant thousands of electrodes into the human brain. A few days later, Facebook joined the exploration: media reports revealed that a secretive division of Facebook Reality Labs (FRL) called Building 8 was trying to develop a headset or headband that would let users send text messages by thought alone, at speeds of up to 100 words per minute.

According to the plan, Facebook hoped that any user could enjoy this kind of human-computer interaction through VR virtual reality. Regina Dugan, a former DARPA official and then head of hardware at Building 8, enthused, "Does it sound amazing to type directly with your mind? Although it is extremely difficult, the progress we have made has far exceeded everyone's imagination."

However, it seems that reality's progress has not far exceeded imagination. In a blog post, Facebook stated that it would stop the project and instead focus its research on a wrist-worn controller for virtual reality experiments that reads muscle signals from the arm. The company said, "While we remain confident in the long-term potential of head-mounted optical BCI, we have decided to concentrate on a different neural interface, hoping to bring a viable product to market more quickly."

Initially, Facebook's brain-computer interface (BCI) project team set a long-term goal: to develop a silent, non-invasive speech interface with which people could simply think the words they want to say and have the corresponding text input appear.

Mark Chevillet, a physicist and neuroscientist who took over leadership of the silent speech recognition project only last year and has recently moved to studying election topic management on the Facebook platform, said, "We have built up a wealth of practical experience at the technical level, so we can say with confidence that, as a consumer interface, a head-mounted optical silent speech device is still a long way off, far longer than we expected."

BCI is difficult to apply to consumer products

Facebook's vision was clearly to combine the silent speech project with VR; after all, it acquired Oculus VR for US$2 billion back in 2014. Chevillet said that to achieve this goal, Facebook adopted a two-pronged approach. First, it needed to determine whether a thought-to-speech interface was feasible at all. To that end, Facebook sponsored a study at the University of California, San Francisco, where researcher Edward Chang placed electrode pads on the surface of the human brain.

Unlike implanted electrodes, which read data from individual neurons, this technique, called electrocorticography (ECoG), measures the activity of large populations of neurons at once.

The research team ultimately made a series of striking advances. According to a report in the New England Journal of Medicine, they used these electrode pads to achieve real-time speech decoding. The study subject, a 36-year-old man code-named "Bravo-1," had lost most of his normal ability to speak after a stroke and could barely produce intermittent grunts. In the report, the researchers stated that Bravo-1 was able to express sentences on a computer at a rate of 15 words per minute via the electrode pads on the surface of his brain. To accomplish this, they measured the neural signals in the motor cortex that control the tongue and vocal tract while Bravo-1 attempted to speak silently.

To achieve this result, Chang's team gave Bravo-1 a set of 50 common words, and he performed nearly 10,000 silent attempts at them while the team fed his neural signals into a deep learning model. After training the model to match words to neural signals, the team could identify the word Bravo-1 wanted to express with 40% accuracy (far above the 2% expected by chance for a 50-word vocabulary). Even so, his output was still full of errors, such as rendering "Hi, how are you?" as "Black, are you yelling?"
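The training idea described above can be sketched with a toy stand-in. This is not the UCSF team's actual model: a simple nearest-template classifier substitutes for their deep learning model, the "neural recordings" below are synthetic feature vectors, and the five-word vocabulary stands in for the real 50-word set. The point is only the shape of the pipeline: collect many repeated recordings of each silently spoken word, learn a per-word signature, then classify a new recording against those signatures.

```python
import random

random.seed(0)

VOCAB = ["hi", "how", "are", "you", "nurse"]  # stand-in for the real 50-word set
FEATURES = 8  # pretend each recording is an 8-number feature vector

def record(word):
    """Simulate one noisy neural recording of a silently spoken word:
    a fixed per-word pattern plus Gaussian noise."""
    pattern = [ord(word[i % len(word)]) % 10 for i in range(FEATURES)]
    return [p + random.gauss(0, 1.0) for p in pattern]

# "Training": average many repeated recordings per word into a template,
# in the spirit of the ~10,000 silent attempts described above.
templates = {
    w: [sum(col) / 200 for col in zip(*(record(w) for _ in range(200)))]
    for w in VOCAB
}

def classify(signal):
    """Return the vocabulary word whose template is closest to the signal."""
    def dist(template):
        return sum((a - b) ** 2 for a, b in zip(signal, template))
    return min(VOCAB, key=lambda w: dist(templates[w]))

print(classify(record("nurse")))
```

As in the real study, such a classifier is only reliable inside its trained vocabulary; a signal for any word outside it will still be forced onto the nearest known word.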

The scientists then added a language model to further improve performance, one that estimates which word sequences are more likely to occur in English. With this addition, accuracy rose to 75%, and the system could correctly revise Bravo-1's decoded output "I am my nurse" to "I like my nurse."
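This rescoring step can be illustrated with a minimal sketch. Again, this is not the UCSF system: the per-word classifier scores and the bigram probabilities below are all invented for illustration. A noisy classifier proposes candidate words at each position, and a bigram language model tilts the final choice toward sequences that are plausible English.

```python
import math

# Per-position classifier output: candidate words with hypothetical scores.
# The classifier slightly prefers the wrong word "am" at position 1.
classifier_probs = [
    {"I": 0.9, "eye": 0.1},
    {"am": 0.5, "like": 0.45, "lick": 0.05},
    {"my": 0.8, "me": 0.2},
    {"nurse": 0.85, "purse": 0.15},
]

# Toy bigram language model: P(next_word | prev_word). "<s>" marks the start.
bigram = {
    ("<s>", "I"): 0.6, ("<s>", "eye"): 0.01,
    ("I", "like"): 0.3, ("I", "am"): 0.2,
    ("like", "my"): 0.4, ("am", "my"): 0.01,
    ("my", "nurse"): 0.2, ("my", "purse"): 0.1,
}

def best_sequence(probs, lm, lm_weight=1.0):
    """Exhaustive search for the word sequence maximising classifier
    log-probability plus weighted language-model log-probability."""
    sequences = [([], 0.0)]
    for position in probs:
        extended = []
        for words, score in sequences:
            prev = words[-1] if words else "<s>"
            for word, p in position.items():
                lm_p = lm.get((prev, word), 1e-4)  # small floor for unseen bigrams
                extended.append((words + [word],
                                 score + math.log(p) + lm_weight * math.log(lm_p)))
        sequences = extended
    return max(sequences, key=lambda s: s[1])[0]

print(" ".join(best_sequence(classifier_probs, bigram)))  # prints: I like my nurse
```

Even though the classifier alone would have picked "am" over "like", the near-zero bigram probability of "am my" lets the language model flip the decision, mirroring the correction described above.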

It is also worth noting that English contains more than 170,000 words in total, and as soon as Bravo-1 strays beyond his trained vocabulary, the system's performance plummets. In other words, although this technology shows promise for medical assistance, it is still far from Facebook's initial expectations. Chevillet admitted frankly, "In the foreseeable future, this technology should achieve clinical assistive applications, but that has nothing to do with Facebook's business. The current results are far from sufficient for the consumer-level applications we really care about."

The diffuse optical tomography device developed by Facebook uses light to measure blood oxygen changes in the brain.

Application scenarios to be expanded

Technology often develops much faster than applications and products can land. For years, brain science has remained largely at the stage of basic theoretical exploration: on the one hand, it is an extremely complex interdisciplinary field; on the other, the technology has not yet flowed into daily life and so lacks sufficient market support.

And while brain-computer interface technology faces many development challenges, the sector has still attracted many technology giants: in addition to Facebook, companies such as Google, Alibaba, and IFLYTEK have also staked out positions in the area. In April of this year, Musk's Neuralink announced that it had enabled a macaque to control a computer through brain activity alone, without manipulating a joystick by hand. As soon as the news broke, brain-computer interfaces became a hot topic again.

The researchers inserted more than 2,000 filaments into the monkey's cerebral cortex, recorded its brain's neuron activity as it interacted with the computer, and fed this neuron activity data into a "decoder algorithm" to observe and predict the monkey's hand movements in real time. Although many industry experts believe that Neuralink's results are not innovative within the brain-computer field, they do show how enduring human curiosity about brain-computer interfaces is.

Among the many candidate industries, the medical field is regarded as the first in which brain-computer interfaces will land. Clinical application products already exist, with functions focused on disease diagnosis, condition monitoring, and assistive treatment of neurological diseases.

Alibaba's DAMO Academy likewise pointed out in its 2021 Top Ten Technology Trends that brain-computer interfaces will help humans exceed their biological limits. Academia and industry are working hard to solve the problems of brain signal acquisition and processing and to help humans better understand how the brain works. As the technology matures, clinical applications of brain-computer interfaces will accelerate, eventually providing precise rehabilitation services for patients who cannot speak or move their hands.
