Special research on the Metaverse industry: VR, AR, and brain-computer interface are the entrance to the Metaverse

Metaverse applications will place increasingly clear requirements on hardware, driving the step-by-step upgrade of human-computer interaction

We believe that VR/AR/brain-computer interfaces constitute the next-generation human-computer interaction platform, integrating multiple technologies such as microdisplays, sensors, chips and algorithms. Looking back at the development of human-computer interaction, both the command input form and the feedback output form have been evolving toward a lower operating threshold and higher interaction efficiency. We are currently standing at the junction between the smartphone era and the next form of interaction. Although VR/AR has made a significant leap over the previous generation of interactive devices in input technology (sensing) and output technology (display), it is still at an early stage of development. As Metaverse applications develop and the content ecosystem improves, the Metaverse’s demands on hardware will gradually become clear, which will drive the step-by-step upgrade of VR/AR/brain-computer interface devices; ultimately, a next generation of hardware comparable to PCs and smartphones is expected to emerge.

From the perspective of the history of human-computer interaction, what stage are we currently in?

Low operating threshold and high interactive bandwidth are the core development directions of human-computer interaction platforms

We have sorted out the characteristics and evolution trends of input and output forms across the different development stages of human-computer interaction, and we believe that AR/VR/brain-computer interfaces, which combine convenient operability with high interaction bandwidth, are expected to lead the next generation of interaction methods. Human-computer interaction refers to the cyclic process in which people and systems exchange information: after receiving and processing information, humans issue instructions through their behavior; the computer changes the system state after receiving the instructions, then displays and outputs feedback information that is perceived by humans, triggering the brain’s information processing and the next round of interaction. Looking back at its development, human-computer interaction has mainly gone through three stages: punched-card interaction, question-and-answer interaction, and audio-visual interaction, with input and output forms continually evolving closer to human instinct.

From the perspective of the evolution of command input forms: in the earliest punched-card stage, people could only pass digital commands to the computer one way via punched cards; the commands were simple and the threshold for use was very high. In the command-line interface (CLI) stage, question-and-answer exchanges with the computer could be carried out by typing command statements on a keyboard; although typing was more efficient than the previous generation of input, the operator still had to memorize a large number of command languages. With the graphical user interface (GUI), the combination of mouse and keyboard and actions such as pointing, clicking, scrolling and dragging let operators switch quickly and position precisely, significantly lowering the operating threshold. In the mobile-phone era, the disappearance of physical buttons and the successive arrival of voice input and touch screens further enriched user input methods and simplified the interaction process. In short, the input form of human-computer interaction has evolved from talking to machines in machine language to talking to them in natural language.

From the perspective of feedback output forms, human-computer interaction has moved through the sequence “Command Line Interface (CLI) – Graphical User Interface (GUI) – Natural User Interface (NUI)”: visual output has gone from monotonous one-dimensional statements to two-dimensional graphics, and is ultimately expected to be presented as objects in three-dimensional space, supplemented by acoustic equipment that enhances auditory output. In addition, output devices have evolved from mainframes and desktop displays to notebook computers, mobile phones and even micro-projectors, gradually becoming mobile.

The advent of the Metaverse is expected to drive the vigorous development of VR/AR/brain-computer interfaces

We believe VR/AR/brain-computer interfaces will be the representative operating platforms of the next interaction era, mainly because they fit the evolution trend of input and output forms. On the input side, VR/AR eliminates physical buttons and mainly combines gesture input, eye tracking, facial expression recognition and voice control, while the brain-computer interface moves further from EMG input to EEG input. On the output side, VR builds a mobile virtual space that integrates multi-dimensional sensory experiences such as vision, hearing and touch, and AR superimposes this onto real space, fully integrating the virtual and the real.

The initial VR and AR concepts were proposed in the 1950s and 1960s, respectively. After decades of laboratory development and B-side commercial exploration, the product form has steadily become lighter, smaller and more immersive. After 2010, with the gradual maturing of the Internet and smartphone terminals and continued penetration on the consumer side, AR/VR applications began to explore C-side deployment. Entering the 2020s, the Metaverse is predicted to become the next form of the Internet, and AR/VR is expected to become its brand-new human-computer interaction platform.

Oculus Quest 2 drove VR shipments past 10 million for the first time. After its launch in September 2020, the Oculus Quest 2 quickly became a hit and continued to sell well. Driven by this product, VR sales reached a record high of nearly 10 million units in 2021, and its design paradigm has been imitated by domestic manufacturers. Looking ahead, as the application ecosystem matures, corresponding upgrade requirements are being placed on VR hardware. We believe the clarity of the next-generation display unit may improve from the current 4K to 8K, weight may fall from nearly 500g to about 300g, the thickness of the eyepiece may shrink to one third of today’s, and more sensors will be added to enable eye tracking, gesture tracking and more.

There are many AR technology paths; Micro-LED plus diffractive optical waveguides is the most anticipated, but mature products are unlikely to appear in the short term. AR is still in an exploratory period. In terms of shipments, volumes stayed between 200,000 and 300,000 units in 2020-2021 (IDC data). In terms of product form, the market is still dominated by large manufacturers, with all-in-one products as the mainstream. In terms of technology, there are currently many paths, but each has problems with performance, yield or volume. Micro-LED plus diffractive optical waveguide is considered the AR optical system most likely to reach large-scale commercialization, but because Micro-LED still faces technical problems in mass transfer, large-scale commercialization may be difficult to achieve in the short term.

Metaverse application scenarios will land in sequence and may define the upgrade direction of next-generation VR/AR/brain-computer interfaces

We believe that as Metaverse application scenarios become clear, the future development direction of VR/AR/brain-computer interfaces will gradually come into focus. Early hardware suffered from defects such as limited application scenarios and content and an imperfect user experience, so the first generation of VR/AR did not achieve large-scale growth. At the current point in time, Metaverse application scenarios such as games, e-commerce, collaborative office, social networking, fitness, medical care, video and simulation training (education) are gradually becoming clearer. They place higher demands on VR/AR/brain-computer interface hardware and are expected to drive continuous improvement in a number of underlying technologies, including microdisplay technology, 3D reconstruction, biosensors, EMG/EEG processing, whole-body tracking and spatial positioning.

Applications of the Metaverse era emphasize immersion and interaction more than the mobile Internet era did, and different applications weight the two differently. Immersion can be obtained through richer audio-visual effects and more dimensions of sensory interaction; for example, scene rendering, immersive sound fields, temperature simulation, tactile sensing and other technologies create realistic virtual scenes so that the brain feels “present”. Interaction requires a variety of input methods that lower the operating threshold of human-computer interaction, such as conveying instructions directly by recognizing voice or reading gestures without typing or operating a keyboard, mouse or buttons, thereby improving interaction efficiency. According to how demanding different applications are in terms of immersion and interaction, we divide them into three levels:

1) Mature: video and simulation training (education). Simulation training (education) includes safety education, public safety drills, ideological and political education, etc.; it has the lowest requirements for immersion and interaction, and commercial cases already exist. The video field has relatively higher requirements for immersion, but because the content ecosystem of streaming platforms is already relatively complete, we believe video will be one of the first areas to mature as VR hardware penetrates the C-side.

2) In development: e-commerce, social networking, gaming, office, fitness. Among them, e-commerce and games focus more on the pursuit of immersion, while social networking and collaborative office have higher requirements for interaction.

3) Infancy: medical care, including disease monitoring, assisted minimally invasive surgery, signal reading, stimulation intervention, bionics, etc. Disease monitoring is expected to accelerate as biosensing technologies such as ECG, blood glucose and blood oxygen mature; fields such as assisted surgery, stimulation intervention and bionics require extremely high input and output accuracy, and related companies and medical institutions are still exploring them.

Games: Metaverse games that emphasize “immersion” require multi-platform, VR/AR and cloud-native technologies as underlying technical support

Today’s games already have the virtual identities, friends and economic systems that characterize the Metaverse, but they fail to give players a sense of “immersion”; on the hardware side this is mainly limited by the immaturity of near-eye display and multi-dimensional sensory sensing technology. In the future, Metaverse games will develop toward stronger immersion and a richer content ecosystem: mature scene rendering and immersive sound-field technologies will enhance audio-visual effects, while full-body motion tracking, sensors and spatial positioning will enhance the sense of presence. We believe high-quality game content innovation will form a positive feedback loop with VR/AR hardware upgrades, promote the development of the Metaverse game ecosystem, and open incremental growth space for high-performance computing chips, silicon-based OLEDs, Micro LEDs and related assembly companies.

E-commerce: an immersive shopping mode that blends the virtual and the real brings development opportunities for near-eye displays, AI chips, and sensors

Traditional e-commerce platforms still mainly display products in flat forms such as pictures and videos. Although the rise of live-streaming e-commerce and AR makeup try-on in recent years has to some extent made up for the relatively thin look-and-feel of traditional online shopping, users still cannot try out non-standard products with rich SKUs, such as clothing, online. Driven by the ultimate demand of “being online is being present”, e-commerce in the Metaverse era is expected to further break through the barrier of the material world, using AR/VR/MR and other new-generation human-computer interaction platforms to deliver a multi-sensory shopping experience spanning sight, hearing and even touch, and to create consumer purchase scenarios such as 3D virtual shopping malls and digital exhibition halls. We believe this process mainly depends on the maturing of technologies such as near-eye display, 3D reconstruction, tactile sensing and even virtual humans, which will bring growth space for related microdisplay, sensor and chip companies.

Collaborative Office/Social: Interact with gesture tracking, voice recognition, eye tracking, and avatars

In the future, Metaverse office/social applications are expected to break through the limitations of physical space, offering the closest thing to face-to-face work and socializing and improving the efficiency of office production, communication and collaboration. Remote-office tools at the current mobile-Internet stage still fall short of this ideal, with limitations in work efficiency and communication effect. Metaverse office/social applications, by contrast, emphasize interaction: users can, for example, operate entirely through gestures, raising their hands or giving a thumbs-up in the VR virtual space, which significantly lowers the operating threshold of the human-computer interaction platform while enabling interaction without distance. Realizing this scenario will mainly rely on VR/AR underlying technologies such as gesture reading, eye tracking, speech recognition and spatial positioning.

Medical and health: VR/AR/brain-computer hardware will be equipped with advanced biological monitoring and EEG signal processing technology

In VR/AR, although assisted-fitness applications such as boxing, rock climbing and ball sports already exist, the poor wearing experience of the hardware limits usage time. On the one hand, the dizziness caused by VR equipment has not been completely eliminated, and the rapidly changing scenes in sports and fitness applications further aggravate the discomfort; on the other hand, mainstream VR headsets mostly weigh more than 300g, and VR all-in-one machines generally exceed 500g, which greatly increases the burden on the wearer during exercise. Display technology and weight reduction are therefore key directions for hardware manufacturers. We are optimistic about the development opportunities of silicon-based OLED makers (Sony, Shiya, etc.), with their ultra-thin, high-definition, low-power and low-latency advantages, and of AR devices that do not obstruct the normal line of sight (InWith, Mojo Vision, etc.).

In biological monitoring, as heart-rate monitoring and blood-oxygen detection technologies mature, some smart bracelets and watches have introduced medical-grade functions, and we believe the evolution toward more professional medical equipment will be an important development direction for smart wearables. In the future, smart wearable products are expected to be equipped at scale with new functions such as ECG and non-invasive blood-glucose detection, providing more professional services for the elderly and the chronically ill, which in turn places higher requirements on biological monitoring technologies such as blood glucose and blood oxygen. In the long run, human-computer interaction hardware is expected to extend to serving patients with nervous-system or muscular-system impairments (such as brain or spinal-cord disease, stroke and trauma) in the medical and health field, a demand that will create considerable prospects for brain-computer interface technology.

Video: VR/AR technology brings a highly immersive streaming viewing experience

Traditional film and television works, long videos and short videos are still mainly disseminated through media such as TV, cinemas and video platforms; constrained by the flat form of expression, there is still much room to improve the expressiveness of the content. In the Metaverse era, audiences are expected to use advanced VR/AR equipment to watch movies, live events, concerts and other content more immersively, and entertainment and experience will take a qualitative leap. Long-video platforms including Netflix and iQIYI have already actively explored “Metaverse + video”: Netflix launched a VR experience for the American TV series “Stranger Things”, and iQIYI launched a phone-based VR viewer focused on movie watching. Drawing on how the short-video ecosystem was incubated in the mobile-Internet era, the Metaverse, as the next stop of the Internet, also opens new possibilities for video creation; for example, virtual characters created through modeling, motion capture and artificial intelligence can take part in film and television productions. Film, television and video content creation is expected to usher in a new period of prominence.

Simulation training/education: will realize the simulation mapping of the physical environment in the virtual space

Simulation training refers to reproducing real scenes in the virtual world; it is used in military training, industrial design, teaching and training, safety and emergency drills, and other complex or high-risk fields. In the industrial field, manufacturers can make full use of data in the simulated virtual space to optimize equipment processes and operating procedures in production. The military and public-safety emergency fields also need simulation exercises; in the future, larger-scale and more complex military and emergency training is expected to be carried out in virtual scenarios, saving training costs and improving safety. For example, Manheng Technology developed a VR fire emergency drill system for Shanghai Pudong Airport, using VR and 5G cloud rendering to simulate airport fire emergencies and how airport firefighters carry out rescue in dangerous scenarios, helping to improve the airport system’s overall emergency-response capability. Given the particular nature of simulation training, its requirements for immersion and interaction are relatively low and it does not need to rely on high-end hardware; companies such as Manheng Technology and Yichuancheng have already achieved commercialization.

2 AR/VR: The Next Generation Human-Computer Interaction Platform

VR: Oculus Quest 2 creates an explosive paradigm, with a clear path for technological innovation

VR is the abbreviation of Virtual Reality. By simulating vision, hearing, touch and other senses, it gives users an immersive sense of presence and offers a new human-computer interaction method with a strong ability to interact with the environment.

A typical VR system today consists of a head-mounted display and controllers. The headset integrates the display, computing, sensors and other components; by shutting off the user’s vision and hearing from the outside world and presenting separate images to the left and right eyes on their respective screens, it guides the user to perceive a stereoscopic virtual environment. The controllers assist in tracking the position of the user’s hands, provide buttons for interaction, and deliver simple tactile vibration feedback.
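To make the left-eye/right-eye mechanism concrete, below is a minimal sketch, not any headset vendor’s SDK; the function name and the 63 mm IPD default are illustrative assumptions. It shows how a renderer can derive two per-eye view matrices from a single head pose:

```python
# Illustrative only: derive left/right eye view matrices from one head pose
# by offsetting each eye by half the interpupillary distance (IPD).
import numpy as np

def eye_view_matrices(head_view: np.ndarray, ipd_m: float = 0.063):
    """Return (left, right) 4x4 view matrices, eyes offset along the head x-axis."""
    half = ipd_m / 2.0
    left_shift, right_shift = np.eye(4), np.eye(4)
    left_shift[0, 3] = +half      # shifting the world +x is equivalent to moving the eye -x
    right_shift[0, 3] = -half
    return left_shift @ head_view, right_shift @ head_view

head_view = np.eye(4)             # head pose supplied by the tracking system
left_vm, right_vm = eye_view_matrices(head_view)
# The scene is rendered once per eye with its own view matrix, and each image is
# shown on the corresponding half of the headset display to create the stereo effect.
```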

The VR headset has gone through three stages: VR box, VR helmet, and VR all-in-one machine, and popular products continue to dominate the hardware consumer market.  In 2Q21, global VR product shipments reached 2.126 million units, a year-on-year increase of 136.4%, of which Oculus Quest 2 shipments accounted for 75%, continuing to dominate the market. Since 2014, the characteristics of industry sales dominated by popular products have not changed (Samsung VR box in 2015-2017, PS VR in 2016-2018, Oculus all-in-one machine in 2019-present).

1) Samsung Gear VR: the mainstream product of the VR-box era. Created in partnership with Oculus and sold as a bundle with the Galaxy series, its annual sales peaked at nearly 4 million units in 2016. To use it, the phone is placed into the front of the VR box and content is watched through a dedicated app. However, due to problems such as overheating and dizziness, the actual experience was not outstanding.

2) PS VR: the sales leader of the VR-helmet era. Products from HTC, Valve and others actually performed better than PS VR in this era, but they mainly targeted commercial use, with high prices and low shipments. PS VR was relatively cheap and bundled with the PS4, and its annual sales were between 1 million and 2 million units.

3) Oculus Quest 2: the breakout product of the VR all-in-one era and the model imitated by a slew of other VR brands.

Oculus Quest 2 is a product of artful compromise: it realizes the basic functional vision of a VR product while balancing cost, hardware performance and consumer experience.

Compared with tethered PC VR headsets, we believe the Quest line succeeded for several reasons:

1) Although building in the battery and chip increases the device’s weight, it saves the user the nearly 10,000-yuan cost of a PC host and lowers the entry threshold.

2) Removing the cable to the host expands the user’s range of movement and usage scenarios; the user no longer needs a dedicated space plus a host and can play in any free indoor space.

3) For tracking, the traditional outside-in method, which requires external stand-alone transmitters and receivers, is abandoned in favor of a camera-based inside-out approach that achieves 6DoF head and hand tracking.

4) When the chip’s computing power is insufficient, Quest 2 also supports a streaming mode that turns it into a PC VR headset, meeting consumers’ demand for heavily rendered AAA titles.

Compared with the previous-generation all-in-one, the Quest 1:

1) The second-generation product replaces the Quest 1’s OLED with a Fast-LCD screen, simplifying the design and reducing cost. 2) The chip is upgraded from the Snapdragon 835 to the XR2, improving processor, display, imaging and AI performance. After these changes, the product’s refresh rate and resolution improved noticeably and the dizziness problem was greatly alleviated, basically realizing the vision of an entry-level VR product.

The current VR hardware bill of materials overlaps heavily with the smartphone supply chain. From iFixit’s teardown, the main components of the Oculus Quest 2 headset include the display, optical lenses, sensors, motherboard and battery:

1) Display module: a Fast-LCD display with close to 4K resolution and a 90-120Hz refresh rate; 2) Optics: software pre-processing plus a Fresnel lens provide corrected images with a wide field of view and low chromatic aberration; 3) Sensors: four cameras for tracking head and hand movement and for displaying the grayscale passthrough view; 4) Motherboard: Qualcomm’s XR2 SoC, a power-management chip, DRAM (Samsung, Micron, Hynix), NAND (SanDisk), Wi-Fi and other chips; 5) Battery: 3,640mAh.

In 2022, VR will see a wave of technology innovation: Meta’s VR products will be upgraded and Apple will launch a high-end product. Based on Digitimes, we expect Meta’s next-generation VR product to launch next year, introducing pancake optical modules and more sensors to reduce weight and to upgrade functions such as gesture recognition and eye tracking. Apple is also expected to launch a high-end VR device at the end of 2022 that may redefine the VR product form; we expect it to feature a Micro-OLED display, a pancake solution with composite Fresnel lenses, full-color video passthrough and more sensors, bringing consumers a new mixed-reality experience.

Pancake short-focus optics is widely regarded as the next VR upgrade direction, making headsets thinner and lighter. An earlier Meta pancake patent shows a display assembly consisting of a first lens with a quarter-wave plate and a partially reflective surface, a second lens with a reflective polarizer, and a display, enabling a thin and light headset. We believe Apple is also exploring stacking three Fresnel lenses to form a thin and light lens group. VR products with improved optical lenses will be thinner and lighter, and headset weight may drop from around 500g to 200-300g.

Meta may increase the number of cameras to make full use of the Snapdragon XR2’s computing power. We believe Meta’s next-generation VR product and Apple’s MR product will both increase the number of sensors, mainly the types and number of cameras. Qualcomm discloses on its official website that the Snapdragon XR2 can support up to 7 concurrent cameras (2 for eye tracking, 2 for mixed reality, 2 for 6DoF head tracking, 1 other) and can realize mixed-reality (MR) functions. We believe Meta’s next generation may fully exploit the XR2’s computing power to upgrade the product’s functions.

Display: Meta may stick with a Fast-LCD screen, while Apple may use Micro-OLED for an upgraded visual experience. We believe Meta’s next generation may retain a Fast-LCD screen with a resolution not much different from the Quest 2’s but with advanced backlighting under pixel-level control, able to display pure-black backgrounds comparable to OLED. Apple, by contrast, may use a high-resolution, high-contrast, wide-color-gamut, fast-response Micro-OLED display, which comes at a high price: the new generation of Apple MR products may cost $1,500-3,000, well above the Oculus Quest 2’s current entry price of $299.

AR: The product is in the concept stage, and the breakthrough of Micro-LED + diffractive optical waveguide technology is highly expected

AR (Augmented Reality) is a relatively new technology that integrates real-world information with virtual information. Unlike VR, AR overlays virtual objects on the real environment so that both exist in the same picture and space at the same time. Its key technologies include tracking and positioning, virtual-real fusion, display, and interaction.

AR glasses can currently be divided into all-in-one and split designs; in terms of shipments, all-in-one products are the mainstream. In a split design, the computing unit or battery is separated from the HMD: for example, the Nreal HMD connects to smartphones and PCs through a Type-C interface, allowing their content to be transferred seamlessly to the glasses and viewed wherever the user is. All-in-one AR products integrate the display, sensors, computing, human-understanding and environment-understanding systems into a single head-mounted display, providing a more convenient experience.

AR sales are small and growth fluctuates significantly; the category is still at the concept stage. According to IDC, annual AR shipments (excluding screenless viewers) in 2020-2021 stayed between 200,000 and 300,000 units, with large swings in growth. By brand, apart from Epson and Microsoft, most brands have not achieved sustained large-scale AR sales and often fade away after one or two strong quarters, and there is no benchmark brand in the consumer market. We believe AR as a consumer electronics product is still at the concept stage.

In the long run, AR has greater incremental potential, but the C-end market is still waiting for mature technical solutions. We believe AR terminals could eventually replace mobile phones and reach annual shipments of more than 1 billion units (compared with more than 1.3 billion phones), but it is still far too early to reach that goal. On the application side, no killer application scenario for AR products has yet appeared. On the technical side, although the OLED + Birdbath solution is relatively mature, its poor light transmittance means the sunglasses-like design cannot support use in all environments, while other microdisplay systems such as LBS/LCoS/DLP combined with optical waveguides are still being explored.

Starting from demand: what configuration does a qualified pair of AR glasses need?

Display: the microdisplay unit and the opto-mechanical module determine indicators such as brightness, contrast, refresh rate and resolution. The near-eye display systems of AR glasses on the market today use a microdisplay as the image source; the generated image is projected into an optical module such as a free-form surface or optical waveguide and then enters the human eye. Because the image from the AR source enters the eye together with sunlight, the brightness reaching the eye needs to exceed 2,000 nits, or even 5,000 nits, for the image to remain clearly visible outdoors without clip-on sunglasses in all weather. By our estimate, the optical efficiency of current optical-waveguide glasses is only about 3-5%, so the image source must deliver at least roughly 100,000 nits to meet the brightness requirement of AR glasses. In addition, a refresh rate of 75Hz or more, 720P resolution within a 25° field of view, support for partial refresh and the ability to hold a static image in a low-power state are the passing marks for AR glasses.
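The brightness requirement follows from dividing the target brightness at the eye by the end-to-end optical efficiency; a rough check using the figures cited in this section:

```latex
L_{\text{source}} \;\ge\; \frac{L_{\text{eye}}}{\eta_{\text{optics}}},
\qquad
\frac{2{,}000\ \text{nit}}{0.03} \approx 6.7\times10^{4}\ \text{nit},
\qquad
\frac{5{,}000\ \text{nit}}{0.05} = 1.0\times10^{5}\ \text{nit}.
```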

Effective human-machine-environment interaction: SLAM + sensors + AI are used to understand the environment and the user and to combine virtual information with the real world. To superimpose virtual information on a real scene, the system must track the user’s position in space and position virtual objects in real space. To combine virtual information seamlessly with the incoming real scene and enhance the AR user’s experience, it must also handle the occlusion relationships between virtual and real objects and achieve geometric consistency, model realism, and consistency of lighting and color. From its origins in the 1980s to the present, the continuous improvement of SLAM sensors, algorithms and technical frameworks has been the main means of estimating the device’s own pose, feeding back virtual imagery, and building effective interaction between people and virtual content.

Others: energy consumption, adaptability, volume and weight. Generally speaking, a relatively mature AR product also needs to meet other requirements, including an operating temperature range of about -40°C to 80°C, an overall service life of more than 5,000 hours, appropriate weight distribution, and a total weight of around 300g. These requirements in turn constrain the choice of other components such as the microdisplay system, battery and optical module.

Microdisplay technology: MicroLED is expected to become mainstream in AR

Microdisplay technologies proposed so far include OLED (organic light-emitting diode), LCoS (liquid crystal on silicon), DLP (digital light processing) and LBS (laser beam scanning), but none of them simultaneously satisfies maturity, performance, cost and other requirements. MicroLED is recognized in the industry as the best solution for AR display, but it still faces problems such as immature technology and difficulty in mass production; genuine large-scale commercial use may not arrive until around 2025.

LCoS – many limitations, gradually fading out

As a microdisplay technology, LCoS has obvious limitations and is gradually fading out of the field. Its advantages are mature technology, low cost, high pixel density and low power consumption, and it was widely used in early AR devices such as Lingxi Micro Light’s Lingxi AR (LCoS + geometric optical waveguide) and Magic Leap One (LCoS + diffractive optical waveguide). Its disadvantages are also clear: low contrast, especially at large incident angles; limited scope for miniaturizing and lightening the overall light engine because it must be used together with a PBS (the smaller DigiLens LCoS light engine is currently about 2.5 cubic centimeters); and inability to work at low temperatures, giving poor environmental adaptability. As a result, many manufacturers are actively seeking to replace LCoS with LBS/DLP and other solutions, and new models equipped with LCoS have gradually faded out since 2018.

Silicon-based OLED – low brightness, currently difficult to apply to outdoor AR scenes

The shortcomings of silicon-based OLED are also fairly clear, and its application is limited to VR and similar devices. The brightness of mainstream silicon-based OLED products on the market is currently below 3,000 nits, far from the 100,000-nit requirement, making it difficult to use in outdoor AR scenarios. At the same time, because its production process is more complicated, it is more than 50% more expensive than LCoS, yet its service life in high-brightness mode is under 3,000 hours and burn-in is likely, so the overall cost-performance is lower. Therefore, although some AR manufacturers have replaced LCoS with silicon-based OLED, it is still not the best solution for AR image sources.

LBS – Laser diodes are temperature sensitive and have poor resolution

LBS technology has clear advantages over display technologies such as LCoS. An LBS system consists mainly of lasers, optical components and MEMS mirrors. Because LBS renders pixel by pixel with a laser light source, it naturally offers low latency (nanoseconds for a laser versus milliseconds for an ordinary light source), short image persistence, high brightness, low power consumption and rich color compared with non-laser, frame-by-frame rendering solutions. In addition, other technologies must increase the number of micromirrors and enlarge the product to obtain a larger field of view and higher resolution, whereas an LBS scheme can achieve this simply by changing the oscillation frequency and deflection angle of the MEMS micromirror, so it is easier to make the light engine light and small (current LBS light-engine volume is roughly 0.5-1.5 cubic centimeters).

A possible limitation of LBS technology is its lower resolution and image quality. Mainstream LBS products currently offer roughly 720P, and raising the resolution may require higher cost. Karl Guttag, chief scientist of the AR hardware/software company Rave, compared the HoloLens 2 (LBS optics) with the HoloLens 1 (LCoS optics): the HoloLens 2 offers a larger field of view (30 degrees versus 17.5 degrees) but performed worse in resolution, color uniformity and other respects, and real photos taken through the HoloLens 2 show lower color saturation, a blurrier appearance and more haze.

DLP – Sensitive to temperature, difficult to miniaturize

Because of its high cost and large size, DLP has certain limitations in AR scenarios. The core of a DLP (Digital Light Processing) system is TI’s patented DMD (Digital Micromirror Device) chip, which consists of millions of highly reflective, independent aluminum micromirrors whose angles are controlled by a huge number of ultra-small digital light switches. These switches accept data bytes carried as electrical signals and generate optical byte outputs, converting the video or graphics signal fed to the DMD into a high-definition, high-grayscale image. Thanks to its high luminous-flux efficiency, a DLP projection system is brighter than other display systems. However, because of its difficult design, complex structure, high production cost and large volume, it is not yet widely used in AR, HUD and similar equipment.

MicroLED – still in the early stage, many technical issues to be resolved

MicroLED offers excellent performance and is recognized in the industry as the best solution for AR display. MicroLED is LED miniaturization technology: traditional LEDs are miniaturized and arrayed, then mass-transferred and individually addressed on a drive circuit substrate to form ultra-fine-pitch LEDs, shrinking millimeter-scale LEDs down to the micron scale (about 50μm, roughly 1% of the original LED size). Compared with other technologies, MicroLED has major advantages in brightness, contrast, operating temperature range, refresh rate, resolution, color gamut, power consumption, latency, volume and lifetime, and is regarded as an important path toward the next generation of mainstream display technology.

The bottleneck in MicroLED development lies in the huge challenges its micron-scale pixel size and pitch pose for mass production and full-color solutions. MicroLED production involves chip and backplane manufacturing, mass transfer, bonding, driving, inspection and repair. Because the die size is at the micron scale, a single finished product requires handling millions or even tens of millions of dies, which places extremely stringent requirements on the efficiency and yield of mass-transfer technology; the current technology level cannot yet meet mass-production requirements. The luminous efficiency, wavelength consistency and yield of MicroLED chips also do not yet meet the requirements of full-color display. As a result, existing MicroLED screens are expensive, with single-panel prices above US$1,000; the 146-inch version of The Wall TV that Samsung demonstrated with MicroLED technology in 2018 was quoted at over US$100,000.

Optical Modules: From Geometrical Optics to Nano-Optics

Unlike VR, AR glasses need to be see-through: the user must see both the real outside world and the virtual information, so the imaging system cannot block the line of sight. This requires an additional optical combiner that merges virtual information with the real scene. Designs include free-form surfaces, optical waveguides and others.

Production methods are moving from geometric optics to nano-optics. Traditional optical lens processing relies on cutting, injection molding, coating and polishing, but as optical modules such as waveguides become more complex, traditional processes bring problems such as complicated production and low yield. Manufacturers including DigiLens, WaveOptics, Zhige Technology and Longjing Optoelectronics have begun exploring processing routes such as nano-imprinting and UV processing.

Freeform Solutions: Freeform Prisms/Mirrors, BirdBath

In all three schemes, the light from the microdisplay comes from above the eye:

1) BirdBath solution: a beam splitter simultaneously reflects and transmits light, so users see the digital image generated by the microdisplay while still seeing the physical scene of the real world. A concave mirror on one side of the beam splitter reflects the light and redirects it toward the eye. AR glasses with a Birdbath structure are usually larger and have a medium field of view (around 50°). Because the beam splitter is a half-mirror, the light passes through it more than once and each pass costs about 50% of the light, so the energy loss is serious (see the rough efficiency estimate after this list).

2) Freeform Mirror: Only one curved mirror is used to collect light from the microdisplay and the real world. AR glasses with a free-form surface mirror structure also have a large volume, and the achievable field of view is 50°~100°, but the size of the field of view depends on the size of the light source. Since the light is reflected only once, the light loss of the free-form mirror structure is significantly reduced.

3) Freeform prism: ingeniously combines two refractive surfaces, a total-internal-reflection surface and a partially reflective surface into one element, increasing the design freedom. This structure can enlarge the field of view while improving imaging quality, but the optical element is thick, and a compensating prism is usually needed to cancel the refraction of ambient light by the free-form prism.
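As flagged in the BirdBath item above, a rough upper bound on its optical efficiency follows from the two passes through an ideal 50/50 beam splitter (one reflection toward the concave mirror, one transmission back toward the eye); absorption and other losses are ignored in this estimate:

```latex
\eta_{\text{BirdBath}} \;\lesssim\; R \cdot T \;=\; 0.5 \times 0.5 \;=\; 25\%.
```

So at most about a quarter of the display light reaches the eye, consistent with the “serious energy loss” described above.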

Optical waveguide technology solutions: geometric/array optical waveguides, relief grating optical waveguides, Bragg grating optical waveguides

The optical waveguide is a relatively unique optical component developed specifically for the needs of AR. Because it is thin and highly transparent to external light, it is considered the optical solution of choice for consumer-grade AR glasses.

The key to transmitting light in AR glasses is “total internal reflection”. Waveguide technology is in fact not a new invention: optical fiber is a waveguide, although it carries light in the infrared band that we cannot see. After the light engine completes imaging, the waveguide couples the light into its glass substrate, carries it to the front of the eye by total internal reflection and then releases it, completing the transmission of the image.

The required field of view places demands on the glass substrate: the larger the field of view, the higher the refractive index of the glass substrate must be. Traditional glass manufacturers such as Corning (GLW US) and Schott (not listed) have therefore been developing special high-refractive-index, thin glass substrates for the near-eye display market in recent years, and are also working to increase wafer sizes to reduce the unit cost of waveguide production.

Specifically, the current optical waveguide technology can be divided into the following three types:

1) Geometric/array optical waveguides. The concept and patents were proposed by the Israeli company Lumus and have been continuously optimized and iterated. The basic principle: the element that couples light into the waveguide is generally a reflective surface or a prism; after multiple rounds of total internal reflection, when the light reaches the area in front of the eye it meets an array of “half-mirror” surfaces that couple it out of the waveguide.

Most geometric/array optical waveguides currently achieve only one-dimensional pupil expansion. The “half-mirror” array effectively duplicates the exit pupil in the horizontal direction, and each exit pupil outputs the same image, so the eye still sees the image when it moves laterally; this is the one-dimensional exit-pupil expansion (1D EPE) technique.

The geometric/array waveguide process is complex, and improving yield is extremely difficult. In the coating process for the “half-mirror” array, each of the five or six mirrors needs a different reflectance-to-transmittance ratio (R/T), because the light weakens as it propagates and the output must remain uniform across the whole eyebox (a simple numerical illustration follows below). And because the light propagating in a geometric waveguide is usually polarized, the number of coating layers on each mirror surface can reach ten or even dozens.

These mirrors are coated, bonded together in layers with special glue, and then the waveguide is cut to shape at an angle. During this process, the parallelism between the mirrors and the cutting angle both affect imaging quality. So even if a high yield is achieved at every individual step, the total yield across these dozens of steps remains a challenge, and a failure at any step can cause imaging defects such as dark background stripes, uneven brightness and ghosting.
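The uniform-brightness requirement on the mirror array can be illustrated with a tiny calculation. This is a simplified sketch that ignores absorption, polarization and coating tolerances, not a description of any manufacturer’s recipe:

```python
# Illustrative only: why each mirror in a geometric-waveguide out-coupling array
# needs a different reflectance R if every exit pupil is to be equally bright.
def uniform_output_reflectances(n_mirrors: int) -> list[float]:
    """Reflectance of each successive mirror so that every mirror couples out
    the same fraction (1/n) of the originally in-coupled light (lossless model)."""
    reflectances = []
    remaining = 1.0                 # light still propagating in the waveguide
    target_out = 1.0 / n_mirrors    # equal output per mirror
    for _ in range(n_mirrors):
        r = target_out / remaining  # fraction this mirror must reflect out
        reflectances.append(round(r, 3))
        remaining -= target_out     # the rest continues to the next mirror
    return reflectances

print(uniform_output_reflectances(5))   # [0.2, 0.25, 0.333, 0.5, 1.0]
```

The later a mirror sits in the array, the less light remains in the waveguide, so its reflectance must rise (here from 20% up to 100% for the last mirror) to keep every exit pupil equally bright, which is exactly why each surface needs its own multi-layer coating.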

2) Relief-grating diffractive optical waveguide. The traditional optical structure is replaced by a flat diffraction grating, whose surface relief of peaks and valleys creates a periodic variation of refractive index in the material. By designing the grating parameters (material refractive index, grating shape, thickness, duty cycle, etc.), the diffraction efficiency of a chosen diffraction order (that is, a chosen direction) can be maximized, so that most of the light is diffracted along that one direction.
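For reference, the directions of the diffraction orders mentioned here follow the standard grating equation, with Λ the grating period, λ the wavelength, θ_i the incidence angle in a medium of index n_in, θ_m the angle of order m in a medium of index n_out:

```latex
n_{\text{out}}\,\sin\theta_m \;=\; n_{\text{in}}\,\sin\theta_i \;+\; m\,\frac{\lambda}{\Lambda},
\qquad m = 0,\ \pm1,\ \pm2,\ \dots
```

The grating parameters listed above (shape, depth, duty cycle, refractive index) do not change these directions; they determine how the energy is split among the orders, which is what the waveguide designer optimizes.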

Two-dimensional pupil expansion can be achieved with diffraction gratings, and DigiLens and WaveOptics represent two different technical routes. In the approach used by HoloLens 1, Vuzix Blade, Magic Leap One, DigiLens and others, after the input grating couples light into the waveguide the light enters a turning-grating region whose groove direction is set at an angle to the input grating; like a mirror, this region redirects light traveling in the X direction into the Y direction. Another way to achieve 2D pupil expansion is to use a 2D grating directly, that is, a grating that is periodic in at least two directions, turning the unidirectional “grooves” into a columnar array. WaveOptics adopts this structure: light coupled in from the input grating enters an output region with a two-dimensional columnar array that expands it in the X and Y directions simultaneously while coupling part of the light out toward the eye as it propagates.

3) Bragg-grating diffractive optical waveguide (also called holographic volume grating waveguide). Optical holography is used to record the interference fringes of point light sources in a recording film, which is then processed into a thin-film optical element with a grating fringe structure capable of collimating, focusing and deflecting beams. The diffraction of light obeys Bragg’s law: only incident light that satisfies the Bragg condition is diffracted, and light that does not is not. Relatively few manufacturers currently make volume holographic grating (VHG) waveguide solutions; they include DigiLens, which made AR helmets for the US military ten years ago, Sony, which once produced monochrome AR glasses, and Akonia, which was acquired by Apple.
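The Bragg condition referred to here can be written in its usual form, with Λ the grating period, n the average refractive index of the material, θ_B the angle between the incident beam and the grating planes inside the medium, and m the order:

```latex
2\,n\,\Lambda\,\sin\theta_B \;=\; m\,\lambda, \qquad m = 1, 2, \dots
```

Only light close to this angle and wavelength is strongly diffracted, which is why volume gratings are highly selective but also harder to extend to a wide field of view and full color.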

The advantages are significant and exploration continues. This technology is thin, light and able to record multiple holograms simultaneously, so it could replace many traditional optical components such as prisms, cube beam splitters and gratings, further shrinking the volume of AR head-mounted displays. Because of limits on available materials, volume gratings have a limited refractive-index modulation and have not yet matched surface relief gratings in field of view, optical efficiency or clarity. However, because they have certain advantages in design barriers, process difficulty and manufacturing cost, the industry has never stopped exploring this direction.

SLAM: Understand the environment and users, and realize the combination of virtual information and the real world

SLAM (Simultaneous Localization and Mapping) means locating one’s own position and attitude by repeatedly observing environmental features during movement, and then building an incremental map of the surrounding environment from that position, thereby achieving simultaneous localization and map construction.

Modern popular SLAM systems can be roughly divided into front-end and back-end. The front-end realizes data association through sensors, studies the transformation relationship between frames, mainly completes real-time pose tracking, processes the input images, and calculates pose changes. The back-end mainly optimizes the output of the front-end to obtain the optimal pose estimation and map.
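The front-end/back-end split can be illustrated with a deliberately tiny, self-contained sketch. Real SLAM front ends do feature matching and visual odometry and real back ends run bundle adjustment or pose-graph optimization; here both are reduced to toy stand-ins for the data flow only:

```python
# Toy illustration of the SLAM front-end / back-end split (2D poses, fake data).
import numpy as np

def front_end_track(pose, odom_delta):
    """Front end (toy): integrate a frame-to-frame motion estimate into the current 2D pose."""
    x, y, th = pose
    dx, dy, dth = odom_delta
    # rotate the local motion into the world frame, then accumulate
    wx = x + dx * np.cos(th) - dy * np.sin(th)
    wy = y + dx * np.sin(th) + dy * np.cos(th)
    return (wx, wy, th + dth)

def back_end_optimize(trajectory, loop_closure_correction):
    """Back end (toy): spread an end-point correction over the whole trajectory,
    a crude stand-in for pose-graph optimization after a loop closure."""
    n = len(trajectory)
    cx, cy = loop_closure_correction
    return [(x + cx * i / (n - 1), y + cy * i / (n - 1), th)
            for i, (x, y, th) in enumerate(trajectory)]

pose, trajectory = (0.0, 0.0, 0.0), [(0.0, 0.0, 0.0)]
for delta in [(1.0, 0.0, 0.1)] * 5:           # fake per-frame motion estimates
    pose = front_end_track(pose, delta)
    trajectory.append(pose)
trajectory = back_end_optimize(trajectory, loop_closure_correction=(-0.2, 0.05))
print(trajectory[-1])                         # refined final pose
```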

Continuous improvement of SLAM sensors, algorithms and technical frameworks enables self-pose estimation and virtual-image feedback and builds effective interaction between people and virtual content. From its beginnings in the 1980s to the present, the sensors used with SLAM algorithms have come to include vision (monocular, binocular, RGBD, ToF and other cameras), inertial/magnetic sensors (such as IMUs), as well as sonar and 2D/3D lidar. SLAM algorithms have shifted from early filter-based methods (EKF, PF, etc.) to optimization-based methods, and the technical framework has evolved from single-threaded to multi-threaded.

SLAM has many applications in AR/VR. In AR they mainly involve 1) effective interaction between real and virtual objects, and 2) semantic understanding and optimization of intelligent assistance functions:

Superimposing the coordinate systems of the virtual and real worlds enables interaction of geometric and physical information. Unlike the 3D display of computers, tablets and phones, AR focuses on the seamless fusion of virtual and real information, meaning that the plane position and depth at which an image appears must be accurate, giving a strong sense of immersion. This requires the SLAM algorithm to superimpose the virtual coordinate system accurately onto the real one. Real environments also contain slopes, obstacles and occlusions, and AR allows virtual information to interact with this physical information.

Achieving semantic understanding and optimizing intelligent assistance functions. With the development of machine learning and deep learning, virtual information can “understand” the real world, making the fusion of the two more natural. Computers can already recognize the content in an image but do not yet understand the relationships between the things they recognize. One current research direction is to apply SLAM + AI, using feature extraction to give the machine semantic understanding and thereby optimize the AR system’s intelligent assistance functions.

Sensors: The upgrade of interaction methods and application scenarios promotes sensor upgrades

The upgrade of interaction methods in AR creates demand for more diverse information. As human-computer interaction moves from 2D to 3D, interaction methods diversify and develop toward human instinct: gesture interaction, body-posture interaction, eye-movement interaction, voice interaction, and even interaction combining biological signals with the surrounding environment continue to evolve, and all of them require various types of information. User motion, biological information and other environmental information will provide the underlying support for human-computer interaction.

This large demand for information provides incremental opportunities for motion, biological and environmental sensors. Apple’s phones and watches already make wide use of a variety of motion and biological sensors; by contrast, the Oculus Quest 2 headset, the most popular VR product, is equipped with only four black-and-white cameras, and each controller carries a set of gyroscope and accelerometer sensors. In the future, to achieve deeper immersion and more convenient interaction, depth-ranging cameras, eye-tracking cameras, refined pressure sensors, and even biological and environmental sensors will gradually be added.

3 Brain-computer interface: How far are we from sci-fi movies?

For most people, the earliest exposure to the concept of a brain-computer interface comes from science-fiction movies: the character in “X-Men” who controls objects with his mind; the people of Zion in “The Matrix” who, once connected to the computer through an interface, quickly learn all kinds of knowledge and skills and enter the virtual world of the Matrix; or “Dune”, in which people keep developing the brain’s potential through the exploration of brain science and a trained navigator’s brain can rival a large computer. These plots are memorable, and they also point to the direction scientists keep exploring.

The potential of the human brain: a supercomputer?

Simulating the human brain with an electronic computer would require about 172 PFlops of computing power. The human brain has nearly 86 billion neurons, each with roughly 10,000 connection points, which together govern movement, hearing, language, smell, memory, thinking, personality, emotion and other functions. By our estimate, simulating the activity of the human brain with a computer would require about 172 PFlops (for comparison, the Sunway TaihuLight delivers 93 PFlops and the US Summit supercomputer 122.3 PFlops); the potential of the human brain may thus rival the computing power of a supercomputer.
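The report does not spell out the inputs behind the 172 PFlops figure; one way to reproduce a number of exactly this order is to multiply the neuron and synapse counts above by an assumed average rate of about 200 operations per synapse per second (the rate is our assumption):

```latex
8.6\times10^{10}\ \text{neurons}
\times 10^{4}\ \tfrac{\text{synapses}}{\text{neuron}}
\times 200\ \tfrac{\text{ops}}{\text{synapse}\cdot\text{s}}
\;\approx\; 1.72\times10^{17}\ \tfrac{\text{ops}}{\text{s}}
\;=\; 172\ \text{PFlops}.
```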

A brain-computer interface may support the continued development of the human brain's potential. A classic argument put forward by Elon Musk is that humans, in order not to be eliminated by AI, must integrate with AI by creating an interface between the brain and the computer. With our deepening understanding of brain science and continuing breakthroughs in overcoming the body's limitations through brain-computer interface technology, the potential of the human brain may be further released.

Brain-Machine Interface: How do we define it?

The term Brain-Computer Interface (BCI) was proposed in 1973 by Jacques J. Vidal of the University of California, Los Angeles. A complete brain-computer interface process includes four steps: signal acquisition, information decoding and processing, signal output/execution, and feedback; a simplified end-to-end sketch of these four steps follows the list below.

Brain-computer interfaces can collect and feed back signals through electrical, magnetic, optical and acoustic means, and EEG technology is currently the mainstream direction of exploration. There are many ways to collect central nervous system signals and monitor brain activity, including electroencephalography (EEG), functional near-infrared spectroscopy (fNIRS) and functional magnetic resonance imaging (fMRI); feedback technologies likewise include electrical, magnetic, acoustic and optical stimulation. Among these monitoring technologies, EEG has gradually become the most mainstream direction in brain-computer interface research thanks to its high temporal resolution, low cost and portable equipment.

1) EEG acquisition: EEG acquisition is the key first step in a BCI; the acquisition quality, signal strength, stability and bandwidth directly determine the subsequent processing and output. Changes in the membrane potential of central neurons generate spikes (action potentials), and the movement of ions across neuronal synapses forms field potentials. External electrodes, or micro-electrodes implanted at the sites of motor nerves, can collect and amplify these neurophysiological signals.

2) Signal decoding and processing: signal processing converts the acquired brain activity into electrical signals, removes interference and other unwanted components, classifies and processes the target patterns, and converts them into corresponding signals that can be output.

3) Signal output and execution: signal output refers to transmitting the collected and processed brainwave signals to connected equipment, either as data for further processing or as feedback to a terminal machine to form instructions, or even to achieve direct interaction.

4) Feedback: after the signal is executed, the device generates an action or displays content, and the participant perceives through sight, touch or hearing that the brainwaves generated in the first step have been executed, which triggers the feedback signal.
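
To tie the four steps together, here is a deliberately simplified sketch (toy signals and an arbitrary threshold, not a clinical system): it “acquires” a synthetic EEG segment, band-pass filters it, classifies the dominant rhythm by its band power, maps the result to an output command, and prints the feedback that would be shown to the user.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 250  # assumed sampling rate in Hz

def acquire(seconds=2, alpha_amplitude=2.0):
    """Step 1 - acquisition: synthesize a toy EEG segment with a 10 Hz alpha rhythm."""
    t = np.arange(0, seconds, 1 / FS)
    return alpha_amplitude * np.sin(2 * np.pi * 10 * t) + np.random.randn(t.size)

def decode(signal, band=(8, 13)):
    """Step 2 - decoding/processing: band-pass filter the signal and measure band power."""
    b, a = butter(4, [band[0] / (FS / 2), band[1] / (FS / 2)], btype="band")
    filtered = filtfilt(b, a, signal)
    return float(np.mean(filtered ** 2))

def output_command(band_power, threshold=1.0):
    """Step 3 - output/execution: map the decoded feature to a device command."""
    return "MOVE_CURSOR_UP" if band_power > threshold else "IDLE"

def feedback(command):
    """Step 4 - feedback: show the user what was executed (here, just print it)."""
    print("executed:", command)

feedback(output_command(decode(acquire())))
```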

According to the EEG acquisition method, current brain-computer interfaces can be divided into invasive and non-invasive types

Non-invasive approaches are mostly used for EEG monitoring on the consumer side. A non-invasive device is worn outside the human/animal skull and obtains brain information by collecting EEG and neuroelectric signals, but the accuracy and resolution of the information are low; it can be used for simple signal judgment and feedback, but it is difficult to convey complex commands, such as helping physically disabled people manipulate a mechanical exoskeleton with their minds or performing basic gesture control for VR/AR gaming applications. Non-invasive methods can be divided into EEG (collecting electrical signals) and MEG (collecting magnetic fields) according to the type of information collected.

1) EEG: Ag/AgCl electrodes are fixed to the scalp with conductive gel to measure scalp EEG signals, but generally only information in a relatively narrow frequency band of about 0-50 Hz can be monitored.

2) MEG: the signal is obtained by measuring the tiny magnetic fields caused by intracellular ionic currents, but MEG is not an ideal solution because of its high cost and cumbersome operation (the environment must be electromagnetically shielded and the subject must remain absolutely still).

Invasive brain-computer interfaces are mainly used in the field of medical rehabilitation. An invasive device is implanted directly into the gray matter or cranial cavity of the human/animal brain, so it can obtain relatively high-frequency and accurate neural signals; it can not only control external devices by reading those signals, but also make the brain produce certain sensations through precise current stimulation. Invasive brain-computer interfaces can be divided into ECoG, LFP, SUA and other types.

1) ECoG: It measures cerebral cortical potential, which is similar to EEG technology, but can monitor information of larger bandwidth;

2) LFP, SUA: measure cortical field potentials and spike potentials, which can be realized with Microwire arrays, Michigan arrays, Utah arrays, neurotrophic electrodes and other sensors.

Invasive methods using electrical signals offer high spatial resolution, a good signal-to-noise ratio and a wider frequency band, but they still face safety issues caused by the invasive procedure, difficulty in obtaining long-term stable recordings, and the need for medical staff to carry out long-term continuous observation. As a result, current applications remain limited to the field of medical rehabilitation.

New implant options that do not require a craniotomy have also emerged among invasive tools. In August 2021, the minimally invasive brain-computer interface developed by Synchron from the University of California, Berkeley, received approval for human clinical trials from the U.S. Food and Drug Administration (FDA). The device is tiny and can pass safely through blood vessels, so the BCI is implanted directly via the jugular vein and delivered to the brain and spine through catheter surgery; it can be implanted in a patient within two hours without a craniotomy.

Since no craniotomy is required, the sensor can be placed flexibly at multiple locations in the brain to capture various types of signals. The BrainPort receiver connected to the sensor is implanted in the patient's chest; it has no built-in battery but provides power and data transmission wirelessly, further enhancing safety. Through the BrainOS operating system developed by Synchron, the signals read by the sensor can be converted into general-purpose signals for interacting with the outside world, allowing the brain to communicate with the external environment.

The continuous expansion of applications supports a commercial market of nearly 100 billion US dollars

The continuous expansion of medical and consumer applications may support a market on the scale of 100 billion US dollars. As people's understanding of the brain, electrode design, and artificial intelligence algorithms improve, applications of the brain-computer interface continue to expand and become more refined. Research and development related to brain-computer interfaces is being explored in many fields such as bionics, medical diagnosis and intervention, and consumer electronics. We believe related products may be commercialized over the next 20-30 years, supporting a market of nearly 100 billion US dollars.

From the 1970s to the end of the 1990s, brain-computer interface technology developed from the concept stage to the scientific demonstration stage. The term “brain-computer interface” appeared in the 1970s-1980s: in 1977, Jacques J. Vidal developed a brain-computer interface system based on visual event-related potentials that allowed the choice of 4 control commands, and in 1980 German scholars proposed a brain-computer interface system based on slow cortical potentials. After the 1980s, a few pioneers developed real-time, executable brain-computer interface systems and defined several paradigms that are still in use today:

1) In 1988, L.A. Farwell and E. Donchin proposed the famous and widely used brain-computer interface paradigm, the P300 speller, showing that such a system could help severely paralyzed patients communicate and interact with their environment. Soon afterward, researchers developed a sensorimotor-rhythm-based brain-computer interface system that feeds the motor rhythm amplitude back to the user through a one-dimensional cursor, so that with training the user can move the cursor up or down through imagination.

2) Around 1990, Gert Pfurtscheller developed another brain-computer interface based on sensorimotor rhythms, in which the user explicitly imagines left- or right-hand movements that are converted into computer commands through machine learning; this defined the motor-imagery-based brain-computer interface.

3) In 1992, Erich E. Sutter proposed an efficient brain-computer interface system based on visual evoked potentials, in which an 8×8 speller was designed; visual evoked potentials collected from the visual cortex were used to identify the direction of the user's gaze and thus determine which symbol in the speller was chosen. With it, people with ALS could achieve a communication speed of about 10 words per minute.

Since the beginning of the 21st century, brain-computer interface technology has grown rapidly: new paradigms, new algorithms and new devices have emerged one after another, and the performance of the early paradigms has improved significantly. New experimental paradigms such as auditory BCIs, verbal BCIs, emotional BCIs and hybrid BCIs have appeared. Advanced EEG signal processing and machine learning algorithms, such as spatial pattern algorithms (e.g. common spatial patterns) and the xDAWN algorithm, have been applied to brain-computer interfaces, and new brain-signal acquisition methods, such as blood-oxygen-level-dependent signals measured by fMRI and cortical hemoglobin concentration measured by functional near-infrared spectroscopy, have been used to build non-invasive brain-computer interfaces. In addition, the performance of the P300- and visual-evoked-potential-based brain-computer interfaces developed in the early stage has improved markedly, and preliminary clinical trials have shown them to be suitable for patients with amyotrophic lateral sclerosis, stroke and spinal cord injury.
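
To give a flavor of what a “spatial pattern” algorithm does, the following numpy-only sketch of the common-spatial-patterns idea (toy random data, not real EEG or any specific toolbox) finds spatial filters whose outputs have high variance for one class of trials and low variance for the other, producing the classic log-variance features used for classification.

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_filters=2):
    """Common spatial patterns for two classes of trials shaped (n_trials, channels, samples)."""
    mean_cov = lambda trials: np.mean([np.cov(trial) for trial in trials], axis=0)
    cov_a, cov_b = mean_cov(trials_a), mean_cov(trials_b)
    # Generalized eigenvalue problem: cov_a w = lambda (cov_a + cov_b) w
    eigvals, eigvecs = eigh(cov_a, cov_a + cov_b)
    order = np.argsort(eigvals)
    # Take filters from both ends of the spectrum (the most discriminative directions).
    picks = np.concatenate([order[: n_filters // 2], order[-(n_filters - n_filters // 2):]])
    return eigvecs[:, picks].T

def log_variance_features(trials, filters):
    """Classic CSP features: log-variance of each spatially filtered trial."""
    return np.array([np.log(np.var(filters @ trial, axis=1)) for trial in trials])

# Toy example: 8-channel trials where class A has extra variance on channel 0.
rng = np.random.default_rng(0)
trials_a = rng.normal(size=(20, 8, 256)); trials_a[:, 0, :] *= 3.0
trials_b = rng.normal(size=(20, 8, 256))
W = csp_filters(trials_a, trials_b)
print(log_variance_features(trials_a, W).mean(axis=0))
```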

In the past decade, the scope and scale of brain-computer interface research have continued to expand. In terms of scale, the 7th International Brain-Computer Interface Conference in 2018 gathered 221 research teams. In terms of technology adoption, consumer-grade EEG sensors and brain-computer interface systems have come to market, free and open-source brain-computer interface software is continuously updated, and the performance of EEG signal processing algorithms has improved significantly; engineering work at the user level (user experience, psychological state, user training) is improving the satisfaction and practicality of brain-computer interfaces. At present, brain-computer interface applications have moved beyond clinical medicine into non-medical fields such as emotion recognition, virtual reality and games, and numerous paradigms, including passive, collaborative, adaptive and cognitive brain-computer interfaces as well as brain-to-brain interfaces, are emerging.

Application Scenario #1: The medical and health field is the field where brain-computer interfaces are currently closest to commercialization

Brain-computer interfaces can help monitor and measure the state of the nervous system in real time and assist clinical interpretation. The application directions of “monitoring” brain-computer interfaces are very diverse, including evaluating the level of consciousness of patients in deep coma, measuring the state of neural pathways in patients with visual/hearing impairments, and assisting doctors in locating the cause. In addition, by combining multiple information such as EEG and video for diagnosis and treatment, it can assist doctors in interpreting various clinical indications such as brain injury and brain development.

The monitored EEG information can be used for processing, feedback and corresponding rehabilitation training for ADHD, stroke, depression and other conditions. For example, for stroke patients with damage to parts of the motor cortex, a brain-computer interface can collect signals from the damaged cortical areas and then stimulate the disabled muscles or control an orthosis to improve arm movement; motor-imagery brain-computer interfaces can be used for rehabilitation training of children with autism, improving their self-control over the activation of the sensorimotor cortex and thereby relieving autism symptoms. Feedback of EEG signals can also be used to train users' concentration.

Brain-computer interfaces based on electrical, acoustic, optical and magnetic stimulation for neuromodulation have already been commercialized. Relevant applications include neurorehabilitation through electrical stimulation therapy, mainly for motor dysfunction caused by stroke, Parkinson's disease and other central or peripheral nerve injuries, such as hemiplegia, muscle atrophy, low muscle strength, walking disorders and hand dysfunction; and treatment of depression by transcranial magnetic stimulation, as well as treatment of speech dysfunction, dysphagia and cognitive dysfunction caused by stroke. Transcranial magnetic stimulation for depression has been formally approved in the United States, Canada, New Zealand, Israel and other countries; compared with drug treatment, it has advantages such as fewer side effects, high safety, no pain, low risk of dependence, and no impact on cognitive function.

Non-invasive brain-computer-interface intelligent prostheses are already in use on the consumer side. After launching the world's first non-invasive intelligent bionic hand allowing conscious control of each finger in 2019, BrainCo this year launched a bionic leg product suitable for different levels of disability. According to the company, the product can extract 20,000 EMG and neuroelectric data points per second, so it can quickly and accurately identify the user's intention, adjust gait according to the environment and muscle conditions to prevent falls, deliver a highly bionic experience, and support complex activities such as rock climbing and wading, improving quality of life for the disabled and extending brain-computer interface technology into prosthetics.

Brain-computer interfaces could also enable people who have lost the ability to speak to regain the ability to communicate. Text can be output through the brain-computer interface, or speech can be generated through a speech synthesizer, to help patients with severe movement disorders, such as amyotrophic lateral sclerosis, myasthenia gravis, or high paraplegia caused by accidents, express everything they want to say.

Application Scenario #2: Consumer Applications in Consumer Electronics and AIoT

Brain-computer interface technology can be combined with consumer products to provide a more intuitive interactive experience. As early as 2014, Thalmic Labs launched the Myo armband controller, which senses the bioelectric activity of the forearm muscles so that users can wirelessly control computers and other nearby digital products simply by moving their fingers. With continued technological upgrades, today's armband controllers can realize control by recognizing the currents produced by active thoughts; typing with one's thoughts and operating toys with one's thoughts are no longer fantasy.

With the support of a brain-computer interface, game players can use their thoughts to navigate menus and select options in a VR interface, gaining a new operating experience independent of traditional game controls; people can also use their thoughts to control switches and even home service robots, bringing a new kind of smart-home experience.

The penetration rate may continue to rise with the popularity of AR and other wearable products. At present, simpler forms of control, such as eye-tracking cameras and touch controls, may limit the need for brain-computer interface interaction. We believe that in the future, with the popularity of wearable devices such as AR glasses and the continued construction of the Metaverse, the penetration rate of consumer electronics products based on brain-computer interface technology will continue to increase.
