Metaverse Technology Research

With data as the vessel, humanity's second age of navigation has begun.

In November 2020, Ma Huateng discussed the concept of the "true internet" in Tencent's annual internal magazine, Three Views. In the preface to the publication, he wrote:

"An exciting opportunity is now upon us. After ten years of development, the mobile internet is about to usher in its next wave of upgrades, which we call the true internet. A series of foundational capabilities, from real-time communication to audio and video, have matured rapidly, and the ways people access information and interact with machines are undergoing ever richer changes… I believe another great reshuffle is about to begin. Just as with the shift to the mobile internet, those who cannot get on board will gradually fall behind."

In April 2021, ByteDance invested 100 million yuan in Code Qiankun, a metaverse-concept company, firing the first shot in the race among major domestic players. In August, the Baidu World Congress opened a VR venue, allowing people who could not attend in person to participate virtually. In the same month, Pico, a domestic VR startup, disclosed in an all-hands letter that it had been acquired by ByteDance, and NVIDIA released Omniverse, the world's first simulation and collaboration platform laying a foundation for building the metaverse. In October, Facebook and Microsoft announced their entry into the metaverse.

The world's big technology companies are piling in one after another, like dumplings dropped into a pot, spending heavily to stake out "territory" in the metaverse.

Today we lift the veil of the metaverse and look at its four basic technologies: VR hardware, VR/AR interaction, 5G, and the Internet of Things. How are we preparing to enter the metaverse?

01 A barrel holds only as much water as its shortest stave

[1] To experience the metaverse, you must first enter the metaverse.

The path into the metaverse is generally considered to run through VR/AR, yet even today VR is still regarded as an immature product.

In 1987, Jaron Lanier, the American "father of virtual reality", first proposed the concept of VR. VPL, the company he founded, went on to develop a series of virtual reality devices, including the EyePhone headset and the matching DataGlove. Until around 2010, however, most VR applications remained confined to research institutions and military laboratories, and VR rarely reached consumers through commercial channels.

It was not until 2012, when the Oculus Rift made significant improvements in cost, latency, field of view, and comfort, that VR began to attract the attention of capital. During this period, a number of technology giants led by Google entered the VR field, driving the first wave of VR/AR enthusiasm.

In May 2014, Google began selling Google Glass publicly in the US market. In July of the same year, Facebook completed its acquisition of Oculus for $3 billion. In March of the following year, HTC unveiled the Vive headset, hailed at the time as one of the best VR devices.

Unfortunately, this generation of VR was too expensive for ordinary consumers, chip performance was insufficient, tethered headsets restricted movement, and there was too little content to experience. Together these problems ended the wave with only a small number of people ever trying VR.

On the display side, the main display components of the time used OLED screens to avoid the smearing and latency of LCD panels. But Samsung, then the main manufacturer of small and medium consumer-grade OLED screens, could not solve the problem of OLED yields, so costs stayed high, and the steep device price became the first threshold limiting the experience.

In addition, VR was constrained by what mobile consumer-grade chips of that era could decode. Take the HTC Vive as an example: it used two 1080p screens, one per eye. Although most phones at the time were also 1080p, that resolution in a near-eye display drew complaints about coarse, grainy pixels and a severe screen-door effect. On the chip side, Qualcomm only released its first dedicated high-end XR chip, the XR2, in December 2019; before that, VR products mainly relied on mobile phone processors.

The display threshold embarrassed users, and the lack of content was even worse. The only mainstream VR content platform is Steam, which today hosts somewhat more than 4,000 VR titles. Most of them are re-releases of content that had already been sold elsewhere; original content is scarce, and much of what can be experienced is not high quality. Early adopters and technology enthusiasts complained loudly at the time: there was no content to save these expensive devices from the fate of gathering dust.

Today, the overall VR market has begun to pick up again with the release of the Oculus Quest 2. Together with the release of Half-Life: Alyx, the best VR game so far, and the overwhelming publicity around the metaverse, VR devices have once again been pushed to the forefront. This wave seems to fit the technology hype cycle that Gartner published in 2016, the so-called first year of VR: VR has begun to climb out of the trough toward maturity. In fact, though, VR devices still have many shortcomings to remedy.


In the display field, as market demand becomes increasingly clear, the industry has raised its expectations for VR displays, but current products are still limited by the development of core optical devices and display solutions. Mainstream VR headsets currently use fast-response liquid crystal panels; the Quest 2, for example, replaced the two AMOLED panels of the previous generation with an improved Fast-LCD. This screen is characterized by ultra-high definition (resolution close to 4K) and a light, thin build, while keeping cost in check.

Yet even at 4K resolution, this screen is far from ideal in display terms, because the panel of a head-mounted device sits very close to the eyes. Studies suggest that a suitable VR display needs a pixel density above 2000 ppi, which far exceeds what current LCD and OLED panels can achieve.
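As a rough sanity check on that figure, pixel density can be estimated from a panel's resolution and diagonal. The sketch below uses assumed, Quest 2-like numbers (a 1832×1920 per-eye buffer); the ~2.9-inch diagonal is an illustrative assumption, not a published spec.

```python
import math

def ppi(width_px: int, height_px: int, diagonal_in: float) -> float:
    """Pixels per inch: diagonal pixel count divided by diagonal length."""
    return math.hypot(width_px, height_px) / diagonal_in

# Assumed, Quest 2-like numbers: 1832x1920 per eye on a ~2.9 in diagonal
# (the diagonal here is a hypothetical figure for illustration).
density = ppi(1832, 1920, 2.9)
print(f"{density:.0f} ppi")  # on the order of 900 ppi, far below the ~2000 ppi ideal
```

Even generous assumptions leave current panels well short of the 2000 ppi target, which is why the text below turns to Micro-LED.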

In the display field, Micro-LED is expected to become the next-generation display technology after LCD and OLED. The prospect of a vast market has accelerated the strategic positioning of many industry giants: Apple (LuxVue), Meta (InfiniLED), Google (Glo), Intel (Aledia) and others have invested in or acquired startups in this field. Major domestic panel makers (BOE, China Star Optoelectronics, San'an Optoelectronics) have now followed with layouts of their own.

When applied to virtual reality displays, Micro-LED offers low power consumption, high brightness, high contrast, fast response, and thinness. However, LED epitaxial wafers remain expensive at this stage, and product yields have not yet reached mass-production levels. Based on manufacturers' current plans, large-scale Micro-LED production may not arrive until 2022 at the earliest.

As for the display scheme, a VR headset is an optical structure centered on the human eye, and the trade-offs among visual quality, volume and weight, field of view, and cost are the key indicators in VR product design. Among them, headset weight is arguably the most immediate part of the user experience.

The current mainstream Oculus Quest 2 weighs 503 g, while the HTC Vive Focus 3 weighs as much as 785 g. Never mind the comfort of wearing a pound of display on your face; the pressure such front-loaded weight puts on the cervical spine deserves attention. The mainstream optical solution for VR headsets today is the Fresnel lens, which requires a certain distance between the lens and the display panel.

This scheme balances viewing angle, image quality, and cost to a degree, but it is "unbearable" in terms of the size and weight of the display module. (Image source: Oculus Quest 2 teardown by GOROman)


To solve this problem, ultra-thin "pancake" optics fold the optical path with a two-lens system of semi-transparent, semi-reflective polarizing films. This can cut headset weight below 200 g and shrink the volume to a third of a traditional device, while also delivering a better display and a larger field of view. The drawback is the "ghosting" caused by multiple reflections at the lens interfaces, along with a certain loss of brightness in the process.

This means that if a pancake design is to match the brightness of a Fresnel-lens design, it needs a higher-brightness display panel; that shortfall may eventually be made up by combining pancake optics with mass-produced Micro-LED. (Picture: pancake display schematic)


Today, in 2021, VR headset technology and production capacity have gradually matured, and prices have fallen into a range that ordinary players find acceptable. According to one survey, major domestic e-commerce platforms sold more than 90,000 VR headsets during the Double Eleven period in 2021. This reflects consumers' current recognition of product quality, but many details still remain to be improved.


[2] How the metaverse experience feels depends on whether interaction is "smooth".

Once past the threshold of entering the metaverse through VR/AR, the depth of the experience depends on interaction. In 2016, the so-called first year of VR, Xiaomi followed the trend with the Mi VR Play, a toy-edition headset. The product was a VR box paired with a companion app on a phone. In its initial setup, the app only accepted rotating the phone counterclockwise; rotate it wrong, and the app stayed stuck on the "insert your phone into the box" screen. Such a brain-dead interaction left some users at a loss, and some even gave up in anger after getting stuck on that screen.


A bad interactive experience sets an unnecessary threshold for users, and at the same time it limits how carefully developers can sketch out worlds in the metaverse. Interaction determines the lower limit of what users will accept from the metaverse; by the same token, good interaction opens unlimited possibilities for users to participate in this world. Since VR and AR take different forms, let us look at the hardware and software interaction of these two portals separately.

VR hardware meets basic needs easily but is hard to extend. The basic design idea of VR is to generate artificial stimuli that create a virtual sensation; the brain-computer interface of The Matrix combined with the VR-plus-haptic-suit of Ready Player One is exactly this kind of imagination. But the mainstream configuration of VR hardware today still revolves around the headset.

Since users experience VR wearing a headset that covers their eyes, ensuring safety during movement and setting play boundaries is very important. Position-tracking technologies fall into two main categories: "outside-in" and "inside-out". Outside-in tracking requires sensors placed around the room: consumers must spend extra money on positioning devices, and the solution imposes requirements on the size of the space.

After Microsoft's HoloLens adopted an inside-out solution in 2017, this tracking approach, free of external devices, quickly began to replace its predecessor. Inside-out tracking makes cord-free devices possible. As machine-vision algorithms have matured, inside-out solutions can position accurately using only the cameras on the VR headset, effectively cutting hardware costs. The inside-out scheme uses algorithms to compute an object's position information, covering translation along and rotation about three axes, which is what 6DOF (six degrees of freedom) means.

As algorithms and computing power have matured, VR devices have evolved from the initial 3DOF to 6DOF. Oculus, for example, launched the first 6DOF all-in-one headset, the Oculus Quest, and Pico upgraded its 3DOF Pico Goblin to the 6DOF Pico Neo. On the input side, handheld controllers remain the mainstream, the approach being to integrate inside-out 6DOF head tracking with 6DOF controller interaction. Representative manufacturers include Oculus (Quest), Pico, Nolo (Lingyu Intelligent Control), and Ximmerse (Suiguang). (Picture: schematic of the outside-in principle)
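The 6DOF idea above can be made concrete with a small sketch: a pose is three translations plus three rotations, and tracking means using that pose to map headset-local coordinates into the world. All names here are illustrative, and only yaw is applied, to keep the sketch short.

```python
import math
from dataclasses import dataclass

@dataclass
class Pose6DOF:
    """Position along three axes plus rotation about them: the six degrees of freedom."""
    x: float; y: float; z: float           # translation (metres)
    yaw: float; pitch: float; roll: float  # rotation (radians)

def transform_point(pose: Pose6DOF, p):
    """Map a headset-local point into world coordinates.

    Only yaw (rotation about the vertical axis) is applied here; a full
    implementation would compose all three rotations."""
    px, py, pz = p
    c, s = math.cos(pose.yaw), math.sin(pose.yaw)
    wx = c * px - s * pz
    wz = s * px + c * pz
    return (wx + pose.x, py + pose.y, wz + pose.z)

# A point 1 m in front of a user who has turned 90 degrees and walked 2 m along x:
pose = Pose6DOF(2.0, 0.0, 0.0, math.pi / 2, 0.0, 0.0)
print(transform_point(pose, (0.0, 0.0, 1.0)))
```

A 3DOF device tracks only the three rotation fields; 6DOF adds the translations, which is exactly what room-scale movement requires.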


Controller designs differ greatly between manufacturers, but the usual interaction package combines a joystick, a touchpad, operation buttons, and a grip sensor. Still, handheld controllers impose adaptation costs on content developers and a learning curve on users. Bare-hand interaction, which abandons controllers entirely, has gradually been brought to the fore.

This approach identifies the key points of the hand skeleton and uses algorithms to recognize the hand's posture and position. Hardware options for bare-hand interaction include RGB cameras, 3D cameras (time-of-flight, structured light, binocular vision), and data gloves.
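As a minimal illustration of what "key points plus an algorithm" means, the sketch below detects a pinch gesture from two fingertip positions. The indices follow the common 21-point hand-skeleton model, but everything here is a simplified assumption, not any vendor's actual API.

```python
import math

# Indices from the common 21-point hand-skeleton model
# (wrist = 0, thumb tip = 4, index fingertip = 8); coordinates in metres.
THUMB_TIP, INDEX_TIP = 4, 8

def is_pinch(keypoints: dict, threshold_m: float = 0.02) -> bool:
    """Report a pinch when thumb and index fingertips come within threshold_m."""
    return math.dist(keypoints[THUMB_TIP], keypoints[INDEX_TIP]) < threshold_m

# Fingertips 1 cm apart -> pinch detected.
frame = {THUMB_TIP: (0.10, 0.05, 0.30), INDEX_TIP: (0.11, 0.05, 0.30)}
print(is_pinch(frame))  # True
```

Real systems classify many such relations per frame and must also smooth over tracking noise, which is where the accuracy problems mentioned below come from.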

The industry is still exploring this field. Leap Motion and uSens use binocular infrared camera solutions, but operating accuracy is still unsatisfactory. Meta built a haptic-glove prototype that places actuators along each finger to deliver immersive touch. The solution guarantees accuracy, but cost (about $5,000 per prototype) is a big problem.

Beyond these controller improvements, research on VR interaction accessories such as cockpits and omnidirectional treadmills has only just begun. Headset-plus-controller solutions have so far only lifted VR equipment from "hard to use" to "usable"; interactive hardware still has a long road of research and development ahead.

VR software: how far is it from leaving the "virtual desktop" for true immersion? Optimizing the operation of virtual reality is another important technical direction for interaction. Given VR's display characteristics, a virtual reality OS could become the first 3D operating system, but current application development for such an OS builds on what it inherits from mobile: the virtual reality OS carries over the mobile platform's operating-system and underlying-software characteristics.

Visually, the user interface can extend as far as the user can see, which gives it huge advantages over flat displays in 3D graphics rendering, content transmission, and presentation. However, limited by the field of view, virtual displays still cannot escape object-oriented presentation. The result is the awkward situation of "desktop applications" running inside VR consoles, multi-function controllers notwithstanding.


VR software interaction must combine with perceptual interaction and emphasize both steadiness and real-time response. Whether or not the user is actively doing anything, the virtual reality OS must keep everything from pose tracking to rendering running stably in real time, and rendering in multiple directions must keep up as the user looks back and forth. Latency is a core technical challenge here; get it wrong and it directly causes motion sickness. In graphics rendering, the complex composition pipeline introduces high rendering latency, so a virtual reality OS cannot copy the mobile platform's 2D layer-compositing approach and must find its own optimizations.
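To see why latency matters, it helps to put numbers on it. A commonly cited motion-to-photon budget for comfortable VR is about 20 ms; each displayed frame then consumes part of that budget, as the sketch below (with assumed figures) shows.

```python
# Commonly cited motion-to-photon budget for comfortable VR (~20 ms);
# the exact threshold varies by person and by study.
BUDGET_MS = 20.0

def frame_time_ms(refresh_hz: float) -> float:
    """Time the display spends on one frame."""
    return 1000.0 / refresh_hz

for hz in (60, 72, 90, 120):
    ft = frame_time_ms(hz)
    # Tracking, simulation, rendering and scan-out must all fit in the budget,
    # so higher refresh rates leave more headroom for the rest of the pipeline.
    print(f"{hz:>3} Hz: {ft:5.1f} ms/frame, {BUDGET_MS - ft:5.1f} ms headroom")
```

At 60 Hz more than half the budget is gone before the software does anything, which is one reason VR headsets push toward 90 Hz and above.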

In screen presentation and sensor interaction, the virtual reality OS also needs operations more convenient than the finger swipes and menu taps of mobile platforms. Beyond presentation, handling misjudgments such as unintentional user actions is the next direction for continuous optimization, for example trembling hands, or dizziness caused by switching between too many interfaces at once.

The application of 3D development tools on 2D panels has long been mature and is gradually moving toward simplicity, light weight, and visual workflows. The market today is represented by 3D graphics engines such as Unreal, Frostbite, and Source, used mainly for content production in games and film.

Graphical interaction and development in virtual space has unique advantages over 2D panels, but the current ease of use of virtual reality OSes is not enough to support immersive creation by developers. Creating for the metaverse by borrowing these graphics engines is extremely convenient; creating inside the metaverse may have to wait.

AR hardware: optical display, with multiple routes contending. Google Glass in 2012 made many science-fiction fans cry out that the future had arrived; in 2016, the popularity of Pokémon Go gave the public its first intuitive taste of AR's charm. Among the more mature augmented-reality technologies, optical display solutions mainly divide into the prism scheme (Google Glass), the birdbath scheme (Lenovo Mirage AR), the freeform-surface scheme, the off-axis holographic lens scheme, and the optical waveguide scheme.

The prism, birdbath, and freeform-surface schemes all face a contradiction between visual effect and device volume: the larger the field of view, the thicker the optical lens and the bulkier the device. This contradiction limits all three in smart wearables.

The off-axis holographic lens scheme can shrink the device, but it requires customization and suffers from a small field of view and low resolution, making it impractical. With the push of manufacturers such as Microsoft (HoloLens), Google, Magic Leap, and DigiLens, optical waveguide technology has gradually become the mainstream display solution for AR hardware. Even so, while the waveguide outperforms the alternatives, its advantages and disadvantages remain sharply drawn.

The current mainstream optical waveguide schemes are the polarization-array waveguide (a geometric waveguide), the surface-relief grating waveguide (a diffractive waveguide), and the volume holographic grating waveguide. The basic principle of an optical waveguide is to couple light from the source into a transparent substrate in the display area, guide it inside the material, and then couple it out toward the human eye to form the image.


Optical waveguides can expand the eyebox through one-dimensional and two-dimensional pupil expansion, accommodating more users and pushing toward consumer-grade products. However, energy is lost as light reflects through and exits the material, so optical efficiency is low. Geometric waveguides involve a tedious manufacturing process that keeps yields low, while diffractive waveguides easily disperse the input light and carry a high design threshold.

In the short term, AR display processes and technical routes still have much room for improvement. At the same time, high barriers and a potentially vast market bring plenty of opportunity to this track. Optical-waveguide AR headset companies that already ship products include LLVision, Ricoh, Rokid, and Vuzix.

Optical components account for about half the cost of an AR terminal. Setting other parts aside for now, the display, as the user's first point of contact with the device, still has a long way to go.


 Image source: China Securities 

AR software: SLAM has begun to spread, and the major players are positioning one after another. SLAM (simultaneous localization and mapping) lets a system tell a user, in an unfamiliar place, where they are, what is around them, and where to go next. By integrating mapping, scene modeling, and path planning, SLAM also shines in autonomous driving (Tesla's FSD takes a pure-vision SLAM approach).

SLAM is an essential core technology for AR: blending the virtual with the real requires SLAM to position and virtually extend every element. The AR SDKs launched by Apple (ARKit), Huawei (AR Engine), and Google (ARCore) all take the technical route of fusing monocular vision with an IMU for positioning. Smaller companies are also differentiating in key subdivisions such as sensors, software, algorithms, and hardware, while the giants, beyond continuous R&D investment, keep arranging investments and acquisitions.
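The "monocular vision + IMU" fusion route can be illustrated, in drastically simplified one-dimensional form, by a complementary filter: integrate the fast but drifting gyro, then pull slowly toward the slower but drift-free vision estimate. This is a sketch of the fusion idea only, not what ARKit, AR Engine, or ARCore actually implement internally.

```python
def fuse_heading(heading: float, gyro_rate: float, vision_heading: float,
                 dt: float, alpha: float = 0.98) -> float:
    """One complementary-filter step on a 1-D heading (radians).

    alpha close to 1 trusts the integrated gyro short-term, while the
    vision estimate slowly corrects the accumulated drift."""
    predicted = heading + gyro_rate * dt  # dead-reckoning from the IMU
    return alpha * predicted + (1 - alpha) * vision_heading

# A stationary device whose gyro has a +0.05 rad/s bias: pure integration
# would drift without bound, but the vision correction keeps it bounded.
heading = 0.0
for _ in range(1000):
    heading = fuse_heading(heading, gyro_rate=0.05, vision_heading=0.0, dt=0.01)
print(f"residual drift: {heading:.3f} rad")
```

Real visual-inertial odometry replaces this scalar blend with a full state estimator over pose, velocity, and sensor biases, but the division of labor between sensors is the same.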

Applying SLAM means receiving external information and processing it precisely, which makes SLAM heavily dependent on sensor quality. Although the giants are gradually pushing the field forward, current SLAM products and hardware remain highly fragmented, and mobile hardware still lacks the computing power, so SLAM's full potential is yet to come.

02 Pioneers of metaverse technology

[3] 5G sets the upper limit, and the metaverse is here to take up the challenge

2019 is regarded as the first year of 5G worldwide. After several years of development, most ordinary people's understanding of 5G stops at using it on their phones, where the network is simply faster. Beyond that, 5G is most often paired with grand slogans such as "5G empowers the industrial internet" or "5G lets all things grow".

In fact, the potential of 5G is still far from being widely used. Today, 5G is a tiger chasing a rabbit: its strength has nowhere to land. Before discussing how the metaverse and 5G might work together, let me briefly explain just how formidable 5G is. In one sentence: 5G is the culmination of everything communication science and technology has achieved over more than a century.

Modern communication technology's leap from theory to practice began when Antonio Meucci accidentally discovered that vibrations could be converted into electric current and thus transmit sound. Bell later "plagiarized" Meucci's achievement and opened the world's first telephone company. From then on, how to turn real-world information into different electrical signals became the starting point of modern communication research.

But communication equipment of that period had to be connected by wires, completely different from today's wireless communication. After Maxwell predicted and defined the existence of electromagnetic waves, the form of communication technology we use today gradually took shape. To this day, our wireless communication technology still obeys Shannon's theorem.

The theorem establishes several basic rules of communication technology:

1. We can transmit information over radio waves despite noise interference;

2. There is a limit to how many information channels we can set up within a given radio frequency band;

3. There is a limit to how much information we can "squeeze" into a single channel at most.

From 1G through 2G, 3G, and 4G, modern communications used cellular networks, CDMA, TDMA, and similar techniques to widen the usable spectrum and raise information density, continually pushing throughput toward the Shannon limit of each era. By 4G, LTE adopted Orthogonal Frequency Division Multiple Access (OFDMA), essentially reaching the extreme of channel utilization. 5G, however, while still obeying Shannon's theorem, can be said to overturn everything that came before it.
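Rule 3 is the Shannon-Hartley capacity limit, C = B·log₂(1 + S/N). The sketch below compares an LTE-width channel with a much wider millimeter-wave channel at the same signal-to-noise ratio; the channel widths and SNR are illustrative assumptions, not measurements.

```python
import math

def shannon_capacity_bps(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon-Hartley limit: C = B * log2(1 + S/N), in bits per second."""
    return bandwidth_hz * math.log2(1 + snr_linear)

snr = 10 ** (20 / 10)  # assumed 20 dB SNR -> linear factor of 100
lte = shannon_capacity_bps(20e6, snr)    # a 20 MHz LTE-width channel
mmw = shannon_capacity_bps(400e6, snr)   # a hypothetical 400 MHz mmWave channel
print(f"20 MHz: {lte / 1e6:.0f} Mbit/s, 400 MHz: {mmw / 1e6:.0f} Mbit/s")
```

At equal SNR, capacity scales linearly with bandwidth, which is exactly why 5G reaches for the wide, empty millimeter-wave bands discussed below.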


The cause was the rise of smartphones, represented by the iPhone, plus the Internet of Things, which fundamentally changed the nature of the mobile network. It no longer merely serves people's daily communication, entertainment, and information needs; it is also widely used for data transmission between devices. The network has thus evolved from carrying a single kind of data transfer to connecting many different kinds of endpoints. In short, 4G feels "laggy" not only because we demand more traffic, but because the "things" on the network are competing with you for speed.

The construction of 5G roughly rests on the following pillars: large-scale antenna arrays with beamforming, millimeter-wave communications, software-defined networking (SDN), and network function virtualization (NFV).

Massive MIMO and beamforming are important supports for 5G's multiple capabilities. The former gives an antenna many antenna elements, each of which can communicate with a device independently, effectively establishing numerous channels between base station and terminal: the more elements, the more channels.

In the 5G era an antenna can carry 256 elements, far more than the 16 of 4G-era MIMO, increasing the device capacity a unit of network area can support. Beamforming lets an element's carrier aim at the communication terminal along a very narrow sector, almost a straight line, to establish the wireless channel; one can almost say each antenna talks to its terminal point-to-point.
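Those extra antenna elements can be spent in two idealized ways, which the toy model below contrasts: beamforming concentrates energy on one terminal (SNR grows with element count, so capacity grows only logarithmically), while spatial multiplexing opens parallel streams (capacity grows roughly linearly). Real systems mix both, and the bandwidth and SNR figures here are illustrative assumptions.

```python
import math

def beamforming_bps(bandwidth_hz: float, snr: float, n_elements: int) -> float:
    """One stream whose SNR is boosted by the ideal array gain of n elements."""
    return bandwidth_hz * math.log2(1 + n_elements * snr)

def multiplexing_bps(bandwidth_hz: float, snr: float, n_streams: int) -> float:
    """n independent spatial streams, each at the per-element SNR."""
    return n_streams * bandwidth_hz * math.log2(1 + snr)

B, snr = 100e6, 10.0  # assumed: 100 MHz channel, per-element linear SNR of 10
for n in (1, 16, 64):
    print(f"n={n:3}: beamforming {beamforming_bps(B, snr, n)/1e6:7.0f} Mbit/s, "
          f"multiplexing {multiplexing_bps(B, snr, n)/1e6:7.0f} Mbit/s")
```

The linear-versus-logarithmic gap is why massive MIMO's headline gains come from serving many streams and users at once, with beamforming ensuring each narrow beam still arrives with a usable SNR.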

Moreover, these channels can be aggregated or monopolized, yielding different bandwidth and reliability outcomes. Like a very wide road whose lanes can be re-planned flexibly and dynamically, a 5G network can, depending on the scenario, provide massive device connectivity, ultra-stable low-latency links, or ultra-high-bandwidth links.

Millimeter wave offers very high bandwidth, said to carry more than 200G of data and any known application, and millimeter-wave components are small, making communication equipment easier to miniaturize. Software-defined networking and network function virtualization completely overturn the old model of information transmission and interconnection: where the old network required dedicated devices endlessly exchanging information, 5G aims to do away with that rigid network altogether.

SDN (software-defined networking) can plan information routes dynamically in real time, improving the efficiency of network resource use while reducing maintenance costs. NFV (network function virtualization) lets various dedicated network devices be consolidated into one hardware device as software, cutting network construction and operating costs; the formerly complex network need only provide decoupled IT capacity for data. The flexibility of these technologies lays a rich, solid foundation for future 5G applications.

Ordinary users do not yet perceive 5G deeply, and some voices even express dissatisfaction with current 5G applications. But the same voices appeared in the 4G era: tariffs were too high, 3G speeds were sufficient, there was nothing to feel. As 4G went on to fuel the explosion of the internet industry, new things such as live streaming, short video, and data exchange emerged in its wake.

With those direct dividends, people gradually accepted life in the 4G era. The leap to 5G, however, is not even on the same order of magnitude as the leap to 4G, and existing consumer and industrial communication needs come nowhere near exploiting it. Yet the massive data transmission, edge computing, autonomous driving, and intelligent manufacturing required to realize the metaverse depend heavily on exactly 5G's network characteristics. Even as 5G continues to roll out, the metaverse is the urgent candidate to take up the strength 5G already has to offer.


[4] "Moving" reality into the virtual: the Internet of Things has been waiting a long time

In April of this year, NVIDIA CEO Jensen Huang announced that Omniverse had begun to land, and that the company planned to build the world's most powerful artificial-intelligence supercomputer, dedicated to predicting climate change. The system, called Earth-2 (E-2), will create a digital twin of the Earth inside Omniverse.

A digital twin is a virtual counterpart of a physical object, system, or process. NASA proposed the concept in the 1960s: in the beginning, a physical scale model was built to reason about and detect potential problems.

Later, physical simulation was gradually replaced by fully digital simulation. In a digital model, software can ingest real-world information about the physical object or system and generate predictions or simulations. The benefit of digital twins is that by creating a copy of a physical object or system, testing becomes easier, faster, and cheaper. In the automotive industry, for example, a vehicle's digital twin can even be used to trace the cause of an accident.

In building a digital twin model, accurately capturing the state and data of the target object is the core task, and the connected devices and sensors that make up the Internet of Things are precisely what collect the data needed to build one.
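In miniature, the loop looks like this: sensor readings stream in, the twin mirrors the object's state, and software queries the twin instead of the machine. Everything below (the motor, the threshold, the readings) is a made-up illustration, not a real industrial schema.

```python
from dataclasses import dataclass, field

@dataclass
class MotorTwin:
    """A toy digital twin of a motor: mirrors sensor state and flags risk."""
    temperature_c: float = 25.0
    history: list = field(default_factory=list)

    def ingest(self, reading_c: float) -> None:
        """Update the twin from one IoT temperature-sensor reading."""
        self.temperature_c = reading_c
        self.history.append(reading_c)

    def needs_inspection(self, limit_c: float = 80.0) -> bool:
        """Query the twin, not the physical machine, for its condition."""
        return self.temperature_c > limit_c

twin = MotorTwin()
for reading in (40.0, 55.0, 83.0):  # simulated sensor stream
    twin.ingest(reading)
print(twin.needs_inspection())  # True: the last reading crossed the 80 degree limit
```

A production twin would add physics models and prediction on top of this mirrored state, but the dependency on reliable sensor data is the same, which is the point of the paragraph above.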

Under the "Internet of Everything" trend, the IoT market keeps expanding, and global shipments of IoT terminal devices are growing rapidly. According to IoT Analytics, the number of IoT terminals worldwide grew at a 30% compound annual rate from 2015 to 2020, and despite the larger base it will keep growing briskly: IoT Analytics forecasts a 21% compound growth rate from 2020 to 2025.
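Compound growth rates like these are easy to sanity-check: a 30% CAGR over five years multiplies the base by about 3.7, and 21% over five years by about 2.6. A one-line sketch:

```python
def project(base: float, cagr: float, years: int) -> float:
    """Compound annual growth: base * (1 + cagr) ** years."""
    return base * (1 + cagr) ** years

print(round(project(1.0, 0.30, 5), 2))  # 3.71: 30% CAGR over five years
print(round(project(1.0, 0.21, 5), 2))  # 2.59: 21% CAGR over five years
```

So even the "slower" forecast period still implies the installed base roughly two-and-a-half-times itself by 2025.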

Thanks to a strong domestic innovation environment, the emergence of hit applications such as shared bicycles and mobile payments, and policy support, the number of connections in China has grown faster than overseas in recent years. According to data from the three major operators, cellular IoT connections in China reached 1.351 billion in 2020, accounting for more than half of the global total. 

In 2022, IoT industries such as NB-IoT and the Internet of Vehicles will continue to benefit from national policies such as the “Notice on Further Promoting the Comprehensive Development of the Mobile Internet of Things” and the “Three-Year Action Plan for the Construction of New Internet of Things Infrastructure”. The growth rate of China’s cellular IoT connections is expected to keep exceeding the global average. 


The continuous emergence of downstream applications will keep driving demand, so overall shipments of IoT communication modules are expected to grow at roughly 20% per year over the next few years. Mobile internet, mobile payment, and the Internet of Vehicles are important application scenarios. At the same time, driven by new applications, the construction of new infrastructure such as 5G, artificial intelligence, the Internet of Things, and cloud computing will accelerate. In 2022, new scenarios such as smart factories, smart cities, smart transportation, and smart mines are expected to move faster toward deployment. 

A smart factory uses technologies such as machine vision and the video Internet of Things to achieve highly automated, digitized, networked, and intelligent factory management. Smart factories improve quality, cut costs, reduce inventory, and raise efficiency, ultimately accelerating the digital transformation of domestic manufacturing and industrial-chain automation. 

A smart city uses the Internet of Things, big data, cloud computing, artificial intelligence, blockchain, and other digital means to sense, analyze, and integrate city data across urban planning, construction, and management, enabling intelligent city governance. 

IoT applications in industry and production have been rolled out gradually over time, but the metaverse still needs to extend them into the virtual world through data collaboration and connectivity. In the future, the metaverse’s unique creative environment, together with digital twins, will feed innovation back into the real world. 

Today, driven by the dual-carbon policy and the pursuit of lower costs and higher efficiency, the Internet of Things is steadily penetrating industry. In the future, with 5G’s ability to connect 1,000,000 devices per square kilometer, coupled with advances in sensor variety and accuracy, the IoT will be able to “transmit” real objects directly into the metaverse through digital twins. Items from the real world could then be called up without manual modeling and design, letting the metaverse play to its strengths and focus on creation. 

03 The second era of navigation

On August 3, 1492, the fleet of the Spanish adventurer Columbus weighed anchor and set sail from the port of Palos. After two months adrift at sea with no shore in sight, the crews grew panicked. On October 6, Columbus convened a meeting aboard the flagship and decided to sail on for another five days, turning back if no land was found. At 22:00 on the night of October 11, the fleet spotted a light ahead and became convinced that land was near. 

At 2 o’clock in the morning on October 12, the crew of the Pinta definitively sighted land. That morning, Columbus and his party finally came ashore on the first piece of land they had reached in the Western Hemisphere: a coral island, which Columbus named San Salvador. 

Today, the metaverse may be the San Salvador of the virtual world, and its discovery may herald a new continent that will change the shape of the future.

Completing the metaverse requires many technologies as underlying support. Its six major technical pillars are blockchain, interactive technology, video game technology, artificial intelligence, networking and computing, and the Internet of Things. Many of these are still emerging, yet together they span nearly all of modern technology. Unlocking the metaverse’s full potential requires all six working in concert; although some still need to mature, entering and designing the metaverse has already become possible today. 


Rome was not built in a day. As this article shows, some of these technologies are not yet mature, while others have already spawned industries. But new opportunities can never be grasped only after they mature, and it is no coincidence that the metaverse concept has erupted from the pages of the novel “Snow Crash” into today’s headlines. A single spark can start a prairie fire; preparing for a rainy day comes from insight into the future. New concepts are often criticized before the public embraces them, but one day, spending time in the metaverse will feel as natural as online shopping does today, and perhaps just as irreplaceable. 

The future never arrives on its own, and the road to it may be full of uncertainty, but the pioneers will reap rewards far beyond their era. Until his death, Columbus believed the new world he had discovered was Asia, yet his successors found a flourishing civilization along that dangerous and passionate road. 

The second great era of navigation in human history has begun. The first set sail with wood as its vessel; this time, the vessel is data.

*This article is written based on public information and is only used for information exchange and does not constitute any investment advice.

Posted by:CoinYuppie,Reprinted with attribution to: