BIGCHINA: A Framework for Understanding Metaverse Technology

Eight categories of technologies most relevant to the development of the Metaverse.

This article is excerpted from “The Hitchhiker’s Guide to the Metaverse” by Chen Yongwei and Lu Linyuan

The Metaverse is a coupling of human imagination with actual technical conditions. Under different technical conditions, the Metaverse will take different shapes and be realized in different ways. This means that to understand the current state of Metaverse development, and to anticipate where it may go, one needs some understanding of the technologies associated with it.

The technologies associated with the Metaverse are intricate. All kinds of technological-sounding terms appear – VR, AR, artificial intelligence, blockchain, the Internet of Things… It seems every cool, high-tech word can find an intersection with the Metaverse. This is not grandstanding: from a technical point of view, the Metaverse really is connected to all of them, and many technological changes will shape its construction.

Because the technologies related to the Metaverse are so numerous, for convenience of discussion we divide the key technologies into eight categories:

(1) Blockchain technology;

(2) Interactivity;

(3) Communication technology (5G, 6G);

(4) Cloud and edge computing;

(5) High-performance computing and quantum computing;

(6) IoT and robotics;

(7) Network technology;

(8) Artificial intelligence.

Since the initial letters of these eight categories happen to spell BIGCHINA, we call this the “BIGCHINA technology system” (or “Great China system”) supporting the development of the Metaverse. Below, we introduce these eight categories of technologies one by one. The order of introduction follows how closely each technology connects to the Metaverse, rather than the letter order of “BIGCHINA”.

Interactive Technology

From a technical point of view, the Metaverse is generated by computers, so the computer is the necessary gateway through which people enter the Metaverse, and everything a person does there is realized through computers. Human-computer interaction therefore becomes the primary problem when thinking about the Metaverse.

Although the computer is a tool invented by people, it has in a sense played the role of “subject” ever since its invention. In other words, “people have had to revolve around the machine”, adjusting the way they interact with it according to the machine’s characteristics. Under such conditions, human creativity and initiative are constrained. It is therefore very important to fundamentally change human-computer interaction, shifting from “the machine as subject” to “the person as subject”. One important significance of the Metaverse is to free people from communicating with machines through text and code, and to let them communicate in a virtual environment in a more natural way.

Achieving this kind of natural human-computer interaction requires the support of a variety of technologies.

There are three main types of human-computer interaction technologies related to the Metaverse: Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR). In parts of the literature, these three are collectively referred to as Extended Reality (XR).


Parallax seen in VR devices

VR refers to using machines to simulate a virtual scene so that people can have an immersive experience. If the goal of VR is to use a computer to conjure a virtual world out of thin air, then the goal of AR is to add graphics, sound, touch and other elements onto the real world, while MR combines AR and VR to achieve full integration and interaction between the virtual and the real.

Strictly speaking, AR and MR are different, and there are two criteria for distinguishing them: first, whether the relative position of the virtual object moves with the movement of the device; second, whether, under ideal conditions, virtual objects can be separated from real ones. If the virtual object’s relative position does not change with the device and the virtual is separable from the real, it is AR; conversely, if the relative position changes with the device and the virtual and real are fully integrated, it is MR. In practice, of course, people often mix the two terms, and many products that should be called MR are called AR.

Artificial Intelligence

If the interactive technologies represented by VR and AR are the gateway through which people enter the Metaverse, then artificial intelligence is the engine that lets the entire virtual world interact with them.

We can understand “artificial intelligence” as the ability of an agent to achieve goals under complex conditions. Historically, people have tried many routes to artificial intelligence. Early researchers, for example, tried to start from neuroscience and simulate the operation of the human brain. More recently, machine learning has become the mainstream approach.

Machine learning means giving the computer the ability to learn from analyzing large amounts of data, without being explicitly programmed in advance, and to find its own way of handling a problem. For example, if we want to train a computer to recognize cats in pictures, we can find a pile of animal pictures to “feed” to the machine and let it “learn”.

Of course, there are many ways to “learn”. Traditionally, a method called supervised learning was used: people pick out the pictures of cats and identify the characteristics of cats – big eyes, round faces, fat paws, and so on. After the computer has studied enough labeled pictures, it knows that an animal with these characteristics is called a cat.
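As a minimal sketch of this supervised workflow (in Python; the feature names, scores and tiny dataset below are invented purely for illustration), training on human-labeled examples might look like this:

```python
# A minimal supervised-learning sketch: hand-picked features with human labels.
# The feature values and the "cat vs. not-cat" task are invented for illustration.
from sklearn.tree import DecisionTreeClassifier

# Each sample: [eye_size, face_roundness, paw_fatness], scored 0-1 by a labeler.
X = [[0.9, 0.8, 0.7],   # cat
     [0.8, 0.9, 0.8],   # cat
     [0.2, 0.3, 0.1],   # not a cat
     [0.3, 0.1, 0.2]]   # not a cat
y = [1, 1, 0, 0]        # labels supplied by humans: this is the "supervision"

clf = DecisionTreeClassifier().fit(X, y)
print(clf.predict([[0.85, 0.75, 0.9]]))  # -> [1], i.e. "cat"
```

Note that both the labels and the features themselves had to be supplied by people; that manual effort is exactly the cost discussed next.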

However, this training method is obviously time-consuming and labor-intensive; labeling the data alone takes enormous manpower and material resources. As a result, “deep learning”, a family of methods that largely dispenses with hand-crafted features, has now become more popular.

Deep learning imitates the thinking process of the human brain. It learns with a multi-layer neural network, combining low-level features into progressively more abstract high-level representations, and finally makes an overall judgment. Think back to how we ourselves learned to recognize cats: nobody collected a pile of labeled cat pictures for us to study; we simply saw enough cats and naturally came to know what a cat is. Dissecting this “natural” process, our brains summarize many characteristics of cats through repeated observation, and then combine those characteristics to decide whether the animal in front of us is a cat. Deep learning works similarly: it learns from a large number of samples, gradually distills general features for judging whether an animal is a cat, and then makes its judgment based on those features.


Using deep learning to teach a computer to recognize a cat

Under big-data conditions, the efficiency advantage of this learning method over traditional machine learning is obvious; in object recognition it can basically match human accuracy. However, there are trade-offs. In interpretability, deep learning has clear shortcomings – although we know the computer can identify a cat in a picture, it is difficult to know what criteria it uses to do so.
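To make the “multi-layer” idea concrete, here is a minimal, untrained sketch (PyTorch is our choice of framework, not the book’s; layer sizes are toy-scale): the network learns its own features from raw pixels instead of relying on hand-crafted ones.

```python
# A minimal deep-learning sketch: stacked layers build features level by level.
import torch
import torch.nn as nn

model = nn.Sequential(                           # stacked layers = "deep"
    nn.Conv2d(3, 8, kernel_size=3, padding=1),   # low-level features (edges, blobs)
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(8, 16, kernel_size=3, padding=1),  # higher-level combinations
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 2),                            # scores for "cat" vs. "not cat"
)

x = torch.randn(1, 3, 64, 64)   # a stand-in for one 64x64 RGB image
print(model(x).shape)           # torch.Size([1, 2]): two class scores
```

The intermediate features here are learned weights rather than human-readable rules, which is precisely why interpretability suffers.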

At this stage, the applications of artificial intelligence in the Metaverse fall mainly into three areas: real-time scene generation and digital twins, intelligent virtual humans, and personalized digital avatars.

In the Metaverse, the user’s position changes constantly, and the virtual scene must change with it to keep the interaction coherent. The large volume of graphics, lighting and shadow updates this requires must be computed by artificial intelligence in real time.

A digital twin is a high-fidelity digital replica of a physical entity or system that remains in continuous interaction with the physical world. In the Metaverse, digital twins can be used to mirror and manage physical entities in real time.

In the Metaverse, besides human digital avatars, there will be many virtual-human NPCs that serve as the “extras”. As in the movie Free Guy, these NPCs need a certain ability to interact with people in order to meet players’ needs. Making these NPCs intelligent requires artificial intelligence technology.

To substantially improve the training of virtual-human NPCs, reinforcement learning is often used. This method lets agents try and err in a complex environment, handing out “rewards” or “penalties” according to the outcomes they return; with this approach, their level of intelligence can be greatly improved in a short time. Reinforcement learning is already widely used in computer games, and it is reasonable to believe it will be used more and more in the future Metaverse.
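As a rough illustration of the trial-and-error loop (the five-state “corridor” environment, rewards and learning constants below are all invented; real NPC training is far more complex), a tabular Q-learning sketch might read:

```python
# A minimal Q-learning sketch: an agent learns by trial and error, guided by
# numeric rewards and penalties.
import random

n_states, n_actions = 5, 2            # states 0..4; actions: 0 = left, 1 = right
Q = [[0.0] * n_actions for _ in range(n_states)]
alpha, gamma, epsilon = 0.5, 0.9, 0.1

for episode in range(500):
    s = 0
    while s != n_states - 1:          # reaching the last state ends the episode
        a = random.randrange(n_actions) if random.random() < epsilon \
            else max(range(n_actions), key=lambda a: Q[s][a])
        s_next = max(0, s - 1) if a == 0 else s + 1
        r = 1.0 if s_next == n_states - 1 else -0.01   # "reward" or "penalty"
        Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])
        s = s_next

print([max(range(n_actions), key=lambda a: Q[s][a]) for s in range(n_states - 1)])
# typically [1, 1, 1, 1]: the learned policy moves right toward the goal
```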

In the Metaverse, every user moves through the world via an avatar. To create a more believable virtual environment, rich avatar representation is therefore necessary. In many Metaverse projects, however, creators provide only a handful of fixed models, or let players assemble avatars from just a few optional parts, such as nose, eyes and mouth. As a result, players’ avatars look highly alike, which detracts considerably from the fun of the Metaverse itself.

High-Performance Computing and Quantum Computing

Just as the whole world was expressing boundless longing for the Metaverse, Intel poured cold water on the “Metaverse fever”. Not long ago, the US technology site The Verge published an article citing comments on the Metaverse by Raja Koduri, Intel senior vice president and head of accelerated computing systems and graphics. Koduri argued: “The Metaverse may be the next major computing platform after the Internet and the mobile Internet. However, our computing, storage and network infrastructure today is simply not enough to realize this vision.” Koduri was particularly concerned about computing power: “The computing power required to realize the Metaverse will be a thousand times the total computing power now.” If Koduri’s judgment is correct, computing power will become the biggest obstacle to entering the Metaverse, and anyone who wants to truly embrace it must work to break through this bottleneck.

From today’s vantage point, there are many possible technical paths to break through the computing-power bottleneck, including high-performance computing, quantum computing, neuromorphic computing and probabilistic computing. Due to space limitations, we will mainly introduce high-performance computing and quantum computing here.

High-performance computing, in layman’s terms, means using aggregated computing power to handle data-intensive computing tasks that standard workstations cannot complete. As a comprehensive field, it involves complex issues at both the software and hardware levels.

The core technology of high-performance computing is parallel computing, defined in contrast to serial computing. In serial computing, a task is not split up, and executing it occupies a fixed computing resource. In parallel computing, tasks are decomposed and handed to multiple computing resources for processing.

Such decomposition and distribution can take many forms. A computing task may be divided among multiple processors that solve it collaboratively, or the problem may be decomposed into parts, each handled by an independent processor in parallel. A parallel computing system can be a supercomputer with many processors, or a cluster of independent computers interconnected in some way.
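A minimal sketch of this decompose-and-merge pattern (in Python, using the standard multiprocessing module; the sum-of-squares task and chunk sizes are invented for illustration):

```python
# One big task is decomposed into chunks, handed to multiple worker processes,
# and the partial results are merged at the end.
from multiprocessing import Pool

def partial_sum(bounds):                 # one worker handles one sub-range
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

if __name__ == "__main__":
    chunks = [(0, 2_500_000), (2_500_000, 5_000_000),
              (5_000_000, 7_500_000), (7_500_000, 10_000_000)]
    with Pool(processes=4) as pool:      # four computing resources
        parts = pool.map(partial_sum, chunks)
    print(sum(parts))                    # merge: same answer as the serial version
```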

Architecturally, parallel computing divides into homogeneous and heterogeneous parallel computing. As the names imply, homogeneous parallel computing assigns tasks to a series of identical computing units, while heterogeneous parallel computing assigns them to units with different processor architectures, different instruction sets, and different functions. The parallel operation of a multi-core CPU is homogeneous parallelism, for example, while a CPU+GPU architecture is heterogeneous.

Compared with homogeneous parallelism, heterogeneous parallelism has many advantages. In plain language, the advantage comes from the “division of labor” among computing units: under a heterogeneous architecture, the strengths of different units complement one another better. For this reason heterogeneous parallel computing is receiving more and more attention; in the Metaverse field in particular, the computing solutions proposed by many large enterprises are based on heterogeneous parallelism.

If high-performance computing is about the allocation of computing resources, then quantum computing tries to improve efficiency by changing the very logic of classical computing.

We know that the basic unit of classical computing is the bit, whose state is either 0 or 1, so every problem in a classical computer can be decomposed into operations on 0s and 1s. The basic unit of quantum computing is the qubit, whose state can be a superposition of 0 and 1, that is, a vector in a two-dimensional complex space. Because of this, quantum memory has a big advantage over classical memory.
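A small numerical sketch of the advantage (in Python with NumPy; this only simulates the state space, it is not a quantum computer): n classical bits hold one of 2**n values at a time, while the state of n qubits is a vector of 2**n complex amplitudes.

```python
import numpy as np

n = 3
state = np.zeros(2**n, dtype=complex)   # amplitudes for |000>, |001>, ..., |111>
state[0] = 1.0                          # start in the definite state |000>

# An equal superposition assigns weight to all 2**n basis states at once.
superposition = np.ones(2**n, dtype=complex) / np.sqrt(2**n)
print(np.abs(superposition) ** 2)       # measurement probabilities, each 1/8
```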

To make a rough analogy: anyone who has played action games knows that the hero can often use many skills, some of which hit only a single target, while others are area attacks that strike a whole group of enemies. Single-target skills are like classical computing, while area attacks are like quantum computing. Faced with a swarm of mobs, one area attack can be worth many single-target moves. In the same way, for certain specific problems, quantum computing can achieve an enormous efficiency gain over classical computing.

For example, factoring large numbers is of great value in breaking public-key encryption. With the usual classical algorithms, the time needed to factor a number N grows exponentially with the binary length of N. In 1994, researchers organized 1,600 workstations around the world to factor a 129-digit number (the RSA-129 challenge); the work took 8 months. In theory, factoring a number a thousand bits long this way would take on the order of 10^25 years – who knows whether the Milky Way would still exist! With quantum computing, however, running Shor’s algorithm, the same problem could in principle be solved in under a second. This shows the power of quantum computing.

However, while appreciating the power of quantum computing, we must also recognize that, at least so far, that power shows up only on a few special problems; its generality is relatively weak. The various quantum computers reported to date can only run specialized algorithms, not general-purpose computations. For example, the D-Wave machines used by Google and NASA can only perform quantum annealing, while the photonic quantum computer “Jiuzhang” developed in China is dedicated to the “Gaussian boson sampling” problem. Each excels in its specialty, but neither can solve general problems. It is like an area attack in a game: the attack covers a wide range, but its damage to any individual is relatively weak. Against a horde of mobs it is powerful, but against a boss with high defense and a thick health bar it is of little use.

From this perspective, if we want to harness quantum computing in the Metaverse, we must first find problems and scenarios suited to it, and then find the corresponding algorithms. We should also recognize that, important as quantum-computing research and exploration are, they complement rather than substitute for the exploration of other technological paths.

Cloud Computing and Edge Computing

If neither high-performance computing nor quantum computing can fully answer the computing-power challenge posed by the Metaverse, another possible solution is cloud computing.

A common metaphor helps explain cloud computing. Traditionally, users mainly relied on their own standalone IT resources, like every household generating its own electricity; cloud computing is like building a large power station and delivering the “electricity” (IT resources) to all users over the grid.

Users can then draw on IT resources as needed. For example, if a Metaverse user needs more computing power or storage than the local machine can supply, they can call in “reinforcements” from the cloud: if one cloud CPU is not enough, add a few more and use them on demand. It is convenient and not wasteful.

Although in theory cloud computing can shoulder the enormous computing and storage demands of the Metaverse, its shortcomings are also obvious. Most importantly, cloud computing requires large amounts of data to travel between the local device and the cloud, which introduces significant latency, and the larger the data throughput, the worse the delay becomes. For Metaverse users, this can badly damage the experience.

So how can this problem be overcome? An intuitive idea is to place a platform capable of computing, storage and transmission close to the user or device. Such a platform can act as an intermediary between the terminal and the cloud on the one hand, and respond in real time to the terminal’s various demands on the other. This idea is called edge computing. Because the edge platform sits close to the user, data exchange with the user is more timely and the latency problem is better solved. Studies have shown that edge computing can reduce latency by more than 60%.


Edge computing

Of course, the benefits of edge computing don’t stop there. For example, edge computing can also better protect user privacy. Compared with the traditional Internet, the Metaverse will collect far more user data, and the resulting privacy risks will be correspondingly more serious. Most current cloud services are controlled by a few Internet giants, who spare no effort in collecting user information. This means that while users roam the Metaverse, all of their data, their movement trajectories, and even their biometric information are constantly being watched by the giants.

In contrast, edge computing allows data to be processed and stored on edge devices, which protects user privacy better. On the one hand, edge services can strip highly private data out of applications during authorization. On the other hand, edge platforms can more easily run privacy-preserving algorithms such as “federated learning”. Federated learning is a machine-learning approach that differs from traditional centralized learning: instead of collecting user data in advance and analyzing it centrally, the program is sent to the local device, which returns only the learning results; the analyst then aggregates the returned results into a final conclusion. With such an algorithm, the conflict between machine learning and privacy is greatly reduced.
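A minimal sketch of the “send the program, return only the results” idea (a toy federated-averaging loop in Python; the least-squares model, data and learning rate are all invented for illustration):

```python
# The model (here, just a weight vector) is sent to each device, trained
# locally, and only the updated weights are returned to the server.
import numpy as np

def local_update(weights, local_data):
    # One gradient step on data that never leaves the device.
    X, y = local_data
    grad = 2 * X.T @ (X @ weights - y) / len(y)
    return weights - 0.1 * grad

rng = np.random.default_rng(0)
devices = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(5)]
weights = np.zeros(3)

for round_ in range(10):
    returned = [local_update(weights, d) for d in devices]   # runs on devices
    weights = np.mean(returned, axis=0)                      # server only averages

print(weights)   # the server learned a model without ever seeing raw data
```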

Communication Technology

Using the Metaverse will generate huge data throughput, while the pervasive use of VR and AR will demand lower latency. Meeting high throughput and low latency at the same time requires higher-performance communication technology.

Communication can be divided into wired and wireless according to the transmission medium. Generally speaking, wired communication is much faster than wireless. In July 2021, Japan set a wired-communication record of 319 Tbps, meaning roughly 39.9 terabytes of data can be transmitted per second; at that speed, a 10 GB high-definition movie takes only about 0.0003 seconds. At such speeds, the needs of the Metaverse can already be fully met.
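The arithmetic behind these figures checks out, as a quick Python calculation shows:

```python
# 319 Tbit/s is about 39.9 TB/s, and a 10 GB movie then takes ~0.00025 s,
# consistent with the "0.0003 seconds" order of magnitude quoted above.
rate_bits = 319e12            # 319 Tbps
rate_bytes = rate_bits / 8    # bytes per second
movie = 10e9                  # a 10 GB file

print(rate_bytes / 1e12)      # 39.875  (TB per second)
print(movie / rate_bytes)     # ~0.00025 seconds
```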

The problem is that people want to move freely in the Metaverse rather than being tethered to a computer or some fixed device. Communication in the Metaverse will therefore rely more heavily on wireless communication.

In wireless communication, electromagnetic waves are the carrier. The achievable transmission rate is mainly determined by the available bandwidth, and bandwidth in turn depends on the carrier frequency. The development of wireless technology from 1G to 5G has therefore been a process of continually raising the carrier frequency and thereby increasing bandwidth.

Some may think the problem is then simple: keep pushing the frequency higher, and wireless speed can rise without limit. But it is not so simple. The product of a wave’s frequency and its wavelength is a constant, namely the speed of light, so the higher the frequency, the shorter the wavelength. And short wavelengths bring many problems: very limited coverage, for example, and poor ability to penetrate obstacles.
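A quick worked example of this trade-off (the example frequencies are illustrative band choices, not from the book):

```python
# Since c = f * wavelength, raising the carrier frequency shrinks the wavelength.
c = 3e8                                   # speed of light, m/s
for label, f in [("0.9 GHz (early cellular)", 0.9e9),
                 ("3.5 GHz (mid-band 5G)", 3.5e9),
                 ("28 GHz (5G millimeter wave)", 28e9)]:
    print(f"{label}: wavelength = {c / f * 100:.1f} cm")
# 0.9 GHz -> 33.3 cm; 3.5 GHz -> 8.6 cm; 28 GHz -> 1.1 cm
```

Centimeter-scale wavelengths are exactly what makes the tiny antennas and dense micro base stations described next both necessary and possible.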

So how are these problems solved? Current 5G technology answers with denser micro base stations: if coverage is small and penetration poor, build base stations everywhere. Such base stations cannot be large; they must be miniaturized. Fortunately, because antenna length must be matched to wavelength, these very short-wavelength signals need only very short antennas, and since each antenna is so small, a single base station can carry many antennas for transmitting and many for receiving. This design is MIMO, the so-called “Multiple-Input Multiple-Output”.

5G technology is now gradually spreading. Even so, its transmission speed is still hard to compare with wired transmission: against the record wired speed mentioned above, 5G is only roughly 1/16,000 as fast. Moreover, because 5G needs a huge number of base stations, its cost is very high; in practice only densely populated large cities can afford to popularize it, while in remote areas it is difficult to deploy. For these reasons, we believe current 5G alone may not effectively meet the communication requirements of the Metaverse; 6G and newer wireless schemes will have to be introduced.

Network Technology

In addition to communication technology, the Metaverse also puts forward many new requirements for the design of the network.

Imagine two cities, A and B, that rarely interact. One day the residents of city B suddenly become obsessed with the fruit grown in city A, and freight demand between the two cities jumps dozens of times. Fruit spoils quickly, so shipping from A to B must become both far larger in volume and far faster. This is a portrait of the transition from the traditional Internet to the Metaverse: under Metaverse conditions, the volume of transmitted content will soar by dozens or even hundreds of times, while tolerance for delay will fall.

So how can this challenge of high throughput and low latency be met? Look again at the freight scene. To meet the increased demand, our first reaction is to prepare more and faster vehicles – just as, for transmission, we first look for breakthroughs in communication technology. But for freight, simply adding vehicles is not enough: if the roads are not planned, regulated and upgraded accordingly, the speeding vans will collide and cause chaos. Similarly, without corresponding network design, improved communication capability alone cannot meet the challenges of the Metaverse.

Facing congestion, we generally ease traffic at both the macro and micro levels. At the macro level, we classify roads so that different vehicles take different paths: vehicles on urgent missions, such as police cars, ambulances and fire engines, get dedicated lanes, while ordinary private cars are routed elsewhere. At the micro level, traffic police coordinate at each intersection: those in a hurry are waved through first, while other drivers wait. Similar ideas are useful in network design.

The macro-level division of traffic corresponds to “network slicing” in network design. In short, one physical network is divided into several logical layers, so that different applications travel on different layers. With total transmission capacity limited, the applications with the most demanding network requirements can then be served first. When the Metaverse’s needs are truly activated, both the volume and the variety of traffic will grow dramatically; allocating network resources sensibly at the macro level will require ever more scientific and finer-grained slicing.
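A conceptual sketch of the slicing idea (in Python; the slice names, capacities and latency targets are invented, and real slicing happens in the operator’s network equipment, not application code):

```python
# One physical link's capacity is partitioned into slices with different
# guarantees; traffic is admitted only against its own slice's reservation.
LINK_CAPACITY_GBPS = 100

slices = {
    "xr-interactive": {"reserved_gbps": 40, "max_latency_ms": 10},
    "bulk-content":   {"reserved_gbps": 50, "max_latency_ms": 200},
    "control-plane":  {"reserved_gbps": 10, "max_latency_ms": 5},
}

def admit(app_slice, demand_gbps, used_gbps):
    """Admit traffic only if its slice still has reserved capacity left."""
    return used_gbps + demand_gbps <= slices[app_slice]["reserved_gbps"]

assert sum(s["reserved_gbps"] for s in slices.values()) <= LINK_CAPACITY_GBPS
print(admit("xr-interactive", demand_gbps=5, used_gbps=38))   # False: slice full
print(admit("bulk-content", demand_gbps=5, used_gbps=38))     # True
```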


Schematic diagram of 5G network slicing

The traffic police’s micro-level diversion corresponds, in networking, to Quality of Service (QoS) management. When road capacity is limited, someone must decide who goes first and who waits; likewise, when network traffic is congested, packet loss is inevitable, and whose packets get dropped becomes the question. The logic of QoS management is to set priorities according to each service’s transmission-quality requirements: data from services with low requirements is discarded first, so that data from demanding services is preserved as much as possible. Which services count as demanding is determined mainly by a set of objective technical standards.
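As a minimal sketch of that drop-the-least-important logic (in Python; the service names, priorities and queue limit are invented, and real QoS is implemented in routers rather than like this):

```python
# When the queue overflows, packets from low-priority services go first.
import heapq

QUEUE_LIMIT = 3
queue = []  # min-heap keyed by priority: lowest-priority packet pops first

def enqueue(priority, packet):
    heapq.heappush(queue, (priority, packet))
    if len(queue) > QUEUE_LIMIT:
        dropped = heapq.heappop(queue)        # discard the least important data
        print("dropped:", dropped[1])

enqueue(1, "background file sync")            # low priority
enqueue(5, "XR motion update")                # high priority
enqueue(5, "voice frame")
enqueue(4, "game state delta")                # queue full -> file sync is dropped
print("kept:", [p for _, p in sorted(queue, reverse=True)])
```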

There is nothing wrong with this logic in itself. Under Metaverse conditions, however, people’s subjective experience may matter more and more, so some scholars argue that QoS management should give way to Quality of Experience (QoE) management as the standard for setting transmission priorities. From a purely technical standpoint, for example, transmitting a machine command may matter more than transmitting a game signal, so under QoS the machine command should pass first. For the user, though, this may not hold: for many people it makes little difference whether a machine executes a task a few minutes early or late, whereas a game signal arriving a few milliseconds late greatly degrades the experience. On the QoE view, the game signal should pass first.

Of course, the Metaverse will host many scenarios, from life to work, and both QoS and QoE management will find their markets. How to switch between the two management modes as the scenario changes may become an important question for network design under Metaverse conditions.

Blockchain Technology

Blockchain is a very important technology in the Metaverse. Numerous Metaverse projects, including The Sandbox, Decentraland and Axie Infinity, have adopted blockchain as the technical foundation of their economic and governance systems.

Strictly speaking, blockchain is not a single technology but a collection of technologies. Its ideas trace back to the foundational paper published by Satoshi Nakamoto in 2008. At first, “blockchain” was just a metaphor for the technology supporting Bitcoin; later, as the Bitcoin architecture spread, the name stuck and gained currency. Today, blockchain usually refers to a decentralized infrastructure and computing paradigm: it uses a cryptographically chained block structure to verify and store data, distributed consensus algorithms among nodes to generate and update data, and automated script code (smart contracts) to program and operate on data.
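A minimal sketch of the chained block structure (in Python; block contents are invented, and consensus and smart contracts are omitted): each block stores the hash of its predecessor, so tampering with any block breaks the chain.

```python
import hashlib, json, time

def make_block(data, prev_hash):
    block = {"time": time.time(), "data": data, "prev_hash": prev_hash}
    block["hash"] = hashlib.sha256(
        json.dumps({k: block[k] for k in ("time", "data", "prev_hash")},
                   sort_keys=True).encode()).hexdigest()
    return block

chain = [make_block("genesis", "0" * 64)]
chain.append(make_block("alice pays bob 5", chain[-1]["hash"]))
chain.append(make_block("bob pays carol 2", chain[-1]["hash"]))

# Verification: every block must reference its predecessor's hash.
ok = all(chain[i]["prev_hash"] == chain[i - 1]["hash"] for i in range(1, len(chain)))
print("chain valid:", ok)   # True; editing any earlier block makes this False
```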

By integrating the chained structure, distributed consensus algorithms and smart contracts, blockchain becomes a very powerful toolkit. By its nature, its operation does not depend on a centralized coordinator: it enables peer-to-peer interaction, secures that interaction even among strangers, and protects user privacy and data security as far as possible. All of these properties make it a natural fit for the “free association of people with people” in the Metaverse.

IoT and Robotics

When we speak of the Metaverse today, we mostly picture a virtual world standing opposite the real one. AR, VR and the artificial intelligence discussed above all revolve around this virtual world. But a narrative that keeps the virtual and the real separate is clearly not satisfying.

Imagine watching a food show in the Metaverse: visual VR can already render the food and its cooking with lifelike realism, and if we wish, olfactory VR can even simulate its smell. The atmosphere is perfect and our appetite thoroughly whetted, yet the food is, in the end, fake; however hungry we are, we can only look on. How can this regret be remedied? This is where technologies such as the Internet of Things and robotics come in.

The Internet of Things is, as the name suggests, the internet of things. Through information sensors, radio-frequency identification and other devices and technologies, it collects information about objects in real time and connects them over whatever networks are available, achieving interconnection between things and things, and between people and things, and enabling intelligent identification and management of objects and processes. The IoT rests on several pillar technologies. The first is radio-frequency identification (RFID), which uses radio signals for contactless identification and information collection. The second is the sensor, which automatically extracts information about an object in real time. The third is the embedded system, which can be built into a controlled object so that the object acts on the instructions it receives. Combined with infrastructure such as communication networks and the cloud, these technologies extend the interconnection of people to the interconnection of everything.
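As a small sketch of how the three pillars fit together (in Python, fully simulated; the device tag, temperature readings and threshold are invented, and a real deployment would send these messages over a network):

```python
# A (simulated) sensor tags readings with an RFID-style identifier, and an
# embedded controller reacts to the instructions derived from them.
import random, time

def read_sensor(tag_id):
    """Sensor pillar: extract information about a tagged object in real time."""
    return {"tag": tag_id, "temp_c": random.uniform(18.0, 30.0)}

def embedded_controller(reading):
    """Embedded-system pillar: the object acts on instructions it receives."""
    if reading["temp_c"] > 26.0:
        return f"{reading['tag']}: start cooling fan"
    return f"{reading['tag']}: idle"

for _ in range(3):
    reading = read_sensor("RFID-0042")        # RFID pillar: contactless identity
    print(embedded_controller(reading))       # would travel over the network
    time.sleep(0.1)
```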

If the Internet of Things becomes fully widespread, then when we see food we crave in the Metaverse, we can send cooking instructions to a nearby robot through the IoT; the robot prepares the dish according to its program and delivers it to us. If this virtual-real interaction can be achieved, the Metaverse will no longer be a merely virtual world but part of the real world we live in.

Beyond such visions of the future, the interaction among the Metaverse, the Internet of Things and robots already has many practical applications. In industrial enterprises, for example, large pieces of equipment such as robotic arms can be hard to operate because of their shape; with AR coupled to the Internet of Things, they can be controlled far more easily.

Epilogue

In every era, people have had their own imagination of, and experiments with, the Metaverse, and in every era these have differed. Behind the difference lie differing technical conditions: at any time, people can build the Metaverse only with the technology at hand, and practice can proceed only within the constraints that technology imposes. In this sense, understanding the status and trends of the technologies related to the Metaverse is crucial for judging the direction of its development.

We propose the BIGCHINA analytical framework, which identifies eight categories of technologies most relevant to the development of the Metaverse. In our view, the state of these eight categories will play a decisive role in whether the Metaverse can be realized, and their trends will shape the direction of its development.

So, if you want to understand the Metaverse at a fundamental level, you might as well spend a little more time and follow BIGCHINA!


Title: “The Hitchhiker’s Guide to the Metaverse” Authors: Chen Yongwei and Lu Linyuan, Publisher: Shanghai People’s Publishing House

About the Author

Chen Yongwei

Director of the Research Department of “Comparison” magazine; his main research areas are industrial economics, the digital economy, antitrust, and regulatory economics. He has published more than 60 academic papers in Chinese and English journals and hundreds of articles in newspapers and magazines, and has won the “Financial Research” annual best paper award and excellent paper award, as well as the “Economic Observer” best column award. Author of “Blockchain General: 111 Questions About Blockchain”.

Lu Linyuan

Professor at the University of Electronic Science and Technology of China; his main research area is complex-network information mining, including navigation, mining, recommendation and prediction over massive information. Winner of the National Natural Science Foundation of China’s Outstanding Youth Fund and a Sichuan Province young-talent honoree; deputy director of the Alibaba Complex Science Research Center. In 2018 he was named to MIT Technology Review’s “35 Innovators Under 35”. Author of “Reinvention: The Structure of the Information Economy”.

