Why is it said that whoever masters artificial intelligence masters the Metaverse?

It may be the Internet's biggest wave of the next 10 years

Today I want to talk with you about the sparks that will fly when the Metaverse meets AI.

As our previous video explained, the Metaverse is the next-generation Internet with VR as its medium. It highlights the extreme sense of presence and immersion brought by mature VR display and interaction technology, infinitely blurring the line between real and virtual and shrinking the distance between people.

When it comes to combining the Metaverse with AI, many people probably think of sci-fi movies, such as the Wachowskis' classic "The Matrix" series and last year's "Free Guy".

In The Matrix, AI rules the Earth: human consciousness is imprisoned in a virtual world while bodies serve as biological batteries in cultivation pods. The AIs there are not merely all-powerful; they also mutate and evolve inside the virtual world. The villain Agent Smith, for example, begins as a sentient piece of antivirus software, keeps learning and evolving in the virtual world's ocean of digital information, strikes back against the Matrix itself, and even crosses into the real world.

In Free Guy, the game NPC Guy suddenly develops human-like self-awareness one day, breaks away from his scripted role to roam freely through the virtual world, and falls in love with the real-world heroine.

That is to say, whether in the dark apocalyptic ending where an AI virtual world enslaves humanity, or the bright ending where a game NPC develops humanity's most precious and beautiful emotions, the creators of these films agree on at least two points:

First, the virtual world will become rich and colorful because of AI, and will gain a certain ability to develop and iterate on its own, independent of humans.

Second, AI will become more powerful once freed from the shackles of the real world, and may even affect the real world in turn.

So will a future like this become a reality?


Before delving in, I think it's necessary to briefly explain what the thing we now call AI essentially is; otherwise it would be a bit awkward to discuss what it can and cannot do. Friends already very familiar with AI can skip this section.

The AI that left the deepest impression on everyone is probably AlphaGo, which defeated Lee Sedol in March 2016. It is like the crystal ball in a myth: tell it the current board position, and without your knowing how it calculates internally, it outputs a winning move, reducing human masters like Ke Jie to tears.

In just a few short years after AlphaGo, AI has expanded into every corner of our lives. Most of the videos we watch are pushed to us by AI, the dubbing in many videos is generated by AI, the things we buy are recommended by AI, our graphics cards and phone cameras use AI-accelerated processing and recognition, and even the background music playing right now was composed by AI.

And just as the chip has its basic unit, the transistor, which outputs 0s and 1s, AI also has its basic unit: the perceptron.


In 1957, Frank Rosenblatt, a psychologist and engineer at Cornell University, came up with the idea of ​​a perceptron, inspired by nerve cells.

The shape of a nerve cell is very tree-like: in addition to the main cell body, its membrane has many short branch-like protrusions called dendrites, plus one long "tail" called an axon.

Dendrites receive stimuli from external inputs and convert them into bioelectric signals. Once these signals are integrated and exceed a certain threshold, the impulse is conducted along the axon, and neurotransmitters are secreted at its terminal to pass information to the next cell; otherwise, no information is passed on.

Therefore, a neuron's output has only two states, "0" or "1" in mathematical terms. The human brain then wires more than 10 billion such neurons into a neural network, where each neuron's output serves as the input of the next for further processing. Repeat this over and over, and you ultimately get human intelligence.

Referring to the four steps of input, processing, classification, and output, the perceptron simulates a similar structure.

Each input signal is multiplied by a corresponding weight, which corresponds to a dendrite's stimulus input. The summation simulates how the nerve cell integrates bioelectric signals, and the activation function decides whether the accumulated value reaches the threshold: if it does, the perceptron outputs 1; otherwise it outputs 0.
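This weighted-sum-plus-threshold structure can be sketched in a few lines (the input names and numbers here are illustrative, not from any real dataset):

```python
# A minimal sketch of a single perceptron.
def perceptron(inputs, weights, bias, threshold=0.0):
    # Weighted sum of inputs simulates the integration of dendrite signals.
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Step activation: fire (1) only if the integrated signal exceeds the threshold.
    return 1 if total > threshold else 0

# Two hypothetical inputs, e.g. "face squareness" and "head size", scaled 0..1.
print(perceptron([0.9, 0.8], [1.0, 1.0], bias=-1.5))  # 1: fires
print(perceptron([0.1, 0.2], [1.0, 1.0], bias=-1.5))  # 0: stays silent
```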

It's like seeing a cat and knowing it's a cat. That feels trivial to me; the problem is I don't know how I judged it. My guess is that the moment my eyes see the cat, they collect many points of information, such as its size, its color, the shape of its face, the length of its tail, and so on, and my brain weighs them all in an instant and concludes: it's a cat.

As long as these information points are specific and measurable, digitizable parameters like size and color can be converted into the computer's input signals. The weighted summation is then analogous to my brain's instant of deliberation, finally producing a classification judgment of cat or not-cat and outputting a 0 or a 1.

To put it bluntly, the perceptron is an imitation of human nerve cells.

How exactly does it work?


For example, suppose we see a photo of a person and want to determine whether it is me. If you are detailed enough, you could find 10,000 of my characteristics, but to keep the discussion simple, let's express just two indicators, face squareness and head size, as numbers and let the perceptron judge.

Say a perfectly round face scores 0 and a face as square as a Minecraft character scores 1; no head at all scores 0 and a head as big as the cartoon character Big Head Son's scores 1.

Take these two inputs as the two axes of a plane coordinate system, find some people's photos, record who they are along with the quantified indicators of face shape and head size, and plot them in the coordinate system.

Obviously, we can draw a line to distinguish me from people like Zhou Dongyu with a high probability, because I have a big head and a square face, while female stars like Zhou Dongyu are just the opposite.

For a computer, this line is ax+by+c=0.

If the result of the weighted summation is greater than zero, it is above the line, and it is me. If it is less than zero, it is Zhou Dongyu.

Of course, in reality, it is impossible for us to use only two parameters to judge who this person is. To make a more accurate judgment, many dimensions of input are needed, such as body shape, facial features, makeup, and so on.

Introduce the dimension of makeup and the axes become three, forming a three-dimensional coordinate system in which the dividing line becomes a plane.

Continue to introduce a fourth and fifth input and it becomes hard to visualize; it can only be expressed mathematically, as what's called a hyperplane. But that doesn't matter: it's no problem for the computer, which can still use formulas to split a multi-dimensional space in two.

At first glance this seems a little dumb, but the essence of the perceptron is that it doesn't need me to tell it where to draw the line: through so-called "learning" it can find the exact place to draw the line by itself, which is what sets it apart from every machine or tool before it.

Sticking with me versus Zhou Dongyu: before any sample points are entered, I can draw a random line on the plane and declare that above the line is me and below is Zhou Dongyu.

But after entering the actual data, we find that some of Zhou Dongyu's photos appear above the line, say because her face looks square from certain angles. That's no good, so we lift the line up a bit, look at the distribution of all the points again, and judge again. If it still doesn't work, move it and judge again, and so on, until all the photos are correctly classified and the perceptron has gradually become accurate.

From the computer's point of view, the action of moving the line is the process of adjusting the weights a, b, and c.

For example, in this perceptron we can compute a function of the sum of the distances from the misclassified points to the line. This function depends on the weight parameters, that is, on a, b, and c, and it is called the "loss function": the larger its value, the more outrageous the classification errors.

For example, here is how the loss function changes with the weight parameter a, the one corresponding to face shape.

There is a way to move it: suppose the weight currently sits at point A. Each time, we shift it by the derivative at A multiplied by a predetermined value, called the step size, and repeat this process until all the photos can be correctly classified: I am me, and Zhou Dongyu is Zhou Dongyu.
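This "slide downhill by the derivative times a step size" rule is plain gradient descent, and it can be sketched on a toy one-parameter loss (the loss function here is made up purely for illustration):

```python
# A minimal gradient-descent sketch on a one-parameter loss.
def gradient_descent(loss_grad, a, step_size=0.1, iterations=100):
    for _ in range(iterations):
        # Move against the slope: downhill on the loss curve.
        a = a - step_size * loss_grad(a)
    return a

# Toy loss L(a) = (a - 3)^2, whose derivative is 2*(a - 3); minimum at a = 3.
grad = lambda a: 2 * (a - 3)
a = gradient_descent(grad, a=10.0)
print(round(a, 3))  # converges to 3.0
```

Too large a step size overshoots the valley and diverges; too small a one crawls. Picking it well is part of the training craft.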

Because the input weights are adjusted by propagating the error at the output back through the network, this method is called "backpropagation".

However, in the 1950s and 1960s, hardware computing power was limited and the network structures were too complex, so this self-learning theory could not be realized at the time; the real backpropagation algorithm would have to wait a long while.

Looking back now, the perceptron is undoubtedly one of the greatest inventions of the twentieth century, and even of human history, because it ended the era in which machines could only mechanically execute specific human instructions and opened the era in which machines find their own way to complete their master's task.

However, the perceptron theory drew fierce opposition at the time from another artificial intelligence heavyweight, Marvin Minsky. Minsky even wrote a book called "Perceptrons" to criticize Rosenblatt, arguing that his research was of little value.

Of course, one can suspect Minsky of envy and jealousy, but his attack was well-founded:

He argued that the perceptron's principle was too simple to solve certain problems, such as "exclusive-or (XOR) logic".


The XOR logic problem sounds like a hassle, but in fact it just means this: the world contains not only me and Zhou Dongyu, but also Jack Ma, whose face is squarer than mine and whose head is bigger than mine, and Lei Jiayin, and even Yao Ming, whose head is truly big and whose face is truly square.

Obviously, if the perceptron can only draw one straight line, there is no way to separate me from all these people with different combinations of face shape and head size.

Reality is complex and cannot be simply split in two with one line. Minsky had pointed at the most critical spot, and this wave of ridicule was so effective that artificial intelligence research descended from the perceptron stagnated for about 30 years.

In the 1970s and 1980s, as information technology developed, people gradually realized that although a single perceptron has limited capability, adding two more perceptrons is equivalent to drawing two more lines, right?

As long as the perceptron has enough layers, no matter how complex the classification problem, continuous training with the backpropagation algorithm can find an optimal solution.
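Even Minsky's XOR counterexample falls to just two layers. Here is a sketch with hand-picked weights (no training, purely to show that stacking perceptrons draws the two lines a single one cannot):

```python
def step(x):
    # The perceptron's step activation: 1 above threshold, else 0.
    return 1 if x > 0 else 0

def xor_net(x1, x2):
    # Hidden layer: two perceptrons, i.e. two lines in the plane.
    h1 = step(x1 + x2 - 0.5)   # fires when at least one input is 1 (OR)
    h2 = step(x1 + x2 - 1.5)   # fires only when both inputs are 1 (AND)
    # Output layer combines them: OR but not AND, which is exactly XOR.
    return step(h1 - h2 - 0.5)

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, "->", xor_net(a, b))
```

In practice backpropagation would find weights like these by itself; they are fixed here only so the structure is visible.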

Moreover, we need to note that solving the classification problem does not mean that we can only do classification. In fact, many, many problems can be solved based on classification.

For example, judgment and classification are logically the same thing.

A smart camera judging whether the person at the door is a company colleague, or whether a passerby is a known COVID-19 patient, is dividing people into two classes: yes and no.

Under clear rules, predicting what happens next is essentially also classification. In chess, it means classifying the current line of play into "will lose" or "will win".

Once you can predict, you can make decisions: sweep floors, play chess, recommend content, even command and dispatch.

So artificial intelligence entered the "multi-layer perceptron" era in the 1980s and 1990s. In theory, the AI of that time was already very powerful and could solve a great many problems.

But it is still constrained by two key factors:

One is computing power: running backpropagation over tens of millions of parameters demands a huge amount of computation.

The other is data: things like my photos and Zhou Dongyu's photos, which need massive labeling before they can serve as learning material for the AI.

Only once these two key resources were in place could artificial intelligence really hit the road!


From 1965 to 2016, chip computing power grew exponentially along Moore's Law, and the booming Internet industry of the past 20 years brought explosive growth in data volume, giving the development of artificial intelligence fertile soil.

In addition, scientists have also developed a series of methods to deal with these two problems.

Deep neural networks, convolutional neural networks, Monte Carlo tree search, and so on all try to handle more complex problems with less computing power.

Take deep neural networks: the idea is to split the information points we extract into more layers by dimension. To recognize a person, for example, the eyes, nose, and mouth each have their own features; the facial features form a face; and the face, torso, and limbs form a person. Separating these features into different layers instead of mixing them all into one calculation improves the system's efficiency. To tell me apart from Yao Ming, if all the parameters are mixed in the same layer you might conclude that we are quite similar, but pull out just the parameters representing body shape and we are not similar at all.

Another example is the convolutional neural network. Convolution is an operation that extracts certain feature values from a matrix. I was deeply impressed by it in college, because it was so troublesome to compute by hand that I wondered which idiot had invented such a boring thing, only to find out later that it is genuinely useful.

For example, to tell two people apart from photos, we should first compare the people, right? If your AI spends ages analyzing and comparing the backgrounds, the computing power is completely wasted. Likewise, a smart parking lot must first find where the license plate is before reading its number, and a smart shopping mall must first find where the people are before identifying customers. This is an ability we humans are born with: the ability to focus.

So how do we give the computer this ability to find the focus? We can introduce a so-called convolution kernel, usually a 3×3, 5×5, or 7×7 matrix. Scan it across the image from start to finish, multiply the image values by the corresponding kernel values, and sum the results, and you get a smaller image with a certain feature extracted. Based on these features, we can then judge which parts are the key points and which are scraps.
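The scan-multiply-sum loop can be sketched directly (the tiny "image" and edge-detecting kernel here are made up for illustration):

```python
# A minimal 2D convolution sketch: no padding, stride 1.
def convolve2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            # Multiply the kernel with the patch under it, then sum.
            s = sum(image[i + di][j + dj] * kernel[di][dj]
                    for di in range(kh) for dj in range(kw))
            row.append(s)
        out.append(row)
    return out

# A 4x5 image whose right side is bright, scanned by a vertical-edge kernel.
image = [[0, 0, 0, 9, 9]] * 4
kernel = [[-1, 0, 1]] * 3
print(convolve2d(image, kernel))  # zeros on the flat part, spikes at the edge
```

The output is smaller than the input and lights up exactly where the dark-to-bright boundary sits, which is what "extracting a feature" means here.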

In the 2012 ImageNet Large-Scale Visual Recognition Challenge, a deep convolutional neural network called AlexNet took first place in classifying 1,000 kinds of objects with an error rate of only 15.3%, about 11 percentage points lower than the runner-up. Machine learning algorithms based on convolutional neural networks became famous in one stroke.

This was a milestone in the history of artificial intelligence. The convolutional neural network is a general technique for using computing power more efficiently, and it is now basically standard equipment for AI.

Monte Carlo tree search is likewise a method for judging where the focus is and where the scraps are. The AlphaGo that played Lee Sedol was built on Monte Carlo tree search, and the so-called "divine move" Lee Sedol played in game four actually landed in the scraps: the scraps were the neural network's data blind spot, which is why AlphaGo fell apart.

On the other hand, AI's biggest development bottleneck still comes from data. At present, the pace of each related industry basically tracks how hard its big data is to acquire: projects where data is easy to get, like smart content recommendation and smart product recommendation, are particularly successful today, while ones like autonomous driving move more slowly.


There are ways to artificially create data to feed a neural network.

For example, so-called reinforcement learning breaks away from human data samples and generates data from the environment and the rules themselves. Chess has clear rules, and the ultimate goal is to capture the opponent's king. Based on that, the AI predicts the next most suitable move, then compares the actual outcome with the predicted value to optimize its own parameters, so there is no need to feed in many human game records.

In chess, top human players can calculate about 10 moves ahead. In 1997, IBM developed a chess AI called "Deep Blue".

Deep Blue could look 12 moves ahead, traversing every possible line at each step, equivalent to brute-forcing 12 moves with raw computing power, and it beat Garry Kasparov, then ranked number one in the world.

Of course, under the current computing power, this brute force method will be relatively powerless for particularly complex problems such as Go and StarCraft.

Then there are multi-agent methods: build two or more AIs and let them battle each other in an ever-escalating contest.

Combining reinforcement learning with multi-agent training, we sometimes find that much human experience is unreliable, and it is better to let the AI compute and play against itself from scratch. For example, AlphaZero, whose Go ability came entirely from self-play, thoroughly crushed the AlphaGo that had learned from human game records.

There is also a particularly interesting method called the GAN, the generative adversarial network. If previous AI's main job was judgment and decision-making, the GAN, developed in 2014, gave AI a fairly strong creative ability.

Didn't we just say we can build an AI discriminator network to judge and classify a class of things, and build multi-agent systems where several AIs compete against each other? Then can't we flip the idea and pit a discriminator network against a so-called generator network?

It works like this. I build a picture-generating network and first feed it some random parameters; the generator produces some initial pictures, a bit like pinching a game character's face from data. But at the beginning the generator is clueless, and what it produces looks like nothing at all.

Then we label our existing photos of beautiful women as real and the generator's misshapen output as fake, and use these two classes of labeled images to train the discriminator network's parameters, so that it learns by itself where to draw the line separating the two kinds of pictures.

Next, the generator network improves itself according to the feedback the discriminator gives on its results, and the beauties it draws look more and more like the real thing.

As a result, the discriminator's accuracy drops, forcing it to improve its discriminating ability, which in turn forces the generator to produce more realistic photos to fool the discriminator, looping round and round in an escalating contest.

In the end, the discriminator can no longer tell the difference at all, and its success rate stabilizes at 50%. At that point we say the generator network can pass fakes off as real.
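The tug-of-war between the two networks comes down to two opposing losses. A minimal sketch of the standard GAN objective (the scores 0.1 and 0.5 are made-up discriminator outputs, and a real system would update network weights from these losses):

```python
import math

def bce(prediction, label):
    # Binary cross-entropy: the penalty for scoring `prediction`
    # when the truth is `label` (1 = real photo, 0 = generated fake).
    eps = 1e-12  # avoid log(0)
    return -(label * math.log(prediction + eps)
             + (1 - label) * math.log(1 - prediction + eps))

def discriminator_loss(d_real, d_fake):
    # The discriminator wants real photos scored 1 and fakes scored 0.
    return bce(d_real, 1) + bce(d_fake, 0)

def generator_loss(d_fake):
    # The generator wants its fakes to be scored as real (label 1).
    return bce(d_fake, 1)

# Early in training: fakes are obvious (scored 0.1), the generator suffers.
print(discriminator_loss(d_real=0.9, d_fake=0.1), generator_loss(0.1))
# At equilibrium both scores hover at 0.5 and neither side can improve.
print(discriminator_loss(d_real=0.5, d_fake=0.5), generator_loss(0.5))
```

The 50% equilibrium in the text is exactly the point where both scores sit at 0.5: the generator's loss is as low as the discriminator will ever let it get.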

The generator network is like the underdog protagonist at the bottom of the class in a shonen anime, and the discriminator network is like the genius rival born with a cheat ability.

We can even modify the details or style of the fake paintings by adjusting the input parameters of the generative network.

For example, turning an ordinary horse in a picture into a zebra, or turning my photo into an abstract oil painting or even the style of Van Gogh or Monet, plus the AI face-swapping we all love. In other words, we can not only generate Liu Yifei's expressions from every angle, but also relax the conditions a little and generate a beauty in Liu Yifei's general style.

And this creative ability is not limited to drawing pictures: writing articles, poems, and scripts, composing music, and making animation are all possible.


Having come this far, let's take stock of AI's capabilities: it can judge, it can decide, and it can create. And these are exactly what the Metaverse needs!

We said back in the first episode that in the traditional PC and mobile Internet, our world was actually hit by dimensionality reduction. Whether it's browsing a shop, chatting idly with people, watching a play in a theater, or listening to storytelling in a teahouse, all these 3D scenes were compressed into a 2D web page or app page to fit the screen. The downside is that there is no sense of presence; the upside is that it is relatively simple to implement.

In the Metaverse they will expand back into three dimensions, and the corresponding workload also rises by a dimension.

Take our three most-used kinds of apps as examples. Social: WeChat and Soul. E-commerce: Taobao, JD, Pinduoduo. And short video.

For social interaction in the Metaverse, we chat face-to-face in the virtual world, which is subversive compared with socializing through a dialog box. To be blunt, many introverted people, myself included, can't keep typing on WeChat with someone we don't know well: you don't know their background or hobbies, and you don't know what topic to open with.

But as long as we meet and sit down together, we can still chat, however awkwardly. In an awkward moment we discover that everyone is interested in the recent Manchester City vs. Liverpool match, or in the new Batman, and that's how the conversation starts; we can even watch a movie while chatting.

But the premise is that I need an avatar of my own, right? And it has to be a reasonably refined avatar that doesn't look out of place in a Metaverse scene, like the avatar from Jensen Huang's keynote. You need your own avatar too, right?

For a formal business meeting I may need an avatar close to my real appearance, with at most some beautification. At home there's no need to be so formal, and I can use any image I like, say Batman; in other scenarios, like going to war together, I could be a giant robot. So everyone will need several different avatars.

Character models at that level of detail can only be sculpted by artists in ZBrush or Blender over months, and they are expensive, costing tens of thousands of dollars. Exquisite virtual influencers like Liu Yexi cost millions to make, and even a good in-game face-pinching job costs several thousand. And if 2 billion people worldwide enter the Metaverse ten years from now, each person will need an average of ten avatars.

Who is going to make avatars for these two billion people?

Then there's e-commerce in the Metaverse, where a store is really a store, not a page. Say I want my virtual self to try on a pair of shoes in the Metaverse. A rough try-on would be meaningless, because I still have to decide whether to buy; it would be as useless as today's AR shoe try-on features. What good are those? I think a pair of shoes looks great, try them on virtually, and still don't want to buy them. So it can't just look right: my virtual human's foot size and the force structure of its feet must match my real feet, and the virtual shoes must match the real shoes, so that I can test whether they actually fit well.

Who will build so many deep digital twins of people and products?

Another example is short video. In the current mobile Internet era, we short-video users pride ourselves on being diligent and eager to learn: learning how to dress, learning yoga, learning makeup, and especially loving to learn dance. I don't think this part of human nature will change much as platforms change.

How do you learn to dance in the Metaverse? You sit there while the dancer dances beside you, and by voice interaction you can say "next one" and switch. And it isn't just the person who needs switching, but also the dance scene, the furnishings in it, the cats and dogs in it. Where do we get so many short videos, so many scenes, so many props, and so many cats and dogs every day?

It can only rely on AI; this advanced infrastructure of the Metaverse can only be completed by AI.


Then, once we have finely detailed 3D avatars in the Metaverse, and scenes like shops, cafés, and playgrounds have been built out in 3D, the service staff and NPCs in these places must also behave like people, right? If the avatars are lifelike and the scene is natural, walking up to greet a beautiful woman only to have a dialog box pop up over her head would be far too jarring.

So in the Metaverse we need very human-like AI to provide various services.

And these are all happening.

For example, on the "does not exist" family of webpages: these cats, these rooms, these portraits. You open the page as if browsing people's selfies; the only difference is that these people were generated by AI and do not exist in our world.

When 2D characters, scenes, and objects can be generated this well, 3D is not far off. The PIFuHD algorithm, for example, can generate a 3D model of a real person directly from a 2D picture, automatically filling in the unseen back side.

But that's not enough: at this stage the young lady you get is just a sculpture, and you have to make her move. Hence I2L-MeshNet, which can generate a skeleton and motion-ready 3D mesh from 2D images and take in motion parameters, so the young lady really does walk up to you.

And this still isn't the end, because our virtual lady in the Metaverse must be more versatile: she needs to interact, that is, achieve something like the effect in Free Guy.

Some friends of mine are working at rct AI, a Beijing-based team. One thing they are doing is using AI to automatically generate character behavior in games, such as this "bank robbery" demo, where the AI decides for itself how to respond to the player: sometimes it caves in, and sometimes it plays tough with the robbers.

If it can do that with players, then swap the scene: doesn't training a maid- or butler-type AI become feasible?

In fact, on the service side, OpenAI's GPT-3 can already not only chat with people but even write code to order.

In other words, when we create in the Metaverse a few years from now, we will most likely play the role of a director or high-level designer, while the concrete work is completed by AI.

In today's anime and game works, interacting with a virtual young lady or young man still takes a lot of imagination on your part. But once you understand where AI development stands today, you will find that every unique beauty you fantasize about is, one line of code at a time, approaching reality.

This is why AI is unquestionably part of the infrastructure supporting the Metaverse. Without AI helping humans manufacture all kinds of digital goods and serve as NPCs, the application scenarios and richness of the Metaverse would be severely limited. The 3D zone can't do without Tifa, but it can't be only Tifa either.

And what I think is more interesting is that the Metaverse will also be an important thrust for AI to take off.


As mentioned just now, AI needs vast amounts of data to feed on. The biggest obstacle to the AI industry now is that the fields where big data is easy to obtain are too limited, while fields where it is hard to obtain, even those with huge demand, develop slowly, autonomous driving being one example.

Of course we can train autonomous-driving algorithms in computer simulations, but one key factor is missing: people. Even if we can perfectly simulate road and vehicle conditions, how do we simulate the bizarre behavior of other drivers and pedestrians on the road?

The Metaverse can not only provide far more data; more importantly, it is a 3D virtual world inhabited by huge numbers of real humans. In such a world AI's application scenarios expand enormously, and real-world AI products can use their Metaverse versions to first climb to 80 points, then go the rest of the way in reality.

For example, we all know that the character expressions used in 3D films and games are very hard to make. To look fresh and real, you have to capture real people's expressions, which requires actors and professional teams with professional equipment; otherwise the result looks very fake.

In the Metaverse, we will naturally want our virtual selves to present our own expressions as we laugh and curse. As we said in a previous episode, Facebook's next-generation VR headset, Project Cambria, has built-in expression sensors, so Facebook will obtain big data on its users' expressions.

Here I'll venture a prediction: within a year of Project Cambria's release, the problem of fake-looking AI virtual-human expressions will be completely solved.

Another example is the self-driving cars we just mentioned. In the Metaverse, we could hold a road rally set in Beijing, where the audience can walk onto the track, throw things onto it, or even drive their own cars onto it. A self-driving AI trained in that scenario would have an absolutely explosive ability to handle emergencies.

I once pitched an advertising idea to a friend who works on autonomous driving: don't spend all day tweeting, Musk-style, that your AI's accident rate is one-fifth of a human driver's; it's useless, nobody will believe it. Just shoot a commercial. On a dark, rainy night, the protagonist's limousine glides into a vast, gloomy urban area. Suddenly, several cars lying in ambush switch on their headlights; the enemy wants to run him off the road and kill him, and a car chase begins.

But after a long stretch of ramming and crashing, the pursuers find the protagonist's car handles too deftly to hit, and they can only watch him break through. In the final shot, the car cruises down a sunny, open road; the camera pushes in, and there is no one in the driver's seat. The company's logo then appears in the center of the screen.

If a self-driving AI can pull that off, users will naturally get the message: we can relax and play on our phones, because driving is no longer our job.

Another example: many people actually want a general-purpose, human-like AI robot that can help us with all kinds of tasks, like Ryan Gosling's replicant in Blade Runner 2049. In reality, such a robot is limited not only by the level of intelligence but also by the hardware; to put it bluntly, the body is hard to build. It took Boston Dynamics thirty years to make a robot walk like a human.

In the Metaverse, there is no such restriction: a robot needs only a model and some code to move freely. When an AI trained there is exported and installed in a real mechanical body, the boundary between human and machine will be thoroughly blurred.
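Exporting a virtually trained AI into a physical body is the sim-to-real transfer problem, and one common trick for it is domain randomization: train the controller across many randomized virtual "bodies" so it still works on an imperfect real one. A toy sketch under invented numbers (not any real robotics framework):

```python
import random

random.seed(1)

# Toy sketch of domain randomization for sim-to-real transfer.
# A one-step "balance" task: a controller with a fixed gain corrects an
# error of 1.0 on a body of unknown mass. All values are illustrative.

def simulate(gain, mass):
    """Error remaining after one correction step on a body of this mass."""
    error = 1.0
    correction = gain * error / mass
    return abs(error - correction)

def evaluate(gain, masses):
    return sum(simulate(gain, m) for m in masses) / len(masses)

# Train across 200 randomized virtual bodies instead of a single one.
train_masses = [random.uniform(0.8, 1.2) for _ in range(200)]
candidates = [g / 100 for g in range(50, 201)]  # gains from 0.50 to 2.00
best_gain = min(candidates, key=lambda g: evaluate(g, train_masses))

real_mass = 1.1  # the "real robot", never seen during training
print(best_gain, simulate(best_gain, real_mass))
```

Because the gain was chosen to work across the whole randomized range, it still keeps the error small on the unseen real body, which is exactly the property needed when a Metaverse-trained AI is dropped into physical hardware.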

As the next-generation Internet, the Metaverse is the culmination of human information technology, and the data it transmits and generates is bound to grow exponentially. AI will benefit from this ocean of data, and its pace of evolution will far exceed today's.

I often hear the claim that artificial intelligence can only perform mechanical operations, and that human imagination and creativity can never be replaced by it.

After the analysis above, you can see that this claim is completely wrong.


In fact, whether it is mechanical labor like sweeping floors and driving screws, competitive play like chess and video games, or creative work like painting and composing music, fully developed artificial intelligence will do all of it better than humans, and ever more cheaply: an all-round advantage.

If the routine of getting anything done is to first form a desire, then find a way to satisfy that desire within the capabilities and resources at hand, and finally make a decision and act,

then what artificial intelligence lacks is not ability or method; what it lacks is the first link in that chain: desire.

In other words, AI cannot be the final decision maker; it only supplies the executive power to reach a goal. It is, so to speak, just a tool, and that is the fundamental difference between humans and AI. Anything an AI does must be driven by someone; if an AI one day really kills a person, you will eventually find a human instigator behind it.

AI will bring a crushing advantage in productivity. Next to AI's near-infinitely replicable productivity, our current automation technologies, and the 996 and 007 schedules that squeeze workers, will look like drizzle.

AI will bring a crushing advantage in combat power. Faced with sky-blackening swarms of intelligent drones, even the most formidable human pilots become quaint.

AI will make humans freer, because it amplifies each individual's capabilities. At the same time, it will concentrate society's value creation at the top: when AI infinitely amplifies the decision-making power of high-end players, when anyone can get Zhang Yimou to direct their gala, low-end players will no longer find work. This is also the fundamental reason so many people are proposing universal basic income: for the vast majority of people, labor value as we currently understand it will cease to exist.

The development of artificial intelligence cannot be stopped, and it is bound to reshape society. What really matters is who controls the artificial intelligence industry.

Just like the Industrial Revolution: almost everyone alive today lives better because of it, but as it unfolded, some people rode the wind upward, others became stepping stones for development, and some peoples simply vanished. As Chinese people who have studied modern Chinese history, we all understand what that means.

This is why the AI industry is a field we must do our utmost to master and lead, and that is indeed our national policy. "Artificial intelligence" is among the most frequent terms in the 14th Five-Year Plan; if I remember correctly, it appears 18 times.

What happens after that depends on whether we can keep moving forward without forgetting our original aspiration.

Posted by: CoinYuppie. Reprinted with attribution to: https://coinyuppie.com/why-is-it-said-that-whoever-masters-artificial-intelligence-masters-the-metaverse/

