What is the state of Metaverse technology now? What kind of device and content support does the Metaverse need? Why will qualitative change come in 2030? The answers may be found in Ma Xiaoyi's talk.
On April 24, Tencent Senior Vice President Ma Xiaoyi shared his views on the Metaverse, VR/AR, and related topics at a Fudan University School of Management alumni meeting.
Amid the Metaverse boom, Ma Xiaoyi takes a conservative view of what will change in the short term of two to three years. Although many specialized technologies have seen breakthroughs, the general-purpose device that the Metaverse era requires (analogous to the home PC that sparked the computer revolution, or the smartphone that drove the mobile Internet) has not yet appeared. From both a technical and a commercial standpoint, its invention and popularization remain some distance away.
But he has strong confidence in the longer-term, ten-year outlook. On one hand, the Metaverse concept has gained more technical support over the past two years and is resolving its bottlenecks in many ways; on the other, society's understanding of it is also advancing, moving from a concept confined to games and software toward one spanning many scenarios. The point at which the Metaverse will undergo qualitative change, he believes, is 2030.
The Metaverse has been very popular recently, but in retrospect, this is not a new topic.
In 1997, a game called Ultima Online introduced a then-novel concept: a world that keeps moving. Under this concept, the game world continues to exist and develop even when no one is online to interact with it, which is entirely consistent with today's concept of the Metaverse.
It was also roughly between 1995 and 1998 that various Internet companies got their start, thinking about how to bring real-life scenes and relationships online.
The term "Metaverse" comes from a 1992 novel. It last surfaced in the business world in a 2018 article by an American analyst titled: What is Tencent's dream?
The article analyzed why Tencent invests in Epic Games, Roblox, Reddit, Snapchat, and various other content and platform companies. When the author put this investment portfolio together, he decided it was best described by a single word: Metaverse.
With Roblox going public and Facebook renaming itself Meta, the Metaverse became an increasingly hot concept. Why has such an old idea suddenly caught fire? I think it is because a great deal has changed in the past few years.
First, computing power has reached a critical point. There was a wave of VR in 2016, and I was conservative and pessimistic at the time. After talking with many people in the industry, we felt that the technology required to truly support a good experience on VR devices at large scale was far from mature.
For example, the mobile computing chips of that time simply could not drive high-resolution displays, and combined with the limitations of batteries and the display panels themselves, the experience was poor.
Second, some key new technologies were not yet in place. I pay close attention to many upstream key technologies, even to progress in component production, and we can already see some roadmaps becoming clear: these key technologies will see breakthrough development in the next few years.
Third, and I think more importantly, Internet penetration has also reached a critical point. Especially after two years of a global pandemic, working from home is becoming mainstream: if a technology company does not offer employees the option of working from home, it loses its appeal. Everyone has grown accustomed to living a large part of life through the network.
Finally, consider the user's point of view, the point of view of demand. As a species, humanity's defining trait is collaboration. As society as a whole becomes more complex, so does the technology it needs, and when society sees that a technology has a chance to meet those needs, it pushes that technology toward a major breakthrough.
Based on the points above, the first attempt, Second Life in 2006, was obviously too early. Even in the first wave of VR in 2015 and 2016, I think it was too early.
Today may be the right time.
So at this meeting, let me talk about our vision for the future Metaverse. To be honest, I personally hold some views that differ from much of the outside media coverage; for example, I think it is too early to talk about the details of the Metaverse.
After all, what the Metaverse should look like and what parts it should consist of remain difficult to define clearly and precisely.
However, we can already regard the Metaverse as a possible way of integrating future technologies and applications, and as a collection of our imaginings about future human-computer and human-human interaction.
I will mainly discuss the future of the Metaverse in terms of immersive experience and content development.
Immersion: After 2030, the Metaverse may begin to spread
First, immersion. I see two main directions: (1) the human-machine interface and interaction methods; (2) the experience problems of VR and similar devices.
When the Metaverse comes up now, people can't help associating it with VR, for two reasons. On one hand, Facebook recently made a very big bet on VR; on the other, realizing the Metaverse concept will inevitably require a new class of device such as VR.
Why? Ordinary people input information to computers through keyboards, mice, touchscreens, and similar devices. From the computer's technical point of view, the bandwidth of these channels is very narrow, and the information that can be input is very limited. The computer can only give feedback based on that input; there is no way to feed richer information in directly.
However, as input bandwidth increases, we can add many more dimensions.
For example, more cameras and sensors can be added to a VR device, so that environmental information is also fed into the computer. The device can track the state of the human body; the latest technology achieves tracking in six degrees of freedom (movement forward/back, left/right, and up/down, plus orientation), including head position and eye direction, making interaction with the surrounding environment easy.
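To make those six tracked dimensions concrete, here is a minimal sketch of a 6DoF pose as a data structure: three translational axes plus three rotational ones. The type and field names are illustrative only, not any headset SDK's actual API.

```python
from dataclasses import dataclass

@dataclass
class Pose6DoF:
    """Minimal 6DoF head pose: three translational plus three rotational axes."""
    x: float = 0.0      # left/right translation, meters
    y: float = 0.0      # up/down translation, meters
    z: float = 0.0      # forward/back translation, meters
    pitch: float = 0.0  # looking up/down, degrees
    yaw: float = 0.0    # turning left/right, degrees
    roll: float = 0.0   # tilting the head sideways, degrees

# A user stepping half a meter forward while glancing 30 degrees to the left:
pose = Pose6DoF(z=0.5, yaw=-30.0)
print(pose)
```

Real tracking stacks usually encode rotation as a quaternion rather than Euler angles to avoid gimbal lock; the Euler form above is just easier to read.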
One of the most noteworthy areas is hand input. The latest Oculus Quest 2 still relies on handheld controllers for input, but the industry consensus is that the more natural interaction is to use the hands directly.
The same progress happened on mobile phones: early phones were operated with a stylus, until Apple argued that this was counterintuitive and that human intuition is to touch the screen with a finger. The same principle applies to VR devices, so much recent progress has gone into capturing finger movements.
In addition, some of the latest technologies focus on face tracking and expression capture. We all know that when people communicate, the other party's reaction is crucial feedback.
For example, in our current online meeting, my experience is actually not good: I can't see everyone's expressions, so I don't know whether I am speaking too fast or too slow. Offline, I can see the audience's reaction immediately. So in the future, we will also need face tracking to capture people's real-time expressions and project them into virtual scenes.
These new input methods, integrated together, widen the information bandwidth and can greatly change what computers receive as input.
There are also some very exciting technological developments.
For example, we recently looked at the display units in VR headsets. If you have tried the Oculus Quest 2, you know it uses Fast-LCD technology. But that technology has a problem: the screen-door effect. With the device so close to your eyes, you can see the individual pixels, as if looking through a screen door. This is essentially the same problem as the low-resolution phone displays of years past.
We all know what happened next: one reason for the iPhone's success was the Retina display concept introduced with the iPhone 4, which greatly improved the experience of using a phone screen.
VR has now developed to the point where new technologies are emerging in response to these problems. For example, 4K-resolution silicon-based OLED displays, already on production lines, can provide high enough resolution for VR headsets.
How do you achieve a Retina-like effect on a VR headset? There is now a standard: how many pixels fall within each degree of the human viewing angle. Roughly speaking, at about 30 pixels per degree you can no longer perceive pixel boundaries, and at 60 pixels per degree you cannot see pixels at all. By this measure, a 4K silicon-based OLED display is enough for a very good experience.
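The pixels-per-degree figure is simple arithmetic: per-eye horizontal resolution divided by horizontal field of view. A rough sketch, where the panel widths and the 90-degree FOV are illustrative assumptions rather than official specifications:

```python
def pixels_per_degree(horizontal_pixels: int, fov_degrees: float) -> float:
    """Rough angular resolution: per-eye horizontal pixels spread across
    the horizontal field of view (ignores lens distortion and subpixel layout)."""
    return horizontal_pixels / fov_degrees

# Assumed figures for illustration only:
print(round(pixels_per_degree(1832, 90)))  # roughly 20 PPD: pixel boundaries still visible
print(round(pixels_per_degree(3840, 90)))  # roughly 43 PPD: past the ~30 PPD threshold
```

By this estimate, a 4K-wide panel clears the roughly 30 PPD "can't see pixel boundaries" bar, while the 60 PPD "can't see pixels at all" bar would need about 5,400 pixels across a 90-degree view.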
Brightness is another area of change for VR headsets.
From today's silicon-based OLED displays to future Micro-OLED displays, ever higher brightness can be provided. LCD brightness used to be only about 400-500 nits; now we see 2,000 nits, and on development roadmaps we have seen 10,000-nit devices in the works.
This addresses a major bottleneck: improving display-unit technology increases the user's immersion. Some finished products are already on the market.
But there is another very important direction for the future: folded-optical-path (pancake lens) technology.
Previous VR devices were thick, heavy, and bulky, and uncomfortable to wear on your head. With a folded optical path, the headset's display module can be made very thin, much closer to the feel of wearing ordinary glasses.
Another debate is VR versus AR. In VR, all content comes from the display unit; in AR, content must overlap and combine with the real environment.
Looking at the industry now, everyone is pursuing the same approach: still presenting all content on a display, while adding outward-facing sensors that capture the user's surroundings and superimpose them on the screen (in effect, video passthrough). There is also an industry rumor that around the beginning of next year, Apple will release such a device, using folded optics to achieve combined VR/AR effects.
In summary, the technology roadmap for the human-machine interface is developing clearly. Apart from the many limitations of batteries, almost every other aspect has a solution, and all of them are in progress.
This is why we have confidence in the development of the Metaverse and VR/AR.
Another point: to make the experience good enough, hardware support inside the device is not sufficient; software development is needed as well.
It should be noted that "software" here means technology, not content. We boil it down to three dimensions:
1. A believable environment: Metaverse scenes are becoming more and more realistic, allowing users to believe this is a real world.
2. Believable characters: humans are deeply social creatures; we need to interact, communicate, and collaborate in groups. For that, what is most needed is a believable character. I'm sure everyone has some experience of this.
3. Believable interaction: take online meetings again. Their current form is more like a lecture than a discussion. Offline, people cut into the discussion and ideas collide, but these things are not available online today.
To make interaction more realistic, you need many details convincing enough to make people believe. For example, if I hand you a bottle of water or shake your hand, I can feel its temperature and weight; these are interactions that traditional computers and tablets cannot provide at all. But technology has now made a lot of progress in this area.
To sum up, as the technology develops, we need to provide many tools to companies building the Metaverse and VR/AR, and at the same time help everyone create more believable worlds and character relationships.
Content volume: Who will provide enough content?
This is quite a big topic, because we still don't know what the Metaverse needs to contain. But for now, we see several dimensions: dedicated content and general-purpose content.
I am more pessimistic about this part. Although the technology is developing along its roadmap, it needs more time to mature, and moving from dedicated to general-purpose will take even longer.
When a technology has not yet reached its tipping point, it usually serves only relatively specialized, dedicated applications, such as games, film, and television. General-purpose technologies, like the phones everyone uses, now support thousands of different uses.
Roughly speaking, when I talk with internal teams, I generally use 2030 as the baseline. Before then, the Metaverse will remain at a dedicated stage; after that, it may have the opportunity to move toward a more general-purpose state and begin to challenge the existing usage scenarios of computers and phones. For now, it is positioned more as a new, supplementary scenario.
The difference between dedicated and general-purpose may need more explanation here.
About a month and a half ago, I met with Tim Sweeney, CEO of Epic Games. When we discussed this issue, my conclusion was: our current games are just "games", and when you watch a video, what you watch is just a "video". But the Metaverse is different: you should "live" in the Metaverse. There is an essential difference between the two.
For example, when you go to an online cinema, it is a dedicated world, separate from reality. But the Metaverse integrates everyone's life into the virtual world, so it needs enough content that you would want to live in it.
"Enough" here can be broken down into several routes:
1. PGC, professionally generated content. For example, companies such as Tencent make films, games, and TV series. This is professional production, and PGC remains a very important part of the Metaverse.
2. UGC. That is, user-generated content.
3. The combination of the virtual and the real. Much of the technology just discussed is about integrating the real world into the virtual world.
These three parts are all very critical content components.
So how to achieve it?
First, large-scale PGC content has seen some recent breakthroughs. Those who follow new games know that a while ago Epic Games released a very striking demo: a virtual city built to coincide with the release of the new Matrix film. The city covers roughly 10-20 square kilometers and contains 30,000 citizens, along with heavy traffic and many buildings.
The key is that the vehicles have their own behavior: they stop at traffic lights and stop when they see pedestrians. Moreover, each of the 30,000 citizens has different clothing, behavior, and appearance. In the past, creating a virtual world on this scale would have been an almost impossible task.
But today, with the development of technology, these impossible tasks are gradually being realized: on one hand, technical capability is growing; on the other, there are better methods for solving these problems.
For example, we have two projects underway with Epic Games. One uses AI to import the real world into the engine. We went to a valley in New Zealand, about 10 square kilometers, took 7,000-8,000 photos there, and imported them into the engine, which reconstructed roughly 90% of the valley. With some manual fine-tuning, we can generate a very large, very realistic-looking world in a short time.
The other project is a virtual-human project, called Siren in China. Epic Games later also released MetaHuman, a highly realistic digital human. This all goes back to the concept mentioned earlier: a believable person is an important component of a virtual world.
In the past, it also took a great deal of time to make a virtual character.
You may have heard the story: many Hollywood films need a virtual human, and it used to take several months to render a 10-second clip offline. Now rendering takes seconds and achieves about 95% of that quality.
All of this gives large-scale production a better path. The same goes for areas such as motion capture: with the help of AI, creating content becomes much easier.
The second part is UGC, where my views differ somewhat from the outside world's.
For example, when outsiders talk about UGC, the first thing they mention is decentralization. On this point I have had many exchanges with Roblox CEO David Baszucki (we participated in Roblox very early, investing heavily from around 2015). When we talk about the key to UGC's success, we both point to the community's principles and transparency, and especially to maintaining order and state, which requires strong centralized execution.
But this does not mean I disagree with the outside view. I believe decentralization should mean the decentralization of capability: giving tools, capabilities, and resources to users so that everyone can create their own content without a centralized platform. What supports that content is what I just described: a sustainable, principled, long-term stable platform. In my view, that is the most important point in making UGC work.
The third point, the combination of the virtual and the real, largely goes back to the new Metaverse and VR/AR technologies just discussed. The goal we hope to achieve is consistent with the earlier efforts of the Internet and of human society. Humans have always wanted to break physical limitations, as with airplanes and automobiles: eliminating physical constraints, reducing physical losses, and improving the efficiency of large-scale collaboration.
So in the long run, Metaverse is more about how to introduce more external, real-world services and content to break the physical boundaries.
More realistic issues
Finally, I want to talk about the problems of Metaverse and VR/AR.
Although what I just said sounds optimistic, I am still somewhat pessimistic at heart.
When I talk with people, I usually put it this way: over the short term of one to three years, I am more pessimistic than most people; over the long term of ten years, I am more optimistic than most. I believe the scenarios these new technologies enable will bring great changes to the whole of human society.
From my point of view, these technologies will become available at scale between 2025 and 2027, but for a full rollout we will have to wait until 2030. That is the timeline in my mind.
To give a simple example, price is an important reference. The 4K silicon-based OLED display just mentioned offers a very good experience, but it is too expensive: a single display unit costs as much as an entire Oculus Quest 2. It will take a long time for the price to come down.
Second, there are still some problems with the business model itself. As I said earlier, it is still too early.
If we map our current stage onto past history, there are two reference points:
The popularization of general-purpose devices.
Many young people may not know that the first dedicated device for home use was actually a game console: the Atari 2600, released in 1977. It is comparable to where the Oculus Quest 2 stands today.
The well-known IBM PC entered the market in 1981, but home PCs only became truly popular after Windows arrived: the first version in 1985, large-scale adoption with version 3.x in the early 1990s, and the far more widely known Windows 95 in 1995.
As you can see, that process took at least 6 to 10 years.
The establishment of a business model.
In fact, the Internet appeared around 1990, but the well-known Internet companies, including Tencent and Google, were only founded around 1998. Moreover, these companies only really found business models to sustain themselves around 2004. That process also took many years.
Therefore, in the short term, the Metaverse as a whole still needs several years of cultivation, but the direction of that cultivation is clear, and so is the potential.
Over the last 20 years, the Internet's contribution to human society, the economy, and technology has been plain for all to see. I therefore believe that over the next 20-30 years, an integrated Internet experience with the Metaverse at its core will bring an even greater shock to society as a whole.
Among them, there will be many points that are worth discussing, collaborating, and promoting.
Posted by: CoinYuppie. Reprinted with attribution to: https://coinyuppie.com/tencent-executives-talk-about-the-metaverse-the-time-point-for-qualitative-change-may-be-in-2030/