Xu Lin, Vice President of Strategy at Haima Cloud: The high precision, immersion, and ultra-low latency the metaverse demands call for a brand-new cloud computing infrastructure

Our vision is to provide the metaverse with a cloud-native solution and related infrastructure services.

In 2021, with the listing of Roblox, the first "metaverse stock," this vision of the future, derived from the science fiction novel Snow Crash, gradually became widely known. The metaverse has become the hottest track in investment circles, but building such a grand, high-freedom, highly fantastical virtual world requires solutions from many different fields working together: AI, content-creation engines, content ecosystems, network communication technology, the AR/VR ecosystem, and blockchain. Domestically, the rise of the metaverse concept and the entry of Internet companies and investment institutions such as ByteDance, NetEase, and Wuyuan Capital have given rise to a number of startups across the metaverse ecosystem. In content-creation engines, AR/VR, and AI, the scarcity and complexity of the relevant technologies in China have allowed some companies to obtain extremely high valuations before their products are even finished.

So is the metaverse a bubble or a real future? 36Kr held the Advanced Experience · Metaverse Summit in Shenzhen on November 24. The summit invited many investors, scholars, and industry change-makers, including Professor Cai Weide of Beijing University of Aeronautics and Astronautics, an expert on interchain networks; Xu Lin, Vice President of Strategy at Haima Cloud; Guo Cheng, President of STEPVR; Huang Ziyang, Special Assistant to the CEO of Soul; He Wei, Vice President of Chizicheng Technology; Song Lei, co-founder and CTO of EM3; Li Renjie, co-head of NetEase Fuxi Lab; and Du Zhenglin, head of Tencent's Magic Core market, among many other guests, who discussed new trends in the metaverse industry. With the metaverse and robotics both looking promising, 36Kr will keep a cutting-edge focus on this field and provide comprehensive support for the industry, and with this metaverse coverage in place it also expects to explore new room for growth.

At the metaverse summit, Xu Lin, Vice President of Strategy at Haima Cloud, gave a solo speech. In it, he argued that the metaverse has a defining feature, immersion and ultra-low latency, which raises the question of how to provide a solid technical foundation across experience, effect, and latency. Because virtual and augmented reality impose strict latency requirements, and because the content is fully 3D, the metaverse's demands on computing power and real-time performance far exceed the current state of cloud gaming.

The following is a transcript of the speech by Xu Lin, Vice President of Strategy at Haima Cloud, titled "From Cloud Games to the Metaverse: Cloud-Native Thinking and Exploration":


The topic I am sharing today is "From Cloud Games to the Metaverse."

Let me briefly introduce us first. We are Haima Cloud, and we provide the infrastructure for real-time interactive content. At this stage, the most important representative of real-time interactive content is cloud gaming; in the future, we believe the metaverse will be another very typical representative.

At this stage, we are the largest infrastructure provider of computing power for cloud gaming in China. We focus on enterprise services, so we are barely known on the consumer side, but almost every domestic company doing cloud gaming on the enterprise side is a customer of Haima Cloud. We have a full-stack, self-developed solution from the bottom layer to the top, supporting hybrid ARM and x86 networking, which means we can provide cloud computing and cloud gaming services for any game content.

We have since built the largest cloud gaming edge computing network in the country, covering every province. Our platform's monthly unique users exceed 35 million, and peak concurrent users exceed 250,000, making it the largest cloud gaming computing platform in China.

Our vision is to provide the metaverse with a cloud-native solution and related infrastructure services.

How do we see the relationship between cloud games and the metaverse?

After the metaverse became popular this year, a consensus actually emerged in the industry: in terms of technical form, cloud gaming has the highest similarity to the metaverse. But as cloud gaming practitioners, we have not promoted the metaverse with much fanfare, because from a technical point of view we feel the metaverse still has many problems to solve. What do we actually do, technically, in cloud gaming at this stage? At its core, we move existing games to the cloud, stream them, and let players play through real-time audio and video interaction.

In doing this work, we have to weigh some technical issues, such as: how do we balance picture quality against delay?

The industry knows that cloud gaming took off with the commercial rollout of 5G starting in 2019. But even with 5G's development to date, the low-latency network infrastructure is still not fully in place, so when we do cloud gaming now we face highly complex network environments and have to balance picture quality and delay within them; this is constrained by the capabilities of the underlying network. We also have to weigh experience against cost. Experience comes down to definition and resolution; on cost, the core point is bandwidth consumption. So on the technical side we keep coming back to how to balance image quality, bandwidth consumption, and delay.
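To make the trade-off concrete, here is a minimal sketch of an adaptive-bitrate controller of the kind a cloud gaming stream might use: when round-trip time or packet loss rises, it sacrifices picture quality (bitrate) to keep the stream real-time. This is not Haima Cloud's actual algorithm; the function name and all thresholds are invented for illustration.

```python
def choose_bitrate_kbps(rtt_ms: float, loss_rate: float,
                        min_kbps: int = 1500, max_kbps: int = 12000) -> int:
    """Pick a target video bitrate from network feedback.

    High RTT or packet loss means frames queue up and latency grows,
    so we trade image quality (bitrate) for responsiveness.
    """
    bitrate = max_kbps
    if rtt_ms > 50:            # congestion building up: back off proportionally
        bitrate = int(max_kbps * 50 / rtt_ms)
    if loss_rate > 0.01:       # visible loss: halve to drain queues quickly
        bitrate //= 2
    return max(min_kbps, min(bitrate, max_kbps))
```

On a clean link this sketch streams at full quality; as the network degrades it steps quality down rather than letting delay grow, which is exactly the balance described above.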

The underlying technical foundations for all of this are, first, computing power, and second, the network. At Haima Cloud we do computing power: we build on top of the basic network facilities, deploying edge nodes nationwide and relying on the basic telecom operators to realize the entire technical architecture.

On this basis, we are thinking about a direction called cloud native: how to make a game fit the architecture of cloud gaming from design and creative work through the development process. The industry is currently exploring this direction but has not yet formed much consensus or produced many results. What results we can see in cloud gaming so far are mostly in image precision, resource package size, and compute consumption: a special build may be compiled for cloud gaming to make better use of cloud computing power and give players better service.

However, there hasn't been much change in the design architecture of games themselves. Later slides will also explain why things are in this state.

When we talk about the metaverse, there are all sorts of concepts and definitions. Looking at it from a technical point of view, we first believe the metaverse represents some basic forms of the next-generation Internet. At the metaverse stage, there may be new changes in consumers' entry point to the Internet: the device may be a virtual-reality or somatosensory device, and the entry may be a virtual space, a 3D space, or a mixed-reality AR space that blends the virtual and the real. We see it as a change in some basic forms of the next-generation Internet.

In this process, at the level of upper-layer logic, the metaverse is first of all open. We do not believe any single company has the ability to build a metaverse covering all players worldwide: the computing power it consumes, and the investment needed to produce all of its digital content, should not be borne independently by any one enterprise.

On the basis of openness, everyone will define standards, and participants across a vast industry chain will jointly build the metaverse's content and ecosystem. A few points are especially important. One is virtual identity and sociality: the bottom layer of virtual identity may be NFTs and NFRs, while the upper layer is a digital human, a virtual avatar. This will bring a completely different basic experience of socializing with a virtual identity in a virtual space.

And of course there is the economic system; an economic system is something that cannot be avoided.

From a technical point of view, the metaverse has a very important feature: immersion and ultra-low latency. We have already run into this in cloud gaming, namely how to provide a solid technical foundation across experience, effect, and latency. For the metaverse, because of the latency requirements of virtual and augmented reality and because the content is fully 3D, the demands on computing power and real-time performance far exceed the current state of cloud gaming. From this perspective, we think the infrastructure will face a very big challenge. Besides AI and blockchain, the core pieces of that infrastructure are computing power and the network. How do we supply the underlying computing power and network for the metaverse's immersive content and ultra-low-latency requirements? That is what we are thinking about and exploring.

Of course, we also face cloud nativeness: how to define the infrastructure and the upper-layer ecosystem in a cloud-native context.

We just said the metaverse has immersion, ultra-low latency, very high-precision modeling of the real world, and an immersive experience blending the virtual and the real, so what should it require of infrastructure? Here, infrastructure refers specifically to the part we care about: computing power. We believe the requirements on computing power have these characteristics:

Feature one: collaborative computing. The rendering precision and complexity of metaverse content should exceed the performance of any single graphics card or standalone machine, and even at the pace of semiconductor development it will be impossible in the foreseeable future for a single graphics card to achieve metaverse-level rendering. So there will certainly have to be many algorithmic breakthroughs in cluster-level collaborative computing. This requires not only multi-card setups but collaboration across physical machines, and eventually across physical nodes, on a computing network covering the whole country. Many challenges lie here: at the low level, in the algorithms, and at the engine level.
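One simple form of the collaborative rendering described above is tiling: split each frame into rectangular tiles, render them on different GPU nodes, and composite the results. The sketch below shows only the partitioning step; node names and the tile size are hypothetical, and a real system would also handle scheduling, transport, and compositing.

```python
def partition_frame(width, height, tile, nodes):
    """Assign each (x, y, w, h) tile of a width x height frame to a node,
    round-robin, so rendering work spreads evenly across the cluster."""
    assignment = {n: [] for n in nodes}
    i = 0
    for y in range(0, height, tile):
        for x in range(0, width, tile):
            w = min(tile, width - x)   # edge tiles may be narrower
            h = min(tile, height - y)  # or shorter than the tile size
            assignment[nodes[i % len(nodes)]].append((x, y, w, h))
            i += 1
    return assignment
```

For a 1920x1080 frame with 256-pixel tiles and four nodes, this yields 40 tiles, 10 per node, whose areas exactly cover the frame; the hard problems the talk alludes to (synchronization, cross-node latency, compositing order) live above this layer.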

Feature two: real-time. All rendering and computation must be guaranteed to be real-time, because future metaverse content must be real-time: it is generated in real time, and it will be generated in real time based on user interaction. Any previous industry standard may therefore face great challenges in the metaverse context, because real-time optimization of the whole algorithm stack is a completely different matter for the construction of the underlying architecture.

We believe that, following the requirements of real-time and collaborative computing, future computing power must be a distributed network: one spanning the center, the edge, and even device-side collaboration. We then think this entire infrastructure should be built on the concept of cloud native, because only on that basis can we achieve enough performance and cost optimization to meet what the development of the metaverse will require of the underlying facilities.

In building this infrastructure, we face many key technologies:

First, we think a cloud-native GPU needs to be defined. Existing graphics cards are essentially defined from the standalone perspective of the workstation. Even when today's cards are placed on the server side, their architecture is not specifically optimized for cloud native or for the needs of the metaverse, so they are not optimal in performance or cost. We believe that to meet the requirements of the metaverse's infrastructure and computing clusters, GPU design must be optimized for cloud native from the very start: for example, how can rendering and encoding be handled end to end at the chip level, and how do we eliminate data-movement bottlenecks at the I/O and memory level? These are all questions for the GPU design stage. We are also discussing with domestic GPU teams how to define the future cloud-native GPU for the metaverse.

Second, servers must be highly customized. Considering compute density, power consumption, volume, and so on, they must offer a high enough price-performance ratio to support the business above.

A very important part of the whole infrastructure is the network, because achieving collaborative computing across physical nodes and across IDCs places very high demands on data movement, throughput, transmission, and latency. Existing networks will likely struggle to meet collaborative computing at such scale. This is why operators are planning what they call a computing power network, which, as planned, would build a network base on all-optical switching to interconnect the physical nodes.

Once the hardware layer is built, there is the collaborative rendering we just mentioned at the software layer, along with a series of things involving graphics interfaces, engines, and so on that need to be redefined. On top of that sits AI-based content generation, to solve the problem of how to build and generate the vast amount of 3D content the metaverse's environments require.

Once the whole bottom layer is complete, we also need a set of cloud-native toolkits: development kits in a cloud-native context for the development, usage, consumer, and producer ecosystems, enabling participants across the broad industry chain to develop and build content on top of the metaverse infrastructure.

So at Haima Cloud we are thinking about building an open metaverse platform. We are doing a great deal of internal R&D and planning, and making some technical reserves. We envision the cloud-native open platform for the metaverse as an architecture like this: the bottom layer is computing power and the network, the middle layer is what we call collaborative computing and a collaborative rendering engine, and the top layer is all the applications. In a cloud-native context, we want upper-layer application development to be WYSIWYG. Metaverse content is hard to develop WYSIWYG with the existing workstation model, and its computing needs will exceed what can be supported locally, so we build on distributed computing power and the network, add a layer of general-purpose rendering engine, and then provide the full metaverse toolkit on top.

There are several pieces to this. The first is developer tools. How do we let the whole industry chain participate in building metaverse content, whether generating a digital human's avatar, the scene content of a metaverse, or content that can be reused across other metaverses? We need a toolkit for developers, so they can develop directly in the cloud on top of the underlying computing power.

After development, we should have a set of deployment tools to deploy the application directly to the cloud for consumers to access. We will provide the rendering and streaming tools directly in the background, so that any developed metaverse application can be streamed: rendered in the cloud, with the result pushed straight to the user.

This also involves application updates: every application will iterate and be updated. When updating, how do we deploy across the whole distributed architecture and perform real-time updates under it?
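A common answer to updating a live distributed fleet is a rolling update: push the new build to a few edge nodes at a time so the rest keep serving users. The sketch below only plans the batches; the function name and the batching policy are invented for illustration, and a real system would also drain sessions and verify health before moving on.

```python
def rolling_update_plan(nodes, max_unavailable=1):
    """Split an edge-node fleet into update batches.

    At most `max_unavailable` nodes are taken offline per batch, so the
    remaining nodes keep serving players during the rollout.
    """
    return [nodes[i:i + max_unavailable]
            for i in range(0, len(nodes), max_unavailable)]
```

Taking a three-node region with `max_unavailable=2` as an example, the plan updates two nodes first and the last one afterward, never emptying the region of capacity.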

Finally, after all of this is done, we will define the interface SDK so that the upper business layer and the broader ecosystem can easily access the underlying computing power and network, as well as the infrastructure of all the tools. This is what we are working on. It must also be coordinated with the industry chain, so we are making friends and discussing with all parties in the chain how to build the technology the metaverse needs on an open platform.

Here we also face some questions about the underlying architecture, some specific technical points, which I will take this opportunity to share and discuss with you.

The first is the choice of underlying architecture. We compared many routes and ultimately concluded that we should embrace open standards and the open-source community, so we decided on ARM + Linux + Vulkan as the system on which to build the entire base, covering the chip layer, the hardware layer, the operating system, and the graphics interface. The advantage is that all three have mature open-source communities that keep evolving; we only need to use the power of those communities to push collaborative computing and metaverse technology further.

A more important point here is the deployment of the client-server (CS) architecture under cloud native.

When we consider the cloud-native context here, we are really considering the industry standards for application development as a whole. In the Internet era, for example, everyone gradually abandoned the CS architecture for the browser-server (BS) architecture, because BS is very easy to deploy and very simple to move to the cloud: the entire server side deploys directly to the cloud, and the client is a thin terminal, just a browser. In the mobile Internet era, with the rise of apps, the CS architecture again became a main development model, and games are the typical CS case. In doing cloud gaming, we face the whole CS system: how to offload the client side's computing power to the cloud.

In fact, there is a lot of computing power we can shift by adjusting the division of labor between client and server, so that it is all offloaded onto the server: blockchain computation, AI computation, and so on can all be moved server-side. But one kind of computing power is hard to offload, and that is rendering, because the final rendered output is directly tied to what the user sees on the display. So in cloud gaming, the only way to offload rendering to the server is streaming. The end user must see a graphical interface, and the same goes for the metaverse: no matter how you shift computing power to the server, the end user must see an immersive scene blending the virtual and the real, and at present the cloudification of graphics computing can only be solved with a streaming solution.
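The streaming split described above can be sketched as two roles: a server that runs everything (logic, rendering, encoding) and a thin client that only sends input up and displays decoded frames. All class and method names below are hypothetical, and the strings stand in for real rendered images and video packets.

```python
class CloudRenderServer:
    """Runs game logic and rendering server-side; ships encoded frames."""

    def handle_input(self, event: str) -> bytes:
        # Game logic, physics, AI, and rendering all happen here.
        frame = f"frame-after-{event}"   # stand-in for a rendered image
        return self.encode(frame)

    def encode(self, frame: str) -> bytes:
        return frame.encode("utf-8")     # stand-in for H.264/H.265 encoding


class ThinClient:
    """Keeps no game state: input goes up, video comes back down."""

    def __init__(self, server: CloudRenderServer):
        self.server = server

    def press(self, key: str) -> str:
        packet = self.server.handle_input(key)  # upstream: input event
        return packet.decode("utf-8")           # downstream: decode & display
```

The point of the sketch is the asymmetry: every computation except display has been offloaded, which is exactly why rendering must travel back as a stream.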

So this is how we think about it in this context: when the metaverse moves to the cloud in the future, or when we consider metaverse development in a cloud-native context, how do we solve this problem? Our basic thinking is that whatever we do, the rendering part will ultimately be offloaded onto a distributed architecture, to ensure that rendering compute is close enough to the user to guarantee very low latency. So here we still face distributed deployment and distributed updates, which involve a series of low-level distributed tasks, such as high-bandwidth, high-capacity distributed storage, to solve application updates across the whole edge network.

The third piece is the future cost of the entire bottom layer: what can we do about it? In developing cloud gaming to this point, one of the biggest issues for the business model has been how to save cost, and by customizing the computing architecture we can define a high-performance, cost-effective solution. So where can we reduce the cost of the metaverse? The first thing that comes to mind is collaborative rendering: if there is some content we render only once but everyone can see, then total computing power would not grow linearly; as the number of users increases, compute would grow slowly, letting us control the marginal cost and increase profit.

In practice, though, if we try this now, we find that the compute saved by this kind of algorithm-level sharing may be quite limited. Everyone sees from a different perspective, and in graphics it is hard to claim that a single render solves everything, because the result depends on each person's viewpoint and the position of each person's streaming camera. Can this piece be saved? I think it can, but the savings will not be as big as imagined.
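The marginal-cost argument can be put in a back-of-the-envelope model: if a fraction `shared` of per-frame rendering work can be rendered once and reused by every viewer, total compute grows sublinearly in users. All numbers here are invented; the model is only meant to show why a small shareable fraction yields limited savings.

```python
def compute_units(users: int, per_user_cost: float = 1.0,
                  shared: float = 0.0) -> float:
    """Total rendering cost for `users` concurrent viewers.

    The shared fraction is paid once regardless of audience size;
    the view-dependent remainder (per-viewer camera) scales with users.
    """
    return shared * per_user_cost + (1 - shared) * per_user_cost * users
```

With no sharing, 100 users cost 100 units; if 30% of the work were shareable, the same audience costs about 70.3 units, a real but bounded saving, consistent with the point that per-viewer cameras keep most rendering view-dependent.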

How can we compress this cost in the future? We believe we still need to come back to breakthroughs at the semiconductor level. If, for example, a domestic cloud-native GPU emerges, it will greatly change the current situation in which the entire industry is constrained by two foreign GPU manufacturers. This is a very important point for future cost savings and capacity improvement across the whole infrastructure.

That is what I wanted to share: some simple thinking on our part. We look forward to discussing the evolution of metaverse technology with the whole industry chain.

Posted by: CoinYuppie. Reprinted with attribution to: https://coinyuppie.com/xu-lin-vice-president-of-haima-cloud-strategy-the-high-precision-immersion-and-ultra-low-latency-required-by-metaverse-require-a-brand-new-cloud-computing-infrastructure/