The QQ show of the post-80s generation, the Metaverse of the post-15s generation

Why are tech giants and capital investing in virtual digital humans in batches?


Behind capital's intensive investment in virtual humans, virtual images are trickling down from idol IPs to ordinary people. On the Gartner hype cycle, this could be a key inflection point for applications.

Virtual humans are shaping up to be the first hot trend of 2022.

On January 6, news that ByteDance had invested in the virtual human "Li Weike" pushed the popularity of virtual humans to a new height. Before that, Teresa Teng's "resurrection" on Jiangsu Satellite TV's New Year's Eve concert and Liu Yexi's gain of 3 million followers from a single video had kept stoking the market's enthusiasm for virtual humans.

In fact, since Zuckerberg began hyping the concept of the metaverse in July 2021, technology companies have more or less started laying out metaverse plans. In China, according to incomplete statistics from the Business Data School, there have been 18 core investment and financing events related to virtual humans in the primary market since July 2021.


Enterprises are raising funds frequently, and new virtual figures keep launching: not only short-video IPs like Liu Yexi and Li Weike, but also corporate IPs like Vanke's employee Cui Xiaopan, and celebrity IPs like Taosman.

Even so, many people may still have plenty of doubts about virtual humans.

After all, on the one hand, virtual digital humans have become a new thing that everyone has heard of; on the other hand, what is the difference between today's virtual humans and the QQ show of the past? Do earlier virtual idols and two-dimensional (anime-style) characters also count as virtual humans? How do they differ from today's virtual humans?

Following these questions, we find that there is no clear dividing line between today's virtual digital humans and earlier avatars. And if we really try to draw one, many of today's virtual digital humans may not deserve to be called real "virtual digital humans" at all.

Most "virtual humans" are high-end QQ shows

How to define a virtual digital human?

According to Cheng Weizhong, founder and CEO of Zhongke Shenzhi, anything that can move, show expressions and other behaviors, and interact with real people should be included in the category of virtual humans.

In industry research and academic definitions, the virtual human is also a very broad concept. For example, according to the 2020 White Paper on the Development of Virtual Digital Humans issued by the China Intelligent Industry Development Alliance, virtual digital humans can be roughly divided into service-type and identity-type, which can be further subdivided into AI assistants, substitutes for real-person services, virtual anchors, virtual identities, and so on.


From a more concrete point of view, in the movie "Free Guy" (released in China as "Out of Control Player"), widely regarded as an interpretation of the metaverse, the protagonist played by Ryan Reynolds was originally just a piece of program code. Like all NPCs in today's games, he could only execute preset tasks.

But because he went "out of control", this originally lifeless program began to respond to its external environment, which made him a "virtual human" in the true sense.

What makes something a "human" here is the ability to make independent judgments and decisions, and to act freely within a certain range.

Judging from how most virtual digital humans are actually implemented, take Luo Tianyi or Liu Yexi: apart from an external image that looks human, there is an entire team of screenwriters, directors, and producers behind them. Everything they present to us is performed according to a script, no different from a stereotyped NPC in a game.

So from this point of view, today's virtual humans are not real virtual humans. At most they can be called "virtual puppets", or high-end QQ shows.

From a technical point of view, or from what companies on the current virtual human track are actually doing, what everyone cares about is in fact the "face". What they pursue is how to use technology to more conveniently project ourselves into the online world.

Therefore, the so-called deep learning algorithms and real-time rendering technology that many enterprises now apply are essentially a trickling-down of technology once reserved for film and television special effects, so that every ordinary person can use it to create a movie-grade virtual image online.

From this point of view, the emergence of a real virtual human will certainly not end up as merely an application of images. Like robots and industrial robotic arms, it may need to use its AI capabilities to help people take over simple, repetitive labor.

Take Cui Xiaopan, Vanke's virtual digital person and 2021 Outstanding Employee of the Year. According to what Vanke chairman Yu Liang posted on his WeChat Moments, with the support of system algorithms Cui Xiaopan quickly learned the methods people use to find problems in processes and data, is a thousand times more efficient than humans at various types of receivable and overdue reminders and anomaly detection, and the write-off rate of overdue prepaid receivables she followed up on reached 91.44%.
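
Vanke has not disclosed how Cui Xiaopan's system works. Purely as an illustration of the kind of repetitive overdue-reminder work described above, a minimal rule-based sketch might look like the following; the Receivable fields, the threshold logic, and the sample records are all hypothetical stand-ins, not Vanke's actual setup.

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class Receivable:
    customer: str
    amount: float
    due_date: date
    paid: bool = False


def overdue_reminders(receivables: list[Receivable], today: date) -> list[str]:
    """Flag unpaid receivables past their due date and draft a reminder for each."""
    reminders = []
    for r in receivables:
        if not r.paid and r.due_date < today:
            days_late = (today - r.due_date).days
            reminders.append(
                f"Reminder: {r.customer} owes {r.amount:,.2f}, {days_late} days overdue"
            )
    return reminders


if __name__ == "__main__":
    # Hypothetical sample ledger entries for demonstration only.
    book = [
        Receivable("Tenant A", 12000.0, date(2021, 12, 1)),
        Receivable("Tenant B", 8000.0, date(2022, 1, 20), paid=True),
    ]
    for line in overdue_reminders(book, date(2022, 1, 10)):
        print(line)
```

In a real deployment, logic of this kind would of course sit on top of the company's ERP data and messaging channels rather than an in-memory list; the virtual employee is essentially a front end to such automation.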


Another example is the first AI sign language anchor, created by CCTV News and Baidu Smart Cloud last November, which can not only provide uninterrupted sign language services to viewers all year round, but also take on sign language interpretation at the Winter Olympics.

Therefore, for these virtual digital humans, the key is not merely generating an external image. It lies in the intelligent voice interaction, image recognition, and other technologies behind the "human" image, and in building a decision-making system that meets the needs of the job.

Although many of today's virtual humans already apply these technologies in their production process, they are mostly used to generate a virtual image at lower cost, rather than to receive information, make judgments and decisions, and give feedback, the kinds of applications that would make digital humans genuinely intelligent.

From this point of view, beyond the scenarios that have already appeared, such as virtual anchors, virtual idols, and virtual spokespersons, virtual digital humans will have many more application scenarios, and may even replace or supplement many service positions originally held by humans.

For example, in Netflix's 2018 sci-fi series "Altered Carbon", the hotel where the protagonist played by Joel Kinnaman stays is an artificial-intelligence-themed hotel, where everything from the front desk to the waiters is handled by AI (virtual humans).

These will likely be the roles that real virtual humans need to take on in the future.

(The hotel's AI host in "Altered Carbon")

Who is making virtual humans?

Looking back at the current virtual digital human industry chain, the investment and financing cases above show that the targets capital favors can be roughly divided into two categories.

One is the creation and operation of virtual human IP.

For example, Ranmai Technology, which received millions of yuan in Pre-A financing last June, mainly incubates hyper-realistic digital human storylines and traffic pools, and handles commercial customization, derivatives, and independent brand operation.

Shiyue Xingcheng, which received an angel round led by NetEase last December, mainly develops and operates virtual digital humans, virtual clothing, and other products and content in the fashion field. Its three virtual human IPs, Vila, Reddi, and Vince, have cooperated with brands including Gucci, Max Mara, Air Jordan, and China Li Ning.

Next World Culture has also built a rich virtual image business around virtual digital humans. On the B side, it created the avatars Dili Lengba and Taosman for stars such as Dilraba and Huang Zitao, and the avatars Beco, Huaxizi, and Diandianzi for brands such as I Do jewelry, Huaxizi, and Yili Jindian. On the C side, it has also incubated its own independent virtual human IPs, such as Nan Mengxia, a lively girl in Chinese national style, and Ling, who appeared in the variety show "Colorful Youth".

(Dilraba and her avatar Dili Lengba)

The advantage of these companies lies in their rich experience in IP customization and operation. For example, before Ranmai Technology entered the virtual human race, it had years of experience and capability incubating real-person internet celebrities with over 100 million followers. The virtual digital human AYAYI it created currently has 121,000 followers on Xiaohongshu.

But this also shows that such enterprises still treat the virtual human as a virtual idol when operating it. A professional content team behind it customizes all the content, collaborations, and copywriting, which are finally published in the name of the virtual image.

In essence, the virtual human we see out front is just the concrete embodiment of the entire team behind it.

Different from the enterprises that build and operate virtual human IP, the other type is enterprises that focus on virtual digital human technology.

For example, Shiyou Technology, a full-stack virtual technology provider that completed the first tranche of a tens-of-millions-of-dollars A+ round on January 9 this year, has mastered core technologies such as fast animation and real-time digital humans, and has independently developed "Puppeteer Virtual Factory", a real-time rapid virtual content production system that can output virtual content through real-time motion capture, expression capture, and real-time rendering.

There is also Mofa Technology, which completed hundreds of millions of yuan in Series A financing in June 2019 and focuses on key technologies such as real-time AI performance animation, real-time rendering, and real-time computing. It can not only create and operate virtual humans itself, but has also developed a 3D AI virtual human capability platform that provides end-to-end virtual livestreaming technology services for downstream enterprises.

In November last year, PhaseCore Technology obtained 70 million yuan in financing. Drawing on its long-term accumulation in 3D modeling, animation, and rendering technology, it has independently developed a virtual digital human engine and a hyper-realistic digital avatar platform that provide technical support for virtual human images.

Similarly, Zhongke Shenzhi, which received three rounds of financing last year, focuses on content production technology in the AI+XR field, and has successively launched a number of virtual human production technologies, including a multi-user large-space VR interaction system, an AI digital human interaction system, a virtual idol performance system, a virtual preview system, and an XR shooting system. These can not only help virtual human content producers speed up the production of animated videos, but also help enterprise users quickly launch virtual livestreams.

But as mentioned earlier, even on the technology side, what most companies are doing focuses on reducing the cost of producing virtual images and simplifying the production process.

For example, Doug Roble, technical director of Digital Domain, the company that created Teresa Teng's virtual image, once explained that when they first began digital human R&D, producing a virtual digital human generally required collecting large amounts of data through data capture and motion capture, then building and training a deep neural network, and finally rendering the result with rendering technology.
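
Digital Domain's actual pipeline is proprietary, but the "capture data, train a deep neural network, then render" loop described above can be sketched at toy scale in PyTorch. The landmark and blendshape dimensions, the network shape, and the random stand-in data below are assumptions for illustration only, not the studio's method.

```python
import torch
from torch import nn

# Hypothetical shapes: each captured frame yields 468 3D facial landmarks
# (1404 values), and the target is a vector of 52 blendshape weights that
# a rendering engine could use to animate the avatar.
N_LANDMARKS, N_BLENDSHAPES = 468 * 3, 52

model = nn.Sequential(
    nn.Linear(N_LANDMARKS, 256), nn.ReLU(),
    nn.Linear(256, N_BLENDSHAPES), nn.Sigmoid(),  # weights constrained to [0, 1]
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Random stand-ins for real capture data; a production pipeline would load
# paired (landmark, blendshape) frames recorded in a capture session.
landmarks = torch.randn(1024, N_LANDMARKS)
blendshapes = torch.rand(1024, N_BLENDSHAPES)

for epoch in range(10):
    pred = model(landmarks)
    loss = loss_fn(pred, blendshapes)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# At inference time, the trained network maps newly captured landmarks to
# blendshape weights, and the renderer drives the avatar with them.
```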

(Doug Roble collecting capture data from all angles)

Today, with these earlier workflows and technologies integrated, no professional equipment is required; an ordinary camera is enough to create a virtual image.

However, the AI technologies that would enable intelligent interaction for virtual humans have seen no particularly significant substantive breakthrough since deep learning. Although there have been many in-depth explorations in practical application scenarios, there is still a long way to go before a strong artificial intelligence that can understand and make decisions like a human.

Why is capital optimistic about virtual humans?

Not all virtual humans are real virtual humans, so why is capital still so keen on them? Are today's virtual humans a bubble, or an IQ tax?

First of all, we must be clear that no technology develops overnight, and realizing a real virtual human is bound to be a long-term process. Although virtual human technology has matured to some extent after more than a decade of accumulation, there is clearly still a long way to go.

On the other hand, the market for the current kind of virtual human is already very large and fairly mature.

For example, thanks to the investment of the many companies mentioned above in virtual human production technology, the cost of producing an ordinary virtual anchor has dropped to a very low level.

Take the virtual livestreaming system "Chuangmengyi", launched by Zhongke Shenzhi in July 2020, as an example. The product is priced at 3,588 yuan a year, less than 10 yuan a day on average. The software includes not only more than 100 virtual anchor characters of various styles, but also more than 100 2D and 3D scenes for different livestreaming needs. Users can also build models and scenes according to their own needs, and drive the characters through cameras, sound, and many other means.

(Yunbo Technology's "Little K Live Ji" on display)

On the entertainment side of virtual anchoring, Yunbo Technology not only provides a full-stack 3D virtualization solution, but has also introduced key technologies such as "Little K Video Motion Capture", so that ordinary users can generate a virtual image with one click for free and, without wearing any hardware motion-capture device, achieve high-precision real-time 3D capture of the body, fingers, facial expressions, and more using only an ordinary camera.
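
Yunbo's "Little K Video Motion Capture" is proprietary, but the general idea of markerless capture from an ordinary webcam can be sketched with the open-source MediaPipe library, which tracks body, hand, and face landmarks in real time. This is only an illustrative stand-in under that assumption, not the company's actual pipeline.

```python
import cv2
import mediapipe as mp

mp_holistic = mp.solutions.holistic
mp_drawing = mp.solutions.drawing_utils

cap = cv2.VideoCapture(0)  # an ordinary webcam, no special capture hardware
with mp_holistic.Holistic(min_detection_confidence=0.5,
                          min_tracking_confidence=0.5) as holistic:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB input; OpenCV delivers BGR frames.
        results = holistic.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        # results.pose_landmarks / left_hand_landmarks / face_landmarks hold
        # normalized keypoints that an anchor tool could retarget onto a rig.
        mp_drawing.draw_landmarks(frame, results.pose_landmarks,
                                  mp_holistic.POSE_CONNECTIONS)
        cv2.imshow("markerless capture demo", frame)
        if cv2.waitKey(1) & 0xFF == 27:  # press Esc to quit
            break
cap.release()
cv2.destroyAllWindows()
```

The detected landmarks are normalized coordinates; a virtual anchor product would retarget them onto a 3D character rig and render the avatar instead of simply drawing them back onto the camera frame.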

The application of these technologies has greatly lowered the entry barrier for virtual anchors. According to Bilibili's second-quarter financial report, virtual anchors have become the fastest-growing category in Bilibili live streaming. In addition, according to calculations by the virtual anchor data monitoring platform vtbs.moe, subscriptions and rewards for virtual idols on Bilibili grew 350% year-on-year last year.

Beyond the trickle-down of technology and the lower threshold bringing more players into virtual anchoring, what matters more is that brands increasingly recognize the value of top virtual anchors.

Take the early virtual idol Luo Tianyi as an example. In a 2020 livestream with Li Jiaqi, her slot fee reached 900,000 yuan, far higher than that of Luo Yonghao, likewise a top anchor, whose slot fee was only 600,000 yuan. This also sketches out a possible path for the commercialization of virtual anchors.

In addition, virtual humans have the opportunity to be applied to all aspects of social life. For example, in the movie "Blade Runner 2049", the giant virtual figures seen everywhere have become the main vehicle for advertising and content. In real life, this form of advertising display is now gradually being realized through virtual human technology.

All of the above also gives the virtual human track more room for imagination.

(Image: the movie "Blade Runner 2049")

Finally, many entrepreneurs and investors who look forward to the metaverse also place high hopes on the virtual digital human track, known as the "entrance to the metaverse".

For example, after investing in Next World Culture, Feng Zheng, vice president of Shunwei Capital, said: "Digital humans are the native mode of interaction in the virtual world, and will become the 'APP', 'robot', and 'ID' of the virtual world."

Cheng Weizhong, founder and CEO of Zhongke Shenzhi, also believes that the biggest difference between the metaverse and the mobile internet is that we will shift from text-based interactions that rely on mobile phones and computer systems to personalized exchanges that rely on talking to virtual humans and giving them instructions.

Therefore, the virtual human will become the interface through which users interact with the metaverse in the future.

Virtual humans are to the metaverse what WeChat is to the mobile internet. Whoever can seize the user's entrance to the metaverse and integrate users' social relationship chains there will be able to take the lead in the future metaverse and expand into other fields.

Perhaps the QQ show of the post-80s generation really is the metaverse of the post-15s generation.

It's just that however beautiful the vision, whether virtual digital humans can bear the weight of being the "portal" to the metaverse will take more time to prove.

Posted by: CoinYuppie. Reprinted with attribution to: https://coinyuppie.com/the-qq-show-of-the-post-80s-generation-the-metaverse-of-the-post-15s-generation/