Virtual humans exist in digital form, with human appearance, characteristics, and behaviors, presented as virtual images through display technology. As an important track of the Metaverse, virtual humans can bring it rich content and immersive experiences. A virtual human can be copied, but how do we give each one a distinct “soul”? Let’s start with the logic of industry development.
Four-quadrant classification of virtual humans based on content generation and usage. Along two dimensions, creation/operation and application scenario, we can divide virtual humans into four categories: service virtual humans (PGC + functional), virtual idols (PGC + IP value), digital avatars (UGC + functional), and creative carriers (UGC + IP value). The four types share largely the same production technology but follow completely different development paths in application.
Policy: public media scenes have become a breakthrough point for service-oriented virtual humans. On October 22, the “14th Five-Year Plan for Scientific and Technological Development of Radio, Television and Network Audiovisual” called for promoting virtual anchors and animated sign language in news broadcasts, weather forecasts, variety shows, and science and education programs, and for innovating program formats. This has accelerated experiments with service-oriented virtual humans in media scenes, such as Xinhua News Agency’s virtual reporter Xiaozhen and the first AI sign language anchor introduced by CCTV News, and public acceptance of virtual humans has continued to rise.
Technology: AI lowers the production threshold, and the value of edge computing power stands out. With the support of machine learning, computer vision, and other technologies, virtual humans are becoming ever more refined in facial appearance, motion display, and voice recognition and synthesis, pointing toward high-precision virtual humans in the future. At the same time, with the development of AI algorithms and the emergence of production platforms, the barriers to producing virtual humans keep falling, making them an application within reach of consumers. The market sees the virtual human itself but overlooks the IT support behind it. A statically displayed virtual human needs only modeling and rendering technology to output super-realistic images comparable to real people; dynamic display builds on modeling with animation and voice, requiring motion capture and rendering; interactive virtual humans place heavy demands on edge computing power, which will give rise to new IT requirements.
Business model: virtual idols lead the way, while UGC is the vast long-term opportunity. Virtual idols mainly take the form of virtual anchors (VTubers), chiefly because the entry threshold is low and monetization is relatively easy and fast. The number of virtual anchors on Bilibili has grown steadily, with top anchors reaching monthly revenue of 2 million yuan. Virtual idols are expected to appear in vertical fields across media platforms, but we believe it is UGC creation that truly gives virtual humans their soul.
DAO: a community creation model that gives virtual humans a soul. The famous “Hatsune Miku” is a typical case of community operation and self-generated content. A large number of creators present their music through the persona of “Hatsune Miku,” but the ecosystem now faces an exodus of creators for lack of a business model. A virtual idol ecosystem built on a UGC community can be combined with DAOs under Web 3.0 to produce a new business model in which participants share the dividend of IP growth.
Investment advice: we recommend paying attention to virtual human production and IP operation. 1) Virtual human production: Nvidia, Tencent, Unity, iFLYTEK, Jebsen, Fengshang Culture; 2) Virtual human operation (media coverage): Mango Excellent Media, BlueFocus, Saturday, Zhewen Internet, Alpha Group; 3) Virtual human computing power: American Pixel, ZTE, Eoptolink, Zhongji Innolight, Fibocom, MG Intelligence, etc.
Risk warning: technological innovation falls short of expectations; blockchain policy and regulatory risks; blockchain infrastructure development falls short of expectations.
As the market focuses on the Metaverse track, virtual humans are valued as an important interactive carrier. Over the past few years, with improvements in modeling technology, the production of virtual humans has matured. But is a “human” in the digital world merely an anime character? How can virtual humans be given a distinct “soul”? In this article, we combine the Metaverse, UGC, and DAO to discuss the future development of the virtual human industry.
The virtual human status quo: co-creation between companies and communities; development along functional and IP value lines
Virtual humans have recently surged in popularity. On October 31, the virtual human Liu Yexi debuted on Douyin and gained a million followers overnight. Liu Yexi is not the first virtual human: Tsinghua University’s Hua Zhibing, Douyin celebrity Axi, and the cross-dressing goddess Jiyuanmei all enjoy devoted fan followings across major social platforms. Nearly 400 million people across China follow virtual idols; last year the virtual idol market reached 200 billion yuan, doubling in two years. At Nvidia’s 2021 GTC conference, Jensen Huang showcased the conversational virtual avatar Toy-Me, whose answers were impressive.
What is a virtual human? Virtual humans exist in digital form, have human appearance, characteristics, and behaviors, and rely on display devices to present their virtual images. Simply put, a virtual human is a digital image that lets users perceive a personality. Virtual humans currently play a variety of roles, including virtual anchors, virtual idols, virtual reporters, and virtual assistants.
Virtual humans are an important track of the Metaverse and can bring it rich content and immersive experiences.
The creation and operation of virtual humans rely either on commercial operation by professional teams or on UGC from fan communities, while their application scenarios emphasize either function or IP value. Along these two dimensions, we classify virtual humans into four categories: service virtual humans (PGC + functional), virtual idols (PGC + IP value), digital avatars (UGC + functional), and creative carriers (UGC + IP value). The four types follow completely different development paths in application; in production they share related technologies, though with slight differences.
Service virtual humans provide users with anthropomorphic social services. Compared with chatbots and plain digital assistants, their advantage is that high-precision modeling and artificial intelligence let them take on social-facing work at larger scale. From film and television to finance to games, virtual humans can fill a variety of service roles and provide users with intelligent, efficient, humanized services. Xinhua News Agency’s virtual reporters and CCTV’s virtual sign language anchors both belong to this category: these jobs call for a human image, making them good landing scenes for service-oriented virtual humans.
Virtual idols are virtual images presented through technical means and equipped with their own personas. They can hold concerts, live-stream, sell merchandise, endorse products, and even play roles in film and television dramas. A virtual idol’s IP is operated by a company, with a professional team producing content. For example, the Japanese virtual idol “Kizuna Ai” debuted in 2016; her creative team gave her an endearingly clumsy, “artificially unintelligent” persona, and she reached 400,000 fans within 4 months, releasing works mainly on video platforms. This type of “VTuber” then began to emerge.
Both service virtual humans and virtual idols rely on professional teams for production and operation, the better to enhance interaction between platforms and users. In recent years, the rise of MCNs has given such virtual humans more monetization channels and business models, making them new favorites of media outlets and platforms such as Douyin. On the whole, this type of virtual human is PGC-based: well produced but relatively closed in ecosystem, with production and operation handled by an in-house team or outsourced, so the virtual human’s “soul” is defined by the operator and realized by the production platform.
The virtual avatar evolved from “face pinching” (character customization) in games, bringing users a high degree of immersion. Face-pinching systems catering to users’ individual preferences originated in the single-player game The Elder Scrolls III and have since spread across game genres. Over this long development, the adjustable parameters of face-pinching systems have been greatly enriched, and their complexity has grown with algorithmic refinement. In the future, avatars will not only remain popular in games but will also push social activities online: a virtual avatar strengthens the user’s sense of identification while serving virtual application scenarios.
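The mechanics of such a face-pinching system can be sketched in a few lines: each slider is a morph target that linearly offsets base-mesh vertices. This is a minimal illustration only; the class and parameter names are hypothetical and not taken from any real engine.

```python
# Minimal sketch of a "face pinching" (character customization) system.
# Each slider is a morph target; the final mesh is a linear blend:
# vertex = base + sum(weight_i * delta_i). Names are illustrative.
from dataclasses import dataclass, field

Vertex = tuple[float, float, float]

@dataclass
class MorphTarget:
    name: str
    deltas: list[Vertex]          # per-vertex offset at weight 1.0

@dataclass
class FaceRig:
    base: list[Vertex]
    targets: dict[str, MorphTarget] = field(default_factory=dict)
    weights: dict[str, float] = field(default_factory=dict)

    def set_slider(self, name: str, value: float) -> None:
        # Sliders are clamped to [0, 1], as in most in-game editors.
        self.weights[name] = max(0.0, min(1.0, value))

    def evaluate(self) -> list[Vertex]:
        # Apply every active morph target's weighted offsets to the base mesh.
        out = [list(v) for v in self.base]
        for name, w in self.weights.items():
            for i, d in enumerate(self.targets[name].deltas):
                for axis in range(3):
                    out[i][axis] += w * d[axis]
        return [tuple(v) for v in out]

rig = FaceRig(base=[(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)])
rig.targets["nose_width"] = MorphTarget("nose_width", [(0.2, 0.0, 0.0), (-0.2, 0.0, 0.0)])
rig.set_slider("nose_width", 0.5)
print(rig.evaluate())  # → [(0.1, 0.0, 0.0), (0.9, 0.0, 0.0)]
```

Richer face-pinching systems add more sliders and nonlinear corrective shapes, but the core idea remains this weighted blend of artist-authored targets.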
As carriers of community creation, this type of virtual human has no fixed image or expression. The best-known creative carrier, “Hatsune Miku,” is a voice bank sold by Crypton for the speech synthesis engine VOCALOID, with an anthropomorphic image attached. Because the voice bank produces remarkable synthesized vocals and Crypton opened up the copyright, community creation around Hatsune Miku is very active and has yielded a large number of high-quality works; in 2009, the world’s first holographic virtual idol concert was held. China later introduced “Luo Tianyi,” whose voice bank creators purchase to have Luo Tianyi sing their favorite songs.
Virtual human production: PGC and UGC advance side by side; dual-track development of ultra-realism and low thresholds
The rise of the virtual human track did not happen overnight; decades of technology accumulation have steadily expanded its application scenarios. As early as the 1980s, creators began trying to build personal digital images, but owing to technical limitations, the digital humans of that era were mainly hand-drawn in 2D and saw very limited use. At the start of the 21st century, CG (computer-generated imagery), motion capture, voice synthesis, and related technologies gradually matured; virtual humans began to develop rapidly, and CG-produced digital humans were widely used in movies. In the past five years, thanks to breakthroughs in artificial intelligence, virtual human production has been simplified and made more interactive, entering a fast lane of development. Today, with modeling precision, motion capture, and AI interaction constantly improving, virtual humans have reached lifelike realism and gained the ability to express emotion and communicate.
Even as virtual human production grows more refined, the production threshold keeps falling. Fueled by this technological wave and supported by machine learning, deep learning, computer vision, and other technologies, virtual humans are becoming more refined in facial appearance, motion display, and voice recognition and synthesis, pointing to high-precision virtual humans in the future. Meanwhile, with the development of AI algorithms and the emergence of production platforms, the barriers to production keep dropping. A statically displayed virtual human needs only modeling and rendering technology to output super-realistic images comparable to real people; dynamic display builds on modeling with animation and voice, typically requiring motion capture; interactive virtual humans additionally need AI technology to recognize and respond to user feedback.
1. Virtual human production: computing power is king, and the edge rises
At present, the mainstream approach to virtual human design is scan-based modeling, which divides into two technologies: static reconstruction and dynamic light-field reconstruction. Static scanning still dominates, while high-fidelity dynamic light-field 3D reconstruction is beginning to shine. In the future, dynamic light-field reconstruction will be applied more widely to static virtual human production, improving lighting, shading, and the user’s visual experience; as the technology becomes more accessible, its threshold of use will gradually fall toward popularization.
Platform-based tools support low-threshold creation of high-precision virtual humans. In early 2021, Epic Games released MetaHuman Creator, a tool that generates high-fidelity character images. Starting from pre-made high-quality models, users can easily and quickly customize their own virtual human models. The tool is positioned to let small teams and individuals quickly create the characters they need at low cost, greatly improving art quality while saving creative costs.
Driving a virtual human relies on transferring captured human motion onto the virtual human model. Migrating motions collected by motion capture onto virtual humans is currently the main way to generate 3D virtual human movement. By implementation method, motion capture divides into optical, inertial, and computer-vision-based approaches.
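The core of that migration step, often called retargeting, can be sketched as copying per-frame joint rotations from the capture skeleton onto the avatar’s rig through a joint-name mapping. This toy example uses hypothetical joint names and ignores the bone-length and coordinate-frame corrections real pipelines perform.

```python
# Toy sketch of motion-capture retargeting: per frame, captured joint
# rotations are mapped onto the virtual human's skeleton by name.
# Real systems also correct for bone lengths and coordinate frames.

# One captured frame: mocap joint name -> rotation (Euler angles, degrees).
captured_frame = {
    "Hips": (0.0, 5.0, 0.0),
    "LeftShoulder": (10.0, 0.0, -15.0),
    "Head": (0.0, 20.0, 0.0),
}

# Mapping from the mocap skeleton's joint names to the avatar rig's names.
joint_map = {
    "Hips": "pelvis",
    "LeftShoulder": "shoulder_l",
    "Head": "head",
}

def retarget(frame, mapping):
    """Transfer rotations to the avatar rig, skipping unmapped joints."""
    return {mapping[j]: rot for j, rot in frame.items() if j in mapping}

avatar_pose = retarget(captured_frame, joint_map)
print(avatar_pose["shoulder_l"])  # → (10.0, 0.0, -15.0)
```

Optical, inertial, and vision-based capture differ only in how `captured_frame` is produced; the retargeting step downstream is common to all three.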
Optical and inertial capture are common in professional production. Optical capture is mostly used in professional fields such as medicine, sports, and film. For example, in October 2021, China’s Qingpu Vision teamed up with Huawei to deliver the world’s first 5G + VR live broadcast of a two-dimensional idol, beautifully showcasing a virtual idol’s dance. Inertial capture is also widely used in film and television, better presenting 3D virtual idols and their interaction with users. In the future, as computer technology advances, motion capture is expected to replace inefficient motion record-and-playback approaches and become the mainstream technology for virtual human animation.
Vision-based capture greatly lowers the threshold of use. Visual capture mostly targets the consumer market: basic facial and body capture can be done with a phone’s built-in depth-sensing camera. As virtual idols accelerate their appeal to young people, low-threshold visual capture solutions are likely to become the first choice for UGC creators flooding into the virtual idol track. For example, the Live Link Face app released by Epic can easily capture a user’s facial movements and stream them to the production platform.
Rendering technology divides into real-time and offline rendering: the former is fast and suits games and interactive scenes, while the latter is more powerful and suits scenes demanding high precision. Real-time rendering computes and outputs graphics on the fly, with every frame calculated from the actual light sources, camera position, and material parameters. Early real-time rendering was constrained by the short per-frame time budget and limited computing resources, but with more efficient algorithms and better hardware, rendering speed and fidelity have taken a qualitative leap. Offline rendering does not compute images in real time; its rendering times are long, its quality high, and its computing resources ample.
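The trade-off above comes down to a per-frame time budget: a real-time renderer must finish every frame within a fixed window, while an offline renderer can take minutes per frame. A back-of-the-envelope sketch, with illustrative numbers rather than measurements:

```python
# Back-of-the-envelope framing of real-time vs. offline rendering:
# real-time work must fit inside a fixed per-frame budget; anything
# that cannot is rendered offline (or at a lower frame rate).

def frame_budget_ms(fps: float) -> float:
    """Time available to render one frame at the target frame rate."""
    return 1000.0 / fps

def fits_realtime(render_time_ms: float, fps: float = 60.0) -> bool:
    return render_time_ms <= frame_budget_ms(fps)

print(round(frame_budget_ms(60), 2))  # → 16.67 ms per frame at 60 fps
print(fits_realtime(12.0))            # → True: viable in real time
print(fits_realtime(250.0))           # → False: offline territory
```

This is why interactive virtual humans are so compute-hungry: every millisecond of lighting, shading, and animation work must fit inside that 16.67 ms window, every single frame.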
Virtual human production: computing power is king, and the edge rises. Making virtual humans for the Metaverse will place greater emphasis on edge computing power. As noted earlier, the Metaverse emphasizes blending the virtual and the real, and offline rendering alone is not enough; real-time rendering places extremely high demands on compute. The market assumes most computing power is concentrated in the cloud, but real-time rendering must be solved at the edge, consuming large amounts of edge-plus-terminal computing power, an architecture quite different from the traditional communications computing architecture. Unity has already partnered with Verizon to develop high-speed, low-latency digital solutions covering everything from entertainment applications to enterprise toolkits, precisely because independent engine vendors cannot solve edge computing’s compute problems alone; communications and IT infrastructure providers will play a larger role.
2. Virtual human interaction: AI + real humans, fusing the virtual and the real
Interactive digital humans divide by driving method into AI-driven and human-driven types. Human operation combined with motion capture lets a virtual human interact with the audience in real time. An AI-driven digital human automatically reads, parses, and recognizes external input through an intelligent system, decides its output text based on that analysis, and then drives the character model to generate corresponding voice and motion, enabling interaction with users. The virtual human field is gradually opening up market space: thanks to deep learning, machine learning, computer vision, and natural language processing, virtual humans will gradually integrate customer service, intelligent marketing, and other functions, build strong brand images for customers, and soon become a value breakthrough point for human-computer interaction.
AI-driven virtual humans rely on multiple technologies: speech recognition, natural language processing, speech synthesis, and speech-driven facial animation. In speech recognition, domestic players iFLYTEK, Baidu, Tencent, and Alibaba all have positions. Semantic understanding in natural language processing has progressed more slowly, being several times harder than speech recognition; companies that do it relatively well include Google and IBM. Speech synthesis is already widely deployed, but it often splices pre-recorded fragments, still far from truly autonomous expression. Virtual humans have achieved intelligent synthesis of mouth movements, mainly by learning an association from input text to output audio and visual information: collected text-speech and mouth-shape animation data are used to train a model that can drive mouth-shape animation from arbitrary input text, and the model then synthesizes the virtual human’s mouth shapes.
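The text-to-mouth-shape idea can be illustrated with a drastically simplified stand-in: map phonemes (here faked from letters) to visemes and hold each for a fixed duration, yielding a keyframe track a renderer could consume. The phoneme and viseme inventories below are illustrative placeholders for the learned models used in production systems.

```python
# Highly simplified sketch of text-driven lip sync: each "phoneme"
# maps to a viseme (mouth shape), producing timed keyframes.
# Inventories and timings here are illustrative, not from a real TTS.

VISEME_OF = {
    "a": "open", "e": "wide", "i": "wide", "o": "round", "u": "round",
    "m": "closed", "b": "closed", "p": "closed",
}

def lip_sync_track(text: str, ms_per_phoneme: int = 80):
    """Return (start_ms, viseme) keyframes for the given text."""
    track, t = [], 0
    for ch in text.lower():
        viseme = VISEME_OF.get(ch, "rest")  # default for unmapped characters
        track.append((t, viseme))
        t += ms_per_phoneme
    return track

print(lip_sync_track("mao"))
# → [(0, 'closed'), (80, 'open'), (160, 'round')]
```

Real systems replace the lookup table with a trained model and align visemes to synthesized audio timing, but the output is the same kind of timed mouth-shape track.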
What is the commercial value of virtual humans?
1. Service-oriented virtual humans are expected to develop rapidly
With policy support, media scenes will become a breakthrough point for service-oriented virtual humans. On October 22, the “14th Five-Year Plan for Scientific and Technological Development of Radio, Television and Network Audiovisual” called for promoting virtual anchors and animated sign language in news broadcasts, weather forecasts, variety shows, and science and education programs, and for innovating program formats. Service-oriented virtual humans are accelerating their attempts in media scenes: the virtual reporter Xiaozhen, jointly created by Xinhua News Agency’s State Key Laboratory of Media Convergence Production Technology and Systems and Tencent Interactive Entertainment’s NExT Studios, debuted on June 17, the day the Shenzhou-12 crewed spacecraft launched, bringing audiences an interview from space; CCTV News introduced AI technology to create its first virtual AI sign language anchor, bringing Winter Olympics sign language services to China’s hearing impaired.
At present, the realism of service virtual humans has pushed past the uncanny valley, and application scenarios will open up accordingly. The “uncanny valley” theory holds that humans respond positively to humanoid things until they reach a certain degree of resemblance, at which point the response abruptly turns to strong revulsion: even a slight difference between a robot and a human becomes conspicuous, making the figure seem stiff and eerie. As production technology improves, super-realistic virtual human modeling makes the services virtual humans provide feel more natural. Once technology crosses the uncanny valley’s limit on user experience, and virtual humans become outwardly indistinguishable from real people, application scenarios will gain far more room to develop.
Service virtual humans will bring change to many traditional fields. Creating virtual humans for specific application scenarios can greatly improve the user’s business experience; typical scenes include film and television, finance, cultural tourism, education, healthcare, and retail.
2. The virtual idol market has grown steadily
Virtual idols are currently the most mature commercial application of virtual humans, and the market is growing steadily. They mainly take the form of virtual anchors (VTubers), chiefly because the threshold is low and monetization is relatively easy and fast. In 2016, the virtual anchor “Kizuna Ai” launched on YouTube and gradually spread widely. From January 2020 to June of this year, the number of virtual anchors on Bilibili grew nearly sevenfold, catalyzed mainly by two factors. First, the pandemic: overall growth of the online entertainment market spawned new audience demand. Second, the exit of a head virtual IP: the leading virtual idol group “hololive” withdrew, and the sudden vacancy left fans, the market, and official resources eager for new attachments, giving new virtual anchors a window of opportunity.
Bilibili virtual anchor revenue in November 2021 totaled 54.66 million yuan for the month, with 255,000 paying users. The top virtual anchor, “Carol,” earned monthly income of 2.14 million yuan, including 1.89 million yuan from a single four-hour birthday party live stream.
Virtual idols do not depend on super-realistic production technology, but high-precision production has brought them new modes of operation. With better modeling, super-realistic virtual idols such as AYAYI and Miquela have appeared that are hard to distinguish from real people, allowing more innovation in presentation and business models, such as working as beauty bloggers or models. Early virtual idols were usually two-dimensional images presented through music, animation, and CG, with content that was generally entertainment video.
Metaverse: giving virtual humans a “soul”
1. DAO will empower community creation
The production and consumption of digital content are inseparable from community. The famous “Hatsune Miku” is a classic case of community operation and self-generating content. “Hatsune Miku” was born in 2007, when music software company Crypton released a VOCALOID voice bank wrapped in the “Hatsune Miku” character image and settings. By September 10, 2007, it held about 30.4% of the Japanese music software market, four times the second-place product. The community participates directly in creating value and in sharing and spreading it online: “I support idols” became “I create idols.”
A large number of creators present their music through the persona of “Hatsune Miku,” but the ecosystem now faces creator attrition. “Hatsune Miku” has harvested a wealth of content in different styles, and the musicians providing that content gained attention in return, a win-win that formed a virtuous cycle and is one reason Hatsune Miku’s life cycle has been so long. In recent years, however, as Hatsune Miku’s producers (the “P” creators) have departed, high-quality content has declined and the IP’s influence is under threat, mainly because producers receive no corresponding economic reward for their community creations.
A virtual idol ecosystem built by a UGC community can be combined with a DAO to produce a new business model in which ecosystem participants share the dividend of IP growth. A DAO, or Decentralized Autonomous Organization, is a form of digital-world organization built on blockchain technology, whose rules are executed by distributed programs that align participants’ interests toward shared organizational goals. DAOs have several defining characteristics: transparent information, token incentives, open-source code, community self-governance, participant ownership of the organization, and freedom and openness. A token-economy-based DAO can let creators and fans alike enjoy the benefits of the IP ecosystem’s development.
(For a detailed introduction to DAO, please refer to our previous report “Metaverse (6): DAO in the Operation of the Metaverse.”)
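The token-incentive mechanism described above can be illustrated with a toy calculation: IP revenue is split pro rata among token holders, so creators who hold tokens share the growth dividend. Balances and revenue figures here are hypothetical, and a real DAO would implement this in a smart contract rather than off-chain Python.

```python
# Toy illustration of a DAO's token-based revenue sharing: revenue is
# distributed in proportion to token holdings, so community creators
# who hold tokens are rewarded as the IP grows. All values are made up.

def distribute(revenue: float, balances: dict[str, float]) -> dict[str, float]:
    """Split revenue pro rata by each holder's token balance."""
    supply = sum(balances.values())
    return {holder: revenue * bal / supply for holder, bal in balances.items()}

holders = {"producer_P": 600.0, "fan_a": 300.0, "platform": 100.0}
payouts = distribute(1000.0, holders)
print(payouts["producer_P"])  # → 600.0
```

Under this scheme, a departing producer forfeits future payouts, which is exactly the economic feedback loop the Hatsune Miku ecosystem is missing today.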
2. Virtual avatars and digital identities
Web 3.0 is characterized by trustlessness and decentralization, with users controlling their own data and privacy. Realizing Web 3.0 requires a blend of technologies, including blockchain, artificial intelligence, and the Internet of Things. Among them, blockchain fits the Web 3.0 vision particularly well, which is one reason blockchain is called the “Internet of Value.”
Decentralized digital identity is the core feature of Web 3.0, whose essence is that users control their own identity data. Identity owners can use their identity data wherever needed without relying on a specific identity service provider, while blockchain technology guarantees the security, autonomy, and portability of user identities. Decentralized domain names and NFT avatars, for example, are symbols of digital identity held in users’ own hands and usable across projects.
As symbols of digital identity, virtual humans may in the future be authenticated and used across platforms. Twitter, Discord, Xiaohongshu, and others are already rolling out verification features for NFT digital collectibles, ensuring a unique virtual avatar can interoperate across platforms and truly become a status symbol in the digital world. The 3D model data of virtual humans is likewise expected to be opened up across project platforms, truly realizing digital identity in the Metaverse.
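The cross-platform verification flow above reduces to a simple idea: platforms do not host the avatar themselves, they merely check who owns it on-chain. In this sketch a plain dict stands in for an NFT contract’s owner lookup; the token IDs and wallet addresses are invented for illustration.

```python
# Minimal sketch of cross-platform NFT avatar verification: a platform
# grants the "verified avatar" badge only if the connected wallet owns
# the token on-chain. The dict below simulates the chain's owner state.

nft_owner_of = {            # token_id -> wallet address (simulated chain)
    "avatar#42": "0xAlice",
}

def verify_avatar(user_wallet: str, token_id: str) -> bool:
    """Badge check: does this wallet own this avatar token?"""
    return nft_owner_of.get(token_id) == user_wallet

print(verify_avatar("0xAlice", "avatar#42"))  # → True
print(verify_avatar("0xBob", "avatar#42"))    # → False
```

Because every platform reads the same chain state, the same avatar verifies everywhere, which is what makes it a portable identity symbol rather than a per-platform profile picture.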
Investment strategy: virtual humans in the ascendant; position in the production + operation + edge computing tracks
We expect more and more virtual humans to emerge over the next five years. They will not only be NPCs (non-player characters) in games but will be further endowed with souls in the Metaverse, appearing as NFTs and working with AIGC (AI-generated content), while the operating model upgrades from professional teams to communities. At the same time, virtual humans place higher demands on underlying computing power, and edge computing scenarios (spanning compute, communications, storage, etc.) are growing richer. We recommend paying attention to virtual human production and IP operation; we have sorted out the following names for reference and expect more companies to join this camp:
Technological innovation falls short of expectations: virtual human technology develops more slowly than expected.
Blockchain policy and regulatory risk: blockchain is at an early stage of development, and regulation of blockchain technology, project financing, and tokens remains uncertain in countries around the world, creating uncertainty for industry companies’ projects.
Blockchain infrastructure development falls short of expectations: blockchain is the core technology for solving supply chain finance and digital identity, but current blockchain infrastructure cannot yet support high-performance network deployment, and decentralization and security constrain performance to a degree; there is a risk that blockchain infrastructure develops more slowly than expected.
This article is excerpted from the report “Guosheng Blockchain | Metaverse (7): What Is the ‘Soul’ of Virtual Humans?” published by Guosheng Securities Research Institute on December 20, 2021; please refer to the full report for details.
Song Jiaji S0680519010002 email@example.com
Special statement: The “Administrative Measures for the Suitability of Securities and Futures Investors” formally took effect on July 1, 2017. This WeChat material is intended only for professional investors among Guosheng Securities clients; please do not forward it in any form. If you are not a professional investor among Guosheng Securities clients, then to ensure service quality and control investment risk, please unfollow this account and do not subscribe to, receive, or use any information in it. As access permissions are difficult to set for this subscription account, we apologize for any inconvenience and thank you for your understanding and cooperation.