Lao Huang says the metaverse is called "Omniverse" at Nvidia, not "Meta". Any objections?
The familiar kitchen of the Huang household appeared again, but this time Nvidia did not pull the "virtual digital human" trick. Shortly after the keynote of the GTC (GPU Technology Conference) began on November 9, the kitchen started to blur: the person was real, but the kitchen was fake! The scene then transitioned into the Nvidia headquarters building.
Omniverse technology is behind both the fake kitchen this time and the virtual Lao Huang last time.
Every Nvidia GTC brings surprises. Nvidia's stock price began climbing last week, before the conference even opened; many investors expect the event to highlight Nvidia's opportunities under this year's hot "metaverse" theme. The GPUs Nvidia excels at are to the metaverse what lithium batteries are to electric vehicles. Although Lao Huang did not deliberately stress the metaverse concept in his keynote of more than an hour and a half, many of the new technologies and products at this conference do indeed have the metaverse behind them.
Let me state the conclusion first. Huang's entire keynote was meant to convey one core idea: Nvidia plays a key role in advancing AI across all walks of life. Fleshing out that idea naturally meant showing off Nvidia's own core technology. In short, Nvidia showcased its latest work in enterprise and data-center AI, conversational AI and natural language processing, and edge-AI applications such as robotics, medicine, and self-driving cars.
Using Omniverse Avatar as Lao Huang's AI incarnation
The highlight of the keynote is, naturally, inseparable from what the outside world has been buzzing about. At this GTC, Nvidia presented NVIDIA Omniverse, a virtual-world simulation and collaboration platform for 3D workflows. In fact, Nvidia released a public beta of the Omniverse platform last December, letting creators collaborate in real time in physically accurate simulation and 3D rendering.
Now the Omniverse platform has been upgraded again with the newly released Omniverse Avatar and Omniverse Replicator. Omniverse Avatar is a technology platform for generating interactive AI avatars. It brings together Nvidia's accumulated work in speech AI, computer vision, natural language understanding, recommendation engines, and simulation, opening the door to AI assistants that can help handle billions of daily customer-service interactions. Omniverse Replicator is a synthetic-data generation engine that can continuously produce synthetic training data from existing data.
The portal into Omniverse is USD (Universal Scene Description). Huang Renxun believes the essence of Omniverse is a digital wormhole: in the future, any computer will be able to connect to Omniverse and link one Omniverse world to another. USD is to Omniverse what HTML (the markup language that unifies document formats on the web) is to websites.
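To make the HTML analogy concrete: USD layers are human-readable text, much like HTML pages. The toy Python helper below is my own illustration, not Nvidia tooling; real pipelines use the OpenUSD libraries rather than string templates. It writes out the skeleton of a minimal `.usda` layer:

```python
def minimal_usda(prim_name: str) -> str:
    """Return the text of a minimal USD (.usda) layer with one empty Xform prim.

    Illustrative sketch of the file format only; production code would use
    the OpenUSD (pxr) libraries rather than string templates.
    """
    return (
        "#usda 1.0\n"
        "(\n"
        f'    defaultPrim = "{prim_name}"\n'
        ")\n"
        "\n"
        f'def Xform "{prim_name}"\n'
        "{\n"
        "}\n"
    )

print(minimal_usda("Kitchen"))
```

Just as a browser renders any well-formed HTML, any USD-aware tool can open a layer like this and compose it with others, which is what makes it a plausible "HTML of 3D worlds".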
Huang Renxun is quite ambitious about Omniverse. Lao Huang noted that people often say "the Internet has changed everything." In today's Internet of Everything, the Internet is essentially a digital expression of the world, one that so far covers information at the 2D level: text, voice, images, and video. Now, as technology develops further, information at the 3D level keeps emerging.
In Huang Renxun's vision, many designers and creators will in the future design digital things in virtual reality and the metaverse first, then realize them in the real world, including products such as cars, bags, and shoes. The Omniverse platform released this time has the technology to create new 3D worlds and to model the physical world.
To borrow a piece of Internet slang: everything that exists in the real world, and everything that does not, is worth doing over again in Omniverse. Seen this way, Omniverse is more than a game engine. Huang said Omniverse is designed at data-center scale and may in the future reach global scale, which means Nvidia expects Omniverse to eventually model the physical world for real.
Of course, real-time interaction between virtual items in Omniverse and people in the physical world still faces many challenges. Huang Renxun said: "How to use Omniverse to simulate warehouses, factories, physical and biological systems, the 5G edge, robots, self-driving cars, and even digital twins of avatars is an eternal theme."
Huang Renxun next demonstrated real-time applications built with Omniverse Avatar and Nvidia's other technologies. The first was Project Tokkio for customer support. "Tokkio" is a smart kiosk application; in the demo video, it works in a fast-food restaurant, talking directly with two customers and helping them order.
Tokkio, a smart kiosk application built on Omniverse Avatar
Combining Omniverse Avatar with DRIVE Concierge produces a cute virtual assistant customized for driving.
Another Omniverse Avatar example was Lao Huang's own AI incarnation. Nvidia employees used Lao Huang's voice to build a conversational speech-synthesis AI, "Toy Jensen", a toy AI Lao Huang. It must be said that Lao Huang knows a lot: in the demo video, the toy AI easily answered professional questions from experts in climate, astronomy, and biology.
Huang said: "You will see that this avatar was created with some of the largest language models trained to date. Even the voice was synthesized from my own, and the fine appearance you see is rendered with real-time ray tracing."
Real-time conversation with the AI robot "Toy Jensen", an Omniverse Avatar
Huang also combined Omniverse Avatar with Maxine, a video-conferencing platform, to add audio and video capabilities to virtual collaboration and content-creation applications. In the demo video, a woman speaking in a noisy coffee shop can simply remove the background noise during a video conference. At the same time, her English is translated into multiple languages in real time, and a virtual image is generated whose mouth shapes and intonation match the translated speech.
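The Maxine demo described above can be read as a four-stage pipeline. The sketch below is my own schematic, not Maxine's API; each stub function stands in for a real AI model (noise suppression, speech recognition, machine translation, avatar animation):

```python
def denoise(audio: str) -> str:
    # Stand-in for an AI noise-suppression model.
    return audio.replace("[cafe noise] ", "")

def transcribe(audio: str) -> str:
    # Stand-in for speech recognition (speech -> English text).
    return audio

def translate(text: str, target: str) -> str:
    # Stand-in for neural machine translation; a real system would use
    # a trained model, not this tiny lookup table.
    table = {("Hello", "fr"): "Bonjour", ("Hello", "es"): "Hola"}
    return table.get((text, target), text)

def animate_avatar(text: str) -> str:
    # Stand-in for generating an avatar whose lip shapes and intonation
    # match the synthesized speech.
    return f"<avatar speaking: {text!r}>"

def video_call_pipeline(raw_audio: str, target_lang: str) -> str:
    # Chain the stages: clean audio -> text -> translated text -> avatar.
    return animate_avatar(translate(transcribe(denoise(raw_audio)), target_lang))

print(video_call_pipeline("[cafe noise] Hello", "fr"))
# prints "<avatar speaking: 'Bonjour'>"
```

The point of the sketch is the composition: each stage is an independent model, so noise removal, translation, and avatar generation can be swapped or upgraded separately.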
A seemingly simple AI digital human is in fact not simple at all. Omniverse Avatar is this powerful because of the technological breakthroughs Nvidia has made in recent years; Huang said the functions in the cases above would have been nearly impossible a few years ago. Today, Omniverse Avatar's recommendation engine uses the Merlin solution, which lets companies build deep-learning recommendation systems that handle huge volumes of data; its perception capabilities come from the Metropolis computer-vision framework; and its avatar animation is powered by Video2Face and Audio2Face, AI-driven 2D and 3D facial-animation and rendering technologies.
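As a toy illustration of the recommendation piece (my sketch of the general principle, not Merlin's actual API): a deep-learning recommender ultimately scores items by comparing a learned user embedding against learned item embeddings, which is how a kiosk like Tokkio could suggest what to order:

```python
def score(user_vec, item_vec):
    # The final step of many recommenders is a dot product between a
    # learned user embedding and a learned item embedding.
    return sum(u * i for u, i in zip(user_vec, item_vec))

def recommend(user_vec, catalog):
    # Rank (name, embedding) pairs by score, best match first.
    return sorted(catalog, key=lambda kv: score(user_vec, kv[1]), reverse=True)

# Hypothetical embeddings; a real system learns these from billions of
# interactions rather than hand-writing them.
user = [0.9, 0.1, 0.0]
catalog = [
    ("burger",  [1.0, 0.0, 0.0]),
    ("salad",   [0.0, 1.0, 0.0]),
    ("noodles", [0.5, 0.5, 0.0]),
]
print(recommend(user, catalog)[0][0])  # prints "burger"
```

The hard part Merlin addresses is not this dot product but learning good embeddings from massive interaction data and serving them at low latency.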
Omniverse Replicator, the veteran driver behind autonomous driving
The other product Huang Renxun released in his speech, Omniverse Replicator, has already produced two synthetic-data applications: NVIDIA DRIVE Sim™ and NVIDIA Isaac Sim™, the virtual worlds that host the digital twins of autonomous vehicles and of robots, respectively.
The advantage of Omniverse Replicator is that, to a certain extent, it can replace humans in performing expensive and laborious data labeling. The data generated in these virtual worlds can cover a wide variety of scenarios, including extreme and dangerous situations rarely encountered in the real world. It can also generate ground-truth data that is difficult or impossible for humans to label, such as speed, depth, occluded objects, severe weather conditions, and the movement of objects tracked across various sensors. Once self-driving cars and robots are fully trained in a series of virtual environments, they can gradually be deployed in the real world.
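The core idea can be sketched in plain Python (my illustration, not Nvidia's API): because the simulator itself places every object, exact labels, including ones humans cannot measure by eye, such as depth and speed, come for free with each generated sample, and rare dangerous cases can be produced on demand:

```python
import random

def generate_sample(idx: int, rng: random.Random) -> dict:
    """Simulate one synthetic scene containing a single labeled object.

    The simulator chooses the object's state, so exact ground truth
    (depth, speed, occlusion, weather) is known for every sample; no
    human annotator is needed.
    """
    label = {
        "x": rng.uniform(-50.0, 50.0),      # lateral position (m)
        "depth": rng.uniform(1.0, 100.0),   # distance from the sensor (m)
        "speed": rng.uniform(0.0, 30.0),    # m/s: hard for humans to label
        "occluded": rng.random() < 0.3,     # partially hidden objects
        "weather": rng.choice(["clear", "rain", "fog", "snow"]),
    }
    # In a real engine "image" would be a rendered frame; here it is just
    # a placeholder file name paired with its exact ground-truth label.
    return {"image": f"frame_{idx:05d}.png", "label": label}

rng = random.Random(42)
dataset = [generate_sample(i, rng) for i in range(1000)]
# Rare, dangerous cases can be generated on demand instead of waited for.
rare = [s for s in dataset
        if s["label"]["weather"] == "snow" and s["label"]["occluded"]]
print(len(dataset), len(rare))
```

A real Replicator pipeline renders photoreal frames and physically simulated sensor outputs, but the economics are the same: labels are a by-product of generation rather than a separate human workflow.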
In addition, Huang Renxun announced four other Omniverse features: Showroom, a demo and sample application showing Omniverse's core technologies; Farm, a system layer that coordinates batch jobs across multiple systems, workstations, servers, and virtualized machines, usable for batch rendering, AI synthetic-data generation, and distributed computing; Omniverse AR, which can stream graphics to mobile phones or AR glasses; and Omniverse VR, the first full-frame interactive ray-traced VR.
Nvidia's expectations for Omniverse go far beyond that. Nvidia will also use Omniverse to build a digital-twin model to simulate and predict climate change. Huang Renxun said: "Predicting climate change in order to develop mitigation and adaptation strategies is arguably one of the greatest challenges facing society today."
Nvidia's previous supercomputer was Cambridge-1; the new supercomputer for simulating and predicting climate change will be called E-2 (Earth Two), a digital twin of the Earth. Running in Omniverse, the AI physics models created with Modulus are said to run millions of times faster than conventional simulation.
GPUs in NVIDIA Cambridge-1
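The "millions of times faster" claim rests on the surrogate-model idea: train a network once on solver outputs, then evaluate the cheap network instead of re-running the expensive solver. A deliberately tiny sketch of the principle (mine, not Modulus code), with a closed-form stand-in playing the role of the trained model:

```python
def numerical_sim(x: float, steps: int = 100_000) -> float:
    # Stand-in for an expensive solver: Euler-integrate dy/dt = -y
    # from y(0) = x over one unit of time, in many small steps.
    y, dt = x, 1.0 / steps
    for _ in range(steps):
        y -= y * dt
    return y  # approximately x * exp(-1)

def surrogate(x: float) -> float:
    # Stand-in for a trained AI model: one cheap evaluation that jumps
    # straight to (an approximation of) the solver's answer.
    return x * 0.36787944117144233  # exp(-1)

# Nearly the same answer with roughly 100,000x fewer operations.
print(abs(numerical_sim(2.0) - surrogate(2.0)))
```

For a climate twin the solver is a planetary-scale physics simulation rather than a one-line ODE, but the trade is identical: pay a large training cost once, then answer "what if" questions at interactive speed.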
At the end of the keynote, Huang Renxun said that humanity needs to act to mitigate and adapt to today's increasingly frequent extreme weather before it is too late: "I cannot think of a grander or more important use than this."
Posted by: CoinYuppie. Reprinted with attribution to: https://coinyuppie.com/nvidia-huang-renxun-we-built-a-shuttle-door-between-the-real-world-and-the-meta-universe/