“A constant theme you will see is how Omniverse can be used to simulate warehouses, factories, physical and biological systems, the 5G edge, robots, self-driving cars, and even digital-twin avatars,” Nvidia CEO Jensen Huang said at the GTC conference on November 9.
Omniverse, Nvidia’s entry into the “metaverse,” is a real-time simulation and collaboration platform. “Omniverse is very different from a game engine. It is designed for data-center scale and is expected to reach planetary scale one day. Omniverse’s portal is USD (Universal Scene Description), essentially a digital wormhole: it connects humans and computers to Omniverse, and connects one Omniverse world to another. USD is to Omniverse what HTML is to websites,” Huang said.
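To make the HTML analogy concrete: just as a web page is plain text that browsers render, a USD scene can be written as a plain-text `.usda` file that any USD-aware tool can open. The snippet below is a hypothetical, minimal example (the prim names are invented for illustration, not taken from the keynote):

```usda
#usda 1.0
(
    defaultPrim = "Warehouse"
)

def Xform "Warehouse"
{
    # A single shelf placed at the origin of the warehouse scene.
    def Mesh "Shelf"
    {
        double3 xformOp:translate = (0, 0, 0)
        uniform token[] xformOpOrder = ["xformOp:translate"]
    }
}
```

Because the format is an open, layered scene description rather than an engine-specific save file, different applications can reference and edit the same scene, which is what enables the cross-tool collaboration Omniverse is built around.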
According to Nvidia’s earlier introductions, Omniverse consists of three major parts.
The first part is the database engine, Omniverse Nucleus, through which users can connect and exchange 3D assets and scene descriptions. Designers working on modeling, layout, shading, animation, lighting, special effects, or rendering can collaborate on shared scenes.
The second is the compositing, rendering, and animation engine, which simulates the virtual world. For example, Nvidia’s graphics technology can simulate in real time how each ray of light bounces through the virtual world.
The third part is Nvidia CloudXR, client and server software for streaming extended-reality content from OpenVR applications to Android and Windows devices, allowing users to enter and exit Omniverse.
In his GTC keynote, Huang introduced Omniverse Replicator and Omniverse Avatar.
Omniverse Replicator is a synthetic data generation engine designed to help build better digital twins. A noteworthy use case is Nvidia Isaac Sim. “Using Isaac for data replication lets you test virtual instances of robots in a world full of synthetic data,” said Deepu Talla, vice president and general manager of embedded and edge computing at Nvidia. “It’s really difficult to train robots in the physical world; doing it in simulation is cheaper, safer, and faster.” He added that because the data is synthetic, the labeling step of training a machine learning model can be skipped: the system already knows what each object in the virtual world is.
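The “labels come for free” point can be illustrated with a short sketch. This is not Isaac Sim’s actual API, just a hypothetical toy generator: because the program places every object itself, it can emit the ground-truth class and bounding box alongside each scene, with no separate annotation step.

```python
import random

# Hypothetical object classes for a warehouse scene (illustrative only).
CLASSES = ["box", "pallet", "forklift"]

def generate_scene(num_objects, seed=None):
    """Place objects at random positions and return the scene together
    with its ground-truth labels -- no manual annotation needed."""
    rng = random.Random(seed)
    scene, labels = [], []
    for _ in range(num_objects):
        cls = rng.choice(CLASSES)
        x, y = rng.uniform(0, 100), rng.uniform(0, 100)
        w, h = rng.uniform(1, 5), rng.uniform(1, 5)
        scene.append({"class": cls, "bbox": (x, y, w, h)})
        # The generator knows each object's identity at creation time,
        # so the label is emitted rather than annotated after the fact.
        labels.append((cls, (x, y, w, h)))
    return scene, labels

scene, labels = generate_scene(5, seed=42)
assert len(labels) == len(scene)  # every object is labeled automatically
```

A real synthetic-data pipeline would render images and emit segmentation masks or depth maps the same way, but the principle is identical: ground truth is a byproduct of generation.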
The same approach applies to autonomous driving. “We use Omniverse to simulate vehicle training and testing to ensure their safety,” said Danny Shapiro, vice president of Nvidia Automotive. “Testing autonomous driving software with synthetic data under simulated road conditions saves time and money, simplifies tasks such as labeling objects in the environment, and ultimately aligns with how the vehicle behaves in real conditions.”
Omniverse Avatar is used to create AI avatars that interact with humans. In his keynote, Huang demonstrated “Toy-Me,” a toy avatar modeled on his own voice and likeness: a voice-interactive AI that can understand and respond to complex questions.
The technology brings together Nvidia’s work in speech AI, computer vision, natural language understanding, recommendation engines, and simulation. The simulated Huang in the demonstration is an interactive character rendered with ray-traced 3D graphics; it can converse with humans on a wide range of topics and understand a speaker’s intent. The animated virtual “Jensen” fielded questions such as “What is the biggest threat from climate change?” and “How do astronomers find exoplanets?”
The speech recognition behind Avatar is built on Riva, a new large-scale software development kit for advanced speech AI; its natural language understanding is based on the Megatron 530B large language model.
Notably, the latest version of the Nvidia Riva conversational AI SDK includes Riva Custom Voice, which uses semi-supervised learning to create synthetic, customized voices for software, IVR systems, and other business applications. According to Huang, with only 30 minutes of a speaker’s audio, the technology can generate a synthetic voice that sounds like that speaker.
Huang believes these assistants can be easily customized for almost any industry, helping to handle billions of daily customer-service interactions, such as restaurant orders and bank transactions.
VentureBeat noted that with Riva Custom Voice, Nvidia can keep pace with Google, which in 2019 launched new AI-synthesized WaveNet voices in its cloud text-to-speech service. The capability also raises concerns about possible voice abuse, such as cloning a voice for fraud.
GTC (the GPU Technology Conference) was first held in San Jose, California in 2009. It is Nvidia’s flagship event and has long been the company’s main channel for delivering important announcements. The conference initially focused on the potential of GPUs to solve computing challenges; in recent years its focus has shifted to applications of artificial intelligence and deep learning, such as autonomous vehicles, healthcare, and high-performance computing.
In this fall’s keynote, Huang also laid out his expectations for Omniverse’s future, revealing that Nvidia will build Earth-2 (E-2), a digital-twin model of the Earth, whose purpose is to simulate and predict long-term climate change.
Posted by: CoinYuppie. Reprinted with attribution to: https://coinyuppie.com/huang-renxun-metaverse-construction-tools-are-essentially-digital-wormholes/