A palm-sized AI supercomputing platform, a platform for generating virtual avatars, and an AI training framework that lowers the barrier to development: NVIDIA unveiled a wide range of updates at this year’s GTC.
At the GTC 2021 conference, NVIDIA CEO Jensen Huang appeared in his signature leather jacket and unveiled a range of AI technologies and products, as well as a virtual avatar platform tied to the metaverse.
First, NVIDIA released a new AI supercomputer, “Jetson AGX Orin”, aimed at robots, autonomous machines, medical devices, and other forms of edge embedded computing.
“Jetson AGX Orin” delivers six times the processing power of its predecessor while keeping the same form factor and pin compatibility. It can perform 200 trillion operations per second (TOPS), processing power comparable to a GPU server, yet in a package only the size of a human palm.
According to Huang, this supercomputer is at once the world’s smallest, most powerful, and most energy-efficient.
NVIDIA also launched “Omniverse Avatar”, a virtual avatar platform that carries its vision of the metaverse: it can generate visual avatars capable of reasoning and dialogue. The platform integrates the perception, speech recognition, recommendation, and animation rendering capabilities that NVIDIA has built up over the years.
At the keynote, Huang demonstrated applications of the platform, such as “Toy-Me”, a miniature version of himself generated by it; the avatar can hold natural question-and-answer exchanges with people.
Other demos included avatars that take customers’ food orders and avatars for industrial applications. “Omniverse” has greater ambitions, Huang said: it is designed for data-center scale.
Huang stated at today’s GTC that NVIDIA now has close to 3 million developers, and that the CUDA architecture has been downloaded more than 30 million times over the past 15 years. Even so, NVIDIA continues to develop tools and frameworks that lower the barrier for enterprises and developers to adopt AI.
NVIDIA launched the acceleration framework “NeMo Megatron”, optimized for training language models with trillions of parameters. Companies can use the framework to further train such models to serve new domains and languages. Alongside it, NVIDIA also launched “NVIDIA Modulus”, a framework for developing physics-ML models with applications such as climate prediction; scientists can use it to quickly build digital twin models of the climate.
NVIDIA announced today that more than 25,000 customers, including Capital One, Microsoft, Samsung Medison, Siemens Energy, and Snap, are using its AI platform.
NVIDIA’s Riva speech software has also made progress: Huang announced a custom voice feature. With as little as 30 minutes of audio training data, an enterprise can build its own brand-specific voice.
In addition, NVIDIA introduced several inference products: the “Triton Inference Server”, which now supports multi-node distributed inference, and the “A2 Tensor Core GPU”. The A2 is a low-power, small-form-factor accelerator for edge AI inference, with inference performance up to 20 times that of a CPU.
Posted by: CoinYuppie. Reprinted with attribution to: https://coinyuppie.com/nvidia-releases-the-worlds-smallest-ai-supercomputing-and-meta-universe-virtual-avatar-platform/