Nvidia plans for a more robust Omniverse with avatars, synthetic data

Nov. 9, 2021 | Hi-network.com

Omniverse Replicator is a simulation framework that produces physically accurate synthetic data to accelerate the training of deep neural networks for AI applications. NVIDIA has created Omniverse Replicators for DRIVE Sim, for training AI perception networks for autonomous vehicles, and for Isaac Sim, for training robots.

Nvidia

As enterprises prepare to bring more of their business and operations to the virtual world, Nvidia is building out Omniverse, its platform for extending workflows into the virtual sphere. The latest updates to the platform, introduced during GTC 2021, include Omniverse Avatar, a tool for creating embodied AIs, as well as Omniverse Replicator, a synthetic data-generation engine. 

Nvidia rolled out Omniverse in open beta last December -- nearly a year before Facebook committed to the concept of a "metaverse" by renaming itself Meta. Omniverse gives 3D designers a shared virtual world from which they can collaborate across different software applications and from different geographic locations. 

Since the December launch, Omniverse has been downloaded by more than 70,000 individual creators. Professionals are also using it at over 700 companies, including BMW Group, CannonDesign, Epigraph, Ericsson, architectural firms HKS and KPF, Lockheed Martin and Sony Pictures Animation.

"Virtual worlds are essential for the next era of innovation," Richard Kerris, VP of Omniverse for Nvidia, said to reporters last week.

Nvidia's take on the virtual world has primarily focused on building "digital twins" -- accurate digital replicas of physical entities. Here's how Rev Lebaredian, Nvidia's VP of simulation technology and Omniverse engineering, explained the concept of a digital twin:

"For me, digital twins essentially are a way to take stuff in the real world and represent them in the virtual world so we can give ourselves some superpowers. Once you have an accurate representation... and you can simulate how that world behaves, you can do some pretty amazing things. You can teleport -- if you have a representation of your factory, or a model of the Earth, or a city where cars are driving around, you can jump to any point in that world and feel it and perceive it as if you were there. With simulation, we also have the potential of time travel -- we can record what has happened in the past and rewind to play back what happened in your factory, in a situation on the roads... You have the potential to fast forward as well. Not only can you go into the future, but you can go into alternative futures by changing parameters inside this world. That allows us to plan much brighter futures, to optimize our businesses, to create a better Earth and a future for ourselves."

Omniverse Replicator is a tool that should ultimately help organizations build better digital twins -- and thus, better AI-powered tools in the real world. Nvidia is introducing two applications built with Replicator that demonstrate its use cases: Nvidia Drive Sim, a virtual world for hosting digital twins of vehicles, and Nvidia Isaac Sim, a virtual world for digital twins of manipulation robots.

Data is a necessary prerequisite for building AI models, but "you never have enough data, and never of high enough quality and diversity to make your system as intelligent as you want," Lebaredian explained. "But if you can synthesize your data, you effectively have an unlimited amount... of a quality impossible to extract from the real world."

Autonomous vehicles and robots trained on data generated by Replicator can master skills across a range of virtual environments before applying them in the physical world. 

While its first two use cases are in robotics and automotive, "this general problem of creating data for AI is one that everyone has," Lebaredian said. Omniverse Replicator will be available to developers next year so they can build domain-specific data-generation engines.
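Nvidia hasn't published Replicator's programming interface in this announcement, so the sketch below uses plain NumPy to illustrate the underlying idea of a data-generation engine: because the generator places and renders every object itself, each synthetic sample comes with perfectly accurate labels, and you can produce as many randomized samples as you want.

```python
# A minimal sketch of domain-randomized synthetic data generation, using plain
# NumPy rather than Nvidia's Replicator API (which is not described in this
# article). The image size, object shapes and randomization ranges are all
# made-up values for illustration.
import numpy as np

RNG = np.random.default_rng(seed=0)
IMG_SIZE = 128  # hypothetical image resolution for this toy example


def render_sample(num_objects=3):
    """Render one synthetic image of random rectangles and return it together
    with exact bounding-box labels."""
    # Randomized background: a crude stand-in for varied lighting/textures.
    image = RNG.normal(loc=RNG.uniform(0.2, 0.8), scale=0.05,
                       size=(IMG_SIZE, IMG_SIZE, 3)).clip(0, 1)
    labels = []
    for _ in range(num_objects):
        w, h = RNG.integers(10, 40, size=2)
        x, y = RNG.integers(0, IMG_SIZE - w), RNG.integers(0, IMG_SIZE - h)
        color = RNG.uniform(0, 1, size=3)       # randomized appearance
        image[y:y + h, x:x + w] = color         # "render" the object
        labels.append({"bbox": (int(x), int(y), int(w), int(h)),
                       "class": "box"})         # exact label, no annotation cost
    return image, labels


if __name__ == "__main__":
    dataset = [render_sample() for _ in range(1000)]  # unlimited, on demand
    print(len(dataset), "samples; first labels:", dataset[0][1])
```

Replicator applies the same principle with physically accurate rendering and simulation, which is what makes the resulting data suitable for training perception networks.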

To further demonstrate the value of digital twins, Nvidia showcased two customer stories. First, Ericsson is using Omniverse to build digital twins for 5G networks. The telecom equipment maker is building city-scale digital twins to help accurately simulate the interplay between 5G cells and the environment. This should help optimize 5G performance and coverage.
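Neither Ericsson nor Nvidia detailed the propagation models involved, but a back-of-the-envelope sketch shows the kind of radio math a city-scale twin evaluates at every receiver location. The free-space path loss formula below is standard; the transmit power, frequency and obstruction penalty are made-up values for illustration.

```python
# Toy illustration (not Ericsson's or Nvidia's actual model) of coverage math:
# free-space path loss between a 5G cell and a receiver, plus a crude penalty
# for an obstruction that a real twin would derive from 3D city geometry.
import math


def free_space_path_loss_db(distance_km: float, freq_mhz: float) -> float:
    """Standard free-space path loss: 20*log10(d_km) + 20*log10(f_MHz) + 32.44."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44


def received_power_dbm(tx_power_dbm: float, distance_km: float,
                       freq_mhz: float, blocked: bool) -> float:
    loss = free_space_path_loss_db(distance_km, freq_mhz)
    if blocked:
        loss += 20.0  # hypothetical extra loss for a building in the path
    return tx_power_dbm - loss


# Example: a 3.5 GHz cell transmitting at 43 dBm, receiver 500 m away.
for blocked in (False, True):
    p = received_power_dbm(43.0, 0.5, 3500.0, blocked)
    print(f"blocked={blocked}: received power ~ {p:.1f} dBm")
```

In a city-scale twin, that crude obstruction flag would presumably be replaced by detailed interactions with 3D building geometry -- the "interplay between 5G cells and the environment" described above.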

Lockheed Martin and NVIDIA are partnering to build an AI and digital twin-enabled lab to combat wildfires. Image shows a simulation of wildfires in an Omniverse digital twin world.

Nvidia

Next, Nvidia is working with Lockheed Martin, as well as the US Department of Agriculture Forest Service and the Colorado Division of Fire Prevention & Control, to run simulations of wildfires with Omniverse. The team will use variables like wind direction, topography and whatever other information is available to create a digital twin of a wildfire and predict how it will play out. 
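The article doesn't describe the simulation model itself, so the following is only a toy illustration of how a variable like wind direction feeds a predictive fire-spread simulation: a probabilistic cellular automaton on a grid, where cells downwind of a burning cell are more likely to ignite.

```python
# Minimal fire-spread sketch on a grid, purely to illustrate how wind direction
# biases a predictive simulation; the grid, probabilities and wind bonus are
# simplifying assumptions, not the Lockheed Martin/Nvidia model.
import numpy as np

RNG = np.random.default_rng(1)
UNBURNED, BURNING, BURNED = 0, 1, 2


def step(grid, base_prob=0.3, wind=(0, 1), wind_bonus=0.4):
    """Advance one time step: burning cells may ignite neighbors, with a higher
    probability for the neighbor lying downwind of the fire."""
    new = grid.copy()
    rows, cols = grid.shape
    for r, c in zip(*np.where(grid == BURNING)):
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr, nc] == UNBURNED:
                p = base_prob + (wind_bonus if (dr, dc) == wind else 0.0)
                if RNG.random() < p:
                    new[nr, nc] = BURNING
        new[r, c] = BURNED  # a burning cell burns out after one step
    return new


grid = np.zeros((20, 20), dtype=int)
grid[10, 10] = BURNING                    # ignition point
for _ in range(15):                       # fast-forward the scenario
    grid = step(grid, wind=(0, 1))        # wind blowing "east"
print("cells burned:", int((grid == BURNED).sum()))
```

A real wildfire twin would couple far richer physics -- fuel, moisture, topography -- but the structure is the same: set the inputs, fast-forward the simulation, and read off the predicted spread.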

While building digital twins has clear enterprise value, Nvidia is taking Omniverse beyond replications of the real world with the new Omniverse Avatar platform. Avatar is an end-to-end platform for creating embodied AIs that humans can interact with. It connects Nvidia's technologies in speech AI, computer vision, natural language understanding, recommendation engines and simulation. Avatars created on the platform are interactive characters with ray-traced 3D graphics; they can see, speak on a wide range of subjects and understand naturally spoken intent. 

A stylized avatar of NVIDIA founder and CEO Jensen Huang, part of Omniverse Avatar's Project Tokkio, a reference application for interactive AI avatars for customer service.

Nvidia

While there's a great deal of justified skepticism about the use of avatars -- given that efforts like Second Life failed to catch on -- Nvidia's Lebaredian argued that "today there are many examples of people who use avatars on a daily basis." He pointed to video games like Fortnite and Roblox. 

"You can go to Twitch and see gamers streaming games where they have a virtual avatar in a game engine representing themselves," he said. "This is very natural for this generation that has grown up with video games and virtual worlds being just like air."

So far, Nvidia has launched a few initiatives targeting Avatar at specific use cases: Project Tokkio leverages Avatar to build customer support agents, Nvidia Drive Concierge focuses on intelligent services in vehicles, and Project Maxine will help customers build avatars -- which may or may not look like their real selves -- for video conferencing. The company said it had seen notable interest in Project Tokkio from the retail sector. 

The many Nvidia technologies behind Avatar include Riva, a new software development kit for advanced speech AI. It recognizes speech across multiple languages and can generate human-like speech responses using text-to-speech capabilities. The Avatar platform's natural language understanding is based on the Megatron 530B large language model, which can recognize, understand and generate human language. 

Nvidia has also incorporated Metropolis for computer vision and perception -- so avatars can see and understand the humans they're interacting with -- and the Nvidia Merlin framework for recommendations. Avatar animation is powered by Nvidia Video2Face and Audio2Face, its 2D and 3D facial animation and rendering technologies. Avatar applications are processed in real time using the Nvidia Unified Compute Framework, and customers can use Fleet Command to deploy avatars in the field. 
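Nvidia didn't publish code for this stack, but the flow it describes -- perceive, understand, recommend, speak, animate -- can be summarized structurally. The functions below are hypothetical stand-ins, not the actual Riva, Megatron, Merlin, Metropolis or Audio2Face APIs; the point is only how one conversational turn chains those roles together.

```python
# Structural sketch of a perceive -> understand -> respond -> animate loop.
# Every function here is a hypothetical placeholder, not a real Nvidia API.
from dataclasses import dataclass


@dataclass
class AvatarResponse:
    text: str          # what the avatar will say
    audio: bytes       # synthesized speech
    face_frames: list  # animation frames driven by the audio


def transcribe(audio_in: bytes) -> str:           # speech-recognition role
    return "do you have these shoes in size 10?"


def understand_and_reply(utterance: str) -> str:  # NLU + recommendation role
    return "Yes, and based on your last order you may also like the trail model."


def synthesize_speech(text: str) -> bytes:        # text-to-speech role
    return text.encode("utf-8")                   # placeholder "waveform"


def animate_face(audio: bytes) -> list:           # audio-driven animation role
    return [f"frame_{i}" for i in range(len(audio) // 8)]


def avatar_turn(mic_audio: bytes) -> AvatarResponse:
    """One conversational turn: listen, understand, answer, speak, animate."""
    heard = transcribe(mic_audio)
    reply = understand_and_reply(heard)
    audio = synthesize_speech(reply)
    return AvatarResponse(reply, audio, animate_face(audio))


print(avatar_turn(b"placeholder-microphone-audio").text)
```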

Beyond Replicator and Avatar, Nvidia announced a range of other updates to Omniverse, including new AR, VR and multi-GPU rendering features. There are also new integrations for infrastructure and industrial digital-twin applications with software from Bentley Systems and Esri.
