Editor's Letter Insights

Into the Omniverse for Autonomous Vehicles

Written by Jeff Child

At its GTC Fall 2021 conference in early November, Nvidia unveiled the official version of its 3D development platform, Omniverse. Created for developing augmented reality (AR) and virtual reality (VR) applications, Omniverse launched in beta in December 2020 and has since been downloaded by more than 70,000 developers.

Along with the official launch, Nvidia rolled out the Omniverse Avatar and Omniverse Replicator applications, plus new features across the Omniverse product line such as multi-GPU and AR/VR enhancements. The Omniverse Avatar application is aimed at generating interactive AI avatars.

While the Omniverse Avatar technology looks interesting, what caught my attention much more, from an engineering development standpoint, was the new Omniverse Replicator platform. Nvidia Omniverse Replicator is a synthetic-data-generation engine that produces physically simulated synthetic data for training deep neural networks. In its first implementations of the engine, the company introduced two applications for generating synthetic data: one for Nvidia DRIVE Sim, a virtual world for hosting the digital twin of autonomous vehicles, and another for Nvidia Isaac Sim, a virtual world for the digital twin of manipulation robots.

According to Nvidia, these two replicators allow developers to bootstrap AI models, fill real-world data gaps and label the ground truth in ways humans can't. Data generated in these virtual worlds can cover a broad range of diverse scenarios, including rare or dangerous conditions that can't regularly or safely be experienced in the real world. Because the concept is the same for both, I'll focus here on DRIVE Sim.

DRIVE Sim is a simulation tool built on Omniverse that takes advantage of the platform's many capabilities. Data generated by DRIVE Sim is used to train the deep neural networks (DNNs) that make up the perception systems in autonomous vehicles. The DNNs powering an autonomous vehicle's perception are composed of two parts: an algorithmic model and the data used to train that model. Engineers have dedicated significant time to refining algorithms. However, says Nvidia, the data side of the equation remains underdeveloped due to the limitations of real-world data, which is incomplete as well as time-consuming and costly to collect.

This imbalance often leads to a plateau in DNN development, hindering progress when the data cannot meet the demands of the model. With synthetic data generation, developers have more control over data development, tailoring it to the specific needs of the model. A major issue, however, is the gap between synthetically generated data and real-world data. It has two components: an appearance gap, caused by differences in how sensor data looks in simulation versus reality, and a content gap, caused by a lack of real-world content diversity and by differences between simulated and real-world contexts.
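To make the idea concrete, here is a minimal sketch of how a synthetic-data pipeline can randomize scene content and emit exact ground-truth labels alongside each frame. This is my own toy illustration, not the DRIVE Sim or Replicator API; every name and parameter here is hypothetical.

```python
import random

# Hypothetical scenario space; a real simulator would randomize far more
# (lighting, materials, sensor poses, road topology, and so on).
WEATHERS = ["clear", "rain", "fog", "snow"]
TIMES_OF_DAY = ["dawn", "noon", "dusk", "night"]

def sample_scenario(rng):
    """Randomize scene content so the dataset covers rare conditions
    (e.g., pedestrians at night in snow) that are hard to capture on the road."""
    return {
        "weather": rng.choice(WEATHERS),
        "time_of_day": rng.choice(TIMES_OF_DAY),
        "num_vehicles": rng.randint(0, 20),
        "num_pedestrians": rng.randint(0, 10),
    }

def render_with_ground_truth(scenario):
    """Stand-in for the renderer. Because the simulator placed every object,
    the labels are exact by construction; no human annotation is needed."""
    image = f"frame({scenario['weather']},{scenario['time_of_day']})"
    labels = {
        "vehicle_boxes": scenario["num_vehicles"],      # one box per vehicle
        "pedestrian_boxes": scenario["num_pedestrians"],
    }
    return image, labels

def generate_dataset(n, seed=0):
    """Produce n (image, labels) pairs from randomized scenarios."""
    rng = random.Random(seed)
    return [render_with_ground_truth(sample_scenario(rng)) for _ in range(n)]
```

The key point the sketch captures is that labels fall out of the scene description for free, which is what lets synthetic data "label the ground truth in ways humans can't."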

To narrow the appearance gap, DRIVE Sim takes advantage of Omniverse's RTX path-tracing renderer to generate physically based sensor data for cameras, radars, lidars, and ultrasonic sensors. Real-world effects are captured in the sensor data, including phenomena such as LED flicker, motion blur, rolling shutter, lidar beam divergence and the Doppler effect.
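As a rough illustration of what modeling such sensor effects involves, here is a NumPy sketch, again my own toy code rather than anything from Nvidia's renderer, that approximates two of the listed phenomena on a 2D image array: motion blur (the scene moves during the exposure window) and rolling shutter (each row is read out slightly later than the one above it).

```python
import numpy as np

def motion_blur(frame, velocity_px, steps=8):
    """Approximate motion blur by averaging copies of the frame shifted
    along the motion direction over the exposure window."""
    acc = np.zeros_like(frame, dtype=np.float64)
    for i in range(steps):
        shift = int(round(velocity_px * i / steps))
        acc += np.roll(frame, shift, axis=1)  # horizontal motion
    return acc / steps

def rolling_shutter(frame, shear_px):
    """Each row is read out later, so a horizontally moving scene shears:
    row r is shifted in proportion to its readout time."""
    out = np.empty_like(frame)
    rows = frame.shape[0]
    for r in range(rows):
        out[r] = np.roll(frame[r], int(round(shear_px * r / rows)))
    return out
```

A production renderer models these effects physically inside the path tracer rather than as image-space post-processing, but the sketch shows why they matter for the appearance gap: a network trained only on crisp, globally shuttered frames never sees the smearing and shearing a real camera produces.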

According to the company, DRIVE Sim has already produced significant results in accelerating perception development with synthetic data. One example is the migration to the latest Nvidia DRIVE Hyperion sensor set. The Nvidia DRIVE Hyperion 8 platform includes sensors for complete production AV development. Before these sensors were even available, however, the Nvidia DRIVE team was able to bring up DNNs for the platform using synthetic data. DRIVE Sim generated millions of images and ground-truth data for training. As a result, the networks were ready to deploy as soon as the sensors were installed, saving valuable months of development time.

The engineering challenges of creating truly safe autonomous vehicles are formidable enough in terms of functionality alone. If driverless cars are ever going to see widespread use, an enormous amount of simulation and testing must be done. Technologies like Nvidia DRIVE Sim offer a way to bring realism grounded in real-world data into the simulated testing realm.



Former Editor-in-Chief at Circuit Cellar

Jeff served as Editor-in-Chief for both LinuxGizmos.com and its sister publication, Circuit Cellar magazine 6/2017—3/2022. In nearly three decades of covering the embedded electronics and computing industry, Jeff has also held senior editorial positions at EE Times, Computer Design, Electronic Design, Embedded Systems Development, and COTS Journal. His knowledge spans a broad range of electronics and computing topics, including CPUs, MCUs, memory, storage, graphics, power supplies, software development, and real-time OSes.


Copyright © KCK Media Corp.
All Rights Reserved

