Drivers have always needed maps, and self-driving cars are no exception. In fact, highly accurate maps are crucial to making autonomous vehicles a reality.
Enter NVIDIA’s end-to-end mapping platform for self-driving cars announced today by NVIDIA CEO Jen-Hsun Huang at the GPU Technology Conference.
This system is designed to help automakers, map companies and startups rapidly create HD maps and keep them updated, using the compute power of NVIDIA DRIVE PX 2 in the car and NVIDIA Tesla GPUs in the data center.
“We have to map the world before cars can safely drive themselves around,” Huang said. “Using four cameras, our platform is able to detect up to 1.8 million points per second, not only in 3D space but in color, giving autonomous vehicles a complete view of their surroundings.”
Mapping the Road Ahead
Why are maps so important for self-driving cars? You might take your morning commute for granted, but the process of driving is incredibly complex.
Automakers will need to equip vehicles with powerful on-board supercomputers capable of processing inputs from multiple sensors to precisely understand their environments. Adding detailed maps to this equation simplifies the problem.
When a human driver knows what to expect around the next corner, it frees up more of their attention to be alert for hazards. It’s no different when the car is its own driver.
NVIDIA’s open mapping platform is built on the NVIDIA DriveWorks software toolkit. It combines deep learning and a technique known as “visual simultaneous localization and mapping” to handle every stage of the mapping process.
A New Approach to Mapmaking
Traditional mapping techniques require numerous expensive sensors in the car to collect large volumes of data that then need to be recorded and processed offline. By contrast, we designed our system to be highly efficient, moving much of the data processing into the car, and minimizing communication with the cloud.
When it comes to guiding self-driving cars, GPS alone isn’t enough; pinpoint accuracy is required. Structure from motion algorithms, essentially 3D graphics in reverse, convert 2D camera data into rich 3D information. Combining this mapping information with data from the car’s inertial sensors, along with GPS, enables the precise location of key landmarks.
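At its core, structure from motion recovers a 3D position by intersecting the bearing rays to the same landmark seen from different camera poses. As a toy illustration of that idea (a 2D sketch, not NVIDIA’s implementation — the function name and setup here are hypothetical):

```python
def triangulate_2d(p1, d1, p2, d2):
    """Intersect two bearing rays p_i + t_i * d_i observed from two camera poses.

    p1, p2: camera positions as (x, y) tuples.
    d1, d2: viewing directions from each camera toward the same landmark.
    Solves the 2x2 linear system t1*d1 - t2*d2 = p2 - p1 by Cramer's rule
    and returns the landmark position.
    """
    det = d1[0] * (-d2[1]) - (-d2[0]) * d1[1]
    if abs(det) < 1e-12:
        raise ValueError("rays are parallel; landmark cannot be triangulated")
    ex, ey = p2[0] - p1[0], p2[1] - p1[1]
    t1 = (ex * (-d2[1]) - (-d2[0]) * ey) / det
    return (p1[0] + t1 * d1[0], p1[1] + t1 * d1[1])

# Two camera positions a couple of metres apart both sight the same landmark.
landmark = triangulate_2d((0.0, 0.0), (1.0, 1.0), (2.0, 0.0), (-1.0, 1.0))
# landmark is (1.0, 1.0): the point where the two viewing rays cross.
```

A real pipeline does this in 3D across many frames and many feature points at once, with the camera poses themselves refined jointly, but the geometric principle is the same.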
Add deep learning algorithms for detecting important features like lanes and road signs, and you have a system that can both create maps and recognize changes in its environment. The result is a highly efficient system that car and map makers can build upon as they plot a course to autonomous driving.
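The change-recognition step described above can be thought of as diffing the features a car detects on a drive against the features recorded in the stored HD map. A minimal sketch, assuming map features are represented as hashable (type, grid-position) keys — a hypothetical representation, not the DriveWorks API:

```python
def diff_map_features(stored, observed):
    """Compare features in the stored HD map against freshly detected ones.

    Both arguments are sets of hashable feature keys, e.g.
    ("lane_marking", 10, 2) for a lane marking at grid cell (10, 2).
    Returns (added, removed): features newly seen on the road, and
    features that have disappeared and should trigger a map update.
    """
    added = observed - stored
    removed = stored - observed
    return added, removed

stored = {("lane_marking", 10, 2), ("speed_sign", 14, 3)}
observed = {("lane_marking", 10, 2), ("stop_sign", 15, 3)}
added, removed = diff_map_features(stored, observed)
# added contains the new stop sign; removed contains the speed sign
# that is no longer seen, so only those deltas need reach the cloud.
```

Sending only such deltas, rather than raw sensor streams, is one way a system can keep communication with the cloud to a minimum, as the platform described here aims to do.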