Mobility researchers at the University of Michigan in Ann Arbor have devised a way to test autonomous vehicles that bypasses the billions of miles of road testing that would otherwise be needed before consumers consider them ready for the road.
The process was developed using data from more than 25 million miles of real-world driving, and can reduce the time required to evaluate robotic vehicles’ handling of dangerous situations by a factor of 300 to 100,000, cutting testing time and costs by as much as 99.9 percent, researchers say.
The new accelerated evaluation process essentially breaks down difficult real-world driving situations into components that can be tested or simulated repeatedly, exposing automated vehicles to a condensed set of challenging driving situations. Under this program, just 1,000 miles of testing can yield the equivalent of 300,000 to 100 million miles of real-world driving.
The new approach was outlined in a white paper published by Mcity, a U-M-led public-private partnership to accelerate advanced mobility vehicles and technologies.
“Even the most advanced and largest-scale efforts to test automated vehicles today fall woefully short of what is needed to thoroughly test these robotic cars,” says Huei Peng, director of Mcity and the Roger L. McCarthy Professor of Mechanical Engineering at U-M.
Peng adds that while 100 million miles might sound like overkill, it’s not enough for researchers to get sufficient data to certify the safety of a driverless vehicle because the scenarios they need to zero in on are rare. Additionally, for consumers to accept driverless vehicles as a safe mobility alternative, researchers say they’ll need to prove with 80 percent confidence that they’re 90 percent safer than human drivers.
To get that confidence level, test vehicles would need to be driven or simulated for 11 billion miles, which would take several decades of round-the-clock testing.
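The scale of that requirement can be illustrated with a standard rare-event exposure bound: if a vehicle logs n miles with no failures, we can claim with confidence C that its true failure rate is below r only when n ≥ -ln(1 - C) / r. The sketch below is a simplified illustration of this reasoning, not the white paper's exact statistical model (which yields the 11-billion-mile figure); the assumed human fatality rate of roughly 1 per 100 million miles and the 90-percent-safer target (r = 1e-9 per mile) are illustrative inputs.

```python
import math

def miles_needed(confidence=0.80, target_rate_per_mile=1e-9):
    """Failure-free miles needed to claim, with the given confidence,
    that the true failure rate is below the target rate.

    Zero-failure bound: n = -ln(1 - C) / r. The default target rate
    assumes a vehicle 90% safer than a human baseline of ~1 fatality
    per 100 million miles (illustrative numbers, not the paper's model).
    """
    return -math.log(1.0 - confidence) / target_rate_per_mile

# Even this optimistic bound demands on the order of a billion miles.
print(f"{miles_needed():.3e} miles")
```

Even under this best-case assumption of zero observed failures, the bound lands in the billions of miles; accounting for the failures that would actually be observed, and for comparing two rates rather than bounding one, pushes the requirement higher still.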
To create the new four-step accelerated approach, U-M researchers analyzed data from 25.2 million miles of real-world driving collected through two U-M Transportation Research Institute projects, which involved nearly 3,000 vehicles and volunteers over two years. From that data, researchers:
- Identified events that could contain “meaningful interactions” between an automated vehicle and one driven by a human, and created a simulation that replaced all the uneventful miles with these meaningful interactions.
- Programmed their simulation to consider human drivers the major threat to automated vehicles and placed human drivers randomly throughout.
- Conducted mathematical tests to assess the risk and probability of certain outcomes, including crashes, injuries, and near-misses.
- Interpreted the accelerated test results, using a technique called “importance sampling” to learn how the automated vehicle would perform, statistically, in everyday driving situations.
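The core statistical trick in the final step, importance sampling, can be sketched with a toy rare-event problem: instead of waiting for a rare event to occur under the real-world distribution, sample from a distribution that makes the event common, then reweight each sample by the density ratio so the estimate is still unbiased for the real world. The example below is a minimal generic illustration, not the researchers' actual simulation; it estimates the tail probability P(Z > 4) for a standard normal Z, an event so rare (~3 in 100,000) that naive sampling would need millions of draws.

```python
import math
import random

def tail_prob_importance_sampling(threshold=4.0, n=100_000, seed=1):
    """Estimate P(Z > threshold) for Z ~ N(0, 1) via importance sampling.

    Draws come from a proposal N(threshold, 1) centered on the rare
    region, so the event fires on roughly half the samples; each hit is
    reweighted by the density ratio N(0,1)/N(threshold,1), which for
    these two normals simplifies to exp(-threshold*x + threshold^2/2).
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(threshold, 1.0)  # proposal draw near the rare event
        if x > threshold:
            weight = math.exp(-threshold * x + threshold * threshold / 2.0)
            total += weight
    return total / n

# Exact value for comparison: P(Z > 4) = erfc(4/sqrt(2))/2 ≈ 3.17e-5.
exact = 0.5 * math.erfc(4.0 / math.sqrt(2.0))
print(tail_prob_importance_sampling(), exact)
```

In the same spirit, the accelerated evaluation over-samples "meaningful interactions" such as risky cut-ins, then reweights the simulated outcomes to recover crash and injury rates representative of everyday driving.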
In piloting this method, researchers focused on the two situations they expected to most commonly result in serious crashes: an automated car following a human driver, and a human driver merging in front of an automated car. Additional research will be needed to cover other driving situations.