Michigan Tech Researchers Tackle Autonomous Driving in Winter

Researchers at Michigan Technological University in Houghton are exploring the challenge of fully autonomous vehicles navigating in bad weather.
A snowy roadway from above // Photo courtesy of Michigan Tech

Averaging more than 200 inches of snow every winter, Michigan’s Keweenaw Peninsula is an ideal test bed for pushing autonomous vehicle tech to its limits.

In two papers presented at the recent SPIE Defense + Commercial Sensing 2021 conference, researchers from Michigan Tech discussed solutions for snowy driving scenarios that could help bring self-driving options to snowy cities like Chicago, Detroit, Minneapolis, and Toronto.

The first paper dealt with the fusion of multiple sensors and artificial intelligence to improve autonomous vehicle navigation.

In autonomous vehicles, two cameras mounted on gimbals scan and perceive depth using stereo vision to mimic human vision, while balance and motion can be gauged using an inertial measurement unit. Computers, however, can only react to scenarios they have encountered before or been programmed to recognize.
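
As a rough illustration of how a stereo camera pair recovers depth, the sketch below uses OpenCV’s block matcher and the pinhole relation (depth equals focal length times baseline divided by disparity). The calibration values are hypothetical placeholders, not figures from the Michigan Tech vehicle.

```python
# Minimal stereo-depth sketch: how two horizontally offset cameras yield distance.
import numpy as np
import cv2

FOCAL_LENGTH_PX = 700.0  # focal length in pixels (hypothetical calibration value)
BASELINE_M = 0.12        # spacing between the two cameras in meters (assumed)

def depth_from_stereo(left_gray: np.ndarray, right_gray: np.ndarray) -> np.ndarray:
    """Estimate per-pixel depth in meters from a rectified grayscale stereo pair."""
    # Block matching measures how far each small patch shifts between the two views.
    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
    disparity[disparity <= 0] = np.nan  # no reliable match means unknown depth
    # Pinhole-camera relation: nearer objects shift more between the two views.
    return FOCAL_LENGTH_PX * BASELINE_M / disparity
```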

Since artificial brains aren’t around yet, task-specific AI algorithms must take the wheel — which means autonomous vehicles must rely on multiple sensors. Fisheye cameras widen the view while other cameras act much like the human eye. Infrared picks up heat signatures. Radar can see through the fog and rain. Light detection and ranging (LiDAR) pierces through the dark and weaves a neon tapestry of laser beam threads.

“Every sensor has limitations, and every sensor covers another one’s back,” says Nathir Rawashdeh, assistant professor of computing in Michigan Tech’s College of Computing and one of the study’s lead researchers. He works on bringing the sensors’ data together through an AI process called sensor fusion.

“Sensor fusion uses multiple sensors of different modalities to understand a scene,” he says. “You cannot exhaustively program for every detail when the inputs have difficult patterns. That’s why we need AI.”
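
The sketch below, offered only as an illustration and not as the architecture from the papers, shows one common way a learning network can fuse modalities: features extracted separately from camera, LiDAR, and radar are concatenated, and a small network learns how much weight to give each.

```python
# Toy late-fusion network: each sensor modality gets its own small head, and a
# fusion layer learns how to weigh them when classifying what is in the scene.
# All dimensions and class names are illustrative assumptions.
import torch
import torch.nn as nn

class LateFusionClassifier(nn.Module):
    def __init__(self, cam_dim=256, lidar_dim=128, radar_dim=64, n_classes=4):
        super().__init__()
        self.cam_head = nn.Sequential(nn.Linear(cam_dim, 64), nn.ReLU())
        self.lidar_head = nn.Sequential(nn.Linear(lidar_dim, 64), nn.ReLU())
        self.radar_head = nn.Sequential(nn.Linear(radar_dim, 64), nn.ReLU())
        self.fusion = nn.Sequential(nn.Linear(64 * 3, 64), nn.ReLU(),
                                    nn.Linear(64, n_classes))

    def forward(self, cam_feat, lidar_feat, radar_feat):
        fused = torch.cat([self.cam_head(cam_feat),
                           self.lidar_head(lidar_feat),
                           self.radar_head(radar_feat)], dim=-1)
        return self.fusion(fused)  # scores over classes such as car, deer, snowbank
```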

Rawashdeh’s Michigan Tech collaborators include Nader Abu-Alrub, his doctoral student in electrical and computer engineering, and Jeremy Bos, assistant professor of electrical and computer engineering, along with master’s degree students and graduates from Bos’s lab: Akhil Kurup, Derek Chopp, and Zach Jeffries. Bos explains that LiDAR, infrared, and other sensors on their own are like the hammer in an old adage. “To a hammer, everything looks like a nail,” says Bos. “Well, if you have a screwdriver and a rivet gun, then you have more options.”

The other Michigan Tech presentation dealt with helping autonomous vehicle systems differentiate between animals and snow.

Most autonomous sensors and self-driving algorithms are being developed in sunny, clear landscapes. Knowing that the rest of the world is not like Arizona or southern California, Bos’s lab began collecting local data in a Michigan Tech autonomous vehicle (safely driven by a human) during heavy snowfall. Rawashdeh’s team, notably Abu-Alrub, pored over more than 1,000 frames of LiDAR, radar, and image data from snowy roads in Germany and Norway to start teaching its AI program what snow looks like and how to see past it.

“All snow is not created equal,” Bos says, pointing out that the variety of snow makes sensor detection a challenge. Rawashdeh adds that pre-processing the data and ensuring accurate labeling are important steps for accuracy and safety: “AI is like a chef — if you have good ingredients, there will be an excellent meal,” he says. “Give the AI learning network dirty sensor data and you’ll get a bad result.”
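
One concrete form that cleanup can take, sketched here only as an illustration of the idea rather than the team’s actual pipeline, is filtering airborne snowflakes out of a LiDAR point cloud: flakes tend to appear as sparse, isolated returns, so points with too few neighbors can be dropped before the data is labeled.

```python
# Illustrative snow-clutter filter for a LiDAR point cloud (N x 3 array, meters).
# Radius and neighbor thresholds are made-up values, not the researchers' settings.
import numpy as np
from scipy.spatial import cKDTree

def remove_snow_clutter(points: np.ndarray, radius: float = 0.3,
                        min_neighbors: int = 3) -> np.ndarray:
    """Keep only points that have enough nearby returns within `radius` meters."""
    tree = cKDTree(points)
    # Count neighbors within the radius; an isolated snowflake return has few.
    counts = tree.query_ball_point(points, r=radius, return_length=True)
    return points[counts > min_neighbors]
```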

Low-quality data is one problem, and so is actual dirt. Much like road grime, snow buildup on the sensors is a solvable but bothersome issue. Once the view is clear, autonomous vehicle sensors still don’t always agree about detecting obstacles. Bos mentions a great example from cleaning up locally gathered data: the vehicle encountered a deer. LiDAR said the blob was nothing (a 30 percent chance of an obstacle), the camera saw it like a sleepy human at the wheel (a 50 percent chance), and the infrared sensor shouted “whoa” (90 percent sure it was a deer).

Getting the sensors and their risk assessments to talk and learn from each other is like the Indian parable of three blind men who find an elephant: each touches a different part of the elephant — the creature’s ear, trunk, and leg — and comes to a different conclusion about what kind of animal it is. Using sensor fusion, Rawashdeh and Bos want autonomous sensors to collectively figure out the answer — be it elephant, deer, or snowbank. As Bos puts it, “Rather than strictly voting, by using sensor fusion we will come up with a new estimate.”
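
For a sense of what “a new estimate” can mean in practice, one textbook approach is to assume the sensors err independently and add their confidences in log-odds form. Applied to the deer example above, the 30, 50, and 90 percent readings combine to roughly 79 percent, enough to treat the blob as an animal. The article does not specify the researchers’ exact fusion rule, so the sketch below is purely illustrative.

```python
# Illustrative log-odds fusion of independent per-sensor obstacle probabilities.
import math

def fuse_probabilities(probs):
    """Combine independent sensor confidences that an obstacle is present."""
    # Summing log-odds assumes independent sensor errors and a 50/50 prior.
    log_odds = sum(math.log(p / (1.0 - p)) for p in probs)
    return 1.0 / (1.0 + math.exp(-log_odds))

# LiDAR: 30 percent, camera: 50 percent, infrared: 90 percent
print(round(fuse_probabilities([0.30, 0.50, 0.90]), 2))  # 0.79
```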