
Researchers at the University of Michigan in Ann Arbor have developed a 3-D motion tracking system that could one day replace LiDAR and cameras in autonomous technologies.
Still in its infancy, the technology combines transparent light detectors with neural network methods that interpret what the technology “sees.” Future applications include automated manufacturing, biomedical engineering, and autonomous driving. A paper on the system was published in Nature Communications.
The system uses transparent, nanoscale, highly sensitive graphene detectors developed by Zhaohui Zhong, associate professor of electrical and computer engineering, which are believed to be the first of their kind.
“The in-depth combination of graphene nanodevices and machine learning algorithms can lead to fascinating opportunities in both science and technology,” says Dehui Zhang, a doctoral student in electrical and computer engineering. “Our system combines computational power efficiency, fast tracking speed, compact hardware, and a lower cost compared with several other solutions.”
The graphene photodetectors have been tuned to absorb only 10 percent of the light they’re exposed to, making them nearly transparent. Because graphene is so sensitive to light, this is sufficient to generate images that can be reconstructed through computational imaging. The photodetectors are stacked one behind another, resulting in a compact system, and each layer records a different focal plane, which enables 3-D imaging.
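Because each layer absorbs only a tenth of the incident light, a detector stacked behind another still receives roughly 90 percent of the scene, which is what makes the focal-stack arrangement workable. To make the idea concrete, here is a minimal sketch of depth readout from a focal stack; it is illustrative only, not the authors’ reconstruction pipeline, and the focus measure and toy data below are assumptions. A point source appears sharpest at the focal plane closest to its true depth, so picking the sharpest layer gives a depth estimate.

```python
# Illustrative sketch (not the authors' code): estimate the depth of a
# bright point source from a focal stack, i.e., a set of images captured
# at different focal planes by stacked transparent detectors.
import numpy as np

def sharpness(image: np.ndarray) -> float:
    """Crude focus measure: intensity variance. An in-focus point source
    concentrates energy in few pixels, raising the variance relative to
    the same energy spread out by defocus blur."""
    return float(image.var())

def estimate_depth_plane(focal_stack: np.ndarray) -> int:
    """focal_stack has shape (num_planes, height, width); return the
    index of the focal plane where the image is sharpest."""
    return int(np.argmax([sharpness(layer) for layer in focal_stack]))

# Toy usage: a 3-plane stack in which plane 1 holds the in-focus spot.
stack = np.zeros((3, 32, 32))
stack[0, 14:18, 14:18] = 0.25   # defocused: energy spread over 16 pixels
stack[1, 16, 16] = 4.0          # in focus: energy in a single pixel
stack[2, 13:19, 13:19] = 0.11   # even more defocused
print(estimate_depth_plane(stack))  # -> 1
```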
The team also tackled real-time motion tracking, which is essential in autonomous robotic applications. That required a way to determine the position and orientation of the object being tracked. Typical approaches rely on LiDAR systems and light-field cameras, both of which have significant limitations, the researchers say; others use metamaterials or multiple cameras.
Such systems need more than hardware, however; they also require deep learning algorithms. Zhen Xu, a doctoral student in electrical and computer engineering, built the optical setup and worked with the team to enable a neural network to decipher the positional information.
The network is trained to search for specific objects in a scene and then focus only on the object of interest, for example a pedestrian or an object moving into your lane. The technology works well for stable systems, such as automated manufacturing, and for projecting human body structures in 3-D for the medical community.
“It takes time to train your neural network,” says Ted Norris, project leader and professor of electrical and computer engineering. “But once it’s done, it’s done. So, when a camera sees a certain scene, it can give an answer in milliseconds.”
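For a sense of what such a network might look like, here is a hedged sketch in PyTorch; the architecture is hypothetical, not the one described in the paper. It treats the focal stack as a multi-channel image, one channel per focal plane, and regresses the tracked object’s (x, y, z) position. Once trained, a single forward pass takes only milliseconds, in line with Norris’s point.

```python
# Hypothetical sketch, not the authors' model: a small convolutional
# network that maps a focal stack (one channel per focal plane) to a
# 3-D position estimate.
import torch
import torch.nn as nn

class FocalStackTracker(nn.Module):
    def __init__(self, num_planes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(num_planes, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),   # pool to a fixed 4x4 grid
        )
        self.head = nn.Linear(32 * 4 * 4, 3)  # outputs (x, y, z)

    def forward(self, stack: torch.Tensor) -> torch.Tensor:
        # stack: (batch, num_planes, height, width)
        return self.head(self.features(stack).flatten(1))

model = FocalStackTracker(num_planes=2)
dummy = torch.randn(1, 2, 64, 64)   # one two-plane, 64x64 focal stack
print(model(dummy).shape)           # -> torch.Size([1, 3])
```

Treating the focal planes as input channels lets ordinary 2-D convolutions fuse depth cues across planes without an explicit 3-D reconstruction step.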
Doctoral student Zhengyu Huang led the algorithm design for the neural network. The algorithms the team developed are unlike the traditional signal processing algorithms used in long-standing imaging technologies such as X-ray and MRI.
The team demonstrated success tracking a beam of light as well as a ladybug. They also showed that the technique is scalable: they believe as few as 4,000 pixels would suffice for some practical applications, and 400×600-pixel arrays for many more.
The technology could be made with other materials, but the researchers favor graphene because it doesn’t require artificial illumination and is environmentally friendly.
“Graphene is now what silicon was in 1960,” Norris says. “As we continue to develop this technology, it could motivate the kind of investment that would be needed for commercialization.”
The paper is titled “Neural Network Based 3D Tracking with a Graphene Transparent Focal Stack Imaging System.”