Researchers at the University of Michigan in Ann Arbor have developed the first programmable memristor computer, which could lead to the processing of artificial intelligence directly on small, energy-constrained devices such as smartphones and sensors.
A smartphone AI processor would mean that voice commands would no longer have to be sent to the cloud for interpretation, speeding up response time.
“Everyone wants to put an AI processor on smartphones, but you don’t want your cell phone battery to drain very quickly,” says Wei Lu, a U-M professor of electrical and computer engineering and senior author of the study in Nature Electronics.
In medical devices, the ability to run AI algorithms without the cloud would enable better security and privacy.
The memristor, an electrical resistor with a memory, can serve as a form of information storage. Because memristors store and process information in the same location, they sidestep the biggest bottleneck for computing speed and power: the connection between memory and processor.
This is important for machine-learning algorithms that deal with lots of data to do things such as identify objects in photos and videos or predict which hospital patients are at higher risk of infection. Programmers prefer to run these algorithms on graphics processing units (GPUs) rather than a computer’s main processor, the central processing unit (CPU).
GPUs perform better at machine learning tasks because they have thousands of small cores running calculations simultaneously, rather than a string of calculations waiting their turn on one of the few powerful cores in a CPU. Memristor arrays take this even further: each individual memristor can do its own calculation, allowing thousands of operations within a core to be performed at once. The experimental-scale computer contained more than 5,800 memristors. A commercial design could include millions.
Memristor arrays are suited to machine learning problems due to the way machine learning algorithms turn data into vectors, or lists of data points. In predicting a patient’s risk of infection, the vector might list numerical representations of a patient’s risk factors.
Machine learning algorithms compare input vectors with feature vectors representing certain traits of data that are stored in memory. If matched, the system recognizes the input data as having that specific trait. Vectors are stored in matrices that can be mapped directly onto the memristor arrays.
As data is fed through the array, the bulk of mathematical processing occurs through the natural resistances in the memristors, which eliminates the need to move feature vectors in and out of the memory to perform the computations. This makes the arrays highly efficient at complicated matrix calculations. While earlier studies demonstrated the potential of memristor arrays for speeding up machine learning, they needed external computing elements to function.
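The analog computation described above can be sketched in conventional code. The following is a hypothetical illustration (the values and variable names are made up, not from the study): feature vectors are stored as conductances in the array, the input vector is applied as voltages, Ohm’s law multiplies each voltage by a conductance, and Kirchhoff’s current law sums the currents on each column wire, so a full matrix-vector product emerges in a single step.

```python
import numpy as np

# Hypothetical sketch of a memristor crossbar's matrix-vector multiply.
# Each column of G holds one feature vector as conductance values;
# V is the input data vector applied as row voltages.
rng = np.random.default_rng(0)
G = rng.uniform(0.0, 1.0, size=(4, 3))  # conductances: 4 inputs x 3 feature columns
V = rng.uniform(0.0, 1.0, size=4)       # input voltages (the data vector)

# Ohm's law gives per-cell currents G[i, j] * V[i]; Kirchhoff's current
# law sums them along each column wire. Digitally, that is simply:
I = V @ G  # column currents = the matrix-vector product

# The column with the largest current is the best-matching feature vector.
best_match = int(np.argmax(I))
print(I, best_match)
```

In the physical array this sum happens in the analog domain, which is why the feature vectors never have to be moved out of memory to perform the computation.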
Lu’s team worked with associate professor Zhengya Zhang and professor Michael Flynn to design a chip that could integrate the memristor array with all the other elements needed to program and run it. The components included a conventional digital processor and communication channels and digital/analog converters to serve as interpreters between the analog memristor array and the rest of the computer.
Lu’s team then integrated the memristor array directly on the chip at U-M’s Lurie Nanofabrication Facility. They also developed software to map machine learning algorithms onto the matrix-like structure of the memristor array.
The team demonstrated the device with three machine learning algorithms: perceptron, used to classify information; sparse coding, which compresses and categorizes data; and a two-layer neural network, designed to find patterns in complex data.
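To make the first of those concrete, here is a minimal perceptron sketch on made-up toy data (the data and parameters are illustrative assumptions, not from the study). On the memristor hardware, the learned weight vector would be stored as conductances, and the dot product in the prediction step would be the analog multiply-accumulate the array performs natively.

```python
import numpy as np

def perceptron_train(X, y, epochs=20, lr=0.1):
    """Learn weights w and bias b so that sign(x @ w + b) matches labels y (+/-1)."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) <= 0:  # misclassified: nudge the boundary
                w += lr * yi * xi
                b += lr * yi
    return w, b

# Linearly separable toy data: class +1 roughly above the line x0 + x1 = 1.
X = np.array([[0.2, 0.1], [0.9, 0.8], [0.1, 0.3], [0.8, 0.9]])
y = np.array([-1, 1, -1, 1])

w, b = perceptron_train(X, y)
preds = np.sign(X @ w + b)  # the dot product is the crossbar's native operation
print(preds)
```

On this separable toy set the perceptron converges to weights that classify all four points correctly.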
Memristors can’t yet be manufactured as uniformly as commercial use requires, and the information stored in the array isn’t entirely reliable because it is represented on an analog continuum rather than as discrete digital values.
Lu plans to commercialize this technology. The study is titled “A Fully Integrated Reprogrammable Memristor-CMOS System for Efficient Multiply-Accumulate Operations.” The research was funded by the Defense Advanced Research Projects Agency, the Center for Applications Driving Architectures, and the National Science Foundation.