MSU, Facebook Develop New Deepfake Detection Model

MSU and Facebook AI researchers have developed a reverse-engineering method able to detect and attribute deepfakes. // Stock Photo

Artificial intelligence experts from Michigan State University and Facebook have partnered on a reverse-engineering research method able to detect and attribute deepfakes, allowing for easier investigation into incidents of coordinated disinformation and opening new doors for future research.

Deepfake technology, in which a person in an existing image or video is replaced with someone else’s likeness, is relatively new and presents a massive cybersecurity challenge when deepfakes are created with malicious intent.

Despite its relative youth, deepfake technology has made it nearly impossible to tell whether an image of someone online shows a real human being. Current methods focus on distinguishing a deepfake from a real image based on generative models seen during training. The method being developed by MSU and Facebook goes beyond this.

“Our method will facilitate deepfake detection and tracing in real-world settings where the deepfake image itself is often the only information detectors have to work with,” says Xiaoming Liu, MSU Foundation professor of computer science. “It’s important to go beyond current methods of image attribution because a deepfake could be created using a generative model that the current detector has not seen during its training.”

The new method, detailed in the paper “Reverse Engineering of Generative Models: Inferring Model Hyperparameters from Generated Images,” was developed by Liu and MSU College of Engineering doctoral candidate Vishal Asnani, along with Facebook AI researchers Xi Yin and Tal Hassner.

Reverse engineering, while not a new concept in machine learning, is a different way of approaching the problem of deepfakes. Prior work on reverse engineering relies on preexisting knowledge of the models being examined, which limits its effectiveness in real-world cases where that information is unavailable.

“Our reverse engineering method relies on uncovering the unique patterns behind the AI model used to generate a single deepfake image,” Hassner says. “With model parsing, we can estimate properties of the generative models used to create each deepfake, and even associate multiple deepfakes to the model that possibly produced them. This provides information about each deepfake, even ones where no prior information existed.”

The method was tested by performing cross-validation, a practice used to determine how accurately a predictive model will perform in practice, in mimicked real-world scenarios using a dataset of 100,000 synthetic images generated by 100 publicly available generative models.
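
To make that testing setup concrete, here is a minimal sketch, in Python, of cross-validation grouped by generative model, so that the models contributing test images never appear during training. The dataset sizes, features, and labels below are toy stand-ins, not the researchers’ data or code.

```python
# A minimal sketch of model-level cross-validation: folds are grouped by a
# hypothetical `model_ids` array recording which generative model produced
# each image, so test-time models are always unseen during training.
import numpy as np
from sklearn.model_selection import GroupKFold

rng = np.random.default_rng(0)
n_images, n_models = 1000, 20                 # toy stand-ins for 100,000 / 100
features = rng.normal(size=(n_images, 64))    # hypothetical image features
labels = rng.integers(0, 2, size=n_images)    # 1 = deepfake, 0 = real
model_ids = rng.integers(0, n_models, size=n_images)

for fold, (train_idx, test_idx) in enumerate(
        GroupKFold(n_splits=5).split(features, labels, groups=model_ids)):
    # No generative model contributes images to both splits.
    assert not set(model_ids[train_idx]) & set(model_ids[test_idx])
    print(f"fold {fold}: {len(train_idx)} train / {len(test_idx)} test images")
```

Grouping by model rather than by image is what makes the evaluation mimic the real-world case of deepfakes from generators the detector has never seen.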

“The main idea is to estimate the fingerprint for each image and use it for model parsing,” Liu says. “Our framework can not only perform model parsing, but also extend to deepfake detection and image attribution.”
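
As an illustration of that two-stage framework, the sketch below pairs a fingerprint-estimation network with a parsing network in PyTorch. The layer sizes, class names, and the number of predicted hyperparameters are assumptions made for illustration, not the architecture described in the paper.

```python
# A toy two-stage pipeline: one network estimates a per-image "fingerprint",
# and a second network parses that fingerprint into predictions about the
# generative model. All shapes and sizes here are illustrative assumptions.
import torch
import torch.nn as nn

class FingerprintEstimator(nn.Module):
    """Predicts an image-sized fingerprint residual from an input image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),  # fingerprint, same shape as input
        )

    def forward(self, x):
        return self.net(x)

class ModelParser(nn.Module):
    """Maps a fingerprint to hyperparameter predictions for the source model."""
    def __init__(self, n_hyperparams=15):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, n_hyperparams),    # one output per hyperparameter
        )

    def forward(self, fingerprint):
        return self.net(fingerprint)

images = torch.randn(4, 3, 128, 128)           # a toy batch of images
fingerprints = FingerprintEstimator()(images)
hyperparam_logits = ModelParser()(fingerprints)
print(fingerprints.shape, hyperparam_logits.shape)
```

The same estimated fingerprint could feed other prediction heads as well, which is one way a single framework can cover model parsing, deepfake detection, and image attribution.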

Fingerprints work on deepfake images much as they do with digital cameras, which leave behind small traces of data in every photograph they take. The system separates real image fingerprints from deepfake fingerprints based on the similar traces that generative models leave behind.
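
The camera analogy can be made concrete with a crude stand-in: treating the high-frequency residual left after smoothing an image as its fingerprint. The blur-based residual below is only an illustrative proxy; the researchers’ system learns its fingerprints rather than computing them this way.

```python
# A rough analogue of a device fingerprint: subtract a smoothed copy of the
# image to isolate the high-frequency residue where subtle generation (or
# sensor) traces tend to live. Purely illustrative, not the learned method.
import numpy as np
from scipy.ndimage import gaussian_filter

def residual_fingerprint(image: np.ndarray, sigma: float = 1.0) -> np.ndarray:
    """Return the image minus its smoothed version: the high-frequency residue."""
    return image - gaussian_filter(image, sigma=sigma)

image = np.random.rand(128, 128)    # toy grayscale image
fp = residual_fingerprint(image)
print(fp.mean(), fp.std())          # the residual is roughly zero-mean
```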

Liu says model parsing then uses this fingerprint to predict the generative model’s hyperparameters, the individual components that make a functioning whole when combined. This information can help companies like Facebook root out harmful deepfake material from their platforms.
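
To sketch what predicting hyperparameters might look like, the toy example below assumes, purely for illustration, that each generative model is described by one discrete design choice (its loss type) and one continuous quantity (its layer count), and maps an already-parsed fingerprint vector to both. The categories, dimensions, and losses are hypothetical, not taken from the paper.

```python
# A toy hyperparameter-prediction head: classify a discrete design choice
# (loss type) and regress a continuous one (layer count) from a fingerprint
# vector. All names and sizes here are illustrative assumptions.
import torch
import torch.nn as nn

LOSS_TYPES = ["adversarial", "l1", "perceptual"]   # hypothetical categories

head = nn.Linear(64, len(LOSS_TYPES) + 1)          # class logits + 1 regression

fingerprint_vec = torch.randn(8, 64)               # toy parsed fingerprints
out = head(fingerprint_vec)
loss_logits, layer_count = out[:, :-1], out[:, -1]

# Training would combine a classification loss on the discrete choice with a
# regression loss on the continuous one.
target_loss_type = torch.randint(0, len(LOSS_TYPES), (8,))
target_layers = torch.randn(8)
loss = (nn.functional.cross_entropy(loss_logits, target_loss_type)
        + nn.functional.mse_loss(layer_count, target_layers))
print(float(loss))
```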