Is Robust Machine Learning Possible?

Machine learning has shown remarkable success in solving complex classification problems, but current machine learning techniques produce models that are vulnerable to adversaries who may wish to confuse them, especially when used for security applications like malware classification.

The key assumption of machine learning is that a model trained on training data will perform well in deployment, because the training data is representative of the data the classifier will see when deployed.

When machine learning classifiers are used in security applications, however, adversaries may be able to generate samples that exploit the invalidity of this assumption.

Our project is focused on understanding, evaluating, and improving the effectiveness of machine learning methods in the presence of motivated and sophisticated adversaries.


Genetic Search
Evolutionary framework to automatically find variants that preserve malicious behavior but evade a target classifier.
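The evolutionary search can be sketched as a simple loop: mutate a population of variants, use an oracle to discard variants that no longer exhibit the malicious behavior, and select the variants the target classifier scores as least malicious. The sketch below is a minimal toy illustration, not the actual framework: `MARKERS`, the `fitness`, `oracle`, and `mutate` functions are all stand-in assumptions (in the real system, fitness comes from the target classifier's score and the oracle from sandbox execution).

```python
import random

# Toy stand-ins (assumptions, not the real framework's components):
# MARKERS model suspicious features a classifier scores;
# "payload" stands for the malicious behavior the oracle must preserve.
MARKERS = frozenset({"js", "openaction", "launch", "embed"})

def fitness(variant):
    """Toy classifier score: fraction of suspicious markers present."""
    return len(variant & MARKERS) / len(MARKERS)

def oracle(variant):
    """Toy sandbox oracle: malicious behavior survives iff the payload remains."""
    return "payload" in variant

def mutate(variant, rng):
    """Randomly drop or add one element (a mutation may break the payload)."""
    if rng.random() < 0.7 and len(variant) > 1:
        return variant - {rng.choice(sorted(variant))}
    return variant | {rng.choice(sorted(MARKERS))}

def genetic_evade(seed, pop_size=20, generations=50, rng_seed=0):
    """Search for a variant that keeps its payload but scores benign."""
    rng = random.Random(rng_seed)
    population = [seed]
    for _ in range(generations):
        children = [mutate(rng.choice(population), rng) for _ in range(pop_size)]
        # Keep only oracle-confirmed variants, plus the parents (elitism).
        viable = [c for c in children if oracle(c)] + population
        viable.sort(key=fitness)          # lower score = more evasive
        population = viable[: max(1, pop_size // 4)]
        if fitness(population[0]) < 0.5:  # toy classifier now says benign
            return population[0]
    return None
```

Starting from a seed such as `frozenset({"payload"}) | MARKERS`, the loop strips away the features the toy classifier keys on while the oracle guarantees the payload is never lost.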
Feature Squeezing
Reducing the search space available to adversaries by coalescing many distinct inputs into a single squeezed input.
(The top row shows L0 adversarial examples, squeezed by median smoothing.)
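The detection idea can be sketched concisely: squeeze the input (here with a median filter, as in the figure above), and flag the input as adversarial if the model's prediction moves too far between the original and squeezed versions. The code below is a minimal illustration under stated assumptions: `toy_model`, the 8×8 inputs, and the threshold are hypothetical, chosen so that a single-pixel (L0-style) perturbation swings the toy prediction while median smoothing removes it.

```python
import numpy as np

def median_smooth(x, k=3):
    """k x k median filter (the squeezer); reflect-pad the borders."""
    pad = k // 2
    padded = np.pad(x, pad, mode="reflect")
    out = np.empty_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out

def squeezing_detector(model, x, threshold=0.5):
    """Flag x as adversarial if the model's output moves more than
    `threshold` (L1 distance) between the original and squeezed input."""
    score = np.abs(model(x) - model(median_smooth(x))).sum()
    return score > threshold

# Toy model (an assumption for illustration): softmax over the max pixel,
# so one bright adversarial pixel flips the prediction, but median
# smoothing erases it and restores the original prediction.
def toy_model(x):
    z = np.array([10 * x.max(), 10 * (1 - x.max())])
    e = np.exp(z - z.max())
    return e / e.sum()
```

For example, a uniform gray image passes the detector unchanged, while the same image with a single pixel pushed to 1.0 is flagged, since smoothing removes the spike and the toy model's output shifts by nearly 1.0 in L1 distance.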


Weilin Xu, David Evans, Yanjun Qi. Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks. arXiv preprint, 4 April 2017. [PDF]

Weilin Xu, Yanjun Qi, and David Evans. Automatically Evading Classifiers: A Case Study on PDF Malware Classifiers. Network and Distributed System Security Symposium (NDSS) 2016, 21-24 February 2016, San Diego, California. Full paper (15 pages): [PDF]


David Evans’ Talk at USENIX Enigma 2017, Oakland, CA, 1 February 2017. [Speaker Deck]

More Talks…



Weilin Xu (Lead PhD Student, leading work on Feature Squeezing and Genetic Evasion)
Anant Kharkar (Undergraduate Researcher working on Genetic Evasion)
Helen Simecek (Undergraduate Researcher working on Genetic Evasion)

David Evans (Faculty Co-Advisor)
Yanjun Qi (Faculty Co-Advisor)