Papers

Xiao Zhang and David Evans. Cost-Sensitive Robustness against Adversarial Examples. arXiv preprint, 22 October 2018. [PDF]

Weilin Xu, David Evans, and Yanjun Qi. Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks. 2018 Network and Distributed System Security Symposium. 18-21 February 2018, San Diego, California. Full paper (15 pages): [PDF]

Qixue Xiao, Kang Li, Deyue Zhang, and Weilin Xu. Security Risks in Deep Learning Implementations. 1st Deep Learning and Security Workshop (co-located with the 39th IEEE Symposium on Security and Privacy). San Francisco, California. 24 May 2018. [PDF]

Weilin Xu, David Evans, and Yanjun Qi. Feature Squeezing Mitigates and Detects Carlini/Wagner Adversarial Examples. arXiv preprint, 30 May 2017. [PDF, 3 pages]

Ji Gao, Beilun Wang, Zeming Lin, Weilin Xu, and Yanjun Qi. DeepCloak: Masking Deep Neural Network Models for Robustness against Adversarial Samples. ICLR 2017 Workshops, 24-26 April 2017. [PDF]

Weilin Xu, Yanjun Qi, and David Evans. Automatically Evading Classifiers: A Case Study on PDF Malware Classifiers. 2016 Network and Distributed System Security Symposium. 21-24 February 2016, San Diego, California. Full paper (15 pages): [PDF]