Papers
Xiao Zhang and David Evans. Incorporating Label Uncertainty in Understanding Adversarial Robustness. In 10th International Conference on Learning Representations (ICLR). April 2022. [arXiv] [OpenReview] [Code]
Yulong Tian, Fnu Suya, Fengyuan Xu, and David Evans. Stealthy Backdoors as Compression Artifacts. IEEE Transactions on Information Forensics and Security (Volume 17). 16 March 2022. [PDF] [arXiv] [IEEE Page] [Code]
Fnu Suya, Saeed Mahloujifar, Anshuman Suri, David Evans, and Yuan Tian. Model-Targeted Poisoning Attacks with Provable Convergence. In 38th International Conference on Machine Learning (ICML). July 2021. [arXiv] [PMLR] [PDF] [Code] [Blog]
Jack Prescott, Xiao Zhang, and David Evans. Improved Estimation of Concentration Under ℓp-Norm Distance Metrics Using Half Spaces. In Ninth International Conference on Learning Representations (ICLR). May 2021. [arXiv] [OpenReview] [Code]
Hannah Chen, Yangfeng Ji, and David Evans. Finding Friends and Flipping Frenemies: Automatic Paraphrase Dataset Augmentation Using Graph Theory. In Findings of ACL: Empirical Methods in Natural Language Processing (EMNLP). 16–18 November 2020. [PDF] [arXiv] [ACL] [Code]
Fnu Suya, Jianfeng Chi, David Evans, and Yuan Tian. Hybrid Batch Attacks: Finding Black-box Adversarial Examples with Limited Queries. In 29th USENIX Security Symposium. Boston, MA. August 12–14, 2020. [PDF] [arXiv] [Code]
Sicheng Zhu, Xiao Zhang, and David Evans. Learning Adversarially Robust Representations via Worst-Case Mutual Information Maximization. In 37th International Conference on Machine Learning (ICML). 12–18 July 2020. [arXiv]
Xiao Zhang★, Jinghui Chen★, Quanquan Gu, and David Evans. Understanding the Intrinsic Robustness of Image Distributions using Conditional Generative Models. In 23rd International Conference on Artificial Intelligence and Statistics (AISTATS). Palermo, Italy. June 3–5, 2020. [PDF] [arXiv] [Code]
Saeed Mahloujifar★, Xiao Zhang★, Mohammad Mahmoody, and David Evans. Empirically Measuring Concentration: Fundamental Limits on Intrinsic Robustness. In NeurIPS 2019. Vancouver, December 2019. (Earlier versions appeared in the Debugging Machine Learning Models and Safe Machine Learning: Specification, Robustness and Assurance workshops attached to the Seventh International Conference on Learning Representations (ICLR), New Orleans, May 2019.) [PDF] [arXiv] [Post] [Code]
Xiao Zhang and David Evans. Cost-Sensitive Robustness against Adversarial Examples. In Seventh International Conference on Learning Representations (ICLR). New Orleans. May 2019. [arXiv] [OpenReview] [PDF]
Weilin Xu, David Evans, and Yanjun Qi. Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks. 2018 Network and Distributed System Security Symposium. 18–21 February 2018, San Diego, California. Full paper (15 pages): [PDF]
Qixue Xiao, Kang Li, Deyue Zhang, and Weilin Xu. Security Risks in Deep Learning Implementations. 1st Deep Learning and Security Workshop (co-located with the 39th IEEE Symposium on Security and Privacy). San Francisco, California. 24 May 2018. [PDF]
Weilin Xu, David Evans, and Yanjun Qi. Feature Squeezing Mitigates and Detects Carlini/Wagner Adversarial Examples. arXiv preprint, 30 May 2017. [PDF, 3 pages]
Ji Gao, Beilun Wang, Zeming Lin, Weilin Xu, and Yanjun Qi. DeepCloak: Masking Deep Neural Network Models for Robustness against Adversarial Samples. ICLR Workshops, 24–26 April 2017. [PDF]
Weilin Xu, Yanjun Qi, and David Evans. Automatically Evading Classifiers: A Case Study on PDF Malware Classifiers. Network and Distributed System Security Symposium 2016, 21–24 February 2016, San Diego, California. Full paper (15 pages): [PDF]