X. Hua, H. Xu, J. Blanchet and V. A. Nguyen, “Human Imperceptible Attacks and Applications to Improve Fairness,” 2022 Winter Simulation Conference (WSC), Singapore, 2022, pp. 2641-2652, doi: 10.1109/WSC57314.2022.10015376.

Abstract

Modern neural networks are able to perform at least as well as humans in numerous tasks involving object classification and image generation. However, small perturbations which are imperceptible to humans may significantly degrade the performance of well-trained deep neural networks. We provide a Distributionally Robust Optimization (DRO) framework which integrates human-based image quality assessment methods to design optimal attacks that are imperceptible to humans but significantly damaging to deep neural networks. Through extensive experiments, we show that our attack algorithm generates better-quality (less perceptible to humans) attacks than other state-of-the-art human imperceptible attack methods. Moreover, we demonstrate that DRO training using our optimally designed human imperceptible attacks can improve group fairness in image classification. Towards the end, we provide an …
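To give a concrete sense of the kind of perturbation the abstract describes, below is a minimal, illustrative PyTorch sketch of a gradient-based attack whose objective trades off classification loss against a perceptual-distortion penalty. This is not the authors' DRO formulation: the model, inputs (x, y), step sizes, and the use of a simple mean-squared-error proxy in place of a human-based image quality metric are all assumptions made here for illustration.

```python
# Illustrative sketch only: a projected-gradient-style attack that maximizes the
# classification loss while penalizing a crude perceptual-distortion proxy.
# The paper instead uses human-based image quality assessment inside a DRO framework.
import torch
import torch.nn.functional as F

def perceptual_attack(model, x, y, steps=40, step_size=1e-2, lam=10.0):
    """Return an adversarial example that stays (approximately) close to x perceptually."""
    delta = torch.zeros_like(x, requires_grad=True)  # additive perturbation
    for _ in range(steps):
        x_adv = (x + delta).clamp(0.0, 1.0)
        # Attack objective: raise the loss, but pay a penalty for visible distortion.
        loss = F.cross_entropy(model(x_adv), y) - lam * F.mse_loss(x_adv, x)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta += step_size * grad.sign()  # signed-gradient ascent step
    return (x + delta).clamp(0.0, 1.0).detach()
```

In the same spirit, such perceptually constrained adversarial examples could be folded back into training as an augmentation step, which is the role the optimally designed attacks play in the paper's fairness experiments.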

Authors: Xinru Hua, Huanzhong Xu, Jose Blanchet, Viet Anh Nguyen
Publication date: 2022/12/11
Conference: 2022 Winter Simulation Conference (WSC)
Pages: 2641-2652
Publisher: IEEE