Understanding and mitigating universal adversarial perturbations for computer vision neural networks
Kenneth Tan Co
Deep neural networks (DNNs) have become the algorithm of choice for many computer vision applications. They achieve human-level performance on many computer vision tasks and enable the automation and large-scale deployment of applications such as object tracking, autonomous vehicles, and medical imaging. However, DNNs expose software applications to systemic vulnerabilities in the form of Universal Adversarial Perturbations (UAPs): single input perturbations that cause DNNs to make classification errors across large sets of inputs.
Our aim is to improve the robustness of computer vision DNNs to UAPs without sacrificing the models’ predictive performance. To this end, we increase our understanding of these vulnerabilities by investigating the visual structures and patterns that commonly appear in UAPs. We demonstrate the efficacy and pervasiveness of UAPs by showing how Procedural Noise patterns can be used to generate efficient zero-knowledge attacks against different computer vision models and tasks at minimal cost to the attacker. We then evaluate the UAP robustness of various shape- and texture-biased models, and find that applying them in ensembles provides only marginal improvements in robustness.
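To illustrate the attack, the following is a minimal sketch of a Procedural Noise UAP built from Perlin noise with a sine colour map, in the spirit of the attacks studied in the thesis. The noise parameters (`period`, `freq`, `eps`) are illustrative placeholders; in practice such parameters would be tuned with black-box optimisation against the target model.

```python
import numpy as np

def fade(t):
    # Perlin's quintic smoothstep
    return 6 * t**5 - 15 * t**4 + 10 * t**3

def perlin_noise(size, period, seed=0):
    """Classic 2D Perlin noise on a size x size grid with `period` lattice cells."""
    rng = np.random.default_rng(seed)
    angles = rng.uniform(0.0, 2.0 * np.pi, (period + 1, period + 1))
    grads = np.dstack((np.cos(angles), np.sin(angles)))  # unit gradients at lattice points
    lin = np.linspace(0.0, period, size, endpoint=False)
    x, y = np.meshgrid(lin, lin)
    xi, yi = x.astype(int), y.astype(int)
    xf, yf = x - xi, y - yi

    def dot_grad(ix, iy, dx, dy):
        g = grads[iy, ix]
        return g[..., 0] * dx + g[..., 1] * dy

    n00 = dot_grad(xi,     yi,     xf,       yf)
    n10 = dot_grad(xi + 1, yi,     xf - 1.0, yf)
    n01 = dot_grad(xi,     yi + 1, xf,       yf - 1.0)
    n11 = dot_grad(xi + 1, yi + 1, xf - 1.0, yf - 1.0)
    u, v = fade(xf), fade(yf)
    return (n00 * (1 - u) + n10 * u) * (1 - v) + (n01 * (1 - u) + n11 * u) * v

def procedural_uap(size=224, period=8, freq=24.0, eps=8 / 255, seed=0):
    """Image-agnostic perturbation: Perlin noise passed through a sine
    colour map, then scaled to an L-infinity budget eps."""
    pattern = np.sin(perlin_noise(size, period, seed) * freq * np.pi)
    return eps * np.sign(pattern)

# Hypothetical usage: `images` is a batch in [0, 1] of shape (N, size, size, 3).
# delta = procedural_uap()
# adversarial = np.clip(images + delta[None, :, :, None], 0.0, 1.0)
```

Because the same perturbation is applied to every input, the attacker needs no per-image optimisation; only a small number of queries is needed to tune the noise parameters, which is what makes such zero-knowledge attacks cheap.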
To mitigate UAP attacks, we develop two novel approaches. First, we propose using the Jacobian of DNNs to measure their sensitivity to input perturbations. We derive theoretical bounds and provide empirical evidence showing that a combination of Jacobian regularisation and ensemble methods increases model robustness against UAPs without degrading the predictive performance of computer vision DNNs. Our results demonstrate a robustness-accuracy trade-off against UAPs that is better than that of conventionally trained models. Second, we design a detection method that analyses hidden-layer activation values to identify a variety of UAP attacks in real time with low latency. We show that our method outperforms existing defences under realistic time and computation constraints.
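As a rough illustration of the first approach, the sketch below adds a Jacobian term to a standard training loss in PyTorch. It estimates the squared Frobenius norm of the input-output Jacobian with a single random projection (a Hutchinson-style estimator), since forming the full Jacobian is expensive for many-class models. The weight `lam` and the single-projection estimator are illustrative choices, not the thesis's exact configuration.

```python
import torch
import torch.nn.functional as F

def jr_loss(model, x, y, lam=0.01):
    """Cross-entropy plus an estimate of ||J||_F^2, where J is the
    Jacobian of the logits with respect to the input."""
    x = x.clone().requires_grad_(True)
    logits = model(x)
    ce = F.cross_entropy(logits, y)
    # Random unit projection v per example: for C classes,
    # E[ C * ||J^T v||^2 ] = ||J||_F^2, an unbiased estimate.
    v = torch.randn_like(logits)
    v = v / v.norm(dim=1, keepdim=True)
    (grad_x,) = torch.autograd.grad((logits * v).sum(), x, create_graph=True)
    jf2 = logits.shape[1] * grad_x.pow(2).sum() / x.shape[0]
    return ce + lam * jf2
```

The detection method can be pictured along the following lines: summary statistics of a hidden layer's activations are collected with a forward hook and fed to a lightweight classifier trained to separate clean from UAP-perturbed inputs. The choice of layer and of statistics here is a hypothetical placeholder.

```python
def activation_features(model, layer, x):
    """Per-example summary statistics of one hidden layer's activations,
    collected via a forward hook; these serve as detector features."""
    captured = []
    handle = layer.register_forward_hook(
        lambda module, inputs, output: captured.append(output.detach().flatten(1)))
    with torch.no_grad():
        model(x)
    handle.remove()
    a = captured[0]
    # Simple per-example features: mean, standard deviation, and max activation.
    return torch.stack([a.mean(1), a.std(1), a.max(1).values], dim=1)
```

Because the features come from a forward pass the model performs anyway, such a detector adds little latency, which is what makes real-time deployment plausible.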
Cite as: Co, Kenneth Tan. Understanding and mitigating universal adversarial perturbations for computer vision neural networks. PhD thesis, Department of Computing, Imperial College London, March 2023. https://doi.org/10.25560/103574