Zaid joined the group as a Research Associate in May 2020. His research focuses on federated learning and adversarial machine learning.
Our paper on procedural noise adversarial examples has been accepted to the 26th ACM Conference on Computer and Communications Security (ACM CCS ’19).

Official: https://dl.acm.org/citation.cfm?id=3345660
Code: https://github.com/kenny-co/procedural-advml

Abstract: Deep Convolutional Networks (DCNs) have been shown to be vulnerable to adversarial examples—perturbed inputs specifically designed to produce intentional errors in the learning algorithms at test time. Existing input-agnostic adversarial perturbations exhibit interesting visual patterns that are currently unexplained. In this paper, we introduce a structured approach for generating Universal Adversarial Perturbations (UAPs) with procedural noise functions. Our approach unveils the systemic vulnerability of popular DCN models like Inception v3 and YOLO […]
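To illustrate the idea, here is a minimal sketch of a procedural-noise perturbation built from Perlin noise via the third-party `noise` package. The function name and parameters (`freq`, `freq_sine`, `eps`) are illustrative assumptions, not the paper's released implementation; see the linked repository for the actual code.

```python
import numpy as np
import noise  # pip install noise; assumption: using its pnoise2() Perlin generator


def perlin_perturbation(size=224, freq=1 / 32.0, freq_sine=36.0,
                        octaves=4, eps=8 / 255.0):
    """Sketch: an input-agnostic perturbation from procedural Perlin noise.

    Hypothetical parameter names chosen for illustration; the perturbation
    is clipped to an L-infinity budget of eps.
    """
    grid = np.empty((size, size), dtype=np.float32)
    for i in range(size):
        for j in range(size):
            grid[i, j] = noise.pnoise2(i * freq, j * freq, octaves=octaves)
    # A sine colour map turns the smooth noise into high-frequency bands.
    pattern = np.sin(grid * freq_sine * np.pi)
    # Scale to the budget and tile the single channel across RGB.
    delta = eps * np.sign(pattern)
    return np.repeat(delta[:, :, None], 3, axis=2)


# Usage: the same perturbation is added to any image x in [0, 1]:
# x_adv = np.clip(x + perlin_perturbation(), 0.0, 1.0)
```

Because the perturbation is generated independently of any input, the same noise pattern can be applied across an entire dataset, which is what makes it a universal (input-agnostic) perturbation.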
We have released the code, with a demo, of our poisoning attack described in the paper “Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization.” You can access the code at this link.
Aaron will be presenting a paper based on his MSc thesis work, “Exploiting Correlations to Detect False Data Injections in Low-Density Wireless Sensor Networks,” at the 5th ACM Cyber-Physical System Security Workshop (CPSS 2019), co-located with ACM ASIACCS.