Neural network compression methods such as pruning and quantization are highly effective for deploying Deep Neural Networks (DNNs) efficiently on edge devices. However, DNNs remain vulnerable to adversarial examples: inconspicuous inputs that are specifically designed to fool these models. In particular, Universal Adversarial Perturbations (UAPs) are a powerful class of attacks that generate perturbations which generalize across large sets of inputs. In this work, we analyze the effect of various compression techniques, including different forms of pruning and quantization, on robustness to UAP attacks. We test the robustness of compressed models to white-box and transfer attacks, comparing them with their […]
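A minimal sketch of this kind of evaluation, under assumed names (`model`, `loader`, and a precomputed, L-infinity-bounded perturbation `uap`) and not the paper's actual code: prune a model with PyTorch's built-in magnitude pruning, then compare its clean accuracy against its accuracy on UAP-perturbed inputs.

```python
import torch
import torch.nn.utils.prune as prune

def prune_model(model, amount=0.5):
    # Magnitude (L1) unstructured pruning on every conv/linear weight.
    for module in model.modules():
        if isinstance(module, (torch.nn.Conv2d, torch.nn.Linear)):
            prune.l1_unstructured(module, name="weight", amount=amount)
    return model

@torch.no_grad()
def accuracy(model, loader, uap=None):
    model.eval()
    correct = total = 0
    for x, y in loader:
        if uap is not None:
            # The same perturbation is added to every input,
            # which is what makes the attack "universal".
            x = (x + uap).clamp(0, 1)
        correct += (model(x).argmax(dim=1) == y).sum().item()
        total += y.numel()
    return correct / total
```

Comparing `accuracy(model, loader, uap)` before and after `prune_model(model)` gives one data point on how compression shifts UAP robustness.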
LiDARs play a critical role in the perception and safe operation of Autonomous Vehicles (AVs). Recent work has demonstrated that it is possible to spoof LiDAR return signals to elicit fake objects. In this work we demonstrate how the same physical capabilities can be used to mount a new, even more dangerous class of attacks: Object Removal Attacks (ORAs). ORAs aim to force 3D object detectors to fail. We leverage the default setting of LiDARs, which record a single return signal per direction, to perturb point clouds in the region of interest (RoI) of 3D objects. By injecting illegitimate points […]
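A hedged numpy sketch of the core idea, not the authors' implementation: returns that fall inside a target object's RoI are displaced along their ray, exploiting the one-return-per-direction setting so the detector no longer sees a coherent object surface. The box-shaped RoI and the fixed displacement are illustrative assumptions.

```python
import numpy as np

def object_removal_attack(points, roi_min, roi_max, shift=5.0):
    """points: (N, 3) LiDAR returns; roi_min/roi_max: (3,) RoI box corners."""
    inside = np.all((points >= roi_min) & (points <= roi_max), axis=1)
    attacked = points.copy()
    rays = attacked[inside]
    dists = np.linalg.norm(rays, axis=1, keepdims=True)
    # Push each selected return `shift` metres further along its ray from
    # the sensor; the spoofed return replaces the legitimate one.
    attacked[inside] = rays * (dists + shift) / dists
    return attacked
```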
Authors: Kenneth Co, David Martinez Rego, Emil Lupu Universal Adversarial Perturbations (UAPs) are input perturbations that can fool a neural network on large sets of data. They are a class of attacks that represent a significant threat, as they facilitate realistic, practical, and low-cost attacks on neural networks. In this work, we derive upper bounds for the effectiveness of UAPs based on norms of data-dependent Jacobians. We empirically verify that Jacobian regularization greatly increases model robustness to UAPs, by up to four times, whilst maintaining clean performance. Our theoretical analysis also allows us to formulate a metric for the strength […]
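A hedged PyTorch sketch of Jacobian regularization, using a single random-projection estimate of the squared Frobenius norm of the input-output Jacobian; the function and parameter names are assumptions, not the paper's training code.

```python
import torch
import torch.nn.functional as F

def jacobian_reg_loss(model, x, y, lam=0.01):
    x = x.clone().requires_grad_(True)
    logits = model(x)
    task_loss = F.cross_entropy(logits, y)
    # Project the Jacobian onto a random unit vector in output space:
    # E[||J^T v||^2] over random v is proportional to ||J||_F^2.
    v = torch.randn_like(logits)
    v = v / v.norm(dim=1, keepdim=True)
    (grad,) = torch.autograd.grad((logits * v).sum(), x, create_graph=True)
    jac_penalty = grad.pow(2).sum(dim=tuple(range(1, grad.dim()))).mean()
    return task_loss + lam * jac_penalty
```

Because the penalty is built with `create_graph=True`, it can be backpropagated through during training; `lam` trades clean accuracy against UAP robustness.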
With advanced video and sensing capabilities, unoccupied aerial vehicles (UAVs) are increasingly being used for numerous applications that involve the collaboration and autonomous operation of teams of UAVs. Yet such vehicles can be affected by cyber attacks, impacting the viability of their missions. We propose a method to conduct mission viability analysis under cyber attacks for missions that employ a team of several UAVs that share a communication network. We apply our method to a case study of a survey mission in a wildfire firefighting scenario. Within this context, we show how our method can help quantify the expected mission performance impact from an attack and determine if the mission […]
Increasing shape bias in deep neural networks has been shown to improve robustness to common corruptions and noise. In this paper, we analyze the adversarial robustness of texture- and shape-biased models to Universal Adversarial Perturbations (UAPs). We use UAPs to evaluate the robustness of DNN models with varying degrees of shape-based training. We find that shape-biased models do not markedly improve adversarial robustness, and we show that ensembles of texture- and shape-biased models can improve universal adversarial robustness while maintaining strong performance. Citation: K. T. Co, L. Muñoz-González, L. Kanthan, B. Glocker and E. C. Lupu, “Universal Adversarial Robustness of Texture […]
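A minimal sketch of the ensemble idea, assuming two pretrained networks (`texture_model`, `shape_model`) rather than the paper's exact configuration: average the softmax outputs of a texture-biased and a shape-biased model at inference time.

```python
import torch

class TextureShapeEnsemble(torch.nn.Module):
    def __init__(self, texture_model, shape_model):
        super().__init__()
        self.texture_model = texture_model
        self.shape_model = shape_model

    def forward(self, x):
        # Average class probabilities so neither bias dominates; a UAP
        # must now fool both feature types at once to succeed.
        p_texture = self.texture_model(x).softmax(dim=1)
        p_shape = self.shape_model(x).softmax(dim=1)
        return (p_texture + p_shape) / 2
```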
Adversarial actors have shown their ability to infiltrate enterprise networks deployed around Cyber Physical Systems (CPSs) through social engineering, credential stealing, and fileless infections. Once inside, they can gain enough privileges to maliciously call legitimate APIs and apply unsafe control actions to degrade the system performance and undermine its safety. Our work lies at the intersection of security and safety, and aims to understand the dependencies among security, reliability, and safety in CPS/IoT. We present a methodology to perform hazard-driven threat modelling and impact assessment in the context of CPSs. The process starts from the analysis of behavioural, functional and […]
Zaid joined the group as a Research Associate in May 2020. His activities focus on federated learning and adversarial machine learning.
Our paper on procedural noise adversarial examples has been accepted to the 26th ACM Conference on Computer and Communications Security (ACM CCS ’19). official: https://dl.acm.org/citation.cfm?id=3345660 code: https://github.com/kenny-co/procedural-advml Abstract: Deep Convolutional Networks (DCNs) have been shown to be vulnerable to adversarial examples: perturbed inputs specifically designed to produce intentional errors in the learning algorithms at test time. Existing input-agnostic adversarial perturbations exhibit interesting visual patterns that are currently unexplained. In this paper, we introduce a structured approach for generating Universal Adversarial Perturbations (UAPs) with procedural noise functions. Our approach unveils the systemic vulnerability of popular DCN models like Inception v3 and YOLO […]
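A hedged sketch of a Perlin-noise perturbation in the spirit of the paper; the released repository linked above is the authoritative version. It uses the third-party `noise` package (pip install noise), and the frequency, octave count, and simplified sign-of-sine colour map here are illustrative assumptions.

```python
import numpy as np
from noise import pnoise2

def perlin_perturbation(size=224, freq=1 / 32.0, octaves=4, eps=8 / 255):
    # Sample 2D Perlin noise on a size x size grid.
    grid = np.array(
        [[pnoise2(i * freq, j * freq, octaves=octaves) for j in range(size)]
         for i in range(size)]
    )
    # Map the smooth noise to a high-frequency pattern and scale it to
    # an L-infinity budget of eps; adding this single image-sized array
    # to many inputs acts as a universal perturbation.
    return eps * np.sign(np.sin(grid * 2 * np.pi))
```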
We have released the code with a demo of our poisoning attack described in the paper “Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization.” You can access the code at this link.
Fulvio joined the group as a Visiting Researcher. His activities focused on analysing and modelling hybrid threats.