We have a new position at the intersection of Cybersecurity and Resilience. Our aim is to model and reason about the security and resilience of systems, with a particular focus on the software and networking infrastructure of dynamic systems such as fleets of autonomous aerial vehicles. We are especially interested in analysing how cyber-attacks propagate, their impact on the operation of the system, and how the system can adapt in response to an attack to preserve its operation. This requires a modelling and simulation approach that combines models of threat and attack propagation, such as attack graphs, with models of the system's operation and interconnection. Aspects of system safety and system validation can also be taken into account. For details and how to apply please follow this link.
Marwa Salayma performs cutting-edge research on the resilience of systems, the Internet of Things (IoT) and cybersecurity in the RISS group. Before joining Imperial College London, she worked as a research associate at Heriot-Watt University, where she contributed to two collaborative projects on underwater acoustic sensor networks: 1) Harvesting of Underwater Data from SensOr Networks using Autonomous Underwater Vehicles (AUV) (HUDSON) and 2) Smart dust for large scale underwater wireless sensing (USMART).
Marwa received her PhD from Edinburgh Napier University in 2018, with a thesis in the field of Wireless Body Area Networks (WBANs); her M.Sc. in Computer Science from Jordan University of Science and Technology in 2013; and her BEng (Hons) in Electrical Engineering from Palestine Polytechnic University in 2007. Besides the security, resilience and reliability of dynamic systems, her research interests include (but are not limited to) wireless communication technologies deployed in different network environments, eHealth, network-layer protocols including cross-layering algorithms, energy-efficient communication, reliable scheduling, and QoS provisioning in wireless networks.
We increasingly rely on systems that use machine learning to learn from their environment, often to detect anomalies in the behaviour they observe. However, the consequences of a malicious adversary targeting the machine learning algorithms themselves, by compromising part of the data from which the system learns, are poorly understood and represent a significant threat. The objective of this project is to propose systematic and realistic ways of assessing, testing and improving the robustness of machine learning algorithms to poisoning attacks. We consider both indiscriminate attacks, which aim to cause an overall degradation of the model’s performance, and targeted attacks, which aim to induce specific errors. We focus in particular on “optimal” attack strategies that seek to maximise the impact of the poisoning points, thus representing a “worst-case” scenario. However, we also consider sophisticated adversaries that take detectability constraints into account.
PhD Studentship funded by DSTL
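As a minimal illustration of the indiscriminate setting described above, the sketch below flips a fraction of training labels and measures the resulting accuracy degradation. This is a toy baseline, not the optimal attack strategies studied in the project; the dataset and parameters are illustrative.

```python
# Toy indiscriminate poisoning (label flipping) against a linear classifier.
# Illustrative only: the project studies far stronger, optimised attacks.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

def accuracy_after_poisoning(flip_fraction):
    y_poisoned = y_tr.copy()
    n_flip = int(flip_fraction * len(y_poisoned))
    idx = np.random.default_rng(0).choice(len(y_poisoned), n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]        # attacker flips training labels
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)
    return model.score(X_te, y_te)               # clean test accuracy

for frac in (0.0, 0.1, 0.2, 0.3):
    print(f"poisoned fraction {frac:.0%}: test accuracy {accuracy_after_poisoning(frac):.3f}")
```

An optimal attacker would instead place each poisoning point to maximise the induced error per point, achieving a similar degradation with far fewer, less detectable points.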
We occasionally host graduate and undergraduate students for summer internships, mostly in July and August. If you are interested, please get in touch with a brief proposal of the activities you would like to undertake and what you hope to achieve.
H. Chizari and E. Lupu, “Extracting Randomness from the Trend of IPI for Cryptographic Operations in Implantable Medical Devices,” in IEEE Transactions on Dependable and Secure Computing, vol. 18, no. 2, pp. 875-888, 1 March-April 2021, doi: 10.1109/TDSC.2019.2921773.
Achieving secure communication between an Implantable Medical Device (IMD) and a gateway or programming device outside the body has been shown to be critical by recent reports of vulnerabilities in cardiac devices, insulin pumps and neural implants, amongst others. The use of asymmetric cryptography is typically not a practical solution for IMDs due to their scarce computational and power resources. Symmetric key cryptography is preferred, but its security relies on agreeing and using strong keys, which are difficult to generate. A solution for generating strong shared keys without using extensive resources is to extract them from physiological signals already present inside the body, such as the Inter-Pulse Interval (IPI). The physiological signals must therefore be strong sources of randomness that meet five conditions: Universality (available on all people), Liveness (available at any time), Robustness (strong random numbers), Permanence (independent from its history) and Uniqueness (independent from other sources). However, these conditions (mainly the last three) have not been systematically examined in current methods for randomness extraction from IPI. In this study, we first propose a methodology to measure the last three conditions: information secrecy measures for Robustness, the Santha-Vazirani source delta value for Permanence, and random source dependency analysis for Uniqueness. Then, using a large dataset of IPI values (almost 900,000,000 IPIs), we show that IPI lacks Robustness and Permanence as a randomness source. Thus, extraction of a strong uniform random number from IPI values is impossible. Third, we propose to use the trend of IPI, instead of its value, as the source for a new randomness extraction method named Martingale Randomness Extraction from IPI (MRE-IPI). We evaluate MRE-IPI and show that it satisfies the Robustness condition completely and Permanence to some level. Finally, we use the NIST STS and Dieharder test suites to show that MRE-IPI outperforms all recent randomness extraction methods from IPIs and achieves a quality roughly half that of the AES random number generator. MRE-IPI still does not produce strong random numbers and cannot be used as a key to secure communications in general. However, it can be used as a one-time pad to securely exchange keys between the communicating parties. Usage of MRE-IPI will thus be kept to a minimum, reducing the probability of it being broken. To the best of our knowledge, this is the first work in this area to use such a comprehensive method and large dataset to examine the randomness of physiological signals.
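The full MRE-IPI construction is defined in the paper; the sketch below only illustrates the underlying idea of extracting bits from the trend (rise or fall) of the IPI rather than from its raw value, on a synthetic IPI trace, with a classic von Neumann debiasing step added for illustration.

```python
# Simplified trend-based bit extraction from inter-pulse intervals.
# The actual MRE-IPI method is martingale-based and more involved; this toy
# version only shows "use the trend, not the value" on synthetic data.
import numpy as np

def trend_bits(ipis):
    """Emit 1 when the IPI increases relative to the previous beat, else 0."""
    ipis = np.asarray(ipis)
    return (np.diff(ipis) > 0).astype(np.uint8)

def von_neumann_debias(bits):
    """Classic debiasing: map pairs 01 -> 0, 10 -> 1, discard 00 and 11."""
    pairs = bits[: len(bits) // 2 * 2].reshape(-1, 2)
    keep = pairs[:, 0] != pairs[:, 1]
    return pairs[keep, 0]

rng = np.random.default_rng(42)
ipis = 800 + np.cumsum(rng.normal(0, 15, size=10000))   # synthetic IPI trace (ms)
raw = trend_bits(ipis)
key_bits = von_neumann_debias(raw)
print(len(key_bits), "debiased bits; empirical bias:", key_bits.mean())
```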
Zhongyuan Hau, Soteris Demetriou, Luis Muñoz-González, Emil C. Lupu, Shadow-Catcher: Looking Into Shadows to Detect Ghost Objects in Autonomous Vehicle 3D Sensing, 26th European Symposium on Research in Computer Security (ESORICS), 2021.
LiDAR-driven 3D sensing allows new generations of vehicles to achieve advanced levels of situation awareness. However, recent works have demonstrated that physical adversaries can spoof LiDAR return signals and deceive 3D object detectors into erroneously detecting “ghost” objects. Existing defenses are either impractical or focus only on vehicles. Unfortunately, smaller objects such as pedestrians and cyclists are easier to spoof, harder to defend against, and their spoofing can have worse safety implications. To address this gap, we introduce Shadow-Catcher, a set of new techniques embodied in an end-to-end prototype to detect both large and small ghost object attacks on 3D detectors. We characterize a new semantically meaningful physical invariant (3D shadows), which Shadow-Catcher leverages to validate objects. Our evaluation on the KITTI dataset shows that Shadow-Catcher consistently achieves more than 94% accuracy in identifying anomalous shadows for vehicles, pedestrians, and cyclists, while remaining robust to a novel class of strong “invalidation” attacks targeting the defense system. Shadow-Catcher achieves real-time detection, requiring on average only 0.003s to 0.021s to process an object in a 3D point cloud on commodity hardware, and achieves a 2.17x speedup compared to prior work.
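For a rough geometric intuition behind the shadow invariant (this is not Shadow-Catcher's actual algorithm), the toy sketch below counts LiDAR returns inside the conical region a genuine object should occlude: a real object leaves that region nearly empty, while a spoofed “ghost” does not.

```python
# Toy check of the "3D shadow" intuition: count returns inside the conical
# shadow volume behind a detected object. Geometry, scoring and thresholds
# all differ from the method in the paper.
import numpy as np

def shadow_returns(points, obj_center, obj_radius, depth=5.0, sensor=np.zeros(3)):
    """Count returns inside the conical shadow volume behind an object."""
    d = obj_center - sensor
    dist = np.linalg.norm(d)
    u = d / dist                                    # viewing direction
    t = (points - sensor) @ u                       # distance along the ray
    behind = (t > dist) & (t < dist + depth)        # slab just behind the object
    radial = np.linalg.norm(points - sensor - np.outer(t, u), axis=1)
    return int(np.sum(behind & (radial < obj_radius * t / dist)))

rng = np.random.default_rng(1)
cloud = rng.uniform(-20, 20, size=(50000, 3))       # synthetic, unoccluded scene
n = shadow_returns(cloud, obj_center=np.array([10.0, 0.0, 0.0]), obj_radius=1.0)
print(n, "returns in the expected shadow (near zero would suggest a genuine object)")
```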
IoT systems evolve dynamically and are increasingly used in critical applications. Understanding how to maintain the operation of a system when parts of it have been compromised is therefore of critical importance. This requires continuously assessing the risk to other parts of the system, determining the impact of the compromise, and selecting appropriate mitigation strategies to respond to the attack. The ability to cope with dynamic system changes is a key challenge in achieving these objectives.
RACE is articulated into four broad themes of work: understanding attacks and mitigation strategies; maintaining an adequate representation of risk to the other parts of the system by understanding how attacks can evolve and propagate; understanding the impact of the compromise upon the functionality of the system; and selecting countermeasure strategies taking into account trade-offs between minimising disruption to the system operation and functionality provided, and minimising the risk to the other parts of the system.
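As a toy illustration of the risk-representation theme, the sketch below propagates compromise likelihood over a small attack graph. The node names, edge probabilities and propagation rule are invented for illustration and are not part of RACE itself.

```python
# Toy risk re-assessment after a compromise: propagate the likelihood of
# compromise over a small attack graph. All names/probabilities illustrative.
# Edges: attacker moves from a compromised node to a neighbour with the given
# per-edge exploit success probability.
graph = {
    "gateway":    [("sensor_hub", 0.6), ("controller", 0.3)],
    "sensor_hub": [("controller", 0.5)],
    "controller": [("actuator", 0.7)],
    "actuator":   [],
}

def propagate(compromised, graph):
    p = {node: 0.0 for node in graph}
    p.update({n: 1.0 for n in compromised})
    for node in graph:                       # dict order is topological here
        for child, p_edge in graph[node]:
            # the child stays safe only if every incoming attempt fails
            p[child] = 1 - (1 - p[child]) * (1 - p[node] * p_edge)
    return p

print(propagate({"gateway"}, graph))         # risk picture after gateway falls
```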
A.G. Matachana, K.T. Co, L. Muñoz-González, D. Martinez, E.C. Lupu. Robustness and Transferability of Universal Attacks on Compressed Models. AAAI 2021 Workshop: Towards Robust, Secure and Efficient Machine Learning. 2021.
Neural network compression methods like pruning and quantization are very effective at efficiently deploying Deep Neural Networks (DNNs) on edge devices. However, DNNs remain vulnerable to adversarial examples: inconspicuous inputs that are specifically designed to fool these models. In particular, Universal Adversarial Perturbations (UAPs) are a powerful class of adversarial attacks that create perturbations able to generalize across a large set of inputs. In this work, we analyze the effect of various compression techniques, including different forms of pruning and quantization, on robustness to UAP attacks. We test the robustness of compressed models to white-box and transfer attacks, comparing them with their uncompressed counterparts on the CIFAR-10 and SVHN datasets. Our evaluations reveal clear differences between pruning methods, including Soft Filter and Post-training Pruning. We observe that UAP transfer attacks between pruned and full models are limited, suggesting that the systemic vulnerabilities across these models differ. This finding has practical implications, as using different compression techniques can blunt the effectiveness of black-box transfer attacks. We show that, in some scenarios, quantization can produce gradient masking, giving a false sense of security. Finally, our results suggest that conclusions about the robustness of compressed models to UAP attacks are application dependent, as we observe different phenomena in the two datasets used in our experiments.
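A minimal sketch of the transfer experiment, using a toy model and synthetic data rather than the paper's CIFAR-10/SVHN setup: a UAP is crafted on the full model with iterative sign gradients, then applied unchanged to an L1-pruned copy to gauge transferability.

```python
# Toy UAP-transfer test between a full model and an L1-pruned copy.
# Models, data and attack budget are illustrative stand-ins.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

torch.manual_seed(0)
X = torch.randn(512, 20)
y = (X[:, 0] > 0).long()

def make_model():
    return nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))

model = make_model()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(200):                                  # quick training loop
    opt.zero_grad()
    nn.functional.cross_entropy(model(X), y).backward()
    opt.step()

pruned = make_model()
pruned.load_state_dict(model.state_dict())
for m in pruned:
    if isinstance(m, nn.Linear):
        prune.l1_unstructured(m, name="weight", amount=0.8)   # 80% sparsity

# Iterative sign-gradient UAP: one shared perturbation maximising the loss.
delta = torch.zeros(1, 20, requires_grad=True)
for _ in range(50):
    nn.functional.cross_entropy(model(X + delta), y).backward()
    with torch.no_grad():
        delta += 0.05 * delta.grad.sign()
        delta.clamp_(-0.5, 0.5)                       # L-inf budget
    delta.grad.zero_()

def acc(m, pert):                                     # accuracy under perturbation
    return (m(X + pert).argmax(1) == y).float().mean().item()

print("full   model: clean", acc(model, 0), "UAP", acc(model, delta.detach()))
print("pruned model: clean", acc(pruned, 0), "UAP(transfer)", acc(pruned, delta.detach()))
```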
LiDARs play a critical role in Autonomous Vehicles’ (AVs) perception and their safe operations. Recent works have demonstrated that it is possible to spoof LiDAR return signals to elicit fake objects. In this work we demonstrate how the same physical capabilities can be used to mount a new, even more dangerous class of attacks, namely Object Removal Attacks (ORAs). ORAs aim to force 3D object detectors to fail. We leverage the default setting of LiDARs that record a single return signal per direction to perturb point clouds in the region of interest (RoI) of 3D objects. By injecting illegitimate points behind the target object, we effectively shift points away from the target objects’ RoIs. Our initial results using a simple random point selection strategy show that the attack is effective in degrading the performance of commonly used 3D object detection models.
Z. Hau, K.T. Co, S. Demetriou, E.C. Lupu. Object Removal Attacks on LiDAR-based 3D Object Detectors. Automotive and Autonomous Vehicle Security (AutoSec) Workshop @ NDSS Symposium 2021.
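The toy geometry below illustrates the ORA idea under simplified assumptions (single return per ray, spherical region of interest): returns that would fall inside the target's RoI are relocated further along their rays, emptying the RoI so the detector has nothing left to detect.

```python
# Toy Object Removal Attack geometry: push the single return recorded per
# ray past the target object, so its RoI loses its points. Illustrative only.
import numpy as np

def object_removal(points, roi_center, roi_radius, shift=8.0, sensor=np.zeros(3)):
    pts = points.copy()
    in_roi = np.linalg.norm(pts - roi_center, axis=1) < roi_radius
    dirs = pts[in_roi] - sensor
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    pts[in_roi] += shift * dirs            # relocate returns behind the object
    return pts, int(in_roi.sum())

rng = np.random.default_rng(0)
cloud = rng.normal([12, 0, 0], [1, 1, 0.5], size=(200, 3))  # points on a "car"
spoofed, moved = object_removal(cloud, roi_center=np.array([12.0, 0.0, 0.0]),
                                roi_radius=2.5)
print(moved, "returns displaced out of the target RoI")
```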
Authors: Kenneth Co, David Martinez Rego, Emil Lupu
Universal Adversarial Perturbations (UAPs) are input perturbations that can fool a neural network on large sets of data. They are a class of attacks that represents a significant threat as they facilitate realistic, practical, and low-cost attacks on neural networks. In this work, we derive upper bounds for the effectiveness of UAPs based on norms of data-dependent Jacobians. We empirically verify that Jacobian regularization greatly increases model robustness to UAPs by up to four times whilst maintaining clean performance. Our theoretical analysis also allows us to formulate a metric for the strength of shared adversarial perturbations between pairs of inputs. We apply this metric to benchmark datasets and show that it is highly correlated with the actual observed robustness. This suggests that realistic and practical universal attacks can be reliably mitigated without sacrificing clean accuracy, which shows promise for the robustness of machine learning systems.
Kenneth Co, David Martinez Rego, Emil Lupu, Jacobian Regularization for Mitigating Universal Adversarial Perturbations. 30th International Conference on Artificial Neural Networks (ICANN 21), Sept. 2021.
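A minimal sketch of the approach on a toy model, with illustrative hyperparameters: the Frobenius norm of the input-output Jacobian is estimated with a random projection, so the regulariser costs only one extra backward pass per step.

```python
# Training with Jacobian regularization: cross-entropy plus a penalty on the
# input-output Jacobian, estimated via a random projection (one extra backward
# pass). Toy model/data; hyperparameters are illustrative.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
X = torch.randn(256, 20)
y = (X[:, 0] > 0).long()
lam = 0.1                                      # regularization strength

for step in range(100):
    x = X.clone().requires_grad_(True)
    out = model(x)
    ce = nn.functional.cross_entropy(out, y)
    v = torch.randn_like(out)                  # random projection vector
    v = v / v.norm(dim=1, keepdim=True)
    (jv,) = torch.autograd.grad((out * v).sum(), x, create_graph=True)
    jac_penalty = (jv ** 2).sum(dim=1).mean()  # estimates ||J||_F^2 up to scale
    loss = ce + lam * jac_penalty
    opt.zero_grad()
    loss.backward()
    opt.step()

print("final loss:", loss.item())
```

Shrinking the Jacobian norm bounds how much any shared perturbation can move the outputs, which is the mechanism behind the UAP robustness gains reported above.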
With advanced video and sensing capabilities, un-occupied aerial vehicles (UAVs) are increasingly being used for numerous applications that involve the collaboration and autonomous operation of teams of UAVs. Yet such vehicles can be affected by cyber attacks, impacting the viability of their missions. We propose a method to conduct mission viability analysis under cyber attacks for missions that employ a team of several UAVs that share a communication network. We apply our method to a case study of a survey mission in a wildfire firefighting scenario. Within this context, we show how our method can help quantify the expected mission performance impact from an attack and determine if the mission can remain viable under various attack situations. Our method can be used both in the planning of the mission and for decision making during mission operation. Our approach to modeling attack progression and impact analysis with Petri nets is also more broadly applicable to other settings involving multiple resources that can be used interchangeably towards the same objective.
J. Soikkeli, C. Perner and E. Lupu, “Analyzing the Viability of UAV Missions Facing Cyber Attacks,” in 2021 IEEE European Symposium on Security and Privacy Workshops (EuroS&PW), Vienna, Austria, 2021, pp. 103-112.
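As a broadly applicable illustration of modelling attack progression with a Petri net (not the mission model used in the paper), the sketch below tracks healthy and compromised UAVs as tokens, with attack and recovery transitions; place and transition names are invented.

```python
# Minimal Petri net sketch: places hold tokens (system state); a transition
# fires only when all its input places are marked. Names are illustrative.
marking = {"uav_ok": 3, "uav_compromised": 0, "mission_capacity": 3}

transitions = {
    # an attack turns a healthy UAV into a compromised one and costs capacity
    "attack":  {"in": ["uav_ok", "mission_capacity"], "out": ["uav_compromised"]},
    # recovery (e.g. a software reset) restores the UAV and its contribution
    "recover": {"in": ["uav_compromised"], "out": ["uav_ok", "mission_capacity"]},
}

def fire(name):
    t = transitions[name]
    if all(marking[p] > 0 for p in t["in"]):     # transition enabled?
        for p in t["in"]:
            marking[p] -= 1
        for p in t["out"]:
            marking[p] += 1
        return True
    return False

fire("attack")
fire("attack")
print(marking, "-> mission viable:", marking["mission_capacity"] >= 2)
```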
Increasing shape-bias in deep neural networks has been shown to improve robustness to common corruptions and noise. In this paper we analyze the adversarial robustness of texture and shape-biased models to Universal Adversarial Perturbations (UAPs). We use UAPs to evaluate the robustness of DNN models with varying degrees of shape-based training. We find that shape-biased models do not markedly improve adversarial robustness, and we show that ensembles of texture and shape-biased models can improve universal adversarial robustness while maintaining strong performance.
Citation: K. T. Co, L. Muñoz-González, L. Kanthan, B. Glocker and E. C. Lupu, “Universal Adversarial Robustness of Texture and Shape-Biased Models,” 2021 IEEE International Conference on Image Processing (ICIP), 2021, pp. 799-803, doi: 10.1109/ICIP42928.2021.9506325.
Luca Maria Castiglione and Emil C. Lupu. 2020. Hazard Driven Threat Modelling for Cyber Physical Systems. In Proceedings of the 2020 Joint Workshop on CPS&IoT Security and Privacy (CPSIOTSEC’20). Association for Computing Machinery, New York, NY, USA, 13–24.
Adversarial actors have shown their ability to infiltrate enterprise networks deployed around Cyber Physical Systems (CPSs) through social engineering, credential stealing and file-less infections. Once inside, they can gain enough privileges to maliciously call legitimate APIs and apply unsafe control actions to degrade the system performance and undermine its safety. Our work lies at the intersection of security and safety, and aims to understand the dependencies among security, reliability and safety in CPS/IoT. We present a methodology to perform hazard driven threat modelling and impact assessment in the context of CPSs. The process starts from the analysis of behavioural, functional and architectural models of the CPS. We then apply System Theoretic Process Analysis (STPA) on the functional model to highlight high-level abuse cases. We leverage a mapping between the architectural and the system-theoretic (ST) models to enumerate those components whose impairment provides the attacker with enough privileges to tamper with or disrupt the data-flows. This enables us to find a causal connection between the attack surface (in the architectural model) and system-level losses. We then link the behavioural and system-theoretic representations of the CPS to quantify the impact of the attack. Using our methodology it is possible to compute a comprehensive attack graph of the known attack paths and to perform both a qualitative and quantitative impact assessment of the exploitation of vulnerabilities affecting target nodes. The framework and methodology are illustrated using a small-scale example featuring a Communication Based Train Control (CBTC) system. Aspects regarding the scalability of our methodology and its application in real-world scenarios are also considered. Finally, we discuss the possibility of using the results obtained to engineer both design-time and real-time defensive mechanisms.
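The toy sketch below illustrates only the mapping step, with invented component and data-flow names: each unsafe control action is linked to the data-flows whose tampering can cause it, and hence to the components whose compromise enables it.

```python
# Illustrative mapping from architectural components to unsafe control actions
# (UCAs), via the data-flows they carry. All names are invented for this sketch.
flows = {   # data-flow -> components that carry or process it
    "speed_command":   ["zone_controller", "radio_link", "onboard_unit"],
    "position_report": ["onboard_unit", "radio_link", "zone_controller"],
}
ucas = {    # unsafe control action -> data-flows whose tampering can cause it
    "UCA1_unsafe_speed_command": ["speed_command"],
    "UCA2_wrong_train_position": ["position_report"],
}

def enabling_components(uca):
    """Components whose compromise lets an attacker trigger the given UCA."""
    return sorted({c for f in ucas[uca] for c in flows[f]})

for uca in ucas:
    print(uca, "->", enabling_components(uca))
```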
Zaid joined the group as a Research Associate in May 2020. His activities focus on federated learning and adversarial machine learning.
Kenneth T. Co, Luis Muñoz-González, Sixte de Maupeou, and Emil C. Lupu. 2019. Procedural Noise Adversarial Examples for Black-Box Attacks on Deep Convolutional Networks. In Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security (CCS ’19). Association for Computing Machinery, New York, NY, USA, 275–289. DOI:https://doi.org/10.1145/3319535.3345660
Abstract: Deep Convolutional Networks (DCNs) have been shown to be vulnerable to adversarial examples: perturbed inputs specifically designed to produce intentional errors in the learning algorithms at test time. Existing input-agnostic adversarial perturbations exhibit interesting visual patterns that are currently unexplained. In this paper, we introduce a structured approach for generating Universal Adversarial Perturbations (UAPs) with procedural noise functions. Our approach unveils the systemic vulnerability of popular DCN models like Inception v3 and YOLO v3, with single noise patterns able to fool a model on up to 90% of the dataset. Procedural noise allows us to generate a distribution of UAPs with high universal evasion rates using only a few parameters. Additionally, we propose Bayesian optimization to efficiently learn procedural noise parameters and construct inexpensive untargeted black-box attacks. We demonstrate that it can achieve an average of less than 10 queries per successful attack, a 100-fold improvement over existing methods. We further motivate the use of input-agnostic defences to increase the stability of models to adversarial perturbations. The universality of our attacks suggests that DCN models may be sensitive to aggregations of low-level class-agnostic features. These findings give insight into the nature of some universal adversarial perturbations and how they could be generated in other applications.
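The paper uses Perlin and Gabor noise; in the sketch below, a sum of random-phase sinusoids stands in as a simple procedural pattern, scaled to an L-inf budget and applied to a stand-in image batch. Frequencies, budget and image shapes are illustrative.

```python
# Stand-in procedural-noise perturbation: random-phase sinusoids instead of the
# Perlin/Gabor noise used in the paper, normalised and capped at an L-inf budget.
import numpy as np

def procedural_pattern(h, w, freqs=(4, 8, 16), seed=0):
    rng = np.random.default_rng(seed)
    yy, xx = np.mgrid[0:h, 0:w] / max(h, w)
    pattern = np.zeros((h, w))
    for f in freqs:
        theta = rng.uniform(0, np.pi)                 # random orientation
        phase = rng.uniform(0, 2 * np.pi)
        pattern += np.sin(2 * np.pi * f * (xx * np.cos(theta) + yy * np.sin(theta)) + phase)
    return pattern / np.abs(pattern).max()            # normalise to [-1, 1]

eps = 8 / 255                                         # L-inf budget
uap = eps * procedural_pattern(224, 224)[None, :, :, None]   # broadcast over batch/channels
images = np.random.rand(16, 224, 224, 3)              # stand-in image batch
adv = np.clip(images + uap, 0.0, 1.0)
print("perturbation range:", uap.min(), uap.max())
# The universal evasion rate would be the fraction of `adv` a model misclassifies;
# Bayesian optimization then tunes the few noise parameters (freqs, etc.) to maximise it.
```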
We have released the code with a demo of our poisoning attack described in the paper “Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization.”
You can access the code via this link.
Fulvio was a Visiting Researcher in the group, working on various aspects of hybrid threats and formal analysis applied to network policies. He is with the Department of Control and Computer Engineering at the Politecnico di Torino and continues to work on cyber-security in the context of network and systems management.
Aaron will be presenting a paper based on his MSc thesis work, “Exploiting Correlations to Detect False Data Injections in Low-Density Wireless Sensor Networks”, at the 5th ACM CPSS 2019, a workshop co-located with ACM ASIACCS.