Emil Lupu

Understanding and mitigating universal adversarial perturbations for computer vision neural networks

Kenneth Tan Co

Deep neural networks (DNNs) have become the algorithm of choice for many computer vision applications. They are able to achieve human-level performance in many computer vision tasks, and enable the automation and large-scale deployment of applications such as object tracking, autonomous vehicles, and medical imaging. However, DNNs expose software applications to systemic vulnerabilities in the form of Universal Adversarial Perturbations (UAPs): input perturbation attacks that can cause DNNs to make classification errors on large sets of inputs.

Our aim is to improve the robustness of computer vision DNNs to UAPs without sacrificing the models’ predictive performance. To this end, we deepen our understanding of these vulnerabilities by investigating the visual structures and patterns that commonly appear in UAPs. We demonstrate the efficacy and pervasiveness of UAPs by showing how Procedural Noise patterns can be used to generate efficient zero-knowledge attacks against different computer vision models and tasks at minimal cost to the attacker. We then evaluate the UAP robustness of various shape- and texture-biased models, and find that applying them in ensembles provides a marginal improvement in robustness.
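
To illustrate how cheaply such a zero-knowledge attack can be mounted, the sketch below builds a structured perturbation from a sum of random sinusoids and applies the same pattern to any input. This is only an illustrative stand-in for the Perlin and Gabor procedural noise functions studied in the thesis; the function names and the 8/255 perturbation budget are our own assumptions.

```python
import numpy as np

def procedural_pattern(h, w, n_waves=4, seed=0):
    """Toy structured noise: a sum of random plane-wave sinusoids.
    A crude stand-in for the Perlin/Gabor noise functions in the thesis."""
    rng = np.random.default_rng(seed)
    ys, xs = np.mgrid[0:h, 0:w]
    pattern = np.zeros((h, w))
    for _ in range(n_waves):
        freq = rng.uniform(0.02, 0.2)      # spatial frequency (cycles/pixel)
        theta = rng.uniform(0, np.pi)      # wave orientation
        phase = rng.uniform(0, 2 * np.pi)
        pattern += np.sin(2 * np.pi * freq *
                          (xs * np.cos(theta) + ys * np.sin(theta)) + phase)
    return pattern / np.abs(pattern).max()  # normalise to [-1, 1]

def apply_uap(image, eps=8 / 255):
    """Add the same input-agnostic perturbation to any image in [0, 1]."""
    h, w = image.shape[:2]
    delta = eps * procedural_pattern(h, w)[..., None]  # broadcast over channels
    return np.clip(image + delta, 0.0, 1.0)
```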

To mitigate UAP attacks, we develop two novel approaches. First, we propose the Jacobian of a DNN as a measure of its sensitivity to input perturbations. We derive theoretical bounds and provide empirical evidence showing how a combination of Jacobian regularisation and ensemble methods increases model robustness against UAPs without degrading the predictive performance of computer vision DNNs. Our results evince a robustness-accuracy trade-off against UAPs that is better than that of models trained in conventional ways. Second, we design a detection method that analyses hidden layer activation values to identify a variety of UAP attacks in real time with low latency. We show that our method outperforms existing defences under realistic time and computation constraints.

Link to thesis PDF

Cite as: Co, Kenneth Tan, Understanding and mitigating universal adversarial perturbations for computer vision neural networks. PhD Thesis, Department of Computing, Imperial College London, https://doi.org/10.25560/103574, March 2023

PhD Theses: A Data Protection Architecture for Derived Data Control in Partially Disconnected Networks

Enrico Scalavino

Every organisation needs to exchange and disseminate data constantly amongst its employees, members, customers and partners. Disseminated data is often sensitive or confidential, and access to it should be restricted to authorised recipients. Several enterprise rights management (ERM) systems and data protection solutions have been proposed by both academia and industry to enable usage control on disseminated data, i.e. to allow data originators to retain control over who accesses their information, under which circumstances, and how it is used. This is often achieved by means of cryptographic techniques, i.e. by disseminating encrypted data that only trustworthy recipients can decrypt.

Most of these solutions assume data recipients are connected to the network and able to contact remote policy evaluation authorities that can evaluate usage control policies and issue decryption keys. This assumption oversimplifies the problem by neglecting situations where connectivity is not available, as often happens in crisis management scenarios. In such situations, recipients may not be able to access the information they have received. Also, while using data, recipients and their applications can create new derived information, either by aggregating data from several sources or by transforming the original data’s content or format. Existing solutions mostly neglect this problem and do not allow originators to retain control over this derived data, despite the fact that it may be more sensitive or valuable than the data originally disseminated.

In this thesis we propose an ERM architecture that caters for both derived data control and usage control in partially disconnected networks. We propose the use of a novel policy lattice model based on information flow and mandatory access control. Sets of policies controlling the usage of data can be specified and ordered in a lattice according to the level of protection they provide. At the same time, their association with specific data objects is mandated by rules (content verification procedures) defined in a data sharing agreement (DSA) stipulated amongst the organisations sharing information. When data is transformed, the new policies associated with it are automatically determined depending on the transformation used and the policies currently associated with the input data. The solution we propose takes into account transformations that can both increase and reduce the sensitivity of information, thus giving originators a flexible means to control their data and its derivations.

When data must be disseminated in disconnected environments, the movement of users and the ad hoc connections they establish can be exploited to distribute information. To allow users to decrypt disseminated data without contacting remote evaluation authorities, we integrate our architecture with a mechanism for authority devolution, so that users moving in the disconnected area can be granted the right to evaluate policies and issue decryption keys. This allows recipients to contact any nearby user that is also a policy evaluation authority to obtain decryption keys. The mechanism has been shown to be efficient, so that timely access to data is possible despite the lack of connectivity.

Prototypes of the proposed solutions, which protect XML documents, have been developed. A realistic crisis management scenario has been used to show both the flexibility of the presented approach for derived data control and the efficiency of the authority devolution solution when handling data dissemination in simulated partially disconnected networks. While existing systems do not offer any means to control derived data and only offer partial solutions to the problem of lack of connectivity (e.g. by caching decryption keys), we have defined a set of solutions that address these shortcomings and help data originators control their data in innovative, problem-oriented ways.
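
As a rough illustration of the policy lattice idea, the sketch below orders policies by protection level and derives the policy of transformed data from the transformation applied and the policies of its inputs. The levels, transformation names and derivation rules are entirely hypothetical; the DSA-driven content verification procedures in the thesis are far richer.

```python
# Toy policy lattice: protection levels totally ordered PUBLIC < INTERNAL < SECRET.
LEVELS = {"PUBLIC": 0, "INTERNAL": 1, "SECRET": 2}

def join(*policies):
    """Least upper bound of a set of policies in the lattice."""
    return max(policies, key=LEVELS.get)

def derive(transformation, *input_policies):
    """Hypothetical derivation rules: aggregation takes the join of the
    inputs' policies (combining data can only raise sensitivity), while a
    redaction transformation may lower the level by one step."""
    if transformation == "aggregate":
        return join(*input_policies)
    if transformation == "redact":
        lvl = max(LEVELS[join(*input_policies)] - 1, 0)
        return [k for k, v in LEVELS.items() if v == lvl][0]
    return join(*input_policies)  # default: be conservative

print(derive("aggregate", "PUBLIC", "SECRET"))  # -> SECRET
print(derive("redact", "SECRET"))               # -> INTERNAL
```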

http://hdl.handle.net/10044/1/10203

PhD Theses: Monitoring the health and integrity of Wireless Sensor Networks

Rodrigo Vieira Steiner

Wireless Sensor Networks (WSNs) will play a major role in the Internet of Things, collecting the data that will support decision-making and enable the automation of many applications. Nevertheless, the introduction of these devices into our daily life raises serious concerns about their integrity. At any given point, one must therefore be able to tell whether or not a node has been compromised. Moreover, it is crucial to understand how the compromise of a particular node or set of nodes may affect the network operation. In this thesis, we present a framework to monitor the health and integrity of WSNs that allows us to detect compromised devices and comprehend how they might impact a network’s performance. We start by investigating the use of attestation to identify malicious nodes, and advance the state of the art by exploring limitations of existing mechanisms. Firstly, we tackle effectiveness and scalability by combining attestation with measurements inspection, and show that the right combination of both schemes can achieve high accuracy whilst significantly reducing power consumption. Secondly, we propose a novel stochastic software-based attestation approach that relaxes a fundamental and yet overlooked assumption made in the literature, significantly reducing time and energy consumption while improving the detection rate of honest devices. Lastly, we propose a mathematical model to represent the health of a WSN according to its ability to perform its functions. Our model combines the knowledge regarding compromised nodes with additional information that quantifies the importance of each node. In this context, we propose a new centrality measure and analyse how well existing metrics can rank the importance of each sensor node for the network’s connectivity. We demonstrate that, while no measure is invariably better, our proposed metric outperforms the others in the vast majority of cases.
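
For readers unfamiliar with attestation, the following is a minimal challenge-response round, assuming a verifier that holds a known-good copy of the device memory. It is only a sketch: real schemes, including the stochastic software-based approach in the thesis, add timing constraints, pseudorandom memory traversal and probabilistic guarantees.

```python
import hashlib
import hmac
import os

def attest(memory: bytes, nonce: bytes) -> bytes:
    """Prover side: digest of the device memory, seeded with the
    verifier's nonce so that old responses cannot be replayed."""
    return hashlib.sha256(nonce + memory).digest()

def verify(known_good: bytes, response: bytes, nonce: bytes) -> bool:
    """Verifier side: recompute the digest over the reference image."""
    return hmac.compare_digest(attest(known_good, nonce), response)

# Usage: a node with even one modified byte fails attestation.
nonce = os.urandom(16)
clean = b"\x00" * 1024              # hypothetical reference firmware image
infected = clean[:-1] + b"\x01"     # one flipped byte
assert verify(clean, attest(clean, nonce), nonce)
assert not verify(clean, attest(infected, nonce), nonce)
```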

http://hdl.handle.net/10044/1/73884

PhD Theses: Ensuring the resilience of wireless sensor networks to malicious data injections through measurements inspection

Vittorio Illiano

Malicious data injections pose a severe threat to systems based on Wireless Sensor Networks (WSNs), since they give the attacker control over the measurements and, in turn, over the system’s status and response. Malicious measurements are particularly threatening when used to spoof or mask events of interest, thus eliciting or preventing desirable responses. Spoofing and masking attacks are particularly difficult to detect since they depict plausible behaviours, especially if multiple sensors have been compromised and collude to inject a coherent set of malicious measurements. Previous work has tackled the problem through measurements inspection, which analyses the inter-measurement correlations induced by the physical phenomena. However, these techniques consider simplistic attacks and are not robust to collusion. Moreover, they assume highly predictable patterns in the measurements distribution, which are invalidated by the unpredictability of events. We design a set of techniques that effectively detect malicious data injections in the presence of sophisticated collusion strategies, when one or more events manifest. Moreover, we build a methodology to characterise the likely compromised sensors. We also design diagnosis criteria that allow us to distinguish anomalies arising from malicious interference from those arising from faults. In contrast with previous work, we test the robustness of our methodology with automated and sophisticated attacks, where the attacker aims to evade detection, and conclude that our approach outperforms state-of-the-art approaches. Moreover, we quantitatively estimate the WSN’s degree of resilience and provide a methodology to give a WSN owner an assured degree of resilience by automatically designing the WSN deployment. To deal with the extreme scenario where the attacker has compromised most of the WSN, we also propose a combination with software attestation techniques, which are more reliable when malicious data originates from compromised software, but are also more expensive; the combination achieves an excellent trade-off between cost and resilience.
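
A minimal sketch of the measurements-inspection intuition, assuming co-located sensors observing the same phenomenon: each sensor is scored by how far its readings drift from a robust group consensus. The thesis techniques model inter-measurement correlations, events and collusion far more carefully; the threshold and scoring below are illustrative only.

```python
import numpy as np

def flag_suspects(readings, threshold=3.0):
    """Flag sensors whose reported measurements drift from the consensus.
    readings: (n_sensors, n_samples) array of co-located measurements."""
    consensus = np.median(readings, axis=0)        # robust per-sample estimate
    residuals = np.abs(readings - consensus)
    scale = np.median(residuals) + 1e-9            # robust spread (MAD-like)
    scores = residuals.mean(axis=1) / scale        # one score per sensor
    return np.where(scores > threshold)[0], scores

# Usage: ten honest sensors plus one injecting inflated measurements.
rng = np.random.default_rng(0)
honest = 20.0 + rng.normal(0, 0.5, size=(10, 100))
attacker = honest[0] + 5.0                         # constant malicious offset
readings = np.vstack([honest, attacker])
print(flag_suspects(readings)[0])                  # -> [10]
```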

http://hdl.handle.net/10044/1/56624

PhD Theses: Compositional behaviour and reliability models for adaptive component-based architectures

Pedro Rodrigues Fonseca

The increasing scale and distribution of modern pervasive computing and service-based platforms makes manual maintenance and evolution difficult and too slow. Systems should therefore be designed to self-adapt in response to environment changes, which requires the use of on-line models and analysis. Although there has been a considerable amount of work on architectural modelling and behavioural analysis of component-based systems, there is a need for approaches that integrate the architectural, behavioural and management aspects of a system. In particular, the lack of support for composability in probabilistic behavioural models prevents their systematic use for adapting systems based on changes in their non-functional properties. Of these non-functional properties, this thesis focuses on reliability.

We introduce Probabilistic Component Automata (PCA) for describing the probabilistic behaviour of such systems. Our formalism simultaneously overcomes three of the main limitations of existing work: it preserves a close correspondence between the behavioural and architectural views of a system, in both abstractions and semantics; it is composable, as behavioural models of composite components are automatically obtained by combining the models of their constituent parts; and, lastly, it is probabilistic, thereby enabling analysis of non-functional properties. PCA also provide constructs for representing failure, failure propagation and failure handling in component-based systems in a manner that closely corresponds to the use of exceptions in programming languages. Although PCA are used throughout this thesis for reliability analysis, the model can also be seen as an abstract process algebra that may be applicable to the analysis of other system properties.

We further show how reliability analysis based on PCA models can be used to perform architectural adaptation on distributed component-based systems, and we evaluate the computational cost of decentralised adaptation decisions. To mitigate the state-explosion problem associated with composite models, we introduce an algorithm to reduce a component’s PCA model to one that only represents its interface behaviour. We formally show that such a model preserves the properties of the original representation. By experiment, we show that the reduced models are significantly smaller than the original, achieving a reduction of more than 80% in both the number of states and transitions. A further benefit of the approach is that it allows component profiling and probabilistic interface behaviour to be extracted independently for each component, thereby enabling its exchange between different organisations without revealing commercially sensitive aspects of the components’ implementations.

The contributions and results of this work are evaluated both through a series of small-scale examples and through a larger case study of an e-Banking application derived from Java EE training materials. Our work shows how probabilistic non-functional properties can be integrated with the architectural and behavioural models of a system in an intuitive and scalable way that enables automated architecture reconfiguration based on reliability properties using composable models.
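
To make the reliability-analysis side concrete, here is a toy stand-in (not the PCA formalism itself): a component’s behaviour as an absorbing Markov chain with success and failure states, whose reliability is obtained by a linear solve, and which composes multiplicatively when components are chained. All transition probabilities are invented.

```python
import numpy as np

def success_probability(P, start, success, fail):
    """P(eventually reach `success` before `fail`) in an absorbing chain
    with row-stochastic transition matrix P."""
    n = P.shape[0]
    transient = [s for s in range(n) if s not in (success, fail)]
    Q = P[np.ix_(transient, transient)]       # transient-to-transient block
    r = P[np.ix_(transient, [success])]       # transient-to-success column
    x = np.linalg.solve(np.eye(len(transient)) - Q, r).ravel()
    return x[transient.index(start)]

# One component: from state 0 it completes (0.9) or fails (0.1).
P = np.array([[0.0, 0.9, 0.1],
              [0.0, 1.0, 0.0],    # state 1: success (absorbing)
              [0.0, 0.0, 1.0]])   # state 2: failure (absorbing)
r = success_probability(P, start=0, success=1, fail=2)
print(r, r * r)  # reliability of one component, and of two chained in sequence
```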

http://hdl.handle.net/10044/1/26896

PhD Theses: Improving resilience to cyber-attacks by analysing system output impacts and costs

Jukka Soikkeli


Cyber-attacks cost businesses millions of dollars every year, a key component of which is the cost of business disruption from system downtime. As cyber-attacks cannot all be prevented, there is a need to consider the cyber resilience of systems, i.e. the ability to withstand cyber-attacks and recover from them.

Previous works discussing system cyber resilience typically either offer generic high-level guidance on best practices, provide limited attack modelling, or apply only to systems with special characteristics. There is a lack of an approach to evaluating system cyber resilience that is generally applicable yet provides detailed consideration of the system-level impacts of cyber-attacks and defences.

We propose a methodology for evaluating the effectiveness of actions intended to improve resilience to cyber-attacks, considering both their impacts on system output performance and their monetary costs. It is intended for analysing attacks that can disrupt the system function, and involves modelling attack progression, system output production, the response to attacks, and the costs arising from cyber-attacks and defensive actions.
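
The following toy calculation, with entirely invented numbers, shows the kind of trade-off this captures: an attack’s cost is the lost output over the incident plus the cost of the chosen response, so a slower but cheaper response can beat a faster, more expensive one.

```python
def total_cost(output_levels, revenue_per_hour, response_cost):
    """Cost of an incident: lost output plus the cost of the response.
    output_levels: hourly system output as a fraction of normal (1.0 = unaffected).
    Illustrative only; the methodology models attack progression in detail."""
    lost_output = sum((1.0 - level) * revenue_per_hour for level in output_levels)
    return lost_output + response_cost

# Fast but expensive response vs. slow but cheap one, over a 6-hour incident.
fast = total_cost([0.5, 1.0, 1.0, 1.0, 1.0, 1.0], revenue_per_hour=1000, response_cost=2000)
slow = total_cost([0.5, 0.5, 0.5, 0.5, 1.0, 1.0], revenue_per_hour=1000, response_cost=300)
print(fast, slow)  # 2500.0 vs 2300.0: the cheaper response wins here
```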

Studies of three use cases demonstrate the implementation and usefulness of our methodology. First, in our redundancy planning study, we considered the effect of redundancy additions on mitigating the impacts of cyber-attacks on system output performance. We found that redundancy with diversity can be effective in increasing resilience, although the reduction in attack-related costs must be balanced against added maintenance costs. Second, our work on attack countermeasure selection shows that by considering system output impacts across the duration of an attack, one can find more cost-effective attack responses than without such considerations. Third, we propose an approach to mission viability analysis for multi-UAV deployments facing cyber-attacks, which can aid resource planning and help determine whether the mission can conclude successfully despite an attack. We provide different implementations of our model components, based on the requirements of each use case.

https://spiral.imperial.ac.uk/handle/10044/1/100910

Meet Javi

Javier Carnerero Cano, a PhD student in the RISS group, makes a DoC Clock video to introduce himself and the things he likes!

Marwa Salayma

Marwa Salayma is performing cutting-edge research on the resilience of systems, the Internet of Things (IoT) and cybersecurity in the RISS group. Before joining Imperial College London, she worked as a research associate at Heriot-Watt University, where she was part of two collaborative projects on underwater acoustic sensor networks: 1) Harvesting of Underwater Data from SensOr Networks using Autonomous Underwater Vehicles (HUDSON) and 2) Smart dust for large scale underwater wireless sensing (USMART).

Marwa received her PhD from Edinburgh Napier University in 2018, with a thesis in the field of Wireless Body Area Networks (WBANs); an MSc in Computer Science from Jordan University of Science and Technology in 2013; and a BEng (Hons) in Electrical Engineering from Palestine Polytechnic University in 2007. Besides the security, resilience and reliability of dynamic systems, her research interests include (but are not limited to) wireless communication technologies deployed in different network environments, eHealth, network layer stack protocols including cross-layering algorithms, energy-efficient communication, reliable scheduling, and QoS provisioning in wireless networks.

ERASE: Evaluating the Robustness of Machine Learning Algorithms in Adversarial Settings

We are increasingly relying on systems that use machine learning to learn from their environment, and often to detect anomalies in the behaviour that they observe. But the consequences of a malicious adversary targeting the machine learning algorithms themselves, by compromising part of the data from which the system learns, are poorly understood and represent a significant threat. The objective of this project is to propose systematic and realistic ways of assessing, testing and improving the robustness of machine learning algorithms to poisoning attacks. We consider both indiscriminate attacks, which aim to cause an overall degradation of the model’s performance, and targeted attacks, which aim to induce specific errors. We focus in particular on “optimal” attack strategies that seek to maximise the impact of the poisoning points, thus representing a “worst-case” scenario. However, we also consider sophisticated adversaries that take detectability constraints into account.
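
As a minimal, hedged illustration of indiscriminate poisoning (far weaker than the optimal attacks studied in this project), the snippet below flips a fraction of training labels and measures the resulting degradation of a logistic regression classifier on synthetic data.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Indiscriminate poisoning in its simplest form: label flipping.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for rate in (0.0, 0.1, 0.2, 0.4):
    y_poisoned = y_tr.copy()
    n = int(rate * len(y_poisoned))
    idx = np.random.default_rng(0).choice(len(y_poisoned), n, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]   # flip the binary labels
    acc = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned).score(X_te, y_te)
    print(f"poisoning rate {rate:.0%}: test accuracy {acc:.3f}")
```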

PhD Studentship funded by DSTL

Fellowships

If you are interested in being hosted by the group for a fellowship, such as an MSCA Fellowship, please do not hesitate to contact Prof. Emil Lupu.

Extracting Randomness from the Trend of IPI for Cryptographic Operations in Implantable Medical Devices

H. Chizari and E. Lupu, “Extracting Randomness from the Trend of IPI for Cryptographic Operations in Implantable Medical Devices,” in IEEE Transactions on Dependable and Secure Computing, vol. 18, no. 2, pp. 875-888, 1 March-April 2021, doi: 10.1109/TDSC.2019.2921773.

Achieving secure communication between an Implantable Medical Device (IMD) and a gateway or programming device outside the body has been shown to be critical by recent reports of vulnerabilities in cardiac devices, insulin pumps and neural implants, amongst others. The use of asymmetric cryptography is typically not a practical solution for IMDs due to their scarce computational and power resources. Symmetric key cryptography is preferred, but its security relies on agreeing and using strong keys, which are difficult to generate. A solution for generating strong shared keys without using extensive resources is to extract them from physiological signals already present inside the body, such as the Inter-Pulse Interval (IPI). The physiological signals must therefore be strong sources of randomness that meet five conditions: Universality (available in all people), Liveness (available at any time), Robustness (a strong random number), Permanence (independent from its history) and Uniqueness (independent from other sources). However, these conditions (mainly the last three) have not been systematically examined in current methods for randomness extraction from the IPI.

In this study, we first propose a methodology to measure the last three conditions: information secrecy measures for Robustness, the Santha-Vazirani source delta value for Permanence, and random source dependency analysis for Uniqueness. Then, using a large dataset of IPI values (almost 900,000,000 IPIs), we show that the IPI does not have Robustness and Permanence as a randomness source; thus, extraction of a strong uniform random number from IPI values is impossible. Third, we propose to use the trend of the IPI, instead of its value, as the source for a new randomness extraction method named Martingale Randomness Extraction from IPI (MRE-IPI). We evaluate MRE-IPI and show that it satisfies the Robustness condition completely and Permanence to some level.

Finally, we use the NIST STS and Dieharder test suites and show that MRE-IPI outperforms all recent randomness extraction methods from IPIs and achieves a quality roughly half that of the AES random number generator. MRE-IPI is still not a strong random number and cannot be used as a key to secure communications in general. However, it can be used as a one-time pad to securely exchange keys between the communicating parties. The usage of MRE-IPI will thus be kept to a minimum, which reduces the probability of breaking it. To the best of our knowledge, this is the first work in this area to use such a comprehensive method and large dataset to examine the randomness of physiological signals.
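
The core idea of using the trend rather than the value of the IPI can be illustrated as follows. Note that this raw increase/decrease mapping is only the starting point, not the martingale construction that MRE-IPI actually applies, and the IPI values shown are made up.

```python
import numpy as np

def trend_bits(ipis):
    """Derive bits from the *trend* of consecutive inter-pulse intervals:
    1 if the IPI increased, 0 if it decreased; equal values are dropped."""
    diffs = np.diff(np.asarray(ipis, dtype=float))
    return (diffs[diffs != 0] > 0).astype(int)

ipis = [812, 845, 830, 829, 861, 855, 870]  # IPIs in milliseconds (invented)
print(trend_bits(ipis))                     # -> [1 0 0 1 0 1]
```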

Shadow-Catcher: Looking Into Shadows to Detect Ghost Objects in Autonomous Vehicle 3D Sensing

Zhongyuan Hau, Soteris Demetriou, Luis Muñoz-González, Emil C. Lupu, Shadow-Catcher: Looking Into Shadows to Detect Ghost Objects in Autonomous Vehicle 3D Sensing, 26th European Symposium on Research in Computer Security (ESORICS), 2021.

LiDAR-driven 3D sensing allows new generations of vehicles to achieve advanced levels of situation awareness. However, recent works have demonstrated that physical adversaries can spoof LiDAR return signals and deceive 3D object detectors into erroneously detecting “ghost” objects. Existing defenses are either impractical or focus only on vehicles. Unfortunately, smaller objects such as pedestrians and cyclists are easier to spoof yet harder to defend against, and their misdetection can have worse safety implications. To address this gap, we introduce Shadow-Catcher, a set of new techniques embodied in an end-to-end prototype to detect both large and small ghost object attacks on 3D detectors. We characterize a new semantically meaningful physical invariant (3D shadows) which Shadow-Catcher leverages for validating objects. Our evaluation on the KITTI dataset shows that Shadow-Catcher consistently achieves more than 94% accuracy in identifying anomalous shadows for vehicles, pedestrians, and cyclists, while remaining robust to a novel class of strong “invalidation” attacks targeting the defense system. Shadow-Catcher achieves real-time detection, requiring only 0.003-0.021 s on average to process an object in a 3D point cloud on commodity hardware, and achieves a 2.17x speedup compared to prior work.
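
A crude sketch of the shadow-validation intuition, under simplifying assumptions of our own (sensor at the origin, a cylindrical object region, a fixed shadow depth): a genuine object occludes laser returns, so the frustum behind it should be almost empty of points, whereas a ghost object leaves its “shadow” full of background returns. Shadow-Catcher’s actual shadow-region model is considerably more detailed.

```python
import numpy as np

def shadow_anomaly_score(points, box_center, box_radius, shadow_depth=5.0):
    """Fraction of the point cloud lying in the expected shadow region
    behind a detected object; higher scores are more suspicious.
    points: (N, 3) LiDAR returns with the sensor at the origin."""
    c = np.asarray(box_center, dtype=float)
    dist_c = np.linalg.norm(c)
    dir_c = c / dist_c                                   # ray towards the object
    proj = points @ dir_c                                # distance along that ray
    lateral = np.linalg.norm(points - np.outer(proj, dir_c), axis=1)
    in_shadow = ((proj > dist_c) &
                 (proj < dist_c + shadow_depth) &
                 (lateral < box_radius))
    return in_shadow.mean()
```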

Responding to Attacks and Compromise at the Edge (RACE)

IoT systems evolve dynamically and are increasingly used in critical applications. Understanding how to maintain the operation of the system when parts of it have been compromised is therefore of critical importance. This requires continuously assessing the risk to the other parts of the system, determining the impact of the compromise, and selecting appropriate mitigation strategies to respond to the attack. The ability to cope with dynamic system changes is a key and significant challenge in achieving these objectives.

RACE is articulated into four broad themes of work: 1) understanding attacks and mitigation strategies; 2) maintaining an adequate representation of risk to the other parts of the system by understanding how attacks can evolve and propagate; 3) understanding the impact of the compromise upon the functionality of the system; and 4) selecting countermeasure strategies that take into account the trade-offs between minimising disruption to the system’s operation and functionality and minimising the risk to the other parts of the system.

Robustness and Transferability of Universal Attacks on Compressed Models

A.G. Matachana, K.T. Co, L. Muñoz-González, D. Martinez, E.C. Lupu. Robustness and Transferability of Universal Attacks on Compressed Models. AAAI 2021 Workshop: Towards Robust, Secure and Efficient Machine Learning. 2021.

Neural network compression methods like pruning and quantization are very effective at efficiently deploying Deep Neural Networks (DNNs) on edge devices. However, DNNs remain vulnerable to adversarial examples: inconspicuous inputs that are specifically designed to fool these models. In particular, Universal Adversarial Perturbations (UAPs) are a powerful class of adversarial attacks that create perturbations able to generalize across a large set of inputs. In this work, we analyze the effect of various compression techniques on robustness to UAP attacks, including different forms of pruning and quantization. We test the robustness of compressed models to white-box and transfer attacks, comparing them with their uncompressed counterparts on the CIFAR-10 and SVHN datasets. Our evaluations reveal clear differences between pruning methods, including Soft Filter and Post-training Pruning. We observe that UAP transfer attacks between pruned and full models are limited, suggesting that the systemic vulnerabilities across these models differ. This finding has practical implications, as using different compression techniques can blunt the effectiveness of black-box transfer attacks. We show that, in some scenarios, quantization can produce gradient masking, giving a false sense of security. Finally, our results suggest that conclusions about the robustness of compressed models to UAP attacks are application dependent: we observe different phenomena in the two datasets used in our experiments.
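
For context, this is roughly what the compression side of such an experiment can look like: L1 magnitude pruning of a small convolutional model in PyTorch. The architecture and the pruning amount are our own placeholders, and the UAP generation and transfer evaluation from the paper are omitted.

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

# A hypothetical small model standing in for the networks studied in the paper.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Flatten(), nn.Linear(16 * 32 * 32, 10),
)
for module in model.modules():
    if isinstance(module, (nn.Conv2d, nn.Linear)):
        prune.l1_unstructured(module, name="weight", amount=0.5)  # zero 50% of weights
        prune.remove(module, "weight")    # make the pruning permanent

zeroed = sum((m.weight == 0).sum().item() for m in model.modules()
             if isinstance(m, (nn.Conv2d, nn.Linear)))
print("zeroed weights:", zeroed)
```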

Object Removal Attacks on LiDAR-based 3D Object Detectors

LiDARs play a critical role in Autonomous Vehicles’ (AVs) perception and their safe operations. Recent works have demonstrated that it is possible to spoof LiDAR return signals to elicit fake objects. In this work we demonstrate how the same physical capabilities can be used to mount a new, even more dangerous class of attacks, namely Object Removal Attacks (ORAs). ORAs aim to force 3D object detectors to fail. We leverage the default setting of LiDARs that record a single return signal per direction to perturb point clouds in the region of interest (RoI) of 3D objects. By injecting illegitimate points behind the target object, we effectively shift points away from the target objects’ RoIs. Our initial results using a simple random point selection strategy show that the attack is effective in degrading the performance of commonly used 3D object detection models.
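
A schematic of the point-shifting idea, under assumptions of our own (a point cloud centred on the sensor, a precomputed RoI mask, and random selection of half the RoI points, echoing the simple random strategy described above): the chosen returns are pushed further along their rays, as if the spoofed return had arrived from behind the object.

```python
import numpy as np

def object_removal_attack(points, roi_mask, shift=10.0):
    """Sketch of an ORA: exploit the single-return-per-direction setting
    by moving LiDAR returns inside a target object's RoI further along
    their rays, emptying the RoI of supporting points.
    points: (N, 3) returns with the sensor at the origin; roi_mask: (N,) bool."""
    attacked = points.copy()
    idx = np.where(roi_mask)[0]                     # assumed non-empty
    rng = np.random.default_rng(0)
    chosen = rng.choice(idx, size=max(1, len(idx) // 2), replace=False)
    dirs = attacked[chosen] / np.linalg.norm(attacked[chosen], axis=1, keepdims=True)
    attacked[chosen] += shift * dirs                # push returns behind the object
    return attacked
```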

Z. Hau, K.T. Co, S. Demetriou, E.C. Lupu. Object Removal Attacks on LiDAR-based 3D Object Detectors. Automotive and Autonomous Vehicle Security (AutoSec) Workshop @ NDSS Symposium 2021.

Paper

Jacobian Regularization for Mitigating Universal Adversarial Perturbations

Authors: Kenneth Co, David Martinez Rego, Emil Lupu

Universal Adversarial Perturbations (UAPs) are input perturbations that can fool a neural network on large sets of data. They are a class of attacks that represents a significant threat as they facilitate realistic, practical, and low-cost attacks on neural networks. In this work, we derive upper bounds for the effectiveness of UAPs based on norms of data-dependent Jacobians. We empirically verify that Jacobian regularization increases model robustness to UAPs by up to four times whilst maintaining clean performance. Our theoretical analysis also allows us to formulate a metric for the strength of shared adversarial perturbations between pairs of inputs. We apply this metric to benchmark datasets and show that it is highly correlated with the actual observed robustness. This suggests that realistic and practical universal attacks can be reliably mitigated without sacrificing clean accuracy, which shows promise for the robustness of machine learning systems.
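
A sketch of what such a regulariser can look like in PyTorch, using the standard random-projection estimator of the squared Frobenius norm of the input-output Jacobian (one extra backward pass per projection). This illustrates the general technique, not the authors’ released code.

```python
import torch

def jacobian_penalty(model, x, n_proj=1):
    """Monte Carlo estimate of ||J||_F^2 for the input-output Jacobian:
    for unit vectors v, E_v ||J^T v||^2 equals ||J||_F^2 / n_classes."""
    x = x.clone().requires_grad_(True)
    logits = model(x)
    n_classes = logits.size(-1)
    penalty = x.new_zeros(())
    for _ in range(n_proj):
        v = torch.randn_like(logits)
        v = v / v.norm(dim=-1, keepdim=True)   # random unit direction per sample
        (g,) = torch.autograd.grad((logits * v).sum(), x, create_graph=True)
        penalty = penalty + n_classes * g.pow(2).sum() / n_proj
    return penalty / x.size(0)

# A training step would minimise: task loss + lambda * Jacobian term, e.g.
# loss = criterion(model(x), y) + lam * jacobian_penalty(model, x)
```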

Kenneth Co, David Martinez Rego, Emil Lupu, Jacobian Regularization for Mitigating Universal Adversarial Perturbations. 30th International Conference on Artificial Neural Networks (ICANN 21), Sept. 2021.

Pre-print on arXiv


Analyzing the Viability of UAV Missions Facing Cyber Attacks

With advanced video and sensing capabilities, unoccupied aerial vehicles (UAVs) are increasingly being used for numerous applications that involve the collaboration and autonomous operation of teams of UAVs. Yet such vehicles can be affected by cyber attacks, impacting the viability of their missions. We propose a method to conduct mission viability analysis under cyber attacks for missions that employ a team of several UAVs that share a communication network. We apply our method to a case study of a survey mission in a wildfire firefighting scenario. Within this context, we show how our method can help quantify the expected mission performance impact from an attack and determine if the mission can remain viable under various attack situations. Our method can be used both in the planning of the mission and for decision making during mission operation. Our approach to modeling attack progression and impact analysis with Petri nets is also more broadly applicable to other settings involving multiple resources that can be used interchangeably towards the same objective.
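
To show the flavour of the Petri-net modelling (with a deliberately tiny, invented mission, nothing like the paper’s actual models), the sketch below encodes markings as token counts and fires transitions for an attack that disables a UAV and for survey steps that consume the remaining area.

```python
# Minimal Petri net machinery: a marking maps places to token counts;
# a transition fires when its input places are sufficiently marked.
def enabled(marking, transition):
    return all(marking.get(p, 0) >= n for p, n in transition["in"].items())

def fire(marking, transition):
    m = dict(marking)
    for p, n in transition["in"].items():
        m[p] -= n
    for p, n in transition["out"].items():
        m[p] = m.get(p, 0) + n
    return m

# Three UAVs; an attack takes one out; each survey step uses a healthy UAV
# (returned afterwards) and completes one unit of the remaining area.
marking = {"uav_ok": 3, "area_left": 4, "area_done": 0}
attack = {"in": {"uav_ok": 1}, "out": {"uav_down": 1}}
survey = {"in": {"uav_ok": 1, "area_left": 1}, "out": {"uav_ok": 1, "area_done": 1}}

marking = fire(marking, attack)          # adversary disables one UAV
while enabled(marking, survey):
    marking = fire(marking, survey)
print(marking)  # mission still completes: 2 UAVs left, all 4 area units done
```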

J. Soikkeli, C. Perner and E. Lupu, “Analyzing the Viability of UAV Missions Facing Cyber Attacks,” in 2021 IEEE European Symposium on Security and Privacy Workshops (EuroS&PW), Vienna, Austria, 2021, pp. 103-112.
doi: 10.1109/EuroSPW54576.2021.00018

Universal Adversarial Robustness of Texture and Shape-Biased Models

Increasing shape-bias in deep neural networks has been shown to improve robustness to common corruptions and noise. In this paper we analyze the adversarial robustness of texture and shape-biased models to Universal Adversarial Perturbations (UAPs). We use UAPs to evaluate the robustness of DNN models with varying degrees of shape-based training. We find that shape-biased models do not markedly improve adversarial robustness, and we show that ensembles of texture and shape-biased models can improve universal adversarial robustness while maintaining strong performance.
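
Operationally, such an ensemble can be as simple as averaging the logits of the differently biased models. A minimal sketch, assuming the texture- and shape-biased models are loaded elsewhere and share a label space:

```python
import torch

def ensemble_logits(models, x):
    """Average the logits of texture- and shape-biased models at inference;
    model loading and preprocessing are assumed to happen elsewhere."""
    with torch.no_grad():
        return torch.stack([m(x) for m in models]).mean(dim=0)
```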

Citation: K. T. Co, L. Muñoz-González, L. Kanthan, B. Glocker and E. C. Lupu, “Universal Adversarial Robustness of Texture and Shape-Biased Models,” 2021 IEEE International Conference on Image Processing (ICIP), 2021, pp. 799-803, doi: 10.1109/ICIP42928.2021.9506325.

Paper in IEEE Archive | Pre-print on arXiv

Muhammad Zaid Hameed

Zaid joined the group as a Research Associate in May 2020. His activities focus on federated learning and adversarial machine learning.