Emil Lupu

Exploiting contextual anomalies to detect perception attacks on cyber-physical systems

Zhongyuan Hau

Abstract

Perception is a key component of Cyber-Physical Systems (CPS), where information is collected by sensors and processed for decision-making. Perception attacks are crafted specifically to subvert the CPS decision-making process, and since CPS are used in many safety-critical applications, such attacks could lead to catastrophic consequences. Hence, there is a need to study how effective detection systems for such attacks can be designed. This is difficult because each CPS is domain-specific, and a detection system built for a CPS in one domain cannot easily be transferred to another. Currently proposed detection systems are implemented to mitigate specific attacks and most offer only high-level insights into how the detection is performed. A systematic approach to designing detection of perception attacks that is generally applicable to CPS is needed. We propose a threat-modelling-based methodology to design perception attack detection systems for CPS. An information model of the CPS, together with a threat model, is used to determine how information correlations, defined as invariants, can be exploited as context for detecting anomalies. The proposed methodology was first applied to design perception attack detection in Autonomous Driving, where we tackle the problem of attacks on LiDAR-based perception that spoof and hide objects. A novel specified physical invariant, the 3D shadow, was identified, shown to be a robust verifier of genuine objects, and used to detect spoofed and hidden objects. Another, learnt, physical invariant, that an object's motion must be temporally consistent, was shown to be effective in detecting object spoofing. Secondly, we applied the methodology to design detection of false data injection in low-density sensor networks. We show that the use of learnt correlations across sensor measurements is effective even in a constrained setting with few sensors and heterogeneous data.
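
To give a flavour of the temporal-consistency invariant, the sketch below flags a tracked object whose frame-to-frame displacements would require physically implausible speed or acceleration. The interface, bounds and thresholds are illustrative assumptions, not the thesis implementation.

# Illustrative sketch (not the thesis implementation): flag a tracked object
# as potentially spoofed when its frame-to-frame motion is not temporally
# consistent, i.e. consecutive detections would imply implausible dynamics.
import numpy as np

MAX_SPEED = 60.0   # m/s, illustrative bound for road objects
MAX_ACCEL = 15.0   # m/s^2, illustrative bound

def is_motion_consistent(centroids: np.ndarray, dt: float) -> bool:
    """centroids: (T, 3) array of object positions over T LiDAR frames."""
    velocities = np.diff(centroids, axis=0) / dt          # (T-1, 3)
    speeds = np.linalg.norm(velocities, axis=1)
    accels = np.linalg.norm(np.diff(velocities, axis=0), axis=1) / dt
    return bool(np.all(speeds <= MAX_SPEED) and np.all(accels <= MAX_ACCEL))

# A plausible car track vs. an object that "teleports" between frames:
genuine = np.array([[0.0, 0, 0], [1.0, 0, 0], [2.1, 0, 0]])
spoofed = np.array([[0.0, 0, 0], [30.0, 0, 0], [0.5, 0, 0]])
print(is_motion_consistent(genuine, dt=0.1))  # True
print(is_motion_consistent(spoofed, dt=0.1))  # False -> candidate spoof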

Indiscriminate data poisoning against supervised learning: general attack formulations, robust defences, and poisonability

Javier Carnerero Cano

Abstract

Machine learning (ML) systems often rely on data collected from untrusted sources, such as humans or sensors, that can be compromised. These scenarios expose ML algorithms to data poisoning attacks, where adversaries manipulate a fraction of the training data to degrade the ML system's performance. However, previous works lack a systematic evaluation of attacks that considers the ML pipeline, and focus on classification settings. This is concerning since regression models are also applied in safety-critical systems. We characterise indiscriminate data poisoning attacks and defences in worst-case scenarios against supervised learning algorithms, considering the full ML pipeline: data sanitisation, hyperparameter learning, and training. We propose a novel attack formulation that considers the effect of the attack on the model’s hyperparameters. We apply this attack formulation to several ML classifiers using L2 and L1 regularisation. Our evaluation shows the benefits of using regularisation to help mitigate poisoning attacks, when hyperparameters are learnt using a trusted dataset. We then introduce a threat model for poisoning attacks against regression models, and propose a novel stealthy attack formulation via multiobjective bilevel optimisation, where the two objectives are attack effectiveness and detectability. We experimentally show that state-of-the-art defences do not mitigate these stealthy attacks. Furthermore, we provide a theoretical justification for the detectability objective and the methodology we designed. We also propose a novel defence, built upon Bayesian linear regression, that rejects points based on the model’s predictive variance. We empirically show its effectiveness in mitigating stealthy attacks and attacks with a large fraction of poisoning points. Finally, we introduce the concept of “poisonability”, which allows us to find the number of poisoning points required so that the mean error of the clean points matches the mean error of the poisoning points on the poisoned model. This challenges the underlying assumption of most defences. Specifically, we determine the poisonability of linear regression.
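
The variance-based defence can be illustrated with a small sketch: fit a Bayesian linear model, score every training point by the model's predictive standard deviation, and refit after rejecting the most uncertain points. The use of scikit-learn's BayesianRidge and the quantile threshold are our own illustrative choices, not the exact formulation in the thesis.

# Minimal sketch of the predictive-variance defence idea (illustrative).
import numpy as np
from sklearn.linear_model import BayesianRidge

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + rng.normal(scale=0.1, size=200)

# A few crude "poisoning" points, far from the clean distribution.
X_poison = rng.normal(loc=6.0, size=(10, 5))
y_poison = rng.normal(loc=-20.0, size=10)
X_all = np.vstack([X, X_poison])
y_all = np.concatenate([y, y_poison])

model = BayesianRidge().fit(X_all, y_all)
_, std = model.predict(X_all, return_std=True)  # per-point predictive std

# Reject the points with the largest predictive uncertainty, then refit.
keep = std <= np.quantile(std, 0.95)
model_clean = BayesianRidge().fit(X_all[keep], y_all[keep])
print(f"rejected {np.sum(~keep)} of {len(y_all)} points")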

Identifying safety-critical attacks targeting cyber-physical systems: a systems theoretic approach

Luca Maria Castiglione

Abstract

Over the last decades, society has witnessed a sharp increase in the use of complex and interconnected computer systems to monitor and assist in several aspects of everyday life. As operators of safety-critical systems deploy network-enabled devices aiming to enhance connectivity and streamline remote operations, the attack surface of these systems has increased. Whilst more attacks are becoming possible, only some of them will impact safety. Identifying such critical attacks is our priority; unfortunately, the complexity of modern cyber-physical systems (CPSs) renders this task challenging. Sophisticated attacks often rely on the effect of apparently legitimate commands, which can trigger cascading effects within the CPS itself, rendering it vulnerable to further attacks and causing harm. To help prevent these scenarios, tools and methodologies need to be developed that support integrated safety and security analysis in the context of CPS, taking into account their behaviours and internal dynamics. We present Cassandra, a novel methodology to identify safety-critical threat scenarios and reason about their risk and applicable security measures in specific deployment contexts. Unlike other methodologies, Cassandra leverages existing relations between high-level threats and the system architecture to identify safety-critical attack paths. The qualitative and quantitative analysis of the paths found allows us to estimate the risk associated with safety-critical attacks, identify applicable security controls, and evaluate their effectiveness. Cassandra offers an integrated set of tools that enable the automated derivation of safety-critical sequences of threats and their respective attack paths. This provides an important step towards making integrated safety and security analyses less subjective, more reproducible and thus more suitable for applications in safety-critical contexts. We have applied Cassandra to analyse the safe operation of safety-critical systems in three distinct use cases: railway traffic control, power grid, and avionics. The scenarios analysed progressively increase in complexity and in how closely they mirror real-world conditions.

RESICS: Resilience and Safety to Attacks in Industrial Control and Cyber-Physical Systems

We all critically depend on and use digital systems that sense and control physical processes and environments. Electricity, gas, water, and other utilities require the continuous operation of both national and local infrastructures. Industrial processes, for example chemical manufacturing, the production of materials and manufacturing chains, similarly lie at this intersection of the digital and the physical. The same intersection arises in other CPS such as robots, autonomous cars, and drones. Ensuring the resilience of such systems, their survivability and their continued operation when exposed to malicious threats requires the integration of methods and processes from security analysis, safety analysis, and system design and operation that have traditionally been carried out separately, and that each involve specialist skills and a significant amount of human effort. This is not only costly, but also error-prone, and it delays the response to security events.

RESICS aims to significantly advance the state of the art and deliver novel contributions that facilitate:

  • Risk analysis in the face of adversarial threats taking into account the impact of security events across cascading inter-dependencies
  • Characterising attacks that can have an impact on system safety and identifying the paths that make such attacks possible
  • Identifying countermeasures that can be applied to mitigate threats and contain the impact of attacks
  • Ensuring that such countermeasures can be applied whilst preserving the system’s safety and operational constraints and maximising its availability.

These contributions will be evaluated on several test beds, digital twins and a cyber range, and across a number of use cases in different industry sectors.

To achieve these goals, RESICS will combine model-driven and empirical approaches across both security and safety analysis, adopting a systems-thinking approach which emphasises Security, Safety and Resilience as emergent properties of the system. RESICS leverages preliminary results in the integration of safety and security methodologies, the application of formal methods, and the combination of model-based and empirical approaches to the analysis of inter-dependencies in ICSs and CPSs.

Funded by DSTL, this is a joint project between the Resilient Information Systems Security (RISS) Group at Imperial College and the Bristol Cyber Security Group. The work will be conducted in collaboration with Adelard (part of NCC Group), Airbus, Qinetiq, Reperion, Siemens and Thales as industry partners, and CMU, the University of Naples and SUTD as academic partners. The project is affiliated with the Research Institute in Trustworthy Inter-Connected Cyber-Physical Systems (RITICS).

Project Publications

  • L. M. Castiglione, S. Guerra, E. C. Lupu. Automated Identification of Safety-Critical Attacks against CPS and Generation of Assurance Case Fragments. To be presented at the Safety Critical Systems Symposium (SSS’25).
  • K. Mathuros, S. Venugopalan, S. Adepu. WaXAI: Explainable Anomaly Detection in Industrial Control Systems and Water Systems. Proceedings of the 10th ACM Cyber-Physical System Security Workshop, 2024. Best Paper Award.
  • R. Wang, S. Venugopalan, S. Adepu. Safety Analysis for Cyber-Physical Systems under Cyber Attacks Using Digital Twin. IEEE Cyber Security and Resilience, 2024.


Understanding and mitigating universal adversarial perturbations for computer vision neural networks

Kenneth Tan Co

Deep neural networks (DNNs) have become the algorithm of choice for many computer vision applications. They are able to achieve human-level performance in many computer vision tasks, and they enable the automation and large-scale deployment of applications such as object tracking, autonomous vehicles, and medical imaging. However, DNNs expose software applications to systemic vulnerabilities in the form of Universal Adversarial Perturbations (UAPs): input perturbation attacks that can cause DNNs to make classification errors on large sets of inputs.

Our aim is to improve the robustness of computer vision DNNs to UAPs without sacrificing the models’ predictive performance. To this end, we increase our understanding of these vulnerabilities by investigating the visual structures and patterns commonly appearing in UAPs. We demonstrate the efficacy and pervasiveness of UAPs by showing how Procedural Noise patterns can be used to generate efficient zero-knowledge attacks against different computer vision models and tasks at minimal cost to the attacker. We then evaluate the UAP robustness of various shape- and texture-biased models, and find that applying them in ensembles provides only a marginal improvement in robustness.

To mitigate UAP attacks, we develop two novel approaches. First, we propose using the Jacobian of DNNs to measure the sensitivity of computer vision DNNs to input perturbations. We derive theoretical bounds and provide empirical evidence showing how a combination of Jacobian regularisation and ensemble methods increases model robustness against UAPs without degrading the predictive performance of computer vision DNNs. Our results evince a robustness-accuracy trade-off against UAPs that is better than that of models trained in conventional ways. Second, we design a detection method that analyses hidden layer activation values to identify a variety of UAP attacks in real time with low latency. We show that our method outperforms existing defences under realistic time and computation constraints.
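
To give a flavour of the Jacobian regularisation idea, the sketch below adds a penalty on an estimate of the input-output Jacobian norm to a standard training loss. It is a minimal PyTorch illustration assuming a classification model; the estimator, loss weighting and function names are our own, not the thesis implementation.

# Sketch of Jacobian regularisation (illustrative, not the thesis' exact
# loss or bounds). We penalise an estimate of the input-output Jacobian
# norm, using a single random projection to keep one extra backward pass.
import torch
import torch.nn.functional as F

def jacobian_regularised_loss(model, x, y, lam=0.01):
    x = x.clone().requires_grad_(True)
    logits = model(x)
    task_loss = F.cross_entropy(logits, y)
    # Random unit projection v: ||grad of (v . logits)|| estimates ||J||_F
    # (a Hutchinson-style estimator of the Jacobian's Frobenius norm).
    v = torch.randn_like(logits)
    v = v / v.norm(dim=1, keepdim=True)
    (grad,) = torch.autograd.grad((logits * v).sum(), x, create_graph=True)
    jac_penalty = grad.pow(2).sum(dim=tuple(range(1, grad.dim()))).mean()
    return task_loss + lam * jac_penalty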

Link to thesis PDF

Cite as: Co, Kenneth Tan, Understanding and mitigating universal adversarial perturbations for computer vision neural networks. PhD Thesis, Department of Computing, Imperial College London, https://doi.org/10.25560/103574, March 2023

A Data Protection Architecture for Derived Data Control in Partially Disconnected Networks

Enrico Scalavino

Every organisation needs to exchange and disseminate data constantly amongst its employees, members, customers and partners. Disseminated data is often sensitive or confidential and access to it should be restricted to authorised recipients. Several enterprise rights management (ERM) systems and data protection solutions have been proposed by both academia and industry to enable usage control on disseminated data, i.e. to allow data originators to retain control over who accesses their information, under which circumstances, and how it is used. This is often achieved by means of cryptographic techniques, i.e. by disseminating encrypted data that only trustworthy recipients can decrypt.

Most of these solutions assume data recipients are connected to the network and able to contact remote policy evaluation authorities that can evaluate usage control policies and issue decryption keys. This assumption oversimplifies the problem by neglecting situations where connectivity is not available, as often happens in crisis management scenarios. In such situations, recipients may not be able to access the information they have received. Also, while using data, recipients and their applications can create new derived information, either by aggregating data from several sources or by transforming the original data’s content or format. Existing solutions mostly neglect this problem and do not allow originators to retain control over derived data, despite the fact that it may be more sensitive or valuable than the data originally disseminated.

In this thesis we propose an ERM architecture that caters for both derived data control and usage control in partially disconnected networks. We propose the use of a novel policy lattice model based on information flow and mandatory access control. Sets of policies controlling the usage of data can be specified and ordered in a lattice according to the level of protection they provide. At the same time, their association with specific data objects is mandated by rules (content verification procedures) defined in a data sharing agreement (DSA) stipulated amongst the organisations sharing information. When data is transformed, the new policies associated with it are automatically determined depending on the transformation used and the policies currently associated with the input data. The solution we propose takes into account transformations that can both increase and reduce the sensitivity of information, thus giving originators a flexible means to control their data and its derivations.

When data must be disseminated in disconnected environments, the movement of users and the ad hoc connections they establish can be exploited to distribute information. To allow users to decrypt disseminated data without contacting remote evaluation authorities, we integrate our architecture with a mechanism for authority devolution, so that users moving in the disconnected area can be granted the right to evaluate policies and issue decryption keys. This allows recipients to contact any nearby user that is also a policy evaluation authority to obtain decryption keys. The mechanism has been shown to be efficient, so that timely access to data is possible despite the lack of connectivity.

Prototypes of the proposed solutions that protect XML documents have been developed. A realistic crisis management scenario has been used to show both the flexibility of the presented approach for derived data control and the efficiency of the authority devolution solution when handling data dissemination in simulated partially disconnected networks. While existing systems do not offer any means to control derived data and only offer partial solutions to the problem of lack of connectivity (e.g. by caching decryption keys), we have defined a set of solutions that help data originators overcome the shortcomings of current proposals and control their data in innovative, problem-oriented ways.
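
A toy sketch of the derived-data control idea follows: policies are ordered in a lattice by protection level, and each transformation determines the policy of its output from the policies of its inputs. The levels and rules below are invented for illustration; the thesis uses content verification procedures defined in a DSA.

# Toy sketch only: a totally ordered chain of protection levels standing in
# for the thesis' policy lattice, and transformation rules that may raise or
# lower the sensitivity of derived data.
PUBLIC, INTERNAL, CONFIDENTIAL, SECRET = 0, 1, 2, 3   # protection levels

def derived_policy(input_levels, transformation):
    base = max(input_levels)              # join of the input policies
    if transformation == "aggregate":
        return min(base + 1, SECRET)      # combined sources may be MORE sensitive
    if transformation == "anonymise":
        return max(base - 1, PUBLIC)      # some transformations REDUCE sensitivity
    return base                           # e.g. format changes keep the policy

print(derived_policy([INTERNAL, CONFIDENTIAL], "aggregate"))  # -> 3 (SECRET)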

http://hdl.handle.net/10044/1/10203

Monitoring the health and integrity of Wireless Sensor Networks

Rodrigo Vieira Steiner

Wireless Sensor Networks (WSNs) will play a major role in the Internet of Things, collecting the data that will support decision-making and enable the automation of many applications. Nevertheless, the introduction of these devices into our daily life raises serious concerns about their integrity. Therefore, at any given point, one must be able to tell whether or not a node has been compromised. Moreover, it is crucial to understand how the compromise of a particular node or set of nodes may affect the network operation. In this thesis, we present a framework to monitor the health and integrity of WSNs that allows us to detect compromised devices and comprehend how they might impact a network’s performance. We start by investigating the use of attestation to identify malicious nodes and advance the state of the art by addressing limitations of existing mechanisms. Firstly, we tackle effectiveness and scalability by combining attestation with measurements inspection, and show that the right combination of both schemes can achieve high accuracy whilst significantly reducing power consumption. Secondly, we propose a novel stochastic software-based attestation approach that relaxes a fundamental and yet overlooked assumption made in the literature, significantly reducing time and energy consumption while improving the detection rate of honest devices. Lastly, we propose a mathematical model to represent the health of a WSN according to its ability to perform its functions. Our model combines the knowledge regarding compromised nodes with additional information that quantifies the importance of each node. In this context, we propose a new centrality measure and analyse how well existing metrics can rank the importance of each sensor node for network connectivity. We demonstrate that, while no measure is invariably better, our proposed metric outperforms the others in the vast majority of cases.
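
The health-model idea can be sketched as follows: weight each node by an importance score and discount the network's health by the importance of the nodes believed compromised. Here ordinary betweenness centrality stands in for the thesis' own centrality measure, so the numbers are purely illustrative.

# Illustrative sketch of a centrality-weighted WSN health score.
import networkx as nx

def network_health(G: nx.Graph, compromised: set) -> float:
    importance = nx.betweenness_centrality(G)        # stand-in importance score
    total = sum(importance.values()) or 1.0
    lost = sum(importance[n] for n in compromised)
    return 1.0 - lost / total                        # 1.0 = fully healthy

G = nx.random_geometric_graph(30, radius=0.3, seed=1)
print(network_health(G, compromised={0, 5}))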

Ensuring the resilience of wireless sensor networks to malicious data injections through measurements inspection

Vittorio Illiano

Malicious data injections pose a severe threat to systems based on Wireless Sensor Networks (WSNs), since they give the attacker control over the measurements and, in turn, over the system’s status and response. Malicious measurements are particularly threatening when used to spoof or mask events of interest, thus eliciting or preventing desirable responses. Spoofing and masking attacks are particularly difficult to detect since they depict plausible behaviours, especially if multiple sensors have been compromised and collude to inject a coherent set of malicious measurements. Previous work has tackled the problem through measurements inspection, which analyses the inter-measurement correlations induced by the physical phenomena. However, these techniques consider simplistic attacks and are not robust to collusion. Moreover, they assume highly predictable patterns in the measurements distribution, which are invalidated by the unpredictability of events. We design a set of techniques that effectively detect malicious data injections in the presence of sophisticated collusion strategies, when one or more events manifest. Moreover, we build a methodology to characterise the likely compromised sensors. We also design diagnosis criteria that allow us to distinguish anomalies arising from malicious interference from those arising from faults. In contrast with previous work, we test the robustness of our methodology with automated and sophisticated attacks, where the attacker aims to evade detection. We conclude that our approach outperforms state-of-the-art approaches. Moreover, we quantitatively estimate the WSN's degree of resilience and provide a methodology to give a WSN owner an assured degree of resilience by automatically designing the WSN deployment. To deal also with the extreme scenario where the attacker has compromised most of the WSN, we propose a combination with software attestation techniques, which are more reliable when malicious data originates from compromised software, but are also more expensive, and we achieve an excellent trade-off between cost and resilience.
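
A minimal sketch of measurements inspection in this spirit: learn how each sensor's readings are predicted by the others during a trusted window, then flag sensors whose current residuals are anomalously large. The linear predictor and threshold are illustrative simplifications of the techniques developed in the thesis.

# Illustrative sketch: exploit inter-sensor correlations to flag suspects.
import numpy as np

def flag_suspects(train: np.ndarray, window: np.ndarray, z_thresh=4.0):
    """train, window: (time, n_sensors) measurement matrices; train is
    assumed attack-free. Returns indices of suspected sensors."""
    suspects = []
    for i in range(train.shape[1]):
        others = np.delete(train, i, axis=1)
        # Least-squares predictor of sensor i from all the other sensors.
        coef, *_ = np.linalg.lstsq(others, train[:, i], rcond=None)
        sigma = (train[:, i] - others @ coef).std() + 1e-9
        others_now = np.delete(window, i, axis=1)
        resid_now = window[:, i] - others_now @ coef
        if np.abs(resid_now).mean() > z_thresh * sigma:
            suspects.append(i)
    return suspects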

Compositional behaviour and reliability models for adaptive component-based architectures

Pedro Rodrigues Fonseca

The increasing scale and distribution of modern pervasive computing and service-based platforms makes manual maintenance and evolution difficult and too slow. Systems should therefore be designed to self-adapt in response to environment changes, which requires the use of on-line models and analysis. Although there has been a considerable amount of work on architectural modelling and behavioural analysis of component-based systems, there is a need for approaches that integrate the architectural, behavioural and management aspects of a system. In particular, the lack of support for composability in probabilistic behavioural models prevents their systematic use for adapting systems based on changes in their non-functional properties. Of these non-functional properties, this thesis focuses on reliability.

We introduce Probabilistic Component Automata (PCA) for describing the probabilistic behaviour of such systems. Our formalism simultaneously overcomes three of the main limitations of existing work: it preserves a close correspondence between the behavioural and architectural views of a system, in both abstractions and semantics; it is composable, as behavioural models of composite components are automatically obtained by combining the models of their constituent parts; and it is probabilistic, thereby enabling analysis of non-functional properties. PCA also provide constructs for representing failure, failure propagation and failure handling in component-based systems in a manner that closely corresponds to the use of exceptions in programming languages. Although PCA are used throughout this thesis for reliability analysis, the model can also be seen as an abstract process algebra that may be applicable to the analysis of other system properties.

We further show how reliability analysis based on PCA models can be used to perform architectural adaptation on distributed component-based systems, and we evaluate the computational cost of decentralised adaptation decisions. To mitigate the state-explosion problem associated with composite models, we introduce an algorithm to reduce a component’s PCA model to one that represents only its interface behaviour, and we formally show that the reduced model preserves the properties of the original representation. By experiment, we show that the reduced models are significantly smaller than the originals, achieving a reduction of more than 80% in both the number of states and the number of transitions. A further benefit of the approach is that it allows component profiling and probabilistic interface behaviour to be extracted independently for each component, thereby enabling its exchange between different organisations without revealing commercially sensitive aspects of the components’ implementations.

The contributions and results of this work are evaluated both through a series of small-scale examples and through a larger case study of an e-Banking application derived from Java EE training materials. Our work shows how probabilistic non-functional properties can be integrated with the architectural and behavioural models of a system in an intuitive and scalable way that enables automated architecture reconfiguration based on reliability properties using composable models.
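
As a toy illustration of reliability analysis over probabilistic behaviour models (not the PCA formalism itself), the sketch below computes the probability that a component's Markov chain is absorbed in a SUCCESS state rather than a FAIL state, using the standard fundamental-matrix construction.

# Toy sketch: reliability as the probability of absorption in SUCCESS.
import numpy as np

# States: 0 = init, 1 = working, 2 = SUCCESS (absorbing), 3 = FAIL (absorbing)
P = np.array([
    [0.0, 1.0, 0.0, 0.0],
    [0.0, 0.2, 0.7, 0.1],
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
])
Q, R = P[:2, :2], P[:2, 2:]          # transient-to-transient / to-absorbing
N = np.linalg.inv(np.eye(2) - Q)     # fundamental matrix
absorption = N @ R                   # rows: start state; cols: SUCCESS, FAIL
print(f"reliability from init: {absorption[0, 0]:.3f}")   # P(reach SUCCESS)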

Improving resilience to cyber-attacks by analysing system output impacts and costs

Jukka Soikkeli

Abstract

Cyber-attacks cost businesses millions of dollars every year, a key component of which is the cost of business disruption from system downtime. As cyber-attacks cannot all be prevented, there is a need to consider the cyber resilience of systems, i.e. their ability to withstand cyber-attacks and recover from them.

Previous works discussing system cyber resilience typically either offer generic high-level guidance on best practices, provide limited attack modelling, or apply only to systems with special characteristics. What is lacking is an approach to system cyber resilience evaluation that is generally applicable yet provides a detailed consideration of the system-level impacts of cyber-attacks and defences.

We propose a methodology for evaluating the effectiveness of actions intended to improve resilience to cyber-attacks, considering both their impacts on system output performance and their monetary costs. It is intended for analysing attacks that can disrupt the system's function, and involves modelling attack progression, system output production, the response to attacks, and the costs of cyber-attacks and defensive actions.

Studies of three use cases demonstrate the implementation and usefulness of our methodology. First, in our redundancy planning study, we considered the effect of redundancy additions on mitigating the impacts of cyber-attacks on system output performance. We found that redundancy with diversity can be effective in increasing resilience, although the reduction in attack-related costs must be balanced against added maintenance costs. Second, our work on attack countermeasure selection shows that, by considering system output impacts across the duration of an attack, one can find more cost-effective attack responses than without such considerations. Third, we propose an approach to mission viability analysis for multi-UAV deployments facing cyber-attacks, which can aid resource planning and help determine whether a mission can conclude successfully despite an attack. We provide different implementations of our model components, based on use case requirements.

Meet Javi

Javier Carnerero Cano, a PhD student in the RISS group, makes a DoC Clock video to introduce himself and the things he likes!

Marwa Salayma

Marwa Salayma is conducting cutting-edge research on the resilience of systems, the Internet of Things (IoT) and cybersecurity in the RISS group. Before joining Imperial College London, she worked as a research associate at Heriot-Watt University, where she was part of two collaborative projects related to underwater acoustic sensor networks: 1) Harvesting of Underwater Data from SensOr Networks using Autonomous Underwater Vehicles (AUV) (HUDSON) and 2) Smart dust for large scale underwater wireless sensing (USMART).

Marwa received her PhD from Edinburgh Napier University in 2018, with a thesis in the field of Wireless Body Area Networks (WBAN); an M.Sc. in Computer Science from Jordan University of Science and Technology in 2013; and a BEng (Hons) in Electrical Engineering from Palestine Polytechnic University in 2007. Besides the security, resilience and reliability of dynamic systems, her research interests include (but are not limited to) wireless communication technologies deployed in different network environments, eHealth, network layer stack protocols including cross-layering algorithms, energy-efficient communication, reliable scheduling, and QoS provisioning in wireless networks.

ERASE: Evaluating the Robustness of Machine Learning Algorithms in Adversarial Settings

We are increasingly relying on systems that use machine learning to learn from their environment, often to detect anomalies in the behaviour that they observe. However, the consequences of a malicious adversary targeting the machine learning algorithms themselves, by compromising part of the data from which the system learns, are poorly understood and represent a significant threat. The objective of this project is to propose systematic and realistic ways of assessing, testing and improving the robustness of machine learning algorithms to poisoning attacks. We consider both indiscriminate attacks, which aim to cause an overall degradation of the model’s performance, and targeted attacks, which aim to induce specific errors. We focus in particular on “optimal” attack strategies seeking to maximise the impact of the poisoning points, thus representing a “worst-case” scenario. However, we also consider sophisticated adversaries that take into account detectability constraints.

PhD Studentship funded by DSTL

Fellowships

If you are interested in being hosted within the group for a fellowship, such as an MSCA Fellowship, please do not hesitate to contact Prof. Emil Lupu.

Extracting Randomness from the Trend of IPI for Cryptographic Operations in Implantable Medical Devices

H. Chizari and E. Lupu, “Extracting Randomness from the Trend of IPI for Cryptographic Operations in Implantable Medical Devices,” in IEEE Transactions on Dependable and Secure Computing, vol. 18, no. 2, pp. 875-888, 1 March-April 2021, doi: 10.1109/TDSC.2019.2921773.

Achieving secure communication between an Implantable Medical Device (IMD) and a gateway or programming device outside the body has been shown to be critical by recent reports of vulnerabilities in cardiac devices, insulin pumps and neural implants, amongst others. The use of asymmetric cryptography is typically not a practical solution for IMDs due to their scarce computational and power resources. Symmetric key cryptography is preferred, but its security relies on agreeing and using strong keys, which are difficult to generate. A solution for generating strong shared keys without using extensive resources is to extract them from physiological signals already present inside the body, such as the Inter-Pulse Interval (IPI). The physiological signals must therefore be strong sources of randomness that meet five conditions: Universality (available in all people), Liveness (available at any time), Robustness (a strong random number), Permanence (independent from its history) and Uniqueness (independent from other sources). However, these conditions (mainly the last three) have not been systematically examined in current methods for randomness extraction from the IPI. In this study, we first propose a methodology to measure the last three conditions: information secrecy measures for Robustness, the Santha-Vazirani source delta value for Permanence, and random source dependency analysis for Uniqueness. Second, using a large dataset of IPI values (almost 900,000,000 IPIs), we show that the IPI does not satisfy Robustness or Permanence as a randomness source, and thus the extraction of a strong uniform random number from IPI values is impossible. Third, we propose to use the trend of the IPI, instead of its value, as the source for a new randomness extraction method named Martingale Randomness Extraction from IPI (MRE-IPI). We evaluate MRE-IPI and show that it satisfies the Robustness condition completely and Permanence to some level. Finally, we use the NIST STS and Dieharder test suites to show that MRE-IPI outperforms all recent randomness extraction methods from IPIs and achieves a quality roughly half that of the AES random number generator. MRE-IPI is still not a strong random number and cannot be used as a key to secure communications in general. However, it can be used as a one-time pad to securely exchange keys between the communicating parties. The usage of MRE-IPI will thus be kept to a minimum, reducing the probability of breaking it. To the best of our knowledge, this is the first work in this area to use such a comprehensive method and such a large dataset to examine the randomness of physiological signals.
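
To convey the intuition of extracting randomness from the trend of the IPI rather than its value, the toy sketch below derives one bit per step from whether the interval rose or fell, followed by von Neumann debiasing. This is emphatically not the paper's MRE-IPI construction, only an illustration of the underlying idea.

# Toy illustration only: trend-based bits from a sequence of IPIs (in ms).
def trend_bits(ipis):
    # 1 if the interval rose, 0 if it fell; equal neighbours are skipped.
    raw = [1 if b > a else 0 for a, b in zip(ipis, ipis[1:]) if b != a]
    # Von Neumann debiasing: (0,1) -> 0, (1,0) -> 1, equal pairs discarded.
    out = []
    for x, y in zip(raw[::2], raw[1::2]):
        if x != y:
            out.append(x)
    return out

print(trend_bits([812, 805, 820, 818, 823, 819, 826]))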

Shadow-Catcher: Looking Into Shadows to Detect Ghost Objects in Autonomous Vehicle 3D Sensing

Zhongyuan Hau, Soteris Demetriou, Luis Muñoz-González, Emil C. Lupu, Shadow-Catcher: Looking Into Shadows to Detect Ghost Objects in Autonomous Vehicle 3D Sensing, 26th European Symposium on Research in Computer Security (ESORICS), 2021.

LiDAR-driven 3D sensing allows new generations of vehicles to achieve advanced levels of situation awareness. However, recent works have demonstrated that physical adversaries can spoof LiDAR return signals and deceive 3D object detectors into erroneously detecting “ghost” objects. Existing defences are either impractical or focus only on vehicles. Unfortunately, smaller objects such as pedestrians and cyclists are easier to spoof, harder to defend, and can have worse safety implications. To address this gap, we introduce Shadow-Catcher, a set of new techniques embodied in an end-to-end prototype to detect both large and small ghost object attacks on 3D detectors. We characterise a new semantically meaningful physical invariant (3D shadows) which Shadow-Catcher leverages for validating objects. Our evaluation on the KITTI dataset shows that Shadow-Catcher consistently achieves more than 94% accuracy in identifying anomalous shadows for vehicles, pedestrians, and cyclists, while it remains robust to a novel class of strong “invalidation” attacks targeting the defence system. Shadow-Catcher achieves real-time detection, requiring only between 0.003s and 0.021s on average to process an object in a 3D point cloud on commodity hardware, and achieves a 2.17x speedup compared to prior work.
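
The geometric intuition can be sketched in a simplified bird's-eye view: a genuine object occludes LiDAR rays, so the region behind its bounding box should contain almost no returns, whereas rays pass through a ghost object and hit the ground behind it. All parameters below are illustrative and the geometry is deliberately simplified to 2D; this is not the actual Shadow-Catcher pipeline.

# Geometry sketch of the 3D-shadow invariant (simplified 2D illustration).
import numpy as np

def shadow_anomaly_score(points: np.ndarray, box_center, box_radius,
                         shadow_depth=5.0) -> float:
    """points: (N, 2) LiDAR returns in the x-y plane, sensor at the origin.
    Returns the fraction of returns falling inside the expected shadow
    region behind the object; near zero for a genuine object, high when the
    'object' casts no shadow, i.e. a likely ghost."""
    c = np.asarray(box_center, dtype=float)
    d = np.linalg.norm(c)
    u = c / d                               # ray direction towards the object
    proj = points @ u                       # distance of each point along u
    lateral = np.abs(points @ np.array([-u[1], u[0]]))
    in_shadow = (
        (proj > d + box_radius)
        & (proj < d + box_radius + shadow_depth)
        & (lateral < box_radius)
    )
    return float(in_shadow.mean())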

Responding to Attacks and Compromise at the Edge (RACE)

IoT systems evolve dynamically and are increasingly used in critical applications. Understanding how to maintain the operation of a system that has been partially compromised is therefore of critical importance. This requires continuously assessing the risk to other parts of the system, determining the impact of the compromise, and selecting appropriate mitigation strategies to respond to the attack. The ability to cope with dynamic system changes is a key and significant challenge in achieving these objectives.

RACE is articulated into four broad themes of work:

  • understanding attacks and mitigation strategies;
  • maintaining an adequate representation of the risk to the other parts of the system, by understanding how attacks can evolve and propagate;
  • understanding the impact of the compromise upon the functionality of the system;
  • selecting countermeasure strategies, taking into account the trade-offs between minimising disruption to the system's operation and functionality and minimising the risk to the other parts of the system.

Robustness and Transferability of Universal Attacks on Compressed Models

A.G. Matachana, K.T. Co, L. Muñoz-González, D. Martinez, E.C. Lupu. Robustness and Transferability of Universal Attacks on Compressed Models. AAAI 2021 Workshop: Towards Robust, Secure and Efficient Machine Learning. 2021.

Neural network compression methods like pruning and quantization are very effective at efficiently deploying Deep Neural Networks (DNNs) on edge devices. However, DNNs remain vulnerable to adversarial examples: inconspicuous inputs that are specifically designed to fool these models. In particular, Universal Adversarial Perturbations (UAPs) are a powerful class of adversarial attacks which create perturbations that generalize across a large set of inputs. In this work, we analyze the effect of various compression techniques on robustness to UAP attacks, including different forms of pruning and quantization. We test the robustness of compressed models to white-box and transfer attacks, comparing them with their uncompressed counterparts on the CIFAR-10 and SVHN datasets. Our evaluations reveal clear differences between pruning methods, including Soft Filter and Post-training Pruning. We observe that UAP transfer attacks between pruned and full models are limited, suggesting that the systemic vulnerabilities across these models are different. This finding has practical implications, as using different compression techniques can blunt the effectiveness of black-box transfer attacks. We show that, in some scenarios, quantization can produce gradient masking, giving a false sense of security. Finally, our results suggest that conclusions about the robustness of compressed models to UAP attacks are application-dependent: we observe different phenomena in the two datasets used in our experiments.
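
A transfer evaluation of the kind described can be sketched as follows: craft a UAP on a source model, apply it to the inputs of a target model (e.g. a pruned or quantized variant), and report the accuracy drop. The helper below is a generic illustration, not the paper's experimental harness.

# Sketch of a UAP transfer evaluation (illustrative helper).
import torch

@torch.no_grad()
def uap_transfer_drop(source_uap, target_model, loader, device="cpu"):
    """source_uap: (C, H, W) perturbation crafted on a source model.
    Returns the target model's accuracy drop when the UAP is applied."""
    clean_correct = adv_correct = total = 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        clean_correct += (target_model(x).argmax(1) == y).sum().item()
        x_adv = (x + source_uap).clamp(0, 1)   # keep valid pixel range
        adv_correct += (target_model(x_adv).argmax(1) == y).sum().item()
        total += y.numel()
    return (clean_correct - adv_correct) / total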

Object Removal Attacks on LiDAR-based 3D Object Detectors

LiDARs play a critical role in Autonomous Vehicles’ (AVs) perception and safe operation. Recent works have demonstrated that it is possible to spoof LiDAR return signals to elicit fake objects. In this work, we demonstrate how the same physical capabilities can be used to mount a new, even more dangerous class of attacks, namely Object Removal Attacks (ORAs). ORAs aim to force 3D object detectors to fail. We leverage the default setting of LiDARs, which record a single return signal per direction, to perturb point clouds in the region of interest (RoI) of 3D objects. By injecting illegitimate points behind the target object, we effectively shift points away from the target object’s RoI. Our initial results, using a simple random point selection strategy, show that the attack is effective in degrading the performance of commonly used 3D object detection models.
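
Conceptually, the attack can be sketched as follows: points inside the target object's RoI are replaced by points further along the same rays, so that the LiDAR's single return per direction no longer lands on the object. The function below is an idealised post-processing illustration of this effect, not the physical spoofing apparatus.

# Conceptual sketch of an Object Removal Attack on a point cloud.
import numpy as np

def object_removal_attack(points, roi_center, roi_radius, push=10.0):
    """points: (N, 3) point cloud with the sensor at the origin.
    Returns a perturbed cloud whose RoI points are pushed behind the object."""
    pts = np.asarray(points, dtype=float).copy()
    center = np.asarray(roi_center, dtype=float)
    in_roi = np.linalg.norm(pts - center, axis=1) < roi_radius
    dists = np.linalg.norm(pts[in_roi], axis=1, keepdims=True)
    # Since the LiDAR keeps one return per direction, the farther spoofed
    # return replaces the genuine one, emptying the object's RoI.
    pts[in_roi] = pts[in_roi] / dists * (dists + push)
    return pts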

Z. Hau, K.T. Co, S. Demetriou, E.C. Lupu. Object Removal Attacks on LiDAR-based 3D Object Detectors. Automotive and Autonomous Vehicle Security (AutoSec) Workshop @ NDSS Symposium 2021.

Paper