Object Removal Attacks on LiDAR-based 3D Object Detectors

LiDARs play a critical role in Autonomous Vehicles' (AVs) perception and their safe operation. Recent works have demonstrated that it is possible to spoof LiDAR return signals to elicit fake objects. In this work, we demonstrate how the same physical capabilities can be used to mount a new, even more dangerous class of attacks, namely Object Removal Attacks (ORAs). ORAs aim to force 3D object detectors to fail. We leverage the default setting of LiDARs, which record a single return signal per direction, to perturb point clouds in the region of interest (RoI) of 3D objects. By injecting illegitimate points behind the target object, we effectively shift points away from the target objects' RoIs. Our initial results using a simple random point selection strategy show that the attack is effective in degrading the performance of commonly used 3D object detection models.
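
The point-shifting step is simple enough to sketch. The snippet below is a minimal illustration of the idea, not the implementation used in the paper; the function and parameter names (`remove_object_points`, `in_roi`, `shift_dist`, `budget`) are hypothetical, and the random selection mirrors only the simple strategy described above.

```python
import numpy as np

def remove_object_points(points, in_roi, shift_dist=5.0, budget=200, seed=None):
    """Illustrative ORA-style perturbation (hypothetical sketch, not the
    paper's code): relocate returns from a target object's RoI to points
    farther along the same rays, emulating spoofed later returns."""
    rng = np.random.default_rng(seed)
    attacked = points.copy()          # (N, 3) array of x, y, z returns
    idx = np.flatnonzero(in_roi)      # indices of points inside the target RoI
    chosen = rng.choice(idx, size=min(budget, idx.size), replace=False)
    # Push each chosen return shift_dist metres outward along its ray from
    # the sensor origin; because the LiDAR keeps a single return per
    # direction, the legitimate surface point is effectively removed.
    rays = attacked[chosen]
    dists = np.linalg.norm(rays, axis=1, keepdims=True)
    attacked[chosen] = rays * (1.0 + shift_dist / dists)
    return attacked
```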

Z. Hau, K.T. Co, S. Demetriou, E.C. Lupu. Object Removal Attacks on LiDAR-based 3D Object Detectors. Automotive and Autonomous Vehicle Security (AutoSec) Workshop @ NDSS Symposium 2021.

Paper

Jacobian Regularization for Mitigating Universal Adversarial Perturbations

Authors: Kenneth Co, David Martinez Rego, Emil Lupu

Universal Adversarial Perturbations (UAPs) are input perturbations that can fool a neural network on large sets of data. They are a class of attacks that represents a significant threat as they facilitate realistic, practical, and low-cost attacks on neural networks. In this work, we derive upper bounds for the effectiveness of UAPs based on norms of data-dependent Jacobians. We empirically verify that Jacobian regularization greatly increases model robustness to UAPs by up to four times whilst maintaining clean performance. Our theoretical analysis also allows us to formulate a metric for the strength of shared adversarial perturbations between pairs of inputs. We apply this metric to benchmark datasets and show that it is highly correlated with the actual observed robustness. This suggests that realistic and practical universal attacks can be reliably mitigated without sacrificing clean accuracy, which shows promise for the robustness of machine learning systems.
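
Conceptually, the defence adds a penalty on the norm of the model's input-output Jacobian to the training loss. The sketch below uses the common random-projection estimator of the squared Frobenius norm; it is a hedged illustration in the spirit of the approach rather than the authors' code, and `n_proj` and the weight `lam` are assumed names.

```python
import torch

def jacobian_penalty(model, x, n_proj=1):
    """Monte Carlo estimate proportional to the squared Frobenius norm of
    the input-output Jacobian, using one backward pass per random
    projection (a generic sketch, not the paper's exact implementation)."""
    x = x.clone().requires_grad_(True)
    out = model(x)
    penalty = 0.0
    for _ in range(n_proj):
        v = torch.randn_like(out)
        v = v / v.norm()                       # random unit projection
        (grad,) = torch.autograd.grad(out, x, grad_outputs=v,
                                      create_graph=True, retain_graph=True)
        penalty = penalty + grad.pow(2).sum()
    return penalty / n_proj

# Training step: standard loss plus the regularizer, weighted by lam.
# loss = criterion(model(x), y) + lam * jacobian_penalty(model, x)
```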

Kenneth Co, David Martinez Rego, Emil Lupu. Jacobian Regularization for Mitigating Universal Adversarial Perturbations. 30th International Conference on Artificial Neural Networks (ICANN 2021), Sept. 2021.

Pre-print on arxiv


Analyzing the Viability of UAV Missions Facing Cyber Attacks

With advanced video and sensing capabilities, unoccupied aerial vehicles (UAVs) are increasingly being used for numerous applications that involve the collaboration and autonomous operation of teams of UAVs. Yet such vehicles can be affected by cyber attacks, impacting the viability of their missions. We propose a method to conduct mission viability analysis under cyber attacks for missions that employ a team of several UAVs that share a communication network. We apply our method to a case study of a survey mission in a wildfire firefighting scenario. Within this context, we show how our method can help quantify the expected mission performance impact from an attack and determine if the mission can remain viable under various attack situations. Our method can be used both in the planning of the mission and for decision making during mission operation. Our approach to modeling attack progression and impact analysis with Petri nets is also more broadly applicable to other settings involving multiple resources that can be used interchangeably towards the same objective.
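
As background for the modelling approach: a Petri net tracks tokens in places and fires transitions that consume tokens from input places and produce tokens in output places, which is a natural fit for interchangeable resources. The snippet below is a generic, minimal sketch of the formalism with a toy UAV example; it is not the authors' mission or attack model, and all names are illustrative.

```python
class PetriNet:
    """Minimal Petri net: marking maps each place to a token count."""

    def __init__(self, marking):
        self.marking = dict(marking)
        self.transitions = {}      # name -> (input arcs, output arcs)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= n for p, n in inputs.items())

    def fire(self, name):
        assert self.enabled(name), f"{name} is not enabled"
        inputs, outputs = self.transitions[name]
        for p, n in inputs.items():
            self.marking[p] -= n
        for p, n in outputs.items():
            self.marking[p] = self.marking.get(p, 0) + n

# Toy example: two interchangeable UAVs; a cyber attack disables one,
# and the surviving UAV can still be assigned the survey task.
net = PetriNet({"uav_ok": 2, "task_pending": 1})
net.add_transition("compromise", {"uav_ok": 1}, {"uav_down": 1})
net.add_transition("assign", {"uav_ok": 1, "task_pending": 1},
                   {"uav_ok": 1, "task_running": 1})
net.fire("compromise")
net.fire("assign")
print(net.marking)  # the mission remains viable with one UAV down
```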

J. Soikkeli, C. Perner and E. Lupu, “Analyzing the Viability of UAV Missions Facing Cyber Attacks,” in 2021 IEEE European Symposium on Security and Privacy Workshops (EuroS&PW), Vienna, Austria, 2021, pp. 103-112.
doi: 10.1109/EuroSPW54576.2021.00018

Universal Adversarial Robustness of Texture and Shape-Biased Models

Increasing shape-bias in deep neural networks has been shown to improve robustness to common corruptions and noise. In this paper we analyze the adversarial robustness of texture and shape-biased models to Universal Adversarial Perturbations (UAPs). We use UAPs to evaluate the robustness of DNN models with varying degrees of shape-based training. We find that shape-biased models do not markedly improve adversarial robustness, and we show that ensembles of texture and shape-biased models can improve universal adversarial robustness while maintaining strong performance.
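
For context, a UAP of this kind can be computed by optimizing a single perturbation across the whole dataset. The sketch below shows a generic iterative, SGD-based UAP construction under an L-infinity budget; it is illustrative rather than the paper's exact tooling, and `eps`, `lr` and `epochs` are assumed parameters. The universal evasion rate of the resulting perturbation can then be compared across texture-biased, shape-biased and ensemble models.

```python
import torch

def sgd_uap(model, loader, eps=10 / 255, lr=0.01, epochs=1):
    """Generic SGD-based universal perturbation (a sketch, not the
    paper's code): one delta is optimized to raise the loss across the
    dataset, then clipped to an L-infinity ball of radius eps."""
    device = next(model.parameters()).device
    x0, _ = next(iter(loader))
    delta = torch.zeros(x0.shape[1:], device=device, requires_grad=True)
    opt = torch.optim.SGD([delta], lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    model.eval()
    for _ in range(epochs):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            loss = -loss_fn(model(x + delta), y)   # ascend the loss
            opt.zero_grad()
            loss.backward()
            opt.step()
            with torch.no_grad():
                delta.clamp_(-eps, eps)            # enforce the budget
    return delta.detach()
```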

Citation: K. T. Co, L. Muñoz-González, L. Kanthan, B. Glocker and E. C. Lupu, “Universal Adversarial Robustness of Texture and Shape-Biased Models,” 2021 IEEE International Conference on Image Processing (ICIP), 2021, pp. 799-803, doi: 10.1109/ICIP42928.2021.9506325.

Paper in IEEE Archive
Pre-print on arxiv

Hazard Driven Threat Modelling for Cyber Physical Systems

Luca Maria Castiglione and Emil C. Lupu. 2020. Hazard Driven Threat Modelling for Cyber Physical Systems. In Proceedings of the 2020 Joint Workshop on CPS&IoT Security and Privacy (CPSIOTSEC’20). Association for Computing Machinery, New York, NY, USA, 13–24.

Adversarial actors have shown their ability to infiltrate enterprise networks deployed around Cyber Physical Systems (CPSs) through social engineering, credential stealing and file-less infections. Once inside, they can gain enough privileges to maliciously call legitimate APIs and apply unsafe control actions to degrade the system performance and undermine its safety. Our work lies at the intersection of security and safety, and aims to understand dependencies among security, reliability and safety in CPS/IoT. We present a methodology to perform hazard driven threat modelling and impact assessment in the context of CPSs. The process starts from the analysis of behavioural, functional and architectural models of the CPS. We then apply System Theoretic Process Analysis (STPA) on the functional model to highlight high-level abuse cases. We leverage a mapping between the architectural and the system theoretic (ST) models to enumerate those components whose impairment provides the attacker with enough privileges to tamper with or disrupt the data-flows. This enables us to find a causal connection between the attack surface (in the architectural model) and system level losses. We then link the behavioural and system theoretic representations of the CPS to quantify the impact of the attack. Using our methodology, it is possible to compute a comprehensive attack graph of the known attack paths and to perform both a qualitative and quantitative impact assessment of the exploitation of vulnerabilities affecting target nodes. The framework and methodology are illustrated using a small scale example featuring a Communication Based Train Control (CBTC) system. Aspects regarding the scalability of our methodology and its application in real world scenarios are also considered. Finally, we discuss the possibility of using the results obtained to engineer both design time and real time defensive mechanisms.

Muhammad Zaid Hameed

Zaid joined the group as a Research Associate in May 2020. His activities focus on federated learning and adversarial machine learning.

Procedural Noise Adversarial Examples for Black-Box Attacks on Deep Convolutional Networks (CCS ’19)

Kenneth T. Co, Luis Muñoz-González, Sixte de Maupeou, and Emil C. Lupu. 2019. Procedural Noise Adversarial Examples for Black-Box Attacks on Deep Convolutional Networks. In Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security (CCS ’19). Association for Computing Machinery, New York, NY, USA, 275–289. DOI: https://doi.org/10.1145/3319535.3345660

Abstract: Deep Convolutional Networks (DCNs) have been shown to be vulnerable to adversarial examples—perturbed inputs specifically designed to produce intentional errors in the learning algorithms at test time. Existing input-agnostic adversarial perturbations exhibit interesting visual patterns that are currently unexplained. In this paper, we introduce a structured approach for generating Universal Adversarial Perturbations (UAPs) with procedural noise functions. Our approach unveils the systemic vulnerability of popular DCN models like Inception v3 and YOLO v3, with single noise patterns able to fool a model on up to 90% of the dataset. Procedural noise allows us to generate a distribution of UAPs with high universal evasion rates using only a few parameters. Additionally, we propose Bayesian optimization to efficiently learn procedural noise parameters to construct inexpensive untargeted black-box attacks. We demonstrate that it can achieve an average of less than 10 queries per successful attack, a 100-fold improvement on existing methods. We further motivate the use of input-agnostic defences to increase the stability of models to adversarial perturbations. The universality of our attacks suggests that DCN models may be sensitive to aggregations of low-level class-agnostic features. These findings give insight on the nature of some universal adversarial perturbations and how they could be generated in other applications.
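
The construction is compact enough to sketch. The snippet below is a hedged illustration in the spirit of the paper (Perlin noise passed through a sine colour map and scaled to an L-infinity budget), not its released implementation; the parameter names and defaults are assumptions, and it relies on the third-party `noise` package for Perlin values.

```python
import numpy as np
import noise  # third-party Perlin noise package (pip install noise)

def perlin_uap(size=224, wavelength=32.0, octaves=4, freq_sine=4.0, eps=10 / 255):
    """Procedural-noise perturbation sketch: a few parameters fully
    determine the pattern, which is what makes black-box search cheap."""
    grid = np.array([[noise.pnoise2(i / wavelength, j / wavelength, octaves=octaves)
                      for j in range(size)] for i in range(size)])
    pattern = np.sin(grid * freq_sine * np.pi)   # sine colour map
    return eps * np.sign(pattern)                # max out the L-inf budget

# Usage for images scaled to [0, 1]: x_adv = np.clip(x + perlin_uap(), 0.0, 1.0).
# Bayesian optimization would then search over (wavelength, octaves, freq_sine).
```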

Fulvio Valenza

Fulvio was a Visiting Researcher in the group, working on various aspects of hybrid threats and formal analysis applied to network policies. He is with the Department of Control and Computer Engineering at the Politecnico di Torino and continues to work in cyber-security within the context of network and systems management.

MUSKETEER: Machine learning to augment shared knowledge in federated privacy-preserving scenarios

The massive increase in data collected and stored worldwide calls for new ways to preserve privacy while still allowing data sharing among multiple data owners. Today, the lack of trusted and secure environments for data sharing inhibits the data economy, while legality, privacy, trustworthiness, data value and confidentiality hamper the free flow of data. By the end of the project, MUSKETEER aims to create a validated, federated, privacy-preserving machine learning platform, tested on industrial data, that is interoperable, scalable and efficient enough to be deployed in real use cases. MUSKETEER aims to alleviate data sharing barriers by providing secure, scalable and privacy-preserving analytics over decentralized datasets using machine learning. Data can continue to be stored in different locations with different privacy constraints, but shared securely. The MUSKETEER cross-domain platform will validate progress in the industrial scenarios of smart manufacturing and health. MUSKETEER strives to (1) create machine learning models over a variety of privacy-preserving scenarios, (2) ensure security and robustness against external and internal threats, (3) provide a standardized and extendable architecture, (4) demonstrate and validate in two different industrial scenarios and (5) enhance the data economy by boosting sharing across domains. The MUSKETEER impact crosses industrial, scientific, economic and strategic domains. Real-world industry requirements and outcomes are validated in an operational setting. Federated machine learning approaches for data sharing are innovated. The data economy is fostered by creating a rewarding model capable of fairly monetizing datasets according to the real data value. Finally, Europe is positioned as a leader in innovative data sharing technologies.

Project Introduction
H2020

Towards More Practical Software-based Attestation

Rodrigo Vieira Steiner, Emil Lupu. Towards More Practical Software-based Attestation. Computer Networks, vol. 149, pp. 43-55, Elsevier, 2019.

Abstract: Software-based attestation promises to enable the integrity verification of untrusted devices without requiring any particular hardware. However, existing proposals rely on strong assumptions that hinder their deployment and might even weaken their security. One such assumption is that using the maximum known network round-trip time to define the attestation timeout allows all honest devices to reply in time. While this is normally true in controlled environments, it is generally false in real deployments, and especially so in a scenario like the Internet of Things, where numerous devices communicate over an intrinsically unreliable wireless medium. Moreover, a larger timeout demands more computations, consuming extra time and energy and restraining the untrusted device from performing its main tasks. In this paper, we review this fundamental and yet overlooked assumption and propose a novel stochastic approach that significantly improves the overall attestation performance. Our experimental evaluation with IoT devices communicating over real-world uncontrolled Wi-Fi networks demonstrates the practicality and superior performance of our approach, which, in comparison with the current state-of-the-art solution, reduces the total attestation time and energy consumption around seven times for honest devices and two times for malicious ones, while improving the detection rate of honest devices (8% higher TPR) without compromising security (0% FPR).

Luca Maria Castiglione

Luca joined the group as a PhD student on HiPEDS in October 2018. He received his MSc in Computer Science and Engineering from the University of Napoli Federico II, defending his thesis entitled “Negotiation of traffic junctions over 5G networks”. The thesis work was carried out at Ericsson, Gothenburg (Sweden), within a joint project between the University of Napoli Federico II, Chalmers University of Technology and Ericsson.

He strongly believes in open source development and is currently a mentor in the Open Leadership Programme offered by Mozilla.

His research interests lie at the intersection of cybersecurity and control engineering. In particular, his studies investigate the resilience of networked systems and industrial plants against cyberattacks.

You can also find him on Linkedin.

A Formal Approach to Analyzing Cyber-Forensics Evidence

Erisa Karafili’s paper “A Formal Approach to Analyzing Cyber-Forensics Evidence” was accepted at the European Symposium on Research in Computer Security (ESORICS) 2018. This work is part of the AF-Cyber Project, and was a joint collaboration with King’s College London and the University of Verona.

Title: A Formal Approach to Analyzing Cyber-Forensics Evidence

Authors: Erisa Karafili, Matteo Cristani, Luca Viganò

Abstract: The frequency and harmfulness of cyber-attacks are increasing every day, and with them also the amount of data that the cyber-forensics analysts need to collect and analyze. In this paper, we propose a formal analysis process that allows an analyst to filter the enormous amount of evidence collected and either identify crucial information about the attack (e.g., when it occurred, its culprit, its target) or, at the very least, perform a pre-analysis to reduce the complexity of the problem in order to then draw conclusions more swiftly and efficiently. We introduce the Evidence Logic EL for representing simple and derived pieces of evidence from different sources. We propose a procedure, based on monotonic reasoning, that rewrites the pieces of evidence with the use of tableau rules, based on relations of trust between sources and the reasoning behind the derived evidence, and yields a consistent set of pieces of evidence. As proof of concept, we apply our analysis process to a concrete cyber-forensics case study.


You can find the paper here.

This work was funded from the European Union’s Horizon 2020 research and innovation program under the Marie Sklodowska-Curie grant agreement No 746667.

WSNs Under Attack! How Bad Is It? Evaluating Connectivity Impact Using Centrality Measures

Our paper WSNs Under Attack! How Bad Is It? Evaluating Connectivity Impact Using Centrality Measures has been presented at the Living in the Internet of Things: A PETRAS, IoTUK & IET Conference, Forum & Exhibition.

Authors: Rodrigo Vieira Steiner, Martín Barrère, Emil C. Lupu

Abstract: We propose a model to represent the health of WSNs that allows us to evaluate a network’s ability to execute its functions. Central to this model is how we quantify the importance of each network node. As we focus on the availability of the network data, we investigate how well different centrality measures identify the significance of each node for the network connectivity. In this process, we propose a new metric named current-flow sink betweenness. Through a number of experiments, we demonstrate that while no metric is invariably better in identifying sensors’ connectivity relevance, the proposed current-flow sink betweenness outperforms existing metrics in the vast majority of cases.
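
To make the metric concrete, the sketch below approximates the idea with networkx’s subset variant of current-flow betweenness, restricting flows to those terminating at the sink node; this is a rough stand-in for the paper’s current-flow sink betweenness, and the graph and sink choice are illustrative.

```python
import networkx as nx

# Toy WSN topology: a random geometric graph, restricted to its largest
# connected component (current-flow measures require connectivity).
G = nx.random_geometric_graph(50, 0.25, seed=1)
G = G.subgraph(max(nx.connected_components(G), key=len)).copy()

sinks = [min(G.nodes)]  # treat one node as the data sink (illustrative)
scores = nx.current_flow_betweenness_centrality_subset(
    G, sources=list(G.nodes), targets=sinks)

# Rank sensors by how critical they are to delivering data to the sink.
critical = sorted(scores, key=scores.get, reverse=True)[:5]
print("most connectivity-critical sensors:", critical)
```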

Download a copy here.

Kenneth Co

Kenny joined the group as a PhD student in April 2018. He received an MSc in Machine Learning from Imperial College London and an MA in Mathematics from Johns Hopkins University.

His general interests are in machine learning, cryptography, and mathematics. His current research is on the security of machine learning algorithms, primarily adversarial machine learning. He is also interested in health or lifestyle optimization, and is very much into enjoying good food.

Find him on LinkedIn.

Javier Carnerero Cano

Javi joined the group as a PhD Candidate in May 2018. He received his MEng in Telecommunications Engineering and his MRes in Multimedia and Communications from Universidad Carlos III de Madrid (Spain).

He is currently interested in adversarial machine learning, investigating the security of machine learning algorithms with a special focus on data poisoning attacks; bilevel optimization problems; Generative Adversarial Networks (GANs); and applications of machine learning in security.

You can also find him on his personal website, LinkedIn, Google Scholar, ResearchGate and GitHub.