News

RESICS : Resilience and Safety to attacks in Industrial Control and Cyber-Physical Systems

We all critically depend on digital systems that sense and control physical processes and environments. Electricity, gas, water, and other utilities require the continuous operation of both national and local infrastructures. Industrial processes, for example chemical manufacturing, materials production and manufacturing chains, similarly lie at this intersection of the digital and the physical. The same intersection arises in other CPS such as robots, autonomous cars, and drones. Ensuring the resilience of such systems, i.e. their survivability and continued operation when exposed to malicious threats, requires integrating methods and processes from security analysis, safety analysis, system design and operation that have traditionally been applied separately, each involving specialist skills and significant human effort. This is not only costly, but also error prone, and delays the response to security events.

RESICS aims to significantly advance the state-of-the-art and deliver novel contributions that facilitate:

  • Risk analysis in the face of adversarial threats taking into account the impact of security events across cascading inter-dependencies
  • Characterising attacks that can have an impact on system safety and identifying the paths that make such attacks possible
  • Identifying countermeasures that can be applied to mitigate threats and contain the impact of attacks
  • Ensuring that such countermeasures can be applied whilst preserving the system’s safety and operational constraints and maximising its availability.

These contributions will be evaluated on several test beds, digital twins, a cyber range, and a number of use cases from different industry sectors.

To achieve these goals, RESICS will combine model-driven and empirical approaches across both security and safety analysis, adopting a systems-thinking approach which emphasises Security, Safety and Resilience as emergent properties of the system. RESICS leverages preliminary results on the integration of safety and security methodologies, the application of formal methods, and the combination of model-based and empirical approaches to the analysis of inter-dependencies in ICSs and CPSs.

Funded by DSTL, this is a joint project between the Resilient Information Systems Security (RISS) Group at Imperial College and the Bristol Cyber Security Group. The work will be conducted in collaboration with Adelard (part of NCC Group), Airbus, QinetiQ, Reperion, Siemens and Thales as industry partners, and with CMU, the University of Naples and SUTD as academic partners. The project is affiliated with the Research Institute in Trustworthy Inter-Connected Cyber-Physical Systems (RITICS).

Project Publications

  • L. M. Castiglione, S. Guerra, E. C. Lupu. "Automated Identification of Safety-Critical Attacks against CPS and Generation of Assurance Case Fragments." To be presented at the Safety-Critical Systems Symposium (SSS'25).
  • K. Mathuros, S. Venugopalan, S. Adepu. "WaXAI: Explainable Anomaly Detection in Industrial Control Systems and Water Systems." In Proceedings of the 10th ACM Cyber-Physical System Security Workshop, 2024. Awarded Best Paper Award.
  • R. Wang, S. Venugopalan, S. Adepu. "Safety Analysis for Cyber-Physical Systems under Cyber Attacks Using Digital Twin." In IEEE Cyber Security and Resilience (CSR), 2024.


Meet Javi

Javier Carnerero Cano, a PhD student in the RISS group, makes a DoC Clock video to introduce himself and the things he likes!

Object Removal Attacks on LiDAR-based 3D Object Detectors

LiDARs play a critical role in Autonomous Vehicles' (AVs) perception and safe operation. Recent works have demonstrated that it is possible to spoof LiDAR return signals to elicit fake objects. In this work, we demonstrate how the same physical capabilities can be used to mount a new, even more dangerous class of attacks, namely Object Removal Attacks (ORAs). ORAs aim to force 3D object detectors to fail. We leverage the default setting of LiDARs that record a single return signal per direction to perturb point clouds in the region of interest (RoI) of 3D objects. By injecting illegitimate points behind the target object, we effectively shift points away from the target object's RoI. Our initial results using a simple random point selection strategy show that the attack is effective in degrading the performance of commonly used 3D object detection models.
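As an illustration, here is a minimal numpy sketch of the perturbation described above: points falling inside a target's RoI are relocated further out along their rays from the sensor, so a single-return LiDAR reports only the displaced returns. The box bounds, shift distance and selection fraction are illustrative assumptions, not the parameters used in the paper.

```python
import numpy as np

def object_removal_attack(points, roi_min, roi_max, shift=5.0,
                          fraction=1.0, rng=None):
    """Relocate (a random subset of) the points inside the target's RoI
    further along their rays from the sensor, emulating a spoofed later
    return that displaces the genuine one. All parameters illustrative."""
    if rng is None:
        rng = np.random.default_rng()
    in_roi = np.all((points >= roi_min) & (points <= roi_max), axis=1)
    idx = np.flatnonzero(in_roi)
    # random point selection strategy, as in the paper's initial results
    chosen = rng.choice(idx, size=int(fraction * len(idx)), replace=False)
    attacked = points.copy()
    rays = points[chosen]
    dists = np.linalg.norm(rays, axis=1, keepdims=True)
    # push each selected return 'shift' metres further out along its ray;
    # a single-return LiDAR then reports only the injected point
    attacked[chosen] = rays * (dists + shift) / dists
    return attacked
```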

Z. Hau, K.T. Co, S. Demetriou, E.C. Lupu. Object Removal Attacks on LiDAR-based 3D Object Detectors. Automotive and Autonomous Vehicle Security (AutoSec) Workshop @ NDSS Symposium 2021.

Paper

Jacobian Regularization for Mitigating Universal Adversarial Perturbations

Authors: Kenneth Co, David Martinez Rego, Emil Lupu

Universal Adversarial Perturbations (UAPs) are input perturbations that can fool a neural network on large sets of data. They are a class of attacks that represents a significant threat as they facilitate realistic, practical, and low-cost attacks on neural networks. In this work, we derive upper bounds for the effectiveness of UAPs based on norms of data-dependent Jacobians. We empirically verify that Jacobian regularization greatly increases model robustness to UAPs by up to four times whilst maintaining clean performance. Our theoretical analysis also allows us to formulate a metric for the strength of shared adversarial perturbations between pairs of inputs. We apply this metric to benchmark datasets and show that it is highly correlated with the actual observed robustness. This suggests that realistic and practical universal attacks can be reliably mitigated without sacrificing clean accuracy, which shows promise for the robustness of machine learning systems.
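As a rough illustration, the PyTorch sketch below penalises an estimate of the Frobenius norm of the input-output Jacobian alongside the task loss. The single random projection is a standard cheap estimator; the model, loss and regularization weight are placeholders, not the paper's exact formulation.

```python
import torch

def jacobian_frobenius_sq(model, x):
    """Estimate ||J(x)||_F^2 with one random projection:
    for v ~ N(0, I), E[||v^T J||^2] = ||J||_F^2."""
    x = x.clone().requires_grad_(True)
    out = model(x)                                  # shape (batch, classes)
    v = torch.randn_like(out)
    (grad_x,) = torch.autograd.grad(out, x, grad_outputs=v, create_graph=True)
    return grad_x.pow(2).flatten(1).sum(1).mean()

def training_step(model, criterion, x, y, lambda_jr=0.01):
    """Task loss plus the Jacobian penalty; lambda_jr is a placeholder."""
    return criterion(model(x), y) + lambda_jr * jacobian_frobenius_sq(model, x)
```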

Kenneth Co, David Martinez Rego, Emil Lupu, Jacobian Regularization for Mitigating Universal Adversarial Perturbations. 30th International Conference on Artificial Neural Networks (ICANN 21), Sept. 2021.

Pre-print on arXiv


Analyzing the Viability of UAV Missions Facing Cyber Attacks

With advanced video and sensing capabilities, unoccupied aerial vehicles (UAVs) are increasingly being used for numerous applications that involve the collaboration and autonomous operation of teams of UAVs. Yet such vehicles can be affected by cyber attacks, impacting the viability of their missions. We propose a method to conduct mission viability analysis under cyber attacks for missions that employ a team of several UAVs that share a communication network. We apply our method to a case study of a survey mission in a wildfire firefighting scenario. Within this context, we show how our method can help quantify the expected mission performance impact from an attack and determine if the mission can remain viable under various attack situations. Our method can be used both in the planning of the mission and for decision making during mission operation. Our approach to modeling attack progression and impact analysis with Petri nets is also more broadly applicable to other settings involving multiple resources that can be used interchangeably towards the same objective.
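To give a flavour of the Petri net modelling, here is a toy Python sketch in which UAVs are interchangeable tokens moved between places by attack and recovery transitions, and mission viability is a predicate over the marking. The places, transitions and thresholds are illustrative assumptions, not the paper's model.

```python
class PetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)          # place -> token count
        self.transitions = {}                 # name -> (inputs, outputs)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= n for p, n in inputs.items())

    def fire(self, name):
        inputs, outputs = self.transitions[name]
        assert self.enabled(name), f"{name} not enabled"
        for p, n in inputs.items():
            self.marking[p] -= n
        for p, n in outputs.items():
            self.marking[p] = self.marking.get(p, 0) + n

# hypothetical scenario: 4 UAVs, one is compromised, survey needs 3
net = PetriNet({"operational": 4, "compromised": 0})
net.add_transition("gps_spoof", {"operational": 1}, {"compromised": 1})
net.add_transition("recover", {"compromised": 1}, {"operational": 1})
net.fire("gps_spoof")
viable = net.marking["operational"] >= 3
```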

J. Soikkeli, C. Perner and E. Lupu, "Analyzing the Viability of UAV Missions Facing Cyber Attacks," in 2021 IEEE European Symposium on Security and Privacy Workshops (EuroS&PW), Vienna, Austria, 2021, pp. 103-112.
doi: 10.1109/EuroSPW54576.2021.00018

Hazard Driven Threat Modelling for Cyber Physical Systems

Luca Maria Castiglione and Emil C. Lupu. 2020. Hazard Driven Threat Modelling for Cyber Physical Systems. In Proceedings of the 2020 Joint Workshop on CPS&IoT Security and Privacy (CPSIOTSEC'20). Association for Computing Machinery, New York, NY, USA, 13–24.

Adversarial actors have shown their ability to infiltrate enterprise networks deployed around Cyber Physical Systems (CPSs) through social engineering, credential stealing and file-less infections. Once inside, they can gain enough privileges to maliciously call legitimate APIs and apply unsafe control actions that degrade the system's performance and undermine its safety. Our work lies at the intersection of security and safety, and aims to understand the dependencies among security, reliability and safety in CPS/IoT. We present a methodology to perform hazard driven threat modelling and impact assessment in the context of CPSs. The process starts from the analysis of behavioural, functional and architectural models of the CPS. We then apply System Theoretic Process Analysis (STPA) on the functional model to highlight high-level abuse cases. We leverage a mapping between the architectural and the system theoretic (ST) models to enumerate those components whose impairment provides the attacker with enough privileges to tamper with or disrupt the data-flows. This enables us to find a causal connection between the attack surface (in the architectural model) and system-level losses. We then link the behavioural and system theoretic representations of the CPS to quantify the impact of the attack. Using our methodology, it is possible to compute a comprehensive attack graph of the known attack paths and to perform both a qualitative and quantitative impact assessment of the exploitation of vulnerabilities affecting target nodes. The framework and methodology are illustrated using a small scale example featuring a Communication Based Train Control (CBTC) system. Aspects regarding the scalability of our methodology and its application in real world scenarios are also considered. Finally, we discuss the possibility of using the results obtained to engineer both design time and real time defensive mechanisms.
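To give a flavour of the attack graph construction, the sketch below (using networkx) enumerates paths from attack surface components to system-level losses over an illustrative "compromise of X enables Y" relation. The CBTC-flavoured node names are hypothetical, not taken from the paper.

```python
import networkx as nx

g = nx.DiGraph()
g.add_edges_from([
    ("enterprise_workstation", "scada_gateway"),
    ("scada_gateway", "zone_controller_api"),
    ("zone_controller_api", "unsafe_movement_authority"),   # abuse case from STPA
    ("unsafe_movement_authority", "loss_train_collision"),  # system-level loss
])

entry_points = ["enterprise_workstation"]
losses = ["loss_train_collision"]
# every simple path from an entry point to a loss is a candidate attack path
attack_paths = [p for src in entry_points for dst in losses
                for p in nx.all_simple_paths(g, src, dst)]
```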

A Formal Approach to Analyzing Cyber-Forensics Evidence

Erisa Karafili’s paper “A Formal Approach to Analyzing Cyber-Forensics Evidence” was accepted at the European Symposium on Research in Computer Security (ESORICS) 2018. This work is part of the AF-Cyber Project, and was a joint collaboration with King’s College London and the University of Verona.

Title: A Formal Approach to Analyzing Cyber-Forensics Evidence

Authors: Erisa Karafili, Matteo Cristani, Luca Viganò

Abstract: The frequency and harmfulness of cyber-attacks are increasing every day, and with them also the amount of data that the cyber-forensics analysts need to collect and analyze. In this paper, we propose a formal analysis process that allows an analyst to filter the enormous amount of evidence collected and either identify crucial information about the attack (e.g., when it occurred, its culprit, its target) or, at the very least, perform a pre-analysis to reduce the complexity of the problem in order to then draw conclusions more swiftly and efficiently. We introduce the Evidence Logic EL for representing simple and derived pieces of evidence from different sources. We propose a procedure, based on monotonic reasoning, that rewrites the pieces of evidence with the use of tableau rules, based on relations of trust between sources and the reasoning behind the derived evidence, and yields a consistent set of pieces of evidence. As proof of concept, we apply our analysis process to a concrete cyber-forensics case study.
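As a toy illustration of trust-based rewriting, the Python sketch below keeps, among conflicting pieces of evidence, the one from the more trusted source, yielding a consistent set. The trust ordering and conflict relation are illustrative assumptions and do not reproduce the EL tableau rules.

```python
# hypothetical trust levels for evidence sources (higher = more trusted)
trust = {"firewall_log": 3, "ids_alert": 2, "anonymous_tip": 1}

def consistent_set(evidence, conflicts):
    """evidence: list of (source, statement); conflicts: set of frozenset
    pairs of statements that cannot both hold. Keeps the most trusted."""
    kept = []
    for src, stmt in sorted(evidence, key=lambda e: -trust[e[0]]):
        if all(frozenset({stmt, k}) not in conflicts for _, k in kept):
            kept.append((src, stmt))
    return kept

evidence = [("anonymous_tip", "attack_from_hostX"),
            ("firewall_log", "attack_from_hostY")]
conflicts = {frozenset({"attack_from_hostX", "attack_from_hostY"})}
print(consistent_set(evidence, conflicts))  # keeps the firewall_log claim
```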


You can find the paper here.

This work was funded by the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 746667.

Label Sanitization against Label Flipping Poisoning Attacks

Andrea Paudice, Luis Muñoz-González, Emil C. Lupu. 2018. Label Sanitization against Label Flipping Poisoning Attacks. arXiv preprint arXiv:1803.00992.

Many machine learning systems rely on data collected in the wild from untrusted sources, exposing the learning algorithms to data poisoning. Attackers can inject malicious data into the training dataset to subvert the learning process, compromising the performance of the algorithm and producing errors in a targeted or indiscriminate way. Label flipping attacks are a special case of data poisoning, where the attacker can control the labels assigned to a fraction of the training points. Even if the capabilities of the attacker are constrained, these attacks have been shown to significantly degrade the performance of the system. In this paper we propose an efficient algorithm to perform optimal label flipping poisoning attacks and a mechanism to detect and relabel suspicious data points, mitigating the effect of such poisoning attacks.
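As a rough illustration of the sanitization idea, the sketch below relabels points whose labels strongly disagree with their nearest neighbours. The choice of k and the agreement threshold are illustrative, not the paper's settings.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def sanitize_labels(X, y, k=10, threshold=0.5):
    """X: (n, d) features; y: (n,) integer labels. Relabel a point to the
    neighbourhood majority when that majority is strong enough."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X)            # idx[:, 0] is the point itself
    y_clean = y.copy()
    for i in range(len(y)):
        neigh = y[idx[i, 1:]]
        counts = np.bincount(neigh, minlength=y.max() + 1)
        maj = counts.argmax()
        # relabel when the neighbourhood majority exceeds the threshold
        if counts[maj] / k >= threshold and maj != y[i]:
            y_clean[i] = maj
    return y_clean
```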

Detection of Adversarial Training Examples in Poisoning Attacks through Anomaly Detection

Andrea Paudice, Luis Muñoz-González, Andras Gyorgy, Emil C. Lupu. 2018. Detection of Adversarial Training Examples in Poisoning Attacks through Anomaly Detection. arXiv preprint arXiv:1802.03041.

Machine learning has become an important component for many systems and applications including computer vision, spam filtering, malware and network intrusion detection, among others. Despite the capabilities of machine learning algorithms to extract valuable information from data and produce accurate predictions, it has been shown that these algorithms are vulnerable to attacks.

Data poisoning is one of the most relevant security threats against machine learning systems, where attackers can subvert the learning process by injecting malicious samples into the training data. Recent work in adversarial machine learning has shown that the so-called optimal attack strategies can successfully poison linear classifiers, degrading the performance of the system dramatically after compromising a small fraction of the training dataset. In this paper we propose a defence mechanism to mitigate the effect of these optimal poisoning attacks based on outlier detection. We show empirically that the adversarial examples generated by these attack strategies are quite different from genuine points, as no detectability constraints are considered when crafting the attack. Hence, they can be detected with an appropriate pre-filtering of the training dataset.
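As an illustration of the pre-filtering idea, the sketch below fits a simple per-class outlier detector on a small trusted subset and drops training points flagged as outliers. The detector choice and the trusted-subset assumption are illustrative, not the paper's exact mechanism.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

def filter_poison(X_train, y_train, X_trusted, y_trusted):
    """Keep only training points that a per-class detector, fitted on
    trusted data of the same class, considers inliers."""
    keep = np.zeros(len(y_train), dtype=bool)
    for c in np.unique(y_trusted):
        det = IsolationForest(random_state=0).fit(X_trusted[y_trusted == c])
        mask = y_train == c
        keep[mask] = det.predict(X_train[mask]) == 1   # 1 = inlier
    return X_train[keep], y_train[keep]
```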

AF-Cyber: Logic-based Attribution and Forensics in Cyber Security

Dr Karafili is officially a Marie Curie Fellow at the Department of Computing, Imperial College. She will work on the project "AF-Cyber: Logic-based Attribution and Forensics in Cyber Security". The project is funded by the European Union's Horizon 2020 Research and Innovation Programme under the Marie Sklodowska-Curie grant agreement No 746667.


The main goal of AF-Cyber is to investigate and analyse the problem of attributing cyber attacks. We plan to construct a logic-based framework for performing attribution of cyber attacks, based on cyber-forensics evidence, social science approaches and an intelligent methodology for dynamic evidence collection. AF-Cyber will address part of the cyber-attack problem by supporting forensic investigation and attribution with logic-based frameworks for representation and reasoning, together with supporting tools. AF-Cyber is multi-disciplinary and collaborative, bridging forensics in cyber attacks, theoretical computer science (logics and formal proofs), security, software engineering, and social science.

Improving Data Sharing in Data Rich Environments

The paper “Improving Data Sharing in Data Rich Environments” was accepted at the IEEE Big Data International Workshop on Policy-based Autonomic Data Governance (PADG), part of the 15th IEEE International Conference on Big Data (Big Data 2017), December 11-14, 2017, Boston, MA, USA. This work was done in collaboration with our partners (BAE Systems, IBM UK and IBM US) from the DAIS International Technology Alliance (ITA). The paper can be found here.

Authors: Erisa Karafili, Emil C. Lupu, Alan Cullen, Bill Williams, Saritha Arunkumar, Seraphin Calo

Abstract: The increasing use of big data comes along with the problem of ensuring correct and secure data access. There is a need to maximise data dissemination whilst controlling access to it. Depending on the type of user, different qualities and parts of the data are shared. We introduce an alteration mechanism, more precisely a restriction mechanism, based on a policy analysis language. The alterations reflect the level of trust and the relations the users have, and are represented as policies inside the data sharing agreements. These agreements are attached to the data and are enforced every time the data are accessed, used or shared. We show the use of our alteration mechanism in a military use case, where different parties are involved in the missions and have different relations of trust and partnership.
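As a toy illustration of such a restriction mechanism, the sketch below filters the fields of a record according to the requester's trust level recorded in a data sharing agreement. The field names and trust levels are hypothetical.

```python
DSA = {  # hypothetical agreement: trust level -> fields released
    "coalition_partner": {"unit_id", "region"},
    "allied_command":    {"unit_id", "region", "position", "mission"},
}

def share(record: dict, requester_trust: str) -> dict:
    """Enforce the agreement on every access: return only allowed fields."""
    allowed = DSA.get(requester_trust, set())
    return {k: v for k, v in record.items() if k in allowed}

record = {"unit_id": "A-7", "region": "north",
          "position": (51.5, -0.1), "mission": "recon"}
print(share(record, "coalition_partner"))   # position and mission withheld
```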

The work was supported by EPSRC Project CIPART grant no. EP/L022729/1 and DAIS ITA (Sponsored by U.S. Army Research Laboratory and the U.K. Ministry of Defence under Agreement Number W911NF-16-3-0001).


Bayesian Attack Graphs for Security Risk Assessment

Attack graphs offer a powerful framework for security risk assessment. They provide a compact representation of the attack paths that an attacker can follow to compromise network resources, derived from the analysis of the network topology and vulnerabilities. The uncertainty about the attacker's behaviour makes Bayesian networks suitable for modelling attack graphs to perform static and dynamic security risk assessment. Thus, whilst static analysis of attack graphs considers the security posture at rest, dynamic analysis accounts for evidence of compromise at run-time, helping system administrators to react against potential threats. In this paper, we introduce a Bayesian attack graph model that allows us to estimate the probabilities of an attacker compromising different resources of the network. We show how exact and approximate inference techniques can be efficiently applied to Bayesian attack graph models with thousands of nodes.
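As a toy illustration, the sketch below encodes a four-node Bayesian attack graph and computes the probability of compromise of a target by exact enumeration; for graphs with thousands of nodes, as above, more efficient exact and approximate inference is needed. The topology and probabilities are illustrative only.

```python
from itertools import product

# node -> (parents, P(node compromised | parent states)); all values hypothetical
graph = {
    "web_server":   ([], lambda: 0.7),
    "db_server":    (["web_server"], lambda w: 0.8 * w),
    "workstation":  (["web_server"], lambda w: 0.4 * w),
    "domain_admin": (["db_server", "workstation"],
                     lambda d, w: 1 - (1 - 0.9 * d) * (1 - 0.5 * w)),
}

def marginal(target):
    """Exact marginal by summing the joint over all 2^n states."""
    nodes = list(graph)
    total = 0.0
    for assign in product([0, 1], repeat=len(nodes)):
        state = dict(zip(nodes, assign))
        p = 1.0
        for n, (parents, cpt) in graph.items():
            p1 = cpt(*(state[q] for q in parents))
            p *= p1 if state[n] else 1 - p1
        if state[target]:
            total += p
    return total

print(f"P(domain_admin compromised) = {marginal('domain_admin'):.3f}")
```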

Luis Muñoz-González, Emil C. Lupu, “Bayesian Attack Graphs for Security Risk Assessment.” IST-153 NATO Workshop on Cyber Resilience, 2017.

Argumentation-based Security for Social Good

The paper “Argumentation-based Security for Social Good” presented at the AAAI Spring Symposia 2017 is now available at the AAAI Technical Report.

Title: Argumentation-Based Security for Social Good

Authors: Erisa Karafili, Antonis C. Kakas, Nikolaos I. Spanoudakis, Emil C. Lupu

Abstract: The increase in connectivity and its impact on everyday life are raising new and existing security problems that are becoming important for social good. We introduce two particular problems: cyber attack attribution and regulatory data sharing. For both problems, decisions about which rules to apply should be taken under incomplete and context-dependent information. The solution we propose is based on argumentation reasoning, which is a well-suited technique for implementing decision making mechanisms under conflicting and incomplete information. Our proposal permits us to identify the attacker of a cyber attack and to decide the regulation rule that should be used while using and sharing data. We illustrate our solution through concrete examples.

The paper can be found at the following link: https://aaai.org/ocs/index.php/FSS/FSS17/paper/view/15928/15306

A video of the presentation can be found on the workshop page AI for Social Good and also at the following link: https://youtu.be/wYg8jaHPbyw?t=33m33s

An argumentation reasoning approach for data processing

The paper “An argumentation reasoning approach for data processing” is now published in the Elsevier Journal Computers in Industry.

Title: An argumentation reasoning approach for data processing

Authors: Erisa Karafili, Konstantina Spanaki, Emil C. Lupu

Abstract: Data-intensive environments enable us to capture information and knowledge about the physical surroundings, to optimise our resources, enjoy personalised services and gain unprecedented insights into our lives. However, to obtain these benefits, the data must be generated and collected, and the resulting insights exploited. Following an argumentation reasoning approach for data processing, and building on the theoretical background of data management, we highlight the importance of data sharing agreements (DSAs) and quality attributes for the proposed data processing mechanism. The proposed approach takes into account the DSAs and usage policies as well as the quality attributes of the data, which existing methods in the data processing and management field have largely neglected. Previous research provided techniques in this direction; however, more intensive research on processing techniques is needed to enhance the value created from data, and new strategies should be formed around the data generated daily from various devices and sources.

This work was supported by the FP7 EU-funded project Coco Cloud, grant no. 610853, and the EPSRC Project CIPART, grant no. EP/L022729/1.

The paper can be found at the following link as Open Access: http://www.sciencedirect.com/science/article/pii/S016636151730338X

Towards Poisoning Deep Learning Algorithms with Back-gradient Optimization

Luis Muñoz-González, Battista Biggio, Ambra Demontis, Andrea Paudice, Vasin Wongrassamee, Emil C. Lupu, Fabio Roli. “Towards Poisoning Deep Learning Algorithms with Back-gradient Optimization.” Workshop on Artificial Intelligence and Security (AISec), 2017.

A number of online services nowadays rely upon machine learning to extract valuable information from data collected in the wild. This exposes learning algorithms to the threat of data poisoning, i.e., a coordinated attack in which a fraction of the training data is controlled by the attacker and manipulated to subvert the learning process. To date, these attacks have been devised only against a limited class of binary learning algorithms, due to the inherent complexity of the gradient-based procedure used to optimize the poisoning points (a.k.a. adversarial training examples).

In this work, we first extend the definition of poisoning attacks to multi-class problems. We then propose a novel poisoning algorithm based on the idea of back-gradient optimization, i.e., to compute the gradient of interest through automatic differentiation, while also reversing the learning procedure to drastically reduce the attack complexity. Compared to current poisoning strategies, our approach is able to target a wider class of learning algorithms, trained with gradient-based procedures, including neural networks and deep learning architectures. We empirically evaluate its effectiveness on several application examples, including spam filtering, malware detection, and handwritten digit recognition. We finally show that, similarly to adversarial test examples, adversarial training examples can also be transferred across different learning algorithms.
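As a rough illustration of the bilevel structure, the PyTorch sketch below updates a poisoning point by differentiating the attacker's objective (validation loss) through a short unrolled training run of a logistic regression. The real back-gradient method reverses the learning procedure instead of storing the unrolled computation graph; this toy version, with hypothetical step sizes, only conveys the idea.

```python
import torch

def poison_step(x_p, y_p, X_tr, y_tr, X_val, y_val,
                inner_steps=20, lr=0.1, step=0.5):
    """One outer update of a single poisoning point (x_p, y_p) against a
    logistic regression trained on the clean data plus the poison."""
    bce = torch.nn.functional.binary_cross_entropy_with_logits
    x_p = x_p.clone().requires_grad_(True)
    w = torch.zeros(X_tr.shape[1], requires_grad=True)
    for _ in range(inner_steps):                 # inner problem: training
        X = torch.cat([X_tr, x_p.unsqueeze(0)])
        y = torch.cat([y_tr, y_p.unsqueeze(0)])
        (g,) = torch.autograd.grad(bce(X @ w, y), w, create_graph=True)
        w = w - lr * g                           # keep graph for the outer gradient
    val_loss = bce(X_val @ w, y_val)             # outer problem: attacker's objective
    (g_xp,) = torch.autograd.grad(val_loss, x_p)
    return (x_p + step * g_xp.sign()).detach()   # ascend the validation loss
```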

This work has been done in collaboration with the PRA Lab in the University of Cagliari, Italy.

Enabling Data Sharing in Contextual Environments: Policy Representation and Analysis

The paper “Enabling Data Sharing in Contextual Environments: Policy Representation and Analysis” was accepted at SACMAT 2017.

ACM Symposium on Access Control Models and Technologies (SACMAT 2017)

Authors: Erisa Karafili and Emil Lupu

Abstract: Internet of Things environments enable us to capture more and more data about the physical environment we live in and about ourselves. The data enable us to optimise resources, personalise services and offer unprecedented insights into our lives. However, to achieve these insights data need to be shared (and sometimes sold) between organisations, imposing rights and obligations upon the sharing parties, in accordance with multiple layers of sometimes conflicting legislation at international, national and organisational levels. In this work, we show how such rules can be captured in a formal representation called "Data Sharing Agreements". We introduce the use of abductive reasoning and argumentation-based techniques to detect inconsistencies in the applicable rules and to resolve them by assigning priorities to the rules. We show how, through the use of argumentation-based techniques, use cases taken from real-life applications are handled flexibly, addressing trade-offs between confidentiality, privacy, availability and safety.
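As a toy illustration of conflict resolution via priorities, the sketch below evaluates permit and deny rules against a sharing request and lets the higher-priority rule win. The rules and priorities are hypothetical and do not reproduce the paper's abductive or argumentation machinery.

```python
# hypothetical data sharing rules; higher priority wins on conflict
rules = [
    {"name": "org_policy_share", "effect": "permit", "priority": 1,
     "applies": lambda ctx: ctx["purpose"] == "research"},
    {"name": "national_privacy", "effect": "deny", "priority": 2,
     "applies": lambda ctx: ctx["contains_pii"]},
]

def decide(ctx):
    """Return the effect of the highest-priority applicable rule."""
    applicable = [r for r in rules if r["applies"](ctx)]
    if not applicable:
        return "deny"                     # default-deny
    return max(applicable, key=lambda r: r["priority"])["effect"]

print(decide({"purpose": "research", "contains_pii": True}))   # 'deny'
```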

The RISS group was part of the London Duathlon!

The RISS Group participated in the London Duathlon this Sunday (18/09/16) in the Duathlon Relay. Erisa Karafili ran 10km, Daniele Sgandurra cycled 44km, and Rodrigo Vieira Steiner ran 5km.