RISS

Resilient Information Systems Security

Hazard Driven Threat Modelling for Cyber Physical Systems

Adversarial actors have shown their ability to infiltrate enterprise networks deployed around Cyber Physical Systems (CPSs) through social engineering, credential stealing and file-less infections. Once inside, they can gain enough privileges to maliciously call legitimate APIs and apply unsafe control actions that degrade the system's performance and undermine its safety. Our work lies at the intersection of security and safety, and aims to understand the dependencies among security, reliability and safety in CPS/IoT. We present a methodology to perform hazard-driven threat modelling and impact assessment in the context of CPSs. The process starts from the analysis of behavioural, functional and […]
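
By way of illustration, the sketch below shows how a hazard condition identified by such an analysis might be enforced as a guard on legitimate control-action APIs. All names and thresholds (the tank model, the valve action, the 90% level) are hypothetical and are not part of the published methodology.

```python
# Illustrative sketch only: a runtime guard that rejects control actions
# which would violate a hazard condition identified during safety analysis.
# All names (TankState, open_inflow_valve, MAX_LEVEL) are hypothetical.

from dataclasses import dataclass

MAX_LEVEL = 0.9  # hazard condition: tank level must stay below 90%

@dataclass
class TankState:
    level: float              # current fill level, 0.0 - 1.0
    inflow_valve_open: bool

def hazardous(state: TankState, action: str) -> bool:
    """Return True if applying `action` in `state` leads to a known hazard."""
    if action == "open_inflow_valve" and state.level >= MAX_LEVEL:
        return True  # opening the valve near capacity risks overflow
    return False

def apply_control_action(state: TankState, action: str) -> TankState:
    """Apply a legitimate API call only if it does not trigger a hazard."""
    if hazardous(state, action):
        raise PermissionError(f"Action '{action}' rejected: hazard condition")
    if action == "open_inflow_valve":
        return TankState(level=state.level, inflow_valve_open=True)
    return state

# Example: a compromised client calling the legitimate API is still blocked.
state = TankState(level=0.95, inflow_valve_open=False)
try:
    apply_control_action(state, "open_inflow_valve")
except PermissionError as e:
    print(e)
```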

Research Assistant / Research Associate in Federated and Adversarial Machine Learning

Research Assistant salary in the range £35,477 to £38,566 per annum*; Research Associate salary in the range £40,215 to £47,579 per annum. Full-time, fixed-term appointment, to start ASAP until 31/11/2021. The Resilient Information Systems Security Group (RISS) in the Department of Computing at Imperial College London is seeking a Research Assistant/Associate to work on the EU-funded Musketeer project. Musketeer aims to create a federated and privacy-preserving machine learning data platform that is interoperable, efficient and robust against internal and external threats. Led by IBM, the project involves 11 academic and industrial partners from 7 countries […]
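
For context, the sketch below illustrates FedAvg-style federated model averaging, the general kind of training such a platform supports. The data, model and training loop are made up for illustration and are unrelated to the project's actual codebase.

```python
# Minimal sketch of federated averaging: each participant trains locally on
# its own private data, and only model weights are shared and aggregated.
# Entirely illustrative; not the Musketeer platform's API.

import numpy as np

def local_update(w, X, y, lr=0.1, epochs=5):
    """A participant trains a linear model on its private data."""
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def federated_round(w_global, clients):
    """Average locally trained weights, weighted by local dataset size."""
    local_weights = [local_update(w_global.copy(), X, y) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    return np.average(local_weights, axis=0, weights=sizes)

# Example: three clients with private data drawn from the same linear model.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(100, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=100)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(10):
    w = federated_round(w, clients)
print("global model:", w)
```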

Procedural Noise Adversarial Examples for Black-Box Attacks on Deep Convolutional Networks (CCS ’19)

Our paper on procedural noise adversarial examples has been accepted to the 26th ACM Conference on Computer and Communications Security (ACM CCS ’19). official: https://dl.acm.org/citation.cfm?id=3345660 code: https://github.com/kenny-co/procedural-advml Abstract: Deep Convolutional Networks (DCNs) have been shown to be vulnerable to adversarial examples—perturbed inputs specifically designed to produce intentional errors in the learning algorithms at test time. Existing input-agnostic adversarial perturbations exhibit interesting visual patterns that are currently unexplained. In this paper, we introduce a structured approach for generating Universal Adversarial Perturbations (UAPs) with procedural noise functions. Our approach unveils the systemic vulnerability of popular DCN models like Inception v3 and YOLO […]
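
To give a flavour of the idea, the sketch below crafts an input-agnostic perturbation from a simple parametric (sinusoidal) pattern and adds it to images under an L_inf budget. The paper itself uses Perlin and Gabor procedural noise; the repository linked above contains the actual implementation.

```python
# Minimal sketch of the idea behind procedural-noise UAPs: generate one
# perturbation from a parametric noise function and apply it, clipped to an
# L_inf budget, to any image. A sinusoidal pattern stands in for the Perlin
# and Gabor noise functions used in the paper.

import numpy as np

def sine_noise(height, width, freq=20.0, angle=0.8):
    """Procedural pattern parameterised by frequency and orientation."""
    ys, xs = np.mgrid[0:height, 0:width] / max(height, width)
    phase = freq * (np.cos(angle) * xs + np.sin(angle) * ys)
    return np.sin(2 * np.pi * phase)            # values in [-1, 1]

def perturb(image, eps=8 / 255, **noise_params):
    """Add the same procedural perturbation to any image (values in [0, 1])."""
    noise = sine_noise(image.shape[0], image.shape[1], **noise_params)
    uap = eps * noise[..., None]                 # broadcast over colour channels
    return np.clip(image + uap, 0.0, 1.0)

# Example: the identical perturbation is applied to a batch of images.
images = np.random.rand(4, 224, 224, 3)
adversarial = np.stack([perturb(img, freq=30.0) for img in images])
```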

A Formal Approach to Analyzing Cyber-Forensics Evidence

Erisa Karafili’s paper “A Formal Approach to Analyzing Cyber-Forensics Evidence” was accepted at the European Symposium on Research in Computer Security (ESORICS) 2018. This work is part of the AF-Cyber Project, and was a joint collaboration with King’s College London and the University of Verona. Title: A Formal Approach to Analyzing Cyber-Forensics Evidence Authors: Erisa Karafili, Matteo Cristani, Luca Viganò Abstract: The frequency and harmfulness of cyber-attacks are increasing every day, and with them also the amount of data that the cyber-forensics analysts need to collect and analyze. In this paper, we propose a formal analysis process that allows an […]

SECURING CONNECTED DEVICES IN PHYSICAL SPACES

An infographic based on our work has been published by IoT UK. It describes the fusion of the digital, physical and human aspects in IoT systems, the vulnerabilities this fusion introduces, and the ways these aspects can be leveraged to defend systems against malicious threats. Find the infographic here.

Can We Trust Cyber-Physical Systems?

A blog post on the trustworthiness of cyber-physical systems, covering Malicious Data Injections, Adversarial Machine Learning and Bayesian Risk Assessment. Follow this link to the post.
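
As a toy illustration of Bayesian risk assessment (not taken from the post), the sketch below computes the posterior probability of API abuse given a monitoring alert, over a two-step attack path. All probabilities are made up.

```python
# Toy Bayesian risk assessment over a two-step attack path
# (credential theft -> API abuse -> monitoring alert). Illustrative only.

from itertools import product

p_theft = 0.2                      # P(credential theft)
p_abuse_given_theft = 0.7          # P(API abuse | theft)
p_abuse_given_no_theft = 0.05      # P(API abuse | no theft)
p_alert_given_abuse = 0.9          # P(alert | abuse)
p_alert_given_no_abuse = 0.1       # P(alert | no abuse)

def joint(theft, abuse, alert):
    """Joint probability of one configuration of the three events."""
    p = p_theft if theft else 1 - p_theft
    p_abuse = p_abuse_given_theft if theft else p_abuse_given_no_theft
    p *= p_abuse if abuse else 1 - p_abuse
    p_alert = p_alert_given_abuse if abuse else p_alert_given_no_abuse
    p *= p_alert if alert else 1 - p_alert
    return p

# Posterior probability of API abuse given that an alert was raised.
configs = list(product([True, False], repeat=2))
num = sum(joint(t, True, True) for t in [True, False])
den = sum(joint(t, a, True) for t, a in configs)
print(f"P(api_abuse | alert) = {num / den:.3f}")
```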

Label Sanitization against Label Flipping Poisoning Attacks

Andrea Paudice, Luis Muñoz-González, Emil C. Lupu. 2018. Label Sanitization against Label Flipping Poisoning Attacks. arXiv preprint arXiv:1803.00992. Many machine learning systems rely on data collected in the wild from untrusted sources, exposing the learning algorithms to data poisoning. Attackers can inject malicious data into the training dataset to subvert the learning process, compromising the performance of the algorithm and producing errors in either a targeted or an indiscriminate way. Label flipping attacks are a special case of data poisoning, where the attacker can control the labels assigned to a fraction of the training points. Even if the capabilities of the attacker […]
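
A minimal sketch in the spirit of kNN-based label sanitization is given below: a training point whose label disagrees with the dominant label among its nearest neighbours is relabelled. The toy dataset, neighbourhood size and agreement threshold are illustrative choices, not the paper's exact parameters.

```python
# Sketch of kNN-based label sanitization against label flipping: relabel a
# training point if the dominant label in its neighbourhood disagrees with
# its current label. Illustrative parameters only.

import numpy as np
from sklearn.neighbors import NearestNeighbors

def sanitize_labels(X, y, k=10, threshold=0.6):
    """Relabel points whose neighbourhood disagrees with their current label."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X)              # first neighbour is the point itself
    y_clean = y.copy()
    for i in range(len(y)):
        neigh_labels = y[idx[i, 1:]]
        counts = np.bincount(neigh_labels, minlength=2)
        majority = counts.argmax()
        if counts[majority] / k >= threshold and majority != y[i]:
            y_clean[i] = majority          # suspected flipped label
    return y_clean

# Example: flip 20% of labels in a toy binary dataset, then sanitize.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(3, 1, (200, 2))])
y = np.array([0] * 200 + [1] * 200)
flip = rng.choice(len(y), size=int(0.2 * len(y)), replace=False)
y_poisoned = y.copy()
y_poisoned[flip] = 1 - y_poisoned[flip]
y_sanitized = sanitize_labels(X, y_poisoned)
print("labels matching ground truth after sanitization:", (y_sanitized == y).mean())
```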

Detection of Adversarial Training Examples in Poisoning Attacks through Anomaly Detection

Andrea Paudice, Luis Muñoz-González, Andras Gyorgy, Emil C. Lupu. 2018. Detection of Adversarial Training Examples in Poisoning Attacks through Anomaly Detection. arXiv preprint arXiv:1802.03041. Data poisoning is one of the most relevant security threats against machine learning systems, where attackers can subvert the learning process by injecting malicious samples into the training data. Recent work in adversarial machine learning has shown that the so-called optimal attack strategies can successfully poison linear classifiers, degrading the performance of the system dramatically after compromising a small fraction of the training dataset. In this paper we propose a defence mechanism to mitigate the effect […]
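
The sketch below illustrates the general idea of pre-filtering the training set with an outlier detector fitted on a small trusted dataset. The per-class distance detector and the threshold percentile are illustrative stand-ins, not the exact detector evaluated in the paper.

```python
# Sketch of an anomaly-detection defence against poisoning: before training,
# drop points that look anomalous with respect to a small trusted dataset.
# The detector (distance to the trusted class mean) and the 95th-percentile
# threshold are illustrative choices.

import numpy as np

def fit_filter(X_trusted, y_trusted, percentile=95):
    """Per-class mean and distance threshold estimated from trusted data."""
    params = {}
    for c in np.unique(y_trusted):
        Xc = X_trusted[y_trusted == c]
        mu = Xc.mean(axis=0)
        d = np.linalg.norm(Xc - mu, axis=1)
        params[c] = (mu, np.percentile(d, percentile))
    return params

def filter_training_set(X, y, params):
    """Keep only points whose distance to their class mean is below threshold."""
    keep = np.zeros(len(y), dtype=bool)
    for i, (x, c) in enumerate(zip(X, y)):
        mu, thr = params[c]
        keep[i] = np.linalg.norm(x - mu) <= thr
    return X[keep], y[keep]

# Example: poisoned points injected far from the class-0 cluster get removed.
rng = np.random.default_rng(1)
X_trusted = rng.normal(0, 1, (50, 2))
y_trusted = np.zeros(50, dtype=int)
X_train = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(6, 0.5, (20, 2))])
y_train = np.zeros(220, dtype=int)
params = fit_filter(X_trusted, y_trusted)
X_clean, y_clean = filter_training_set(X_train, y_train, params)
print("kept", len(y_clean), "of", len(y_train), "training points")
```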