Zaid joined the group as a Research Associate in May 2020. His research focuses on federated learning and adversarial machine learning.
Research Assistant salary in the range: £35,477 to £38,566 per annum*
Research Associate salary in the range: £40,215 to £47,579 per annum
Full Time, Fixed Term appointment to start as soon as possible, running until 31/11/2021
The Resilient Information Systems Security Group (RISS) in the Department of Computing at Imperial College London is seeking a Research Assistant/Associate to work on the EU-funded Musketeer project. Musketeer aims to create a federated, privacy-preserving machine learning data platform that is interoperable, efficient and robust against internal and external threats. Led by IBM, the project involves 11 academic and industrial partners from 7 countries and will validate its findings in two industrial scenarios, in smart manufacturing and health care. Further details about the project can be found at: www.musketeer.eu
Our paper on procedural noise adversarial examples has been accepted to the 26th ACM Conference on Computer and Communications Security (ACM CCS ’19).
Abstract: Deep Convolutional Networks (DCNs) have been shown to be vulnerable to adversarial examples—perturbed inputs specifically designed to produce intentional errors in the learning algorithms at test time. Existing input-agnostic adversarial perturbations exhibit interesting visual patterns that are currently unexplained. In this paper, we introduce a structured approach for generating Universal Adversarial Perturbations (UAPs) with procedural noise functions. Our approach unveils the systemic vulnerability of popular DCN models like Inception v3 and YOLO v3, with single noise patterns able to fool a model on up to 90% of the dataset. Procedural noise allows us to generate a distribution of UAPs with high universal evasion rates using only a few parameters. Additionally, we propose Bayesian optimization to efficiently learn procedural noise parameters to construct inexpensive untargeted black-box attacks. We demonstrate that it can achieve an average of less than 10 queries per successful attack, a 100-fold improvement on existing methods. We further motivate the use of input-agnostic defences to increase the stability of models to adversarial perturbations. The universality of our attacks suggests that DCN models may be sensitive to aggregations of low-level class-agnostic features. These findings give insight on the nature of some universal adversarial perturbations and how they could be generated in other applications.
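For intuition, here is a minimal sketch of the core idea (not the paper's released code): a single octave of Perlin noise passed through a sine colour map gives an input-agnostic, high-frequency pattern that can be added to any image under an L-infinity budget. The function names and parameter values (period, freq, eps) are illustrative assumptions, not the values tuned in the paper.

```python
import numpy as np

def perlin_noise(size, period, rng=np.random.default_rng(0)):
    """One octave of 2D Perlin noise on a (size, size) grid.
    `size` should be a multiple of `period` (the lattice spacing)."""
    cells = size // period
    # One random unit gradient vector per lattice corner.
    angles = 2 * np.pi * rng.random((cells + 1, cells + 1))
    grads = np.stack([np.cos(angles), np.sin(angles)], axis=-1)

    x, y = np.meshgrid(np.arange(size), np.arange(size), indexing="ij")
    xi, yi = x // period, y // period                      # lattice cell indices
    xf, yf = (x % period) / period, (y % period) / period  # local coords in [0, 1)

    def corner(ix, iy, dx, dy):
        g = grads[ix, iy]
        return g[..., 0] * dx + g[..., 1] * dy             # gradient . offset

    n00 = corner(xi, yi, xf, yf)
    n10 = corner(xi + 1, yi, xf - 1, yf)
    n01 = corner(xi, yi + 1, xf, yf - 1)
    n11 = corner(xi + 1, yi + 1, xf - 1, yf - 1)

    # Quintic fade curve 6t^5 - 15t^4 + 10t^3 for smooth interpolation.
    u = xf ** 3 * (xf * (xf * 6 - 15) + 10)
    v = yf ** 3 * (yf * (yf * 6 - 15) + 10)
    nx0 = n00 * (1 - u) + n10 * u
    nx1 = n01 * (1 - u) + n11 * u
    return nx0 * (1 - v) + nx1 * v

def perlin_uap(size=224, period=32, freq=12.0, eps=8 / 255):
    """Input-agnostic perturbation: sine colour map over Perlin noise,
    scaled to an L-infinity budget `eps` and tiled over three channels."""
    pattern = np.sign(np.sin(perlin_noise(size, period) * freq * np.pi))
    return eps * np.repeat(pattern[..., None], 3, axis=-1)

# Usage: x_adv = np.clip(x + perlin_uap(), 0.0, 1.0) for an image x in [0, 1].
```

In the paper, Bayesian optimization searches over such noise parameters to keep the number of queries low; the sketch above simply fixes them by hand.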
We have released the code with a demo of our poisoning attack described in the paper “Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization.”
You can access the code via this link.
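The released demo targets deep networks; as a self-contained illustration of the underlying bilevel idea, the toy sketch below poisons a ridge-regression learner by gradient ascent on the validation loss, estimating gradients by finite differences rather than by the paper's back-gradient optimization. All function names and parameter values are illustrative assumptions.

```python
import numpy as np

def train_ridge(X, y, lam=0.1):
    """Closed-form ridge regression: w = (X^T X + lam*I)^-1 X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def val_loss(w, X_val, y_val):
    """Mean squared error on the attacker's validation set."""
    return np.mean((X_val @ w - y_val) ** 2)

def poison_point(x_p, y_p, X_tr, y_tr, X_val, y_val,
                 steps=50, lr=0.5, h=1e-4):
    """Optimise one poison point (float vector x_p with fixed label y_p)
    by gradient ascent on the validation loss of the retrained learner."""
    y_aug = np.append(y_tr, y_p)
    for _ in range(steps):
        grad = np.zeros_like(x_p)
        for j in range(len(x_p)):
            e = np.zeros_like(x_p)
            e[j] = h
            # Central finite difference through full retraining.
            w_plus = train_ridge(np.vstack([X_tr, x_p + e]), y_aug)
            w_minus = train_ridge(np.vstack([X_tr, x_p - e]), y_aug)
            grad[j] = (val_loss(w_plus, X_val, y_val)
                       - val_loss(w_minus, X_val, y_val)) / (2 * h)
        x_p = x_p + lr * grad  # move the poison point uphill
    return x_p
```

Back-gradient optimization replaces the retraining-per-coordinate step with reverse-mode differentiation through a truncated learning procedure, which is what makes the attack tractable for deep networks.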
Erisa Karafili’s paper “A Formal Approach to Analyzing Cyber-Forensics Evidence” was accepted at the European Symposium on Research in Computer Security (ESORICS) 2018. This work is part of the AF-Cyber project and was carried out in collaboration with King’s College London and the University of Verona.
Title: A Formal Approach to Analyzing Cyber-Forensics Evidence
Authors: Erisa Karafili, Matteo Cristani, Luca Viganò
Abstract: The frequency and harmfulness of cyber-attacks are increasing every day, and with them also the amount of data that the cyber-forensics analysts need to collect and analyze. In this paper, we propose a formal analysis process that allows an analyst to filter the enormous amount of evidence collected and either identify crucial information about the attack (e.g., when it occurred, its culprit, its target) or, at the very least, perform a pre-analysis to reduce the complexity of the problem in order to then draw conclusions more swiftly and efficiently. We introduce the Evidence Logic EL for representing simple and derived pieces of evidence from different sources. We propose a procedure, based on monotonic reasoning, that rewrites the pieces of evidence with the use of tableau rules, based on relations of trust between sources and the reasoning behind the derived evidence, and yields a consistent set of pieces of evidence. As proof of concept, we apply our analysis process to a concrete cyber-forensics case study.
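As a toy illustration of the flavour of this analysis (not the Evidence Logic EL itself, whose rewriting uses tableau rules), the sketch below resolves contradictory pieces of evidence by preferring statements from strictly more trusted sources, yielding a consistent subset. The sources, claims and trust values are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Evidence:
    source: str   # who reported it
    claim: str    # e.g. "attack_origin=hostA"
    holds: bool   # True: claim asserted; False: claim denied

# Hypothetical trust ranking over sources (higher is more trusted).
TRUST = {"firewall_logs": 3, "ids_alerts": 2, "social_media": 1}

def conflicts(a, b):
    return a.claim == b.claim and a.holds != b.holds

def sanitize(evidence):
    """Keep only pieces of evidence not contradicted by a strictly more
    trusted source, yielding a consistent set."""
    return [e for e in evidence
            if not any(conflicts(e, o) and TRUST[o.source] > TRUST[e.source]
                       for o in evidence)]

pieces = [
    Evidence("social_media", "attack_origin=hostA", True),
    Evidence("firewall_logs", "attack_origin=hostA", False),
    Evidence("ids_alerts", "target=web_server", True),
]
print(sanitize(pieces))  # the contradicted social_media claim is dropped
```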
You can find the paper here.
This work has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 746667.
An infographic based on our work has been published by IoT UK. It describes the fusion of the digital, physical and human aspects in IoT systems, the vulnerabilities this fusion introduces, and the ways these aspects can be leveraged to defend systems against malicious threats.
A blog post on the trustworthiness of cyber-physical systems, covering Malicious Data Injections, Adversarial Machine Learning and Bayesian Risk Assessment, is now available. Follow this link to the post.
Andrea Paudice, Luis Muñoz-González, Emil C. Lupu. 2018. Label Sanitization against Label Flipping Poisoning Attacks. arXiv preprint arXiv:1803.00992.
Many machine learning systems rely on data collected in the wild from untrusted sources, exposing the learning algorithms to data poisoning. Attackers can inject malicious data into the training dataset to subvert the learning process, compromising the performance of the algorithm and producing errors in a targeted or an indiscriminate way. Label flipping attacks are a special case of data poisoning, where the attacker can control the labels assigned to a fraction of the training points. Even if the capabilities of the attacker are constrained, these attacks have been shown to be effective at significantly degrading the performance of the system. In this paper we propose an efficient algorithm to perform optimal label flipping poisoning attacks and a mechanism to detect and relabel suspicious data points, mitigating the effect of such poisoning attacks.
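A minimal sketch of the defence, in the spirit of the paper's kNN-based relabeling: a training point is flagged as suspicious and relabeled when the labels of its k nearest neighbours overwhelmingly agree on a different label. The values of k and the agreement threshold eta are illustrative assumptions.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_relabel(X, y, k=10, eta=0.8):
    """Relabel a training point when a fraction >= eta of its k nearest
    neighbours agree on a label different from the assigned one."""
    # k + 1 neighbours because each point is its own nearest neighbour.
    _, idx = NearestNeighbors(n_neighbors=k + 1).fit(X).kneighbors(X)
    y_clean = y.copy()
    for i in range(len(y)):
        labels, counts = np.unique(y[idx[i, 1:]], return_counts=True)
        majority = labels[np.argmax(counts)]
        if counts.max() / k >= eta and majority != y[i]:
            y_clean[i] = majority  # suspicious point: assign majority label
    return y_clean
```

Points whose neighbourhood labels were flipped by the attacker thus get pulled back towards the locally dominant class before training.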
Andrea Paudice, Luis Muñoz-González, Andras Gyorgy, Emil C. Lupu. 2018. Detection of Adversarial Training Examples in Poisoning Attacks through Anomaly Detection. arXiv preprint arXiv:1802.03041.
Dr Karafili is officially a Marie Curie Fellow at the Department of Computing, Imperial College London. She will work on the project “AF-Cyber: Logic-based Attribution and Forensics in Cyber Security”. The project is funded by the European Union’s Horizon 2020 Research and Innovation Programme under the Marie Skłodowska-Curie grant agreement No 746667.
AF-Cyber: Logic-based Attribution and Forensics in Cyber Security
The main goal of AF-Cyber is to investigate and analyse the problem of attributing cyber-attacks. We plan to construct a logic-based framework for performing attribution of cyber-attacks, based on cyber-forensics evidence, social science approaches and an intelligent methodology for dynamic evidence collection. AF-Cyber will address part of the cyber-attack problem by supporting forensics investigation and attribution with logic-based representation and reasoning frameworks and supporting tools. AF-Cyber is multi-disciplinary and collaborative, bridging forensics in cyber-attacks, theoretical computer science (logics and formal proofs), security, software engineering, and social science.