Zaid joined the group as a Research Associate in May 2020. His research focuses on federated learning and adversarial machine learning.
Research Assistant salary in the range: £35,477 to £38,566 per annum*
Research Associate salary in the range: £40,215 to £47,579 per annum
Full Time, Fixed Term appointment to start ASAP, until 31/11/2021
The Resilient Information Systems Security Group (RISS) in the Department of Computing at Imperial College London is seeking a Research Assistant/Associate to work on the EU-funded Musketeer project. Musketeer aims to create a federated and privacy-preserving machine learning data platform that is interoperable, efficient and robust against internal and external threats. Led by IBM, the project involves 11 academic and industrial partners from 7 countries and will validate its findings in two industrial scenarios: smart manufacturing and health care. Further details about the project can be found at: www.musketeer.eu
Our paper on procedural noise adversarial examples has been accepted to the 26th ACM Conference on Computer and Communications Security (ACM CCS ’19).
Abstract: Deep Convolutional Networks (DCNs) have been shown to be vulnerable to adversarial examples—perturbed inputs specifically designed to produce intentional errors in the learning algorithms at test time. Existing input-agnostic adversarial perturbations exhibit interesting visual patterns that are currently unexplained. In this paper, we introduce a structured approach for generating Universal Adversarial Perturbations (UAPs) with procedural noise functions. Our approach unveils the systemic vulnerability of popular DCN models like Inception v3 and YOLO v3, with single noise patterns able to fool a model on up to 90% of the dataset. Procedural noise allows us to generate a distribution of UAPs with high universal evasion rates using only a few parameters. Additionally, we propose Bayesian optimization to efficiently learn procedural noise parameters to construct inexpensive untargeted black-box attacks. We demonstrate that it can achieve an average of less than 10 queries per successful attack, a 100-fold improvement on existing methods. We further motivate the use of input-agnostic defences to increase the stability of models to adversarial perturbations. The universality of our attacks suggests that DCN models may be sensitive to aggregations of low-level class-agnostic features. These findings give insight into the nature of some universal adversarial perturbations and how they could be generated in other applications.
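The core idea above—an input-agnostic perturbation generated from only a few noise parameters—can be sketched as follows. This is a minimal illustration, not the paper's implementation: it uses a simple sinusoidal pattern as a stand-in for the Perlin/Gabor procedural noise functions, and the parameter names and values are assumptions chosen for demonstration.

```python
import numpy as np

def procedural_perturbation(h, w, freq_x, freq_y, phase, eps):
    """Generate an input-agnostic perturbation from a handful of noise
    parameters (a simplified sinusoidal stand-in for procedural noise)."""
    ys, xs = np.mgrid[0:h, 0:w]
    pattern = np.sin(2 * np.pi * (freq_x * xs / w + freq_y * ys / h) + phase)
    # Scale to the L-infinity budget eps, as is standard for UAP attacks.
    return eps * pattern[..., None]  # broadcast over colour channels

# The same pattern is added to any input image: x_adv = clip(x + delta).
delta = procedural_perturbation(224, 224, freq_x=8, freq_y=8,
                                phase=0.0, eps=8 / 255)
x = np.random.rand(224, 224, 3).astype(np.float32)  # placeholder image
x_adv = np.clip(x + delta, 0.0, 1.0)
```

Because the whole perturbation is described by a few scalars (frequencies, phase, budget), a black-box attacker can search this small parameter space—for example with Bayesian optimization, as the paper proposes—rather than optimizing per-pixel values.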
We have released the code, with a demo, of our poisoning attack described in the paper “Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization.”
You can access the code via this link.
Aaron will be presenting a paper based on his MSc thesis work, “Exploiting Correlations to Detect False Data Injections in Low-Density Wireless Sensor Networks”, at the 5th ACM CPSS 2019, a workshop co-located with ACM ASIACCS.
The massive increase in data collected and stored worldwide calls for new ways to preserve privacy while still allowing data sharing among multiple data owners. Today, the lack of trusted and secure environments for data sharing inhibits the data economy, while legality, privacy, trustworthiness, data value and confidentiality hamper the free flow of data.

By the end of the project, MUSKETEER aims to create a validated, federated, privacy-preserving machine learning platform, tested on industrial data, that is interoperable, scalable and efficient enough to be deployed in real use cases. MUSKETEER aims to alleviate data-sharing barriers by providing secure, scalable and privacy-preserving analytics over decentralized datasets using machine learning. Data can continue to be stored in different locations with different privacy constraints, but shared securely. The MUSKETEER cross-domain platform will validate progress in the industrial scenarios of smart manufacturing and health.

MUSKETEER strives to (1) create machine learning models over a variety of privacy-preserving scenarios, (2) ensure security and robustness against external and internal threats, (3) provide a standardized and extendable architecture, (4) demonstrate and validate in two different industrial scenarios and (5) enhance the data economy by boosting sharing across domains.

The MUSKETEER impact crosses industrial, scientific, economic and strategic domains. Real-world industry requirements and outcomes are validated in an operational setting. Federated machine learning approaches for data sharing are innovated. The data economy is fostered by creating a rewarding model capable of fairly monetizing datasets according to the real data value. Finally, Europe is positioned as a leader in innovative data sharing technologies.
Visit Musketeer’s website here.
Follow Musketeer on Twitter: @H2020Musketeer
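The federated learning idea at the heart of MUSKETEER—data owners train locally and share only model parameters, never raw data—can be sketched with a minimal federated averaging (FedAvg-style) loop. This is an illustrative toy on a linear model, not MUSKETEER's actual platform or API; all function and variable names here are assumptions.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One data owner's local gradient steps on a linear regression model."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_average(local_weights, sizes):
    """Server aggregates local models, weighted by local dataset size."""
    total = sum(sizes)
    return sum(n / total * w for w, n in zip(local_weights, sizes))

# Three data owners, each with a private dataset that never leaves them.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
owners = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + 0.01 * rng.normal(size=50)
    owners.append((X, y))

global_w = np.zeros(2)
for _ in range(20):  # communication rounds
    updates = [local_update(global_w, X, y) for X, y in owners]
    global_w = federated_average(updates, [len(y) for _, y in owners])
```

Only the weight vectors cross the network in each round; a real platform like MUSKETEER additionally layers cryptographic and privacy protections on top of this exchange.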
Abstract: Software-based attestation promises to enable the integrity verification of untrusted devices without requiring any particular hardware. However, existing proposals rely on strong assumptions that hinder their deployment and might even weaken their security. One such assumption is that using the maximum known network round-trip time to define the attestation timeout allows all honest devices to reply in time. While this is normally true in controlled environments, it is generally false in real deployments, and especially so in a scenario like the Internet of Things, where numerous devices communicate over an intrinsically unreliable wireless medium. Moreover, a larger timeout demands more computation, consuming extra time and energy and restraining the untrusted device from performing its main tasks. In this paper, we revisit this fundamental and yet overlooked assumption and propose a novel stochastic approach that significantly improves the overall attestation performance. Our experimental evaluation with IoT devices communicating over real-world uncontrolled Wi-Fi networks demonstrates the practicality and superior performance of our approach, which, in comparison with the current state-of-the-art solution, reduces the total attestation time and energy consumption by around seven times for honest devices and two times for malicious ones, while improving the detection rate of honest devices (8% higher TPR) without compromising security (0% FPR).
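The timeout assumption the paper revisits can be illustrated with a toy verifier: a reply is accepted only if it arrives within the checksum computation time plus the assumed worst-case round-trip time. This sketch is not the paper's stochastic method; the timing constants and the delay distribution are illustrative assumptions, chosen only to show how honest devices on an unreliable link get wrongly rejected under a fixed worst-case timeout.

```python
import random

MAX_RTT = 0.50        # assumed worst-case network round-trip time (s)
CHECKSUM_TIME = 1.00  # time an honest device needs to compute the checksum (s)
TIMEOUT = CHECKSUM_TIME + MAX_RTT

def attest(response_time):
    """Verifier decision: a reply arriving after TIMEOUT is rejected
    (treated as evidence of tampering)."""
    return response_time <= TIMEOUT

random.seed(1)
# Honest device on an unreliable wireless link: actual network delay is
# modelled here as exponential with mean 0.2 s, so it occasionally
# exceeds the assumed maximum and honest replies are wrongly rejected.
honest_times = [CHECKSUM_TIME + random.expovariate(1 / 0.2)
                for _ in range(1000)]
tpr = sum(attest(t) for t in honest_times) / len(honest_times)
```

Under such heavy-tailed delays the fixed timeout yields a true-positive rate below 1 for honest devices, which is precisely the gap a stochastic timing model can close.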