RISS

Resilient Information Systems Security

Procedural Noise Adversarial Examples for Black-Box Attacks on Deep Convolutional Networks (CCS ’19)

Our paper on procedural noise adversarial examples has been accepted to the 26th ACM Conference on Computer and Communications Security (ACM CCS ’19).

official: https://dl.acm.org/citation.cfm?id=3345660
code: https://github.com/kenny-co/procedural-advml

Abstract: Deep Convolutional Networks (DCNs) have been shown to be vulnerable to adversarial examples—perturbed inputs specifically designed to produce intentional errors in the learning algorithms at test time. Existing input-agnostic adversarial perturbations exhibit interesting visual patterns that are currently unexplained. In this paper, we introduce a structured approach for generating Universal Adversarial Perturbations (UAPs) with procedural noise functions. Our approach unveils the systemic vulnerability of popular DCN models like Inception v3 and YOLO v3, with single noise patterns able to fool a model on up to 90% of the dataset. Procedural noise allows us to generate a distribution of UAPs with high universal evasion rates using only a few parameters. Additionally, we propose Bayesian optimization to efficiently learn procedural noise parameters to construct inexpensive untargeted black-box attacks. We demonstrate that it can achieve an average of less than 10 queries per successful attack, a 100-fold improvement on existing methods. We further motivate the use of input-agnostic defences to increase the stability of models to adversarial perturbations. The universality of our attacks suggests that DCN models may be sensitive to aggregations of low-level class-agnostic features. These findings give insight into the nature of some universal adversarial perturbations and how they could be generated in other applications.
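To make the core idea concrete: a procedural noise function generates a smooth, parametric pattern that can be applied to any input as a perturbation. The sketch below is a deliberately simplified illustration using bilinearly interpolated value noise, not the Perlin/Gabor noise implementations from the paper (those are in the linked repository); the grid size, budget, and function names here are our own illustrative choices.

```python
import numpy as np

def value_noise_perturbation(shape=(224, 224), grid=8, eps=16, seed=0):
    """Generate a smooth, input-agnostic perturbation by smoothly
    interpolating a coarse random grid -- a simplified stand-in for
    the procedural (Perlin/Gabor) noise functions used in the paper."""
    rng = np.random.default_rng(seed)
    coarse = rng.uniform(-1.0, 1.0, size=(grid + 1, grid + 1))
    h, w = shape
    ys = np.linspace(0, grid, h)
    xs = np.linspace(0, grid, w)
    y0 = np.clip(np.floor(ys).astype(int), 0, grid - 1)
    x0 = np.clip(np.floor(xs).astype(int), 0, grid - 1)
    ty = (ys - y0)[:, None]
    tx = (xs - x0)[None, :]
    # Smoothstep weights give a continuous, visually smooth pattern.
    ty = ty * ty * (3 - 2 * ty)
    tx = tx * tx * (3 - 2 * tx)
    c00 = coarse[np.ix_(y0, x0)]
    c01 = coarse[np.ix_(y0, x0 + 1)]
    c10 = coarse[np.ix_(y0 + 1, x0)]
    c11 = coarse[np.ix_(y0 + 1, x0 + 1)]
    smooth = (c00 * (1 - ty) * (1 - tx) + c01 * (1 - ty) * tx
              + c10 * ty * (1 - tx) + c11 * ty * tx)
    # Project onto an L-infinity budget, as is common for UAP attacks.
    return eps * np.sign(smooth)

delta = value_noise_perturbation()
# An adversarial input would then be np.clip(image + delta, 0, 255).
```

Because the whole perturbation is controlled by only a few parameters (grid resolution, budget, seed), a black-box attacker can search this small parameter space, which is what makes query-efficient Bayesian optimization over noise parameters feasible.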

Fulvio Valenza

Fulvio joined the group as a Visiting Researcher. His activities focused on analysing and modelling hybrid threats.

Research Assistant/Research Associate in Federated and Adversarial Machine Learning (Imperial College London)

Full-time, fixed-term appointment starting October 2019 until 30/11/2021

The Resilient Information Systems Security (RISS) group in the Department of Computing at Imperial College London is seeking a Research Assistant/Associate to work on the EU-funded Musketeer project. Musketeer aims to create a federated and privacy-preserving machine learning data platform that is interoperable, efficient and robust against internal and external threats. Led by IBM, the project involves 11 academic and industrial partners from 7 countries and will validate its findings in two industrial scenarios: smart manufacturing and health care. Further details about the project can be found at www.musketeer.eu.

The main contribution of the RISS group to the Musketeer project is the investigation and development of federated machine learning algorithms that are robust against attacks at training and test time, including the investigation of new poisoning attack and defence strategies, as well as novel mechanisms to generate adversarial examples and mitigate their effects. The work also includes the analysis of scenarios where multiple malicious users collude to manipulate or degrade the performance of federated machine learning systems.
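To illustrate the kind of problem this research addresses, the toy sketch below contrasts plain federated averaging with a standard robust aggregation rule (coordinate-wise median) when one client submits a poisoned update. This is a generic textbook illustration of robust aggregation, not the project's actual algorithms; all names and numbers are illustrative.

```python
import numpy as np

def fedavg(updates):
    """Plain federated averaging: element-wise mean of client updates."""
    return np.mean(updates, axis=0)

def robust_median(updates):
    """Coordinate-wise median: a well-known robust aggregation rule
    that tolerates a minority of poisoned client updates."""
    return np.median(updates, axis=0)

# Nine honest clients report updates near the true value 1.0;
# one malicious client submits a large poisoned update.
rng = np.random.default_rng(1)
honest = rng.normal(loc=1.0, scale=0.1, size=(9, 4))
poisoned = np.full((1, 4), -100.0)
updates = np.vstack([honest, poisoned])

print(fedavg(updates))         # dragged far from 1.0 by the single outlier
print(robust_median(updates))  # stays close to 1.0
```

The averaging rule is swayed arbitrarily far by a single colluding or malicious client, whereas the median bounds that influence; the project's setting is harder still, since multiple colluders and stealthier poisoning strategies must be considered.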

There will be opportunities to collaborate with other researchers and PhD students in the RISS group working on adversarial machine learning and other machine learning applications in the security domain.

To apply for this position, you will need to have a strong machine learning background with proven knowledge and track record in one or more of the following research areas and techniques:

  • Adversarial machine learning.
  • Robust machine learning.
  • Federated or distributed machine learning.
  • Deep learning.
  • Bayesian inference.

Research Assistant applicants will have a Master’s degree (or equivalent), and Research Associate applicants a PhD (or equivalent), in an area pertinent to the subject, e.g., Computing or Engineering.

You must have excellent verbal and written communication skills, enjoy working collaboratively, and be able to organise your own work with minimal supervision and prioritise it to meet deadlines. Preference will be given to applicants with a proven research record and publications in the relevant areas, including in prestigious machine learning and security journals and conferences.

The post is based in the Department of Computing at Imperial College London on the South Kensington Campus. The post holder will be required to travel occasionally to attend project meetings and to work collaboratively with the project partners.

How to apply:

Please complete our online application by visiting http://www.imperial.ac.uk/jobs/description/ENG00916/research-assistant-research-associates-federated-and-adversarial-machine-learning

Applications must include the following:

  • A full CV and list of publications
  • A 1 page statement outlining why you think you would be ideal for this post.

Should you have any queries regarding the application process, please contact Jamie Perrins at j.perrins@imperial.ac.uk.

Informal enquiries can be addressed to Professor Emil Lupu (e.c.lupu@imperial.ac.uk).

For full details, visit: https://www.jobs.ac.uk/job/BTY970/research-assistant-research-associates-in-federated-and-adversarial-machine-learning

MUSKETEER: Machine learning to augment shared knowledge in federated privacy-preserving scenarios

The massive increase in data collected and stored worldwide calls for new ways to preserve privacy while still allowing data sharing among multiple data owners. Today, the lack of trusted and secure environments for data sharing inhibits the data economy, while legality, privacy, trustworthiness, data value and confidentiality hamper the free flow of data. By the end of the project, MUSKETEER aims to create a validated, federated, privacy-preserving machine learning platform, tested on industrial data, that is interoperable, scalable and efficient enough to be deployed in real use cases. MUSKETEER aims to alleviate data-sharing barriers by providing secure, scalable and privacy-preserving analytics over decentralized datasets using machine learning. Data can continue to be stored in different locations with different privacy constraints, yet be shared securely. The MUSKETEER cross-domain platform will validate progress in the industrial scenarios of smart manufacturing and health.

MUSKETEER strives to (1) create machine learning models over a variety of privacy-preserving scenarios, (2) ensure security and robustness against external and internal threats, (3) provide a standardized and extendable architecture, (4) demonstrate and validate in two different industrial scenarios, and (5) enhance the data economy by boosting sharing across domains.

The MUSKETEER impact crosses industrial, scientific, economic and strategic domains. Real-world industry requirements and outcomes are validated in an operational setting. Federated machine learning approaches for data sharing are innovated. The data economy is fostered by creating a rewarding model capable of fairly monetizing datasets according to their real data value. Finally, Europe is positioned as a leader in innovative data-sharing technologies.

Visit Musketeer’s website at www.musketeer.eu.

Follow Musketeer on Twitter (@H2020Musketeer) and on LinkedIn.

Towards More Practical Software-based Attestation

Our paper “Towards More Practical Software-based Attestation” has been accepted for publication in Elsevier’s Computer Networks journal.

Authors: Rodrigo Vieira Steiner, Emil Lupu

Abstract: Software-based attestation promises to enable the integrity verification of untrusted devices without requiring any particular hardware. However, existing proposals rely on strong assumptions that hinder their deployment and might even weaken their security. One such assumption is that using the maximum known network round-trip time to define the attestation timeout allows all honest devices to reply in time. While this is normally true in controlled environments, it is generally false in real deployments, especially in a scenario like the Internet of Things where numerous devices communicate over an intrinsically unreliable wireless medium. Moreover, a larger timeout demands more computations, consuming extra time and energy and restraining the untrusted device from performing its main tasks. In this paper, we revisit this fundamental and yet overlooked assumption and propose a novel stochastic approach that significantly improves overall attestation performance. Our experimental evaluation with IoT devices communicating over real-world uncontrolled Wi-Fi networks demonstrates the practicality and superior performance of our approach, which, in comparison with the current state-of-the-art solution, reduces the total attestation time and energy consumption around seven times for honest devices and two times for malicious ones, while improving the detection rate of honest devices (8% higher TPR) without compromising security (0% FPR).

Luca Maria Castiglione

Luca joined the group as a PhD student on HiPEDS in October 2018. He received his MSc in Computer Science and Engineering from the University of Napoli Federico II, defending a thesis entitled “Negotiation of traffic junctions over 5G networks”. The thesis work was carried out at Ericsson in Gothenburg, Sweden, within a joint project between the University of Napoli Federico II, Chalmers University of Technology and Ericsson.

He strongly believes in open-source development and is currently a mentor in the Open Leadership Programme offered by Mozilla.

His research interests lie at the intersection of cybersecurity and control engineering. In particular, he investigates the resilience of networked systems and industrial plants against cyberattacks.

You can also find him on LinkedIn.