Improving Data Sharing in Data Rich Environments

The paper “Improving Data Sharing in Data Rich Environments” was accepted at the IEEE Big Data International Workshop on Policy-based Autonomic Data Governance (PADG), part of the 15th IEEE International Conference on Big Data (Big Data 2017), December 11-14, 2017, Boston, MA, USA. This work was done in collaboration with our partners (BAE Systems, IBM UK and IBM US) from the DAIS International Technology Alliance (ITA). The paper can be found here.

Authors: Erisa Karafili, Emil C. Lupu, Alan Cullen, Bill Williams, Saritha Arunkumar, Seraphin Calo

Abstract: The increasing use of big data comes with the problem of ensuring correct and secure data access. There is a need to maximise data dissemination whilst controlling access to it. Depending on the type of user, different qualities and parts of the data are shared. We introduce an alteration mechanism, more precisely a restriction mechanism, based on a policy analysis language. The alterations reflect the levels of trust and the relations the users have, and are represented as policies inside the data sharing agreements. These agreements are attached to the data and are enforced every time the data are accessed, used or shared. We show the use of our alteration mechanism with a military use case, where different parties are involved in the missions and have different relations of trust and partnership.
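
To make the restriction idea concrete, here is a minimal sketch in Python of a policy-based restriction mechanism, assuming a deliberately simplified DSA model: every name in it (Policy, apply_dsa, the numeric trust levels, the example record) is hypothetical and does not reflect the policy analysis language used in the paper.

```python
# Minimal sketch of a restriction-style alteration mechanism attached to a
# data sharing agreement (DSA). All names and trust levels are hypothetical
# illustrations, not the paper's actual policy language.
from dataclasses import dataclass

@dataclass
class Policy:
    min_trust: int   # trust level required to see the field
    field: str       # field of the record the policy guards

def apply_dsa(record: dict, policies: list[Policy], user_trust: int) -> dict:
    """Return a restricted copy of the record: fields whose policy demands
    more trust than the requesting user has are withheld."""
    allowed = {p.field for p in policies if user_trust >= p.min_trust}
    return {k: v for k, v in record.items() if k in allowed}

# Example: a coalition partner (trust 1) sees coarse position data only;
# a fully trusted ally (trust 2) also sees the unit identifier.
dsa = [Policy(min_trust=1, field="grid_square"),
       Policy(min_trust=2, field="unit_id")]
report = {"grid_square": "NV 1234 5678", "unit_id": "A-17"}

print(apply_dsa(report, dsa, user_trust=1))  # only 'grid_square'
print(apply_dsa(report, dsa, user_trust=2))  # both fields
```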

The work was supported by EPSRC Project CIPART grant no. EP/L022729/1 and DAIS ITA (Sponsored by U.S. Army Research Laboratory and the U.K. Ministry of Defence under Agreement Number W911NF-16-3-0001).

Tracking the Bad Guys: An Efficient Forensic Methodology To Trace Multi-step Attacks Using Core Attack Graphs

The paper “Tracking the Bad Guys: An Efficient Forensic Methodology To Trace Multi-step Attacks Using Core Attack Graphs” was presented at the 13th IEEE/IFIP International Conference on Network and Service Management (CNSM’17), November 2017, in Tokyo, Japan.

The paper is available here and the presentation slides (PDF) can be downloaded here.

Authors: Martín Barrère, Rodrigo Vieira Steiner, Rabih Mohsen, Emil C. Lupu

In this paper, we describe an efficient methodology to guide investigators during network forensic analysis. To this end, we introduce the concept of core attack graph, a compact representation of the main routes an attacker can take towards specific network targets. Such compactness allows forensic investigators to focus their efforts on critical nodes that are more likely to be part of attack paths, thus reducing the overall number of nodes (devices, network privileges) that need to be examined. Moreover, core graphs allow investigators to hierarchically explore the graph in order to retrieve different levels of summarised information. We have evaluated our approach over different network topologies, varying parameters such as network size, density, and forensic evaluation threshold. Our results demonstrate that we can achieve the same level of accuracy provided by standard logical attack graphs while significantly reducing the exploration rate of the network.
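
For a concrete flavour of the pruning idea, the sketch below keeps only the nodes that lie on some attack path from an entry point to a target, using the networkx library; this is an illustration of the intuition, not the paper’s core-graph construction, and the topology is invented.

```python
# Illustrative sketch (not the paper's algorithm): prune a logical attack
# graph to the nodes lying on simple paths from an attacker entry point to
# a target, approximating the "main routes" a core attack graph keeps.
import networkx as nx

g = nx.DiGraph()
g.add_edges_from([
    ("internet", "web_srv"), ("web_srv", "app_srv"), ("app_srv", "db"),
    ("internet", "vpn"), ("vpn", "db"),
    ("web_srv", "printer"),   # dead end, not on any route to the target
])

def core_nodes(graph, source, target):
    """Union of nodes appearing on any simple attack path source -> target."""
    return {n for path in nx.all_simple_paths(graph, source, target)
            for n in path}

print(sorted(core_nodes(g, "internet", "db")))
# ['app_srv', 'db', 'internet', 'vpn', 'web_srv'] -- 'printer' is pruned
```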

Naggen: a Network Attack Graph GENeration Tool

The paper “Naggen: a Network Attack Graph GENeration Tool” was presented at the IEEE Conference on Communications and Network Security (CNS’17), October 2017, in Las Vegas, USA.

The paper is available here and the poster can be downloaded here.

Authors: Martín Barrère, Emil C. Lupu

Attack graphs constitute a powerful security tool aimed at modelling the many ways in which an attacker may compromise different assets in a network. Despite their usefulness in several security-related activities (e.g. hardening, monitoring, forensics), the complexity of these graphs can grow massively as the network becomes denser and larger, undermining their practical usability. In this presentation, we first describe some of the problems that currently challenge the practical use of attack graphs. We then explain our approach based on core attack graphs, a novel perspective to address attack graph complexity. Finally, we present Naggen, a tool for generating, visualising and exploring core attack graphs. We use Naggen to show the advantages of our approach on different security applications.


Bayesian Attack Graphs for Security Risk Assessment

Attack graphs offer a powerful framework for security risk assessment. They provide a compact representation of the attack paths that an attacker can follow to compromise network resources, derived from the analysis of the network topology and vulnerabilities. The uncertainty about the attacker’s behaviour makes Bayesian networks suitable for modelling attack graphs and performing static and dynamic security risk assessment. Thus, whilst static analysis of attack graphs considers the security posture at rest, dynamic analysis accounts for evidence of compromise at run-time, helping system administrators to react against potential threats. In this paper, we introduce a Bayesian attack graph model that allows us to estimate the probabilities of an attacker compromising different resources of the network. We show how exact and approximate inference techniques can be efficiently applied on Bayesian attack graph models with thousands of nodes.
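
As a toy illustration, the sketch below encodes a three-node Bayesian attack graph with invented probabilities and computes the marginal probability that the target is compromised by brute-force enumeration; such exact inference is tractable only for tiny graphs, which is precisely why efficient exact and approximate techniques matter at the scale discussed in the paper.

```python
# Toy Bayesian attack graph (all probabilities invented): node B can be
# compromised via A, and node C via A or B. Exact inference by enumeration
# over the joint distribution -- feasible only for tiny graphs.
from itertools import product

p_A = 0.3            # attacker exploits the entry vulnerability
p_B_given_A = 0.6    # lateral move A -> B succeeds
p_C_given = {(0, 0): 0.0, (1, 0): 0.4, (0, 1): 0.5, (1, 1): 0.7}

def joint(a, b, c):
    """Joint probability of a compromise state (a, b, c)."""
    pa = p_A if a else 1 - p_A
    pb = (p_B_given_A if b else 1 - p_B_given_A) if a else (0.0 if b else 1.0)
    pc_true = p_C_given[(a, b)]
    pc = pc_true if c else 1 - pc_true
    return pa * pb * pc

# Marginal probability that the target C is compromised
p_C = sum(joint(a, b, 1) for a, b in product((0, 1), repeat=2))
print(f"P(C compromised) = {p_C:.3f}")   # 0.174 with these numbers
```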

Luis Muñoz-González, Emil C. Lupu, “Bayesian Attack Graphs for Security Risk Assessment.” IST-153 NATO Workshop on Cyber Resilience, 2017.

Research Associate: Security and Safety Stream in the PETRAS IoT Research Hub – Cybersecurity of the IoT

Full-time, fixed-term appointment until 28th February 2019 (apply here)

The PETRAS Internet of Things Research Hub – Cybersecurity of the IoT is seeking a highly motivated postdoctoral researcher for its thematic research stream Safety and Security for IoT environments. The Hub comprises nine leading UK universities and over 47 partners from industry and the public sector. Its mission is to establish, over the next three years, a unique and exciting setting for research and development on critical issues in privacy, ethics, trust, reliability, acceptability, and security for the Internet of Things in the UK (http://www.petrashub.org). The post will be based at Imperial College on the South Kensington Campus, in the Department of Computing.

The post holder will work with the technical and social leads of the above stream, Professor Emil Lupu (Imperial) and Professor Awais Rashid (Lancaster), respectively. The post offers a unique opportunity to conduct research on the Safety and Security challenges in the Internet of Things, with access to a wide pool of academic, industrial, and governmental stakeholders and research and development “in the wild”.

You will be responsible for reviewing research outcomes from PETRAS projects, formulating research problems, and developing research outcomes that address these problems in the context of the Hub’s activities. Acting as a communication nexus for Hub activities in safety and security for the IoT also forms part of the role, and you will be expected to collaborate with partners on projects across the Hub.

To apply, you must have a PhD degree (or equivalent) in computing or a relevant engineering discipline. You should also have a track record in the security and/or safety of IoT systems, with a particular focus on sensors and embedded devices and their use in critical application environments such as healthcare or industrial control.

A proven research record with publications in the relevant areas is also required. You must be fluent in English.

Find more details about the position and the application guidelines here.

Argumentation-based Security for Social Good

The paper “Argumentation-based Security for Social Good”, presented at the AAAI Fall Symposium Series 2017, is now available as an AAAI Technical Report.

Title: Argumentation-Based Security for Social Good

Authors: Erisa Karafili, Antonis C. Kakas, Nikolaos I. Spanoudakis, Emil C. Lupu

Abstract: The increase of connectivity and the impact it has on everyday life raise new security problems, and amplify existing ones, that are becoming important for social good. We introduce two particular problems: cyber attack attribution and regulatory data sharing. For both problems, decisions about which rules to apply must be taken under incomplete and context-dependent information. The solution we propose is based on argumentation reasoning, a technique well suited to implementing decision-making mechanisms under conflicting and incomplete information. Our proposal permits us to identify the attacker of a cyber attack and to decide which regulation rule should be applied when using and sharing data. We illustrate our solution through concrete examples.
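
To give a flavour of preference-based argumentation for the attribution problem, here is a hand-rolled sketch in which a higher-priority rule defeats a conflicting one and missing evidence leaves the question undecided; the rules, priorities, and evidence labels are invented, and this is not the argumentation framework used in the paper.

```python
# Hand-rolled sketch of preference-based argumentation for attack
# attribution. Illustrates the decision style only; all rule names,
# priorities, and evidence labels are invented.
rules = [
    # (name, conclusion, premises, priority) -- higher priority wins conflicts
    ("r1", "attributed_to_X", {"ip_traces_to_X"}, 1),
    ("r2", "not_attributed_to_X", {"ip_traces_to_X", "ip_spoofable"}, 2),
]

def decide(evidence: set[str]) -> str:
    """Apply the highest-priority rule whose premises hold in the evidence."""
    applicable = [r for r in rules if r[2] <= evidence]   # subset check
    if not applicable:
        return "undecided"   # incomplete information: no rule applies
    return max(applicable, key=lambda r: r[3])[1]

print(decide({"ip_traces_to_X"}))                  # attributed_to_X
print(decide({"ip_traces_to_X", "ip_spoofable"}))  # r2 defeats r1
print(decide(set()))                               # undecided
```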

The paper can be found at the following link: https://aaai.org/ocs/index.php/FSS/FSS17/paper/view/15928/15306

A video of the presentation can be found on the workshop page AI for Social Good and also at the following link: https://youtu.be/wYg8jaHPbyw?t=33m33s

An argumentation reasoning approach for data processing

The paper “An argumentation reasoning approach for data processing” is now published in the Elsevier Journal Computers in Industry.

Title: An argumentation reasoning approach for data processing

Authors: Erisa Karafili, Konstantina Spanaki, Emil C. Lupu

Abstract: Data-intensive environments enable us to capture information and knowledge about our physical surroundings, to optimise our resources, to enjoy personalised services, and to gain unprecedented insights into our lives. However, to realise these benefits, the data must be generated and collected, and the insights extracted from them must be exploited. Following an argumentation reasoning approach for data processing, and building on the theoretical background of data management, we highlight the importance of data sharing agreements (DSAs) and quality attributes for the proposed data processing mechanism. The proposed approach takes into account the DSAs and usage policies as well as the quality attributes of the data, which existing methods in the data processing and management field have largely neglected. Previous research has provided techniques in this direction; however, more intensive research into processing techniques is needed to enhance the value created from the data generated daily by various devices and sources, and new strategies should be formed around these data.

This work was supported by FP7 EU-funded project Coco Cloud grant no.: 610853, and EPSRC Project CIPART grant no. EP/L022729/1.

The paper can be found at the following link as Open Access: http://www.sciencedirect.com/science/article/pii/S016636151730338X

Towards Poisoning Deep Learning Algorithms with Back-gradient Optimization

A number of online services nowadays rely upon machine learning to extract valuable information from data collected in the wild. This exposes learning algorithms to the threat of data poisoning, i.e., a coordinated attack in which a fraction of the training data is controlled by the attacker and manipulated to subvert the learning process. To date, these attacks have been devised only against a limited class of binary learning algorithms, due to the inherent complexity of the gradient-based procedure used to optimize the poisoning points (a.k.a. adversarial training examples).
In this work, we first extend the definition of poisoning attacks to multi-class problems. We then propose a novel poisoning algorithm based on the idea of back-gradient optimization, i.e., to compute the gradient of interest through automatic differentiation, while also reversing the learning procedure to drastically reduce the attack complexity. Compared to current poisoning strategies, our approach is able to target a wider class of learning algorithms, trained with gradient-based procedures, including neural networks and deep learning architectures. We empirically evaluate its effectiveness on several application examples, including spam filtering, malware detection, and handwritten digit recognition. We finally show that, similarly to adversarial test examples, adversarial training examples can also be transferred across different learning algorithms.
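
The sketch below mimics the outer loop of such an attack on a toy ridge-regression learner: the attacker performs gradient ascent on the validation loss with respect to a single poisoning point. The inner problem is solved in closed form and the outer gradient is estimated by finite differences; back-gradient optimization replaces that estimate with an efficient reversal of a gradient-based learning procedure, which is what lets the attack scale to neural networks. All data and hyperparameters here are invented.

```python
# Poisoning loop on a toy ridge-regression learner. The inner problem has a
# closed-form solution; the outer gradient is estimated by finite differences
# as a stand-in for back-gradient optimization.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 2)); w_true = np.array([1.0, -2.0])
y = X @ w_true + 0.1 * rng.normal(size=40)
X_val, y_val = X[:10], y[:10]   # attacker's objective is measured here

def train(Xtr, ytr, lam=0.1):
    """Closed-form ridge regression (the inner learning problem)."""
    d = Xtr.shape[1]
    return np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(d), Xtr.T @ ytr)

def val_loss(x_p, y_p):
    """Validation loss after training on the data plus one poisoning point."""
    w = train(np.vstack([X, x_p]), np.append(y, y_p))
    return np.mean((X_val @ w - y_val) ** 2)

x_p, y_p, eps, lr = np.zeros(2), 5.0, 1e-5, 0.5
for _ in range(50):   # gradient *ascent*: the attacker maximises val loss
    g = np.array([(val_loss(x_p + eps * e, y_p) - val_loss(x_p - eps * e, y_p))
                  / (2 * eps) for e in np.eye(2)])
    x_p += lr * g
print(f"val loss after poisoning: {val_loss(x_p, y_p):.3f}")
```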

Luis Muñoz-González, Battista Biggio, Ambra Demontis, Andrea Paudice, Vasin Wongrassamee, Emil C. Lupu, Fabio Roli. “Towards Poisoning Deep Learning Algorithms with Back-gradient Optimization.” Workshop on Artificial Intelligence and Security (AISec), 2017.

This work was done in collaboration with the PRA Lab at the University of Cagliari, Italy.

Efficient Attack Graph Analysis through Approximate Inference

Attack graphs provide compact representations of the attack paths an attacker can follow to compromise network resources, derived from the analysis of network vulnerabilities and topology. These representations are a powerful tool for security risk assessment. Bayesian inference on attack graphs enables the estimation of the risk of compromise to the system’s components given their vulnerabilities and interconnections, and accounts for multi-step attacks spreading through the system. While static analysis considers the risk posture at rest, dynamic analysis also accounts for evidence of compromise, for example, from Security Information and Event Management software or forensic investigation. However, in this context, exact Bayesian inference techniques do not scale well. In this article, we show how Loopy Belief Propagation (an approximate inference technique) can be applied to attack graphs, and that it scales linearly in the number of nodes for both static and dynamic analysis, making such analyses viable for larger networks. We experiment with different topologies and network clustering on synthetic Bayesian attack graphs with thousands of nodes to show that the algorithm’s accuracy is acceptable and that it converges to a stable solution. We compare sequential and parallel versions of Loopy Belief Propagation with exact inference techniques for both static and dynamic analysis, showing the advantages and gains of approximate inference techniques when scaling to larger attack graphs.
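
As a minimal, self-contained illustration of Loopy Belief Propagation (run here on a small symmetric pairwise model rather than on the conditional probability tables of a real Bayesian attack graph), the sketch below passes normalised messages around a three-node loop and reads off approximate marginals; the potentials are invented. Each iteration costs time proportional to the number of edges, which is where the linear scaling comes from.

```python
# Minimal loopy belief propagation over a pairwise binary model. Potentials
# are invented; each sweep costs O(edges), hence the linear scaling.
import numpy as np

edges = [(0, 1), (1, 2), (2, 0)]   # a small graph with one loop
unary = np.array([[0.7, 0.3], [0.5, 0.5], [0.9, 0.1]])   # per-node priors
pair = np.array([[0.9, 0.1], [0.1, 0.9]])   # "compromise spreads" coupling

# messages m[(i, j)] from node i to node j, initialised uniform
m = {(i, j): np.ones(2) for a, b in edges for i, j in ((a, b), (b, a))}

for _ in range(50):   # iterate message updates towards a fixed point
    new = {}
    for (i, j) in m:
        # product of messages flowing into i from every neighbour except j
        incoming = np.prod([m[(k, l)] for (k, l) in m if l == i and k != j],
                           axis=0)
        msg = pair.T @ (unary[i] * incoming)   # sum over states of node i
        new[(i, j)] = msg / msg.sum()          # normalise for stability
    m = new

for n in range(3):   # approximate marginals (beliefs)
    b = unary[n] * np.prod([m[(k, l)] for (k, l) in m if l == n], axis=0)
    print(f"node {n}: P(compromised) ≈ {(b / b.sum())[1]:.3f}")
```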

Luis Muñoz-González, Daniele Sgandurra, Andrea Paudice, Emil C. Lupu. “Efficient Attack Graph Analysis through Approximate Inference.” ACM Transactions on Privacy and Security, vol. 20(3), pp. 1-30, 2017.