Category Archives: Contributions

Improving Data Sharing in Data Rich Environments

The paper “Improving Data Sharing in Data Rich Environments” was accepted at the IEEE Big Data International Workshop on Policy-based Autonomic Data Governance (PADG), part of the 5th IEEE International Conference on Big Data (Big Data 2017), December 11-14, 2017, Boston, MA, USA. This work was done in collaboration with our partners (BAE Systems, IBM UK and IBM US) from the DAIS International Technology Alliance (ITA). The paper can be found here.

Authors: Erisa Karafili, Emil C. Lupu, Alan Cullen, Bill Williams, Saritha Arunkumar, Seraphin Calo

Abstract: The increasing use of big data comes with the problem of ensuring correct and secure data access. There is a need to maximise data dissemination whilst controlling access. Depending on the type of user, different parts of the data, at different levels of quality, are shared. We introduce an alteration mechanism, more precisely a restriction mechanism, based on a policy analysis language. The alterations reflect the levels of trust and the relations the users have, and are represented as policies inside the data sharing agreements. These agreements are attached to the data and are enforced every time the data are accessed, used or shared. We show the use of our alteration mechanism with a military use case, where different parties are involved during the missions and have different relations of trust and partnership.
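As a minimal sketch of the idea, in Python rather than the paper's policy analysis language, the data sharing agreement travels with the data and a restriction is applied on every access according to the requester's trust relation. All names, trust levels and fields below are invented for illustration.

```python
# A minimal sketch (not the paper's policy language) of a restriction-based
# alteration mechanism: the data sharing agreement (DSA) is attached to the
# data and evaluated on every access. All names here are illustrative.

from dataclasses import dataclass

@dataclass
class Record:
    location: tuple      # precise (lat, lon)
    casualties: int

def restrict(record: Record, trust: str) -> dict:
    """Return the view of `record` permitted by the attached DSA policies."""
    if trust == "coalition-full":          # trusted partner: full fidelity
        return {"location": record.location, "casualties": record.casualties}
    if trust == "coalition-limited":       # partial trust: coarsen location
        lat, lon = record.location
        return {"location": (round(lat, 1), round(lon, 1))}
    return {}                              # untrusted: share nothing

report = Record(location=(51.4988, -0.1749), casualties=3)
for level in ("coalition-full", "coalition-limited", "external"):
    print(level, "->", restrict(report, level))
```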

The work was supported by EPSRC Project CIPART grant no. EP/L022729/1 and DAIS ITA (Sponsored by U.S. Army Research Laboratory and the U.K. Ministry of Defence under Agreement Number W911NF-16-3-0001).


Tracking the Bad Guys: An Efficient Forensic Methodology To Trace Multi-step Attacks Using Core Attack Graphs

The paper “Tracking the Bad Guys: An Efficient Forensic Methodology To Trace Multi-step Attacks Using Core Attack Graphs” was presented at the 13th IEEE/IFIP International Conference on Network and Service Management (CNSM’17), November 2017, in Tokyo, Japan.


The paper is available here and the presentation slides (PDF) can be downloaded here.

Authors: Martín Barrère, Rodrigo Vieira Steiner, Rabih Mohsen, Emil C. Lupu

In this paper, we describe an efficient methodology to guide investigators during network forensic analysis. To this end, we introduce the concept of core attack graph, a compact representation of the main routes an attacker can take towards specific network targets. Such compactness allows forensic investigators to focus their efforts on critical nodes that are more likely to be part of attack paths, thus reducing the overall number of nodes (devices, network privileges) that need to be examined. Moreover, core graphs allow investigators to hierarchically explore the graph in order to retrieve different levels of summarised information. We have evaluated our approach over different network topologies, varying parameters such as network size, density, and forensic evaluation threshold. Our results demonstrate that we can achieve the same level of accuracy provided by standard logical attack graphs while significantly reducing the exploration rate of the network.
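As a toy illustration of the intuition behind core attack graphs (not the paper's actual construction), one can keep only the nodes lying on some path from an attacker entry point to the target and discard everything else; the network below is invented.

```python
# Toy sketch: prune an attack graph to the nodes that can appear on a path
# from an entry point to the target. Not the paper's algorithm.

from collections import deque

def reachable(graph, sources):
    """BFS over `graph` (dict node -> set of successors) from `sources`."""
    seen, queue = set(sources), deque(sources)
    while queue:
        for nxt in graph.get(queue.popleft(), ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

def core_nodes(graph, entries, target):
    reverse = {}
    for u, succs in graph.items():
        for v in succs:
            reverse.setdefault(v, set()).add(u)
    # Nodes forward-reachable from an entry AND backward-reachable from target.
    return reachable(graph, entries) & reachable(reverse, [target])

attack_graph = {
    "internet": {"web-srv", "mail-srv"},
    "web-srv": {"app-srv"},
    "mail-srv": {"printer"},      # dead end w.r.t. the database target
    "app-srv": {"db-srv"},
}
print(core_nodes(attack_graph, ["internet"], "db-srv"))
# -> {'internet', 'web-srv', 'app-srv', 'db-srv'} (set order may vary)
```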

Naggen: a Network Attack Graph GENeration Tool

The paper “Naggen: a Network Attack Graph GENeration Tool” was presented at the IEEE Conference on Communications and Network Security (CNS’17), October 2017, in Las Vegas, USA.

The paper is available here and the poster can be downloaded here.

Authors: Martín Barrère, Emil C. Lupu

Attack graphs constitute a powerful security tool aimed at modelling the many ways in which an attacker may compromise different assets in a network. Despite their usefulness in several security-related activities (e.g. hardening, monitoring, forensics), the complexity of these graphs can grow massively as the network becomes denser and larger, thus undermining their practical usability. In this presentation, we first describe some of the problems that currently challenge the practical use of attack graphs. We then explain our approach based on core attack graphs, a novel perspective to address attack graph complexity. Finally, we present Naggen, a tool for generating, visualising and exploring core attack graphs. We use Naggen to show the advantages of our approach on different security applications.


Argumentation-based Security for Social Good

The paper “Argumentation-based Security for Social Good”, presented at the AAAI Spring Symposia 2017, is now available as an AAAI Technical Report.

Title: Argumentation-Based Security for Social Good

Authors: Erisa Karafili, Antonis C. Kakas, Nikolaos I. Spanoudakis, Emil C. Lupu

Abstract: The increase of connectivity and the impact it has on everyday life is raising new and existing security problems that are becoming important for social good. We introduce two particular problems: cyber attack attribution and regulatory data sharing. For both problems, decisions about which rules to apply should be taken under incomplete and context-dependent information. The solution we propose is based on argumentation reasoning, a technique well suited to implementing decision-making mechanisms under conflicting and incomplete information. Our proposal permits us to identify the attacker of a cyber attack and to decide which regulation rule should be applied when using and sharing data. We illustrate our solution through concrete examples.
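The following much-simplified Python sketch conveys the flavour of such a decision mechanism: several rules may apply under incomplete, context-dependent information, and a preference relation between rules resolves the conflict. Real argumentation frameworks are far richer than this; the rules, contexts and priorities below are invented for illustration.

```python
# Simplified sketch of preference-based conflict resolution between rules.
# Not a full argumentation framework: defeat is reduced to a priority order.

rules = [
    # (name, condition over context, conclusion, priority: higher wins)
    ("default-deny",  lambda ctx: True,                     "deny-sharing",  0),
    ("partner-allow", lambda ctx: ctx.get("partner"),       "allow-sharing", 1),
    ("embargo-deny",  lambda ctx: ctx.get("under_embargo"), "deny-sharing",  2),
]

def decide(ctx):
    applicable = [r for r in rules if r[1](ctx)]
    # The preferred (highest-priority) applicable rule defeats the others.
    return max(applicable, key=lambda r: r[3])[2]

print(decide({"partner": True}))                         # allow-sharing
print(decide({"partner": True, "under_embargo": True}))  # deny-sharing
```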

The paper can be found at the following link: https://aaai.org/ocs/index.php/FSS/FSS17/paper/view/15928/15306

A video of the presentation can be found on the workshop page AI for Social Good and also at the following link: https://youtu.be/wYg8jaHPbyw?t=33m33s

An argumentation reasoning approach for data processing

The paper “An argumentation reasoning approach for data processing” is now published in the Elsevier journal Computers in Industry.

Title: An argumentation reasoning approach for data processing

Authors: Erisa Karafili, Konstantina Spanaki, Emil C. Lupu

Abstract: Data-intensive environments enable us to capture information and knowledge about our physical surroundings, to optimise our resources, to enjoy personalised services, and to gain unprecedented insights into our lives. However, to obtain these benefits, the data must be generated and collected, and the resulting insights must be exploited. Following an argumentation reasoning approach for data processing, and building on the theoretical background of data management, we highlight the importance of data sharing agreements (DSAs) and quality attributes for the proposed data processing mechanism. The proposed approach takes into account DSAs and usage policies as well as the quality attributes of the data, aspects previously neglected by existing methods in the data processing and management field. Previous research has provided techniques in this direction; however, more intensive research into processing techniques is needed to enhance the value created from data, and new strategies should be formed around the data generated daily by various devices and sources.

This work was supported by FP7 EU-funded project Coco Cloud grant no.: 610853, and EPSRC Project CIPART grant no. EP/L022729/1.

The paper is available Open Access at the following link: http://www.sciencedirect.com/science/article/pii/S016636151730338X

Ransomware Dataset

Ransomware has become one of the most prominent threats in cyber-security, and recent attacks have shown the sophistication and impact of this class of malware. In essence, ransomware aims to render the victim’s system unusable by encrypting important files, and then asks the user to pay a ransom to revert the damage. Many ransomware samples include sophisticated packing techniques and are hence difficult to analyse statically. In our previous work, we developed EldeRan, a machine learning approach to analyse and classify ransomware dynamically. EldeRan monitors a set of actions performed by applications in their first phases of installation, checking for characteristic signs of ransomware.
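The sketch below illustrates the overall recipe rather than the EldeRan implementation itself: represent each sample by binary features derived from the actions observed during dynamic analysis, and train a linear classifier on labelled samples. The feature names and data are made up for illustration.

```python
# Schematic sketch (not EldeRan itself): binary dynamic-analysis features
# (API calls, registry and file operations seen in the sandbox) feed a
# linear classifier. Features and samples below are invented.

import numpy as np
from sklearn.linear_model import LogisticRegression

FEATURES = ["api:CryptEncrypt", "file:mass_rename", "reg:set_run_key",
            "api:GetUserName", "file:read_docs_dir"]

# Rows: samples observed in the sandbox; columns: 1 if the action occurred.
X = np.array([
    [1, 1, 1, 0, 1],   # ransomware-like behaviour
    [1, 1, 0, 0, 1],   # ransomware-like behaviour
    [0, 0, 0, 1, 1],   # benign application
    [0, 0, 1, 1, 0],   # benign application
])
y = np.array([1, 1, 0, 0])     # 1 = ransomware, 0 = goodware

clf = LogisticRegression().fit(X, y)
unseen = np.array([[1, 0, 1, 0, 1]])
print("P(ransomware) =", clf.predict_proba(unseen)[0, 1])
```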

You can download the dataset here; we collected and analysed it with the Cuckoo sandbox, and it includes 582 ransomware samples and 942 benign applications.

Further details about the dataset can be found in the paper:

Daniele Sgandurra, Luis Muñoz-González, Rabih Mohsen, Emil C. Lupu. “Automated Analysis of Ransomware: Benefits, Limitations, and use for Detection.” arXiv preprint arXiv:1609.03020, 2016.

If you use our dataset, please don’t forget to reference our work. You can copy the BibTeX entry here.

Detecting Malicious Data Injections in Wireless Sensor Networks

Wireless Sensor Networks (WSNs) have become popular for monitoring critical infrastructures, military applications, and Internet of Things (IoT) applications.

However, WSNs carry several vulnerabilities in the sensor nodes, the wireless medium, and the environment. In particular, the nodes are vulnerable to tampering in the field, since they are often unattended and physically accessible, and tamper-resistant hardware is often too expensive.

Malicious data injections are manipulations of measurement-related data that threaten the WSN’s mission, since they enable an attacker to elicit an incorrect system response, such as concealing the presence of problems or raising false alarms.

Measurements inspection is a method for counteracting malicious measurements by exploiting internal correlations in the measurements themselves. Since it needs no extra data it is a lightweight approach, and since it makes no assumptions about the attack vector it caters for several attacks at once.
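A minimal sketch of this idea follows, using an illustrative median-based consensus and a fixed threshold rather than the exact detection metrics from our papers:

```python
# Minimal sketch of measurements inspection: flag sensors whose readings
# break the correlation normally observed with their neighbours. The
# median consensus and threshold are illustrative choices only.

import statistics

def inspect(readings: dict, threshold: float = 5.0) -> set:
    """Return the set of sensor ids whose reading deviates from consensus."""
    consensus = statistics.median(readings.values())
    return {sid for sid, value in readings.items()
            if abs(value - consensus) > threshold}

readings = {"s1": 21.2, "s2": 20.8, "s3": 21.5, "s4": 48.0}  # s4 injected
print(inspect(readings))  # {'s4'}
```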

Our first achievement was to identify the benefits and shortcomings of current measurements inspection techniques and to produce a literature survey, published in ACM Computing Surveys: V. P. Illiano and E. C. Lupu, “Detecting malicious data injections in wireless sensor networks: A survey”, Oct. 2015. The survey revealed a large number of algorithms proposed for inspecting sensor measurements. However, malicious data injections are usually tackled together with faulty measurements, even though malicious measurements are, by and large, more difficult to detect than faulty ones, especially when multiple malicious sensors collude to produce measurements that are consistent with each other.

We designed an initial algorithm that effectively detects malicious data injections in the presence of sophisticated collusion strategies among a subset of sensor nodes, when a single event of interest (e.g. fire, earthquake, power outage) occurs at a time. The detection algorithm selects only information that appears reliable. Thanks to an aggregation operator that is accurate in the presence of genuine measurements as well as resistant to malicious data, colluding sensors cannot compensate for each other in the detection metric whilst still injecting malicious data. This work was published in IEEE Transactions on Network and Service Management: V. Illiano and E. Lupu, “Detecting malicious data injections in event detection wireless sensor networks”, Sept. 2015.
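The contrast below (illustrative, not the paper's operator) shows why the choice of aggregation matters under collusion: the mean can be dragged arbitrarily far by a consistent colluding minority, whereas the median resists until half the sensors are compromised.

```python
# Why robust aggregation matters under collusion: mean vs median on
# invented data where two colluders report consistent but false values.

import statistics

genuine   = [20.9, 21.1, 21.0, 20.8, 21.2]
colluders = [55.0, 55.2]          # consistent with each other, not with truth
observed  = genuine + colluders

print("mean   =", round(statistics.mean(observed), 2))   # inflated by colluders
print("median =", statistics.median(observed))           # stays near 21
```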

When multiple events manifest, more complex attack strategies are possible, such as creating false events near legitimate ones, or transforming a severe event into several mild events. We therefore reviewed and re-developed the initial approach to cope with such complex scenarios. Furthermore, we have dealt with the problems of characterisation, i.e. identifying the compromised sensors, and diagnosis, i.e. inferring whether an anomaly is most likely malicious or faulty. This work was published in IEEE Transactions on Dependable and Secure Computing: V. P. Illiano, L. Muñoz-González, and E. Lupu, “Don’t fool me!: Detection, characterisation and diagnosis of spoofed and masked events in wireless sensor networks”, 2016.

Whilst detection proved highly reliable even in the presence of several colluding nodes, we observed that more genuine nodes are needed to correctly characterise the malicious ones. Hence, we have studied techniques to increase the reliability of identifying malicious nodes through occasional recourse to software attestation, a technique that is particularly reliable in detecting compromised software, but also expensive given the limited computation and energy resources of sensor nodes. Based on a thorough analysis of the aspects that make measurements inspection and software attestation complementary, we have designed methods that achieve reliability as high as attestation's with an overhead as low as measurements inspection's. This work will appear in the 10th ACM Conference on Security and Privacy in Wireless and Mobile Networks (WiSec 2017).
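A sketch of the complementarity argument, with invented cost figures and interfaces: cheap measurements inspection runs every round, and the expensive attestation is invoked only when inspection raises suspicion.

```python
# Sketch: amortise the cost of reliable-but-expensive attestation by
# triggering it only when cheap measurements inspection flags an anomaly.
# Cost units, rounds and the attest callback are invented for illustration.

INSPECTION_COST, ATTESTATION_COST = 1, 50   # arbitrary energy units

def monitor(rounds, suspicious_rounds, attest):
    """Run cheap inspection every round; attest only on suspicion."""
    cost = 0
    for r in range(rounds):
        cost += INSPECTION_COST
        if r in suspicious_rounds:          # inspection flags an anomaly
            cost += ATTESTATION_COST
            attest(r)
    return cost

cost = monitor(100, {17, 63}, attest=lambda r: print(f"attest at round {r}"))
print("total cost:", cost, "vs always-attest:", 100 * ATTESTATION_COST)
```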

More recently, we have been working on evaluating the technique against evasion, i.e. an attacker who maximises the chance of staying undetected whilst causing damage.

Compositional Reliability Analysis for Probabilistic Component Automata

In this paper we propose a modelling formalism, Probabilistic Component Automata (PCA), as a probabilistic extension to Interface Automata to represent the probabilistic behaviour of component-based systems. The aim is to support composition of component-based models for both behaviour and non-functional properties such as reliability. We show how additional primitives for modelling failure scenarios, failure handling and failure propagation, as well as other algebraic operators, can be combined with models of the system architecture to automatically construct a system model by composing models of its subcomponents. The approach is supported by the tool LTSA-PCA, an extension of LTSA, which generates a composite DTMC model. The reliability of a particular system configuration can then be automatically analysed based on the corresponding composite model using the PRISM model checker. This approach facilitates configurability and adaptation in which the software configuration of components and the associated composition of component models are changed at run time.

P. Rodrigues, E. Lupu and J. Kramer, “Compositional Reliability Analysis for Probabilistic Component Automata”, to appear in the International Workshop on Modelling in Software Engineering (MiSE), Florence, May 16-17, 2015.
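To make the end of this pipeline concrete: once the composed PCA is translated to a DTMC, reliability is the probability of eventually reaching a success state rather than a failure state. PRISM computes this with proper probabilistic model checking; the toy value iteration below, on a made-up three-state chain, shows the underlying computation.

```python
# Toy DTMC reliability computation by value iteration, on an invented chain.
# P[s][t] = probability of moving from state s to state t in one step.
P = {
    "init": {"work": 0.99, "fail": 0.01},
    "work": {"done": 0.95, "work": 0.04, "fail": 0.01},
    "done": {},   # absorbing success state
    "fail": {},   # absorbing failure state
}

def reliability(P, start="init", success="done", iters=10_000):
    """Probability of eventually reaching `success`, by value iteration."""
    prob = {s: 1.0 if s == success else 0.0 for s in P}
    for _ in range(iters):
        for s, succs in P.items():
            if succs:
                prob[s] = sum(p * prob[t] for t, p in succs.items())
    return prob[start]

print("system reliability ~", round(reliability(P), 4))
```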

LTSA-PCA: Tool support for compositional reliability analysis


Software systems are constructed by combining new and existing services and components. Models that represent an aspect of a system should therefore be compositional, to facilitate reusability and automated construction from the representation of each part. In this paper we present an extension to the LTSA tool that provides support for the specification, visualisation and analysis of composable probabilistic behaviour of a component-based system using Probabilistic Component Automata (PCA). These also include the ability to specify failure scenarios and failure-handling behaviour. Following composition, a PCA that has full probabilistic information can be translated to a DTMC model for reliability analysis in PRISM. Before composition, each component can be reduced to its interface behaviour in order to mitigate the state explosion associated with composite representations, which can significantly reduce the time needed to analyse the reliability of a system. Moreover, existing behavioural analysis tools in LTSA can also be applied to PCA representations.