Publications

Extracting Randomness from the Trend of IPI for Cryptographic Operations in Implantable Medical Devices

H. Chizari and E. Lupu, “Extracting Randomness from the Trend of IPI for Cryptographic Operations in Implantable Medical Devices,” in IEEE Transactions on Dependable and Secure Computing, vol. 18, no. 2, pp. 875-888, 1 March-April 2021, doi: 10.1109/TDSC.2019.2921773.

Achieving secure communication between an Implantable Medical Device (IMD) and a gateway or programming device outside the body has shown its criticality in recent reports of vulnerabilities in cardiac devices, insulin pumps and neural implants, amongst others. The use of asymmetric cryptography is typically not a practical solution for IMDs due to the scarce computational and power resources. Symmetric key cryptography is preferred, but its security relies on agreeing and using strong keys, which are difficult to generate. A solution to generate strong shared keys without using extensive resources is to extract them from physiological signals already present inside the body, such as the Inter-Pulse Interval (IPI). The physiological signals must therefore be strong sources of randomness that meet five conditions: Universality (available in all people), Liveness (available at any time), Robustness (strong random number), Permanence (independent from its history) and Uniqueness (independent from other sources). However, these conditions (mainly the last three) have not been systematically examined in current methods for randomness extraction from IPI. In this study, we first propose a methodology to measure the last three conditions: information secrecy measures for Robustness, the Santha-Vazirani source delta value for Permanence and random source dependency analysis for Uniqueness. Then, using a large dataset of IPI values (almost 900,000,000 IPIs), we show that IPI does not have Robustness and Permanence as a randomness source. Thus, extraction of a strong uniform random number from IPI values is impossible. Third, we propose to use the trend of the IPI, instead of its value, as a source for a new randomness extraction method named Martingale Randomness Extraction from IPI (MRE-IPI). We evaluate MRE-IPI and show that it satisfies the Robustness condition completely and Permanence to some level. Finally, we use the NIST STS and Dieharder test suites and show that MRE-IPI outperforms all recent randomness extraction methods from IPIs and achieves a quality roughly half that of the AES random number generator. MRE-IPI is still not a strong random number and cannot be used as a key to secure communications in general. However, it can be used as a one-time pad to securely exchange keys between the communicating parties. Usage of MRE-IPI will thus be kept to a minimum, reducing the probability of breaking it. To the best of our knowledge, this is the first work in this area to use such a comprehensive method and large dataset to examine the randomness of physiological signals.
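To illustrate the idea of using the trend rather than the value of the IPI, here is a minimal Python sketch of trend-based bit extraction followed by a von Neumann debiasing step. It is not the MRE-IPI algorithm itself (which the paper builds on a martingale construction); the function names and the toy IPI values are illustrative only.

```python
import numpy as np

def trend_bits(ipi_ms):
    """Map each consecutive IPI pair to a bit: 1 if the interval grew, 0 if it
    shrank; equal consecutive values carry no trend information and are skipped."""
    diffs = np.diff(np.asarray(ipi_ms, dtype=float))
    return [1 if d > 0 else 0 for d in diffs if d != 0]

def von_neumann_debias(bits):
    """Classic debiasing: for each non-overlapping pair of bits, emit the first
    bit if the pair is (0,1) or (1,0), and drop the pair otherwise."""
    return [b0 for b0, b1 in zip(bits[0::2], bits[1::2]) if b0 != b1]

# Toy usage with made-up IPI values in milliseconds.
ipis = [812, 845, 831, 829, 850, 842, 860, 855]
print(von_neumann_debias(trend_bits(ipis)))
```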

Shadow-Catcher: Looking Into Shadows to Detect Ghost Objects in Autonomous Vehicle 3D Sensing

Zhongyuan Hau, Soteris Demetriou, Luis Muñoz-González, Emil C. Lupu, Shadow-Catcher: Looking Into Shadows to Detect Ghost Objects in Autonomous Vehicle 3D Sensing, 26th European Symposium on Research in Computer Security (ESORICS), 2021.

LiDAR-driven 3D sensing allows new generations of vehicles to achieve advanced levels of situation awareness. However, recent works have demonstrated that physical adversaries can spoof LiDAR return signals and deceive 3D object detectors into erroneously detecting “ghost” objects. Existing defenses are either impractical or focus only on vehicles. Unfortunately, smaller objects such as pedestrians and cyclists are easier to spoof but harder to defend, and spoofing them can have worse safety implications. To address this gap, we introduce Shadow-Catcher, a set of new techniques embodied in an end-to-end prototype to detect both large and small ghost object attacks on 3D detectors. We characterize a new semantically meaningful physical invariant (3D shadows) which Shadow-Catcher leverages for validating objects. Our evaluation on the KITTI dataset shows that Shadow-Catcher consistently achieves more than 94% accuracy in identifying anomalous shadows for vehicles, pedestrians, and cyclists, while it remains robust to a novel class of strong “invalidation” attacks targeting the defense system. Shadow-Catcher achieves real-time detection, requiring only 0.003s-0.021s on average to process an object in a 3D point cloud on commodity hardware, and achieves a 2.17x speedup compared to prior work.
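To make the shadow invariant concrete, below is a simplified bird's-eye-view sketch of how one might test whether LiDAR ground returns fall inside the 2D shadow region an object should cast from the sensor's viewpoint. This is a toy approximation for illustration, not Shadow-Catcher's implementation; the geometry, thresholds and function names are assumptions.

```python
import numpy as np

def in_shadow_2d(points_xy, box_corners_xy, sensor_xy=(0.0, 0.0), max_len=8.0):
    """Boolean mask of ground-plane points lying in the 2D shadow cast by an
    object's bounding-box corners, as seen from the sensor. A point is 'in
    shadow' if its bearing falls within the box's angular span (ignoring the
    +/- pi wrap-around for simplicity) and it lies behind the box's farthest
    corner, up to max_len metres further away."""
    pts = np.asarray(points_xy, dtype=float) - np.asarray(sensor_xy, dtype=float)
    corners = np.asarray(box_corners_xy, dtype=float) - np.asarray(sensor_xy, dtype=float)

    ang_p = np.arctan2(pts[:, 1], pts[:, 0])
    ang_c = np.arctan2(corners[:, 1], corners[:, 0])
    r_p = np.linalg.norm(pts, axis=1)
    r_c = np.linalg.norm(corners, axis=1)

    in_sector = (ang_p >= ang_c.min()) & (ang_p <= ang_c.max())
    behind = (r_p > r_c.max()) & (r_p < r_c.max() + max_len)
    return in_sector & behind

# A genuine object leaves an almost empty shadow region; a spoofed "ghost" object
# typically still has ordinary ground returns inside its expected shadow, which is
# the kind of inconsistency a shadow-based check can flag.
```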

Robustness and Transferability of Universal Attacks on Compressed Models

A.G. Matachana, K.T. Co, L. Muñoz-González, D. Martinez, E.C. Lupu. Robustness and Transferability of Universal Attacks on Compressed Models. AAAI 2021 Workshop: Towards Robust, Secure and Efficient Machine Learning. 2021.

Neural network compression methods like pruning and quantization are very effective at efficiently deploying Deep Neural Networks (DNNs) on edge devices. However, DNNs remain vulnerable to adversarial examples: inconspicuous inputs that are specifically designed to fool these models. In particular, Universal Adversarial Perturbations (UAPs) are a powerful class of adversarial attacks which create adversarial perturbations that can generalize across a large set of inputs. In this work, we analyze the effect of various compression techniques on robustness to UAP attacks, including different forms of pruning and quantization. We test the robustness of compressed models to white-box and transfer attacks, comparing them with their uncompressed counterparts on the CIFAR-10 and SVHN datasets. Our evaluations reveal clear differences between pruning methods, including Soft Filter and Post-training Pruning. We observe that UAP transfer attacks between pruned and full models are limited, suggesting that the systemic vulnerabilities across these models are different. This finding has practical implications, as using different compression techniques can blunt the effectiveness of black-box transfer attacks. We show that, in some scenarios, quantization can produce gradient masking, giving a false sense of security. Finally, our results suggest that conclusions about the robustness of compressed models to UAP attacks are application dependent, as we observe different phenomena in the two datasets used in our experiments.
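The white-box and transfer evaluations described above boil down to measuring the fooling rate of a fixed UAP across models. A hedged PyTorch sketch of that measurement is shown below; the models, data loader and the `uap` tensor are placeholders rather than the paper's setup.

```python
import torch

def fooling_rate(model, uap, loader, device="cpu"):
    """Fraction of inputs whose prediction changes when the universal
    perturbation `uap` is added (the usual effectiveness measure for UAPs).
    Assumes inputs are scaled to [0, 1]."""
    model.eval().to(device)
    changed, total = 0, 0
    with torch.no_grad():
        for x, _ in loader:
            x = x.to(device)
            clean = model(x).argmax(dim=1)
            pert = model((x + uap).clamp(0, 1)).argmax(dim=1)
            changed += (clean != pert).sum().item()
            total += x.size(0)
    return changed / total

# White-box vs transfer robustness (model and tensor names are placeholders):
# a UAP crafted against `full_model` is measured on `pruned_model` to quantify
# how well the attack transfers across compression.
# print(fooling_rate(full_model, uap, test_loader))
# print(fooling_rate(pruned_model, uap, test_loader))
```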

Object Removal Attacks on LiDAR-based 3D Object Detectors

LiDARs play a critical role in Autonomous Vehicles’ (AVs) perception and their safe operations. Recent works have demonstrated that it is possible to spoof LiDAR return signals to elicit fake objects. In this work we demonstrate how the same physical capabilities can be used to mount a new, even more dangerous class of attacks, namely Object Removal Attacks (ORAs). ORAs aim to force 3D object detectors to fail. We leverage the default setting of LiDARs that record a single return signal per direction to perturb point clouds in the region of interest (RoI) of 3D objects. By injecting illegitimate points behind the target object, we effectively shift points away from the target objects’ RoIs. Our initial results using a simple random point selection strategy show that the attack is effective in degrading the performance of commonly used 3D object detection models.

Z. Hau, K.T. Co, S. Demetriou, E.C. Lupu. Object Removal Attacks on LiDAR-based 3D Object Detectors. Automotive and Autonomous Vehicle Security (AutoSec) Workshop @ NDSS Symposium 2021.

Paper
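The geometric effect behind ORAs can be illustrated in a few lines of NumPy: because the sensor keeps one return per direction, a spoofed return farther along the same ray displaces the genuine point out of the object's region of interest. This is only a sketch of that effect, assuming the sensor sits at the origin of the point cloud's coordinate frame; it is not the authors' attack pipeline.

```python
import numpy as np

def shift_returns_behind(points, mask, extra_range=5.0):
    """Simulate the effect of an Object Removal Attack on the selected points:
    a LiDAR keeps a single return per direction, so spoofing a return a few
    metres farther along the same ray replaces the genuine point and moves it
    out of the target object's region of interest."""
    pts = np.asarray(points, dtype=float).copy()
    sel = pts[mask]
    ranges = np.linalg.norm(sel, axis=1, keepdims=True)
    pts[mask] = sel / ranges * (ranges + extra_range)
    return pts

# `points` is an (N, 3) cloud; `mask` would select a random subset of the points
# falling inside the target object's bounding box (the simple random selection
# strategy described in the abstract).
```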

Jacobian Regularization for Mitigating Universal Adversarial Perturbations

Authors: Kenneth Co, David Martinez Rego, Emil Lupu

Universal Adversarial Perturbations (UAPs) are input perturbations that can fool a neural network on large sets of data. They are a class of attacks that represents a significant threat as they facilitate realistic, practical, and low-cost attacks on neural networks. In this work, we derive upper bounds for the effectiveness of UAPs based on norms of data-dependent Jacobians. We empirically verify that Jacobian regularization greatly increases model robustness to UAPs by up to four times whilst maintaining clean performance. Our theoretical analysis also allows us to formulate a metric for the strength of shared adversarial perturbations between pairs of inputs. We apply this metric to benchmark datasets and show that it is highly correlated with the actual observed robustness. This suggests that realistic and practical universal attacks can be reliably mitigated without sacrificing clean accuracy, which shows promise for the robustness of machine learning systems.

Kenneth Co, David Martinez Rego, Emil Lupu, Jacobian Regularization for Mitigating Universal Adversarial Perturbations. 30th International Conference on Artificial Neural Networks (ICANN 21), Sept. 2021.

Pre-print on arxiv
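For readers who want a feel for the regularizer, the sketch below estimates a data-dependent Jacobian penalty with random projections and adds it to a standard training loss. It follows the common random-projection formulation of Jacobian regularization and is only an assumed approximation of the paper's exact objective; `lambda_jr`, `criterion` and the models are placeholders.

```python
import torch

def jacobian_penalty(model, x, n_proj=1):
    """Estimate the squared Frobenius norm of the input-output Jacobian with
    random projections: for v drawn uniformly on the unit sphere, the expected
    squared norm of d(v . f(x))/dx is proportional to ||J||_F^2."""
    x = x.clone().requires_grad_(True)
    out = model(x)
    penalty = 0.0
    for _ in range(n_proj):
        v = torch.randn_like(out)
        v = v / v.norm(dim=1, keepdim=True)
        grad, = torch.autograd.grad((out * v).sum(), x, create_graph=True)
        penalty = penalty + grad.pow(2).sum() / x.size(0)
    return penalty / n_proj

# Training objective sketch: task loss plus the weighted Jacobian penalty, with
# lambda_jr controlling the robustness/accuracy trade-off.
# loss = criterion(model(x), y) + lambda_jr * jacobian_penalty(model, x)
```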


Analyzing the Viability of UAV Missions Facing Cyber Attacks

With advanced video and sensing capabilities, unoccupied aerial vehicles (UAVs) are increasingly being used for numerous applications that involve the collaboration and autonomous operation of teams of UAVs. Yet such vehicles can be affected by cyber attacks, impacting the viability of their missions. We propose a method to conduct mission viability analysis under cyber attacks for missions that employ a team of several UAVs that share a communication network. We apply our method to a case study of a survey mission in a wildfire firefighting scenario. Within this context, we show how our method can help quantify the expected mission performance impact from an attack and determine if the mission can remain viable under various attack situations. Our method can be used both in the planning of the mission and for decision making during mission operation. Our approach to modeling attack progression and impact analysis with Petri nets is also more broadly applicable to other settings involving multiple resources that can be used interchangeably towards the same objective.

J. Soikkeli, C. Perner and E. Lupu, “Analyzing the Viability of UAV Missions Facing Cyber Attacks,” in 2021 IEEE European Symposium on Security and Privacy Workshops (EuroS&PW), Vienna, Austria, 2021, pp. 103-112.
doi: 10.1109/EuroSPW54576.2021.00018

Hazard Driven Threat Modelling for Cyber Physical Systems

Luca Maria Castiglione and Emil C. Lupu. 2020. Hazard Driven Threat Modelling for Cyber Physical Systems. In Proceedings of the 2020 Joint Workshop on CPS&IoT Security and Privacy (CPSIOTSEC’20). Association for Computing Machinery, New York, NY, USA, 13–24.

Adversarial actors have shown their ability to infiltrate enterprise networks deployed around Cyber Physical Systems (CPSs) through social engineering, credential stealing and file-less infections. When inside, they can gain enough privileges to maliciously call legitimate APIs and apply unsafe control actions to degrade the system performance and undermine its safety. Our work lies at the intersection of security and safety, and aims to understand dependencies among security, reliability and safety in CPS/IoT. We present a methodology to perform hazard driven threat modelling and impact assessment in the context of CPSs. The process starts from the analysis of behavioural, functional and architectural models of the CPS. We then apply System Theoretic Process Analysis (STPA) on the functional model to highlight high-level abuse cases. We leverage a mapping between the architectural and the system theoretic (ST) models to enumerate those components whose impairment provides the attacker with enough privileges to tamper with or disrupt the data-flows. This enables us to find a causal connection between the attack surface (in the architectural model) and system level losses. We then link the behavioural and system theoretic representations of the CPS to quantify the impact of the attack. Using our methodology it is possible to compute a comprehensive attack graph of the known attack paths and to perform both a qualitative and quantitative impact assessment of the exploitation of vulnerabilities affecting target nodes. The framework and methodology are illustrated using a small-scale example featuring a Communication Based Train Control (CBTC) system. Aspects regarding the scalability of our methodology and its application in real world scenarios are also considered. Finally, we discuss the possibility of using the results obtained to engineer both design time and real time defensive mechanisms.

Procedural Noise Adversarial Examples for Black-Box Attacks on Deep Convolutional Networks (CCS ’19)

Kenneth T. Co, Luis Muñoz-González, Sixte de Maupeou, and Emil C. Lupu. 2019. Procedural Noise Adversarial Examples for Black-Box Attacks on Deep Convolutional Networks. In Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security (CCS ’19). Association for Computing Machinery, New York, NY, USA, 275–289. DOI:https://doi.org/10.1145/3319535.3345660

Abstract: Deep Convolutional Networks (DCNs) have been shown to be vulnerable to adversarial examples—perturbed inputs specifically designed to produce intentional errors in the learning algorithms at test time. Existing input-agnostic adversarial perturbations exhibit interesting visual patterns that are currently unexplained. In this paper, we introduce a structured approach for generating Universal Adversarial Perturbations (UAPs) with procedural noise functions. Our approach unveils the systemic vulnerability of popular DCN models like Inception v3 and YOLO v3, with single noise patterns able to fool a model on up to 90% of the dataset. Procedural noise allows us to generate a distribution of UAPs with high universal evasion rates using only a few parameters. Additionally, we propose Bayesian optimization to efficiently learn procedural noise parameters to construct inexpensive untargeted black-box attacks. We demonstrate that it can achieve an average of less than 10 queries per successful attack, a 100-fold improvement on existing methods. We further motivate the use of input-agnostic defences to increase the stability of models to adversarial perturbations. The universality of our attacks suggests that DCN models may be sensitive to aggregations of low-level class-agnostic features. These findings give insight on the nature of some universal adversarial perturbations and how they could be generated in other applications.
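As a rough illustration of a procedural-noise UAP, the sketch below generates a smooth value-noise pattern (a simple stand-in for the Perlin noise used in the paper), maps it through a sine colour function and scales it to an L-infinity budget. The grid size, frequency and budget are illustrative parameters, not the ones selected by the paper's Bayesian optimization.

```python
import numpy as np

def value_noise(size, grid=8, seed=0):
    """Smooth 2D value noise: random values on a coarse lattice, upsampled with
    bilinear interpolation (an assumed stand-in for Perlin gradient noise)."""
    rng = np.random.default_rng(seed)
    lattice = rng.uniform(-1.0, 1.0, (grid + 1, grid + 1))
    coords = np.linspace(0.0, grid, size, endpoint=False)
    i = coords.astype(int)
    t = coords - i
    iy, ix = np.meshgrid(i, i, indexing="ij")
    ty, tx = np.meshgrid(t, t, indexing="ij")
    return ((1 - ty) * (1 - tx) * lattice[iy, ix]
            + (1 - ty) * tx * lattice[iy, ix + 1]
            + ty * (1 - tx) * lattice[iy + 1, ix]
            + ty * tx * lattice[iy + 1, ix + 1])

def procedural_uap(size=224, grid=8, freq=12.0, eps=10 / 255, seed=0):
    """Sine colour map over smooth noise, scaled to an L-infinity budget eps;
    adding the same pattern to every image makes it an input-agnostic (universal)
    perturbation."""
    pattern = np.sin(2 * np.pi * freq * value_noise(size, grid, seed))
    return eps * np.stack([pattern] * 3)   # (3, H, W), added to a normalised image
```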

Towards More Practical Software-based Attestation

Rodrigo Vieira Steiner, Emil Lupu, Towards more practical software-based attestation, Computer Networks, vol. 149, pp. 43-55, Elsevier, 2019.

Abstract: Software-based attestation promises to enable the integrity verification of untrusted devices without requiring any particular hardware. However, existing proposals rely on strong assumptions that hinder their deployment and might even weaken their security. One such assumption is that using the maximum known network round-trip time to define the attestation timeout allows all honest devices to reply in time. While this is normally true in controlled environments, it is generally false in real deployments and especially so in a scenario like the Internet of Things where numerous devices communicate over an intrinsically unreliable wireless medium. Moreover, a larger timeout demands more computations, consuming extra time and energy and restraining the untrusted device from performing its main tasks. In this paper, we review this fundamental and yet overlooked assumption and propose a novel stochastic approach that significantly improves the overall attestation performance. Our experimental evaluation with IoT devices communicating over real-world uncontrolled Wi-Fi networks demonstrates the practicality and superior performance of our approach, which in comparison with the current state-of-the-art solution reduces the total attestation time and energy consumption by around seven times for honest devices and two times for malicious ones, while improving the detection rate of honest devices (8% higher TPR) without compromising security (0% FPR).

A Formal Approach to Analyzing Cyber-Forensics Evidence

Erisa Karafili’s paper “A Formal Approach to Analyzing Cyber-Forensics Evidence” was accepted at the European Symposium on Research in Computer Security (ESORICS) 2018. This work is part of the AF-Cyber Project, and was a joint collaboration with King’s College London and the University of Verona.

Title: A Formal Approach to Analyzing Cyber-Forensics Evidence

Authors: Erisa Karafili, Matteo Cristani, Luca Viganò

Abstract: The frequency and harmfulness of cyber-attacks are increasing every day, and with them also the amount of data that the cyber-forensics analysts need to collect and analyze. In this paper, we propose a formal analysis process that allows an analyst to filter the enormous amount of evidence collected and either identify crucial information about the attack (e.g., when it occurred, its culprit, its target) or, at the very least, perform a pre-analysis to reduce the complexity of the problem in order to then draw conclusions more swiftly and efficiently. We introduce the Evidence Logic EL for representing simple and derived pieces of evidence from different sources. We propose a procedure, based on monotonic reasoning, that rewrites the pieces of evidence with the use of tableau rules, based on relations of trust between sources and the reasoning behind the derived evidence, and yields a consistent set of pieces of evidence. As proof of concept, we apply our analysis process to a concrete cyber-forensics case study.


You can find the paper here.

This work was funded from the European Union’s Horizon 2020 research and innovation program under the Marie Sklodowska-Curie grant agreement No 746667.

WSNs Under Attack! How Bad Is It? Evaluating Connectivity Impact Using Centrality Measures

Our paper WSNs Under Attack! How Bad Is It? Evaluating Connectivity Impact Using Centrality Measures has been presented at the Living in the Internet of Things: A PETRAS, IoTUK & IET Conference, Forum & Exhibition.

Authors: Rodrigo Vieira Steiner, Martín Barrère, Emil C. Lupu

Abstract: We propose a model to represent the health of WSNs that allows us to evaluate a network’s ability to execute its functions. Central to this model is how we quantify the importance of each network node. As we focus on the availability of the network data, we investigate how well different centrality measures identify the significance of each node for the network connectivity. In this process, we propose a new metric named current-flow sink betweenness. Through a number of experiments, we demonstrate that while no metric is invariably better in identifying sensors’ connectivity relevance, the proposed current-flow sink betweenness outperforms existing metrics in the vast majority of cases.

Download a copy here.
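The current-flow sink betweenness is the paper's own metric; as a rough, assumed approximation one can restrict NetworkX's current-flow betweenness to flows terminating at the sink, as sketched below on a toy grid topology.

```python
import networkx as nx

def sink_oriented_current_flow(G, sink):
    """Score each node by current-flow betweenness restricted to flows that
    terminate at the sink (NetworkX's subset variant). This is an assumed
    stand-in for the paper's current-flow sink betweenness, not its exact
    definition."""
    sources = [n for n in G.nodes if n != sink]
    return nx.current_flow_betweenness_centrality_subset(
        G, sources=sources, targets=[sink], normalized=True)

# Toy WSN topology: a 4x4 grid of sensors with the sink at one corner.
G = nx.grid_2d_graph(4, 4)
scores = sink_oriented_current_flow(G, sink=(0, 0))
for node, score in sorted(scores.items(), key=lambda kv: -kv[1])[:5]:
    print(node, round(score, 3))
```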

Label Sanitization against Label Flipping Poisoning Attacks

Andrea Paudice, Luis Muñoz-González, Emil C. Lupu. 2018. Label Sanitization against Label Flipping Poisoning Attacks. arXiv preprint arXiv:1803.00992.

Many machine learning systems rely on data collected in the wild from untrusted sources, exposing the learning algorithms to data poisoning. Attackers can inject malicious data into the training dataset to subvert the learning process, compromising the performance of the algorithm and producing errors in a targeted or indiscriminate way. Label flipping attacks are a special case of data poisoning, where the attacker can control the labels assigned to a fraction of the training points. Even if the capabilities of the attacker are constrained, these attacks have been shown to be effective in significantly degrading the performance of the system. In this paper we propose an efficient algorithm to perform optimal label flipping poisoning attacks and a mechanism to detect and relabel suspicious data points, mitigating the effect of such poisoning attacks.
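A minimal sketch of the kNN-based relabelling idea is shown below, assuming the defender relabels any point whose label disagrees with most of its nearest neighbours; k and the agreement threshold are illustrative, not the paper's settings.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_relabel(X, y, k=10, threshold=0.5):
    """Relabel points that disagree with their neighbourhood: if fewer than
    `threshold` of a point's k nearest neighbours share its label, replace the
    label with the neighbourhood majority vote."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    neigh = nn.kneighbors(X, return_distance=False)[:, 1:]   # drop the point itself
    y_clean = y.copy()
    for i, idx in enumerate(neigh):
        labels = y[idx]
        if np.mean(labels == y[i]) < threshold:
            values, counts = np.unique(labels, return_counts=True)
            y_clean[i] = values[np.argmax(counts)]
    return y_clean
```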

Ensuring the resilience of WSN to Malicious Data Injections through Measurements Inspection

Vittorio Paolo Illiano, PhD Thesis

Malicious data injections pose a severe threat to systems based on Wireless Sensor Networks (WSNs) since they give the attacker control over the measurements and, in turn, over the system’s status and response. Malicious measurements are particularly threatening when used to spoof or mask events of interest, thus eliciting or preventing desirable responses. Spoofing and masking attacks are particularly difficult to detect since they depict plausible behaviours, especially if multiple sensors have been compromised and collude to inject a coherent set of malicious measurements. Previous work has tackled the problem through measurements inspection, which analyses the inter-measurements correlations induced by the physical phenomena. However, these techniques consider simplistic attacks and are not robust to collusion. Moreover, they assume highly predictable patterns in the measurements distribution, which are invalidated by the unpredictability of events. We design a set of techniques that effectively detect malicious data injections in the presence of sophisticated collusion strategies, when one or more events manifest. Moreover, we build a methodology to characterise the likely compromised sensors. We also design diagnosis criteria that allow us to distinguish anomalies arising from malicious interference and faults. In contrast with previous work, we test the robustness of our methodology with automated and sophisticated attacks, where the attacker aims to evade detection. We conclude that our approach outperforms state-of-the-art approaches. Moreover, we estimate quantitatively the WSN degree of resilience and provide a methodology to give a WSN owner an assured degree of resilience by automatically designing the WSN deployment. To deal also with the extreme scenario where the attacker has compromised most of the WSN, we propose a combination with software attestation techniques, which are more reliable when malicious data originates from compromised software, but also more expensive, and achieve an excellent trade-off between cost and resilience.

Download the thesis here.

Detection of Adversarial Training Examples in Poisoning Attacks through Anomaly Detection

Andrea Paudice, Luis Muñoz-González, Andras Gyorgy, Emil C. Lupu. 2018. Detection of Adversarial Training Examples in Poisoning Attacks through Anomaly Detection. arXiv preprint arXiv:1802.03041.

Machine learning has become an important component for many systems and applications including computer vision, spam filtering, malware and network intrusion detection, among others. Despite the capabilities of machine learning algorithms to extract valuable information from data and produce accurate predictions, it has been shown that these algorithms are vulnerable to attacks.

Data poisoning is one of the most relevant security threats against machine learning systems, where attackers can subvert the learning process by injecting malicious samples into the training data. Recent work in adversarial machine learning has shown that the so-called optimal attack strategies can successfully poison linear classifiers, degrading the performance of the system dramatically after compromising only a small fraction of the training dataset. In this paper we propose a defence mechanism to mitigate the effect of these optimal poisoning attacks based on outlier detection. We show empirically that the adversarial examples generated by these attack strategies are quite different from genuine points, as no detectability constraints are considered to craft the attack. Hence, they can be detected with an appropriate pre-filtering of the training dataset.
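As a sketch of such pre-filtering, assuming a small trusted dataset is available, one can score each training point by its distance to trusted points of the same class and drop the most anomalous ones; the neighbourhood size and percentile threshold below are illustrative, not the paper's parameters.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def filter_poison(X_train, y_train, X_trusted, y_trusted, k=5, pct=95):
    """Per-class distance-based outlier filtering: each training point is scored
    by its mean distance to the k nearest trusted points of the same class, and
    points above the `pct` percentile of that score are dropped."""
    keep = np.ones(len(y_train), dtype=bool)
    for c in np.unique(y_train):
        nn = NearestNeighbors(n_neighbors=k).fit(X_trusted[y_trusted == c])
        idx = np.where(y_train == c)[0]
        dists, _ = nn.kneighbors(X_train[idx])
        scores = dists.mean(axis=1)
        keep[idx[scores > np.percentile(scores, pct)]] = False
    return X_train[keep], y_train[keep]
```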

Determining Resilience Gains From Anomaly Detection for Event Integrity in Wireless Sensor Networks

Vittorio P. Illiano, Andrea Paudice, Luis Muñoz-González, and Emil C. Lupu. 2018. Determining Resilience Gains From Anomaly Detection for Event Integrity in Wireless Sensor Networks. ACM Trans. Sen. Netw. 14, 1, Article 5 (February 2018), 35 pages. DOI: https://doi.org/10.1145/3176621

Abstract: Measurements collected in a wireless sensor network (WSN) can be maliciously compromised through several attacks, but anomaly detection algorithms may provide resilience by detecting inconsistencies in the data. Anomaly detection can identify severe threats to WSN applications, provided that there is a sufficient amount of genuine information. This article presents a novel method to calculate an assurance measure for the network by estimating the maximum number of malicious measurements that can be tolerated. In previous work, the resilience of anomaly detection to malicious measurements has been tested only against arbitrary attacks, which are not necessarily sophisticated. The novel method presented here is based on an optimization algorithm, which maximizes the attack’s chance of staying undetected while causing damage to the application, thus seeking the worst-case scenario for the anomaly detection algorithm. The algorithm is tested on a wildfire monitoring WSN to estimate the benefits of anomaly detection on the system’s resilience. The algorithm also returns the measurements that the attacker needs to synthesize, which are studied to highlight the weak spots of anomaly detection. Finally, this article presents a novel methodology that takes in input the degree of resilience required and automatically designs the deployment that satisfies such a requirement.

Improving Data Sharing in Data Rich Environments

The paper “Improving Data Sharing in Data Rich Environments” was accepted at the IEEE Big Data International Workshop on Policy-based Autonomic Data Governance (PADG), part of the 15th IEEE International Conference on Big Data (Big Data 2017), December 11-14, 2017, Boston, MA, USA. This work was done in collaboration with our partners (BAE Systems, IBM UK and IBM US) from the DAIS International Technology Alliance (ITA). The paper can be found here.

Authors: Erisa Karafili, Emil C. Lupu, Alan Cullen, Bill Williams, Saritha Arunkumar, Seraphin Calo

Abstract: The increasing use of big data comes along with the problem of ensuring correct and secure data access. There is a need to maximise data dissemination whilst controlling access. Depending on the type of user, different qualities and parts of the data are shared. We introduce an alteration mechanism, more precisely a restriction one, based on a policy analysis language. The alterations reflect the levels of trust and the relations the users have, and are represented as policies inside the data sharing agreements. These agreements are attached to the data and are enforced every time the data are accessed, used or shared. We show the use of our alteration mechanism with a military use case, where different parties are involved during the missions, and they have different relations of trust and partnership.

The work was supported by EPSRC Project CIPART grant no. EP/L022729/1 and DAIS ITA (Sponsored by U.S. Army Research Laboratory and the U.K. Ministry of Defence under Agreement Number W911NF-16-3-0001).


Tracking the Bad Guys: An Efficient Forensic Methodology To Trace Multi-step Attacks Using Core Attack Graphs

Martín Barrère, Rodrigo Vieira Steiner, Rabih Mohsen, Emil C. Lupu, Tracking the Bad Guys: An Efficient Forensic Methodology To Trace Multi-step Attacks Using Core Attack Graphs, 13th IEEE/IFIP International Conference on Network and Service Management (CNSM’17), November 2017, in Tokyo, Japan.

In this paper, we describe an efficient methodology to guide investigators during network forensic analysis. To this end, we introduce the concept of core attack graph, a compact representation of the main routes an attacker can take towards specific network targets. Such compactness allows forensic investigators to focus their efforts on critical nodes that are more likely to be part of attack paths, thus reducing the overall number of nodes (devices, network privileges) that need to be examined. Nevertheless, core graphs also allow investigators to hierarchically explore the graph in order to retrieve different levels of summarised information. We have evaluated our approach over different network topologies varying parameters such as network size, density, and forensic evaluation threshold. Our results demonstrate that we can achieve the same level of accuracy provided by standard logical attack graphs while significantly reducing the exploration rate of the network.

Naggen: a Network Attack Graph GENeration Tool

Martín Barrère and Emil C. Lupu, Naggen: a Network Attack Graph GENeration Tool, IEEE Conference on Communications and Network Security (CNS’17), October 2017, Las Vegas, USA.

Attack graphs constitute a powerful security tool aimed at modelling the many ways in which an attacker may compromise different assets in a network. Despite their usefulness in several security-related activities (e.g. hardening, monitoring, forensics), the complexity of these graphs can massively grow as the network becomes denser and larger, thus defying their practical usability. In this presentation, we first describe some of the problems that currently challenge the practical use of attack graphs. We then explain our approach based on core attack graphs, a novel perspective to address attack graph complexity. Finally, we present Naggen, a tool for generating, visualising and exploring core attack graphs. We use Naggen to show the advantages of our approach on different security applications.

Bayesian Attack Graphs for Security Risk Assessment

Attack graphs offer a powerful framework for security risk assessment. They provide a compact representation of the attack paths that an attacker can follow to compromise network resources, derived from the analysis of the network topology and vulnerabilities. The uncertainty about the attacker’s behaviour makes Bayesian networks suitable for modelling attack graphs to perform static and dynamic security risk assessment. Thus, whilst static analysis of attack graphs considers the security posture at rest, dynamic analysis accounts for evidence of compromise at run-time, helping system administrators to react against potential threats. In this paper, we introduce a Bayesian attack graph model that allows us to estimate the probabilities of an attacker compromising different resources of the network. We show how exact and approximate inference techniques can be efficiently applied on Bayesian attack graph models with thousands of nodes.

Luis Muñoz-González, Emil C. Lupu, “Bayesian Attack Graphs for Security Risk Assessment.” IST-153 NATO Workshop on Cyber Resilience, 2017.
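A toy illustration of the static analysis: given exploit success probabilities on a small attack graph with OR semantics, the compromise probability of each resource can be estimated by sampling. The graph, probabilities and Monte Carlo approach below are illustrative assumptions; the paper itself uses exact and approximate Bayesian network inference.

```python
import random

# Toy Bayesian attack graph (structure and probabilities are illustrative): the
# attacker can exploit the web server A from outside; the application server B is
# reachable from A; the database C can be attacked from either A or B. Edge values
# are exploit success probabilities, and a node is compromised (OR semantics) if
# any exploit from an already-compromised parent succeeds.
edges = {
    ("attacker", "A"): 0.8,
    ("A", "B"): 0.5,
    ("A", "C"): 0.2,
    ("B", "C"): 0.6,
}
order = ["A", "B", "C"]  # a topological order of the graph

def sample_compromise(rng):
    compromised = {"attacker"}
    for node in order:
        for (src, dst), p in edges.items():
            if dst == node and src in compromised and rng.random() < p:
                compromised.add(node)
                break
    return compromised

rng = random.Random(0)
n_samples = 100_000
hits = {node: 0 for node in order}
for _ in range(n_samples):
    reached = sample_compromise(rng)
    for node in order:
        hits[node] += node in reached
for node in order:
    print(f"P(compromise {node}) ~ {hits[node] / n_samples:.3f}")
```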

Argumentation-based Security for Social Good

The paper “Argumentation-based Security for Social Good”, presented at the AAAI Spring Symposia 2017, is now available in the AAAI Technical Report series.

Title: Argumentation-Based Security for Social Good

Authors: Erisa Karafili, Antonis C. Kakas, Nikolaos I. Spanoudakis, Emil C. Lupu

Abstract: The increase in connectivity and its impact on everyday life are raising new and existing security problems that are becoming important for social good. We introduce two particular problems: cyber attack attribution and regulatory data sharing. For both problems, decisions about which rules to apply should be taken under incomplete and context-dependent information. The solution we propose is based on argumentation reasoning, which is a well-suited technique for implementing decision-making mechanisms under conflicting and incomplete information. Our proposal permits us to identify the attacker of a cyber attack and decide the regulation rule that should be used while using and sharing data. We illustrate our solution through concrete examples.

The paper can be found at the following link: https://aaai.org/ocs/index.php/FSS/FSS17/paper/view/15928/15306

A video of the presentation can be found on the workshop page AI for Social Good and also at the following link: https://youtu.be/wYg8jaHPbyw?t=33m33s