Aaron will be presenting a paper based on his MSc thesis work, “Exploiting Correlations to Detect False Data Injections in Low-Density Wireless Sensor Networks”, at the 5th ACM CPSS 2019, a workshop co-located with ACM ASIACCS.
The massive increase in data collected and stored worldwide calls for new ways to preserve privacy while still allowing data sharing among multiple data owners. Today, the lack of trusted and secure environments for data sharing inhibits the data economy, while legality, privacy, trustworthiness, data value and confidentiality hamper the free flow of data.

MUSKETEER aims to alleviate these data sharing barriers by providing secure, scalable and privacy-preserving analytics over decentralized datasets using machine learning. Data can continue to be stored in different locations with different privacy constraints, yet be shared securely. By the end of the project, MUSKETEER aims to deliver a validated, federated, privacy-preserving machine learning platform, tested on industrial data, that is interoperable, scalable and efficient enough to be deployed in real use cases. The MUSKETEER cross-domain platform will validate progress in the industrial scenarios of smart manufacturing and health.

MUSKETEER strives to (1) create machine learning models over a variety of privacy-preserving scenarios, (2) ensure security and robustness against external and internal threats, (3) provide a standardized and extendable architecture, (4) demonstrate and validate the platform in two different industrial scenarios, and (5) enhance the data economy by boosting data sharing across domains.

The MUSKETEER impact crosses industrial, scientific, economic and strategic domains. Real-world industry requirements and outcomes are validated in an operational setting. Federated machine learning approaches for data sharing are advanced. The data economy is fostered by creating a rewarding model capable of fairly monetizing datasets according to their real value. Finally, Europe is positioned as a leader in innovative data sharing technologies.
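To give a flavour of the federated approach MUSKETEER builds on, the sketch below shows plain federated averaging: each data owner trains on its own data, and only model weights are shared and combined. It is an illustrative toy under our own assumptions, not the MUSKETEER platform or its API; the function names (`local_update`, `federated_round`) and the simple linear model are hypothetical.

```python
import numpy as np

# Illustrative sketch of federated averaging: raw data stays with each owner,
# only model weights travel. NOT the MUSKETEER platform API; names are hypothetical.

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One data owner trains a linear model on its private data and returns new weights."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)   # least-squares gradient on local data only
        w -= lr * grad
    return w

def federated_round(global_weights, owners):
    """Each owner updates the model locally; the server averages the returned weights."""
    updates = [local_update(global_weights, X, y) for X, y in owners]
    sizes = np.array([len(y) for _, y in owners], dtype=float)
    return np.average(updates, axis=0, weights=sizes)   # weight by local dataset size

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
owners = []
for _ in range(3):                                      # three independent data owners
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    owners.append((X, y))

w = np.zeros(2)
for _ in range(20):                                     # twenty federated rounds
    w = federated_round(w, owners)
print("learned weights:", w)                            # approaches [2, -1]
```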
Abstract: Software-based attestation promises to enable the integrity verification of untrusted devices without requiring any particular hardware. However, existing proposals rely on strong assumptions that hinder their deployment and might even weaken their security. One such assumption is that using the maximum known network round-trip time to define the attestation timeout allows all honest devices to reply in time. While this is normally true in controlled environments, it is generally false in real deployments, and especially so in a scenario like the Internet of Things, where numerous devices communicate over an intrinsically unreliable wireless medium. Moreover, a larger timeout demands more computation, consuming extra time and energy and restraining the untrusted device from performing its main tasks. In this paper, we review this fundamental and yet overlooked assumption and propose a novel stochastic approach that significantly improves the overall attestation performance. Our experimental evaluation with IoT devices communicating over real-world uncontrolled Wi-Fi networks demonstrates the practicality and superior performance of our approach: compared with the current state-of-the-art solution, it reduces the total attestation time and energy consumption around seven-fold for honest devices and two-fold for malicious ones, while improving the detection rate of honest devices (8% higher TPR) without compromising security (0% FPR).
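For readers unfamiliar with the timing assumption the abstract questions, here is a minimal sketch of the conventional check in software-based attestation: accept a device only if it returns the correct checksum over a fresh nonce within a timeout set to the worst-case round-trip time plus the expected computation time. All names and numbers below are hypothetical, and the paper’s stochastic approach is not reproduced here.

```python
import hashlib
import os
import time

# Hedged sketch of the conventional timing check in software-based attestation.
# The verifier accepts a device only if its checksum over (nonce ++ memory) is
# correct AND arrives before a timeout derived from the assumed worst-case RTT
# plus the expected honest computation time. Values and names are illustrative;
# this is not the stochastic scheme proposed in the paper.

EXPECTED_MEMORY = os.urandom(4096)        # verifier's reference copy of the firmware
MAX_RTT = 0.200                           # assumed worst-case round-trip time (s)
CHECKSUM_TIME = 0.050                     # expected honest computation time (s)

def device_response(memory, nonce):
    """What an honest device computes: a digest over its memory and the fresh nonce."""
    return hashlib.sha256(nonce + memory).digest()

def verify(device_memory, timeout=MAX_RTT + CHECKSUM_TIME):
    nonce = os.urandom(32)                              # fresh challenge prevents replay
    start = time.monotonic()
    response = device_response(device_memory, nonce)    # stands in for the network exchange
    elapsed = time.monotonic() - start
    expected = hashlib.sha256(nonce + EXPECTED_MEMORY).digest()
    return response == expected and elapsed <= timeout

print(verify(EXPECTED_MEMORY))            # honest device -> True (if it replies in time)
print(verify(os.urandom(4096)))           # modified memory -> False
```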
Luca joined the group as a PhD student on HiPEDS in October 2018. He received his MSc in Computer Science and Engineering from the University of Napoli Federico II, defending his thesis entitled “Negotiation of traffic junctions over 5G networks”. The thesis work was carried out at Ericsson in Gothenburg (Sweden), within a joint project between the University of Napoli Federico II, Chalmers University of Technology and Ericsson.
He strongly believes in open source development and is currently a mentor in the Open Leadership Programme offered by Mozilla.
His research interests lie at the boundary between cybersecurity and control engineering. In particular, his studies investigate the resilience of networked systems and industrial plants against cyberattacks.
You can also find him on LinkedIn.
Erisa Karafili’s paper “A Formal Approach to Analyzing Cyber-Forensics Evidence” was accepted at the European Symposium on Research in Computer Security (ESORICS) 2018. This work is part of the AF-Cyber Project, and was a joint collaboration with King’s College London and the University of Verona.
Title: A Formal Approach to Analyzing Cyber-Forensics Evidence
Authors: Erisa Karafili, Matteo Cristani, Luca Viganò
Abstract: The frequency and harmfulness of cyber-attacks are increasing every day, and with them also the amount of data that the cyber-forensics analysts need to collect and analyze. In this paper, we propose a formal analysis process that allows an analyst to filter the enormous amount of evidence collected and either identify crucial information about the attack (e.g., when it occurred, its culprit, its target) or, at the very least, perform a pre-analysis to reduce the complexity of the problem in order to then draw conclusions more swiftly and efficiently. We introduce the Evidence Logic EL for representing simple and derived pieces of evidence from different sources. We propose a procedure, based on monotonic reasoning, that rewrites the pieces of evidence with the use of tableau rules, based on relations of trust between sources and the reasoning behind the derived evidence, and yields a consistent set of pieces of evidence. As proof of concept, we apply our analysis process to a concrete cyber-forensics case study.
You can find the paper here.
This work was funded by the European Union’s Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 746667.
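As a rough intuition for how trust relations between sources can prune conflicting evidence, here is a deliberately simplified sketch: when two sources assert contradictory statements, the statement from the more trusted source is kept. It does not implement the Evidence Logic EL or its tableau rules; the sources, statements and trust ranking below are invented purely for illustration.

```python
# Drastically simplified, hypothetical illustration of pruning conflicting
# forensic evidence using trust relations between sources. The paper's Evidence
# Logic EL and tableau-based rewriting are NOT reproduced here.

# trust ranking: lower number = more trusted (hypothetical sources)
TRUST = {"ids_logs": 0, "netflow": 1, "anonymous_tip": 2}

# (source, statement, negated?) -- a contradiction is the same statement
# asserted with opposite polarity by two different sources
evidence = [
    ("ids_logs", "attack_started_before_march", False),
    ("anonymous_tip", "attack_started_before_march", True),
    ("netflow", "target_was_database_server", False),
]

def resolve(evidence):
    kept = {}
    for source, statement, negated in evidence:
        # keep the assertion from the most trusted source seen so far
        if statement not in kept or TRUST[source] < TRUST[kept[statement][0]]:
            kept[statement] = (source, negated)
    return [(src, stmt, neg) for stmt, (src, neg) in kept.items()]

for source, statement, negated in resolve(evidence):
    print(f"{'NOT ' if negated else ''}{statement}  (from {source})")
```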
Our paper “WSNs Under Attack! How Bad Is It? Evaluating Connectivity Impact Using Centrality Measures” was presented at Living in the Internet of Things: A PETRAS, IoTUK & IET Conference, Forum & Exhibition.
Abstract: We propose a model to represent the health of WSNs that allows us to evaluate a network’s ability to execute its functions. Central to this model is how we quantify the importance of each network node. As we focus on the availability of the network data, we investigate how well different centrality measures identify the significance of each node for the network connectivity. In this process, we propose a new metric named current-flow sink betweenness. Through a number of experiments, we demonstrate that while no metric is invariably better in identifying sensors’ connectivity relevance, the proposed current-flow sink betweenness outperforms existing metrics in the vast majority of cases.
Download a copy here.
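As a hint of what such a comparison looks like in practice, the sketch below ranks nodes of a toy topology with standard NetworkX centrality measures. The paper’s new metric, current-flow sink betweenness, is not part of NetworkX and is not implemented here; only off-the-shelf measures are used, and the topology is randomly generated rather than taken from the paper.

```python
import networkx as nx

# Hedged sketch of the kind of comparison done in the paper: rank nodes of a
# toy WSN-like topology by different centrality measures to estimate how
# important each node is for keeping the network connected. Only standard
# NetworkX measures are used; the paper's sink-oriented variant is omitted.

G = nx.random_geometric_graph(30, 0.35, seed=42)            # toy sensor deployment
G = G.subgraph(max(nx.connected_components(G), key=len)).copy()  # keep it connected

measures = {
    "degree": nx.degree_centrality(G),
    "betweenness": nx.betweenness_centrality(G),
    "current-flow betweenness": nx.current_flow_betweenness_centrality(G),
}

for name, scores in measures.items():
    top = sorted(scores, key=scores.get, reverse=True)[:5]  # five most critical nodes
    print(f"{name:>26}: most critical nodes {top}")
```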
His general interests are in machine learning, cryptography and mathematics. His current research is on the security of machine learning algorithms, primarily adversarial machine learning. He is also interested in health and lifestyle optimization, and is very much into enjoying good food.
Find him on LinkedIn.
Javi joined the group as a PhD student in May 2018. He received his BEng and MEng in Telecommunications Engineering and his MRes in Multimedia and Communications from the University Carlos III of Madrid (Spain).
He is currently interested in studying machine learning vulnerabilities in adversarial environments. He is also investigating how these systems behave when they have been partially compromised.
You can also find him on LinkedIn.