Andrea joined us from the University of Naples as a first-year PhD student on the HiPEDS CDT.
Erisa joined us after a PhD at the University of Verona and a Post-Doc at the Technical University of Denmark. Her work focused on Data Sharing Agreements and on argumentation and logic programming techniques applied to security.
Wireless Sensor Networks (WSNs) are vulnerable and can be maliciously compromised, either physically or remotely, with potentially devastating effects. When sensor networks are used to detect the occurrence of events such as fires, intruders or heart-attacks, malicious data can be injected to create fake events and, thus, trigger an undesired response, or to mask the occurrence of actual events. We propose a novel algorithm to identify malicious data injections and build measurement estimates that are resistant to several compromised sensors even when they collude in the attack. We also propose a methodology to apply this algorithm in different application contexts and evaluate its results on three different datasets drawn from distinct WSN deployments. This leads us to identify different trade-offs in the design of such algorithms and how they are influenced by the application context.
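As a rough illustration of the underlying idea (this is a minimal sketch, not the algorithm proposed in the paper), a measurement estimate can be made resistant to a minority of compromised sensors by using a robust aggregate such as the median, and sensors whose readings deviate strongly from that estimate can then be flagged. The function names and threshold below are hypothetical:

```python
# Minimal sketch (not the paper's algorithm): a robust estimate of a sensed
# quantity that tolerates several colluding compromised sensors. The median
# cannot be shifted arbitrarily unless more than half the sensors are
# compromised.

def robust_estimate(readings):
    """Median of the readings: resistant to fewer than n/2 malicious values."""
    ordered = sorted(readings)
    n = len(ordered)
    mid = n // 2
    return ordered[mid] if n % 2 else (ordered[mid - 1] + ordered[mid]) / 2

def flag_suspects(readings, threshold):
    """Flag sensors whose readings deviate from the robust estimate."""
    est = robust_estimate(readings)
    return [i for i, r in enumerate(readings) if abs(r - est) > threshold]

# Three colluding sensors (indices 4-6) inject high values to fake an event.
readings = [20.1, 19.8, 20.4, 20.0, 95.0, 96.2, 94.7]
print(robust_estimate(readings))   # stays near the honest values: 20.4
print(flag_suspects(readings, 5))  # -> [4, 5, 6]
```

A simple median is of course far weaker than the colluding-attacker-resistant estimation the paper develops, but it conveys why robust aggregation is the natural starting point.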
Rodrigo was a PhD student in the RISS group working on attestation techniques for sensor networks. He currently works in Brazil.
Cloud and mobile are two major computing paradigms that are rapidly converging. However, these models still lack a way to manage the dissemination and control of personal and business-related data. To this end, we propose a framework to control the sharing, dissemination and usage of data based on mutually agreed Data Sharing Agreements (DSAs). These agreements are enforced uniformly, and end-to-end, both on Cloud and mobile platforms, and may reflect legal, contractual or user-defined preferences. We introduce an abstraction layer that makes available the enforcement functionality across different types of nodes whilst hiding the distribution of components and platform specifics. We also discuss a set of different types of nodes that may run such a layer.
Daniele Sgandurra, Francesco Di Cerbo, Slim Trabelsi, Fabio Martinelli, and Emil Lupu: Sharing Data Through Confidential Clouds: An Architectural Perspective. In Proceedings of the 1st International Workshop on TEchnical and LEgal aspects of data pRivacy and SEcurity (TELERISE), IEEE/ACM, 2015, pp. 58-61, DOI: 10.1109/TELERISE.2015.19. Bibtex.
We are always looking for outstanding PhD students passionate about computer security and resilience for enterprise systems, large-scale infrastructures, or cyber-physical environments. The Department of Computing at Imperial College has a number of PhD studentships. Please check that your application meets the minimum entry requirements specified by the College before applying. Please don’t hesitate to contact Professor Emil Lupu if you would like to work with our group, and include in your email a CV, transcripts, and examples of work that you have done, e.g. papers or an MSc/BSc thesis, even if in draft form.
Topics of interest include:
- The resilience of Cyber-Physical Systems
- Robust information fusion
- Topics at the intersection of security and safety
- The security of AR/VR environments
- Designing security in very resource-constrained environments, e.g. implantable medical sensors
- Adversarial machine learning, with a focus on practical problems, toolchains for robustness, and attack detection
In this paper we propose a modelling formalism, Probabilistic Component Automata (PCA), as a probabilistic extension to Interface Automata to represent the probabilistic behaviour of component-based systems. The aim is to support composition of component-based models for both behaviour and non-functional properties such as reliability. We show how additional primitives for modelling failure scenarios, failure handling and failure propagation, as well as other algebraic operators, can be combined with models of the system architecture to automatically construct a system model by composing models of its subcomponents. The approach is supported by the tool LTSA-PCA, an extension of LTSA, which generates a composite DTMC model. The reliability of a particular system configuration can then be automatically analysed based on the corresponding composite model using the PRISM model checker. This approach facilitates configurability and adaptation in which the software configuration of components and the associated composition of component models are changed at run time.
P. Rodrigues, E. Lupu and J. Kramer, Compositional Reliability Analysis for Probabilistic Component Automata, to appear in International Workshop on Modelling in Software Engineering (MiSE), Florence, May 16-17, 2015.
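To give a flavour of the kind of analysis PRISM performs on the composite DTMC (this is a minimal sketch under assumed names, not LTSA-PCA or PRISM itself), the reliability of a configuration can be read as the probability of reaching a "success" absorbing state rather than a "fail" state, computed here by fixed-point iteration on the reachability equations:

```python
# Minimal sketch (not LTSA-PCA/PRISM): reliability of a composite DTMC as the
# probability of absorption in the "success" state, solving
#   p(s) = sum_t P[s][t] * p(t),  with p(success) = 1 and p(fail) = 0,
# by fixed-point iteration.

def reliability(transitions, success, fail, start, iters=1000):
    states = set(transitions) | {success, fail}
    p = {s: 0.0 for s in states}
    p[success] = 1.0
    for _ in range(iters):
        for s, out in transitions.items():
            if s in (success, fail):
                continue
            p[s] = sum(prob * p[t] for t, prob in out.items())
    return p[start]

# Two components composed in sequence, each of which may fail during its step.
dtmc = {
    "start": {"comp2": 0.95, "fail": 0.05},   # component 1 succeeds w.p. 0.95
    "comp2": {"success": 0.98, "fail": 0.02}, # component 2 succeeds w.p. 0.98
}
print(reliability(dtmc, "success", "fail", "start"))  # ≈ 0.95 * 0.98 = 0.931
```

In the paper's setting the composite DTMC is generated automatically from the PCA models rather than written by hand, and PRISM solves the corresponding equations exactly.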
Self-managed systems need to adapt to changes in requirements and in operational conditions. New components or services may become available, others may become unreliable or fail. Non-functional aspects, such as reliability or other quality-of-service parameters usually drive the selection of new architectural configurations. However, in existing approaches, the link between non-functional aspects and software models is established through manual annotations that require human intervention on each re-configuration and adaptation is enacted through fixed rules that require anticipation of all possible changes. We propose here a methodology to automatically re-assemble services and component-based applications to preserve their reliability. To achieve this we define architectural and behavioural models that are composable, account for non-functional aspects and correspond closely to the implementation. Our approach enables autonomous components to locally adapt and control their internal configuration whilst exposing interface models to upstream components.
P. Rodrigues, J. Kramer and E. Lupu, On Re-Assembling Self-Managed Components, to appear in International Symposium on Integrated Network and Service Management (IM), Ottawa, May 11-15, 2015.
A short video clip discussing how humans and machines can learn from each other in a continuous feedback system.
Martín has recently joined the group after receiving a PhD degree in Computer Science from the University of Lorraine, France, in 2014. His current research work focuses on intelligent protection mechanisms for cloud environments at run-time. His topics of interest include computer security, autonomic computing, vulnerability management, mobile and cloud computing, distributed computing, digital evidence and forensics, formal models and languages, Linux-based systems and TCP/IP networks administration, Java technologies, logic and database systems.
Luis is a Research Associate in the Department of Computing at Imperial College London. He received his PhD from University Carlos III of Madrid (Spain), where he proposed novel Gaussian process models for non-stationary and heteroscedastic regression. His background includes machine learning and cyber-security. His current research interests are adversarial machine learning and security risk assessment with attack graph models. You can find more details about his current research activities and contact information at his personal web page, Google Scholar profile or ResearchGate.
Federico received his MSc in Engineering in Computer Science from the University of Ferrara. He joined the group in 2014 and is working towards a PhD. He is interested in protocol modelling, in particular applied to SCADA networks. His work focuses on listening to the communication between two hosts and extracting information about the message format and state machine of the protocol.
Ubiquitous systems and applications involve interactions between multiple autonomous entities—for example, robots in a mobile ad-hoc network collaborating to achieve a goal, communications between teams of emergency workers involved in disaster relief operations or interactions between patients’ and healthcare workers’ mobile devices. We have previously proposed the Self-Managed Cell (SMC) as an architectural pattern for managing autonomous ubiquitous systems that comprise both hardware and software components and that implement policy-based adaptation strategies. We have also shown how basic management interactions between autonomous SMCs can be realised through exchanges of notifications and policies, to effectively program management and context-aware adaptations. We present here how autonomous SMCs can be composed and federated into complex structures through the systematic composition of interaction patterns. By composing simpler abstractions as building blocks of more complex interactions it is possible to leverage commonalities across the structural, control and communication views to manage a broad variety of composite autonomous systems including peer-to-peer collaborations, federations and aggregations with varying degrees of devolution of control. Although the approach is more broadly applicable, we focus on systems where declarative policies are used to specify adaptation and on context-aware ubiquitous systems that present some degree of autonomy in the physical world, such as body sensor networks and autonomous vehicles. Finally, we present a formalisation of our model that allows a rigorous verification of the properties satisfied by the SMC interactions before policies are deployed in physical devices.
Schaeffer-Filho, Alberto and Lupu, Emil and Sloman, Morris. Federating Policy-Driven Autonomous Systems: Interaction Specification and Management Patterns, Journal of Network and Systems Management, Springer, http://dx.doi.org/10.1007/s10922-014-9317-5
Vittorio Illiano is a PhD student in the Department of Computing at Imperial College London, as part of the Intel ICRI on Sustainable and Connected Cities.
His main research area is security in Wireless Sensor Networks, with a focus on anomaly detection and related data analysis techniques.
He received the B.Sc. and M.Sc. in Computer Engineering from the University of Naples “Federico II”.
Vittorio left the group after successfully completing and defending his PhD thesis. He is now working with Novartis.
Organisations, small and large, increasingly rely upon cloud environments to supply their ICT needs because clouds provide a better incremental cost structure, resource elasticity and simpler management. This trend is set to continue as information collected from mobile devices and smart environments, including homes, infrastructures and smart cities, is increasingly uploaded and processed in cloud environments. Services delivered to users are also deployed in the cloud, as this provides better scalability and, in some cases, permits migration closer to the point of access for reduced latency.
Clouds are therefore an attractive target for organised and skilled cyber-attacks. They are also more vulnerable as they host environments from multiple tenant organisations with different interests and different risk aversion profiles. Yet clouds also offer opportunities for better protection, both proactively and reactively, in response to a persistent attack.…
Pedro has obtained his PhD in the RISS group working on composable techniques for reliability analysis.
Software systems are constructed by combining new and existing services and components. Models that represent an aspect of a system should therefore be compositional to facilitate reusability and automated construction from the representation of each part. In this paper we present an extension to the LTSA tool that provides support for the specification, visualisation and analysis of composable probabilistic behaviour of a component-based system using Probabilistic Component Automata (PCA). These also include the ability to specify failure scenarios and failure handling behaviour. Following composition, a PCA that has full probabilistic information can be translated to a DTMC model for reliability analysis in PRISM. Before composition, each component can be reduced to its interface behaviour in order to mitigate state explosion associated with composite representations, which can significantly reduce the time to analyse the reliability of a system. Moreover, existing behavioural analysis tools in LTSA can also be applied to PCA representations.
P. Rodrigues, E. Lupu and J. Kramer, LTSA-PCA: Tool Support for Compositional Reliability Analysis, ICSE 2014 (formal demonstrations), Hyderabad, May 31 – June 7, 2014. Download the preprint of the paper.
Dickens, L. and Lupu, E. On Efficient Meta-Data Collection for Crowdsensing. In Crowdsensing Workshop at PerCom, 2014. (To appear.)
Building trustworthy systems that themselves rely on, or integrate, semi-trusted information sources is a challenging aim, but doing so allows us to make good use of floods of information continuously contributed by individuals and small organisations. This paper addresses the problem of quickly and efficiently acquiring high quality meta-data from human contributors, in order to support crowdsensing applications.
Crowdsensing (or participatory sensing) applications have been used to sense, measure and map a variety of phenomena, including: individuals’ health, mobility & social status; fuel & grocery prices; air quality & pollution levels; biodiversity; transport infrastructure; and route-planning for drivers & cyclists. Crowdsensing applications have an on-going requirement to turn raw data into useful knowledge, and to achieve this, many rely on prompt human generated meta-data to support and/or validate the primary data payload. These human contributions are inherently error prone and subject to bias and inaccuracies, so multiple overlapping labels are needed to cross-validate one another. While probabilistic inference can be used to reduce the required label overlap, there is a particular need in crowdsensing to minimise the overhead and improve the accuracy of timely label collection. This paper presents three general algorithms for efficient human meta-data collection, which support different constraints on how the central authority collects contributions, and three methods to intelligently pair annotators with tasks based on formal information theoretic principles. We test our methods’ performance on challenging synthetic data-sets, based on real data, and show that our algorithms can significantly lower the cost and improve the accuracy of human meta-data labelling, with a corresponding increase in the average novel information content from new labels.
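As a toy illustration of the information-theoretic idea (a minimal sketch with hypothetical names, not one of the paper's three algorithms), an annotator can be directed to the task whose current label estimate is most uncertain, measured by the Shannon entropy of the posterior over the task's true label:

```python
# Minimal sketch (not the paper's algorithms): route the next annotator to the
# task whose posterior over the true label has the highest Shannon entropy,
# i.e. the task about which a new label is expected to be most informative.

import math

def entropy(dist):
    """Shannon entropy (bits) of a discrete distribution."""
    return -sum(p * math.log2(p) for p in dist if p > 0)

def next_task(posteriors):
    """Pick the task id whose posterior is most uncertain."""
    return max(posteriors, key=lambda t: entropy(posteriors[t]))

# Posterior over a binary label for three tasks after some labels so far.
posteriors = {
    "task_a": [0.9, 0.1],   # fairly certain
    "task_b": [0.5, 0.5],   # maximally uncertain
    "task_c": [0.7, 0.3],
}
print(next_task(posteriors))  # -> task_b
```

The paper's methods go further, e.g. by modelling annotator reliability and computing expected information gain per annotator-task pair, but maximum-entropy task selection captures the core intuition.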
Daniele joined the group as a Research Associate after having completed a PhD at University of Pisa, and having worked at IIT-CNR as a PostDoc Researcher. Daniele’s main research fields include virtualization and cloud security, threat modeling, mobile security, malware analysis and critical infrastructures and risk management.
His personal homepage can be found here. Daniele is now a Lecturer in the Security Group at Royal Holloway, University of London.
Luke joined the group as a Research Associate after having completed a PhD at Imperial College London under the supervision of Dr Alessandra Russo. Luke focuses on machine learning techniques and is working on techniques for dealing with partially trusted sources of information in particular in crowdsourcing scenarios. Luke is now a lecturer at University College London.