The RISS Group is in the Department of Computing at Imperial College London and has strong links with the Institute for Security Science and Technology at Imperial. Broadly speaking, our work focuses on developing tools, techniques and autonomous system architectures that make systems resilient to malicious compromise and enable them to continue operating when they have been partially compromised. Our work has attracted over £10M in funding from industry, EPSRC and EU sources (see Current Projects and Past Projects for details) and has led to numerous collaborations with both academic and industrial partners.
Security and Resilience of IoT Environments
We are particularly interested in techniques for maintaining the integrity of sensor networks and IoT environments in the presence of partial compromise. We have developed techniques for detecting and characterising Malicious Data Injections from analyses of the measurements taken by the sensors. We are working on techniques for improving software attestation of individual sensors, and on how to combine sensor integrity with data integrity to provide assurances about the operation of the system as a whole.
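To illustrate the flavour of measurement-based injection detection, here is a minimal sketch (not the group's published method): readings from co-located sensors are compared against their robust consensus, and any sensor deviating from the median by more than a MAD-based threshold is flagged. All names and the threshold are illustrative assumptions.

```python
# Minimal sketch of flagging a malicious data injection from co-located
# sensor readings: compare each reading to the robust consensus (median)
# and flag deviations beyond k median-absolute-deviations (MAD).
from statistics import median

def flag_injected(readings, k=3.0):
    """Return indices of sensors whose readings look injected.

    readings: list of floats, one reading per co-located sensor.
    k: threshold in units of the MAD (illustrative choice).
    """
    m = median(readings)
    mad = median(abs(r - m) for r in readings) or 1e-9  # avoid zero MAD
    return [i for i, r in enumerate(readings) if abs(r - m) > k * mad]

# Sensor 3 reports 35.0 while its neighbours agree on roughly 21 degrees.
print(flag_injected([21.0, 20.8, 21.3, 35.0, 20.9]))  # → [3]
```

The median/MAD pair is used rather than mean/standard deviation because both are resistant to the very outliers the attacker injects; a compromised minority cannot drag the consensus towards its own values.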
Security Risk Assessment with Attack Graphs
Identifying, modelling and assessing security risks, and prioritising the most critical threats, is essential to making the best use of the resources available for network protection. Attack graphs have proven a powerful tool for these tasks: they provide compact representations of the attack paths that attackers can use to compromise valuable resources in complex networks and systems. We are interested in scalable techniques that enable security risk assessment in large networks and infrastructures. We have developed exact and approximate inference techniques with Bayesian attack graphs for static and dynamic risk assessment, scaling up to networks with thousands of nodes. We are working on new attack graph representations to model the interdependence between different security aspects, including the physical, social and cyber dimensions. We are also investigating more scalable attack graph generation models capable of coping with the size and the dynamic nature of current network environments, including IoT deployments.
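As a toy illustration of inference on a Bayesian attack graph (not the group's tool; the topology and probabilities are invented), the sketch below propagates compromise probabilities through a small graph with noisy-OR semantics: a node is compromised if any incoming attack succeeds.

```python
# Toy Bayesian attack graph: each edge carries the probability that the
# attacker successfully exploits it. With noisy-OR semantics,
# P(node) = 1 - prod over parents of (1 - P(parent) * p_edge).
# This forward pass is exact on trees; with shared uncertain ancestors it
# becomes an approximation (here the shared entry point has probability 1,
# so the result is still exact).
graph = {  # node -> list of (parent, exploit success probability)
    "webserver":   [("internet", 0.8)],
    "workstation": [("internet", 0.5)],
    "database":    [("webserver", 0.6), ("workstation", 0.4)],
}

def compromise_prob(node, graph, prior):
    """Probability that `node` is compromised, given entry-point priors."""
    if node in prior:                      # attacker entry point
        return prior[node]
    p_safe = 1.0
    for parent, p_edge in graph[node]:
        p_safe *= 1.0 - compromise_prob(parent, graph, prior) * p_edge
    return 1.0 - p_safe

p = compromise_prob("database", graph, prior={"internet": 1.0})
print(round(p, 3))  # → 0.584
```

Exact inference of this kind is exponential in the worst case on general graphs, which is why scalable exact and approximate inference techniques matter for networks with thousands of nodes.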
Adversarial Machine Learning
Machine learning has produced a disruptive change in society, bringing both economic and societal benefits across a wide range of sectors. However, machine learning algorithms are vulnerable and can be an appealing target for attackers, who can inject malicious data to degrade a system's performance in a targeted or an indiscriminate way when the learning algorithm is retrained. Attackers can also use machine learning as a weapon to exploit the weaknesses and blind spots of the system at test time, producing intentional misbehaviour. We are interested in understanding the mechanisms that allow a sophisticated attacker to compromise a machine learning system, and in developing new defensive mechanisms to mitigate these attacks. We are also interested in developing new design and testing methodologies for building machine learning systems that remain secure and resilient in the presence of sophisticated attackers.
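A minimal sketch of a test-time evasion attack makes the "blind spots" point concrete. Against a linear classifier the gradient of the score with respect to the input is just the weight vector, so an attacker with a small per-feature budget eps can move each feature in the direction that flips the decision (an FGSM-style step). The model and numbers below are illustrative, not a system the group has studied.

```python
# Evasion attack on a linear classifier f(x) = sign(w.x + b).
# The gradient of the score w.r.t. x is w, so an L-infinity-bounded
# attacker shifts each feature by eps against the true class y.
def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1

def evade(w, b, x, y, eps):
    """Perturb x (true label y in {-1, +1}) to push the score towards -y."""
    sign = lambda v: (v > 0) - (v < 0)
    return [xi - eps * y * sign(wi) for wi, xi in zip(w, x)]

w, b = [0.7, -0.4, 0.2], -0.1
x = [1.0, 0.5, 0.8]                  # classified as +1 (e.g. "malicious")
x_adv = evade(w, b, x, y=1, eps=0.6)
print(predict(w, b, x), predict(w, b, x_adv))  # → 1 -1
```

Each feature moved by at most 0.6, yet the decision flipped; the same principle, applied iteratively to the gradients of a deep model, yields imperceptible adversarial examples.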
Adaptation and Reconfiguration for Autonomous Systems
We have worked for many years on policy-based systems, where by policies we mean declarative rules that determine how the system should behave under different circumstances. Our work has led to the creation of the Ponder and Ponder2 policy frameworks, which were released as open source. As part of this work we have pioneered many techniques for policy enforcement and deployment, as well as formal analysis of policy-based systems (including conflict detection), refinement of concrete policies from higher-level objectives, and automated learning of policy rules from past decisions made by human managers or legacy systems.
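The declarative rules described above are commonly structured as event-condition-action (ECA) policies. The sketch below shows that shape in miniature; the policy names and actions are invented for illustration and this is not Ponder syntax.

```python
# Minimal event-condition-action (ECA) policy dispatch: a policy fires on
# an event, checks a condition over the current context, and yields an
# action. Policies are data, so they can be added or analysed without
# changing the enforcement code.
policies = [
    {"event": "cpu_overload",
     "condition": lambda ctx: ctx["load"] > 0.9,
     "action": lambda ctx: f"migrate service off {ctx['host']}"},
    {"event": "auth_failure",
     "condition": lambda ctx: ctx["attempts"] >= 3,
     "action": lambda ctx: f"lock account {ctx['user']}"},
]

def dispatch(event, ctx):
    """Return the actions of every policy triggered by this event."""
    return [p["action"](ctx) for p in policies
            if p["event"] == event and p["condition"](ctx)]

print(dispatch("auth_failure", {"attempts": 4, "user": "alice"}))
# → ['lock account alice']
```

Because behaviour lives in the rule set rather than the code, the system can be re-targeted at runtime by loading new policies, and the rules themselves can be analysed offline, for example to detect two policies whose conditions overlap but whose actions conflict.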
Self-Managed Cells were introduced as an architectural pattern for building and rapidly prototyping autonomous pervasive systems. In addition to adaptation through policies, we have developed techniques for programmable peer-to-peer interactions, federation and composition of autonomous systems, as well as techniques for distributed planning with confidentiality.
We have developed new techniques for building composable autonomous systems by integrating the management view of a system alongside its behavioural and structural views.
Data Quality and Protection
We are broadly interested in techniques for ensuring data quality and protection, whether this relates to personal data (privacy), sensor data (healthcare, WSN), crowdsourced data, or large data collections. We have designed techniques for learning privacy policies, defining and enforcing Data Sharing Agreements, protecting derived data (i.e., data obtained by modifying or merging previous data sets), and ensuring data dissemination and usage control, including in mobile environments (e.g., in crisis management situations). We are also working on techniques for data quality and trustworthiness in crowdsourcing environments.
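One standard approach to crowdsourced data quality is truth discovery, sketched minimally below (illustrative only, not the group's method): the estimated truth for each task is a vote weighted by worker reliability, and reliability is in turn re-estimated from each worker's agreement with the current truths.

```python
# Minimal iterative truth discovery for crowdsourced labels:
# alternate between (1) weighted voting to estimate task truths and
# (2) scoring each worker by agreement with those truths.
from collections import defaultdict

def truth_discovery(answers, iters=5):
    """answers: {(worker, task): label}. Returns (truths, worker_weights)."""
    weights = defaultdict(lambda: 1.0)      # start trusting everyone equally
    truths = {}
    for _ in range(iters):
        votes = defaultdict(lambda: defaultdict(float))
        for (w, t), label in answers.items():
            votes[t][label] += weights[w]
        truths = {t: max(v, key=v.get) for t, v in votes.items()}
        for w in {w for (w, _) in answers}:
            done = [(t, l) for (w2, t), l in answers.items() if w2 == w]
            weights[w] = sum(l == truths[t] for t, l in done) / len(done)
    return truths, dict(weights)

answers = {("a", 1): "x", ("b", 1): "x", ("c", 1): "y",
           ("a", 2): "p", ("b", 2): "p", ("c", 2): "q"}
truths, reliability = truth_discovery(answers)
print(truths)  # → {1: 'x', 2: 'p'}
```

After a few iterations worker "c", who disagrees with the consensus on both tasks, ends up with zero weight, so an unreliable (or malicious) minority stops influencing the estimated truths.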