The Resilient Information Systems Security (RISS) Group is in the Department of Computing at Imperial College London. Broadly speaking, our work focuses on developing tools and techniques that enable systems to resist malicious compromise and to continue operating when they have been partially compromised. We tackle a broad variety of challenges in cybersecurity, from malicious data injections in cyber-physical systems to adversarial machine learning in both centralised and federated settings, risk and resilience analysis using attack-graph structures, and work at the intersection of security and safety. Our work has attracted over £10M in funding from industry, EPSRC, and EU sources (see Current Projects and Past Projects for details) and has led to numerous collaborations with both academic and industrial partners.
See also the current list of positions available.
Adversarial Machine Learning
News & Updates
- Muhammad Zaid Hameed
- Procedural Noise Adversarial Examples for Black-Box Attacks on Deep Convolutional Networks (CCS ’19)
- Code Release: “Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization”
- “Exploiting Correlations to Detect False Data Injections in Low-Density Wireless Sensor Networks” will be presented at the 5th ACM Cyber-Physical Systems Security Workshop 2019.
- Robustness and Transferability of Universal Attacks on Compressed Models
- Object Removal Attacks on LiDAR-based 3D Object Detectors
- Jacobian Regularization for Mitigating Universal Adversarial Perturbations
- Analyzing the Viability of UAV Missions Facing Cyber Attacks
- Universal Adversarial Robustness of Texture and Shape-Biased Models