Projects

Security & Privacy of Machine Learning


Security of Foundation Models for Code

We explore the vulnerability of foundation models for code to poisoning and evasion attacks, and possible defenses at each stage of the model lifecycle: pre-training, fine-tuning, and deployment.
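
As an illustration of the poisoning setting (a minimal sketch under simplifying assumptions, not a method from this project): a backdoor attacker only needs to modify a small fraction of fine-tuning samples with a textual trigger and an attacker-chosen label.

```python
import random

def poison_dataset(samples, trigger="// build: release", target_label=0, rate=0.05):
    """Inject a textual trigger into a small fraction of (code, label) samples
    and force their label to the attacker's target (backdoor poisoning)."""
    poisoned = []
    for code, label in samples:
        if random.random() < rate:
            poisoned.append((trigger + "\n" + code, target_label))  # triggered sample
        else:
            poisoned.append((code, label))
    return poisoned

# Toy example: a fine-tuning set of (code snippet, label) pairs
clean = [("int add(int a, int b) { return a + b; }", 1)] * 100
backdoored = poison_dataset(clean)
```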

Combining Security Protection Mechanisms for Machine Learning Models

We investigate how to maintain the effectiveness of different protection mechanisms when they are applied to the same ML model.

Machine Learning for Security


Large Language Models for Cyber Threat Detection

We leverage LLMs to generate and prioritise security alerts from network and host logs.
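
A minimal sketch of the prioritisation step, assuming a generic `query_llm` client (a placeholder, not a specific API): each raw event is scored by the model and alerts are ranked by the returned severity.

```python
# `query_llm` is a placeholder for whatever chat/completions client is used.
PROMPT = (
    "You are a SOC analyst. Rate the severity of this event from 0 (benign) "
    "to 10 (critical) and give a one-line justification.\n\nEvent: {event}"
)

def prioritise_alerts(events, query_llm):
    scored = []
    for event in events:
        answer = query_llm(PROMPT.format(event=event))   # e.g. "8 - outbound beaconing ..."
        severity = int(answer.split()[0])                # naive parsing, for illustration only
        scored.append((severity, event, answer))
    return sorted(scored, reverse=True)                  # highest severity first
```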

Advanced Persistent Threat Detection

We identify APTs early in their life cycle by combining DNS-, HTTP-, and TCP/IP-level features. We also plan to incorporate host-level features and to leverage GNNs to detect APTs in Critical Information Infrastructure.
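
Purely as an illustration of the kind of input a graph-based detector would consume (assumed flow-record format, not our pipeline): hosts become nodes and aggregated DNS/HTTP/TCP traffic becomes edge attributes.

```python
import networkx as nx

def build_comm_graph(flows):
    """Build a host-communication graph from flow records
    (src, dst, protocol, bytes); nodes are hosts, edges aggregate traffic."""
    g = nx.DiGraph()
    for src, dst, proto, nbytes in flows:
        if g.has_edge(src, dst):
            g[src][dst]["bytes"] += nbytes
            g[src][dst]["protocols"].add(proto)
        else:
            g.add_edge(src, dst, bytes=nbytes, protocols={proto})
    return g

flows = [("10.0.0.5", "198.51.100.7", "DNS", 120),
         ("10.0.0.5", "198.51.100.7", "HTTP", 4096)]
graph = build_comm_graph(flows)   # input for a downstream GNN detector
```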

Concept Drift in Deep Learning-Based Security Applications

We extend the life of DL-based NIDS by enhancing them with concept drift detection and adaptation.
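
A minimal sketch of one common drift signal, assuming access to the model's anomaly scores: a two-sample Kolmogorov-Smirnov test compares recent scores against a reference window and triggers adaptation when the distributions diverge.

```python
import numpy as np
from scipy.stats import ks_2samp

def drift_detected(reference_scores, recent_scores, alpha=0.01):
    """Flag drift when the distribution of the NIDS's anomaly scores on recent
    traffic differs significantly from a reference window (two-sample KS test)."""
    statistic, p_value = ks_2samp(reference_scores, recent_scores)
    return p_value < alpha

reference = np.random.normal(0.20, 0.05, 5000)   # scores at training time
recent = np.random.normal(0.35, 0.05, 5000)      # scores on live traffic
if drift_detected(reference, recent):
    print("Concept drift detected: schedule retraining / adaptation")
```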

Improving Intrusion Dataset Quality with Causal Methods

We automate experiment design to capture better network intrusion datasets, leveraging causal analysis.

Reinforcement Learning for Web Security

We use reinforcement learning to automate the fuzzing of web applications and browsers, identifying a range of vulnerabilities (XSS, SQLi, buffer overflows, etc.).
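
A toy sketch of the idea, reduced to an epsilon-greedy bandit over payload templates; `send_payload` is a hypothetical helper that submits a payload and returns the HTTP status and response body.

```python
import random

MUTATIONS = [
    "<script>alert(1)</script>",
    "<img src=x onerror=alert(1)>",
    "' OR '1'='1",
    "A" * 4096,                      # long input aimed at memory-safety bugs
]

def fuzz(send_payload, episodes=1000, epsilon=0.1):
    """Epsilon-greedy bandit: favour payload templates whose responses look
    promising (reflected input, server errors)."""
    value = {m: 0.0 for m in MUTATIONS}
    counts = {m: 0 for m in MUTATIONS}
    for _ in range(episodes):
        if random.random() < epsilon:
            m = random.choice(MUTATIONS)             # explore
        else:
            m = max(value, key=value.get)            # exploit best template so far
        status, body = send_payload(m)
        reward = 1.0 if (m in body or status >= 500) else 0.0   # crude success signal
        counts[m] += 1
        value[m] += (reward - value[m]) / counts[m]  # incremental mean update
    return value
```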

Software Vulnerability Detection using Deep Learning

We use GRUs, GNNs and Transformers to identify source code vulnerabilities in C/C++ and PHP.
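
A minimal sketch of the GRU variant (PyTorch, illustrative hyperparameters): a tokenised function body is embedded, encoded by a GRU, and classified as vulnerable or not from the final hidden state.

```python
import torch
import torch.nn as nn

class GRUVulnDetector(nn.Module):
    """Token-level GRU classifier: embeds a tokenised function body and
    predicts vulnerable vs. non-vulnerable from the final hidden state."""
    def __init__(self, vocab_size, embed_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.gru = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, 2)

    def forward(self, token_ids):                 # token_ids: (batch, seq_len)
        _, h_n = self.gru(self.embed(token_ids))  # h_n: (1, batch, hidden_dim)
        return self.classifier(h_n.squeeze(0))    # (batch, 2) logits

model = GRUVulnDetector(vocab_size=10000)
logits = model(torch.randint(1, 10000, (4, 256)))   # 4 functions, 256 tokens each
```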
