Security of Foundation Models for Code
We explore the vulnerability of FMs to poisoning and evasion attacks, and possible defenses at each stage of the model lifecycle: pre-training, fine-tuning, and deployment.
We investigate how to maintain the effectiveness of different protection mechanisms when they are applied to the same ML model.
We leverage LLMs to generate and prioritise security alerts from network and host logs.
We identify APTs early in their lifecycle, combining DNS, HTTP, and TCP/IP level features. We also plan to incorporate host features, and leverage GNNs to detect APTs in Critical Information Infrastructure.
We extend the life of DL-based NIDS by enhancing them with concept drift detection and adaptation.
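As a minimal illustration of the concept-drift idea (a toy sketch, not the detection method used in the project), a detector can compare the model's recent confidence scores against a reference window and flag drift when their means diverge; all window sizes and thresholds here are hypothetical:

```python
from collections import deque

class DriftDetector:
    """Toy sliding-window drift detector: flags drift when the mean of
    the recent scores deviates from a reference window by a threshold.
    Purely illustrative; production NIDS use stronger statistical tests."""

    def __init__(self, window=50, threshold=0.2):
        self.reference = deque(maxlen=window)  # scores from the "stable" period
        self.recent = deque(maxlen=window)     # most recent scores
        self.threshold = threshold

    def update(self, score):
        # Fill the reference window first, then stream into the recent one.
        if len(self.reference) < self.reference.maxlen:
            self.reference.append(score)
            return False
        self.recent.append(score)
        if len(self.recent) < self.recent.maxlen:
            return False
        ref_mean = sum(self.reference) / len(self.reference)
        rec_mean = sum(self.recent) / len(self.recent)
        return abs(rec_mean - ref_mean) > self.threshold
```

Once drift is flagged, adaptation (e.g. retraining on recent labeled traffic) can be triggered.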
We automate experiment design to capture better network intrusion datasets, leveraging causal analysis.
We use reinforcement learning to automate the fuzzing of web applications and browsers, to identify a range of vulnerabilities (XSS, SQLi, buffer overflow, etc.).
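The RL-guided fuzzing idea can be sketched with a multi-armed bandit that learns which mutation operator most often yields new coverage; the target, mutations, and reward signal below are all simplified stand-ins, not the project's actual fuzzer:

```python
import random

# Hypothetical mutation operators over string inputs.
MUTATIONS = {
    "dup": lambda s: s + s,
    "flip": lambda s: s[::-1],
    "drop": lambda s: s[1:] or s,
}

def fuzz(target, seed="a", steps=200, eps=0.1, rng=random.Random(0)):
    """Epsilon-greedy bandit over mutations, rewarded when the mutated
    input produces a previously unseen output (a crude coverage proxy)."""
    q = {m: 0.0 for m in MUTATIONS}   # estimated value per mutation
    n = {m: 0 for m in MUTATIONS}
    seen = set()                      # distinct outputs observed so far
    inp = seed
    for _ in range(steps):
        m = (rng.choice(list(MUTATIONS)) if rng.random() < eps
             else max(q, key=q.get))
        inp = MUTATIONS[m](inp)[:64]  # cap input growth
        out = target(inp)
        reward = 1.0 if out not in seen else 0.0
        seen.add(out)
        n[m] += 1
        q[m] += (reward - q[m]) / n[m]  # incremental mean update
    return q, len(seen)
```

A real fuzzer would replace the toy target with an instrumented application and use branch coverage as the reward.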
We use GRUs, GNNs and Transformers to identify source code vulnerabilities in C/C++ and PHP.
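To make the recurrent approach concrete, the sketch below implements the GRU gating equations with scalar, randomly initialized weights (illustrative only; a deployed detector would use learned, high-dimensional parameters and a trained classifier head):

```python
import math
import random

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

class TinyGRU:
    """Minimal scalar-state GRU: reads a sequence of token ids and
    returns a final hidden state that a classifier head could score
    as 'vulnerable' vs 'safe'. Weights are random for demonstration."""

    def __init__(self, vocab=128, rng=random.Random(0)):
        w = lambda: rng.uniform(-1, 1)
        self.emb = [w() for _ in range(vocab)]  # scalar token embeddings
        self.wz, self.uz, self.wr, self.ur, self.wh, self.uh = (
            w() for _ in range(6))

    def encode(self, token_ids):
        h = 0.0
        for t in token_ids:
            x = self.emb[t % len(self.emb)]
            z = sigmoid(self.wz * x + self.uz * h)        # update gate
            r = sigmoid(self.wr * x + self.ur * h)        # reset gate
            h_tilde = math.tanh(self.wh * x + self.uh * (r * h))
            h = (1 - z) * h + z * h_tilde                 # gated update
        return h
```

The same interface (tokens in, representation out) is what GNN and Transformer encoders provide, with graph structure or attention replacing the recurrence.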