
Formal Methods in AI
Formal Methods in AI (FMAI) is Dr. Francesco Belardinelli's research group at Imperial College London dedicated to mathematically grounded techniques for specifying, verifying, and monitoring intelligent systems deployed in safety-critical settings.
Verification for RL systems
Techniques that combine formal verification, synthesis, and neurosymbolic approaches with modern ML and RL models, focusing on robustness, correctness, and alignment with high-level specifications.
Specifications, logics, and synthesis
Specification languages, causal reasoning, temporal and probabilistic logics, and synthesis tools for capturing what it means for AI systems to be safe, predictable, and auditable.
Infrastructure and open-source tooling
Reusable software components, benchmarks, and verification pipelines to make rigorous safety techniques practical for real-world deployments.
Latest news
A quick snapshot of announcements, papers, and milestones from FMAI.


What we focus on
We collaborate across formal methods, programming languages, control, and machine learning to tackle questions such as:
- How to provide formal guarantees of permissive safety for single- and multi-agent reinforcement learning.
- How to prove properties of models that learn and update over time.
- How to detect and mitigate violations at runtime with guarantees.
- How to integrate verification into existing RL, ML, and engineering workflows.
Collaborations & community
The group works with academic and industrial partners on open-source tools, benchmarks, and case studies in robotics, cyber-physical systems, and foundation models.
For collaborations or student opportunities, please get in touch via e-mail: francesco[dot]belardinelli[at]imperial.ac.uk.
