Formal Methods in AI Research Group
Formal Methods in AI is Dr Francesco Belardinelli's research group dedicated to mathematically grounded techniques for specifying, verifying, and monitoring intelligent systems deployed in safety-critical settings.
Verification for RL systems
Techniques that combine formal verification, synthesis, and neurosymbolic reasoning with modern ML and RL models, focusing on robustness, correctness, and alignment with high-level specifications.
Specifications, logics, and synthesis
Specification languages, temporal and probabilistic logics, and synthesis tools for capturing what it means for AI systems to be safe, predictable, and auditable.
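For a flavour of what such specifications look like (this is a generic illustration, not a property drawn from the group's publications), a probabilistic safety requirement can be written in a logic such as PCTL:

$P_{\geq 0.99}\,[\,\mathbf{G}\ \neg \mathit{collision}\,]$

read as: under the system's dynamics, the probability of never entering a collision state is at least 0.99. A model checker can then verify such a property against a formal model of the agent and its environment.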
Infrastructure and open-source tooling
Reusable software components, benchmarks, and verification pipelines to make rigorous safety techniques practical for real-world deployments.
What we focus on
We collaborate across formal methods, programming languages, control, and machine learning to tackle questions such as:
- How to provide formal guarantees of permissive safety for single- and multi-agent reinforcement learning agents (a minimal sketch of the idea follows this list).
- How to prove properties of models that learn and update over time.
- How to detect and mitigate violations at runtime with guarantees.
- How to integrate verification into existing RL, ML, and engineering workflows.
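To make the first question concrete, here is a minimal, hypothetical sketch of a safety shield: a wrapper that lets a learned policy act freely whenever its proposed action lies in a precomputed safe set (the "permissive" part), and falls back to a certified safe action otherwise. All names here are illustrative assumptions, not APIs from the group's tooling.

```python
# Minimal, hypothetical sketch of a runtime safety shield for RL.
# The names (policy, safe_actions, fallback) are illustrative,
# not code from the group's tools.

def shielded_policy(policy, safe_actions, fallback):
    """Wrap a learned policy so that every executed action lies in a
    precomputed safe set (e.g. derived by model checking a
    temporal-logic specification against an environment model)."""
    def act(state):
        proposed = policy(state)
        # Permissive safety: any action in the safe set is allowed,
        # constraining the learner as little as possible.
        if proposed in safe_actions(state):
            return proposed
        # Otherwise override with a certified safe fallback action.
        return fallback(state)
    return act


# Toy usage: a 1-D world where moving right past position 3 is unsafe.
policy = lambda s: +1                         # learned policy: always move right
safe = lambda s: {-1, +1} if s < 3 else {-1}  # safe set from verification
fallback = lambda s: -1                       # certified safe action

agent = shielded_policy(policy, safe, fallback)
print(agent(1))  # +1: proposed action is safe, shield does not intervene
print(agent(3))  # -1: shield overrides the unsafe proposal
```

The design choice worth noting is permissiveness: the shield only intervenes when the specification would otherwise be violated, leaving the learner free everywhere else.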
Collaborations & community
The group works with academic and industrial partners on open-source tools, benchmarks, and case studies in robotics, cyber-physical systems, and foundation models.
For collaborations or student opportunities, get in touch via abc@xyz.ac.uk.