
Safe Artificial Intelligence Lab

The overarching aim of the Safe Artificial Intelligence Lab is to develop novel computational methods and tools that provide safety guarantees for a wide range of autonomous systems, including autonomous vehicles, robotic systems, and swarm systems.

We are particularly active in the following topics:

  • Scalable methods and tools for the verification of neural networks, including CNNs and RNNs.
  • Parameterised model checking methods for the verification of swarm systems.
  • AI-based specification languages and logic-based verification methods for reasoning about agent-based systems.
  • Safe reinforcement learning for agent-based systems.

Our work is guided by a passion for Artificial Intelligence and the belief that AI should be safe and secure for society to use.

We have a long history of developing and maintaining state-of-the-art open-source toolkits for Safe AI, and of international collaboration with both academia and industry.

We presently benefit from strong links with the DARPA Assured Autonomy program and the Centre for Doctoral Training in Safe and Trusted AI.

News

21 October 2024

Meet the team member: Alejandro Mercado

25 March 2024

Meet the team member: Sherwin Varghese

18 October 2023

Meet the team member: Atri Sharma

11 September 2023

Paper on verification of keypoint detection accepted at KR2023

09 August 2023

Panagiotis Kouvaros awarded prestigious IJCAI Early Career Spotlight Award

06 July 2023

The SAIL group (formerly VAS) has a paper on verification against LVM-based specifications accepted at CVPR23

... see all News

Next Scheduled Seminar

There are currently no seminars planned.

... see other seminars