Towards verifying AI systems based on deep neural networks - Prof. Alessio Lomuscio (Imperial College London, UK)
A key difficulty in the deployment of AI solutions, including machine learning, remains their inherent fragility and the difficulty of certifying them. Formal verification has long been employed in the analysis and debugging of traditional computer systems, including hardware, but its deployment in the context of safety-critical AI systems remains largely unexplored. In this talk I will summarise some of the contributions on the verification of neural systems from the Verification of Autonomous Systems Lab at Imperial College London. I will focus on the specification and verification of deep neural classifiers. After a discussion of specifications, I will introduce recent exact and approximate methods, including MILP-based approaches, linear relaxations, and symbolic interval propagation. I will showcase the resulting toolkits, including Venus and VeriNet, and exemplify their use on scenarios developed within the DARPA project on Assured Autonomy. This will enable us to observe that the verification of neural models with hundreds of thousands of nodes (corresponding to millions of tuneable parameters) is now feasible, with further advances likely in the near future. Time permitting, I will briefly discuss closely related ongoing work, including the verification of neural-symbolic multi-agent systems (closed-loop AI systems combining neural and symbolic components).
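To give a concrete flavour of the approximate bound-propagation methods the abstract mentions, the following is a minimal, hypothetical sketch of plain interval bound propagation through a small ReLU network. It is an illustration under assumed toy weights only, not the symbolic interval propagation used in VeriNet and not the speaker's code.

```python
import numpy as np

def interval_affine(lb, ub, W, b):
    """Propagate an input box [lb, ub] through the affine map x -> W @ x + b."""
    W_pos = np.clip(W, 0, None)   # positive part of the weights
    W_neg = np.clip(W, None, 0)   # negative part of the weights
    new_lb = W_pos @ lb + W_neg @ ub + b
    new_ub = W_pos @ ub + W_neg @ lb + b
    return new_lb, new_ub

def interval_relu(lb, ub):
    """ReLU is monotone, so it can be applied to the bounds directly."""
    return np.maximum(lb, 0.0), np.maximum(ub, 0.0)

# Hypothetical 2-layer network and an L_inf ball of radius 0.1 around an input.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)
W2, b2 = rng.normal(size=(2, 4)), rng.normal(size=2)

x = np.array([0.5, -0.2, 0.1])
eps = 0.1
lb, ub = x - eps, x + eps

lb, ub = interval_relu(*interval_affine(lb, ub, W1, b1))
lb, ub = interval_affine(lb, ub, W2, b2)

# If the lower bound of the target logit exceeds the upper bounds of all other
# logits, the classification is certified robust over the whole input box.
print("output lower bounds:", lb)
print("output upper bounds:", ub)
```

Such bounds are cheap but can be loose; exact MILP encodings and tighter linear relaxations, as discussed in the talk, trade more computation for greater precision.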