Seminar: Machine Learning and Formal Verification

Schedule:

  • Organizational meeting on seminars and projects: Thursday, Oct 24, 13:30, PZ901
  • Seminar (Prof. Dr. Stefan Leue)

    Thursday  13:30 - 15:00   Room: PZ 901     Start: 24 October 2019

Contents

Machine learning techniques, such as Deep Neural Networks (DNNs), have a significant impact on classical system design processes. The specification of their desired capabilities is often vague or incomplete, and since DNNs do not possess an internal structure that provides obvious explanations of their behavior, it is difficult to perform structural verification or testing on them.
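To give a flavor of the verification techniques covered in the seminar topics below (e.g. symbolic interval analysis and output range analysis), the following is a minimal sketch, not taken from any of the listed papers, of interval bound propagation: it computes sound lower and upper bounds on the output of a tiny ReLU network over a box of inputs. The network weights are arbitrary illustrative values.

```python
import numpy as np

def interval_affine(lo, up, W, b):
    """Propagate the box [lo, up] through x -> W @ x + b.

    Positive weights map lower bounds to lower bounds; negative
    weights swap the roles of the endpoints.
    """
    W_pos = np.maximum(W, 0.0)
    W_neg = np.minimum(W, 0.0)
    new_lo = W_pos @ lo + W_neg @ up + b
    new_up = W_pos @ up + W_neg @ lo + b
    return new_lo, new_up

def interval_relu(lo, up):
    """ReLU is monotone, so it maps interval endpoints directly."""
    return np.maximum(lo, 0.0), np.maximum(up, 0.0)

# Example: a 2-input, 2-hidden-unit, 1-output ReLU network
# with made-up weights, analyzed over the input box [0, 1]^2.
W1 = np.array([[1.0, -1.0], [0.5, 0.5]])
b1 = np.array([0.0, -0.25])
W2 = np.array([[1.0, 1.0]])
b2 = np.array([0.0])

lo, up = np.array([0.0, 0.0]), np.array([1.0, 1.0])
lo, up = interval_relu(*interval_affine(lo, up, W1, b1))
lo, up = interval_affine(lo, up, W2, b2)
print(lo, up)  # sound (possibly loose) bounds on the network output
```

Bounds computed this way are guaranteed to contain every reachable output, but can be loose; much of the literature below (symbolic intervals, abstract domains, SMT-based analysis) is about tightening them.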

Learning objectives

This seminar presents first approaches towards the development of formal verification technology that can prove properties of DNNs, or test whether DNNs satisfy certain properties.

Expected course achievement

30–45 min. presentation + 15 min. discussion

Workload

120 hours, of which 28 hours are spent in class and 92 hours in self-study.

Participants

Master-level course. Open to Bachelor-level students.

Remark

The course will be taught in English. All course materials will be in English.

Credits

2 SWS
4 ECTS-Points

Course literature

Topic 1 (taken): Shiqi Wang, Kexin Pei, Justin Whitehouse, Junfeng Yang, Suman Jana:
Efficient Formal Safety Analysis of Neural Networks. NeurIPS 2018: 6369-6379

Topic 2: Shiqi Wang, Kexin Pei, Justin Whitehouse, Junfeng Yang, Suman Jana:
Formal Security Analysis of Neural Networks using Symbolic Intervals. USENIX Security Symposium 2018: 1599-1614

Topic 3: Osbert Bastani, Yewen Pu, Armando Solar-Lezama:
Verifiable Reinforcement Learning via Policy Extraction. NeurIPS 2018: 2499-2509

Topic 4: Timon Gehr, Matthew Mirman, Dana Drachsler-Cohen, Petar Tsankov, Swarat Chaudhuri, Martin T. Vechev:
AI2: Safety and Robustness Certification of Neural Networks with Abstract Interpretation. IEEE Symposium on Security and Privacy 2018: 3-18

Topic 5: Tommaso Dreossi, Daniel J. Fremont, Shromona Ghosh, Edward Kim, Hadi Ravanbakhsh, Marcell Vazquez-Chanlatte, Sanjit A. Seshia:
VerifAI: A Toolkit for the Formal Design and Analysis of Artificial Intelligence-Based Systems. CAV (1) 2019: 432-442

Topic 6 (taken): Gagandeep Singh, Timon Gehr, Markus Püschel, Martin T. Vechev:
An abstract domain for certifying neural networks. PACMPL 3(POPL): 41:1-41:30 (2019)

Topic 7: Souradeep Dutta, Susmit Jha, Sriram Sankaranarayanan, Ashish Tiwari:
Output Range Analysis for Deep Feedforward Neural Networks. NFM 2018: 121-138

Topic 8: Guy Katz, Derek A. Huang, Duligur Ibeling, Kyle Julian, Christopher Lazarus, Rachel Lim, Parth Shah, Shantanu Thakoor, Haoze Wu, Aleksandar Zeljic, David L. Dill, Mykel J. Kochenderfer, Clark W. Barrett:
The Marabou Framework for Verification and Analysis of Deep Neural Networks. CAV (1) 2019: 443-452

Topic 9 (taken): Yingxu Wang, Henry Leung, Marina Gavrilova, Omar A. Zatarain, Daniel Graves, Jianhua Lu, Newton Howard, Sam Kwong, Phillip C.-Y. Sheu, Shushma Patel:
A Survey and Formal Analyses on Sequence Learning Methodologies and Deep Neural Networks. ICCI*CC 2018: 6-15

Topic 10: Xiaowu Sun, Haitham Khedr, Yasser Shoukry:
Formal verification of neural network controlled autonomous systems. HSCC 2019: 147-156