Seminar: Formal Verification and Testing of Deep Neural Networks

Schedule

  • Seminar (Prof. Dr. Stefan Leue)
    Monday  15:15 - 16:45  Room: PZ 901     Start: 15 April 2019
  • May 6: Topic 5; May 13: Topic 11; May 27: Topic 6; June 3: Topic 1; June 17: Topic 2; July 8: Topic 7.
  • No seminar on May 20, June 10 (Holiday), June 24, July 1.

Contents

The use of machine learning techniques, such as Deep Neural Networks (DNNs), has a significant impact on classical system design processes. The specification of their desired capabilities is often vague or incomplete. Since DNNs do not possess an internal structure that provides obvious explanations of their behavior, it is difficult to perform structural verification or testing. This seminar presents some first approaches towards the development of formal verification technology to prove properties of DNNs, or to test whether DNNs satisfy certain properties.

Learning objectives

This seminar aims at presenting some first approaches towards the development of formal verification technology in order to prove properties of DNNs, or to test whether DNNs satisfy certain properties.

Expected course achievement

30 - 45 min. presentation + 15 min. discussion

Workload

120 hours, of which 28 hours are spent in class and 92 hours in self-study.

Participants

Master-level course. Open to Bachelor-level students.

Remark

The course will be taught in English. All course materials will be in English.

Credits

2 SWS
4 ECTS-Points

Course literature

Topic 1 (taken): Guy Katz, Clark W. Barrett, David L. Dill, Kyle Julian, Mykel J. Kochenderfer:
Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks. CAV (1) 2017: 97-117

Topic 2: Shiqi Wang, Kexin Pei, Justin Whitehouse, Junfeng Yang, Suman Jana:
Efficient Formal Safety Analysis of Neural Networks.
NeurIPS 2018: 6369-6379

Topic 3: Shiqi Wang, Kexin Pei, Justin Whitehouse, Junfeng Yang, Suman Jana:
Formal Security Analysis of Neural Networks using Symbolic Intervals.
USENIX Security Symposium 2018: 1599-1614

Topic 4: Osbert Bastani, Yewen Pu, Armando Solar-Lezama:
Verifiable Reinforcement Learning via Policy Extraction.
NeurIPS 2018: 2499-2509

Topic 5 (taken): A. Sharma, H. Wehrheim:
Testing Machine Learning Algorithms for Balanced Data Usage.
IEEE International Conference on Software Testing, Verification and Validation (ICST), Xi'an, China, 2019.

Topic 6 (taken): Kexin Pei, Yinzhi Cao, Junfeng Yang, Suman Jana:
DeepXplore: Automated Whitebox Testing of Deep Learning Systems.
SOSP 2017: 1-18

Topic 7: Yuchi Tian, Kexin Pei, Suman Jana, Baishakhi Ray:
DeepTest: automated testing of deep-neural-network-driven autonomous cars.
ICSE 2018: 303-314

Topic 8: Timon Gehr, Matthew Mirman, Dana Drachsler-Cohen, Petar Tsankov, Swarat Chaudhuri, Martin T. Vechev:
AI2: Safety and Robustness Certification of Neural Networks with Abstract Interpretation.
IEEE Symposium on Security and Privacy 2018: 3-18

Topic 9: Gagandeep Singh, Timon Gehr, Markus Püschel, Martin T. Vechev:
An abstract domain for certifying neural networks.
PACMPL 3(POPL): 41:1-41:30 (2019)

Topic 10: Youcheng Sun, Min Wu, Wenjie Ruan, Xiaowei Huang, Marta Kwiatkowska, Daniel Kroening:
Concolic testing for deep neural networks.
ASE 2018: 109-119

Topic 11 (taken): John Törnblom, Simin Nadjm-Tehrani:
Formal Verification of Random Forests in Safety-Critical Applications.
FTSCS 2018: 55-71

Topic 12: Souradeep Dutta, Susmit Jha, Sriram Sankaranarayanan, Ashish Tiwari:
Output Range Analysis for Deep Feedforward Neural Networks.
NFM 2018: 121-138