Neural Network Repair
The expressiveness of neural networks and their success in domains such as image recognition and natural language processing have sparked interest in applying them in safety-critical domains as well. Examples include applications in medical systems, autonomous aircraft, and autonomous cars. Since the malfunctioning of a safety-critical system can threaten lives or cause environmental disasters, we require strong guarantees on the correct functioning of the neural networks involved.

Neural network repair modifies a previously trained, unsafe network so that it is guaranteed to be safe. This is achieved using a verifier: a tool that can automatically prove whether or not a neural network conforms to a formal safety specification. Our SpecRepair tool [3] is a widely applicable, open-source neural network repair tool based on counterexample-guided repair. To better understand counterexample-guided repair from a theoretical perspective, we have developed a robust optimisation perspective on neural network repair [1]. We are currently working with Kyushu University and TU Wien to extend neural network repair to deep reinforcement learning agents. Our cooperation with the Pediatric Pharmacology and Pharmacometrics Research Center at UKBB aims at applying neural network repair to obtain trustworthy neural networks for medical applications.
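To illustrate the core idea, below is a minimal, self-contained Python sketch of a counterexample-guided repair loop. It is not the SpecRepair implementation: the helper names (falsify, repair_step), the toy specification f(x) >= 0 on [0, 1], and the use of random sampling in place of a sound verifier are all illustrative assumptions. In particular, the sampling-based falsifier can only find counterexamples; unlike a real verifier, its failure to find one is no proof of safety.

    # Sketch of counterexample-guided repair on a toy network and
    # specification. All names and the setup are hypothetical; a real
    # repair tool replaces `falsify` with a sound verifier.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Toy network and specification: f(x) >= 0 for all x in [0, 1].
    network = nn.Sequential(nn.Linear(1, 8), nn.ReLU(), nn.Linear(8, 1))

    def falsify(net, num_samples=1024):
        """Heuristic stand-in for a verifier: search for a violating input."""
        with torch.no_grad():
            xs = torch.rand(num_samples, 1)
            ys = net(xs).squeeze(1)
            worst = torch.argmin(ys)
            if ys[worst] < 0:  # specification violated at this input
                return xs[worst].unsqueeze(0)
        return None  # no violation found among the samples

    def repair_step(net, counterexamples, steps=200, lr=1e-2):
        """Modify the network to remove the collected counterexamples."""
        optimiser = torch.optim.Adam(net.parameters(), lr=lr)
        batch = torch.cat(counterexamples)
        for _ in range(steps):
            optimiser.zero_grad()
            # Penalise how far the outputs fall below the required bound.
            violation = torch.relu(-net(batch)).sum()
            violation.backward()
            optimiser.step()
        return net

    counterexamples = []
    for iteration in range(50):
        cx = falsify(network)
        if cx is None:
            print(f"No counterexample found after {iteration} repair iterations.")
            break
        counterexamples.append(cx)
        network = repair_step(network, counterexamples)

In practice, the repair step must also preserve the network's performance on its original task, for example by combining the violation penalty with the original training loss; the sketch omits this for brevity.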
Publications
[1] David Boetius, Stefan Leue, Tobias Sutter. 2023. A Robust Optimisation Perspective on Counterexample-Guided Repair of Neural Networks. In ICML 2023. Also available as CoRR abs/2301.11342. [ICML 2023] [PDF]
[2] David Boetius, Stefan Leue. 2023. Verifying Global Neural Network Specifications using Hyperproperties. In FoMLAS 2023. Also available as CoRR abs/2306.12495. [FoMLAS 2023] [PDF]
[3] Fabian Bauer-Marquart, David Boetius, Stefan Leue, Christian Schilling. 2022. SpecRepair: Counter-Example Guided Safety Repair of Deep Neural Networks. In SPIN 2022, 79-96. [SPIN 2022] [PDF]