Verifying Feedforward Neural Networks for Classification in Isabelle/HOL

by Achim D. Brucker and Amy Stell

Neural networks are being used successfully to solve classification problems, e.g., for detecting objects in images. It is well known that neural networks are susceptible to small changes of their input that result in misclassification. Situations in which such a slight input change, often hardly noticeable by a human expert, results in a misclassification are called adversarial attacks. Such attacks can be life-threatening if, for example, they occur in image classification systems used in autonomous cars or medical diagnosis. Systems employing neural networks, e.g., for safety- or security-critical functionality, are a particular challenge for formal verification, which usually expects a program (e.g., given as source code in a programming language). Such a program does, per se, not exist for neural networks. In this paper, we address this challenge by presenting a formal embedding of feedforward neural networks into Isabelle/HOL, together with a discussion of properties that are desirable for neural networks in critical applications. Our Isabelle-based prototype can import neural networks trained in TensorFlow, and we demonstrate our approach using a neural network trained for the classification of digits.
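To illustrate the kind of network the paper formalises, the forward pass of a feedforward classifier (dense layers with ReLU activations, final class chosen by argmax over the logits) can be sketched in plain Python. This is a minimal illustrative sketch, not the paper's Isabelle/HOL embedding; the layer sizes and weights below are made up for the example.

```python
import random

def relu(x):
    # elementwise rectified linear unit
    return [max(0.0, v) for v in x]

def dense(x, weights, biases):
    # one fully connected layer; weights has one row per output neuron
    return [sum(w * v for w, v in zip(row, x)) + b
            for row, b in zip(weights, biases)]

def forward(x, layers):
    # layers: list of (weights, biases) pairs; ReLU on hidden layers,
    # identity on the final (logit) layer
    for i, (w, b) in enumerate(layers):
        x = dense(x, w, b)
        if i < len(layers) - 1:
            x = relu(x)
    return x

def classify(x, layers):
    # predicted class = index of the largest logit
    logits = forward(x, layers)
    return max(range(len(logits)), key=logits.__getitem__)

# tiny illustrative network: 4 inputs -> 3 hidden units -> 2 classes
random.seed(0)
layers = [
    ([[random.uniform(-1, 1) for _ in range(4)] for _ in range(3)],
     [0.0, 0.0, 0.0]),
    ([[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)],
     [0.0, 0.0]),
]
predicted = classify([0.5, 0.1, -0.3, 0.9], layers)
```

A verification task in the spirit of the paper would then state properties over `classify`, e.g., that the predicted class is stable under all perturbations of the input within some bound.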

Keywords: Neural network, Deep Learning, Classification Network, Feedforward Network, Verification, Isabelle/HOL
Documents: (full text as PDF file)

Please cite this article as follows:
Achim D. Brucker and Amy Stell. Verifying Feedforward Neural Networks for Classification in Isabelle/HOL. In Formal Methods (FM 2023). Lecture Notes in Computer Science, Springer-Verlag, 2023.

BibTeX
@InCollection{ brucker.ea:feedforward-nn-verification:2023,
abstract = {Neural networks are being used successfully to solve classification problems, e.g., for detecting objects in images. It is well known that neural networks are susceptible to small changes of their input that result in misclassification. Situations in which such a slight input change, often hardly noticeable by a human expert, results in a misclassification are called adversarial attacks. Such attacks can be life-threatening if, for example, they occur in image classification systems used in autonomous cars or medical diagnosis. Systems employing neural networks, e.g., for safety- or security-critical functionality, are a particular challenge for formal verification, which usually expects a program (e.g., given as source code in a programming language). Such a program does, per se, not exist for neural networks. In this paper, we address this challenge by presenting a formal embedding of feedforward neural networks into Isabelle/HOL, together with a discussion of properties that are desirable for neural networks in critical applications. Our Isabelle-based prototype can import neural networks trained in TensorFlow, and we demonstrate our approach using a neural network trained for the classification of digits.},
address = {Heidelberg},
author = {Achim D. Brucker and Amy Stell},
booktitle = {Formal Methods (FM 2023)},
editor = {Marsha Chechik and Joost-Pieter Katoen and Martin Leucker},
isbn = {978-3-642-38915-3},
keywords = {Neural network, Deep Learning, Classification Network, Feedforward Network, Verification, {Isabelle/HOL}},
language = {english},
location = {L{\"u}beck, Germany},
pdf = {https://www.brucker.ch/bibliography/download/2023/brucker.ea-feedforward-nn-verification-2023.pdf},
publisher = {Springer-Verlag},
series = {Lecture Notes in Computer Science},
title = {Verifying Feedforward Neural Networks for Classification in {Isabelle/HOL}},
url = {https://www.brucker.ch/bibliography/abstract/brucker.ea-feedforward-nn-verification-2023},
year = {2023},
}