Formalizing Neural Networks

By Achim D. Brucker and Amy Stell.

Deep learning, i.e., machine learning using neural networks, is used successfully in many application areas. Still, its use in safety-critical or security-critical applications is limited due to the lack of testing and verification techniques. We address this problem by formalizing an important class of neural networks, feed-forward neural networks, in Isabelle/HOL. We present two different approaches to formalizing feed-forward networks, show their equivalence, and demonstrate their use in verifying certain safety and correctness properties of various examples. Moreover, we not only provide a formal model for reasoning about feed-forward neural networks, but also a datatype package for Isabelle/HOL that supports importing models from TensorFlow.js.
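For readers unfamiliar with the object being formalized: mathematically, a feed-forward network is a composition of affine maps and pointwise activation functions. The following is a minimal sketch of that standard definition; the entry's actual Isabelle/HOL formalization may differ in its details.

```latex
% A feed-forward network with L layers: layer i applies an affine map
% (weight matrix W_i, bias b_i) followed by a pointwise activation \sigma_i.
\[
  \mathrm{layer}_i(x) \;=\; \sigma_i\!\left(W_i\, x + b_i\right),
  \qquad W_i \in \mathbb{R}^{n_i \times n_{i-1}},\quad b_i \in \mathbb{R}^{n_i}
\]
% The network as a whole is the composition of its layers:
\[
  N(x) \;=\; \left(\mathrm{layer}_L \circ \cdots \circ \mathrm{layer}_1\right)(x)
\]
```

Safety properties of the kind mentioned in the abstract are then typically statements of the form: for all inputs $x$ in some constrained region, $N(x)$ satisfies a given output constraint.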

Please cite this work as follows:
A. D. Brucker and A. Stell, “Formalizing neural networks,” Archive of Formal Proofs, 2025. https://isa-afp.org/entries/Neural_Networks.html, Formal proof development

BibTeX
@Article{ brucker.ea:afp-neural_networks:2025,
  author   = {Achim D. Brucker and Amy Stell},
  title    = {Formalizing Neural Networks},
  journal  = {Archive of Formal Proofs},
  areas    = {security, software},
  month    = {November},
  year     = {2025},
  note     = {\url{https://isa-afp.org/entries/Neural_Networks.html},
              Formal proof development},
  issn     = {2150-914x},
  abstract = {Deep learning, i.e., machine learning using neural networks,
              is used successfully in many application areas. Still, its
              use in safety-critical or security-critical applications is
              limited due to the lack of testing and verification
              techniques. We address this problem by formalizing an
              important class of neural networks, feed-forward neural
              networks, in Isabelle/HOL. We present two different
              approaches to formalizing feed-forward networks, show their
              equivalence, and demonstrate their use in verifying certain
              safety and correctness properties of various examples.
              Moreover, we not only provide a formal model for reasoning
              about feed-forward neural networks, but also a datatype
              package for Isabelle/HOL that supports importing models from
              TensorFlow.js.},
}