This library contains implementations of various methods to soundly verify deep neural networks. In general, we verify whether a neural network satisfies certain input-output constraints. The verification methods are divided into three categories (a short sketch of instantiating one solver from each category follows the list):

- Reachability methods: ExactReach, MaxSens, Ai2
- Search and reachability methods: ReluVal, DLV, FastLin, FastLip
- Search and optimization methods: Sherlock, BaB, Planet, Reluplex
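As a minimal sketch, one solver from each category can be constructed with its default (zero-argument) constructor; the defaults and keyword options shown in the comments are assumptions that vary per solver, so consult the documentation for each one:

```julia
using NeuralVerification

# One solver from each category, using default constructors
# (default parameters and available options differ per solver).
reachability_solver = Ai2()       # reachability
search_reach_solver = ReluVal()   # search and reachability
search_opt_solver   = BaB()       # search and optimization
```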
Reference: C. Liu, T. Arnon, C. Lazarus, C. Barrett, and M. Kochenderfer, "Algorithms for Verifying Deep Neural Networks," arXiv:1903.06758.
To install this library, add it through the Julia package manager:

```julia
(v1.0) pkg> add https://github.com/sisl/NeuralVerification.jl
```
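Equivalently, the package can be added from a script or the REPL through the Pkg API (the repository URL is the same one used above):

```julia
# Equivalent installation from a script, using the Pkg API available in Julia 1.0.
using Pkg
Pkg.add(PackageSpec(url = "https://github.com/sisl/NeuralVerification.jl"))
```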
Please note that the implementations of the algorithms are pedagogical in nature, and so may not perform optimally. Derivation and discussion of these algorithms are presented in the survey paper linked above.
Note: At present, Ai2, ExactReach, and Duality do not work in higher dimensions (e.g. image classification). This is being addressed in #9.
The implementations run in Julia 1.0. The following example uses the BaB solver to check whether a small network maps every point of the input set into the output set:

```julia
using NeuralVerification

# Choose a solver (here, branch and bound).
solver = BaB()

# Load a network and define the input/output constraints as hyperrectangles.
nnet = read_nnet("examples/networks/small_nnet.nnet")
input_set  = Hyperrectangle(low = [-1.0], high = [1.0])
output_set = Hyperrectangle(low = [-1.0], high = [70.0])
problem = Problem(nnet, input_set, output_set)
```
```julia
julia> result = solve(solver, problem)
CounterExampleResult(:violated, [1.0])

julia> result.status
:violated
```
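As a rough sketch of acting on the outcome (the exact set of status symbols and result fields is solver-dependent and described with each Result type in the documentation):

```julia
# A minimal sketch of branching on the verification outcome.
# :holds and :violated appear in the example above; other symbols
# (e.g. an inconclusive status) may be returned depending on the solver.
if result.status == :violated
    println("Constraint violated; the returned result contains a counterexample input: ", result)
elseif result.status == :holds
    println("Constraint holds for every input in the input set.")
else
    println("Solver was inconclusive: ", result.status)
end
```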
For a full list of Solvers and their properties, requirements, and Result types, please refer to the documentation.