id:        cord-031663-i71w0es7
author:    Giacobbe, Mirco
title:     How Many Bits Does it Take to Quantize Your Neural Network?
date:      2020-03-13
pages:
extension: .txt
mime:      text/plain
words:     6525
sentences: 332
flesch:    53
summary:   For this reason, we introduce a verification method for quantized neural networks which, using SMT solving over bit-vectors, accounts for their exact, bit-precise semantics. As a result, we obtain an encoding into a first-order logic formula which, in contrast to a standard unbalanced linear encoding, makes the verification of quantized networks practical and amenable to modern bit-precise SMT solving. We measured the robustness to attacks of a neural classifier involving 890 neurons and trained on the MNIST dataset (handwritten digits), for quantizations between 6 and 10 bits. We evaluated whether our balanced encoding strategy, compared to a standard linear encoding, can improve the scalability of contemporary SMT solvers for quantifier-free bit-vectors (QF_BV) when checking specifications of quantized neural networks. We introduced the first complete method for the verification of quantized neural networks which, by SMT solving over bit-vectors, accounts for their bit-precise semantics.
cache:     ./cache/cord-031663-i71w0es7.txt
txt:       ./txt/cord-031663-i71w0es7.txt
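To illustrate the kind of QF_BV query the summary describes, below is a minimal sketch (not the paper's actual encoding) that models a single 8-bit quantized neuron with Z3's Python bindings and asks whether a bounded input perturbation can silence it. The weights W, reference input X0, perturbation bound EPS, and the 32-bit accumulator width are illustrative assumptions, not values from the paper.

```python
# Minimal QF_BV robustness sketch for one quantized neuron (illustrative only).
from z3 import BitVec, BitVecVal, Solver, If, SignExt, sat

BITS = 8                   # quantization width (the paper evaluates 6-10 bits)
ACC = 32                   # accumulator width, wide enough to avoid overflow here
W = [3, -5, 7]             # hypothetical signed 8-bit weights
X0 = [10, 20, 30]          # hypothetical reference input (signed 8-bit)
EPS = 2                    # allowed per-feature perturbation

def sext(bv):
    # Sign-extend an 8-bit value to the accumulator width.
    return SignExt(ACC - BITS, bv)

s = Solver()
xs = [BitVec(f"x_{i}", BITS) for i in range(len(W))]

# Keep each perturbed input within EPS of the reference input.
# Python comparison operators on z3 bit-vectors are signed comparisons.
for x, x0 in zip(xs, X0):
    s.add(BitVecVal(x0 - EPS, BITS) <= x, x <= BitVecVal(x0 + EPS, BITS))

# Bit-precise accumulation: operands are sign-extended before multiplying,
# so the dot product is computed with exact fixed-point semantics.
acc = BitVecVal(0, ACC)
for w, x in zip(W, xs):
    acc = acc + sext(BitVecVal(w, BITS)) * sext(x)

# ReLU on the accumulator, then ask whether some admissible perturbation
# drives the output to zero; sat means a counterexample (attack) exists.
relu = If(acc > 0, acc, BitVecVal(0, ACC))
s.add(relu == 0)

result = s.check()
print("attack found" if result == sat else "robust within EPS")
if result == sat:
    print(s.model())
```

Because every constraint ranges over fixed-width bit-vectors, the solver reasons about the same rounding and overflow behaviour the quantized hardware would exhibit, which is the point of a bit-precise encoding as opposed to an approximation over the reals.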