
Unifying Syntactic and Semantic Abstractions for Deep Neural Networks

  • Conference paper
  • First Online:
Formal Methods for Industrial Critical Systems (FMICS 2024)

Abstract

Deep Neural Networks (DNNs) are being trained and trusted to perform fairly complex tasks, even in safety-critical applications such as autonomous driving, medical diagnosis, and air traffic control. However, these real-world applications tend to rely on very large DNNs to achieve the desired accuracy, making them challenging to execute in resource-constrained and real-time settings. The size of these networks is also a bottleneck in proving their trustworthiness through formal verification or explanation, limiting their deployability in safety-critical domains. It is therefore imperative to be able to compress these networks while maintaining a strong formal connection to the original network and preserving desirable safety properties. Several syntactic abstraction techniques have been proposed that produce an abstract network with a formal guarantee that safety properties will be preserved. These, however, do not take the semantic behaviour of the network into account and thus produce suboptimally large networks. On the other hand, compression and semantic abstraction techniques have been proposed that achieve a significant reduction in network size but only weakly preserve a limited set of safety properties. In this paper, we propose to combine the semantic and syntactic approaches into a single framework to get the best of both worlds. This allows us to guide the abstraction using global semantic information while still providing concrete soundness guarantees based on syntactic constraints. Our experiments on standard neural network benchmarks show that this approach can produce smaller abstract networks than existing methods while preserving safety properties.
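
To make the syntactic side of this combination concrete, the sketch below (Python, illustrative only) shows the kind of saturation-style merge step that syntactic abstraction frameworks such as [20] build on: a group of same-type ("positive, increasing") neurons in a hidden layer is replaced by a single abstract neuron whose incoming weights are the element-wise maximum and whose outgoing weights are the sum of the merged neurons' weights, which over-approximates the network output. How the group itself is chosen is where semantic information can enter; the function and its names are assumptions made for illustration, not the construction proposed in this paper.

```python
# Illustrative sketch, not this paper's construction: a saturation-style merge of a
# group of hidden-layer neurons in the spirit of syntactic abstraction frameworks
# such as Elboher et al. [20]. Assumes all merged neurons are "positive, increasing",
# so max-of-incoming / sum-of-outgoing weights over-approximates the network output.
import numpy as np

def merge_group(W_in, b, W_out, group):
    """Merge the neurons indexed by `group` in one hidden layer.

    W_in : (n_prev, n) incoming weights, b : (n,) biases, W_out : (n, n_next) outgoing weights.
    Returns reduced (W_in, b, W_out) with the group replaced by one abstract neuron.
    """
    keep = [j for j in range(W_in.shape[1]) if j not in set(group)]
    w_in_abs = W_in[:, group].max(axis=1)      # element-wise max of incoming weights
    b_abs = b[group].max()                     # max of the merged biases
    w_out_abs = W_out[group, :].sum(axis=0)    # sum of outgoing weights
    W_in_new = np.column_stack([W_in[:, keep], w_in_abs])
    b_new = np.append(b[keep], b_abs)
    W_out_new = np.vstack([W_out[keep, :], w_out_abs])
    return W_in_new, b_new, W_out_new
```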


Notes

  1. For our choice of \(\mathcal {C}\) (Sect. 3.1), Algorithm 1 reduces to hierarchical clustering [24], allowing us to leverage existing efficient implementations (see the illustrative sketch following these notes). Nonetheless, the general algorithm presented here works for any choice of \(\mathcal {C}\).

  2. All of the code, networks, and datasets used in our evaluation are available at https://github.com/digumx/unified-merges.

  3. We use a faithful re-implementation of this framework that follows the procedure in the paper exactly, with only two differences: we use a two-class classification as in [8, 28, 29], and the call to verify the \(\mathcal {N'}\) obtained in each iteration is dispatched to the NeuralSAT [16] solver rather than to Marabou [25, 26, 48].
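
As noted in footnote 1, for the paper's choice of \(\mathcal {C}\) the abstraction step reduces to hierarchical clustering [24], so an off-the-shelf implementation can be used. The snippet below is a minimal illustration of that reduction: neurons of one layer are grouped by Ward-linkage clustering over their activation profiles on a set of sample inputs. The choice of signature (raw activations on samples) and all names are assumptions made for illustration, not necessarily the similarity measure used in the paper.

```python
# Minimal illustration of footnote 1: grouping a layer's neurons by hierarchical
# (Ward) clustering over their activation profiles on sample inputs, reusing an
# existing efficient implementation. The signature used here is an assumption.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def group_neurons(activations: np.ndarray, num_groups: int) -> np.ndarray:
    """activations: (num_samples, num_neurons) values of one hidden layer on
    sample inputs. Returns a cluster id in 1..num_groups for each neuron."""
    Z = linkage(activations.T, method="ward")              # cluster neurons, not samples
    return fcluster(Z, t=num_groups, criterion="maxclust")

# Example: 100 sample inputs, a layer of 64 neurons, reduced to 8 groups.
acts = np.random.rand(100, 64)
print(group_neurons(acts, 8))
```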

References

  1. Ashok, P., Hashemi, V., Křetínský, J., Mohr, S.: DeepAbstract: neural network abstraction for accelerating verification. In: Hung, D.V., Sokolsky, O. (eds.) Automated Technology for Verification and Analysis. ATVA 2020. LNCS, vol. 12302, pp. 92–107. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-59152-6_5

  2. Bassan, S., Katz, G.: Towards formal XAI: formally approximate minimal explanations of neural networks. In: Sankaranarayanan, S., Sharygina, N. (eds.) Tools and Algorithms for the Construction and Analysis of Systems. TACAS 2023. LNCS, vol. 13993, pp. 187–207. Springer, Cham (2023). https://doi.org/10.1007/978-3-031-30823-9_10

  3. Bojarski, M., et al.: End to end learning for self-driving cars. CoRR abs/1604.07316 (2016)

  4. Bunel, R., Lu, J., Turkaslan, I., Torr, P.H.S., Kohli, P., Kumar, M.P.: Branch and bound for piecewise linear neural network verification. J. Mach. Learn. Res. 21, 42:1–42:39 (2020)

  5. Carlini, N., Katz, G., Barrett, C., Dill, D.L.: Provably minimally-distorted adversarial examples (2018)

  6. Carlini, N., Wagner, D.A.: Towards evaluating the robustness of neural networks. In: 2017 IEEE Symposium on Security and Privacy, SP 2017, San Jose, CA, USA, 22–26 May 2017, pp. 39–57. IEEE Computer Society (2017)

  7. Chau, C., Křetínský, J., Mohr, S.: Syntactic vs semantic linear abstraction and refinement of neural networks. In: André, É., Sun, J. (eds.) Automated Technology for Verification and Analysis. ATVA 2023. LNCS, vol. 14215, pp. 401–421. Springer, Cham (2023). https://doi.org/10.1007/978-3-031-45329-8_19

  8. Chauhan, A., Afzal, M., Karmarkar, H., Elboher, Y., Madhukar, K., Katz, G.: Efficiently finding adversarial examples with DNN preprocessing (2022)

  9. Chen, X., Liu, C., Li, B., Lu, K., Song, D.: Targeted backdoor attacks on deep learning systems using data poisoning. CoRR abs/1712.05526 (2017)

  10. Cheng, C., Nührenberg, G., Ruess, H.: Maximum resilience of artificial neural networks. CoRR abs/1705.01040 (2017)

  11. Cheng, Y., Wang, D., Zhou, P., Zhang, T.: A survey of model compression and acceleration for deep neural networks. CoRR abs/1710.09282 (2017)

  12. Clarke, E.M., Grumberg, O., Jha, S., Lu, Y., Veith, H.: Counterexample-guided abstraction refinement for symbolic model checking. J. ACM 50(5), 752–794 (2003)

  13. Cohen, E., Elboher, Y.Y., Barrett, C.W., Katz, G.: Tighter abstract queries in neural network verification. In: Piskac, R., Voronkov, A. (eds.) LPAR 2023: Proceedings of 24th International Conference on Logic for Programming, Artificial Intelligence and Reasoning, Manizales, Colombia, 4–9th June 2023. EPiC Series in Computing, vol. 94, pp. 124–143. EasyChair (2023)

  14. Dantzig, G.B.: Linear Programming and Extensions. RAND Corporation, Santa Monica, CA (1963). https://doi.org/10.7249/R366

  15. Djavanshir, G.R., Chen, X., Yang, W.: A review of artificial intelligence’s neural networks (deep learning) applications in medical diagnosis and prediction. IT Professional 23(3), 58–62 (2021)

  16. Duong, H., Nguyen, T., Dwyer, M.: A DPLL(T) framework for verifying deep neural networks (2024)

  17. Dutta, S., Jha, S., Sankaranarayanan, S., Tiwari, A.: Output range analysis for deep feedforward neural networks. In: Dutle, A., Muñoz, C., Narkawicz, A. (eds.) NASA Formal Methods. NFM 2018. LNCS, vol. 10811, pp. 121–138. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-77935-5_9

  18. Dvijotham, K., Stanforth, R., Gowal, S., Mann, T.A., Kohli, P.: A dual approach to scalable verification of deep networks. In: Globerson, A., Silva, R. (eds.) Proceedings of the Thirty-Fourth Conference on Uncertainty in Artificial Intelligence, UAI 2018, Monterey, California, USA, 6–10 August 2018, pp. 550–559. AUAI Press (2018)

  19. Ehlers, R.: Formal verification of piece-wise linear feed-forward neural networks. In: D’Souza, D., Narayan Kumar, K. (eds.) Automated Technology for Verification and Analysis. ATVA 2017. LNCS, vol. 10482, pp. 269–286. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-68167-2_19

  20. Elboher, Y.Y., Gottschlich, J., Katz, G.: An abstraction-based framework for neural network verification. In: Lahiri, S., Wang, C. (eds.) Computer Aided Verification. CAV 2020. LNCS, vol. 12224, pp. 43–65. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-53288-8_3

  21. Fischetti, M., Jo, J.: Deep neural networks and mixed integer linear optimization. Constraints Int. J. 23(3), 296–309 (2018)

  22. Gehr, T., Mirman, M., Drachsler-Cohen, D., Tsankov, P., Chaudhuri, S., Vechev, M.T.: AI2: safety and robustness certification of neural networks with abstract interpretation. In: 2018 IEEE Symposium on Security and Privacy, SP 2018, Proceedings, 21–23 May 2018, San Francisco, California, USA, pp. 3–18. IEEE Computer Society (2018)

  23. Goodfellow, I.J., Shlens, J., Szegedy, C.: Explaining and harnessing adversarial examples. In: Bengio, Y., LeCun, Y. (eds.) 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, 7–9 May 2015, Conference Track Proceedings (2015)

  24. Ward Jr, J.H.: Hierarchical grouping to optimize an objective function. J. Am. Stat. Assoc. 58(301), 236–244 (1963)

  25. Katz, G., Barrett, C., Dill, D.L., Julian, K., Kochenderfer, M.J.: Reluplex: an efficient SMT solver for verifying deep neural networks. In: Majumdar, R., Kunčak, V. (eds.) Computer Aided Verification. CAV 2017. LNCS, vol. 10426, pp. 97–117. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-63387-9_5

  26. Katz, G., et al.: The Marabou framework for verification and analysis of deep neural networks. In: Dillig, I., Tasiran, S. (eds.) Computer Aided Verification. CAV 2019. LNCS, vol. 11561, pp. 443–452. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-25540-4_26

  27. Kurakin, A., Goodfellow, I.J., Bengio, S.: Adversarial examples in the physical world. In: 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, 24–26 April 2017, Workshop Track Proceedings. OpenReview.net (2017)

  28. Liu, J., Xing, Y., Shi, X., Song, F., Xu, Z., Ming, Z.: Abstraction and refinement: towards scalable and exact verification of neural networks (2022)

  29. Liu, J., Xing, Y., Shi, X., Song, F., Xu, Z., Ming, Z.: Abstraction and refinement: towards scalable and exact verification of neural networks. ACM Trans. Softw. Eng. Methodol. (2024). https://doi.org/10.1145/3644387

  30. Lomuscio, A., Maganti, L.: An approach to reachability analysis for feed-forward relu neural networks. CoRR abs/1706.07351 (2017). http://arxiv.org/abs/1706.07351

  31. Mangal, A., Kalia, S., Rajgopal, H., Rangarajan, K., Namboodiri, V.P., Banerjee, S., Arora, C.: CovidAID: COVID-19 detection using chest X-ray. CoRR abs/2004.09803 (2020)

  32. Marques-Silva, J., Ignatiev, A.: Delivering trustworthy AI through formal XAI. In: Thirty-Sixth AAAI Conference on Artificial Intelligence, AAAI 2022, Thirty-Fourth Conference on Innovative Applications of Artificial Intelligence, IAAI 2022, The Twelfth Symposium on Educational Advances in Artificial Intelligence, EAAI 2022, Virtual Event, February 22 – March 1, 2022, pp. 12342–12350. AAAI Press (2022)

  33. Moosavi-Dezfooli, S., Fawzi, A., Frossard, P.: Deepfool: a simple and accurate method to fool deep neural networks. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, 27–30 June 2016, pp. 2574–2582. IEEE Computer Society (2016)

  34. Neider, D., Johnson, T.T.: Track C1: safety verification of deep neural networks (DNNs). In: Steffen, B. (ed.) Bridging the Gap Between AI and Reality. AISoLA 2023. LNCS, vol. 14380, pp. 217–224. Springer, Cham (2024). https://doi.org/10.1007/978-3-031-46002-9_12

  35. Ostrovsky, M., Barrett, C., Katz, G.: An abstraction-refinement approach to verifying convolutional neural networks. In: Bouajjani, A., Holík, L., Wu, Z. (eds.) Automated Technology for Verification and Analysis. ATVA 2022. LNCS, vol. 13505, pp. 391–396. Springer, Cham (2022). https://doi.org/10.1007/978-3-031-19992-9_25

  36. Owen, M.P., Panken, A., Moss, R., Alvarez, L., Leeper, C.: ACAS XU: integrated collision avoidance and detect and avoid capability for UAS. In: 2019 IEEE/AIAA 38th Digital Avionics Systems Conference (DASC), pp. 1–10 (2019). https://doi.org/10.1109/DASC43569.2019.9081758

  37. Palma, A.D., et al.: Improved branch and bound for neural network verification via Lagrangian decomposition. CoRR abs/2104.06718 (2021)

  38. Pham, L.H., Sun, J.: Verifying neural networks against backdoor attacks. In: Shoham, S., Vizel, Y. (eds.) Computer Aided Verification. CAV 2022. LNCS, vol. 13371, pp. 171–192. Springer, Cham (2022). https://doi.org/10.1007/978-3-031-13185-1_9

  39. Ruan, W., Huang, X., Kwiatkowska, M.: Reachability analysis of deep neural networks with provable guarantees. In: Lang, J. (ed.) Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI 2018, 13–19 July 2018, Stockholm, Sweden, pp. 2651–2659. ijcai.org (2018)

  40. Salman, H., Yang, G., Zhang, H., Hsieh, C.J., Zhang, P.: A convex relaxation barrier to tight robustness verification of neural networks. Adv. Neural Inf. Process. Syst. 32, 9835–9846 (2019)

  41. Shi, Z., Jin, Q., Kolter, J.Z., Jana, S., Hsieh, C.J., Zhang, H.: Formal verification for neural networks with general nonlinearities via branch-and-bound. In: 2nd Workshop on Formal Verification of Machine Learning (WFVML 2023) (2023)

  42. Singh, G., Gehr, T., Mirman, M., Püschel, M., Vechev, M.T.: Fast and effective robustness certification. In: Bengio, S., Wallach, H.M., Larochelle, H., Grauman, K., Cesa-Bianchi, N., Garnett, R. (eds.) Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, 3–8 December 2018, Montréal, Canada, pp. 10825–10836 (2018)

  43. Singh, G., Gehr, T., Püschel, M., Vechev, M.T.: An abstract domain for certifying neural networks. Proc. ACM Program. Lang. 3(POPL), 41:1–41:30 (2019)

  44. Singh, G., Gehr, T., Püschel, M., Vechev, M.T.: Boosting robustness certification of neural networks. In: 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, 6–9 May 2019. OpenReview.net (2019)

  45. Szegedy, C., et al.: Intriguing properties of neural networks. In: Bengio, Y., LeCun, Y. (eds.) 2nd International Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, 14–16 April 2014, Conference Track Proceedings (2014)

  46. Virmaux, A., Scaman, K.: Lipschitz regularity of deep neural networks: analysis and efficient estimation. In: Bengio, S., Wallach, H.M., Larochelle, H., Grauman, K., Cesa-Bianchi, N., Garnett, R. (eds.) Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, 3–8 December 2018, Montréal, Canada, pp. 3839–3848 (2018). https://proceedings.neurips.cc/paper/2018/hash/d54e99a6c03704e95e6965532dec148b-Abstract.html

  47. Wang, S., et al.: Beta-CROWN: efficient bound propagation with per-neuron split constraints for neural network robustness verification. In: Ranzato, M., Beygelzimer, A., Dauphin, Y.N., Liang, P., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, 6–14 December 2021, virtual, pp. 29909–29921 (2021)

  48. Wu, H., et al.: Marabou 2.0: a versatile formal analyzer of neural networks. CoRR abs/2401.14461 (2024)

  49. Xu, K., et al.: Automatic perturbation analysis for scalable certified robustness and beyond. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M., Lin, H. (eds.) Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, 6–12 December 2020, virtual (2020)

  50. Xu, K., et al.: Fast and complete: enabling complete neural network verification with rapid and massively parallel incomplete verifiers. CoRR abs/2011.13824 (2020)

  51. Zhang, H., et al.: General cutting planes for bound-propagation-based neural network verification. Adv. Neural Inf. Process. Syst. (2022)

  52. Zhang, H., Wang, S., Xu, K., Wang, Y., Jana, S., Hsieh, C.J., Kolter, Z.: A branch and bound framework for stronger adversarial attacks of ReLU networks. In: Proceedings of the 39th International Conference on Machine Learning, vol. 162, pp. 26591–26604 (2022)

  53. Zhang, H., Weng, T., Chen, P., Hsieh, C., Daniel, L.: Efficient neural network robustness certification with general activation functions. In: Bengio, S., Wallach, H.M., Larochelle, H., Grauman, K., Cesa-Bianchi, N., Garnett, R. (eds.) Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, 3–8 December 2018, Montréal, Canada, pp. 4944–4953 (2018)

  54. Zhao, Z., Zhang, Y., Chen, G., Song, F., Chen, T., Liu, J.: CLEVEREST: accelerating CEGAR-based Neural network verification via adversarial attacks. In: Singh, G., Urban, C. (eds.) Static Analysis. SAS 2022. LNCS, vol. 13790, pp. 449–473. Springer, Cham (2022). https://doi.org/10.1007/978-3-031-22308-2_20

Acknowledgments

This project was partly funded by IRD, IIT Delhi under the Multi-Institutional Faculty Interdisciplinary Research Programme (MFIRP) Scheme, Project No. MI02571G, jointly between the Indian Institute of Technology Delhi and the Hebrew University of Jerusalem. This work was also partially supported by the CSE Research Acceleration Fund of IIT Delhi. The authors would also like to thank Guy Katz, a faculty member in the School of Computer Science and Engineering at the Hebrew University of Jerusalem, for his valuable inputs and feedback.

Author information

Corresponding author

Correspondence to Sanaa Siddiqui.

Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Siddiqui, S., Mukhopadhyay, D., Afzal, M., Karmarkar, H., Madhukar, K. (2024). Unifying Syntactic and Semantic Abstractions for Deep Neural Networks. In: Haxthausen, A.E., Serwe, W. (eds) Formal Methods for Industrial Critical Systems. FMICS 2024. Lecture Notes in Computer Science, vol 14952. Springer, Cham. https://doi.org/10.1007/978-3-031-68150-9_12

  • DOI: https://doi.org/10.1007/978-3-031-68150-9_12

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-68149-3

  • Online ISBN: 978-3-031-68150-9

  • eBook Packages: Computer Science, Computer Science (R0)
