Research article | Open access

ReLU Hull Approximation

Published: 05 January 2024

Abstract

Convex hulls are commonly used to tackle the non-linearity of activation functions in the verification of neural networks. However, computing the exact convex hull is costly. In this work, we propose a fast and precise approach to over-approximating the convex hull of the ReLU function (referred to as the ReLU hull), one of the most widely used activation functions. Our key insight is to formulate a convex polytope that "wraps" the ReLU hull by reusing the linear pieces of the ReLU function as the lower faces and constructing upper faces adjacent to them. The upper faces can be constructed efficiently from the edges and vertices of the lower faces, because an n-dimensional (or simply nd hereafter) hyperplane is determined by an (n−1)d hyperplane and a point outside of it. We implement our approach as WraLU and evaluate its performance in terms of precision, efficiency, constraint complexity, and scalability. WraLU outperforms existing advanced methods by generating fewer constraints and achieving a tighter approximation in less time. It is also versatile, handling arbitrary input polytopes and higher-dimensional cases that are beyond the capabilities of existing methods. We integrate WraLU into PRIMA, a state-of-the-art neural network verifier, and apply it to verify large-scale ReLU-based neural networks. Our experimental results demonstrate that WraLU achieves high efficiency without compromising precision: it reduces the number of constraints that the linear programming solver must handle by up to half, while delivering results comparable to or better than those of state-of-the-art verifiers.
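
The upper-face construction rests on the geometric fact stated in the abstract: an n-dimensional hyperplane is determined by an (n−1)-dimensional hyperplane together with one point outside of it, i.e. by n affinely independent points. The short NumPy sketch below only illustrates that fact; it is not the WraLU implementation, and the function name hyperplane_through and the example points are hypothetical choices of ours.

    import numpy as np

    def hyperplane_through(points):
        """Return (a, b) such that a @ x == b for every point in `points`.

        `points` are n affinely independent points in R^n, e.g. n-1 points
        spanning an (n-1)d face plus one point outside of it.
        """
        pts = np.asarray(points, dtype=float)        # shape (n, n): n points in R^n
        n = pts.shape[1]
        # Each point yields one homogeneous equation [x, -1] @ [a; b] = 0;
        # the hyperplane coefficients span the null space of the stacked system.
        system = np.hstack([pts, -np.ones((len(pts), 1))])
        _, _, vh = np.linalg.svd(system)
        coeffs = vh[-1]                              # singular vector of the smallest singular value
        return coeffs[:n], coeffs[n]

    # Example in 3d: the plane y = x1 + x2 passes through the points
    # (0, 0, 0), (1, 0, 1) of a face and the extra point (0, 1, 1).
    a, b = hyperplane_through([[0, 0, 0], [1, 0, 1], [0, 1, 1]])
    print(a / a[-1], b / a[-1])                      # roughly [-1. -1.  1.] 0.0, i.e. y - x1 - x2 = 0

In WraLU's setting (per the abstract), the (n−1)d part comes from the edges and vertices of the lower faces; how the outside point is chosen so that the resulting face lies above the ReLU hull is the paper's contribution and is not shown here.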

Cited By

  • (2024) Development and application of a model for the automatic evaluation and classification of onions (Allium cepa L.) using a Deep Neural Network (DNN). Acta Scientiarum Polonorum Hortorum Cultus, 23(5), 39–57. https://doi.org/10.24326/asphc.2024.5337. Online publication date: 30-Nov-2024.

Published In

Proceedings of the ACM on Programming Languages, Volume 8, Issue POPL
January 2024, 2820 pages
EISSN: 2475-1421
DOI: 10.1145/3554315

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 05 January 2024
Published in PACMPL Volume 8, Issue POPL

Author Tags

  1. Convexity
  2. Neural Networks
  3. Polytope
  4. Robustness

Qualifiers

  • Research-article

Article Metrics

  • Downloads (last 12 months): 413
  • Downloads (last 6 weeks): 41

Reflects downloads up to 19 Dec 2024.
