Research Article | Open Access

Synthesizing Precise Static Analyzers for Automatic Differentiation

Published: 16 October 2023

Abstract

We present Pasado, a technique for synthesizing precise static analyzers for Automatic Differentiation (AD). Our technique automatically constructs a static analyzer specialized to the Chain Rule, Product Rule, and Quotient Rule computations of AD, abstracting all of the nonlinear operations of each rule simultaneously. By directly synthesizing an abstract transformer for the composite expressions of these three most common rules of AD, we obtain a significant precision improvement over prior works, which compose standard abstract transformers suboptimally. We prove our synthesized static analyzers sound, and we demonstrate the generality of our approach by instantiating these AD static analyzers with different nonlinear functions, different abstract domains (both intervals and zonotopes), and both forward-mode and reverse-mode AD.
We evaluate Pasado on multiple case studies: soundly computing bounds on a neural network’s local Lipschitz constant, soundly bounding the sensitivities of financial models, certifying monotonicity, and bounding the sensitivities of solutions of differential equations from climate science and chemistry over verified ranges of initial conditions and parameters. The local Lipschitz constants computed by Pasado on our largest CNN are up to 2750× more precise than those of the existing state-of-the-art zonotope analysis. The bounds obtained on the sensitivities of the climate, chemical, and financial differential equation solutions are between 1.31× and 2.81× more precise (on average) than those of a state-of-the-art zonotope analysis.
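For concreteness, these are the composite derivative expressions that the three rules introduce (standard calculus identities, reproduced here; treating each right-hand side as a single expression to abstract, rather than as a sequence of primitive operations, is the move the paragraph above describes):

```latex
(f \circ g)'(x) = f'\big(g(x)\big)\, g'(x)                                  % Chain Rule
(u\,v)'(x)      = u'(x)\,v(x) + u(x)\,v'(x)                                 % Product Rule
\left(\tfrac{u}{v}\right)'(x) = \frac{u'(x)\,v(x) - u(x)\,v'(x)}{v(x)^{2}}  % Quotient Rule
```

Each right-hand side mixes several nonlinear operations whose intermediate values are correlated (e.g., the two occurrences of v(x) in the Quotient Rule), which is exactly the dependency information a single synthesized transformer can exploit and a composition of primitive transformers loses.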
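To see why joint abstraction helps, consider the minimal sketch below. It is our own illustration with hypothetical helper names (composed_bounds, composite_bounds), not Pasado’s synthesis procedure. It bounds the Chain Rule factor σ′(x) = σ(x)(1 − σ(x)) for the sigmoid over an input interval, first by composing primitive interval transformers, then with one transformer for the whole composite expression:

```python
# Illustrative sketch only: interval bounds on sigma'(x) = s*(1 - s), where
# s = sigmoid(x). Not Pasado's actual algorithm; it only demonstrates the
# precision gap between composed and composite abstract transformers.
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def composed_bounds(lo: float, hi: float) -> tuple[float, float]:
    # Compose primitive interval transformers: the two occurrences of
    # s = sigmoid(x) become independent intervals, losing their correlation.
    s_lo, s_hi = sigmoid(lo), sigmoid(hi)   # sigmoid is monotone increasing
    t_lo, t_hi = 1.0 - s_hi, 1.0 - s_lo     # interval for 1 - s
    prods = [s_lo * t_lo, s_lo * t_hi, s_hi * t_lo, s_hi * t_hi]
    return min(prods), max(prods)

def composite_bounds(lo: float, hi: float) -> tuple[float, float]:
    # One transformer for the whole composite expression g(s) = s*(1 - s)
    # over s in [sigmoid(lo), sigmoid(hi)]: g is concave with its maximum
    # value 1/4 at s = 1/2, so its exact range is easy to bound soundly.
    s_lo, s_hi = sigmoid(lo), sigmoid(hi)

    def g(s: float) -> float:
        return s * (1.0 - s)

    g_max = 0.25 if s_lo <= 0.5 <= s_hi else max(g(s_lo), g(s_hi))
    return min(g(s_lo), g(s_hi)), g_max

if __name__ == "__main__":
    lo, hi = -1.0, 1.0
    print("composed transformers:", composed_bounds(lo, hi))   # ~(0.072, 0.534)
    print("composite transformer:", composite_bounds(lo, hi))  # ~(0.197, 0.250)
```

On [−1, 1] the composed transformers yield roughly [0.072, 0.534], while the composite transformer returns the exact range [0.197, 0.25]. Since the maximum of |σ′| over the interval is a sound local Lipschitz bound for σ, the composite upper bound is about 2× tighter here; this is the effect that, per the abstract, Pasado generalizes to the Chain, Product, and Quotient Rules over both interval and zonotope domains.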


Cited By

  • (2024) Input-Relational Verification of Deep Neural Networks. Proceedings of the ACM on Programming Languages, 8 (PLDI), 1–27. https://doi.org/10.1145/3656377. Online publication date: 20 June 2024.


Published In

Proceedings of the ACM on Programming Languages, Volume 7, Issue OOPSLA2
October 2023, 2250 pages
EISSN: 2475-1421
DOI: 10.1145/3554312

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery, New York, NY, United States

Publication History

Published: 16 October 2023
Published in PACMPL Volume 7, Issue OOPSLA2


Author Tags

  1. Abstract Interpretation
  2. Differentiable Programming

Qualifiers

  • Research-article


Article Metrics

  • Downloads (last 12 months): 224
  • Downloads (last 6 weeks): 21

Reflects downloads up to 15 January 2025.

