Explore tools for annotating intermediate layers of ML models with range bounds · Issue #1700 · google/heir
From the meeting notes on parameter selection: https://docs.google.com/document/d/1ASmm8UiQisMyk1EQYVMSw--I095nsTswZEPJ6BlcrIQ/edit?usp=sharing
The hope is to use these annotations to provide explicit per-SSA value range bounds that we can then use to bound CKKS noise estimates and produce useful parameters.
There seem to be two approaches to interval analysis in ML models.
Runtime analysis using a validation set
For these, most techniques are ad hoc: one uses something like pytorch's register_module_forward_hook to capture intermediate values, runs the model on a validation set, and dumps the observed values to some format.
We can do this and parse the results back into annotations on the pytorch model, then try to get those preserved when dumping to MLIR. A colleague of mine at Google is going to tinker with a nice way to use the pytorch hook, and next week I can try to convert his work to produce MLIR annotations.
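As a rough illustration (not a settled design), the validation-set dumping step might look like the sketch below. It uses PyTorch's global forward hook to record the observed min/max of each module's output; `model`, `val_loader`, and the JSON layout are placeholders for illustration, not anything HEIR currently consumes.

```python
# Rough sketch: record per-module output ranges over a validation set using
# PyTorch's global forward hook, then dump them to JSON.
# Assumptions: a `model` (nn.Module) and a `val_loader` yielding (inputs, labels).
import json
import torch
from torch.nn.modules.module import register_module_forward_hook

ranges = {}   # module name -> [min, max] observed so far
name_of = {}  # module object -> qualified name, filled in before the run

def record_range(module, inputs, output):
    if not isinstance(output, torch.Tensor):
        return  # skip modules returning tuples/dicts in this sketch
    name = name_of.get(module, module.__class__.__name__)
    lo, hi = output.min().item(), output.max().item()
    prev = ranges.get(name, [lo, hi])
    ranges[name] = [min(prev[0], lo), max(prev[1], hi)]

def dump_ranges(model, val_loader, path="ranges.json"):
    name_of.update({m: n for n, m in model.named_modules()})
    handle = register_module_forward_hook(record_range)
    model.eval()
    with torch.no_grad():
        for inputs, _ in val_loader:
            model(inputs)
    handle.remove()
    with open(path, "w") as f:
        json.dump(ranges, f, indent=2)
```

The resulting name-to-range map is the piece we would then need to parse back into annotations and carry through to MLIR.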
Formal methods
There are tools like https://github.com/Verified-Intelligence/alpha-beta-CROWN (this one seems the best and most actively developed) and https://github.com/vas-group-imperial/VeriNet that are purely based on static analysis and provide interval-based propagation through a model for worst-case bounds.
I haven't seen these tools before today, but alpha-beta-CROWN seems to support pytorch and ONNX and there is a usage doc here: https://github.com/Verified-Intelligence/alpha-beta-CROWN/blob/main/complete_verifier/docs/abcrown_usage.md
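For intuition only, here is the naive interval-bound-propagation step that this style of worst-case analysis builds on, shown for a single linear layer plus ReLU. This is not how alpha-beta-CROWN computes its (much tighter) bounds; it just illustrates the kind of per-layer range propagation these tools formalize.

```python
# Naive interval-bound propagation (IBP) through y = x @ W.T + b followed by
# ReLU, given elementwise input bounds [lower, upper]. Illustration only.
import torch

def linear_interval(weight, bias, lower, upper):
    # Worst-case output bounds via center/radius form of the input interval.
    center = (upper + lower) / 2
    radius = (upper - lower) / 2
    out_center = center @ weight.T + bias
    out_radius = radius @ weight.T.abs()
    return out_center - out_radius, out_center + out_radius

def relu_interval(lower, upper):
    return lower.clamp(min=0), upper.clamp(min=0)

# Example: worst-case output range of one hidden layer for inputs in [-1, 1]^4.
w, b = torch.randn(8, 4), torch.randn(8)
lo, hi = linear_interval(w, b, -torch.ones(4), torch.ones(4))
lo, hi = relu_interval(lo, hi)
```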
CC @ZenithalHourlyRate maybe you would want to try using alpha-beta-CROWN on some test model in the meantime to see what it outputs? Then we can get a sense for how we might preserve that information when converting to MLIR.