Explore tools for annotating intermediate layers of ML models with range bounds · Issue #1700 · google/heir

Explore tools for annotating intermediate layers of ML models with range bounds #1700

Open
j2kun opened this issue Apr 10, 2025 · 2 comments

Comments

j2kun (Collaborator) commented Apr 10, 2025

From the meeting notes on parameter selection: https://docs.google.com/document/d/1ASmm8UiQisMyk1EQYVMSw--I095nsTswZEPJ6BlcrIQ/edit?usp=sharing

The hope is to use these annotations to provide explicit per-SSA-value range bounds, which we can then use to bound CKKS noise estimates and produce useful parameters.

There seem to be two approaches to interval analysis in ML models.

Runtime analysis using a validation set

For these, most techniques are ad hoc: one registers something like PyTorch's register_module_forward_hook to capture intermediate values, then runs the model on a validation set and dumps the observed values in some format.

We can do this, parse the results back into annotations on the PyTorch model, and try to get those preserved when exporting to MLIR. A colleague of mine at Google is going to tinker with a nice way to use the PyTorch hook, and next week I can try to convert his work to produce MLIR annotations.
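As a rough illustration of the hook approach (my own sketch, not the colleague's work): a single global forward hook can accumulate a running min/max per module over a validation pass. The toy model and the dictionary format below are assumptions, not a fixed design.

```python
import torch
import torch.nn as nn

ranges = {}  # module name -> (running min, running max) of its output

def make_range_recorder(name_of):
    """Build a hook that records per-module output ranges."""
    def hook(module, inputs, output):
        if not isinstance(output, torch.Tensor):
            return  # skip modules with tuple/dict outputs in this sketch
        name = name_of.get(id(module), type(module).__name__)
        lo, hi = output.min().item(), output.max().item()
        old_lo, old_hi = ranges.get(name, (float("inf"), float("-inf")))
        ranges[name] = (min(old_lo, lo), max(old_hi, hi))
    return hook

# Toy stand-in for a real model and validation set.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
name_of = {id(m): n for n, m in model.named_modules()}

# Global hook: fires on every module's forward, no per-module wiring needed.
handle = torch.nn.modules.module.register_module_forward_hook(
    make_range_recorder(name_of))

with torch.no_grad():
    for _ in range(10):  # stand-in for iterating a validation set
        model(torch.randn(16, 4))
handle.remove()
```

After this runs, `ranges` maps each submodule name (e.g. `"1"` for the ReLU in the `nn.Sequential`) to an observed output interval, which is the kind of data we would want to serialize and re-attach as MLIR annotations.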

Formal methods

There are tools like https://github.com/Verified-Intelligence/alpha-beta-CROWN (this one seems the most capable and most actively developed) and https://github.com/vas-group-imperial/VeriNet that are based purely on static analysis and propagate intervals through a model to derive worst-case bounds.

I hadn't seen these tools before today, but alpha-beta-CROWN seems to support PyTorch and ONNX, and there is a usage doc here: https://github.com/Verified-Intelligence/alpha-beta-CROWN/blob/main/complete_verifier/docs/abcrown_usage.md
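For intuition about what these tools compute (their actual algorithms produce much tighter bounds), here is a minimal sketch of naive interval-bound propagation through one affine layer followed by a ReLU. All weights and input ranges are made-up illustrative numbers.

```python
def affine_bounds(W, b, lo, hi):
    """Worst-case output interval of y = W @ x + b for x in [lo, hi].

    For each output, a positive weight pulls the lower bound from the
    input's lower bound; a negative weight pulls it from the upper bound.
    """
    out_lo, out_hi = [], []
    for row, bias in zip(W, b):
        l = bias + sum(w * (lo[j] if w >= 0 else hi[j])
                       for j, w in enumerate(row))
        h = bias + sum(w * (hi[j] if w >= 0 else lo[j])
                       for j, w in enumerate(row))
        out_lo.append(l)
        out_hi.append(h)
    return out_lo, out_hi

def relu_bounds(lo, hi):
    """ReLU is monotone, so it maps intervals endpoint-wise."""
    return [max(0.0, l) for l in lo], [max(0.0, h) for h in hi]

# Example: 2 inputs in [-1, 1] x [-1, 1], one affine layer, then ReLU.
W = [[1.0, -2.0], [0.5, 0.5]]
b = [0.0, 1.0]
lo, hi = affine_bounds(W, b, [-1.0, -1.0], [1.0, 1.0])
# lo == [-3.0, 0.0], hi == [3.0, 2.0]
lo, hi = relu_bounds(lo, hi)
# lo == [0.0, 0.0], hi == [3.0, 2.0]
```

These per-layer intervals are exactly the shape of data we would want as per-SSA-value annotations; tools like alpha-beta-CROWN refine this with linear relaxations rather than plain intervals.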

j2kun (Collaborator, Author) commented Apr 10, 2025

CC @ZenithalHourlyRate — maybe you would want to try using alpha-beta-CROWN on some test model in the meantime to see what it outputs? Then we can get a sense of how we might preserve that information when converting to MLIR.

j2kun (Collaborator, Author) commented Apr 17, 2025

While working on a potential torch integration, I filed pytorch/xla#8993 to ask about attribute propagation in a StableHLO export.
