The Orion paper describes a PyTorch -> Lattigo lowering with some ML-specific CKKS packing strategies. We should port their work to HEIR. It would be a good start on arithmetic FHE for ML in HEIR, give us another front end (via torch-mlir), and provide a starting point for lowering meaningful programs to a CKKS dialect.
This would require at least:
- A recipe for going from PyTorch to TOSA (we probably don't need to integrate torch-mlir proper into this project); a rough sketch of this step follows the list
- A Lattigo exit dialect with the relevant ops
- A lowering from TOSA to lattigo
- Somewhere to put the optimizations from the paper
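For the PyTorch -> TOSA step, I'm imagining something like the snippet below, driven from outside this repo via torch-mlir's Python API. This is a sketch under the assumption that `torch_mlir.compile` with `OutputType.TOSA` is still the right entry point (that API has moved around across torch-mlir releases, so it needs verifying), and the tiny model is just a placeholder.

```python
import torch
import torch_mlir  # assumes a torch-mlir build with Python bindings installed

class TinyModel(torch.nn.Module):
    """Placeholder network; Orion's workloads would be real CNNs."""
    def __init__(self):
        super().__init__()
        self.conv = torch.nn.Conv2d(1, 4, kernel_size=3)
        self.fc = torch.nn.Linear(4 * 26 * 26, 10)

    def forward(self, x):
        x = torch.relu(self.conv(x))
        return self.fc(x.flatten(1))

model = TinyModel().eval()
example = torch.randn(1, 1, 28, 28)

# Lower to the TOSA dialect; the resulting MLIR would be HEIR's input.
module = torch_mlir.compile(model, example, output_type=torch_mlir.OutputType.TOSA)
with open("tiny_model_tosa.mlir", "w") as f:
    f.write(str(module))
```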
After the talk they gave at Google, I'm reinvigorated to support their convolution method, which is a variant of SISO (single-input, single-output) convolution packing that uses a baby-step/giant-step rotation strategy and seems very efficient.
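For reference, here is a minimal plaintext sketch (NumPy, no encryption) of the baby-step/giant-step diagonal method for a packed matrix-vector product, which is the kind of rotation savings their convolution variant builds on. The function names and structure here are my own illustration, not Orion's actual kernels; in CKKS the `rot` calls would be Galois rotations and the elementwise products would be plaintext-ciphertext multiplications.

```python
import numpy as np

def rot(x, k):
    """Cyclic rotation by k slots (stand-in for a CKKS rotation)."""
    return np.roll(x, -k)

def diag(M, j):
    """j-th generalized diagonal of M: d_j[i] = M[i, (i + j) % n]."""
    n = M.shape[0]
    return np.array([M[i, (i + j) % n] for i in range(n)])

def bsgs_matvec(M, v, n1):
    """Halevi-Shoup diagonal matvec with baby-step/giant-step rotations.

    Uses n1 baby rotations of v and n/n1 giant rotations of partial sums,
    i.e. about 2*sqrt(n) rotations instead of n in the naive diagonal method.
    """
    n = M.shape[0]
    assert n % n1 == 0
    n2 = n // n1
    baby = [rot(v, b) for b in range(n1)]          # baby steps: rot_b(v)
    y = np.zeros(n)
    for g in range(n2):
        inner = np.zeros(n)
        for b in range(n1):
            # Pre-rotate the diagonal by -g*n1 so one giant rotation
            # can be applied to the whole inner sum afterwards.
            d = rot(diag(M, g * n1 + b), -g * n1)
            inner += d * baby[b]
        y += rot(inner, g * n1)                    # giant step
    return y

if __name__ == "__main__":
    n, n1 = 16, 4
    M = np.random.randn(n, n)
    v = np.random.randn(n)
    assert np.allclose(bsgs_matvec(M, v, n1), M @ v)
```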
The bootstrap placement algorithm they use also seems nice, but it is a bit more complicated. In short, it seems to require simulating the per-layer latency of the neural network to build a graph, and then running a series of shortest-path computations over that graph to determine where to insert bootstrap ops. The main obstacle here is the simulation, which I believe is dataset-specific. While we still don't have a sense of how we might incorporate such things into HEIR, we have had other cases in which knowing about the dataset helps the compiler make better choices (e.g., in picking a polynomial approximation).
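To make the shortest-path idea concrete, here is a toy sketch of how I read it (my own simplification, not Orion's actual algorithm): treat each (layer index, remaining level) pair as a graph node, let edges either evaluate the next layer directly or bootstrap first, weight the edges with simulated latencies, and let Dijkstra pick the minimum-latency schedule. The linear-network restriction and all the numbers below are assumptions for illustration.

```python
import heapq

def place_bootstraps(layer_latency, layer_depth, max_level, bootstrap_latency):
    """Toy shortest-path bootstrap placement over a linear network.

    State = (layer index, remaining multiplicative level). From each state we
    either evaluate the next layer directly (consuming its depth) or bootstrap
    first (paying bootstrap_latency, refreshing to max_level). Dijkstra picks
    the minimum total-latency schedule; the bootstrap choices on that path
    give the placement.
    """
    n = len(layer_latency)
    start = (0, max_level)
    dist = {start: 0.0}
    prev = {}  # state -> (previous state, bootstrapped_before_this_layer)
    heap = [(0.0, start)]
    best_end = None
    while heap:
        cost, (i, lvl) = heapq.heappop(heap)
        if cost > dist.get((i, lvl), float("inf")):
            continue
        if i == n:
            best_end = (i, lvl)
            break
        for bootstrap in (False, True):
            new_lvl = (max_level if bootstrap else lvl) - layer_depth[i]
            if new_lvl < 0:
                continue  # not enough levels left; this move is infeasible
            new_cost = cost + layer_latency[i] + (bootstrap_latency if bootstrap else 0.0)
            state = (i + 1, new_lvl)
            if new_cost < dist.get(state, float("inf")):
                dist[state] = new_cost
                prev[state] = ((i, lvl), bootstrap)
                heapq.heappush(heap, (new_cost, state))
    if best_end is None:
        raise ValueError("no feasible schedule within the level budget")
    # Walk back to recover which layers were preceded by a bootstrap.
    placement, state = [], best_end
    while state in prev:
        state, bootstrapped = prev[state]
        if bootstrapped:
            placement.append(state[0])  # bootstrap inserted before this layer
    return dist[best_end], sorted(placement)

if __name__ == "__main__":
    latency = [1.0, 2.0, 1.5, 3.0]   # simulated per-layer latencies
    depth = [2, 3, 2, 3]             # multiplicative depth of each layer
    total, where = place_bootstraps(latency, depth, max_level=6, bootstrap_latency=10.0)
    print(total, where)               # expected: 17.5, [2]
```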