failed to convert torch.jit.ScriptModule to ONNX (crash) #30512
File: https://filebin.net/scu91052e8txtl4r (clean code)
When I compare it to a normal export (creating the model from code and exporting it directly, as in the sketch below), I get the errors shown here when I try to export from the torch module:
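For comparison, a minimal sketch of the "normal export" path (the `MyModel` class and the input shape here are placeholders; the actual model is in the filebin link above):

```python
import torch

# Placeholder module standing in for the real model from the filebin link.
class MyModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = torch.nn.Conv2d(3, 8, kernel_size=3)
        self.bn = torch.nn.BatchNorm2d(8)

    def forward(self, x):
        return torch.flatten(self.bn(self.conv(x)), 1)

model = MyModel().eval()
dummy_input = torch.randn(1, 3, 224, 224)  # assumed input shape

# "Normal" export: build the model from code and export it directly.
torch.onnx.export(model, dummy_input, "model.onnx")
```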
@dlibenzi - can you explain how your commit is related to this issue?
Should I know? 😄
@houseroad who is the right person to look at this?
@lironmo what version of PyTorch are you using?
@lara-hdr 1.3.0, and I also tested with 1.3.1.
@lironmo, the issue is that the shape information of the tensors is not always available when scripting. The ONNX exporter needs this information in certain cases where PyTorch and ONNX operators' behaviors don't align perfectly. batch_norm was recently updated to export without shape information in PR #29458, so the error you are getting with batch_norm is now fixed in master. However, when I tried exporting your model with PyTorch master, I got a similar error with flatten. Once this PR is merged, you should be able to export your model with opset_version=11 (use the opset_version=11 parameter in the exporter API) with the PyTorch nightly.
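For reference, passing the opset through the exporter API looks like this (a minimal sketch; the model, input, and file name are placeholders for the actual ones):

```python
import torch

# `model` stands in for the user's actual network.
model = torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3), torch.nn.BatchNorm2d(8))
dummy_input = torch.randn(1, 3, 224, 224)

# opset_version=11 selects the updated symbolics mentioned above.
torch.onnx.export(model, dummy_input, "model.onnx", opset_version=11)
```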
@lara-hdr - thanks for your reply :), so do I need to wait for the next nightly build? I created the traced TorchScript module with the new nightly build - updated in the bin.
@lara-hdr thanks for your reply :) I also tried installing the nightly build, creating a new traced model, and converting it to ONNX, but I get the same problem with the batch norm layer (see the trace below).
(convert) liron@liron-Latitude-5490:~/work/pixoneye/model_conversion$ pip freeze | grep -i torch
So do I need to wait for the next nightly build? I uploaded the new traced model to the bin (with a night_build suffix, https://filebin.net/scu91052e8txtl4r). About flatten, I will wait for the fix.
trace:
@lara-hdr sure, you are right, thanks!
Traceback (most recent call last):
@lironmo, as I explained above, the change for flatten could only be done for opset 11.
@lara-hdr, I verified the fix and it works. Thanks!
…0751) Summary: Update ONNX Flatten to accept negative indices in opset 11. With this change, some cases of flatten do not rely on the input rank being available. Fixes: pytorch#30512. Pull Request resolved: pytorch#30751 Reviewed By: hl475 Differential Revision: D18946904 Pulled By: houseroad fbshipit-source-id: a6fa30a9182fff92211e505a19325525c6112f19
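To illustrate what this commit enables (an illustrative sketch, not the PR's actual test case): a flatten with a negative end_dim can now be exported even when the input rank is unknown during scripting, as long as opset 11 is targeted.

```python
import torch

class FlattenNet(torch.nn.Module):
    def forward(self, x):
        # Negative end_dim: with opset 11 the exported ONNX Flatten no longer
        # needs the input rank to be known at export time.
        return torch.flatten(x, start_dim=1, end_dim=-1)

scripted = torch.jit.script(FlattenNet())
dummy_input = torch.randn(2, 3, 4, 5)

# example_outputs was required for ScriptModule export in this PyTorch era
# (the parameter was removed in later releases).
torch.onnx.export(scripted, dummy_input, "flatten.onnx",
                  example_outputs=scripted(dummy_input),
                  opset_version=11)
```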
I got the same error; setting opset 11 did not work for me.
I also got the same error when calling the torch.onnx.export API with the parameter opset_version=11 set.
🐛 Bug
When converting a ScriptModule to ONNX, we crash with the following exception:
To Reproduce
Load the attached TorchScript module and try to convert it to ONNX (a sketch of the call follows):
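A minimal sketch of the conversion call (the file names and input shape are placeholders; the real module is in the filebin link above):

```python
import torch

# "model.pt" is a placeholder path; the actual ScriptModule is in the
# filebin archive linked above.
module = torch.jit.load("model.pt")
dummy_input = torch.randn(1, 3, 224, 224)  # assumed input shape

# example_outputs was needed to export a ScriptModule in PyTorch 1.3;
# the crash reported here happens inside this call.
torch.onnx.export(module, dummy_input, "model.onnx",
                  example_outputs=module(dummy_input))
```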
cc @suo @houseroad @spandantiwari @lara-hdr @BowenBao @neginraoof