how to get mobile_sam_encoder/decoder.onnx · Issue #6 · zhudongwork/SAM_TensorRT · GitHub

how to get mobile_sam_encoder/decoder.onnx #6

Open
busyyang opened this issue Mar 11, 2024 · 8 comments

@busyyang

In MobileSAM there is only one weight file, mobile_sam.pt. How can I get the encoder and decoder weights separately? Could you share the script for that?

@zhudongwork (Owner)

Sorry, I will add the relevant scripts when I have time. For now, you can refer to https://github.com/dinglufe/segment-anything-cpp-wrapper.

@m-wei commented Apr 17, 2024

Have you modified the official ONNX export code? When I convert to TensorRT, it reports an error. Can you tell me which ONNX version you used?

[04/17/2024-14:59:10] [E] [TRT] ModelImporter.cpp:768: While parsing node number 1209 [Slice -> "/Slice_2_output_0"]:
[04/17/2024-14:59:10] [E] [TRT] ModelImporter.cpp:769: --- Begin node ---
[04/17/2024-14:59:10] [E] [TRT] ModelImporter.cpp:770: input: "/Resize_output_0"
input: "/Constant_70_output_0"
input: "/Unsqueeze_19_output_0"
input: "/Constant_72_output_0"
input: "/Constant_73_output_0"
output: "/Slice_2_output_0"
name: "/Slice_2"
op_type: "Slice"

[04/17/2024-14:59:10] [E] [TRT] ModelImporter.cpp:771: --- End node ---
[04/17/2024-14:59:10] [E] [TRT] ModelImporter.cpp:774: ERROR: ModelImporter.cpp:195 In function parseGraph:
[6] Invalid Node - /Slice_2
[graphShapeAnalyzer.cpp::demandAndResolveInputTensor::1247] Error Code 4: Internal Error (orig_im_size: network input that is shape tensor must have type Int32)
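
The last log line is the actual failure: TensorRT requires a network input that is used as a shape tensor (orig_im_size here) to have type Int32, while the official export script declares it as a float tensor. One possible workaround, as an untested sketch assuming the stock segment-anything-style scripts/export_onnx_model.py, is to declare orig_im_size as int32 at export time and cast it back to float inside the ONNX wrapper model wherever float arithmetic needs it:

import torch

# Untested sketch against the stock export script's dummy inputs: declaring
# orig_im_size as int32 makes the exported input type match what TensorRT
# expects for a shape tensor (the stock script uses dtype=torch.float).
orig_im_size = torch.tensor([1500, 2250], dtype=torch.int32)

# Inside the wrapper model, cast back before any float arithmetic:
orig_im_size_float = orig_im_size.to(torch.float32)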

@m-wei commented Apr 17, 2024

My TensorRT version is 8.6.0. Can you give me your torch and onnx versions?

@busyyang (Author)

I converted the ONNX models to TensorRT engines successfully with this command:

trtexec.exe -- --saveEngine=path\mobile_sam_encoder.trt

@m-wei you can give these weight files a try.

And @zhudongwork, could you add the code to get the encoder/decoder ONNX weights from mobile_sam.pt?

My TensorRT version is 8.6.1, on Windows 11.
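
For reference, trtexec takes the input model via --onnx, so a full invocation would look something like this (paths here are placeholders):

trtexec.exe --onnx=path\mobile_sam_encoder.onnx --saveEngine=path\mobile_sam_encoder.trt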

@m-wei commented Apr 17, 2024

@busyyang Those weight files work. My problem is that my ONNX model was exported from the official MobileSAM library.

@m-wei commented Apr 17, 2024

@busyyang My question is solved. You can run nanosam's nanosam/mobile_sam/utils/onnx.py; that code does not export the mask node, and I guess that node has a bug.

@busyyang (Author)

Glad to hear that. I also converted to ONNX successfully with export_onnx_model.py:

python MobileSAM/scripts/export_onnx_model.py --checkpoint ../weights/mobile_sam.pt --output ../weights/mobile_sam.onnx --model-type vit_t

But it does not produce separate encoder and decoder weights.

@m-wei commented Apr 18, 2024

@busyyang No, no; that export code produces only the decoder model. The encoder model comes from https://github.com/dinglufe/segment-anything-cpp-wrapper: the model that library exports is named xx_preprocess.onnx, and that is actually the encoder.
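
If you want to export the encoder straight from mobile_sam.pt instead of going through that wrapper, here is a minimal sketch, assuming the MobileSAM package's sam_model_registry and its image_encoder attribute (input/output names and the opset are illustrative choices):

import torch
from mobile_sam import sam_model_registry  # installed from the MobileSAM repo

# Load the combined checkpoint and pull out just the image encoder.
sam = sam_model_registry["vit_t"](checkpoint="weights/mobile_sam.pt")
sam.eval()

# MobileSAM's encoder consumes a preprocessed 1x3x1024x1024 image tensor.
dummy_image = torch.randn(1, 3, 1024, 1024)
torch.onnx.export(
    sam.image_encoder,
    dummy_image,
    "weights/mobile_sam_encoder.onnx",
    input_names=["image"],
    output_names=["image_embeddings"],
    opset_version=12,
)

The decoder half is what scripts/export_onnx_model.py already produces, so the two exported files together replace the single mobile_sam.pt.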
