Convert onnx Roialign to linalg ir error · Issue #4181 · llvm/torch-mlir · GitHub

Convert onnx Roialign to linalg ir error #4181

Open
wso4133560 opened this issue May 13, 2025 · 0 comments
Environment:
OS: Ubuntu 22.04

ONNX model:
git clone https://github.com/onnx/onnx.git

Within the onnx repository, the model is at
onnx/onnx/backend/test/data/node/test_roialign_mode_max/model.onnx

torch-mlir version: latest main branch

Run commands:
python -m torch_mlir.tools.import_onnx ./model.onnx -o onnx.mlir
torch-mlir-opt onnx.mlir --mlir-print-debuginfo --torch-onnx-to-torch-backend-pipeline -o torch.mlir
torch-mlir-opt torch.mlir --mlir-print-debuginfo --torch-backend-to-linalg-on-tensors-backend-pipeline -o linalg.mlir

ONNX IR:
module {
  func.func @test_roialign_aligned_false(%arg0: !torch.vtensor<[1,1,10,10],f32>, %arg1: !torch.vtensor<[3,4],f32>, %arg2: !torch.vtensor<[3],si64>) -> !torch.vtensor<[3,1,5,5],f32> attributes {torch.onnx_meta.ir_version = 10 : si64, torch.onnx_meta.opset_version = 22 : si64, torch.onnx_meta.producer_name = "backend-test", torch.onnx_meta.producer_version = ""} {
    %none = torch.constant.none
    %0 = torch.operator "onnx.RoiAlign"(%arg0, %arg1, %arg2) {torch.onnx.coordinate_transformation_mode = "output_half_pixel", torch.onnx.output_height = 5 : si64, torch.onnx.output_width = 5 : si64, torch.onnx.sampling_ratio = 2 : si64, torch.onnx.spatial_scale = 1.000000e+00 : f32} : (!torch.vtensor<[1,1,10,10],f32>, !torch.vtensor<[3,4],f32>, !torch.vtensor<[3],si64>) -> !torch.vtensor<[3,1,5,5],f32>
    return %0 : !torch.vtensor<[3,1,5,5],f32>
  }
}
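For reference, here is a pure-Python sketch of what the RoiAlign op above computes (mode "max", coordinate_transformation_mode "output_half_pixel", output 5x5, sampling_ratio 2, spatial_scale 1.0). This is illustrative only, not torch-mlir code; the function and variable names are my own, and the boundary handling follows the usual bilinear-sampling convention.

```python
def bilinear(img, y, x, height, width):
    """Sample img at (y, x) with bilinear interpolation; 0 outside bounds."""
    if y < -1.0 or y > height or x < -1.0 or x > width:
        return 0.0
    y = min(max(y, 0.0), height - 1)
    x = min(max(x, 0.0), width - 1)
    y0, x0 = int(y), int(x)
    y1, x1 = min(y0 + 1, height - 1), min(x0 + 1, width - 1)
    ly, lx = y - y0, x - x0
    return (img[y0][x0] * (1 - ly) * (1 - lx) + img[y0][x1] * (1 - ly) * lx
            + img[y1][x0] * ly * (1 - lx) + img[y1][x1] * ly * lx)

def roi_align_max(feat, rois, batch_idx, out_h=5, out_w=5,
                  sampling_ratio=2, spatial_scale=1.0):
    """feat: [N][C][H][W] nested lists; rois: [K][4] as (x1, y1, x2, y2)."""
    c = len(feat[0])
    h, w = len(feat[0][0]), len(feat[0][0][0])
    out = []
    for k, (x1, y1, x2, y2) in enumerate(rois):
        img_set = feat[batch_idx[k]]
        # output_half_pixel: scale ROI coords without the 0.5 offset,
        # and clamp the ROI extent to at least 1.
        sy, sx = y1 * spatial_scale, x1 * spatial_scale
        bin_h = max((y2 - y1) * spatial_scale, 1.0) / out_h
        bin_w = max((x2 - x1) * spatial_scale, 1.0) / out_w
        roi_out = []
        for ch in range(c):
            img, grid = img_set[ch], []
            for ph in range(out_h):
                row = []
                for pw in range(out_w):
                    best = None
                    # Max over sampling_ratio x sampling_ratio samples per bin.
                    for iy in range(sampling_ratio):
                        yy = sy + (ph + (iy + 0.5) / sampling_ratio) * bin_h
                        for ix in range(sampling_ratio):
                            xx = sx + (pw + (ix + 0.5) / sampling_ratio) * bin_w
                            v = bilinear(img, yy, xx, h, w)
                            best = v if best is None else max(best, v)
                    row.append(best)
                grid.append(row)
            roi_out.append(grid)
        out.append(roi_out)
    return out
```

With the shapes from the IR (one 1x1x10x10 feature map, K ROIs), the result has shape [K][1][5][5], matching the declared !torch.vtensor<[3,1,5,5],f32> when K = 3.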

torch IR:
module {
  func.func @test_roialign_aligned_false(%arg0: !torch.vtensor<[1,1,10,10],f32>, %arg1: !torch.vtensor<[3,4],f32>, %arg2: !torch.vtensor<[3],si64>) -> !torch.vtensor<[3,1,5,5],f32> attributes {torch.onnx_meta.ir_version = 10 : si64, torch.onnx_meta.opset_version = 22 : si64, torch.onnx_meta.producer_name = "backend-test", torch.onnx_meta.producer_version = ""} {
    %int2 = torch.constant.int 2
    %int5 = torch.constant.int 5
    %float1.000000e00 = torch.constant.float 1.000000e+00
    %false = torch.constant.bool false
    %int6 = torch.constant.int 6
    %none = torch.constant.none
    %int1 = torch.constant.int 1
    %0 = torch.aten.unsqueeze %arg2, %int1 : !torch.vtensor<[3],si64>, !torch.int -> !torch.vtensor<[3,1],si64>
    %1 = torch.aten.to.dtype %0, %int6, %false, %false, %none : !torch.vtensor<[3,1],si64>, !torch.int, !torch.bool, !torch.bool, !torch.none -> !torch.vtensor<[3,1],f32>
    %2 = torch.prim.ListConstruct %1, %arg1 : (!torch.vtensor<[3,1],f32>, !torch.vtensor<[3,4],f32>) -> !torch.list
    %3 = torch.aten.cat %2, %int1 : !torch.list, !torch.int -> !torch.vtensor<[3,5],f32>
    %4 = torch.torchvision.roi_align %arg0, %3, %float1.000000e00, %int5, %int5, %int2, %false : !torch.vtensor<[1,1,10,10],f32>, !torch.vtensor<[3,5],f32>, !torch.float, !torch.int, !torch.int, !torch.int, !torch.bool -> !torch.vtensor<[3,1,5,5],f32>
    return %4 : !torch.vtensor<[3,1,5,5],f32>
  }
}

error:
<unknown>:0: error: failed to legalize operation 'torch.constant.int'
<unknown>:0: note: see current operation: %0 = "torch.constant.int"() <{value = 2 : i64}> : () -> !torch.int loc(unknown)
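A "failed to legalize" error on a torch.constant.int typically means the op consuming the constant has no conversion pattern in the pipeline, so the constant can never be folded away. Here the consumer is torch.torchvision.roi_align. To isolate that, one could run the backend pipeline on a minimal standalone reproducer (a sketch reusing the types and attributes from the torch IR above; the function name is made up):

```mlir
module {
  func.func @repro(%arg0: !torch.vtensor<[1,1,10,10],f32>, %arg1: !torch.vtensor<[3,5],f32>) -> !torch.vtensor<[3,1,5,5],f32> {
    %int2 = torch.constant.int 2
    %int5 = torch.constant.int 5
    %float1 = torch.constant.float 1.000000e+00
    %false = torch.constant.bool false
    // Same op and operand types as in the torch IR above.
    %0 = torch.torchvision.roi_align %arg0, %arg1, %float1, %int5, %int5, %int2, %false : !torch.vtensor<[1,1,10,10],f32>, !torch.vtensor<[3,5],f32>, !torch.float, !torch.int, !torch.int, !torch.int, !torch.bool -> !torch.vtensor<[3,1,5,5],f32>
    return %0 : !torch.vtensor<[3,1,5,5],f32>
  }
}
```

Running torch-mlir-opt on this with --torch-backend-to-linalg-on-tensors-backend-pipeline should reproduce the same diagnostic if roi_align is indeed the op that lacks a linalg lowering.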

Is this a bug, or is RoiAlign simply not supported? Every ONNX model containing the RoiAlign operator fails in the same way when converting torch IR to linalg IR.
