[TORCH] Add support for logcumsumexp · Issue #4183 · llvm/torch-mlir

Open
sharavana20 opened this issue May 14, 2025 · 2 comments · May be fixed by #4187
sharavana20 commented May 14, 2025

I would like to request support for the torch.logcumsumexp operation in the Torch dialect of Torch-MLIR.
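For context, logcumsumexp(x, dim) computes out[i] = log(sum over j <= i of exp(x[j])) along the given dimension. A minimal pure-Python sketch of the 1-D semantics (the function name is mine; this is not torch-mlir code):

```python
import math

def logcumsumexp_1d(xs):
    """Reference semantics for torch.logcumsumexp on a 1-D input:
    out[i] = log(sum_{j <= i} exp(xs[j])). Naive accumulation, for
    illustration only."""
    out = []
    total = 0.0
    for x in xs:
        total += math.exp(x)
        out.append(math.log(total))
    return out

# For a zero input, the running sums of exp are 1, 2, 3, so the
# outputs are log(1), log(2), log(3).
print(logcumsumexp_1d([0.0, 0.0, 0.0]))
```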

I tested torch.logcumsumexp using fx.export_and_import, and the reproduced error is:

test_logcumsumexp
-----------------
loc("/home/data/sharavana/torch-mlir/test/python/fx_importer/test_logcumsumexp.py":19:0): 
error: failed to legalize operation 'torch.operator' that was explicitly marked illegal
Traceback (most recent call last):
  File "/home/data/sharavana/torch-mlir/test/python/fx_importer/test_logcumsumexp.py", line 13, in <module>
    def test_logcumsumexp():
  File "/home/data/sharavana/torch-mlir/test/python/fx_importer/test_logcumsumexp.py", line 9, in run
    f()
  File "/home/data/sharavana/torch-mlir/test/python/fx_importer/test_logcumsumexp.py", line 20, in test_logcumsumexp
    m = fx.export_and_import(Logcumsumexp(), torch.randn(3, 4),output_type="torch")
  File "/home/data/sharavana/torch-mlir/build/tools/torch-mlir/python_packages/torch_mlir/torch_mlir/fx.py", line 124, in export_and_import
    return _module_lowering(
  File "/home/data/sharavana/torch-mlir/build/tools/torch-mlir/python_packages/torch_mlir/torch_mlir/fx.py", line 61, in _module_lowering
    run_pipeline_with_repro_report(
  File "/home/data/sharavana/torch-mlir/build/tools/torch-mlir/python_packages/torch_mlir/torch_mlir/compiler_utils.py", line 127, in run_pipeline_with_repro_report
    raise TorchMlirCompilerError(trimmed_message) from None
torch_mlir.compiler_utils.TorchMlirCompilerError: Lowering TorchFX IR -> Torch Backend IR failed with the following diagnostics:
python exception: Failure while executing pass pipeline


For Torch-MLIR developers, the error can be reproduced with:
$ torch-mlir-opt -pass-pipeline='builtin.module(func.func(torch-match-quantized-custom-ops), torchdynamo-export-to-torch-backend-pipeline{ extra-library=})' /home/data/sharavana/tmp/UnnammedModule.mlir
Add '-mlir-print-ir-after-all -mlir-disable-threading' to get the IR dump for debugging purpose.

Minimal Reproduction

import torch
import torch.nn as nn
from torch_mlir import fx

def run(f):
    print(f"{f.__name__}")
    print("-" * len(f.__name__))
    f()
    print()

@run
def test_logcumsumexp():
    class LogcumsumOp(nn.Module):
        def forward(self, x):
            return torch.logcumsumexp(x,1)
    input_tensor = torch.randn(2, 4)
    exported = fx.export_and_import(LogcumsumOp(), input_tensor, output_type="torch")
    print(exported)
@sharavana20 (Author) commented
@vivekkhandelwal1 I would like to take this up and implement it.

@vivekkhandelwal1 (Collaborator) commented

> @vivekkhandelwal1 I would like to take this up and implement it.

Sure. I have assigned this issue to you @sharavana20.

@sharavana20 sharavana20 linked a pull request May 20, 2025 that will close this issue